\section{Genesis of the Fireball Shock Model} \label{intro} Fireballs in astrophysics generally refer to an optically thick plasma whose temperature exceeds the electron rest mass energy and which can produce $e^\pm$ pairs and photons in equilibrium with a baryonic plasma. An early study of fireball radiation physics aimed at GRBs, leaving aside consideration of specific sources, was that of \cite{Cavallo+78}. The fireball would expand and adiabatically cool as it converts its internal energy into kinetic energy, and they suggested that this kinetic energy could be reconverted into radiation as the fireball impacts the external medium, the highest efficiency (for non-relativistic expansion) being achieved when the fireball has swept up an amount of external matter comparable to its own mass (the analog of the start of the Sedov-Taylor phase of SNe). \cite{Paczynski86} proposed that a merging binary neutron star (BNS) would liberate enough energy in a short time to power a GRB at cosmological distances, and he and \cite{Goodman86grb} showed that in this case the expansion would be relativistic, the bulk Lorentz factor growing with radius as $\Gamma \propto r$. The initial blackbody plasma temperature would be of order a few MeV, which in the linearly expanding comoving frame would drop as $T'\propto 1/r$, but in the observer frame this would be boosted by the bulk Lorentz factor back to its initial few-MeV value, with an approximately blackbody spectrum, most of the photons escaping when the plasma became optically thin to Thomson scattering. A different aspect of BNS mergers emphasized by \cite{Eichler+89bns} was their role as emitters of gravitational waves and their likely role as sources of r-process heavy elements, at the same time as being likely to appear as GRBs. 
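These scalings can be illustrated with fiducial numbers; the values of the fireball energy $E$ and launching radius $r_0$ below are illustrative assumptions, not taken from the works cited:

```latex
% Illustrative estimate; E and r_0 are assumed fiducial values.
\begin{equation}
kT_0 \simeq k\left(\frac{3E}{4\pi r_0^3 a}\right)^{1/4}
\sim 6\,{\rm MeV}
\left(\frac{E}{10^{51}\,{\rm erg}}\right)^{1/4}
\left(\frac{r_0}{10^{7}\,{\rm cm}}\right)^{-3/4},
\end{equation}
```

where $a$ is the radiation constant. During the acceleration phase $\Gamma \propto r/r_0$ while $T' \propto 1/r$, so the observer-frame temperature $T_{obs} \simeq \Gamma T'$ indeed remains near this initial few-MeV value.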
A more detailed study of the properties of relativistically expanding fireballs \citep{Paczynski90wind,Shemi+90} showed that the bulk Lorentz factor growth $\Gamma(r)\propto r$ would saturate at a maximum value $\eta \sim E_f/M_f c^2 \gg 1$, where $E_f$ and $M_f$ are the initial energy of the explosion and the initial baryonic mass entrained in the outflow. Beyond a saturation radius $r_{sat}\sim r_0 \eta$, where $r_0$ is the launching radius, the Lorentz factor remains constant, but since adiabatic cooling continues, the radiation energy that can escape after the photosphere becomes optically thin represents an increasingly smaller fraction of the final kinetic energy of expansion. The dynamics are similar for neutron star--black hole (NS-BH) mergers, which would also be important GRB candidates \citep{Narayan+91nsbh,Narayan+92merg}, the latter mentioning briefly that reconnection, ejection of cosmic rays and their collisions might contribute non-thermal radiation in addition to the optically thick spectrum. There were several problems with these initial fireball models, namely: (1) for simplicity, a spherical geometry was usually tacitly assumed, and this, combined with a low radiative efficiency, would require excessively large explosion energies for the brighter bursts; (2) the main part of the predicted gamma-ray spectrum is approximately blackbody, whereas observed spectra are mainly non-thermal; and (3) for plausible baryon loads most of the explosion energy would be wasted on bulk kinetic energy, instead of radiation. To address these issues the jet-like fireball shock model was developed, which in its main features is to this day the most widely used model. As a natural way to resolve the inefficiency of spherical models, \cite{Meszaros+92tidal} pointed out that collimation of the fireball would be expected in the slower outflow (the dynamical ejecta) resulting from the tidal heating and the radiation from the merging BNS system. 
This could be powered by reconnection between their magnetospheres and collisions between their winds, as well as by neutrino-antineutrino interactions going into pairs, which would occur preferentially along the symmetry axis of the merger. This would create a hot radiation bubble, which would escape through the wind preferentially along the centrifugally rarefied axis of rotation, making a relativistic jet. For the case of NS-BH binary mergers, \cite{Meszaros+92entropy} discussed the increased radiative efficiency due to gravitational focusing by the BH of the neutrino-antineutrino interactions from the disrupted NS debris, giving a quantitative discussion of the channeling into a jet along the axis. To address the problems of the thermal spectrum and of the radiative inefficiency at the photosphere due to most of the energy being converted into kinetic form, \cite{Rees+92shock} showed that both of these issues are solved by considering the strong forward and reverse shocks produced in the deceleration of the relativistic ejecta by the external medium, which (unlike in the non-relativistic expansion) occurs when the ejecta has swept up an external mass which is $\sim 1/\Gamma$ of its own mass, re-thermalizing about half of the bulk kinetic energy. The strong shock leads to a power-law relativistic electron spectrum via the Fermi mechanism, and via synchrotron radiation results in non-thermal power-law spectra. For a brief (impulsive) initial energy input, the effects of the deceleration are felt on a timescale $t_{dec}\sim r_{dec}/2\Gamma_i^2 c$, when the initial forward shock Lorentz factor has dropped to $\sim \Gamma_i/2$ and the reverse shock, initially weak, has just become trans-relativistic. The results are the same whether the outflow is jet-like or spherical, for jet opening angles larger than $1/\Gamma$. 
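The deceleration condition just described can be made quantitative. Equating the swept-up mass $(4\pi/3) r^3 n m_p$ to $1/\Gamma_i$ of the ejecta mass $E/\Gamma_i c^2$, and adopting fiducial values that are illustrative assumptions ($E = 10^{52}$ erg, uniform external density $n = 1\,{\rm cm^{-3}}$, $\Gamma_i \sim \eta \sim 300$), gives

```latex
% Fiducial values (illustrative assumptions): E = 10^{52} erg, n = 1 cm^{-3}, Gamma_i = 300.
\begin{align}
r_{dec} &\simeq \left(\frac{3E}{4\pi n m_p c^2 \Gamma_i^2}\right)^{1/3}
 \approx 2.6\times10^{16}\,{\rm cm}, \\
t_{dec} &\sim \frac{r_{dec}}{2\Gamma_i^2 c} \approx 5\,{\rm s},
\end{align}
```

of the order of the durations of typical bursts.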
At the photospheric radius a thermal spectrum is still emitted, but since this occurs above the saturation radius, adiabatic cooling makes its spectral contribution sub-dominant. The dynamics and the synchrotron and inverse Compton spectra from the forward and reverse external shocks were discussed in detail in \cite{Meszaros+93gasdyn,Meszaros+93multi}. A major motivation for introducing internal shocks arose after the launch of the Compton GRO (CGRO) spacecraft in 1991, which found gamma-ray light curves showing variabilities as short as $10^{-3}\hbox{~s}$. Such short variability can get smeared out in external shocks, which occur at relatively large radii. \cite{Rees+94is} showed that internal shocks at radii much smaller than those of the external shock can arise due to irregularly ejected gas shells of different bulk Lorentz factors. These can collide and shock at intermediate radii above the photosphere but below the external shock, leading to observable radiation whose variability is due to the variability of the ejection from the central engine. Being above the photosphere, the shock radiation from synchrotron and inverse Compton is unsmeared and non-thermal. A different power source for GRBs, in addition to BNS and NS-BH mergers, was proposed by \cite{Woosley93col}. This is the collapsar model, resulting from the collapse of the core of a massive star leading to a central black hole (or temporarily a magnetar). When the core is rotating fast enough, the mass fallback towards the BH would lead to an accretion disk powering a jet, which, if fed long enough, can break out from the collapsing stellar envelope. The BH, accretion disk and jet resulting from this are similar to those expected in compact BNS or NS-BH mergers, and the shock radiation outside the envelope would have similar properties. However, the accretion can last much longer, since fall-back times are long and the outer accretion radii would be larger, leading to longer total burst durations. 
This led to a natural explanation for the striking dichotomy between the two populations of short ($\Delta t_\gamma \mathrel{\mathpalette\simov <} 2\hbox{~s}$) and long ($2\hbox{~s} \mathrel{\mathpalette\simov <} \Delta t_\gamma \mathrel{\mathpalette\simov <} 10^3\hbox{~s}$) GRBs identified by \cite{Kouveliotou+93}. Multi-wavelength, broadband spectra are expected in general from the external forward and reverse shocks, the reverse shock synchrotron predicting optical/UV radiation and the forward shock inverse Compton scattering of synchrotron photons reaching GeV energies \citep{Meszaros+93multi}. The latter provided a model \citep{Meszaros+94geV} for the long-lasting GeV emissions first seen in CGRO-EGRET data, while the former provided an explanation for the ``prompt'' optical emission first detected by \cite{Akerlof+99}. Internal shocks also lead to broadband spectra \citep{Papathanassiou+96is}, typically harder than in external shocks, due to the larger comoving magnetic fields at the smaller radii. The observation of some of the longer duration bursts, whose duration could significantly exceed the expected deceleration time $t_{dec}\sim r_{dec}/2\Gamma_i^2 c$, motivated a more detailed discussion of external shocks \citep{Sari+95hydro,Sari97thickshell}, distinguishing between thin shell cases (the limiting case of brief impulsive accretion) and thick shell cases (for longer accretion), where the reverse shock may be relativistic. The long-term afterglow of a GRB, as opposed to the ``prompt'' emission discussed above, was first discussed quantitatively \citep{Meszaros+97ag} in a paper which appeared two weeks before the first announced detection of an X-ray afterglow from GRB 970228 with the Beppo-SAX satellite \citep{Costa+97-970228}. 
Its optical afterglow was discovered by \cite{Vanparadijs+97-gtrb970228opt}, and other afterglows soon followed, including GRB 970508 \citep{Metzger+97-970508redshift} which yielded the first redshift ($z=0.835$), proving that GRBs were indeed cosmological. The observations confirmed in their main features the predictions of the afterglow model, including the power-law time decay, spectra and timescales \citep{Wijers+97-970228}. Synchrotron radiation, including the transition between slow and fast cooling regimes \citep{Meszaros+98view,Sari+98spectra}, provided a satisfactory fit for most of the observations in the subsequent period. \section{Standard GRB Model and its Evolution} \label{sec:sm} The ``standard" GRB fireball shock model outlined in the second half of the above section has proved extremely durable, despite a number of challenges and modifications of detail. For comprehensive reviews, see e.g., \cite{Piran99rev,Piran04grbrev,Kumar+15grbrev,Zhang19book}. Its simplest form, most often used for interpreting observations, is shown in Fig. \ref{fig:smjet}. \begin{figure}[h] \centering \includegraphics[width=4cm]{jet-schem.pdf} \caption{The standard GRB fireball shock model, e.g. from a collapsar (for compact mergers, the ``collapse" region is replaced by the dynamical ejecta). Shown are the photosphere, internal shock and external shock resulting in the afterglow \citep{Meszaros01sci}. } \label{fig:smjet}% \end{figure} The afterglow radiation, from radio through optical, X-ray and more recently GeV, is overall well fitted, with some modifications, by the external shock synchrotron emission, e.g. \cite{Zhang+06ag}. For the ``prompt" emission (broadly, the typically MeV radiation within $\Delta t_\gamma \simeq T_{90}$), however, an origin in terms of synchrotron has been criticized, e.g. \cite{Preece+98death}, since the low energy slope of some GRB prompt spectra is harder than the limiting synchrotron slope of $-2/3$ in $dN/dE$ (harder than $+1/3$ in $E\,dN/dE$). 
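This limiting slope (the synchrotron ``line of death'') can be sketched from the single-electron synchrotron spectrum: below its characteristic frequency each electron radiates

```latex
\begin{equation}
F_\nu \propto \nu^{1/3} \quad (\nu \ll \nu_c)
\;\;\Longrightarrow\;\;
\frac{dN}{dE} \propto \frac{F_\nu}{E} \propto E^{-2/3},
\end{equation}
```

and since any electron energy distribution only superposes such $\nu^{1/3}$ contributions, no optically thin synchrotron spectrum can have a low-energy photon index harder than $-2/3$.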
One possible solution is that the prompt emission may be due to the optically thick photosphere, whose spectral peak can be in the MeV range and whose low-energy slope can be as hard as $+2$ \citep{Eichler+00thermal}. This works, but it requires an additional shock or other component to make a high-energy power law \citep{Meszaros+00phot}; also, if the photosphere is well above the saturation radius, adiabatic cooling makes it radiatively inefficient. Radiatively efficient photospheres, however, may arise naturally if the photospheres are dissipative \citep{Rees+05photdis}, e.g. through magnetic reconnection or subphotospheric shocks. A natural sub-photospheric dissipation mechanism is proton-neutron decoupling, which can produce efficient photospheric spectra from low energies all the way to multi-GeV \citep{Beloborodov10pn}, Fig. \ref{fig:phot-sync} (left). \begin{figure}[h] \begin{minipage}[h]{3.5cm} \centerline{\includegraphics[width=3.5cm,height=3.0cm]{Beloborodov10photspec.pdf}} \end{minipage} \vspace*{-3cm} \begin{minipage}[h]{3.5cm} \centerline{\includegraphics[width=3.5cm,height=3.0cm]{Burgess19syspec.pdf}} \end{minipage} \vspace*{3cm} \caption{Left: Spectrum of a photosphere heated by $pn$ decoupling \citep{Beloborodov10pn}. Right: Synchrotron spectra (e.g. internal or external shocks) accounting for the time-dependence of the transition between cooling regimes \citep{Burgess+18grbsync}. } \label{fig:phot-sync} \end{figure} However, the earlier critiques of the synchrotron low-energy slopes may be unjustified, having been based on wide time bins or time-integrated spectra. 
If one considers the time evolution of the shock-accelerated electrons, from injection through cooling, and takes into account that electrons radiating at different frequencies have different energies and may be in different cooling regimes (fast, intermediate, slow), the spectra convolved with the detector energy resolution and response function can give various slopes, and the great majority of the observed GRB slopes can be fitted with synchrotron, e.g. \cite{Burgess+18grbsync, Ravasio+19sycoolspec}, Fig. \ref{fig:phot-sync}, right. A critique of simple internal shocks in which only the electrons radiate (leptonic models) has been that they are generally inefficient, dissipating too little of the mechanical energy in the relative motion between successively ejected shells, with Fermi acceleration putting much of the dissipated energy into non-radiating protons. In more realistic internal shocks, however, the radiative efficiency can be higher, e.g. when the dissipation is largely by magnetic instabilities and reconnection, or if hadronic collisions and reacceleration of secondaries are taken into account. Thus, the early GRB paradigm based on internal plus external shocks and an inefficient photosphere (Fig. \ref{fig:grb-paradigm}, top) has, since about 2005, evolved into one of an efficient photosphere and/or an efficient internal shock plus external shock (Fig. \ref{fig:grb-paradigm}, bottom). \begin{figure}[h] \centering \includegraphics[width=4cm]{phot-shocks3.pdf} \caption{The early classical paradigm of the standard model (top) and the newer version giving more emphasis to the photosphere and considering alternative mechanisms in the internal shock or prompt emission region. } \label{fig:grb-paradigm}% \end{figure} An example of efficient magnetic dissipation internal shock models is the ICMART model of \cite{Zhang+11icmart}, while an example of an efficient hadronic internal shock with secondary reacceleration is that of \cite{Murase+12reac}. 
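For orientation, the radii at which such internal shocks occur follow from the short-variability argument of the previous section; the bulk Lorentz factor and variability timescale below are fiducial values assumed for illustration:

```latex
% Fiducial values (assumed): Gamma = 300, central-engine variability dt = 10^{-3} s.
\begin{equation}
r_{is} \sim 2\Gamma^2 c\,\delta t
\approx 5\times10^{12}\,{\rm cm}
\left(\frac{\Gamma}{300}\right)^{2}
\left(\frac{\delta t}{10^{-3}\,{\rm s}}\right),
\end{equation}
```

which places the dissipation above typical photospheric radii but well inside the deceleration radius, as the internal shock scenario requires.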
After the launch of the Fermi Gamma-ray Space Telescope in 2008, its LAT detector showed that in a significant fraction of GRBs the so-called Band broken power-law spectrum above the MeV peak extends into the GeV range, as already found in some previous EGRET spectra. Such ``extended" Band spectra can be modeled, e.g. with photospheric models, as seen in Fig. \ref{fig:phot-sync} (left). However, in many Fermi-LAT GRBs the GeV emission appeared as a second power-law component, harder than the Band $\beta$ upper branch. The question is whether this second, harder GeV component is due to inverse Compton (IC) upscattering of the Band component, or to protons being accelerated and leading to cascades with radiation from secondary leptons. Both types of models can give reasonable fits. Leptonic models where the Band spectrum arising in the photosphere is up-scattered by shocked electrons in internal shocks give reasonable results, e.g. \cite{Toma+11phot}. An alternative leptonic model considers a baryonic or magnetic photosphere producing a Band spectrum which is up-scattered in an external shock, also giving good fits \citep{Veres+12photes}, Fig. \ref{fig:gevlephad} (left). \begin{figure}[h] \begin{minipage}[h]{3.5cm} \centerline{\includegraphics[width=3.5cm,height=3.0cm]{Veres+12lepbarphotisspec.pdf}} \end{minipage} \vspace*{-3cm} \begin{minipage}[h]{3.5cm} \centerline{\includegraphics[width=3.5cm,height=3.0cm]{Murase+12hadisspec.pdf}} \end{minipage} \vspace*{3cm} \caption{Left: Leptonic model with photosphere plus external shock up-scattering, producing a second GeV component \citep{Veres+12photes}. Right: Hadronic model with internal shock accelerating electrons and protons leading to cascades and secondary re-acceleration, leading to a self-consistent Band spectrum and a second GeV component \citep{Murase+12reac}.} \label{fig:gevlephad} \end{figure} \\ Hadronic models, on the other hand, could in principle have substantial advantages. E.g. 
\cite{Murase+12reac} calculated an internal shock model accelerating both electrons and protons, where hadronic cascades and stochastic reacceleration of the leptonic secondaries in the post-shock turbulence lead self-consistently to both the Band MeV component and the second hard GeV component from the same region, Fig. \ref{fig:gevlephad} (right). This model provides good efficiency, which is one of its attractions for internal shocks, and it may be applicable not only to shocks but also, e.g., to magnetic dissipation regions, where MHD turbulence is expected. \section{Some Recent Developments} \label{sec:recent} Recently the MAGIC imaging atmospheric Cherenkov telescope (IACT) announced the detection of photons in excess of 300 GeV, and perhaps up to a TeV, in the bright GRB 190114C \citep{Mirzoyan+19-190114C-MAGIC}, also detected at other energies by Fermi, Swift, INTEGRAL and numerous other facilities. This was the first high confidence ($\sim 20\sigma$) detection of a GRB with an IACT at such energies, a long-awaited feat which should become easier to accomplish with the future CTA. Preliminary analyses show that the long-lasting ($\sim 10^3\hbox{~s}$) sub-TeV component is mostly associated with the afterglow seen at other energies, e.g. Fig. \ref{fig:190114Clcspec}. The spectral slope of the sub-TeV component appears harder than the usual Band component, pending further MAGIC analysis. \begin{figure}[h] \begin{minipage}[h]{3.5cm} \centerline{\includegraphics[width=3.5cm,height=3.0cm]{Wang+19grb190114Cspec.pdf}} \end{minipage} \vspace*{-3cm} \begin{minipage}[h]{3.5cm} \centerline{\includegraphics[width=3.5cm,height=3.0cm]{Ravasio+19-190114Cspec2.pdf}} \end{minipage} \vspace*{3cm} \caption{Left: fits to preliminary data showing light curves of various energy components of GRB 190114C \citep{Wang+19-190114tev}. 
Right: fits to preliminary data at several epochs for the spectrum of GRB 190114C \citep{Ravasio+19-190114Cspec}.} \label{fig:190114Clcspec} \end{figure} \\ Detection of $\mathrel{\mathpalette\simov <} \hbox{~TeV}$ emission from a GRB has two requirements, one being that the redshift be small enough that $\gamma\gamma$ absorption on the extragalactic background light (EBL) is not too severe, and the other being that the same absorption is absent, or at least mitigated, in the GRB radiation zone and its immediate neighborhood. Fermi-LAT detections of GRBs have shown source-frame emission up to several tens of GeV and in one case even $\sim 100\hbox{~GeV}$, but the present $\mathrel{\mathpalette\simov >} 300\hbox{~GeV}$ detection can put significant constraints on models. Much theoretical work remains to be done on this event. The other major recent development was the detection of the short GRB 170817 both electromagnetically (EM) through multi-wavelength photons, and through gravitational waves (GWs). This was very exciting, being the first high significance multi-messenger detection of a transient using GWs\footnote{The other previous high significance multi-messenger transient was SN 1987a, where besides photons also thermal (MeV) neutrinos were detected.}. This was a short GRB (SGRB) ($\Delta t_\gamma \leq 2\hbox{~s}$), detected by Fermi, INTEGRAL, Swift and other EM instruments. These objects were long expected to arise from BNS mergers, an interpretation for which accumulating evidence, e.g. \cite{Gehrels+09araa}, had almost but not quite reached the 100\% confidence level. In this case, the associated detection of GWs \citep{LIGO+17gw170817disc}, slightly preceding the EM flash, which were also expected from BNS mergers, conclusively confirmed that SGRBs were indeed BNS mergers. 
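The EBL absorption requirement above can be quantified via the $\gamma\gamma \to e^\pm$ pair-production threshold; for a head-on collision this is roughly

```latex
\begin{equation}
\varepsilon_{thr} \simeq \frac{(m_e c^2)^2}{E_\gamma}
\approx 0.9\,{\rm eV}\left(\frac{E_\gamma}{300\,{\rm GeV}}\right)^{-1},
\end{equation}
```

so $\mathrel{\mathpalette\simov >} 300$ GeV photons annihilate mainly on near-infrared EBL photons, whose column density grows with distance; hence the need for a low source redshift.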
The GW detection also confirmed that BNSs can produce a type of optical/IR flash known as a kilonova, surmised to be responsible for the elements heavier than the Fe-group via the r-process, e.g. \cite{Hotokezaka+18nsmerg,Kasliwal+19-170817rproc}. \begin{figure}[h] \centering \includegraphics[width=7.0cm,height=3.5cm]{Kasliwal+17-170817sci.pdf} \caption{GRB/GW170817A, two opposed views on the SGRB radiation from a BNS. Left: observed radiation dominated by the cocoon, for a choked jet. Right: observed radiation dominated by an emergent top-hat jet \citep{Kasliwal+17-170817sci}.} \label{fig:Kasliwal17cocjet}% \end{figure} The SGRB radiation of GRB/GW170817 looked typical, except for being fainter and somewhat softer than expected for its low distance of 40 Mpc. The role played in GRBs by cocoons \citep{Meszaros+01coljet, Ramirez+02cocoon} and choked jets \citep{Meszaros+01choked} had been considered early on, and in the case of GRB/GW170817A a natural possibility was that its weaker $\gamma$-rays might be attributed either to a choked jet with a cocoon breakout, or to an off-axis top-hat emergent jet, e.g. \cite{Kasliwal+17-170817sci,Ioka+17gw170817cocoon} and others. While a cocoon interpretation may be favored over a simple top-hat jet, the observation of a superluminal jet signature \citep{Mooley+18-170817reljet2} and other features of the afterglow \citep{Troja+18-179817structjet} indicate that either a Gaussian structured jet or a cocoon could fit the data. The next burning question, as far as GRB multi-messenger studies are concerned, is whether GRBs can also be detected via neutrinos. Of course both LGRBs (as core-collapse objects) and SGRBs (compact mergers involving at least one neutron star, which is heated to virial temperatures) will emit a large fraction of their core binding energy in thermal (5-30 MeV) neutrinos. 
At these energies the neutrino-nucleon detection cross section is of order $10^{-44}\hbox{~cm}^2$, and at cosmological distances the flux is undetectable with current detectors. High energy neutrinos, however, have much higher cross sections ($\sim 10^{-34}\hbox{~cm}^2$ around 10 TeV), and IceCube is detecting a diffuse astrophysical flux in the 10 TeV-10 PeV range \citep{IC3+13pevnu1,IC3+13pevnu2}. The total number of such neutrinos so far is of order 50, distributed isotropically in the sky, with localization error circles ranging from $\sim 1^o$ (for muon neutrino tracks) to $15-30^o$ (for electron neutrino cascades), hence difficult to associate with individual sources. Recently, however, a high energy (multi-TeV) muon neutrino was detected with the blazar TXS 0506+056 within its error circle; the blazar was undergoing a $\gamma$-ray flaring episode in near time coincidence with the neutrino arrival. The same region had also shown neutrinos in previous years, but without coincident $\gamma$-ray flares, so the total coincidence significance is $\sim 3.5\sigma$, which is interesting but not yet considered conclusive evidence \citep{IC3multi+18txs0506,IC3+18txs0506previousnu}. The possibility of GRBs being high energy neutrino sources has been investigated by IceCube using classical GRBs, i.e. bright, EM-detected, mainly LGRBs. These have been disfavored by IceCube analyses, e.g. \cite{IC3+15promptnugrb}, using particular models of the expected neutrino emission. The same conclusion is reached by IceCube for classical GRBs in a model-independent way using constraints based on neutrino multiplet observations \citep{IC3+18grbnuconstraint}, but the same study leaves unconstrained a (theoretically plausible) origin in low-luminosity or choked GRBs, e.g. \cite{Senno+16hidden}. 
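The difficulty of associating neutrinos with individual sources follows directly from the cross sections quoted above: even at high energies the interaction probability per neutrino is tiny. As a rough sketch, taking $\sigma \sim 10^{-34}\,{\rm cm^2}$ (the $\sim 10$ TeV value quoted above) for a neutrino traversing 1 km of ice ($\rho \approx 0.92\,{\rm g\,cm^{-3}}$),

```latex
% Rough estimate; sigma and path length are illustrative.
\begin{equation}
P_{int} \simeq \rho N_A \sigma L
\approx (0.92)\,(6\times10^{23}\,{\rm g^{-1}})\,(10^{-34}\,{\rm cm^2})\,(10^{5}\,{\rm cm})
\approx 5\times10^{-6},
\end{equation}
```

which is why km$^3$-scale instrumented volumes and large fluences are needed for even a handful of detected events.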
Low luminosity and/or choked GRBs could be more numerous than classical GRBs, and at their typically high redshifts they would be electromagnetically missed or hard to detect, while their cumulative neutrino flux could add up to what IceCube sees. SGRBs would in principle appear to be ideal objects for constraining physical models if in addition to GWs they also produced observable neutrinos. At first sight it would seem that the expected neutrino fluxes would be much lower than in LGRBs, since the SGRB prompt MeV emission is shorter and underluminous compared to LGRBs. However, a large fraction of SGRBs also exhibit a longer tail ($\mathrel{\mathpalette\simov <} 100\hbox{~s}$) of softer radiation in the 50 keV range. This softer extended emission (EE) can be modeled as late jet emission with a bulk Lorentz factor lower than that of the prompt phase, providing a higher comoving density of target photons for $p\gamma$ photo-hadronic interactions leading to neutrinos. The neutrino flux is still low at typical redshifts, but at the redshift $z\sim 0.01$ (about 40 Mpc) of GRB 170817A, it could have been detectable by IceCube, had the jet been head-on \citep{Kimura+17sgrbnu}; however, for a higher inclination angle $\theta_{LOS} \sim 20-30^o$ of the line of sight relative to the jet axis, as inferred from multi-wavelength observations, the lower Doppler boost in that direction implies a much lower observable flux, which falls below the IceCube sensitivity, Fig. \ref{fig:bnsnu} (left). \begin{figure}[h] \begin{minipage}[h]{3.5cm} \centerline{\includegraphics[width=3.5cm,height=2.5cm]{Kimura+17bnseejetic3.pdf}} \end{minipage} \hspace{5mm} \vspace*{-3cm} \begin{minipage}[h]{3.5cm} \centerline{\includegraphics[width=3.7cm,height=2.5cm]{Kimura+18bnstransjet.pdf}} \end{minipage} \vspace*{3cm} \caption{ Left: IceCube and Antares upper limits for GRB170817 \citep{AntaresIC3-170817nu}, compared to the BNS jet extended emission (EE) model \citep{Kimura+17sgrbnu} for various jet offset angles. 
Right: Internal and collimation shocks in a trans-ejecta jet propagating through the BNS dynamical ejecta \citep{Kimura+18transejnu}.} \label{fig:bnsnu} \end{figure} \\ The SGRB jet and shock structure is likely to be more complicated while the jet is making its way through the dynamical ejecta, Fig. \ref{fig:bnsnu} (right). Both collimation shocks and internal shocks are expected in choked jets, or before the jet emerges from the ejecta, and the internal shocks occurring in the pre-collimation jet satisfy the conditions for Fermi acceleration of charged particles, leading to neutrinos via photo-hadronic interactions \citep{Kimura+18transejnu}. One can expect from such events a few up-going neutrinos in IceCube from a merger at 40 Mpc occurring in the Northern sky, if the jet is directed at Earth. For optimistic jet parameters, a joint GW-IceCube detection might be achievable in a few years of operation, while for IceCube-Gen2 this would be probable even for moderate jet parameters. \\ \noindent {\it Acknowledgments:} I am grateful to K. Murase, S.S. Kimura and D.B. Fox for discussions, and to the Eberly Foundation for support. \bibliographystyle{aa.bst} \input{gen.bbl} \end{document}
1904.10258
\section{Introduction} The work of Greg Chaitin has been at the centre of my intellectual interests. He was not only one of the examiners for my PhD in computer science in 2011, but my work would not have been possible without his seminal ideas. I met Greg in 2006, having been invited to his home and to his office at IBM's legendary Thomas J. Watson Research Center (see Fig.~\ref{medal}), both in Yorktown Heights, NY, U.S. His house was typical of an intellectual, full of objects and books piled one on top of the other. But Greg also had exotic pieces of sculpture scattered around his house, and an over-representation of Leibniz- and G\"odel-related books, the two major sources of influence on his work and thinking. \begin{figure}[ht!] \centering \includegraphics[width=6cm]{chaitinwatson.png}\\ \bigskip \includegraphics[width=3.8cm]{ChaitinMedalside.png}\hspace{.7cm}\includegraphics[width=4.05cm]{LeibnizMedalside.png}\\ \caption{(Top) A picture of Greg Chaitin I took outside his longtime office at IBM Research headquarters, the Thomas J. Watson Research Center. (Bottom) The two sides of the medal I helped Wolfram design, featuring material relating to Chaitin's life achievements on the obverse and Leibniz's own medal celebrating binary arithmetic on the reverse.} \label{medal} \end{figure} In 2008, I organized a two-part panel discussion (Fig.~\ref{panels}) during the Wolfram Science conference at the University of Vermont in Burlington, an event that, together with a second meeting I organized (with Adrian German) in 2008 at Indiana University Bloomington (Fig.~\ref{panels}), will, I believe, come to be regarded as historic. The first part of the panel discussion addressed the question "Is the Universe random?" Participating were Cris Calude, John Casti, Greg Chaitin, Paul Davies, Karl Svozil and Stephen Wolfram. The second part addressed the question "What is computation and how does nature compute?".
The participants were Ed Fredkin, Greg Chaitin, Tony Leggett, Cris Calude, Rob de Ruyter, Tom Toffoli, and Stephen Wolfram, with George Johnson, Gerardo Ortiz and myself serving as moderators. Transcriptions of both parts were published in my volumes~\cite{randomness} and~\cite{computableuniverse}. In 2007, I also helped Stephen Wolfram design a commemorative medal to celebrate the 60th birthday of Gregory Chaitin, which also involved minting for the first time a 300-year-old medal originally designed by Gottfried Leibniz to celebrate the invention (or discovery) of binary arithmetic (see Fig.~\ref{medal}). I published a blog post soon after we came up with the idea for the medal, explaining its pre- and post-minting story~\cite{blogpost}. \begin{figure}[ht!] \centering \includegraphics[width=8cm]{panel1.jpg}\\ Panel Part I: Is the universe random?\\ \bigskip \includegraphics[width=8cm]{panel2.jpg}\\ Panel Part II: What is computation and how does nature compute?\\ \caption{Part I and Part II panel discussion pictures. (Top) From left to right: Hector Zenil, Stephen Wolfram, Paul Davies, Ugo Pagallo, Greg Chaitin, Cris Calude, Karl Svozil, Gordana Dodig Crnkovic, and John Casti. (Bottom) From left to right: Hector Zenil, Greg Chaitin, Ed Fredkin, Rob de Ruyter, Tony Leggett, Cris Calude, Tom Toffoli, and Stephen Wolfram. Transcriptions in~\cite{randomness} and~\cite{computableuniverse}.} \label{panels} \end{figure} At the centre of my research is Greg Chaitin's work on what is known as algorithmic probability, which in turn is related to Chaitin's Omega ($\Omega$) number, also called the halting probability. My very first research paper~\cite{zenilcalude} was published in Chaitin's 60th birthday festschrift~\cite{calude}. The paper advanced the claim, and provided the first numerical evidence, of a bias present in nature, a claim I later developed in a contribution on the digital or analogue nature of the universe that won an FQXi prize~\cite{zenilessay}.
Such claims have more recently been noticed and expanded upon~\cite{kamal}, a further confirmation in which my own methods again played an important role. In a later follow-up piece, I connected Turing patterns with Turing machines, explaining how symmetry-breaking creates structure---as opposed to randomness---out of nothing, with only computability being assumed~\cite{naturalcomputing}. \section{Complexity from Computation} Not only did Chaitin help found one of the most exciting areas of modern science (computer science), but it may turn out that his contribution, together with that of Alan Turing, has had a more profound effect on our understanding of our physical reality than we had hitherto supposed. At the beginning of the twentieth century and through the end of the Second World War, computers were human beings, not electronic devices, and they were mainly women. The work of a computer consisted precisely in solving tedious arithmetical operations with paper and pencil. This was looked upon as work of an inferior order. At an international mathematics conference in 1928, David Hilbert and Wilhelm Ackermann suggested the possibility that a mechanical process could be devised that was capable of proving all mathematical assertions. This problem is referred to as the \textit{Entscheidungsproblem} (German for `the decision problem'). If a human computer did no more than execute a mechanical process, it was not difficult to imagine that arithmetic would be amenable to a similar sort of mechanization. The origin of the Entscheidungsproblem dates back to Gottfried Leibniz, who, having succeeded (around 1672) in building a machine based on the ideas of Blaise Pascal that was capable of performing arithmetical operations (named the Staffelwalze, or Step Reckoner), imagined a machine of the same kind that would be capable of manipulating symbols to determine the truth value of mathematical principles.
To this end Leibniz devoted himself to conceiving a formal universal language, which he designated the characteristica universalis, a language that would encompass, among other things, binary language and the definition of binary arithmetic. In 1931, Kurt G\"odel~\cite{godel} arrived at the conclusion that Hilbert's intention (also referred to as `Hilbert's programme') of proving all theorems by mechanizing mathematics was not possible under certain reasonable assumptions. G\"odel advanced a formula that codified an arithmetical truth in arithmetical terms and that could not be proved without arriving at a contradiction. Worse still, this implied that no set of axioms containing arithmetic could be free of true formulae that cannot be proved within it. In 1944, Emil Post, another key figure in the development of the concepts of computation and computability (focusing especially on the limits of computation), found~\cite{post} that this problem was intimately related to one of the twenty-three problems (the tenth) that Hilbert, speaking at the Sorbonne in Paris, had declared to be among the most important challenges for twentieth-century mathematics. Hilbert's programme is usually considered a failure, though in fact it is anything but. It is true that G\"odel debunked~\cite{godel} the notion that everything that is true can be proved, paving the way for a negative solution to the `decision problem', and that Martin Davis~\cite{davis} (and independently, Julia Robinson~\cite{robinson}) used G\"odel's negative result to provide a negative solution to Hilbert's tenth problem (the argument for which was completed by Yuri Matiyasevich~\cite{matiyasevich}). Yet Hilbert's supposedly failed programme originated what we now know as Computer Science, a field that would not have been possible without Alan M. Turing's concept of the universal machine. \subsection{One machine for everything} Not long after G\"odel, Alan M. Turing made his appearance.
Turing contemplated the problem of decision in much more concrete terms. If the act of performing arithmetical operations is mechanical, why not substitute a mechanical device for the human computer? Turing's work represented the first abstract description of the digital general-purpose computer as we know it today. Turing defined what in his article he termed an `a'-machine (for `automatic'), now known as a Turing machine. A Turing machine is an abstract device which reads or writes symbols on a tape one at a time and can change its operation according to what it reads, moving forwards or backwards through the tape. The machine stops when it reaches a certain configuration (a combination of what it reads and its internal state). A Turing machine is said to produce an output only if it halts, the output being the contents of the tape locations the machine has visited. The most remarkable idea advanced by Turing is his demonstration that there is an `a'-machine that is able to read the description of any other `a'-machine and behave as that machine would for an input $s$. In other words, Turing proved that it was not necessary to build a new machine for each different task; a single machine that could be reprogrammed sufficed. This erases the distinction between program and data, and hence between software and hardware, as one can always codify data as a program to be executed by another Turing machine and vice versa, just as one can always build a \textit{universal machine} to execute any program. Turing also proved that there are Turing machines that never halt, and if a Turing machine is to be \textit{universal}, and hence able to simulate any other Turing machine or computer program, it is to be expected that it will never halt for infinitely many inputs of a certain type (while halting for infinitely many inputs of some other type).
And this is what Turing would have expected, given G\"odel's results and what he himself wanted to demonstrate: that Hilbert's mechanization of mathematics was impossible. This result is known as the undecidability of the halting problem. In his seminal article Turing defined not only the basis of what we today know as digital general-purpose computers, but also software, programming and subroutines. It thus without a doubt represents the best answer we have to date to the question `What is an algorithm?' Indeed, in his work on the universal machine Turing even introduced the concept of a subroutine, which helped him in its construction. These notions are today the cornerstone of the field that Turing, more than anyone else, helped establish, viz. Computer Science. Once we approach the problem of defining what an algorithm is and arrive at the concept of \textit{universality} that Turing advanced, the question to be considered in greater detail concerns the nature of algorithms. Given that one now has a working definition of the algorithm, one can begin to classify problems, algorithms and computer programs by, for example, the time they take or the storage memory they require to be executed. One might assume that the time required for an algorithm to run would depend on the type of machine, given that running a computer program on a Pentium PC is very different from executing it on a state-of-the-art supercomputer. This is why the concept of the Turing machine was so important: any answers to questions about problem and algorithm resources only make sense if the computing device is always the same, and that device is none other than the universal Turing machine. So, for example, every step that a Turing machine performs while reading its tape is counted as a time step.
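To make the model concrete, here is a minimal sketch of a Turing machine simulator that counts time steps. The 2-state, 2-symbol rule table is an illustrative `busy beaver' machine of my own choosing, not a construction from Turing's article:

```python
# Minimal Turing machine simulator; each rule application counts as one time step.
# The transition table maps (state, symbol) -> (symbol to write, head move, next state).

def run_tm(rules, tape, state="A", halt="H", max_steps=10_000):
    tape = dict(enumerate(tape))  # sparse tape; unvisited cells read as 0
    pos, steps = 0, 0
    while state != halt and steps < max_steps:
        write, move, state = rules[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        steps += 1
    return steps, [tape[i] for i in sorted(tape)]

# A 2-state, 2-symbol "busy beaver" machine (illustrative rule table).
rules = {
    ("A", 0): (1, +1, "B"), ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"), ("B", 1): (1, +1, "H"),
}
steps, output = run_tm(rules, [0])
print(steps, output)  # halts after 6 steps, leaving four 1s on the tape
```

Run on a blank tape, this machine halts after six time steps having written four 1s, the maximum achievable by any 2-state, 2-symbol machine.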
Many algorithms can arrive at the same result by different paths, some faster than others, and this becomes a well-posed matter once the framework of Turing's model of computation is fixed: one asks whether there is an algorithm that surpasses all others in economy as regards the resources required, when run on exactly the same computing device. These are the questions that opened up an entire new field in the wake of Turing's work, the development of which Turing would certainly have been delighted to witness. This field is today referred to as the theory of Computational Complexity, and it would not have been possible without a concept such as that of the universal Turing machine. The theory of Computational Complexity focuses on classifying problems and algorithms according to the time they take to compute as larger inputs are considered, and on how input size and execution time are related to each other. This is all connected to the two basic resources needed in any computation: space (storage) and time. For example, one elementary observation in this theory is that no algorithm will need more space than time to perform a computation. One can then quickly proceed to ask more difficult but more interesting questions, such as whether a machine can execute a program faster if it is allowed to behave probabilistically instead of deterministically. What happens if one adds more than one tape to a universal Turing machine? Would that allow some problems to be solved much faster? One may even ask whether there is always an efficient algorithm for every inefficient algorithm, a question that leads to a fascinating topic connecting computers and physics. \subsection{The world of simple programs} If algorithms can come in a variety of types, some slow, others faster, what is it that allows nature to produce, with no apparent effort, what seem to us to be complex objects?
These range from the laws of physics to the formation of matter and galaxies to the beginning of life on Earth (and possibly in other parts of the universe). In the end, one can see all these natural phenomena as a kind of computation, regardless of whether it is of exactly the same type as that performed by a Turing machine; this latter possibility cannot be completely disregarded. Thanks to Turing we know that even simple devices such as universal Turing machines possess incredible power. One of the natural world's most fascinating characteristics is that it presents a wide range of physical and biological systems that behave in different ways, just like algorithms, most of them having some regular features while nonetheless being hard to predict. The weather is a case in point: even though it is broadly cyclical, its details cannot be predicted more than about a week in advance. Where does nature's complexity come from? Throughout human history we have encountered objects, in particular mathematical ones, that seem complex to us. One set of such objects comprises the numbers that can be expressed as a quotient $p/q$, with $p$ and $q$ integers. The numbers 5 and 0.5, and even numbers with infinite decimal expansions such as $0.333\ldots$, can be written as $5/1$, $1/2$ and $1/3$, respectively. But as far back as the ancient Greeks, numbers have been known, such as $\pi$ and the square root of 2, that cannot be expressed in this way. One can think of arithmetical division as an algorithm that can be performed by a Turing machine, the result being provided on the output tape. Multiplication, for example, is an algorithm that shortens the number of steps needed to perform the same operation using only addition. In the case of numbers that admit a rational representation $p/q$, the algorithm for the division of integers consists of the common procedure of finding quotients and remainders.
In the case of numbers such as $\pi$ and the square root of 2, the algorithm produces an infinite non-periodic expansion, so that the only way to represent them exactly is symbolically (i.e. $\pi$ and $\sqrt{2}$). The Pythagoreans found that these numbers of ostensibly infinite complexity could be produced by very simple operations, for example, when seeking the value of the hypotenuse of a right triangle with sides of length 1. Since Euclid, it has also been known that such numbers are not the exception among the real numbers found, for example, in the continuous interval $(0, 1)$. In algorithmic terms, rational and irrational numbers are different in nature. No division of integers can produce an irrational number and then halt: when one starts a Turing machine that implements the division algorithm, the division of rational numbers either halts (when the remainder is zero) or enters an infinite cycle that produces a repetitive decimal expansion. In engineering, including systems programming, the intuition of what is complex (as compared to an irrational number in mathematics, for example) has been radically different. The usual assumption has been that to produce something that seems complex, a process that is just as complex has to be devised. This issue, however, is closely connected to Turing's concept of universality, given that a programmable universal Turing machine is, in principle, capable of producing any degree of `complexity', for example, the type of complexity (or randomness) that one can see in the decimal expansion of $\pi$. If the division algorithm or a formula for $\pi$ can produce such apparent complexity, how common is it for a randomly chosen computer program to produce the same complexity? If computer programs that produce complexity needed a very complex description, the probability of finding one small enough would be very low.
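The halting-or-cycling dichotomy just described for the division algorithm can be sketched directly; detecting a repeated remainder is what reveals the periodic expansion (the function below is my own illustration, not code from the text):

```python
# Long division of p/q: halts when the remainder reaches 0, otherwise the first
# repeated remainder marks the start of a periodic decimal expansion.

def decimal_expansion(p, q, max_digits=64):
    integer, r = divmod(p, q)
    digits, seen = [], {}
    while r != 0 and r not in seen and len(digits) < max_digits:
        seen[r] = len(digits)   # remember where this remainder first appeared
        d, r = divmod(r * 10, q)
        digits.append(d)
    if r == 0:
        return f"{integer}." + "".join(map(str, digits))   # terminating expansion
    start = seen[r]
    head = "".join(map(str, digits[:start]))
    period = "".join(map(str, digits[start:]))
    return f"{integer}.{head}({period})"                   # (...) marks the repeating block

print(decimal_expansion(1, 2))  # 0.5
print(decimal_expansion(1, 3))  # 0.(3)
print(decimal_expansion(1, 7))  # 0.(142857)
```

For $1/2$ the remainder reaches zero and the procedure halts; for $1/3$ and $1/7$ a remainder recurs, yielding the periodic expansions $0.(3)$ and $0.(142857)$.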
For example, even though Turing's 1936 article contains all the main elements of the traditional description of a universal Turing machine capable of reproducing the type of complexity to be found in the digits of $\pi$, the construction of his universal machine requires at least 18 states and at least 23 instructions (the exact number cannot be calculated on the basis of Turing's article, because he uses subroutines that can be implemented on machines of different sizes). Whatever the actual threshold for reaching Turing universality, it had typically been thought to be high (a case in point: von Neumann's universal constructor, a system that anticipated the modern concept of cellular automata, requires 29 states), and it was thought that a universal machine would require a certain minimum complexity (at least as to the number of states and symbols required to describe it). In an experiment with extremely small and simple computer programs, Stephen Wolfram found that this threshold of complexity and universality was likely to be extremely low~\cite{wolfram}, and that very little was required to find a machine that produced high complexity or that was capable of being Turing universal. Indeed, Wolfram not only found a very small Turing machine, with only 2 symbols and 5 states, capable of carrying out universal computation under some very simple conditions (that is, a computer powerful enough to emulate any standard computer); there was another, even smaller Turing machine that Wolfram suspected was Turing-universal. To ascertain whether this was so, in 2007 Todd Rowland and I organized a prize competition (\url{https://www.wolframscience.com/prizes/tm23/}) with a view to determining how simple the rules for a universal Turing machine could in fact be.
Wolfram's other favourite computer programs are the Elementary Cellular Automata~\cite{wolfram}, and it is on one of these that the original smallest (2,5) universal Turing machine is based. Elementary Cellular Automata (ECA) are another superb example of a computing model capable of very rich behaviour. ECA are minimalistic computer programs that are very visual, as they print out their space-time evolution in two dimensions and can be subjected to easy visual inspection. Recently, my colleagues and I found another extremely small Turing-universal computer by combining two ECA, themselves extremely simple~\cite{zeniljurgen}. In Fig.~\ref{simplified}, I show how the ideas of Chaitin (and Kolmogorov and Solomonoff) can help us understand the complexity of ECA by looking at how difficult it is to describe them succinctly in terms of their generating model. The simplified rule shown in Fig.~\ref{simplified} is an upper bound on their Kolmogorov-Chaitin complexity. Such simple estimations already provide a much better characterization than other simplifications, such as Langton's so-called $\lambda$ parameter, as they correspond better to the literature on the complexity of ECA. And even better estimations and tighter bounds can be found using more powerful approaches based on Chaitin's (and Levin's) work, notably with the help of two methods that my team and I put together, called CTM and BDM (as shown in Fig.~\ref{simplified}), rooted in a beautiful concept called algorithmic probability, which is deeply related to Chaitin's own $\Omega$ number. \subsection{Algorithmic probabilities} \label{ctm} Just as the formulae for the production of the digits of $\pi$ are compressed versions of $\pi$, the laws of physics can be seen as systems that compress natural phenomena. These laws are valuable because it is thanks to them that the result of a natural phenomenon can be predicted without having to wait for it to unfold in real time, e.g.
one can solve the equations that describe planetary motion instead of waiting two years to learn the future position of a planet. For all practical purposes, the laws of physics, and the scientific models we have of them, are like computer programs: executable on digital computers and amenable to numerical solution. At the centre of Chaitin's work related to his $\Omega$ number~\cite{chaitin}, and also of the variants introduced by Solomonoff~\cite{solomonoff} and Levin~\cite{levin}, is the notion of algorithmic probability, which can be defined as $$AP(s) = \sum_{p:U(p) = s} 2^{-|p|}$$ That is, the sum over all the programs $p$ for which a (prefix-free) universal Turing machine $U$ outputs $s$ and halts. The notion behind $AP$ is very intuitive. If one wished to produce the digits of $\pi$ randomly, one would have to try time after time until one managed to hit upon the first numbers corresponding to an initial segment of the decimal expansion of $\pi$. The probability of success is extremely small: $(1/10)^n$ for $n$ desired digits, for example $1/10^{2400}$ for a segment of 2400 digits of $\pi$. But if instead of shooting out random digits one were to shoot out random computer programs to be run on a digital computer, the result would be very different: a program that produces the digits of $\pi$ would have a much higher probability of being hit upon, since concise, known formulae for $\pi$ can be implemented as short computer programs that generate any arbitrary number of digits of $\pi$. \subsection{Algorithmic Chaitin complexity} The length of the shortest program that produces a string is today the mathematical definition of randomness, as introduced by Kolmogorov~\cite{kolmo}, Chaitin~\cite{chaitin}, and Solomonoff~\cite{solomonoff}, and later expanded by Levin~\cite{levin}; it is also called Kolmogorov-Chaitin complexity. The idea is relatively simple.
If a string $s$ of length $|s|$ cannot be produced by a program $p$ shorter than $|s|$ bits, then the string $s$ is random, because it cannot be effectively described in a shorter way than by $s$ itself: there is no program $p$ generating $s$ whose length is shorter than $|s|$. Formally, $$C_U(s) = \min\{|p|:U(p)=s\}$$ \noindent where $U$ is a universal Turing machine. The invariance theorem~\cite{kolmo,chaitin,solomonoff} guarantees that the value of $C$, whether calculated with one particular universal Turing machine or with any other, differs by at most a constant. Formally, if $U_1$ and $U_2$ are two universal Turing machines and $C_{U_1}(s)$ and $C_{U_2}(s)$ are the values of the algorithmic complexity of $s$ for $U_1$ and $U_2$ respectively, there exists a constant $c$ such that $$|C_{U_1}(s) - C_{U_2}(s)| < c$$ Thus, the longer the string, the less important $c$ is and the more stable the algorithmic complexity value $C$ is. One of the disadvantages of $C$ is that, owing to the halting problem for Turing machines, $C$ is not computable, which is to say that given a string, there is no algorithm that returns the length of the shortest computer program that produces it. \subsection{Complexity is in inverse relation to probability} Algorithmic probability and algorithmic complexity $C$ are inversely related, as established by the so-called algorithmic Coding theorem~\cite{cover,calude}: $$|-\log_2 AP(s) - C(s)| < c $$ \noindent where $c$ is a constant independent of $s$. The Coding theorem implies that the algorithmic complexity of a string can be estimated from its frequency of production. To illustrate the above, let us consider $\pi$. Under the assumption of Borel's absolute normality of $\pi$, whose digits appear randomly distributed, and with no knowledge of the deterministic source of $\pi$ in short mathematical formulae, we ask how an entropy-based versus an algorithmic metric performs.
First, the Shannon entropy rate of the first $N$ digits of $\pi$, in any base (thus assuming a uniform distribution over all digit sequences of length $N$), would suggest maximum randomness in the limit. However, without access to, or without making assumptions about, the probability distribution, approximations to algorithmic probability would assign $\pi$ a high probability, and thus a low complexity by the Coding theorem, as has been done in~\cite{d4,d5,kolmo2d,bdmpaper}. Just as with $\pi$, but in application to graphs and networks, it has been shown how certain graphs can be artificially constructed to target any level of Shannon entropy~\cite{zkpaper,morzy} while preserving low algorithmic randomness. But how relevant is the algorithmic Coding theorem in explaining, e.g., natural phenomena, if it only applies to Turing-universal systems? We know that the natural world allows and can carry out Turing-universal computation, because we have been able to build computers that take elements from nature and make them perform as general-purpose computers. However, we do not know how much of the natural world is Turing-computable, or how physical laws may naturally implement any form of computation. In~\cite{liliana} we showed that up to 60\% of the bias found in the output of systems that are not Turing-universal may be explained by the algorithmic Coding theorem. This means that the theorem is far more relevant than expected, as it not only explains the way data and patterns are distributed for one very particular computational model, but does so, to a considerable extent, for basically any computing model. It is not difficult to imagine that nature operates at some of these levels, perhaps not even at a fixed one but at multiple scales, with the algorithmic Coding theorem being relevant to all of them.
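As an illustration of the frequency-complexity relation (a toy model of my own devising, far simpler than the Turing-machine enumerations used in CTM), one can enumerate all programs of a tiny two-instruction language, tally how often each output occurs, and turn the resulting algorithmic-probability estimates into complexity estimates via $-\log_2$:

```python
from itertools import product
from math import log2

# Toy "machine": a binary program is read left to right; instruction 0 appends
# a '0' to the output, instruction 1 appends a '1' if the output is empty and
# otherwise doubles the output. Every program halts.
def run(program):
    out = ""
    for bit in program:
        if bit == 0:
            out += "0"
        else:
            out = "1" if not out else out + out
    return out

# Enumerate all programs up to length L and tally algorithmic probability:
# AP(s) = sum of 2^-|p| over the programs p producing s.
L = 8
ap = {}
for n in range(1, L + 1):
    for program in product([0, 1], repeat=n):
        s = run(program)
        ap[s] = ap.get(s, 0.0) + 2 ** -n

# Coding-theorem-style complexity estimate, up to a constant.
K = {s: -log2(p) for s, p in ap.items()}
print(K["0000"], K["1000"])  # the regular string gets the lower estimate
```

In this toy model the regular string `0000` is produced by several short programs (appending, or appending and doubling) and so receives a lower complexity estimate than `1000`, which only a single program of the same length produces, in line with the Coding theorem.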
\subsection{The Coding Theorem Method} The algorithmic \textit{Coding Theorem Method} (CTM)~\cite{d4,d5} provides a means of approximating the algorithmic complexity of a string via its frequency of production. Why is this so? The underlying mathematics originates in the relation, specified by algorithmic probability, between the frequency with which a string is produced by a random program and the string's algorithmic complexity. It is therefore referred to as the algorithmic \textit{Coding theorem}, in contradistinction to the well-known coding theorem of classical information theory. Essentially, the numerical approximation hinges on the fact that the more frequently a string (or object) occurs, the lower its algorithmic complexity; conversely, strings with a lower frequency have higher algorithmic complexity. Otherwise stated, $$CTM(s) = - \log_2 AP(s) + c$$ The way to implement a compression algorithm at the level of Turing machines, unlike popular compression algorithms, which are heavily based on Shannon entropy, is to go through all possible compression schemes. This is equivalent to traversing all possible programs that are compressed versions of a piece of data, which is exactly what the CTM algorithm does. \subsection{The Block Decomposition Method} Our approach to Chaitin's halting probability and Solomonoff-Levin algorithmic probability consists in asking for the probability that a matrix is generated by a random Turing machine operating on a 2-dimensional array (such a machine is also called a \textit{turmite} or \textit{Langton's ant}~\cite{langton}). An accounting procedure is then performed using such Turing machines, with the aim of approximating the algorithmic complexity of the identified structures. This technique is referred to as the \textit{Block Decomposition Method} (BDM), as introduced in~\cite{zenilgraph} and~\cite{bdmpaper}. The BDM technique requires a partition of the adjacency matrix corresponding to the graph into smaller matrices.
With these building blocks at hand, we numerically calculate their algorithmic probability by running a large set of small 2-dimensional deterministic Turing machines, and then---by applying the algorithmic Coding theorem, as discussed above---their algorithmic complexity. Following such a divide-and-conquer scheme, we can then approximate the overall complexity of the original adjacency matrix by the sum of the complexities of its parts. Note that we have to take into account a logarithmic penalization for repetition, given that $n$ repetitions of the same object add only $\log_2 n$ to its overall complexity, as one can simply describe a repetition in terms of the multiplicity of the first occurrence. Technically, this translates into the algorithmic complexity of a labelled graph $G$ by means of $BDM$, defined as follows: \begin{equation} \label{newecaeq} BDM(G,d) = \sum_{(r_u,n_u)\in A(G)_{d\times d}} \log_2(n_u)+K_m(r_u) \end{equation} where $K_m(r_u)$ is the approximation of the algorithmic complexity of the sub-array $r_u$ arrived at by using the algorithmic Coding theorem, and $A(G)_{d\times d}$ is the set of pairs $(r_u,n_u)$ obtained by decomposing the adjacency matrix of $G$ into non-overlapping squares (i.e. a block matrix) of size $d$ by $d$. In each pair $(r_u,n_u)$, $r_u$ is one such square and $n_u$ its multiplicity (number of occurrences). From now on, $BDM(G,d=4)$ will be denoted simply by $BDM(G)$; it should be taken as an approximation to $C(G)$ unless otherwise stated (e.g. when taking the theoretical true value of $C(G)$). Once CTM has been calculated, BDM can be implemented as a look-up table, and hence runs efficiently, in linear time, for non-overlapping fixed-size submatrices. \subsection{Algorithmic Information Dynamics} Unlike most complexity measures, which are designed for static objects, except those related to dynamical systems (e.g.
Lyapunov exponents), I have led the introduction of a measure of algorithmic complexity adapted to dynamical systems and designed to characterize the change in algorithmic complexity of an object evolving over time. The measure is universal in the sense that it can deal with any computable feature that the system may display over time, either spontaneously or as a result of an external perturbation, intervention or interaction. At the core of Algorithmic Information Dynamics~\cite{maininfo,crc}, the algorithmic causal calculus that we introduced, is the quantification of the change of complexity of a system under natural or induced perturbations: in particular, the direction (sign) and magnitude of the difference between the algorithmic information approximations, denoted by $C$, of an object $G$, such as a cellular automaton or a graph, and of its mutated version $G^\prime$, produced e.g. by the flip of a cell bit (or a set of bits) or the removal of an edge $e$ from $G$ (denoted by $G\backslash e= G^\prime$). The difference $| C(G) - C(G\backslash e) |$ is an estimation of the algorithmic mutual information shared by $G$ and $G\backslash e$. If $e$ does not contribute to the description of $G$, then $| C(G) - C(G\backslash e) | \leq \log_2|G|$, where $|G|$ is the uncompressed size of $G$; that is, the difference will be very small, at most a function of the object's size, and thus $C(G)$ and $C(G\backslash e)$ have almost the same complexity. If, however, $| C(G) - C(G\backslash e) | > \log_2|G|$, differing by some $n$ bits, then $G$ and $G\backslash e$ share at least $n$ bits of algorithmic information in element $e$, and the removal of $e$ results in a loss of information. In contrast, if $C(G) - C(G\backslash e) > n$, then $e$ cannot be explained by $G$ alone, nor is it algorithmically contained in or derivable from $G$; it is therefore a fundamental part of the description of $G$, with $e$ a generative causal mechanism in $G$, or else it is not part of $G$ and has to be explained independently, e.g.
as noise. Whether it is noise or part of the generating mechanism of $G$ depends on the relative magnitude of $n$ with respect to $C(G)$ and to the original causal content of $G$ itself. If $G$ is random, then the effect of $e$ will be small in either case, but if $G$ is richly causal and has a very small generating program, then $e$ as noise will have a greater impact on $G$ than would removing $e$ from an already short description of $G$. However, if $| C(G) - C(G\backslash e) | \leq \log_2 |G|$, where $|G|$ is, e.g., the vertex count of a graph or the runtime of a cellular automaton $G$, then $e$ is contained in the algorithmic description of $G$ and can be recovered from $G$ itself (e.g. by running the program from a previous step until it produces $G$ with $e$ from $G\backslash e$). We have shown how we can infer and reconstruct space-time evolutions by quantifying the disruptiveness of a perturbation. We can then extract the generating mechanism from the time indices ordered from least to most disruptive, and produce candidate generating models. Simpler rules have simpler hypotheses, with an almost perfect correspondence in row order. Some systems may look more disordered than others, but locally the relationship between single rows is for the most part preserved (indicating local reversibility). We have shown that the later in time a perturbation is injected into a dynamical system, the less it contributes to the algorithmic information content of the overall space-time evolution. We then move from discrete 2D systems to the reconstruction of phase spaces and space-time evolutions of $N$-dimensional, continuous, chaotic, incomplete and even noisy dynamical systems.
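The decomposition behind the $BDM$ definition above, and the perturbation analysis just described, can be sketched in a few lines of Python. The function names below are mine and, crucially, a general-purpose compressor stands in for the precomputed CTM values $K_m(r_u)$, which in the actual method come from a look-up table built by running vast numbers of small Turing machines:

```python
import itertools
import math
import zlib
from collections import Counter

def block_complexity(block):
    # Stand-in for the precomputed CTM value K_m(r_u); the actual method
    # looks this up in a table built from small Turing machines.
    return len(zlib.compress(bytes(itertools.chain.from_iterable(block)), 9))

def bdm(adj, d=4):
    # BDM(G, d): sum over distinct d-by-d blocks (r_u, n_u) of
    # log2(n_u) + K_m(r_u), ignoring any ragged border of the matrix.
    n = len(adj)
    blocks = Counter()
    for i in range(0, n - n % d, d):
        for j in range(0, n - n % d, d):
            blocks[tuple(tuple(adj[i + a][j + b] for b in range(d))
                         for a in range(d))] += 1
    return sum(math.log2(n_u) + block_complexity(r_u)
               for r_u, n_u in blocks.items())

def edge_shift(adj, i, j, d=4):
    # |C(G) - C(G\e)| approximated with the BDM stand-in above.
    base = bdm(adj, d)
    adj[i][j] = adj[j][i] = 0        # remove the edge e = (i, j)
    shift = abs(base - bdm(adj, d))
    adj[i][j] = adj[j][i] = 1        # restore it
    return shift
```

With a real CTM table, `block_complexity` becomes a dictionary lookup and the whole computation runs in linear time over the blocks, as noted above.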
\section{Computability in Nature} Chaitin's work has allowed me to frame and attempt to answer questions related to the random or algorithmic nature of the world and universe: whether it is in some sense mechanistic and can be reproduced by an artificial mechanism such as an electronic computer. A first point to note is that we as human observers access the world through what we call science, at least when we do so formally, and the aim of science is to perform causal inference from empirical observations about the world or the universe. The driver is twofold: to understand the world and to formulate predictions regarding natural phenomena. The result of this approach over the last few millennia has been quite successful; some may describe it as incredible, even unreasonable. \subsection{Cellular automata as inexhaustible pattern generators} The cellular automata (CA) model can be seen as a pattern-generating computational system, and because the model is Turing-complete, it is in a formal sense an inexhaustible source of complexity generation. CA have an intrinsic parallel nature that makes them ideal for visually representing a computation. We recognize certain CA performing computations with specific meanings, for instance, as colliders, counters, and majority deciders. To illustrate the concept of algorithmic complexity, we can represent ECA rules in a simplified fashion, as shown in Fig.~\ref{simplified}, featuring the original rule and the simplified one using wildcards. \begin{figure}[ht!]
\centering {\small Rule 255}\\ \medskip \scalebox{.21}{\includegraphics{simpleCA.png}} \bigskip \bigskip Estimations of Algorithmic Complexity for different ECA rules\\ \medskip \scalebox{.3}{\includegraphics{simplifiedrules.png}} \bigskip \bigskip \caption{\label{simplified}(Top) An example of a cellular automaton, in this case Elementary Cellular Automaton (ECA) rule 255, which traverses the cells three by three along each row and, starting from the original initial condition (a single black cell), maps every three-cell neighbourhood to a black cell printed below. (Bottom) ECA cases showing the process of wildcard rule simplification. For rule 255, the simplified description of the ECA tells us that any black or white combination of three cells always leads to the same black cell.} \end{figure} The use of `wildcards' allows a sort of trivial compression yielding a shorter description, and thus tighter upper bounds on Kolmogorov-Chaitin complexity as compared to the original, longer 8-bit descriptions. In Fig.~\ref{simplified}, four ECA are shown, with their rules represented as sets of local rules or icons followed by their wildcard-simplified versions, which reduce the description length of the global rule by looking for ways to compress the local rules. An implementation of this type of wildcard simplification can be found in the Wolfram Language under the RulePlot[] function with Appearance $\rightarrow$ ``Simplified'' indicated. Clearly, the more random-looking and complex the rule, the larger its wildcard description. Each rule allows a greater simplification (compared to its original 8-bit description) when it produces less algorithmically complex patterns (the number following each rule is its Wolfram class). \begin{figure}[ht!]
\centering \scalebox{.38}{\includegraphics{ECAscomplexity.png}} \caption{\label{ecas}An a priori measure based on non-zero density (called Langton's lambda or $\lambda$) of cellular automata versus tighter bounds on algorithmic randomness according to the rule-simplification scheme and to one of the most popular lossless compression algorithms (LZW) used to approximate algorithmic complexity. Values are normalized by the maximum value of each index applied to all ECA starting from a random initial configuration, letting each rule evolve for 100 steps. LZW collapses all cases because the evolutions are too small for it to differentiate them, but even for longer runtimes LZW confounds most classes, particularly 1 with 2, and 3 with 4. In contrast, BDM assigns higher algorithmic randomness to class 3 than to class 4. BDM also collapses the simplest classes, 1 and 2, more effectively than simpler indices, including the popular LZW, used and abused to approximate algorithmic complexity.} \end{figure} The simplified rule size as described in Fig.~\ref{simplified} is an a priori algorithmic randomness measure based on a simplification of the CA global rule, while BDM is an a posteriori, observer-oriented measure based on Chaitin's notion of randomness. Introduced by Wolfram~\cite{wolfram}, Wolfram's classes correspond to the typical behaviour of each ECA, based on how often a rule behaves in a certain way for a random initial condition. Classes 1 and 2 are the simplest, while class 3 displays statistically random behaviour and class 4 displays random behaviour together with sophisticated structures with which computation can be implemented~\cite{wolfram,cook}.
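The wildcard simplification of Fig.~\ref{simplified} can be approximated by greedily merging the eight local rules of an ECA, in the spirit of two-level logic minimization. The following sketch is mine (the greedy strategy does not guarantee a minimal cover in general); it returns a set of wildcard patterns whose count serves as the simplified rule size discussed above:

```python
def simplify(rule):
    """Greedily merge the 8 local rules of an ECA into wildcard patterns,
    where '*' matches both colours; a rough sketch of the simplification
    illustrated for rule 255, not a guaranteed-minimal cover."""
    # Start from the 8 explicit (neighbourhood -> output) entries.
    pats = {(a, b, c): (rule >> (4 * a + 2 * b + c)) & 1
            for a in (0, 1) for b in (0, 1) for c in (0, 1)}
    merged = True
    while merged:
        merged = False
        for p, o in list(pats.items()):
            for q, o2 in list(pats.items()):
                if p == q or o != o2:
                    continue
                diff = [k for k in range(3) if p[k] != q[k]]
                # Merge two patterns that agree everywhere except at one
                # position where they take opposite concrete colours.
                if len(diff) == 1 and {p[diff[0]], q[diff[0]]} == {0, 1}:
                    m = tuple('*' if k == diff[0] else p[k] for k in range(3))
                    del pats[p], pats[q]
                    pats[m] = o
                    merged = True
                    break
            if merged:
                break
    return pats
```

For rules 0 and 255 the eight entries collapse to the single pattern `('*', '*', '*')`, while more complex rules such as 110 retain a larger description, in line with Fig.~\ref{simplified}.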
In Fig.~\ref{ecas}, it is clear that an a priori parameter such as Langton's $\lambda$, which takes the density of non-zeros in the description of a rule as a measure of complexity, deals poorly with the system and is easily outperformed by methods based on Chaitin's algorithmic randomness, one of which (BDM) is numerically based on a variation of Chaitin's $\Omega$, unlike methods such as LZW and other popular lossless compression algorithms, which are mostly based on Shannon entropy~\cite{liliana}. BDM quantifies algorithmic complexity while circumventing the use of popular lossless compression algorithms, which are misleading, being closer to entropy than to algorithmic complexity~\cite{liliana}. \subsection{Science: mapping data to models} So what can compression tell us about the type of knowledge we have and the way in which we access the world? While the practice of science, and mechanistic models in particular, points towards an algorithmic universe governed by ordered laws and computable principles, it is only with the theory of algorithmic randomness that we are able to mathematically understand and quantify the subtleties of this space of computable models. The methods that I have developed over the years, building upon the theory of algorithmic complexity, can help map the space of computable models onto observations (data), making the former natural underlying candidate mechanistic models of the latter. To this end, data is chunked into smaller pieces, as illustrated in Fig.~\ref{bdmmapping}, with each piece having a corresponding candidate computable model from the space of computer programs.
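The contrast drawn in Fig.~\ref{ecas} between Langton's $\lambda$ and measures computed from the evolution itself can be illustrated numerically. In this rough sketch (my function names; zlib as a crude stand-in for LZW, and a single-cell rather than a random initial configuration), $\lambda$ is just the density of black outputs in the 8-bit rule table, while the compressed size of the whole space-time evolution is an a posteriori proxy:

```python
import zlib

def eca_evolve(rule, width=101, steps=100):
    # Evolve an ECA from a single black cell on a cyclic row.
    row = [0] * width
    row[width // 2] = 1
    rows = [row]
    for _ in range(steps):
        row = [(rule >> (4 * row[(i - 1) % width]
                         + 2 * row[i]
                         + row[(i + 1) % width])) & 1
               for i in range(width)]
        rows.append(row)
    return rows

def langton_lambda(rule):
    # Density of non-quiescent (black) outputs in the 8-bit rule table.
    return bin(rule).count('1') / 8

def compressed_size(rule):
    # A posteriori complexity proxy: compressed length of the evolution.
    flat = bytes(c for r in eca_evolve(rule) for c in r)
    return len(zlib.compress(flat, 9))
```

Rules 30 (class 3) and 51 (class 2) have exactly the same $\lambda = 0.5$, yet their evolutions have very different compressed sizes, which is precisely the kind of distinction an a priori density measure cannot make.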
Because the computer power needed to traverse the space is huge (the task is as hard as computing the Busy Beaver function~\cite{rado}, which outgrows every computable function), we have used supercomputers in the past and have now also proposed new methods based on cryptocurrencies that can be exploited to compute the space, rather than computing useless one-way functions, as is typical of cryptocurrencies. The resulting algorithmic landscape in Fig.~\ref{bdmmapping} is not flat, because different pieces of data have associated computable models (computer programs) of different lengths, so some parts of the data are more difficult to produce than others, i.e. require more information to be specified, either because they are disconnected from the rest or because they are in contact with other systems injecting new information. \begin{figure}[ht!] \centering \scalebox{.32}{\includegraphics{BDMmapping.png}} \bigskip Coding Theorem and Block Decomposition Methods \medskip \scalebox{.39}{\includegraphics{decomposition.png}} \caption{\label{bdmmapping}Algorithmic Information Dynamics (AID) guides the exploration and matching. Together, CTM and BDM are the methods at the heart of AID~\cite{d4,d5,bdm}, helping to mine and guide the search for computable candidate models reproducing a piece of data (or the components of a smaller piece of data) from the space of all computer programs. Automacoin is a cryptocurrency, currently under development, that illustrates the way in which more responsible use of computer power can help science, using methods such as CTM and BDM. (Bottom) The process of block decomposition of an ECA (or any other system) consists in dividing the CA space-time evolution into smaller blocks for which estimations of algorithmic randomness are precomputed using the Coding Theorem Method.
This image is licensed under a Creative Commons Attribution 3.0 License.} \end{figure} Randomness may sometimes seem to dominate science, because once we separate noise from signal, it is what appears as noise that troubles science and becomes the subject of current investigation. Once science tames noise, it becomes signal, and science moves on to the next apparent source of noise. Science has provided us with all sorts of models, such as statistical and mechanistic models. Mechanistic models are very important because they suggest underlying mechanisms as candidates for what may actually be occurring in the phenomena. A mechanistic/algorithmic approach provides a causal model from first principles. Unlike a statistical description, it is prescriptive, as it provides guidance for manipulating causes. For example, in Fig.~\ref{sciencealgs}, a mechanistic model of the solar system is shown. In a mechanistic model such as this one, if we were forced to place the sun and the moon at a certain distance from each other in order to reproduce solar and lunar eclipses, then the model would suggest a distance that can be verified or falsified by other means, which in turn may feed back into our model to improve it. The traditional approach is to consider a model that starts from certain values and, after some time, provides some clue as to where the system may be at a future time, with some precision and in less time than that taken by the actual unfolding of the phenomenon being studied. If a model does not comply with these requirements, then it is traditionally seen as less valuable; so, in general, the more precise and the faster it can be, the better. But more importantly, a model is better if it can explain more with less. \begin{figure}[ht!]
\centering \scalebox{.55}{\includegraphics{naturepatterns.png}} \caption{\label{naturepatterns}Patterns all over the surface of the planet, made by living organisms other than humans and far removed from chance, such as fairy rings, hexagonal basalt columns and termite mounds. Left: The Giant's Causeway in Northern Ireland. Hexagonal basalt columns formed around 60 million years ago by sea water cooling incandescent lava relatively quickly, as a result of the interplay of a small number of forces. Right: A `fairy ring' on the ground, formed in a natural manner by the differential growth of the mushrooms' mycelium, i.e. an extremely simple mechanical process. This image is licensed under a Creative Commons Attribution 3.0 License.} \end{figure} The movement of the planets initially appeared to us as ungoverned, and some of them even seemed to be moving randomly in the sky, and were thus called \textit{wanderers} by the Greeks. It was not until Galileo, Copernicus, Kepler and later Newton that planets were found to follow rules. But there were some movements of the inner planets, such as Mercury, as well as other celestial phenomena, which could not be fully explained by Newtonian mechanics, and thus General Relativity was needed---another mechanistic model to explain what was again apparently random but was found not to be so under the new model. It is true that moving from one theory to another reveals that we cannot ever be sure that a given theory definitively explains the data, nor can we say that these rules or differential equations are followed by nature or physics; yet it is also undeniable that each iteration of a model is more encompassing, able to explain not only previously puzzling phenomena but many others as well. In this sense, the scope of what new theories explain, generally speaking, exceeds their own length. In other words, we gain much more explanatory and even predictive power with an increasingly compact set of theories over time.
This is not only a clear indication that our world and universe are far removed from randomness, with a handful of models able to explain almost every aspect of physical experience, if not human experience; it is also strong evidence of the algorithmic nature of the world. Indeed, that the universe can be explained with theories and models of apparently ever-decreasing length means that the data they describe are formally of low algorithmic randomness, and that finding ever smaller models among all possible computable models is algorithmically very likely, which in turn, by the coding theorem, means that the world is algorithmically simple. \subsection{Pervasiveness of Turing universality} While it was known that Turing universality could be achieved with extreme simplicity (in terms of resources, i.e. states+symbols), it was not known how pervasive it was. Once a computer program is Turing universal, it can simulate any other computer program. Our work is the first to shed some light on quantifying how pervasive Turing universality may be. It turns out that the set of Turing-universal machines is of density one, meaning that basically almost all computer programs are capable of universal computation, given the appropriate initial conditions~\cite{juergenzenil2}. We have also shown how computer programs previously thought to be essentially different can actually simulate other computer programs of unbounded complexity. In this sense, we have shown a complete collapse of the so-called Wolfram classes in a fundamental way~\cite{juergenzenil2}. But we have also shown that there is an asymmetry in the number of possible compilers accessible during a given exhaustive compiler-exploration time for performing these simulations, which reintroduces a hierarchy and re-establishes the classes from an epistemological perspective. The fundamental collapse, however, strengthens Wolfram's so-called Principle of Computational Equivalence.
By way of the aforementioned asymmetry, we also defined a topological measure based upon simulation networks, establishing the degree of difficulty involved in finding the right compilers to make a program simulate other programs. All this constitutes a completely novel Bayesian approach to Turing universality and a powerful framework within which to reprogram a system's behaviour. These ideas are also consistent with my earlier views on programmability, based on Turing's `imitation game', for testing black-box capabilities from input perturbation (interrogation) and output assessment. We propose to use the tools of algorithmic complexity to study such mappings as an ultimate behavioural evaluation framework. We have conceived these tools, based on the work of Chaitin and the other founders of AIT, as alternatives to compression, tools which allow us to find underlying models. The methods represent a way to navigate the space of all possible (small) computer programs in order to match computable models with data observations from the real world. My tools based on Chaitin's work are helping us understand some aspects of dynamical systems useful in the context of living systems, and moreover enabling us to steer and manipulate their behaviour. Another question of relevance concerns computability versus uncomputability. Here again, even assuming that there are noncomputable processes, our best tools consist in simulating everything in a digital computer, representing systems and their respective models with computable algorithms, even in those cases where we are dealing with apparently continuous differential equations, which, owing to the CPU's finite precision, in the end become discrete and symbolic approximations of these apparently, and conveniently, continuous representations (see Fig.~\ref{sciencealgs}). We have seen how we have contributed to matching data with computable models~\cite{maininfo}, but how successful and relevant are computable models?
The distinction between models as equations and models as computer programs represents a gap between the discrete and the continuous that does not seem necessarily fundamental (see Fig.~\ref{sciencealgs}). While mechanistic models can also be wrong, they have the advantage of conforming to the laws of physics on the basis of which they are constructed, depending on the nature of the model, whether classical or not. While a model may look analogue because it moves in apparently continuous space and time, measurements are of finite accuracy. Most models in science today are not only mechanistic but algorithmic, and they run on electronic digital computers. They are thus computable models of the physical phenomena they are meant to describe. In other words, modern science has become a computational science, both in practice and, in many cases, in principle, and its power has not diminished. On the contrary, it has never been more powerful, in both its explanatory and its predictive capacity. \begin{figure}[ht!] \centering \scalebox{.6}{\includegraphics{solarsystem.png}} \caption{\label{sciencealgs}A mechanistic model is a model that can be run and simulated by mechanical means by setting it in an initial condition, following a set of instructions, and determining an output, e.g. the state of the model; in this case, for example, the placement of the planets. Image from Wikipedia under a Creative Commons license.} \end{figure} \section{Complexity in Nature} \subsection{Algorithmic probability in biology} I aim to understand the behaviour of artificial and biological systems from a computational perspective, and to develop methods to reprogram biological cells at a molecular/genetic level as we do computers.
I do this by combining the power of the conceptual and methodological advances represented by the formal theories of information (Shannon, Kolmogorov, Chaitin, Solomonoff, Levin, et al.) and computation (as advanced by Turing, Church, Kleene, et al.), and by exploiting the versatility of principles from areas such as dynamical systems and complex networks, to develop a novel, elegant, and very sophisticated framework and field, namely algorithmic information dynamics, to help deal with questions of causation and produce model-driven, generative and mechanistic explanations for natural and artificial processes. Biology is mostly discrete, without much room for apparent continuity. Not only is the genetic code discrete, finite, and composed of 4 letters; genes also produce an integral number of viable proteins that either dock or do not, and can be simulated closely by modelling them as being either on or off. Proteins are very finite objects. They converge in great numbers, and when they don't, they most likely never dock or fulfil their purpose. Chemical concentrations that appear continuous are only a useful representation. Living systems, in a manner of speaking, are assembled out of chemical LEGO pieces. Even signals among cells are proteins that can be counted and followed one by one. Living systems themselves depend on the very binary choice that enables cells to differentiate between themselves and everything else by way of their perfectly limited membranes. Communication among cells occurs through the physical interchange of proteins---the same kind of LEGO pieces are involved here as everywhere else. And just as with everything else, in most regular cases the piece either docks and fulfils its purpose or it does not. Even the Brownian motion that plays a role in, e.g., protein folding is produced by finite and well-defined particles or molecules interacting with other molecules.
We have explored how non-trivial deterministic systems can display unbounded changes in complexity. We have proved that undecidability is essential for legitimate definitions of open-ended evolution (OEE), and that this is also deeply connected to open and closed systems, with self-referential state-dependent systems displaying a greater degree of versatility. For OEE to be possible (under reasonable definitions), open and undecidable dynamical systems are necessary. But even more important are the results that we have reported regarding the substitution of mutations occurring according to the so-called Universal Distribution (which predicts the way in which rule-based systems behave according to the computational power of their source) for uniformly distributed mutations (see paper J49). It turns out that this simple and sound substitution (because neither the world nor physical laws are truly random) can significantly speed up evolutionary convergence, may justify the need for genetic memory and modularity (e.g. gene organization), and may even explain phenomena such as diversity explosions and mass extinctions that have no extrinsic (e.g. climatic) explanation. Equipped with the measures that we have developed, my team and I aim to tackle a fundamental challenge in science: that of developing tools for causal discovery~\cite{nmi,maininfo}, in order to unveil the design principles and generating mechanisms of arbitrary dynamical systems; in particular, the development of an interventional calculus, based upon the theory of computability and algorithmic probability, that identifies the key markers in genetic networks with which to steer and manipulate cell function and cell fate. The range of application of this work is very general, and it aims to generate effective intervention tools to steer the causal content, and thus the fate, of artificial and natural complex systems.
I am engaged in devising strategies to understand, test and reprogram artificial and natural systems as models of computation. I have also devised two measures for testing the `algorithmicity' and programmability of a system. We also introduced a perturbation-based calculus to study the information landscape of complex systems and networks---a calculus capable of unveiling information on the energy landscape---using a measure of algorithmic (re)programmability that connects the theory of dynamical systems with (algorithmic) information theory to reveal the dynamic landscape at the heart of the problem of causality. I use computer programs as models of the world. Determinism in classical mechanics implies that everything can be seen as a computer program, and that the complexity of systems driven purely by the causal elements included in their description is dominated by their evolution in time. This allows us to find clues for moving systems between different functions, thereby effectively reprogramming them~\cite{maininfo}. \subsection{Algorithmic intelligence} The human mind is algorithmically biased to suit specific purposes. Some of these algorithmic mechanisms can replace previously considered biases based on prior experience (e.g. the so-called System 1). It has long been known that random-generation tasks, in which people have to generate random-looking sequences, are linked to some important aspects of mental and cognitive capacities. We have introduced a new approach to the study of the behaviour of animals and humans based on algorithmic information theory. We have shown that humans best outsmart computers at randomness generation at the age of 25~\cite{ploscompbio}. When competing against all possible algorithms, we found that human behaviour is at its most algorithmically random at 25 years of age, thereby introducing a new measure of cognitive complexity possibly related to other biological and cultural phenomena, such as creativity.
Indeed, humans produce more randomness than most computer programs when they are 25 years old. In parallel, we are also working on measures of integrated information with which to profile brain networks. \begin{figure}[ht!] \centering \scalebox{.25}{\includegraphics{rule54v50.png}} \bigskip \scalebox{.25}{\includegraphics{rule82v60.png}} \caption{\label{rule54v50}Examples of simple interacting computer programs (top: ECA rules 54 and 50; bottom: ECA rules 82 and 60), with a simple function determining the interaction, from which data emerges and gets convoluted. Different rules come from different generating sources; in this case, different Elementary Cellular Automata rules. Observers will often face this problem and will not be able to disentangle the data without the right tools. The world is made of this type of interaction, producing apparent complexity and concealing the underlying generating rules, which new methods can help disentangle~\cite{nmi}. The rules do not even have to be complicated in themselves, but when interacting they produce a highly complex pattern that cannot be quantified with traditional tools, and may be irreducible.} \end{figure} In Fig.~\ref{rule54v50}, ECA can be seen interacting with each other and generating structures according to a meta-CA rule. Often these interactions appear random, and to an observer they may be difficult to deconvolve or disentangle, but new methods introduced by my team, based on computability theory and the work of Chaitin and Levin (together with that of Kolmogorov and Solomonoff), help new machine learning algorithms to distinguish the sources by their most likely causal generating mechanisms (in this case, each side is generated by either rules 54 and 50 or rules 82 and 60)~\cite{nmi}.
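A toy version of this deconvolution task can be set up by gluing together the evolutions of two different rules and scoring each cell's history. The sketch below is mine: it uses zlib as a crude stand-in for the algorithmic-probability estimates of the actual method~\cite{nmi}, and rules 255 and 30 rather than the pairs in Fig.~\ref{rule54v50}, to make the separation obvious:

```python
import random
import zlib

def eca_half(rule, width=32, steps=64, seed=0):
    # Evolve one ECA half from a seeded random row (reproducible).
    rng = random.Random(seed)
    row = [rng.randint(0, 1) for _ in range(width)]
    rows = [row]
    for _ in range(steps - 1):
        row = [(rule >> (4 * row[(i - 1) % width]
                         + 2 * row[i]
                         + row[(i + 1) % width])) & 1
               for i in range(width)]
        rows.append(row)
    return rows

def column_scores(rows):
    # Score each column (one cell's history) by its compressed length:
    # columns driven by a complex rule should compress worse.
    n = len(rows[0])
    return [len(zlib.compress(bytes(r[j] for r in rows), 9))
            for j in range(n)]
```

Averaging the column scores over each half cleanly separates the simple source (rule 255, class 1) from the complex one (rule 30, class 3), which is the kind of signal a deconvolution method can exploit to attribute regions of the data to their most likely generating rule.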
Most systems in nature are like black boxes that we cannot open and fully understand, and we are therefore left with approaches not very different from the black-box approach of Alan Turing---his imitation game---for dealing with the challenge and question of machine intelligence. What we conceived was a test for systems in which questions would be inputs and answers would be compressed. One could then discern the variation among the compressed outcomes for answers of varying complexity and assess the system's capabilities. The idea has evolved into a perturbation-based algorithmic calculus able to characterize artificial and biological systems~\cite{maininfo,nmi}. These tools are becoming a new approach to the whole area of machine learning and even of Artificial Intelligence, introducing the inferential power of algorithmic probability to complement the current state of areas such as deep learning, which are weak on the fundamentals needed for understanding human intelligence, such as symbol manipulation, logic, inference, and the understanding of cause and effect. \subsection{An algorithmic universe} \begin{figure}[ht!] \centering \scalebox{.45}{\includegraphics{sciencealgs.png}} \caption{\label{movingtarget}Science moves apparently random data from observation to model/theory. Indeed, scientific models tend to cover previously random data together with other previously unexplained phenomena. Whether there is a model that can encompass all other models is an open question, but the tendency so far has been clear, and it is not an artefact of humans as intelligent or conscious observers. This image is licensed under a Creative Commons Attribution 3.0 License.} \end{figure} Certain parts of the universe seem ordered and structured (see Fig.~\ref{naturepatterns}).
On Earth, for example, life is an example of organization and structure, contrasting with what can be deemed background noise (comparable to what appears on the screen of an old, untuned analogue television) left over from the Big Bang and serving as proof of the state in which the universe found itself after its first moments of existence. What Turing would probably never have guessed is that randomly chosen Turing machines, when run, produce highly structured objects. Could this be merely an interesting analogy, or could it perhaps be an actual indication that the universe is more algorithmic than initially expected? If so, then Turing machines and computer programs are not just products of the human imagination; they are perhaps responsible for the order in the universe. Alan Turing may have had an intuitive answer to this question, as he was also interested in structure formation and helped found another area, in biochemistry, called \textit{morphogenesis}~\cite{turingmorpho}: the study of pattern formation, starting from a simple shape which would first break its symmetries at random. We have made great progress at taming apparent randomness since ancient times, beginning with mathematical logic and rationality---even if we sometimes seem to be regressing to ancient times. Long ago we left behind explanations based on divinity, magic or paranormal phenomena in exchange for tools such as inference and statistics. Approaches such as correlation and regression have been very useful, but they have been overused and abused, and new and better tools for dealing with causality are badly needed.
With ever-increasing predictive power, science moves random observations under the explanation of computable models and merges previously separate computable models into ever more encompassing models, whose program-size may be difficult to quantify but whose explanatory power is increasingly greater, hence indicating a clear pattern whereby the universe looks increasingly ordered and of low Kolmogorov-Chaitin complexity (Fig.~\ref{gut}). Science has been mostly motivated by models; it has been a model-driven practice. A case can be made regarding the size of new models/theories. For example, taken at face value, the description of the equations of General Relativity (GR) may give the impression of being longer than that of Newtonian classical gravity, hence a possible indication (upper bounds of comparable measure) of their underlying algorithmic complexity. However, the GR equations are strictly shorter than the equations specifying the Newtonian version of gravity plus all the corrections required to account for the phenomena that the latter cannot account for by themselves, such as the movement of the inner planets, in particular the rate of precession of the perihelion of Mercury's orbit. In comparison, General Relativity requires few to no corrections. In other words, Newtonian mechanics would require a stream of regular adjustments to match the computable accounts of General Relativity for such phenomena. One may also argue that even GR can only predict the precessional movement of Mercury for a certain time, because of 3-body kinds of phenomena, and this is right; but the fact that we can predict with extremely high accuracy the movements of Mercury for the next 1000 years is different from not being able to account for its movements at all using Newtonian mechanics. So, in some sense, science can be illustrated as the practice of moving natural phenomena from what we as observers perceive as random towards non-random phenomena.
Science moves empirical data from apparent randomness to algorithmic non-randomness (Fig.~\ref{movingtarget}). Each of the phenomena in the non-random quadrant was previously in the random quadrant. One can also see a cascade effect in the right quadrant, with models explaining all or several previously advanced models. So the overwhelming evidence is that the world has a strong algorithmic component, because science can explain, if not predict, most of what happens in the universe to an increasingly high degree of accuracy and comprehensiveness. The fact that these models compress more and more observations of natural phenomena into a smaller number of models is an indication that, indeed, compression is comprehension. \begin{figure}[ht!] \centering \scalebox{.3}{\includegraphics{gut.png}} \caption{\label{gut}This diagram shows in what direction science has successfully been operating in previous centuries, and how this has increasingly been sped up in the last century, relying on a handful of fundamental theories that have unified seemingly disparate areas of science, each time explaining more with less. The existence of science is evidence of the algorithmic nature of the world, and the fact that more modern scientific models encompass more observations strengthens the hypothesis of an algorithmic universe. This image is licensed under a Creative Commons Attribution 3.0 License.} \end{figure} Even in some non-classical interpretations noise may be only apparent, because in fact all deterministic trajectories are explored and we just happen to be in one particular branch of a multiverse. That, of course, does not solve the problem of the source of global randomness at the level of the multiverse itself, nor explain why we experience only one random universe and not all the others.
However, evidence (Fig.~\ref{movingtarget} and Fig.~\ref{gut}) also suggests that what we think is noise is often a signal whose source is unknown or irrelevant to a system of interest. Time has shown again and again that whenever there are two seemingly disparate phenomena, they often turn out to be two sides of a common underlying duality or symmetry; and whenever a physical constant appears as an apparently fine-tuned irrational hyper-parameter of our universe, an algorithmic model is eventually found in which that constant emerges from first principles, as has happened in areas such as optics and electromagnetism. Their presence suggests simpler, more encompassing underlying computable models able to collapse seemingly fundamental constants and connect apparently disparate phenomena. \section{Conclusions} Studying randomness from the computing perspective affords us a framework for studying the nature of the world, one that discerns the way in which patterns in the universe are distributed. In a world of computable processes in which the laws (like programs) do not have a skewed distribution, $AP(s)$ would indicate the probability of a natural phenomenon occurring as the result of running a program. The distribution $AP(s)$ has an interesting particularity: it can start from basically anything and, like a robust distribution, it remains qualitatively unchanged. It is the process that determines the form of the distribution, not the initial distribution of the programs. This is important because one makes no strong initial assumption about the distribution of the initial conditions or of the laws of physics. Computer programs can, from a certain vantage point, be looked at as laws of physics. If one starts with a random initial condition (input) and executes a program chosen at random, there is a very good probability that the final output will be regular, and frequently very well organized.
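This last claim can be illustrated with a toy numerical experiment (our own sketch, not taken from the original paper): we sample random "programs" built from a handful of primitive arithmetic operations, run each on the input 0, and tally the outputs. The instruction set and program lengths are arbitrary choices; the point is only that the resulting output distribution is heavily skewed towards a few simple values, qualitatively mimicking the algorithmic probability distribution $AP(s)$.

```python
import random
from collections import Counter

# Toy instruction set: each "program" is a random sequence of simple
# operations applied to an integer register starting at 0.
OPS = [
    lambda x: x + 1,
    lambda x: x - 1,
    lambda x: x * 2,
    lambda x: x // 2,
    lambda x: x * x,
    lambda x: 0,
]

def run_random_program(rng, max_len=8):
    """Execute a random program (a sequence of OPS) on the input 0."""
    x = 0
    for _ in range(rng.randint(1, max_len)):
        x = rng.choice(OPS)(x)
    return x

rng = random.Random(0)
counts = Counter(run_random_program(rng) for _ in range(100_000))

# The output distribution is far from uniform: a handful of "simple"
# values (0, 1, 2, ...) absorb most of the probability mass.
print(counts.most_common(5))
```

Changing the instruction set changes the details but not the qualitative picture, which is the robustness property attributed to $AP(s)$ above.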
By the same token, if one were to shoot out particles at random, the probability of their forming groups in the way they do would be so small---in the absence of laws of physics---that nothing whatsoever would happen in the universe. Perhaps Alan Turing has not only helped us understand the world of computing machines and computer programs, founding an entire scientific area, but has also taught us a great deal about the universe in which we live and how it came to be the way it is. Algorithmic complexity and algorithmic probability may explain the unreasonable effectiveness of mathematics in the natural sciences, showing it to rest on very solid foundations.
\subsection{\rm \bf Introduction} The Belle collaboration has recently published results for $R_D$ and $R_{D^*}$ with a semileptonic tag \cite{Belle2019, Abdesselam:2019dgh}, and their result is consistent with the Standard Model (SM) expectation within $1.2\sigma$. Consequently, the experimental world average has moved towards the SM. However, the tension between the experimental world average and the SM expectation is still more than $3\sigma$, and thus, it is interesting to re-examine the status of the various New Physics (NP) explanations in view of the new world-average. In Table~\ref{tab-exp} below, we collect all the experimental results related to this anomaly. \begin{table}[h!] \begin{center} \begin{tabular}{|p{0.7cm}|p{1.75cm}c|p{3.85cm}c|} \hline & \multicolumn{2}{c|}{SM prediction} & \multicolumn{2}{c|}{Measurement} \\ \hline \multirow{ 2}{*}{$R_D$} & $0.300 \pm 0.008$ &\cite{Aoki:2016frl} & $0.407 \pm 0.046$ (pre-Moriond) & \cite{Amhis:2016xyh} \\ & $0.299 \pm 0.011$ & \cite{Na:2015kha} & $0.334 \pm 0.031$ & \cite{Amhis:2016xyh,Belle2019,Abdesselam:2019dgh} \\ \hline \multirow{ 2}{*}{$R_{D^*}$} & \multirow{ 2}{*}{$0.258 \pm 0.005$} & \multirow{ 2}{*}{\cite{Bigi:2017jbd,Jaiswal:2017rve,Bernlochner:2017jka,Amhis:2016xyh}} & $0.306 \pm 0.015$ (pre-Moriond) & \cite{Amhis:2016xyh} \\ & & & $0.297 \pm 0.015$ & \cite{Amhis:2016xyh,Belle2019} \\ \hline $P_\tau^{D^*}$ & $-0.47 \pm 0.04$ &\cite{Bigi:2017jbd} & $-0.38^{+0.55}_{-0.53}$ & \cite{Hirose:2016wfn,Hirose:2017dxl} \\ \hline $F_L^{D^*}$ & $0.46 \pm 0.04$ & & $0.60 \pm 0.087$ & \cite{Abdesselam:2019wbt} \\ \hline $R_{{J/\psi}}$ & $0.290$ & & $0.71 \pm 0.25$ & \cite{Aaij:2017tyk} \\ \hline \end{tabular} \caption{\sf Observables, their SM predictions and the experimentally measured values. The pre-Moriond experimental averages for $R_D$ and $R_{D^*}$ are based on \cite{Lees:2012xj, Lees:2013uzd, Huschle:2015rga, Aaij:2015yra, Sato:2016svk, Hirose:2016wfn, Hirose:2017dxl, Aaij:2017deq, Aaij:2017uff}. 
\label{tab-exp}} \end{center} \end{table} The most general effective Lagrangian for the decay $b \to c \, \tau^- \, \bar{\nu}_\tau$ involving mass dimension-6 operators and only left-chiral neutrinos can be written as \begin{align} & {\cal L}^{b \to c \, \tau \, \nu}_{\rm eff} = -\frac{4 G_F V_{cb}}{\sqrt{2}} \left( \rm C_V^{LL} \, [\bar{c} \, \gamma^\mu P_L \, b] [\bar \tau \, \gamma_\mu \, P_L \, \nu] \right. \nn \\ + & \left. \rm C_V^{RL} \, [\bar{c} \, \gamma^\mu P_R \, b] [\bar \tau \, \gamma_\mu \, P_L \, \nu] + \rm C_S^{LL} [\bar{c} \, P_L \, b] [\bar \tau \, P_L \, \nu] \right. \label{eff-lag} \\ + & \left. \rm C_S^{RL} [\bar{c} \, P_R \, b] [\bar \tau \, P_L \, \nu] + \rm C_T^{LL} [\bar{c} \, \sigma_{\mu \nu} P_L \, b] [\bar \tau \, \sigma^{\mu \nu} \, P_L \, \nu] + \rm h.c. \right) \nn \end{align} If one uses power-counting rules arising from linearly-realised $\rm SU(2) \times U(1)$ gauge invariance, it turns out that the Wilson Coefficient (WC) $\rm C_V^{RL}$, with the possibility of lepton non-universality, is only generated at the mass dimension-8 level \cite{Azatov:2018knx}. It is therefore expected to be suppressed compared to the other WCs as long as the scale of NP is not too close to the Higgs vacuum expectation value, and so we will ignore it in this analysis. If one also assumes the existence of light right-chiral neutrino(s), as was first done in \cite{Becirevic:2016yqi} to solve the $R_D$ anomaly, five additional operators can be constructed by the replacement $P_L \to P_R$ in the leptonic currents of Eq.~\ref{eff-lag}. In particular, a pure right-chiral vector current, namely, \bal \hspace*{-2.5mm}{\cal L}^{b \to c \, \tau \, \nu}_{\rm eff} \supset -\frac{4 G_F V_{cb}}{\sqrt{2}} \rm C_V^{RR} \, [\bar{c} \, \gamma^\mu P_R \, b] [\bar \tau \, \gamma_\mu \, P_R \, \nu] + \rm h.c. \, \eal was considered by several authors \cite{Asadi:2018wea, Greljo:2018ogz, Azatov:2018kzb}, and we will include it in our analysis. 
As the experimental situation for $R_D$ and $R_{D^*}$ is far from clear, we do not try to perform a fit to the WCs; for an early global fit, see \cite{Freytsis:2015qca}. Instead, we show how $R_D$ and $R_{D^*}$ vary with respect to the WCs, and overlay the current $1\sigma$ experimental world-average and the corresponding currently allowed values of the WCs. In Fig.~\ref{fig:cvll-rdrds}, we show this for the two WCs $\rm C_V^{LL}$ and $\rm C_V^{RR}$, assuming them to be real. \begin{figure}[!h!] \centering \begin{tabular}{cc} \hspace*{-5mm}\includegraphics[scale=0.32]{rdrds_cvll.pdf} & \hspace*{-2.5mm} \includegraphics[scale=0.32]{rdrds_cvrr.pdf} \end{tabular} \caption{\sf Variations of $R_D$ and $R_{D^*}$ against Re[$\rm C_V^{LL}$] and Re[$\rm C_V^{RR}$]. The green horizontal regions correspond to the experimental $1\sigma$ average from Table~\ref{tab-exp} and the grey vertical regions correspond to the ranges of the WCs that produce $R_D$ and $R_{D^*}$ values within their $1\sigma$ experimental world average. Note that $\rm C_V^{LL}(\rm SM) = 1$, $\rm C_V^{RR}(\rm SM) = 0$. \label{fig:cvll-rdrds}} \end{figure} It can be seen from the left panel that $\rm C_V^{LL}=\rm C_V^{LL}(\rm SM) =1$ is now at the edge of the $1\sigma$ allowed region for $R_D$. This is due to the fact that the new experimental world-average for $R_D$ is now consistent with the SM expectation at the $\sim 1\sigma$ level, so the anomaly is mostly driven by $R_{D^*}$. In order to be consistent with both $R_D$ and $R_{D^*}$ simultaneously at the $1\sigma$ level, $\rm C_V^{LL}$ has to be in the range $\rm C_V^{LL}:[1.045, 1.107]$. So there has not been a qualitative change in the situation after the new Belle measurement. Similarly, the allowed range for $\rm C_V^{RR}$ now is $| \rm C_V^{RR} |:[0.305, 0.480]$. 
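These ranges can be cross-checked numerically. For a pure left-handed current both observables rescale as $|\rm C_V^{LL}|^2$, while a right-handed current with a right-handed neutrino adds incoherently, $R = R_{\rm SM}(1+|\rm C_V^{RR}|^2)$. The sketch below (our own illustration; SM uncertainties are neglected, so the endpoints differ slightly from the quoted ones) intersects the $1\sigma$ ranges implied by $R_D$ and $R_{D^*}$ from Table~\ref{tab-exp}:

```python
import math

# 1-sigma experimental world averages and SM central values from the table
# (SM uncertainties are neglected in this rough cross-check).
RD_exp, RD_err, RD_sm = 0.334, 0.031, 0.299
RDs_exp, RDs_err, RDs_sm = 0.297, 0.015, 0.258

def overlap(f):
    """WC range from requiring both R_D and R_D* within 1 sigma."""
    lo_d, hi_d = f(RD_exp - RD_err, RD_sm), f(RD_exp + RD_err, RD_sm)
    lo_s, hi_s = f(RDs_exp - RDs_err, RDs_sm), f(RDs_exp + RDs_err, RDs_sm)
    return max(lo_d, lo_s), min(hi_d, hi_s)

# Pure left-handed current: R = R_SM |C_V^LL|^2 (interferes with the SM).
lo, hi = overlap(lambda r, sm: math.sqrt(r / sm))
# Pure right-handed current with a right-handed neutrino: no interference,
# so R = R_SM (1 + |C_V^RR|^2).
lo_rr, hi_rr = overlap(lambda r, sm: math.sqrt(r / sm - 1))

print(f"C_V^LL   in [{lo:.3f}, {hi:.3f}]")      # close to the quoted [1.045, 1.107]
print(f"|C_V^RR| in [{lo_rr:.3f}, {hi_rr:.3f}]")  # close to the quoted [0.305, 0.480]
```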
The lower edge of this range, $| \rm C_V^{RR} | = 0.305$, is now consistent with the $2\sigma$ upper bound $| \rm C_V^{RR} | = 0.32$ from the LHC $p \, p \to \tau \, \nu$ searches \cite{Greljo:2018tzh}\footnote{Note, however, that for $| \rm C_V^{RR} | = 0.305$, the value of $R_{D^{(*)}}$ is at the lower edge of the experimental 1$\sigma$ allowed region. Moreover, the sensitivity of the current high-$p_T$ measurements is not enough to constrain the left-handed scenario $\rm C_V^{LL} \approx 1.05$. Thus, the right-handed scenario is statistically worse than the $\rm C_V^{LL}$ solution.} (the bound from LHC $p \, p \to \tau \, \nu + X$ searches was also studied in \cite{Altmannshofer:2017poe, Iguro:2018fni}). Note that both the WCs $\rm C_V^{LL}$ and $\rm C_V^{RR}$ can be generated by a single $U_1(3,1,2/3)$ Leptoquark mediator \cite{Alonso:2015sja,Barbieri:2015yvd,DiLuzio:2017vat,Azatov:2018kzb,Calibbi:2017qbu}. Variations of $R_D$ and $R_{D^*}$ with respect to $\rm C_T^{LL}$ and $\rm C_S^{LL}=-8\rm C_T^{LL}$ are shown in Fig.~\ref{fig:ctll-csctll-rdrds}. It can be seen from the left panel of Fig.~\ref{fig:ctll-csctll-rdrds} that a simultaneous solution of $R_D$ and $R_{D^*}$ is possible for $\rm C_T^{LL}$ in the range $\rm C_T^{LL}:[-0.021,-0.013]$. We remind the readers that the corresponding value of $\rm C_T^{LL}$ before the recent Belle result was $\rm C_T^{LL} \sim 0.35$ \cite{Bardhan:2016uhr,Azatov:2018knx}, which was strongly disfavoured both by the LHC $p \, p \to \tau \, \nu$ searches \cite{Aaboud:2018vgh,Sirunyan:2018lbg,Greljo:2018tzh} and by the measurement of $F_L^{D^*}$ \cite{mitp-talk}. The new allowed range for $\rm C_T^{LL}$, on the other hand, is completely safe. Thus, there has been a qualitative change after the new Belle measurement. \begin{figure}[!h!] 
\centering \begin{tabular}{cc} \hspace*{-4mm}\includegraphics[scale=0.32]{rdrds_ctll.pdf} & \hspace*{-2.5mm} \includegraphics[scale=0.32]{rdrds_csllctll_Re.pdf} \end{tabular} \caption{\sf Variations of $R_D$ and $R_{D^*}$ against Re[$\rm C_T^{LL}$] and $\rm Re[\rm C_S^{LL}]=-8 \rm Re[\rm C_T^{LL}]$. \label{fig:ctll-csctll-rdrds}\\[0.4mm]} \end{figure} The specific relation $\rm C_S^{LL} \approx -8\rm C_T^{LL}$ (at the $m_b$ scale) shown on the right panel is interesting because it is generated by a single $S_1(\bar{3},1,1/3)$ Leptoquark mediator \cite{Bauer:2015knc}. The allowed range of the WC in this case is [0.113, 0.170] which, as can be seen from Fig.~\ref{fig:BcTauNu-branching}, produces ${\mathcal B}(B_c^- \to \tau^- \bar{\nu}_\tau)$ less than its SM value, and thus is completely safe. \begin{figure}[!h!] \centering \begin{tabular}{c} \hspace*{-5mm} \includegraphics[scale=0.45]{Bc2taunu_ReIm.pdf} \end{tabular} \caption{\sf Variation of ${\mathcal B}(B_c^- \to \tau^- \bar{\nu}_\tau)$ with respect to Re[$\rm C_S^{LL}$] and Im[$\rm C_S^{LL}$]. \label{fig:BcTauNu-branching}\\[0.4mm]} \end{figure} Another single mediator solution that has been discussed in the literature is the so-called $R_2(3,2,7/6)$ Leptoquark \cite{Dorsner:2013tla,Becirevic:2018afm}, which, contrary to the $S_1(\bar{3},1,1/3)$ Leptoquark mediator, generates $\rm C_S^{LL} \approx + 8\rm C_T^{LL}$ (note the sign difference) at the $m_b$ scale\footnote{Note that the relations $\rm C_S^{LL} = \pm 8 \rm C_T^{LL}$ are approximately valid only at the $m_b$ scale. They are obtained by QCD renormalization group flow from the leptoquark matching scale ($\approx {\rm few \, TeV}$) where the actual relations are $\rm C_S^{LL} = \pm 4 \rm C_T^{LL}$.}. In the left panel of Fig.~\ref{fig:csctll-im-rdrds}, we show this case assuming real values of the WCs. 
It can be seen that the combination $\rm Re[\rm C_S^{LL}] = + 8 Re[\rm C_T^{LL}]$ can at most produce $R_D$ and $R_{D^*}$ at the lower edge of their $1\sigma$ experimental world-average if a simultaneous solution is desired (for $\rm Re[\rm C_S^{LL}] = + 8 Re[\rm C_T^{LL}] \approx -0.12$). A much better description of the data is possible if imaginary WCs are assumed, as shown in the right panel of Fig.~\ref{fig:csctll-im-rdrds}. The case of imaginary WCs in this context was first discussed in \cite{Sakaki:2014sea}, and later also in \cite{Becirevic:2018afm,Blanke:2018yud, Iguro:2018vqb, Biswas:2018jun,Huang:2018nnq}. \begin{figure}[!h!] \centering \begin{tabular}{cc} \hspace*{-5mm} \includegraphics[scale=0.32]{rdrds_csllctll_Re_plus.pdf} & \hspace*{-2.5mm} \includegraphics[scale=0.32]{rdrds_csllctll_Im.pdf} \end{tabular} \caption{\sf Variations of $R_D$ and $R_{D^*}$ against $\rm Re[\rm C_S^{LL}]= + 8 Re[\rm C_T^{LL}]$ and $\rm Im[\rm C_S^{LL}]= + 8 Im[\rm C_T^{LL}]$. \label{fig:csctll-im-rdrds}\\[0.4mm]} \end{figure} In this case, one needs $\rm Im[\rm C_S^{LL}]= + 8 Im[\rm C_T^{LL}]$ in the range [0.480, 0.820], which gives ${\mathcal B}(B_c^- \to \tau^- \bar{\nu}_\tau) > 10\%$, see Fig.~\ref{fig:BcTauNu-branching}. However, the authors of Ref.~\cite{Akeroyd:2017mhr} claimed an upper bound of $10\%$ on this branching ratio, arising from the LEP data taken on the $Z$ peak. Thus, the $\rm Im[\rm C_S^{LL}]= + 8 Im[\rm C_T^{LL}]$ solution seems to be in slight tension if the $10\%$ upper bound is taken at face value. While some authors \cite{Blanke:2018yud} expressed concerns about the validity of this bound, not much effort was made to estimate how much this bound can be relaxed. We will discuss this in detail in the next section. As the operator $\rm C_S^{RL}$ alone cannot explain $R_D$ and $R_{D^*}$ simultaneously, we do not discuss it further. 
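That $\rm Im[\rm C_S^{LL}]$ in this range pushes ${\mathcal B}(B_c^- \to \tau^- \bar{\nu}_\tau)$ above $10\%$ can be checked with a rough estimate (our own sketch): the pseudoscalar operator is chirally enhanced in the leptonic decay by $m_{B_c}^2/[m_\tau(m_b+m_c)]$. The quark-mass inputs and the approximate SM branching ratio $\sim 2.3\%$ are our own choices, and the relative sign between the SM and scalar terms is irrelevant for a purely imaginary WC.

```python
# Rough estimate of B(Bc -> tau nu) in the presence of a scalar WC.
# Masses in GeV; m_b, m_c and B_SM ~ 2.3% are approximate inputs
# (assumptions of this sketch, not values from the paper).
m_Bc, m_tau, m_b, m_c = 6.275, 1.777, 4.18, 0.92
B_SM = 0.023

def br_bc_taunu(c_s):
    """Chirally enhanced scalar contribution relative to the SM amplitude."""
    r = m_Bc**2 / (m_tau * (m_b + m_c))
    return B_SM * abs(1 + r * c_s)**2

print(f"{br_bc_taunu(0.48j):.3f}")  # already above the purported 10% bound
```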
Before concluding this section, we would like to make a couple of comments on the impact of $F_L^{D^*}$ and $P_\tau^{D^*}$ on the various scenarios. In all the scenarios explaining the $R_D$ and $R_{D^*}$ anomalies, the variation of $P_\tau^{D^*}$ is less than $\sim 2.5\%$ from the SM prediction. Unfortunately, this is also true of $F_L^{D^*}$, the only exception being the $\rm Im[\rm C_S^{LL}]= 8 Im[\rm C_T^{LL}]$ solution, in which case the variation can be $5 -10\%$ below the SM. Thus, distinguishing the various explanations by either $P_\tau^{D^*}$ or $F_L^{D^*}$ looks difficult at the moment. \vspace*{2mm} {\bf LEP bound on ${\mathcal B}(B_c^- \to \tau^- \bar{\nu}_\tau)$: } \vspace*{3mm} As mentioned in the previous section, the authors of \cite{Akeroyd:2017mhr} used the LEP data \cite{Acciarri:1996bv} collected at the $Z$ peak to put an upper bound on the branching fraction of $B_c^- \to \tau^- \bar{\nu}_\tau$. As this constraint has potentially interesting consequences for the $R_D$ and $R_{D^*}$ anomalies, in this section we will revisit it in detail. In Ref.~\cite{Acciarri:1996bv}, the L3 collaboration obtained an upper bound on the number of $B^- \to \tau^- \bar{\nu}_\tau$ events, $\mathcal{N}(B^- \to \tau^- \bar{\nu}_\tau) < 3.8$. 
Based on this, they provided an upper bound \begin{equation} \mathcal{B}(B^- \to \tau \bar{\nu}_\tau) < 5.7 \times 10^{-4} \, \text{at} \, 90\% \, \text{C.L.} \label{eq:lep-1} \end{equation} As $ \mathcal{N}(B^- \to \tau^- \bar{\nu}_\tau) \propto f_{b \to B^-} \times \mathcal{B}(B^- \to \tau \bar{\nu}_\tau) $, where $f_{b \to B^-}$ is the inclusive probability that a $b$ quark hadronizes into a $B_c^-$ or a $B_u^-$ meson, and Ref.~\cite{Acciarri:1996bv} uses the value $f_{b \to B^-} = 0.382 \pm 0.025$, the bound in Eq.~\ref{eq:lep-1} can be translated into the following bound \begin{equation} f_{b \to B^-} \times \mathcal{B}(B^- \to \tau \bar{\nu}_\tau) < 2.035 \times 10^{-4} \end{equation} Separating the total number of events into those coming from $B_u^-$ and $B_c^-$ decays, we get \bal & f_{b \to B_u^-} \, \mathcal{B}(B_u^- \to \tau^- \bar{\nu}_\tau) + f_{b \to B_c^-} \, \mathcal{B}(B_c^- \to \tau^- \bar{\nu}_\tau) \nn \\ & \hspace{5.2 cm} < 2.035 \times 10^{-4} \eal This gives, \bal &\mathcal{B}(B_c^- \to \tau^- \bar{\nu}_\tau) < \left(\frac{ 2.035 \times 10^{-4} }{f_{b \to B_u^-} \, \mathcal{B}(B_u^- \to \tau^- \bar{\nu}_\tau)} - 1 \right) \times \nn \\ & \hspace{4.2 cm} \frac{f_{b \to B_u^-}}{f_{b \to B_c^-}} \, \mathcal{B}(B_u^- \to \tau^- \bar{\nu}_\tau) \label{eqn:BrC} \eal The quantities $\mathcal{B}(B_u^- \to \tau^- \bar{\nu}_\tau)$ and $f_{b \to B_u^-}$ are known experimentally: \bal \mathcal{B}(B_u^- \to \tau^- \bar{\nu}_\tau) &= (1.06 \pm 0.20) \times 10^{-4} \, \text{\cite{Amhis:2016xyh,Tanabashi:2018oca}} \label{exp:BuTauNu}\\ f_{b \to B_u^-} &= 0.412 \pm 0.008 \, \text{\cite{Amhis:2016xyh,Tanabashi:2018oca}} \text{(LEP)} \label{LEP:fu}\\ f_{b \to B_u^-} &= 0.340 \pm 0.021 \, \text{\cite{Amhis:2016xyh, Tanabashi:2018oca}} \text{(Tevatron)} \label{TEV:fu} \eal Note that the hadronization fractions in $Z$ decays do not necessarily need to be identical to those in $p \, \bar{p}$ collisions because of the different momentum distributions of the b-quark 
in these processes; in $p \, \bar{p}$ collisions, the $b$ quarks have momenta close to $m_b$, rather than $\sim m_Z/2$ as in $Z$ decays. In fact, the CDF and LHCb collaborations have reported evidence for a strong $p_T$ dependence of the $\Lambda_b^0$ fraction \cite{Aaij:2011jp,Aaltonen:2008zd,Aaltonen:2008eu,Aaij:2014jyk}. The LHCb and the ATLAS collaborations have also studied the $p_T$ dependence of $f_{b \to B_s}/f_{b \to B_d}$ \cite{Aaij:2013qqa,Aad:2015cda}, but the results are not conclusive yet. Therefore, we use the measurement of $f_{b \to B_u^-}$ from LEP only and plot the upper bound on $\mathcal{B}(B_c^- \to \tau^- \bar{\nu}_\tau)$ as a function of $f_{b \to B_c^-}/f_{b \to B_u^-}$ in Fig.~\ref{bctaunu-fcfu}. The upper bound $\mathcal{B}(B_c^- \to \tau^- \bar{\nu}_\tau) = 10\%$ corresponds to $f_{b \to B_c^-}/f_{b \to B_u^-} \approx 4 \times 10^{-3}$. \begin{figure}[!h!] \centering \begin{tabular}{c} \includegraphics[scale=0.5]{BRcvsFcFu.pdf} \end{tabular} \caption{\sf $\mathcal{B}(B_c^- \to \tau^- \bar{\nu}_\tau)|_{\rm max}$ as a function of $f_{b \to B_c^-}/f_{b \to B_u^-}$. The width of the plot corresponds to the uncertainties in Eqs.~\eqref{exp:BuTauNu} and \eqref{LEP:fu}. \label{bctaunu-fcfu}\\[1mm]} \end{figure} In order to find a robust upper bound on $\mathcal{B}(B_c^- \to \tau^- \bar{\nu}_\tau)$ we need to know the value of $f_{b \to B_c^-}/f_{b \to B_u^-}$, or at least a lower bound on it. Moreover, we need to know $f_{b \to B_c^-}/f_{b \to B_u^-}$ at LEP, and with the exact kinematical cuts used in \cite{Acciarri:1996bv}. 
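Eq.~\eqref{eqn:BrC} is straightforward to evaluate numerically. A minimal sketch (our own, using central values only; the experimental uncertainties in Eqs.~\eqref{exp:BuTauNu} and \eqref{LEP:fu} are ignored):

```python
def bc_taunu_bound(fc_over_fu, f_u=0.412, br_u=1.06e-4, lep=2.035e-4):
    """Upper bound on B(Bc -> tau nu) from Eq. (eqn:BrC), central values only.

    fc_over_fu: hadronization-fraction ratio f_{b->Bc} / f_{b->Bu}.
    """
    return (lep / (f_u * br_u) - 1.0) * br_u / fc_over_fu

# f_c/f_u around 4e-3 reproduces the often-quoted 10% bound ...
print(f"{bc_taunu_bound(4.0e-3):.3f}")
# ... while smaller ratios, such as the Pythia8 estimate f_c/f_u = 1.07e-3
# discussed below, relax the bound considerably.
print(f"{bc_taunu_bound(1.07e-3):.3f}")
```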
Ref.~\cite{Akeroyd:2017mhr} tries to find the ratio $f_{\bar{b} \to B_c^+}/f_{\bar{b} \to B_u^+}$ from measurements of $R_{\pi^+/K^+}$ and $R_{\pi^+/\mu^+}$ defined as \bal R_{\pi^+/K^+} &= \frac{f_{\bar{b} \to B_c^+}}{f_{\bar{b} \to B_u^+}}\, \frac{\mathcal{B}\left(B_c^+ \to J/\psi \, \pi^+ \right)}{\mathcal{B}\left(B_u^+ \to J/\psi \, K^+ \right)} \\ R_{\pi^+/\mu^+} &= \frac{\mathcal{B}\left(B_c^+ \to J/\psi \, \pi^+ \right)}{\mathcal{B}\left(B_c^+ \to J/\psi \, \mu^+ \, \nu\right)} \, . \eal It then follows that \begin{eqnarray} \frac{f_{\bar{b} \to B_c^+}}{f_{\bar{b} \to B_u^+}}\, \frac{\mathcal{B}\left(B_c^+ \to J/\psi \, \mu^+ \, \nu_\mu \right)}{\mathcal{B}\left(B_u^+ \to J/\psi \, K^+ \right)} = \frac{R_{\pi^+/K^+}}{R_{\pi^+/\mu^+}} \\ \Rightarrow \, \frac{f_{\bar{b} \to B_c^+}}{f_{\bar{b} \to B_u^+}} = \frac{\mathcal{B}\left(B_u^+ \to J/\psi \, K^+\right)}{\mathcal{B}\left(B_c^+ \to J/\psi \, \mu^+ \, \nu_\mu\right)} \frac{R_{\pi^+/K^+}}{R_{\pi^+/\mu^+}} \, . \end{eqnarray} Using \bal R_{\pi^+/\mu^+} & = 0.0469 \pm 0.0054 \, \text{\cite{Aaij:2014jxa}} \\ R_{\pi^+/K^+}& =^{^{\hspace{-4mm} \rm LHCb}} (0.683 \pm 0.02) \times 10^{-2} \text{\cite{Aaij:2014ija}} \label{fcfu-lhcb-0}\\ & =^{^{\hspace{-4mm} \rm CMS}} (0.48 \pm 0.08) \times 10^{-2} \text{\cite{Khachatryan:2014nfa}} \label{fcfu-cms-0} \\ \mathcal{B}\left(B_u^- \to J/\psi \, K^-\right) & = (9.99 \pm 0.36) \times 10^{-4} \text{\cite{Amhis:2016xyh}} \eal we get, \bal \frac{f_{\bar{b} \to B_c^+}}{f_{\bar{b} \to B_u^+}} &= \frac{(1.22 - 1.75) \times 10^{-4}}{\mathcal{B}\left(B_c^+ \to J/\psi \, \mu^+ \, \nu_\mu\right)} (\text{using \cite{Aaij:2014ija}}) \label{fcfu-lhcb} \\ \frac{f_{\bar{b} \to B_c^+}}{f_{\bar{b} \to B_u^+}} & = \frac{(0.74 - 1.40) \times 10^{-4}}{\mathcal{B}\left(B_c^+ \to J/\psi \, \mu^+ \, \nu_\mu\right)} (\text{using \cite{Khachatryan:2014nfa}}) \label{fcfu-cms} \eal As the LHCb and CMS measurements of $R_{\pi^+/K^+}$ are about $2.5\sigma$ away from each other, we consider them separately and do 
not use their average. Moreover, while the LHCb Collaboration uses the cuts $0 < p_T (B_c^+), \, p_T (B_u^+) < 20 \ {\rm GeV}$ and $2.0 < \eta < 4.5$ in their analysis (at $\sqrt{s} = 8 \ {\rm TeV}$), the CMS Collaboration uses $p_T (B_c^+), \, p_T (B_u^+) > 15 \ {\rm GeV}$ and $|\eta| < 1.6$ (at $\sqrt{s} = 7 \ {\rm TeV}$). Thus, the discrepancy could be due to the dependence of $f_{\bar{b} \to B_c^+}/f_{\bar{b} \to B_u^+}$ on kinematics. Plugging Eqs.~\ref{fcfu-lhcb} and \ref{fcfu-cms} into Eq.~\ref{eqn:BrC}, one can obtain a bound on $\mathcal{B}(B_c^- \to \tau^- \bar{\nu}_\tau)$ directly as a function of ${\mathcal{B}\left(B_c^+ \to J/\psi \, \mu^+ \, \nu_\mu\right)}$. This is shown in the right panel of Fig.~\ref{fcfu-Jpsi}. \begin{figure}[!h!] \centering \begin{tabular}{cc} \hspace*{-4mm} \includegraphics[scale=0.35]{fcbyfb_LHCb_CMS_1.pdf} & \hspace*{-2mm} \includegraphics[scale=0.35]{fcbyfb_LHCb_CMS_2.pdf} \end{tabular} \caption{\sf Variations of $f_{\bar{b} \to B_c^+}/f_{\bar{b} \to B_u^+}$ and $\mathcal{B}(B_c^- \to \tau^- \bar{\nu}_\tau)|_{\rm max}$ with respect to ${\mathcal{B}\left(B_c^+ \to J/\psi \, \mu^+ \, \nu_\mu\right)}$. \label{fcfu-Jpsi}} \end{figure} Using ${\mathcal{B}\left(B_c^+ \to J/\psi \, \mu^+ \, \nu_\mu\right)} \leq 2.5 \times 10^{-2}$, as used in \cite{Akeroyd:2017mhr}, we get $f_{\bar{b} \to B_c^+}/f_{\bar{b} \to B_u^+} \gtrsim 3 \times 10^{-3}$ and $\mathcal{B}(B_c^- \to \tau^- \bar{\nu}_\tau) \lesssim 14\%$ from the CMS data, the latter being similar to, but slightly weaker than, the bound of \cite{Akeroyd:2017mhr}. We would like to make two comments at this stage: \begin{itemize} \item The bound on $\mathcal{B}(B_c^- \to \tau^- \bar{\nu}_\tau)$ depends linearly on ${\mathcal{B}\left(B_c^+ \to J/\psi \, \mu^+ \, \nu_\mu\right)}$. As ${\mathcal{B}\left(B_c^+ \to J/\psi \, \mu^+ \, \nu_\mu\right)}$ has not yet been measured, a model-independent bound is not possible. 
Moreover, even the SM calculation, and in particular its uncertainty, is not fully under control at the moment. Thus, a precise bound on $\mathcal{B}(B_c^- \to \tau^- \bar{\nu}_\tau)$ cannot be obtained currently. \item Even in the presence of better information on ${\mathcal{B}\left(B_c^+ \to J/\psi \, \mu^+ \, \nu_\mu\right)}$, Eqs.~\eqref{fcfu-lhcb} and \eqref{fcfu-cms} provide values of $f_{\bar{b} \to B_c^+}/f_{\bar{b} \to B_u^+}$ at the LHC and for the specific kinematic regions used in \cite{Aaij:2014ija} and \cite{Khachatryan:2014nfa}. As discussed before, the value of $f_{\bar{b} \to B_c^+}/f_{\bar{b} \to B_u^+}$ at LEP may be different from the above because of 1) the larger average $p_T$ of the b-mesons produced at LEP, and 2) the fact that $b \bar{b}$ pairs produced at LEP are in a colour-singlet state, contrary to most of the $b \bar{b}$ pairs produced at the LHC, which are in a colour-octet state. \end{itemize} In view of the above, we try to estimate the ratio $f_{\bar{b} \to B_c^+}/f_{\bar{b} \to B_u^+}$ at LEP using the event generator Pythia8 \cite{Sjostrand:2006za,Sjostrand:2014zea}, whose hadronization model is tuned to provide a good description of the available experimental data. The results are shown in Table~\ref{tab-pythia}. In each of the cases presented in Table~\ref{tab-pythia}, we have generated 1 million events in order to reduce the statistical uncertainty. In Case-I, we have used the same $p_T$ and $\eta$ cuts as in \cite{Khachatryan:2014nfa}, and we get a value $f_{b \to B_c^-}/f_{b \to B_u^-} = 1.06 \times 10^{-3}$, which is much smaller than the value $f_{b \to B_c^-}/f_{b \to B_u^-} = 3 \times 10^{-3}$ that was used to obtain the bound $\mathcal{B}(B_c^- \to \tau^- \bar{\nu}_\tau) \leq 10\%$. 
Note that, from Eq.~\ref{fcfu-cms}, $f_{b \to B_c^-}/f_{b \to B_u^-} = 1.06 \times 10^{-3}$ would correspond to ${\mathcal{B}\left(B_c^+ \to J/\psi \, \mu^+ \, \nu_\mu\right)} \approx 6 \times 10^{-2}$ (see the left panel of Fig.~\ref{fcfu-Jpsi}), which is much larger than the values considered in \cite{Akeroyd:2017mhr}. \begin{table}[h!] \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline & & $f_{b \to B_u^-}$ & $f_{b \to B_c^-}$ & $\dfrac{f_{b \to B_c^-}}{f_{b \to B_u^-}}$ \\ \hline & LHC 7 TeV & & & \\ I & $p_T (B_c^+, B_u^+) > 15 \ {\rm GeV}$ & 0.255 & $ 2.7 \times 10^{-4}$ & $1.06 \times 10^{-3}$ \\ & $ |\eta| < 1.6$ & & & \\ \hline & LHC 7 TeV & & & \\ II & $p_T (B_c^+, B_u^+) < 15 \ {\rm GeV}$ & 0.301 & $ 5.7 \times 10^{-4}$ & $1.89 \times 10^{-3}$ \\ & $ |\eta| < 1.6$ & & & \\ \hline & LHC 7 TeV & & & \\ & $q \bar{q} \to Z \to b \bar{b}$ only & & & \\ III & $p_T (B_c^+, B_u^+) > 15 \ {\rm GeV}$ & 0.374 & $ 4.1 \times 10^{-4}$ & $1.09 \times 10^{-3}$ \\ & $ |\eta| < 1.6$ & & & \\ \hline & LHC 7 TeV & & & \\ & $g g \to b \bar{b}$ and $q \bar{q} \to g \to b \bar{b}$ & & & \\ IV & $p_T (B_c^+, B_u^+) > 15 \ {\rm GeV}$ & 0.255 & $ 2.5 \times 10^{-4}$ & $0.98 \times 10^{-3}$ \\ & $ |\eta| < 1.6$ & & & \\ \hline V & LEP (at the Z peak) & 0.42 & $ 4.5 \times 10^{-4}$ & $1.07 \times 10^{-3}$ \\ \hline \end{tabular} \caption{\sf Hadronization fractions calculated from Pythia8. \label{tab-pythia}} \end{center} \end{table} In Case-II of Table~\ref{tab-pythia}, we changed the $p_T$ cut to $p_T < 15~{\rm GeV}$ in order to check the $p_T$ dependence of the hadronization fractions. In this case, we get ${f_{b \to B_c^-}}/{f_{b \to B_u^-}} = 1.89 \times 10^{-3}$, which is considerably larger than that in Case-I. 
This is consistent with the general findings in \cite{Aaij:2011jp,Aaltonen:2008zd,Aaltonen:2008eu,Aaij:2014jyk, Aaij:2013qqa,Aad:2015cda} and confirms that the measurement of ${f_{b \to B_c^-}}/{f_{b \to B_u^-}}$ from LHCb (Eqs.~\ref{fcfu-lhcb-0} and \ref{fcfu-lhcb}), which uses $p_T (B_c^+), \, p_T (B_u^+) < 20 \ {\rm GeV}$, is indeed not expected to be the same as that measured by CMS (Eqs.~\ref{fcfu-cms-0} and \ref{fcfu-cms}), which used $p_T (B_c^+), \, p_T (B_u^+) > 15 \ {\rm GeV}$. In Cases III and IV of Table~\ref{tab-pythia}, we considered $b \, \bar{b}$ production only through a $Z$ boson (the produced $b \bar{b}$ pair is in a colour-singlet state) and only through QCD interactions (a colour-octet state), respectively. We observed only a $\sim 10\%$ variation in ${f_{b \to B_c^-}}/{f_{b \to B_u^-}}$ between these two cases. Finally, at the Z peak, we obtain $f_{b \to B_u^-} = 0.42$, $f_{\bar{b} \to B_s} = 0.094$ (not shown in the table), and ${f_{b \to B_c^-}}/{f_{b \to B_u^-}} =1.07 \times 10^{-3}$, the first two numbers being consistent with their experimental measurements \cite{Amhis:2016xyh,Tanabashi:2018oca}. Using the number ${f_{b \to B_c^-}}/{f_{b \to B_u^-}} =1.07 \times 10^{-3}$, from Fig.~\ref{bctaunu-fcfu}, we get \bal \mathcal{B}(B_c^- \to \tau^- \bar{\nu}_\tau) \leq 39\% \, . \eal We warn the readers that this bound should only be taken as an estimate because, after all, Pythia only uses a hadronization model tuned to describe a large amount of available experimental data well (as we saw, it indeed reproduced the correct values for $f_{b \to B_u^-}$ and $f_{\bar{b} \to B_s}$), and the value of $f_{b \to B_c^-}$ obtained from Pythia is based neither on any first-principles calculation nor on direct experimental data. \vspace*{3mm} To summarise, in this short note, we have shown that \begin{itemize} \item the recent Belle results on $R_D$ and $R_{D^*}$ have interesting implications for the various possible EFT explanations of the data. 
The most important one is that the pure tensor explanation is now completely allowed both by the measurement of $F_L^{D^*}$ and by the high-$p_T$ $p \, p \to \tau \, \nu$ searches by ATLAS and CMS. \item the solution in terms of a pure right-chiral vector current (involving right-chiral neutrinos) has now moved into the $2\sigma$ allowed range of the LHC $p \, p \to \tau \, \nu$ searches. \item the upper bound on the branching fraction of $B_c^- \to \tau^- \bar{\nu}_\tau$ from the LEP data is much weaker than the $10\%$ bound used in the recent literature. Our estimate of this bound, based on the hadronization model implemented in Pythia8, is approximately $40\%$. This bound, while being independently important, may also have interesting implications for the various scalar-pseudoscalar explanations of the $R_D$ and $R_{D^*}$ data. \end{itemize} {\bf Acknowledgement } \\ The research of DB was supported in part by the Israel Science Foundation (grant no. 780/17) and by the Kreitman Foundation Post-Doctoral Fellowship. DG would like to acknowledge support through Ramanujan Fellowships of the Department of Science and Technology, Government of India.
\section{Introduction} \label{Introduction} The increasing tension between the local measurements of the current expansion rate \cite{friedman,Riess:2019cxk} and the value derived from the temperature anisotropies in the Cosmic Microwave Background (CMB) \cite{Aghanim:2018eyx} assuming the $\Lambda$-Cold Dark Matter ($\Lambda$CDM) model has motivated the investigation of generalized models of the dark sector, physics beyond the standard model, and alternative gravity theories \cite{Verde:2019ivm, Graef:2018fzu, Benetti:2017juy, DiValentino:2019qzk, Bernal:2016gxb, Guo:2018ans, Vattis:2019efj, Capozziello:2020nyq, Benetti:2020hxp, Benetti:2019gmo, Pan:2019gop,Poulin:2018cxd,Knox:2019rjx,Yang:2021hxg,Alcaniz:2019kah}. In the case of dynamical dark energy models, an important aspect worth considering is how to properly treat the dark energy (DE) perturbations, which in principle can be done in two different ways. The first one is to explicitly include DE perturbations in the perturbed equations and assume a sound speed for the DE. If a luminal sound speed is assumed, DE perturbations can always be neglected. An alternative procedure is to decompose the dynamical dark energy into a pressureless, clustering component and a vacuum-type term with an Equation-of-State (EoS) parameter, $w = -1$ \cite{Zimdahl:2005ir, Wang:2013qy, Borges:2013bya}. In this case, it is possible to show \cite{micol} that the latter does not cluster in the limit of sub-horizon scales, while the former can be reinterpreted as dark matter. In this framework, a flux of energy between the dark components generally occurs, whose observational signature would indicate a dynamical nature of the original DE field. Such an energy flux violates adiabaticity and characterizes the so-called interacting DE models (iDE). 
In a previous work \cite{micol}, it was shown that a joint analysis of the CMB (Planck 2015) data and SNe observations constrains the interaction parameter $\alpha$ of a particular class of iDE models to be slightly positive, corroborating results of similar studies \cite{wands2,Aurich:2017lck}. Moreover, a strong correlation was found between $\alpha$, the Hubble constant ($H_0$), and the normalization of the matter power spectrum on scales of 8$h^{-1}$ Mpc ($\sigma_8$), i.e., while a positive interaction parameter favours higher values of $H_0$, a negative $\alpha$ favours lower values of $\sigma_8$. This correlation is particularly important in the study of iDE models as a possible solution of the current $H_0$ and $\sigma_8$ tensions. The aim of this paper is twofold: first, to perform an updated analysis of \cite{micol} with the Planck (2018) likelihoods \cite{Aghanim:2018eyx}, in which the CMB polarization is taken into account. Second, to explore the influence of a non-zero spatial curvature on the determination of the other model parameters, motivated by the recent study of \cite{DiValentino:2019qzk}. The present analysis also pays special attention to the role played by the $H_0$ prior from local measurements on the parameter estimates. In particular, we find that such a prior has a significant influence on the determination of the iDE model parameters, given the $\alpha$-$H_0$ correlation reported in \cite{micol}. \section{Parametrising the interaction} \label{Sec:Theory} \subsection{Background} In a FLRW universe filled with a pressureless component interacting with a vacuum-like term, the Friedmann and conservation equations assume the form \begin{eqnarray} \label{Friedmann} 3H^2 = \rho_m + \Lambda,\\ \label{conservation} \dot{\rho}_m + 3H\rho_m = \Gamma \rho_m = -\dot{\Lambda}, \end{eqnarray} where $\Gamma$ is defined as the rate of matter creation (not necessarily constant).
We will use a parametrisation for the vacuum term evolution given by \begin{equation} \label{Lambda} \Lambda = \sigma H^{-2\alpha}, \end{equation} where $\alpha~(> -1)$ is the interaction parameter, and $\sigma = 3 (1 - \Omega_{m0}) H_0^{2(\alpha+1)}$. From (\ref{Friedmann}) and (\ref{conservation}) we can show that \begin{equation} \label{Gamma} \Gamma = -\alpha \sigma H^{-(2\alpha +1)}. \end{equation} By including a conserved radiation component, we obtain the Hubble function \begin{equation}\label{eq:E} E(z) = {H(z)}/{H_0} = \sqrt{\left[ (1-\Omega_{m0}) + \Omega_{m0} (1+z)^{3(1+\alpha)} \right]^{\frac{1}{(1+\alpha)}} + \Omega_{R0} (1+z)^4}. \end{equation} A negative $\alpha$ corresponds to creation of matter, while a positive one means that dark matter is annihilated. As one may check, for $\alpha = 0$ the standard $\Lambda$CDM model is recovered. Eq.~(\ref{eq:E}) is the Hubble function of a generalized Chaplygin gas (GCG) \cite{cg1,Dev:2002qa,Alcaniz:2002yt,cg2,cg3}, which behaves like cold matter at early times and as a cosmological constant in the asymptotic future. Actually, it is equivalent to a non-adiabatic Chaplygin gas \cite{Borges:2013bya,non-adiabatic2,wands2,bb}, because the vacuum component does not cluster and, consequently, there is no pressure term in the perturbation equations. For this reason, the power spectrum does not suffer from oscillations and instabilities present in the adiabatic version. Note as well that conserved baryons are included in the term proportional to $(1+z)^3$ in the binomial expansion of the square brackets. A small spatial curvature can also be included by adding a term $\Omega_{k} (1+z)^2$ into the square root. \subsection{Primordial perturbations} The Boltzmann equations for conserved baryons and radiation are the same as in the standard model. 
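As a quick numerical cross-check of the Hubble function above, Eq.~(\ref{eq:E}) can be evaluated directly. The following sketch (with illustrative parameter values, not the fitted ones of this paper) also verifies that $\alpha = 0$ recovers the flat $\Lambda$CDM expansion rate:

```python
import numpy as np

def E_of_z(z, Omega_m0, alpha, Omega_r0=0.0):
    """Dimensionless Hubble function E(z) = H(z)/H0 for the interacting
    Lambda(t)CDM model, Eq. (eq:E); alpha = 0 recovers flat LambdaCDM."""
    gcg = (1.0 - Omega_m0) + Omega_m0 * (1.0 + z) ** (3.0 * (1.0 + alpha))
    return np.sqrt(gcg ** (1.0 / (1.0 + alpha)) + Omega_r0 * (1.0 + z) ** 4)

# Illustrative values only: Omega_m0 = 0.3, no radiation.
E_lcdm = E_of_z(1.0, 0.3, 0.0)      # reduces to sqrt(0.7 + 0.3*(1+z)^3)
E_ide = E_of_z(1.0, 0.3, 0.2)       # generalized-Chaplygin-gas behaviour
```

By construction $E(0) = 1$ for any $\alpha$, and at high redshift the $\alpha \neq 0$ model interpolates between matter-like and cosmological-constant behaviour, as stated in the text.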
For the dark sector, assuming that there is no momentum transfer in the dark matter rest frame, the Poisson and dark matter perturbation equations in the conformal Newtonian gauge are \cite{micol,Salzano:2021zxk} \begin{equation}\label{thetad2} \theta'_{dm}+\mathcal{H}\theta_{dm}-k^2\Phi=0, \end{equation} \begin{equation}\label{deltad2} \delta'_{dm}-3\Phi'+\theta_{dm}=-\frac{aQ}{\rho_{dm}} \left[ \delta_{dm} - \frac{1}{k^2} \left( k^2 \Phi + \frac{Q'}{Q}\theta_{dm} \right) \right], \end{equation} \begin{equation} -k^2 \Phi=\frac{a^2}{2} (\rho_{dm} \delta_{dm} + \rho_b \delta_b) - \left( \frac{a^3 Q}{2} - \frac{3a^2}{2}\mathcal{H}\rho_m \right)\frac{\theta_{dm}}{k^2}, \end{equation} where $\mathcal{H} = aH$, $Q = \Gamma \rho_m = - \dot{\Lambda}$, a prime indicates derivative w.r.t. the conformal time, $\theta_{dm}$ is the dark matter velocity potential, and $\Phi$ is the gravitational potential. In the sub-horizon limit $k \gg \mathcal{H}$, these equations assume the form \begin{equation}\label{ol} \theta'_{dm}+\mathcal{H}\theta_{dm}-k^2\Phi=0, \end{equation} \begin{equation}\label{ol1} \delta'_{dm}+\theta_{dm}=-\frac{aQ}{\rho_{dm}} \delta_{dm}, \end{equation} \begin{equation}\label{ol2} -k^2 \Phi=\frac{a^2}{2} (\rho_{dm} \delta_{dm} + \rho_b \delta_b). \end{equation} For the vacuum term we have $\delta \Lambda = -aQ\theta_{dm}/k^2$ and $\delta Q = Q'\theta_{dm}/k^2$, that are negligible for sub-horizon modes. The vacuum term velocity remains undetermined. \subsubsection{Primordial perturbations: non-flat case} Let us show now that the inclusion of spatial curvature in the dynamical equations at late time is only relevant at the background level. 
For this, consider the components of Einstein's equations in the longitudinal gauge, \begin{equation}\label{omj} \psi-\phi=a^2\pi, \end{equation} \begin{equation}\label{del} 3\mathcal{H}(\psi'+\mathcal{H}\phi)+k^2\psi-3\kappa\phi=-\frac{a^2}{2}\delta\rho, \end{equation} \begin{equation}\label{sees} \psi''+2\mathcal{H}\psi'+\mathcal{H}\phi'+(2\mathcal{H'}+\mathcal{H}^2-\kappa)\phi=\frac{a^2}{2}(\delta p-\frac{2}{3}k^2\pi), \end{equation} \begin{equation}\label{kal} \psi'+\mathcal{H}\phi=-\frac{a^2}{2}(\rho+p)v, \end{equation} where the spatial curvature can assume the values $\kappa=0, +1$ or $-1$. In the absence of anisotropic stress, $\pi=0$, the gravitational potential and the curvature perturbation are equal, $\phi=\psi$. The perturbation in the fluid pressure is \begin{equation}\label{1} \delta p=c_{s}^2\delta\rho+(c_{s}^2-c_{a}^2)\rho'v, \end{equation} where $c_{s}^2$ and $c_{a}^2=p'/\rho'$ are, respectively, the entropic and adiabatic sound speeds, and $v$ is the matter velocity potential. With the help of the continuity equation, \begin{equation}\label{2} \rho'=-3\mathcal{H}(\rho+p), \end{equation} we are able to rewrite equation $(\ref{kal})$ as \begin{equation}\label{3} \frac{a^2}{2}\rho'v=3\mathcal{H}(\phi'+\mathcal{H}\phi). \end{equation} Substituting $(\ref{1})$ into $(\ref{sees})$ and eliminating $\delta\rho$ and $(a^2/2)\rho'v$ through equations $(\ref{del})$ and $(\ref{3})$, it is easy to find the second-order differential equation for the gravitational potential, \begin{equation}\label{tam} \phi''+3\mathcal{H}(1+c_{a}^2)\phi'+[2\mathcal{H}'+(1+3c_{a}^2)\mathcal{H}^2+c_{s}^2(k^2-3\kappa)-\kappa]\phi=0. \end{equation} Combining $(\ref{kal})$ with $(\ref{del})$ and using the continuity equation $(\ref{2})$ we obtain the Poisson equation \begin{equation}\label{gf} -2(k^2-3\kappa)\phi=a^2\rho\delta^c, \end{equation} where $\delta^{c}=(\delta\rho+\rho'v)/\rho$ is a comoving gauge-invariant density contrast.
For any interacting model with \begin{equation} \rho_m'+3\mathcal H\rho_m=aQ, \end{equation} the adiabatic sound speed that appears in equation $(\ref{tam})$ is directly related to the energy transfer function as \begin{equation} c_a^2=-\frac{aQ}{3\mathcal H\rho_m}. \end{equation} As in the flat case, it is easy to show that the vacuum energy component does not cluster on sub-horizon scales, and therefore there is no entropic sound speed, i.e. $c_s^2\propto \delta\rho_{\Lambda}^c=0$ in the perturbation equation $(\ref{tam})$. Then, substituting ($\ref{gf}$) into $(\ref{tam})$ we can find a second-order differential equation for the evolution of the density contrast, \begin{equation}\label{dif} \delta_m^{c''}+\bigg(\mathcal H+\frac{aQ}{\rho_m}\bigg)\delta_m^{c'}+\bigg[\bigg(\frac{aQ}{\rho_m}\bigg)'+\frac{aQ}{\rho_m}\mathcal H+\mathcal H'-\mathcal H^2-\kappa\bigg]\delta_m^c=0. \end{equation} From the Friedmann and Raychaudhuri equations \begin{equation} \mathcal H^2+\kappa=\frac{a^2}{3}(\rho_m+\rho_{\Lambda}), \end{equation} \begin{equation} \mathcal H'+\mathcal H^2+\kappa=\frac{a^2}{6}(\rho_m+4\rho_{\Lambda}), \end{equation} we derive the following relation, \begin{equation}\label{sa} \mathcal H'-\mathcal H^2-\kappa=-\frac{1}{2}a^2\rho_m. \end{equation} Note that the spatial curvature parameter $\kappa$ that appears in the last term of equation ($\ref{dif}$), due to the contribution of the non-flat terms of Einstein's equations ($\ref{del}$) and ($\ref{sees}$), is eliminated in view of ($\ref{sa}$). Hence we obtain an equation for $\delta_m^c$ in the final form \begin{equation} \delta_m^{c''}+\bigg(\mathcal H+\frac{aQ}{\rho_m}\bigg)\delta_m^{c'}+\bigg[\bigg(\frac{aQ}{\rho_m}\bigg)'+\frac{aQ}{\rho_m}\mathcal H-\frac{1}{2}a^2\rho_m\bigg]\delta_m^c=0. \end{equation} This same differential equation for the total matter can be derived by combining equations ($\ref{ol}$)-($\ref{ol2}$), which assume a universe with zero spatial curvature.
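As a minimal numerical consistency check of the growth equation above (a sketch, not part of the authors' pipeline), one can integrate it in the non-interacting ($Q=0$), flat, matter-dominated limit, where $a\propto\eta^2$, $\mathcal H = 2/\eta$ and $a^2\rho_m = 3\mathcal H^2$, so the growing mode should scale as $\delta_m^c \propto \eta^2 \propto a$:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Growth equation in the Q = 0, flat, matter-dominated limit (conformal
# time eta): delta'' + H*delta' - (1/2) a^2 rho_m * delta = 0.
# Einstein-de Sitter background: H = 2/eta, a^2 rho_m = 3 H^2 = 12/eta^2,
# so the exact solutions are delta ~ eta^2 (growing) and eta^-3 (decaying).

def rhs(eta, y):
    delta, ddelta = y
    Hc = 2.0 / eta               # conformal Hubble rate
    a2rho = 3.0 * Hc ** 2        # a^2 rho_m from the Friedmann equation
    return [ddelta, -Hc * ddelta + 0.5 * a2rho * delta]

# Initial conditions selecting the pure growing mode: delta = eta^2.
sol = solve_ivp(rhs, (1.0, 10.0), [1.0, 2.0], rtol=1e-10, atol=1e-12)
delta_final = sol.y[0, -1]       # should be close to eta^2 = 100
```

Recovering $\delta_m^c(\eta) = \eta^2$ to high accuracy confirms that the final equation reduces to the standard growth equation when the interaction is switched off.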
Therefore, we see that the evolution of $\delta_m$ at sub-horizon scales is affected by spatial curvature only through its background solutions. \section{Analysis and Results} \label{Sec:Analysis} For our analysis, we updated the CMB data set used in the previous work\footnote{In Ref.~\cite{micol} we used the second release of Planck data \cite{Aghanim:2015xee} ``TT+lowP" (2015), namely the high-$\ell$ Planck temperature data (in the range $30< \ell <2508$) from the 100-, 143-, and 217-GHz half-mission TT cross-spectra, and the low-P data given by the joint TT, EE, BB and TE likelihood (in the range $2< \ell <29$).}, combining the Plik ``TT,TE,EE+lowE" CMB Planck (2018) likelihood (the combination of temperature power spectra and the TE and EE cross-correlations over the range $\ell \in [30, 2508]$, the low-$\ell$ temperature Commander likelihood, and the low-$\ell$ SimAll EE likelihood) \cite{Aghanim:2019ame}, its lensing reconstruction power spectrum\footnote{As shown by the Planck Collaboration, lensing data are needed both to resolve the tension of the CMB data with a flat universe prediction, and to reduce the tension of the CMB with data from Redshift Space Distortions and Weak Lensing. Let us stress that the Planck ``TT,TE,EE+lowE + lensing" data set combination is considered by the Planck Collaboration as the most robust of the current release \cite{Aghanim:2018eyx}.}~\cite{Aghanim:2019ame,Aghanim:2018oex} and SNe data from the Joint Light-curve sample~\cite{Betoule:2014frx}. The latter is constructed from the Supernova Legacy Survey (SNLS) and the Sloan Digital Sky Survey (SDSS), consisting of 740 data points covering the redshift range $0.01< z <1.3$. This sample allows for light-curve recalibration with the model under consideration, which is an important issue when testing alternative cosmologies~\cite{Taddei:2016iku,micol}.
Although SNe data have lower statistical power with respect to Planck, they are useful for fixing the background cosmology at low redshifts in models involving dark energy evolution and modified gravity. We include a prior on $\Omega_{b0}h^2$ in order to take into account the observations of D/H abundance \cite{Cooke:2017cwo}, and we refer to the data set Planck(2018)+lensing+JLA+$\Omega_{b0}h^2$ prior as the ``base data set". Also, we consider the Hubble constant of the SH$0$ES collaboration, $H_0 = 74.03 \pm 1.42$ km/s/Mpc~\cite{Riess:2019cxk}, which is in tension at 4.4$\sigma$ with CMB estimates within the minimal cosmological model, in order to discuss the changes in the parameter constraints due to the assumption of this prior. \begin{table}[] \centering \caption{{ $68\%$ confidence limits for the model parameters. We call ``base data set" the combination Planck(2018)+lensing+JLA+$\Omega_{b0}h^2$ prior. The quantity $\Delta \chi^2_{best} = \chi^2_{\Lambda CDM} - \chi^2_{model}$ refers to the best fit of the model (a negative value means a better $\chi^2$ for the reference model, $\Lambda$CDM).} \label{tab:flat}} \begin{tabular}{|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{Parameter}& \multicolumn{2}{c}{$\Lambda$CDM}& \multicolumn{2}{|c|}{$\Lambda$(t)CDM}\\ \hline { }& {base dataset }& {base dataset + SH$0$ES}& {base dataset} & {base dataset + SH$0$ES}\\ \hline $100\,\Omega_b h^2$ & $2.223 \pm 0.012$ & $2.235 \pm 0.012$ & $2.221 \pm 0.012$ & $2.221 \pm 0.013$ \\ $\Omega_{cdm} h^2$ & $0.1206 \pm 0.0012$ & $0.1190 \pm 0.0011$ & $0.1156 \pm 0.0099$ & $0.0890 \pm 0.0093$ \\ $\alpha$ & $-$ & $-$ & $0.03 \pm 0.06$ & $0.20 \pm 0.07$ \\ $H_0$ & $ 67.04 \pm 0.50 $ & $ 67.78 \pm 0.48 $ & $ 67.50 \pm 1.22 $ & $ 70.73 \pm 1.02 $ \\ \hline \hline $\Delta \chi^2_{\rm best}$ & $-$ & $-$ & $0.8 $ & $ 9.3$ \\ \hline \end{tabular} \end{table} \begin{table}[] \centering \caption{{ $68\%$ confidence limits for the model parameters, using the ``base dataset".
The quantity $\Delta \chi^2_{best} = \chi^2_{\Lambda CDM} - \chi^2_{model}$ refers to the best fit of the model (a negative value means a better $\chi^2$ for the reference model, $\Lambda$CDM).} \label{tab:curve}} \begin{tabular}{|c|c|c|} \hline {Parameter}& {$\Lambda$CDM + $\Omega_k$}& {$\Lambda$(t)CDM + $\Omega_k$}\\ \hline $100\,\Omega_b h^2$ & $2.220 \pm 0.012$ & $2.225 \pm 0.013$ \\ $\Omega_{cdm} h^2$ & $0.1208 \pm 0.0011$ & $0.1123 \pm 0.0177$ \\ $\Omega_{k}$ & $0.0008 \pm 0.0008$ & $-0.004 \pm 0.007$ \\ $\alpha$ & $-$ & $0.06 \pm 0.11$ \\ $H_0$ & $ 67.29 \pm 0.60 $ & $ 66.38 \pm 1.94 $ \\ \hline \hline $\Delta \chi^2_{\rm best}$ & $ - $ & $ -0.4 $ \\ \hline \end{tabular} \end{table} We modify the numerical Cosmic Linear Anisotropy Solving System (CLASS) code~\cite{Blas:2011rf} according to the theory discussed in the previous sections, and we use the Monte Python~\cite{Audren:2012wb} code to perform Monte Carlo Markov Chain (MCMC) analyses. We perform the analysis letting the usual cosmological parameters free, namely, the physical baryon density, $\omega_b=\Omega_{b}h^2$, the physical cold dark matter density, $\omega_{cdm}=\Omega_{cdm}h^2$, the optical depth, $\tau_{reio}$, the primordial scalar amplitude, $\mathcal A_s$, the primordial spectral index, $n_s$, and the Hubble constant, $H_0$, in addition to the interaction parameter, $\alpha$. \begin{figure}[t] \centerline{\includegraphics[scale=0.4]{LtCDM_2018.pdf} \hspace{.4in}} \caption{Comparison between flat and curved $\Lambda$(t)CDM models, using as ``base dataset" the Planck 2018 likelihood combined with the SNe JLA sample and the $\Omega_{b0}h^2$ prior.} \label{fig:triangle_LtCDM} \end{figure} We present our results in Table \ref{tab:flat} (flat universe) and Table \ref{tab:curve} (non-zero curvature). Within the standard model framework, we first note that the base data set fully supports the flat hypothesis.
At the same time, when a Gaussian prior on $H_0$ centered on the SH$0$ES value is considered, we note a shift in the constrained value of the cold dark matter density, although the two values remain within $1\sigma$ agreement. This behavior is not observed when we analyse the $\Lambda$(t)CDM model, where the $H_0$ prior has a significant influence both on the shift of the $\omega_{cdm}$ peak to lower values (removing such a prior, it is fully compatible with that predicted by the standard model) and on the $\alpha$ constraint. In the latter case, using the SH$0$ES prior, the standard model is excluded at $3\sigma$. These results are shown in Fig. \ref{fig:triangle_LtCDM}, where we show the $\Lambda$(t)CDM model with (red line) and without (green line) the SH$0$ES prior, and without the assumption of a flat universe (light blue line). The analysis considering a curved space in the context of the $\Lambda$(t)CDM model shows an anti-correlated behaviour between the curvature density and the $\alpha$ parameter. A flat and non-interacting universe is still compatible at $1\sigma$, even if the data show a preference for slightly negative curvature values and a slightly positive interaction parameter. Notably, it seems that a spatially curved universe relaxes the degeneracy between $\alpha$ and $H_0$, and also between $H_0$ and $\sigma_8$. Finally, in order to comment on our results in light of the previous ones \cite{micol}, let us now compare the results obtained for both the $\Lambda$(t)CDM and standard models using both the 2015 and 2018 Planck likelihoods, combined with JLA + $\Omega_{b0}h^2$ prior + SH0ES prior. We recall that the analysis using Planck 2015 (dashed black curve) was presented in \cite{micol}, where we used the most robust combination at the time, namely ``TT + lowP" \cite{Aghanim:2015xee}, while for the new analysis of this work we have chosen the combination currently considered more reliable, that is, ``TT,TE,EE + lowE" \cite{Aghanim:2019ame}.
The main difference between the two Planck likelihoods essentially lies in: (i) the use at high-$\ell$ of polarization modes, and (ii) a different treatment of EE polarization at low multipoles \cite{Aghanim:2019ame}, which implies a stronger constraint on the optical depth parameter (for a detailed discussion we refer the reader to Ref. \cite{Aghanim:2018eyx}). Due to the correlation between cosmological parameters, this also determines a preference, in the standard cosmological model context, for lower values of $A_s$ and of the late-time fluctuation amplitude parameter, $\sigma_8$\footnote{The $\sigma_8$ parameter roughly corresponds to the primordial scalar amplitude, $A_s$, converted into the present-day fluctuation amplitude.}, than those estimated by the Planck (2015) likelihood. This can be seen in Fig. \ref{fig:triangle_LtCDMvsLCDM}, comparing the standard model constrained with Planck data from the 2015 release (dashed gray curve) and the 2018 release (solid blue curve). At the same time, we also note a significantly different constraint on the cold dark matter density when comparing the $\Lambda$(t)CDM (red solid curve) and $\Lambda$CDM models using the 2018 CMB data. On the other hand, the Hubble parameter is compatible with that predicted by $\Lambda$CDM only at the $2.6\sigma$ level, relaxing the tension with the value measured by SH$0$ES to 1.9$\sigma$. We also emphasize that $\alpha=0$ (the value of the interaction parameter required to recover the standard model) is excluded by the analysis using the 2018 CMB data, with a clear preference for a positive $\alpha$, that is, an energy flux from dark matter to dark energy. As noted in the previous work \cite{micol}, there exists a positive correlation between the values of $\alpha$ and $H_0$, which implies (due to the additional degree of freedom) that the $\Lambda$(t)CDM model can predict values of $H_0$ in better agreement with the local measurements than the standard $\Lambda$CDM model.
Our present analysis not only confirms this correlation but also shows that it is even stronger when the Planck (2018) data are used. On the other hand, given the anti-correlation between these two latter parameters and $\omega_{cdm}$, a significant shift of the cold dark matter density to smaller values is obtained for the iDE model when compared with the standard model prediction. It is worth noticing that these results are obtained using a prior on $H_0$ given by the SH$0$ES collaboration. By removing this prior from the analysis, the constraints on these parameters are compatible with those of the $\Lambda$CDM model, as shown in Fig. \ref{fig:triangle_LtCDM}. \begin{figure}[t] \centerline{\includegraphics[scale=0.3]{LtCDMvsLCDM_15vs18.pdf} \hspace{.4in}} \caption{Comparison between the $\Lambda$CDM and $\Lambda$(t)CDM models using the Planck 2015 and Planck 2018 likelihoods combined with the SNe JLA sample, as well as the $\Omega_{b0}h^2$ and SH0ES priors.} \label{fig:triangle_LtCDMvsLCDM} \end{figure} \section{Final Remarks} \label{Sec:Conclusions} In this paper we have not only updated the previous results of \cite{micol} with the Planck 2018 polarization data, but also explored the influence of non-zero spatial curvature, and the weight of the choice of priors in the analysis, particularly the use of the SH$0$ES prior on the local value of the Hubble parameter, when constraining the cosmological parameters with Planck data. Taking such an $H_0$ prior in combination with CMB data has in fact opened up numerous debates on whether or not it is statistically valid to perform analyses of models by combining data in tension \cite{Verde:2013wza, Handley:2019wlz, Battye:2014qga, Seehars:2014ora, Raveri:2018wln, Efstathiou:2020wem, Gonzalez:2021}. The choice to use this prior must therefore be seriously considered and the results obtained carefully analysed.
We have shown that, without such a prior, the current CMB data are not capable of discerning an interaction in the dark sector, even in combination with SNe Ia data. On the other hand, when the SH$0$ES prior is taken into account, the best fit of the $\alpha$ parameter is clearly positive, whereas the standard model ($\alpha=0$) is excluded at the $\approx 3\sigma$ confidence level. This results from the fact that the interaction and the Hubble parameters are directly correlated, as shown by the analysis with Planck data only. Therefore, a prior that prefers a higher value for the latter will also naturally lead to a higher value for the former. We have also considered the role of spatial curvature in the data analysis. We note that the best fits of the cosmological parameters are not substantially altered, and no robust sign of the presence of curvature can be inferred. In this respect, the curvature has a weaker influence on the analysis as compared to the number of relativistic species, which leads to a negative interaction parameter if left free, as shown in \cite{micol}. Our general conclusion is that the signature of interaction, if it exists, is too weak to be found with the present data set. On the other hand, the present analysis also suggests that the $H_0$ tension observed in the context of the $\Lambda$CDM model seems to be more fundamental, resisting solutions based exclusively on generalizations of the dark sector. \section*{Acknowledgements} We are thankful to Joel Carvalho for an insightful discussion on the proper use of the SH$0$ES prior. MB thanks the support of the Istituto Nazionale di Fisica Nucleare (INFN), sezione di Napoli, iniziative specifiche QGSKY. SC is supported by CNPq (Brazil) with grant 307467/2017-1. JSA acknowledges support from CNPq (grants no.~310790/2014-0 and 400471/2014-0) and FAPERJ (grant no.~204282). The authors thank the use of the CLASS and Monte Python codes.
We also acknowledge the use of the High Performance Data Center (DCON) at the Observat\'orio Nacional for providing the computational facilities to run our analysis.
2102.10289
\section{Introduction} \IEEEPARstart{M}{odel} Predictive Control (MPC) is a well-known method to solve finite-horizon optimal control problems online, which has been extensively investigated in various fields \cite{qin2003survey,vazquez2014model,li2014fast}. However, existing MPC algorithms still suffer from a major challenge: relatively low computation efficiency \cite{lee2011model}. One well-known approach to tackle this issue is the move blocking technique, which assumes a constant control input in a fixed portion of the prediction horizon. It increases the computation efficiency by reducing the number of variables to be optimized \cite{cagienard2007move}. However, this solution cannot guarantee control performance, system stability, or constraint satisfaction. In addition, Wang and Boyd (2009) proposed an early termination interior-point method to reduce the calculation time by limiting the maximum number of iterations per time step \cite{wang2009fast}. However, these online methods are still unable to meet the online computing requirements for nonlinear and large-scale systems. Some control algorithms choose to calculate a near-optimal explicit policy offline, and then implement it online. Bemporad \emph{et al}. (2002) first proposed the explicit MPC method to increase the computation efficiency, which partitioned the constrained state space into several regions and calculated explicit feedback control laws for each region \cite{bemporad2002explicit}. During online implementation, the onboard computer only needs to choose the corresponding state feedback control law according to the current system state, thereby reducing the burden of online calculation to some extent. Such algorithms are only suitable for small-scale systems, since the required storage capacity grows exponentially with the state dimension \cite{kouvaritakis2002needs}.
Furthermore, significant efforts have been devoted to approximate MPC algorithms, which can reduce the number of polyhedral state regions and simplify the explicit control laws. Geyer \emph{et al}. (2008) provided an optimal merging approach to reduce the partitions by merging regions with the same control law \cite{geyer2008optimal}. Jones \emph{et al}. (2010) proposed a polytopic approximation method using double description and barycentric functions to estimate the optimal policy, which greatly reduced the partitions and could be applied to any convex problem \cite{jones2010polytopic}. Wen \emph{et al}. (2009) proposed a piecewise continuous grid function to represent an explicit MPC solution, which reduced the requirements on storage capacity and improved online computation efficiency \cite{wen2009analytical}. Borrelli \emph{et al}. (2010) proposed an explicit MPC algorithm which can be executed partially online and partially offline \cite{borrelli2010computation}. In addition, some MPC studies employed a parameterized function to approximate the MPC controller. They updated the function parameters by minimizing the MPC cost function with a fixed prediction horizon through supervised learning or reinforcement learning \cite{aakesson2005neural,aakesson2006neural,cheng2015neural}. Note that the policy performance and the computation time for each step usually increase with the length of the prediction horizon. The above-stated algorithms usually have to make a trade-off between control performance and computation time, and then select a conservative fixed prediction horizon to meet the requirement of real-time decision-making. However, on-board computing resources are usually dynamically changing, so these algorithms often lead to calculation timeouts or resource waste. In other words, these algorithms cannot adapt to the dynamic allocation of computing resources and make full use of the available computing time to select the longest model prediction horizon.
In this paper, we propose an offline MPC algorithm, called Recurrent MPC (RMPC), for finite-horizon optimal control problems with large-scale nonlinearities and nonaffine inputs. Our main contributions can be summarized as below: \begin{enumerate} \item A recurrent function is employed to approximate the optimal policy, which maps the system states and reference values directly to the control inputs. Compared to previous algorithms employing non-recurrent functions (such as fully connected neural networks), which are only suitable for fixed-horizon predictive control \cite{aakesson2005neural,aakesson2006neural,cheng2015neural}, the inclusion of the recurrent structure allows the algorithm to select an appropriate prediction horizon according to current computing resources. In particular, the output of the learned policy function after $N$ recurrent cycles corresponds to the nearly optimal solution of $N$-step MPC. \item A policy optimization objective is designed by decomposing the MPC cost function according to Bellman's principle of optimality. The optimal recurrent policy can be obtained by directly minimizing the designed objective function. Therefore, unlike most explicit MPC algorithms \cite{bemporad2002explicit, kouvaritakis2002needs,geyer2008optimal,jones2010polytopic,wen2009analytical,borrelli2010computation} that can only handle linear systems, the proposed algorithm is applicable to nonlinear and non-input-affine systems. \item RMPC calculates a near-optimal recurrent policy offline, and then implements it online. Experiments show that it is over 5 times faster than traditional online MPC algorithms \cite{Andreas2006Biegler,bonami2008algorithmic} under the same problem scale. \end{enumerate} The paper is organized as follows. In Section \ref{sec:pre}, we provide the formulation of the MPC problem. Section \ref{sec:RMPC} presents the RMPC algorithm and proves its convergence.
In Section \ref{sec:simulation}, we perform a hardware-in-the-loop (HIL) simulation to demonstrate the generalizability and effectiveness of RMPC. Section \ref{sec:experiment} verifies the performance of RMPC on a four-wheeled robot, and Section \ref{sec:conclusion} concludes this paper. \section{Preliminaries} \label{sec:pre} Consider the general time-invariant discrete-time dynamic system \begin{equation} \label{eq.system} x_{i+1}=f(x_{i}, u_{i}) \end{equation} with state $x_{i}\in \mathcal{X} \subset \mathbb{R}^{n}$, control input $u_{i}\in \mathcal{U} \subset \mathbb{R}^{m}$, and system dynamics function $f:\mathbb{R}^{n} \times \mathbb{R}^{m} \to \mathbb{R}^{n}$. We assume that $f(x_i,u_i)$ is Lipschitz continuous on a compact set $\mathcal{X}$, and that the system is stabilizable on $\mathcal{X}$. The $N$-step Model Predictive Control (MPC) problem without state constraints is given as \begin{equation} \label{eq.valuedefinition} \begin{aligned} &\min_{u^{N}_{0},\cdots, u^{N}_{N-1}} V(x_{0},r_{1:N},N)={\sum_{i=1}^{N}l(x_{i},r_{i},u^{N}_{i-1}(x_{0},r_{1:N}))}\\ &\qquad \text{s.\;t.} \qquad \eqref{eq.system}, \ u\in \mathcal{U}, \end{aligned} \end{equation} where $V(x_{0},r_{1:N},N)$ is the cost function, $x_{0}$ is the initial state, $N$ is the length of the prediction horizon, $r_{1:N}=[r_{1},r_{2},\cdots,r_{N}]$ is the reference trajectory, $u^{N}_{i-1}$ is the control input of the $i$th step in the $N$-step prediction, and $l\geq0$ is the utility function. The purpose of MPC is to find the optimal control sequence that minimizes the objective $V(x_{0},r_{1:N},N)$, which can be denoted as \begin{equation} \label{eq.control_sequence} \begin{aligned} \left[{u^{N}_{0}}^*(x_{0},r_{1:N}),{u^{N}_{1}}^*(x_{0},r_{1:N}),\cdots,{u^{N}_{N-1}}^*(x_{0},r_{1:N})\right] \\=\mathop{\arg\min}_{u^{N}_{0},u^{N}_{1},\cdots,u^{N}_{N-1}}V(x_{0},r_{1:N},N), \end{aligned} \end{equation} where the superscript $^*$ denotes the optimum.
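To make the optimization in \eqref{eq.control_sequence} concrete, the following sketch solves a toy scalar instance of the $N$-step problem; the dynamics and utility function here are illustrative assumptions, not taken from the paper:

```python
import numpy as np
from scipy.optimize import minimize

# Toy instance of the N-step MPC problem (eq.valuedefinition):
# assumed scalar dynamics x_{i+1} = f(x_i, u_i) and quadratic utility
# l(x, r, u) = (x - r)^2 + 0.1 u^2.

def f(x, u):
    return 0.9 * x + 0.5 * u          # illustrative linear dynamics

def V(u_seq, x0, r_seq):
    """Cost V(x0, r_{1:N}, N) accumulated over the prediction horizon."""
    x, cost = x0, 0.0
    for u, r in zip(u_seq, r_seq):
        x = f(x, u)
        cost += (x - r) ** 2 + 0.1 * u ** 2
    return cost

N, x0 = 5, 1.0
r_seq = np.zeros(N)                   # regulate the state to zero
res = minimize(V, np.zeros(N), args=(x0, r_seq))
u_star = res.x                        # optimal open-loop sequence; in MPC
                                      # only u_star[0] is applied each step
```

The optimal sequence drives the state toward the reference, and its cost is strictly lower than that of the zero-input sequence, matching the role of \eqref{eq.control_sequence} in the receding-horizon scheme described next.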
\section{Recurrent Model Predictive Control} \label{sec:RMPC} \subsection{Recurrent Policy Function} In practical applications, we only need to execute the first control input ${u^{N}_{0}}^*(x_{0},r_{1:N})$ of the optimal sequence in \eqref{eq.control_sequence} at each time step. Given a control problem, assume that $N_{\text{max}}$ is the maximum feasible prediction horizon. We aim to make full use of computation resources and adaptively select the longest prediction horizon $k\in[1,N_{\text{max}}]$, which means we need to calculate and store the optimal control input ${u^{k}_0}^*(x,r_{1:k})$ for $\forall x\in \mathcal{X}$, $\forall r_{1:k}$, and $\forall k\in[1,N_{\text{max}}]$ in advance. This requires us to find an efficient way to represent the policy for different prediction horizons $N\in[1,N_{\text{max}}]$ and to solve it offline. We first introduce a recurrent function, denoted as $\pi^{c} (x_{0},r_{1:c};\theta)$, to approximate the control input ${u^{c}_{0}}^*(x_{0},r_{1:c})$, where $\theta$ is the vector of function parameters and $c$ is the number of recurrent cycles of the policy function. The goal of the proposed Recurrent MPC (RMPC) algorithm is to find the optimal parameters $\theta^*$, such that \begin{equation} \label{eq.equation_optimal} \begin{aligned} \pi^{c}(x_{0},r_{1:c};\theta^*)&={u^{c}_{0}}^*(x_{0},r_{1:c}),\\ \forall x_{0}&\in \mathcal{X}, \forall r_{1:c} , \forall c \in [1,N_{\text{max}}]. \end{aligned} \end{equation} The structure of the recurrent policy function is illustrated in Fig. \ref{fig_structure}. All recurrent cycles share the same parameters $\theta$, where $h_c\in\mathbb{R}^q$ is the vector of hidden states.
\begin{figure}[htb] \captionsetup{justification =raggedright, singlelinecheck = false,labelsep=period, font=small} \centering{\includegraphics[width=0.35\textwidth]{figure/recurrentapproximatefunction.pdf}} \caption{The structure of the recurrent policy function.} \label{fig_structure} \end{figure} Each recurrent cycle is mathematically described as \begin{equation} \label{eq.recurrentstructure} \begin{aligned} &h_c=\sigma_h(x_{0},r_{c},h_{c-1};\theta_{h}), \\&\pi^c (x_{0},r_{1:c};\theta)=\sigma_y(h_c;\theta_{y}), \\&c\in[1,N_{\text{max}}], \theta=\theta_{h}\mathop{\cup}\theta_{y}, \end{aligned} \end{equation} where $h_0=0$, and $\sigma_h$ and $\sigma_y$ are the activation functions of the hidden layer and the output layer, respectively. As shown in Fig. \ref{fig_structure}, the recurrent policy function outputs a control input at each recurrent cycle. Assuming that we have found the optimal parameters $\theta^*$, it follows that the output of the $c$th cycle $\pi^c(x_{0},r_{1:c};\theta^*)={u^{c}_{0}}^*(x_{0},r_{1:c})$ for $\forall c\in[1,N_{\text{max}}]$. This indicates that the more cycles, the longer the prediction horizon. In practical applications, the calculation time $t_c$ of each cycle varies due to dynamic changes in computing resource allocation (see Fig. \ref{fig_resource}). At each time step, the total time assigned to the control input calculation is assumed to be $T$. Denoting the final number of recurrent cycles at each time step as $k$, the corresponding control input is $\pi^k(x_{0},r_{1:k};\theta^*)$, where \begin{equation} \nonumber k=\left\{ \begin{aligned} &N_\text{max}, &\quad \sum_{c=1}^{N_{\text{max}}}t_c\le T, \\ &p, &\quad \sum_{c=1}^{p}t_c\le T < \sum_{c=1}^{p+1}t_c. \end{aligned} \right. \end{equation} Therefore, the recurrent policy is able to make full use of computing resources and adaptively select the longest prediction horizon $k$. 
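The budget-adaptive selection of $k$ above might be sketched as follows; the simple linear recurrent cell and the random parameters $\theta$ are placeholders for the learned policy, assumed here purely for illustration.

```python
import time
import numpy as np

def recurrent_cycle(x0, r_c, h_prev, theta):
    """One cycle of the recurrent policy: h_c from (x_0, r_c, h_{c-1}),
    then a bounded control output (toy linear cell, not the learned one)."""
    Wh, Wx, Wr, Wy = theta
    h = np.tanh(Wh @ h_prev + Wx @ x0 + Wr @ r_c)
    u = 0.2 * np.tanh((Wy @ h)[0])
    return h, u

def anytime_policy(x0, refs, theta, T, n_max):
    """Run recurrent cycles until the time budget T (seconds) is spent,
    returning the control of the deepest completed cycle k in [1, n_max]."""
    deadline = time.monotonic() + T
    h = np.zeros(theta[0].shape[0])          # h_0 = 0
    k, u = 0, 0.0
    for c in range(n_max):
        h, u = recurrent_cycle(x0, refs[c], h, theta)
        k = c + 1                            # cycle c+1 completed
        if time.monotonic() >= deadline:
            break                            # budget spent: use this output
    return u, k

rng = np.random.default_rng(0)
q, n = 8, 4                                  # hidden and state sizes (assumed)
theta = (rng.normal(size=(q, q)), rng.normal(size=(q, n)),
         rng.normal(size=(q, 1)), rng.normal(size=(1, q)))
u, k = anytime_policy(0.1 * np.ones(n), [np.ones(1)] * 15, theta,
                      T=1e-3, n_max=15)
```

Note that at least one cycle always completes before the deadline check, so $k\ge1$, matching $k\in[1,N_{\text{max}}]$.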
In other words, the more computing resources are allocated, the longer the prediction horizon that will be selected, which usually leads to better control performance. \begin{figure}[!htb] \captionsetup{justification =raggedright, singlelinecheck = false,labelsep=period, font=small} \centering{\includegraphics[width=0.4\textwidth]{figure/computingresource.pdf}} \caption{Maximum recurrent cycles in different cases.} \label{fig_resource} \end{figure} \begin{remark} Existing MPC algorithms usually employ non-recurrent approximation functions to represent the policies \cite{aakesson2005neural,aakesson2006neural,cheng2015neural}, which must select a fixed prediction horizon in advance. When the prediction horizon changes, the optimization problem must be reconstructed to learn a new corresponding policy. Conversely, RMPC employs a recurrent function to approximate the optimal policy, which maps the system states and reference values directly to the control inputs. The recurrent structure allows RMPC to select an appropriate prediction horizon according to the current computing resources. The output of the learned policy network after $N$ recurrent cycles corresponds to the nearly optimal solution of $N$-step MPC. \end{remark} \subsection{Objective Function for Policy Learning} To find the optimal parameters $\theta^*$ offline, we first need to represent the MPC cost function in \eqref{eq.valuedefinition} in terms of $\theta$, denoted by $V(x_{0},r_{1:N},N; \theta)$. 
From \eqref{eq.valuedefinition} and Bellman's principle of optimality, the global minimum $V^*(x_{0},r_{1:N},N)$ can be expressed as \begin{equation} \nonumber \begin{aligned} V^*(&x_{0},r_{1:N},N) \\&=l(x_{1},r_{1},{u^{N}_0}^*(x_0,r_{1:N}))+V^*(x_{1},r_{2:N},N-1)\\ &=\sum_{i=1}^{2}l(x_i,r_i,{u^{N-i+1}_{0}}^*(x_{i-1},r_{i:N})) +\\ &\qquad\qquad\qquad\qquad \qquad\qquad\qquad V^*(x_{2},r_{3:N},N-2)\\ &\vdots\\ &=\sum_{i=1}^{N-1}l(x_i,r_i,{u^{N-i+1}_{0}}^*(x_{i-1},r_{i:N})) +V^*(x_{N-1},r_N, 1)\\ &=\sum_{i=1}^{N}l(x_i,r_i,{u^{N-i+1}_{0}}^*(x_{i-1},r_{i:N})). \end{aligned} \end{equation} Furthermore, according to \eqref{eq.valuedefinition}, one has \begin{equation} \label{eq.optimal_V} \begin{aligned} V^*(x_{0},r_{1:N},N)&=\sum_{i=1}^{N}l(x_{i},r_{i},{u^{N}_{i-1}}^*(x_{0},r_{1:N}))\\ &=\sum_{i=1}^{N}l(x_i,r_i,{u^{N-i+1}_{0}}^*(x_{i-1},r_{i:N})). \end{aligned} \end{equation} Therefore, for the same $x_0$ and $r_{1:N}$, it is clear that \begin{equation} \label{eq.optimalforanyone} {u^{N}_{i-1}}^*(x_{0},r_{1:N})={u^{N-i+1}_{0}}^*(x_{i-1},r_{i:N}),\quad \forall i\in[1,N]. \end{equation}This indicates that the $i$th optimal control input ${u^{N}_{i-1}}^*(x_{0},r_{1:N})$ in \eqref{eq.control_sequence} can be regarded as the optimal control input of the $(N-i+1)$-step MPC problem with initial state $x_{i-1}$. 
Hence, by replacing all $u^{N}_{i-1}(x_{0},r_{1:N})$ in \eqref{eq.valuedefinition} with $u^{N-i+1}_{0}(x_{i-1},r_{i:N})$, the cost function of $N$-step MPC can be rewritten as $$V(x_{0},r_{1:N},N)=\sum_{i=1}^{N}l(x_i,r_i,u^{N-i+1}_{0}(x_{i-1},r_{i:N})).$$ We can then obtain the $N$-step cost function in terms of $\theta$: \begin{equation} \label{eq.appro_V} V(x_{0},r_{1:N},N;\theta)=\sum_{i=1}^{N}l(x_i,r_i,\pi^{N-i+1}(x_{i-1},r_{i:N};\theta)). \end{equation} Fig. \ref{fig:objective} illustrates the reshaped $N$-step cost function intuitively. \begin{figure}[!htb] \captionsetup{justification =raggedright, singlelinecheck = false,labelsep=period, font=small} \centering{\includegraphics[width=0.4\textwidth]{figure/training.pdf}} \caption{Illustration of the objective function of RMPC.} \label{fig:objective} \end{figure} To find the optimal parameters $\theta^*$ that make \eqref{eq.equation_optimal} hold, we can construct the following objective function: \begin{equation} \label{eq.lossfunction} J(\theta)=\mathop{\mathbb{E}}_{\substack{x_0\in\mathcal{X}, r_{1:{N_{\text{max}}}}}}\Big\{V(x_{0},r_{1:{N_{\text{max}}}},N_{\text{max}};\theta)\Big\}. \end{equation} Therefore, we can update $\theta$ by directly minimizing $J(\theta)$. The policy update gradients can be derived as \begin{equation} \label{eq.updatagradient} \begin{aligned} \frac{\text{d}J}{\text{d}\theta}&=\mathop{\mathbb{E}}_{ \small\begin{array}{ccc} \small x_0\in\mathcal{X}, r_{1:N_{\text{max}}}\\ \end{array} } \Big\{\frac{\text{d}V(x_{0},r_{1:N_{\text{max}}},N_{\text{max}};\theta)}{\text{d}\theta}\Big\},\\ \end{aligned} \end{equation} where \begin{equation} \nonumber \begin{aligned} &\frac{\text{d}V(x_{0},r_{1:N_{\text{max}}},N_{\text{max}};\theta)}{\text{d}\theta}=\\ &\qquad\qquad\qquad\sum_{i=1}^{N_{\text{max}}}\frac{\text{d}l(x_{i},r_{i},\pi^{N_{\text{max}}-i+1}(x_{i-1},r_{i:N_{\text{max}}};\theta))}{\text{d} \theta}. 
\end{aligned} \end{equation} Denoting $\pi^{N_{\text{max}}-i+1}(x_{i-1},r_{i:N_{\text{max}}};\theta)$ as $\pi^{N_{\text{max}}-i+1}$, $l(x_{i},r_{i},\pi^{N_{\text{max}}-i+1}(x_{i-1},r_{i:N_{\text{max}}};\theta))$ as $l_{i}$, $\frac{\mathrm{d}x_i}{\mathrm{d}\theta}$ as $\phi_{i}$, and $\frac{\mathrm{d}\pi^{N_{\text{max}}-i+1}}{\mathrm{d}\theta}$ as $\psi_{i}$, we further have \begin{equation} \nonumber \frac{\text{d} V(x_{0},r_{1:N_{\text{max}}},N_{\text{max}};\theta)}{\text{d} \theta}=\sum_{i=1}^{N_{\text{max}}} \Big\{\frac{\partial l_{i}}{\partial x_{i}}\phi_{i}+\frac{\partial l_{i}}{\partial \pi^{N_{\text{max}}-i+1}}\psi_{i}\Big\}, \end{equation} where \begin{equation} \nonumber \begin{aligned} \phi_{i} &=\frac{\partial f(x_{i-1},\pi^{N_{\text{max}}-i+1})}{\partial x_{i-1}}\phi_{i-1} +\frac{\partial f(x_{i-1},\pi^{N_{\text{max}}-i+1})}{\partial \pi^{N_{\text{max}}-i+1}}\psi_{i}, \end{aligned} \end{equation} with $\phi_0=0$, and \begin{equation} \nonumber \psi_{i} =\frac{\partial\pi^{N_{\text{max}}-i+1}}{\partial x_{i-1}}\phi_{i-1} + \frac{\partial\pi^{N_{\text{max}}-i+1}}{\partial \theta}. \end{equation} Fig. \ref{fig_gradient} visually shows the backpropagation path of the policy gradients. \begin{figure}[!htb] \captionsetup{justification =raggedright, singlelinecheck = false,labelsep=period, font=small} \centering{\includegraphics[width=0.4\textwidth]{figure/Graident_BP.pdf}} \caption{Backpropagation path of the policy update gradients.} \label{fig_gradient} \end{figure} Taking the Gradient Descent (GD) method as an example, the policy update rule is \begin{equation} \label{eq.update_rule} \begin{aligned} \theta_{K+1} &= \theta_K - \alpha_{\theta} \frac{\text{d}J}{\text{d}\theta}, \end{aligned} \end{equation} where $\alpha_{\theta} $ denotes the learning rate and $K$ indicates the $K$th iteration. The pseudo-code and diagram of the proposed RMPC algorithm are shown in Algorithm \ref{alg:RMPC} and Fig. \ref{fig:flowchart}. 
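The learning loop can be sketched as follows: the reshaped cost \eqref{eq.appro_V} serves as the loss, finite differences stand in for the analytic gradient above, and only improving steps are accepted (a toy safeguard replacing a tuned learning rate). The small linear cell, scalar dynamics, and utility are illustrative assumptions, not the paper's GRU or vehicle model.

```python
import numpy as np

Q, N_MAX = 4, 5                              # hidden size, horizon (assumed)

def pi(x, refs, theta):
    """Recurrent policy: len(refs) cycles from h_0 = 0, bounded tanh output."""
    Wh, Wx, Wr, Wy = theta
    h = np.zeros(Q)
    for r in refs:
        h = np.tanh(Wh @ h + Wx * x + Wr * r)
    return 0.2 * np.tanh(Wy @ h)

def f(x, u):                                  # toy scalar dynamics
    return 0.98 * x + 0.1 * np.tanh(u)

def l(x, r, u):                               # toy utility
    return (x - r) ** 2 + 0.1 * u ** 2

def rollout_cost(x0, refs, theta):
    """V(x_0, r_{1:N}, N; theta): step i uses pi^{N-i+1}(x_{i-1}, r_{i:N})."""
    x, cost = x0, 0.0
    for i in range(len(refs)):
        u = pi(x, refs[i:], theta)
        x = f(x, u)
        cost += l(x, refs[i], u)
    return cost

def train(theta, x0=1.0, refs=(0.0,) * N_MAX, iters=100, lr=0.05, h=1e-5):
    refs = list(refs)
    for _ in range(iters):
        grad = []
        for W in theta:                       # central finite differences
            g = np.zeros_like(W)
            for idx in np.ndindex(W.shape):
                W[idx] += h; cp = rollout_cost(x0, refs, theta)
                W[idx] -= 2 * h; cm = rollout_cost(x0, refs, theta)
                W[idx] += h
                g[idx] = (cp - cm) / (2 * h)
            grad.append(g)
        trial = [W - lr * g for W, g in zip(theta, grad)]
        if rollout_cost(x0, refs, trial) <= rollout_cost(x0, refs, theta):
            theta = trial                     # accept only improving steps
    return theta

rng = np.random.default_rng(0)
theta0 = [rng.normal(scale=0.1, size=s) for s in [(Q, Q), (Q,), (Q,), (Q,)]]
c0 = rollout_cost(1.0, [0.0] * N_MAX, theta0)
theta = train([W.copy() for W in theta0])
c1 = rollout_cost(1.0, [0.0] * N_MAX, theta)
```

In the paper's setting, the finite-difference loop would be replaced by backpropagation through the recursions for $\phi_i$ and $\psi_i$, and the single scenario by an expectation over sampled $x_0$ and $r_{1:N_{\text{max}}}$.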
\begin{algorithm}[!htb] \caption{RMPC algorithm} \label{alg:RMPC} \begin{algorithmic} \STATE Given an appropriate learning rate $\alpha_\theta$ and an arbitrarily small positive number $\epsilon$. \STATE Initialize with arbitrary $\theta_0$ \REPEAT \STATE Randomly select $x_0\in \mathcal{X}$ and the corresponding $r_{1:N_{\text{max}}}$ \STATE Calculate $\frac{\text{d}J(\theta_K)}{\text{d}\theta_K}$ using \eqref{eq.updatagradient} \STATE Update policy function using \eqref{eq.update_rule} \UNTIL $|J(\theta_{K+1})-J(\theta_{K})|\le \epsilon$ \end{algorithmic} \end{algorithm} \begin{figure}[!htb] \captionsetup{justification =raggedright, singlelinecheck = false,labelsep=period, font=small} \centering{\includegraphics[width=0.3\textwidth]{figure/flowchart.pdf}} \caption{The training flowchart of RMPC.} \label{fig:flowchart} \end{figure} \begin{remark} Most existing explicit MPC algorithms \cite{bemporad2002explicit, kouvaritakis2002needs,geyer2008optimal,jones2010polytopic,wen2009analytical,borrelli2010computation} can only handle linear systems. In comparison, RMPC is applicable to general nonlinear and non-input-affine systems, because the nearly optimal recurrent policy can be obtained by directly minimizing the designed objective function using policy gradient methods. \end{remark} \begin{remark} The objective function of RMPC in \eqref{eq.lossfunction} is identical to that of the traditional MPC problem \eqref{eq.valuedefinition}. The only difference between them is that RMPC aims to find an explicit nearly optimal recurrent policy rather than numerical solutions. Therefore, RMPC can be regarded as a special explicit solver for the traditional MPC problem in \eqref{eq.valuedefinition}. \end{remark} \subsection{Convergence and Optimality} There are many types of recurrent functions belonging to the structure defined in \eqref{eq.recurrentstructure}, and the recurrent neural network (RNN) is the most commonly used one. 
In recent years, deep RNNs have been successfully implemented in many fields, such as natural language processing and system control, owing to their ability to process sequential data \cite{mikolov2010recurrent,li2017novel}. Next, we will show that as the iteration index $K\rightarrow\infty$, the optimal policy $\pi^c (x_{0},r_{1:c};\theta^*)$ that makes \eqref{eq.equation_optimal} hold can be achieved using Algorithm \ref{alg:RMPC}, as long as $\pi^c (x_{0},r_{1:c};\theta)$ is an over-parameterized RNN. Over-parameterization means that the number of hidden neurons and layers is sufficiently large. Before presenting the main theorem, we introduce the following lemma and assumption. \begin{lemma}[Universal Approximation Theorem\cite{li1992approximation,schafer2007recurrent,hammer2000approximation}] \label{lemma.ability} Consider a sequence of finite functions $\{F^i(y^i)\}_{i=1}^n$, where $n$ is the number of functions, $y^i=[y_1,y_2,\hdots,y_i]\in\mathbb{R}^i$, $i\in [1,n]$ is the input dimension, and $F^i(y^i): \mathbb{R}^i\rightarrow \mathbb{R}^d$ is a continuous function on a compact set. Consider an RNN ${G}^c (y^c;W,b)$ given by \begin{equation} \nonumber \begin{aligned} &h_c=\sigma_h(W_y^\top y^{c}+U_h^\top h_{c-1}+b_y),\\ &{G}^c (y^c;W,b)=\sigma_y(W_h^\top h_{c}+b_h), \end{aligned} \end{equation} where $c$ is the number of recurrent cycles, $W=W_h\mathop{\cup}W_y$, $b=b_h\mathop{\cup}b_y$ and $U_h$ are parameters, and $\sigma_h$ and $\sigma_y$ are activation functions. Supposing ${G}^c (y^c;W,b)$ is over-parameterized, for any $\{F^i(y^i)\}_{i=1}^n$, $\exists U_h, W, b$, such that \begin{equation} \nonumber \left \| {G}^c(y^c; W,b)-F^c(y^c) \right \|_{\infty} \le \epsilon, \quad \forall y^c\in\mathbb{R}^c, c \in [1,n], \end{equation} where $\epsilon \in\mathbb{R}^+$ is an arbitrarily small error. 
\end{lemma} Reported experimental results and theoretical proofs have shown that straightforward optimization methods, such as GD and Stochastic GD (SGD), can find the global minima of most training objectives in polynomial time if the approximate function is an over-parameterized NN or RNN \cite{allen2019convergence,du2019gradient}. Based on this fact, we make the following assumption. \begin{assumption} \label{assumption.global} If the approximate function is an over-parameterized RNN, the global minimum of the objective function in \eqref{eq.lossfunction} can be found using an appropriate optimization algorithm such as SGD \cite{allen2019convergencernn}. \end{assumption} Now, we are ready to show the convergence and optimality of RMPC. \begin{theorem}[Recurrent Model Predictive Control] \label{theorem.optimality} Suppose $\pi^c (x_{0},r_{1:c};\theta)$ is an over-parameterized RNN. Through Algorithm \ref{alg:RMPC}, any initial parameters $\theta_0$ will converge to $\theta^*$, such that \eqref{eq.equation_optimal} holds. \end{theorem} \begin{proof} By \eqref{eq.lossfunction}, we have \begin{equation} \nonumber \begin{aligned} \min_{\theta}J(\theta)&=\min_{\theta}\mathop{\mathbb{E}}_{\substack{x_0\in\mathcal{X},r_{1:{N_{\text{max}}}}}}\Big\{V(x_{0},r_{1:{N_{\text{max}}}},N_{\text{max}};\theta)\Big\}\\ &\ge\mathop{\mathbb{E}}_{\substack{x_0\in\mathcal{X},r_{1:{N_{\text{max}}}}}}\Big\{\min_{\theta}V(x_{0},r_{1:{N_{\text{max}}}},N_{\text{max}};\theta)\Big\}. \end{aligned} \end{equation} By Lemma \ref{lemma.ability}, there always exists $\theta^{\dagger}$ such that \begin{equation} \centering \nonumber \theta^{\dagger}=\arg\min_{\theta}V(x_{0},r_{1:N_{\text{max}}},N_{\text{max}};\theta),\quad \forall x_0\in \mathcal{X},r_{1:N_{\text{max}}}. \end{equation} Then, it directly follows that \begin{equation} \centering \nonumber J(\theta^{\dagger})=\min_\theta J(\theta). 
\end{equation} Furthermore, according to \eqref{eq.optimal_V}, \eqref{eq.optimalforanyone}, and Bellman's principle of optimality, $\theta^\dagger$ also makes \eqref{eq.equation_optimal} hold, i.e., $\theta^\dagger=\theta^*$. Note that $\theta^\dagger$ may not be unique. From Assumption \ref{assumption.global}, we can always find $\theta^*$ by repeatedly minimizing $J(\theta)$ in \eqref{eq.lossfunction} using \eqref{eq.update_rule}, which completes the proof. \end{proof} Thus, we have proven that the RMPC algorithm can converge to $\theta^*$. In other words, it can find the explicit nearly optimal policy of MPC with different prediction horizons, whose output after the $c$th recurrent cycle corresponds to the nearly optimal solution of $c$-step MPC. \begin{remark} Theorem \ref{theorem.optimality} shows that under mild assumptions, the output of the converged policy of RMPC is exactly the optimal solution of the traditional MPC problem in \eqref{eq.valuedefinition}. This means that if RMPC converges to the nearly optimal solution, it would inherit the stability property of the original MPC problem. The stability conditions for the case where the approximation error of the learned recurrent policy cannot be ignored will be established in further studies. \end{remark} \section{Simulation Verification} \label{sec:simulation} In order to evaluate the performance of the proposed RMPC algorithm, we choose the vehicle lateral control problem in the path tracking task as an example \cite{li2017driver}. It is a nonlinear and non-affine control problem and a widely used verification and application task for MPC \cite{duan2021adaptive, cheng2020model,ji2016path}. \subsection{Overall Settings} The recurrent policy network is trained offline on the PC, and then deployed to the industrial personal computer (IPC). The vehicle dynamics used for policy training are different from those of the controlled plant, which is provided by the CarSim simulator \cite{benekohal1988carsim}. 
For online applications, the IPC controller sends the control signal to the plant according to the state information and the reference trajectory. The plant feeds back the state information to the IPC controller, realizing the closed-loop control process. The feedback scheme of the HIL experiment is depicted in Fig. \ref{fig_HIL}. The IPC controller is an ADLINK MXE-5501, equipped with an Intel i7-6820EQ CPU and 8 GB RAM, which is used as a vehicle onboard controller \cite{Chaoyi2019System}. The plant is a real-time system, simulated by the vehicle dynamic model of CarSim. The longitudinal speed is assumed to be constant, $v_x=16\;\text{m/s}$, and the expected trajectory is shown in Fig. \ref{f:comparison_linear}. The system states and control inputs of this problem are listed in Table \ref{tab.state}, and the vehicle parameters are listed in Table \ref{tab.parameters}. \begin{figure}[!htb] \captionsetup{ singlelinecheck = false,labelsep=period, font=small} \centering{\includegraphics[width=0.45\textwidth]{figure/experiment_diagram.pdf}} \caption{Schematic view of the experimental setup.} \label{fig_HIL} \end{figure} \begin{table}[!htb] \caption{State and control input} \centering \label{tab.state} \begin{tabular}{l l l l} \hline\hline Mode &Name &Symbol&Unit\\ \hline state&Lateral velocity at center of gravity (CG) &$v_y$ & [m/s] \\ & Yaw rate &$\omega_r$ & [rad/s] \\ & Yaw angle &$\phi$ & [rad]\\ & Lateral position&$y$ & [m] \\ input & Front wheel angle &$\delta$ & [rad] \\ \hline\hline \end{tabular} \end{table} \begin{table}[!htb] \caption{Vehicle Parameters} \centering \label{tab.parameters} \begin{tabular}{l l l} \hline\hline Name &Symbol& Unit\\\hline Longitudinal velocity at CG &$v_x$ & 16 [m/s] \\ Front tire cornering stiffness &$k_1$ & -88000 [N/rad] \\ Rear tire cornering stiffness &$k_2$ & -94000 [N/rad] \\ Mass &$m$ & 1500 [kg] \\ Distance from CG to front axle &$a$ & 1.14 [m] \\ Distance from CG to rear axle &$b$ & 1.40 [m] \\ Polar moment of 
inertia at CG &$I_z$ & 2420 [kg$\cdot\mathrm{m}^2$] \\ Tire-road friction coefficient &$\mu$ & 1.0 \\ System frequency &$f$ & 20 [Hz] \\ \hline\hline \end{tabular} \end{table} \subsection{Problem Description} The offline policy is trained based on the nonlinear and non-input-affine vehicle dynamics: \begin{equation} \nonumber x = \begin{bmatrix} y \\ \phi \\ v_y \\ \omega_r \end{bmatrix}, u =\delta, x_{i+1}= \begin{bmatrix} v_x \sin\phi + v_y \cos\phi\\ \omega_r\\ \frac{F_{yf}\cos\delta + F_{yr}}{m} - v_x \omega_r\\ \frac{aF_{yf}\cos\delta - bF_{yr}}{I_{z}} \end{bmatrix}\frac{1}{f} + x_i, \end{equation} where $F_{yf}$ and $F_{yr}$ are the lateral tire forces of the front and rear tires, respectively \cite{kong2015kinematic}. The lateral tire forces can be approximated according to the Fiala tire model \begin{equation} \nonumber F_{y\#} = \left\{ \begin{aligned} &-C_\#\tan\alpha_\#\Big(\frac{C_\#^2(\tan\alpha_\#)^2}{27(\mu_\# F_{z\#})^2} - \frac{{C_\#}\left |\tan\alpha_\# \right |}{3\mu_\# F_{z\#}} + 1\Big),\\ &\qquad \qquad \qquad \qquad \qquad \qquad |\alpha_\#|\le|\alpha_{\text{max},\#}|,\\ & \mu_\# F_{z\#},&\\ &\qquad \qquad \qquad \qquad \qquad \qquad |\alpha_\#|>|\alpha_{\text{max},\#}|, \end{aligned} \right. \end{equation} where $\alpha_{\#}$ is the tire slip angle, $F_{z\#}$ is the tire load, $\mu_\#$ is the friction coefficient, and the subscript $\# \in \{f,r\}$ represents the front or rear tires. The slip angles can be calculated from the relationship between the front/rear axle and the center of gravity (CG): \begin{equation} \nonumber \alpha_f = \arctan (\frac{v_y+a\omega_r}{v_x})-\delta, \quad \alpha_r = \arctan (\frac{v_y-b\omega_r}{v_x}). \end{equation} The loads on the front and rear tires can be approximated by: \begin{equation} \nonumber F_{zf} = \frac{b}{a+b}mg, \quad F_{zr} = \frac{a}{a+b}mg. 
\end{equation} The utility function of this problem is set to be \begin{equation} \nonumber l(x_i,{r_i},u_{i-1}) = ([1,0,0,0]x_i-r_i)^2+10{u_{i-1}}^2+([0,0,0,1]{x_i})^2. \end{equation} Therefore, the policy optimization problem of this example can be formulated as: \begin{equation} \nonumber \centering \begin{aligned} \min_\theta & \mathop{\mathbb{E}}_{ \small\begin{array}{ccc} \small x_0\in\mathcal{X}, r_{1:{N_{\text{max}}}}\\ \end{array} } \Big\{V(x_{0},r_{1:{N_{\text{max}}}},{N_{\text{max}}};\theta)\Big\}\\ \text{s.\;t.} \quad &x_{i} = f(x_{i-1},u_{i-1}), \\ &u_{\text{min}}\leq u_{i-1} \leq u_{\text{max}},\\ &i\in[1,N_{\text{max}}], \end{aligned} \end{equation} where $V(x_{0},r_{1:{N_{\text{max}}}},{N_{\text{max}}};\theta)=\sum_{i=1}^{N_{\text{max}}}l(x_i,r_i,u_{i-1})$, $u_{i-1}=\pi^{{N_{\text{max}}}-i+1}(x_{i-1},r_{i:{N_{\text{max}}}};\theta)$, $N_{\text{max}}=15$, $u_{\text{min}}=-0.2$ rad, and $u_{\text{max}}=0.2$ rad. \subsection{Algorithm Details} The policy function is represented by a variant of the RNN, the Gated Recurrent Unit (GRU). The input layer is composed of the states, followed by 4 hidden layers using rectified linear units (ReLUs) as activation functions, with $128$ units per layer. The output layer is a $\tanh$ layer, multiplied by $0.2$ to enforce the bounded control inputs. We use the Adam optimization method to update the policy with a learning rate of $2\times10^{-4}$ and a batch size of $256$. \subsection{Result Analysis} Given a nonlinear MPC problem, we can directly solve it with optimization solvers, such as IPOPT \cite{Andreas2006Biegler} and BONMIN \cite{bonami2008algorithmic}, whose numerical solutions can be approximately regarded as the optimal policy. In the sequel, both IPOPT and BONMIN are implemented in the symbolic framework CasADi \cite{2018CasADi}. 
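The tire-force model of the problem description might be computed as in the sketch below. The saturation angle $\alpha_{\text{max},\#}$ is not given in the text, so the standard Fiala value $\arctan(3\mu_\# F_{z\#}/C_\#)$ is assumed; $C_\#$ is taken as the positive cornering-stiffness magnitude, and the sign in the saturated branch is likewise an assumption.

```python
import numpy as np

def fiala_force(alpha, C, mu, Fz):
    """Lateral tire force F_y(alpha); C is the positive stiffness magnitude.
    alpha_max = arctan(3*mu*Fz/C) is an assumed (standard Fiala) value."""
    alpha_max = np.arctan(3 * mu * Fz / C)
    if abs(alpha) <= alpha_max:
        t = np.tan(alpha)
        return -C * t * (C ** 2 * t ** 2 / (27 * (mu * Fz) ** 2)
                         - C * abs(t) / (3 * mu * Fz) + 1)
    return -np.sign(alpha) * mu * Fz          # saturated branch (sign assumed)

def slip_angles(vy, wr, vx, delta, a=1.14, b=1.40):
    """Front/rear slip angles from the axle-CG kinematic relations."""
    af = np.arctan((vy + a * wr) / vx) - delta
    ar = np.arctan((vy - b * wr) / vx)
    return af, ar

# Axle loads and forces using the vehicle-parameter table values.
m, g, a, b = 1500.0, 9.81, 1.14, 1.40
Fzf = b / (a + b) * m * g
Fzr = a / (a + b) * m * g
af, ar = slip_angles(vy=0.1, wr=0.05, vx=16.0, delta=0.02)
Fyf = fiala_force(af, C=88000.0, mu=1.0, Fz=Fzf)
Fyr = fiala_force(ar, C=94000.0, mu=1.0, Fz=Fzr)
```

For small slip angles the cubic terms are negligible and the force reduces to the linear approximation $F_{y\#}\approx -C_\#\alpha_\#$, consistent with the negative stiffness sign convention of the parameter table.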
Since RMPC is an explicit solver of the traditional MPC problem, if the control inputs of RMPC are approximately equal to the optimal numerical solutions under different prediction horizons, we can immediately show that RMPC can adaptively choose the longest prediction horizon. We run Algorithm \ref{alg:RMPC} 10 times and calculate the policy error $e_N$ between the solution of IPOPT and RMPC at each iteration with different prediction steps ($N\in[1,15]$), \begin{equation} \nonumber e_N= \mathop{\mathbb{E}}_{ \small\begin{array}{ccc} \small x_0\in\mathcal{X}, r_{1:N}\\ \end{array} } \left[ \frac{ \vert {u_{0}^{N}}^*(x_0,r_{1:N})-{\pi^N}(x_0,r_{1:N};\theta) \vert }{{u^N_{\text{max}}}^*-{u^N_{\text{min}}}^*}\right], \end{equation} where ${{u^N_{\text{max}}}^*}$ and ${{u^N_{\text{min}}}^*}$ are respectively the maximum and minimum values of ${u_{0}^{N}}^*(x_0,r_{1:N})$ for $\forall x_0 \in \mathcal{X}$, $\forall N \in [1,15]$. Fig. \ref{fig_err_train} plots policy error curves during training with different prediction steps. It is clear that all the policy errors decrease rapidly to a small value during the training process. In particular, after $10^4$ iterations, policy errors for all $N\geq5$ fall below 2\%. This indicates that Algorithm \ref{alg:RMPC} has the ability to find the nearly optimal explicit policy of MPC problems with different prediction horizons. \begin{figure}[!htb] \captionsetup{ singlelinecheck = false,labelsep=period, font=small} \centering{\includegraphics[width=0.4\textwidth]{figure/err_train.pdf}} \caption{Policy errors during training. Solid lines are average values over 10 runs. Shaded regions correspond to 95\% confidence interval.} \label{fig_err_train} \end{figure} Fig. \ref{fig_time} compares the calculation efficiency of RMPC and the optimization solvers in online applications. It is obvious that the calculation time of the optimization solvers is much longer than that of RMPC, and the gap increases with the number of prediction steps. 
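The GRU policy described in the Algorithm Details subsection might be sketched as below, using a single 128-unit GRU layer for brevity instead of the four stacked layers, with randomly initialized weights standing in for trained ones.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class GRUPolicy:
    """Minimal numpy sketch of the GRU policy: input (x_0, r_c), hidden
    state h, and a tanh output scaled to the 0.2 rad steering bound.
    (One GRU layer for brevity; the paper stacks 4 layers of 128 units.)"""
    def __init__(self, n_in, n_hidden=128, seed=0):
        rng = np.random.default_rng(seed)
        s = 0.1                               # untrained placeholder weights
        self.Wz = rng.normal(scale=s, size=(n_hidden, n_in + n_hidden))
        self.Wr = rng.normal(scale=s, size=(n_hidden, n_in + n_hidden))
        self.Wc = rng.normal(scale=s, size=(n_hidden, n_in + n_hidden))
        self.Wy = rng.normal(scale=s, size=(1, n_hidden))

    def cycle(self, inp, h):
        v = np.concatenate([inp, h])
        z = sigmoid(self.Wz @ v)              # update gate
        r = sigmoid(self.Wr @ v)              # reset gate
        c = np.tanh(self.Wc @ np.concatenate([inp, r * h]))
        h_new = (1 - z) * h + z * c
        u = 0.2 * np.tanh((self.Wy @ h_new)[0])  # bounded front-wheel angle
        return h_new, u

    def __call__(self, x0, refs):
        h = np.zeros(self.Wy.shape[1])        # h_0 = 0
        for r_c in refs:                      # one cycle per reference value
            h, u = self.cycle(np.concatenate([x0, [r_c]]), h)
        return u

policy = GRUPolicy(n_in=5)                    # 4 states + 1 reference
u = policy(np.array([0.1, 0.0, 0.2, -0.1]), [0.0] * 15)
```

The $\tanh$ output scaling guarantees $|u|\le u_{\max}=0.2$ rad by construction, regardless of the weights.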
Specifically, when $N=c=15$, the IPOPT solver is about 5 times slower than RMPC (IPOPT: $26.2$ ms; RMPC: $4.7$ ms). This demonstrates the online effectiveness of the RMPC method. \begin{figure}[!htb] \captionsetup{justification =raggedright, singlelinecheck = false,labelsep=period, font=small} \centering{\includegraphics[width=0.4\textwidth]{figure/time.pdf}} \caption{RMPC vs. other solvers in terms of online computation time.} \label{fig_time} \end{figure} Fig. \ref{fig_loss} compares the policy performance of IPOPT and RMPC with different prediction horizons. The policy performance is measured by the cost-to-go of 200 steps (10 s) during simulation, starting from randomly initialized states, i.e., \begin{equation} \label{eq.loss_simulation} \centering L=\sum_{i=1}^{200} l(x_i,r_i,u_{i-1}). \end{equation} For all prediction horizons $N$, RMPC performs as well as the solution of the IPOPT solver. Besides, more recurrent cycles (i.e., longer prediction horizons) help to reduce the cost-to-go $L$. \begin{figure}[!htb] \captionsetup{ singlelinecheck = false,labelsep=period, font=small} \centering{\includegraphics[width=0.4\textwidth]{figure/loss.pdf}} \caption{Performance comparison between RMPC and IPOPT. Solid lines are average values over 50 initialized states. 
Shaded regions correspond to 95\% confidence interval.} \label{fig_loss} \end{figure} \begin{figure*}[!htb] \centering \captionsetup{justification =raggedright, singlelinecheck = false,labelsep=period, font=small} \captionsetup[subfigure]{justification=centering} \subfloat[\label{subFig:st1}]{\includegraphics[width=0.22\textwidth]{figure/simu_tractory_pre7.pdf}} \quad \subfloat[\label{subFig:st2}]{\includegraphics[width=0.22\textwidth]{figure/simu_tractory_pre11.pdf}}\quad \subfloat[\label{subFig:st3}]{\includegraphics[width=0.22\textwidth]{figure/simu_tractory_pre15.pdf}} \\ \subfloat[\label{subFig:sc1}]{\includegraphics[width=0.22\textwidth]{figure/simu_output_pre7.pdf}} \quad \subfloat[\label{subFig:sc2}]{\includegraphics[width=0.22\textwidth]{figure/simu_output_pre11.pdf}}\quad \subfloat[\label{subFig:sc3}]{\includegraphics[width=0.22\textwidth]{figure/simu_output_pre15.pdf}} \caption{Simulation results: trajectory and control input curves with different recurrent cycles $c$. (a) Trajectories when $c=7$. (b) Trajectories when $c=11$. (c) Trajectories when $c=15$. (d) Control inputs when $c=7$. (e) Control inputs when $c=11$. (f) Control inputs when $c=15$.} \label{f:comparison_linear} \end{figure*} In detail, Fig. \ref{f:comparison_linear} intuitively presents the trajectory curves and the corresponding control inputs of RMPC and the IPOPT solver. Obviously, the trajectories and control inputs generated by the RMPC controller almost overlap with those of the IPOPT controller. The trajectory tracking error also decreases significantly as the number of recurrent cycles increases. This demonstrates the importance of adaptively selecting the optimal control input with the longest prediction horizon in real applications. RMPC is an explicit MPC method, whose policy is learned offline based on predetermined system parameters. However, in practical applications, system parameters may vary due to online changes or inaccurate measurements. 
To evaluate the robustness of RMPC to different system parameters, we changed certain system parameters and tested the performance of the fixed policy learned with the parameters in Table \ref{tab.parameters}. The average tracking errors under different parameters are shown in Table \ref{tab.simulated_error}. Results show that the final tracking performance is insensitive to the selected parameter changes. This implies that RMPC preserves good robustness when changes in the system parameters are restricted to a reasonable range. This is mainly because the system behaves similarly as long as the parameters do not change substantially. One can also add parametric noise during the training process to further improve robustness. For parameters that change frequently and are easy to measure, simply including them as policy inputs can be a good choice. \begin{table}[!htb] \centering \caption{Average Tracking Errors with Different System Parameters. The original parameters used for learning are shown in bold.} \label{tab.simulated_error} \begin{tabular}{l l l l l l l l} \hline\hline $m$ [kg] &1200 &1300&1400&\textbf{1500}&1600&1700&1800\\ errors [cm]&1.76&1.65&1.54&1.46&1.39& 1.35&1.33 \\ \hline $v_x$ [m/s] &13& 14 &15&\textbf{16} &17 &18 & 19 \\ errors [cm]&6.89&4.93&3.03&1.46&1.80&3.69&5.70\\ \hline $\mu$ &0.7& 0.8 &0.9&\textbf{1.0} &1.1 &1.2 & 1.3 \\ errors [cm]&1.44&1.45&1.45&1.46&1.46&1.46&1.47\\ \hline $k_1(\times 10^3)$ &-58 &-68 &-78 &\textbf{-88} &-98 &-108 &-118 \\ errors [cm]&1.61&1.40&1.39&1.46&1.56&1.67&1.77\\ \hline $k_2(\times 10^3)$ &-64 &-74 &-84 &\textbf{-94} &-104 &-114 &-124 \\ errors [cm]&1.27&1.35&1.41&1.46&1.50&1.53&1.56\\ \hline\hline \end{tabular} \end{table} To summarize, this example demonstrates the optimality, efficiency, and generality of the RMPC algorithm. \section{Experimental Verification and Future Work} \label{sec:experiment} \subsection{Experimental Verification} As shown in Fig. 
\ref{fig_HIL}, an IDRIVERPLUS four-wheeled robot is utilized to demonstrate the effectiveness of the proposed method in practical applications. For ease of understanding, this experiment is carried out by replacing the CarSim simulator in Fig. \ref{fig_HIL} with a real robot. Except for the vehicle parameters, all training details are the same as those in Section \ref{sec:simulation}. Note that the actual vehicle parameters are usually quite different from those of the theoretical model used for learning, since some parameters, such as tire cornering stiffness, are difficult to measure accurately. Therefore, the experimental results can also reflect the robustness of RMPC to inaccurate vehicle parameters. We deployed the learned policy of RMPC on the four-wheeled robot, aiming to follow a sine-shaped reference path. Fig. \ref{f:comparison_experiment} shows the control results of RMPC and the IPOPT solver with different prediction steps. Although the variance of the steering wheel angle is larger than that in simulation due to the existence of system noise, both methods achieve relatively good tracking performance. Table \ref{tab.tracking_error} compares the average tracking errors of these two methods under different numbers of prediction steps. Results show that RMPC achieves a smaller average tracking error than IPOPT in all cases. In particular, when $c=3$, $c=9$, and $c=15$, RMPC reduces the tracking error by 56.2\%, 10.1\%, and 4.0\%, respectively. It is also obvious that the trajectory tracking error decreases significantly as the number of recurrent cycles increases, which provides evidence for the advantage of adaptively selecting the maximum prediction horizon. This real-world experiment demonstrates the efficacy of RMPC in practical applications. 
\begin{figure*}[!htb] \centering \captionsetup{justification =raggedright, singlelinecheck = false,labelsep=period, font=small} \captionsetup[subfigure]{justification=centering} \subfloat[\label{subFig:et1}]{\includegraphics[width=0.22\textwidth]{figure/tractory_pre3.pdf}} \quad \subfloat[\label{subFig:et2}]{\includegraphics[width=0.22\textwidth]{figure/tractory_pre9.pdf}}\quad \subfloat[\label{subFig:et3}]{\includegraphics[width=0.22\textwidth]{figure/tractory_pre15.pdf}} \\ \subfloat[\label{subFig:ec1}]{\includegraphics[width=0.22\textwidth]{figure/output_pre3.pdf}} \quad \subfloat[\label{subFig:ec2}]{\includegraphics[width=0.22\textwidth]{figure/output_pre9.pdf}}\quad \subfloat[\label{subFig:ec3}]{\includegraphics[width=0.22\textwidth]{figure/output_pre15.pdf}} \caption{Experiment results: trajectory and control input curves with different recurrent cycles $c$. (a) Trajectories when $c=3$. (b) Trajectories when $c=9$. (c) Trajectories when $c=15$. (d) Control inputs when $c=3$. (e) Control inputs when $c=9$. (f) Control inputs when $c=15$.} \label{f:comparison_experiment} \end{figure*} \begin{table}[!htb] \caption{Average Tracking Errors} \centering \label{tab.tracking_error} \begin{tabular}{l l l} \hline\hline Prediction step &RMPC& IPOPT\\\hline $c=3$ &10.43 cm & 23.79 cm \\ $c=9$ &9.74 cm & 10.83 cm \\ $c=15$ &6.75 cm & 7.03 cm \\ \hline\hline \end{tabular} \end{table} \subsection{Limitations and Future Work} In this paper, the proposed RMPC method is only suitable for problems without state constraints. In the future, we will extend RMPC to constrained cases by combining constrained policy optimization techniques \cite{duan2021adaptive}. Besides, the performance of the proposed RMPC method is evaluated only on the path tracking task of four-wheeled vehicles. More experiments on different systems and tasks will be addressed in further studies. The future work also includes improving the robustness and investigating the stability of RMPC. 
\section{Conclusion} \label{sec:conclusion} This paper proposes the Recurrent Model Predictive Control (RMPC) algorithm to solve general nonlinear finite-horizon optimal control problems. Unlike traditional MPC algorithms, it can make full use of the available computing resources and adaptively select the longest feasible model prediction horizon. Our algorithm employs an RNN to approximate the optimal policy, which maps the system states and reference values directly to the control inputs. The output of the learned policy network after $N$ recurrent cycles corresponds to the nearly optimal solution of $N$-step MPC. A policy optimization objective is designed by decomposing the MPC cost function according to Bellman's principle of optimality. The optimal recurrent policy can be obtained by directly minimizing the designed objective function, which is applicable to general nonlinear and non-input-affine systems. The convergence and optimality of RMPC are further proved. We demonstrate its optimality, generality and efficiency using a HIL experiment. Results show that RMPC is over 5 times faster than the traditional MPC solver, and the control performance of the learned policy improves further as the number of recurrent cycles increases. To prove its practicality, RMPC has also been applied to a real-world robot path-tracking task, where it achieved better path-tracking accuracy than IPOPT across different prediction horizons. \section*{Acknowledgment} The authors are grateful to the Editor-in-Chief, the Associate Editor, and the anonymous reviewers for their valuable comments. \bibliographystyle{./Bibliography/IEEEtranTIE}
\section{\uppercase{Introduction}} \label{sec:introduction} Images containing fences occur in several situations, such as photographing statues in museums, animals in a zoo, etc. Image de-fencing involves the removal of fences or occlusions in images. De-fencing a single photo is strictly an image inpainting problem, which involves using data in the regions neighbouring the fence pixels in the frame for filling in occlusions. The works of \cite{Bertalmio,Criminisi_tip,James,Xu,Konstantinos} addressed the image inpainting problem wherein the portion of the image to be inpainted is specified manually by a mask. As shown in Fig. 1(a), in the image de-fencing problem it is difficult to manually mark all fence pixels since they are numerous and cover the entire image. Image inpainting does not yield satisfactory results when the image contains fine textured regions that have to be filled in. However, using a video panned across a fenced scene can lead to better results due to the availability of additional information in the adjacent frames. Image de-fencing using a captured video involves multiple steps such as fence detection, motion estimation and information fusion. Our focus in this position paper is to propose an automatic fence removal system for images of dynamic scenes. \begin{figure}[t] \centering \subfigure[]{\label{fig:a}\includegraphics[width=24mm]{yanxi_gobs1}} \subfigure[]{\label{fig:b}\includegraphics[width=24mm]{yanxi_gcseg_res}} \subfigure[]{\label{fig:c}\includegraphics[width=24mm]{yanxi_dld_res}} \caption{Fence detection: (a) 1st frame from the video. (b) Segmentation result using graph cut \cite{Bagon2006}. (c) Output of the lattice detection algorithm \cite{Minwoo}.} \end{figure} As discussed in \cite{Yanxi,Minwo,Vrushali}, automated fence detection is the first major task in de-fencing. We propose two methods for automated fence detection in this position paper. 
Firstly, we take advantage of the strongly directional nature of fence occlusions and use a Gabor filter for fence detection. Secondly, using a supervised learning approach, we train an SVM classifier to detect fence pixels automatically. A block diagram of the proposed automatic image de-fencing system is shown in Fig. 2. It involves three major components. Firstly, we need to design an automatic fence detection scheme that should be able to detect fences/occlusions in any complex scene. Secondly, the relative motion between the frames has to be estimated. Lastly, we need an algorithm to fuse the information from adjacent frames and produce a de-fenced image. \begin{figure*} \centering \subfigure[]{\includegraphics[width=160mm]{flow6}} \caption{Workflow of the proposed automatic image de-fencing algorithm.} \end{figure*} Since our goal in this position paper is to automate the above three steps, initially, we propose an automated approach to fence detection in images of dynamic scenes. Next, we estimate the motion between the frames chosen from the video using the optical flow algorithm of \cite{Thomas}. Lastly, we formulate an optimization framework for estimating the de-fenced image by solving the corresponding inverse problem. Since natural images are sparse, we use the split Bregman algorithm for optimization with the total variation (TV) of the de-fenced image as the regularization constraint \cite{Tom}. \section{\uppercase{Motivation}} \cite{Yanxi} first addressed the de-fencing problem via inpainting of fence occlusions. \cite{Minwo} used multiple images for de-fencing, which significantly improves the performance due to the availability of hidden information in additional frames. They used the deformable lattice detection method proposed in \cite{Minwoo} for fence detection. However, this is not a robust approach and fails for many real-world images, as shown in Figs. 1(c), 3(b), 4(b), 5(b). 
\cite{Vrushali} proposed an improved multi-frame de-fencing technique by using loopy belief propagation. However, there are two issues with their approach. Firstly, the work in \cite{Vrushali} assumed that the motion between the frames is global. This assumption is invalid for more complex dynamic scenes where the motion is non-global. Also, their method used an image matting technique proposed by \cite{Yuanjie} for fence detection, which involves significant user interaction. Recently, \cite{Yadong} proposed a soft fence detection method where visual parallax serves as the cue to distinguish fences from the unoccluded pixels. Therefore, in this position paper we explore these issues and propose techniques for automatic fence detection. \begin{figure}[t] \centering \subfigure[]{\label{fig:a}\includegraphics[width=24mm]{fifa_obs1}} \subfigure[]{\label{fig:b}\includegraphics[width=24mm]{fifa_dld_res}} \subfigure[]{\label{fig:c}\includegraphics[width=24mm]{fifa_thresh_res}} \caption{Fence detection: (a) 1st frame from the video. (b) Output of the lattice detection algorithm \cite{Minwoo}. (c) Fence pixels detected using the Gabor filter.} \end{figure} \begin{figure}[t] \centering \subfigure[]{\label{fig:a}\includegraphics[width=24mm]{saathiya_obs1}} \subfigure[]{\label{fig:b}\includegraphics[width=24mm]{saathiya_dld_res}} \subfigure[]{\label{fig:c}\includegraphics[width=24mm]{saathiya_gabor_res}} \caption{Fence detection: (a) 1st frame from the video. (b) Output of the lattice detection algorithm \cite{Minwoo}. (c) Fence pixels detected using the Gabor filter.} \end{figure} \begin{figure}[t] \centering \subfigure[]{\label{fig:a}\includegraphics[width=36mm]{kcar}} \subfigure[]{\label{fig:b}\includegraphics[width=36mm]{kcar_dld_res}} \subfigure[]{\label{fig:c}\includegraphics[width=36mm]{kcar_ml_res}} \subfigure[]{\label{fig:d}\includegraphics[width=36mm]{kcar_fence}} \caption{Fence detection: (a) 1st frame from the video. (b) Output of the lattice detection algorithm \cite{Minwoo}. 
(c) Fence pixels detected using the machine learning approach. (d) Fence mask.} \end{figure} The simplest approach is to treat fence detection as a segmentation problem. However, for real-world scenes automated segmentation algorithms fail if the foreground and background layers are of similar color. We applied the graph-cuts based segmentation algorithm proposed by \cite{Boykov2001,Boykov2004,Kolmogorov2004} to Fig. 1(a) using the Matlab wrapper by \cite{Bagon2006}. The segmentation result is shown in Fig. 1(b); the automatic segmentation algorithm failed to detect the fence properly. Also, the method of \cite{Minwoo} failed to detect many fence pixels in Fig. 1(c). We propose two automatic approaches to tackle fence detection. Since fences have a strong directional property, we are motivated to employ a Gabor filter to detect them. To demonstrate the effectiveness of our proposed Gabor filter based technique and to compare with the state-of-the-art lattice detection method \cite{Minwoo}, we used a real video from YouTube. We notice that the method of \cite{Minwoo} wrongly detects fence pixels, as shown in Fig. 3(b), and no pattern is detected in Fig. 4(b). Our proposed Gabor filter fence detection results are shown in Figs. 3(c), 4(c), wherein the fences have been properly detected. Secondly, we propose a machine learning based approach to the problem. We tested the technique on a frame from another video of real-world traffic shown in Fig. 5(a). The technique of \cite{Minwoo} failed to detect the fence pixels, as shown in Fig. 5(b), but our proposed machine learning based approach detected the fence, as shown in Fig. 5(c). Note that the two methods mentioned above are completely automatic and require no user intervention. 
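As a concrete illustration of the first approach, a Gabor filter detector can threshold the filter responses at a set of candidate orientations (a NumPy-only sketch under assumed parameter and threshold values; the post-processing in our actual system may differ):

```python
import numpy as np

def gabor_kernel(ksize, lam, theta, psi, sigma, gamma):
    """2D Gabor kernel: a Gaussian envelope modulating a cosine carrier,
    evaluated on coordinates rotated by the orientation angle theta."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + gamma ** 2 * yr ** 2) / (2.0 * sigma ** 2))
    return envelope * np.cos(2.0 * np.pi * xr / lam + psi)

def fence_mask(image, thetas, lam=4.0, psi=0.0, sigma=4.0, gamma=0.5,
               ksize=21, thresh=0.5):
    """OR together the thresholded Gabor responses over the candidate
    fence orientations to obtain a binary fence mask."""
    mask = np.zeros(image.shape, dtype=bool)
    for theta in thetas:
        k = gabor_kernel(ksize, lam, theta, psi, sigma, gamma)
        # circular convolution via FFT keeps the sketch NumPy-only
        resp = np.abs(np.real(np.fft.ifft2(np.fft.fft2(image) *
                                           np.fft.fft2(k, s=image.shape))))
        mask |= resp > thresh * resp.max()
    return mask
```

For a fence such as that of Fig. 4(a), `thetas` would contain the two diagonal orientations (45 and 225 degrees, in radians); simple connected-component filtering of the resulting mask would then remove isolated false positives.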
\section{\uppercase{Our Framework}} We propose to use the following degradation model for the de-fencing problem \begin{equation} \textbf{y}_{m} = \textbf{O}_{m}\textbf{H}_{m}\textbf{W}_{m}\textbf{x} + \textbf{n}_{m} \end{equation} where the $\textbf{y}_{m}$ are the observations containing fences obtained from the captured video, $\textbf{x}$ is the de-fenced image, $\textbf{H}_{m}$ is the blur operator for each frame, $\textbf{W}_{m}$ models the relative motion between frames, $\textbf{O}_{m}$ is obtained from the binary fence masks and $\textbf{n}_{m}$ is Gaussian noise. \subsection{Fence Detection} \subsubsection{Gabor Filter approach} Fences are in general inherently directional in nature. This property can be exploited by using directional filters. We apply the 2D Gabor filter proposed in \cite{Daugman} to our problem. It is given by \begin{equation} g(x,y;\lambda, \theta , \psi , \sigma ,\gamma) = \exp \big(-\frac{x^{\prime 2} + \gamma^{2} y^{\prime 2}}{2\sigma^{2}}\big) \cos\big(2\pi \frac{x^{\prime}}{\lambda}+\psi\big) \end{equation} where $x^{\prime} = x\cos\theta + y\sin\theta$ and $y^{\prime} = -x\sin\theta + y\cos\theta$ are the coordinates rotated by $\theta$, $\lambda$ represents the wavelength, $\theta$ represents the orientation angle, $\psi$ represents the phase offset, $\sigma$ represents the standard deviation and $\gamma$ represents the aspect ratio. The parameter $\theta$ specifies the orientation of the fences in the image and can be varied between $0$ and $360$ degrees based on the fence orientation. For example, for fence detection in Fig. 4(a) we use the Gabor filter with orientation angles of 45 and 225 degrees and the other parameter values chosen as $\lambda=4$, $\psi=0$, $\gamma=0.5$, and $\sigma=4$. As shown in Fig. 4(c), the fence mask is detected accurately. \subsubsection{Machine learning approach} It is amply demonstrated in the literature that HOG features are successful in many recognition and object classification problems. 
In this position paper, we propose a supervised learning approach to detect the fence pixels using HOG features \cite{Dalal}. Firstly, all the dataset images are preprocessed by histogram normalization to reduce the effects of illumination changes. Each training image from the dataset ($100$ positives and $100$ negatives) is divided into non-overlapping cells of size $8 \times 8$ pixels, and the image gradient is computed in terms of magnitude as well as orientation. At every pixel in the \textit{cell}, the orientation is quantized into one of nine bins, weighted by its magnitude. The orientation bins are evenly spaced over $0-180$ degrees with each bin of size $20$ degrees. Finally, a histogram with the $9$ orientations is computed for each cell to form a feature vector of size $9\times 8\times 8$. A region of 4 \textit{cells} is clustered together to form a block, and every neighboring block has an overlap of 2 \textit{cells}. A single block is thus represented by a feature vector of length $4\times 9 \times 8\times 8$. Every block, which consists of un-normalized features from the \textit{cells}, is normalized by its $L2$ norm. Finally, all the feature vectors from the blocks are concatenated to obtain a single large feature vector of size 4752 corresponding to a single training image. Since SVM classifiers were originally designed for binary classification problems, we choose an SVM for our two-class fence/non-fence problem. The extracted HOG features were used to train the SVM. We used the RBF kernel, given as $k(x_{i},x_{j}) = \exp(-\gamma\parallel x_{i}-x_{j}\parallel^{2})$, where the parameter $\gamma$ and the misclassification penalty $C$ are found by 5-fold cross validation. As shown in Fig. 5(c), we use a sliding window to densely scan the test image from top to bottom and left to right at different scales. 
For each detector window, HOG features are extracted and fed to the trained SVM classifier to classify the sub-image as fence or non-fence. We replace the positions of detected windows with a template binary mask to generate the final fence mask shown in Fig. 5(d). \subsection{Motion Estimation} The basic idea behind our method is that image data occluded in the reference frame is uncovered in other frames of the captured video. Motion estimation allows us to fuse the information uncovered in the other images for filling in occlusions in the reference frame. The relative shifts among the images have to be estimated in the degradation model of Eq. 1 to effect the image operations corresponding to $\textbf{W}_{m}$. Recently, \cite{Thomas} proposed an optical flow estimation technique in which descriptor matching is integrated in a variational framework. This method is very effective in detecting sub-pixel motion shifts in real-world images without occlusions. However, for our application we need to accurately estimate the optical flow for images with fences. When the optical flow for such images is estimated by \cite{Thomas}, we observe erroneous values around the fenced or occluded pixels. To avoid these errors, we smooth the observations using a Gaussian kernel prior to using \cite{Thomas} to estimate the optical flow. \subsection{Optimization} We now formulate the optimization problem needed to solve the ill-posed inverse problem of image de-fencing. We minimize an objective function consisting of a data fidelity term and a regularization term. We take the total variation (TV) of the de-fenced image as the regularization constraint. TV regularization is a well-studied approach which preserves discontinuities in the reconstructed image \cite{Pascal,Konstantinos}. 
The de-fenced image is the solution of the following optimization problem \begin{equation} \arg\min_{\textbf{x}} \frac{1}{2} \sum_{m=1}^{p}\parallel \textbf{y}_{m}-\textbf{O}_{m}\textbf{H}_{m}\textbf{W}_{m}\textbf{x}\parallel _{2}^{2} + \mu \parallel \nabla \textbf{x} \parallel_{1} \end{equation} where $p$ is the number of frames chosen from the video and $\mu$ is the regularization parameter. The above problem can also be written in a constrained framework as \begin{equation} \begin{split} \arg\min_{\textbf{x}} \frac{1}{2} \sum_{m=1}^{p}\parallel \textbf{y}_{m}-\textbf{O}_{m}\textbf{H}_{m}\textbf{W}_{m}\textbf{x}\parallel _{2}^{2} + \mu \parallel \textbf{d} \parallel_{1} \\ s.t. \hspace{5pt}\textbf{d} = \nabla \textbf{x} \end{split} \end{equation} The above optimization problem contains both $l_{1}$ and $l_{2}$ terms and is hence difficult to solve directly. We employ the split Bregman iterative framework described in \cite{Tom} to solve it, using the alternative unconstrained formulation \begin{equation} \begin{split} \arg\min_{\textbf{x}} \frac{1}{2} \sum_{m=1}^{p}\parallel \textbf{y}_{m}-\textbf{O}_{m}\textbf{H}_{m}\textbf{W}_{m}\textbf{x}\parallel _{2}^{2} \\ + \mu \parallel \textbf{d} \parallel_{1} +\frac{ \lambda }{2} \parallel \textbf{d}- \nabla \textbf{x} \parallel_{2}^{2} \end{split} \end{equation} where $\lambda$ is the penalty parameter. 
The iterates to solve the above problem are \begin{equation} \begin{split} [\textbf{x}^{k+1},\textbf{d}^{k+1}] = \arg\min_{\textbf{x},\textbf{d}} \ \frac{1}{2} \sum_{m=1}^{p}\parallel \textbf{y}_{m}-\textbf{O}_{m}\textbf{H}_{m}\textbf{W}_{m}\textbf{x}\parallel _{2}^{2} \\+ \mu \parallel \textbf{d} \parallel_{1} +\frac{ \lambda }{2} \parallel \textbf{d}- \nabla \textbf{x} + \textbf{b}^{k}\parallel_{2}^{2} \end{split} \end{equation} \begin{equation} \textbf{b}^{k+1} = \nabla \textbf{x}^{k+1} + \textbf{b}^{k} -\textbf{d}^{k+1} \end{equation} We can now split the above problem into two sub-problems as \\ \\ \textbf{Sub Problem 1:} \begin{equation} \begin{split} [\textbf{x}^{k+1}] = \arg\min_{\textbf{x}} \ \frac{1}{2} \sum_{m=1}^{p}\parallel \textbf{y}_{m}-\textbf{O}_{m}\textbf{H}_{m}\textbf{W}_{m}\textbf{x}\parallel _{2}^{2} \\ + \frac{ \lambda }{2} \parallel \textbf{d}^{k}- \nabla \textbf{x} + \textbf{b}^{k}\parallel_{2}^{2} \end{split} \end{equation} This sub-problem is solved by a gradient descent method.\\ \\ \textbf{Sub Problem 2:} \begin{equation} [\textbf{d}^{k+1}] = \arg\min_{\textbf{d}} \mu \parallel \textbf{d} \parallel_{1} + \frac{ \lambda }{2} \parallel \textbf{d}- \nabla \textbf{x}^{k+1} + \textbf{b}^{k}\parallel_{2}^{2} \end{equation} This sub-problem is solved by applying the shrinkage operator as follows \begin{equation} \textbf{d}^{k+1}=shrink(\nabla \textbf{x}^{k+1}+\textbf{b}^{k},\frac{\mu}{\lambda}) \end{equation} \begin{equation} \textbf{d}^{k+1} = \frac{\nabla \textbf{x}^{k+1}+\textbf{b}^{k}}{\mid \nabla \textbf{x}^{k+1}+\textbf{b}^{k} \mid}*\max(\mid \nabla \textbf{x}^{k+1}+\textbf{b}^{k} \mid - \frac{\mu}{\lambda},0) \end{equation} The update for $\textbf{b}$ is $ \textbf{b}^{k+1} = \nabla \textbf{x}^{k+1} + \textbf{b}^{k} - \textbf{d}^{k+1}$. We tune the parameters $\mu$ and $\lambda$ to obtain the best estimate of the de-fenced image. 
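A compact sketch of this scheme, under the simplifying assumption that $\textbf{H}_{m}$ and $\textbf{W}_{m}$ are identities (registered, blur-free frames), is given below; the step size and parameter values are illustrative only, and the shrinkage threshold is taken as the standard $\mu/\lambda$ for the $\textbf{d}$ sub-problem:

```python
import numpy as np

def grad(x):
    """Forward-difference gradient with periodic boundaries."""
    return np.stack([np.roll(x, -1, axis=0) - x, np.roll(x, -1, axis=1) - x])

def div(d):
    """Divergence, the negative adjoint of grad above."""
    return (d[0] - np.roll(d[0], 1, axis=0)) + (d[1] - np.roll(d[1], 1, axis=1))

def shrink(v, t):
    """Isotropic soft-thresholding (shrinkage) operator."""
    mag = np.sqrt((v ** 2).sum(axis=0, keepdims=True))
    return v * np.maximum(mag - t, 0.0) / np.maximum(mag, 1e-12)

def split_bregman_defence(ys, masks, mu=0.01, lam=0.1, n_outer=20,
                          n_inner=5, step=0.05):
    """Alternate gradient descent on the x sub-problem with shrinkage on
    the d sub-problem, then update the Bregman variable b."""
    x = np.mean(ys, axis=0)
    d = grad(x)
    b = np.zeros_like(d)
    for _ in range(n_outer):
        for _ in range(n_inner):
            # gradient of the data term plus the quadratic coupling term
            g = sum(m * (m * x - y) for y, m in zip(ys, masks))
            g += lam * div(d - grad(x) + b)
            x -= step * g
        d = shrink(grad(x) + b, mu / lam)
        b += grad(x) - d
    return x
```

Here each `masks[m]` plays the role of $\textbf{O}_{m}$ (zero at fence pixels); in the full system the gradient of the data term would additionally involve the adjoints of $\textbf{H}_{m}$ and $\textbf{W}_{m}$.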
\section{\uppercase{Experimental Results}} For both the synthetic and real-world cases we choose four images from the corresponding video sequence. Ideally, the images should be chosen in such a way that occluded information in the reference frame reappears in the adjacent frames. The de-fencing procedure is carried out individually in each color channel and the results are combined to generate the RGB color image. For the synthetic experiment, we use the image of a tiger shown in Fig. 6. We shifted this image by (-8, -8), (8, 8) and (15, 15) pixels to obtain four different frames. Simulating a fence of 7-pixel thickness, we removed image data from all 4 frames. The proposed algorithm was then applied with an initial estimate consisting of random numbers drawn from a uniform PDF. Values of $\lambda=0.01$ and $\mu=0.00001$ are used in the optimization method. The reconstructed image shown in Fig. 6(c) was found to have a PSNR of 39.8377 dB and an SSIM of 0.9976. These quantitative results clearly validate the proposed algorithm. The convergence of the proposed method can be seen in Fig. 6(d), where we have plotted the error versus the number of Bregman iterations; the algorithm converges quickly during the first few iterations. \begin{figure}[t] \centering \subfigure[]{\label{fig:a}\includegraphics[width=37mm]{tiger_original}} \subfigure[]{\label{fig:b}\includegraphics[width=37mm]{tiger_obs1}} \subfigure[]{\label{fig:c}\includegraphics[width=37mm]{tiger_restored}} \subfigure[]{\label{fig:d}\includegraphics[width=37mm]{tiger_error}} \caption{(a) Original image. (b) Fenced image. (c) De-fenced image estimated using the proposed algorithm. (d) Error analysis over Bregman iterations.} \end{figure} Next, we conducted experiments on a video from the 'Prison Break' TV series obtained from YouTube. We have taken four frames for our algorithm, two of which are shown in Figs. 7(a), 7(b). 
We observed that the relative motion is noticeable in the body region, whereas it is smaller in the other parts. The inter-frame motion is therefore non-global, which makes the problem more challenging. We first computed the fence masks using the Gabor filter approach. The motion, or optical flow, between the frames was computed using the method proposed by \cite{Thomas}. Fig. 7(c) shows the result of \cite{Minwo}. We observe many artifacts at the lips, shirt and hair of the person, shown in the close-ups of Fig. 7(c). The proposed algorithm reconstructs the de-fenced image as shown in Fig. 7(d). We observe that the occlusions in the body region are completely filled in, with hardly any artifacts. \begin{figure}[t] \centering \subfigure[]{\label{fig:a}\includegraphics[width=32mm]{yanxi_obs1}} \subfigure[]{\label{fig:b}\includegraphics[width=32mm]{yanxi_obs4}} \subfigure[]{\label{fig:c}\includegraphics[width=32mm]{yanxi_accv2010_restored}} \subfigure[]{\label{fig:d}\includegraphics[width=32mm]{yanxi_our_restored}} \caption{(a), (b) 1st and 4th frames chosen from the video. (c) De-fenced image using \cite{Minwo}. (d) De-fenced image estimated using the proposed algorithm.} \end{figure} \begin{figure}[t] \centering \subfigure[]{\label{fig:a}\includegraphics[width=37mm]{saathiya_obs1}} \subfigure[]{\label{fig:b}\includegraphics[width=37mm]{saathiya_obs4}} \subfigure[]{\label{fig:c}\includegraphics[width=37mm]{saathiya_inpainting_res}} \subfigure[]{\label{fig:d}\includegraphics[width=37mm]{saathiya_restored100}} \caption{(a), (b) 1st and 4th frames chosen from the video. (c) Inpainting result of \cite{Konstantinos}. (d) De-fenced image estimated using the proposed algorithm.} \end{figure} Lastly, we move to a more challenging problem, wherein we use a video of a song from an Indian movie downloaded from YouTube. We have chosen four frames for our experimentation, two of which are shown in Figs. 8(a), (b). 
We notice a large amount of relative motion between the frames, especially in the person's body, and a smaller amount of motion in the background. We applied the inpainting technique proposed by \cite{Konstantinos} to the frame shown in Fig. 8(a); the result is shown in Fig. 8(c). We notice that the fence pattern is still visible in the inpainted result, particularly on the face. However, our multi-frame optimization framework uses actual data uncovered in the adjacent frames to effectively fill in the missing information in the reference image, as shown in Fig. 8(d). We also show challenging cases where our automated fence detection algorithms fail. Fig. 9(b) shows the detected fence obtained using the proposed Gabor filter approach; we observe that some fence pixels are not detected due to the similarity in color of the fence and the car tyres. We show another example using our machine learning based approach in Fig. 9(d), where the proposed approach failed to detect the fence pixels due to significant deformation in the fence shape. As a part of future work, we are investigating how to robustly detect fences when the camera is not fronto-parallel to the scene. \section{\uppercase{Conclusions}} \label{sec:conclusion} In this position paper, we proposed an automatic image de-fencing algorithm for real-world videos. We divided the problem of image de-fencing into three tasks and proposed an automatic approach for each one of them. We formulated an optimization framework and solved the inverse problem using the split Bregman technique, taking the total variation of the de-fenced image as the regularization constraint. We have evaluated our proposed algorithm on both synthetic and real-world videos. The obtained results show the effectiveness of our proposed algorithm. As part of future work, we are investigating how to optimally choose the frames from the video. 
\begin{figure}[t] \centering \subfigure[]{\label{fig:a}\includegraphics[width=37mm]{car_obs1}} \subfigure[]{\label{fig:b}\includegraphics[width=37mm]{car_fence}} \subfigure[]{\label{fig:c}\includegraphics[width=37mm, height=27mm]{bird1}} \subfigure[]{\label{fig:d}\includegraphics[width=37mm, height=27mm]{bird1_ml}} \caption{(a) 1st frame from the video. (b) Fence pixels detected using the proposed Gabor filter. (c) 1st frame from the video. (d) Fence pixels detected using the proposed machine learning approach.} \end{figure} \bibliographystyle{apalike} {\small
\section{Introduction}\label{sec:introduction} Understanding turbulence in stable density stratification is a central problem in atmosphere, ocean, and climate dynamics, as well as in the context of engineering flows \citep{Riley:2000bh}. In strongly stratified turbulence a common phenomenon is the emergence of a vertically sheared horizontal flow (VSHF). VSHFs have commonly been observed in numerical simulations of strongly stratified Boussinesq turbulence maintained by stochastic excitation \citep{Herring:1989fx,Smith:2001uo,Smith:2002wg,Laval:2003ko,Waite:2004bt,WAITE:2005bj,Brethouwer:2007uk,Marino:2014id,Rorai:2015bw,Herbert:2016cc,Kumar:2017ie}. The VSHF formation mechanism and the mechanism determining the equilibrium VSHF structure remain incompletely understood. Mechanisms that have previously been advanced include rapid distortion theory \citep{Galmiche:2002cw} and resonant interactions among gravity waves \citep{Holloway:1986wg,Smith:2001uo,Smith:2002wg}. Although resonant interactions cannot transfer energy directly into the VSHF due to its vanishing frequency, a mechanism has been suggested in which resonant interactions transfer energy toward the VSHF which is subsequently transferred into the VSHF by non-resonant interactions \citep{Smith:2001uo,Smith:2002wg}. Studying the VSHF equilibration process has proven difficult because the VSHF development timescale is long compared to the timescale for establishment of equilibrium between the turbulence and the VSHF so that obtaining a statistically steady VSHF requires long simulations \citep{Brethouwer:2007uk,Herbert:2016cc}. Computational impediments associated with equilibrating the VSHF in simulation are mitigated by investigating VSHF behaviour in the simplified model of two-dimensional (2D) stratified turbulence. This approach is predicated on establishing that the dynamics of VSHF emergence in the 2D system is similar to that in the 3D system. 
A potentially important physical difference between 2D and 3D Boussinesq dynamics is that the 3D system admits modes with vertically oriented vorticity, referred to as vortical modes, while the 2D system does not. However, \citet{Remmel:2013ce} recently compared VSHF formation in the full 3D system to that in a reduced 3D system in which these vortical modes were removed from the dynamics and found that similar VSHFs form with or without vortical modes. This result suggests that VSHF formation results from interactions that can be captured in the 2D system. Another important physical difference between 2D and 3D stratified turbulence is that 3D turbulence maintains vorticity by vortex stretching and exhibits a direct cascade of energy toward small scales \citep[\emph{e.g.,}][]{LINDBORG:2006eg}, whereas 2D turbulence does not permit vortex stretching and exhibits an inverse cascade of energy toward large scales \citep[\emph{e.g.,}][]{Kumar:2017ie}. However, previous analysis of stratified turbulence using rapid distortion theory has identified a direct and spectrally nonlocal interaction between large-scale gravity waves and large-scale shear flows that is viable as the central driver of VSHF formation and maintenance, suggesting that the details of the route to dissipation are not primarily involved in VSHF dynamics \citep{Galmiche:2002cw}. Numerical simulations have also demonstrated that VSHFs form robustly in strongly stratified 2D turbulence and that these structures have similar properties to those seen in the 3D case \citep{Smith:2001uo,Kumar:2017ie}, further indicating that VSHF dynamics can be usefully studied in 2D. 
In view of the great analytic and computational advantage afforded by the availability in 2D of statistical state dynamics to elucidate the mechanisms involved, we are motivated to begin our study of VSHF dynamics by exploiting this method, which has proven successful in addressing similar problems of structure formation in related systems \citep[see][and references therein]{FARRELL:2014we}. The spontaneous emergence of large-scale shear flows from small-scale turbulence has been extensively studied in the context of geophysical fluid dynamics, where the emergent structures are referred to as turbulent jets. The banded winds of Jupiter \citep{Vasavada:2005gs} provide a striking example in which the jet structure takes the form of statistically steady planetary-scale zonal (east-west oriented) winds that oscillate in sign as a function of latitude. Layered shear flows are also found in the weakly rotating, strongly stratified environment of the Earth's equatorial oceans. The equatorial deep jets (EDJs) are zonal currents, observed below approximately $1000$ metres depth and within $1^{\circ}$ of latitude of the equator in all ocean basins, that are characterized by a vertically sheared structure in which the zonal flow oscillates in sign as a function of depth \citep{Eden:2008hl}. Although the EDJs are reminiscent of the VSHFs that emerge in stratified turbulence simulations, these geophysical jets differ from VSHFs in that they are time dependent and exhibit phase propagation in the vertical direction \citep{Brandt:2011fp}. Nonetheless, understanding VSHF emergence in Boussinesq stratified turbulence may provide insight into the EDJs in a manner analogous to the insight provided by barotropic beta-plane turbulence into planetary-scale baroclinic jet formation \citep[\emph{e.g.,}][]{Farrell:2003ud}. As in the case of VSHFs, theoretical understanding of the origin and maintenance of geophysical planetary-scale turbulent jets is not yet secure. 
Attempts to theoretically explain the formation of large-scale structure from turbulence date back to \citet{FJoRTOFT:1953km} and \citet{Kraichnan:1967us}, who showed that nonlinear spectral broadening together with energy and vorticity conservation implies that energy is transferred, on average, from small scales to large scales in 2D inviscid unstratified turbulence (a similar inverse cascade may occur in 3D rotating turbulence \citep{Sukoriansky:2016ce}). The mechanism of 2D inverse cascade is consistent with the observed concentration of energy at large scales on Jupiter \citep{Galperin:2014tw}, as the planetary-scale flow of the weather layer is believed to be both lightly damped and nearly 2D. Similar arguments have been made for the EDJs in which the jets are suggested to result from a nonlinear cascade in which baroclinic mode energy is funneled toward the equator \citep{Salmon:1982up}. However, the Jovian jets have an intricate and nearly time-invariant structure \citep{Vasavada:2005gs}, and while the vertical structure of the EDJs has not been as well established, they are also observed to be phase coherent over long times and large length scales \citep{Youngs:2015gh}. While general arguments based on the direction of spectral energy transfer predict that the large scales will be energized in these systems, they do not predict the form of these coherent structures. Other theoretical proposals for the origins of the EDJs have been based on instabilities of finite-amplitude equatorial waves \citep{Hua:2008wd} and on the linear response of the equatorial ocean to periodic wind forcing \citep{Wunsch:1977jl,McCrearyJr:1984bb}. Although these mechanisms can produce high-wavenumber baroclinic structure near the equator, how this structure would remain coherent in the presence of turbulence remains an open question. Improving understanding of the formation and maintenance of shear flows in strongly stratified turbulence is the subject of this paper. 
We focus on a simple example, VSHF emergence in 2D stratified turbulence, which has at least suggestive connection to geophysical systems such as the EDJs. Statistical state dynamics (SSD) refers to a class of theoretical approaches to the analysis of chaotic dynamical systems in which the dynamics are expressed directly in terms of the statistical quantities of the system \citep{FARRELL:2014we}. A familiar example of SSD is the Fokker-Planck equation for the evolution of the probability distribution function of a system whose realizations evolve according to a stochastic differential equation. In this work we apply the simplest nontrivial form of SSD, known as stochastic structural stability theory (S3T) \citep{Farrell:2003ud}, to investigate VSHF formation in the stochastically excited 2D Boussinesq system. By comparing the results of analysis of the S3T system to simulations made with the full nonlinear equations (NL), we show that S3T captures the essential features of the full system, including the emergence and structure of the VSHF and associated density layers. In S3T, and the related system referred to as CE2 \citep[second-order cumulant expansion,][]{Marston:2010ew}, nonlinearity due to perturbation-perturbation advection is either set to zero or stochastically parameterized, so that the SSD is closed at second order. 
This second-order closure has proven useful in the study of coherent structure emergence in barotropic turbulence \citep{Farrell:2007fq,Marston:2008gx,Srinivasan:2012im,Tobias:2013hk,Bakas:2013ft,Constantinou:2014fh,Parker:2014ui,Bakas:2017uh}, two-layer baroclinic turbulence \citep{Farrell:2008fd,Farrell:2009cq,Marston:2010ew,Marston:2012co,Farrell:2017ed}, turbulence in the shallow-water equations on the equatorial beta-plane \citep{Farrell:2009iu}, drift wave turbulence in plasmas \citep{Farrell:2009dt,Parker:2013hy}, unstratified 2D turbulence \citep{Bakas:2011bt}, rotating magnetohydrodynamics \citep{Tobias:2011cn,Squire:2015kb,Constantinou:2018ut}, 3D wall-bounded shear flow turbulence \citep{Farrell:2012jm,Thomas:2014ek,Thomas:2015dl,Farrell:2017dx,Farrell:2017iz,Farrell:2017br}, and the turbulence of stable ion-temperature-gradient modes in plasmas \citep{StOnge:2017tu}. In the present work we place 2D stratified Boussinesq turbulence into the mechanistic and phenomenological context of the mean flow-turbulence interaction mechanism that has been identified in these other turbulent systems. In formulating the S3T dynamics for the Boussinesq system the perturbation vorticity and buoyancy variables are expressed in terms of ensemble mean two-point covariance functions. When coupled to the dynamics of the mean state this second-order perturbation dynamics contains the statistical wave-mean flow interaction between the turbulent perturbation fluxes and the horizontal mean structure. The dynamics is greatly simplified by discarding the phase information in the horizontal direction pertaining to the detailed configuration of the turbulent perturbation fields, which we will demonstrate to be inessential to the VSHF formation mechanism. 
Because the S3T dynamics is written in terms of two-point covariance functions the state space of the S3T system is of higher dimension than that of the underlying system, and use of the 2D, rather than 3D, Boussinesq system substantially reduces the resulting computational burden. The SSD approach used in S3T permits identification and analysis of cooperative phenomena and mechanisms operating in turbulence that cannot be expressed using analysis based on a single realization. For example, we will show that in the S3T system, the initial formation of the VSHF occurs through a bifurcation associated with the onset of a linear instability caused by a statistical wave-mean flow interaction mechanism in which turbulent fluxes are organized by a perturbatively small mean flow in such a way as to reinforce that flow. The resulting instability is a statistical phenomenon that lacks analytical expression in the dynamics of a single realization and therefore cannot be fundamentally understood through analysis of single realizations of the turbulent state. However, the reflection of this phenomenon is strikingly apparent in single realizations of the system, and we will demonstrate that the VSHF structures predicted to arise via S3T instabilities emerge in NL simulations of realizations. The S3T system also reveals subtle details of turbulent equilibrium structures that might not otherwise be detected from observing the NL simulations, including the turbulent modification of the horizontal mean stratification producing density layers. Although these density layers are obscured by fluctuations in snapshots of the flow, time-averaging reveals that they coincide with the structure predicted by the S3T system. The present work is closely related to our recent work, \citet{Fitzgerald:2018vx}, in which we apply the linearized differential formulation of S3T, originally developed by \citet{Srinivasan:2012im}, to analyze VSHF formation in 2D Boussinesq turbulence. 
In \citet{Fitzgerald:2018vx} we focus on the initial linear formation process of VSHFs and analyze how this process depends on the structure of the underlying turbulence and how individual physical processes contribute to the VSHF formation mechanism. In the present work, we instead apply the conventional matrix formulation of S3T and focus on analyzing the structure and maintenance mechanisms of finite amplitude equilibria in 2D Boussinesq turbulence. The structure of the paper is as follows. In \textsection \ref{sec:NLphenom} we introduce the 2D stochastically excited Boussinesq equations and present NL simulation results demonstrating VSHF formation. In \textsection \ref{sec:testfunction} we use SSD to illustrate the wave-mean flow interaction mechanism underlying VSHF formation and maintenance. In \textsection \ref{sec:QL_and_SSD_formulation} we formulate the deterministic S3T system in its conventional matrix form and also the intermediate quasilinear (QL) system, which provides a stochastic approximation to the second-order closure and bridges the gap between NL simulations and the S3T system. In \textsection \ref{sec:modelcomparison} we show that the primary phenomena observed in NL simulations are captured by the QL and S3T systems. In \textsection \ref{sec:scaleselection} we carry out a linear stability analysis of the S3T system and relate the results to the scale selection of the initially emergent VSHF in NL simulations. In \textsection \ref{sec:equilibration} we analyze the finite-amplitude equilibration of the VSHF as a function of the strength of the stochastic excitation. In \textsection \ref{sec:multiple_equilibria} we show that multiple simultaneously stable turbulent equilibrium states exist in this system, a phenomenon which is predicted by S3T and verified in the NL simulations. 
In \textsection \ref{sec:bifurcation_comparison} we compare the NL, QL, and S3T systems as a function of the excitation strength and show that the VSHF-forming bifurcation predicted by S3T is reflected in the NL and QL systems. We conclude and discuss these results in \textsection \ref{sec:conclusions}. Appendix \ref{sec:appendixA} describes a simplified model system in which the mathematical structure and conceptual utility of S3T is revealed simply. Appendix \ref{sec:appendixB} provides analytical details required for the linear stability analysis. \section{VSHF Formation in Simulations of 2D Boussinesq Turbulence}\label{sec:NLphenom} We study VSHF formation using the 2D stochastically excited Boussinesq equations using a unit aspect ratio $(x,z)$ computational domain doubly periodic with length $L$ in the $x$ and $z$ directions. Anticipating the development of horizontal mean structure we use a Reynolds decomposition in which the averaging operator is the horizontal mean. The resulting equations, in terms of the mean velocity, perturbation vorticity, and mean and perturbation buoyancy are \begin{align} \frac{\partial U}{\partial t} &= -\frac{\partial}{\partial z}\overline{u'w'}-r_m U + \nu \frac{\partial^2 U}{\partial z^2}, \label{eq:NL1} \\ \frac{\partial B}{\partial t} &= -\frac{\partial}{\partial z}\overline{w'b'}-r_m B + \nu \frac{\partial^2 B}{\partial z^2}, \label{eq:NL2} \\ \frac{\partial \Delta \psi'}{\partial t} &= - U\frac{\partial \Delta \psi'}{\partial x} +w'\frac{\partial^2 U}{\partial z^2} + \frac{\partial b'}{\partial x} -\left[J(\psi',\Delta \psi')-\overline{J(\psi',\Delta \psi')}\right]-r \Delta \psi'+\nu \Delta^2 \psi' + \sqrt{\varepsilon}S, \label{eq:NL3} \\ \frac{\partial b'}{\partial t} &= -U \frac{\partial b'}{\partial x}-w'\left(N_0^2+\frac{\partial B}{\partial z}\right) -\left[J(\psi',b')-\overline{J(\psi',b')}\right]-rb'+\nu \Delta b'. 
\label{eq:NL4} \end{align} In these equations an overline indicates a horizontal mean and primes indicate deviations from the horizontal mean so that $f'=f-\overline{f}$. The velocity is $\boldsymbol{u}=(u,w)$ with $u$ and $w$ the horizontal and vertical velocity components, $U=\overline{u}$ is the horizontal mean horizontal velocity, $b$ is the buoyancy with $B=\overline{b}$ the horizontal mean buoyancy, $\psi$ is the streamfunction satisfying $(u,w)=(-\partial_z \psi,\partial_x \psi)$, and the vorticity is $\Delta \psi=\partial_x w - \partial_z u$ where $\Delta = \partial_{xx}^2+\partial_{zz}^2$ is the Laplacian operator. Perturbation-perturbation advection terms are written using the Jacobian $J(f,g)=(\partial_x f )(\partial_z g)- (\partial_x g)(\partial_z f)$. $\sqrt{\varepsilon}S$ denotes the stochastic excitation, which has zero horizontal mean and excites the perturbation vorticity only. $\varepsilon$ controls the strength of the excitation. $N_0$ is the constant background buoyancy frequency. Dissipation is provided by Rayleigh drag and diffusion acting on both the buoyancy and vorticity fields. Consistent with previous studies of VSHF formation, dissipation coefficients are assumed equal for vorticity and buoyancy, \emph{i.e.,} the Prandtl numbers associated with the Rayleigh drag and with the diffusive dissipation are each set equal to one. To approximate the effects of diffusive turbulent dissipation, which damps the large scales less rapidly, the Rayleigh drag on the mean fields (with coefficient $r_m$) is typically taken to be weaker than that on the perturbation fields (with coefficient $r$). We refer to equations (\ref{eq:NL1})-(\ref{eq:NL4}) as the NL equations (for fully nonlinear) to distinguish them from the quasilinear (QL) and S3T systems which we formulate in \textsection \ref{sec:QL_and_SSD_formulation}. 
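As an aside, the averaging convention and the Jacobian appearing in (\ref{eq:NL1})-(\ref{eq:NL4}) can be sketched numerically; the grid, resolution, and centred differences below are illustrative choices of ours and are not the discretization used in our simulations.

```python
import numpy as np

# Illustrative sketch of the averaging convention: horizontal mean, perturbation
# f' = f - fbar, and the Jacobian J(f,g) = f_x g_z - g_x f_z on a doubly
# periodic unit-aspect-ratio grid (hypothetical discretization).
n, L = 128, 1.0
x = np.linspace(0.0, L, n, endpoint=False)
X, Z = np.meshgrid(x, x, indexing="ij")   # axis 0 is x, axis 1 is z

def hmean(f):
    """Horizontal (x) average; a function of z only."""
    return f.mean(axis=0)

def pert(f):
    """Deviation from the horizontal mean, f' = f - fbar."""
    return f - hmean(f)[None, :]

def ddx(f):
    return (np.roll(f, -1, axis=0) - np.roll(f, 1, axis=0)) * n / (2.0 * L)

def ddz(f):
    return (np.roll(f, -1, axis=1) - np.roll(f, 1, axis=1)) * n / (2.0 * L)

def jacobian(f, g):
    return ddx(f) * ddz(g) - ddx(g) * ddz(f)

psi = np.sin(2 * np.pi * X / L) * np.cos(4 * np.pi * Z / L)
# A perturbation field has zero horizontal mean by construction,
# and J(f, f) vanishes identically.
assert np.allclose(hmean(pert(psi)), 0.0)
assert np.allclose(jacobian(psi, psi), 0.0)
```

The identity $J(f,f)=0$ holds exactly even for the discrete operators, since the two terms cancel term by term.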
Use of Rayleigh drag in (\ref{eq:NL1})-(\ref{eq:NL4}) departs from the diffusive dissipation commonly used in simulating stratified turbulence \citep[\emph{e.g.},][]{Smith:2001uo}. Rayleigh drag provides a simplified parameterization of dissipation that allows the system to reach statistical equilibrium quickly, enabling simulations to reach the asymptotic state of the VSHF, which is difficult to study comprehensively using diffusive dissipation. We emphasize that the essential phenomenon of VSHF formation does not depend crucially on the details of the dissipation, which we demonstrate using examples near the end of the present section. We choose the stochastic excitation, $\sqrt{\varepsilon}S$ in (\ref{eq:NL3}), to have the spatial structure of an isotropic ring in Fourier space and to be delta-correlated in time. Figure \ref{fig:Z_Forcing_Plot} shows a snapshot of $\sqrt{\varepsilon}S$ (panel (a)) and its wavenumber power spectrum (panel (b)), in which $\boldsymbol{k}=(k,m)$ is the vector wavenumber with $k$ and $m$ the horizontal and vertical wavenumber components. The excitation is homogeneous in space and approximately isotropic, with some anisotropy being introduced by the omission of the horizontal mean ($k=0$) and vertical mean ($m=0$) components of the excitation and also by the finite domain size. We set the total wavenumber of the ring, $k_e$, to be global wavenumber six, $k_e L/(2\upi)=6$. As the excitation is delta-correlated in time, the rate at which energy is injected into the flow by the vorticity excitation is a control parameter that is independent of the system state. Here we define the kinetic energy, $K$, the potential energy, $V$, and the total energy, $E$, of the flow as \begin{align} K=[\frac{1}{2}\boldsymbol{u \cdot u}], && V=[\frac{1}{2}N_0^{-2}b^2], && E=K+V, \end{align} in which square brackets indicate the domain average. 
The energy injection rate as a function of wavenumber, denoted $\varepsilon_{k,m}$, follows a Gaussian distribution centred at $k_e$ so that $\varepsilon_{k,m}=\alpha\exp [-(|\boldsymbol{k}|-k_e)^2/\delta k^2]$, where $\delta k=2\upi/L$ sets the ring thickness and $\alpha$ is a normalization factor chosen so that the total energy injection rate summed over all wavenumbers, $\sum_{k,m}\varepsilon_{k,m}$, is equal to the value of the parameter $\varepsilon$ appearing in (\ref{eq:NL3}). With this normalization $\varepsilon$ corresponds to the rate at which the vorticity excitation injects energy into the system. Global horizontal wavenumbers 1--8 have nonzero excitation and all higher horizontal wavenumbers are omitted from the excitation. \begin{figure} \centerline{\includegraphics{Fig1_v2p0.pdf}} \caption{Spatial structure of the stochastic excitation of the vorticity field, $\sqrt{\varepsilon}S$. (a) A sample realization of the excitation pattern, shown in normalized form as $S(x,z,t)/\text{max}[S(x,z,t)]$. (b) The wavenumber power spectrum of $S$, shown in normalized logarithmic form as $\ln(P(k,m)/\text{max}[P(k,m)])$. Here we define $P(k,m)=\langle |\tilde{S}_{k,m}|^2\rangle$ in which $\tilde{S}_{k,m}(t)$ is the Fourier coefficient of the excitation when $S$ is expanded as $S(x,z,t)=\sum_{k,m}\tilde{S}_{k,m}(t)e^{\text{i}(kx+mz)}$. Angle brackets indicate the ensemble average over noise realizations. The parameters of the excitation are $k_e/2\upi=6$ and $\delta k /2\upi=1$.} \label{fig:Z_Forcing_Plot} \end{figure} Equations (\ref{eq:NL1})-(\ref{eq:NL4}) are nondimensionalized by choosing the unit of length to be the domain size, $L$, and the unit of time to be the Rayleigh damping time of the perturbations, $1/r$. The nondimensional parameters of the problem are $\widehat{k_e}=Lk_e$, $\widehat{\delta k}=L\delta k$, $\widehat{r_m}=r_m/r$, $\widehat{\nu}=\nu/(L^2 r)$, $\widehat{\varepsilon}=\varepsilon/(r^3 L^2)$, and $\widehat{N_0^2}=N_0^2/r^2$. 
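A normalized ring spectrum of this kind can be sketched as follows; the grid size and variable names are ours, and the construction is illustrative rather than the excitation code used for the simulations.

```python
import numpy as np

# Sketch of the ring excitation spectrum: Gaussian in |k| about k_e with width
# dk, horizontal-mean and vertical-mean components omitted, horizontal
# wavenumbers above global wavenumber 8 removed, and the total normalized so
# that the injection rates sum to eps (hypothetical grid and names).
L, n = 1.0, 64
eps = 0.25
ke, dk = 2 * np.pi * 6 / L, 2 * np.pi / L
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)     # angular wavenumbers
K, M = np.meshgrid(k, k, indexing="ij")
spec = np.exp(-(np.sqrt(K**2 + M**2) - ke)**2 / dk**2)
spec[K == 0] = 0.0                         # no horizontal-mean (k = 0) excitation
spec[M == 0] = 0.0                         # no vertical-mean (m = 0) excitation
spec[np.abs(K) > 2 * np.pi * 8 / L] = 0.0  # truncate horizontal wavenumbers > 8
spec *= eps / spec.sum()                   # alpha chosen so sum eps_{k,m} = eps
assert np.isclose(spec.sum(), eps)
```

The final rescaling plays the role of the normalization factor $\alpha$: whatever the ring shape, the total injection rate equals $\varepsilon$ by construction.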
We hold fixed the parameters $\widehat{k_e}/2\upi=6$, $\widehat{\delta k }/2\upi=1$, $\widehat{r_m}=0.1$, $\widehat{\nu}=2.4\times 10^{-5}$, and $\widehat{N_0^2}=10^3$ unless otherwise stated. The choice of $\widehat{k_e}$ represents a compromise between providing separation between the excitation scale and the domain scale while minimizing the effects of diffusion on perturbations at the excitation scale. Modelling scale-selective diffusive dissipation motivates setting $\widehat{r_m}<1$ and our specific choice to set $\widehat{r_m}=0.1$, so that the mean fields are damped ten times less rapidly than the perturbation fields, is made for computational convenience. We examine the sensitivity of the system to this choice in figure \ref{fig:newfig1} (a). The value of $\widehat{\nu}$ is small and was selected to ensure numerical convergence. The rate of energy injection by the excitation, $\widehat{\varepsilon}$, is the primary control parameter which is varied to determine the response of the system to changes in excitation. We choose $\widehat{N_0^2}=10^3$ to place the system in the strongly stratified regime in which VSHFs have previously been found to form \citep{Smith:2001uo,Smith:2002wg}. The strongly stratified regime is also the regime relevant to the EDJs. For example, taking the equatorial deep stratification as $N_{\text{deep}} \sim 2\times10^{-3} \text{ s}^{-1}$, a typical gravity wave wavelength of $\lambda_{\text{GW}}\sim 10 \text{ km}$, and a lateral eddy viscosity of $\nu_{\text{eddy}}\sim 100 \text{ m}^2 \text{ s}^{-1}$ gives an effective Rayleigh drag coefficient of $r_{\text{eff}}\sim (2\upi/\lambda_{\text{GW}})^2 \nu_{\text{eddy}}\sim 4\times10^{-5}\text{ s}^{-1}$ and so $\widehat{N_0^2}_{\text{,EDJ}}=N_{\text{deep}}^2/r_{\text{eff}}^2\sim2500$. 
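The arithmetic behind this dimensional estimate is elementary and can be verified directly (a sketch in SI units; variable names are ours).

```python
import numpy as np

# Dimensional estimate of the effective EDJ parameters quoted above.
N_deep = 2.0e-3        # equatorial deep stratification, s^-1
lam_gw = 10.0e3        # typical gravity-wave wavelength, m
nu_eddy = 100.0        # lateral eddy viscosity, m^2 s^-1

r_eff = (2 * np.pi / lam_gw) ** 2 * nu_eddy   # effective Rayleigh drag, ~4e-5 s^-1
Nsq_hat = N_deep**2 / r_eff**2                # nondimensional N^2, ~2500
```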
Although we do not attempt in this work to model the EDJs, which have 3D structure and are influenced by rotation and boundaries, this estimate suggests that the presently studied idealized turbulence is in the appropriate parameter regime to allow comparison between our VSHF dynamics and EDJ phenomena. For the remainder of the paper we work exclusively in terms of nondimensional parameters and drop hats in our notation. We now summarize the behaviour of an NL simulation exhibiting VSHF formation in which the system was integrated from rest over $t\in[0,60]$ with $\varepsilon=0.25$ and the other parameters as described above, which we refer to as the standard parameter case. The standard case value of $\varepsilon$ places the system in the parameter regime in which strong VSHF formation occurs; the sensitivity of the system to $\varepsilon$ is examined in \textsection 6, \textsection 7, and \textsection 9. In \textsection \ref{sec:modelcomparison} we compare the first- and second-order statistical features of NL simulations with the results of the QL and S3T simulations. To perform the numerical integration we use a 2D finite-difference configuration of DIABLO \citep{Taylor:2008um} with 512 gridpoints in both the $x$ and $z$ directions. To estimate the canonical scales and nondimensional parameters of the standard case simulation we use the estimates $U_0\sim\sqrt{\varepsilon}$ for the velocity scale and $L_0\sim1/k_e$ for the length scale. The velocity scale is estimated based on the approximate energy balance in the absence of a VSHF, $\dot{E}\approx -2E+\varepsilon$, together with the estimate $U_0\sim\sqrt{E}$. Using these estimates, the Froude number of the standard parameter case is $Fr\equiv U_0/(L_0N_0)\approx 0.6$, the Ozmidov wavenumber is $k_O/(2\upi)\approx 56$ where $k_O\equiv (N_0^3/\varepsilon)^{1/2}$, and the buoyancy wavenumber is $k_b/(2\upi)\approx 10$ where $k_b\equiv N_0/U_0 \sim N_0/\sqrt{\varepsilon}$. 
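These scale estimates reduce to simple arithmetic and can be checked directly; the sketch below uses the nondimensional standard-case parameters together with the estimates $U_0\sim\sqrt{\varepsilon}$ and $L_0\sim 1/k_e$ stated above.

```python
import numpy as np

# Check of the standard-case diagnostic scales (nondimensional units).
eps, N0sq, nu = 0.25, 1.0e3, 2.4e-5
ke = 2 * np.pi * 6
N0, U0 = np.sqrt(N0sq), np.sqrt(eps)

Fr = U0 * ke / N0                        # Froude number U0/(L0*N0), ~0.6
kO = np.sqrt(N0**3 / eps) / (2 * np.pi)  # Ozmidov wavenumber, ~56
kb = (N0 / U0) / (2 * np.pi)             # buoyancy wavenumber, ~10
Re_b = eps / (nu * N0sq)                 # buoyancy Reynolds number, ~10.4
```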
The buoyancy Reynolds number is conventionally defined as $Re_b \equiv \varepsilon/(\nu N_0^2)$ and is used to estimate the ratio of the vertical advection term to the viscous damping term in the horizontal momentum equation in 3D stratified turbulence \citep{Brethouwer:2007uk}. Using this definition, the value of $Re_b$ in the standard parameter case is $Re_b \approx 10.4$. Although our system is 2D and includes Rayleigh drag, this estimate of $Re_b$ is consistent with the time average value in the standard case simulation of the ratio of interest, $(w' \partial_z u')_{RMS}/(-u'+\nu\Delta u')_{RMS}\approx10.7$, where the time average is calculated over the final 15 time units of the simulation and the subscript RMS denotes the root mean square average over space. Indicative example snapshots and time series of the NL system are shown in figures \ref{fig:NL_Phenom_FlowSnapshots}, \ref{fig:NL_Phenom_Hovmollers}, and \ref{fig:NL_Phenom_Energy_timeseries}. Near the start of the integration (figure \ref{fig:NL_Phenom_FlowSnapshots} (a,b)), the structure of the flow reflects the structure of the stochastic excitation and is incoherent with a dominant length scale corresponding to the stochastic excitation scale, $1/k_e$. By $t=60$ (figure \ref{fig:NL_Phenom_FlowSnapshots} (c,d)) the system has evolved into a state in which the flow is dominated by the VSHF, $U$, which manifests as horizontal `stripes' in both the vorticity and streamfunction fields with vertical wavenumber $m_U/(2\upi)=6$. Simulated realizations of the NL system in the standard parameter case are always found to form a VSHF, but the VSHF wavenumber, $m_U$, differs slightly between simulations when the system is initialized from rest. We focus, in this section, on an example in which $m_U/(2\upi)=6$ to facilitate comparison with SSD results in \textsection \ref{sec:modelcomparison}. 
However, VSHFs with $m_U/(2\upi)=7$ form somewhat more frequently, which we discuss in \textsection \ref{sec:scaleselection}. We analyze how the VSHF wavenumber, $m_U$, is related to the parameters in \textsection \ref{sec:scaleselection} and \textsection \ref{sec:equilibration}, but presently note that in the standard parameter case $m_U$ is closely related to the excitation wavenumber, $k_e/(2\upi)=6$, and that $m_U$ differs from the Ozmidov wavenumber, $k_O/(2\upi)\approx 56$, and from the buoyancy wavenumber, $k_b/(2\upi)\approx 10$. The time evolution of $U$ is shown in figure \ref{fig:NL_Phenom_Hovmollers} (a). The VSHF forms by $t\approx15$ and persists until the end of the integration. Figure \ref{fig:NL_Phenom_Energy_timeseries} shows the time evolution of the kinetic energy of the VSHF, $\overline{K}$, and of the perturbations, $K'$, where these energies are defined as \begin{align} \overline{K}=[\frac{1}{2}U^2], && K'=[\frac{1}{2}\boldsymbol{u' \cdot u'}]. \end{align} The VSHF is the energetically dominant feature of the statistically steady flow, containing approximately six times more kinetic energy than the perturbations. In the statistical equilibrium state, the kinetic energy that is injected into the perturbation field by the stochastic excitation is transferred both into the mean flow, thereby maintaining the VSHF, and into the buoyancy field. Energetic balance is maintained by dissipation of the mean and perturbation energies at large scales by Rayleigh drag, with viscosity contributing only weakly to the total dissipation. \begin{figure} \centerline{\includegraphics{Fig2_v2p0.pdf}} \caption{Snapshots of the vorticity, streamfunction, and velocity fields for the standard case NL simulation showing the development of the VSHF in turbulence. Just after initialization ($t=2.5$), the vorticity field (a) and the streamfunction and associated velocity field (b) are characterized by perturbations at the scale of the excitation. 
The system evolves into a statistical equilibrium state by $t=60$ in which the vorticity field (c) is dominated by horizontal stripes with alternating sign indicative of a strong VSHF. The streamfunction and velocity field at $t=60$ (d) show that the VSHF is the dominant feature of the instantaneous flow. Parameters are set to the standard values $r_m=0.1$, $N_0^2=10^3$, $k_e/2\upi=6$, $\delta k/2\upi=1$, $\nu=2.4\times 10^{-5}$, and $\varepsilon=0.25$. The buoyancy Reynolds number is $Re_b=10.4$ and the Froude number is $Fr=0.6$.} \label{fig:NL_Phenom_FlowSnapshots} \end{figure} \begin{figure} \centerline{\includegraphics{Fig3_v2p0.pdf}} \caption{Development of the VSHF and associated density layers in the standard case NL simulation. (a) Time evolution of the horizontal mean flow, $U$, which develops from zero at $t=0$ into a persistent VSHF pattern with vertical wavenumber $m_U/2\upi=6$ by $t\approx15$. (b) Time evolution of the horizontal mean stratification, $\overline{N^2}$, which develops into a pattern with vertical wavenumber $m_B/2\upi=12$ that is phase-aligned with $U$ so that regions of weak stratification coincide with the shear regions of the VSHF structure.} \label{fig:NL_Phenom_Hovmollers} \end{figure} \begin{figure} \centerline{\includegraphics{Fig4_v2p1.pdf}} \caption{Kinetic energy evolution in the standard case NL simulation. In statistically steady state, the kinetic energy of the VSHF (dotted line) is approximately six times that of the perturbations (solid line).} \label{fig:NL_Phenom_Energy_timeseries} \end{figure} Although the phenomenon of VSHF emergence in stratified turbulence is well-known, the concurrent development of coherent horizontal mean structure in the buoyancy field has not been emphasized in the literature. Figure \ref{fig:NL_Phenom_Hovmollers} (b) shows the time evolution of the horizontal mean stratification $\overline{N^2}=N_0^2+\partial_z B$. 
Although $\overline{N^2}$ exhibits more temporal variability than $U$, it is clear that for these parameter values the turbulent fluxes systematically weaken the stratification ($\overline{N^2}<N_0^2$) in the shear regions of the VSHF. Association of mean stratification anomalies with the mean shear produces a vertical wavenumber in $\overline{N^2}$ of $m_{B}/2\upi=12$, twice that of the $m_U/2\upi=6$ structure of the VSHF. The statistical equilibrium horizontal mean state, obtained by averaging the flow subsequent to a spin-up period of 30 time units, is shown in figure \ref{fig:4panel_USNsqRi_meanplot}. Panels (a) and (b) show that, for these parameters, the VSHF has a vertical structure that deviates somewhat from harmonic, with flattened shear regions resulting in a profile resembling a sawtooth structure. Comparison of panels (b) and (c) reveals that the shear extrema coincide with the minima of $\overline{N^2}$. These $\overline{N^2}$ minima correspond to narrow density layers in which $\overline{N^2}$ is reduced by approximately $40\%$ relative to $N_0^2$. Similar density layers have been reported in observations and simulations of the EDJs \citep{Menesguen:2009uu}. As the vertical integral of $\overline{N^2}-N_0^2$ must vanish, by (\ref{eq:NL2}) together with the vertical periodicity of the boundary conditions, the narrow density layers are compensated by regions of enhanced stratification. These regions of enhanced stratification have a characteristic structure in which the $\overline{N^2}$ maxima occur just outside the extrema of $U$, with weak local minima of $\overline{N^2}$ occurring at the locations of the VSHF peaks. The locations of strongest shear and weakest stratification correspond to the local minima of the horizontal mean Richardson number, $\overline{Ri}=\overline{N^2}/(\partial_z U)^2$, as shown in figure \ref{fig:4panel_USNsqRi_meanplot} (d). 
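The geometry of this phase relationship can be illustrated with a sketch: an idealized harmonic VSHF together with a hypothetical stratification anomaly at twice its wavenumber. The amplitudes $U_0$ and $a$ below are chosen by us for illustration, not taken from the simulation.

```python
import numpy as np

# Idealized profiles: harmonic U(z) at wavenumber m_U, an N^2 anomaly at 2*m_U
# (via cos^2), with the weakest stratification in the shear regions.
n, N0sq = 512, 1.0e3
mU = 2 * np.pi * 6
U0, a = 0.73, 0.4      # hypothetical amplitudes
z = np.linspace(0.0, 1.0, n, endpoint=False)

U = U0 * np.sin(mU * z)
shear = U0 * mU * np.cos(mU * z)
Nsq = N0sq * (1.0 - a * np.cos(mU * z) ** 2)   # vertical wavenumber 2*m_U
Ri = Nsq / (shear**2 + 1.0e-12)

# Ri is minimized where the shear is strongest and the stratification weakest.
i = int(np.argmin(Ri))
assert np.isclose(abs(np.cos(mU * z[i])), 1.0, atol=1e-2)
```

With these amplitudes the minimum of $Ri$ is approximately $0.79$, above the Miles-Howard threshold of $1/4$ and representative of the equilibrium state discussed below.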
The minimum value of $\overline{Ri}$ is near $\overline{Ri}\approx 0.8>0.25$, indicating that the time mean VSHF structure would be free of modal instabilities in the absence of excitation and dissipation by the Miles-Howard (MH) criterion. Although the MH criterion is formally valid only for steady unforced inviscid flows, it remains useful in our stochastically maintained turbulent flow to guide intuition about the maximum stable shear attainable by the VSHF for a given stratification. We note that this usage of the MH criterion differs from an alternate usage in which $Ri$ is used to distinguish between regions of a flow that are likely to become laminar and regions that are likely to maintain turbulence. This alternative interpretation of the implication of $Ri$ is based on the fact that large perturbation growth is obtained by optimal perturbations in shear flows for which $Ri>1/4$ although modal instability is not permitted \citep{Farrell:1993ue}. In accord with this result turbulence is observed to be supported in shear flows with $Ri>1$ \citep{Galperin:2007dk}. \begin{figure} \centerline{\includegraphics{Fig5_rev_v1p0.pdf}} \caption{Vertical structure of the time average horizontal mean state in the standard case NL simulation. (a) Mean flow, $U$. (b) Mean shear, $\partial U/\partial z$. (c) Mean stratification, $\overline{N^2}$. The vertical dashed line indicates $N_0^2$. (d) Mean Richardson number, $\overline{Ri}=\overline{N^2}/(\partial U/\partial z)^2$. The vertical dashed line indicates $\overline{Ri}=1/4$. Profiles are time averages over $t\in[30,60]$ of the structures shown in figure \ref{fig:NL_Phenom_Hovmollers}.} \label{fig:4panel_USNsqRi_meanplot} \end{figure} To demonstrate that VSHF formation is robust to changes in the control parameters, we show in figure \ref{fig:newfig1} the time evolution of $U$ in four additional cases. Panels (a) and (b) show the response of the system to changes in the dissipation parameters. 
Panel (a) shows the development of $U$ when the Rayleigh drag on the mean fields is increased by a factor of five ($r_m=0.5$). The mean fields in this case are damped half as rapidly as the perturbations, rather than ten times less rapidly as in the standard case. The excitation strength is $\varepsilon=0.5$ and other parameters are as in the standard case, so that $Re_b = 20.8$ and $Fr = 0.84$. The VSHF has $m_U/(2\upi)=7$ and is similar to that seen in the standard case in figure \ref{fig:NL_Phenom_Hovmollers} (a). Panel (b) shows the effect of removing Rayleigh drag entirely ($r=r_m=0$), so that all dissipation is provided by diffusion. In this case, some ambiguity arises regarding how the other parameters should be set, as we nondimensionalize time by the perturbation damping time, $1/r$, in examples other than this figure. For simplicity we retain all parameters as they are set in the standard case, as if Rayleigh drag were still present with $r=1$, which gives $Re_b = 10.4$ and $Fr = 0.23$, where for this example only we use the definition $Fr=(\varepsilon k_e^2)^{1/3}/N_0$ due to the absence of Rayleigh drag. The VSHF in this example initially emerges with $m_U/(2\upi)\approx6$ before transitioning to larger scale (smaller $m_U$) as the integration is continued. Transition of the VSHF to smaller values of $m_U$ for weaker damping or stronger excitation is consistent with previous studies of VSHF emergence \citep{Herring:1989fx,Smith:2001uo,Smith:2002wg} and is expected on the basis of analysis of the SSD system in the case of strong excitation, as we show in \textsection \ref{sec:equilibration}. Panels (c) and (d) of figure \ref{fig:newfig1} show the response of the system to reductions in stratification. In these examples we reduce the excitation strength to $\varepsilon=1.5\times 10^{-2}$ for ease of comparison because VSHFs form more rapidly at these stratification values than they do in the standard case. 
Panel (c) shows the development of $U$ when the stratification is reduced by a factor of ten relative to the standard case ($N_0^2=100$ rather than $N_0^2=1000$, corresponding to $Re_b=6.3$, $Fr=0.05$) and panel (d) shows the effect of reducing the stratification by a factor of 25 relative to the standard case ($N_0^2=40$, $Re_b=15.6$, $Fr=0.12$). As in the case of modified dissipation, the VSHFs in these examples develop with structures similar to those in the standard case shown in figure \ref{fig:NL_Phenom_Hovmollers} (a). We note (not shown) that VSHF formation ceases for sufficiently weak stratification \citep{Smith:2001uo,Kumar:2017ie}. We return to the dependence of the VSHF on stratification in \textsection \ref{sec:scaleselection}. \begin{figure} \centerline{\includegraphics{New_4NewCases_Hovmoller_pdf.pdf}} \caption{Time evolution of the VSHF in four additional cases. Unless otherwise stated all parameters are as in figure \ref{fig:NL_Phenom_FlowSnapshots}. (a) An example with enhanced Rayleigh drag on the mean fields, $r_m=0.5$, and excitation strength $\varepsilon=0.5$ ($Re_b = 20.8$, $Fr = 0.84$). (b) An example with zero Rayleigh drag on both the mean and perturbations, $r=r_m=0$ ($Re_b = 10.4$, $Fr = 0.23$). Dissipation is provided solely by diffusion. (c,d) Two examples with reduced stratification and with excitation strength $\varepsilon=1.5\times10^{-2}$: (c) $N_0^2=100$ ($Re_b=6.3$, $Fr=0.05$) and (d) $N_0^2=40$ ($Re_b=15.6$, $Fr=0.12$). This figure demonstrates that VSHFs form robustly when the dissipation and stratification are varied.} \label{fig:newfig1} \end{figure} \section{Mechanism of Horizontal Mean Structure Formation}\label{sec:testfunction} In a statistically steady state the VSHF, $U$, and the associated buoyancy structure, $B$, must be supported against dissipation by perturbation fluxes of momentum and buoyancy as expressed in (\ref{eq:NL1})-(\ref{eq:NL2}). 
In the absence of any horizontal mean structure (\emph{i.e.,} if $U=B=0$), isotropy of the stochastic excitation implies that the statistical mean perturbation momentum flux vanishes ($\langle \overline{u'w'}\rangle=0$, where angle brackets indicate the ensemble average over realizations of the stochastic excitation) and that the statistical mean perturbation buoyancy flux is constant ($-\partial_z \langle \overline{w'b'}\rangle=0$). For the observed horizontal mean structures to emerge and persist, their presence must modify the fluxes so that the fluxes reinforce these structures. In this section we analyze the interaction between the turbulence and the horizontal mean state and demonstrate that the horizontal mean structures do influence the turbulent fluxes in this way. We analyze turbulence-mean state interactions by applying two modifications to (\ref{eq:NL1})-(\ref{eq:NL4}). The first modification is to hold the mean fields constant as $U=U_{\text{test}}$, $B=B_{\text{test}}$. The second modification is to discard the perturbation-perturbation nonlinear terms $[J(\psi',\Delta \psi')-\overline{J(\psi',\Delta \psi')}]$ and $[J(\psi',b')-\overline{J(\psi',b')}]$ from equations (\ref{eq:NL3})-(\ref{eq:NL4}). The resulting equations are \begin{eqnarray} \frac{\partial \Delta \psi' }{\partial t} &=& - U_{\text{test}} \frac{\partial \Delta \psi'}{\partial x} +w'\frac{\partial^2 U_{\text{test}}}{\partial z^2}+ \frac{\partial b'}{\partial x} -\Delta \psi' +\nu \Delta^2 \psi' +\sqrt{\varepsilon}S, \label{eq:testfn1} \\ \frac{\partial b'}{\partial t} &=& -U_{\text{test}} \frac{\partial b'}{\partial x}-w'\overline{N^2}_{\text{test}}-b'+\nu \Delta b' ,\label{eq:testfn2} \end{eqnarray} in which $\overline{N^2}_{\text{test}}=N_0^2 + \partial_z B_{\text{test}}$. Equations (\ref{eq:testfn1})-(\ref{eq:testfn2}) are a system of linear differential equations for the perturbation fields. 
For this system the time mean fluxes are identical to the ensemble mean fluxes averaged over noise realizations and either method of averaging can be used to calculate the average fluxes in the presence of the imposed horizontal mean state ($U=U_{\text{test}}$ and $\overline{N^2}=\overline{N^2}_{\text{test}}$). We refer to the calculation of perturbation fluxes from (\ref{eq:testfn1})-(\ref{eq:testfn2}) as test function analysis, as it allows us to probe the turbulent dynamics by imposing chosen test functions for the mean flow and buoyancy, $U_{\text{test}}$ and $\overline{N^2}_{\text{test}}$. This approach has been applied to estimate perturbation fluxes in the midlatitude atmosphere \citep{Farrell:1993wf} and in wall-bounded shear flows \citep{Farrell:2012jm,Farrell:2017dx} and we will evaluate its effectiveness in the 2D Boussinesq system in \textsection \ref{sec:modelcomparison}. That the modified perturbation equations are capable of producing realistic perturbation fluxes given the observed mean flow is related to the non-normality of the perturbation dynamics in the presence of shear \citep{Farrell:1996jj}. The modified equations correctly capture the non-normal dynamics, which produce both the positive and negative energetic perturbation-mean flow interactions. The non-normal dynamics of perturbations in stratified shear flow have been analyzed in 2D \citep{Farrell:1993ue} and in 3D \citep{Bakas:2001fk,Kaminski:2014us}. As an illustrative example we show in figure \ref{fig:TestFunction_Gaussian} the results of test function analysis in the case of an imposed mean state comprised of a Gaussian jet peaked in the centre of the domain, $U_{\text{test}}=\exp(-50(z-\frac12)^2)$, and an unmodified background stratification, $\overline{N^2}_{\text{test}}=N_0^2=10^3$. 
Panel (a) shows the imposed jet, $U_{\text{test}}$, while panel (b) shows the induced perturbation momentum flux divergence, $-\partial_z \langle \overline{u'w'}\rangle$, alongside the negative of the jet dissipation, $(r_m-\nu \partial_{zz})U_{\text{test}}$. The core of the jet is clearly being supported against dissipation by the perturbation momentum fluxes resulting from its modification of the turbulence. This organization of turbulence producing up-gradient momentum fluxes in the presence of a background shear flow is the essential mechanism of VSHF emergence: an initially perturbative VSHF that arises randomly from turbulent fluctuations modifies the turbulence to produce fluxes reinforcing the initial VSHF. This wave-mean flow mechanism is consistent with the results of rapid distortion theory for stratified shear flow \citep{Galmiche:2002cw} and has been identified in simulations of decaying sheared and stratified turbulence \citep{Galmiche:2002gh}. Wave-mean flow interaction has also been hypothesized to be the mechanism responsible for the formation and maintenance of the EDJs \citep{Muench:1999dy,Ascani:2015dd}. Consistent with the results of the NL system shown in \textsection \ref{sec:NLphenom}, the buoyancy fluxes are also modified by imposing a test function horizontal mean state. Figure \ref{fig:TestFunction_Gaussian} (c) shows the imposed stratification, $\overline{N^2}_{\text{test}}$, which is equal to $N_0^2$ in this example. Figure \ref{fig:TestFunction_Gaussian} (d) shows the driving by perturbation fluxes of the stratification anomaly, $-\partial_{zz} \langle \overline{w'b'}\rangle$, alongside the negative of the dissipation of the stratification anomaly, $(r_m-\nu\partial_{zz})(\overline{N^2}_{\text{test}}-N_0^2)$, which is zero in this Gaussian jet example as $\overline{N^2}=N_0^2$. The vertical structure of $-\partial_{zz} \langle\overline{w'b'}\rangle$ is complex. 
For these parameter values the fluxes act to enhance $\overline{N^2}$ most strongly at the jet maximum, which departs from the NL results in which $\overline{N^2}$ has weak local minima at the locations of the VSHF peaks. \begin{figure} \centerline{\includegraphics{Fig6_v2p2.pdf}} \caption{Test function analysis showing the perturbation flux divergences that develop in response to an imposed horizontal mean state consisting of a Gaussian jet and an unmodified background stratification. (a) Imposed jet, $U_{\text{test}}$. (b) The resulting ensemble mean perturbation momentum flux divergence, $-\partial_z \langle \overline{u'w'} \rangle$, and the negative of the dissipation of the jet, $(r_m- \nu\partial_{zz})U_{\text{test}}$. (c) Imposed stratification, $\overline{N^2}_{\text{test}}$, which is equal to $N_0^2$ in this example. (d) Ensemble mean driving by perturbation fluxes of the stratification anomaly, $-\partial_{zz}\langle \overline{w'b'}\rangle$, and the negative of the dissipation of the stratification anomaly, $(r_m-\nu \partial_{zz})(\overline{N^2}_{\text{test}}-N_0^2)$, which is zero in this example. This example shows that a Gaussian jet organizes the turbulence so that the perturbation momentum fluxes generally accelerate the jet. The buoyancy fluxes are also organized by the jet in such a way as to drive a stratification anomaly with a complex vertical structure. Parameters are as in figure \ref{fig:NL_Phenom_FlowSnapshots}.} \label{fig:TestFunction_Gaussian} \end{figure} This simple example demonstrates the general physical mechanism of horizontal mean structures modifying turbulent fluxes so as to modify the mean state. 
However, the results of this example indicate that a Gaussian jet together with an unmodified background stratification does not constitute a steady state, as neither the jet acceleration nor the driving of the stratification anomaly due to the perturbation fluxes reflects the specific structure of the imposed mean state ($U=U_{\text{test}}$ and $\overline{N^2}=\overline{N^2}_{\text{test}}$). Although the perturbation fluxes generally act to strengthen $U$, they also distort its structure by sharpening the jet core and driving retrograde jets on the flanks. Similarly, the $\overline{N^2}=N_0^2$ structure is not in equilibrium with the buoyancy fluxes. To maintain a statistically steady mean state as seen in the NL simulations, the turbulence and the mean state must be adjusted by their interaction to produce a mean structure which the corresponding fluxes precisely support against dissipation. \begin{figure} \centerline{\includegraphics{Fig7_rev_v1p0.pdf}} \caption{Test function analysis showing the perturbation flux divergences that develop in response to an imposed horizontal mean state corresponding to that which emerges in the standard case NL simulation shown in \textsection \ref{sec:NLphenom}, with $U_{\text{test}}$ and $\overline{N^2}_{\text{test}}$ smoothed and symmetrized. Panels are as in figure \ref{fig:TestFunction_Gaussian}, with the additional vertical dashed line in panel (c) indicating $N_0^2$. This example shows that the horizontal mean structure that emerges in the NL system, consisting of the VSHF and associated density layers, organizes the turbulent fluxes so that these fluxes support the specific structure of the horizontal mean state against dissipation.
Parameters are as in figure \ref{fig:NL_Phenom_FlowSnapshots}.} \label{fig:TestFunction_Ufinal} \end{figure} To demonstrate how such cooperative equilibria are established, we show in figure \ref{fig:TestFunction_Ufinal} the results of test function analysis applied to the case in which $U_{\text{test}}$ (panel (a)) and $\overline{N^2}_{\text{test}}$ (panel (c)) are taken to be the time average profiles from the standard case NL integration discussed in \textsection \ref{sec:NLphenom}, smoothed and symmetrized so that the sixfold symmetry of the VSHF and twelvefold symmetry of $\overline{N^2}$ are made exact. As in the Gaussian jet example, the perturbation momentum fluxes support the jet against dissipation (panel (b)). However, unlike the results obtained in the case of a Gaussian jet, the approximately harmonic VSHF that emerges in the NL system leads to flux divergences that are precisely in phase with $U$. This provides an explanation for the structure of the emergent VSHF: its approximately harmonic $U$ profile is a structure which the associated statistical equilibrium fluxes precisely support. Similarly, the structure of $\overline{N^2}$ is supported against dissipation by the perturbation buoyancy fluxes. Some differences between the structure of the perturbation driving and that of the dissipation are seen in panels (b) and (d). In particular, the perturbation driving of the jet is slightly too strong, and the perturbation driving of the stratification anomaly is too strongly negative at the local stratification minima that coincide with the VSHF extrema. These differences arise because the VSHF and horizontal mean stratification anomaly tend to strengthen when perturbation-perturbation nonlinearities are discarded (see \textsection \ref{sec:modelcomparison}) and also because the smoothed and symmetrized stratification anomaly has a somewhat weaker local minimum than that which is found in snapshots of the NL system. 
This analysis demonstrates that the linear dynamics of the stochastically excited Boussinesq equations produces fluxes consistent with the emergent VSHF and density layers seen in the NL system. In this sense, test function analysis provides a `mechanism denial study' that demonstrates that spectrally local perturbation-perturbation interactions associated with a cascade of energy to large scales are not required to produce VSHFs in stratified turbulence. That VSHF formation does not occur via such a cascade has previously been noted by \citet[]{Smith:2002wg}. The analysis in this section has been conducted using an imposed, constant horizontal mean state. In the next section we extend (\ref{eq:testfn1})-(\ref{eq:testfn2}) by coupling the dynamics of the mean fields to the linearized perturbation equations to formulate the S3T implementation of SSD for this system. \section{Formulating the QL and S3T Equations of Motion}\label{sec:QL_and_SSD_formulation} The QL system is obtained by combining the perturbation equations (\ref{eq:testfn1})-(\ref{eq:testfn2}) with the NL equations for the horizontal mean state (\ref{eq:NL1})-(\ref{eq:NL2}). 
The resulting QL equations of motion are \begin{eqnarray} \frac{\partial U}{\partial t} &=& -\frac{\partial}{\partial z}\overline{u'w'}-r_m U + \nu \frac{\partial^2 U}{\partial z^2}, \label{eq:QL1} \\ \frac{\partial B}{\partial t} &=& -\frac{\partial}{\partial z}\overline{w'b'}-r_m B + \nu \frac{\partial^2 B}{\partial z^2}, \label{eq:QL2}\\ \frac{\partial \Delta \psi '}{\partial t} &=& - U \frac{\partial \Delta \psi'}{\partial x} +w' \frac{\partial^2 U}{\partial z^2} + \frac{\partial b'}{\partial x} -\Delta \psi' +\nu \Delta^2 \psi' + \sqrt{\varepsilon}S, \label{eq:QL3} \\ \frac{\partial b'}{\partial t} &=& -U \frac{\partial b'}{\partial x}-w'\left(N_0^2+\frac{\partial B}{\partial z}\right)-b'+\nu \Delta b'.\label{eq:QL4} \end{eqnarray} This system can also be obtained directly from the NL system (\ref{eq:NL1})-(\ref{eq:NL4}) by discarding the perturbation-perturbation nonlinearities $[J(\psi',\Delta \psi')-\overline{J(\psi',\Delta \psi')}]$ and $[J(\psi',b')-\overline{J(\psi',b')}]$. The QL dynamics is a coupled system that, while simplified, retains the self-consistent evolution of the horizontal mean state together with the stochastically excited turbulence. The 2D Boussinesq equations in the QL approximation have previously been applied to analyze mean flow formation in the case of an unstable background stratification \citep{Fitzgerald:2014wz}. Because (\ref{eq:QL3})-(\ref{eq:QL4}) are linear in perturbation quantities, the QL system does not retain the transfer by perturbation-perturbation interaction of perturbation energy into horizontal wavenumber components that are not stochastically excited. We choose to excite only global horizontal wavenumbers 1--8. The QL system will therefore not exhibit the full range of small scale motions seen in the NL system.
However, in \textsection \ref{sec:modelcomparison} we compare the results of QL simulations with those of the NL system, and show that the QL system reproduces the large-scale structure formation observed in the NL system. This implies that the small scale structures produced by perturbation-perturbation interaction in the NL system do not strongly influence the horizontal mean state and that a faithful representation of the turbulence at all scales is inessential for understanding the statistical structure of the turbulence to second order. The energetics of the QL system, with respect to both the mean and perturbation kinetic and potential energies, is identical to that of the NL system, with the exception that the terms originating from perturbation-perturbation interaction, which redistribute energy within the perturbation field but do not change the domain averaged kinetic or potential energies, are not retained in the QL system. The QL system thus possesses identical energetics to the NL system in the domain averaged sense. Although the QL system constitutes a substantial mathematical and conceptual simplification compared to the NL system, QL dynamics remains stochastic and exhibits significant turbulent fluctuations. These fluctuations obscure the statistical relationships between the horizontal mean structure and the turbulent fluxes discussed in \textsection \ref{sec:testfunction}. To understand the mechanism underlying these statistical relationships it is useful to formulate a dynamics directly in terms of statistical quantities, which we refer to as a statistical state dynamics (SSD). We now formulate the S3T dynamics, which is the SSD we use to study our system. S3T is a closure that retains the interactions between the horizontal mean state and the ensemble mean two-point covariance functions of the perturbation fields which determine the turbulent fluxes. 
For readers unfamiliar with S3T, Appendix \ref{sec:appendixA} provides a derivation of the S3T equations for a reduced model of stratified turbulence illustrating the conceptual utility of this closure in the context of this reduced model. Derivation of the S3T dynamics begins with the QL equations (\ref{eq:QL1})-(\ref{eq:QL4}). We expand the perturbation fields in horizontal Fourier series as \begin{align} \psi ' (x,z,t) &= \Real \left[ \sum_{n=1}^{N_k} \tilde{\psi}_n(z,t) e^{\text{i}k_n x} \right], \\ b ' (x,z,t) &= \Real \left[ \sum_{n=1}^{N_k} \tilde{b}_n(z,t) e^{\text{i}k_n x} \right]. \end{align} Here $N_k$ is the number of retained Fourier modes ($N_k=8$ for our choice of stochastic excitation) and $k_n=2\upi n$. Considering the Fourier coefficients as vectors in the discretized numerical system (\emph{e.g.}, $\tilde{\psi}_n(z,t)\to \boldsymbol{\psi}_n(t)$), the QL equations (\ref{eq:QL3})-(\ref{eq:QL4}) can be combined into the vector equation \begin{eqnarray} \frac{d}{d t}\left( \begin{array}{c} \boldsymbol{\psi}_n \\ \boldsymbol{b}_n \end{array} \right) = \mathsfbi{A}_n (\boldsymbol{U},\boldsymbol{B}) \left( \begin{array}{c} \boldsymbol{\psi}_n \\ \boldsymbol{b}_n \end{array} \right) + \left( \begin{array}{c} \sqrt{\varepsilon} \boldsymbol{\xi}_n \\ 0 \end{array} \right), \label{eq:QLmatrixform} \end{eqnarray} where $\boldsymbol{\xi}_n=\mathsfbi{\Delta}_n^{-1}\boldsymbol{S}_n$ is the $n$th horizontal Fourier component of the stochastic excitation of the streamfunction. Here $\mathsfbi{\Delta}_n=-k_n^2\mathsfbi{I}+\mathsfbi{D}^2$ in which $\mathsfbi{I}$ is the identity matrix and $\mathsfbi{D}$ is the discretized vertical derivative operator. 
The linear dynamical operator $\mathsfbi{A}_n$ is given by the expression \begin{multline} \mathsfbi{A}_n (\boldsymbol{U},\boldsymbol{B}) = \\ \left( \begin{array}{cc} -\text{i}k_n \mathsfbi{\Delta}_n^{-1} \text{diag}(\boldsymbol{U}) \mathsfbi{\Delta}_n + \text{i}k_n\mathsfbi{\Delta}_n^{-1}\text{diag}(\mathsfbi{D}^2\boldsymbol{U})-\mathsfbi{I}+\nu \mathsfbi{\Delta}_n& \text{i}k_n\mathsfbi{\Delta}_n^{-1} \\ -\text{i}k_nN_0^2\mathsfbi{I}-\text{i}k_n\text{diag}(\mathsfbi{D}\boldsymbol{B}) & -\text{i}k_n\text{diag}(\boldsymbol{U})-\mathsfbi{I}+\nu \mathsfbi{\Delta}_n \end{array} \right), \label{eq:Adefn} \end{multline} in which diag$(\boldsymbol{v})$ denotes the diagonal matrix for which the nonzero elements are given by the entries of the column vector $\boldsymbol{v}$. We now make use of the ergodic assumption that horizontal averages and ensemble averages are equal, so that, for example, $U=\overline{u}=\langle u \rangle$ and $\overline{u'w'}=\langle u'w' \rangle$. For our system, which is statistically horizontally uniform, this assumption is justified in a domain large enough that several approximately independent perturbation structures are found at each height, as seen, \emph{e.g.}, in figure \ref{fig:NL_Phenom_FlowSnapshots} (b).
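To make the discretized operators concrete, the following sketch builds the action of $\mathsfbi{D}$, $\mathsfbi{D}^2$, and $\mathsfbi{\Delta}_n=-k_n^2\mathsfbi{I}+\mathsfbi{D}^2$ using second-order centred differences on a periodic grid. The choice of scheme is our assumption for illustration; the paper does not state its discretization:

```python
import math

def apply_D(f, dz):
    """Centred first derivative on a periodic grid (one assumed scheme)."""
    n = len(f)
    return [(f[(i + 1) % n] - f[(i - 1) % n]) / (2 * dz) for i in range(n)]

def apply_D2(f, dz):
    """Centred second derivative on a periodic grid."""
    n = len(f)
    return [(f[(i + 1) % n] - 2 * f[i] + f[(i - 1) % n]) / dz**2
            for i in range(n)]

def apply_lap_n(f, k_n, dz):
    """Delta_n f = -k_n^2 f + D^2 f for horizontal wavenumber k_n."""
    d2 = apply_D2(f, dz)
    return [-k_n**2 * fi + d2i for fi, d2i in zip(f, d2)]

# Check against the exact eigenfunction sin(2 pi m z) on z in [0, 1):
N, m, n = 128, 3, 1
dz = 1.0 / N
k_n = 2 * math.pi * n                  # k_n = 2 pi n, as in the text
v = [math.sin(2 * math.pi * m * i * dz) for i in range(N)]
lap = apply_lap_n(v, k_n, dz)
target = [-(k_n**2 + (2 * math.pi * m)**2) * vi for vi in v]
err = max(abs(a - b) for a, b in zip(lap, target))
print(err / max(abs(t) for t in target))   # small discretization error
```

In matrix form these stencils are the banded periodic matrices $\mathsfbi{D}$ and $\mathsfbi{D}^2$, and $\mathsfbi{\Delta}_n^{-1}$ in (\ref{eq:Adefn}) corresponds to solving the associated linear system.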
It can then be shown (using the fact that $\sqrt{\varepsilon}S$ is delta-correlated in time) that the ensemble mean covariance matrix, defined as \begin{equation} \mathsfbi{C}_n = \left\langle \left( \begin{array}{c} \boldsymbol{\psi}_n \\ \boldsymbol{b}_n \end{array} \right) \left( \begin{array}{cc} \boldsymbol{\psi}_n^{\dagger} & \boldsymbol{b}_n^{\dagger} \end{array} \right) \right \rangle = \left( \begin{array}{cc} \langle \boldsymbol{\psi}_n \boldsymbol{\psi}_n^{\dagger} \rangle & \langle \boldsymbol{\psi}_n \boldsymbol{b}_n^{\dagger} \rangle \\ \langle \boldsymbol{b}_n \boldsymbol{\psi}_n^{\dagger} \rangle & \langle \boldsymbol{b}_n \boldsymbol{b}_n^{\dagger} \rangle \end{array} \right) = \left( \begin{array}{cc} \mathsfbi{C}_{\psi \psi , n} & \mathsfbi{C}_{\psi b,n} \\ \mathsfbi{C}_{\psi b,n}^{\dagger} & \mathsfbi{C}_{b b,n} \end{array} \right), \end{equation} in which daggers indicate Hermitian conjugation, evolves according to the time-dependent Lyapunov equation \begin{eqnarray} \frac{d}{d t}\mathsfbi{C}_n &=& \mathsfbi{A}_n (\boldsymbol{U},\boldsymbol{B}) \mathsfbi{C}_n + \mathsfbi{C}_n \mathsfbi{A}_n (\boldsymbol{U},\boldsymbol{B})^{\dagger} + \varepsilon \mathsfbi{Q}_n, \label{eq:S3T1} \\ \mathsfbi{Q}_n &=& \left[ \begin{array}{cc} \langle \boldsymbol{\xi}_n \boldsymbol{\xi}_n^{\dagger} \rangle & 0 \\ 0 &0 \end{array}\right], \end{eqnarray} where $\mathsfbi{Q}_n$ is the ensemble mean covariance matrix of the stochastic excitation and has nonzero entries only in the upper left block matrix because we apply excitation only to the vorticity field. Equation (\ref{eq:S3T1}) constitutes the perturbation dynamics of the S3T system and is the S3T analog of the QL equations (\ref{eq:QL3})-(\ref{eq:QL4}). To complete the derivation of the S3T system it remains to write the mean equations (\ref{eq:QL1})-(\ref{eq:QL2}) in terms of the covariance matrix. 
The ensemble mean perturbation flux divergences can be written as functions of the covariance matrix as \begin{eqnarray} -\frac{\partial}{\partial z}\langle \overline{u'w'}\rangle &=& \sum_{n=1}^{N_k} \frac{k_n}{2} \mbox{Im} \left[ \text{vecd} \left( \mathsfbi{\Delta}_n \mathsfbi{C}_{\psi \psi,n} \right) \right] , \\ -\frac{\partial}{\partial z}\langle \overline{w'b'}\rangle &=& \sum_{n=1}^{N_k} \frac{k_n}{2} \mbox{Im} \left[ \text{vecd} \left( \mathsfbi{D} \mathsfbi{C}_{\psi b,n} \right) \right] , \end{eqnarray} in which vecd$(\mathsfbi{M})$ denotes the vector comprised of the diagonal elements of the matrix $\mathsfbi{M}$. The mean state dynamics then become \begin{eqnarray} \frac{d}{d t}\boldsymbol{U} &=& \sum_{n=1}^{N_k} \frac{k_n}{2} \mbox{Im} \left[ \text{vecd} \left( \mathsfbi{\Delta}_n \mathsfbi{C}_{\psi \psi,n} \right) \right] -r_m \boldsymbol{U} + \nu \mathsfbi{D}^2 \boldsymbol{U} , \label{eq:S3T2} \\ \frac{d}{d t}\boldsymbol{B} &=& \sum_{n=1}^{N_k} \frac{k_n}{2} \mbox{Im} \left[ \text{vecd} \left( \mathsfbi{D} \mathsfbi{C}_{\psi b,n} \right) \right] -r_m \boldsymbol{B} + \nu \mathsfbi{D}^2 \boldsymbol{B} . \label{eq:S3T3} \end{eqnarray} Equations (\ref{eq:S3T1}), (\ref{eq:S3T2}), and (\ref{eq:S3T3}) together constitute the S3T SSD closed at second order. The S3T system is deterministic and autonomous and so provides an analytic description of the evolving relationships between the statistical quantities of the turbulence up to second order, including fluxes and horizontal mean structures, without the turbulent fluctuations inherent in the dynamics of particular realizations of turbulence, such as those present in the NL and QL systems. Although some previous attempts to formulate turbulence closures have been found to have inconsistent energetics \citep{KRAICHNAN:1957ty,OGURA:1963wp}, the dynamics of the S3T system are QL and so S3T inherits the consistent energetics of the QL system. 
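The deterministic character of the S3T perturbation dynamics (\ref{eq:S3T1}) can be illustrated by Euler-stepping the Lyapunov equation for a two-degree-of-freedom caricature of shear-perturbation dynamics (an illustrative model of our own, not the paper's discretized operator): the covariance converges to a fixed point whose off-diagonal entry is the ensemble mean momentum flux, with no realization noise.

```python
# Toy model: x = (u', w'), dx/dt = A x + noise in w', with
#   A = [[-r, -S], [0, -r]],  Q = [[0, 0], [0, q]].
# March the deterministic Lyapunov equation dC/dt = A C + C A^T + Q to
# its fixed point and compare with the closed-form steady covariance:
#   <w'w'> = q/(2r),  <u'w'> = -S q/(4 r^2),  <u'u'> = S^2 q/(4 r^3).

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

r, S, q = 1.0, 1.0, 2.0
A = [[-r, -S], [0.0, -r]]
Q = [[0.0, 0.0], [0.0, q]]
C = [[0.0, 0.0], [0.0, 0.0]]              # start from the rest state
dt, n_steps = 1e-3, 20_000                # integrate to t = 20 >> 1/r

for _ in range(n_steps):
    AC = matmul(A, C)
    CAt = matmul(C, transpose(A))
    C = [[C[i][j] + dt * (AC[i][j] + CAt[i][j] + Q[i][j])
          for j in range(2)] for i in range(2)]

print(C[0][1])   # ensemble mean flux <u'w'>; analytic value is -0.5
```

In the full S3T system this covariance step is taken per horizontal wavenumber $k_n$ and is coupled at each instant to the mean equations (\ref{eq:S3T2})-(\ref{eq:S3T3}), with the flux divergences read off the (shifted) diagonals of $\mathsfbi{C}_n$ via the vecd operation.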
That the second-order S3T closure is capable of capturing the dynamics of VSHF formation, as we demonstrate in \textsection \ref{sec:modelcomparison}, is due to the appropriate choice of mean. In the present formulation of S3T, we have chosen the mean to be the horizontal average structure so that the second-order closure retains the QL mean-perturbation interactions that account for the mechanism of VSHF formation in turbulence. Before proceeding to analysis of the QL and S3T systems, we wish to make two remarks regarding the mathematical structure and physical basis of S3T. First, we note that the ergodic assumption used in deriving S3T is formally valid in the limit that the horizontal extent of the domain tends to infinity and the number of independent perturbation structures at each height correspondingly tends to infinity. In this ideal limit described by S3T, the statistical homogeneity of the turbulence is only broken by the initial state of the horizontal mean structure (which in the examples is perturbatively small), which then determines the phase of the emergent VSHF in the vertical direction. In simulations of the QL and NL systems in a finite domain, the initial mean structure instead results from random Reynolds stresses arising from fluctuations in the perturbation fields. Second, we note that S3T is a canonical closure of the turbulence problem at second order, in that it is a truncation of the cumulant expansion at second order achieved by setting the third cumulant to zero. The mathematical structure of the cumulant expansion determines which nonlinearities are retained and discarded in the QL system, which is a stochastic approximation to the ideal S3T closure. Wave-mean flow coupling enters the equations through second order cumulants and so is retained, while perturbation-perturbation nonlinearities enter as third-order cumulants and so are not retained. 
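The truncation can be written schematically (in our notation, not the paper's), with $\langle a \rangle$ the mean state, $C$ the second cumulant, and $T$ the third-cumulant contribution:

```latex
% Schematic cumulant hierarchy: the mean evolves under fluxes that are
% linear functions of C, while C evolves under the mean-dependent linear
% operator plus excitation; S3T closes the hierarchy by setting T = 0.
\begin{eqnarray*}
\frac{\partial}{\partial t}\langle a \rangle
  &=& \mathcal{F}\left(\langle a \rangle, C\right), \\
\frac{\partial}{\partial t} C
  &=& \mathcal{A}(\langle a \rangle)\, C
      + C\, \mathcal{A}(\langle a \rangle)^{\dagger}
      + \varepsilon Q
      + T, \qquad \text{S3T closure: } T \equiv 0 .
\end{eqnarray*}
```

With $T\equiv0$ the second line has the form of (\ref{eq:S3T1}), and the first line corresponds to the mean equations (\ref{eq:S3T2})-(\ref{eq:S3T3}).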
In the next section we demonstrate that the QL and S3T systems reproduce the major statistical phenomena observed in the NL system. \section{Comparison of the NL, QL, and S3T Systems}\label{sec:modelcomparison} The most striking feature of the standard case NL simulation discussed in \textsection \ref{sec:NLphenom} is the spontaneous development of a VSHF, $U$, with $\text{max}(U)\approx1$ and vertical wavenumber $m_U/2\upi=6$. The horizontal mean stratification, $\overline{N^2}$, is also modified by the turbulence and develops a structure with vertical wavenumber $m_B/2\upi=12$ in phase with $U$ such that weakly stratified density layers develop in the regions of strongest mean shear. The VSHF in the NL system is approximately steady in time, while the horizontal mean stratification is more variable. In this section we compare these NL results to the behaviour of the QL and S3T systems for the same parameter values. We initialize the QL system from rest, matching the procedure used for the NL system. We initialize the S3T system with $\mathsfbi{C}_n$ corresponding to homogeneous turbulence together with a small VSHF perturbation (amplitude $0.1$) with $m_U/2\upi=6$ that is slightly modified by additional small perturbations (amplitude $0.005$). We note that the details of the S3T initialization are unimportant in this example because, as we will show in \textsection \ref{sec:scaleselection}, the VSHF emerges via a linear instability of the homogeneous turbulence and so any sufficiently small initial perturbation to the S3T system will evolve into an $m_U/2\upi=6$ VSHF for these parameter values. \begin{figure} \centerline{\includegraphics{Fig8_v2p0.pdf}} \caption{Development of the VSHF and associated density layers in the QL and S3T systems. Panels show the time evolution of (a) $U$ in the QL system, (b) $\overline{N^2}$ in the QL system, (c) $U$ in the S3T system, and (d) $\overline{N^2}$ in the S3T system.
This figure demonstrates that the QL and S3T systems reproduce the phenomenon of spontaneous VSHF and density layer formation shown in figure \ref{fig:NL_Phenom_Hovmollers} for the NL system. Parameters are as in figure \ref{fig:NL_Phenom_FlowSnapshots}.} \label{fig:ModelComparison_Hovmollers} \end{figure} \begin{figure} \centerline{\includegraphics{Fig9_rev_v1p0.pdf}} \caption{Comparison of the kinetic energy evolution and equilibrium VSHF profiles in the NL, QL, and S3T systems. (a) Mean and perturbation kinetic energy evolution. (b) Aligned VSHF profiles. The NL and QL profiles are averaged over $t\in[30,60]$ and the S3T profile is taken to be the state after the S3T system reaches a fixed point. This figure demonstrates that VSHF emergence in the S3T and QL systems occurs with similar structure and energy evolution to that which occurs in the NL system. Parameters are as in figure \ref{fig:NL_Phenom_FlowSnapshots}.} \label{fig:ModelComparison_Energy_and_Uprofiles} \end{figure} Figures \ref{fig:ModelComparison_Hovmollers} (a) and (c) show the time evolution of the VSHF in the QL and S3T systems (see figure \ref{fig:NL_Phenom_Hovmollers} for the corresponding evolution in the NL system). The QL and S3T systems develop VSHF structures with $m_U/2\upi=6$ and the $U$ profiles in the NL, QL, and S3T systems are compared in figure \ref{fig:ModelComparison_Energy_and_Uprofiles} (b). For the NL and QL systems the profiles are time averaged over $t\in[30,60]$, while for the S3T system we show the $U$ state after the S3T system has reached a fixed point. The aligned VSHF structures agree well across the three systems. The time evolution of the horizontal mean stratification, $\overline{N^2}$, in the QL and S3T systems is shown in figures \ref{fig:ModelComparison_Hovmollers} (b) and (d). 
Like the NL stratification, the QL profile of $\overline{N^2}$ develops a $m_B/2\upi=12$ structure that is more variable in time than $U$ and is phase-aligned with $U$ so that $\overline{N^2}$ is weakest in the regions of strongest shear. The S3T system behaves similarly but is free of fluctuations. The evolution of $\overline{N^2}$ in the S3T system also reveals that the vertical structure of $\overline{N^2}$ changes over time. During the development of the VSHF ($t\lesssim8$), the stratification is enhanced in the regions of strongest shear. As the VSHF begins to equilibrate at finite amplitude, the $\overline{N^2}$ profile reorganizes such that the shear regions are the most weakly stratified. Such reorganization may also occur in the NL and QL systems but is difficult to identify due to the fluctuations present in these systems. Figure \ref{fig:ModelComparison_Energy_and_Uprofiles} (a) shows the evolution of the mean and perturbation kinetic energies of the NL, QL and S3T systems. The growth rate of mean kinetic energy is similar in these three systems. The equilibrium mean energies differ somewhat among the systems, with the VSHFs in the S3T and QL systems having more energy than the NL VSHF. The relative weakness of the NL VSHF is consistent with the scattering of perturbation energy to small scales by the perturbation-perturbation advection terms that are included in NL but not in QL or S3T. The temporal variability of the NL and QL VSHFs, as indicated by the fluctuations in $\overline{K}$, is similar in the stochastic NL and QL systems. The VSHF in S3T is time-independent once equilibrium has been reached as the S3T VSHF corresponds to a fixed point of the S3T dynamics. \begin{figure} \centerline{\includegraphics{Fig10_rev_v1p0.pdf}} \caption{Vertical structure of the horizontal mean states of the QL and S3T systems. Panels are as in figure \ref{fig:4panel_USNsqRi_meanplot} with solid lines showing the S3T state and dotted lines showing the QL state. 
This figure demonstrates that the QL and S3T systems capture the structure of the horizontal mean state in the NL system, including the phase relationship between $U$ and $\overline{N^2}$. Parameters are as in figure \ref{fig:NL_Phenom_FlowSnapshots}.} \label{fig:New_ModelComparison_QLS3T_4panelplots} \end{figure} The relationship between the $U$ and $\overline{N^2}$ structures is shown in figure \ref{fig:New_ModelComparison_QLS3T_4panelplots} for the QL (dotted curves) and S3T (solid curves) systems (see figure \ref{fig:4panel_USNsqRi_meanplot} for the corresponding structures in the NL system). The equilibrium horizontal mean structures in the QL and S3T systems agree well with those of the NL system. The $U$ profiles (panel (a)) are approximately harmonic with somewhat flattened shear regions and, remarkably, the detailed structure of $\overline{N^2}$ seen in the NL integration is reproduced by the QL and S3T systems (panel (c)), which discard perturbation-perturbation nonlinear interactions. In particular, the presence of weak local stratification minima at the locations of the VSHF peaks is captured by the QL and S3T systems. \begin{figure} \centerline{\includegraphics{Fig11_v3p0_pdf.pdf}} \caption{Comparison of the wavenumber power spectra of kinetic ($K$) and potential ($V$) energy in the NL, QL, and S3T systems. The top row shows the 2D $K$ spectra of the (a) NL, (b) QL, and (c) S3T systems as functions of $(k,m)$, while the middle row (d)-(f) shows the 2D $V$ spectra of those systems. 2D spectra are shown in terms of their natural logarithms and no normalization is performed. The bottom row shows the kinetic energy spectra in the conventional 1D form as functions of (g) vertical wavenumber, $m$, and (h) horizontal wavenumber, $k$. In panel (g), the contributions to the spectra from the VSHFs in each system are also shown. 
This figure demonstrates that the QL and S3T systems reproduce structural details of the turbulence beyond the horizontal mean state, including the wavenumber distribution of perturbation energy at large scales. Parameters are as in figure \ref{fig:NL_Phenom_FlowSnapshots}.} \label{fig:New_ModelComparison_Spectra} \end{figure} The above comparisons demonstrate that the horizontal mean structures and domain mean kinetic energies of the QL and S3T systems show good agreement with those of the NL system. Figure \ref{fig:New_ModelComparison_Spectra} compares the energy spectra in the three systems. In panels (a)-(f), the 2D spectra of kinetic and potential energy are compared as functions of $(k,m)$, while in panels (g)-(h) the kinetic energy spectra are compared in the more traditional 1D integrated forms as functions of $k$ and $m$ separately. Panel (a) shows the kinetic energy spectrum of the NL system. The dominant and most important feature of the $K$ spectrum is the concentration of energy at $(k,m)=2\upi(0,6)$ which corresponds to the $m_U/2\upi=6$ VSHF structure. This feature is also evident in panel (g), which shows that the peak in the vertical wavenumber spectrum of NL kinetic energy is dominated by the VSHF component of the flow. The energy of the VSHF is also spread across the neighbouring vertical wavenumbers, reflecting both the deviation of the structure of the VSHF from a pure harmonic and also that fluctuations in the VSHF structure project onto nearby vertical wavenumbers. Away from the $k=0$ axis, the $K$ spectrum reveals the expected concentration of energy on the ring of excited wavenumbers $k^2+m^2=k_e^2$, and the spread of this ring to higher $m$ values. This spread is due to the shearing of the ring by the $m_U/2\upi=6$ VSHF, which produces the sum and difference wavenumber components. 
The quantitative structure of the spectrum associated with this spreading can be seen more clearly in panel (g), in which selected power law slopes are provided for reference. The most important features of the 2D $K$ spectrum of the NL system are captured by the QL and S3T systems. The QL $K$ spectrum (figure \ref{fig:New_ModelComparison_Spectra} (b)) reproduces the primary feature of the energetic dominance of the VSHF over the perturbation field, as well as the minor features of concentration of energy at $k_e$ and the spread of the excited ring structure to higher vertical wavenumbers. The 1D spectrum in panel (g) shows that the QL system quantitatively captures the spectrum of perturbation kinetic energy in the wavenumber range $m/(2\upi)\lesssim 80$. We note that the stochastic excitation directly influences the energy spectrum only near $m/(2\upi)\approx 6$, and so the agreement seen in panel (g) is not a direct result of the structure of the excitation. The primary difference between the $K$ spectra of the NL and QL systems is that the NL system scatters some kinetic energy into the unexcited part of the horizontal wavenumber spectrum ($|k|/2\upi>8$), whereas these unexcited wavenumber components have no energy in the QL system, as can also be seen in panel (h). The vertical wavenumber spectrum of $K$ in the NL system, shown in panel (g), also contains small scale structure for $m\gtrsim 80$ that is not present in the QL system and so can be attributed to perturbation-perturbation nonlinearity. The S3T $K$ spectrum (figure \ref{fig:New_ModelComparison_Spectra} (c)) also captures the most important features of the NL spectrum, but some differences between the S3T spectrum and those of the NL and QL systems are also visible. In the S3T system the VSHF energy is more strongly concentrated in the $m_U/2\upi=6$ harmonic than it is in the NL and QL systems. 
Additionally, the concentration of energy at the excited ring and the spread of energy to higher $m$ are more distinct in the S3T system than in the NL and QL systems, in which the gaps are filled in by a broad background spectrum. These features are also visible in panel (g). These minor differences between the spectrum of S3T and those of the NL and QL systems are due in part to the absence of fluctuations in the S3T system that are present in the stochastic NL and QL systems. Noise in the stochastic systems produces VSHF fluctuations that spread mean flow energy into $k=0$ modes neighbouring the $m_U/2\upi=6$ harmonic. These transient VSHF fluctuations also contribute to producing the broad background spectrum seen in the NL and QL systems by shearing the ring of excited wavenumbers. The spectrum of potential energy in the NL system is shown in figure \ref{fig:New_ModelComparison_Spectra} (d). Unlike the $K$ spectrum, which is dominated by the horizontal mean flow, $U$, the $V$ spectrum is not dominated by the horizontal mean buoyancy, $B$, although a peak is evident at the $m_B/2\upi=12$ component. In this sense, the VSHF is a `manifest' structure, whereas the horizontal mean density layers are `latent' structures \citep{Berloff:2009ct}. Other features of the spectrum are the expected concentration of potential energy at the ring wavenumber and the spread of the ring to higher vertical wavenumbers as was found for the $K$ spectrum. The $V$ spectra for the QL and S3T systems are shown in figures \ref{fig:New_ModelComparison_Spectra} (e) and (f). The QL and S3T spectra capture the peak associated with the horizontal mean buoyancy layers, the concentration of potential energy at the excitation scale, and the spread of the ring to higher $m$. The differences between the three $V$ spectra are similar to those identified when comparing the three $K$ spectra. 
Agreement between the NL, QL, and S3T systems indicates that the QL dynamics of the horizontal mean state interacting with the perturbation field accounts for the physical mechanisms responsible for determining the most important aspects of the energy spectra. We emphasize that the QL and S3T systems involve no free parameters, and that the demonstrated agreement between the three systems is not the result of parameter tuning. Because the QL and S3T systems do not contain the perturbation-perturbation interactions required to produce a turbulent cascade, agreement between these systems and the NL system indicates that, in the present model configuration, such a cascade is not essential to determining the equilibrium turbulent state, including the large-scale spectrum. Nonlinear cascades in the NL system only weakly influence the large-scale dynamics and energetics, and we note that for this reason the parameter $\varepsilon$, which is the energy injection rate of the stochastic excitation, should not be interpreted as a rate of turbulent dissipation in the sense of a classical cascade. In the NL system we employ to model VSHF formation, the turbulent dissipation rate in the classical cascade sense is close to zero, and it is exactly zero in the QL and S3T systems. Similar results have been obtained for barotropic turbulence characterized by strong zonal jets. In the presence of such jets the meridional wavenumber ($\ell$) spectrum of the zonal flow exhibits a well-known $\ell^{-5}$ structure at large scales \citep{Huang:2001im,Galperin:2010ix}. \citet{Constantinou:2015ti} showed that the barotropic S3T system, with underlying QL dynamics, captures this $\ell^{-5}$ spectrum, indicating that the large-scale structure of the spectrum in barotropic turbulence with strong zonal jets is primarily determined by perturbation-mean interaction rather than by a turbulent cascade.
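The spectral diagnostics compared above can be reproduced for any doubly periodic 2D field. The following sketch is illustrative only, with a random field standing in for model output: it computes a 2D kinetic energy spectrum, the 1D vertical-wavenumber spectrum by summing over $k$, and the VSHF contribution as the $k=0$ column, in the manner of the 1D spectra shown in figure \ref{fig:New_ModelComparison_Spectra} (g).

```python
import numpy as np

# Random field standing in for model output: u(z, x), w(z, x) on an
# N x N doubly periodic grid.
rng = np.random.default_rng(1)
N = 64
u = rng.standard_normal((N, N))
w = rng.standard_normal((N, N))

# 2D kinetic energy spectrum over (m, k); rows index the vertical
# wavenumber m, columns the horizontal wavenumber k.
uh = np.fft.fft2(u) / N**2
wh = np.fft.fft2(w) / N**2
K2d = 0.5 * (np.abs(uh)**2 + np.abs(wh)**2)

# 1D vertical-wavenumber spectrum: sum over horizontal wavenumbers k.
K_of_m = K2d.sum(axis=1)
# VSHF contribution: the k = 0 column, i.e. the horizontal mean flow.
K_vshf = K2d[:, 0]
```

With this normalization the 2D spectrum sums to the domain mean kinetic energy (Parseval's theorem), and the $k=0$ column sums to the kinetic energy of the horizontal mean flow alone.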
QL dynamics does not account for the spectrum at very small scales, which is produced by perturbation-perturbation nonlinearity and is inessential to the dynamics of VSHFs. We note that these small scale features may be important to the stirring of passive tracers at small scales \citep{Sukoriansky:2009dz}, which the QL and S3T systems would not be expected to accurately capture in the present model turbulence. Motivated by these results we proceed in the rest of this paper to exploit the S3T system to analyze the mechanisms underlying the organization of structure in stratified turbulence. \section{Linear Stability Analysis of the S3T System}\label{sec:scaleselection} In the previous section we showed that the S3T system reproduces the essential statistical features, up to second order, of the NL system, including both the structure of the horizontal mean state as well as the spectral characteristics of the perturbation field. The S3T system can be understood and analyzed with much greater clarity than the NL system because the S3T system is a deterministic and autonomous dynamical system and is amenable to the usual techniques of dynamical systems analysis. In this section we show that the emergence of VSHFs in 2D stratified turbulence can be traced to a linear instability in the SSD of the stationary state of homogeneous turbulence that has analytic expression in the S3T SSD while lacking analytic expression in the dynamics of single realizations. To determine the properties of this instability, and in particular to understand how the vertical scale of the initially emergent VSHF is selected, we now perform a linear stability analysis of the S3T system. Before linearizing the S3T system it is first necessary to obtain the fixed point statistical state that is unstable to VSHF formation. 
As shown in \textsection \ref{sec:modelcomparison}, the equilibrium state with a finite-amplitude VSHF and modified horizontal mean stratification is a fixed point of the S3T system. However, the fixed point of the S3T system whose stability we wish to analyze is the state of homogeneous turbulence that is excited by the stochastic excitation and equilibrated by dissipation. This homogeneous state is obscured in the NL and QL systems, both by noise fluctuations and (in examples for which it is SSD unstable) by the development of a VSHF, but roughly corresponds to the interval of nearly constant perturbation kinetic energy at early times ($t\lesssim 5$) in figure \ref{fig:ModelComparison_Energy_and_Uprofiles} (a). If this homogeneous turbulent state is unstable, the observed VSHF formation is explained, since unstable homogeneous turbulence cannot be sustained in the presence of small perturbations. For homogeneous turbulence $\boldsymbol{U}=\boldsymbol{B}=0$ and from (\ref{eq:S3T1}) the steady-state perturbation covariance matrix at wavenumber $k_n$ obeys \begin{equation} \mathsfbi{A}_n^{\star}\mathsfbi{C}_n^{\star}+\mathsfbi{C}_n^{\star}\mathsfbi{A}_n^{\star,\dagger}+\varepsilon \mathsfbi{Q}_n=0, \label{eq:S3TFP1} \end{equation} where the $\mathsfbi{A}_n^{\star}$ operator is given by \begin{equation} \mathsfbi{A}_n^{\star} = \left( \begin{array}{cc} -\mathsfbi{I}+\nu\mathsfbi{\Delta}_n & \text{i}k_n \mathsfbi{\Delta}_n^{-1} \\ -\text{i}k_n N_0^2 \mathsfbi{I} & -\mathsfbi{I}+\nu \mathsfbi{\Delta}_n \end{array} \right). \label{eq:S3TAstar} \end{equation} Equation (\ref{eq:S3TFP1}) can be solved analytically for $\mathsfbi{C}_n^{\star}$, and we show details of the solution in Appendix \ref{sec:appendixB}. Figure \ref{fig:Homogeneous_Spectrum_Plot} shows the kinetic and potential energy spectra for this fixed point homogeneous turbulent state.
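Equation (\ref{eq:S3TFP1}) is a (complex) Lyapunov equation and can also be solved numerically with standard routines, which provides a convenient check on the analytic solution. The following sketch is illustrative only: it uses a small periodic finite-difference stand-in for (\ref{eq:S3TAstar}), an identity matrix as a stand-in for the excitation covariance $\mathsfbi{Q}_n$, and placeholder parameter values rather than those of the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative stand-in for the operator of eq. (S3TAstar): N grid points
# in z on a periodic domain, one horizontal wavenumber k, placeholder
# parameters.
N, nu, N0sq, k, eps = 16, 0.01, 100.0, 2 * np.pi, 0.25
dz = 1.0 / N
I = np.eye(N)

# Periodic second-derivative matrix D^2 and Laplacian Delta = D^2 - k^2 I.
D2 = (np.roll(I, 1, axis=0) - 2 * I + np.roll(I, -1, axis=0)) / dz**2
Delta = D2 - k**2 * I
Dinv = np.linalg.inv(Delta)

# A* = [[-I + nu Delta,  i k Delta^{-1}], [-i k N0^2 I,  -I + nu Delta]].
A = np.block([[-I + nu * Delta, 1j * k * Dinv],
              [-1j * k * N0sq * I, -I + nu * Delta]])

# Identity stand-in for the excitation covariance Q.
Q = np.eye(2 * N, dtype=complex)

# Solve A C + C A^dagger + eps Q = 0 for the fixed-point covariance C*.
C = solve_continuous_lyapunov(A, -eps * Q)

residual = np.linalg.norm(A @ C + C @ A.conj().T + eps * Q)
```

Because the stand-in operator is stable (the damping $-\mathsfbi{I}$ dominates the real parts of its spectrum), the solution exists and is Hermitian positive definite, as required of a covariance matrix.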
\begin{figure} \centerline{\includegraphics{Fig12_v2p0.pdf}} \caption{Spectral structure of the homogeneous S3T fixed point. (a) Kinetic energy ($K$) spectrum. (b) Potential energy ($V$) spectrum. The spectra are shown in terms of their natural logarithms and no normalization is performed. The $K$ and $V$ spectra are nearly identical to one another, even though only the vorticity field is stochastically excited, due to the strong stratification. This figure shows that the homogeneous turbulence from which the VSHF emerges inherits its structure directly from the stochastic excitation whose structure is shown in figure \ref{fig:Z_Forcing_Plot}. Parameters are as in figure \ref{fig:NL_Phenom_FlowSnapshots}.} \label{fig:Homogeneous_Spectrum_Plot} \end{figure} To analyze the linear stability of this homogeneous turbulent state we perturb the S3T state, $(\mathsfbi{C}_n,\boldsymbol{U},\boldsymbol{B})$, about the fixed point, $(\mathsfbi{C}_n^{\star},0,0)$, as \begin{align} \mathsfbi{C}_n = \mathsfbi{C}_n^{\star} + \delta \mathsfbi{C}_n, && \boldsymbol{U} = \delta \boldsymbol{U}, && \boldsymbol{B} = \delta \boldsymbol{B}, \end{align} where the $\delta$ notation indicates that the first-order terms are treated as infinitesimal perturbations. The operator $\mathsfbi{A}_n$ in (\ref{eq:Adefn}) may then be written as \begin{equation} \mathsfbi{A}_n = \mathsfbi{A}_n^{\star}+\delta \mathsfbi{A}_n, \end{equation} where $\mathsfbi{A}_n^{\star}$ is given by (\ref{eq:S3TAstar}) and \begin{equation} \delta \mathsfbi{A}_n = \left( \begin{array}{cc} -\text{i}k_n\mathsfbi{\Delta}_n^{-1}\text{diag}(\delta\boldsymbol{U})\mathsfbi{\Delta}_n+\text{i}k_n\mathsfbi{\Delta}^{-1}_n\text{diag}(\mathsfbi{D}^2\delta\boldsymbol{U}) & 0 \\ -\text{i}k_n\text{diag}(\mathsfbi{D}\delta \boldsymbol{B}) & -\text{i}k_n\text{diag}(\delta \boldsymbol{U}) \end{array}\right).
\end{equation} The linearized equations of motion are \begin{eqnarray} \frac{d}{dt} \delta \boldsymbol{U} &=& \sum_{n=1}^{N_k} \frac{k_n}{2} \mbox{Im} \left[ \text{vecd}\left(\mathsfbi{\Delta}_n \delta\mathsfbi{C}_{\psi \psi,n} \right) \right] -r_m \delta \boldsymbol{U} + \nu \mathsfbi{D}^2 \delta \boldsymbol{U} , \label{eq:S3TLIN1} \\ \frac{d}{dt} \delta \boldsymbol{B} &=& \sum_{n=1}^{N_k} \frac{k_n}{2} \mbox{Im} \left[ \text{vecd}\left(\mathsfbi{D}\delta \mathsfbi{C}_{\psi b,n} \right) \right] -r_m \delta \boldsymbol{B} + \nu \mathsfbi{D}^2 \delta \boldsymbol{B} , \label{eq:S3TLIN2} \\ \frac{d}{dt} \delta \mathsfbi{C}_n &=& \mathsfbi{A}_n^{\star} \delta \mathsfbi{C}_n + \delta \mathsfbi{C}_n \mathsfbi{A}_n^{\star,\dagger} + \delta \mathsfbi{A}_n \mathsfbi{C}_n^{\star} + \mathsfbi{C}_n^{\star} \delta \mathsfbi{A}_n^{\dagger} .\label{eq:S3TLIN3} \end{eqnarray} As usual in linear stability analysis, we express the solutions of (\ref{eq:S3TLIN1})-(\ref{eq:S3TLIN3}) in terms of the eigenvectors and eigenvalues of the system. The natural matrix form of the S3T equations obscures the operator-vector structure of the linearized system. The most direct technique for conducting the eigenanalysis is to rewrite the equations in superoperator form by unfolding the matrices $\mathsfbi{\delta C}_n$ \citep{FARRELL:2002tz}. This technique results in linear operators of very high dimension for which eigenanalysis is expensive. We use an alternate method to obtain the eigenstructures in which the linearized equations are rewritten as coupled Sylvester equations \citep[see Appendix B in][]{Constantinou:2014fh}. 
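To make the unfolding concrete: for an $n\times n$ covariance block the superoperator is $n^2\times n^2$, which is why direct eigenanalysis of the unfolded system is expensive. The sketch below, with a random matrix standing in for the S3T operators (illustrative only; the full linearized system (\ref{eq:S3TLIN1})-(\ref{eq:S3TLIN3}) also couples the mean equations), verifies the column-stacking identity $\mathrm{vec}(\mathsfbi{A}\,\delta\mathsfbi{C}+\delta\mathsfbi{C}\,\mathsfbi{A}^{\dagger})=(\mathsfbi{I}\otimes\mathsfbi{A}+\overline{\mathsfbi{A}}\otimes\mathsfbi{I})\,\mathrm{vec}(\delta\mathsfbi{C})$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
# Random matrix standing in for the perturbation operator A.
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
dC = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Unfold the Lyapunov-type operator T(dC) = A dC + dC A^dagger into an
# n^2 x n^2 matrix acting on vec(dC) (column-stacking convention).
T = np.kron(np.eye(n), A) + np.kron(A.conj(), np.eye(n))

lhs = (A @ dC + dC @ A.conj().T).flatten(order='F')  # operator form
rhs = T @ dC.flatten(order='F')                      # superoperator form

# The eigenvalues of T are the pairwise sums lam_i + conj(lam_j) of the
# eigenvalues of A, so the unfolded operator is stable iff A is stable.
lam = np.linalg.eigvals(A)
pair_sums = (lam[:, None] + lam.conj()[None, :]).ravel()
```

The coupled-Sylvester method cited in the text avoids forming $T$ explicitly, which is the design choice that keeps the eigenanalysis affordable.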
We note that, for our choice of stochastic excitation, equations (\ref{eq:S3TLIN1})-(\ref{eq:S3TLIN3}) decouple into two separate eigenproblems: one determining the eigenmodes involving mean flow perturbations $\delta \boldsymbol{U}$, which have $\delta \boldsymbol{B}=0$, and a separate eigenproblem determining the eigenmodes involving mean buoyancy perturbations $\delta \boldsymbol{B}$, which have $\delta \boldsymbol{U}=0$. The eigenproblem involving $\delta \boldsymbol{U}$ gives unstable eigenmodes associated with growing VSHFs for the parameter regime we address in this work, while the mean buoyancy eigenproblem has only stable eigenmodes in our parameter regime. The mean buoyancy eigenproblem is therefore irrelevant, in our parameter range, to VSHF formation and we focus on the eigenproblem concerning $\delta \boldsymbol{U}$. We now describe the results of the eigenanalysis of equations (\ref{eq:S3TLIN1})-(\ref{eq:S3TLIN3}). As the fixed point underlying the linearization corresponds to homogeneous turbulence, the eigenfunctions have harmonic structure in $z$ so that $\delta \boldsymbol{U}$ and $\delta \langle \overline{u'w'} \rangle$ are both proportional to $e^{st}e^{\text{i}m_Uz}$. For each $m_U$ permitted by the periodic domain there is a dominant eigenmode with eigenvalue $s(m_U)$. For the parameter range we study, we find that these eigenvalues are real, corresponding to structures for which the perturbation momentum flux divergence, $-\partial_z \delta \langle \overline{u'w'} \rangle$, and the mean flow, $\delta \boldsymbol{U}$, are aligned in phase. \begin{figure} \centerline{\includegraphics{Fig13_rev_v1p0.pdf}} \caption{Growth rates of the eigenmodes responsible for VSHF formation in the S3T system. (a) Growth rate as a function of the VSHF wavenumber $m_U$ for $\varepsilon=0.25$ and two different excitation structures: $k_e/2\upi=6$ (dotted, $Fr=0.6$) and $k_e/2\upi=12$ (solid, $Fr=1.2$). 
(b) Growth rate as a function of $m_U$ and $\varepsilon$ for $k_e/2\upi=6$. Note the logarithmic $\varepsilon$ axis. (c) Growth rate as a function of $m_U$ for $k_e/2\upi=12$ and four values of $N_0^2$. (d) Growth rate of the fastest growing VSHF structure as a function of $N_0^2$ for $k_e/2\upi=6$. This figure shows that the vertical wavenumber, $m_U$, of the initially emergent VSHF is very sensitive to changes in the spectral structure of the excitation, and also that $m_U\to0$ as $N_0^2\to \infty$ so that the initially emergent VSHF takes on the largest scale permitted by the domain if the stratification is sufficiently strong. Unless otherwise specified, parameters are as in figure \ref{fig:NL_Phenom_FlowSnapshots}.} \label{fig:ScaleSelectionPlot_Combined} \end{figure} Figure \ref{fig:ScaleSelectionPlot_Combined} summarizes how the dominant eigenvalue, $s$, which is the VSHF growth rate, depends on $m_U$ and on the parameters $k_e$, $N_0^2$, and $\varepsilon$. The dotted curve in panel (a) shows the VSHF growth rate as a function of $m_U$ for the standard parameter case with $\varepsilon=0.25$ ($Fr=0.6$). VSHFs with $1\le m_U/2\upi \le 10$ have positive growth rates, with the $m_U/2\upi=6$ structure having the fastest growth rate. This eigenvalue problem thus predicts that a VSHF with vertical wavenumber $m_U/2\upi=6$ will initially emerge from the turbulence, consistent with the structure of the VSHF discussed in \textsection \ref{sec:NLphenom} and \textsection \ref{sec:modelcomparison}. The solid curve in panel (a) shows the VSHF growth rates for the standard parameter case except with $k_e/2\upi=12$ ($Fr=1.2$) so that the excitation is at a smaller scale. Increasing $k_e$ shifts the peak of the growth rate curve toward larger $m_U$ values, resulting in smaller scale VSHFs, and also enhances the VSHF growth rates. 
Figure \ref{fig:ScaleSelectionPlot_Combined} (b) shows the dependence of $s$ on $m_U$ and $\varepsilon$ for the standard parameter case with $k_e/2\upi=6$. For VSHFs with $1\le m_U/2\upi \le 11$ the growth rate becomes positive for sufficiently large $\varepsilon$. For VSHFs in this wavenumber band, the perturbation fluxes reinforce the infinitesimal VSHF and $s$ increases with increasing $\varepsilon$. For VSHFs outside this band, with $m_U/2\upi \ge 12$, the perturbation fluxes oppose the VSHF so that the growth rate becomes increasingly negative as $\varepsilon$ increases. The dashed line shows the stability boundary, $s=0$. The homogeneous turbulence first becomes unstable near $\varepsilon\approx0.042$ ($Fr=0.25$) to a VSHF with $m_U/2\upi=5$. As $\varepsilon$ increases, the growth rate of the $m_U/2\upi=6$ VSHF structure exceeds that of the $m_U/2\upi=5$ VSHF so that for the standard parameter case with $\varepsilon=0.25$ ($Fr=0.6$) a VSHF with vertical wavenumber $m_U/2\upi=6$ initially emerges in the turbulence. We note that the emergent VSHF wavenumber varies between NL simulations depending on the particular realization of the stochastic excitation, with the $m_U/(2\upi)=7$ structure occurring somewhat more frequently than $m_U/(2\upi)=6$. The existence of multiple turbulent equilibria in this system is predicted by S3T as discussed in \textsection \ref{sec:multiple_equilibria}. That the NL system often forms a VSHF with a slightly different scale than that predicted by linear stability analysis of the S3T system is likely due to the modification of the background spectrum of turbulence by perturbation-perturbation interactions. The influence of perturbation-perturbation interactions on the S3T stability of homogeneous turbulence has been analyzed in detail by \citet[]{Constantinou:2014fh} in the context of barotropic beta-plane turbulence.
Figure \ref{fig:ScaleSelectionPlot_Combined} (c) shows how $N_0^2$ influences the scale selection of the initially emergent VSHF. In panel (c) $s$ is shown as a function of $m_U$ for four $N_0^2$ values with $k_e/2\upi=12$ and $\varepsilon=0.25$. As $N^2_0$ increases, $s$ decreases and the peak (indicated by the vertical lines) shifts toward smaller $m_U$. For very large $N_0^2$ the largest values of $s$ occur for VSHFs at the domain scale with $m_U/2\upi=1$. However, unless $\varepsilon$ is also very large the homogeneous turbulent state will remain stable and a domain-scale VSHF will not emerge, because $s$ decreases as $N_0^2$ becomes large. We note that the decrease of the VSHF wavenumber as $N_0^2$ increases demonstrates that $m_U$ is not directly related to either the Ozmidov wavenumber, $k_{O}=(N_0^3/\varepsilon)^{1/2}$, or the buoyancy wavenumber, $k_b=N_0/\sqrt{\varepsilon}$, both of which increase as $N_0^2$ is increased, and also that the S3T prediction of $m_U$ depends on the parameter values and is not always equal to the excitation wavenumber, $k_e$. As $N^2_0$ is decreased toward moderate and weak stratification (not shown), the wavenumber of the initially emergent VSHF remains near $k_e$, consistent with the results of NL simulations shown in figure \ref{fig:newfig1} (c,d). The dependence of $s$ on $N_0^2$ is shown directly in panel (d), which shows $\text{max}[s(m_U)]$, where the maximum is taken over $m_U$, as a function of $N^2_0$. For small $N^2_0$, all modes have negative growth rates. This result provides an explanation for the frequent observation in numerical simulations that the VSHF ceases to emerge when the stratification becomes sufficiently weak. However, this result depends on the details of the stochastic excitation.
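For reference, the Ozmidov and buoyancy wavenumbers just defined are easily evaluated; the brief sketch below uses placeholder parameter values (not the paper's) and simply verifies that both increase monotonically with $N_0$, opposite to the decreasing trend of the emergent VSHF wavenumber $m_U$.

```python
import numpy as np

# Placeholder parameter values (not those of the paper).
eps = 0.25
N0 = np.sqrt(np.array([1e2, 1e3, 1e4]))  # N0^2 = 10^2, 10^3, 10^4

k_O = np.sqrt(N0**3 / eps)  # Ozmidov wavenumber, (N0^3 / eps)^(1/2)
k_b = N0 / np.sqrt(eps)     # buoyancy wavenumber, N0 / sqrt(eps)
```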
In Appendix \ref{sec:appendixA} we describe a reduced model in which the excitation is anisotropic and which has the property that $s$ remains positive as $N_0^2\to0$, a result which was also obtained for similarly anisotropic excitation by \citet[]{Bakas:2011bt}. As $N^2_0$ increases from zero $s$ increases to a maximum near $N_0^2\approx10^2$. This increase in growth rate is associated with the strengthening of the feedback between the VSHF and the turbulence described in \textsection \ref{sec:testfunction}. The dependence of the S3T wave-mean flow feedback for harmonic mean structures on the parameter that sets the wave restoring force has been explained analytically in terms of wave dynamics by \citet[]{Bakas:2013ft} for the case of barotropic beta-plane turbulence. For $N^2_0\gtrsim 10^3$ the growth rate falls off as $\sim1/N^2_0$ and approaches a constant asymptotic value as $N_0^2\to\infty$ that is set by the dissipation parameters. \begin{figure} \centerline{\includegraphics{New_Figure_v0p1_pdf.pdf}} \caption{Time evolution of the VSHF in two example simulations illustrating the correspondence between the behavior of the NL system and the predictions of the linear stability analysis of the S3T system as the parameters are varied. Unless otherwise stated all parameters are as in figure \ref{fig:NL_Phenom_FlowSnapshots}. (a) An example with smaller scale excitation, $k_e/(2\upi)=12$ ($Re_b=10.4$, $Fr=1.2$). The VSHF forms with $m_U/(2\upi)\approx 12$. (b) An example with smaller scale excitation, $k_e/(2\upi)=12$, and also stronger stratification, $N_0^2=5\times10^3$ ($Re_b=2.1$, $Fr=0.53$). The VSHF forms with $m_U/(2\upi)\approx10$. 
This figure demonstrates that, in the NL system, the VSHF forms at smaller scale when the turbulence is excited at smaller scale and that the VSHF forms at larger scale when the stratification is increased, consistent with the predictions of S3T.} \label{fig:New_Jet_Cases} \end{figure} Figure \ref{fig:New_Jet_Cases} shows the time evolution of the VSHF, $U$, in two example simulations that illustrate the correspondence of the behavior of the NL system with the predictions of linear stability analysis of the S3T system shown in figure \ref{fig:ScaleSelectionPlot_Combined}. Panel (a) shows the VSHF evolution in an example in which the parameters are as in the standard case simulation but with the excitation scale modified to $k_e/(2\upi)=12$ ($Fr=1.2$). Consistent with the shift of peak of the VSHF growth rate curve to $m_U/(2\upi)=12$ in figure \ref{fig:ScaleSelectionPlot_Combined} (a), the emergent VSHF in the NL system has $m_U/(2\upi)\approx12$. In panel (b) is shown the VSHF evolution in an example with the same parameters as in panel (a), but with the stratification increased to $N_0^2=5\times10^3$ ($Fr=.53$). Consistent with the shift of the peak of the VSHF growth rate curve toward lower values of $m_U$ under increased stratification in figure \ref{fig:ScaleSelectionPlot_Combined} (c), the emergent VSHF in the NL system has $m_U/(2\upi)\approx10$. Although the peak of $s(m_U)$ in the S3T system occurs at the slightly larger scale $m_U/(2\upi)=8$ for $N_0^2=5\times10^3$, the growth rates of the nearby VSHF wavenumbers are very similar to the peak value, as shown in figure \ref{fig:ScaleSelectionPlot_Combined} (c), so that the VSHF wavenumber that is observed in a given realization of the stochastic system for these parameter values is likely to depend on the particular noise realization. 
The results of this section demonstrate that the scale of the initially emergent VSHF, $m_U$, is strongly influenced by the spectral structure of the perturbation field, which in our problem is set by $k_e$. As the stratification becomes very strong, the VSHF scale is modified from the scale set by the excitation and tends toward the largest scale allowed by the domain. In realistic turbulence, the implication of this result is that we expect the spectral characteristics of the background turbulence to imprint strongly on the VSHF scale if the turbulence is sufficiently close to the stability boundary and the stratification is not too strong. We emphasize that the linear stability analysis conducted in this section provides a prediction of the scale of the initially emergent VSHF, rather than of the scale of the statistical equilibrium VSHF. For excitation strengths sufficiently near the stability boundary, the prediction based on linear stability analysis is expected to agree with the equilibrium VSHF structure. As the excitation strength is increased beyond the stability boundary, the structure of the VSHF may be modified from the initially emergent structure, as suggested by the NL simulation shown in figure \ref{fig:newfig1} (b). Previous studies of VSHF emergence have primarily been conducted using weak dissipation or strong excitation, so that the excitation strength lies well beyond the stability boundary, and these studies have also observed VSHFs with larger scale than that of the excitation. For example, \citet[]{Herring:1989fx} obtained an $m_U=6$ VSHF in 3D stratified turbulence maintained with $k_e\approx 11$ excitation, \citet[]{Smith:2001uo} obtained a VSHF with energy concentrated near $m_U\approx 10$--$15$ in the 2D system using $k_e\approx96$, and \citet[]{Smith:2002wg} obtained a VSHF with energy concentrated near $m_U\approx 9$--$11$ in the 3D system maintained with $k_e\approx 24$.
Although precise parameter correspondence between our study and these previous studies is difficult to establish due to differences in model formulation, these previous examples demonstrate that VSHFs with larger scale than that of the excitation are often observed when the system is strongly excited. This behavior is expected based on analysis of the S3T system, which we discuss in \textsection \ref{sec:equilibration}. \section{Equilibration of Horizontal Mean Structure}\label{sec:equilibration} In \textsection \ref{sec:modelcomparison} we showed that the S3T system initialized with a perturbative VSHF with $m_U/2\upi=6$ in the standard parameter case evolves into an equilibrium state with the same VSHF wavenumber (figures \ref{fig:ModelComparison_Hovmollers} and \ref{fig:ModelComparison_Energy_and_Uprofiles}). We now analyze how the structure of this finite-amplitude equilibrium depends on the control parameters. Figure \ref{fig:Drafting_Equilibration_Plot_full} (a, solid curve) shows the maximum value of the $m_U/2\upi=6$ equilibrium $U$ structure, maximized over $z$, as a function of $\varepsilon$. The dotted curve shows an estimate of $U$ from a simple momentum balance model which we will explain later in this section. As suggested by the stability analysis in \textsection \ref{sec:scaleselection}, the $m_U/2\upi=6$ VSHF forms near $\varepsilon\approx0.044$ ($Fr=.25$) when the growth rate of the corresponding eigenmode crosses zero. The bifurcation is supercritical, with the VSHF equilibrating at weak amplitude just beyond the bifurcation point. Near the bifurcation point $U$ increases rapidly with $\varepsilon$, with this rate of increase slowing as $\varepsilon$ increases. \begin{figure} \centerline{\includegraphics{Fig14_rev_v1p0.pdf}} \caption{Equilibrium structure diagnostics for S3T and the simple momentum balance model in the case of the $m_U/2\upi=6$ horizontal mean state as a function of $\varepsilon$. 
(a) Maximum value of $U$, maximized over $z$, for the stable S3T fixed point with $m_U/2\upi=6$ (solid). This fixed point becomes secondarily unstable near $\varepsilon=0.55$ ($Fr=0.88$) and the dashed continuation shows the amplitude of the unstable solution. The dotted curve shows the estimate of the amplitude of $U$ from the simple momentum balance model (see text). Panels (b)-(e) show the vertical structure of the horizontal mean state as in figure \ref{fig:4panel_USNsqRi_meanplot} with dotted curves indicating the $\varepsilon=0.08$ ($Fr=.34$) state and solid curves indicating the $\varepsilon=0.54$ ($Fr=.88$) state. This figure shows that weak equilibration of the VSHF is captured by the simple momentum balance model and that the $U$ and $\overline{N^2}$ structures, and their phase relationship to one another, vary as $\varepsilon$ is increased. Unless otherwise specified, parameters are as in figure \ref{fig:NL_Phenom_FlowSnapshots}.} \label{fig:Drafting_Equilibration_Plot_full} \end{figure} The structure of the horizontal mean state depends on $\varepsilon$. Figures \ref{fig:Drafting_Equilibration_Plot_full} (b)-(e) (dotted curves) show the horizontal mean structure of the marginally supercritical equilibrium at $\varepsilon=0.08$ ($Fr=.34$). The VSHF structure is similar to that of the unstable $m_U/2\upi=6$ harmonic eigenmode. The phase relationship between $U$ and $\overline{N^2}$ differs from that found in the more strongly excited $\varepsilon=0.25$ ($Fr=0.6$) case discussed in \textsection \ref{sec:NLphenom} and \textsection \ref{sec:modelcomparison}. In the $\varepsilon=0.08$ ($Fr=.34$) case of weak equilibration the stratification is enhanced in the shear regions, rather than weakened, and $\overline{Ri}$ is large for all $z$ due to the weak shear. The solid curves in figures \ref{fig:Drafting_Equilibration_Plot_full} (b)-(e) show the horizontal mean structure of the $\varepsilon=0.54$ ($Fr=.88$) equilibrium. 
For this more strongly supercritical equilibrium, the shear regions are characterized by weakened stratification and $\overline{Ri}<1/4$. This structure is similar to that shown in figure \ref{fig:New_ModelComparison_QLS3T_4panelplots} for the $\varepsilon=0.25$ case, but with stronger shear and smaller $\overline{Ri}$ values. The VSHF remains hydrodynamically stable (\emph{i.e.,} all eigenvalues of $\mathsfbi{A}_n$ have negative real parts) despite having $\overline{Ri}<1/4$ due to the dissipation acting on the perturbation fields. When $\varepsilon$ is further increased the $m_U/2\upi=6$ fixed point becomes secondarily unstable, indicated by the dashed continuation of the solid curve in figure \ref{fig:Drafting_Equilibration_Plot_full} (a). The changing phase relationship between $U$ and $\overline{N^2}$ shown in figure \ref{fig:Drafting_Equilibration_Plot_full} that occurs as a function of $\varepsilon$ mirrors the change in this relationship shown in figure \ref{fig:ModelComparison_Hovmollers} that occurs as a function of time. Comparison of figures \ref{fig:ModelComparison_Hovmollers} (c) and (d) shows that, when the developing VSHF is weak, $\overline{N^2}$ is enhanced where the shear is strongest. When the VSHF becomes strong, the stratification is reorganized by the turbulent fluxes so that $\overline{N^2}$ is weakest where the shear is strongest. \begin{figure} \centerline{\includegraphics{Fig15_v2p1.pdf}} \caption{Illustration of the simple momentum balance model for weakly supercritical VSHF equilibration for $\varepsilon=0.08$ and $m_U/2\upi=6$, with other parameters as in figure \ref{fig:NL_Phenom_FlowSnapshots}. The solid curve shows the projection of the perturbation momentum flux divergence, calculated using the test function analysis of \textsection\ref{sec:testfunction}, onto the assumed harmonic VSHF structure. The dashed line shows the dissipation acting on the VSHF, given by $(r_m+\nu m_U^2)U_0$. 
The simple model estimate of the equilibrium VSHF amplitude is the value of $U_0$ at which these terms balance one another. The vertical dotted line indicates the equilibrium VSHF amplitude obtained from the full S3T system. This figure demonstrates that the dynamics of weakly supercritical VSHF equilibration is captured by the simple balance model.} \label{fig:WeakEquil_SimpleBalance_Plot} \end{figure} The mechanism of VSHF equilibration at weak amplitude can be understood using a simple momentum balance model based on the test function analysis of \textsection \ref{sec:testfunction}. To construct the simple model we first approximate the horizontal mean state as $U=U_0 \sin(m_U z)$ and $B=0$, where $U_0$ is the equilibrium VSHF amplitude that we will estimate. We then estimate the acceleration of the VSHF produced by the induced perturbation momentum fluxes as a function of $U_0$ using (\ref{eq:testfn1})-(\ref{eq:testfn2}). Our estimate of the equilibrium VSHF amplitude is the value of $U_0$ for which this acceleration is balanced by dissipation. As $\varepsilon\to\varepsilon_c$ this simple model becomes exact because both $U$ and the perturbation flux divergence become exactly harmonic and $B\to0$. For $\varepsilon > \varepsilon_c$, the structure of $-\partial_z \langle \overline{u'w'} \rangle$ deviates from harmonic and we estimate the equilibrium VSHF amplitude by projecting the acceleration onto the assumed harmonic VSHF structure. We illustrate the simple momentum balance model for $\varepsilon=0.08$ ($Fr=.34$) and $m_U/2\upi=6$ in figure \ref{fig:WeakEquil_SimpleBalance_Plot}, which shows the estimated acceleration (solid) and dissipation (dashed) of the VSHF as functions of the VSHF amplitude, $U_0$. The dissipation, $(r_m+\nu m_U^2)U_0$, increases linearly with $U_0$. For small $U_0$ the acceleration is stronger than the dissipation, consistent with spontaneous VSHF formation as a linear instability for these parameters. 
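The weakly supercritical balance just described amounts to a one-dimensional root-finding problem. In the sketch below the dissipation is the $(r_m+\nu m_U^2)U_0$ term from the text, while the acceleration curve is a purely illustrative stand-in with the qualitative features described (linear at small $U_0$ with slope exceeding the dissipation slope, then negative curvature); it is not the curve computed from the test-function analysis, and all parameter values are placeholders.

```python
import numpy as np
from scipy.optimize import brentq

# Placeholder parameters for a harmonic VSHF U = U0 sin(m_U z).
r_m, nu = 0.1, 0.01
m_U = 6 * 2 * np.pi

def dissipation(U0):
    # Dissipation acting on the VSHF, (r_m + nu m_U^2) U0, as in the text.
    return (r_m + nu * m_U**2) * U0

# Illustrative stand-in for the projected perturbation momentum-flux
# acceleration: slope 2x the dissipation slope at small U0 (an S3T-unstable
# homogeneous state) with negative curvature at larger U0.
alpha = 2.0 * (r_m + nu * m_U**2)
U_s = 0.5

def acceleration(U0):
    return alpha * U0 / (1.0 + (U0 / U_s)**2)

# Equilibrium VSHF amplitude: nonzero root of acceleration = dissipation.
U0_eq = brentq(lambda U: acceleration(U) - dissipation(U), 1e-6, 10.0)
```

For this stand-in curve the balance occurs at $U_0=U_s$ exactly; in the full calculation the analogous crossing gives the estimate plotted as the dotted curve in figure \ref{fig:Drafting_Equilibration_Plot_full} (a).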
Due to the negative curvature of the acceleration as a function of $U_0$ the two terms balance near $U_0\approx 0.65$, which gives the simple model estimate of the equilibrium VSHF amplitude. The vertical dotted line indicates the equilibrium VSHF strength in the full S3T system. For this value of $\varepsilon$ the simple model captures the equilibration dynamics, implying that modification of $\overline{N^2}$ and changes in $U$ structure do not play important roles in the weak equilibration process. The simple model estimate of $\text{max}(U)$ as a function of $\varepsilon$ is shown in figure \ref{fig:Drafting_Equilibration_Plot_full} (a) as the dotted curve. The model estimate matches the results of the full calculation as $\varepsilon\to\varepsilon_c$ and diverges from the full solution as $\varepsilon$ increases. \begin{figure} \centerline{\includegraphics{Fig16_rev_v1p0.pdf}} \caption{Secondary instability of the S3T fixed point corresponding to the $m_U/2\upi=6$ VSHF for $\varepsilon=1$ ($Fr=1.2$), with other parameters as in figure \ref{fig:NL_Phenom_FlowSnapshots}. The upper panels show the time evolution of (a) $U$ and (b) $\overline{N^2}$. The lower panel (c) shows the kinetic energy evolution. This figure shows that for strong excitation the $m_U/2\upi=6$ VSHF state is unstable to the development of a global vertical wavenumber 2 pattern in $U$ that is superimposed on the initial wavenumber 6 pattern, strengthening the VSHF and modifying its structure to produce wider shear regions.} \label{fig:Developing_HardeqInstFig1_Combined} \end{figure} As $\varepsilon$ is increased, the global minimum of $\overline{Ri}$ falls further below $1/4$ and the $m_U/2\upi=6$ state becomes secondarily unstable just beyond $\varepsilon=0.54$ ($Fr=.88$). 
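The stability diagnostic invoked here, the horizontal-mean Richardson number $\overline{Ri}=\overline{N^2}/(\partial_z U)^2$, is readily evaluated from the mean profiles. A minimal sketch (the centred-difference discretization and the profile values are ours and purely illustrative, not taken from the simulations):

```python
import math

def richardson_profile(U, Nsq, dz):
    """Horizontal-mean Richardson number Ri(z) = Nsq(z) / (dU/dz)^2 on a
    uniform periodic grid, using centred differences for the shear
    (discretization ours)."""
    n = len(U)
    Ri = []
    for j in range(n):
        shear = (U[(j + 1) % n] - U[(j - 1) % n]) / (2 * dz)
        Ri.append(Nsq[j] / shear**2 if shear != 0 else math.inf)
    return Ri

# Illustrative harmonic VSHF over uniform stratification; for
# U = U0*sin(m_U*z) the minimum Ri is approximately Nsq0 / (U0*m_U)**2,
# attained where the shear is strongest.
n = 256
m_U = 6 * 2 * math.pi
U0, Nsq0 = 0.5, 100.0
dz = 1.0 / n
U = [U0 * math.sin(m_U * j * dz) for j in range(n)]
Nsq = [Nsq0] * n
Ri_min = min(richardson_profile(U, Nsq, dz))  # ~0.28, just above 1/4
```

For the values chosen here the minimum $\overline{Ri}$ sits marginally above the inviscid threshold of $1/4$; lowering the stratification or strengthening the shear pushes it below.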
Although this instability occurs when $U$ is near the laminar stability boundary, which is modified from $\overline{Ri}=1/4$ by the presence of dissipation and by our choice of a finite periodic domain which quantizes the permitted perturbation horizontal wavenumbers, we emphasize that this secondary instability is a property of the S3T dynamics, rather than of the perturbation dynamics determined by the operator $\mathsfbi{A}_n$. In particular, the $m_U/2\upi=6$ state remains hydrodynamically stable at all times during the instability development. Figures \ref{fig:Developing_HardeqInstFig1_Combined} (a) and (b) show the time evolution of $U$ and $\overline{N^2}$ during the development of the secondary instability for $\varepsilon=1$ ($Fr=1.2$). The VSHF structure near $t=30$ reveals that the 6-fold symmetry of the $m_U/2\upi=6$ VSHF is spontaneously broken by the instability. As the instability develops, the positive VSHF peaks near $z=0.2$ and $z=0.7$ contract and weaken while their neighbouring negative peaks strengthen and expand. Similarly, the negative VSHF peaks near $z=0.5$ and $z=0.9$ contract and weaken while their neighbouring positive VSHF peaks strengthen and expand. The particular locations of the strengthening and weakening features are the result of the symmetry breaking and so depend on the small perturbations included in the initialization. Figure \ref{fig:Developing_HardeqInstFig1_Combined} (c) shows the evolution of kinetic energy during the instability. The changes in the VSHF structure are associated with an increase in the mean kinetic energy, consistent with the broadening of the VSHF pattern allowing $U$ to strengthen while maintaining a hydrodynamically stable shear. Similar behaviour is shown to occur in the NL system in figure \ref{fig:newfig1} (b), which shows an example in which the effective excitation strength has been increased relative to the standard case integration by removing the Rayleigh drag terms from the dynamics.
In this example, a VSHF with $m_U/(2\upi)\approx 6$ initially emerges from the turbulence and this VSHF transitions to lower wavenumber as the integration is continued. Secondary instabilities of finite-amplitude mean shear flows that result in broader shear patterns also occur in the barotropic beta-plane system and have been analyzed using S3T by \citet[]{Constantinou:2014fh}. The structure of the horizontal mean state before and after the development of the secondary instability for $\varepsilon=1$ ($Fr=1.2$) is shown in figure \ref{fig:Developing_HardeqInstFig2_Combined}. Prior to the instability development ($t=10$, dotted) the structure is similar to that shown for $\varepsilon=0.54$ ($Fr=.88$) in figures \ref{fig:Drafting_Equilibration_Plot_full} (b)-(e) and is characterized by a VSHF with $m_U/2\upi=6$ and weakened stratification in the shear extrema. The $U$ profile of the final equilibrium structure ($t=50$, solid) contains shear regions with two distinct widths which are associated with distinct phase relationships between $U$ and $\overline{N^2}$. For the wider shear regions, the $U$ profile inflects in the centre of the shear region and $\overline{N^2}$ is locally maximized there, resulting in $\overline{Ri}\gtrsim1/4$. The narrower shear regions are similar to those that precede the secondary instability development and have $\overline{Ri}<1/4$. \begin{figure} \centerline{\includegraphics{Fig17_rev_v1p0.pdf}} \caption{Vertical structure of the horizontal mean state in the S3T system before and after the development of the secondary instability for $\varepsilon=1$ ($Fr=1.2$), with other parameters as in figure \ref{fig:NL_Phenom_FlowSnapshots}. Panels are as in figure \ref{fig:4panel_USNsqRi_meanplot} with dotted curves showing the structure for $t=10$ and solid curves showing the structure for $t=50$. This figure shows how the structure of the horizontal mean state is reorganized by the secondary instability. 
The unstable equilibrium state at $t=10$ has $\overline{Ri}<1/4$ in regions of strongest shear and weakest stratification. The final equilibrium state has shear regions of two different widths in which the broader shear regions have $\overline{Ri}>1/4$ due to enhanced stratification and weakened shear in the cores of the shear regions.} \label{fig:Developing_HardeqInstFig2_Combined} \end{figure} \section{Multiple Turbulent Equilibria in Stratified Turbulence}\label{sec:multiple_equilibria} In \textsection \ref{sec:scaleselection} we showed an NL simulation in which a $m_U/2\upi=6$ VSHF emerges, corresponding to the eigenmode of the linearized S3T system that has the fastest growth rate in the standard parameter case. However, figure \ref{fig:ScaleSelectionPlot_Combined} (a) shows that, for the standard parameter case, all VSHF structures in the wavenumber band $1\leq m_U/2\upi \leq10$ have positive growth rates. The subdominant eigenmodes (\emph{i.e.,} those with $m_U/2\upi \neq 6$) continue to finite-amplitude VSHF equilibria at the corresponding wavenumbers. These equilibria may or may not be stable. In this section we demonstrate that multiple turbulent equilibrium states are possible in 2D Boussinesq turbulence by providing an example of such an alternate stable equilibrium in the S3T and NL systems.
This figure shows that when initialized with a finite-amplitude VSHF with wavenumber $m_U/2\upi=4$ the NL system maintains this structure, resulting in a turbulent equilibrium state different from that discussed in \textsection \ref{sec:NLphenom} for the same parameter values, and that this alternate equilibrium state is also a fixed point of the S3T system. Parameters are as in figure \ref{fig:NL_Phenom_FlowSnapshots}.} \label{fig:New_MultipleEquilibria1} \end{figure} In figures \ref{fig:New_MultipleEquilibria1} (a) and (c) we show the development of $U$ in the NL and S3T systems in an example in which the parameters are set to the standard values ($Fr=0.6$ as in figure \ref{fig:NL_Phenom_FlowSnapshots}) but the initial conditions are chosen to initiate a VSHF with wavenumber $m_U/2\upi=4$. The NL system is initialized with a mean flow $U\propto \sin(m_U z)$ for $m_U/2\upi=4$ and the S3T system is initialized with the same $U$ profile and $\mathsfbi{C}_n=\mathsfbi{C}_n^{\star}$. In the S3T system this initial condition evolves into a stable $m_U/2\upi=4$ fixed point. We note that, as shown in \textsection \ref{sec:scaleselection}, the S3T system will evolve, in the standard parameter case, toward the $m_U/2\upi=6$ fixed point for any sufficiently small initial perturbation, but that in this example the system evolves toward the $m_U/2\upi=4$ fixed point as a result of the finite initial perturbation. In the NL system the $m_U/2\upi=4$ turbulent equilibrium is maintained for the length of the integration. Due to noise in the NL system, the turbulence may eventually transition to another equilibrium state, such as the $m_U/2\upi=6$ state discussed in \textsection \ref{sec:NLphenom}. The development of $\overline{N^2}$ for this example is shown in figures \ref{fig:New_MultipleEquilibria1} (b) and (d). As in the previous examples of equilibria, $\overline{N^2}$ has a doubled vertical wavenumber relative to $U$ and is more variable than $U$ in the NL system. 
The vertical structure of the horizontal mean state is shown in figure \ref{fig:New_MultipleEquilibria3}. The VSHF structure (panels (a,b)) resembles a sawtooth in both the NL (dotted) and S3T (solid) systems. The phase relationship between $U$ and $\overline{N^2}$ (panel (c)) shares some features with that shown in figure \ref{fig:Drafting_Equilibration_Plot_full} for the $m_U/2\upi=6$ equilibrium with $\varepsilon=0.54$ ($Fr=.88$). In particular, the weakest values of $\overline{N^2}$ occur in the centres of the shear regions. The excitation strength $\varepsilon=0.25$ falls in a transitional range for the $m_U/2\upi=4$ equilibrium in which $\overline{N^2}$ has an approximately $m_B/2\upi=16$ structure. As $\varepsilon$ is increased (not shown), the stratification in the shear centres continues to weaken, producing density layers at these locations, and the stratification near the VSHF peaks is enhanced. \begin{figure} \centerline{\includegraphics{Fig19_rev_v1p0.pdf}} \caption{Vertical structure of the horizontal mean state of the $m_U/2\upi=4$ equilibrium in the NL and S3T systems. Panels are as in figure \ref{fig:4panel_USNsqRi_meanplot} with dotted curves showing the time-averaged structure over $t\in[22,45]$ for the NL system and solid curves showing the final fixed point structure for the S3T system. Parameters are as in figure \ref{fig:NL_Phenom_FlowSnapshots}.} \label{fig:New_MultipleEquilibria3} \end{figure} Figure \ref{fig:New_MultipleEquilibria2} shows the evolution of kinetic energy in the NL and S3T systems. Consistent with the results for the $m_U/2\upi=6$ equilibrium in \textsection \ref{sec:modelcomparison}, the equilibrium value of $\overline{K}$ in the S3T system exceeds that of the NL system. In both systems, the broader VSHF in the $m_U/2\upi=4$ equilibrium is more energetic than the VSHF in the $m_U/2\upi=6$ equilibrium.
This is consistent with the behaviour shown in figure \ref{fig:Developing_HardeqInstFig1_Combined} (c) in which the broadened VSHF resulting from the secondary instability is more energetic than the $m_U/2\upi=6$ VSHF that precedes the instability. \begin{figure} \centerline{\includegraphics{Fig20_v2p2.pdf}} \caption{Kinetic energy evolution in the NL and S3T systems initialized with a $m_U/2\upi=4$ VSHF. This figure shows that, as for the $m_U/2\upi=6$ equilibrium, the VSHF in the S3T system is more energetic than the VSHF in the NL system, and comparison with figure \ref{fig:ModelComparison_Energy_and_Uprofiles} shows that in both the NL and S3T systems the $m_U/2\upi=4$ VSHF is more energetic than the $m_U/2\upi=6$ VSHF. Parameters are as in figure \ref{fig:NL_Phenom_FlowSnapshots}.} \label{fig:New_MultipleEquilibria2} \end{figure} \section{Reflection of the S3T Bifurcation in the NL and QL Systems} \label{sec:bifurcation_comparison} In \textsection \ref{sec:modelcomparison} we compared the behaviour of the NL, QL, and S3T systems with all parameter values fixed. Comparing the three systems in this way allows for a detailed comparison of the structures of the mean state and of the turbulent spectra to be made. However, our analysis of the S3T system has revealed phenomena, including the bifurcation associated with the initial formation of the VSHF, that can be analyzed only by allowing variation of the control parameters. We now compare the behaviour of the three systems as a function of the excitation strength, $\varepsilon$, in terms of the fraction of the total kinetic energy of the flow that is associated with the VSHF. We define this fraction as $\text{zmf}=\overline{K}/(\overline{K}+K')$, for zonal mean flow (zmf) index, borrowing this definition from studies of barotropic beta-plane turbulence \citep{Srinivasan:2012im,Constantinou:2014fh}. 
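For concreteness, the zmf index can be computed from a single velocity snapshot. The sketch below (our own discretization, using only the horizontal velocity and neglecting the vertical perturbation contribution to $K'$ for brevity) extracts the VSHF by horizontal averaging and forms the energy ratio:

```python
import math

def zmf_index(u):
    """Zonal-mean-flow index zmf = Kbar / (Kbar + K') for a horizontal
    velocity field u[z][x] on a uniform grid. Kbar is the kinetic energy
    of the horizontal mean flow U(z) and K' that of the deviations
    u' = u - U. (Sketch only: the full K' also includes the vertical
    perturbation velocity, omitted here.)"""
    nz, nx = len(u), len(u[0])
    U = [sum(row) / nx for row in u]            # horizontal mean: the VSHF
    K_mean = sum(Uz * Uz for Uz in U) / (2 * nz)
    K_pert = sum((u[j][i] - U[j]) ** 2
                 for j in range(nz) for i in range(nx)) / (2 * nz * nx)
    return K_mean / (K_mean + K_pert)

# A field that is constant in x at each height is purely VSHF: zmf = 1.
jet = [[math.sin(2 * math.pi * j / 64)] * 64 for j in range(64)]
# A field with zero horizontal mean at each height is purely eddies: zmf ~ 0.
eddies = [[math.sin(2 * math.pi * i / 64) for i in range(64)] for j in range(64)]
```

An energetically dominant VSHF, as found beyond the bifurcation point, corresponds to zmf well above $1/2$.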
We note that in the context of barotropic turbulence an alternate approach based on regime diagrams has also been used to characterize the transition of turbulence to states dominated by zonal jets \citep{Galperin:2010ix}. \begin{figure} \centerline{\includegraphics[scale=.35]{Fig21_rev_v1p1.png}} \caption{Equilibrium zmf indices in the NL, QL, and S3T systems as functions of $\varepsilon$, with other parameters as in figure \ref{fig:NL_Phenom_FlowSnapshots}. The zmf index measures the fraction of the total kinetic energy of the flow that is associated with the VSHF. This figure shows that the bifurcation through which VSHF forms in the deterministic S3T system is reflected in the behaviour of the NL and QL systems, which show an abrupt increase in the fraction of the total kinetic energy contained in the VSHF near the S3T bifurcation point.} \label{fig:combined_bifurcation_plot} \end{figure} Figure \ref{fig:combined_bifurcation_plot} shows the equilibrium zmf value as a function of $\varepsilon$ for the NL, QL, and S3T systems. As all three systems possess multiple equilibria, there is some ambiguity as to the meaning of the equilibrium energies. For the S3T system we show the maximum zmf obtained when the system is initialized with a perturbative VSHF at each unstable wavenumber $m_U$. For the NL and QL systems we initialize from rest and show the time average of zmf over the final 10 time units of a $t\in[0,450]$ integration. As many long integrations are required for this comparison, the simulations shown in this section are spun up at low resolution and the resulting turbulence is interpolated to the standard resolution of $512^2$ grid points to initialize a simulation of the final 10 time units. As discussed in \textsection \ref{sec:scaleselection}, the S3T system passes through a bifurcation near $\varepsilon \approx0.04$ ($Fr=.24$). This bifurcation is reflected in the zmf indices of the QL and NL systems. 
For $\varepsilon\lesssim 0.04$ the VSHF accounts for only a few percent of the total kinetic energy of the flow. As $\varepsilon$ increases beyond the S3T bifurcation point, the zmf index increases rapidly and the VSHF becomes energetically dominant. As was found in \textsection \ref{sec:modelcomparison}, the S3T VSHF is the most energetic and the QL VSHF tends to be more energetic than the NL VSHF. The eventual decrease of the zmf indices in the QL and NL systems as $\varepsilon$ is increased may be due to the tendency of those systems to maintain VSHF structures with $m_U/2\upi=6$, even when this is not the most energetic VSHF structure. The S3T curve does not show this decrease as we choose the most energetic VSHF equilibrium to define the S3T equilibrium zmf. This maximally energetic VSHF equilibrium often has a lower vertical wavenumber than that of the fastest growing eigenmode, as discussed in \textsection \ref{sec:multiple_equilibria}. Developing a complete understanding of the behaviour of, and correspondence between, the QL, NL, and S3T systems in the limit of strong excitation is beyond the scope of the present study but is an important avenue for future investigation. Note also in figure \ref{fig:combined_bifurcation_plot} the characteristic increase in fluctuating VSHF amplitude in the NL and QL cases as the bifurcation point is approached. This results from the excitation, in the QL and NL systems, of the reflections of the stable modes of the S3T system; these modes have no analytical expression in the QL and NL systems themselves and are excited by the noise inherent in those systems, while no such excitation is seen in the noise-free S3T system \citep{Constantinou:2014fh}. \section{Conclusions}\label{sec:conclusions} In this work we studied the formation and maintenance of the VSHF and associated density layers in stratified turbulence by applying SSD to the stochastically excited 2D Boussinesq system.
Although highly simplified, the 2D Boussinesq system has previously been shown to reflect the properties of VSHF emergence in 3D \citep{Smith:2001uo,Smith:2002wg}. Our analysis focused on the strongly stratified regime in which the VSHF is known to develop and that is also the regime relevant to geophysical turbulent jets such as the EDJs. Using the S3T implementation of SSD, we showed that VSHFs form spontaneously in this system through the mechanism of cooperative interaction between the turbulence and the mean state. While wave-mean flow interaction has previously been hypothesized to be the mechanism responsible for the formation and maintenance of the EDJs \citep{Muench:1999dy,Ascani:2015dd}, the analytical structure required for constructing a comprehensive theory connecting turbulence to the formation and equilibration of these coherent structures was lacking. The 2D Boussinesq system is a minimal dynamical model that captures horizontal structure formation in stratified turbulence, analogous to the role played by the barotropic beta-plane system in planetary-scale jet formation. Unlike the beta-plane system, the 2D Boussinesq equations do not have a conserved potential vorticity and so this stratified turbulence provides a test of the role played by conservation laws in the formation of jets. In agreement with previous studies, we find that VSHF formation occurs robustly in spite of the absence of vorticity as a conserved quantity. An aspect of horizontal mean structure formation in stratified turbulence highlighted in this work is the formation of horizontal mean density layers. When the VSHF emerges in turbulence, it typically dominates the velocity field at equilibrium and is clearly visible in the instantaneous flow. The associated changes in the stratification, however, are relatively weak and are obscured by turbulent fluctuations. 
Horizontal averaging reveals the structure of the modified stratification, which agrees well with the predictions of the S3T system. Stratified turbulence thus provides an example in which the mean state is simultaneously characterized by both `manifest' and `latent' \citep{Berloff:2009ct} structures. The primary contribution of this work is to explain the dynamics of VSHF formation and equilibration in stratified turbulence using SSD. We developed and applied the S3T equations for this system and showed that the behaviour observed in nonlinear simulations mirrors that of the S3T system. S3T provides a deterministic and autonomous dynamical system that describes the formation, temporal evolution, and equilibration of the statistical state of the turbulence at second order. In S3T the third cumulant, which is associated with perturbation-perturbation nonlinearity, is set to zero and the ergodic assumption equating horizontal and ensemble averages is made. The S3T system provides tools, concepts, and insights for understanding turbulent structure formation. For example, test function analysis was used in \textsection \ref{sec:testfunction} to calculate the statistical mean turbulent perturbation fluxes in the presence of an imposed horizontal mean structure. This tool yields the insight that the VSHF forms by modifying the fluxes in such a way as to reinforce the VSHF structure, and explains the specific horizontal mean structure maintained in turbulent equilibrium as being the structure for which the fluxes balance dissipation while not distorting the structure itself. Analysis of the S3T system also allows for the identification of phenomena that are difficult to capture or anticipate in the presence of turbulent fluctuations. For example, the linear stability analysis carried out in \textsection \ref{sec:scaleselection} shows that VSHF formation occurs via a linear instability of the SSD of the underlying homogeneous turbulent state.
The growth rate of this instability crosses zero as the strength of the stochastic excitation is increased beyond a critical threshold, resulting in a supercritical bifurcation. This bifurcation behaviour is reflected in the dynamics of both the associated quasilinear (QL) and fully nonlinear (NL) systems. As an additional example, the S3T system predicts the existence of multiple simultaneously stable turbulent equilibria with different horizontal mean structures. This property of stratified turbulence has not previously been emphasized and might be unexpected from the perspective of nonlinear cascade constrained by conservation laws. From the perspective of S3T as an autonomous and nonlinear dynamical system the existence of multiple equilibria is not surprising. The authors acknowledge valuable comments from Navid Constantinou and thank John Taylor for providing the DIABLO code. The authors also acknowledge Boris Galperin and several anonymous reviewers whose suggestions helped to improve the manuscript. J.\ G.\ F.\ was partially supported by a doctoral fellowship from the Natural Sciences and Engineering Research Council of Canada. B.\ F.\ F.\ was partially supported by the U.S. National Science Foundation (NSF) under Grant Nos. NSF AGS-1246929 and NSF AGS-1640989.
1612.03264
\section{Introduction} \label{Sec-I} A self-reinforcing solitary wave which preserves its shape while propagating at a constant speed is known as a bright soliton. The bright soliton originates from a cancellation of the effects produced by the non-linear and dispersive terms in the Hamiltonian. Solitons have been studied in a wide variety of systems, ranging from water waves and non-linear optics \cite{Kivshar} to ultracold quantum gases, including spinor Bose-Einstein condensates (BECs) \cite{Inouye,li, rb, Perez-Garcia,Ieda}. In this paper, we study the two-dimensional (2D) vortex-bright solitons in spin-orbit (SO) coupled three-component spin-1 spinor Bose-Einstein condensates. The SO coupling is the coupling between the spin of the atom and its center-of-mass motion. In neutral atoms, the SO coupling is absent \cite{stringari}. Nevertheless, neutral atoms can be subjected to the SO coupling by creating a non-Abelian gauge potential through a suitable modification of the atom-light interaction \cite{Dalibard}. The SO coupling with equal strengths of Rashba \cite{Rashba} and Dresselhaus \cite{Dresselhaus} terms was first engineered in a landmark experiment with a BEC of $^{87}$Rb by dressing two of its internal spin states from within the ground electronic manifold ($5S_{1/2}, F = 1$) with a pair of lasers \cite{Lin}. In recent years, a variety of experimental studies have been done on SO-coupled Bose-Einstein condensates \cite{Aidelsburger}. Solitonic structures have been theoretically investigated in SO-coupled quasi-one-dimensional (quasi-1D) \cite{rela} and quasi-2D pseudospin-1/2 condensates \cite{Xu,Sakaguchi}. Bright solitons have also been theoretically studied in SO-coupled quasi-1D spin-1 \cite{Liu,Gautam-3} and spin-2 condensates \cite{Gautam-4}. In this paper, we study the stable stationary and moving vortex-bright solitons in a quasi-2D \cite{Salasnich} SO-coupled spin-1 condensate using the mean-field Gross-Pitaevskii (GP) equations \cite{Ohmi}.
We observe that, for the small strengths of SO coupling employed in this paper, the ground-state vortex-bright soliton of an SO-coupled polar and weakly-ferromagnetic spin-1 condensate is an axisymmetric vortex-bright soliton of type $(- 1,0, + 1)$ with zero magnetization, where the numbers in the parentheses are the phase-winding numbers (angular momenta) \cite{Mizushima} associated with the spin components $m_f=+1,0,-1$, respectively. An anti-vortex in the $m_f=+1$ component is associated with an overlapping vortex of opposite circulation in the $m_f=-1$ component. Besides this, we have also identified a stationary excited axisymmetric vortex-bright soliton of type $(0,+1,+ 2)$. The spin texture of this excited-state vortex-bright soliton shows that it is a {\em coreless Anderson-Toulouse} vortex \cite{Mizushima}. For condensates with stronger ferromagnetic interaction, the ground state is an asymmetric vortex-bright soliton with an anti-vortex of unit charge in the spin component $m_f=+1$ associated with a vortex of opposite circulation in the $m_f=-1$ component. In this case the vortex and anti-vortex are separated from each other, and the separation can occur along any direction in the two-dimensional plane, which will be spontaneously chosen in an experiment. However, the condensate collapses for very strong ferromagnetic interaction and no vortex-bright soliton can be formed. The 2D vortex-bright solitons were first suggested and studied in the pseudospin-1/2 two-component BEC \cite{Sakaguchi}, which is an approximation to the present three-component model of the spin-1 BEC. In general, the implementation of the SO interaction in the three-component spin-1 BEC is more complicated than in the two-component pseudospin-1/2 BEC from both the theoretical \cite{gtm} and experimental \cite{campbell} points of view. The present study goes beyond that previous investigation \cite{Sakaguchi}.
It provides a more intuitive understanding of the role of SO coupling in generating the solitons, viz. Figs. \ref{fig0}(a)-(b), in addition to a critical study of the statics and interaction dynamics of the 2D solitons. Although these solitons behave as true solitons in frontal collisions at high velocities, at low velocities, depending on the relative phase, they may repel and bounce back like two colliding rigid elastic disks or may transfer all atoms to one soliton to form a soliton molecule. Only the collision of two 1D analytic solitons is truly elastic at all velocities. Besides stationary vortex-bright solitons, we have also investigated the stable moving vortex-bright solitons of the SO-coupled spin-1 condensate. As the present mean-field model does not possess Galilean invariance, the moving solitons are calculated with the Galilean-transformed model \cite{rela,Sakaguchi,Liu,Gautam-3}. We find that the structure of the moving vortex-bright soliton depends on both the magnitude and the direction of its velocity, which can result in different density distributions for vortex-bright solitons moving along different directions. At low velocities, the collision of two vortex-bright solitons with a phase difference of $\pi$ is elastic. The two solitons repel and avoid each other and rebound from the center of collision without ever forming an overlapped profile. The collision dynamics is demonstrated to obey classical laws of motion. If the same initial guess is used for the right- and the left-moving solitons in the numerical simulation of the stationary state, the solitons acquire a phase difference of $\pi$. If this phase difference is removed before the numerical simulation of the colliding solitons, then after the collision between the two slow-moving vortex-bright solitons, all the atoms end up being captured by one of the solitons.
Similarly, in the collision of two normal BEC solitons at sufficiently low velocities, the two colliding solitons lose their identity and form a stable overlapping profile called a soliton molecule \cite{Luis,Nguyen}. At large velocities, the two vortex-bright solitons undergo quasi-elastic collision with the two solitons crossing each other irrespective of the phase difference. The paper is organized as follows. In Sec. \ref{Sec-IIA}, we describe the mean-field coupled Gross-Pitaevskii (GP) equations with Rashba SO coupling used to study the vortex-bright solitons in a spin-1 condensate. This is followed by a variational analysis of the stationary axisymmetric vortex-bright solitons in Sec. \ref{Sec-IIB}. In Sec. \ref{Sec-III}, we provide the details of the numerical method used to solve the coupled GP equations with SO coupling. We discuss the numerical results for axisymmetric vortex-bright solitons in Sec. \ref{Sec-IVA}, asymmetric solitons in Sec. \ref{Sec-IVB}, stability of the solitons in Sec. \ref{Sec-IVC}, and moving solitons and collisions between solitons in Sec. \ref{Sec-IVD}. Finally, in Sec. V, we give a summary of our findings. \section{Spin-Orbit coupled BEC vortex-bright soliton} \label{Sec-II} \subsection{Mean-field equations} \label{Sec-IIA} For the study of a quasi-2D vortex-bright soliton, we consider a spin-1 spinor BEC under a harmonic trap $m\omega_z^2 z^2/2$ in the $z$ direction and free in the $x-y$ plane. 
After integrating out the $z$ coordinate, the single particle Hamiltonian of the condensate with Rashba \cite{Rashba} SO coupling in such a quasi-2D trap is \cite{H_zhai} \begin{equation} H_0 = \frac{p_x^2+p_y^2}{2m} + \gamma p_x \Sigma_x+\gamma p_y \Sigma_y, \label{sph} \end{equation} where $p_x = -i\hbar\partial/\partial x$ and $p_y = -i\hbar\partial/\partial y$ are the momentum operators along $x$ and $y$ axes, respectively, and $\Sigma_x$ and $\Sigma_y$ are the irreducible representations of the $x$ and $y$ components of the spin matrix, respectively, \begin{eqnarray} \Sigma_x=\frac{1}{\sqrt 2} \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 1\\ 0 & 1 & 0 \end{pmatrix}, \quad \Sigma_y=\frac{1}{\sqrt 2 i} \begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 1\\ 0 & -1 & 0 \end{pmatrix}, \end{eqnarray} and $\gamma$ is the strength of SO coupling. In the mean-field approximation, the SO-coupled quasi-2D spin-1 BEC is described by the following set of three coupled two-dimensional GP equations, written here in dimensionless form, for different spin components $m_f=\pm 1,0$ \cite{Ohmi,Kawaguchi} \begin{align} i\frac{\partial \psi_{\pm 1}(\mathbf r)}{\partial t} &= {\cal H}\psi_{\pm 1}(\mathbf r) \pm c^{}_1F_z\psi_{\pm 1}(\mathbf r) + \frac{c^{}_1}{\sqrt{2}} F_{\mp}\psi_0(\mathbf r)\nonumber\\ &-\frac{i\gamma}{\sqrt{2}}\left( \frac{\partial\psi_0}{\partial x}\mp i\frac{\partial\psi_0}{\partial y}\right), \label{gps-1}\\ i \frac{\partial \psi_0(\mathbf r)}{\partial t} &= {\cal H}\psi_0(\mathbf r) + \frac{c_1}{\sqrt 2} [F_{-}\psi_{-1}(\mathbf r) +F_{+}\psi_{+1}(\mathbf r)]\nonumber\\ &-\frac{i \gamma}{\sqrt{2}}\Bigg(\frac{\partial\psi_{1}}{\partial x} +i \frac{\partial\psi_{1}}{\partial y} +\frac{\partial\psi_{-1}}{\partial x}-i\frac{\partial\psi_{-1}}{\partial y}\Bigg) \label{gps-2}, \end{align} where ${\bf F}\equiv\{F_x,F_y,F_z\}$ is a vector whose three components are the expectation values of the three spin-operators over the multicomponent wavefunction, and is called the spin-expectation 
value \cite{Kawaguchi}. Also, \begin{align}& F_{\pm}\equiv F_x \pm i F_y= \sqrt 2[\psi_{\pm 1}^*(\mathbf r)\psi_0(\mathbf r) +\psi_0^*(\mathbf r)\psi_{\mp 1}(\mathbf r)]\label{fpmspin1}, \\ & F_z= \rho_{+1}(\mathbf r)-\rho_{-1}(\mathbf r)\label{fzspin1}, \quad {\cal H}= -\frac{\nabla^2}{2} +c_0 \rho,\\ &c_0 = \frac{2N \sqrt{2\pi }(a_0+2 a_2)}{3 l_0},~ c_1 = \frac{2N \sqrt{2\pi }({a_2-a_0})}{3 l_0},\label{nonlin}\\ &\nabla^2=\frac{\partial ^2}{\partial x^2} + \frac{\partial ^2}{\partial y^2},~ {\mathbf r }\equiv\{x,y\}, \label{nabla_q2d} \end{align} where $\rho_j=|\psi_j(\mathbf r)|^2$ with $j=\pm 1, 0$ are the component densities, $\rho=\sum_{j}\rho_j$ is the total density, $a_0$ and $a_2$ are the $s$-wave scattering lengths in the total spin 0 and 2 channels, respectively, and asterisk denotes complex conjugate. The normalization condition satisfied by the component wavefunctions $\psi_j$ is $\int \sum_j \rho_j d{\bf r} =1$. All quantities in Eqs. (\ref{gps-1})-(\ref{nabla_q2d}) are dimensionless. This is achieved by writing length, density, and energy in units of $l_0$ $(=\sqrt{\hbar/(M\omega_z)})$, $l_0^{-2}$, and $\hbar\omega_z$, respectively. The energy of the system in dimensionless unit is \begin{align} E &= \int_{-\infty}^{\infty} d {\bf r}\Bigg\{\frac{1}{2}\left(\sum_{j=-1}^1 \left|\nabla \psi_j\right|^2 + {c_0}\rho^2 + {c_1}|\mathbf F|^2\right)\nonumber \\ &- \frac{i\gamma}{\sqrt{2}}\psi_0^* \left(\frac{\partial \psi_1}{\partial x} + \frac{\partial \psi_{-1}}{\partial x}\right) + \frac{\gamma}{\sqrt{2}}\psi_0^*\left(\frac{\partial \psi_1}{\partial y} - \frac{\partial \psi_{-1}}{\partial y}\right) \nonumber \\ &-\frac{i\gamma}{\sqrt{2}} \left(\psi_1^*+\psi_{-1}^*\right)\frac{\partial \psi_0}{\partial x} -\frac{\gamma}{\sqrt{2}} \left(\psi_1^*-\psi_{-1}^*\right)\frac{\partial \psi_0}{\partial y} \Bigg\}. \label{energy} \end{align} In plane polar coordinates, ${\bf r} = (r,\phi)$, Eqs. 
(\ref{gps-1})-(\ref{gps-2}) are \begin{align} i\frac{\partial \psi_{\pm 1}(r,\phi)}{\partial t} &= {\cal H}(r,\phi)\psi_{\pm 1}(r,\phi) \pm c^{}_1F_z\psi_{\pm 1}(r,\phi) \nonumber\\ &+ \frac{c^{}_1}{\sqrt{2}} F_{\mp}\psi_0(r,\phi)-\frac{i\gamma e^{\mp i\phi}}{\sqrt{2}}\left( \frac{\partial\psi_0}{\partial r}\mp i\frac{\partial\psi_0}{r\partial \phi}\right), \label{gpsp-1}\\ i \frac{\partial \psi_0(r,\phi)}{\partial t} &= {\cal H}(r,\phi)\psi_0(r,\phi) + \frac{c_1}{\sqrt 2} [F_{-}\psi_{-1}(r,\phi) \nonumber\\ &+F_{+}\psi_{+1}(r,\phi)]-\frac{i \gamma} {\sqrt{2}}\Bigg[e^{i\phi}\left(\frac{\partial\psi_{1}}{\partial r} +i \frac{\partial\psi_{1}}{r\partial \phi}\right)\nonumber\\ & +e^{-i\phi}\left(\frac{\partial\psi_{-1}}{\partial r}-i\frac{\partial\psi_{-1}}{r \partial \phi}\right)\Bigg] \label{gpsp-2}. \end{align} The coupled Eqs. (\ref{gpsp-1})-(\ref{gpsp-2}) in polar coordinates are instructive for understanding the underlying symmetries of the system. \subsection{Vortex-bright soliton} \label{Sec-IIB} This study reveals two types of stationary quasi-2D low-energy axisymmetric vortex-bright solitons in an SO-coupled spin-1 BEC for an attractive (negative) $c_0$ and for $c_1\ge c_1^{(1)}$, corresponding to the polar ($c_1>0$) and weakly ferromagnetic ($0>c_1\ge c_1^{(1)}$) domains; at higher energies there could be other states. As $c_1$ is decreased further, deep into the ferromagnetic ($c_1<c_1^{(1)}$) domain, the axisymmetric vortex-bright solitons are no longer the lowest-energy states. For $c_1^{(1)} >c_1> c_1^{(2)}$, a new type of asymmetric soliton emerges with an energy lower than that of the axisymmetric soliton(s), which become excited states. Eventually, all types of states collapse for $c_1< c_1^{(2)}$ because of an excess of attraction. The critical values $c_1^{(1)}$ and $c_1^{(2)}$, marking the appearance of an asymmetric soliton for $c_1\le c_1^{(1)}$ and, finally, its collapse for $c_1\le c_1^{(2)}$, depend on $c_0$ and $\gamma$.
Using the phase-winding numbers (angular momentum) of the three-component wavefunction to denote a vortex \cite{Mizushima}, the axisymmetric vortex-bright solitons are classified as $(-1,0,+1)$ and $(0,+1,+2) \equiv (-2,-1,0)$ solitons, where the numbers in the parentheses are the phase-winding numbers of $\psi_{+1}$, $\psi_0$, and $\psi_{-1}$, respectively. Here the $\pm$ signs in the winding number denote a vortex and an anti-vortex, rotating in opposite directions, respectively. For example, the soliton $(-1,0,+1)$ denotes a state of angular momentum $\mp 1$ in components $\psi_{\pm 1}$ and angular momentum 0 in component $\psi_0$. Here, the cores of the vortices in the $m_f= \pm 1$ components are occupied by the polar ($m_f = 0$) component, and thus these solitons can be termed {\em polar-core} vortex-bright solitons. There are no stable stationary axisymmetric solitons of type $(0,0,0)$ without any angular momentum in all components. The details of a $(-1,0,+1)$ vortex-bright soliton $-$ energy and density $-$ are independent of the value of $c_1$, positive or negative. This is because, for the stable minimum-energy solitons of this type, the spin-density vector $\bf F$ is uniformly zero. However, the corresponding properties of a $(0,+1,+2)$ vortex-bright soliton and of an asymmetric vortex-bright soliton do depend on $c_1$. We will use a variational method to analytically study the axisymmetric vortex-bright solitons below. Our numerical studies show that the longitudinal magnetization ${\cal M} = \int \{\rho_{+1}(\bf r)-\rho_{-1}(\bf r)\}d\bf r$ is zero for the $(-1,0,+1)$ solitons, whereas it can be non-zero for the $(0,+1,+2)$ solitons. This guides our choices of simple variational {\em ansatz} to model the vortex-bright solitons.
The $(-1,0,+1)$ vortex-bright soliton with zero magnetization $\cal M$ can be analyzed using the following variational {\em ansatz} \begin{align} \psi_{\pm 1} &= \frac{A_1 r}{\sigma_1^2} \exp\left( - \frac{r^2}{2 \sigma_1^2}\mp i \phi\right),\label{ansatz1}\\ \psi_0&=i \frac{A_2}{\sigma_2} \exp\left(-\frac{r^2}{2 \sigma_2^2}\right)\label{ansatz2}, \end{align} where $r = \sqrt{x^2+y^2}$ and $\phi = \tan^{-1}(y/x)$ are the radial and azimuthal coordinates, and $A_i$ and $\sigma_i$ are the variational parameters, which denote the amplitudes and the widths of the component wavefunctions, respectively. The condition of zero magnetization fixes the amplitudes of the components { $\psi_{\pm 1}$} to be equal. The equal and opposite phases $(\mp \phi)$ of these components guarantee their opposite directions of rotation with unit angular momentum $-$ a vortex and an anti-vortex. Only three of the variational parameters are independent, as the fourth, say $A_2$, is fixed by the normalization ($=1$). The variational energy of the soliton, obtained by substituting Eqs. (\ref{ansatz1}) and (\ref{ansatz2}) in Eq. (\ref{energy}), is \begin{eqnarray} E &=& \frac{\pi}{2} \biggl[\biggl\{\frac{A_2^2}{\sigma_2^2}+\frac{4 A_1^2} {\sigma_1^2}-\frac{16 \sqrt{2} A_1 A_2 \gamma \sigma_1^2 \sigma_2} {\left(\sigma_1^2+\sigma_2^2\right)^2}\biggr\}\nonumber \\ &+&c_0 \biggl\{\frac{A_1^4 } {\sigma_1^2}+\frac{4 A_1^2 A_2^2 \sigma_2^2}{\left(\sigma_1^2 +\sigma_2^2\right)^2}+\frac{A_2^4 }{2 \sigma_2^2} \biggr\} \biggr], \label{E1} \end{eqnarray} where $A_2$ is determined by the normalization constraint: \begin{equation} A_2 = \frac{\sqrt{1-2 \pi A_1^2}}{\sqrt{\pi } }\label{const}. \end{equation} As mentioned earlier, in this case $|{\bf F}|^2=0$; consequently, the variational energy (\ref{E1}) is independent of $c_1$. The energy (\ref{E1}) can be minimized with respect to the variational parameters $A_i$ and $\sigma_i$, with Eq. (\ref{const}) acting as a constraint.
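The constrained minimization of Eq. (\ref{E1}) is simple enough to reproduce in a few lines of code. The following Python sketch (an illustration, not the production code used for the results of this paper; the pattern-search optimizer is our own illustrative choice) evaluates the variational energy for $c_0=-4$ and $\gamma=0.5$ and locates its minimum:

```python
import numpy as np

def energy(A1, s1, s2, c0=-4.0, gamma=0.5):
    """Variational energy of Eq. (E1); A2 is fixed by the constraint (const)."""
    if s1 <= 0 or s2 <= 0:
        return np.inf
    A2sq = (1.0 - 2.0*np.pi*A1**2)/np.pi          # Eq. (const)
    if A2sq <= 0:
        return np.inf                              # normalization violated
    A2 = np.sqrt(A2sq)
    kin = A2**2/s2**2 + 4.0*A1**2/s1**2
    so  = -16.0*np.sqrt(2.0)*A1*A2*gamma*s1**2*s2/(s1**2 + s2**2)**2
    intr = c0*(A1**4/s1**2
               + 4.0*A1**2*A2**2*s2**2/(s1**2 + s2**2)**2
               + A2**4/(2.0*s2**2))
    return 0.5*np.pi*(kin + so + intr)

# simple pattern search over (A1, sigma1, sigma2)
p = np.array([0.2, 1.5, 1.5])       # initial guess
step = np.array([0.05, 0.3, 0.3])
E = energy(*p)
for _ in range(200):
    improved = False
    for i in range(3):
        for s in (+1.0, -1.0):
            q = p.copy(); q[i] += s*step[i]
            Eq = energy(*q)
            if Eq < E:
                p, E, improved = q, Eq, True
    if not improved:
        step *= 0.5                  # shrink the search box
print(p, E)  # converges near (A1, sigma1, sigma2) ~ (0.242, 2.0, 1.59), E ~ -0.144
```

The negative minimum signals a bound 2D soliton; for repulsive effective interactions or $\gamma=0$ no such minimum exists, as discussed in Sec. IV.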
The numerical result for the component wave functions of a stationary $(-1,0,+1)$ vortex-bright soliton is obtained by an imaginary-time simulation of Eqs. (\ref{gps-1}) and (\ref{gps-2}) with an initial guess of component wave functions (\ref{ansatz1}) and (\ref{ansatz2}). { In the case of axisymmetric $(-1,0,+1)$ ground-state solutions, the ansatz (\ref{ansatz1}) and (\ref{ansatz2}) ensures faster convergence of the numerical results compared to a Gaussian initial guess for the three-component wavefunction, which also leads to the same final converged solutions.} Next we consider the axisymmetric $(0,+1,+2)$ vortex-bright soliton, which has a higher energy than an axisymmetric $(-1,0,+1)$ vortex-bright soliton with the same parameters $c_0,c_1,$ and $\gamma$. For a variational study of the $(0,+1,+2)$ vortex-bright soliton, we adopt the following variational {\em ansatz} \begin{align}\label{v1} \psi_{+1}&=i \frac{A_1}{\sigma_1} \exp\left(-\frac{r^2}{2 \sigma_1^2}\right),\\ \psi_0&=\frac{A_2}{\sigma_2^2} r\exp\left(-\frac{r^2}{2 \sigma_2^2}+i \phi \right),\\ \psi_{-1}&=-i\frac{A_3}{\sigma_3^3} r^2\exp\left(-\frac{r^2}{2 \sigma_3^2}+i2 \phi \right), \label{v3} \end{align} where $A_i$ and $\sigma_i$ are the variational parameters for the amplitudes and the widths of the component wave functions. The phases $0,\phi$, and $2\phi$ of the components $\psi_{+1}, \psi_0,$ and $\psi_{-1}$, respectively, ensure their angular momenta of $(0,+1,+2)$. In the case of a $(-1,0,+1)$ vortex-bright soliton, the zero-magnetization condition (${\cal M}=0$) fixes the amplitudes of the wave-function components $\psi_{\pm 1}$ to be equal. {Our numerical simulations confirm that} this is not the case for a stable $(0,+1,+2)$ vortex-bright soliton where, in general, ${\cal M} \ne 0$. { Hence, the fixed norm ($=1$) is the only constraint, which reduces the number of independent variational parameters (six) by one.} Using Eqs.
(\ref{v1})-(\ref{v3}), the energy (\ref{energy}) of the soliton can be written as \begin{widetext} \begin{align} E &= \frac{1}{8} \pi \left[4 \left\{\frac{A_1^2}{\sigma_1^2}+2 \frac{A_2^2}{ \sigma_2^2} -\frac{8 \sqrt{2} A_1 A_2 \gamma \sigma_1 \sigma_2^2} {\left(\sigma_1^2+\sigma_2^2\right)^2}+6 \frac{A_3^2} {\sigma_3^2}-\frac{32 \sqrt{2} A_2A_3 \gamma \sigma_2^2 \sigma_3^3} {\left(\sigma_2^2+\sigma_3^2\right)^3}\right\}\right.\nonumber\\ &+c_0\left\{2 \frac{A_1^4} {\sigma_1^2}+3 \frac{A_3^4} {\sigma_3^2}+\frac{48 A_2^2 A_3^2 \sigma_2^4 \sigma_3^2} {\left(\sigma_2^2+\sigma_3^2\right)^4}+A_1^2 \left(\frac{8 A_2^2 \sigma_1^2}{\left(\sigma_1^2+\sigma_2^2\right)^2} +\frac{16 A_3^2 \sigma_1^4 }{\left(\sigma_1^2+\sigma_3^2\right)^3}\right)+\frac{A_2^4} {\sigma_2^2}\right\} \nonumber\\ &+c_1\left.\left\{2 \frac{A_1^4} {\sigma_1^2} +3 \frac{A_3^4} {\sigma_3^2} + \frac{48 A_2^2 A_3^2 \sigma_2^4 \sigma_3^2}{\left({\sigma_2^2} +{\sigma_3^2}\right)^4} +A_1^2 \left(\frac{8 A_2^2\sigma_1^2}{\left({\sigma_1^2}+{\sigma_2^2}\right)^2}- \frac{16 A_3^2 \sigma_1^4}{\left({\sigma_1^2}+{\sigma_3^2}\right)^3}\right) +\frac{256 A_1 A_2^2 A_3\sigma_1^5 \sigma_2^2 \sigma_3^3}{\left( \sigma_2^2 \sigma_ 3^2+ 2 \sigma_1^2 \sigma_ 3^2+ \sigma_1^2 \sigma_ 2^2 \right)^3}\right\} \right]\label{E2}. \end{align} \end{widetext} The condition of fixed norm $(=1)$ leads to one constraint relating the variational parameters $A_i$ and $\sigma_i$: \begin{eqnarray} \pi (A_1^2 +A_2^2 +2 A_3^2) &=& 1 \label{const2}. \end{eqnarray} Minimizing energy (\ref{E2}) with respect to the variational parameters $A_i$ and $\sigma_i$ under the constraint (\ref{const2}), one can determine the variational parameters. For a $(-2,-1,0)$ vortex-bright soliton {which is degenerate with a (0,+1,+2) soliton}, the appropriate variational ansatz can be obtained from Eqs. (\ref{v1})-(\ref{v3}) by transformations $\psi_{m_f}(r,\phi) \rightarrow \psi_{-m_f}(r,-\phi)$ or $\psi_{m_f}(x,y) \rightarrow \psi_{-m_f}(x,-y)$. 
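The constraint (\ref{const2}) follows from elementary Gaussian integrals over the ansatz (\ref{v1})-(\ref{v3}); the same integrals give the ansatz magnetization ${\cal M}=\pi(A_1^2-2A_3^2)$. A quick numerical check (a sketch; the parameter values are arbitrary illustrative choices) confirms both closed forms:

```python
import numpy as np

# illustrative (not optimized) parameter values for the (0,+1,+2) ansatz
A1, A2, A3 = 0.3, 0.25, 0.2
s1, s2, s3 = 1.2, 1.6, 1.9

dr = 1e-4
r = (np.arange(400000) + 0.5)*dr                 # midpoint grid up to r = 40
rho_p1 = A1**2/s1**2 * np.exp(-r**2/s1**2)       # |psi_{+1}|^2
rho_0  = A2**2/s2**4 * r**2 * np.exp(-r**2/s2**2)
rho_m1 = A3**2/s3**6 * r**4 * np.exp(-r**2/s3**2)

norm = 2*np.pi*np.sum(r*(rho_p1 + rho_0 + rho_m1))*dr
mag  = 2*np.pi*np.sum(r*(rho_p1 - rho_m1))*dr

# closed forms: norm = pi*(A1^2 + A2^2 + 2 A3^2)  [Eq. (const2) when set to 1]
#               M    = pi*(A1^2 - 2 A3^2)
print(norm, mag)
```

Setting the first expression equal to unity reproduces Eq. (\ref{const2}).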
The degeneracy of these states is due to the underlying symmetry of Eqs. (\ref{gps-1})-(\ref{gps-2}), which remain invariant under the transformation $y \rightarrow -y$ and $\psi_{m_f}(x,y)\rightarrow \psi_{-m_f}(x,-y)$. Under this transformation, a $(-1,0,+1)$ vortex-bright soliton transforms into itself, as can be confirmed from the variational ansatz (\ref{ansatz1})-(\ref{ansatz2}); hence there is no degenerate counterpart of a $(-1,0,+1)$ vortex-bright soliton. Equations (\ref{gps-1})-(\ref{gps-2}) are also invariant under the simultaneous transformations $\gamma\rightarrow -\gamma$ and $\psi_0\rightarrow\psi_0 e^{i\pi}$ (while keeping $\psi_{\pm 1}$ unchanged). This implies that for negative $\gamma$ the vortex-bright solitons are fundamentally identical to those for positive $\gamma$, except for a phase difference of $\pi$ in their $m_f =0$ components. \section{Numerical Procedure} \label{Sec-III} The coupled equations (\ref{gps-1})-(\ref{gps-2}) can be solved by the time-splitting Fourier pseudospectral method \cite{psanand} or the time-splitting Crank-Nicolson method \cite{Wang, Bao, Muruganandam}. Here, we extend the Fourier pseudospectral method to the coupled GP equations with SO-coupling terms and use it to solve Eqs. (\ref{gps-1})-(\ref{gps-2}).
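To fix ideas, the following self-contained sketch applies the time-splitting Fourier pseudospectral scheme to a single-component 1D analogue, $i\partial_t\psi = [-\tfrac{1}{2}\partial_x^2 + V(x)]\psi$, in the same dimensionless units; the harmonic potential and all parameter values are illustrative choices, not taken from this paper. A Gaussian displaced by $x_0$ in the trap $V=x^2/2$ is a coherent state obeying $\langle x\rangle(t)=x_0\cos t$, which the script verifies at $t=\pi$:

```python
import numpy as np

# Strang splitting: half potential step, full kinetic step (exact in Fourier
# space), half potential step.
N, L, dt = 512, 40.0, 0.001
x = (np.arange(N) - N//2)*(L/N)
k = 2*np.pi*np.fft.fftfreq(N, d=L/N)
V = 0.5*x**2                                   # stand-in potential
x0 = 1.0
psi = np.exp(-(x - x0)**2/2)/np.pi**0.25       # displaced ground state, norm 1
expV = np.exp(-0.5j*V*dt)                      # exp(-i V dt/2)
expT = np.exp(-0.5j*k**2*dt)                   # exp(-i k^2 dt/2), full dt step
for _ in range(int(round(np.pi/dt))):          # evolve to t = pi
    psi = expV*psi
    psi = np.fft.ifft(expT*np.fft.fft(psi))
    psi = expV*psi
dx = L/N
total = np.sum(abs(psi)**2)*dx                 # norm is conserved
mean_x = np.sum(x*abs(psi)**2)*dx/total        # expect <x> = x0*cos(pi) = -x0
```

The spin-1 SO-coupled problem splits analogously, with the non-derivative parts ($H_1$, $H_2$ below) and the derivative part ($H_3$) treated in configuration and Fourier space, respectively.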
The coupled set of GP equations (\ref{gps-1})-(\ref{gps-2}) can be represented in a simplified form as \begin{equation} \frac{i\partial \Psi}{\partial t} = \left({H_1+H_2+H_3}\right)\Psi\label{GPE}, \end{equation} where $\Psi=(\psi_{+1},\psi_{0},\psi_{-1})^T$ {with $T$ denoting the transpose}, and $H_1$, $H_2$, and $H_3$ are $3\times3$ matrix operators defined as \begin{align} H_1 &= \begin{pmatrix} {\cal H}+c_1(\rho_0+\rho_{-}) & 0 &0 \\ 0 & {\cal H}+c_1\rho_{+}&0 \\ 0 & 0 & {\cal H}+c_1(\rho_0-\rho_{-}) \end{pmatrix},\\ H_2 &= \begin{pmatrix} 0 & c_1\psi_0\psi_{-1}^*&0 \\ c_1\psi_0^*\psi_{-1} & 0 &c_1\psi_0^*\psi_{+1} \\ 0 & c_1\psi_0\psi_{+1}^* & 0 \end{pmatrix}, \\ H_3 &=-i\frac{\gamma}{\sqrt{2}} \begin{pmatrix} 0 & \partial_-&0 \\ \partial_+ & 0 & \partial_- \\ 0 & \partial_+ & 0 \end{pmatrix}, \end{align} where \begin{align} \rho_{\pm}&=\rho_{+1}\pm \rho_{-1}, \quad \partial_{\pm}= \left(\frac{\partial}{\partial x} \pm i \frac{\partial}{\partial y}\right). \end{align} Now, the lowest-order time-splitting involves solving the following equations successively \begin{eqnarray} \frac{i\partial\Psi}{\partial t} &=& H_1\Psi,\label{GPE1}\\ \frac{i\partial\Psi}{\partial t} &=& H_2\Psi,\label{GPE2}\\ \frac{i\partial\Psi}{\partial t} &=& H_3\Psi\label{GPE3}. \end{eqnarray} Eq. (\ref{GPE1}) can be solved numerically using the Fourier pseudospectral method \cite{Wang}, which we employ in this paper, or the semi-implicit Crank-Nicolson method \cite{Muruganandam}; {this involves an additional time-splitting of $H_1$ into its spatial-derivative and non-derivative parts.} The numerical solutions of Eq. (\ref{GPE2}) have been discussed in Refs. \cite{Wang,Martikainen}. We use the Fourier pseudospectral method to solve Eq. (\ref{GPE3}) accurately. In Fourier space, Eq. (\ref{GPE3}) is \begin{equation} \frac{i\partial\tilde{\Psi}}{\partial t} = \tilde{H}_3\tilde{\Psi}\label{GPEF3}, \end{equation} where the tilde indicates that the quantity has been Fourier transformed.
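Because $\tilde H_3$ couples only the three spin components at each fixed $(k_x,k_y)$, Eq. (\ref{GPEF3}) amounts to exponentiating a $3\times 3$ Hermitian matrix per Fourier mode. A short sketch (the values of $\gamma$, $dt$, and the mode are illustrative) constructs this propagator by eigendecomposition, checks its unitarity, and verifies that it agrees with the closed-form expression for $e^{-i\hat O}$ given in the text:

```python
import numpy as np

gamma, dt, kx, ky = 0.5, 0.005, 1.3, -0.7
A = -1j*gamma/np.sqrt(2)*(1j*kx + ky)*dt
B = -1j*gamma/np.sqrt(2)*(1j*kx - ky)*dt
O = np.array([[0, A, 0],
              [A.conjugate(), 0, B.conjugate()],
              [0, B, 0]])                        # O = H3~*dt, Hermitian

w, V = np.linalg.eigh(O)                         # exact exponentiation
U_exact = V @ np.diag(np.exp(-1j*w)) @ V.conj().T

Om = np.sqrt(abs(A)**2 + abs(B)**2)              # Omega
U_closed = (np.eye(3) + (np.cos(Om) - 1)/Om**2*(O @ O)
            - 1j*np.sin(Om)/Om*O)                # closed form of the text
```

The closed form is exact because the eigenvalues of $\hat O$ are $0$ and $\pm\Omega$, so $e^{-i\hat O}$ is a quadratic polynomial in $\hat O$.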
Hamiltonian $H_3$ in Fourier space is \begin{equation} \tilde{H}_3 = -i\frac{\gamma}{\sqrt{2}} \begin{pmatrix} 0 & ik_x+k_y& 0\\ ik_x-k_y &0 & ik_x+k_y \\ 0 & ik_x-k_y & 0 \end{pmatrix}. \end{equation} The solution of Eq. (\ref{GPEF3}) is \begin{align} \tilde{\Psi}(t+dt) &= e^{-i\tilde{H}_3 dt} \tilde{\Psi}(t) = e^{-i\hat{O}} \tilde{\Psi}(t)\nonumber\\ &= \left(I + \frac{\cos{\Omega}-1}{\Omega^2}\hat{O}^2- i\frac{\sin{\Omega}}{\Omega}\hat{O}\right)\tilde{\Psi}(t),\label{GPEF3S} \end{align} where $\Omega = \sqrt{|A|^2 +|B|^2}$, with $A = -i\frac{\gamma}{\sqrt{2}}\left(ik_x + k_y\right)dt$ and $ B = -i\frac{\gamma}{\sqrt{2}}\left(ik_x -k_y\right)dt$, and $\hat{O}$ is defined as \begin{equation} \hat{O} = \begin{pmatrix} 0 & A & 0\\ A^*&0& B^*\\ 0&B&0 \end{pmatrix}. \end{equation} The wavefunction in Eq. (\ref{GPEF3S}) is in Fourier space and can be inverse Fourier transformed to obtain the solution in configuration space. In this study we use space and time steps of $0.1$ and $0.005$, respectively, in imaginary-time simulations, whereas in real-time simulations these are $0.1$ and $0.0005$, respectively. \section{Numerical results} \label{Sec-IV} \begin{figure}[t] \begin{center} \includegraphics[trim = 0mm 0mm 0mm 0mm, clip,width=0.475\linewidth,clip]{fig1a.pdf} \includegraphics[trim = 0mm 0mm 0mm 0mm, clip,width=0.51\linewidth,clip]{fig1b.pdf} \caption{ (Color online) Two-dimensional contour plot of energy $E$ of Eq. (\ref{E1}) as a function of widths $\sigma_1$ and $\sigma_2$ for (a) $c_0=-4, \gamma=0.5$ and (b) $c_0=-5, \gamma=0.1$. The actual values of $A_1$ and $A_2$ corresponding to the energy minima in these two cases have been used in Eq. (\ref{E1}) to prepare these plots. } \label{fig0} \end{center} \end{figure} How the SO coupling creates a stable 2D soliton is explicit in the expression for the energy (\ref{E1}). A stable bound soliton corresponds to a global minimum of the variational energy.
In fact, for SO coupling $\gamma=0$, this energy expression is positive and tends to zero as $\sigma_1,\sigma_2 \to \infty$ and does not have any minimum for $c_0\ge -7$; for $c_0<-7$, $E\to -\infty$ as $\sigma_1,\sigma_2\to 0$ and the system collapses. The contribution of the SO coupling to the energy $E$ of Eq. (\ref{E1}) is always negative, in the form of a shallow well in the $\sigma_1 -\sigma_2$ plane. Hence, by choosing $c_0>-7$ (the collapse-free region) and an adequate value of the SO coupling $\gamma$, one can have a global minimum at negative energy in the energy expression (\ref{E1}) as a function of $\sigma_1$ and $\sigma_2$, corresponding to a stable 2D soliton. To illustrate how the SO coupling leads to an energy minimum, we consider two examples: (a) $c_0=-4, \gamma=0.5$ and (b) $c_0=-5, \gamma=0.1$. In these two cases the energy minima are (a) $E_{\mathrm{min}}=-0.1441, \sigma_1=2.00, \sigma_2 =1.588, A_1=0.2424$ and (b) $E_{\mathrm{min}}=-0.00668, \sigma_1=8.177, \sigma_2 =6.587, A_1=0.221$. The variation of the energy with the widths $\sigma_1$ and $\sigma_2$, for the amplitudes $A_1$ and $A_2$ fixed at their values at the minima, is shown in the contour plots of Figs. \ref{fig0}(a) and (b), which explicitly exhibit the global minima at negative energies. Outside the shaded areas in these plots, the energy function is zero or positive. The same mechanism operates in the three-component energy function (\ref{E2}), which, however, is difficult to illustrate graphically. \subsection{Axisymmetric vortex-bright soliton} \label{Sec-IVA} The numerical and analytic variational results for the radial density $\rho(r)$ versus $r$ for an axisymmetric $(-1,0,+1)$ vortex-bright soliton for (a) $c_0 = -4, c_1 \ge -0.57$, and $\gamma = 0.5$ and for (b) $c_0 = -5, {c_1 \ge -0.5 }$, and {$\gamma = 0.1$} are shown in Figs. \ref{fig2}(a) and (b), respectively.
The numerical result is obtained by an imaginary-time simulation of Eqs. (\ref{gps-1}) and (\ref{gps-2}) with the initial guess of component wave functions (\ref{ansatz1})-(\ref{ansatz2}). The numerical and analytic variational results for the radial density $\rho(r)$ versus $r$ for the axisymmetric $(0,+1,+2)$ vortex-bright solitons for the same {$c_0$, $\gamma$ and $c_1 = -0.25$} are shown in Figs. \ref{fig2}(c) and (d). The numerical result in this case is obtained by an imaginary-time simulation of Eqs. (\ref{gps-1}) and (\ref{gps-2}) with the initial guess of component wave functions (\ref{v1})-(\ref{v3}). The wave function components $\psi_{+1}, \psi_0,$ and $\psi_{-1}$ in Figs. \ref{fig2}(a) and (b) carry angular momenta $-1, 0$, and $+1$, respectively, whereas in Figs. \ref{fig2}(c) and (d) they carry angular momenta $0,+1$, and $+2$. The $(-1,0,+1)$ states of Figs. \ref{fig2}(a) and (b) are the ground states of the system, whereas the $(0,+1,+2)$ states of Figs. \ref{fig2}(c) and (d) are excited states. For $ c_0=-4, \gamma =0.5$, viz. Figs. \ref{fig2}(a) and (c), the axisymmetric $(-1,0,+1)$ vortex-bright soliton is the ground state for $c_1 \ge c_1^{(1)}= -0.57$, and for $ c_0=-5, {\gamma =0.1}$, viz. Figs. \ref{fig2}(b) and (d), the axisymmetric $(-1,0,+1)$ vortex-bright soliton is the ground state for $c_1 \ge c_1^{(1)}= -0.5$.
\begin{figure}[t] \begin{center} \includegraphics[trim = 5mm 0mm 4mm 0mm, clip,width=0.49\linewidth,clip]{./fig2a} \includegraphics[trim = 5mm 0mm 4mm 0mm, clip,width=0.49\linewidth,clip]{./fig2b} \includegraphics[trim = 5mm 0mm 4mm 0mm, clip,width=0.49\linewidth,clip]{./fig2c} \includegraphics[trim = 5mm 0mm 4mm 0mm, clip,width=0.49\linewidth,clip]{./fig2d} \caption{(Color online) Numerical (line) and variational (chain of symbols) results for radial density $\rho(r)$ versus $r$ of the components in an axisymmetric $(- 1,0,+ 1)$ vortex-bright soliton for (a) $c_0 = -4, {c_1\ge -0.57}$, and $\gamma = 0.5$ and for (b) $c_0 = -5, {c_1\ge -0.5}$, and $\gamma = 0.1$. The same in an axisymmetric $(0,+ 1,+ 2)$ vortex-bright soliton for (c) $c_0 = -4, c_1=-0.25$, and $\gamma = 0.5$ and for (d) $c_0 = -5, c_1=-0.25$, and $\gamma = 0.1$. All quantities in this and following figures are dimensionless.} \label{fig2} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[trim = 10mm 0mm 10mm 0mm, clip,width=\linewidth,clip]{./fig3a} \includegraphics[trim = 4mm 0mm 0mm 0mm, clip,width=0.49\linewidth,clip]{./fig3b} \includegraphics[trim = 4mm 0mm 0mm 0mm, clip,width=0.49\linewidth,clip]{./fig3c} \caption{ (Color online) (a) The projection of the local magnetization vector (normalized to unity) on the $x-y$ plane for the axisymmetric $(0,+1,+2)$ vortex-bright soliton of Fig. \ref{fig2}(c) with $c_0=-4, c_1=-0.25, \gamma = 0.5$. The color indicates the $l_z$ component. At the center, a color value of $+1$ indicates that the local magnetization vector is directed along the $+z$ axis; similarly, at the edge, a color value of $-1$ indicates that the local magnetization vector is directed along the $-z$ axis.
{(b) Numerical results for component densities $\rho_j(r)$ versus $r$ for the axisymmetric $(0,+1,+2)$ vortex-bright soliton with $c_0=-4, c_1=-0.25, \gamma = 0.5$ showing the oscillation in density; here solid red, dot-dashed black, and dashed green lines show the densities of the $m_f = +1$, $m_f = 0$, and $m_f = -1$ components, respectively, and the inset shows a zoom-in of the main figure from $r =4$ to $r=15$. (c) Total numerical (num.) and variational or analytic (anal.) densities as a function of $r$ for the axisymmetric $(0,+1,+2)$ vortex-bright soliton of Fig. \ref{fig2}(c); the inset shows the region beyond the cross-over between the numerical and variational curves. } } \label{fig-3} \end{center} \end{figure} To describe the spin texture of a spinor vortex BEC it is convenient to define the local magnetization vector $\mathbf l$, which points in the direction of the spin, as the cross product of two vectors $\mathbf m$ and $\mathbf n$ \cite{Mizushima} \begin{equation} \bf l = \bf m \times \bf n, \end{equation} where ${\bf m} \equiv (m_x,m_y,m_z) = {\rm Re}(\psi_x,\psi_y,\psi_z)$ and ${\bf n} \equiv (n_x,n_y,n_z) = {\rm Im}(\psi_x,\psi_y,\psi_z)$, and \begin{eqnarray} \psi_x &=& \frac{-\psi_{+1} +\psi_{-1}}{\sqrt{2}},\\ \psi_y &=& \frac{-i(\psi_{+1} +\psi_{-1})}{\sqrt{2}},\\ \psi_z &=& \psi_0, \end{eqnarray} where ${\rm Re}$ and ${\rm Im}$ stand for the real and imaginary parts, respectively, and $\psi_x, \psi_y,\psi_z$ are the components of the order parameter in the Cartesian basis \cite{Kawaguchi}. An axisymmetric $(0,+1,+2)$ vortex can have two distinct spin textures, i.e., spatial distributions of the local magnetization vector \cite{Mizushima}.
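The transformation to the Cartesian basis and the cross product ${\bf l} = {\bf m}\times{\bf n}$ are straightforward to script. The following sketch (the test states are illustrative, not taken from the simulations) checks that fully polarized $m_f=\pm 1$ states give $\bf l$ along $\pm z$, and that a polar ($m_f=0$) state has $\bf l = 0$:

```python
import numpy as np

def local_l(psi_p1, psi_0, psi_m1):
    """Local magnetization vector l = m x n at one point of the condensate."""
    psi_x = (-psi_p1 + psi_m1)/np.sqrt(2)        # Cartesian components of the
    psi_y = -1j*(psi_p1 + psi_m1)/np.sqrt(2)     # order parameter
    psi_z = psi_0
    cart = np.array([psi_x, psi_y, psi_z])
    return np.cross(np.real(cart), np.imag(cart))

l_up = local_l(1.0+0j, 0j, 0j)    # pure m_f = +1: spin along +z
l_dn = local_l(0j, 0j, 1.0+0j)    # pure m_f = -1: spin along -z
l_polar = local_l(0j, 1.0+0j, 0j) # polar state: vanishing magnetization
```

Applied point by point to the imaginary-time wavefunctions, this yields the spin textures plotted in Fig. \ref{fig-3}(a).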
For the $(0,+1,+2)$ vortex, the unit vector $\hat {\bf l} = \hat z \cos \beta(r)+\sin \beta(r)(\hat x\cos\phi+\hat y \sin\phi )$, where $\phi$ is the azimuthal angle, and $\beta(r)$ varies from $\beta(0) = 0 $ to $\beta(R) = \pi/2$ for a Mermin-Ho coreless vortex \cite{mh} and from $\beta(0) = 0 $ to $\beta(R) = \pi$ for an Anderson-Toulouse coreless vortex \cite{at}, where the outer edge of the condensate is at $r=R$ \cite{Mizushima}. In Fig. \ref{fig-3}(a), we plot the numerically obtained projection of the local magnetization vector on the $xy$ plane for the axisymmetric $(0,+1,+2)$ vortex-bright soliton shown in Fig. \ref{fig2}(c) for the parameters $c_0=-4, c_1=-0.25, $ and $\gamma=0.5$. The color of the arrows in Fig. \ref{fig-3}(a), with values ranging from $-1$ to $1$, represents the $z$ component of the local magnetization vector. Here the spin texture has been shown from the origin to the second zero of $\rho_0$, which occurs at $r=7.3$ in this case, as shown in Fig. \ref{fig-3}(b). It is evident from Fig. \ref{fig-3}(a) that the spin texture associated with the axisymmetric $(0,+1,+2)$ vortex-bright soliton is consistent with the spin texture of an Anderson-Toulouse coreless vortex \cite{Mizushima}. We find that, from the origin to the second zero of $\rho_0$, a $(0,+1,+2)$ vortex-bright soliton always has this spin texture. At the origin (the first zero of $\rho_0$), the spin points along the positive $z$ direction, and it is fully inverted at the second zero of $\rho_0$. From the inset of Fig. \ref{fig-3}(b), it is also evident that the densities of the three components actually oscillate with multiple zeros. This oscillation, and the consequent deviation of the density from a Gaussian shape, is the main reason for the difference between the numerical and variational density profiles shown in Fig. \ref{fig2}.
Since the total norm $(2\pi\int r\rho(r)dr=1)$ is the same for the numerical and variational densities, the total variational density, which is consistently larger than the numerical density near the origin [see Figs. \ref{fig2}(a)-(d)], must become smaller than the total numerical density beyond a cross-over point. This is indeed the case for all the vortex-bright solitons shown in Fig. \ref{fig2}. To illustrate this for the vortex-bright soliton shown in Fig. \ref{fig2}(c), the total numerical and variational densities are shown in Fig. \ref{fig-3}(c); the inset shows the densities in the domain where the total numerical density is consistently higher than the variational one. \subsection{Asymmetric solitons} \label{Sec-IVB} \begin{figure}[t] \begin{center} \includegraphics[trim = 0cm 0cm 0cm 0cm, clip,width=\linewidth,clip]{fig4} \caption{(Color online) The 2D contour plot of densities of the components (a) $m_f = +1$, (b) $m_f = -1$, (c) $m_f = 0$ of an asymmetric soliton with $c_0 = -4$, $c_1 = -0.6$, and $\gamma =0.5$. The corresponding phases are shown in (d) for $m_f = +1$, (e) for $m_f = -1$, and (f) for $m_f = 0$ components. } \label{fig4} \end{center} \end{figure} As $c_1$ is decreased below $c_1^{(1)}$, i.e., for $c_1 < c_1^{(1)}$, the axisymmetric vortex-bright solitons cease to be the ground state and a new type of asymmetric soliton appears as the ground state. Nevertheless, the axisymmetric $(- 1,0,+ 1)$ and $(0,+ 1, + 2)$ solitons are still dynamically stable vortex-soliton solutions, albeit with higher energy, for $c_1^{(2)}<c_1 < c_1^{(1)}$. The two-dimensional contour density and phase plots of the component wave functions for the numerically obtained minimum-energy asymmetric soliton with $c_0 = -4$, $c_1 = -0.6$ $(c_1<c_1^{(1)}) $, and $\gamma =0.5$ are shown in Figs. \ref{fig4}(a)-(f).
The density corresponding to the component $\psi_0$ is axisymmetric, whereas the densities corresponding to components $\psi_{\pm 1}$ are {\it asymmetric}. However, the total density profile (not shown here) is still radially symmetric. The vortices in an asymmetric profile can lie along an arbitrary direction, which will be spontaneously chosen in an experiment. This is due to the fact that Eqs. (\ref{gps-1})-(\ref{gps-2}) and Eqs. (\ref{gpsp-1})-(\ref{gpsp-2}) are invariant under the simultaneous transformations $\phi = \tan^{-1}(y/x) \rightarrow \phi+\theta$ and $\psi_{m_f}(r,\phi) \rightarrow \psi_{m_f}(r,\phi+\theta) e^{ -im_f\theta}$, where $\theta$ is the angle of rotation. Keeping $c_0$ and $\gamma$ fixed at $-4$ and $0.5$, respectively, if we decrease $c_1$ further from $c_1 = -0.6$, we find that the asymmetric ground-state soliton continues to exist for a sufficiently large negative $c_1$ ($-0.6\ge c_1\ge -1.8$ in this case), beyond which it collapses. As $c_1$ is decreased from $-0.6$ to $-1.8$, the vortices in the components $\psi_{\pm 1}$ move farther away from each other and can finally move out of the system. This can lead to a bright soliton with phase singularities lying at the edge of the condensate, as shown in Fig. \ref{fig3} for $c_0 = -4, c_1 = -1.8 $, and $\gamma = 0.5$. In Figs. \ref{fig3}(a)-(b), the solitons are bright solitons without any visible vortex core in the density. However, the phase jumps corresponding to the vortices are seen in Figs. \ref{fig3}(d)-(e). It should be noted that, as $c_1$ is decreased from $-0.6$ to $-1.8$, the axisymmetric solitons shown in Figs. \ref{fig2}(a) and (c) are still stable vortex-solitons with energies higher than the asymmetric ground state.
\begin{figure}[t] \begin{center} \includegraphics[trim = 0cm 0cm 0cm 0cm, clip,width=\linewidth,clip]{fig5} \caption{(Color online) The 2D contour plot of densities of the components (a) $m_f = +1$, {(b) $m_f = -1$, (c) $m_f = 0$} in an asymmetric soliton with $c_0 = -4$, $c_1 = -1.8$, and $\gamma =0.5$. The corresponding phases are shown in (d) for $m_f = +1$, {(e) for $m_f = -1$, and (f) for $m_f = 0$} components.} \label{fig3} \end{center} \end{figure} The asymmetric solitons have a non-zero contribution to the energy from the $c_1$-dependent terms, in contrast to the axisymmetric $(- 1,0,+ 1)$ vortex-bright soliton, and the details of the asymmetric soliton change with $c_1$, as illustrated by the qualitatively different bright solitons in Figs. \ref{fig4} and \ref{fig3}. Similarly, the details of the axisymmetric $(0,+ 1, + 2)$ vortex-bright soliton are also dependent on the value of $c_1$. \subsection{Stability of solitons} \label{Sec-IVC} \begin{figure}[t] \begin{center} \includegraphics[trim = 2mm 1mm 2mm 2mm, clip,width=0.49\linewidth,clip]{./fig6a} \includegraphics[trim = 2mm 1mm 2mm 2mm, clip,width=0.49\linewidth,clip]{./fig6b} \includegraphics[trim = 2mm 1mm 2mm 1mm, clip,width=0.49\linewidth,clip]{./fig6c} \includegraphics[trim = 2mm 1mm 2mm 1mm, clip,width=0.49\linewidth,clip]{./fig6d} \caption{ (Color online) Numerical results for the rms sizes of the component wave functions versus time, as obtained in real-time simulations using the imaginary-time profiles of Figs. \ref{fig2}(a), (b), (c), and (d) as the initial states. } \label{fig1} \end{center} \end{figure} {\it Dynamical Stability:} We find that an axisymmetric $(- 1,0,+ 1)$ vector soliton and an asymmetric soliton can emerge as the ground states depending upon the choice of the interaction parameters $c_0,c_1$, and $\gamma$. Both these solitons have zero magnetization and are dynamically stable.
Similarly, the minimum-energy axisymmetric $(0,+ 1, + 2)$ vortex-bright soliton, which is an excited state and which has, in general, non-zero magnetization, is dynamically stable too. To test the dynamical stability of the $(-1,0,+1)$ vortex-bright solitons shown in Figs. \ref{fig2}(a) and (b) and the $(0,+1,+2)$ vortex-bright solitons shown in Figs. \ref{fig2}(c) and (d), we performed real-time simulations over a long interval of time using the imaginary-time profiles as the initial states. The steady oscillation of the root-mean-square ($r_{rms}$) sizes of the components, as shown in Figs. \ref{fig1}(a), (b), (c), and (d), corresponding, respectively, to the solutions shown in Figs. \ref{fig2}(a), (b), (c), and (d), demonstrates the stability of these solitons. \subsection{Stable moving solitons} \label{Sec-IVD} \begin{figure}[t] \begin{center} \includegraphics[trim = 0mm 0cm 0mm 0cm, clip,width=\linewidth,clip]{fig7.png} \caption{(Color online) The 2D contour plot of the (a) density of component $m_f=+1$, (b) density of component $m_f=-1$, (c) density of component $m_f=0$, of the dynamically stable asymmetric soliton moving with a speed of $0.01$ along the $x$ axis for $c_0=-4, c_1=-0.25$, $\gamma=0.5$; the respective phases are shown in (d)-(f). In (a) and (b) the holes in the density profiles are an antivortex and a vortex, respectively. The density and phase of the components $m_f=+1,-1$, and 0 of the dynamically stable soliton with the same parameters moving with a speed of $0.4$ along the $x$ axis are shown in (g)-(i) and (j)-(l), respectively.
} \label{fig6} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[trim = 0.cm 0cm 0cm 0cm, clip,width=0.47\linewidth,clip]{fig8a.png} \includegraphics[trim = 0.cm 0cm 0cm 0cm, clip,width=0.45\linewidth,clip]{fig8b.png} \caption{{(Color online) (a) The 2D contour plot of the total density $\rho(x,y=0,t)$ versus $x$ and $t$ during the collision of two {\em in-phase} vortex-bright solitons, each with $c_0 = -4$, $c_1=-0.25$, and $\gamma = 0.5$, moving in opposite directions along the $x$ axis with velocity $v=0.01$. (b) shows the same for $c_0 = -2$, $c_1=-0.25$, and $\gamma = 0.5$.} } \label{new-fig} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[trim = 0.cm 0cm 0cm 0cm, clip,width=\linewidth,clip]{fig9.png} \caption{(Color online) The 2D contour plot of (a) the density of the $m_f =0 $ component $\rho_0(x,y=0,t)$ and (b) the total density $\rho(x,y=0,t)$ versus $x$ and $t$ during the collision of the two {\em out-of-phase} solitons considered in Fig. \ref{fig6} moving in opposite directions, each with a speed $v=0.01$. The absence of crossing of the tracks in (a) and (b) illustrates that the solitons rebound after the encounter.} \label{fig7} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[trim = 0cm 0cm 0cm 0cm, clip,width=\linewidth,clip]{fig10} \caption{(Color online) The 2D contour plot of densities of the $m_f=+1$ component in the right- and left-moving solitons during the collision shown in Fig. \ref{fig7} at times (a) $t=650$, (c) $t=750$, (e) $t=800$, (g) $t=850$, and (i) $t=950$. The same for the densities of the $m_f=-1$ component in the right- and left-moving solitons are shown in (b) $t=650$, (d) $t=750$, (f) $t=800$, (h) $t=850$, and (j) $t=950$. At $t = 800$ the distance between the two colliding solitons is at a minimum.
Holes in the density profiles of the $m_f = +1$ component correspond to antivortices, whereas the holes in density profiles of the $m_f = -1$ component correspond to vortices.} \label{fig8} \end{center} \end{figure} In order to find the stable moving solitons, one needs to examine the Galilean invariance of the SO-coupled Hamiltonian. Using the Galilean transformation $x' = x - vt, y' = y, t' = t$, where $v$ is the relative velocity along the $x$ axis of the primed coordinate system with respect to the unprimed coordinate system, and using the transformation \begin{equation} \psi_{j}(x,y,t) = \psi'_{j}(x',y',t')e^{ivx'+iv^2t'/2},\label{mov-sol} \end{equation} in Eqs. (\ref{gps-1})-(\ref{gps-2}), we get \cite{Gautam-3} \begin{eqnarray} i \frac{\partial \psi_{\pm 1}'(\mathbf r')}{\partial t'} &=& {\cal H}\psi_{\pm 1}'(\mathbf r') \pm c^{}_1F_z'\psi_{\pm 1}'(\mathbf r') + \frac{c^{}_1}{\sqrt{2}} F_{\mp}'\psi_0'(\mathbf r')\nonumber\\ &-&\frac{i\gamma}{\sqrt{2}}\left( \frac{\partial\psi_0'}{\partial x'}\mp i\frac{\partial\psi_0'}{\partial y'}\right)+\frac{\gamma}{\sqrt{2}}v\psi_0', \label{gpsm-1}\\ i\frac{\partial \psi_0'(\mathbf r')}{\partial t'} &=& {\cal H}\psi_0'(\mathbf r') + \frac{c_1}{\sqrt 2} [F_{-}'\psi_{-1}'(\mathbf r') +F_{+}'\psi_{+1}'(\mathbf r')]\nonumber\\ & -&\frac{i\gamma}{\sqrt{2}}\Bigg(\frac{\partial\psi_{+1}'}{\partial x'} +i \frac{\partial\psi_{+1}'}{\partial y'} +\frac{\partial\psi_{-1}'}{\partial x'}-i\frac{\partial\psi_{-1}'}{\partial y'}\Bigg)\nonumber \\ & +&\frac{\gamma}{\sqrt{2}}v(\psi_{+1}'+\psi_{-1}') \label{gpsm-2}. \end{eqnarray} Due to the $v$-dependent terms in Eqs. (\ref{gpsm-1})-(\ref{gpsm-2}), the system is not Galilean invariant. For the sake of simplicity, we have considered motion along the $x$ axis. In the absence of SO coupling ($\gamma = 0$), Galilean invariance is restored, implying that the moving solitons, given by Eq. (\ref{mov-sol}), can be trivially obtained by multiplying stationary solutions of Eqs. 
(\ref{gps-1})-(\ref{gps-2}) with $e^{i v x}$. This is no longer possible for $\gamma\ne 0$, in which case the moving solitons are the stationary solutions of Eqs. (\ref{gpsm-1})-(\ref{gpsm-2}), presuming these exist, multiplied by $e^{i v x}$ \cite{rela,Sakaguchi,Liu}. The dependence of the shape of the soliton on its velocity is illustrated in Fig. \ref{fig6}, where we present the 2D contour plots of the density and phase of a soliton moving from left to right along the $x$ axis with velocities $v=0.01$ and $0.4$ for the parameters $c_0= -4, c_1 =-0.25,$ and $\gamma = 0.5$. The density and phase for $v=0.01$ in Figs. \ref{fig6}(a)-(f) clearly show the vortex and antivortex in components $m_f = \pm 1$, whereas in Figs. \ref{fig6}(g)-(l) we find that the vortex and antivortex have disappeared. These solutions, viz. $\psi_j(x,y,0)$ in Eq. (\ref{mov-sol}), obtained by solving Eqs. (\ref{gpsm-1})-(\ref{gpsm-2}) in imaginary time and then multiplying by $e^{i v x}$, are dynamically stable, as confirmed by real-time simulation of these solutions using Eqs. (\ref{gps-1})-(\ref{gps-2}). The moving soliton has an asymmetric profile, whereas the stationary soliton for the same set of parameters shown in Fig. \ref{fig2}(a) is axisymmetric. However, the density of the $m_f=0$ component of the moving vortex-bright soliton shown in Fig. \ref{fig6} is axisymmetric; the same is true of the total density. The results shown in Fig. \ref{fig6} are a manifestation of the lack of Galilean invariance in the present system, which makes the density profile of the moving soliton a function of its velocity, both its magnitude and its direction. Keeping $c_0 = -4, c_1=-0.25$ and $\gamma = 0.5$ fixed, we find that Eqs. (\ref{gpsm-1})-(\ref{gpsm-2}) admit self-trapped stationary solutions for $v\le 0.4$ along the $x$ axis; for $v>0.4$, no localized solitons can be found. 
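For the $\gamma = 0$ case noted above, setting a stationary profile into motion is a pure phase multiplication, which can be checked directly. The following is a minimal numerical sketch, not part of the original computation: the normalized Gaussian stands in for an actual stationary soliton profile and the grid parameters are illustrative. Multiplying the profile by $e^{ivx}$ shifts its mean momentum per particle to $v$, which a spectral derivative confirms.

```python
import numpy as np

# Grid (box size and resolution are illustrative choices)
L, N = 16.0, 256
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")

# Stand-in stationary profile (a normalized Gaussian, not a true soliton)
psi = np.exp(-(X**2 + Y**2) / 2.0)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx * dx)

# Galilean boost: multiply by exp(i v x), as in the gamma = 0 case
v = 0.01
psi_mov = psi * np.exp(1j * v * X)

# Mean momentum per particle <psi| -i d/dx |psi>, via an FFT derivative
kx = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
dpsi = np.fft.ifft(1j * kx[:, None] * np.fft.fft(psi_mov, axis=0), axis=0)
px = np.real(np.sum(np.conj(psi_mov) * (-1j) * dpsi) * dx * dx)
print(px)  # ~ 0.01, the boost velocity
```

For $\gamma \ne 0$ this shortcut no longer yields a uniformly translating profile, which is why the transformed Eqs. (\ref{gpsm-1})-(\ref{gpsm-2}) must be solved instead.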
As we increase $v$ from zero, the vortices in components $m_f = +1$ and $m_f = -1$ start moving away from each other along the $y$ axis. Numerically, we also find that these vortices are located on the line perpendicular to the direction of motion. Equations (\ref{gpsm-1})-(\ref{gpsm-2}) are no longer invariant under the transformations $\phi = \tan^{-1}(y/x) \rightarrow \phi+\theta$ and $\psi_{m_f}(r,\phi) \rightarrow \psi_{m_f}(r,\phi+\theta) e^{ -i m_f\theta}$ due to the $v$-dependent terms. The orientation of the vortices in the stationary solutions of Eqs. (\ref{gpsm-1})-(\ref{gpsm-2}) along the $y$ axis is a manifestation of the lack of this rotational symmetry. \begin{figure}[t] \begin{center} \includegraphics[trim = 0.cm 0cm 0cm 0cm, clip,width=\linewidth,clip]{fig11.png} \caption{(Color online) The 2D contour plot of total densities of the right and left moving {{\em out-of-phase}} solitons with the same parameters as in Fig. \ref{fig6} during collision with the impact parameter $d=2$, each moving with a speed $v=0.01$, at times (a) $t=0$, (b) $t=400$ , (c) $t=800$, (d) $t=1200$, (e) $t=1600$, and (f) $t=2000$. The directions of motion of the solitons before and after the collision are indicated by white arrows in (a) and (d), respectively, illustrating a change in the direction of motion after the collision. In this case the solitons repel and avoid each other without a direct encounter.} \label{fig9} \end{center} \end{figure} The collision between two one-dimensional integrable solitons is truly elastic. The collision between two 2D solitons is expected to be inelastic, in general. We find that two {\em in-phase} vortex-bright solitons with $c_0 = -4$, $c_1=-0.25$, $\gamma = 0.5$, moving with a speed of $v=0.01$ in opposite directions, collapse after the collision, as shown in Fig. \ref{new-fig}(a). 
In order to avert collapse, we considered the collision between the {\em in-phase} vortex-bright solitons with $c_0 = -2$ (half of the previous value), $c_1=-0.25$, $\gamma = 0.5$, and $v = 0.01$. In this case, after the collision all the atoms are transferred to one of the solitons, as shown in Fig. \ref{new-fig}(b); hence the collision effectively leads to a merger, as has also been observed experimentally for scalar solitons \cite{Nguyen}. The collision in this case is similar to the inelastic collision between two non-spinor bright solitons at low velocities \cite{Luis}. However, we find that the slowly moving $(- 1,0,+ 1)$ solitons with asymmetric profiles, like the ones shown in Fig. \ref{fig6}, and a phase difference of $\pi$ can collide quasi-elastically. This is demonstrated by real-time simulation of two solitons, obtained by solving Eqs. (\ref{gpsm-1})-(\ref{gpsm-2}) for $c_0 = -4$, $c_1=-0.25$, $\gamma = 0.5$ and $v=0.01$ by imaginary-time propagation, placed initially at $t=0$ at $x=\pm 12.7$ and set into motion in opposite directions along the $x$ axis with a speed of $v=0.01$. In our simulations, if we use the same initial guess to obtain the right and left moving solitons, they end up acquiring a phase difference of $\pi$. We find that the solitons come close to each other, turn back, and retrace their trajectories without crossing each other. This is illustrated by the 2D contour plots of the axisymmetric $m_f=0$ component and total densities, (a) $\rho_0 (x,y=0,t)$ and (b) $\rho (x,y=0,t)$, respectively, versus $x$ and $t$ in Fig. \ref{fig7}. During the collision, the asymmetric densities of the $m_f =\pm 1$ components show subtle changes, as shown in Fig. \ref{fig8} through snapshots of subsequent 2D contour plots of these densities near the instant of closest approach of the two solitons. The density distributions of the $m_f =\pm 1$ components in the right and the left moving solitons are not identical, as shown in Figs. 
\ref{fig8}(a) and (b), which is again due to the breakdown of the Galilean invariance. As the left and the right moving solitons collide, the antivortices in the $m_f=+1$ component in the left and the right moving solitons slowly move along the $y$ axis, as is evident from Figs. \ref{fig8} (a), (c), (e), (g), and (i); this is accompanied by an analogous movement of vortices in $\psi_{-1}$, as shown in Figs. \ref{fig8} (b), (d), (f), (h), and (j). These changes ensure that during the course of the collision the two solitons exchange their linear momenta, and thus rebound after the collision without ever crossing each other. The repulsive collision between two bright solitons in a quasi-1D BEC has also been observed experimentally \cite{Nguyen}, consistent with our simulations. We have also investigated the {\em out-of-phase} collision between two slowly moving solitons along the $x$ axis in opposite directions with a non-zero impact parameter $d$. We consider an elastic collision between two bright solitons, each with $c_0 = -4$, $c_1 =-0.25$ and $\gamma = 0.5$, placed initially ($t=0$) at $x=\pm 12.7, y=\pm 1$ and set into motion along the $x$ axis in opposite directions with speed $v=0.01$. This collision is illustrated in Fig. \ref{fig9} by the 2D contour plot of the total densities of the two colliding solitons. As in an elastic collision with non-zero impact parameter between two classical objects, the two solitons are deflected from their original trajectories conserving momentum; they do not retrace their trajectories after the collision as in Fig. \ref{fig7}. The directions of motion of the solitons before and after the collision are shown by white arrows in Figs. \ref{fig9}(a) and (d), respectively. As in the case of the head-on collision shown in Fig. \ref{fig8}, here too the vortices in the left and right moving solitons rearrange themselves consistent with the change in the direction of motion during the collision. 
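The classical analogy can be made quantitative by treating the two solitons as identical rigid disks of an effective radius $R$ undergoing an elastic collision: for equal and opposite incoming velocities along the $x$ axis, the standard equal-mass disk result gives emerging angles $\theta_1' = \pi + 2\phi$ and $\theta_2' = 2\pi + 2\phi$, with collision angle $\phi = -\tan^{-1}(d/\sqrt{4R^2 - d^2})$ for impact parameter $d$. A minimal sketch (the function name is ours; the default $R = 4.7$ is the half-distance of closest approach extracted from the simulations):

```python
import math

def emerging_angles(d, R=4.7):
    """Final direction angles (degrees) after an elastic collision of two
    identical rigid disks of radius R approaching head-on along x with
    impact parameter d (disk 1 at theta_1 = 0, disk 2 at theta_2 = pi)."""
    phi = -math.atan(d / math.sqrt(4.0 * R * R - d * d))  # collision angle
    theta1 = math.degrees(math.pi + 2.0 * phi)        # right-moving soliton
    theta2 = math.degrees(2.0 * math.pi + 2.0 * phi)  # left-moving soliton
    return theta1, theta2

for d in (1, 2, 3, 4):
    t1, t2 = emerging_angles(d)
    print(d, round(t1, 1), round(t2, 1))
# 1 167.8 347.8
# 2 155.4 335.4
# 3 142.8 322.8
# 4 129.6 309.6
```

With these inputs the sketch reproduces the analytical entries of Table I.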
The change in the density profile of the $m_f = +1$ component during the collision is shown in Fig. \ref{fig12}; this is accompanied by an analogous change in the density profile of the $m_f = -1$ component (not shown here), viz. Fig. \ref{fig8}. \begin{figure}[t] \begin{center} \includegraphics[trim = 0.cm 0cm 0cm 0cm, clip,width=\linewidth,clip]{fig12.png} \caption{(Color online) Dynamics of the antivortex cores, close to the positions of closest approach, of $m_f = +1$ components in the right and left moving solitons for the collision shown in Fig. \ref{fig9}. The holes in the density profiles are the antivortices.} \label{fig12} \end{center} \end{figure} The {\em out-of-phase} collision illustrated in Figs. \ref{fig9} can be theoretically analyzed by considering the collision to be equivalent to classical elastic collision between two identical rigid circular disks of equal mass and equal scalar velocity $v$ with a non-zero impact parameter $d$. If the initial velocities of the two disks are \begin{align} {\bf v_j} &= v \cos \theta_j \hat x+ v \sin \theta_j \hat y, \end{align} where $j=1,2$ denote the index of the disk, $\hat x$ and $\hat y$ are the unit vectors along $x$ and $y$ axes, respectively; then the velocity components, ($v_{jx}',v_{jy}'$), after collision are \cite{Becker} \begin{align} v_{jx}' &= v \cos(\theta_{3-j}-\phi)\cos \phi-v\sin(\theta_j-\phi)\sin\phi,\label{ce1}\\ v_{jy}' &= v \cos(\theta_{3-j}-\phi)\sin \phi+v\sin(\theta_j-\phi)\cos\phi\label{ce2}. \end{align} Here $\phi$ is the collision angle and is related to the coordinates of the centers of two disks at the instant of closest approach, denoted by $(C_{jx},C_{jy})$, as \begin{align} \phi = \tan^{-1}\left(\frac{C_{1y} - C_{2y}}{C_{1x}-C_{2x}}\right). 
\end{align} In our case, when the solitons move along $y = +d/2$ with $\theta_1 = 0$ and along $y = -d/2$ with $\theta_2 = \pi$, $\phi$ can be written in terms of the impenetrable radius $R$ of the soliton as \begin{equation} \phi = -\tan^{-1}\left( \frac{d}{\sqrt{4R^2-d^2}}\right),\label{ce3} \end{equation} where the impenetrable radius is defined as half the distance between the centers of the two solitons at the instant of closest approach; it equals $4.7$ in the present case. Besides the collision shown in Fig. \ref{fig9} with impact parameter $d = 2$, we also studied the collisions with $d = 1,3,4$. Using $\theta_1 = 0$, $\theta_2 = \pi$ in Eqs. (\ref{ce1})-(\ref{ce2}), we find that the magnitudes of the velocities of the solitons remain unchanged after the collision, consistent with our numerical findings. The final angles are given by $\theta_1' = \pi+2\phi$ and $\theta_2' = 2\pi+2\phi$. These analytic classical results for the elastic collision between two disks are in good agreement with the numerical results for the elastic collision between two quantum 2D BEC solitons for $v_1 = v_2 = 0.01$, $\theta_1 = 0$, $\theta_2 = \pi$ and $R = 4.7$. The numerical and analytic results for the $\theta_j'$ in {\em degrees} for different values of the impact parameter are summarized in Table I. \begin{table} \caption{Numerical and analytical results for the angles of the emerging solitons after the collision shown in Fig. \ref{fig9}. 
} \begin{center} \begin{tabular}{| c | c | c | c |c|} \hline \multicolumn{1}{ |c| }{}&\multicolumn{2}{ |c| }{numerical}&\multicolumn{2}{ |c| }{analytical} \\ \hline $ d$ & $\theta_{1}'$ & $\theta_{2}'$ & $\theta_{1}' $ & $\theta_{2}'$\\ \hline 1 &168.5& 348.5 &167.8 &347.8\\ \hline 2 &155.2& 335.2 &155.4 &335.4\\ \hline 3 &142.2& 322.2 &142.8 &322.8\\ \hline 4 &130.2& 310.2 &129.6 &309.6\\ \hline \end{tabular} \end{center} \end{table} In contrast to slowly moving vortex-bright solitons, two fast moving vortex solitons can pass through each other during collision irrespective of phase difference. To demonstrate this, we consider the head-on collision between two {\em in-phase} vortex-bright solitons, each with $c_0 = -4$, $c_1=-0.25$ and $\gamma = 0.5$, moving with a speed of $0.4$ in opposite directions along the $x$ axis. The collision is illustrated by successive snapshots of 2D contour plots {of $m_f=\pm 1$ components} in Figs. \ref{fig10}. In this case, the collision dynamics of two {\em out-of-phase} vortex-bright solitons with same $c_0,c_1,\gamma$ and $v$ has little difference from dynamics shown in Figs. \ref{fig10}. This figure is qualitatively different from the dynamics shown in Fig. \ref{new-fig}(b) where at low velocities the solitons do not pass through each other. However, Figs. \ref{fig10} reveal that at high velocities the solitons superpose and cross each other like normal BEC solitons in 1D \cite{1Dcol} and 2D \cite{2Dcol}. \begin{figure}[t] \begin{center} \includegraphics[trim = 0.cm 0cm 0cm 0cm, clip,width=\linewidth,clip]{fig13.png} \caption{(Color online) The 2D contour plot of densities of the $m_f=+1$ components of two {\em in-phase} vortex-bright solitons each with $c_0 = -4$, $c_1=-0.25$ and $\gamma = 0.5$ moving in opposite directions along $x$ axis with velocity $v=0.4$ at times $t=$ (a) 0, (c) 35, (e) 48, (g) 60, and (i) 100. The same for the $ m_f=-1$ components are, respectively, presented in (b), (d), (f), (h), and (j). 
} \label{fig10} \end{center} \end{figure} \section{Summary} We have studied the formation and dynamics of 2D vortex-bright solitons in a three-component SO-coupled spin-1 spinor condensate using numerical solution and variational approximation of the mean-field GP equation. The ground state vortex-bright solitons are axisymmetric in the 2D plane in the polar ($c_1>0$) and weakly ferromagnetic ($0>c_1>c_1^{(1)}$) domains, whereas they are asymmetric in the strongly ferromagnetic domain ($c_1^{(1)}>c_1>c_1^{(2)}$). For very strong ferromagnetic interaction ($c_1< c_1^{(2)}$) the system collapses and no solitons can be found. In this problem the coupled GP equations are not Galilean invariant. Consequently, to obtain the dynamically stable moving solitons, the Galilean-transformed coupled GP equations have been used. The profile of the moving soliton depends on its velocity vector. In the study of the collision of two moving vortex-bright solitons at small velocities, we find that the {\em in-phase} solitons either collapse or merge into a single entity, whereas {\em out-of-phase} solitons repel and avoid each other without ever having an overlapping profile. The collision between the {\em in-phase} vortex-bright solitons is thus qualitatively similar to the collision of two normal (non-spinor) BEC solitons in 1D \cite{1Dcol} and 2D \cite{2Dcol}. In the collision of two solitons at large velocities, they form an overlapping profile during the interaction and cross each other. Here, the phase difference between the two solitons has little effect on the collision dynamics, as the kinetic energy of the solitons is more than sufficient to overcome any repulsion arising due to the phase difference. \label{Sec-V} \begin{acknowledgements} This work is financed by the Funda\c c\~ao de Amparo \`a Pesquisa do Estado de S\~ao Paulo (Brazil) under Contract Nos. 2013/07213-0, 2012/00451-0 and also by the Conselho Nacional de Desenvolvimento Cient\'ifico e Tecnol\'ogico (Brazil). 
\end{acknowledgements}
\section{Introduction\label{sec:intro}} The Ising model is one of the most popular models in statistical physics: its simplicity makes it easy to study while it is complex enough that many interesting physical phenomena can be studied with it, such as phase transitions and criticality \cite{brush1967history}. Since its inception, numerous variants of the Ising model have been proposed to study different phenomena. An important class of such variants is the Ising models with impurities. These are used to investigate how the presence of impurities, which occur frequently in nature, affects the properties of a system. Common ways to model impurities in the Ising model are by randomly removing spins (site-dilution \cite{hasenbusch2007universality, ballesteros1997ising, ivaneyko2005criticality}), bonds (bond-dilution \cite{Zhong_2020, Hasenbusch_2008, PhysRevB.18.2387, HADJIAGAPIOU20111279, berche2004bond}) or alternatively by randomly modifying the strength of the interactions in some other way \cite{Hasenbusch_2008, wolff1983phase}. In this paper we focus on the variant with bond-dilution. The introduction of bond-dilution to the Ising model changes its properties significantly. For example, it has been shown that the critical temperature that separates the ferromagnetic and paramagnetic phases of the Ising model changes depending on the extent of the bond-dilution \cite{HADJIAGAPIOU20111279}. This even introduces a new type of phase transition because the critical temperature drops to zero at a certain bond concentration, creating two phases (zero and non-zero critical temperature) separated by what is referred to as the percolation threshold \cite{jain1995anomalously}. In addition, it appears that the presence of impurities also alters the universality class of the model \cite{hasenbusch2007universality}. A common approach to study the Ising model is the use of Monte Carlo methods. 
The choice of the algorithm does not change any of the equilibrium properties: all algorithms sample the same (Boltzmann) distribution. However, the dynamics of different algorithms can vary strongly leading to pronounced differences in their efficiency for studying a certain model. In the pure Ising model, cluster algorithms such as the Wolff and Swendsen-Wang algorithms have proven themselves to be much more effective at criticality than single spin-flip algorithms like Metropolis \cite{BarkemaBook}. This difference is expected to be even more pronounced in the bond-diluted Ising model since it has been recently shown that single spin-flip algorithms suffer from a diverging correlation time when the percolation threshold is approached \cite{Zhong_2020}. The dynamics of cluster algorithms for the bond-diluted Ising model remains poorly studied and so it is still unclear whether they actually are more effective. Some studies have proposed that the efficiency of these cluster algorithms carries over to the bond-diluted Ising model and that correlation times actually decrease when site- or bond-dilution is introduced \cite{hennecke1993critical, ivaneyko2005criticality}. We present a quantitative analysis of the dynamics of the Wolff and Swendsen-Wang algorithms to show that this is in fact not the case for the Wolff algorithm. We will demonstrate that the Wolff algorithm suffers from much longer correlation times than in the pure model, caused by isolated (groups of) spins, a fact which has previously been hinted at by Ballesteros et al, who showed that depending on the degree of bond dilution there are different regions, characterised by the size of the groups of isolated spins, where certain Monte Carlo updates are more efficient at thermalising the system \cite{ballesteros1998critical}. 
We expand upon their work by proving a lower bound on the dynamical exponent of the Wolff algorithm and numerically showing that this lower bound is actually taken for several values of the dilution. This paper is organised as follows. We first define the bond-diluted Ising model, the cluster algorithms, and the observables that we use. Next, we present our results and discuss what they teach us about the correlation times of the Wolff and Swendsen-Wang algorithms. In the final section we summarise our main findings and conclude. \section{Model and Methods\label{sec:modelmethods}} \subsection{Model} In this paper we study the bond-diluted Ising model in two dimensions on a square lattice of size $L\,\times\,L$. This model is a variant of the regular Ising model with nearest-neighbour interactions and is obtained by randomly removing a fraction $1 - p$ of the bonds (i.e. interactions between two neighbours) from the lattice, where $p$ is called the bond concentration. Defined this way, $p$ is the probability that there is a bond between two neighbours. With this choice, $p = 1$ corresponds to the regular Ising model and $p = 0$ to a collection of isolated spins (no interactions). We define the model with the Hamiltonian \begin{equation}\label{eq:hamiltonian} \mathcal{H} = -J \sum_{\langle ij \rangle} c_{ij}(p) s_i s_j \end{equation} \noindent where the sum runs over all pairs of nearest-neighbour sites, $s_i = \pm 1$ is the spin on site $i$ and $c_{ij}(p)$ is a constant that follows a Bernoulli distribution with probability $p$, i.e. it has value $1$ with probability $p$ and value $0$ with probability $1 - p$. We refer to a realization of the $c_{ij}$'s for all nearest-neighbour pairs as a configuration of the model. The bond-dilution is frozen in for a particular configuration. In other words, the values of the $c_{ij}$'s are fixed for a specific configuration. All through the manuscript, energy is measured in units of $J$. 
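A frozen dilution configuration and the energy of Eq. (\ref{eq:hamiltonian}) can be sketched in a few lines. This is a minimal sketch, not the authors' code: it assumes periodic boundary conditions (which the text does not specify) and $J = 1$, and the function names are ours. Each site carries two bond variables, one to its right neighbour and one to its lower neighbour, drawn once from a Bernoulli distribution with parameter $p$ and then kept fixed.

```python
import numpy as np

def make_bonds(L, p, rng):
    """Frozen dilution: c[0, i, j] is the bond from site (i, j) to its right
    neighbour, c[1, i, j] the bond to its lower neighbour (periodic)."""
    return (rng.random((2, L, L)) < p).astype(np.int8)

def energy(s, c, J=1.0):
    """H = -J sum_<ij> c_ij s_i s_j for spins s = +-1 on an L x L torus."""
    right = np.roll(s, -1, axis=1)
    down = np.roll(s, -1, axis=0)
    return -J * float(np.sum(c[0] * s * right + c[1] * s * down))

rng = np.random.default_rng(1)
L = 8
s = np.ones((L, L), dtype=np.int8)          # all spins up
print(energy(s, make_bonds(L, 1.0, rng)))   # p = 1: all 2 L^2 bonds, -2 J L^2 = -128.0
```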
\subsection{Algorithms} We use the bond-diluted Ising model to study the behaviour, and in particular the dynamics, of two cluster Monte Carlo algorithms. The first of these is the Wolff algorithm \cite{PhysRevLett.62.361}. The basic idea behind this algorithm is to grow a cluster of spins and flip all the spins in this cluster simultaneously with probability 1. To grow a cluster we perform the following steps \cite{BarkemaBook}, \begin{enumerate} \item choose a spin at random from the lattice, \item consider each of its neighbours. If the spins are aligned, add the neighbour to the cluster with probability $1 - e^{-2\beta J}$ with $\beta = \frac{1}{k_B T}$ and $J$ the coupling constant from the Hamiltonian, \item for each of the neighbours added in step 2 also consider all their neighbours to be added to the cluster and repeat this until no more neighbours exist that have not yet been considered. \end{enumerate} \noindent It can be shown that by growing the cluster in this way we satisfy both ergodicity and detailed balance \cite{BarkemaBook}. It is important to note that in the bond-diluted Ising model two spins are only considered to be neighbours if there is a bond between them. The second algorithm under consideration is the Swendsen-Wang algorithm \cite{PhysRevLett.58.86}. Similar to the Wolff algorithm, clusters of spins are grown according to the aforementioned procedure. It differs, however, in the fact that we do not just grow a single cluster, but cover the entire lattice with clusters and flip each of these with probability $\frac{1}{2}$ in a single step \cite{BarkemaBook}. Since clusters are grown in the same way as in the Wolff algorithm, showing that the Swendsen-Wang algorithm satisfies ergodicity and detailed balance proceeds analogously \cite{BarkemaBook}. \subsection{Observables\label{sec:observables}} During our simulations we keep track of several quantities. 
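As an aside before detailing these quantities, the cluster growth described above can be sketched in a few lines. This is a minimal sketch, not the authors' implementation: it assumes the spins are stored as an $L \times L$ nested list and the frozen dilution is supplied as a predicate \texttt{has\_bond}; both names are ours.

```python
import math
import random

def wolff_move(s, has_bond, beta, J=1.0, rng=random):
    """One Wolff cluster move on an L x L periodic lattice.
    s        : L x L nested list of spins +-1 (flipped in place)
    has_bond : has_bond(a, b) -> bool, the frozen dilution (symmetric)."""
    L = len(s)
    p_add = 1.0 - math.exp(-2.0 * beta * J)
    seed = (rng.randrange(L), rng.randrange(L))   # step 1: random seed spin
    cluster = {seed}
    frontier = [seed]
    while frontier:                               # steps 2-3: grow until exhausted
        i, j = frontier.pop()
        for nbr in (((i + 1) % L, j), ((i - 1) % L, j),
                    (i, (j + 1) % L), (i, (j - 1) % L)):
            ni, nj = nbr
            # neighbours only count if the (diluted) bond exists
            if (nbr not in cluster and has_bond((i, j), nbr)
                    and s[ni][nj] == s[i][j]
                    and rng.random() < p_add):
                cluster.add(nbr)
                frontier.append(nbr)
    for i, j in cluster:                          # flip the whole cluster
        s[i][j] *= -1
    return len(cluster)

s = [[1] * 4 for _ in range(4)]
n = wolff_move(s, lambda a, b: True, beta=50.0, rng=random.Random(1))
print(n)  # 16: at this strong coupling the cluster spans the whole 4 x 4 lattice
```

We now return to the quantities tracked during the simulations.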
This includes the energy of a state, which follows directly from the definition of the model and requires no further explanation. Additionally, we measure a quantity which we will refer to as the spin age and which we define as follows. To extract more information about the dynamics of the Wolff algorithm from our simulations, we label each site in the lattice with a spin age $a_i$, which we define to be the time since site $i$ was last visited (i.e. was part of a Wolff cluster) measured in the number of Wolff cluster moves. In other words, when a site is visited, its age is set to $0$ and each subsequent Wolff cluster move where the site is not visited, the age is incremented by $1$. Once the system is thermalised, both with respect to its configuration of spins and the distribution of ages, we count how often a certain age occurs at various steps in the simulation, to produce a histogram showing the distribution of ages in equilibrium. To be specific, at certain steps in the simulation (between moves) we measure for each age $a$ how many spins in the lattice are labelled with that age at that step and we call this number the age frequency $f_L (a)$. \section{Results and Discussion\label{sec:resultsdiscussion}} \subsection{The Wolff algorithm} We first discuss the behaviour of the Wolff algorithm applied to the bond-diluted Ising model. We will start with a simple argument to show that there must be a lower limit on the correlation time. Then we discuss the results from our numerical analysis to show that this lower bound is also taken for several values of the bond concentration $p$. But before going into the simple argument, let us first introduce some notation and two different time scales that we have used. For all the results we measure time in cluster moves of the algorithm used, because we found this the most intuitive timescale for understanding the results. 
However, when evaluating the performance of an algorithm, we prefer to measure time such that it scales with required CPU time. Since the CPU time per single Wolff cluster move can vary significantly, we require a second timescale for the Wolff algorithm. A good candidate is to measure time such that $t = 1$ corresponds to the situation where on average as many spins are flipped as there are in the lattice. The relation between the time $t$ and our previous time, which we denote by $t_{\mathrm{steps}}$ for Wolff, is given by $t = t_{\mathrm{steps}} \frac{\langle n \rangle}{L^2}$, where $\langle n \rangle$ is the average size of a Wolff cluster \cite{BarkemaBook}. It can also be shown that $\langle n \rangle$ scales as $L^{\gamma / \nu}$ at the critical temperature such that $L^{\gamma / \nu - 2}$ acts as a conversion factor when required \cite{BarkemaBook}. By construction, the same number of spins, namely all the spins in the lattice, are visited by the Swendsen-Wang algorithm in each cluster move, so $t_{\mathrm{steps}}$ already scales with CPU time for Swendsen-Wang and there is no additional timescale; we therefore use $t$ to denote the time measured in Swendsen-Wang cluster moves. Now let us turn to the simple argument. We argue that the correlation time $\tau_{\mathrm{steps}, w}$ for $p < 1$ is bounded from below by $L^2$, i.e. $\tau_{\mathrm{steps}, w} = \Omega(L^2)$. To see this, note that for any $p < 1$ there will always exist at least one isolated spin in the lattice for a sufficiently large system size (i.e. for a sufficiently large system the expectation value for the number of isolated spins will be at least 1). By an isolated spin we mean a spin all of whose bonds to the rest of the lattice have been removed. Such spins would only be flipped by the Wolff algorithm if they are chosen as the seed spin. 
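The expectation value invoked here can be made explicit: on the square lattice each spin has four incident bonds, each present independently with probability $p$, so a given spin is isolated with probability $(1-p)^4$ and, by linearity of expectation, the expected number of isolated spins is $L^2 (1-p)^4$. A quick numerical sketch (the function name is ours):

```python
def expected_isolated(L, p):
    """E[number of isolated spins] on an L x L square lattice, where each of
    the 4 bonds incident on a spin is present with probability p."""
    return L**2 * (1.0 - p) ** 4

# At p = 0.6 even modest lattices contain dozens of isolated spins on average
for L in (40, 60, 100):
    print(L, expected_isolated(L, 0.6))  # approx. 40.96, 92.16, 256.0
```

For $p = 0.6$ this expectation already exceeds one for $L \gtrsim 7$, so the argument applies at all system sizes simulated here.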
And since each spin is equally likely to be picked and there are $L^2$ spins, the correlation stored in these spins, however small it might be, will also survive for $\Omega(L^2)$ cluster moves. Therefore, we can conclude that the correlation time $\tau_{\mathrm{steps}, w}$ is bounded from below by $L^2$. To study the behaviour numerically, we ran simulations with the Wolff algorithm for various system sizes with $p = 0.6$ at $(\beta J)^{-1} = 0.940$ where $\beta = \frac{1}{k_B T}$ and $J$ the coupling constant. We chose this value for $p$ because the effects of bond-dilution become more pronounced when the bond fraction $p$ is significantly below $1$. The temperature was chosen to be in the vicinity of the critical temperature as determined with the Binder cumulant. The value we found is also in good agreement with the critical temperature found in other papers, see for example \cite{Zhong_2020}. Unless otherwise mentioned, we used $100,000$ different realizations of the bond dilution in each simulation. Figure \ref{fig:wolff_therm} shows the evolution of the energy of the system towards its thermal equilibrium value as a function of Wolff cluster moves. For $L = 40$ we ran for 400 cluster moves per configuration, for $L = 100$ we ran for 300 cluster moves and in between we tuned the number of cluster moves to roughly keep the CPU time used per simulation constant. At $t_{\mathrm{steps}} = 0$, the system starts in the configuration with all spins pointing up ($s_i = 1$ for all $i$). Notice how the curve seems to transition from a fast decay for small $t_{\mathrm{steps}}$ to a slower decay at large $t_{\mathrm{steps}}$. When the vertical and horizontal axes are scaled with $L^2$ the tails of the curves, the regions of slower decay, collapse. 
Since these tails are the limiting factor in the convergence of the energy to its equilibrium value, this suggests that the correlation time $\tau_{\mathrm{steps}, w}$ scales as $L^2$ such that $\tau_w$ scales as $L^{z_w}$ with $z_w={\gamma / \nu}$. Numerically, it is reported that $\gamma / \nu$ is independent of $p$ for $p \geq 0.6$, and actually indistinguishable from $\gamma / \nu = 1.75$ as in the regular Ising model \cite{HADJIAGAPIOU20111279}. Note that, while the equilibrium exponents are numerically indistinguishable, the dynamic exponent is very different: in the regular two-dimensional (2D) Ising model the dynamic exponent is reported as $z_w = 0.25(1)$ \cite{BarkemaBook}. \begin{figure} \includegraphics[width=\columnwidth]{wolff_therm.pdf} \caption{\label{fig:wolff_therm} Convergence of the energy $E(t)$ to the thermal equilibrium $\langle E \rangle$ during thermalisation with the Wolff algorithm for different system sizes $L$ with $p = 0.6$ at $(\beta J)^{-1} = 0.940$ where $\beta = \frac{1}{k_B T}$ and $J$ the coupling constant. For $t_{\mathrm{steps}} = 0$ the system starts in a state with all spins pointing up. Both the vertical and horizontal axes were scaled with $L^2$. Note the collapse of the right tails of the curves, suggesting that the correlation time $\tau_{\mathrm{steps}, w} \sim L^2$.} \end{figure} To verify our argument that isolated (groups of) spins exist that are not touched by the algorithm for a long time, we computed a histogram of the distribution of the spin ages throughout a simulation with the Wolff algorithm in the manner described in the Model and Methods section. For these simulations we used $10^4$ realizations of the bond-dilution. To initialise the system we first thermalise with 50 Swendsen-Wang moves, starting from a state with all spins pointing up. We also first run the simulation for $5L^2$ Wolff cluster moves to make sure that spins can actually reach all the ages that we report in the histogram. 
Finally, we measure the age for an additional $1000$ consecutive Wolff steps. We did the simulations for both $p = 0.6$ at $(\beta J)^{-1} = 0.940$ as before as well as for $p = 0.7$ at $(\beta J)^{-1} = 1.310$, $p = 0.8$ at $(\beta J)^{-1} = 1.648$ and $p = 0.9$ at $(\beta J)^{-1} = 1.964$. For completeness, we also did the simulations at $p = 1$ at $(\beta J)^{-1} = 2.27$. We found these temperatures to be in the vicinity of the critical temperature at their respective bond fractions $p$, again in agreement with the critical temperature found in other papers \cite{Zhong_2020}. The results are shown in figure \ref{fig:wolff_histogram}. \begin{figure*} \subfigure[$p < 1$]{\includegraphics[width=\columnwidth]{wolff_histogram.pdf}}\quad \subfigure[$p = 1$]{\includegraphics[width=\columnwidth]{wolff_histogram_p1.pdf}} \caption{\label{fig:wolff_histogram} Distribution of spin ages $a$ during a simulation with the Wolff algorithm at equilibrium. In figure (a) we see the data for $p = 0.6$ at $(\beta J)^{-1} = 0.940$, $p = 0.7$ at $(\beta J)^{-1} = 1.310$, $p = 0.8$ at $(\beta J)^{-1} = 1.648$ and $p = 0.9$ at $(\beta J)^{-1} = 1.964$. In figure (b) we see the data for $p = 1$ at $(\beta J)^{-1} = 2.27$. Here $\beta = \frac{1}{k_B T}$ and $J$ the coupling constant. The spin age is defined as the time since the site was last visited, measured in Wolff cluster moves. Note the different scaling of the horizontal axis for (a) and (b). The horizontal axis in (a) was scaled with $L^2$, while in (b) it was scaled with $L^{z_{\mathrm{steps}, w}}$ where $z_{\mathrm{steps}, w} = 0.50$ was chosen to correspond with the $z_w = 0.25(1)$ for the regular 2D Ising model \cite{BarkemaBook}. The collapse of the curves in (a) again suggests that the correlation time $\tau_{\mathrm{steps}, w}$ of the Wolff algorithm scales as $L^2$, in agreement with figure \ref{fig:wolff_therm}. 
Scaling the horizontal axis in (b) with the dynamical exponent for the regular 2D Ising model from the literature also leads to a reasonable collapse, as we would expect.} \end{figure*} The figure clearly shows that some spins survive for a very long time. Also note the strikingly good collapse of the curves in figure \ref{fig:wolff_histogram}a when we scale the horizontal axis with $L^2$, for $p = 0.6$, $p = 0.7$, $p = 0.8$ and $p = 0.9$. This supports our earlier finding that $\tau_w$ scales as $L^{z_w}$ with $z_w=\gamma / \nu \approx 1.75$ for these values of $p$. In contrast, the histogram drops to zero very quickly for $p = 1$ and we need a different scaling to get a reasonable collapse. This suggests that the effect of long-surviving spins only shows up for $p < 1$. \subsection{The Swendsen-Wang algorithm} We now turn our attention to the Swendsen-Wang algorithm. By construction, it visits every spin in the lattice at each step, so it should not suffer from the problems encountered with the Wolff algorithm, which originate from long-surviving spins. As for the Wolff algorithm, we ran simulations for various system sizes $L$ at $p = 0.6$ and $(\beta J)^{-1} = 0.940$; i.e., the setup of the simulations was exactly the same, only the algorithm used to update the spins was different. Figure \ref{fig:sw_therm} shows the analogue of figure \ref{fig:wolff_therm}, but for the Swendsen-Wang algorithm. In addition, it contains an inset that shows the same data plotted in a different way. At $L = 30$ we ran for 300 Swendsen-Wang steps per configuration while at $L = 100$ we ran for 100 steps; in between, we tuned the number of steps to keep the CPU time used roughly constant. In the main part of the figure we can see that the energy quickly converges to its thermal equilibrium value and that the slowly decaying tail from figure \ref{fig:wolff_therm} is absent. 
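For reference, a single Swendsen-Wang move, which by construction assigns every spin to some cluster, can be sketched as follows. This is a minimal union-find implementation with $J=1$, intended only to illustrate the update, not the production code used for the figures.

```python
import numpy as np

def swendsen_wang_step(spins, bonds, beta, rng):
    """One Swendsen-Wang move on a 2D bond-diluted Ising lattice:
    every spin is assigned to a cluster, so no site is left untouched.
    bonds[0, x, y] / bonds[1, x, y] are the right / down lattice bonds."""
    L = spins.shape[0]
    parent = np.arange(L * L)                   # union-find forest

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]       # path halving
            i = parent[i]
        return i

    p_freeze = 1.0 - np.exp(-2.0 * beta)        # J = 1
    for x in range(L):
        for y in range(L):
            s = x * L + y
            for axis, (nx, ny) in enumerate([((x + 1) % L, y),
                                             (x, (y + 1) % L)]):
                if bonds[axis, x, y] and spins[x, y] == spins[nx, ny] \
                        and rng.random() < p_freeze:
                    parent[find(s)] = find(nx * L + ny)
    # flip each cluster independently with probability 1/2
    roots = np.array([find(i) for i in range(L * L)])
    flip = rng.random(L * L) < 0.5
    spins *= np.where(flip[roots], -1, 1).reshape(L, L)
    return spins
```

Because every site is assigned to a cluster in each move, even isolated spins are resampled every step, which is why no long-surviving spins appear here.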
Moreover, when scaling the vertical axis with $L^2$ and the horizontal axis with $L^{z_{\rm sw}}$ with $z_{\rm sw} = 0.09(4)$, the collapse of the curves suggests that the correlation time $\tau_{\rm sw}$ for Swendsen-Wang at $p = 0.6$ scales as $L^{z_{\rm sw}}$. The value of $z_{\rm sw}$ used to scale the horizontal axis was chosen to be the same as the value we determined with a different method, described below. Note that the dynamical exponent $z_{\rm sw}$ is significantly smaller at $p = 0.6$ than for the regular 2D Ising model ($p = 1$), where $z_{\rm sw} = 0.25(1)$ \cite{BarkemaBook}. This is the opposite of the super slowing down observed for the Metropolis algorithm \cite{Zhong_2020}. Finally, the inset shows $h(t)$ versus time $t$. Here $h(t) = -\log\left(c \left| E(t) - \langle E \rangle \right|\right)$ with $c = \left| E(0) - \langle E \rangle \right|^{-1}$. The blue curve is a straight line with slope $0.87$. Since the data run parallel to this blue curve rather than to a line with slope $1$, the convergence of the energy appears to follow a stretched exponential. \begin{figure} \includegraphics[width=\columnwidth]{sw_therm.pdf} \caption{\label{fig:sw_therm} Convergence of the energy $E(t)$ to the thermal equilibrium $\langle E \rangle$ during thermalisation with the Swendsen-Wang algorithm for different system sizes $L$ with $p = 0.6$ at $(\beta J)^{-1} = 0.940$, where $\beta = \frac{1}{k_B T}$ and $J$ is the coupling constant. For $t = 0$ the system starts in a state with all spins pointing up. The vertical axis was scaled with $L^2$ and the horizontal axis with $L^{z_{\rm sw}}$ with $z_{\rm sw} = 0.09(4)$, where $z_{\rm sw}$ was chosen to be the same as in figure \ref{fig:sw_energy_othercorr}. Note that this plot is equivalent to figure \ref{fig:wolff_therm} but for the Swendsen-Wang algorithm. 
The collapse of the curves suggests that the correlation time for the Swendsen-Wang algorithm scales as $L^{z_{\rm sw}}$ with $z_{\rm sw} = 0.09(4)$. Also note the absence of a slowly decaying tail, demonstrating that the Swendsen-Wang algorithm does not suffer from the same problems that plague the Wolff algorithm (see figure \ref{fig:wolff_therm}). The inset in the top-right shows the same data plotted differently. Here $h(t) = -\log\left(c \left| E(t) - \langle E \rangle \right|\right)$ with $c = \left| E(0) - \langle E \rangle \right|^{-1}$. The blue curve is a straight line with slope $0.87$. Since the data run parallel to this blue curve rather than to a line with slope 1, the convergence of the energy appears to follow a stretched exponential.} \end{figure} We have already shown that there is a value of the dynamical exponent $z_{\rm sw}$ that gives a good collapse of the data in figure \ref{fig:sw_therm}. However, that plot shows data from out-of-equilibrium simulations, so we did not use it to determine the scaling of the correlation time (at least not in the form presented in figure \ref{fig:sw_therm}). Instead, we determined it from equilibrium simulations. For this we computed the evolution of the mean-square displacement of the energy, $\langle [E(t) - E(0)]^2 \rangle$, from the same data as used for figure \ref{fig:sw_therm}. To obtain equilibrium data we discarded all data from before the system was thermalised: for $L = 30$ all data before $t = 50$, and for all other system sizes all data before $t = 20$ (i.e., these times became the new $t = 0$ for determining $\langle [E(t) - E(0)]^2 \rangle$). The results are shown in figure \ref{fig:sw_energy_othercorr}. After scaling the vertical axis with the numerically determined limiting values of the curves, we can collapse the curves using a horizontal scaling of $L^{z_{\rm sw}}$ with $z_{\rm sw} = 0.09(4)$. 
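The inset analysis, extracting the stretching exponent from the slope of $\log h(t)$ versus $\log t$, can be sketched as follows. The synthetic relaxation curve, its decay time, and the equilibrium value are illustrative assumptions, not our simulation data.

```python
import numpy as np

def stretched_exponent(t, E, E_eq):
    """Estimate the stretching exponent b from a relaxation curve E(t),
    assuming |E(t) - E_eq| ~ exp(-(t / tau)**b). With
    h(t) = -log(c |E(t) - E_eq|) and c = 1 / |E(0) - E_eq|, a plot of
    log h versus log t is then a straight line of slope b."""
    c = 1.0 / abs(E[0] - E_eq)
    h = -np.log(c * np.abs(E - E_eq))
    mask = (t > 0) & (h > 0)                     # keep the log-log domain
    slope, _ = np.polyfit(np.log(t[mask]), np.log(h[mask]), 1)
    return slope

# synthetic relaxation curve with b = 0.87, the slope quoted for the inset
t = np.arange(0, 200)
E = -1.6 + 0.5 * np.exp(-(t / 20.0) ** 0.87)
```

On real data the late-time part of the curve is noisy (the argument of the logarithm approaches zero), so in practice one would restrict the fit window accordingly.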
The uncertainty in the dynamical exponent was determined by tuning the scaling of the axis to find the range within which the collapse remained good; the size of this range was then used as a measure of the uncertainty. This confirms our earlier numerical estimate of the dynamical critical exponent for the Swendsen-Wang algorithm at $p = 0.6$. \begin{figure} \includegraphics[width=\columnwidth]{sw_energy_othercorr.pdf} \caption{\label{fig:sw_energy_othercorr} Mean-square displacement of the energy $\langle [E(t) - E(0)]^2 \rangle$ in thermal equilibrium as a function of the number of Swendsen-Wang moves $t$ for different system sizes $L$ with $p = 0.6$ at $(\beta J)^{-1} = 0.940$, where $\beta = \frac{1}{k_B T}$ and $J$ is the coupling constant. The vertical axis was scaled with the numerically determined limit value of the curves, while the horizontal axis was scaled with $L^{z_{\rm sw}}$ with $z_{\rm sw} = 0.09(4)$. The dynamical critical exponent for the Swendsen-Wang algorithm was determined by tuning the scaling of the horizontal axis until a good collapse was found.} \end{figure} \section{Summary and Conclusions} We have shown how the correlation times $\tau_w$ and $\tau_{\rm sw}$ of the Wolff and Swendsen-Wang cluster algorithms scale as a function of the system size $L$ when applied to the 2D bond-diluted Ising model. We demonstrated that the Wolff algorithm suffers from a much longer correlation time than in the pure Ising model, caused by isolated (groups of) spins which are infrequently visited by the algorithm. With a simple argument we showed that these cause the correlation time to be bounded from below by $L^{z_w}$ with $z_w=\gamma / \nu \approx 1.75$ for bond concentrations $p < 1$. Furthermore, we showed numerically that this lower bound is actually attained for several values of the bond concentration in the region $0.5 < p < 1$. Moreover, we have shown that, by construction, the Swendsen-Wang algorithm does not suffer from the same problem. 
It has a much shorter correlation time, even shorter than in the pure Ising model. Numerically, we have found that its correlation time scales as $L^{z_{\rm sw}}$ with $z_{\rm sw} = 0.09(4)$ at $p = 0.6$. We expect that the Wolff algorithm will suffer from the same problems in the three-dimensional bond-diluted Ising model, albeit to a lesser degree, since more bonds must be removed to create isolated spins. In addition, we expect the same to hold for the site-diluted and weakly diluted (i.e., where bonds are weakened rather than removed) Ising models. This could be explored in future work.
\section{Introduction} Contextual bandits, where personalized decisions are made sequentially and simultaneously with data collection, are increasingly used to address important decision-making problems where data is limited and/or expensive to collect, with applications in product recommendation \citep{li2010contextual}, revenue management \citep{kallus2020dynamic,qiang2016dynamic}, and personalized medicine \citep{tewari2017ads}. Adaptive experiments, whether based on bandit algorithms or Bayesian optimization, are increasingly being considered in place of classic randomized trials in order to improve both the outcomes for study participants and the chance of identifying the best treatment allocations \citep{atheytrial,kasytrial,kasy2021adaptive,bakshy2018ae}. But, at the end of the study, we still want to construct valid confidence intervals on average treatment effects, subgroup effects, or the value of new personalized interventions. Such confidence intervals are, for example, crucial for enabling credible inference on the presence or absence of improvement of novel policies. However, due to the adaptive nature of the data collection, unlike classic randomized trials, standard estimates and their confidence intervals actually fail to provide correct coverage, that is, contain the true parameter with the desired confidence probability (\emph{e.g.}, 95\%). A variety of recent work has recognized this and offered remedies \citep{hadad2019confidence,luedtke_vdL2016}, but only for the case of non-contextual adaptive data collection. Like classic confidence intervals, when data comes from a contextual bandit -- or any other context-dependent adaptive data collection -- these intervals also fail to provide correct coverage. In this paper, we propose the first asymptotically normal estimator for the value of a (possibly contextual) policy from \emph{context-dependent} adaptively collected data. 
This asymptotic normality leads directly to the construction of valid confidence intervals. Our estimator takes the form of a \emph{stabilized} doubly robust estimator, that is, a weighted time average of estimates of the so-called canonical gradient using plug-in estimators for the outcome model, where each time point is inversely weighted by its estimated conditional standard deviation given the past. We term this the Contextual Adaptive Doubly Robust (CADR) estimator. We show that, given consistent conditional variance estimates which at each time point only depend on previous data, the CADR estimator is asymptotically normal, and as a result we can easily construct asymptotically valid confidence intervals. This normality is in fact robust to misspecifying the outcome model. A significant technical challenge is actually constructing such variance estimators. We resolve this using an adaptive variance estimator based on the importance-sampling ratio of current to past (adaptive) policies at each time point. We also show that we can reliably estimate outcome models from the adaptively collected data so that we can plug them in. Extensive experiments using 57 OpenML datasets demonstrate the failure of previous approaches and the success of ours at constructing confidence intervals with correct coverage. \subsection{Problem Statement and Notation} \subparagraph{The data.} Our data consist of a sequence of observations indexed $t=1,\dots,T$, comprising context $X(t)\in\mathcal X$, action $A(t)\in\mathcal A$, and outcome $Y(t)\in\mathcal Y\subset\mathbb R$, generated by an adaptive experiment, such as a contextual bandit algorithm. 
Roughly, at each round $t=1,2,\dots,T$, an agent formed a contextual policy $g_t(a\mid x)$ based on all past observations, then observed an independently drawn context vector $X(t)\sim Q_{0,X}$, carried out an action $A(t)$ drawn from its current policy $g_t(\cdot\mid X(t))$, and observed an outcome $Y(t)\sim Q_{0,Y}(\cdot\mid A(t),X(t))$ depending only on the present context and action. The context and action measurable spaces $\mathcal X,\mathcal A$ are arbitrary, \emph{e.g.}, finite or continuous. More formally, we let $O(t):=(X(t), A(t), Y(t))$ and make the following assumptions about the sequence $O(1),\dots,O(T)$ comprising our dataset. First, we assume $X(t)$ is independent of the past $O(1),\dots,O(t-1)$ and has a time-independent marginal distribution that we denote by $Q_{0,X}$. Second, we assume $A(t)$ is independent of all else given $O(1),\dots,O(t-1),X(t)$ and we set $g_t(\cdot\mid X(t))$ to its (random) conditional distribution given $O(1),\dots,O(t-1),X(t)$. Third, we assume $Y(t)$ is independent of all else given $X(t),A(t)$ and has a time-independent conditional distribution given $X(t)=x,A(t)=a$ that we denote by $Q_{0,Y}(\cdot\mid a,x)$. The distributions $Q_{0,X}$ and $Q_{0,Y}$ are unknown, while the policies $g_t(a\mid x)$ are known, as would be the case when running an adaptive experiment. To simplify presentation we endow $\mathcal A$ with a base measure $\mu_\mathcal A$ (\emph{e.g.}, counting for finite actions or Lebesgue for continuous actions) and identify policies $g_t$ with conditional densities with respect to (w.r.t.) $\mu_\mathcal A$. In the case of $K<\infty$ actions, policies are maps from $\mathcal X$ to the $K$-simplex. Note that, as the agent updates its policy based on already collected observations, $g_t$ is a random $O(1),\ldots,O(t-1)$-measurable object. 
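The data-generating process just described can be sketched in code. The environment (linear outcome means, Gaussian noise) and the $\epsilon$-greedy agent below are illustrative assumptions, not part of our setup; the point is the measurability structure: $g_t$ is formed from $O(1),\dots,O(t-1)$ only, before $X(t)$ is drawn.

```python
import numpy as np

def run_adaptive_experiment(T=500, K=3, d=2, eps=0.1, seed=0):
    """Sketch of the data-generating process: at each round t the agent
    forms a policy g_t from past data (here: epsilon-greedy on running
    per-arm means), draws X(t), A(t) ~ g_t(.|X(t)), then Y(t)."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(size=(K, d))             # true outcome parameters
    counts, sums = np.zeros(K), np.zeros(K)
    X, A, Y, G = [], [], [], []                 # G stores g_t(A(t)|X(t))
    for t in range(T):
        # g_t is measurable w.r.t. the past O(1..t-1) only
        means = np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)
        best = int(np.argmax(means))
        g_t = np.full(K, eps / K)
        g_t[best] += 1.0 - eps
        x = rng.normal(size=d)                  # X(t) ~ Q_{0,X}, i.i.d.
        a = rng.choice(K, p=g_t)                # A(t) ~ g_t(.|X(t))
        y = theta[a] @ x + rng.normal()         # Y(t) ~ Q_{0,Y}(.|a,x)
        counts[a] += 1; sums[a] += y
        X.append(x); A.append(a); Y.append(y); G.append(g_t[a])
    return np.array(X), np.array(A), np.array(Y), np.array(G)
```

Note that the logged propensities $g_t(A(t)\mid X(t))$ are recorded alongside the data, reflecting that the policies are known to the analyst.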
This is the major departure from the setting considered in other literature on off-policy evaluation, which only considers a fixed logging policy, $g_t=g$, independent of the data. See \cref{sec:litreview}. \subparagraph{The target parameter.} We are interested in inference on a \emph{generalized average causal effect} expressed as a functional of the unknown distributions above, $\Psi_0=\Psi(Q_{0,X},Q_{0,Y})$, where for any distributions $Q_X,Q_{Y}$, we define \begin{align}\notag \Psi(Q_{X},Q_{Y}) := \int y\, Q_X(dx)g^*(a\mid x)d\mu_\mathcal A(a)Q_Y(dy\mid a,x), \end{align} where $g^*(a\mid x):\mathcal A\times\mathcal X\to[-G,G]$ is a given fixed, bounded function. Two examples are: (a) when $g^*$ is a policy (conditional density), then $\Psi_0$ is its value; (b) when $g^*$ is the difference between two policies, then $\Psi_0$ is the difference between their values. A prominent example of the latter is when $\mathcal A=\{+1,-1\}$ and $g^*(a\mid x)=a$, which gives the average treatment effect. If we include an indicator for $x$ being in some set, then we get the subgroup effect. Defining the conditional mean outcome, $$ \bar{Q}_0(a,x):=E_{Q_{0,Y}(\cdot\mid x,a)}[Y]=\int yQ_{0,Y}(dy\mid a,x), $$ we note that the target parameter only depends on $Q_{0,Y}$ via $\bar{Q}_{0}$, so we also overload notation and write $\Psi(Q_{X},\bar{Q})=\int \bar Q(a,x)Q_X(dx)g^*(a\mid x)d\mu_\mathcal A(a)$ for any function $\bar Q:\mathcal A\times \mathcal X\to\mathcal Y$. Note that when $|\mathcal A|<\infty$ and $\mu_\mathcal A$ is the counting measure, the integral over $a$ is a simple sum. 
\paragraph{Canonical gradient.} We will make repeated use of the following function: for any conditional density $(a,x) \mapsto g(a \mid x)$, any probability distribution $Q_X$ over the context space $\mathcal{X}$, and any function $\bar{Q}:\mathcal{A} \times \mathcal{X}\to\mathbb R$, we define the function $D'(g, \bar{Q}):\mathcal O\to\mathbb R$ by \begin{align}\notag D'(g, \bar{Q})(x,a,y) := \frac{g^*(a \mid x)}{g(a \mid x)}(y - \bar{Q}(a,x)) + \int \bar{Q}(a',x)g^*(a' \mid x) d\mu_\mathcal A(a'). \end{align} Further, define $D(g, Q_X, \bar{Q})=D'(g, \bar{Q})-\Psi(Q_X, \bar{Q})$, which coincides with the so-called canonical gradient of the target parameter $\Psi$ w.r.t. the usual nonparametric statistical model comprising all joint distributions over $\mathcal{O}$ \citep{van2000asymptotic,van2003unified}. \paragraph{Integration operator notation.} For any policy $g$ and distributions $Q_X,Q_Y$, denote by $P_{Q,g}$ the induced distribution on $\mathcal{O}$. For any function $f:\mathcal{O} \rightarrow \mathbb{R}$, we use the integration operator notation \begin{align}\notag P_{Q,g}f = \int f(x,a,y) Q_X(dx) g(a \mid x)d\mu_\mathcal A(a) Q_Y(dy\mid a,x), \end{align} that is, the expectation w.r.t. $P_{Q,g}$ \emph{alone}. Then, for example, for any $O(1),\ldots,O(s-1)$-measurable random function $f:\mathcal O\rightarrow\mathbb R$, we have that $P_{Q_0,g_s} f = E_{Q_0,g_s}[f(O(s)) \mid {O}(1),\ldots,O(s-1)]$. 
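For finitely many actions, $D'$ can be sketched as follows; array names and shapes are illustrative. The sample average of the resulting values estimates $\Psi$: with $\bar Q=0$ this is inverse-propensity scoring, and with an outcome-model plug-in it is the doubly robust estimator.

```python
import numpy as np

def D_prime(g_star, g, Q_bar, a, y):
    """Sketch of D'(g, Qbar) for K finite actions, evaluated on a sample.
    g_star, g: (T, K) arrays of target / logging probabilities evaluated
    at the observed contexts; Q_bar: (T, K) outcome-model values;
    a, y: observed actions and outcomes."""
    t = np.arange(len(y))
    ratio = g_star[t, a] / g[t, a]               # importance ratio g*/g
    dm = (Q_bar * g_star).sum(axis=1)            # integral of Qbar g* d mu
    return ratio * (y - Q_bar[t, a]) + dm        # one value per observation
```

Subtracting $\Psi(Q_X,\bar Q)$ from these values gives the canonical gradient $D$.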
\subsection{Related Literature and Challenges for Post-Contextual-Bandit Inference}\label{sec:litreview} \paragraph{Off-policy evaluation.} In non-adaptive settings, where $g_t=g$ is fixed and does not depend on previous observations, common off-the-shelf estimators for the mean outcome under $g^*$ include the Inverse Propensity Scoring (IPS) estimator \citep{beygelzimer2009offset,li2011unbiased} and the Doubly Robust (DR) estimator \citep{dudik2011doubly,robins1994estimation}: \begin{align}\notag \widehat{\Psi}^{\mathrm{IPS}} := \frac{1}{T} \sum_{t=1}^T D'(g,0)(O(t)) , \qquad \widehat{\Psi}^{\mathrm{DR}} := \frac{1}{T} \sum_{t=1}^T D'(g,\widehat{\bar Q})(O(t)) \end{align} where $\widehat{\bar{Q}}$ is an estimator of the outcome model $\bar{Q}_0(a,x)$. If we use cross-fitting to estimate $\widehat{\bar{Q}}$ \citep{chernozhukov2017double}, then both the IPS and DR estimators are unbiased and asymptotically normal, permitting straightforward inference using Wald confidence intervals (\emph{i.e.}, $\pm1.96$ times the estimated standard error). There also exist many variants of the IPS and DR estimators that, rather than plugging in the importance sampling (IS) ratios $(g^* / g_t)(A(t) \mid X(t))$ and/or outcome-model estimators, instead choose them directly with the aim of minimizing error \citep[\emph{e.g.}][]{kallus2018balanced,farajtabar2018more,thomas2016data,wang2017optimal,kallus2019intrinsically}. \paragraph{Inference challenges in adaptive settings.} In the adaptive setting, it is easy to see that, if in the $t$th term of the DR estimator we use an outcome model $\widehat{\bar{Q}}_{t-1}$ fit using only the observations $O(1),\ldots,O(t-1)$, then the IPS and DR estimators both remain unbiased. However, neither generally converges to a normal distribution. One key difference between the non-adaptive and adaptive settings is that the IS ratios $(g^* / g_t)(A(t) \mid X(t))$ can both diverge to infinity or converge to zero. 
As a result, the above two estimators may be dominated by either their first terms or their last terms. At a more theoretical level, this violates the classical condition of martingale central limit theorems that the conditional variance of the terms given previous observations stabilizes asymptotically. \paragraph{Stabilized DR estimators in non-contextual settings.} The issue for inference due to instability of the DR estimator terms was recognized by \citet{luedtke_vdL2016} in another setting. They work in the non-adaptive setting but consider the problem of inferring the maximum mean outcome over all policies when the optimal policy is non-unique. Their proposal is a so-called \textit{stabilized estimator}, in which each term is inversely weighted by an estimate of its conditional standard deviation given the previous terms. This stabilization trick has also been reused for off-policy inference from \emph{non-contextual} bandit data by \citet{hadad2019confidence}, as the stabilized estimator remains asymptotically normal, permitting inference. In their non-contextual setting, an estimate of the conditional standard deviation of the terms can easily be obtained from the inverse square-root propensities. In contrast, in our \emph{contextual} setting, obtaining valid stabilization weights is more challenging and requires a construction involving adaptive training on past data. \subsection{Contributions} In this paper, we construct and analyze a stabilized estimator for policy evaluation from context-dependent adaptively collected data, such as the result of running a contextual bandit algorithm. This then immediately enables inference. 
After constructing a generic extension of the stabilization trick, the main technical challenge is to construct a sequence of estimators $\widehat{\sigma}_1,\ldots, \widehat{\sigma}_T$ of the conditional standard deviations that are both consistent and such that, for each $t$, $\widehat{\sigma}_t$ only uses the previous data points $O(1),\ldots,O(t-1)$. We show in extensive experiments across a large set of contextual bandit environments that our confidence intervals uniquely achieve close to nominal coverage. \section{Construction and Analysis of the Generic Contextual Stabilized Estimator} In this section, we give a generic construction of a stabilized estimator in our contextual and adaptive setting, given generic plug-in estimators of the outcome model and the conditional standard deviation. We then provide conditions under which the estimator is asymptotically normal, as desired. To develop CADR, we will then construct appropriate plug-in estimators in the following sections. \subsection{Construction of the Estimator} \paragraph{Outcome and variance estimators.} Our estimator uses a sequence $(\widehat{\bar{Q}}_t)_{t \geq 1}$ of estimators of the outcome model $\bar{Q}_0$, such that, for every $t$, $\widehat{\bar{Q}}_t$ is $O(1),\ldots,O(t)$-measurable, that is, is trained using \emph{only} the data up to time $t$. A key ingredient of our estimator is the sequence of conditional variance estimators: we require estimates of the conditional standard deviation of the canonical gradient. Define \begin{align}\notag \sigma_{0,t}&:= \sigma_{0,t}(g_t),\\\text{where}~~ \sigma_{0,t}^2(g) &:= \Var_{Q_0,g}\left( D'(g, \widehat{\bar{Q}}_{t-1})(O(t)) \mid O(1),\ldots,O(t-1) \right). \end{align} Let $(\widehat{\sigma}_t)_{t \geq 1}$ be a given sequence of estimates of $\sigma_{0,t}$ such that $\widehat{\sigma}_t$ is $O(1),\ldots,O(t-1)$-measurable, that is, is estimated using \emph{only} the data up to time $t-1$. 
\paragraph{The generic form of the estimator.} The generic contextual stabilized estimator is then defined as: \begin{align}\label{eq:stabilized-one-step} \widehat{\Psi}_T := \left( \frac{1}{T} \sum_{t=1}^T \widehat{\sigma}_t^{-1} \right)^{-1} \frac{1}{T} \sum_{t=1}^T \widehat{\sigma}_t^{-1} D'(g_t,\widehat{\bar Q}_{t-1})(O(t)) . \end{align} \subsection{Asymptotic normality guarantees} We next characterize the asymptotic distribution of $\widehat{\Psi}_T$ under some assumptions. \begin{assumption}[Non-degenerate efficiency bound]\label{assumption:non_degenerate_EB} $ \inf_g P_{Q_0,g} D^2(g, Q_{0,X}, \bar{Q}_0) > 0. $ \end{assumption} Assumption \ref{assumption:non_degenerate_EB} states that there is no fixed logging policy $g$ such that the efficiency bound for estimation of $\Psi(Q_{0,X}, \bar{Q}_0)$ in the nonparametric model, from i.i.d. draws of $P_{Q_0,g}$, is zero. If assumption \ref{assumption:non_degenerate_EB} does not hold, there exists a logging policy $g$ such that, if $O=(X,A,Y) \sim P_{Q_0,g}$, then $(g^*(A\mid X) / g(A \mid X)) Y$ equals $\Psi(Q_{0,X}, \bar{Q}_0)$ with probability 1. In other words, if assumption \ref{assumption:non_degenerate_EB} does not hold, there exists a logging policy $g$ such that $\Psi(Q_{0,X}, \bar{Q}_0)$ can be estimated with no error, with probability 1, from a single draw of $P_{Q_0,g}$. Thus, the assumption is very lax. An easy sufficient condition for \cref{assumption:non_degenerate_EB} is that the outcome model has nontrivial variance, in the sense that $\Var_{Q_{0,X}}(\int \bar{Q}_0(a,X)g^*(a \mid X) d\mu_\mathcal A(a))>0$. \begin{assumption}[Consistent standard deviation estimators]\label{assumption:stdev_est_consistency} $\widehat{\sigma}_t - \sigma_{0,t} \xrightarrow{t\to\infty}0$ almost surely. \end{assumption} In the next section we construct specific estimators $\widehat{\sigma}_t$ that satisfy \cref{assumption:stdev_est_consistency}, leading to our proposed CADR estimator and confidence intervals. 
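A minimal sketch of the generic stabilized estimator and the resulting Wald-type interval, taking the terms $D'(g_t,\widehat{\bar Q}_{t-1})(O(t))$ and the weights $\widehat\sigma_t$ as given inputs (names are illustrative):

```python
import numpy as np
from statistics import NormalDist

def stabilized_estimate(D_terms, sigma_hat, alpha=0.05):
    """Sketch of the generic stabilized estimator with a Wald-type CI.
    D_terms[t] plays the role of D'(g_t, Qhat_{t-1})(O(t)) and
    sigma_hat[t] of a past-measurable estimate of its conditional
    standard deviation; both are taken as precomputed inputs here."""
    D_terms = np.asarray(D_terms, dtype=float)
    w = 1.0 / np.asarray(sigma_hat, dtype=float)   # inverse-sd weights
    T = len(D_terms)
    Gamma_T = 1.0 / np.mean(w)                     # normalizer Gamma_T
    psi_hat = Gamma_T * np.mean(w * D_terms)       # stabilized estimate
    z = NormalDist().inv_cdf(1 - alpha / 2)        # standard normal quantile
    half = z * Gamma_T / np.sqrt(T)
    return psi_hat, (psi_hat - half, psi_hat + half)
```

When all $\widehat\sigma_t$ are equal, the weights cancel and the estimator reduces to the unweighted time average of the terms.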
\begin{assumption}[Exploration rate]\label{assumption:exp_rate_generic_stab_one_step} For any $t\geq1$, we have that $\inf_{a \in \mathcal{A},x \in \mathcal{X}} g_t(a \mid x) \gtrsim t^{-1/2}$ almost surely. \end{assumption} Here, $a_t\gtrsim b_t$ means that for some constant $c>0$, we have $a_t\geq cb_t$ for all $t\geq1$. \Cref{assumption:exp_rate_generic_stab_one_step} requires that the exploration rate of the adaptive experiment does not decay too quickly. Based on these assumptions, we have the following asymptotic normality result: \begin{theorem}\label{thm:asymp_normal_stab_one_step} Denote $\Gamma_T := \left( T^{-1} \sum_{t=1}^T \widehat{\sigma}_t^{-1} \right)^{-1}$. Under \cref{assumption:non_degenerate_EB,assumption:stdev_est_consistency,assumption:exp_rate_generic_stab_one_step}, it holds that \begin{align}\notag \Gamma_T^{-1} \sqrt{T} \left( \widehat{\Psi}_T - \Psi_0 \right) \xrightarrow{d} \mathcal{N}(0,1). \end{align} \end{theorem} \begin{remark} Theorem 1 does not require the outcome model estimator to converge at all. As we will see in \cref{sec:variance-estimator}, our conditional variance estimator does require that the outcome model converges to a fixed limit $\bar{Q}_1$, but this limit does not have to be the true outcome model $\bar{Q}_0$. In other words, consistency of the outcome model is not required at any point of our analysis. 
\end{remark} \section{Construction of the Conditional Variance Estimator and CADR} \label{sec:variance-estimator} \begin{algorithm}[t!]\caption{The CADR Estimator and Confidence Interval}\label{alg:cadr} \begin{algorithmic} \STATE{\textbf{Input:} Data $O(1),\dots,O(T)$, policies $g_1,\dots,g_T$, target $g^*$, outcome regression estimator} \FOR{$t = 1,2,\dots,T$} \STATE{Train $\widehat{\bar Q}_{t-1}$ on $O(1),\dots,O(t-1)$ using the outcome regression estimator} \STATE{Set $D'_{t,s}=D'(g_s,\widehat{\bar Q}_{t-1})(O(t))$ for $s=t,\dots,T$\hfill// (note index order compared to next line)} \STATE{Set $\widehat\sigma_t^2=\frac1{t-1}\sum_{s=1}^{t-1}\frac{g_t(A(s)\mid X(s))}{g_s(A(s)\mid X(s))}(D'_{s,t})^2-\left(\frac1{t-1}\sum_{s=1}^{t-1}\frac{g_t(A(s)\mid X(s))}{g_s(A(s)\mid X(s))}D'_{s,t}\right)^2$} \ENDFOR \STATE{Set $\Gamma_T=\left(\frac1T\sum_{t=1}^T\widehat\sigma_t^{-1}\right)^{-1}$} \STATE{Return estimate $\widehat \Psi_T=\frac{\Gamma_T}T\sum_{t=1}^T\widehat\sigma_t^{-1}D'_{t,t}$ and confidence interval $\operatorname{CI}_\alpha=[\widehat\Psi_T\pm \zeta_{1-\alpha/2}\Gamma_T/\sqrt{T}]$} \end{algorithmic} \end{algorithm} We now tackle the construction of estimators $\widehat \sigma_t$ satisfying our assumptions; namely, they must be adaptively trained only on past data at each $t$ and they must be consistent. Observe that $\sigma^2_{0,t} = \sigma^2_0(g_t,\widehat{\bar{Q}}_{t-1})$, where we define \begin{align} \sigma^2_0(g, \bar{Q}) &:= \Phi_{0,2}(g,\bar{Q}) - (\Phi_{0,1}(g,\bar{Q}))^2,\\ \Phi_{0,i}(g,\bar{Q}) &:= P_{Q_0,g} (D')^i(g,\bar{Q}),\,i=1,2. \end{align} Designing an $O(1),\ldots,O(t-1)$-measurable estimator of $\sigma_{0,t}^2$ presents several challenges. First, while we can only use observations $O(1),\ldots,O(t-1)$ to estimate it, $\sigma_{0,t}^2$ is defined as a function of integrals w.r.t. $P_{Q_0,g_t}$, from which we have only one observation, namely $O(t)$. 
Second, our estimation target $\sigma_{0,t}^2 = \sigma_0^2(g_t, \widehat{\bar{Q}}_{t-1})$ is \emph{random}, as it depends on $g_t$ and $\widehat{\bar{Q}}_{t-1}$. Third, $g_t$ and $\widehat{\bar{Q}}_{t-1}$ depend on the same observations $O(1),\ldots,O(t-1)$ that we have at our disposal to estimate $\sigma_{0,t}^2$. \paragraph{Representation via importance sampling.} We can overcome the first difficulty via importance sampling, which allows us to write $\Phi_{0,i}(g,\bar{Q})$, $i=1,2$, as integrals w.r.t. $P_{Q_0,g_s}$, $s=1,\ldots,t-1$, \emph{i.e.}, the conditional distributions of observations $O(s)$, $s=1,\ldots,t-1$, given their respective pasts. Namely, for any $s \geq 1$, $i=1,2$, we have that \begin{align} \Phi_{0,i}(g,\bar{Q}) = P_{Q_0,g_s} \frac{g}{g_s} (D')^i(g,\bar{Q}). \label{eq:IS_representation_components_cond_var} \end{align} \paragraph{Dealing with the randomness of the estimation target.} We now turn to the second challenge. Since $\sigma_{0,t}^2$ can be written in terms of $\Phi_{0,i}(g_t,\widehat{\bar{Q}}_{t-1})$ for $i=1,2$, \cref{eq:IS_representation_components_cond_var} suggests an approach based on sample averages of $(g_t / g_s) (D')^i(g_t,\widehat{\bar{Q}}_{t-1})$ over $s$. However, whenever $s < t$, the latter is an $O(1),\ldots,O(t-1)$-measurable function due to its dependence on $g_t$ and $\widehat{\bar{Q}}_{t-1}$. Namely, $P_{Q_0,g_s}\{ (g_t / g_s) (D')^i(g_t,\widehat{\bar{Q}}_{t-1})\}$ does not coincide in general with the conditional expectation $E_{Q_0,g_s} [((g_t / g_s) (D')^i(g_t,\widehat{\bar{Q}}_{t-1}))(O(s)) \mid \bar{O}(s-1)]$, as would arise from a sample average. We now look at ways to overcome this difficulty, considering first $\widehat{\bar{Q}}_{t-1}$ and then $g_t$. \subparagraph{Dealing with the randomness of $\widehat{\bar{Q}}_{t-1}$.} We propose an estimator of $\sigma_0^2(g,\widehat{\bar{Q}}_{t-1})$ for any fixed $g$. 
While convergence of $\widehat{\bar{Q}}_{t-1}$ to the true outcome regression function $\bar{Q}_0$ is a strong requirement, most reasonable estimators will at least converge to some fixed limit $\bar{Q}_1$. As a result, under an appropriate stochastic convergence condition on $(\widehat{\bar{Q}}_{t-1})_{t \geq 1}$, $\Phi_{0,i}(g,\widehat{\bar{Q}}_{t-1})$ can be reasonably approximated by the corresponding Cesaro averages, defined for $i=1,2$ as \begin{align} \bar{\Phi}_{0,i,t}(g) :=& \frac{1}{t-1} \sum_{s=1}^{t-1} \Phi_{0,i}(g,\widehat{\bar{Q}}_{s-1}) = \frac{1}{t-1} \sum_{s=1}^{t-1} E_{Q_0,g_s} \left[ ((g / g_s) (D')^i(g,\widehat{\bar{Q}}_{s-1}))(O(s)) \mid \bar{O}(s-1)\right]. \end{align} These are easy to estimate from the corresponding sample averages, defined for $i=1,2$ as \begin{align} \widehat{\Phi}_{i,t}(g) := \frac{1}{t-1} \sum_{s=1}^{t-1} ((g/g_s) (D')^i(g,\widehat{\bar{Q}}_{s-1}))(O(s)), \end{align} since for each $i=1,2$, the difference $\widehat{\Phi}_{i,t}(g) - \bar{\Phi}_{0,i,t}(g)$ is the average of a martingale difference sequence (MDS). We then define our estimator of $\sigma_0^2(g, \widehat{\bar{Q}}_{t-1})$ as \begin{align}\label{eq:varest}\widehat{\sigma}_t^2(g) := \widehat{\Phi}_{2,t}(g) - (\widehat{\Phi}_{1,t}(g))^2.\end{align} \subparagraph{From fixed $g$ to random $g_t$.} So far, we have proposed and justified the construction of $\widehat{\sigma}_t(g)$ as an estimator of $\sigma_{0}(g, \widehat{\bar{Q}}_{t-1})$ for a fixed $g$. We now discuss conditions under which $\widehat{\sigma}_t(g_t)$ is a valid estimator of $\sigma_{0}(g_t, \widehat{\bar{Q}}_{t-1})$. When $g$ is fixed, for each $i=1,2$, the error $\widehat{\Phi}_{i,t}(g) - \Phi_{0,i}(g, \widehat{\bar{Q}}_{t-1})$ decomposes as the sum of the MDS average $\widehat{\Phi}_{i,t}(g) - \bar{\Phi}_{0,i,t}(g)$ and the Cesaro approximation error $\bar{\Phi}_{0,i,t}(g) - \Phi_{0,i}(g, \widehat{\bar{Q}}_{t-1})$. Both differences are straightforward to bound. 
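The per-round variance estimate used in \cref{alg:cadr} reduces to reweighted sample moments, as sketched below with precomputed inputs (names are illustrative):

```python
import numpy as np

def sigma_hat_sq(t, D_prime_vals, w_ratio):
    """Sketch of the importance-sampling variance estimate for round t:
    D_prime_vals[s] plays the role of D'(g_t, Qhat_{s-1})(O(s)) and
    w_ratio[s] of g_t(A(s)|X(s)) / g_s(A(s)|X(s)), for s = 1..t-1
    (stored 0-indexed). Returns the reweighted sample variance."""
    D = np.asarray(D_prime_vals[: t - 1], dtype=float)
    w = np.asarray(w_ratio[: t - 1], dtype=float)
    m2 = np.mean(w * D ** 2)     # reweighted second-moment estimate
    m1 = np.mean(w * D)          # reweighted first-moment estimate
    return m2 - m1 ** 2
```

In the stationary case ($g_s = g_t$ for all $s$, so all ratios equal one) this is just the ordinary sample variance of the past canonical-gradient evaluations.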
For a random $g_t$, the term $\widehat{\Phi}_{i,t}(g_t) - \bar{\Phi}_{0,i,t}(g_t)$ is no longer an MDS average. Fortunately, under a complexity condition on the logging policy class $\mathcal{G}$, we can bound the supremum of the martingale empirical process $\{|\widehat{\Phi}_{i,t}(g) - \bar{\Phi}_{0,i,t}(g)| : g \in \mathcal{G} \}$, which in turn gives us a bound on $|\widehat{\Phi}_{i,t}(g_t) - \bar{\Phi}_{0,i,t}(g_t)|$. \paragraph{Consistency guarantee for $\widehat{\sigma}^2_t$.} Our formal consistency result relies on the following assumptions. \begin{assumption}[Outcome regression estimator convergence]\label{assumption:outcome_regression_convergence} There exist $\beta > 0$ and a fixed function $\bar{Q}_1 : \mathcal{A} \times \mathcal{X} \rightarrow \mathbb{R}$ such that $\|\widehat{\bar{Q}}_t - \bar{Q}_1 \|_{1,Q_{0,X}, g^*} = O(t^{-\beta})$ almost surely. \end{assumption} The next assumption is a bound on the bracketing entropy (see, \emph{e.g.}, \citep{vdV_Wellner96} for a definition) of the logging policy class. \begin{assumption}[Complexity of the logging policy class]\label{assumption:logging_policy_entropy} There exists a class of conditional densities $\mathcal{G}$ such that $g_t \in \mathcal{G}$ for all $t\geq1$ almost surely, there exists $G > 0$ such that $\sup_{g \in \mathcal{G}} \|g / g^\rf\|_\infty \leq G$, and for some $p > 0$ \begin{align} \log N_{[\,]}(\epsilon, \mathcal{G} / g^\rf, \|\cdot\|_{2,Q_{0,X},g^\rf}) \lesssim \epsilon^{-p}, \end{align} where $\mathcal{G} / g^\rf := \{ g / g^\rf : g \in \mathcal{G} \}$. \end{assumption} Next, we require a condition on the exploration rate that is stronger than \cref{assumption:exp_rate_generic_stab_one_step}. 
\begin{assumption}[Exploration rate (stronger)]\label{assumption:minimal_unif_exploration_rate} For any $t \geq 1$, we have that $\inf_{a \in \mathcal{A}, x \in \mathcal{X}} g_t (a \mid x) / g^\rf(a \mid x) \gtrsim t^{-\alpha(\beta, p)}$ almost surely, where $\alpha(\beta, p) := \min(1/(3+p), 1 / (1 + 2p), \beta)$. \end{assumption} \begin{theorem}\label{thm:sd} Suppose that \cref{assumption:outcome_regression_convergence,assumption:logging_policy_entropy,assumption:minimal_unif_exploration_rate} hold. Then, $\widehat{\sigma}_t^2 - \sigma_{0,t}^2 = o(1)$ almost surely. \end{theorem} \begin{remark} While we theoretically require the existence of a logging policy class $\mathcal{G}$ with controlled complexity, we do not actually need to know $\mathcal G$ to construct our estimator. Moreover, while we require a bound on the bracketing entropy of the logging policy class $\mathcal{G}$, we impose no restriction on the outcome regression model complexity, permitting us to use flexible black-box regression methods. \end{remark} \begin{remark} Assumption \ref{assumption:outcome_regression_convergence} requires $(\widehat{\bar{Q}}_t)$ to be a sequence of regression estimators such that, for every $t \geq 1$, $\widehat{\bar{Q}}_t$ is fitted on $O(1),\ldots,O(t)$ and for which we can guarantee a rate of convergence to some fixed limit $\bar{Q}_1$. Note that this can at first glance pose a challenge since observations $O(1),\ldots,O(t)$ are adaptively collected. In the appendix, we give guarantees for outcome regression estimation over a nonparametric model using importance-sampling-weighted empirical risk minimization.
\end{remark} \paragraph{CADR asymptotics.} Our proposed CADR estimator is now given by plugging our estimates $\widehat\sigma_t$ from \cref{eq:varest} into \cref{eq:stabilized-one-step}, as summarized in \cref{alg:cadr}. As an immediate corollary of \cref{thm:asymp_normal_stab_one_step,thm:sd}, we have our main guarantee for this final estimator, showing that CADR is asymptotically normal, whence we immediately obtain asymptotically valid confidence intervals. \begin{corollary}[CADR Asymptotics and Inference] Suppose that \cref{assumption:non_degenerate_EB,assumption:outcome_regression_convergence,assumption:logging_policy_entropy,assumption:minimal_unif_exploration_rate} hold. Let $\widehat{\sigma}_t$ be given as in \cref{eq:varest}. Denote $\Gamma_T := \left( T^{-1} \sum_{t=1}^T \widehat{\sigma}_t^{-1} \right)^{-1}$. Then, $$\Gamma_T^{-1} \sqrt{T} \left( \widehat{\Psi}_T - \Psi_0 \right) \xrightarrow{d} \mathcal{N}(0,1).$$ Moreover, letting $\zeta_\alpha$ denote the $\alpha$-quantile of the standard normal distribution, \begin{align}\notag \mathrm{Pr} \left[ \Psi(Q_{0,X}, \bar{Q}_0) \in \left[ \widehat{\Psi}_T \pm {\zeta_{1-\alpha/2} \Gamma_T}/{\sqrt{T}} \right] \right] \xrightarrow{T \to \infty} 1 - \alpha. \end{align} \end{corollary} \section{Empirical Evaluation} We next present computational results on public datasets that demonstrate the robustness of CADR confidence intervals on contextual bandit data, comparing against several baselines. Our experiments focus on the case of finitely many actions, $\mathcal A=\{1,\dots,K\}$. \subsection{Baseline Estimators} \label{sec:baselines} We compare CADR to several benchmarks.
All take the following form for a choice of $w_t,\omega_t,\widehat{\bar{Q}}_t$: \begin{align}\notag \widehat{\Psi}_T &= \left( \frac{1}{T} \sum_{t=1}^T w_t \right)^{-1} \frac{1}{T} \sum_{t=1}^T w_t \tilde D'_t,\quad \operatorname{CI}_\alpha=\left[\widehat\Psi_T\pm\zeta_{1-\alpha/2}\sqrt{\frac{\sum_{t=1}^Tw_t^2(\tilde D'_t-\widehat{\Psi}_T)^2}{\left(\sum_{t=1}^Tw_t\right)^2}}\right],\\\text{where}~\tilde D'_t&=\omega_t(Y(t) - \widehat{\bar{Q}}_{t-1}(A(t),X(t))) + \sum_{a=1}^K \widehat{\bar{Q}}_{t-1}(a,X(t))g^*(a \mid X(t)). \end{align} The Direct Method (DM) sets $w_t=1,\omega_t=0$ and fits $\hat{\bar{Q}}_{t-1}(a,\cdot)$ by running some regression method for each $a$ on the data $\{(X(s),Y(s)):1 \leq s \leq t-1, A(s) = a\}$. We will use either linear regression or decision-tree regression, both using default \verb|sklearn| parameters. Note that even in non-contextual settings, where $\hat{\bar{Q}}_{t-1}$ is a simple per-arm sample average, $\hat{\bar{Q}}_{t-1}$ may be biased due to adaptive data collection \citep{xu2013estimation,luedtke_vdL2016,bowden2017unbiased,nie2018adaptively,hadad2019confidence,shin2019bias}. Inverse Propensity Score Weighting (IPW) sets $w_t=1,\omega_t=(g^*/g_t)(A(t) \mid X(t)),\hat{\bar{Q}}_{t}=0$. Doubly Robust (DR) sets $w_t=1,\omega_t=(g^*/g_t)(A(t) \mid X(t))$ and fits $\hat{\bar{Q}}_{t-1}$ as in DM. More Robust Doubly Robust (MRDR) \citep{farajtabar2018more} is the same as DR but when fitting $\hat{\bar{Q}}_{t-1}$ we reweight each data point by $\frac{g^*(A(s) | X(s))(1 - g_s(A(s) | X(s)))}{g_s(A(s) | X(s))^2}$. None of the above are generally asymptotically normal under adaptive data collection \citep{hadad2019confidence}. Adaptive Doubly Robust (ADR; a.k.a. stabilized one-step estimator for multi-armed bandit data) \citep{luedtke_vdL2016,hadad2019confidence} is the same as DR but sets $w_t=g^{-1/2}_t(A(t) | X(t))$.
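The shared template above (self-normalized weighted point estimate plus normal-approximation interval) can be sketched as follows; this is our own illustration, with the per-round scores $\tilde D'_t$ and weights $w_t$ assumed precomputed:

```python
import numpy as np
from scipy.stats import norm

def weighted_estimate_ci(w, d_tilde, alpha=0.05):
    """Self-normalized weighted estimate and CI shared by the baselines above.

    w[t]       : stabilization weight w_t (1 for DM/IPW/DR/MRDR; adaptive for ADR/CADR)
    d_tilde[t] : per-round score D~'_t (assumed precomputed)
    """
    w, d = np.asarray(w, float), np.asarray(d_tilde, float)
    psi = np.sum(w * d) / np.sum(w)  # self-normalized point estimate
    half_width = (norm.ppf(1 - alpha / 2)
                  * np.sqrt(np.sum(w**2 * (d - psi)**2)) / np.sum(w))
    return psi, (psi - half_width, psi + half_width)
```

The estimators differ only in how $w_t$, $\omega_t$, and $\widehat{\bar Q}_{t-1}$ are chosen.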
ADR is unbiased and asymptotically normal for multi-armed bandit logging policies but is biased for context-measurable adaptive logging policies, which is the focus of this paper. Finally, note that our proposal CADR takes the same form as DR but with $w_t=\widehat{\sigma}_t^{-1}$ using our adaptive conditional standard deviation estimators $\widehat{\sigma}_t$ in \cref{eq:varest}. \subsection{Contextual Bandit Data from Multiclass Classification Data} \label{sec:multiclass} To construct our data, we turn $K$-class classification tasks into $K$-armed contextual bandit problems \citep{dudik2014doubly,dimakopoulou2017estimation,su2019cab}, which has the benefits of reproducibility using public datasets and of uncontroversial comparisons against actual ground-truth counterfactuals. We use the public OpenML Curated Classification benchmarking suite 2018 (OpenML-CC18; BSD 3-Clause license) \citep{bischl2017openml}, which has datasets that vary in domain, number of observations, number of classes and number of features. Among these, we select the classification datasets with fewer than 100 features. This results in 57 classification datasets from OpenML-CC18 used for evaluation; \cref{tab:openml} summarizes their characteristics. Each dataset is a collection of pairs of covariates $X$ and labels $L\in\{1,\dots,K\}$. We transform each dataset into a contextual bandit problem as follows. At each round, we draw $X(t),L(t)$ uniformly at random with replacement from the dataset. We reveal the context $X(t)$ to the agent, and given an arm pull $A(t)$, we draw and return the reward $Y(t) \sim \mathcal{N}(\textbf{1}\{A(t) = L(t)\}, 1)$. To generate our data, we set $T=10000$ and use the following $\epsilon$-greedy procedure. We pull arms uniformly at random until each arm has been pulled at least once.
Then at each subsequent round $t$, we fit $\widehat{\bar Q}_{t-1}$ using the data up to that time in the same fashion as for the DM estimator above, using decision-tree regressions. We set $\tilde A_x(t)=\mathop{\arg\max}_{a=1,\dots,K}\widehat{\bar Q}_{t-1}(a,x)$ and $\epsilon_t=0.01 \cdot t^{-1/3}$. We then let $g_t(a\mid x)=\epsilon_t/K$ for $a\neq \tilde A_x(t)$ and $g_t(\tilde A_x(t)\mid x)=1-\epsilon_t+\epsilon_t/K$. That is, with probability $\epsilon_t$ we pull a random arm, and otherwise we pull $\tilde A_{X(t)}(t)$. \begin{table}[t!] \centering \begin{tabular}{|c|c|} \hline Samples & Count \\ \hline $< 1000$ & 17 \\ \hline $\geq 1000$ and $< 10000$ & 30 \\ \hline $\geq 10000$ & 10 \\ \hline \end{tabular} \hspace{5pt} \begin{tabular}{|c|c|} \hline Classes & Count \\ \hline $= 2$ & 31 \\ \hline $> 2 \text{ and } < 10$ & 17 \\ \hline $ \geq 10 $ & 9 \\ \hline \end{tabular} \hspace{5pt} \begin{tabular}{|c|c|} \hline Features & Count \\ \hline $\geq 2 \text{ and } < 10$ & 14 \\ \hline $\geq 10 \text{ and } < 50$ & 34 \\ \hline $\geq 50 \text{ and } \leq 100$ & 9 \\ \hline \end{tabular} \vspace{5pt} \caption{Characteristics of the 57 OpenML-CC18 datasets used for evaluation.} \label{tab:openml} \end{table} \begin{figure*}[t!] \centering \includegraphics[width=1\linewidth]{coverage.png} \caption{Comparison of the CADR estimator against DM, IPW, DR, ADR and MRDR w.r.t.
95\% confidence interval coverage on 57 OpenML-CC18 datasets and 4 target policies.} \label{fig:coverage} \end{figure*} We then consider four candidate policies to evaluate: (1) ``arm 1 non-contextual'': $g^*(1 \mid x) = 1$ and otherwise $g^*(a \mid x) = 0$ (note that the meaning of label ``1'' changes by dataset), (2) ``arm 2 non-contextual'': $g^*(2 \mid x) = 1$ and otherwise $g^*(a \mid x) = 0$, (3) ``linear contextual'': we sample a \emph{new} dataset of size $T$ using a uniform exploration policy, then fit $\widehat{\bar Q}_{T}$ as above using linear regression, fix $a^*=\mathop{\arg\max}_{a \in \{1, \dots, K\}}\widehat{\bar Q}_{T}(a,x)$, and set $g^*(a^*\mid x)=1$ and otherwise $g^*(a \mid x) = 0$, (4) ``tree contextual'': same as ``linear contextual'' but fit $\widehat{\bar Q}_{T}$ using decision-tree regression. \subsection{Results} \Cref{fig:coverage} shows the comparison of the CADR estimator against DM, IPW, DR, ADR, and MRDR w.r.t. coverage, that is, the frequency over 64 replications of the 95\% confidence interval covering the true $\Psi_0$, for each of the 57 OpenML-CC18 datasets and 4 target policies. In each subfigure, each dot represents a dataset: the $y$-axis corresponds to the coverage of the CADR estimator and the $x$-axis corresponds to the coverage of one of the baseline estimators. The lines represent one standard error over the 64 replications. A dot is depicted in blue if, for that dataset, CADR has significantly better coverage than the baseline estimator; in red if it has significantly worse coverage; and in black if the difference in coverage of the two estimators is within one standard error. In \cref{fig:coverage}, outcome models for CADR, DM, DR, ADR, and MRDR are fit using linear regression (with default \verb|sklearn| parameters). In the appendix, we provide additional empirical results where we use decision-tree regressions, or where we use the MRDR outcome model for CADR, or where we use cross-fold estimation across time.
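The data-generating procedure of \cref{sec:multiclass} can be sketched as follows; the function names and the regression callback \verb|fit_Q| are ours (hypothetical), not taken from the experiments' actual code:

```python
import numpy as np

def simulate_logging(X, L, T, rng, fit_Q):
    """Sketch of the classification-to-bandit conversion with epsilon-greedy logging.

    X, L  : covariate matrix and labels of a K-class classification dataset
    fit_Q : hypothetical callback; fit_Q(logs) returns qhat(a, x), an estimate
            of the mean reward of arm a in context x (e.g. per-arm trees)
    """
    K = int(L.max()) + 1
    logs = []  # tuples (x, a, y, g_t(a | x))
    for t in range(1, T + 1):
        i = rng.integers(len(L))                     # draw (X(t), L(t)) w/ replacement
        x, label = X[i], L[i]
        pulled = {a for (_, a, _, _) in logs}
        if len(pulled) < K:                          # warm start: uniform pulls until
            a, prop = int(rng.integers(K)), 1.0 / K  # every arm is pulled once
        else:
            qhat = fit_Q(logs)                       # refit on data up to time t - 1
            best = int(np.argmax([qhat(a, x) for a in range(K)]))
            eps = 0.01 * t ** (-1 / 3)               # epsilon_t = 0.01 * t^(-1/3)
            probs = np.full(K, eps / K)
            probs[best] += 1.0 - eps                 # epsilon-greedy propensities
            a = int(rng.choice(K, p=probs))
            prop = probs[a]
        y = rng.normal(float(a == label), 1.0)       # Y(t) ~ N(1{A(t) = L(t)}, 1)
        logs.append((x, a, y, float(prop)))
    return logs
```

The logged propensities $g_t(A(t)\mid X(t))$ are exactly what the estimators of \cref{sec:baselines} consume.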
Across all of our experiments, we observe that the confidence interval of CADR has better coverage of the ground truth than that of any baseline, which can be attributed to its asymptotic normality. The second best estimator in terms of coverage is DR. The advantages of CADR over DR are most pronounced when either (a) there is a mismatch between the logging policy and the target policy (\emph{e.g.}, compare the 1st and 2nd rows in \cref{fig:coverage}; the tree target policy is most similar to the logging policy, which also uses trees) or (b) the outcome model is poor (whether due to model misspecification, such as a linear model on real data, or due to small sample size). \section{Conclusions} Adaptive experiments hold great promise for better, more efficient, and even more ethical experiments. However, they complicate post-experiment inference, which is a cornerstone of drawing credible conclusions from controlled experiments. We provided here the first asymptotically normal estimator for policy value and causal effects when data were generated from a contextual adaptive experiment, such as a contextual bandit algorithm. This led to simple and effective confidence intervals given by adding and subtracting multiples of the standard error, making contextual adaptive experiments a more viable option for experimentation in practice. \section{Societal Impact and Limitations} Adaptive experiments hold particular promise in settings where experimentation is costly and/or dangerous, such as in medicine and policymaking. By adapting treatment allocation, harmful interventions can be avoided, outcomes for study participants improved, and smaller studies enabled. Being able to draw credible conclusions from such experiments makes them viable replacements for classic randomized trials. Our confidence intervals offer one way to do so.
At the same time, and especially given our assumption of vanishing but nonzero exploration, these experiments must be held to the same ethical guidelines as classic randomized experiments. Additionally, the usual caveats of frequentist confidence intervals hold here, such as their interpretation only as a guarantee over repeated data collection, the fact that this guarantee is only approximate in finite samples when we rely on asymptotic normality, and the risks of multiple comparisons and of $p$-hacking. Finally, we note that our inference focused on an \emph{average} quantity; as such, it focuses on social welfare and need not capture the risk to individuals or groups. Subgroup analyses may therefore be helpful in complementing the analysis; these can be conducted by setting $g^*(a\mid x)$ to zero for some $x$'s. Future work may be necessary to further extend our results to conducting inference on risk metrics such as quantiles of outcomes.
\section{Introduction}\label{s:introduction} Deligne--Lusztig theory \cite{DL76} gives a geometric description of the irreducible representations of finite groups of Lie type. In \cite{L79}, Lusztig suggests an analogue of Deligne--Lusztig theory for $p$-adic groups $G$. For an unramified maximal torus $T \subset G$, he introduces a certain infinite-dimensional variety which has a natural action of $T \times G$. Though it is not known how to do this in general, when $G$ is a division algebra one can define $\ell$-adic homology groups $H_i(X)$ functorial for this action. One therefore obtains a correspondence $\theta \mapsto H_i(X)[\theta]$ between characters of $T$ and representations of $G$. In this paper, we study this correspondence and give a description from the perspective of the local Langlands and Jacquet--Langlands correspondences. Let $K$ be a non-Archimedean local field with ring of integers $\cO_K$ and residue field $\FF_q = \cO_K/\pi$ for a fixed uniformizer $\pi$, and let $L \supset K$ be the unramified extension of degree $n$ with ring of integers $\cO_L$. A smooth character $\theta \colon L^\times \to \overline \QQ_\ell^\times$ is said to be \textit{primitive of level $h$} if $h$ is the smallest integer such that $\theta$ and $\theta/\theta^\gamma$ for $1 \neq \gamma \in \Gal(L/K)$ are trivial on $1 + \pi^h \cO_L$. This is equivalent to saying $(L,\theta)$ is a minimal admissible pair. To a primitive character $\theta \colon L^\times \to \overline \QQ_\ell^\times$, one can associate a smooth irreducible $n$-dimensional representation $\sigma_{\xi\theta}$ of the Weil group $\cW_K$ of $K$, which corresponds via local Langlands to an irreducible supercuspidal representation $\pi_\theta$ of $\GL_n(K)$, which finally corresponds via Jacquet--Langlands to an irreducible representation $\rho_\theta$ of $D^\times,$ where $D = D_{k/n}$ is the central division algebra of invariant $k/n$ over $K$. For any $m \in \ZZ$, let $m^+ = \max\{m,0\}$.
\begin{main} Let $\theta \colon L^\times \to \overline \QQ_\ell^\times$ be a primitive character of level $h$. Then \begin{equation*} H_i(X)[\theta] = \begin{cases} \rho_\theta & \text{if $i = r_\theta \colonequals (n-1)(h-k)^+$,} \\ 0 & \text{otherwise.} \end{cases} \end{equation*} \end{main} Pictorially, \begin{equation*} \begin{tikzcd}[column sep=tiny,row sep=small] {\theta} \arrow[mapsto]{ddd}[left]{\text{$p$-adic Deligne--Lusztig}} & & {\theta} \arrow[mapsto]{d} & & {\mathfrak X} \arrow{d} \\ {} & & {\sigma_{\xi\theta}} \arrow[mapsto]{d} & & {\cG_K(n)} \arrow{d}{\text{ Local Langlands}} \\ {} & & {\pi_\theta} \arrow[mapsto]{d} & & {\cA_K(n)} \arrow{d}{\text{ Jacquet--Langlands}} \\ H_{r_\theta}(X)[\theta] & \cong & {\rho_\theta} & & {\cA'{}_K(n)} \end{tikzcd} \end{equation*} where \begin{align*} \mathfrak X &\colonequals \{\text{primitive characters $L^\times \to \overline \QQ_\ell^\times$}\} \\ \cG_K(n) &\colonequals \{\text{smooth irreducible dimension-$n$ representations of the Weil group $\cW_K$}\} \\ \cA_K(n) &\colonequals \{\text{supercuspidal irreducible representations of $\GL_n(K)$}\} \\ \cA'{}_K(n) &\colonequals \{\text{smooth irreducible representations of $D^\times$}\} \end{align*} \subsection{What is known.} Lusztig's definition in \cite{L79} has a natural analogue for groups over $\cO_K$ and its quotients $\cO_K/\pi^h$. This is described in \cite{L04}, where Lusztig also explicitly describes the resulting representations for $\SL_2(\cO_K/\pi^h)$ when $h \leq 2$. We note that our paper is in the setting of division algebras, whose finite reductive quotient is trivial, so the work of \cite{L04} does not play a role. We now give a survey of known results on $p$-adic Deligne--Lusztig varieties for division algebras. In the next two sections, we let $G = D_{k/n}^\times$ and $T = L^\times$. Additionally, $G^1$ and $T^1$ denote the norm-1 elements of $G$ and $T$, and let $X$ and $X^1$ be the Deligne--Lusztig construction associated to $G$ and $G^1$. 
Write $H_i(X) = H_i(X, \overline \QQ_\ell)$ and $H_c^i(X) = H_c^i(X, \overline \QQ_\ell)$. In \cite{L79}, Lusztig proves that when $k=1$, the virtual $G^1$-representations $\sum (-1)^i H_i(X^1)[\theta]$ are (up to sign) irreducible and mutually nonisomorphic. In analogy with the behavior of classical Deligne--Lusztig varieties, one expects the homology groups $H_i(X)[\theta]$ to vanish outside a single degree. Additionally, one hopes to get a description of the irreducible representations arising from these homology groups. There exists a unipotent group scheme $\Unipk$ over $\FF_q$ such that $\UnipkF$ is isomorphic to a subquotient of $D_{k/n}^\times$. The study of $H_i(X)[\theta]$ reduces to the study of certain subschemes $X_h \subset \Unipk$ endowed with a left action by $(1 + \pi \cO_L)/(1 + \pi^h \cO_L)$ and a right action by $\UnipkF$. When $k=1$, these definitions were established in \cite{B12} for arbitrary $K$ if $h \leq 2$ and for $K$ of equal characteristic if $h > 2$. These definitions can be extended to the mixed characteristic case for arbitrary level $h$ and invariant $k/n$, and we do so in this paper. In \cite{BW14}, Boyarchenko and Weinstein study the representations $H_c^i(X_2)$ when $k = 1$ (see Theorem 4.7 of \textit{op.\ cit.}). This comprises one of the main ingredients in studying the cohomology of the Lubin--Tate tower. In \cite{BW13}, they specialize this result to the primitive case to give an explicit and partially geometric description of local Langlands correspondences. In \cite{B12}, Boyarchenko uses the representations $H_c^i(X_2)$ to prove that for any smooth character $\theta \colon T \to \QQ_\ell^\times$ of level $\leq 2$, the representation $H_i(X)[\theta]$ vanishes outside a single degree and gives a description of this representation (see Theorem 5.3 of \textit{op.\ cit.}). Moreover, he shows that if $\theta$ is primitive, $H_i(X)[\theta]$ is irreducible in the nonvanishing degree. 
In contrast to the structure of the Lubin--Tate tower, we need to understand the cohomology of $X_h$ for all $h$ to understand high-depth representations arising in Deligne--Lusztig constructions. Outside the equal characteristics case for $k = 1$, $n = 3$, and $h = 3$ (see Theorem 5.20 of \cite{B12}), this was completely open. In \cite{C14}, we study $X_h$ in the equal characteristics case for arbitrary $h$, assuming $n = 2$ and $\chi$ is primitive. We prove irreducibility of $H_c^i(X_h)[\chi]$ and vanishing outside a single degree. In addition, we prove a character formula in the form of a branching rule for representations of the finite unipotent group $U_{h,1}^{2,q}(\FF_{q^2})$, a subquotient of the quaternion algebra. Using this, we are able to study the representations $H_i(X)[\theta]$ for primitive characters $\theta$. In this paper, we generalize this work to arbitrary $n$, arbitrary $k$, and arbitrary $K$, thereby removing all assumptions except primitivity. We take a more conceptual approach that allows us to bypass many of the computations needed in \cite{C14}. As a corollary, we obtain a geometric realization of Jacquet--Langlands transfers between representations of division algebras. \begin{remark} In the special case that $n = 2$ and $\Char K > 0$, the $p$-adic Deligne--Lusztig constructions we study in this paper and its prequel \cite{C14} are cut out by equations that look similar to the equations defining certain covers of affine Deligne--Lusztig varieties. This was observed by Ivanov in Section 3.6 of \cite{I15}. \hfill $\Diamond$ \end{remark} \subsection{Outline of this paper} In Section \ref{s:definitions}, we introduce the unipotent groups $\Unipk$ together with a certain subgroup $H \subset \Unipk$. The finite groups $\UnipkF$ and $H(\F)$ are subquotients of $G$ and $T$, respectively.
We then define a certain subvariety $X_h \subset \Unipk$, whose relation to the $p$-adic Deligne--Lusztig construction $X$ is as follows: $X$ can be identified with a set $\widetilde X$ endowed with an ind-scheme structure \begin{equation*} \widetilde X = \bigsqcup_{m \in \ZZ} \varprojlim_h \widetilde X_h^{(m)}, \end{equation*} where $\widetilde X_h^{(0)}$ is the disjoint union of $q^n-1$ copies of $X_h(\overline \FF_q)$. Roughly speaking, the action of $T \times G$ on $\widetilde X$ has essentially two behaviors: there is an action on each $\widetilde X_h^{(m)}$, and there is an action permuting these pieces. In order to understand the $(T \times G)$-representations arising from $H_i(X)$, one must understand these two actions. The former is captured by the action of $H(\F) \times \UnipkF$ on $X_h$; the latter was studied by Boyarchenko in \cite{B12} (see Proposition 5.19 of \textit{op.\ cit.} for the equal characteristics, $k=1$ case). Let $\cA$ denote the set of primitive characters of $H(\F)$. Let $\cG$ denote the set of irreducible representations of $\UnipkF$ whose central character has trivial $\Gal(L/K)$-stabilizer. In Section \ref{s:reps}, we give a correspondence $\chi \mapsto \rho_\chi$ from $\cA$ to $\cG$. When $k = 1$, this construction matches that of Corwin \cite{C74}. In Section \ref{s:jugglingdesc} we study the geometry of $X_h$ using a combinatorial notion known as \textit{juggling sequences.} We prove in Theorem \ref{l:Xhpolys} that the varieties $X_h$ are affine varieties defined by the vanishing of polynomials whose monomials are indexed by juggling sequences. By studying the combinatorics of these objects, we are able to prove structural lemmas crucial to the analysis of $H_c^i(X_h)$. Section \ref{s:cohomdesc} is concerned with combining the general algebro-geometric results of Section \ref{s:alggeom}, the representation-theoretic results of Section \ref{s:reps}, and the combinatorial results of Section \ref{s:jugglingdesc}. 
In Theorem \ref{t:cohomdesc}, we prove that the correspondence $\chi \mapsto \rho_\chi$ is bijective and that every representation $\rho \in \cG$ appears in $H_c^i(X_h)$ with multiplicity $1$. In addition, we prove a character formula for the representations $H_c^i(X_h)[\chi]$ using the Deligne--Lusztig fixed point formula of \cite{DL76}. Section \ref{s:divalg} is devoted to understanding two connections. The first, explained in Section \ref{s:DL}, is to unravel the relationship between the results of Section \ref{s:cohomdesc} and the representations of division algebras arising from $p$-adic Deligne--Lusztig constructions $\widetilde X$. The second, explained in Section \ref{s:LLC}, is to describe $H_i(X)[\theta]$ from the perspective of the local Langlands and Jacquet--Langlands correspondences. We use Theorem \ref{t:cohomdesc}, the trace formula established in Proposition \ref{p:vregtrace}, and a criterion of Henniart described in \cite{BW13} (see Proposition 1.5(b) of \textit{op.\ cit.}). \begin{displaytheorem}[\ref{t:divalg}, \ref{c:JL}] Let $\theta \colon L^\times \to \overline \QQ_\ell^\times$ be a primitive character of level $h$ and let $\rho_\theta$ be the $D^\times$-representation corresponding to $\theta$ under the local Langlands and Jacquet--Langlands correspondences. Then \begin{equation*} H_i(X)[\theta] = \begin{cases} \rho_\theta & \text{if $i = (n-1)(h-k)^+$,} \\ 0 & \text{otherwise.} \end{cases} \end{equation*} Moreover, if $D$ and $D'$ are division algebras of invariant $k/n$ and $k'/n$ with associated Deligne--Lusztig constructions $X$ and $X'$, then the Jacquet--Langlands transfer of $H_\bullet(X)[\theta]$ is isomorphic to $H_\bullet(X')[\theta]$.
\end{displaytheorem} Using the techniques developed in this paper, we have evidence suggesting that for nonprimitive characters $\theta \colon L^\times \to \overline \QQ_\ell^\times$ of level $h$ with restriction $\chi \colon U_L^1 \to \overline \QQ_\ell^\times$, the cohomology groups $H_c^i(X_h)[\chi]$ are irreducible and concentrated in a single non-middle degree. This implies that the homology groups $H_i(X)[\theta]$ are also concentrated in a single degree, though it is not expected that these representations are irreducible in general. We plan to investigate this in a future paper. \subsection*{Acknowledgements} I am deeply grateful to Mitya Boyarchenko for introducing me to this area of research. I'd also like to thank Jake Levinson for interesting conversations regarding Proposition \ref{p:claim1gen}. \section{Definitions}\label{s:definitions} Let $K$ be a non-Archimedean local field with residue field $\FF_q$ and fixed uniformizer $\pi$. If $K$ has characteristic $p$ (the equal characteristics case), then all the definitions below were already established in \cite{B12} and \cite{BW14}. If $K$ has characteristic $0$, the definitions we establish below are new. We first recall the ring scheme $\bW_K$ of $\cO_K$-Witt vectors. (For an exposition, see for example \cite{FF13}.) For each $r \geq 0$, define the Witt polynomial \begin{equation*} W_r = \sum_{i=0}^r \pi^i X_i^{q^{r-i}} \in \cO_K[X_0, \ldots, X_r]. \end{equation*} There exist polynomials $S_r, M_r \in \cO_K[X_0, \ldots, X_r, Y_0, \ldots, Y_r]$ such that \begin{align*} W_r(S(X,Y)) &= W_r(X) + W_r(Y), \\ W_r(M(X,Y)) &= W_r(X) \times W_r(Y). \end{align*} For any $\cO_K$-algebra $A$, define \begin{equation*} \bW_K(A) = A^\bN \end{equation*} with the addition and multiplication given by $X +_{\bW_K} Y = S(X,Y)$ and $X \times_{\bW_K} Y = M(X,Y)$. From now on, we will view $\bW_K$ as a scheme over $\cO_K/(\pi) = \FF_q$. \begin{definition} Let $A$ be any $\FF_q$-algebra.
If $K$ has characteristic $p$, let $\bW(A) = A[\![\pi]\!]$, and if $K$ has characteristic $0$, let $\bW(A) = \bW_K(A)$. For any $h \in \bN$, we define $\bW_h(A) = \bW(A)/(\pi^h)$. It is clear that the functor $A \mapsto \bW_h(A)$ is representable by the affine space $\mathbb{A}^h$. \end{definition} \begin{definition}\label{d:unip} For any $\FF_q$-algebra $A$, define a ring $\cR_{h,k,n,q}(A)$ as follows: \begin{enumerate}[label=\textbullet] \item As a group under addition, $\cR_{h,k,n,q}(A) \colonequals \bW(A)[\tau]/(\tau^n - \pi^k, \pi^h, \pi^{h-k} \tau, \ldots, \pi^{h-k} \tau^{n-1})$. \item The multiplication structure on $\cR_{h,k,n,q}(A)$ is given by the following commutation rule: $\tau \cdot a = a^q \cdot \tau$ for any $a \in A$. \end{enumerate} Elements of $\cR_{h,k,n,q}$ can be written as $A_0 + A_1 \tau + \cdots + A_{n-1} \tau^{n-1}$ where $A_0 \in \bW_h$ and $A_i \in \bW_{h-k}$ for $i = 1, \ldots, n-1$. Then \begin{equation*} \cR_{h,k,n,q}^\times = \{A_0 + A_1 \tau + \cdots + A_{n-1} \tau^{n-1} : A_0 \in \bW_h^\times\} \subset \cR_{h,k,n,q} \end{equation*} and we define \begin{equation*} \Unipk = \{A_0 + A_1 \tau + \cdots + A_{n-1} \tau^{n-1} : A_0 = (1, *, \cdots, *) \in \bW_h^\times\} \subset \cR_{h,k,n,q}^\times. \end{equation*} It is clear that the functor $A \mapsto \Unipk(A)$ is representable by the affine space $\mathbb{A}^{(h-1)+(n-1)(h-k)^+}$. \end{definition} Define \begin{equation*} H \colonequals \{A_0 \in 1 + \bW_{h-1} \subset \bW_h\} \subset \Unipk. \end{equation*} Note that although $\bW_h$ is a commutative group scheme, the group scheme $H$ is not commutative. This will be standard notation throughout this paper outside Section \ref{s:alggeom}. 
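To make the Witt-vector formulas concrete, here is a small sanity check of the defining property $W_r(S(X,Y)) = W_r(X) + W_r(Y)$ in the simplest mixed-characteristic case $K = \QQ_p$ (so $\pi = p$ and $q = p$), for vectors of length $2$; this is a standard computation, not taken from the text.

```python
def ghost(x, p):
    """Ghost components W_r(x) = sum_{i<=r} p^i * x_i^(p^(r-i)), r = 0, ..., len(x)-1."""
    return [sum(p ** i * x[i] ** (p ** (r - i)) for i in range(r + 1))
            for r in range(len(x))]

def witt_add(x, y, p):
    """Length-2 Witt-vector sum for K = Q_p: solve W_r(S) = W_r(x) + W_r(y), r = 0, 1."""
    s0 = x[0] + y[0]
    # the division is exact: p divides every cross term of (x0 + y0)^p
    s1 = x[1] + y[1] + (x[0] ** p + y[0] ** p - s0 ** p) // p
    return [s0, s1]
```

For example, with $p = 3$, $x = (2,5)$, $y = (4,7)$, the ghost components of the Witt sum are $[6, 108]$, the componentwise sum of $[2, 23]$ and $[4, 85]$.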
\begin{remark}\label{r:isoms} Since we have natural isomorphisms $\bW(\FF_q) \cong \cO_K$ and $\bW(\F) \cong \cO_L$, we also have natural isomorphisms \begin{flalign*} \phantom{stuffstuffstuff}U_L^1/U_L^h &\overset{\sim}{\to} H(\F), && A_0 \mapsto A_0 \in \bW_h(\F) \\ \F \overset{\sim}{\to} U_L^{h-1}/U_L^h &\overset{\sim}{\to} H_{n(h-1)}(\F), && a \mapsto (1,0,\ldots,0,a). && \end{flalign*} Note also that \begin{flalign*} \phantom{stuff}\UnipkF \cong U_D^1/U_D^{(h)}, \text{ where $U_D^{(h)} \colonequals (1 + P_L^h)(1 + P_L^{h-k}\Pi) \cdots (1 + P_L^{h-k}\Pi^{n-1}).$} &&\Diamond \end{flalign*} \end{remark} \subsection{The varieties $X_h$} \label{s:Xhdef} \begin{definition} For any $\FF_q$-algebra $A$, let $\Mat_{h,k}(A)$ denote the ring of all $n$-by-$n$ matrices $B = (b_{ij})_{i,j=1}^n$ with $b_{ii} \in \bW_h(A)$, $b_{ij} \in \bW_{h-k}(A)$ for $i < j$, and $b_{ij} \in \pi^k \bW_{h-k}(A)$ for $i > j$. The determinant can be viewed as a multiplicative map $\det \colon \Mat_{h,k}(A) \to \bW_h(A)$. \end{definition} For any $\FF_q$-algebra $A$, consider the morphism \begin{equation*} \iota_{h,k} \colon \cR_{h,k,n,q}(A) \to \Mat_{h,k}(A) \end{equation*} given by \begin{equation*} \iota_{h,k}\left(\textstyle \sum A_i \tau^i\right) \colonequals \left(\begin{matrix} A_0 & A_1 & A_2 & \cdots & A_{n-1} \\ \pi^k \varphi(A_{n-1}) & \varphi(A_0) & \varphi(A_1) & \cdots & \varphi(A_{n-2}) \\ \pi^k \varphi^2(A_{n-2}) & \pi^k \varphi^2(A_{n-1}) & \varphi^2(A_0) & \cdots & \varphi^2(A_{n-3}) \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ \pi^k \varphi^{n-1}(A_1) & \pi^k \varphi^{n-1}(A_2) & \cdots & \pi^k \varphi^{n-1}(A_{n-1}) & \varphi^{n-1}(A_0) \end{matrix}\right) \end{equation*} where $\varphi \colon \bW \to \bW$ is the $q$th Frobenius endomorphism.
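The shape of $\iota_{h,k}$ is easy to describe: the $(i,j)$ entry (indexing from $0$) is $\varphi^i(A_{(j-i) \bmod n})$, with a factor of $\pi^k$ precisely below the diagonal. A throwaway sketch of this pattern (the string encoding is ours, purely for inspection):

```python
def iota_pattern(n):
    """Entry (i, j) of iota_{h,k}: phi^i applied to A_{(j-i) mod n}, with a pi^k
    factor below the diagonal (j < i). String encoding is ours, for inspection."""
    return [[("pi^k * " if j < i else "") + f"phi^{i}(A_{(j - i) % n})"
             for j in range(n)]
            for i in range(n)]
```

For $n = 3$ this reproduces the displayed matrix row by row.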
Recall from \cite{B12} that the $p$-adic Deligne--Lusztig construction $X$ described in \cite{L79} can be identified with a certain set $\widetilde X$ which can be realized as the $\overline \FF_q$-points of an ind-scheme \begin{equation*} \widetilde X = \bigsqcup_{m \in \ZZ} \varprojlim_h \widetilde X_h^{(m)}. \end{equation*} Here, $\widetilde X_h^{(k)} \colonequals \widetilde X_h^{(0)} \cdot \Pi$ and $\widetilde X_h^{(n)} \colonequals \widetilde X_h^{(0)} \cdot \pi$, where for any $\overline \FF_q$-algebra $A$, \begin{equation*} \widetilde X_h^{(0)}(A) = \{\iota_{h,k}(\textstyle \sum a_i \tau^i) \in \Mat_{h,k}(A) : \text{$\det(\iota_{h,k}(\textstyle \sum a_i \tau^i))$ is fixed by $\varphi$}\}. \end{equation*} \begin{definition} For any $\FF_q$-algebra $A$, define \begin{equation*} X_h(A) \colonequals \Unipk(A) \cap \iota_{h,k}^{-1}(\widetilde X_h^{(0)}(A)). \end{equation*} \end{definition} \begin{remark} Notice that $\widetilde X_h^{(0)}$ is a disjoint union of $q^n - 1$ copies of $X_h$. \hfill $\Diamond$ \end{remark} \begin{remark} Note that $X$, $\widetilde X$, $\widetilde X_h^{(m)}$, and $X_h$ all depend on the Hasse invariant $k/n$ of the division algebra $D$, but we suppress this from the notation. \hfill $\Diamond$ \end{remark} \subsection{Group actions} The map $\iota_{h,k}$ has the following property, which we will refer to as Property $\ddagger$. If $A$ is an $\F$-algebra, then $\iota_{h,k}(xy) = \iota_{h,k}(x)\iota_{h,k}(y)$ for all $x \in \Unipk(A)$ and all $y \in \UnipkF$. Moreover, for $y \in \UnipkF$, the determinant of $\iota_{h,k}(y)$ is fixed by $\varphi$. It therefore follows that $X_h$ is stable under right-multiplication by $\UnipkF$. We denote by $x \cdot g$ the action of $g \in \UnipkF$ on $x \in X_h$. Pick a generator $\zeta$ of $\F^\times$. The conjugation action of $\zeta$ on $\Unipk(A)$ stabilizes $X_h(A)$.
This extends the right $\UnipkF$ action on $X_h$ to an action of the semidirect product $\F^\times \ltimes \Unipk(\F) \cong \cR_{h,k,n,q}^\times(\F)$ on $X_h$. We now describe a left action of $H(\F)$ on $X_h$. We can identify $H(\F)$ with the set $\iota_{h,k}(H(\F))$. Note that by Property $\ddagger$, the map $\iota_{h,k}$ actually preserves the group structure of $H(\F)$, and since $\iota_{h,k}$ is injective, $H(\F) \cong \iota_{h,k}(H(\F))$ as groups. Explicitly, this isomorphism is given by \begin{equation*} A_0 \mapsto \diag(A_0, \varphi(A_0), \ldots, \varphi^{n-1}(A_0)). \end{equation*} The left-multiplication action of $\iota_{h,k}(H(\F))$ on $\Mat_{h,k}(A)$ stabilizes $X_h(A)$. We denote by $g * x$ the action of $g \in H(\F) \cong U_L^1/U_L^h$ on $x \in X_h$.\footnote{Warning: This is not the same as the left-multiplication action of $H(\F) \subset H(A)$ on $\Unipk(A)$.} \begin{remark} Let $Z(\UnipkF)$ denote the center of $\UnipkF$. This is a subgroup of $H(\F)$. By direct computation, one sees that the left action of $Z(\UnipkF) \subset H(\F)$ and the right action of $Z(\UnipkF) \subset \UnipkF$ coincide. Note also that the actions of $H(\F)$ and $\cR_{h,k,n,q}^\times(\F)$ commute. \hfill $\Diamond$ \end{remark} \section{General principles: some algebraic geometry}\label{s:alggeom} In this section, we prove some general algebro-geometric results that will allow us to compute certain cohomology groups via an inductive argument. We generalize the techniques of \cite{B12} from $\GG_a$ to a group scheme $Z$, which we now define\footnote{Note also that $Z$ is the subgroup scheme $H$ of $\Unip$ defined in Section \ref{s:definitions}.}. For any $\FF_{q^n}$-algebra $A$, \begin{equation*} Z(A) \colonequals \{A_0 \in 1 + \pi \bW_{h-1}(A)\} \subset \Unipk(A).
\end{equation*} Let $G$ be an algebraic group over $\F$ and suppose that $Y \subset G$ is a (locally closed) subvariety defined over $\F$, and put $X = L_{q^n}^{-1}(Y)$, where $L_{q^n} \colon G \to G$ is the Lang map given by $g \mapsto \Fr_{q^n}(g)g^{-1}.$ Let $H \subset G$ be any connected subgroup defined over $\F$ and let $\eta \colon H(\F) \to \overline \QQ_\ell^\times$ be a character. Write $V_\eta = \Ind_{H(\F)}^{G(\F)}(\eta)$. Consider the right-multiplication action of $H(\F)$ on $G$ and form the quotient $Q \colonequals G/(H(\F))$. The Lang map $L_{q^n} \colon G \to G$ is invariant under right multiplication by $H(\F)$ and thus it factors through a morphism $\alpha \colon Q \to G$. On the other hand, the quotient map $G \to Q$ is a right $H(\F)$-torsor, so the character $\eta$ yields a $\overline \QQ_\ell$-local system $\mathcal E_\eta$ of rank $1$ on $Q$. The following lemma is proved in \cite{B12}. \begin{lemma}[Boyarchenko \cite{B12}]\label{l:B2.1} There is a natural $\Fr_{q^n}$-equivariant vector-space isomorphism \begin{equation*} \Hom_{G(\F)}(V_\eta, H_c^i(X, \overline \QQ_\ell)) \cong H_c^i(\alpha^{-1}(Y), \mathcal{E}_\eta|_{\alpha^{-1}(Y)}) \qquad \forall i \geq 0. \end{equation*} \end{lemma} As in \cite{B12}, we now make two further assumptions under which the right-hand side of the isomorphism in Lemma \ref{l:B2.1} can be described much more explicitly. This will allow us to compute certain cohomology groups via an inductive argument. These two assumptions are: \begin{enumerate}[label=\arabic*.] \item The quotient morphism $G \to G/H$ admits a section $s \colon G/H \to G$. \item There is an algebraic group morphism $f \colon H \to Z$ defined over $\F$ such that $\eta = \chi \circ f$ for a character $\chi \colon Z(\F) \to \overline \QQ_\ell^\times$. \end{enumerate} Let $\Loc_\chi$ be the local system on $Z$ defined by $\chi$ via the Lang map $L_{q^n} \colon Z \to Z$. The following lemma is proved in \cite{B12}.
\begin{lemma}[Boyarchenko \cite{B12}]\label{l:B2.2} There is an isomorphism $\gamma \colon (G/H) \times H \overset{\simeq}{\longrightarrow} Q$ such that $\gamma^* \mathcal E_\eta \cong (f \circ \pr_2)^*\Loc_\chi$ and $\alpha \circ \gamma = \beta$, where $\pr_2 \colon (G/H) \times H \to H$ is the second projection and $\beta \colon (G/H) \times H \to G$ is given by $\beta(x,h) = s(\Fr_{q^n}(x)) \cdot h \cdot s(x)^{-1}$. \end{lemma} Combining Lemmas \ref{l:B2.1} and \ref{l:B2.2} together with the assumption that $\eta = \chi \circ f$ for a character $\chi \colon Z(\F) \to \overline \QQ_\ell^\times$ and an algebraic group morphism $f \colon H \to Z$ defined over $\F$, we obtain the following proposition. \begin{proposition}\label{p:B2.3} Assume that we are given the following data: \begin{enumerate}[label=\textbullet] \item an algebraic group $G$ with a connected subgroup $H \subset G$ over $\F$; \item a section $s \colon G/H \to G$ of the quotient morphism $G \to G/H$; \item an algebraic group homomorphism $f \colon H \to Z$; \item a character $\chi \colon Z(\F) \to \overline \QQ_\ell^\times$; \item a locally closed subvariety $Y \subset G$. \end{enumerate} Set $X = L_{q^n}^{-1}(Y)$, the preimage of $Y$ under the Lang map $L_{q^n}(g) = \Fr_{q^n}(g) g^{-1}$. Then for each $i \geq 0$, we have a $\Fr_{q^n}$-compatible vector space isomorphism \begin{equation*} \Hom_{G(\F)}(\Ind_{H(\F)}^{G(\F)}(\chi \circ f), H_c^i(X, \overline \QQ_\ell)) \cong H_c^i(\beta^{-1}(Y), P^*\Loc_\chi). \end{equation*} Here, $\Loc_\chi$ is the local system on $Z$ corresponding to $\chi$, the morphism $\beta \colon (G/H) \times H \to G$ is given by $\beta(x,h) = s(\Fr_{q^n}(x)) \cdot h \cdot s(x)^{-1}$, and the morphism $P \colon \beta^{-1}(Y) \to Z$ is the composition $\beta^{-1}(Y) \hookrightarrow (G/H) \times H \overset{\pr_2}{\longrightarrow} H \overset{f}{\to} Z$. \end{proposition} Our goal now is to prove the following crucial proposition.
This is the proposition that gives us an inductive technique for calculating certain cohomology groups. \begin{proposition}\label{p:B2.10} Let $q$ be a power of $p$, let $n \in \NN$, and let $\chi \colon Z(\FF_{q^n}) \to \overline \QQ_\ell^\times$ be primitive. Let $S_2$ be a scheme of finite type over $\FF_{q^n}$, put $S = S_2 \times \mathbb{A}^1$, and suppose that a morphism $P \colon S \to Z$ has the form \begin{equation*} P(x,y) = g(f(x)^{q^{j_1}}y^{q^{j_2}}) \cdot g(f(x)^{q^{j_3}}y^{q^{j_4}})^{-1} \cdot P_2(x) \end{equation*} where \begin{enumerate}[label=\textbullet] \item $j_1 - j_2 = j_3 - j_4$ and $j_2 - j_4$ is not divisible by $n$, \item $f \colon S_2 \to \GG_a$, $P_2 \colon S_2 \to Z$ are two morphisms, and \item $g \colon \mathbb{A}^1 \to Z$ is the morphism $z \mapsto (0, \ldots, 0, z)$. \end{enumerate} Let $S_3 \subset S_2$ be the subscheme defined by $f = 0$ and let $P_3 = P_2|_{S_3} \colon S_3 \to Z$. Then for all $i \in \ZZ$, we have \begin{equation*} H_c^i(S, P^*\Loc_\chi) \cong H_c^{i-2}(S_3, P_3^*\Loc_\chi)(-1) \end{equation*} as vector spaces equipped with an action of $\Fr_{q^n}$, where the Tate twist $(-1)$ means that the action of $\Fr_{q^n}$ on $H_c^{i-2}(S_3, P_3^*\Loc_\chi)$ is multiplied by $q^n$. \end{proposition} \begin{proof} Let $\pr \colon S = S_2 \times \mathbb{A}^1 \to S_2$ be the first projection, let $\iota \colon S_3 \to S_2$ be the inclusion map, and let $\eta \colon S \to Z$ be the morphism $(x,y) \mapsto g(f(x)^{q^{j_1}}y^{q^{j_2}}) \cdot g(f(x)^{q^{j_3}}y^{q^{j_4}})^{-1}$. The sheaf $\Loc_\chi$ need not be a multiplicative local system on $Z$. However, since the image of $\eta$ lies in the center of the group scheme $Z$, we still have \begin{equation*} P^* \Loc_\chi \cong (\eta^* \Loc_\chi) \otimes \pr^*(P_2^* \Loc_\chi).
\end{equation*} Thus, by the projection formula, \begin{equation*} R \pr_!(P^* \Loc_\chi) \cong P_2^* \Loc_\chi \otimes R \pr_!(\eta^* \Loc_\chi) \qquad \text{in $D_c^b(S_2, \overline \QQ_\ell).$} \end{equation*} We now claim that \begin{equation*} R \pr_!(\eta^* \Loc_\chi) \cong \iota_!(\overline \QQ_\ell)[-2](-1) \qquad \text{in $D_c^b(S_2, \overline \QQ_\ell),$} \end{equation*} where $\overline \QQ_\ell$ denotes the constant local system of rank $1$. Once this claim is established, the desired conclusion follows, so we devote the rest of the proof to it. The restriction of $\eta$ to $\pr^{-1}(S_3) \subset S$ is constant, so the restriction of the pullback $\eta^* \Loc_\chi$ to $\pr^{-1}(S_3)$ is a constant local system of rank $1$. Thus \begin{equation*} \iota^* R \pr_!(\eta^* \Loc_\chi) \cong \overline \QQ_\ell[-2](-1) \qquad \text{in $D_c^b(S_3, \overline \QQ_\ell).$} \end{equation*} To complete the proof, we need to show that $R \pr_!(\eta^* \Loc_\chi)$ vanishes outside $S_3 \subset S_2$. First notice that $\eta = g \circ \eta_0$ where $\eta_0 \colon S \to \GG_a$ is defined as $(x,y) \mapsto f(x)^{q^{j_1}}y^{q^{j_2}} - f(x)^{q^{j_3}}y^{q^{j_4}}$. Let $\psi$ be the restriction of $\chi$ to $g(\GG_a) \subset Z$. Then \begin{equation*} \eta^* \Loc_\chi \cong \eta_0^* \Loc_\psi. \end{equation*} Here, $\Loc_\psi$ denotes the multiplicative local system on $\GG_a$ induced by $\psi$ via the Lang isogeny. It therefore suffices to show that $R \pr_!(\eta_0^* \Loc_\psi)$ vanishes outside $S_3 \subset S_2$. Now pick $x \in S_2(\overline \FF_q) \smallsetminus S_3(\overline \FF_q)$. By the proper base change theorem, \begin{equation*} R^i \pr_!(\eta_0^* \Loc_\psi)_x \cong H_c^i(\GG_a, f_x^* \Loc_\psi), \end{equation*} where $f_x \colon \GG_a \to \GG_a$ is given by $y \mapsto f(x)^{q^{j_1}}y^{q^{j_2}} - f(x)^{q^{j_3}}y^{q^{j_4}}.$ As in the proof of Proposition 2.10 of \cite{B12}, we can write $\Loc_\psi = \Loc_z$ for some $z \in \F$.
Since $\psi$ has conductor $q^n$, the element $z$ has trivial $\Gal(\F/\FF_q)$-stabilizer. By Corollary 6.5 of \cite{B12}, we have $f_x^* \Loc_\psi \cong \Loc_{f_x^*(z)}$, where \begin{equation*} f_x^*(z) = f(x)^{q^{j_1}/q^{j_2}} z^{1/q^{j_2}} - f(x)^{q^{j_3}/q^{j_4}} z^{1/q^{j_4}} = f(x)^{q^{j_1 - j_2}}(z^{q^{-j_2}} - z^{q^{-j_4}}). \end{equation*} But $z^{q^{-j_2}} - z^{q^{-j_4}} \neq 0$, since $z \neq 0$ and $j_2 - j_4$ is not divisible by $n$ by assumption. Thus $f_x^* \Loc_\psi$ is a nontrivial local system on $\GG_a$ and $H_c^i(\GG_a, f_x^* \Loc_\psi) = 0$ for all $i \geq 0$. \end{proof} \begin{proposition}\label{p:B2.10alpha} Suppose that $P \colon S \to Z$ has the form \begin{equation*} P(x,y) = g(f(x)^{q^{j_1}} y^{q^{j_2}} - f(x)^{q^{j_3}} y^{q^{j_4}} + \alpha(x,y)^{q^n} - \alpha(x,y)) \cdot P_2(x) \end{equation*} for some morphism $\alpha \colon S_2 \times \mathbb{A}^1 \to \GG_a$ defined over $\FF_{q^n}$. (Here, $j_1, \ldots, j_4$ are as in Proposition \ref{p:B2.10}.) Then under the same conditions as in Proposition \ref{p:B2.10}, we have \begin{equation*} H_c^i(S, P^* \Loc_\chi) \cong H_c^{i-2}(S_3, P_3^* \Loc_\chi)(-1) \end{equation*} as vector spaces equipped with an action of $\Fr_{q^n}$, where the Tate twist $(-1)$ means that the action of $\Fr_{q^n}$ on $H_c^{i-2}(S_3, P_3^* \Loc_\chi)$ is multiplied by $q^n$. \end{proposition} \begin{proof} Let $P'(x,y) = g(f(x)^{q^{j_1}} y^{q^{j_2}} - f(x)^{q^{j_3}} y^{q^{j_4}}) \cdot P_2(x).$ Then $P^* \Loc_\chi$ and $(P')^* \Loc_\chi$ are isomorphic, since the pullback of $\Loc_\chi$ by the map $(0, \ldots, 0, z) \mapsto (0, \ldots, 0, z^{q^n} - z)$ is trivial. Then by Proposition \ref{p:B2.10}, the desired conclusion holds. \end{proof} The following proposition is extremely useful when applying the inductive argument described in the above propositions. \begin{proposition}\label{p:claim1gen} Suppose that $S \hookrightarrow R$ is a finite map of polynomial rings over $k = \overline \FF_q$.
Assume that $\Frac R$ is finite Galois over $\Frac S$ with Galois group $G$ a $p$-group. Then \begin{enumerate}[label=(\alph*)] \item $R$ is stable under $G$ and $R^G = S$; \item the quotient of monoids satisfies $((R \smallsetminus \{0\})/k^\times)^G = (S \smallsetminus \{0\})/k^\times$; \item if $(f) \subset R$ is an ideal such that $(\sigma f) = (f)$ for all $\sigma \in G$, then $f \in S$. \end{enumerate} \end{proposition} \begin{proof} First observe that since $S$ and $R$ are polynomial rings, they are normal and therefore integrally closed. Since $S \hookrightarrow R$ is a finite map, $R$ is the integral closure of $S$ in $\Frac R$. Thus $R$ is $G$-stable. It is clear that $S \subset R^G$. Conversely, $R^G$ is integral over $S$ and contained in $(\Frac R)^G = \Frac S$, so since $S$ is integrally closed, we necessarily have $S = R^G$. This proves (a). To see (b), consider the short exact sequence \begin{equation*} 1 \to k^\times \to \Frac R^\times \to \Frac R^\times/k^\times \to 1 \end{equation*} and take $G$-invariants to get a long exact sequence \begin{equation*} 1 \to k^\times \to \Frac S^\times \to (\Frac R^\times/k^\times)^G \to H^1(G, k^\times) \to \cdots \end{equation*} Since $G$ acts trivially on $k^\times$, we have $H^1(G,k^\times) = \Hom(G, k^\times),$ which is trivial since $G$ is a $p$-group and $k^\times$ has no elements of order $p$. Thus $(\Frac R^\times/k^\times)^G = \Frac S^\times/k^\times$ and, using $R \cap \Frac S = S$ (as $S$ is integrally closed and $R$ is integral over $S$), $((R \smallsetminus \{0\})/k^\times)^G = (S \smallsetminus \{0\})/k^\times$. Now we prove (c). If $f = 0$, then we are done, so for the rest of the proof we may assume $f \neq 0$. Necessarily $\sigma f = f$ up to a unit in $R$, and thus their images in the quotient $(R \smallsetminus \{0\})/k^\times$ are equal. Thus the image of $f$ is in $((R \smallsetminus \{0\})/k^\times)^G = (S \smallsetminus \{0\})/k^\times$, and so $f \in S$. \end{proof} \section{Representations of $\UnipF$} \label{s:reps} Let $\cG$ be the set of irreducible representations of $\UnipkF$ whose central character has trivial $\Gal(L/K)$-stabilizer.
Let $\cA$ denote the set of all characters of $H(\F)$ whose restriction to the center $Z(\UnipkF)$ of $\UnipkF$ has trivial $\Gal(L/K)$-stabilizer. In this section, we show that $\cG$ can be parametrized by $\cA$ and explicitly describe such a parametrization. There are two main cases of behavior, depending on the parameters $n$, $h$, and $k$. \begin{definition} Given a triple of positive integers $(n,h,k)$ such that $h \geq k+1$, we say that: \begin{enumerate}[label=\textbullet] \item $(n,h,k)$ is in \textit{Case 1} if $(n-1)(h-k)^+$ is even. \item $(n,h,k)$ is in \textit{Case 2} if $(n-1)(h-k)^+$ is odd. \end{enumerate} \end{definition} Consider the following subgroups of $\Unip$: \begin{align*} H'(\F) &\colonequals \left\{\sum_{i=0}^{n-1} A_i \tau^i : \text{$A_{ij} = 0$ if $i > 0$ and $j \leq \frac{h-k}{2} - \frac{i}{n}$}\right\} \subset \UnipF \\ H^+(\F) &\colonequals \left\{\sum_{i=0}^{n-1} A_i \tau^i : \text{$A_{ij} = 0$ if $i > 0$ and $j < \frac{h-k}{2} - \frac{i}{n}$ and $A_{n/2,(h-k-1)/2} \in \FF_{q^{n/2}}$}\right\} \subset \UnipF \end{align*} We will also need the subgroups \begin{align*} H_0'(\F) &\colonequals \{\textstyle \sum_{i=0}^{n-1} A_i \tau^i \in H'(\F) : A_0 \in Z(\UnipkF)\}, \\ H_0^+(\F) &\colonequals \{\textstyle \sum_{i=0}^{n-1} A_i \tau^i \in H^+(\F) : A_0 \in Z(\UnipkF)\}. \end{align*} Let \begin{align}\label{e:cI} \cI &= \left\{(i,j) : i = 0, \, 1 \leq j \leq h-1\right\} \cup \left\{(i,j) : 1 \leq i \leq n-1, \, \frac{h-k}{2} - \frac{i}{n} < j < h-k\right\}. \end{align} Notice that \begin{align} \label{e:caseindex} [H^+(\F) : H'(\F)] &= \begin{cases} 1 & \text{if $(n,h,k)$ is in Case 1,} \\ q^{n/2} & \text{if $(n,h,k)$ is in Case 2.}\end{cases} \\ \label{e:fullindex} [\UnipF : H^+(\F)] &= q^{n(n-1)(h-k)^+/2}. \end{align} For $\chi \in \cA$, define an extension $\chi^\sharp$ of $\chi$ to $H'(\F)$ by \begin{equation*} \chi^\sharp(\textstyle \sum A_i \tau^i) \colonequals \chi(A_0).
\end{equation*} Fix any extension $\widetilde \chi$ of $\chi^\sharp$ to $H^+(\F)$. Note that in Case 1, necessarily $\widetilde \chi = \chi^\sharp$. In Case 2, there are $q^{n/2}$ choices of $\widetilde \chi$ since $H^+(\F)$ is abelian. \begin{lemma}\label{l:centralext} If $\rho \in \cG$ has central character $\omega$, then the restriction of $\rho$ to $H_0'(\F)$ contains the character \begin{equation*} \omega^\sharp(A_0 + A_1 \tau + \cdots + A_{n-1} \tau^{n-1}) \colonequals \omega(A_0). \end{equation*} \end{lemma} \begin{proof} We prove this by induction along the chain of subgroups \begin{align*} G_1 &\colonequals \{A_0 + V^{h-k-1} A_1 \tau^{n-1}\} \subset H_0'(\F), \\ G_2 &\colonequals \{A_0 + V^{h-k-1} A_2 \tau^{n-2} + V^{h-k-1} A_1 \tau^{n-1}\} \subset H_0'(\F), \ldots \end{align*} Consider the extension $\omega_1$ of $\omega$ to $G_1$ defined as \begin{equation*} \omega_1 \colon G_1 \to \overline \QQ_\ell^\times, \qquad A_0 + V^{h-k-1} A_1 \tau^{n-1} \mapsto \omega(A_0). \end{equation*} Then for any $g_1 = 1 + VB \tau \in \UnipkF$ and $h = A_0 + V^{h-k-1} A \tau^{n-1} \in H_0'(\F)$, we have \begin{align*} {}^{g_1} \omega^\sharp(h) &= \omega^\sharp(A_0 + V^{h-1}(BA^q - AB^{q^{n-1}})) = \omega^\sharp(A_0) \psi(BA^q - AB^{q^{n-1}}). \end{align*} Since $\psi$ has conductor $q^n$, every character $\F \to \overline \QQ_\ell^\times$ can be written as $A \mapsto \psi(BA^q - AB^{q^{n-1}})$ for some $B \in \F$. Thus the restriction of $\rho$ to $G_1$ contains $\omega_1$. Applying the above argument inductively to each $G_i \subseteq H_0'(\F)$ proves that the restriction of $\rho$ to $H_0'(\F)$ contains $\omega^\sharp$. Suppose that we are in Case 2 and let $i = n/2$ and $j = (h-k-1)/2$. Let $\widetilde \omega$ be any extension of $\omega^\sharp$ to $H_0^+(\F)$. To prove that $\rho|_{H_0^+(\F)}$ contains $\widetilde \omega$, it is enough to prove that the orbit of $\widetilde \omega$ under $\UnipkF$-conjugacy contains every extension of $\omega^\sharp$ to $H_0^+(\F)$.
Indeed, for $g = 1 + V^jB \tau^i \in \UnipkF$ and $h = \sum A_i \tau^i \in H_0^+(\F)$, we have \begin{align*} \widetilde \omega \Big((1 + V^j B \tau^i)&(A_0 + A_1 \tau + \cdots + A_{n-1} \tau^{n-1})((1 + V^{h-1}B^{q^i+1}) - V^jB \tau^i)\Big) \\ &= \widetilde \omega\Big((A_0 + V^{h-1}(A_i(B - B^{q^i}))) + A_i \tau^i\Big) \\ &= \widetilde \omega(A_0 + A_i \tau^i) \psi(A_i(B - B^{q^i})). \end{align*} Since $\psi$ has conductor $q^n$, \begin{equation*} \#\{A_i \mapsto \psi(A_i(B - B^{q^{n/2}})) : B \in \F\} = q^{n/2}, \end{equation*} so the $\UnipkF$-conjugates of $\widetilde \omega$ exhaust all $q^{n/2}$ extensions of $\omega^\sharp$, and this completes the proof. \end{proof} \begin{theorem}\label{t:irrepdesc} For any $\chi \in \cA$, the representation $\rho_\chi \colonequals \Ind_{H^+(\F)}^{\UnipkF}(\widetilde \chi)$ is irreducible with dimension $q^{n(n-1)(h-k)^+/2}$. Moreover, $\cG = \{\rho_\chi : \chi \in \cA\}$. \end{theorem} \begin{proof} The dimension follows from \eqref{e:fullindex}. To prove irreducibility, we use Mackey's criterion. First note that $H'(\F)$ clearly centralizes $\chi^\sharp$ and $H^+(\F)$ clearly centralizes $\widetilde \chi$. We must show that these are exactly the centralizers of these characters. Let $V \colon \bW \to \bW$ denote the Verschiebung map. Consider $1 + V^j B \tau^i \in \UnipkF \smallsetminus H'(\F)$ with $i \neq n/2$. Then $(i', j') \colonequals (n-i, h-1-k-j) \in \cI$ and for any $A \in \F$, \begin{align*} \widetilde\chi\Big((1 + V^j B \tau^i)&(1 + V^{j'} A \tau^{i'})(1 - V^j B \tau^i + \cdots)\Big) \\ &= \widetilde\chi\Big((1 + V^j(B)V^{j'}(A)^{q^i} - V^{j'}(A)V^j(B)^{q^{i'}}) + V^{j'} A \tau^{i'}\Big) \\ &= \widetilde \chi\Big(1 + V^{j'}(A) \tau^{i'}\Big) \cdot \psi\Big(V^j(B)V^{j'}(A)^{q^i} - V^{j'}(A)V^j(B)^{q^{i'}}\Big). \end{align*} Since $\psi$ has conductor $q^n$, it follows that $1 + V^j B \tau^i$ does not centralize $\widetilde \chi$. Now consider $1 + V^j B \tau^i \in \UnipkF \smallsetminus H^+(\F)$ with $i = n/2$ and $j = (h-k-1)/2$ so that $B \in \F \smallsetminus \FF_{q^{n/2}}$.
Then for any $A \in \FF_{q^{n/2}}$, \begin{align*} \widetilde \chi \Big((1 + V^j B \tau^i)&(1 + V^j A \tau^i)((1 + V^{h-1}B^{q^i+1}) - V^j B \tau^i)\Big)\\ &= \widetilde \chi \Big((1 + V^{h-1}(A(B-B^{q^i}))) + V^j A \tau^i\Big) \\ &= \widetilde \chi (1 + V^j A \tau^i) \cdot \psi(A(B - B^{q^i})). \end{align*} Again, since $\psi$ has conductor $q^n$, it follows that $1 + V^j B \tau^i$ does not centralize $\widetilde \chi$. This completes the proof. \end{proof} \section{Juggling sequences, Witt vectors, and the varieties $X_h$}\label{s:jugglingdesc} We give a description of $X_h$ in terms of juggling sequences that will be crucial in understanding the cohomology groups $H_c^i(X_h, \overline \QQ_\ell)$. In this section, we also include some computational lemmas that will be used in the proof of Theorem \ref{t:cohomdesc}. \subsection{Juggling sequences} \begin{definition} A \textit{juggling sequence} of \textit{period $n$} is a sequence $(j_1, \ldots, j_n)$ of nonnegative integers satisfying the following condition: \begin{equation*} \text{The integers $i + j_i$ are all distinct modulo $n$.} \end{equation*} The \textit{number of balls} of a juggling sequence is its \textit{average}, $\displaystyle\frac{1}{n} \sum_{i=1}^n j_i$. \end{definition} Throughout, all juggling sequences will be of a fixed period $n$. The following lemmas are straightforward. \begin{lemma}[Properties of juggling sequences]\label{l:juggprops} \mbox{} \begin{enumerate}[label=(\alph*)] \item If $(j_1, \ldots, j_n)$ is a juggling sequence, there exists a unique permutation $\sigma \in S_n$ such that \begin{equation*} (j_1, \ldots, j_n) \equiv (\sigma(1)-1, \ldots, \sigma(n)-n) \mod n. \end{equation*} Given a juggling sequence $j$, we will denote the corresponding permutation by $\sigma_j$. \item Let $c = (1 2 \cdots n) \in S_n$ and let $j$ be a juggling sequence. Then $\sigma_{c \cdot j} = c^{-1}\sigma_j c$.
In particular, the map $j \mapsto \sgn \sigma_j$ is invariant under cyclic permutations. \end{enumerate} \end{lemma} \begin{lemma}\label{l:juggleshape} Let $j$ be a juggling sequence of period $n$ with $r$ balls and let $e_i \in \ZZ^n$ denote the $n$-tuple with a $1$ in the $i$th coordinate and $0$'s elsewhere. \begin{enumerate}[label=(\alph*)] \item If $j$ has a coordinate equal to $rn$, then $j = (rn) \cdot e_1$ up to cyclic permutation. \item Let $s \leq rn$ be a positive integer with $n \nmid s$. Let $\bar s$ be the residue of $s$ modulo $n$. If the coordinates of $j$ take only the values $0,$ $s$, and $rn-s$, then $j = s \cdot e_1 + (rn-s) \cdot e_{\bar s + 1}$ up to cyclic permutation. \end{enumerate} \end{lemma} \subsection{$\cO_K$-Witt vectors} The following lemmas are well-known. \begin{lemma}\label{l:Zwitt} The addition and multiplication polynomials $S_r, M_r \in \cO_K[X_0, \ldots, X_r, Y_0, \ldots, Y_r]$ for $\cO_K$-Witt vectors are given by \begin{align*} S_r(X,Y) &= X_r + Y_r + \sum_{i=1}^r \frac{1}{\pi^i} (X_{r-i}^{q^i} + Y_{r-i}^{q^i} - S_{r-i}(X,Y)^{q^i}), \\ M_r(X,Y) &= \sum_{i = 0}^r \pi^i\left(\sum_{j=0}^{r-i} X_{i+j}^{q^{r-i-j}} Y_{r-j}^{q^j}\right) + \sum_{i=1}^r \frac{1}{\pi^i}\left(\left(\sum_{j=0}^{r-i} X_j^{q^{r-j}} Y_{r-i-j}^{q^{i+j}}\right) - M_i(X,Y)^{q^i}\right). \end{align*} \end{lemma} \begin{notation} We write $\epsilon_r$ to mean any polynomial in $X_i^\alpha Y_j^\beta$ with $i + j < r$. We write $\delta_r$ to mean any polynomial whose monomials are products of indeterminates whose indices are $<r$. \hfill $\Diamond$ \end{notation} \begin{lemma}\label{l:pwitt} Let $A$ be an $\FF_p$-algebra. Then for $X,Y \in \bW(A)$, \begin{align*} (X +_\bW Y)_r &= X_r + Y_r + \epsilon_r, \\ (X \times_\bW Y)_r &= \sum_{j=0}^{r} X_{j}^{p^{r-j}} Y_{r-j}^{p^j} + \epsilon_r. \end{align*} \end{lemma} \subsection{The varieties $X_h$} We now use the above definitions together with some basic computational results about the ring of Witt vectors to describe the varieties $X_h \subset \Unipk$.
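As a quick computational sanity check of the juggling-sequence formalism above (a hypothetical Python sketch, not part of the proofs), one can test the juggling condition, recover the permutation $\sigma_j$ of Lemma \ref{l:juggprops}(a), and confirm that $\sgn \sigma_j$ is invariant under cyclic rotation, as in Lemma \ref{l:juggprops}(b):

```python
def is_juggling(j):
    """A period-n sequence is a juggling sequence iff the values i + j_i
    (for i = 1, ..., n) are pairwise distinct modulo n."""
    n = len(j)
    return len({(i + ji) % n for i, ji in enumerate(j, start=1)}) == n

def sigma(j):
    """The permutation sigma_j of {1, ..., n} with j_i = sigma_j(i) - i mod n,
    returned in one-line notation."""
    n = len(j)
    return tuple((i + ji - 1) % n + 1 for i, ji in enumerate(j, start=1))

def sign(perm):
    """Sign of a permutation in one-line notation, via its inversion count."""
    n = len(perm)
    inv = sum(1 for a in range(n) for b in range(a + 1, n) if perm[a] > perm[b])
    return (-1) ** inv

j = (4, 4, 1)                      # the classical 3-ball siteswap "441"
assert is_juggling(j)
assert sum(j) // len(j) == 3       # number of balls = average of the entries
assert not is_juggling((1, 2, 1))  # fails: 2+2 and 3+1 collide modulo 3

# Lemma (b): sgn(sigma_j) is invariant under cyclic rotation of j.
rotations = [j[i:] + j[:i] for i in range(len(j))]
assert all(is_juggling(r) for r in rotations)
assert len({sign(sigma(r)) for r in rotations}) == 1
```

Here the $3$-ball pattern $441$ serves as the running example; its permutation $\sigma_j$ is $(2,3,1)$ in one-line notation, of sign $+1$, and all three cyclic rotations of the sequence yield a permutation of the same sign.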
We coordinatize $\Unipk = \mathbb{A}^{(h-1) + (n-1)(h-k)^+}$ by writing $A_0 + A_1 \tau + \cdots + A_{n-1} \tau^{n-1} \in \Unipk$ with $A_0 = (x_0, x_n, \ldots, x_{(h-1)n}) \in \bW_h$ and $A_i = (x_i, x_{i+n}, \ldots, x_{i+(h-k-1)n}) \in \bW_{h-k}$ for $i = 1, \ldots, n-1$. Let \begin{equation} \sI \colonequals \{i : \text{$0 \leq i \leq (h-1)n$ if $n \mid i$, and $0 \leq i \leq (h-k-1)n$ if $n \nmid i$}\}.\label{e:sI} \end{equation} Given a juggling sequence $j \in \sI^n$, we have an associated permutation $\sigma_j \in S_n$ by Lemma \ref{l:juggprops}(a). Let \begin{equation*} f_j \colonequals \#\{r : r > \sigma_j(r)\} \end{equation*} denote the number of anti-exceedances of $\sigma_j$. \begin{lemma}\label{l:Xhpolys} \begin{enumerate}[label=(\alph*)] \item In the equal characteristic case, the scheme $X_h \subset \Unip$ is defined by the vanishing of the polynomials \begin{equation*} g_{r} \colonequals \sum_j \sgn(\sigma_j)\, x_{j_1}^q x_{j_2}^{q^2} \cdots x_{j_{n-1}}^{q^{n-1}}(x_{j_n}^{q^n} - x_{j_n}), \end{equation*} where $r=mn$, $x_0 \colonequals 1,$ and the sum ranges over juggling sequences $j = (j_1, \ldots, j_n) \in \sI^n$ with $|j| = \sum j_i = (m - (k-1) f_j)n$. \item In the mixed characteristic case, the scheme $X_h \subset \Unip$ is defined by the vanishing of the polynomials \begin{equation*} g_{r} \colonequals \sum_j \sgn(\sigma_j)\, x_{j_1}^{p^{m- \lfloor j_1/n \rfloor} q} \cdots x_{j_{n-1}}^{p^{m- \lfloor j_{n-1}/n \rfloor} q^{n-1}}(x_{j_n}^{q^n} - x_{j_n})^{p^{m- \lfloor j_n/n \rfloor}} + \epsilon_{nm}, \end{equation*} where $r = mn$, $x_0 \colonequals 1,$ and the sum ranges over juggling sequences $j = (j_1, \ldots, j_n) \in \sI^n$ with $|j| = \sum j_i = r - (k-1) f_j n$. \end{enumerate} \end{lemma} \begin{proof}[Proof of (a)] Let $A$ denote the matrix associated to $A_0 + A_1 \tau + \cdots + A_{n-1} \tau^{n-1}$ and let $A_{r,s}$ denote the $(r,s)$th entry of $A$.
Then if we set $x_i = 0$ for $i \notin \sI$, \begin{equation*} A_{r,s} = \begin{cases} \sum x_{ni+s-r}^{q^{r-1}} \pi^i & \text{if $r \leq s$}, \\ \sum x_{n(i-k+1)+s-r}^{q^{r-1}} \pi^{i} & \text{if $r > s$}. \end{cases} \end{equation*} Let $c_m$ denote the coefficient of $\pi^m$ in \begin{equation*} \det A = \sum_{\sigma \in S_n} \sgn(\sigma) \prod_{r=1}^n A_{r,\sigma(r)}. \end{equation*} Then \begin{equation*} c_m = \sum_{\sigma \in S_n} \sgn(\sigma) \sum_{|i| = m} \prod_{r=1}^n x_{ni_r^*+\sigma(r)-r}^{q^{r-1}}, \end{equation*} where $i = (i_1, \ldots, i_n) \in \ZZ_{\geq 0}^n$ and \begin{equation*} i_r^* = \begin{cases} i_r & \text{if $r \leq \sigma(r)$,} \\ i_r - (k-1) & \text{if $r > \sigma(r)$.} \end{cases} \end{equation*} Then setting $j_r \colonequals n i_r^* + \sigma(r) - r$ defines a juggling sequence $j = (j_1, \ldots, j_n) \in \sI^n$ with \begin{equation*} |j| = \sum_{r=1}^n j_r = \sum_{r=1}^n (n i_r^* + \sigma(r) - r) = nm - n(k-1)f_j. \end{equation*} It is clear that every juggling sequence $j \in \sI^n$ arises in this way, and we therefore have \begin{equation*} c_m = \sum_j \sgn(\sigma_j)\, x_{j_1} x_{j_2}^q \cdots x_{j_n}^{q^{n-1}}, \end{equation*} where the sum ranges over juggling sequences $j \in \sI^n$ with $|j| = nm - n(k-1)f_j$. Recall that $X_h$ is defined by the equations $c_m^q - c_m = 0$ for $1 \leq m \leq h-1$. Let $c = (12\cdots n) \in S_n$ and let $j$ be any juggling sequence with $|j| = mn.$ By Lemma \ref{l:juggprops}, $c \cdot j$ is a juggling sequence with $|c \cdot j| = mn$ and $\sgn (\sigma_{c \cdot j}) = \sgn(\sigma_j)$. Thus we may arrange the terms in $c_m^q - c_m$ so that we obtain: \begin{equation*} c_m^q - c_m = \sum_j \sgn(\sigma_j)\, x_{j_1}^q x_{j_2}^{q^2} \cdots x_{j_{n-1}}^{q^{n-1}}(x_{j_n}^{q^n} - x_{j_n}). \end{equation*} Writing $r = mn$ and letting $g_r \colonequals c_{m}^q - c_m$ completes the proof of (a).
The proof of (b) follows from the same computation, now using the multiplication rule $V^i(a) V^j(b) = V^{i+j}(a^{q^j} b^{q^i})$ in $\bW_h$. \end{proof} \begin{corollary} $X_h$ is a variety of pure dimension $(n-1)(h-k)^+$. \end{corollary} We will need the following computational lemmas in the proof of Theorem \ref{t:cohomdesc}. \begin{lemma}\label{l:s(x)y coord} Let $y = Y_0 + Y_1 \tau + \cdots + Y_{n-1} \tau^{n-1} \in H'(\overline \FF_q)$ and $s(x) = 1 + X_1 \tau + \cdots + X_{n-1} \tau^{n-1}$, where $X_i = (X_{ij})_{j=1, \ldots, h-k}$ and $X_{ij} = 0$ unless $(i,j) \in J$. \begin{enumerate}[label=(\alph*)] \item In the equal characteristic case, if we write $s(x) \cdot y = \sum A_i \tau^i$ with $A_i = (A_{ij})_j$, then \begin{equation*} A_{ij} = Y_{ij} + \sum X_{i_1,j_1} Y_{i_2,j_2}^{q^{i_1+nj_1}}, \end{equation*} where the sum varies over pairs $((i_1,j_1),(i_2,j_2)) \in J \times I$ such that \begin{equation*} \begin{cases} j_1 + j_2 = j & \text{if $i_1 + i_2 = i$,} \\ j_1 + j_2 = j - k & \text{if $i_1 + i_2 = i +n$.} \end{cases} \end{equation*} \item In the mixed characteristic case, if we write $s(x) \cdot y = 1 + \sum A_i \tau^i$ with $A_i = (A_{ij})_j$, then \begin{equation*} A_{ij} = \begin{cases} Y_{ij} + \sum (X_{i_1,j_1}^{q^{j_2}} Y_{i_2,j_2}^{q^{i_1 + j_1 + n j_1}})^q & \text{if $i = 0$}, \\ Y_{ij} + \sum X_{i_1,j_1}^{q^{j_2}} Y_{i_2,j_2}^{q^{i_1 + j_1 + n j_1}} & \text{if $i \neq 0$} \end{cases} \end{equation*} where both sums vary over pairs $((i_1,j_1),(i_2,j_2)) \in J \times I$ such that \begin{equation*} \begin{cases} j_1 + j_2 = j & \text{if $i_1 + i_2 = i$,} \\ j_1 + j_2 = j - k & \text{if $i_1 + i_2 = i +n$.} \end{cases} \end{equation*} \end{enumerate} \end{lemma} \begin{proof} This is a straightforward computation. \end{proof} \begin{lemma}\label{l:claim1} Let $g_r$ be as in Lemma \ref{l:Xhpolys} and let $s(x)$ be as in Lemma \ref{l:s(x)y coord}.
Suppose that for any $y, y' \in H'(\overline \FF_q)$ with $L_{q^n}(y) = L_{q^n}(y')$, \begin{equation*} g_r(s(x) \cdot y) = 0 \qquad \Longleftrightarrow \qquad g_r(s(x) \cdot y') = 0. \end{equation*} Then, using the identity $L_{q^n}(y) = 1 + \sum x_i \tau^i$, we have that $g_r(s(x) \cdot y)$ is a polynomial in the $x_i$ for $i \in \sI$. \end{lemma} This is a corollary of Proposition \ref{p:claim1gen}. \begin{proof} For $i \in J$, define $y_i \colonequals x_i.$ Consider the rings \begin{equation*} R = \overline \FF_q[y_i : i \in \sI] \supset S = \overline \FF_q[x_i : i \in \sI] \end{equation*} and their fraction fields \begin{equation*} E = \Frac R = \overline \FF_q(y_i : i \in \sI) \supset F = \Frac S = \overline \FF_q(x_i : i \in \sI). \end{equation*} In order to apply Proposition \ref{p:claim1gen}, we need to show that $S \hookrightarrow R$ is a finite morphism. To do this, we just need to observe that for each $i \in \sI$, the polynomial $y_i$ is integral over $S$. Notice that $E/F$ is not Galois, but it is obtained from $F$ by a tower of Galois extensions, each nontrivial step having Galois group isomorphic to the additive group of $\FF_{q^n}$. Explicitly, \begin{equation*} F = E_0 \subseteq E_1 \subseteq \cdots \subseteq E_{n(h-1)} = E, \end{equation*} where $E_i = E_{i-1}(y_i).$ Then $E_i = \Frac S_i$, where $S_0 = S$ and $S_i = S_{i-1}[y_i]$. By the set-up, we know that for each $\sigma \in \Gal(E_{n(h-1)}/E_{n(h-k)-1})$, \begin{equation*} g_r(s(x) \cdot y) = 0 \quad \Longleftrightarrow \quad g_r(s(x) \cdot \sigma(y)) = 0. \end{equation*} By the Nullstellensatz, this implies that the ideal generated by $g_r(s(x) \cdot y)$ in $S_{n(h-1)} = R$ is equal to the ideal generated by $g_r(s(x) \cdot \sigma(y))$. Thus by Proposition \ref{p:claim1gen}, we have $g_r(s(x) \cdot y) \in S_{n(h-k)-1}$. By induction, $g_r(s(x) \cdot y) \in S$. \end{proof} To prove Proposition \ref{p:dimHom}, we will actually need a more precise result than Lemma \ref{l:claim1}. For $* \in \bN$, let $\overline{*}$ denote the residue of $*$ modulo $n$.
\begin{lemma}\label{l:poly} Let $a = 1 + \sum a_i \tau^i$ be as in Lemma \ref{l:s(x)y coord} and let $l \colonequals h-1,$ $m \colonequals h-k+1$, for ease of notation. Consider a descending sequence of integers $r_1 > r_2 > \cdots > r_s$ with $r_1 = mn - 1$, $r_s > n(h-k)/2$, and $n \nmid r_i$. Let $t_s(x) = x^{q^{r_s - n}} + x^{q^{r_s - 2n}} + \cdots + x^{q^{\bar r_s}}$. Set $x_{r_i} = 0 = x_{mn-r_i}$ for $i = 1, \ldots, s -1$. \begin{enumerate}[label=(\alph*)] \item In the equal characteristic case, \begin{equation*} g_{ln}(a) = x_{ln} - x_{r_s}^{q^{n - \bar r_s}} x_{ln - r_s} + x_{r_s}^{q^n} x_{ln - r_s}^{q^{\bar r_s}} + x_{r_s}^{q^n} t_s(x_{ln - r_s})^{q^n} - x_{r_s} t_s(x_{ln - r_s}) + \delta_{ln-r_s}. \end{equation*} \item In the mixed characteristic case, \begin{align*} g_{ln}(a) = x_{ln} &- x_{r_s}^{q^{n - \bar r_s + m - m_0}} x_{mn - r_s}^{q^{m_0+1}} + x_{r_s}^{q^{n+m-m_0}} x_{mn - r_s}^{q^{\bar r_s + m_0 + 1}} \\ &+ x_{r_s}^{q^{n+m-m_0}} t_s(x_{mn - r_s}^{q^{m_0+1}})^{q^n} - x_{r_s}^{q^{m-m_0}} t_s(x_{mn - r_s}^{q^{m_0+1}}) + \delta_{mn-r_s} + \epsilon_{mn}, \end{align*} where $m_0 = \floor{r_s/n}$. \end{enumerate} \end{lemma} \begin{proof} Since $x_{r_i} = 0$ for $i = 1, \ldots, s-1$, any juggling sequence $j = (j_1, \ldots, j_n)$ through which $y_{mn-r_s}$ contributes nontrivially to $g_{ln}$ must satisfy the following conditions: \begin{enumerate}[label=\textbullet] \item $j_n \neq 0$; \item the numbers $r_i$ and $mn - r_i$ for $i = 1, \ldots, s-1$ do not appear in $j$.
\end{enumerate} Since $y_{mn - r_s}$ only appears as the coefficient of $\pi^u \tau^v$ for $nu + v \geq mn - r_s$, it follows that the terms in $g_{ln}$ involving $y_{mn - r_s}$ occur exactly in the parts corresponding to the juggling sequences \begin{align*} &ln \cdot e_n && \longleftrightarrow && 1 \in S_n, \\ &r_s \cdot e_{n - \bar r_s} + (mn - r_s) \cdot e_n && \longleftrightarrow && (n-\bar r_s, n) \in S_n, \\ &(mn - r_s) \cdot e_{\bar r_s} + r_s \cdot e_n && \longleftrightarrow && (\bar r_s, n) \in S_n. \end{align*} We now handle the equal and mixed characteristic cases separately. \begin{enumerate}[label=(\alph*)] \item In the equal characteristic case, the terms involving $y_{mn-r_s}$ occur in the expression \begin{equation*} (a_{ln}^{q^n} - a_{ln}) - a_{r_s}^{q^{n-\bar r_s}}(a_{mn-r_s}^{q^n} - a_{mn-r_s}) - a_{mn-r_s}^{q^{\bar r_s}}(a_{r_s}^{q^n} - a_{r_s}) + \delta_{mn-r_s}. \end{equation*} Thus by Lemma \ref{l:s(x)y coord}(a), we see that the only terms involving $y_{mn-r_s}$ are \begin{equation*} ((x_{r_s} y_{mn-r_s}^{q^{r_s}})^{q^n} - x_{r_s} y_{mn-r_s}^{q^{r_s}}) - x_{r_s}^{q^{n-\bar r_s}}(y_{mn-r_s}^{q^n} - y_{mn-r_s}) - y_{mn-r_s}^{q^{\bar r_s}}(x_{r_s}^{q^n} - x_{r_s}). \end{equation*} This reduces to the expression \begin{align*} -&x_{r_s}^{q^{n - \bar r_s}} x_{mn-r_s} - x_{r_s} (y_{mn-r_s}^{q^{r_s}} - y_{mn-r_s}^{q^{\bar r_s}}) + x_{r_s}^{q^n}(y_{mn-r_s}^{q^{r_s+n}} - y_{mn-r_s}^{q^{\bar r_s}}) \\ &= -x_{r_s}^{q^{n - \bar r_s}} x_{mn-r_s} + x_{r_s}^{q^n}(y_{mn-r_s}^{q^{\bar r_s + n}} - y_{mn-r_s}^{q^{\bar r_s}}) \\ &\qquad\qquad\qquad- x_{r_s}(y_{mn-r_s}^{q^{r_s}} - y_{mn-r_s}^{q^{\bar r_s}}) + x_{r_s}^{q^n}(y_{mn-r_s}^{q^{r_s}} - y_{mn-r_s}^{q^{\bar r_s}})^{q^n}. \end{align*} Finally, (a) follows once we recall that \begin{equation*} x_{mn-r_s} = y_{mn-r_s}^{q^n} - y_{mn-r_s} + \delta_{mn-r_s}.
\end{equation*} \item In the mixed characteristics case, the terms involving $y_{mn-r_s}$ occur in the expression \begin{equation*} (a_{ln}^{q^n} - a_{ln}) - a_{r_s}^{q^{n- \bar r_s} q^{m - \floor{r_s/n}}}(a_{mn-r_s}^{q^n} - a_{mn-r_s})^{q^{m - \floor{(mn-r_s)/n}}} - a_{mn-r_s}^{q^{\bar r_s} q^{m - \floor{(mn-r_s)/n}}} (a_{r_s}^{q^n} - a_{r_s})^{q^{m - \floor{r_s/n}}}. \end{equation*} Let $m_0 = \floor{r_s/n}$. Notice that $\floor{(mn - r_s)/n} = m - m_0 - 1$. By Lemma \ref{l:s(x)y coord}(b), we see that the only terms involving $y_{mn - r_s}$ are \begin{align*} x_{r_s}^{q^{m-m_0+n}} y_{mn - r_s}^{q^{r_s + m_0 + 1 + n}} - x_{r_s}^{q^{m-m_0}} y_{mn - r_s}^{q^{r_s + m_0 + 1}} - x_{r_s}^{q^{n-\bar r_s +m-m_0}}&(y_{mn-r_s}^{q^n} - y_{mn-r_s})^{q^{m_0+1}} \\ - &y_{mn - r_s}^{q^{\bar r_s + m_0 + 1}}(x_{r_s}^{q^n} - x_{r_s})^{q^{m-m_0}}. \end{align*} This reduces to the expression \begin{align*} -x_{r_s}^{q^{n-\bar r_s + m - m_0}}&x_{mn - r_s}^{q^{m_0+1}} + x_{r_s}^{q^{n + m - m_0}} x_{mn - r_s}^{q^{\bar r_s + m_0 + 1}} \\ &- x_{r_s}^{q^{m-m_0}}t_s(x_{mn-r_s}^{q^{m_0+1}}) + x_{r_s}^{q^{n+m-m_0}} t_s(x_{mn-r_s}^{q^{m_0+1}})^{q^n} + \delta_{mn - r_s}.\qedhere \end{align*} \end{enumerate} \end{proof} \section{The representations $H_c^\bullet(X_h)[\chi]$}\label{s:cohomdesc} In this section, we prove the irreducibility of $H_c^i(X_h, \overline \QQ_\ell)[\chi]$ and its vanishing outside a single degree. The key proposition is: \begin{proposition}\label{p:dimHom} For any $\chi \in \cA$, \begin{equation*} \dim\Hom_{\UnipkF}(\rho_\chi, H_c^i(X_h, \overline \QQ_\ell)) = \delta_{i, (n-1)(h-k)^+}, \end{equation*} where $\rho_\chi \in \cG$ is the representation described in Theorem \ref{t:irrepdesc}. Moreover, $\Fr_{q^n}$ acts on $H_c^{(n-1)(h-k)^+}(X_h, \overline \QQ_\ell)$ via multiplication by $(-1)^{(n-1)(h-k)^+} q^{n(n-1)(h-k)^+/2}.$ \end{proposition} Recall that $\F^\times \ltimes \UnipF \cong \cR_{h,k,n,q}^\times(\F)$ and that $\F^\times$ acts on $X_h$ by conjugation.
For any $z \in \F^\times$ and any $g,h \in H(\F)$, let $(z,h,g)$ denote the map $X_h \to X_h$ given by $x \mapsto z(h * x \cdot g)z^{-1}$. We prove the following proposition in Section \ref{s:pftrace}. \begin{proposition}\label{p:vregtrace} For any generator $\zeta$ of $\F^\times$, \begin{equation*} \Tr((\zeta, 1, g)^* ; H_c^{(n-1)(h-k)^+}(X_h, \overline \QQ_\ell)[\chi]) = (-1)^{(n-1)(h-k)^+} \chi(g). \end{equation*} \end{proposition} From the multiplicity-one statement of Proposition \ref{p:dimHom}, the nonvanishing statement of Proposition \ref{p:vregtrace}, and a counting argument coming from Theorem \ref{t:irrepdesc}, one obtains the following theorem. \begin{theorem}\label{t:cohomdesc} For any $\chi \in \cA$, the $\UnipF$-representation $H_c^i(X_h, \overline \QQ_\ell)[\chi]$ is irreducible when $i = (n-1)(h-k)^+$ and vanishes otherwise. Moreover, for $\chi, \chi' \in \cA$, we have $H_c^{(n-1)(h-k)^+}(X_h, \overline \QQ_\ell)[\chi] \cong H_c^{(n-1)(h-k)^+}(X_h, \overline \QQ_\ell)[\chi']$ if and only if $\chi = \chi'$. \end{theorem} We prove this in Section \ref{s:pfcohomdesc}. The following is an immediate corollary of Theorem \ref{t:cohomdesc}. \begin{corollary}\label{c:Gbij} The parametrization \begin{equation*} \cA \to \cG, \qquad \chi \mapsto \rho_\chi \end{equation*} described in Theorem \ref{t:irrepdesc} is a bijection. \end{corollary} \subsection{Proof of Proposition \ref{p:dimHom}}\label{s:pfdimHom} Note that from Section \ref{s:reps}, the representation \begin{equation*} W_\chi \colonequals \Ind_{H'(\F)}^{\UnipkF}(\chi^\sharp) \end{equation*} is irreducible and isomorphic to $\rho_\chi$ in Case 1, and is a direct sum of $q^{n/2}$ copies of $\rho_\chi$ in Case 2. Thus the statement of the proposition is equivalent to the following two statements: \begin{enumerate}[label=(\alph*)] \item If we are in Case 1, then \begin{equation*} \dim \Hom_{\UnipkF}(W_\chi, H_c^i(X_h, \overline \QQ_\ell)) = \delta_{i,(n-1)(h-k)^+}.
\end{equation*} \item If we are in Case 2, then \begin{equation*} \dim \Hom_{\UnipkF}(W_\chi, H_c^i(X_h, \overline \QQ_\ell)) = q^{n/2} \cdot \delta_{i,(n-1)(h-k)^+}. \end{equation*} \end{enumerate} We use Proposition \ref{p:B2.3} to reduce the computation of the space $\Hom_{\UnipF}(W_\chi, H_c^i(X_h, \overline \QQ_\ell))$ to a computation of the cohomology of a certain scheme $S$ with coefficients in a certain constructible $\QQ_\ell$-sheaf $\sF$. Then, to compute $H_c^i(S, \sF)$, we inductively apply Proposition \ref{p:B2.10}, a more general version of Proposition 2.10 in \cite{B12}. This will allow us to reduce the computation to a computation involving a $0$-dimensional scheme in Case 1 and a $1$-dimensional scheme in Case 2. We will treat these cases simultaneously until the final step. \subsubsection*{Step 0} We first need to establish some notation. \begin{enumerate}[label=\textbullet] \item Let $I \colonequals \{nj + i : (i,j) \in \cI\}$, where $\cI$ was defined in Equation \eqref{e:cI}. Put $I' = I \smallsetminus \{n, 2n, \ldots, n(h-1)\}$ and let $d = |I'| = \floor{(n-1)(h-1)/2}.$ Put $J \colonequals \sI \smallsetminus I$. \item Let $r_s$ denote the $s$th smallest number in $J$ not divisible by $n$. Put $I_0 \colonequals I'$ and $J_0 \colonequals J$. Then define $I_s \colonequals I_{s-1} \smallsetminus \{n(h-k)-r_s\}$ and $J_s \colonequals J_{s-1} \smallsetminus \{r_s\}$. \item Note that $I_d = \varnothing$. In Case 1, $J_d = \varnothing.$ In Case 2, $J_d = \{n(h-1)/2\}$. \item Note that $H' = \{1 + \sum a_i \tau^i : i \in I\}$. \item For a finite set $A \subset \NN$, we will write $\mathbb{A}[A]$ to denote the affine space of dimension $|A|$ with coordinates labelled by $A$. \item For $* \in \NN$, we will denote by $[*]$ the representative of $*$ in $\{1, \ldots, n\}$ modulo $n$, and by $\bar *$ the representative of $*$ in $\{0, \ldots, n-1\}$ modulo $n$, so that $\bar * \equiv * \pmod n$. Note that these only differ when $* \equiv 0 \pmod n$.
\end{enumerate} \subsubsection*{Step 1} We apply Proposition \ref{p:B2.3} to the following set-up: \begin{enumerate}[label=\textbullet] \item the group $\Unipk$ together with the connected subgroup $H'$, both of which are defined over $\F$ \item a morphism $s \colon \Unipk/H' \to \Unipk$ defined by identifying $\Unipk/H'$ with affine space $\mathbb{A}[J]$ and setting $s \colon (x_{nj+i})_{nj+i \in J} \mapsto 1 + \sum_{nj+i \in J} V^j(x_{nj+i}) \tau^i$ \item the algebraic group morphism $f \colon H' \to H$ given by $\sum A_i \tau^i \mapsto A_0$ \item an additive character $\chi \colon H(\F) \to \overline \QQ_\ell^\times$ \item a locally closed subvariety $Y_h \subset \Unipk$ which is chosen so that $X_h = L_{q^n}^{-1}(Y_h)$ \end{enumerate} Since $X_h$ has a right-multiplication action of $\UnipkF$, the cohomology groups $H_c^i(X_h, \overline \QQ_\ell)$ inherit a $\UnipkF$-action. For each $i \geq 0$, Proposition \ref{p:B2.3} implies that we have a vector space isomorphism \begin{equation*} \boxed{\Hom_{\UnipF}(W_\chi, H_c^i(X_h, \overline \QQ_\ell)) \cong H_c^i(\beta^{-1}(Y_h), P^* \Loc_\chi)} \end{equation*} compatible with the action of $\Fr_{q^n}$. Here, $\Loc_\chi$ is the local system on $H$ corresponding to $\chi$, the morphism $\beta \colon (\Unip/H') \times H' \to \Unip$ is given by $\beta(x,g) = s(\Fr_{q^n}(x)) \cdot g \cdot s(x)^{-1}$, and the morphism $P \colon \beta^{-1}(Y_h) \to H$ is the composition $\beta^{-1}(Y_h) \hookrightarrow (\Unip/H') \times H' \overset{\pr}{\longrightarrow} H' \overset{f}{\longrightarrow} H.$ We now work out an explicit description of $\beta^{-1}(Y_h) \subset \mathbb{A}[J] \times H'$. For $1 \leq r \leq n(h-1)$ with $r$ divisible by $n$, let $g_r$ be the polynomial described in Lemma \ref{l:Xhpolys}. Write $x = (x_i)_{i \in J} \in \mathbb{A}[J]$ and $g = 1 + \sum_{i \in I} x_i \tau^i \in H'(\overline \FF_q)$. Now pick $y = 1 + \sum_{i \in I} y_i \tau^i \in H'(\overline \FF_q)$ such that $L_{q^n}(y) = g$.
Then \begin{equation*} \beta(x,g) = \Fr_{q^n}(s(x)) \cdot L_{q^n}(y) \cdot s(x)^{-1} = L_{q^n}(s(x) \cdot y). \end{equation*} We see that $\beta(x,g) \in Y_h$ if and only if $s(x) \cdot y \in X_h$. Let $s(x) \cdot y = 1 + \sum a_i \tau^i$. By Lemma \ref{l:Xhpolys}, we know that $s(x) \cdot y \in X_h$ if and only if $g_r(a) = 0$ for all $r \leq n(h-1)$ divisible by $n$. Recall from Lemma \ref{l:claim1} that using the identity $L_{q^n}(y) = 1 + \sum_{i \in I} x_i \tau^i$, each polynomial $g_r(a)$, which \textit{a priori} is a polynomial in $x_j$ for $j \in J$ and $y_i$ for $i \in I$, is in fact a polynomial in $x_i$ for $1 \leq i \leq n(h-1)$. \subsubsection*{Step 2} By Lemma \ref{l:claim1}, we know that each polynomial $g_r(s(x) \cdot y)$ can be written as a polynomial in the $x_i$'s. Furthermore, they can each be written as a polynomial in $x_i$ for $i \in I_0 \cup J_0$, since $g_r(a)$ is of the form $x_r + \text{(terms involving $x_i$, $i \leq r$)}$. Thus the $in$th coordinates $x_{in}$ of $\beta^{-1}(Y_h) \subset \mathbb{A}[I \cup J]$ are uniquely determined by the other coordinates, which implies that for some subscheme $S^{(0)} \subset \mathbb{A}[I_0 \cup J_0]$, \begin{equation*} H_c^i(\beta^{-1}(Y_h), P^* \Loc_\chi) \cong H_c^i(S^{(0)}, (P^{(0)})^* \Loc_\chi), \end{equation*} where $P^{(0)} \colon S^{(0)} \to Z$ is the map given by $(x_i)_{i \in I_0 \cup J_0} \mapsto (x_n, x_{2n}, \ldots, x_{(h-1)n})$. We now apply Proposition \ref{p:B2.10} to the following set-up: \begin{enumerate}[label=\textbullet] \item Let $S^{(0)} = \mathbb{A}[I_0 \cup J_0]$. \item Let $S_2^{(0)} = \mathbb{A}[I_1 \cup J_0]$. \item Note that $S^{(0)} = S_2^{(0)} \times \mathbb{A}[\{n(h-1)-1\}]$. \item Let $f \colon S_2^{(0)} \to \GG_a$ be defined as $(x_i)_{i \in I_1 \cup J_0} \mapsto x_1$. \item Write a point of $S^{(0)}$ as $(v,w)$ with $v \in S_2^{(0)}$ and $w = x_{n(h-k)-1}$.
By Lemma \ref{l:poly}, in the equal characteristics case, we may write \begin{equation*} P^{(0)}(v,w) = g(f(v)^{q^{n-1}} w - f(v)^{q^n} w^q) \cdot P_2^{(0)}(v), \end{equation*} and in the mixed characteristics case, we may write \begin{equation*} P^{(0)}(v,w) = g(f(v)^{q^{n}} w^{q^{h-k}} - f(v)^{q^{n+1}} w^{q^h}) \cdot P_2^{(0)}(v). \end{equation*} \item Let $S_3^{(0)} = \mathbb{A}[I_1 \cup J_1]$ so that this is the subscheme of $S_2^{(0)} = \mathbb{A}[I_1 \cup J_0]$ defined by $f = 0$, and let $P_3^{(0)} \colonequals P_2^{(0)}|_{S_3^{(0)}} \colon S_3^{(0)} \to Z$. \end{enumerate} Then by Proposition \ref{p:B2.10}, for all $i \in \ZZ$, \begin{equation*} \boxed{H_c^i(S^{(0)},(P^{(0)})^* \Loc_\chi) \cong H_c^{i-2}(S_3^{(0)}, (P_3^{(0)})^* \Loc_\chi)(-1)} \end{equation*} as vector spaces equipped with an action of $\Fr_q$, where the Tate twist $(-1)$ means that the action of $\Fr_q$ on $H_c^{i-2}(S_3^{(0)}, (P_3^{(0)})^* \Loc_\chi)$ is multiplied by $q$. Note that this implies that the action of $\Fr_{q^n}$ is multiplied by $q^n$. \subsubsection*{Step 3} We now describe the inductive step for $l < d$. We apply Proposition \ref{p:B2.10alpha} to the following set-up: \begin{enumerate}[label=\textbullet] \item Let $S^{(l)} \colonequals S_3^{(l-1)} = \mathbb{A}[I_l \cup J_l].$ \item Let $S_2^{(l)} = \mathbb{A}[I_{l+1} \cup J_l]$. \item Note that $S^{(l)} = S_2^{(l)} \times \mathbb{A}[\{n(h-1)-r_l\}]$. \item Let $f \colon S_2^{(l)} \to Z$ be defined as $(x_i)_{i \in I_{l+1} \cup J_l} \mapsto (0, \ldots, 0, x_{r_l})$. \item Set $v \in S_2^{(l)}$ and $w = x_{n(h-k)-r_l}$. 
Let $t_l(x) = x^{q^{r_l-n}} + \cdots + x^{q^{\bar r_l}}.$ By Lemma \ref{l:poly}, the morphism $P^{(l)} \colonequals P_3^{(l-1)} \colon S^{(l)} \to Z$ has the following form: In the equal characteristics case, \begin{equation*} P^{(l)}(v,w) = g(f(v)^{q^{n - \bar r_l}} w - f(v)^{q^n} w^{q^{\bar r_l}}) \cdot P_2^{(l)}(v), \end{equation*} and in the mixed characteristics case, \begin{equation*} P^{(l)}(v,w) = g(f(v)^{q^{n- \bar r_l + h - k - m}} w^{q^{m+1}} - f(v)^{q^{n + h - k - m}} w^{q^{\bar r_l + m + 1}}) \cdot P_2^{(l)}(v). \end{equation*} \item Let $S_3^{(l)} = \mathbb{A}[I_{l+1} \cup J_{l+1}]$ so that this is the subscheme of $S_2^{(l)} = \mathbb{A}[I_{l+1} \cup J_l]$ defined by $f = 0$, and let $P_3^{(l)} \colonequals P_2^{(l)}|_{S_3^{(l)}} \colon S_3^{(l)} \to Z$. \end{enumerate} Then by Proposition \ref{p:B2.10}, for all $i \in \ZZ$, \begin{equation*} \boxed{H_c^i(S^{(l)}, (P^{(l)})^* \Loc_\chi) \cong H_c^{i-2}(S_3^{(l)}, (P_3^{(l)})^* \Loc_\chi)(-1).} \end{equation*} \subsubsection*{Step 4: Case 1} Step 3 allows us to reduce the computation of the cohomology of $S^{(0)}$ to a computation of the cohomology of $S^{(d)} \colonequals S_3^{(d-1)}$, which is a point. Thus $\Fr_{q^n}$ acts trivially on the cohomology of $S^{(d)}$ and \begin{equation*} \boxed{\dim H_c^i(S^{(d)}, (P^{(d)})^*\Loc_\chi) = \delta_{0, i}.} \end{equation*} \subsubsection*{Step 4: Case 2} Step 3 allows us to reduce the computation of the cohomology of $S^{(0)}$ to a computation of the cohomology of $S^{(d)} \colonequals S_3^{(d-1)} = \mathbb{A}[\{n(h-k)/2\}]$. The morphism $P^{(d)}$ is \begin{equation*} P^{(d)} \colon S^{(d)} \to Z, \qquad a_{n(h-k)/2} \mapsto \left(0, \ldots, 0, a_{n(h-k)/2}^{q^{n/2}}(a_{n(h-k)/2}^{q^n} - a_{n(h-k)/2})\right).
\end{equation*} We claim that \begin{equation*} H_c^i(\GG_a, (P^{(d)})^* \Loc_\chi) = H_c^i(\GG_a, P_0^* \Loc_\psi), \end{equation*} where $\psi \colon \F \to \overline \QQ_\ell^\times$ is the restriction of $\chi$ and $P_0$ is the morphism \begin{equation*} P_0 \colon \GG_a \to \GG_a, \qquad x \mapsto x^{q^{n/2}}(x^{q^n} - x). \end{equation*} We now compute the cohomology groups $H_c^i(\GG_a, P_0^* \Loc_\psi)$ in the same way as in Sections 6.5 and 6.6 of \cite{BW14}. We may write $P_0 = f_1 \circ f_2$ where $f_1(x) = x^{q^{n/2}} - x$ and $f_2(x) = x^{q^{n/2}+1}$. Since $f_1$ is a group homomorphism, $f_1^* \Loc_\psi \cong \Loc_{\psi \circ f_1}$. By assumption $\psi$ has trivial $\Gal(\F/\FF_q)$-stabilizer, so $\psi \circ f_1$ is nontrivial. Furthermore, $\psi \circ f_1$ is trivial on $\FF_{q^{n/2}}$. Thus the character $\psi \circ f_1 \colon \F \to \overline \QQ_\ell^\times$ satisfies the hypotheses of Proposition 6.12 of \cite{BW14}, and thus $\Fr_{q^n}$ acts on $H_c^1(\GG_a, P_0^* \Loc_\psi)$ via multiplication by $-q^{n/2}$ and \begin{equation*} \dim H_c^i(\GG_a, P_0^* \Loc_\psi) = q^{n/2} \cdot \delta_{1,i}. \end{equation*} Thus \begin{equation*} \boxed{\dim H_c^i(S^{(d)}, (P^{(d)})^* \Loc_\chi) = q^{n/2} \cdot \delta_{1,i}.} \end{equation*} \subsubsection*{Step 5} We now put together all of the boxed equations. We have \begin{align*} \Hom_{\UnipF}(W_\chi, H_c^i(X_h, \overline \QQ_\ell)) & \cong H_c^i(\beta^{-1}(Y_h), P^* \Loc_\chi) \\ &= H_c^i(S^{(0)}, (P^{(0)})^* \Loc_\chi) \\ &\cong H_c^{i-2}(S_3^{(0)}, (P_3^{(0)})^* \Loc_\chi)(-1) \\ &= H_c^{i-2}(S^{(1)}, (P^{(1)})^* \Loc_\chi)(-1) \\ &\cong H_c^{i-2d}(S^{(d)}, (P^{(d)})^* \Loc_\chi)(-d). \end{align*} Therefore if we are in Case 1, then \begin{equation*} \dim \Hom_{\UnipF}(W_\chi, H_c^i(X_h, \overline \QQ_\ell)) = \delta_{(n-1)(h-k)^+,i}.
\end{equation*} Moreover, the Frobenius $\Fr_{q^n}$ acts on $\Hom_{\UnipF}(W_\chi, H_c^i(X_h, \overline \QQ_\ell))$ via multiplication by the scalar $q^{n(n-1)(h-k)^+/2}$. If we are in Case 2, then \begin{equation*} \dim \Hom_{\UnipF}(W_\chi, H_c^i(X_h, \overline \QQ_\ell)) = q^{n/2} \cdot \delta_{(n-1)(h-k)^+,i}. \end{equation*} Moreover, the Frobenius $\Fr_{q^n}$ acts on $\Hom_{\UnipF}(W_\chi, H_c^i(X_h, \overline \QQ_\ell))$ via multiplication by the scalar $-q^{n(n-1)(h-k)^+/2}$. Finally, observe that if we are in Case 1, then $(n-1)(h-k)^+$ is even and if we are in Case 2, then $(n-1)(h-k)^+$ is odd. This gives us a uniform way to describe the action of $\Fr_{q^n}$ and concludes the proof of Proposition \ref{p:dimHom}. \subsection{Proof of Proposition \ref{p:vregtrace}} \label{s:pftrace} By the Deligne--Lusztig fixed point formula, \begin{equation*} \sum_i (-1)^i \Tr((\zeta, h, g)^* ; H_c^i(X_h, \overline \QQ_\ell)) = \sum_i (-1)^i \Tr((1,h,g)^* ; H_c^i(X_h^\zeta, \overline \QQ_\ell)). \end{equation*} It is easy to calculate $X_h^\zeta$. Indeed, it can be identified with the subvariety of all elements of $\Unipk$ of the form $A_0 \in 1 + \bW_{h-1}(\F)$. Thus $X_h^{\zeta}$ is just a discrete set naturally identified with $H(\F)$ and the left and right actions of $H(\F)$ are given by left and right multiplication. Therefore $H_c^i(X_h^\zeta, \overline \QQ_\ell) = 0$ for $i > 0$, so \begin{equation*} \sum_i (-1)^i \Tr((1,h,g)^* ; H_c^i(X_h^\zeta, \overline \QQ_\ell)) = \Tr((1,h,g)^* ; H_c^0(X_h^\zeta, \overline \QQ_\ell)). \end{equation*} Furthermore, as a $(H(\F) \times H(\F))$-representation, $H_c^0(X_h^\zeta, \overline \QQ_\ell)$ is the pullback of the regular representation of $H(\F)$ along the multiplication map $H(\F) \times H(\F) \to H(\F)$. Thus \begin{equation*} H_c^0(X_h^\zeta, \overline \QQ_\ell) = \bigoplus_{\chi_0 \in \widehat H(\F)} \chi_0 \otimes \chi_0 \end{equation*} as representations of $H(\F) \times H(\F)$.
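We record the intermediate character-sum computation, which is implicit in the argument: each factor of $H(\F) \times H(\F)$ acts on the summand $\chi_0 \otimes \chi_0$ through $\chi_0$, so $(1,h,g)^*$ acts on it by the scalar $\chi_0(h)\chi_0(g)$, and the orthogonality relations for characters of the abelian group $H(\F)$ give \begin{equation*} \sum_{h \in H(\F)} \chi(h)^{-1} \Tr((1,h,g)^* ; H_c^0(X_h^\zeta, \overline \QQ_\ell)) = \sum_{\chi_0 \in \widehat H(\F)} \chi_0(g) \sum_{h \in H(\F)} (\chi_0 \chi^{-1})(h) = \chi(g) \cdot \# H(\F), \end{equation*} since the inner sum vanishes unless $\chi_0 = \chi$.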
Therefore \begin{equation*} \sum_{h \in H(\F)} \chi(h)^{-1} \sum_i (-1)^i \Tr((\zeta, h, g)^* ; H_c^i(X_h, \overline \QQ_\ell)) = \chi(g) \cdot \# H(\F). \end{equation*} This is equivalent to \begin{equation*} \sum_i (-1)^i \Tr((\zeta, 1, g)^* ; H_c^i(X_h, \overline \QQ_\ell)[\chi]) = \chi(g), \end{equation*} and since $H_c^i(X_h, \overline \QQ_\ell)[\chi] = 0$ for $i \neq (n-1)(h-k)^+$ by Proposition \ref{p:dimHom}, the desired result follows. \subsection{Proof of Theorem \ref{t:cohomdesc}} \label{s:pfcohomdesc} This is a corollary of Propositions \ref{p:dimHom} and \ref{p:vregtrace}. We have \begin{equation*} \bigoplus_{\chi \in \cA} H_c^{(n-1)(h-k)^+}(X_h, \overline \QQ_\ell)[\chi] = \bigoplus_{\chi \in \cA} \rho_\chi, \end{equation*} where the summands on the left-hand side are mutually nonisomorphic and the summands on the right-hand side are irreducible. It follows then that each $H_c^{(n-1)(h-k)^+}(X_h, \overline \QQ_\ell)[\chi]$ is irreducible. \section{Division algebras and Jacquet--Langlands transfers} \label{s:divalg} Our goal in this final section is to understand two connections. The first, explained in Section \ref{s:DL}, is to unravel the relationship between Theorem \ref{t:cohomdesc} and the representations of division algebras arising from $p$-adic Deligne--Lusztig constructions. Because the equal characteristics claim of Theorem \ref{t:cohomdesc} proves a conjecture of Boyarchenko (see Conjecture 5.18 of \cite{B12}), we can use Proposition 5.19 of \textit{op.\ cit.} to explicitly describe this relationship. In fact, the definitions in Section \ref{s:definitions} allow us to treat all characteristics simultaneously and extend Boyarchenko's work. The second connection, explained in Section \ref{s:LLC}, is to unravel the relationship between the representations described in Section \ref{s:DL} and the local Langlands and Jacquet--Langlands correspondences.
The main theorem of this section is Theorem \ref{t:divalg}, which says, colloquially, that the correspondence $\theta \mapsto H_\bullet(\widetilde X)[\theta]$ is consistent with the correspondence given by the composition of the local Langlands and Jacquet--Langlands correspondences. \subsection{Deligne--Lusztig constructions for division algebras} \label{s:DL} Throughout this section, $\theta \colon L^\times \to \overline \QQ_\ell^\times$ will be a primitive character of level $h$ and $\chi \colon U_L^1/U_L^h \to \overline \QQ_\ell^\times$ will be the induced homomorphism. Let $\Khat$ be the completion of the maximal unramified extension of $K$ and let $\varphi$ denote the Frobenius automorphism of $\Khat$ (inducing $x \mapsto x^q$ on the residue field). We can write $D \colonequals L \langle \Pi \rangle/(\Pi^n - \pi^k)$, where $L \langle \Pi \rangle$ is the twisted polynomial ring defined by the commutation relation $\Pi \cdot a = \varphi(a) \cdot \Pi$. Write $\cO_D = \cO_L \langle \Pi \rangle / (\Pi^n - \pi^k)$ for the ring of integers of $D$. Define $P_D^r = \Pi^r \cO_D$ and $U_D^r = 1 + P_D^r$. There exists a connected reductive group $\GG$ over $K$ such that $\GG(K)$ is isomorphic to $D^\times$, and a $K$-rational maximal torus $\TT \subset \GG$ such that $\TT(K)$ is isomorphic to $L^\times$. More explicitly, the homomorphism \begin{equation*} F \colon \GL_n(\Khat) \to \GL_n(\Khat), \qquad A \mapsto \varpi^{-1} A^\varphi \varpi, \qquad \text{where $\varpi = \left(\begin{smallmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & 0 & 1 \\ \pi^k & 0 & 0 & 0 & 0 \end{smallmatrix}\right)$} \end{equation*} is a Frobenius relative to a $K$-rational structure whose corresponding algebraic group over $K$ is $\GG$. Let $\widetilde G \colonequals \GG(\Khat) = \GL_n(\Khat)$ and $\widetilde T \colonequals \TT(\Khat)$.
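To fix ideas, consider the smallest case $n = 2$, $k = 1$ (an illustrative special case only): here $D = L\langle \Pi \rangle/(\Pi^2 - \pi)$ with $\Pi \cdot a = \varphi(a) \cdot \Pi$ is the quaternion division algebra over $K$, with Hasse invariant $1/2$. Every element of $D$ is uniquely of the form $a + b\Pi$ with $a, b \in L$, and the embedding \begin{equation*} a + b\Pi \longmapsto \begin{pmatrix} a & b \\ \pi \varphi(b) & \varphi(a) \end{pmatrix} \in M_2(L) \end{equation*} realizes the reduced norm as $\Nrd_{D/K}(a + b\Pi) = a\varphi(a) - \pi b\varphi(b) = \Norm_{L/K}(a) - \pi \Norm_{L/K}(b)$.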
Let $\BB \subset \GG \otimes_K \Khat$ be the Borel subgroup consisting of upper triangular matrices and let $\UU$ be its unipotent radical. Note that $\widetilde T$ consists of all diagonal matrices and $\widetilde U \colonequals \UU(\Khat)$ consists of unipotent upper triangular matrices. Let $\widetilde U^- \subset \GL_n(\Khat)$ denote the subgroup consisting of unipotent lower triangular matrices. The $p$-adic Deligne--Lusztig construction $X$ for $D^\times$ described in \cite{L79} is the quotient \begin{equation*} X \colonequals (\widetilde U \cap F^{-1}(\widetilde U)) \backslash \{A \in \GL_n(\Khat) : F(A)A^{-1} \in \widetilde U\}, \end{equation*} which can be identified with the set \begin{equation*} \widetilde X \colonequals \{A \in \GL_n(\Khat) : F(A)A^{-1} \in \widetilde U \cap F(\widetilde U^-)\}. \end{equation*} Recall from Section \ref{s:Xhdef} that \begin{equation*} \widetilde X = \bigsqcup_{m \in \ZZ} \varprojlim_h \widetilde X_h^{(m)}, \end{equation*} where each $\widetilde X_h^{(m)}$ is a scheme of finite type over $\overline \FF_q$. Following \cite{B12}, Section 4.4, we set \begin{equation*} H_i(\widetilde X, \overline \QQ_\ell) = \bigoplus_{m \in \ZZ} \varinjlim_h H_i(\widetilde X_h^{(m)}, \overline \QQ_\ell), \end{equation*} where $H_i(S, \overline \QQ_\ell) \colonequals H_c^{2d-i}(S, \overline \QQ_\ell(d))$ for any smooth $\overline \FF_q$-scheme $S$ of pure dimension $d$. For each $i \geq 0$, $H_i(\widetilde X, \overline \QQ_\ell)$ inherits commuting smooth actions of $\GG(K) \cong D^\times$ and $\TT(K) \cong L^\times$. Given a smooth character $\theta \colon L^\times \to \overline \QQ_\ell^\times$, we may consider the subspace $H_i(\widetilde X, \overline \QQ_\ell)[\theta] \subset H_i(\widetilde X, \overline \QQ_\ell)$ wherein $L^\times$ acts by $\theta$. \begin{proposition}\label{p:divalg} Let $U_D^{(h)} \colonequals (1 + P_L^h)(1 + P_L^{(h-k)^+}\Pi) \cdots (1 + P_L^{(h-k)^+}\Pi^{n-1})$.
\begin{enumerate}[label=(\alph*)] \item The representation $H_c^{(n-1)(h-k)^+}(X_h, \overline \QQ_\ell)[\chi]$ extends uniquely to a representation $\eta_\theta^\circ$ of the semidirect product $\cR_{h,k,n,q}^\times(\F) \cong \cO_D^\times/U_D^{(h)}$ satisfying $\Tr(\eta_\theta^\circ(\zeta)) = (-1)^{(n-1)(h-k)^+} \theta(\zeta).$ \item The inflation $\widetilde \eta_\theta^\circ$ of $\eta_\theta^\circ$ to $\cO_D^\times$ extends to a representation $\eta_\theta'$ of $\pi^\ZZ \cdot \cO_D^\times$ by setting $\eta_\theta'(\pi) = \theta(\pi)$. Then \begin{equation*} H_{(n-1)(h-k)^+}(\widetilde X, \overline \QQ_\ell)[\theta] \cong \eta_\theta \colonequals \Ind_{\pi^\ZZ \cdot \cO_D^\times}^{D^\times}(\eta_\theta') \end{equation*} and $H_i(\widetilde X, \overline \QQ_\ell)[\theta] = 0$ for $i \neq (n-1)(h-k)^+$. \item $H_{(n-1)(h-k)^+}(\widetilde X, \overline \QQ_\ell)[\theta]$ is an irreducible representation of dimension $n \cdot q^{n(n-1)(h-k)^+/2}$. \end{enumerate} \end{proposition} \begin{proof} This is proved in the equal characteristics case for $k/n = 1/n$ in Section 6.15 of \cite{B12}. The proof generalizes without complications, and we outline the arguments here. The uniqueness in (a) follows from the irreducibility of $H_c^{(n-1)(h-k)^+}(X_h, \overline \QQ_\ell)[\chi]$. The representation $\widetilde \eta_\theta^\circ$ is the tensor product $\theta^\circ \otimes H_c^{(n-1)(h-k)^+}(X_h, \overline \QQ_\ell)[\chi]$ where $\theta^\circ(z,g) = \theta(z)$ for $(z,g) \in \langle \zeta \rangle \ltimes \UnipF = \cR_{h,k,n,q}^\times(\F)$. Finally, the trace identity is a special case of Proposition \ref{p:vregtrace}. Let $\widetilde X_h \colonequals \sqcup_m \widetilde X_h^{(m)}$. The action of $L^\times \times D^\times$ on $\widetilde X$ induces an action of $G \colonequals (L^\times/U_L^h) \times (D^\times/U_D^{(h)})$ on $\widetilde X_h$.
Moreover, $H_*(\widetilde X, \overline \QQ_\ell)[\theta] \subset H_*(\widetilde X_h, \overline \QQ_\ell)$, so it is enough to understand the cohomology of $\widetilde X_h$. By construction, it is easy to see that $\widetilde X_h$ is equal to the $G$-translates of $\iota_h(X_h) \subset \widetilde X_h^{(0)}$. One can define an action of \begin{equation*} \Gamma = \langle (\pi, \pi^{-1}) \rangle \cdot \langle (\zeta, \zeta^{-1}) \rangle \cdot (U_L^1/U_L^h \times U_D^1/U_D^{(h)}) \subset G \end{equation*} on $X_h$ so that $\iota_h$ is $\Gamma$-equivariant. Moreover, the stabilizer of $\iota_h(X_h)$ in $G$ is exactly equal to $\Gamma$. The claim in (b) then follows from an analysis of the $\theta$-eigenspace of $\Ind_\Gamma^G(H_i(X_h, \overline \QQ_\ell)).$ For any $x \in U_L^{h-1}$, we have $\eta_\theta'(x) = \psi(x)$ and \begin{equation*} \eta_\theta'(\Pi \cdot x \cdot \Pi^{-1}) = \eta_\theta'(\varphi(x)) = \psi(x^q). \end{equation*} Since $\theta$ is primitive, it follows that the normalizer of $\eta_\theta'$ in $D^\times$ is equal to $\pi^\ZZ \cdot \cO_D^\times$. Irreducibility then follows by Mackey's criterion. The dimension of $\eta_\theta$ is equal to the product of the index $[D^\times : \pi^\ZZ \cdot \cO_D^\times] = n$ and the dimension of $\eta_\theta'$, so the desired result holds by Theorem \ref{t:irrepdesc}. \end{proof} \begin{remark} Note that if $h \leq k$, then $\cO_D^\times/U_D^{(h)} = \cO_L^\times/U_L^h$, so the character $\theta \colon L^\times \to \overline \QQ_\ell^\times$ can be viewed as a one-dimensional representation of $\pi^\ZZ \cdot \cO_D^\times$. By Proposition \ref{p:divalg}, \begin{flalign*} &&H_0(\widetilde X, \overline \QQ_\ell)[\theta] \cong \Ind_{\pi^\ZZ \cdot \cO_D^\times}^{D^\times}(\theta). &&\Diamond \end{flalign*} \end{remark} \subsection{Local Langlands correspondences} \label{s:LLC} Fix a character $\epsilon$ of $K^\times$ whose kernel is equal to the image of the norm map $\Norm_{L/K} \colon L^\times \to K^\times$.
Then the representation \begin{equation*} \sigma_\theta \colonequals \Ind_{\cW_L}^{\cW_K}(\theta \circ \rec_L) \end{equation*} is a smooth irreducible $n$-dimensional representation of $\cW_K$. Let $\sX$ denote the set of all characters of $L^\times$ that have trivial stabilizer in $\Gal(L/K)$ and let $\cG_K^\epsilon(n)$ denote the set of (isomorphism classes of) smooth irreducible $n$-dimensional representations $\sigma$ of $\cW_K$ that satisfy $\sigma \cong \sigma \otimes (\epsilon \circ \rec_K).$ Let $\xi$ be the character of $L^\times$ determined by $\xi(\pi) = (-1)^{n-1}$ and $\xi|_{\cO_L^\times} = 1$. Then \begin{center} \begin{tikzpicture}[xscale=4,yscale=0.65] \draw (0,0) node(a){$\sX/\Gal(L/K)$} (1,0) node(b){$\cG_K^\epsilon(n)$}; \draw[->] (a.east) to node[above]{LCFT} (b.west); \draw (0,-1) node(a'){$\theta$} (1,-1) node(b'){$\sigma_{\xi\theta}$}; \draw[|->] (a'.east) to (b'.west); \end{tikzpicture} \end{center} is a bijection. This is a twisted version of Lemma 1.1 of \cite{BW13}. (Here, LCFT stands for local class field theory.) Now let $\cA_K^\epsilon(n)$ denote the set of (isomorphism classes of) irreducible supercuspidal representations $\pi$ of $\GL_n(K)$ such that $\pi \cong \pi \otimes (\epsilon \circ \det)$. There exists a canonical bijection \begin{center} \begin{tikzpicture}[xscale=4,yscale=0.65] \draw (0,0) node(a){$\cG_K^\epsilon(n)$} (1,0) node(b){$\cA_K^\epsilon(n)$}; \draw[->] (a.east) to node[above]{\text{LLC}} (b.west); \draw (0,-1) node(a'){$\sigma_{\xi\theta}$} (1,-1) node(b'){$\pi_\theta$}; \draw[|->] (a'.east) to (b'.west); \end{tikzpicture} \end{center} known as the local Langlands correspondence. Finally, let $\cA'{}_{\!\!K}^\epsilon(n)$ denote the set of (isomorphism classes of) irreducible representations $\rho$ of $D^\times$ such that $\rho \cong \rho \otimes (\epsilon \circ \Nrd_{D/K})$.
Then the Jacquet--Langlands correspondence gives a bijection \begin{center} \begin{tikzpicture}[xscale=4,yscale=0.65] \draw (0,0) node(a){$\cA_K^\epsilon(n)$} (1,0) node(b){$\cA'{}_{\!\!K}^\epsilon(n)$}; \draw[->] (a.east) to node[above]{\text{JLC}} (b.west); \draw (0,-1) node(a'){$\pi_\theta$} (1,-1) node(b'){$\rho_\theta$}; \draw[|->] (a'.east) to (b'.west); \end{tikzpicture} \end{center} \begin{remark}\label{r:epsiloninvar} Since $L/K$ is unramified, the restriction of $\epsilon$ to $\OH_K^\times$ is trivial, and thus the composition $\epsilon \circ \Nrd_{D/K}$ is trivial on $L^\times \cdot \OH_D^\times \supset \pi^\ZZ \cdot \OH_D^\times$. Thus by the construction of $\eta_\theta$, we have that $\eta_\theta$ is invariant under twisting by $\epsilon \circ \Nrd_{D/K}$. \hfill $\Diamond$ \end{remark} Our work describes a correspondence between $L^\times$-characters and $D^\times$-representations given by \begin{center} \begin{tikzpicture}[xscale=8] \draw (0,0) node(a){\{primitive characters of $L^\times$\}} (1,0) node(b){\{irreducible representations of $D^\times$\}}; \draw[->] (a.east) to node[above]{\text{$p$-adic DL}} (b.west); \draw (0,-1) node(a'){$\theta$} (1,-1) node(b'){$\eta_\theta \colonequals H_\bullet(\widetilde X, \overline \QQ_\ell)[\theta]$}; \draw[|->] (a'.east) to (b'.west); \end{tikzpicture} \end{center} By Remark \ref{r:epsiloninvar}, we see that $\eta_\theta \in \cA'{}_{\!\!K}^\epsilon(n)$. In Theorem \ref{t:divalg}, we prove that this correspondence matches the composition of the previous three, therefore giving a geometric realization of the Jacquet--Langlands correspondence. \begin{remark} The construction of the local Langlands and Jacquet--Langlands correspondences was already known. See, for example, \cite{H93}. Recent work of Boyarchenko and Weinstein (see \cite{BW13}) gives a partially geometric construction of these correspondences using the representations $H_c^{n-1}(X_2, \overline \QQ_\ell)[\psi]$ of $U_2^{n,q}(\FF_{q^n})$.
Note that in \cite{BW14} and \cite{BW13}, the scheme $X_2$ is denoted by $X$ and the group $U_2^{n,q}(\F)$ is denoted by $U^{n,q}(\F)$. The following theorem shows that Deligne--Lusztig constructions for division algebras give a geometric realization of the Jacquet--Langlands correspondence. \hfill $\Diamond$ \end{remark} \begin{theorem}\label{t:divalg} Let $\theta \colon L^\times \to \overline \QQ_\ell^\times$ be a primitive character of level $h$ and let $\rho_\theta$ be the $D^\times$-representation corresponding to $\theta$ under the local Langlands and Jacquet--Langlands correspondences. Then $H_i(\widetilde X, \overline \QQ_\ell)[\theta] = 0$ if $i \neq (n-1)(h-k)^+$ and \begin{equation*} H_{(n-1)(h-k)^+}(\widetilde X, \overline \QQ_\ell)[\theta] \cong \rho_\theta. \end{equation*} \end{theorem} \begin{proof} By Proposition 1.5(b) of \cite{BW13}, we just need to show that $\eta_\theta \colonequals H_{(n-1)(h-k)^+}(\widetilde X, \overline \QQ_\ell)[\theta]$ satisfies the following two properties: \begin{enumerate}[label=(\roman*)] \item For any character $\epsilon$ of $K^\times$ whose kernel is equal to the image of the norm map $\Norm_{L/K} \colon L^\times \to K^\times$, we have $\eta_\theta \cong \eta_\theta \otimes (\epsilon \circ \Nrd_{D/K})$. \item There exists a constant $c$ such that $\tr \eta_\theta(x) = c \cdot \sum_{\gamma \in \Gal(L/K)} \theta^\gamma(x)$ for each very regular element $x \in \OH_L^\times$. \end{enumerate} Since $L/K$ is unramified, the restriction of $\epsilon$ to $\cO_K^\times$ is trivial, and thus the composition $\epsilon \circ \Nrd_{D/K}$ is trivial on $L^\times \cdot \cO_D^\times \supset \pi^\ZZ \cdot \cO_D^\times$. Thus by construction, $\eta_\theta$ is invariant under twisting by $\epsilon \circ \Nrd_{D/K}$. This proves (i). We now prove (ii).
By the construction of $\eta_\theta$, since $\pi^\ZZ \cdot \OH_D^\times = L^\times \cdot U_D^1$, we have \begin{equation*} \tr \eta_\theta(x) = \sum_{\substack{g \in D^\times/L^\times \cdot U_D^1 \\ gxg^{-1} \in L^\times \cdot U_D^1}} \tr \eta_\theta'(gxg^{-1}). \end{equation*} Now let $x \in \OH_L^\times$ be very regular. By Proposition \ref{p:vregtrace}, $\eta_\theta^\circ(x) = (-1)^{(n-1)(h-k)^+} \theta(x)$. By Lemma 5.1(b) of \cite{BW13}, if $g \in D^\times$ is such that $gxg^{-1} \in L^\times \cdot U_D^1$, then $g \in N_{D^\times}(L^\times) \cdot U_D^1$, where $N_{D^\times}(L^\times)$ is the normalizer of $L^\times$ in $D^\times$. Therefore \begin{align*} \tr \eta_\theta(x) &= \sum_{g \in N_{D^\times}(L^\times) \cdot U_D^1/L^\times \cdot U_D^1} \tr \eta_\theta'(gxg^{-1}) = \sum_g \tr(\eta_\theta^\circ(gxg^{-1})) \\ &= \sum_g (-1)^{(n-1)(h-k)^+} \theta(gxg^{-1}) = (-1)^{(n-1)(h-k)^+} \cdot \sum_{\gamma \in \Gal(L/K)} \theta^\gamma(x). \qedhere \end{align*} \end{proof} \begin{corollary}\label{c:JL} Let $D$ and $D'$ be division algebras of rank $n$ and let $X$ and $X'$ be their corresponding Deligne--Lusztig constructions. For any primitive character $\theta \colon L^\times \to \overline \QQ_\ell^\times$ of level $h$, the Jacquet--Langlands transfer of $H_{(n-1)(h-k)^+}(X)[\theta]$ is isomorphic to $H_{(n-1)(h-k')^+}(X')[\theta]$, where $k/n$ and $k'/n$ are the Hasse invariants of $D$ and $D'$. \end{corollary}
\section{Introduction} \label{sec1} We know that there are three generations of fermions in nature. Each generation carries the same quantum numbers, and only their masses differ from one another. In the Standard Model (SM), transitions among the generations occur only through weak boson exchange, and flavor-changing processes via Flavor Changing Neutral Currents (FCNCs) are strongly suppressed by the Glashow-Iliopoulos-Maiani (GIM) mechanism \cite{Glashow:1970gm}. This SM picture successfully describes the experimental results, but we may wonder why such a flavor structure exists in nature. We expect that physics Beyond the Standard Model (BSM) exists beyond our current experimental reach and reveals the origin of the three generations. One promising class of BSMs is flavor symmetric models. In the SM, flavor symmetry is explicitly broken by the Yukawa couplings that generate the fermion mass matrices. Without these couplings, we would find an $SU(3)$ symmetry in each sector of left-handed (right-handed) up-type quarks, down-type quarks, and leptons. In flavor symmetric models, this $SU(3)$ symmetry or a subgroup of $SU(3)$ is respected in the Lagrangian, and extra scalar bosons charged under the flavor symmetry are introduced. For instance, additional $SU(2)_L$-doublet Higgs fields are introduced, and the scalars and fermions are charged under the flavor symmetry so that the Yukawa couplings can be written down. The flavor-charged Higgs fields develop nonzero vacuum expectation values (VEVs), and break not only the electroweak (EW) symmetry but also the flavor symmetry. The flavor structure of the SM is then effectively generated at the low-energy scale. This kind of BSM is attractive and well motivated, and many types of flavor symmetric models have been proposed so far \cite{Altarelli,Ishimori,S4}.
In particular, many models are motivated by the large mixing in the neutrino sector, because the experimental results may imply the so-called tri-bimaximal mixing \cite{TB}, which can easily be accommodated in BSMs with a non-Abelian discrete flavor symmetry such as $A_4$ \cite{A4,A4-3,flavor2}, $S_4$ \cite{S4-0}, $\Delta (27)$ \cite{delta27-1}, {\it etc}. The recent result of nonzero $\theta_{13}$ \cite{theta13-1,theta13-2,theta13-3,theta13-4,theta13-5} may require some small modifications of those models, but many kinds of flavor symmetric models are expected to remain consistent with the experimental results \cite{A4-2,A4-4,Hamada:2014xha,S4,S4-3,delta27,flavor-higher,theta13-model}. The next question in this approach to the flavor structure is then how to test the flavor symmetry in experiments. One hint to clarify which kind of symmetry lies behind the flavor structure can be obtained by considering the origin of the remnant symmetry of the fermion mass matrices in the SM. As we discuss in Sec. \ref{sec2}, there is a symmetry of the lepton mass matrices in the SM, which is explicitly broken by the weak interaction involving the $W$ boson. If a flavor symmetry exists behind the SM, this remnant symmetry might be a fragment of the flavor symmetry broken at high energy. In fact, one can find many works on flavor models based on the assumption that the symmetry of the fermion mass matrices is a subgroup of the flavor symmetry \cite{King,Altarelli,Ishimori,S4,S4-3}. In this paper, we investigate FCNCs involving charged leptons in flavor symmetric models, where the symmetry of the charged lepton mass matrix is a subgroup of the flavor symmetry and extra $SU(2)_L$-doublet Higgs fields charged under the additional symmetry are introduced.
Once we assume that this symmetry originates from the flavor symmetry spontaneously broken at high energy, we find that the FCNCs involving the extra Higgs fields are predicted by the remnant symmetry. The remnant symmetry need not be respected in the full Lagrangian, but it can still control the FCNCs well, as long as the breaking terms in the Higgs potential are sufficiently small. In Sec. \ref{sec2}, we discuss the remnant symmetry and our setup. Then we investigate the FCNCs involving neutral scalars and charged leptons, and discuss the Higgs potential, in Sec. \ref{section3}. We see that the partially respected remnant symmetry of the lepton mass matrix controls the FCNCs well, and we study our signals and the current experimental constraints from flavor physics in Sec. \ref{section4}. On the other hand, it is also a crucial issue to understand how to obtain nonzero $\theta_{13}$ in flavor symmetric models. As mentioned above, nonzero $\theta_{13}$ may require some modifications of conventional setups, because simple scenarios tend to predict vanishing $\theta_{13}$. We study the correlation between $\theta_{13}$, especially as generated by the mixing angles of the charged lepton sector, and the FCNCs in Sec. \ref{section5}. Section \ref{section6} is devoted to the summary. In Appendix A, we introduce the $A_4$ model as a concrete example. \section{Generic Argument about FCNCs in flavor models} \label{sec2} In the SM, the fermions obtain masses from the nonzero VEV of a Higgs field and the Yukawa couplings. The Yukawa couplings must be chosen to realize the large mass hierarchies and the mixing in the quark and lepton sectors, so that any flavor symmetry rotating the flavors is explicitly broken by these couplings in the SM. Furthermore, the charged currents involving the $W$ boson change flavor, so that it would be difficult to find even a very simple flavor symmetry, such as $Z_2$ or $Z_3$, in the full Lagrangian of the SM.
Now, let us focus on the mass matrices of the leptons and look for a symmetry that is respected by each mass matrix separately. For instance, if we look only at the mass matrices of the charged leptons $(M_l)$ and the neutrinos $(M_N)$, we can find a flavor symmetry as \beq \label{remnant} M_l M^\dagger_l =T_L M_l M^\dagger_l T^\dagger_L, ~ M^\dagger_l M_l =T_R M^\dagger_l M_l T^\dagger_R, ~ M_N=S^T M_N S, \eeq assuming neutrinos are Majorana particles (see Ref. \cite{S4}, for instance). When $M_l$ and $M_N$ are diagonal, $T_L$, $T_R$, and $S$ can be described as \beq T_L=\begin{pmatrix} 1 & 0 & 0 \\ 0 & \eta_L & 0 \\ 0 & 0 & \eta_L^* \end{pmatrix},~ T_R=\begin{pmatrix} 1 & 0 & 0 \\ 0 & \eta_R & 0 \\ 0 & 0 & \eta_R^* \end{pmatrix},~ S=\begin{pmatrix} (-1)^p & 0 & 0 \\ 0 & (-1)^q & 0 \\ 0 & 0 & (-1)^{p+q} \end{pmatrix}, \eeq where $\eta_L$ and $\eta_R$ are complex numbers satisfying $\eta_L \eta_L^*=\eta_R \eta_R^*=1$, and $p, \, q$ are integers. $T_L$, $T_R$, and $S$ are not conserved in the full Lagrangian. In fact, they are explicitly broken by the gauge interaction with the $W$ boson. However, they may give a hint about the mystery of the flavor structure of the SM. As discussed in Refs. \cite{King,Altarelli,Ishimori,S4,S4-3}, we can find the remnant symmetries $T_L$, $T_R$, and $S$ in flavor models which naturally explain the realistic mass matrices, and the remnant symmetries can be interpreted as subgroups of the original flavor symmetry. Below, we discuss this kind of flavor model with extra $SU(2)_L$-doublet Higgs fields charged under the flavor symmetry, and consider the scenario in which the simple remnant symmetry appears after the symmetry breaking. \subsection{Remnant symmetry in flavor symmetric models} \label{sec2-1} Let us consider flavor symmetric models with extra Higgs doublets. The extra symmetry may be a non-Abelian discrete symmetry, and the extra scalars may belong to non-trivial singlet, doublet, or triplet representations.
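As a concrete illustration, the invariance relations of Eq. (\ref{remnant}) can be checked numerically for diagonal mass matrices. The following is a minimal sketch: the charged lepton masses are PDG central values in GeV, while the Majorana masses, the $Z_3$ phase $\eta = e^{2\pi i/3}$, and the choice $p=0$, $q=1$ are purely illustrative and not part of any fit.

```python
# Numerical sanity check of the remnant-symmetry relations in Eq. (remnant):
# a diagonal charged-lepton mass matrix is invariant under T_L and T_R, and a
# diagonal Majorana mass matrix satisfies M_N = S^T M_N S.
import numpy as np

eta = np.exp(2j * np.pi / 3)              # example phase with |eta| = 1 (Z_3 case)
T_L = np.diag([1, eta, eta.conj()])
T_R = np.diag([1, eta, eta.conj()])
S   = np.diag([1, -1, -1])                # p = 0, q = 1 in the notation of the text

M_l = np.diag([0.000511, 0.1057, 1.777])  # GeV: m_e, m_mu, m_tau (diagonal basis)
M_N = np.diag([0.01, 0.05, 0.06])         # illustrative Majorana masses

H = M_l @ M_l.conj().T
assert np.allclose(H, T_L @ H @ T_L.conj().T)     # M_l M_l^dag = T_L M_l M_l^dag T_L^dag
assert np.allclose(M_l.conj().T @ M_l,
                   T_R @ M_l.conj().T @ M_l @ T_R.conj().T)
assert np.allclose(M_N, S.T @ M_N @ S)            # M_N = S^T M_N S
print("remnant-symmetry relations hold for diagonal mass matrices")
```

Any diagonal phase matrix leaves diagonal $M_l M_l^\dagger$ invariant, which is why the remnant generators are only fixed up to the phases $\eta_{L,R}$ and the signs $(-1)^{p,q}$.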
Let us focus on the Yukawa couplings of the charged lepton sector in flavor symmetric models. In general, the couplings for the fermion masses can be described as \beq {\cal L}=-\overline{L}_i M_l(\phi)^{ij} E_{R \, j}+h.c.. \eeq Here $\phi$ is a scalar charged under the EW symmetry and the flavor symmetry, and we simply assume that $M_l$ depends only on $\phi$. Then $M_l$ satisfies the following relation, according to the flavor symmetry $({\cal G})$ with generators $(g)$: \beq M_l( \phi)=g_L M_l( g^\dagger_{\phi} \phi) g_R. \eeq $g_{ L, \, R, \, \phi}$ are defined corresponding to the representations of $L_i$, $E_{R \, i}$, and $\phi$ under ${\cal G}$. When $\phi$ develops a nonzero VEV, the EW and flavor symmetries are broken and the mass matrix for the charged leptons is generated. Let us simply assume that a remnant symmetry of the flavor symmetry (${\cal T}$), whose generator is $T \subset g$, still holds in the mass matrix as follows: \begin{eqnarray} \label{remnant2-1} M_l (\langle \phi \rangle ) &=&T_L M_l (T^\dagger_{\phi}\langle \phi \rangle )T^\dagger_R=T_L M_l (\langle \phi \rangle )T^\dagger_R, \\ T^\dagger_{\phi}\langle \phi \rangle &=&T_{\phi}\langle \phi \rangle=\langle \phi \rangle. \label{remnant2} \end{eqnarray} Let us consider the case that $L_i$ is a triplet representation in the diagonal basis of $T_L$. Then $T_L$ for $L_i$ would be \beq\label{T} T_L=\begin{pmatrix} 1 & 0 & 0 \\ 0 & \eta_L & 0 \\ 0 & 0 & \eta_L^* \end{pmatrix}. \eeq If $\eta_L$ is not $\pm 1$, we find that $L_i$ in the diagonal basis of $T_L$ is identical to the fields ($l_i$) in the mass basis, according to the relation of Eq. (\ref{remnant}). In this paper, we only focus on the case with $\eta_L \neq \pm 1$. Moreover, we especially discuss flavor models with a flavor-triplet Higgs doublet $(\phi \equiv H_i)$, so that the VEV alignment is given by Eqs.
(\ref{remnant2}) and (\ref{T}) with $T_L=T_\phi$: \beq (\langle \phi_1 \rangle, \, \langle \phi_2 \rangle, \, \langle \phi_3 \rangle) \propto (1, \, 0, \, 0). \eeq The orthogonal directions correspond to the mass basis of the scalars around the VEV, and they may also respect the remnant symmetry ${\cal T}$. Furthermore, the mass basis of $E_{R \, i}$ is also fixed by $T_R$, as we discuss below, so that the FCNCs involving the Higgs fields can be discussed qualitatively in this kind of scenario, without specifying the original symmetry ${\cal G}$. \subsection{Setup} Below, we focus on flavor models with a flavor-triplet Higgs doublet $H_i$. The Yukawa coupling for the charged leptons is given by \beq {\cal L}=-\overline{L}_i \Hat{Y}^k_{ij} H_j E_R^k+h.c.. \eeq The textures of the matrices $\Hat{Y}^k_{ij}$ are fixed by ${\cal G}$, and this type of setup can be found in Refs. \cite{A4,S4,T7,delta27,flavor}. \footnote{$SU(2)_L$ singlets charged under the flavor symmetry are introduced, allowing higher-dimensional operators, in Refs. \cite{A4-3,flavor-higher}. Such models are not considered in this paper, but they could also be related to our studies.} The assumptions of our setup are as follows: \begin{itemize} \item $L_i$ and $H_i$ are triplet representations of ${\cal G}$, \item $\langle H_i \rangle$ breaks ${\cal G}$ to ${\cal T}$, \item $E_{R \, i}$ is a non-trivial singlet of ${\cal T}$. \end{itemize} ${\cal T}$ would not be conserved in the full Lagrangian but only partially respected, i.e., in the charged lepton Yukawa couplings. As discussed in subsection \ref{sec2-1}, the charged lepton mass matrix only respects the symmetry as in Eq. (\ref{remnant2-1}).
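The vacuum alignment condition $T_\phi \langle \phi \rangle = \langle \phi \rangle$ of Eq. (\ref{remnant2}) can be made explicit with a short sketch; the $Z_3$ phase below is an illustrative choice, and the comparison direction $(1,1,1)$ is simply a generic misaligned VEV, not a prediction of the model.

```python
# Minimal check of the alignment condition T_phi <phi> = <phi>:
# the direction (1, 0, 0) is invariant under T_phi = diag(1, eta, eta^*),
# while a generic direction such as (1, 1, 1) is not.
import numpy as np

eta = np.exp(2j * np.pi / 3)            # Z_3 phase, eta != +-1
T_phi = np.diag([1, eta, eta.conj()])

vev_aligned = np.array([1.0, 0.0, 0.0])
vev_generic = np.array([1.0, 1.0, 1.0])

assert np.allclose(T_phi @ vev_aligned, vev_aligned)       # respects T
assert not np.allclose(T_phi @ vev_generic, vev_generic)   # breaks T
print("only the (1,0,0) alignment preserves the residual symmetry")
```

This is why the setup requires $(\langle H_1 \rangle, \langle H_2 \rangle, \langle H_3 \rangle) \propto (1,0,0)$: any VEV in the second or third component would pick up the phase $\eta$ or $\eta^*$ and break ${\cal T}$.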
It is well known that this situation successfully realizes realistic mass matrices consistent with the tri-bimaximal mixing structure in flavor models with non-Abelian discrete symmetry: for instance, $A_4$ \cite{A4,A4-3}, $S_4$ \cite{S4,S4-0, A4-2,S4-2,S4-3}, $A_5$ \cite{A5}, $T_7$ \cite{T7}, $\Delta(27)$ \cite{delta27}, and $\Delta(6n^2)$ \cite{delta6n2}. Note that $E_{R \, i}$ may belong to a triplet representation or non-trivial singlets of ${\cal G}$ before the symmetry breaking, but we do not specify it. \subsection{Mass basis of charged leptons} $L_i$ and $H_i$ are triplet representations of ${\cal G}$, so that they are also triplets of ${\cal T}$. Let us denote by $l_i$ the fields $L_i$ in the diagonal basis of $T_L$. Then the $l_i$ are in the mass basis, as we discussed above. Let us denote $E_{R \, i}$ by $e_{R \, i}$ in this basis. Then the $e_{R \, i}$ are also in the mass basis, and they transform as $(e_{R \, 1}, \, e_{R \, 2}, \, e_{R \, 3}) \to (e_{R \, 1}, \, \eta_L e_{R \, 2}, \, \eta_L^* e_{R \, 3})$, because of Eq. (\ref{remnant2-1}). Eventually, the textures of $\Hat Y^k_{ij}$ are almost completely fixed by $T_L$ in Eq. (\ref{T}): \beq (\Hat Y^1_{ij}) = \frac{\sqrt 2}{v \cos \beta}\begin{pmatrix} m_1 & 0 & 0 \\ 0 & b_1 & 0 \\ 0 & 0 & c_1 \end{pmatrix}, \, (\Hat Y^2_{ij}) =\frac{\sqrt 2}{v \cos \beta} \begin{pmatrix}0 & 0 & c_2 \\ m_2 & 0 & 0 \\ 0 & b_2 & 0 \end{pmatrix}, \, (\Hat Y^3_{ij}) = \frac{\sqrt 2}{v \cos \beta} \begin{pmatrix} 0 & b_3 & 0 \\ 0 & 0 & c_3 \\ m_3 & 0 & 0 \end{pmatrix}, \eeq where $\langle H_1 \rangle=v\cos\beta/\sqrt{2}$; see Eq. (\ref{eq:betaisdefinedhere}). Nonzero $b_2$ and $c_3$ imply ${\cal T}=Z_3$.\footnote{In our analysis, we assume the relation of Eq. (\ref{A4-relation}), so ${\cal T}$ is set to $Z_3$.
} If $\Hat Y^k_{ij}$ is given by $\Hat Y^k_{ij}=y^k S^k_{ij}$, where $S^k_{ij}$ is defined by the multiplication rule of ${\cal G}$ and $y^k$ are dimensionless couplings, the elements of $\Hat Y^k_{ij}$ can be estimated by substituting \beq \label{A4-relation} |b_i|=|c_i|=m_i, \eeq where $m_i$ are the charged lepton masses. In this case, the mass matrix for the charged leptons ($(M_l)^k_i$) is given by \beq (M_l)^k_i=\frac{v \cos \beta}{\sqrt 2} \Hat Y^k_{i 1}. \eeq Below, we discuss flavor physics assuming the relation in Eq. (\ref{A4-relation}). \subsection{Mass basis of scalars} After the EW and flavor symmetry breaking, we find several scalars: CP-even, CP-odd, and charged scalars. In addition to the flavor-triplet $H_i$, we introduce one flavor-singlet Higgs field, $H_q$, in order to realize realistic mass matrices for the quarks. We may also need other flavor-charged scalars, $\Phi$, to generate the Majorana mass matrix for the neutrino mixing and masses. $\Phi$ may break the subgroup ${\cal T}$, and mixing between $H_i$ and $\Phi$ may be allowed in the Lagrangian. The mixing term may spoil the vacuum alignment discussed in Eq. (\ref{remnant2}), because the VEV of $\Phi$ corresponds to a ${\cal T}$-breaking term. We give some discussion of this mixing below; first, let us study $H_i$ and $H_q$. Let us decompose the scalars as follows: \beq \label{eq:betaisdefinedhere} H_q= \begin{pmatrix} H^{+}_q \\ \frac{1}{\sqrt 2} (v \sin \beta+H^0_q + i \chi_q) \end{pmatrix}, ~ H_1= \begin{pmatrix} H^{+}_1 \\ \frac{1}{\sqrt 2} (v\cos \beta +H^0_1 + i \chi_1) \end{pmatrix}, \eeq and \beq H_2=\begin{pmatrix} H^{+}_{e} \\ \phi_{e} \end{pmatrix},~H_3=\begin{pmatrix} H^{+}_{\mu} \\ \phi_{\mu} \end{pmatrix}.
\eeq $H_q$ and $H_1$ generally mix with each other because they both develop nonzero VEVs: \beq \begin{pmatrix} H_q^+ \\ H_1^+ \end{pmatrix} = \begin{pmatrix} \cos \beta \\ \sin \beta \end{pmatrix}G^+ + \begin{pmatrix} -\sin \beta \\ \cos \beta \end{pmatrix}H_S^+, \label{mixinghp} \eeq \beq \begin{pmatrix} \chi_q \\ \chi_1 \end{pmatrix} = \begin{pmatrix} \cos \beta \\ \sin \beta \end{pmatrix}G^0 + \begin{pmatrix} -\sin \beta \\ \cos \beta \end{pmatrix}A_S, \label{mixinga0} \eeq \beq \begin{pmatrix} H^0_q \\ H^0_1 \end{pmatrix} = \begin{pmatrix} \cos \alpha \\ \sin \alpha \end{pmatrix}H^0_{S \, 1} + \begin{pmatrix} -\sin \alpha \\ \cos \alpha \end{pmatrix}H^0_{S \, 2}. \label{mixingh0} \eeq $G^0$ and $G^+$ are the Goldstone bosons eaten by the $Z$ and $W^+$ bosons. If ${\cal T}$ is conserved in the scalar mass matrices, $H_S^+$, $A_S$, $H^0_{S \, 1}$, and $H^0_{S \,2}$ are in the mass basis, and they do not mix with $H_2$ and $H_3$ because of ${\cal T}$-charge conservation. We will give some discussion of this mixing in Sec. \ref{section3}. $\alpha$ is the mixing angle between the two CP-even scalars, and it is fixed by the Higgs potential. If we build the Higgs potential to reproduce the SM-like Higgs mass and signal strengths, $\alpha$ should be identical to $\beta$, and $H^0_{S \, 1}$ is interpreted as the SM Higgs. On the other hand, $H^+_e$, $H^+_\mu$, $\phi_e$, and $\phi_\mu$ are complex scalars that carry ${\cal T}$ charges: $H_2 \to \eta H_2$ and $H_3 \to \eta^* H_3$. In general, $H_2$ and $H_3$ would mix with each other in the presence of the nonzero VEV $\langle \Phi \rangle$, because $\langle \Phi \rangle$ breaks ${\cal T}$ spontaneously. We discuss this effect on the observables in flavor physics later. \subsection{Yukawa couplings} \label{sec2-E} Now we define the ${\cal T}$-conserving Yukawa couplings involving the scalars.
Based on the above argument, we find the following Yukawa couplings which induce flavor violation: \beq {\cal L_T}=-Y^{ij}_{e} \phi_e \overline{l}_i e_{R \, j} -Y^{ij}_{\mu} \phi_\mu \overline{l}_i e_{R \, j} -(V^\dagger)_{ik} Y^{kj}_{e} H^+_e \overline{\nu_L}_i e_{R \, j} -(V^\dagger)_{ik}Y^{kj}_{\mu} H^+_\mu \overline{\nu_L}_i e_{R \, j}+h.c., \eeq where $V$ is the PMNS matrix. $Y^{ij}_{e}$ and $Y^{ij}_{\mu}$ are defined as \beq \label{eq:Yukawa} (Y^{ij}_{e})=\Hat Y^j_{i2}=\frac{\sqrt 2}{v \cos \beta} \begin{pmatrix} 0 & 0 & b_3 \\ b_1 & 0 & 0 \\ 0 & b_2 & 0 \end{pmatrix},~ (Y^{ij}_{\mu})=\Hat Y^j_{i3}=\frac{\sqrt 2}{v \cos \beta} \begin{pmatrix} 0 & c_2 & 0 \\ 0 & 0 & c_3 \\ c_1 & 0 & 0 \end{pmatrix}. \eeq As mentioned above, the complex scalars may not be mass eigenstates, because of ${\cal T}$-breaking effects in the Higgs potential. In Sec. \ref{section4}, we investigate the FCNC contributions to flavor physics in the ${\cal T}$-conserving limit, and then discuss the corrections to the flavor observables from the ${\cal T}$-breaking terms in the Higgs potential. In fact, the breaking effect is strongly constrained by the $\mu \to e \gamma$ process. On the other hand, the neutral and charged scalars from $H_q$ and $H_1$ have Yukawa couplings of the same form as in the model called the type-X 2HDM in \cite{Aoki:2009ha}, or the lepton-specific 2HDM in \cite{Branco:2011iw}: \begin{eqnarray} {\cal L}_{\rm 2HDM}&=&-\frac{m_i \sin \alpha}{v \cos \beta} H^0_{S \, 1} \overline{l}_i e_{R \, i} -\frac{m_i \cos \alpha}{v \cos \beta} H^0_{S \, 2} \overline{l}_i e_{R \, i} +h.c. \nonumber \\ &-&i\frac{m_i}{v \tan \beta} A_S \overline{l}_i e_{R \, i} -V_{ij}\frac{m_i}{v \tan \beta} H^+_S \overline{\nu_L}_i e_{R \, j} +h.c.. \label{2HDM} \end{eqnarray} The phenomenology of lepton-specific 2HDMs has been well studied in Refs. \cite{Aoki:2009ha, Branco:2011iw}.
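The statement that the residual symmetry fixes the textures in Eq. (\ref{eq:Yukawa}) can be checked numerically. The sketch below assumes the $Z_3$ case $\eta = \eta_L = e^{2\pi i/3}$ with $T_R = T_L$ (as implied by the transformation of $e_{R\,i}$ above), takes $|b_i| = |c_i| = m_i$ as in Eq. (\ref{A4-relation}), and uses an illustrative value of $v\cos\beta$; none of these numbers are fits.

```python
# Sketch: the Z_3 charges l -> T_L l, e_R -> T_R e_R, phi_e -> eta phi_e,
# phi_mu -> eta* phi_mu leave exactly the textures of Eq. (eq:Yukawa) invariant:
# invariance of the Yukawa term requires eta_phi * T_L^dag Y T_R = Y.
import numpy as np

eta = np.exp(2j * np.pi / 3)
T_L = np.diag([1, eta, eta.conj()])
T_R = np.diag([1, eta, eta.conj()])

m_e, m_mu, m_tau = 0.000511, 0.1057, 1.777   # GeV
vcb = 10.0                                    # v cos(beta) in GeV, illustrative
b = c = np.array([m_e, m_mu, m_tau])          # the relation |b_i| = |c_i| = m_i

pref = np.sqrt(2) / vcb
Y_e  = pref * np.array([[0, 0, b[2]], [b[0], 0, 0], [0, b[1], 0]])
Y_mu = pref * np.array([[0, c[1], 0], [0, 0, c[2]], [c[0], 0, 0]])

assert np.allclose(eta        * T_L.conj().T @ Y_e  @ T_R, Y_e)
assert np.allclose(eta.conj() * T_L.conj().T @ Y_mu @ T_R, Y_mu)

# Conversely, an entry outside the texture (here Y_e[0,0] != 0) breaks invariance:
Y_bad = Y_e.copy(); Y_bad[0, 0] = pref * m_e
assert not np.allclose(eta * T_L.conj().T @ Y_bad @ T_R, Y_bad)
print("the Z_3 charge assignments fix the textures of Y_e and Y_mu")
```

Each nonzero entry sits where the product of the three $Z_3$ phases is unity, so the off-diagonal (flavor-violating) couplings are not an accident of a particular model but are enforced entry by entry by ${\cal T}$.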
\section{Study of the Higgs potential} \label{section3} Before studying the phenomenological aspects, let us discuss the Higgs potential in flavor symmetric models. In our setup, the flavor-triplet $H_i$ develops a nonzero VEV in the direction $(\langle H_1 \rangle, \, \langle H_2 \rangle, \, \langle H_3 \rangle) \propto (1, \, 0, \, 0)$, and ${\cal T}$ is not broken. $\Phi_i$ is an $SU(2)_L$ singlet and breaks ${\cal G}$ to a subgroup ${\cal S}$ of ${\cal G}$. In general, ${\cal S}$ and ${\cal T}$ do not commute, so that $\langle \Phi_i \rangle$ breaks ${\cal T}$, and how to realize the vacuum alignment is one of the issues of our models. For instance, mechanisms to achieve a rigid vacuum alignment have been proposed \cite{Kobayashi:2008ih}. In order to realize a vacuum alignment that respects ${\cal T}$, the mixing terms between $H_i$ and $\Phi_i$, in particular, should be controlled. In general, the Higgs potential is written as \beq \label{potential} V= V_H(H_q, \, H_i) + V_{\Phi} (\Phi_i, \, H_q) + \Delta V (\Phi_i, \, H_i, \, H_q), \eeq where $\Delta V$ contains only the mixing terms between $\Phi_i$ and $H_i$, such as $|H^i|^2 |\Phi_i|^2$ and $ H_i^\dagger H_q \Phi_i$. If $\Delta V$ is absent, the vacuum alignments of $\langle H_i \rangle$ and $\langle \Phi_i \rangle$ are fixed independently by $V_H$ and $V_{\Phi}$. In this case, the mass matrices of the scalars originating from $H_i$ and $H_q$ respect the ${\cal T}$ symmetry, while the mass matrices from $H_q$ and $\Phi_i$ respect the ${\cal S}$ symmetry. This means that the scalar mass eigenstates are determined only by the remnant symmetries in each sector, and the flavor-violating Yukawa couplings in the scalar mass basis are given by Eq. (\ref{eq:Yukawa}). We consider an example to illustrate our argument. In the absence of $\Delta V$, we can write down the mass matrix for the CP-even scalars after the spontaneous symmetry breaking.
On the basis $(H_q^0, H_1^0, H_e^0, H_\mu^0, \Phi_1^0, \Phi_2^0, \Phi_3^0)^T$, where $\phi_e=\frac{1}{\sqrt{2}}(H^0_e+iA_e)$, $\phi_\mu=\frac{1}{\sqrt{2}}(H^0_\mu+iA_\mu)$ and $\Phi_j=\frac{1}{\sqrt{2}}(v_{\Phi}+\Phi_j^0+iA_j^\Phi)$ are defined, the mass matrix of the CP-even scalars is given in the following form: \beq \left( \begin{array}{c|cc} M^2_q & m^{2T}_{H} & m^{2T}_{\Phi} \\\hline m^2_{H} & M^2_{H} & 0 \\ m^2_{\Phi} & 0 & M^2_{\Phi} \end{array} \right). \label{eq:V''} \eeq Here, $m_{H}^2$ and $m_{\Phi}^2$ are $3$-vectors, and $M_{H}^2$ and $M_{\Phi}^2$ are $3\times3$ matrices. $M^2_q $ is the mass term for $H^0_q$. The forms of the submatrices $M_H^2$, $M_{\Phi}^2$ and the sub-vectors $m^2_{H}$, $m^2_{\Phi}$ are fixed by the ${\cal T}$-conserving and ${\cal S}$-conserving conditions, \begin{eqnarray} \label{eq:condition} &&T_{ik}^H M^2_{Hkl} T_{lj}^{H\dag}=M^2_{Hij},~~S_{ik}^\Phi M^2_{\Phi kl}S_{lj}^{\Phi\dag}=M^2_{\Phi ij}, \\\nonumber &&T_{ik}^H m^2_{Hk}=m^2_{Hi},~~S_{ik}^\Phi m^2_{\Phi k}=m^2_{\Phi i}, \end{eqnarray} where $T^H$ and $S^\Phi$ are the generators of the subgroups ${\cal T}$ and ${\cal S}$ in the triplet representation. Similarly, the mass matrices of the CP-odd and charged scalars are also determined only by the remnant flavor symmetry in each sector. We show the simplest example, with ${\cal G}=A_4$, ${\cal T}=Z_3$, and ${\cal S}=Z_2$, in Appendix A. In particular, in the basis in which the $\mathcal{T}$ generator is diagonal, the subgroup $\mathcal{T}$ restricts the mass terms of the scalar bosons ($H_q^0, H_1^0, H_e^0, H_\mu^0$) which interact with the SM particles as follows: \begin{eqnarray} \left(\begin{array}{cccc} M^2_q & m^2_{H \, 1} & 0 & 0 \\ m^2_{H \, 1} & M^2_{H \, 11} & 0 & 0 \\ 0 & 0 & M^2_{H \, 22} & 0 \\ 0 & 0 & 0 & M^2_{H \, 33} \end{array}\right). \end{eqnarray} ${\cal T}$ completely determines the mass matrix for $H_q$ and $H_i$, because the $\mathcal{G}$-singlet boson $H_q$ also respects the residual symmetry $\mathcal{T}$.
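The block structure quoted above can be illustrated by projecting a generic mass matrix onto its ${\cal T}$-invariant part, which amounts to averaging over the group elements. The sketch below takes the $Z_3$ case with $T^H = \mathrm{diag}(1, \eta, \eta^*)$ and random Hermitian input; it is only a demonstration of Eq. (\ref{eq:condition}), not a model computation.

```python
# Sketch: averaging over Z_3 = {1, T, T^2} projects a generic Hermitian matrix
# onto the solutions of T M^2 T^dag = M^2 and T m^2 = m^2 of Eq. (eq:condition).
# Only the diagonal of M^2_H and the first component of m^2_H (the H_q-H_1
# mixing m^2_{H1}) survive, reproducing the quoted block structure.
import numpy as np

eta = np.exp(2j * np.pi / 3)
T = np.diag([1, eta, eta.conj()])
Tk = [np.linalg.matrix_power(T, k) for k in range(3)]

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
M2 = A + A.conj().T                                 # generic Hermitian "mass matrix"
m2 = rng.normal(size=3) + 1j * rng.normal(size=3)   # generic mixing vector with H_q

M2_inv = sum(U @ M2 @ U.conj().T for U in Tk) / 3   # group-averaged (invariant) part
m2_inv = sum(U @ m2 for U in Tk) / 3

assert np.allclose(M2_inv, np.diag(np.diag(M2_inv)))  # only diagonal entries survive
assert np.allclose(m2_inv[1:], 0)                     # H_q mixes only with H_1
print("Z_3-invariant mass terms are diagonal; only m^2_{H1} survives")
```

The off-diagonal entries vanish because each is multiplied by the average of a nontrivial cube root of unity over the group, $\tfrac{1}{3}(1 + \eta + \eta^2) = 0$, which is the numerical content of the ${\cal T}$-charge conservation argument.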
Although $H_q^0$, $H_1^0$, and $\Phi_i^0$ may mix with each other because of the nonzero $m^2_{H \,1}$ and $m^2_{\Phi \, i}$, the mass matrix of the scalar bosons can be described in a model-independent way as long as $\Delta V=0$. \subsection*{Scalar mass eigenstates in the case of $\Delta V \neq 0$} Let us consider the case of nonzero $\Delta V$. We simply assume that the nonzero VEV of $\Phi_i$ is higher than the EW scale, so that we can write down the ${\cal T}$-conserving and ${\cal T}$-breaking effective potentials for $H^i$ and $H^q$ at the renormalizable level: \begin{eqnarray} V_{eff}&=&V_{\cal T} + V_{\slashed{\cal T}}, \\ V_{\cal T}&=& m^2_q |H_q|^2+(m^2_{q1} H_q^\dagger H_1+h.c.) +m^2_1 |H_1|^2+m^2_2 |H_2|^2+m^2_3 |H_3|^2 +V^{(4)}_T, \nonumber \\ V_{\slashed{\cal T}}&=&m^2_{q2} H_q^\dagger H_2 +m^2_{q3} H_q^\dagger H_3+m^2_{12} H_1^\dagger H_2+m^2_{13} H_1^\dagger H_3+m^2_{23} H_2^\dagger H_3 +h.c., \label{eq;T-breaking} \end{eqnarray} where $V^{(4)}_T$ is a function which includes only the ${\cal T}$-conserving quartic couplings of $H_{q}$ and $H_{i}$. $V_{\cal T}$ and $V_{\slashed {\cal T}}$ are the ${\cal T}$-conserving and ${\cal T}$-breaking potentials. The scalars from $\Phi$ are omitted, assuming that they decouple below the EW scale because of the hierarchy between the VEV of $\Phi$ and the VEVs of the Higgs doublets. $V_{\slashed{\cal T}}$ is generated by $\Delta V$ in Eq. (\ref{potential}). We could also write quartic couplings in $V_{\slashed {\cal T}}$, but their dimensionless couplings are expected to be small, because they are generated by higher-dimensional operators in $V$ or by integrating out the heavy scalars in $\Phi$ \footnote{Our argument in this section remains valid as long as the ${\cal T}$-breaking quartic couplings are tiny.}. Now, let us consider the stationary conditions for $H_2$ and $H_3$. In our setup, they should not develop nonzero VEVs, in order to respect ${\cal T}$.
The derivatives $\pa_{H_I} V$ ($I=2, \, 3$) under the ${\cal T}$-conserving conditions depend on the ${\cal T}$-breaking terms as \beq \pa_{H_I} V_{eff}=m^2_{qI} H_q^\dagger+m^2_{1I} H_1^\dagger. \eeq This leads to the condition for realizing the vacuum alignment with $\langle H_I \rangle=0$: \beq \label{alignment-condition} m^2_{qI} \sin \beta +m^2_{1I} \cos \beta=0. \eeq In general, the stationary conditions for $H_1$ and $H_q$ are independent of this condition, so that fine-tuning would be required unless $m^2_{qI}=m^2_{1I}=0$ is satisfied. The quartic couplings in $V_{eff}$ respect the remnant symmetry ${\cal T}$, so that only $m^2_{qI}$ and $m^2_{1I}$ contribute to the mass mixing between the ${\cal T}$-charged and ${\cal T}$-trivial scalars. We conclude that the ${\cal T}$-charged scalars do not mix with the ${\cal T}$-trivial scalars, unless we admit the tuning in Eq. (\ref{alignment-condition}). On the other hand, the mass mixing between $H_2$ and $H_3$ generated by $m^2_{23}$ is not controlled by the vacuum alignment, because both of the VEVs of $H_2$ and $H_3$ vanish. We discuss the ${\cal T}$-breaking effects in Sec. \ref{sec:T-breaking}. We also study the ${\cal T}$-breaking terms corresponding to the mixing between the ${\cal T}$-trivial and the ${\cal T}$-charged scalars in Sec. \ref{section5}. Note that the scalars in $\Phi_i$ may have masses comparable to the EW scale, and they may mix with $H_2$ and $H_3$, although the mixing given by $\Delta V$ should be controlled to realize the vacuum alignment. Moreover, $\Phi_i$ couples dominantly to neutrinos, even if the mixing exists. Eventually, we discuss phenomenology in the limit that the mixing with the scalars in $\Phi_i$ is negligible. \section{Flavor physics} \label{section4} We have seen that the FCNCs involving the scalars are well controlled, if we assume that the partially respected remnant symmetry ${\cal T}$ holds in the Yukawa couplings.
Even if ${\cal T}$ is broken in the Higgs potential, we can still discuss the contributions of the FCNCs to flavor physics, as long as the breaking effect is smaller than the ${\cal T}$-conserving one. As discussed in subsection \ref{sec2-E}, the ${\cal T}$-conserving FCNCs are distinctive, so that we can qualitatively analyze their signals in flavor physics. In Sec. \ref{section4-1}, we consider the ${\cal T}$-conserving case, and then we discuss the ${\cal T}$-breaking effects, including the loop corrections, in subsection \ref{sec:T-breaking}. \subsection{${\cal T}$-conserving contributions} \label{section4-1} First of all, let us discuss flavor physics in the case that ${\cal T}$ is conserved in the charged lepton and scalar mass matrices. The Yukawa couplings between the scalars and the charged leptons are given by Eqs. (\ref{eq:Yukawa}) and (\ref{2HDM}). $\phi_{e, \, \mu}$ and $H^+_{e, \, \mu}$ are mass eigenstates in this case. The masses of the ${\cal T}$-charged scalars are also expected to be around the EW scale, because they are given by $V_H$, so we are especially interested in EW-scale scalar masses.
\subsubsection{${\cal T}$-charged scalar interactions} Through the exchange of $\phi_{e, \, \mu}$, (flavor-changing) $4$-fermion interactions are effectively generated as \begin{eqnarray} {\cal L}^{(4)}_{\cal T}&=& \frac{1}{v^2 \cos^2 \beta} \Big \{ \frac{|b_3|^2}{m^2_{\phi_e}} (\overline{\tau_R}e_L)(\overline{e_L}\tau_R) + \frac{|b_2|^2}{m^2_{\phi_e}} (\overline{\mu_R}\tau_L)(\overline{\tau_L}\mu_R) + \frac{|b_1|^2}{m^2_{\phi_e}} (\overline{e_R}\mu_L)(\overline{\mu_L}e_R) \nonumber \\ &+&\frac{|c_3|^2}{m^2_{\phi_\mu}} (\overline{\tau_R}\mu_L)(\overline{\mu_L}\tau_R) + \frac{|c_2|^2}{m^2_{\phi_\mu}} (\overline{\mu_R}e_L)(\overline{e_L}\mu_R) + \frac{|c_1|^2}{m^2_{\phi_\mu}} (\overline{e_R}\tau_L)(\overline{\tau_L}e_R) \nonumber \\ &+& \frac{b^*_{2}b_3}{m^2_{\phi_e}} (\overline{\mu_R}\tau_L)(\overline{e_L}\tau_R) + \frac{b^*_1b_2}{m^2_{\phi_e}} (\overline{e_R}\mu_L)(\overline{\tau_L}\mu_R) + \frac{b_3^*b_1}{m^2_{\phi_e}} (\overline{\tau_R}e_L)(\overline{\mu_L}e_R) +h.c. \nonumber \\ &+&\frac{c^*_2c_3}{m^2_{\phi_\mu}} (\overline{\mu_R}e_L)(\overline{\mu_L}\tau_R) + \frac{c_1^*c_2}{m^2_{\phi_\mu}} (\overline{e_R}\tau_L)(\overline{e_L}\mu_R) + \frac{c^*_3c_1}{m^2_{\phi_\mu}}(\overline{\tau_R}\mu_L)(\overline{\tau_L}e_R) +h.c. \Big \}. \end{eqnarray} The charged Higgs scalars $H^+_{e, \, \mu}$ also induce flavor violation, which is obtained by replacing $\ov{l^i_L}$ with $\ov{\nu_L^k} V_{ki}$. The mass difference between $H^+_{e, \, \mu}$ and $\phi_{e, \, \mu}$ is strongly constrained by the $\rho$ parameter, so that we set $m^2_{\phi_e}=m^2_{H^+_e}$ and $m^2_{\phi_\mu}=m^2_{H^+_\mu}$ in our study. The lower bound on the charged Higgs mass is given by the direct search at the LEP experiment: $m_{H^{\pm}} \gtrsim 80$ GeV \cite{Abbiendi:2013hk}. Below, we survey the parameter space above this lower bound. One of the stringent constraints on the flavor-violating couplings comes from $e^+ e^- \to l^+ l^-$ $(l=e, \, \mu, \, \tau)$ at LEP \cite{LEPbound}.
Assuming the relation in Eq. (\ref{A4-relation}) with $(m_1, \, m_2, \, m_3)=(m_e, \, m_\mu , \, m_{\tau})$, the processes $e^+ e^- \to \mu^+ \mu^-, \, \tau^+ \tau^-$ are enhanced through $\phi_{e, \, \mu}$ exchange in the $t$-channel: \beq m_{\phi_e} \gtrsim 0.62 \times \frac{m_{\tau}}{v \cos \beta}~ {\rm TeV},~m_{\phi_\mu} \gtrsim 2.23 \times \frac{m_{\mu}}{v \cos \beta}~ {\rm TeV}. \eeq The allowed region is summarized in Fig.~\ref{fig1}. If we consider a model which does not satisfy the relation in Eq. (\ref{A4-relation}), $m_{\phi_e}$ may face a stronger bound from $e^+ e^- \to \mu^+ \mu^-$, but the bound from the flavor-violating $\tau$ decays is stronger than the LEP bound, as we discuss below. Flavor-violating decays of $\tau$ and $\mu$ have been well investigated experimentally, and the constraints are summarized in Refs. \cite{Hayasaka:2010np, Bellgardt:1987du}. In our models, the ${\cal T}$ charge should be conserved, so that the final states of $\tau$ and $\mu$ decays should be ${\cal T}$-charged states. That is, the only possible decay patterns of $\tau$ are \beq \tau^- \to \mu^+ e^- e^-,~ e^+ \mu^- \mu^-. \eeq $\mu \to 3e$, which is strongly constrained by the experiments \cite{Bellgardt:1987du}, can be forbidden. Assuming the relation in Eq. (\ref{A4-relation}), the strongest bound on the flavor-violating decays is obtained mainly from $\tau^- \to e^+ \mu^- \mu^-$, with Br($\tau^- \to e^+ \mu^- \mu^-$) $< 1.7 \times 10^{-8}$ \cite{Hayasaka:2010np} and \beq {\rm Br}(\tau^- \to e^+ \mu^- \mu^-)=\frac{m^5_\tau}{3 (8 \pi)^3 \Gamma_\tau} \left | \frac{ m_\tau m_\mu}{m^2_{\phi_\mu} (v\cos \beta)^2} \right |^2. \eeq The allowed region is summarized in Fig.~\ref{fig1}\footnote{The lepton flavor violating (LFV) $\tau$ decays induced by tree-level FCNCs involving extra scalars have been investigated in a generic two-Higgs-doublet model \cite{LFV-2HDM}. }. In addition, the charged Higgs exchange processes may also contribute to the $\tau$ and $\mu$ decays.
The chirality of the charged leptons in these decays differs from that in the SM, so the correction may be quite small. Assuming the charged leptons in the final state are massless, the deviation of the leptonic decay width is evaluated as \beq \Delta{\rm Br}(l_i \to l_j \ov{\nu_k} \nu_n) \simeq \frac{1}{32 G^2_F} \sum_{a=e, \mu} \frac{|Y_a^{ni} Y_a^{* \, kj}|^2}{m^4_{H^+_a}}. \eeq Assuming the relation in Eq. (\ref{A4-relation}) and $m_{H^+_a}=m_{\phi_a}$ for the EWPOs, we find that the branching ratio of the muonic $\tau$ decay is modified at most to \beq {\rm Br}(\tau \to \mu \ov{\nu} \nu) \simeq (1-1.07 \times 10^{-3}) \times {\rm Br}(\tau \to \mu \ov{\nu} \nu)_{\rm SM}, \eeq including the contribution of the $\mu$ mass. ${\rm Br}(\tau \to \mu \ov{\nu} \nu)_{\rm SM}$ is the SM prediction, and this modified value is within the error of the current experimental measurement of the $\tau$ decay \cite{PDG}. The contributions of the charged Higgs bosons carrying ${\cal T}$-charges to the other decays, such as $\mu \to e \nu \nu$, are strongly suppressed by the Yukawa couplings. Including one-loop corrections involving the extra scalars, the $Z$ and $W$ couplings deviate slightly from the SM ones: \begin{equation} {\cal L}_{{\rm EW}}= g_Z Z^{\rho} \{ (q_L +\Delta q^i_L) \ov{l^i_L} \gamma_\rho l^i_L+(q_R+\Delta q^i_R) \ov{l^i_R} \gamma_\rho l^i_R+ (q^{\nu}_L +\Delta q^{\nu \, ij}_L) \ov{\nu^i_L} \gamma_\rho \nu^j_L \}, \end{equation} where $(q_L, \, q_R, \, q^{\nu}_L)=(-1/2+\sin^2 \theta_W, \, \sin^2 \theta_W, \, 1/2)$ are defined, $\theta_W$ is the Weinberg angle, and $g_Z$ is the gauge coupling of the $Z$ boson. 
From the one-loop diagrams involving $\phi_{e, \, \mu}$, the shifts are estimated as follows: \begin{eqnarray} \Delta q^{i}_L &=& \sum_{a=e,\mu} \sum^3_{k=1} \frac{|Y^{ik}_a|^2}{16 \pi^2} \frac{M^2_Z}{m^2_{\phi_a}} \left (- \frac{1}{36}- \frac{1}{3} \sin^2 \theta_W \right ), \\ \Delta q^{i}_R &=&\sum_{a=e,\mu} \sum^3_{k=1} \frac{|Y^{ki}_a|^2}{16 \pi^2} \frac{M^2_Z}{m^2_{\phi_a}} \left (\frac{7}{36}- \frac{1}{3} \sin^2 \theta_W \right ), \\ \Delta q^{\nu \, ij}_L &=& \sum_{a=e,\mu} \sum^3_{k,l,n=1}V^{*\,ki} V^{ nj} \frac{Y^{kl}_a Y^{nl}_a}{16 \pi^2} \frac{M^2_Z}{m^2_{\phi_a}} \left (\frac{1}{36}- \frac{7}{18} \sin^2 \theta_W \right ), \end{eqnarray} where $m_i \ll M_Z \ll m_{\phi_a}$ is assumed. In a model which satisfies the relation in Eq. (\ref{A4-relation}), $\Delta q^{e, \mu}_L$, $\Delta q^{\tau}_R$, and $\Delta q^{\nu \, ij}_L$ might be sizable. The constraints from $e^+e^- \to \tau^+ \tau^-$ and $\tau^- \to e^+ \mu^- \mu^-$ give the maximal sizes of the deviations: \begin{eqnarray} \Delta q^e_{L}&\approx&2.87 \times 10^{-5}, ~\Delta q^\mu_{L}\approx-1.89 \times 10^{-6},~\Delta q^\tau_{R}\approx3.21 \times 10^{-5}, \\ \Delta q^{\nu \, ij}_{L}&\approx&-1.7 \times 10^{-5} \times V^{i3} V^{* \,j3}. \end{eqnarray} These are too small to be probed by the current experiments; the maximal values lie within the errors of the $Z$-boson measurements \cite{Pich:2013lsa}. \subsubsection{${\cal T}$-trivial scalar interactions} $H_q$ and $H_1$ are not charged under ${\cal T}$, and they develop nonzero VEVs. Under the ${\cal T}$-conserving assumption, their Yukawa couplings with the SM fermions are flavor-diagonal in the mass basis. We can therefore conclude that $H_q$ and $H_1$ realize so-called minimal flavor violation and evade the stringent constraints from flavor physics. Scalars of this type with the interactions in Eq. 
(\ref{2HDM}) have been well investigated so far, motivated by, for instance, the deviation of the muon anomalous magnetic moment \cite{leptophilic2HDM}. The lower bound on the charged Higgs mass comes from the direct search at the LHC; it is roughly the top mass, to evade exotic top decays: $m_{H^{\pm}} \gtrsim 172$ GeV \cite{chargedLHC}. The pseudoscalar mass should be close to the charged Higgs mass to avoid too large a deviation of the $\rho$ parameter. Flavor-changing processes in $B$ physics may constrain our ${\cal T}$-trivial scalars. For instance, the $B \to X_s \gamma$ process gives a lower bound on $\tan \beta$: $\tan \beta \gtrsim 1$ \cite{Hermann:2012fc}. Br$(B^- \to \tau^- \nu)$ also deviates slightly from the SM prediction due to charged-Higgs exchange, but the deviation is less than $1 \%$ in the region with $m_{H^\pm} \geq 172$ GeV. As pointed out in Ref. \cite{leptophilic2HDM}, the discrepancy in the muon anomalous magnetic moment may be explained if $\tan \beta$ is quite large and the pseudoscalar is rather light. We also survey this parameter region in the next section. \begin{figure}[!t] {\epsfig{figure=phieVStb.eps,width=0.5\textwidth}}{\epsfig{figure=phimuVStb.eps,width=0.5\textwidth}} \vspace{-0.5cm} \caption{Constraints in the $(m_{\phi_{e, \mu}}, \, \tan \beta)$ plane. The gray region is excluded by $e^+ e^- \to \tau^+ \tau^-$ at the LEP experiment and by the LFV $\tau$ decay, $\tau^- \to e^+ \mu^- \mu^-$.} \label{fig1} \end{figure} \subsection{${\cal T}$-breaking contributions} \label{sec:T-breaking} Next, we investigate the contributions of the ${\cal T}$-breaking terms to flavor physics. If the remnant symmetry ${\cal T}$ were respected in the Higgs potential after the symmetry breaking, only ${\cal T}$-symmetric terms would be relevant to flavor-violating processes. However, as we have seen in Sec. \ref{section3}, $\Delta V$ may be allowed even if we assume that the vacuum alignment respects ${\cal T}$ and ${\cal S}$. 
After the symmetry breaking, $\Delta V$ induces ${\cal T}$-breaking terms in the scalar mass matrices, such as $m^2_{23}$ in Eq. (\ref{eq;T-breaking}), because of the nonzero $\langle \Phi \rangle$, so we may have to control $\Delta V$ to evade the stringent constraints from flavor physics. According to $\Delta V$ in Eq. (\ref{potential}), the ${\cal T}$-breaking terms in the scalar mass matrices take the form \begin{eqnarray} {\cal L}_{\cal T} = -\frac{1}{2} (\delta m^2_H)_{ab} H^0_a H^0_b -\frac{1}{2} (\delta m^2_A)_{ab} A_a A_b - (\delta m^2_{H^+})_{ab} H_a^+ H_b^-. \label{deltam} \end{eqnarray} $A_{a}$ and $H_a^+$ denote the two kinds of scalars: $\{A_{a}\}=\{A_e, A_\mu \}$ and $\{H^{\pm}_{a}\}=\{ H^{\pm}_e, H^{\pm}_\mu \}$. There are two CP-even scalars, and in general they mix with each other through $(\delta m^2_H)_{ab}$, where $\{H_a^0\}=\{ H^0_{e}, H^0_{\mu}\}$ is defined. Besides, there may be mixings between the ${\cal T}$-trivial and ${\cal T}$-charged scalars, such as $(\delta m^2_H)_{a1} H^0_a H^0_{S \, 1}$, although they require fine-tuning of the parameters in the Higgs potential, as discussed around Eq. (\ref{alignment-condition}). These mixings are related to the vacuum alignment, so we analyze the ${\cal T}$-breaking terms together with the deviation of the vacuum alignment in Sec. \ref{section5}. In general, $\Phi$ also predicts extra scalars, which should be included in the scalar mass matrices. However, $\Phi$ mainly couples to neutrinos, so the constraints on $\Phi^i$ are rather weak. We simply assume that the scalars from $\Phi^i$ gain heavy masses from the nonzero VEVs of $\Phi^i$ in $V_{\Phi}$ and decouple around the EW scale. This assumption leads to the approximately ${\cal T}$-conserving situation with \beq (\delta m^2_H)_{ab}, \, (\delta m^2_A)_{ab}, \, (\delta m^2_{H^+})_{ab} \ll m^2_{\phi_{e,\mu}}, \eeq and below we discuss the bounds on the breaking effects from the experiments. 
The relevant constraints come from the $l \to l' \gamma$ processes \cite{LFV-2HDM-2, Chang:1993kw}. In particular, the main contribution of the ${\cal T}$-breaking terms appears in $\mu \to e \gamma$. \subsubsection{constraint from the $\mu \to e \gamma$ process} \label{mutoegamma} The $\mu \to e \gamma$ process has been well investigated in 2HDMs \cite{LFV-2HDM-2, Chang:1993kw}. The MEG experiment released the upper bound on the branching ratio of this flavor-changing process: Br$(\mu \to e \gamma) < 5.7 \times 10^{-13}$ \cite{Adam:2013mnn}. It is expected to be improved to $6 \times 10^{-14}$ in the future \cite{Baldini:2013ke}. Our dominant contribution to the $\mu \to e \gamma$ process comes from the one-loop correction involving the scalars of $\phi_e$, because $\phi_e$ has large $(e, \, \tau)$ and $(\tau,\, \mu)$ elements of the flavor-violating Yukawa couplings. If the CP-even and CP-odd scalar masses of $\phi_e$ are different, the $\mu \to e \gamma$ process is easily enhanced. The operator inducing the LFV process is estimated at the one-loop level as \begin{eqnarray} {\cal L}_{\mu \to e \gamma}&=&e C_7 \overline{e_L} \sigma_{\mu \nu} \mu_RF^{\mu \nu}, \\ C_7&=&\frac{ m_{\tau}Y^{e \tau }_e Y^{ \tau \mu }_e}{64\pi^2} \left \{ \frac{U^h_{e \alpha} U^h_{e \alpha}}{m^2_{h_\alpha}} F \left( m^2_{h_\alpha}/m^2_\tau \right ) - \frac{U^A_{e \alpha} U^A_{e \alpha} }{m^2_{A_\alpha}} F \left( m^2_{A_\alpha}/m^2_\tau \right ) \right \}, \end{eqnarray} where $e$ is the electric charge and $F(x)$ is defined as \beq F (x)= \ln ( x ) -\frac{3}{2}. 
\eeq $U^{h,A}_{ij}$ are the matrices diagonalizing the mass matrices of the CP-even and CP-odd scalars; \begin{eqnarray} ((U^h)_{a \alpha} m^2_{h_\alpha} (U^h)_{b\alpha})&=&\begin{pmatrix} m^2_{\phi_e} + (\delta m_H^2)_{ee} & (\delta m_H^2)_{\mu e} \\ (\delta m_H^2)_{\mu e} & m^2_{\phi_\mu}+ (\delta m_H^2)_{\mu\mu} \end{pmatrix}, \label{Uh} \end{eqnarray} \begin{eqnarray} ((U^A)_{a \alpha} m^2_{A_\alpha} (U^A)_{b \alpha})&=&\begin{pmatrix} m^2_{\phi_e}+(\delta m_A^2)_{ee} & (\delta m_A^2)_{\mu e} \\ (\delta m_A^2)_{\mu e} & m^2_{\phi_\mu}+ (\delta m_A^2)_{\mu\mu} \end{pmatrix}. \label{UA} \end{eqnarray} Fig. \ref{fig2} shows the region excluded by the current upper bound on the $\mu \to e \gamma$ process, in the case that only $(\delta m^2_H)_{ee}$ is nonzero. As we see, the mass difference between the CP-even and CP-odd scalars is severely constrained by this flavor-changing process. If $m_{\phi_e}$ is below $300$ GeV, $\tan \beta$ should be smaller than about $2$. This is quite strong compared to the bound from $e^+e^- \to \tau^+ \tau^-$ in Fig. \ref{fig1}. We may require large $\tan \beta$, for instance, to enhance the muon anomalous magnetic moment \cite{leptophilic2HDM}, but then the ratio of the squared mass difference to $m^2_{\phi_e}$, $(\delta m^2_H)_{ee}/m^2_{\phi_e}$, should be much smaller than $O(10^{-2})$. \begin{figure}[!t] {\epsfig{figure=muegamma.eps,width=0.6\textwidth}} \vspace{-0.5cm} \caption{$(\delta m^2_H)_{ee}$ (GeV) and $\tan \beta$. The gray (light-blue) region is excluded by $\mu \to e \gamma$ for $m_{\phi_{e}}=1000$ ($600$) GeV. The dashed line shows the upper bound for $m_{\phi_{e}}=300$ GeV. } \label{fig2} \end{figure} \subsubsection{constraint from the $\tau \to e \gamma$ process} The scalars of $\phi_\mu$ contribute to the $\tau \to e \gamma$ process through the mass difference between the CP-even and CP-odd scalars. 
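The sensitivity of these dipole amplitudes to the CP-even/CP-odd mass difference can be illustrated with a small numerical sketch of the loop combination entering $C_7$: for degenerate masses the CP-even and CP-odd contributions cancel exactly, while a ${\cal T}$-breaking shift in the $(ee)$ entry spoils the cancellation. The masses below are toy inputs for illustration, not a fit:

```python
import numpy as np

def F(x):
    """Loop function F(x) = ln(x) - 3/2, as defined in the text."""
    return np.log(x) - 1.5

def loop_combination(m2_phi_e, m2_phi_mu, delta_ee, m_loop):
    """CP-even minus CP-odd loop sum, as in the C_7 expression, with a
    T-breaking shift delta_ee only in the (ee) entry of the CP-even
    mass matrix (toy inputs, not a fit)."""
    M2_h = np.array([[m2_phi_e + delta_ee, 0.0], [0.0, m2_phi_mu]])
    M2_A = np.array([[m2_phi_e, 0.0], [0.0, m2_phi_mu]])
    total = 0.0
    for M2, sign in ((M2_h, +1.0), (M2_A, -1.0)):
        m2, U = np.linalg.eigh(M2)          # eigenvalues and mixing matrix
        for a in range(2):
            total += sign * U[0, a]**2 / m2[a] * F(m2[a] / m_loop**2)
    return total

m_tau = 1.77686  # GeV; the tau runs in the loop for mu -> e gamma
degenerate = loop_combination(600.0**2, 600.0**2, 0.0, m_tau)
split = loop_combination(600.0**2, 600.0**2, 60.0**2, m_tau)
print(degenerate, split)  # exact cancellation vs. a nonzero remainder
```

The same pattern governs the $\tau\to e\gamma$ amplitude below, with the muon in the loop instead of the tau.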
The operator for the LFV process is given by \begin{eqnarray} {\cal L}_{\tau \to e \gamma}&=&e C^{\tau}_7 \overline{e_L} \sigma_{\mu \nu} \tau_RF^{\mu \nu}, \\ C^{\tau}_7&=&\frac{ m_{\mu}Y^{e \mu }_\mu Y^{ \mu\tau }_\mu}{64\pi^2} \left \{ \frac{U^h_{\mu \alpha} U^h_{\mu \alpha}}{m^2_{h_\alpha}} \left( F \left( m^2_{h_\alpha}/m^2_\mu \right ) + \frac{m_\tau}{6m_\mu}\right )- \frac{U^A_{\mu \alpha} U^A_{\mu \alpha} }{m^2_{A_\alpha}} \left( F \left( m^2_{A_\alpha}/m^2_\mu \right ) + \frac{m_\tau}{6m_\mu} \right ) \right \}. \nonumber \\ \end{eqnarray} The current upper bound on Br($\tau \to e \gamma$) is $1.1 \times 10^{-7}$ \cite{Aubert:2009ag}, which is rather weak compared with that on the $\mu \to e \gamma$ process. In Fig. \ref{fig3}, the regions excluded by $\tau \to e \gamma$, $\tau^- \to e^- \mu^+ \mu^-$, and $\tau^- \to e^+ \mu^- \mu^-$ are summarized. We conclude that the $\tau^- \to e^+ \mu^- \mu^-$ constraint is the most important one for $\phi_\mu$ in our model, even when the ${\cal T}$-breaking terms are included. \begin{figure}[!t] {\epsfig{figure=tauegamma.eps,width=0.6\textwidth}} \vspace{-0.5cm} \caption{$(\delta m^2_H)_{\mu \mu}$ (GeV) and $\tan \beta$. The gray (light-blue) region is excluded by $\tau \to e \gamma$ for $m_{\phi_{\mu}}=200$ GeV. The region above the dashed line is excluded by $\tau^- \to e^- \mu^+ \mu^-$, and the blue region is excluded by $\tau^- \to e^+ \mu^- \mu^-$. } \label{fig3} \end{figure} \subsubsection{muon anomalous magnetic moment $(g-2)_{\mu}$ } In our model, the ${\cal T}$-breaking term which allows mass mixing between $\phi_e$ and $\phi_\mu$ can enhance the muon anomalous magnetic moment and the electron and muon EDMs. It is well known that the experimental result for the muon anomalous magnetic moment $(g-2)_{\mu}$ deviates from the SM prediction by $3.1\,\sigma$ \cite{PDG}. In Ref. \cite{leptophilic2HDM}, a very light pseudoscalar is introduced to accommodate the anomaly in the leptophilic 2HDM. 
In our model, we find new contributions to $(g-2)_{\mu}$ from the tree-level FCNCs \cite{LFV-2HDM-2}, \beq \label{muong-2} \Delta a_\mu=\frac{ m_\mu m_\tau Y^{ \tau \mu}_e Y^{ \mu \tau }_\mu }{(4 \pi)^2} \left \{ \frac{U^{h}_{e \alpha} U^h_{\mu \alpha} }{m^2_{h_\alpha}} F \left( m^2_{h_\alpha}/m^2_\tau \right ) - \frac{ U^A_{e \alpha} U^A_{\mu \alpha}}{m^2_{A_\alpha}} F \left(m^2_{A_\alpha}/m^2_\tau \right ) \right \}. \eeq Unfortunately, the enhancement of $\Delta a_{\mu}$ is tiny as long as the ${\cal T}$-breaking terms are small, because of the stringent constraints from the $\mu \to e \gamma$ and $\tau^- \to e^+ \mu^- \mu^-$ processes. Setting $(\delta m^2_{H,A})_{ee}=(\delta m^2_{H,A})_{\mu \mu}=(\delta m^2_{A})_{e\mu}=0$, $\Delta a_\mu$ is at most $O(10^{-2}) \times 10^{-9}$, far below the experimental result. One possible way to enhance $\Delta a_\mu$ would be a large-$\tan \beta$, light ${\cal T}$-trivial scalar scenario, as pointed out in Ref. \cite{leptophilic2HDM}. In addition, the loop corrections involving the extra scalars with ${\cal T}$-breaking terms shift the mass basis of the charged leptons. As long as $\tan \beta$ is rather small, the shift is tiny, but the large-$\tan \beta$ scenario may also be attractive because of the $(g-2)_\mu$ discrepancy. Furthermore, nonzero $\theta_{13}$ has been confirmed experimentally \cite{theta13-1,theta13-2,theta13-3,theta13-4,theta13-5}, so it is important to discuss the contribution of the ${\cal T}$-breaking terms to the PMNS matrix. In the next section, we investigate the mass mixing from the one-loop correction and discuss the contributions to $\theta_{13}$ and to the flavor-changing processes. When the Yukawa couplings in Eq. (\ref{muong-2}) are complex, their imaginary parts generate contributions to the electric dipole moments. 
The electron and muon EDMs are given as follows: \begin{eqnarray} d_e&=&\frac{e }{32\pi^2} {\rm Im}(Y_e^{e\tau}Y_\mu^{\tau e}) \left\{U_{\mu \alpha}^hU_{e \alpha}^h\frac{m_\tau}{m_{h \alpha}^2}F\left(m_{h \alpha}^2/m_\tau^2\right) -U_{\mu \alpha}^AU_{e \alpha}^A\frac{m_\tau}{m_{A \alpha}^2}F \left(m_{A \alpha}^2/m_\tau^2\right)\right\}, \nonumber \\ && \\ d_\mu&=&\frac{e }{32\pi^2} {\rm Im}(Y_\mu^{\mu\tau}Y_e^{\tau \mu}) \left\{U_{\mu \alpha}^hU_{e \alpha}^h\frac{m_\tau}{m_{h \alpha}^2}F\left(m_{h \alpha}^2/m_\tau^2\right) -U_{\mu \alpha}^AU_{e \alpha}^A\frac{m_\tau}{m_{A \alpha}^2}F\left(m_{A \alpha}^2/m_\tau^2\right)\right\}. \nonumber \\ \end{eqnarray} The current upper bounds on the electron and muon EDMs are $|d_e|<8.7 \times 10^{-29}~e\,{\rm cm}$ \cite{EDM} and $|d_\mu|<1.8 \times 10^{-19}~e\,{\rm cm}$ \cite{Bennett:2008dy}, respectively. Fig. \ref{fig4} shows the allowed region in the case with $m_{\phi_{e}}=m_{\phi_{\mu}}=200$ GeV. ${\rm Im}(Y_e^{ij}Y_\mu^{kl})$ is assumed to be given by Eqs. (\ref{A4-relation}) and (\ref{eq:Yukawa}). The red region in Fig. \ref{fig4} is excluded by the current experimental bound on the electron EDM. Depending on the phases of the Yukawa couplings, this is the most stringent among the relevant constraints in our model. \begin{figure}[!t] {\epsfig{figure=EDM.eps,width=0.6\textwidth}} \vspace{-0.5cm} \caption{$(\delta m^2_H)_{e \mu}$ (GeV) and $\tan \beta$ with $m_{\phi_{e}}=m_{\phi_{\mu}}=200$ GeV. The red region is excluded by the electron EDM. } \label{fig4} \end{figure} \subsection{short summary } Let us summarize the results of this section. We have investigated the experimental bounds from flavor physics. In the ${\cal T}$-conserving case, the lepton flavor violating decay $\tau^- \to e^+ \mu^- \mu^- $ gives the most stringent constraint. Once the ${\cal T}$-breaking terms in the scalar mass matrices are included, $\mu \to e \gamma$ and the electron EDM also become relevant to our model. We summarize the allowed points in Fig. \ref{fig-summary}. 
$m_{A_e}$ and $m_{A_\mu}$ are set equal to $m_{\phi}$, which is varied between $126$ GeV and $1$ TeV. The left panel of Fig. \ref{fig-summary} shows the allowed regions for the ${\cal T}$-breaking terms: the blue points correspond to $|(\delta m^2_H)_{\mu e}/m^2_{\phi}|$ and $|(\delta m^2_H)_{ee}/m^2_{\phi}|$, and the red points to $|(\delta m^2_H)_{\mu \mu}/m^2_{\phi}|$. As discussed in this section, $(\delta m^2_H)_{ee}$ and $(\delta m^2_H)_{\mu e}$ are strongly constrained by $\mu \to e \gamma$ and the electron EDM, while the bound on $(\delta m^2_H)_{\mu \mu}$ is relatively weak. If $\tan \beta$ is larger than $10$, the ${\cal T}$-breaking terms should be smaller than $O(0.1)$ compared with the ${\cal T}$-conserving parts. In other words, the upper bound on $\tan \beta$ is below $10$ if $|(\delta m^2_H)_{\mu e}/m^2_{\phi}|$ is larger than $O(0.1)$. The right panel of Fig. \ref{fig-summary} shows the allowed region in the $(m_{\phi}, \, \tan \beta)$ plane, where $|(\delta m^2_H)_{ab}/m^2_{\phi}|$ is taken larger than $0.01$. The black line corresponds to the upper limit for vanishing $(\delta m^2_H)_{ab}$. The red (blue) points are the allowed ones without (with) the constraint from the electron EDM; $Y_e^{e\tau}Y_\mu^{\tau e}$ is assumed to be pure imaginary for the blue points. As we see, light $m_{\phi}$ is disfavored by the $\mu \to e \gamma$ process, while the bound from $\tau^- \to e^+ \mu^- \mu^- $ is more important in the heavy-$m_{\phi}$ region. If $Y_e^{e\tau}Y_\mu^{\tau e}$ has an imaginary part, the electron EDM becomes more relevant to our model and gives the severe constraint shown in Fig. \ref{fig-summary}. \begin{figure}[!t] {\epsfig{figure=Deviation.eps,width=0.5\textwidth}}{\epsfig{figure=mphiVStanbeta.eps,width=0.5\textwidth}} \vspace{-0.5cm} \caption{ $|(\delta m^2_H)_{ab}/m^2_{\phi}|$ and $\tan \beta$ (left), and $m_\phi$ (GeV) and $\tan \beta$ (right). 
$m_{A_e}$ and $m_{A_\mu}$ are fixed at $m_\phi$, and $m_{\phi}$ is varied between $126$ GeV and $1$ TeV. In the left panel, the blue points show $|(\delta m^2_H)_{\mu e}/m^2_{\phi}|$ and $|(\delta m^2_H)_{ee}/m^2_{\phi}|$, and the red points show $|(\delta m^2_H)_{\mu \mu}/m^2_{\phi}|$. In the right panel, $|(\delta m^2_H)_{ab}/m^2_{\phi}|$ is larger than $0.01$. The blue (red) points are the allowed ones for the experimental bounds with (without) the upper limit on the electron EDM. The black line is the upper bound in the case with $|(\delta m^2_H)_{ab}|=0$. } \label{fig-summary} \end{figure} \section{ Mass mixing induced by ${\cal T}$-breaking terms} \label{section5} Off-diagonal elements of the Dirac mass matrix of the charged leptons are generated by loop diagrams involving the interactions coming from $\Delta V$, in which the mixing between scalar bosons carrying different ${\cal T}$-charges occurs, as shown in Eq. (\ref{deltam}). Including these corrections to the Dirac mass matrix of the charged leptons ($M_l$), we redefine the mass matrix as \beq M_l=\left( \begin{array}{ccc} m_e & \epsilon_{e\mu} & \epsilon_{e\tau} \\ \epsilon_{\mu e} & m_\mu & \epsilon_{\mu\tau} \\ \epsilon_{\tau e} & \epsilon_{\tau\mu} & m_\tau \end{array}\right). \eeq \begin{figure}[h] \begin{center} \epsfig{file=t-breaking.eps,width=0.7\hsize} \caption{One-loop diagram which gives $\epsilon_{ij}$. $H^0_a$ and $A_a$ in this figure are the scalar mass eigenstates in the ${\cal T}$-breaking case.} \label{fig:tbreaking} \end{center} \end{figure} In order to derive an explicit expression for $\epsilon_{ij}$, we consider the one-loop processes shown in Fig. \ref{fig:tbreaking}. 
The off-diagonal elements of $M_l$ are estimated as \begin{eqnarray} \label{eq;neutrino-mixing} \epsilon_{ij}=\sum_{a,b,\alpha,k}\frac{Y_a^{ik}m_kY_b^{kj}}{32\pi^2}\left\{U_{a \alpha}^hU_{b \alpha}^h\ln{\frac{m_{h_\alpha}^2}{\Lambda^2}}-U_{a \alpha}^AU_{b \alpha}^A\ln{\frac{m_{A_\alpha}^2}{\Lambda^2}}\right\}, \end{eqnarray} where the indices $i, \, j, \, k$ label the charged leptons and $a, \, b=e, \, \mu$. $\Lambda$ is an arbitrary scale; $\epsilon_{ij}$ do not explicitly depend on $\Lambda$. We assume $m_k\ll m_{h_\alpha}$, $m_{A_\alpha}$. These loop corrections change the mass basis slightly and contribute to physical observables such as the neutrino mixing angles. Here, we investigate how large $\theta_{13}$ can become through the radiative corrections and discuss the correlation between the neutrino mixing and the predicted flavor-changing processes. In addition, extra FCNCs are generated by the radiative corrections. For instance, the Yukawa couplings of the ${\cal T}$-trivial scalars are flavor-diagonal at tree level, but nonzero off-diagonal elements are generated at the one-loop level. 
The one-loop FCNCs can be described as \begin{eqnarray} {\cal L}^{(1)}_{FCNC}&=& -y^{1}_{ij} H^0_{S \, 1} \ov{l}_i e_{R \, j} -y^{2}_{ij} H^0_{S \, 2} \ov{l}_i e_{R \, j}, \\ y^1_{ij}&=&\sum_{a, \alpha, \beta, k}\frac{Y_\alpha^{ik}m_kY_\beta^{kj}}{16\pi^2 \sqrt{2}} \left ( \cos \alpha \frac{\pa}{ \pa \langle H_q \rangle} +\sin \alpha \frac{\pa}{ \pa \langle H_1 \rangle} \right ) \left\{U_{\alpha a}^hU_{\beta a}^h\ln{\frac{m_{h_a}^2}{\Lambda^2}}-U_{\alpha a}^AU_{\beta a}^A\ln{\frac{m_{A_a}^2}{\Lambda^2}}\right\}, \nonumber \\ && \\ y^2_{ij}&=&\sum_{a, \alpha, \beta, k}\frac{Y_\alpha^{ik}m_kY_\beta^{kj}}{16\pi^2 \sqrt{2}} \left ( \cos \alpha \frac{\pa}{ \pa \langle H_1 \rangle} -\sin \alpha \frac{\pa}{ \pa \langle H_q \rangle} \right ) \left\{U_{\alpha a}^hU_{\beta a}^h\ln{\frac{m_{h_a}^2}{\Lambda^2}}-U_{\alpha a}^AU_{\beta a}^A\ln{\frac{m_{A_a}^2}{\Lambda^2}}\right\}. \nonumber \\ && \end{eqnarray} \begin{figure}[h] \begin{tabular}{cc} \begin{minipage}{0.5\hsize} \begin{center} \epsfig{file=yukawachange.eps,width=\hsize} \end{center} \end{minipage} \begin{minipage}{0.5\hsize} \begin{center} \epsfig{file=yukawachange2.eps,width=\hsize} \end{center} \end{minipage} \end{tabular} \caption{One-loop processes which change the Yukawa couplings. When the ${\cal T}$-trivial scalar bosons $H^0_{S1,2}$ get VEVs, these processes give the mass mixing terms drawn in Fig. \ref{fig:tbreaking}. \label{fig;yukawachange}} \end{figure} These Yukawa couplings vanish in the ${\cal T}$-conserving limit, so they are suppressed by the ${\cal T}$-breaking terms. We expect that the $(e,\, \mu)$ and $(e,\, \tau)$ elements may be enhanced because of the sizable Yukawa couplings, as we have seen in the $\mu \to e \gamma$ and $\tau \to e \gamma$ processes. 
Assuming that only $(\delta m^2_H)_{ee}$ is nonzero among the ${\cal T}$-breaking terms, we find that $y^1_{e \mu }$ is approximately \beq y^1_{e \mu} \approx - \frac{ 2 \sqrt 2 C_7}{1-F(m^2_{\phi_e}/m^2_\tau)} \times \left ( \cos \alpha \frac{\pa m^2_{\phi_e}}{ \pa \langle H_q \rangle} +\sin \alpha \frac{\pa m^2_{\phi_e}}{ \pa \langle H_1 \rangle} \right ), \eeq where the dependence of $(\delta m^2_H)_{ee}$ on $\langle H_1 \rangle$ and $\langle H_q \rangle$ is ignored. As a result, $y^1_{e \mu}$ is very tiny because of the stringent $\mu \to e \gamma$ constraint: when the last factor is around the EW scale, $y^1_{e \mu}$ is at most $O(10^{-12})$. By analogy, $y^1_{e \tau}$ is also small; since the corresponding bound is weaker, it could be slightly larger than the $(e,\, \mu)$ element, but it is at most $O(10^{-9})$. The other elements are much smaller, because of the suppression by the Yukawa couplings and the ${\cal T}$-breaking terms. \subsection*{contributions to neutrino mixing angles} Based on the above estimates, we investigate the contributions to the LFV and to the observed neutrino mixing. In many flavor models, the full flavor symmetry is broken down to subgroups which differ between the charged-lepton sector and the neutrino sector. A certain type of model predicts $\theta_{13} =0$ at tree level, while other models lead to nonzero $\theta_{13}$. Here, we restrict ourselves to the former case, that is, $\theta_{13} = 0$ at the first stage. However, as discussed above, ${\cal T}$-breaking effects from $\Delta V\neq0$ may modify this prediction, and the neutrino mixing matrix is altered to have nonzero $\theta_{13}$. We now study the contributions of the ${\cal T}$-breaking terms to the neutrino mixing and discuss the possibility that the observed neutrino mixing is generated by the diagonalizing matrix of the charged leptons. 
${\cal S}$-breaking entering the neutrino sector also gives nonzero $\theta_{13}$, but this effect depends strongly on the setup of the neutrino sector: whether right-handed neutrinos are present, how many there are, whether a seesaw mechanism operates, and, if so, of which type, {\it etc}. Thus, we concentrate on the charged-lepton sector to keep the considerations model-independent. The size of the corrections to the off-diagonal elements of the Dirac mass matrix is given in Eq. (\ref{eq;neutrino-mixing}). In addition, $\epsilon_{ij}$ may be induced by extra heavy particles decoupling at some scale ($\Lambda$) or by small deviations of the vacuum alignment, {\it i.e.}, $\langle H^{0}_{e,\mu} \rangle \neq 0$. The diagonalizing matrices $U_L$, $U_R$ for the charged leptons are then corrected to \begin{eqnarray} U_L^\dag\simeq\left( \begin{array}{ccc} 1 & -\frac{\epsilon_{e\mu}}{m_\mu} & -\frac{\epsilon_{e\tau}}{m_\tau} \\ \frac{\epsilon_{e\mu}}{m_\mu} & 1 & -\frac{\epsilon_{\mu\tau}}{m_\tau} \\ \frac{\epsilon_{e\tau}}{m_\tau} & \frac{\epsilon_{\mu\tau}}{m_\tau} & 1 \end{array}\right),~~~~ U_R\simeq\left( \begin{array}{ccc} 1 & \frac{\epsilon_{\mu e}}{m_\mu} & \frac{\epsilon_{\tau e}}{m_\tau} \\ -\frac{\epsilon_{\mu e}}{m_{\mu}} & 1 & \frac{\epsilon_{\tau\mu}}{m_\tau} \\ -\frac{\epsilon_{\tau e}}{m_\tau} & -\frac{\epsilon_{\tau\mu}}{m_\tau} & 1 \end{array}\right). 
\nonumber\\ \label{ULUR} \end{eqnarray} These small deviations modify the neutrino mixing matrix away from the tri-bimaximal form; \begin{eqnarray} U_{PMNS}= \left( \begin{array}{ccc} 1 & -\frac{\epsilon_{e\mu}}{m_\mu} & -\frac{\epsilon_{e\tau}}{m_\tau} \\ \frac{\epsilon_{e\mu}}{m_\mu} & 1 & -\frac{\epsilon_{\mu\tau}}{m_\tau} \\ \frac{\epsilon_{e\tau}}{m_\tau} & \frac{\epsilon_{\mu\tau}}{m_\tau} & 1 \end{array} \right) \left( \begin{array}{ccc} \sqrt{\frac{2}{3}} & \frac{1}{\sqrt{3}} & 0 \\ -\frac{1}{\sqrt{6}} & \frac{1}{\sqrt{3}} & -\frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{6}} & \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{2}} \end{array} \right). \label{eq:PMNS} \end{eqnarray} Then $\sin{\theta_{13}}\sim \epsilon_{e\mu}/(\sqrt{2}m_{\mu})$ is generated. If the contribution of the neutrino sector to $\sin \theta_{13}$ is negligible, the value required to reproduce the observed $\sin{\theta_{13}}$ is $\epsilon_{e\mu} \approx 0.2 \times m_\mu$. First, let us discuss the one-loop corrections to the neutrino mixing. Using Eq. (\ref{ULUR}), we can write $\theta_{13}$ in terms of the ${\cal T}$-breaking effects: \begin{eqnarray} \sin{\theta_{13}^{\rm lepton}}&=&\frac{\epsilon_{e\mu}}{\sqrt{2}m_\mu}-\frac{\epsilon_{e\tau}}{\sqrt{2}m_\tau}\nonumber\\ &\simeq&\frac{Y_{e}^{e\tau}Y_{e}^{\tau\mu}}{\sqrt{2}(4\pi)^2}\frac{m_\tau}{m_\mu}\left(U_{ea}^hU_{ea}^h\ln{\frac{m_{h_a}^2}{m_\tau^2}}-U_{ea}^AU_{ea}^A\ln{\frac{m_{A_a}^2}{m_\tau^2}}\right). \end{eqnarray} Here, we neglect the $\epsilon_{e\tau}/m_\tau$ term, because it is suppressed by a factor $m_\mu^2/m_\tau^2$ compared with the first term. The $\epsilon_{ij}$ are generated by the ${\cal T}$-breaking terms, which are strongly constrained especially by $\mu \to e \gamma$. Fig. \ref{fig:theta13vsbr(mutoegamma)} shows the predicted values of $\theta_{13}^{\rm lepton}$ allowed by the $\mu \to e \gamma$ constraint. 
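The leading-order relation between $\epsilon_{e\mu}$ and $\theta_{13}$ can be checked by perturbing the tri-bimaximal matrix numerically; the ratio $\epsilon_{e\mu}/m_\mu = 0.2$ is the representative value quoted above, and the other $\epsilon_{ij}$ are dropped, as argued in the text:

```python
import numpy as np

# Tri-bimaximal mixing matrix, as in the text.
U_TBM = np.array([
    [np.sqrt(2/3),   1/np.sqrt(3),  0.0],
    [-1/np.sqrt(6),  1/np.sqrt(3), -1/np.sqrt(2)],
    [-1/np.sqrt(6),  1/np.sqrt(3),  1/np.sqrt(2)],
])

# Charged-lepton left rotation, keeping only the eps_{e mu} correction
# (eps_{e tau} and eps_{mu tau} are suppressed, as argued in the text).
eps_over_mmu = 0.2
UL_dag = np.array([
    [1.0, -eps_over_mmu, 0.0],
    [eps_over_mmu, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])

U_PMNS = UL_dag @ U_TBM
sin_theta13 = abs(U_PMNS[0, 2])

# Leading-order analytic expression: sin(theta_13) ~ eps_{e mu}/(sqrt(2) m_mu)
print(sin_theta13, eps_over_mmu / np.sqrt(2))
```

Both numbers agree at this order, confirming that $\epsilon_{e\mu}\approx 0.2\, m_\mu$ reproduces $\sin\theta_{13}\approx 0.14$.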
Note that $\sin{\theta_{13}^{\rm lepton}}$ is proportional to $\cos^{-2}{\beta}$, so it is easily enhanced by $\tan{\beta}$. In Fig. \ref{fig:theta13vsbr(mutoegamma)}, $\tan \beta$ is taken below $100$, and $\sin \theta_{13}$ can then reach $O(0.01)$. \begin{figure}[h] \begin{center} \includegraphics[width=75mm,angle=270]{theta13vsBr.eps} \end{center} \caption{ Some possible values of ($\sin{\theta_{13}^{\rm lepton}}$, Br($\mu\to e\gamma$)) for $\tan{\beta}<100$. The larger $\sin{\theta_{13}^{\rm lepton}}$ is, the smaller the favored value of Br($\mu\to e\gamma$) tends to be, and vice versa, because Br($\mu\to e\gamma$) is suppressed by the factors $\frac{1}{m_h^2}$ and $\frac{1}{m_A^2}$ relative to $\sin{\theta_{13}^{\rm lepton}}$. The horizontal line shows the experimental upper bound on Br($\mu\to e\gamma$) \cite{Adam:2013mnn}.} \label{fig:theta13vsbr(mutoegamma)} \end{figure} Similarly, the other shifts of the mixing angles from their tri-bimaximal values are written as \begin{eqnarray} &&\sin{\theta_{12}^{\rm lepton}}-\frac{1}{\sqrt{3}}\simeq\frac{Y_{e}^{e\tau}Y_{e}^{\tau\mu}}{\sqrt{3}(4\pi)^2}\frac{m_\tau}{m_\mu}\left(U_{ea}^hU_{ea}^h\ln{\frac{m_{h_a}^2}{m_\tau^2}}-U_{ea}^AU_{ea}^A\ln{\frac{m_{A_a}^2}{m_\tau^2}}\right), \nonumber\\ &&\sin{\theta_{23}^{\rm lepton}}-\left(-\frac{1}{\sqrt{2}}\right)\simeq\frac{Y_{e}^{\mu e}Y_{e}^{e\tau}}{\sqrt{2}(4\pi)^2}\frac{m_e}{m_\tau}\left(U_{ea}^hU_{ea}^h\ln{\frac{m_{h_a}^2}{m_e^2}}-U_{e a}^AU_{e a}^A\ln{\frac{m_{A_a}^2}{m_e^2}}\right). \nonumber\\ \end{eqnarray} $\epsilon_{e\tau}$ in $\sin\theta_{12}$ is neglected, as in the case of $\theta_{13}$. The contribution to $\sin{\theta_{23}^{\rm lepton}}$ from ${\cal T}$-breaking in the charged-lepton sector is extremely small because of the factor $m_e/m_\tau$. $\sin{\theta_{12}^{\rm lepton}}$ correlates with Br($\mu\to e\gamma$) in the same way as $\theta_{13}^{\rm lepton}$, while the correction to $\sin{\theta_{23}^{\rm lepton}}$ is much smaller than the others. 
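The hierarchy among the three shifts is visible already at the level of the mass prefactors (standard lepton-mass values, in GeV):

```python
# Mass prefactors controlling the relative sizes of the mixing-angle shifts:
# m_tau/m_mu enters sin(theta_13) and sin(theta_12), m_e/m_tau enters
# sin(theta_23). Lepton masses are standard reference values in GeV.
m_e, m_mu, m_tau = 0.000511, 0.1056584, 1.77686

pref_13 = m_tau / m_mu   # enhances the theta_13 and theta_12 shifts
pref_23 = m_e / m_tau    # suppresses the theta_23 shift

print(pref_13, pref_23, pref_13 / pref_23)
```

The ratio of the two prefactors is above $10^4$, which is why the $\theta_{23}$ shift is negligible compared with the others.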
In the argument above, we have attempted to generate $\sin\theta_{13}$ without considering the details of the neutrino sector, which are highly model-dependent. However, the contribution to $\sin{\theta_{13}}$ coming from the loop-induced ${\cal T}$-breaking effect on the charged-lepton mixing alone may be small compared to the observed value, so let us mention other possible contributions to $\theta_{13}$ from the charged-lepton sector. We can introduce new higher-dimensional operators which generate additional lepton mixing and thus supply an additional contribution to $\theta_{13}$. Such operators arise when heavy particles coupling to the ${\cal G}$-charged scalars decouple at some scale. It is enough to consider additional terms of the form \beq \label{zetaterm} \zeta_{ijk}\frac{\Phi_i}{\Lambda} H_q \overline{l^j_L}l^k_R+{\rm h.c.}, \eeq where $\Lambda$ is the decoupling scale and $\zeta_{ijk}$ is determined by ${\cal G}$. The mass mixing terms $\epsilon_{ij}$ are then enhanced by the nonzero VEVs of $\Phi$ and $H_q$. In particular, the term in Eq. (\ref{zetaterm}) corresponding to $\epsilon_{e\mu}$ adds a new contribution of the form $\zeta\langle\Phi\rangle\langle H_q \rangle/(\sqrt{2}m_\mu\Lambda)$ to $\sin\theta_{13}^{\rm lepton}$. The coefficient of this effective interaction then has to be $\zeta/\Lambda=O(10^{-5})\times\langle\Phi\rangle^{-1}$ to realize $\sin\theta_{13}=O(0.1)$. Secondly, we consider the possibility that the Higgs VEV alignment deviates. When the VEV alignment of $H_i$ is altered, the remnant symmetry ${\cal T}$ is broken and becomes only an approximate symmetry even in the charged-lepton sector. 
When $H_e^0$ and $H_\mu^0$ gain nonzero VEVs as $\langle H_{e, \, \mu}^0 \rangle= \delta v_{e,\mu}/ \sqrt{2}$, mass mixing terms of the form \begin{eqnarray} (\epsilon^{\delta v}_e)_{e \tau}\equiv Y^{e\tau}_{e}\frac{\delta v_e}{\sqrt{2}}=\frac{m_\tau \delta v_e}{v\cos\beta} ,~ (\epsilon^{\delta v}_\mu)_{e\mu}\equiv Y^{e\mu}_{\mu}\frac{\delta v_\mu}{\sqrt{2}}=\frac{m_\mu\delta v_\mu}{v\cos\beta} \label{eq:epsilondeltav} \end{eqnarray} are added to $\epsilon_{e\tau}$ and $\epsilon_{e\mu}$, respectively. The size of $\delta v_{e, \mu}$ has to be $O(0.1)\times v\cos\beta$ to realize $\sin\theta_{13}=O(0.1)$. These deviations change the Yukawa couplings of $\phi_{e}$ and $\phi_{\mu}$ in the basis in which the charged leptons are mass eigenstates. Besides, mass mixing terms between ${\cal T}$-trivial and ${\cal T}$-charged scalars can be induced by, for instance, the $|H_i|^2|H_q|^2$ term in the Higgs potential. Let us approximately describe the deviated Yukawa couplings of $\phi_{e}$ and $\phi_{\mu}$ assuming $\delta v_{e, \mu} \ll v \cos \beta$: \begin{eqnarray} \label{eq;additionalYukawa} {\cal L}'_{\cal T} &=&-Y'^{ij}_e \phi'_e \overline{l^i_L} l^j_R-Y'^{ij}_\mu \phi'_\mu \overline{l^i_L} l^j_R, \\ Y'^{ij}_e&=& (U^{\dagger}_L)^{ik}Y^{kl}_{e}(U_R)^{lj} - \frac{\delta v _e}{v \cos \beta} \frac{\sqrt{2}m_i}{v \cos \beta } \delta_{ij}, \\ Y'^{ij}_\mu&=& (U^{\dagger}_L)^{ik}Y^{kl}_{\mu}(U_R)^{lj} - \frac{\delta v _\mu}{v \cos \beta} \frac{\sqrt{2}m_i}{v \cos \beta } \delta_{ij}. \end{eqnarray} In these expressions, we take the SM limit, in which the SM Higgs around $125$ GeV has no tree-level FCNCs and its Yukawa couplings are the same as the SM ones. The mass bases of $\phi_{e}$ and $\phi_{\mu}$ are then slightly shifted by $\delta v_e$ and $\delta v_\mu$. The mass bases of the CP-even and CP-odd scalars in $\phi'_{e}$ and $\phi'_{\mu}$ may be the same in this limit. 
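As a rough numerical cross-check of the estimate above, one can verify that $\sin\theta_{13}=O(0.1)$ indeed requires $\zeta\langle\Phi\rangle/\Lambda = O(10^{-5})$. The sketch below assumes $\langle H_q\rangle \simeq v = 246$ GeV and drops $\cos\beta$ and $O(1)$ group-theory factors, so it is an order-of-magnitude estimate only:

```python
import math

# Order-of-magnitude check of the quoted estimate zeta/Lambda = O(1e-5) x <Phi>^-1.
# Assumptions (not from the paper's numerics): <H_q> ~ v = 246 GeV, with cos(beta)
# and O(1) Clebsch factors dropped.
v = 246.0      # GeV, electroweak VEV
m_mu = 0.1057  # GeV, muon mass
target_sin_theta13 = 0.1

# sin(theta13) ~ zeta <Phi> <H_q> / (sqrt(2) m_mu Lambda)
# => zeta <Phi> / Lambda ~ sqrt(2) m_mu sin(theta13) / <H_q>
ratio = math.sqrt(2) * m_mu * target_sin_theta13 / v
print(f"zeta*<Phi>/Lambda ~ {ratio:.1e}")  # ~ 6e-5, i.e. O(1e-5)
```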
As we have already discussed, the LFV induced by $\phi_\mu$ exchange is dominated by the LFV $\tau$ decay, $\tau^- \to e^+ \mu^- \mu^-$, which is a ${\cal T}$-conserving process. The extra ${\cal T}$-breaking terms, in addition to the mass mixing in Eq. (\ref{deltam}), enhance the other LFV $\tau$ decays as follows: \begin{eqnarray} {\rm Br} (\tau^- \to e^- \mu^+ \mu^- )&\simeq&\left( \frac{\delta v_e}{ v \cos \beta}\frac{m^2_{\phi_\mu}}{m^2_{\phi_e}} \right )^2 \times {\rm Br} (\tau^- \to e^+ \mu^- \mu^-), \\ {\rm Br} (\tau^- \to e^- e^+ \mu^- )&\simeq&\left \{ \left(\frac{\epsilon_{e\tau}}{m_\tau}+\frac{\epsilon_{\tau\mu}}{m_\mu} \right)\frac{m^2_{\phi_\mu}}{m^2_{\phi_e}}+\frac{\epsilon_{e\mu}}{m_\mu} \right \}^2 \times {\rm Br} (\tau^- \to e^+ \mu^- \mu^-), \\ {\rm Br} (\tau^- \to \mu^- \mu^+ \mu^- )&\simeq& \left( \frac{\epsilon_{e \mu}-\epsilon_{\tau\mu}}{ m_\mu} - \frac{\delta v_\mu}{ v \cos \beta} \right )^2 \times {\rm Br} (\tau^- \to e^+ \mu^- \mu^-). \end{eqnarray} Note that $\epsilon_{ij}$ in these equations include the contributions of the loop corrections, the higher-dimensional operators in Eq. (\ref{zetaterm}), and $(\epsilon^{\delta v}_{e, \mu})_{ij}$ in Eq. (\ref{eq:epsilondeltav}). The suppression factors in these processes can be estimated as $\sin^2 \theta^{\rm lepton}_{13}$, so they are predicted to lie around $ O(10^{-2}) \times {\rm Br} (\tau^- \to e^+ \mu^- \mu^-)$, which is safe with respect to the current experimental bounds as long as the $\tau^- \to e^+ \mu^- \mu^-$ bound is evaded. If the CP-even and CP-odd scalars, especially those in $ \phi'_e$, have different masses, the branching ratio of $\mu \to e \gamma$ would be as discussed in Sec. \ref{sec:T-breaking}. 
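The quoted $O(10^{-2})$ suppression follows directly from the structure of the relations above, since each prefactor is of order $\sin\theta_{13}^{\rm lepton}$. A schematic check, taking $\sin\theta_{13}^{\rm lepton}\sim 0.1$ as an assumed representative value:

```python
# Schematic check: the suppression factors of the extra LFV tau decays relative
# to tau- -> e+ mu- mu- scale as sin^2(theta13); with sin(theta13) ~ 0.1 this
# gives the O(1e-2) suppression quoted in the text.
sin_theta13 = 0.1  # assumed representative value, not a fit result
suppression = sin_theta13 ** 2
print(f"suppression ~ {suppression:.0e}")
```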
Moreover, the branching ratios of $\tau \to e \gamma$ and $\tau \to \mu \gamma$ would also be enhanced by the nonzero $\sin \theta^{\rm lepton}_{13}$: \begin{eqnarray} {\rm Br} (\tau^- \to e^- \gamma )&\simeq& \left( \frac{\epsilon_{e \tau}}{ m_\tau} - \frac{\delta v_e}{ v \cos \beta} \right )^2 \times \frac{m^2_{\tau}}{m^2_\mu}\times {\rm Br} (\mu^- \to e^- \gamma), \\ {\rm Br} (\tau^- \to \mu^- \gamma )&\simeq& \left( \frac{\epsilon_{e \tau}}{ m_\tau} - \frac{\delta v_e}{ v \cos \beta} \right )^2 \times {\rm Br} (\mu^- \to e^- \gamma). \end{eqnarray} Eventually, ${\rm Br} (\tau^- \to e^- \gamma )$ is predicted to be comparable with ${\rm Br} (\mu^- \to e^- \gamma )$, so the constraints from the exotic $\tau$ decays are less serious than the one from $\mu^- \to e^- \gamma$. \section{Summary} \label{section6} The origin of the flavor structure of the fermions in the SM is one of the mysteries that have been discussed for a long time. The SM gauge interactions are universal across the generations, so a flavor symmetry rotating the generations can be found if we ignore the Yukawa couplings, which generate the mass matrices of the fermions upon spontaneous EW symmetry breaking. This fact may suggest that the flavor symmetry exists at high energy and that the observed mass matrices are generated dynamically. Besides, we may be able to find some fragments of the flavor symmetry in the SM. The lepton sector, in particular, may still hold some remnant of the flavor symmetry respected at the high-energy scale. Such remnant symmetries may give hints not only for model building but also for how to probe flavor-symmetric models in flavor physics. In this paper, we investigated flavor physics in models with a flavor symmetry, ${\cal G}$, in a quite general manner. In our setup, only leptons are charged under ${\cal G}$, and extra Higgs doublets are introduced so that the Yukawa couplings respect the flavor symmetry. 
The Higgs doublets are assumed to belong to non-trivial irreducible representations of ${\cal G}$. Then ${\cal G}$ is spontaneously broken by the VEVs of the ${\cal G}$-triplet Higgs bosons, $H_i$ and $\Phi$. Some remnant symmetry is left after the symmetry breaking: ${\cal T}$ is conserved in the charged lepton sector and ${\cal S}$ in the neutrino sector. This framework has been used to realize specific neutrino mixing patterns such as the Bi maximal or the Tri-Bi maximal mixing. The leptophilic ${\cal G}$-triplet $H_i$ breaks ${\cal G}$ to ${\cal T}$, and the EW singlet $\Phi_i$, which couples only to neutrinos, breaks ${\cal G}$ to ${\cal S}$. The symmetry ${\cal T}$ plays a crucial role in the control of the FCNCs, although it is not respected in the full Lagrangian. ${\cal T}$-breaking terms would appear in the Higgs potential and the neutrino mass matrix, but they can also be kept under control once we assume the vacuum alignment of $H_i$ and $\Phi_i$. In our study, we specialized to ${\cal T}=Z_3$ and investigated the constraints from the LFV processes. In the basis in which the ${\cal T}$ generator is diagonal, the charged leptons are mass eigenstates, while the details of the Higgs potential need to be analyzed to determine the mass eigenstates of the Higgs bosons. The charged leptons and the mass eigenstates of the leptophilic Higgs bosons can be classified into trivial and non-trivial ${\cal T}$ singlets after the ${\cal G}$ symmetry breaking, where only the trivial ${\cal T}$-singlet Higgs fields develop VEVs. Then ${\cal T}$-charge conservation constrains the form of the interactions involving charged leptons: the Yukawa couplings of the charged leptons and scalars are dictated by the remnant symmetry. In particular, ${\cal T}$-charged Higgs bosons can have FCNCs and cause multi-leptonic decays and flavor non-universal gauge couplings. 
We considered the scenario in which the $\tau$ and $\mu$ leptons carry non-trivial ${\cal T}$ charges, so that the LFV $\tau$ decay, $\tau^- \to e^+ \mu^- \mu^- $, is predicted through ${\cal T}$-charged scalar exchange. The flavor violating scattering, $e^+e^- \to \tau^+ \tau^-$, could also be sizable, so we investigated the resulting constraints. The masses of the ${\cal T}$-charged scalars are expected to be around the EW scale, so we conclude that $\tan \beta$ should be less than $O(10)$. On the other hand, the neutrinophilic scalar, $\Phi_i$, breaks ${\cal T}$ in the neutrino sector, and this ${\cal T}$ breaking would propagate into the charged lepton sector through the interactions between $H_i$ and $\Phi_i$ in the Higgs potential. In fact, flavor changing processes that do not conserve ${\cal T}$ charges, such as $\mu\to e\gamma$ and $\tau\to e\gamma$, occur at the one-loop level; these are important when the model predictions are compared with the experimental bounds. In addition, the muon anomalous magnetic moment could be enhanced, although a large enhancement is excluded by the bound from the exotic $\tau$ decay. Figs. \ref{fig3} and \ref{fig4} show that $\mu \to e \gamma$ and the electron EDM strongly constrain our models if ${\cal T}$-breaking terms in the Higgs potential are allowed: $\tan \beta \lesssim 10$. In other words, these observables, as well as $\tau^- \to e^+ \mu^- \mu^- $, are the most relevant to our flavor models. In the type of models discussed in this paper, neutrino mixing with $\theta_{13}=0$ tends to be predicted because the Bi maximal or Tri-Bi maximal mixing is realized. The remnant symmetry breaking effects may then be required to alter this mixing pattern. In Sec. \ref{section5}, we discussed the case in which the remnant symmetry ${\cal T}$ is slightly broken in the charged lepton sector. As mentioned above, the ${\cal T}$-breaking effect caused by the VEVs of $\Phi_i$ enters the charged lepton sector through loop processes involving the scalars. 
However, this contribution is too small to achieve the observed value of $\theta_{13}$. We therefore also considered additional ${\cal T}$-breaking effects to realize the large $\theta_{13}$. Such newly added ${\cal T}$-breaking terms also contribute to FCNCs, so we pointed out the correlation between $\theta_{13}$ and the FCNCs caused by the additional ${\cal T}$-breaking effects. For instance, the branching ratios of $\tau^- \to e^- \mu^+ \mu^- $, $e^- e^+ \mu^-$ and $\mu^- \mu^+ \mu^-$ become almost the same order as Br$(\tau^- \to e^+ \mu^- \mu^- )$ suppressed by $O(10^{-2})$, and Br($\tau^- \to e^- \gamma$) may be the same order as Br($\mu^- \to e^- \gamma$). The process $\tau^- \to e^+ \mu^- \mu^-$ is the most important in our model, but these predictions would also be useful to test our flavor models. In this study, we take the SM limit, so the SM-like Higgs around $125$ GeV does not have FCNCs. As discussed recently in Refs. \cite{Omura, hmutau}, it may be interesting to allow tree-level FCNCs involving the SM-like Higgs, motivated by the CMS excess in the $h \to \mu \tau$ channel \cite{Khachatryan:2015kon} as well as the muon anomalous magnetic moment \cite{Omura}. However, our model would not enhance $(g-2)_{\mu}$ because of the chirality structure of the FCNCs; the remaining possibility is to consider a very light pseudoscalar and large $\tan \beta$ to account for the discrepancy in $(g-2)_{\mu}$ \cite{leptophilic2HDM}. Such a parameter set would require tuning of the ${\cal T}$-breaking terms and large mass differences among the scalars to evade the strong bounds from $\mu \to e \gamma$, the electron EDM and $\tau^- \to e^+ \mu^- \mu^-$. Note that we studied flavor physics assuming the relation in Eq. (\ref{A4-relation}) and ${\cal T}=Z_n$ ($n\neq 2$). These conditions may affect our predictions, so we will discuss other possibilities, such as ${\cal T}=Z_2$, in the future. 
\acknowledgments This work is supported by Grant-in-Aid for Scientific research from the Ministry of Education, Science, Sports, and Culture (MEXT), Japan, No. 23104011 (for Y.O.) and No. 25400252 (for T.K.).
\section{Introduction}Nonreciprocal optical devices (NRODs), such as optical isolators and circulators, enforcing unidirectional light propagation, have become key components for modern photonic applications~\cite{Millot2016NP, Zhang2022Nture, Jiho2019Science, Lai2020NP}, quantum information processing~\cite{Daiss2021, Wallucks2020, 2008The, Cirac1997, Andre2017}, quantum sensing~\cite{PhysRevA.103.042418} and nanofabrication of optical crystals~\cite{ZhangYongNature}. NRODs need to break the electromagnetic time-reversal symmetry. A standard approach to implement NRODs is based on the Faraday effect using a DC magnetic bias in a magneto-optical material~\cite{Ren2022}. However, a magneto-optical system is typically incompatible with integrated photonic technologies and magnetically sensitive applications, introduces large loss to the signal, and requires a strong magnetic field. Various schemes for magnetic-free nonreciprocity have been proposed and demonstrated by using spatiotemporal modulation~\cite{Lira2012,Estep2014,Sounas2017, NaturePhotonics.8.701, NaturePhotonics.15.828}, nonlinear resonator modes~\cite{RN43, Fan2012, Alu2022arxiv}, quantum nonlinearity~\cite{ZTC2019PRL, AtomicMirrorPRL, Blatt2011PRL}, nonlinearity in a parity-time-symmetry-broken system~\cite{JXS2014NP, YL2014NP, XM2018PRL, PhysRevA.99.053806}, photon-phonon coupling~\cite{Kittlaus2021, DCH2016NP, 2011NP, Gaurav2018NP, Gaurav2015NP}, moving lattice~\cite{WDW2013PRL, WJH2013PRL}, spinning resonators~\cite{JH2018Nature, JH2020PRL, JH2018PRL}, wave-mixing processes in nonlinear media~\cite{FSH2009NP, Song2021, Yang2022, JXS2016NC}, chiral quantum optical systems~\cite{XKY2014PRA, Arno2016PRX, Arno2016Science, Peter2015NNT, TL2019PRA, Tang2022PRL, XKY2018NP, XKY2018PRL, XKY2021SciAdv, XKY2020PRL, WHD2022LPR, ZCL2021NC, arxiv2210.07038, Philipp2022NP, TL2022AQT}, chiral valley systems~\cite{Guddala2021} and unidirectional quantum squeezing of microring resonator modes~\cite{Tang2022a, 
ZCL2016PRL}. In addition, phononic and microwave nonreciprocity have been reported and have their own applications~\cite{Xu2020,Li2011, Wang2019, Huang2016,PhysRevApplied.10.064037}. However, these approaches require high-quality cavities to enhance the light-matter interaction or narrow-band auxiliary systems, such as alkali atoms or mechanical modes. As a result, their nonreciprocal bandwidth is strongly limited. Nonlinearity-based NRODs have attracted significant attention because they are compatible with integrated photonics and can operate in the absence of an external bias. Nevertheless, dynamic reciprocity imposes a fundamental constraint on their practical applications~\cite{Shi2015}. The chiral Kerr nonlinearity can tackle the problem of dynamic reciprocity~\cite{XKY2018PRL, SBS2020PRR, Leonardo2018Optica, PanRuikaiCPL}. However, nonlinear NRODs often require nonlinear resonators, which have a very limited bandwidth. In this theoretical work, we propose a passive, bias-free optical isolator for an ultrashort laser pulse, based on a nonlinear medium with asymmetric inputs exhibiting unidirectional self-induced transparency (SIT). This cavity-free isolator can enable unidirectional propagation of a $5~\femto\second$ laser pulse. A $57~\nano\meter$ bandwidth can be obtained for $20~ \deci\bel$ isolation. \section{System and model}Our proposed optical isolator consists of a 1D two-level atom (TLA) medium and a normally absorbing (NA) medium, as schematically depicted in Fig.~\ref{fig1}. The TLA medium is followed by an air region and then by the NA medium at the right-hand end. The NA medium absorbs the laser field. For simplicity of the theoretical analysis, we consider the TLA and NA media to be surrounded by air. In practice, the TLA medium can be a semiconductor~\cite{Polu1975Self,Wu2011}, an ensemble of atoms~\cite{Gibbs1972}, or semiconductor quantum dots doped in an optical material~\cite{PRL94,PRL81,APL10,Kim2022}. 
The NA medium can immediately follow the TLA medium. The air can also be replaced with a transparent optical medium. \begin{figure} \centering \includegraphics[width=1.0\linewidth]{Figure1.jpg} \\ \caption{Schematic of the proposed optical isolator consisting of a TLA medium and a NA medium. The forward-moving femtosecond laser pulse first passes the TLA medium without loss due to the SIT and then is partly absorbed by the NA medium (red pulses). The backward-moving pulse (blue pulses) first decays to an area smaller than $\pi$ due to the absorption in the NA medium, and is then strongly absorbed in the TLA medium according to Beer's law~\cite{AllenBook}.} \label{fig1} \end{figure} We use a two-level model for the TLA medium. The transition between the ground state $|g\rangle$ and the excited state $|e\rangle$ of the TLA system has a resonance frequency $\omega_0$ and an electric dipole moment $\vec{d}$ with amplitude $d$. The population relaxation time and the decoherence time are $T_1$ and $T_2$, respectively. We assume that the laser field is polarized along $\vec{d}$ with angular frequency $\omega_L$. At position $z$ and time $t$, its envelope is $\tilde{E}_x(z, t)$, with its area defined as $A = \int_{-\infty}^\infty [d \tilde{E}_x(z, t)/\hbar]\, dt$~\cite{AllenBook}. The pulse energy is calculated as $U_p(z) = \frac{ \varepsilon_0}{2} \int_{-\infty}^\infty \tilde{E}^2_x(z, t) dt$, with $\varepsilon_0$ being the vacuum permittivity. The propagation of a short laser pulse with a duration $\tau_p \ll T_2$ in a TLA medium depends crucially on its area $A$~\cite{AllenBook}. Due to the strong nonlinear interaction, an ultrashort laser pulse with $A = 2\pi$ and $\omega_L = \omega_0$ drives a full Rabi oscillation of the TLA medium. This process takes place on a time scale much shorter than the relaxation and decoherence times. During propagation, the first half of the $2\pi$ laser pulse coherently transfers energy to the atoms and completely inverts the atomic population. 
Then, after being ``pulled'' back to the ground state by the second half of the pulse, the atoms coherently return the energy to the laser pulse. Thus, the $2\pi$ laser pulse propagates as if the nonlinear medium were transparent, maintaining its area of $2\pi$, and can be considered a light soliton with very small energy loss. This SIT phenomenon has been demonstrated in various experiments~\cite{Carmon2000OL, DN2003OL, Rotschild2006OL} and by numerical simulations based on the Maxwell-Bloch equations~\cite{McCall1967PRL, Hughes1998PRL, Daniel1995PRA, Xia2005, Xia2007, Xia2007a, xia2007b, xie2010}. When $\pi < A < 2\pi$, the pulse evolves into a $2\pi$ pulse at the expense of losing some energy. If $A < \pi$, the pulse is strongly absorbed: its total intensity and energy decrease exponentially according to Beer's law~\cite{AllenBook}. A $\pi$ pulse is unstable and decays during propagation. The key idea of our isolator is as follows. When an ultrashort pulse with $A \sim 2\pi$ enters the device from the left end in the forward case, the TLA medium is mostly transparent because of the SIT. The pulse then leaves the device after a small absorption in the NA medium, so the forward transmission is high. In contrast, in the backward case, the pulse first enters the right end of the NA medium. It is partly absorbed by the NA medium and then suffers a Beer-law decay in intensity (energy) inside the TLA medium. Thus, the asymmetric arrangement of the NA medium causes unidirectional SIT, implying a strong bias-free optical nonreciprocity. We now model the propagation of the laser pulse $E_x (t)$ in the TLA medium with the coupled Maxwell-Bloch equations. The TLA-field interaction is described, without the rotating-wave approximation or the slowly varying envelope approximation, by the Hamiltonian \begin{equation} H = \left( \begin{matrix} \omega_0/2 & d E_x(t)/\hbar \\ d E_x(t) /\hbar & - \omega_0/2 \end{matrix} \right)\;. 
\end{equation} We consider a 1D continuous TLA medium with a number density $N_a$. Maxwell's equations for the electric field $E_{x}$ and the magnetic field $B_y$ take the form \begin{subequations}\label{eq:Maxwell} \begin{align} \partial_t B_y & = - \partial_z E_x \;,\\ \partial_t D_x & = -\frac{1}{\mu_0}\partial_z B_y \;, \end{align} \end{subequations} where $\mu_0$ is the vacuum magnetic permeability. The nonlinear response of the TLA medium is taken into account via the relation $D_x = \varepsilon_0 E_x + P_x$, where the macroscopic polarization $P_x = - N_a d u$ is connected with the off-diagonal density matrix element $\langle e| \rho |g\rangle = (u + i v)/2$. $P_x$ can be obtained from the Bloch equations derived from $H$, \begin{subequations}\label{eq:Bloch} \begin{align} \partial_t u & = -\frac{1}{T_2}u + \omega_0 v \;,\\ \partial_t v & = - \omega_0 u -\frac{1}{T_2}v + 2\frac{d E_x}{\hbar} w \;,\\ \partial_t w & = - 2\frac{d E_x}{\hbar} v - \frac{1}{T_1}(w-w_0) \;, \end{align} \end{subequations} where $w$ is the population difference between the states $|g\rangle$ and $|e\rangle$. Note that $u$, $v$, and $w$ satisfy the relation $u^2+v^2+w^2=1$ in the absence of relaxation. This model reproduces experimental results well~\cite{AllenBook, rfSIT} and has been widely used to study the time evolution of a femtosecond laser pulse in a nonlinear medium~\cite{RN2, FX2002PRL}. \section{Numerical method}We employ the standard finite-difference time-domain (FDTD) method and the fourth-order Runge-Kutta method to solve the coupled Maxwell-Bloch equations~\cite{Hughes1998PRL, Xia2007, Xia2007a, xia2007b}. The Mur absorbing boundary condition is used to avoid reflection off the truncated boundary of the FDTD simulation domain~\cite{Mur1981}. A source laser pulse is introduced into the air region at $z_\text{f} = 15~\micro\meter$ in the forward case and at $z_\text{b} = 255~\micro\meter$ in the backward case. 
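As an aside, the area theorem encoded in the Bloch equations can be illustrated with a minimal sketch: the code below integrates an on-resonance, rotating-wave reduction of Eqs. (\ref{eq:Bloch}) (relaxation dropped, since $\tau_p \ll T_1, T_2$) for a $2\pi$ sech pulse with the parameters used in the text, and the population difference $w$ returns to its ground-state value $-1$ after the pulse. This is a simplified check, not the full non-RWA FDTD model solved here:

```python
import numpy as np

# On-resonance, rotating-wave reduction of the Bloch equations (relaxation
# neglected): dv/dt = Omega(t)*w, dw/dt = -Omega(t)*v, with Omega = d*E_env/hbar.
# Parameters follow the text: d = 2e-29 C*m, tau_p = 5 fs, E0 ~ 3.7e9 V/m (2*pi pulse).
hbar = 1.0545718e-34
d, tau_p, E0 = 2e-29, 5e-15, 3.7e9
tau = 5 * tau_p  # pulse delay, so the field is negligible at t = 0

def rabi(t):
    # instantaneous Rabi frequency of the sech envelope
    return d * E0 / np.cosh(1.76 * (t - tau) / tau_p) / hbar

def deriv(state, t):
    v, w = state
    om = rabi(t)
    return np.array([om * w, -om * v])

t = np.linspace(0.0, 10 * tau_p, 20001)
dt = t[1] - t[0]
state = np.array([0.0, -1.0])  # (v, w): medium initially in the ground state
for ti in t[:-1]:
    # classical fourth-order Runge-Kutta step
    k1 = deriv(state, ti)
    k2 = deriv(state + 0.5 * dt * k1, ti + 0.5 * dt)
    k3 = deriv(state + 0.5 * dt * k2, ti + 0.5 * dt)
    k4 = deriv(state + dt * k3, ti + dt)
    state = state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

area = np.sum(rabi(t)) * dt  # pulse area A = integral of Omega dt
print(f"area/pi = {area/np.pi:.3f}, final w = {state[1]:.4f}")
```

With the full non-RWA equations, carrier-wave corrections slightly perturb this return for a few-cycle pulse, which is one reason the text integrates the complete Maxwell-Bloch system numerically.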
It is assumed to be a sech pulse defined as $E_x(t) = \tilde{E}_x(t,\tau) \sin (\omega_\text{L} (t-\tau))$ with $\tilde{E}_x(t, \tau) = E_0 \text{sech}[1.76 (t-\tau)/\tau_p]$, where $E_0$ is the amplitude. The delay $\tau$ is much larger than the duration $\tau_p$ to ensure that the electric field is negligible at $t=0$; specifically, $\tau > 4 \tau_p$. The area of the input pulse is $A_0 = dE_0 \tau_p \pi/1.76\hbar$. We now specify the TLA medium and the NA medium. The TLA medium is set at $30\leq z \leq 120~\micro\metre$ and the NA medium at $150 \leq z \leq 240~\micro\metre$. These are separated by $30 ~\micro\meter$ of air, so that we can monitor the transmitted field between the media. The remaining regions are air. The time-dependent transmitted pulses are retrieved at $z = 15~\micro\meter$ in the backward case and at $z = 255~\micro\meter$ in the forward case. The NA medium causes a decay of $2.95~\deci\bel$ in the total field energy. We choose the atomic parameters $N_a = 3\times10^{25}~{\metre}^{-3}$, $\omega_0=2.3\times10^{15}~\hertz$ ($\lambda_0=820~\nano\meter$), $d=2\times10^{-29}~\coulomb \cdot \meter$, $T_1=0.5~\pico\second$ and $T_2=0.25~\pico\second$. The TLA medium is initialized in the ground state such that $u=0$, $v=0$, and $w_0= -1$ at $t=0$. \begin{figure} \centering \includegraphics[width=1.0\linewidth]{Figure2.jpg} \\ \caption{Evolution of a $2\pi$ femtosecond laser pulse with $\tau_p = 5~\femto\second$ in the forward case (a) and the backward case (b). (c) Energy evolution of the forward- (red curve) and backward-moving (blue curve) laser pulses. Dotted curves are fits with exponential decays.} \label{fig2} \end{figure} Without loss of generality, we adopt the typical duration $\tau_p = 5~\femto\second$ for a resonant femtosecond laser pulse ($\omega_\text{L} = \omega_0$), spanning a $200~\tera\hertz$ bandwidth. 
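The quoted parameters can be cross-checked against each other: $\lambda_0 = 2\pi c/\omega_0$ reproduces the stated wavelength, and inverting the area formula $A_0 = dE_0\tau_p\pi/1.76\hbar$ for $A_0 = 2\pi$ gives the field amplitude quoted in the text. A quick sketch:

```python
import math

# Consistency check of the quoted TLA parameters and the 2*pi field amplitude.
hbar, c = 1.0545718e-34, 2.998e8
omega0 = 2.3e15        # rad/s, transition frequency from the text
d = 2e-29              # C*m, dipole moment
tau_p = 5e-15          # s, pulse duration

lam0 = 2 * math.pi * c / omega0          # wavelength: ~820 nm
# A0 = d*E0*tau_p*pi/(1.76*hbar) = 2*pi  =>  E0 = 2*1.76*hbar/(d*tau_p)
E0_2pi = 2 * 1.76 * hbar / (d * tau_p)   # ~3.7e9 V/m
print(f"lambda0 = {lam0*1e9:.0f} nm, E0(2pi) = {E0_2pi:.2e} V/m")
```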
For $\tau_p = 5~\femto\second$, the pulse area reaches $A_0=2\pi$ when $E_0 \approx 3.7\times10^9~\volt\per\meter$. We are interested in laser pulses with $A_0 < 3\pi$, corresponding to a peak field intensity of less than $4\times 10^{12}~\watt/{\centi\meter}^{2}$. We evaluate the performance of the optical isolator by comparing the energies of the input and transmitted laser fields. The forward and backward transmissions are calculated as \begin{subequations} \label{eq:Transmissions} \begin{align} T_\text{f} = U(z=240~\micro\metre)/U(z=30~\micro\metre) \;, \\ T_\text{b} = U(z=30~\micro\metre)/U(z=240~\micro\metre) \;. \end{align} \end{subequations} In calculating the pulse energy and the transmissions, we truncate the integral time limits because the energy of the laser pulses outside the limits is negligible. The isolation contrast is defined as $\eta = 10 {\log}_{10} (T_\text{f}/T_\text{b})$. \begin{figure} \centering \includegraphics[width=1\linewidth]{Figure3.jpg} \\ \caption{ Transmission and isolation contrast of the optical isolator versus the pulse area $A$ and energy for a $5~\femto\second$ laser pulse.} \label{fig3} \end{figure} \section{Theoretical Results}We first investigate the propagation of a $2\pi$ laser pulse excited by the source at either $z_\text{f}$ or $z_\text{b}$. The nonreciprocal propagation can be seen in Fig.~\ref{fig2}. In the forward case, see Fig.~\ref{fig2}(a), the laser pulse enters the TLA medium at about $t=100~\femto\second$ with negligible reflection and leaves at $t=400~\femto\second$ with its pulse shape unchanged. The laser energy decays to $79.1\%$ of the initial energy $U_0$ due to the fast atomic dissipation. After the nonlinear medium, the pulse energy further decays by about $2.95~\deci\bel$ due to the absorption of the NA medium. The laser pulse transmitted through the device retains $40.2\%$ of its energy, i.e., $T_\text{f} = 40.2\%$. 
Nevertheless, the pulse shape remains unchanged after passing through the isolator. In the backward case, see Fig.~\ref{fig2}(b), a laser pulse with the same area is excited by the source at $z_\text{b}$. In stark contrast, the backward-moving laser pulse is first absorbed to an area well below $2\pi$, corresponding to $50.8\%$ of the initial energy, and then enters the TLA medium at $t = 600~\femto\second$. The remaining pulse area is about $1.4\pi$. During propagation, the pulse quickly splits into two parts: a fast-decaying strong pulse and a slowly decaying weak one with an oscillating envelope. According to Beer's law, the TLA medium strongly absorbs the main pulse but leaves the negligibly small oscillating one, which corresponds to a $2\pi$ pulse and decays more slowly than the Beer-law rate. When the laser field exits the TLA medium at $t = 900~\femto\second$, only $1.48\%$ of the pulse energy remains, corresponding to $T_\text{b} = 1.48\%$. This nonreciprocal energy evolution of the laser pulses due to the unidirectional SIT during propagation is clearly shown in Fig.~\ref{fig2}(c). The obtained isolation contrast and insertion loss are $14.3~\deci\bel$ and $3.96~\deci\bel$, respectively. The insertion loss in the forward case results mainly from the absorption in the NA medium. \begin{figure} \centering \includegraphics[width=1.0\linewidth]{Figure4.jpg} \\ \caption{Field distribution of a $5~\femto\second$ laser pulse (blue curves) in the TLA medium and the population difference $w$ (brown curves) at different times. Red and blue arrows indicate the forward and backward moving directions.} \label{fig4} \end{figure} Therefore, our proposed optical isolator enables nonreciprocal propagation of a femtosecond laser pulse spanning over $200~\tera\hertz$. 
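The quoted figures are mutually consistent: with $T_\text{f}=40.2\%$ and $T_\text{b}=1.48\%$, the isolation contrast and forward insertion loss follow directly from their logarithmic definitions. A quick check:

```python
import math

# Consistency of the quoted transmissions with the isolation contrast and
# insertion loss (all input values taken from the text).
T_f, T_b = 0.402, 0.0148
eta = 10 * math.log10(T_f / T_b)        # isolation contrast in dB
insertion_loss = -10 * math.log10(T_f)  # forward insertion loss in dB
print(f"eta = {eta:.1f} dB, insertion loss = {insertion_loss:.2f} dB")
```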
The discrepancy between the simulated energy evolution and the exponential-decay fit in the TLA medium is due to the slowly decaying, envelope-oscillating $2\pi$ pulse, which carries a small fraction of the total energy. The transmission and isolation contrast versus the area of the input laser pulse are shown in Fig.~\ref{fig3}. Here, the isolation contrast is calculated for the whole femtosecond laser pulse. When $2\pi < A < 3\pi$, the forward transmission is stable and high, larger than $40\%$. We obtain the maximal isolation contrast of $17.1~\deci\bel$ when $A = 2.25 \pi$. However, when $A>2.25 \pi$, the backward transmission quickly increases to a high level. According to the ``area theorem''~\cite{AllenBook}, if the area of a femtosecond laser pulse is less than $\pi$ when it enters the TLA medium, its energy can be completely absorbed in the TLA medium. However, when $A>2\pi$, the laser pulse splits into multiple daughter pulses, although each daughter pulse still undergoes SIT. To guarantee a high, shape-maintaining transmission in the forward case, the pulse area upon entering the TLA medium should not exceed $2\pi$, to avoid splitting. Thus, the nonreciprocal energy range is narrow. Note that there is a small reflection caused by the air-medium interface. We can attain high-performance isolation for $1.81 \pi < A< 2.25 \pi$. Dynamic nonreciprocity is also obtained when two laser pulses simultaneously propagate in the TLA medium along opposite directions. We study two counter-propagating $2\pi$ laser pulses as an example. To allow the two laser pulses to meet in the medium, we launch the backward-moving pulse about $500~\femto\second$ earlier than the forward-moving pulse. Figure~\ref{fig4} shows the normalized electric fields of the pulses and the population inversion $w$. When the forward- and backward-moving pulses meet, the population inversion $w$ oscillates quickly. 
However, the two laser pulses pass through each other as two light solitons without observable interference. The transmission of the backward-moving pulse is much smaller than that of the forward-moving pulse. These results confirm that our proposed \emph{optical isolator displays dynamic nonreciprocity}. \begin{figure} \centering \includegraphics[width=1\linewidth]{Figure5.jpg} \\ \caption{(a) Spectra of the input pulse (dotted black curve), the forward-moving (solid red curve) and the backward-moving (dashed blue curve) pulses transmitted through the optical isolator. (b) Isolation contrast versus the wavelength.} \label{fig5} \end{figure} We study the transmission spectra and the isolation contrast versus the frequency (wavelength) in Fig.~\ref{fig5}. The pulse spectra are obtained by the Fourier transform of the corresponding fields and normalized with respect to the peak value of the input pulse spectrum. We take the times at which the laser field vanishes as the lower and upper temporal boundaries of the Fourier transform. As shown in Fig.~\ref{fig5}(a), after transmission through the optical isolator, the forward-moving laser pulse retains a spectral profile similar to that of the input pulse, whereas a deep dip appears at the spectral center of the backward-moving pulse due to the strong Beer-law absorption of the TLA medium. The isolation contrast around the dip is shown in Fig.~\ref{fig5}(b). We obtain a maximum contrast of $28.1~\deci\bel$ at $\omega_\text{L}$ and a nonreciprocal bandwidth of $57~\nano\meter$ for $\eta \geq 20~\deci\bel$. The isolation can be improved to a much higher value by using a sufficiently long TLA medium, because the decay of the forward-moving laser pulse is mostly determined by the NA medium and thus fixed, while the backward-moving laser pulse, with an area $A<\pi$ upon entering the TLA medium, can be completely absorbed. 
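The Fourier-transform step can be sketched as follows for the input sech pulse (a minimal illustration only; the text applies the same procedure to the simulated output fields):

```python
import numpy as np

# Spectrum of the input sech pulse via FFT, as used for the spectra in Fig. 5.
tau_p, omega_L = 5e-15, 2.3e15  # pulse duration and carrier frequency from the text
tau = 5 * tau_p                  # delay, so the field is negligible at the edges
t = np.linspace(0.0, 40 * tau_p, 2**14)
E = np.sin(omega_L * (t - tau)) / np.cosh(1.76 * (t - tau) / tau_p)
spec = np.abs(np.fft.rfft(E))
freq = np.fft.rfftfreq(t.size, t[1] - t[0])  # Hz
peak = freq[np.argmax(spec)]
print(f"peak at {peak/1e12:.0f} THz")  # carrier: omega_L/2pi ~ 366 THz (~820 nm)
```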
Here, we choose a $90~\micro\meter$ nonlinear medium to demonstrate the concept of our isolator while maintaining a reasonably high forward transmission. \section{Implementation}SIT has been observed in various media, such as ruby~\cite{Asher1972}, alkali atoms~\cite{Gibbs1972}, Rydberg atoms~\cite{LWB2020PRL}, molecular gases~\cite{Zembrod1971}, semiconductors~\cite{Polu1975Self,Wu2011} and semiconductor quantum dots~\cite{PRL94, PRL81, PRB65, APL10}. Semiconductors possess short relaxation times and comparatively large transition dipole moments. Semiconductor quantum dots can act as artificial TLAs with large dipole moments~\cite{PRB65, APL10}. The conditions required by our scheme can be satisfied with semiconductors~\cite{PRL94, PRL81}. Thus, an on-chip optical isolator based on unidirectional SIT can be integrated on a semiconductor platform. \section{Conclusion and discussion}In summary, we have shown unidirectional SIT as a novel mechanism for realizing a passive, bias-free optical isolator. This isolator displays dynamic nonreciprocity over an ultrabroad bandwidth comparable with that of a magneto-optical nonreciprocal device. This ultrabroadband nonreciprocal device has the potential to be integrated on a chip in a semiconductor platform and can thus boost applications of integrated photonics and femtosecond lasers. Note that SIT of a few-cycle rf pulse has been experimentally observed in rubidium atoms and reproduced by solving the Bloch equations~\cite{rfSIT}. Therefore, the concept of our optical isolator can be extended to the rf regime. If a TLA medium with small dissipation is available~\cite{LWB2020PRL}, our method can also isolate the reflection of a long light pulse. \section*{Acknowledgements} H.W. thanks Lei Tang for helpful discussions. This work was supported by the National Key R\&D Program of China (Grants No. 2019YFA0308700 and No. 2019YFA0308704), the National Natural Science Foundation of China (Grant No. 
11874212 and No.11890704), the Fundamental Research Funds for the Central Universities (Grant No. 021314380095), the Innovation Program for Quantum Science and Technology (Grant No. 2021ZD0301400), the Program for Innovative Talents and Teams in Jiangsu (Grant No. JSSCTD202138). F.N. is supported in part by: Nippon Telegraph and Telephone Corporation (NTT) Research, the Japan Science and Technology Agency (JST) [via the Quantum Leap Flagship Program (Q-LEAP), and the Moonshot R\&D Grant Number JPMJMS2061], the Japan Society for the Promotion of Science (JSPS) [via the Grants-in-Aid for Scientific Research (KAKENHI) Grant No. JP20H00134], the Army Research Office (ARO) (Grant No. W911NF-18-1-0358), the Asian Office of Aerospace Research and Development (AOARD) (via Grant No. FA2386-20-1-4069), and the Foundational Questions Institute Fund (FQXi) via Grant No. FQXi-IAF19-06.
\section{Introduction} As an efficient method to trace the source of leakage, digital watermarking technology has been widely studied, and there have been many excellent works in image \cite{fang2018screen}, audio \cite{erfani2016audio}, and video \cite{asikuzzaman2017overview} watermarking. The two most important properties that audio watermarking should satisfy are fidelity and robustness. Fidelity ensures the normal use of the watermarked audio, while robustness guarantees that even if the audio is distorted (MPEG encoding, noise addition, audio re-recording, etc.), the embedded watermark can still be extracted losslessly \cite{wu1999robust}. Most traditional audio watermarking methods are concerned with robustness against digital distortions in the electronic channel, because most audio copying occurs in the digital channel. However, with the miniaturization of recording devices, audio re-recording (AR) has become a more convenient and covert way to copy audio. Because AR effectively preserves the audio content while significantly damaging the embedded watermark signal, attackers can easily and stealthily steal the audio content without leaving any watermark evidence, as shown in \Fref{fig:AR}. Therefore, robustness against AR has become an urgent requirement for audio watermarking. \begin{figure}[t] \centering \includegraphics[scale=0.5]{figure/background.pdf} \caption{The re-recording operation preserves the content information while destroying the watermark information in the audio, hiding the source of the leakage.} \label{fig:AR} \end{figure} Currently, the research field of audio watermarking is still dominated by traditional mathematical algorithms, which try to find a feature that is invariant before and after distortion and embed the watermark there. 
Most such features lie in the transform domain, e.g., the discrete cosine transform (DCT), discrete wavelet transform (DWT), and fast Fourier transform (FFT) \cite{su2018snr, xiang2017spread, natgunanathan2012robust, liu2018patchwork}. However, due to the complexity of the AR process itself, quantitatively and delicately analyzing its distortions and finding a robust feature is a nontrivial task, and none of the existing algorithms resists AR distortion well. This drives the demand for a learnable feature that is adapted to AR distortions. In recent years, deep-learning-based watermarking has achieved tremendous success in the field of image and video watermarking \cite{zhu2018hidden,tancik2020stegastamp,zhang2019robust,luo2021dvmark}. The main backbone of such a framework is an end-to-end auto-encoder-like architecture that contains an encoder, a distortion layer, and a decoder. The central component for ensuring robustness is the distortion layer, which generates distorted images for training \cite{zhu2018hidden}; it is essential that this layer be differentiable. However, the AR process is complex and non-differentiable. Although some studies have applied deep learning to information hiding in the audio field \cite{kreuk2020hide,jiang2020smartsteganogaphy}, they do not address robustness against distortions, including AR distortion. To address robustness against AR distortion, in this paper we propose DeAR, a \underline{de}ep-learning-based \underline{a}udio \underline{r}e-recording resistant watermarking method, which can effectively resist AR distortion at different distances in the real world. Specifically, we adopt the classic deep-learning-based watermarking framework, in which an encoder and a decoder are jointly trained for watermark embedding and extraction. 
To achieve robustness against re-recording, we first analyze the re-recording process in terms of sound propagation in the air and the processing performed by microphones and speakers. Based on this analysis, we model AR distortion with several differentiable operations (environment reverberation, band-pass filtering, and Gaussian noise) and use them as the distortion layer of the proposed framework. In addition, motivated by traditional audio watermarking, we replace the time-domain signal input with the low-frequency coefficients of the audio for better robustness. Specifically, we introduce a differentiable time-frequency transform and its inverse into the end-to-end training process, which automatically searches for the optimal embedding frequency rather than relying on a pre-defined rule such as Singular Value Decomposition (SVD). Experimental results verify that the proposed method achieves satisfying robustness against audio re-recording at different distances and is also resilient to other common distortions. The primary contributions of our work are summarized as follows: \begin{itemize} \item We propose DeAR, the first deep-learning-based audio watermarking method robust to audio re-recording (AR), which performs watermark embedding and extraction in an end-to-end manner rather than via handcrafted rules. To suit the audio cover, we adapt the framework with an audio frequency transform and one-dimensional convolution operations. \item To achieve robustness against AR, we analyze the distortions induced by AR and model them as a distortion pipeline composed of environment reverberation, band-pass filtering, and Gaussian noise. The whole pipeline is absorbed into the training of DeAR. 
\item Extensive experiments demonstrate that the proposed method achieves robustness against audio re-recording and common electronic-channel distortions while meeting the fidelity requirement. In addition, ablation studies further verify our design and demonstrate the flexibility of DeAR. \end{itemize} \section{Related work} \label{sec:rw} \subsection{Traditional Audio Watermarking} Traditional audio watermarking (AW) mainly embeds watermark information in the time domain or the transform domain. Time-domain AW \cite{cvejic2004increasing, natgunanathan2010novel} is simple and efficient but not robust enough. In comparison, transform-domain AW \cite{bansal2015comparative,su2018snr} can achieve better robustness. For example, Su \textit{et al}. \cite{su2018snr} applied the DWT and DCT to the audio and embedded the watermark information in the low-frequency band to achieve robustness. However, their watermark extraction is non-blind, requiring access to the raw audio, and they did not consider robustness against the re-recording process. Building on the above method, Liu \textit{et al}. \cite{liu2018patchwork} were the first to consider re-recording robustness in a traditional framework. Specifically, they simplified the practical process as idealized additive noise that only affects the amplitude of the target audio; consequently, their method does not perform well in practical scenarios. In fact, re-recording comprises not only additive noise but also other distortions, such as convolutional noise (namely, environment reverberation) and noise induced by microphones and speakers, as demonstrated in many existing works \cite{peddinti2015reverberation, yakura2018robust}. \subsection{Real-world Air Channel Distortion} In this paper, we regard the re-recording process as an air channel distortion. 
Eliminating the side effects of air channel distortion has long been an active topic in automatic speech recognition (ASR), where massive amounts of perturbed data are collected to train noise-adaptive acoustic models; this approach is not applicable to the audio watermarking task. Apart from that, the most closely related field is robust adversarial audio against speech-to-text models, which must remain effective under air channel distortion. The corresponding studies \cite{qin2019imperceptible,yakura2018robust} model these distortions with expectation-over-transformation (EOT) operations incorporated into the optimization of the adversarial audio. Motivated by this, we first analyze the re-recording process in terms of sound propagation in the air and the processing performed by microphones and speakers. We then model re-recording as a distortion pipeline composed of environment reverberation, band-pass filtering, and Gaussian noise, which can easily cooperate with the deep-learning-based audio watermarking framework. \subsection{Deep-learning-based Image Watermarking} In recent years, several deep-learning (DL)-based image watermarking schemes have been proposed. For example, Zhu \textit{et al}. \cite{zhu2018hidden} proposed the first end-to-end image watermarking framework. As shown in \Fref{basic_framework}, this popular framework contains three parts: an encoder responsible for watermark embedding, a differentiable distortion layer simulating the transmission process, and a decoder responsible for watermark extraction. The main challenge is making non-differentiable distortions differentiable, which is necessary for end-to-end training. \begin{figure}[t] \centering \includegraphics[scale=0.48]{figure/basic_framework_3.pdf} \caption{Illustration of the deep-learning-based image watermarking framework.} \label{basic_framework} \end{figure} In \cite{zhu2018hidden}, Zhu \textit{et al}. 
leveraged a differentiable approximation of non-differentiable JPEG compression, which was further improved by subsequent works \cite{ahmadi2020redmark,ying2021hiding}. Building on \cite{zhu2018hidden}, Tancik \textit{et al}. \cite{tancik2020stegastamp} further considered the distortion caused by printing and camera shooting, Jia \textit{et al}. \cite{jia2020rihoop} achieved resilience to 3D rendering, and Zhang \textit{et al}. \cite{zhang2020model} even achieved robustness against model extraction. Inspired by DL-based image watermarking, there have been a few attempts at DL-based audio steganography. For example, Jiang \textit{et al}. \cite{jiang2020smartsteganogaphy} converted raw audio to 2D mel-spectrogram images via the short-time Fourier transform (STFT) to fit the above image watermarking framework. However, the STFT and its inverse intrinsically induce loss of the embedded information. In addition, this method does not consider robustness to any transmission distortion, including audio re-recording (AR). \begin{figure*}[t] \centering \includegraphics[scale=0.58]{figure/framework_edit_5.pdf} \caption{The overall framework of the proposed DeAR.} \label{framework_1} \end{figure*} \section{Method} \label{sec:pm} As shown in \Fref{framework_1}, DeAR is mainly composed of three parts: an encoder for watermark embedding, a decoder for watermark extraction, and a distortion layer that enhances robustness against audio re-recording. All three parts are jointly trained; we introduce each in detail below. \subsection{Watermark Embedding} We use $A$ to represent a single-channel raw audio signal of length $N$. 
Rather than directly feeding the raw audio $A$ into the encoder, we first transfer it into the frequency domain by a differentiable DWT and obtain the corresponding approximate coefficients $A_{ac}$ and detail coefficients $A_{dc}$, \textit{i}.\textit{e}., \begin{linenomath*} \begin{equation} A_{ac}, A_{dc} = \operatorname{DWT}(A), \end{equation} \end{linenomath*} where the length of $A_{ac}$ and $A_{dc}$ is half that of the original audio signal, namely $N/2$. Motivated by traditional audio watermarking, we propose embedding the watermark into the low-frequency part of the raw audio, i.e., using $A_{ac}$ as the audio cover, while $A_{dc}$ is set aside for subsequent audio reconstruction. The goal of the encoder is to embed the watermark information $W$ into $A_{ac}$. As shown in \Fref{framework_1}, we want the encoder $\mathbf{En}$ to generate a stealthy residual $\mathcal{R}$ and stamp it onto the original $A_{ac}$ to generate the watermarked approximate coefficients $A_{ac}^W$, \textit{i}.\textit{e}., \begin{linenomath*} \begin{equation} A_{ac}^W = \mathbf{En}(A_{ac},W)*S + A_{ac}, \end{equation} \end{linenomath*} where $S$ is the strength factor, set to 1 by default. To satisfy the fidelity requirement, we constrain the watermarked approximate coefficients $A_{ac}^W$ to be acoustically consistent with the original $A_{ac}$. To achieve this, we introduce a basic loss $\mathcal{L}_{e}$ for the encoder's training, adopting the widely-used mean square error (MSE), \textit{i}.\textit{e}., \begin{linenomath*} \begin{equation} \begin{aligned} \mathcal{L}_{e} = \operatorname{MSE}(A_{ac},A_{ac}^W) = \frac{2}{N}\sum_{i=1}^{N/2}(A_{ac}(i)-A_{ac}^W(i))^2. 
\end{aligned} \end{equation} \end{linenomath*} To further improve fidelity and minimize the domain gap between $A_{ac}^W$ and $A_{ac}$, we introduce an extra discriminator $\mathbf{D}$ for adversarial training with the encoder $\mathbf{En}$; the adversarial loss $\mathcal{L}_{d}$ pushes $\mathbf{En}$ to embed watermarks such that $\mathbf{D}$ cannot distinguish $A_{ac}^W$ from the watermark-free $A_{ac}$, \textit{i}.\textit{e}., \begin{linenomath*} \begin{equation} \mathcal{L}_{d} = \log(1-\mathbf{D}(A_{ac}^W)). \end{equation} \end{linenomath*} Meanwhile, for $\mathbf{D}$, $\mathcal{L}_{D} = \log(1-\mathbf{D}(A_{ac}))+\log(\mathbf{D}(A_{ac}^W))$. \subsection{Watermark Extraction} Given the watermarked approximate coefficients $A_{ac}^W$, the decoder $\mathbf{De}$ needs to recover a watermark $W^{\prime}$ as consistent as possible with the original watermark $W$. To achieve this, we introduce the watermark loss $\mathcal{L}_w$, namely the MSE between the original watermark $W$ and the extracted watermark $W^{\prime}$, \textit{i}.\textit{e}., \begin{linenomath*} \begin{equation} \mathcal{L}_w = \operatorname{MSE}(W,W^{\prime}). \end{equation} \end{linenomath*} It should be emphasized that we adopt the binary watermark $W\in \{-1,1\}^{L}$ instead of $\{0,1\}^{L}$, which is better for forensics and helps the MSE-based constraint work better. \subsection{Audio Re-recording Modeling} To enhance robustness against the audio re-recording (AR) process, we insert a distortion layer between the encoder and the decoder. As mentioned above, the distortion layer must be differentiable to prevent gradient interruption during end-to-end learning. However, the AR process is complex and non-differentiable. 
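As a minimal numerical sketch of the embedding round trip described above, the following uses a one-level Haar transform in place of the paper's (unspecified) DWT filter, and a random residual standing in for the encoder output $\mathbf{En}(A_{ac},W)$:

```python
import numpy as np

def haar_dwt(a):
    """One-level Haar DWT: approximate and detail coefficients (length N/2)."""
    pairs = a.reshape(-1, 2)
    ac = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)
    dc = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)
    return ac, dc

def haar_idwt(ac, dc):
    """Exact inverse of haar_dwt (perfect reconstruction)."""
    out = np.empty(2 * ac.size)
    out[0::2] = (ac + dc) / np.sqrt(2.0)
    out[1::2] = (ac - dc) / np.sqrt(2.0)
    return out

rng = np.random.default_rng(0)
A = rng.standard_normal(1024)            # stand-in for the raw audio

# Split into approximate (embedding target) and detail (deposited) bands.
ac, dc = haar_dwt(A)

# Encoder stand-in: a small residual added to the approximate
# coefficients only; S is the strength factor from the text.
S = 1.0
residual = 0.01 * rng.standard_normal(ac.size)
ac_w = residual * S + ac

# Reconstruct the watermarked audio and compute the embedding loss L_e.
A_w = haar_idwt(ac_w, dc)
L_e = np.mean((ac - ac_w)**2)
```

The perfect-reconstruction property of the transform is what lets the residual in the approximate band carry over into the time-domain watermarked audio without extra loss.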
To overcome this challenge, we draw on studies of real-world air channel distortion \cite{qin2019imperceptible,yakura2018robust} and introduce a differentiable audio re-recording operation $\mathbf{DAR}$, which consists of three components: environment reverberation, band-pass filtering, and Gaussian noise. Because $\mathbf{DAR}$ is a processing pipeline operating in the time domain, it cannot be directly applied to $A_{ac}^W$. Therefore, after generating $A_{ac}^W$, we adopt the inverse DWT (IDWT) to transform the watermarked approximate coefficients $A_{ac}^W$, together with the deposited $A_{dc}$, back to the watermarked audio $A^W$, \textit{i}.\textit{e}., \begin{linenomath*} \begin{equation} A^W = \operatorname{IDWT}(A_{ac}^W, A_{dc}). \end{equation} \end{linenomath*} \subsubsection{Environment Reverberation.} An impulse response (IR) is the environment's reaction to a brief input signal. It describes the acoustic characteristics of an environment, in particular the behavior of reverberation in the space, and can reproduce the reverberation of the captured environment by convolution. From different microphones, room environments, and speakers, we can collect diverse base IRs $p$ to form a set $P$. Given a target audio $A$, we randomly select a base IR $p$ from the set $P$ and convolve $A$ with $p$, $Conv(\cdot)$, to simulate environment reverberation (ER), \textit{i}.\textit{e}., \begin{linenomath*} \begin{equation} \operatorname{ER}(A) = Conv(A, p), \quad\text{where $p \in P$}. \end{equation} \end{linenomath*} Here, we follow previous work \cite{palomaki2004techniques} and leverage the acoustic impulse response dataset\footnote{https://www1.icsi.berkeley.edu/Speech/papers/gelbart-ms/pointers/} collected using four microphones and various degrees of reverberation in the varechoic chamber at Bell Labs. 
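Under this model, ER is simply a convolution with a randomly drawn base IR. The sketch below uses synthetic, exponentially decaying IRs as stand-ins for the measured Bell Labs dataset:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_ir(length=2048, decay=0.002):
    """Synthetic room impulse response: a direct-path spike plus a decaying
    noise tail (stand-in for the measured varechoic-chamber IRs)."""
    ir = rng.standard_normal(length) * np.exp(-decay * np.arange(length))
    ir[0] = 1.0                           # direct path
    return ir / np.sqrt(np.sum(ir**2))    # normalize energy

# IR set P: one base IR per mic/room/speaker configuration.
P = [make_ir() for _ in range(4)]

def environment_reverberation(A, P):
    """ER(A) = Conv(A, p) for a randomly selected base IR p from P,
    truncated back to the input length."""
    p = P[rng.integers(len(P))]
    return np.convolve(A, p)[:A.size]

A = rng.standard_normal(44100)            # 1 s of audio at 44.1 kHz
A_rev = environment_reverberation(A, P)
```

In the actual training pipeline the convolution would be expressed with a differentiable conv1d layer so gradients can flow through it.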
\subsubsection{Band-pass Filtering.} The sensitive band of human hearing is limited, with a widely-used normal range from $500$ Hz to $2000$ Hz. Accordingly, commonly-used speakers do not play audio in too high or too low a frequency band. Meanwhile, the microphone also processes the played audio, usually cutting off frequency bands outside the normal range to reduce noise, i.e., a basic denoising step. Therefore, to simulate the distortions caused by the inherent characteristics of devices such as the speaker and the microphone, we apply a frequency band-pass filtering operation BF($\cdot$) to the watermarked audio. Given a target audio $A$, we conduct BF($\cdot$) as follows: \begin{linenomath*} \begin{equation} \label{equ:bp} \operatorname{BF}(A) = {LF}[{HF}[A, \alpha], \beta], \end{equation} \end{linenomath*} where $LF[\cdot]$ and $HF[\cdot]$ represent low-pass filtering and high-pass filtering, respectively, and $\alpha$ and $\beta$ denote the corresponding thresholds of $HF[\cdot]$ and $LF[\cdot]$. \subsubsection{Gaussian Noise.} In addition to the above two components, we introduce Gaussian noise to simulate the random noise induced by indeterminate factors during the AR process. Gaussian noise is an additive noise widely used in current automatic speech recognition (ASR) systems \cite{yin2015noisy} to enhance robustness against random environmental noise. Specifically, we apply GN($\cdot$) to audio $A$ by directly superimposing noise $\omega\sim\mathcal{N}(0, \sigma^2)$, \textit{i}.\textit{e}., \begin{linenomath*}\begin{equation} \begin{split} \operatorname{GN}(A) = A + \omega, \quad\text{where $\omega\sim\mathcal{N}(0, \sigma^2)$}. \end{split} \end{equation} \end{linenomath*} It should be mentioned that $\sigma$ is audio-aware and determined by a pre-defined signal-to-noise ratio, randomly sampled from $40\,\mathrm{dB}$ to $50\,\mathrm{dB}$. 
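A minimal sketch of BF($\cdot$) and GN($\cdot$): a brickwall FFT filter stands in for the paper's unspecified filter implementation (with the $1\,\mathrm{kHz}$/$4\,\mathrm{kHz}$ thresholds used in training), and the noise $\sigma$ is derived from the audio power and the target SNR in dB:

```python
import numpy as np

FS = 44100  # sampling rate (Hz)

def band_pass(A, alpha=1000.0, beta=4000.0, fs=FS):
    """BF(A): zero all frequency content below alpha (high-pass) and above
    beta (low-pass); a brickwall FFT filter stands in for the actual one."""
    spec = np.fft.rfft(A)
    f = np.fft.rfftfreq(A.size, 1.0 / fs)
    spec[(f < alpha) | (f > beta)] = 0.0
    return np.fft.irfft(spec, n=A.size)

def gaussian_noise(A, snr_db):
    """GN(A): additive white noise whose sigma is set from the audio's
    power and the target signal-to-noise ratio in dB."""
    p_signal = np.mean(A**2)
    sigma = np.sqrt(p_signal / 10.0**(snr_db / 10.0))
    return A + np.random.default_rng(2).normal(0.0, sigma, A.size)

rng = np.random.default_rng(3)
A = rng.standard_normal(FS)               # 1 s stand-in signal
snr_db = rng.uniform(40.0, 50.0)          # sampled as in the paper
A_att = gaussian_noise(band_pass(A), snr_db)
```

Both operations are linear or elementwise in the signal, so they pose no obstacle to backpropagation in the end-to-end framework.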
\subsubsection{The pipeline of the distortion layer.} $\mathbf{DAR}$ serves as the distortion layer to enhance robustness against the AR process. The pipeline of $\mathbf{DAR}$ is as follows: \begin{linenomath*} \begin{equation} \mathbf{DAR}(\cdot) = \operatorname{GN}(\operatorname{BF}(\operatorname{ER}(\cdot))). \end{equation} \end{linenomath*} Given a watermarked audio $A^W$, we finally obtain the attacked audio $(A^W)^{att}$, \textit{i}.\textit{e}., \begin{linenomath*} \begin{equation} (A^W)^{att} = \mathbf{DAR}(A^W). \end{equation} \end{linenomath*} Afterward, we apply the DWT to acquire the corresponding approximate coefficients $(A^W)^{att}_{ac}$ and feed them into the decoder $\mathbf{De}$ for watermark extraction, \textit{i}.\textit{e}., \begin{linenomath*} \begin{equation} \begin{aligned} (A^W)^{att}_{ac},(A^W)^{att}_{dc} &= \operatorname{DWT}((A^W)^{att}),\\ W^{\prime} &= \mathbf{De}((A^W)^{att}_{ac}). \end{aligned} \end{equation} \end{linenomath*} \subsection{More Details of DeAR} \subsubsection{Network Structures.} For the encoder $\mathbf{En}$, we adopt a fully convolutional network that keeps the size of the feature maps unchanged in each layer. Similarly, the decoder $\mathbf{De}$ and the discriminator $\mathbf{D}$ also use fully convolutional networks, each appending a down-sampling module for final extraction and binary classification, respectively. To remedy information loss during forward propagation, we use skip connections to stack the initial information with the feature map in each layer of both $\mathbf{En}$ and $\mathbf{De}$. More importantly, we use 1D rather than 2D convolutions in the basic networks, which suits the 1D audio waveform. 
\subsubsection{Loss functions.} During the training stage, we jointly train the encoder and the decoder, and the overall loss function $\mathcal{L}$ is formulated as follows: \begin{linenomath*} \begin{equation} \mathcal{L} = \lambda_e\mathcal{L}_{e} + \lambda_d\mathcal{L}_{d} + \lambda_w\mathcal{L}_{w}, \end{equation} \end{linenomath*} where $\lambda_e$, $\lambda_d$, and $\lambda_w$ balance the three terms. \section{Experiments} \subsection{Experiment Settings} \subsubsection{Dataset.} We conduct our experiments on FMA \cite{fma_dataset}, a well-known music analysis dataset, from which $12000$ audios are used for training DeAR and $200$ randomly selected audios serve as testing audios. All audios are sampled at $44.1\,\mathrm{kHz}$ and cropped to 500k samples. \subsubsection{Metrics.} To measure the fidelity of the watermarked audio, the signal-to-noise ratio (\textbf{SNR}) is adopted, defined as \begin{equation*} \mathrm{SNR} = 10\cdot \log_{10}\left(\frac{\sum_{i=1}^NA(i)^2}{\sum_{i=1}^N\left[A^W(i)-A(i)\right]^2}\right). \end{equation*} Besides, we take the average bit recovery accuracy $\overline{\textbf{ACC}}$, calculated over the $200$ testing audios, to evaluate the robustness of watermarking schemes. \subsubsection{Implementation Details.} When training DeAR, we set $\lambda_e=150$, $\lambda_w=1$ and $\lambda_d=0.01$, and use Adam \cite{kingma2014adam} with a learning rate of $10^{-4}$ by default. We empirically set the thresholds of the high-pass filtering ($HF[\cdot]$) and the low-pass filtering ($LF[\cdot]$), namely $\alpha$ and $\beta$, to $1\,\mathrm{kHz}$ and $4\,\mathrm{kHz}$ for training the model to be robust to re-recording. In testing, the same watermark bit sequence of $100$ bits is embedded in all testing audios. 
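Both metrics are straightforward to compute; a sketch follows. The $\pm1$ bit convention matches the watermark definition above, and thresholding the real-valued decoder output at zero is our assumption, not stated explicitly in the text:

```python
import numpy as np

def snr_db(A, A_w):
    """Signal-to-noise ratio (dB) of watermarked audio A_w against A."""
    return 10.0 * np.log10(np.sum(A**2) / np.sum((A_w - A)**2))

def bit_accuracy(W, W_rec):
    """Fraction of correctly recovered bits; W in {-1, +1}, W_rec is the
    real-valued decoder output, thresholded at zero (assumption)."""
    return np.mean(np.sign(W_rec) == W)

# Toy example: a sine carrier with a faint embedding residual.
A = np.sin(np.linspace(0.0, 200.0 * np.pi, 44100))
A_w = A + 0.01 * np.random.default_rng(4).standard_normal(A.size)

W = np.array([1, -1, 1, 1, -1], dtype=float)       # embedded bits
W_rec = np.array([0.9, -0.7, 0.2, 1.1, -0.4])      # decoder output
acc = bit_accuracy(W, W_rec)
```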
We adopt the methods most relevant to the proposed DeAR as baselines, \textit{i}.\textit{e}., Liu's method \cite{liu2018patchwork} and Su's method \cite{su2018snr}. For a fair comparison, we slightly modify a parameter of Su's method for better robustness\footnote{The upper bound of the interval of the search space of the intensity factor is set to $2$.}. \begin{figure}[t] \centering \includegraphics[scale=0.25]{figure/Scenario.pdf} \caption{A practical experimental environment to evaluate robustness against audio re-recording.} \label{series} \end{figure} For the re-recording experiments, we use a consumer-grade speaker, SENNHEISER Sp10, as the playback device and a consumer-grade microphone, ATR2100-USB, to re-record the played audio. The corresponding experimental scenario is shown in \Fref{series}, where $5\,\mathrm{cm}$ is the default distance between the speaker and the microphone. Considering the de-synchronization introduced by digital-to-analog conversion, we design a synchronization strategy and apply it to DeAR and all baseline methods for a fair comparison. Specifically, we shift the target re-recorded audio within a pre-defined range $(3/44.1\,\mathrm{s}, 8/44.1\,\mathrm{s})$, calculate the corresponding bit recovery accuracy (\textbf{ACC}) at each shift, and take the highest value as the final \textbf{ACC}. \subsection{Comparison Results} \begin{figure*}[t] \begin{center} \includegraphics[scale=0.103] {figure/wave4.png} \caption{Qualitative comparison of fidelity. The top row shows the original audio and watermarked audios of DeAR and the baseline methods, wherein the black component represents the 10$\times$ residuals. To better illustrate their difference, the enlarged views of the same regions are shown in the middle and bottom rows. 
} \label{fig:waveform} \end{center} \end{figure*} \begin{table}[t] \begin{center} \setlength\tabcolsep{16pt} \begin{tabular}{c|c c c} \toprule[1.5pt] Metrics & DeAR & Liu & Su \\ \hline SNR ($\mathrm{dB}$) & \bv{25.86} & 25.81 & 24.94 \\ $\overline{\textbf{ACC}}$ ($\%$) & \bv{99.18} & 77.09 & 56.00 \\ \bottomrule[1.5pt] \end{tabular} \caption{Quantitative comparison with the baseline methods.} \label{tab:cmp} \end{center} \end{table} \begin{table}[t] \setlength\tabcolsep{9pt} \begin{tabular}{ccccc} \toprule[1.5pt] Distance ($\mathrm{cm}$) & 5 & 20 & 50 & 100 \\ \hline DeAR & \textbf{99.18} & \textbf{98.55} & \textbf{93.40} & \textbf{92.68} \\ Liu & 77.09 & 82.64 & 74.76 & 66.02 \\ \bottomrule[1.5pt] \end{tabular} \caption{Comparison of robustness against re-recording at different distances.} \label{tab:dis} \end{table} \begin{figure}[t] \begin{center} \includegraphics[scale=0.1045] {figure/re-res-finale.png} \caption{Visual example of noise induced by audio re-recording.} \label{fig:res} \end{center} \end{figure} \subsubsection{Fidelity.} We first compare the fidelity of the proposed DeAR with the baseline methods. As shown in \Tref{tab:cmp}, DeAR achieves an SNR of $25.86\,\mathrm{dB}$, outperforming the baseline methods ($25.81\,\mathrm{dB}$ for Liu's method \cite{liu2018patchwork} and $24.94\,\mathrm{dB}$ for Su's method \cite{su2018snr}). Furthermore, we provide a visual example of the watermarked audio for qualitative comparison in \Fref{fig:waveform}. We observe that the proposed DeAR and Liu's method tend to modify the original audio adaptively, while Su's method prefers a relatively large and uniform noise pattern. Compared with Liu's method, DeAR introduces slighter modifications, which helps guarantee fidelity. \subsubsection{Robustness Against Audio Re-recording.} We compare robustness against audio re-recording in this experiment and provide quantitative results in \Tref{tab:cmp}. 
With comparable fidelity, DeAR outperforms the baseline methods by a large margin (above $20$\% and $40$\%, respectively). Su's method is fragile to the audio re-recording (AR) process because it only considers common distortions during digital transmission and does not account for AR. For Liu's method, we attribute the limited performance ($77.09$\%) to its AR model being idealized additive noise, which is inconsistent with the practical scenario. In \Fref{fig:res}, we provide a visual example of the noise induced by the practical re-recording process, which is a more complex noise pattern with an even larger amplitude than the original audio. In addition to the default distance ($5\,\mathrm{cm}$), we conduct a controlled comparison with Liu's method at different distances. As shown in \Tref{tab:dis}, our method performs better over a wide range of distances. As the distance increases, robustness against AR degrades correspondingly but remains acceptable (all above $90$\%). 
\begin{table}[t] \begin{center} \centering \setlength\tabcolsep{5.2pt} \begin{tabular}{ c | c | c c c } \toprule[1.5pt] \multicolumn{2}{c|}{Distortions} & DeAR & Su & Liu \\ \hline \multirow{ 4}{*}{\tabincell{c}{Gaussian\\Noise}} & $20\,\mathrm{dB}$ & 87.13 / 98.67 & \bv{100.0} & 60.09 \\ & $30\,\mathrm{dB}$ & 93.77 / 99.89 & \bv{100.0} & 62.41 \\ & $40\,\mathrm{dB}$ & 96.10 / 99.99 & \bv{100.0} & 65.76 \\ & $50\,\mathrm{dB}$ & 96.96 / 99.99 & \bv{100.0} & 71.61 \\ \hline \multirow{2}{*}{MP3} & $64\,\mathrm{kbps}$ & 95.22 / \bv{99.94} & 97.98 & 88.54 \\ & $128\,\mathrm{kbps}$ & 97.00 / \bv{99.97} & 98.00 & 98.53 \\ \hline \multirow{ 2}{*}{Band-pass} & $1\,\mathrm{kHz}$ & 98.17 / \bv{100.0} & \underline{57.02} & 99.14 \\ & $4\,\mathrm{kHz}$ & 92.12 / 99.06 & \bv{99.99} & \underline{50.57}\\ \hline \multicolumn{2}{c|}{Re-sampling} & 97.04 / \bv{100.0} & \bv{100.0} & 99.37 \\ \hline \multicolumn{2}{c|}{Dropout} & 97.14 / \bv{99.99} & 99.08 & 69.63 \\ \hline \multicolumn{2}{c|}{Amplitude Modification} & 97.14 / \bv{99.99} & 98.00 & 99.84 \\ \hline \multicolumn{2}{c|}{Re-quantization} & 97.11 / \bv{100.0} & \bv{100.0} & 94.22 \\ \hline \multicolumn{2}{c|}{Median Filtering } & 96.16 / \bv{100.0} & \bv{100.0} & 90.80 \\ \bottomrule[1.5pt] \end{tabular} \caption{Robustness against other common distortions. We provide the $\overline{\textbf{ACC}}$ of the default / the enhanced DeAR. } \label{tab:common} \end{center} \end{table} \begin{figure*}[ht] \begin{center} \includegraphics[scale=0.340] {figure/line-chart11-eps-converted-to.pdf} \caption{The influence of different embedding bits on robustness. 
} \label{line-chart} \end{center} \end{figure*} \begin{table}[t] \setlength\tabcolsep{9pt} \begin{tabular}{ccccc} \toprule[1.5pt] Distance ($\mathrm{cm}$) & 5 & 20 & 50 & 100 \\ \hline Default & \textbf{99.18} & \textbf{98.55} & 93.40 & \textbf{92.68} \\ Enhanced & 98.65 & 98.54 & \textbf{94.98} & 91.33 \\ \bottomrule[1.5pt] \end{tabular} \caption{Comparison of robustness against AR between the default and the enhanced DeAR. } \label{tab:enhance} \end{table} \subsubsection{Robustness Against Other Common Distortions.} To compare robustness more comprehensively, we further evaluate it under other common distortions encountered during digital transmission: Gaussian noise at different signal-to-noise ratios ($20\,\mathrm{dB}$, $30\,\mathrm{dB}$, $40\,\mathrm{dB}$, $50\,\mathrm{dB}$), MP3 compression ($64\,\mathrm{kbps}$, $128\,\mathrm{kbps}$), band-pass filtering ($1\,\mathrm{kHz}$ high-pass, $4\,\mathrm{kHz}$ low-pass), re-sampling (the watermarked audios are resampled to 90\% of the original sampling frequency and then resampled back), dropout (zero one sample every 100 sample points), amplitude modification (scaling to 90\% of the original amplitude), re-quantization (down to 8 bits/sample), and median filtering (window size of 3). As shown in \Tref{tab:common}, Su's method achieves excellent robustness in most cases except under the $1\,\mathrm{kHz}$ high-pass filter (underlined data), which may explain its fragility to audio re-recording. Liu's method cannot extract the watermark well under Gaussian noise and the $4\,\mathrm{kHz}$ low-pass filter. In contrast, DeAR is robust under all types of distortions. Furthermore, we propose an enhanced DeAR by appending an additional distortion layer after the original one in \Fref{framework_1}. This appended layer contains the above common distortions, with only a single setting each for Gaussian noise and band-pass filtering; for example, only $20\,\mathrm{dB}$ Gaussian noise is involved in the enhanced training. 
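Most of these common distortions (all except MP3 compression and re-sampling, which need a codec and a resampler) are one-line signal operations; a sketch with the parameter values listed above:

```python
import numpy as np

def dropout(A, every=100):
    """Zero one sample every `every` sample points."""
    out = A.copy()
    out[::every] = 0.0
    return out

def amplitude_modification(A, scale=0.9):
    """Scale the waveform to 90% of its original amplitude."""
    return scale * A

def requantize(A, bits=8):
    """Re-quantize a [-1, 1] waveform down to `bits` bits per sample."""
    levels = 2**(bits - 1)
    return np.round(A * levels) / levels

def median_filter(A, size=3):
    """Median filtering with a 3-sample window, edge-padded."""
    padded = np.pad(A, size // 2, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, size)
    return np.median(windows, axis=1)

rng = np.random.default_rng(5)
A = np.clip(0.3 * rng.standard_normal(1000), -1.0, 1.0)
attacked = [f(A) for f in (dropout, amplitude_modification,
                           requantize, median_filter)]
```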
As shown in \Tref{tab:common} and \Tref{tab:enhance}, the enhanced DeAR achieves better robustness against other common distortions while still preserving robustness against audio re-recording. We note that it also induces a slight degradation of fidelity (SNR from $25.86$ to $25.81$), reflecting the intrinsic trade-off between robustness and fidelity. \subsection{Ablation Study} \subsubsection{The Influence of Different Embedding Bits.} We further explore the influence of different embedding bit lengths on robustness. We take $64$, $100$, $169$, and $225$ bits as examples to evaluate robustness against Gaussian noise, band-pass filtering, simulated reverberation, and audio re-recording. We evaluate the robustness of the models with different embedding bits under the same SNR requirement ($\mathrm{SNR}=26\pm0.2\,\mathrm{dB}$), and for each distortion we also test different strengths. Note that the simulated reverberation is implemented by convolving watermarked audios with impulse responses recorded in a conference room at different distances between the speaker and the microphone. As shown in \Fref{line-chart}, in most cases some robustness must be sacrificed if more information is to be embedded into the target audio. \subsubsection{The Importance of Each Component of the Re-recording Modeling.} To fully verify the importance of the proposed re-recording modeling, we re-train the model with a modified distortion layer three times, each time removing one of the three distortion components. Besides the default consumer-grade microphone, we also use a widely-used smartphone (an Apple iPhone 12 Pro) to re-record the watermarked audio. 
As shown in \Tref{component}, all designed distortion components, \textit{i}.\textit{e}., environmental reverberation, band-pass filtering, and Gaussian noise, improve the robustness against the audio re-recording process. Among them, reverberation is the most necessary because it induces an accuracy improvement of more than 25\%. \begin{table}[t] \setlength\tabcolsep{6.5pt} \begin{center} \begin{tabular}{c|cccc} \toprule[1.5pt] Device & Default & w/o ER & w/o BF & w/o GN \\ \hline Microphone & \textbf{99.18} & 73.63 & 98.52 & 75.80 \\ Smartphone & \textbf{99.53} & 66.17 & 92.64 & 75.25 \\ \bottomrule[1.5pt] \end{tabular} \end{center} \caption{The robustness performance ($\overline{\textbf{ACC}}$) of different configurations under different devices.} \label{component} \end{table} \subsubsection{Flexibility with Strength Factor.} Given audio coefficients and a watermark as input, the well-trained encoder outputs a watermark residual, which is further superimposed on the input. In a practical scenario, we can flexibly adjust the strength of the residual to balance the trade-off between fidelity and robustness. In \Tref{tab-snr}, we provide the quantitative results of fidelity and robustness against audio re-recording under different strength factors, which demonstrate the flexibility of the proposed DeAR. \begin{table}[t] \begin{center} \setlength\tabcolsep{5.5pt} \begin{tabular}{ c | c c c c c} \toprule[1.5pt] Strength Factor & 0.2 & 0.5 & 0.8 & 1 & 1.2 \\ \hline SNR ($\mathrm{dB}$) & 39.84 & 31.88 & 27.79 & 25.86 & 24.27 \\ $\overline{\textbf{ACC}}$ ($\%$) & 79.58 & 94.81 & 98.48 & 99.18 & 99.49 \\ \bottomrule[1.5pt] \end{tabular} \caption{Performance of DeAR with different strength factors.} \label{tab-snr} \end{center} \end{table} \section{Conclusion} To the best of our knowledge, we are the first to propose using deep neural networks for audio watermarking to achieve robustness against the audio re-recording (AR) process.
To achieve this, we jointly train an encoder and decoder for watermark embedding and extraction, between which a distortion layer that simulates the re-recording process is inserted to enhance robustness. Extensive experiments demonstrate that our method outperforms the baseline methods in terms of both fidelity and resilience to the AR process. Furthermore, ablation studies are conducted to verify the importance of our design and its flexibility in practical scenarios. \section{Acknowledgments} This work was supported in part by the Natural Science Foundation of China under Grants 62072421, 62002334, 62102386, 62121002, and U20B2047, and the Students' Innovation and Entrepreneurship Foundation of USTC.
\section{INTRODUCTION} The dynamics of most systems, such as autonomous vehicles \cite{Ames2016}, robots \cite{robot2020}, and chemical processes \cite{chemProccess2013}, are constrained. Inputs are constrained by actuation capability, and state constraints are imposed by either physical limitations or safety considerations. Depending on the dynamics and constraints, different methods exist to find a controller that ensures the existence of a Lyapunov function, and thus Lyapunov stability. Two major approaches are \ac{MPC} \cite{nonMPC1998} and Lyapunov-based methods \cite{Ames2016}. This work presents a method for control-affine systems that improves on \ac{MPC} by avoiding non-convex optimizations, and on Lyapunov-based methods by not requiring a known \ac{CLF}. Most nonlinear \ac{MPC} formulations depend on careful choices of terminal `ingredients' that consist of a set, a cost function, and a stabilizing controller \cite{nonMPC1998}. Other \ac{MPC} approaches either ensure the existence of these ingredients implicitly \cite[Ch.\;2.3]{nonMPCeconomic2018}, or circumvent them using Lyapunov-based \ac{MPC} \cite{nonMPC-Lyap2005,nonMPC-Lyap2006} if a \ac{CLF} is known. These methods often rely on solving a non-convex optimization online. Numerical methods for solving them do not guarantee finding the global solution and are computationally taxing, making their efficient implementation a question of ongoing research \cite{nonMPCefficient2009,nonMPCefficient2020}. To avoid online optimization, explicit nonlinear \ac{MPC} finds the controller offline \cite{nonMPCExTube2006,nonMPCexplicit2012}. However, solving a highly nonlinear optimization on \textit{a priori} unknown polyhedral partitions remains difficult. Lyapunov-based methods rely on Lyapunov-like functions, such as \ac{CLF}s and control barrier functions, to ensure stability of control-affine systems.
While barrier functions \cite{CBF2007,CBF2016,CBF2018} impose state constraints by ensuring positive invariance of a subset of admissible states, satisfying input constraints requires the conditions on the Lyapunov-like functions' time derivatives to hold for admissible inputs \cite{uConCLF1995,Ames2016}. If such functions are known, online \ac{QP} can be used to find a minimum-norm controller that not only ensures safety and stability, but can also prioritize safety if needed \cite{Ames2016}. However, these methods require Lyapunov-like functions, which are not trivial to find. This paper presents a method to stabilize state and input constrained nonlinear systems via an offline optimization on variable triangulations of admissible states that refines simplexes if needed. Since it finds the corresponding \ac{CPA} Lyapunov function explicitly, the exact region of attraction and an upper bound on the decay rate of the state norm are provided. By choosing a \ac{CPA} state feedback controller structure, the nonlinear optimization is solved iteratively using SDPs for control-affine systems. In this case, the corresponding Lipschitz \ac{CLF} is used to formulate a minimum-norm controller by \ac{QP}. In seeking both the controller and the Lyapunov function offline, this method is similar to \cite{ExpMPCLike2017}, but it is not limited to polynomial systems. Like \cite{nonMPCExTube2006}, the method depends on refining elements in a subset of the state space, but it avoids solving the highly nonlinear optimization. The method builds upon the analysis technique of \cite{gieslRevCPA2013} that implements \ac{CPA} Lyapunov functions. For control-affine systems, it improves on \cite{Doban}, which is also based on \cite{gieslRevCPA2013}, by not requiring a known \ac{CLF} and by removing the need for \textit{a priori} constraints on the controller's gradient.
\section{Preliminaries} \textbf{Notation.} The interior, boundary, and closure of $\Omega\subseteq\mathbb{R}^n$ are denoted by $\Omega\degree$, $\partial\Omega$, and $\bar{\Omega}$, respectively. The set of real-valued functions that are $r$ times continuously differentiable over their domain is denoted by $\mathbb{C}^r$. The $i$-th element of a vector $x$ is denoted by $x^{(i)}$. The element in the $i$-th row and $j$-th column of a matrix $G$ is denoted by $G^{(i,j)}$. The preimage of a function $f$ with respect to a subset $\Omega$ of its codomain is defined by $f^{-1}(\Omega)=\{x\mid f(x) \in \Omega \}$. The transpose and Euclidean norm of $x\in\mathbb{R}^n$ are denoted by $x^\intercal$ and $||x||$, respectively. The set of all subsets $\Omega\subset\mathbb{R}^n$ satisfying i) $\Omega$ is compact, ii) $\Omega\degree$ is a connected open neighborhood of the origin, and iii) $\Omega=\overline{\Omega\degree}$ is denoted by $\mathfrak{R}^n$. The vector of ones in $\mathbb{R}^n$ is denoted by $1_n$. The exponential stability of an autonomous system's equilibrium point can be verified by constructing a Lipschitz Lyapunov function on a triangulated subset of $\mathbb{R}^n$ \cite{gieslRevCPA2013}. The required definitions are given next. \begin{definition}[Affine independence{\cite{gieslRevCPA2013}}] \label{def:affDepVecs} A collection of vectors $\{x_0,\ldots,x_n\}$ in $\mathbb{R}^n$ is called affinely independent if $x_1-x_0,\ldots,x_n-x_0$ are linearly independent. \qed \end{definition} \begin{definition}[$n$-simplex {\cite{gieslRevCPA2013}}] \label{def:simplex} An $n$-simplex is the convex hull of $n+1$ affinely independent vectors in $\mathbb{R}^n$, denoted $\sigma{=}\textrm{co}(\{x_j\}_{j=0}^n)$, where the $x_j$'s are called vertices. \qed \end{definition} \noindent In this paper, simplex always refers to $n$-simplex.
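Affine independence of the $n+1$ candidate vertices of a simplex (Definition\;\ref{def:affDepVecs}) can be checked numerically via the rank of the matrix of difference vectors; a minimal sketch, assuming \texttt{numpy}:

```python
# Check affine independence of n+1 points in R^n: the n difference
# vectors x_j - x_0 must be linearly independent, i.e., the matrix
# stacking them as rows must have full rank.
import numpy as np

def affinely_independent(points, tol=1e-9):
    x0, rest = points[0], points[1:]
    diffs = np.stack([x - x0 for x in rest])  # n x n difference matrix
    return np.linalg.matrix_rank(diffs, tol=tol) == len(rest)
```

With this test, the vertices of a valid $n$-simplex pass, while collinear points in $\mathbb{R}^2$ fail.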
By abuse of notation, $\mathcal{T}$ will refer to both a collection of simplexes and the set of points in all the simplexes of the collection. \begin{definition} [Triangulation {\cite{gieslRevCPA2013}}] \label{def:triangulation} A set $\mathcal{T}\in\mathfrak{R}^n$ is called a triangulation if it is a finite collection of $m_{\mathcal{T}}$ simplexes, denoted $\mathcal{T}=\{\sigma_i\}_{i=1}^{m_{\mathcal{T}}}$, and the intersection of any two simplexes in $\mathcal{T}$ is either a common face or the empty set. The following two conventions are used throughout this paper for triangulations and their simplexes. Let $\mathcal{T}=\{\sigma_i\}_{i=1}^{m_{\mathcal{T}}}$. If $0\in\sigma_i$, then $0$ is a vertex of $\sigma_i$. Further, let $\{x_{i,j}\}_{j=0}^n$ be the vertices of simplex $\sigma_i$. Then $\sigma_i$ is represented by $\sigma_i=\textrm{co}(\{x_{i,j}\}_{j=0}^n)$. The choice of $x_{i,0}$ in $\sigma_i$ is arbitrary unless $0\in\sigma_i$, in which case $x_{i,0}$ is selected as $0$. The set of vertices in the triangulation $\mathcal{T}$ is denoted by $\mathbb{E}_\mathcal{T}$. \qed \end{definition} \begin{definition} [CPA interpolation \cite{gieslRevCPA2013}] \label{def:CPAfunction} Consider a triangulation $\mathcal{T}{=}\{\sigma_i\}_{i=1}^{m_{\mathcal{T}}}$, and a set $\mathbf{W}{=}\left\{ W_x \right\}_{ x\in \mathbb{E}_\mathcal{T} } {\subset} \mathbb{R}$. The unique CPA interpolation of $\mathbf{W}$ on $\mathcal{T}$, denoted $W:\mathcal{T}{\rightarrow}\mathbb{R}$, is affine on each $\sigma_i{\in}\mathcal{T}$ and satisfies $W(x){=}W_x$, $\forall x{\in}\mathbb{E}_\mathcal{T}$. \qed \end{definition} \begin{remark}[{\!\!\cite[Rem.\;9]{gieslRevCPA2013}}] \label{rem:nablaLinear} Given $\mathcal{T}=\{\sigma_i\}_{i=1}^{m_{\mathcal{T}}}$ and $\mathbf{W}$, the CPA interpolation assigns a unique affine function $W(x)=x^\intercal\nabla{W}_i+\omega_i$ to each $\sigma_i\in\mathcal{T}$. Each $\nabla{W}_i$ is linear in the elements of $\mathbf{W}$ and can be computed as follows.
Let $\sigma_i=\textrm{co}(\{x_{i,j}\}_{j=0}^n)$, and let $X_i\in\mathbb{R}^{n\times n}$ be the matrix that has $x_{i,j}-x_{i,0}$ as its $j$-th row. Since the elements of $\{x_{i,j}\}_{j=0}^n$ are affinely independent, $X_i$ is invertible. Each $x_{i,j}$ is an element of $\mathbb{E}_\mathcal{T}$, so it has a corresponding element in $\mathbf{W}$, denoted $W_{x_{i,j}}$. Let $\bar{W}_i\in\mathbb{R}^n$ be the vector that has $W_{x_{i,j}}-W_{x_{i,0}}$ as its $j$-th element. Then, ${\nabla W}_i = X^{-1}_i \bar{W}_i$. \qed \end{remark} The following theorem from \cite{gieslRevCPA2013} bounds the time derivative of a CPA function from above on a simplex in terms of its values at the vertices of that simplex, via Taylor's theorem. \begin{theorem}[{\!\!\cite{gieslRevCPA2013}}] \label{thm:gieseldWBound} Consider the system \begin{equation} \label{eq:AutSystem} \dot{x} = g(x), \;\; x\in \mathcal{X} \in \mathfrak{R}^n, \end{equation} where $g:\mathbb{R}^n\rightarrow\mathbb{R}^n$ is in $\mathbb{C}^2$. Let $\mathcal{T}=\{\sigma_i\}_{i=1}^{m_{\mathcal{T}}} \subseteq \mathcal{X}$ be a triangulation, and $W(x):\mathcal{T}\rightarrow\mathbb{R}$ be the CPA interpolation of a set $\mathbf{W}=\{W_x\}_{x\in{\mathbb{E}_\mathcal{T}}}$. Consider a point $x\in\mathcal{T}\degree$. The Dini derivative of $W$ at $x$ is defined as $D^+W(x) = \textrm{lim\,sup}_{h\rightarrow0^+}\sfrac{(W(x+hg(x))-W(x))}{h}$, which equals $\dot{W}(x)$ wherever $W\in\mathbb{C}^1$. For an arbitrary $x\in\mathcal{T}\degree$, there exists a $\sigma_i=\textrm{co}(\{x_{i,j}\}_{j=0}^n)\in\mathcal{T}$ such that for small enough $h>0$, $\textrm{co}(x,x+hg(x))\subset\sigma_i$. Let $0\leq\alpha_j\leq 1$, where $j\in\mathbb{Z}_0^n$ and $\sum_{j=0}^n\alpha_j=1$, be the unique set of coefficients satisfying $x=\sum_{j=0}^n\alpha_jx_{i,j}$.
Then \begin{equation} \label{eq:gieselInequality} D^+W(x) \leq \sum_{j=0}^n \alpha_j \left( g(x_{i,j})^\intercal\nabla{W}_i + c_{i,j}\beta_i 1_n^\intercal l_i \right), \end{equation} where $l_i\in\mathbb{R}^n$ satisfies $l_i\geq|\nabla{W}_i|$, and \begin{flalign} &\beta_i \geq \max_{p,q,r\in\mathbb{Z}_1^n} \max_{\xi\in\sigma_i} \left| \left. \sfrac{\partial^2 g^{(p)}}{\partial x^{(q)}\partial x^{(r)}} \right|_{x=\xi} \right|, \textrm{ and} \label{eq:beta} \\ &c_{i,j}{=}\frac{n}{2} ||x_{i,j} {-} x_{i,0}|| (\max_{k\in\mathbb{Z}_1^n} ||x_{i,k}{-}x_{i,0}|| {+} ||x_{i,j}{-}x_{i,0}||). \nonumber \end{flalign} \qed \end{theorem} Note that in \eqref{eq:beta}, $\beta_i$ bounds from above the largest absolute value of the elements of the Hessian of $g(x)$ on $\sigma_i$. \section{Control Design} Using Theorem\;\ref{thm:gieseldWBound}, the exponential stability of the equilibrium can be verified by constructing a CPA Lyapunov function via a linear feasibility program \cite{gieslRevCPA2013}. Here, the goal is to turn the analysis method of \cite{gieslRevCPA2013} into a design method for state and input constrained control systems by finding a state-feedback controller that makes the origin exponentially stable. Choosing a parameterized controller structure, the search for parameters can be formulated as a non-convex optimization since the CPA Lyapunov function is also unknown. First, a stability theorem is given and piecewise twice continuous differentiability on a triangulation is defined; then, the optimization is formulated. The following theorem improves on \cite[Def\;2,\;Rem\;5]{gieslRevCPA2013} by providing an explicit upper bound on the convergence rate of $||x(t)||$.
\begin{theorem} \label{thm:myExpoStability} The origin in \eqref{eq:AutSystem}, where $g:\Omega\rightarrow\mathbb{R}^n$ is a Lipschitz map, $\Omega\in\mathfrak{R}^n$, and $g(0)=0$, is exponentially stable if there exists a Lipschitz function $V:\Omega\rightarrow\mathbb{R}$ and constants $a,b_1,b_2>0$ satisfying $V(0)=0$, and \begin{subequations} \label{eq:myExpo} \begin{align} b_1||x||^a &\leq V(x), \quad \forall x\in\Omega, \textrm{ and} \label{eq:myExpoBound}\\ D^+V(x) & \leq -b_2 V(x), \quad \forall x\in\Omega\degree \backslash \{0\}. \label{eq:myExpoDecrease} \end{align} \end{subequations} Further, let $\mathcal{A}=V^{-1}([0,r])\subseteq\Omega$ be in $\mathfrak{R}^n$ for some $r>0$. Then, $||x(t)||\leq\sqrt[^a]{r/b_1}e^{-(b_2/a)(t-t_0)}$, $\forall x(t_0)\in\mathcal{A}\degree$. \qed \end{theorem} \begin{proof} Since $\mathcal{A}\in\mathfrak{R}^n$, $x(t_0)\in\mathcal{A}\degree$ implies that $V(x(t)) \leq r$ and $x(t)\in\mathcal{A}\degree$ hold for all $t\geq t_0$. Using the Comparison Lemma \cite[Lem 3.4]{khalil}, $V(x(t))\leq V(x(t_0))e^{-b_2(t-t_0)}$ for all $t\geq t_0$. So, $||x(t)||\leq \sqrt[^a]{V(x(t_0))/b_1} e^{-(b_2/a)(t-t_0)}$, and therefore $||x(t)||\leq\sqrt[^a]{r/b_1}e^{-(b_2/a)(t-t_0)}$. \end{proof} \begin{definition} \label{def:C2functionsPiecewise} A continuous function $g(x)\in\mathbb{R}^n$ is piecewise in $\mathbb{C}^2$ on a triangulation $\mathcal{T}=\{\sigma_i\}_{i=1}^{m_{\mathcal{T}}}$, denoted $g\in\mathbb{C}^2(\mathcal{T})$, if it is in $\mathbb{C}^2$ on $\sigma_i$ for all $i\in\mathbb{Z}_1^{m_{\mathcal{T}}}$. \qed \end{definition} From now on, in case $\xi\in\sigma_i$, $\left.\sfrac{\partial^2 g^{(p)}}{\partial x^{(q)} \partial x^{(r)}}\right|_{x=\xi}$ for any vector function $g(x)$ and $p,q,r\in\mathbb{Z}_1^n$ means that the derivatives at the point $x=\xi$ are evaluated in those directions $y\in\mathbb{R}^n$ for which $\textrm{co}(\xi,\xi+hy)\subset \sigma_i$ as $h\rightarrow 0$.
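As a concrete illustration of the CPA machinery used throughout, the per-simplex gradient $\nabla W_i = X_i^{-1}\bar{W}_i$ of Remark\;\ref{rem:nablaLinear} can be computed as follows (a minimal \texttt{numpy} sketch; names are illustrative):

```python
# Gradient of a CPA function on one simplex: X_i stacks the rows
# x_{i,j} - x_{i,0}, Wbar_i stacks W_{x_{i,j}} - W_{x_{i,0}}, and
# nabla W_i = X_i^{-1} Wbar_i (Remark on the CPA interpolation).
import numpy as np

def cpa_gradient(vertices, values):
    x0, w0 = vertices[0], values[0]
    X = np.stack([x - x0 for x in vertices[1:]])   # rows: x_{i,j} - x_{i,0}
    wbar = np.array([w - w0 for w in values[1:]])  # W_{x_{i,j}} - W_{x_{i,0}}
    return np.linalg.solve(X, wbar)                # nabla W_i
```

For an affine function $W(x)=c^\intercal x+d$ sampled at the vertices, the routine recovers $c$ exactly, as the remark's linearity claim predicts.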
\begin{theorem} \label{thm:genOpt} Consider the system \begin{align} \label{eq:controlSystem} \dot{x} = g(x,u), \; x\in\mathcal{X}\in\mathfrak{R}^n, \; u\in\mathcal{U}\in\mathfrak{R}^m, \; g(0,0)=0. \end{align} Given a triangulation $\mathcal{T} =\{\sigma_i\}_{i=1}^{m_{\mathcal{T}}}$, where $\mathcal{T}\subseteq\mathcal{X}$, suppose that a class of Lipschitz controllers $\mathcal{F}=\{u(\cdot,\boldsymbol{\lambda})\}$ parameterized by $\boldsymbol{\lambda}$ is chosen so that (i) $u(0,\boldsymbol{\lambda})=0$; (ii) $g_{\boldsymbol{\lambda}}(\cdot) \coloneqq g(\cdot,u(\cdot,\boldsymbol{\lambda}))$ is Lipschitz on $\mathcal{T}$; (iii) both $u(\cdot,\boldsymbol{\lambda})$, $g_{\boldsymbol{\lambda}}(\cdot)\in\mathbb{C}^2(\mathcal{T})$; (iv) $u(x,\boldsymbol{\lambda})\in\mathcal{U}$ for all $x\in\mathbb{E}_\mathcal{T}$ implies $u(x,\boldsymbol{\lambda})\in\mathcal{U}$ for all $x\in\mathcal{T}$; and (v) $\mathcal{F}$ has an admissible element. Consider the following nonlinear program. \begin{subequations} \label{eq:genOpt} \begin{alignat}{2} [\mathbf{V}^\ast,\; &\mathbf{L}^\ast,\; \boldsymbol{\lambda}^\ast,\; a^\ast,\; \mathbf{b}^\ast] = && \argmin_{\mathbf{V},\; \mathbf{L},\; \boldsymbol{\lambda},\; a,\; \mathbf{b}} \;\; \hat{J}(\mathbf{V}, \mathbf{L}, \boldsymbol{\lambda}, a, \mathbf{b}) \nonumber \\ \textrm{s.t.}\;\;& V_0 = 0, \;\; a,b_1 > 0, && \label{eq:V0b1Constraint} \\ & b_1||x||^a \leq V_x, &&\forall x\in\mathbb{E}_\mathcal{T}\backslash\{0\}, \label{eq:myVconstraint} \\ & |{\nabla V}_i| \leq l_i, &&\forall i\in\mathbb{Z}_1^{m_{\mathcal{T}}}, \label{eq:nablaConstraint} \\ & u(x_{i,j},\boldsymbol{\lambda})\in\mathcal{U}, &&\forall i\in\mathbb{Z}_1^{m_{\mathcal{T}}}, \; \forall j\in\mathbb{Z}_0^n, \label{eq:uConstraintGeneral} \\ & D^+_{i,j}V \leq -b_2 V_{x_{i,j}}, \;\; &&\forall i\in\mathbb{Z}_1^{m_{\mathcal{T}}}, \; \forall j\in\mathbb{Z}_0^n, \label{eq:myDv} \end{alignat} \end{subequations} \noindent where $D^+_{i,j}V=g_{\boldsymbol{\lambda}}(x_{i,j})^\intercal {\nabla V}_i + c_{i,j}\beta_i 1_n^\intercal l_i$, and $\mathbf{V}=\{V_x\}_{x\in\mathbb{E}_\mathcal{T}}\subset\mathbb{R}$ and $\mathbf{L}=\{l_i\}_{i=1}^{m_{\mathcal{T}}}\subset\mathbb{R}^n$, and $\mathbf{b}=\{b_1,b_2\}\subset\mathbb{R}$, and $\hat{J}$ is a cost function, and for $u(\cdot,\boldsymbol{\lambda})$ satisfying \eqref{eq:uConstraintGeneral}, \begin{flalign}\label{eq:betaAndc} &\beta_i \geq \max_{p,q,r\in\mathbb{Z}_1^n} \max_{\xi\in\sigma_i} \left| \left. \sfrac{\partial^2 g^{(p)}_{\boldsymbol{\lambda}}}{\partial x^{(q)}\partial x^{(r)}} \right|_{x=\xi} \right|, \textrm{ and} & \\ &c_{i,j}{=} \frac{n}{2} ||x_{i,j} {-} x_{i,0}|| (\max_{k\in\mathbb{Z}_1^n} ||x_{i,k}{-}x_{i,0}|| {+} ||x_{i,j}{-}x_{i,0}||).& \nonumber \end{flalign} \noindent The optimization \eqref{eq:genOpt} is feasible. If $b_2^\ast > 0$ in \eqref{eq:genOpt}, then the CPA function $V^\ast:\mathcal{T}\rightarrow\mathbb{R}$ constructed from the elements of $\textbf{V}^\ast$ is a Lyapunov function of $\dot{x}=g_{\boldsymbol{\lambda}^\ast}(x)$. Let $\mathcal{A} = V^{\ast^{-1}}([0,r])\subseteq\mathcal{T}$ be in $\mathfrak{R}^n$ for some $r>0$. Then $x=0$ is locally exponentially stable for $\dot{x}=g_{\boldsymbol{\lambda}^\ast}(x)$ with $||x(t)||\leq\sqrt[^{a^\ast}]{r/b_1^\ast}e^{-(b_2^\ast/a^\ast)(t-t_0)}$ if $x(t_0)\in\mathcal{A}\degree$. \qed \end{theorem} \begin{proof} To see that \eqref{eq:genOpt} is feasible, note that $V_x=b_1||x||^a$ with any $a,b_1>0$ satisfies \eqref{eq:myVconstraint} and can be used to compute a feasible solution $l_i=|\nabla{V}_i|$ for \eqref{eq:nablaConstraint} using Remark\;\ref{rem:nablaLinear}. By assumption, a feasible $\boldsymbol{\lambda}$ exists satisfying \eqref{eq:uConstraintGeneral}. Using these feasible values, a finite $\beta_i$ satisfying \eqref{eq:betaAndc} can be chosen, and $g_{\boldsymbol{\lambda}}(\cdot)$ is always finite because $g_{\boldsymbol{\lambda}}(\cdot)\in\mathbb{C}^2(\mathcal{T})$.
Likewise, $c_{i,j}$ is finite because each $\sigma_i$ is compact, making the left-hand side of \eqref{eq:myDv} finite for each $i\in\mathbb{Z}_1^{m_{\mathcal{T}}}$ and $j\in\mathbb{Z}_0^n$. Note that if $x_{i,j}=0$, then $g_{\boldsymbol{\lambda}}(x_{i,j})=0$ and, by convention, $j=0$, so $c_{i,j}=0$ and $D^+_{i,j}V=0$, making any $b_2$ feasible. Thus, there exists $b_2{\in}\mathbb{R}$ that satisfies \eqref{eq:myDv} for all $i{\in}\mathbb{Z}_1^{m_{\mathcal{T}}}$ and $j{\in}\mathbb{Z}_0^n$. The remainder of the proof shows that $V^\ast$ verifies Theorem\;\ref{thm:myExpoStability} for the closed-loop system $\dot{x}=g_{\boldsymbol{\lambda}^\ast}(x)$; note that input admissibility holds because, by assumption, \eqref{eq:uConstraintGeneral} implies $u(x,\boldsymbol{\lambda}^\ast)\in\mathcal{U}$ for all $x\in\mathcal{T}$. Let $\Omega:=\mathcal{T}$ in Theorem\;\ref{thm:myExpoStability}. Constraints \eqref{eq:V0b1Constraint}--\eqref{eq:myVconstraint} ensure $V^\ast(0)=0$ and \eqref{eq:myExpoBound} since $V^\ast$ is a CPA function. It remains to show that \eqref{eq:nablaConstraint} and \eqref{eq:myDv} verify \eqref{eq:myExpoDecrease}. For simplicity, let $g(x)=g(x,u(x,\boldsymbol{\lambda}^\ast))$. The assumptions of Theorem\;\ref{thm:gieseldWBound} with $W\coloneqq V$ are verified by \eqref{eq:nablaConstraint} and \eqref{eq:betaAndc}. Applying \eqref{eq:gieselInequality}, \eqref{eq:myDv}, and the fact that $V(x)\geq0$ is affine on each $\sigma_i$ shows that $D^+V(x) \leq \sum_{j=0}^n \alpha_j D^+_{i,j} V \leq -b_2 \sum_{j=0}^n \alpha_j V_{x_{i,j}} = -b_2 V(x)$, where $x=\sum_{j=0}^n\alpha_jx_{i,j}\in\mathcal{T}\degree$, $\sum_{j=0}^n \alpha_j=1$, and $0\leq\alpha_j\leq1$. Like \cite{Doban}, as a relaxation of Theorem\;\ref{thm:gieseldWBound}, $g_{\boldsymbol{\lambda}}(\cdot)$ is only assumed to be in $\mathbb{C}^2(\mathcal{T})$, not in $\mathbb{C}^2$ everywhere. Since $x\in\mathcal{T}\degree$ was an arbitrary point, \eqref{eq:myExpoDecrease} is verified.
\end{proof} Even if $b_2^\ast \leq 0$ in \eqref{eq:genOpt}, a stabilizing controller can still be found if $D^+_{i,j}V$ in \eqref{eq:myDv} is negative at all nonzero vertices of the simplexes that include the origin, because in this case a set $\mathcal{A}\in\mathfrak{R}^n$ can be obtained. This is described in the following. \begin{corollary} \label{cor:mightFindAController} Suppose that $b_2^\ast\leq0$ in \eqref{eq:genOpt}. Let $\mathbb{I}_0=\Set{i\in\mathbb{Z}_1^{m_{\mathcal{T}}} | 0\in\sigma_i}$ and $\mathcal{E}_0=\{\sigma_i\}_{i\in\mathbb{I}_0}$. Further, let $\mathbb{I}_1= \{i\in\mathbb{Z}_1^{m_{\mathcal{T}}} \,\mid\, D^+_{i,j}V^\ast < 0, \;\forall j\in\mathbb{Z}_0^n \textrm{ where } x_{i,j}\neq0 \}$, and $\mathcal{E}_1=\{\sigma_i\}_{i\in\mathbb{I}_1}$. If $\mathcal{E}_1\supseteq\mathcal{E}_0$, then $V^\ast:\hat{\mathcal{E}}_1\rightarrow\mathbb{R}$ constructed from the elements of $\textbf{V}^\ast$ is a Lyapunov function of $\dot{x}=g_{\boldsymbol{\lambda}^\ast}(x)$, where $\hat{\mathcal{E}}_1\subseteq\mathcal{E}_1$ is in $\mathfrak{R}^n$. Let $\mathcal{A} = V^{\ast^{-1}}([0,r])\subseteq\hat{\mathcal{E}}_1$ be in $\mathfrak{R}^n$ for some $r>0$. Then $x=0$ is locally exponentially stable for $\dot{x}=g_{\boldsymbol{\lambda}^\ast}(x)$ with $||x(t)||\leq\sqrt[^{a^\ast}]{r/b_1^\ast}e^{-(\hat{b}_2^\ast/a^\ast)(t-t_0)}$ if $x(t_0)\in\mathcal{A}\degree$, where $\hat{b}_2^\ast \coloneqq \min \Set{-D^+_{i,j}V^\ast / V^\ast_{x_{i,j}} \mid i\in\mathbb{I}_1, j\in\mathbb{Z}_0^n, x_{i,j} \neq 0 }$. \qed \end{corollary} \begin{proof} For all simplexes in $\mathcal{E}_1$, $D_{i,j}^+V^\ast$ is negative except at $0$, where it is zero, making $\hat{b}^\ast_2$ positive. $\mathcal{E}_1{\supseteq}\mathcal{E}_0$ ensures that $\hat{\mathcal{E}}_1{\in}\mathfrak{R}^n$ exists because $\mathcal{E}_0{\in}\mathfrak{R}^n$. The claim follows from Theorem\;\ref{thm:genOpt} by letting $\mathcal{T}{\coloneqq}\hat{\mathcal{E}}_1$ and $b^\ast_2{\coloneqq}\hat{b}_2^\ast$ in \eqref{eq:genOpt}.
\end{proof} In practice, it may not be obvious how to apply Theorem\;\ref{thm:genOpt} and Corollary\;\ref{cor:mightFindAController} for control design. For one, finding a control structure in which point-wise feasibility on the vertices of a triangulation implies feasibility at all points in the triangulation is not trivial. Once the control structure is chosen, its first and second derivatives may need to be constrained to compute $\beta_i$ in \eqref{eq:betaAndc}. Moreover, constraints \eqref{eq:myDv} and \eqref{eq:uConstraintGeneral} are nonlinear. Note that searching for a positive $b_2$ in \eqref{eq:genOpt} is important because even if $\mathcal{E}_1\supseteq\mathcal{E}_0$ in Corollary\;\ref{cor:mightFindAController}, the $\hat{b}_2^\ast$ or $\mathcal{A}$ obtained by it might be too small. A practical design for control-affine systems using Theorem\;\ref{thm:genOpt} is discussed next. \subsection{Design for Control-Affine Systems} Let the system in \eqref{eq:controlSystem} be control-affine with a polytopic input constraint. If CPA controllers are chosen as the class $\mathcal{F}$ in Theorem\;\ref{thm:genOpt}, i.e., each element of $u$ is a CPA function, and $\beta_i$ is computed using \eqref{eq:betaAndc}, the only remaining nonlinearities in the constraints of \eqref{eq:genOpt} are the bilinear terms in \eqref{eq:myDv}. Starting from a feasible initialization that may have $b_2\!\leq\!0$, convex overbounding can be used to iteratively find larger values of $b_2$ on a fixed triangulation, in a process inspired by \cite{warner2017}. This is formulated as an iterative SDP here. The following theorem integrates the computation of $\beta_i$ with \eqref{eq:genOpt} when a CPA controller is sought, and highlights the remaining nonlinearities.
\begin{theorem} \label{thm:semiProg} Consider the constrained control system \begin{align} \label{eq:controlAffineSystem} \dot{x} = f(x) + G(x)u, \;\;x\in\mathcal{X}\in\mathfrak{R}^n, \;\;u\in\mathcal{U}\in\mathfrak{R}^m, \end{align} where $f(0)=0$, and $\mathcal{U}=\Set{u\in\mathbb{R}^m \mid H u \leq h_c}$. Given a triangulation $\mathcal{T} =\{\sigma_i\}_{i=1}^{m_{\mathcal{T}}}$, where $\mathcal{T}\subseteq\mathcal{X}$, suppose that both $f(\cdot),G(\cdot)\in\mathbb{C}^2(\mathcal{T})$. Let $u$ be CPA on $\mathcal{T}$, i.e., $u^{(s)}:\mathcal{T} \rightarrow \mathbb{R}$, $\forall s\in\mathbb{Z}_1^m$, where, on each $\sigma_i$, $u^{(s)}(x) = x^\intercal \nabla{u^{(s)}}_i+\omega^{(s)}_i$, $\forall s\in\mathbb{Z}_1^m, \forall i\in\mathbb{Z}_1^{m_{\mathcal{T}}}$. Let $\textbf{y} = [\mathbf{V}, \mathbf{L}, \mathbf{U}, \mathbf{Z}, a, \mathbf{b}]$ be the unknowns, where $\mathbf{V}=\{V_x\}_{x\in\mathbb{E}_\mathcal{T}}\subset\mathbb{R}$ and $\mathbf{L}=\{l_i\}_{i=1}^{m_{\mathcal{T}}}\subset\mathbb{R}^n$, and $\mathbf{U}=\{u_x\}_{x\in\mathbb{E}_\mathcal{T}}\subset\mathbb{R}^m$, and $\mathbf{Z}=\{z_i\}_{i=1}^{m_{\mathcal{T}}}\subset\mathbb{R}$, and $a\in\mathbb{R}$, and $\mathbf{b}=\{b_1,b_2\}\subset\mathbb{R}$. The following optimization is feasible.
\begin{subequations} \label{eq:SemiProg} \begin{alignat}{2} \textbf{y}^\ast &= \argmin_{\textbf{y}} \;\; J(\textbf{y}) \nonumber \\ \textrm{s.t.} \;\; & V_0 = 0, \;\; a, b_1 > 0, && \label{eq:SemiV0b1Constraint} \\ & b_1||x||^a \leq V_x, && \forall x\in\mathbb{E}_\mathcal{T}\backslash\{0\}, \label{eq:SemiVconstraint} \\ & |{\nabla V}_i| \leq l_i, && \forall i\in\mathbb{Z}_1^{m_{\mathcal{T}}}, \label{eq:SemiNablaConstraint} \\ & u_0 = 0, \; H u_x \leq h_c, && \forall x \in \mathbb{E}_\mathcal{T}\backslash\{0\}, \label{eq:SemUconstraint} \\ & |\nabla{u^{(s)}}_i| \leq z_i, && \forall i\in\mathbb{Z}_1^{m_{\mathcal{T}}}, \; \forall s\in\mathbb{Z}_1^m, \label{eq:SemDuConstraint} \\ & D^+_{i,j}V \leq -b_2 V_{x_{i,j}}, \quad && \forall i\in\mathbb{Z}_1^{m_{\mathcal{T}}}, \; \forall j\in\mathbb{Z}_0^n, \label{eq:semDv} \end{alignat} \end{subequations} \noindent where $D^+_{i,j}V=\phi_{i,j} + u_{x_{i,j}}^\intercal G(x_{i,j})^\intercal\nabla{V}_i + c_{i,j}\eta_iz_i1_n^\intercal l_i$ and $\phi_{i,j} = f(x_{i,j})^\intercal {\nabla V}_i {+} c_{i,j}\mu_i 1_n^\intercal l_i$, and $c_{i,j}$ is given in \eqref{eq:betaAndc}, and \begin{flalign}\label{eq:affineB} &\mu_i{=}\max_{p,q,r\in\mathbb{Z}_1^n} \max_{\xi\in\sigma_i} \left| \left. \sfrac{\partial^2 f^{(p)}}{\partial x^{(q)}\partial x^{(r)}}\right|_{x=\xi}\right| + \ldots & \\ & \sum_{s=1}^m \left| \left. \sfrac{\partial^2 G^{(p,s)}}{\partial x^{(q)}\partial x^{(r)}} \right|_{x=\xi} \right| \max_{u^{(s)}} \left|\textrm{proj}_s(\mathcal{U})\right|, \textrm{ and } & \nonumber \\ &\eta_i{=}\!\max_{p,q,r{\in}\mathbb{Z}_1^n} \max_{\xi{\in}\sigma_i}\!\sum_{s{=}1}^m \!\left| \left. \sfrac{\partial G^{(p,s)}}{\partial x^{(q)}}\right|_{x{=}\xi} \right|\! {+} \!\left| \left. \sfrac{\partial G^{(p,s)}}{\partial x^{(r)}}\right|_{x{=}\xi} \right|, \nonumber & \end{flalign} \noindent where $\textrm{proj}_s(\mathcal{U})$ projects $\mathcal{U}$ onto the $s$-th axis of $\mathbb{R}^m$. 
\qed \end{theorem} \begin{proof} Consider any $\sigma_i{\in}\mathcal{T}$. By generalizing \cite[Lem\;III.1]{Doban} to multi-input systems with polytopic input constraints, the right-hand side of \eqref{eq:betaAndc} can be bounded above by $\mu_i+\eta_i z_i$, where $z_i\geq |\nabla{u^{(s)}}_i|$ for all $s\in\mathbb{Z}_1^m$, using the Triangle Inequality. Considering $z_i$ as an optimization variable, replacing $\beta_i$ with $\mu_i{+}\eta_i z_i$, and including the constraint $|\nabla{u^{(s)}}_i|{\leq} z_i$ in \eqref{eq:genOpt} yields \eqref{eq:SemiProg}. Thus, the claim follows from Theorem\;\ref{thm:genOpt}. \end{proof} The only remaining nonlinearities in the constraints of \eqref{eq:SemiProg} are the $||x||^a$ terms in \eqref{eq:SemiVconstraint} and the bilinear terms in $D^+_{i,j}V{-}\phi_{i,j}$ and the right-hand side of \eqref{eq:semDv}, since the $\mu_i,\eta_i$'s are known constants on a given triangulation $\mathcal{T}$. Note that there are $(n{+}1)m_{\mathcal{T}}$ constraints in the form of \eqref{eq:semDv}, a number that grows linearly with $m_{\mathcal{T}}$. Thus, convexifying \eqref{eq:SemiProg} is valuable to make it more practical. Theorem\;\ref{thm:semiProg} can be used as a nonlinear optimization to find a CPA controller, or to find an initialization for the following iterative SDP algorithm. \subsection{Iterative Design Algorithm} The primary objective in both nonlinear optimizations \eqref{eq:genOpt} and \eqref{eq:SemiProg} is finding a $b_2^\ast>0$ to ensure stability. Choosing a cost function that weighs increasing $b_2$ against performance is inadvisable because no stabilizing controller is obtained until a $b_2>0$ is found. This section gives an algorithm for system \eqref{eq:controlAffineSystem} that iteratively searches for $b_2>0$ using a sequence of SDPs. If a sufficiently large $b_2>0$ is found, the algorithm fixes it and then optimizes other performance objectives in another sequence. The following theorem formulates each iteration.
\begin{theorem} \label{thm:semiProg2} Suppose that $J$ in \eqref{eq:SemiProg} is linear or quadratic, and $a>0$ is a fixed number. Let $\underline{\mathbf{y}}=[\underline{\mathbf{V}}, \underline{\mathbf{L}}, \underline{\mathbf{U}}, \underline{\mathbf{Z}}, a, \underline{\mathbf{b}}]$ satisfy \eqref{eq:SemiVconstraint}--\eqref{eq:semDv}. Consider the following optimization. \begin{subequations} \label{eq:SemiProg2} \begin{alignat}{2} &\delta{\textbf{y}^\ast} = \argmin_{\delta\mathbf{y} = [\delta\mathbf{V}, \delta\mathbf{L}, \delta\mathbf{U}, \delta\mathbf{Z}, 0, \delta\mathbf{b}]} && J(\underline{\textbf{y}}+\delta\textbf{y}) \nonumber \\ &\textrm{s.t.} && \nonumber \\ & \delta V_0 = 0, \;\; \underline{b}_1 + \delta b_1 > 0, && \label{eq:Semi2V1b0Constraint} \\ & (\underline{b}_1+\delta b_1)||x||^a \leq \underline{V}_x + \delta V_x, && \forall x\in\mathbb{E}_\mathcal{T}\backslash\{0\}, \label{eq:SemiVconstraint2} \\ & |{\nabla\underline{V}}_i + \delta {\nabla V}_i| \leq \underline{l}_i + \delta l_i, \quad && \forall i\in\mathbb{Z}_1^{m_{\mathcal{T}}}, \label{eq:SemiNablaConstraint2} \\ & \delta u_0 = 0, \; H (\underline{u}_x + \delta u_x) \leq h_c, && \forall x \in \mathbb{E}_\mathcal{T}, \label{eq:SemUconstraint2} \\ &|\nabla{\underline{u}^{(s)}}_i {+} \delta \nabla{u^{(s)}}_i| \leq \underline{z}_i {+} \delta z_i, \;\; && \forall i\in\mathbb{Z}_1^{m_{\mathcal{T}}}, \; \forall s\in\mathbb{Z}_1^m, \label{eq:SemDuConstraint2} \\ & P_{i,j} \leq 0, && \forall i\in\mathbb{Z}_1^{m_{\mathcal{T}}}, \; j{=}0, \label{eq:2SemDv1} \\ & Q_{i,j} \leq 0, && \forall i\in\mathbb{Z}_1^{m_{\mathcal{T}}}, \; \forall j\in\mathbb{Z}_1^n, \label{eq:2SemDv2} \end{alignat} \end{subequations} \noindent where $\delta{\nabla V}_i{=}X_i^{-1}\delta\bar{V}_i$, $\delta{\nabla u^{(s)}}_i{=}X_i^{-1}\delta\bar{u}_i$ as in Remark\;\ref{rem:nablaLinear}, \begin{equation} P_{i,j} = \begin{bmatrix} \hat{\phi}_{i,j} & \ast & \ast & \ast & \ast \\ \delta{\nabla{V}}_i & -2I_n & \ast & \ast & \ast \\ 
G(x_{i,j})\delta u_{x_{i,j}} & 0 & -2I_n & \ast & \ast \\ \delta V_{x_{i,j}} & 0 & 0 & -2 & \ast \\ \delta b_2 & 0 & 0 & 0 & -2 \end{bmatrix}, \end{equation} \begin{align} \hat{\phi}_{i,j} = & ({\nabla \underline{V}}_i + \delta{\nabla V}_i)^\intercal(f(x_{i,j}) + G(x_{i,j}) \underline{u}_{x_{i,j}}) + \ldots \nonumber \\ & {\nabla \underline{V}}_i^\intercal G(x_{i,j})\delta u_{x_{i,j}} + \mu_ic_{i,j}1_n^\intercal (\underline{l}_i+\delta l_i) + \ldots \nonumber \\ & \eta_ic_{i,j}\left(( \underline{z}_i+\delta z_i)1_n^\intercal \underline{l}_i + \underline{z}_i 1_n^\intercal\delta l_i \right) + \ldots \nonumber \\ & b_2 ( \underline{V}_{x_{i,j}} + \delta V_{x_{i,j}}) + \underline{V}_{x_{i,j}}\delta b_2, \textrm{ and} \label{eq:phiHat} \end{align} \begin{equation} Q_{i,j} = \begin{bmatrix} P_{i,j} & \ast & \ast\\ 1_n^\intercal\delta l_i & \frac{-2}{\eta_i c_{i,j}} & \ast \\ \delta z_i & 0 & \frac{-2}{\eta_i c_{i,j}} \end{bmatrix}, \end{equation} \noindent and $c_{i,j}$ is given in \eqref{eq:betaAndc}, and $\mu_i, \eta_i$ are given in \eqref{eq:affineB}. Then, $\underline{\textbf{y}}+\delta\textbf{y}^\ast$ is a feasible point for \eqref{eq:SemiProg}, and $J(\underline{\textbf{y}}+\delta\textbf{y}^\ast)\leq J(\underline{\textbf{y}})$. \qed \end{theorem} \begin{proof} To see that \eqref{eq:SemiProg2} is feasible, observe that $\delta\textbf{y}{=}0$ satisfies \eqref{eq:SemiProg2} since in this case, \eqref{eq:SemiProg2} is equivalent to \eqref{eq:SemiProg} with $\textbf{y}:=\underline{\textbf{y}}$. In fact, \eqref{eq:2SemDv1}--\eqref{eq:2SemDv2} are the convexified equivalences of \eqref{eq:semDv}. To show this, recall that $w^\intercal v {\leq} 1/2(w^\intercal w {+}v^\intercal v)$ for any $v,w$ vectors with the same dimension. 
Applying this fact with $(v,w){=}(\delta \nabla{V}_i, G(x_{i,j}) \delta u_{x_{i,j}})$, $(v,w){=}( \delta z_i, 1_n^\intercal\delta l_i)$, and $(v,w){=}(\delta V_{x_{i,j}}, \delta b_2)$ shows, via the Schur complement, that \eqref{eq:2SemDv1} implies \eqref{eq:semDv} when $j{=}0$, since $c_{i,j}$ is zero in this case, and that \eqref{eq:2SemDv2} implies it when $j{\neq} 0$. Finally, $J(\underline{\mathbf{y}} {+} \delta \mathbf{y}^\ast) \leq J(\underline{\mathbf{y}})$ because otherwise $\delta\mathbf{y}=0$ would be a feasible solution with a lower cost. \end{proof} \begin{remark} If $G(x)$ in \eqref{eq:controlAffineSystem} is a constant matrix, \eqref{eq:2SemDv1} must be used for all $j\in\mathbb{Z}_0^n$ because in this case, $\eta_i=0$ in \eqref{eq:affineB}. Also, \eqref{eq:SemDuConstraint} and \eqref{eq:SemDuConstraint2} are not needed. \qed \end{remark} Starting with a feasible point of \eqref{eq:SemiProg}, Theorem\;\ref{thm:semiProg2} can be applied repeatedly to potentially decrease the cost function. Note that by enforcing $\delta V_0=\delta u_0=0$ on the simplexes that have $0$ as a vertex, and letting $\underline{b}_1+\delta b_1$ be greater than or equal to a small positive number, \eqref{eq:SemiProg2} becomes an SDP in the standard format. The small positive number must be kept constant in the later iterations. Two methods of finding a feasible initialization point are given next. \begin{initialization} \label{initialization:random} Choosing $a,b_1>0$, let $V_x=b_1||x||^a$, $\forall x{\in}\mathbb{E}_\mathcal{T}$. Let $u_0{=}0$ and assign admissible $u_x$ for all $x{\in}\mathbb{E}_\mathcal{T}$; these values may be chosen at random. Compute $l_i{=}|\nabla{V}_i|$ and $z_i{=}|\nabla{u^{(s)}}_i|$ for all $i\in\mathbb{Z}_1^{m_{\mathcal{T}}}$ as in Remark\;\ref{rem:nablaLinear}. Finally, find the largest $b_2$ satisfying \eqref{eq:semDv} in all simplexes. \qed \end{initialization} \begin{initialization} \label{initialization:LQR} Linearize \eqref{eq:controlAffineSystem} around the origin.
Design an LQR controller, and find the corresponding quadratic Lyapunov function, $x^\intercal\hat{P}x$. Sample $x^\intercal\hat{P}x$ at the vertices of $\mathcal{T}$ to find $\mathbf{V}$, and let $a{=}2$ and $b_1$ be equal to the smallest eigenvalue of $\hat{P}$. Sample the LQR controller at the vertices of $\mathcal{T}$ to form $\mathbf{U}^{\textrm{LQR}}{=}\{u_x^{\textrm{LQR}}\}_{x\in\mathbb{E}_\mathcal{T}}$. Divide each element of $\mathbf{U}^{\textrm{LQR}}$ by a positive number so that the result, $\mathbf{U}{=}\{u_x\}_{x\in\mathbb{E}_\mathcal{T}}$, has admissible values for all vertices. Compute $l_i{=}|\nabla{V}_i|$ and $z_i{=}|\nabla{u^{(s)}}_i|$ for all $i{\in}\mathbb{Z}_1^{m_{\mathcal{T}}}$ as in Remark\;\ref{rem:nablaLinear} using the computed values of $V_x$ and $u_x$, respectively. Finally, find the largest $b_2$ satisfying \eqref{eq:semDv} in all simplexes. \qed \end{initialization} Given a triangulation and a linear or quadratic cost function $\hat{J}(\textbf{V},\textbf{L},\textbf{U},\textbf{Z},b_1)$, the procedure for finding a stabilizing CPA controller for \eqref{eq:controlAffineSystem} is given in Algorithm\;\ref{alg:cpaControl}. It iteratively increases $b_2$ until it is positive. Since the upper bound on the state norm decays as $e^{-(\sfrac{b_2}{a})t}$ when $a{>}0$ is fixed, $b_2{>}0$ can be increased until a desired decay rate is ensured. Then, by fixing $b_2$'s value, $\hat{J}(\cdot)$ is iteratively minimized. Finally, the corresponding positive-invariant set, $\mathcal{A}{=}{V}^{-1}([0,r])$, $r{>}0$, where $\mathcal{A}{\subseteq}\mathcal{T}$ and $\mathcal{A}{\in}\mathfrak{R}^n$, is found. Both loops can be terminated in lines \ref{line:term1} and \ref{line:term2} if a predefined maximum number of iterations is reached. If a sufficiently large positive $b_2$ cannot be found, triangulation refinement, discussed later, is needed. \begin{algorithm}[h!]
\caption{CPA control design on a fixed triangulation} \label{alg:cpaControl} \begin{algorithmic}[1] \Require The control-affine system \eqref{eq:controlAffineSystem}, a triangulation $\mathcal{T}\subseteq\mathcal{X}$, and a linear or quadratic $\hat{J}(\textbf{V},\textbf{L},\textbf{U},\textbf{Z},b_1)$ \Ensure $u(x)$, and a positive-invariant set $\mathcal{A}$ \State $\underline{\mathbf{y}} \coloneqq $ a feasible point of \eqref{eq:SemiProg} (using Initialization\;\ref{initialization:random} or \ref{initialization:LQR}) \State $J \coloneqq -b_2$ \Comment{since $b_2$ is to be maximized} \Repeat{} \State Use Theorem\;\ref{thm:semiProg2} \Until{$b_2 > 0$ is large enough OR $b_2$ is not changing} \label{line:term1} \If{$b_2>0$ is found} \State Fix $b_2$, and let $J\coloneqq\hat{J}(\cdot)$ \Repeat{} \State Use Theorem\;\ref{thm:semiProg2} \Until{$J$ is sufficiently small OR $J$ is not changing} \label{line:term2} \State Return $u(x)$ and find a set $\mathcal{A}={V}^{-1}([0,r])$, $r>0$, \\ \hspace{0.4cm} where $\mathcal{A}\subseteq\mathcal{T}$ and $\mathcal{A}\in\mathfrak{R}^n$ \label{line:findingA} \EndIf \end{algorithmic} \end{algorithm} Once Algorithm\;\ref{alg:cpaControl} returns the stabilizing controller, the corresponding point, $\mathbf{y}$, can serve as the initial guess for the non-convex optimization \eqref{eq:SemiProg} with a nonlinear cost function $J(\cdot)$, as a final attempt to improve performance. \subsection{Minimum-norm Online Implementation} Consider system \eqref{eq:controlAffineSystem}. Algorithm\;\ref{alg:cpaControl} only minimizes the objective pointwise on the vertices of the triangulation unless $\hat{J}(\cdot)$ is chosen carefully. However, since the corresponding Lyapunov function of the returned controller is also a Lipschitz CLF, a minimum-norm controller can be formulated as a QP \cite{Ames2016,nonMPC-Lyap2006}.
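To illustrate the pointwise structure of such a minimum-norm QP, the following minimal sketch (the function name and numerical values are illustrative, not from the paper) solves the problem in closed form under a single CLF decrease constraint; the input polytope and the multiple active-simplex constraints are omitted here for brevity.

```python
import numpy as np

def min_norm_clf_input(grad_V, f, G, b2, V):
    """argmin_u ||u||^2 subject to grad_V^T (f(x) + G(x) u) + b2*V(x) <= 0.

    Rewriting the decrease condition as a^T u <= b with a = G^T grad_V and
    b = -grad_V^T f - b2*V, the minimizer is the Euclidean projection of
    u = 0 onto the half-space {u : a^T u <= b}.
    """
    a = G.T @ grad_V
    b = -(grad_V @ f) - b2 * V
    if b >= 0.0:                   # u = 0 already satisfies the decrease condition
        return np.zeros_like(a)
    return (b / (a @ a)) * a       # constraint active: a^T u = b
```

For instance, with $\nabla V=(1,2)$, $f=(0.5,1)$, $G=(0,1)^\intercal$, $b_2=1$, and $V=3$, the constraint is active and the sketch returns $u=-2.75$, which places $\nabla V^\intercal(f+Gu)+b_2V$ exactly at zero.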
Suppose that $b_2^\ast>0$ is found by Algorithm\;\ref{alg:modification}, and $V^\ast$ is the corresponding CPA Lyapunov function. Let $\mathcal{A}$ be $\mathcal{A}{=}{V^\ast}^{-1}([0,r])$, $r{>}0$, where $\mathcal{A}{\subseteq}\mathcal{T}$ and $\mathcal{A}{\in}\mathfrak{R}^n$. Starting at any $x{\in}\mathcal{A}\degree$, the minimum-norm controller can be written as \begin{subequations} \label{eq:minNormLips} \begin{flalign} &u^\ast(x) = \argmin_{u} \;\; u^\intercal\hat{H}(x)u + \hat{h}(x)^\intercal u & \nonumber \\ &\textrm{s.t.\;\;} Hu\leq h_c, & \\ & \quad\;\;\, \nabla{V^\ast_i}^\intercal(f(x){+}G(x)u){+} b_2^\ast V^\ast(x) \leq 0, \;\; \forall i\in\mathcal{I}, & \end{flalign} \end{subequations} \noindent where $\mathcal{I} {=} \Set{i{\in}\mathbb{Z}_1^{m_{\mathcal{T}}} | x{\in}\sigma_i}$, and $\hat{H}(x)$ is positive definite. The set $\mathcal{I}$ has more than one element if $x$ lies on a common face of several simplexes. The optimization \eqref{eq:minNormLips} is feasible for all $x\in\mathcal{A}$, because the CPA controller corresponding to $V^\ast$ is a feasible point for it. Therefore, the convergence inequality $||x(t)||\leq\sqrt[^a]{r/b_1}e^{-(b_2/a)(t-t_0)}$ that holds for the CPA controller also holds for the QP-based controller. \section{Triangulation Refinement} Both Theorem\;\ref{thm:genOpt} and Algorithm\;\ref{alg:cpaControl} work on given fixed triangulations. If a positive $b_2$ cannot be found, the triangulation can be refined. These refinements can be made locally by tracking the value of $D^+_{i,j}V$ on the simplexes in $\mathcal{T}$. However, for simplicity, a structured triangulation with uniform refinement over all simplexes is proposed here. The standard triangulation, denoted by $\mathcal{T}^{\textrm{std}}$, partitions a hyper-cube in $\mathbb{R}^n$ into generalized isosceles right-triangle simplexes with unit-length sides, and can be tessellated to cover the whole $\mathbb{R}^n$ \cite[Sec\;3.1]{gieslRevCPA2013}.
Multiplying all of its vertices by a positive number scales the triangulation. \begin{definition} \label{def:scaledSubset} Given $\mathcal{X}\in\mathfrak{R}^n$ and $\rho>0$, a scaled subset of $\mathcal{T}^{\textrm{std}}$ in $\mathcal{X}$, denoted $\mathcal{T}_\mathcal{X}^\rho$, is a triangulation obtained by scaling $\mathcal{T}^\textrm{std}$ by $\rho$, and then finding the largest collection of its simplexes lying entirely in $\mathcal{X}$. \qed \end{definition} Let the volume enclosed by $\Omega\in\mathfrak{R}^n$ be denoted by $\textrm{vol}(\Omega)$. Given an initial $\rho$, and a minimum threshold on the covering percentage of $\mathcal{X}$, denoted by $\epsilon_c$, Algorithm\;\ref{alg:modification} finds a small enough $\rho$ so that $\textrm{vol}(\mathcal{T}_\mathcal{X}^\rho)/\textrm{vol}(\mathcal{X})\geq \epsilon_c$. Then it searches for a controller. If unsuccessful, $\rho$ is decreased to refine all simplexes, and the search continues. The algorithm terminates when $u(x)$ is returned, or when decreasing $\rho$ violates a given threshold, $\rho_{\textrm{min}}$. \begin{algorithm}[h!] \caption{Control design with triangulation refinement} \label{alg:modification} \begin{algorithmic}[1] \Require System \eqref{eq:controlSystem} (or \eqref{eq:controlAffineSystem}), cost function, $\rho$, $0<\gamma<1$, $\epsilon_c$.
\Ensure $u(x)$ and a positive-invariant set $\mathcal{A}$ \Repeat{} \State $\mathcal{T} \coloneqq \mathcal{T}_\mathcal{X}^\rho$, where $\textrm{vol}(\mathcal{T}_\mathcal{X}^\rho)/\textrm{vol}(\mathcal{X})\geq \epsilon_c$ \State Solve \eqref{eq:genOpt} (or Algorithm\;\ref{alg:cpaControl}) \If{$b_2>0$ is found} \State Return: output of Theorem\;\ref{thm:genOpt} (or Algorithm\;\ref{alg:cpaControl}) \label{line:refFindingA} \EndIf \State $\rho := \gamma \rho$ and make sure $\textrm{vol}(\mathcal{T}_\mathcal{X}^\rho)/\textrm{vol}(\mathcal{X})\geq \epsilon_c$ \Until{$\rho < \rho_{\textrm{min}}$ OR $b_2>0$ } \end{algorithmic} \end{algorithm} \begin{remark} Since \eqref{eq:SemiProg2}'s solution satisfies \eqref{eq:genOpt}, whenever \eqref{eq:genOpt} is solved or Theorem\;\ref{thm:semiProg2} is invoked in Algorithm\;\ref{alg:cpaControl} and a $b_2{\leq}0$ is found, Corollary\;\ref{cor:mightFindAController} can be checked to see if a positive $\hat{b}_2$ exists. If so, $\mathcal{A}$ is also obtained by Corollary\;\ref{cor:mightFindAController}. \qed \end{remark} \section{Numerical Simulation} Consider the inverted pendulum $\dot{x}^{(1)} = x^{(2)}$, $\dot{x}^{(2)} = 4.9\sin{x^{(1)}} - 0.3x^{(2)} + u$, where the polytope $\mathcal{X}$ in Fig.\;\ref{fig:trian} and $|u|\leq 5$ define its state and input constraints. All units are SI. To solve the SDPs, YALMIP \cite{yalmip} with SeDuMi \cite{sedumi} was used in MATLAB. For initialization, an LQR with the cost function $x^\intercal Qx+u^2$, where $Q=2I$, was used. Choosing $\rho=0.5$, $\gamma = 0.8$, and $\epsilon_c = 0.85$ in Algorithm\;\ref{alg:modification}, and limiting the convex-overbounding iterations in Algorithm\;\ref{alg:cpaControl} to five, a CPA controller was found on $\mathcal{T}_\mathcal{X}^{0.13}$ with $b_2=0.33$ and $b_1=4.27$ after five iterations.
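The plant and integration scheme of this simulation can be reproduced with the sketch below. The computed CPA and QP controllers are not reproduced here; a saturated linear state feedback with hypothetical gains $K=(10,5)$, chosen only for illustration, stands in for them.

```python
import numpy as np

def pendulum(x, u):
    # Inverted pendulum from the example: x1' = x2, x2' = 4.9 sin(x1) - 0.3 x2 + u
    return np.array([x[1], 4.9 * np.sin(x[0]) - 0.3 * x[1] + u])

def rk4_step(x, u, dt):
    # Classical fourth-order Runge-Kutta step, with u held constant over the step
    k1 = pendulum(x, u)
    k2 = pendulum(x + 0.5 * dt * k1, u)
    k3 = pendulum(x + 0.5 * dt * k2, u)
    k4 = pendulum(x + dt * k3, u)
    return x + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def simulate(x0, K, T=8.0, dt=0.01, umax=5.0):
    # Hypothetical linear feedback u = -K x, saturated to the input bound |u| <= 5
    x = np.array(x0, dtype=float)
    for _ in range(int(round(T / dt))):
        u = float(np.clip(-(K @ x), -umax, umax))
        x = rk4_step(x, u, dt)
    return x
```

Starting from the paper's initial condition $(0.52,-0.78)$, this stand-in feedback also drives the state to the origin, which makes it a convenient harness for comparing against the CPA and QP controllers once they are computed.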
Corollary\;\ref{cor:mightFindAController} could have been used in the second iteration to return a solution earlier; however, refining the triangulation resulted in a larger positive-invariant set, $\mathcal{A}$. The triangulation and the boundary of the Lyapunov function's sub-level set $\mathcal{A}\subseteq\mathcal{T}$ satisfying $\mathcal{A}\in\mathfrak{R}^n$ are depicted in Fig.\;\ref{fig:trian}. No further improvement was made offline. To simulate the QP-based controller \eqref{eq:minNormLips}, the objective function $u^2$ was used, and the system equations were integrated using the fourth-order Runge--Kutta method with a $0.01$\;s time step. Starting at $x=(0.52,-0.78)$, which is inside the level set, the state trajectories and inputs of the two controllers are given in Fig.\;\ref{fig:traj}. Since the state evolves differently under the QP-based controller, the trajectory of its input, $u_{\textrm{QP}}$, is not always below the trajectory of the CPA controller input, $u_\textrm{CPA}$, despite the fact that $u_{\textrm{QP}}$ is a minimum-norm realization. The time it takes for both states to settle within the $\pm0.05$ range is $1.8$\;s for the CPA controller and $4.1$\;s for the QP-based controller. Since the initial point is inside the sub-level set, the state and input constraints are respected. \begin{figure} \centering \includegraphics[width=8.65cm]{triangul.png} \caption{The set $\mathcal{X}$, the triangulation $\mathcal{T}\subseteq\mathcal{X}$, and $\partial\mathcal{A}$, the boundary of a sub-level set of the CPA Lyapunov function in $\mathcal{T}$.} \label{fig:trian} \end{figure} \begin{figure} \centering \includegraphics[width=8.65cm]{trajs.png} \caption{State and input trajectories of both controllers.
Since $u^\ast(x)$ in \eqref{eq:minNormLips} may not be Lipschitz \cite{Ames2016}, occasional jumps occur in $u_{\textrm{QP}}$.} \label{fig:traj} \end{figure} \section{Conclusion} In this paper, a method to stabilize state- and input-constrained nonlinear systems was proposed via an offline optimization on a variable triangulation. The method provides an exact region of attraction and an exponentially decaying upper bound on the state norm. For control-affine systems, the optimization was formulated via iterative SDPs assuming a CPA structure for the controller. In this case, the corresponding Lyapunov function, which is a Lipschitz CLF, was also used to formulate a minimum-norm QP-based controller. \bibliographystyle{unsrt}
\section{Tridiagonal pairs} \indent Throughout the paper $\mathbb{K}$ denotes a field and $V$ denotes a vector space over $\mathbb{K}$ with finite positive dimension. \medskip We begin by recalling the notion of a tridiagonal pair. We will use the following terms. Let $\text{End}(V)$ denote the $\mathbb{K}$-algebra consisting of all $\mathbb{K}$-linear transformations from $V$ to $V$. For $A \in \text{End}(V)$ and for a subspace $W \subseteq V$, we call $W$ an {\em eigenspace} of $A$ whenever $W \neq 0$ and there exists $\theta \in \mathbb{K}$ such that $W=\{v \in V \,|\, Av=\theta v\}$; in this case $\th$ is the {\em eigenvalue} of $A$ associated with $W$. We say $A$ is {\em diagonalizable} whenever $V$ is spanned by the eigenspaces of $A$. \medskip \begin{definition} \cite{ITT} \label{def:TDpair} \samepage By a {\em tridiagonal pair} on $V$ we mean an ordered pair $A,A^* \in \text{End}(V)$ that satisfy (i)--(iv) below: \begin{itemize} \item[(i)] Each of $A,A^*$ is diagonalizable. \item[(ii)] There exists an ordering $\{V_i\}_{i=0}^d$ of the eigenspaces of $A$ such that \begin{equation} \label{eq:Astrid} A^* V_i \subseteq V_{i-1} + V_{i} + V_{i+1} \qquad\qquad (0 \leq i \leq d), \end{equation} where $V_{-1}=0$ and $V_{d+1}=0$. \item[(iii)] There exists an ordering $\{V^*_i\}_{i=0}^\delta$ of the eigenspaces of $A^*$ such that \begin{equation} \label{eq:Atrid} A V^*_i \subseteq V^*_{i-1} + V^*_{i} + V^*_{i+1} \qquad\qquad (0 \leq i \leq \delta), \end{equation} where $V^*_{-1}=0$ and $V^*_{\delta+1}=0$. \item[(iv)] There is no subspace $W$ of $V$ such that $AW \subseteq W$, $A^* W \subseteq W$, $W \neq 0$, $W \neq V$. \end{itemize} \end{definition} \begin{note} \label{note:star} \samepage It is a common notational convention to use $A^*$ to represent the conjugate-transpose of $A$. We are not using this convention. In a tridiagonal pair $A,A^*$ the linear transformations $A$ and $A^*$ are arbitrary subject to (i)--(iv) above. 
\end{note} \medskip Let $A,A^*$ denote a tridiagonal pair on $V$, as in Definition \ref{def:TDpair}. By \cite[Lemma 4.5]{ITT} the integers $d$ and $\delta$ from (ii), (iii) are equal; we call this common value the {\em diameter} of the pair. \medskip We refer the reader to \cite{AC,AC2,AC3,Bas,ITT,IT:shape,IT:uqsl2hat,IT:Krawt,N:aw,N:refine,N:height1, NT:tde,NT:sharp,Vidar} for background on tridiagonal pairs. See \cite{BI,BT:Borel,BT:loop,Bow,Ca,CaMT,CaW,Egge,F:RL,H:tetra,HT:tetra, IT:non-nilpotent,IT:tetra, IT:inverting,IT:drg,IT:loop,ITW:equitable,Leo,Koe,Mik,Mik2, P,R:multi,R:6j,Suz,T:sub1,T:sub3, T:qSerre,T:Kac-Moody,Z} for related topics. The following special case has received a lot of attention. A tridiagonal pair $A,A^*$ is called a {\em Leonard pair} whenever the eigenspaces for $A$ (resp. $A^*$) all have dimension $1$. See \cite{Cur:mlt,Cur:spinLP,H,M:LT,NT:balanced,NT:formula,NT:det,NT:mu, NT:span,NT:switch,NT:affine,NT:maps,T:Leonard,T:24points,T:canform,T:intro, T:intro2,T:split,T:array,T:qRacah,T:survey,TV,V,V:AW} for information about Leonard pairs. \section{Tridiagonal systems} \indent When working with a tridiagonal pair, it is often convenient to consider a closely related object called a tridiagonal system. To define a tridiagonal system, we recall a few concepts from linear algebra. Let $A$ denote a diagonalizable element of $\text{End}(V)$. Let $\{V_i\}_{i=0}^d$ denote an ordering of the eigenspaces of $A$ and let $\{\th_i\}_{i=0}^d$ denote the corresponding ordering of the eigenvalues of $A$. For $0 \leq i \leq d$ let $E_i:V \to V$ denote the linear transformation such that $(E_i-I)V_i=0$ and $E_iV_j=0$ for $j \neq i$ $(0 \leq j \leq d)$. Here $I$ denotes the identity of $\text{End}(V)$. We call $E_i$ the {\em primitive idempotent} of $A$ corresponding to $V_i$ (or $\th_i$). Observe that (i) $\sum_{i=0}^d E_i = I$; (ii) $E_iE_j=\delta_{i,j}E_i$ $(0 \leq i,j \leq d)$; (iii) $V_i=E_iV$ $(0 \leq i \leq d)$; (iv) $A=\sum_{i=0}^d \th_iE_i$. 
Moreover \begin{equation} \label{eq:defEi} E_i=\prod_{\stackrel{0 \leq j \leq d}{j \neq i}} \frac{A-\th_jI}{\th_i-\th_j}. \end{equation} We note that each of $\{E_i\}_{i=0}^d$, $\{A^i\}_{i=0}^d$ is a basis for the $\mathbb{K}$-subalgebra of $\text{End}(V)$ generated by $A$. Now let $A,A^*$ denote a tridiagonal pair on $V$. An ordering of the eigenspaces of $A$ (resp. $A^*$) is said to be {\em standard} whenever it satisfies \eqref{eq:Astrid} (resp. \eqref{eq:Atrid}). We comment on the uniqueness of the standard ordering. Let $\{V_i\}_{i=0}^d$ denote a standard ordering of the eigenspaces of $A$. Then the ordering $\{V_{d-i}\}_{i=0}^d$ is standard and no other ordering is standard. A similar result holds for the eigenspaces of $A^*$. An ordering of the primitive idempotents of $A$ (resp. $A^*$) is said to be {\em standard} whenever the corresponding ordering of the eigenspaces of $A$ (resp. $A^*$) is standard. \medskip \begin{definition} \label{def:TDsystem} \samepage By a {\em tridiagonal system} on $V$ we mean a sequence \[ \Phi=(A;\{E_i\}_{i=0}^d;A^*;\{E^*_i\}_{i=0}^d) \] that satisfies (i)--(iii) below. \begin{itemize} \item[(i)] $A,A^*$ is a tridiagonal pair on $V$. \item[(ii)] $\{E_i\}_{i=0}^d$ is a standard ordering of the primitive idempotents of $A$. \item[(iii)] $\{E^*_i\}_{i=0}^d$ is a standard ordering of the primitive idempotents of $A^*$. \end{itemize} \end{definition} \medskip We will use the following notation. \medskip \begin{notation} \label{notation} \samepage Let $\Phi=(A;\{E_i\}_{i=0}^d;A^*;\{E^*_i\}_{i=0}^d)$ denote a tridiagonal system on $V$. We denote by $\cal D$ (resp. ${\cal D}^*$) the $\mathbb{K}$-subalgebra of $\text{End}(V)$ generated by $A$ (resp. $A^*)$. For $0 \leq i \leq d$ let $\th_i$ (resp. $\th^*_i$) denote the eigenvalue of $A$ (resp. $A^*$) associated with the eigenspace $E_iV$ (resp. $E^*_iV$). We observe that $\{\th_i\}_{i=0}^d$ (resp. $\{\th^*_i\}_{i=0}^d$) are mutually distinct and contained in $\mathbb{K}$.
\end{notation} \medskip Referring to Notation \ref{notation}, it has been conjectured that $E^*_0V$ has dimension $1$, provided that $\mathbb{K}$ is algebraically closed \cite{IT:shape}. A more recent and stronger conjecture is that $E^*_0{\cal T}E^*_0$ is commutative, where $\cal T$ is the $\mathbb{K}$-subalgebra of $\text{End}(V)$ generated by ${\cal D}$ and ${\cal D}^*$ \cite{NT:sharp}. There is a more detailed version of this conjecture, which reads as follows: \medskip \begin{conjecture} {\rm \cite[Conjecture 12.2]{NT:sharp}} \label{conj:EDDE} \samepage With reference to Notation {\rm \ref{notation}} the following {\rm (i)}, {\rm (ii)} hold. \begin{itemize} \item[\rm (i)] $E^*_0{\cal T}E^*_0$ is generated by $E^*_0{\cal D}E^*_0$. \item[\rm (ii)] The elements of $E^*_0 {\cal D} E^*_0$ mutually commute. \end{itemize} \end{conjecture} \medskip The following special case of Conjecture \ref{conj:EDDE} has been proven. Referring to Notation \ref{notation}, there is a well-known parameter $q$ associated with $A,A^*$ that is used to describe the eigenvalues \cite{ITT,T:qSerre}. In \cite{IT:aug}, Conjecture \ref{conj:EDDE} is proven assuming $q$ is not a root of unity and $\mathbb{K}$ is algebraically closed. In this paper we use a different approach to prove part (ii) of Conjecture \ref{conj:EDDE} without any extra assumptions. We also obtain a result which sheds some light on why part (i) of the conjecture should be true without any extra assumptions. We now state our main theorem. \medskip \begin{theorem} \label{thm:main} \samepage With reference to Notation {\rm \ref{notation}} the following {\rm (i)}, {\rm (ii)} hold. \begin{itemize} \item[\rm (i)] The span of $E^*_0{\cal D}{\cal D}^*{\cal D}E^*_0$ is equal to the span of $E^*_0{\cal D}E^*_0{\cal D}E^*_0$. \item[\rm (ii)] The elements of $E^*_0{\cal D}E^*_0$ mutually commute. \end{itemize} \end{theorem} \medskip Our proof of Theorem \ref{thm:main} appears in Section 11.
In Sections 3--10 we obtain some results that will be used in the proof. Our point of departure is the following observation. \medskip \begin{lemma} \label{lem:supertrid} With reference to Notation {\rm \ref{notation}} the following {\rm (i)}, {\rm (ii)} hold for $0 \leq i,j,k \leq d$. \begin{itemize} \item[\rm (i)] $E^*_iA^kE^*_j=0$ if $k<|i-j|$. \item[\rm (ii)] $E_i{A^*}^kE_j=0$ if $k<|i-j|$. \end{itemize} \end{lemma} \begin{proof} Routinely obtained using lines \eqref{eq:Astrid}, \eqref{eq:Atrid} and Definition \ref{def:TDsystem}. \end{proof} \section{The $D_4$ action} \indent Let $\Phi=(A; \{E_i\}_{i=0}^d; A^*; \{E^*_i\}_{i=0}^d)$ denote a tridiagonal system on $V$. Then each of the following is a tridiagonal system on $V$: \begin{align*} \Phi^{*} &:= (A^*; \{E^*_i\}_{i=0}^d; A; \{E_i\}_{i=0}^d), \\ \Phi^{\downarrow} &:= (A; \{E_i\}_{i=0}^d; A^*; \{E^*_{d-i}\}_{i=0}^d), \\ \Phi^{\Downarrow} &:= (A; \{E_{d-i}\}_{i=0}^d; A^*; \{E^*_{i}\}_{i=0}^d). \end{align*} Viewing $*$, $\downarrow$, $\Downarrow$ as permutations on the set of all tridiagonal systems, \begin{equation} \label{eq:relation1} \text{$*^2$$\;=\;$$\downarrow^2$$\;=\;$$\Downarrow^2$$\;=\;$$1$}, \end{equation} \begin{equation} \label{eq:relation2} \text{$\Downarrow$$*$$\;=\;$$*$$\downarrow$}, \quad \text{$\downarrow$$*$$\;=\;$$*$$\Downarrow$}, \quad \text{$\downarrow$$\Downarrow$$\;=\;$$\Downarrow$$\downarrow$}. \end{equation} The group generated by symbols $*$, $\downarrow$, $\Downarrow$ subject to the relations (\ref{eq:relation1}), (\ref{eq:relation2}) is the dihedral group $D_4$. We recall that $D_4$ is the group of symmetries of a square and has $8$ elements. Evidently $*$, $\downarrow$, $\Downarrow$ induce an action of $D_4$ on the set of all tridiagonal systems. Two tridiagonal systems will be called {\em relatives} whenever they are in the same orbit of this $D_4$ action.
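These relations can be verified mechanically. The following sketch (an illustrative encoding, not part of the paper) codes a tridiagonal system by which operator comes first and whether each eigenspace ordering is reversed, realizes $*$, $\downarrow$, $\Downarrow$ as maps on these codes, and confirms the relations \eqref{eq:relation1}, \eqref{eq:relation2} (with superscripts applied left to right) together with the orbit size of $8$.

```python
from itertools import product

# A system is coded (X, rx, Y, ry): X, Y are the two operators ('A' or 'As'
# for A^*), and rx, ry flag whether the eigenspace ordering {E_i} of the
# first/second operator is reversed to {E_{d-i}}.
def star(s):                      # exchange the roles of A and A^*
    X, rx, Y, ry = s
    return (Y, ry, X, rx)

def down(s):                      # reverse the second eigenspace ordering
    X, rx, Y, ry = s
    return (X, rx, Y, 1 - ry)

def Down(s):                      # reverse the first eigenspace ordering
    X, rx, Y, ry = s
    return (X, 1 - rx, Y, ry)

systems = [(X, rx, Y, ry) for X, Y in (('A', 'As'), ('As', 'A'))
           for rx, ry in product((0, 1), repeat=2)]

for s in systems:
    # relations (1): the three generators are involutions
    assert star(star(s)) == down(down(s)) == Down(Down(s)) == s
    # relations (2), with the left symbol applied first
    assert star(Down(s)) == down(star(s))
    assert star(down(s)) == Down(star(s))
    assert down(Down(s)) == Down(down(s))

# The orbit of any system under <*, down, Down> is its set of 8 relatives.
orbit = {systems[0]}
frontier = [systems[0]]
while frontier:
    s = frontier.pop()
    for g in (star, down, Down):
        t = g(s)
        if t not in orbit:
            orbit.add(t)
            frontier.append(t)
assert len(orbit) == 8
```

The orbit computed at the end enumerates exactly the eight relatives listed in the table below.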
The relatives of $\Phi$ are as follows: \medskip \noindent \begin{center} \begin{tabular}{c|c} name & relative \\ \hline $\Phi$ & $(A; \{E_i\}_{i=0}^d; A^*; \{E^*_i\}_{i=0}^d)$ \\ $\Phi^{\downarrow}$ & $(A; \{E_i\}_{i=0}^d; A^*; \{E^*_{d-i}\}_{i=0}^d)$ \\ $\Phi^{\Downarrow}$ & $(A; \{E_{d-i}\}_{i=0}^d; A^*; \{E^*_i\}_{i=0}^d)$ \\ $\Phi^{\downarrow \Downarrow}$ & $(A; \{E_{d-i}\}_{i=0}^d; A^*; \{E^*_{d-i}\}_{i=0}^d)$ \\ $\Phi^{*}$ & $(A^*; \{E^*_i\}_{i=0}^d; A; \{E_i\}_{i=0}^d)$ \\ $\Phi^{\downarrow *}$ & $(A^*; \{E^*_{d-i}\}_{i=0}^d; A; \{E_i\}_{i=0}^d)$ \\ $\Phi^{\Downarrow *}$ & $(A^*; \{E^*_i\}_{i=0}^d; A; \{E_{d-i}\}_{i=0}^d)$ \\ $\Phi^{\downarrow \Downarrow *}$ & $(A^*; \{E^*_{d-i}\}_{i=0}^d; A; \{E_{d-i}\}_{i=0}^d)$ \end{tabular} \end{center} \section{Some polynomials} \indent Let $\lambda$ denote an indeterminate and let $\mathbb{K}[\lambda]$ denote the $\mathbb{K}$-algebra consisting of all polynomials in $\lambda$ that have coefficients in $\mathbb{K}$. \medskip \begin{definition} \label{def:tau} With reference to Notation \ref{notation}, for $0 \leq i \leq d$ we define the following polynomials in $\mathbb{K}[\lambda]$: \begin{align} \tau_i &= (\lambda-\th_0)(\lambda-\th_1)\cdots(\lambda-\th_{i-1}), \label{eq:deftau} \\ \eta_i &= (\lambda-\th_d)(\lambda-\th_{d-1})\cdots(\lambda-\th_{d-i+1}), \label{eq:defeta} \\ \tau^*_i &= (\lambda-\th^*_0)(\lambda-\th^*_1)\cdots(\lambda-\th^*_{i-1}), \label{eq:deftaus} \\ \eta^*_i &= (\lambda-\th^*_d)(\lambda-\th^*_{d-1})\cdots(\lambda-\th^*_{d-i+1}). \label{eq:defetas} \end{align} Note that each of $\tau_i,\eta_i,\tau^*_i,\eta^*_i$ is monic with degree $i$. \end{definition} \medskip The following lemmas show the significance of these polynomials. We will focus on $\tau_i,\eta_i$; of course similar results hold for $\tau^*_i,\eta^*_i$. 
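As a numerical sanity check of these definitions and of the product formula \eqref{eq:defEi}, the sketch below takes a diagonal $A$ with arbitrary distinct eigenvalues (the values are illustrative), builds the primitive idempotents, and verifies that $\tau_i(A)E_j=0$ for $j<i$ and $\eta_i(A)E_j=0$ for $j>d-i$, since $\tau_i$ (resp. $\eta_i$) vanishes at $\th_0,\ldots,\th_{i-1}$ (resp. $\th_d,\ldots,\th_{d-i+1}$).

```python
import numpy as np

d = 3
theta = np.array([0.0, 1.0, 3.0, 6.0])    # distinct eigenvalues (illustrative)
A = np.diag(theta)
I = np.eye(d + 1)

def prod_at(roots, M):
    # evaluate prod_r (M - r I) at the matrix M
    P = np.eye(M.shape[0])
    for r in roots:
        P = P @ (M - r * np.eye(M.shape[0]))
    return P

def tau(i, M):
    return prod_at(theta[:i], M)          # (M - theta_0 I)...(M - theta_{i-1} I)

def eta(i, M):
    return prod_at(theta[d - i + 1:], M)  # (M - theta_d I)...(M - theta_{d-i+1} I)

# primitive idempotents via the product formula (eq:defEi)
E = []
for i in range(d + 1):
    P = np.eye(d + 1)
    for j in range(d + 1):
        if j != i:
            P = P @ (A - theta[j] * I) / (theta[i] - theta[j])
    E.append(P)

assert np.allclose(sum(E), I)                                   # sum of E_i is I
assert np.allclose(A, sum(t * Ei for t, Ei in zip(theta, E)))   # A = sum theta_i E_i
for i in range(d + 1):
    for j in range(d + 1):
        if j < i:
            assert np.allclose(tau(i, A) @ E[j], 0.0)
        if j > d - i:
            assert np.allclose(eta(i, A) @ E[j], 0.0)
```

The two vanishing patterns checked at the end are precisely what truncates the expansions of $\tau_i(A)$ and $\eta_i(A)$ in the eigenbasis.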
\medskip \begin{lemma} \label{lem:basistauiA} \samepage With reference to Notation {\rm \ref{notation}}, each of $\{\tau_i(A)\}_{i=0}^d$, $\{\eta_i(A)\}_{i=0}^d$ is a basis for $\cal D$. \end{lemma} \begin{proof} The sequence $\{A^i\}_{i=0}^d$ is a basis for $\cal D$ and each of $\tau_i, \eta_i$ has degree $i$ for $0 \leq i \leq d$. \end{proof} \begin{lemma} \label{lem:tauiAEi} \samepage With reference to Notation {\rm \ref{notation}}, for $0 \leq i \leq d$ we have \begin{align} \tau_i(A) &= \sum_{j=i}^d \tau_i(\th_j)E_j, & E_i &= \sum_{j=i}^d \frac{\eta_{d-j}(\th_i)\tau_j(A)} {\tau_i(\th_i)\eta_{d-i}(\th_i)}, \label{eq:tauiA} \\ \eta_i(A) &= \sum_{j=0}^{d-i} \eta_i(\th_{j}) E_{j}, & E_i &= \sum_{j=d-i}^{d} \frac{\tau_{d-j}(\th_{i})\eta_{j}(A)} {\tau_i(\th_i)\eta_{d-i}(\th_i)}. \label{eq:etaiA} \end{align} \end{lemma} \begin{proof} To get the equation on the left in \eqref{eq:tauiA}, observe that \[ \tau_i(A) = \sum_{j=0}^d \tau_i(A)E_j = \sum_{j=0}^d \tau_i(\th_j)E_j \] and $\tau_i(\th_j)= 0$ for $0 \leq j \leq i-1$. Concerning the equation on the right in \eqref{eq:tauiA}, first observe by \eqref{eq:defEi} that \begin{equation} \label{eq:Eiaux} E_i = \frac{\tau_i(A)\eta_{d-i}(A)} {\tau_i(\th_i)\eta_{d-i}(\th_i)}. \end{equation} By \cite[Lemma 5.4]{NT:mu} or a routine induction on $d$ we find \[ \eta_{d-i} = \sum_{j=i}^d \eta_{d-j}(\th_i)\tau^{-1}_i \tau_j. \] Therefore \begin{equation} \label{eq:etaaux} \tau_i \eta_{d-i} = \sum_{j=i}^d \eta_{d-j}(\th_i) \tau_j. \end{equation} Evaluating the right-hand side of \eqref{eq:Eiaux} using \eqref{eq:etaaux} we obtain the equation on the right in \eqref{eq:tauiA}. We have now obtained \eqref{eq:tauiA}. Applying \eqref{eq:tauiA} to $\Phi^{\Downarrow}$ we obtain \eqref{eq:etaiA}. \end{proof} \begin{lemma} \label{lem:taubasis} \samepage With reference to Notation {\rm \ref{notation}} the following {\rm (i)}--{\rm (iii)} hold for $0 \leq i \leq d$.
\begin{itemize} \item[\rm (i)] $\text{\rm Span}\{A^h \,|\, 0 \leq h \leq i\} = \text{\rm Span}\{\tau_h(A) \,|\, 0 \leq h \leq i\}$. \item[\rm (ii)] $\text{\rm Span}\{E_h \,|\, i \leq h \leq d\} = \text{\rm Span}\{\tau_h(A) \,|\, i \leq h \leq d\}$. \item[\rm (iii)] $\tau_i(A)$ is a basis for $ \text{\rm Span}\{A^h \,|\, 0 \leq h \leq i\} \cap \text{\rm Span}\{E_h \,|\, i \leq h \leq d\}$. \end{itemize} \end{lemma} \begin{proof} (i): Recall that $\tau_h$ has degree $h$ for $0 \leq h \leq d$. (ii): Follows from Lemma \ref{lem:tauiAEi}. (iii): Immediate from (i), (ii) above. \end{proof} \medskip Applying Lemma \ref{lem:taubasis} to $\Phi^{\Downarrow}$ we obtain the following result. \medskip \begin{lemma} \label{lem:etabasis} \samepage With reference to Notation {\rm \ref{notation}} the following {\rm (i)}--{\rm (iii)} hold for $0 \leq i \leq d$. \begin{itemize} \item[\rm (i)] $\text{\rm Span}\{A^h \,|\, 0 \leq h \leq i\} = \text{\rm Span}\{\eta_h(A) \,|\, 0 \leq h \leq i\}$. \item[\rm (ii)] $\text{\rm Span}\{E_h \,|\, 0 \leq h \leq d-i \} = \text{\rm Span}\{\eta_h(A) \,|\, i \leq h \leq d\}$. \item[\rm (iii)] $\eta_i(A)$ is a basis for $\text{\rm Span}\{A^h \,|\, 0 \leq h \leq i\} \cap \text{\rm Span}\{E_h \,|\, 0 \leq h \leq d-i \}$. \end{itemize} \end{lemma} \section{Some bases for $\cal D$ and ${\cal D}^*$} \indent In this section we give some bases for $\cal D$ and ${\cal D}^*$ that will be useful later in the paper. We will state our results for $\cal D$; of course similar results hold for ${\cal D}^*$. \medskip \begin{lemma} \label{lem:replace} With reference to Notation {\rm \ref{notation}} consider the following basis for $\cal D$: \begin{equation} \label{eq:E0Ed} E_0,E_1,\ldots,E_d. \end{equation} For $0 \leq n \leq d$, if we replace any $(n+1)$-subset of \eqref{eq:E0Ed} by $I,A,A^2, \ldots, A^n$ then the result is still a basis for $\cal D$. 
\end{lemma} \begin{proof} Let $\Delta$ denote an $(n+1)$-subset of $\{0,1,\ldots, d\}$ and let $\overline{\Delta}$ denote the complement of $\Delta$ in $\{0,1,\ldots, d\}$. We show that \begin{equation} \label{eq:basisaux} \{A^i\}_{i=0}^n \cup \{E_i\}_{i \in \overline{\Delta}} \end{equation} is a basis for $\cal D$. The number of elements in \eqref{eq:basisaux} is $d+1$, which equals the dimension of $\cal D$. Therefore it suffices to show that the elements of \eqref{eq:basisaux} span $\cal D$. Let $S$ denote the subspace of $\cal D$ spanned by \eqref{eq:basisaux}. To show ${\cal D} = S$ we show $E_i \in S$ for $i \in \Delta$. For $0 \leq j \leq n$ we have $A^j = \sum_{i=0}^d \th^j_i E_i$. In these equations we rearrange the terms to find \begin{equation} \label{eq:basisaux2} \sum_{i \in \Delta} \th^j_i E_i \in S \qquad\qquad (0 \leq j \leq n). \end{equation} In the linear system \eqref{eq:basisaux2} the coefficient matrix is Vandermonde and hence nonsingular. Therefore $E_i \in S$ for $i \in \Delta$. Now $S={\cal D}$ and the result follows. \end{proof} \section{The space $R$} \begin{definition} \label{def:pi} \samepage With reference to Notation \ref{notation} we consider the tensor product ${\cal D}\otimes{\cal D}^*\otimes{\cal D}$ where $\otimes = \otimes_{\mathbb{K}}$. We define a $\mathbb{K}$-linear transformation \[ \pi : \qquad \begin{array}{ccc} {\cal D}\otimes{\cal D}^*\otimes{\cal D} & \qquad \to \qquad & \text{End}(V) \\ X \otimes Y \otimes Z & \qquad \mapsto \qquad & E^*_0XYZE^*_0 \end{array} \] We note that the image of $\pi$ is the span of $E^*_0{\cal D}{\cal D}^*{\cal D}E^*_0$.
\end{definition} \begin{definition} \label{def:R} \samepage With reference to Notation \ref{notation} let $R$ denote the sum of the following three subspaces of ${\cal D} \otimes {\cal D}^* \otimes {\cal D}$: \begin{equation} \label{eq:L} \text{Span}\{A^i \otimes E^*_j \,|\, 0 \leq i<j \leq d\} \otimes {\cal D}, \end{equation} \begin{equation} \label{eq:R} {\cal D} \otimes \text{Span}\{E^*_j \otimes A^i \,|\, 0 \leq i<j \leq d\}, \end{equation} \begin{equation} \label{eq:M} \text{Span}\{E_i \otimes A^{*t} \otimes E_j \,|\, 0 \leq i,j,t \leq d, \, t<|i-j| \}. \end{equation} \end{definition} \begin{lemma} \label{lem:kernel} \samepage With reference to Definitions {\rm \ref{def:pi}} and {\rm \ref{def:R}} the space $R$ is contained in the kernel of $\pi$. \end{lemma} \begin{proof} Routinely obtained using Lemma \ref{lem:supertrid}. \end{proof} \medskip With reference to Notation \ref{notation} and Lemma \ref{lem:kernel}, we desire to understand the kernel of $\pi$. To gain this understanding we systematically investigate $R$. We proceed as follows. \medskip \begin{lemma} \label{lem:DDsD} \samepage With reference to Notation {\rm \ref{notation}}, \[ {\cal D} \otimes {\cal D}^* \otimes {\cal D} = \sum_{t=0}^d {\cal D} \otimes \tau^*_t(A^*) \otimes {\cal D} \qquad\qquad (\text{\rm direct sum}). \] \end{lemma} \begin{proof} Applying Lemma \ref{lem:basistauiA}(i) to $\Phi^*$ we find that $\{\tau^*_t(A^*)\}_{t=0}^d$ is a basis for ${\cal D}^*$. The result follows. \end{proof} \begin{definition} \label{def:Rt} \samepage With reference to Notation {\rm \ref{notation}} and Definition {\rm \ref{def:R}}, for $0 \leq t \leq d$ let $R_t$ denote the intersection of $R$ with ${\cal D}\otimes \tau^*_t(A^*) \otimes {\cal D}$. \end{definition} \medskip With reference to Notation \ref{notation} and Definition \ref{def:R}, our next goal is to show $R=\sum_{t=0}^d R_t$ (direct sum). The following lemma will be useful. 
\medskip \begin{lemma} \label{lem:coincidepre} \samepage With reference to Notation {\rm \ref{notation}} the following {\rm (i)}--{\rm (iii)} hold. \begin{itemize} \item[\rm (i)] The space \eqref{eq:L} is the direct sum over $t=0,1,\ldots,d$ of the following subspaces: \begin{equation} \label{eq:Lt} \text{\rm Span}\{A^i \,|\, 0 \leq i<t\} \otimes \tau^*_t(A^*) \otimes {\cal D}. \end{equation} \item[\rm (ii)] The space \eqref{eq:R} is the direct sum over $t=0,1,\ldots,d$ of the following subspaces: \begin{equation} \label{eq:Rt} {\cal D} \otimes \tau^*_t(A^*) \otimes \text{\rm Span}\{A^i \,|\, 0 \leq i<t\}. \end{equation} \item[\rm (iii)] The space \eqref{eq:M} is the direct sum over $t=0,1,\ldots,d$ of the following subspaces: \begin{equation} \label{eq:Mt} \text{\rm Span}\{E_i \otimes \tau^*_t(A^*) \otimes E_j \,|\, 0 \leq i,j \leq d,\, t<|i-j| \}. \end{equation} \end{itemize} \end{lemma} \begin{proof} (i): Applying Lemma \ref{lem:taubasis}(ii) to $\Phi^*$ we obtain \begin{align*} \text{Span}\{A^i & \otimes E^*_t \, |\, 0 \leq i< t \leq d \} \\ &= \sum_{i=0}^d A^i \otimes \text{Span}\{E^*_t \,|\, i<t \leq d \} \\ &= \sum_{i=0}^d A^i \otimes \text{Span}\{\tau^*_t(A^*) \,|\, i<t\leq d \} \\ &= \sum_{t=0}^d \text{Span}\{A^i \,|\, 0 \leq i<t \} \otimes \tau^*_t(A^*). \end{align*} In the above lines we tensor each term on the right by $\cal D$ to find that the space \eqref{eq:L} is the sum over $t=0,1,\ldots,d$ of the spaces \eqref{eq:Lt}. The sum is direct by Lemma \ref{lem:DDsD}. (ii): Similar to the proof of (i) above. 
(iii): Applying Lemma \ref{lem:taubasis}(i) to $\Phi^*$ we obtain \begin{align*} \text{Span}\{E_i \otimes &A^{*t} \otimes E_j \,|\, 0 \leq i,j,t \leq d,\, t<|i-j| \} \\ &= \sum_{i=0}^d\sum_{j=0}^d E_i \otimes \text{Span}\{A^{*t}\,|\, 0 \leq t<|i-j|\} \otimes E_j \\ &= \sum_{i=0}^d\sum_{j=0}^d E_i \otimes \text{Span}\{\tau^*_t(A^*)\,|\, 0\leq t<|i-j|\}\otimes E_j\\ &= \sum_{t=0}^d \text{Span}\{E_i \otimes \tau^*_t(A^*) \otimes E_j \,|\, 0 \leq i,j \leq d,\, t<|i-j| \}. \end{align*} In other words the space \eqref{eq:M} is the sum over $t=0,1,\ldots,d$ of the spaces \eqref{eq:Mt}. This sum is direct by Lemma \ref{lem:DDsD}. \end{proof} \begin{theorem} \label{thm:coincide} \samepage With reference to Notation {\rm \ref{notation}} and Definition {\rm \ref{def:Rt}} the following {\rm (i)}, {\rm (ii)} hold. \begin{itemize} \item[\rm (i)] For $0 \leq t \leq d$ the subspace ${R}_t$ is the sum of the spaces \eqref{eq:Lt}--\eqref{eq:Mt}. \item[\rm (ii)] ${R} = \sum_{t=0}^d {R}_t$ (direct sum). \end{itemize} \end{theorem} \begin{proof} For $0 \leq t \leq d$ let ${R}'_t$ denote the sum of the spaces \eqref{eq:Lt}, \eqref{eq:Rt}, \eqref{eq:Mt}. Note that ${R}'_t$ is contained in ${\cal D} \otimes \tau^*_t(A^*) \otimes {\cal D}$, and that ${R} = \sum_{t=0}^d {R}'_t$ by Lemma \ref{lem:coincidepre}. By these comments and Lemma \ref{lem:DDsD} we find ${R}'_t = {R}_t$ for $0 \leq t \leq d$. The result follows. \end{proof} \section{A basis for $R_t$ and ${\cal D} \otimes \tau^*_t(A^*)\otimes{\cal D}$} \indent With reference to Notation \ref{notation} and Definition \ref{def:Rt}, for $0 \leq t \leq d$ we display a basis for $R_t$ and extend this to a basis for ${\cal D} \otimes \tau^*_t(A^*)\otimes{\cal D}$. 
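To illustrate the counting in the theorem below, consider the smallest nontrivial case $d=1$, $t=1$ (a small check added here for orientation, not part of the argument):

```latex
For $d=1$, $t=1$ the four families in the theorem below consist of
$\{I \otimes \tau^*_1(A^*) \otimes I,\; A \otimes \tau^*_1(A^*) \otimes I\}$,
$\{I \otimes \tau^*_1(A^*) \otimes A\}$,
the empty set (there are no pairs with $|i-j|>1$ when $d=1$),
and $\{E_0 \otimes \tau^*_1(A^*) \otimes E_0\}$, so that
\[
  t(d+1) + t(d-t+1) + (d-t)(d-t+1) + (d-t+1) = 2+1+0+1 = 4 = (d+1)^2.
\]
```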
\medskip \begin{theorem} \label{thm:basis1} \samepage With reference to Notation {\rm \ref{notation}} and Definition {\rm \ref{def:Rt}}, for $0 \leq t \leq d$ the following sets of vectors together form a basis for ${\cal D} \otimes \tau^*_t(A^*) \otimes {\cal D}$: \begin{align} & \{A^i \otimes \tau^*_t(A^*) \otimes A^j \,|\, 0 \leq i\leq d,\, 0 \leq j < t \}, \label{eq:T1} \\ & \{A^i \otimes \tau^*_t(A^*) \otimes A^j \,|\, 0 \leq i < t,\, t \leq j \leq d \}, \label{eq:T2} \\ & \{E_i \otimes \tau^*_t(A^*) \otimes E_j \,|\, 0 \leq i,j \leq d,\, t<|i-j| \}, \label{eq:S3} \\ & \{E_i \otimes \tau^*_t(A^*) \otimes E_i \,|\, 0 \leq i \leq d-t\}. \label{eq:basis1} \end{align} Moreover the sets \eqref{eq:T1}--\eqref{eq:S3} together form a basis for ${R}_t$. \end{theorem} \begin{proof} The span of \eqref{eq:T1}--\eqref{eq:S3} equals the span of \eqref{eq:Lt}--\eqref{eq:Mt} and this equals ${R}_t$ by Theorem \ref{thm:coincide}(i). The dimension of ${\cal D} \otimes \tau^*_t(A^*) \otimes {\cal D}$ is $(d+1)^2$. The cardinalities of the sets \eqref{eq:T1}--\eqref{eq:basis1} are $t(d+1)$, $t(d-t+1)$, $(d-t)(d-t+1)$, $d-t+1$ respectively, and the sum of these numbers is $(d+1)^2$. Therefore the number of vectors in \eqref{eq:T1}--\eqref{eq:basis1} is equal to the dimension of ${\cal D} \otimes \tau^*_t(A^*) \otimes {\cal D}$. Consequently, to finish the proof it suffices to show that \eqref{eq:T1}--\eqref{eq:basis1} together span ${\cal D} \otimes \tau^*_t(A^*) \otimes {\cal D}$. Let ${S}$ denote the span of \eqref{eq:T1}--\eqref{eq:basis1}. We first claim that for $0 \leq i \leq d-t$, both \[ E_i \otimes \tau^*_t(A^*) \otimes {\cal D} \subseteq S, \qquad\qquad {\cal D} \otimes \tau^*_t(A^*) \otimes E_i \subseteq S. \] To prove the claim, by induction on $i$ we may assume \begin{equation} \label{eq:hypo} E_j \otimes \tau^*_t(A^*) \otimes {\cal D} \subseteq S, \qquad {\cal D} \otimes \tau^*_t(A^*) \otimes E_j \subseteq S \qquad\qquad (0 \leq j < i). 
\end{equation} By Lemma \ref{lem:replace} the following vectors together form a basis for $\cal D$: \[ E_0,E_1,\ldots,E_{i-1}; \qquad E_i; \qquad I,A,A^2,\ldots,A^{t-1}; \qquad E_{i+t+1},E_{i+t+2},\ldots,E_d. \] Therefore $E_i \otimes \tau^*_t(A^*) \otimes {\cal D}$ is the sum of the following spaces: \begin{align} &E_i \otimes \tau^*_t(A^*) \otimes \text{Span}\{E_0,E_1,\ldots,E_{i-1}\}, \label{eq:space1} \\ &E_i \otimes \tau^*_t(A^*) \otimes \text{Span}\{E_i\}, \label{eq:space2} \\ &E_i \otimes \tau^*_t(A^*) \otimes \text{Span}\{I,A,A^2,\ldots,A^{t-1}\}, \label{eq:space3} \\ &E_i \otimes \tau^*_t(A^*) \otimes \text{Span}\{E_{i+t+1},E_{i+t+2},\ldots,E_d\}. \label{eq:space4} \end{align} The space \eqref{eq:space1} is contained in $S$ by \eqref{eq:hypo}, the space \eqref{eq:space2} is contained in $S$ by \eqref{eq:basis1}, the space \eqref{eq:space3} is contained in $S$ by \eqref{eq:T1}, and the space \eqref{eq:space4} is contained in $S$ by \eqref{eq:S3}. Therefore $E_i \otimes \tau^*_t(A^*) \otimes {\cal D}$ is contained in $S$. Similarly one shows that ${\cal D} \otimes \tau^*_t(A^*) \otimes E_i$ is contained in $S$ and the claim is proved. Next we claim that $E_i \otimes \tau^*_t(A^*) \otimes {\cal D}$ is contained in $S$ for $d-t<i \leq d$. By Lemma \ref{lem:replace} the following vectors together form a basis for $\cal D$: \[ E_0,E_1,\ldots,E_{d-t}; \qquad I,A,A^2,\ldots,A^{t-1}. \] Therefore $E_i \otimes \tau^*_t(A^*) \otimes {\cal D}$ is the sum of the following spaces: \begin{align} & E_i \otimes \tau^*_t(A^*) \otimes \text{Span}\{E_0,E_1,\ldots,E_{d-t}\}, \label{eq:space5} \\ & E_i \otimes \tau^*_t(A^*) \otimes \text{Span}\{I,A,A^2,\ldots,A^{t-1}\}. \label{eq:space6} \end{align} The space \eqref{eq:space5} is contained in $S$ by the first claim, and the space \eqref{eq:space6} is contained in $S$ by \eqref{eq:T1}. Therefore $E_i \otimes \tau^*_t(A^*) \otimes {\cal D}$ is contained in $S$ and the second claim is proved. 
By the two claims and since $\{E_i\}_{i=0}^d$ is a basis for $\cal D$, we find ${\cal D} \otimes \tau^*_t(A^*) \otimes {\cal D}$ is contained in $S$. In other words \eqref{eq:T1}--\eqref{eq:basis1} together span ${\cal D} \otimes \tau^*_t(A^*) \otimes {\cal D}$ as desired. The result follows. \end{proof} \begin{corollary} \label{cor:dim} \samepage With reference to Notation {\rm \ref{notation}} and Definition {\rm \ref{def:Rt}} the following {\rm (i)}--{\rm (iv)} hold. \begin{itemize} \item[\rm (i)] For $0 \leq t \leq d$ the dimension of $R_t$ is $d^2+d+t$. \item[\rm (ii)] For $0 \leq t \leq d$ the codimension of $R_t$ in ${\cal D} \otimes \tau^*_t(A^*) \otimes {\cal D}$ is $d-t+1$. \item[\rm (iii)] The dimension of $R$ is $d(d+1)(2d+3)/2$. \item[\rm (iv)] The codimension of $R$ in ${\cal D} \otimes {\cal D}^* \otimes {\cal D}$ is $(d+1)(d+2)/2$. \end{itemize} \end{corollary} \begin{proof} (i), (ii): The dimension of ${\cal D} \otimes \tau^*_t(A^*) \otimes {\cal D}$ is $(d+1)^2$. By \eqref{eq:basis1} the codimension of $R_t$ in ${\cal D} \otimes \tau^*_t(A^*) \otimes {\cal D}$ is $d-t+1$. The result follows. (iii): Sum the dimension in (i) over $t=0,1,\ldots, d$. (iv): Sum the codimension in (ii) over $t=0,1,\ldots,d$. \end{proof} \section{The map $\ddagger$} \begin{definition} \label{def:dd} \samepage With reference to Notation {\rm \ref{notation}} we define a $\mathbb{K}$-linear transformation \[ \ddagger: \qquad \begin{array}{ccc} {\cal D}\otimes{\cal D}^* \otimes{\cal D} & \qquad \to \qquad & {\cal D} \otimes {\cal D}^* \otimes{\cal D} \\ X \otimes Y \otimes Z & \qquad \mapsto \qquad & Z \otimes Y \otimes X \end{array} \] We call $\ddagger$ the {\em transpose map}. We observe that $\ddagger$ is an involution. \end{definition} \begin{proposition} \label{prop:dd} \samepage With reference to Notation {\rm \ref{notation}} and Definition {\rm \ref{def:dd}} the following {\rm (i)}--{\rm (iii)} hold. \begin{itemize} \item[\rm (i)] $R$ is invariant under $\ddagger$. 
\item[\rm (ii)] For $0 \leq t \leq d$ the space ${\cal D} \otimes \tau^*_t(A^*) \otimes {\cal D}$ is invariant under $\ddagger$. \item[\rm (iii)] For $0 \leq t \leq d$ the space $R_t$ is invariant under $\ddagger$. \end{itemize} \end{proposition} \begin{proof} (i): By Definition \ref{def:R} the space $R$ is the sum of \eqref{eq:L}--\eqref{eq:M}. The map $\ddagger$ exchanges \eqref{eq:L}, \eqref{eq:R} and leaves \eqref{eq:M} invariant. The result follows. (ii): Clear. (iii): By Theorem \ref{thm:coincide}(i) the space $R_t$ is the sum of \eqref{eq:Lt}--\eqref{eq:Mt}. The map $\ddagger$ exchanges \eqref{eq:Lt}, \eqref{eq:Rt} and leaves \eqref{eq:Mt} invariant. The result follows. \end{proof} \begin{theorem} \label{thm:dd} \samepage With reference to Notation {\rm \ref{notation}} and Definition {\rm \ref{def:dd}} the following {\rm (i)}, {\rm (ii)} hold. \begin{itemize} \item[\rm (i)] For $0 \leq t \leq d$ the image of ${\cal D} \otimes \tau^*_t(A^*) \otimes {\cal D}$ under $1-\ddagger$ is contained in $R_t$. \item[\rm (ii)] The image of ${\cal D} \otimes {\cal D}^* \otimes {\cal D}$ under $1-\ddagger$ is contained in $R$. \end{itemize} \end{theorem} \begin{proof} (i): Let $C$ denote the subspace of ${\cal D} \otimes \tau^*_t(A^*) \otimes {\cal D}$ spanned by the elements \eqref{eq:basis1}. By Theorem \ref{thm:basis1} the space ${\cal D} \otimes \tau^*_t(A^*) \otimes {\cal D}$ is the direct sum of $R_t$ and $C$. By Proposition \ref{prop:dd}(iii) the image of $R_t$ under $1-\ddagger$ is contained in $R_t$. By \eqref{eq:basis1} the image of $C$ under $1-\ddagger$ is zero. The result follows. (ii): Combine Lemma \ref{lem:DDsD}, Theorem \ref{thm:coincide}(ii), and (i) above. 
\end{proof} \section{A complement for $R$ in ${\cal D} \otimes {\cal D}^* \otimes {\cal D}$} \indent With reference to Notation \ref{notation} and Definition \ref{def:R}, our goal in this section is to show that the elements $\{E_i \otimes \tau^*_{j-i}(A^*) \otimes E_j \,|\, 0 \leq i\leq j \leq d\}$ form a basis for a complement of $R$ in ${\cal D} \otimes {\cal D}^* \otimes {\cal D}$. We begin with a slightly technical lemma. \medskip \begin{lemma} \label{eq:tech} \samepage With reference to Notation {\rm \ref{notation}} and Definition {\rm \ref{def:Rt}}, for $0 \leq t \leq d$ and $0 \leq i <j \leq i+t \leq d$ the space \begin{equation} \label{eq:auxd1} R_t + \text{\rm Span}\{E_h \otimes \tau^*_t(A^*) \otimes E_h \,|\, 0 \leq h < i\} \end{equation} contains both \begin{equation} \label{eq:auxd2} f^t_{ij}(\th_i) E_i \otimes \tau^*_t(A^*) \otimes E_i + f^t_{ij}(\th_j) E_i \otimes \tau^*_t(A^*) \otimes E_j, \end{equation} \begin{equation} \label{eq:auxd3} f^t_{ij}(\th_i) E_i \otimes \tau^*_t(A^*) \otimes E_i + f^t_{ij}(\th_j) E_j \otimes \tau^*_t(A^*) \otimes E_i, \end{equation} where $f^t_{ij} = \prod_{h=i+1,\,h \neq j}^{i+t} (\lambda-\th_h)$. \end{lemma} \begin{proof} We fix $t$ and show by induction on $i=0,1,\ldots, d-t$ that each of \eqref{eq:auxd2}, \eqref{eq:auxd3} is contained in \eqref{eq:auxd1} for $i<j\leq i+t$. Concerning \eqref{eq:auxd2}, in the equation $f^t_{ij}(A) = \sum_{n=0}^d f^t_{ij}(\th_n)E_n$ we tensor each term on the left by $E_i \otimes \tau^*_t(A^*)$ to get \begin{equation} \label{eq:auxd4} E_i \otimes \tau^*_t(A^*) \otimes f^t_{ij}(A) = \sum_{n=0}^d f^t_{ij}(\th_n) E_i \otimes \tau^*_t(A^*) \otimes E_n. \end{equation} We examine the terms in \eqref{eq:auxd4}. The left side of \eqref{eq:auxd4} is in $R_t$ by \eqref{eq:T1} and since $f^t_{ij}$ has degree $t-1$. For $0 \leq n \leq d$ consider the $n$-summand on the right in \eqref{eq:auxd4}. First assume $0 \leq n < i-t$. Then the $n$-summand is in $R_t$ by \eqref{eq:S3}. 
Next assume $i-t\leq n < i$. By \eqref{eq:auxd3} and induction, \begin{align*} f^t_{ni}(\th_n) E_n \otimes \tau^*_t(A^*) &\otimes E_n + f^t_{ni}(\th_i) E_i \otimes \tau^*_t(A^*) \otimes E_n \\ & \in R_t + \text{Span}\{E_h \otimes \tau^*_t(A^*) \otimes E_h \,|\, 0 \leq h < n\}. \end{align*} By this and since $f^t_{ni}(\th_i)$ is nonzero, \begin{equation} \label{eq:auxd5} E_i \otimes \tau^*_t(A^*) \otimes E_n \in R_t + \text{Span}\{E_h \otimes \tau^*_t(A^*) \otimes E_h \,|\, 0 \leq h \leq n\}. \end{equation} Therefore our $n$-summand is in \eqref{eq:auxd1}. Next assume $i+1\leq n \leq i+t$, $n\not=j$. Then the $n$-summand is $0$ since $f^t_{ij}(\th_n)=0$. Next assume $i+t<n\leq d$. Then the $n$-summand is in $R_t$ by \eqref{eq:S3}. By these comments the expression \eqref{eq:auxd2} is contained in \eqref{eq:auxd1}. By this and Theorem \ref{thm:dd}(i) the expression \eqref{eq:auxd3} is contained in \eqref{eq:auxd1}. \end{proof} \begin{proposition} \label{prop:basis} \samepage With reference to Notation {\rm \ref{notation}} and Definition {\rm \ref{def:Rt}}, for $0 \leq t \leq d$ the vectors \begin{equation} \label{eq:basis} \{E_i \otimes \tau^*_t(A^*) \otimes E_{i+t} \,|\, 0 \leq i \leq d-t\} \end{equation} form a basis for a complement of ${R}_t$ in ${\cal D}\otimes \tau^*_t(A^*) \otimes {\cal D}$. \end{proposition} \begin{proof} Consider the quotient vector space \[ V_t = {\cal D} \otimes \tau^*_t(A^*) \otimes {\cal D} /R_t. \] We show the vectors \begin{equation} \label{eq:auxe1} E_i \otimes \tau^*_t(A^*) \otimes E_{i+t} + R_t \qquad\qquad(0 \leq i \leq d-t) \end{equation} form a basis for $V_t$. By Theorem \ref{thm:basis1} the vectors \begin{equation} \label{eq:auxe2} E_i \otimes \tau^*_t(A^*) \otimes E_i + R_t \qquad\qquad (0 \leq i \leq d-t) \end{equation} form a basis for $V_t$. Write the elements \eqref{eq:auxe1} in terms of the basis \eqref{eq:auxe2}. By Lemma \ref{eq:tech} the resulting coefficient matrix is upper triangular with all diagonal entries nonzero. 
Therefore \eqref{eq:auxe1} is a basis for $V_t$ and the result follows. \end{proof} \begin{theorem} \label{thm:basis2} With reference to Notation {\rm \ref{notation}} and Definition {\rm \ref{def:R}} the vectors \begin{equation} \label{eq:basis2} \{E_i \otimes \tau^*_{j-i}(A^*) \otimes E_j \,|\, 0 \leq i\leq j\leq d \} \end{equation} form a basis for a complement of $R$ in ${\cal D} \otimes {\cal D}^* \otimes {\cal D}$. \end{theorem} \begin{proof} The set \eqref{eq:basis2} is the disjoint union over $t=0,1,\ldots, d$ of the sets \eqref{eq:basis}. The result follows in view of Lemma \ref{lem:DDsD}, Theorem \ref{thm:coincide}(ii), and Proposition \ref{prop:basis}. \end{proof} \section{The space ${\cal D} \otimes E^*_0 \otimes {\cal D}$} \indent With reference to Notation \ref{notation} and Definition \ref{def:R}, in this section we show that the elements $\{E_i \otimes E^*_0 \otimes E_j \,|\, 0 \leq i \leq j \leq d\}$ form a basis for a complement of $R$ in ${\cal D} \otimes {\cal D}^* \otimes {\cal D}$. We will use the following lemma. \medskip \begin{lemma} \label{lem:new} \samepage With reference to Notation {\rm \ref{notation}}, for $0 \leq t \leq d$ and $0 \leq i \leq d-t$ the element \[ E_i \otimes \tau^*_t(A^*) \otimes E_{i+t} - (\th^*_0-\th^*_1)(\th^*_0-\th^*_2) \cdots (\th^*_0-\th^*_t) E_i \otimes E^*_0 \otimes E_{i+t} \] is contained in \begin{equation} \label{eq:auxf1} R + \sum_{n=t+1}^d {\cal D} \otimes \tau^*_n(A^*) \otimes {\cal D}. \end{equation} \end{lemma} \begin{proof} Applying the equation on the right in \eqref{eq:tauiA} to $\Phi^*$, \[ E^*_0 = \sum_{n=0}^d \frac{\eta^*_{d-n}(\th^*_0) \tau^*_n(A^*)}{\eta^*_d(\th^*_0)}. \] In this equation we tensor each term on the left by $E_i$ and on the right by $E_{i+t}$ to get \begin{equation} \label{eq:auxf2} E_i \otimes E^*_0 \otimes E_{i+t} = \sum_{n=0}^d \frac{\eta^*_{d-n}(\th^*_0)}{\eta^*_d(\th^*_0)} E_i \otimes \tau^*_n(A^*) \otimes E_{i+t}. 
\end{equation} For $0 \leq n \leq d$ consider the $n$-summand on the right in \eqref{eq:auxf2}. For $0 \leq n \leq t-1$ the $n$-summand is in $R$ by \eqref{eq:S3}. For $t +1 \leq n \leq d$ the $n$-summand is in \eqref{eq:auxf1} by construction. The result follows from these comments and since \[ \eta^*_d(\th^*_0) = (\th^*_0-\th^*_1)(\th^*_0-\th^*_2) \cdots (\th^*_0-\th^*_t) \eta^*_{d-t}(\th^*_0). \] \end{proof} \medskip \begin{theorem} \label{thm:basis3} With reference to Notation {\rm \ref{notation}} and Definition {\rm \ref{def:R}} the vectors \begin{equation} \label{eq:basis4} \{E_i \otimes E^*_0 \otimes E_j \,|\, 0 \leq i\leq j\leq d\} \end{equation} form a basis for a complement of $R$ in ${\cal D} \otimes {\cal D}^* \otimes {\cal D}$. \end{theorem} \begin{proof} The cardinality of the set \eqref{eq:basis4} is $(d+1)(d+2)/2$, and by Corollary \ref{cor:dim}(iv) this is the codimension of $R$ in ${\cal D} \otimes {\cal D}^* \otimes {\cal D}$. Therefore, it suffices to show that $R$ and the elements \eqref{eq:basis4} together span ${\cal D} \otimes {\cal D}^* \otimes {\cal D}$. Let $S$ denote the subspace of ${\cal D} \otimes {\cal D}^* \otimes {\cal D}$ spanned by $R$ and the elements \eqref{eq:basis4}. To show that $S= {\cal D} \otimes {\cal D}^* \otimes {\cal D}$ we show ${\cal D} \otimes \tau^*_t(A^*) \otimes {\cal D} \subseteq S$ for $0 \leq t \leq d$. We show this by induction on $t=d,d-1, \ldots, 0$. Let $t$ be given. By Proposition \ref{prop:basis}, \[ {\cal D} \otimes \tau^*_t(A^*) \otimes {\cal D} = R_t + \text{Span}\{E_i \otimes \tau^*_t(A^*) \otimes E_{i+t}\,|\,0 \leq i \leq d-t\}. \] By construction $R_t \subseteq R \subseteq S$. For $0 \leq i \leq d-t$ we have $E_i \otimes \tau^*_t(A^*) \otimes E_{i+t} \in S$ by Lemma \ref{lem:new} and induction on $t$. By these comments ${\cal D} \otimes \tau^*_t(A^*) \otimes {\cal D} \subseteq S$ and the result follows. 
\end{proof} \section{The proof of Theorem \ref{thm:main}} \indent Using the results in earlier sections we can now easily prove Theorem \ref{thm:main}. \medskip \begin{proofof}{Theorem \ref{thm:main}} (i): By Definition \ref{def:pi} the image $\pi({\cal D} \otimes {\cal D}^* \otimes {\cal D})$ is the span of $E^*_0{\cal D}{\cal D}^*{\cal D}E^*_0$. Similarly the image $\pi({\cal D} \otimes E^*_0 \otimes {\cal D})$ is the span of $E^*_0{\cal D}E^*_0{\cal D}E^*_0$. We show \begin{equation} \label{eq:auxc} \pi({\cal D} \otimes {\cal D}^* \otimes {\cal D}) = \pi({\cal D} \otimes E^*_0 \otimes {\cal D}). \end{equation} Let $C$ denote the subspace of ${\cal D} \otimes {\cal D}^* \otimes {\cal D}$ spanned by the elements \eqref{eq:basis4}. By Theorem \ref{thm:basis3} the space ${\cal D} \otimes {\cal D}^* \otimes {\cal D}$ is the direct sum $C + R$. By construction $C$ is contained in ${\cal D} \otimes E^*_0 \otimes {\cal D}$. By Lemma \ref{lem:kernel} the space $R$ is contained in the kernel of $\pi$. Therefore \[ {\cal D} \otimes {\cal D}^* \otimes {\cal D} = {\cal D} \otimes E^*_0 \otimes {\cal D} + \text{Ker}(\pi). \] Applying $\pi$ to this equation we get \eqref{eq:auxc} and the result follows. (ii): For $X, Y \in {\cal D}$ we show $E^*_0XE^*_0$, $E^*_0YE^*_0$ commute. By Theorem \ref{thm:dd}(ii), \[ X \otimes E^*_0 \otimes Y - Y \otimes E^*_0 \otimes X \in R. \] In the above line we apply the map $\pi$ and use Lemma \ref{lem:kernel} to find \[ E^*_0 X E^*_0 Y E^*_0 = E^*_0 Y E^*_0 X E^*_0. \] By this and since $E^{*2}_0=E^*_0$ the elements $E^*_0XE^*_0$, $E^*_0YE^*_0$ commute. \end{proofof} \section{Acknowledgements} \indent The second author gratefully acknowledges many conversations with Tatsuro Ito (Ka\-na\-za\-wa University) on the general subject of this paper. The resulting insights led directly to the paper, and consequently we feel that Ito deserves to be a coauthor. We offered him this coauthorship but he declined. \bigskip {\small \bibliographystyle{plain}
\section{Introduction} Relativistic plasma flows are observed in a number of astrophysical objects, ranging from the mildly relativistic jets of sources like SS433, through the Lorentz-factor-of-a-few jets in AGNs and galactic `micro-quasars', up to the highly ultra-relativistic outflows in sources of gamma ray bursts and in pulsar winds. As nearly all such objects are efficient emitters of nonthermal radiation, which requires the existence of energetic particles, understanding the processes generating cosmic rays is essential for understanding the fascinating phenomena observed. Below I discuss the work carried out in order to understand the cosmic ray acceleration processes acting at relativistic shocks and within the highly turbulent regions accompanying such shocks and shear layers. I do not include here the interesting work on collisionless shock modelling with {\it particle-in-cell} simulations. That approach uses a quite different modelling method from the other work discussed here, relating in most cases to the characteristic energy range of shock-compressed thermal plasma particles. The present paper is a modified and updated version of some of my previous reviews of the subject. Essentially the same, slightly shortened, text is provided as my contribution to the HEPRO Workshop in Dublin (September 2007). Below we append the index `1' (`2') to quantities measured in the plasma rest frame upstream (downstream) of the shock. \section{Particle diffusive acceleration at non-relativistic shock waves} Processes of first-order particle acceleration at non-relativistic shock waves have been widely discussed by a number of authors; let me note the still relevant reviews by Drury (1983) and Blandford \& Eichler (1987). 
The most interesting physical feature of first-order shock acceleration at a non-relativistic shock wave is the independence of the stationary {\it test-particle} energy spectrum from the background conditions near the shock, including the mean magnetic field configuration and the spectrum of MHD turbulence. The reason is the nearly isotropic form of the particle momentum distribution at the shock. If efficient scattering occurs near the shock, this condition also holds for oblique shocks with the shock velocity along the (upstream) magnetic field $U_{B,1} \equiv U_1 / \cos \Psi_1 \ll v$ ($\Psi_1$ is the upstream magnetic field inclination to the shock normal). Then the particle density is continuous across the shock and the spectral index for the {\em phase-space} distribution function, $\alpha$, is given exclusively in terms of a single parameter -- the shock compression ratio $R$: $$\alpha = {3R \over R-1} \qquad . \eqno(2.1) $$ \vspace{1mm} \noindent Because of the isotropic form of the particle distribution function, the spatial diffusion equation has become a widely used mathematical tool for describing particle transport and acceleration processes in non-relativistic flows. In real astronomical objects non-linear and time-dependent effects, or the occurrence of additional energy losses and gains, can make the physics of the acceleration more complicated, creating, e.g., non-power-law and/or non-stationary particle distributions. \section{Cosmic ray acceleration at relativistic shock waves} Below I describe work done on mildly and ultra-relativistic shock acceleration, including the important recent results of Niemiec et al. With these last results many previous ones turned out to be of historical value only, reflecting specific individual features of the acceleration processes. Based on these older works, one can understand the recent simulation results in a relatively straightforward way. 
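As a baseline for the relativistic results discussed below, the test-particle index (2.1) can be recovered from the standard energy-gain/escape-probability argument. A minimal numerical sketch (my own illustration, with $c=1$; the shock velocity $U_1 = 0.01$ is an arbitrary choice in the non-relativistic regime):

```python
# Test-particle spectral index at a non-relativistic shock, eq. (2.1),
# cross-checked against the energy-gain / escape-probability argument.
import math

def alpha_phase_space(R):
    """Phase-space spectral index of eq. (2.1)."""
    return 3.0 * R / (R - 1.0)

def energy_index_bell(U1, R, v=1.0):
    """Differential energy index s, N(E) ~ E^{-s}, from the mean
    fractional energy gain per shock-crossing cycle and the probability
    of escape downstream (valid for U1 << v)."""
    U2 = U1 / R                          # downstream flow speed
    gain = (4.0 / 3.0) * (U1 - U2) / v   # mean fractional gain per cycle
    p_esc = 4.0 * U2 / v                 # escape probability per cycle
    return 1.0 + math.log(1.0 / (1.0 - p_esc)) / math.log(1.0 + gain)

R = 4.0                                  # strong-shock compression ratio
alpha = alpha_phase_space(R)             # -> 4.0 exactly
s = energy_index_bell(0.01, R)           # close to alpha - 2 = 2
print(alpha, s)
```

For the strong-shock value $R=4$ this gives $\alpha = 4$, i.e. the familiar $N(E) \propto E^{-2}$ energy spectrum, with the kinetic estimate agreeing to first order in $U_1$.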
Attempting to give an overview of the full field, I base this presentation on my own and my collaborators' work, which seems to me to present a consistent line of development and reflects my approach to understanding acceleration processes in relativistic shocks. \subsection{History: acceleration at mildly relativistic shocks} When the shock velocity reaches values comparable to the velocity of light, the particle distribution at the shock becomes {\em anisotropic}. This simple fact complicates to a great extent both the physical picture and the mathematical description of particle acceleration. A first attempt to consider the acceleration process at a relativistic shock was presented in 1981 by Peacock, and a consistent theory was proposed later by Kirk \& Schneider (1987a). Those authors considered stationary solutions of the relativistic Fokker-Planck equation for particle pitch-angle diffusion in a parallel shock wave. For the gyro-phase averaged distribution $f(p, \mu, z)$, which depends only on the single spatial co-ordinate $z$ along the shock velocity, and with $\mu$ being the pitch-angle cosine, the equation takes the form: $$\Gamma ( U + v \mu ) {\partial f \over \partial z} = C(f) + S \qquad , \eqno(3.1) $$ \vspace{1mm} \noindent where $\Gamma \equiv 1/\sqrt{1-U^2}$ is the flow Lorentz factor, $C(f)$ is the collision operator and $S$ is the source function. In this approach, the spatial co-ordinates are measured in the shock rest frame, while the particle momentum co-ordinates and the collision operator are given in the respective plasma rest frame. For the applied pitch-angle diffusion operator, $C = \partial / \partial \mu (D_{\mu \mu} \partial f / \partial \mu)$, the authors generalized the diffusive approach to higher-order terms in the particle distribution anisotropy and constructed general solutions on both sides of the shock, involving solutions of the respective eigenvalue problem. 
By matching the two solutions at the shock, the spectral index of the resulting power-law particle distribution can be found by taking into account a sufficiently large number of eigenfunctions. The same procedure yields the particle angular distribution and the spatial density distribution. The low-order truncation in this approach corresponds to the standard diffusion approximation and to a somewhat more general method described by Peacock. An application of this approach to more realistic conditions -- but still for parallel shocks -- was presented by Heavens \& Drury (1988), who investigated the fluid dynamics of relativistic shocks (cf. also Ellison \& Reynolds 1991) and used the results to calculate spectral indices for accelerated particles. \begin{figure}[hbt] \vspace{56mm} \special{psfile=ostrowski07-fig1.ps hoffset=-40 voffset=-30 hscale=54 vscale=54 angle=0} \vspace{0mm} \caption{ Spectral indices $\alpha$ of particles accelerated at oblique shocks versus the shock velocity projected onto the mean magnetic field, $U_{B,1}$ (Ostrowski 1991a). The results are presented for the shock compression $R = 4$. On the right the respective synchrotron spectral index $\gamma$ is given. The shock velocities $U_1$ are given near the respective curves taken from Kirk \& Heavens (1989). The points were taken from simulations computing explicitly the details of particle-shock interactions (Ostrowski 1991a). } \end{figure} Substantial progress in understanding the acceleration process in the presence of highly anisotropic particle distributions is due to the work of Kirk \& Heavens (1989; see also Ostrowski 1991a and Ballard \& Heavens 1991), who considered particle acceleration at {\it subluminal} ($U_{B,1} < c$) relativistic shocks with oblique magnetic fields. 
They assumed conservation of the magnetic moment, $p_\perp^2/B = const$, at particle interactions with the shock and applied the Fokker-Planck equation discussed above to describe particle transport along the field lines outside the shock, while excluding the possibility of cross-field diffusion. In the cases when $U_{B,1}$ reached relativistic values, they derived very flat energy spectra with $\gamma \approx 0$ at $U_{B,1} \approx 1$ (Fig.~1). In such conditions, the particle density in front of the shock can substantially -- even by a few orders of magnitude -- exceed the downstream density (see the curve denoted `-8.9' in Fig.~2). Such flat spectra and large density contrasts are possible due to efficient repeated reflections of the anisotropically distributed upstream particles from the region of compressed magnetic field downstream of the shock. As stressed by Begelman \& Kirk (1990), in relativistic shocks one often finds {\it superluminal} conditions with $U_{B,1} > c$, where the approach presented above is no longer valid. Then it is not possible to reflect upstream particles from the shock or to transmit downstream particles into the upstream region. In effect, only a single transmission of upstream (or shock-injected) particles re-shapes the original distribution by shifting particle energies to larger values, with super-adiabatic efficiency due to the anisotropic particle distribution at the transmission. \begin{figure}[hbt] \vspace{74mm} \special{psfile=ostrowski07-fig2.ps hoffset=-47 voffset=-35 hscale=56 vscale=56 angle=0} \vspace{0mm} \caption{ Energetic particle density profiles across a relativistic shock with an oblique magnetic field (Ostrowski 1991b). The shock with $U_1 = 0.5$, $R = 5.11$ and $\psi_1 = 55^\circ$ is considered. 
The curves, with perturbation amplitude growing toward the top, are labelled with the value $\log \kappa_\perp / \kappa_\parallel$ ($\kappa_\perp / \kappa_\parallel$ is the ratio of the cross-field to the parallel diffusion coefficient) given near each curve. The data are vertically shifted for clarity. The value $X_{max}$ is the distance from the shock at which the upstream particle density decreases to $10^{-3}$ of the shock value.} \end{figure} \subsection{History: acceleration in the presence of large amplitude magnetic field perturbations} As the approaches proposed by Kirk \& Schneider (1987a) and Kirk \& Heavens (1989), and the derivations of Begelman \& Kirk (1990), are valid only in cases of weakly perturbed magnetic fields, for larger-amplitude MHD perturbations numerical methods have to be used. A series of such simulations was performed by numerous authors (e.g. Kirk \& Schneider 1987b; Ellison et al. 1990; Ostrowski 1991a, 1993; Ballard \& Heavens 1992; Naito \& Takahara 1995; Bednarz \& Ostrowski 1996). The different approaches applied, including those involving particle pitch-angle diffusion or integration of particle trajectories in analytic structures of the perturbed magnetic field\footnote{Let us note that the application by some authors of point-like large-angle scattering models in relativistic shocks does not provide a viable physical representation of the scattering at MHD waves (Bednarz \& Ostrowski 1996).}, allowed a wide range of background conditions near the shock to be considered. The results obtained by different authors are summarized in Fig.~3, taken from Bednarz \& Ostrowski (1996). One should note that essentially all derivations by the above authors assumed scale-free conditions for the acceleration process, resulting in power-law distributions of the accelerated particles. 
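In Monte Carlo realizations, the pitch-angle diffusion mentioned above amounts to a random walk of the particle momentum direction on the unit sphere with a small angular step. The following is a schematic sketch of one such elementary step (my own illustration, not a reproduction of any of the cited codes; the maximum step `dtheta_max` is a free parameter):

```python
# One small-angle scattering step for a unit momentum vector: repeated
# steps make the direction random-walk on the sphere, which in the limit
# of many small steps realizes pitch-angle diffusion. Schematic only.
import numpy as np

def scatter_step(n, dtheta_max, rng):
    """Deflect unit vector n by a random angle <= dtheta_max about a
    random azimuth; the particle energy (|n|) is preserved."""
    # build an orthonormal pair (e1, e2) perpendicular to n
    a = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    e1 = np.cross(n, a); e1 /= np.linalg.norm(e1)
    e2 = np.cross(n, e1)
    dtheta = dtheta_max * np.sqrt(rng.uniform())   # uniform on a small cap
    phi = 2.0 * np.pi * rng.uniform()
    m = (np.cos(dtheta) * n
         + np.sin(dtheta) * (np.cos(phi) * e1 + np.sin(phi) * e2))
    return m / np.linalg.norm(m)

rng = np.random.default_rng(0)
n = np.array([0.0, 0.0, 1.0])
for _ in range(1000):
    n = scatter_step(n, 0.05, rng)
print(np.linalg.norm(n))   # remains 1 to rounding: scattering conserves |p|
```

In the actual simulations such elementary steps are combined with Lorentz transformations between the upstream and downstream plasma rest frames and with checks for shock crossings.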
\begin{figure}[hbt] \vspace{57mm} \special{psfile=ostrowski07-fig3.ps hoffset=-50 voffset=-36 hscale=57 vscale=57 angle=0} \vspace{0mm} \caption{ The particle acceleration time scale $T_{acc}$ versus the particle spectral index $\alpha$ for different magnetic field inclinations $\psi_1$, given near the respective curves (Bednarz \& Ostrowski 1996). The {\it minimum} value of the model parameter $\kappa_\perp/\kappa_\|$ occurs at the encircled point of each curve, and the wave amplitude monotonically increases along each curve up to $\delta B \sim B$, where all curves converge in $\alpha$. The curve for $\Psi_1 = 60^\circ$ ($U_{B,1}=1$) separates the sub- and super-luminal shock results. We do not discuss here the presented acceleration times. } \end{figure} In Fig.~3 one can find very flat spectra for oblique subluminal shocks if the perturbation amplitudes are small. In contrast, in superluminal shocks generation of a power-law spectrum is possible only in the presence of large amplitude turbulence. Then, unlike in the subluminal shocks, the spectra are extremely steep for small values of $\delta B$ (not presented in the figure) and $\alpha$ monotonically decreases with increasing magnetic field perturbations. A new feature observed in oblique shocks is that the spectral index $\alpha$ changes with $\delta B$ in a non-monotonic way. \subsection{History: Energy spectra of cosmic rays accelerated at ultra-relativistic shocks} \noindent The main difficulty in modelling the acceleration processes at shocks with large Lorentz factors $\Gamma$ is the fact that the involved particle distributions are extremely anisotropic at the shock, with particle angular distribution opening angles $\sim \Gamma^{-1}$ in the upstream plasma rest frame.
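Monte Carlo treatments of this regime typically evolve a particle's pitch-angle cosine through many small random deflections of the momentum direction. A schematic sketch of a single small-angle scattering step (the angular amplitude, step count, and seed are illustrative assumptions, not parameters of the cited simulations):

```python
import math
import random

def scatter_mu(mu, dtheta_max, rng):
    """One small-angle scattering event: tilt the momentum vector by a
    random angle <= dtheta_max about its current direction and return
    the new pitch-angle cosine mu."""
    dtheta = dtheta_max * math.sqrt(rng.random())  # uniform within the small cone
    phi = 2.0 * math.pi * rng.random()             # random azimuth
    sin_theta = math.sqrt(max(0.0, 1.0 - mu * mu))
    return mu * math.cos(dtheta) + sin_theta * math.sin(dtheta) * math.cos(phi)

rng = random.Random(1)
mu = 1.0  # start aligned with the mean magnetic field
for _ in range(10000):
    mu = scatter_mu(mu, dtheta_max=0.1, rng=rng)
print(mu)  # a single particle performs a random walk in pitch angle
```

Over an ensemble of such walkers the distribution isotropizes in the local plasma frame, which is the transport ingredient the simulations discussed below build on.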
In the simulations of Bednarz \& Ostrowski (1998) a Monte Carlo method involving small amplitude pitch-angle scattering was applied for particle transport near shocks with $\Gamma$ in the range $3$ -- $243$. The simulations revealed an unexpected result: for $\Gamma \to \infty$ the resulting power-law distributions converge to a `universal' one with the spectral index $\sigma \approx 2.2$ (Fig.~4). Essentially the same result was derived with different methods by many other authors (Gallant \& Achterberg 1999; Kirk et al. 2000; Achterberg et al. 2001; Lemoine et al. 2003; Keshet \& Waxman 2005; Lemoine \& Revenu 2006; Morlino et al. 2007), which could suggest that in the ultrarelativistic limit the acceleration process becomes again simple, generating cosmic ray spectra essentially independent of the considered background conditions. One can find examples in the literature where some authors extend this claim to all relativistic shocks. \begin{figure}[hbt] \vspace{57mm} \special{psfile=ostrowski07-fig4.ps hoffset=-50 voffset=-147 hscale=54 vscale=54 angle=0} \vspace{0mm} \caption{ The simulated spectral indices $\sigma$ ($\sigma \equiv \alpha - 2$) versus the shock Lorentz factor $\Gamma$ (Bednarz \& Ostrowski 1998). Results for a given $\psi_1$ are joined with dashed lines; the respective value of $\psi_1$ is given near each curve. In a series of simulations not presented here, increasing the turbulence amplitude shifted the resulting curves toward the parallel shock ($\Psi_1 = 0^\circ$) results.} \end{figure} \begin{figure*}[hbt] \vspace{73mm} \special{psfile=ostrowski07-fig5.ps hoffset=-70 voffset=-460 hscale=95 vscale=95 angle=0} \vspace{0mm} \caption{ Particle spectra for an oblique mildly relativistic shock: the shock velocity $u_1 = 0.5\,c$, the mean magnetic field inclination $\Psi_1 = 45^\circ$ and the wave power spectrum are indicated in the respective panels (from Niemiec \& Ostrowski 2004).
Values of the magnetic turbulence amplitude, $\delta B/B$, and the indices fitted to the power-law sections of the spectra (in parentheses) are given near each result.} \end{figure*} Ostrowski \& Bednarz (2002) reconsidered all the above approaches to deriving particle spectra at relativistic shocks and `discovered' that the conditions producing the universal spectral index were in some way equivalent to assuming highly turbulent conditions near the shock. Additionally, all these models did not introduce any physical scale and thus forced the power-law shape of the resulting spectrum. Do such conditions and the resulting characteristic spectra really exist in astrophysical situations? \subsection{Toward a realistic description of the relativistic shock acceleration} Studies of realistic (as far as possible) conditions for the relativistic shock acceleration were presented in a series of papers by Niemiec et al. (Niemiec \& Ostrowski 2004, 2006, Niemiec et al. 2006; see also Lemoine \& Pelletier 2006). We assumed 3-D static magnetic field perturbations upstream of the shock, imposed as a large number of sinusoidal waves with different power spectra, $F(k)$. Either the flat spectrum, $F(k) \propto k^{-1}$, or the Kolmogorov spectrum, $F(k) \propto k^{-5/3}$, was considered in the wide wave-vector range ($k_{min}$, $k_{max}$), with $k_{max}/k_{min} = 10^5$. The downstream magnetic field structures were computed by the respective compression of the upstream field at the shock. In the last of the above papers an additional component of large amplitude short-wave MHD turbulence was assumed to be produced at the shock. The accelerated particle spectra were derived using Monte Carlo simulations for a wide range of shock Lorentz factors, between 2 and 30, and a selection of mean magnetic field configurations and perturbation amplitudes.
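A perturbation field of this kind can be synthesized numerically as a superposition of static sinusoidal modes with log-spaced wave vectors. The sketch below (one-dimensional, with an assumed normalization) illustrates the construction for a Kolmogorov-type power spectrum over the $k_{max}/k_{min} = 10^5$ range quoted above; it is a simplified illustration, not the 3-D field model of the cited papers:

```python
import math
import random

def make_modes(n, k_min, k_max, slope, rng):
    """Log-spaced wave vectors with power spectrum F(k) ~ k**(-slope);
    for logarithmic spacing the mode amplitude scales as
    sqrt(F(k) * k) = k**((1 - slope) / 2)."""
    ks = [k_min * (k_max / k_min) ** (i / (n - 1)) for i in range(n)]
    return [(k, k ** (0.5 * (1.0 - slope)), 2.0 * math.pi * rng.random())
            for k in ks]

def delta_b(x, modes):
    """Static field perturbation at position x: a superposition of
    sinusoidal modes (wave vector, amplitude, phase)."""
    return sum(a * math.sin(k * x + p) for k, a, p in modes)

rng = random.Random(0)
# Kolmogorov-type spectrum, slope 5/3, over the wide range k_max/k_min = 1e5.
modes = make_modes(n=64, k_min=1.0, k_max=1.0e5, slope=5.0 / 3.0, rng=rng)
print(delta_b(0.5, modes))
```

With `slope = 1` (the flat spectrum) every logarithmic band carries equal amplitude, which is the other case considered in the text.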
\begin{figure}[hbt] \vspace{65mm} \special{psfile=ostrowski07-fig6.ps hoffset=-20 voffset=-5 hscale=75 vscale=75 angle=0} \vspace{0mm} \caption{ The particle spectra derived for superluminal relativistic shocks with Lorentz factors $\gamma_1$ = 5, 10 and 30 (Niemiec \& Ostrowski 2006a).} \end{figure} \begin{figure*}[hbt] \vspace{75mm} \special{psfile=ostrowski07-fig7.ps hoffset=-25 voffset=-5 hscale=87 vscale=87 angle=0} \vspace{0mm}\caption{ Particle spectra derived for parallel relativistic shocks with Lorentz factor $\gamma_1$ = 30 (Niemiec \& Ostrowski 2006). For a description see Fig.~5 and the original paper.} \end{figure*} Let us take a look at a few characteristic results of these modelling efforts. In Fig.~5 results derived for subluminal oblique mildly relativistic shocks are presented. One may note that introducing energy scales into the modelling, in our units $2\pi / k_{max}$ and $2\pi / k_{min}$, leads to deviations of the resulting spectra from the power-law shape. The spectra, as expected from the previous discussion, are very flat for small amplitude turbulence, but steepen at larger amplitudes. An interesting feature is that at very high particle energies, above the resonance range ($E > 2\pi/k_{min}$), the `short' waves influence particle trajectories more weakly, leading to hard energy tails before the cut-offs imposed by the modelling. When we considered oblique superluminal shocks, the spectra consisted of the shock-compressed injected component and a limited high energy tail with a cut-off well within the `resonance range' ($2\pi/k_{max}$, $2\pi/k_{min}$). As illustrated in Fig.~6, the tails diminish rapidly with growing shock Lorentz factor, leaving for large $\gamma_1$ the `compressed component' only. There are a few general observations on the first-order Fermi acceleration processes from these series of models.
For particles in the low energy range of the resonance energies for the considered field perturbations, the acceleration process proceeds in an ensemble of effectively different oblique shocks, where each {\em local} mean magnetic field structure is formed as a superposition of the mean magnetic field $B_{0,1}$ and the long wave field perturbations. Thus, significant differences can occur between spectra generated in the presence of flat and steep turbulence power spectra, and the spectral indices vary significantly with the perturbation amplitudes. In parallel shocks the long wave perturbations introduce the acceleration effects observed with oblique magnetic fields (cf. also Ostrowski 1993). Thus in ultrarelativistic parallel shocks propagating in a highly turbulent medium these effects can lead to the formation of particle distributions with cut-offs at relatively low energies, as in shocks with perpendicular field configurations (Fig.~7). If the shock wave generates large amplitude short-wave turbulence downstream of the shock, the acceleration process can form a more extended power-law tail, but at higher particle energies the mean magnetic field or the long wave magnetic field structures start to dominate in shaping particle trajectories, and thus the acceleration process, leading to results like the ones described above. In none of the studied cases were we able to create scale-free conditions for the acceleration process, leading to a wide-range power-law distribution of accelerated particles. Power laws formed only in limited energy ranges, and the forms of the spectra usually depended strongly on the considered background conditions. \subsection{Observational constraints on the shock acceleration from Cyg A hotspots} Constraints on the above theoretical derivations can be provided by precise observations of energetic particle emission from objects harbouring relativistic shocks. Such a study performed for the hotspots of the Cygnus A radio source (Stawarz et al.
2007) reveals significant deviations of the derived electron spectra from the `standard' shock spectra. The resulting spectral energy distribution of hotspot D is presented in Fig.~8, showing both the extended synchrotron component and the inverse-Compton (IC) component modelled to fit the optical and X-ray measurements. The low Spitzer IR points provide an additional important constraint on the IC spectral component. In the derivation of the relativistic electron distribution these measurements allow one to exclude the possibility of substantial absorption in the low frequency synchrotron spectrum and thus require a very flat distribution of low energy electrons. Thus the intrinsic electron spectrum (Fig.~9) is composed of a very flat low energy sector, with the energy spectral index $s \approx 1.5$, followed above a break at $E \approx 1$~GeV by a steep ($s > 3$) high energy sector. A possible interpretation of the spectrum considered by us was to assume mildly-relativistic shock acceleration in a jet dominated by the bulk kinetic energy of protons. The protons are expected to establish the characteristic break energy scale, $E_{br} \sim 1$~GeV, at the shock transition layer. The electrons (or pairs) below this energy are expected to be accelerated within this layer due to electron-proton collective interactions. Above $E_{br}$, either the first order Fermi process acting at the shock creates a steep spectrum, or the acceleration process proceeds downstream of the shock as a second-order Fermi process. The existing theoretical models do not allow one to reject either of these alternatives. \begin{figure}[hbt] \vspace{60mm} \special{psfile=ostrowski07-fig8.ps hoffset=-15 voffset=-10 hscale=80 vscale=80 angle=0} \vspace{0mm} \caption{ Spectral energy distribution of emission from the hot spot D of Cyg A (Stawarz et al. 2007). One can clearly see both the synchrotron and the IC components.
The Spitzer points in the infrared and the optical point allow one to exclude the possibility that the measured flat low-frequency synchrotron component (dotted lines in the synchrotron and IC spectral ranges) is formed by some self-absorption processes.} \end{figure} \begin{figure}[hbt] \vspace{60mm} \special{psfile=ostrowski07-fig9.ps hoffset=-13 voffset=-15 hscale=80 vscale=80 angle=0} \vspace{0mm} \caption{ Relativistic electron spectra in the Cyg A hot-spots (Stawarz et al. 2007; here $\gamma$ is the electron Lorentz factor). The different spectral indices of the low energy and the high energy parts are expected to be intrinsic to the acceleration process, not an effect of the distribution `aging' downstream of the shock.} \end{figure} \section{Energetic particle acceleration in shear layers and regions of relativistic turbulence} Acceleration processes acting, e.g., in AGN central sources and in shocks formed in large scale jets are not always able to explain the observed high energy electrons radiating far from the centre/shock. Among the few proposed possibilities for explaining these data, a relatively natural but still unexplored one involves particle acceleration within the shear layer transition at the interface between the jet and the surrounding medium. To date, knowledge of the physical conditions within such layers is very limited and only rough estimates for the considered acceleration processes are possible. Within a {\em subsonic} turbulent layer with a non-vanishing small velocity shear, the ordinary second-order Fermi acceleration, as well as the process of `viscous' particle acceleration (cf. the review by Berezhko (1990) of the work done in the early 1980s; Earl et al. 1988, Ostrowski 1990, 1998, 2000, Stawarz \& Ostrowski 2002), can take place.
The mean particle energy gain per scattering in the latter process scales as $$ {<\Delta E> \over E} \, \propto \left( { <\Delta U> \over c } \right)^2 \qquad , \eqno(4.1)$$ \vspace{1mm} \noindent where $< \Delta U >$ is the mean velocity difference between the `successive scattering acts'. It is proportional to the mean free path normal to the shear layer, $\lambda_n$, times the mean flow velocity gradient in this direction, $ \nabla_n \cdot \vec{U} $. With $d$ denoting the shear layer thickness, this gradient can be estimated as $| \nabla_n \cdot \vec{U} | \approx U/d$. Because the acceleration rate in the Fermi II process is $\propto (V / c)^2$ ($V \approx V_A$ is the wave velocity, $V_A$ -- the Alfv\'en velocity), the relative importance of the two processes is given by the factor $$\left( {\lambda_n \over d} {U \over V} \right)^2 \qquad . \eqno(4.2)$$ \vspace{1mm} \noindent The relative efficiency of the viscous acceleration grows with $\lambda_n$, and in the formal limit of $\lambda_n \approx d$ and $V \ll c$ -- outside the validity range of equation (4.2) -- it dominates over the Fermi acceleration to a large extent. Because accelerated particles can escape from the accelerating layer only through relatively inefficient radial diffusion, the formed particle spectra are expected to be very flat up to the high energy cut-off, but the exact form of the spectrum depends on several unknown physical parameters of the boundary layer (Ostrowski 1998, 2000). For turbulent {\em relativistic} plasmas the second-order Fermi acceleration can in principle dominate over the viscous process at all particle energies. In the case of electrons the upper energy scale for the accelerated particles is set by the radiation losses.
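Equation (4.2) is a simple arithmetic estimate; the following sketch evaluates it with toy numbers that are purely illustrative assumptions:

```python
def viscous_to_fermi_ratio(lambda_n, d, u, v_wave):
    """Relative efficiency of 'viscous' shear acceleration versus the
    second-order Fermi process, Eq. (4.2): ((lambda_n/d) * (U/V))**2.
    Valid only for lambda_n << d."""
    return ((lambda_n / d) * (u / v_wave)) ** 2

# Assumed toy numbers: a mean free path of 1% of the layer thickness and
# a shear flow speed 30 times the Alfven (wave) speed, in units of c.
print(viscous_to_fermi_ratio(lambda_n=0.01, d=1.0, u=0.3, v_wave=0.01))
```

Even with a mean free path far smaller than the layer thickness, a large $U/V$ can make the viscous contribution comparable to the Fermi II rate, in line with the scaling argument above.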
A simple exercise with the above estimated acceleration scales and the synchrotron radiation loss scale yields, for sources like small and large scale jets, radio hot spots or lobes, the highest electron energies between 1 TeV and $10^3$ TeV, with the respective acceleration time scales $\sim 10^3$ s in gauss fields and up to $\sim 10^3$ yrs in $\mu$G fields (depending on the considered object). The much higher energy limits for protons are usually determined by the escape boundary conditions, not the radiative losses. \section{Final remarks} Recent studies of the first order Fermi acceleration processes at relativistic shocks, taking into account realistic assumptions about the physical conditions near the shock, reveal a few unexpected conclusions. The modelling of the acceleration process in mostly perpendicular (for relativistic velocities) shocks yields spectra consisting of the compressed injected part appended with a limited high energy tail. For given upstream conditions, increasing the shock Lorentz factor $\Gamma$ leads to steepening of such energetic tails, leaving essentially the compressed injected component at large $\Gamma$. Another unexpected feature is the observed dependence of the spectral slope on the turbulence amplitude also for parallel shock waves, and the formation of cut-offs at such shocks for large $\Gamma$. Essentially no conditions studied by Niemiec et al. resulted in the formation of a wide-range power-law particle distribution with {\em the universal spectral index 2.2}. It is of interest, in view of the recently published AUGER results (The Pierre Auger Collaboration 2007), that the modelling presented above seems to exclude first-order Fermi acceleration at relativistic shocks as a possible source of the highest energy particles registered in this experiment. At the same time, the second-order Fermi processes acting in turbulent relativistic plasmas are expected to play a significant role in cosmic ray acceleration.
Thus, even facing substantial mathematical and physical difficulties, they deserve a detailed study. One may note that the acceleration processes accompanying magnetic field reconnection are analogous to the second-order Fermi acceleration. Such reconnection processes always accompany large amplitude MHD turbulence and generate turbulence themselves. A few simple attempts to consider these processes were recently presented (e.g. Virtanen \& Vainio 2005). When considering the relativistic shock acceleration one should also note the interesting approaches of Derishev et al. (2003) and Stern (2003), outside the classical Fermi scheme. They consider acceleration processes in highly relativistic shocks or jet shear layers (Stern \& Poutanen 2006) involving particle-particle or particle-photon interactions on both sides of the shock. \subsection*{Acknowledgements} I am grateful to my collaborators Jacek Niemiec and {\L}ukasz Stawarz, whose significant work forms the main part of this report. The present work was supported by the Polish Ministry of Science and Higher Education in the years 2005-2008 as research project 1 P03D 003 29.
0801.0836
\section{Introduction} Two-dimensional (2D) lattice models with continuous local spin symmetry, such as the classical XY-model and the classical Heisenberg model on the square lattice, do not have finite magnetization at finite temperature. This fact, proved by Mermin and Wagner~\cite{Mermin}, does not exclude the presence of a phase transition of the Berezinskii-Kosterlitz-Thouless (BKT) type~\cite{Ber,KT}. These well-known facts are based on analysis in the flat 2D plane. Quite recently, Baek {\it et al.} studied the XY model on the heptagonal lattice~\cite{Baek}, which is one of the hyperbolic lattices constructed as a tessellation of heptagons on the hyperbolic plane, i.e., the 2D space with a constant negative curvature~\cite{Sausset}. By way of Monte Carlo (MC) simulations for open boundary systems, they concluded the absence of a phase transition, including one of the BKT type. Their result is in accordance with the thermodynamic properties of the Ising model on the hyperbolic lattice, where there is no singularity in the specific heat, as shown by d'Auriac {\it et al.}~\cite{dAuriac}. These observations on the hyperbolic lattice can be explained by the non-negligible effect of the system boundary~\cite{Chris1,Chris2}, which always constitutes a finite fraction of the system regardless of the system size. It should be noted, as pointed out by d'Auriac {\it et al.}, that the presence of the ordered phase is not excluded in the region far from the boundary~\cite{dAuriac}, although the area of such an ordered region is negligibly small compared with the whole system on the hyperbolic lattice. The situation is similar to that of statistical models on the Cayley tree, whose deep interior can be regarded as the Bethe lattice~\cite{Baxter}. Shima {\it et al.} studied the Ising model on the hyperbolic lattice by MC simulations, and observed a mean-field-like phase transition deep inside the system~\cite{Shima,Hasegawa}.
The mean field behavior is in accordance with theoretical studies of phase transitions on infinitely large hyperbolic lattices~\cite{Rietman,Doyon}. It can be expected that such an order also appears in the case of the XY model and the clock models. In this paper we study $N(\ge3)$-state clock models on the pentagonal lattice~\cite{tech} up to $N = 30$ by use of the CTMRG method~\cite{Nishino1,Nishino2,Nishino3} modified for systems on hyperbolic lattices~\cite{Ueda,Roman}. The internal energy and the spontaneous magnetization {\it at the center of sufficiently large systems} are calculated numerically. In order to judge the presence of an ordered state deep inside the system, we impose the ferromagnetic boundary conditions at the beginning of the iterative calculation of the CTMRG method. As we show in the following, the obtained results support the existence of a mean-field-like phase transition for all $N$, even in the limit $N \rightarrow \infty$, where the system coincides with the classical XY model. In the next section we introduce the geometry of the pentagonal lattice and consider the $N$-state clock model on it. A brief explanation of the CTMRG method is presented. In Sec.~III we show numerical results on the spontaneous magnetization, the internal energy, and the specific heat, and summarize the observed phase transitions. \section{Clock models on pentagonal lattice} \begin{figure}[tb] \includegraphics[width=50mm,clip]{Fig1.eps} \caption{The pentagonal lattice drawn in the Poincar\'{e} disc. The open circles represent the $N$-state spin variables $\theta_i^{~}$. Two geodesics drawn by thick arcs divide the system into four equivalent quadrants.} \label{f1} \end{figure} We consider the 2D lattice shown in Fig.~\ref{f1}, which is a tessellation of regular pentagons. The lattice is in a curved plane with a constant negative scalar curvature. Therefore, the Hausdorff dimension of the lattice is infinite.
For a technical reason in the CTMRG method, we have chosen the lattice with the coordination number four~\cite{tech}. Two geodesics drawn by the thick arcs cross one another at a site labeled by $\theta_1^{~}$. By these two arcs the whole lattice is divided into four equivalent parts called the quadrants or the corners. Let us introduce the $N$-state clock model on the pentagonal lattice. On each lattice site there is an $N$-state spin variable $\theta_i^{~}$, where $i$ is the site index. The possible values of $\theta_i^{~}$ are $2 \pi \xi / N$ with $\xi=0,1,2,\dots,N-1$. We consider the {\it angle} $\theta_i^{~}$ as an internal degree of freedom; therefore, $\theta_i^{~}$ has nothing to do with the lattice geometry. If there are only ferromagnetic interactions between neighboring spin pairs, the Hamiltonian of the $N$-state clock model is written as \begin{equation} {\cal H} = - J \sum_{\langle i j \rangle}^{~} \cos\left( \theta_i^{~} - \theta_{j}^{~} \right) \, , \label{eq1} \end{equation} where $J > 0$ is the coupling constant. The summation runs over all the nearest-neighbor pairs $\langle i j \rangle$. The case $N = 2$ is nothing but the Ising model with the coupling constant $J$, and this case has already been studied~\cite{Ueda,Roman}. The case $N = 4$ can be reduced to the Ising model with the coupling $J / 2$. We thus chiefly discuss the case $N = 3$, which is equivalent to the 3-state Potts model, and the cases $N \ge 5$ in the following. In order to observe the phase transition deep inside the system, we impose the ferromagnetic boundary conditions so that all the spin variables at the system boundary are aligned in the direction $\theta = 0$. For convenience we represent this clock model as a special case of the interaction-round-a-face (IRF) model on the hyperbolic lattice. For instance, let us label the spins around a pentagon as shown in Fig.~\ref{f1}.
The IRF weight $W$, which is the local Boltzmann weight corresponding to this pentagon, is obtained as \begin{equation} W(\theta_1^{~} \, \theta_2^{~} \, \theta_3^{~} \, \theta_4^{~}\, \theta_5^{~}) = \prod\limits_{i=1}^5 \exp\left\{ \frac{J\cos\left( \theta_i^{~} - \theta_{i+1}^{~} \right)} {2 \, k_{\rm B}T} \right\} \, , \label{eq2} \end{equation} where $\theta_6\equiv\theta_1$. Having the IRF weight $W$ thus defined, we can express the partition function of the whole system as \begin{equation} {\cal Z} = \sum_{\{\theta\}}\prod \, W \, , \label{epf} \end{equation} where the product is taken over all the IRF weights in the pentagonal lattice. The sum $\sum_{\{ \theta \}}^{~}$ is taken over all spin configurations. In order to discuss the phase transition on the hyperbolic lattice, let us consider a system whose size (or diameter) $L$ is far larger than the correlation length $\xi$. We divide the system into two parts, the boundary area (BA) and the deep inside area (DIA). The former, the BA, is a ring-shaped area whose sites all lie within a distance of the order of $\xi$ from the system boundary. The latter, the DIA, is the rest of the system, which we analyze in the following. Because of the hyperbolic geometry, the portion of the BA with respect to the whole system is always finite, even in the limit $L \rightarrow \infty$. The situation is similar to that of the Cayley tree~\cite{Baxter}. Thus the thermodynamic property of the whole system is always affected by the boundary condition, especially at low temperatures~\cite{Chris1,Chris2}. When $\xi$ is finite, it is possible to consider the thermodynamics of the DIA, discarding the thermodynamic contribution from the BA, since we have assumed $L \gg \xi$ and therefore the size of the DIA is sufficiently large. When we collect numerical data of the DIA, we always treat sufficiently large systems that satisfy $L \gg \xi$, choosing temperatures for which $\xi$ is at most of the order of 1000.
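For concreteness, the IRF weight of Eq.~(\ref{eq2}) can be evaluated directly. The sketch below (units with $k_{\rm B} = 1$) is only an illustration of the definition, not of the CTMRG algorithm itself:

```python
import math

def irf_weight(thetas, J=1.0, T=1.0, kB=1.0):
    """Local Boltzmann weight of one pentagon, Eq. (2): the product over
    the five edges of exp(J * cos(theta_i - theta_{i+1}) / (2 * kB * T)),
    with theta_6 identified with theta_1."""
    w = 1.0
    for i in range(5):
        w *= math.exp(J * math.cos(thetas[i] - thetas[(i + 1) % 5])
                      / (2.0 * kB * T))
    return w

# A fully aligned pentagon: each of the 5 edges contributes exp(J/(2T)),
# so the weight is exp(5/2) ~ 12.18 for J = T = 1.
aligned = irf_weight([0.0] * 5)
print(aligned)
```

The weight depends only on angle differences, so any global rotation of the five spins leaves it unchanged, reflecting the global symmetry of the Hamiltonian.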
We then detect the phase transition in the DIA by extrapolation from both the low- and high-temperature sides. We introduce Baxter's corner transfer matrix (CTM) $C$, which represents the Boltzmann weight of a quadrant of the system~\cite{Baxter}. The partition function ${\cal Z}$ is then expressed as ${\rm Tr}\ C^4$, i.e., as the trace of the {\it density matrix} $\rho = C^4_{~}$. Applying the concept of the density matrix renormalization~\cite{White1,White2,Sch}, a precise approximation of ${\cal Z}$ can be obtained for large scale systems by way of iterative numerical calculations~\cite{Nishino1,Nishino2,Nishino3}. This is the outline of the CTMRG method, which can be applied to statistical models on hyperbolic lattices~\cite{Ueda,Roman}. After we obtain the density matrix $\rho$ for a sufficiently large system, we can calculate the expectation values at the center of the system, which represent the thermodynamics deep inside the system. For example, we can obtain the spontaneous magnetization \begin{equation} {\cal M}^{(N)}_{~} = {\rm Tr}\, \left[ \, \cos(\theta_c^{~}) \, \rho \, \right] \, / \, {\rm Tr}\, \rho \, , \end{equation} where $\theta_{\rm c}^{~}$ represents the spin at the center of the system, and the internal energy per bond \begin{equation} {\cal E}^{(N)}_{~} = -J \, {\rm Tr}\, \left [\, \cos(\theta_{\rm c}^{~}-\theta'_{\rm c}) \, \rho \, \right] \, / \, {\rm Tr}\,\rho \, , \end{equation} where $\theta'_{\rm c}$ is the spin neighboring $\theta_{\rm c}^{~}$. The specific heat ${\cal C}^{(N)}_{~}$ can be obtained by taking the numerical derivative of ${\cal E}^{(N)}_{~}$ with respect to the temperature $T$. It should be noted that ${\cal M}^{(N)}_{~}$, ${\cal E}^{(N)}_{~}$, and ${\cal C}^{(N)}_{~}$ are not thermodynamic functions of the whole system but are those of the area deep inside the system. It has been known that the decay of the density matrix eigenvalues is very fast for models on hyperbolic lattices~\cite{Ueda,Roman}.
The clock model under study shares this feature. Therefore, it is sufficient to keep a very small number of degrees of freedom $m$ for the block spin variable in the CTMRG formalism. Typically, we keep $m\approx 2N$ states. We checked that a further increase of $m$ does not improve the numerical precision of ${\cal M}^{(N)}_{~}$ and ${\cal E}^{(N)}_{~}$ any further, even in the vicinity of the phase transition. \section{Numerical results} \begin{figure}[tb] \includegraphics[width=0.95\columnwidth,clip]{Fig2.eps} \caption{Temperature dependence of the spontaneous magnetization ${\cal M}^{(N)}_{~}$ for $3 \le N \le 30$. The open circle denotes the discontinuity in ${\cal M}^{(3)}_{~}$.} \label{f2} \end{figure} Throughout this section, we take the coupling constant $J$ in Eq.~(\ref{eq1}) as the unit of energy. For all the cases $N \ge 2$ we observe a phase transition, with the transition temperatures $T_{\rm 0}^{(N)}$ listed in Table~\ref{t1}. Note that $T_{\rm 0}^{(N)}$ converges to $T_{\rm 0}^{(\infty)}$ very quickly with respect to $N$. Figure~\ref{f2} shows the spontaneous magnetization ${\cal M}^{(N)}_{~}$ with respect to the rescaled temperature $T / T_{\rm 0}^{(N)}$. (Under this rescaling, ${\cal M}^{(2)}_{~}$ and ${\cal M}^{(4)}_{~}$ are identical.) If $N = 3$, the magnetization is discontinuous at $T_{\rm 0}^{(3)}$. The 3-state clock model, which is equivalent to the 3-state Potts model, thus exhibits a first order phase transition on the pentagonal lattice. This is a kind of mean-field behavior, since it is well known that the mean-field approximation applied to the 3-state Potts model on 2D lattices yields a first order phase transition~\cite{Wu}. In the vicinity of $T_0^{(N)}$ the magnetization ${\cal M}^{(N)}_{~}$ rapidly converges to the large-$N$ limit ${\cal M}^{(\infty)}_{~}$. The inset of Fig.~\ref{f2} displays the low-temperature behavior of ${\cal M}^{(N)}_{~}$ in detail.
Note that in the limit $N\to\infty$ the magnetization ${\cal M}^{(N)}_{~}$ decreases linearly with $T$ at very low temperatures. Figure~\ref{f3} shows the square of ${\cal M}^{(N)}_{~}$ with respect to $t = ( T_0^{(N)} - T ) / T_0^{(N)}$ for the cases other than $N = 3$. It is obvious that the scaling relation ${\cal M}^{(N)}_{~} \propto t^{\beta}$ is satisfied with the exponent $\beta = \frac{1}{2}$. \begin{table}[tb] \caption {The transition temperatures $T_{\rm 0}^{(N)}$, the critical exponents $\beta$, and positions of the specific heat maximum $T_{\rm Sch}^{(N)}$.} \label{t1} \begin{ruledtabular} \begin{tabular*}{\hsize}{ c@{\extracolsep{0ptplus1fil}}c@{\extracolsep{0ptplus1fil}} c@{\extracolsep{0ptplus1fil}}c@{\extracolsep{0ptplus1fil}} c@{\extracolsep{0ptplus1fil}}} $N$-clock & $ T_{\rm 0}^{(N)}$ & $\beta$ & $T_{\rm Sch}^{(N)}$ \\ \colrule 2 & 2.7991 & 0.5 & --- \\ 3 & 1.6817 & --- & --- \\ 4 & 1.3995 & 0.5 & --- \\ 5 & 1.3659 & 0.5 & --- \\ 6 & 1.3625 & 0.5 & 0.62948 \\ 7 & 1.3623 & 0.5 & 0.46295 \\ 8 & 1.3622 & 0.5 & 0.35676 \\ 9 & 1.3622 & 0.5 & 0.28357 \\ 10 & 1.3622 & 0.5 & 0.22997 \\ 13 & 1.3622 & 0.5 & 0.13761 \\ 20 & 1.3622 & 0.5 & 0.05864 \\ 30 & 1.3622 & 0.5 & 0.02600 \\ \end{tabular*} \end{ruledtabular} \end{table} \begin{figure}[tb] \includegraphics[width=0.95\columnwidth,clip]{Fig3.eps} \caption{Square of ${\cal M}^{(N)}_{~}$ with respect to $( T_0^{(N)} - T ) / T_0^{(N)}$.} \label{f3} \end{figure} Figure \ref{f4} shows the internal energy ${\cal E}^{(N)}_{~}$. There is a finite jump in ${\cal E}^{(3)}_{~}$ at $T_0^{(3)}$, where the latent heat per bond ${\cal L} = {\cal E}^{(3)}_{+} - {\cal E}^{(3)}_{-}$ is $0.078$. Analogously to the magnetization ${\cal M}^{(N)}_{~}$, the internal energy ${\cal E}^{(N)}_{~}$ is linear in $T$ in the low-temperature region in the limit $N\to\infty$. \begin{figure}[tb] \includegraphics[width=0.95\columnwidth,clip]{Fig4.eps} \caption{The absolute value of the internal energy $| {\cal E}^{(N)}_{~} |$.
The open circles denote the jump in the case $N = 3$.} \label{f4} \end{figure} Figure~\ref{f5} shows the rescaled specific heat $C^{(N)}_{~} / C^{(N)}_{\rm ~max}$, where $C^{(N)}_{\rm ~max}$ is the specific heat at $T_0^{(N)}$, with respect to the rescaled temperature $T / T_0^{(N)}$. Evidently, a discontinuity in the specific heat is observed for the cases $N = 2$ and $N \ge 4$. Thus, the second order phase transition is of the mean-field type. There is no indication of the BKT transition that is observed for clock models on flat 2D lattices~\cite{Landau}. \begin{figure}[tb] \includegraphics[width=0.95\columnwidth,clip]{Fig5.eps} \caption{The rescaled specific heat $C^{(N)}_{~} / C^{(N)}_{\rm ~max} $ versus the rescaled temperature $T / T_0^{(N)}$. The inset shows a typical example for the case $N=10$ without rescaling.} \label{f5} \end{figure} When $N$ is larger than 5, we observe a Schottky-type peak in the specific heat. Figure~\ref{f6} shows the $N$ dependence of the Schottky peak position $T_{\rm Sch}^{(N)}$. As shown, $T_{\rm Sch}^{(N)}$ is proportional to $1 / N^2_{~}$. This is qualitatively in accordance with the energy scale of a local excitation, $2 ( 2\pi / N )^2_{~} J$, from the completely ordered state. It is thus concluded that the Schottky peak disappears in the limit $N \rightarrow \infty$ and that the specific heat of the classical XY model on the pentagonal lattice remains finite even at $T = 0$. \begin{figure}[t] \includegraphics[width=0.95\columnwidth,clip]{Fig6.eps} \caption{The Schottky peak position $T_{\rm Sch}^{(N)}$ versus $1/{N^2_{~}}$.} \label{f6} \end{figure} \section{Conclusions} We have studied the $N$-state clock models on the pentagonal lattice, which is a typical example of the hyperbolic lattices. The phase transition deep inside the system is observed by use of the CTMRG method.
From the critical exponent $\beta = \frac{1}{2}$ for the spontaneous magnetization and the jump in the specific heat, we conclude that the phase transition for $N = 2$ and $N \ge 4$ is mean-field-like, provided that ferromagnetic boundary conditions are imposed. The Hausdorff dimension, which is infinite for the hyperbolic lattices, is essential in the observed critical behavior. We conjecture that the phase transition deep inside the system is also present for systems with free boundary conditions. In the case $N = 3$, where the system is equivalent to the 3-state Potts model, we observed a first-order phase transition. Since the $q$-state Potts model tends to exhibit a first-order transition for larger $q$~\cite{Wu}, it is expected that the transition of $q \ge 3$ Potts models on the pentagonal lattice is of the first order. We have partially confirmed the behavior for several values of $q$ and we conjecture that the transition is of the first order on any kind of hyperbolic lattice when $q \ge 3$. We observed stable ferromagnetic states below $T_0^{(N)}$ even in the continuous limit $N \rightarrow \infty$. This fact does not contradict the Mermin-Wagner theorem~\cite{Mermin} since the pentagonal lattice is not on the flat 2D plane. The vortex energy on hyperbolic lattices might be larger than that on the flat lattice. The difference may elucidate the absence of the BKT phase transition on the pentagonal lattice. \section*{Acknowledgments} The Slovak Agency for Science and Research grant APVV-51-003505 and Slovak VEGA grant No. 2/6101/27 are acknowledged (A.G. and R.K.). This work is also partially supported by Grant-in-Aid for Scientific Research from the Japanese Ministry of Education, Culture, Sports, Science and Technology (T.N. and A.G.).
2012.12422
\section{Introduction} Multiple Indicator Cluster Surveys, MICS (\url{mics.unicef.org}), are one of the most significant global sources of household data on health, education, and the well-being of women and children. Supported by UNICEF, they have been conducted since the mid-1990s in over 100 countries. The data is gathered through face-to-face interviews in nationally representative samples of households and can be disaggregated in various ways. MICS occurs in multiyear rounds and provides tools and guidance for governments and institutions to create, inform, and implement socio-economic and health policies. MICS data and documentation are freely available at the MICS website. More details can be found in the survey article \cite{KH:MICS}. One important parameter that can be calculated from the MICS surveys and attributed to each household is its \emph{wealth index}. This number essentially captures the household wealth based on the ownership of certain items and is an important and widely used instrument for assessing the economic situation of a country. The calculation is performed through standard principal component analysis (PCA) of the data, with the wealth index depending on the first principal component. The goal of this paper is to analyze the MICS data and the wealth index with \emph{topological data analysis} (TDA). This is a relatively new technique that tries to extract intrinsic information from the shape of a data cloud and interpret this information as features of the data. For overviews of TDA and its applications, see~\cite{Carlsson:TDA, Carlsson:TDAHomotopy, OPTGH:TDAOverview}. There are several successful TDA methods currently in use, and the one we will use in this paper is the \emph{Mapper algorithm}, due to Singh, M\'emoli, and Carlsson \cite{SMC:Mapper}. Mapper is an unsupervised machine learning algorithm which is essentially a dimension reduction and clustering procedure. 
The idea is to reduce the data to a \emph{Mapper graph} that retains various topological features of the data cloud. The nodes in the graph represent clusters of points that are ``nearby'' according to some notion of distance, while edges represent overlaps of clusters. Because of its ability to retain the ``shape'' of data and hold both local (nodes) and global (edges) information, this algorithm seems to capture more information than other dimension-reduction procedures like PCA. At the same time, it is a powerful visualization tool since it reduces a high-dimensional cloud to a graph. Mapper has been used to great effect in a number of settings such as medicine, genomic analysis, neuroscience, chemistry, remote sensing, soil science, agriculture, sports, voting, and economics (see \cite{BBBNL:Mapper} for a collection of references). We believe, however, that the use of TDA to study standards of living and wealth inequality is novel. In more detail, we apply the Mapper algorithm to the 2014-15 MICS survey from Serbia. The choice of this dataset is due solely to the fact that one of the authors (Memi\'c) was closely involved with creating, conducting, and analyzing the MICS surveys in Serbia and is thus intimately familiar with the methodology, in-field data collection, and the post-survey statistical analysis. The fact that this paper focuses on Serbia is in many ways secondary; the takeaway is that the Mapper algorithm offers a different point of view on the MICS data and that it can be used to inform policy-making. Our analysis can be performed on any MICS dataset and, in fact, a future comparative study could reveal which socio-economic indicators are specific to particular countries or regions of the world and which are more universal. The data that goes into our Mapper graph is based on the yes/no answers to 34 survey questions (a subset of all the questions MICS asks) about material possessions.
This data is endowed with a distance function that captures the idea that households that are ``close'' are those that have a similar set of possessions, and that some possessions are less common than others. This leads to a definition of a probability-based semimetric which does not appear to have been used in the literature before. Once the Mapper is generated, we extract two sets of observations from it. One is obtained by overlaying information onto the Mapper, i.e.~by coloring the nodes according to the wealth index or ownership of particular items. This analysis provides insight into relationships between wealth and rural/urban lifestyle, ownership of items, and certain types of households. This is potentially useful information in terms of understanding how to most efficiently raise the standard of living; it suggests, for example, that for the majority of people, household amenities are more important than gadgets. This is helpful in light of the fact that there is a long tail at the low end of the distribution of wealth scores (Figure \ref{fig:wscore_dist}), which indicates that the dataset contains a number of households with wealth -- and, one might therefore suspect, standard of living -- several standard deviations below the average. Our analysis provides some insight into what possessions might most efficiently raise the standard of living for these households. The other way we study the Mapper is by looking at its graph-theoretic properties, namely paths and flares. Studying the paths provides insight into relative priorities households have in terms of ownership of items. One of the advantages of Mapper is that it picks up non-monotonic relationships between material possessions even without field evidence pointing to the potential existence of such a relationship. Flares, on the other hand, appear to inform our understanding of household categorization in a way that is more subtle than one based simply on income and assets.
Because the Mapper performs local-to-global information extraction, looking at the same graph with different overlay colors allows us to see both overall trends and deviations from those trends clearly, without noise obscuring either. Both the ability to analyze boolean data and the application of overlay colors allow the Mapper to discern patterns in how different possessions relate to wealth as quantified by the wealth index. This expands the usefulness of the Mapper, and TDA more generally, as a hypothesis-generating method. It should be noted that our analysis relies on an existing measurement of wealth, namely the wealth index, so we do not expect to redefine it with our methods. However, since Mapper is a more subtle data reduction technique than PCA, the standard methodology for computing the wealth index, one of the takeaways from the work here is that perhaps Mapper could serve as a more nuanced way of understanding wealth scores. \subsection{Organization of the paper} In writing this paper, we have attempted to keep the technical exposition of the Mapper algorithm and the underlying topology to a minimum. However, the reader would be aided by familiarity with basic statistics, including a cursory understanding of principal component analysis (PCA), as well as elementary notions of linear algebra and topology such as maps and metrics. The paper is organized as follows: \begin{itemize} \item In Section \ref{S:Background}, we provide background on the Mapper algorithm (Section \ref{S:Mapper}) and MICS surveys (Section \ref{S:MICS}), including a brief review of the wealth score and the wealth index. \item Serbia MICS data and the parameters used to produce our Mapper graph are presented in Section \ref{S:Methods}. In particular, Section \ref{S:SerbiaMICS} lists the questions whose binary answers provide the input for the Mapper and discusses the distribution of the wealth score, which has a long tail at the low end.
Sections \ref{S:Metrics}, \ref{S:Filter}, and \ref{S:Clustering} give details on the metric, filter, and clustering -- the choices that have to be made in order for the Mapper to be generated. As mentioned above, one feature of independent interest in this setup is our definition of a probability-based (semi)metric that does not appear to have been used before. \item Section \ref{S:Results} gives two Mappers, one generated from a probability-based semimetric and one from the Euclidean metric. The latter simply provides evidence that the former is more informative and worth studying. \item We discuss and analyze the Mapper in Section \ref{S:Discussion}. We first overlay various information onto the Mapper in Section \ref{S:Overlay}. As mentioned above, this leads to various observations about the relationship between the wealth index and the urban/rural split (Section \ref{S:Urban/rural}) as well as some more nuanced characterizations of households according to ownership of items (Section \ref{item_types}). We then study some of the graph-theoretic properties of the Mapper in Section \ref{S:Graph}. Following various paths connecting its nodes turns out to inform our understanding of the relative priorities that households have in ownership of items (Section \ref{S:Paths}). Flares, which, roughly speaking, are paths that separate from the main body of the Mapper and end in a node of degree one, in turn provide insight into which items are considered luxuries and which are considered essentials. \item We summarize our findings briefly in Section \ref{S:Conclusions}. \item Section \ref{S:Future} is meant to convey that the analysis performed here is just the first step in applications of TDA to the MICS data and that this approach has much potential.
Various future directions of investigation are laid out, and include expanding the set of questions used to generate the Mapper as well as those that are superimposed on it, modifying the metric and the filter function according to different statistically significant parameters, and the application of this method to MICS datasets from countries other than Serbia. \end{itemize} \subsection{Acknowledgments} The third author was partially supported by the Simons Foundation. \section{Background}\label{S:Background} \subsection{Mapper Algorithm}\label{S:Mapper} The Mapper algorithm allows us to visualize high-dimensional datasets as a graph.\footnote{Or more generally as a \emph{simplicial complex}; we will not need this more general version here.} The graph retains many of the geometric properties of the original data cloud, such as connectedness and the presence of holes, but is easier to analyze. Starting with a data set $X\subset {\mathbb R}^n$ consisting of vectors in some high-dimensional Euclidean space ${\mathbb R}^n$, the user first specifies a dimension-reducing function $h\colon X\to{\mathbb R}^d$ called the \emph{filter} or \emph{projection}. The filter usually has some statistical meaning, and it varies depending on the context and the desired features of the data that are to be exposed. The target space is more manageable since $d$ is typically much smaller than $n$. Our filter will simply provide the number of affirmative answers to a series of questions (see Section \ref{S:Filter}). The image of the filter is then covered by $m$ overlapping hypercubes, i.e.~products of intervals, where $m$ is chosen by the user. The degree of the overlap can also be selected. Then, for each hypercube $i$ (where $1\leq i\leq m$), the datapoints in the preimage of hypercube $i$ are clustered. For the clustering to be performed, a distance function on the data is needed. 
The standard Euclidean distance is often used, but other metrics are also employed depending on the context of the analysis. In fact, one only needs a \emph{semimetric}, a distance function that does not necessarily satisfy the triangle inequality. Our distance function will indeed be a semimetric (see Section \ref{S:Metrics}). It will be based on probabilistic considerations and does not appear to have been used in the literature before. With the notion of ``closeness'' in hand, one then uses some clustering algorithm, for example single-linkage hierarchical clustering, on the preimages of the filter map to decide which data points are close enough to be clustered together. Each cluster then becomes a node in the Mapper graph. Since the hypercubes overlap, clusters from different hypercubes may have datapoints in common. Two nodes have an edge between them precisely when this occurs. In other words, two nodes are connected by an edge if and only if the clusters they represent have non-empty intersection. Figure \ref{fig:MapperExample} illustrates the Mapper procedure on a simple dataset in ${\mathbb R}^2$, i.e.~each data point is in this case a vector with two components. The size of the nodes in the Mapper corresponds to how many data points are clustered in it. The nodes can also be colored according to whatever attributes are deemed important. In our analysis, we will first color the node by the average wealth score of the households represented in the clusters, and then compare the result to the graph colored by rates of ownership of different possessions. \begin{figure}[h] \captionsetup{font=small} \centering \includegraphics[width=0.6\columnwidth]{images/MapperExample} \caption{An illustration of the Mapper algorithm. The filter map $h\colon{\mathbb R}^2\to{\mathbb R}$ projects onto the $y$ coordinate. The image of $h$ is covered by 4 hypercubes (intervals in this case) $I_1,..., I_4$. 
The preimages of $h$ are the intersections of the open sets $U_1,..., U_4$ with the data cloud. These preimages are then clustered to form the nodes of the Mapper. An edge is created whenever a data point belongs to more than one cluster. Image source: Belch{\'i} et al.~\cite{BBBNL:Mapper}.} \label{fig:MapperExample} \end{figure} A number of free implementations of the Mapper algorithm exist, such as Python Mapper~\cite{PythonMapper}, TDAmapper (R implementation)~\cite{TDAmapper}, and Kepler Mapper~\cite{KeplerMapper2019}. The last one is what we will use for our analysis. Mapper is a data reduction method, and it is in this way similar to principal component analysis. However, while the nodes are created from local information, the edges of the graph retain some knowledge about the global topological features of the data as a subspace of ${\mathbb R}^n$. This ability for local-to-global extrapolation is what makes Mapper appealing and useful. It should be noted that the user makes a number of choices when implementing Mapper -- filter function, number of sets in the cover, the size of the overlap, metric, and clustering procedure -- and varying these parameters can produce very different graphs. \subsection{Multiple Indicator Cluster Surveys}\label{S:MICS} Since its inception in 1995, the Multiple Indicator Cluster Surveys (MICS) have become the largest source of data on women, children, and adolescents worldwide. Supported and conducted by UNICEF, the program is currently in its sixth multi-year round (MICS6). Over two decades, 341 MICS surveys have been carried out in 117 countries, helping shape policies for the improvement of the well-being of women and children. MICS was a major source of data on the UN Millennium Development Goals indicators and will continue to be a major data source for the UN 2030 Sustainable Development Agenda. MICS surveys consist of a number of modules, some standard and some adapted to a specific country's needs.
Data is collected on a number of issues, such as fertility, mortality, contraception, newborn and mother health, and various other socio-economic parameters. Some of the modules are on the household level while others are on the individual level. The survey is conducted by trained fieldwork teams in face-to-face interviews with household members. The data is publicly available for research purposes from the MICS website.\footnote{See \url{https://mics.unicef.org}.} An excellent overview of the MICS surveys, their methodology, history, and significance can be found in \cite{KH:MICS}.\footnote{A number of papers discussing the MICS methodology can also be found at \url{https://mics.unicef.org/publications/reports-and-methodological-papers}.} \subsection{Wealth index and wealth score}\label{S:WI} One important piece of information that is extracted from MICS surveys is the \emph{wealth index}. This calculation captures the accumulated wealth by ranking households in terms of ownership of assets and amenities. A selection of MICS survey questions is used for the calculation of the wealth index. The questions are adapted to each country and are selected based on their perceived appropriateness for explaining the wealth of a household. In the case of Serbia that we consider in this paper, the subset of the 2014-15 MICS survey questions that were used for the computation of the wealth index had to do with material possessions and the features of the respondents' dwelling. The original wealth index calculation, used through MICS4, was modified in 2008 to take into account the urban bias. The current version involves the following steps: \begin{enumerate} \item Select a set of variables that are thought to be correlated with wealth. \item Run separate principal component analyses (PCA) for urban and rural areas. \item Run a PCA for the whole population. \item Regress the urban and rural factor scores onto those for the general population.
\item Obtain a combined \emph{wealth score}. \item Assign the combined wealth score to each household (give the same score to all household members). \item Divide the households into five equal groups, from poorest to richest, according to the combined wealth score, each containing 20 percent of the household members. The quintile in which a household ends up is its \emph{wealth index}. \end{enumerate} In a little more detail, principal components analysis (PCA), a data reduction technique, is at the heart of wealth index construction. From a set of variables that are correlated in terms of wealth, PCA extracts a set of uncorrelated principal components, reducing the many variables in a data set to a smaller number of dimensions. Each dimension, or principal component, is a weighted linear combination of the initial variables that would best explain variance. In other words, each principal component is the sum of the variables multiplied by their weights. The weights for each variable are different in each principal component and are deduced from the data's correlation matrix. The components are ordered so that the first principal component explains the largest amount of variation in the data. The wealth score calculation methodology uses the first principal component as the representation of wealth. \section{Methods}\label{S:Methods} \subsection{2014-15 Serbia MICS data}\label{S:SerbiaMICS} This paper focuses on the 2014-15 MICS5 data from Serbia, which participated in all six rounds of MICS (MICS6 data will be available by the end of 2020). The last three rounds were carried out on two independent samples in Serbia: a nationally representative sample and a sample of the population living in Roma settlements. The analysis in this paper covers the data from the Serbia MICS5 survey on the nationally representative sample. Our analysis focuses on the following 34 questions: \begin{multicols}{2} \begin{enumerate} \item Do you own an air conditioner?
\item Do you own an animal-drawn cart? \item Do you have a bank account? \item Do you own a bed? \item Do you own a bicycle? \item Do you have cable tv? \item Do you own a car? \item Do you own a dishwasher? \item Do you own a drying machine? \item Do you own an electric stove? \item Do you have electricity? \item Do you own a freezer? \item Do you own a fridge? \item Do you own a hair dryer? \item Do you have internet? \item Do you own an iron? \item Do you own a microwave? \item Do you own a mobile phone? \item Do you own a motorcycle or scooter? \item Do you own a non-mobile phone? \item Do you own your dwelling? \item Do you own land that can be used for agriculture? \item Do you own animals? \item Do you own a pc/laptop? \item Do you own a radio? \item Do you own a table with chairs? \item Do you own a television? \item Do you own a tractor? \item Do you own a truck? \item Do you own a vacuum cleaner? \item Do you own a wardrobe? \item Do you own a washing machine? \item Do you own a watch? \item Do you own a water heater? \end{enumerate} \end{multicols} These are all yes/no questions about material possessions. The answers to these questions are easy to encode and the TDA filter function is simple to define. One could in principle base the TDA analysis on different subsets of questions with different purposes in mind; we will make some comments about this in Section \ref{S:OtherData}. Of the $7351$ households surveyed, $6147$ households responded to all $34$ of the questions above, $1160$ households responded to none of these questions, and $44$ households responded to some, but not all, of these questions. We confine our analysis to those households that answered all $34$ questions. The ``yes'' or ``no'' responses to each question are coded to values of $1$ and $0$, respectively. A household response to the questions will be a vector $$ x= (x_1,...,x_{34}) $$ whose coordinates are 0 or 1, with $x_i$ encoding the answer to the $i$th question. 
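This encoding step can be sketched in a few lines. The column names and the ``yes''/``no'' coding below are illustrative assumptions, not the actual MICS variable names; only the overall shape of the step (map answers to $0/1$, drop incomplete households) follows the text.

```python
import numpy as np
import pandas as pd

# Hypothetical column names standing in for the 34 possession questions.
ITEMS = ["air_conditioner", "animal_cart", "bank_account"]

def encode_households(df):
    """Return an (n_households, n_items) 0/1 array of complete responses."""
    answers = df[ITEMS].replace({"yes": 1, "no": 0})
    return answers.dropna().to_numpy(dtype=int)  # keep only complete households

# A tiny synthetic example with 3 of the 34 questions:
df = pd.DataFrame({
    "air_conditioner": ["yes", "no", "yes"],
    "animal_cart":     ["no",  "no", None],
    "bank_account":    ["yes", "yes", "yes"],
})
X = encode_households(df)  # the third household is dropped as incomplete
print(X.shape)             # (2, 3)
```

On the real data this would produce the $6147 \times 34$ binary array described below.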
The dataset will thus consist of 6147 binary vectors of length 34. Much of our analysis will have to do with overlaying the wealth score data onto the Mapper graph resulting from the questions above. The Serbia wealth score questions are broader than those, and, in addition to material possessions, they collect data on access to water, features of the dwelling, etc. The wealth score values of the $6147$ households who responded to all of the 34 questions above range from $-7.68$ to $1.40$ with a mean of magnitude $<0.001$, a standard deviation of $0.99$ and a skew of $-2.48$. \begin{figure}[h] \captionsetup{font=small} \centering \includegraphics[width=0.6\columnwidth]{images/wscore_density} \caption{Serbia wealth score distribution} \label{fig:wscore_dist} \end{figure} \begin{figure}[h] \captionsetup{font=small} \centering \includegraphics[width=0.6\columnwidth]{images/wscore_with_windex} \caption{A color-coded visualization of Serbia wealth scores.} \label{fig:wscore_scatter} \end{figure} As is evident from Figures \ref{fig:wscore_dist} and \ref{fig:wscore_scatter}, there is a long tail on the low end of the wealth score distribution; this can be seen in the skew of the distribution as well as by simply looking at the density plot. As mentioned in the Introduction, this is one of the features we believe TDA can give insight into. In Section \ref{item_types}, we discuss how TDA can help identify which possessions (or lack thereof) distinguish the households in the tail. We also discuss how households can be classified with TDA in Section \ref{types_of_households}. It should be noted that there is no corresponding tail on the high end of the distribution. It is possible that no such tail exists, but another explanation is that MICS survey questions do not differentiate between households that are moderately wealthy and households that are extremely wealthy. 
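For readers who wish to reproduce this kind of summary, a minimal sketch follows. The scores below are synthetic stand-ins drawn to mimic a bulk-plus-low-tail shape, not the actual Serbia wealth scores; the sketch only illustrates how the mean, standard deviation, and skew reported above would be computed.

```python
import numpy as np
from scipy.stats import skew

# Synthetic stand-in for the 6147 wealth scores: a large bulk plus a
# long low-end tail.  These are NOT the actual Serbia MICS values.
rng = np.random.default_rng(0)
scores = np.concatenate([
    rng.normal(0.3, 0.6, 5800),    # bulk of households
    rng.normal(-3.0, 1.2, 347),    # long tail at the low end
])

print(f"mean = {scores.mean():+.3f}")
print(f"std  = {scores.std(ddof=1):.3f}")
print(f"skew = {skew(scores):.3f}")  # strongly negative: long left tail
```

A strongly negative skew of this kind is exactly the signature of the low-end tail visible in the density plot.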
\subsection{Metrics}\label{S:Metrics} As explained in Section \ref{S:Mapper}, in order to apply the Mapper algorithm to the Serbia MICS dataset $X$, we first need to define the distance $d(x,y)$ between two households' response vectors $x=(x_1,...,x_{34})$ and $y=(y_1,...,y_{34})$. We want the distance function to satisfy two intuitive properties: \begin{enumerate} \item If $x$ and $z$ agree wherever $x$ and $y$ agree, then $x$ and $z$ must be at least as close as $x$ and $y$. More formally, if \begin{itemize} \item $A$ is the set of questions that households $x$ and $y$ agree on, \item $B$ is the set of questions that households $x$ and $z$ agree on, and \item $A\subseteq B$, \end{itemize} then $$ d(x,z)\leq d(x,y). $$ \item Uncommon similarities should be weighted more heavily. In other words, if many households own a television but fewer households own a car, two households both owning a car is a greater indicator of similarity than two households both owning a television. \end{enumerate} To satisfy these two conditions, we devise a new semimetric. We first compute, for each of the $34$ items, the proportion $p_i$ of households that answered yes to question $i$. This is hence the probability that a household answered yes to this question. Note that one could generate random response vectors from these probabilities by assuming independence among the different questions. We define the distance between two households to be zero if their responses are identical. Otherwise, we determine which questions the two households agree on and define the distance as the probability that two randomly generated response vectors would agree on these questions. In other words, if $x\neq y$, and $p=(p_1, ..., p_{34})$ is a vector of the proportions of households that own each of the $34$ items, we define \begin{align*} d\colon X\times X & \longrightarrow {\mathbb R} \\ (x,y) & \longmapsto d(x,y) = \prod_{1\leq i\leq 34 \,\colon\, x_i=y_i} \left( p_i^2 + (1-p_i)^2 \right).
\end{align*} It is clear from the definitions that this satisfies the conditions of a semimetric, i.e. \begin{itemize} \item $d(x,y)\geq 0$, \item $d(x,y)=0 \Longleftrightarrow x=y$, \item $d(x,y)=d(y,x)$. \end{itemize} However, this distance function is strictly a semimetric and not a metric, as it is possible that it violates the triangle inequality, i.e.~it is possible that $d(x,y)>d(x,z) +d(y,z)$ for some $x,y,z$. However, $d(x,y)$ does satisfy the two desirable conditions listed above: Since $p_i$ is a probability, $p_i\in [0,1]$, and therefore $p_i^2 + (1-p_i)^2 \in [0,1]$. Then if $x$ and $y$ agree on questions $A$ and $x$ and $z$ agree on questions $B$ and $A\subseteq B$, we have \begin{align*} d(x,z) &= \prod_{i\in B} \left( p_i^2+(1-p_i)^2 \right) = \prod_{i\in A} \left( p_i^2+(1-p_i)^2 \right) \cdot \prod_{i\in \bar{A}\cap B} \left( p_i^2+(1-p_i)^2 \right) \\ &= d(x,y) \cdot \prod_{i\in \bar{A}\cap B} \left( p_i^2+(1-p_i)^2 \right) \leq d(x,y) \end{align*} Further, $p_i^2+(1-p_i)^2$, which is the probability that two random households from the dataset agree on question $i$, is a parabola in $p_i$ with a minimum at $p_i = 0.5$. Thus, a rarer similarity contributes a smaller factor to the product $d(x,y)$, which reduces the dissimilarity between two households more than a more common similarity. Most of our analysis will use this probability-based semimetric (Section \ref{S:Probability}). For comparison, however, we also run the Mapper algorithm on the dataset $X$ with the standard Euclidean metric (Section \ref{S:Euclidean}). \subsection{Filter function}\label{S:Filter} For the filter function, we sum the components of each vector $x\in X$. This counts the number of items a household reported owning. Thus households that, at first glance, have a similar number of possessions are grouped together in the image. Then clusters within each open set in the image will be based on \textit{which} items a household owns rather than \textit{how many} items a household owns.
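A direct implementation of this semimetric and filter, on a toy dataset with three items standing in for the 34 questions (the data is illustrative, not taken from the survey), might look like:

```python
import numpy as np

def semimetric_factory(X):
    """Build d(x, y) from the ownership proportions of the dataset X."""
    p = X.mean(axis=0)              # p_i: proportion answering yes to question i
    agree_prob = p**2 + (1 - p)**2  # chance two random vectors agree on question i
    def d(x, y):
        if np.array_equal(x, y):
            return 0.0
        agree = (x == y)            # the questions the two households agree on
        return float(np.prod(agree_prob[agree]))
    return d

def filter_fn(x):
    return int(x.sum())             # number of items owned

# Toy data: 4 households, 3 items standing in for the 34 questions.
X = np.array([[1, 1, 0],
              [1, 1, 0],
              [1, 0, 0],
              [0, 1, 1]])
d = semimetric_factory(X)
print(d(X[0], X[1]))    # 0.0: identical responses
print(d(X[0], X[2]))    # product of agreement probabilities over shared answers
print(filter_fn(X[0]))  # 2
```

Note that each factor is at most $1$, so enlarging the set of agreements can only shrink the distance, which is exactly the first desired property.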
Below is a table summarizing how many households reported owning any given number of items. For example, this table says that 239 households reported owning 18 out of the 34 items surveyed or that 448 reported owning 27 items. \begin{center} \begin{tabular}{ |c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline $\mathbf{\sum_{i=1}^{34} x_i}$ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 \\ \hline {\bf Count} & 2 & 0 & 6 & 9 & 6 & 10 & 7 & 20 & 23 & 23 & 33 & 38 & 56 & 54 & 85 & 101 & 158 & 239 \\ \hline \end{tabular} \end{center} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline $\mathbf{\sum_{i=1}^{34} x_i}$ & 19 & 20 & 21 & 22 & 23 & 24 & 25 & 26 & 27 & 28 & 29 & 30 & 31 & 32 & 33 & 34 \\ \hline {\bf Count} & 303 & 371 & 473 & 561 & 607 & 710 & 670 & 616 & 448 & 276 & 131 & 71 & 30 & 8 & 1 & 1 \\ \hline \end{tabular} \end{center} To form the cover of the image of this filter function, we use $10$ intervals overlapping by $30\%$. Below is a table summarizing the cover of the image of the filter. The intervals are numbered 0-9. So for example, the first interval includes households that reported owning between one and five of the 34 items (inclusive), and there were 23 such households. It is important to keep in mind that the intervals overlap. \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline {\bf Interval Number} & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ \hline {\bf Interval Elements} & 1-5 & 4-8 & 7-11 & 11-14 & 14-18 & 17-21 & 21-24 & 24-28 & 27-31 & 30-34 \\ \hline {\bf Household Count} & 23 & 52 & 106 & 181 & 637 & 1544 & 2351 & 2720 & 956 & 111 \\ \hline \end{tabular} \end{center} \subsection{Clustering}\label{S:Clustering} We employ DBSCAN, a well-regarded general-purpose clustering algorithm. DBSCAN requires two parameters, minPts and $\epsilon$, and classifies any datapoint $x$ with at least minPts points within distance $\epsilon$ (including $x$ itself) as a \emph{core point}.
Any point $y$ within $\epsilon$ of $x$ is placed in the same cluster as $x$. If $y$ is itself a core point, any point within $\epsilon$ of $y$ is placed in the same cluster as $y$; this is applied recursively until all points are clustered, selecting a new $x$ whenever the edge of a cluster is reached. Any point that is not a core point and not in the cluster of a core point is classified as an outlier.\footnote{For more information on DBSCAN, see \url{https://scikit-learn.org/stable/modules/generated/sklearn.cluster.dbscan.html}.} The choice of minPts is often guided by $\ln(n)$, where $n$ is the number of observations in the dataset. In this case, $n=6147$, the number of households who answered all 34 questions, and so $\ln(n) \approx 8.7$. We choose minPts to be $10$, so we wish to choose $\epsilon$ such that having $10$ points in a ball of radius $\epsilon$ suggests a cluster. To select $\epsilon$, we plot the distance from each point in the dataset to its $9^{\text{th}}$ nearest neighbor, as shown in Figure \ref{fig:epsilon}. \begin{figure}[h] \captionsetup{font=small} \centering \includegraphics[width=0.6\textwidth]{images/9nn} \caption{The parameter $\epsilon$ is chosen to be at approximately the bottom of the elbow.} \label{fig:epsilon} \end{figure} This graph has a clear ``elbow.'' For distance values $d$ that are above the elbow, we expect that most points will have at least $9$ other points within $d$, yielding clusters that are too coarse. On the other hand, for values of $d$ that are below most of the graph, we expect very few points to have at least $10$ points within distance $d$; we would thus have very few core points and many points classified as outliers. We therefore choose $\epsilon$ to be at approximately the bottom of the elbow of the graph, i.e.~we choose $\epsilon = 10^{-4}$. For the Euclidean metric, we can repeat this process.
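The elbow heuristic just described is straightforward to reproduce. The sketch below runs it on synthetic $0/1$ data with the Euclidean distance for simplicity; the probability semimetric would instead be supplied to the neighbor search as a precomputed distance matrix.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Synthetic 0/1 data standing in for the household response vectors.
rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(200, 34)).astype(float)

# Distance from each point to its 9th nearest neighbour (column 0 of the
# kneighbors output is the point itself, at distance 0).
nn = NearestNeighbors(n_neighbors=10).fit(X)
dist, _ = nn.kneighbors(X)
ninth = np.sort(dist[:, 9])

# In practice one plots `ninth` and reads epsilon off the elbow; taking a
# low percentile is a rough programmatic stand-in for that visual choice.
epsilon = float(np.percentile(ninth, 10))
print(epsilon)
```

Sorting the ninth-nearest-neighbor distances and plotting them produces exactly the kind of curve shown in the figures, with $\epsilon$ read off near the bottom of the elbow.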
To select $\epsilon$, we once again plot the distance from each point in the dataset to its $9^{\text{th}}$ nearest neighbor, as shown in Figure \ref{fig:EuclideanNNPlot}. \begin{figure}[h] \captionsetup{font=small} \centering \includegraphics[width=0.6\textwidth]{images/9nneuclidean} \caption{The parameter $\epsilon$ is normally chosen to be at the bottom of the ``elbow.''} \label{fig:EuclideanNNPlot} \end{figure} It is harder to discern a clear elbow. Because the data are binary, coordinates are either $0$ or $1$, and so the Euclidean metric produces only a discrete set of possible distances between points. We choose $\epsilon = 1.5$. \section{Results}\label{S:Results} Before we look at the Mapper graphs, we make some notational conventions and initial observations. A node $n$ from a given Mapper graph will be denoted by $(i,c)$, where $i$ is the interval the households in $n$ fall into and $c$ is the cluster (within the interval $i$) that households from $n$ were placed in. Note that the labeling of $c$ is arbitrary, while the labels $i$ are in ascending order according to the number of items that a household reported possessing. For example, node (3,0) refers to the node containing households from interval 3 (possessing between 11 and 14 of the 34 items surveyed) that were clustered by DBSCAN into interval 3's cluster 0. Node (3,1) refers to the node containing households from interval 3 that were clustered into interval 3's cluster 1. Node (2,0) refers to the node containing households from interval 2 that were clustered by DBSCAN into interval 2's cluster 0. As explained in Section \ref{S:Mapper}, edges exist between nodes when the clusters represented by the nodes overlap. 
Because each interval overlaps at most one other interval on each side (as we are mapping to one-dimensional space), clusters can only overlap with other clusters that come from the intervals either directly above or below them (for example, clusters from interval $1$ can only overlap clusters from intervals $0$ or $2$). Therefore, a node can only have an edge between itself and a node representing a cluster from the interval above or below. Note that clusters within a given interval are, by definition, disjoint, and so a node cannot have an edge between itself and another node representing a cluster from the same interval. Thus, (3,0) and (3,1) are mutually exclusive by definition (that is, no household can be in both), but (2,0) is not necessarily mutually exclusive with either (3,0) or (3,1). \subsection{Probability-based semimetric}\label{S:Probability} Figure \ref{fig:SerbiaMICS} shows the Mapper graph generated using the semimetric described in Section \ref{S:Metrics} and the filter function from Section \ref{S:Filter}. Recall that each node represents a cluster of households that are contained in a single interval in the image. Each node is colored based on the average wealth score of the households in the cluster that it represents, with purple corresponding to low average wealth score and yellow corresponding to high average wealth score. The size of the node reflects the number of households represented in it. \begin{figure}[h] \captionsetup{font=small} \centering \includegraphics[width=1\textwidth]{images/wscore_new} \caption{Mapper graph for the Serbia 2014-15 MICS data generated using the probability semimetric. Each node is colored based on the average wealth score of the constituent households. Darker colors indicate lower average wealth scores. 
The shape of the graph is irrelevant; the graph should be regarded up to isomorphism.} \label{fig:SerbiaMICS} \end{figure} As an example, Figure \ref{fig:wscoredist} gives the distribution of wealth scores for households in node (5,0). Table \ref{table:statspernode} summarizes key statistics of each node. \begin{figure}[h] \captionsetup{font=small} \centering \includegraphics[width=0.8\textwidth]{images/nodedist} \caption{Distribution of the wealth score within node (5,0).} \label{fig:wscoredist} \end{figure} \begin{figure}[h] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline {\bf Node} & {\bf Average wscore} & {\bf wscore ${\mathbf \sigma}$} & {\bf wscore min} & {\bf wscore max} & {\bf Percent rural} & {\bf \# households}\\ \hline {\bf (0,0)}&$-5.747$&$0.937$&$-7.679$&$-4.145$&$0.652$&$23$\\ \hline {\bf (1,0)}&$-4.613$&$0.866$&$-7.441$&$-2.578$&$0.653$&$49$\\ \hline {\bf (2,0)}&$-3.467$&$0.642$&$-4.795$&$-2.087$&$0.574$&$54$\\ \hline {\bf (2,1)}&$-4.002$&$0.588$&$-4.83$&$-2.983$&$0.929$&$14$\\ \hline {\bf (3,0)}&$-1.834$&$0.673$&$-3.29$&$-0.732$&$0.553$&$38$\\ \hline {\bf (3,1)}&$-3.309$&$0.344$&$-3.984$&$-2.873$&$1.0$&$9$\\ \hline {\bf (4,0)}&$-0.842$&$0.665$&$-3.259$&$0.249$&$0.387$&$367$\\ \hline {\bf (4,1)}&$-0.836$&$0.43$&$-1.481$&$-0.194$&$0.333$&$9$\\ \hline {\bf (4,2)}&$-1.718$&$0.572$&$-3.08$&$-0.689$&$0.933$&$15$\\ \hline {\bf (5,0)}&$-0.405$&$0.573$&$-2.68$&$0.624$&$0.432$&$1053$\\ \hline {\bf (5,1)}&$0.401$&$0.299$&$-0.348$&$0.586$&$0.0$&$9$\\ \hline {\bf (5,2)}&$0.126$&$0.368$&$-0.576$&$0.53$&$0.067$&$15$\\ \hline {\bf (5,3)}&$0.105$&$0.388$&$-0.513$&$0.579$&$0.125$&$16$\\ \hline {\bf (5,4)}&$-0.075$&$0.372$&$-0.778$&$0.301$&$0.333$&$9$\\ \hline {\bf (6,0)}&$0.2$&$0.451$&$-1.582$&$0.927$&$0.336$&$1977$\\ \hline {\bf (7,0)}&$0.535$&$0.348$&$-0.988$&$1.141$&$0.349$&$2540$\\ \hline {\bf (8,0)}&$0.696$&$0.267$&$-0.207$&$1.236$&$0.468$&$920$\\ \hline {\bf (9,0)}&$0.842$&$0.241$&$0.179$&$1.402$&$0.709$&$103$\\ \hline \end{tabular} \end{center} 
\caption{Key statistics of each node. Here wscore denotes wealth score and $\sigma$ the standard deviation.} \label{table:statspernode} \end{figure} Note that the household counts across the nodes sum to $7220$ rather than $6147$. This is because some households appear in multiple nodes, due to the overlap of the intervals, while other households do not appear in any node because they were classified as outliers by DBSCAN. Tables \ref{table:itemspernode1} and \ref{table:itemspernode2} at the end of the article summarize how households in each node responded to each of the questions. \subsection{Euclidean metric}\label{S:Euclidean} Figure \ref{fig:EuclideanMapper} shows the Mapper graph generated using the Euclidean metric. As before, each node is colored based on the average wealth score of the households it represents. We see $9$ nodes, one for each interval except interval 1. This means that clustering using DBSCAN and the Euclidean metric only produced one cluster per interval. Note that this does not mean that every household in each interval was placed in a cluster. We can see from the lack of an edge between nodes (2,0) and (3,0) that the households in the intersection of intervals 2 and 3 were not in both of the respective clusters. We can also see from the lack of a node from interval 1 that no cluster was found in that interval. \begin{figure}[h] \captionsetup{font=small} \centering \includegraphics[width=0.6\textwidth]{images/euclideanmapper} \caption{Mapper graph for the Serbia 2014-15 MICS data generated using the Euclidean metric. Each node is colored based on the average wealth score of the constituent households. Darker colors again indicate lower average wealth scores. 
The distance between the components and the shape of the graph are irrelevant.} \label{fig:EuclideanMapper} \end{figure} The fact that there is only one node for each interval means that the clustering disregards many of the interesting properties that we see in the graph based on the probability-based semimetric, such as the loop and flares off the trunk of the graph. The absence of these features makes the Euclidean graph less useful in terms of gaining insight into the relationship between possessions and the standard of living. In fact, one of the reasons we are considering the Euclidean metric-based graph is to illustrate the contrast with the probability-based semimetric and provide validation for using this more interesting distance function. Another key feature to note is that while the graph in Figure \ref{fig:SerbiaMICS} is connected, the one in Figure \ref{fig:EuclideanMapper} is not. \section{Discussion}\label{S:Discussion} \subsection{Overlaying information onto the Mapper}\label{S:Overlay} In this section, we will consider other information from the MICS survey and overlay it on the Mapper graph from Figure \ref{fig:SerbiaMICS} by coloring the nodes accordingly. This will lead to some insights about the relationship between the wealth score and living in urban or rural areas, as well as between the wealth score and ownership of certain types of items. \subsubsection{Wealth score and urban/rural living}\label{S:Urban/rural} Figure \ref{fig:urbanrural} shows three graphs in which the underlying graph is the Mapper from Figure \ref{fig:SerbiaMICS}, but the nodes are colored according to the answers to particular questions. In graph (A), the nodes are colored based on the percentage of households that were classified as rural by the MICS study. Darker shades of purple indicate a lower proportion of urban households, while lighter shades of yellow indicate a higher proportion of urban households. 
The nodes in graphs (B) and (C) are colored based on land ownership and tractor ownership, two items closely related to rural farming. Darker shades of purple correspond to a lower percentage of households reporting owning the item in question, and lighter shades of yellow correspond to a higher percentage. \begin{figure}[h] \captionsetup{font=small} \centering \subfloat[Rural households]{% \includegraphics[clip,width=0.6\columnwidth]{images/rural}% } \subfloat[Land ownership]{% \includegraphics[clip,width=0.6\columnwidth]{images/land}% } \subfloat[Tractor ownership]{% \includegraphics[clip,width=0.6\columnwidth]{images/tractor}% } \caption{Graphs illustrating the relationship between wealth and rural lifestyle.} \label{fig:urbanrural} \end{figure} It is commonly believed that people living in rural areas are, on average, less well-off than people living in urban areas. Figure \ref{fig:urbanrural}, for the most part, substantiates this belief. We see that the lowest-wealth nodes (referring to the colors in Figure \ref{fig:SerbiaMICS}) are among the lightest on the graph of rural households (Figure \ref{fig:urbanrural}(A)), and that nodes get progressively darker as we move towards the higher-wealth end of the graph. This is mirrored in the data on land ownership. While one might think that owning land is a sign of wealth, households that own land are significantly less wealthy than households that do not. (If we test this hypothesis, we get a $p$-value $<2.2\times 10^{-16}$. The $95\%$ confidence interval for the difference in wealth score between land-owning and non-land-owning households is $( -0.44, -0.38)$.) This can be explained, however, if living in a rural area mediates the relationship between land ownership and wealth (in other words, if owning land is correlated with living in a rural area and living in a rural area is correlated with lower wealth, that would explain the initially unintuitive fact that owning land is correlated with lower wealth). 
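A comparison of this kind can be sketched with a Welch-type (unpooled-variance) interval for the difference in means. The code below is illustrative only: the two lists are hypothetical stand-ins for the actual wealth scores split by the land-ownership answer, and the interval uses a normal approximation rather than the exact test behind the quoted $p$-value.

```python
import math

def mean_diff_ci(a, b, z=1.96):
    """Point estimate and normal-approximation 95% confidence
    interval for mean(a) - mean(b), using Welch's (unpooled)
    standard error."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se = math.sqrt(va / na + vb / nb)
    diff = ma - mb
    return diff, (diff - z * se, diff + z * se)

# Hypothetical stand-ins for wealth scores split by land ownership.
owners = [-1.2, -0.8, -1.5, -0.9, -1.1, -0.7]
non_owners = [-0.3, 0.1, -0.5, 0.2, -0.1, 0.0]
diff, (low, high) = mean_diff_ci(owners, non_owners)
# An entirely negative interval, as in the text, indicates that
# land-owning households are less wealthy on average.
```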
There is, however, one notable exception: The last node on the right, which has the highest average wealth score, has (proportionately) more rural households than the several nodes to the left of it. These households are also much more likely to own land. One possibility is that, while city-dwellers are wealthier on average, the wealthiest individuals are, in fact, in agriculture. The spike in tractor ownership in the last node supports this interpretation, since tractors are an agricultural tool. Another possible interpretation is that people, upon becoming wealthy, choose to move out of cities and into rural areas. This would not, however, fully explain why even the wealthiest urban people report owning land that can be used for agriculture, nor why many own tractors. \subsubsection{Wealth score and ownership of items} \label{item_types} We next investigate the relationship between certain possessions and overall wealth score by coloring the graph from Figure \ref{fig:SerbiaMICS} by the percentage of households that own a given item. Figure \ref{fig:possessions} shows nine graphs, divided into three categories. As before, darker shades of purple correspond to a lower percentage of households reporting owning the item in question, and lighter shades of yellow correspond to a higher percentage. 
\begin{figure}[h] \captionsetup{font=small} \centering \begin{minipage}{.33\textwidth} \begin{figure}[H] \centering \subfloat[Electricity]{% \includegraphics[clip,width=0.9\columnwidth]{images/electricity}% } \subfloat[Bed]{% \includegraphics[clip,width=0.9\columnwidth]{images/bed}% } \subfloat[Table with chairs]{% \includegraphics[clip,width=0.9\columnwidth]{images/table}% } \caption*{Essentials} \end{figure} \end{minipage}% \begin{minipage}{.33\textwidth} \begin{figure}[H] \centering \subfloat[Car]{% \includegraphics[clip,width=0.9\columnwidth]{images/car}% } \subfloat[Cell phone]{% \includegraphics[clip,width=0.9\columnwidth]{images/mobile}% } \subfloat[Computer]{% \includegraphics[clip,width=0.9\columnwidth]{images/computer}% } \caption*{$\longrightarrow$ } \end{figure} \end{minipage}% \begin{minipage}{.33\textwidth} \begin{figure}[H] \centering \subfloat[Dishwasher]{% \includegraphics[clip,width=0.9\columnwidth]{images/dishwasher}% } \subfloat[Motorcycle]{% \includegraphics[clip,width=0.9\columnwidth]{images/motorcycle}% } \subfloat[Drying machine]{% \includegraphics[clip,width=0.9\columnwidth]{images/drying_machine}% } \caption*{Luxuries} \end{figure} \end{minipage} \caption{Mapper graphs of various possessions. The arrow $\longrightarrow$ indicates a progression from items that are considered more essential to those that are considered luxuries.} \label{fig:possessions} \end{figure} These graphs show three main classes of possessions. There are ``essential'' possessions, characterized by near-100\% ownership by all but the poorest of households. There are ``luxury'' items, characterized by near-0\% ownership by all but the richest of households and relatively low rates of ownership by even those households. And then there are the ``middle-class amenities,'' characterized by very low rates of ownership among the poorest households and a rather sudden jump to high rates of ownership around node (5,0) or (6,0). 
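One way to make this three-way classification operational is a small heuristic over an item's ownership proportions along the trunk nodes $(0,0), (1,0), (2,0), (3,0), (4,0), (5,0), (6,0), (7,0), (8,0), (9,0)$. The thresholds below are illustrative assumptions, not values derived in this study; the sample curves are taken from Tables \ref{table:itemspernode1} and \ref{table:itemspernode2}.

```python
def classify_item(props, hi=0.9, lo=0.5, uptake=0.3):
    """Heuristic label for an ownership curve ordered from the
    poorest trunk node to the richest. Thresholds are assumptions:
      essential -- near-universal except among the poorest nodes;
      luxury    -- uncommon everywhere, but with real uptake at the
                   rich end (distinguishing it from mere rarity);
      middle    -- rare among the poor, common by the upper nodes.
    """
    mid = len(props) // 2
    if all(p >= hi for p in props[2:]):
        return "essential"
    if all(p < lo for p in props[:-2]) and max(props) < hi and props[-1] >= uptake:
        return "luxury"
    if props[0] < lo and max(props[mid:]) >= hi:
        return "middle"
    return "unclassified"

# Trunk-node proportions from the tables at the end of the article.
bed        = [0.826, 0.959, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
car        = [0.0, 0.0, 0.0, 0.0, 0.049, 0.263, 0.7, 0.925, 0.992, 1.0]
dishwasher = [0.0, 0.0, 0.0, 0.0, 0.005, 0.021, 0.117, 0.353, 0.554, 0.699]
cart       = [0.043, 0.02, 0.0, 0.0, 0.008, 0.005, 0.012, 0.014, 0.026, 0.068]
# bed -> "essential", car -> "middle", dishwasher -> "luxury",
# cart -> "unclassified" (low everywhere, so not a luxury item).
```

The `uptake` guard encodes the caveat discussed below for the animal-drawn cart: an item that is rare even among the wealthiest nodes is not treated as a luxury.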
Note that this classification of essential and luxury items is not the same as simply saying that the least-owned objects are luxury items. For example, the animal-drawn cart is the single least-owned item, but it is not a luxury item. The ownership of this item is simply consistently low across nodes with varying average wealth scores. This classification method can be applied to all of the MICS questions about possessions, and reveals perhaps unintuitive categorizations for certain items. For example, Figure \ref{fig:TVmicrowave} shows two more colorings, this time by TV ownership and by microwave ownership. \begin{figure}[h] \captionsetup{font=small} \centering \subfloat[Graph of TV Ownership]{% \includegraphics[clip,width=0.6\columnwidth]{images/tv}% } \subfloat[Graph of Microwave Ownership]{% \includegraphics[clip,width=0.6\columnwidth]{images/microwave}% } \caption{Mapper graphs of TV and microwave ownership.} \label{fig:TVmicrowave} \end{figure} The graph of TV ownership most closely resembles that of an ``essential'' item, even though popular sentiment might not classify it as such (for example, people receiving government aid who purchase televisions might be criticized as being irresponsible with their money). On the other hand, the graph for microwave ownership most closely resembles that of a ``luxury'' item, although many people in wealthy countries consider it essential. The interpretation that people are spending their money unwisely is difficult to justify because a microwave is less expensive than a TV. In other words, a person or household that is able to purchase a TV is also able to purchase a microwave, so the fact that people tend to choose TVs over microwaves suggests that a television has a greater positive effect on happiness than a microwave. 
This also reminds us that cultural norms are not universally applicable; while someone in the United States might consider a microwave an essential tool for cooking, this appliance may not be as desirable in another country due to a difference in the style of food preparation. \subsubsection{Wealth score and types of households} \label{types_of_households} Nodes (0,0) and (1,0) are the poorest households. In these nodes, ownership of essential items is not universal. Though two-thirds of these households are classified as rural, many do not own their home and few own land. Nodes (2,1) and (3,1) exemplify the category of the ``rural poor.'' These households all own land and most ($85.7\%$ in node (2,1) and $100\%$ in node (3,1)) own animals as well. They mostly own the essential items but very few, if any, middle-class amenities. Middle-class and upper-class households can be classified based on item types. Specifically, node (7,0) is the first node at which many luxury items such as air conditioners and microwaves appear in the majority of households, and so we can consider nodes (7,0), (8,0), and (9,0) to consist of upper-class households. Similarly, node (4,0) is where many of the middle-class amenities start to appear in the majority of households, and so we can consider nodes (4,0), (5,0), and (6,0) to consist of middle-class households. It should be noted that designations such as ``middle-class,'' ``upper-class,'' or ``poor'' are imprecise and overlapping even with full information about a household's situation. Indeed, (3,1) and (4,0), as well as (6,0) and (7,0), have some overlap since there is an edge between them. \subsection{Graph-theoretic properties of the probability-based Mapper}\label{S:Graph} In this section, we analyze some graph-theoretic properties of the Mapper from Figure \ref{fig:SerbiaMICS} such as existence of interesting paths and flares. 
\subsubsection{Paths}\label{S:Paths} Item classification is reflected in the paths of Figure \ref{fig:SerbiaMICS}. This can be seen by examining how item ownership changes as we move along the path. Figure \ref{fig:path1} shows the proportion of households in each node along the top path from node (0,0) to node (9,0) that own three different items/amenities: electricity (essential), a dishwasher (luxury), and a car (somewhere in between). This provides a different graphical representation of the information in the Mapper. \begin{figure}[h] \captionsetup{font=small} \centering \includegraphics[width=0.8\textwidth]{images/path1plot} \caption{Graph of item ownership across different nodes in the path from node (0,0) to node (9,0) along nodes (2,0) and (3,0).} \label{fig:path1} \end{figure} The word ``path'' here has a simple graph-theoretic meaning -- it is a sequence of edges that connect one node to another -- and should not be understood as a path of economic mobility. While an argument could be made that some of these objects generate wealth (having a car might enable the commute to a higher-paying job), in most cases it is more likely that the item is a reflection of wealth (having a television requires a certain amount of wealth, but it does not make its owner appreciably wealthier). The merit of the analysis in Figure \ref{fig:path1} lies in providing a more precise picture of relative priorities. This graph shows that ownership of cars does not emerge until ownership of electricity is at nearly 100\%. On the other hand, while dishwasher ownership spikes later and a bit less steeply, we begin to see dishwasher ownership before car ownership reaches 100\%. From this, we can infer that people prefer electricity to cars and cars to dishwashers, but that the latter preference is not as strong or uniform. 
Analyzing the differences in the path that goes through nodes (2,0) and (3,0) and the path that goes through nodes (2,1) and (3,1) also highlights the urban/rural wealth difference. We see ownership of cell phones and televisions rise more quickly along the former path; this is discernible even from the Mapper graphs in Figure \ref{fig:possessions}. The difference in bed and electricity ownership is far less pronounced, and not immediately discernible from the Mapper graph's colors. This suggests that while the urban/rural wealth difference exists, in these households it manifests in items that are less essential than the life-critical ones. \subsubsection{Flares} \label{flares} We will not define precisely what we mean by a ``flare'' (one could do it in terms of node adjacencies and degrees) since this is clear from the graph in Figure \ref{fig:SerbiaMICS}: Nodes (4,1) and (4,2) flare off from node (5,0), and nodes (5,1), (5,2), (5,3), and (5,4) flare off from node (6,0). It should be noted that these flares contain very few households --- node (5,3) contains the most, 16, and nodes (4,1), (5,1), and (5,4) contain the least, 9 each. Recall that nodes are generated by clustering households within each interval of the image of the filter function. As these flares are quite small relative to the number of households in the nodes that form the trunk of the graph, we can view them as households that are uncommon among those who answered in the affirmative to a similar number of questions. Node (4,2) looks very rural, with 100\% rates of land ownership and animal ownership. At first, it looks as though it might fall into the category of the rural poor. Its average wealth score is $-1.718$, compared to $-0.842$ for node (4,0) and $-0.836$ for node (4,1), continuing the trend of rural households having lower wealth than urban households in the same interval. 
However, the average wealth score in node (4,2) is about 1.5 standard deviations higher than the average wealth score in node (3,1); a wealth score of $-3.309$ is in the tail end of the wealth score distribution, whereas a wealth score of $-1.718$ is on the border. Households in node (4,2) have a slew of amenities (such as electric stoves, irons, radios, vacuum cleaners, and washing machines) that are much less common among households in nodes (2,1) and (3,1). Further, a majority ($53.3\%$) of the households in node (4,2) have a bank account, compared to none of the households in nodes (2,1) or (3,1). Thus, the (admittedly very few) households in node (4,2) might serve as a target for rural standard of living. Node (4,1), on the other hand, has 0\% land and animal ownership and an average wealth score similar to that of node (4,0). Instead, what sets node (4,1) apart from (4,0) and (4,2) is 100\% bike ownership (compared to $12.8\%$ for node (4,0), $0\%$ for node (4,2), and $33.7\%$ for node (5,0)), 100\% watch ownership (compared to $37.6\%$ in node (4,0), $0\%$ in node (4,2), and $57.6\%$ in node (5,0)), higher rates of mobile phone ownership, lower rates of non-mobile phone ownership, and drastically lower rates of freezer ownership. We also see cable TV ownership at about half the rate of node (4,0) and one-third the rate of node (5,0). In broad terms, this looks like prioritization of items that are brought on one's person when going out every day over amenities that stay at home. This could in turn suggest a group of people who are prioritizing keeping up appearances. However, making this precise is difficult due to the small sample size. What we can infer, however, is that the overwhelming majority of households in interval 4 clustered into node (4,0), not node (4,1), and so most people's purchasing preferences do prioritize household amenities. 
This is useful information in terms of understanding how to most efficiently raise the standard of living -- it suggests that for the majority of people, household amenities are more important than the gadgets that are carried on one's person. Nodes (5,1)-(5,4) all have higher average wealth scores than node (5,0) and are much less rural. All demonstrate 100\% laptop/computer ownership, compared to $18.8\%$ for node (5,0) and $73.8\%$ for node (6,0). Nodes (5,1), (5,2), and (5,3) also have much higher rates of TV ownership than node (5,0). Node (5,1) in particular has an average wealth score higher than those of nodes (5,0), (5,2), (5,3), (5,4), and (6,0). We see no radios or bicycles and only one watch from the nine households, but universal ownership of amenities such as air conditioning, a luxury item, as well as cable TV and internet. As with node (4,1), the small number of households in node (5,1) makes inference difficult. We can, however, say that the fact that these households clustered separately from the other households in node (5,0) confirms the luxury status of these amenities, since it indicates that households in interval 5 that have these amenities are ``far out'' from the main cluster of that interval. \section{Conclusions}\label{S:Conclusions} At the most general level, we have demonstrated that TDA can be used to investigate the shape of Boolean data. In Euclidean space, the distance between two Boolean vectors of length 34 can take on only 35 different values, which poses a challenge when attempting to investigate shape. By applying the probability-based semimetric, we allow the distances between two points to take on a much wider range of values while retaining the meaning of distance; this opens the door for clustering and TDA analyses to be performed. We have further demonstrated that using different overlay colors on the same Mapper graph can illuminate relationships between different variables. 
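The count of 35 values follows because the Euclidean distance between two binary vectors is the square root of the number of coordinates on which they differ (their Hamming distance), which ranges over $0, 1, \ldots, 34$. The claim can be checked directly; the snippet below does so at a smaller dimension purely for illustration.

```python
import itertools
import math

n = 6  # small stand-in for the 34 survey questions
vectors = list(itertools.product([0, 1], repeat=n))
distances = {
    math.dist(u, v) for u, v in itertools.combinations(vectors, 2)
}
# Nonzero distances are exactly sqrt(k) for Hamming distances
# k = 1..n, so only n distinct nonzero values occur (n + 1 values
# counting zero); for n = 34 that gives the 35 values in the text.
```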
Because the Mapper performs local-to-global information extraction, looking at the same graph with different overlay colors allows us to see both overall trends and deviations from the overall trend clearly, without noise obscuring either. Both the ability to analyze Boolean data and the application of overlay colors expand the usefulness of TDA as a hypothesis-generating method. More specifically to the MICS data at hand, overlaying items on top of the Mapper graph and categorizing them accordingly can help guide which items or amenities ought to be prioritized for a possible program that would supply them to individuals. Our ``essential'' items are defined by the fact that households with limited means chose to purchase them over middle-class amenities and luxury items. This pattern of purchasing would suggest that these items give higher marginal utility per unit cost, and that, given limited resources, providing these essential items to households might increase the utility of the population as a whole in an efficient way. Household categorization could similarly be used to determine eligibility for certain types of aid. Governments or organizations could use our Mapper method to categorize households in a more nuanced way than simply stratifying based on income or assets. Such categorizations could help direct specific aid (for example, providing households with electricity or a table) to the households that would most benefit from it. This is valuable because more highly targeted aid may be cheaper (as it is provided to fewer households) and more politically feasible. \section{Future work}\label{S:Future} \subsection{Other MICS survey data}\label{S:OtherData} The MICS study collects much more data than the 34 questions we use for our analysis. There are other questions relating to standard of living that do not elicit binary answers (for example, questions on roof composition and primary water source). 
Beyond this, there are questions on education level, vaccinations, access to health care, child labor, and more. These questions could be incorporated into the Mapper graph or overlaid on it as was done in Section \ref{S:Overlay}. This could, for example, reveal which households are more or less likely to vaccinate their children and guide the development of targeted policy to increase vaccination rates. However, in order to apply our probability-based metric to data with more than two answers, we must expand the notion of what it means for two households to agree on a given item. One straightforward approach that could work well with unordered categorical data would be to say that two households agree if they gave the same answer, and do not agree if they gave different answers. One could pre-compute the frequency $p_i$ of each response and then use either the probability that two households both gave a particular response (i.e.~$p_i^2$) or the probability that two households agree at all (i.e.~$p_1^2 + p_2^2 + \cdots + p_n^2$). This approach would not work, however, for continuous data, and may be inappropriate for discrete ordered data. In this case, it might be better to create a distribution of the responses to a given question and use the probability that two randomly chosen households gave responses as close as or closer than those of the two households in question. We could also use an \emph{isolation forest} to construct a lens for our data. Isolation forests quantify how ``unusual'' a datapoint is. If, for example, the anomalous households were primarily high wealth, that would suggest that there is a high tail in the distribution of wealth that the wealth score might not be capturing. If non-anomalous households form multiple clusters, this might indicate meaningful gaps between the lower, middle, and upper class. If they form a single cluster, that would suggest that there are households along the entire spectrum of socioeconomic status. 
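The two candidate probabilities for categorical responses can be sketched concretely. The function names and the toy ``water source'' column below are illustrative inventions, not part of the MICS questionnaire coding.

```python
from collections import Counter

def response_frequencies(column):
    """Empirical frequency p_i of each response to a single question."""
    n = len(column)
    return {resp: count / n for resp, count in Counter(column).items()}

def p_both_give(freqs, response):
    """Probability two random households both give `response`: p_i^2."""
    return freqs.get(response, 0.0) ** 2

def p_agree(freqs):
    """Probability two random households agree: p_1^2 + ... + p_n^2."""
    return sum(p * p for p in freqs.values())

# Toy categorical column (hypothetical primary-water-source answers).
column = ["piped", "piped", "well", "piped", "spring", "well"]
freqs = response_frequencies(column)
# p("piped") = 1/2, so p_both_give(freqs, "piped") = 1/4;
# p_agree(freqs) = (1/2)^2 + (1/3)^2 + (1/6)^2 = 14/36.
```

Either quantity reduces to the binary case when a question has exactly two responses, which is one reason this extension is natural.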
\subsection{Other countries} As mentioned in the Introduction, we chose to work with the Serbia data because one of the authors participated in running that MICS survey. However, UNICEF conducts MICS surveys in many different countries. Performing our analysis on data from different countries would allow one to draw region-specific insights, studying how people in different countries or regions prioritize their spending, and evaluating the validity and importance of the wealth score in policy-making decisions. It would also allow for generalizations of some insights to the global population if the same patterns emerged in various places around the world. \subsection{Other TDA methods} The Mapper algorithm is one of the two main topological data analysis tools. The other is \emph{persistent homology} \cite{Carlsson:TDA, Carlsson:TDAHomotopy}, which regards the data cloud as a topological space and then tries to understand its topologically important features, namely those that are unchanged by continuous deformations. These include connectedness and existence of holes or ``voids'' of various dimensions. The data cloud is first endowed with a metric (Euclidean, correlation, or anything else) that turns the data cloud into a space, and then one applies \emph{homology}, a standard algebraic tool in topology, to study it. The homology is calculated at different scales, and the features that ``persist'' at various levels are considered topologically meaningful and provide insight into the data cloud. Persistent homology has been used in a variety of settings (for an overview, see \cite{OPTGH:TDAOverview}) and is applicable to the setting of the MICS data. If this analysis, for example, exhibits the data cloud as a number of disconnected components, this might reflect disparities in wealth according to conditions or parameters that are otherwise not easily discernible by the usual statistical methods. 
If the data shows a presence of holes, this might indicate combinations of possessions that are not represented in the data. Identifying why any such combination did not appear (are the items redundant, is one useful only in a rural setting and another only in an urban one, etc.) could illuminate something about the landscape of consumer demand. \newpage \begin{figure}[p] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline & {\bf (0,0)} & {\bf (1,0)} & {\bf (2,0)} & {\bf (2,1)} & {\bf (3,0)} & {\bf (3,1)} & {\bf (4,0)} & {\bf (4,1)} & {\bf (4,2)}\\ \hline {\bf Air conditioner}&0.0&0.0&0.0&0.0&0.0&0.0&0.022&0.0&0.0\\ \hline {\bf Animal-drawn cart}&0.043&0.02&0.0&0.0&0.0&0.0&0.008&0.0&0.0\\ \hline {\bf Bank account}&0.13&0.102&0.204&0.0&0.237&0.0&0.504&0.889&0.533\\ \hline {\bf Bed}&0.826&0.959&1.0&1.0&1.0&1.0&1.0&1.0&1.0\\ \hline {\bf Bicycle}&0.087&0.143&0.056&0.0&0.0&0.0&0.128&1.0&0.0\\ \hline {\bf Cable TV}&0.0&0.0&0.0&0.0&0.0&0.0&0.229&0.111&0.0\\ \hline {\bf Car}&0.0&0.0&0.0&0.0&0.0&0.0&0.049&0.0&0.0\\ \hline {\bf Dishwasher}&0.0&0.0&0.0&0.0&0.0&0.0&0.005&0.0&0.0\\ \hline {\bf Drying machine}&0.0&0.0&0.0&0.0&0.0&0.0&0.003&0.0&0.0\\ \hline {\bf Electric stove}&0.0&0.163&0.463&0.143&0.816&0.333&0.954&0.889&0.867\\ \hline {\bf Electricity}&0.435&0.694&0.944&0.929&0.974&1.0&1.0&1.0&0.933\\ \hline {\bf Freezer}&0.0&0.02&0.222&0.5&0.658&0.778&0.708&0.111&1.0\\ \hline {\bf Fridge}&0.087&0.347&0.685&0.643&0.974&1.0&0.995&1.0&1.0\\ \hline {\bf Hair dryer}&0.0&0.041&0.13&0.0&0.263&0.0&0.76&0.556&0.2\\ \hline {\bf Internet}&0.0&0.0&0.0&0.0&0.0&0.0&0.0&0.0&0.0\\ \hline {\bf Iron}&0.0&0.102&0.259&0.0&0.658&0.222&0.94&0.889&0.6\\ \hline {\bf Microwave}&0.0&0.0&0.0&0.0&0.0&0.0&0.0&0.0&0.0\\ \hline {\bf Mobile phone}&0.348&0.429&0.574&0.214&0.342&0.111&0.597&1.0&0.667\\ \hline {\bf Motorcycle or scooter}&0.0&0.0&0.019&0.0&0.0&0.0&0.014&0.111&0.0\\ \hline {\bf Non-mobile phone}&0.0&0.082&0.204&0.5&0.711&0.778&0.834&0.333&0.733\\ \hline {\bf Owns 
dwelling}&0.391&0.531&0.667&0.929&0.895&0.889&0.828&0.889&0.933\\ \hline {\bf Owns land}&0.087&0.163&0.037&1.0&0.211&1.0&0.218&0.0&1.0\\ \hline {\bf Owns animals}&0.0&0.041&0.056&0.857&0.079&1.0&0.144&0.0&1.0\\ \hline {\bf PC/laptop}&0.0&0.0&0.019&0.0&0.026&0.0&0.025&0.0&0.0\\ \hline {\bf Radio}&0.043&0.184&0.259&0.143&0.105&0.222&0.643&0.778&1.0\\ \hline {\bf Table with chairs}&0.522&0.796&0.815&1.0&0.947&1.0&0.978&1.0&1.0\\ \hline {\bf Television}&0.261&0.592&0.926&0.786&0.947&1.0&0.986&1.0&1.0\\ \hline {\bf Tractor}&0.0&0.0&0.0&0.071&0.0&0.111&0.038&0.0&0.267\\ \hline {\bf Truck}&0.0&0.0&0.019&0.0&0.0&0.0&0.005&0.0&0.0\\ \hline {\bf Vacuum cleaner}&0.0&0.0&0.074&0.0&0.421&0.111&0.905&0.889&0.733\\ \hline {\bf Wardrobe}&0.435&0.735&0.87&0.857&0.974&0.889&0.995&1.0&1.0\\ \hline {\bf Washing machine}&0.0&0.02&0.148&0.071&0.789&0.111&0.847&1.0&0.667\\ \hline {\bf Watch}&0.043&0.163&0.037&0.143&0.026&0.111&0.376&1.0&0.0\\ \hline {\bf Water heater}&0.0&0.02&0.333&0.143&0.842&0.222&0.918&1.0&0.8\\ \hline \end{tabular} \end{center} \caption{Proportion of households in nodes (0,0)-(4,2) answering each of the 34 questions in the affirmative.} \label{table:itemspernode1} \end{figure} \newpage \begin{figure}[p] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline & {\bf (5,0)} & {\bf (5,1)} & {\bf (5,2)} & {\bf (5,3)} & {\bf (5,4)} & {\bf (6,0)} & {\bf (7,0)} & {\bf (8,0)} & {\bf (9,0)}\\ \hline {\bf Air conditioner}&0.052&1.0&0.067&0.125&0.0&0.31&0.565&0.71&0.816\\ \hline {\bf Animal-drawn cart}&0.005&0.0&0.0&0.0&0.0&0.012&0.014&0.026&0.068\\ \hline {\bf Bank account}&0.749&0.889&0.733&1.0&0.556&0.907&0.964&0.978&0.981\\ \hline {\bf Bed}&1.0&1.0&1.0&1.0&1.0&1.0&1.0&1.0&1.0\\ \hline {\bf Bicycle}&0.337&0.0&0.067&0.062&0.222&0.526&0.782&0.905&0.951\\ \hline {\bf Cable TV}&0.375&1.0&0.867&1.0&0.0&0.701&0.829&0.893&0.893\\ \hline {\bf Car}&0.263&0.222&0.067&0.875&1.0&0.7&0.925&0.992&1.0\\ \hline {\bf Dishwasher}&0.021&0.0&0.0&0.062&0.0&0.117&0.353&0.554&0.699\\ \hline 
{\bf Drying machine}&0.003&0.0&0.067&0.0&0.0&0.03&0.14&0.287&0.495\\ \hline {\bf Electric stove}&0.961&1.0&1.0&1.0&1.0&0.983&0.994&0.997&1.0\\ \hline {\bf Electricity}&0.998&1.0&1.0&1.0&1.0&0.999&1.0&0.999&1.0\\ \hline {\bf Freezer}&0.828&1.0&0.733&0.375&1.0&0.835&0.913&0.979&1.0\\ \hline {\bf Fridge}&0.999&1.0&1.0&1.0&1.0&1.0&1.0&1.0&1.0\\ \hline {\bf Hair dryer}&0.895&0.889&1.0&0.938&1.0&0.977&0.994&1.0&1.0\\ \hline {\bf Internet}&0.132&1.0&1.0&1.0&1.0&0.668&0.911&0.974&1.0\\ \hline {\bf Iron}&0.977&1.0&1.0&0.938&1.0&0.998&0.999&1.0&1.0\\ \hline {\bf Microwave}&0.066&0.0&0.067&0.125&0.0&0.324&0.62&0.829&0.971\\ \hline {\bf Mobile phone}&0.817&1.0&1.0&1.0&1.0&0.979&0.998&1.0&1.0\\ \hline {\bf Motorcycle or scooter}&0.017&0.0&0.0&0.0&0.0&0.055&0.16&0.366&0.728\\ \hline {\bf Non-mobile phone}&0.877&1.0&0.867&0.562&0.889&0.918&0.965&0.99&0.99\\ \hline {\bf Owns dwelling}&0.879&0.444&0.8&0.312&1.0&0.788&0.894&0.967&0.99\\ \hline {\bf Owns land}&0.346&0.0&0.0&0.0&0.0&0.313&0.44&0.679&0.99\\ \hline {\bf Owns animals}&0.231&0.0&0.0&0.0&0.0&0.196&0.271&0.458&0.825\\ \hline {\bf PC/laptop}&0.188&1.0&1.0&1.0&1.0&0.738&0.957&0.997&1.0\\ \hline {\bf Radio}&0.814&0.0&0.0&0.0&0.0&0.812&0.891&0.932&0.971\\ \hline {\bf Table with chairs}&0.997&1.0&0.933&1.0&0.889&0.998&1.0&1.0&1.0\\ \hline {\bf Television}&0.995&1.0&1.0&1.0&1.0&0.999&0.999&0.999&1.0\\ \hline {\bf Tractor}&0.092&0.0&0.0&0.0&0.0&0.12&0.2&0.401&0.883\\ \hline {\bf Truck}&0.004&0.0&0.0&0.0&0.0&0.009&0.029&0.085&0.291\\ \hline {\bf Vacuum cleaner}&0.955&1.0&1.0&1.0&1.0&0.993&0.996&0.999&1.0\\ \hline {\bf Wardrobe}&0.998&1.0&1.0&1.0&1.0&0.998&1.0&1.0&1.0\\ \hline {\bf Washing machine}&0.947&1.0&0.933&0.938&1.0&0.992&0.996&0.997&1.0\\ \hline {\bf Watch}&0.575&0.111&0.933&0.062&1.0&0.715&0.849&0.928&0.971\\ \hline {\bf Water heater}&0.97&0.889&1.0&1.0&1.0&0.988&0.993&0.996&0.99\\ \hline \end{tabular} \end{center} \caption{Proportion of households in nodes (5,0)-(9,0) answering each of the 34 questions in the 
affirmative.} \label{table:itemspernode2} \end{figure} \clearpage \bibliographystyle{amsplain}
\section{Introduction}\label{s:intro} In this seventh paper in the series of the discovery of isotopes, the discovery of the iron isotopes is discussed. Previously, the discovery of cerium \cite{Gin09}, arsenic \cite{Sho09}, gold \cite{Sch09}, tungsten \cite{Fri09}, krypton \cite{Hei09} and einsteinium \cite{Bur09} isotopes was discussed. The purpose of this series is to document and summarize the discovery of the isotopes. Guidelines for assigning credit for discovery are (1) clear identification, either through decay curves and relationships to other known isotopes, particle or $\gamma$-ray spectra, or unique mass and Z-identification, and (2) publication of the discovery in a refereed journal. The authors and year of the first publication, the laboratory where the isotopes were produced, as well as the production and identification methods, are discussed. When appropriate, references to conference proceedings, internal reports, and theses are included. When a discovery includes a half-life measurement, we compared the measured value to the currently adopted value taken from the NUBASE evaluation \cite{Aud03}, which is based on the ENSDF database \cite{ENS08}. In cases where the reported half-life differed significantly from the adopted half-life (up to approximately a factor of two), we searched the subsequent literature for indications that the measurement was erroneous. If that was not the case we credited the authors with the discovery in spite of the inaccurate half-life. \section{Discovery of $^{45-72}$Fe} Twenty-eight iron isotopes from A = $45-72$ have been discovered so far; these include 4 stable, 10 proton-rich and 14 neutron-rich isotopes. According to the HFB-14 model \cite{Gor07} iron isotopes are predicted to be stable with respect to one-neutron emission out to $^{81}$Fe for the odd-mass isotopes and two-neutron emission out to $^{90}$Fe for the even-mass isotopes. 
At the proton dripline one more isotope, $^{44}$Fe, is predicted to be stable with respect to nucleon emission. Thus, there remain 15 isotopes to be discovered. No additional nuclei beyond the proton dripline are predicted to live long enough to be observed \cite{Tho04}. Over 60\% of all possible iron isotopes have been produced and identified so far. Figure \ref{f:year} summarizes the year of first discovery for all iron isotopes, classified by the method of discovery. The range of isotopes predicted to exist is indicated on the right side of the figure. The radioactive iron isotopes were produced using heavy-ion fusion evaporation (FE), light-particle reactions (LP), spallation reactions (SP), deep-inelastic reactions (DI), and projectile fragmentation or fission (PF). The stable isotopes were identified using mass spectroscopy (MS). Heavy ions are all nuclei with an atomic mass larger than A = 4 \cite{Gru77}. Light particles also include neutrons produced by accelerators. In the following paragraphs, the discovery of each iron isotope is discussed in detail. \begin{figure} \centering \includegraphics[scale=.5]{iron-year.pdf} \caption{Iron isotopes as a function of the time they were discovered. The different production methods are indicated. The solid black squares on the right hand side of the plot are isotopes predicted to be bound by the HFB-14 model.} \label{f:year} \end{figure} \subsection*{$^{45}$Fe}\vspace{-0.85cm} In their paper \textit{First Observation of the T$_{z}=-$7/2 Nuclei $^{45}$Fe and $^{49}$Ni}, Blank \textit{et al.} reported the discovery of $^{45}$Fe in 1996 at the Gesellschaft f\"{u}r Schwerionenforschung (GSI) in Germany \cite{Bla96}. A 600 A$\cdot$MeV $^{58}$Ni beam bombarded a 4~g/cm$^2$ thick beryllium target and isotopes were separated with the projectile-fragment separator FRS. $^{45}$Fe was identified by time-of-flight, $\Delta$E, and B$\rho$ analysis. ``In the entire Z versus A/Z plot ... only one background event can be identified... 
This high background suppression enables us to conclude even with only three and five counts on the first observation of $^{45}$Fe and $^{49}$Ni, respectively.'' The half-life was estimated to be greater than 350~ns based on the flight time through the FRS. In 1992 the non-observation of $^{45}$Fe in a projectile fragmentation experiment had led to the suggestion that $^{45}$Fe was probably not stable with respect to particle emission \cite{Bor92}. \subsection*{$^{46,47}$Fe}\vspace{-0.85cm} $^{46}$Fe and $^{47}$Fe were discovered by Borrel \textit{et al.} at the Grand Acc\'el\'erateur National d'Ions Lourds (GANIL) in France in 1992, as reported in the paper \textit{The decay modes of proton drip-line nuclei with A between 42 and 47} \cite{Bor92}. A 69 A$\cdot$MeV $^{58}$Ni beam was incident on a natural nickel target and the projectile fragments were separated using the Ligne d'Ions Super Epluch\'{e}s (LISE) spectrometer. The isotopes were identified by time of flight and energy loss measurements. ``A three hour run leads to the first identification of $^{47}$Fe with 23 counts'' and ``another step is taken towards the proton dripline with the identification of $^{46}$Fe. Sixteen events are obtained in nineteen hours.'' The half-life of $^{46}$Fe was experimentally determined via maximum-likelihood analysis of the time spectrum to be 20$^{+20}_{-8}$~ms; this value agrees with the presently accepted value of 9(4)~ms. The half-life of $^{47}$Fe was also determined via maximum-likelihood analysis of the time spectrum to be 27$^{+32}_{-10}$~ms. Pougheon \textit{et al.} had observed one count of $^{47}$Fe at GANIL in 1987, but attributed the uncertain event to background \cite{Pou87}. \subsection*{$^{48}$Fe}\vspace{-0.85cm} The 1987 paper \textit{Direct Observation of New Proton Rich Nuclei in the Region 23$\leq$Z$\leq$29 Using A 55A.MeV $^{58}$Ni Beam} reported the first observation of $^{48}$Fe at GANIL by Pougheon \textit{et al.} \cite{Pou87}. 
The fragmentation of a 55 A$\cdot$MeV $^{58}$Ni beam on a nickel target was used to produce proton-rich isotopes which were separated with the LISE spectrometer. Energy loss, time of flight, and magnetic rigidity measurements were made such that ``$^{48}$Fe is clearly identified with 27 counts.'' \subsection*{$^{49}$Fe}\vspace{-0.85cm} $^{49}$Fe was first observed by Cerny \textit{et al.} in 1970 and reported in the paper $^{49}$\textit{Fe A New T$_z=-$3/2 Delayed-Proton Emitter} \cite{Cer70}. The reaction $^{40}$Ca($^{12}$C,3n) using 65~MeV carbon ions accelerated by the Harwell variable-energy cyclotron was used to produce $^{49}$Fe. Beta-delayed protons were measured with a semiconductor telescope consisting of two surface-barrier detectors. ``Figure 1(b) presents a proton spectrum from $^{49}$Fe produced from a 2.2 mg/cm$^2$ Ca target. A single peak corresponding to a c.m. energy of 1.96(0.5) MeV, after correction for energy loss in the target, dominates the proton decay.'' The half-life was measured to be 75(10)~ms, which is consistent with the accepted value of 70(3)~ms. \subsection*{$^{50}$Fe}\vspace{-0.85cm} In the paper \textit{Mass measurements of the proton-rich nuclei $^{50}$Fe and $^{54}$Ni}, Tribble \textit{et al.} reported the discovery of $^{50}$Fe in 1977 \cite{Tri77}. Alpha particles accelerated to 110 MeV with the Texas A\&M University 88-inch Cyclotron were used to produce the reaction $^{54}$Fe($^{4}$He,$^{8}$He) and the ejectiles were observed at the focal plane of an Enge split-pole magnetic spectrograph. ``The experiments provide the first observation and subsequent mass measurement of the proton-rich nuclei $^{50}$Fe and $^{54}$Ni.'' The measured $\beta$-decay energy was 7.12(6)~MeV, which was used to estimate a half-life of 200~ms; this is close to the adopted value of 155(11)~ms. 
\subsection*{$^{51}$Fe}\vspace{-0.85cm} In a paper entitled \textit{New Proton-Rich Nuclei in the f$_{7/2}$ Shell}, Proctor \textit{et al.} described the discovery of $^{51}$Fe in 1972 \cite{Pro72}. The Michigan State University sector-focused cyclotron accelerated $^{3}$He to 70.8~MeV and the reaction $^{54}$Fe($^{3}$He,$^{6}$He) was used to produce $^{51}$Fe. The outgoing $^6$He particles were detected in the focal plane of an Enge split-pole magnetic spectrograph. ``The $^{51}$Fe ground state (J$^\pi$ = 5/2$^-$) is even more weakly populated, but is unambiguously identified in a number of spectra.'' \subsection*{$^{52}$Fe}\vspace{-0.85cm} \textit{Products of High Energy Deuteron and Helium Ion Bombardments of Copper} presented the first observation of $^{52}$Fe by Miller \textit{et al.} in 1948 \cite{Mil48}. The bombardment of natural copper with 190 MeV deuterons from the Berkeley 184-inch frequency-modulated cyclotron was used to produce $^{52}$Fe in a spallation-type reaction. ``An aluminum absorption curve of the parent-daughter equilibrium mixture showed, in addition to a component of \textit{ca.} 2.3 Mev attributable to the 21-min $^{52}$Mn, a component of \textit{ca.} 0.55-Mev maximum energy presumably as a result of the 7.8-hour parent, assigned to $^{52}$Fe.'' This measured half-life agrees with the presently accepted value of 8.275(8)~h. \subsection*{$^{53}$Fe}\vspace{-0.85cm} In \textit{Radioactive Isotopes of Iron}, Livingood and Seaborg reported the production of $^{53}$Fe in 1938 \cite{Liv38}. The isotope was produced in the reaction $^{50}$Cr($\alpha$,n)$^{53}$Fe with 16 MeV $\alpha$-particles accelerated by the Berkeley cyclotron. The decay curves of the produced radioactivity were measured with a quartz fiber electroscope following chemical separation. 
The authors ``believe the 9-minute activity to be due to Fe$^{53}$ rather than to Fe$^{55}$ because: (1) it is not produced by deuteron or slow neutron bombardment of Fe, (2) it is produced by fast neutrons on Fe, (3) attempts to produce $^{55}$Fe by other reactions have not disclosed a 9 minute activity.'' The half-life was determined to be 8.9(2)~m, which is close to the accepted value of 8.51(2)~m. In 1937, Ridenour and Henderson had observed a 9-minute activity; however, they were unable to make the unique mass assignment and attributed it to either the reaction $^{50}$Cr($\alpha$,n)$^{53}$Fe or the reaction $^{52}$Cr($\alpha$,n)$^{55}$Fe \cite{Rid37a}. In an even earlier publication, they had preferred the latter assignment \cite{Rid37b}. \subsection*{$^{54}$Fe}\vspace{-0.85cm} In his 1923 paper \textit{The Mass Spectra of Elements - Part IV}, Aston mentioned the possible first observation of $^{54}$Fe \cite{Ast23}. Using his mass spectrograph at Cavendish he investigated iron with the volatile carbonyl. ``The faint line at 54 may possibly be an isotope, but this is by no means certain.'' Aston confirmed the observation in 1925 \cite{Ast25}. \subsection*{$^{55}$Fe}\vspace{-0.85cm} Livingood and Seaborg observed $^{55}$Fe in 1938, as described in the paper \textit{Long-Lived Radioactive Fe$^{55}$} \cite{Liv39}. Iron samples bombarded with deuterons from the Berkeley cyclotron described in a previous publication \cite{Liv38} were measured for a period of 22 months. ``These facts assure us that Fe$^{55}$ has been formed through Fe$^{54}$(d,p)Fe$^{55}$ with the activity probably leading to stable Mn$^{55}$ either by positron emission or by K-electron capture.'' The counting time was not sufficient to extract a reliable half-life measurement and only a lower limit of one year was determined. The currently accepted half-life value is 2.737(10)~y. 
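A back-of-the-envelope check (ours, not from the original paper) shows why only a lower limit could be extracted: with the currently accepted half-life, the $^{55}$Fe activity declines by barely a third over the 22-month counting period, which is hard to resolve with 1930s counting statistics.

```python
import math

def remaining_fraction(t, half_life):
    """Fraction of the initial activity left after time t
    (exponential decay; t and half_life in the same units)."""
    return math.exp(-math.log(2) * t / half_life)

# 22 months of counting, with the currently accepted half-life of 2.737 y:
frac = remaining_fraction(22 / 12, 2.737)   # about 0.63, i.e. only a ~37% decline
```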
\subsection*{$^{56}$Fe}\vspace{-0.85cm} $^{56}$Fe was first identified at Cavendish in 1923 by Aston \cite{Ast23} as reported in \textit{The Mass Spectra of Elements - Part IV}. Volatile iron carbonyl was used to obtain the mass spectrum. ``The only line which can be ascribed with certainty to iron is the one at 56. Thirteen independent measurements of the principal line relative to other lines on the plate gave values of its mass which were very consistent and had a mean of 55.94.'' \subsection*{$^{57}$Fe}\vspace{-0.85cm} In 1935 Aston discovered $^{57}$Fe at Cavendish and described the results in his article \textit{The Isotopic Constitution and Atomic Weights of Hafnium, Thorium, Rhodium, Titanium, Zirconium, Calcium, Gallium, Silver, Carbon, Nickel, Cadmium, Iron and Indium} \cite{Ast35}. Aston used a pure sample of the carbonyl in the spectrograph. ``In addition to the strong isotope 56 and a weak one, 54, previously known, a third, 57 was revealed.'' \subsection*{$^{58}$Fe}\vspace{-0.85cm} The existence of $^{58}$Fe was demonstrated by de Gier and Zeeman at the University of Amsterdam in 1935 and reported in the paper \textit{The Isotopic Constitution of Iron} \cite{deG35}. De Gier and Zeeman succeeded with the identification of $^{58}$Fe with a very pure sample of carbonyl. ``With properly chosen canals the intensity of the iron lines could be increased so far that isotope 58 can be seen in the reproduction... The appearance of line 58 could now be followed closely when varying the circumstances of the experiments. In this way we obtained several convincing plates of the new isotope.'' In early 1935 Aston was not confident in the observation of $^{58}$Fe: ``Line 58 was present but weakened as the work proceeded and was most probably due to traces of nickel still left in the tube'' \cite{Ast35}. \subsection*{$^{59}$Fe}\vspace{-0.85cm} $^{59}$Fe was discovered by Livingood and Seaborg in 1938 as reported in \textit{Radioactive Isotopes of Iron} \cite{Liv38}. 
$^{59}$Fe was produced in the reactions $^{58}$Fe(d,p) and $^{59}$Co(n,p) with 5.5 MeV deuterons from the Berkeley cyclotron. The neutron irradiation was performed by placing the target next to the cyclotron during the bombardment of deuterons on lithium. The decay curves of the produced radioactivity were measured with a quartz fiber electroscope following chemical separation. ``It is at once apparent that only Fe$^{59}$ can be negative electron active. Furthermore, the only radio-iron that can be made from cobalt with neutron is Fe$^{59}$, so that we are justified in ascribing the 47-day activity to this isotope.'' The measured half-life of 47(3)~d is consistent with the accepted half-life of 44.495(9)~d. Livingood {\it et al.} had reported a 40~d iron activity in 1937 without attributing it to a specific isotope \cite{Liv37}. \subsection*{$^{60}$Fe}\vspace{-0.85cm} The discovery of $^{60}$Fe was described by Roy and Kohman in the 1957 paper \textit{Iron 60} \cite{Roy57}. A copper target was bombarded with 400~MeV protons from the Carnegie synchrocyclotron in Pittsburgh and $^{60}$Fe was produced in a spallation reaction. The mass assignment was made through the observation of the decay to the $^{60m}$Co daughter following chemical separation. ``From this, the activity ratio of Fe$^{60}$ and Fe$^{59}$, 45 days, the half-life of Fe$^{60}$ can be derived. The result is $\sim 3\cdot 10^5$ years, uncertain by a factor of 3 because of the approximate nature of the measurements and calculations.'' This half-life is somewhat smaller than the accepted value of 1.5(3)$\cdot$10$^{6}$~y. \subsection*{$^{61}$Fe}\vspace{-0.85cm} Ricci \textit{et al.} were the first to produce $^{61}$Fe in 1957 and published the results in the article \textit{A New Isotope of Iron $^{61}$Fe} \cite{Ric57}. $^{61}$Fe was produced in the spallation of nickel and copper targets in Buenos Aires, Argentina. 
``Mass number 61 was assigned to the new iron isotope because it decays to 99 minutes $^{61}$Co, already known.'' The half-life was measured to be 6.0(5)~m. This is consistent with the accepted value of 5.98(6)~m. \subsection*{$^{62}$Fe}\vspace{-0.85cm} In the 1975 paper \textit{Decay of the New Isotope $^{62}$Fe}, Franz \textit{et al.} reported the first observation of $^{62}$Fe \cite{Fra75}. Neutrons between 25 and 200~MeV generated by 200~MeV protons from the Brookhaven AGS linac injector bombarded a nickel oxide target enriched to 96\% $^{64}$Ni. $^{62}$Fe was produced with the $^{64}$Ni(n,2pn) reaction. Gamma spectra were measured following chemical separation. ``The mass assignment must be to $^{62}$Fe because the appropriate growth and decay were observed of 1.5-m $^{62}$Co in the chemically purified iron sample.'' The half-life of 68(2)~s is currently the only measured value for $^{62}$Fe. \subsection*{$^{63-65}$Fe}\vspace{-0.85cm} Guerreau \textit{et al.} reported the discovery of $^{63}$Fe, $^{64}$Fe and $^{65}$Fe in the 1980 paper \textit{Seven New Neutron Rich Nuclides Observed in Deep Inelastic Collisions of 340 MeV $^{40}$Ar on $^{238}$U} \cite{Gue80}. A 340 MeV $^{40}$Ar beam accelerated by the Orsay ALICE accelerator facility bombarded a 1.2 mg/cm$^2$ thick UF$_4$ target supported by an aluminum foil. The isotopes were identified using two $\Delta$E-E telescopes and two time of flight measurements. ``The new nuclides $^{54}$Ti, $^{56}$V, $^{58-59}$Cr, $^{61}$Mn, $^{63-64}$Fe, have been produced through $^{40}$Ar + $^{238}$U reactions.'' At least twenty counts were recorded for these isotopes. The tentative observation of $^{65}$Fe was mentioned. An inspection of the spectrum indicates at least 6 events of $^{65}$Fe. Breuer \textit{et al.} detected $^{63}$Fe independently only a few months later \cite{Bre80}. 
\subsection*{$^{66-68}$Fe}\vspace{-0.85cm} The 1985 paper \textit{Production and Identification of New Neutron-Rich Fragments from 33 MeV/u $^{86}$Kr Beam in the 18$\leq$Z$\leq$27 Region} by Guillemaud-Mueller \textit{et al.} reported the first observation of $^{66}$Fe, $^{67}$Fe and $^{68}$Fe \cite{Gui85}. The 33 MeV/u $^{86}$Kr beam bombarded tantalum targets and the fragments were separated with the GANIL triple-focusing analyser LISE. ``Each particle is identified by an event-by-event analysis. The mass A is determined from the total energy and the time of flight, and Z by the $\Delta$E and E measurements.'' \subsection*{$^{69}$Fe}\vspace{-0.85cm} In their paper \textit{New neutron-rich isotopes in the scandium-to-nickel region, produced by fragmentation of a 500 MeV/u $^{86}$Kr beam}, Weber \textit{et al.} presented the first observation of $^{69}$Fe in 1992 at GSI \cite{Web92}. $^{69}$Fe was produced in the fragmentation reaction of a 500 A$\cdot$MeV $^{86}$Kr beam from the heavy-ion synchrotron SIS on a beryllium target and separated with the zero-degree spectrometer FRS. ``The isotope identification was based on combining the values of B$\rho$, time of flight (TOF), and energy loss ($\Delta$E) that were measured for each ion passing through the FRS and its associated detector array.'' Twelve counts of $^{69}$Fe were recorded. \subsection*{$^{70-72}$Fe}\vspace{-0.85cm} Bernas \textit{et al.} observed $^{70}$Fe, $^{71}$Fe and $^{72}$Fe for the first time in 1997 as reported in their paper \textit{Discovery and cross-section measurement of 58 new fission products in projectile-fission of 750$\cdot$A MeV $^{238}$U} \cite{Ber97}. Uranium ions were accelerated to 750 A$\cdot$MeV by the GSI UNILAC/SIS accelerator facility and bombarded a beryllium target. 
The isotopes produced in the projectile-fission reaction were separated using the fragment separator FRS and the nuclear charge Z for each was determined by the energy loss measurement in an ionization chamber. ``The mass identification was carried out by measuring the time of flight (TOF) and the magnetic rigidity B$\rho$ with an accuracy of 10$^{-4}$.'' Two hundred counts of $^{70}$Fe, 39 counts of $^{71}$Fe, and two counts of $^{72}$Fe were observed. \section{Summary} The discovery of the iron isotopes has been mostly uncontroversial. The activities of only two isotopes ($^{53}$Fe and $^{59}$Fe) were detected before they could be assigned to a specific isotope. Prior to the discovery of $^{45}$Fe, its non-observation in a fragmentation experiment had led to the suggestion that it was probably not stable with respect to particle emission. \ack This work was supported by the National Science Foundation under grants No. PHY06-06007 (NSCL) and PHY07-54541 (REU). MH was supported by NSF grant PHY05-55445.
\section{Introduction}\label{intro} This is an extended and improved version of our PADL'20 paper \cite{padl20}.\footnote{ Selected by the reviewers of PADL'20 and the program chairs Ekaterina Komendantskaya and Yanhong Annie Liu to be submitted to the Rapid Publications track of the journal \emph{Theory and Practice of Logic Programming}. } Logic programming languages have been used successfully for inference and planning in natural language processing tasks restricted to narrow domains \cite{actlan17,inclezan18,baral19,inclezan19}. Their success, however, is limited in open-domain large-scale information extraction and knowledge representation tasks. On the other hand, deep learning systems are good at basic tasks ranging from parsing to factoid question answering, but they are still taking baby steps emulating human-level inference on complex documents \cite{bert18,bert19}. Thus, a significant gap persists between neural and symbolic approaches in the field. The work presented here aims at filling this gap. We explore synergies between neural, graph-based and symbolic approaches to solve a practical problem: building a dialog agent. This agent digests a text document (e.g., a story, a textbook, a scientific paper, a legal document) and enables the user to interact with the most relevant content. We will start with a quick overview of the system, including the main tools and techniques of each module. Our system builds upon state-of-the-art natural language processing tools, and couples them with a declarative language module focusing on high-level text mining. We integrate the modules in the Python-based nltk ecosystem \cite{bird-loper-2004-nltk}, and rely on the Java-based Stanford CoreNLP toolkit \cite{coreNLP} for basic natural language processing tasks such as sentence boundary detection, tokenization, part-of-speech tagging and parsing. \subsubsection*{Overview of the System Architecture} Fig. \ref{deepsys} summarizes the architecture of our system. 
The Stanford CoreNLP dependency parser is started as a separate server process to which the Python-based text processing module connects as a client. It interfaces with the Prolog-based dialog engine by generating a clausal representation of the document's structure and content as well as the user's queries. The dialog engine is responsible for handling the user's queries, for which answers are sent back to the Python front-end, which also handles calls to OS-level spoken-language services, when activated. \FIG{deepsys}{System Architecture}{0.42}{deepsys} State-of-the-art dependency parsers \cite{stan,AdolphsXLU11,choi:17a}, among which the neural Stanford dependency parser \cite{stan} stands out, produce highly accurate dependency graphs. The vertices in these graphs are words and their part-of-speech tags, and labeled edges indicate the syntactic heads of words (e.g., subject, direct object). In contrast to collocations in a sliding window, dependency graphs provide ``distilled'' building blocks through which a graph-based natural language processing system can absorb higher-level linguistic information. Inspired by the effectiveness of algorithms like Google's PageRank, recursive ranking algorithms applied to text graphs have enabled extraction of keyphrases, summaries and relations. Their popularity continues to increase due to their holistic view of the interconnections between text units, which signals the most relevant ones. Additionally, these algorithms are comparatively simple. At more than 3100 citations, and with a series of highly cited follow-up papers \cite{Erkan:2004}, the TextRank algorithm \cite{EMNLP:TR,ijcnlp05} and its creative descendants have extended their applications to a wide variety of document types and social media interactions in a few dozen languages. While part of the family of TextRank descendants, our graph-based text processing algorithm will use information derived from the dependency graphs associated with sentences. 
We leverage part-of-speech tags assigned to vertices (words) and edge labels (syntactic dependencies between words) in dependency graphs in order to extract rank-ordered facts corresponding to content elements present in sentences. We pass these to logic programs that can query them and infer new relations, beyond those that can be mined directly from the text. As with a good search engine, interactions with a text document will focus on the most relevant and semantically coherent elements matching a query. With this in mind, the natural feel of an answer syntactically appropriate for a query is less important than the usefulness of the content elements extracted: just sentences of the document in their natural order. We will also enable spoken interaction with the dialog engine, opening the door to the use of the system via voice-based appliances. Applications range from assistive technologies for visually challenged people, live user manuals, and teaching from K-12 to graduate-level classes, to interactive information retrieval from complex technical or legal documents. \begin{comment} The most significant contributions of the research work covered by the paper are: \BI \I a logic relation post-processor supporting realtime interactive queries about a document's content \I integration of our algorithms into an open-source system with practical uses helping a reader of a scientific document to interactively familiarize herself with its content \EI \end{comment} The paper is organized as follows. Section \ref{tg} describes the graph-based Natural Language Processing module. Section \ref{dia} describes our Prolog-based dialog engine. Section \ref{inter} shows interaction examples with several document types. Section \ref{disc} puts in context the main ideas of the paper and justifies some of the architecture choices we have made. Section \ref{rel} overviews related work and background information. Section \ref{conc} concludes the paper. 
\section{The graph-based Natural Language Processing module}\label{tg} We have organized our Python-based textgraph processing algorithm together with the Prolog-based dialog engine into a unified system.\footnote{Our implementation is available at \url{https://github.com/ptarau/DeepRank.}} We start with the building and the ranking of the text graph. Then, we overview the summary, keyphrase and relation extraction components, and the creation of the Prolog database that constitutes the logical model of the document, to be processed by the dialog engine. \subsection{Building and ranking the text graph} We connect as a Python client to the Stanford CoreNLP server and use it to provide our dependency links via the wrapper at \url{https://www.nltk.org/} of the Stanford CoreNLP toolkit \cite{coreNLP}. Unlike the original TextRank and related approaches that develop special techniques for each text processing task, we design a unified algorithm to obtain graph representations of documents that are suitable for keyphrase extraction, summarization and interactive content exploration. We use unique sentence identifiers and unique lemmas\footnote{A lemma is a canonical representation of a word, as it stands in a dictionary, for all its inflections, e.g., it is {\bf ``be''} for ``is'', ``are'', ``was'' etc.} as nodes of the text graph. As keyphrases are centered around nouns and good summary sentences are likely to talk about important concepts, we will need to reverse some links in the dependency graph provided by the parser, to prioritize nouns and deprioritize verbs, especially auxiliary and modal ones. Thus, we (a) redirect the dependency edges toward nouns with subject and object roles, as shown for a simple short sentence in Fig. \ref{depgraph}, and (b) add {\em ``about''} edges from the sentences they occur in. 
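The construction just described can be sketched with {\bf networkx}; the toy graph below, with hand-written edges standing in for the parser output, covers one sentence, ``the cat sits on the mat'' (sentence identifier 0):

```python
import networkx as nx

# Toy text graph for one sentence: dependency edges are redirected toward
# the nouns, the nouns point to the sentence via "about" edges, and the
# sentence points back to its predicate verb.
g = nx.DiGraph()
g.add_edges_from([
    ("sit", "cat"), ("sit", "mat"), ("on", "mat"),  # edges redirected toward nouns
    ("cat", 0), ("mat", 0),                         # "about" edges to sentence 0
    (0, "sit"),                                     # sentence recommends its predicate
])
rank = nx.pagerank(g)
# The sentence node accumulates rank from its content words, and the nouns
# outrank the preposition, which merely passes its rank along.
```

Ranking and node filtering happen on this same graph; in the real pipeline the edge list is produced from the dependency parser's output rather than written by hand.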
\DFIG{depgraph}{Dependency graph of a simple sentence with redirected and newly added arrows}{0.38}{depgraph.pdf} We also create {\em ``recommend''} links from words to the sentence identifiers and back from sentences to verbs with {\em predicate} roles to indirectly ensure that sentences recommend and are recommended by their content. Specifically, we ensure that (a) sentences recommend verbs with predicate function, and (b) their recommendation spreads to nouns that are predicate arguments (e.g., having subject or object roles). Using the PageRank implementation of the {\bf networkx} toolkit,\footnote{\url{https://networkx.github.io/}} the system ranks the sentence and word nodes of the text graph; it can then display subgraphs filtered to contain only the highest-ranked nodes, using Python's {\tt graphviz} library. An example of a text graph, filtered to only show word-to-word links, derived from the U.S. Constitution,\footnote{Available as a text document at: \url{https://www.usconstitution.net/const.txt}} is shown in Fig. \ref{constit}. \FIG{constit}{Text graph fragment connecting the highest-ranked words in the U.S. Constitution}{0.52}{constit} \subsection{Pre- and post-ranking graph refinements} The algorithm induces a form of automatic stop word filtering, because our dependency link arrangement ensures that modifiers with lesser semantic value relinquish their rank by pointing to more significant lexical components. This is a valid alternative to explicit ``leaf trimming'' before ranking, which remains an option for reducing graph size for large texts or multi-document collections, as well as helping with a more focused relation extraction from the reduced graphs. Besides word-to-word links, our text graphs connect sentences as additional dependency graph nodes, resulting in a unified keyphrase and summary extraction framework. 
Note also that, as an option that is relevant especially for scientific, medical or legal documents, we add {\tt first\_in} links from a word to the sentence containing its first occurrence, to prioritize sentences where concepts are likely to be defined or explained. Our reliance on graphs provided by dependency parsers builds a bridge between deep neural network-based machine learning and graph-based natural language processing, enabling us to often capture implicit semantic information. \subsection{Summary and keyword extraction} As link configurations tend to favor very long sentences, a post-ranking normalization is applied to the sentence ranks. After ordering sentences by rank, we extract the highest ranked ones and reorder them in their natural order in the text to form a more coherent summary. We use the parser's compound phrase tags to fuse multi-word expressions along dependency links. We design our keyphrase synthesis algorithm to ensure that highly ranked words will pull out their contexts from sentences, to make up meaningful keyphrases. As a heuristic, we mine for a context of 2-4 dependency-linked words around a highly ranked noun, while ensuring that the context itself has a high enough rank, as we compute a weighted average favoring the noun over the elements of its context. \subsection{Relation extraction}\label{rels} We add subject-verb-object facts extracted from the highest ranked dependency links, enhanced with ``is-a'' and ``part-of'' relations obtained using WordNet via the {\tt nltk} toolkit. We plan in the future to also generate relations from conditional statements identified by following dependency links and involving negations, modalities, conjuncts and disjuncts, to be represented as Prolog rules. Subject-verb-object (SVO) relations are extracted directly from the dependency graph, and an extra argument is added to the triplet marking the number of the sentence they originate from.
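The assembly of such sentence-stamped SVO facts can be sketched as follows, again using the simplified dependency-triple format (an illustrative assumption, not the actual implementation):

```python
def extract_svo(parsed_sentences):
    """parsed_sentences: one list per sentence of dependency triples
    (head_lemma, label, dep_lemma). Pairs each verb's subject with its
    object and appends the sentence number as a fourth argument,
    yielding svo/4-style facts."""
    facts = []
    for sent_id, deps in enumerate(parsed_sentences):
        subjects = {h: d for h, lbl, d in deps if lbl == "nsubj"}
        objects = {h: d for h, lbl, d in deps if lbl in ("obj", "dobj")}
        for verb, subj in subjects.items():
            if verb in objects:
                facts.append((subj, verb, objects[verb], sent_id))
    return facts

svo = extract_svo([[("remove", "nsubj", "congress"),
                    ("remove", "dobj", "president")]])
```

The sentence number carried in the fourth position is what later lets the dialog engine trace an inferred relation back to a candidate answer sentence.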
``{Is-a}'' relations are extracted using WordNet \cite{Fellbaum98} hypernyms and hyponyms.\footnote{More general and, respectively, more specific concepts.} Similarly, ``{\tt part\_of}'' relations are extracted using meronyms and holonyms.\footnote{Concepts corresponding to objects that are part of, and, respectively, have as part other objects.} As a heuristic ensuring that these relations are relevant to the content of the text, we require that both of their arguments are words occurring in the document when connecting their corresponding synsets via WordNet relations. By constraining the two ends of an ``is-a'' or ``part-of'' edge to occur in the document, we avoid relations derived from synsets unrelated to the document's content. In fact, this provides an effective word-sense disambiguation heuristic. \section{The Prolog-based dialog engine}\label{dia} After our Python-based document processor, with help from the Stanford dependency parser, builds and ranks the text graph and extracts summaries, keyphrases and relations, we pass them to the Prolog-based dialog engine. \subsection{Generating input for post-processing by logic programs} Once the document is processed, we generate, besides the dependency links provided by the parser, relations containing facts that we have gleaned from processing the document. Together, they form a Prolog database representing the content of the document.
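The both-endpoints-in-the-document constraint is easy to isolate in code. In the sketch below the WordNet lookup (done via {\tt nltk}'s WordNet interface in the real system) is replaced by a toy hypernym table, so the example stays self-contained; the table contents are invented for illustration.

```python
def extract_isa(doc_words, hypernyms):
    """Keep an is-a edge only when both endpoints occur in the document,
    implementing the word-sense disambiguation heuristic above.
    `hypernyms` stands in for the WordNet lookup; here it is a toy table."""
    facts = []
    for word in sorted(doc_words):
        for general in hypernyms.get(word, ()):
            if general in doc_words:  # both ends must occur in the text
                facts.append((word, "is_a", general))
    return facts

doc = {"dog", "animal", "court"}
table = {"dog": ["animal", "canine"],          # "canine" not in doc: dropped
         "court": ["tribunal", "playground"]}  # senses unsupported by doc
rels = extract_isa(doc, table)
```

Only {\tt dog is\_a animal} survives: the hypernyms of ``court'' are filtered out because no word in the document supports either sense, which is exactly the disambiguation effect described above.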
To keep the interface simple and portable to other logic programming tools, we generate the following predicates in the form of Prolog-readable code, in one file per document: \BE \I {\tt keyword(WordPhrase).} -- the extracted keyphrases \I {\tt summary(SentenceId,SentenceWords).} -- the extracted summary sentence identifiers and the list of words in each sentence \I {\tt dep(SentenceID,WordFrom,FromTag,Label,WordTo,ToTag).} -- a component of a dependency link, with the first argument indicating the sentence from which it has been extracted \I {\tt edge(SentenceID,FromLemma,FromTag,RelationLabel,ToLemma,ToTag).} -- an edge marked with the sentence identifier indicating where it was extracted from, and the lemmas with their POS tags at the two ends of the edge \I {\tt rank(LemmaOrSentenceId,Rank).} -- the rank computed for each lemma or sentence identifier \I {\tt w2l(Word,Lemma,Tag).} -- a map associating to each word a lemma and a tag, as found by the POS tagger \I {\tt svo(Subject,Verb,Object,SentenceId).} -- subject-verb-object relations extracted from parser input, or WordNet-based {\tt is\_a} and {\tt part\_of} labels in verb position \I {\tt ner(SentId,ListOfNamedEntityPairs).} -- named entity pairs extracted from Named Entity Recognizer ({\tt NER}) annotations \I {\tt sent(SentenceId,ListOfWords).} -- the list of sentences in the document, with a sentence identifier as first argument and a list of words as second argument \EE These predicates provide a relational view of a document in the form of a fact database that will support the inference mechanisms built on top of it. The resulting logic program can then be processed with Prolog semantics, possibly enhanced by using constraint solvers \cite{OzEngines:97}, abductive reasoners \cite{abduct02} or via Answer Set Programming systems \cite{ijcai2018-769,asp}.
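Emitting these facts from the Python side amounts to rendering tuples as Prolog-readable terms. A minimal sketch follows; the quoting and escaping policy shown is a simplification we adopt for illustration, not necessarily the exporter's exact behavior.

```python
def q(atom):
    """Quote a string as a Prolog atom, escaping embedded single quotes."""
    return "'" + str(atom).replace("\\", "\\\\").replace("'", "\\'") + "'"

def prolog_fact(functor, *args):
    """Render one Prolog-readable fact; numbers pass through unquoted."""
    rendered = [str(a) if isinstance(a, (int, float)) else q(a)
                for a in args]
    return functor + "(" + ",".join(rendered) + ")."

lines = [prolog_fact("keyword", "text graph"),
         prolog_fact("svo", "congress", "remove", "president", 66)]
```

Writing one such line per extracted fact, predicate by predicate, yields the per-document Prolog file described above.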
We expect benefits from such extensions for tackling computationally difficult problems like word-sense disambiguation (WSD) or entailment inference, as well as domain-specific reasoning \cite{inclezan19,naraction,baral19}. We have applied this process to the {\em Krapivin document set} \cite{dataset08}, a collection of {\bf 2304} research papers annotated with the authors' own keyphrases and abstracts. The resulting 3.5 GB {\em Prolog dataset}\footnote{\url{http://www.cse.unt.edu/~tarau/datasets/PrologDeepRankDataset.zip}} is made available for researchers in the field interested in exploring declarative reasoning or text mining mechanisms. \subsection{The Prolog interface} We use as a logic processing tool the open source SWI-Prolog system\footnote{\url{http://www.swi-prolog.org/}} \cite{swi}, which can be called from, and can call, Python programs using the {\tt pyswip} adaptor.\footnote{\url{https://github.com/yuce/pyswip}} After the adaptor creates the Prolog process and the content of the digested document is transferred from Python (in a few seconds for typical scientific papers with 10-15 pages), query processing is real-time. \subsection{The user interaction loop} With the Prolog representation of the digested document in memory, the dialog starts by displaying the summary and keyphrases extracted from the document.\footnote{And also speaks them out if the {\tt quiet} flag is off.} One can see this as a ``mini search-engine'', specialized to the document, and, with help of an indexing layer, extensible to multi-document collections. The dialog agent associated with the document answers queries as sets of salient sentences extracted from the text, via a specialization of our summarization algorithm to the context inferred from the query.
As part of an interactive {\em read/listen, evaluate, print/say} loop, we generate, for each query sentence, a set of predicates that are passed to the Prolog process, from where answers will come back via the {\tt pyswip} interface. The predicates extracted from a query have the same structure as the database representing the content of the complete document, initially sent to Prolog. \subsection{The answer generation algorithm} Answers are generated by selecting the most relevant sentences, presented in their natural order in the text, in the form of a specialized ``mini-summary''. We will next overview our query answering algorithm, with examples to follow in section \ref{inter}. \subsubsection{Query expansion} Answer generation starts with a query-expansion mechanism via relations that are derived by finding, for lemmas in the query, WordNet hypernyms, hyponyms, meronyms and holonyms, as well as by directly extracting them from the query's dependency links. We use the rankings available both in the query and the document graph to prioritize the highest ranked sentences connected to the highest ranked nodes in the query. \subsubsection{Short-term dialog memory} We keep representations of recent queries in memory, as well as the answers generated for them. If the representation of the current query overlaps with a past one, we use content in the past query's database to extend query expansion to cover edges originating from that query. Overlap is detected via edges between noun or verb nodes that are shared by the query graphs.
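The overlap test on query graphs reduces to a set intersection over content edges. The following sketch makes the idea concrete; the edge representation and function names are our own illustrative choices.

```python
def content_edges(query_graph, nouns_or_verbs):
    """Keep only edges whose endpoints are both nouns or verbs."""
    return {(a, b) for a, b in query_graph
            if a in nouns_or_verbs and b in nouns_or_verbs}

def overlaps(current, past, nouns_or_verbs):
    """Short-term memory test: do two query graphs share a content edge?"""
    return bool(content_edges(current, nouns_or_verbs) &
                content_edges(past, nouns_or_verbs))

nv = {"president", "remove", "office", "impeach"}
q1 = {("remove", "president"), ("remove", "office"), ("how", "remove")}
q2 = {("impeach", "president"), ("remove", "president")}
hit = overlaps(q1, q2, nv)
```

Edges touching function words (like the {\tt ("how", "remove")} edge above) are ignored, so only genuine content overlap triggers the reuse of a past query's expansion.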
\subsubsection{Answer sentence selection} Answer sentence selection is performed with a combination of several interoperating algorithms: \BI \I use of {\em personalized PageRank} \cite{perso1,perso2} with a dictionary provided by the highest ranking lemmas and their ranks in the query's graph, followed by reranking the document's graph to specialize it to the query's content \I matching guided by SVO-relations \I matching of edges in the query graph against edges in the document graph \I query expansion guided by rankings in both the query graph and the document graph \I matching guided by a selection of related content components in the short-term dialog memory window \EI Matching against the Prolog database representing the document is implemented as a size constraint on the intersection of the expanded query lemma set with each candidate sentence's lemma set, where highly ranked shared lemmas point to the sentences containing them. The set of answers is organized to return the highest-ranked sentences, based on relevance to the query, in the order in which they appear in the document. We keep the dialog window relatively small (limited to the highest ranked 3 sentences in the answer set, by default). Relevance is ensured with help from the rankings computed for both the document content and the query. \subsubsection{Personalized PageRank} Using {\em personalized PageRank} \cite{perso1,perso2} can be seen as a specialization of the document graph with respect to the query. The personalization dictionary is also used to implicitly redirect flow, otherwise stuck in the sink nodes of the text graph, to content related to the query. The predicate {\tt query\_pers\_sents} is generated for each query and then passed to Prolog, where it is used to prioritize the answers computed by the answer search algorithms.
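A simplified stand-in for the personalized variant, showing both the query-biased teleport vector and the redirection of flow stuck in sink nodes, might look as follows (the real system relies on {\bf networkx}'s implementation; the graph and parameters here are toy values):

```python
def personalized_pagerank(edges, personalization, damping=0.85, iters=50):
    """Power-iteration PageRank whose teleport distribution is biased by
    the query lemmas -- a toy stand-in for networkx's
    pagerank(..., personalization=...). Flow reaching sink nodes is
    redirected along the personalization vector, as described above."""
    nodes = {n for e in edges for n in e} | set(personalization)
    out = {n: [] for n in nodes}
    for s, d in edges:
        out[s].append(d)
    total = sum(personalization.values())
    tele = {n: personalization.get(n, 0.0) / total for n in nodes}
    rank = dict(tele)
    for _ in range(iters):
        nxt = {n: (1.0 - damping) * tele[n] for n in nodes}
        for s in nodes:
            if out[s]:
                share = damping * rank[s] / len(out[s])
                for d in out[s]:
                    nxt[d] += share
            else:  # sink: redirect stuck flow toward query-related nodes
                for d in nodes:
                    nxt[d] += damping * rank[s] * tele[d]
        rank = nxt
    return rank

r = personalized_pagerank(
    [("president", "office"), ("office", "president"), ("law", "congress")],
    personalization={"president": 1.0})
```

Nodes unreachable from the query-biased teleport vector (here {\tt "law"} and {\tt "congress"}) keep negligible rank, which is precisely the specialization of the document graph to the query.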
\subsubsection{Matching guided by SVO-relations} SVO-facts inferred from syntactic dependencies and from WordNet relations have a fourth argument, indicating the sentence number where they have been found. Thus, walking over them in a transitive closure computation (limited to at most K inference steps) allows us to collect a path made of the sentences and relations used at each step, suggesting possibly interesting candidate answers. The predicate {\tt tc/7} implements a {\tt K}-step limited transitive closure computation, also controlled by the set of available relations and by a loop checking mechanism that avoids revisiting the same nodes repeatedly. We believe that it also shows the expressiveness of a declarative programming pattern in handling a fairly complex set of requirements in a clear and compact form. Our computation is exposed via the interface predicate {\tt tc/5}. {\em The predicate {\tt tc(K,A,Rels,C,Res)} holds if we can get from word {\tt A} to word {\tt C} in at most {\tt K} steps using any relation in the set {\tt Rels}, returning in {\tt Res} the number of steps left, the path followed and a sentence number in the document, possibly relevant as an answer.} \begin{code}
tc(K,A,Rels,C,Res) :- tc(A,Rels,C,[],K,_,Res).
\end{code} \begin{code}
tc(A,Rels,C,Xs,SN1,N2,Res) :-
  succ(N1,SN1),
  member(Rel,Rels),
  call_svo(A,Rel,B,Id),
  not(memberchk(B-_,Xs)),
  tc1(B,Rels,C,[A-Rel|Xs],Id,N1,N2,Res).

tc1(B,_Rels,B,Xs,Id,N,N,res(N,Id,Xs)) :- nonvar(Id).
tc1(B,Rels,C,Xs,_,N1,N2,Res) :- tc(B,Rels,C,Xs,N1,N2,Res).
\end{code} Note that loop checking is achieved by keeping a path of elements of the form {\tt Word-Relation}, and that the ``{\tt \_}'' variable in the definition of {\tt tc/5} ensures that paths of length {\em up to} {\tt K} are returned.
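For readers on the Python side of the system, the same bounded walk can be rendered as a small recursive function over {\tt svo/4}-style facts. This is an illustrative translation of the Prolog predicate, not part of the implementation.

```python
def tc(k, a, rels, c, svo_facts):
    """Bounded transitive closure over (subj, rel, obj, sent_id) facts:
    can word `c` be reached from word `a` in at most `k` steps using
    relations in `rels`? Returns (steps_left, sentence_id, path), like
    the Prolog res/3 term, or None. An illustrative rendering of tc/5."""
    def walk(node, path, steps):
        if steps == 0:
            return None
        visited = {w for w, _ in path}  # loop check, as in the Prolog code
        for subj, rel, obj, sent_id in svo_facts:
            if subj == node and rel in rels and obj not in visited:
                new_path = path + [(node, rel)]
                if obj == c and sent_id is not None:
                    return (steps - 1, sent_id, new_path)
                found = walk(obj, new_path, steps - 1)
                if found is not None:
                    return found
        return None
    return walk(a, [], k)

facts = [("infection", "cause", "illness", 12),
         ("illness", "is_a", "disease", 30)]
res = tc(3, "infection", {"cause", "is_a"}, "disease", facts)
```

The returned sentence number (here that of the final {\tt is\_a} fact) is what makes the walked path usable as a candidate answer, mirroring the {\tt res/3} term built by {\tt tc1/8}.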
We also accommodate {\tt SVO} relations originating from a {\em domain-specific ontology}, for which the sentence identifier is left as an {\em unbound logical variable}, provided that at the end of the available inference steps an actual sentence of the document is returned, a requirement ensured by the {\tt nonvar/1} test in the first clause of the predicate {\tt tc1/8}. If the system detects the presence of additional {\tt svo/4} facts from a domain-specific ontology (consulted using a plugin mechanism), the single step predicate {\tt call\_svo/4} will extend its scope over such additional {\tt SVO} relations. \subsubsection{Using Named Entity Recognition} Named Entity Recognition (NER) annotates persons, locations, time expressions, etc., providing additional relations usable for inference. When NER is available, questions containing {\bf wh}-words like {\em where}, {\em when} and {\em who} trigger a scan of the named entity database, also sent to Prolog. For a document on the CDC COVID-19 status, the system extracts {\tt NER} relations like \begin{code}
ner(86, [(0, ('March', 'DATE')), (1, ('10', 'DATE')),
         (2, ('CDC', 'ORGANIZATION')),
         (3, ('infection', 'CAUSE_OF_DEATH'))]).
\end{code} saying that sentence {\tt 86} is talking about named entities hinting at events on March 10, when the CDC, an organization, identified life-threatening infections. \subsubsection{Using {\bf wh}-word replacement with logic variables} Edges in the query originating from or targeting {\bf wh}-words are replaced with logic variables whose labels correspond to expected syntactic roles (e.g., {\tt nsubj} for {\em who}). Besides matching them against corresponding edges in the document, we also take advantage of the named entity relation {\tt ner/2}, from which we derive specialized answer predicates. \begin{code}
who(KWs,SentId):-wh(['PERSON','ORGANIZATION','TITLE'],KWs,SentId).
where(KWs,SentId):-
  wh(['LOCATION','CITY','COUNTRY','STATE_OR_PROVINCE'],KWs,SentId).
many(KWs,SentId):-wh(['NUMBER', 'ORDINAL', 'MONEY'],KWs,SentId).
when(KWs,SentId):-wh(['DATE','TIME','DURATION'],KWs,SentId).
\end{code} The {\tt wh/3} predicate scans the corresponding {\tt ner/2} facts for a match, which is further validated by ensuring that the answers have a high enough personalized PageRank. \section{Interacting with the dialog engine}\label{inter} We will next show interaction examples with several document types, with focus on key aspects of our question-answering algorithms. The following example shows the result of a query on the US Constitution document, with edges marked by the POS-tags of the two nodes, aggregated with the label of the dependency link connecting them. \FIG{remove}{Graph of a query on the U.S. Constitution}{0.44}{remove} {\small \begin{quote} {\verb~>>>~ talk\_about('examples/const')} {\em {\bf \verb~?--~ How can a President be removed from office?} 59 : In Case of the Removal of the President from Office , or of his Death , Resignation , or Inability to discharge the Powers and Duties of the said Office , the same shall devolve on the Vice President , and the Congress may by Law provide for the Case of Removal , Death , Resignation or Inability , both of the President and Vice President , declaring what Officer shall then act as President , and such Officer shall act accordingly , until the Disability be removed , or a President shall be elected . 66 : Section 4 The President , Vice President and all civil Officers of the United States , shall be removed from Office on Impeachment for , and Conviction of , Treason , Bribery , or other high Crimes and Misdemeanors .
190 : If the Congress , within twenty one days after receipt of the latter written declaration , or , if Congress is not in session , within twenty one days after Congress is required to assemble , determines by two thirds vote of both Houses that the President is unable to discharge the powers and duties of his office , the Vice President shall continue to discharge the same as Acting President ; otherwise , the President shall resume the powers and duties of his office . } \end{quote} } Note the relevance of the extracted sentences and the resilience to semantic and syntactic variations (e.g., the last sentence does not contain the word ``remove''). The dependency graph of the query is shown in Fig. \ref{remove}. The clauses of the {\tt query\_rank/2} predicate in the Prolog database corresponding to the query are: \begin{codex}
query_rank('President', 0.2162991696472837).
query_rank('remove', 0.20105324712764877).
query_rank('office', 0.12690425831428373).
query_rank('how', 0.04908035060099132).
query_rank('can', 0.04908035060099132).
query_rank('a', 0.04908035060099132).
query_rank('be', 0.04908035060099132).
query_rank('from', 0.04908035060099132).
query_rank(0, 0.0023633884483800784).
\end{codex} The impact of Personalized PageRank can be seen by comparing the cloud-map for the document (see Fig. \ref{dcloud}) with the cloud-map of the document specialized to the query (see Fig. \ref{qcloud}). Note the increased emphasis in Fig. \ref{qcloud} on some relevant actors (President, Senate) as well as on relevant concepts related to the removal from office of a President (impeachment, profit, disability, judgment). \FIG{dcloud}{Word-cloud of the U.S. Constitution before specialization w.r.t. the query}{0.72}{dcloud} \FIG{qcloud}{Word-cloud of the U.S.
Constitution after query-driven personalized PageRank}{0.72}{qcloud} Our next example uses an ASCII version of Einstein's 1920 book on relativity, retrieved from the Gutenberg collection\footnote{ {\small \url{https://www.gutenberg.org/files/30155/30155-0.txt}} } and trimmed to the actual content of the book (250 pages in {\tt epub} form). {\small \begin{quote} {\verb~>>>~ talk\_about('examples/relativity')} {\em {\bf \verb~?--~ What happens to light in the presence of gravitational fields?} 611 : In the example of the transmission of light just dealt with , we have seen that the general theory of relativity enables us to derive theoretically the influence of a gravitational field on the course of natural processes , the laws of which are already known when a gravitational field is absent . 764 : On the contrary , we arrived at the result that according to this latter theory the velocity of light must always depend on the co-ordinates when a gravitational field is present . 765 : In connection with a specific illustration in Section XXIII , we found that the presence of a gravitational field invalidates the definition of the coordinates and the time , which led us to our objective in the special theory of relativity . } \end{quote} } \FIG{light}{Graph of query on Einstein's book on Relativity}{0.33}{light} The query graph is shown in Fig. \ref{light}. After the less than 30 seconds that it takes to digest the book, answers are generated in less than a second for all queries that we have tried. Given the availability of spoken dialog, a user can iterate and refine queries to extract the most relevant answer sentences of a document. On an even larger document, like the Tesla Model 3 owner's manual,\footnote{ \url{https://www.tesla.com/sites/default/files/model_3_owners_manual_north_america_en.pdf} } digesting the document takes about 60 seconds and results in 12 MB of Prolog clauses. After that, query answering is still below 1 second. 
{\small \begin{quote} {\verb~>>>~ talk\_about('examples/tesla')} {\verb~?--~ {\bf How may I have a flat tire repaired?}} 3207 : Arrange to have Model 3 transported to a Tesla Service Center , or to a nearby tire repair center . 3291 : Note : If a tire has been replaced or repaired using a different tire sealant than the one available from Tesla , and a low tire pressure is detected , it is possible that the tire sensor has been damaged . \end{quote} } The highly relevant first answer is genuinely useful in this case, given that Tesla Model 3 cars do not have a spare tire. Being able to use voice queries while driving, when in need of urgent technical information about one's car, hints at obvious practical applications of our dialog engine. Directly querying news articles with potentially urgent information is another application. The following interaction is based on the US-CDC guidelines on the COVID-19 pandemic (as posted on March 17, 2020). { \begin{quote} {\verb~>>>~ talk\_about('examples/covid')} {\verb~?--~ {\bf How does the COVID-19 virus spread in a community?}} 25 : Pandemics happen when a new virus emerges to infect people and can spread between people sustainably . 26 : Because there is little to no pre-existing immunity against the new virus , it spreads worldwide . 47 : Three U.S. states are experiencing sustained community spread . \end{quote} } The highest ranked lemmas after running personalized PageRank show the emphasis on key lemmas coming from the query, as well as some of the overall important related concepts in the document. \begin{code}
query_pers_words('virus', 0.1279800580708069).
query_pers_words('spread', 0.08076646367741867).
query_pers_words('covid', 0.06794580306202914).
query_pers_words('community', 0.05973841781795769).
query_pers_words('CoV', 0.01846080235270702).
query_pers_words('response', 0.012416829623738527).
query_pers_words('influenza', 0.012240656425061098).
query_pers_words('culture', 0.012084585427826857).
query_pers_words('illness', 0.01129131793978254).
query_pers_words('people', 0.009611356400409315).
query_pers_words('healthcare', 0.008738409714154799).
query_pers_words('risk', 0.00859884680785955).
\end{code} The evolution from the cloud-map of the document to the cloud-map of the reranked document is shown in Fig. \ref{dcovid} and Fig. \ref{qcovid}. \FIG{dcovid}{Word-cloud of the CDC COVID-19 document before specialization w.r.t. the query}{0.72}{dcovid} \FIG{qcovid}{Word-cloud of the CDC COVID-19 document after query-driven personalized PageRank}{0.72}{qcovid} The following examples show the effect of the {\bf wh-}words triggering answers derived from the {\tt ner/2} facts. { \begin{quote} {\verb~>>>~ talk\_about('examples/covid')} {\verb~?--~ {\bf Who did declare the outbreak an emergency?}} On March 13 , the President of the United States declared the COVID-19 outbreak a national emergency . 27 : On March 11 , the COVID-19 outbreak was characterized as a pandemic by the WHO . \end{quote} } { \begin{quote} {\verb~?--~ {\bf When did the outbreak become a pandemic?}} 4 : On March 11 , WHO publicly characterized COVID-19 as a pandemic . 22 : COVID-19 Now a Pandemic A pandemic is a global outbreak of disease . 27 : On March 11 , the COVID-19 outbreak was characterized as a pandemic by the WHO . \end{quote} } \section{Discussion}\label{disc} Ideally, one would like to evaluate the quality of natural language understanding of an AI system by querying it not only about a set of relations explicitly extracted from the text, but also about relations inferred from the text. Moreover, one would also like to have the system justify the inferred relations in the form of a proof, or at least a sketch of the thought process a human would use for the same purpose.
The main challenge here is not only that theorem-proving logic is hard (with first-order classical predicate calculus already Turing-complete), but also that modalities, beliefs, sentiments, and hypothetical and counterfactual judgments often make the underlying knowledge structure intractable. On the other hand, simple relations, stated or implied by text elements, that can be mined or inferred from a ranked graph built from labeled dependency links provide a limited but manageable approximation of the text's deeper logic structure, especially when aggregated with generalizations and similarities provided by WordNet or the much richer Wikipedia knowledge graph. Given its effectiveness as an interactive content exploration tool, we plan future work on packaging our dialog engine as a set of Amazon Alexa skills for some popular Wikipedia entries, as well as for product reviews, FAQs and user manuals. Empirical evaluation of our keyphrase and summarization algorithms will be the subject of a separate paper, but preliminary tests indicate that both of them match or exceed the ROUGE scores of state-of-the-art systems \cite{deep19}. \section{Related work}\label{rel} \subsubsection*{Dependency parsing} The Stanford neural network based dependency parser \cite{stan} is now part of the Stanford CoreNLP toolkit,\footnote{\url{https://stanfordnlp.github.io/CoreNLP/}} which also comes with part-of-speech tagging, named entity recognition and co-reference resolution \cite{coreNLP}. Its evolution toward the use of Universal Dependencies \cite{ud14} makes systems relying on it potentially portable to the over {\bf 70} languages covered by the Universal Dependencies effort.\footnote{\url{https://universaldependencies.org/}} Of particular interest is the connection of dependency graphs to logic elements like predicate-argument relations \cite{Choi:2011}.
The automatic conversion of constituency trees to dependency graphs proposed by \citeN{choi:17a} provides a bridge allowing the output of high-quality statistically trained phrase structure parsers to be reused for the extraction of dependency links. {\em In this context, our novel contribution is that we analyze dependency links and the part-of-speech tags associated with their endpoints in order to build a unified document graph from which we extract SVO relations. By redirecting links to focus on nouns and sentences, we not only enable keyphrase and summary extraction from the resulting document graph but also facilitate its use for query answering in our dialog engine. } \subsubsection*{Graph based Natural Language Processing} TextRank \cite{EMNLP:TR,ijcnlp05} extracts keyphrases using word co-occurrence relations controlled by the distance between words: two vertices are connected if their corresponding lexical units co-occur within a sliding window ranging from 2 to 10 words. Sentence similarity is computed as content overlap, giving weights to the links that refine the original PageRank algorithm \cite{page98pagerank,brin98anatomy}. TextRank requires the elimination of stop words and obtains its best results when links are restricted to nouns and adjectives. \citeN{Erkan:2004} explore several graph centrality measures, and \citeN{radabook} offer a comprehensive overview of graph-based natural language processing and related graph algorithms. Graph-based and other text summarization techniques are surveyed by \citeN{NenkovaM12} and, more recently, by \citeN{textSum}. Besides ranking, elements like coherence via similarity with previously chosen sentences and avoidance of redundant rephrasings are shown to contribute to the overall quality of the summaries.
{\em The main novelty of our approach in this context is building text graphs from dependency links and integrating words and sentences in the same text graph, resulting in a unified algorithm that also enables relation extraction and interactive text mining. } \begin{comment} Beyond summaries obtained by aggregating important sentences extracted from a document, and possibly applying to them sentence compression techniques that remove redundant or less relevant words, new techniques are emerging for abstractive summarization \cite{abstractiveBing}. For this purpose, in the context of graph-based processing, one clearly benefits from as much syntactic and semantic information as possible, given also the need to synthesize new sentences subject to syntactic and semantic constraints. \end{comment} \subsubsection*{Relation Extraction} The relevance of dependency graphs for relation extraction has been identified in several papers. Among others, \citeN{AdolphsXLU11} point out their role as a generic interface between parsers and relation extraction systems. \citeN{depRelPat} identify several models grounded in syntactic patterns (e.g., subject-verb-object) that can be mined out of dependency graphs. Of particular interest for relation extraction facilitated by dependency graphs is the shortest path hypothesis, which prefers relating entities like predicates and arguments that are connected via a shortest path in the graph \cite{Bunescu:2005}. To facilitate their practical application to biomedical texts, \citeN{biorel} extend dependency graphs with richer sets of semantic features including ``is-a'' and ``part-of'' relations and co-reference resolution. The use of ranking algorithms in combination with WordNet synset links for word-sense disambiguation goes back as far as \citeN{coling04:pr}, which is in fact a prequel to TextRank \cite{EMNLP:TR}.
With the emergence of resources like Wikipedia, a much richer set of links and content elements has been used in connection with graph-based natural language processing \cite{wikiRank,AdolphsXLU11,wikify}. We currently extract our relations directly from the dependency graph and by using one-step-up and one-step-down links in the WordNet hypernym and meronym hierarchies. We plan extensions to integrate Wikipedia content via the {\tt dbpedia} database,\footnote{\url{https://wiki.dbpedia.org/}} and to extract more elaborate logic relations using a Prolog-based semantic parser like Boxer \cite{boxer15}. \subsubsection*{Logic Programming Systems for Natural Language Processing} A common characteristic of Prolog- or ASP-based NLP systems is their focus on closed domains with domain-specific logic expressed in clausal form \cite{actlan17,inclezan18,baral19,inclezan19}, although recent work (e.g., \citeN{naraction}) extracts action language programs from more general narratives. {\em As our main objective is the building of a practically useful dialog agent, and as we work with open-domain text and query-driven content retrieval, our focus is not on precise domain-specific reasoning mechanisms. By taking advantage of the Prolog representation of a document's content, we use reasoning about the extracted relations and ranking information to find the most relevant sentences derived from a given query and the recent dialog history. } \section{Conclusions}\label{conc} The key idea of the paper has evolved from our search for synergies between symbolic AI and emerging natural language processing tools built with machine learning techniques. It is our belief that these are complementary and that, working together, they will enable significant steps forward in natural language understanding. We have based our text graph on heterogeneous but syntactically and semantically meaningful text units (words and sentences), resulting in a web of interleaved links.
These links let words and sentences mutually recommend each other's highly ranked instances. Our fact extraction algorithm, in combination with the Prolog interface, has elevated the syntactic information provided by dependency graphs with semantic elements ready to benefit from logic-based inference mechanisms. Given the standardization brought by the use of {\em Universal Dependencies}, our techniques are likely to be portable to a large number of languages. The Prolog-based dialog engine supports spoken interaction with a conversational agent that exposes salient content of the document, driven by the user's interests. Its applications range from assistive technologies for visually impaired people and voice interaction with user manuals, to teaching from K-12 to graduate-level classes and interactive information retrieval from complex technical or legal documents. Last but not least, we have used our system's front end to generate a Prolog dataset derived from more than 2000 research papers. We make this dataset available to other researchers using logic programming based reasoners and content mining tools.\footnote{\url{http://www.cse.unt.edu/~tarau/datasets/PrologDeepRankDataset.zip}} \section*{Acknowledgment} We are thankful to the anonymous reviewers of {\bf PADL'2020} for their careful reading and constructive suggestions. \bibliographystyle{acmtrans}
2008.00957
\title{Free-Electron Shaping Using Quantum Light} \author{Valerio~Di~Giulio} \affiliation{ICFO-Institut de Ciencies Fotoniques, The Barcelona Institute of Science and Technology, 08860 Castelldefels (Barcelona), Spain} \author{F.~Javier~Garc\'{\i}a~de~Abajo} \email{javier.garciadeabajo@nanophotonics.es} \affiliation{ICFO-Institut de Ciencies Fotoniques, The Barcelona Institute of Science and Technology, 08860 Castelldefels (Barcelona), Spain} \affiliation{ICREA-Instituci\'o Catalana de Recerca i Estudis Avan\c{c}ats, Passeig Llu\'{\i}s Companys 23, 08010 Barcelona, Spain} \begin{abstract} Controlling the wave function of free electrons is important to improve the spatial resolution of electron microscopes, the efficiency of electron interaction with sample modes of interest, and our ability to probe ultrafast materials dynamics at the nanoscale. In this context, attosecond electron compression has been recently demonstrated through interaction with the near fields created by scattering of ultrashort laser pulses at nanostructures followed by free electron propagation. Here, we show that control over electron pulse shaping, compression, and statistics can be improved by replacing coherent laser excitation by interaction with quantum light. We find that compression is accelerated for fixed optical intensity by using phase-squeezed light, while amplitude squeezing produces ultrashort double-pulse profiles. The generated electron pulses exhibit periodic revivals in complete analogy to the optical Talbot effect. We further reveal that the coherences created in a sample by interaction with the modulated electron are strongly dependent on the statistics of the modulating light, while the diagonal part of the sample density matrix reduces to a Poissonian distribution regardless of the type of light used to shape the electron.
The present study opens a new direction toward the generation of free electron pulses with additional control over duration, shape, and statistics, which directly affect their interaction with a sample. \end{abstract} \maketitle \date{\today} \tableofcontents \setcounter{equation}{0} \section{Introduction} The exploration of ultrafast phenomena generally relies on the use of short probe pulses, such as those provided by femtosecond visible-infrared lasers and attosecond x-ray sources \cite{PTB01,CK07,KI09}. Electrons can potentially reach much shorter durations than light for typical beam energies in the $10^2$-$10^5\,$eV range, as they are characterized by oscillation periods of 20-0.02\,as. Electron pulse compression is also crucial for free-electron lasers \cite{MT10}, which rely on the $\propto N^2$ superradiant emission produced by $N$ electrons when acting as a single point charge. With applications such as imaging, spectroscopy, and light generation in view, strong interest has arisen in manipulating the free electron density matrix using light. Triggered by the advent of so-called photon-induced near-field electron microscopy (PINEM) \cite{BFZ09}, a long series of experimental \cite{BFZ09,KGK14,PLQ15,FES15,paper282,EFS16,KSE16,RB16,VFZ16,KML17,KES17,FBR17,PRY17,paper306,paper311,MB18,MB18_2,paper332,DNS20,KLS20,WDS20} and theoretical \cite{paper151,PLZ10,PZ12,B17_2,paper272,paper312,K19,RML19,paper347} studies have demonstrated that interaction with the optical near fields scattered from illuminated nanostructures provides an efficient way to manipulate the temporal and spatial distribution of free electrons.
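The quoted 20-0.02\,as range can be checked numerically if one interprets the electron oscillation period as the de Broglie wavelength divided by the velocity, $T=\lambda/v=h/(pv)$ (an interpretive assumption on our part; constants and function names below are ours):

```python
import math

H_EVS = 4.135667696e-15   # Planck constant (eV s)
MEC2 = 510998.95          # electron rest energy (eV)

def oscillation_period_as(kinetic_eV):
    """Period lambda/v = h/(p*v) for a free electron, in attoseconds."""
    E = kinetic_eV + MEC2              # total energy (eV)
    pc = math.sqrt(E**2 - MEC2**2)     # momentum times c (eV)
    beta = pc / E                      # v/c
    return H_EVS / (pc * beta) * 1e18  # h/(p*v), converted to as

print(oscillation_period_as(1e2))   # ~20.7 as at 100 eV
print(oscillation_period_as(1e5))   # ~0.023 as at 100 keV
```

The relativistic kinematics matter at the upper end of the energy range: at $100$\,keV the period is three orders of magnitude shorter than at $100$\,eV.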
In PINEM, electron and light pulses are made to interact in the presence of a sample, giving rise to multiple photon exchanges between the optical field and the electron, and leading to comb-like energy spectra characterized by sidebands that are associated with different numbers of exchanged photons and separated from the incident electron energy by a multiple of the photon energy. Recent experiments have measured hundreds of such sidebands produced through suitable combinations of sample geometry and illumination conditions \cite{DNS20,KLS20}. Additionally, electron pulse compression has been observed upon free propagation of PINEM-modulated electrons over a sufficiently long distance \cite{KML17,PRY17,MB18_2,MB18}. The electron transforms into a series of pulses with duration down to the attosecond regime \cite{KML17,PRY17}, which can be made even smaller by increasing the strength of the PINEM light \cite{B17_2}. While this type of electron-light interaction affects only the longitudinal part of the electron wave function, lateral control can be achieved either by the use of electron phase masks \cite{VTS10,VGC14,VBM18,SLR19} or by modulating the optical field with a transverse spatial resolution limited by the light wavelength, and more generally, by the polariton wavelength when relying on the excitation of optical modes in material surfaces. By analogy to elastic electron diffraction by light gratings in free space (the Kapitza-Dirac effect \cite{KD1933,FAB01,FB02}), which has been shown to also enable the formation of vortex beams \cite{HSB15}, surface-plasmon standing waves can produce intense inelastic electron diffraction \cite{paper272}, as confirmed by the observation of discrete electron beam deflection upon absorption or emission of a given number of photons reflected from a thin metal plate \cite{paper311}.
Similarly, optical near fields can transfer orbital angular momentum \cite{paper312}, also demonstrated through the synthesis and observation of vortex electron beams produced by inelastic interaction with chiral near fields \cite{paper332}. As a practical application of these phenomena, lateral phase imprinting on electron beams through optical fields has been recently proposed to provide a viable approach to aberration correction and lateral electron beam profiling \cite{paper351}. By sweeping the photon energy of the light used for PINEM interaction, the near field experienced by the electrons undergoes amplitude modulations that map the optical response of the sample. This strategy has been proposed as a form of spectrally-resolved microscopy that can combine the subnanometer spatial focusing of electron beams \cite{BDK02} with an excellent energy resolution limited by the spectral width of the light source \cite{H99,paper114}. A first demonstration of this possibility has enabled spatial mapping of plasmons in silver nanowires with $\sim20\,$meV energy resolution without any need for electron monochromators \cite{paper306}, a result that rivals the energy resolution achieved through state-of-the-art electron energy-loss spectroscopy \cite{KLD14}. The above studies rely on coherent light, such as that generated by laser sources, while an extension to quantum optical fields has been recently predicted to introduce quantum effects in the electron spectra \cite{paper339}. Quantum light thus presents an opportunity to further manipulate the electron wave function in applications such as pulse compression and modulation of the electron statistics. Here, we show that a wide range of electron statistics can be reached through interaction of free electrons with quantum light.
Besides changing the focusing properties of the optically-modulated electrons, this interaction reveals a strong dependence of the electron density matrix on the statistics of the light field, which can be observed in a self-interference setup. Specifically, we show that interaction with phase-squeezed and minimum-phase-uncertainty light sources produces faster compression of the electron, while amplitude-squeezed light gives rise to ultrashort double-pulse electron profiles. Additionally, we find that the interaction of the modulated electron with a target produces a Poissonian distribution of sample excitations with off-diagonal coherences that are strongly dependent on the statistics of the light used to modulate the electron. Besides the fundamental interest of this wealth of phenomena, we envision applications in the control of electron compression and in the generation of light with nontrivial statistics. \section{Electron density matrix produced upon PINEM interaction} \label{quantumPINEM} \subsection{The quantum PINEM interaction} \label{quantumPINEMinteraction} Free electron-light interaction has been extensively studied under the assumption of classical illumination \cite{paper151,PLZ10}. An extension to describe the quantum evolution of the joint electron-light state has been recently presented \cite{paper339}, which we use here to investigate the modification produced in the electron density profile following propagation after PINEM interaction with nonclassical light. We first provide a succinct summary of this quantum formalism. We consider the sample response to be dominated by a single bosonic optical mode oscillating at frequency $\omega_0$ and characterized by an electric-field distribution $\vec{\mathcal{E}}_0({\bf r})$ defined as either a normal \cite{GL91} or a quasi-normal \cite{FHD19} bosonic mode.
In addition, we assume that the electron always consists of a superposition of states with relativistic momentum and energy tightly focused around $\hbar{\bf k}_0$ and $E_0$ (i.e., having small uncertainties compared with $\hbar\omega_0/v$ and $\hbar\omega_0$, respectively, where $v$ is the electron velocity). Also, we ignore nonunitary elements in the dynamics by considering that the electron-light interaction happens on a fast time scale compared with the decay of the bosonic mode. These assumptions allow us to linearize the electron kinetic energy operator (nonrecoil approximation). Starting from the Dirac equation \cite{S1994} and following an approach inspired by quantum optics methods \cite{SZ97} with an electromagnetic gauge in which the scalar potential is zero, the effective Hamiltonian of the system can be approximated by the noninteraction and interaction pieces \cite{paper339} \begin{subequations} \begin{align} \hat{\mathcal{H}}_0&=\hbar \omega_0 a^\dagger a + E_0 -\hbar \vb \cdot ({\rm i} \nabla + {\bf k}_0), \\ \hat{\mathcal{H}}_1&= -{\rm i} (e \vb/\omega_0) \cdot \left[\vec{\mathcal{E}}_0({\bf r}) a - \vec{\mathcal{E}}_0^*({\bf r}) a^\dagger\right], \end{align} \label{H0H1} \end{subequations} respectively, where $a$ and $a^\dagger$ are annihilation and creation operators of the bosonic optical mode, and $\vb =\hbar{\bf k}_0c^2/E_0=v\zz$ is the electron velocity vector, taken to be along $\zz$. We remark that the aforementioned QED model accurately reproduces the electron-field dynamics when spin-flips, ponderomotive forces, and electron recoil can be safely disregarded.
However, in situations departing from these conditions, the full minimal-coupling Hamiltonian has to be considered, and thus, numerical integration provides a more suitable method to explore the resulting physics \cite{T17,T18,T20}. We can then write the solution for the electron-optical mode wave function as a sum of energy sidebands, each of them describing the amplitude associated with a net exchange of $\ell$ quanta with the optical mode ($\ell>0$ for electron energy gain and $\ell<0$ for loss). More precisely, we have (see Ref.\ \cite{paper339} and Appendix\ \ref{appendixA}) \begin{align} |\psi({\bf r},t)\rangle=&\psi_{\rm inc}({\bf r},t)\!\!\sum_{\ell=-\infty}^\infty\sum_{n=0}^\infty\ee^{{\rm i} \omega_0[\ell(z/v-t)-n t]}f_\ell^n({\bf r})|n\rangle, \label{eq:solution} \end{align} where ${\bf r}$ denotes the electron coordinate, $|n\rangle$ runs over Fock states of the optical field, $\psi_{\rm inc}({\bf r},t)$ is the incident electron wave function, and the amplitude coefficients admit the closed-form expression \begin{align} f_\ell^n=&\ee^{{\rm i} (\chi+\ell {\rm arg}\{-\beta_0\})}\, \alpha_{n+\ell}\, F_\ell^n \label{eq:intcoeff}\\ F_\ell^n=&|\beta_0|^{\ell}\ee^{-|\beta_0|^2/2}\sqrt{(n+\ell)!n!} \!\!\!\!\!\!\!\!\!\! \sum_{n'={\rm max}\{0,-\ell\}}^n \!\!\!\!\!\!\!\!\!\! \frac{(-|\beta_0|^2)^{n'}}{n'!(\ell+n')!(n-n')!}, \nonumber \end{align} with \begin{align} \beta_0(\Rb,z)=\frac{e}{\hbar\omega_0}\int_{-\infty}^z dz'\,\mathcal{E}_{0,z}(\Rb,z')\ee^{-{\rm i} \omega_0 z'/v} \nonumber \end{align} acting as a single-mode coupling coefficient and $\chi=(-e/\hbar\omega_0)\int_{-\infty}^z dz'\,{\rm Im}\{\beta_0^*(\Rb,z')\mathcal{E}_{0,z}(\Rb,z')\ee^{-{\rm i} \omega_0 z'/v}\}$ representing a global phase that is irrelevant in the present study. A dependence on lateral coordinates $\Rb=(x,y)$ is imprinted by the spatial distribution of the optical mode field. In the initial state (i.e., before quanta exchanges), only $\ell=0$ terms are present, so we can write $f^n_\ell(z\rightarrow -\infty)=\delta_{\ell0}\alpha_n$, where the amplitudes $\alpha_n$ define the starting optical boson field, which must satisfy the normalization condition \begin{align} \sum_n|\alpha_n|^2=1. \label{sumalphan} \end{align} Interestingly, the number of excitations $n'=n+\ell$ is conserved along the temporal evolution of the system \cite{paper339}, thus allowing us to propagate each initial $n'$ component separately and multiply it by the initial boson amplitude $\alpha_{n+\ell}$ when writing Eq.\ (\ref{eq:intcoeff}). Because the expansion coefficients defined in this equation are obtained from the evolution operator \cite{paper339}, they satisfy the normalization condition $\sum_{\ell n} |f_\ell^n|^2=\sum_{\ell n'} |\alpha_{n'}F_\ell^{n'-\ell}|^2=1$ for any optical field, which leads to the condition \begin{align} \sum_\ell(F_\ell^{n-\ell})^2=1 \label{sumFln} \end{align} satisfied for any $n$.
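The closed-form coefficients $F_\ell^n$ and the sum rule $\sum_\ell(F_\ell^{n-\ell})^2=1$ lend themselves to a direct numerical check. The following sketch (our own helper names; the truncation of the $\ell$ sum is our choice, justified by the factorial decay of the terms) uses exact integer factorials:

```python
import math

def F(l, n, b0):
    """Closed-form PINEM coefficient F_l^n for real coupling |beta_0| = b0."""
    if n < 0 or n + l < 0:
        return 0.0
    s = sum((-b0**2)**k / (math.factorial(k) * math.factorial(l + k)
                           * math.factorial(n - k))
            for k in range(max(0, -l), n + 1))
    return (b0**l * math.exp(-b0**2 / 2)
            * math.sqrt(math.factorial(n + l) * math.factorial(n)) * s)

# Sum rule: for a fixed number of total excitations n, sum_l (F_l^{n-l})^2 = 1.
for b0 in (0.2, 1.0):
    for n in range(5):
        norm = sum(F(l, n - l, b0)**2 for l in range(-25, n + 1))
        assert abs(norm - 1) < 1e-8
```

For $n=0$ the identity reduces to the Poissonian sum $\ee^{-|\beta_0|^2}\sum_k|\beta_0|^{2k}/k!=1$, which provides a quick analytic cross-check.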
Electron propagation prior to interaction is described through the linearized Hamiltonian $\hat{\mathcal{H}}_0$, which essentially assumes that the electron beam is well collimated and energy dispersion is negligible in the PINEM interaction region, such that we can write \begin{align} \psi_{\rm inc}({\bf r},t)=\ee^{{\rm i}{\bf k}_0\cdot {\bf r}-{\rm i} E_0t/\hbar}\phi({\bf r}-\vb t), \nonumber \end{align} where $\phi$ is a slowly varying function of relative position ${\bf r}-\vb t$. Importantly, Eq.\ (\ref{eq:intcoeff}) prescribes that the evolution of the electron-boson system is uniquely determined by the nondimensional coupling parameter $\beta_0$ in combination with the amplitudes $\alpha_n$ defining the initial optical wave function. In what follows, we assume no dependence on $\Rb$ (see below) and set $\beta_0\equiv\beta_0(z\rightarrow\infty)$ because we are interested in studying free-electron propagation after PINEM interaction has taken place, even though this dependence plays a fundamental role in the observed transfer of orbital angular momentum between photons and electrons \cite{paper332}, and in addition, it could be useful to correct electron beam aberrations \cite{paper351}. Nevertheless, the coefficients of the quantum light state in Eq.\ (\ref{eq:intcoeff}) could provide an additional knob to further intertwine longitudinal and transverse electron degrees of freedom beyond what is possible using classical light. Additionally, they could affect the maximum achievable probability associated with specific PINEM sidebands, as well as the dependence on pulse duration, which also deserve further study.
\begin{figure*} \centering{\includegraphics[width=1.0\textwidth]{Fig1}} \caption{{\bf Talbot effect and electron compression with classical light.} (a) An electron Gaussian wave packet (green) is transformed through PINEM interaction followed by propagation along a distance $z$ into a substantially modified electron density profile in the propagation-distance-shifted time $\tau=t-z/v$ due to superposition of different energy components. (b) Electron density profile (vertical $\tau$ coordinate) as a function of propagation distance $z$ (horizontal axis) after PINEM interaction with coherent light. We consider $100$\,keV electrons, a photon energy $\hbar\omega_0=1.5$\,eV, and a coupling coefficient $|\beta|=5$. Trains of compressed electron pulses are periodically observed at discrete multiple values of the Talbot propagation distance $z_T$. (c-e) Details of the $\tau$-$z$ map in (b) corresponding to the color-matched square regions of $z$ width $\Delta=4\,$mm. (f) Same as (e), but for $z$ near $2z_T$.} \label{Fig1} \end{figure*} \subsection{Effect of free propagation} Our purpose is to investigate the electron characteristics after free propagation over a macroscopic distance of several mm from the PINEM interaction region [see Fig.\ \ref{Fig1}(a)]. We identify in Eq.\ (\ref{eq:solution}) a propagation phase $\ee^{{\rm i} k_\ell z}$ associated with each $\ell$ sideband, in which the electron wave vector is replaced by its linearized nonrecoil version $k_\ell\approx k_0 + \ell \omega_0/v$.
While this approximation does accurately describe propagation over the relatively small extension of the PINEM interaction region, the exact expression \begin{align} k_\ell&=\hbar^{-1}\!\sqrt{E_\ell^2/c^2-m_{\rm e}^2 c^2} \label{klexpan}\\ &\approx k_0+\ell\omega_0/v-2\pi\ell^2/z_T+\cdots, \nonumber \end{align} needs to be used to deal with arbitrarily long propagation distances $z$. The expansion truncated at the second-order correction, characterized by a distance \begin{align} z_T=4\pi\,m_{\rm e} v^3\gamma^3/\hbar\omega_0^2 \label{zT} \end{align} (e.g., $z_T\approx159\,$mm for $\hbar\omega_0=1.5\,$eV and 100\,keV electrons), is sufficiently accurate under the conditions considered here, giving rise to numerical results that are indistinguishable from the full expression in the examples shown below. Our purpose is to study the propagating electron alone and dismiss any entanglement with the PINEM optical field. We thus consider the electron density matrix, obtained from the pure-joint-state density matrix $|\psi(z,t)\rangle\langle\psi(z',t)|$ by tracing out the optical degrees of freedom: \begin{align} \rho(z,z',t)=\sum_{n=0}^\infty \psi_n(z,t)\psi_n^*(z',t), \label{rho1} \end{align} with \begin{align} \psi_n(z,t)=\phi(z-vt) \sum_{\ell=-\infty}^\infty \alpha_{n+\ell}\, F_\ell^n \ee^{{\rm i} k_\ell z-{\rm i}\ell\omega_0(t-t_p)}, \nonumber \end{align} where the phase of $\beta_0$ enters only through a time shift $t_p={\rm arg}\{-\beta_0\}/\omega_0$. We remark here that the mathematical operation of tracing out the degrees of freedom associated with the photonic mode to obtain a density matrix for the electron subsystem is physically justified by the fact that this operation ensures the correct measurement statistics if one only needs to measure electron properties (i.e., without performing any measurement on the rest of the system) \cite{NC04}.
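Eq.\ (\ref{zT}) is straightforward to evaluate; the short sketch below (our own constants and function name, CODATA values) reproduces the quoted $z_T\approx159$\,mm for 100\,keV electrons and $\hbar\omega_0=1.5$\,eV:

```python
import math

HBAR = 1.054571817e-34    # J s
ME = 9.1093837015e-31     # electron mass (kg)
C = 2.99792458e8          # m/s
EV = 1.602176634e-19      # J per eV

def talbot_distance(kinetic_eV, photon_eV):
    """z_T = 4*pi*m_e*v^3*gamma^3 / (hbar*omega0^2), in meters."""
    gamma = 1 + kinetic_eV * EV / (ME * C**2)
    v = C * math.sqrt(1 - 1 / gamma**2)
    omega0 = photon_eV * EV / HBAR
    return 4 * math.pi * ME * v**3 * gamma**3 / (HBAR * omega0**2)

print(talbot_distance(1e5, 1.5))  # ~0.159 m for 100 keV, 1.5 eV
```

Note the strong dependence on beam energy through $v^3\gamma^3$ and on photon energy through $\omega_0^{-2}$, which sets the length scale of all the revival patterns discussed below.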
We note that diffraction effects involving the transverse evolution of the wave function are disregarded. Under attainable experimental conditions, an initial 100\,keV electron beam with $\varphi\sim50\,\mu$rad divergence, focused to a $2/(k_0\varphi)\sim25\,$nm spot over the PINEM interaction region, becomes just a factor of $\sim2$ wider after free propagation over a distance $z\sim1\,$mm due to diffraction. In addition, the results here presented are valid under the assumption that $\phi(z-vt)$ involves a sufficiently narrow wave vector decomposition to neglect corrections beyond the linear energy dependence of the wave vector during the propagation distances under consideration, so $\phi$ enters the electron density matrix just as a broad envelope factor. However, we note that these assumptions may break in scenarios involving slow electrons ($E_0\lesssim10^2$ eV) or very strong electron-field coupling, in which the ponderomotive force can lead to a non-negligible beam spreading after interaction with the sample \cite{T20}. \subsection{Talbot effect and periodicity of the density matrix} Retaining just up to $\ell^2$ corrections in Eq.\ (\ref{klexpan}) for $k_\ell$ and considering relative positions $|z-z'|\ll z_T$, we can recast the electron density matrix (Eq.\ (\ref{rho1})) as \begin{align} \rho(z,z',t)=\ee^{{\rm i} k_0(z-z')}\phi(z-vt)\phi^*(z'-vt)\tilde\rho(z,\tau,\tau'), \nonumber \end{align} where \begin{align} \tilde\rho(z,\tau,\tau')=\sum_{n\ell\ell'} &\alpha_{n+\ell}\alpha_{n+\ell'}^* \; F_\ell^n F_{\ell'}^n \label{rhotilde}\\ &\times\ee^{2\pi{\rm i}\left[({\ell'}^2-\ell^2)z/z_T+(\ell'\tau'-\ell\tau)/\tau_0\right]}, \nonumber \end{align} $\tau=t-t_p-z/v$, and $\tau'=t-t_p-z'/v$.
Disregarding the trivial phase propagation factor $\ee^{{\rm i} k_0(z-z')}$ and the slowly varying envelope introduced by $\phi$, the density matrix is periodic in both of the time-shifted coordinates $\tau$ and $\tau'$ with the same period as the optical cycle $\tau_0=2\pi/\omega_0$. Additionally, we find that $\tilde\rho(z,\tau,\tau')$ portrays a periodic pattern as a function of propagation distance $z$ similar to the Talbot effect \cite{T1836,R1881,LS1971_2,L1988,NT93}, with a period given by $z_T$ (Eq.\ (\ref{zT})). To illustrate this effect, we plot in Fig.\ \ref{Fig1}(b) the diagonal elements $\rho(z,z,t)=\sum_{n=0}^\infty \left|\psi_n(z,t)\right|^2$ normalized to the envelope density $|\phi(z-vt)|^2$ for coherent light illumination, which represent the scaled electron density profile as a function of time and propagation distance $z$ from the PINEM interaction region, calculated in the high-fluence classical limit (see below). Incidentally, off-diagonal elements are also considered and represented below in Fig.\ \ref{Fig4}. The plot clearly reveals a train of temporally focused electron pulses at $z\sim1.5\,$mm, followed by a series of focusing revivals at intervals of $z_T\approx159\,$mm and accompanied by temporally shifted revivals at fractional values of the Talbot distance $z_T$ \cite{BK96}. \section{Electron pulse compression with different optical mode statistics} Before analyzing the effect of light statistics in the evolution of the electron after PINEM interaction, we remark that the previous formalism is only valid for pure initial optical states, whose density matrix is given by $\sum_{nn'}\alpha_n\alpha_{n'}^*|n\rangle\langle n'|$. In contrast, for a perfect mixture (i.e., an initial optical density matrix $\sum_n|\alpha_n|^2|n\rangle\langle n|$ with no coherences), the outcome of interaction and propagation has to be separately calculated for each Fock state $|n\rangle$ and then averaged incoherently.
Using the normalization conditions of Eqs.\ (\ref{sumalphan}) and (\ref{sumFln}), we find an electron density matrix $\tilde{\rho}(z,\tau,\tau')=1$; that is, the density profile is not modulated by interference between different energy components after PINEM interaction. We note that a well-defined optical Fock state belongs to this category and thus does not produce changes in the electron density matrix either. \begin{figure} \centering{\includegraphics[width=0.45\textwidth]{Fig2}} \caption{{\bf Electron compression using squeezed light.} (a-d) Evolution of the electron density profile following PINEM interaction with (a) classical, (b) MPU, (c) phase-squeezed, and (d) amplitude-squeezed light using a single-mode coupling coefficient $|\beta_0|=0.2$ and average population $\bar{n}=625$ (i.e., $|\beta|=\sqrt{\bar{n}}|\beta_0|=5$). (e) FWHM [see panel (a)] of the compressed electron density in (a-d) as a function of propagation distance $z$. (f) Minimum in the FWHM along the curves in (e) as a function of coupling coefficient $|\beta|$ (varying $|\beta_0|$ and keeping $\bar{n}=625$). We consider $100$\,keV electrons and a $1.5$\,eV photon energy.} \label{Fig2} \end{figure} \subsection{High-fluence and classical limits} Electron coupling to a single optical mode is generally weak and therefore characterized by a small coupling coefficient $|\beta_0|\ll1$ (e.g., we set $|\beta_0|=0.2$ here, a feasible value for coupling to Mie and plasmon modes in nanoparticles \cite{paper339}). Still, a strong PINEM effect can be produced with a high average number of photons $\bar{n}=\sum_n n|\alpha_n|^2$, although only sidebands $|\ell|\ll\bar{n}$ can then be efficiently populated. In this limit, using the Stirling formula to approximate the factorials containing $n$ in Eq.\ (\ref{eq:intcoeff}), we find (see Appendix\ \ref{appendixB}) \begin{align} F_\ell^n\approx J_\ell(2\sqrt{n}|\beta_0|).
\label{largen} \end{align} Additionally, if the optical mode is prepared in a coherent state (e.g., by exciting it with laser light), its population follows a Poissonian distribution $|\alpha_n|^2=\ee^{-\bar{n}}\,\bar{n}^{n}/n!$, which approaches a normal distribution \cite{F1968} $|\alpha_n|^2\approx\ee^{-(n-\bar{n})^2/2\bar{n}}/\sqrt{2\pi \bar{n}}$ for $\bar{n}\gg1$. Introducing this expression in Eq.\ (\ref{rhotilde}), approximating $n\approx\bar{n}$ in Eq.\ (\ref{largen}), and using the normalization condition $\sum_n|\alpha_n|^2=1$, we can write the density matrix in the high-fluence classical limit as \begin{align} \tilde\rho(z,\tau,\tau')\approx \psi_{\rm cl}(z,\tau)\psi_{\rm cl}^*(z,\tau'), \nonumber \end{align} where \begin{align} \psi_{\rm cl}(z,\tau)=\sum_{\ell} &J_\ell(2|\beta|)\ee^{-2\pi{\rm i}(\ell^2z/z_T+\ell\tau/\tau_0)} \nonumber \end{align} and \begin{align} \beta=\sqrt{\bar{n}}\beta_0 \label{beta} \end{align} is the effective coupling coefficient, which scales with the square root of the light intensity used to excite the optical mode. This result is consistent with previous theoretical \cite{FES15,B17_2} and experimental \cite{PRY17,MB18_2} studies of free propagation after high-fluence classical PINEM interaction. Electron compression and Talbot revivals in this limit are shown in Fig.\ \ref{Fig1}(b) for coherent illumination with $|\beta_0|=0.2$ and $|\beta|=5$, while a zoom of the focal region is presented in Fig.\ \ref{Fig2}(a).
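In this classical limit the density profile $\tilde\rho(z,\tau,\tau)=|\psi_{\rm cl}(z,\tau)|^2$ is cheap to compute. The sketch below (our own helper names; the Bessel functions come from their standard integral representation, and the scanned $z$ values are our choice guided by the figures) checks that the density is exactly uniform right after interaction, where PINEM imprints a pure phase modulation, and develops strong temporal peaks a few mm downstream:

```python
import cmath, math

def bessel_j(l, x, steps=2000):
    """J_l(x) via the integral representation (1/pi) * int_0^pi cos(l*t - x*sin t) dt."""
    h = math.pi / steps
    s = sum(math.cos(l * t - x * math.sin(t))
            for t in (h * (k + 0.5) for k in range(steps)))
    return s * h / math.pi

beta, zT = 5.0, 0.1586          # coupling and Talbot distance (m), Fig. 1 parameters
L = 30                           # sideband cutoff |l| <= L (J_l(10) is negligible beyond)
J = {l: bessel_j(l, 2 * beta) for l in range(-L, L + 1)}

def density(z, tau_over_t0):
    """|psi_cl|^2 at propagation distance z (m) and reduced time tau/tau_0."""
    psi = sum(J[l] * cmath.exp(-2j * math.pi * (l**2 * z / zT + l * tau_over_t0))
              for l in range(-L, L + 1))
    return abs(psi)**2

taus = [k / 400 for k in range(400)]
flat = [density(0.0, t) for t in taus]            # right after interaction
assert max(flat) - min(flat) < 1e-6               # uniform: pure phase modulation
peak = max(max(density(z * 1e-3, t) for t in taus) for z in (1.0, 1.5, 2.0))
assert peak > 2.5                                 # temporal focusing downstream
```

The uniformity at $z=0$ follows from the identity $\sum_\ell J_\ell(x)\ee^{-{\rm i}\ell\theta}=\ee^{-{\rm i} x\sin\theta}$, so all the temporal structure in Fig.\ \ref{Fig1}(b) is generated by the quadratic ($\ell^2$) propagation phase.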
Interestingly, for any population of the optical mode that is smooth and strongly peaked around $\bar{n}\gg1$, we can approximate $\alpha_{n+\ell}\approx\alpha_{n}$ for $|\ell|\ll n$, so the wave function completely separates into light and electron components in Eq.\ (\ref{eq:solution}), which becomes $|\psi({\bf r},t)\rangle\approx\left\{\sum_{n=0}^\infty\alpha_{n}\ee^{-{\rm i} n\omega_0t}|n\rangle\right\}\times\left\{\psi_{\rm inc}({\bf r},t)\sum_{\ell=-\infty}^\infty\ee^{{\rm i} (\chi+\ell {\rm arg}\{-\beta\})}\,J_\ell(2|\beta|)\,\ee^{{\rm i}\ell\omega_0(z/v-t)}\right\}$, in agreement with a well-known expression for PINEM with classical light \cite{paper311}. \subsection{Coherent squeezed light} We now explore squeezed light as an experimentally feasible alternative to classical laser light to excite the PINEM optical mode. Single-mode coherent squeezed states $D(g)S(\zeta)|0\rangle$ are defined by applying the displacement and squeezing operators, $D(g)=\exp(ga^\dagger-g^*a)$ and $S(\zeta)=\exp\left[(\zeta^*aa-\zeta a^{\dagger}a^{\dagger})/2\right]$, to the optical vacuum \cite{LK1987}. Writing the squeezing parameter as $\zeta=\ee^{{\rm i}\theta}s$, one can express the expansion coefficients of these states in the number basis representation as \begin{align} \alpha_n=\frac{\left(\xi/2\right)^{n/2}}{\sqrt{n!\cosh{s}}}\; \ee^{-(|g|^2+g^{*2}\xi)/2} H_n\left[\frac{g+g^*\xi}{\sqrt{2\xi}}\right],\nonumber \end{align} where $\xi=\ee^{{\rm i}\theta}\tanh{s}$ and $H_n$ is the Hermite polynomial of order $n$. These coefficients reduce to those of a coherent state for $s=0$. The average photon number is given by $\bar{n}=|g|^2+\sinh^2 s$, while $\alpha_n$ depends on the phases of $g$ and $\zeta$ through the combination $\varphi={\rm arg}\{g\}-\theta/2$.
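These expressions admit a simple consistency check: generating the $\alpha_n$ with the Hermite recurrence $H_{n+1}(x)=2xH_n(x)-2nH_{n-1}(x)$ and testing $\sum_n|\alpha_n|^2=1$ and $\bar{n}=|g|^2+\sinh^2 s$. A sketch for real $g>0$, $\theta=0$, $s>0$ (the amplitude-squeezed case), with parameter values of our own choosing:

```python
import math

def squeezed_coeffs(g, s, nmax=80):
    """alpha_n of D(g)S(zeta)|0> for real g > 0 and zeta = s > 0 (theta = 0)."""
    xi = math.tanh(s)
    x = (g + g * xi) / math.sqrt(2 * xi)        # Hermite-polynomial argument
    pref = math.exp(-(g * g + g * g * xi) / 2) / math.sqrt(math.cosh(s))
    coeffs = []
    h_prev, h = 0.0, 1.0                        # H_{n-1}, H_n starting at n = 0
    for n in range(nmax + 1):
        coeffs.append(pref * (xi / 2)**(n / 2) * h
                      / math.sqrt(math.factorial(n)))
        h_prev, h = h, 2 * x * h - 2 * n * h_prev   # advance to H_{n+1}
    return coeffs

a = squeezed_coeffs(g=2.0, s=0.5)
norm = sum(c * c for c in a)
nbar = sum(n * c * c for n, c in enumerate(a))
assert abs(norm - 1) < 1e-6
assert abs(nbar - (4 + math.sinh(0.5)**2)) < 1e-3   # nbar = |g|^2 + sinh^2 s
```

Setting $s=0$ (in the limit, before the $1/\sqrt{2\xi}$ division) recovers the Poissonian coherent-state amplitudes, which makes the recurrence-based generator a convenient common front end for all the light statistics compared below.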
In particular, the variance takes minimum and maximum values for $\varphi=0$ and $\pi/2$, corresponding to amplitude- and phase-squeezed states, respectively \cite{LK1987}. We consider the two extreme possibilities of PINEM interaction with purely phase- and amplitude-squeezed light in Fig.\ \ref{Fig2}(c,d), where we plot the density profile $\rho(z,z,t)=\tilde{\rho}(z,\tau,\tau)$ as a function of propagation distance $z$ for fixed coupling strength [$|\beta|=5$, obtained with $\bar{n}=625$ and $|\beta_0|=0.2$, see Eq.\ (\ref{beta})]. Electron focusing takes place at a similar propagation distance $z\sim2\,$mm for all light statistics under consideration. When the illumination has classical [Fig.\ \ref{Fig2}(a)] or amplitude-squeezed [Fig.\ \ref{Fig2}(d)] statistics, the density shows oscillations as a function of relative time $\tau$ before focusing. These oscillations disappear with phase-squeezed light [Fig.\ \ref{Fig2}(c)]. Additionally, the latter produces a focal spot spanning a larger interval of propagation distances $z$ and emerging at a shorter value of $z$ in comparison with classical light [Fig.\ \ref{Fig2}(e)]. The behavior with amplitude-squeezed light is the opposite, and in particular, the minimum full width at half maximum (FWHM) of the focal spot is approximately twice as large as the result obtained with phase-squeezed or classical light. As already discussed for classical light \cite{B17_2}, the degree of compression increases with increasing coupling $|\beta|$ [Fig.\ \ref{Fig2}(f)]. Incidentally, upon visual inspection of the $z$-$\tau$ pattern for coherent-state illumination in Fig.\ \ref{Fig2}(a), smoothing along $z$ would lead to vertical elongation of the density features, similar to those obtained using phase-squeezed light [Fig.\ \ref{Fig2}(c)]; in contrast, smoothing along $\tau$ would produce a pattern more similar to that of amplitude-squeezed illumination [Fig.\ \ref{Fig2}(d)].
This is consistent with the intuitive picture that phase-squeezing should generate sharper features in the wave function snapshots (i.e., narrower peaks as a function of $\tau$, accompanied by broadening along $z$ in order to preserve the total electron probability); conversely, amplitude-squeezed light should produce the opposite effect (broadening along $\tau$ and sharpening along $z$). \begin{figure} \centering{\includegraphics[width=0.45\textwidth]{Fig3}} \caption{{\bf Tailoring the electron wave packet with amplitude-squeezed light.} (a-c) Electron density profile produced by PINEM interaction with classical (dashed curves) and amplitude-squeezed (solid curves) light after propagation over a distance $z$ as indicated by labels. The electron-light coupling coefficient is $|\beta|=5$, with $|\beta_0|=0.2$ and $\bar{n}=625$. (d) Evolution of the density profile using amplitude-squeezed light for different coupling strengths $|\beta|$ obtained by varying $|\beta_0|$ with $\bar{n}=625$. We consider $100$\,keV electrons and a photon energy of $1.5$\,eV in all cases.} \label{Fig3} \end{figure} \subsubsection{Synthesis of double-peak electron pulses} Although PINEM interaction with amplitude-squeezed light renders comparatively poorer focusing, it shows an interesting double-peak pattern for $z$ below the focal spot. This effect, which is already observed in Fig.\ \ref{Fig2}(d), is analyzed in more detail in Fig.\ \ref{Fig3} for different degrees of squeezing. We also show in the same figure the profiles obtained with classical light, revealing amplitude squeezing as a better strategy to produce such a double-pulse pattern. We remark that the width and distance between the two pulses can be controlled by varying the coupling strength parameter $|\beta|$ [Fig.\ \ref{Fig3}(d)].
Related to this, we note that a recent experiment \cite{MB20} has shown that a double-peak electron density profile can be achieved by exploiting classical mid-infrared single-cycle laser pulses. \subsection{Electron compression with minimum-phase-uncertainty light} One expects that better focusing can be achieved by reducing phase uncertainty in the optical field. In the limit of large average photon number $\bar{n}\gg1$, the state that produces a minimum phase uncertainty (MPU) has been shown to be given by \cite{KK93} \begin{align} \alpha_n\approx\frac{C}{\sqrt{\bar{n}}}{\rm Ai}\left[s_1(1-2n/3\bar{n})\right], \nonumber \end{align} where ${\rm Ai}$ is the Airy function, $s_1\approx-2.3381$ is its first zero, $C=\sqrt{2|s_1|/3}/{\rm Ai}'(s_1)\approx1.7805$, and ${\rm Ai}'(s_1)$ is the derivative of ${\rm Ai}$ evaluated at $s_1$. PINEM focusing with MPU light is illustrated in Fig.\ \ref{Fig2}(b). In contrast to classical light, the Rabi-like oscillations along $z$ are now replaced by a well-defined short-period comb of electron density peaks. This is similar to what we obtain with phase-squeezed light [Fig.\ \ref{Fig2}(c)], but the pattern with MPU light becomes more pronounced. Further deviations from coherent illumination are found in the speed at which compression is achieved: among the statistics under consideration, the shortest FWHM pulse with fixed light intensity and propagation distance is obtained when using MPU light [Fig.\ \ref{Fig2}(e)]. Nevertheless, after a sufficiently large distance $z$, the FWHM reaches similar values with MPU, coherent, and phase-squeezed light, while amplitude-squeezed light systematically leads to lower compression, and this effect becomes more dramatic when increasing the coupling coefficient $|\beta|$ [Fig.\ \ref{Fig2}(f)]. \begin{figure*} \centering{\includegraphics[width=0.75\textwidth]{Fig4}} \caption{{\bf Measuring the electron density matrix through self-interference}.
(a) Sketch of an experimental arrangement to explore electron auto-correlation by means of a beam splitter and different lengths ($z$ and $z'$) along the two electron paths before recombination at the detection region. (b-i) Real (left panels) and imaginary (right panels) parts of the electron density matrix as a function of the shifted times $\tau$ and $\tau'$ for $z=1.6$\,mm and different statistics of the PINEM light, as indicated by labels. We consider 100\,keV electrons, 1.5\,eV PINEM photons, a squeezing parameter $s=2$, and coupling parameters $|\beta_0|=0.2$ and $|\beta|=5$.} \label{Fig4} \end{figure*} \subsection{Electron self-interference} We can further modify the focal properties of the electron by mixing it with a delayed version of itself, using for example a beam splitter and different lengths $z$ and $z'$ of the two electron paths converging at the observation region, as sketched in Fig.\ \ref{Fig4}(a). We assume that $z-z'$ is tuned to be a multiple of the electron wavelength, thus rendering $\rho\propto\tilde\rho$ [see Eq.\ (\ref{rhotilde})], and consider for simplicity an incident electron plane wave [i.e., $\phi(z-vt)=1/\sqrt{L}$, where $L$ is a quantization length]. Using the notation of Eq.\ (\ref{rho1}), the electron density profile obtained in this way results from the superposition $(L/2)\sum_n |\psi_n(z,t)+\ee^{{\rm i}\varphi}\psi_n(z',t)|^2=\tilde\rho(z,\tau,\tau)/2+\tilde\rho(z,\tau',\tau')/2+{\rm Re}\{\ee^{-{\rm i}\varphi}\tilde\rho(z,\tau,\tau')\}$, where an overall phase $\varphi$ is introduced (e.g., by means of electrostatic elements along one of the electron arms \cite{VBM18}) to allow us to switch between the real and imaginary parts of $\tilde\rho(z,\tau,\tau')$.
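The phase-selection identity quoted above is elementary to verify numerically with arbitrary complex amplitudes standing in for the components $\psi_n(z,t)$ and $\psi_n(z',t)$ of the two arms (random stand-ins, not actual PINEM coefficients):

```python
import numpy as np

rng = np.random.default_rng(1)
# toy stand-ins for psi_n(z,t) and psi_n(z',t) on the two interferometer arms
a = rng.normal(size=6) + 1j * rng.normal(size=6)
b = rng.normal(size=6) + 1j * rng.normal(size=6)

rho_cross = np.sum(a * np.conj(b))  # plays the role of rho~(z, tau, tau')

def combined_density(phi):
    # (1/2) sum_n |psi_n(z,t) + e^{i phi} psi_n(z',t)|^2, as in the text
    return 0.5 * np.sum(np.abs(a + np.exp(1j * phi) * b) ** 2)

direct = 0.5 * np.sum(np.abs(a) ** 2) + 0.5 * np.sum(np.abs(b) ** 2)
re_part = combined_density(0.0) - direct        # phi = 0 selects Re{rho~}
im_part = combined_density(np.pi / 2) - direct  # phi = pi/2 selects Im{rho~}
```

Subtracting the two direct terms and stepping $\varphi$ between $0$ and $\pi/2$ thus reads out the real and imaginary parts of the off-diagonal density-matrix element, which is the measurement principle sketched in Fig.\ \ref{Fig4}(a).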
An example of how this quantity depends on the PINEM light statistics is shown in Fig.\ \ref{Fig4}(b-i), plotted over a discrete dense sampling of $\tau$ and $\tau'$ points satisfying the condition that $v(\tau-\tau')$ be a multiple of the electron wavelength. Interestingly, we observe a rotation of the focal-spot feature when going from classical to amplitude-squeezed light. This is consistent with the poorer focusing properties observed for the latter. Through the proposed electron self-interference, the focal-spot profile can be modified to cover the wide variety of patterns observed for different light statistics. In particular, phase-squeezed and MPU light produce a radical departure in $\tilde{\rho}(z,\tau,\tau')$ relative to classical coherent light. \begin{figure*} \centering{\includegraphics[width=0.75\textwidth]{Fig5}} \caption{\textbf{Dependence of sample polarization on the electron density matrix.} (a) Sketch of an electron wave packet undergoing PINEM modulation, followed by propagation along a distance $d$, and interaction with a single-mode sample of frequency $\omega'_0=m\omega_0$ that is a harmonic $m$ of the PINEM photon frequency. (b-e) Amplitude $\Delta_m$ of the oscillation at frequency $\omega'_0$ displayed by the sample polarization after interaction with the electron. We plot $|\Delta_m|$ for a few values of $m$ as a function of the PINEM-sample distance $d$ and different PINEM-light statistics. All parameters are the same as in Fig.\ \ref{Fig4}.} \label{Fig5} \end{figure*} \section{Effect of the electron density matrix on the excitation of a sample} A common question is how the probability and distribution of excitations produced in a sample are affected by the profile of the beam in an electron microscope. The dependence on the transverse component of the electron wave function has been shown to reduce to a trivial average of the excitation produced by line-like beams over the lateral electron density profile \cite{RH1988,paperarxiv2}.
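As a minimal illustration of this reduction, the broad-beam excitation probability is just the line-beam result weighted by the lateral density. Both profiles below are hypothetical (a normalized Gaussian density and a Gaussian excitation map), chosen only so that the discretized average can be compared with its analytic value:

```python
import numpy as np

# lateral coordinate grid (illustrative units)
x = np.linspace(-5, 5, 401)
dx = x[1] - x[0]

# hypothetical lateral electron density |psi_perp|^2 (normalized Gaussian)
density = np.exp(-x ** 2) / np.sqrt(np.pi)

# hypothetical excitation probability for a line-like beam at lateral position x
p_line = 0.3 * np.exp(-x ** 2 / 4)

# broad-beam probability = average of the line-beam result over the lateral density
p_beam = np.sum(density * p_line) * dx
```

For these Gaussians the integral evaluates to $0.3\times 2/\sqrt{5}$, which the discretized sum reproduces; the point is that no interference between lateral positions enters the excitation probability.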
In the present study, we concentrate instead on the longitudinal electron wave function (i.e., along the beam direction). Within the first-order Born approximation, the excitation probability is known to be independent of the longitudinal electron wave function when the initial states of the sample and the electron are not phase-correlated \cite{PG19,paperarxiv2}, although a dependence has been shown to arise when the sample state is a coherent superposition of ground and excited states that is phase-locked with respect to the electron arrival time \cite{PG19}; this effect is actually observed, for example, in double-PINEM experiments \cite{PRY17}. Here, we concentrate on the common scenario of a sample prepared in its ground state before interaction with the electron. Remarkably, even when considering higher-order interactions, the number of excitations created by the electron has been shown to remain independent of the longitudinal wave function \cite{paperarxiv3}, which incidentally implies that the cathodoluminescence intensity is also independent. We generalize this result below by calculating the full density matrix of the bosonic mode, which turns out to have a Poissonian diagonal part that is equally independent of the electron wave function, although the coherences do exhibit a dependence on the quantum state of light used in the PINEM interaction to modulate the electron. For simplicity, we consider a single sample bosonic mode of frequency $\omega'_0$ interacting with an incident PINEM-modulated electron wave packet [Fig.\ \ref{Fig5}(a)]. We can then treat the electron-sample interaction using the same formalism as in Sec.\ \ref{quantumPINEM} by simply iterating Eq.\ (\ref{eq:solution}).
We find the expression \begin{align} &|\Psi(z,t)\rangle=\ee^{{\rm i} k_0z-{\rm i} E_0t/\hbar}\phi(z-vt)\!\!\sum_{\ell=-\infty}^\infty\sum_{n=0}^\infty\sum_{n'=0}^\infty f_\ell^n f_{-n'}^{\prime n'} \nonumber\\ &\times \ee^{{\rm i} \omega_0[\ell(z/v-t)-n t]-2\pi{\rm i}\ell^2d/z_T-{\rm i} n'\omega'_0z/v} |nn'\rangle \label{lastsupper} \end{align} for the wave function of the entire system, comprising the electron as well as the PINEM and sample bosonic modes, the Fock states of which are labeled by their respective occupation numbers $n$ and $n'$. Primed quantities are reserved here for the sample [i.e., $f_\ell^n$ refers to the first PINEM interaction, while $f_{\ell'}^{\prime n'}$ describes the coupling to the sample in Eq.\ (\ref{lastsupper})], and in particular the condition $\ell'=-n'$ (i.e., sample initially prepared in its ground state $|0\rangle$) is used to write the coefficients $f_{-n'}^{\prime n'}$. Additionally, we introduce a phase correction $\propto\ell^2$ accounting for propagation over a macroscopic distance $d$ separating the PINEM and sample interaction regions, but we neglect this type of correction for the relatively short propagation along the extension of the envelope function $\phi(z)$ and within the sample interaction region [see Fig.\ \ref{Fig5}(a)]. The density matrix of the sample mode after interaction with the electron, \begin{align} \rho^{\rm sample}=\sum_{n'_1n'_2}\rho^{\rm sample}_{n'_1n'_2} \ee^{-{\rm i}(n'_1-n'_2)\omega'_0t} |n'_1\rangle\langle n'_2|, \nonumber \end{align} is then obtained by tracing out the electron (integral over $z$) and PINEM boson (sum over $n$) degrees of freedom.
More precisely, we find the coefficients \begin{align} &\rho^{\rm sample}_{n'_1n'_2}=\ee^{{\rm i}(n'_1-n'_2)\omega'_0t} \int dz\sum_n\langle nn'_1|\Psi(z,t)\rangle \langle\Psi(z,t)|nn'_2\rangle \nonumber\\ &=f_{-n'_1}^{\prime n'_1} f_{-n'_2}^{\prime n'_2*} \sum_{\ell_1=-\infty}^\infty \sum_{\ell_2=-\infty}^\infty \phi_{\ell_1\ell_2n'_1n'_2} \sum_{n=0}^\infty f_{\ell_1}^n {f_{\ell_2}^n}^*, \label{rhonnsum} \end{align} where \begin{align} \phi_{\ell_1\ell_2n'_1n'_2} = &\ee^{2\pi{\rm i}(\ell_2^2-\ell_1^2)d/z_T} \label{phillnn}\\ &\times \int dz \, |\phi(z)|^2 \, \ee^{{\rm i}[(\ell_1-\ell_2)\omega_0-(n'_1-n'_2)\omega'_0]z/v}. \nonumber \end{align} Incidentally, further electron propagation beyond the sample should also involve corrections to the linearized momentum $n'\omega'_0/v$, in which we are not interested here. We recall that the momentum decomposition of $\phi$ involves small wave vectors compared with $\omega/v$, so its role in the integral of Eq.\ (\ref{phillnn}) consists of introducing some broadening with respect to the perfect phase-matching condition \begin{align} (\ell_1-\ell_2)\omega_0=(n'_1-n'_2)\omega'_0. \label{condition} \end{align} Such broadening produces nonzero (but small) values of $\phi_{\ell_1\ell_2n'_1n'_2}$ even when $\omega_0/\omega'_0$ is not a rational number. For simplicity, we consider $\omega_0/\omega'_0$ to be a rational number and further assume the spectral width of the sample mode to be small compared with $\omega_0$; the coefficients of Eq.\ (\ref{phillnn}) then reduce to \begin{align} \phi_{\ell_1\ell_2n'_1n'_2}=\ee^{2\pi{\rm i}(\ell_2^2-\ell_1^2)d/z_T}, \nonumber \end{align} subject to the condition given by Eq.\ (\ref{condition}).
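The index pairs that survive the phase-matching condition of Eq.\ (\ref{condition}) are easily enumerated for a commensurate frequency ratio; the ratio $\omega'_0/\omega_0=3/2$ used below is a hypothetical example:

```python
from fractions import Fraction

# hypothetical sample-to-PINEM frequency ratio w0'/w0 (rational, as assumed in the text)
ratio = Fraction(3, 2)

# enumerate index differences (l1-l2, n1'-n2') satisfying (l1-l2) w0 = (n1'-n2') w0'
pairs = [(dl, dn) for dl in range(-6, 7) for dn in range(-4, 5)
         if Fraction(dl) == dn * ratio]
```

Only even $n'_1-n'_2$ survive here (with $\ell_1-\ell_2$ three halves as large), showing how an incommensurate-looking ratio prunes most coherences of the sample density matrix.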
We note that the diagonal elements $\rho^{\rm sample}_{n'n'}$ involve just the $\ell_1=\ell_2$ terms by virtue of Eq.\ (\ref{condition}), so the only nonzero coefficients in Eq.\ (\ref{rhonnsum}) for those elements are $\phi_{\ell\ell n'n'}=1$, and, using the normalization condition $\sum_{\ell n} |f_\ell^n|^2=1$, we find $\rho^{\rm sample}_{n'n'}=|f_{-n'}^{\prime n'}|^2$, which does not depend on the PINEM coefficients $f_\ell^n$: we corroborate that the number of excitations created in the sample is independent of how the incident PINEM electron is prepared \cite{paperarxiv3}; additionally, the distribution of those excitations is also independent. More specifically, upon inspection of Eq.\ (\ref{eq:intcoeff}), we find $f_{-n'}^{\prime n'}=\ee^{{\rm i}\chi'} \ee^{-|\beta'_0|^2/2} {\beta'_0}^{*n'}/\sqrt{n'!}$, and therefore, \begin{align} \rho^{\rm sample}_{n'n'}=\left|f_{-n'}^{\prime n'}\right|^2=\ee^{-|\beta'_0|^2}\frac{|\beta'_0|^{2n'}}{n'!} \nonumber \end{align} reduces to a Poissonian distribution regardless of the quantum state of the incident electron, with average $|\beta'_0|^2$ corresponding to the contribution of the mode under consideration to the EELS probability. This result, which was previously found for excitation by an electron treated as a classical probe \cite{SL1971,paper228}, is now generalized to a quantum treatment of the electron. We remark that this conclusion is in essence a result of the nonrecoil approximation. Combining the above results, the elements of the sample density matrix can be written as \begin{align} \rho^{\rm sample}_{n'_1n'_2}=&\ee^{-|\beta'_0|^2} \frac{(-{\beta'_0}^{*})^{n'_1}(-\beta'_0)^{n'_2}}{\sqrt{n'_1!n'_2!}} \nonumber\\ &\times {\sum_{\ell_1\ell_2}}' \ee^{2\pi{\rm i}(\ell_2^2-\ell_1^2)d/z_T} \sum_{n=0}^\infty f_{\ell_1}^n f_{\ell_2}^{n*}, \nonumber \end{align} where the primed sum is subject to the condition imposed by Eq.\ (\ref{condition}).
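The Poissonian character of the diagonal just derived can be checked in a few lines (the value of $|\beta'_0|^2$ below is a hypothetical single-mode EELS probability):

```python
import math

b2 = 0.49  # hypothetical |beta_0'|^2, the single-mode EELS probability
# diagonal of the sample density matrix: a Poissonian distribution in n'
rho_diag = [math.exp(-b2) * b2 ** n / math.factorial(n) for n in range(30)]

norm = sum(rho_diag)                                # total probability
mean = sum(n * p for n, p in enumerate(rho_diag))   # average excitation number
```

The distribution is normalized and its mean equals $|\beta'_0|^2$, independently of the PINEM coefficients $f_\ell^n$, in line with the wave-function independence stated above.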
The symmetry property $\rho^{\rm sample}_{n'_1n'_2}=\rho^{{\rm sample}*}_{n'_2n'_1}$ is easily verified from this expression. We can now calculate different observables involving the sample mode, such as the polarization $\propto(a^{\prime\dagger}+a')$. The expectation value of this quantity, which vanishes unless the ratio of sample-to-PINEM mode frequencies $\omega'_0/\omega_0=m$ is an integer, only involves terms in which $n'_1$ and $n'_2$ differ by 1. A straightforward calculation leads to the result \begin{align} \langle a^{\prime\dagger}+a'\rangle&=2{\rm Re}\{-\beta'_0\Delta_m\ee^{{\rm i}\omega'_0t}\}, \nonumber \end{align} where \begin{align} \Delta_m=\ee^{2\pi{\rm i} m^2d/z_T} \sum_{\ell=-\infty}^\infty \ee^{4\pi{\rm i}\ell m d/z_T} \sum_{n=0}^\infty f_\ell^n f_{\ell+m}^{n*}. \label{Deltam} \end{align} This polarization matrix element has recently been shown to exhibit some degree of coherence with the light used to modulate the electron in the first PINEM interaction \cite{paperarxiv3}. We show in Fig.\ \ref{Fig5}(b-e) the dependence of $|\Delta_m|$ on the PINEM-sample separation $d$ for a few values of $m$ and different PINEM statistics. This quantity is periodic in $d$ with period $z_T/2m$, as is clear from the exponential inside the sum of Eq.\ (\ref{Deltam}). Dramatic differences are observed in $|\Delta_m|$ for different PINEM statistics; in particular, a clear trend is observed toward concentration of $\Delta_m$ at specific distances $d$ when the phase uncertainty of the light is reduced (i.e., when moving from coherent or amplitude-squeezed light to phase-squeezed light, and eventually to MPU light). Incidentally, a similar analysis for the $N^{\rm th}$ moment $\propto(a^{\prime\dagger}+a')^N$ leads to a contribution oscillating at frequency $N\omega'_0$ with a coefficient $\Delta_{mN}$.
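The periodicity of $|\Delta_m|$ in $d$ can be verified directly from Eq.\ (\ref{Deltam}) with an arbitrary (random, normalized) stand-in for the coefficients $f_\ell^n$, since the argument only uses the structure of the exponentials:

```python
import numpy as np

rng = np.random.default_rng(2)
L, N = 9, 6  # truncation of the l and n sums (illustrative)
f = rng.normal(size=(2 * L + 1, N)) + 1j * rng.normal(size=(2 * L + 1, N))
f /= np.linalg.norm(f)  # normalized stand-in for the PINEM coefficients f_l^n

def delta_m(m, d_over_zT):
    # Eq. (Deltam): e^{2 pi i m^2 d/z_T} sum_l e^{4 pi i l m d/z_T} sum_n f_l^n f_{l+m}^{n*}
    ells = np.arange(-L, L + 1)
    total = 0.0 + 0.0j
    for i, l in enumerate(ells):
        j = i + m  # array index of l + m
        if 0 <= j < len(ells):
            # vdot conjugates its first argument: sum_n f_l^n * conj(f_{l+m}^n)
            total += np.exp(4j * np.pi * l * m * d_over_zT) * np.vdot(f[j], f[i])
    return np.exp(2j * np.pi * m ** 2 * d_over_zT) * total
```

Shifting $d$ by $z_T/2m$ multiplies every $\ell$ term by $\ee^{2\pi{\rm i}\ell}=1$ and the prefactor by $\ee^{{\rm i}\pi m}$, so $|\Delta_m|$ is unchanged; for $m=0$ the normalization gives $|\Delta_0|=1$ at any distance.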
An effect at that order is produced if $mN$ is an integer, a condition that can be met for noninteger values of the sample-to-PINEM frequency ratio $\omega'_0/\omega_0=m$; for example, an oscillation with frequency $\omega_0$ is induced in $\propto(a^{\prime\dagger}+a')^2$ after electron-sample interaction if the sample mode frequency is half of the PINEM photon frequency. The time dependence of the off-diagonal sample density-matrix components under discussion could be measured through attosecond streaking \cite{IQY02,SKK07}, as a function of the delay between the times of arrival of the electron and an x-ray pulse, giving rise to oscillations in the energy of the photoelectrons produced by the latter as a function of that delay. For low-frequency sample modes, a direct measurement could be based on time-resolved quantum tomography of the sample state; this strategy could benefit from low-frequency beatings resulting from the combination of multiple sample modes of similar frequency. More direct evidence should be provided by the nontrivial interference that has been shown to emerge when mixing the PINEM light with the cathodoluminescence emission from the sample \cite{paperarxiv3}. \section{Conclusions} We have demonstrated that the interaction of free electrons with quantum light opens a new direction for modulating the longitudinal electron profile, the degree and duration of electron pulse compression, and the statistics associated with this compression. By squeezing the interacting light in phase, the formation of compressed electron pulses is accelerated, and this effect is maximized when using optical fields with an Airy number distribution that minimizes phase uncertainty. Interestingly, amplitude-squeezed light leads to the emergence of double-pulse electron profiles, which could be useful for investigating dynamical processes in a sample.
The influence of the light statistics becomes more dramatic when examining the electron density matrix after interaction, a quantity that can be accessed through our proposed self-interference experiment. Additionally, we have shown that the excitation of a sample by the electron is affected by how the latter is modulated and, in particular, by the statistics of the modulating light. Indeed, although no dependence is predicted in the probability of exciting sample modes, the temporal evolution of the electron-induced off-diagonal sample density-matrix elements shows a dramatic departure from the results observed with laser-modulated electrons when considering instead electrons that have interacted with quantum light. Besides their practical interest for shaping and temporally compressing free electrons, the results presented here reveal a wealth of fundamental phenomena emerging from the interaction with nonclassical light. We further anticipate potential applications in the creation of light sources with nontrivial statistics through electron-induced optical emission using gratings and undulators.
quant-ph/9509009
\section{Introduction} Consider the equations of motion of Bohmian mechanics for a system of $N$ particles with masses $m_{1},...,m_{N}$ moving in physical space ${\rm I\! R}^3$: the wave function $\psi$ evolves according to Schr\"odinger's equation \begin{equation} \label{seq} i\hbar \frac{\partial \psi _t(q)}{\partial t} = \left( -\sum_{k=1}^{N} \frac{{\hbar}^{2}}{2m_{k}}\Delta_{k} + V(q) \right) \psi _t(q) , \end{equation} and the configuration $Q=({\bf Q}_1, \dots , {\bf Q}_N) \in {\rm I\! R}^{3N}$, with ${\bf Q}_k \in {\rm I\! R} ^3$ denoting the position of the $k$-th particle, evolves according to Bohm's equation \begin{equation} \label{bohmev} \frac {dQ_t}{dt} = v^{\psi _t} (Q_t) , \end{equation} where the velocity field $v^{\psi}=({\bf v}^{\psi}_{1}, \dots ,{\bf v}^{\psi}_{N})$ is determined by the wave function $\psi$ \be{velfield} {\bf v}_{k}^{\psi}(q) = \frac{\hbar}{m_{k}} \mbox {Im} \frac{{\nabla}_{k}\psi (q)}{\psi (q)} \end{equation} (Bohm 1952). This velocity field is regular (in fact differentiable) on the subset of ${\rm I\! R}^{3N}$ where $\psi\neq 0$. The question arises as to what happens if the configuration $Q_t$, when moving in accordance with (\ref{bohmev}) along integral curves of $v^{\psi _t}$, reaches at some time $\tau$ a singularity of $v^\psi$, for example a node of the wave function, i.e.\ a point where $\psi =0$. In general, the event of reaching a singularity of $v^\psi$ corresponds to a singularity of the motion: the velocity $dQ_t/dt$ becoming infinite, $Q_t$ being discontinuous, or even ``exploding,'' i.e.\ reaching infinity as $t\to \tau$. In some cases it may be possible to continue the Bohmian trajectory through a singularity of the velocity field, but in any case Eq.\ (\ref{bohmev}) is not defined at singularities of $v^\psi$, and the theory would have to be supplemented by a suitable prescription for how to extend the motion through a singularity, or for which trajectories to avoid.
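Away from nodes, the guiding equation (\ref{bohmev}) is an ordinary differential equation that can be integrated numerically. As a minimal sketch (assuming $\hbar=m=1$ and a freely spreading Gaussian packet of initial width $\sigma_0=1$, for which the trajectories are known to scale as $Q_t=q_0\sigma_t$ with $\sigma_t=\sqrt{1+t^2/4}$):

```python
# Free Gaussian packet, hbar = m = 1, sigma_0 = 1:
# psi(x,t) = (1 + i t/2)^(-1/2) exp(-x^2 / (4 (1 + i t/2)))
def velocity(x, t):
    # v = Im(d/dx log psi) = Im(-x / (2 (1 + i t/2)))
    return (-x / (2 * (1 + 0.5j * t))).imag

def trajectory(q0, t_final, steps=2000):
    # 4th-order Runge-Kutta integration of dQ/dt = v(Q, t)
    q, t = q0, 0.0
    h = t_final / steps
    for _ in range(steps):
        k1 = velocity(q, t)
        k2 = velocity(q + 0.5 * h * k1, t + 0.5 * h)
        k3 = velocity(q + 0.5 * h * k2, t + 0.5 * h)
        k4 = velocity(q + h * k3, t + h)
        q += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return q

q4 = trajectory(1.0, 4.0)  # exact answer: q0 * sigma_t = sqrt(1 + 4^2/4) = sqrt(5)
```

For this nodeless $\psi$ the numerical trajectory reproduces the exact scaling; the mathematical question addressed below is precisely when such integration can be continued for all times.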
We should therefore consider the following problem---the {\em initial value problem in Bohmian mechanics: For given initial values $\psi_0$ and $Q_0$ at a time $t_0$ (we shall put $t_0=0$), do there exist global unique solutions $(\psi_t, Q_t)$ of (\ref{seq},\ref{bohmev}) such that $\psi_{t_0}=\psi_0$ and $Q_{t_0}=Q_0$?} A positive answer to this question for all, or at least for a suitable majority of initial conditions, or for a class of ``physically relevant'' initial conditions, is certainly important: the limits of global existence of solutions may hint at limits of validity of the theory. Let us first give a simple explicit example where some trajectories do indeed reach nodes of the wave function. Consider the one-dimensional harmonic oscillator (with $\hbar = m=\omega =1$), i.e., the potential $\displaystyle V(q)=q^2/2$, and take as the wave function a superposition of the ground state and the second excited state: \be{exwf} \psi_t(q)=e^{-q^2/2}\Bigl( 1+(1-2q^2)e^{-2it}\Bigr) e^{-it/2}. \end{equation} This wave function leads to a velocity field $v^\psi$ which is an odd function of $q$ and periodic in time with period $\pi$. The Bohmian motion is therefore invariant under the reflection $q\to -q$ and periodic in $t$ with period $\pi$: if $Q(t)$ is a solution of (\ref{bohmev}), then $-Q(t)$ is also a solution of (\ref{bohmev}), and $Q(t+\pi)=Q(t)$. The wave function $\psi$ has nodes at $(q,t)=(0, (n+ 1/2)\pi )$ and at $(q,t)= (\pm 1, n\pi )$ for all integers $n$. There are three trajectories which periodically run into nodes of $\psi$; see Figure \ref{nodes}. One of them, the constant $Q_t=0$ for $t\neq (n+1/2)\pi$, is a solution of (\ref{bohmev}) which runs with velocity 0 at times $t\to (n+ 1/2)\pi$ into nodes of $\psi$.
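The quoted node locations can be checked directly; the Gaussian prefactor and the global phase in (\ref{exwf}) never vanish, so the nodes come from the bracketed factor:

```python
import cmath
import math

def psi(q, t):
    # Eq. (exwf): superposition of the oscillator ground and second excited states
    return (cmath.exp(-q ** 2 / 2)
            * (1 + (1 - 2 * q ** 2) * cmath.exp(-2j * t))
            * cmath.exp(-1j * t / 2))

# claimed nodes: (q,t) = (0, (n+1/2) pi) and (q,t) = (+-1, n pi)
node_vals = [abs(psi(0.0, 0.5 * math.pi)),
             abs(psi(1.0, 0.0)),
             abs(psi(-1.0, 2 * math.pi))]
generic = abs(psi(0.5, 0.3))  # a generic point, where psi should not vanish
```

At $q=0$ the bracket is $1+e^{-2it}$, vanishing at $t=(n+1/2)\pi$; at $q=\pm1$ it is $1-e^{-2it}$, vanishing at $t=n\pi$, in agreement with the statement above.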
The other two ``node-crossing'' trajectories (which are reflection images of each other) are singular at the nodes: for example, in the vicinity of the node at $q=1$, $t=0$, the trajectory running into it has the form $\displaystyle Q_t= (3t^2 /4)^{1/3} +1$, i.e.\ it has infinite velocity at $t=0$. In this example the trajectories may be continued through the nodes in an obvious and consistent way. This is an artifact of the low dimensionality: in fact, in $d=1$ dimension, $Q_t$ satisfies \be{smap} \int_{-\infty}^{Q_{t}(q_0)} |\psi_t(q)|^{2} \, dq =\int_{-\infty}^{q_0} |\psi_0(q)|^{2} \, dq, \end{equation} employing the equivariance of the $|\psi_t|^2$-measure, i.e.\ that the density $\rho_0=|\psi_0|^2$ evolves under the Bohmian dynamics to $\rho_t=|\psi_t|^2$ (D\"urr, Goldstein, Zangh\`\i\ 1992a; see also Section 4), and the property that trajectories do not cross in configuration-space-time. Clearly, much less regularity of $\psi$ is needed for this definition of the motion of the Bohmian configuration---in particular, nodes of the wave function are no problem---and, if both definitions (\ref{bohmev}) and (\ref{smap}) are possible, they agree. However, a generalization of (\ref{smap}) to the physically most interesting case of $d=3N$-dimensional configuration space is not known. We shall turn now to the question of the existence of global unique solutions of the equations of motion of Bohmian mechanics, (\ref{seq}) and (\ref{bohmev}). \section{The Schr\"odinger equation} In Bohmian mechanics, the evolution of the wave function $\psi$ according to Schr\"odinger's equation (\ref{seq}) is independent of the evolution of the actual particle configuration $Q$ according to the guiding equation (\ref{bohmev}), while for the integration of (\ref{bohmev}) we need $\psi$. Therefore, when solving the initial value problem, we may consider Schr\"odinger's equation first.
The linear partial differential equation (\ref{seq}) is usually discussed in the framework of the Hilbert space ${\cal H} = L^2 ({\rm I\! R}^{3N})$ of square integrable functions, by viewing the Hamiltonian $\displaystyle \widetilde H=- \sum_{k=1}^{N} \frac{{\hbar}^{2}}{2m_{k}}\Delta_{k} + V(q) $ as the generator of the unitary group $U_t=e^{-iHt/\hbar }$ (Kato 1951, Reed and Simon 1975). Usually, boundary conditions on the wave function have to be specified in order to get a unique time evolution $(U_t)_{t\in{\rm I\! R}}$. In more technical terms, one has to select a self-adjoint extension $H$ of the partial differential operator $\widetilde H$, which is a priori defined only on sufficiently smooth functions. As different self-adjoint extensions generate different time evolutions of the wave function, the choice of the right self-adjoint extension is a matter of physics. We shall comment on this in Section 4. Given $(U_t)_{t\in{\rm I\! R}}$, for any initial $\psi_0\in{\cal H}$ we have a global unique wave function $\psi_t=U_t\psi_0$. This wave function, however, is in general not a genuine solution of Schr\"odinger's equation: generic $\psi_0\in{\cal H}$ are not smooth functions, and $\psi_t$ will not be differentiable, so that the differential equation (\ref{seq}) cannot be discussed. The standard procedure now is to forget about Schr\"odinger's equation and argue that all one is interested in is a unitary time evolution---or, even more abstractly, a unitary representation of the time translation group. This is, however, not sufficient for our approach. From the point of view of Bohmian mechanics, the wave function $\psi$ is a smooth field on configuration-space-time solving Schr\"odinger's equation. To obtain this, we have to put suitable conditions on the initial wave function $\psi_0$.
It turns out that a suitable subspace of ${\cal H}$ is given by the so-called $C^\infty$-vectors of the Hamiltonian $H$, $\displaystyle C^\infty (H)= \bigcap_{n=1}^\infty {\cal D} (H^n)$. Here ${\cal D} (H^n)$ denotes the domain of the $n$-th power of the Hamiltonian, i.e.\ the set of wave functions for which the expectation value of the $2n$-th power of $H$, the ``energy,'' is finite, $\displaystyle \int_{{\rm I\! R}^{3N}} |H^n\psi |^2\, dq= \langle \psi | H^{2n}\psi \rangle <\infty$.$^1$ $C^\infty (H)$ is a dense subset of ${\cal H}$, and invariant under the time evolution $U_t$. Eigenfunctions and wave packets are special $C^\infty$-vectors, so all wave functions usually considered in physics are included. It should not at all be regarded as a defect of Bohmian mechanics that it is not defined for all $\psi_0\in{\cal H}$: from the point of view of Bohmian mechanics, the Hilbert space $L^2({\rm I\! R}^{3N})$ is not the state space of the wave function, but a useful tool for the analysis of the theory. (In this context, see also the contributions of D\"urr, Goldstein, Zangh\`\i\ and Daumer to this volume.) We have: {\em For any $\psi_0\in C^\infty (H)$, $\psi_t= U_t\psi_0= e^{-itH/\hbar}\psi_0$ is a global smooth solution of Schr\"odinger's equation on}\/ $\Omega\times {\rm I\! R}$, where $\Omega$ denotes the set where the potential is smooth. This result cannot be genuinely strengthened: the wave function cannot be expected to be regular at points where the potential is singular. For instance, the ground state eigenfunction $e^{- |q|}$ of the Coulomb potential $V(q)=-1/|q|$ is not differentiable at the singularity of the potential at $q=0$. Important examples of potentials are the $N$-particle Coulomb interaction for particles with charges $e_i$ \[ V_{\rm Coulomb}({\bf q}_1,\dots ,{\bf q}_N) = \sum _{i=1}^N \sum _{j=i+1}^N \frac{e_ie_j}{|{\bf q}_i- {\bf q}_j|} \] with $\displaystyle \Omega_{\rm Coulomb}= {\rm I\! R}^{3N} \setminus \bigcup _{i=1}^N \bigcup _{j=i+1}^N \{ {\bf q}_i={\bf q}_j\} $, and the $N$-electron atom \[ V_{\rm atom}({\bf q}_1,\dots ,{\bf q}_N) =\sum _{i=1}^N \frac{e_ne_i}{|{\bf q}_i|} + \sum _{i=1}^N \sum _{j=i+1}^N \frac{e_ie_j}{|{\bf q}_i-{\bf q}_j|} \] with $\displaystyle \Omega_{\rm atom}= {\rm I\! R}^{3N} \setminus \Biggl( \bigcup _{i=1}^N \{ {\bf q}_i=0\} \cup \bigcup _{i=1}^N \bigcup _{j=i+1}^N \{ {\bf q}_i={\bf q}_j\} \Biggr) ,$ in the approximation that the nucleus with charge $e_n$ is at rest at the origin, acting like an external Coulomb field. \section{The Bohmian trajectories} With the wave function $\psi\in C^\infty (\Omega\times {\rm I\! R})$, the velocity field $v^\psi$ (\ref{velfield}) can be formed on the set where the wave function $\psi\neq 0$. We shall call the set where the velocity field is regular ${\cal G} = (\Omega\times {\rm I\! R})\setminus {\cal N}$, where ${\cal N}$ is the (space-time) set of nodes of the wave function, ${\cal N} = \{ (q,t)\in\Omega\times{\rm I\! R} : \psi_t(q)=0\}$. On this set, the first-order ordinary differential equation (\ref{bohmev}) is locally integrable. If we extend the solution as far as possible, we obtain for each initial value $q_0\in{\cal G}_0$, i.e.\ $(q_0,0)\in{\cal G}$, a {\em maximal solution} $Q(t; q_0)$ on a maximal time interval of existence $(\tau^- (q_0), \tau^+ (q_0))$. The solution is called {\em global}\/ if $\tau^-=-\infty$ and $\tau^+= +\infty$. In the Introduction we have given an example showing that we cannot expect to have global solutions for {\it all}\/ initial values $q_0$. But what indeed holds true is that for {\em almost all}\/ initial configurations we have global unique solutions. ``Almost all'' here is with respect to the natural---the equivariant---measure $P^{\psi_0}$ on configuration space ${\rm I\! R}^{3N}$, that is, the measure with density $|\psi_0|^2$.
We have proved that {\it for a large class of potentials, including the $N$-particle Coulomb interaction with arbitrary masses and charges, as well as arbitrary positive potentials, $P^{\psi_0} ( \tau ^+ = +\infty \mbox{ and } \tau^- =-\infty )=1$}. In other words, for almost every initial point $Q_0$ the solution of the guiding equation (\ref{bohmev}) is global, i.e.\ nodes of the wave function or other singularities of the velocity field will not be reached in finite time, and the solution does not ``explode,'' i.e.\ reach infinity in finite time. (Cf.\ the example of Figure \ref{nodes}: the set of initial configurations for which the solution of (\ref{bohmev}) is not global consists of three points---certainly a set of $P^{\psi_0}$-measure 0---while for all other initial values the solution is global.) We shall comment on the proof---which is in fact quite intuitive---in the next section. (For the details, see Berndl, D\"urr, Goldstein, Peruzzi, Zangh\`\i\ 1995.) Despite the relative simplicity of the proof, the generality of the result is rather surprising. The analogous problem in Newtonian mechanics---global existence and uniqueness of solutions of the $N$-body problem for Newtonian gravity---is a classical problem of mathematical physics which has been investigated with a great variety of methods (Moser 1973, Diacu 1992), but for which the analogous result---global existence of solutions for Lebesgue-almost all initial values in the $N$-particle phase space ${\rm I\! R}^{6N}$---has not yet been established. In addition to the possibility of collision singularities, the $N$-body problem with $N>3$ yields marvelous scenarios of so-called pseudocollisions, in which some particles, while oscillating wildly, reach infinity in finite time. Examples of such catastrophes have been constructed by Mather and McGehee 1975,$^2$ by Gerver 1991, and by Xia 1992.
For special situations, such as an almost planar solar system with weakly eccentric planets, the KAM theorem furnishes among other things the existence of global solutions for a set of initial values of positive Lebesgue measure in phase space (cf.\ for example Arnold 1963). For the general case of the $N$-body problem with $N\geq 5$ it is still open whether the solutions of Newtonian gravity are global for almost all initial values or whether pseudocollisions occur on a set of positive measure. The explanatory power of Newtonian mechanics rests largely on the analysis of specific solutions of the equations of motion, such as the planetary motion or the motion of a gyroscope---physical situations where the bodies are modelled by fixed mass densities. The more ambitious program of Newtonian mechanics for a system composed of a huge number of point particles or rigid balls, for example in the statistical theory of gases, is difficult for two reasons. Firstly, to get global existence of solutions for almost all initial values---certainly a prerequisite for a statistical analysis---one has to deal with the problem of singular interactions in the case of point particles, and with that of multiple collisions in the case of rigid balls. Secondly, more interesting than equilibrium properties, and much more difficult to analyse, are the physical effects occurring in the transition from nonequilibrium to equilibrium. In Bohmian mechanics, actual trajectories are interesting in some cases. However, they are not needed to determine the equilibrium properties. And in contrast to the situation in classical physics, we have the strongest empirical evidence that our world is in quantum equilibrium (D\"urr, Goldstein, Zangh\`\i\ 1992a and 1992b). Our theorem on the global existence of Bohmian trajectories is thus exactly what is necessary and sufficient for a complete analysis of Bohmian mechanics.
It provides the rigorous basis for the derivation of the quantum formalism, as well as of scattering theory, from an equilibrium analysis of Bohmian mechanics. (These topics are described in the contributions of D\"urr, Goldstein, Zangh\`\i\ and Daumer to this volume.) In this sense, what we have proved is just that $|\psi |^2$ {\it is an equivariant density for the Bohmian particle motion,} \be{equivar} \rho _0=|\psi _0|^2 \quad \Longrightarrow \quad \rho _t=|\psi _t|^2 \quad \mbox{ for all } t\in{\rm I\! R}. \end{equation} \section{The quantum flux} The preceding sentence may have confused readers: Why isn't the equivariance of the $|\psi _t|^2$-measure clear without an intricate examination of the global existence and uniqueness of solutions of Bohmian mechanics? Why doesn't (\ref{equivar}) follow immediately from a comparison of the continuity equation for an ensemble of configurations moving with velocity $v^\psi$ and having a density $\rho _t(q)$ \begin{equation} \frac{\partial}{\partial t} \, \rho _t(q) \ + \ \sum_{k=1}^N \mbox{div}_{k}\, \bigl( {\bf v}^{\psi _t}_{k}(q) \, \rho _t(q)\bigr) \ = \ 0 \label{conteq} \end{equation} with the ``quantum continuity equation'' \begin{equation} \frac{\partial}{\partial t} \, |\psi_t(q)|^{2} \ + \ \sum_{k=1}^{N} \mbox{div}_{k} \ {\bf j}^{\psi_t}_{k}(q) \ =\ 0 , \label{qfluxeq} \end{equation} noting that the quantum probability current $j^{\psi}=({\bf j}_{1}^{\psi},...,{\bf j}_{N}^{\psi})$ is given by \[ {\bf j}_{k}^{\psi} \ = \ {\bf v}_{k}^{\psi}\, |\psi|^{2} \ =\ \frac{\hbar}{{m}_{k}} \, {\rm Im}\, ({\psi}^{\ast} {\nabla}_{k} \psi ) \ ? \] The answer is that the continuity equation (\ref{conteq}) expresses how an ensemble density $\rho_0$ evolves under the deterministic evolution of the trajectories, and therefore holds only on the set which is covered by the integral curves of $v^{\psi _t}$.
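That Eq.\ (\ref{qfluxeq}) is an identity for solutions of Schr\"odinger's equation can be checked numerically for the oscillator superposition (\ref{exwf}) of the Introduction (with $\hbar=m=1$), using central finite differences:

```python
import cmath

def psi(q, t):
    # harmonic-oscillator superposition of the Introduction (hbar = m = omega = 1);
    # an exact solution of Schroedinger's equation
    return (cmath.exp(-q ** 2 / 2)
            * (1 + (1 - 2 * q ** 2) * cmath.exp(-2j * t))
            * cmath.exp(-1j * t / 2))

def rho(q, t):
    return abs(psi(q, t)) ** 2

def current(q, t, hh=1e-5):
    # j = Im(psi* dpsi/dq), with hbar = m = 1
    dpsi = (psi(q + hh, t) - psi(q - hh, t)) / (2 * hh)
    return (psi(q, t).conjugate() * dpsi).imag

# central-difference check of d(rho)/dt + d(j)/dq = 0 at a generic point
q0, t0, h = 0.3, 0.7, 1e-4
residual = ((rho(q0, t0 + h) - rho(q0, t0 - h)) / (2 * h)
            + (current(q0 + h, t0) - current(q0 - h, t0)) / (2 * h))
```

The residual vanishes to discretization accuracy, illustrating that (\ref{qfluxeq}) holds pointwise for any such $\psi_t$, whereas (\ref{conteq}) requires the trajectories themselves.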
We shall denote this set---the image set of the maximal solution $Q$---by ${\cal I}$: \[ {\cal I} = \{ (q,t)\in{\cal G} : \exists q_0\in{\cal G}_0\ \mbox{with}\ t\in (\tau^-(q_0),\tau^+(q_0))\ \mbox{and}\ q=Q_t(q_0)\} . \] Equation (\ref{qfluxeq}), on the other hand, is an identity for every $\psi_t$ which solves Schr\"{o}dinger's equation . Thus (\ref{equivar}) holds on ${\cal I}$: $\rho_t(q)= |\psi_t(q)|^2$ for $(q,t)\in{\cal I}$. By putting $\rho_t(q)=0$ on the set which is not reached by the Bohmian trajectories $(\Omega\times{\rm I\! R})\setminus {\cal I}$, we obtain $\rho_t(q)\leq |\psi_t(q)|^2$ for $(q,t)\in \Omega\times{\rm I\! R}$. This insight is fundamental to the proof of global existence. The second fundamental insight comes from the consideration of the configuration-space-time flux $J(q,t) = (v^{\psi _t} (q) \rho_t(q) , \rho_t(q))$. To establish $P^{\psi_0} ( \tau ^+ = +\infty $ and $\tau^-=-\infty )=1$, i.e.\ that for $P^{\psi_0}$-almost all initial configurations $q_0$ the maximal solution is global, we have to show that the solution will reach singularities of the velocity field or infinity in finite time at most for a set of initial configurations of $P^{\psi_0}$-measure zero. The probability of such a ``bad event'' is estimated with the help of the flux $J(q,t)$: For any hypersurface $\Sigma$ in configuration-space-time with local normal vector field $n(q,t)$ and surface area element $d\sigma$, \[ \int _\Sigma |J(q,t)\cdot n(q,t)| \, d\sigma\] is the expected number of crossings of the hypersurface $\Sigma$ by the Bohmian trajectory and hence a bound for the probability of crossing $\Sigma$. We obtain \be{probest} P^{\psi_0}(Q_t\mbox{ crosses }\Sigma )\leq \int _\Sigma |J(q,t)\cdot n(q,t)| \, d\sigma \leq \int _\Sigma |J^{\psi_t} (q)\cdot n(q,t)| \, d\sigma \end{equation} with the quantum flux $J^{\psi_t} (q) = (j^{\psi_t}(q), |\psi_t(q)|^2 )$. 
Consider now neighborhoods around the singularities of the velocity field: ${\cal N}^\epsilon $, a (configuration-space-time) neighborhood of thickness $\epsilon$ around the set of nodes ${\cal N}$ of the wave function , ${\cal S}^\delta$, a (configuration space) neighborhood of thickness $\delta$ around the set of singularities of the potential ${\cal S} = \partial \Omega$, and ${\cal K}^{r}$, a sphere in configuration space of radius $r$ to control escape to infinity. ${\cal G} ^{\epsilon \delta r}$ denotes the set of ``$\epsilon$-$\delta$-$r$-good'' points in configuration-space-time: $ {\cal G} ^{\epsilon \delta r}= (({\cal K} ^r \setminus {\cal S} ^\delta ) \times {\rm I\! R} ) \setminus {\cal N} ^\epsilon $ (see Figure \ref{gednbild}). A Bohmian trajectory approaching a singularity of the velocity field or infinity first has to cross the boundary of ${\cal G}^{\epsilon \delta r}$. {From} (\ref{probest}), we obtain the following bound for the probability of a ``bad event'' in the time interval $[0,T]$: for all $0<T,\epsilon ,\delta ,r<\infty$ \begin{eqnarray} P^{\psi_0}(\tau^+<T) & \leq & P^{\psi_0}({\cal G}_0\setminus {\cal G}^{\epsilon \delta r}_0) \ +\ \int_{\partial{\cal G}^{\epsilon \delta r}\cap ({\rm I\! R}^{3N}\times[0,T])} |J^{\psi_t} (q)\cdot n(q,t)| \, d\sigma \nonumber\\ & \leq & P^{\psi_0}({\cal G}_0\setminus {\cal G}^{\epsilon \delta r}_0) \ +\ {\bf N} (\epsilon,\delta,r)\ + \ {\bf S}(\delta)\ +\ {\bf I} (r) \label{bound}\end{eqnarray} with \[ \begin{array}{l} \displaystyle {\bf N} (\epsilon,\delta,r)\ =\ \int_{\partial {\cal N}^\epsilon\cap (({\cal K}^r\setminus{\cal S}^\delta)\times [0,T])}|J^{\psi_t} (q)\cdot n(q,t)| \, d\sigma ,\\ \displaystyle {\bf S} (\delta)\ =\ \int_{(\partial {\cal S}^\delta\cap \Omega)\times[0,T]}|J^{\psi_t} (q)\cdot n(q,t)| \, d\sigma ,\\ \displaystyle {\bf I} (r)\ =\ \int_{(\partial {\cal K}^r \cap \Omega)\times[0,T]}|J^{\psi_t} (q)\cdot n(q,t)| \, d\sigma . 
\end{array} \] By proving that for appropriate choices of sequences $\epsilon\to 0$, $\delta\to 0$, and $r\to\infty$ the right hand side of (\ref{bound}) gets arbitrarily small, we show that for all $T>0$, $P^{\psi_0}(\tau^+<T)=0$, and thus $P^{\psi_0}(\tau^+<\infty)=0$. From time-reversal invariance we obtain that also $P^{\psi_0}(\tau^->-\infty)=0$, and that altogether the solutions of Bohm's equation (\ref{bohmev}) with the velocity field (\ref{velfield}) are global for almost all initial configurations. Heuristically, it is rather immediate that the flux integrals ${\bf N}$, ${\bf S}$, and ${\bf I}$ get arbitrarily small as $\epsilon\to 0$, $\delta\to 0$, and $r\to\infty$. For ${\bf N}$, observe that the flux vanishes, $J^\psi=0$, at the nodes of the wave function. By continuity of $\psi$, $J^\psi$ is small in the vicinity of ${\cal N}$. Furthermore, the nodal set ${\cal N}$ itself is generically small: since $\psi$ is a complex function, the set where $\psi_t(q)=0$, i.e.\ ${\rm Re}\, \psi_t(q)=0$ and ${\rm Im}\, \psi_t(q)=0$, is generically of codimension 2 in configuration-space-time. Thus the area of $\partial{\cal N}^\epsilon$ should be small. Of the set $\cal S$ of singularities of the potential $V$, we assume in Berndl, D\"urr, Goldstein, Peruzzi, Zangh\`\i\ 1995 that it is contained in a union of a finite number of $(3N-3)$-dimensional hyperplanes, as is certainly the case for the $N$-particle Coulomb interaction $V=V_{\rm Coulomb}$ and the $N$-electron atom $V=V_{\rm atom}$. The area of $\partial{\cal S}^\delta$ is therefore very small. Moreover, in certain cases it is required as a boundary condition for self-adjointness of the Hamiltonian that the flux into the singularities of $V$ vanishes; see below. 
That the flux to infinity is small can be derived from the fact that the quantum flux $J^{\psi_t} (q)$ tends rapidly to 0 as $|q|\to \infty$, which follows from the square integrability of $\psi$ and $\nabla\psi$, i.e.\ from the normalizability of the $|\psi|^2$-distribution and finite ``kinetic energy.'' These conditions are automatically fulfilled for the considered class of potentials for $\psi_0\in C^\infty (H)$, which is just what we required for the existence of a global classical solution of Schr\"{o}dinger's equation. We are thus led back to the question of the existence of global classical solutions of Schr\"{o}dinger's equation! In fact, the connection between the global existence of Bohmian trajectories and global solutions of Schr\"{o}dinger's equation is most remarkable. The key is the quantum flux $J^\psi$, which in Bohmian mechanics has the interpretation of a flux of particles moving along deterministic trajectories with velocity $v^\psi $: The condition that there is no flux into the critical points ensures, firstly, as explained above, that the Bohmian configuration will not reach the critical points and thus exists globally. Secondly, it provides suitable boundary conditions for the domain of the Hamiltonian ${\cal D} (H)$ such that the Hamiltonian will be self-adjoint on ${\cal D} (H)$, and thus Schr\"{o}dinger's equation has global unique solutions, as explained in Section 2. This connection is made explicit by integration by parts: \[ \int_{M} \psi^{\ast} (H\psi ) \, dq - \int_{M} (H \psi^{\ast} ) \psi \, dq \;=\;-i\hbar \int_{\partial M} j^{\psi}\cdot n\, ds , \] for $M= {\cal K}^r \backslash {\cal S}^{\delta}$, and thus by the self-adjointness of the Hamiltonian \[ \lim_{\delta\rightarrow 0, r\rightarrow \infty} \left( \int_{\partial {\cal S}^{\delta} \cap {\cal K}^r } j^{\psi}\cdot n\, ds \, \, + \,\, \int_{\partial {\cal K}^{r} \setminus {\cal S}^\delta } j^{\psi}\cdot n\, ds \right) \ =\ 0. 
\] This is only slightly weaker than the vanishing of ${\bf S}(\delta)$ and ${\bf I}(r)$ in the limit $\delta\to0$, $r\to\infty$, which is part of our sufficient condition for the global existence of Bohmian trajectories. Moreover, in situations where the self-adjoint extension of $\wt H$ (cf.\ Section 2) is not unique, the particle picture of Bohmian mechanics supplies an interpretation of the different possible boundary conditions yielding different time evolutions of the wave function, and thus a basis for the choice of one over the others. (For more on these points, see Berndl, D\"urr, Goldstein, Peruzzi, Zangh\`\i\ 1995.) In this way, the point of view of Bohmian mechanics provides genuine understanding of the mathematics surrounding the self-adjointness of Schr\"odinger Hamiltonians! Usually, the self-adjointness of the Hamiltonian---via its equivalence to the existence of a unitary group---is motivated by the conservation of $|\psi|^2$-probability. Probability of what? The standard answer---the probability of {\em finding}\/ a particle in a certain region---is justified by Bohmian mechanics: A particle is found in a certain region because, in fact, it's there. By incorporating the positions of the particles into the theory, and thus by interpreting $|\psi|^2$ as a probability density of particles being and the quantum flux $J^\psi$ as a flux of particles moving, Bohmian mechanics can be regarded as providing the foundation for all intuitive reasoning in quantum mechanics. \begin{figure}[p] \begin{center} \leavevmode \epsfysize=10cm \epsffile{harmosall.eps} \end{center} \caption{Sketch of the course of Bohmian trajectories for the wave function (4). Nodes of the wave function are marked by dots.} \label{nodes} \end{figure} \begin{figure}[p] \begin{center} \leavevmode \epsfysize=10cm \epsffile{gednc.eps} \end{center} \caption{Bohmian trajectories run in the white area ${\cal G}^{\epsilon \delta r} = (({\cal K} ^r\setminus {\cal S} ^\delta ) \times {\rm I\! 
R} ) \setminus {\cal N} ^\epsilon $. The set of singularities of the potential ${\cal S} \times {\rm I\! R}$ is dashed, nodes of the wave function\ are marked by {\sf x}. Trajectories having crossed $\partial{\cal G}^{\epsilon \delta r}$ are dotted. By letting the grey neighborhoods ${\cal N}^\epsilon$ and ${\cal S}^\delta$ around the singularities of the velocity field shrink and the sphere ${\cal K}^r$ blow up, the set of dotted trajectories shrinks to a set of measure zero.} \label{gednbild} \end{figure} \newpage \section*{Footnotes} \setlength{\parindent}{0pt} \parskip 3ex plus 0.1ex minus 0.1ex 1 $\quad$ If only the expectation of $H^2$, the squared energy, is finite, $\psi_0\in{\cal D} (H)$, ``Schr\"{o}dinger's equation\ holds in the $L^2$-sense,'' i.e. $\displaystyle i \lim_{h\to0}\frac{\psi_{t+h} - \psi_t}{h} = H\psi_t$, where the equality and convergence of the limit holds with respect to the Hilbert space norm and not pointwise as expressed by Schr\"{o}dinger's equation . 2 $\quad$ In this (one-dimensional) example the system explodes only after infinitely many two particle collisions; thus it does not describe a genuine pseudocollision. \newpage \section*{References} Arnol'd, V.I. (1963), ``Small denominators and problems of stability of motion in classical and celestial mechanics,'' {\it Russian Mathematical Surveys} {\bf 18}(6), 85--191. (Also: Arnol'd, V.I. (1989) {\it Mathematical methods in classical mechanics.} New York, Springer.) Berndl, K., D\"urr, D., Goldstein, S., Peruzzi, G., Zangh\`\i , N. (1995), ``On the global existence of Bohmian mechanics,'' to appear in {\it Communications in Mathematical Physics} Bohm, D. (1952), ``A suggested interpretation of the quantum theory in terms of ``hidden'' variables I, II,'' {\it Physical Review} {\bf 85}, 166--179, 180--193. Diacu, F.N. (1992), {\it Singularities of the N-body problem}. Montr\'eal, Les Publications CRM. (Also: Diacu, F.N. 
(1993), ``Painlev\'e's Conjecture,'' {\it Mathematical Intelligencer} {\bf 15}, 6--12.) D\"{u}rr, D., Goldstein, S., Zangh\`{\i}, N. (1992a), ``Quantum equilibrium and the origin of absolute uncertainty,'' {\it Journal of Statistical Physics} {\bf 67}, 843--907. D\"{u}rr, D., Goldstein, S., Zangh\`{\i}, N. (1992b), ``Quantum mechanics, randomness, and deterministic reality,'' {\it Physics Letters A} {\bf 172}, 6--12. Gerver, J.L. (1991), ``The existence of pseudocollisions in the plane,'' {\it Journal of Differential Equations} {\bf 89}, 1--68. Kato, T. (1951), ``Fundamental properties of Hamiltonian operators of Schr\"{o}dinger\ type,'' {\it Transactions of the American Mathematical Society} {\bf 70}, 195--211. Mather, J., McGehee, R. (1975), ``Solutions of the collinear four body problem which become unbounded in finite time'' in J. Moser (ed.), {\it Dynamical Systems: Theory and Applications}. Berlin, Springer. Moser, J. (1973), {\it Stable and Random Motions in Dynamical Systems}. Princeton, Princeton University Press. Reed, M., Simon, B. (1975), {\it Methods of Modern Mathematical Physics II}. San Diego, Academic Press. (Also: Simon, B. (1977), ``An introduction to the self-adjointness and spectral analysis of Schr\"{o}dinger\ operators'' in W. Thirring, P. Urban (eds.), {\it The Schr\"{o}dinger\ Equation}. Wien, Springer, pp. 19--42.) Xia, Z. (1992), ``The existence of noncollision singularities in Newtonian systems,'' {\it Annals of Mathematics} {\bf 135}, 411--468. \end{document}
hep-th/9509032
\section{INTRODUCTION} The main purpose of our work is to approach the scaling region and extract physical results by analytic calculations. For a lattice calculation, a basic requirement is that for weak enough coupling the dimensionless quantities satisfy the scaling law predicted by the renormalization group equation. For $\rm{SU(N_c)}$ gauge theories in 3 dimensions, superrenormalizability and dimensional analysis tell us that the dimensionless masses $aM$ should scale as \begin{eqnarray} {aM \over g^2} \to {M \over e^2}. \label{s2p1} \end{eqnarray} For (2+1)-dimensional compact \rm{U(1)} and (3+1)-dimensional non-abelian gauge theories, $aM$ should scale exponentially as \begin{eqnarray} {aM} \to \exp(-b/g^2). \label{s3p1} \end{eqnarray} If the calculated $M$ data converge to a stable value, we can get an estimate for the mass. Various analytic methods are available in the literature (for a review see \cite{Guo}). The main difficulty of the conventional methods (e.g. strong coupling expansion) is that they converge very slowly, and very high order $1/g^2$ calculations are required to extend the results to the intermediate coupling region. Unfortunately, high order calculations are difficult in practice. Recently, we proposed a new method \cite{GCL,CGZF,QCD3,MASS} for Hamiltonian lattice gauge theory. This method consists of solving the eigenvalue equation with a suitable truncation scheme preserving the continuum limit. Even at low order truncation, clear scaling windows for the physical quantities have been established in most cases, and the results are in perfect agreement with the available Monte Carlo data. Here we review only the work on $\rm{U(1)}_3$, $\rm{SU(2)}_3$ and the 2-dimensional $\sigma$ model, while that for \rm{SU(3)} has been summarized in \cite{Luo}. 
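To make the use of a scaling law such as (\ref{s3p1}) concrete: given masses $aM$ measured at several couplings inside the scaling window, one fits $\ln (aM)$ linearly in $1/g^2$; the slope gives $-b$. The numbers below are synthetic, chosen only for illustration:

```python
import numpy as np

# synthetic "data" obeying aM = A * exp(-b / g^2); A = 20, b = 5 are illustrative values
A_true, b_true = 20.0, 5.0
inv_g2 = np.linspace(0.6, 1.4, 9)          # values of 1/g^2 in the scaling window
aM = A_true * np.exp(-b_true * inv_g2)

# linear fit of log(aM) versus 1/g^2: slope = -b, intercept = log A
slope, intercept = np.polyfit(inv_g2, np.log(aM), 1)
b_fit, A_fit = -slope, np.exp(intercept)
print(b_fit, A_fit)
```

With real data the quality of such a fit, and its stability as points are added at weaker coupling, is what signals that the scaling region has been reached.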
\section{THE METHOD} The Schr{\"o}dinger equation $H \vert \Omega \rangle = \epsilon_{\Omega} \vert \Omega \rangle$ on the Hamiltonian lattice for the ground state \begin{eqnarray} \vert \Omega \rangle = \exp \lbrack R(U) \rbrack \vert 0 \rangle \label{b1} \end{eqnarray} and vacuum energy $\epsilon_{\Omega}$ can be reformulated as \begin{eqnarray*} \sum_{l} \lbrace [E_l,[E_l,R(U)]]+[E_l,R(U)][E_l,R(U)] \rbrace \end{eqnarray*} \begin{eqnarray} - {2 \over g^4} \sum_{p} tr(U_p+U_{p}^{\dagger}) ={2a \over g^2} \epsilon_{\Omega}. \label{schr} \end{eqnarray} To solve this equation, let us write $R(U)$ in order of graphs $G_{n,i}$, i.e., $ R(U)=\sum_{n} R_{n}(U)=\sum_{n,i} C_{n,i} G_{n,i}(U)$. Substituting it into (\ref{schr}), we have the $N$th order truncated eigenvalue equation \begin{eqnarray*} \sum_{l} \lbrace [E_l,[E_l,\sum_{n}^{N} R_{n}(U)]] \end{eqnarray*} \begin{eqnarray*} +\sum_{n_1+n_2 \le N}[E_l,R_{n_1}(U)][E_l,R_{n_2}(U)] \rbrace \end{eqnarray*} \begin{eqnarray} - {2 \over g^4} \sum_{p} tr(U_p+U_{p}^{\dagger}) ={2a \over g^2} \epsilon_{\Omega}. \label{b2} \end{eqnarray} Setting the coefficients of the graphs $G_{n,i}$ in this equation to zero, we obtain a set of non-linear algebraic equations, from which the $C_{n,i}$ are determined. A similar method applies to the eigenvalue equation for the mass and its wave function \cite{MASS}. Therefore, solving lattice field theory is reduced to solving these algebraic equations. The lowest order graph is quite simple: $R_1(U)=C_{1,1}(U_p +h.c.)$. The first term in (\ref{b2}) does not generate new graphs, but the second term does, i.e. \begin{eqnarray*} [E_l,G_{n_1}(U)] \in R_{n}(U) + \mbox{lower orders}, \end{eqnarray*} \begin{eqnarray*} [E_l,G_{n_1}(U)][E_l,G_{n_2}(U)] \in R_{n_1+n_2}(U) \end{eqnarray*} \begin{eqnarray} +\mbox{lower orders}. \end{eqnarray} Two questions arise: \noindent 1) Should all the new graphs generated by the second term in (\ref{b2}) be taken as independent graphs of order $n_1+n_2$? 
For abelian gauge theories, the answer is yes. For non-abelian gauge theories, because of the uni-modular conditions \cite{QCD3,MASS,Luo,GCFC}, there is a mixing problem not only for graphs of the same order, but also for graphs of different orders. The classification of independent graphs is particularly complicated for \rm{SU(3)}. \noindent 2) For $n_1+n_2 > N$, should we keep the lower order graphs in $[E_l,G_{n_1}(U)][E_l,G_{n_2}(U)]$? To preserve the correct continuum limit, at $N$th order truncation one should drop all these graphs. This is the essential feature of our truncation scheme, which differs substantially from the scheme in \cite{Green}. There have also been some other truncation schemes proposed in \cite{SW,Bishop}. One of their major problems is the violation of the long wavelength structure or continuum limit of the equation, and consequently the violation of the scaling law (\ref{s2p1}) or (\ref{s3p1}) for the physical quantities. Let us examine further why equation (\ref{b2}) should be truncated in the way suggested at point 2). The continuum limit of a graph $G_{n,i}(U)$ is \begin{eqnarray} G_{n,i}(U)=e^2 a^4[A_{n,i} ~ tr ({\cal F}^2) \nonumber\\ +a^2 B_{n,i} ~ tr ({\cal D} {\cal F})^2+...] \label{small_a} \end{eqnarray} with ${\cal F}$ the field strength tensor and ${\cal D}$ the covariant derivative. It has been generally proven \cite{GCL} that in the continuum limit the second term of (\ref{b2}) should behave as \begin{eqnarray} [E_l,G_{n_1}(U)][E_l,G_{n_2}(U)] \propto e^2 a^6 ~ tr({\cal D} {\cal F})^2. \end{eqnarray} To preserve this correct limit, when equation (\ref{b2}) is truncated at the $N$th order, all the graphs created by $[E_l,R_{n_1}(U)][E_l,R_{n_2}(U)]$ for $n_1+n_2 \le N$ must be considered. On the other hand, all the graphs created by this term for $n_1+n_2 > N$ should be dropped, even if they contain lower order graphs. 
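The truncation rule of point 2) can be made concrete on a one-plaquette $\rm{U(1)}$ toy model (our own illustration; the normalization conventions here are schematic, not those of the full lattice theory). With $U_p = e^{i\theta}$ and $E=-i\partial/\partial\theta$, the commutator $[E,G]$ acts as the multiplication operator $-iG'$, so $[E,[E,R]]=-R''$ and $[E,R][E,R]=-(R')^2$. Taking $R_1 = C(U_p + U_p^{\dagger})=2C\cos\theta$ and matching Fourier coefficients fixes $C$ at order $N=1$:

```python
import numpy as np

g = 1.3                       # arbitrary coupling for the toy model
theta = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)

C = 2.0 / g ** 4              # lowest-order coefficient, from the matching below
R = 2.0 * C * np.cos(theta)   # R_1 = C (U_p + U_p^dagger) on a single plaquette

# [E,[E,R]] = -R'' and [E,R][E,R] = -(R')^2, evaluated analytically
ddR = 2.0 * C * np.cos(theta)          # -R''
sq = -(2.0 * C * np.sin(theta)) ** 2   # -(R')^2 = constant + cos(2 theta) graph
lhs = ddR + sq - (2.0 / g ** 4) * 2.0 * np.cos(theta)

def fourier_cos(f, n):
    # cos(n theta) coefficient of f on the periodic grid
    w = np.cos(n * theta)
    return np.sum(f * w) / np.sum(w * w) if n else np.mean(f)

c1 = fourier_cos(lhs, 1)   # must vanish: fixes C = 2/g^4
c2 = fourier_cos(lhs, 2)   # the newly generated cos(2 theta) graph
e0 = fourier_cos(lhs, 0)   # constant term, proportional to the toy vacuum energy
print(c1, c2, e0)
```

Here the $\cos\theta$ coefficient vanishes for $C=2/g^4$, the constant term yields the (toy) vacuum energy, and the $\cos 2\theta$ graph generated by $[E,R][E,R]$ is precisely what the $N=1$ truncation discards.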
Otherwise the partial sum of the lower order graphs would make this term behave in a considerably different (wrong) way. \section{RESULTS} Once the coefficients $C_{n,i}$ are obtained by solving (\ref{b2}), we can use (\ref{b1}) and (\ref{small_a}) to compute the parameters $\mu_0$ and $\mu_2$ in the vacuum wave function for the long wavelength configurations $U$ \cite{Arisue} \begin{eqnarray*} \vert \Omega \rangle=\exp \lbrack - {\mu_0} \int d^{D-1}x ~ tr {\cal F}^2 \end{eqnarray*} \begin{eqnarray} - {\mu_2} \int d^{D-1}x ~tr ({\cal D} {\cal F})^2 \rbrack. \label{a1} \end{eqnarray} \begin{figure}[htb] \fpsxsize=7.5cm \vspace{-20mm} \fpsbox[70 90 579 760]{mu_su2.ps} \vspace{-10mm} \caption{Parameters in the vacuum wave function of the $\rm{SU(2)}_3$ model. The dashed lines show the mean values of the Monte Carlo data.} \label{fig1} \end{figure} The results for $\mu_0$ and $\mu_2$ in $3$ dimensional \rm{SU(2)} gauge theory are shown in Fig. 1. Impressively, nice scaling behavior is obtained even at $N=3$, and the data for $4/g^2 > 4$ are in good agreement with the Monte Carlo measurements \cite{Arisue}. The order $N=4$ data are also included in this figure. Although there are no big differences between the results at $N=3$ and $N=4$, a higher order calculation seems necessary to ensure that the results finally converge to the correct values. Figure 2 shows our results for the mass gap in $\rm{SU(2)}_3$ at different truncation orders. For comparison, the results from the truncation method of Llewellyn Smith and Watson (LS-W) \cite{SW} and the $14$th order series expansion \cite{Zheng} are also included. Again, even at $N=3$, our results, which satisfy the scaling law, are in the best agreement with the continuum limit of the Monte Carlo data \cite{Teper}. \begin{figure}[htb] \fpsxsize=7.5cm \vspace{-20mm} \fpsbox[70 90 579 760]{mass_su2.ps} \vspace{-10mm} \caption{The mass gap of the $\rm{SU(2)}_3$ model from three different methods. 
The dashed lines show the continuum limit of the Monte Carlo data.} \label{fig2} \end{figure} As mentioned above, for non-abelian gauge theories the uni-modular conditions lead to the existence of different choices for the independent graphs. In \rm{SU(2)}, because of $trU_p^{\dagger}=trU_p$, all the disconnected graphs can be transformed into connected ones, which can then be used as an independent set of graphs. Our results in Figs. 1 and 2 are from such a choice, while the comparison between different choices (connected, disconnected \cite{GCL} and inverse) has been made in \cite{GCFC}. Of course, one is free to choose an arbitrary set of independent graphs. A criterion for a good choice is that it converges to the continuum limit more rapidly than the others at low truncation order. This has been clearly demonstrated for $\rm{SU(3)}$ \cite{Luo,ASSYM}. The most intriguing scaling law is the exponential scaling (\ref{s3p1}). Before investigating \rm{QCD} in 3+1 dimensions, we would like to test our method on a (2+1)-dimensional compact \rm{U(1)} model, which has many properties of the realistic theory. Here there is no ambiguity induced by the uni-modular conditions of $\rm{SU(N_c)}$ theories. Because the abelian group structure greatly simplifies the calculations, we can easily write a program valid at arbitrary order. \begin{figure}[htb] \fpsxsize=7.5cm \vspace{-20mm} \fpsbox[70 90 579 760]{mu0_u1.ps} \vspace{-10mm} \caption{Relevant quantity for $\mu_0$ in the long wavelength vacuum state of compact $\rm{U(1)}_3$. The dashed line is the expected scaling law.} \label{fig3} \end{figure} \begin{figure}[htb] \fpsxsize=7.5cm \vspace{-20mm} \fpsbox[70 90 579 760]{mass_u1.ps} \vspace{-10mm} \caption{Relevant quantity for the asymmetric mass $M_A$ of compact $\rm{U(1)}_3$. The dashed line is the expected scaling law.} \label{fig4} \end{figure} The results for $\rm{U(1)}_3$ are shown in Figs. 3 and 4 respectively. 
At relatively low truncation orders (compared with the $16$th order series expansion \cite{Zheng}), a scaling window has been seen around $1/g^2=1$, and this window becomes wider with the order $N$. Most impressively, there is an obvious tendency to converge to the scaling curve (dashed line). This implies that the extracted physical quantities converge to their correct values. Fitting the $N=6$ data in the scaling region, we get \cite{U1} \begin{eqnarray*} {\mu_{0} \over ag} =3.1120 \times 10^{-2} \exp(2.54(3)/g^2), \end{eqnarray*} \begin{eqnarray} (M_{A} a g)^2 =365(73) \exp( -5.0(2)/g^2), \end{eqnarray} or $\mu_0 M_A=0.59(5)$. The non-linear (1+1)-dimensional $\sigma$ model is another interesting application of our method. According to the theoretical expectation, \begin{eqnarray} M a \propto {1 \over g^2} \exp(-{2 \pi \over g^2} -{g^2 \over 8 \pi}). \end{eqnarray} Calculations at $N=5,6,7,8$ have been carried out. The preliminary results are quite encouraging, and they are in reasonable agreement with the Monte Carlo data \cite{Ha}. To reach the asymptotic scaling region, a higher order calculation seems necessary. \section{SUMMARY} The reason for the success of our method is that the truncated eigenvalue equation preserves the continuum limit. The results for (2+1)-dimensional models and the (1+1)-dimensional $\sigma$ model are presented to support the efficiency and reliability of our method. In conclusion, the eigenvalue equation with a proper truncation scheme may be the most direct and efficient way of extracting the continuum physics. The members at Zhongshan are supported by Inst. High Education, and XQL is sponsored by DESY. We thank H. Arisue, P. Cai, C. Hamer, D. Sch{\"u}tte and A. Sequ{\'\i} for discussions.
hep-ph/9509253
\section*{\centering REFERENCES}\list {\arabic{enumi}.}{\settowidth\labelwidth{#1.}\leftmargin\labelwidth \advance\leftmargin\labelsep \usecounter{enumi}} \def\hskip .11em plus .33em minus .07em{\hskip .11em plus .33em minus .07em} \sloppy\clubpenalty4000\widowpenalty4000 \sfcode`\.=1000\relax} \let\endThebibliography=\endlist \def\refjl#1#2#3#4#5#6{\bibitem{#1} #2, {\it #3 \/} {\bf #4} (#5) #6.} \def\refbk#1#2#3#4{\bibitem{#1} #2, {\it #3 \/} #4} \def{\it et al\/}{{\it et al\/}} \defNucl. Inst. and Meth.{Nucl. Inst. and Meth.} \defNucl. Phys.{Nucl. Phys.} \defPhys. Lett.{Phys. Lett.} \defPhys. Rev. Lett.{Phys. Rev. Lett.} \defPhys. Rev.{Phys. Rev.} \defPhys. Rep.{Phys. Rep.} \defZ. Phys.{Z. Phys.} \defMod. Phys. Lett.{Mod. Phys. Lett.} \defInt. J. Mod. Phys.{Int. J. Mod. Phys.} \def\JPG{J. Phys. G: Nucl. Phys.} \def\jpg{J. Phys. G: Nucl. Part. Phys.} \defAnn. Phys., Lpz.{Ann. Phys., Lpz.} \defAnn. Phys. (NY){Ann. Phys. (NY)} \defNuovo Cimento{Nuovo Cimento} \defRev. Mod. Phys.{Rev. Mod. Phys.} \defRep. Prog. Phys.{Rep. Prog. Phys.} \defAnn. Rev. Nucl. Part. Sci.{Ann. Rev. Nucl. Part. Sci.} \defProgr. Theor. Phys.{Progr. Theor. 
Phys.} \newcommand{\eqn}[1]{(\ref{#1})} \newcommand{\bel}[1]{\begin{equation}\label{#1}} \newcommand{\begin{equation}}{\begin{equation}} \newcommand{\end{equation}}{\end{equation}} \newcommand{\begin{array}{c}}{\begin{array}{c}} \newcommand{\end{array}}{\end{array}} \newcommand{\begin{eqnarray}}{\begin{eqnarray}} \newcommand{\end{eqnarray}}{\end{eqnarray}} \newcommand{\displaystyle}{\displaystyle} \newcommand{\begin{itemize}}{\begin{itemize}} \newcommand{\end{itemize}}{\end{itemize}} \newcommand{\stackrel{\leftrightarrow}{\partial}}{\stackrel{\leftrightarrow}{\partial}} \newcommand{{\cal L}}{{\cal L}} \newcommand{{\cal D}}{{\cal D}} \newcommand{{\cal M}}{{\cal M}} \newcommand{{\cal O}}{{\cal O}} \newcommand{{\cal P}}{{\cal P}} \newcommand{{\cal Q}}{{\cal Q}} \newcommand{\dagger}{\dagger} \newcommand{\displaystyle \frac}{\displaystyle \frac} \newcommand{\nonumber}{\nonumber} \newcommand{\stackrel{<}{_\sim}}{\stackrel{<}{_\sim}} \newcommand{\rm\scriptsize}{\rm\scriptsize} \newcommand{$\tau$cF}{$\tau$cF} \begin{document} \newpage \Preprint \begin{center} \renewcommand{\thefootnote}{\fnsymbol{footnote}} {\huge\bf IMPORTANCE OF PRECISION MEASUREMENTS IN THE TAU SECTOR\footnote{Talk given at the $\tau$cF\ workshop, Argonne (Illinois), USA, June 21-23, 1995}}\addtocounter{footnote}{-1} \\[2ex] {{\Large\bf A. Pich} \\[1.5ex] {\it Departament de F\'{\i}sica Te\`orica, IFIC, Universitat de Val\`encia -- CSIC \\ E-46100 Burjassot, Val\`encia, Spain}} \\[2ex] \parbox[t]{12.5cm}{\small {\bf Abstract.} $\tau$ decays provide a powerful tool to test the structure of the weak currents and the universality of their couplings to the $W$ boson. The constraints implied by present data and the possible improvements at the $\tau$cF\ are analyzed.} \end{center} \section*{\centering INTRODUCTION} \label{sec:introduction} The light quarks and leptons are by far the best known ones. 
Many experiments have analyzed the properties of $e$, $\mu$, $\nu_e$, $\nu_\mu$, $\pi$, $K$, $\ldots$ However, one na\"{\i}vely expects the heavier fermions to be much more sensitive to New Physics, since they may couple more strongly to whatever dynamics is responsible for the fermion-mass generation. Obviously, new heavy-flavour facilities, such as the $B$ and Tau-Charm Factories ($\tau$cF ), are needed to match (at least) the precision attained for the light flavours. Like the bottom quark, the tau lepton is a third generation fermion, with a wide variety of decay channels into particles belonging to the first and second fermionic families. Therefore, one can expect that $\tau$ and $b$ physics will provide some clues to the puzzle of the recurring generations of leptons and quarks. While the decays of the $b$-quark are ideally suited to look for quark mixing and CP-violating phenomena, the pure leptonic or semileptonic character of $\tau$ decays provides a much cleaner laboratory in which to test the structure of the weak currents and the universality of their couplings to the gauge bosons. Moreover, the tau is the only known lepton massive enough to decay into hadrons; its semileptonic decays are then an ideal tool for studying strong interaction effects under very clean conditions. The last five years have witnessed a substantial change in our knowledge of the $\tau$ properties. The large (and clean) data samples collected by the most recent experiments have considerably improved the statistical accuracy and, moreover, have brought a new level of systematic understanding. The qualitative change in the $\tau$ data can be appreciated in Table~\ref{tab:improvements}, which compares the status of several $\tau$ measurements in 1990 \cite{PDG:90,PI:92} with the most recent world averages \cite{PDG:94,montreux}. 
All experimental results obtained so far confirm the Standard Model (SM) scenario in which the $\tau$ is a sequential lepton, with its own quantum number and associated neutrino. \begin{table}[bth] \caption{Recent improvements in $\tau$ physics and expected precision at the $\tau$cF .} \label{tab:improvements} \centering\vspace{0.2cm} \begin{tabular}{|c|c|c|c|} \hline Parameter & 1990 \protect\cite{PDG:90,PI:92} & 1995 \protect\cite{PDG:94,montreux} & $\tau$cF\ sensitivity \\ \hline $m_\tau$ \, (MeV) & $1784.1^{+2.7}_{-3.6}$ & $1777.0\pm 0.3$ & $0.1$ \\ $m_{\nu_\tau}$ \, (MeV) & $< 35$ \quad {\footnotesize (a)} & $< 24$ \quad {\footnotesize (a)} & 1--2 \\ $\tau_\tau$ \, (fs) & $303\pm 8$ & $291.6\pm 1.6$ & -- \\ $B_e$ \, (\%) & $17.7\pm 0.4$ & $17.79\pm 0.09$ & 0.02 \\ $B_\mu$ \, (\%) & $17.8\pm 0.4$ & $17.33\pm 0.09$ & 0.02 \\ $B(\pi^-\nu_\tau)$ \, (\%) & $11.0\pm 0.5$ & $11.09\pm 0.15$ & 0.01 \\ $B(K^-\nu_\tau)$ \, (\%) & $0.68\pm 0.19$ & $0.68\pm 0.04$ & 0.003 \\ $B(\pi^-\eta\nu_\tau)$ & $<9\times 10^{-3}$ \quad {\footnotesize (a)} & $<3.4\times 10^{-4}$ \quad {\footnotesize (a)} & $10^{-6}$ \\ $B(l^-G)$ & $<10^{-2}$ & $<2.7\times 10^{-3}$ \quad {\footnotesize (a)} & $10^{-5}$ \\ $B(\mu^-\gamma)$ & $< 5.5\times 10^{-4}$ \quad {\footnotesize (b)} & $< 4.2\times 10^{-6}$ \quad {\footnotesize (b)} & $10^{-7}$ \\ $B(e^-e^+e^-)$ & $< 3.8\times 10^{-5}$ \quad {\footnotesize (b)} & $< 3.3\times 10^{-6}$ \quad {\footnotesize (b)} & $10^{-7}$ \\ \hline $\rho_{\tau\to\mu}$ & $0.84\pm 0.11$ & $0.738\pm 0.038$ & 0.002 \\ $\eta_{\tau\to\mu}$ & -- & $-0.14\pm 0.23$ & 0.003 \\ $\xi_{\tau\to\mu}$ & -- & $1.23\pm 0.24$ & 0.02 \\ $(\xi\delta)_{\tau\to\mu}$ & -- & $0.71\pm 0.15$ & 0.02 \\ $\xi'_{\tau\to\mu}$ & -- & -- & 0.15 \\ $h_{\nu_\tau}$ & -- & $-1.014\pm0.027$ & 0.003 \\ \hline $a_\tau^\gamma$ & $<0.1$ \quad {\footnotesize (b)} & $<0.01$ \quad {\footnotesize (a)} & 0.001 \\ $d_\tau^\gamma$ \, (e cm) & $<6\times 10^{-16}$ \quad {\footnotesize (b)}& $<5\times 10^{-17}$ \quad 
{\footnotesize (a)} & $10^{-17}$ \\ \hline \end{tabular} \vspace{0.2 cm} {\footnotesize (a) 95\% CL \ ; \quad (b) 90\% CL} \end{table} The present experiments will soon reach their systematic limits. Further improvements in $\tau$ physics will then require new high-precision facilities, to push the significance of the $\tau$ tests beyond the present few per cent level. The last column in Table~\ref{tab:improvements} shows the sensitivities that could be achieved at the $\tau$cF . In some cases, a much better accuracy could be obtained with polarized beams or monochromatic optics. In the following, I discuss several precision tests of the SM, using the present $\tau$-decay data, and the expected improvements at the $\tau$cF . I will concentrate on the universality and Lorentz-structure of the charged leptonic currents. A discussion of other important topics in $\tau$ physics can be found in refs.~[2, 5--8]. \section*{\centering CHARGED-CURRENT UNIVERSALITY} The leptonic decays $\tau^-\to e^-\bar\nu_e\nu_\tau,\mu^-\bar\nu_\mu\nu_\tau$ are theoretically understood at the level of the electroweak radiative corrections \cite{MS:88}. Within the SM, \begin{equation} \label{eq:leptonic} \Gamma_{\tau\to l} \, \equiv \, \Gamma (\tau^- \rightarrow \nu_{\tau} l^- \bar{\nu}_l) \, = \, {G_F^2 m_{\tau}^5 \over 192 \pi^3} \, f(m_l^2 / m_{\tau}^2) \, r_{EW}, \end{equation} where $f(x) = 1 - 8 x + 8 x^3 - x^4 - 12 x^2 \log{x}$. The factor $r_{EW}=0.9960$ takes into account radiative corrections not included in the Fermi coupling constant $G_F$, and the non-local structure of the $W$ propagator \cite{MS:88}. Using the value of $G_F$ measured in $\mu$ decay, Eq.~\eqn{eq:leptonic} provides a relation \cite{PI:92} between the $\tau$ lifetime and the leptonic branching ratios $B_l\equiv B(\tau^-\to\nu_\tau l^-\bar\nu_l)$: \begin{equation} \label{eq:relation} B_e \, = \, {B_\mu \over 0.972564\pm 0.000010} \, = { \tau_{\tau} \over (1.6321 \pm 0.0014) \times 10^{-12}\, {\rm s} } \, . 
\end{equation} The errors reflect the present uncertainty of $0.3$ MeV in the value of $m_\tau$. \begin{figure}[bht] \centerline{\epsfxsize =5.7in \epsfbox{BeTauPlot.ps}} \caption{Relation between $B_e$ and $\tau_\tau$. The narrow dotted band corresponds to the prediction in Eq.~(\protect\ref{eq:relation}). The larger region between the two dot-dashed lines indicates the relation obtained with the old \protect\cite{PDG:92} value of $m_\tau$. The experimental points show the present world averages \protect\cite{montreux}, together with the values quoted by the Particle Data Group [1, 3, 10] since 1990.} \label{fig:BeLife} \end{figure} The predicted $B_\mu/B_e$ ratio is in perfect agreement with the measured value $B_\mu/B_e = 0.974 \pm 0.007$ \cite{montreux}. As shown in Fig.~\ref{fig:BeLife}, the relation between $B_e$ and $\tau_\tau$ is also well satisfied by the present data. Notice that this relation is very sensitive to the value of the $\tau$ mass [$\Gamma_{\tau\to l}\propto m_\tau^5$]. The most recent measurements of $\tau_\tau$, $B_e$ and $m_\tau$ have consistently moved the world averages in the correct direction, eliminating the previous ($\sim 2\sigma$) disagreement. The experimental precision (0.5\%) is already approaching the level where a possible non-zero $\nu_\tau$ mass could become relevant; the present bound \cite{ALEPH:95} $m_{\nu_\tau}< 24$ MeV (95\% CL) only guarantees that such an effect is below 0.14\%. These measurements can be used to test the universality of the $W$ couplings to the leptonic charged currents. The $B_\mu/B_e$ ratio constrains $|g_\mu/g_e|$, while the $B_e/\tau_\tau$ relation provides information on $|g_\tau/g_\mu|$.
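The numbers entering Eqs.~(\ref{eq:leptonic}) and (\ref{eq:relation}) are easy to reproduce. The following sketch is illustrative only: the lepton masses are standard values not quoted in the text, and $r_{EW}$ cancels in the ratios.

```python
import math

def f(x):
    # Phase-space factor of Eq. (eq:leptonic): 1 - 8x + 8x^3 - x^4 - 12 x^2 ln x
    return 1 - 8*x + 8*x**3 - x**4 - 12*x**2*math.log(x)

m_tau, m_mu, m_e = 1777.0, 105.658, 0.511   # MeV (assumed standard values)

ratio = f((m_mu/m_tau)**2) / f((m_e/m_tau)**2)
print(ratio)                    # ~0.97256, the denominator relating B_mu to B_e

B_e = 291.6e-15 / 1.6321e-12    # B_e predicted from the measured tau lifetime
print(B_e)                      # ~0.1787, vs the measured 0.1779 +- 0.0009

print(math.sqrt(0.974/ratio))   # |g_mu/g_e| from the measured B_mu/B_e, ~1.0007
```

The last line reproduces, at the level of central values, the $|g_\mu/g_e|$ entry obtained from $B_\mu/B_e$.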
The present results are shown in Tables \ref{tab:univme} and \ref{tab:univtm}, together with the values obtained from the ratio \cite{BR:92} $R_{\pi\to e/\mu}\equiv\Gamma(\pi^-\to e^-\bar\nu_e)/ \Gamma(\pi^-\to\mu^-\bar\nu_\mu)$, and from the comparison of the $\sigma\cdot B$ partial production cross-sections for the various $W^-\to l^-\bar\nu_l$ decay modes at the $p$-$\bar p$ colliders \cite{UA1:89}. \begin{table}[bt] \caption{Present constraints on $|g_\mu/g_e|$.} \label{tab:univme} \centering\vspace{0.2cm} \begin{tabular}{|c|c|c|c|} \hline & $B_\mu/B_e$ \cite{montreux} & $R_{\pi\to e/\mu}$ \cite{BR:92} & $\sigma\cdot B_{W\to\mu/e}$ \cite{UA1:89} \\ \hline $|g_\mu/g_e|$ & $1.0008\pm 0.0036$ & $1.0017\pm 0.0015$ & $1.01\pm 0.04$ \\ \hline \end{tabular} \end{table} \begin{table}[bt] \caption{Present constraints on $|g_\tau/g_\mu|$.} \label{tab:univtm} \centering\vspace{0.2cm} \begin{tabular}{|c|c|c|c|c|} \hline & $B_e\tau_\mu/\tau_\tau$ \cite{montreux} & $R_{\tau/\pi}$ \cite{montreux} & $R_{\tau/K}$ \cite{montreux} & $\sigma\cdot B_{W\to\tau/\mu}$ \cite{UA1:89} \\ \hline $|g_\tau/g_\mu|$ & $0.9979\pm 0.0037$ & $1.006\pm 0.008$ & $0.972\pm 0.029$ & $0.99\pm 0.05$ \\ \hline \end{tabular} \end{table} The decay modes $\tau^-\to\nu_\tau\pi^-$ and $\tau^-\to\nu_\tau K^-$ can also be used to test universality through the ratios \begin{eqnarray}\label{eq:R_tp} R_{\tau/\pi} & \!\!\!\equiv &\!\!\! {\Gamma(\tau^-\to\nu_\tau\pi^-) \over \Gamma(\pi^-\to \mu^-\bar\nu_\mu)} = \Big\vert {g_\tau\over g_\mu}\Big\vert^2 {m_\tau^3\over 2 m_\pi m_\mu^2} {(1-m_\pi^2/ m_\tau^2)^2\over (1-m_\mu^2/ m_\pi^2)^2} \left( 1 + \delta R_{\tau/\pi}\right) , \qquad \\ \label{eq:R_tk} R_{\tau/K} &\!\!\! \equiv &\!\!\!
{\Gamma(\tau^-\to\nu_\tau K^-) \over \Gamma(K^-\to \mu^-\bar\nu_\mu)} = \Big\vert {g_\tau\over g_\mu}\Big\vert^2 {m_\tau^3\over 2 m_K m_\mu^2} {(1-m_K^2/m_\tau^2)^2\over (1-m_\mu^2/ m_K^2)^2} \left( 1 + \delta R_{\tau/K}\right) , \qquad \end{eqnarray} where the dependence on the hadronic matrix elements (the so-called decay constants $f_{\pi,K}$) factors out. Owing to the different energy scales involved, the radiative corrections to the $\tau^-\to\nu_\tau\pi^-/K^-$ amplitudes are however not the same as the corresponding effects in $\pi^-/K^-\to\mu^-\bar\nu_\mu$. The size of the relative correction was first estimated by Marciano and Sirlin \cite{MS:93} to be $\delta R_{\tau/\pi} = (0.67\pm 1.)\% $, where the 1\% error accounts for the missing long-distance contributions to the tau decay rate. A recent evaluation of those long-distance corrections \cite{DF:94} quotes the more precise values: \bel{eq:dR_tp_tk} \delta R_{\tau/\pi} = (0.16\pm 0.14)\% \ , \qquad\qquad \delta R_{\tau/K} = (0.90\pm 0.22)\% \ . \end{equation} Using these numbers, the measured \cite{montreux} $\tau^-\to\pi^-\nu_\tau$ and $\tau^-\to K^-\nu_\tau$ decay rates imply the $|g_\tau/g_\mu|$ ratios given in Table~\ref{tab:univtm}. The inclusive sum of both decay modes, i.e. $\Gamma[\tau^-\to h^-\nu_\tau]$ with $h=\pi,K$, provides a slightly more accurate determination: $|g_\tau/g_\mu| = 1.004\pm 0.007$. The present data verify the universality of the leptonic charged-current couplings to the 0.15\% ($e/\mu$) and 0.37\% ($\tau/\mu$) level. The precision of the most recent $\tau$-decay measurements is becoming competitive with the more accurate $\pi$-decay determination. It is important to realize the complementarity of the different universality tests. The pure leptonic decay modes probe the charged-current couplings of a transverse $W$.
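As a numerical illustration of Eq.~(\ref{eq:R_tp}), the sketch below reproduces the $R_{\tau/\pi}$ determination of $|g_\tau/g_\mu|$ at the level of central values; the $\pi^-$ mass, lifetime and $B(\pi^-\to\mu^-\bar\nu_\mu)$ are standard inputs assumed here, not quoted in the text.

```python
import math

m_tau, m_mu, m_pi = 1777.0, 105.658, 139.57   # MeV (assumed standard values)
tau_tau, tau_pi = 291.6e-15, 26.033e-9        # lifetimes in s
B_tau_pi, B_pi_mu = 0.1109, 0.99988           # branching ratios

# SM prediction for R_{tau/pi} with |g_tau/g_mu| = 1 and the
# radiative correction delta R_{tau/pi} = 0.16% of Eq. (eq:dR_tp_tk):
pred = (m_tau**3 / (2*m_pi*m_mu**2)
        * (1 - m_pi**2/m_tau**2)**2 / (1 - m_mu**2/m_pi**2)**2
        * 1.0016)

# Measured ratio of the two decay widths:
meas = (B_tau_pi/tau_tau) / (B_pi_mu/tau_pi)

print(math.sqrt(meas/pred))   # ~1.006, the R_{tau/pi} value quoted in the text
```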
In contrast, the decays $\pi/K\to l\bar\nu$ and $\tau\to\nu_\tau\pi/K$ are only sensitive to the spin-0 piece of the charged current; thus, they could unveil the presence of possible scalar-exchange contributions with Yukawa-like couplings proportional to some power of the charged-lepton mass. One can easily imagine new-physics scenarios which would modify the two types of leptonic couplings differently \cite{MA:94}. For instance, in the usual two-Higgs doublet model, charged-scalar exchange generates a correction to the ratio $B_\mu/B_e$, but $R_{\pi\to e/\mu}$ remains unaffected. Similarly, lepton mixing between the $\nu_\tau$ and a hypothetical heavy neutrino would not modify the ratios $B_\mu/B_e$ and $R_{\pi\to e/\mu}$, but would certainly correct the relation between $B_l$ and the $\tau$ lifetime. At the $\tau$cF, the accurate measurement of the $B_\mu/B_e$ ratio would allow testing $|g_\mu/g_e|$ to the 0.05\% level, compared to the present 0.36\% precision (0.15\% from $R_{\pi\to e/\mu}$). The final accuracy of the $|g_\tau/g_\mu|$ universality test will be limited by the knowledge of the $\tau$ lifetime. Assuming that the $\tau_\tau$ measurement will be improved (at LEP or at the $B$ Factory) by a factor of 2, i.e. $\delta\tau_\tau/\tau_\tau\sim 0.3\%$, $|g_\tau/g_\mu|$ would be tested with a 0.16\% precision. \section*{\centering LORENTZ STRUCTURE OF THE CHARGED CURRENT} Let us consider the leptonic decays $l^-\to\nu_l l'^-\bar\nu_{l'}$, where the lepton pair ($l$, $l^\prime $) may be ($\mu$, $e$), ($\tau$, $e$), or ($\tau$, $\mu$).
The most general, local, derivative-free, lepton-number-conserving, four-lepton interaction Hamiltonian consistent with Lorentz invariance [17--23], \begin{equation} {\cal H} = 4 \frac{G_{l'l}}{\sqrt{2}} \sum_{\epsilon,\omega = R,L}^{n = S,V,T} g^n_{l'_\epsilon l^{\phantom{'}}_\omega} \left[ \overline{l'_\epsilon} \Gamma^n {(\nu_{l'})}_\sigma \right]\, \left[ \overline{({\nu_l})_\lambda} \Gamma_n l_\omega \right]\ , \label{eq:hamiltonian} \end{equation} contains ten complex coupling constants or, since a common phase is arbitrary, nineteen independent real parameters. $\epsilon , \omega , \sigma, \lambda$ are the chiralities (left-handed, right-handed) of the corresponding fermions, and $n$ labels the type of interaction: scalar ($I$), vector ($\gamma^\mu$), tensor ($\sigma^{\mu\nu}/\sqrt{2}$). For given $n, \epsilon , \omega $, the neutrino chiralities $\sigma $ and $\lambda$ are uniquely determined. Taking out a common factor $G_{l'l}$, which is determined by the total decay rate, the coupling constants $g^n_{l'_\epsilon l_\omega}$ are normalized to \cite{FGJ:86} \begin{eqnarray}\label{eq:normalization} 1 &\!\!\! = &\!\!\! {1\over 4} \,\left( |g^S_{RR}|^2 + |g^S_{RL}|^2 + |g^S_{LR}|^2 + |g^S_{LL}|^2 \right) \, + \, 3 \,\left( |g^T_{RL}|^2 + |g^T_{LR}|^2 \right) \nonumber \\ & &\!\!\! \mbox{} + \left( |g^V_{RR}|^2 + |g^V_{RL}|^2 + |g^V_{LR}|^2 + |g^V_{LL}|^2 \right) \, . \end{eqnarray} In the SM, $g^V_{LL} = 1$ and all other $g^n_{\epsilon\omega} = 0 $. For an initial lepton polarization ${\cal P}_l$, the final charged lepton distribution in the decaying lepton rest frame is usually parametrized in the form \cite{BM:57,KS:57} \begin{eqnarray}\label{eq:spectrum} {d^2\Gamma \over dx\, d\cos\theta} &\!\!\! = &\!\!\! {m_l\omega^4 \over 2\pi^3} G_{l'l}^2 \sqrt{x^2-x_0^2}\, \Biggl\{ x (1 - x) + {2\over 9} \rho \left(4 x^2 - 3 x - x_0^2 \right) + \eta\, x_0 (1-x) \Biggr.\nonumber\\ & & \Biggl.
- {1\over 3}{\cal P}_l \, \xi \, \sqrt{x^2-x_0^2} \cos{\theta} \left[ 1 - x + {2\over 3} \delta \left( 4 x - 4 + \sqrt{1-x_0^2} \right)\right] \Biggr\} \, , \quad \end{eqnarray} where $\theta$ is the angle between the $l^-$ spin and the final charged-lepton momentum, $\, \omega \equiv (m_l^2 + m_{l'}^2)/2 m_l \, $ is the maximum $l'^-$ energy for massless neutrinos, $x \equiv E_{l'^-} / \omega$ is the reduced energy and $x_0\equiv m_{l'}/\omega$. For unpolarized $l$'s, the distribution is characterized by the so-called Michel \cite{MI:50} parameter $\rho$ and the low-energy parameter $\eta$. Two more parameters, $\xi$ and $\delta$, can be determined when the initial lepton polarization is known. If the polarization of the final charged lepton is also measured, five additional independent parameters \cite{PDG:94} ($\xi'$, $\xi''$, $\eta''$, $\alpha'$, $\beta'$) appear. The total decay rate is given by (neutrinos are assumed to be massless) \begin{equation}\label{eq:gamma} \Gamma\, = \, {m_l^5 G_{l'l}^2\over 192 \pi^3}\, \left\{ f\!\left({m_{l'}^2\over m_l^2}\right) + 4\eta\, {m_{l'}\over m_l}\, g\!\left({m_{l'}^2\over m_l^2}\right) \right\} r_{\mbox{\rm\scriptsize EW}} \, , \end{equation} where $g(z) = 1 + 9 z - 9 z^2 - z^3 + 6 z (1+z) \ln{z}$, and the SM radiative corrections $r_{\mbox{\rm\scriptsize EW}}$ have been included\footnote{ Since we assume that the SM provides the dominant contribution to the decay rate, any additional higher-order correction beyond the effective four-fermion Hamiltonian (\protect\ref{eq:hamiltonian}) would be a subleading effect.}. Thus, the normalization $G_{e\mu}$ corresponds to the Fermi coupling $G_F$, measured in $\mu$ decay.
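The sensitivity of the total rate in Eq.~(\ref{eq:gamma}) to $\eta$ is set by the relative weight of the second term, $4\,(m_{l'}/m_l)\,g/f$. A minimal numerical sketch (standard mass values assumed, not quoted in the text):

```python
import math

# Phase-space functions f and g of Eq. (eq:gamma):
def f(x): return 1 - 8*x + 8*x**3 - x**4 - 12*x**2*math.log(x)
def g(x): return 1 + 9*x - 9*x**2 - x**3 + 6*x*(1 + x)*math.log(x)

m_tau, m_mu, m_e = 1777.0, 105.658, 0.511   # MeV (assumed standard values)

# Relative weight of the eta term in the total rate, 4 (m_l'/m_l) g/f,
# for mu -> e and for tau -> mu:
weights = [4*(mlp/ml)*g((mlp/ml)**2)/f((mlp/ml)**2)
           for ml, mlp in [(m_mu, m_e), (m_tau, m_mu)]]
print(weights)   # ~[0.019, 0.22]: eta enters the mu -> e rate at the 2%
                 # level, but the tau -> mu rate at the 22% level
```

This order-of-magnitude difference is why the $\tau$ leptonic branching ratios give a useful handle on $\eta_{\tau\to\mu}$, as discussed below.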
The $B_\mu/B_e$ and $B_e\tau_\mu/\tau_\tau$ universality tests, discussed in the previous section, actually probe the ratios $|\hat G_{\mu\tau}/\hat G_{e\tau}|$ and $|\hat G_{e\tau}/\hat G_{e\mu}|$, respectively, where \begin{equation}\label{eq:Ghat_def} \widehat G_{l'l} \,\equiv\, G_{l'l} \, \sqrt{1 + 4\,\eta_{l\to l'}\, {m_{l'}\over m_l}\, {g\!\left( m_{l'}^2/ m_l^2 \right)\over f\!\left( m_{l'}^2/ m_l^2 \right)}} \, . \end{equation} An important point, stressed by Fetscher and Gerber \cite{FG:93}, concerns the extraction of $G_{e \mu}$, whose uncertainty is dominated by the uncertainty in $\eta_{\mu\to e}$. In terms of the $g_{\epsilon\omega}^n$ couplings, the shape parameters in Eq.~\eqn{eq:spectrum} are: \begin{eqnarray}\label{eq:michel} \rho - \frac{3}{4} & \!\!\! = & \!\!\! - \frac{3}{4} \left[ {|g^V_{LR}|}^2 + {|g^V_{RL}|}^2 + 2 {|g^T_{LR}|}^2 + 2{|g^T_{RL}|}^2 + \mbox{\rm Re}(g^S_{LR} g^{T \ast}_{LR} + g^S_{RL} g^{T \ast}_{RL}) \right] , \nonumber\\ \eta & \!\!\! = & \!\!\! \frac{1}{2} \mbox{\rm Re}\left[ g^V_{LL} g^{S\ast}_{RR} + g^V_{RR} g^{S\ast}_{LL} + g^V_{LR} \left(g^{S\ast}_{RL} + 6 g^{T\ast}_{RL}\right) + g^V_{RL} \left(g^{S\ast}_{LR} + 6 g^{T\ast}_{LR}\right) \right] , \nonumber\\ \xi - 1 & \!\!\! = & \!\!\! - \frac{1}{2} \left[ {|g^S_{LR}|}^2 + {|g^S_{RR}|}^2 + 4 (-{|g^V_{LR}|}^2 + 2 {|g^V_{RL}|}^2 + {|g^V_{RR}|}^2) \right. \nonumber\\ & & \hspace{5mm} \left. - 4 {|g^T_{LR}|}^2 + 16 {|g^T_{RL}|}^2 - 8 {\rm Re}(g^S_{LR} g^{T \ast}_{LR} - g^S_{RL} g^{T \ast}_{RL}) \right]\ , \\ ({\xi}\delta) - \frac{3}{4} & \!\!\! = & \!\!\! - \frac{3}{4} \left[ \frac{1}{2} ({|g^S_{LR}|}^2 + {|g^S_{RR}|}^2) + ({|g^V_{LR}|}^2 + {|g^V_{RL}|}^2 + 2 {|g^V_{RR}|}^2) \right. \nonumber\\ & & \hspace{5mm} \left. + 2 ({2 |g^T_{LR}|}^2 + {|g^T_{RL}|}^2) - \mbox{\rm Re}(g^S_{LR} g^{T \ast}_{LR} - g^S_{RL} g^{T \ast}_{RL}) \right]\ . \nonumber \end{eqnarray} In the SM, $\rho = \delta = 3/4$, $\eta = \eta'' = \alpha' = \beta' = 0 $ and $\xi = \xi' = \xi'' = 1 $.
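The first two of Eqs.~(\ref{eq:michel}) can be checked numerically; the helper below is a hypothetical sketch (complex couplings keyed by the chirality pair), not part of any published analysis code.

```python
# Evaluate rho and eta from the couplings, transcribing the first two
# Michel-parameter formulas above (hypothetical helper).
def michel(gS, gV, gT):
    rho = 0.75 - 0.75*(abs(gV['LR'])**2 + abs(gV['RL'])**2
                       + 2*abs(gT['LR'])**2 + 2*abs(gT['RL'])**2
                       + (gS['LR']*gT['LR'].conjugate()
                          + gS['RL']*gT['RL'].conjugate()).real)
    eta = 0.5*(gV['LL']*gS['RR'].conjugate()
               + gV['RR']*gS['LL'].conjugate()
               + gV['LR']*(gS['RL'] + 6*gT['RL']).conjugate()
               + gV['RL']*(gS['LR'] + 6*gT['LR']).conjugate()).real
    return rho, eta

zero = {k: 0j for k in ('LL', 'LR', 'RL', 'RR')}
sm_V = dict(zero, LL=1 + 0j)        # SM: g^V_LL = 1, all others zero

sm = michel(zero, sm_V, zero)
print(sm)        # (0.75, 0.0): the SM values rho = 3/4, eta = 0

pert = michel(dict(zero, RR=0.1 + 0j), sm_V, zero)
print(pert)      # (0.75, 0.05): a scalar g^S_RR = 0.1 shifts eta by
                 # Re(g^S_RR)/2 while leaving rho at its SM value
```

The second evaluation illustrates the Higgs-type scenario discussed later in the text, where $\eta\approx\mbox{Re}(g^S_{RR})/2$ to first order in new-physics contributions.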
It is convenient to introduce \cite{FGJ:86} the probabilities $Q_{\epsilon\omega}$ for the decay of an $\omega$-handed $l^-$ into an $\epsilon$-handed daughter lepton, \begin{eqnarray}\label{eq:Q_LL} Q_{LL} &\!\!\! = &\!\!\! {1 \over 4} |g^S_{LL}|^2 \! + |g^V_{LL}|^2 \phantom{+ 3 |g^T_{LR}|^2} = {1 \over 4}\left( -3 +{16\over 3}\rho -{1\over 3}\xi +{16\over 9}\xi\delta +\xi'+\xi'' \right)\! , \quad\;\;\nonumber\\ Q_{RR} &\!\!\! = &\!\!\! {1 \over 4} |g^S_{RR}|^2 \! + \! |g^V_{RR}|^2 \phantom{+ 3 |g^T_{LR}|^2} = {1 \over 4}\left( -3 +{16\over 3}\rho +{1\over 3}\xi -{16\over 9}\xi\delta -\xi'+\xi'' \right)\! , \nonumber\\ Q_{LR} &\!\!\! = &\!\!\! {1 \over 4} |g^S_{LR}|^2 \! + \! |g^V_{LR}|^2 \! + \! 3 |g^T_{LR}|^2 = {1 \over 4}\left( 5 -{16\over 3}\rho +{1\over 3}\xi -{16\over 9}\xi\delta +\xi'-\xi'' \right)\! , \\ Q_{RL} &\!\!\! = &\!\!\! {1 \over 4} |g^S_{RL}|^2 \! + \! |g^V_{RL}|^2 \! + \! 3 |g^T_{RL}|^2 = {1 \over 4}\left( 5 -{16\over 3}\rho -{1\over 3}\xi +{16\over 9}\xi\delta -\xi'-\xi'' \right)\! . \nonumber \end{eqnarray} Upper bounds on any of these (positive-semidefinite) probabilities translate into corresponding limits for all couplings with the given chiralities. For $\mu$-decay, where precise measurements of the polarizations of both $\mu$ and $e$ have been performed, there exist \cite{FGJ:86} upper bounds on $Q_{RR}$, $Q_{LR}$ and $Q_{RL}$, and a lower bound on $Q_{LL}$. They imply corresponding upper bounds on the 8 couplings $|g^n_{RR}|$, $|g^n_{LR}|$ and $|g^n_{RL}|$. The $\mu$-decay measurements alone do not allow us to determine $|g^S_{LL}|$ and $|g^V_{LL}|$ separately \cite{FGJ:86,JA:66}. Nevertheless, since the helicity of the $\nu_\mu$ in pion decay is experimentally known \cite{RO:82} to be $-1$, a lower limit on $|g^V_{LL}|$ is obtained \cite{FGJ:86} from the inverse muon decay $\nu_\mu e^-\to\mu^-\nu_e$. The present (90\% CL) bounds \cite{PDG:94,BA:88} on the $\mu$-decay couplings are given in Table~\ref{tab:mu_couplings}.
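A quick sanity check of Eqs.~(\ref{eq:Q_LL}): the four probabilities reduce to $(1,0,0,0)$ at the SM point and always add up to one, as required by the normalization (\ref{eq:normalization}). A minimal sketch:

```python
# Decay probabilities in terms of the shape parameters; xi1 and xi2
# stand for xi' and xi''.
def Q(rho, xi, xidelta, xi1, xi2):
    q_ll = (-3 + 16/3*rho - xi/3 + 16/9*xidelta + xi1 + xi2) / 4
    q_rr = (-3 + 16/3*rho + xi/3 - 16/9*xidelta - xi1 + xi2) / 4
    q_lr = ( 5 - 16/3*rho + xi/3 - 16/9*xidelta + xi1 - xi2) / 4
    q_rl = ( 5 - 16/3*rho - xi/3 + 16/9*xidelta - xi1 - xi2) / 4
    return q_ll, q_rr, q_lr, q_rl

print(Q(0.75, 1, 0.75, 1, 1))   # SM point: (1, 0, 0, 0) up to rounding
print(sum(Q(0.733, 1.06, 0.76, 0.9, 1.1)))   # sums to 1 for any input
```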
These limits show nicely that the bulk of the $\mu$-decay transition amplitude is indeed of the predicted V$-$A type. \begin{table}[hbt] \caption{90\% CL experimental limits \protect\cite{PDG:94,BA:88} for the $\mu$-decay $g^n_{e_\epsilon \mu_\omega}$ couplings.} \label{tab:mu_couplings} \centering\vspace{0.2cm} \begin{tabular}{|l|l|l|} \hline $|g^S_{e_L \mu_L}| < 0.55$ & $|g^V_{e_L \mu_L}| > 0.96$ & \hfil -- \hfil \\ $|g^S_{e_R \mu_R}| < 0.066$ & $|g^V_{e_R \mu_R}| < 0.033$ & \hfil -- \hfil \\ $|g^S_{e_L \mu_R}| < 0.125$ & $|g^V_{e_L \mu_R}| < 0.060$ & $|g^T_{e_L \mu_R}| < 0.036$\\ $|g^S_{e_R \mu_L}| < 0.424$ & $|g^V_{e_R \mu_L}| < 0.110$ & $|g^T_{e_R \mu_L}| < 0.122$\\ \hline \end{tabular} \end{table} The experimental analysis of the $\tau$-decay parameters is necessarily different from the one applied to the muon, because of the much shorter $\tau$ lifetime. The measurement of the $\tau$ polarization and the parameters $\xi$ and $\delta$ is still possible due to the fact that the spins of the $\tau^+\tau^-$ pair produced in $e^+e^-$ annihilation are strongly correlated [27--34]. However, the polarization of the charged lepton emitted in the $\tau$ decay has never been measured. In principle, this could be done for the decay $\tau^-\to\mu^-\bar\nu_\mu\nu_\tau$ by stopping the muons and detecting their decay products \cite{FE:90}. The measurement of the inverse decay $\nu_\tau l^-\to\tau^-\nu_l$ looks far out of reach. The present experimental status on the $\tau$-decay Michel parameters is shown in Table~\ref{tab:tau_michel} \cite{PS:95}, which gives the world-averages of all published [3, 35--37] measurements. For comparison, the values measured in $\mu$-decay \cite{PDG:94} are also given. 
The improved accuracy of the most recent experimental analyses has brought an enhanced sensitivity to the different shape parameters, allowing the first measurements of $\eta_{\tau\to\mu}$ \cite{ALEPH:95b,ARGUS:95}, $\xi_{\tau\to e}$, $\xi_{\tau\to\mu}$, $(\xi\delta)_{\tau\to e}$ and $(\xi\delta)_{\tau\to\mu}$ \cite{ALEPH:95b}. (The ARGUS measurement \cite{ARGUS:95b} of $\xi_{\tau\to l}$ and $(\xi\delta)_{\tau\to l}$ assumes identical couplings for $l=e,\mu$. A measurement of $\sqrt{\xi_{\tau\to e}\xi_{\tau\to\mu}}$ was published previously \cite{ARGUS:93}). \begin{table}[bt] \caption{Experimental averages [3, 35--37] of the Michel parameters. The last column ($\tau\to l$) assumes identical couplings for $l=e,\mu$ (the quoted value for $\eta_{\tau\to l}$ is that obtained directly from measurements of the energy distribution). $\xi_{\mu\to e}$ refers to the product $\xi_{\mu\to e}{\cal P}_\mu$, where ${\cal P}_\mu\approx 1$ is the longitudinal polarization of the muon from pion decay.} \label{tab:tau_michel} \centering\vspace{0.2cm} \begin{tabular}{|c|c|c|c|c|} \hline & $\mu\to e$ & $\tau\to\mu$ & $\tau\to e$ & $\tau\to l$ \\ \hline $\rho$ & $0.7518\pm 0.0026$ & $0.738\pm 0.038$ & $0.736\pm 0.028$ & $0.733\pm 0.022$ \\ $\eta$ & $-0.007\pm 0.013$ & $-0.14\pm 0.23\phantom{-}$ & -- & $-0.01\pm 0.14\phantom{-}$ \\ $\xi$ & $1.0027\pm 0.0085$ & $1.23\pm 0.24$ & $1.03\pm 0.25$ & $1.06\pm 0.11$ \\ $\xi\delta$ & $0.7506\pm 0.0074$ & $0.71\pm 0.15$ & $ 1.11\pm 0.18$ & $ 0.76\pm 0.09$ \\ \hline \end{tabular} \end{table} The determination of the $\tau$-polarization parameters \cite{ALEPH:95b,ARGUS:95b} allows us to bound the total probability for the decay of a right-handed $\tau$ \cite{FE:90}, \begin{equation}\label{eq:Q_R} Q_{\tau_R} \equiv Q_{l'_R\tau^{\phantom{'}}_R} + Q_{l'_L\tau^{\phantom{'}}_R} = \frac{1}{2}\, \left[ 1 + \frac{\xi}{3} - \frac{16}{9} (\xi\delta)\right] \; . 
\end{equation} One finds (ignoring possible correlations among the measurements) \cite{PS:95}: \begin{eqnarray} Q_{\tau_R}^{\tau\to\mu} &\!\!\! =&\!\!\! \phantom{-}0.07\pm 0.14 \; < \, 0.28 \quad (90\%\;\mbox{\rm CL})\, , \nonumber\\ Q_{\tau_R}^{\tau\to e} &\!\!\! =&\!\!\! -0.32\pm 0.17 \; < \, 0.14 \quad (90\%\;\mbox{\rm CL})\, , \\ Q_{\tau_R}^{\tau\to l} &\!\!\! =&\!\!\! \phantom{-}0.00\pm 0.08 \; < \, 0.14 \quad (90\%\;\mbox{\rm CL})\, , \nonumber \end{eqnarray} where the last value refers to the $\tau$-decay into either $l=e$ or $\mu$, assuming universal leptonic couplings. Since these probabilities are positive semidefinite quantities, they imply corresponding limits on all $|g^n_{l_R\tau_R}|$ and $|g^n_{l_L\tau_R}|$ couplings. The quoted 90\% CL limits have been obtained by adopting a Bayesian approach for one-sided limits \cite{PDG:94}. Table~\ref{table:g_tau_bounds} gives the implied bounds on the $\tau$-decay couplings. \begin{table}[bt] \caption{90\% CL limits \protect\cite{PS:95} for the $\tau_R$-decay $g^n_{l_\epsilon \tau_R}$ couplings. The numbers with an asterisk use the measured value of $(\xi\delta)_e$.} \label{table:g_tau_bounds} \centering\vspace{0.2cm} \begin{tabular}{|l|l|l|} \hline \hfil $\tau\to\mu$\hfil &\hfil $\tau\to e$ \hfil & \hfil $\tau\to l$ \hfil \\\hline $|g^S_{\mu_R\tau_R}| < 1.05$ & $|g^S_{e_R\tau_R}| < 0.75^*$ & $|g^S_{l_R\tau_R}| < 0.74$ \\ $|g^S_{\mu_L\tau_R}| < 1.05$ & $|g^S_{e_L\tau_R}| < 0.75^*$ & $|g^S_{l_L\tau_R}| < 0.74$ \\ \hline $|g^V_{\mu_R\tau_R}| < 0.53$ & $|g^V_{e_R\tau_R}| < 0.38^*$ & $|g^V_{l_R\tau_R}| < 0.37$ \\ $|g^V_{\mu_L\tau_R}| < 0.53$ & $|g^V_{e_L\tau_R}| < 0.38^*$ & $|g^V_{l_L\tau_R}| < 0.37$ \\ \hline $|g^T_{\mu_L\tau_R}| < 0.30$ & $|g^T_{e_L\tau_R}| < 0.22^*$ & $|g^T_{l_L\tau_R}| < 0.21$ \\ \hline \end{tabular} \end{table} The central value of $Q_{\tau_R}^{\tau\to e}$ turns out to be negative at the $2\sigma$ level; i.e., there is only a 3\% probability to have a positive value of $Q_{\tau_R}^{\tau\to e}$.
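The central values above follow directly from Eq.~(\ref{eq:Q_R}) and the measured shape parameters of Table~\ref{tab:tau_michel}; a one-line check (central values only, errors ignored):

```python
# Q_{tau_R} from the measured xi and (xi delta):
def Q_tau_R(xi, xidelta):
    return 0.5*(1 + xi/3 - 16/9*xidelta)

print(Q_tau_R(1.23, 0.71))   # tau -> mu: ~0.07
print(Q_tau_R(1.03, 1.11))   # tau -> e : ~-0.32 (negative central value)
print(Q_tau_R(1.06, 0.76))   # tau -> l : ~0.00
```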
Therefore, the limits on $|g^n_{e_R\tau_R}|$ and $|g^n_{e_L\tau_R}|$ should be taken with some caution, since the meaning of the assigned confidence level is not at all clear. The problem clearly comes from the measured value of $(\xi\delta)_e$. In order to get a positive probability $Q_{\tau_R}$, one needs $(\xi -1) > \frac{16}{3} [(\xi\delta) -\frac{3}{4}]$. Thus, $(\xi\delta)$ can only be made larger than $3/4$ at the expense of making $\xi$ correspondingly much larger than one \cite{PS:95}. If lepton universality is assumed (i.e. $G_{l'l} = G_F$, $\, g^n_{l'_\epsilon l^{\phantom{'}}_\omega}\! = g^n_{\epsilon\omega}$), the leptonic decay ratios $B_\mu/B_e$ and $B_e\tau_\mu/\tau_\tau$ provide limits on the low-energy parameter $\eta$. The best sensitivity \cite{ST:94} comes from $\widehat G_{\mu\tau}$, where the term proportional to $\eta$ is not suppressed by the small $m_e/m_l$ factor. The measured $B_\mu/B_e$ ratio then implies \cite{PS:95}: \begin{equation}\label{eq:eta_univ} \eta_{\tau\to l} \, = \, 0.007\pm 0.033 \ . \end{equation} This determination is more accurate than the one in Table~\ref{tab:tau_michel}, obtained from the shape of the energy distribution, and is comparable to the value measured in $\mu$-decay: $\eta_{\mu\to e} = -0.007\pm 0.013$ \cite{BU:85}. A non-zero value of $\eta$ would show that there are at least two different couplings with opposite chiralities for the charged leptons. Since we assume the V$-$A coupling $g_{LL}^V$ to be dominant, the second coupling would be \cite{FE:90} a Higgs-type coupling $g^S_{RR}$ [$\eta\approx\mbox{\rm Re}(g^S_{RR})/2$, to first-order in new-physics contributions]. Thus, Eq.~(\ref{eq:eta_univ}) puts the (90\% CL) bound: $-0.09 \, <\mbox{\rm Re}(g^S_{RR}) < 0.12$. \subsection*{\centering Model-Dependent Constraints} The general bounds in Table~\ref{table:g_tau_bounds} look rather weak.
The sensitivity of present experiments is not good enough to get interesting constraints from a completely general analysis of the four-fermion Hamiltonian. Nevertheless, stronger limits can be obtained within particular models, as shown in Tables~\ref{tab:coup_CH}, \ref{tab:W_couplings} and \ref{tab:ns_coup}. Table~\ref{tab:coup_CH} assumes that there are no tensor couplings, i.e. $g^T_{\epsilon\omega}=0$. This condition is satisfied in any model where the interactions are mediated by vector bosons and/or charged scalars \cite{PS:95}. In this case, the quantities $(1-\frac{4}{3}\rho)$, $(1-\frac{4}{3}\xi\delta)$ and $(1-\frac{4}{3}\rho) + \frac{1}{2} (1-\xi)$ reduce to sums of $|g^n_{l'_\epsilon l_\omega}|^2$, which are positive semidefinite; i.e., in the absence of tensor couplings, $\rho\leq\frac{3}{4}$, $\xi\delta\leq\frac{3}{4}$ and $(1-\xi) > 2 (\frac{4}{3}\rho - 1)$. This allows us to extract direct bounds on several couplings. \begin{table}[bth] \caption{90\% CL limits for the couplings $g^n_{\epsilon\omega}$, assuming that there are no tensor couplings \protect\cite{PS:95}.
The numbers with an asterisk use the measured value of $(\xi\delta)_e$.} \label{tab:coup_CH} \centering\vspace{0.2cm} \begin{tabular}{|l|l|l|l|l|} \hline & \hfil $\mu\to e$\hfil & \hfil $\tau\to\mu$\hfil &\hfil $\tau\to e$ \hfil & \hfil $\tau\to l$ \hfil \\\hline $|g^S_{LL}|$ & $<0.55$ & $\leq 2$ & $\leq 2$ & $\leq 2$ \\ $|g^S_{RR}|$ & $<0.066$ & $<0.80$ & $<0.63^*$ & $<0.62$ \\ $|g^S_{LR}|$ & $<0.125$ & $<0.80$ & $<0.63^*$ & $<0.62$ \\ $|g^S_{RL}|$ & $<0.424$ & $\leq 2$ & $\leq 2$ & $\leq 2$ \\ \hline $|g^V_{LL}|$ & $>0.96$ & $\leq 1$ & $\leq 1$ & $\leq 1$ \\ $|g^V_{RR}|$ & $<0.033$ & $<0.40$ & $<0.32^*$ & $<0.31$ \\ $|g^V_{LR}|$ & $<0.060$ & $<0.31$ & $<0.27$ & $<0.25$ \\ $|g^V_{RL}|$ & $<0.047$ & $<0.23$ & $<0.27$ & $<0.18$ \\ \hline \end{tabular} \end{table} If one considers only $W$-mediated interactions, but admits the possibility that the $W$ couples non-universally to leptons of any chirality, the stronger limits in Table~\ref{tab:W_couplings} are obtained \cite{PS:95}. In this case, the $g^V_{l'_\epsilon l^{\phantom{'}}_\omega}$ constants factorize into the product of two leptonic $W$ couplings, implying \cite{MU:85} additional relations among the couplings, such as $g^V_{LR}\ g^V_{RL} = g^V_{LL}\ g^V_{RR}$, which hold within any of the three channels, $(\mu, e)$, $(\tau, e)$, and $(\tau, \mu)$. Moreover, there are additional equations relating different processes, such as \cite{PS:95} $g^V_{\mu_L \tau_L}\ g^V_{e_L \tau_R} = g^V_{\mu_L \tau_R}\ g^V_{e_L \tau_L}$. The normalization condition~\eqn{eq:normalization} provides lower bounds on the $g^V_{LL}$ couplings.
\begin{table}[bt] \caption{90\% CL limits on the $g^V_{\epsilon \omega}$ couplings, assuming that (non-standard) $W$-exchange is the only relevant interaction \protect\cite{PS:95}.} \label{tab:W_couplings} \centering\vspace{0.2cm} \begin{tabular}{|l|l|l|l|} \hline & \hfil $\mu\to e$\hfil & \hfil $\tau\to\mu$\hfil &\hfil $\tau\to e$ \hfil \\\hline $|g^V_{LL}|$ & $>0.997$ & $>0.95$ & $>0.96$ \\ $|g^V_{RR}|$ & $<0.0028$ & $<0.019$ & $<0.013$ \\ $|g^V_{LR}|$ & $<0.060$ & $<0.31$ & $<0.27$ \\ $|g^V_{RL}|$ & $<0.047$ & $<0.060$ & $<0.047$ \\ \hline \end{tabular} \end{table} For $W$-mediated interactions, the hadronic $\tau$-decay modes can also be used to test the structure of the $\tau\nu_\tau W$ vertex, if one assumes that the $W$ coupling to the light quarks is the SM one. The ${\cal P}_\tau$-dependent part of the decay amplitude is then proportional to the mean $\nu_\tau$ helicity \bel{eq:nu_helicity} h_{\nu_\tau} \,\equiv\, {|g_R|^2 - |g_L|^2 \over |g_R|^2 + |g_L|^2} , \end{equation} which plays a role analogous to the leptonic-decay parameter $\xi$. The analysis of $\tau^+\tau^-$ decay correlations in leptonic--hadronic and hadronic--hadronic decay modes, using the $\pi$, $\rho$ and $a_1$ hadronic final states, gives $h_{\nu_\tau}=-1.014\pm 0.027$ [35,37,42]; this implies $|g_R/g_L|^2= -0.007\pm 0.013 < 0.018$ (90\% CL). The sign of the $\nu_\tau$ helicity can be determined \cite{ARGUS:90} to be negative with the decay $\tau^-\to\nu_\tau a_1^-$, because there are two different amplitudes [corresponding to two different ways of forming the $\rho$ in $a_1^-\to(\rho\pi)^-$] and their interference contains information on the sign.\footnote{ Once the $h_{\nu_\tau}$ sign is fixed, the measurement of leptonic--hadronic correlations determines the signs of $\xi_{\tau\to e}$ and $\xi_{\tau\to\mu}$ to be positive.
At the $Z$ peak, the signs of $\xi_{\tau\to l}$ and $h_{\nu_\tau}$ can be directly determined \protect\cite{ALEPH:94} from the sign of ${\cal P}_\tau$, which is fixed by combining the measurements of the polarization and left-right asymmetries.} \begin{table}[tbh] \caption{90\% CL limits for the $g^n_{\epsilon\omega}$ couplings, taking $g^n_{RR}=0$, $g^S_{LL}=0$, $g^V_{LR}=g^S_{LR}=2 g^T_{LR}$ and $g^V_{RL}=g^S_{RL}=2 g^T_{RL}$ \protect\cite{PS:95}.} \label{tab:ns_coup} \centering\vspace{0.2cm} \begin{tabular}{|l|l|l|l|l|} \hline & \hfil $\mu\to e$\hfil & \hfil $\tau\to\mu$\hfil &\hfil $\tau\to e$ \hfil & \hfil $\tau\to l$ \hfil \\\hline $|g^V_{LL}|$ & $>0.998$ & $>0.95$ & $>0.96$ & $>0.97$ \\ $|g^V_{LR}|$ & $<0.047$ & $<0.22$ & $<0.19$ & $<0.18$ \\ $|g^V_{RL}|$ & $<0.033$ & $<0.16$ & $<0.19$ & $<0.13$ \\ \hline \end{tabular} \end{table} Table~\ref{tab:ns_coup} shows the constraints obtained under the assumption that the interaction is mediated by the SM $W$ plus an additional neutral scalar \cite{PS:95}. The scalar contributions vanish for the LL and RR couplings and satisfy the relations $g^V_{LR} = g^S_{LR} = 2 g^T_{LR}$, $g^V_{RL} = g^S_{RL} = 2 g^T_{RL}$. This allows one to express everything in terms of the vector couplings. The quantities $(1-\frac{4}{3}\rho)$, $(1-\frac{4}{3}\xi\delta)$ and $(1-\frac{4}{3}\rho) + \frac{1}{2} (1-\xi)$ are also positive semidefinite in this case. Moreover, $(1-\frac{4}{3}\rho)=(1-\frac{4}{3}\xi\delta)$. \subsection*{\centering Expected Signals in Minimal New-Physics Scenarios} All experimental results obtained so far are consistent with the SM. Clearly, the SM provides the dominant contributions to the $\tau$-decay amplitudes. Future high-precision measurements of allowed $\tau$-decay modes should then look for small deviations from the SM predictions and find out the possible source of any detected discrepancy.
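Incidentally, the $|g_R/g_L|^2$ value quoted after Eq.~(\ref{eq:nu_helicity}) is a one-line inversion of the measured helicity; a minimal check with linear error propagation:

```python
h, dh = -1.014, 0.027        # measured mean nu_tau helicity
r2 = (1 + h) / (1 - h)       # invert h = (|gR|^2 - |gL|^2)/(|gR|^2 + |gL|^2)
dr2 = 2 / (1 - h)**2 * dh    # linear error propagation
print(r2, dr2)               # ~ -0.007 +- 0.013, as quoted in the text
```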
In a first analysis, it seems natural to assume \cite{PS:95} that new-physics effects would be dominated by the exchange of a single intermediate boson, coupling to two leptonic currents. The new contribution could originate from non-standard couplings of the usual $W$ boson, or from the exchange of a new scalar or vector particle (intermediate tensor particles hardly appear in any reasonable model beyond the SM). Table~\ref{tab:summary} \cite{PS:95} summarizes the expected effects of different new-physics scenarios on the measurable shape parameters. The four general cases studied correspond to adding a single intermediate boson-exchange, $V^+$, $S^+$, $V^0$, $S^0$ (charged/neutral, vector/scalar), to the SM contribution (a non-standard $W$ would be a particular case of the SM + $V^+$ scenario). AS indicates that any sign is allowed. \begin{table}[bht] \caption{Theoretical constraints on the Michel parameters \protect\cite{PS:95}} \label{tab:summary} \centering\vspace{0.2cm} \begin{tabular}{|c|c|c|c|c|} \hline & SM + $V^+$ & SM + $S^+$ & SM + $V^0$ & SM + $S^0$ \\ \hline $\rho - 3/4$ & $< 0$ & 0 & 0 & $< 0$ \\ \hline $\xi - 1$ & AS & $< 0$ & $< 0$ & AS \\ \hline $(\delta\xi)-3/4$& $< 0$ & $< 0$ & $< 0$ & $< 0$ \\ \hline $\eta$ & 0 & AS & AS & AS \\ \hline \end{tabular} \end{table} It is immediately apparent that $\rho \leq 3/4$ and $(\delta \xi) < 3/4$ in all cases studied. Thus one can have new physics while $\rho$ remains equal to the SM value. In fact, any interaction consisting of an arbitrary combination of $g^S_{\epsilon \omega}$'s and $g^V_{LL}$, $g^V_{RR}$ couplings yields this result \cite{FE:90}. On the other hand, $(\delta\xi)$ will be different from $3/4$ in any of the cases above, providing, in principle, a better opportunity for the detection of physics beyond the SM.
The above features are easy to understand by looking back at Eqs.~(\ref{eq:michel}) and recalling that the tensor couplings can only be generated by neutral scalar interactions (violating individual lepton flavours), in which case they are proportional to the scalar couplings. It is easy to see that having two such neutral scalars will not alter the situation. Indeed, to obtain $\rho > 3/4$ or $(\delta \xi) > 3/4$ one would need to get contributions from charged and neutral scalars simultaneously \cite{PS:95}. Moreover, $(\delta \xi)>3/4$ can only happen through $RL$ couplings and must be accompanied by $\xi>1$. The $\tau$cF\ offers an ideal experimental environment to perform this kind of analysis. The expected sensitivities to the different shape parameters, quoted in Table~\ref{tab:improvements}, would allow one to probe the effective four-fermion Hamiltonian to a level where very interesting constraints on new-physics scenarios could be obtained. The numbers given in Table~\ref{tab:improvements} are somewhat conservative, since they only take into account the information obtained from correlated $\tau^+\tau^-$ events where both $\tau$'s decay into leptons \cite{marbella:1,ST:93}. Better precisions may be reached including the correlations of the leptonic decays with the hadronic ones \cite{marbella:1,ST:93}. \section*{\centering DISCUSSION} The flavour structure of the SM is one of the main pending questions in our understanding of weak interactions. Although we do not know the reason for the observed family replication, we have learned experimentally that the number of SM fermion generations is just three (and no more). Therefore, we must study as precisely as possible the few existing flavours, to get some hints about the dynamics responsible for their observed structure. The construction of high-precision flavour factories is clearly needed. Without any doubt, the $\tau$cF\ is the best available tool to explore the $\tau$ and $\nu_\tau$ leptons and the charm quark.
This facility combines the three ingredients required for making an accurate and exhaustive investigation of these particles: high statistics, low backgrounds and good control of systematic errors. The threshold region provides a series of unique features (low and measurable backgrounds free from heavy flavour contaminations, monochromatic particles from two-body decays, small radiative corrections, single tagging, high-rate calibration sources, \ldots) that create an ideal experimental environment for this physics. Two basic properties make the $\tau$ particle an ideal laboratory for testing the SM: the $\tau$ is a lepton, which means clean physics, and moreover, it is heavy enough to produce a large variety of decay modes. In the previous sections I have discussed two particular topics, charged-current universality and Lorentz structure of the weak currents, which would greatly benefit from a high-precision experimental study of the $\tau$ lepton. There are, in addition, many other interesting subjects to be investigated. The $\tau$cF\ could carry out a precise and exhaustive study of all exclusive $\tau$ decay channels, looking for signs of discrepancies with the theoretical expectations. The accurate measurement of the $q^2$ distribution of the final hadrons would allow a detailed analysis of the vector and axial-vector spectral functions and, therefore, a significant improvement of our knowledge of QCD. Rare and forbidden $\tau$ decays could be looked for, with a sensitivity better than $10^{-7}$ in some channels. The bound on the $\nu_\tau$ mass could be pushed down to the 1--2 MeV level. The present knowledge of the $\tau$ electromagnetic moments could be improved by more than one order of magnitude. Last but not least, CP-violation in the lepton sector at the milli-weak ($10^{-3}$) level could be investigated (with longitudinal beam polarization). 
In addition to the large improvement in our knowledge of the $\tau$ lepton, the $\tau$cF\ would also provide precious information on the $c$ quark, through the detailed study of the $D$ mesons and the $J/\Psi$ and other charmonium states. A comprehensive set of precision measurements for $\tau$, charm and light-hadron spectroscopy would be obtained, probing the SM at a much deeper level of sensitivity and exploring the frontiers of its possible extensions. \section*{\centering ACKNOWLEDGEMENTS} Many results discussed here have been obtained in collaboration with Jo\~ao P.~Silva \cite{PS:95}. This work has been supported in part by CICYT (Spain), under grant No. AEN-93-0234. \begin{Thebibliography}{99} \refjl{PDG:90}{M. Aguilar-Ben\'{\i}tez {\it et al\/}}{Review of Particle Properties, Phys. Lett.}{B239}{1990}{1} \bibitem{PI:92} A. Pich, {\it Tau Physics}, in {\it Heavy Flavours}, eds. A.J.~Buras and M.~Lindner, Advanced Series on Directions in High Energy Physics -- Vol.~10 (World Scientific, Singapore, 1992), p.~375. \refjl{PDG:94}{M. Aguilar-Ben\'{\i}tez {\it et al\/}}{Review of Particle Properties, Phys. Rev.}{D50}{1994}{1173} \bibitem{montreux} Proc. {\it Third Workshop on Tau Lepton Physics} (Montreux, 1994), ed. L.~Rolandi, {\it Nucl. Phys.\ B (Proc. Suppl.)\/} {\bf 40} (1995). \bibitem{marbella:1} A. Pich, {\it Perspectives on $\tau$-Charm Factory Physics}, in Proc. {\it Third Workshop on the Tau-Charm Factory} (Marbella, 1993), eds. J.~Kirkby and R.~Kirkby (Editions Fronti\'eres, Gif-sur-Yvette, 1994), p.~767. \bibitem{marbella:2} A. Pich, {\it Tau Physics Prospects at the $\tau$-Charm Factory and at other Machines}, in Proc. {\it Third Workshop on the Tau-Charm Factory} (Marbella, 1993), eds. J.~Kirkby and R.~Kirkby (Editions Fronti\'eres, Gif-sur-Yvette, 1994), p.~51. \bibitem{QCD:94} A. Pich, {\it QCD Predictions for the $\tau$ Hadronic Width: Determination of $\alpha_s(M_\tau^2)$}, in Proc. {\it QCD 94} (Montpellier, 1994), ed. S. Narison, {\it Nucl.
Phys.\ B (Proc. Suppl.) \/} {\bf 39B,C} (1995) 326. \bibitem{slac:89} A. Pich, {\it QCD Tests from Tau Decay Data}, in Proc. {\it Tau-Charm Factory Workshop} (SLAC, California, 1989), ed. L.V.~Beers, SLAC-Report-343 (1989), p.~416. \refjl{MS:88}{W.J. Marciano and A. Sirlin}{Phys. Rev. Lett.}{61}{1988}{1815} \refjl{PDG:92}{M. Aguilar-Ben\'{\i}tez {\it et al\/}}{Review of Particle Properties, Phys. Rev.}{D45}{1992}{Part 2} \refjl{ALEPH:95}{D. Buskulic {\it et al\/}\ (ALEPH)}{Phys. Lett.}{B349}{1995}{585} \refjl{BR:92}{D.I. Britton {\it et al\/}}{Phys. Rev. Lett.}{68}{1992}{3000; \\ G. Czapek {\it et al\/}, {\it Phys. Rev. Lett.} {\bf 70} (1993) 17.} \bibitem{UA1:89} C. Albajar {\it et al\/}\ (UA1), {\it Z. Phys.} {\bf C44} (1989) 15; \\ J. Alitti {\it et al\/}\ (UA2), {\it Phys. Lett.} {\bf B280} (1992) 137; \\ F. Abe {\it et al\/}\ (CDF), {\it Phys. Rev. Lett.} {\bf 68} (1992) 3398; {\bf 69} (1992) 28. \refjl{MS:93}{W.J. Marciano and A. Sirlin}{Phys. Rev. Lett.}{71}{1993}{3629} \bibitem{DF:94} R. Decker and M. Finkemeier, {\it Nucl. Phys.} {\bf B438} (1995) 17; {\it Nucl. Phys.\ B (Proc. Suppl.)} {\bf 40} (1995) 453. \refjl{MA:94}{W.J. Marciano}{Nucl. Phys.\ B (Proc. Suppl.)}{40}{1995}{3} \refjl{MI:50}{L. Michel}{Proc. Phys. Soc.}{A63}{1950}{514; 1371} \refjl{BM:57}{C. Bouchiat and L. Michel}{Phys. Rev.}{106}{1957}{170} \refjl{KS:57}{T. Kinoshita and A. Sirlin}{Phys. Rev.}{107}{1957}{593; {\bf 108} (1957) 844} \bibitem{SCH:83} F. Scheck, {\it Leptons, Hadrons and Nuclei} (North-Holland, Amsterdam, 1983); {\it Phys. Rep.} {\bf 44} (1978) 187. \refjl{FGJ:86}{W. Fetscher, H.-J. Gerber and K.F. Johnson}{Phys. Lett.}{B173} {1986}{102} \bibitem{FG:93} W. Fetscher and H.-J. Gerber, {\it Precision Measurements in Muon and Tau Decays}, in {\it Precision Tests of the Standard Electroweak Model}, ed. P.~Langacker, Advanced Series on Directions in High Energy Physics -- Vol.~14 (World Scientific, Singapore, 1995), p.~657. \refjl{PS:95}{A. Pich and J.P. Silva}{Phys.
Rev.}{D}{1995}{in press [hep-ph/9505327]} \refjl{JA:66}{C. Jarlskog}{Nucl. Phys.}{75}{1966}{659} \bibitem{RO:82} L.Ph. Roesch {\it et al\/}, {\it Helv. Phys. Acta} {\bf 55} (1982) 74; \\ W. Fetscher, {\it Phys. Lett.} {\bf 140B} (1984) 117; \\ A. Jodidio {\it et al\/}, {\it Phys. Rev.} {\bf D34} (1986) 1967; {\it Phys. Rev.} {\bf D37} (1988) 237. \bibitem{BA:88} B. Balke {\it et al\/}, {\it Phys. Rev.} {\bf D37} (1988) 587; \\ D. Geiregat {\it et al\/}, {\it Phys. Lett.} {\bf B247} (1990) 131; \\ S.R. Mishra {\it et al\/}, {\it Phys. Lett.} {\bf B252} (1990) 170. \refjl{TS:71}{Y.S. Tsai}{Phys. Rev.}{D4}{1971}{2821; \\ S. Kawasaki, T. Shirafuji and S.Y. Tsai, {\it Progr. Theor. Phys.} {\bf 49} (1973) 1656} \refjl{PS:77}{S.-Y. Pi and A.I. Sanda}{Ann. Phys. (NY)}{106}{1977}{171} \bibitem{GO:89} J.J. G\'omez-Cadenas, {\it Beautiful $\tau$ Physics in the Charm Land}, in Proc. {\it Tau-Charm Factory Workshop} (SLAC, 1989), ed. L.V. Beers, SLAC-Report-343 (1989), p.~48. \bibitem{NE:91} C.A. Nelson, {\it Phys. Rev.} {\bf D43} (1991) 1465; {\it Phys. Rev. Lett.} {\bf 62} (1989) 1347; {\it Phys. Rev.} {\bf D40} (1989) 123 [Err: {\bf D41} (1990) 2327]; \\ S. Goozovat and C.A. Nelson, {\it Phys. Rev.} {\bf D44} (1991) 2818; {\it Phys. Lett.} {\bf B267} (1991) 128 [Err: {\bf B271} (1991) 468]. \refjl{FE:90}{W. Fetscher}{Phys. Rev.}{D42}{1990}{1544} \refjl{BPR:91}{J. Bernab\'eu, A. Pich and N. Rius}{Phys. Lett.}{B257}{1991}{219} \refjl{ABGPR:92}{R. Alemany {\it et al\/}}{Nucl. Phys.}{B379}{1992}{3} \refjl{DDDR:93}{M. Davier {\it et al\/}}{Phys. Lett.}{B306}{1993}{411} \refjl{ALEPH:95b}{D. Buskulic {\it et al\/}\ (ALEPH)}{Phys. Lett.}{B346}{1995}{379} \refjl{ARGUS:95}{H. Albrecht {\it et al\/}\ (ARGUS)}{Phys. Lett.}{B341}{1995}{441} \refjl{ARGUS:95b}{H. Albrecht {\it et al\/}\ (ARGUS)}{Phys. Lett.}{B349}{1995}{576} \refjl{ARGUS:93}{H. Albrecht {\it et al\/}\ (ARGUS)}{Phys. Lett.}{B316}{1993}{608} \refjl{ST:94}{A. Stahl}{Phys. Lett.}{B324}{1994}{121} \refjl{BU:85}{H.
Burkard {\it et al\/}}{Phys. Lett.}{160B}{1985}{343} \bibitem{MU:85} K. Mursula and F. Scheck, {\it Nucl. Phys.} {\bf B253} (1985) 189; \\ K. Mursula, M. Roos and F. Scheck, {\it Nucl. Phys.} {\bf B219} (1983) 321. \refjl{ARGUS:94}{H. Albrecht {\it et al\/}\ (ARGUS)}{Phys. Lett.}{B337}{1994}{383; \\ {\it Z. Phys.} {\bf C58} (1993) 61} \refjl{ARGUS:90}{H. Albrecht {\it et al\/}\ (ARGUS)}{Phys. Lett.}{B250}{1990}{164} \refjl{ALEPH:94}{D. Buskulic {\it et al\/}\ (ALEPH)}{Phys. Lett.}{B321}{1994}{168} \bibitem{ST:93} A. Stahl, {\it The Lorentz Structure of the Charged Weak Current in $\tau$ Decays}, in Proc. {\it Third Workshop on the Tau-Charm Factory} (Marbella, 1993), eds. J. Kirkby and R. Kirkby (Editions Fronti\'eres, Gif-sur-Yvette, 1994), p.~175. \end{Thebibliography} \end{document}
\section{Introduction} \label{sec:intro} \begin{enumerate} \item what are numerical models? what equations do we consider? \item GRMHD/semi-analytical code and model description used in the ngeht challenges \item time and radial information of these models \item disks vs. jets, MAD vs. SANE \item justification for usage: what are the unique properties of these numerical models (if any) or why are the models important? \item expensive and exotic models: future science goals? (Long time analysis of variability, misaligned/bent jets) \item Future Outlook: what would the numerical model world look like in a few years/ decades? what are the major caveats of the current models? \end{enumerate} With the advent of the Event Horizon Telescope (EHT; \citealt{EHT_M87_2019_PaperI}), imaging the near-horizon structure of supermassive black holes (BHs) is now a reality. \section{Numerical methods} We employ a RIAF solution and a number of different GRMHD codes for the ngEHT challenges. \subsection{RIAF+hotspot solutions} RIAF models attempt to describe the time- and azimuthally-averaged appearance of accretion flows. This is done with a set of building blocks. The specific RIAF models used in the challenges are based on the approach of \citet{Yuan_2003, broderick_2006, Broderick_2010}. We decompose the accretion flow into a set of phenomenological models that describe the electron density, temperature, magnetic field, and velocity profile. To define the electron density we first define the cylindrical radius $\rho = r|\sin(\theta)|$ and vertical displacement $z = r\cos(\theta)$. Then the electron-density radial profile is given by \begin{equation} n_{e,{\rm X}}(\rho, z) = n_{e,{\rm X},0}\, \rho^{p_X}\exp\left(-\frac{z^2}{2h^2\rho^2}\right) \end{equation} where $X$ denotes the electron population. The disk height is controlled by $h$, which we set to unity in this work.
For the challenge dataset we included both thermal synchrotron emitting ($X={\rm th}$) and non-thermal synchrotron emitting ($X={\rm nth}$) electrons. The thermal electrons have $n_{e,{\rm th}, 0}=1.3 \times 10^8$ and $p_{\rm th} = -1.1$, while the non-thermal electrons have $n_{e, {\rm nth}, 0} = 1.3\times 10^5$ and $p_{\rm nth} =-2.02$. The temperature profile of the thermal electrons is also given by a radial power-law with a Gaussian envelope describing the height: \begin{equation}\label{eq:Te_riaf} T_{e}(t,r,\theta, \phi) = T_{e,0}\rho^{-0.84}\exp\left(-\frac{z^2}{2h^2\rho^2}\right), \end{equation} where for the challenge data we set $T_{e,0} = 6.3\times 10^{10}$~K. To define the gas pressure we assume equipartition between the electron and proton energy and that the protons are in roughly hydrostatic equilibrium, giving: \begin{equation}\label{eq:riaf_gas} p_{\rm gas}(x^\mu) = \frac{m_p n_{e,th}(x^\mu)}{6}\frac{M}{r}. \end{equation} To define the local magnetic field we assume a constant $\beta = p_{\rm gas}/p_{\rm mag}$ plasma, which, combined with \autoref{eq:riaf_gas}, yields the field strength. The orientation of the magnetic field is then given by a purely toroidal configuration, relative to the plasma observer. Finally, to describe the emission and absorption coefficients we assume a synchrotron self-absorption model from \citet{broderickblandford04} for both the thermal and non-thermal synchrotron electrons. For the non-thermal electrons we use a power-law prescription with radiation coefficients from \citet{jonesodell}, and a spectral power-law index of 1.25. To describe the dynamics of the accretion flow we follow the prescription from \citet{Pu2016}.
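The density and temperature profiles above can be evaluated directly. A minimal sketch (the constants are the challenge values quoted above; the function name and the flat evaluation grid are ours):

```python
import numpy as np

# Challenge-setup constants from the text (h = 1).
N_E_TH0, P_TH = 1.3e8, -1.1      # thermal electron normalization, slope
N_E_NTH0, P_NTH = 1.3e5, -2.02   # non-thermal electron normalization, slope
T_E0 = 6.3e10                    # electron temperature normalization [K]
H = 1.0                          # disk height parameter

def riaf_profiles(r, theta):
    """Thermal/non-thermal electron density and temperature at (r, theta)."""
    rho_cyl = r * np.abs(np.sin(theta))            # cylindrical radius
    z = r * np.cos(theta)                          # vertical displacement
    envelope = np.exp(-z**2 / (2.0 * H**2 * rho_cyl**2))
    n_th = N_E_TH0 * rho_cyl**P_TH * envelope
    n_nth = N_E_NTH0 * rho_cyl**P_NTH * envelope
    t_e = T_E0 * rho_cyl**-0.84 * envelope
    return n_th, n_nth, t_e

# In the equatorial plane (theta = pi/2) the Gaussian envelope is 1,
# so the profiles reduce to pure power laws of the cylindrical radius:
n_th, n_nth, t_e = riaf_profiles(10.0, np.pi / 2)
```

Away from the midplane the Gaussian envelope suppresses both density populations and the temperature with the same scale height, as in the equations above.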
Following the notation from \citet{Tiede2020} the velocity field $u^\mu = u^t v^\mu$ is given by \begin{equation}\label{eq:hotspot_velo} \begin{aligned} v^r &= v^r_K + \alpha(v^r_{ff} - v^r_K) \\ v^\theta &= 0 \\ v^\phi &= v^\phi_K + (1-\kappa)(v^\phi_{ff} - v^\phi_K), \end{aligned} \end{equation} where $v^\mu_K$ denotes the Keplerian velocity field, and $v^\mu_{ff}$ is the free-fall velocity field. The remaining component $u^t$ is given by the normalization condition $u^\mu u_\mu = -1$. The radial Keplerian component vanishes outside the ISCO, $v^r_K = 0$; inside the ISCO we use plunging geodesics, which are specified by matching the angular momentum and energy at the ISCO. The hotspot dynamics follows the model from \citet{Tiede2020}, where we assume that the hotspot travels passively along the accretion flow velocity field \autoref{eq:hotspot_velo}. This implies that the equation of motion is given by the conservation of particle number \autoref{eq:continuity}. For the emission we assume a non-thermal synchrotron hotspot, with an initial Gaussian density profile \begin{equation} n_{e}(x^\mu) = n_0\exp\left[-\frac{\Delta r^\mu \Delta r_\mu +(\Delta r_\mu v^\mu)^2}{2R_s^2}\right], \end{equation} where we have set $n_0 = 6\times 10^6$ and $R_s = 0.5M$, and $\Delta r^\mu$ is the displacement from the hotspot center. \subsection{GRMHD equations} Here we describe the GRMHD equations.
The conservation of particle number and energy-momentum reads \begin{eqnarray} \label{eq:continuity} \partial_{t}(\sqrt{-g} \rho u^{t}) &=& -\partial_{i}(\sqrt{-g} \rho u^{i}) \\ \partial_{t}(\sqrt{-g} T^t_{\nu}) &=& -\partial_{i}(\sqrt{-g} T^i_{\nu}) +\sqrt{-g} T^{\alpha}_{\beta}\Gamma^{\beta}_{\nu \alpha}, \end{eqnarray} where $i$ denotes the spatial indices, $\Gamma^{\beta}_{\nu \alpha}$ is the metric connection and $T^{\mu}_{\nu}$ is the stress-energy tensor, \begin{equation} T^{\mu}_{\nu}=(\rho + u + p +b^2)u^{\mu}u_{\nu} + (p + \frac{b^2}{2})\delta^{\mu}_{\nu} - b^{\mu}b_{\nu}. \end{equation} Here $B^i$ is the magnetic field 3-vector and $b^{\mu}$ is the magnetic field 4-vector, defined in terms of $B^i$ as \begin{eqnarray} b^t &=& B^i u^{\mu} g_{i\mu} \\ b^i &=& (B^i + b^t u^i)/u^t. \end{eqnarray} The magnetic field $B^i$ is evolved using the spatial components of the induction equation, \begin{equation} \partial_{t}(\sqrt{-g} B^i) = -\partial_{j}(\sqrt{-g} (b^j u^i - b^i u^j)), \end{equation} while the temporal component provides the no-monopoles constraint, \begin{equation} \frac{1}{\sqrt{-g}}\partial_i(\sqrt{-g} B^i) = 0. \end{equation} \subsection{Two-temperature physics} The electron entropy is evolved according to \begin{equation} \partial_{\mu} (\sqrt{-g} \rho u^{\mu} s_{\rm e}) = \frac{\sqrt{-g} (\Gamma_{\rm e} - 1)}{\rho^{\Gamma_{\rm e} -1}}f_{\rm e}Q, \end{equation} where $s_{\rm e} = p_{\rm e}/\rho^{\Gamma_{\rm e}}$. The total heating rate $Q$ is calculated by comparing the total internal energy of the gas and the internal energy obtained from the electron entropy conservation equation. The electron heating fraction is given by \begin{eqnarray} f_{\rm e} &=& \frac{1}{2} \exp \left[ -\frac{1-\beta/\beta_{\rm max}}{0.8+\sigma_{\rm h}^{0.5}} \right], \\ \beta_{\rm max} &=& \sigma_{\rm h}/4, \\ \sigma_{\rm h} &=& \frac{b^2}{\rho h}, \end{eqnarray} \noindent where $h=1+\Gamma_{\rm g} p_{\rm g}/[(\Gamma_{\rm g}-1)\rho]$ is the gas specific enthalpy.
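The heating-fraction fit above is easy to probe numerically. A minimal sketch (our own function names; inputs $\beta$ and $\sigma_{\rm h}$ are taken as given rather than computed from fluid variables):

```python
import numpy as np

# Reconnection-type electron heating fraction from the fit quoted above:
#   f_e = 1/2 * exp[-(1 - beta/beta_max) / (0.8 + sigma_h^0.5)],
#   beta_max = sigma_h / 4.
def f_e(beta, sigma_h):
    """Fraction of the dissipated energy deposited into electrons."""
    beta_max = sigma_h / 4.0
    return 0.5 * np.exp(-(1.0 - beta / beta_max) / (0.8 + np.sqrt(sigma_h)))

# At beta = beta_max the exponent vanishes, so electrons and ions share
# the dissipation equally:
print(f_e(beta=0.25, sigma_h=1.0))   # 0.5
# Weakly magnetized, low-beta gas heats mostly the ions:
print(f_e(beta=1e-4, sigma_h=1e-2))  # ~0.17
```

The saturation at $f_{\rm e}=1/2$ for $\beta\to\beta_{\rm max}$ and the suppression at low $\beta$ and low $\sigma_{\rm h}$ reproduce the qualitative trend that magnetized regions (jet sheath) host hotter electrons than the weakly magnetized disk.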
\subsection{Radiative GRMHD} ngEHT:alex raymond paper, Lindy paper astro 2020, OG:Chael+19 \section{Model list} \href{https://challenge.ngeht.org/challenge1/}{Challenge 1: } \begin{enumerate} \item SgrA* semi-analytic stationary RIAF (Broderick \& Loeb 2016) from C. Fromm \item M87 MAD 2T-GRMHD a=0.94 with reconnection heating (Mizuno+2021); RT with kappa using BHOSS \end{enumerate} \href{https://challenge.ngeht.org/challenge2/}{Challenge 2: } \begin{enumerate} \item SgrA* semi-analytic stationary RIAF + shearing hotspot (Broderick \& Loeb 2016,Tiede+2020) Stokes I \item SgrA* MAD GRMHD a=0.5 Stokes I (HAMR) + BHOSS \item M87 MAD GRMHD a=0.94 Stokes I (HAMR) kappa \end{enumerate} \href{https://challenge.ngeht.org/challenge3/}{Challenge 3: } \begin{enumerate} \item SgrA* semi-analytic stationary RIAF + shearing hotspot (Broderick \& Loeb 2016,Tiede+2020) Full Stokes \item SgrA* MAD GRMHD a=0.5 Full Stokes (HAMR) + iPole \item M87 MAD GRMHD a=0.94 Full Stokes (HAMR) kappa + iPole \end{enumerate} \section{Temporal behavior of fluxes} \begin{figure} \includegraphics[width=\columnwidth]{figures/time_data.png} \caption{We show the mass accretion rate $\dot{M}$, dimensionless magnetic flux $\Phi/\sqrt{\dot{M}}$, normalized radial flux of the angular momentum $\dot{J}/\dot{M}$ and the outflow efficiency $P_{\rm out}/\dot{M}c^2=1-\dot{E}/\dot{M}c^2$ over time. Values are calculated at the event horizon.} \label{fig:Horizon_fluxes} \end{figure} We calculate the mass, magnetic, angular momentum and energy fluxes. Please note we assume that the factor $\sqrt{4\pi}$ is absorbed within the magnetic field $b^{\mu}$ or $B^i$. We are also using natural units $GM_{\rm BH}=c=1$ for these equations.
\begin{eqnarray} \dot{M} &=&\int^{2\pi}_{0}\int^{\pi}_{0}\, (-\rho \, u^r) \, \sqrt{-g} \, d\theta \, d\varphi\,, \\ \Phi &=&\frac{1}{2}\int^{2\pi}_{0}\int^{\pi}_{0}\, |^*F^{rt}| \, \sqrt{-g} \, d\theta \, d\varphi\,, \\ \dot{L} &=&\int^{2\pi}_{0}\int^{\pi}_{0}\, T^r_{\varphi} \, \sqrt{-g} \, d\theta \, d\varphi\,, \\ \dot{E} &=&\int^{2\pi}_{0}\int^{\pi}_{0}\, T^r_t \, \sqrt{-g} \, d\theta \, d\varphi\,, \end{eqnarray} where all quantities are calculated at the event horizon $r_{\rm hor}=r_{\rm g} (1+\sqrt{1-a^2})$. One issue is contamination by floors. This section is skipped for the stationary RIAF model of challenge 1. \section{Disk-averaged quantities} \begin{figure} \includegraphics[width=\columnwidth]{figures/radial_data_1.png} \caption{We show the radial profiles of gas density $\rho$, plasma-$\beta$, proton temperature $T_{\rm p}$, electron temperature $T_{\rm e}$. Quantities are disk-averaged and time-averaged over the raytracing period.} \label{fig:disk_profiles_1} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{figures/radial_data_2.png} \caption{We show the radial profiles of disk scale height $h/r$, radial velocity $|v_r|$, angular velocity $\Omega$ and specific angular momentum $u_{\varphi}$. Quantities are disk-averaged and time-averaged over the raytracing period.} \label{fig:disk_profiles_2} \end{figure} Here we calculate the disk-averaged properties of each model, namely gas density $\rho$, thermal pressure $p_{\rm gas}$, magnetic pressure $p_{\rm mag}$, radial velocity $|v_r|$, azimuthal velocity $|v_{\varphi}|$, angular momentum $u_{\varphi}$, disk scale height $h/r$, ion temperature $T_i$ and the electron temperature $T_e$.
We define disk-averaging of a quantity $q$ as \begin{equation} \langle q \rangle (r,t) = \frac{\int^{2\pi}_{0}\int^{\pi}_{0}\, \rho \, q \, \sqrt{-g} \, d\theta \, d\varphi}{\int^{2\pi}_{0}\int^{\pi}_{0}\, \rho \, \sqrt{-g} \, d\theta \, d\varphi}, \end{equation} where $q \in \{ \rho, p_{\rm gas}, p_{\rm mag}, |v_r|, |v_{\varphi}| , u_{\varphi}, h/r, T_i [\rm Kelvin], T_e [\rm Kelvin] \}$. Further definitions follow: \begin{eqnarray} p_{\rm gas} &=& (\Gamma_{\rm ad}-1)u, \\ p_{\rm mag} &=& b^{\mu}b_{\mu}/2, \\ |v_i| &=& \sqrt{v^i v^i g_{ii}}, \quad v^i = u^i/u^t, \\ h/r &=& |\theta - \pi/2|, \end{eqnarray} where $\Gamma_{\rm ad}$ and $u$ are the adiabatic index and the internal energy of the gas. Please provide the radial profiles over the time period relevant for the raytracing, e.g., $5,000-10,000~GM_{\rm BH}/c^3$. Since these simulations have been raytraced targeting either Sgr A$^*$ or M87, please provide the black hole mass, the density-scaling or $\dot{M}$-scaling, and the $R_{\rm high}$--$R_{\rm low}$ values used to produce the target 230 GHz flux. For the stationary RIAF model, please provide the profiles used. \section{Jet properties} \begin{figure} \includegraphics[width=\columnwidth]{jet_data.png} \caption{We show the jet radius $R_{\rm jet}$ and the jet Lorentz factor $\gamma$.} \label{fig:jet_profiles} \end{figure} Here we calculate the radial profiles of the jet half width $R_{\rm jet}$ and Lorentz factor $\gamma$: \begin{equation} R_{\rm jet} (r,t) = \sqrt\frac{\int^{2\pi}_{0}\int^{\pi}_{0}\, \sqrt{-g/g_{rr}} (\mu>2) \, d\theta \, d\varphi}{2\pi}\,, \end{equation} \begin{equation} \gamma (r,t) = \frac{\int^{2\pi}_{0}\int^{\pi}_{0}\, (\mu>2) \, \alpha u^t \, \sqrt{-g} \, d\theta \, d\varphi}{\int^{2\pi}_{0}\int^{\pi}_{0}\, (\mu>2) \, \sqrt{-g} \, d\theta \, d\varphi}, \end{equation} where $\mu=-T^r_t/(\rho u^r)$ is the specific radial energy density and $\alpha=1/\sqrt{-g^{tt}}$.
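The disk-averaged and jet-averaged quantities above share the same discrete structure: a weighted mean over a $(\theta,\varphi)$ shell. A minimal sketch with hypothetical sample arrays on a uniform angular grid (the cell sizes cancel between numerator and denominator):

```python
import numpy as np

def disk_average(q, rho, sqrtg):
    """<q> = int(rho q sqrt(-g) dtheta dphi) / int(rho sqrt(-g) dtheta dphi),
    evaluated on one radial shell; on a uniform grid the dtheta*dphi
    factors cancel, leaving a weighted mean."""
    w = rho * sqrtg
    return np.sum(w * q) / np.sum(w)

# Toy shell data (64 theta cells x 128 phi cells); random positive weights.
rng_rho = np.random.default_rng(0)
rng_g = np.random.default_rng(1)
rho = rng_rho.uniform(0.1, 1.0, (64, 128))
sqrtg = rng_g.uniform(0.5, 2.0, (64, 128))

# Sanity check: the weighted mean of a constant field is the constant.
print(disk_average(np.full((64, 128), 3.0), rho, sqrtg))  # 3.0
```

The jet-averaged Lorentz factor follows the same pattern with the weight $\rho$ replaced by the jet mask $(\mu>2)$ and $q = \alpha u^t$.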
\section{Axisymmetrised profiles} \begin{figure*} \includegraphics[width=\textwidth]{m87_2D.png} \caption{We show t- and $\varphi$-averaged data: electron number density $n_{\rm e}$ and temperature $T_{\rm e}$. We also denote the jet boundary with $\sigma=1$ (black lines). The time averaging is done over the raytracing period. RIAF plots are for Sgr A$^*$ while the rest are for M87.} \label{fig:2D_profiles} \end{figure*} Here we calculate time and azimuthally averaged 2D ($r,\theta$) plots of gas density, plasma beta ($=p_{\rm gas}/p_{\rm mag}$), magnetization $\sigma=b^2/\rho$ and $T_e~[\rm K]$. The time-averaging should be done over the time period relevant for the raytracing.
\begin{acknowledgments} HPC, funding \end{acknowledgments} \vspace{5mm} \software{\texttt{BHAC} \citep{porth17}, \texttt{H-AMR} \citep{liska_hamr2020_arxiv}, \texttt{BHOSS} \citep{Younsi:19_polarizedbhoss}, \texttt{IPOLE} \citep{Moscibrodzka:2018} } \section{Introduction} With the advent of the Event Horizon Telescope \citep[EHT;][]{EHT_M87_2019_PaperI, EHT_SgrA_2022_PaperI}, imaging the near-horizon structure of supermassive black holes (BHs) is now a reality. The primary targets of the EHT and the future next-generation EHT (or ngEHT\footnote{\href{https://www.ngeht.org/}{https://www.ngeht.org/}}) are M87$^*${} \citep[the supermassive BH in the elliptical galaxy M87;][]{EHT_M87_2019_PaperI} and Sgr~A$^*${} in the Galactic Center \citep{EHT_SgrA_2022_PaperI}, two of the most well-studied low-luminosity active galactic nuclei. Numerous papers on extracting information about the event horizon-scale accretion flows in these two sources using the EHT's enormous resolving power already exist. With the ngEHT, we will achieve unprecedented levels of angular resolution and sensitivity to low-flux regions, with the dynamic range in flux expected to increase to 1000 compared to the EHT's current dynamic range of 10. This would enable us to investigate the BH shadow shape with higher precision as well as provide a crucial connection between the accretion flow and the jet launching region. The expected advances in sensitivity require deeper investigations of feature extraction from simulated synthetic reconstructions of BH systems. Hence, we designed the ngEHT analysis challenges\footnote{\href{https://challenge.ngeht.org/challenge1/}{https://challenge.ngeht.org/challenge1/}} \citep{Roelofs_ngEHT} to test our ability to capture the complex dynamics of gas and magnetic fields around M87$^*${} and Sgr~A$^*${} using the ngEHT reference array \citep[e.g.,][]{Raymond:2021}.
Black hole accretion and jet physics have been intensively studied over the past few decades \citep[e.g.,][]{Shakura:73,ree82,Narayan:95a,quataert:2000:CDAF,narayan03,mckinney06, kom07, tch11, narayanSANE2012, tch16}. In the context of M87$^*${} and Sgr~A$^*${}, we expect the accretion flow to be highly sub-Eddington, radiatively-inefficient and geometrically-thick, popularly known as radiatively-inefficient accretion flows (RIAFs). This accretion flow solution has been used to successfully model the multiwavelength spectrum of Sgr~A$^*${} \citep[e.g.,][]{Yuan:03}. On the other hand, semi-analytical models of jets are preferred to explain the spectrum of M87$^*${} \citep[e.g.,][]{lucchini19:M87}. Thus, these two sources already provide a means to probe two different components of BH accretion, namely, the inner accretion flow structure and turbulence in Sgr~A$^*${} and the prominent jet feature in M87$^*${}. The first three EHT numerical simulation papers \citep{EHT_M87_2019_PaperV,EHT_M87_2019_PaperVIII,EHT_SgrA_2022_PaperV} already give us important clues about the horizon-scale conditions of these BH systems based on numerical simulations: (1) these BHs probably have non-zero spin, (2) the accretion disk is expected to have colder electrons than the jet sheath, and (3) the observations favor the presence of dynamically-important magnetic fields close to the BH. All of these results point us towards the magnetically arrested disk \citep[MAD;][]{igu03,narayan03} state, an accretion mode where the BH magnetosphere becomes over-saturated with magnetic flux and exhibits quasi-periodic explosions of vertical magnetic flux bundles. MAD flows also have powerful relativistic jets, where the jet power can exceed the input accretion power \citep{tch11}, a definite signature of BH spin energy extraction via the \citet{bz77} process.
Building on the semi-analytical RIAF models, time-dependent general relativistic magneto-hydrodynamic (GRMHD) simulations have become important tools for deciphering BH accretion physics in a variety of astrophysical systems \citep[e.g.,][]{gam03, mckinney06,fragile07, tch11,Chael_2019, Porth:19, Narayan2022}. Indeed, the EHT regularly makes use of large libraries of GRMHD simulations to model the observed horizon-scale BH images of M87$^*${} and Sgr~A$^*${} as well as larger-scale jet images \citep[such as for Centaurus A][]{Janssen:2021} in order to constrain the time-variable plasma properties. In designing the ngEHT reference array, it is therefore crucial to use GRMHD simulations for understanding the attainability of specific science goals, such as resolving the photon ring and the disk-jet connection region as well as tracing out time-variable features via the ngEHT analysis challenges. In this work, we discuss the numerical fluid simulations that were used as source models for the ngEHT analysis challenges. In particular, our objective is to compare models that incorporate increasingly complicated levels of accretion and electron physics, focusing on M87$^*${} and Sgr~A$^*${}. Our model set consists of a time-dependent shearing hotspot stitched to a steady-state RIAF solution, two standard GRMHD simulations of MAD accretion flows, a GRMHD MAD simulation including electron heating via the inclusion of two-temperature physics, and a fully radiative, two-temperature, GRMHD MAD simulation. We describe the equations and setup of our numerical models in Sec.~\ref{sec:sims}, show our comparison results in Sec.~\ref{sec:results} and finally conclude in Sec.~\ref{sec:conclusions}. \section{Numerical simulations} \label{sec:sims} In this section we provide a brief description of the semi-analytical stationary RIAF and shearing hotspot model as well as the (two-temperature/radiative) GRMHD simulations used for the ngEHT analysis challenges.
\subsection{RIAF+hotspot solutions} \label{sec:RIAF} RIAF models attempt to describe the time- and azimuthally-averaged appearance of accretion flows. This is done with a set of building blocks. The specific RIAF models used in the challenges are based on the approach of \citet{Yuan_2003, broderick_2006, Broderick_2010}. We decompose the accretion flow into a set of phenomenological models that describe the electron density, temperature, magnetic field, and velocity profile. The electron density profile is defined in terms of the cylindrical radius $R_{\rm cyl} = r|\sin(\theta)|$ and vertical displacement $z = r\cos(\theta)$, and is given by \begin{equation} n_{e,{\rm X}}(R_{\rm cyl}, z) = n_{e,{\rm X},0}\, R_{\rm cyl}^{p_X}\exp\left(-\frac{z^2}{2h^2R_{\rm cyl}^2}\right) \end{equation} where $X$ denotes the electron population. The disk height is controlled by $h$, which we set to unity in this work. For the challenge dataset we included both thermal synchrotron emitting ($X\equiv{\rm th}$) and non-thermal synchrotron emitting ($X\equiv{\rm nth}$) electrons. The thermal electrons have $n_{e,{\rm th}, 0}=1.3 \times 10^8$ and $p_{\rm th} = -1.1$, while the non-thermal electrons have $n_{e, {\rm nth}, 0} = 1.3\times 10^5$ and $p_{\rm nth} =-2.02$. The temperature profile of the thermal electrons is also given by a radial power-law with a Gaussian envelope describing the height: \begin{equation}\label{eq:Te_riaf} T_{e}(t,r,\theta, \varphi) = T_{e,0}R_{\rm cyl}^{-0.84}\exp\left(-\frac{z^2}{2h^2R_{\rm cyl}^2}\right), \end{equation} where for the challenge data we set $T_{e,0} = 6.3\times 10^{10}$~K. To define the gas pressure, we assume equipartition between the electron and proton energy and that the protons are in roughly hydrostatic equilibrium, which gives us: \begin{equation}\label{eq:riaf_gas} p_{\rm gas}(x^\mu) = \frac{m_p n_{e,th}(x^\mu)}{6}\frac{M}{r}.
\end{equation} For the local magnetic field, we then assume a constant $\beta = p_{\rm gas}/p_{\rm mag}$ plasma, which, combined with eqn.~\ref{eq:riaf_gas}, gives us the magnetic field strength. The orientation of the magnetic field is then given by a purely toroidal configuration, relative to the plasma observer. Finally, we take the emission and absorption coefficients from the synchrotron self-absorption model in \citet{broderickblandford04} for both the thermal and non-thermal synchrotron electrons. For the non-thermal electrons we use a power-law prescription with radiation coefficients from \citet{jonesodell}, and a spectral power-law index of 1.25. We follow the velocity field prescription from \citet{Pu2016} to describe the accretion flow dynamics. Using the notation from \citet{Tiede2020}, the velocity field $u^\mu = u^t v^\mu$ is given by \begin{equation}\label{eq:hotspot_velo} \begin{aligned} v^r &= v^r_K + \alpha(v^r_{ff} - v^r_K) \\ v^\theta &= 0 \\ v^\varphi &= v^\varphi_K + (1-\kappa)(v^\varphi_{ff} - v^\varphi_K), \end{aligned} \end{equation} where $v^\mu_K$ denotes the Keplerian velocity field, and $v^\mu_{ff}$ is the free-fall velocity field. The remaining component $u^t$ is given by the normalization condition $u^\mu u_\mu = -1$. The radial Keplerian component outside the innermost stable circular orbit (ISCO) is $v^r_K = 0$ as the disk is in steady-state. However, inside the ISCO, we use plunging geodesics which are specified by matching the angular momentum and energy at the ISCO. The hotspot evolution follows the model from \citet{Tiede2020}, where we assume that the hotspot travels passively along the accretion flow velocity field (eqn.~\ref{eq:hotspot_velo}). This implies that the equation of motion is given by the conservation of particle number (eqn.~\ref{eq:continuity}).
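The Keplerian/free-fall blending above can be illustrated with a minimal sketch. The equatorial Kerr Keplerian angular velocity is standard; the free-fall values, the mixing weights $\alpha$ and $\kappa$, and all function names are placeholders of ours (note the radial mixing subtracts the Keplerian term, in analogy with the azimuthal component):

```python
import numpy as np

def omega_kep(r, a):
    """Equatorial Keplerian angular velocity in Boyer-Lindquist, G=M=c=1."""
    return 1.0 / (r**1.5 + a)

def mixed_velocity(r, a, alpha, kappa, vr_ff, vphi_ff):
    """Blend Keplerian and free-fall fields with weights alpha and (1-kappa),
    valid outside the ISCO where v^r_K = 0 (steady disk)."""
    vr_k = 0.0
    vphi_k = omega_kep(r, a)
    vr = vr_k + alpha * (vr_ff - vr_k)
    vphi = vphi_k + (1.0 - kappa) * (vphi_ff - vphi_k)
    return vr, vphi

# kappa = 1, alpha = 0 recovers a purely Keplerian, non-drifting flow:
vr, vphi = mixed_velocity(r=10.0, a=0.5, alpha=0.0, kappa=1.0,
                          vr_ff=-0.05, vphi_ff=0.01)
print(vr, vphi)  # 0.0 and 1/(10^1.5 + 0.5)
```

Intermediate $\alpha$ and $\kappa$ interpolate continuously between a cold Keplerian disk and pure free fall, which is the flexibility the RIAF prescription exploits.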
For the emission we assume a non-thermal synchrotron hotspot, with an initial Gaussian density profile \begin{equation} n_{e}(x^\mu) = n_0\exp\left[-\frac{\Delta r^\mu \Delta r_\mu +(\Delta r_\mu v^\mu)^2}{2R_s^2}\right], \end{equation} where we have set $n_0 = 6\times 10^6$ and $R_s = 0.5\,r_{\rm g}$, and $\Delta r^\mu$ is the displacement from the hotspot center. \subsection{GRMHD simulations} \label{sec:GRMHD} Over the past two decades, multiple GRMHD codes have been developed and utilized to model black hole accretion and jet launching physics over long dynamical timescales. The wide usage of GRMHD simulations is particularly encouraging since this allows verification of code-specific numerical choices that users usually have to make even while solving the same base set of GRMHD equations. Indeed, recently there was a community-wide effort to benchmark these codes against each other for a standard problem: evolving a weakly magnetized torus of gas around a spinning black hole \citep{Porth:19}. It was found that these codes largely provide similar solutions, though some disk quantities remain unconverged with increasing grid resolution, suggesting more investigation is required. For this work, we employ three different GRMHD codes to probe black hole accretion, increasing the complexity of the equations solved at each step: (1) single fluid GRMHD simulations from the \texttt{H-AMR}{} code \citep{liska_hamr2020_arxiv}, (2) a two-temperature single fluid GRMHD simulation from the \texttt{BHAC}{} code \citep{porth17}, and (3) a two-temperature radiative GRMHD simulation from the \texttt{KORAL}{} code \citep{Sadowski:13_koral}. First we describe the set of GRMHD equations \citep[e.g., from][]{Gammie:03, Porth:19}.
We have the conservation of particle number and energy-momentum: \begin{eqnarray} \label{eq:continuity} \partial_{t}(\sqrt{-g} \rho u^{t}) &=& -\partial_{i}(\sqrt{-g} \rho u^{i}) \\ \partial_{t}(\sqrt{-g} T^t_{\nu}) &=& -\partial_{i}(\sqrt{-g} T^i_{\nu}) +\sqrt{-g} T^{\alpha}_{\beta}\Gamma^{\beta}_{\nu \alpha}. \end{eqnarray} \noindent Here, $\rho$ is the rest-mass gas density and can be written as $\rho=m n$, where $m$ is the mean rest-mass per particle and $n$ is the particle number density. We also have the four-velocity $u^{\mu}$, stress-energy tensor $T^{\mu}_{\nu}$, metric determinant $g\equiv \det(g_{\mu\nu})$ and the metric connection $\Gamma^{\beta}_{\nu \alpha}$. Note that the index $t$ refers to the temporal component of the vector or tensor and $i$ denotes the spatial indices. The stress-energy tensor $T^{\mu}_{\nu}$ is given as \begin{equation} T^{\mu}_{\nu}=(\rho + U_{\rm gas} + p_{\rm gas} + 2p_{\rm mag})u^{\mu}u_{\nu} + (p_{\rm gas} + p_{\rm mag})\delta^{\mu}_{\nu} - b^{\mu}b_{\nu}, \end{equation} \noindent where $U_{\rm gas}$ and $p_{\rm gas}$ are the gas internal energy and pressure, related by the ideal gas equation of state: $p_{\rm gas}=(\Gamma_{\rm gas}-1)U_{\rm gas}$. We also have the magnetic pressure $p_{\rm mag}=b^2/2$ and the magnetic field 4-vector $b^{\mu}$, which can be defined in terms of the magnetic field 3-vector $B^i$: \begin{eqnarray} b^t &=& B^i u^{\mu} g_{i\mu} \\ b^i &=& (B^i + b^t u^i)/u^t. \end{eqnarray} Here we have absorbed a factor of $\sqrt{4\pi}$ into the definition of $B^i$. We evolve the magnetic field $B^i$ using the spatial components of the induction equation, \begin{equation} \partial_{t}(\sqrt{-g} B^i) = -\partial_{j}(\sqrt{-g} (b^j u^i - b^i u^j)), \end{equation} \noindent while the temporal component provides the no-monopoles constraint, \begin{equation} \frac{1}{\sqrt{-g}}\partial_i(\sqrt{-g} B^i) = 0.
\end{equation} These equations are numerically integrated in the conservative form \citep{Gammie:03} to get the physically-relevant quantities $\rho$, $U_{\rm gas}$, $u^{\mu}$ and $B^i$. We refer the reader to the corresponding code papers for more information on the numerical techniques used to evolve these equations over space and time. In this work, we use two GRMHD simulations performed with the \texttt{H-AMR}{} code, one targeting M87$^*${} and the other Sgr~A$^*${}. These simulations employ logarithmic Kerr-Schild coordinates and the grid resolutions are $N_r\times N_\theta \times N_\varphi = 580 \times 288 \times 512$ for the M87$^*${} simulation and $348 \times 192 \times 192$ for the Sgr~A$^*${} simulation. All simulations in this work adopt the geometrical unit convention, $GM_{\rm BH}=c=1$, normalizing the length scale to the gravitational radius $r_{\rm g}=GM_{\rm BH}/c^2$. The M87$^*${} GRMHD simulation evolves a magnetically arrested disk (MAD) flow around a black hole with spin $a=0.9375$. The Sgr~A$^*${} model also simulates a MAD flow but around a black hole with spin $a=1/2$. \texttt{H-AMR}{} uses outflowing radial boundary conditions (BCs), transmissive polar BCs and periodic azimuthal BCs \citep[for more details, see][]{liska_tilt_2018}. Since GRMHD simulations are scale-free, we determine the gas density code-to-CGS unit conversion factor (hereafter, ``density scaling'') by raytracing the simulation at 230 GHz for a target source and flux. We use the general relativistic ray-tracing codes \texttt{BHOSS}{} \citep{Younsi:19_polarizedbhoss} and \texttt{IPOLE}{} \citep{Moscibrodzka:2018} to compute images at 230 GHz and set the compact flux to be approximately 0.5 Jy for M87$^*${} and 2.4 Jy for Sgr~A$^*${}. We take black hole masses and distances of $M_{\rm BH} =6.2\times10^9 M_{\odot}$ and $D_{\rm BH}=16.9$~Mpc for M87$^*${} and $M_{\rm BH}=4.14\times10^6 M_{\odot}$ and $D_{\rm BH}=8.127$~kpc for Sgr~A$^*${}. GRMHD simulations evolve a single-temperature fluid.
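As a worked example of the unit convention, the sketch below (constants rounded; helper names are ours) converts the quoted masses and distances into the angular size of one $r_{\rm g}$ on the sky, roughly $3.6\,\mu$as for M87$^*${} and $5\,\mu$as for Sgr~A$^*${}:

```python
import math

# Sketch: physical scales implied by the geometric unit convention
# G = M_BH = c = 1 (CGS constants, rounded).
G    = 6.674e-8    # gravitational constant [cm^3 g^-1 s^-2]
c    = 2.998e10    # speed of light [cm/s]
Msun = 1.989e33    # solar mass [g]
pc   = 3.086e18    # parsec [cm]

def gravitational_radius(M_solar):
    """r_g = G M_BH / c^2 in cm."""
    return G * M_solar * Msun / c**2

def angular_scale_muas(M_solar, D_pc):
    """Angular size of one r_g on the sky, in micro-arcseconds."""
    theta_rad = gravitational_radius(M_solar) / (D_pc * pc)
    return theta_rad * (180.0 / math.pi) * 3600.0 * 1e6

# M87*:   ~3.6 muas per r_g for M = 6.2e9 Msun, D = 16.9 Mpc
# Sgr A*: ~5.0 muas per r_g for M = 4.14e6 Msun, D = 8.127 kpc
```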
At the low accretion rates seen in M87$^*${} and Sgr~A$^*${}, Coulomb coupling between ions and electrons is inefficient and therefore the two particle species are not in thermal equilibrium. Since ions are much heavier than electrons, ions dominate the single-fluid thermodynamics evolved in GRMHD simulations. Hence, to compute the radiative output from GRMHD simulations, we estimate the electron temperature $T_{\rm e}$ using sub-grid models such as the $R-\beta$ prescription \citep{Moscibrodzka_2016} based on the local gas plasma-$\beta$ ($\equiv p_{\rm gas}/p_{\rm mag}$): \begin{eqnarray} T_{\rm e} &=& \frac{2 m_{\rm p}U_{\rm gas}}{3k_{\rm B}\rho (2 + R)}, \\ {\rm where,\,\,} R &=& \frac{1+R_{\rm high} \beta^2}{1+\beta^2}. \label{eq:Rbeta} \end{eqnarray} For the ngEHT analysis challenges, we take $R_{\rm high}$ values of 160 and 40 and source inclinations of $163^{\circ}$ and $50^{\circ}$ for the M87$^*${} and Sgr~A$^*${} simulations, respectively. We assume a thermal relativistic Maxwell-J\"uttner distribution to describe the electron energy distribution in the Sgr~A$^*${} model, and a hybrid thermal$+$non-thermal $\kappa-$distribution \citep[e.g.,][]{Xiao:2006, Davelaar:18} for the M87$^*${} model. The model images are shown in \citet{Roelofs_ngEHT}. \subsection{Two-temperature physics} \label{sec:two-temp} Two-temperature GRMHD (2t-GRMHD) simulations \citep[e.g.,][]{ressler_2015, Dexter_2020} evolve the ion and electron entropy equations separately and hence provide the ion and electron temperatures in a self-consistent manner. The main advantage of this method is that we remove the electron temperature as a free parameter when constructing images. However, we do have to make a choice about the sub-grid prescription that determines the fraction of local dissipative energy that heats the electrons.
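For reference, the $R$-$\beta$ mapping of eqn.~\ref{eq:Rbeta} can be written as a small helper (a sketch; the optional $R_{\rm low}$ parameter is a common generalization in the literature, and $R_{\rm low}=1$ reproduces the exact form quoted above):

```python
m_p = 1.6726e-24   # proton mass [g]
k_B = 1.3807e-16   # Boltzmann constant [erg/K]

def electron_temperature(U_gas, rho, beta, R_high, R_low=1.0):
    """R-beta electron temperature [K] for CGS inputs (sketch).
    R_low = 1 matches the equation in the text."""
    b2 = beta * beta
    R = (R_low + R_high * b2) / (1.0 + b2)   # ion-to-electron temperature ratio
    return 2.0 * m_p * U_gas / (3.0 * k_B * rho * (2.0 + R))
```

In the strongly magnetized limit ($\beta\to0$) the ratio tends to $R_{\rm low}$, while in the weakly magnetized disk body ($\beta\gg1$) it tends to $R_{\rm high}$, suppressing $T_{\rm e}$ there.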
There are two heating mechanisms that are thought to be applicable to global accretion flows: turbulent heating \citep{Howes:2010, Kawazura:2019} and magnetic reconnection \citep{Werner:2018, Rowan:2017}. For the ngEHT analysis challenges, we focus only on one simulation with reconnection heating, taken from \citet{Mizuno_2021}. We assume that the number densities and velocities of ions and electrons are equal, i.e., $n_{\rm i}=n_{\rm e}=n$ and $u^{\mu}_{\rm i}=u^{\mu}_{\rm e}=u^{\mu}$, maintaining charge neutrality. The electron entropy equation is given as \begin{equation} \partial_{\mu} (\sqrt{-g} \rho u^{\mu} s_{\rm e}) = \frac{\sqrt{-g} (\Gamma_{\rm e} - 1)}{\rho^{\Gamma_{\rm e} -1}}f_{\rm e}Q, \end{equation} \noindent where the electron entropy is $s_{\rm e} = p_{\rm e}/\rho^{\Gamma_{\rm e}}$, with $p_{\rm e}$ and $\Gamma_{\rm e}$ as the electron pressure and adiabatic index. The total heating rate $Q$ is calculated by comparing the total internal energy of the gas and the internal energy obtained from the gas entropy conservation equation \citep[see][for more details]{ressler_2015}. The fraction of dissipative heating that contributes to electron heating is given by $f_{\rm e}$. For this particular simulation, $f_{\rm e}$ is designed to capture electron/ion heating via magnetic reconnection from \citet{Rowan:2017}: \begin{equation} f_{\rm e} = \frac{1}{2} \exp \Big[ -\frac{1-\beta/\beta_{\rm max}}{0.8+\sigma_{\rm h}^{0.5}} \Big], \end{equation} \noindent where $\beta_{\rm max} = 1/(4\sigma_{\rm h})$, defined using the hot gas magnetization $\sigma_{\rm h} = b^2/(\rho h)$ and the specific gas enthalpy $h=1+\Gamma_{\rm g} p_{\rm g}/[\rho(\Gamma_{\rm g}-1)]$. The 2t-GRMHD simulation from \citet{Mizuno_2021} uses modified Kerr-Schild coordinates and a black hole spin of $0.9375$. The grid resolution is $384 \times 192 \times 192$.
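Returning to the heating prescription above, the reconnection fit can be evaluated directly (a sketch; the function name is ours). Note that $f_{\rm e}$ reaches its maximum of $1/2$, i.e., equal sharing between ions and electrons, at $\beta=\beta_{\rm max}$:

```python
import math

def f_e_reconnection(beta, sigma_h):
    """Electron heating fraction from the reconnection fit quoted in
    the text (sketch), with beta_max = 1/(4 sigma_h)."""
    beta_max = 1.0 / (4.0 * sigma_h)
    return 0.5 * math.exp(-(1.0 - beta / beta_max) / (0.8 + sigma_h**0.5))
```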
The accretion flow is magnetically arrested and the simulation is raytraced (using \texttt{BHOSS}{}) once the near-horizon flow has reached steady state. The target source is M87$^*${}, assuming a black hole mass of $M_{\rm BH}=6.5\times10^9 M_{\odot}$ and distance of 16.9 Mpc. The accretion rate is normalized such that the 230 GHz compact flux density is 0.8 Jy. We assume a thermal electron distribution everywhere except in the jet sheath, where we adopt a $\kappa-$distribution. More details about the image are provided in \citet{Roelofs_ngEHT}. \subsection{Radiative GRMHD} \label{sec:GRRMHD} Two-temperature GRMHD simulations do not include radiative cooling and hence are thought to be appropriate for low-luminosity supermassive black holes such as M87$^*${} and Sgr~A$^*${}, where cooling losses are expected to be dynamically unimportant. To verify this assumption, we consider a two-temperature radiative GRMHD (2t-GRRMHD hereafter) simulation from \citet{Chael_2019}. This simulation accounts for self-consistent radiation physics, incorporating both particle heating via magnetic reconnection (as in Sec.~\ref{sec:two-temp}) and radiative cooling via bremsstrahlung, synchrotron, Compton and Coulomb losses. The simulation is run using the 2t-GRRMHD code \texttt{KORAL}{} \citep{Sadowski:13_koral, Sadowski:15_photon,Chael_2018}, which evolves a two-temperature magnetized fluid and treats the radiation field as a second fluid \citep{Sadowski:2017}. The conservation equations solved in 2t-GRRMHD are different from those of GRMHD: \begin{equation} (T^{\mu}_{\nu} + R^{\mu}_{\nu})_{;\mu} =0, \end{equation} \noindent where $R^{\mu}_{\nu}$ is the frequency-integrated radiation stress-energy tensor, defined as \begin{equation} R^{\mu}_{\nu} = \frac{4}{3} \overline{E} u^{\mu}_{\rm R} u_{\nu \rm R} + \frac{1}{3} \overline{E}\delta^{\mu}_{\nu}. \end{equation} \noindent Here, the radiation field is described by its rest-frame energy density $\overline{E}$ and four-velocity $u^{\mu}_{\rm R}$ following the M1 closure scheme.
The ion and electron entropy equations are \begin{eqnarray} T_{\rm e} (ns_{\rm e}u^{\mu})_{;\mu} &=& f_{\rm e}q^{\rm v} + q^{\rm C} - G, \\ T_{\rm i} (ns_{\rm i}u^{\mu})_{;\mu} &=& (1-f_{\rm e})q^{\rm v} - q^{\rm C}, \end{eqnarray} \noindent where $q^{\rm v}$ is the dissipative heating rate and $q^{\rm C}$ is the Coulomb coupling rate that captures the exchange of energy between ions and electrons. The heating fraction $f_{\rm e}$ is taken from \citet{Rowan:2017} (see \citealt{Chael_2018} for more details), the same as in the 2t-GRMHD simulation. Finally, $G$ is the radiative cooling rate \citep{Sadowski:13_koral}. For further details about the equations, see \citet{Sadowski:13_koral, Sadowski:2017, Chael_2018}. The simulation assumes a black hole spin of $a=0.9375$ and mass $M_{\rm BH}=6.2\times10^9 M_{\odot}$, targeting M87$^*${}. The gas density is scaled to physical CGS units such that the compact emission at 230 GHz is roughly 0.98 Jy. The simulation uses modified Kerr-Schild coordinates with a grid resolution of $N_r\times N_\theta \times N_\varphi = 288 \times 224 \times 128$. See \citet{Chael_2019} for more details about the simulation. While not utilized for the ngEHT analysis challenges, we include this simulation in this work since it captures the coupling between gas and radiation, which is necessary for black holes accreting close to the Eddington limit. Further, this model has been used in previous ngEHT reference array papers \citep{Blackburn:2019_APC, Raymond:2021}. \section{Results} \label{sec:results} We perform a series of comparisons focused on the time-evolution of horizon-scale quantities and the radial dependence of disk and jet properties. The diagnostics are chosen such that any trends we find can inform EHT/ngEHT science applications, such as the horizon-scale morphology and variability of the accretion flow. Further, the quantities are similar to those reported in the GRMHD code comparison project \citep{Porth:19} and so can be directly compared.
There are a total of five models: three GRMHD-based simulations (GRMHD, 2t-GRMHD and 2t-GRRMHD) targeting M87$^*${}, and one RIAF solution and one GRMHD simulation for Sgr~A$^*${}. We further note that all three numerical simulations of M87$^*${} have the same BH spin, favoring direct comparisons of the horizon-scale gas properties. \subsection{Temporal behavior of horizon fluxes} \label{sec:time} \begin{figure} \includegraphics[width=\columnwidth]{figures/time_data.png} \caption{We show the mass accretion rate $\dot{M}$, dimensionless magnetic flux $\phi\equiv\Phi/\sqrt{\dot{M}}$, the outflow efficiency $P_{\rm out}/\dot{M}c^2=1-\dot{E}/\dot{M}c^2$ and the specific radial flux of angular momentum $\dot{J}/\dot{M}$ over time. Values are calculated at the event horizon.} \label{fig:Horizon_fluxes} \end{figure} We calculate the mass, magnetic, angular momentum and energy fluxes in the radial direction as follows: \begin{eqnarray} {\rm Mass:\,\,}& \dot{M} &=\int^{2\pi}_{0}\int^{\pi}_{0}\, (-\rho \, u^r) \, \sqrt{-g} \, d\theta \, d\varphi\,, \\ {\rm Magnetic:\,\,}& \Phi &=\frac{\sqrt{4\pi}}{2}\int^{2\pi}_{0}\int^{\pi}_{0}\, |B^r| \, \sqrt{-g} \, d\theta \, d\varphi\,, \\ {\rm Ang. Mom.:\,\,}& \dot{J} &=\int^{2\pi}_{0}\int^{\pi}_{0}\, T^r_{\varphi} \, \sqrt{-g} \, d\theta \, d\varphi\,, \\ {\rm Energy:\,\,}& \dot{E} &=\int^{2\pi}_{0}\int^{\pi}_{0}\, T^r_t \, \sqrt{-g} \, d\theta \, d\varphi\,, \end{eqnarray} \noindent where all quantities are calculated at the event horizon radius $r_{\rm hor}=r_{\rm g} (1+\sqrt{1-a^2})$. We note that density floors could contribute substantially to the mass accretion rate calculated at this radius for MAD systems; nevertheless, we use the horizon radius for simplicity in comparing to previous simulations in the literature.
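On a discrete grid, the horizon-shell integrals above reduce to weighted sums; the sketch below (array layout and names are ours) assumes the fields have already been sampled at $r=r_{\rm hor}$:

```python
import numpy as np

def horizon_fluxes(rho, ur, Br, Trt, sqrtg, dtheta, dphi):
    """Discretized horizon-shell integrals for the fluxes defined in the
    text. All arrays have shape (Ntheta, Nphi) at r = r_hor (sketch)."""
    dA = sqrtg * dtheta * dphi
    Mdot = np.sum(-rho * ur * dA)                               # mass flux
    Phi = 0.5 * np.sqrt(4.0 * np.pi) * np.sum(np.abs(Br) * dA)  # magnetic flux
    Edot = np.sum(Trt * dA)                                     # energy flux
    return Mdot, Phi, Edot

def dimensionless_flux(Phi, Mdot):
    """phi = Phi / sqrt(Mdot), in units where r_g = c = 1."""
    return Phi / np.sqrt(Mdot)
```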
Figure~\ref{fig:Horizon_fluxes} shows the mass accretion rate $\dot{M}$ in units of solar masses per year ($M_{\odot}$/yr), the dimensionless magnetic flux $\phi=\Phi/\sqrt{\dot{M} r_{\rm g}^2 c }$, the outflow power $P_{\rm out} = \dot{M}c^2-\dot{E}$ and the specific angular momentum flux $\dot{J}/\dot{M}$ for simulations targeting M87$^*${} and Sgr~A$^*${}. The RIAF solution, being a steady-state solution, is excluded from this section (though the hotspot evolves with time). Quantities from the 2t-GRRMHD simulation are only shown for $(11-16)\times 10^3\,\,r_{\rm g}/c$, i.e., the time period over which the simulation was raytraced in \citet{Chael_2019}. Remarkably, despite the difference in electron physics complexity, the simulations behave very similarly. The factor of 2 difference in $\dot{M}$ between the M87$^*${} non-radiative simulations and the 2t-GRRMHD simulation can be explained by the lower electron temperatures in the near-horizon accretion flow due to radiative cooling (see Sec.~\ref{sec:disk}) as well as the higher 230 GHz flux normalization used for the radiative model. \begin{table}[H] \caption{Modulation index (MI) of the mass accretion rate $\dot{M}$, dimensionless magnetic flux $\phi$, outflow efficiency $P_{\rm out}/\dot{M}c^2$ and the specific angular momentum flux $\dot{J}/\dot{M}$ for each GRMHD model.
The quantities are calculated over the final $5000\,\,r_{\rm g}/c$ in runtime and at the event horizon (see Fig.~\ref{fig:Horizon_fluxes}).} \newcolumntype{C}{>{\centering\arraybackslash}X} \begin{tabularx}{\textwidth}{lCCcc} \toprule Model & MI($\dot{M}$) & MI($\phi$) & MI($P_{\rm out}/\dot{M}c^2$) & MI($\dot{J}/\dot{M}$)\\ \midrule M87$^*${} GRMHD & 0.27 & 0.15 & 0.26 & 0.33\\ M87$^*${} 2t-GRMHD & 0.29 & 0.14 & 0.25 & 0.31\\ M87$^*${} 2t-GRRMHD & 0.28 & 0.14 & 0.14 & 0.31\\ Sgr~A$^*${} GRMHD & 0.23 & 0.21 & 0.39 & 0.57\\ \bottomrule \end{tabularx} \label{tab:MI} \end{table} The accretion rate in all simulations shows large variations with quasi-periodic sharp drops. These drops in $\dot{M}$ occur due to the emergence of magnetic flux eruptions, a characteristic feature of the magnetically arrested disk \citep{Begelman2022, Ripperda2022, Chatterjee:2022}. These eruptions also lower the value of $\phi$ since magnetic flux bundles escape from the vicinity of the BH, carrying away the magnetic flux accumulated in the BH magnetosphere. We see that $\phi$ often crosses the magnetic flux saturation value of 50 \citep{tch11}, overwhelming the BH magnetosphere with strong magnetic fields that eventually reconnect and trigger flux eruptions \citep[see][for the detailed mechanism]{Ripperda2022}. As these field line bundles move out and interact with the disk, they (1) hinder accretion, lowering $\dot{M}$, (2) remove magnetic flux from near the BH, lowering the jet power, and (3) push gas outwards, reducing the inward angular momentum flux. Curiously, we see larger drops in the specific angular momentum flux for the Sgr~A$^*${} GRMHD model. This is possibly due to the smaller BH spin ($a=0.5$ as opposed to $0.9375$ for the M87$^*${} models): the weakly powered jet does not carry away angular momentum as efficiently as in the higher-spin models, and flux eruptions play a bigger role in regulating disk angular momentum transport.
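For reference, the modulation index reported in Table~\ref{tab:MI} is a one-line statistic over the final $5000\,r_{\rm g}/c$ of each time series; a sketch (function name and array interface are ours):

```python
import numpy as np

def modulation_index(t, q, window=5000.0):
    """MI = standard deviation / mean of the series q(t) over the final
    `window` (in r_g/c), as used for the table entries (sketch)."""
    mask = t >= t[-1] - window   # keep only the final stretch
    qw = q[mask]
    return np.std(qw) / np.mean(qw)
```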
Additionally, the reconnection events that trigger these eruptions accelerate electrons to higher energies, and are thus crucial for understanding flare activity in BH sources. To quantify the time-variability of the horizon fluxes, we calculate the modulation index MI, defined as the ratio of the standard deviation to the mean \citep{EHT_SgrA_2022_PaperV}. We show MI for the different fluxes in Table~\ref{tab:MI}. MI($\dot{M}$) is usually a good proxy for the variability of the sub-millimeter emission in these slowly accreting, optically thin black hole sources \citep[e.g.,][]{Chatterjee:2021}. The MI($\dot{M}$) values we see from the simulations are $\sim0.23-0.29$ and are larger than expected from Sgr~A$^*${} 230 GHz lightcurves \citep[where ${\rm MI}\sim 0.1$;][]{Wielgus:2022_SgrALC}. This suggests that careful analysis of the electron distribution function is needed to understand if we are substantially over-predicting the 230 GHz lightcurve variability. Further, in general, weakly-magnetized accretion flows exhibit lower MI($\dot{M}$) values due to the absence of flux eruptions, which suggests that further study of the accretion mode in Sgr~A$^*${} is also necessary. It is encouraging to note that our MI values for $\dot{M}$ and $\phi$ are consistent with the MI values from longer time-evolved GRMHD simulations of $a=0.9$ BHs in \citet{Narayan2022}, indicating that our simulations are sufficiently converged with respect to horizon-scale quantities. \subsection{Disk-averaged quantities} \label{sec:disk} \begin{figure} \includegraphics[width=\columnwidth]{figures/radial_data_1.png} \caption{We show the radial profiles of gas density $\rho$, plasma-$\beta$, proton temperature $T_{\rm p}$, and electron temperature $T_{\rm e}$.
Quantities are disk-averaged and time-averaged over the raytracing period.} \label{fig:disk_profiles_1} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{figures/radial_data_2.png} \caption{We show the radial profiles of disk scale height $h/r$, radial velocity $|v_r|$, angular velocity $\Omega$ and specific angular momentum $u_{\varphi}$. Quantities are disk-averaged and time-averaged over the raytracing period.} \label{fig:disk_profiles_2} \end{figure} Here we calculate the disk-averaged properties of each model, namely gas density $\rho$, thermal pressure $p_{\rm gas}$, magnetic pressure $p_{\rm mag}$, radial velocity $|v_r|$, azimuthal velocity $|v_{\varphi}|$, angular momentum $u_{\varphi}$, disk scale height $h/r$, ion temperature $T_i$ and electron temperature $T_e$. We define the disk average of a quantity $q$ as \begin{equation} \langle q \rangle (r,t) = \frac{\int^{2\pi}_{0}\int^{\pi}_{0}\, \rho \, q \, \sqrt{-g} \, d\theta \, d\varphi}{\int^{2\pi}_{0}\int^{\pi}_{0}\, \rho \, \sqrt{-g} \, d\theta \, d\varphi}, \end{equation} where $q \in \{ \rho, p_{\rm gas}, p_{\rm mag}, |v_r|, |v_{\varphi}| , u_{\varphi}, h/r, T_i [\rm Kelvin], T_e [\rm Kelvin] \}$. Further definitions follow: \begin{eqnarray} p_{\rm gas} &=& (\Gamma_{\rm ad}-1)u, \\ p_{\rm mag} &=& b^{\mu}b_{\mu}/2, \\ |v_i| &=& \sqrt{g_{ii}\,(v^i)^2} \quad {\rm (no\,\,summation)}, \\ {\rm where,\,\,} v^i &=& u^i/u^t, \nonumber \\ h/r &=& |\theta - \pi/2|, \end{eqnarray} where $\Gamma_{\rm ad}$ and $u$ are the adiabatic index and the internal energy of the gas. Figures~\ref{fig:disk_profiles_1} and \ref{fig:disk_profiles_2} show the respective disk-averaged radial profiles for each model, including the Sgr~A$^*${} RIAF solution. The density profiles in the inner few 10s of $r_{\rm g}$ converge roughly to an $n_{\rm e}\propto r^{-1}$ profile and match the RIAF density profile.
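The density-weighted shell average defined above discretizes straightforwardly; a sketch assuming arrays of shape $(N_r, N_\theta, N_\varphi)$ (layout and names are ours):

```python
import numpy as np

def disk_average(q, rho, sqrtg, dtheta, dphi):
    """Density-weighted shell average <q>(r) per the definition in the
    text. q, rho, sqrtg have shape (Nr, Ntheta, Nphi); returns (Nr,)."""
    w = rho * sqrtg * dtheta * dphi        # density weights per cell
    return np.sum(q * w, axis=(1, 2)) / np.sum(w, axis=(1, 2))
```

The cell sizes cancel for a uniform grid but are kept so the same helper works on stretched (e.g., logarithmic Kerr-Schild) grids.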
The M87$^*${} 2t-GRRMHD density is larger by a factor of $\approx 2$ than in the GRMHD/2t-GRMHD models, as is expected from the difference in the mass accretion rate (Fig.~\ref{fig:Horizon_fluxes}). The 2t-GRRMHD simulation exhibits a slightly more magnetized inflow within the inner $2\,\,r_{\rm g}$, but overall, the GRMHD simulations have a similar plasma-$\beta\equiv p_{\rm gas}/p_{\rm mag}$ disk profile. The stronger magnetic field seen in the 2t-GRRMHD model could explain the higher values of the horizon magnetic flux seen in Fig.~\ref{fig:Horizon_fluxes}. The RIAF model assumes a constant disk plasma-$\beta$ of 1 (see Sec.~\ref{sec:RIAF}), which is substantially higher than in the MAD GRMHD models. This value of plasma-$\beta$ is chosen in order to match the observed 230 GHz flux density of Sgr~A$^*${}. As we see from the disk scale height in Fig.~\ref{fig:disk_profiles_2}, the RIAF model has a much thicker disk than the GRMHD models, and therefore produces much more sub-millimeter (sub-mm) emission even with a low electron temperature and weak magnetic field strength. Next, we see that the disk-averaged electron temperature $T_{\rm e}$ in the 2t-GRRMHD M87$^*${} model is more than an order of magnitude lower than in the other GRMHD models within the inner $10\,\, r_{\rm g}$, but actually matches the Sgr~A$^*${} RIAF $T_{\rm e}$ profile and has a shallower profile, $T_{\rm e} \propto r^{-1}$ instead of $r^{-3/2}$. It is further interesting to note that the disk ion temperatures $T_{\rm i}$ are very similar in all the GRMHD simulations shown here. Therefore, despite the same reconnection-driven heating mechanism being captured in both the 2t-GRMHD and the 2t-GRRMHD models, radiative cooling of hot electrons plays a crucial role in determining the eventual $T_{\rm e}$. Due to the low $T_{\rm e}$, the required accretion rate normalization is higher in the 2t-GRRMHD model, as we noted in the previous subsection.
In Fig.~\ref{fig:disk_profiles_2}, we show the average disk scale height $h/r$, the radial and angular velocities ($v_r$ and $\Omega$), and the specific angular momentum $u_{\varphi}$. The MAD simulations all show very similar disk properties. We find $\langle h/r\rangle \approx 0.1-0.3$, with a sharp increase within $3\,\,r_{\rm g}$ where the inflow becomes vertically supported by strong poloidal magnetic fields. The radial velocity follows an $r^{-1/2}$ profile, similar to the scaling relation found in ADAF solutions assuming a constant viscosity parameter $\alpha$ \citep{nar94,nar98}. The $\alpha$ parameter profile depends on how magnetized the accretion flow is, with $\alpha\propto r^{-1}$ for weakly-magnetized flows and close to constant for MAD-like flows \citep[e.g.,][]{liska_tor_2019,Chatterjee:2022}. We also see highly sub-Keplerian angular velocity profiles in the GRMHD models, typical for magnetically supported disks. The RIAF disk, by construction, is not infalling and rotates at the Keplerian angular velocity. Instead, the hotspot, added to the RIAF solution, undergoes shearing and disappears into the BH with a radial velocity similar to the values found in the GRMHD MAD disks. This occurs because the hotspot is designed to travel along plunging geodesics (see Sec.~\ref{sec:RIAF}), similar to the rapid gas infall close to the BH in the GRMHD models. The angular momentum in the GRMHD models is sub-Keplerian, as expected for MADs. \subsection{Jet properties} \begin{figure} \includegraphics[width=\columnwidth]{figures/jet_data.png} \caption{We show the jet radius $R_{\rm jet}$ and the jet Lorentz factor $\gamma$ from the M87$^*${} GRMHD and 2t-GRRMHD models, and the Sgr~A$^*${} GRMHD model. The gray circles indicate the deprojected jet radius of the M87 jet assuming a BH mass of $6.2\times10^9M_{\odot}$ and a source inclination of $14^{\circ}$ \citep{Nakamura_2018}.
The data points are a compilation of various papers \citep{Doeleman2012,asadanak2012,Hada2013,nak2013,Akiyama:2015,Hada2016}.} \label{fig:jet_profiles} \end{figure} Here we calculate the radial profiles of the jet half-width $R_{\rm jet}$ and Lorentz factor $\gamma$: \begin{equation} R_{\rm jet} (r,t) = \sqrt\frac{\int^{2\pi}_{0}\int^{\pi}_{0}\, \sqrt{-g/g_{rr}} (\mu>2) \, d\theta \, d\varphi}{2\pi}\,, \label{eq:Rjet} \end{equation} \begin{equation} \gamma (r,t) = \frac{\int^{2\pi}_{0}\int^{\pi}_{0}\, (\mu>2) \, \alpha u^t \, \sqrt{-g} \, d\theta \, d\varphi}{\int^{2\pi}_{0}\int^{\pi}_{0}\, (\mu>2) \, \sqrt{-g} \, d\theta \, d\varphi}, \end{equation} where $\mu=-T^r_t/(\rho u^r)$ is the specific radial energy density and $\alpha=1/\sqrt{-g^{tt}}$ is the lapse. Here we define the jet boundary as $\mu>2$, i.e., the region over which the jet still remains highly energized. Note that this definition of the jet radius is quite similar to the standard definition used in the literature \citep[e.g.,][]{Narayan2022}, where the jet boundary is taken to be the $\sigma=1$ surface. Since $\mu=\gamma(\sigma+h)$\footnote{Note that the specific enthalpy includes the rest-mass energy contribution in our definition from Sec.~\ref{sec:two-temp}.}, our condition $\mu>2$ also incorporates regions where the jet might not be magnetically-dominated but is relativistically hot or fast. Since we restrict our jet profiles to within $r\lesssim 10^3\,r_{\rm g}$, the radius is primarily determined by the jet magnetization. Figure~\ref{fig:jet_profiles} shows the jet radius $R_{\rm jet}$ and Lorentz factor $\gamma$ as a function of the radial distance from the BH for the M87$^*${} GRMHD and 2t-GRRMHD models as well as the Sgr~A$^*${} GRMHD model.
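The jet diagnostics in eqn.~\ref{eq:Rjet} and the $\gamma$ average can be evaluated per radial shell as below (a sketch; names are ours). The $2\pi$ in the area-to-radius conversion accounts for the two jet lobes, each contributing $\pi R_{\rm jet}^2$:

```python
import numpy as np

def jet_radius_and_gamma(mu, g, grr, lapse, ut, dtheta, dphi):
    """Jet half-width and mean Lorentz factor on one radial shell using
    the mu > 2 jet criterion (sketch; arrays have shape (Ntheta, Nphi),
    g is the metric determinant, grr the radial metric component)."""
    jet = (mu > 2.0).astype(float)   # indicator of jet cells
    sqrtg = np.sqrt(-g)
    # cross-sectional area of both lobes -> effective radius
    area = np.sum(np.sqrt(-g / grr) * jet) * dtheta * dphi
    R_jet = np.sqrt(area / (2.0 * np.pi))
    # volume-weighted Lorentz factor gamma = lapse * u^t within the jet
    den = np.sum(jet * sqrtg) * dtheta * dphi
    num = np.sum(jet * lapse * ut * sqrtg) * dtheta * dphi
    gamma = num / den if den > 0 else np.nan
    return R_jet, gamma
```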
The M87$^*${} jet radius from our models matches the observed jet width of M87 (gray circles) quite well, with the radial profile roughly proportional to $r^{0.625}$, which is the fitted powerlaw for the M87 jet \citep{Nakamura_2018}, though the index value has also been reported to be slightly smaller in some works \citep[0.57;][]{asadanak2012,Nokhrina:2019}. The powerlaw index of 0.625 is larger than that found using the $\sigma=1$ condition by \citet{Narayan2022}, where the authors found a powerlaw index of 0.428 for their MAD spin $a=0.9$ GRMHD model. It is possible that we find larger jet radii because we incorporate a part of the hot jet sheath region within our definition of $R_{\rm jet}$ \citep[as suggested by Fig.~7 in][]{chatterjee2019}. For the Sgr~A$^*${} model, we find a similar $R_{\rm jet}$ profile. There are no detections of an extended jet in Sgr~A$^*${} \citep[e.g.,][]{Issaoun_2019}, though semi-analytical and GRMHD models largely favor a jet component from a spinning BH \citep[e.g.,][]{Markoff:07,EHT_SgrA_2022_PaperV}. We also show the Lorentz factor $\gamma$ in Fig.~\ref{fig:jet_profiles}. The jets accelerate to $\gamma\approx3-4$ by $10^3\,r_{\rm g}$ in all of our GRMHD models. It is more difficult to compare our $\gamma$ profiles with values inferred from observations of the M87 jet \citep[e.g.,][]{mertens2016}. This is because our $\gamma$ values are biased towards the jet spine, while the observations generally capture the velocities of localized features in the sub-relativistic jet sheath/disk wind, especially at small distances from the BH. Indeed, both simulations and observations show that the jet Lorentz factor varies greatly as a function of jet radius \citep[e.g., see][]{chatterjee2019}. We speculate that a better approach might be to calculate emissivity-weighted Lorentz factors in order to compare to the measured $\gamma$ from M87.
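The power-law index comparison above reduces to a linear fit in log-log space; a sketch (names are ours) that recovers $k\approx0.625$ for a synthetic $R_{\rm jet}\propto r^{0.625}$ profile:

```python
import numpy as np

def fit_jet_powerlaw(r, R_jet):
    """Least-squares power-law fit R_jet = A * r**k in log-log space
    (sketch; used to compare against the observed index ~0.625)."""
    k, logA = np.polyfit(np.log10(r), np.log10(R_jet), 1)
    return k, 10.0**logA
```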
Since our focus is on the comparison between GRMHD simulations, we leave direct comparisons to data to future work. \subsection{Axisymmetrized profiles} \begin{figure} \includegraphics[width=\textwidth]{figures/m87_2D.png} \caption{We show time- and $\varphi$-averaged data: electron number density $n_{\rm e}$ (top row) and temperature $T_{\rm e}$ (bottom row). We also denote the jet boundary with $\sigma=1$ (black lines). The time-averaging is done over $5000\,r_{\rm g}/c$ for each model. The RIAF plots are for Sgr~A$^*${} while the rest are for M87$^*${}. The Sgr~A$^*${} GRMHD model produces similar plots of $n_{\rm e}$ and $T_{\rm e}$ as the M87$^*${} model, and hence, we do not show it here.} \label{fig:2D_profiles} \end{figure} In the previous sections, we have found that the largest differences between the GRMHD models occur in the electron temperature distribution. Figure~\ref{fig:2D_profiles} shows the time- and azimuthally-averaged 2D vertical plots of electron number density $n_{\rm e}$ and electron temperature $T_{\rm e}$. We show the normalized $n_{\rm e}$ so as to capture the relative change in the disk/wind density distribution, which provides us information about the disk structure. The large difference in disk scale height is immediately apparent between the RIAF and the MAD GRMHD models (also see Fig.~\ref{fig:disk_profiles_2}). The presence of a prominent wide jet component in MADs squeezes the inner disk and pushes against the accretion flow, a feature which is not captured in the constant $h/r$ RIAF model. However, the RIAF model does roughly reproduce the density profile of the disk midplane region. This would mean that the RIAF model could well represent non/weakly-jetted, quasi-spherical accretion flows. For sources like M87$^*${}, where we see a prominent jet component, the density gradient in the vertical direction is expected to be steeper as strong magnetic stresses power disk winds that carry away gas from the disk \citep[e.g.,][]{Chatterjee:2022}.
Overall, the disk/wind density distributions among the GRMHD models look similar, with small differences in the lateral extension of the wind region and the steepness of the vertical gradient in density. For example, if we compare the 2t-GRRMHD model with the other two simulations, the density in the wind region is larger in the radiative model. The reason for the shallow vertical density profile in the 2t-GRRMHD model is unclear, since weakly magnetized thick disk simulations tell us that radiative cooling would lead to the loss of gas pressure in the disk and would result in the disk collapsing to a relatively dense structure in the midplane \citep[e.g.,][]{fm09,Yoon:2020}. However, in the presence of strong poloidal magnetic fields, i.e., in the MAD state, the plasma-$\beta$ can decrease to $\beta \approx 0.2-1$ in the disk midplane (see Fig.~\ref{fig:disk_profiles_1}, third row, left panel), going to even lower values in the upper layers of the accretion flow. The high magnetic pressure could help support the disk against collapse while sufficiently strong magnetic stresses could power disk winds. Such behavior is also seen in recent GRMHD simulations of near-Eddington, geometrically thin, strongly magnetized disks, where the inner disk (or corona) has a larger $h/r$ than the outer disk due to magnetic pressure support \citep{Liska:2022}. To verify how radiative cooling affects the inner disk/wind structure in highly sub-Eddington accretion flows like M87$^*${} and Sgr~A$^*${}, we require longer 2t-GRRMHD simulations such that the disk is in inflow-outflow equilibrium out to a radius of at least $50\,r_{\rm g}$. The 2D temperature plot of the RIAF model also looks vastly different in the inner disk ($r\lesssim20\,r_{\rm g}$) when compared to the GRMHD and 2t-GRMHD simulations, but is similar to the temperature distribution in the 2t-GRRMHD disk midplane (also seen in the $T_{\rm e}$ plot of Fig.~\ref{fig:disk_profiles_1}).
The RIAF model does not capture gas heating in the jet sheath region (the region just outside of the jet boundary indicated by the $\sigma=1$ dashed line) and therefore $T_{\rm e}$ drops as we move away from the midplane towards the poles. In the GRMHD models, the jet sheath is as hot as, if not hotter than, the inner accretion flow, with temperatures reaching $T_{\rm e}>10^{11}$K. For the single-fluid GRMHD simulations, the electron temperature is given as a fraction of the fluid temperature, where the fraction depends on how magnetized the gas is in the region, as per the $R-\beta$ prescription of eqn.~\ref{eq:Rbeta}. For the M87$^*${} model, we chose an $R_{\rm high}$ value of 160 to obtain a jet-dominated sub-mm image. This choice of $R_{\rm high}$ suppresses the electron temperature in the disk, concentrating higher temperatures in the jet sheath. Comparing the GRMHD model with the 2t-GRMHD model, the jet sheath region exhibits very similar $T_{\rm e}$ values, but the disk midplane is hotter by a factor of a few in the 2t-GRMHD model. We note that this difference in $T_{\rm e}$ in the midplane is more noticeable in the 2D plot than in the disk-averaged $T_{\rm e}$ profile shown in Fig.~\ref{fig:disk_profiles_1}, as the upper layers of the disk become substantially hotter in the GRMHD model. For the radiative 2t-GRRMHD model, the inner regions of the disk are cooler as electrons heated by magnetic reconnection quickly cool via synchrotron and Compton losses. From Fig.~\ref{fig:disk_profiles_1}, the drop in $T_{\rm e}$ for the 2t-GRRMHD model is shown to be as large as an order of magnitude when compared to the (2t-)GRMHD models. Another interesting feature is that the hot region ($T_{\rm e}>10^{11}$K) in the jet sheath is much narrower in the 2t-GRRMHD model, which could have a significant bearing on the ray-traced image, possibly producing a thinner jet sheath.
Finally, the difference in $T_{\rm e}$ in the jet body between the GRMHD models is due to the different density/internal energy floor setups used by the corresponding codes. Since the gas in the jet sheath and the jet body undergoes mixing due to boundary instabilities \citep[e.g.,][]{chatterjee2019, Wong:2021}, it is possible that the choice of floors could affect the overall electron temperature in the jet sheath. Such a study is outside the scope of our paper and is left to future work. \begin{figure} \includegraphics[width=\textwidth]{figures/hotspot_ne_2D.png} \caption{We show the $\varphi$-averaged hotspot electron number density as a function of radius and time. The hotspot falls into the BH and gets sheared over time.} \label{fig:hotspot} \end{figure} \subsection{Orbiting hotspot in a RIAF model} High-energy flares are commonly observed in AGNs, with GeV and TeV flares seen in M87$^*${} \citep[e.g.,][]{Aharonian:2006, Acciari:2010} and quasi-daily nIR and X-ray flares in Sgr~A$^*${} \citep[e.g.,][]{Baganoff:01,Eckart:2006:flare_activity,Hornstein_2007,Nowak_2012,Neilsen_2013,Witzel_2018,Do_2019,Haggard_2019}. A number of attempts have been made to explain the origin of flaring, such as magnetic reconnection in turbulent gas flows in the disk and the jet \citep{Dodds-Eden_2010, Dibi_2014, chatterjee2019,Nathanail_2020,Chatterjee:2021} and magnetic flux eruptions \citep{Dexter:2020:NIR_MAD,Porth:2020:NIR_MAD,Scepi:2022,Ripperda2022}. For Sgr~A$^*${}, semi-analytical models found that high-energy electrons, assumed to be accelerated via an ad-hoc process such as a large magnetic reconnection event or shocks, are required to describe the large flaring events \citep{Markoff_2001_sgra, Dibi_2016, Gutierrez_2020}. Near-infrared observations from the GRAVITY instrument provided further evidence for orbiting hotspot features in the accretion flow \citep{Gravity:20:orbit} that may be linked to acceleration events.
It has also been recently shown that orbiting hotspots can be used to model double-peaked X-ray flares \citep{Haggard_2019, Ball_2021} and prominent Stokes $Q$-$U$ loops in the sub-mm emission of Sgr~A$^*${} \citep{Wielgus:2022}. These results give us considerable motivation to test the capability of the ngEHT to detect hotspot formation in accretion flows around black holes. Instead of isolating a particular magnetic flux eruption event in our simulations, we added a shearing hotspot to the RIAF solution as detailed in Sec.~\ref{sec:RIAF}. Figure~\ref{fig:hotspot} shows the temporal evolution of the azimuthally-averaged electron number density of the hotspot. We begin with a Gaussian distribution of gas that undergoes shearing as the gas falls in closer to the BH. The overall density normalization is much lower than in the RIAF disk, since the optically thin hotspot gas already produces a sufficiently large non-thermal synchrotron emissivity. The hotspot is evolved over $800\,r_{\rm g}/c$, but the gas distribution comes to a near-steady-state profile within the first $200\,r_{\rm g}/c$, which is roughly one hour for Sgr~A$^*${}. The shearing of the hotspot gas has a significant impact on the evolution of the 230 GHz image \citep{Tiede2020, Roelofs_ngEHT}. From Fig.~\ref{fig:disk_profiles_2} (right column), we see that the radial velocity matches the disk-averaged gas velocity from the GRMHD model, showing nearly free-fall speeds, while the azimuthal velocity becomes highly sub-Keplerian. The velocity profiles show that our hotspot model should be able to reproduce the expected hotspot motion from the GRMHD models, and is ideal for investigating multiwavelength flare lightcurves. A companion paper \citep{Emami:2022_hotspot} goes into further detail about how current dynamical reconstruction techniques can be used to trace out the motion and morphology of the shearing hotspot in the context of ngEHT observations.
These hotspot models and reconstruction methods would be integral in deciphering the more complex gas dynamics of magnetic flux eruption events in MADs, which have been shown to produce significant variation in image structure at 230 GHz \citep[e.g.,][]{Gelles:2022}. \section{Conclusions} \label{sec:conclusions} In this work, we have compared a series of numerical solutions with increasing complexity, going from a time-independent radiatively-inefficient accretion flow (RIAF) model to fully 3D GRMHD simulations of accreting black holes, incorporating the effects of electron heating and cooling losses via two-temperature and radiation physics. In addition, each of our simulations is run with a different GRMHD code, similar to the approach of another community-wide code comparison effort \citep{Porth:19}. We found that the simulations exhibit remarkably similar properties given that they incorporate varying levels of complexity in electron physics. The notable exception is the electron temperature, where radiative cooling decreases the temperature by a factor of $\lesssim 10$ within the inner 10 gravitational radii, the region that produces the bulk of the 230 GHz emission in M87$^*${}, one of the two primary targets of the EHT and the ngEHT (the other being Sgr~A$^*${}). The main goal of this work is to understand the variation in the underlying accretion flow and jet properties in our models, since synthetic ray-traced images constructed from these models are used as ``truth'' images for the ngEHT Analysis Challenges \citep{Roelofs_ngEHT}. The ngEHT Analysis Challenges are an effort to determine how much information about the accretion flow and jet dynamics we can glean from the proposed ngEHT reference array, and what modifications to the image reconstruction tools are required to decode future ngEHT observational data.
Our paper deals with numerical models designed to investigate hotspot evolution, turbulent inspiralling gas flows and extended powerful jets, targeting M87$^*${} and Sgr~A$^*${}. We restricted our model set to the community-standard setup: a rotating, geometrically-thick, optically-thin torus of magnetized gas around a spinning black hole, which is the fiducial model choice of the EHT \citep{EHT_M87_2019_PaperV, EHT_M87_2019_PaperVIII, EHT_SgrA_2022_PaperV}. This model choice leaves out exploration of multiple new setups of black hole accretion, such as quasi-spherical wind-fed inflows \citep[e.g.,][]{Ressler:2020:sgra_MAD, Lalakos:2022}, strongly-wind-fed accretion \citep[e.g.,][]{Cruz-Osorio:2017,Kaaz:2022}, geometrically-thin accretion disks \citep[e.g.,][]{Avara:2016, Liska:2022}, puffy radiation-dominated super-Eddington disks \citep[e.g.,][]{Sadowski:2016, Curd:2019} and misaligned accretion disks \citep[e.g.,][]{fragile07, liska_tilt_2018, White_2019, Chatterjee_2020}. Apart from varying the accretion mode, the high resolution of the images from the EHT and ngEHT could potentially help distinguish between different space-time metrics \citep{EHT_Sgra_paper_VI}. So far, only a limited number of non-Kerr GRMHD simulations have been performed \citep[e.g.,][]{Mizuno:2018, Olivares:2020,Nampalliwar:2022PhRvD.106f3009N}. The future of numerical studies is bright, given their rising popularity in the astrophysics community and the increase in computational resources. The breadth of the current investigations in accretion physics would result in a plethora of variable structures that should be thoroughly studied, keeping the observational capabilities of the ngEHT in mind. \vspace{6pt} \acknowledgments{Razieh Emami acknowledges the generous support of the Institute for Theory and Computation at the Center for Astrophysics as well as grant numbers 21-atp21-0077, NSF AST-1816420 and HST-GO-16173.001-A.} \conflictsofinterest{The authors declare no conflict of interest.} \begin{adjustwidth}{-\extralength}{0cm} \reftitle{References} \section{Introduction} With the advent of the Event Horizon Telescope \citep[EHT;][]{EHT_M87_2019_PaperI, EHT_SgrA_2022_PaperI}, imaging the near-horizon structure of supermassive black holes (BHs) is now a reality. The primary targets of the EHT and the future next-generation EHT (or ngEHT\footnote{\href{https://www.ngeht.org/}{https://www.ngeht.org/}}) are M87$^*${} \citep[the supermassive BH in the elliptical galaxy M87;][]{EHT_M87_2019_PaperI} and Sgr~A$^*${} in the Galactic Center \citep{EHT_SgrA_2022_PaperI}, two of the most well-studied low-luminosity active galactic nuclei. Extracting information about the event horizon-scale accretion flows in these two sources using the EHT's enormous resolving power is an active area of research. With the ngEHT, we will achieve unprecedented levels of angular resolution and sensitivity to low-flux regions, with the dynamic range in flux expected to increase to $\sim$1000 compared to the EHT's current dynamic range of $\sim$10 \citep[e.g.,][]{Doeleman2019}. This would enable us to investigate the BH shadow shape with higher precision as well as provide a crucial connection between the accretion flow and the jet launching region. The expected advances in sensitivity require deeper investigations of feature extraction from simulated synthetic reconstructions of BH systems.
Hence, we designed the ngEHT analysis challenges\footnote{\href{https://challenge.ngeht.org/}{https://challenge.ngeht.org/}} \citep{Roelofs_ngEHT} to test our ability to capture the complex dynamics of gas and magnetic fields around M87$^*${} and Sgr~A$^*${} using the ngEHT reference array \citep[e.g.,][]{Raymond:2021} with various analysis methods. Black hole accretion and jet physics have been intensively studied over the past few decades \citep[e.g.,][]{Shakura:73,ree82,Narayan:95a,quataert:2000:CDAF,narayan03,mckinney06, kom07, tch11, narayanSANE2012, tch16}. In the context of M87$^*${} and Sgr~A$^*${}, we expect the accretion flow to be highly sub-Eddington, radiatively inefficient and geometrically thick, a configuration popularly known as a radiatively-inefficient accretion flow (RIAF). This accretion flow solution has been used to successfully model the multiwavelength spectrum of Sgr~A$^*${} \citep[e.g.,][]{Yuan:03}. On the other hand, semi-analytical models of jets are preferred to explain the spectrum of M87$^*${} \citep[e.g.,][]{lucchini19:M87}. Thus, these two sources already provide a means to probe two different components of BH accretion, namely, the inner accretion flow structure and turbulence in Sgr~A$^*${} and the prominent jet feature in M87$^*${}. The first three EHT numerical simulation papers \citep{EHT_M87_2019_PaperV,EHT_M87_2019_PaperVIII,EHT_SgrA_2022_PaperV} already give us important clues about the horizon-scale conditions of these BH systems: (1) these BHs probably have non-zero spin, (2) the accretion disk is expected to have colder electrons than the jet sheath, and (3) the observations favor the presence of dynamically-important magnetic fields close to the BH.
All of these results point us towards the magnetically arrested disk \citep[MAD;][]{igu03,narayan03} state, an accretion mode where the BH magnetosphere becomes over-saturated with magnetic flux and exhibits quasi-periodic eruptions of vertical magnetic flux bundles. MAD flows also have powerful relativistic jets, where the jet power can exceed the input accretion power \citep{tch11}, a definite signature of BH spin energy extraction via the \citet{bz77} process. Building on the semi-analytical RIAF models, time-dependent general relativistic magneto-hydrodynamic (GRMHD) simulations have become important tools for deciphering BH accretion physics in a variety of astrophysical systems \citep[e.g.,][]{gam03, mckinney06,fragile07, tch11,Chael_2019, Porth:19, Narayan2022}. Indeed, the EHT regularly makes use of large libraries of GRMHD simulations to model the observed horizon-scale BH images of M87$^*${} and Sgr~A$^*${} as well as larger-scale jet images \citep[such as for Centaurus A;][]{Janssen:2021} in order to constrain the time-variable plasma properties. In designing the ngEHT reference array, it is therefore crucial to use GRMHD simulations to understand the attainability of specific science goals, such as resolving the photon ring and the disk-jet connection region, as well as tracing out time-variable features via the ngEHT analysis challenges. In this work, we discuss the numerical fluid simulations that were used as source models for the ngEHT analysis challenges. In particular, our objective is to compare models that incorporate increasingly complicated levels of accretion and electron physics, focusing on M87$^*${} and Sgr~A$^*${}. Our model set consists of a time-dependent shearing hotspot stitched to a steady-state RIAF solution, two standard GRMHD simulations of MAD accretion flows, a GRMHD MAD simulation with electron heating via two-temperature physics, and a fully radiative, two-temperature GRMHD MAD simulation.
We describe the equations and setup of our numerical models in Sec.~\ref{sec:sims}, show our comparison results in Sec.~\ref{sec:results} and finally conclude in Sec.~\ref{sec:conclusions}. \section{Numerical simulations} \label{sec:sims} In this section we provide a brief description of the semi-analytical stationary RIAF and shearing hotspot models as well as the (two-temperature/radiative) GRMHD simulations used for the ngEHT analysis challenges. \subsection{RIAF+hotspot solutions} \label{sec:RIAF} RIAF models attempt to describe the time- and azimuthally-averaged appearance of accretion flows using a set of phenomenological building blocks. The specific RIAF models used in the challenges are based on the approach of \citet{Yuan_2003, broderick_2006, Broderick_2010}. We decompose the accretion flow into a set of phenomenological profiles that describe the electron density, temperature, magnetic field, and velocity. The electron density profile is defined in terms of the cylindrical radius $R_{\rm cyl} = r|\sin(\theta)|$ and vertical displacement $z = r\cos(\theta)$, and is given by \begin{equation} n_{e,{\rm X}}(R_{\rm cyl}, z) = n_{e,{\rm X},0}\, R_{\rm cyl}^{p_{\rm X}}\exp\left(-\frac{z^2}{2h^2R_{\rm cyl}^2}\right), \end{equation} where $X$ denotes the electron population. The disk height is controlled by $h$, which we set to unity in this work. For the challenge dataset we included both thermal synchrotron-emitting ($X\equiv{\rm th}$) and non-thermal synchrotron-emitting ($X\equiv{\rm nth}$) electrons. The thermal electrons have $n_{e,{\rm th}, 0}=1.3 \times 10^8$ and $p_{\rm th} = -1.1$, while the non-thermal electrons have $n_{e, {\rm nth}, 0} = 1.3\times 10^5$ and $p_{\rm nth} =-2.02$.
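For readers implementing the profile, the density law above is straightforward to evaluate. A minimal Python sketch follows; the function name is ours, and we assume the normalizations are in ${\rm cm^{-3}}$ with lengths in units of $r_{\rm g}$:

```python
import numpy as np

def riaf_density(r, theta, n0, p, h=1.0):
    """RIAF electron number density: a radial power law in cylindrical
    radius with a Gaussian envelope in vertical displacement z."""
    R_cyl = r * np.abs(np.sin(theta))   # cylindrical radius [r_g]
    z = r * np.cos(theta)               # vertical displacement [r_g]
    return n0 * R_cyl**p * np.exp(-z**2 / (2.0 * h**2 * R_cyl**2))

# Thermal population with the challenge parameters, in the midplane at r = 10 r_g
n_th = riaf_density(10.0, np.pi / 2, n0=1.3e8, p=-1.1)
```

In the midplane the Gaussian factor is unity and the profile reduces to the bare power law, while off-midplane points at the same spherical radius are suppressed.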
The temperature profile of the thermal electrons is also given by a radial power law with a Gaussian envelope in height: \begin{equation}\label{eq:Te_riaf} T_{e}(t,r,\theta, \varphi) = T_{e,0}R_{\rm cyl}^{-0.84}\exp\left(-\frac{z^2}{2h^2R_{\rm cyl}^2}\right), \end{equation} where for the challenge data we set $T_{e,0} = 6.3\times 10^{10}$~K. To define the gas pressure, we assume equipartition between the electron and proton energies and that the protons are in approximate hydrostatic equilibrium, which gives us: \begin{equation}\label{eq:riaf_gas} p_{\rm gas}(x^\mu) = \frac{m_p n_{e,th}(x^\mu)}{6}\frac{M}{r}. \end{equation} For the local magnetic field, we then assume a constant plasma $\beta = p_{\rm gas}/p_{\rm mag}$, which combined with eqn.~\ref{eq:riaf_gas} gives us the magnetic field strength. The orientation of the magnetic field is then given by a purely toroidal configuration, relative to the plasma observer. Finally, we take the emission and absorption coefficients from the synchrotron self-absorption model of \citet{broderickblandford04} for both the thermal and non-thermal synchrotron electrons. For the non-thermal electrons we use a power-law prescription with radiation coefficients from \citet{jonesodell} and a photon spectral index of 1.25. These numbers are chosen to match the best-fit parameters for the Sgr~A$^*${} spectrum from \citet{Broderick:2016}. We follow the velocity field prescription from \citet{Pu2016} to describe the accretion flow dynamics. Using the notation from \citet{Tiede2020}, the velocity field $u^\mu = u^t v^\mu$ is given by \begin{equation}\label{eq:hotspot_velo} \begin{aligned} v^r &= v^r_K + \alpha(v^r_{ff} - v^r_K) \\ v^\theta &= 0 \\ v^\varphi &= v^\varphi_K + (1-\kappa)(v^\varphi_{ff} - v^\varphi_K), \end{aligned} \end{equation} where $v^\mu_K$ denotes the Keplerian velocity field and $v^\mu_{ff}$ is the free-fall velocity field.
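The velocity prescription amounts to a two-parameter interpolation between the Keplerian and free-fall fields; a minimal sketch, assuming (as in the Pu et al. 2016-style blend) that $\alpha$ weights the radial component and $\kappa$ the azimuthal one (helper and argument names are ours):

```python
def blended_velocity(vr_K, vphi_K, vr_ff, vphi_ff, alpha, kappa):
    """Two-parameter blend of Keplerian (K) and free-fall (ff) fields:
    alpha = 0 keeps the Keplerian radial component, alpha = 1 gives free fall;
    kappa = 1 keeps the Keplerian azimuthal component, kappa = 0 gives free fall."""
    v_r = vr_K + alpha * (vr_ff - vr_K)
    v_phi = vphi_K + (1.0 - kappa) * (vphi_ff - vphi_K)
    return v_r, v_phi
```

The limiting cases recover the two base fields, so a sub-Keplerian, infalling flow like the one plotted in Fig.~\ref{fig:disk_profiles_2} corresponds to intermediate parameter values.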
The remaining component $u^t$ is given by the normalization condition $u^\mu u_\mu = -1$. The radial component outside the innermost stable circular orbit (ISCO) is $v^r_K = 0$ as the disk is in steady state. However, inside the ISCO, we use plunging geodesics, which are specified by matching the angular momentum and energy at the ISCO. The hotspot evolution follows the model from \citet{Tiede2020}, where we assume that the hotspot travels passively along the accretion flow velocity field (eqn.~\ref{eq:hotspot_velo}). This implies that the equation of motion is given by the conservation of particle number (eqn.~\ref{eq:continuity}). For the emission we assume a non-thermal synchrotron hotspot with an initial Gaussian density profile \begin{equation} n_{e}(x^\mu) = n_0\exp\left[-\frac{\Delta r^\mu \Delta r_\mu +(\Delta r_\mu u^\mu)^2}{2R_s^2}\right], \end{equation} where we have set $n_0 = 6\times 10^6$ and $R_s = 0.5\,r_{\rm g}$, and $\Delta r^\mu$ is the displacement from the hotspot center. \subsection{GRMHD simulations} \label{sec:GRMHD} Over the past two decades, multiple GRMHD codes have been developed and utilized to model black hole accretion and jet launching physics over long dynamical timescales. The wide usage of GRMHD simulations is particularly encouraging since it allows verification of the code-specific numerical choices that users must make even while solving the same base set of GRMHD equations. Indeed, recently there was a community-wide effort to benchmark these codes against each other for a standard problem: evolving a weakly magnetized torus of gas around a spinning black hole \citep{Porth:19}. It was found that these codes largely provide similar solutions, though some disk quantities remain unconverged with increasing grid resolution, suggesting that more investigation is required.
For this work, we employ three different GRMHD codes to probe black hole accretion, increasing the complexity of the equations solved at each step: (1) single-fluid GRMHD simulations from the \texttt{H-AMR}{} code \citep{liska_hamr2020}, (2) a two-temperature single-fluid GRMHD simulation from the \texttt{BHAC}{} code \citep{porth17}, and (3) a two-temperature radiative GRMHD simulation from the \texttt{KORAL}{} code \citep{Sadowski:13_koral}. First we describe the set of GRMHD equations \citep[e.g., from][]{Gammie:03, Porth:19}. We have the conservation of particle number and energy-momentum: \begin{eqnarray} \label{eq:continuity} \partial_{t}(\sqrt{-g} \rho u^{t}) &=& -\partial_{i}(\sqrt{-g} \rho u^{i}) \\ \partial_{t}(\sqrt{-g} T^t_{\nu}) &=& -\partial_{i}(\sqrt{-g} T^i_{\nu}) +\sqrt{-g} T^{\alpha}_{\beta}\Gamma^{\beta}_{\nu \alpha}. \end{eqnarray} \noindent Here, $\rho$ is the rest-mass gas density and can also be written in the form $\rho=m n$, where $m$ is the mean rest-mass per particle and $n$ is the particle number density. We also have the four-velocity $u^{\mu}$, stress-energy tensor $T^{\mu}_{\nu}$, metric determinant $g\equiv \det(g_{\mu\nu})$ and the metric connection $\Gamma^{\beta}_{\nu \alpha}$. Note that the index $t$ refers to the temporal component of the vector or tensor and $i$ denotes the spatial indices. The stress-energy tensor $T^{\mu}_{\nu}$ is given as \begin{equation} T^{\mu}_{\nu}=(\rho + U_{\rm gas} + p_{\rm gas} + 2p_{\rm mag})u^{\mu}u_{\nu} + (p_{\rm gas} + p_{\rm mag})\delta^{\mu}_{\nu} - b^{\mu}b_{\nu}, \end{equation} \noindent where $U_{\rm gas}$ and $p_{\rm gas}$ are the gas internal energy and pressure, related by the ideal gas equation of state: $p_{\rm gas}=(\Gamma_{\rm gas}-1)U_{\rm gas}$. We also have the magnetic pressure $p_{\rm mag}=b^2/2$ and the magnetic field 4-vector $b^{\mu}$, which can be defined in terms of the magnetic field 3-vector $B^i$: \begin{eqnarray} b^t &=& B^i u^{\mu} g_{i\mu} \\ b^i &=& (B^i + b^t u^i)/u^t.
\end{eqnarray} Here we included a factor of $\sqrt{4\pi}$ into the definition of $B^i$. We evolve the magnetic field $B^i$ using the spatial components of the induction equation, \begin{equation} \partial_{t}(\sqrt{-g} B^i) = -\partial_{j}(\sqrt{-g} (b^j u^i - b^i u^j)), \end{equation} \noindent while the temporal component provides the no-monopoles constraint, \begin{equation} \frac{1}{\sqrt{-g}}\partial_i(\sqrt{-g} B^i) = 0. \end{equation} These equations are numerically integrated in the conservative form \citep{Gammie:03} to get the physically-relevant quantities $\rho$, $U_{\rm gas}$, $u^{\mu}$ and $B^i$. We refer the reader to the corresponding code papers for more information on the numerical techniques used to evolve these equations over space and time. In this work, we use two GRMHD simulations performed with the \texttt{H-AMR}{} code, one targeting M87$^*${} and the other Sgr~A$^*${}. These simulations employ logarithmic Kerr-Schild coordinates and the grid resolutions are $N_r\times N_\theta \times N_\varphi = 580 \times 288 \times 512$ for the M87$^*${} simulation and $348 \times 192 \times 192$ for the Sgr~A$^*${} simulation. All simulations in this work adopt the geometrical unit convention, $GM_{\rm BH}=c=1$, normalizing the length scale to the gravitational radius $r_{\rm g}=GM_{\rm BH}/c^2$. The M87$^*${} GRMHD simulation evolves a MAD flow around a black hole with spin $a=0.9375$. The Sgr~A$^*${} model also simulates a MAD flow but around a black hole with spin $a=1/2$. \texttt{H-AMR}{} uses outflowing radial boundary conditions (BCs), transmissive polar BCs and periodic azimuthal BCs \citep[for more details, see][]{liska_tilt_2018}. Since GRMHD simulations are scale-free, we determine the gas density code-to-CGS units conversion factor (hereafter, ``density scaling'') by raytracing the simulation at 230 GHz for a target source and flux. 
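Because the simulations are scale-free, converting code lengths ($r_{\rm g}$) and times ($r_{\rm g}/c$) to physical units only requires the BH mass. A short sketch in cgs units (function name ours):

```python
G = 6.674e-8        # gravitational constant [cm^3 g^-1 s^-2]
C = 2.998e10        # speed of light [cm/s]
MSUN = 1.989e33     # solar mass [g]

def gravitational_scales(m_bh_msun):
    """Return (r_g in cm, r_g/c in s) for a BH of the given mass in M_sun."""
    r_g = G * m_bh_msun * MSUN / C**2
    return r_g, r_g / C

r_g_m87, t_g_m87 = gravitational_scales(6.2e9)     # M87*: r_g/c ~ 8.5 hours
r_g_sgra, t_g_sgra = gravitational_scales(4.14e6)  # Sgr A*: r_g/c ~ 20 s
```

For the Sgr~A$^*${} mass used here, $200\,r_{\rm g}/c$ comes out to roughly an hour, consistent with the hotspot evolution timescale quoted in Sec.~\ref{sec:results}; the remaining code-to-CGS freedom is the density scaling fixed by ray-tracing.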
We use the general relativistic ray-tracing codes \texttt{BHOSS}{} \citep{Younsi:19_polarizedbhoss} and \texttt{IPOLE}{} \citep{Moscibrodzka:2018} to compute images at 230 GHz and set the compact flux to be approximately 0.5 Jy for M87$^*${} and 2.4 Jy for Sgr~A$^*${}. We take black hole masses and distances of $M_{\rm BH} =6.2\times10^9 M_{\odot}$ and $D_{\rm BH}=16.9$~Mpc for M87$^*${}, and $M_{\rm BH}=4.14\times10^6 M_{\odot}$ and $D_{\rm BH}=8.127$~kpc for Sgr~A$^*${}. GRMHD simulations evolve a single-temperature fluid. At the low accretion rates seen in M87$^*${} and Sgr~A$^*${}, Coulomb coupling between ions and electrons is inefficient and therefore the two particle species are not in thermal equilibrium. Since ions are much heavier than electrons, ions dominate the single-fluid thermodynamics evolved in GRMHD simulations. Hence, to calculate the radiative output from GRMHD simulations, we estimate the electron temperature $T_{\rm e}$ using sub-grid models such as the $R-\beta$ prescription \citep{Moscibrodzka_2016} based on the local plasma-$\beta$ ($\equiv p_{\rm gas}/p_{\rm mag}$): \begin{eqnarray} T_{\rm e} &=& \frac{2 m_{\rm p}U_{\rm gas}}{3k_{\rm B}\rho (2 + R)}, \\ {\rm where,\,\,} R &=& \frac{1+R_{\rm high} \beta^2}{1+\beta^2}. \label{eq:Rbeta} \end{eqnarray} For the ngEHT analysis challenges, we take $R_{\rm high}$ values of 160 and 40 and source inclinations of $163^{\circ}$ and $50^{\circ}$ for the M87$^*${} and Sgr~A$^*${} simulations, respectively. We assume a relativistic thermal Maxwell-J\"uttner distribution for the electron energies in the Sgr~A$^*${} model, and a hybrid thermal$+$non-thermal $\kappa$-distribution \citep[e.g.,][]{Xiao:2006, Davelaar:18} for the M87$^*${} model. The model images are shown in \citet{Roelofs_ngEHT}.
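A minimal sketch of the $R-\beta$ map in Python (function name ours; the constant 1 in the numerator of $R$ plays the role of $R_{\rm low}$ in eqn.~\ref{eq:Rbeta}):

```python
K_B = 1.3807e-16   # Boltzmann constant [erg/K]
M_P = 1.6726e-24   # proton mass [g]

def electron_temperature(u_gas, rho, beta, r_high, r_low=1.0):
    """R-beta prescription: T_e = 2 m_p U_gas / (3 k_B rho (2 + R)).
    R -> r_low in magnetized (low-beta) gas, so electrons stay close to the
    fluid temperature in the jet sheath; R -> r_high in the high-beta disk,
    suppressing T_e there."""
    R = (r_low + r_high * beta**2) / (1.0 + beta**2)
    return 2.0 * M_P * u_gas / (3.0 * K_B * rho * (2.0 + R))
```

With the M87$^*${} choice $R_{\rm high}=160$, disk electrons ($\beta\gg1$) come out a factor $\sim(2+160)/3\approx54$ cooler than jet-sheath electrons ($\beta\ll1$) at fixed $U_{\rm gas}/\rho$, which is what makes the sub-mm image jet-dominated.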
\subsection{Two-temperature physics} \label{sec:two-temp} Two-temperature GRMHD (2t-GRMHD) simulations \citep[e.g.,][]{ressler_2015, Dexter_2020} evolve the ion and electron entropy equations separately and hence provide the ion and electron temperatures in a self-consistent manner. The main advantage of this method is that we remove the electron temperature as a free parameter when constructing images. However, we do have to make a choice about the sub-grid prescription that determines the fraction of the local dissipative energy that heats the electrons. Two heating mechanisms are thought to be applicable to global accretion flows: turbulent heating \citep{Howes:2010, Kawazura:2019} and magnetic reconnection \citep{Werner:2018, Rowan:2017}. For the ngEHT analysis challenges, we focus on one simulation with reconnection heating, taken from \citet{Mizuno_2021}. We assume that the number densities and velocities of ions and electrons are equal, i.e., $n_{\rm i}=n_{\rm e}=n$ and $u^{\mu}_{\rm i}=u^{\mu}_{\rm e}=u^{\mu}$, maintaining charge neutrality. The electron entropy equation is given as \begin{equation} \partial_{\mu} (\sqrt{-g} \rho u^{\mu} s_{\rm e}) = \frac{\sqrt{-g} (\Gamma_{\rm e} - 1)}{\rho^{\Gamma_{\rm e} -1}}f_{\rm e}Q, \end{equation} \noindent where the electron entropy is $s_{\rm e} = p_{\rm e}/\rho^{\Gamma_{\rm e}}$, with $p_{\rm e}$ and $\Gamma_{\rm e}$ the electron pressure and adiabatic index. The total heating rate $Q$ is calculated by comparing the total internal energy of the gas with the internal energy obtained from the electron entropy conservation equation \citep[see][for more details]{ressler_2015}. The fraction of dissipative heating that contributes to electron heating is given by $f_{\rm e}$.
For this particular simulation, $f_{\rm e}$ is designed to capture electron/ion heating via magnetic reconnection from \citet{Rowan:2017}: \begin{equation} f_{\rm e} = \frac{1}{2} \exp \Big[ -\frac{1-\beta/\beta_{\rm max}}{0.8+\sigma_{\rm h}^{0.5}} \Big], \end{equation} \noindent where $\beta_{\rm max} = 1/(4\sigma_{\rm h})$, defined using the hot gas magnetization $\sigma_{\rm h} = b^2/\rho h$ and the specific gas enthalpy $h=1+\Gamma_{\rm g} p_{\rm g}/[\rho(\Gamma_{\rm g}-1)]$. The 2t-GRMHD simulation from \citet{Mizuno_2021} uses modified Kerr-Schild coordinates and a black hole spin of $a=0.9375$. The grid resolution is $384 \times 192 \times 192$. The accretion flow is in the magnetically arrested state, and the simulation is ray-traced (using \texttt{BHOSS}{}) once the near-horizon flow has reached a steady state. The target source is M87$^*${}, assuming a black hole mass of $M_{\rm BH}=6.5\times10^9 M_{\odot}$ and a distance of 16.9 Mpc. The accretion rate is normalized such that the 230 GHz compact flux density is 0.8 Jy. We assume a thermal electron distribution everywhere except in the jet sheath, where we adopt a $\kappa$-distribution. More details about the image are provided in \citet{Roelofs_ngEHT}. \subsection{Radiative GRMHD} \label{sec:GRRMHD} Two-temperature GRMHD simulations do not include radiative cooling and are hence thought to be appropriate for low-luminosity supermassive black holes such as M87$^*${} and Sgr~A$^*${}. To verify this assumption, we consider a two-temperature radiative GRMHD (2t-GRRMHD hereafter) simulation from \citet{Chael_2019}. This simulation accounts for self-consistent radiation physics, incorporating both particle heating via magnetic reconnection (as in Sec.~\ref{sec:two-temp}) and radiative cooling via bremsstrahlung, synchrotron, Compton and Coulomb losses.
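Both the reconnection-heated models evaluate the heating fraction $f_{\rm e}$ pointwise from the local $\beta$ and $\sigma_{\rm h}$; a sketch of the fit as written above (variable names ours):

```python
import numpy as np

def f_e_reconnection(beta, sigma_h):
    """Rowan et al. (2017)-style electron heating fraction for reconnection:
    f_e = 1/2 (equal ion/electron heating) at beta = beta_max, and a smaller
    electron share at lower beta, approaching ~0.14 when sigma_h << 1."""
    beta_max = 1.0 / (4.0 * sigma_h)
    return 0.5 * np.exp(-(1.0 - beta / beta_max) / (0.8 + np.sqrt(sigma_h)))
```

The fraction never exceeds one half for $\beta\le\beta_{\rm max}$, so ions always receive at least as much of the dissipated energy as electrons in this prescription.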
The simulation is run using the 2t-GRRMHD code \texttt{KORAL}{} \citep{Sadowski:13_koral, Sadowski:15_photon,Chael_2018}, which evolves a two-temperature magnetized fluid and treats the radiation field as a second fluid \citep{Sadowski:2017}. The conservation equations solved in 2t-GRRMHD differ from those of GRMHD: \begin{equation} (T^{\mu}_{\nu} + R^{\mu}_{\nu})_{;\mu} =0, \end{equation} \noindent where $R^{\mu}_{\nu}$ is the frequency-integrated radiation stress-energy tensor, defined as \begin{equation} R^{\mu}_{\nu} = \frac{4}{3} \overline{E} u^{\mu}_{\rm R} u_{\nu \rm R} + \frac{1}{3} \overline{E}\delta^{\mu}_{\nu}. \end{equation} \noindent Here, the radiation field is described by its rest-frame energy density $\overline{E}$ and four-velocity $u^{\mu}_{\rm R}$ following the M1 closure scheme. The ion and electron entropy equations are \begin{eqnarray} T_{\rm e} (ns_{\rm e}u^{\mu})_{;\mu} &=& f_{\rm e}q^{\rm v} + q^{\rm C} - G, \\ T_{\rm i} (ns_{\rm i}u^{\mu})_{;\mu} &=& (1-f_{\rm e})q^{\rm v} - q^{\rm C}, \end{eqnarray} \noindent where $q^{\rm v}$ is the dissipative heating rate and $q^{\rm C}$ is the Coulomb coupling rate that captures the exchange of energy between ions and electrons. The heating fraction $f_{\rm e}$ is taken from \citet{Rowan:2017} (see \citealt{Chael_2018} for more details), the same as for the 2t-GRMHD simulation. Finally, $G$ is the radiative cooling rate \citep{Sadowski:13_koral}. For further details about the equations, see \citet{Sadowski:13_koral, Sadowski:2017, Chael_2018}. The simulation assumes a black hole spin of $a=0.9375$ and mass $M_{\rm BH}=6.2\times10^9 M_{\odot}$, targeting M87$^*${}. The gas density is scaled to physical CGS units such that the compact emission at 230 GHz is roughly 0.98 Jy. The simulation uses modified Kerr-Schild coordinates with a grid resolution of $N_r\times N_\theta \times N_\varphi = 288 \times 224 \times 128$. See \citet{Chael_2019} for more details about the simulation.
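For intuition, the evolved electron entropy variable maps back to a temperature once a composition is fixed. A sketch assuming a pure-hydrogen plasma ($n_{\rm e}=\rho/m_{\rm p}$) and a relativistic electron adiabatic index $\Gamma_{\rm e}=4/3$ (both assumptions ours):

```python
K_B = 1.3807e-16   # Boltzmann constant [erg/K]
M_P = 1.6726e-24   # proton mass [g]

def T_e_from_entropy(s_e, rho, gamma_e=4.0 / 3.0):
    """Invert the entropy variable s_e = p_e / rho^Gamma_e for the electron
    pressure, then apply the ideal gas law p_e = n_e k_B T_e with
    n_e = rho / m_p (pure hydrogen assumed)."""
    p_e = s_e * rho**gamma_e
    return p_e * M_P / (rho * K_B)
```

At fixed density the temperature is linear in the entropy variable, which is why dissipative heating ($+f_{\rm e}q^{\rm v}$) and radiative losses ($-G$) translate directly into electron temperature changes.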
While not utilized for the ngEHT analysis challenges, we include this simulation in this work since it captures the coupling between gas and radiation, which is necessary for black holes accreting close to the Eddington limit. Further, this model has been used in previous ngEHT reference array papers \citep{Blackburn:2019_APC, Raymond:2021}. \section{Results} \label{sec:results} We perform a series of comparisons focused on the time-evolution of horizon-scale quantities and the radial dependence of disk and jet properties. The diagnostics are chosen such that any trends we find can inform EHT/ngEHT science applications, such as horizon-scale morphology and variability of the accretion flow. Further, the quantities are similar to those reported in the GRMHD code comparison project \citep{Porth:19} and so can be directly compared. There are a total of five models: three GRMHD-based simulations (GRMHD, 2t-GRMHD and 2t-GRRMHD) targeting M87$^*${}, and one RIAF solution and one GRMHD simulation for Sgr~A$^*${}. We further note that all three numerical simulations of M87$^*${} have the same BH spin, favoring direct comparisons of the horizon-scale gas properties. \subsection{Temporal behavior of horizon fluxes} \label{sec:time} \begin{figure} \includegraphics[width=\columnwidth]{figures/time_data.png} \caption{We show the mass accretion rate $\dot{M}$, dimensionless magnetic flux $\phi\equiv\Phi/\sqrt{\dot{M}}$, the outflow efficiency $P_{\rm out}/\dot{M}c^2=1-\dot{E}/\dot{M}c^2$ and the specific angular momentum flux $\dot{J}/\dot{M}$ over time. Values are calculated at the event horizon.} \label{fig:Horizon_fluxes} \end{figure} We calculate the mass, magnetic, angular momentum and energy fluxes in the radial direction as follows: \begin{eqnarray} {\rm Mass:\,\,}& \dot{M} &=\int^{2\pi}_{0}\int^{\pi}_{0}\, (-\rho \, u^r) \, \sqrt{-g} \, d\theta \, d\varphi\,, \\ {\rm Magnetic:\,\,}& \Phi &=\frac{\sqrt{4\pi}}{2}\int^{2\pi}_{0}\int^{\pi}_{0}\, |B^r| \, \sqrt{-g} \, d\theta \, d\varphi\,, \\ {\rm Ang.
Mom.:\,\,}& \dot{J} &=\int^{2\pi}_{0}\int^{\pi}_{0}\, T^r_{\varphi} \, \sqrt{-g} \, d\theta \, d\varphi\,, \\ {\rm Energy:\,\,}& \dot{E} &=\int^{2\pi}_{0}\int^{\pi}_{0}\, T^r_{t} \, \sqrt{-g} \, d\theta \, d\varphi\,, \end{eqnarray} \noindent where all quantities are calculated at the event horizon radius $r_{\rm hor}=r_{\rm g} (1+\sqrt{1-a^2})$. We note that density floors could contribute substantially to the mass accretion rate in MAD systems; nevertheless, the event horizon was chosen as the measurement radius for simplicity and for ease of comparison with previous simulations in the literature. Figure~\ref{fig:Horizon_fluxes} shows the mass accretion rate $\dot{M}$ in units of solar masses per year ($M_{\odot}$/yr), the dimensionless magnetic flux $\phi=\Phi/\sqrt{\dot{M} r_{\rm g}^2 c }$, the outflow power $P_{\rm out} = \dot{M}c^2-\dot{E}$ and the specific angular momentum flux $\dot{J}/\dot{M}$ for the simulations targeting M87$^*${} and Sgr~A$^*${}. The RIAF model, being a steady-state solution, is excluded from this section (though the hotspot evolves with time). Quantities from the 2t-GRRMHD simulation are only shown for $(11-16)\times 10^3\,r_{\rm g}/c$, i.e., the time period over which the simulation was raytraced in \citet{Chael_2019}. Remarkably, despite the differences in electron physics complexity, the simulations behave very similarly. The factor of 2 difference in $\dot{M}$ between the M87$^*${} non-radiative simulations and the 2t-GRRMHD simulation can be explained by the lower electron temperatures in the near-horizon accretion flow due to radiative cooling (see Sec.~\ref{sec:disk}) as well as the higher 230 GHz flux normalization used for the radiative model. \begin{table}[H] \caption{Modulation index (MI) of the mass accretion rate $\dot{M}$, dimensionless magnetic flux $\phi$, outflow efficiency $P_{\rm out}/\dot{M}c^2$ and the specific angular momentum flux $\dot{J}/\dot{M}$ for each GRMHD model.
The quantities are calculated over the final $5000\,\,r_{\rm g}/c$ of runtime and at the event horizon (see Fig.~\ref{fig:Horizon_fluxes}).} \newcolumntype{C}{>{\centering\arraybackslash}X} \begin{tabularx}{\textwidth}{lCCcc} \toprule Model & MI($\dot{M}$) & MI($\phi$) & MI($P_{\rm out}/\dot{M}c^2$) & MI($\dot{J}/\dot{M}$)\\ \midrule M87$^*${} GRMHD & 0.27 & 0.15 & 0.26 & 0.33\\ M87$^*${} 2t-GRMHD & 0.29 & 0.14 & 0.25 & 0.31\\ M87$^*${} 2t-GRRMHD & 0.28 & 0.14 & 0.14 & 0.31\\ Sgr~A$^*${} GRMHD & 0.23 & 0.21 & 0.39 & 0.57\\ \bottomrule \end{tabularx} \label{tab:MI} \end{table} The accretion rate in all simulations shows large variations with quasi-periodic sharp drops. These drops in $\dot{M}$ occur due to the emergence of magnetic flux eruptions, a characteristic feature of the magnetically arrested disk \citep{Porth:2020:NIR_MAD,Begelman2022, Ripperda2022, Chatterjee:2022}. These eruptions also lower the value of $\phi$, since magnetic flux bundles escape from the vicinity of the BH, carrying away the magnetic flux accumulated in the BH magnetosphere. We see that $\phi$ often crosses the magnetic flux saturation value of 50 \citep{tch11}, overwhelming the BH magnetosphere with strong magnetic fields that eventually reconnect and trigger flux eruptions \citep[see][for the detailed mechanism]{Ripperda2022}. As these field line bundles move out and interact with the disk, they (1) hinder accretion, lowering $\dot{M}$, (2) remove magnetic flux from near the BH, lowering the jet power, and (3) push gas outwards, reducing the inward angular momentum flux. Curiously, we see larger drops in the specific angular momentum flux for the Sgr~A$^*${} GRMHD model. This is possibly due to the smaller BH spin ($a=0.5$ as opposed to $0.9375$ for the M87$^*${} models): the more weakly powered jet does not carry away angular momentum as efficiently as in the higher-spin models, and flux eruptions play a bigger role in regulating disk angular momentum transport.
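As an illustration, the horizon-flux integrals above and the modulation indices of the resulting time series (Table~\ref{tab:MI}) can be evaluated from simulation output along the following lines. This is a minimal sketch, assuming the code dumps 2D $(\theta,\varphi)$ arrays of $\rho$, $u^r$, $B^r$, $T^r_{\ t}$, $T^r_{\ \varphi}$ and $\sqrt{-g}$ on the horizon shell with uniform angular cells; all array and function names here are ours, not tied to any particular GRMHD code.

```python
import numpy as np

def horizon_fluxes(rho, ur, Br, Trt, Trph, gdet, dtheta, dphi):
    """Surface integrals on the horizon shell: mass, magnetic,
    angular momentum and energy fluxes. Inputs are 2D arrays over
    (theta, phi) at r = r_hor; gdet is sqrt(-g)."""
    dA = gdet * dtheta * dphi                  # sqrt(-g) dtheta dphi
    mdot = np.sum(-rho * ur * dA)              # Mdot = -int rho u^r dA
    Phi = 0.5 * np.sqrt(4.0 * np.pi) * np.sum(np.abs(Br) * dA)
    jdot = np.sum(Trph * dA)                   # Jdot = int T^r_phi dA
    edot = np.sum(Trt * dA)                    # Edot = int T^r_t dA
    return mdot, Phi, jdot, edot

def modulation_index(x):
    """MI = (standard deviation) / (mean) of a time series."""
    x = np.asarray(x, dtype=float)
    return np.std(x) / np.mean(x)
```

The dimensionless flux then follows as $\phi = \Phi/\sqrt{\dot{M}\,r_{\rm g}^2 c}$ once the fluxes are scaled to physical units.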
Additionally, the reconnection events that trigger these eruptions accelerate electrons to higher energies, and are thus crucial for understanding flare activity in BH sources. To quantify the time variability of the horizon fluxes, we calculate the modulation index MI, which is defined as the ratio of the standard deviation to the mean of the quantity over time \citep{EHT_SgrA_2022_PaperV}. We show MI for the different fluxes in Table~\ref{tab:MI}. The MI($\dot{M}$) is usually a good proxy for the variability of the sub-millimeter emission in these slowly accreting, optically thin black hole sources \citep[e.g.,][]{Chatterjee:2021}. The MI($\dot{M}$) values we see from the simulations are $\sim0.23-0.29$, larger than expected from Sgr~A$^*${} 230 GHz lightcurves \citep[where ${\rm MI}\sim 0.1$;][]{Wielgus:2022_SgrALC}. This suggests that careful analysis of the electron distribution function is needed to understand whether we are substantially over-predicting the 230 GHz lightcurve variability. Further, in general, weakly-magnetized accretion flows exhibit lower MI($\dot{M}$) values due to the absence of flux eruptions, which suggests that further study of the accretion mode in Sgr~A$^*${} is also necessary. It is encouraging to note that our MI values for $\dot{M}$ and $\phi$ are consistent with the MI values from longer time-evolved GRMHD simulations of $a=0.9$ BHs in \citet{Narayan2022}, indicating that our simulations are sufficiently converged with respect to horizon-scale quantities. \subsection{Disk-averaged quantities} \label{sec:disk} \begin{figure} \includegraphics[width=\columnwidth]{figures/radial_data_1.png} \caption{We show the radial profiles of gas density $\rho$, plasma-$\beta$, proton temperature $T_{\rm p}$ and electron temperature $T_{\rm e}$.
Quantities are disk-averaged and time-averaged over the raytracing period.} \label{fig:disk_profiles_1} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{figures/radial_data_2.png} \caption{We show the radial profiles of disk scale height $h/r$, radial velocity $|v_r|$, angular velocity $\Omega$ and specific angular momentum $u_{\varphi}$. Quantities are disk-averaged and time-averaged over the raytracing period.} \label{fig:disk_profiles_2} \end{figure} Here we calculate the disk-averaged properties of each model, namely the gas density $\rho$, thermal pressure $p_{\rm gas}$, magnetic pressure $p_{\rm mag}$, radial velocity $|v_r|$, azimuthal velocity $|v_{\varphi}|$, angular momentum $u_{\varphi}$, disk scale height $h/r$, ion temperature $T_i$ and electron temperature $T_e$. We define the disk average of a quantity $q$ as \begin{equation} \langle q \rangle (r,t) = \frac{\int^{2\pi}_{0}\int^{\pi}_{0}\, \rho \, q \, \sqrt{-g} \, d\theta \, d\varphi}{\int^{2\pi}_{0}\int^{\pi}_{0}\, \rho \, \sqrt{-g} \, d\theta \, d\varphi}, \end{equation} where $q \in \{ \rho, p_{\rm gas}, p_{\rm mag}, |v_r|, |v_{\varphi}| , u_{\varphi}, h/r, T_i [\rm Kelvin], T_e [\rm Kelvin] \}$. Further definitions follow: \begin{eqnarray} p_{\rm gas} &=& (\Gamma_{\rm ad}-1)u, \\ p_{\rm mag} &=& b^{\mu}b_{\mu}/2, \\ |v_i| &=& \sqrt{v^i v^i g_{ii}}, \\ {\rm where,\,\,} v^i &=& u^i/u^t, \nonumber \\ h/r &=& |\theta - \pi/2|, \end{eqnarray} where $\Gamma_{\rm ad}$ and $u$ are the adiabatic index and the internal energy of the gas, respectively. Figures~\ref{fig:disk_profiles_1} and \ref{fig:disk_profiles_2} show the respective disk-averaged radial profiles for each model, including the Sgr~A$^*${} RIAF solution. The density profiles in the inner few tens of $r_{\rm g}$ converge roughly to an $n_{\rm e}\propto r^{-1}$ profile, matching the RIAF density profile as well as longer time-evolved MAD simulations \citep{Chatterjee:2022}.
The M87$^*${} 2t-GRRMHD density is larger by a factor of $\approx 2$ than in the GRMHD/2t-GRMHD models, as expected from the difference in the mass accretion rate (Fig.~\ref{fig:Horizon_fluxes}). The 2t-GRRMHD simulation exhibits a slightly more magnetized inflow within the inner $2\,\,r_{\rm g}$, but overall, the GRMHD simulations have a similar plasma-$\beta\equiv p_{\rm gas}/p_{\rm mag}$ disk profile. The stronger magnetic field seen in the 2t-GRRMHD model could explain the higher values of the horizon magnetic flux seen in Fig.~\ref{fig:Horizon_fluxes}. The RIAF model assumes a constant disk plasma-$\beta=10$ (see Sec.~\ref{sec:RIAF}), which is substantially higher than in the MAD GRMHD models. This value of plasma-$\beta$ is chosen in order to match the observed 230 GHz flux density of Sgr~A$^*${}. As we see from the disk scale height in Fig.~\ref{fig:disk_profiles_2}, the RIAF model has a much thicker disk than the GRMHD models, and therefore produces much more sub-millimeter (sub-mm) emission even with a low electron temperature and weak magnetic field strength. Next, we see that the disk-averaged electron temperature $T_{\rm e}$ in the 2t-GRRMHD M87$^*${} model is more than an order of magnitude lower than in the other GRMHD models within the inner $10\,\, r_{\rm g}$, but actually matches the Sgr~A$^*${} RIAF $T_{\rm e}$ profile and has a shallower slope, $T_{\rm e} \propto r^{-1}$ instead of $r^{-3/2}$. It is further interesting to note that the disk ion temperatures $T_{\rm i}$ are very similar in all the GRMHD simulations shown here. Therefore, despite the same reconnection-driven heating mechanism being captured in both the 2t-GRMHD and the 2t-GRRMHD models, radiative cooling of hot electrons plays a crucial role in determining the eventual $T_{\rm e}$. Due to the low $T_{\rm e}$, the required accretion rate normalization is higher in the 2t-GRRMHD model, as we noted in the previous subsection.
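The density-weighted disk average defined above reduces to a simple weighted sum on a discrete grid. The sketch below assumes arrays with trailing $(\theta,\varphi)$ axes on a uniform angular grid, so the constant cell widths cancel between numerator and denominator; the function name and array layout are our own choices.

```python
import numpy as np

def disk_average(q, rho, gdet, axis=(-2, -1)):
    """Density-weighted shell average <q>(r): integrate rho*q*sqrt(-g)
    over the trailing (theta, phi) axes and normalize by the integral
    of rho*sqrt(-g). On a uniform angular grid the cell widths
    dtheta*dphi cancel between numerator and denominator."""
    w = rho * gdet
    return np.sum(w * q, axis=axis) / np.sum(w, axis=axis)
```

Passing 3D $(r,\theta,\varphi)$ arrays returns the full radial profile $\langle q\rangle(r)$ in one call.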
In Fig.~\ref{fig:disk_profiles_2}, we show the average disk scale height $h/r$, the radial and angular velocities ($v_r$ and $\Omega$) and the specific angular momentum $u_{\varphi}$. The MAD simulations all show very similar disk properties. We find $\langle h/r\rangle \approx 0.1-0.3$, with a sharp increase within $3\,\,r_{\rm g}$, where the inflow becomes vertically supported by strong poloidal magnetic fields. The radial velocity follows an $r^{-1/2}$ profile, similar to the scaling relation found in ADAF solutions assuming a constant viscosity parameter $\alpha$ \citep{nar94,nar98}. The $\alpha$ parameter profile depends on how magnetized the accretion flow is, with $\alpha\propto r^{-1}$ for weakly-magnetized flows and close to constant for MAD-like flows \citep[e.g.,][]{liska_tor_2019,Chatterjee:2022}. We also see highly sub-Keplerian angular velocity profiles in the GRMHD models, typical for magnetically supported disks. The RIAF disk, by construction, is not infalling and has a Keplerian angular velocity. Instead, the hotspot, added to the RIAF solution, undergoes shearing and falls into the BH with a radial velocity similar to the values found in the GRMHD MAD disks. This occurs because the hotspot is designed to travel along plunging geodesics (see Sec.~\ref{sec:RIAF}), similar to the rapid gas infall close to the BH in the GRMHD models. The angular momentum in the GRMHD models is sub-Keplerian, as expected for MADs. \subsection{Jet properties} \begin{figure} \includegraphics[width=\columnwidth]{figures/jet_data.png} \caption{We show the jet radius $R_{\rm jet}$ and the jet Lorentz factor $\gamma$ from the M87$^*${} GRMHD and 2t-GRRMHD models, and the Sgr~A$^*${} GRMHD model. The gray circles indicate the deprojected jet radius of the M87 jet assuming a BH mass of $6.2\times10^9M_{\odot}$ and a source inclination of $14^{\circ}$ \citep{Nakamura_2018}.
The data points are compiled from various papers \citep{Doeleman2012,asadanak2012,Hada2013,nak2013,Akiyama:2015,Hada2016}.} \label{fig:jet_profiles} \end{figure} Here we calculate the radial profiles of the jet half width $R_{\rm jet}$ and Lorentz factor $\gamma$: \begin{equation} R_{\rm jet} (r,t) = \sqrt\frac{\int^{2\pi}_{0}\int^{\pi}_{0}\, \sqrt{-g/g_{rr}} (\mu>2) \, d\theta \, d\varphi}{2\pi}\,, \label{eq:Rjet} \end{equation} \begin{equation} \gamma (r,t) = \frac{\int^{2\pi}_{0}\int^{\pi}_{0}\, (\mu>2) \, \alpha u^t \, \sqrt{-g} \, d\theta \, d\varphi}{\int^{2\pi}_{0}\int^{\pi}_{0}\, (\mu>2) \, \sqrt{-g} \, d\theta \, d\varphi}, \end{equation} where $\mu=-T^r_t/(\rho u^r)$ is the specific radial energy density and $\alpha=1/\sqrt{-g^{tt}}$ is the lapse. We define the jet boundary as $\mu>2$, i.e., the region over which the jet still remains highly energized. Note that this definition of the jet radius is quite similar to the standard definition used in the literature \citep[e.g.,][]{Narayan2022}, where the jet boundary is taken to be the $\sigma=1$ surface. Since $\mu=\gamma(\sigma+h)$,\footnote{Note that the specific enthalpy includes the rest-mass energy contribution in our definition from Sec.~\ref{sec:two-temp}.} our condition $\mu>2$ also incorporates regions where the jet might not be magnetically dominated but is relativistically hot or fast. Since we restrict our jet profiles to within $r\lesssim 10^3\,r_{\rm g}$, the radius is primarily determined by the jet magnetization. Figure~\ref{fig:jet_profiles} shows the jet radius $R_{\rm jet}$ and Lorentz factor $\gamma$ as a function of radial distance from the BH for the M87$^*${} GRMHD and 2t-GRRMHD models as well as the Sgr~A$^*${} GRMHD model.
The M87$^*${} jet radius from our models matches the observed jet width of M87 (gray circles) quite well, with the radial profile roughly proportional to $r^{0.625}$, which is the fitted powerlaw for the M87 jet \citep{Nakamura_2018}, though the index value has also been reported to be slightly smaller in some works \citep[0.57;][]{asadanak2012,Nokhrina:2019}. The powerlaw index of 0.625 is larger than that found using the $\sigma=1$ condition by \citet{Narayan2022}, where the authors found a powerlaw index of 0.428 for their MAD spin $a=0.9$ GRMHD model. It is possible that we find larger jet radii because we incorporate part of the hot jet sheath region within our definition of $R_{\rm jet}$ \citep[as suggested by Fig.~7 in][]{chatterjee2019}. For the Sgr~A$^*${} model, we find a similar $R_{\rm jet}$ profile. There are no detections of an extended jet in Sgr~A$^*${} \citep[e.g.,][]{Issaoun_2019}, though semi-analytical and GRMHD models largely favor a jet component from a spinning BH \citep[e.g.,][]{Markoff:07,EHT_SgrA_2022_PaperV}. We also show the Lorentz factor $\gamma$ in Fig.~\ref{fig:jet_profiles}. The jets accelerate to $\gamma\approx3-4$ by $10^3\,r_{\rm g}$ in all of our GRMHD models. It is more difficult to compare our $\gamma$ profiles with values inferred from observations of the M87 jet \citep[e.g.,][]{mertens2016}. This is because our $\gamma$ values are biased towards the jet spine, while the observations generally capture the velocities of localized features in the sub-relativistic jet sheath/disk wind, especially at small distances from the BH. Indeed, both simulations and observations show that the jet Lorentz factor varies greatly as a function of jet radius \citep[e.g., see][]{chatterjee2019}. We speculate that a better approach might be to calculate emissivity-weighted Lorentz factors in order to compare to the measured $\gamma$ from M87.
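The jet half-width and shell-averaged Lorentz factor definitions above can be sketched on a single radial shell as follows. This is a minimal illustration assuming 2D $(\theta,\varphi)$ arrays of $\mu$, $\sqrt{-g}$, $g_{rr}$, the lapse $\alpha$ and $u^t$ on a uniform angular grid; names and array layout are our own.

```python
import numpy as np

def jet_profile(mu, gdet, grr, lapse, ut, dtheta, dphi, mu_cut=2.0):
    """Jet half-width R_jet and mean Lorentz factor gamma on one
    radial shell, with the jet defined as the region mu > mu_cut.
    gdet is sqrt(-g), so sqrt(-g/g_rr) = gdet/sqrt(grr)."""
    jet = mu > mu_cut                           # boolean indicator (mu > 2)
    dOmega = dtheta * dphi
    # cross-sectional area of the mu > 2 region, then equivalent radius
    area = np.sum(jet * gdet / np.sqrt(grr)) * dOmega
    R_jet = np.sqrt(area / (2.0 * np.pi))
    # sqrt(-g)-weighted average of alpha*u^t over the jet region
    gamma = np.sum(jet * lapse * ut * gdet) / np.sum(jet * gdet)
    return R_jet, gamma
```

Looping this over shells (and guarding against empty $\mu>2$ regions at large radii) yields the profiles shown in Fig.~\ref{fig:jet_profiles}.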
Since our focus is on the comparison between GRMHD simulations, we leave direct comparisons to data to future work. \subsection{Axisymmetrized profiles} \begin{figure} \includegraphics[width=\textwidth]{figures/m87_2D.png} \caption{We show time- and $\varphi$-averaged data: electron number density $n_{\rm e}$ (top row) and temperature $T_{\rm e}$ (bottom row). We also denote the jet boundary with $\sigma=1$ (black lines). The time-averaging is done over $5,000\,r_{\rm g}/c$ for each model. RIAF plots are for Sgr A$^*$ while the rest are for M87$^*$. The Sgr~A$^*${} GRMHD model produces similar plots of $n_{\rm e}$ and $T_{\rm e}$ as the M87$^*${} model, and hence, we do not show it here.} \label{fig:2D_profiles} \end{figure} In the previous sections, we found that the largest differences between the GRMHD models occur in the electron temperature distribution. Figure~\ref{fig:2D_profiles} shows the time- and azimuthally-averaged 2D vertical plots of electron number density $n_{\rm e}$ and electron temperature $T_{\rm e}$. We show the normalized $n_{\rm e}$ so as to capture the relative change in the disk/wind density distribution, which provides information about the disk structure. The large difference in disk scale height is immediately apparent between the RIAF and the MAD GRMHD models (also see Fig.~\ref{fig:disk_profiles_2}). The presence of a prominent wide jet component in MADs squeezes the inner disk and pushes against the accretion flow, a feature which is not captured in the constant $h/r$ RIAF model. However, the RIAF model does roughly reproduce the density profile of the disk midplane region. This means that the RIAF model could well represent non/weakly-jetted, quasi-spherical accretion flows. For sources like M87$^*${}, where we see a prominent jet component, the density gradient in the vertical direction is expected to be steeper, as strong magnetic stresses power disk winds that carry away gas from the disk \citep[e.g.,][]{Chatterjee:2022}.
Overall, the disk/wind density distributions among the GRMHD models look similar, with small differences in the lateral extension of the wind region and the steepness of the vertical density gradient. For example, comparing the 2t-GRRMHD model with the other two simulations, the density in the wind region is larger in the radiative model. The reason for the shallow vertical density profile in the 2t-GRRMHD model is unclear, since weakly magnetized thick disk simulations tell us that radiative cooling would lead to the loss of gas pressure in the disk and would result in the disk collapsing to a relatively dense structure in the midplane \citep[e.g.,][]{fm09,Yoon:2020}. However, in the presence of strong poloidal magnetic fields, i.e., in the MAD state, the plasma-$\beta$ decreases to $\beta \approx 0.2-1$ in the disk midplane (see Fig.~\ref{fig:disk_profiles_1}, third row, left panel), and can reach even lower values in the upper layers of the accretion flow. The high magnetic pressure could help support the disk against collapse, while sufficiently strong magnetic stresses could power disk winds. Such behavior is also seen in recent GRMHD simulations of near-Eddington, geometrically thin, strongly magnetized disks, where the inner disk (or corona) has a larger $h/r$ than the outer disk due to magnetic pressure support \citep{Liska:2022}. To verify how radiative cooling affects the inner disk/wind structure in highly sub-Eddington accretion flows like M87$^*${} and Sgr~A$^*${}, we require longer 2t-GRRMHD simulations such that the disk is in inflow-outflow equilibrium out to a radius of at least $50\,r_{\rm g}$. The 2D temperature plot of the RIAF model also looks vastly different in the inner disk ($r\lesssim20\,r_{\rm g}$) when compared to the GRMHD and 2t-GRMHD simulations, but is similar to the temperature distribution in the 2t-GRRMHD disk midplane (also seen in the $T_{\rm e}$ plot of Fig.~\ref{fig:disk_profiles_1}).
The RIAF model does not capture gas heating in the jet sheath region (the region just outside of the jet boundary indicated by the $\sigma=1$ dashed line), and therefore $T_{\rm e}$ drops as we move away from the midplane towards the poles. In the GRMHD models, the jet sheath is as hot as, if not hotter than, the inner accretion flow, with temperatures reaching $T_{\rm e}>10^{11}$~K. For the GRMHD simulation, the electron temperature is given as a fraction of the fluid temperature, where the fraction depends on how magnetized the gas is in the region, as per the $R$-$\beta$ prescription from eqn.~\ref{eq:Rbeta}. For the M87$^*${} model, we chose an $R_{\rm high}$ value of 160 to obtain a jet-dominated sub-mm image. This choice of $R_{\rm high}$ suppresses the electron temperature in the disk, concentrating the higher temperatures in the jet sheath. Comparing the GRMHD model with the 2t-GRMHD model, the jet sheath region exhibits very similar $T_{\rm e}$ values, but the disk midplane is hotter by a factor of a few in the 2t-GRMHD model. We note that this difference in $T_{\rm e}$ in the midplane is more noticeable in the 2D plot than in the disk-averaged $T_{\rm e}$ profile shown in Fig.~\ref{fig:disk_profiles_1}, as the upper layers of the disk become substantially hotter in the GRMHD model. For the radiative 2t-GRRMHD model, the inner regions of the disk are cooler, as electrons heated by magnetic reconnection quickly cool via synchrotron and Compton losses. From Fig.~\ref{fig:disk_profiles_1}, the drop in $T_{\rm e}$ for the 2t-GRRMHD model can be as large as an order of magnitude when compared to the (2t-)GRMHD models. Another interesting feature is that the hot region ($T_{\rm e}>10^{11}$~K) in the jet sheath is much narrower in the 2t-GRRMHD model, which could have a significant bearing on the ray-traced image, possibly producing a thinner jet sheath.
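The dependence of the proton-to-electron temperature ratio on magnetization can be sketched with the commonly used functional form of the $R$-$\beta$ parameterization (the exact form we use is given by eqn.~\ref{eq:Rbeta} in Sec.~\ref{sec:two-temp}); in this sketch $R_{\rm low}=1$ is an assumed default, while $R_{\rm high}=160$ matches the value quoted above for M87$^*${}.

```python
def temperature_ratio(beta, R_low=1.0, R_high=160.0):
    """Proton-to-electron temperature ratio R = T_p/T_e as a function
    of plasma beta in the common R-beta form: strongly magnetized
    regions (beta << 1) approach R_low, while the weakly magnetized
    disk midplane (beta >> 1) approaches R_high."""
    b2 = beta ** 2
    return (R_low + R_high * b2) / (1.0 + b2)
```

With $R_{\rm high}=160$, disk gas at $\beta\sim10$ has $T_{\rm e}$ suppressed by roughly two orders of magnitude relative to $T_{\rm p}$, while the low-$\beta$ jet sheath retains hot electrons, as described above.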
Finally, the difference in $T_{\rm e}$ in the jet body between the GRMHD models is due to the different density/internal energy floor setups used by the corresponding codes. Since the gas in the jet sheath and the jet body undergoes mixing due to boundary instabilities \citep[e.g.,][]{chatterjee2019, Wong:2021}, it is possible that the choice of floors could affect the overall electron temperature in the jet sheath. Such a study is outside the scope of our paper and is left to future work. \begin{figure} \includegraphics[width=\textwidth]{figures/hotspot_ne_2D.png} \caption{We show the $\varphi$-averaged hotspot electron number density as a function of radius and time. The hotspot falls into the BH and gets sheared over time.} \label{fig:hotspot} \end{figure} \subsection{Orbiting hotspot in a RIAF model} High-energy flares are commonly observed in AGNs, with GeV and TeV flares seen in M87$^*${} \citep[e.g.,][]{Aharonian:2006, Acciari:2010} and quasi-daily near-infrared (NIR) and X-ray flares in Sgr~A$^*${} \citep[e.g.,][]{Baganoff:01,Eckart:2006:flare_activity,Hornstein_2007,Nowak_2012,Neilsen_2013,Witzel_2018,Do_2019,Haggard_2019}. A number of attempts have been made to explain the origin of the flaring, such as magnetic reconnection in turbulent gas flows in the disk and the jet \citep{Dodds-Eden_2010, Dibi_2014, chatterjee2019,Nathanail_2020,Chatterjee:2021} and magnetic flux eruptions \citep{Dexter:2020:NIR_MAD,Porth:2020:NIR_MAD,Scepi:2022,Ripperda2022}. For Sgr~A$^*${}, semi-analytical models found that high-energy electrons, assumed to be accelerated via an ad-hoc process such as a large magnetic reconnection event or shocks, are required to describe the large flaring events \citep{Markoff_2001_sgra, Dibi_2016, Gutierrez_2020}. Near-infrared observations from the GRAVITY instrument provided further evidence for orbiting hotspot features in the accretion flow \citep{Gravity:20:orbit} that may be linked to acceleration events.
It has also been recently shown that orbiting hotspots can be used to model double-peaked X-ray flares \citep{Haggard_2019, Ball_2021} and prominent Stokes Q-U loops in the sub-mm emission of Sgr~A$^*${} \citep{Wielgus:2022}. These results give us considerable motivation to test the capability of the ngEHT to detect hotspot formation in accretion flows around black holes. Instead of isolating a particular magnetic flux eruption event in our simulations, we added a shearing hotspot to the RIAF solution as detailed in Sec.~\ref{sec:RIAF}. Figure~\ref{fig:hotspot} shows the temporal evolution of the azimuthally-averaged electron number density of the hotspot. We begin with a Gaussian distribution of gas that undergoes shearing as the gas falls in closer to the BH. The overall density normalization is much lower than that in the RIAF disk, since the optically thin hotspot gas produces a sufficiently large non-thermal synchrotron emissivity. The hotspot is evolved over $800\,r_{\rm g}/c$, but the gas distribution reaches a near-steady-state profile within the first $200\,r_{\rm g}/c$, which is roughly one hour for Sgr~A$^*${}. The shearing of the hotspot gas has a significant impact on the evolution of the 230 GHz image \citep{Tiede2020, Roelofs_ngEHT}. From Fig.~\ref{fig:disk_profiles_2} (right column), we see that the hotspot radial velocity matches the disk-averaged gas velocity from the GRMHD model, showing nearly free-fall speeds, while the azimuthal velocity becomes highly sub-Keplerian. The velocity profiles show that our hotspot model should be able to reproduce the expected hotspot motion from the GRMHD models, and is ideal for investigating multiwavelength flare lightcurves. A companion paper \citep{Emami:2022_hotspot} goes into further detail about how current dynamical reconstruction techniques can be used to trace out the motion and morphology of the shearing hotspot in the context of ngEHT observations.
These hotspot models and reconstruction methods will be integral in deciphering the more complex gas dynamics of magnetic flux eruption events in MADs, which have been shown to produce significant variation in image structure at 230 GHz \citep[e.g.,][]{Gelles:2022}. \section{Conclusions} \label{sec:conclusions} In this work, we have compared a series of numerical solutions of increasing complexity, going from a time-independent radiatively-inefficient accretion flow (RIAF) model to fully 3D GRMHD simulations of accreting black holes that incorporate the effects of electron heating and cooling losses via two-temperature and radiation physics. In addition, each of our simulations is run with a different GRMHD code, similar to the approach of another community-wide code comparison effort \citep{Porth:19}. We found that the simulations exhibit remarkably similar properties given that they incorporate varying levels of complexity in electron physics. The notable exception is the electron temperature, where radiative cooling decreases the temperature by a factor of $\lesssim 10$ within the inner 10 gravitational radii, the region that produces the bulk of the 230 GHz emission in M87$^*${}, one of the two primary targets of the EHT and the ngEHT (the other being Sgr~A$^*${}). The main goal of this work is to understand the variation in the underlying accretion flow and jet properties in our models, since synthetic ray-traced images constructed from these models are used as ``truth'' images for the ngEHT Analysis Challenges \citep{Roelofs_ngEHT}. The ngEHT Analysis Challenges are an effort to determine how much information about the accretion flow and jet dynamics we can glean from the proposed ngEHT reference array, and what modifications to the image reconstruction tools are required to decode future ngEHT observational data.
Our paper deals with numerical models designed to investigate hotspot evolution, turbulent inspiralling gas flows and extended powerful jets, targeting M87$^*${} and Sgr~A$^*${}. We restricted our model set to the community-standard setup: a rotating, geometrically-thick, optically-thin torus of magnetized gas around a spinning black hole, which is the fiducial model choice of the EHT \citep{EHT_M87_2019_PaperV, EHT_M87_2019_PaperVIII, EHT_SgrA_2022_PaperV}. This model choice leaves out exploration of multiple new setups of black hole accretion, such as quasi-spherical wind-fed inflows \citep[e.g.,][]{Ressler:2020:sgra_MAD, Lalakos:2022}, strongly-wind-fed accretion \citep[e.g.,][]{Cruz-Osorio:2017,Kaaz:2022}, geometrically-thin accretion disks \citep[e.g.,][]{Avara:2016, Liska:2022, MishraB:2022}, puffy radiation-dominated super-Eddington disks \citep[e.g.,][]{Sadowski:2016, Curd:2019} and misaligned accretion disks \citep[e.g.,][]{fragile07, liska_tilt_2018, White_2019, Chatterjee_2020}. Apart from varying the accretion mode, the high resolution of the images from the EHT and ngEHT could potentially help distinguish between different space-time metrics \citep{EHT_Sgra_paper_VI}. So far, only a limited number of non-Kerr GRMHD simulations have been performed \citep[e.g.,][]{Mizuno:2018, Olivares:2020, Nampalliwar:2022PhRvD.106f3009N}. The future of numerical studies is bright, given their rising popularity in the astrophysics community and the increase in computational resources. The breadth of the current investigations in accretion physics will result in a plethora of variable structures that should be thoroughly studied, keeping the observational capabilities of the ngEHT in mind. \vspace{6pt} \funding{We thank the National Science Foundation (AST-1716536, AST-1935980 and AST-2034306) and the Gordon and Betty Moore Foundation (GBMF-10423) for financial support of this work.
This work was supported in part by the Black Hole Initiative, which is funded by grants from the John Templeton Foundation (JTF-61497) and the Gordon and Betty Moore Foundation (GBMF-8273) to Harvard University. K.C. is also supported in part by the Black Hole PIRE program (NSF grant OISE-1743747). R.E. acknowledges generous support from the Institute for Theory and Computation at the Center for Astrophysics as well as grants 21-atp21-0077, NSF AST-1816420 and HST-GO-16173.001-A. H.M. received financial support for this research from the International Max Planck Research School (IMPRS) for Astronomy and Astrophysics at the Universities of Bonn and Cologne. This research is supported by the DFG research grant ``Jet physics on horizon scales and beyond" (Grant No. FR 4069/2-1), the ERC synergy grant ``BlackHoleCam: Imaging the Event Horizon of Black Holes" (Grant No. 610058) and the ERC advanced grant ``JETSET: Launching, propagation and emission of relativistic jets from binary mergers and across mass scales" (Grant No. 884631). J.K. acknowledges funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy – EXC 2094 – 390783311. } \dataavailability{Software: \texttt{BHAC}{} \citep{porth17}, \texttt{BHOSS}{} \citep{Younsi:19_polarizedbhoss}, \texttt{H-AMR}{} \citep[][]{liska_hamr2020}, \texttt{IPOLE}{} \citep{Moscibrodzka:2018}, \texttt{KORAL}{} \citep{Sadowski:13_koral}.} \acknowledgments{This research was enabled by support provided by grant no.
NSF PHY-1125915 along with the INCITE and ASCR Leadership Computing Challenge (ALCC) programs under award PHY129, using resources from the Oak Ridge Leadership Computing Facility's Summit supercomputer, which is a US Department of Energy Office of Science User Facility supported under contract DE-AC05-00OR22725, as well as Calcul Quebec (http://www.calculquebec.ca) and Compute Canada (http://www.computecanada.ca).} \conflictsofinterest{The authors declare no conflict of interest.} \begin{adjustwidth}{-\extralength}{0cm} \reftitle{References} \section{Introduction} With the advent of the Event Horizon Telescope \citep[EHT;][]{EHT_M87_2019_PaperI, EHT_SgrA_2022_PaperI}, imaging the near-horizon structure of supermassive black holes (BHs) is now a reality. The primary targets of the EHT and the future next-generation EHT (or ngEHT\footnote{\href{https://www.ngeht.org/}{https://www.ngeht.org/}}) are M87$^*${} \citep[the supermassive BH in the elliptical galaxy M87;][]{EHT_M87_2019_PaperI} and Sgr~A$^*${} in the Galactic Center \citep{EHT_SgrA_2022_PaperI}, two of the most well-studied low-luminosity active galactic nuclei. Extracting information about the event horizon-scale accretion flows in these two sources using the EHT's enormous resolving power is an active area of research. With the ngEHT, we will achieve unprecedented levels of angular resolution and sensitivity to low-flux regions, with the dynamic range in flux expected to increase to $\sim$1000 compared to the EHT's current dynamic range of $\sim$10 \citep[e.g.,][]{Doeleman2019}. This would enable us to investigate the BH shadow shape with higher precision as well as provide a crucial connection between the accretion flow and the jet launching region. The expected advances in sensitivity require deeper investigations of feature extraction from simulated synthetic reconstructions of BH systems.
Hence, we designed the ngEHT analysis challenges\footnote{\href{https://challenge.ngeht.org/}{https://challenge.ngeht.org/}} \citep{Roelofs_ngEHT} to test our ability to capture the complex dynamics of gas and magnetic fields around M87$^*${} and Sgr~A$^*${} using the ngEHT reference array \citep[e.g.,][]{Raymond:2021} with various analysis methods. Black hole accretion and jet physics have been intensively studied over the past few decades \citep[e.g.,][]{Shakura:73,ree82,Narayan:95a,quataert:2000:CDAF,narayan03,mckinney06, kom07, tch11, narayanSANE2012, tch16}. In the context of M87$^*${} and Sgr~A$^*${}, we expect the accretion flow to be highly sub-Eddington, radiatively inefficient and geometrically thick; such solutions are popularly known as radiatively-inefficient accretion flows (RIAFs). The RIAF solution has been used to successfully model the multiwavelength spectrum of Sgr~A$^*${} \citep[e.g.,][]{Yuan:03}. On the other hand, semi-analytical models of jets are preferred to explain the spectrum of M87$^*${} \citep[e.g.,][]{lucchini19:M87}. Thus, these two sources already provide a means to probe two different components of BH accretion, namely, the inner accretion flow structure and turbulence in Sgr~A$^*${} and the prominent jet feature in M87$^*${}. The first three EHT numerical simulation papers \citep{EHT_M87_2019_PaperV,EHT_M87_2019_PaperVIII,EHT_SgrA_2022_PaperV} already give us important clues about the horizon-scale conditions of these BH systems, based on numerical simulations: (1) these BHs probably have non-zero spin, (2) the accretion disk is expected to have colder electrons than the jet sheath, and (3) the observations favor the presence of dynamically-important magnetic fields close to the BH.
All of these results point us towards the magnetically arrested disk \citep[MAD;][]{igu03,narayan03} state, an accretion mode in which the BH magnetosphere becomes over-saturated with magnetic flux and exhibits quasi-periodic eruptions of vertical magnetic flux bundles. MAD flows also have powerful relativistic jets, where the jet power can exceed the input accretion power \citep{tch11}, a definite signature of BH spin energy extraction via the \citet{bz77} process. Building on the semi-analytical RIAF models, time-dependent general relativistic magneto-hydrodynamic (GRMHD) simulations have become important tools for deciphering BH accretion physics in a variety of astrophysical systems \citep[e.g.,][]{gam03, mckinney06,fragile07, tch11,Chael_2019, Porth:19, Narayan2022}. Indeed, the EHT regularly makes use of large libraries of GRMHD simulations to model the observed horizon-scale BH images of M87$^*${} and Sgr~A$^*${} as well as larger-scale jet images \citep[such as for Centaurus A;][]{Janssen:2021} in order to constrain the time-variable plasma properties. In designing the ngEHT reference array, it is therefore crucial to use GRMHD simulations to understand the attainability of specific science goals, such as resolving the photon ring and the disk-jet connection region, as well as to trace out time-variable features via the ngEHT analysis challenges. In this work, we discuss the numerical fluid simulations that were used as source models for the ngEHT analysis challenges. In particular, our objective is to compare models that incorporate increasingly complicated levels of accretion and electron physics, focusing on M87$^*${} and Sgr~A$^*${}. Our model set consists of a time-dependent shearing hotspot stitched to a steady-state RIAF solution, two standard GRMHD simulations of MAD accretion flows, a GRMHD MAD simulation with electron heating via two-temperature physics, and a fully radiative, two-temperature GRMHD MAD simulation.
We describe the equations and setup of our numerical models in Sec.~\ref{sec:sims}, show our comparison results in Sec.~\ref{sec:results} and finally conclude in Sec.~\ref{sec:conclusions}. \section{Numerical simulations} \label{sec:sims} In this section we provide a brief description of the semi-analytical stationary RIAF and shearing hotspot model as well as the (two-temperature/radiative) GRMHD simulations used for the ngEHT analysis challenges. \subsection{RIAF+hotspot solutions} \label{sec:RIAF} RIAF models attempt to describe the time- and azimuthally-averaged appearance of accretion flows. The specific RIAF models used in the challenges are based on the approach of \citet{Yuan_2003, broderick_2006, Broderick_2010}: we decompose the accretion flow into a set of phenomenological building blocks that describe the electron density, temperature, magnetic field, and velocity profile. The electron density profile is defined in terms of the cylindrical radius $R_{\rm cyl} = r|\sin(\theta)|$ and vertical displacement $z = r\cos(\theta)$, and is given by \begin{equation} n_{e,{\rm X}}(R_{\rm cyl}, z) = n_{e,{\rm X},0}\, R_{\rm cyl}^{p_{\rm X}}\exp\left(-\frac{z^2}{2h^2R_{\rm cyl}^2}\right), \end{equation} where $X$ denotes the electron population. The disk height is controlled by $h$, which we set to unity in this work. For the challenge dataset we included both thermal ($X\equiv{\rm th}$) and non-thermal ($X\equiv{\rm nth}$) synchrotron-emitting electrons. The thermal electrons have $n_{e,{\rm th}, 0}=1.3 \times 10^8 $ and $p_{\rm th} = -1.1$, while the non-thermal electrons have $n_{e, {\rm nth}, 0} = 1.3\times 10^5$ and $p_{\rm nth} =-2.02$. 
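As a concrete illustration, the density profile above can be evaluated directly. The following Python sketch is ours (function and variable names are not from any challenge code, and we assume the normalizations are in cgs units with lengths in units of $r_{\rm g}$):

```python
import numpy as np

# Parameters quoted in the text for the two electron populations
# (normalizations assumed to be in cm^-3; radii in units of r_g).
RIAF_POPULATIONS = {
    "thermal":    {"n0": 1.3e8, "p": -1.1},
    "nonthermal": {"n0": 1.3e5, "p": -2.02},
}

def riaf_density(r, theta, n0, p, h=1.0):
    """n_e = n0 * R_cyl^p * exp(-z^2 / (2 h^2 R_cyl^2))."""
    R_cyl = r * np.abs(np.sin(theta))
    z = r * np.cos(theta)
    return n0 * R_cyl**p * np.exp(-(z * z) / (2.0 * h * h * R_cyl * R_cyl))

# In the midplane (theta = pi/2) the Gaussian envelope is unity,
# leaving the pure power law n0 * r^p.
n_mid = riaf_density(10.0, np.pi / 2.0, **RIAF_POPULATIONS["thermal"])
```

The Gaussian envelope suppresses the density away from the midplane, so the same functional form serves both the dense thermal disk and the more extended non-thermal population.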
The temperature profile of the thermal electrons is also given by a radial power-law with a Gaussian envelope describing the height: \begin{equation}\label{eq:Te_riaf} T_{e}(t,r,\theta, \varphi) = T_{e,0}R_{\rm cyl}^{-0.84}\exp\left(-\frac{z^2}{2h^2R_{\rm cyl}^2}\right), \end{equation} where for the challenge data we set $T_{e,0} = 6.3\times 10^{10}$~K. To define the gas pressure, we assume equipartition between the electron and proton energy and that the protons are in roughly hydrostatic equilibrium, which gives us: \begin{equation}\label{eq:riaf_gas} p_{\rm gas}(x^\mu) = \frac{m_p n_{e,th}(x^\mu)}{6}\frac{M}{r}. \end{equation} For the local magnetic field, we then assume a constant $\beta = p_{\rm gas}/p_{\rm mag}$ plasma, which combined with eqn.~\ref{eq:riaf_gas} gives us the magnetic field strength. The orientation of the magnetic field is then given by a purely toroidal configuration, relative to the plasma observer. Finally, we take the emission and absorption coefficients from the synchrotron self-absorption model in \citet{broderickblandford04} for both the thermal and non-thermal synchrotron electrons. For the non-thermal electrons we use a power-law prescription with radiation coefficients from \citet{jonesodell} and a power-law spectral index of 1.25. These numbers are chosen to match the best-fit parameters for the Sgr~A$^*${} spectrum from \citet{Broderick:2016}. We follow the velocity field prescription from \citet{Pu2016} to describe the accretion flow dynamics. Using the notation from \citet{Tiede2020}, the velocity field $u^\mu = u^t v^\mu$ is given by \begin{equation}\label{eq:hotspot_velo} \begin{aligned} v^r &= v^r_K + (1-\alpha)(v^r_{ff} - v^r_K) \\ v^\theta &= 0 \\ v^\varphi &= v^\varphi_K + (1-\kappa)(v^\varphi_{ff} - v^\varphi_K), \end{aligned} \end{equation} where $v^\mu_K$ denotes the Keplerian velocity field, $v^\mu_{ff}$ is the free-fall velocity field, and the parameters $\alpha$ and $\kappa$ control the radial and azimuthal mixing between the two. 
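The interpolation in eqn.~\ref{eq:hotspot_velo} is straightforward to sketch numerically. In the Python fragment below the Keplerian and free-fall values are Newtonian placeholders purely for illustration (the actual model evaluates them in the Kerr metric), and all names are ours:

```python
import numpy as np

def mix_velocity(v_K, v_ff, w):
    """Two-parameter mixing of Pu et al. (2016): v = v_K + (1 - w)(v_ff - v_K).
    Use w = alpha for the radial component and w = kappa for the azimuthal one;
    w = 1 recovers the Keplerian value and w = 0 pure free-fall."""
    return v_K + (1.0 - w) * (v_ff - v_K)

# Newtonian placeholder values at r = 10 (in units of r_g), illustration only:
r = 10.0
v_r = mix_velocity(0.0, -np.sqrt(2.0 / r), w=0.9)   # v_K^r = 0 outside the ISCO
v_phi = mix_velocity(r ** -1.5, 0.0, w=0.9)         # Keplerian rotation, non-rotating infall
```

The two limits make the parametrization transparent: $\alpha=\kappa=1$ yields a purely rotating Keplerian disk, while $\alpha=\kappa=0$ yields pure free-fall.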
The remaining component $u^t$ is given by the normalization condition $u^\mu u_\mu = -1$. The radial component outside the innermost stable circular orbit (ISCO) is $v^r_K = 0$ as the disk is in steady-state. However, inside the ISCO, we use plunging geodesics which are specified by matching the angular momentum and energy at the ISCO. The hotspot evolution follows the model from \citet{Tiede2020}, where we assume that the hotspot travels passively along the accretion flow velocity field (eqn.~\ref{eq:hotspot_velo}). This implies that the equation of motion is given by the conservation of particle number (eqn.~\ref{eq:continuity}). For the emission we assume a non-thermal synchrotron hotspot, with an initial Gaussian density profile \begin{equation} n_{e}(x^\mu) = n_0\exp\left(-\frac{\Delta r^\mu \Delta r_\mu + (\Delta r_\mu u^\mu)^2}{2R_s^2}\right), \end{equation} where we have set $n_0 = 6\times 10^6$ and $R_s = 0.5\,r_{\rm g}$, and $\Delta r^\mu$ is the displacement from the hotspot center. \subsection{GRMHD simulations} \label{sec:GRMHD} Over the past two decades, multiple GRMHD codes have been developed and utilized to model black hole accretion and jet launching physics over long dynamical timescales. The wide usage of GRMHD codes is particularly encouraging since it allows cross-verification of the code-specific numerical choices that users must make even when solving the same base set of GRMHD equations. Indeed, recently there was a community-wide effort to benchmark these codes against each other for a standard problem: evolving a weakly magnetized torus of gas around a spinning black hole \citep{Porth:19}. It was found that these codes largely provide similar solutions, though some disk quantities remain unconverged with increasing grid resolution, suggesting that further investigation is required. 
For this work, we employ three different GRMHD codes to probe black hole accretion, increasing the complexity of the equations solved at each step: (1) single fluid GRMHD simulations from the \texttt{H-AMR}{} code \citep{liska_hamr2020}, (2) a two-temperature single fluid GRMHD simulation from the \texttt{BHAC}{} code \citep{porth17}, and (3) a two-temperature radiative GRMHD simulation from the \texttt{KORAL}{} code \citep{Sadowski:13_koral}. First, we describe the set of GRMHD equations \citep[e.g., from][]{Gammie:03, Porth:19}. We have the conservation of particle number and energy-momentum: \begin{eqnarray} \label{eq:continuity} \partial_{t}(\sqrt{-g} \rho u^{t}) &=& -\partial_{i}(\sqrt{-g} \rho u^{i}) \\ \partial_{t}(\sqrt{-g} T^t_{\nu}) &=& -\partial_{i}(\sqrt{-g} T^i_{\nu}) +\sqrt{-g} T^{\alpha}_{\beta}\Gamma^{\beta}_{\nu \alpha}. \end{eqnarray} \noindent Here, $\rho$ is the rest-mass gas density and can be written as $\rho=m n$, where $m$ is the mean rest-mass per particle and $n$ is the particle number density. We also have the four-velocity $u^{\mu}$, stress-energy tensor $T^{\mu}_{\nu}$, metric determinant $g\equiv {\rm det}(g_{\mu\nu})$ and the metric connection $\Gamma^{\beta}_{\nu \alpha}$. Note that the index $t$ refers to the temporal component of the vector or tensor and $i$ denotes the spatial indices. The stress-energy tensor $T^{\mu}_{\nu}$ is given as \begin{equation} T^{\mu}_{\nu}=(\rho + U_{\rm gas} + p_{\rm gas} + 2p_{\rm mag})u^{\mu}u_{\nu} + (p_{\rm gas} + p_{\rm mag})\delta^{\mu}_{\nu} - b^{\mu}b_{\nu}, \end{equation} \noindent where $U_{\rm gas}$ and $p_{\rm gas}$ are the gas internal energy and pressure, related by the ideal gas equation: $p_{\rm gas}=(\Gamma_{\rm gas}-1)U_{\rm gas}$. We also have the magnetic pressure $p_{\rm mag}=b^2/2$ and the magnetic field 4-vector $b^{\mu}$, which can be defined in terms of the magnetic field 3-vector $B^i$: \begin{eqnarray} b^t &=& B^i u^{\mu} g_{i\mu} \\ b^i &=& (B^i + b^t u^i)/u^t. 
\end{eqnarray} Here we have absorbed a factor of $\sqrt{4\pi}$ into the definition of $B^i$. We evolve the magnetic field $B^i$ using the spatial components of the induction equation, \begin{equation} \partial_{t}(\sqrt{-g} B^i) = -\partial_{j}(\sqrt{-g} (b^j u^i - b^i u^j)), \end{equation} \noindent while the temporal component provides the no-monopole constraint, \begin{equation} \frac{1}{\sqrt{-g}}\partial_i(\sqrt{-g} B^i) = 0. \end{equation} These equations are numerically integrated in the conservative form \citep{Gammie:03} to get the physically-relevant quantities $\rho$, $U_{\rm gas}$, $u^{\mu}$ and $B^i$. We refer the reader to the corresponding code papers for more information on the numerical techniques used to evolve these equations over space and time. In this work, we use two GRMHD simulations performed with the \texttt{H-AMR}{} code, one targeting M87$^*${} and the other Sgr~A$^*${}. These simulations employ logarithmic Kerr-Schild coordinates and the grid resolutions are $N_r\times N_\theta \times N_\varphi = 580 \times 288 \times 512$ for the M87$^*${} simulation and $348 \times 192 \times 192$ for the Sgr~A$^*${} simulation. All simulations in this work adopt the geometrical unit convention, $GM_{\rm BH}=c=1$, normalizing the length scale to the gravitational radius $r_{\rm g}=GM_{\rm BH}/c^2$. The M87$^*${} GRMHD simulation evolves a MAD flow around a black hole with spin $a=0.9375$. The Sgr~A$^*${} model also simulates a MAD flow but around a black hole with spin $a=1/2$. \texttt{H-AMR}{} uses outflowing radial boundary conditions (BCs), transmissive polar BCs and periodic azimuthal BCs \citep[for more details, see][]{liska_tilt_2018}. Since GRMHD simulations are scale-free, we determine the gas density code-to-CGS units conversion factor (hereafter, ``density scaling'') by raytracing the simulation at 230 GHz for a target source and flux. 
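Schematically, fixing the density scaling is a one-dimensional root find: raytrace, compare to the target flux, and adjust the scale. A minimal sketch, with a toy monotonic flux model standing in for the actual raytracer and all names ours:

```python
import numpy as np

def fit_density_scale(flux_at, target_flux, lo=1e-8, hi=1e8, tol=1e-6):
    """Log-space bisection for the density scaling that reproduces a target flux.
    flux_at(scale) stands in for a full 230 GHz raytracing calculation and is
    assumed to increase monotonically with the density scale."""
    for _ in range(200):
        mid = np.sqrt(lo * hi)          # bisect in log space
        if flux_at(mid) < target_flux:
            lo = mid
        else:
            hi = mid
        if hi / lo < 1.0 + tol:
            break
    return np.sqrt(lo * hi)

# Toy emission model: flux grows as scale^1.5 (optically-thin-like behavior)
scale = fit_density_scale(lambda s: 0.5 * s**1.5, target_flux=2.4)
```

In practice the flux is not exactly a power law in the density scale (synchrotron self-absorption breaks monotonicity only mildly at these accretion rates), but the iterate-and-compare structure is the same.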
We use the general relativistic ray-tracing codes \texttt{BHOSS}{} \citep{Younsi:19_polarizedbhoss} and \texttt{IPOLE}{} \citep{Moscibrodzka:2018} to compute images at 230 GHz and set the compact flux to be approximately 0.5 Jy for M87$^*${} and 2.4 Jy for Sgr~A$^*${}. We take black hole masses and distances of $M_{\rm BH} =6.2\times10^9 M_{\odot}$ and $D_{\rm BH}=16.9$~Mpc for M87$^*${} and $M_{\rm BH}=4.14\times10^6 M_{\odot}$ and $D_{\rm BH}=8.127$~kpc for Sgr~A$^*${}. GRMHD simulations evolve a single-temperature fluid. At the low accretion rates seen in M87$^*${} and Sgr~A$^*${}, Coulomb coupling between ions and electrons is inefficient, and therefore the two particle species are not in thermal equilibrium. Since ions are much heavier than electrons, ions dominate the single fluid thermodynamics evolved in GRMHD simulations. Hence, to calculate the radiative output from GRMHD simulations, we estimate the electron temperature $T_{\rm e}$ using sub-grid models such as the $R$-$\beta$ prescription \citep{Moscibrodzka_2016} based on the local gas plasma-$\beta$ ($\equiv p_{\rm gas}/p_{\rm mag}$): \begin{eqnarray} T_{\rm e} &=& \frac{2 m_{\rm p}U_{\rm gas}}{3k_{\rm B}\rho (2 + R)}, \\ {\rm where,\,\,} R &=& \frac{1+R_{\rm high} \beta^2}{1+\beta^2}. \label{eq:Rbeta} \end{eqnarray} For the ngEHT analysis challenges, we take $R_{\rm high}$ values of 160 and 40 and source inclinations of $163^{\circ}$ and $50^{\circ}$ for the M87$^*${} and Sgr~A$^*${} simulations, respectively. We assume a thermal relativistic Maxwell-J\"uttner distribution for describing the electron energy distribution in the Sgr~A$^*${} model, and a hybrid thermal$+$non-thermal $\kappa$-distribution \citep[e.g.,][]{Xiao:2006, Davelaar:18} for the M87$^*${} model. The model images are shown in \citet{Roelofs_ngEHT}. 
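The $R$-$\beta$ mapping above is easy to evaluate; a minimal Python sketch (constants in cgs, names ours; the explicit $R_{\rm low}$ parameter generalizes the $R\rightarrow1$ low-$\beta$ limit of eqn.~\ref{eq:Rbeta}):

```python
import numpy as np

M_P = 1.6726e-24   # proton mass [g]
K_B = 1.3807e-16   # Boltzmann constant [erg/K]

def r_beta_temperature(u_gas, rho, beta, r_high, r_low=1.0):
    """Electron temperature from the R-beta prescription (Moscibrodzka et al. 2016).
    In the disk (beta >> 1) R -> r_high; in the jet funnel (beta << 1) R -> r_low,
    so raising r_high cools the disk electrons relative to the jet sheath."""
    R = (r_low + r_high * beta**2) / (1.0 + beta**2)
    return 2.0 * M_P * u_gas / (3.0 * K_B * rho * (2.0 + R))
```

The single free parameter $R_{\rm high}$ thus redistributes the fixed gas internal energy between disk and jet electrons without altering the underlying GRMHD solution.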
\subsection{Two-temperature physics} \label{sec:two-temp} Two-temperature GRMHD (2t-GRMHD) simulations \citep[e.g.,][]{ressler_2015, Dexter_2020} evolve the ion and electron entropy equations separately and hence provide the ion and electron temperatures in a self-consistent manner. The main advantage of this method is that we remove the electron temperature as a free parameter when constructing images. However, we do have to make a choice about the sub-grid prescription that determines the fraction of local dissipative energy that heats the electrons. There are two heating mechanisms that are thought to be applicable to global accretion flows: turbulent heating \citep{Howes:2010, Kawazura:2019} and magnetic reconnection \citep{Werner:2018, Rowan:2017}. For the ngEHT analysis challenges, we focus only on one simulation with reconnection heating, taken from \citet{Mizuno_2021}. We assume that the number densities and velocities of ions and electrons are equal, i.e., $n_{\rm i}=n_{\rm e}=n$ and $u^{\mu}_{\rm i}=u^{\mu}_{\rm e}=u^{\mu}$, maintaining charge neutrality. The electron entropy equation is given as \begin{equation} \partial_{\mu} (\sqrt{-g} \rho u^{\mu} s_{\rm e}) = \frac{\sqrt{-g} (\Gamma_{\rm e} - 1)}{\rho^{\Gamma_{\rm e} -1}}f_{\rm e}Q, \end{equation} \noindent where the electron entropy is $s_{\rm e} = p_{\rm e}/\rho^{\Gamma_{\rm e}}$, with $p_{\rm e}$ and $\Gamma_{\rm e}$ as the electron pressure and adiabatic index. The total heating rate $Q$ is calculated by comparing the total internal energy of the gas and the internal energy obtained from the entropy conservation equation \citep[see][for more details]{ressler_2015}. The fraction of dissipative heating that contributes to electron heating is given by $f_{\rm e}$. 
For this particular simulation, $f_{\rm e}$ is designed to capture electron/ion heating via magnetic reconnection from \citet{Rowan:2017}: \begin{equation} f_{\rm e} = \frac{1}{2} \exp \Big[ -\frac{1-\beta/\beta_{\rm max}}{0.8+\sigma_{\rm h}^{0.5}} \Big], \end{equation} \noindent where $\beta_{\rm max} = 1/(4\sigma_{\rm h})$, defined using the hot gas magnetization $\sigma_{\rm h} = b^2/\rho h$ and the specific gas enthalpy $h=1+\Gamma_{\rm g} p_{\rm g}/[\rho(\Gamma_{\rm g}-1)]$. The 2t-GRMHD simulation from \citet{Mizuno_2021} assumes modified Kerr-Schild coordinates and a black hole spin of $0.9375$. The grid resolution is $384 \times 192 \times 192$. The accretion flow is in the magnetically arrested state and the simulation is raytraced (using \texttt{BHOSS}{}) once the near-horizon flow has reached steady state. The target source is M87$^*${}, assuming a black hole mass of $M_{\rm BH}=6.5\times10^9 M_{\odot}$ and distance of 16.9 Mpc. The accretion rate is normalized such that the 230 GHz compact flux density is 0.8 Jy. We assume a thermal electron distribution everywhere except in the jet sheath where we adopt a $\kappa$-distribution. More details about the image are provided in \citet{Roelofs_ngEHT}. \subsection{Radiative GRMHD} \label{sec:GRRMHD} Two-temperature GRMHD simulations do not include radiative cooling and are hence thought to be appropriate for low-luminosity supermassive black holes such as M87$^*${} and Sgr~A$^*${}. To verify this assumption, we consider a two-temperature radiative GRMHD (2t-GRRMHD hereafter) simulation from \citet{Chael_2019}. This simulation accounts for self-consistent radiation physics, incorporating both particle heating via magnetic reconnection (as in Sec.~\ref{sec:two-temp}) and radiative cooling via bremsstrahlung, synchrotron, Compton and Coulomb losses. 
The simulation is run using the 2t-GRRMHD code \texttt{KORAL}{} \citep{Sadowski:13_koral, Sadowski:15_photon,Chael_2018}, which evolves a two-temperature magnetized fluid and treats the radiation field as a second fluid \citep{Sadowski:2017}. The conservation equations solved in 2t-GRRMHD differ from those of GRMHD: \begin{equation} (T^{\mu}_{\nu} + R^{\mu}_{\nu})_{;\mu} =0, \end{equation} \noindent where $R^{\mu}_{\nu}$ is the frequency-integrated radiation stress-energy tensor, defined as \begin{equation} R^{\mu}_{\nu} = \frac{4}{3} \overline{E} u^{\mu}_{\rm R} u_{\nu \rm R} + \frac{1}{3} \overline{E}\delta^{\mu}_{\nu}. \end{equation} \noindent Here, the radiation field is described by its rest-frame energy density $\overline{E}$ and four-velocity $u^{\mu}_{\rm R}$ following the M1 closure scheme. The ion and electron entropy equations are \begin{eqnarray} T_{\rm e} (ns_{\rm e}u^{\mu})_{;\mu} &=& f_{\rm e}q^{\rm v} + q^{\rm C} - G, \\ T_{\rm i} (ns_{\rm i}u^{\mu})_{;\mu} &=& (1-f_{\rm e})q^{\rm v} - q^{\rm C}, \end{eqnarray} \noindent where $q^{\rm v}$ is the dissipative heating rate and $q^{\rm C}$ is the Coulomb coupling rate that captures the exchange of energy between ions and electrons. The heating fraction $f_{\rm e}$ is taken from \citet{Rowan:2017} (see \citealt{Chael_2018} for more details), the same as in the 2t-GRMHD simulation. Finally, $G$ is the radiative cooling rate \citep{Sadowski:13_koral}. For further details about the equations, see \citet{Sadowski:13_koral, Sadowski:2017, Chael_2018}. The simulation assumes a black hole spin of $a=0.9375$ and mass $M_{\rm BH}=6.2\times10^9 M_{\odot}$, targeting M87$^*${}. The gas density is scaled to physical CGS units such that the compact emission at 230 GHz is roughly 0.98 Jy. The simulation uses modified Kerr-Schild coordinates with a grid resolution of $N_r\times N_\theta \times N_\varphi = 288 \times 224 \times 128$. See \citet{Chael_2019} for more details about the simulation. 
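Both two-temperature models above adopt the \citet{Rowan:2017} reconnection heating fraction, which can be sketched as below. We cap $\beta$ at $\beta_{\rm max}$ so that $f_{\rm e}\leq1/2$; this cap, and the function name, are our choices for the illustration:

```python
import numpy as np

def f_e_reconnection(beta, sigma_h):
    """Electron heating fraction of Rowan et al. (2017), as quoted in the text:
    f_e = 0.5 * exp[-(1 - beta/beta_max) / (0.8 + sqrt(sigma_h))],
    with beta_max = 1/(4 sigma_h). Capping beta at beta_max makes f_e rise
    monotonically with beta to its maximum of 1/2."""
    beta_max = 1.0 / (4.0 * sigma_h)
    b = np.minimum(beta, beta_max)
    return 0.5 * np.exp(-(1.0 - b / beta_max) / (0.8 + np.sqrt(sigma_h)))

# Equal ion/electron heating (f_e = 1/2) is approached in the most strongly
# magnetized gas, while weakly magnetized gas heats mostly the ions.
```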
While not utilized for the ngEHT analysis challenges, we included this simulation in this work since this model captures the coupling between gas and radiation, necessary for black holes accreting close to the Eddington limit. Further, this model has been used in previous ngEHT reference array papers \citep{Blackburn:2019_APC, Raymond:2021}. \section{Results} \label{sec:results} We perform a series of comparisons focused on the time-evolution of horizon-scale quantities and the radial dependence of disk and jet properties. The diagnostics are chosen such that any trends we find can inform EHT/ngEHT science applications, such as horizon-scale morphology and variability of the accretion flow. Further, the quantities are similar to those reported in the GRMHD code comparison project \citep{Porth:19} and so can be directly compared. There are a total of five models: three (2t-radiative) GRMHD simulations targeting M87$^*${}, plus one RIAF solution and one GRMHD simulation for Sgr~A$^*${}. We further note that all three numerical simulations of M87$^*${} have the same BH spin, favoring direct comparisons of the horizon-scale gas properties. \subsection{Temporal behavior of horizon fluxes} \label{sec:time} \begin{figure} \includegraphics[width=\columnwidth]{figures/time_data.png} \caption{We show the mass accretion rate $\dot{M}$, dimensionless magnetic flux $\phi\equiv\Phi/\sqrt{\dot{M}}$, the outflow efficiency $P_{\rm out}/\dot{M}c^2=1-\dot{E}/\dot{M}c^2$ and the specific angular momentum flux $\dot{J}/\dot{M}$ over time. Values are calculated at the event horizon.} \label{fig:Horizon_fluxes} \end{figure} We calculate the mass, magnetic, angular momentum and energy fluxes in the radial direction as follows: \begin{eqnarray} {\rm Mass:\,\,}& \dot{M} &=\int^{2\pi}_{0}\int^{\pi}_{0}\, (-\rho \, u^r) \, \sqrt{-g} \, d\theta \, d\varphi\,, \\ {\rm Magnetic:\,\,}& \Phi &=\frac{\sqrt{4\pi}}{2}\int^{2\pi}_{0}\int^{\pi}_{0}\, |B^r| \, \sqrt{-g} \, d\theta \, d\varphi\,, \\ {\rm Ang. 
Mom.:\,\,}& \dot{J} &=\int^{2\pi}_{0}\int^{\pi}_{0}\, T^r_{\varphi} \, \sqrt{-g} \, d\theta \, d\varphi\,, \\ {\rm Energy:\,\,}& \dot{E} &=\int^{2\pi}_{0}\int^{\pi}_{0}\, (T^r_t) \, \sqrt{-g} \, d\theta \, d\varphi\,, \end{eqnarray} \noindent where all quantities are calculated at the event horizon radius $r_{\rm hor}=r_{\rm g} (1+\sqrt{1-a^2})$. We note that there could be a substantial contribution from density floors when calculating the mass accretion rate for MAD systems. However, this radius was chosen for simplicity in comparing to previous simulations in the literature. Figure~\ref{fig:Horizon_fluxes} shows the mass accretion rate $\dot{M}$ in units of solar masses per year ($M_{\odot}$/yr), the dimensionless magnetic flux $\phi=\Phi/\sqrt{\dot{M} r_{\rm g}^2 c }$, the outflow power $P_{\rm out} = \dot{M}c^2-\dot{E}$ and the specific angular momentum flux $\dot{J}/\dot{M}$ for simulations targeting M87$^*${} and Sgr~A$^*${}. The RIAF model, being a steady-state solution, is excluded from this section (though the hotspot evolves with time). Quantities from the 2t-GRRMHD simulation are only shown for $(11-16)\times 10^3\,\,r_{\rm g}/c$, i.e., the time period over which the simulation was raytraced in \citet{Chael_2019}. Remarkably, despite the difference in electron physics complexity, the simulations behave very similarly. The factor of 2 difference in $\dot{M}$ between the M87$^*${} non-radiative simulations and the 2t-GRRMHD simulation can be explained by the lower electron temperatures in the near-horizon accretion flow due to radiative cooling (see Sec.~\ref{sec:disk}) as well as the higher 230 GHz flux normalization used for the radiative model. \begin{table}[H] \caption{Modulation index (MI) of the mass accretion rate $\dot{M}$, dimensionless magnetic flux $\phi$, outflow efficiency $P_{\rm out}/\dot{M}c^2$ and the specific angular momentum flux $\dot{J}/\dot{M}$ for each GRMHD model. 
The quantities are calculated over the final $5000\,\,r_{\rm g}/c$ in runtime and at the event horizon (see Fig.~\ref{fig:Horizon_fluxes}).} \newcolumntype{C}{>{\centering\arraybackslash}X} \begin{tabularx}{\textwidth}{lCCcc} \toprule Model & MI($\dot{M}$) & MI($\phi$) & MI($P_{\rm out}/\dot{M}c^2$) & MI($\dot{J}/\dot{M}$)\\ \midrule M87$^*${} GRMHD & 0.27 & 0.15 & 0.26 & 0.33\\ M87$^*${} 2t-GRMHD & 0.29 & 0.14 & 0.25 & 0.31\\ M87$^*${} 2t-GRRMHD & 0.28 & 0.14 & 0.14 & 0.31\\ Sgr~A$^*${} GRMHD & 0.23 & 0.21 & 0.39 & 0.57\\ \bottomrule \end{tabularx} \label{tab:MI} \end{table} The accretion rate in all simulations shows large variations with quasi-periodic sharp drops. These drops in $\dot{M}$ occur due to the emergence of magnetic flux eruptions, a characteristic feature of the magnetically arrested disk \citep{Porth:2020:NIR_MAD,Begelman2022, Ripperda2022, Chatterjee:2022}. These eruptions also lower the value of $\phi$ since magnetic flux bundles escape from the vicinity of the BH, carrying away the magnetic flux accumulated in the BH magnetosphere. We see that $\phi$ often crosses the magnetic flux saturation value of 50 \citep{tch11}, overwhelming the BH magnetosphere with strong magnetic fields that eventually reconnect and trigger flux eruptions \citep[see][for the detailed mechanism]{Ripperda2022}. As these field line bundles move out and interact with the disk, they (1) hinder accretion, lowering $\dot{M}$, (2) remove magnetic flux from near the BH, lowering the jet power, and (3) push gas outwards, reducing the inward angular momentum flux. Curiously, we see larger drops in the specific angular momentum flux for the Sgr~A$^*${} GRMHD model. This is possibly due to the smaller BH spin ($a=0.5$ as opposed to $0.9375$ for the M87$^*${} models): the weakly powered jet does not carry away angular momentum as efficiently as in the higher BH spin models, and flux eruptions play a bigger role in regulating disk angular momentum transport. 
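The modulation index reported in Table~\ref{tab:MI} reduces to a one-line computation; a sketch (function name ours):

```python
import numpy as np

def modulation_index(x):
    """MI = standard deviation / mean of a (positive) time series."""
    x = np.asarray(x, dtype=float)
    return np.std(x) / np.mean(x)

# A perfectly steady flux has MI = 0; the MAD accretion rates in this work
# give MI ~ 0.2-0.3 over the final 5000 r_g/c of runtime.
```

Note that `np.std` here is the population standard deviation; for the long series considered in this work the distinction from the sample standard deviation is negligible.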
Additionally, the reconnection events that trigger these eruptions accelerate electrons to higher energies, and are thus crucial for understanding flare activity in BH sources. To quantify the time-variability of the horizon fluxes, we calculate the modulation index MI, defined as the ratio of the standard deviation to the mean of the quantity over time \citep{EHT_SgrA_2022_PaperV}. We show MI for the different fluxes in Table~\ref{tab:MI}. The MI($\dot{M}$) is usually a good proxy for the variability of the sub-millimeter emission in these slowly accreting, optically thin black hole sources \citep[e.g.,][]{Chatterjee:2021}. The MI($\dot{M}$) values we see from the simulations are $\sim0.23-0.29$ and are larger than expected from Sgr~A$^*${} 230 GHz lightcurves \citep[where ${\rm MI}\sim 0.1$;][]{Wielgus:2022_SgrALC}. This suggests that careful analysis of the electron distribution function is needed to understand if we are substantially over-predicting the 230 GHz lightcurve variability. Further, in general, weakly-magnetized accretion flows exhibit lower MI($\dot{M}$) values due to the absence of flux eruptions, which suggests that further study of the accretion mode in Sgr~A$^*${} is also necessary. It is encouraging to note that our MI values for $\dot{M}$ and $\phi$ are consistent with the MI values from longer time-evolved GRMHD simulations of $a=0.9$ BHs in \citet{Narayan2022}, indicating that our simulations are sufficiently converged with respect to horizon-scale quantities. \subsection{Disk-averaged quantities} \label{sec:disk} \begin{figure} \includegraphics[width=\columnwidth]{figures/radial_data_1.png} \caption{We show the radial profiles of gas density $\rho$, plasma-$\beta$, proton temperature $T_{\rm p}$, and electron temperature $T_{\rm e}$. 
Quantities are disk-averaged and time-averaged over the raytracing period.} \label{fig:disk_profiles_1} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{figures/radial_data_2.png} \caption{We show the radial profiles of disk scale height $h/r$, radial velocity $|v_r|$, angular velocity $\Omega$ and specific angular momentum $u_{\varphi}$. Quantities are disk-averaged and time-averaged over the raytracing period.} \label{fig:disk_profiles_2} \end{figure} Here we calculate the disk-averaged properties of each model, namely gas density $\rho$, thermal pressure $p_{\rm gas}$, magnetic pressure $p_{\rm mag}$, radial velocity $|v_r|$, azimuthal velocity $|v_{\varphi}|$, angular momentum $u_{\varphi}$, disk scale height $h/r$, ion temperature $T_{\rm i}$ and electron temperature $T_{\rm e}$. We define the disk average of a quantity $q$ as \begin{equation} \langle q \rangle (r,t) = \frac{\int^{2\pi}_{0}\int^{\pi}_{0}\, \rho \, q \, \sqrt{-g} \, d\theta \, d\varphi}{\int^{2\pi}_{0}\int^{\pi}_{0}\, \rho \, \sqrt{-g} \, d\theta \, d\varphi}, \end{equation} where $q \in \{ \rho, p_{\rm gas}, p_{\rm mag}, |v_r|, |v_{\varphi}|, u_{\varphi}, h/r, T_{\rm i}\,[{\rm K}], T_{\rm e}\,[{\rm K}] \}$. Further definitions follow: \begin{eqnarray} p_{\rm gas} &=& (\Gamma_{\rm ad}-1)u, \\ p_{\rm mag} &=& b^{\mu}b_{\mu}/2, \\ |v_i| &=& \sqrt{v^i v^i g_{ii}}, \\ {\rm where,\,\,} v^i &=& u^i/u^t, \nonumber \\ h/r &=& |\theta - \pi/2|, \end{eqnarray} where $\Gamma_{\rm ad}$ and $u$ are the adiabatic index and internal energy of the gas. Figures~\ref{fig:disk_profiles_1} and \ref{fig:disk_profiles_2} show the respective disk-averaged radial profiles for each model, including the Sgr~A$^*${} RIAF solution. The density profiles in the inner few tens of $r_{\rm g}$ converge roughly to an $n_{\rm e}\propto r^{-1}$ profile, matching the RIAF density profile as well as longer-time evolved MAD simulations \citep{Chatterjee:2022}. 
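On a discrete $(\theta,\varphi)$ grid at fixed radius, the density-weighted average above becomes a weighted sum; a minimal sketch (array names ours):

```python
import numpy as np

def disk_average(q, rho, sqrtg, dtheta, dphi):
    """Density-weighted shell average <q>(r) over a (theta, phi) grid.
    q, rho and sqrtg are 2D arrays sampled at one radius."""
    w = rho * sqrtg * dtheta * dphi
    return np.sum(w * q) / np.sum(w)

# Sanity check: a constant field averages to itself for any positive weights.
rho = 1.0 + np.random.rand(64, 128)
sqrtg = np.sin((np.arange(64)[:, None] + 0.5) * np.pi / 64) * np.ones((64, 128))
q_avg = disk_average(np.full((64, 128), 3.0), rho, sqrtg, np.pi / 64, 2 * np.pi / 128)
```

The density weighting concentrates the average on the disk midplane, which is why these profiles characterize the disk rather than the tenuous jet funnel.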
The M87$^*${} 2t-GRRMHD density is larger by a factor of $\approx 2$ than in the GRMHD/2t-GRMHD models, as is expected from the difference in the mass accretion rate (Fig.~\ref{fig:Horizon_fluxes}). The 2t-GRRMHD simulation exhibits a slightly more magnetized inflow within the inner $2\,\,r_{\rm g}$, but overall, the GRMHD simulations have a similar plasma-$\beta\equiv p_{\rm gas}/p_{\rm mag}$ disk profile. The stronger magnetic field seen in the 2t-GRRMHD model could explain the higher values of the horizon magnetic flux seen in Fig.~\ref{fig:Horizon_fluxes}. The RIAF model assumes a constant disk plasma-$\beta=10$ (see Sec.~\ref{sec:RIAF}), which is substantially higher when compared to the MAD GRMHD models. This value of plasma-$\beta$ is chosen in order to match the observed 230 GHz flux density of Sgr~A$^*${}. As we see from the disk scale height in Fig.~\ref{fig:disk_profiles_2}, the RIAF model has a much thicker disk than the GRMHD models, and therefore produces substantially more sub-millimeter (sub-mm) emission even with a low electron temperature and weak magnetic field strength. Next, we see that the disk-averaged electron temperature $T_{\rm e}$ in the 2t-GRRMHD M87$^*${} model is more than an order of magnitude lower than in the other GRMHD models within the inner $10\,\, r_{\rm g}$, but actually matches the Sgr~A$^*${} RIAF $T_{\rm e}$ profile and has a shallower profile, $T_{\rm e} \propto r^{-1}$ instead of $r^{-3/2}$. It is further interesting to note that the disk ion temperature $T_{\rm i}$ is very similar in all the GRMHD simulations shown here. Therefore, despite the same reconnection-driven heating mechanism captured in both the 2t-GRMHD and the 2t-GRRMHD models, radiative cooling of hot electrons plays a crucial role in determining the eventual $T_{\rm e}$. Due to the low $T_{\rm e}$, the required accretion rate normalization is higher in the 2t-GRRMHD model, as we noted in the previous subsection. 
In Fig.~\ref{fig:disk_profiles_2}, we show the average disk scale height $h/r$, the radial and angular velocities ($v_r$ and $\Omega$), and the specific angular momentum $u_{\varphi}$. The MAD simulations all show very similar disk properties. We find $\langle h/r\rangle \approx 0.1-0.3$, with a sharp increase within $3\,\,r_{\rm g}$ where the inflow becomes vertically supported by strong poloidal magnetic fields. The radial velocity has a profile of $r^{-1/2}$, similar to the scaling relation found in ADAF solutions assuming a constant viscosity parameter $\alpha$ \citep{nar94,nar98}. The $\alpha$ parameter profile depends on how magnetized the accretion flow is, with $\alpha\propto r^{-1}$ for weakly-magnetized flows and close to constant for MAD-like flows \citep[e.g.,][]{liska_tor_2019,Chatterjee:2022}. We also see highly sub-Keplerian angular velocity profiles in the GRMHD models, typical for magnetically supported disks. The RIAF disk, in contrast, is not infalling and follows a purely Keplerian angular velocity profile. Instead, the hotspot, added to the RIAF solution, undergoes shearing and disappears into the BH with a radial velocity similar to the values found in the GRMHD MAD disks. This occurs because the hotspot is designed to travel along plunging geodesics (see Sec.~\ref{sec:RIAF}), similar to rapid gas infall close to the BH in the GRMHD models. The angular momentum in the GRMHD models looks sub-Keplerian, as expected for MADs. \subsection{Jet properties} \begin{figure} \includegraphics[width=\columnwidth]{figures/jet_data.png} \caption{We show the jet radius $R_{\rm jet}$ and the jet Lorentz factor $\gamma$ from the M87$^*${} GRMHD and 2t-GRRMHD models, and the Sgr~A$^*${} GRMHD model. The gray circles indicate the deprojected jet radius of the M87 jet assuming a BH mass of $6.2\times10^9M_{\odot}$ and a source inclination of $14^{\circ}$ \citep{Nakamura_2018}. 
The data points are a compilation of various papers \citep{Doeleman2012,asadanak2012,Hada2013,nak2013,Akiyama:2015,Hada2016}.} \label{fig:jet_profiles} \end{figure} Here we calculate the radial profiles of the jet half width $R_{\rm jet}$ and Lorentz factor $\gamma$: \begin{equation} R_{\rm jet} (r,t) = \sqrt\frac{\int^{2\pi}_{0}\int^{\pi}_{0}\, \sqrt{-g/g_{rr}} (\mu>2) \, d\theta \, d\varphi}{2\pi}\,, \label{eq:Rjet} \end{equation} \begin{equation} \gamma (r,t) = \frac{\int^{2\pi}_{0}\int^{\pi}_{0}\, (\mu>2) \, \alpha u^t \, \sqrt{-g} \, d\theta \, d\varphi}{\int^{2\pi}_{0}\int^{\pi}_{0}\, (\mu>2) \, \sqrt{-g} \, d\theta \, d\varphi}, \end{equation} where $\mu=-T^r_t/(\rho u^r)$ is the specific radial energy density and $\alpha=1/\sqrt{-g^{tt}}$ is the lapse function. We define the jet boundary as $\mu>2$, i.e., the region over which the jet still remains highly energized. Note that this definition of the jet radius is quite similar to the standard definition used in the literature \citep[e.g.,][]{Narayan2022} where the jet boundary is taken to be the $\sigma=1$ surface. Since $\mu=\gamma(\sigma+h)$,\footnote{Note that the specific enthalpy includes the rest-mass energy contribution in our definition from Sec.~\ref{sec:two-temp}.} our condition $\mu>2$ also incorporates regions where the jet might not be magnetically-dominated but is relativistically hot or fast. Since we restrict our jet profiles to within $r\lesssim 10^3\,r_{\rm g}$, the radius is primarily determined by the jet magnetization. Figure~\ref{fig:jet_profiles} shows the jet radius $R_{\rm jet}$ and Lorentz factor $\gamma$ as a function of the radial distance from the BH for the M87$^*${} GRMHD and 2t-GRRMHD models as well as the Sgr~A$^*${} GRMHD model. 
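Equation~\ref{eq:Rjet} can be checked against a simple analytic case: in flat space $\sqrt{-g/g_{rr}}=r^2\sin\theta$, and a conical jet of half-opening angle $\theta_j$ has cross-section $2\pi r^2(1-\cos\theta_j)$, so $R_{\rm jet}=r\sqrt{1-\cos\theta_j}$. A Python sketch of the discretized integral (names ours):

```python
import numpy as np

def jet_radius(mu, area_element, dtheta, dphi):
    """Effective jet half-width sqrt(A_jet / (2 pi)) from the mu > 2 cross-section;
    area_element holds sqrt(-g/g_rr) on the (theta, phi) grid."""
    A = np.sum(area_element[mu > 2.0]) * dtheta * dphi
    return np.sqrt(A / (2.0 * np.pi))

# Flat-space check: a cone theta < theta_j at radius r0, where the analytic
# answer is R_jet = r0 * sqrt(1 - cos theta_j).
n_th, n_ph = 4000, 64
theta = (np.arange(n_th) + 0.5) * np.pi / n_th
TH = np.broadcast_to(theta[:, None], (n_th, n_ph))
r0, theta_j = 100.0, 0.2
R_jet = jet_radius(np.where(TH < theta_j, 3.0, 0.0), r0**2 * np.sin(TH),
                   np.pi / n_th, 2.0 * np.pi / n_ph)
```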
The M87$^*${} jet radius from our models matches the observed jet width of M87 (gray circles) quite well, with the radial profile roughly proportional to $r^{0.625}$, which is the fitted power-law for the M87 jet \citep{Nakamura_2018}, though the index value has also been reported to be slightly smaller in some works \citep[0.57;][]{asadanak2012,Nokhrina:2019}. The power-law index of 0.625 is larger than that found using the $\sigma=1$ condition by \citet{Narayan2022}, where the authors found a power-law index of 0.428 for their MAD spin $a=0.9$ GRMHD model. It is possible that we find larger jet radii because we incorporate a part of the hot jet sheath region within our definition of $R_{\rm jet}$ \citep[as suggested by Fig.~7 in][]{chatterjee2019}. For the Sgr~A$^*${} model, we also find a similar $R_{\rm jet}$ profile. There are no detections of an extended jet in Sgr~A$^*${} \citep[e.g.,][]{Issaoun_2019}, though semi-analytical and GRMHD models largely favor a jet component from a spinning BH \citep[e.g.,][]{Markoff:07,EHT_SgrA_2022_PaperV}. We also show the Lorentz factor $\gamma$ in Fig.~\ref{fig:jet_profiles}. The jets accelerate to $\gamma\approx3-4$ by $10^3\,r_{\rm g}$ in all of our GRMHD models. It is more difficult to compare our $\gamma$ profiles with values inferred from observations of the M87 jet \citep[e.g.,][]{mertens2016}. This is because our $\gamma$ values are biased towards the jet spine, while the observations generally capture the velocities of localized features in the sub-relativistic jet sheath/disk wind, especially at small distances from the BH. Indeed, both simulations and observations show that the jet Lorentz factor varies greatly as a function of jet radius \citep[e.g., see][]{chatterjee2019}. We speculate that a better approach might be to calculate emissivity-weighted Lorentz factors in order to compare to the measured $\gamma$ from M87. 
Since our focus is on the comparison between GRMHD simulations, we leave direct comparisons to data to future work. \subsection{Axisymmetrized profiles} \begin{figure} \includegraphics[width=\textwidth]{figures/m87_2D.png} \caption{We show $t$- and $\varphi$-averaged data: electron number density $n_{\rm e}$ (top row) and temperature $T_{\rm e}$ (bottom row). We also denote the jet boundary with $\sigma=1$ (black lines). The time-averaging is done over $5,000\,r_{\rm g}/c$ for each model. RIAF plots are for Sgr A$^*$ while the rest are for M87. The Sgr~A$^*${} GRMHD model produces similar plots of $n_{\rm e}$ and $T_{\rm e}$ as the M87$^*${} model, and hence, we do not show it here.} \label{fig:2D_profiles} \end{figure} In the previous sections, we have found that the largest differences between the GRMHD models occur in the electron temperature distribution. Figure~\ref{fig:2D_profiles} shows the time- and azimuthally-averaged 2D vertical plots of electron number density $n_{\rm e}$ and electron temperature $T_{\rm e}$. We show the normalized $n_{\rm e}$ so as to capture the relative change in the disk/wind density distribution, which provides information about the disk structure. The large difference in disk scale height is immediately apparent between the RIAF and the MAD GRMHD models (also see Fig.~\ref{fig:disk_profiles_2}). The presence of a prominent wide jet component in MADs squeezes the inner disk and pushes against the accretion flow, a feature which is not captured in the constant $h/r$ RIAF model. However, the RIAF model does roughly reproduce the density profile of the disk midplane region. This suggests that the RIAF model could well represent non/weakly-jetted, quasi-spherical accretion flows. For sources like M87$^*${}, where we see a prominent jet component, the density gradient in the vertical direction is expected to be steeper as strong magnetic stresses power disk winds that carry away gas from the disk \citep[e.g.,][]{Chatterjee:2022}. 
Overall, the disk/wind density distributions among the GRMHD models look similar, with small differences in the lateral extension of the wind region and the steepness of the vertical density gradient. For example, comparing the 2t-GRRMHD model with the other two simulations, the density in the wind region is larger in the radiative model. The reason for the shallow vertical density profile in the 2t-GRRMHD model is unclear, since weakly magnetized thick-disk simulations suggest that radiative cooling leads to the loss of gas pressure in the disk and results in the disk collapsing to a relatively dense structure in the midplane \citep[e.g.,][]{fm09,Yoon:2020}. However, in the presence of strong poloidal magnetic fields, i.e., in the MAD state, the plasma-$\beta$ decreases to $\beta \approx 0.2-1$ in the disk midplane (see Fig.~\ref{fig:disk_profiles_1}, third row, left panel), and can reach even lower values in the upper layers of the accretion flow. The high magnetic pressure could help support the disk against collapse while sufficiently strong magnetic stresses could power disk winds. Such behavior is also seen in recent GRMHD simulations of near-Eddington, geometrically thin, strongly magnetized disks, where the inner disk (or corona) has a larger $h/r$ than the outer disk due to magnetic pressure support \citep{Liska:2022}. To verify how radiative cooling affects the inner disk/wind structure in highly sub-Eddington accretion flows like M87$^*${} and Sgr~A$^*${}, we require longer 2t-GRRMHD simulations such that the disk is in inflow-outflow equilibrium out to a radius of at least $50\,r_{\rm g}$. The 2D temperature plot of the RIAF model also looks vastly different in the inner disk ($r\lesssim20\,r_{\rm g}$) when compared to the GRMHD and 2t-GRMHD simulations, but is similar to the temperature distribution in the 2t-GRRMHD disk midplane (also seen in the $T_{\rm e}$ plot of Fig.~\ref{fig:disk_profiles_1}). 
The RIAF model does not capture gas heating in the jet sheath region (the region just outside of the jet boundary indicated by the $\sigma=1$ dashed line) and therefore $T_{\rm e}$ drops as we move away from the midplane towards the poles. In the GRMHD models, the jet sheath is as hot as, if not hotter than, the inner accretion flow, with temperatures reaching $T_{\rm e}>10^{11}$~K. For the GRMHD simulation, the electron temperature is given as a fraction of the fluid temperature, where the fraction depends on how magnetized the gas is in the region, as per the $R$--$\beta$ prescription from eqn.~\ref{eq:Rbeta}. For the M87$^*${} model, we chose an $R_{\rm high}$ value of 160 to obtain a jet-dominated sub-mm image. This choice of $R_{\rm high}$ suppresses the electron temperature in the disk, concentrating higher temperatures in the jet sheath. Comparing the GRMHD model with the 2t-GRMHD model, the jet sheath region exhibits very similar $T_{\rm e}$ values, but the disk midplane is hotter by a factor of a few in the 2t-GRMHD model. We note that this difference in $T_{\rm e}$ in the midplane is more noticeable in the 2D plot than in the disk-averaged $T_{\rm e}$ profile shown in Fig.~\ref{fig:disk_profiles_1}, as the upper layers of the disk become substantially hotter in the GRMHD model. For the radiative 2t-GRRMHD model, the inner regions of the disk are cooler as electrons heated by magnetic reconnection quickly cool via synchrotron and Compton losses. From Fig.~\ref{fig:disk_profiles_1}, the drop in $T_{\rm e}$ for the 2t-GRRMHD model can be as large as an order of magnitude when compared to the (2t-) GRMHD models. Another interesting feature is that the hot region ($T_{\rm e}>10^{11}$~K) in the jet sheath is much narrower in the 2t-GRRMHD model, which could have a significant bearing on the ray-traced image, possibly producing a thinner jet sheath. 
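For reference, the $R$--$\beta$ prescription referred to above maps the local plasma $\beta$ to a proton-to-electron temperature ratio; a sketch of its commonly used functional form (the exact expression we use is eqn.~\ref{eq:Rbeta}; $R_{\rm low}=1$ is an assumed default here):

```python
import numpy as np

def rbeta_ratio(beta, R_high=160.0, R_low=1.0):
    """Proton-to-electron temperature ratio T_p/T_e as a function of
    plasma beta: ~R_low in magnetized regions (jet sheath, beta << 1),
    ~R_high in the weakly magnetized disk (beta >> 1). R_high = 160 is
    the value used for the M87* model; R_low = 1 is an assumed default."""
    b2 = beta**2
    return (R_high * b2 + R_low) / (1.0 + b2)
```

With $R_{\rm high}=160$, electrons in the high-$\beta$ disk midplane end up far colder than the protons, which is what suppresses the disk $T_{\rm e}$ and concentrates the hot electrons in the jet sheath.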
Finally, the difference in $T_{\rm e}$ in the jet body between the GRMHD models is due to the different density/internal energy floor setups used by the corresponding codes. Since gas in the jet sheath and the jet body undergoes mixing due to boundary instabilities \citep[e.g.,][]{chatterjee2019, Wong:2021}, it is possible that the choice of floors could affect the overall electron temperature in the jet sheath. Such a study is outside the scope of our paper and is left to future work. \begin{figure} \includegraphics[width=\textwidth]{figures/hotspot_ne_2D.png} \caption{We show the $\varphi$-averaged hotspot electron number density as a function of radius and time. The hotspot falls into the BH and gets sheared over time.} \label{fig:hotspot} \end{figure} \subsection{Orbiting hotspot in a RIAF model} High-energy flares are commonly observed in AGNs, with TeV flares seen in M87$^*${} \citep[e.g.,][]{Aharonian:2006, Acciari:2010} and quasi-daily nIR and X-ray flares in Sgr~A$^*${} \citep[e.g.,][]{Baganoff:01,Eckart:2006:flare_activity,Hornstein_2007,Nowak_2012,Neilsen_2013,Witzel_2018,Do_2019,Haggard_2019}. A number of attempts have been made to explain the origin of flaring, invoking, e.g., magnetic reconnection in turbulent gas flows in the disk and the jet \citep{Dodds-Eden_2010, Dibi_2014, chatterjee2019,Nathanail_2020,Chatterjee:2021} and magnetic flux eruptions \citep{Dexter:2020:NIR_MAD,Porth:2020:NIR_MAD,Scepi:2022,Ripperda2022}. For Sgr~A$^*${}, semi-analytical models found that high-energy electrons, assumed to be accelerated via an ad-hoc process such as a large magnetic reconnection event or shocks, are required to describe the large flaring events \citep{Markoff_2001_sgra, Dibi_2016, Gutierrez_2020}. Near-infrared observations from the GRAVITY instrument provided further evidence for orbiting hotspot features in the accretion flow \citep{Gravity:20:orbit} that may be linked to acceleration events. 
It has also been recently shown that orbiting hotspots can be used to model double-peaked X-ray flares \citep{Haggard_2019, Ball_2021} and prominent Stokes $Q$-$U$ loops in the sub-mm emission of Sgr~A$^*${} \citep{Wielgus:2022}. These results give us considerable motivation to test the capability of the ngEHT to detect hotspot formation in accretion flows around black holes. Instead of isolating a particular magnetic flux eruption event in our simulations, we added a shearing hotspot to the RIAF solution as detailed in Sec.~\ref{sec:RIAF}. Figure~\ref{fig:hotspot} shows the temporal evolution of the azimuthally-averaged electron number density of the hotspot. We begin with a Gaussian distribution of gas that undergoes shearing as the gas falls in closer to the BH. The overall density normalization is much lower than in the RIAF disk, since the optically thin hotspot gas already produces a sufficiently large non-thermal synchrotron emissivity. The hotspot is evolved over $800\,r_{\rm g}/c$, but the gas distribution reaches a near-steady-state profile within the first $200\,r_{\rm g}/c$, which is roughly one hour for Sgr~A$^*${}. The shearing of the hotspot gas has a significant impact on the evolution of the 230 GHz image \citep{Tiede2020, Roelofs_ngEHT}. From Fig.~\ref{fig:disk_profiles_2} (right column), we see that the radial velocity matches the disk-averaged gas velocity from the GRMHD model, showing nearly free-fall speeds, while the azimuthal velocity becomes highly sub-Keplerian. The velocity profiles show that our hotspot model should be able to reproduce the expected hotspot motion from the GRMHD models, and is ideal for investigating multiwavelength flare lightcurves. A companion paper \citep{Emami:2022_hotspot} goes into further detail about how current dynamical reconstruction techniques can be used to trace out the motion and morphology of the shearing hotspot in the context of ngEHT observations. 
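The near-free-fall radial motion and sub-Keplerian rotation seen here follow directly from the velocity parameterization of the RIAF/hotspot model (Sec.~\ref{sec:RIAF}), which blends Keplerian and free-fall fields with two parameters; a minimal sketch (function and argument names are ours, and the parameter values are illustrative):

```python
def riaf_velocity(vr_K, vphi_K, vr_ff, vphi_ff, alpha=1.0, kappa=0.5):
    """Blend of Keplerian (K) and free-fall (ff) velocity fields:
    alpha controls the radial infall, kappa the degree of sub-Keplerian
    rotation, and v^theta = 0."""
    vr = vr_K + alpha * (vr_ff - vr_K)
    vphi = vphi_K + (1.0 - kappa) * (vphi_ff - vphi_K)
    return vr, 0.0, vphi
```

The limits behave as expected: $\alpha=1$ gives pure free-fall radial motion, while $\kappa=1$ recovers Keplerian rotation.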
These hotspot models and reconstruction methods would be integral in deciphering the more complex gas dynamics of magnetic flux eruption events in MADs, which have been shown to produce significant variation in image structure at 230 GHz \citep[e.g.,][]{Gelles:2022}. \section{Conclusions} \label{sec:conclusions} In this work, we have compared a series of numerical solutions with increasing complexity, going from a time-independent radiatively-inefficient accretion flow (RIAF) model to fully 3D GRMHD simulations of accreting black holes, incorporating the effects of electron heating and cooling losses via two-temperature and radiation physics. In addition, each of our simulations is run with a different GRMHD code, similar to the approach of another community-wide code comparison effort \citep{Porth:19}. We found that the simulations exhibit remarkably similar properties given that they incorporate varying levels of complexity in electron physics. The notable exception is the electron temperature, where radiative cooling decreases the temperature by a factor of $\lesssim 10$ within the inner 10 gravitational radii, the region that produces the bulk of the 230 GHz emission in M87$^*${}, one of the two primary targets of the EHT and the ngEHT (the other being Sgr~A$^*${}). The main goal of this work is to understand the variation in the underlying accretion flow and jet properties in our models, since synthetic ray-traced images constructed from these models are used as ``truth'' images for the ngEHT Analysis Challenges \citep{Roelofs_ngEHT}. The ngEHT Analysis Challenges are an effort to determine how much information about the accretion flow and jet dynamics we can glean from the proposed ngEHT reference array, and what modifications to the image reconstruction tools are required to decode future ngEHT observational data. 
Our paper deals with numerical models designed to investigate hotspot evolution, turbulent inspiralling gas flows and extended powerful jets, targeting M87$^*${} and Sgr~A$^*${}. We restricted our model set to the community-standard setup: a rotating, geometrically-thick, optically-thin torus of magnetized gas around a spinning black hole, which is the fiducial model choice of the EHT \citep{EHT_M87_2019_PaperV, EHT_M87_2019_PaperVIII, EHT_SgrA_2022_PaperV}. This model choice leaves out exploration of multiple new setups of black hole accretion, such as quasi-spherical wind-fed inflows \citep[e.g.,][]{Ressler:2020:sgra_MAD, Lalakos:2022}, strongly-wind-fed accretion \citep[e.g.,][]{Cruz-Osorio:2017,Kaaz:2022}, geometrically-thin accretion disks \citep[e.g.,][]{Avara:2016, Liska:2022, MishraB:2022}, puffy radiation-dominated super-Eddington disks \citep[e.g.,][]{Sadowski:2016, Curd:2019} and misaligned accretion disks \citep[e.g.,][]{fragile07, liska_tilt_2018, White_2019, Chatterjee_2020}. Apart from varying the accretion mode, the high resolution of the images from the EHT and the ngEHT could potentially help distinguish between different space-time metrics \citep{EHT_Sgra_paper_VI}. So far, only a limited number of non-Kerr GRMHD simulations have been performed \citep[e.g.,][]{Mizuno:2018, Olivares:2020, Nampalliwar:2022PhRvD.106f3009N}. The future of numerical studies is bright, given their rising popularity in the astrophysics community and the increase in computational resources. The breadth of the current investigations in accretion physics will result in a plethora of variable structures that should be thoroughly studied keeping the observational capabilities of the ngEHT in mind. \vspace{6pt} \funding{We thank the National Science Foundation (AST-1716536, AST-1935980 and AST-2034306) and the Gordon and Betty Moore Foundation (GBMF-10423) for financial support of this work. 
This work was supported in part by the Black Hole Initiative, which is funded by grants from the John Templeton Foundation (JTF-61497) and the Gordon and Betty Moore Foundation (GBMF-8273) to Harvard University. K.C. is also supported in part by the Black Hole PIRE program (NSF grant OISE-1743747). R.E. acknowledges generous support from the Institute for Theory and Computation at the Center for Astrophysics as well as grant numbers 21-atp21-0077, NSF AST-1816420 and HST-GO-16173.001-A. H.M. received financial support for this research from the International Max Planck Research School (IMPRS) for Astronomy and Astrophysics at the Universities of Bonn and Cologne. This research is supported by the DFG research grant ``Jet physics on horizon scales and beyond'' (Grant No. FR 4069/2-1), the ERC synergy grant ``BlackHoleCam: Imaging the Event Horizon of Black Holes'' (Grant No. 610058) and the ERC advanced grant ``JETSET: Launching, propagation and emission of relativistic jets from binary mergers and across mass scales'' (Grant No. 884631). J.K. acknowledges funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy -- EXC 2094 -- 390783311. } \dataavailability{Software: \texttt{BHAC}{} \citep{porth17}, \texttt{BHOSS}{} \citep{Younsi:19_polarizedbhoss}, \texttt{H-AMR}{} \citep[][]{liska_hamr2020}, \texttt{IPOLE}{} \citep{Moscibrodzka:2018}, \texttt{KORAL}{} \citep{Sadowski:13_koral}} \acknowledgments{This research was enabled by support provided by grant no. 
NSF PHY-1125915 along with INCITE and ASCR Leadership Computing Challenge (ALCC) programs under the award PHY129, using resources from the Oak Ridge Leadership Computing Facility, Summit, which is a US Department of Energy Office of Science User Facility supported under contract DE-AC05-00OR22725, as well as Calcul Quebec (http://www.calculquebec.ca) and Compute Canada (http://www.computecanada.ca).} \conflictsofinterest{The authors declare no conflict of interest.} \begin{adjustwidth}{-\extralength}{0cm} \reftitle{References} \section{Introduction} \label{sec:intro} With the advent of the Event Horizon Telescope (EHT; \citealt{EHT_M87_2019_PaperI}), imaging the near-horizon structure of supermassive black holes (BHs) is now a reality. \section{Numerical methods} We employ a RIAF solution and a number of different GRMHD codes for the ngEHT challenges. \subsection{RIAF+hotspot solutions} RIAF models attempt to describe the time- and azimuthally-averaged appearance of accretion flows. This is done with a set of phenomenological building blocks. The specific RIAF models used in the challenges are based on the approach of \citet{Yuan_2003, broderick_2006, Broderick_2010}. 
To describe an accretion flow, we decompose it into a set of phenomenological models for the electron density, temperature, magnetic field, and velocity profile. To define the electron density, we first define the cylindrical radius $R_{\rm cyl} = r|\sin(\theta)|$ and vertical displacement $z = r\cos(\theta)$. The electron-density radial profile is then given by \begin{equation} n_{e,\rm{X}}(R_{\rm cyl}, z) = n_{e,{\rm X},0}\, R_{\rm cyl}^{p_X}\exp\left(-\frac{z^2}{2h^2R_{\rm cyl}^2}\right) \end{equation} where $X$ denotes the electron population. The disk height is controlled by $h$, which for this work we set to unity. For the challenge dataset we included both thermal synchrotron emitting ($X={\rm th}$) and non-thermal synchrotron emitting ($X={\rm nth}$) electrons. The thermal electrons have $n_{e,{\rm th}, 0}=1.3 \times 10^8$ and $p_{\rm th} = -1.1$, while the non-thermal electrons are given by $n_{e, {\rm nth}, 0} = 1.3\times 10^5$ and $p_{\rm nth} =-2.02$. The temperature profile of the thermal electrons is also given by a radial power-law with a Gaussian envelope describing the height: \begin{equation}\label{eq:Te_riaf} T_{e}(t,r,\theta, \varphi) = T_{e,0}R_{\rm cyl}^{-0.84}\exp\left(-\frac{z^2}{2h^2R_{\rm cyl}^2}\right), \end{equation} where for the challenge data we set $T_{e,0} = 6.3\times 10^{10}$~K. To define the gas pressure, we assume equipartition between the electron and proton energy and that the protons are in roughly hydrostatic equilibrium, giving: \begin{equation}\label{eq:riaf_gas} p_{\rm gas}(x^\mu) = \frac{m_p n_{e,th}(x^\mu)}{6}\frac{M}{r}. \end{equation} To define the local magnetic field, we then assume a constant plasma $\beta = p_{\rm gas}/p_{\rm mag}$, which, combined with \autoref{eq:riaf_gas}, fixes the magnetic pressure and hence the field strength. The orientation of the magnetic field is then given by a purely toroidal configuration, relative to the plasma observer. 
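The density and temperature profiles above are straightforward to evaluate; a Python sketch with the challenge values for the thermal electrons (radii in gravitational radii; the function name is ours):

```python
import numpy as np

def riaf_thermal(r, theta, n0=1.3e8, p=-1.1, T0=6.3e10, h=1.0):
    """Thermal-electron density and temperature of the RIAF model:
    cylindrical power laws with a Gaussian vertical envelope."""
    R_cyl = r * np.abs(np.sin(theta))
    z = r * np.cos(theta)
    envelope = np.exp(-z**2 / (2.0 * h**2 * R_cyl**2))
    n_e = n0 * R_cyl**p * envelope
    T_e = T0 * R_cyl**(-0.84) * envelope
    return n_e, T_e
```

The non-thermal population follows the same form with $n_0 = 1.3\times10^5$ and $p = -2.02$.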
Finally, to describe the emission and absorption coefficients, we assume a synchrotron self-absorption model from \citet{broderickblandford04} for both the thermal and non-thermal synchrotron electrons. For the non-thermal electrons we use a power-law prescription with radiation coefficients from \citet{jonesodell} and a spectral power-law index of 1.25. To describe the dynamics of the accretion flow we follow the prescription from \citet{Pu2016}. Following the notation from \citet{Tiede2020}, the velocity field $u^\mu = u^t v^\mu$ is given by \begin{equation}\label{eq:hotspot_velo} \begin{aligned} v^r &= v^r_K + \alpha(v^r_{ff} - v^r_K) \\ v^\theta &= 0 \\ v^\phi &= v^\phi_K + (1-\kappa)(v^\phi_{ff} - v^\phi_K), \end{aligned} \end{equation} where $v^\mu_K$ denotes the Keplerian velocity field and $v^\mu_{ff}$ the free-fall velocity field. The remaining component $u^t$ is given by the normalization condition $u^\mu u_\mu = -1$. Outside the ISCO the Keplerian radial component is $v^r_K = 0$; inside the ISCO we use plunging geodesics, which are specified by matching the angular momentum and energy at the ISCO. The hotspot dynamics follows the model from \citet{Tiede2020}, where we assume that the hotspot travels passively along the accretion flow velocity field (\autoref{eq:hotspot_velo}). This implies that the equation of motion is given by the conservation of particle number (\autoref{eq:continuity}). For the emission we assume a non-thermal synchrotron hotspot with an initial Gaussian density profile \begin{equation} n_{e}(x^\mu) = n_0\exp\left[-\frac{\Delta r^\mu \Delta r_\mu + (\Delta r_\mu u^\mu)^2}{2R_s^2}\right], \end{equation} where we have set $n_0 = 6\times 10^6$ and $R_s = 0.5M$, and $\Delta r^\mu$ is the displacement from the hotspot center. \subsection{GRMHD equations} Here we describe the GRMHD equations. 
The conservation of particle number and of energy-momentum read \begin{eqnarray} \label{eq:continuity} \partial_{t}(\sqrt{-g} \rho u^{t}) &=& -\partial_{i}(\sqrt{-g} \rho u^{i}), \\ \partial_{t}(\sqrt{-g} T^t_{\nu}) &=& -\partial_{i}(\sqrt{-g} T^i_{\nu}) +\sqrt{-g} T^{\alpha}_{\beta}\Gamma^{\beta}_{\nu \alpha}, \end{eqnarray} where $i$ denotes the spatial indices and $\Gamma^{\beta}_{\nu \alpha}$ is the metric connection. The stress-energy tensor $T^{\mu}_{\nu}$ is \begin{equation} T^{\mu}_{\nu}=(\rho + u + p +b^2)u^{\mu}u_{\nu} + \left(p + \frac{b^2}{2}\right)\delta^{\mu}_{\nu} - b^{\mu}b_{\nu}, \end{equation} where $B^i$ is the magnetic field 3-vector and $b^{\mu}$ is the magnetic field 4-vector, defined in terms of $B^i$ as \begin{eqnarray} b^t &=& B^i u^{\mu} g_{i\mu}, \\ b^i &=& (B^i + b^t u^i)/u^t. \end{eqnarray} The evolution of $B^i$ follows from the spatial components of the induction equation, \begin{equation} \partial_{t}(\sqrt{-g} B^i) = -\partial_{j}(\sqrt{-g} (b^j u^i - b^i u^j)), \end{equation} while the temporal component provides the no-monopoles constraint, \begin{equation} \frac{1}{\sqrt{-g}}\partial_i(\sqrt{-g} B^i) = 0. \end{equation} \subsection{Two-temperature physics} The electron entropy is evolved as \begin{equation} \partial_{\mu} (\sqrt{-g} \rho u^{\mu} s_{\rm e}) = \frac{\sqrt{-g} (\Gamma_{\rm e} - 1)}{\rho^{\Gamma_{\rm e} -1}}f_{\rm e}Q, \end{equation} where $s_{\rm e} = p_{\rm e}/\rho^{\Gamma_{\rm e}}$. The total heating rate $Q$ is calculated by comparing the total internal energy of the gas and the internal energy obtained from the electron entropy conservation equation. The electron heating fraction is given by \begin{eqnarray} f_{\rm e} &=& \frac{1}{2} \exp \left[ -\frac{1-\beta/\beta_{\rm max}}{0.8+\sigma_{\rm h}^{0.5}} \right], \\ \beta_{\rm max} &=& \sigma_{\rm h}/4, \\ \sigma_{\rm h} &=& \frac{b^2}{\rho h}, \end{eqnarray} \noindent where $h=1+\Gamma_{\rm g} p_{\rm g}/[(\Gamma_{\rm g}-1)\rho]$ is the gas specific enthalpy. 
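The heating fraction $f_{\rm e}$ above is a simple function of the local plasma $\beta$ and the enthalpy-normalized magnetization $\sigma_{\rm h}$; a direct transcription in Python (the function name is ours):

```python
import numpy as np

def electron_heating_fraction(beta, sigma_h):
    """Fraction of the dissipated energy given to the electrons,
    following the reconnection-heating fit quoted in the text, with
    beta_max = sigma_h / 4 and sigma_h = b^2 / (rho h)."""
    beta_max = sigma_h / 4.0
    return 0.5 * np.exp(-(1.0 - beta / beta_max) / (0.8 + np.sqrt(sigma_h)))
```

Note that $f_{\rm e}\to 1/2$ (equal electron and proton heating) as $\beta\to\beta_{\rm max}$, and the fraction drops in more weakly magnetized gas.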
\subsection{Radiative GRMHD} The radiative two-temperature (2t-GRRMHD) model additionally evolves the radiation field, allowing the electrons to cool via synchrotron and inverse-Compton losses \citep{Sadowski:13_koral}. \section{Model list} \href{https://challenge.ngeht.org/challenge1/}{Challenge 1: } \begin{enumerate} \item SgrA* semi-analytic stationary RIAF (Broderick \& Loeb 2016) from C. Fromm \item M87 MAD 2T-GRMHD a=0.94 with reconnection heating (Mizuno+2021); RT with kappa using BHOSS \end{enumerate} \href{https://challenge.ngeht.org/challenge2/}{Challenge 2: } \begin{enumerate} \item SgrA* semi-analytic stationary RIAF + shearing hotspot (Broderick \& Loeb 2016, Tiede+2020) Stokes I \item SgrA* MAD GRMHD a=0.5 Stokes I (HAMR) + BHOSS \item M87 MAD GRMHD a=0.94 Stokes I (HAMR) kappa \end{enumerate} \href{https://challenge.ngeht.org/challenge3/}{Challenge 3: } \begin{enumerate} \item SgrA* semi-analytic stationary RIAF + shearing hotspot (Broderick \& Loeb 2016, Tiede+2020) Full Stokes \item SgrA* MAD GRMHD a=0.5 Full Stokes (HAMR) + iPole \item M87 MAD GRMHD a=0.94 Full Stokes (HAMR) kappa + iPole \end{enumerate} \section{Temporal behavior of fluxes} \begin{figure} \includegraphics[width=\columnwidth]{figures/time_data.png} \caption{We show the mass accretion rate $\dot{M}$, dimensionless magnetic flux $\Phi/\sqrt{\dot{M}}$, normalized radial flux of the angular momentum $\dot{J}/\dot{M}$ and the outflow efficiency $P_{\rm out}/\dot{M}c^2=1-\dot{E}/\dot{M}c^2$ over time. Values are calculated at the event horizon.} \label{fig:Horizon_fluxes} \end{figure} We calculate the mass, magnetic, angular momentum and energy fluxes. Note that we assume the factor $\sqrt{4\pi}$ is absorbed within the magnetic field $b^{\mu}$ or $B^i$. We also use natural units $GM_{\rm BH}=c=1$ for these equations. 
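The mass, magnetic, angular momentum and energy fluxes just mentioned are surface integrals over a sphere; a discrete Python sketch on a $(\theta,\varphi)$ shell (array names and uniform grid spacings are illustrative assumptions, with `gdet` standing for $\sqrt{-g}$):

```python
import numpy as np

def horizon_fluxes(rho, ur, absFrt, Tr_t, Tr_phi, gdet, dtheta, dphi):
    """Mass, magnetic, angular-momentum and energy fluxes through a
    sphere (e.g. the event horizon); inputs are 2D (theta, phi) arrays,
    with absFrt = |*F^{rt}| (the dual Maxwell tensor component)."""
    dA = gdet * dtheta * dphi
    Mdot = np.sum(-rho * ur * dA)          # accretion rate
    Phi = 0.5 * np.sum(absFrt * dA)        # magnetic flux
    Ldot = np.sum(Tr_phi * dA)             # angular-momentum flux
    Edot = np.sum(Tr_t * dA)               # energy flux
    return Mdot, Phi, Ldot, Edot
```

Derived diagnostics such as $\Phi/\sqrt{\dot{M}}$ and the outflow efficiency $1-\dot{E}/\dot{M}c^2$ then follow directly from these four numbers.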
\begin{eqnarray} \dot{M} &=& \int^{2\pi}_{0}\int^{\pi}_{0}\, (-\rho \, u^r) \, \sqrt{-g} \, d\theta \, d\varphi\,, \\ \Phi &=& \frac{1}{2}\int^{2\pi}_{0}\int^{\pi}_{0}\, |^*F^{rt}| \, \sqrt{-g} \, d\theta \, d\varphi\,, \\ \dot{L} &=& \int^{2\pi}_{0}\int^{\pi}_{0}\, T^r_{\varphi} \, \sqrt{-g} \, d\theta \, d\varphi\,, \\ \dot{E} &=& \int^{2\pi}_{0}\int^{\pi}_{0}\, T^r_t \, \sqrt{-g} \, d\theta \, d\varphi\,, \end{eqnarray} where all quantities are calculated at the event horizon $r_{\rm hor}=r_{\rm g} (1+\sqrt{1-a^2})$. One caveat is that these fluxes can be contaminated by the numerical density and internal energy floors applied in the highly magnetized regions. This section is skipped for the stationary RIAF model of challenge 1. \section{Disk-averaged quantities} \begin{figure} \includegraphics[width=\columnwidth]{figures/radial_data_1.png} \caption{We show the radial profiles of gas density $\rho$, plasma-$\beta$, proton temperature $T_{\rm p}$ and electron temperature $T_{\rm e}$. Quantities are disk-averaged and time-averaged over the raytracing period.} \label{fig:disk_profiles_1} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{figures/radial_data_2.png} \caption{We show the radial profiles of disk scale height $h/r$, radial velocity $|v_r|$, angular velocity $\Omega$ and specific angular momentum $u_{\varphi}$. Quantities are disk-averaged and time-averaged over the raytracing period.} \label{fig:disk_profiles_2} \end{figure} Here we calculate the disk-averaged properties of each model, namely gas density $\rho$, thermal pressure $p_{\rm gas}$, magnetic pressure $p_{\rm mag}$, radial velocity $|v_r|$, azimuthal velocity $|v_{\varphi}|$, angular momentum $u_{\varphi}$, disk scale height $h/r$, ion temperature $T_i$ and electron temperature $T_e$. 
We define the disk average of a quantity $q$ as \begin{equation} \langle q \rangle (r,t) = \frac{\int^{2\pi}_{0}\int^{\pi}_{0}\, \rho \, q \, \sqrt{-g} \, d\theta \, d\varphi}{\int^{2\pi}_{0}\int^{\pi}_{0}\, \rho \, \sqrt{-g} \, d\theta \, d\varphi}, \end{equation} where $q \in \{ \rho, p_{\rm gas}, p_{\rm mag}, |v_r|, |v_{\varphi}| , u_{\varphi}, h/r, T_i~[{\rm Kelvin}], T_e~[{\rm Kelvin}] \}$. Further definitions follow: \begin{eqnarray} p_{\rm gas} &=& (\Gamma_{\rm ad}-1)u, \\ p_{\rm mag} &=& b^{\mu}b_{\mu}/2, \\ |v_i| &=& \sqrt{v^i v^i g_{ii}}, \quad {\rm where~} v^i = u^i/u^t, \\ h/r &=& |\theta - \pi/2|, \end{eqnarray} where $\Gamma_{\rm ad}$ and $u$ are the adiabatic index and the internal energy of the gas. Please provide the radial profiles over the time period relevant for the raytracing, e.g., $5,000-10,000~GM_{\rm BH}/c^3$. Since these simulations have been raytraced targeting either Sgr A$^*$ or M87, please provide the black hole mass, the density-scaling or $\dot{M}$-scaling, and the $R_{\rm high}$--$R_{\rm low}$ values used to produce the target 230 GHz flux. For the stationary RIAF model, please provide the profiles used. \section{Jet properties} \begin{figure} \includegraphics[width=\columnwidth]{jet_data.png} \caption{We show the jet radius $R_{\rm jet}$ and the jet Lorentz factor $\gamma$.} \label{fig:jet_profiles} \end{figure} Here we calculate the radial profiles of the jet half width $R_{\rm jet}$ and Lorentz factor $\gamma$: \begin{equation} R_{\rm jet} (r,t) = \sqrt\frac{\int^{2\pi}_{0}\int^{\pi}_{0}\, \sqrt{-g/g_{rr}} (\mu>2) \, d\theta \, d\varphi}{2\pi}\,, \end{equation} \begin{equation} \gamma (r,t) = \frac{\int^{2\pi}_{0}\int^{\pi}_{0}\, (\mu>2) \, \alpha u^t \, \sqrt{-g} \, d\theta \, d\varphi}{\int^{2\pi}_{0}\int^{\pi}_{0}\, (\mu>2) \, \sqrt{-g} \, d\theta \, d\varphi}, \end{equation} where $\mu=-T^r_t/(\rho u^r)$ is the specific radial energy density and $\alpha=1/\sqrt{-g^{tt}}$ is the lapse. 
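For reference, the density-weighted disk average $\langle q\rangle$ defined above discretizes to a simple weighted sum; a sketch with illustrative array names (`gdet` = $\sqrt{-g}$):

```python
import numpy as np

def disk_average(q, rho, gdet, dtheta, dphi):
    """Density-weighted shell average <q>(r, t); q, rho, gdet are
    2D (theta, phi) arrays at fixed radius and time."""
    w = rho * gdet * dtheta * dphi
    return np.sum(q * w) / np.sum(w)
```

The same routine applies to every quantity in the set listed above, shell by shell and snapshot by snapshot, before time-averaging over the raytracing window.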
\section{Axisymmetrised profiles} \begin{figure*} \includegraphics[width=\textwidth]{m87_2D.png} \caption{We show $t$- and $\varphi$-averaged data: electron number density $n_{\rm e}$ and temperature $T_{\rm e}$. We also denote the jet boundary with $\sigma=1$ (black lines). The time averaging is done over the raytracing period. RIAF plots are for Sgr A$^*$ while the rest are for M87.} \label{fig:2D_profiles} \end{figure*} Here we calculate time and azimuthally averaged 2D ($r,\theta$) plots of gas density, plasma beta ($=p_{\rm gas}/p_{\rm mag}$), magnetization $\sigma=b^2/\rho$ and $T_e~[\rm K]$. The time-averaging should be done over the time period relevant for the raytracing. 
\software{\texttt{BHAC} \citep{porth17}, \texttt{H-AMR} \citep{liska_hamr2020_arxiv}, \texttt{BHOSS} \citep{Younsi:19_polarizedbhoss}, \texttt{IPOLE} \citep{Moscibrodzka:2018} } \section{Introduction} With the advent of the Event Horizon Telescope \citep[EHT;][]{EHT_M87_2019_PaperI, EHT_SgrA_2022_PaperI}, imaging the near-horizon structure of supermassive black holes (BHs) is now a reality. The primary targets of the EHT and the future next-generation EHT (or ngEHT\footnote{\href{https://www.ngeht.org/}{https://www.ngeht.org/}}) are M87$^*${} \citep[the supermassive BH in the elliptical galaxy M87;][]{EHT_M87_2019_PaperI} and Sgr~A$^*${} in the Galactic Center \citep{EHT_SgrA_2022_PaperI}, two of the most well-studied low-luminosity active galactic nuclei. Numerous papers on extracting information about the event horizon-scale accretion flows in these two sources using the EHT's enormous resolving power already exist. With the ngEHT, we will achieve unprecedented levels of angular resolution and sensitivity to low-flux regions, with the dynamic range in flux expected to increase to 1000 compared to the EHT's current dynamic range of 10. This would enable us to investigate the BH shadow shape with higher precision as well as provide a crucial connection between the accretion flow and the jet launching region. The expected advances in sensitivity require deeper investigations of feature extraction from simulated synthetic reconstructions of BH systems. Hence, we designed the ngEHT analysis challenges\footnote{\href{https://challenge.ngeht.org/challenge1/}{https://challenge.ngeht.org/challenge1/}} \citep{Roelofs_ngEHT} to test our ability to capture the complex dynamics of gas and magnetic fields around M87$^*${} and Sgr~A$^*${} using the ngEHT reference array \citep[e.g.,][]{Raymond:2021}. 
Black hole accretion and jet physics has been intensively studied over the past few decades \citep[e.g.,][]{Shakura:73,ree82,Narayan:95a,quataert:2000:CDAF,narayan03,mckinney06, kom07, tch11, narayanSANE2012, tch16}. In the context of M87$^*${} and Sgr~A$^*${}, we expect the accretion flow to be highly sub-Eddington, radiatively-inefficient and geometrically-thick, a regime popularly known as the radiatively-inefficient accretion flow (RIAF). This accretion flow solution has been used to successfully model the multiwavelength spectrum of Sgr~A$^*${} \citep[e.g.,][]{Yuan:03}. On the other hand, semi-analytical models of jets are preferred to explain the spectrum of M87$^*${} \citep[e.g.,][]{lucchini19:M87}. Thus, these two sources already provide a means to probe two different components of BH accretion, namely, the inner accretion flow structure and turbulence in Sgr~A$^*${} and the prominent jet feature in M87$^*${}. The first three EHT numerical simulation papers \citep{EHT_M87_2019_PaperV,EHT_M87_2019_PaperVIII,EHT_SgrA_2022_PaperV} already give us important clues about the horizon-scale conditions of these BH systems: (1) these BHs probably have non-zero spin, (2) the accretion disk is expected to have colder electrons than the jet sheath, and (3) the observations favor the presence of dynamically-important magnetic fields close to the BH. All of these results point us towards the magnetically arrested disk \citep[MAD;][]{igu03,narayan03} state, an accretion mode where the BH magnetosphere becomes over-saturated with magnetic flux and exhibits quasi-periodic eruptions of vertical magnetic flux bundles. MAD flows also have powerful relativistic jets, where the jet power can exceed the input accretion power \citep{tch11}, a definite signature of BH spin energy extraction via the \citet{bz77} process. 
Building on the semi-analytical RIAF models, time-dependent general relativistic magneto-hydrodynamic (GRMHD) simulations have become important tools for deciphering BH accretion physics in a variety of astrophysical systems \citep[e.g.,][]{gam03, mckinney06,fragile07, tch11,Chael_2019, Porth:19, Narayan2022}. Indeed, the EHT regularly makes use of large libraries of GRMHD simulations to model the observed horizon-scale BH images of M87$^*${} and Sgr~A$^*${} as well as larger-scale jet images \citep[such as for Centaurus A;][]{Janssen:2021} in order to constrain the time-variable plasma properties. In designing the ngEHT reference array, it is therefore crucial to use GRMHD simulations, via the ngEHT analysis challenges, to assess the attainability of specific science goals, such as resolving the photon ring and the disk-jet connection region and tracing out time-variable features. In this work, we discuss the numerical fluid simulations that were used as source models for the ngEHT analysis challenges. In particular, our objective is to compare models that incorporate increasingly complex levels of accretion and electron physics, focusing on M87$^*${} and Sgr~A$^*${}. Our model set consists of a time-dependent shearing hotspot stitched to a steady-state RIAF solution, two standard GRMHD simulations of MAD accretion flows, a GRMHD MAD simulation including electron heating via the inclusion of two-temperature physics, and a fully radiative, two-temperature, GRMHD MAD simulation. We describe the equations and setup of our numerical models in Sec.~\ref{sec:sims}, show our comparison results in Sec.~\ref{sec:results}, and finally conclude in Sec.~\ref{sec:conclusions}. \section{Numerical simulations} \label{sec:sims} In this section we provide a brief description of the semi-analytical stationary RIAF and shearing hotspot model as well as the (two-temperature/radiative) GRMHD simulations used for the ngEHT analysis challenges. 
\subsection{RIAF+hotspot solutions} \label{sec:RIAF} RIAF models attempt to describe the time- and azimuthally-averaged appearance of accretion flows. This is done with a set of building blocks. The specific RIAF models used in the challenges are based on the approach of \citet{Yuan_2003, broderick_2006, Broderick_2010}. We decompose the accretion flow into a set of phenomenological models that describe the electron density, temperature, magnetic field, and velocity profile. The electron density profile is defined in terms of the cylindrical radius $R_{\rm cyl} = r|\sin(\theta)|$ and vertical displacement $z = r\cos(\theta)$, and is given by \begin{equation} n_{e,\rm{X}}(R_{\rm cyl}, z) = n_{e,{\rm X},0}\, R_{\rm cyl}^{p_{\rm X}}\exp\left(-\frac{z^2}{2h^2R_{\rm cyl}^2}\right), \end{equation} where $X$ denotes the electron population. The disk height is controlled by $h$, which for this work we set to unity. For the challenge dataset we included both thermal synchrotron-emitting ($X\equiv{\rm th}$) and non-thermal synchrotron-emitting ($X\equiv{\rm nth}$) electrons. The thermal electrons have $n_{e,{\rm th}, 0}=1.3 \times 10^8 $ and $p_{\rm th} = -1.1$, while the non-thermal electrons have $n_{e, {\rm nth}, 0} = 1.3\times 10^5$ and $p_{\rm nth} =-2.02$. The temperature profile of the thermal electrons is also given by a radial power-law with a Gaussian envelope describing the height: \begin{equation}\label{eq:Te_riaf} T_{e}(R_{\rm cyl}, z) = T_{e,0}R_{\rm cyl}^{-0.84}\exp\left(-\frac{z^2}{2h^2R_{\rm cyl}^2}\right), \end{equation} where for the challenge data we set $T_{e,0} = 6.3\times 10^{10}$~K. To define the gas pressure, we assume equipartition between the electron and proton energy and that the protons are in roughly hydrostatic equilibrium, which gives us: \begin{equation}\label{eq:riaf_gas} p_{\rm gas}(x^\mu) = \frac{m_p n_{e,\rm th}(x^\mu)}{6}\frac{M}{r}. 
\end{equation} For the local magnetic field, we then assume a constant plasma $\beta = p_{\rm gas}/p_{\rm mag}$, which combined with eqn.~\ref{eq:riaf_gas} gives us the magnetic field strength. The orientation of the magnetic field is taken to be purely toroidal, relative to the plasma observer. Finally, we take the emission and absorption coefficients from the synchrotron self-absorption model in \citet{broderickblandford04} for both the thermal and non-thermal electrons. For the non-thermal electrons we use a power-law prescription with radiation coefficients from \citet{jonesodell} and a spectral power-law index of 1.25. We follow the velocity field prescription from \citet{Pu2016} to describe the accretion flow dynamics. Using the notation from \citet{Tiede2020}, the velocity field $u^\mu = u^t v^\mu$ is given by \begin{equation}\label{eq:hotspot_velo} \begin{aligned} v^r &= v^r_K + \alpha(v^r_{ff} - v^r_K) \\ v^\theta &= 0 \\ v^\varphi &= v^\varphi_K + (1-\kappa)(v^\varphi_{ff} - v^\varphi_K), \end{aligned} \end{equation} where $v^\mu_K$ denotes the Keplerian velocity field and $v^\mu_{ff}$ is the free-fall velocity field. The remaining component $u^t$ is given by the normalization condition $u^\mu u_\mu = -1$. The radial component outside the innermost stable circular orbit (ISCO) is $v^r_K = 0$ as the disk is in steady state. However, inside the ISCO, we use plunging geodesics, which are specified by matching the angular momentum and energy at the ISCO. The hotspot evolution follows the model from \citet{Tiede2020}, where we assume that the hotspot travels passively along the accretion flow velocity field (eqn.~\ref{eq:hotspot_velo}). This implies that its equation of motion is given by the conservation of particle number (eqn.~\ref{eq:continuity}). 
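To make the blending concrete, the Keplerian/free-fall interpolation of the velocity prescription can be sketched in a few lines. The Schwarzschild, Newtonian-limit profiles below ($\Omega_K = r^{-3/2}$ and a radial free-fall speed of $\sqrt{2/r}$ in units of $G=M=c=1$) are illustrative stand-ins, not the exact Kerr expressions of \citet{Pu2016}:

```python
import numpy as np

def riaf_velocity(r, alpha=0.5, kappa=0.5):
    """Blend Keplerian and free-fall 3-velocities, following the structure of
    the RIAF velocity prescription in the text (illustrative profiles only)."""
    v_r_K = 0.0                  # steady-state disk: no radial drift outside the ISCO
    v_phi_K = r**-1.5            # Keplerian angular velocity (G = M = c = 1)
    v_r_ff = -np.sqrt(2.0 / r)   # Newtonian free-fall radial speed (stand-in)
    v_phi_ff = 0.0               # zero-angular-momentum infall
    v_r = v_r_K + alpha * (v_r_ff - v_r_K)
    v_phi = v_phi_K + (1.0 - kappa) * (v_phi_ff - v_phi_K)
    return v_r, v_phi
```

Setting $\alpha = 0$ and $\kappa = 1$ recovers a purely Keplerian disk, while $\alpha = 1$ and $\kappa = 0$ give pure radial free fall.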
For the emission we assume a non-thermal synchrotron hotspot, with an initial Gaussian density profile \begin{equation} n_{e}(x^\mu) = n_0\exp\left[-\frac{\Delta r^\mu \Delta r_\mu +(\Delta r_\mu u^\mu)^2}{2R_s^2}\right], \end{equation} where we have set $n_0 = 6\times 10^6$ and $R_s = 0.5\,r_{\rm g}$, and $\Delta r^\mu$ is the displacement from the hotspot center. \subsection{GRMHD simulations} \label{sec:GRMHD} Over the past two decades, multiple GRMHD codes have been developed and utilized to model black hole accretion and jet-launching physics over long dynamical timescales. The wide usage of GRMHD simulations is particularly encouraging since this allows verification of code-specific numerical choices that users have to make even while solving the same base set of GRMHD equations. Indeed, recently there was a community-wide effort to benchmark these codes against each other for a standard problem: evolving a weakly magnetized torus of gas around a spinning black hole \citep{Porth:19}. It was found that these codes largely provide similar solutions, though some disk quantities remain unconverged with increasing grid resolution, suggesting that more investigation is required. For this work, we employ three different GRMHD codes to probe black hole accretion, increasing the complexity of the equations solved at each step: (1) single-fluid GRMHD simulations from the \texttt{H-AMR}{} code \citep{liska_hamr2020_arxiv}, (2) a two-temperature single-fluid GRMHD simulation from the \texttt{BHAC}{} code \citep{porth17}, and (3) a two-temperature radiative GRMHD simulation from the \texttt{KORAL}{} code \citep{Sadowski:13_koral}. First we describe the set of GRMHD equations \citep[e.g., from][]{Gammie:03, Porth:19}. 
We have the conservation of particle number and energy-momentum: \begin{eqnarray} \label{eq:continuity} \partial_{t}(\sqrt{-g} \rho u^{t}) &=& -\partial_{i}(\sqrt{-g} \rho u^{i}) \\ \partial_{t}(\sqrt{-g} T^t_{\nu}) &=& -\partial_{i}(\sqrt{-g} T^i_{\nu}) +\sqrt{-g} T^{\alpha}_{\beta}\Gamma^{\beta}_{\nu \alpha}. \end{eqnarray} \noindent Here, $\rho$ is the rest-mass gas density and can also be written in the form of $\rho=m n$, where $m$ is the mean rest-mass per particle and $n$ is the particle number density. We also have the four-velocity $u^{\mu}$, stress-energy tensor $T^{\mu}_{\nu}$, metric determinant $g\equiv \det(g_{\mu\nu})$ and the metric connection $\Gamma^{\beta}_{\nu \alpha}$. Note that the index $t$ refers to the temporal component of the vector or tensor and $i$ denotes the spatial indices. The stress-energy tensor $T^{\mu}_{\nu}$ is given as \begin{equation} T^{\mu}_{\nu}=(\rho + U_{\rm gas} + p_{\rm gas} + 2p_{\rm mag})u^{\mu}u_{\nu} + (p_{\rm gas} + p_{\rm mag})\delta^{\mu}_{\nu} - b^{\mu}b_{\nu}, \end{equation} \noindent where $U_{\rm gas}$ and $p_{\rm gas}$ are the gas internal energy and pressure, related by the ideal gas equation of state: $p_{\rm gas}=(\Gamma_{\rm gas}-1)U_{\rm gas}$. We also have the magnetic pressure $p_{\rm mag}=b^2/2$ and the magnetic field 4-vector $b^{\mu}$, which can be defined in terms of the magnetic field 3-vector $B^i$: \begin{eqnarray} b^t &=& B^i u^{\mu} g_{i\mu} \\ b^i &=& (B^i + b^t u^i)/u^t. \end{eqnarray} Here we included a factor of $\sqrt{4\pi}$ into the definition of $B^i$. We evolve the magnetic field $B^i$ using the spatial components of the induction equation, \begin{equation} \partial_{t}(\sqrt{-g} B^i) = -\partial_{j}(\sqrt{-g} (b^j u^i - b^i u^j)), \end{equation} \noindent while the temporal component provides the no-monopoles constraint, \begin{equation} \frac{1}{\sqrt{-g}}\partial_i(\sqrt{-g} B^i) = 0. 
\end{equation} These equations are numerically integrated in the conservative form \citep{Gammie:03} to get the physically-relevant quantities $\rho$, $U_{\rm gas}$, $u^{\mu}$ and $B^i$. We refer the reader to the corresponding code papers for more information on the numerical techniques used to evolve these equations over space and time. In this work, we use two GRMHD simulations performed with the \texttt{H-AMR}{} code, one targeting M87$^*${} and the other Sgr~A$^*${}. These simulations employ logarithmic Kerr-Schild coordinates and the grid resolutions are $N_r\times N_\theta \times N_\varphi = 580 \times 288 \times 512$ for the M87$^*${} simulation and $348 \times 192 \times 192$ for the Sgr~A$^*${} simulation. All simulations in this work adopt the geometrical unit convention, $GM_{\rm BH}=c=1$, normalizing the length scale to the gravitational radius $r_{\rm g}=GM_{\rm BH}/c^2$. The M87$^*${} GRMHD simulation evolves a MAD flow around a black hole with spin $a=0.9375$. The Sgr~A$^*${} model also simulates a MAD flow but around a black hole with spin $a=0.5$. \texttt{H-AMR}{} uses outflowing radial boundary conditions (BCs), transmissive polar BCs and periodic azimuthal BCs \citep[for more details, see][]{liska_tilt_2018}. Since GRMHD simulations are scale-free, we determine the gas density code-to-CGS unit conversion factor (hereafter, ``density scaling'') by raytracing the simulation at 230 GHz for a target source and flux. We use the general relativistic ray-tracing codes \texttt{BHOSS}{} \citep{Younsi:19_polarizedbhoss} and \texttt{IPOLE}{} \citep{Moscibrodzka:2018} to compute images at 230 GHz and set the compact flux to be approximately 0.5 Jy for M87$^*${} and 2.4 Jy for Sgr~A$^*${}. We take black hole masses and distances of $M_{\rm BH} =6.2\times10^9 M_{\odot}$ and $D_{\rm BH}=16.9$~Mpc for M87$^*${} and $M_{\rm BH}=4.14\times10^6 M_{\odot}$ and $D_{\rm BH}=8.127$~kpc for Sgr~A$^*${}. Standard GRMHD simulations evolve a single-temperature fluid. 
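As a quick consistency check on the adopted masses and distances, the angular size of one gravitational radius on the sky follows directly from $\theta_{\rm g} = r_{\rm g}/D_{\rm BH}$; a short sketch with standard CGS constants:

```python
G = 6.674e-8       # gravitational constant [cm^3 g^-1 s^-2]
c = 2.998e10       # speed of light [cm s^-1]
M_sun = 1.989e33   # solar mass [g]
pc = 3.086e18      # parsec [cm]

def theta_g_muas(M_bh_msun, D_pc):
    """Angular size of one gravitational radius in micro-arcseconds."""
    r_g = G * M_bh_msun * M_sun / c**2         # gravitational radius [cm]
    return r_g / (D_pc * pc) * 206265.0 * 1e6  # rad -> arcsec -> muas

# M87*   (6.2e9 M_sun at 16.9 Mpc):   ~3.6 muas per r_g
# Sgr A* (4.14e6 M_sun at 8.127 kpc): ~5.0 muas per r_g
```

These angular scales set the resolution requirements that motivate the ngEHT array design.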
At the low accretion rates seen in M87$^*${} and Sgr~A$^*${}, Coulomb coupling between ions and electrons is inefficient; therefore, the two particle species are not in thermal equilibrium. Since ions are much heavier than electrons, ions dominate the single-fluid thermodynamics evolved in GRMHD simulations. Hence, to calculate the radiative output from GRMHD simulations, we calculate the electron temperature $T_{\rm e}$ using sub-grid models such as the $R$-$\beta$ prescription \citep{Moscibrodzka_2016} based on the local gas plasma-$\beta$ ($\equiv p_{\rm gas}/p_{\rm mag}$): \begin{eqnarray} T_{\rm e} &=& \frac{2 m_{\rm p}U_{\rm gas}}{3k_{\rm B}\rho (2 + R)}, \\ {\rm where,\,\,} R &=& \frac{1+R_{\rm high} \beta^2}{1+\beta^2}. \label{eq:Rbeta} \end{eqnarray} For the ngEHT analysis challenges, we take $R_{\rm high}$ values of 160 and 40 and source inclinations of $163^{\circ}$ and $50^{\circ}$ for the M87$^*${} and Sgr~A$^*${} simulations, respectively. We assume a thermal relativistic Maxwell-J\"uttner distribution to describe the electron energy distribution in the Sgr~A$^*${} model, and a hybrid thermal$+$non-thermal $\kappa$-distribution \citep[e.g.,][]{Xiao:2006, Davelaar:18} for the M87$^*${} model. The model images are shown in \citet{Roelofs_ngEHT}. \subsection{Two-temperature physics} \label{sec:two-temp} Two-temperature GRMHD (2t-GRMHD) simulations \citep[e.g.,][]{ressler_2015, Dexter_2020} evolve the ion and electron entropy equations separately and hence provide the ion and electron temperatures in a self-consistent manner. The main advantage of this method is that we remove the electron temperature as a free parameter when constructing images. However, we do have to make a choice about the sub-grid prescription that determines the fraction of local dissipative energy that heats the electrons. 
There are two heating mechanisms that are thought to be applicable to global accretion flows: turbulent heating \citep{Howes:2010, Kawazura:2019} and magnetic reconnection \citep{Werner:2018, Rowan:2017}. For the ngEHT analysis challenges, we focus on one simulation with reconnection heating, taken from \citet{Mizuno_2021}. We assume that the number densities and velocities of ions and electrons are equal, i.e., $n_{\rm i}=n_{\rm e}=n$ and $u^{\mu}_{\rm i}=u^{\mu}_{\rm e}=u^{\mu}$, maintaining charge neutrality. The electron entropy equation is given as \begin{equation} \partial_{\mu} (\sqrt{-g} \rho u^{\mu} s_{\rm e}) = \frac{\sqrt{-g} (\Gamma_{\rm e} - 1)}{\rho^{\Gamma_{\rm e} -1}}f_{\rm e}Q, \end{equation} \noindent where the electron entropy is $s_{\rm e} = p_{\rm e}/\rho^{\Gamma_{\rm e}}$, with $p_{\rm e}$ and $\Gamma_{\rm e}$ the electron pressure and adiabatic index. The total heating rate $Q$ is calculated by comparing the total internal energy of the gas with the internal energy obtained from the electron entropy conservation equation \citep[see][for more details]{ressler_2015}. The fraction of dissipative heating that contributes to electron heating is given by $f_{\rm e}$. For this particular simulation, $f_{\rm e}$ is designed to capture electron/ion heating via magnetic reconnection following \citet{Rowan:2017}: \begin{equation} f_{\rm e} = \frac{1}{2} \exp \Big[ -\frac{1-\beta/\beta_{\rm max}}{0.8+\sigma_{\rm h}^{0.5}} \Big], \end{equation} \noindent where $\beta_{\rm max} = 1/(4\sigma_{\rm h})$, defined using the hot gas magnetization $\sigma_{\rm h} = b^2/\rho h$ and the specific gas enthalpy $h=1+\Gamma_{\rm g} p_{\rm g}/[\rho(\Gamma_{\rm g}-1)]$. The 2t-GRMHD simulation from \citet{Mizuno_2021} uses modified Kerr-Schild coordinates and a black hole spin of $0.9375$. The grid resolution is $384 \times 192 \times 192$. 
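For concreteness, the two sub-grid electron prescriptions used in this work, the $R$-$\beta$ post-processing temperature model (eqn.~\ref{eq:Rbeta}) and the reconnection heating fraction above, reduce to a few lines each. This is a sketch of the formulas as written in the text, not of any code's internal implementation:

```python
import math

m_p = 1.6726e-24   # proton mass [g]
k_B = 1.3807e-16   # Boltzmann constant [erg/K]

def Te_Rbeta(U_gas, rho, beta, R_high):
    """Electron temperature from the R-beta prescription (implicitly R_low = 1)."""
    R = (1.0 + R_high * beta**2) / (1.0 + beta**2)
    return 2.0 * m_p * U_gas / (3.0 * k_B * rho * (2.0 + R))

def fe_reconnection(beta, sigma_h):
    """Electron heating fraction via magnetic reconnection (Rowan et al. 2017)."""
    beta_max = 1.0 / (4.0 * sigma_h)
    return 0.5 * math.exp(-(1.0 - beta / beta_max) / (0.8 + sigma_h**0.5))
```

In the strongly magnetized limit ($\beta \to 0$) the $R$-$\beta$ model yields the hottest electrons, while at high $\beta$ the ion-to-electron temperature ratio approaches $R_{\rm high}$; similarly, $f_{\rm e}$ reaches its maximum of $1/2$ at $\beta = \beta_{\rm max}$.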
The accretion mode is magnetically arrested and the simulation is raytraced (using \texttt{BHOSS}{}) once the near-horizon flow has reached a steady state. The target source is M87$^*${}, assuming a black hole mass of $M_{\rm BH}=6.5\times10^9 M_{\odot}$ and distance of 16.9 Mpc. The accretion rate is normalized such that the 230 GHz compact flux density is 0.8 Jy. We assume a thermal electron distribution everywhere except in the jet sheath, where we adopt a $\kappa$-distribution. More details about the image are provided in \citet{Roelofs_ngEHT}. \subsection{Radiative GRMHD} \label{sec:GRRMHD} Two-temperature GRMHD simulations do not include radiative cooling and hence are thought to be appropriate for low-luminosity supermassive black holes such as M87$^*${} and Sgr~A$^*${}, where cooling is expected to be dynamically unimportant. To verify this assumption, we consider a two-temperature radiative GRMHD (2t-GRRMHD hereafter) simulation from \citet{Chael_2019}. This simulation accounts for self-consistent radiation physics, incorporating both particle heating via magnetic reconnection (as in Sec.~\ref{sec:two-temp}) and radiative cooling via bremsstrahlung, synchrotron, Compton and Coulomb losses. The simulation is run using the 2t-GRRMHD code \texttt{KORAL}{} \citep{Sadowski:13_koral, Sadowski:15_photon,Chael_2018}, which evolves a two-temperature magnetized fluid and treats the radiation field as a second fluid \citep{Sadowski:2017}. The conservation equations solved in 2t-GRRMHD differ from those of GRMHD: \begin{equation} (T^{\mu}_{\nu} + R^{\mu}_{\nu})_{;\mu} =0, \end{equation} \noindent where $R^{\mu}_{\nu}$ is the frequency-integrated radiation stress-energy tensor, defined as \begin{equation} R^{\mu}_{\nu} = \frac{4}{3} \overline{E} u^{\mu}_{\rm R} u_{\nu \rm R} + \frac{1}{3} \overline{E}\delta^{\mu}_{\nu}. \end{equation} \noindent Here, the radiation field is described by its rest-frame energy density $\overline{E}$ and four-velocity $u^{\mu}_{\rm R}$, following the M1 closure scheme. 
The ion and electron entropy equations are \begin{eqnarray} T_{\rm e} (ns_{\rm e}u^{\mu})_{;\mu} &=& f_{\rm e}q^{\rm v} + q^{\rm C} - G, \\ T_{\rm i} (ns_{\rm i}u^{\mu})_{;\mu} &=& (1-f_{\rm e})q^{\rm v} - q^{\rm C}, \end{eqnarray} \noindent where $q^{\rm v}$ is the dissipative heating rate and $q^{\rm C}$ is the Coulomb coupling rate that captures the exchange of energy between ions and electrons. The heating fraction $f_{\rm e}$ is taken from \citet{Rowan:2017} (see \citealt{Chael_2018} for more details), the same as in the 2t-GRMHD simulation. Finally, $G$ is the radiative cooling rate \citep{Sadowski:13_koral}. For further details about the equations, see \citet{Sadowski:13_koral, Sadowski:2017, Chael_2018}. The simulation assumes a black hole spin of $a=0.9375$ and mass $M_{\rm BH}=6.2\times10^9 M_{\odot}$, targeting M87$^*${}. The gas density is scaled to physical CGS units such that the compact emission at 230 GHz is roughly 0.98 Jy. The simulation uses modified Kerr-Schild coordinates with a grid resolution of $N_r\times N_\theta \times N_\varphi = 288 \times 224 \times 128$. See \citet{Chael_2019} for more details about the simulation. While not utilized for the ngEHT analysis challenges, we included this simulation in this work because it captures the coupling between gas and radiation, necessary for black holes accreting close to the Eddington limit. Further, this model has been used in previous ngEHT reference array papers \citep{Blackburn:2019_APC, Raymond:2021}. \section{Results} \label{sec:results} We perform a series of comparisons focused on the time evolution of horizon-scale quantities and the radial dependence of disk and jet properties. The diagnostics are chosen such that any trends we find can inform EHT/ngEHT science applications, such as horizon-scale morphology and variability of the accretion flow. Further, the quantities are similar to those reported in the GRMHD code comparison project \citep{Porth:19} and so can be directly compared. 
There are a total of five models: three GRMHD-based simulations (standard, two-temperature, and two-temperature radiative) targeting M87$^*${}, and one RIAF solution and one GRMHD simulation for Sgr~A$^*${}. We further note that all three numerical simulations of M87$^*${} have the same BH spin, favoring direct comparisons of the horizon-scale gas properties. \subsection{Temporal behavior of horizon fluxes} \label{sec:time} \begin{figure} \includegraphics[width=\columnwidth]{figures/time_data.png} \caption{We show the mass accretion rate $\dot{M}$, dimensionless magnetic flux $\phi\equiv\Phi/\sqrt{\dot{M}}$, the outflow efficiency $P_{\rm out}/\dot{M}c^2=1-\dot{E}/\dot{M}c^2$ and the specific radial flux of angular momentum $\dot{J}/\dot{M}$ over time. Values are calculated at the event horizon.} \label{fig:Horizon_fluxes} \end{figure} We calculate the mass, magnetic, angular momentum and energy fluxes in the radial direction as follows: \begin{eqnarray} {\rm Mass:\,\,}& \dot{M} &=\int^{2\pi}_{0}\int^{\pi}_{0}\, (-\rho \, u^r) \, \sqrt{-g} \, d\theta \, d\varphi\,, \\ {\rm Magnetic:\,\,}& \Phi &=\frac{\sqrt{4\pi}}{2}\int^{2\pi}_{0}\int^{\pi}_{0}\, |B^r| \, \sqrt{-g} \, d\theta \, d\varphi\,, \\ {\rm Ang. Mom.:\,\,}& \dot{J} &=\int^{2\pi}_{0}\int^{\pi}_{0}\, T^r_{\varphi} \, \sqrt{-g} \, d\theta \, d\varphi\,, \\ {\rm Energy:\,\,}& \dot{E} &=\int^{2\pi}_{0}\int^{\pi}_{0}\, T^r_t \, \sqrt{-g} \, d\theta \, d\varphi\,, \end{eqnarray} \noindent where all quantities are calculated at the event horizon radius $r_{\rm hor}=r_{\rm g} (1+\sqrt{1-a^2})$. We note that numerical density floors can contribute substantially to the mass accretion rate measured at the horizon in MAD systems; nevertheless, we evaluate the fluxes at this radius for simplicity and for direct comparison with previous simulations in the literature. 
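On a discrete spherical grid, each of the above integrals reduces to a weighted sum over the $(\theta,\varphi)$ shell at the horizon radius. A schematic implementation, assuming cell-centered 2D slices and uniform cell widths:

```python
import numpy as np

def horizon_fluxes(rho, u_r, B_r, T_r_t, T_r_phi, sqrt_neg_g, dtheta, dphi):
    """Shell-integrated horizon fluxes; all arrays are (n_theta, n_phi)
    slices of the simulation data at r = r_hor."""
    dA = sqrt_neg_g * dtheta * dphi
    Mdot = np.sum(-rho * u_r * dA)                               # mass
    Phi = 0.5 * np.sqrt(4.0 * np.pi) * np.sum(np.abs(B_r) * dA)  # magnetic
    Jdot = np.sum(T_r_phi * dA)                                  # angular momentum
    Edot = np.sum(T_r_t * dA)                                    # energy
    return Mdot, Phi, Jdot, Edot
```

The dimensionless magnetic flux and outflow efficiency then follow as $\phi = \Phi/\sqrt{\dot{M}}$ and $1-\dot{E}/\dot{M}$ in geometric units.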
Figure~\ref{fig:Horizon_fluxes} shows the mass accretion rate $\dot{M}$ in units of solar masses per year ($M_{\odot}$/yr), the dimensionless magnetic flux $\phi=\Phi/\sqrt{\dot{M} r_{\rm g}^2 c }$, the outflow power $P_{\rm out} = \dot{M}c^2-\dot{E}$ and the specific angular momentum flux $\dot{J}/\dot{M}$ for the simulations targeting M87$^*${} and Sgr~A$^*${}. The RIAF model, being a steady-state solution, is excluded from this section (though the hotspot does evolve with time). Quantities from the 2t-GRRMHD simulation are only shown for $(11-16)\times 10^3\,\,r_{\rm g}/c$, i.e., the time period over which the simulation was raytraced in \citet{Chael_2019}. Remarkably, despite the differences in the complexity of the electron physics, the simulations behave very similarly. The factor of 2 difference in $\dot{M}$ between the M87$^*${} non-radiative simulations and the 2t-GRRMHD simulation can be explained by the lower electron temperatures in the near-horizon accretion flow due to radiative cooling (see Sec.~\ref{sec:disk}) as well as the higher 230 GHz flux normalization used for the radiative model. \begin{table}[H] \caption{Modulation index (MI) of the mass accretion rate $\dot{M}$, dimensionless magnetic flux $\phi$, outflow efficiency $P_{\rm out}/\dot{M}c^2$ and the specific angular momentum flux $\dot{J}/\dot{M}$ for each GRMHD model. 
The quantities are calculated over the final $5000\,\,r_{\rm g}/c$ in runtime and at the event horizon (see Fig.~\ref{fig:Horizon_fluxes}).} \newcolumntype{C}{>{\centering\arraybackslash}X} \begin{tabularx}{\textwidth}{lCCcc} \toprule Model & MI($\dot{M}$) & MI($\phi$) & MI($P_{\rm out}/\dot{M}c^2$) & MI($\dot{J}/\dot{M}$)\\ \midrule M87$^*${} GRMHD & 0.27 & 0.15 & 0.26 & 0.33\\ M87$^*${} 2t-GRMHD & 0.29 & 0.14 & 0.25 & 0.31\\ M87$^*${} 2t-GRRMHD & 0.28 & 0.14 & 0.14 & 0.31\\ Sgr~A$^*${} GRMHD & 0.23 & 0.21 & 0.39 & 0.57\\ \bottomrule \end{tabularx} \label{tab:MI} \end{table} The accretion rate in all simulations shows large variations with quasi-periodic sharp drops. These drops in $\dot{M}$ occur due to the emergence of magnetic flux eruptions, a characteristic feature of the magnetically arrested disk \citep{Begelman2022, Ripperda2022, Chatterjee:2022}. These eruptions also lower the value of $\phi$ since magnetic flux bundles escape from the vicinity of the BH, carrying away the magnetic flux accumulated in the BH magnetosphere. We see that $\phi$ often crosses the magnetic flux saturation value of 50 \citep{tch11}, overwhelming the BH magnetosphere with strong magnetic fields that eventually reconnect and trigger flux eruptions \citep[see][for the detailed mechanism]{Ripperda2022}. As these field line bundles move out and interact with the disk, they (1) hinder accretion, lowering $\dot{M}$, (2) remove magnetic flux from near the BH, lowering the jet power, and (3) push gas outwards, reducing the inward angular momentum flux. Curiously, we see larger drops in the specific angular momentum flux for the Sgr~A$^*${} GRMHD model. This is possibly due to the smaller BH spin ($a=0.5$ as opposed to $0.9375$ for the M87$^*${} models): the weakly powered jet does not carry away angular momentum as efficiently as in the higher-spin models, and flux eruptions play a bigger role in regulating disk angular momentum transport. 
Additionally, the reconnection events that trigger these eruptions accelerate electrons to higher energies and are thus crucial for understanding flare activity in BH sources. To quantify the time variability of the horizon fluxes, we calculate the modulation index MI, defined as the ratio of the standard deviation to the mean \citep{EHT_SgrA_2022_PaperV}. We show MI for the different fluxes in Table~\ref{tab:MI}. MI($\dot{M}$) is usually a good proxy for the variability of the sub-millimeter emission in these slowly accreting, optically thin black hole sources \citep[e.g.,][]{Chatterjee:2021}. The MI($\dot{M}$) values we obtain from the simulations are $\sim0.23-0.29$, larger than expected from Sgr~A$^*${} 230 GHz lightcurves \citep[where ${\rm MI}\sim 0.1$;][]{Wielgus:2022_SgrALC}. This suggests that careful analysis of the electron distribution function is needed to understand whether we are substantially over-predicting the 230 GHz lightcurve variability. Further, in general, weakly-magnetized accretion flows exhibit lower MI($\dot{M}$) values due to the absence of flux eruptions, which suggests that further study of the accretion mode in Sgr~A$^*${} is also necessary. It is encouraging to note that our MI values for $\dot{M}$ and $\phi$ are consistent with the MI values from longer time-evolved GRMHD simulations of $a=0.9$ BHs in \citet{Narayan2022}, indicating that our simulations are sufficiently converged with respect to horizon-scale quantities. \subsection{Disk-averaged quantities} \label{sec:disk} \begin{figure} \includegraphics[width=\columnwidth]{figures/radial_data_1.png} \caption{We show the radial profiles of gas density $\rho$, plasma-$\beta$, proton temperature $T_{\rm p}$ and electron temperature $T_{\rm e}$. 
Quantities are disk-averaged and time-averaged over the raytracing period.} \label{fig:disk_profiles_1} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{figures/radial_data_2.png} \caption{We show the radial profiles of disk scale height $h/r$, radial velocity $|v_r|$, angular velocity $\Omega$ and specific angular momentum $u_{\varphi}$. Quantities are disk-averaged and time-averaged over the raytracing period.} \label{fig:disk_profiles_2} \end{figure} Here we calculate the disk-averaged properties of each model, namely the gas density $\rho$, thermal pressure $p_{\rm gas}$, magnetic pressure $p_{\rm mag}$, radial velocity $|v_r|$, azimuthal velocity $|v_{\varphi}|$, angular momentum $u_{\varphi}$, disk scale height $h/r$, ion temperature $T_i$ and electron temperature $T_e$. We define the disk average of a quantity $q$ as \begin{equation} \langle q \rangle (r,t) = \frac{\int^{2\pi}_{0}\int^{\pi}_{0}\, \rho \, q \, \sqrt{-g} \, d\theta \, d\varphi}{\int^{2\pi}_{0}\int^{\pi}_{0}\, \rho \, \sqrt{-g} \, d\theta \, d\varphi}, \end{equation} where $q \in \{ \rho, p_{\rm gas}, p_{\rm mag}, |v_r|, |v_{\varphi}| , u_{\varphi}, h/r, T_i [\rm Kelvin], T_e [\rm Kelvin] \}$. Further definitions follow: \begin{eqnarray} p_{\rm gas} &=& (\Gamma_{\rm ad}-1)u, \\ p_{\rm mag} &=& b^{\mu}b_{\mu}/2, \\ |v_i| &=& \sqrt{v^i v^i g_{ii}}, \\ {\rm where,\,\,} v^i &=& u^i/u^t, \nonumber \\ h/r &=& |\theta - \pi/2|, \end{eqnarray} where $\Gamma_{\rm ad}$ and $u$ are the adiabatic index and the internal energy of the gas. Figures~\ref{fig:disk_profiles_1} and \ref{fig:disk_profiles_2} show the respective disk-averaged radial profiles for each model, including the Sgr~A$^*${} RIAF solution. The density profiles in the inner few tens of $r_{\rm g}$ converge roughly to an $n_{\rm e}\propto r^{-1}$ profile and match the RIAF density profile. 
The M87$^*${} 2t-GRRMHD density is larger by a factor of $\approx 2$ compared with the GRMHD/2t-GRMHD models, as is expected from the difference in the mass accretion rate (Fig.~\ref{fig:Horizon_fluxes}). The 2t-GRRMHD simulation exhibits a slightly more magnetized inflow within the inner $2\,\,r_{\rm g}$, but overall, the GRMHD simulations have a similar plasma-$\beta\equiv p_{\rm gas}/p_{\rm mag}$ disk profile. The stronger magnetic field seen in the 2t-GRRMHD model could explain the higher values of the horizon magnetic flux seen in Fig.~\ref{fig:Horizon_fluxes}. The RIAF model assumes a constant disk plasma-$\beta$ of 1 (see Sec.~\ref{sec:RIAF}), which is substantially higher than in the MAD GRMHD models. This value of plasma-$\beta$ is chosen in order to match the observed 230 GHz flux density of Sgr~A$^*${}. As we see from the disk scale height in Fig.~\ref{fig:disk_profiles_2}, the RIAF model has a much thicker disk than the GRMHD models, and therefore produces much more sub-millimeter (sub-mm) emission even with a low electron temperature and weak magnetic field strength. Next, we see that the disk-averaged electron temperature $T_{\rm e}$ in the 2t-GRRMHD M87$^*${} model is more than an order of magnitude lower than in the other GRMHD models within the inner $10\,\, r_{\rm g}$, but actually matches the Sgr~A$^*${} RIAF $T_{\rm e}$ profile and has a shallower profile, $T_{\rm e} \propto r^{-1}$ instead of $r^{-3/2}$. It is further interesting to note that the disk ion temperatures $T_{\rm i}$ are very similar in all the GRMHD simulations shown here. Therefore, despite the same reconnection-driven heating mechanism being captured in both the 2t-GRMHD and the 2t-GRRMHD models, radiative cooling of hot electrons plays a crucial role in determining the eventual $T_{\rm e}$. Due to the low $T_{\rm e}$, the required accretion rate normalization is higher in the 2t-GRRMHD model, as we noted in the previous subsection. 
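The density-weighted disk average used throughout Figs.~\ref{fig:disk_profiles_1} and \ref{fig:disk_profiles_2} is straightforward to implement; a minimal sketch, assuming $(r,\theta,\varphi)$-ordered data arrays and uniform angular cell widths:

```python
import numpy as np

def disk_average(q, rho, sqrt_neg_g, dtheta, dphi):
    """Density-weighted shell average <q>(r); arrays have shape
    (n_r, n_theta, n_phi)."""
    w = rho * sqrt_neg_g * dtheta * dphi     # density-weighted volume element
    return (q * w).sum(axis=(1, 2)) / w.sum(axis=(1, 2))
```

The same routine applies to every quantity in the list above; only the input array `q` changes.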
In Fig.~\ref{fig:disk_profiles_2}, we show the average disk scale height $h/r$, the radial and angular velocities ($v_r$ and $\Omega$) and the specific angular momentum $u_{\varphi}$. The MAD simulations all show very similar disk properties. We find $\langle h/r\rangle \approx 0.1-0.3$, with a sharp increase within $3\,\,r_{\rm g}$ where the inflow becomes vertically supported by strong poloidal magnetic fields. The radial velocity has a profile of $r^{-1/2}$, similar to the scaling relation found in ADAF solutions assuming a constant viscosity parameter $\alpha$ \citep{nar94,nar98}. The $\alpha$ parameter profile depends on how magnetized the accretion flow is, with $\alpha\propto r^{-1}$ for weakly-magnetized flows and close to constant for MAD-like flows \citep[e.g.,][]{liska_tor_2019,Chatterjee:2022}. We also see highly sub-Keplerian angular velocity profiles in the GRMHD models, typical for magnetically supported disks. In the RIAF model, by contrast, the background disk is not infalling and has a purely Keplerian angular velocity profile. Instead, the hotspot, added to the RIAF solution, undergoes shearing and disappears into the BH with a radial velocity similar to the values found in the GRMHD MAD disks. This occurs because the hotspot is designed to travel along plunging geodesics (see Sec.~\ref{sec:RIAF}), similar to the rapid gas infall close to the BH in the GRMHD models. The angular momentum in the GRMHD models looks sub-Keplerian, as expected for MADs. \subsection{Jet properties} \begin{figure} \includegraphics[width=\columnwidth]{figures/jet_data.png} \caption{We show the jet radius $R_{\rm jet}$ and the jet Lorentz factor $\gamma$ from the M87$^*${} GRMHD and 2t-GRRMHD models, and the Sgr~A$^*${} GRMHD model. The gray circles indicate the deprojected jet radius of the M87 jet assuming a BH mass of $6.2\times10^9M_{\odot}$ and a source inclination of $14^{\circ}$ \citep{Nakamura_2018}. 
The data points are a compilation of various papers \citep{Doeleman2012,asadanak2012,Hada2013,nak2013,Akiyama:2015,Hada2016}.} \label{fig:jet_profiles} \end{figure} Here we calculate the radial profiles of the jet half-width $R_{\rm jet}$ and Lorentz factor $\gamma$: \begin{equation} R_{\rm jet} (r,t) = \sqrt\frac{\int^{2\pi}_{0}\int^{\pi}_{0}\, \sqrt{-g/g_{rr}} (\mu>2) \, d\theta \, d\varphi}{2\pi}\,, \label{eq:Rjet} \end{equation} \begin{equation} \gamma (r,t) = \frac{\int^{2\pi}_{0}\int^{\pi}_{0}\, (\mu>2) \, \alpha u^t \, \sqrt{-g} \, d\theta \, d\varphi}{\int^{2\pi}_{0}\int^{\pi}_{0}\, (\mu>2) \, \sqrt{-g} \, d\theta \, d\varphi}, \end{equation} where $\mu=-T^r_t/(\rho u^r)$ is the specific radial energy flux and $\alpha=1/\sqrt{-g^{tt}}$ is the lapse. Here we define the jet boundary as $\mu>2$, i.e., the region over which the jet still remains highly energized. Note that this definition of the jet radius is quite similar to the standard definition used in the literature \citep[e.g.,][]{Narayan2022}, where the jet boundary is taken to be the $\sigma=1$ surface. Since $\mu=\gamma(\sigma+h)$\footnote{Note that the specific enthalpy includes the rest-mass energy contribution in our definition from Sec.~\ref{sec:two-temp}.}, our condition $\mu>2$ also incorporates regions where the jet might not be magnetically-dominated but is relativistically hot or fast. Since we restrict our jet profiles to within $r\lesssim 10^3\,r_{\rm g}$, the radius is primarily determined by the jet magnetization. Figure~\ref{fig:jet_profiles} shows the jet radius $R_{\rm jet}$ and Lorentz factor $\gamma$ as a function of the radial distance from the BH for the M87$^*${} GRMHD and 2t-GRRMHD models as well as the Sgr~A$^*${} GRMHD model. 
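The two jet diagnostics above can be evaluated per radial shell with a simple mask on $\mu$; a sketch operating on one $(\theta,\varphi)$ slice at a time, assuming at least one cell satisfies the cut:

```python
import numpy as np

def jet_diagnostics(mu, lapse, u_t, neg_g, g_rr, dtheta, dphi, mu_cut=2.0):
    """Jet half-width and mean Lorentz factor at a single radius,
    following the mu > mu_cut jet definition in the text.
    All arrays are (n_theta, n_phi) slices; neg_g = -det(g_munu)."""
    m = mu > mu_cut                                   # energized-jet mask
    area = np.sum(np.sqrt(neg_g / g_rr) * m) * dtheta * dphi
    R_jet = np.sqrt(area / (2.0 * np.pi))             # effective half-width
    w = np.sqrt(neg_g) * m                            # volume-element weights
    gamma = np.sum(lapse * u_t * w) / np.sum(w)       # masked average of alpha*u^t
    return R_jet, gamma
```

Looping this over radial shells (and snapshots) yields the profiles shown in Fig.~\ref{fig:jet_profiles}.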
The M87$^*${} jet radius from our models matches the observed jet width from M87 (gray circles) quite well, with the radial profile roughly proportional to $r^{0.625}$, which is the fitted powerlaw for the M87 jet \citep{Nakamura_2018}, though the index value has also been reported to be slightly smaller in some works \citep[0.57;][]{asadanak2012,Nokhrina:2019}. The powerlaw index of 0.625 is larger than that found using the $\sigma=1$ condition from \citet{Narayan2022}, where the authors found a powerlaw index of 0.428 for their MAD spin $a=0.9$ GRMHD model. It is possible that we find larger jet radii because we incorporate a part of the hot jet sheath region within our definition of $R_{\rm jet}$ \citep[as suggested by Fig.~7 in][]{chatterjee2019}. For the Sgr~A$^*${} model, we find a similar $R_{\rm jet}$ profile. There are no detections of an extended jet in Sgr~A$^*${} \citep[e.g.,][]{Issaoun_2019} though semi-analytical and GRMHD models largely favor a jet component from a spinning BH \citep[e.g.,][]{Markoff:07,EHT_SgrA_2022_PaperV}. We also show the Lorentz factor $\gamma$ in Fig.~\ref{fig:jet_profiles}. The jets accelerate to $\gamma\approx3-4$ by $10^3\,r_{\rm g}$ in all of our GRMHD models. It is more difficult to compare our $\gamma$ profiles with values inferred from observations of the M87 jet \citep[e.g.,][]{mertens2016}. This is because our $\gamma$ values are biased towards the jet spine while the observations generally capture the velocities of localized features in the sub-relativistic jet sheath/disk wind, especially at small distances from the BH. Indeed, both simulations and observations show that the jet Lorentz factor varies greatly as a function of jet radius \citep[e.g., see][]{chatterjee2019}. We speculate that a better approach might be to calculate emissivity-weighted Lorentz factors in order to compare to the measured $\gamma$ from M87.
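Power-law indices like the 0.625 quoted above are typically obtained by a least-squares fit in log--log space. A minimal sketch, with synthetic data standing in for the measured jet widths:

```python
import numpy as np

def fit_powerlaw(r, R):
    """Fit R = A * r**k by linear least squares in log-log space."""
    k, log_a = np.polyfit(np.log(r), np.log(R), 1)
    return k, np.exp(log_a)

# synthetic stand-in for a jet-width profile R_jet ~ r^0.625
r = np.logspace(0.5, 3.0, 50)     # radii in r_g
R_jet = 0.8 * r**0.625            # 0.8 is an arbitrary illustrative prefactor
k, A = fit_powerlaw(r, R_jet)     # recovers k = 0.625 on noiseless data
```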
Since our focus is on the comparison between GRMHD simulations, we leave direct comparisons to data to future work. \subsection{Axisymmetrized profiles} \begin{figure} \includegraphics[width=\textwidth]{figures/m87_2D.png} \caption{We show t- and $\varphi$-averaged data: electron number density $n_{\rm e}$ (top row) and temperature $T_{\rm e}$ (bottom row). We also denote the jet boundary with $\sigma=1$ (black lines). The time-averaging is done over $5,000\,r_{\rm g}/c$ for each model. RIAF plots are for Sgr A$^*$ while the rest are for M87. The Sgr~A$^*${} GRMHD model produces similar plots of $n_{\rm e}$ and $T_{\rm e}$ as the M87$^*${} model, and hence, we do not show it here.} \label{fig:2D_profiles} \end{figure} In the previous sections, we have found that the largest differences between the GRMHD models occur in the electron temperature distribution. Figure~\ref{fig:2D_profiles} shows the time- and azimuthally-averaged 2D vertical plots of electron number density $n_{\rm e}$ and electron temperature $T_{\rm e}$. We show the normalized $n_{\rm e}$ so as to capture the relative change in the disk/wind density distribution, which provides information about the disk structure. The large difference in disk scale height is immediately apparent between the RIAF and the MAD GRMHD models (also see Fig.~\ref{fig:disk_profiles_2}). The presence of a prominent wide jet component in MADs squeezes the inner disk and pushes against the accretion flow, a feature which is not captured in the constant $h/r$ RIAF model. However, the RIAF model does roughly reproduce the density profile of the disk midplane region. This means that the RIAF model could well represent non/weakly-jetted, quasi-spherical accretion flows. For sources like M87$^*${}, where we see a prominent jet component, the density gradient in the vertical direction is expected to be steeper as strong magnetic stresses power disk winds that carry away gas from the disk \citep[e.g.,][]{Chatterjee:2022}.
Overall, the disk/wind density distribution among the GRMHD models looks similar, with small differences in the lateral extension of the wind region and the steepness of the vertical gradient in density. For example, if we compare the 2t-GRRMHD model with the other two simulations, the density in the wind region is larger in the radiative model. The reason for the shallow vertical density profile in the 2t-GRRMHD model is unclear, since weakly magnetized thick disk simulations tell us that radiative cooling would lead to the loss of gas pressure in the disk and would result in the disk collapsing to a relatively dense structure in the midplane \citep[e.g.,][]{fm09,Yoon:2020}. However, in the presence of strong poloidal magnetic fields, i.e., in the MAD state, the plasma-$\beta$ can decrease to $\beta \approx 0.2-1$ in the disk midplane (see Fig.~\ref{fig:disk_profiles_1}, third row, left panel), going to even lower values in the upper layers of the accretion flow. The high magnetic pressure could help support the disk against collapse while sufficiently strong magnetic stresses could power disk winds. Such behavior is also seen in recent GRMHD simulations of near-Eddington, geometrically thin, strongly magnetized disks, where the inner disk (or corona) has a larger $h/r$ than the outer disk due to magnetic pressure support \citep{Liska:2022}. To verify how radiative cooling affects the inner disk/wind structure in highly sub-Eddington accretion flows like M87$^*${} and Sgr~A$^*${}, we require longer 2t-GRRMHD simulations such that the disk is in inflow-outflow equilibrium out to a radius of at least $50\,r_{\rm g}$. The 2D temperature plot of the RIAF model also looks vastly different in the inner disk ($r\lesssim20\,r_{\rm g}$) when compared to the GRMHD and 2t-GRMHD simulations, but is similar to the temperature distribution in the 2t-GRRMHD disk midplane (also seen in the $T_{\rm e}$ plot of Fig.~\ref{fig:disk_profiles_1}).
The RIAF model does not capture gas heating in the jet sheath region (the region just outside of the jet boundary indicated by the $\sigma=1$ dashed line) and therefore $T_{\rm e}$ drops as we move away from the midplane towards the poles. In the GRMHD models, the jet sheath is as hot as, if not hotter than, the inner accretion flow, with temperatures reaching $T_{\rm e}>10^{11}$K. For the GRMHD simulation, the electron temperature is given as a fraction of the fluid temperature, where the fraction depends on how magnetized the gas is in the region, as per the $R$-$\beta$ prescription from eqn.~\ref{eq:Rbeta}. For the M87$^*${} model, we chose an $R_{\rm high}$ value of 160 to have a jet-dominated sub-mm image. This choice of $R_{\rm high}$ suppresses the electron temperature in the disk, focusing higher temperatures in the jet sheath. Comparing the GRMHD model with the 2t-GRMHD model, the jet sheath region exhibits very similar $T_{\rm e}$ values but the disk midplane is hotter by a factor of a few in the 2t-GRMHD model. We note that this difference in $T_{\rm e}$ in the midplane is more noticeable in the 2D plot than in the disk-averaged $T_{\rm e}$ profile shown in Fig.~\ref{fig:disk_profiles_1}, as the upper layers of the disk become substantially hotter in the GRMHD model. For the radiative 2t-GRRMHD model, the inner regions of the disk are cooler as electrons heated by magnetic reconnection quickly cool via synchrotron and Compton losses. From Fig.~\ref{fig:disk_profiles_1}, the drop in $T_{\rm e}$ for the 2t-GRRMHD model is shown to be as large as an order of magnitude when compared to the (2t-) GRMHD models. Another interesting feature is that the hot region ($T_{\rm e}>10^{11}$K) in the jet sheath is much narrower in the 2t-GRRMHD model, which could have a significant bearing on the ray-traced image, possibly producing a thinner jet sheath.
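Eqn.~\ref{eq:Rbeta} is not reproduced in this excerpt; the sketch below assumes the widely used form of the $R$-$\beta$ prescription, in which $T_{\rm gas}/T_{\rm e}$ interpolates between $R_{\rm low}$ at low plasma-$\beta$ (jet sheath) and $R_{\rm high}$ at high $\beta$ (disk midplane). Whether this matches the paper's exact equation is an assumption:

```python
def electron_temperature_fraction(beta, r_high, r_low=1.0):
    """T_e / T_gas under the assumed common R-beta form:
    T_gas / T_e = (r_low + r_high * beta**2) / (1 + beta**2).
    Low beta (magnetized jet sheath) -> T_e ~ T_gas / r_low (hot electrons);
    high beta (disk midplane)        -> T_e ~ T_gas / r_high (suppressed)."""
    b2 = beta**2
    return (1.0 + b2) / (r_low + r_high * b2)

# R_high = 160, as chosen for the M87* model, suppresses midplane electrons
# by a factor ~160 while leaving the low-beta jet sheath electrons hot
frac_jet = electron_temperature_fraction(beta=1e-3, r_high=160.0)
frac_disk = electron_temperature_fraction(beta=1e3, r_high=160.0)
```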
Finally, the difference in $T_{\rm e}$ in the jet body between the GRMHD models is due to the different density/internal energy floor setups used by the corresponding codes. Since the gas in the jet sheath and the jet body undergoes mixing due to boundary instabilities \citep[e.g.,][]{chatterjee2019, Wong:2021}, it is possible that the choice of floors could affect the overall electron temperature in the jet sheath. Such a study is outside the scope of our paper and is left to future work. \begin{figure} \includegraphics[width=\textwidth]{figures/hotspot_ne_2D.png} \caption{We show the $\varphi$-averaged hotspot electron number density as a function of radius and time. The hotspot falls into the BH and gets sheared over time.} \label{fig:hotspot} \end{figure} \subsection{Orbiting hotspot in a RIAF model} High-energy flares are commonly observed in AGNs, with GeV and TeV flares seen in M87$^*${} \citep[e.g.,][]{Aharonian:2006, Acciari:2010} and quasi-daily nIR and X-ray flares in Sgr~A$^*${} \citep[e.g.,][]{Baganoff:01,Eckart:2006:flare_activity,Hornstein_2007,Nowak_2012,Neilsen_2013,Witzel_2018,Do_2019,Haggard_2019}. A number of attempts have been made to explain the origin of flaring, such as magnetic reconnection in turbulent gas flows in the disk and the jet \citep{Dodds-Eden_2010, Dibi_2014, chatterjee2019,Nathanail_2020,Chatterjee:2021} and magnetic flux eruptions \citep{Dexter:2020:NIR_MAD,Porth:2020:NIR_MAD,Scepi:2022,Ripperda2022}. For Sgr~A$^*${}, semi-analytical models found that high-energy electrons, assumed to be accelerated via an ad-hoc process such as a large magnetic reconnection event or shocks, are required to describe the large flaring events \citep{Markoff_2001_sgra, Dibi_2016, Gutierrez_2020}. Near-infrared observations from the GRAVITY telescope provided further evidence for orbiting hotspot features in the accretion flow \citep{Gravity:20:orbit} that may be linked to acceleration events.
It has also been recently shown that orbiting hotspots can be used to model double peaked X-ray flares \citep{Haggard_2019, Ball_2021} and prominent Stokes Q-U loops in the sub-mm emission of Sgr~A$^*${} \citep{Wielgus:2022}. These results give us considerable motivation to test the capability of the ngEHT to detect hotspot formation in accretion flows around black holes. Instead of isolating a particular magnetic flux eruption event in our simulations, we added a shearing hotspot to the RIAF solution as detailed in Sec.~\ref{sec:RIAF}. Figure~\ref{fig:hotspot} shows the temporal evolution of the azimuthally-averaged electron number density of the hotspot. We begin with a Gaussian distribution of gas that undergoes shearing as the gas falls in closer to the BH. The overall density normalization is much lower than in the RIAF disk, since the optically thin hotspot gas already produces a large enough non-thermal synchrotron emissivity. The hotspot is evolved over $800\,r_{\rm g}/c$, but the gas distribution comes to a near-steady-state profile within the first $200\,r_{\rm g}/c$, which is roughly one hour for Sgr~A$^*${}. The shearing of the hotspot gas has a significant impact on the evolution of the 230 GHz image \citep{Tiede2020, Roelofs_ngEHT}. From Fig.~\ref{fig:disk_profiles_2} (right column), we see that the radial velocity matches the disk-averaged gas velocity from the GRMHD model, showing nearly free-fall speeds, while the azimuthal velocity becomes highly sub-Keplerian. The velocity profiles show that our hotspot model should be able to reproduce the expected hotspot motion from the GRMHD models, and is ideal for investigating multiwavelength flare lightcurves. A companion paper \citep{Emami:2022_hotspot} goes into further detail about how current dynamical reconstruction techniques can be used to trace out the motion and morphology of the shearing hotspot in the context of ngEHT observations.
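The shearing geometry is easy to illustrate with a toy model (this is not the actual prescription of Sec.~\ref{sec:RIAF}, which advects the hotspot along plunging geodesics with radial infall): a Gaussian blob whose center at each radius is carried around by a Keplerian-like angular velocity, so that rings at different radii drift apart and the blob stretches into a crescent:

```python
import numpy as np

def sheared_hotspot(r, phi, t, r0=8.0, phi0=0.0, sigma=1.0, n0=1.0):
    """Toy hotspot density: a Gaussian blob advected in azimuth by
    Omega(r) = r**-1.5. Illustrative only; r0, sigma, n0 are arbitrary."""
    omega = r**-1.5
    # azimuthal offset from the blob center, wrapped to (-pi, pi]
    dphi = np.angle(np.exp(1j * (phi - phi0 - omega * t)))
    return n0 * np.exp(-((r - r0)**2 + (r * dphi)**2) / (2.0 * sigma**2))
```

Evaluating on an $(r,\varphi)$ grid at successive times and averaging over $\varphi$ reproduces the qualitative behavior of Fig.~\ref{fig:hotspot}: a compact blob that progressively shears out in azimuth.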
These hotspot models and reconstruction methods would be integral in deciphering the more complex gas dynamics of magnetic flux eruption events in MADs, which have been shown to produce significant variation in image structure at 230 GHz \citep[e.g.,][]{Gelles:2022}. \section{Conclusions} \label{sec:conclusions} In this work, we have compared a series of numerical solutions with increasing complexity, going from a time-independent radiatively-inefficient accretion flow (RIAF) model to fully 3D GRMHD simulations of accreting black holes, incorporating the effects of electron heating and cooling losses via two-temperature and radiation physics. In addition, each of our simulations is run with a different GRMHD code, similar to the approach of another community-wide code comparison effort \citep{Porth:19}. We found that the simulations exhibit remarkably similar properties given that they incorporate varying levels of complexity in electron physics. The notable exception is the electron temperature, where radiative cooling decreases the temperature by a factor of $\lesssim 10$ within the inner 10 gravitational radii, the region that produces the bulk of the 230 GHz emission in M87$^*${}, one of the two primary targets of the EHT and the ngEHT (the other being Sgr~A$^*${}). The main goal of this work is to understand the variation in the underlying accretion flow and jet properties in our models, since synthetic ray-traced images constructed from these models are used as ``truth'' images for the ngEHT Analysis Challenges \citep{Roelofs_ngEHT}. The ngEHT Analysis Challenges are an effort to determine how much information about the accretion flow and jet dynamics we can glean from the proposed ngEHT reference array, and what modifications to the image reconstruction tools are required to decode future ngEHT observational data.
Our paper deals with numerical models designed to investigate hotspot evolution, turbulent inspiralling gas flows and extended powerful jets, targeting M87$^*${} and Sgr~A$^*${}. We restricted our model set to the community-standard setup: a rotating, geometrically-thick, optically-thin torus of magnetized gas around a spinning black hole, which is the fiducial model choice of the EHT \citep{EHT_M87_2019_PaperV, EHT_M87_2019_PaperVIII, EHT_SgrA_2022_PaperV}. This model choice leaves out exploration of multiple new setups of black hole accretion, such as quasi-spherical wind-fed inflows \citep[e.g.,][]{Ressler:2020:sgra_MAD, Lalakos:2022}, strongly-wind-fed accretion \citep[e.g.,][]{Cruz-Osorio:2017,Kaaz:2022}, geometrically-thin accretion disks \citep[e.g.,][]{Avara:2016, Liska:2022}, puffy radiation-dominated super-Eddington disks \citep[e.g.,][]{Sadowski:2016, Curd:2019} and misaligned accretion disks \citep[e.g.,][]{fragile07, liska_tilt_2018, White_2019, Chatterjee_2020}. Apart from varying the accretion mode, the high resolution of the images from the EHT and ngEHT could potentially help distinguish between different space-time metrics \citep{EHT_Sgra_paper_VI}. So far, only a limited number of non-Kerr GRMHD simulations have been performed \citep[e.g.,][]{Mizuno:2018, Olivares:2020,Nampalliwar:2022PhRvD.106f3009N}. The future of numerical studies is bright, given their rising popularity in the astrophysics community and the increase in computational resources. The breadth of the current investigations in accretion physics would result in a plethora of variable structures that should be thoroughly studied keeping the observational capabilities of the ngEHT in mind. \vspace{6pt} \acknowledgments{Razieh Emami acknowledges support from the Institute for Theory and Computation at the Center for Astrophysics, as well as grant numbers 21-atp21-0077, NSF AST-1816420 and HST-GO-16173.001-A.} \conflictsofinterest{The authors declare no conflict of interest.} \begin{adjustwidth}{-\extralength}{0cm} \reftitle{References}
\section{Introduction} \label{section:introduction} Distributed systems are notoriously subject to faults and much of the effort in the design and implementation of distributed algorithms is devoted to fault tolerance issues. Introduced in the seminal paper of Dijkstra~\cite{dijkstra1982self}, \emph{self-stabilization} is a fundamental approach to fault tolerance requiring that the system recovers from any combination of transient faults and stabilizes back to a legal configuration within finite time. The strong guarantees of this natural approach attracted a lot of attention over the years, making self-stabilization an extensively studied research field (see \cite{dolev2000self, altisen2019introduction} for textbooks). The main performance measure for self-stabilizing algorithms is their stabilization \emph{run-time}. With very few exceptions (discussed in Sec.{}~\ref{section:related-work}), this is measured as a function of the size of the system: the more processors in the system, the longer it takes for the algorithm to stabilize. However, while self-stabilizing systems are guaranteed to recover from \emph{any} number of transient faults, in most systems the common scenario is that over a limited time interval, only \emph{a few} faults occur. Although a small number of faults can seriously hinder the operation of the whole system, one may hope that the system recovers from them faster than it would from a larger number of faults, regardless of the total number of processors. With this hope in mind, we turn to the notion of \emph{fully adaptive run-time} that expresses the recovery time of a self-stabilizing algorithm as a function of the number of faults (supporting also dynamic \emph{topology changes}), rather than the size of the graph $G$ on which it runs.
Our main contribution is a generic transformer that can be applied to a wide class of \emph{locally checkable labeling (LCL)} problems \cite{naor1995can}, converting a given fault free synchronous algorithm \ensuremath{\mathcal{A}}{} that satisfies certain properties into a self-stabilizing synchronous algorithm \ensuremath{\mathcal{A}_{\mathit{ST}}}{} for the same problem.\footnote{% In this paper, the term LCL is not restricted to graphs of constant degrees, see Sec.{}~\ref{section:model-and-preliminaries}.} Our transformer ensures that \ensuremath{\mathcal{A}_{\mathit{ST}}}{} is efficient in terms of its expected adaptive run-time and message size overhead, both expressed as a function of the number $k$ of nodes that experienced transient faults and the graph's degree bound $\Delta$, as well as a few other parameters of the LCL problem and the fault free algorithm \ensuremath{\mathcal{A}}{} (refer to Thm.{}\ \ref{theorem:nodes:main} and \ref{theorem:edges:main} for the exact expressions). Another appealing feature of our transformer is that the self-stabilizing algorithm \ensuremath{\mathcal{A}_{\mathit{ST}}}{} is \emph{anonymous} and \emph{size-uniform}, namely, the nodes do not need to have unique IDs, nor are they assumed to know the size of $G$; we do allow the nodes to know the degree bound $\Delta$ in cases that this is necessary for the original fault free algorithm \ensuremath{\mathcal{A}}{}, however, this is not a requirement of the transformer itself. 
Combined with its fully adaptive run-time and with a (deterministic) bound that we establish on the ``propagation radius'' of the faults, we conclude that \ensuremath{\mathcal{A}_{\mathit{ST}}}{} can also work in an \emph{infinite} (bounded degree) graph $G$ and recover fast from any (finite or infinite) combination of transient faults as long as those can be partitioned into ``spaced apart'' regions, each containing a finite number of faulty nodes (see Sec.{}\ \ref{section:transformer-statement-nodes} and \ref{section:transformer-statement-edges} for more details). While generic transformers, that compile a fault free algorithm into a self-stabilizing one, have been the topic of many works (see Sec.{}~\ref{section:related-work}), to the best of our knowledge, we develop the first local transformer that comes with appealing guarantees for both the stabilization run-time and the message size overhead. Our transformer is also the first one to ensure that the stabilization run-time is fully adaptive without committing to huge messages that essentially encode the whole history. Moreover, we are unaware of any existing transformer that produces anonymous size-uniform self-stabilizing algorithms (and it is far from being clear if the transformers in the existing literature can be modified to provide such a guarantee). As such, the current paper presents the first transformer that is suitable (out of the box) for infinite graphs as well. \paragraph{Concrete LCL Problems.} To demonstrate the applicability of the new transformer, it is applied to known fault free algorithms (or simple variants thereof), resulting in self-stabilizing algorithms for eight LCL problems. These include four \emph{distributed node problems} and the corresponding four \emph{distributed edge problems}, derived from the distributed node problems by projecting them on the line graph of a given graph, as listed here: \begin{itemize} \item Distributed node problem: \emph{maximal independent set}. 
\\ Definition: the output of each node $v \in V(G)$ is $f(v) \in \{ 1, 2 \}$; the nodes $v$ that output $f(v) = 1$ form an independent set; if a node $v \in V(G)$ outputs $f(v) = 2$, then there exists at least one neighbor $u$ of $v$ with $f(u) = 1$. \\ Corresponding distributed edge problem: \emph{maximal matching}. \item Distributed node problem: \emph{node $c$-coloring}. \\ Definition: the output of each node $v \in V(G)$ is $f(v) \in \{ 1, \dots, c \}$; the nodes $v$ that output $f(v) = i$ form an independent set for each $1 \leq i \leq c$. \\ Corresponding distributed edge problem: \emph{edge $c$-coloring}. \item Distributed node problem: \emph{maximal node $c$-coloring}. \\ Definition: the output of each node $v \in V(G)$ is $f(v) \in \{ 1, \dots, c \}$; the nodes $v$ that output $f(v) = i$ form an independent set for each $1 \leq i \leq c - 1$; if a node $v \in V(G)$ outputs $f(v) = c$, then there exists at least one neighbor $u_{i}$ of $v$ with $f(u_{i}) = i$ for each $1 \leq i \leq c - 1$. \\ Corresponding distributed edge problem: \emph{maximal edge $c$-coloring}. \item Distributed node problem: \emph{incremental node $c$-coloring}. \\ Definition: the output of each node $v \in V(G)$ is $f(v) \in \{ 1, \dots, c \}$; the nodes $v$ that output $f(v) = i$ form an independent set for each $1 \leq i \leq c - 1$; if a node $v \in V(G)$ outputs $f(v) = i$, $2 \leq i \leq c$, then $v$ admits at least $i - 1$ neighbors $u$ with $f(u) \leq i - 1$. \\ Corresponding distributed edge problem: \emph{incremental edge $c$-coloring}. \end{itemize} For some background, we note that maximal independent set, maximal matching, node $c$-coloring, and edge $c$-coloring are arguably the four most extensively studied local symmetry breaking problems in theoretical distributed computing and their algorithmic investigation dates back to the early days of this field \cite{Valiant1982parallel, ColeV1986deterministic, Luby1986simple, AlonBI1986fast, IsraeliI1986fast}. 
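The definitions above are local in the sense required for LCL problems: legality can be verified by inspecting each node's output together with its neighbors' outputs. A minimal, illustrative checker for the maximal independent set variant (hypothetical helper code, not part of the algorithms in this paper):

```python
def check_mis(adj, f):
    """Verify the maximal-independent-set conditions from the text:
    f(v) in {1, 2}; nodes with output 1 form an independent set;
    every node with output 2 has at least one neighbor with output 1.
    Each node's check reads only its own and its neighbors' outputs."""
    for v, nbrs in adj.items():
        if f[v] not in (1, 2):
            return False
        if f[v] == 1 and any(f[u] == 1 for u in nbrs):
            return False  # two adjacent nodes in the independent set
        if f[v] == 2 and all(f[u] != 1 for u in nbrs):
            return False  # not maximal: v could join the set
    return True

# 4-cycle: alternating 1/2 outputs form a legal configuration
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
```

The other node problems above admit checkers of exactly the same one-hop shape, which is what makes them amenable to local self-stabilization.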
The maximal node $c$-coloring problem was introduced in \cite{Luby1986simple} as a natural generalization of maximal independent set (derived by fixing $c = 2$) and node $(\Delta + 1)$-coloring (derived by fixing $c = \Delta + 2$); accordingly, maximal edge $c$-coloring is a natural generalization of maximal matching and edge $(2 \Delta - 1)$-coloring. To the best of our knowledge, the incremental node $c$-coloring and incremental edge $c$-coloring problems have not been studied previously; they also generalize the maximal independent set and maximal matching problems, respectively (again, by fixing $c = 2$). The performance guarantees of the self-stabilizing algorithms we develop for the aforementioned eight LCL problems are listed in Table~\ref{table:concrete-problems}. The reader may notice that the fully adaptive run-time bounds of our self-stabilizing algorithms are logarithmic in the ``natural instance parameters'', i.e., in $k$ and $\Delta$ (in the case of incremental node/edge $c$-coloring, this holds as long as $c = O (1)$); with an algorithm-specific analysis, we actually eliminate the dependency on $\Delta$ for maximal matching. To the best of our knowledge, these are the first self-stabilizing algorithms for any non-trivial LCL problem that work with reasonable size messages and admit (provable) fully-adaptive run-time guarantees. In fact, the logarithmic run-time bound compares favorably with the existing literature on self-stabilizing algorithms for LCL problems also when we ignore its adaptivity, taking $k = |V(G)|$. Moreover, the algorithms presented in the current paper are among the first self-stabilizing algorithms for non-trivial LCL problems that are anonymous and size-uniform. 
Building on prior art \cite{KothapalliSOS2006}, it is fairly straightforward to prove that for each one of the problems listed in Table~\ref{table:concrete-problems}, when restricted to an $n$-node cycle graph, any anonymous self-stabilizing algorithm with constant size messages requires $\Omega (\log k)$ time in expectation to stabilize from $k$ transient faults for any $k \leq n$. Since the message size bounds of our algorithms depend only on $\Delta$ (and in some cases also on $c \leq O (\Delta)$), we conclude that the adaptive run-time bounds of our anonymous size-uniform algorithms are asymptotically optimal in terms of their dependency on $k$. \paragraph{Paper's Outline.} The remainder of this paper is organized as follows. The computational model is presented in Sec.{}~\ref{section:model-and-preliminaries} together with some related definitions that serve us throughout the paper. In Sec.{}~\ref{section:technical-challenges}, we discuss the technical challenges that arise from developing our transformer and provide some intuition as to the measures taken to overcome them. The interface of the node-LCL transformer --- namely, the properties that (the fault free) algorithm \ensuremath{\mathcal{A}}{} must satisfy to become eligible for our transformer and the performance guarantees of the resulting self-stabilizing algorithm \ensuremath{\mathcal{A}_{\mathit{ST}}}{} --- is presented in Sec.{}~\ref{section:transformer-nodes-interface}. Sec.{}\ \ref{section:transformer-nodes-implementation} and \ref{section:transformer-nodes-analysis} are dedicated to the node-LCL transformer's implementation and analysis, respectively; a key component in this regard is a generic module developed in Sec.{}~\ref{section:probabilistic-phase-synchronization}. In Sec.{}~\ref{section:simulation-line-graph}, we explain how the node-LCL self-stabilizing algorithms produced by our transformer can be simulated on the line graph of a given graph to solve the corresponding edge-LCL problems. 
As this simulation may be inefficient in terms of its message size overhead, we also develop an edge-LCL transformer that can be applied to fault free algorithms designed (directly) for distributed edge problems. The interface, implementation and analysis of the edge-LCL transformer are presented in Sec.{}\ \ref{section:transformer-edges-interface}, \ref{section:transformer-edges-implementation}, and \ref{section:transformer-edges-analysis}, respectively. In Sec.{}~\ref{section:concrete-problems}, we develop fault free algorithms for the problems listed in Table~\ref{table:concrete-problems} and establish their transformer eligibility. We conclude in Sec.{}~\ref{section:related-work} with a related work comparison and further discussion. \section{Computational Model and Preliminaries} \label{section:model-and-preliminaries} \paragraph{Graphs.} Throughout, the term graph refers to an undirected graph, whereas the term digraph refers to a directed graph. We denote the node set and edge set of a graph/digraph $G$ by $V(G)$ and $E(G)$, respectively. Consider a graph $G$. The \emph{line graph} of $G$, denoted by $L(G)$, is the graph defined by setting $V(L(G)) = E(G)$ and $E(L(G)) = \{ \{ e, e' \} \mid e, e' \in E(G), |e \cap e'| = 1 \}$. The set of neighbors of a node $v \in V(G)$ is denoted by $N_{G}(v) = \{ u \in V(G) \mid \{ u, v \} \in E(G) \}$ and the degree of $v$ is denoted by $\mathrm{d}_{G}(v) = |N_{G}(v)|$. The set of neighbors of an edge $e \in E(G)$ is denoted by $N_{G}(e) = N_{L(G)}(e)$ and the degree of $e$ is denoted by $\mathrm{d}_{G}(e) = |N_{G}(e)|$. For a node subset $W \subseteq V(G)$ (resp., edge subset $F \subseteq E(G)$), let $G(W)$ (resp., $G(F)$) denote the subgraph induced by $W$ (resp., $F$) on $G$ and let $G - W = G(V(G) - W)$ (resp., $G - F = G(E(G) - F)$). The (hop-)distance in $G$ between nodes $u, v \in V(G)$ is denoted by $\delta_{G}(u, v)$. 
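The line-graph construction defined above can be made concrete in a few lines (illustrative sketch; edges of $G$ are represented as frozensets of their two endpoints, so the adjacency test is exactly $|e \cap e'| = 1$):

```python
from itertools import combinations

def line_graph(edges):
    """Build L(G): nodes are the edges of G; two are adjacent iff they
    share exactly one endpoint, per the definition in the text."""
    edges = [frozenset(e) for e in edges]
    lg = {e: set() for e in edges}
    for e, f in combinations(edges, 2):
        if len(e & f) == 1:
            lg[e].add(f)
            lg[f].add(e)
    return lg

# path a-b-c-d: consecutive edges share a node, so L(G) is a 3-node path
lg = line_graph([("a", "b"), ("b", "c"), ("c", "d")])
```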
Given a node subset $W \subseteq V(G)$, we denote the distance in $G$ between $u$ and $W$ by $\delta_{G}(u, W) = \min_{v \in W} \delta_{G}(u, v)$, adhering to the convention that $\delta_{G}(u, \emptyset) = \infty$. Given a positive integer $\Delta$, let $\mathcal{U}_{\Delta}$ denote the collection of all finite and countably infinite graphs whose degrees are up-bounded by $\Delta$. Let $\mathcal{U} = \bigcup_{\Delta = 1}^{\infty} \mathcal{U}_{\Delta}$ denote the collection of all finite and infinite graphs with finite degrees. \paragraph{Multisets.} A multiset $M$ over a finite ground set $S$ can be represented as a vector $M \in \mathbb{Z}_{\geq 0}^{S}$ so that $M(s)$ indicates the multiplicity of $s \in S$. The size of $M$ is defined to be the sum of the entries in its vector representation, denoted by $|M| = \sum_{s \in S} M(s)$. The empty multiset over $S$, denoted by $\emptyset$, is the unique multiset of size $0$. For $d \in \mathbb{Z}_{\geq 0}$, let $\mathcal{M}_{d}(S) = \{ M \in \mathbb{Z}_{\geq 0}^{S} \, : \, |M| = d \}$ denote the collection of all multisets over $S$ of size $d$ and let $\mathcal{M}(S) = \bigcup_{d = 0}^{\infty} \mathcal{M}_{d}(S)$ denote the set of all multisets over $S$ of a finite size. Given a multiset $M \in \mathcal{M}(S)$ and an element $s \in S$, we consider $s$ to be included in $M$, denoted by $s \in M$, if $M(s) \geq 1$. Given two multisets $M, M' \in \mathcal{M}(S)$, we consider $M$ to be a subset of $M'$ (and $M'$ to be a superset of $M$), denoted by $M \subseteq M'$ (or $M' \supseteq M$) if $M(s) \leq M'(s)$ for each $s \in S$; we consider $M$ to be a strict subset of $M'$ (and $M'$ to be a strict superset of $M$), denoted by $M \subset M'$ (or $M' \supset M$) if $M \subseteq M'$ and $M \neq M'$. We note that the subset relation is a partial order that induces a lattice on $\mathcal{M}(S)$. 
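The multiset notions above map directly onto `collections.Counter`; a small illustrative sketch of the subset partial order and the induced lattice (join and meet are the element-wise maximum and minimum of multiplicities):

```python
from collections import Counter

def subset(m, m2):
    """M is a subset of M' iff M(s) <= M'(s) for every s in the ground set
    (missing keys of a Counter have multiplicity 0)."""
    return all(m[s] <= m2[s] for s in set(m) | set(m2))

def join(m, m2):
    """Least upper bound in the multiset lattice: element-wise maximum."""
    return Counter({s: max(m[s], m2[s]) for s in set(m) | set(m2)})

def meet(m, m2):
    """Greatest lower bound: element-wise minimum over the common support."""
    return Counter({s: min(m[s], m2[s]) for s in set(m) & set(m2)})

a = Counter({"x": 2, "y": 1})  # |a| = 3
b = Counter({"x": 1, "y": 3})  # |b| = 4; a and b are incomparable
```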
\paragraph{Configurations.} Fix a set $\mathcal{O}$ of output values and let $\bot$ be a designated symbol that does not belong to $\mathcal{O}$. Consider a graph $G \in \mathcal{U}$. A \emph{node configuration} (resp., \emph{edge configuration}) is a function $C : V(G) \rightarrow \mathcal{O} \cup \{ \bot \}$ (resp., $C : E(G) \rightarrow \mathcal{O} \cup \{ \bot \}$) that assigns to each node in $V(G)$ (resp., edge in $E(G)$) either an output value in $\mathcal{O}$ or $\bot$.\footnote{% We emphasize that a node/edge configuration is a purely combinatorial notion. In particular, a node configuration should not be confused with the global state of a distributed algorithm running on the graph $G$ (discussed in the sequel).} (The term configuration by itself is reserved hereafter for node configurations.) Given a subgraph $H$ of $G$, let $C_{H}$ denote the configuration over $H$ obtained from $C$ by restricting it to the nodes in $V(H)$ (resp., edges in $E(H)$). We say that a node $v \in V(G)$ (resp., an edge $e \in E(G)$) is \emph{decided} under $C$ if $C(v) \in \mathcal{O}$ (resp., $C(e) \in \mathcal{O}$); if $C$ maps $v$ (resp., $e$) to $\bot$, then we say that $v$ (resp., $e$) is \emph{undecided}. Let $D(C) = \{ v \in V(G) \mid C(v) \in \mathcal{O} \}$ (resp., $D(C) = \{ e \in E(G) \mid C(e) \in \mathcal{O} \}$) denote the set of nodes (resp., edges) that are decided under $C$; let $U(C) = V(G) - D(C)$ (resp., $U(C) = E(G) - D(C)$) denote the set of nodes (resp., edges) that are undecided under $C$. The configuration $C$ is said to be \emph{complete} if $D(C) = V(G)$ (resp., $D(C) = E(G)$); when we wish to emphasize that some nodes (resp., edges) are undecided under $C$, we refer to $C$ as being \emph{incomplete}. 
For a node $v \in V(G)$ (resp., edge $e \in E(G)$), let $D_{v}(C) = D(C) \cap N_{G}(v)$ (resp., $D_{e}(C) = D(C) \cap N_{G}(e)$) denote the set of decided nodes (resp., edges) among the neighbors of $v$ (resp., $e$) in $G$; let $U_{v}(C) = N_{G}(v) - D_{v}(C)$ (resp., $U_{e}(C) = N_{G}(e) - D_{e}(C)$) denote the set of undecided nodes (resp., edges) among the neighbors of $v$ (resp., $e$) in $G$. Let $C[v]$ (resp., $C[e]$) denote the multiset consisting of the output values $C(u)$ (resp., $C(f)$) for all $u \in D_{v}(C)$ (resp., $f \in D_{e}(C)$). \paragraph{Distributed Problems.} A \emph{distributed node problem} (resp., \emph{distributed edge problem}) is a $3$-tuple $\ensuremath{\mathcal{P}} = \langle \mathcal{O}, \mathcal{G}, \{ \mathcal{C}_{G} \}_{G \in \mathcal{G}} \rangle$, where $\mathcal{O}$ is a set of output values, $\mathcal{G} \subseteq \mathcal{U}$ is a family of graphs, and $\mathcal{C}_{G} \subseteq \mathcal{O}^{V(G)}$ (resp., $\mathcal{C}_{G} \subseteq \mathcal{O}^{E(G)}$) is a collection of \emph{legal} complete node (resp., edge) configurations for graph $G \in \mathcal{G}$.\footnote{% Our techniques can be extended to cover also distributed node/edge problems in which the nodes/edges are hard-coded with an input value, but this goes beyond the scope of the current paper.} \paragraph{Distributed Algorithms.} Consider a connected message passing communication network represented by a graph $G \in \mathcal{U}_{\Delta}$. The nodes of $G$ are associated with identical (possibly randomized) state machines and we refer to the vector encoding all node states as the network's \emph{global state}. 
These state machines operate in synchronous \emph{rounds} so that in each round $t \in \mathbb{Z}_{\geq 0}$, every node $v \in V(G)$ executes the following operations: \\ (1) $v$ performs local computation and updates its state; \\ (2) $v$ sends messages to (a subset of) its neighbors; and \\ (3) $v$ receives the messages sent to it by its neighbors in round $t$. \\ Throughout, we assume that round $t$ occurs during the time interval $[t, t + 1)$ so that time $t$ marks the beginning of the round. We adhere to the conventions of the \emph{port numbering} model of distributed graph algorithms: Node $v \in V(G)$ has a port $i_{u} \in \{ 1, \dots, \mathrm{d}_{G}(v) \}$ for each neighbor $u \in N_{G}(v)$ such that from $v$'s perspective, $u$ is regarded as neighbor $i_{u}$. The nodes do not have unique identifiers and they are not assumed to know the graph size (if the graph is finite). The only global information held by the nodes is the degree bound $\Delta$. Consider a distributed node problem $\ensuremath{\mathcal{P}} = \langle \mathcal{O}, \mathcal{G}, \{ \mathcal{C}_{G} \}_{G \in \mathcal{G}} \rangle$ and an algorithm $\ensuremath{\mathcal{A}}$ for $\ensuremath{\mathcal{P}}$. When $\ensuremath{\mathcal{A}}$ runs on a graph $G \in \mathcal{G}$, the state of each node $v \in V(G)$ includes a designated \emph{output register} (typically among other registers) denoted by $\mathtt{out}_{v}$. The output register $\mathtt{out}_{v}$ holds an output value in $\mathcal{O}$ or $\bot$ if $v$ has no output value. We refer to the node configuration $C : V(G) \rightarrow \mathcal{O} \cup \{ \bot \}$ defined by setting $C(v) = \mathtt{out}_{v}$ for each node $v \in V(G)$ as the \emph{configuration of $\ensuremath{\mathcal{A}}$}. Node $v \in V(G)$ is said to be \emph{decided} (resp., \emph{undecided}) under $\ensuremath{\mathcal{A}}$ if it is decided (resp., undecided) under $C$. 
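As an informal illustration of the synchronous round structure in the port numbering model, one round can be simulated centrally as follows (this simulator and the names \texttt{run\_round}, \texttt{compute}, and \texttt{outgoing} are ours, introduced only for this sketch), with \texttt{adj[v]} listing the neighbors of \texttt{v} in port order:

```python
def run_round(adj, states, compute, outgoing):
    """Simulate one synchronous round in the port numbering model.

    adj[v] lists v's neighbors in port order, so port i of v corresponds
    to adj[v][i - 1]; nodes have no identifiers beyond their ports.
    """
    new_states = {v: compute(v, states[v]) for v in adj}   # (1) local computation
    inboxes = {v: {} for v in adj}
    for v in adj:                                          # (2) send on each port
        for port, u in enumerate(adj[v], start=1):
            msg = outgoing(v, new_states[v], port)
            if msg is not None:
                # deliver to u under the port that u uses for v
                inboxes[u][adj[u].index(v) + 1] = msg
    return new_states, inboxes                             # (3) receive
```

On a three-node path where every node sends its updated state on all ports, each node's inbox ends up holding its neighbors' states, keyed by port number.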
Let $\mathtt{out}_{v, t}$ denote the value of register $\mathtt{out}_{v}$ at time $t$ and let $C_{t} : V(G) \rightarrow \mathcal{O} \cup \{ \bot \}$ be the corresponding configuration of \ensuremath{\mathcal{A}}{}. Consider a distributed edge problem $\ensuremath{\mathcal{P}} = \langle \mathcal{O}, \mathcal{G}, \{ \mathcal{C}_{G} \}_{G \in \mathcal{G}} \rangle$ and an algorithm $\ensuremath{\mathcal{A}}$ for $\ensuremath{\mathcal{P}}$. When $\ensuremath{\mathcal{A}}$ runs on a graph $G \in \mathcal{G}$, the state of each node $v \in V(G)$ includes a designated \emph{output register} (typically among other registers) denoted by $\mathtt{out}_{v}(i)$ for each port $i \in \{ 1, \dots, \mathrm{d}_{G}(v) \}$; to simplify the exposition, we often write $\mathtt{out}_{v}(u)$ instead of $\mathtt{out}_{v}(i)$ when port $i$ of $v$ corresponds to $u \in N_{G}(v)$. The output register $\mathtt{out}_{v}(u)$ holds an output value in $\mathcal{O}$ or $\bot$ if $v$ has no output value for edge $e = \{ u, v \}$; edge $e$ is said to be \emph{port-consistent} under \ensuremath{\mathcal{A}}{} if $\mathtt{out}_{v}(u) = \mathtt{out}_{u}(v)$, and \emph{port-inconsistent} otherwise. We refer to the edge configuration $C : E(G) \rightarrow \mathcal{O} \cup \{ \bot \}$ defined by setting $C(\{ u, v \}) = \mathtt{out}_{v}(u) = \mathtt{out}_{u}(v)$ (resp., $C(\{ u, v \}) = \bot$) for each port-consistent (resp., port-inconsistent) edge $\{ u, v \} \in E(G)$ as the \emph{configuration of $\ensuremath{\mathcal{A}}$}. Edge $e \in E(G)$ is said to be \emph{decided} (resp., \emph{undecided}) under $\ensuremath{\mathcal{A}}$ if it is decided (resp., undecided) under $C$. Let $\mathtt{out}_{v, t}(u)$ denote the value of register $\mathtt{out}_{v}(u)$ at time $t$ and let $C_{t} : E(G) \rightarrow \mathcal{O} \cup \{ \bot \}$ be the corresponding configuration of \ensuremath{\mathcal{A}}{}. \paragraph{Fault Free Algorithms.} Fix a positive integer $\Delta$. 
Consider a family $\mathcal{G} \subseteq \mathcal{U}_{\Delta}$ of graphs with degree bound $\Delta$ and a distributed node/edge problem $\ensuremath{\mathcal{P}} = \langle \mathcal{O}, \mathcal{G}, \{ \mathcal{C}_{G} \}_{G \in \mathcal{G}} \rangle$. An algorithm for $\ensuremath{\mathcal{P}}$ is said to be \emph{fault free} if its correctness relies on the assumption that the execution commences from an initial global state determined by the algorithm designer. More formally, a fault free algorithm \ensuremath{\mathcal{A}}{} is associated with local states $S_{d}$, $0 \leq d \leq \Delta$, in which the output register holds $\bot$; at the beginning of the execution, each node of degree $d$ is assigned the state $S_{d}$. It is guaranteed that if the graph is finite (and assuming that no faults occur), then the execution reaches a legal configuration for $\ensuremath{\mathcal{P}}$ in finite time with probability $1$ and that this configuration does not change subsequently. \paragraph{Phase-Based Algorithms.} In this paper, we focus on a class of fault free algorithms referred to as \emph{phase-based}. The execution of a phase-based algorithm $\ensuremath{\mathcal{A}}$ for a distributed node (resp., edge) problem $\ensuremath{\mathcal{P}}$ is divided into \emph{phases} so that every phase consists of $\phi$ rounds, where $\phi \geq 2$ is a parameter of $\ensuremath{\mathcal{A}}$. The rounds within a phase are referred to as \emph{steps}, indexed by $j = 0, 1, \dots, \phi - 1$. The execution of $\ensuremath{\mathcal{A}}$ progresses by running the phases in succession so that every node executes step $0 \leq j \leq \phi - 1$ of phase $i = 0, 1, \dots$ in round $t = i \phi + j$. Consider the graph $G \in \mathcal{G}$ on which the phase-based algorithm $\ensuremath{\mathcal{A}}$ runs. 
The operation of $\ensuremath{\mathcal{A}}$ is fully specified by a \emph{phase procedure}, denoted by \ensuremath{\mathtt{Phase}}{}, that dictates the actions of a node $v \in V(G)$ in each step $j = 0, 1, \dots, \phi - 1$ of phase $i$. A key feature of phase-based algorithms is that the code of \ensuremath{\mathtt{Phase}}{} for step $j$ is oblivious to the phase number $i$. As such, \ensuremath{\mathtt{Phase}}{} should be seen as a procedure that is invoked from scratch at the beginning of every phase and runs for $\phi$ rounds. Moreover, at the beginning of each phase, node $v$ resets all its registers with the exception of the output register $\mathtt{out}_{v}$ (resp., output registers $\mathtt{out}_{v}(u)$ for $u \in N_{G}(v)$), which means that the only information passed from one application of \ensuremath{\mathtt{Phase}}{} to the next is the node (resp., edge) configuration associated with $\ensuremath{\mathcal{A}}$. In other words, all registers of $\ensuremath{\mathcal{A}}$ other than the output registers are temporary registers whose usage is confined to the scope of a single phase. Among the registers that a node $v$ resets at the beginning of phase $i$ are the registers that store the messages sent to $v$ from its neighbors in the last step of phase $i - 1$. This means, in particular, that $v$ loses track of its neighbors' states, including the content of their output registers, when a new phase begins. To compensate for this loss, we require that the first step $j = 0$ of the phase, referred to as the \emph{announcement step}, is a special step in which each node $v \in V(G)$ sends $\mathtt{out}_{v}$ (resp., $\mathtt{out}_{v}(u)$) to each one of its neighbors $u \in N_{G}(v)$.\footnote{% The announcement step is redundant in the context of fault free algorithms for distributed edge problems as all edges are supposed to be port-consistent. 
We add it nonetheless for the sake of compatibility.} The last step $j = \phi - 1$ of the phase, referred to as the \emph{decision step}, has a special role as well: this is the only step in which the nodes are allowed to write into their output registers. \paragraph{Self-Stabilization and Fully Adaptive Run-Time.} An algorithm $\ensuremath{\mathcal{A}}$ for $\ensuremath{\mathcal{P}}$ is \emph{self-stabilizing} if it is guaranteed to reach a legal configuration in finite time with probability $1$ from any initial global state. In the current paper, this notion is captured by a malicious \emph{adversary} that can modify the content of any register maintained by $\ensuremath{\mathcal{A}}$ (essentially modifying the nodes' states), including the registers that hold the incoming messages. The adversary can also impose dynamic \emph{topology changes}, including the addition and removal of nodes and edges, as long as the resulting graph remains in $\mathcal{G}$. The only piece of information that the adversary cannot modify is the degree bound $\Delta$, assumed to be hard-coded into the nodes. To simplify the discussion, we assume that the adversarial manipulations that occur in round $t \in \mathbb{Z}_{\geq 0}$ take place towards the end of the round (say, at time $t + 1 - \epsilon$), i.e., after the messages sent in round $t$ have reached their destination and before the local computation of round $t + 1$ begins (this assumption is without loss of generality). Consider a graph $G \in \mathcal{G}$ on which $\ensuremath{\mathcal{A}}$ runs. Node $v \in V(G)$ is said to be \emph{manipulated} by the adversary if either \\ (i) the content of (any of) $v$'s registers is modified; \\ (ii) the bijection between $v$'s ports and $N_{G}(v)$ is modified; or \\ (iii) $v$ is added to the graph as a new node. \\ Notice that condition (ii) includes topological changes involving the edges incident on $v$ as well as ``rewiring'' of $v$'s ports. 
Recall that $C_{t}$ denotes the (node/edge) configuration of \ensuremath{\mathcal{A}}{}'s execution on $G$ at time $t$. Consider times $t^{*}_{a} < t^{*}_{b}$ and an integer $k > 0$ and suppose that \\ (1) configuration $C_{t^{*}_{a} - 1} = C_{t^{*}_{a}}$ is legal; \\ (2) the adversary does not manipulate any node in round $t^{*}_{a} - 1$; \\ (3) the adversary manipulates $k$ nodes during the round interval $[t^{*}_{a}, t^{*}_{b} - 1]$; and \\ (4) the adversary does not manipulate any node from round $t^{*}_{b}$ onwards. \\ We say that \ensuremath{\mathcal{A}}{} has \emph{fully adaptive run-time} $T(\Delta, k)$ for a function $T : \mathbb{Z}_{> 0} \times \mathbb{Z}_{> 0} \rightarrow \mathbb{Z}_{> 0}$ if there exists a time $t > t^{*}_{b}$ such that \\ (i) $C_{t}$ is a legal configuration for the graph resulting from the adversarial manipulations, i.e., the graph at time $t^{*}_{b}$; \\ (ii) $C_{t'} = C_{t}$ for every $t' \geq t$; and \\ (iii) $t \leq t^{*}_{b} + T(\Delta, k)$. \\ If $t$ and $C_{t}$ are random variables determined by the coin tosses of \ensuremath{\mathcal{A}}{}, then we require that condition (iii) holds in expectation. It is important to point out that the number $k$ of manipulated nodes is chosen by the adversary; the self-stabilizing algorithm does not know $k$ nor does it know any approximation of $k$. \paragraph{Locally Checkable Labelings.} A \emph{locally checkable labeling (LCL)} over the output value set $\mathcal{O}$ is a family $\ensuremath{\mathcal{L}} = \{ \ensuremath{\ell}_{d} \}_{d = 0}^{\infty}$ of predicates such that for every $d \in \mathbb{Z}_{\geq 0}$, the predicate $\ensuremath{\ell}_{d}$ is a function $\ensuremath{\ell}_{d} : \mathcal{O} \times \mathcal{M}_{d}(\mathcal{O}) \rightarrow \{ \mathit{true}, \mathit{false} \}$. Fix an LCL $\ensuremath{\mathcal{L}} = \{ \ensuremath{\ell}_{d} \}_{d = 0}^{\infty}$ over the set $\mathcal{O}$ of output values and consider a graph $G \in \mathcal{U}$. 
A decided node $v \in D(C)$ is said to be \emph{content} under $C$ (with respect to $\ensuremath{\mathcal{L}}$) if $\ensuremath{\ell}_{|D_{v}(C)|}(C(v), C[v]) = \mathit{true}$, recalling that $|D_{v}(C)| = |C[v]|$; otherwise, $v$ is said to be \emph{uncontent}. Let $\Gamma(C)$ be the set of nodes that are content under $C$ and let $\Lambda(C) = D(C) - \Gamma(C)$ be the set of nodes that are (decided yet) uncontent under $C$. The configuration $C$ is said to be \emph{strong} if $\Gamma(C) = D(C)$, namely, if every decided node is also content. The edges of the graph $G$ can also be classified into content and uncontent with respect to the LCL $\ensuremath{\mathcal{L}} = \{ \ensuremath{\ell}_{d} \}_{d = 0}^{\infty}$. Consider an edge configuration $C : E(G) \rightarrow \mathcal{O} \cup \{ \bot \}$. A decided edge $e \in D(C)$ is said to be \emph{content} under $C$ (with respect to $\ensuremath{\mathcal{L}}$) if the node $e$ is content under $C$ with respect to $\ensuremath{\mathcal{L}}$ in the line graph $L(G)$, recalling that the nodes of $L(G)$ are identified with the edges of $G$; otherwise, $e$ is said to be \emph{uncontent}. Let $\Gamma(C)$ be the set of edges that are content under $C$ and let $\Lambda(C) = D(C) - \Gamma(C)$ be the set of edges that are (decided yet) uncontent under $C$. The configuration $C$ is said to be \emph{strong} if $\Gamma(C) = D(C)$, namely, if every decided edge is also content. We emphasize that the distinction between content and uncontent nodes/edges applies only to decided nodes/edges. The notion of LCLs allows us to define an important class of distributed problems. 
An \emph{LCL distributed node problem (node-LCL)} (resp., \emph{LCL distributed edge problem (edge-LCL)}) is a $3$-tuple $\ensuremath{\mathcal{P}} = \langle \mathcal{O}, \mathcal{G}, \ensuremath{\mathcal{L}} \rangle$, where $\mathcal{O}$ is a set of output values, $\mathcal{G} \subseteq \mathcal{U}$ is a family of graphs, and $\ensuremath{\mathcal{L}} = \{ \ensuremath{\ell}_{d} \}_{d = 0}^{\infty}$ is an LCL over the output value set $\mathcal{O}$. The legal node (resp., edge) configurations for $\ensuremath{\mathcal{P}}$ are defined as follows: Given a graph $G \in \mathcal{G}$, a complete node (resp., edge) configuration $C : V(G) \rightarrow \mathcal{O}$ (resp., $C : E(G) \rightarrow \mathcal{O}$) is legal for $\ensuremath{\mathcal{P}}$ if and only if node $v$ (resp., edge $e$) is content under $C$ with respect to $\ensuremath{\mathcal{L}}$ for every $v \in V(G)$ (resp., $e \in E(G)$). \paragraph{Configured Graphs.} Consider a node-LCL (resp., edge-LCL) $\ensuremath{\mathcal{P}} = \langle \mathcal{O}, \mathcal{G}, \ensuremath{\mathcal{L}} \rangle$. A \emph{configured graph} for $\ensuremath{\mathcal{P}}$ is a pair $(G, C)$, where $G \in \mathcal{G}$ and $C \in \mathcal{O}^{V(G)}$ (resp., $C \in \mathcal{O}^{E(G)}$); let $\mathcal{CG}(\ensuremath{\mathcal{P}})$ denote the collection of all configured graphs for $\ensuremath{\mathcal{P}}$. A configured graph $(G, C) \in \mathcal{CG}(\ensuremath{\mathcal{P}})$ is called a \emph{strongly configured graph} if $C$ is strong for $G$ (under $\ensuremath{\mathcal{P}}$); let $\mathcal{SCG}(\ensuremath{\mathcal{P}})$ denote the collection of all strongly configured graphs for $\ensuremath{\mathcal{P}}$. 
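To make the contentness condition concrete, the following Python sketch (our illustration) evaluates a decided node against an LCL predicate; the proper-coloring predicate in the usage example below is a hypothetical stand-in, not one of the paper's concrete problems:

```python
from collections import Counter

BOT = None  # plays the role of the undecided symbol ⊥

def neighborhood_multiset(C, neighbors):
    """C[v]: the multiset of output values of v's decided neighbors."""
    return Counter(C[u] for u in neighbors if C[u] is not BOT)

def is_content(C, v, neighbors, predicate):
    """A decided node v is content iff ℓ_d(C(v), C[v]) holds, with d = |C[v]|."""
    assert C[v] is not BOT, "contentness applies only to decided nodes"
    return predicate(C[v], neighborhood_multiset(C, neighbors))
```

With the predicate \texttt{lambda o, M: M[o] == 0} (a decided node is content precisely when no decided neighbor shares its value, i.e., proper coloring), a complete configuration is legal if and only if \texttt{is\_content} holds at every node.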
\section{Technical Challenges and Intuition} \label{section:technical-challenges} Consider a distributed node (resp., edge) problem $\ensuremath{\mathcal{P}} = \langle \mathcal{O}, \mathcal{G}, \{ \mathcal{C}_{G} \}_{G \in \mathcal{G}} \rangle$ and a fault free algorithm \ensuremath{\mathcal{A}}{} for $\ensuremath{\mathcal{P}}$, running on a graph $G \in \mathcal{G}$. In this section, we discuss the technical challenges that arise when one wishes to transform \ensuremath{\mathcal{A}}{} into a self-stabilizing algorithm \ensuremath{\mathcal{A}_{\mathit{ST}}}{} for $\ensuremath{\mathcal{P}}$. We also present the requirements we impose on $\ensuremath{\mathcal{P}}$ and \ensuremath{\mathcal{A}}{} and provide some intuition as to how these requirements allow us to overcome the aforementioned challenges. The first (and easiest) challenge is implementing a fault detection mechanism: While the fault free execution of \ensuremath{\mathcal{A}}{} is guaranteed to reach a legal configuration (and to remain in this configuration), when it comes to \ensuremath{\mathcal{A}_{\mathit{ST}}}{}, we must be able to detect faults introduced by the adversarial manipulations. To this end, we require that $\ensuremath{\mathcal{P}}$ is a node-LCL (resp., edge-LCL) $\ensuremath{\mathcal{P}} = \langle \mathcal{O}, \mathcal{G}, \ensuremath{\mathcal{L}} \rangle$, thus ensuring that illegal configurations can be detected locally (see Prop.{}\ \ref{property:nodes:detectability} and \ref{property:edges:detectability}). Upon detecting an uncontent node $v \in V(G)$ (resp., edge $\{ u, v \} \in E(G)$), the output register $\mathtt{out}_{v}$ is reset to $\mathtt{out}_{v} \gets \bot$ (resp., output registers $\mathtt{out}_{u}(v)$ and $\mathtt{out}_{v}(u)$ are reset to $\mathtt{out}_{u}(v), \mathtt{out}_{v}(u) \gets \bot$). 
The next challenge is concerned with ensuring that a small number $k$ of manipulated nodes do not develop into a ``reset avalanche'', leading to a large number of undecided nodes (resp., edges); this is crucial for bounding the stabilization time of the self-stabilizing algorithm as a function of $k$, independently of the size of $G$. Indeed, it is not difficult to devise LCL problems and legal configurations for these problems in which the manipulation of a single node triggers an arbitrarily long cascade of resets. To overcome this difficulty, we require that $\ensuremath{\mathcal{L}}$ admits a bounded \emph{influence number}, a parameter that allows us to control the distance to which faults can propagate (see Prop.{}\ \ref{property:nodes:bounded-influence} and \ref{property:edges:bounded-influence}). Perhaps the most interesting challenges are those involved with recovering from an invalid global state. First, by employing certain requirements imposed on $\ensuremath{\mathcal{P}}$ (see Prop.{}\ \ref{property:nodes:core-coverage} and \ref{property:edges:core-coverage}) and \ensuremath{\mathcal{A}}{} (see Prop.{}\ \ref{property:nodes:respectful-decisions} and \ref{property:edges:respectful-decisions}), we prove that starting from a certain (deterministic) time $t_{s} > t^{*}_{b}$, it is guaranteed that all configurations are strong, which means that resets no longer occur. From that time on, the task of converging to a valid global state reduces to the task of augmenting an (arbitrary) incomplete strong configuration into a complete one. To accomplish this task, we require that \ensuremath{\mathcal{A}}{} is a phase-based algorithm (see Prop.{}\ \ref{property:nodes:phase-based} and \ref{property:edges:phase-based}). 
One may hope that the robustness of phase-based algorithms alone ensures that repeated invocations of the phase procedure \ensuremath{\mathtt{Phase}}{} eventually ``take care'' of all undecided nodes (resp., edges), leading to a complete (legal) configuration. Unfortunately, this (informal) argument hides a significant flaw: The correctness of the phase-based algorithm \ensuremath{\mathcal{A}}{} relies on the assumption that \ensuremath{\mathtt{Phase}}{} runs at all nodes in synchrony so that step $j = 0, 1, \dots, \phi - 1$ of phase $i = 0, 1, \dots$ is executed by all nodes concurrently in round $t = i \phi + j$. To implement this, the nodes should maintain a \emph{modular clock} that keeps track of $j = t \bmod \phi$. While maintaining this modular clock in a fault free environment is a trivial task, the situation becomes trickier in the self-stabilizing setting: the adversary can modify the modular clocks, thus causing the corresponding nodes to execute their phases ``out of sync'' indefinitely. We cope with this difficulty by introducing a technique called \emph{probabilistic phase synchronization} (see Sec.{}~\ref{section:probabilistic-phase-synchronization}). In high-level terms, this technique replaces the (deterministic) modular clock by an (algorithm-independent) randomized module, referred to as \ensuremath{\mathtt{PPS}}{}, whose implementation is based on a simple ergodic Markov chain, maintained independently at each node. The state space of the Markov chain is $\{ \ensuremath{\mathit{\hbar}}, 0, 1, \dots, \phi - 1 \}$, where states $0, 1, \dots, \phi - 1$ are identified with the $\phi$ steps of \ensuremath{\mathtt{Phase}}{} and $\ensuremath{\mathit{\hbar}}$ is an auxiliary state whose role is to ensure that the nodes initiate new phases at random times. 
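For concreteness, one possible realization of such a chain is sketched below in Python; we stress that the transition rule (idling in $\ensuremath{\mathit{\hbar}}$ with some probability \texttt{p\_start} before starting step $0$) is an assumption made purely for illustration, and the chain actually used by \ensuremath{\mathtt{PPS}}{} may differ:

```python
import random

HBAR = 'hbar'  # the auxiliary state ℏ

def pps_step(state, phi, p_start=0.5, rng=random):
    """One transition over the state space {ℏ, 0, 1, ..., φ-1}.

    Assumed rule (illustrative only): in ℏ, start a new phase (move to
    step 0) with probability p_start, otherwise idle in ℏ; inside a
    phase, advance deterministically through steps 0, ..., φ-1 and then
    return to ℏ, so phases are initiated at random times.
    """
    if state == HBAR:
        return 0 if rng.random() < p_start else HBAR
    return state + 1 if state < phi - 1 else HBAR
```

A node then simulates step $j$ of \ensuremath{\mathtt{Phase}}{} exactly when its chain resides in state $j$, jointly with the neighbors whose chains reside in the same state.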
This facilitates a probabilistic synchronization mechanism among the nodes: A node $v \in V(G)$ residing in state $j \in \{ 0, 1, \dots, \phi - 1 \}$ of its \ensuremath{\mathtt{PPS}}{} Markov chain simulates step $j$ of \ensuremath{\mathtt{Phase}}{} in conjunction with (and only with) its neighbors whose own \ensuremath{\mathtt{PPS}}{} Markov chains also reside in state $j$. By bounding the mixing time of the Markov chain underlying \ensuremath{\mathtt{PPS}}{}, we guarantee that a node $v \in V(G)$ has ample opportunities to execute \ensuremath{\mathtt{Phase}}{} in synchrony with each neighbor $u \in N_{G}(v)$. Combined with an additional requirement that \ensuremath{\mathcal{A}}{} admits a certain type of potential function (see Prop.{}\ \ref{property:nodes:progress} and \ref{property:edges:progress}), we can establish (probabilistically) sufficient progress in each application of \ensuremath{\mathtt{Phase}}{}, ultimately bounding the fully adaptive run-time of \ensuremath{\mathcal{A}_{\mathit{ST}}}{}. Care must be taken though that the decisions of a node $v \in V(G)$ do not conflict with the decisions of neighboring nodes $u \in N_{G}(v)$ that are not phase-synchronized with $v$. To this end, we exploit the requirement that nodes may write into their output registers only during the decision step of each phase, which means that during its own decision step, node $v$ is aware of the decisions of $u$ for every neighbor $u \in N_{G}(v)$ that started a phase before $v$. \section{The Node-LCL Transformer --- Interface} \label{section:transformer-nodes-interface} This section presents the interface of our transformer in the context of distributed node problems. 
We say that a fault free algorithm \ensuremath{\mathcal{A}}{} for a node-LCL $\ensuremath{\mathcal{P}} = \langle \mathcal{O}, \mathcal{G}, \ensuremath{\mathcal{L}} \rangle$ is \emph{eligible} if the transformer can be applied to \ensuremath{\mathcal{A}}{}, turning it into a self-stabilizing algorithm for $\ensuremath{\mathcal{P}}$. For \ensuremath{\mathcal{A}}{} to be eligible, certain conditions must be met; these conditions are partitioned into properties of problem $\ensuremath{\mathcal{P}}$, presented in Sec.{}~\ref{section:properties-problem-nodes}, and properties of algorithm \ensuremath{\mathcal{A}}{} itself, presented in Sec.{}~\ref{section:properties-algorithm-nodes}. (In Sec.{}~\ref{section:concrete-problems}, we show that these properties are satisfied by the node-LCLs presented in Table~\ref{table:concrete-problems}.) The formal guarantees of the transformer when applied to (the eligible algorithm) \ensuremath{\mathcal{A}}{}, including the dependence of its fully adaptive run-time and message size on various parameters of $\ensuremath{\mathcal{P}}$ and \ensuremath{\mathcal{A}}{}, are stated in Sec.{}~\ref{section:transformer-statement-nodes}. \subsection{Properties of the Distributed Node Problem} \label{section:properties-problem-nodes} Fix a node-LCL $\ensuremath{\mathcal{P}} = \langle \mathcal{O}, \mathcal{G}, \ensuremath{\mathcal{L}} \rangle$, where $\mathcal{G} \subseteq \mathcal{U}_{\Delta}$ for a degree bound $\Delta$ and $\ensuremath{\mathcal{L}} = \{ \ensuremath{\ell}_{d} \}_{d = 0}^{\infty}$. In this section, we present Prop.{}\ \ref{property:nodes:hereditary-closure}--\ref{property:nodes:detectability} that should be satisfied by $\ensuremath{\mathcal{P}}$ as a necessary condition for any fault free algorithm for $\ensuremath{\mathcal{P}}$ to be eligible. We start with Property~\ref{property:nodes:hereditary-closure} that addresses the graph family $\mathcal{G}$. 
\begin{property}[\emph{hereditary closure}] \label{property:nodes:hereditary-closure} Node-LCL $\ensuremath{\mathcal{P}} = \langle \mathcal{O}, \mathcal{G}, \ensuremath{\mathcal{L}} \rangle$ satisfies the \emph{hereditary closure} property if $G \in \mathcal{G}$ implies that $G' \in \mathcal{G}$ for any subgraph $G'$ of $G$. \end{property} \paragraph{Cores and Supportive Digraphs.} Given an output value $o \in \mathcal{O}$, let $\mathcal{T}(o) = \{ M \in \mathcal{M}(\mathcal{O}) \mid \ensuremath{\ell}_{|M|}(o, M) = \mathit{true} \}$ denote the set of multisets for which $o$ is evaluated to $\mathit{true}$ under $\ensuremath{\mathcal{L}}$. \begin{property}[\emph{core coverage}] \label{property:nodes:core-coverage} Node-LCL $\ensuremath{\mathcal{P}} = \langle \mathcal{O}, \mathcal{G}, \ensuremath{\mathcal{L}} \rangle$ satisfies the \emph{core coverage} property if the following two conditions hold for each output value $o \in \mathcal{O}$: \\ (1) $\mathcal{T}(o) \neq \emptyset$; and \\ (2) for all $M, M', M'' \in \mathcal{M}(\mathcal{O})$, if $M \subseteq M' \subseteq M''$ and $M, M'' \in \mathcal{T}(o)$, then $M' \in \mathcal{T}(o)$. \end{property} Assuming that $\ensuremath{\mathcal{P}}$ satisfies the \emph{core coverage} property, we refer to a minimal element of the partially ordered set $(\mathcal{T}(o), \subseteq)$ as a \emph{core} of $o \in \mathcal{O}$ and denote the set of cores of $o$ by $\SatMultisets^{*}(o) \subseteq \mathcal{T}(o)$. 
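As an order-theoretic illustration (ours, not the paper's), the cores can be computed as the minimal elements of $(\mathcal{T}(o), \subseteq)$; the sketch below also computes the length of the longest directed path in the digraph that connects each output value $o$ to the members of its cores, anticipating the supportive digraph and influence number discussed next. The toy output values \texttt{'leader'}/\texttt{'follower'} in the usage example are hypothetical:

```python
def is_subset(M, Mp):
    """M ⊆ M' for multisets encoded as dicts mapping value -> multiplicity."""
    return all(M.get(s, 0) <= Mp.get(s, 0) for s in set(M) | set(Mp))

def cores(T):
    """The minimal elements of the poset (T(o), ⊆); T is a list of dicts."""
    return [M for M in T if not any(is_subset(Mp, M) and Mp != M for Mp in T)]

def influence_number(values, cores_of):
    """Length of the longest directed path in the digraph over output values
    with an edge from o to o' whenever o' belongs to some core of o,
    or None if this digraph has a cycle (no finite influence number)."""
    edges = {o: {op for M in cores_of(o) for op in M} for o in values}
    memo = {}
    def longest_from(o, stack):
        if o in stack:
            return None                  # directed cycle detected
        if o in memo:
            return memo[o]
        best = 0
        for op in edges[o]:
            sub = longest_from(op, stack | {o})
            if sub is None:
                return None
            best = max(best, 1 + sub)
        memo[o] = best
        return best
    lengths = [longest_from(o, frozenset()) for o in values]
    return None if None in lengths else max(lengths)
```

For example, if a (hypothetical) \texttt{'follower'} requires one \texttt{'leader'} among its decided neighbors while a \texttt{'leader'} requires nothing (its sole core is the empty multiset), the resulting digraph has a single edge and the influence number is $1$.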
Notice that $\SatMultisets^{*}(o)$ is a (non-empty) antichain of the partially ordered set $(\mathcal{T}(o), \subseteq)$.\footnote{% An antichain of a partially ordered set is a subset of mutually incomparable elements.} The notion of cores allows us to introduce the (abstract) \emph{supportive digraph} $D_{\ensuremath{\mathcal{L}}}$ of the LCL $\ensuremath{\mathcal{L}}$: the node set $V(D_{\ensuremath{\mathcal{L}}})$ of $D_{\ensuremath{\mathcal{L}}}$ is defined to be $V(D_{\ensuremath{\mathcal{L}}}) = \mathcal{O}$; the edge set $E(D_{\ensuremath{\mathcal{L}}})$ of $D_{\ensuremath{\mathcal{L}}}$ includes an edge from $o \in \mathcal{O}$ to $o' \in \mathcal{O}$ if (and only if) there exists a multiset $M \in \SatMultisets^{*}(o)$ such that $o' \in M$. In other words, the supportive digraph of $\ensuremath{\mathcal{L}}$ is defined over the set $\mathcal{O}$ of output values and includes an edge from output value $o$ to output value $o'$ if $o'$ belongs to some (at least one) core of $o$. This digraph plays a key role in the following property. \begin{property}[\emph{$\nu$-bounded influence}] \label{property:nodes:bounded-influence} The LCL $\ensuremath{\mathcal{L}} = \{ \ensuremath{\ell}_{d} \}_{d = 0}^{\infty}$ over the output values $\mathcal{O}$ is said to admit \emph{influence number} $\nu \in \mathbb{Z}_{\geq 0}$ if the longest (directed) path in the supportive digraph $D_{\ensuremath{\mathcal{L}}}$ is of length $\nu$ (which means in particular that $D_{\ensuremath{\mathcal{L}}}$ is acyclic). Node-LCL $\ensuremath{\mathcal{P}} = \langle \mathcal{O}, \mathcal{G}, \ensuremath{\mathcal{L}} \rangle$ satisfies the \emph{$\nu$-bounded influence} property if $\ensuremath{\mathcal{L}}$ admits influence number at most $\nu$. \end{property} \paragraph{Distributed Implementation of the Predicates.} Consider a graph $G \in \mathcal{G}$. 
Our transformer employs a distributed procedure referred to as the \emph{detection procedure}, denoted hereafter by \ensuremath{\mathtt{Detect}}{}, that ``implements'' the LCL $\ensuremath{\mathcal{L}}$. Formally, \ensuremath{\mathtt{Detect}}{} runs indefinitely on $G$ and does not access any of the nodes' registers, except for the output register $\mathtt{out}_{v}$, assumed to be maintained by each node $v \in V(G)$, to which \ensuremath{\mathtt{Detect}}{} has read-only access. Based on that, \ensuremath{\mathtt{Detect}}{} returns a Boolean value, denoted by $\ensuremath{\mathtt{Detect}}_{t}(v) \in \{ \mathit{true}, \mathit{false} \}$, for each node $v \in V(G)$ in every round $t$. Assuming that neither node $v \in V(G)$ nor any of its neighbors are manipulated in round $t - 1$, the correctness guarantee of \ensuremath{\mathtt{Detect}}{} is that $\ensuremath{\mathtt{Detect}}_{t}(v) = \mathit{false}$ if and only if $v$ is (decided and) uncontent under $C_{t}$ with respect to $\ensuremath{\mathcal{L}}$, recalling that $C_{t} : V(G) \rightarrow \mathcal{O} \cup \{ \bot \}$ is the (node) configuration defined by setting $C_{t}(v) = \mathtt{out}_{v, t}$. We emphasize that the detection procedure \ensuremath{\mathtt{Detect}}{} can (and typically does) run in parallel to other procedures that may occasionally modify the output registers (that \ensuremath{\mathtt{Detect}}{} reads). It is required that the correctness of \ensuremath{\mathtt{Detect}}{} holds for any configuration $C_{t}$, irrespective of any modification that the output registers may undergo during round $t$. 
\begin{property}[\emph{$\mu(\Delta)$-detectability}] \label{property:nodes:detectability} The LCL $\ensuremath{\mathcal{L}} = \{ \ensuremath{\ell}_{d} \}_{d = 0}^{\infty}$ over the output value set $\mathcal{O}$ is said to be \emph{$\mu(\Delta)$-detectable} for a function $\mu : \mathbb{Z}_{> 0} \rightarrow \mathbb{Z}_{> 0}$ if $\ensuremath{\mathcal{L}}$ admits a detection procedure whose message size is at most $\mu(\Delta)$ when it runs on a graph of degree bound $\Delta$. Node-LCL $\ensuremath{\mathcal{P}} = \langle \mathcal{O}, \mathcal{G}, \ensuremath{\mathcal{L}} \rangle$ satisfies the \emph{$\mu(\Delta)$-detectability} property if $\ensuremath{\mathcal{L}}$ is $\mu(\Delta)$-detectable.\footnote{% The notion of $\mu(\Delta)$-detectability becomes more relevant in the context of edge-LCLs (see Sec.{}~\ref{section:transformer-edges-interface}) as the detection procedure for a node-LCL is typically implemented so that each node $v \in V(G)$ simply shares $\mathtt{out}_{v}$ with its neighbors, resulting in messages of size $O (\log (|\mathcal{O}|))$. We add this property nonetheless for the sake of compatibility.} \end{property} \subsection{Properties of the Fault Free Algorithm} \label{section:properties-algorithm-nodes} Fix a node-LCL $\ensuremath{\mathcal{P}} = \langle \mathcal{O}, \mathcal{G}, \ensuremath{\mathcal{L}} \rangle$, where $\mathcal{G} \subseteq \mathcal{U}_{\Delta}$ for a degree bound $\Delta$ and $\ensuremath{\mathcal{L}} = \{ \ensuremath{\ell}_{d} \}_{d = 0}^{\infty}$, and a fault free algorithm \ensuremath{\mathcal{A}}{} for $\ensuremath{\mathcal{P}}$. In this section, we present Properties \ref{property:nodes:phase-based}--\ref{property:nodes:progress} that should be satisfied by \ensuremath{\mathcal{A}}{} as a necessary condition for its eligibility. \paragraph{Restricting the Phase Procedure.} Consider the graph $G \in \mathcal{G}$ on which $\ensuremath{\mathcal{A}}$ runs. 
Recall that if $\ensuremath{\mathcal{A}}$ is a phase-based algorithm, then it is fully specified by a phase length parameter $\phi$ and a phase procedure \ensuremath{\mathtt{Phase}}{} invoked (from scratch) at the beginning of every phase. Moreover, each phase starts with an announcement step $j = 0$, in which the nodes announce the content of their output register, and ends with a decision step $j = \phi - 1$, which is the only step in which the nodes may write into their output register. For the node-LCL algorithm $\ensuremath{\mathcal{A}}$ to be eligible, we add a few more requirements on the structure of \ensuremath{\mathtt{Phase}}{}. First, it is required that a decided node $v \in V(G)$ keeps on sending the content of its output register $\mathtt{out}_{v}$ to all its neighbors throughout the phase.\footnote{% This requirement is obviously redundant when \ensuremath{\mathtt{Phase}}{} operates under the fault free algorithm \ensuremath{\mathcal{A}}{}. As explained in the sequel though, it facilitates the construction of the self-stabilizing algorithm.} Moreover, node $v$ does not take any role in the operation of \ensuremath{\mathtt{Phase}}{} beyond informing its neighbors of its output value (in particular, the decided nodes do not need to keep track of the step number $j$). Let $U \subseteq V(G)$ be the set of nodes that are undecided at the beginning of the phase and recall that following the announcement step, each node $u \in U$ is aware of its undecided neighbors $u' \in N_{G}(u) \cap U$. During steps $j = 1, \dots, \phi - 2$, referred to as the phase's \emph{working stage}, an undecided node $u \in U$ may send messages only to its undecided neighbors. Moreover, during the working stage, $u$ should completely ``ignore'' its decided neighbors $v \in V(G) - U$, including the content of their output registers $\mathtt{out}_{v}$. 
More formally, the actions of $u$ during the working stage may depend on the number $\mathrm{d}_{G(U)}(u)$ of undecided neighbors that $u$ has and on the (global) degree bound $\Delta$, but they should not depend on the decisions $\mathtt{out}_{v}$ of its decided neighbors $v \in N_{G}(u) - U$, nor should they depend on the number $\mathrm{d}_{G}(u) - \mathrm{d}_{G(U)}(u)$ of decided neighbors that $u$ has. The only step of the phase in which an undecided node $u \in U$ may take into account the content $\mathtt{out}_{v}$ of the messages received from its decided neighbors $v \in N_{G}(u) - U$ is the decision step. This means that the decision of $u$ to write an output value $o \in \mathcal{O}$ into its output register $\mathtt{out}_{u}$ during the decision step may be affected by the multiset $\{ \mathtt{out}_{v} \mid v \in N_{G}(u) - U \}$. \begin{property}[\emph{$\phi$-phase-based}] \label{property:nodes:phase-based} Algorithm \ensuremath{\mathcal{A}}{} satisfies the \emph{$\phi$-phase-based} property if it belongs to the phase-based class of algorithms with a phase of length $\phi$. \end{property} Given a configuration $C : V(G) \rightarrow \mathcal{O} \cup \{ \bot \}$, let $\eta$ be the execution of the phase procedure \ensuremath{\mathtt{Phase}}{} on the graph $G$ whose output registers are set so that $\mathtt{out}_{v} = C(v)$ for each $v \in V(G)$ (recall that the initial content of all other registers is irrelevant as they are reset by \ensuremath{\mathtt{Phase}}{} when $\eta$ starts). Let $\ensuremath{\mathtt{Phase}}(G, C)$ be the configuration associated with \ensuremath{\mathtt{Phase}}{} when $\eta$ halts, i.e., after $\phi$ rounds. Notice that if \ensuremath{\mathtt{Phase}}{} is randomized, then $\ensuremath{\mathtt{Phase}}(G, C)$ is a random variable that depends on the coin tosses of the nodes $v \in V(G)$ during $\eta$. The following key property can now be stated.
\begin{property}[\emph{respectful decisions}] \label{property:nodes:respectful-decisions} The phase-based algorithm \ensuremath{\mathcal{A}}{} with phase procedure \ensuremath{\mathtt{Phase}}{} satisfies the \emph{respectful decisions} property if the following two conditions hold with probability $1$ for every graph $G \in \mathcal{G}$, configuration $C : V(G) \rightarrow \mathcal{O} \cup \{ \bot \}$, and undecided node subset $X \subseteq U(C)$: \\ (1) $\Gamma(C) \subseteq \Gamma(\ensuremath{\mathtt{Phase}}(G - X, C_{G - X}))$; and \\ (2) $U(C) \cap D(\ensuremath{\mathtt{Phase}}(G - X, C_{G - X})) \subseteq \Gamma(\ensuremath{\mathtt{Phase}}(G - X, C_{G - X}))$.\footnote{% The fact that the graph $G - X$ belongs to the graph family $\mathcal{G}$ is ensured by Property~\ref{property:nodes:hereditary-closure}.} \end{property} \paragraph{Potential Functions.} A function $\pi : \mathcal{CG}(\ensuremath{\mathcal{P}}) \rightarrow \mathbb{Z}_{\geq 0}$ is said to be a \emph{locally separable potential function} for $\ensuremath{\mathcal{P}}$ if there exists a family $\left\{ \sigma(M, M') \right\}_{M, M' \in \mathcal{M}(\mathcal{O})} \subset \mathbb{Z}_{\geq 0}$ of \emph{potential coefficients} such that \\ (1) $\sigma(M, M') = \sigma(M', M)$ for any $M, M' \in \mathcal{M}(\mathcal{O})$; \\ (2) if $M_{1} \subseteq M'_{1}$ and $M_{2} \subseteq M'_{2}$, then $\sigma(M_{1}, M_{2}) \geq \sigma(M'_{1}, M'_{2})$ for any $M_{1}, M'_{1}, M_{2}, M'_{2} \in \mathcal{M}(\mathcal{O})$; and \\ (3) $\pi(G, C) = \sum_{u, v \in U(C), \{ u, v \} \in E(G)} \sigma(C[u], C[v])$ for any $(G, C) \in \mathcal{CG}(\ensuremath{\mathcal{P}})$. \\ We refer to $\sigma(\emptyset, \emptyset)$ as the \emph{top potential coefficient} of $\pi$, observing that it up-bounds any other potential coefficient. 
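For concreteness, here is a minimal sketch (our own illustration, with hypothetical names) of a locally separable potential function in which every coefficient $\sigma(M, M')$ equals $1$; symmetry and monotonicity under multiset inclusion then hold trivially, and $\pi(G, C)$ simply counts the edges joining two undecided nodes:

```python
# Toy locally separable potential: sigma(M, M') = 1 for all multisets,
# which is symmetric and (weakly) monotone under multiset inclusion,
# so pi(G, C) counts edges whose endpoints are both undecided.
def sigma(M, M_prime):
    return 1  # constant coefficients; top coefficient sigma(emptyset, emptyset) = 1

def potential(edges, C):
    # C maps nodes to an output value, or None for undecided (bottom)
    return sum(sigma((), ())
               for (u, v) in edges
               if C[u] is None and C[v] is None)

edges = [(0, 1), (1, 2), (2, 3)]
C = {0: None, 1: None, 2: "a", 3: None}
print(potential(edges, C))   # 1: only edge (0, 1) joins two undecided nodes
```

Deciding a node can only remove undecided-undecided edges, so this toy potential is non-increasing as long as decided nodes stick to their decisions.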
\begin{property}[\emph{$(\beta, \sigma_{0})$-progress}] \label{property:nodes:progress} Algorithm \ensuremath{\mathcal{A}}{} satisfies the \emph{$(\beta, \sigma_{0})$-progress} property for a real $0 < \beta < 1$ and integer $\sigma_{0} \geq 0$ if $\ensuremath{\mathcal{P}}$ admits a locally separable potential function $\pi$ with top potential coefficient $\sigma_{0}$ such that for every strongly configured graph $(G, C) \in \mathcal{SCG}(\ensuremath{\mathcal{P}})$, it is guaranteed that \\ (1) $\mathbb{E}(\pi(G, \ensuremath{\mathtt{Phase}}(G, C))) \leq (1 - \beta) \cdot \pi(G, C)$; and \\ (2) if $\pi(G, C) = 0$, then the configuration $\ensuremath{\mathtt{Phase}}(G, C)$ is complete with probability $1$.\footnote{% Prop.{}~\ref{property:nodes:respectful-decisions} ensures that if $\ensuremath{\mathtt{Phase}}(G, C)$ is complete, then it is also legal.} \end{property} \subsection{The Main Theorem --- Node LCLs} \label{section:transformer-statement-nodes} We say that a fault free algorithm \ensuremath{\mathcal{A}}{} for a node-LCL $\ensuremath{\mathcal{P}} = \langle \mathcal{O}, \mathcal{G}, \ensuremath{\mathcal{L}} \rangle$ is \emph{$(\nu, \mu(\Delta), \phi, \beta, \sigma_{0})$-eligible} if \\ (I) $\ensuremath{\mathcal{P}}$ satisfies the hereditary closure property (Prop.{}~\ref{property:nodes:hereditary-closure}), the core coverage property (Prop.{}~\ref{property:nodes:core-coverage}), the $\nu$-bounded influence property (Prop.{}~\ref{property:nodes:bounded-influence}), and the $\mu(\Delta)$-detectability property (Prop.{}~\ref{property:nodes:detectability}); and \\ (II) \ensuremath{\mathcal{A}}{} satisfies the $\phi$-phase-based property (Prop.{}~\ref{property:nodes:phase-based}), the respectful decisions property (Prop.{}~\ref{property:nodes:respectful-decisions}), and the $(\beta, \sigma_{0})$-progress property (Prop.{}~\ref{property:nodes:progress}). \\ The guarantees of our transformer are cast in the following theorem.
\begin{theorem} \label{theorem:nodes:main} Consider a node-LCL $\ensuremath{\mathcal{P}} = \langle \mathcal{O}, \mathcal{G}, \ensuremath{\mathcal{L}} \rangle$, where $\mathcal{G} \subseteq \mathcal{U}_{\Delta}$ for a degree bound $\Delta$, and let \ensuremath{\mathcal{A}}{} be a fault free algorithm for $\ensuremath{\mathcal{P}}$ that uses messages of size $\mu_{\ensuremath{\mathcal{A}}}(\Delta)$. If \ensuremath{\mathcal{A}}{} is $(\nu, \mu(\Delta), \phi, \beta, \sigma_{0})$-eligible, then $\ensuremath{\mathcal{P}}$ admits a randomized self-stabilizing algorithm \ensuremath{\mathcal{A}_{\mathit{ST}}}{} that uses messages of size \[ O \left( \mu(\Delta) + \mu_{\ensuremath{\mathcal{A}}}(\Delta) + \log \phi \right) \] whose fully adaptive run time in response to $k$ node manipulations is \[ O \left( \left( \phi^{5} / \beta \right) \cdot \left( \log (k) + (\ensuremath{\nu} + \phi) \log (\Delta) + \log (\sigma_{0}) \right) \right) \, . \] Moreover, nodes whose distance from any manipulated node is at least $\nu + \phi + 1$ are guaranteed to maintain their original output value throughout the execution of \ensuremath{\mathcal{A}_{\mathit{ST}}}{} (with probability $1$). \end{theorem} \sloppy Recall that the graph $G \in \mathcal{G}$ on which the self-stabilizing algorithm \ensuremath{\mathcal{A}_{\mathit{ST}}}{} runs may be (countably) infinite and that the fully adaptive run-time guarantee promised in Thm.{}~\ref{theorem:nodes:main} holds if the adversary manipulates a finite node subset. Given the (deterministic) $\nu + \phi + 1$ bound on the ``radius of influence'', we can actually allow the adversary to manipulate an infinite node subset $S \subset V(G)$ as long as $V(G)$ can be partitioned into clusters $C_{1}, C_{2}, \dots$ such that the following two conditions hold for every $i \geq 1$: \\ (1) $|C_{i} \cap S| = k_{i}$ for some $k_{i} \in \mathbb{Z}_{> 0}$; and \\ (2) if $v \in C_{i} \cap S$, then $\delta_{G}(v, V(G) - C_{i}) > \nu + \phi + 1$.
\\ The theorem then guarantees that the expected recovery time of $G(C_{i})$ is at most $O \left( \left( \phi^{5} / \beta \right) \cdot \left( \log (k_{i}) + (\ensuremath{\nu} + \phi) \log (\Delta) + \log (\sigma_{0}) \right) \right)$ for each $i \geq 1$. \par\fussy \begin{remark*} It is interesting to point out that the classic \emph{$2$-hop node $c$-coloring} problem does not admit an eligible fault free algorithm. In this problem, the output of each node $v \in V(G)$ is $f(v) \in \{ 1, \dots, c \}$; for each $1 \leq i \leq c$, if a node $v \in V(G)$ outputs $f(v) = i$, then $f(u) \neq i$ for every node $u \in V(G)$ such that $\delta_{G}(u, v) \leq 2$. To see that our generic transformer is not applicable for this node-LCL, suppose that $G$ is a path of length $2$ of the form $(u, v, w)$ and consider the configuration $C$ defined by setting $C(u) = C(w) = 1$ and $C(v) = \bot$. Notice that nodes $u$ and $w$ are content under $C$, hence any application of the detection procedure under configuration $C$ returns $\mathit{true}$ for both $u$ and $w$; it returns $\mathit{true}$ also for $v$ as $v$ is undecided. At the same time, node $v$ cannot receive an output value $i \in \{ 1, \dots, c \}$ in a way that satisfies the respectful decisions property (Prop.{}~\ref{property:nodes:respectful-decisions}). Since each edge in $E(G)$ has a decided endpoint, it follows that $\pi(G, C) = 0$ for any locally separable potential function $\pi$, in contradiction to the $(\beta, \sigma_{0})$-progress property (Prop.{}~\ref{property:nodes:progress}), which requires that $v$ become decided with probability $1$ the next time it is engaged in an application of the phase procedure. \end{remark*} \section{Probabilistic Phase Synchronization} \label{section:probabilistic-phase-synchronization} In this section, we present the \ensuremath{\mathtt{PPS}}{} module whose role is to synchronize between phases executed by neighboring nodes.
Fix a node-LCL $\ensuremath{\mathcal{P}} = \langle \mathcal{O}, \mathcal{G}, \ensuremath{\mathcal{L}} \rangle$ and a fault free algorithm \ensuremath{\mathcal{A}}{} for $\ensuremath{\mathcal{P}}$ that we wish to transform into a self-stabilizing algorithm \ensuremath{\mathcal{A}_{\mathit{ST}}}{}.\footnote{% The current section is phrased for node-LCL algorithms; however, it applies verbatim also to edge-LCL algorithms.} Assume that \ensuremath{\mathcal{A}}{} satisfies the $\phi$-phase-based property and let \ensuremath{\mathtt{Phase}}{} be its corresponding phase procedure. Recall that the execution of \ensuremath{\mathtt{Phase}}{} in each round $t$ is determined by the corresponding step number $j = t \bmod \phi$ and that under \ensuremath{\mathcal{A}}{}, this value is maintained in a designated register, referred to as the modular clock. Under \ensuremath{\mathcal{A}_{\mathit{ST}}}{}, we can no longer rely on the modular clocks as they are exposed to adversarial manipulations (see the discussion in Sec.{}~\ref{section:technical-challenges}). The \ensuremath{\mathtt{PPS}}{} module offers an alternative mechanism that we now turn to describe. Fix some graph $G \in \mathcal{G}$ and consider a node $v \in V$. The \ensuremath{\mathtt{PPS}}{} module can be viewed as an ergodic Markov chain over the state space $\{ \ensuremath{\mathit{\hbar}}, 0, 1, \dots, \phi - 1 \}$ that $v$ runs independently of the other nodes. The current state of ($v$'s copy of) \ensuremath{\mathtt{PPS}}{} is stored in a register denoted by $\mathit{step}_{v}$; \ensuremath{\mathtt{Phase}}{}'s queries of the modular clock under \ensuremath{\mathcal{A}}{} are replaced under \ensuremath{\mathcal{A}_{\mathit{ST}}}{} by calls to procedure \ensuremath{\mathtt{Get\_Step\_PPS}}{} that simply returns the current value of $\mathit{step}_{v}$. Besides \ensuremath{\mathtt{Get\_Step\_PPS}}{}, the \ensuremath{\mathtt{PPS}}{} module has one additional procedure whose role is to update the Markov chain.
This procedure is denoted by $\ensuremath{\mathtt{Step\_Counter\_\phi}}$ and it is invoked by \ensuremath{\mathtt{PPS}}{} in every round, towards the end of the local computation stage, after any call to \ensuremath{\mathtt{Get\_Step\_PPS}}{} has already been made (see Algorithm~\ref{pseudocode:step-counter} in Sec.{}~\ref{appendix:pseudocode-PPS} for a pseudocode of $\ensuremath{\mathtt{Step\_Counter\_\phi}}$ and Fig.{}~\ref{figure:markov-chain} for an illustration of the underlying Markov chain). We emphasize that $\ensuremath{\mathtt{Step\_Counter\_\phi}}$ is the only procedure that writes to the $\mathit{step}_{v}$ register; any other access of \ensuremath{\mathcal{A}_{\mathit{ST}}}{} to this register is restricted to the read-only procedure \ensuremath{\mathtt{Get\_Step\_PPS}}{}. The fundamental properties of $\ensuremath{\mathtt{Step\_Counter\_\phi}}$ are summarized in the following observation, where we take $\mathit{step}_{v, t}$ to denote the value of register $\mathit{step}_{v}$ at time $t$. \begin{observation}\label{observation:step-counter} For every time $t \geq t^{*}_{b}$ and node $v \in V$, we have \begin{itemize} \item $\Pr(\mathit{step}_{v, t + 1} = j \mid \mathit{step}_{v, t} = j - 1) = 1$ for every $1 \leq j \leq \phi - 1$; \item $\Pr(\mathit{step}_{v, t + 1} = \ensuremath{\mathit{\hbar}} \mid \mathit{step}_{v, t} = \phi - 1) = 1$; and \item $\Pr(\mathit{step}_{v, t + 1} = \ensuremath{\mathit{\hbar}} \mid \mathit{step}_{v, t} = \ensuremath{\mathit{\hbar}}) = \Pr(\mathit{step}_{v, t + 1} = 0 \mid \mathit{step}_{v, t} = \ensuremath{\mathit{\hbar}}) = 1 / 2$. \end{itemize} This holds independently of any coin toss of $v$ prior to time $t$ and of any coin toss of all other nodes. \end{observation} We now turn to bound the mixing time of the Markov chain in \ensuremath{\mathtt{PPS}}{}. To this end, let $S_{\phi} = \{ \ensuremath{\mathit{\hbar}}, 0, 1, \dots, \phi - 1 \}$ denote its state space. 
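The transition rule behind Obs.{}~\ref{observation:step-counter} can be sketched in a few lines (a simplified illustration under our own naming assumptions, not the pseudocode of Algorithm~\ref{pseudocode:step-counter}):

```python
import random

HIBERNATE = "h"   # stands for the hibernation state of the chain

def step_counter(state, phi, rng=random):
    """One update of the PPS Markov chain on {h, 0, 1, ..., phi - 1}."""
    if state == HIBERNATE:
        # self-loop or start a new phase, each with probability 1/2
        return HIBERNATE if rng.random() < 0.5 else 0
    if state == phi - 1:
        return HIBERNATE  # the decision step is followed by hibernation
    return state + 1      # deterministic advance within the phase

# The deterministic transitions of the observation:
assert step_counter(2, phi=5) == 3
assert step_counter(4, phi=5) == HIBERNATE
```

A quick balance computation for this chain gives the stationary probabilities $2 / (\phi + 2)$ for $\ensuremath{\mathit{\hbar}}$ and $1 / (\phi + 2)$ for every other state, matching the bounds of Lem.{}~\ref{lemma:start-phase-probability}.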
The following lemma is established by showing that this ergodic Markov chain belongs to a family of Markov chains that has been identified and analyzed in \cite{wilmer1999exact}; its proof is deferred to Appendix~\ref{appendix:proof:lemma:start-phase-probability}. \begin{lemma}\label{lemma:start-phase-probability} Fix some time $t_{0} \geq t^{*}_{b}$, node $v$, and $j_{0} \in S_{\phi}$. For every positive $\epsilon < 1$, there exists a time $\hat{t} = t_{0} + O (\log (1 / \epsilon) \cdot \phi^{3})$ such that for every $t \geq \hat{t}$, it holds that \[ \Pr \left( \mathit{step}_{v, t} = j \mid \mathit{step}_{v, t_{0}} = j_{0} \right) \, \geq \, \begin{cases} \frac{2}{\phi + 2} \cdot (1 - \epsilon) \, , & \text{if } j = \ensuremath{\mathit{\hbar}} \\ \frac{1}{\phi + 2} \cdot (1 - \epsilon) \, , & \text{otherwise} \end{cases} \, . \] This holds independently of any coin toss of $v$ prior to time $t_{0}$ and of any coin toss of all other nodes. \end{lemma} By plugging $\epsilon = 1 - \frac{\phi + 2}{2 \phi}$ into Lem.{}~\ref{lemma:start-phase-probability}, we obtain the following corollary. \begin{corollary} \label{corollary:start-phase-prob-fixed-epsilon} Fix some time $t_{0} \geq t^{*}_{b}$, node $v$, and $j_{0} \in S_{\phi}$. For $\tau = O (\phi^{3})$, it holds that $\Pr \left( \mathit{step}_{v, t_{0} + \tau} = 0 \mid \mathit{step}_{v, t_{0}} = j_{0} \right) \geq \frac{1}{2 \phi}$. This holds independently of any coin toss of $v$ prior to time $t_{0}$ and of any coin toss of all other nodes. \end{corollary} \section{The Node-LCL Transformer --- Implementation} \label{section:transformer-nodes-implementation} In this section, we explain how the transformer operates; refer to Algorithm~\ref{pseudocode:transforemer-for-node-LCL} in Sec.{}~\ref{appendix:pseudocodes-transformers} for a pseudocode. 
Consider a node-LCL $\ensuremath{\mathcal{P}} = \langle \mathcal{O}, \mathcal{G}, \ensuremath{\mathcal{L}} \rangle$, where $\mathcal{G} \subseteq \mathcal{U}_{\Delta}$ for a degree bound $\Delta$, and let \ensuremath{\mathcal{A}}{} be a fault free algorithm for \ensuremath{\mathcal{P}}{} that is $(\nu, \mu(\Delta), \phi, \beta, \sigma_{0})$-eligible. The transformer composes the self-stabilizing algorithm \ensuremath{\mathcal{A}_{\mathit{ST}}}{} promised in Thm.{}~\ref{theorem:nodes:main} from three components: \\ (1) the detection procedure \ensuremath{\mathtt{Detect}}{} that realizes the $\mu(\Delta)$-detectability of $\ensuremath{\mathcal{P}}$; \\ (2) the phase procedure \ensuremath{\mathtt{Phase}}{} associated with the $\phi$-phase-based algorithm \ensuremath{\mathcal{A}}{}; and \\ (3) the \ensuremath{\mathtt{PPS}}{} module. \\ Any message $M$ sent under \ensuremath{\mathcal{A}_{\mathit{ST}}}{} is partitioned into three fields, one for each component, denoted by $M.\mathtt{detect}$, $M.\mathtt{phase}$, and $M.\mathtt{pps}$, respectively. Consider the graph $G \in \mathcal{G}$ on which \ensuremath{\mathcal{A}_{\mathit{ST}}}{} runs and some round $t \in \mathbb{Z}_{\geq 0}$. If a node $v \in V(G)$ is decided at time $t$, that is, $\mathtt{out}_{v, t} = o \in \mathcal{O}$, then $v$ starts round $t$ by calling \ensuremath{\mathtt{Detect}}{}, which is implemented over the $\mathtt{detect}$ field of its messages. If \ensuremath{\mathtt{Detect}}{} returns $\mathit{true}$, then $v$ sends its output value $o$ to all its neighbors over the $\mathtt{phase}$ field of its messages; if \ensuremath{\mathtt{Detect}}{} returns $\mathit{false}$, then $v$ resets $\mathtt{out}_{v} \gets \bot$ and turns on a flag denoted by $\ensuremath{wait}_{v}$ whose role is to prevent (the now undecided) $v$ from ``jumping'' into the middle of a phase (more on that soon); in both cases, round $t$ ends for $v$ with no further actions.
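A decided node's round can thus be sketched as follows (our own simplified rendering with assumed names; the actual pseudocode is Algorithm~\ref{pseudocode:transforemer-for-node-LCL}):

```python
# Sketch of round t for a decided node v under the self-stabilizing
# algorithm: if Detect approves, v keeps announcing its output over the
# phase field; otherwise v resets its output register and raises the
# wait flag so that it cannot jump into the middle of an ongoing phase.
def decided_node_round(v, out, wait, detect_result):
    if detect_result[v]:        # content: re-announce out_v
        return out[v]           # value sent over the phase field
    out[v] = None               # uncontent: reset the output register ...
    wait[v] = True              # ... and block v from becoming engaged
    return None                 # nothing is announced this round

out, wait = {7: "red"}, {7: False}
print(decided_node_round(7, out, wait, {7: False}))   # None
print(out[7], wait[7])                                # None True
```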
The undecided nodes simulate the operation of \ensuremath{\mathtt{Phase}}{}. To this end, we say that an undecided node $u \in V(G)$ is \emph{engaged} (in a phase) if $\mathit{step}_{u} \neq \ensuremath{\mathit{\hbar}}$ and $\ensuremath{wait}_{u} = \mathit{false}$. Two engaged nodes $u, u' \in V(G)$ are said to be \emph{phase-synchronized} if $\mathit{step}_{u} = \mathit{step}_{u'}$. Recalling that no adversarial manipulations occur after time $t^{*}_{b}$, Obs.{}~\ref{observation:step-counter} ensures that if nodes $u$ and $u'$ are phase-synchronized at time $t \geq t^{*}_{b}$ with $\mathit{step}_{u, t} = \mathit{step}_{u', t} = j$ for $0 \leq j \leq \phi - 2$, then $u$ and $u'$ are phase-synchronized at time $t + 1$ with $\mathit{step}_{u, t + 1} = \mathit{step}_{u', t + 1} = j + 1$. By writing $\mathit{step}_{u}$ into the $\mathtt{pps}$ field of their outgoing messages, the engaged nodes $u$ can identify their phase-synchronized neighbors whenever $\mathit{step}_{u} > 0$. Recalling that step $j = 0$ of \ensuremath{\mathtt{Phase}}{} is the announcement step, in which the nodes do nothing but share their output registers with their neighbors, this enables an engaged node $u$ to simulate an invocation of \ensuremath{\mathtt{Phase}}{} on the subgraph induced by all nodes that are phase-synchronized with $u$. Specifically, the invocation of \ensuremath{\mathtt{Phase}}{} runs for $\phi$ consecutive rounds, progressing as $\mathit{step}_{u} = 0, 1, \dots, \phi - 1$. The simulation of \ensuremath{\mathtt{Phase}}{} is implemented over the $\mathtt{phase}$ fields of the messages. Recall that the flag $\ensuremath{wait}_{v}$ is turned on when a node $v \in V(G)$ resets its output register. Moreover, as long as $\ensuremath{wait}_{v} = \mathit{true}$, node $v$ is prevented from becoming engaged.
We design \ensuremath{\mathcal{A}_{\mathit{ST}}}{} so that $v$ resets $\ensuremath{wait}_{v} \gets \mathit{false}$ when (and only when) $\mathit{step}_{v} = \ensuremath{\mathit{\hbar}}$. This means that $v$ may become engaged only after $\mathit{step}_{v}$ moves from $\mathit{step}_{v} = \ensuremath{\mathit{\hbar}}$ to $\mathit{step}_{v} = 0$, which is consistent with starting a phase from the beginning. \section{The Node-LCL Transformer --- Analysis} \label{section:transformer-nodes-analysis} Consider a node-LCL $\ensuremath{\mathcal{P}} = \langle \mathcal{O}, \mathcal{G}, \ensuremath{\mathcal{L}} \rangle$, where $\mathcal{G} \subseteq \mathcal{U}_{\Delta}$ for a degree bound $\Delta$, and let \ensuremath{\mathcal{A}}{} be a fault free algorithm for \ensuremath{\mathcal{P}}{} that is $(\nu, \mu(\Delta), \phi, \beta, \sigma_{0})$-eligible. Let \ensuremath{\mathtt{Detect}}{} be the detection procedure that realizes the $\mu(\Delta)$-detectability of $\ensuremath{\mathcal{P}}$, let \ensuremath{\mathtt{Phase}}{} be the phase procedure associated with the $\phi$-phase-based algorithm \ensuremath{\mathcal{A}}{}, and let $\pi$ be the locally separable potential function due to which \ensuremath{\mathcal{A}}{} satisfies the $(\beta, \sigma_{0})$-progress property (Prop.{}~\ref{property:nodes:progress}). Let \ensuremath{\mathcal{A}_{\mathit{ST}}}{} be the self-stabilizing algorithm produced from \ensuremath{\mathcal{A}}{} by the node-LCL transformer presented in Sec.~\ref{section:transformer-nodes-implementation}. Our goal in this section is to establish the correctness and run-time guarantees of \ensuremath{\mathcal{A}_{\mathit{ST}}}{}, proving Thm.{}~\ref{theorem:nodes:main}.
The proof consists of two parts: First, in Sec.{}~\ref{section:transformer-nodes-fault-recovery}, we prove that by time \[ t_{s} \, = \, t^{*}_{b} + \ensuremath{\nu} + \phi + 2 \, , \] the self-stabilizing algorithm \ensuremath{\mathcal{A}_{\mathit{ST}}}{} ``flushes'' the effect of the adversarial manipulations from its global state; more formally, from time $t_{s}$ onwards, the configuration of \ensuremath{\mathcal{A}_{\mathit{ST}}}{} is guaranteed to remain strong, which means, by \ensuremath{\mathcal{A}_{\mathit{ST}}}{}'s mode of operation, that the decided nodes stick to their decisions. Sec.{}~\ref{section:transformer-nodes-adaptive-run-time} is then dedicated to bounding the fully adaptive run-time of \ensuremath{\mathcal{A}_{\mathit{ST}}}{}, proving that it does not take long for \ensuremath{\mathcal{A}_{\mathit{ST}}}{} to stabilize after time $t_{s}$. \subsection{Fault Recovery} \label{section:transformer-nodes-fault-recovery} Recall that the adversarial manipulations may include the addition/removal of nodes/edges, hence the graph may change during the round interval $[t^{*}_{a}, t^{*}_{b}-1]$. In what follows, we reserve $G$ for the graph at time $t^{*}_{b}$, recalling that this is also the graph at any time $t \geq t^{*}_{b}$. Consider a graph $H \in \mathcal{G}$ and a configuration $C : V(H) \rightarrow \mathcal{O} \cup \{\bot\}$. We make extensive use of the following operator: given a node subset $U \subseteq U(C)$ and a function $f : U \rightarrow \mathcal{O}$, let $[C \nwarrow f]$ denote the configuration $C'$ defined by setting (1) $C'(u) = f(u)$ for every $u \in U$; and (2) $C'(v) = C(v)$ for every $v \in V(H) - U$. Recall that $C_{t}$ denotes the configuration of \ensuremath{\mathcal{A}_{\mathit{ST}}}{} at time $t \in \mathbb{Z}_{\geq 0}$. For a time $t \geq t^{*}_{b}$, let $\hat{D}_{t} = U(C_{t}) \cap D(C_{t + 1})$ be the subset of nodes that become decided (as a result of the decision step of \ensuremath{\mathtt{Phase}}{}) during round $t$.
Let $\hat{f}_{t} : \hat{D}_{t} \rightarrow \mathcal{O}$ be the function that corresponds to the decisions taken by the nodes in $\hat{D}_{t}$ during round $t$, i.e., $\hat{f}_{t}(v) = C_{t + 1}(v)$ for every $v \in \hat{D}_{t}$, and let $\hat{C}_{t} = [C_{t} \nwarrow \hat{f}_{t}]$. We are now ready to state the following fundamental observation. \begin{observation*} The following statements hold for every $t \geq t^{*}_{b} + 1$: \\ (1) $D(\hat{C}_{t}) = D(C_{t}) \cup D(C_{t + 1})$; \\ (2) $\hat{C}_{t}(v) = C_{t + 1}(v) = \hat{C}_{t + 1}(v)$ for every node $v \in D(C_{t + 1})$; \\ (3) $\Lambda(C_{t}) = D(C_{t}) \cap U(C_{t+1}) = D(\hat{C}_{t}) \cap U(C_{t+1})$; and \\ (4) $D(C_{t + 1}) = D(\hat{C}_{t}) - \Lambda(C_{t})$. \end{observation*} The remainder of this section is dedicated to proving the following lemma. \begin{lemma} \label{lemma:nodes:correctness} For every $t \geq t_{s}$, it holds that $(G, C_{t}) \in \mathcal{SCG}(\ensuremath{\mathcal{P}})$. Moreover, if $v \in D(C_{t})$, then $C_{t + 1}(v) = C_{t}(v)$. \end{lemma} Lem.{}~\ref{lemma:nodes:correctness} provides the cornerstone for the correctness of \ensuremath{\mathcal{A}_{\mathit{ST}}}{} as it ensures that if $C_{t}$ is complete for some time $t \geq t_{s}$, then $C_{t}$ is also legal and $C_{t'} = C_{t}$ for every $t' \geq t$. The proof of Lem.{}~\ref{lemma:nodes:correctness} relies on the following two lemmas for which we define \[ t_{0} \, = \, t^{*}_{b} + \phi + 1 \, . \] \begin{lemma} \label{lemma:nodes:after-t-0-decided-imples-content-in-hat} For every $t \geq t_{0}$, it holds that $D(C_{t}) \subseteq \Gamma(\hat{C}_{t-1})$. \end{lemma} \begin{proof} A node $v \in V(G)$ that runs the decision step of \ensuremath{\mathtt{Phase}}{} in round $t - 1$ has $\mathit{step}_{v, t - 1} = \phi - 1$. Taking $t' = t - \phi$, the design of \ensuremath{\mathtt{PPS}}{} ensures that $\mathit{step}_{v, t'} = 0$. 
By the definition of the phase procedure, we know that $v$ resets all the registers maintained by \ensuremath{\mathtt{Phase}}{} in round $t'$. Since $t' \geq t^{*}_{b} + 1$, it follows that the invocation of \ensuremath{\mathtt{Phase}}{} that starts in round $t'$ is completed by all involved nodes in a fault free manner. Consider a node $v \in D(C_{t})$. We argue that $v \notin \Lambda(C_{t - 1})$. Indeed, assume by contradiction that $v \in \Lambda(C_{t - 1})$. Since $v$ and all its neighbors are not manipulated in round $t - 2 > t^{*}_{b}$, it follows by the design of \ensuremath{\mathcal{A}_{\mathit{ST}}}{} and the guarantee of \ensuremath{\mathtt{Detect}}{} that $\ensuremath{\mathtt{Detect}}_{t - 1}(v) = \mathit{false}$. Hence $\mathtt{out}_{v, t} = \bot$, in contradiction to the assumption that $v \in D(C_{t})$. If $v \in U(C_{t - 1})$, then $v$ runs the decision step of \ensuremath{\mathtt{Phase}}{} in round $t - 1$. Otherwise ($v \in D(C_{t - 1})$), we know that $v \in \Gamma(C_{t - 1})$. In both cases, Prop.{}~\ref{property:nodes:respectful-decisions} can be applied to conclude that $v \in \Gamma(\hat{C}_{t - 1})$. \end{proof} \begin{lemma} \label{lemma:nodes:uncontent-directed-path-connection} Fix some time $t \geq t_{0}$. If $v \in \Lambda(C_{t})$, then the supportive digraph $D_{\ensuremath{\mathcal{L}}}$ admits a directed path $P$ emerging from $C_{t}(v)$ of length $t - t_{0}$. \end{lemma} \begin{proof} We establish the assertion by induction on $t$. The base case of $t = t_{0}$ is trivial as the digraph $D_{\ensuremath{\mathcal{L}}}$ admits a path of length $0$ emerging from $o$ for every $o \in \mathcal{O}$ including $o = C_{t}(v)$. Assume that the assertion holds for $t - 1$ and consider a node $v \in \Lambda(C_{t})$.
As $\Lambda(C_{t}) \subseteq D(C_{t})$, Lem.{}~\ref{lemma:nodes:after-t-0-decided-imples-content-in-hat} ensures that $v \in \Gamma(\hat{C}_{t - 1})$; let $o = \hat{C}_{t - 1}(v) \in \mathcal{O}$ be the output value of $v$ under $\hat{C}_{t - 1}$. Since $v \in \Gamma(\hat{C}_{t - 1})$, it follows that $\hat{C}_{t - 1}[v] \in \mathcal{T}(o)$. Let $\hat{M} = \hat{M}(v, t) \in \SatMultisets^{*}(o)$ be a core for $o$ such that $\hat{M} \subseteq \hat{C}_{t - 1}[v]$. We argue that $\hat{M} \neq \emptyset$. Indeed, recalling that $\hat{C}_{t - 1}[v] \supseteq C_{t}[v]$, if $\hat{M} = \emptyset$, then $v \in \Gamma(C_{t})$ by the core coverage property (Prop.{}~\ref{property:nodes:core-coverage}), in contradiction to $v \in \Lambda(C_{t})$. Let $\hat{B} = \hat{B}(v, t) \subseteq N_{G}(v)$ be a subset of $v$'s neighbors whose decisions under $\hat{C}_{t - 1}$ realize $\hat{M}$. For each $u \in \hat{B}$, either (1) $u \in D(C_{t})$ and $C_{t}(u) = \hat{C}_{t - 1}(u)$; or (2) $u \in U(C_{t})$. Recalling that $\hat{C}_{t - 1}[v] \supseteq C_{t}[v]$, if $C_{t}(u) = \hat{C}_{t - 1}(u)$ for every $u \in \hat{B}$, then $v \in \Gamma(C_{t})$ by the core coverage property (Prop.{}~\ref{property:nodes:core-coverage}), in contradiction to $v \in \Lambda(C_{t})$. Therefore, there must exist a node $v' \in \hat{B}$ such that $v' \in U(C_{t})$. Since $v' \in D(\hat{C}_{t - 1})$, it follows that $v' \in \Lambda(C_{t - 1})$. By the inductive hypothesis, the supportive digraph $D_{\ensuremath{\mathcal{L}}}$ admits a directed path $P'$ emerging from $o' = C_{t - 1}(v') \in \mathcal{O}$ of length $t - 1 - t_{0}$. Recalling that $o' \in \hat{M}$, we conclude that $(o, o') \in E(D_{\ensuremath{\mathcal{L}}})$. The assertion follows as $(o, o') \circ P'$ is a directed path in $D_{\ensuremath{\mathcal{L}}}$ emerging from $o$ of length $t - t_{0}$. \end{proof} We are now ready to establish Lem.{}~\ref{lemma:nodes:correctness}. 
\begin{proof}[Proof of Lem.{}~\ref{lemma:nodes:correctness}] The first part of the assertion follows from Lem.{}~\ref{lemma:nodes:uncontent-directed-path-connection} since $t_{s} = t_{0} + \nu + 1$ and since $\ensuremath{\mathcal{P}}$ satisfies the $\nu$-bounded influence property (Prop.{}~\ref{property:nodes:bounded-influence}). The second part of the assertion follows from the design of \ensuremath{\mathcal{A}_{\mathit{ST}}}{}, recalling that if $v \in \Gamma(C_{t})$ and neither $v$ nor any of $v$'s neighbors is manipulated in round $t - 1$, then $\ensuremath{\mathtt{Detect}}_{t}(v) = \mathit{true}$. \end{proof} \subsection{Fully Adaptive Run Time} \label{section:transformer-nodes-adaptive-run-time} Our goal in this section is to bound the fully adaptive run-time of \ensuremath{\mathcal{A}_{\mathit{ST}}}{}, proving that in response to $k$ node manipulations, the algorithm stabilizes by time $t^{*}_{b} + T$, where \begin{equation} \label{equation:nodes:bound-T} \mathbb{E}(T) \, \leq \, O \left( \left( \phi^{5} / \beta \right) \cdot \left( \log (k) + (\ensuremath{\nu} + \phi) \log (\Delta) + \log (\sigma_{0}) \right) \right) \, . \end{equation} In passing, we also prove that the nodes whose distance from the manipulated nodes is at least $\ensuremath{\nu} + \phi + 1$ retain their original output value (see Cor.{}~\ref{corollary:adversay-effect-radius}), thus establishing Thm.~\ref{theorem:nodes:main}. To bound the stabilization time $T$ of \ensuremath{\mathcal{A}_{\mathit{ST}}}{}, we define the random variables \[ T_{1} \, = \, \min \{ z \in \mathbb{Z}_{\geq 0} \, : \, \pi(G, C_{t_{s} + z}) = 0 \} \quad \text{and} \quad T_{2} \, = \, \min \{ z \in \mathbb{Z}_{\geq 0} \, : \, \text{$C_{t_{s} + T_{1} + z}$ is legal} \} \, . \] Recalling that $t_{s} = t^{*}_{b} + \ensuremath{\nu} + \phi + 2$, we conclude that $T = O (\ensuremath{\nu} + \phi) + T_{1} + T_{2}$, so it remains to bound $\mathbb{E}(T_{1})$ and $\mathbb{E}(T_{2})$.
We start with the following fundamental observation. \begin{observation*} The potential function $\pi$ satisfies $\pi(G, C_{t}) \geq \pi(G, C_{t + 1})$ for every $t \geq t_{s}$. \end{observation*} \begin{proof} Follows from Lem.{}~\ref{lemma:nodes:correctness} by the definition of a locally separable potential function. \end{proof} To bound $\mathbb{E}(T_{2})$, notice that $\pi(G, C) \geq \pi(G - X, C_{G - X})$ for any node subset $X \subseteq U(C)$ by the definition of a locally separable potential function. Combined with the $(\beta, \sigma_{0})$-progress property (Prop.{}~\ref{property:nodes:progress}), we conclude that after time $t_{s} + T_{1}$, each node $v \in U(C_{t_{s} + T_{1}}) \subseteq U(C_{t_{s}})$ participates in at most one phase of \ensuremath{\mathtt{Phase}}{} before it becomes decided (and remains decided thereafter). Employing Obs.{}~\ref{observation:step-counter}, we conclude, by a standard probabilistic argument, that $\mathbb{E}(T_{2}) \leq O (\phi + \log (|U(C_{t_{s}})|))$. The following lemma (established in the sequel) is employed to prove that \begin{equation} \label{equation:nodes:bound-T2} \mathbb{E}(T_{2}) \, \leq \, O \left( \log (k) + (\ensuremath{\nu} + \phi) \log (\Delta) \right) \, . \end{equation} \begin{lemma} \label{lemma:nodes:up-bound-undecided-edges} The number of undecided nodes at time $t_{s}$ satisfies \[ |U(C_{t_{s}})| \, \leq \, O \left( k \cdot \Delta^{\ensuremath{\nu} + \phi} \right) \, . \] \end{lemma} To bound $\mathbb{E}(T_{1})$, observe that by combining the definition of a locally separable potential function and Lem.{}~\ref{lemma:nodes:up-bound-undecided-edges}, we conclude that \begin{equation} \label{equation:nodes:bound-initial-potential} \pi(G, C_{t_{s}}) \, \leq \, O \left( \sigma_{0} \cdot k^{2} \Delta^{2 (\ensuremath{\nu} + \phi)} \right) \, . \end{equation} The desired bound is established by proving that this potential decreases sufficiently fast in expectation. 
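To illustrate the standard probabilistic argument invoked here, the following minimal sketch (in Python, with illustrative parameter values that are not taken from the paper) tracks a potential that contracts by a factor of $1 - \beta / (4 \phi^{2})$ per block of $\tau + \phi$ rounds and counts the blocks until the potential drops below $1$:

```python
import math

def blocks_until_depleted(pi0: float, beta: float, phi: float) -> int:
    """Count contraction blocks until the potential drops below 1.

    Each block models tau + phi rounds in which the potential shrinks
    (in expectation) by a factor of 1 - beta / (4 * phi ** 2).
    """
    q = beta / (4 * phi ** 2)  # per-block contraction rate
    n, pi = 0, pi0
    while pi >= 1:
        pi *= 1 - q
        n += 1
    return n

# Illustrative values (not taken from the paper).
beta, phi, pi0 = 0.5, 4.0, 1e6
n = blocks_until_depleted(pi0, beta, phi)

# Since -log(1 - q) >= q, the count is at most ceil(log(pi0) / q), i.e.,
# O((phi ** 2 / beta) * log(pi0)) blocks -- matching the shape of the bound.
q = beta / (4 * phi ** 2)
assert n <= math.ceil(math.log(pi0) / q)
```

Multiplying the block count by the block length $\tau + \phi = O(\phi^{3})$ recovers the $\phi^{5} / \beta$ factor appearing in the stated bound on $\mathbb{E}(T_{1})$.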
This is stated formally in the following lemma (established in the sequel), where $\tau = O (\phi^{3})$ is the parameter promised in Cor.{}~\ref{corollary:start-phase-prob-fixed-epsilon}. \begin{lemma} \label{lemma:nodes:expected-removed-potential} Fix some time $t \geq t_{s}$ and the global state at time $t$. The design of \ensuremath{\mathcal{A}_{\mathit{ST}}}{} ensures that \[ \mathbb{E}(\pi(G, C_{t + \tau + \phi})) \, \leq \, \left( 1 - \frac{\beta}{4 \phi^{2}} \right) \cdot \pi(G, C_{t}) \, . \] \end{lemma} Combining Lem.{}~\ref{lemma:nodes:expected-removed-potential} with (\ref{equation:nodes:bound-initial-potential}), we deduce by a standard probabilistic argument that \begin{equation} \label{equation:nodes:bound-T1} \mathbb{E}(T_{1}) \, \leq \, O \left( \left( \phi^{5} / \beta \right) \cdot \left( \log (k) + (\ensuremath{\nu} + \phi) \log (\Delta) + \log (\sigma_{0}) \right) \right) \, . \end{equation} The bound in (\ref{equation:nodes:bound-T}) now follows from (\ref{equation:nodes:bound-T2}) and (\ref{equation:nodes:bound-T1}). We now turn to establish Lem.{}~\ref{lemma:nodes:expected-removed-potential}. \begin{proof} [proof of Lem.{}~\ref{lemma:nodes:expected-removed-potential}] Throughout the proof we denote $C_{t}$ and $U(C_{t})$ by $C$ and $U$, respectively. Let $P_{\tau} = \{u \in U \mid \mathit{step}_{u,t+ \tau} = 0 \}$ be the random set of nodes that are undecided at time $t$ and whose step variable is $0$ at time $t + \tau$. After $P_{\tau}$ is determined (independently for each node by the design of the \ensuremath{\mathtt{PPS}}{} module), we reveal the set $P_{\tau}$ to an adversary. Conditioned on the event that $P_{\tau}$ is already determined, the adversary is allowed to determine the coin tosses of the nodes in $V(G)-P_{\tau}$ during the round interval $[t, t+ \tau + \phi - 2]$ and of the nodes in $P_{\tau}$ during the round interval $[t, t+ \tau - 1]$.
Let $P^{d}_{\tau} = P_{\tau} \cap D(C_{t+\tau})$ be the random set of nodes that are decided under the random configuration $C_{t+ \tau}$, let $P^{u}_{\tau} = P_{\tau} - P^{d}_{\tau}$ be the random set of nodes that are undecided under the random configuration $C_{t+ \tau}$, let $E^{u}_{P} = E(G(P^{u}_{\tau}))$, and let $E^{d}_{P} = E(G(P_{\tau})) - E^{u}_{P}$. Let $\Pi^{u}_{P} = \sum_{\{ u, v \} \in E^{u}_{P}}\sigma(C[u], C[v])$ (resp., $\Pi^{d}_{P} = \sum_{\{ u, v \} \in E^{d}_{P}} \sigma(C[u], C[v])$) be the random sum of the potential coefficients of the edges in $E^{u}_{P}$ (resp., $E^{d}_{P}$) under configuration $C$ and similarly let $\Pi_{P} = \sum_{\{ u, v \} \in E(G(P_{\tau}))} \sigma(C[u], C[v])$. Notice that $\Pi_{P} = \Pi^{u}_{P} + \Pi^{d}_{P}$. For every $e \in E(G(U))$, let $A_{e}$ be the event that $e \in E(G(P_{\tau}))$. By Cor.{}~\ref{corollary:start-phase-prob-fixed-epsilon}, for every edge $e \in E(G(U))$, it holds that \begin{equation} \label{eq:prob-start-pahse-edge} \Pr\left(\operatorname{A}_{e}\right) \geq \frac{1}{4\phi^{2}} \, . \end{equation} Let $\Pi_{U} = \sum_{\{ u, v \} \in E(G(U))} \sigma(C[u],C[v])$. Notice that by the definition of a potential function it holds that $\Pi_{U} = \pi(G, C)$. Using Eq.{}~\ref{eq:prob-start-pahse-edge}, we know that \begin{equation} \label{eq:expected-potetial-start-phase-edges} \mathbb{E} \left( \Pi_{P} \right) = \mathbb{E} \left( \Pi^{u}_{P} + \Pi^{d}_{P} \right) \geq \frac{1}{4 \phi^{2}} \Pi_{U} = \frac{1}{4 \phi^{2}} \pi(G,C) \, . \end{equation} We now turn to prove that at least a $\beta$ fraction, in expectation, out of the potential in $\Pi_{P}$ is removed by time $t + \tau + \phi$. Notice that all of the potential in $\Pi^{d}_{P}$ is removed by time $t + \tau +\phi$ since every edge that contributed potential to $\Pi^{d}_{P}$ under configuration $C$ has at least one decided endpoint at time $t + \tau +\phi$. 
Let $\bar{D} = (U-P_{\tau}) \cap D(C_{t+ \tau + \phi -1})$ be the random set of nodes in $U-P_{\tau}$ that are decided under $C_{t+ \tau + \phi -1}$, let $f: \bar{D} \rightarrow \mathcal{O}$ be the function such that $f(u) = C_{t+ \tau + \phi - 1}(u)$ for every $u \in \bar{D}$, and let $g : P^{d}_{\tau} \rightarrow \mathcal{O}$ be the function such that $g(u) = C_{t + \tau}(u)$ for every $u \in P^{d}_{\tau}$. Let $X = U - (P_{\tau} \cup \bar{D})$. The nodes in the random set $P^{u}_{\tau}$ start a phase in round $t + \tau$, and this phase can be viewed as being executed on the random graph $G-X$ with the random configuration $C^{-X} = [[C_{G-X} \nwarrow g] \nwarrow f]$. By the properties of the locally separable potential function $\pi$ and Lem.{}~\ref{lemma:nodes:correctness}, we know that \begin{align*} \pi(G-X, C^{-X}) &= \sum_{u,v \in U(C^{-X}), \{u,v\} \in E(G-X)}\sigma(C^{-X}[v],C^{-X}[u]) \\ &= \sum_{\{ u, v \} \in E^{u}_{P}} \sigma(C^{-X}[v],C^{-X}[u]) \\ &\leq \sum_{\{ u, v \} \in E^{u}_{P}} \sigma(C[v],C[u]) \\ &= \Pi^{u}_{P} \, . \end{align*} We conclude that \begin{equation} \label{eq:start-phase-potetial} \pi(G-X, C^{-X}) \leq \Pi^{u}_{P} \, . \end{equation} By the $(\beta, \sigma_{0})$-progress property (Prop.{}~\ref{property:nodes:progress}), it is guaranteed that \begin{align*} & \mathbb{E} \left( \pi(G-X ,\ensuremath{\mathtt{Phase}}(G-X,C^{-X})) \right) \\ = \, & \mathbb{E} \left(\mathbb{E} \left( \pi(G-X,\ensuremath{\mathtt{Phase}}(G-X,C^{-X})) \mid \pi(G-X,C^{-X}) \right)\right) \\ \leq \, & (1-\beta) \cdot \mathbb{E} \left( \pi(G-X,C^{-X}) \right) \\ \leq \, & (1-\beta) \cdot \mathbb{E} \left( \Pi^{u}_{P} \right) \, , \end{align*} where the last inequality holds due to Eq.{}~\ref{eq:start-phase-potetial}.
We can conclude that the total potential, in expectation, that is removed during the round interval $[t,t +\tau + \phi-1]$ is at least \begin{equation*} \mathbb{E} ( \Pi^{d}_{P} ) + \beta \mathbb{E} \left( \Pi^{u}_{P} \right) \geq \beta \cdot \left( \mathbb{E} ( \Pi^{d}_{P} ) + \mathbb{E} \left( \Pi^{u}_{P} \right) \right) \geq \beta \cdot \frac{1}{4 \phi^{2}} \pi(G,C) \, , \end{equation*} by Eq.{}~\ref{eq:prob-start-pahse-edge} and Eq.{}~\ref{eq:expected-potetial-start-phase-edges}. \end{proof} In the round interval $[t^{*}_{a}, t^{*}_{b}-1]$, the adversary may manipulate nodes. To distinguish between the configuration resulting from the execution of \ensuremath{\mathcal{A}_{\mathit{ST}}}{} and the graph and configuration on which the next round is executed, we formally define both. We denote by $G_{t} \in \mathcal{G}$ the graph at time $t$, i.e., the graph resulting from the adversarial manipulations in round $t - 1$, for every time $t \in \mathbb{Z}_{\geq 0}$. Since the graph may change only due to adversarial manipulations, we denote by $G_{a}$ the graph $G_{t}$ for every time $t \leq t^{*}_{a}$. Recall that we denote by $G$ the graph $G_{t^{*}_{b}}$. We denote by $\widetilde{C}_{t}$ the configuration of \ensuremath{\mathcal{A}_{\mathit{ST}}}{} that corresponds to the configuration after all nodes have finished their local computation stage in round $t$. Notice that configuration $\widetilde{C}_{t}$ is defined on the graph $G_{t}$ and that $\widetilde{C}_{t} = C_{t+1}$ for every time $t \leq t^{*}_{a} - 1$ and $t \geq t^{*}_{b}$. We now turn to redefine the set $\hat{D}_{t}$ and the configuration $\hat{C}_{t}$. For a time $t \geq t^{*}_{a} + 1$, let $\hat{D}_{t} = U(C_{t}) \cap D(\widetilde{C}_{t})$ be the subset of nodes that become decided (as a result of the decision step of \ensuremath{\mathtt{Phase}}{}) during round $t$.
Let $\hat{f}_{t}:\hat{D}_{t} \rightarrow \mathcal{O}$ be the function that corresponds to the decisions taken by the nodes in $\hat{D}_{t}$ during round $t$, i.e., $\hat{f}_{t}(v) = \widetilde{C}_{t}(v)$ for every $v \in \hat{D}_{t}$, and let $\hat{C}_{t} = [C_{t} \nwarrow \hat{f}_{t}]$. We denote by $K \subseteq V(G)$ the set of nodes in $G$ that were manipulated during the round interval $[t^{*}_{a}, t^{*}_{b}-1]$. Let $V_{0} = V(G) - K$ be the set of nodes in $G$ that were not manipulated. Notice that $|K| \leq k$. Let $V^{f}_{0} = \{ v \in V_{0} \mid \delta_{G}(v, K) \geq \phi + 1 \}$ be the set of nodes in $V_{0}$ that are at distance at least $\phi + 1$ from any manipulated node. Let $K_{t} \subseteq V(G_{t})$ be the set of nodes that were manipulated in round $t-1$. We define $\delta^{m}_{t}(v) = \min \{ \delta_{G_{t'}}(v, K_{t'}) \mid t^{*}_{a} \leq t' \leq t \}$ for every $v \in V_{0}$ and $t \geq t^{*}_{a}$. \begin{observation*} The following statements are true for every $t \geq t^{*}_{a}$: \\ (1) $N_{G_{t}}(v) = N_{G_{a}}(v) = N_{G}(v)$, for every $v \in V_{0}$; \\ (2) $\delta^{m}_{t}(v) \geq \delta_{G}(v, K)$, for every $v \in V_{0}$; \\ (3) $V^{f}_{0} \cap \Lambda(C_{t}) = V^{f}_{0} \cap D(C_{t}) \cap U(\widetilde{C}_{t}) = V^{f}_{0} \cap D(\hat{C}_{t}) \cap U(\widetilde{C}_{t})$; \\ (4) $V^{f}_{0} \cap D(C_{t+1}) = V^{f}_{0} \cap (D(\hat{C}_{t}) - \Lambda(C_{t}))$; \\ (5) $V^{f}_{0} \cap D(C_{t}) = V^{f}_{0} \cap D(\widetilde{C}_{t-1})$; \\ (6) $V^{f}_{0} \cap \Gamma(C_{t}) = V^{f}_{0} \cap \Gamma(\widetilde{C}_{t-1})$; and \\ (7) $\widetilde{C}_{t}(v) = \hat{C}_{t}(v) = C_{t+1}(v) = \hat{C}_{t+1}(v)$, for every $v \in V^{f}_{0} \cap D(C_{t+1})$. \end{observation*} \begin{lemma} \label{lemma:nodes:far-from-manipulated-and-decided-content-in-hat} For every $t \geq t^{*}_{a}$ it holds that $V^{f}_{0} \cap D(C_{t}) \subseteq V^{f}_{0} \cap \Gamma(\hat{C}_{t-1})$.
\end{lemma} \begin{proof} Every node $v \in V^{f}_{0}$ simulating the decision step of \ensuremath{\mathtt{Phase}}{} in round $t-1$ has $\mathit{step}_{v, t-1} = \phi - 1$, which means that $\mathit{step}_{v, t-\phi} = 0$ by the design of the \ensuremath{\mathtt{PPS}}{} module since $v$ was not manipulated. Let $t' = t - \phi$ and assume that $t' \geq t^{*}_{a}$. In round $t'$, node $v$ simulated step $0$ of \ensuremath{\mathtt{Phase}}{}, hence it initialized all the registers maintained by \ensuremath{\mathtt{Phase}}{}. Node $v$ and all the nodes $u \in V_{0}$ with $\delta^{m}_{t}(u) \geq \phi$ that are engaged in the phase that started at time $t'$ were not manipulated, hence they simulated an execution of \ensuremath{\mathtt{Phase}}{} based on messages they received that correspond to a full, fault free, execution of \ensuremath{\mathtt{Phase}}{}. Let $v \in V^{f}_{0} \cap D(C_{t})$. We first prove that $v \notin \Lambda(C_{t-1})$. Assume by contradiction that $v \in \Lambda(C_{t-1})$. Since $v$ and all of its neighbors were not manipulated in round $t-2$, it holds that $\ensuremath{\mathtt{Detect}}_{v,t-1} = \mathit{false}$ and $\widetilde{C}_{t-1}(v) = C_{t}(v) = \bot$, by the design of \ensuremath{\mathcal{A}_{\mathit{ST}}}{} and the guarantee of \ensuremath{\mathtt{Detect}}{}. Hence, $v \notin D(C_{t})$, a contradiction. If $v \in U(C_{t-1})$, then $v$ simulated the decision step of \ensuremath{\mathtt{Phase}}{} in round $t-1$ and by Prop.{}~\ref{property:nodes:respectful-decisions}, it holds that $v \in V^{f}_{0} \cap \Gamma(\hat{C}_{t-1})$. Otherwise ($v \in D(C_{t-1})$), every neighbor $u \in U_{v}(C_{t-1}) \cap D(\hat{C}_{t-1})$ simulated the decision step of \ensuremath{\mathtt{Phase}}{} in round $t-1$. By Prop.{}~\ref{property:nodes:respectful-decisions}, it holds that $v \in \Gamma(\hat{C}_{t-1})$. \end{proof} We denote by $C_{a}$ the configuration $C_{t^{*}_{a}}$. Consider some $o \in \mathcal{O}$.
We denote by $\mathit{depth}_{D_{\PredicateSet}}(o)$ the length of the longest directed path emerging from node $o$ in $D_{\ensuremath{\mathcal{L}}}$. \begin{lemma} \label{lemma:nodes:far-from-manipulated-keep-decision} For every $v \in V_{0}$, it holds that if $\delta_{G}(v, K) \geq \mathit{depth}_{D_{\PredicateSet}}(C_{a}(v)) + \phi + 1$, then $C_{t}(v) = C_{a}(v)$ for every $t \geq t^{*}_{a}$. \end{lemma} \begin{proof} The claim obviously holds for $t = t^{*}_{a}, \dots, t^{*}_{a} + \phi$, so we assume hereafter that $t \geq t^{*}_{a} + \phi + 1$. Recall that $\phi \geq 2$. Let $w \in V^{f}_{0}$ be some node and let $C_{a}(w) = o \in \mathcal{O}$. Assume that there exists a time $t \geq t^{*}_{a} + \phi + 1$ such that $C_{t}(w) \neq o$ and let $t$ be the first time at which this occurs. We know that $\widetilde{C}_{t-1}(w) = C_{t}(w) \neq o$. Since $C_{t-1}(w) = o$ and $w$ was not manipulated in round $t-2$, it holds that $w$ changed its decision since $\ensuremath{\mathtt{Detect}}_{w,t-1} = \mathit{false}$, by the construction of \ensuremath{\mathcal{A}_{\mathit{ST}}}{}. Since $w$ and all of its neighbors were not manipulated in round $t-2$, by the guarantee of \ensuremath{\mathtt{Detect}}{}, it holds that $w$ is uncontent under $C_{t-1}$. Since $w \in D(C_{t-1})$ and $t \geq t^{*}_{a}$, by Lem.{}~\ref{lemma:nodes:far-from-manipulated-and-decided-content-in-hat}, it holds that $w \in \Gamma(\hat{C}_{t-2})$. We denote by $\hat{M}^{w}_{t-2} \in \SatMultisets^{*}(o)$ some core for $o$ such that $\hat{M}^{w}_{t-2} \subseteq \hat{C}_{t-2}[w] \in \mathcal{T}(o)$. We prove the claim by induction on $\mathit{depth}_{D_{\PredicateSet}}(\cdot)$. Let $v \in V_{0}$ be some node such that $\delta_{G}(v, K) \geq \mathit{depth}_{D_{\PredicateSet}}(C_{a}(v)) + \phi + 1$ and $\mathit{depth}_{D_{\PredicateSet}}(C_{a}(v)) = 0$ and let $C_{a}(v) = o \in \mathcal{O}$.
Assume by contradiction that there exists a time $t \geq t^{*}_{a} + \phi +1$ such that $C_{t}(v) \neq o$ and let $t$ be the first time at which this occurs. Since $\mathit{depth}_{D_{\PredicateSet}}(o) = 0$, every core for $o$ is the empty set, hence $\hat{M}^{v}_{t-2} = \emptyset$. The only difference between configurations $\hat{C}_{t-2}$ and $\widetilde{C}_{t-2}$ is that decided nodes under $\hat{C}_{t-2}$ may become undecided under $\widetilde{C}_{t-2}$. By the definition of a core, $v \in \Gamma(\widetilde{C}_{t-2})$ and thus $v \in \Gamma(C_{t-1})$, which is a contradiction. \sloppy Let $v \in V_{0}$ be some node such that $\delta_{G}(v, K) \geq \mathit{depth}_{D_{\PredicateSet}}(C_{a}(v)) + \phi + 1$ and $\mathit{depth}_{D_{\PredicateSet}}(C_{a}(v)) = i > 0$ and let $C_{a}(v) = o \in \mathcal{O}$. Assume by contradiction that there exists a time $t \geq t^{*}_{a} + \phi + 1$ such that $C_{t}(v) \neq o$ and let $t$ be the first time at which this occurs. It holds that $\hat{M}^{v}_{t-2} \neq \emptyset$ since $i > 0$, which means that the empty set is not a core for $o$. Let $\hat{B}^{v}_{t-2} \subseteq N_{G}(v)$ be a set of $v$'s neighbors whose decisions realize $\hat{M}^{v}_{t-2}$. Since $\mathit{depth}_{D_{\PredicateSet}}(o) = i > 0$, it holds that $\mathit{depth}_{D_{\PredicateSet}}(\hat{C}_{t-2}(u)) \leq i-1$ and $\delta_{G}(u, K) \geq \mathit{depth}_{D_{\PredicateSet}}(\hat{C}_{t-2}(u)) + \phi + 1$ for every $u \in \hat{B}^{v}_{t-2}$. By the induction hypothesis, $C_{t'}(u) = C_{a}(u)$ for every $t' \geq t^{*}_{a}$ and $u \in \hat{B}^{v}_{t-2}$. Hence, for every $u \in \hat{B}^{v}_{t-2}$, it holds that $\hat{C}_{t-2}(u) = \widetilde{C}_{t-2}(u) = C_{t-1}(u)$. The only difference between configurations $\hat{C}_{t-2}$ and $\widetilde{C}_{t-2}$ is that decided nodes under $\hat{C}_{t-2}$ may become undecided under $\widetilde{C}_{t-2}$. By the definition of a core, $v \in \Gamma(\widetilde{C}_{t-2})$ and thus $v \in \Gamma(C_{t-1})$, which is a contradiction.
\end{proof} \par\fussy The following corollary is derived from Lem.{}~\ref{lemma:nodes:far-from-manipulated-keep-decision} by the $\ensuremath{\nu}$-bounded influence property (Prop.{}~\ref{property:nodes:bounded-influence}). \begin{corollary} \label{corollary:adversay-effect-radius} For every node $v \in V_{0}$, if $\delta_{G}(v, K) \geq \ensuremath{\nu} + \phi + 1$, then $C_{t}(v) = C_{a}(v)$ for every time $t \geq t^{*}_{a}$. \end{corollary} Lem.{}~\ref{lemma:nodes:up-bound-undecided-edges} is derived from Cor.{}~\ref{corollary:adversay-effect-radius} by observing that \[ \left| \left\{ v \in V(G) \, : \, \delta_{G}(v, K) \leq \ensuremath{\nu} + \phi \right\} \right| \, \leq \, O \left( |K| \cdot \Delta^{\ensuremath{\nu} + \phi} \right) \, . \] \section{Simulating a Node-LCL Algorithm on the Line Graph} \label{section:simulation-line-graph} Simulating an algorithm \ensuremath{\mathcal{A}}{} for a distributed node problem on the line graph $L(G)$ of a graph $G$ is a simple task in a fault free environment with unique node IDs: It suffices to appoint an endpoint $x(e) \in e$ for each edge $e \in E(G) = V(L(G))$ that takes on the responsibility for simulating the computation associated with $e$ throughout the execution (e.g., the endpoint with a smaller ID); since $\delta_{G}(x(e), x(e')) \leq 2$ for every $e, e' \in E(G)$ such that $\{ e, e' \} \in E(L(G))$, it suffices to simulate each round of $\ensuremath{\mathcal{A}}$ on $L(G)$ with $2$ rounds in $G$, e.g., allocating the even rounds for simulating the actions of $\ensuremath{\mathcal{A}}$ on $L(G)$ and the odd rounds for gathering the information from nodes that are $2$ hops away. Things become more involved when one wishes to simulate an anonymous self-stabilizing algorithm \ensuremath{\mathcal{A}_{\mathit{ST}}}{} on $L(G)$. First, we need to ensure that faulty nodes are safely detected. 
Second, we no longer have a natural symmetry breaking rule that determines the endpoint $x(e)$ responsible for simulating the computation associated with an edge $e \in E(G)$. Third, in a self-stabilizing setting, the adversary can manipulate the nodes' clocks, thus preventing them from partitioning the rounds into even and odd in synchrony. To overcome these difficulties, we shall design a self-stabilizing algorithm, denoted by $\ensuremath{\mathcal{A}_{\mathit{ST}}}'$, that runs on $G$ and simulates a run of \ensuremath{\mathcal{A}_{\mathit{ST}}}{} on $L(G)$. To this end, we exploit the structure of the self-stabilizing algorithms developed in Sec.{}~\ref{section:transformer-nodes-implementation}, recalling that \ensuremath{\mathcal{A}_{\mathit{ST}}}{} is composed of a detection procedure \ensuremath{\mathtt{Detect}}{}, a phase procedure \ensuremath{\mathtt{Phase}}{}, and the \ensuremath{\mathtt{PPS}}{} module. We design $\ensuremath{\mathcal{A}_{\mathit{ST}}}'$ by adapting each component of \ensuremath{\mathcal{A}_{\mathit{ST}}}{} separately, denoting the components that run on $G$ and simulate the operation of \ensuremath{\mathtt{Detect}}{}, \ensuremath{\mathtt{Phase}}{}, and \ensuremath{\mathtt{PPS}}{} on $L(G)$ by $\ensuremath{\mathtt{Detect}}'$, $\ensuremath{\mathtt{Phase}}'$, and $\ensuremath{\mathtt{PPS}}'$, respectively. To simulate a run of \ensuremath{\mathtt{Detect}}{} on $L(G)$ by the simulating procedure $\ensuremath{\mathtt{Detect}}'$, consider an edge $e = \{ u, v \} \in E(G)$. The output register $\mathtt{out}_{e}$ of $e$ is represented under $\ensuremath{\mathtt{Detect}}'$ by the two output registers $\mathtt{out}_{u}(v)$ and $\mathtt{out}_{v}(u)$. We augment the messages exchanged between $u$ and $v$ under $\ensuremath{\mathtt{Detect}}'$ with the content of the simulated output register $\mathtt{out}_{e}$, thus allowing the two nodes to verify that edge $e$ is port-consistent. 
Beyond that piece of information, nodes $u$ and $v$ simply share with each other under $\ensuremath{\mathtt{Detect}}'$ the messages that $\ensuremath{\mathtt{Detect}}$ sends when simulated on $L(G)$ (typically, these messages simply contain the content of all their output registers). If a port-inconsistency is detected or if $\ensuremath{\mathtt{Detect}}$ returns $\mathit{false}$, then node $u$ (resp., $v$) resets $\mathtt{out}_{u}(v) \gets \bot$ (resp., $\mathtt{out}_{v}(u) \gets \bot$). Next, we turn to explain how an invocation of the phase procedure \ensuremath{\mathtt{Phase}}{} on a subgraph of $L(G)$ is simulated by procedure $\ensuremath{\mathtt{Phase}}'$ running on $G$. Assuming that each phase of \ensuremath{\mathtt{Phase}}{} consists of $\phi$ steps, a phase of $\ensuremath{\mathtt{Phase}}'$ is stretched over $2 \phi$ steps. Given an edge $e \in E(G)$, module $\ensuremath{\mathtt{PPS}}'$ is responsible for (probabilistically) starting new phases of $\ensuremath{\mathtt{Phase}}'$ and for advancing the steps once a phase starts so that both endpoints of $e$ agree on the current step $j \in \{ 0, 1, \dots, 2 \phi - 1 \}$ of edge $e$. Module $\ensuremath{\mathtt{PPS}}'$ is also responsible for breaking the symmetry between the endpoints $x(e)$ and $y(e)$ of edge $e = \{ x(e), y(e) \}$ so that $x(e)$ is responsible for simulating the execution of \ensuremath{\mathtt{Phase}}{} on $e$ in the current phase and $y(e)$ assists $x(e)$ in collecting information from nodes that are $2$ hops away from $x(e)$.
Specifically, each step $j = 0, 1, \dots, \phi - 1$ of \ensuremath{\mathtt{Phase}}{} is simulated by steps $2 j$ and $2 j + 1$ of $\ensuremath{\mathtt{Phase}}'$, where in step $2 j$, node $y(e)$ passes to $x(e)$ the messages sent under \ensuremath{\mathtt{Phase}}{} from edges $e' \in E(G)$ of the form $e' = \{ y(e) = y(e'), x(e') \}$ to edge $e$, and in step $2 j + 1$, node $x(e)$ updates the registers of \ensuremath{\mathtt{Phase}}{} at $e$, maintained under $\ensuremath{\mathtt{Phase}}'$ at $x(e)$. The simulation of the decision step $j = \phi - 1$ of \ensuremath{\mathtt{Phase}}{} is slightly different, recalling that this is the only step in which \ensuremath{\mathtt{Phase}}{} is allowed to write an output value $o \in \mathcal{O}$ into the output register $\mathtt{out}_{e}$ of (the undecided) edge $e \in E(G)$, implemented under $\ensuremath{\mathtt{Phase}}'$ by writing $o$ into the output registers $\mathtt{out}_{x(e)}(y(e))$ and $\mathtt{out}_{y(e)}(x(e))$. To ensure that the output value $o$ is written into $\mathtt{out}_{x(e)}(y(e))$ and $\mathtt{out}_{y(e)}(x(e))$ concurrently, node $x(e)$ employs step $2 \phi - 2$ of $\ensuremath{\mathtt{Phase}}'$ to inform $y(e)$ of $o$. Then, in step $2 \phi - 1$, serving as the decision step of $\ensuremath{\mathtt{Phase}}'$, nodes $x(e)$ and $y(e)$ set $\mathtt{out}_{x(e)}(y(e)) \gets o$ and $\mathtt{out}_{y(e)}(x(e)) \gets o$. The last action is conditioned on the output values of the decided edges $e' \in N_{G}(e)$ that are revealed to $x(e)$ and $y(e)$ at the beginning of step $2 \phi - 1$. Here, we exploit the trivial (yet key) observation that if $e' \in N_{G}(e)$, then $|e' \cap \{ x(e), y(e) \}| = 1$, hence if $e'$ is already decided at the end of step $2 \phi - 2$, then both $x(e)$ and $y(e)$ know about it at the beginning of step $2 \phi - 1$. 
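The step pairing described above can be summarized programmatically; the following sketch (the function name and action labels are ours, for illustration only) lists, for each step of $\ensuremath{\mathtt{Phase}}'$, the step of \ensuremath{\mathtt{Phase}}{} it simulates and the action taken:

```python
def simulated_schedule(phi: int):
    """Map the 2 * phi steps of Phase' to the phi steps of Phase."""
    schedule = []
    for j in range(phi - 1):
        # Step 2j: y(e) relays the Phase messages of neighboring edges.
        schedule.append((2 * j, j, "y(e) relays messages to x(e)"))
        # Step 2j + 1: x(e) updates the Phase registers it maintains for e.
        schedule.append((2 * j + 1, j, "x(e) updates the Phase registers"))
    # The decision step j = phi - 1 is simulated differently: x(e) first
    # informs y(e) of the output value o, then both endpoints write o.
    schedule.append((2 * phi - 2, phi - 1, "x(e) informs y(e) of o"))
    schedule.append((2 * phi - 1, phi - 1, "both endpoints write o"))
    return schedule

sched = simulated_schedule(4)
assert len(sched) == 8  # a phase of Phase' spans 2 * phi steps
assert sched[-2][1] == sched[-1][1] == 3  # both simulate the decision step
```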
Finally, we explain how module $\ensuremath{\mathtt{PPS}}'$ simulates module \ensuremath{\mathtt{PPS}}{} (see Sec.{}~\ref{section:probabilistic-phase-synchronization}) that runs on an edge $e = \{ u, v \} \in E(G)$ and determines when a new phase of $\ensuremath{\mathtt{Phase}}'$ begins (as well as advancing the steps within a phase). Module $\ensuremath{\mathtt{PPS}}'$ is also responsible for breaking the symmetry between $u$ and $v$, appointing one of them as $x(e)$ and the other as $y(e)$, when a new phase begins. To implement module $\ensuremath{\mathtt{PPS}}'$, nodes $u$ and $v$ jointly maintain the register $\mathit{step}_{e}$. Specifically, node $u$ (resp., $v$) maintains a local copy of $\mathit{step}_{e}$, denoted by $\mathit{step}_{u}(v)$ (resp., $\mathit{step}_{v}(u)$). In each round, nodes $u$ and $v$ exchange the values of $\mathit{step}_{u}(v)$ and $\mathit{step}_{v}(u)$; if $\mathit{step}_{u}(v) \neq \mathit{step}_{v}(u)$, then nodes $u$ and $v$ reset $\ensuremath{\mathtt{PPS}}'$ by setting $\mathit{step}_{u}(v) \gets \ensuremath{\mathit{\hbar}}$ and $\mathit{step}_{v}(u) \gets \ensuremath{\mathit{\hbar}}$, respectively. Assuming that $\mathit{step}_{u}(v) = \mathit{step}_{v}(u)$, nodes $u$ and $v$ update $\mathit{step}_{e}$ in accordance with the policy of \ensuremath{\mathtt{PPS}}{} (see Sec.{}~\ref{section:probabilistic-phase-synchronization}). If $\mathit{step}_{e} = j \in \{ 0, 1, \dots, 2 \phi - 1 \}$, then this update is deterministic, hence it can be performed by the two nodes with no further interaction. Otherwise ($\mathit{step}_{e} = \ensuremath{\mathit{\hbar}}$), nodes $u$ and $v$ toss an (unbiased) coin $r_{e}$ to determine the next state of $\mathit{step}_{e}$. To this end, node $u$ (resp., $v$) tosses a coin denoted by $r_{u}(v)$ (resp., $r_{v}(u)$) and shares its value with $v$ (resp., $u$). The two nodes then set $r_{e} \gets r_{u}(v) \oplus r_{v}(u)$ and update $\mathit{step}_{e}$ accordingly.
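The XOR rule $r_{e} \gets r_{u}(v) \oplus r_{v}(u)$ yields an unbiased shared coin as long as at least one of the two private coins is unbiased and independent of the other, as the following minimal sketch verifies by enumeration (Python, for illustration only):

```python
from itertools import product

def shared_coin(r_u: int, r_v: int) -> int:
    """The shared coin r_e obtained by XOR-ing the endpoints' private coins."""
    return r_u ^ r_v

# For each fixed value of one coin, the two values of the other coin
# yield the two values of r_e equally often, so r_e stays unbiased.
for fixed in (0, 1):
    outcomes = [shared_coin(fixed, r) for r in (0, 1)]
    assert sorted(outcomes) == [0, 1]

# Over all four joint outcomes, r_e = 0 and r_e = 1 each occur twice.
counts = {0: 0, 1: 0}
for r_u, r_v in product((0, 1), repeat=2):
    counts[shared_coin(r_u, r_v)] += 1
assert counts == {0: 2, 1: 2}
```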
To assign the roles of $x(e)$ and $y(e)$ to the endpoints of edge $e$ in a given phase, we use a similar trick: Whenever $\mathit{step}_{e} = \ensuremath{\mathit{\hbar}}$, node $u$ (resp., $v$) tosses yet another (unbiased) coin, denoted by $\hat{r}_{u}(v)$ (resp., $\hat{r}_{v}(u)$), and shares its value with $v$ (resp., $u$). If $\mathit{step}_{e}$ advances in the subsequent round from $\mathit{step}_{e} = \ensuremath{\mathit{\hbar}}$ to $\mathit{step}_{e} = 0$, then edge $e$ actually starts a phase of \ensuremath{\mathtt{Phase}}{} if and only if $\hat{r}_{u}(v) \neq \hat{r}_{v}(u)$, in which case the node whose coin is $1$ (resp., $0$) assumes the role of $x(e)$ (resp., $y(e)$) for the duration of the current phase. Notice that with this additional condition, the lower bound promised in Cor.{}~\ref{corollary:start-phase-prob-fixed-epsilon} on the probability to start a phase under $\ensuremath{\mathtt{PPS}}'$ decreases by (no more than) a constant factor. Recalling that a degree bound of $\Delta$ for $G$ implies a degree bound of $2 \Delta - 2$ for $L(G)$, the following theorem is derived from Thm.{}~\ref{theorem:nodes:main}. \begin{theorem} \label{theorem:simulation-line-graph} Consider a node-LCL $\ensuremath{\mathcal{P}} = \langle \mathcal{O}, \mathcal{G}, \ensuremath{\mathcal{L}} \rangle$ and let \ensuremath{\mathcal{A}}{} be a fault free algorithm for \ensuremath{\mathcal{P}}{} that is $(\nu, \mu(\Delta), \phi, \beta, \sigma_{0})$-eligible. Let $\mathcal{H} = \{ G \in \mathcal{U} \mid L(G) \in \mathcal{G} \}$ and let $\Delta$ be a degree bound for $\mathcal{H}$.
Then, the edge-LCL $\ensuremath{\mathcal{P}}' = \langle \mathcal{O}, \mathcal{H}, \ensuremath{\mathcal{L}} \rangle$ admits a randomized self-stabilizing algorithm $\ensuremath{\mathcal{A}_{\mathit{ST}}}'$ that uses messages of size \[ O \left( \Delta (\mu(2 \Delta - 2) + \mu_{\ensuremath{\mathcal{A}}}(2 \Delta - 2)) + \log \phi \right) \] whose fully adaptive run time in response to $k$ node manipulations is \[ O \left( (1 / \beta) (\nu + 1) \phi^{6} \log (k + \Delta + \sigma_{0}) \right) \, . \] Moreover, edges whose distance from any manipulated node is at least $\nu + 2 \phi + 1$ are guaranteed to maintain their original output value throughout the execution of $\ensuremath{\mathcal{A}_{\mathit{ST}}}'$ (with probability $1$). \end{theorem} \section{The Edge-LCL Transformer --- Interface} \label{section:transformer-edges-interface} This section presents the interface of our transformer in the context of distributed edge problems. There are many similarities between the interface presented here and that presented in Sec.{}~\ref{section:transformer-nodes-interface} in the context of distributed node problems; for clarity of the exposition, we adhere to the same terminology and notation when possible. We say that a fault free algorithm \ensuremath{\mathcal{A}}{} for an edge-LCL $\ensuremath{\mathcal{P}} = \langle \mathcal{O}, \mathcal{G}, \ensuremath{\mathcal{L}} \rangle$ is \emph{eligible} if the transformer can be applied to \ensuremath{\mathcal{A}}{}, turning it into a self-stabilizing algorithm for $\ensuremath{\mathcal{P}}$. For \ensuremath{\mathcal{A}}{} to be eligible, certain conditions must be met; these conditions are partitioned into properties of problem $\ensuremath{\mathcal{P}}$, presented in Sec.{}~\ref{section:properties-problem-edges}, and properties of algorithm \ensuremath{\mathcal{A}}{} itself, presented in Sec.{}~\ref{section:properties-algorithm-edges}. 
(In Sec.{}~\ref{section:concrete-problems}, we show that these properties are satisfied by the edge-LCLs presented in Table~\ref{table:concrete-problems}.) The formal guarantees of the transformer when applied to (the eligible) \ensuremath{\mathcal{A}}{}, including the dependence of its fully adaptive run-time on various parameters of \ensuremath{\mathcal{P}}{} and \ensuremath{\mathcal{A}}{}, are stated in Sec.{}~\ref{section:transformer-statement-edges}. \subsection{Properties of the Distributed Edge Problem} \label{section:properties-problem-edges} Fix an edge-LCL $\ensuremath{\mathcal{P}} = \langle \mathcal{O}, \mathcal{G}, \ensuremath{\mathcal{L}} \rangle$, where $\mathcal{G} \in \mathcal{U}_{\Delta}$ for a degree bound $\Delta$ and $\ensuremath{\mathcal{L}} = \{ \ensuremath{\ell}_{d} \}_{d = 0}^{\infty}$. In this section, we present Properties \ref{property:edges:hereditary-closure}--\ref{property:edges:detectability} that should be satisfied by $\ensuremath{\mathcal{P}}$ as a necessary condition for any fault free algorithm for $\ensuremath{\mathcal{P}}$ to be eligible. Let $\mathcal{G}_{n} = \{L(G) \mid G \in \mathcal{G} \}$. \begin{property}[\emph{hereditary closure}] \label{property:edges:hereditary-closure} Edge-LCL $\ensuremath{\mathcal{P}} = \langle \mathcal{O}, \mathcal{G}, \ensuremath{\mathcal{L}} \rangle$ satisfies the \emph{hereditary closure} property if $G \in \mathcal{G}$ implies that $G' \in \mathcal{G}$ for any subgraph $G'$ of $G$. \end{property} \begin{property}[\emph{core coverage}] \label{property:edges:core-coverage} Edge-LCL $\ensuremath{\mathcal{P}} = \langle \mathcal{O}, \mathcal{G}, \ensuremath{\mathcal{L}} \rangle$ satisfies the \emph{core coverage} property if the node-LCL $\langle \mathcal{O}, \mathcal{G}_{n}, \ensuremath{\mathcal{L}} \rangle$ satisfies Prop.{}~\ref{property:nodes:core-coverage}. 
\end{property} \begin{property}[\emph{$\nu$-bounded influence}] \label{property:edges:bounded-influence} Edge-LCL $\ensuremath{\mathcal{P}} = \langle \mathcal{O}, \mathcal{G}, \ensuremath{\mathcal{L}} \rangle$ satisfies the \emph{$\nu$-bounded influence} property if the node-LCL $\langle \mathcal{O}, \mathcal{G}_{n}, \ensuremath{\mathcal{L}} \rangle$ satisfies Prop.{}~\ref{property:nodes:bounded-influence}. \end{property} \paragraph{Distributed Implementation of the Predicates.} Our transformer employs a distributed procedure referred to as the \emph{detection procedure}, denoted hereafter by \ensuremath{\mathtt{Detect}}{}, that ``implements'' the LCL $\ensuremath{\mathcal{L}}$. Consider some graph $G \in \mathcal{G}$. Procedure \ensuremath{\mathtt{Detect}}{} runs indefinitely on $G$ and does not access any of the nodes' registers except the $\mathrm{d}_{G}(v)$ output registers $\mathtt{out}_{v}(u)$ that each node $v \in V(G)$ maintains for every $u \in N_{G}(v)$, to which \ensuremath{\mathtt{Detect}}{} has read-only access. In every round $t$, \ensuremath{\mathtt{Detect}}{} returns a Boolean value for each node $v$ and neighbor $u \in N_{G}(v)$, denoted by $\ensuremath{\mathtt{Detect}}_{v,t}(u) \in \{\mathit{true},\mathit{false}\}$. Consider some edge $e =\{u,v\} \in E(G)$. Assuming that in round $t-1$ none of the nodes in $\{u,v\} \cup \{w,x \in V(G) \mid \{w,x\} \in N_{G}(e) \}$ were manipulated, procedure \ensuremath{\mathtt{Detect}}{} guarantees that (1) $\ensuremath{\mathtt{Detect}}_{v,t}(u)=\ensuremath{\mathtt{Detect}}_{u,t}(v)$; and (2) if $e$ is port inconsistent, then $\ensuremath{\mathtt{Detect}}_{v,t}(u)=\ensuremath{\mathtt{Detect}}_{u,t}(v) = \mathit{false}$.
Otherwise, $\ensuremath{\mathtt{Detect}}_{v,t}(u)=\ensuremath{\mathtt{Detect}}_{u,t}(v) = \mathit{false}$ if and only if edge $e$ is (decided and) uncontent under $C_{t}$ with respect to \ensuremath{\mathcal{L}}{}, recalling that $C_{t} : E(G) \rightarrow \mathcal{O} \cup \{ \bot \}$ is the edge configuration defined by setting $C_{t}(\{ u, v \}) = \mathtt{out}_{v,t}(u) = \mathtt{out}_{u,t}(v)$ (resp., $C_{t}(\{ u, v \}) = \bot$) for each port-consistent (resp., port-inconsistent) edge $\{ u, v \} \in E(G)$. As for the implementation of \ensuremath{\mathcal{L}}{} for edge-LCLs, it is straightforward to implement \ensuremath{\mathtt{Detect}}{} so that each node sends messages of size $O(\Delta \log(|\mathcal{O}|))$. \begin{property}[\emph{$\mu(\Delta)$-detectability}] \label{property:edges:detectability} The LCL $\ensuremath{\mathcal{L}} = \{ \ensuremath{\ell}_{d} \}_{d = 0}^{\infty}$ over the output value set $\mathcal{O}$ is said to be \emph{$\mu(\Delta)$-detectable} for a function $\mu : \mathbb{Z}_{> 0} \rightarrow \mathbb{Z}_{> 0}$ if $\ensuremath{\mathcal{L}}$ admits a detection procedure whose message size is at most $\mu(\Delta)$ when it runs on a graph of degree bound $\Delta$. Edge-LCL $\ensuremath{\mathcal{P}} = \langle \mathcal{O}, \mathcal{G}, \ensuremath{\mathcal{L}} \rangle$ satisfies the \emph{$\mu(\Delta)$-detectability} property if $\ensuremath{\mathcal{L}}$ is $\mu(\Delta)$-detectable. \end{property} \subsection{Properties of the Fault Free Algorithm} \label{section:properties-algorithm-edges} Fix an edge-LCL $\ensuremath{\mathcal{P}} = \langle \mathcal{O}, \mathcal{G}, \ensuremath{\mathcal{L}} \rangle$ where $\mathcal{G} \subseteq \mathcal{U}_{\Delta}$ for a degree bound $\Delta$ and a fault free algorithm \ensuremath{\mathcal{A}}{} for \ensuremath{\mathcal{P}}{}. In this section, we present Prop.{}\ \ref{property:edges:phase-based}--\ref{property:edges:progress} that \ensuremath{\mathcal{A}}{} should satisfy in order to be eligible.
\paragraph{Restricting the Phase Procedure.} Consider the graph $G \in \mathcal{G}$ on which $\ensuremath{\mathcal{A}}$ runs. Recall that if $\ensuremath{\mathcal{A}}$ is a phase-based algorithm, then it is fully specified by a phase length parameter $\phi$ and a phase procedure \ensuremath{\mathtt{Phase}}{} invoked (from scratch) at the beginning of every phase. Moreover, each phase starts with an announcement step $j = 0$, in which each node $v$ sends the content of $\mathtt{out}_{v}(u)$ to every neighbor $u \in N_{G}(v)$, and ends with a decision step $j = \phi - 1$, which is the only step in which the nodes may write into their output registers. For the edge-LCL algorithm $\ensuremath{\mathcal{A}}$ to be eligible, we add a few more requirements on the structure of \ensuremath{\mathtt{Phase}}{}. First, it is required that the decision of a decided edge $e$ is irrevocable and $\mathtt{out}_{v,t}(u) = \mathtt{out}_{u,t}(v)$ for every $\{u,v\} \in E(G)$ and time $t$. Let $U \subseteq E(G)$ be the set of edges that are undecided at the beginning of the phase. A node $v \in V(G(U))$ has at least one undecided incident edge and is said to be \emph{employed} in the phase. During steps $j = 1, \dots, \phi - 2$, referred to as the phase's \emph{working stage}, an employed node $v \in V(G(U))$ may send messages only to its employed neighbors. The actions of $v$ during the working stage may depend on the number $\mathrm{d}_{G(U)}(v)$ of employed neighbors that $v$ has and on the (global) degree bound $\Delta$, but they should not depend on the number $\mathrm{d}_{G}(v) - \mathrm{d}_{G(U)}(v)$ of decided incident edges that $v$ has. The decision step of \ensuremath{\mathtt{Phase}}{} is the only step in which an undecided edge may become decided. An edge $e = \{u,v\} \in E(G(U))$ becomes decided once $u$ and $v$ set $\mathtt{out}_{v}(u) = \mathtt{out}_{u}(v) \in \mathcal{O}$.
Nodes $v$ and $u$ may share information regarding the decisions of the decided edges in $N_{G}(e)$ with each other; however, it is required that this information is treated as a multiset by both $v$ and $u$. \begin{property}[\emph{$\phi$-phase-based}] \label{property:edges:phase-based} Algorithm \ensuremath{\mathcal{A}}{} satisfies the \emph{$\phi$-phase-based} property if it belongs to the phase-based class of algorithms with a phase of length $\phi$. \end{property} Given an edge configuration $C : E(G) \rightarrow \mathcal{O} \cup \{ \bot \}$, let $\eta$ be the execution of the phase procedure \ensuremath{\mathtt{Phase}}{} on the graph $G$ whose output registers are set so that $\mathtt{out}_{v}(u) = \mathtt{out}_{u}(v) = C(e)$ for each $e =\{u,v\} \in E(G)$. Let $\ensuremath{\mathtt{Phase}}(G,C)$ be the edge configuration associated with \ensuremath{\mathtt{Phase}}{} when $\eta$ halts, i.e., after $\phi$ rounds. Notice that if \ensuremath{\mathtt{Phase}}{} is randomized, then $\ensuremath{\mathtt{Phase}}(G,C)$ is a random variable that depends on the coin tosses of the nodes $v \in V(G)$ during $\eta$. The following key property can now be stated.
\begin{property}[\emph{respectful decisions}] \label{property:edges:respectful-decisions} The phase-based algorithm \ensuremath{\mathcal{A}}{} with phase procedure \ensuremath{\mathtt{Phase}}{} satisfies the \emph{respectful decisions} property if the following two conditions hold with probability $1$ for every graph $G \in \mathcal{G}$, edge configuration $C : E(G) \rightarrow \mathcal{O} \cup \{ \bot \}$, and undecided edge subset $X \subseteq U(C)$: \\ (1) $\Gamma(C) \subseteq \Gamma(\ensuremath{\mathtt{Phase}}(G - X, C_{G - X}))$; and \\ (2) $U(C) \cap D(\ensuremath{\mathtt{Phase}}(G - X, C_{G - X})) \subseteq \Gamma(\ensuremath{\mathtt{Phase}}(G - X, C_{G - X}))$.\footnote{% The fact that the graph $G - X$ belongs to the graph family $\mathcal{G}$ is ensured by Property~\ref{property:edges:hereditary-closure}.} \end{property} \paragraph{Potential Functions.} A function $\pi : \mathcal{CG}(\ensuremath{\mathcal{P}}) \rightarrow \mathbb{Z}_{\geq 0}$ is said to be a \emph{locally separable potential function} for $\ensuremath{\mathcal{P}}$ if there exists a family $\left\{ \sigma(M) \right\}_{M \in \mathcal{M}(\mathcal{O})} \subset \mathbb{Z}_{\geq 0}$ of \emph{potential coefficients} such that \\ (1) if $M \subseteq M'$, then $\sigma(M) \geq \sigma(M')$ for any $M, M' \in \mathcal{M}(\mathcal{O})$; and \\ (2) $\pi(G, C) = \sum_{e \in U(C)} \sigma(C[e])$ for any $(G, C) \in \mathcal{CG}(\ensuremath{\mathcal{P}})$. \\ We refer to $\sigma(\emptyset)$ as the \emph{top potential coefficient} of $\pi$, observing that it up-bounds any other potential coefficient. 
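As a concrete illustration (not part of the formal development), the following Python sketch models a locally separable potential function. Edge configurations are dictionaries with \texttt{None} encoding $\bot$, and the coefficient family $\sigma(M) = \sigma_{0} - \min\{|M|, \sigma_{0}\}$ is a hypothetical choice that satisfies the monotonicity condition (1): $M \subseteq M'$ implies $\sigma(M) \geq \sigma(M')$, with $\sigma(\emptyset) = \sigma_{0}$ as the top potential coefficient.

```python
from collections import Counter

SIGMA_0 = 4  # hypothetical top potential coefficient sigma(emptyset)

def sigma(multiset: Counter) -> int:
    """Monotone non-increasing under multiset inclusion; sigma(empty) = SIGMA_0."""
    return SIGMA_0 - min(sum(multiset.values()), SIGMA_0)

def potential(edges, neighbors, config):
    """pi(G, C) = sum over undecided edges e of sigma(C[e]), where C[e] is the
    multiset of output values of e's decided neighboring edges.
    edges: iterable of edge ids; neighbors[e]: neighboring edge ids;
    config[e]: output value in O, or None if e is undecided."""
    total = 0
    for e in edges:
        if config[e] is None:  # only undecided edges contribute
            decided_nbr_vals = Counter(
                config[f] for f in neighbors[e] if config[f] is not None)
            total += sigma(decided_nbr_vals)
    return total
```

Deciding an edge removes its own contribution and can only shrink the coefficients of its undecided neighbors, which is the behavior the progress property below exploits.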
\begin{property}[\emph{$(\beta, \sigma_{0})$-progress}] \label{property:edges:progress} Algorithm \ensuremath{\mathcal{A}}{} satisfies the \emph{$(\beta, \sigma_{0})$-progress} property for a real $0 < \beta < 1$ and integer $\sigma_{0} \geq 0$ if $\ensuremath{\mathcal{P}}$ admits a locally separable potential function $\pi$ with top potential coefficient $\sigma_{0}$ such that for every strongly configured graph $(G, C) \in \mathcal{SCG}(\ensuremath{\mathcal{P}})$, it is guaranteed that \\ (1) $\mathbb{E}(\pi(G, \ensuremath{\mathtt{Phase}}(G, C))) \leq (1 - \beta) \cdot \pi(G, C)$; and \\ (2) if $\pi(G, C) = 0$, then the edge configuration $\ensuremath{\mathtt{Phase}}(G, C)$ is complete with probability $1$.\footnote{% Prop.{}~\ref{property:edges:respectful-decisions} ensures that if $\ensuremath{\mathtt{Phase}}(G, C)$ is complete, then it is also legal.} \end{property} \subsection{The Main Theorem --- Edge LCLs} \label{section:transformer-statement-edges} We say that a fault free algorithm \ensuremath{\mathcal{A}}{} for an edge-LCL $\ensuremath{\mathcal{P}} = \langle \mathcal{O}, \mathcal{G}, \ensuremath{\mathcal{L}} \rangle$ is \emph{$(\nu, \mu(\Delta), \phi, \beta, \sigma_{0})$-eligible} if \\ (I) $\ensuremath{\mathcal{P}}$ satisfies the hereditary closure property (Prop.{}~\ref{property:edges:hereditary-closure}), the core coverage property (Prop.{}~\ref{property:edges:core-coverage}), the $\nu$-bounded influence property (Prop.{}~\ref{property:edges:bounded-influence}), and the $\mu(\Delta)$-detectability property (Prop.{}~\ref{property:edges:detectability}); and \\ (II) \ensuremath{\mathcal{A}}{} satisfies the $\phi$-phase-based property (Prop.{}~\ref{property:edges:phase-based}), the respectful decisions property (Prop.{}~\ref{property:edges:respectful-decisions}), and the $(\beta, \sigma_{0})$-progress property (Prop.{}~\ref{property:edges:progress}). \\ The guarantees of our transformer are cast in the following theorem.
\begin{theorem} \label{theorem:edges:main} Consider an edge-LCL $\ensuremath{\mathcal{P}} = \langle \mathcal{O}, \mathcal{G}, \ensuremath{\mathcal{L}} \rangle$, where $\mathcal{G} \subseteq \mathcal{U}_{\Delta}$ for a degree bound $\Delta$, and let \ensuremath{\mathcal{A}}{} be a fault free algorithm for $\ensuremath{\mathcal{P}}$ that uses messages of size $\mu_{\ensuremath{\mathcal{A}}}(\Delta)$. If \ensuremath{\mathcal{A}}{} is $(\nu, \mu(\Delta), \phi, \beta, \sigma_{0})$-eligible, then $\ensuremath{\mathcal{P}}$ admits a randomized self-stabilizing algorithm that uses messages of size \[ O \left( \mu(\Delta) + \mu_{\ensuremath{\mathcal{A}}}(\Delta) + \log \phi \right) \] whose fully adaptive run time is \[ \mathbb{E}(T_{1}) \, \leq \, O \left( \left( \phi^{5} / \beta \right) \cdot \left( \log (k) + (\ensuremath{\nu} + \phi) \log (\Delta) + \log (\sigma_{0}) \right) \right) \, . \] Moreover, edges whose distance from any manipulated node is at least $\nu + \phi + 1$ are guaranteed to maintain their original output value throughout the execution of \ensuremath{\mathcal{A}_{\mathit{ST}}}{} (with probability $1$). \end{theorem} \sloppy Similarly to the node-LCL transformer, here too we can allow the adversary to manipulate an infinite node subset $S \subset V(G)$ as long as $V(G)$ can be partitioned into clusters $C_{1}, C_{2}, \dots$ such that the following two conditions hold for every $i \geq 1$: \\ (1) $|C_{i} \cap S| = k_{i}$ for some $k_{i} \in \mathbb{Z}_{> 0}$; and \\ (2) if $v \in C_{i} \cap S$, then $\delta_{G}(v, V(G) - C_{i}) > \nu + \phi + 1$. \\ Thm.{}~\ref{theorem:edges:main} guarantees that the expected recovery time of $G(C_{i})$ is at most $O \left( \left( \phi^{5} / \beta \right) \cdot \left( \log (k_{i}) + (\ensuremath{\nu} + \phi) \log (\Delta) + \log (\sigma_{0}) \right) \right)$ for each $i \geq 1$.
\par\fussy \section{The Edge-LCL Transformer --- Implementation} \label{section:transformer-edges-implementation} In this section, we explain how the transformer operates; refer to Algorithm~\ref{pseudocode:transforemer-for-edge-LCL} in Sec.{}~\ref{appendix:pseudocodes-transformers} for a pseudocode. Consider an edge-LCL $\ensuremath{\mathcal{P}} = \langle \mathcal{O}, \mathcal{G}, \ensuremath{\mathcal{L}} \rangle$, where $\mathcal{G} \subseteq \mathcal{U}_{\Delta}$ for a degree bound $\Delta$, and let \ensuremath{\mathcal{A}}{} be a fault free algorithm for \ensuremath{\mathcal{P}}{} that is $(\nu, \mu(\Delta), \phi, \beta, \sigma_{0})$-eligible. The transformer constructs the self-stabilizing algorithm \ensuremath{\mathcal{A}_{\mathit{ST}}}{} promised in Thm.{}~\ref{theorem:edges:main}. Algorithm \ensuremath{\mathcal{A}_{\mathit{ST}}}{} has three components: \\ (1) the detection procedure \ensuremath{\mathtt{Detect}}{} that realizes the $\mu(\Delta)$-detectability of $\ensuremath{\mathcal{P}}$; \\ (2) the phase procedure \ensuremath{\mathtt{Phase}}{} associated with the $\phi$-phase-based algorithm \ensuremath{\mathcal{A}}{}; and \\ (3) the \ensuremath{\mathtt{PPS}}{} module. \\ Any message $M$ sent under \ensuremath{\mathcal{A}_{\mathit{ST}}}{} is partitioned into three fields, one for each component, denoted by $M.\mathtt{detect}$, $M.\mathtt{phase}$, and $M.\mathtt{pps}$, respectively. Consider the graph $G \in \mathcal{G}$ on which \ensuremath{\mathcal{A}_{\mathit{ST}}}{} runs, some round $t \in \mathbb{Z}_{\geq 0}$, and a node $v \in V(G)$. Node $v$ starts round $t$ by calling \ensuremath{\mathtt{Detect}}{}, which is implemented over the $\mathtt{detect}{}$ field of the messages. Recall that \ensuremath{\mathtt{Detect}}{} returns a Boolean value for each output register $\mathtt{out}_{v}(u)$ that $v$ maintains for the edge $\{v, u\} \in E(G)$. Let $e = \{v, u\} \in E(G)$.
If $\ensuremath{\mathtt{Detect}}_{v, t}(u) = \mathit{false}$, then node $v$ resets $\mathtt{out}_{v}(u) \leftarrow \bot$ and turns on a flag denoted by $\ensuremath{wait}_{v}(u)$ whose role is to prevent $e$ from ``jumping'' into the middle of a phase. We say that a node $v$ is \emph{engaged} in a phase if there exists a neighbor $u \in N_{G}(v)$ such that $\mathtt{out}_{v}(u) = \bot$ and $\ensuremath{wait}_{v}(u) = \mathit{false}$ and $\mathit{step}_{v, t} \neq \ensuremath{\mathit{\hbar}}$. Two engaged nodes $u, v \in V(G)$ are said to be \emph{phase-synchronized} if $\mathit{step}_{v, t} = \mathit{step}_{u, t}$. Obs.{}~\ref{observation:step-counter} ensures that if nodes $v$ and $u$ are phase-synchronized at time $t \geq t^{*}_{b}$ with $\mathit{step}_{v, t} = \mathit{step}_{u, t} = j$ for $0 \leq j \leq \phi - 2$, then $v$ and $u$ are phase-synchronized at time $t + 1$ with $\mathit{step}_{v, t + 1} = \mathit{step}_{u, t + 1} = j + 1$. Since each node writes its $\mathit{step}$ variable into the $\mathtt{pps}$ field of its outgoing messages, the engaged node $v$ can identify its phase-synchronized neighbors whenever $\mathit{step}_{v} > 0$. Recall that step $j = 0$ of \ensuremath{\mathtt{Phase}}{} is the announcement step; algorithm \ensuremath{\mathcal{A}_{\mathit{ST}}}{} uses the announcement step so that the engaged node $v$ can identify its phase-synchronized neighbors and simulate an invocation of \ensuremath{\mathtt{Phase}}{} on the subgraph induced by all nodes that are phase-synchronized with $v$. Specifically, the invocation of \ensuremath{\mathtt{Phase}}{} runs for $\phi$ consecutive rounds, progressing as $\mathit{step}_{v} = 0, 1, \dots, \phi - 1$. The simulation of \ensuremath{\mathtt{Phase}}{} is implemented over the $\mathtt{phase}$ fields of the messages. Since node $v$ can identify its decided and undecided incident edges, no content is sent in the $\mathtt{phase}$ field over a decided edge.
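The round structure just described can be summarized in a schematic Python sketch. The helper names (\texttt{run\_round}, \texttt{detect}, the \texttt{HALT} sentinel standing for $\ensuremath{\mathit{\hbar}}$) are hypothetical, and the simulation of a single \ensuremath{\mathtt{Phase}}{} step itself is elided:

```python
HALT = "h"  # hypothetical sentinel standing for the idle step value hbar

def run_round(v, neighbors, out, wait, step, phi, detect):
    """One round of node v (schematic): detection first, then one step of the
    simulated phase. out[(v, u)] is v's output register for the edge {v, u}
    (None encodes bottom); detect(v, u) plays the role of Detect_{v,t}(u)."""
    # (1) Detection: reset the register of every edge flagged by Detect and
    # raise the wait flag so the edge cannot join a phase mid-way.
    for u in neighbors[v]:
        if not detect(v, u):
            out[(v, u)] = None
            wait[(v, u)] = True
    # (2) Engagement test: some incident edge is undecided, not waiting,
    # and v's step counter is not the idle value.
    engaged = step[v] != HALT and any(
        out[(v, u)] is None and not wait[(v, u)] for u in neighbors[v])
    if engaged:
        # Phase-synchronized neighbors share v's step counter; their steps
        # are read from the pps field of the incoming messages.
        synced = [u for u in neighbors[v] if step[u] == step[v]]
        # ... one step of Phase would be simulated here on `synced` ...
        step[v] = step[v] + 1 if step[v] < phi - 1 else HALT
    elif step[v] == HALT:
        # wait flags are cleared whenever step_v equals the idle value
        for u in neighbors[v]:
            wait[(v, u)] = False
    return engaged
```

The sketch only captures the control flow of one node; the actual transformer additionally routes the three message fields and enforces the announcement-step discipline described above.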
We design \ensuremath{\mathcal{A}_{\mathit{ST}}}{} so that $\ensuremath{wait}_{v}(u)$ is initialized to $\mathit{false}$ whenever $\mathit{step}_{v,t} = \ensuremath{\mathit{\hbar}}$. \section{The Edge-LCL Transformer --- Analysis} \label{section:transformer-edges-analysis} Consider an edge-LCL $\ensuremath{\mathcal{P}} = \langle \mathcal{O}, \mathcal{G}, \ensuremath{\mathcal{L}} \rangle$, where $\mathcal{G} \subseteq \mathcal{U}_{\Delta}$ for a degree bound $\Delta$, and let \ensuremath{\mathcal{A}}{} be a fault free algorithm for \ensuremath{\mathcal{P}}{} that is $(\nu, \mu(\Delta), \phi, \beta, \sigma_{0})$-eligible. Let \ensuremath{\mathtt{Detect}}{} be the detection procedure that realizes the $\mu(\Delta)$-detectability of $\ensuremath{\mathcal{P}}$, let \ensuremath{\mathtt{Phase}}{} be the phase procedure associated with the $\phi$-phase-based algorithm \ensuremath{\mathcal{A}}{}, and let $\pi$ be the locally separable potential function due to which \ensuremath{\mathcal{A}}{} satisfies the $(\beta, \sigma_{0})$-progress property (Prop.{}~\ref{property:edges:progress}). Let \ensuremath{\mathcal{A}_{\mathit{ST}}}{} be the self-stabilizing algorithm produced from \ensuremath{\mathcal{A}}{} by the edge-LCL transformer presented in Sec.~\ref{section:transformer-edges-implementation}. At every discrete time $t \in \mathbb{Z}_{\geq 0}$, algorithm \ensuremath{\mathcal{A}_{\mathit{ST}}}{} is executed on some graph $G \in \mathcal{G}$. The configuration of \ensuremath{\mathcal{A}_{\mathit{ST}}}{} at time $t$ is denoted by $C_{t}$; recall that this is the edge configuration in which $C_{t}(\{ u, v \}) = \mathtt{out}_{v}(u) = \mathtt{out}_{u}(v)$ (resp., $C_{t}(\{ u, v \}) = \bot$) for each port-consistent (resp., port-inconsistent) edge $\{ u, v \} \in E(G)$. During the round interval $[t^{*}_{a}, t^{*}_{b}-1]$, the adversary manipulates $k > 0$ nodes and does not manipulate any node after round $t^{*}_{b} - 1$.
Our goal in this section is to establish Thm.{}~\ref{theorem:edges:main} by proving that the fully adaptive run time of \ensuremath{\mathcal{A}_{\mathit{ST}}}{} is $O \left( \left( \phi^{5} / \beta \right) \cdot \left( \log (k) + (\ensuremath{\nu} + \phi) \log (\Delta) + \log (\sigma_{0}) \right) \right)$ in expectation. Consider some $G \in \mathcal{G}$ and an edge configuration $C: E(G) \rightarrow \mathcal{O} \cup \{\bot\}$. Let $U \subseteq U(C)$ be some subset of the undecided edges and let $f : U \rightarrow \mathcal{O}$ be some function. We denote by $[C \nwarrow f]$ the edge configuration $C'$ such that (1) $C'(e) = f(e)$ for every $e \in U$; and (2) $C'(e') = C(e')$ for every $e' \in E(G)-U$. Since the adversary may add/remove nodes/edges, the graph may change during the round interval $[t^{*}_{a}, t^{*}_{b}-1]$; in what follows, we reserve $G \in \mathcal{G}$ for the graph that exists at time $t^{*}_{b}$, recalling that this is also the graph at any time $t \geq t^{*}_{b}$. For every $t \geq t^{*}_{b}$, the set of edges that became decided during round $t$ (by the decision step of \ensuremath{\mathtt{Phase}}{}) is denoted by $\hat{D}_{t} = U(C_{t}) \cap D(C_{t+1})$. Let $\hat{f}_{t}:\hat{D}_{t} \rightarrow \mathcal{O}$ be the function that corresponds to the decisions set for the edges in $\hat{D}_{t}$ during round $t$, i.e., $\hat{f}_{t}(e) = C_{t+1}(e)$ for every $e \in \hat{D}_{t}$. We denote by $\hat{C}_{t} = [C_{t} \nwarrow \hat{f}_{t}]$ the edge configuration $C_{t}$ augmented with the decisions set during round $t$ for the edges in $\hat{D}_{t}$. Let $e = \{v, u\} \in E(G)$. We denote by $V_{e} = \{v, u\} \cup \{ w, x \in V(G) \mid \{w, x\} \in N_{G}(e) \}$ the set of nodes that are incident on $e$ or on any of $e$'s neighbors.
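Operationally, $[C \nwarrow f]$ amounts to a guarded dictionary update; a minimal Python sketch (edge configurations as dictionaries, with \texttt{None} encoding $\bot$) could read:

```python
def augment(C, f):
    """Return [C <- f]: the edge configuration that agrees with f on its
    domain and with C everywhere else. f may only decide edges that are
    undecided (None) under C; the input configuration is left unmodified."""
    assert all(C[e] is None for e in f), "f must be defined on undecided edges"
    C_prime = dict(C)   # copy, so that C itself is preserved
    C_prime.update(f)
    return C_prime
```

This is exactly the operation used to form $\hat{C}_{t}$ from $C_{t}$ and $\hat{f}_{t}$ above.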
\begin{observation} \label{observation:edges:after-manipulations} The design of \ensuremath{\mathcal{A}_{\mathit{ST}}}{} guarantees that for every time $t \geq t^{*}_{b} + 2$ the following statements are true: \\ (1) Every edge $e \in E(G)$ is port-consistent at time $t$; \\ (2) $\Lambda(C_{t}) = D(C_{t}) \cap U(C_{t+1}) = D(\hat{C}_{t}) \cap U(C_{t+1})$; \\ (3) $D(C_{t+1}) = D(\hat{C}_{t}) - \Lambda(C_{t})$; and \\ (4) if $e \in D(C_{t+1})$, then $C_{t+1}(e) = \hat{C}_{t}(e) = \hat{C}_{t+1}(e)$, for every $e \in E(G)$. \end{observation} Let $t_{s} = t^{*}_{b} + \ensuremath{\nu} + \phi + 3$. The remainder of this section is dedicated to proving the following lemma. \begin{lemma} \label{lemma:edges:correctness} For every $t \geq t_{s}$, it holds that $(G,C_{t}) \in \mathcal{SCG}(\ensuremath{\mathcal{P}})$. Moreover, if $e \in D(C_{t})$, then $C_{t+1}(e) = C_{t}(e)$. \end{lemma} Lem.{}~\ref{lemma:edges:correctness} implies that \ensuremath{\mathcal{A}_{\mathit{ST}}}{} is correct, i.e., if there exists a time $t \geq t_{s}$ in which $C_{t}$ is complete, then $C_{t}$ is also legal and $C_{t'} = C_{t}$ for every $t' \geq t$. Let $t_{0} = t^{*}_{b} + \phi + 2$. \begin{lemma} \label{lemma:edges:after-t-0-decided-imples-content-in-hat} For every $t \geq t_{0}$, it holds that $D(C_{t}) \subseteq \Gamma(\hat{C}_{t-1})$. \end{lemma} \begin{proof} Every node $v \in V(G)$ simulating the decision step of \ensuremath{\mathtt{Phase}}{} in round $t-1$ has $\mathit{step}_{v, t-1} = \phi - 1$, which means that $\mathit{step}_{v, t-\phi} = 0$ by the design of the \ensuremath{\mathtt{PPS}}{} module. Let $t' = t - \phi$. In round $t'$, node $v$ simulated step $j = 0$ of \ensuremath{\mathtt{Phase}}{}, hence it initialized all the registers maintained by \ensuremath{\mathtt{Phase}}{}. Since $t' \geq t^{*}_{b} + 2$, all the nodes engaged in the phase that started at time $t'$ simulated a full, fault free execution of \ensuremath{\mathtt{Phase}}{}. Let $e = \{v, u\} \in D(C_{t})$.
First, we prove that $e \notin \Lambda(C_{t-1})$. Assume by contradiction that $e \in \Lambda(C_{t-1})$. Since the nodes in $V_{e}$ were not manipulated in round $t-2$, it holds that $\ensuremath{\mathtt{Detect}}_{v,t-1}(u) = \ensuremath{\mathtt{Detect}}_{u,t-1}(v) = \mathit{false}$ and $\mathtt{out}_{v,t}(u) = \mathtt{out}_{u,t}(v) = \bot$, by the design of \ensuremath{\mathcal{A}_{\mathit{ST}}}{} and the guarantee of \ensuremath{\mathtt{Detect}}{}. Hence $e \notin D(C_{t})$, contradicting the choice of $e$. If $e \in U(C_{t-1})$, then $v$ and $u$ simulated the decision step of \ensuremath{\mathtt{Phase}}{} in round $t-1$ and by Prop.{}~\ref{property:edges:respectful-decisions}, $e$ is content under $\hat{C}_{t-1}$. Otherwise ($e \in D(C_{t-1})$), for every neighbor $e' = \{w, x\} \in U_{e}(C_{t-1}) \cap D(\hat{C}_{t-1})$, nodes $w$ and $x$ simulated the decision step of \ensuremath{\mathtt{Phase}}{} in round $t-1$. By Prop.{}~\ref{property:edges:respectful-decisions}, $e$ is content under $\hat{C}_{t-1}$. \end{proof} \begin{lemma} \label{lemma:edges:uncontent-directed-path-connection} Fix some time $t \geq t_{0} + 1$. If $e \in \Lambda(C_{t})$, then the supportive digraph $D_{\ensuremath{\mathcal{L}}}$ admits a directed path $P$ emerging from $C_{t}(e)$ of length $t - t_{0}$. \end{lemma} \begin{proof} Fix some $t' \geq t_{0}$ and an edge $e' \in \Lambda(C_{t'})$. By Lem.{}~\ref{lemma:edges:after-t-0-decided-imples-content-in-hat}, it holds that $e' \in \Gamma(\hat{C}_{t'-1})$ since $e' \in D(C_{t'})$. Let $\hat{C}_{t'-1}(e') = o \in \mathcal{O}$. Since $e' \in \Gamma(\hat{C}_{t'-1})$, it holds that $\hat{C}_{t'-1}[e'] \in \mathcal{T}(o)$. Let $\hat{M}^{e'}_{t'-1} \in \SatMultisets^{*}(o)$ be some core for $o$ such that $\hat{M}^{e'}_{t'-1} \subseteq \hat{C}_{t'-1}[e']$. The only difference between configurations $\hat{C}_{t'-1}$ and $C_{t'}$ is that decided edges under $\hat{C}_{t'-1}$ may become undecided under $C_{t'}$.
Hence, $\emptyset \subset \hat{M}^{e'}_{t'-1}$ since if $\emptyset = \hat{M}^{e'}_{t'-1}$, then $e' \in \Gamma(C_{t'})$, contradicting $e' \in \Lambda(C_{t'})$. We prove the claim by induction on $t$. Let $t = t_{0} + 1$, let $o = C_{t}(e)$, and let $e' \in N_{G}(e)$ be an arbitrary neighbor of $e$ such that $o' = \hat{C}_{t-1}(e') \in \hat{M}^{e}_{t-1}$. Since $\emptyset \subset \hat{M}^{e}_{t-1}$, such a neighbor exists. Hence, by the definition of the supportive digraph $D_{\ensuremath{\mathcal{L}}}$, the path $P=\langle o, o' \rangle$ of length $1$ exists in $D_{\ensuremath{\mathcal{L}}}$. Fix some $t > t_{0} + 1$. Let $\hat{B}^{e}_{t-1} \subseteq N_{G}(e)$ be the set of $e$'s neighbors whose decisions belong to $\hat{M}^{e}_{t-1}$. It holds that $\hat{B}^{e}_{t-1} \subseteq D(\hat{C}_{t-1})$ and for every $e' \in \hat{B}^{e}_{t-1}$, either $C_{t}(e') = \hat{C}_{t-1}(e')$ or $e' \in U(C_{t})$. If $C_{t}(e') = \hat{C}_{t-1}(e')$ for every neighbor $e' \in \hat{B}^{e}_{t-1}$, then by the core coverage property (Prop.{}~\ref{property:edges:core-coverage}), it holds that $e \in \Gamma(C_{t})$ since the only difference between configurations $\hat{C}_{t-1}$ and $C_{t}$ is that decided edges under $\hat{C}_{t-1}$ may become undecided under $C_{t}$. Since $e \in \Lambda(C_{t})$, there exists a neighbor $e' \in \hat{B}^{e}_{t-1}$ such that $e' \in U(C_{t})$. By Obs.{}~\ref{observation:edges:after-manipulations}, $e' \in \Lambda(C_{t-1})$ since $e' \in D(\hat{C}_{t-1}) \cap U(C_{t})$. Moreover, $\hat{C}_{t-1}(e') = C_{t-1}(e') = o' \in \mathcal{O}$. By the induction hypothesis, there exists a directed path $P' = \langle o', \cdots \rangle$ in the supportive digraph $D_{\ensuremath{\mathcal{L}}}$ of length $t-t_{0}-1$. Let $o = \hat{C}_{t-1}(e)$. Since $\hat{M}^{e}_{t-1}$ is a core for $o$ and $o' \in \hat{M}^{e}_{t-1}$, it holds that $(o, o') \in E(D_{\ensuremath{\mathcal{L}}})$. Hence, we can extend path $P'$ to a path $P = \langle o, P' \rangle$ of length $t-t_{0}$.
\end{proof} \begin{proof}[Proof of Lem.{}~\ref{lemma:edges:correctness}] Assume by contradiction that there exists an uncontent edge $e = \{v, u\}$ under $C_{t}$. In that case, by Lem.{}~\ref{lemma:edges:uncontent-directed-path-connection}, we get a contradiction to the $\nu$-bounded influence property of \ensuremath{\mathcal{L}}{} (Prop.{}~\ref{property:edges:bounded-influence}). The second part of the claim is ensured by the design of algorithm \ensuremath{\mathcal{A}_{\mathit{ST}}}{} and \ensuremath{\mathtt{Detect}}{}: every decided edge $e = \{v, u\} \in D(C_{t})$ is content under $C_{t}$ and the nodes in $V_{e}$ were not manipulated in round $t-1$, hence $\ensuremath{\mathtt{Detect}}_{v, t}(u) = \ensuremath{\mathtt{Detect}}_{u, t}(v) = \mathit{true}$ and the output value of $e$ does not change. \end{proof} \subsection{Fully Adaptive Run Time} \label{section:adaptive-run-time-edges} In this section, we bound the fully adaptive run time of \ensuremath{\mathcal{A}_{\mathit{ST}}}{}, thereby proving Thm.{}~\ref{theorem:edges:main}. Let $\tau$ be the parameter determined by Cor.{}~\ref{corollary:start-phase-prob-fixed-epsilon}. We start with the following lemma, which implies that, in expectation, at least a $\beta / (4 \phi^{2})$ fraction of the potential at any time $t \geq t_{s}$ is removed by time $t + \tau + \phi$. \begin{lemma} \label{lemma:edges:expected-removed-potential} Fix some time $t \geq t_{s}$ and a global state at time $t$. The design of \ensuremath{\mathcal{A}_{\mathit{ST}}}{} ensures that \[\mathbb{E}\left( \pi (G,C_{t+\tau+\phi})\right) \leq \left(1- \frac{\beta}{4 \phi^{2}}\right) \cdot \pi(G,C_{t}) \, . \] \end{lemma} Consider some time $t \geq t_{s}$ such that $\pi(G,C_{t}) = 0$ and denote $C_{t}$ by $C$.
By the properties of $\pi$, Lem.{}~\ref{lemma:edges:correctness}, and the $(\beta, \sigma_{0})$-progress property (Prop.{}~\ref{property:edges:progress}), it holds that $\pi(G-X,C_{G-X}) = 0$, $(G-X, C_{G-X}) \in \mathcal{SCG}(\ensuremath{\mathcal{P}})$, and edge $e$ is decided w.p.\ $1$ under $\ensuremath{\mathtt{Phase}}(G-X,C_{G-X})$, for any $X \subseteq U(C)$ and $e \in U(C) - X$. So, by standard probabilistic arguments and Cor.{}~\ref{corollary:start-phase-prob-fixed-epsilon}, it holds that at time $t + O(\phi^{5} \cdot \log (|U(C)|))$ in expectation, the configuration is complete and by Lem.{}~\ref{lemma:edges:correctness} also legal. Let $\Pi_{s} = \pi(G,C_{t_{s}})$ and let $R = \min \{r \in \mathbb{Z}_{\geq 0} \mid \pi(G,C_{t_{s}+r}) = 0 \}$ be the random variable that counts the number of rounds after $t_{s}$ until the potential reaches $0$ for the first time. At time $t_{s} + R$, the edge configuration $C_{t_{s} + R}$ is not necessarily complete, i.e., some edges may still be undecided. By Lem.{}~\ref{lemma:edges:expected-removed-potential} and standard probabilistic arguments, we know that $\mathbb{E} \left( R \right) = O \left( (1/\beta)\phi^{5} \log (\Pi_{s}) \right)$. So, we can conclude that the fully adaptive run time of \ensuremath{\mathcal{A}_{\mathit{ST}}}{} is $O(\nu + \phi) + O \left( (1/\beta)\phi^{5} \log (\Pi_{s}) \right) + O(\phi^{5} \cdot \log (|U(C_{t_{s}})|))$. By the definition of $\pi$, only edges in $U(C_{t_{s}})$ contribute to the potential $\Pi_{s}$ and $\sigma(C_{t_{s}}[e]) \leq \sigma(\emptyset) = \sigma_{0}$ for every $e \in U(C_{t_{s}})$, hence $\Pi_{s} \leq \sigma_{0} \cdot |U(C_{t_{s}})|$. Lem.{}~\ref{lemma:edges:up-bound-undecided-edges} completes the proof of Thm.{}~\ref{theorem:edges:main} since it states that $|U(C_{t_{s}})| \leq O \left( k \cdot \Delta^{\phi + \ensuremath{\nu}}\right)$. \begin{lemma} \label{lemma:edges:up-bound-undecided-edges} The number of undecided edges at time $t_{s}$ is up-bounded by $O \left( k \cdot \Delta^{\phi + \ensuremath{\nu}}\right)$.
\end{lemma} We start by proving Lem.{}~\ref{lemma:edges:expected-removed-potential}. \begin{proof}[Proof of Lem.{}~\ref{lemma:edges:expected-removed-potential}] Throughout the proof, we denote $C_{t}$ and $U(C_{t})$ by $C$ and $U$, respectively. Let $V = V(G(U))$ be the set of nodes with at least one incident undecided edge at time $t$. Let $P_{\tau} = \{v \in V \mid \mathit{step}_{v,t+ \tau} = 0 \}$ be the random set of nodes in $V$ whose step variable is $0$ at time $t+ \tau$. After $P_{\tau}$ is determined (independently for each node, by the design of the \ensuremath{\mathtt{PPS}}{} module), we reveal the set $P_{\tau}$ to an adversary. Conditioned on the event that $P_{\tau}$ is already determined, the adversary is allowed to determine the coin tosses of the nodes in $V(G)-P_{\tau}$ during the round interval $[t, t+ \tau + \phi - 2]$ and of the nodes in $P_{\tau}$ during the time interval $[t, t+ \tau - 1]$. Let $E^{d}_{P} \subseteq E(G(P_{\tau}))$ be the random set of edges that are decided under the random edge configuration $C_{t+ \tau}$ and let $E^{u}_{P} = E(G(P_{\tau})) - E^{d}_{P}$ be the random set of edges that are undecided under $C_{t+ \tau}$. Let $\Pi^{u}_{P} = \sum_{e \in E^{u}_{P}}\sigma(C[e])$ (resp., $\Pi^{d}_{P} = \sum_{e \in E^{d}_{P}} \sigma(C[e])$) be the random sum of the potential coefficients of the edges in $E^{u}_{P}$ (resp., $E^{d}_{P}$) under edge configuration $C$ and similarly, let $\Pi_{P} = \sum_{e \in E(G(P_{\tau}))} \sigma(C[e])$. Notice that $\Pi_{P} = \Pi^{u}_{P} + \Pi^{d}_{P}$. For every $e \in E(G(U))$, let $A_{e}$ be the event that $e \in E(G(P_{\tau}))$. By Cor.{}~\ref{corollary:start-phase-prob-fixed-epsilon}, for every edge $e \in E(G(U))$, it holds that \begin{equation} \label{eq:edges:prob-start-pahse-edge} \Pr\left( A_{e} \right) \geq \frac{1}{4\phi^{2}} \, . \end{equation} Let $\Pi_{U} = \sum_{e \in U} \sigma(C[e])$.
Notice that by the definition of a potential function it holds that $\Pi_{U} = \pi(G, C)$. Using Eq.{}~\ref{eq:edges:prob-start-pahse-edge}, we know that \begin{equation} \label{eq:edges:expected-potetial-start-phase-edges} \mathbb{E} \left( \Pi_{P} \right) = \mathbb{E} \left( \Pi^{u}_{P} + \Pi^{d}_{P} \right) \geq \frac{1}{4 \phi^{2}} \Pi_{U} = \frac{1}{4 \phi^{2}} \pi(G,C) \, . \end{equation} We now turn to prove that, in expectation, at least a $\beta$ fraction of the potential in $\Pi_{P}$ is removed by time $t + \tau + \phi$. Notice that all of the potential in $\Pi^{d}_{P}$ is removed by time $t + \tau + \phi$ since every edge that contributed potential to $\Pi^{d}_{P}$ under edge configuration $C$ is decided at time $t + \tau + \phi$. Let $\bar{D} = (U-E(G(P_{\tau}))) \cap D(C_{t+ \tau + \phi -1})$ be the random set of edges in $U-E(G(P_{\tau}))$ that are decided under $C_{t+ \tau + \phi -1}$, let $f: \bar{D} \rightarrow \mathcal{O}$ be the function such that $f(e) = C_{t+ \tau + \phi - 1}(e)$ for every $e \in \bar{D}$, and let $g : E^{d}_{P} \rightarrow \mathcal{O}$ be the function such that $g(e) = C_{t + \tau}(e)$ for every $e \in E^{d}_{P}$. Let $X = U - (E(G(P_{\tau})) \cup \bar{D})$. The edges in the random set $E^{u}_{P}$ are engaged in the phase that starts in round $t + \tau$ and this phase can be considered as being executed on the random graph $G-X$ with the random configuration $C^{-X} = [[C_{G - X} \nwarrow g] \nwarrow f]$. By the properties of the locally separable potential function $\pi$ and Lem.{}~\ref{lemma:edges:correctness}, we know that \begin{align*} \pi(G-X, C^{-X}) &= \sum_{e \in U(C^{-X})} \sigma(C^{-X}[e]) \\ &= \sum_{e \in E^{u}_{P}} \sigma(C^{-X}[e]) \\ &\leq \sum_{e \in E^{u}_{P}} \sigma(C[e]) \\ &= \Pi^{u}_{P} \, . \end{align*} We can conclude that \begin{equation} \label{eq:edges:start-phase-potetial} \pi(G-X, C^{-X}) \leq \Pi^{u}_{P}.
\end{equation} By the $(\beta, \sigma_{0})$-progress property (Prop.{}~\ref{property:edges:progress}), it is guaranteed that \begin{align*} & \mathbb{E} \left( \pi(G-X ,\ensuremath{\mathtt{Phase}}(G-X,C^{-X})) \right) \\ & = \mathbb{E} \left( \mathbb{E} \left( \pi(G-X,\ensuremath{\mathtt{Phase}}(G-X,C^{-X})) \mid \pi(G-X,C^{-X}) \right) \right) \\ & \leq (1-\beta) \cdot \mathbb{E} \left( \pi(G-X,C^{-X}) \right) \\ & \leq (1-\beta) \cdot \mathbb{E} \left( \Pi^{u}_{P} \right) \, , \end{align*} where the last inequality holds due to Eq.{}~\ref{eq:edges:start-phase-potetial}. We can conclude that the total potential, in expectation, that is removed during the time interval $(t,t +\tau + \phi]$ is at least \begin{equation*} \mathbb{E} ( \Pi^{d}_{P} ) + \beta \mathbb{E} \left( \Pi^{u}_{P} \right) \geq \beta \cdot \left( \mathbb{E} ( \Pi^{d}_{P} ) + \mathbb{E} \left( \Pi^{u}_{P} \right) \right) \geq \beta \cdot \frac{1}{4 \phi^{2}} \pi(G,C), \end{equation*} by Eq.{}~\ref{eq:edges:prob-start-pahse-edge} and Eq.{}~\ref{eq:edges:expected-potetial-start-phase-edges}. \end{proof} In the round interval $[t^{*}_{a}, t^{*}_{b}-1]$, the adversary may manipulate nodes. In order to distinguish between the edge configuration resulting from the execution of \ensuremath{\mathcal{A}_{\mathit{ST}}}{} in a round and the graph and edge configuration on which the next round is executed, we formally define both. We denote by $G_{t} \in \mathcal{G}$ the graph at time $t$, i.e., the graph that resulted from the adversary's manipulations in round $t-1$ (manipulations that occurred at time $t-\epsilon$), for every time $t \in \mathbb{Z}_{\geq 0}$. Since the graph may only be changed by the adversary, we denote by $G_{a}$ the graph $G_{t}$ for every time $t \leq t^{*}_{a}$. Recall that we denote by $G$ the graph $G_{t^{*}_{b}}$.
We denote by $\widetilde{C}_{t}$ the edge configuration of \ensuremath{\mathcal{A}_{\mathit{ST}}}{} that corresponds to the edge configuration after all nodes finished their local computation stage in round $t$. Notice that edge configuration $\widetilde{C}_{t}$ is defined on the graph $G_{t}$ and that $\widetilde{C}_{t} = C_{t+1}$ for every time $t \leq t^{*}_{a} - 1$ and $t \geq t^{*}_{b}$. We now turn to redefine the set $\hat{D}_{t}$ and the edge configuration $\hat{C}_{t}$. For every $t \geq t^{*}_{a} + 1$ the set of edges that became decided during round $t$ (by the simulation of the decision step of \ensuremath{\mathtt{Phase}}{}) is denoted by $\hat{D}_{t} = U(C_{t}) \cap D(\widetilde{C}_{t})$. Let $\hat{f}_{t}:\hat{D}_{t} \rightarrow \mathcal{O}$ be the function that corresponds to the decisions that were set for the edges in $\hat{D}_{t}$ during round $t$, i.e., $\hat{f}_{t}(e) = \widetilde{C}_{t}(e)$ for every $e \in \hat{D}_{t}$. We denote by $\hat{C}_{t} = [C_{t} \nwarrow \hat{f}_{t}]$ the edge configuration $C_{t}$ after setting the decisions of round $t$ to the edges in $\hat{D}_{t}$. We denote by $K \subseteq V(G)$ the set of nodes in $G$ that were manipulated during the round interval $[t^{*}_{a}, t^{*}_{b}-1]$. Notice that $|K| \leq k$. Let $V_{0} = V(G) - K$ be the set of nodes in $G$ that were not manipulated and let $E_{0} = E(G(V_{0}))$ be the set of edges in the subgraph of $G$ induced by $V_{0}$. Let $V^{f}_{0} = \{ v \in V_{0} \mid \delta_{G}(v, K) \geq \phi + 1 \}$ be the set of nodes in $V_{0}$ that are at distance at least $\phi + 1$ from any manipulated node in $K$ in the graph $G$. Let $E^{f}_{0} = E(G(V_{0}^{f}))$ be the set of edges in the subgraph of $G$ induced by $V^{f}_{0}$. Let $K_{t} \subseteq V(G_{t})$ be the set of nodes that were manipulated in round $t$. We define $\delta^{m}_{t}(v) = \min \{ \delta_{G_{t'}}(v, K_{t'}) \mid t^{*}_{a} \leq t' \leq t \}$ for every $v \in V^{f}_{0}$ and $t \geq t^{*}_{a}$. 
\begin{observation} \label{observation:edges:during-manipulations} The following statements are true for every $t \geq t^{*}_{a}$: \\ (1) Every edge $e \in E^{f}_{0}$ is port-consistent at time $t$; \\ (2) $N_{G_{t}}(v) = N_{G_{a}}(v) = N_{G}(v)$, for every $v \in V_{0}$; \\ (3) $\delta^{m}_{t}(v) \geq \delta_{G}(v ,K) \geq \phi + 1$; \\ (4) $E^{f}_{0} \cap \Lambda(C_{t}) = E^{f}_{0} \cap D(C_{t}) \cap U(\widetilde{C}_{t}) = E^{f}_{0} \cap D(\hat{C}_{t}) \cap U(\widetilde{C}_{t})$; \\ (5) $E^{f}_{0} \cap D(C_{t+1}) = E^{f}_{0} \cap (D(\hat{C}_{t}) - \Lambda(C_{t}))$; \\ (6) $E^{f}_{0} \cap D(C_{t}) = E^{f}_{0} \cap D(\widetilde{C}_{t-1})$; \\ (7) $E^{f}_{0} \cap \Gamma(C_{t}) = E^{f}_{0} \cap \Gamma(\widetilde{C}_{t-1})$; and \\ (8) if $e \in D(C_{t+1})$, then $\widetilde{C}_{t}(e) = \hat{C}_{t}(e) = C_{t+1}(e) = \hat{C}_{t+1}(e)$, for every $e \in E^{f}_{0}$. \end{observation} Consider some time $t \geq t^{*}_{a}$ and an edge $e = \{v ,u\} \in E(G_{t})$. We denote by $V^{e}_{t} = \{v, u\} \cup \bigcup_{\{w, x\} \in N_{G_{t}}(e)} \{w, x\}$ the set consisting of the endpoints of $e$ and the endpoints of all edges neighboring $e$ in $G_{t}$. \begin{lemma} \label{lemma:edges:far-from-manipulated-and-decided-content-in-hat} For every $t \geq t^{*}_{a}$ it holds that $E^{f}_{0} \cap D(C_{t}) \subseteq E^{f}_{0} \cap \Gamma(\hat{C}_{t-1})$. \end{lemma} \begin{proof} Every node $v \in V^{f}_{0}$ simulating the decision step of \ensuremath{\mathtt{Phase}}{} in round $t-1$ has $\mathit{step}_{v, t-1} = \phi - 1$, which means that $\mathit{step}_{v, t-\phi} = 0$ by the design of the \ensuremath{\mathtt{PPS}}{} module since $v$ was not manipulated. Let $t' = t - \phi$ and assume that $t' \geq t^{*}_{a}$. In round $t'$ node $v$ simulated step $0$ of \ensuremath{\mathtt{Phase}}{}, hence initialized all the registers maintained by \ensuremath{\mathtt{Phase}}{}. 
Node $v$ and all the nodes $u \in V_{0}$ such that $\delta^{m}_{t}(u) \geq \phi$ that are engaged in the phase that started at time $t'$ were not manipulated, hence simulated an execution of \ensuremath{\mathtt{Phase}}{} based on messages they received that correspond to a full, fault-free execution of \ensuremath{\mathtt{Phase}}{} by Obs.{}~\ref{observation:edges:during-manipulations}. Let $e \in E^{f}_{0} \cap D(C_{t})$. First we will prove that $e \notin \Lambda(C_{t-1})$. Assume by contradiction that $e =\{v, u\} \in \Lambda(C_{t-1})$. Since all the nodes in $V^{e}_{t}$ were not manipulated in round $t-2$, it holds that $\ensuremath{\mathtt{Detect}}_{v,t-1}(u) = \ensuremath{\mathtt{Detect}}_{u, t-1}(v) = \mathit{false}$ and $\widetilde{C}_{t-1}(e) = C_{t}(e) = \bot$, by the design of \ensuremath{\mathcal{A}_{\mathit{ST}}}{} and the guarantee of \ensuremath{\mathtt{Detect}}{}. Hence, $e \notin D(C_{t})$, a contradiction. If $e \in U(C_{t-1})$, then $v$ and $u$ simulated the decision step of \ensuremath{\mathtt{Phase}}{} in round $t-1$ and by Prop.{}~\ref{property:nodes:respectful-decisions}, it holds that $e \in E^{f}_{0} \cap \Gamma(\hat{C}_{t-1})$. Otherwise ($e \in D(C_{t-1})$), the nodes $w$ and $x$ such that $\{w, x\} \in U_{v}(C_{t-1}) \cap D(\hat{C}_{t-1})$ simulated the decision step of \ensuremath{\mathtt{Phase}}{} in round $t-1$. By Prop.{}~\ref{property:nodes:respectful-decisions}, it holds that $e \in \Gamma(\hat{C}_{t-1})$. \end{proof} We denote by $C_{a}$ the configuration $C_{t^{*}_{a}}$. \begin{lemma} \label{lemma:edges:far-from-manipulated-keep-decision} For every $e = \{v, u\} \in E_{0}$, it holds that if $\delta_{G}(v, K) \geq \mathit{depth}_{D_{\PredicateSet}}(C_{a}(e)) + \phi + 1$ and $\delta_{G}(u, K) \geq \mathit{depth}_{D_{\PredicateSet}}(C_{a}(e)) + \phi + 1$, then $C_{t}(e) = C_{a}(e)$, for every $t \geq t^{*}_{a}$. 
\end{lemma} \begin{proof} The claim obviously holds for $t = t^{*}_{a}, \cdots, t^{*}_{a} + \phi$, so we assume hereafter that $t \geq t^{*}_{a} + \phi + 1$. Recall that $\phi \geq 2$. Let $e'=\{w, x\} \in E^{f}_{0}$ be some edge and let $C_{a}(e') = o \in \mathcal{O}$. Assume that there exists a time $t \geq t^{*}_{a} + \phi + 1$ such that $C_{t}(e') \neq o$ and let $t$ be the first time it occurred. By Obs.{}~\ref{observation:edges:during-manipulations}, we know that $\widetilde{C}_{t-1}(e') = C_{t}(e') \neq o$. Since $C_{t-1}(e') = o$ and the nodes $w$ and $x$ were not manipulated in round $t-2$, it holds that $e'$ changed its decision, hence $\ensuremath{\mathtt{Detect}}_{w,t-1}(x) = \ensuremath{\mathtt{Detect}}_{x, t-1}(w) = \mathit{false}$ by the construction of \ensuremath{\mathcal{A}_{\mathit{ST}}}{}. Since the nodes in $V^{e'}_{t-2}$ were not manipulated in round $t-2$, by the guarantee of \ensuremath{\mathtt{Detect}}{}, it holds that $e'$ is uncontent under $C_{t-1}$. Since $e' \in D(C_{t-1})$ and $t \geq t^{*}_{a}$, by Lem.{}~\ref{lemma:edges:far-from-manipulated-and-decided-content-in-hat}, it holds that $e' \in \Gamma(\hat{C}_{t-2})$. We denote by $\hat{M}^{e'}_{t-2} \in \SatMultisets^{*}(o)$ some core for $o$ such that $\hat{M}^{e'}_{t-2} \subseteq \hat{C}_{t-2}[e'] \in \mathcal{T}(o)$. We will prove the claim by induction on $\mathit{depth}_{D_{\PredicateSet}}(\cdot)$. Let $e = \{v, u\} \in E_{0}$ be some edge such that $\delta_{G}(v, K) \geq \mathit{depth}_{D_{\PredicateSet}}(C_{a}(e)) + \phi + 1$, $\delta_{G}(u, K) \geq \mathit{depth}_{D_{\PredicateSet}}(C_{a}(e)) + \phi + 1$ and $\mathit{depth}_{D_{\PredicateSet}}(C_{a}(e)) = 0$ and let $C_{a}(e) = o \in \mathcal{O}$. Assume by contradiction that there exists a time $t \geq t^{*}_{a} + \phi +1$ such that $C_{t}(e) \neq o$ and let $t$ be the first time it occurred. Since $\mathit{depth}_{D_{\PredicateSet}}(o) = 0$, every core for $o$ is the empty set, hence $\hat{M}^{e}_{t-2} = \emptyset$. 
The only difference between configurations $\hat{C}_{t-2}$ and $\widetilde{C}_{t-2}$ is that decided edges under $\hat{C}_{t-2}$ may become undecided under $\widetilde{C}_{t-2}$. By the definition of a core, $e \in \Gamma(\widetilde{C}_{t-2})$ and by Obs.{}~\ref{observation:edges:during-manipulations}, $e \in \Gamma(C_{t-1})$, which is a contradiction. Let $e=\{v, u\} \in E_{0}$ be some edge such that $\delta_{G}(v, K) \geq \mathit{depth}_{D_{\PredicateSet}}(C_{a}(e)) + \phi + 1$, $\delta_{G}(u, K) \geq \mathit{depth}_{D_{\PredicateSet}}(C_{a}(e)) + \phi + 1$ and $\mathit{depth}_{D_{\PredicateSet}}(C_{a}(e)) = i > 0$ and let $C_{a}(e) = o \in \mathcal{O}$. Assume by contradiction that there exists a time $t \geq t^{*}_{a} + \phi + 1$ such that $C_{t}(e) \neq o$ and let $t$ be the first time it occurred. It holds that $\emptyset \subset \hat{M}^{e}_{t-2}$ since $i > 0$, which means that the empty set is not a core for $o$. Let $\hat{B}^{e}_{t-2} \subseteq N_{G}(e)$ be the set of $e$'s neighbors whose decisions are in $\hat{M}^{e}_{t-2}$. Since $\mathit{depth}_{D_{\PredicateSet}}(o) = i > 0$, it holds that $\mathit{depth}_{D_{\PredicateSet}}(\hat{C}_{t-2}(e')) \leq i-1$ and $\delta_{G}(w, K) \geq \mathit{depth}_{D_{\PredicateSet}}(\hat{C}_{t-2}(e')) + \phi + 1$, $\delta_{G}(x, K) \geq \mathit{depth}_{D_{\PredicateSet}}(\hat{C}_{t-2}(e')) + \phi + 1$ for every $e' =\{w, x\} \in \hat{B}^{e}_{t-2}$. By the induction hypothesis, $C_{t'}(e') = C_{a}(e')$ and $e' \in \Gamma(C_{t'})$ for every $t' \geq t^{*}_{a}$ and every $e' \in \hat{B}^{e}_{t-2}$. Let $e' \in \hat{B}^{e}_{t-2}$. By Obs.{}~\ref{observation:edges:during-manipulations}, $\hat{C}_{t-2}(e') = \widetilde{C}_{t-2}(e') = C_{t-1}(e')$. The only difference between configurations $\hat{C}_{t-2}$ and $\widetilde{C}_{t-2}$ is that decided edges under $\hat{C}_{t-2}$ may become undecided under $\widetilde{C}_{t-2}$. 
By the definition of a core, $e \in \Gamma(\widetilde{C}_{t-2})$ and by Obs.{}~\ref{observation:edges:during-manipulations}, $e \in \Gamma(C_{t-1})$, which is a contradiction. \end{proof} \begin{proof} [Proof of Lem.{}~\ref{lemma:edges:up-bound-undecided-edges}] Every edge $e=\{v, u\} \in E_{0}$ such that $\delta_{G}(v, K) \geq \nu + \phi + 1$ and $\delta_{G}(u, K) \geq \nu + \phi + 1$ will not change its decision at any time by Lem.{}~\ref{lemma:edges:far-from-manipulated-keep-decision}. \end{proof} \section{Concrete Problems and Algorithms} \label{section:concrete-problems} In this section, we develop the self-stabilizing algorithms listed in Table~\ref{table:concrete-problems} and establish their fully adaptive run-time and message size bounds. Sec.{}\ \ref{section:concrete:node-LCLs} and \ref{section:concrete:edge-LCLs} present the corresponding fault-free algorithms (and detection procedures) for node-LCLs and edge-LCLs, respectively, and derive their eligibility parameters; the promised bounds are then obtained by plugging these parameters into Thm.{}\ \ref{theorem:nodes:main}, \ref{theorem:edges:main}, and \ref{theorem:simulation-line-graph}. In Sec.{}~\ref{section:mm-log-k}, we provide an alternative analysis for the self-stabilizing maximal matching algorithm produced by the transformer, improving its fully adaptive run-time from $O (\log (k + \Delta))$ to $O (\log k)$. Note that this alternative analysis relies on specific features of the maximal matching problem and we do not know if it can be incorporated into the line of arguments used by the transformer's generic analysis. Consider a graph $G \in \mathcal{U}$ and a positive integer $\alpha \in \mathbb{Z}_{> 0}$. The \emph{clone} graph $G_{\alpha}$ of $G$ is the graph in which $V(G_{\alpha}) = \{(v,i) \mid v \in V(G) ,i \in [\alpha] \}$ and $E(G_{\alpha}) = \{ \{(v,i),(v,i')\} \mid v \in V(G), 1 \leq i < i' \leq \alpha \} \cup \{ \{(v,i), (u,i)\} \mid \{v,u\} \in E(G), i \in [\alpha] \}$. 
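To make the construction concrete, the following minimal Python sketch (not part of the paper; all names are ours) builds the node and edge sets of $G_{\alpha}$ directly from the definition above: $\alpha$ copies of every node, a clique among the copies of each node, and one copy of every original edge per layer.

```python
def clone_graph(nodes, edges, alpha):
    """Return (V, E) of the clone graph G_alpha as sets; edges are frozensets."""
    V = {(v, i) for v in nodes for i in range(1, alpha + 1)}
    # a clique among the alpha copies of each original node
    cliques = {frozenset({(v, i), (v, j)})
               for v in nodes
               for i in range(1, alpha + 1)
               for j in range(i + 1, alpha + 1)}
    # one copy of each original edge in every layer i
    layers = {frozenset({(v, i), (u, i)})
              for (v, u) in edges
              for i in range(1, alpha + 1)}
    return V, cliques | layers

# example: a single edge {a, b} with alpha = 2
V, E = clone_graph({'a', 'b'}, {('a', 'b')}, 2)
# 4 nodes; 2 clique edges (one per original node) + 2 layer copies of {a, b}
```

For $\alpha = c - 1$ this is exactly the graph on which the maximal node $c$-coloring algorithm below runs its maximal independent set simulation.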
The clone graph was introduced in the seminal paper of Luby~\cite{Luby1986simple}. It is well known that the nodes of a communication network $G$ can simulate an execution of a fault-free distributed algorithm on $G_{\alpha}$ for every $\alpha \in \mathbb{Z}_{> 0}$. Moreover, if $\alpha$ is ``hard coded'' into the nodes of $G$ (i.e., cannot be manipulated by the adversary), then the nodes of $G$ can simulate an execution of a self-stabilizing algorithm on $G_{\alpha}$. \subsection{Node-LCLs} \label{section:concrete:node-LCLs} For all problems discussed in this section, we take the straightforward detection procedure \ensuremath{\mathtt{Detect}}{} in which each node $v$ shares its output register $\mathtt{out}_{v}$ with all its neighbors in every round. \subsubsection{Maximal Independent Set} \label{section:mis} We define the LCL $\ensuremath{\mathcal{L}} = \{\ensuremath{\ell}_{d}\}^{\infty}_{d=0}$ of maximal independent set over the output values $\mathcal{O} = \{ \mathit{IN}, \mathit{OUT} \}$ such that for every $d \in \mathbb{Z}_{\geq 0}$ and $M \in \mathcal{M}_{d}(\mathcal{O})$, it holds that (1) $\ensuremath{\ell}_{d}(\mathit{IN},M) = \mathit{true}$ iff $\mathit{IN} \notin M$; and (2) $\ensuremath{\ell}_{d}(\mathit{OUT},M) = \mathit{true}$ iff $\mathit{IN} \in M$. By construction, \ensuremath{\mathcal{L}}{} satisfies the $1$-bounded influence property (Prop.{}~\ref{property:nodes:bounded-influence}). Let $\ensuremath{\mathcal{P}} = \langle \mathcal{O}, \mathcal{U}_{\Delta}, \ensuremath{\mathcal{L}} \rangle$ be the resulting node-LCL. \paragraph{Eligible distributed algorithm.} We use a modified version of the maximal independent set algorithm developed in the seminal work of Alon et al.~\cite{AlonBI1986fast}. The eligible distributed algorithm \ensuremath{\mathcal{A}}{} for \ensuremath{\mathcal{P}}{} is presented through its phase procedure \ensuremath{\mathtt{Phase}}{} which has a phase length of $3$. 
Let $G \in \mathcal{U}_{\Delta}$ be a graph and let $H$ be the subgraph induced by the undecided nodes at the beginning of a phase of \ensuremath{\mathtt{Phase}}{}. Consider a node $v \in V(H)$. In step $1$, node $v$ marks itself w.p.\ $1 / \mathrm{d}_{H}(v)$ and sends a message to its undecided neighbors indicating whether it is marked or not, together with its degree $\mathrm{d}_{H}(v)$. Step $2$ is the decision step: If node $v$ has a decided neighbor with decision $\mathit{IN}$, then $v$ sets $\mathtt{out}_{v} \leftarrow \mathit{OUT}$. Otherwise, node $v$ sets $\mathtt{out}_{v} \leftarrow \mathit{IN}$ if and only if it is marked and $\mathrm{d}_{H}(v) > \mathrm{d}_{H}(u)$ for every marked neighbor $u \in N_{H}(v)$ of $v$. \paragraph{Locally Separable Potential Function for \ensuremath{\mathcal{P}}{}.} Given $M, M' \in \mathcal{M}(\mathcal{O})$, the potential coefficient $\sigma(M, M')$ is defined by setting $\sigma(M, M') = 1$ if $\mathit{IN} \in M$ or $\mathit{IN} \in M'$; and $\sigma(M, M') = 2$ otherwise. This implies that $\sigma_{0} = \sigma(\emptyset, \emptyset) = 2$. \sloppy \paragraph{Proving the $(\beta, \sigma_{0})$-Progress Property for \ensuremath{\mathcal{A}}{}.} Consider a strongly configured graph $(G, C) \in \mathcal{SCG}(\ensuremath{\mathcal{P}})$ and let $H = G(U(C))$ be the subgraph induced by the undecided nodes. We make extensive use of the operator $\psi_{C} : E(H) \rightarrow \mathbb{Z}_{\geq 0}$ defined by setting $\psi_{C}(e) = \sigma(C[u], C[v])$ for each edge $e = \{ u, v \} \in E(H)$. It is convenient to extend the domain of the operator $\psi_{C}$ from $E(H)$ to $E(G)$, defining $\psi_{C}(e) = 0$ for each edge $e \in E(G) - E(H)$. \par\fussy For a node $v \in V(H)$, let $N^{g}_{H}(v) = \{ u \in N_{H}(v) \mid \mathrm{d}_{H}(u) \leq \mathrm{d}_{H}(v) \}$ be the set of neighbors $u$ of $v$ whose degree in $H$ is not larger than that of $v$. 
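The two steps of the phase procedure above can be sketched as follows in a minimal, single-machine Python illustration (all names are ours; it models only the combinatorial content of one phase, not the message passing, and we let a node that is isolated in $H$ mark itself with probability $1$):

```python
import random

def mis_phase(adj, out, rng=random):
    """One phase: adj maps node -> set of neighbors; out maps node ->
    'IN', 'OUT', or None (undecided). Returns the updated decisions."""
    H = {v for v, o in out.items() if o is None}            # undecided nodes
    deg = {v: sum(1 for u in adj[v] if u in H) for v in H}  # degree in H
    # step 1: mark w.p. 1/d_H(v)
    marked = {v for v in H
              if rng.random() < (1.0 / deg[v] if deg[v] > 0 else 1.0)}
    # step 2: the decision step
    new_out = dict(out)
    for v in H:
        if any(out[u] == 'IN' for u in adj[v]):
            new_out[v] = 'OUT'           # a neighbor already decided IN
        elif v in marked and all(deg[v] > deg[u]
                                 for u in adj[v] if u in marked):
            new_out[v] = 'IN'            # strictly largest marked degree
    return new_out
```

On a triangle where one node is already decided $\mathit{IN}$, a single call decides the other two nodes $\mathit{OUT}$ regardless of the random marks.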
Node $v$ is said to be \emph{good} in $H$ \cite{AlonBI1986fast} if $|N^{g}_{H}(v)| \geq \mathrm{d}_{H}(v) / 3$. It is a well known fact (see, e.g., \cite[Lem.{}~4.4]{AlonBI1986fast}) that at least half of the edges in $E(H)$ are incident on at least one good node. Consider the configuration $C' = \ensuremath{\mathtt{Phase}}(G, C)$ obtained from applying the phase procedure \ensuremath{\mathtt{Phase}}{} to $G$ under configuration $C$; observe that $C'$ is a random variable. Let $v \in V(H)$ be a good node and let $E_{v} = \{ e \in E(H) \mid v \in e \}$. We will show that, in expectation, at least a constant fraction of the edges $e \in E_{v}$ satisfy \begin{equation} \label{equation:MIS:main-progress-argument} \psi_{C'}(e) \, \leq \, \psi_{C}(e) - 1 \, . \end{equation} Since at least half of the edges in $H$ are incident on a good node and since $\psi_{C}(e) \leq \sigma_{0} = 2$ for every edge $e \in E(H)$, it follows that $\mathbb{E}(\pi(G, C')) \leq (1 - \beta) \cdot \pi(G, C)$ for a constant $\beta > 0$. A node $u \in V(H)$ is said to be \emph{doomed} in $C$ if there exists a node $w \in D_{u}(C)$ such that $C(w) = \mathit{IN}$. Notice that if $u$ is doomed in $C$, then $u$ is guaranteed to be decided in $C'$. In particular, if $v$ is doomed in $C$, then $\psi_{C'}(e) = 0$ for every $e \in E_{v}$, thus establishing (\ref{equation:MIS:main-progress-argument}). Assume in what follows that $v$ is not doomed in $C$. Let $Z = \{ u \in N^{g}_{H}(v) \mid \text{$u$ is doomed in $C$} \}$. If $|Z| \geq |N^{g}_{H}(v)| / 2$, then at least a $(1 / 6)$-fraction of the edges $e \in E_{v}$ satisfy $\psi_{C'}(e) = 0$, thus establishing (\ref{equation:MIS:main-progress-argument}). Assume that $|N^{g}_{H}(v) - Z| > |N^{g}_{H}(v)| / 2$. It is well known (see, e.g., \cite{AlonBI1986fast}) that each node $u \in V(H)$ that is not doomed under $C$ satisfies $C'(u) = \mathit{IN}$ with probability at least $\Omega (1 / \mathrm{d}_{H}(u))$. 
This applies, in particular, to the nodes in $N^{g}_{H}(v) - Z$, concluding that each node $u \in N^{g}_{H}(v) - Z$ satisfies $C'(u) = \mathit{IN}$ with probability at least $\Omega (1 / \mathrm{d}_{H}(u)) \geq \Omega (1 / \mathrm{d}_{H}(v))$. Since \[ |N^{g}_{H}(v) - Z| \, > \, |N^{g}_{H}(v)| / 2 \, \geq \, \mathrm{d}_{H}(v) / 6 \, , \] it follows by a standard probabilistic argument that with a positive constant probability, at least one node $u \in N^{g}_{H}(v) - Z$ satisfies $C'(u) = \mathit{IN}$. This results in $\psi_{C'}(e) = 1 = \psi_{C}(e) - 1$ for each edge $e = \{ w, v \}$ such that $w \in N^{g}_{H}(v) - Z$, thus establishing (\ref{equation:MIS:main-progress-argument}). \subsubsection{Node $(1 + \epsilon)\Delta$-coloring} \label{section:1+epsilon-node-coloring} Fix some $\Delta \in \mathbb{Z}_{> 0}$ and a constant $\epsilon \geq 1/\Delta$ and assume without loss of generality that $(1+\epsilon) \Delta \in \mathbb{Z}_{> 0}$. Let $\ensuremath{\mathcal{L}}{} = \{\ensuremath{\ell}_{d}\}^{\infty}_{d=0}$ be an LCL over the output values $\mathcal{O} = \left[(1+\epsilon)\Delta \right]$ such that for every $d \in \mathbb{Z}_{\geq 0}$, $i \in \mathcal{O}$, and $M \in \mathcal{M}_{d}(\mathcal{O})$ it holds that $\ensuremath{\ell}_{d}(i, M) = \mathit{true}$ iff $M(i) = 0$. By the definition of \ensuremath{\mathcal{L}}{}, it satisfies the $0$-bounded influence property (Prop.{}~\ref{property:nodes:bounded-influence}). Let $\ensuremath{\mathcal{P}} = \langle \mathcal{O}, \mathcal{U}_{\Delta}, \ensuremath{\mathcal{L}} \rangle$ be a node-LCL and notice that \ensuremath{\mathcal{P}}{} is the node coloring problem with $(1+\epsilon)\Delta$ colors. \paragraph{Eligible distributed algorithm.} We describe the eligible distributed algorithm \ensuremath{\mathcal{A}}{} for \ensuremath{\mathcal{P}}{} by its phase procedure \ensuremath{\mathtt{Phase}}{} which has a phase length of $3$. 
Let $G \in \mathcal{U}_{\Delta}$ be some graph and let $v \in V(G)$ be some undecided node at the beginning of \ensuremath{\mathtt{Phase}}{}. In step $1$, node $v$ chooses a color $c_{v}$ u.a.r.\ (uniformly at random) from the palette $\mathcal{O}$ and sends it to all of its undecided neighbors. In step $2$ (which is the decision step), node $v$ sets $\mathtt{out}_{v} = c_{v}$ only if $c_{v}$ differs from the color of all its decided neighbors and from $c_{u}$ for every undecided neighbor $u$. If all of $v$'s neighbors are decided, then $v$ sets $\mathtt{out}_{v}$ to some arbitrarily chosen non-conflicting color from the set $\mathcal{O}$. \paragraph{Locally Separable Potential Function for \ensuremath{\mathcal{P}}{}.} The potential coefficients for every $M, M' \in \mathcal{M}(\mathcal{O})$ are defined such that $\sigma(M, M') = 1$. With this definition we get that $\sigma_{0} = \sigma(\emptyset, \emptyset) = 1$. Notice that the resulting separable potential function for \ensuremath{\mathcal{P}}{}, denoted by $\pi$, simply counts the number of edges both of whose endpoints are undecided. \paragraph{Proving the $(\beta, \sigma_{0})$-progress Property for \ensuremath{\mathcal{A}}{}.} Consider some $G \in \mathcal{U}_{\Delta}$. Let $C : V(G) \rightarrow \mathcal{O} \cup \{\bot\}$ be a configuration such that $(G,C) \in \mathcal{SCG}(\ensuremath{\mathcal{P}})$ and let $v \in U(C)$. We denote by $A_{v}$ the event that node $v$ is decided under configuration $\ensuremath{\mathtt{Phase}}(G, C)$. For a neighbor $u \in D_{v}(C)$, we denote by $A_{v,u}$ the event that node $v$ chose the same color during the execution of \ensuremath{\mathtt{Phase}}{} as the color of neighbor $u$. For a neighbor $u \in U_{v}(C)$, we denote by $A_{v,u}$ the event that nodes $v$ and $u$ chose the same color during the execution of \ensuremath{\mathtt{Phase}}{}. It is easily verifiable that for every $u \in N_{G}(v)$, it holds that $\Pr(A_{v,u}) \leq \frac{1}{(1+\epsilon) \Delta}$. 
By the union bound we get that $\Pr\left(\bigcup_{u \in N_{G}(v)} A_{v,u}\right) \leq \frac{1}{1+\epsilon}$. We can conclude that $\Pr(A_{v}) \geq 1-\frac{1}{1+\epsilon}$, hence for every $e = \{u,v\} \in E(G(U))$, the probability that at least one of $e$'s endpoints is decided under configuration $\ensuremath{\mathtt{Phase}}(G, C)$ is at least $1-\frac{1}{1+\epsilon} = \Omega(1)$. Once $u$ or $v$ becomes decided, the potential coefficient of edge $e$ (which is $1$) is removed from $\pi$. \subsubsection{Maximal Node $c$-coloring} \label{section:maximal-node-coloring} Fix an integer $2 \leq c \leq \Delta + 2$. We define the LCL $\ensuremath{\mathcal{L}} = \{\ensuremath{\ell}_{d}\}^{\infty}_{d=0}$ of maximal node $c$-coloring over the output values $\mathcal{O} = \{1, \cdots, c\}$ such that for every $d \in \mathbb{Z}_{\geq 0}$ and $M \in \mathcal{M}_{d}(\mathcal{O})$ it holds that (1) $\ensuremath{\ell}_{d}(i,M) = \mathit{true}$ iff $i \notin M$ for every $1 \leq i \leq c-1$; and (2) $\ensuremath{\ell}_{d}(c,M) = \mathit{true}$ iff $M(j) \geq 1$ for every $j \in \{1, \cdots, c-1\}$. By the definition of \ensuremath{\mathcal{L}}{}, it satisfies the $1$-bounded influence property (Prop.{}~\ref{property:nodes:bounded-influence}). Let $\ensuremath{\mathcal{P}} = \langle \mathcal{O}, \mathcal{U}_{\Delta}, \ensuremath{\mathcal{L}} \rangle$ be the resulting node-LCL. Consider some $G \in \mathcal{G}$. The self-stabilizing algorithm for \ensuremath{\mathcal{P}}{} is a simulation of our self-stabilizing maximal independent set algorithm on $G_{c-1}$. For every node $v \in V(G)$, if all of the nodes $(v ,i) \in V(G_{c-1})$ for $1 \leq i \leq c-1$ are decided, then node $v$ takes the color $i$ such that $(v, i)$ is decided $\mathit{IN}$, if such an $i$ exists (at most one clone of $v$ can be decided $\mathit{IN}$ since the clones of $v$ form a clique); otherwise, node $v$ takes the color $c$. \subsubsection{Node ($\Delta+1$)-coloring} \label{section:Delta+1-node-coloring} We solve the node $(\Delta + 1)$-coloring problem by solving the maximal node $(\Delta + 2)$-coloring problem; note that the color $\Delta + 2$ is never used since a node of degree at most $\Delta$ cannot have all of the colors $1, \dots, \Delta + 1$ appear in its neighborhood. 
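The trial-coloring phase used by the $(1+\epsilon)\Delta$-coloring algorithm of Sec.{}~\ref{section:1+epsilon-node-coloring} can be sketched as follows in a minimal Python illustration (names and data layout are ours; it omits the shortcut for a node all of whose neighbors are decided):

```python
import random

def coloring_phase(adj, out, palette, rng=random):
    """One trial-coloring phase: each undecided node draws a color u.a.r.
    and keeps it iff no decided neighbor holds it and no undecided
    neighbor drew the same trial color."""
    undecided = [v for v, c in out.items() if c is None]
    trial = {v: rng.choice(palette) for v in undecided}
    new_out = dict(out)
    for v in undecided:
        conflict = any(
            (out[u] if out[u] is not None else trial[u]) == trial[v]
            for u in adj[v])
        if not conflict:
            new_out[v] = trial[v]       # trial color becomes permanent
    return new_out
```

With a palette of size $(1+\epsilon)\Delta$, the union-bound argument above shows that each call decides any fixed undecided node with probability at least $\epsilon/(1+\epsilon)$.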
\subsubsection{Incremental Node $c$-coloring} \label{section:incremental-node-coloring} We define the LCL $\ensuremath{\mathcal{L}} = \{\ensuremath{\ell}_{d}\}^{\infty}_{d=0}$ of incremental node $c$-coloring over the output values $\mathcal{O} = \{ 1, \dots, c \}$ such that for every $d \in \mathbb{Z}_{\geq 0}$, $1 \leq i \leq c$, and $M \in \mathcal{M}_{d}(\mathcal{O})$, it holds that $\ensuremath{\ell}_{d}(i, M) = \mathit{true}$ iff (1) $i \notin M$; and (2) $\sum_{j = 1}^{i - 1} M(j) \geq i - 1$. By construction, \ensuremath{\mathcal{L}}{} satisfies the $(c - 1)$-bounded influence property (Prop.{}~\ref{property:nodes:bounded-influence}). Let $\ensuremath{\mathcal{P}} = \langle \mathcal{O}, \mathcal{U}_{\Delta}, \ensuremath{\mathcal{L}} \rangle$ be the resulting node-LCL. \paragraph{Eligible distributed algorithm.} We use a variant of the maximal independent set algorithm presented in Sec.{}~\ref{section:mis}. The eligible distributed algorithm \ensuremath{\mathcal{A}}{} for \ensuremath{\mathcal{P}}{} is presented through its phase procedure \ensuremath{\mathtt{Phase}}{} which has a phase length of $3$. Let $G \in \mathcal{U}_{\Delta}$ be a graph and let $H$ be the subgraph induced by the undecided nodes at the beginning of a phase of \ensuremath{\mathtt{Phase}}{}. Consider a node $v \in V(H)$. In step $1$, node $v$ marks itself w.p.\ $1 / \mathrm{d}_{H}(v)$ and sends a message to its undecided neighbors indicating whether it is marked or not, together with its degree $\mathrm{d}_{H}(v)$. Step $2$ is the decision step: If node $v$ has at least $c - 1$ decided neighbors with decisions $1 \leq i \leq c - 1$, then $v$ sets $\mathtt{out}_{v} \leftarrow c$. 
Otherwise, if $v$ is marked and $\mathrm{d}_{H}(v) > \mathrm{d}_{H}(u)$ for every marked neighbor $u \in N_{H}(v)$ of $v$, then $v$ sets $\mathtt{out}_{v} \gets i_{v}$, where $i_{v}$ is the smallest $1 \leq i \leq c - 1$ such that (I) $v$ has no neighbor with output $i$; and (II) $v$ has at least $i - 1$ decided neighbors with decisions $1 \leq j \leq i - 1$. \paragraph{Locally Separable Potential Function for \ensuremath{\mathcal{P}}{}.} The potential coefficients for every $M, M' \in \mathcal{M}(\mathcal{O})$ are defined such that $\sigma(M, M') = c + \max\{ 0, c-1-\sum_{i=1}^{c-1}M(i) \} + \max\{ 0, c-1-\sum_{i=1}^{c-1}M'(i) \}$. With this definition we get that $\sigma_{0} = \sigma(\emptyset, \emptyset) = 3c-2$. \paragraph{Proving the $(\beta, \sigma_{0})$-progress Property for \ensuremath{\mathcal{A}}{}.} Consider some strongly configured graph $(G, C) \in \mathcal{SCG}(\ensuremath{\mathcal{P}})$. Let $H = G(U(C))$ and let $e = \{v, u\} \in E(H)$. We denote by $\psi(e) = \sigma(C[v], C[u])$ the potential coefficient of $e$. By the definition of the potential coefficients, it holds that $\psi(e)/\psi(e') \leq O(1)$ for every $e, e' \in E(H)$. Let $v \in V(H)$ be a good node and let $E_{v} = \{ \{v, u\} \in E(H) \mid u \in N_{H}(v) \}$. We will show that, in expectation, at least a constant fraction of the edges in $E_{v}$ ``lose'' at least $1$ unit of potential under configuration $\ensuremath{\mathtt{Phase}}{}(G, C)$. Since at least half of the edges in $H$ are incident on a good node, we can conclude that, in expectation, an $\Omega(1/c)$ fraction of the potential is removed. Let $v \in V(H)$ be a good node and let $\gamma = |N^{g}_{H}(v)|$. If there exist $u_{1}, \cdots, u_{c-1} \in D_{v}(C)$ such that $C(u_{i}) = i$, then $v$ is decided under $\ensuremath{\mathtt{Phase}}(G, C)$. 
Otherwise, if there exist $u_{1}, \cdots, u_{\lceil(1/2)\gamma\rceil} \in N^{g}_{H}(v)$ such that for every $1 \leq i \leq \lceil (1/2)\gamma \rceil$ there exist $w^{i}_{1}, \cdots, w^{i}_{c-1} \in D_{u_{i}}(C)$ with $C(w^{i}_{j}) = j$, then at least a constant fraction of the edges in $E_{v}$ ``lose'' $1$ unit of potential under configuration $\ensuremath{\mathtt{Phase}}(G, C)$. Otherwise, using similar arguments as in \cite{AlonBI1986fast}, we can prove that at least one of $v$'s neighbors in $N^{g}_{H}(v)$ is marked with at least a constant probability and that a marked node will decide on some color $1 \leq i \leq c-1$ with at least a constant probability. Hence, with at least a constant probability, node $v$ has an additional neighbor with some color $i$ for $1 \leq i \leq c-1$ under configuration $\ensuremath{\mathtt{Phase}}(G, C)$, which means that all the edges in $E_{v}$ ``lose'' $1$ unit of potential under configuration $\ensuremath{\mathtt{Phase}}(G, C)$. \subsection{Edge-LCLs} \label{section:concrete:edge-LCLs} \subsubsection{Maximal Matching} \label{section:mm} We define the LCL $\ensuremath{\mathcal{L}} = \{\ensuremath{\ell}_{d}\}^{\infty}_{d=0}$ of maximal matching over the output values $\mathcal{O} = \{ \mathit{Mat}, \mathit{UnM} \}$ such that for every $d \in \mathbb{Z}_{\geq 0}$ and $M \in \mathcal{M}_{d}(\mathcal{O})$ it holds that (1) $\ensuremath{\ell}_{d}(\mathit{Mat}, M) = \mathit{true}$ iff $M(\mathit{Mat}) = 0$; and (2) $\ensuremath{\ell}_{d}(\mathit{UnM} , M) = \mathit{true}$ iff $M(\mathit{Mat}) \geq 1$. By construction, \ensuremath{\mathcal{L}}{} satisfies the $1$-bounded influence property (Prop.{}~\ref{property:edges:bounded-influence}). Let $\ensuremath{\mathcal{P}} = \langle \mathcal{O}, \mathcal{U}_{\Delta}, \ensuremath{\mathcal{L}} \rangle$ be the resulting edge-LCL. \paragraph{Implementing the Detection Procedure for \ensuremath{\mathcal{L}}{}.} Consider some $G \in \mathcal{U}$. 
We will implement the detection procedure for \ensuremath{\mathcal{L}}{} with messages of size $O(1)$. Let \ensuremath{\mathtt{Detect}}{} denote the implementation of the aforementioned detection procedure. Every node $v$ will send to every neighbor $u \in N_{G}(v)$ the content of $\mathtt{out}_{v}(u)$ and the value $\mathit{false}$ if $\mathtt{out}_{v}(w) \neq \mathit{Mat}$ for every $w \in N_{G}(v)-\{u\}$ and $\mathit{true}$ otherwise. If node $v$ learns that an incident edge $\{v,u\}$ is port-inconsistent, then $\ensuremath{\mathtt{Detect}}_{v}(u)$ returns $\mathit{false}$ for this edge. Otherwise (edge $\{v, u\}$ is port-consistent), it proceeds as follows: (1) if $\mathtt{out}_{v}(u) = \bot$, then $\ensuremath{\mathtt{Detect}}_{v}(u) = \mathit{true}$; (2) if $\mathtt{out}_{v}(u) = \mathit{Mat}$, then $\ensuremath{\mathtt{Detect}}_{v}(u) = \mathit{false}$ if and only if $v$ received $\mathit{true}$ from $u$ or $\mathtt{out}_{v}(w) = \mathit{Mat}$ for some $w \in N_{G}(v)-\{u\}$; and (3) if $\mathtt{out}_{v}(u) = \mathit{UnM}$, then $\ensuremath{\mathtt{Detect}}_{v}(u) = \mathit{false}$ if and only if $v$ received $\mathit{false}$ from $u$ and $\mathtt{out}_{v}(w) \neq \mathit{Mat}$ for every $w \in N_{G}(v)-\{u\}$. \paragraph{Eligible distributed algorithm.} We use a modified version of the classic maximal matching algorithm of Israeli and Itai~\cite{IsraeliI1986fast}. We describe the eligible distributed algorithm \ensuremath{\mathcal{A}}{} for \ensuremath{\mathcal{P}}{} by its phase procedure \ensuremath{\mathtt{Phase}}{} which has a phase length of $4$. Let $G \in \mathcal{U}_{\Delta}$ be some graph, let $U \subseteq E(G)$ be the set of undecided edges at the beginning of \ensuremath{\mathtt{Phase}}{}, let $e =\{v, u\} \in U$, let $U_{v} = \{ u \in N_{G}(v) \mid \{v, u\} \in U \}$, and let $D_{v} = N_{G}(v) - U_{v}$. In step $1$, node $v$ randomly marks itself as \emph{active} or \emph{passive} with equal probability. 
If $v$ is active, then $v$ picks u.a.r.\ a neighbor $u \in U_{v}$ and sends to it a 'matching request' message and the value $\mathit{true}$ if $v$ has an output register with the value $\mathit{Mat}$ and $\mathit{false}$ otherwise. In step $2$, a passive node $u$ that received a 'matching request' message from at least one (active) neighbor chooses (arbitrarily) one such neighbor $v$ and sends to it an 'accept' message along with the value $\mathit{true}$ if $u$ has an output register with the value $\mathit{Mat}$ and $\mathit{false}$ otherwise. We refer to such an edge $e = \{u,v\}$ as a \emph{candidate} edge. Step $3$ is the decision step: for a candidate edge $e = \{u,v\}$, both $v$ and $u$ set $\mathtt{out}_{v}(u) = \mathtt{out}_{u}(v) = \mathit{Mat}$ if both $v$ and $u$ received the value $\mathit{false}$ regarding the matching status of $e$'s neighbors and $\mathtt{out}_{v}(u) = \mathtt{out}_{u}(v) = \mathit{UnM}$ otherwise. \paragraph{Locally Separable Potential Function for \ensuremath{\mathcal{P}}{}.} Given $M \in \mathcal{M}(\mathcal{O})$, the potential coefficient $\sigma(M)$ is defined by setting $\sigma(M) = 1$ if $M(\mathit{Mat}) \geq 1$; and $\sigma(M) = 2$ otherwise. This implies that $\sigma_{0} = \sigma(\emptyset) = 2$. \paragraph{Proving the $(\beta, \sigma_{0})$-progress Property for \ensuremath{\mathcal{A}}{}.} Consider some strongly configured graph $(G, C) \in \mathcal{SCG}(\ensuremath{\mathcal{P}})$. Let $H = G(U(C))$ be the subgraph induced by the undecided edges. Consider the configuration $C' = \ensuremath{\mathtt{Phase}}(G, C)$ obtained from applying the phase procedure \ensuremath{\mathtt{Phase}}{} to $G$ under configuration $C$; observe that $C'$ is a random variable. Let $v \in V(H)$ be a good node (see Sec.{}~\ref{section:mis}) and let $E_{v} = \{ e \in E(H) \mid v \in e \}$. We will show that, in expectation, at least a constant fraction of the edges $e \in E_{v}$ satisfy $\sigma(C'[e]) \leq \sigma(C[e]) - 1$. 
Since at least half of the edges in $H$ are incident on a good node and since $\sigma(C[e]) \leq \sigma_{0} = 2$ for every edge $e \in E(H)$, it follows that $\mathbb{E}(\pi(G, C')) \leq (1 - \beta) \cdot \pi(G, C)$ for a constant $\beta > 0$. Let $e = \{v,u\} \in E(H)$ be an edge in $H$ such that $v$ is a good node in $H$ and let $\gamma = |N^{g}_{H}(v)|$. If there exists $e' =\{v,w\} \in D_{e}(C)$ for some $w \in N_{G}(v)$ such that $C(e') = \mathit{Mat}$, then all of $v$'s incident edges in $H$ become decided under $C'$. Otherwise, if there exist $u_{1}, \dots, u_{\lceil (1/2)\gamma \rceil} \in N^{g}_{H}(v)$ such that for every $1 \leq i \leq \lceil (1/2)\gamma \rceil$ there exists a node $w_{i} \in N_{G}(u_{i})$ and an edge $e_{i} = \{u_{i},w_{i}\} \in D_{e_{i}}(C)$ with $C(e_{i}) = \mathit{Mat}$, then a constant fraction of $v$'s incident edges in $H$ become decided under configuration $C'$. Otherwise, it is easily verifiable that with at least a constant probability node $v$ is passive and receives a 'matching request' from at least one of its neighbors in the set $\{u \in N^{g}_{H}(v) \mid \forall w \in N_{H}(u), C(\{w,u\}) \neq \mathit{Mat} \}$. \subsubsection{Edge Coloring with $(2+\epsilon)\Delta$ Colors} \label{section:2+epsilon-edge-coloring} Fix some $\Delta \in \mathbb{Z}_{> 0}$ and a constant $\epsilon > 0$ and assume without loss of generality that $(2+ \epsilon) \Delta \in \mathbb{Z}_{> 0}$. We define the LCL $\ensuremath{\mathcal{L}} = \{\ensuremath{\ell}_{d}\}^{\infty}_{d=0}$ of edge $(2+\epsilon) \Delta$-coloring over the output values $\mathcal{O} = \left[ (2+\epsilon)\Delta \right]$ such that for every $d \in \mathbb{Z}_{\geq 0}$, $i \in \mathcal{O}$, and $M \in \mathcal{M}_{d}(\mathcal{O})$ it holds that $\ensuremath{\ell}_{d}(i,M) = \mathit{true}$ iff $M(i) = 0$. By the definition of \ensuremath{\mathcal{L}}{}, it satisfies the $0$-bounded influence property (Prop.{}~\ref{property:edges:bounded-influence}). 
Let $\ensuremath{\mathcal{P}} = \langle \mathcal{O}, \mathcal{U}_{\Delta}, \ensuremath{\mathcal{L}} \rangle$ be an edge-LCL and notice that \ensuremath{\mathcal{P}}{} is the edge coloring problem with $(2+\epsilon)\Delta$ colors. \paragraph{Implementing the Detection Procedure for \ensuremath{\mathcal{L}}{}.} Consider some $G \in \mathcal{U}_{\Delta}$. We will implement the detection procedure for \ensuremath{\mathcal{L}}{} with messages of size $O(\log (|\mathcal{O}|))$. Let \ensuremath{\mathtt{Detect}}{} denote the implementation of the aforementioned detection procedure. Every node $v$ sends to every neighbor $u \in N_{G}(v)$ the content of $\mathtt{out}_{v}(u)$, together with the value $\mathit{false}$ if $\mathtt{out}_{v}(u)$ is equal to some $\mathtt{out}_{v}(w)$ for $w \in N_{G}(v)-\{u\}$ and $\mathit{true}$ otherwise. If node $v$ learns that an incident edge $\{v,u\}$ is port-inconsistent, then $\ensuremath{\mathtt{Detect}}_{v}(u)$ returns $\mathit{false}$ for this edge. Otherwise (edge $\{v, u\}$ is port-consistent), $\ensuremath{\mathtt{Detect}}_{v}(u)$ returns $\mathit{false}$ for edge $\{v, u\}$ if and only if $\mathtt{out}_{v}(u) \neq \bot$ and either $v$ received the value $\mathit{false}$ from $u$ or there exists a neighbor $w \in N_{G}(v)-\{u\}$ such that $\mathtt{out}_{v}(u) = \mathtt{out}_{v}(w)$. \paragraph{Eligible distributed algorithm.} We describe the eligible distributed algorithm \ensuremath{\mathcal{A}}{} for \ensuremath{\mathcal{P}}{} by its phase procedure \ensuremath{\mathtt{Phase}}{}, which has a phase length of $5$. Let $G \in \mathcal{U}_{\Delta}$ be some graph, let $U \subseteq E(G)$ be the set of undecided edges at the beginning of \ensuremath{\mathtt{Phase}}{}, let $e =\{v, u\} \in U$, let $U_{v} = \{ u \in N_{G}(v) \mid \{v, u\} \in U \}$, and let $D_{v} = N_{G}(v) - U_{v}$. In step $1$, node $v$ selects a color $c_{v}(u)$ u.a.r.\ from the palette $\mathcal{O}$ for every edge $\{v, u\} \in U$ and sends it to neighbor $u \in U_{v}$. 
In step $2$, both $v$ and $u$ determine the same candidate color for edge $e$, defined to be $c_{e} = 1 + ((c_{v}(u) + c_{u}(v)) \mod |\mathcal{O}|)$. Node $v$ sends $c_{e}$ to every neighbor $w \in U_{v} - \{u\}$. Let $\gamma:U \rightarrow \mathcal{O}$ be the function mapping each undecided edge to its candidate color, i.e., $\gamma(e') = c_{e'}$ for every $e' \in U$. In step $3$, if $\gamma(e) \neq \gamma(e')$ for every $e' \in \{\{v,w\} \mid w \in U_{v}-\{u\} \}$ and $\gamma(e) \neq \mathtt{out}_{v}(w)$ for every $w \in D_{v}$, then $v$ sends an accept message to $u$. Otherwise, $v$ sends a decline message to $u$. Step $4$ is the decision step. Node $v$ sets $\mathtt{out}_{v}(u) = \gamma(e)$ only if $v$ sent an accept message to $u$ in the previous step and $v$ received an accept message from $u$ (in that case, $u$ also sets $\mathtt{out}_{u}(v) = \gamma(e)$). \paragraph{Locally Separable Potential Function for \ensuremath{\mathcal{P}}{}.} Given $M \in \mathcal{M}(\mathcal{O})$, the potential coefficient $\sigma(M)$ is defined by setting $\sigma(M) = 1$. This implies that $\sigma_{0} = \sigma(\emptyset, \emptyset) = 1$. Notice that the resulting separable potential function for \ensuremath{\mathcal{P}}{}, denoted by $\pi$, simply counts the number of undecided edges. \paragraph{Proving the $(\beta, \sigma_{0})$-progress Property for \ensuremath{\mathcal{A}}{}.} Consider some $G \in \mathcal{U}_{\Delta}$. Let $C : E(G) \rightarrow \mathcal{O} \cup \{\bot\} $ be an edge configuration such that $(G,C) \in \mathcal{SCG}(\ensuremath{\mathcal{P}})$ and let $e =\{u, v\} \in U(C)$. We denote by $A_{e}$ the event that edge $e$ is decided under edge configuration $\ensuremath{\mathtt{Phase}}(G, C)$. For a neighboring edge $e' \in U_{e}(C)$, we denote by $A_{e,e'}$ the event that the candidate color $c_{e}$ equals the candidate color $c_{e'}$ chosen during the execution of \ensuremath{\mathtt{Phase}}{}. 
For a neighboring edge $e' \in D_{e}(C)$, we denote by $A_{e,e'}$ the event that $C(e') = c_{e}$, i.e., the candidate color chosen for edge $e$ during the execution of \ensuremath{\mathtt{Phase}}{} conflicts with the color of an existing decided neighboring edge. It is easily verifiable that for every $e' \in N_{G}(e)$, it holds that $\Pr(A_{e,e'}) \leq \frac{1}{(2+\epsilon) \Delta}$. By the union bound, we get that $\Pr\left(\bigcup_{e' \in N_{G}(e)} A_{e,e'}\right) \leq \frac{2}{2+\epsilon}$. We can conclude that $\Pr(A_{e}) \geq 1-\frac{2}{2+\epsilon}$. Hence, for every $e = \{u,v\} \in U(C)$, the probability that $e$ is decided under configuration $\ensuremath{\mathtt{Phase}}(G, C)$ is at least $1-\frac{2}{2+\epsilon} = \frac{\epsilon}{2+\epsilon} = \Omega(1)$. Once $e$ becomes decided, the potential coefficient of edge $e$ (which is $1$) is removed from $\pi$. \subsubsection{Maximal Edge $c$-coloring} \label{section:maximal-edge-coloring} Let $2 \leq c \leq 2 \Delta$ be an integer. The LCL is defined as in Sec.{}~\ref{section:maximal-node-coloring}. We solve the maximal edge $c$-coloring problem by solving maximal node $c$-coloring on the line graph using the simulation defined in Sec.{}~\ref{section:simulation-line-graph}. \subsubsection{Edge $(2\Delta - 1)$-coloring} \label{section:2delta-1-edge-coloring} We solve the edge $(2\Delta - 1)$-coloring problem by solving the maximal edge $2\Delta$-coloring problem. \subsubsection{Incremental Edge $c$-coloring} \label{section:incremental-edge-coloring} Let $2 \leq c \leq 2 \Delta$ be an integer. The LCL is defined as in Sec.{}~\ref{section:incremental-node-coloring}. We solve the incremental edge $c$-coloring problem by solving incremental node $c$-coloring on the line graph using the simulation defined in Sec.{}~\ref{section:simulation-line-graph}. 
\subsection{A Self Stabilizing Algorithm for Maximal Matching with Fully Adaptive Run Time $O(\log k)$} \label{section:mm-log-k} Let \ensuremath{\mathcal{A}_{\mathit{ST}}}{} be the self stabilizing algorithm obtained from the $(\ensuremath{\nu},\mu(\Delta), \phi, \beta,\sigma_{0})$-eligible algorithm defined in Sec.{}~\ref{section:mm} such that $\ensuremath{\nu} = 1$, $\mu(\Delta) = O(1)$, $\phi = 4$, $\beta = \Omega(1)$, and $\sigma_{0} = 2$. In order to derive a fully adaptive run-time of $O(\log k)$, we will use an analysis similar to that of the transformer when it is applied to an edge-LCL (see Sec.{}~\ref{section:transformer-edges-analysis}). Moreover, we will use the same notation as defined in Sec.{}~\ref{section:transformer-edges-analysis}. Recall that the adversary manipulates $k > 0$ nodes during the round interval $[t^{*}_{a}, t^{*}_{b} - 1]$. By the properties of \ensuremath{\mathcal{A}_{\mathit{ST}}}{}, we know that for every time $t \geq t_{s}$ the configuration is strong and at least a constant fraction of the potential is removed after a constant number of rounds, in expectation (see Lem.{}~\ref{lemma:edges:correctness} and Lem.{}~\ref{lemma:edges:expected-removed-potential}). Let $T = \min \{ t \geq t_{s} \mid |U(C_{t})| = O(k^{2}) \}$ be the random variable that marks the first time $t \geq t_{s}$ such that the number of undecided edges at time $t$ is $O(k^{2})$. In order to prove that the fully adaptive run-time of \ensuremath{\mathcal{A}_{\mathit{ST}}}{} is $O(\log k)$, it suffices to prove that $\mathbb{E} \left( T \right) = O(\log k)$, since the potential at time $T$ is at most $O(k^{2})$. Consider some graph $G' \in \mathcal{G}$ on which \ensuremath{\mathcal{A}_{\mathit{ST}}}{} runs and a time $t \in \mathbb{Z}_{\geq 0}$. We say that node $v \in V(G')$ is \emph{matched} if there exists a neighbor $u \in N_{G'}(v)$ such that $\mathtt{out}_{v}(u) = \mathit{Mat}$ and $\mathtt{out}_{u}(v) = \mathit{Mat}$. 
Node $v$ is said to be unmatched if $\mathtt{out}_{v}(u) = \mathit{UnM}$ for every $u \in N_{G'}(v)$. Throughout, we denote by $G$ the graph at time $t^{*}_{b}+2$, i.e., $G_{t^{*}_{b}+2}$. \begin{lemma} \label{lemma:mm:three-sets} Let $S = \{ v \in V(G) \mid v \text{ is matched at time } t^{*}_{b} + 2 \}$. There exists a node set $\widetilde{K} \subseteq V(G) - S$ of size $|\widetilde{K}| \leq 2 k$ such that $I = V(G) - (S \cup \widetilde{K})$ is an independent set in $G$. \end{lemma} \begin{proof} Let $A$ be the set of nodes in $V(G) - S$ that are manipulated (by the adversary) during the round interval $[t^{*}_{a}, t^{*}_{b} - 1]$. Let $R$ be the set of nodes in $V(G) - (S \cup A)$ that are matched at time $t^{*}_{a}$, referred to hereafter as \emph{orphans}. We define $\widetilde{K} = A \cup R$ and establish the assertion by proving that (1) $|\widetilde{K}| \leq 2 k$; and (2) $I = V(G) - (S \cup \widetilde{K})$ is an independent set in $G$. For every orphan $v \in R$, let $w(v)$ be the node with which $v$ is matched at time $t^{*}_{a}$. Note that $w(v)$ does not necessarily exist in $G$ as it may have been removed. Since $v$ and $w(v)$ are no longer matched at time $t^{*}_{b} + 2$ and since $v$ is not manipulated in the round interval $[t^{*}_{a}, t^{*}_{b} - 1]$, $w(v)$ must be manipulated during that round interval. We conclude that $|R| \leq k$ as the mapping defined by $w$ is injective. The bound $|\widetilde{K}| \leq 2 k$ follows since $|A| \leq k$. It remains to show that $I$ is an independent set in $G$. This is done by arguing that every node $v \in I$ is unmatched at time $t^{*}_{a}$. This establishes the assertion recalling that, by definition, the nodes in $I$ are not manipulated during the round interval $[t^{*}_{a}, t^{*}_{b} - 1]$, hence the adversary does not introduce new edges in $I \times I$. To that end, assume by contradiction that there exists some node $v \in I$ that is not unmatched at time $t_{a}^{*}$. 
Since \ensuremath{\mathcal{A}_{\mathit{ST}}}{} is in a legal configuration at time $t^{*}_{a}$, it follows that $v$ must be matched at that time. But by definition, the nodes in $V(G) - S$ that are matched at time $t^{*}_{a}$ belong to either $R$ or $A$, in contradiction to $v \in I = V(G) - (S \cup A \cup R)$. \end{proof} Let $V_{t} = \{ v \in V(G) \mid \forall u \in N_{G}(v), \mathtt{out}_{v, t}(u) \neq \mathit{Mat} \}$, let $P_{t} = V_{t} \cap \{v \in V(G) \mid \mathit{step}_{v, t} = 0\}$, and let $D_{t} = \{v \in K \cap V_{t} \mid \mathrm{d}_{G(V_{t})}(v) \geq \xi k \}$ for some constant $\xi > 0$ that we determine later. \begin{lemma} \label{lemma:mm:degrees-become-small} Fix some time $t \geq t^{*}_{b} + \phi + 2$. It holds that $\Pr \left( v \notin D_{t+2\phi} \right) \geq \Omega(1)$ for every node $v \in D_{t}$. \end{lemma} \begin{proof} For every node $v \in V_{t}$, let $t(v) \geq t$ be the first time at or after $t$ such that $\mathit{step}_{v,t(v)} = \ensuremath{\mathit{\hbar}}$. According to the definition of the \ensuremath{\mathtt{PPS}}{} module, it must hold that $t \leq t(v) \leq t + \phi$; notice that $t(v)$ is fully determined by $\mathit{step}_{v,t}$. Denote by $A_{v}$ the event that $\mathit{step}_{v,t'} = \ensuremath{\mathit{\hbar}}$ for every $t(v) \leq t' \leq t+\phi$. By Obs.{}~\ref{observation:step-counter}, the event $A_{v}$ is independent of any coin toss of $v$ prior to time $t(v)$ and of any coin toss of all other nodes, and \begin{equation}\label{eq:remain-in-Hold-prob} \Pr\left(A_{v}\right) \geq 2^{-\phi}. \end{equation} For every node $v \in V_{t}$, we augment the power of the adversary by allowing it to choose the outcome of any coin toss during the round interval $[t,t(v))$. Notice that this adversary can only choose the outcome of coin tosses that are within a phase and cannot choose the outcome of coin tosses of the \ensuremath{\mathtt{PPS}}{} module. 
Let $S$ be the set of nodes $v \in V_{t}$ that are not matched at time $t(v)$, i.e., $S=\{v \in V_{t} \bigm| v \in V_{t(v)}\}$. Let $G^{S} = G(S)$. Fix some node $v \in D_{t}$. If node $v$ is matched at time $t(v)$ or $\mathrm{d}_{G^{S}}(v) < \xi k$, then $v \notin D_{t + 2\phi - 1}$ with probability $1$. Otherwise ($v$ is not matched at time $t(v)$ and more than $\xi k$ of its neighbors are in $S$), we will show that with probability $\Omega(2^{-\phi})$ node $v$ is matched at time $t + 2\phi -1$, which implies that $v \notin D_{t + 2\phi - 1}$. Let $B$ be the event that $\mathrm{d}_{G(P_{t+ \phi+1})}(v) \geq 3k$. We start by showing that there exists $\alpha = \alpha(\phi)$ such that $\Pr\left(B\right) \geq \alpha$. For every $u \in S$, let $\widetilde{A}_{u}$ be the event that $A_{u}$ occurred and $\mathit{step}_{u,t+\phi +1} = 0$. By Eq.{}~\ref{eq:remain-in-Hold-prob} and Obs.{}~\ref{observation:step-counter}, we can conclude that $\Pr\left(\widetilde{A}_{u}\right) = \Pr\left(A_{u}\right) \cdot (1/2) \geq 2^{-(\phi+1)}$. Moreover, the events in $\{\widetilde{A}_{u} \bigm| u \in S\}$ are independent. Denote $d = \mathrm{d}_{G^{S}}(v)$ and let $u_{1}, \dots, u_{d}$ be the neighbors of $v$ in $G^{S}$. Let $Y$ be the random variable that counts the number of occurrences of the events $\widetilde{A}_{u_{i}}$. Notice that if $Y \geq 3k$ and $\widetilde{A}_{v}$ occurs, then event $B$ occurs; moreover, the events $Y \geq 3k$ and $\widetilde{A}_{v}$ are independent and $\mathbb{E}\left(Y\right) \geq \xi k 2^{-(\phi+1)}$ since $d \geq \xi k$. By choosing $\xi = 4 \cdot 2^{\phi + 1} = 2^{\phi+3}$ and applying Chernoff's (lower tail) bound, we conclude that \begin{equation*} \begin{split} \Pr\left(Y<3k\right) = \Pr \left( Y<(1-1/4) \cdot \xi k 2^{-(\phi+1)} \right) \leq \Pr\left(Y<(1-1/4) \cdot \mathbb{E}\left(Y\right)\right) < e^{-k/8} \leq e^{-1/8} \, . \end{split} \end{equation*} Thus $\Pr\left(B\right) \geq (1-e^{-1/8})\cdot 2^{-(\phi+1)}$. 
Occurrence of $B$ implies that $v$ is good in $G(P_{t+\phi+1})$, and a good node is removed with at least a constant probability by the end of the phase, i.e., by time $t+2\phi$. We conclude that \begin{equation*} \begin{split} \Pr\left(v \notin D_{t+2\phi-1}\right) & \geq \Pr\left(B \land v \notin V_{t+2\phi} \right) \\ & = \Pr\left(v \notin V_{t+2\phi} \bigm| B\right) \cdot \Pr\left(B\right) = \Omega(1) \, , \end{split} \end{equation*} thus establishing the assertion. \end{proof} \section{Additional Related Work and Discussion} \label{section:related-work} \paragraph{Transformers.} From the more theoretical point of view, a transformer is used to show the reduction (or possibly the equivalence) of some source class of algorithms (and/or problems) to some target class intended for problems that initially seem ``harder''. Some known such transformers in the context of distributed computing are those that allow the application of synchronous algorithms on asynchronous networks and those that allow algorithms for static networks to be run on networks that may undergo (detectable) topological changes (\cite{awerbuch1985complexity,afek1987applying} and many others). Transformers can also ease the design of algorithms in the target class \cite{awerbuch1985complexity,KatzPerry}. When used in the context of faults, they may even reduce the amortized complexity per fault by avoiding re-computation after a fault \cite{awerbuch1990communication}. Below, we refer only to transformers in the context of self-stabilization. Some of the transformers mentioned below are more general than the one presented in the current paper, in either transforming algorithms where nodes may also start with some input other than the graph, algorithms for non-LCL problems, or even algorithms that never terminate and interact with users outside the network. The first transformers, published as early as 1990 \cite{KatzPerry,afek1990memory}, were rather inefficient. 
In \cite{KatzPerry}, copies of the states of all the nodes were collected at some leader who detected illegal states and instructed the nodes to reset -- restart from some legal global state. Some limitations were (1) assuming some incorruptible leader, (2) high overhead, including in time (which could be reduced to the diameter if the LOCAL \cite{Peleg2000} model was assumed), and (3) assuming a directed spanning tree; otherwise, the reset could be invoked many times \cite{awerbuch1994self}. In \cite{afek1997local}, points (1) and (3) were addressed by introducing an (inefficient, later improved) self stabilizing algorithm to elect a leader and maintain a spanning tree. Point (2) was addressed by suggesting the paradigm of local detection (also called local checking \cite{awerbuch1991self,awerbuch1991distributed,naor1995can}), which allowed some nodes to detect incorrectness just by communicating with their neighbors, rather than with the remote leader. It was shown that any problem could be revised to be locally checkable \cite{afek2002local,korman2010proof}. Transformers using these global reset protocols, e.g. \cite{awerbuch1993time,awerbuch1994self}, including those based on a spanning tree, consume time that is at least the diameter, since they reset the whole network to some correct configuration. The ``resynchronizer'' of \cite{awerbuch1991distributed} and the ``superstabilizer'' \cite{super-stab-tr,DBLP:journals/cjtcs/DolevH97} are global in their stabilization time for a similar reason. (The ``superstabilizer'' transformer may treat some faults faster in the specific case that these faults are topological changes such that the endpoint node of an inserted (or deleted) edge is notified by the environment about the change.) In \cite{awerbuch1991self}, a class of distributed languages that are both locally correctable and locally checkable is defined. 
Their local correction algorithm could nevertheless require time at least the diameter, even for a few faults. The ``rollback compiler'' in \cite{awerbuch1991distributed} logs, in each node, all the local history of the algorithm it transforms. Moreover, each node exchanges its log with its neighbors in every round. That way, the nodes can simulate the history repeatedly and verify that all the actions were taken correctly, or correct them if they were not. If the time complexity of the non-self-stabilizing algorithm to be transformed is $t$, then the stabilization time of the transformed algorithm is $t$ too. Indeed, they mention as applications local algorithms such as MIS and $\Delta^2$-coloring. However, this held only under the LOCAL model, since their messages were very large. Moreover, this may not work well for randomized algorithms. A more detailed description of how this transformer applies to local algorithms appears in \cite{lenzen2009local}. The above idea of exchanging logs was pushed further in \cite{afek2002local} to not only exchange logs with neighbors, but also to flood the network with the log of every node in every round of computation. That allowed them to address algorithms that never terminate and interact repeatedly with users outside the system. Such interactive algorithms are adaptive in a different sense than the one addressed in the current paper. That is, the stabilization may take time proportional to the diameter; however, the duration of the interruption at each node is just the diameter of the subgraph induced by the faulty nodes. It is plausible that the kind of adaptivity addressed here can also be obtained. However, recall that all this would hold only under the LOCAL model (because of the huge messages) and possibly only for deterministic algorithms. 
Another transformer for deterministic algorithms that yielded time adaptive self stabilizing ones used a ``brute force'' method (and huge messages) too, and was thus time efficient only under the LOCAL model \cite{kutten1997time}. Bounding the amount of memory used by transformers has been addressed indirectly by addressing the memory requirements of the reset and the fault detection modules. A bounded memory reset protocol was proposed in \cite{kulkarni1998multitolerance}, but only assuming a bounded number of faults. Minimizing the memory used for fault detection has been the subject of numerous studies, both in the general case and for specific tasks, see e.g. \cite{beauquier1998transient,korman2010proof,goos2016locally,censor2020approximate}. \paragraph{Locally Checkable Labelings.} The term LCL was coined in \cite{naor1995can}. One of its motivations was the above-mentioned self stabilization transformers. Specifically, checking local conditions to detect faults so that transformers could reactivate the correction algorithm. Another motivation was to identify algorithms where a fault at a processor $p$ could only affect processors in some bounded region around $p$ (the second motivation seems related to adaptivity; see below). Research on non-self-stabilizing algorithms for LCL problems has been far too extensive to survey here. A very partial (and somewhat arbitrary) list includes \cite{karp1985fast,IsraeliI1986fast,hanckowiak2001distributed,barenboim2016locality,ghaffari2019distributed,balliu2019lower,DBLP:conf/swat/Suomela20,DBLP:journals/dc/GhaffariHKMSU20,DBLP:journals/siamcomp/ChangLP20,DBLP:journals/siamcomp/ChangP19}. In that non-self-stabilizing context, there has also been interest in automatic conversion and generation of algorithms \cite{DBLP:conf/podc/Brandt19}. The transformer presented here addresses phase based algorithms. 
We use the fact that such algorithms are very common for LCLs, so the transformer presented here is rather general. Moreover, even though every previous algorithm was designed for some specific problem, many addressed issues of phase synchronization under various environments that can be useful in designing other phase based algorithms (e.g. \cite{awerbuch1994efficient,wattenhofer2004distributed,Turau2019making}). The latter, \cite{Turau2019making}, even designs two self stabilizing algorithms with $O(\log n)$ stabilization time for the specific LCL problems of Maximal Matching and of MIS by revising (manually) two non-stabilizing phase based algorithms. The algorithms presented in \cite{Turau2019making} were not adaptive. The design of a transformer for a wide class of phase based algorithms is stated there to be the ultimate goal of the ideas discussed in that paper. The only other sublinear self stabilizing algorithms we are aware of for LCL problems are the $O(\Delta + \log^* n)$ stabilization time $(\Delta + 1)$-node coloring, $(2\Delta - 1)$-edge-coloring, Maximal Independent Set, and Maximal Matching algorithms of \cite{barenboim2018locally}.\footnote{Many of the previous self-stabilizing algorithms were deterministic and asynchronous. Some also addressed related tasks that are not LCL.} Unique identities are assumed and the algorithms are not shown to be adaptive. Self-stabilizing LCL problems were extensively studied even before that. In particular, there is a long line of works on distributed self-stabilizing MM, MIS, and/or coloring algorithms; see Table~1 in \cite{ileri2019self} and \cite{hsu1992self, tel1994maximal, cohen2016self, cohen2018self, hedetniemi2001maximal, gradinariu2001self, goddard2003robust, goddard2003self, manne2009new, manne2011self, shi2004anonymous, asada2016efficient, kimoto2010time, guellati2010survey, datta2016maximum, arapoglu2019asynchronous, Turau2019making,Turau2007linear, Xu2003synchronous, Chiu2013,Hedetniemi2003self, Hedetniem2003coloring}. 
However, the stabilization time of all of them was $O(n)$ (or more). \paragraph{Adaptive Run-Time.} Adaptivity has gained a lot of interest outside of distributed computing; see, e.g., \cite{frederickson, baswana2015fully, neiman2016simple}. In the non-self-stabilizing computing context, Lamport suggested that an algorithm should benefit from the common case where the number of contending processes is small \cite{lamport1987fast}. In the context of fault tolerance, adaptivity in sublinear time was suggested in \cite{DBLP:conf/focs/KuttenP95} for mending a Maximal Independent Set that was corrupted at some $k$ locations. The mending took $O(\log k)$ time instead of the $O(\log n)$ needed for computing from scratch. That algorithm was not fully self-stabilizing. Several other mending (or fixing) algorithms, again not fully self stabilizing, were suggested in \cite{konig2013local}. A graph theoretical definition of mendability was discussed in \cite{DBLP:journals/corr/abs-2102-08703}, where it was suggested that it can be helpful for self stabilization in the LOCAL model. A self stabilizing paradigm to handle a single fault, or multiple isolated faults (``sufficiently far'' from each other), was suggested in \cite{ghosh1996fault,DBLP:journals/dc/GhoshGHP07}. Self stabilizing algorithms whose time adaptivity was at least linear in the number of faults were suggested for various tasks in \cite{kutten1995fault,kutten1997time,azar2003distributed,demirbas2004hierarchy,burman2005asynchronous,arora2006lsrp,DBLP:conf/wdag/KuttenM07}. We note that, while the above algorithms are adaptive since the time was expressed as a function of the number $k$ of faults, they are not always \emph{fully} adaptive since all the faults are assumed to occur in a single batch before the mending starts and no additional faults occur at least until they are mended. 
In contrast, the run-time analysis in the current paper is fully adaptive in the sense that it allows the adversary to divide the $k$ faults (and/or topology changes) over multiple batches. Adapting efficiently to detectable topological changes (that is, not in a self-stabilizing manner) was addressed, e.g., in \cite{lotker2009distributed,bamberger2018local, censor2019fast}. The algorithm of \cite{bamberger2018local} is adaptive (though the time is not expressed as a function of $k$), while in \cite{censor2019fast}, the \emph{amortized} (not the worst case) time complexity is optimized, assuming that the graph is initially empty. Both papers rely on messages of super-constant size. \clearpage \bibliographystyle{alpha}
\section{Conclusion\label{subsec:Conclusion}} Our study provides insight into how developers use named casts. This technique provides the opportunity to prioritise refactorings for named cast operators. The results have shown that identifiers can add insights into program semantics. This is beneficial in several ways, one of which is to build novel representations of programs for a variety of software analysis tasks. One such task is sanity checking cast operations where developers cross type boundaries for a variety of reasons, such as code reuse, time-to-market pressure, coding standards, etc. We believe that the approaches presented in this work are lightweight enough to be used by developers as an IDE plugin during development. This work also provides a strong foundation to help richer forms of static analysis scale by using a novel form of program representation that draws from the natural language channel. \section{Cast operations, their use and the motivation of the work\label{subsec:Context}} C++ provides several ways in which a type conversion can be effected. We first provide an overview of these ways. Then, we show through an example how, despite clear guidelines on how casts should be used, type casts can be used imprecisely. Afterwards, we present the motivation behind this work. \subsection{Implicit and Explicit Casts\label{subsec:Type-conversions}} Type conversions are operations where the type of an expression is changed from one type to another. There are two kinds of conversions: implicit and explicit casts. In implicit casts, the conversion is done without the developer explicitly specifying the type to which a value needs to be converted. Implicit casts are performed automatically by the compiler if there is a viable conversion. For example, in C/C++, it is possible to pass a \lstinline*float* as an argument to a method which expects a \lstinline*double* {[}\citeauthor{implicitCastCPP} \citeyear{implicitCastCPP}{]}. 
Implicit conversions, also known as standard conversions {[}\citeauthor{implicitCastSTD} \citeyear{implicitCastSTD}{]}, are generally applied on built-in numerical data types, booleans and some pointer conversions {[}\citeauthor{implicitCastDocumentation} \citeyear{implicitCastDocumentation}{]}. The implicit conversions between numerical types are called promotions {[}\citeauthor{implicitCastSTD} \citeyear{implicitCastSTD}{]} and are allowed from smaller size types to larger size types. C/C++ also allows explicit conversion using syntactic constructs.\textcolor{black}{{} The syntactic constructs tell the compiler to perform a type conversion where, in contrast to implicit conversions, the target type is specified explicitly.} There are two ways to perform such explicit casts, which are presented in \prettyref{fig:syntaxCasting}. Here, a variable \lstinline*x* of type \lstinline*double* is converted to an \lstinline*int* type. The first is the functional style, where the target type is treated as a method and the variable that will be converted is passed as an argument. The other is commonly referred to as the C-style syntax, where the use of the variable is qualified by the target type within parentheses. \begin{figure}[t] \begin{lstlisting}[] double x = 10.3; int y; y = int (x); // functional notation y = (int) x; // c-like cast notation \end{lstlisting}\caption{Functional and C-style syntax for explicit type conversion.} \label{fig:syntaxCasting} \end{figure} The functional and C-style explicit casts can handle conversions of built-in types such as numeric types. However, using those operators on user-defined types, particularly class hierarchies, requires additional language constructs. Thus, C++ introduced the following four\emph{ named cast} operators: \lstinline*static_cast*, \lstinline*dynamic_cast*, \lstinline*const_cast* and \lstinline*reinterpret_cast*. 
Out of the four, \lstinline*static_cast*, \lstinline*dynamic_cast* and \lstinline*const_cast* perform additional checks either statically or at runtime to avoid undefined \foreignlanguage{british}{behaviour} resulting from incorrect usage of type casts {[}\citeauthor{explicitCastCPP} \citeyear{explicitCastCPP}{]}. \lstinline*reinterpret_cast* is the most permissive with no checks on the validity of the type conversion. It merely reinterprets the memory holding an object as another type. \paragraph*{The \lstinline*static_cast* operator.} \lstinline*static_cast* vets the casts by statically checking the validity of the conversions against the class hierarchies {[}\citeauthor{staticCastCPP} \citeyear{staticCastCPP}{]}. As shown in \prettyref{fig:staticCast}, a downcast of an object \lstinline*a* typed as base class \lstinline*Base* to a derived class \lstinline*Derived* is allowed, but the developer needs to be confident that \lstinline*a* will never be an object of another derived class of \lstinline*Base*. If the latter happens, accessing a field of the \lstinline*Derived* class through \lstinline*b* would lead to undefined \foreignlanguage{british}{behaviour}. This is because \lstinline*static_cast* does not apply runtime checks to validate if \lstinline*a* is an object of type \lstinline*Derived* or another derived class \lstinline*Derived2* of \lstinline*Base*. Therefore, the correctness of a \lstinline*static_cast* is reliant on the developer. \lstinline*static_cast* operations are also used for converting \lstinline*enum* and \lstinline*void* types where the developer is sure of the type of the data pointed to by a \lstinline*void* pointer. 
\begin{figure}[t] \begin{lstlisting}[] class Base {}; class Derived: public Base {}; Base * a = new Base; Derived * b = static_cast<Derived*>(a); \end{lstlisting} \caption{Example of \lstinline*static_cast*.} \label{fig:staticCast} \end{figure} \begin{figure}[t] \begin{lstlisting}[] class Base { virtual void vf(){} }; class Derived : public Base { }; int main() { Base *pBDerived = new Derived; Derived *pd; pd = dynamic_cast<Derived*>(pBDerived); return 0; } \end{lstlisting} \caption{Example of \lstinline*dynamic_cast*.} \label{fig:dynamicCast} \end{figure} \paragraph*{The \lstinline*dynamic_cast* operator.} \lstinline*dynamic_cast* is an operator used for pointer and class reference conversions. Unlike \lstinline*static_cast*, \lstinline*dynamic_cast* checks whether the \emph{named cast} is permissible at runtime. If not, it returns a null pointer (for reference conversions, it instead throws \lstinline*std::bad_cast*) {[}\citeauthor{dynamicCastCPP} \citeyear{dynamicCastCPP}{]}. This operation guarantees that the result points to a valid object of the new type at the end of the type conversion. \prettyref{fig:dynamicCast} presents an example of \lstinline*dynamic_cast* for a pointer \lstinline*pBDerived*. The pointer has the initial type \lstinline+Base*+ and it points to a \lstinline*Derived* object. Through the cast on line 8, \lstinline*pd* receives a \lstinline+Derived*+ view of the object that \lstinline*pBDerived* points to. \lstinline*dynamic_cast* operations perform validity checks using Run-Time Type Identification (RTTI), a C++ feature for inspecting the types of objects at runtime. Naturally, the runtime checks introduce overheads, and \lstinline*dynamic_cast* is an expensive operation for performance-sensitive applications. \paragraph*{The \lstinline*reinterpret_cast* operator.} This operator's role is to reinterpret the memory holding an object of one type as another type, thus converting from one type to the other. The pointer to the memory is recast into a new pointer type without any check that the content can validly be treated as the new type. 
In general, this cast is used for low-level conversions based on a reinterpretation of the binary values of the variables {[}\citeauthor{reinterpretCastCPP} \citeyear{reinterpretCastCPP}{]}. In \prettyref{fig:reinterpretCast}, a \lstinline*reinterpret_cast* example is shown on line 5. The pointer \lstinline*a* to an object of class \lstinline*A* is reinterpreted as a pointer to type \lstinline*B* and assigned to \lstinline*b*, even though \lstinline*A* and \lstinline*B* are unrelated in the class hierarchy. \lstinline*reinterpret_cast* has a lower overhead than the other operators since it performs no validity checks. Like \lstinline*static_cast*, though, the correctness of this conversion relies entirely on the developer. \begin{figure}[t] \begin{lstlisting}[] class A { /* ... */ }; class B { /* ... */ }; A * a = new A; B * b = reinterpret_cast<B*>(a); \end{lstlisting} \caption{Example of \lstinline*reinterpret_cast*.} \label{fig:reinterpretCast} \end{figure} \paragraph*{The \lstinline*const_cast* operator.} This operator makes it possible to modify variables that have the type qualifier \lstinline*const*, which directs the compiler not to allow any modification of a variable, or \lstinline*volatile*, which prevents the compiler from applying any \foreignlanguage{british}{optimisations} to the variable. An example is presented in \prettyref{fig:constCast}. The variable \lstinline*c* of type \lstinline+const char*+ is passed as an argument to a method \lstinline*print* which only supports \lstinline+char*+. This makes the \lstinline*const_cast* in line 9 mandatory to match the actual argument type to the formal parameter type. The C++ standard states that the \lstinline*const_cast* operator can introduce undefined \foreignlanguage{british}{behaviour} in programs. This situation can arise if the constness is removed from a variable that was originally declared \lstinline*const* and the variable is subsequently modified {[}\citeauthor{constCastCPP} \citeyear{constCastCPP}{]}. 
\begin{figure}[t] \begin{lstlisting}[] void print (char * str) { cout << str << '\n'; } int main () { const char * c = "sample text"; print ( const_cast<char *> (c) ); return 0; } \end{lstlisting} \caption{Example of \lstinline*const_cast*.} \label{fig:constCast} \end{figure} \subsection{An example of imprecise named cast usage\label{subsec:An-example-of}} \begin{figure}[t] \begin{lstlisting}[] // Add information on the relationship between QUIC error codes // and their symbolic names. std::unique_ptr<base::DictionaryValue> dict(new base::DictionaryValue()); for (QuicErrorCode error = QUIC_NO_ERROR; error < QUIC_LAST_ERROR; error = static_cast<QuicErrorCode>(error + 1)) { dict->SetInteger(QuicErrorCodeToString(error), static_cast<int>(error)); } \end{lstlisting} \caption{An example where two \lstinline*static_cast* operators are used to iterate over an enumeration and store integer values in a dictionary. The snippet is from the file \emph{net\_log\_util.cc} of component \emph{Net}, taken from an open source implementation of the QUIC protocol in the Chromium project.} \label{fig:motiv} \end{figure} \emph{Named casts} were initially proposed to provide semantic clarity. However, developers sometimes use them to bypass type system restrictions at the cost of increased code complexity. Consider \prettyref{fig:motiv} as an example. The code is a snippet taken from the implementation of the QUIC protocol {[}\citeauthor{quic} \citeyear{quic}{]}. QUIC is a general-purpose transport layer network protocol open sourced as a part of the Chromium project. The snippet populates a dictionary \lstinline*dict* with key-value pairs, each consisting of a string representing an \lstinline+error+ description and an integer representing the \lstinline+error+ code; it contains two uses of the \lstinline*static_cast* operator. 
It is important to note here that \lstinline+error+ itself is neither an integer nor a string but an \foreignlanguage{british}{\emph{unscoped}} \lstinline*enum* type \lstinline+QuicErrorCode+. The type \lstinline*enum*, or enumeration, is a user-defined type which consists of a set of named integral constants {[}\citeauthor{enumerationsCPP} \citeyear{enumerationsCPP}{]}. Enumerations are generally used in three situations: a single choice where the developer filters through the choices with a switch statement, a multiple choice through C-style bitsets, or as a type definition for integral types. In \prettyref{fig:motiv}, the \lstinline*enum* type is not used in any of these three situations; instead, it is used to iterate over the enumeration values and populate \lstinline*dict*. By design, C++ does not encourage iteration over objects of type \lstinline*enum*, since it does not provide an iterator for them. In the example, the iteration is achieved by implicitly casting the loop control variable \lstinline+error+ into an integer, incrementing it and casting it back to \lstinline+QuicErrorCode+ using a \lstinline*static_cast* in line 7. In the loop expression, \lstinline+QUIC_NO_ERROR+ and \lstinline+QUIC_LAST_ERROR+ are the first and last elements of the enumeration. The second \lstinline*static_cast* in line 9 converts the variable \lstinline+error+ of type \lstinline+QuicErrorCode+ to an \lstinline+int+. It is used as a parameter for the function \lstinline+SetInteger+, which populates the dictionary \lstinline+dict+ with key-value pairs. This is the second time that the developers chose to cross the boundary between an \lstinline*enum* type and an \lstinline*int* in order to use operators of the type \lstinline*int*. Iteration over \lstinline*enum* values can be pernicious, as \lstinline*enum* types are not guaranteed to be contiguous. 
The Clang++ compiler would replace \lstinline+QUIC_NO_ERROR+ and \lstinline+QUIC_LAST_ERROR+ with their actual values in the loop from the snippet. This means that \lstinline+error+ would take all the values in the corresponding range. The enumeration \lstinline+QuicErrorCode+ is not contiguous and the values for each entry are defined by the developers. This means that \lstinline+dict+ could contain error codes that were never declared in \lstinline+QuicErrorCode+. However, the developers handle those cases explicitly in the function \lstinline+QuicErrorCodeToString+, which contains a \lstinline*switch* over all the values of \lstinline+QuicErrorCode+. This function returns the string for the \lstinline+error+, or an invalid error code string for any other value. This implementation is not erroneous; however, it is suboptimal. One may wonder at this stage: what could be a better solution, and what should the solution aim to achieve? Type systems came about to ensure type safety, and casts should typically be avoided wherever possible. The aim of a better solution should be to keep the \lstinline*enum* and \lstinline*int* types separate and implement all operators essential to iterate or operate in the \lstinline*enum* space. The developers used an enumeration to generate a dictionary object type used later by the rest of the application. The enumeration implementation consists of the \lstinline+QuicErrorCode+ declaration along with a set of functions of switch cases, such as \lstinline+QuicErrorCodeToString+, that return the string for an error. We believe a better solution would be to declare and use a dictionary from the start, rather than declaring and using the enumeration to create the dictionary. This solution would not require crossing type boundaries, since the type of the dictionary can be declared according to the types of the values. The solution would also bring improved efficiency. 
Enumerations are efficient since they are resolved at compile time and converted into integral literals \textcolor{black}{at the bitcode level}. The enumerations are used along with switch cases and iterations over the enumerations, which have linear time complexity. This performs well for a small number of cases, which is not true for \lstinline+QuicErrorCode+, since it consists of 199 cases. On the other hand, the selection of a key in a dictionary has logarithmic complexity. We are not sure if \lstinline+QuicErrorCode+ is used in any other part of the application, but dictionaries should generally perform better than large enumerations. Our solution would also ease code maintenance. Each time \lstinline+QuicErrorCode+ needs to be updated, it requires modifications at the declaration and in each function with switch cases. It would be easier to maintain a dictionary, since the only modification required would be at the declaration. This example shows a need for tools that identify whether a cast between types is essential and whether the cast is done correctly. It is crucial to ensure that crossing type boundaries is beneficial from a software engineering point of view, allowing code reuse without confounding the uses of types and the operators for those types. \subsection{Motivation\label{subsec:Motivation}} In this research, we \foreignlanguage{british}{hypothesise} that in large and mature projects such as Chromium, where code is reviewed before it is merged into the application, there are hints in program identifiers that point to their purpose. We aim to use this natural language information in identifiers to understand whether \emph{named casts} are being used for good software engineering reasons. If not, we aim to identify poor practices. For example, the actual to formal binding for the method \lstinline+SetInteger+ binds \lstinline+error+ of type \lstinline+QuicErrorCode+ to a formal named \lstinline+in_value+ of type \lstinline*int*. 
A perfunctory check of the names of the variables and the types may suggest that these variables are disparate. However, one may notice upon closer inspection that \lstinline+SetInteger+ is a modifier of a dictionary. Therefore, it is essential that formal arguments of this modifier are named generically. In this work, we combine an automated \foreignlanguage{british}{analyser} with human inspection to classify cases where \emph{named casts} are used, pointing out both good and poor practices in using \emph{named casts}. In a \emph{named cast} situation, precise names are meaningful names that reflect the relation between the \emph{source} and \emph{destination}. The choice of identifiers is vital not only during development, but also during maintenance. Precise names reflect that the developers had a good understanding of the problem that they solved. The same precise names allow other developers to gain a faster and more comprehensive understanding of the code. Thus, the reusability and maintenance of the code are made easier. If the relation between \emph{source} and \emph{destination} does not exist, developers may be misled by the names and overlook cases which could be dangerous during code testing and maintenance. Those cases need to be identified and refactored with meaningful names. Our tool uses an information-theoretic analysis to discover imprecise names given to the \emph{source} expression and \emph{destination} variable. \section{Discussion\label{subsec:Discussion}} In this work, we presented a summary of the findings from the named cast operators study. 
We have identified: two cases of iteration over enumeration types (\prettyref{fig:motiv} and \ref{lst:reint_dawn}), two cases of poorly named variables (\prettyref{lst:static_surface} and \ref{lst:reint_dawn}), two instances of anti-patterns that have been refactored in later versions of the software so that the named cast operators were no longer used (\prettyref{lst:static_ipv4} and \ref{lst:reint_removed}), two cases that increased the complexity of the code, which led to poor quality code and bugs (\prettyref{lst:static_path_rendering} and \ref{lst:reint_delete}), two cases that enabled a function to change behaviour based on the types of the pointer (\prettyref{lst:reint_mach}) and two good programming practices for protecting values stored in variables (\prettyref{lst:const_nonConst_tznImpl} and \ref{lst:const_frames}). The operator \lstinline*static_cast* is the most versatile and most widely used operator for explicit type conversions. In \prettyref{fig:motiv}, we discovered the use of \lstinline*static_cast* to iterate over an enumeration, which is an abuse of the enumeration type and leads to an inefficient implementation. \prettyref{lst:static_histogram} presents a good use of \lstinline*static_cast*, demonstrating how it can be used to provide safety during pointer initialisations. We also found examples where named casts were used as a quick workaround. The case from \prettyref{lst:static_ipv4} showed a cast which has since been removed. The case from \prettyref{lst:static_surface} shows conversions between primitive types, which in most cases is harmless. However, the \emph{destination} variable is a \lstinline*void* pointer, which can point to many types and lead to type confusion. The last case, from \prettyref{lst:static_path_rendering}, shows a correct use of the \lstinline*static_cast* operator embedded in complex code that led to inefficiency and even to a bug. 
The \lstinline*reinterpret_cast* operator is used mostly for pointer to pointer conversions, as it is the most permissive. \prettyref{lst:reint_mach} presented two examples of conversions of two different pointer types bound to a \emph{destination} which has the same name. Using the same name to store data of different kinds is not desirable, and we believe the code can benefit from variable renaming. In \prettyref{lst:reint_dawn}, we presented an example of serialisation/deserialisation where the developers have relied on \lstinline*reinterpret_cast* to be able to deal with a diversity of objects. There is a strong software engineering reason to do so, as it is essential to keep the interface to the serialiser and deserialiser generic to be able to deal with any data type. The case from \prettyref{lst:reint_delete} shows another example where complex code led to bugs. After the bugs were solved, the code was refactored and the named cast was completely removed. The last case shows the use of a \lstinline*reinterpret_cast* as a quick workaround to avoid implementing the behaviour for the empty-value case for entries of a HashMap. This named cast operation was also removed in recent versions. \lstinline*dynamic_cast* operators are used infrequently. They are used when the developer is unsure whether a conversion is possible. In this way, the runtime checks will confirm whether the casts are valid. An example where it is mandatory to prove a cast is valid appears in the implementation of an exception handler shown in \prettyref{lst:dyn_buildtools}. Another essential use-case of the \lstinline*dynamic_cast* operator is for downcasts. The component ICU contains the most dynamic conversions, and they are used for downcasts. Section \ref{subsec:Related-Work} discusses some solutions to avoid the expensive dynamic cast. However, the question of why, of all Chromium's components, only ICU has implemented its downcasts with \lstinline*dynamic_cast* remains unanswered. 
The operator \lstinline*const_cast* is used for software engineering reasons and security reasons. Even though this operator can introduce undefined behaviour, as presented in Section \ref{subsec:Context}, the analysed cases were adequately implemented. We have identified two \lstinline*const_cast* usage patterns from the analysis. One pattern appears when an object tries to access itself through the pointer \lstinline*this* in a function declared with the qualifier \lstinline*const*. Inside such \lstinline*const* functions, the pointer \lstinline*this* also carries the qualifier \lstinline*const*. However, there are times when the \lstinline*const this* pointer needs to be passed as a parameter to non-const functions. \prettyref{lst:const_nonConst_tznImpl} shows an example where an explicit conversion was performed in a getter to obtain information from an object. Another use-case appears when some non-const variables need to be protected against modification in specific methods. To do so, \lstinline*const_cast* is used to add the \lstinline*const* qualifier. \prettyref{lst:const_frames} shows how a stack is passed as a parameter to a function after the conversion. The motivation behind the use of some const type conversions comes from the use of third party libraries. 
\section{Evaluation\label{subsec:Evaluation}} \begin{table*}[tp] {\scriptsize{}}% \begin{tabular}{l>{\raggedright}p{2.6cm}>{\raggedleft}p{1.35cm}rrrr>{\raggedleft}p{0.05cm}rrrrr} \toprule {\scriptsize{}Name} & {\scriptsize{}Description} & {\scriptsize{}Lines of Code} & \multicolumn{4}{c}{{\scriptsize{}Assignment expressions}} & & \multicolumn{4}{c}{{\scriptsize{}Call expressions}} & {\scriptsize{}Total}\tabularnewline & & & {\scriptsize{}S} & {\scriptsize{}R} & {\scriptsize{}D} & {\scriptsize{}C} & & {\scriptsize{}S} & {\scriptsize{}R} & {\scriptsize{}D} & {\scriptsize{}C} & \tabularnewline \midrule {\scriptsize{}\href{https://chromium.googlesource.com/v8/v8.git/+/8200c5d117}{V8}} & {\scriptsize{}JavaScript Engine} & {\scriptsize{}1,359,009} & {\scriptsize{}1262} & {\scriptsize{}1649} & {\scriptsize{}0} & {\scriptsize{}8} & & {\scriptsize{}1592} & {\scriptsize{}353} & {\scriptsize{}0} & {\scriptsize{}4} & {\scriptsize{}4868}\tabularnewline {\scriptsize{}\href{https://chromium.googlesource.com/chromium/src/+/689912289c/net/}{Net}} & {\scriptsize{}Networking Protocols} & {\scriptsize{}765,964} & {\scriptsize{}616} & {\scriptsize{}1153} & {\scriptsize{}0} & {\scriptsize{}26} & & {\scriptsize{}693} & {\scriptsize{}770} & {\scriptsize{}0} & {\scriptsize{}15} & {\scriptsize{}3273}\tabularnewline {\scriptsize{}\href{https://chromium.googlesource.com/chromium/src/+/689912289c/gpu/}{gpu}} & {\scriptsize{}Graphics Stack} & {\scriptsize{}277,035} & {\scriptsize{}1386} & {\scriptsize{}307} & {\scriptsize{}0} & {\scriptsize{}10} & & {\scriptsize{}171} & {\scriptsize{}100} & {\scriptsize{}0} & {\scriptsize{}56} & {\scriptsize{}2030}\tabularnewline {\scriptsize{}\href{https://chromium.googlesource.com/chromium/src/+/689912289c/ui/}{UI}} & {\scriptsize{}UI Frameworks} & {\scriptsize{}178,843} & {\scriptsize{}197} & {\scriptsize{}823} & {\scriptsize{}0} & {\scriptsize{}5} & & {\scriptsize{}689} & {\scriptsize{}36} & {\scriptsize{}0} & {\scriptsize{}4} & 
{\scriptsize{}1754}\tabularnewline {\scriptsize{}\href{https://chromium.googlesource.com/chromium/src/+/689912289c/media/}{Media}} & {\scriptsize{}Media Components} & {\scriptsize{}370,069} & {\scriptsize{}450} & {\scriptsize{}700} & {\scriptsize{}0} & {\scriptsize{}20} & & {\scriptsize{}358} & {\scriptsize{}207} & {\scriptsize{}0} & {\scriptsize{}3} & {\scriptsize{}1738}\tabularnewline {\scriptsize{}\href{https://chromium.googlesource.com/chromium/src/+/689912289c/third_party/blink/}{Blink}} & {\scriptsize{}Browser Engine} & {\scriptsize{}1,524,213} & {\scriptsize{}1081} & {\scriptsize{}120} & {\scriptsize{}0} & {\scriptsize{}0} & & {\scriptsize{}138} & {\scriptsize{}0} & {\scriptsize{}0} & {\scriptsize{}0} & {\scriptsize{}1339}\tabularnewline {\scriptsize{}\href{https://chromium.googlesource.com/chromium/src/+/689912289c/chrome/}{Chrome}} & {\scriptsize{}Application Layer } & {\scriptsize{}2,385,043} & {\scriptsize{}776} & {\scriptsize{}199} & {\scriptsize{}0} & {\scriptsize{}22} & & {\scriptsize{}256} & {\scriptsize{}3} & {\scriptsize{}0} & {\scriptsize{}0} & {\scriptsize{}1256}\tabularnewline {\scriptsize{}\href{https://webrtc.googlesource.com/src.git/+/f1e97b9}{Webrtc}} & {\scriptsize{}Communications API} & {\scriptsize{}634,428} & {\scriptsize{}482} & {\scriptsize{}78} & {\scriptsize{}0} & {\scriptsize{}9} & & {\scriptsize{}541} & {\scriptsize{}33} & {\scriptsize{}0} & {\scriptsize{}1} & {\scriptsize{}1144}\tabularnewline {\scriptsize{}\href{https://skia.googlesource.com/skia.git/+/a1ea0a96f4}{Skia}} & {\scriptsize{}Graphics Library} & {\scriptsize{}665,350} & {\scriptsize{}349} & {\scriptsize{}274} & {\scriptsize{}0} & {\scriptsize{}20} & & {\scriptsize{}208} & {\scriptsize{}179} & {\scriptsize{}0} & {\scriptsize{}33} & {\scriptsize{}1063}\tabularnewline {\scriptsize{}\href{https://chromium.googlesource.com/chromium/src/+/689912289c/device/}{Device}} & {\scriptsize{}Sensor Communication} & {\scriptsize{}133,831} & {\scriptsize{}469} & {\scriptsize{}376} & 
{\scriptsize{}0} & {\scriptsize{}0} & & {\scriptsize{}116} & {\scriptsize{}30} & {\scriptsize{}0} & {\scriptsize{}0} & {\scriptsize{}991}\tabularnewline {\scriptsize{}\href{https://chromium.googlesource.com/chromium/src/+/689912289c/chrome/browser/policy/}{Policy}} & {\scriptsize{}Policy Settings} & {\scriptsize{}38,532} & {\scriptsize{}121} & {\scriptsize{}34} & {\scriptsize{}0} & {\scriptsize{}353} & & {\scriptsize{}314} & {\scriptsize{}34} & {\scriptsize{}0} & {\scriptsize{}0} & {\scriptsize{}856}\tabularnewline {\scriptsize{}\href{https://android.googlesource.com/platform/external/perfetto.git/+/b06d185a49}{Perfetto}} & {\scriptsize{}Tracing Service} & {\scriptsize{}205,355} & {\scriptsize{}297} & {\scriptsize{}7} & {\scriptsize{}0} & {\scriptsize{}54} & & {\scriptsize{}454} & {\scriptsize{}1} & {\scriptsize{}0} & {\scriptsize{}0} & {\scriptsize{}813}\tabularnewline {\scriptsize{}\href{https://chromium.googlesource.com/chromium/src/+/689912289c/chrome/browser/safe_browsing/}{Safe Browsing}} & {\scriptsize{}URL Check Protocol} & {\scriptsize{}9,046} & {\scriptsize{}162} & {\scriptsize{}57} & {\scriptsize{}0} & {\scriptsize{}79} & & {\scriptsize{}440} & {\scriptsize{}46} & {\scriptsize{}0} & {\scriptsize{}0} & {\scriptsize{}784}\tabularnewline {\scriptsize{}\href{https://dawn.googlesource.com/dawn.git/+/0da52f2}{Dawn}} & {\scriptsize{}WebGPU } & {\scriptsize{}66,458} & {\scriptsize{}125} & {\scriptsize{}542} & {\scriptsize{}0} & {\scriptsize{}0} & & {\scriptsize{}25} & {\scriptsize{}3} & {\scriptsize{}0} & {\scriptsize{}0} & {\scriptsize{}695}\tabularnewline {\scriptsize{}\href{https://chromium.googlesource.com/chromium/src/+/689912289c/third_party/protobuf/}{Protobuf}} & {\scriptsize{}Serializing Struct Data} & {\scriptsize{}227,475} & {\scriptsize{}160} & {\scriptsize{}77} & {\scriptsize{}0} & {\scriptsize{}17} & & {\scriptsize{}394} & {\scriptsize{}10} & {\scriptsize{}0} & {\scriptsize{}15} & {\scriptsize{}673}\tabularnewline 
{\scriptsize{}\href{https://chromium.googlesource.com/chromium/src/+/689912289c/chrome/common/}{Common}} & {\scriptsize{}Application Layer } & {\scriptsize{}39,981} & {\scriptsize{}341} & {\scriptsize{}319} & {\scriptsize{}0} & {\scriptsize{}1} & & {\scriptsize{}9} & {\scriptsize{}0} & {\scriptsize{}0} & {\scriptsize{}0} & {\scriptsize{}670}\tabularnewline {\scriptsize{}\href{https://chromium.googlesource.com/chromium/src/+/689912289c/base/}{Base}} & {\scriptsize{}Core Components} & {\scriptsize{}278,364} & {\scriptsize{}192} & {\scriptsize{}220} & {\scriptsize{}0} & {\scriptsize{}7} & & {\scriptsize{}129} & {\scriptsize{}102} & {\scriptsize{}0} & {\scriptsize{}6} & {\scriptsize{}656}\tabularnewline {\scriptsize{}\href{https://pdfium.googlesource.com/pdfium/+/0f4ac587a4}{Pdfium}} & {\scriptsize{}PDF Library} & {\scriptsize{}483,545} & {\scriptsize{}369} & {\scriptsize{}62} & {\scriptsize{}0} & {\scriptsize{}1} & & {\scriptsize{}181} & {\scriptsize{}20} & {\scriptsize{}0} & {\scriptsize{}0} & {\scriptsize{}633}\tabularnewline {\scriptsize{}\href{https://chromium.googlesource.com/chromium/deps/icu/+/2ecd66c696/}{ICU}} & {\scriptsize{}Unicode Components} & {\scriptsize{}325,354} & {\scriptsize{}285} & {\scriptsize{}63} & {\scriptsize{}75} & {\scriptsize{}40} & & {\scriptsize{}79} & {\scriptsize{}14} & {\scriptsize{}1} & {\scriptsize{}5} & {\scriptsize{}562}\tabularnewline {\scriptsize{}\href{https://chromium.googlesource.com/chromium/src/+/689912289c/components/viz/}{VIZ}} & {\scriptsize{}Visual Subservices} & {\scriptsize{}83,767} & {\scriptsize{}176} & {\scriptsize{}235} & {\scriptsize{}0} & {\scriptsize{}0} & & {\scriptsize{}51} & {\scriptsize{}57} & {\scriptsize{}0} & {\scriptsize{}0} & {\scriptsize{}519}\tabularnewline {\scriptsize{}\href{https://chromium.googlesource.com/chromium/src/+/689912289c/components/metrics}{Metrics Proto}} & {\scriptsize{}Data Analysis} & {\scriptsize{}75,204} & {\scriptsize{}165} & {\scriptsize{}0} & {\scriptsize{}0} & 
{\scriptsize{}47} & & {\scriptsize{}304} & {\scriptsize{}0} & {\scriptsize{}0} & {\scriptsize{}0} & {\scriptsize{}516}\tabularnewline {\scriptsize{}\href{https://chromium.googlesource.com/chromium/src/+/689912289c/chrome/browser/sync/}{Sync}} & {\scriptsize{}Sync Implementation } & {\scriptsize{}139,526} & {\scriptsize{}92} & {\scriptsize{}1} & {\scriptsize{}0} & {\scriptsize{}84} & & {\scriptsize{}313} & {\scriptsize{}3} & {\scriptsize{}0} & {\scriptsize{}0} & {\scriptsize{}493}\tabularnewline {\scriptsize{}\href{https://chromium.googlesource.com/angle/angle.git/+/2328d65ab}{Angle}} & {\scriptsize{}Graphics Engine} & {\scriptsize{}2,381,153} & {\scriptsize{}175} & {\scriptsize{}28} & {\scriptsize{}0} & {\scriptsize{}3} & & {\scriptsize{}230} & {\scriptsize{}19} & {\scriptsize{}0} & {\scriptsize{}0} & {\scriptsize{}455}\tabularnewline {\scriptsize{}\href{https://chromium.googlesource.com/chromium/src/+/689912289c/buildtools/}{Buildtools}} & {\scriptsize{}Buildtools Chromium} & {\scriptsize{}510,018} & {\scriptsize{}187} & {\scriptsize{}153} & {\scriptsize{}13} & {\scriptsize{}2} & & {\scriptsize{}25} & {\scriptsize{}7} & {\scriptsize{}0} & {\scriptsize{}3} & {\scriptsize{}390}\tabularnewline {\scriptsize{}\href{https://chromium.googlesource.com/chromium/src/+/689912289c/media/audio/}{Audio}} & {\scriptsize{}Audio System} & {\scriptsize{}34,120} & {\scriptsize{}43} & {\scriptsize{}202} & {\scriptsize{}0} & {\scriptsize{}0} & & {\scriptsize{}33} & {\scriptsize{}50} & {\scriptsize{}0} & {\scriptsize{}0} & {\scriptsize{}328}\tabularnewline {\scriptsize{}\href{https://swiftshader.googlesource.com/SwiftShader.git/+/0cd9a67ce}{Swiftshader}} & {\scriptsize{}Graphics Library} & {\scriptsize{}2,166,480} & {\scriptsize{}160} & {\scriptsize{}87} & {\scriptsize{}0} & {\scriptsize{}5} & & {\scriptsize{}62} & {\scriptsize{}6} & {\scriptsize{}0} & {\scriptsize{}0} & {\scriptsize{}320}\tabularnewline 
{\scriptsize{}\href{https://chromium.googlesource.com/chromium/src/+/689912289c/extensions/}{Extensions}} & {\scriptsize{}Core Parts Extension } & {\scriptsize{}223,979} & {\scriptsize{}312} & {\scriptsize{}4} & {\scriptsize{}0} & {\scriptsize{}0} & & {\scriptsize{}0} & {\scriptsize{}0} & {\scriptsize{}0} & {\scriptsize{}0} & {\scriptsize{}316}\tabularnewline {\scriptsize{}\href{https://chromium.googlesource.com/chromium/src/+/689912289c/cc/}{CC}} & {\scriptsize{}Compositor Renderer} & {\scriptsize{}198,390} & {\scriptsize{}117} & {\scriptsize{}17} & {\scriptsize{}0} & {\scriptsize{}0} & & {\scriptsize{}167} & {\scriptsize{}6} & {\scriptsize{}0} & {\scriptsize{}2} & {\scriptsize{}309}\tabularnewline {\scriptsize{}\href{https://chromium.googlesource.com/chromium/src/+/689912289c/components/remote_cocoa/}{Remote Cocoa}} & {\scriptsize{}Cocoa Front-End} & {\scriptsize{}4,255} & {\scriptsize{}137} & {\scriptsize{}158} & {\scriptsize{}0} & {\scriptsize{}0} & & {\scriptsize{}5} & {\scriptsize{}1} & {\scriptsize{}0} & {\scriptsize{}0} & {\scriptsize{}301}\tabularnewline {\scriptsize{}\href{https://chromium.googlesource.com/chromium/src/+/689912289c/base/}{Logging}} & {\scriptsize{}Logs Implementation } & {\scriptsize{}42,865} & {\scriptsize{}90} & {\scriptsize{}0} & {\scriptsize{}0} & {\scriptsize{}6} & & {\scriptsize{}176} & {\scriptsize{}0} & {\scriptsize{}0} & {\scriptsize{}0} & {\scriptsize{}272}\tabularnewline {\scriptsize{}Rest of Corpus} & {\scriptsize{}Components < 250 } & {\scriptsize{}---} & {\scriptsize{}2238} & {\scriptsize{}1284} & {\scriptsize{}0} & {\scriptsize{}247} & & {\scriptsize{}1925} & {\scriptsize{}545} & {\scriptsize{}0} & {\scriptsize{}42} & {\scriptsize{}6281}\tabularnewline \midrule & \multicolumn{2}{r}{\textbf{\scriptsize{}Total Casts}} & {\scriptsize{}13012} & {\scriptsize{}9229} & {\scriptsize{}88} & {\scriptsize{}1066} & & {\scriptsize{}10078} & {\scriptsize{}2635} & {\scriptsize{}1} & {\scriptsize{}189} & {\scriptsize{}36298}\tabularnewline 
\bottomrule \end{tabular}{\scriptsize\par} \caption{C++ corpus from Google Chromium, showing the distribution of cast types and the frequency of usage of each conversion operator (S - \lstinline*static_cast*, R - \lstinline*reinterpret_cast*, D - \lstinline*dynamic_cast*, C - \lstinline*const_cast*).} \label{tbl:dataset}\vspace{-15bp} \end{table*} Our corpus consists of casts collected from the Chromium project. We give a quantitative overview of the types of named casts in our corpus in Section \ref{subsec:Quantitative-analysis}. A human evaluation is performed on a sampled set of named casts where the subtokens in \emph{source} and \emph{destination} are significantly different, and the results are presented in Section \ref{subsec:manual-evaluation}. We select and describe the most interesting named cast cases in Section \ref{subsec:Qualitative-analysis}. \begin{figure*}[t] \subfloat[Static Cast Assignment Cases.]{\includegraphics[width=1\columnwidth]{pics/bop_condEntropy_rhsLength_S} \label{fig:a_rhs_ce_s}}\subfloat[Reinterpret Cast, Const Cast and Dynamic Cast Assignment Cases.]{\includegraphics[width=1\columnwidth]{pics/bop_condEntropy_rhsLength_rest} \label{fig:a_rhs_ce_rest}}\vspace{-10bp} \subfloat[Static Cast Function Call Cases.]{\includegraphics[width=1\columnwidth]{pics/callexpr_condEntropy_rhsLength_S} \label{fig:c_rhs_ce_s}}\subfloat[Reinterpret Cast, Const Cast and Dynamic Cast Function Call Cases.]{\includegraphics[width=1\columnwidth]{pics/callexpr_condEntropy_rhsLength_rest} \label{fig:c_rhs_ce_rest}}\caption{Type conversions represented by \emph{source} expression length and conditional entropy. The star cases are the outliers.} \end{figure*} \subsection{Quantitative analysis\label{subsec:Quantitative-analysis}} \prettyref{tbl:dataset} shows the distribution of named casts in various components of Chromium. Our corpus consists of 36,298 named casts. 
\prettyref{tbl:dataset} also breaks down the frequency of each category of named cast for individual modules in the Chromium corpus. Overall, 63.62\% are \lstinline*static_cast*s, 32.68\% are \lstinline*reinterpret_cast*s, 0.25\% are \lstinline*dynamic_cast*s and 3.45\% are \lstinline*const_cast*s. As discussed in Section \ref{subsec:Extraction-of-Named-Casts}, we consider named casts that are part of either assignments or actual-to-formal parameter binding in function calls. The proportion of named casts that are part of assignments is 64.46\% (23,395 casts), while only 35.54\% (12,903 casts) occur in call expressions. It is observed from \prettyref{tbl:dataset} that the \lstinline*dynamic_cast* and \lstinline*const_cast* operators are used rarely. The \lstinline*dynamic_cast* operator uses Run-Time Type Identification (RTTI) to verify that the types can be converted at runtime, which is an expensive operation; it is likely that the cost of this checking prohibits its widespread use. The \lstinline*const_cast* operator adds or removes the constness or volatility of variables. Such variables are rare themselves, which explains why so few instances of \lstinline*const_cast* are present in our dataset. \lstinline*static_cast* can be used to cast objects up or down a class hierarchy. A compile-time check on the class inheritance hierarchy evaluates whether the conversion between the object and the \emph{destination} type is possible. Therefore, \lstinline*static_cast* is safer than \lstinline*reinterpret_cast*, which is extremely permissive, allowing arbitrary type conversions. Indeed, best practice is to prefer \lstinline*static_cast* over \lstinline*reinterpret_cast*, and this is reflected in the prevalence of \lstinline*static_cast* operations in our corpus. It is noticeable from \prettyref{tbl:dataset} that the larger and performance-critical modules such as the JavaScript compiler \emph{V8}, networking (\emph{Net}), GPU, user interface (\emph{UI}), the \emph{Media} libraries, etc.\ have the most casts.
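The semantic differences among the four operators can be made concrete with a small self-contained sketch; this is our own illustrative example (the \lstinline*Base*/\lstinline*Derived* hierarchy is invented), not Chromium code:

```cpp
#include <cassert>

struct Base { virtual ~Base() = default; };
struct Derived : Base { int tag = 42; };

bool named_cast_demo() {
    Derived d;
    Base* b = &d;  // implicit upcast

    // static_cast: checked against the class hierarchy at compile time,
    // but with no runtime verification of the dynamic type.
    Derived* s = static_cast<Derived*>(b);
    if (s->tag != 42) return false;

    // dynamic_cast: RTTI verifies the downcast at runtime and yields
    // nullptr on mismatch -- the expensive check discussed above.
    Base plain;
    if (dynamic_cast<Derived*>(&plain) != nullptr) return false;

    // const_cast: only adds or removes const/volatile qualifiers.
    const Derived* cd = &d;
    const_cast<Derived*>(cd)->tag = 7;  // fine: d itself is not const

    // reinterpret_cast: arbitrary reinterpretation with no checks; safe
    // here only because we cast back to the original type.
    void* raw = &d;
    return reinterpret_cast<Derived*>(raw)->tag == 7;
}
```

Only \lstinline*dynamic_cast* pays a runtime cost for its check, which is consistent with its rarity in performance-critical modules.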
Interestingly, none of these modules uses the runtime-intensive \lstinline*dynamic_cast* operator. Only the International Components for Unicode \emph{(ICU)} and \emph{Buildtools} components contain \lstinline*dynamic_cast* operators, 88 in total. Neither of these components is central to the user experience of the browser and thus they can potentially tolerate runtime overheads. Figures \ref{fig:a_rhs_ce_s}, \ref{fig:a_rhs_ce_rest}, \ref{fig:c_rhs_ce_s} and \ref{fig:c_rhs_ce_rest} plot the conditional entropy against the length of the \emph{source} expression. For named casts that are part of assignments, \prettyref{fig:a_rhs_ce_s} shows a graph for \lstinline*static_cast* and \prettyref{fig:a_rhs_ce_rest} shows a graph for \lstinline*const_cast*, \lstinline*dynamic_cast* and \lstinline*reinterpret_cast*. For named casts that are part of actual-to-formal bindings, \prettyref{fig:c_rhs_ce_s} shows a graph for \lstinline*static_cast* and \prettyref{fig:c_rhs_ce_rest} shows the graph for the remaining casts. As expected, conditional entropy stays low for longer \emph{source} expressions. It is interesting to note that some named casts with short \emph{source} expressions show noticeably high conditional entropy. These are the cases we investigated further, to understand whether the named cast operation is used carefully and whether there is ample justification to cross type boundaries. These outliers are highlighted in Figures \ref{fig:a_rhs_ce_s}, \ref{fig:a_rhs_ce_rest}, \ref{fig:c_rhs_ce_s} and \ref{fig:c_rhs_ce_rest} using colours different from the rest of the points in the dataset. They were identified by fitting a Gaussian distribution to the data and selecting the upper quartile.
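The outlier selection can be sketched as follows; the concrete cutoff (a z-score above 0.6745, the upper quartile of a standard normal) is our reading of ``selecting the upper quartile'', and the function name is ours:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Fit a Gaussian to the conditional-entropy values and flag the points
// falling in its upper quartile as outliers. The 0.6745 cutoff (Q3 of
// the standard normal) is an assumption, not taken from the paper.
std::vector<bool> flag_outliers(const std::vector<double>& entropy) {
    const double n = static_cast<double>(entropy.size());
    double mean = 0.0, var = 0.0;
    for (double e : entropy) mean += e;
    mean /= n;
    for (double e : entropy) var += (e - mean) * (e - mean);
    const double sd = std::sqrt(var / n);
    std::vector<bool> flags;
    for (double e : entropy)
        flags.push_back(sd > 0.0 && (e - mean) / sd > 0.6745);
    return flags;
}
```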
The outliers consist of 991 \lstinline*static_cast*, 319 \lstinline*reinterpret_cast*, 11 \lstinline*dynamic_cast* and 47 \lstinline*const_cast* operations. The share of each category of named cast in the outliers is proportional to its share of the original dataset. \subsection{Manual evaluation \label{subsec:manual-evaluation}} \begin{figure*}[tp] \includegraphics[scale=0.85]{pics/cast_data_pie_chart} \caption{Classification of the manually evaluated results (TP - True Positive, FP - False Positive)} \label{fig:cast_data_pie_chart} \end{figure*} For the manual investigation of named casts, we selected a subset from the outliers using random uniform sampling targeting a 90\% confidence using the central limit theorem {[}\citeauthor{central_limit_theorem} \citeyear{central_limit_theorem}{]}. The sampled dataset consists of 164 data points with a breakdown of 126 \lstinline*static_cast*, 32 \lstinline*reinterpret_cast*, 5 \lstinline*const_cast* and 1 \lstinline*dynamic_cast* operations. Our human evaluation had three different raters analysing the sampled dataset and classifying each case as a \emph{true positive} or a \emph{false positive}. The three evaluators classified 50, 90 and 94 cases out of 164, respectively, as true positives. The true positive rate represents how often our approach correctly identified a case that presents an incorrect implementation or imprecise names for the identifiers. Our raters agreed on a total of 83 \emph{true positive} cases, which means that our technique presents a \emph{true positive} rate of 50.6\%. The results of the manual analysis are presented in \prettyref{fig:cast_data_pie_chart}. The raters categorised each true positive by its type: 25.6\% of cases represent incorrect implementations while 28.04\% of cases represent imprecise names for the \emph{source} and \emph{destination}.
Some examples of imprecise \emph{source/destination} name pairs are: \lstinline*tag* with \lstinline*chars[i]*, \lstinline*levels* with \lstinline*fparams[0]*, \lstinline*param* with \lstinline*bufSize*, \lstinline*t* with \lstinline*output_cursor*, \lstinline*val* with \lstinline*p[i]*, and \lstinline*frames* with \lstinline*out_trace*. The raters evaluated cases as having imprecise names when the source and destination names are not meaningful and when the names cause confusion rather than clarify the meaning of the code. \paragraph*{Analysis of False Positives} The cases which represent false positives fall into two categories. The first is when the named cast is used both correctly and efficiently, and the identifier names are related and convey the code's purpose. Such cases make up only 24.4\% of the sampled dataset. The remaining quarter of the dataset consists of correctly implemented named conversions with generic names. In some cases, generic names do not create code quality problems and can provide enough information about the named cast. For instance, one case converts \lstinline*buffer[buffer_pos]* to \lstinline*current*, which means that the code extracts one element of \lstinline*buffer* at index \lstinline+buffer_pos+ and stores it in the \lstinline*current* variable. Even though this named cast is sound and the identifier names are reasonable, our tool flags such cases as false positives since their genericity makes the names look different. As part of our evaluation, we want to discover the degree of agreement between raters. Cohen's Kappa coefficient {[}\citeauthor{kappa} \citeyear{kappa}{]} is a robust metric for the level of inter-rater agreement between two or more raters. Kappa can take values between -1 and 1: a value of 1 means that the raters are in perfect agreement, while a negative value means that the raters are in disagreement.
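For two raters making binary true/false-positive judgements, Cohen's Kappa can be computed from the 2x2 contingency counts. The following is a sketch with our own illustrative names, not the evaluation scripts used in the study:

```cpp
#include <cassert>
#include <cmath>

// Cohen's kappa from the 2x2 agreement counts of two raters:
// both_pos / both_neg agree, only_a_pos / only_b_pos disagree.
double cohens_kappa(double both_pos, double both_neg,
                    double only_a_pos, double only_b_pos) {
    const double n = both_pos + both_neg + only_a_pos + only_b_pos;
    const double p_o = (both_pos + both_neg) / n;      // observed agreement
    const double a_pos = (both_pos + only_a_pos) / n;  // A's positive rate
    const double b_pos = (both_pos + only_b_pos) / n;  // B's positive rate
    const double p_e = a_pos * b_pos
                     + (1.0 - a_pos) * (1.0 - b_pos);  // chance agreement
    return (p_o - p_e) / (1.0 - p_e);
}
```

Perfect agreement yields 1, and agreement no better than chance yields 0, matching the interpretation above.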
The overall Kappa coefficient was calculated as the mean of the pairwise Kappa coefficients between raters. The Cohen's Kappa coefficient for this evaluation was approximately 0.62, which means that our raters were in substantial agreement about the nature of the type conversions. \subsection{Qualitative analysis \label{subsec:Qualitative-analysis}} We split the named cast cases from the sampled set based on their operator and present the analysis of the most interesting cases in the following subsections. \subsubsection{Analysis of \lstinline*static_cast* examples} An example of a \lstinline*static_cast* where the \emph{source} and \emph{destination} look different is presented in \prettyref{lst:static_histogram}. The listing contains a call to \lstinline*CompareAndSwapPtr* as well as its definition. This method is actually called from within a macro function definition, \lstinline*RTC_HISTOGRAM_COMMON_BLOCK*. The purpose of this macro function is to add the information passed to the \lstinline*histogram_pointer* safely. If the memory where \lstinline*histogram_pointer* points is empty, then the pointer will be changed to point to the new memory address. Otherwise, the code on lines 1-4 will ensure that it points to a \lstinline*nullptr*. The \lstinline*static_cast* used on line 3 of \prettyref{lst:static_histogram} is passed as a parameter to the function \lstinline*CompareAndSwapPtr*. The function call is part of a pointer declaration; the newly declared pointer \lstinline*prev_pointer* receives the output of the method \lstinline*CompareAndSwapPtr*. This function makes use of the Windows API \lstinline*InterlockedCompareExchangePointer*, which performs a pointer comparison and swap atomically. The code has to clear \lstinline*atomic_histogram_pointer*, so the API call ultimately compares the pointer with a \lstinline*nullptr*.
If those two pointers contain different values, then it will store the value of \lstinline*nullptr* in the address of \lstinline*atomic_histogram_pointer*. The \lstinline*static_cast* converts the \lstinline*nullptr* to the type \lstinline+webrtc::metrics::Histogram*+ for consistency. Since the code in \prettyref{lst:static_histogram} tries to validate whether \lstinline*atomic_histogram_pointer* is \lstinline*null*, it is required to compare the pointer with a null pointer literal: \lstinline*nullptr*. In order to compare two pointers, they need to be of the same type, and therefore a \lstinline*static_cast* is used, as it is the only named cast operator which allows casts from \lstinline*nullptr* to a different type. The \emph{destination} identifier to which the named cast is bound is \lstinline*old_value*. While \lstinline*old_value* looks different from \lstinline*nullptr*, which is why our information-theoretic analysis identified it, the method \lstinline*CompareAndSwapPtr* is likely designed to be generic and accepting of many different pointer types. Therefore, this use of the named cast is sound. \begin{figure}[t] \begin{lstlisting}[]
webrtc::metrics::Histogram* prev_pointer =
  rtc::AtomicOps::CompareAndSwapPtr(
    &atomic_histogram_pointer,
    static_cast<webrtc::metrics::Histogram*>(nullptr),
    histogram_pointer);

static T* CompareAndSwapPtr(T* volatile* ptr,
                            T* old_value, T* new_value) {
  return static_cast<T*>(
    ::InterlockedCompareExchangePointer(
      reinterpret_cast<PVOID volatile*>(ptr),
      old_value, new_value));
}
\end{lstlisting} \caption{A \lstinline*static_cast* example which presents a good \foreignlanguage{british}{utilisation} of the operator for efficiency and portability.} \label{lst:static_histogram} \end{figure} \prettyref{lst:static_ipv4} presents a use of the \lstinline*static_cast* operator in the component \emph{Base}, in file \emph{ip\_address.cc}, inside the method \lstinline*ParseV4*.
This method is used as part of the constructor for the class \lstinline+IPAddress+ to extract an IPv4 address from a string. The named cast operation in \prettyref{lst:static_ipv4} is part of a variable assignment. Although the \emph{source} and \emph{destination} identifiers were selected because they look different, we need to understand how they are used to assess whether a named cast is necessary here. We studied how the \emph{source} and \emph{destination} identifiers are used and found that the input string for \lstinline*ParseV4* is split into octets in order to be parsed and added to the IPv4 address. The \emph{source} identifier is \lstinline*next_octet* of type \lstinline*uint16_t*, which represents one byte of the IPv4 address. The destination variable is \lstinline*address.bytes_*, where \lstinline*bytes_* is a member of the class \lstinline*IPv4*; specifically, it is an array of type \lstinline*array<uint8_t, 16>*. The array has length 16 since an \lstinline*IPAddress* can also have the IPv6 format. The implementation of \lstinline*ParseV4* does not seem to be erroneous. However, the use of the \lstinline*static_cast* operator is unnecessary since the conversion from string to octets can be done with built-in parsing functions. Developers can use functions such as \lstinline+sscanf+ to read parts of the formatted string and directly return values of the desired type. In fact, this is exactly what the developers did in later versions of the implementation: the \lstinline*ParseV4* function has since been refactored {[}\citeauthor{commit_ipadd} \citeyear{commit_ipadd}{]} and updated to use \lstinline+sscanf+. \begin{figure}[t] \begin{lstlisting}[]
address.bytes_[i++] = static_cast<uint8_t>(next_octet);
\end{lstlisting} \caption{An example of the \lstinline*static_cast* operator used in the function \emph{ParseV4}.
This function has since been refactored to replace the cast with a function that reads the values directly.} \label{lst:static_ipv4} \end{figure} The code in \prettyref{lst:static_surface} presents a set of four \lstinline*static_cast* conversions collected from the component \emph{Swiftshader}, from the file \emph{Surface.cpp}. We identified the casts because the \emph{source} identifiers are very short compared to the \emph{destination} identifier. These casts are inside a method \lstinline*write* which contains a \lstinline*switch* statement that writes the \foreignlanguage{british}{colour} values (RGBA format) to a data structure. The \emph{source} identifiers are \lstinline*r*, \lstinline*g*, \lstinline*b* and \lstinline*a* of type \lstinline*float*, which are short for the \foreignlanguage{british}{colours} red, green, blue and alpha, the last of which represents the opacity. The \emph{destination} identifier, which is originally a \lstinline*void* pointer, has a generic name \lstinline*element* because it may point to arbitrary data types. However, notice in \prettyref{lst:static_surface} that \lstinline*element* is cast, with a C-style cast, to point at an \lstinline*unsigned int* to match the desired \emph{destination} type. Casting \lstinline*void* pointers at the point of use in this way can be confusing and could lead to the variable \lstinline*element* being treated as if it had another type. We found 45 such conversions in the \lstinline*switch* statement. Another \lstinline*static_cast* we analysed is presented on line 4 of \prettyref{lst:static_path_rendering}, which belongs to the file \emph{Context.cpp} from the component \emph{libANGLE}. This case has been identified because the \emph{source} and \emph{destination} expressions are different. The \emph{source} variable is a pointer of type \lstinline+const void*+ with the identifier \lstinline+paths+ and it represents a vector of \lstinline+paths+ from the Render Tree.
The \emph{destination} variable is a pointer of type \lstinline+const auto*+ with the identifier \lstinline+nameArray+. This conversion is required to allow the conversion of the \lstinline+paths+ vector into a target template type. The template type is used as an argument to the named cast operator on line 4, and it appears in the function template declaration on lines 1-2 of \prettyref{lst:static_path_rendering}. The role of the function \lstinline+GatherPaths+ is to iterate through all the \lstinline+paths+ and return their names. This case belongs to a larger and more complex piece of code that validates the command buffer for path rendering. The developers decided to stop supporting this feature since this rendering method performed worse than the other rendering methods {[}\citeauthor{nameArray_bug1} \citeyear{nameArray_bug1}{]}. In addition, under specific circumstances this functionality tried to retrieve information through a null pointer, which led to a crash {[}\citeauthor{nameArray_bug2} \citeyear{nameArray_bug2}{]}. This example shows that a named cast can be used correctly, but it may also add complexity to the code, leading to inefficient and error-prone code. \begin{figure}[t] \begin{lstlisting}[]
((unsigned int*)element)[0] = static_cast<unsigned int>(r);
((unsigned int*)element)[1] = static_cast<unsigned int>(g);
((unsigned int*)element)[2] = static_cast<unsigned int>(b);
((unsigned int*)element)[3] = static_cast<unsigned int>(a);
\end{lstlisting} \caption{Example of how a \lstinline*static_cast* is used on primitive types.
The destination variable is originally a \lstinline*void* pointer and may potentially be misused if the developer is unaware of the various types it can represent.} \label{lst:static_surface} \end{figure} \subsubsection{Analysis of \lstinline*reinterpret_cast* examples} \prettyref{lst:reint_mach} presents two similar cases of \lstinline*reinterpret_cast* with high conditional entropy. On investigating these cases, we found that two different \emph{source} identifiers are bound to the same \emph{destination} identifier even though the conversions appear in different components. \prettyref{lst:reint_mach} contains the calls and the signatures for the functions \lstinline*host_statistics* and \lstinline*host_info*. These method calls were collected from the files \emph{process\_metrics\_mac.cc} from the \emph{Base} component and \emph{audio\_low\_latency\_input\_mac.cc} from the \emph{Media} component. The functions \lstinline*host_statistics* and \lstinline*host_info* are defined in the \emph{Mach} library, which contains services and primitives for the OS X kernel. The role of these functions is to retrieve host-specific information: \lstinline*host_statistics* on line 2 obtains information about the virtual memory of a host, while \lstinline*host_info* on line 10 retrieves basic information about a host, such as its current number of physical processors. Both methods return a variable \lstinline*kr* of type \lstinline*kern_return_t*. This variable is an integer which maps to a list of generic errors: if the method succeeds, \lstinline*kr* has the value \lstinline*0*; otherwise, it has a different value which represents a specific error. In fact, not only these two functions but most of the methods in the \emph{Mach} library follow the same coding conventions and have a similar format.
The \emph{source} variable for the first case has the identifier \lstinline*&data*, which is a generic name. Its type is \lstinline*vm_statistics_data_t*, a pointer to the structure \lstinline*vm_statistics*, which contains statistics on the kernel's use of virtual memory. The source identifier for the cast on line 10 is \lstinline*&hbi*, which is an acronym of the source variable's type. Just like \lstinline*&data*, \lstinline*&hbi* is the address of a structure, \lstinline*host_basic_info*, which is used to present basic information about a host. The two casts in \prettyref{lst:reint_mach} have identically named \emph{destination} identifiers: \lstinline*host_info_out* with type \lstinline*host_info_t*. \begin{figure}[t] \begin{lstlisting}[]
template <typename T>
std::vector<Path *> GatherPaths(...,
    const void *paths ...

const auto *nameArray = static_cast<const T *>(paths);
\end{lstlisting} \caption{Example of a \lstinline*static_cast* which was part of code which is no longer supported.} \label{lst:static_path_rendering} \end{figure} \lstinline*host_statistics* can accept two different types of structures: \lstinline*vm_statistics* for virtual memory information and \lstinline*host_load_info* for the host's processor load information. The \lstinline*flavor* parameter keeps track of the type of statistics desired; in this way, the functions treat each \emph{destination} variable differently based on the variable \lstinline*flavor*. Implementing the functions in this manner allows them to perform different operations based on the parameters passed. The \emph{destination} identifiers are identical since the functions \lstinline*host_statistics* and \lstinline*host_info* follow the same coding conventions and have a similar format. Unfortunately, if the developer does not pass a matching type and \lstinline*flavor* to these functions, the result may be a crash.
This is a case where rigorously adhering to a coding convention can cause confusion during development. We noticed a further pattern while analysing similar casts in the \emph{Mach} component: those type conversions are designed for a specific platform, Mac OS. According to the developers' comments {[}\citeauthor{problems_kernel_count} \citeyear{problems_kernel_count}{]}, the implementation for this platform caused them the most problems, so they had to build a platform-specific solution. This is also supported by the fact that those functions are defined in the \emph{Mach} library. Even if this conversion pattern can cause confusion, it seems vital for Chromium's execution on the Mac OS platform since it was designed for that specific system. \begin{figure}[t] \begin{lstlisting}[]
// check the total number of pages
// currently in use and pageable.
kern_return_t kr = host_statistics(
    host.get(), HOST_VM_INFO,
    reinterpret_cast<host_info_t>(&data),
    &count);

kern_return_t host_statistics(
    host_t host_priv, host_flavor_t flavor,
    host_info_t host_info_out,
    mach_msg_type_number_t *host_info_outCnt);

// retrieve the number of current
// physical processors
kern_return_t kr = host_info(
    mach_host.get(), HOST_BASIC_INFO,
    reinterpret_cast<host_info_t>(&hbi),
    &info_count);

kern_return_t host_info(
    host_t host, host_flavor_t flavor,
    host_info_t host_info_out,
    mach_msg_type_number_t *host_info_outCnt)
\end{lstlisting} \caption{An example of how the \lstinline*reinterpret_cast* operator is used to allow functions with pointer parameters which can point to two different data structures.} \label{lst:reint_mach} \end{figure} A second case of \lstinline*reinterpret_cast* use that we studied is presented in \prettyref{lst:reint_dawn}. This snippet is from the component Dawn, in file \emph{WireCmd\_autogen.cpp}, and is one of 13 similar cases. The file is generated from \emph{WireCmd.cpp} by the build system and contains serialisation and deserialisation functions.
The generated file is large, with 14,000 lines of code, and has a total of 200 type conversions which share the same identifier for the \emph{source} variables and likewise for the \emph{destination} variables. The \emph{source} identifier is the string \lstinline*buffer* and in most cases it is a pointer to a pointer to \lstinline+char+. In some cases the \emph{source} variables have additional type qualifiers such as \lstinline*const volatile*. The \emph{destination} variable is \lstinline*memberBuffer* and it is declared with the type \lstinline*auto*. We observed that the \emph{destination} type varies from pointers to numeric types such as \lstinline*unsigned long long* to pointers to structures and enumerations. The casts are part of assignment expressions in which \lstinline*memberBuffer* is initialised with a part of the \lstinline*buffer*. The purpose of these casts is to serialise and deserialise a variety of different structures for the component \emph{Dawn}; in other words, the methods provide the functionality to convert objects to streams of bytes and to recreate the objects when needed. Since the universe of types to be serialised is large, the developers have relied on macros to serialise and deserialise objects. The example in \prettyref{lst:reint_dawn} shows the \lstinline*buffer* being converted to the type \lstinline*DawnTextureFormat*, the target type being an enumeration. Similar to the example from Section \ref{subsec:An-example-of}, lines 2-4 iterate over the enumeration. While the use of macros is preferred for serialisation and deserialisation, given the massive number of types that need to be serialised or deserialised, macros provide little insight into the actual role of the casts. Nonetheless, the generated file can be produced from only 700 lines of macro-containing code.
The use of \lstinline*reinterpret_cast* in this case is clearly beneficial from a software reuse point of view and decreases the amount of code. On the other hand, the named cast operator is used to bypass the lack of an iterator for the enumeration type, which, if not done correctly, can be pernicious, as \lstinline*reinterpret_cast* comes with no semantic checks at all and, as discussed above, enum types may not be contiguous in the first place. \begin{figure}[t] \begin{lstlisting}[]
auto memberBuffer =
    reinterpret_cast<DawnTextureFormat*>(*buffer);
for (size_t i = 0; i < memberLength; ++i) {
    memberBuffer[i] = record.colorFormats[i];
}
\end{lstlisting} \caption{An example of \lstinline*reinterpret_cast* which is used to enable iteration over an enumeration.} \label{lst:reint_dawn} \end{figure} The code in \prettyref{lst:reint_delete} presents the use of a \lstinline*reinterpret_cast* on line 4. The snippet is collected from the component V8, in file \emph{api.cc}. The \emph{source} variable is a \lstinline+void*+ pointer with the identifier \lstinline*info*, while the \emph{destination} variable is a pointer to a shared pointer with the identifier \lstinline*bs_indirection* of type \lstinline+std::shared_ptr<i::BackingStore>*+. To understand this case, we first need to understand what the type \lstinline*BackingStore* is. In caching, a backing store is a copy of data held in memory; more specifically, in our case, a copy of an \lstinline*ArrayBuffer* {[}\citeauthor{backingData} \citeyear{backingData}{]}. The named cast operator is used to retrieve the shared pointer for the \lstinline*BackingStore* data, which will be deleted later in the same function. The \lstinline*BackingStore* pointer is a shared pointer that can be accessed from the V8 and Embedder components of Chromium, which creates a lifetime management problem when both components hold pointers to the backing store data.
The code complexity is increased since the components can resize the shared memory or transfer ownership from one component to another. The unsafe ownership model of \lstinline*BackingStore* is prone to errors, such as memory leaks and use of pointers after deletion, which eventually led to various bugs {[}\citeauthor{backingData_bug1} \citeyear{backingData_bug1}; \citeauthor{backingData_bug2} \citeyear{backingData_bug2}{]}. \begin{figure}[t] \begin{lstlisting}[]
// The backing store deleter just deletes the
// indirection, which downrefs the shared pointer.
// It will get collected normally.
void BackingStoreDeleter(... void* info) {
  std::shared_ptr<i::BackingStore>* bs_indirection =
      reinterpret_cast<
          std::shared_ptr<i::BackingStore>*>(info);
  ...
  delete bs_indirection;
}
\end{lstlisting} \caption{An example of \lstinline*reinterpret_cast* which was removed from the code.} \label{lst:reint_delete} \end{figure} The problems have been solved by refactoring the ownership model and making the \lstinline*BackingStore* own the shared pointers {[}\citeauthor{backingData} \citeyear{backingData}; \citeauthor{backingData_commit} \citeyear{backingData_commit}{]}. The previous implementation required each component to delete its shared pointer instance through the method \lstinline*BackingStoreDeleter*. The new version of the \lstinline*BackingStore* class counts the shared pointer references, and when the count reaches zero the \lstinline*BackingStore* deletes the pointer. The named cast operation, along with the function \lstinline*BackingStoreDeleter*, was removed in the new implementation {[}\citeauthor{backingData_commit} \citeyear{backingData_commit}{]}. While the named cast operation was not directly causing the bugs, it clearly added complexity to the code by requiring each component to delete its shared pointer instance, and eventually the code led to bugs.
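The indirection-deleter pattern described above can be reproduced in a minimal, self-contained sketch; the \lstinline*BackingStore* stand-in and all names here are ours, not the V8 implementation:

```cpp
#include <cassert>
#include <memory>

// A heap-allocated shared_ptr copy is smuggled through a void* callback
// argument, reinterpret_cast back, and deleted to drop one reference.
struct BackingStore { int bytes = 0; };

void backing_store_deleter(void* info) {
    auto* indirection =
        reinterpret_cast<std::shared_ptr<BackingStore>*>(info);
    delete indirection;  // "downrefs" the shared pointer
}

long demo_refcount_drop() {
    auto store = std::make_shared<BackingStore>();
    void* info = new std::shared_ptr<BackingStore>(store);  // count: 1 -> 2
    long during = store.use_count();                        // 2
    backing_store_deleter(info);                            // back to 1
    return during * 10 + store.use_count();                 // encodes both
}
```

The sketch shows why the model is fragile: nothing ties the \lstinline+void*+ to its real type, so a wrong cast or a second delete is invisible to the compiler.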
Our approach identified this case because the \emph{source} and \emph{destination} identifiers (\lstinline*info* and \lstinline*bs_indirection*) are different. We can notice that there is a semantic relation between the identifiers, since \lstinline*info* refers to the data and \lstinline*bs_indirection* refers to the backing store pointer, which is the copy of the data. Had a semantic perspective been taken into account, this case would likely not have been identified. \prettyref{lst:reint_removed} presents two versions of a macro function \emph{F} collected from the file \emph{ast-value-factory.cc} of the component AST. The first version contains a \lstinline*reinterpret_cast* on line 6. We identified this named cast because the \emph{source} and \emph{destination} expressions are very different. The \emph{source} expression is an integer literal with the value \lstinline*1*. The \emph{destination} variable is a \lstinline+void*+ pointer with the identifier \lstinline*entry->value* and it points to the value of an \lstinline+entry+ in a \lstinline+HashMap+. The function \emph{F} is used in the initialisation of \lstinline+HashMap+ objects, and each entry is initialised with the value \lstinline*1*. The second version of the macro function \emph{F}, which is a refactored version \citep{refactored_hashmap}, does not contain the named cast operation. The absence of the named cast, together with the information in the commit, tells us that the new implementation of the \lstinline+HashMap+ supports entries with empty values without causing any errors. The named cast operation in the first version was a workaround that did not properly define the behaviour when entries had no values, meaning that the code in the first version was error-prone in the case of empty values. A proper implementation shows that the named cast operation is not needed in this case. \begin{figure}[t] \begin{lstlisting}[]
// Old implementation
#define F(name, str) ...
HashMap::Entry* entry = string_table_.InsertNew(
    name##_string_, name##_string_->Hash());
entry->value = reinterpret_cast<void*>(1);

// New implementation
#define F(name, str) ...
string_table_.InsertNew(name##_string_,
    name##_string_->Hash());
\end{lstlisting} \caption{An example of \lstinline*reinterpret_cast* which was removed from the code.} \label{lst:reint_removed} \end{figure} \subsubsection{Analysis of \lstinline*const_cast* examples} There were only five cases of \lstinline*const_cast* operators in the sampled dataset. Four cases belong to the library \emph{ICU}, in two different files: \emph{tznames\_impl.cpp} and \emph{tzfmt.cpp}. For these cases, the \emph{source} identifiers are generic and partially different from the \emph{destination} identifiers. \prettyref{lst:const_nonConst_tznImpl} presents one of the four cases from the file \emph{tznames\_impl.cpp}. The \emph{source} variable is the pointer \lstinline*this*, which points to an instance of the enclosing class and has the type \lstinline+const TimeZoneNamesImpl*+. The \emph{destination} variable is a pointer called \lstinline*nonConstThis* which does not have the qualifier \lstinline*const* in its type. The chosen identifiers for \emph{source} and \emph{destination} reinforce our hypothesis that identifiers carry meaning. Here, the getters in the enclosing class need to maintain the integrity of the original object; thus, the desired values need to be extracted from a non-\lstinline*const* object derived from the pointer \lstinline*this* using a \lstinline*const_cast* operator. This is an instance where explicit casting is used judiciously, clearly indicating its purpose through meaningful identifiers.
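The \lstinline*this* to \lstinline*nonConstThis* idiom can be illustrated with a minimal class of our own (not ICU code): a \lstinline*const* accessor that must call a non-\lstinline*const*, cache-filling member:

```cpp
#include <cassert>

// Illustrative stand-in class; the name and caching logic are assumptions.
class TimeZoneCache {
public:
    int names_loaded() const {
        // Writing through the cast pointer is well-defined only because
        // the underlying object was not itself declared const.
        TimeZoneCache* nonConstThis = const_cast<TimeZoneCache*>(this);
        nonConstThis->load();
        return count_;
    }
private:
    void load() { if (count_ == 0) count_ = 12; }  // lazy fill
    int count_ = 0;
};
```

The descriptive name \lstinline*nonConstThis* does the documentation work: it tells the reader exactly which contract (logical constness) is being bent and why.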
\begin{figure}[t] \begin{lstlisting}[]
TimeZoneNamesImpl *nonConstThis = const_cast<TimeZoneNamesImpl *>(this);
\end{lstlisting} \caption{A \lstinline*const_cast* example used to obtain a non-\lstinline*const* object from the \lstinline*const* pointer \lstinline*this*.} \label{lst:const_nonConst_tznImpl} \end{figure} \prettyref{lst:const_frames} presents a second example of a \lstinline*const_cast*. This example is taken from the component \emph{Base} and belongs to the method \emph{CaptureStackTrace}, which is used to collect frames in the execution stack. It is interesting and complements the one discussed in \prettyref{lst:const_nonConst_tznImpl} because here the type qualifier \lstinline*const* is being added to a value. In this case, the type conversion is an argument of the function call \lstinline*TraceStackFramePointers*. The call in lines 1--3 returns the total number of frames on the stack. The \emph{source} identifier is \lstinline*frames*, which has the type \lstinline+void**+ and represents the pointer to the stack frames. Line 5 of \prettyref{lst:const_frames} shows the function declaration. The \emph{destination} identifier is \lstinline+out_trace+ with the type \lstinline+const void**+. Being able to inspect the stack is vital for debugging but, at the same time, the stack should be protected during debugging. The \lstinline*const_cast* is required in this case to protect the stack frames from inadvertent manipulation while the developer is inspecting the stack. Here, we see an instance where the cast is necessary but the identifier for the \emph{destination} is not descriptive enough. The advantage of our approach is that we are able to bring this to the notice of the developer, who may choose a more meaningful identifier for the \emph{destination}.
\begin{figure}[t] \begin{lstlisting}[]
size_t frame_count = base::debug::TraceStackFramePointers(
    const_cast<const void**>(frames),
    max_entries, skip_frames);

size_t TraceStackFramePointers(const void** out_trace, size_t max_depth,
                               size_t skip_initial)
\end{lstlisting}\caption{An example of how a \lstinline*const_cast* operator is used to add the \lstinline*const* qualifier to a variable.} \label{lst:const_frames} \end{figure} \subsubsection{Analysis of \lstinline*dynamic_cast* examples} Since the sampled dataset had only one instance of \lstinline*dynamic_cast*, we expanded our investigation to the entire dataset and analysed a total of 11 cases. We present two of them below. The first instance can be found in \prettyref{lst:dyn_buildtools}. It has been extracted from \emph{private\_typeinfo.cpp} and is part of the \emph{libc++abi} library. The use of the \lstinline*dynamic_cast* operator appears in variable declarations in the methods \lstinline*can_catch* and \lstinline*can_catch_nested*. These methods are used for exception handling and detect mismatches during type conversions by checking whether the result of the cast is null; on a mismatch, the methods report an exception. The \emph{source} variable, in our example, has the identifier \lstinline*__pointee*, which is of the type \lstinline+const __shim_type_info*+. The \emph{destination} variable is \lstinline*member_ptr_type*, which is a \lstinline*const* pointer to \lstinline+__pointer_to_member_type_info+, which itself is derived from the class \lstinline*__pbase_type_info*, a sub-class of \lstinline*std::type_info*, which contains information about types for variables. The names in this cast are generic, and understandably so. \emph{libc++abi} implements the Application Binary Interface for \emph{C++} and is expected to be generic to fit in with a wide spectrum of low-level transactions between the application, libraries and the operating system.
The \lstinline*dynamic_cast* operator is used in this case to check at runtime whether the \emph{destination} variable can take the \emph{source}'s type while keeping the natural language identifiers as generic as possible. \begin{figure}[t] \begin{lstlisting}[]
const __pointer_to_member_type_info* member_ptr_type =
    dynamic_cast<const __pointer_to_member_type_info*>(__pointee);
\end{lstlisting} \caption{An example of how a \lstinline*dynamic_cast* is used in the implementation of an exception handler.} \label{lst:dyn_buildtools} \end{figure} A second example of \lstinline*dynamic_cast* is presented in \prettyref{lst:dyn_fmt}. The snippet is from the file \emph{upluralrules.cpp} in the \emph{ICU} (International Components for Unicode) module. The \emph{source} variable is \lstinline*fmt* with the type \lstinline+const class icu_64::NumberFormat*+, which captures the format of the expression. The \emph{destination} variable is \lstinline*decFmt*, and it has the type \lstinline+const class icu_64::DecimalFormat*+. The \emph{destination}'s type class \lstinline*DecimalFormat* inherits from the \emph{source}'s type class \lstinline*NumberFormat* {[}\citeauthor{ICU_doc} \citeyear{ICU_doc}{]}, and this is an example of a down-cast operation which is verified at runtime. If the check fails and \lstinline*decFmt* is \lstinline*NULL*, the method continues to check for other known formats. The \emph{ICU} module handles a wide variety of data types. Even for numerics, which is the focus of our example, there are several different types that need checking: \lstinline*int32_t*, \lstinline*double* and \lstinline*FixedDecimal*. Most of these values are only available at runtime and, therefore, the developers prefer to insert explicit checks through the \lstinline*dynamic_cast* operator. The identifiers in this case reflect the type specialisation that is happening through the \lstinline*dynamic_cast* operator.
This is an example where type conversions are used judiciously, with clear objectives and with names reflecting the type conversion that is taking place. Further, the use of the \lstinline*dynamic_cast* operator makes the type conversion safe at runtime. \begin{figure}[t] \begin{lstlisting}[]
const DecimalFormat *decFmt = dynamic_cast<const DecimalFormat *>(&fmt);
\end{lstlisting} \caption{An example of \lstinline*dynamic_cast* used to perform a down-cast conversion.} \label{lst:dyn_fmt} \end{figure} \section{Introduction} Developers like flexibility while using programming language features during software development {[}\citeauthor{flexibility} \citeyear{flexibility}{]}. Type casts allow developers to work around the restrictions imposed on a specific type and use methods written for other types. While casting offers flexibility, it can lead to undefined behaviour in weakly typed languages like C/C++. For example, consider the cast operation \lstinline*a=(T)b*: the outcome of this statement is unclear unless we know what \lstinline*T* stands for and what the types of \lstinline*a* and \lstinline*b* are. If \lstinline*a* and \lstinline*b* are scalars, this could be a value conversion. If they are objects, this could be a downcast from \lstinline+b+ to create \lstinline+a+, provided \lstinline*a*'s class is derived from \lstinline*b*'s class. \lstinline*a* and \lstinline*b* could also be unrelated pointer types, in which case the set of permissible operations is so vast that compilers might struggle to identify semantic errors. \citeauthor{typesafety_study} \citeyearpar{typesafety_study} studied the safety of type casts and found that a quarter of them were guarded with type checks to ensure the validity of the casts against run-time errors. This was corroborated in a later study by \Citeauthor{casting_java_explicit} \citeyearpar{casting_java_explicit} on the classification of patterns for type casting.
A study of implicit casting in JavaScript {[}\citeauthor{js_study} \citeyear{js_study}{]} found most implicit casts to be harmless and useful, implying that developers use them judiciously. \citeauthor{casting_java_explicit} \citeyearpar{casting_java_explicit} performed a study of how developers use type casts in Java and found 26 usage patterns for type casts. Importantly, they discovered that half of the casts they inspected were not guarded locally, which could potentially cause run-time errors. Thus, there is a need to vet type casts to understand if they are being used carefully. Type casts come in two forms: \emph{implicit} and \emph{explicit}. Implicit casts, or coercions, are conversions from one type to another without explicitly specifying the new type, and they are usually limited to numeric types. Compilers have multiple checks to vet implicit casts on numerics. Even so, it is not possible to categorically enforce checks on casts for several mainstream languages with user-defined types. Therefore, languages like C++, which are permissive in how memory is used at a low level, have introduced several primitives for explicit type conversion. These primitives, which are called \emph{named casts}, come with a unique set of checks on the cast operation. They are the recommended technique for explicitly changing one type to another in C++ and have two placeholders in the primitive: a \emph{source} expression that needs to be cast and the \emph{destination} type for the cast. In this paper, we propose a lightweight approach to check if casts are used judiciously. \citeauthor{dual_channel} \citeyearpar{dual_channel} presented source code as being dual channel. One channel is the algorithmic channel, comprising the instructions understood and executed by computers. The second channel is the natural language channel, consisting of identifiers and comments that provide semantics for the instructions.
In line with the recent work that uses the meaning in identifiers in programs {[}\citeauthor{refinym} \citeyear{refinym}; \citeauthor{flexeme} \citeyear{flexeme}{]}, we propose a dual channel approach to analyse \emph{named casts}. Our assumption is that developers leave hints about their intent in the identifiers they choose and that this information can be used to check the fidelity of an explicit type conversion. In particular, we are interested in knowing if the \emph{source} expression that is being cast is related to the \emph{destination} variable to which the result of the cast is being assigned. Our main contributions are as follows: \begin{enumerate} \item We propose a lightweight approach to detect cast misuse and imprecise names for the \emph{source} and \emph{destination} used in \emph{named casts}. \item We extract the identifier information of the variables used in \emph{named casts} from the Chromium project, which is an aggregation of over 34 components with nearly 27 million lines of C++ code. \item We perform an information-theoretic analysis of the identifiers used in the \emph{named casts}. We check if identifiers in the \emph{source} expression are in accord with the identifier of the \emph{destination} variable to which the recast \emph{source} expression is assigned. \item We perform an in-depth investigation of the cases where there is a discord in the variable names. We evaluate the effectiveness of our approach through a manual evaluation on a sampled dataset. \item We identify several cases where the discord was innocuous or even essential, but also two anti-patterns where \emph{named casts} are being used without sufficient care. Two instances of these issues have already been patched in a recent release of the software. In addition, we discover two cases where the \emph{named casts} were part of highly complex code that eventually led to bugs. After the bugs were fixed, the \emph{named casts} were completely removed.
\end{enumerate} We give an overview of casting in C++, along with an example of imprecise \emph{named cast} usage and the motivation for our research, in Section \ref{subsec:Context}. We describe our methodology in Section \ref{subsec:Methodology} and the results of our evaluation in Section \ref{subsec:Evaluation}. Next, we \foreignlanguage{british}{summarise} the analysis findings based on the \emph{named cast}'s type in Section \ref{subsec:Discussion}. Section \ref{subsec:Limitations} discusses some threats to validity. Section \ref{subsec:Related-Work} presents the related work and Section \ref{subsec:Conclusion} concludes this work. \section{Methodology\label{subsec:Methodology}} \begin{figure*}[t] \includegraphics[width=1\textwidth]{pics/arh_diagram}\caption{Software architecture diagram of our tool, which extracts named casts from a C++ codebase and analyses them using information theory.} \label{fig:arh_diagram} \end{figure*} Our objective is to analyse whether natural language identifiers are indicative of the purpose of the cast. For this, we focus on assignment expressions where the right-hand side is a named cast expression and on actual-to-formal bindings in method calls where the argument to the method is a named cast expression. In both cases, the expression that is cast to a new type is referred to as the \emph{source} and the identifier to which the cast expression is bound is called the \emph{destination}. \subsection{Overview of Software Architecture} \prettyref{fig:arh_diagram} presents an overview of our tooling. We rely on a Clang plugin to traverse the abstract syntax tree (AST) of source files. Our plugin traverses every node to discover named cast expressions and then determines if the expression is part of a larger sub-tree representing an assignment operation or a method call expression. Details of this process can be found in Section \ref{subsec:Extraction-of-Named-Casts}.
From the set of named casts, we prioritise those casts where the \emph{source} and \emph{destination} are significantly different for manual investigation. Details of our prioritisation process can be found in Section \ref{subsec:Data-analysis} and the results of our manual investigation can be found in Section \ref{subsec:Qualitative-analysis}. Our corpus is generated from the Chromium project {[}\citeauthor{chrome} \citeyear{chrome}{]}. Chromium is an extensive system written in C++ and it only supports the Clang compiler for building. Chromium uses the Ninja build system and GN {[}\citeauthor{gn} \citeyear{gn}{]} as a meta-build system that generates Ninja build files. The Ninja files run the Clang compiler, for which our analysis plugin is written, on the C++ files. Therefore, we modified the meta-build system to use a local version of Clang that is compatible with our plugin. The output generated by our modified compilation phase is a JSON file containing the named cast information for every C++ file that is compiled. These named casts constitute the dataset for our analysis, which is described next. \subsection{Extraction of Named Casts\label{subsec:Extraction-of-Named-Casts}} In \prettyref{fig:ast_diagram}, we present an example of how our plugin analyses a named cast from the \emph{Net} sub-system in Chromium. After Clang parses the source file and produces an AST for the file \emph{net\_log\_util.cc}, the plugin traverses the tree and searches for named casts that are a part of either assignments or call expressions. On the left in \prettyref{fig:ast_diagram}, the syntax tree for the function call \lstinline*SetInteger* is shown. The node \lstinline*CallExpr* has a child \lstinline*CXXStaticCastExpr*, which represents the node for \lstinline*static_cast*, implying that the named cast is used as an argument for a function call. The plugin then follows the call to find the method definition.
A projection of the AST for the method definition is shown on the right in \prettyref{fig:ast_diagram}. The plugin then links the formal parameter to the actual parameter for \lstinline*SetInteger* and discovers that the \emph{source} variable is \lstinline*error* and the \emph{destination} variable is \lstinline*in_value*. All the macro names in the code are replaced with actual code at the compilation stage {[}\citeauthor{preprocessor} \citeyear{preprocessor}{]}. However, the physical location of the named casts would still point to the macro's call site. To solve this, our plugin is designed to follow macro definitions, post their expansion, to discover named casts inside macro definitions as well. For each C++ file \foreignlanguage{british}{analysed}, the Clang plugin generates a JSON file with information about named casts. Each JSON entry in the file consists of the type of named cast, i.e. \lstinline*static_cast*, \lstinline*dynamic_cast*, \lstinline*reinterpret_cast* or \lstinline*const_cast*. It additionally contains the type and the subtokens for the \emph{source} and the \emph{destination} expression. To generate the subtokens, we extract all tokens from each expression and preserve only identifier, keyword and literal tokens. These tokens are split into subtokens at the camel case and snake case separators. \begin{figure}[t] \begin{centering} \includegraphics[scale=0.6]{pics/ast_diagram} \par\end{centering} \caption{Abstract syntax tree representation for our motivating example; we selected only the nodes of interest. The left side shows the function call, \emph{SetInteger}. The right side presents the mapping between the function call and the function definition.} \label{fig:ast_diagram}\vspace{-15bp} \end{figure} \subsection{Data analysis\label{subsec:Data-analysis}} In this research, we study if the identifiers convey the reason for the use of a named cast.
We do this by comparing the source expression subtokens with the destination variable subtokens. Our comparison is based on a notion of entropy -- the amount of information in names. We find cases where the \emph{source} subtokens are significantly different from the \emph{destination} subtokens. The difference is measured using conditional entropy, which computes the number of additional bits that would be required to represent the \emph{destination} given the subtokens in the \emph{source}. While we have access to the type information, we do not use this information in the calculation of the conditional entropy. The reason for this is that, during development, and sometimes even when reading code statically, the type of a variable is not always visible to the human. Including the type in our analysis would therefore make it different from the way a human views code. Next, we show how we compute the conditional entropy of \lstinline+fooBar+ given the entropy for \lstinline+bazGoo+ in the named cast \lstinline+fooBar = static_cast<Quux*> bazGoo+. \prettyref{eq:ent} presents the standard Shannon formula for computing the entropy {[}\citeauthor{shannonEntropy} \citeyear{shannonEntropy}{]}, which is the negative sum of the probabilities multiplied by the logarithm of the probabilities. Here, $X$ represents \lstinline+bazGoo+ and the $x_{i}$ range over its subtokens \lstinline+baz+ and \lstinline+Goo+. The subtokens' probabilities have a value of $\frac{1}{2}$ since there are only two possible options. Thus, $H(bazGoo)=-(2*\frac{1}{2}*\log_{2}\frac{1}{2})=-[1*(-1)]=1$. In other words, we need only one bit to represent the two possible options for the \emph{source} subtokens. We then compute the conditional entropy as shown in \prettyref{eq:cond_ent} {[}\citeauthor{condEnt_ent} \citeyear{condEnt_ent}{]}.
The conditional entropy is the amount of information (in bits) required to express the outcome of a random variable knowing the outcome of another random variable. In \prettyref{eq:cond_ent}, $Y$ is a placeholder for the \emph{destination} subtokens \lstinline+foo+ and \lstinline+Bar+ in our example. We compute the conditional entropy of $Y$ given $X$ based on the chain rule. Thus, the conditional entropy value is the entropy value of the \emph{source}'s subtokens subtracted from the joint entropy value of both \emph{source} and \emph{destination} subtokens. In the current example, the joint entropy is computed over all the subtokens \lstinline+baz+, \lstinline+Goo+, \lstinline+foo+ and \lstinline+Bar+: $H(bazGoo,fooBar)=-(4*\frac{1}{4}*\log_{2}\frac{1}{4})=-[1*(-2)]=2$. The conditional entropy tells us how many more bits are needed to represent the additional subtokens that the \emph{destination} identifier brings, knowing the \emph{source}'s subtokens. In the example, the conditional entropy equals the difference between the joint entropy and the entropy of \lstinline+bazGoo+, and it has the value one. Thus, the \emph{destination} identifier \lstinline+fooBar+ requires one additional bit to represent the two new subtokens. \begin{equation} H(X)=-\sum\limits _{i=1}^{n}P(x_{i})*\log_{2}P(x_{i})\label{eq:ent} \end{equation} \begin{equation} H(Y|X)=H(X,Y)-H(X)\label{eq:cond_ent} \end{equation} The role of the conditional entropy value is to discover how different a \emph{destination} expression is compared to the \emph{source} expression used in a named cast. Therefore, we compare the subtokens of the \emph{destination} expression with the subtokens of the \emph{source} expression for each named cast operation we collected from Chromium.
If we were to consider the subtokens across multiple named cast cases in the conditional entropy calculation for each case, then the result would no longer be the difference between \emph{source} and \emph{destination}. The comparison would instead identify whether the \emph{destination} expression contains unique subtokens compared to the \emph{source} subtokens from all the cases. The chances that some of the \emph{destination} subtokens appear in the subtokens of a \emph{source} expression increase with the addition of multiple \emph{source} expressions to the calculation of the conditional entropy. The conditional entropy value of the \emph{destination} given the \emph{source} enables the identification of cases where the \emph{source} looks significantly different from the \emph{destination}. A low conditional entropy value implies that the \emph{source} and \emph{destination} subtokens are similar. On the other hand, a high conditional entropy value means they have few subtokens in common. If identifiers are used for different purposes, under the assumption that names are chosen carefully, their subtokens will also be different. We are interested in the cases where the conditional entropy is high. Those cases should generally point to clear instances where disparate names are used in the \emph{source} and the \emph{destination} expressions. This is indicative of the \emph{destination} variable serving a different purpose than the \emph{source} expression. We generate a list of the named casts ranked by their conditional entropy value in order to select identifiers where the expressions in \emph{source} and \emph{destination} are disparate. We performed this for all four categories of named casts: \lstinline*const_cast*, \lstinline*dynamic_cast*, \lstinline*reinterpret_cast* and \lstinline*static_cast*. Additionally, we manually sampled cases from the top quartile of the ranked list for each of these categories.
Our samples are randomly selected from the outlier dataset using the central limit theorem {[}\citeauthor{central_limit_theorem} \citeyear{central_limit_theorem}{]} with 90\% confidence. One may wonder why we did not use a simpler distance metric such as Levenshtein Distance (LD) instead of conditional entropy. LD uses three operations: insertion, deletion and substitution; the edit distance is the number of operations used to transform the input string into the output string. It is sensitive to the ordering of subtokens. Subtoken ordering is not important to us, as we only want to check whether subtokens are being reused from the \emph{source} in the \emph{destination}. Whether an identifier is called \lstinline+thrown_type+ or \lstinline+type_thrown+ is immaterial to us, but it affects the Levenshtein distance. \section{Related Work\label{subsec:Related-Work}} Research into type systems accelerated with Luca Cardelli's seminal and accessible papers on type theory {[}\citeauthor{typefulPaper} \citeyear{typefulPaper}; \citeauthor{understanding_types} \citeyear{understanding_types}; \citeauthor{types_data} \citeyear{types_data}{]}. He lucidly explained how type systems could help us write better programs with fewer bugs. Some of that research also discusses properties of types in object-oriented programming. \citeauthor{explicit_casting_research} \citeyearpar{explicit_casting_research} presented an analysis of the explicit type cast operators for C++ with details of each type of operator. \citeauthor{fastDynCastPaper} \citeyearpar{fastDynCastPaper} proposed a method to implement dynamic casts, which are an expensive operation, for systems where performance is critical. \citeauthor{dynCastPaper} \citeyearpar{dynCastPaper} demonstrated the efficiency of the Gibbs and Stroustrup implementation by using it as a baseline while also improving the performance by a factor of two.
\paragraph*{Type casting studies.} In terms of effects, a significant number of research papers study the undefined behaviour introduced by type conversions {[}\citeauthor{UB1} \citeyear{UB1}; \citeauthor{UB2} \citeyear{UB2}; \citeauthor{UB3} \citeyear{UB3}{]}. Undefined behaviour can have many causes, and some of them are due to type conversions. For instance, during the execution of a \lstinline*dynamic_cast*, the program needs to check the pointer's type. This is done by dereferencing the pointer, which can result in undefined behaviour {[}\citeauthor{UBblog} \citeyear{UBblog}; \citeauthor{UBblog2} \citeyear{UBblog2}{]}. Compilers will capture some cases of undefined behaviour, for which they generate warnings, but not all of them {[}\citeauthor{UB1} \citeyear{UB1}{]}. For this reason, developers need tools and techniques to verify their code. \citeauthor{js_study} \citeyearpar{js_study} performed an empirical study of implicit casts in JavaScript. They found that these type conversions are in general harmless and that developers use them correctly; in other words, implicit casts are safe to use most of the time. However, there is contradicting evidence that unrestrained named casts or explicit casts can have undesirable effects. Tools have been researched and developed to detect such casts. \citeauthor{detectPaper_caver} \citeyearpar{detectPaper_caver} present CAVER, a tool to identify poor practices in casting, and also discuss their security implications. The tool analyses C++ code and focuses on the unsafe uses of \lstinline*static_cast* and \lstinline*dynamic_cast*. This work provides a good background to understand how named casts can go wrong. Their tool's evaluation, much like ours, is performed on code from Chromium. \citeauthor{detectPaper_hextype} \citeyearpar{detectPaper_hextype} provide another tool, HexType, that performs well at detecting badly implemented casts.
They implemented HexType using low-overhead data structures and compiler optimisations to minimise the required resources. \citeauthor{casting_java_explicit} \citeyearpar{casting_java_explicit} provided an empirical study of type conversions for Java. The target of their research is to discover when and how developers use explicit casts. This is done through discovering and presenting 25 patterns of cast usage from real-life Java code. This paper is the closest to our work, but unlike us, it does not use any signal from the natural language identifiers to detect anti-patterns. \paragraph*{Dual-Channel Research.} \citeauthor{knuth} \citeyearpar{knuth} proposed a paradigm shift in programming, commonly known as Literate Programming, where writing code to instruct a computer is secondary to presenting it to human beings. In Literate Programming, each program contains its explanation in natural language intermixed with sections of code. Knuth presented the system WEB, a literate programming language comprising a document formatting language (TeX) and a programming language (Pascal). Literate programs contain a human-readable explanation interspersed with code, which is automatically picked up by the WEB system to produce an executable. At the same time, WEB enables the inclusion of powerful features such as pictures, equations, tables, and others in the natural language part of a Literate program. Thus, the natural language information remains in harmony with the software itself. Literate programming laid the foundation for novel research directions in Software Engineering that drew upon advances in Natural Language Processing. \citeauthor{naturalness} \citeyearpar{naturalness} proposed the \emph{naturalness hypothesis} for software, which noted that large programs can be repetitive and can be modeled with techniques that capture repetition, such as n-grams. They noted that code is analogous to natural languages in the way it tends to repeat.
Such repetitive patterns can be harvested and interpreted as statistical properties that can be used to develop better software engineering tools. They used this observation to build a statistical language model over a large corpus to improve code completion. An n-gram language model was built using token sequences, which included natural language information in the form of identifiers, from open source code. The model was used in a plugin to complete code in the Eclipse IDE, which performed better than Eclipse's completion system at the time. Source code is normally written to run on a device, but the same code is also written for the developers who maintain or improve the application. Therefore, a large part of the code semantics is embedded in the communication channels between developers, i.e. the natural language identifiers that are chosen and the comments that are written in the code. Based on this insight, \citeauthor{dual_channel} \citeyearpar{dual_channel} described two communication channels in source code: the algorithmic channel (AL) and the natural language channel (NL). The algorithmic channel comprises all the instructions written by the developers which will be executed by a computer. The natural language channel, which consists of identifiers and comments, provides information about the purpose of the code in a human-readable format. The relation between the AL and NL channels can be utilised to improve software analysis tools. \citeauthor{flexeme} \citeyearpar{flexeme} developed a tool called HEDDLE to detect and separate tangled commits into atomic concerns. HEDDLE generates a graph data structure that encodes different versions of the program and annotates the data flow edges using the natural language information from the source code. HEDDLE is faster and more accurate in the detection of tangled commits than the previous state-of-the-art.
\citeauthor{posit} \citeyearpar{posit} also developed a technique called POSIT, which adapts NLP tagging techniques to mixed code and natural language text. POSIT can generate more accurate tags for both source code tokens and natural language words than the previous state-of-the-art. \paragraph*{Dual-channel Research On Extracting Meaning From Names.} Identifier names represent the majority of tokens in source code. \citeauthor{identifiers_code_quality} \citeyearpar{identifiers_code_quality} have shown through an empirical study on Java applications that there is a direct relation between the naming quality of identifiers and source code quality. Thus, poorly named identifiers show a lack of understanding of the problem, which translates into poor quality software. The authors measured the quality of identifiers based on identifier naming guidelines and a comparison of subtokens to Java and application-specific terms. Even though the subtokens' semantic meaning is ignored in the analysis, this empirical study shows that the relation between the two channels of information is not yet fully harvested and applied in software analysis tools. \citeauthor{refinym} \citeyearpar{refinym} used dual-channel constraints to mine conceptual types from identifiers and assignment flows between them. Conceptual types are types that are latent in the program but not explicitly declared by the developer. Generally, conceptual types correspond to the actual types, but there are cases where they remain latent. For instance, a password and a username may have the same type, \emph{string}, but their conceptual types are different. If a password, which is generally a highly protected field, were declared the same way as the username, it would lead to a vulnerability. \citeauthor{deepbugs} \citeyearpar{deepbugs} developed a learning approach, called DeepBugs, for discovering bugs based on the semantic meaning of identifier names.
This approach uses embeddings, a vector representation of identifiers, which preserve the semantic similarities between identifiers. Bug detection is treated as a binary classification problem: DeepBugs trains a classifier to distinguish correct code from incorrect code. The training data consist of correct code and incorrect code generated by the authors. The bug detectors use the embeddings from the training phase to discover bugs. Three bug detectors were built based on this approach to discover accidentally swapped function arguments, incorrect binary operators, and incorrect operands in binary operations. The bug detectors achieve a high accuracy, between 89\% and 95\%, in distinguishing correct from incorrect code. They are also very efficient, requiring less than 20 milliseconds to analyse a file. False positives are inevitable in static analysis tools; nevertheless, the bug detectors achieve a 68\% true positive rate. Another approach that makes use of the semantic meaning of identifier names, called Context2Name, is presented by \citeauthor{context2name} \citeyearpar{context2name}. JavaScript code is usually deployed in a minified version in which the identifiers are replaced with short and random names. Context2Name is a deep learning-based technique that predicts identifier names for variables that have been minified. The technique generates context vectors for each identifier by inspecting five tokens before and after the identifier's occurrence. The context vectors are then summarised in embeddings, which are used by a recurrent neural network to predict natural names for the minified variables. Context2Name correctly predicts 47.5\% of all minified names and predicts a further 5.3\% of identifiers missed by state-of-the-art tools. The improvements made by dual-channel research show how much potential dual-channel information holds for software analysis.
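The intuition behind the swapped-argument detector can be shown with a deliberately simplified sketch that substitutes plain lexical similarity for DeepBugs' learned embeddings. This is our own toy heuristic, not the DeepBugs model:

```python
from difflib import SequenceMatcher

def sim(a, b):
    """Lexical similarity in [0, 1] (stand-in for embedding similarity)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def swapped_args_suspicious(params, args, margin=0.2):
    """Flag a two-argument call if each argument name matches the *other*
    parameter clearly better than the one at its own position."""
    (p1, p2), (a1, a2) = params, args
    straight = sim(p1, a1) + sim(p2, a2)
    crossed = sim(p1, a2) + sim(p2, a1)
    return crossed > straight + margin

# setSize(width, height) called as setSize(height, width):
print(swapped_args_suspicious(("width", "height"), ("height", "width")))  # True
print(swapped_args_suspicious(("width", "height"), ("w", "h")))
```

The real detector replaces `sim` with learned embedding distances, which also catch semantically related but lexically dissimilar names.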
Our study applies approaches similar to those of dual-channel research to a different problem: hints of the developer's intent are extracted from natural language information to guide the detection of anti-patterns in named casts. \section{Threats to Validity\label{subsec:Limitations}} \paragraph*{Internal threats} The results of the manual evaluation and the findings on the usage of the named cast operators are influenced by the subjective experience of the raters. We tried to \foreignlanguage{british}{minimise} this bias by using three raters with experience in C++. Each rater consulted the ISO C++ Standard to understand how the named cast operators should be used and only then provided feedback on the sample data. After each rater performed an initial evaluation, they jointly selected the interesting cases presented in Section \ref{subsec:Qualitative-analysis}. \paragraph*{External threats} Our tool may be applied to code where variable names are chosen carelessly. In an ideal world, the natural language channel provides enough context to understand the code's purpose. Our approach relies on the connection between the identifiers to detect cast misuses, and the tool performs better if the identifiers are meaningful. In a scenario where the names are chosen carelessly, our tool might identify fewer cases of cast misuse, but it will identify more cases of imprecise names. In many cases, cast misuse can be overshadowed by imprecise naming. This is overcome by initially identifying imprecise naming, essentially forming the first stage of a two-stage refactoring: clarification of intent followed by validation of intent. However, our tool will also detect some false positives owing to the nature of the approach. Developers might decide in some cases that generic or different names are appropriate for the \emph{source} and \emph{destination} identifiers.
In such cases, these casts would be flagged despite the identifiers being meaningful to the code.
\section{Introduction}\label{sec:intro} The distribution of particles transported by turbulent flows is a current research topic with implications in diverse fields, such as process technology (\cite{Pratsinis1996}), cloud formation (\cite{Bodenschatz2010}), and plankton dynamics (\cite{Schmitt2008}). In most cases, the particles have a finite size and a different density than the carrier fluid, i.e. they have inertia. These inertial particles cannot completely follow the fluid motion and distribute inhomogeneously within the turbulent flow, leading to clustering or preferential concentration (\cite{Toschi2009a}). The two relevant dimensionless parameters describing the dispersed inertial particles in the fluid are the density ratio $\beta=3\rho_f/(\rho_f+2\rho_p)$, where $\rho_f$ and $\rho_p$ are the densities of the carrier fluid and particle, respectively, and the Stokes number, St$=\tau_p/\tau_{\eta}$, where $\tau_p=a^2/(3\beta\nu)$ is the particle relaxation time, $\tau_{\eta}$ is the typical timescale of the flow, which for a turbulent flow is the Kolmogorov time scale, $a$ is the particle radius, and $\nu$ is the kinematic viscosity of the fluid. In recent years, both numerical and experimental studies have quantified the clustering of particles by employing different approaches such as statistical analysis of single-point measurements (\cite{Calzavarini2008b}), the box-counting method (\cite{Fessler1994,Aliseda2002}), pair correlation functions (\cite{Chen2006,Saw2008}), the Kaplan-Yorke dimension (\cite{Bec2006,Calzavarini2008c}), Minkowski functionals (\cite{Calzavarini2008c}), and segregation indicators (\cite{Calzavarini2008a, IJzermans2009}). It is not possible to obtain global information on bubble clustering from a single-point analysis (\cite{Calzavarini2008b}). Methods like box-counting and pair correlation functions, although useful, require the selection of an arbitrary length scale that affects the quantification of the clustering.
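The two dimensionless parameters can be evaluated directly from their definitions; a minimal sketch, taking an air micro-bubble in water as an illustrative case (the numerical values are our own assumptions, not taken from a specific data set):

```python
def density_ratio(rho_f, rho_p):
    """beta = 3 rho_f / (rho_f + 2 rho_p)."""
    return 3.0 * rho_f / (rho_f + 2.0 * rho_p)

def stokes_number(a, beta, nu, tau_eta):
    """St = tau_p / tau_eta, with tau_p = a^2 / (3 beta nu)."""
    tau_p = a**2 / (3.0 * beta * nu)
    return tau_p / tau_eta

# Air bubble in water: rho_p << rho_f, so beta approaches 3.
beta = density_ratio(rho_f=1000.0, rho_p=1.2)
print(round(beta, 2))  # ~2.99
# Micro-bubble: a = 170 um, nu = 1e-6 m^2/s, tau_eta = 80 ms.
print(round(stokes_number(170e-6, beta, 1e-6, 0.08), 2))  # ~0.04
```

With these illustrative inputs the bubble sits at a very small Stokes number, deep in the tracer-like regime.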
The Kaplan-Yorke dimension, based on the calculation of the Lyapunov exponents, quantifies the contraction of a dynamical system by considering the separation rates of particle trajectories. Nevertheless, it does not provide global morphological information. Minkowski functionals, originally used to provide complete morphological information on the large-scale distribution of galaxies (\cite{Kerscher2001}), have been applied to study the clustering of particles in turbulent flows (\cite{Calzavarini2008c}). \cite{Calzavarini2008c} found that light particles cluster in filamentary structures, whereas heavy particles have a wall-like topology around interconnected tunnels, and, as expected, no clustering was observed for neutrally buoyant tracers. In these numerical simulations and experiments, the strongest clustering was found for particles with St$\approx$O(1). The drawback of a Minkowski-type analysis is that it is numerically expensive, and it does not provide information on the Lagrangian evolution of the clusters. An alternative mathematical tool that can be used to study clustering is the Vorono\"{\i} tessellation, which has been used in astronomy to characterize the clustering of galaxies (\cite{Weygaert1989}). Recently, \cite{Monchaux2010} applied a Vorono\"{\i} analysis to quantify the clustering of heavy particles in grid-generated turbulence. This Vorono\"{\i} approach does not require the selection of an arbitrary length scale for a fixed particle number, and it can provide information on the Lagrangian statistics of clustering \cite[]{Monchaux2010}. \cite{Monchaux2010} obtained two-dimensional particle positions by imaging a turbulent flow in a wind tunnel seeded with droplets. The Vorono\"{\i} cells are defined based on the positions of the particles within the measurement domain. One can quantify the clustering by calculating the probability density function (PDF) of the normalized areas of the Vorono\"{\i} cells.
The PDF has a different shape for inertial particles compared to the corresponding PDF of randomly distributed particles. The main difference is observed at small and large values of the normalized areas, where the PDF of heavy particles has a higher probability than that of randomly distributed particles. There is a central region where there is no significant difference between the PDFs of heavy particles and randomly distributed ones. The values of the normalized areas at which the PDF deviates from that of randomly distributed particles can be used as thresholds to classify Vorono\"{\i} cells as belonging either to clusters or to voids. \cite{Monchaux2010} report a maximum preferential concentration for St around unity, in agreement with other methods that have been used to study clustering. The objective of the present work is to extend the work of \cite{Monchaux2010} to: (i) three dimensions and (ii) a much larger range of density ratios (including light, heavy, and neutrally buoyant particles) and Stokes numbers, i.e. we quantify particle clustering by applying 3D Vorono\"{\i} analysis both to numerical and experimental data sets of particles and bubbles. Moreover, we (iii) correlate the clustering behavior of different particles with local turbulent flow quantities and (iv) study the Lagrangian temporal evolution of the clusters.
\section{Experimental and Numerical Datasets and Vorono\"{\i} analysis} \label{sec:exp} \subsection{Datasets} \begin{table} \caption{Summary of the simulation and experimental parameters, where $N$ is the size of the numerical domain, $Re_{\lambda}$ is the Taylor-Reynolds number, $\eta$ and $\tau_{\eta}$ are the Kolmogorov length and time scales, respectively, and $N_{particles}$ is the number of particles in the simulations, and the time-averaged particle number in the measurement volume for the experiment.} \begin{center} \begin{tabular}{c|c|c|c|c|c|c} \hline &$N$ &$Re_{\lambda}$&$\eta$ &$\tau_{\eta}$ &$ N_{particles}$ &$St $\\ \hline Simulation A & 128 & 75 & 0.0332 & 0.1104 &$1.0\times10^3 $ &0.1 - 4.0 \\ Simulation B & 512 & 180 & 0.001 & 0.0483 & 6.4$\times10^4$ &0.1 - 4.1\\ Experiment & - & 162 & 288 $\mu m$ & 80 ms & 1.3$\times10^3$ & 0.04 $\pm$ 0.02\\ \hline \end{tabular} \end{center} \label{tab:flowCond} \end{table} The numerical scheme for a dilute suspension (neglecting particle collisions) of point particles in homogeneous and isotropic turbulence is described as follows (\cite{Maxey1983,Calzavarini2008c}): \begin{equation} \label{particle model} \frac{d\mbox{\boldmath $v$}}{dt}=\beta\frac{D}{Dt}\mbox{\boldmath$u$}(\mbox{\boldmath$x$}(t),t) -\frac{1}{\tau_p}(\mbox{\boldmath $v$}-\mbox{\boldmath $u$}(\mbox{\boldmath$x$}(t),t)) \end{equation} where $\mbox{\boldmath$v$}=d\mbox{\boldmath$x$}/dt$ is the particle velocity and $\mbox{\boldmath$u$}(\mbox{\boldmath$x$}(t),t)$ the fluid velocity field at the particle position. The dimensionless numbers used to model the particle motion are the density ratio $\beta$ and the Stokes number St. The values $\beta$ = 0, 1, and 3 correspond to very heavy particles, neutrally buoyant tracers, and bubbles in water, respectively. When St = 0, the particles perfectly follow the fluid flow, behaving as fluid tracers.
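Equation (\ref{particle model}) can be integrated numerically; a minimal sketch using forward Euler and a steady 2D cellular flow as a stand-in for the DNS velocity field (the flow, step sizes, and parameters are illustrative assumptions, not those of the simulations described here):

```python
import numpy as np

def u_field(x):
    """Toy steady 2D cellular flow (stand-in for the DNS field)."""
    return np.array([np.sin(x[0]) * np.cos(x[1]),
                     -np.cos(x[0]) * np.sin(x[1])])

def material_deriv(x, eps=1e-5):
    """Du/Dt = (u . grad) u for a steady field, via finite differences."""
    u = u_field(x)
    grad = np.column_stack([(u_field(x + eps * e) - u_field(x - eps * e)) / (2 * eps)
                            for e in np.eye(2)])  # grad[:, i] = du/dx_i
    return grad @ u

def step(x, v, beta, tau_p, dt):
    """One forward-Euler step of dv/dt = beta Du/Dt - (v - u)/tau_p."""
    a = beta * material_deriv(x) - (v - u_field(x)) / tau_p
    return x + dt * v, v + dt * a

x, v = np.array([1.0, 1.0]), np.zeros(2)
for _ in range(2000):
    x, v = step(x, v, beta=3.0, tau_p=0.1, dt=1e-3)  # a light particle
print(x)
```

With $\beta = 3$ the sketched particle drifts toward the vortical regions of the cellular flow, the same qualitative behavior discussed below for bubbles.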
As summarized in table~\ref{tab:flowCond}, we explore a parameter space of $\beta$ = 0, 1, and 3 and St ranging from 0.1 to 4.0, consisting of 24 values, at $Re_{\lambda}$ = 75 with a spatial resolution of $N =128^3$. For $Re_{\lambda}$ = 180 with $N =512^3$, we study 5 different values of St = 0.1, 0.6, 1.6, 2.6, and 4.1 \cite[from the iCFDdatabase http://cfd.cineca.it,][]{Calzavarini2008c}. The simulation of the Navier-Stokes equation is based on a 2/3 de-aliased pseudo-spectral algorithm with 2$^{nd}$ order Adams--Bashforth time-stepping (for details see \cite{Bec2006}). Simulations have been performed in a cubic box of side $L = 2\pi$ with periodic boundary conditions. The forcing adopted acts only at the largest scale; it is implemented by keeping constant the kinetic energy content in the smallest shell ($|k|$$\leq$ 1) in Fourier space. The intensity of the forcing is adjusted in such a way as to have a turbulent dissipative scale ($\eta$) of about 0.8 lattice grids in real space. Particle dynamics is evolved with time steps $O(10)$ times smaller than the smallest Stokes time, leading to an accurate resolution of the particle trajectories. Tri-linear interpolation is used to determine the value of the velocity field at the particle position. The numerical code was also validated by comparison against an independent code implementing a different temporal integration scheme, a different particle interpolation, and a different large-scale forcing (\cite{Toschi2009b}). The simulations extend over a few large-eddy-turnover times, which is enough for the particles to reach a statistically steady distribution. In the present analysis, we fix the number of particles ($N_{particles}$) for the given Reynolds numbers: 1000 particles for the simulation with the domain size of 128$^3$ at $Re_\lambda$ = 75, and 6.4$\times$10$^4$ particles for the simulation of 512$^3$ at $Re_\lambda$ = 180. The number of particles normalized by the corresponding domain volume, i.e.
the volume concentration of the particles, is identical for the two $Re_\lambda$. In one particular case, $Re_{\lambda}$ = 75 and St = 0.6, the particle number is varied from 100 to 1 $\times$ 10$^5$. We conduct experiments in the Twente Water Tunnel (TWT), an 8 m long vertical water tunnel designed for studying two-phase flows. By means of an active grid, nearly homogeneous and isotropic turbulence with $\mathrm{Re}_{\lambda}$ up to 300 can be achieved. A measurement section with dimensions 2$\times$0.45$\times$0.45 m$^3$ with three glass walls provides optical access for the three-dimensional particle tracking velocimetry (PTV) system. Micro-bubbles with a mean radius of 170 $\pm 60$ $\mathrm{\mu}$m are generated by blowing pressurized air through a ceramic porous plate that is located in the upper part of the water tunnel. These micro-bubbles are advected downwards by the flow passing through the measurement section. In our 3D-PTV micro-bubble experiments, we use a 4-camera system to obtain micro-bubble positions in the active-grid-generated turbulence in the TWT. The experimental data are collected for a duration of 6 seconds (three times the large eddy turnover time) at an acquisition rate of 1,000 fps. For the experimental data, $Re_{\lambda}$ = 162, $\beta$ = 3, and St = 0.04 $\pm$ 0.02, and the time-averaged number of particles inside the measurement volume of 70 mm$\times$70 mm$\times$70 mm is $1.3\times10^3$ (for further details, see \cite{Martinez2010, Martinez2011}). \subsection{Vorono\"{\i} analysis} The Vorono\"{\i} diagram is a spatial tessellation in which each Vorono\"{\i} cell is defined at a particle location based on the distances to the neighboring particles (\cite{SpatialTesselation}). Every point in a Vorono\"{\i} cell is closer to its particle than to any neighboring particle; the exceptions are points on the vertices, edges, and facets (see figure~\ref{3DVoronoi}).
Therefore, in regions where particles cluster, the volume of the Vorono\"{\i} cells is smaller compared to that of the cells in neighboring regions. Hence, the volume of a Vorono\"{\i} cell is inversely proportional to the local particle concentration. The PDF of the Vorono\"{\i} volumes normalized by the mean volume for randomly distributed particles can be well described by a $\Gamma$-distribution (\cite{Ferenc2007}) (see figure~\ref{betaDependency4}). In the 3D case, the $\Gamma$-distribution has the following prefactor and exponent: \begin{equation} \label{Gamma function} f(x)=\frac{3125}{24}x^4 \exp (-5x). \end{equation} Here $x$ is the Vorono\"{\i} volume normalized by the mean volume. Particles that are not randomly distributed have a PDF that deviates from this $\Gamma$-distribution, indicating preferential concentration. The Vorono\"{\i} cells of particles located near the edges of the domain are ill-defined, i.e. they either do not close or close at points outside the domain. These cells at the border of the domain are not considered in the analysis. \begin{figure} \begin{center} \includegraphics[width=70mm]{Fig1.eps} \end{center} \caption{ An example of a 3D Vorono\"{\i} tessellation. The dots represent the particle positions and the lines represent the borders of the Vorono\"{\i} cells.} \label{3DVoronoi} \end{figure} \section{Results} First, we present results on the effect of the density ratio ($\beta$) on the clustering, followed by the effect of the Stokes number (St) and the number of particles ($N_{particles}$). Then, we show how the volume of the Vorono\"{\i} cells ($\mathcal{V}$) and the enstrophy are related. Finally, we present results on the Lagrangian autocorrelations of Vorono\"{\i} volumes and enstrophy. \subsection{Density effect} Here we study the clustering behavior of particles of different $\beta$ at a fixed St for two different $Re_{\lambda}$.
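The tessellation-and-normalization procedure described above is straightforward to reproduce; a minimal sketch using SciPy, which checks that randomly placed points give a standard deviation consistent with the $\Gamma$-distribution of eq.~(\ref{Gamma function}) (our own illustration, not the analysis code used for the data sets here):

```python
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

def voronoi_volumes(points):
    """Volumes of the well-defined Voronoi cells of points in the unit box.
    Unbounded cells and cells closing outside the domain are discarded,
    mirroring the treatment of border cells in the text."""
    vor = Voronoi(points)
    vols = []
    for region_idx in vor.point_region:
        region = vor.regions[region_idx]
        if len(region) == 0 or -1 in region:
            continue  # unbounded (ill-defined) cell
        verts = vor.vertices[region]
        if (verts < 0.0).any() or (verts > 1.0).any():
            continue  # closes at points outside the domain
        vols.append(ConvexHull(verts).volume)
    return np.array(vols)

rng = np.random.default_rng(0)
vols = voronoi_volumes(rng.random((2000, 3)))
x = vols / vols.mean()
# For random particles the std of x should be close to the value implied
# by the Gamma(shape 5) law of eq. (2): 1/sqrt(5) ~ 0.45.
print(round(float(x.std()), 2))
```

For non-random particle sets the same pipeline yields the broadened PDFs discussed in the following sections.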
Figure~\ref{betaDependency4} shows the PDFs of the Vorono\"{\i} volumes ($\mathcal{V}$) normalized by their averaged volume ($\mathcal{\bar{V}}$), $\mathcal{V}$/$\mathcal{\bar{V}}$, for heavy, neutrally buoyant, and light particles at St = 0.6 for $Re_\lambda$ = 75 (Fig.~\ref{betaDependency4} (a)) and 180 (Fig.~\ref{betaDependency4} (b)). The trends in the probability density functions are clearly similar for both $Re_{\lambda}$. The PDF of neutrally buoyant particles follows the $\Gamma$-distribution eq.\ (\ref{Gamma function}) quite well, reflecting that neutrally buoyant particles do not show any preferential concentration. In contrast, the PDFs of light and heavy particles clearly behave differently from that of randomly distributed particles. We observe that the probability of finding either small or large Vorono\"{\i} volumes is higher for both light and heavy particles. These two regions of small and large volumes can be used to identify clusters and voids. The strongest clustering is observed for light particles, as the probability of finding small Vorono\"{\i} volumes is the highest. Owing to the density difference, light particles accumulate in vortex filaments due to centrifugal forces (\cite{Mazzitelli2003, Mazzitelli2004, Biferale2010}), while heavy particles concentrate in regions of intense strain (\cite{Bec2006}). Although the heavy particles show clustering, it is weaker than that of light particles. These results are consistent with the Minkowski analysis by \cite{Calzavarini2008c}. \begin{figure} \begin{center} \includegraphics[width=1\textwidth]{Fig2.eps} \end{center} \caption{ (Color online) The normalized Vorono\"{\i} volume PDFs for heavy (squares), neutrally buoyant (circles), and light particles (triangles) at St = 0.6 from DNS at (a) $Re_{\lambda} = 75$, and (b) $Re_{\lambda} = 180$.
The thick line shows the $\Gamma$-distribution eq.\ (\ref{Gamma function}) for randomly distributed particles (\cite{Ferenc2007}); the PDF of the neutrally buoyant particles agrees well with that of randomly distributed particles ($+$). Both heavy and light particles show clustering; however, light particles show the strongest clustering.} \label{betaDependency4} \end{figure} \subsection{Stokes number effect} In this section, we study the effect of St on the clustering behavior of the three types of particles. We study the clustering behavior of the particles by examining the deviations of their Vorono\"{\i} volume PDFs from the $\Gamma-$distribution. \begin{figure} \begin{center} \includegraphics[width=1\textwidth]{Fig3_2.eps} \end{center} \caption{ (Color online) The normalized Vorono\"{\i} volume PDFs for different St ranging from 0.1 to 4 in the numerics for (a) light particles $\beta$=3 at $Re_{\lambda} = 75$, (b) light particles $\beta$=3 at $Re_{\lambda} = 180$, (c) heavy particles $\beta$=0 at $Re_{\lambda} = 75$, and (d) heavy particles $\beta$=0 at $Re_{\lambda} = 180$. The stars in (b) correspond to the experimental result with St = 0.04 $\pm$ 0.02 at $Re_{\lambda} = 162$. } \label{Stokes_effect} \end{figure} Figure~\ref{Stokes_effect} shows PDFs of light ($\beta$=3) and heavy ($\beta$=0) particles for different St at $Re_\lambda$ of 75 and 180. First, we discuss the clustering of light particles, as shown in Fig.~\ref{Stokes_effect} for $Re_\lambda$ of (a) 75 and (b) 180. Both Reynolds numbers give a similar trend with increasing St. When St increases, the probability of finding clusters and voids increases up to a value of St = 1.6, after which the dependence becomes weaker for both $Re_\lambda$. We note that the experimental result, shown with stars in Fig.~\ref{Stokes_effect} (b), for micro-bubbles with St = 0.04 $\pm$ 0.02 agrees reasonably well with the trend of the numerical data for light particles.
In any case, for these small Stokes numbers, the PDF of the Vorono\"{\i} volumes is still qualitatively similar to that of tracers. Another important feature of the light particle PDF is that the highest probability occurs at the smallest volume and decreases monotonically with increasing volume for St in the range of 0.6 to 4. As studied by \cite{Calzavarini2008c}, bubbles in this range of St tend to get trapped in vortex filaments, leaving void regions. Thus, most of the bubbles are concentrated in small regions, with few bubbles outside these regions. In general, the clustering of heavy particles is weaker compared to that of light particles. For heavy particles, as shown in Fig.~\ref{Stokes_effect} (c,d), as St increases, the probability of finding clusters and voids increases up to a value of St = 1.6; beyond this the St dependence differs for the two $Re_\lambda$ and is discussed below. \begin{figure} \begin{center} \includegraphics[width=1\textwidth]{Fig4.eps} \end{center} \caption{ (Color online) The comparison between the log-normal and the $\Gamma-$distribution (Eq.~\ref{eq:fitting}) fits for the PDF of the 3D Vorono\"{\i} volumes. Open symbols represent heavy (squares), neutrally buoyant (circles), and light particles (triangles) at St = 0.6 from DNS at $Re_{\lambda} = 75$; the lines represent the (a) log-normal, and (b) $\Gamma-$distribution.} \label{PDFfit} \end{figure} \cite{Monchaux2010} found that the Vorono\"{\i} area statistics of heavy particles can be well fitted by a log-normal distribution. For comparison, figure~\ref{PDFfit} (a) shows the log-normal fit for the PDFs of the present 3D Vorono\"{\i} volumes for the three different types of particles. It is clear that the Vorono\"{\i} volume statistics cannot be characterized well by the log-normal function, not even for the neutrally buoyant particles.
However, the PDFs for all types of particles can be fitted very well by the $\Gamma-$distribution (see figure~\ref{PDFfit} (b)) with only one fitting parameter $\sigma$: \begin{equation} f(x) = \frac{1}{\sigma^{\frac{2}{\sigma^2}}\Gamma(\frac{1}{\sigma^2})}x^{\frac{1}{\sigma^2}-1}\exp\left(-\frac{x}{\sigma^2}\right) \label{eq:fitting} \end{equation} where $\sigma$ is the standard deviation of the Vorono\"{\i} volumes. Hence $\sigma$ provides a proper statistical quantification of the Vorono\"{\i} volumes. In order to quantify the clustering using a single number, we use the standard deviation $\sigma$ of the normalized Vorono\"{\i} volume distributions. In figure~\ref{normsigma2} (a), we plot $\sigma$ normalized by the standard deviation of the Vorono\"{\i} volumes for randomly distributed particles, $\sigma_{\Gamma}$. The magnitude of the indicator $\sigma/\sigma_{\Gamma}$ distinguishes the behavior of light, neutrally buoyant, and heavy particles. A higher value of the indicator reflects stronger clustering for a given $Re_\lambda$. For neutrally buoyant particles no clustering is observed, hence the indicator value is constant at 1. Heavy particles show clustering, and the indicator value saturates at St $\approx$ 1-2 at $Re_\lambda = $75. However, the indicator value continuously increases with St at the higher Reynolds number of $Re_\lambda = $180, and the absolute value of the indicator $\sigma/\sigma_{\Gamma}$ is larger for higher $Re_\lambda$. This indicates that the clustering of heavy particles is stronger at higher $Re_\lambda$ for a given St. Fig.~\ref{normsigma2} (a) shows that the absolute value of the indicator $\sigma/\sigma_{\Gamma}$ for light particles is also larger for higher $Re_\lambda$, revealing stronger clustering for light particles at higher $Re_\lambda$. The reason for this Reynolds number effect could be the changing range of length scales of the vortex filaments, which affects the clustering.
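The indicator $\sigma/\sigma_{\Gamma}$ is simple to compute once the volumes are known: for the 3D $\Gamma$-distribution of eq.~(\ref{Gamma function}), $\sigma_{\Gamma} = 1/\sqrt{5} \approx 0.45$. A minimal sketch on synthetic volume samples (illustrative draws, not the DNS volumes):

```python
import numpy as np

SIGMA_GAMMA_3D = 1.0 / np.sqrt(5.0)  # std of the 3D Gamma law, eq. (2)

def clustering_indicator(voronoi_volumes):
    """sigma / sigma_Gamma of the normalized Voronoi volumes:
    ~1 for randomly distributed particles, >1 under clustering."""
    x = np.asarray(voronoi_volumes) / np.mean(voronoi_volumes)
    return float(np.std(x) / SIGMA_GAMMA_3D)

rng = np.random.default_rng(1)
# Volumes drawn from the Gamma law itself behave like random particles.
random_like = rng.gamma(shape=5.0, scale=0.2, size=50000)
print(round(clustering_indicator(random_like), 2))  # ~1.0
# A 'clustered' population: many tiny cells plus a few large voids.
clustered = np.concatenate([rng.gamma(2.0, 0.05, 45000),
                            rng.gamma(5.0, 2.0, 5000)])
print(clustering_indicator(clustered) > 1.0)  # True
```

The broadened, bimodal-like volume population in the second sample drives the indicator well above unity, mimicking preferential concentration.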
At higher $Re_{\lambda}$, there is a wider range of clustering length scales, resulting in a Vorono\"{\i} volume distribution with a higher standard deviation. The curves corresponding to light particles show the strongest clustering, with a peak at St $\approx$ 1 $-$ 2 for both $Re_\lambda$ = 75 and 180. This clustering result is consistent in trend with that of the Kaplan-Yorke analysis (\cite{Calzavarini2008c}). We also add the data point for the standard deviation of the experimental Vorono\"{\i} volume PDF, as shown in figure~\ref{normsigma2}. Although the mean value of the indicator $\sigma/\sigma_{\Gamma}$ for the experimental data is higher than those from the numerical simulations of light point particles, there is good agreement with the numerical trend within the experimental error bar. More experimental data at larger Stokes numbers, i.e., larger bubbles, have to be taken to come to a final conclusion on this issue. \begin{figure} \begin{center} \includegraphics[width=1\textwidth]{Fig5.eps} \end{center} \caption{ (Color online) (a) Normalized standard deviation $\sigma/\sigma_{\Gamma}$ (indicator) of the Vorono\"{\i} volume distributions versus St for the two different $Re_{\lambda}$ from DNS data. The symbols correspond to heavy (squares), neutrally buoyant (circles), and light particles (triangles). Open and filled symbols represent data at $Re_{\lambda} = 75$ and $Re_{\lambda} = 180$, respectively. The value of the indicator for neutrally buoyant particles remains constant at 1, i.e. clustering is not observed, whereas light particles show the most clustering, with a peak at St $\approx$ 1.5 for both $Re_{\lambda}$. The experimental result for micro-bubbles is plotted with the star. (b) An enlarged plot showing only the results for light particles.} \label{normsigma2} \end{figure} \subsection{Effects of the number of particles} In principle, one can expect different behaviors depending on the number density of particles.
In simulation dataset A, there are 10$^5$ particles available for one special case of $Re_\lambda$ = 75 and St = 0.6. Using this snapshot, we study the effects of the particle number on the value of the clustering indicator $\sigma/\sigma_{\Gamma}$. We subsample data from this snapshot by selecting the required number of particles and computing the Vorono\"{\i} statistics. This subsampling procedure is randomized and carried out at least 100 times for each value of the particle number. Figure~\ref{fig:normsigma_space} shows the effect of varying the number of particles on the clustering indicator $\sigma/\sigma_{\Gamma}$; the error bars represent the standard deviation over all the subsamples for a given number of particles. In the present data set, the mean inter-particle distances are: 34.47$\eta$ for $N_{particles}$=10$^2$, 16$\eta$ for $N_{particles}$=10$^3$, and 3.44$\eta$ for $N_{particles}$=10$^5$, which are all above 1$\eta$. Hence, we are always studying situations where the mean particle distances are in the inertial range. As shown in figure~\ref{fig:normsigma_space}, for light and heavy particles the value of the indicator increases as the number of particles is increased. The increase of the indicator $\sigma/\sigma_{\Gamma}$ becomes steeper with increasing number of particles, and there seems to be no plateau region where the indicator value saturates. We do not understand the exact reason for this particle number dependence. One possible reason could be that the clusters have a complicated structure \cite[]{Calzavarini2008c}. However, for a given number of particles, the indicator does show a consistent trend: stronger clustering for light particles, weaker clustering for heavy particles, and no clustering for neutrally buoyant particles. Moreover, the error of the indicator calculated at $N_{particles}$ = 1000 is less than 4 $\%$.
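The randomized subsampling procedure can be sketched as follows. To keep the example cheap and self-contained, a mean nearest-neighbour distance is used as a stand-in clustering indicator (an illustrative substitute, not the Vorono\"{\i} indicator used in the text):

```python
import numpy as np

def subsample_indicator(positions, n_sub, n_repeat, indicator, rng):
    """Randomized subsampling: draw n_sub particles, evaluate a clustering
    indicator, and return the mean and std over the repeats (error bar)."""
    vals = [indicator(positions[rng.choice(len(positions), n_sub, replace=False)])
            for _ in range(n_repeat)]
    vals = np.array(vals)
    return vals.mean(), vals.std()

def nn_indicator(pts):
    """Stand-in indicator: mean nearest-neighbour distance divided by the
    Poisson expectation ~0.554 n^(-1/3) in a unit box; ~1 for random points."""
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1).mean() / (0.554 * len(pts) ** (-1.0 / 3.0))

rng = np.random.default_rng(2)
mean, err = subsample_indicator(rng.random((2000, 3)), 200, 100, nn_indicator, rng)
print(round(float(mean), 2), float(err))
```

For random points the subsampled indicator stays near unity with a small spread, illustrating how the error bars in figure~\ref{fig:normsigma_space} are obtained.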
Therefore, at a fixed number of particles, the clustering indicator $\sigma/\sigma_{\Gamma}$ of the Vorono\"{\i} volumes is robust. In the analysis that follows, we use the data of $N_{particles}$ = 1000 for the simulation with the domain size of $N$ = 128$^3$ at $Re_\lambda$ = 75 (simulation A). \begin{figure} \begin{center} \includegraphics[width=0.7\textwidth]{NumOfParErrorBar.eps} \end{center} \caption{ (Color online) Normalized standard deviation of the Vorono\"{\i} volume distributions as a function of the number of particles, taken from a snapshot at $Re_{\lambda} = 75$. The dashed line shows the number of particles used in the present work at $Re_{\lambda}$ = 75 and St = 0.6 (simulation A).} \label{fig:normsigma_space} \end{figure} \subsection{Relation between the volume of the Vorono\"{\i} cell and enstrophy} We relate the Vorono\"{\i} volumes of the three different types of particles to turbulent flow quantities. A natural quantity for this comparison is the enstrophy $\Omega={\omega}^2/2$ (where $\omega$ is the vorticity). \cite{Benzi2009} have shown that different types of particles react sensitively to the local enstrophy at the particle position, reflecting their tendency to stay in regions with different vorticity contents. We thus calculate the joint PDF of Vorono\"{\i} volumes and enstrophy for the three types of particles at a fixed St = 0.6 for $Re_\lambda =$ 75. For comparison, we also calculate the joint PDF for the case of neutrally buoyant particles with the smallest St available in the simulations (St = 0.1 and $\beta$ = 1). The statistical behavior of these particles is expected to be close to that of ideal fluid tracers (St = 0 and $\beta$ = 1). From now on, we refer to this case as the fluid tracer case. The Vorono\"{\i} volume and the enstrophy are normalized by the mean values ($\mathcal{V}_{tr}$ and $\Omega_{tr}$) of the fluid tracers.
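The joint PDF and the coordinates of its peak can be computed with a 2D histogram on logarithmic bins; a minimal sketch on synthetic lognormal samples mimicking the light-particle case (illustrative draws, not DNS data):

```python
import numpy as np

def joint_pdf_peak(vol_norm, enst_norm, bins=40):
    """Joint PDF of normalized enstrophy and Voronoi volume on logarithmic
    bins; returns the (enstrophy, volume) coordinates of the maximum."""
    eb = np.logspace(np.log10(enst_norm.min()), np.log10(enst_norm.max()), bins)
    vb = np.logspace(np.log10(vol_norm.min()), np.log10(vol_norm.max()), bins)
    H, eedges, vedges = np.histogram2d(enst_norm, vol_norm, bins=[eb, vb],
                                       density=True)
    i, j = np.unravel_index(H.argmax(), H.shape)
    return (np.sqrt(eedges[i] * eedges[i + 1]),   # geometric bin centers
            np.sqrt(vedges[j] * vedges[j + 1]))

# Synthetic 'light particle' sample: small Voronoi volumes and high enstrophy.
rng = np.random.default_rng(3)
enst = rng.lognormal(mean=1.0, sigma=0.5, size=20000)
vol = rng.lognormal(mean=-1.0, sigma=0.45, size=20000)
e_peak, v_peak = joint_pdf_peak(vol, enst)
print(e_peak > 1.0, v_peak < 1.0)  # peak at high enstrophy, small volume
```

Applied to the actual particle data, the same peak extraction yields the crosses drawn in the joint-PDF figure below.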
Figure~\ref{JPDFEnstVoro} shows the joint PDFs of the normalized Vorono\"{\i} volume ($\mathcal{V}/\mathcal{V}_{tr}$) and the normalized enstrophy ($\Omega/\Omega_{tr}$) for the different types of particles. The joint PDF for neutrally buoyant particles at St = 0.6, shown in figure~\ref{JPDFEnstVoro} (c), is very similar to that of fluid tracers shown in figure~\ref{JPDFEnstVoro} (a). We observe a clear difference in the joint PDF for heavy and light particles, as shown in figure~\ref{JPDFEnstVoro} (b, d). The coordinates corresponding to the peak of the joint PDF ($(\Omega/\Omega_{tr})_{jPDF}^{max}$, $(\mathcal{V}^p/\mathcal{V}_{tr})_{jPDF}^{max}$) are indicated by the crosses in the figure for each case. Compared to the tracer case, a slightly lower $(\mathcal{V}^p/\mathcal{V}_{tr})_{jPDF}^{max}$ and a lower $(\Omega/\Omega_{tr})_{jPDF}^{max}$ for heavy particles indicate more clustering in low enstrophy regions. The maximum of the joint PDF for the light particles is located in the region with a much higher enstrophy and a smaller Vorono\"{\i} volume. This shows that the light particles cluster strongly in high enstrophy regions. \begin{figure} \begin{center} \includegraphics[width=1\textwidth]{Fig7.eps} \end{center} \caption{ (Color online) Joint PDFs of normalized Vorono\"{\i} volumes and enstrophy for tracers and particles at St = 0.6 for $Re_\lambda =$ 75: (a) fluid tracers, (b) heavy particles, (c) neutrally buoyant particles, and (d) light particles. The cross indicates the location of the maximum probability (peak) for each case.} \label{JPDFEnstVoro} \end{figure} \begin{figure} \begin{center} \includegraphics[width=1\textwidth]{Fig8.eps} \end{center} \caption{ (Color online) The coordinates of the peak of the joint PDFs of normalized Vorono\"{\i} volumes and enstrophy as a function of St: (a) $\mathcal{V}^p/\mathcal{V}_{tr}$ versus St, (b) $\Omega^p/\Omega_{tr}$ versus St.
} \label{peak_st} \end{figure} The St dependence of the peak coordinates of the joint PDF ($(\Omega/\Omega_{tr})_{jPDF}^{max}$, $(\mathcal{V}^p/\mathcal{V}_{tr})_{jPDF}^{max}$) is plotted in figure~\ref{peak_st}. As shown in figure~\ref{peak_st} (a), the value of $(\mathcal{V}^p/\mathcal{V}_{tr})_{jPDF}^{max}$ for neutrally buoyant particles is nearly the same as that of the tracers for St from 0.1 to 4. The value of $(\mathcal{V}^p/\mathcal{V}_{tr})_{jPDF}^{max}$ for heavy particles is slightly smaller than unity for all St, indicating clustering. Figure~\ref{peak_st} (a) also shows that the clustering of light particles is stronger, as evidenced by the much smaller $(\mathcal{V}^p/\mathcal{V}_{tr})_{jPDF}^{max}$ compared to those of neutrally buoyant and heavy particles at all St. The minimum value of $(\mathcal{V}^p/\mathcal{V}_{tr})_{jPDF}^{max}$, indicating the strongest clustering for the light particles, is located at St = 1 $-$ 2, which is in excellent agreement with the results obtained using the indicator $\sigma/\sigma_{\Gamma}$ (figure~\ref{normsigma2}). The corresponding enstrophy at the peak ($(\Omega/\Omega_{tr})_{jPDF}^{max}$) of the joint PDF versus St for the different particles is shown in figure~\ref{peak_st} (b). The value of $(\Omega/\Omega_{tr})_{jPDF}^{max}$ for the heavy particles is smaller than unity, and it is much larger than unity for the light particles. This reflects the clustering of light particles in flow regions with very high enstrophy, whereas heavy particles cluster in low enstrophy regions for all St in the present study. \subsection{Vorono\"{\i} Lagrangian autocorrelation} Finally, we conduct a Lagrangian analysis of the Vorono\"{\i} volumes. For each type of particle we calculate the Lagrangian autocorrelation of its associated Vorono\"{\i} volume. Figure~\ref{AutoCorr} (a) shows a typical temporal evolution of Vorono\"{\i} volumes for the three types of particles at St = 0.6 and $Re_\lambda = $ 75.
To compare the behavior of the three different particle types, we choose particles with similar Vorono\"{\i} volumes at the starting time and trace their time evolution. While the Vorono\"{\i} volumes of heavy and neutrally buoyant particles change frequently in time, light particles clearly tend to retain small values for longer times. This suggests that light particles are trapped in clustered regions for a long time and are then suddenly ejected, as seen in figure~\ref{AutoCorr} (a) around $\tau/\tau_{\eta} \approx$ 95. Figure~\ref{AutoCorr} (b) shows the autocorrelation function $C_{V}(\tau)$ for heavy, neutrally buoyant, and light particles at a fixed St = 0.6 and $Re_\lambda = $ 75. We define the decorrelation time $\tau_{V}$ as the time at which the autocorrelation function has decreased to 1/2, i.e., $C_{V}(\tau_V) = 1/2$. As shown in figure~\ref{AutoCorr} (b), the decorrelation time for light particles is around $\tau_{V} \sim 7 \tau_{\eta}$, whereas for heavy and neutrally buoyant particles decorrelation already occurs around $4 \tau_{\eta}$. Thus the clustering of light particles lasts longer than that of heavy and neutrally buoyant particles. As shown by \cite{Calzavarini2008c}, light particles accumulate in filamentary structures, while heavy ones tend to cluster outside these structures and form wall-like interconnected tunnels. These differences in the morphology of the clustered particles could be a possible reason why the light particles remain clustered longer than the heavy particles. We also compare the autocorrelation time scale of the Vorono\"{\i} volumes to that of the enstrophy, shown in figure~\ref{AutoCorr}(c) for the same St and $Re_\lambda$. First, as expected, for neutrally buoyant particles, the Lagrangian decorrelation time of the Vorono\"{\i} volumes is comparable to that of the enstrophy ($\tau_{\Omega}$), i.e.
$\tau_{\Omega} \sim \tau_{V} \sim 4\tau_{\eta}$, because the neutrally buoyant particles do not cluster. However, remarkably, for light particles the decorrelation time of the Vorono\"{\i} volumes is much larger, $\tau_{V} \sim 7 \tau_{\eta}$, i.e., more than twice the autocorrelation time scale $\tau_{\Omega} \sim 3 \tau_{\eta}$ of the enstrophy itself. For heavy particles, the Lagrangian decorrelation time of the Vorono\"{\i} volumes is around $\tau_{V} \sim 4 \tau_{\eta}$, which is also about twice that of the enstrophy, $\tau_{\Omega} \sim 2 \tau_{\eta}$. We also study the St dependence of the decorrelation time scales of the Vorono\"{\i} volume ($\tau_{V}$) and the enstrophy ($\tau_{\Omega}$) at $Re_\lambda =$ 75 for heavy, neutrally buoyant, and light particles, as shown in figure~\ref{AutoCorr_St} (a). We observe that $\tau_V$ for light particles is always larger than for heavy and neutrally buoyant particles in the St range 0.1 to 4, with a peak around St unity. This suggests that the light particles cluster for a longer time in the range of studied St. It is well known that flow regions of high enstrophy trap bubbles and that regions with intense strain accumulate heavy particles. Figure~\ref{AutoCorr_St}(a) also shows that $\tau_{V}$ for both light and heavy particles is much larger than their decorrelation time of enstrophy $\tau_{\Omega}$ for all St from 0.1 to 4. This is more clearly seen in figure~\ref{AutoCorr_St}(b), where the ratio $\tau_{V}/\tau_{\Omega}$ for both light and heavy particles is greater than unity for all St, while this ratio is always close to unity for the neutrally buoyant particles. Remarkably, this means that the life-times of the clustered bubbles and heavy particles are much longer than the life-time of the trapping flow structures themselves.
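The decorrelation time defined above, $C_{V}(\tau_V) = 1/2$, is straightforward to estimate from any Lagrangian time series. The sketch below (an illustrative surrogate signal with a known correlation time, not the actual Vorono\"{\i} data) computes the normalized autocorrelation and its half-decay time:

```python
import numpy as np

def autocorr(x):
    """Normalized temporal autocorrelation C(tau) of a signal x(t)."""
    x = x - x.mean()
    c = np.correlate(x, x, mode='full')[x.size - 1:]
    return c / c[0]

def decorrelation_time(x, dt):
    """Smallest lag at which C(tau) drops below 1/2, cf. C(tau_V) = 1/2."""
    return np.argmax(autocorr(x) < 0.5) * dt

# surrogate signal: AR(1) process with correlation time tau_c
rng = np.random.default_rng(1)
tau_c, dt, n = 10.0, 1.0, 100_000
a = np.exp(-dt / tau_c)
x = np.zeros(n)
for i in range(1, n):
    x[i] = a * x[i - 1] + rng.standard_normal()
t_half = decorrelation_time(x, dt)   # expected near tau_c * ln 2
```

For an exponentially correlated signal the half-decay time is $\tau_c \ln 2$, so the estimator can be validated before applying it to the Vorono\"{\i} volume series.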
The interpretation is that clustered particles are constrained in different regions of the flow and, due to their inertia, need time to reorganize themselves after sudden changes in flow conditions. However, neutrally buoyant particles do not have this constraint and are distributed more evenly at any given time in the flow. Figure~\ref{AutoCorr_St}(b) shows that the ratio $\tau_{V}/\tau_{\Omega}$ for light particles has a weakly decreasing trend at St larger than unity. The ratio $\tau_{V}/\tau_{\Omega}$ for heavy particles monotonically increases with increasing St, and it is larger than for light particles at St $>$ 0.5. \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth]{Fig9.eps} \end{center} \caption{ (Color online) Lagrangian Vorono\"{\i} analysis for heavy (squares), neutrally buoyant (circles), and light particles (triangles) at St = 0.6 and $Re_{\lambda} = 75$. (a) Temporal evolution of Vorono\"{\i} volumes. (b) Temporal autocorrelation functions of Vorono\"{\i} volumes. (c) Temporal autocorrelation functions of enstrophy.} \label{AutoCorr} \end{figure} \begin{figure} \begin{center} \includegraphics[width=1\textwidth]{Fig10.eps} \end{center} \caption{ (Color online) (a) Decorrelation time of Vorono\"{\i} volume ($\tau_V$) and enstrophy ($\tau_{\Omega}$) as a function of St at $Re_{\lambda} = 75$ for heavy (squares), neutrally buoyant (circles), and light particles (triangles). Open and filled symbols represent the decorrelation times of Vorono\"{\i} volume and enstrophy, respectively. (b) The ratio of decorrelation times ($\tau_V$ / $\tau_{\Omega}$) as a function of St.} \label{AutoCorr_St} \end{figure} \section{Conclusion} \label{sec:con} We use three-dimensional Vorono\"{\i} analysis to study particle clustering in homogeneous isotropic turbulence with both numerical data in the point-particle limit and one experimental data set.
The analysis is applied to inertial particles (light, neutrally buoyant, and heavy) of different density ratios $\beta$, St ranging from 0.1 to 4, and two different Taylor-Reynolds numbers ($Re_\lambda$ = 75 and 180). In the entire range of parameters covered, the Vorono\"{\i} volume PDFs of neutrally buoyant particles agree well with the $\Gamma$-distribution for randomly distributed particles. At a fixed value of St, the PDFs of Vorono\"{\i} volumes of light and heavy particles show a higher probability of both small and large Vorono\"{\i} volumes than randomly distributed particles, reflecting the clustering behavior. The standard deviation of normalized Vorono\"{\i} volumes $\sigma/\sigma_{\Gamma}$ is used as an indicator to quantify the clustering. Heavy particles show some clustering, and light particles show much stronger clustering. Both heavy and light particles show stronger clustering at higher $Re_\lambda$. The maximum clustering for light particles occurs around St $\approx$ 1-2 for both Taylor-Reynolds numbers, and this maximum clustering range is consistent with the trend of the Kaplan-Yorke analysis. We check the effect of the number of particles on the value of the indicator and find that the clustering trend is robust for a given number of particles. For one (small) Stokes number, St = 0.04 $\pm$ 0.02, we have also extracted the 3D Vorono\"{\i} volume PDF from experimental data. Though the PDF fits into the general trend -- at these small Stokes numbers the PDF nearly follows a $\Gamma$-distribution -- a quantitative analysis shows that the experimental PDF of 3D Vorono\"{\i} volumes is slightly broader than that obtained from point-particle simulations. More experiments with larger Stokes numbers will have to be done to judge whether this is a limitation of the point-particle approach, a consequence of the neglect of two-way and four-way coupling in the numerics, or whether the experimental data are not precise enough.
From our point of view, the Vorono\"{\i} analysis is an excellent means to quantitatively compare clustering effects of particles in experimental and numerical data sets. Finally, we show that the Vorono\"{\i} analysis can be connected to local flow properties such as the enstrophy. By comparing the joint PDFs of enstrophy and Vorono\"{\i} volumes and their Lagrangian autocorrelations, the clustering behavior of heavy, neutrally buoyant, and light particles can further be distinguished. It is found that the light particles strongly cluster in flow regions with very high enstrophy, whereas heavy particles weakly cluster in low enstrophy regions for all St in the present study. From the Lagrangian autocorrelation of Vorono\"{\i} volumes we conclude that the clustering of light particles lasts much longer than that of heavy or neutrally buoyant particles. Moreover, owing to inertial effects arising from the density difference with the carrying fluid, light and heavy particles remain clustered for a much longer time than the flow structures themselves. \section*{Acknowledgments} We would like to acknowledge the support from COST Action MP0806: \emph{Particles in turbulence}. We acknowledge Mickael Bourgoin, Romain Monchaux, Sander G. Huisman, Ceyda Sanli, Devaraj van der Meer, and Federico Toschi for useful discussions. J.M.M. acknowledges the Foundation for Fundamental Research on Matter (FOM) for funding within the Industrial Partnership Programme: \emph{Fundamentals of heterogeneous bubbly flows}.
\section{Introduction} Constructing a quantum network where states can be transferred coherently between two nodes is a task of fundamental importance towards the realization of an efficient platform for quantum information processing \cite{divincenzo2000physical}. Over the past two decades, great effort has been made to obtain the optimal protocol for state transfer in the simplest geometry, the one-dimensional quantum chain. The most common model describing a quantum chain is the one-dimensional spin-$1/2$ chain. The Hamiltonian used is quite generic and the results can be applied to a variety of physical systems such as evanescently coupled waveguides \cite{bellec2012faithful,perez2013coherent,chapman2016experimental}, acoustic cavities \cite{shen2019one}, diamond vacancies \cite{yao2011robust}, superconducting circuits \cite{tsomokos2010using,mei2018robust}, arrays of quantum dots \cite{petrosyan2006coherent}, driven optical lattices \cite{chen2011controlling}, NMR \cite{cappellaro2007simulations} and nanoelectromechanical networks \cite{tian2020perfect}. Depending on whether the parameters of the system vary in time or not, quantum state transfer (QST) protocols can be divided into two classes, time-independent and time-dependent. In the former, the parameters are initially engineered in a suitable manner and the transfer takes place as the system evolves ``freely''. The protocols in this approach usually rely either on the seminal work initiated by Bose \cite{bose2003quantum} and later extended in \cite{christandl2004perfect,nikolopoulos2004coherent}, or on works where the states are transferred via Rabi-like oscillation schemes \cite{wojcik2005unmodulated,oh2011heisenberg,giorgi2013quantum}. On the contrary, in time-dependent protocols, the system's parameters are controlled during the dynamical evolution.
The most intuitive protocol in this case is to apply a sequence of swap operations between adjacent sites and gradually move the state along the chain; other representative protocols are introduced in \cite{balachandran2008adiabatic,schirmer2009fast,burgarth2010scalable,korzekwa2014quantum}. Besides the feasibility of the experimental implementation and the scalability, there are two major factors that determine the efficiency of a QST protocol: how much time the transfer takes and how faithfully the state is transferred in the presence or absence of decoherence and static imperfections. The quantum speed limit for transferring a state along a spin chain has been studied for various protocols \cite{deffner2017quantum,yung2006quantum,ashhab2012speed,caneva2009optimal,zhang2018automatic}. On the other hand, many works \cite{de2005perfect,kay2006perfect,petrosyan2010state,bruderer2012exploiting} have examined the role of different sources of decoherence in QST protocols and proposed schemes \cite{burgarth2005conclusive,balachandran2008adiabatic,huang2018quantum,agundez2017superadiabatic,allcock2009quantum} to circumvent their impact. In most cases there is a trade-off between speed and robustness, as increasing one results in the decrease of the other and vice versa. A very promising direction towards the realization of an efficient platform able to perform fault-tolerant quantum computation comes from the flourishing field of topological states of matter \cite{sarma2006topological}. One of the most appealing properties of topological systems is that they host edge states which, due to their topological protection, are robust to different sources of quantum decoherence.
Recent studies \cite{lang2017topological,estarellas2017topologically,mei2018robust,boross2019poor,longhi2019topological,longhi2019landau} have employed 1-D topological systems, such as the Kitaev chain \cite{kitaev2001unpaired} and the SSH model \cite{asboth2016short}, to act as a platform for realizing QST protocols. In this work, in the same spirit and aiming to balance the trade-off between the various factors that determine the efficiency of QST protocols, we propose a fast and robust protocol for transferring an excitation along an SSH chain. We consider a protocol where the exchange interaction between adjacent sites can vary with time. In order to reveal the crucial characteristics that favor our proposal we compare it with two other QST protocols, one \cite{mei2018robust} that employs a topologically non-trivial chain and one employing a topologically trivial chain. In the former the underlying chain is adiabatically driven, while in the latter resonant processes are at work. The rest of the paper is organized as follows. In Sec. \ref{Protocols} we present the Hamiltonian of the system together with the corresponding protocols for both topological and topologically trivial quantum channels. In Sec. \ref{crucial} we identify the crucial characteristics the driving function needs to possess in order to speed up QST in a topological quantum channel. In Sec. \ref{speed} we present numerical evidence supporting our claims. In Sec. \ref{Disorder} we analyze the impact of on- and off-diagonal static noise. Finally, in Sec. \ref{Conclusions} we conclude. \section{QST Protocols} \label{Protocols} We start by considering a paradigmatic model describing a spin-$1/2$ chain acting as a data bus for transferring a quantum state. The Hamiltonian describes $N$ spins, where nearest neighbors are coupled with an XX Heisenberg exchange interaction of strength $J_{i}$ and a local magnetic field $B_{i}$ is applied at each spin.
When we restrict ourselves to the one-excitation subspace, where all spins point down but one, the Hamiltonian reads: \begin{equation} \label{eq:1} \mathcal{H}=\sum_{i=1}^{N-1} J_{i} ( \ket{i} \bra{i+1} + h.c.) + \sum_{i=1}^{N} B_{i} \ket{i} \bra{i} \end{equation} where $J_{i}, B_{i}\in \mathbb{R}$, $J_{i}\geq 0$ and $\ket{i}$ denotes that the $i$-th site of the chain is excited (i.e. in Fock space notation $\ket{0_{1} 0_{2} ... 1_{i} ...0_{N} }$). The aim of the protocols we consider is to transfer a single-site excitation from the first $\ket{1}$ to the last $\ket{N}$ site of the chain by properly controlling the couplings $J_{i}$ during the dynamical evolution. The quantity that determines how faithfully the transfer has occurred is the fidelity, which in our case can be defined as: \begin{equation} \label{eq:2} \mathcal{F}=\abs{\bra{N}\ket{N(t^{*})}}^2 \end{equation} where $\ket{N(t^{*})}$ denotes the amplitude of the $N$-th site, obtained by numerically solving the time-dependent Schr\"odinger equation, and $t^{*}$ corresponds to the transfer time. We note here that the fidelity for transferring a generic initial state of the form $\psi_{init}=\cos(\theta/2) \ket{0}+e^{i\phi} \sin(\theta/2) \ket{1}$ from the first to the last site of the chain is given by $F=\frac{1}{3}+\frac{1}{6}(1+\mathcal{F})^{2}$ \cite{bose2003quantum}. However, since $F$ is simply a function of $\mathcal{F}$, throughout this paper, unless explicitly stated otherwise, whenever we refer to fidelity we will consider the quantity of Eq. \ref{eq:2}. In all protocols we will present, the energy scale is determined by the maximum value that the couplings acquire during the dynamical evolution. Thus, without loss of generality we set $J_{max}=1$. Time will be given in units of $1/J_{max}$ and energy in units of $J_{max}$.
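The single-excitation Hamiltonian of Eq. \ref{eq:1} is simply a tridiagonal matrix, and the fidelity of Eq. \ref{eq:2} follows from the time-evolved state. A minimal sketch (static couplings for illustration, $\hbar = 1$, scipy's `expm` as the propagator):

```python
import numpy as np
from scipy.linalg import expm

def hamiltonian(J, B):
    """Single-excitation Hamiltonian of Eq. (1): couplings J on the
    off-diagonals, local fields B on the diagonal."""
    return np.diag(np.asarray(B, dtype=float)) + np.diag(J, 1) + np.diag(J, -1)

N = 5
H = hamiltonian(np.ones(N - 1), np.zeros(N))   # uniform chain, zero field
psi = np.zeros(N, dtype=complex)
psi[0] = 1.0                                   # excitation on the first site
psi = expm(-1j * H * 2.0) @ psi                # evolve for t = 2 (units of 1/J_max)
F = abs(psi[-1])**2                            # fidelity of Eq. (2)
```

For a time-dependent drive the same construction is applied step by step, rebuilding `H` from the instantaneous couplings.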
We now present two different cases for realizing an efficient time-driven quantum channel, one where the underlying undriven chain has topological characteristics and another where the chain is topologically trivial. \subsection{Topological chain} \FloatBarrier The SSH model is the simplest topologically non-trivial system in 1-D and can be obtained by suitably modifying the Hamiltonian of Eq. \ref{eq:1}. To do so, the chain has to be dimerized; that is, we set $J_{i}=J_{odd}$ for $i \in odd$ and $J_{i}=J_{even}$ for $i \in even$. In addition, the magnetic field needs to be constant; we will assume that $B_{i}=0$ $\forall i$. For even-sized chains the topological phase arises when $J_{odd} > J_{even}$ and two edge modes appear at the two ends of the chain. The energies corresponding to the two eigenmodes lie close to $E=0$ (one above and one below) and are separated from the rest of the modes by a finite energy gap. The size of the gap is related to the ratio between $J_{even}$ and $J_{odd}$ \cite{asboth2016short}. On the contrary, for odd-sized chains, there is always one edge mode that is localized on the end corresponding to the weaker coupling. Namely, when $J_{odd} < J_{even}$ ($J_{odd} >J_{even}$) the mode is localized near the first (last) site of the chain. In this case the energy of the mode is exactly zero. For odd-sized chains the eigenmode energies (besides the zero-energy solution) are given by the following expression: \begin{equation} \label{eq:pairs} \epsilon_{j}=\pm \abs{J_{odd}+J_{even} e^{i q_{j}}}, \end{equation} where $q_{j}=2 j \pi / (N+1)$ and $j=1, 2,..., [N/2]$ ($j$ counts the number of $\pm$ pairs and $[x]$ gives the greatest integer that is less than or equal to $x$) \cite{coutant2020robustness}.
Thus, for an odd-sized chain of length $N$ the energy gap can be analytically determined and is given by: \begin{equation} \label{eq:en} g=2 \abs{\epsilon_{[N/2]}} \end{equation} \begin{figure} \center \includegraphics[width=0.4 \textwidth]{p1.pdf}\caption{\label{fig:1a} A schematic of different time instants during the dynamical evolution of the topological chain. (a) Initially, $J_{odd}=0$, $J_{even}=J_{max}$, the zero-mode is localized on the first site and the energy gap takes its maximum value. (b) For $t=t^{*}/2$ we have $J_{odd}=J_{even}<J_{max}$ and the energy gap acquires its minimum value. Before and after $t^{*}/2$ we have $J_{odd}<J_{even}<J_{max}$ and $J_{even}<J_{odd}<J_{max}$ respectively. (c) Finally, $J_{odd}=J_{max}$, $J_{even}=0$, the zero-mode is localized on the last site and the energy gap once again takes its maximum value.} \centering \end{figure} \FloatBarrier For the protocols we consider in this case, we restrict ourselves to odd-sized chains and we assume that we can separately control even- and odd-indexed couplings. Initially the system is prepared so that $J_{odd}=0$ and $J_{even}=1$ and the excitation is localized on the first site of the chain, which is disconnected from the rest (see Fig. \ref{fig:1a} (a)). Therefore, the initial state is an eigenstate of the system ($\ket{1 0 0 ... 0}$ with zero eigenenergy). At the transfer time $t^{*}$ (see Fig. \ref{fig:1a} (c)) we end up with the reverse situation (i.e. $J_{even}=0$ and $J_{odd}=1$). The system undergoes a transition from a topological chain supporting an edge mode on the first site to a topological chain with an edge mode on the last site, resulting in an excitation transfer from one side to the other. Note that in this protocol there always exists a time at which all the couplings acquire the same value, $J_{even}=J_{odd}$. In our case we take this time to be $t=t^{*}/2$ (see Fig. \ref{fig:1a} (b)).
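The analytic spectrum of Eq. \ref{eq:pairs} can be checked directly against exact diagonalization of the chain; a minimal numpy sketch ($N = 31$ as in the later sections, with illustrative coupling values):

```python
import numpy as np

def ssh_eigenvalues(N, J_odd, J_even):
    """Eigenvalues of the open, odd-sized SSH chain (B_i = 0).
    Bond i (1-based) carries J_odd for odd i and J_even for even i."""
    J = np.array([J_odd if i % 2 == 0 else J_even for i in range(N - 1)])
    H = np.diag(J, 1) + np.diag(J, -1)
    return np.sort(np.linalg.eigvalsh(H))

N, J_odd, J_even = 31, 0.3, 1.0
numeric = ssh_eigenvalues(N, J_odd, J_even)

# analytic spectrum: +/- pairs of Eq. (pairs) plus the exact zero mode
j = np.arange(1, N // 2 + 1)
q = 2.0 * j * np.pi / (N + 1)
eps = np.abs(J_odd + J_even * np.exp(1j * q))
analytic = np.sort(np.concatenate((-eps, [0.0], eps)))
```

The smallest $\abs{\epsilon_{j}}$ occurs at $j=[N/2]$ (where $q_{j}$ is closest to $\pi$), which is the value entering the gap formula of Eq. \ref{eq:en}.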
For the infinite system, $J_{even}=J_{odd}$ corresponds to the closing of the energy gap separating the zero-energy mode from the rest of the excited states. However, in finite systems a finite energy difference between any two modes is always present. Thus, the point in the parameter space where $J_{even}=J_{odd}$ corresponds to the minimization of the energy gap. \subsection{Topologically-trivial chain} To explore how the topological nature of the underlying static chain affects the transfer we also proceed to a comparison with a protocol employing a topologically-trivial chain. We consider a protocol where the only couplings that are controlled are the ones connecting the edge sites with the rest of the chain (i.e. $J_{1}$ and $J_{N-1}$). This protocol has been chosen based on its performance in terms of speed and also because the local manipulation of the system's parameters makes it experimentally more feasible. As was the case for the topological chain, the initial state is localized at the first site and corresponds to the system's zero-energy eigenstate (see Fig. \ref{fig:1b} (a)). Here, $J_{1}= 0$ while $J_{i}=J$ $\forall i \neq 1$. During the dynamical evolution, due to the odd size of the chain, the zero-energy eigenstate is always present. Therefore, gradually switching on $J_{1}$ while $J_{N-1}$ is decreased results, at time $t^{*}$, in the transfer of the excitation to the other end of the chain (see Fig. \ref{fig:1b} (c)). An important difference between this protocol and the one described in the previous section is that in this case there is no point in the parameter space where all the couplings acquire the same value. \begin{figure} \center \includegraphics[width=0.4 \textwidth]{p2.pdf}\caption{\label{fig:1b} A schematic of different time instants during the dynamical evolution of the topologically-trivial chain.
(a) Initially, $J_{1}=0$, $J_{i}=J_{max}$ for $i=2,...,N-1$, the zero-mode is localized on the first site and the energy gap takes its maximum value. (b) For $t=t^{*}/2$ we have $J_{1}=J_{N-1}<J_{max}$, while $J_{i}=J_{max}$ for $i=2,3,...,N-2$. Before and after $t^{*}/2$ we have $J_{1}<J_{N-1}<J_{max}$ and $J_{N-1}<J_{1}<J_{max}$ respectively. (c) Finally, $J_{N-1}=0$ and $J_{i}=J_{max}$ for $i=1,...,N-2$, while the zero-mode is localized at the last site of the chain.} \centering \end{figure} \FloatBarrier \section{Crucial characteristics of the driving function} \label{crucial} \begin{figure*} \center \includegraphics[width=\textwidth]{plot1.pdf}\caption{\label{fig:1} The chain consists of $N=31$ sites. For each protocol, (a) cosine, (b) exponential and (c) trivial, the top panel shows the driving function as a function of time, while the bottom panel depicts the corresponding instantaneous energy spectrum as a function of time. In all plots, we have taken the transfer time to be $t^{*}=1$.} \centering \end{figure*} Before presenting our numerical results, we will develop an intuitive and solid line of argument dictating the crucial considerations that have to be taken into account when driving the state transfer in an odd-sized SSH chain. In all protocols we consider, the system is prepared in the zero-energy eigenstate that is localized on the first site of the chain. As the system evolves, a zero-energy state is always present due to the odd size of the chain. Therefore, the adiabatic approximation ensures that if the system is driven sufficiently slowly during the transfer process, we can remain in the zero-energy eigenstate without exciting bulk modes. What we propose in this paper is to suitably adjust the driving function in order to reach high fidelity values for small transfer times.
Our approach does not rely on methods like adiabatic passage or shortcuts to adiabaticity, where specifically engineered terms are introduced in the Hamiltonian that induce counter-processes able to suppress the bulk excitations. In other words, we confine ourselves to driving the nearest-neighbor couplings; to introduce counter-adiabatic terms one would have to include next-to-nearest neighbor interactions, as in \cite{d2020fast}. When driving the chain, we focus on two important quantities: the energy difference between the zero-mode and the rest of the states, and the time-derivative of the Hamiltonian matrix, which is directly related to the slope of the driving function. These two quantities appear in the standard adiabaticity criterion: assuming that $\ket{n,t}$ is the instantaneous eigenmode corresponding to the zero-energy state, in order to be close to the adiabatic limit the following sum has to be sufficiently small \begin{equation}\label{eq:3} \sum_{m \neq n} \frac{\bra{m,t}\dot{\hat{H}}\ket{n,t}}{E_{m}(t)-E_{n}(t)} \ll 1, \end{equation} where $E_{m}(t)$ is the instantaneous eigenenergy of the $m^{th}$ mode and $\dot{\hat{H}}$ is the time-derivative of the Hamiltonian. Equation \ref{eq:3} holds when no degeneracies appear in the spectrum and the energy differences remain bounded away from zero, $\abs{E_{m}(t)-E_{n}(t)}>\epsilon_{0}$ for some small $\epsilon_{0}>0$, $\forall t$. In the QST protocol employing the odd-sized SSH chain, initially (as $J_{odd}=0$) the energy gap separating the zero-mode from the excited states takes its maximum value. As $J_{odd}$ is switched on and $J_{even}$ decreases, the energy gap approaches its minimum, occurring at $J_{odd}=J_{even}$, and at the transfer time $t^{*}$ it regains its maximum value (e.g. see Fig. \ref{fig:1} (a) and (b), bottom panel). Our logic when dealing with the aforementioned dynamical evolution is simple and can be summarized into two considerations.
The first is to force the driving function to equate $J_{odd}$ and $J_{even}$ at values close to $J_{max}$, the maximum value the couplings can acquire during the transfer. This results in the maximization of the minimum energy gap. The minimum energy gap can be used to specify a characteristic time scale; when the transfer time is sufficiently large compared to this time scale, we can safely assume that we are close to the adiabatic following of the zero-energy state. A driving function with this characteristic has also been used in \cite{petrosyan2010state}, where the dimerized chain considered is equivalent to the SSH chain. The second crucial consideration is to adjust the driving in such a way that initially, when the energy gap is larger, we drive the system ``strongly'' (i.e. steeper slope of the driving function), and when we are close to the minimum value of the energy gap the driving becomes more ``gentle'' (i.e. smaller slope). Note, however, that strongly driving the system may induce non-adiabatic transitions between the zero-energy state and the excited states, which in general reduce the efficiency of the transfer. Our aim is to balance the interplay between the two aforementioned considerations in order to increase the speed of the transfer protocol. An intuitive driving function that we propose, and claim to encapsulate this behavior, is the exponential. In the next subsections it will become clear that the proper implementation of the above leads to a faster transfer process while the robustness of the protocol is maintained. \FloatBarrier \subsection{Speed of the transfer} \label{speed} Let us now examine in more detail the QST protocols that we briefly described in the previous section and provide the numerical evidence supporting our claims. We will examine chains of moderate length, $N=31$ sites. In the protocol we propose the couplings are driven by an exponential function (see Fig.
\ref{fig:1} (b) top panel), where $J_{odd}= (1-e^{-\alpha t/t^{*}})/(1-e^{-\alpha})$ and $J_{even}=(1- e^{-\alpha (t^{*}-t)/t^{*}})/(1-e^{-\alpha})$, while $\alpha=6.0$ is a free parameter that has been fine-tuned to increase the efficiency in terms of speed. We will return to the role of this free parameter at the end of the current section. The exponential protocol will be compared with a protocol proposed in \cite{mei2018robust}, where $J_{odd}= b(1-\cos{(\pi t/t^{*})})$ and $J_{even}= b(1+\cos{(\pi t/t^{*})})$ with $b=0.5$ (see Fig. \ref{fig:1} (a) top panel). On the other hand, for the trivial protocol the driving function has the following linear form: $J_{1}=\frac{t}{t^{*}}$, $J_{N-1}=1-\frac{t}{t^{*}}$ and $J_{i}=J_{max}=1$, $\forall i \neq 1,N-1$ (see Fig. \ref{fig:1} (c) top panel). In Fig. \ref{fig:1}, for each protocol we plot on the top panel the driving function of the couplings and on the bottom panel the evolution of the instantaneous eigenspectrum over time. Comparing the two protocols that employ the topological chain (see Fig. \ref{fig:1} (a) and (b)), we can immediately notice their qualitative differences. The cosine function initially, for large values of the energy gap, drives the system slowly, meaning the numerator of Eq. \ref{eq:3} is smaller compared to the exponential. However, it approaches the minimum value of the energy gap with a greater slope, while the exponential slows down and drives the system more smoothly in this region. Last but not least, the minimum value of the energy gap is analytically obtained by plugging the instantaneous values of the couplings at $t=t^{*}/2$ into Eq. \ref{eq:en}. For the cosine we get $g_{min}^{cos}=0.09$, while for the exponential $g_{min}^{exp}=0.18$. As already mentioned, this is because the exponential equates $J_{even}$ and $J_{odd}$ at higher values. In the trivial protocol, on the other hand (Fig. \ref{fig:1} (c)), the evolution is completely different.
Namely, the instantaneous energy gap separating the zero-energy mode from the rest of the modes starts from its minimum value ($g_{min}^{triv}=0.1$), slowly increases reaching its maximum ($g_{max}^{triv}=0.14$) at the middle of the time evolution, and then returns to its initial value. \begin{figure}[h] \center \includegraphics[width=0.4\textwidth]{plot2.pdf}\caption{\label{fig:2} The chain consists of $N=31$ sites. Fidelity as a function of the transfer time for all protocols.} \centering \end{figure} Now that the qualitative differences between the protocols have become apparent, let us proceed to some quantitative results. In Fig. \ref{fig:2}, for each protocol, we plot the fidelity $\mathcal{F}$ (Eq. \ref{eq:2}) as a function of the transfer time $t^{*}$. To make a comparison in terms of the speed of the transfer we have to set a lower bound on the fidelity. In particular, we will consider the time after which the fidelity is stabilized above $0.9$. In this case, the exponential protocol is clearly faster than the cosine protocol, since this occurs for $t^{*}\geq 42$, as compared to the cosine where it happens for $t^{*}\geq 761$. The trivial protocol, on the other hand, even though it reaches $\mathcal{F}=0.9$ for $t^{*}=35$, exhibits a strongly oscillatory behavior that prevents its stabilization above $\mathcal{F}=0.9$ until $t^{*} \geq 231$. For all profiles, in the limit $t^{*} \to \infty$ the fidelity approaches unity and the excitation is perfectly transferred along the chain. This constitutes the adiabatic limit where, during the dynamical evolution, we ``follow'' the zero-energy state without exciting other bulk eigenmodes. The oscillations that appear in the fidelity plot of the trivial protocol are in general unwelcome in QST protocols since they demand great precision when tuning the transfer time \cite{kay2010perfect}.
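The exponential protocol can be integrated numerically with piecewise-constant propagators. The sketch below uses a shorter chain ($N = 5$) and $t^{*} = 200$, illustrative choices rather than the parameters of the figures, with $\alpha = 6$ as in the text; in this deeply adiabatic regime the zero mode is followed and a high fidelity is recovered.

```python
import numpy as np
from scipy.linalg import expm

def couplings(t, t_star, alpha, N):
    """Exponentially driven couplings: J_odd switches on, J_even switches off."""
    J_odd = (1 - np.exp(-alpha * t / t_star)) / (1 - np.exp(-alpha))
    J_even = (1 - np.exp(-alpha * (t_star - t) / t_star)) / (1 - np.exp(-alpha))
    return np.array([J_odd if i % 2 == 0 else J_even for i in range(N - 1)])

N, t_star, alpha, dt = 5, 200.0, 6.0, 0.05
psi = np.zeros(N, dtype=complex)
psi[0] = 1.0                                   # zero mode at t = 0
for k in range(int(t_star / dt)):
    J = couplings((k + 0.5) * dt, t_star, alpha, N)   # midpoint couplings
    H = np.diag(J, 1) + np.diag(J, -1)
    psi = expm(-1j * H * dt) @ psi             # piecewise-constant propagator
F = abs(psi[-1])**2                            # fidelity of Eq. (2)
```

Shrinking $t^{*}$ with this sketch reproduces the qualitative trade-off discussed in the text: the fidelity degrades once the drive becomes too fast relative to the minimum gap.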
Moreover, they signify that resonant processes are the underlying mechanism responsible for achieving high fidelity at such small transfer times. Taking a closer look at the fidelity plot of the exponential protocol (Fig. \ref{fig:5} for $\alpha=6$), we notice that small oscillations are present here as well, i.e. the fidelity curve does not increase smoothly. This indicates that resonant processes are also at work in this case. Obtaining a suitable basis in which the processes occurring during the dynamical evolution can be rigorously tracked remains a highly non-trivial task \cite{lim1991superadiabatic,dykhne1960quantum}. Nevertheless, as we now show, the resonant processes can be properly handled to increase the efficiency of the transfer. When we introduced the exponential driving function, we mentioned that the $\alpha$ parameter is fine-tuned ($\alpha=6.0$). Smaller values of $\alpha$ lead to a less steep slope of the driving function and a smaller value of $g_{min}^{exp}$ (i.e. $J_{even}$ and $J_{odd}$ equate at a smaller value). In this case, the resonant processes are suppressed and the fidelity curve smooths out (see Fig. \ref{fig:5}, $\alpha=4$). Consequently, the protocol's speed is reduced, since high fidelity values are only obtained at larger transfer times. On the contrary, increasing $\alpha$ above the fine-tuned value results in a steeper slope of the driving function and a greater value of $g_{min}^{exp}$. The resonant processes then take over at small transfer times and strong oscillations appear in the fidelity plot (see Fig. \ref{fig:5}, $\alpha=8$). This again reduces the speed of the protocol. Thus, the fine-tuned value $\alpha=6$ is a trade-off: it marks the point up to which we drive the system strongly enough to increase the speed, but gently enough to avoid resonant effects.
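For concreteness, the exponential and cosine coupling profiles discussed above can be written down directly. The following is a minimal sketch (the function names are ours for illustration; the normalization follows the formulas given in the text):

```python
import math

def exponential_profile(t, t_star, alpha=6.0):
    """Exponential protocol: J_odd ramps up, J_even ramps down (mirrored in t)."""
    j_odd = (1 - math.exp(-alpha * t / t_star)) / (1 - math.exp(-alpha))
    j_even = (1 - math.exp(-alpha * (t_star - t) / t_star)) / (1 - math.exp(-alpha))
    return j_odd, j_even

def cosine_profile(t, t_star, b=0.5):
    """Adiabatic cosine protocol of mei2018robust."""
    j_odd = b * (1 - math.cos(math.pi * t / t_star))
    j_even = b * (1 + math.cos(math.pi * t / t_star))
    return j_odd, j_even
```

At $t=t^{*}/2$ both profiles satisfy $J_{odd}=J_{even}$, but for the exponential with $\alpha=6$ the common value ($\approx 0.95$) is much larger than the cosine's ($0.5$), consistent with the larger minimum gap $g_{min}^{exp}>g_{min}^{cos}$ quoted in the text.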
\begin{figure}[h] \center \includegraphics[width=0.4\textwidth]{aplot.pdf}\caption{\label{fig:5} Fidelity as a function of the transfer time for the exponential driving and for different values of the $\alpha$ parameter.} \centering \end{figure} Let us also note that we have performed optimizations with the CRAB (chopped random basis) algorithm \cite{doria2011optimal}, taking either the exponential or the cosine function as the guess function. The CRAB algorithm produces corrections in terms of a chopped polynomial or Fourier expansion of the correction function. The correction of CRAB to the exponential function was very small, with subtle adjustments that did not change its basic logic and behavior. On the other hand, with the cosine function as the guess, CRAB produced a strong correction in the direction of the exponential profile. These calculations indicate that our protocol is close to the optimal one, within the constraints considered in this work. \FloatBarrier \subsection{Disorder Analysis}\label{Disorder} In this section, we consider static disorder on both the couplings and the magnetic field and study its effect on the fidelity. Based on the matrix representation of the Hamiltonian, disorder on the couplings is commonly referred to as off-diagonal disorder, while disorder on the magnetic field is referred to as diagonal disorder. Static disorder can be attributed to manufacturing errors that arise during the experimental implementation. Each disorder realization is imposed on the system's parameters as follows: \begin{equation} \label{eq:4} J_{i}(t) \to J_{i}(t)(1 + \delta J_{i}) \quad B_{i}(t) \to B_{i}(t)+ \delta B_{i} \end{equation} $\delta J_{i}$ and $\delta B_{i}$ acquire random real values uniformly distributed in the interval $(-d_{s},d_{s})$, where $d_{s}$ is the disorder strength.
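A single static-disorder realization of Eq. \ref{eq:4} can be sketched as follows (`apply_static_disorder` is a hypothetical helper written for illustration; it perturbs one snapshot of the couplings and fields, and the same $\delta J_i$, $\delta B_i$ would be reused at every time step of the evolution):

```python
import random

def apply_static_disorder(J, B, d_s, rng=random):
    """One static-disorder realization (Eq. 4): multiplicative noise on the
    couplings J_i, additive noise on the fields B_i, fixed for the whole run."""
    dJ = [rng.uniform(-d_s, d_s) for _ in J]
    dB = [rng.uniform(-d_s, d_s) for _ in B]
    J_pert = [j * (1 + dj) for j, dj in zip(J, dJ)]
    B_pert = [b + db for b, db in zip(B, dB)]
    return J_pert, B_pert
```

Averaging the fidelity over many such realizations (here, $10000$) yields the curves and error bars shown in Fig. \ref{fig:3}.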
When we consider static disorder, for each realization a random profile of perturbations is imposed on the parameters and remains fixed during the time evolution. \begin{figure*} \center \includegraphics[width=\textwidth]{displot.pdf}\caption{\label{fig:3} For each protocol, (a) cosine, (b) exponential and (c) trivial, we show the impact of diagonal and off-diagonal disorder of strength $d_{s}=0.2$ (units of $J_{max}$). Each point corresponds to the mean value of the fidelity averaged over $10000$ disorder realizations, given as a function of the transfer time, while the error bars correspond to the standard deviation of the sample. For comparison, we have also included the unperturbed curve. The transfer time axis is displayed in logarithmic scale. We also note that the limits of the $t^{*}$-axis for the cosine differ from the other two.} \centering \end{figure*} In Fig. \ref{fig:3}, for each protocol, we plot the mean fidelity as a function of $\log{t^{*}}$ for diagonal and off-diagonal disorder of moderate strength $d_{s}=0.2$. What we can immediately notice is that in almost all cases (there is one exception, discussed later on) the disorder does not completely ruin the transfer process. Instead, the main effect is that in the presence of disorder (diagonal or off-diagonal), the transfer time $t^{*}$ needed to reach high values of fidelity is increased. Let us turn our attention to the protocols employing the SSH chain. The zero-energy mode of the underlying static chain is known to be robust against perturbations that respect chiral symmetry \cite{asboth2016short}. Off-diagonal (chiral) disorder may change the mode's wavefunction; however, its energy remains pinned to zero. On the contrary, diagonal disorder breaks chiral symmetry and the energy of the mode is shifted. For the time-driven chain, a difference between chiral and non-chiral disorder becomes apparent in the case of the adiabatic cosine protocol (see Fig. \ref{fig:3} (a)).
As expected from the static case, the fidelity is reduced more in the presence of non-chiral disorder \cite{mei2018robust,lang2017topological}. The exponential protocol, however, seems indifferent to whether the disorder is chiral or not (see Fig. \ref{fig:3} (b)). The reason lies in the higher speed of the exponential protocol. Since the effect of disorder manifests itself mainly at large time scales, the adiabatic cosine protocol is far more sensitive than the exponential one. This argument has recently been used to justify the resilience of the counter-adiabatic protocol in \cite{d2020fast}. When we examine the effect of coupling disorder on the topologically trivial protocol (see Fig. \ref{fig:3} (c)), we observe that the oscillatory behavior of the fidelity at small transfer times is suppressed (i.e. there are fewer oscillations and the mean fidelity is significantly degraded). We can thus deduce that the resonant processes are not robust to static off-diagonal disorder. When considering the effect of disorder on the magnetic field, we distinguish two cases: one where the disorder is imposed on all sites of the chain, and another where the first and last sites are exempted (i.e. $\delta B_{1}=\delta B_{N}=0$). In the latter case, the protocol proves to be even more robust than in the off-diagonal case (see Fig. \ref{fig:3} (c) diagonal 2). In the former, however, the effect of disorder is severe and the transfer process is completely destroyed (see Fig. \ref{fig:3} (c) diagonal 1). Diagonal disorder on the edges greatly affects the transfer process because it induces an energy difference between the initial and the final state. The system's initial state is localized on the first site with energy equal to $\delta B_{1}$, while the final state is localized on site $N$ with energy $\delta B_{N}$.
This, combined with the fact that the energy gap separating these states from the rest of the excited states takes its minimum value (which is smaller than the disorder strength) at the beginning and the end of the transfer process, explains the strong impact of diagonal disorder on the edges. In conclusion, the topological protection provided by the energy gap clearly favors the topological channels, which are indifferent to whether the diagonal disorder is imposed on the edge sites. To sum up, the exponential protocol is quite robust to both diagonal and off-diagonal disorder and clearly outperforms the other two protocols. As opposed to the adiabatic cosine protocol, it is indifferent to whether the disorder is chiral or not, and the comparison with the trivial chain lets us deduce that no significant (in terms of affecting the fidelity) resonant processes susceptible to static noise are at work. Finally, the presence of a wide energy gap in the underlying static SSH chain clearly favors the topological quantum channel when diagonal disorder is imposed on the edge sites of the chain. \FloatBarrier \section{Conclusions} \label{Conclusions} In this work we have numerically investigated a time-dependent protocol that employs a topological quantum chain as a quantum channel for transferring single-site excitations. We propose an exponential driving function that increases the efficiency of the transfer in terms of speed. To support this claim, we make a comparison with two other QST protocols. The crucial characteristics of the exponential function are that it suitably adapts the slope of the driving based on the value of the instantaneous energy gap, while at the same time ensuring that the minimum value of the energy gap $g_{min}$ is as large as possible. The resonant processes are fine-tuned, leading to an increase in speed. Employing the CRAB optimization algorithm, we obtain strong indications that the proposed time-driving function is close to optimal.
In addition, we study the effect of diagonal and off-diagonal static noise, highlighting the fact that even though the speed of the protocol is increased, its robustness is maintained. The difference in speed with respect to the cosine and the trivial protocols emphasizes the power of our treatment and identifies the considerations that have to be taken into account when driving a topological quantum chain. The developed scheme adds to the ongoing effort of constructing discrete networks that can efficiently transfer and manipulate quantum states. It also makes a substantial contribution to speeding up adiabatic protocols (with topological characteristics or not), since it indicates a conceptual way of designing control schemes based on the characteristics of the instantaneous eigenspectrum. \section*{Acknowledgments} G. T. and I. B. acknowledge funding by the project CS.MICRO funded under the program Etoiles Montantes of the Region Pays de la Loire. N. E. P. gratefully acknowledges financial support from the Hellenic Foundation for Research and Innovation (HFRI) and the General Secretariat for Research and Technology (GSRT), under the HFRI PhD Fellowship Grant No. 868.
\section{Introduction} \label{introduction} The deep learning revolution has yielded models of increasingly large size. In recent years, designing compact and accurate neural networks with a small number of trainable parameters has been an active research topic, motivated by practical applications in embedded systems (to reduce memory footprint \cite{43969}), federated and distributed learning (to reduce communication \cite{45648}), and derivative-free optimization in reinforcement learning (to simplify the computation of the approximated gradient \cite{47028}). Beyond these practical applications, it is also an important research question whether models really need to be this big or whether smaller networks can achieve similar accuracy~\cite{ba2014deep}. Structured matrices are at the very core of most of the work on compact networks. In these models, dense weight matrices are replaced by matrices with a prescribed structure (e.g. low-rank matrices, Toeplitz matrices, circulant matrices, LDR, etc.). Despite substantial efforts (e.g. \cite{cheng,moczulski2015acdc}), the performance of compact models is still far from the accuracy that would motivate their use in real-world scenarios. This raises several questions about the effectiveness of such models and about our ability to train them. In particular, two main questions call for investigation: \begin{itemize} \item[]{\bf Q1\ }{\em How to efficiently train deep neural networks with a large number of structured layers?} \item[]{\bf Q2\ }{\em What is the expressive power of structured layers compared to dense layers?} \end{itemize} In this paper, we provide principled answers to these questions for the particular case of deep neural networks based on diagonal and circulant matrices (a.k.a. diagonal-circulant networks, or DCNNs). The idea of using diagonal and circulant matrices together comes from a series of results in linear algebra by Muller et al. \cite{muller1998algorithmic} and Huhtanen et al.
\cite{Huhtanen2015}. The most recent result from Huhtanen et al. \cite{Huhtanen2015} demonstrates that any matrix $A$ in $\C^{n\times n}$ can be decomposed into the product of $2n-1$ alternating diagonal and circulant matrices. The diagonal-circulant decomposition inspired Moczulski et al. \cite{moczulski2015acdc} to design the {\em AFDF} structured layer, which is the building block of DCNNs. However, they were not able to train deep neural networks based on AFDF. To answer {\bf Q1}, we first describe a theoretically sound initialization procedure for DCNNs which allows the signal to propagate through the network without vanishing or exploding. Furthermore, we provide a number of empirical insights to explain the behaviour of DCNNs, and show the impact of the number of non-linearities in the network on the convergence rate and the accuracy of the network. By combining all these insights, we are able (for the first time) to train large and deep DCNNs. We demonstrate the good performance of DCNNs on a large-scale application (the \textit{YouTube-8M}\xspace video classification problem) and obtain very competitive accuracy. To answer {\bf Q2}, we propose an analysis of the expressivity of DCNNs by extending the results of Huhtanen et al. \cite{Huhtanen2015}. We introduce a new bound on the number of diagonal-circulant factors required to approximate a matrix, as a function of its rank. Building on this result, we demonstrate that a DCNN with bounded width and small depth can approximate any dense network with ReLU activations. \paragraph{Outline of the paper:} We present in Section~\ref{related_work} the related work on structured neural networks and several compression techniques. Section~\ref{section:circulant} introduces circulant matrices and our new result extending that of Huhtanen et al. \cite{Huhtanen2015}. Section~\ref{sec-dcnn-th} presents a theoretical analysis of the expressivity of DCNNs.
Section~\ref{section:training} describes two efficient techniques for training deep diagonal-circulant neural networks. Finally, Section~\ref{section:exp} presents extensive experiments comparing the performance of deep diagonal-circulant neural networks in different settings with other state-of-the-art approaches. Section~\ref{section:conclusion} provides a discussion and concluding remarks. \section{Related Work} \label{related_work} Structured matrices exhibit a number of good properties which have been exploited by deep learning practitioners, mainly to compress large neural network architectures into smaller ones. For example, Hinrichs et al. \cite{hinrichs2011johnson} have demonstrated that a single circulant matrix can be used to approximate the Johnson-Lindenstrauss transform, often used in machine learning to perform dimensionality reduction. Building upon this result, Cheng et al. \cite{cheng} proposed to replace the weight matrix of a fully connected layer by a circulant matrix, effectively replacing the complex transform modeled by the fully connected layer by a simple dimensionality reduction. Despite the reduction of expressivity, the resulting network demonstrated good accuracy using only a fraction of its original size (90\% reduction). \textbf{Comparison with ACDC.} Moczulski et al. \cite{moczulski2015acdc} have introduced two {\em Structured Efficient Linear Layers} (SELL) called AFDF and ACDC. The AFDF structured layer benefits from the theoretical results introduced by Huhtanen et al. \cite{Huhtanen2015} and can be seen as the building block of DCNNs. However, Moczulski et al. \cite{moczulski2015acdc} only experiment with ACDC, a different type of layer that does not involve circulant matrices. As far as we can tell, the theoretical guarantees available for the AFDF layer do not apply to the ACDC layer, since the cosine transform does not diagonalize circulant matrices \cite{sanchez1995diagonalizing}.
Another possible limitation of the ACDC paper is that the authors only train large neural networks in which ACDC layers are combined with many other expressive layers. Although the resulting network demonstrates good accuracy, it is difficult to characterize the true contribution of the ACDC layers in this setting. \textbf{Comparison with low displacement rank structures.} More recently, Thomas et al. \cite{Thomas_NIPS2018_8119} have generalized these works by proposing neural networks with low-displacement-rank matrices (LDR), structured matrices encompassing a large family of structured matrices, including Toeplitz-like, Vandermonde-like, Cauchy-like and, most notably, DCNNs. To obtain this result, LDR represents a structured matrix using two displacement operators and a low-rank residual. Despite being elegant and general, we found that the LDR framework suffers from several limits which are inherent to its generality and make it difficult to use in the context of large and deep neural networks. First, the training procedure for learning LDR matrices is highly involved and relies on many complex mathematical objects such as Krylov matrices. Second, as acknowledged by the authors, the number of parameters required to represent a given structured matrix (e.g. a Toeplitz matrix) in practice is unnecessarily high (higher than required in theory). \textbf{Other compression techniques.} Besides structured matrices, a variety of techniques have been proposed to build more compact deep learning models. These include {\em model distillation}~\cite{44873}, Tensor Train~\cite{novikov2015tensorizing} and low-rank decomposition~\cite{NIPS2013_5025}, to mention a few. Nevertheless, circulant networks show good performance in several contexts (the interested reader can refer to the results reported by Moczulski et al. \cite{moczulski2015acdc} and Thomas et al. \cite{Thomas_NIPS2018_8119}).
\section{A primer on circulant matrices and a new result} \label{section:circulant} An $n$-by-$n$ circulant matrix $C$ is a matrix where each row is a cyclic right shift of the previous one, as illustrated below. {\small \[ C = circ(c) =\left[\begin{array}{ccccc} c_{0} & c_{n-1} & c_{n-2} & \dots & c_{1} \\ c_{1} & c_{0} & c_{n-1} & & c_{2} \\ c_{2} & c_{1} & c_{0}& & c_{3} \\ \vdots & & & \ddots & \vdots \\ c_{n-1} & c_{n-2} & c_{n-3} & & \phantom{0}c_{0}\phantom{0} \end{array}\right] \]} \noindent Circulant matrices exhibit several interesting properties from the perspective of numerical computations. Most importantly, any $n$-by-$n$ circulant matrix $C$ can be represented using only $n$ coefficients instead of the $n^2$ coefficients required to represent a classical unstructured matrix. In addition, the matrix-vector product is reduced from $O(n^2)$ to $O(n\log n)$ using the convolution theorem. As we will show in this paper, circulant matrices also have a strong expressive power. So far, we know that a single circulant matrix can be used to represent a variety of important linear transforms such as random projections~\cite{hinrichs2011johnson}. When combined with diagonal matrices, circulant matrices can also be used as building blocks to represent any linear transform~\cite{schmid2000decomposing, Huhtanen2015} with arbitrary precision. Huhtanen et al. \cite{Huhtanen2015} were able to bound the number of factors required to approximate any matrix $A$ with arbitrary precision. \paragraph{Relation between diagonal-circulant matrices and low-rank matrices} We recall this result in Theorem~\ref{thm:huhtanen}, as it is the starting point of our theoretical analysis (note that in the rest of the paper, $\left\Vert\ \cdot\ \right\Vert $ denotes the $\ell_{2}$ norm when applied to vectors and the operator norm when applied to matrices). \begin{thm} (Reformulation from Huhtanen et al.
\cite{Huhtanen2015}) \label{thm:huhtanen} For every matrix $A\in\mathbb{C}^{n\times n}$, for any $\epsilon > 0$, there exists a sequence of matrices $B_1 \ldots B_{2n-1}$, where $B_{i}$ is a circulant matrix if $i$ is odd and a diagonal matrix otherwise, such that $\left\Vert B_{1}B_{2}\ldots B_{2n-1}-A \right\Vert < \epsilon$. \end{thm} Unfortunately, this theorem is of little use for understanding the expressive power of diagonal-circulant matrices when they are used in deep neural networks. This is because: 1) the bound only depends on the dimension of the matrix $A$, not on the matrix itself; 2) the theorem does not provide any insight into the expressive power of $m$ diagonal-circulant factors when $m$ is much lower than $2n - 1$, as is the case in most practical scenarios we consider in this paper. In the following theorem, we enhance the result of Huhtanen et al. \cite{Huhtanen2015} by expressing the number of factors required to approximate $A$ {\em as a function of the rank of $A$}. This is useful when one deals with low-rank matrices, which are common in machine learning problems. \begin{thm} \footnote{All proofs are in the arxiv version of the paper. \\ \url{https://arxiv.org/abs/1901.10255}} (Rank-based circulant decomposition) \label{prop:rank-decomposition}Let $A\in\mathbb{C}^{n\times n}$ be a matrix of rank at most $k$. Assume that $n$ can be divided by $k$. For any $\epsilon>0$, there exists a sequence of $4k+1$ matrices $B_{1},\ldots,B_{4k+1}$, where $B_{i}$ is a circulant matrix if $i$ is odd and a diagonal matrix otherwise, such that $\Vert B_1B_2\ldots B_{4k+1} - A\Vert < \epsilon$. \end{thm} A direct consequence of Theorem~\ref{prop:rank-decomposition} is that if the number of diagonal-circulant factors is set to a value $K$, we can represent any linear transform $A$ whose rank is at most $\frac{K - 1}{4}$.
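To make the $O(n\log n)$ circulant matrix-vector product concrete, here is a minimal pure-Python sketch of the convolution theorem applied to $circ(c)\,x$. The radix-2 FFT written here requires $n$ to be a power of two; in practice one would of course use an optimized FFT library rather than this illustrative implementation:

```python
import cmath

def fft(a):
    """Recursive radix-2 Cooley-Tukey FFT; len(a) must be a power of 2."""
    n = len(a)
    if n == 1:
        return list(a)
    even, odd = fft(a[0::2]), fft(a[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

def ifft(a):
    """Inverse FFT via conjugation: ifft(a) = conj(fft(conj(a))) / n."""
    n = len(a)
    return [x.conjugate() / n for x in fft([v.conjugate() for v in a])]

def circulant_matvec(c, x):
    """circ(c) @ x as a circular convolution, computed in O(n log n)."""
    fc, fx = fft(c), fft(x)
    return ifft([a * b for a, b in zip(fc, fx)])

def circulant_matvec_naive(c, x):
    """Direct O(n^2) product: (Cx)_i = sum_j c_{(i-j) mod n} x_j."""
    n = len(c)
    return [sum(c[(i - j) % n] * x[j] for j in range(n)) for i in range(n)]
```

Both routines agree up to floating-point error; only the vector $c$ of $n$ coefficients is ever stored, never the full $n\times n$ matrix.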
Compared to \cite{Huhtanen2015}, this result shows that products of fewer than $2n$ diagonal-circulant matrices (as used in practice) can still represent a large class of matrices. As we will show in the following section, this result is useful for analyzing the expressivity of neural networks based on diagonal and circulant matrices. \section{Analysis of Diagonal Circulant Neural Networks (DCNNs)} \label{sec-dcnn-th} Zhao et al. \cite{pmlr-v70-zhao17b} have shown that circulant networks with 2 layers and unbounded width are universal approximators. However, results on unbounded networks offer weak guarantees, and two important questions have remained open until now: 1) {\em Can we approximate any function with a bounded-width circulant network?} 2) {\em What functions can we approximate with a circulant network that has a bounded width and a small depth?} We answer these two questions in this section. First, we introduce some necessary definitions regarding neural networks, and then we provide a theoretical analysis of their approximation capabilities. \begin{defn}[Deep ReLU network\xspace]\label{drn} Given $L$ weight matrices $W = (W_1, \ldots, W_L)$ with $W_i \in \mathbb C^{n\times n}$ and $L$ bias vectors $b = (b_1, \ldots, b_L)$ with $b_i \in \mathbb C^n$, a {\em deep ReLU\xspace network} is a function $f_{W, b} : \mathbb C^n \rightarrow \mathbb C^n$ such that $f_{W, b}(x) = (f_{W_L, b_L} \circ \ldots \circ f_{W_1, b_1})(x)$ where $f_{W_i, b_i}(x) = \phi(W_i x + b_i)$ and $\phi(.)$ is a ReLU\xspace non-linearity \footnote{Because our networks deal with complex numbers, we use an extension of the ReLU\xspace function to the complex domain.
The most straightforward extension, defined in \cite{DBLP:conf/iclr/TrabelsiBZSSSMR18}, is as follows: $\mathrm{ReLU\xspace}(z)=\mathrm{ReLU}\left(\mathfrak{R}(z)\right)+i\mathrm{ReLU}\left(\mathfrak{I}(z)\right)$, where $\mathfrak{R}$ and $\mathfrak{I}$ refer to the real and imaginary parts of $z$.} In the rest of this paper, we call $L$ and $n$ respectively the depth and the width of the network. Moreover, we call {\em total rank $k$} the sum of the ranks of the matrices $W_{1}\ldots W_{L}$, i.e. $k = \sum_{i=1}^L rank(W_i)$. \end{defn} \noindent We also introduce DCNNs, similarly to Moczulski et al. \cite{moczulski2015acdc}. \begin{defn}[Diagonal Circulant Neural Networks]\label{def:DCNN} Given $L$ diagonal matrices $D = (D_1, \ldots, D_L)$ with $D_i \in \mathbb C^{n\times n}$, $L$ circulant matrices $C = (C_1, \ldots, C_L)$ with $C_i \in \mathbb C^{n\times n}$ and $L$ bias vectors $b = (b_1, \ldots, b_L)$ with $b_i \in \mathbb C^n$, a {\em Diagonal Circulant Neural Network} (DCNN) is a function $f_{D, C, b} : \mathbb C^n \rightarrow \mathbb C^n$ such that $f_{D,C,b}(x) = (f_{D_L, C_L, b_L} \circ \ldots \circ f_{D_1, C_1, b_1})(x)$ where $f_{D_i, C_i, b_i}(x) = \phi_i(D_i C_i x + b_i)$ and $\phi_i(.)$ is a ReLU\xspace non-linearity or the identity function. \end{defn} \noindent We can now show that bounded-width DCNNs can approximate any deep ReLU network, and, as a corollary, that they are universal approximators. \begin{lem}\label{mainth_} Let $\mathcal{N}$ be a deep ReLU network of width $n$ and depth $L$, and let $\mathcal{X} \subset \mathbb{C}^{n}$ be a bounded set. For any $\epsilon>0$, there exists a DCNN $\mathcal{N}'$ of width $n$ and depth $(2n-1)L$ such that $\Vert \mathcal{N}(x)-\mathcal{N}'(x) \Vert < \epsilon$ for all $x \in \mathcal{X}$.
\end{lem} \noindent We can now state the universal approximation corollary: \begin{cor}\label{cor:universal} Bounded-width DCNNs are universal approximators in the following sense: for any continuous function $f:[0,1]^{n}\rightarrow\mathbb{R}_+$ of bounded supremum norm, for any $\epsilon>0$, there exists a DCNN $\mathcal{N}_{\epsilon}$ of width $n+3$ such that $\forall x\in[0,1]^{n+3}$, $\left|f(x_{1}\ldots x_{n})-\left(\mathcal{N}_{\epsilon}\left(x\right)\right)_{1}\right|<\epsilon$, where $\left(\cdot\right)_{i}$ represents the $i^{th}$ component of a vector. \end{cor} \noindent This is a first result; however, $(2n+5)L$ is not a small depth (in our experiments, $n$ can be over 300~000), and a number of works have provided empirical evidence that DCNNs with small depth can offer good performance (e.g. \cite{anca2018eccv,cheng}). To improve our result, we introduce our main theorem, which studies the approximation properties of these small-depth networks. \begin{thm}(Rank-based expressive power of DCNNs) \label{prop:low_rank_nn} \noindent Let $\mathcal{N}$ be a deep ReLU network of width $n$, depth $L$ and total rank $k$, and assume $n$ is a power of $2$. Let $\mathcal{X} \subset \mathbb{C}^{n}$ be a bounded set. Then, for any $\epsilon>0$, there exists a DCNN with ReLU activation $\mathcal{N}'$ of width $n$ such that $\left\Vert \mathcal{N}(x)-\mathcal{N}'(x)\right\Vert <\epsilon$ for all $x\in\mathcal{X}$ and the depth of $\mathcal{N}'$ is bounded by $9k$.\end{thm} \noindent Remark that the theorem requires $n$ to be a power of $2$. We conjecture that the result still holds without this condition. This result refines Lemma~\ref{mainth_} and answers our second question: a DCNN of bounded width and small depth can approximate a deep ReLU network of low total rank. Note that the converse is not true: because an $n$-by-$n$ circulant matrix can be of rank $n$, approximating a DCNN of depth $1$ can require a deep ReLU network of total rank equal to $n$.
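For concreteness, a single layer $f_{D_i,C_i,b_i}(x)=\phi_i(D_i C_i x + b_i)$ from Definition~\ref{def:DCNN}, with the complex ReLU of \cite{DBLP:conf/iclr/TrabelsiBZSSSMR18}, can be sketched as follows (function names are ours; the naive $O(n^2)$ circulant product is used for clarity, whereas an FFT would reduce it to $O(n\log n)$):

```python
def complex_relu(z):
    # ReLU extended to C: applied separately to real and imaginary parts.
    return complex(max(z.real, 0.0), max(z.imag, 0.0))

def dcnn_layer(d, c, b, x):
    """One diagonal-circulant layer: phi(D C x + b), with D = diag(d), C = circ(c)."""
    n = len(x)
    # Circulant product: (Cx)_i = sum_j c_{(i-j) mod n} x_j.
    cx = [sum(c[(i - j) % n] * x[j] for j in range(n)) for i in range(n)]
    return [complex_relu(d[i] * cx[i] + b[i]) for i in range(n)]
```

A full DCNN of depth $L$ is simply the composition of $L$ such layers, so it stores $O(Ln)$ parameters instead of the $O(Ln^2)$ of a dense network of the same shape.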
\paragraph{Expressivity of DCNNs} For the sake of clarity, we highlight the significance of these results with the two following properties. \textbf{Properties. } Given an arbitrary fixed integer $n$, let $\text{\ensuremath{\mathcal{R}}}_{k}$ be the set of all functions $f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}$ representable by a deep ReLU network of total rank at most $k$, and let $\mathcal{C}_{l}$ be the set of all functions $f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}$ representable by deep diagonal-circulant networks of depth at most $l$; then: \begin{align} \label{prop_eq1}\forall k,\exists l \, & \quad \mathcal{R}_{k}\subsetneq\mathcal{C}_{l} \\ \label{prop_eq2}\forall l,\nexists k\, & \quad \mathcal{C}_{l}\subseteq\mathcal{R}_{k} \end{align} \noindent We illustrate the meaning of these properties in Figure~\ref{fig:circfig}. As we can see, the set $\mathcal R_{k}$ of all functions representable by a deep ReLU\xspace network of total rank $k$ is strictly included in the set $\mathcal C_{9k}$ of all DCNNs of depth $9k$ (by Theorem~\ref{prop:low_rank_nn}). \begin{figure}[th] \begin{center} \input{graphs/picture.tex} \end{center} \caption{Illustration of Properties (1) and (2). } \label{fig:circfig} \end{figure} \noindent These properties are interesting for several reasons. First, Property (\ref{prop_eq2}) shows that diagonal-circulant networks are \emph{strictly more expressive} than networks with low total rank. Second, and most importantly, in standard deep neural networks it is known that most of the singular values are close to zero (see e.g. \cite{Sedghi18iclr,Arora19neurips}). Property (\ref{prop_eq1}) shows that these networks can be efficiently approximated by diagonal-circulant networks. Finally, several publications have shown that neural networks can be trained explicitly to have low-rank weight matrices \cite{chong18eccv, goyal19}. This opens the possibility of learning compact and accurate diagonal-circulant networks.
\section{How to train very deep DCNNs} \label{section:training} \begin{figure*}[ht] \centering \subfigure[]{ \centering \input{graphs/cifar10_factor.tex} \label{fig:cifar10_factor} }\hspace{2cm} \subfigure[]{ \centering \input{graphs/cifar10_leaky_relu.tex} \label{fig:cifar10_leaky_relu} } \caption{Experiments on training DCNNs and other structured neural networks on CIFAR-10. Figure~\ref{fig:cifar10_factor}: impact of increasing the number of ReLU activations in a DCNN. Deep DCNNs with fewer ReLUs are easier to train. Figure~\ref{fig:cifar10_leaky_relu}: impact of increasing the slope of a Leaky-ReLU in DCNNs. Deep DCNNs with a larger slope are easier to train.} \end{figure*} Training DCNNs has proved to be a challenging problem. We devise two techniques to facilitate the training of deep DCNNs. First, we propose an initialization procedure which guarantees that the signal propagates across the network without vanishing or exploding. Second, we study the behavior of DCNNs with different non-linearity functions and determine the best parameters for different settings. \paragraph{Initialization scheme} We use the following initialization procedure, a variant of Xavier initialization. First, for each circulant matrix $C=circ(c_{1}\ldots c_{n})$, each $c_{i}$ is randomly drawn from $\mathcal{N}\left(0,\sigma^{2}\right)$, with $\sigma=\sqrt{\frac{2}{n}}$. Next, for each diagonal matrix $D=diag(d_{1}\ldots d_{n})$, each $d_{i}$ is drawn randomly and uniformly from $\{-1,1\}$. Finally, all biases in the network are randomly drawn from $\mathcal{N}\left(0,\sigma'^{2}\right)$, for some small value of $\sigma'$. The following proposition states that, with this initialization, the covariance matrix at the output of any layer of a DCNN is constant, independent of the depth. \begin{prop} \label{prop:initialization}Let $\mathcal{N}$ be a DCNN of depth $L$ initialized according to our procedure, with $\sigma'=0$.
Assume that all layers $1$ to $L-1$ have ReLU activation functions, and that the last layer has the identity activation function. Then, for any $x\in\mathbb{R}^{n}$, the covariance matrix of $\mathcal{N}(x)$ is $\frac{2.Id}{n}\left\Vert x\right\Vert _{2}^{2}$. Moreover, note that this covariance does not depend on the depth of the network. \end{prop} \begin{proof} \emph{(Proposition \ref{prop:initialization})} Let $\mathcal{N}=\ensuremath{f_{D_{L},C_{L}}\circ\ldots\circ f_{D_{1},C_{1}}}$ be a $L$ layer DCNN. All matrices are initialized as described in the statement of the proposition. Let $y=D_{1}C_{1}x$. Lemma \ref{lem:covariance} shows that $cov(y_{i},y_{i'})=0$ for $i\neq i'$ and $var(y_{i})=\frac{2}{n}\left\Vert x\right\Vert _{2}^{2}$. For any $j\le L$, define $z^{j}=f_{D_{j},C_{j}}\circ\ldots\circ f_{D_{1},C_{1}}(x)$. By a recursive application of lemma \ref{lem:covariance}, we get that then $cov(z_{i}^{j},z_{i'}^{j})=0$ and $var(z_{i}^{j})=\frac{2}{n}\left\Vert x\right\Vert _{2}^{2}$. \end{proof} \begin{lem} \label{lem:covariance}Let $c_{1}\ldots c_{n},d_{1}\ldots d_{n},b_{1}\ldots b_{n}$ be random variables in $\mathbb{R}$ such that $c_{i}\sim\mathcal{N}(0,\sigma^{2})$, $b_{i}\sim\mathcal{N}(0,\sigma'^{2})$ and $d_{i}\sim\{-1,1\}$ uniformly. Define $C=circ(c_{1}\ldots c_{n})$ and $D=diag(d_{1}\ldots d_{n})$. Define $y=DCu$ and $z=CDu$ for some vector $u$ in $\mathbb{R}^{n}$. Also define $\bar{y}=y+b$ and $\bar{z}=z+b$. Then, for all $i$, the p.d.f. of $y_{i}$, $\bar{y}_{i}$, $z_{i}$ and $\bar{z}_{i}$ are symmetric. Also: \begin{itemize} \item Assume $u_{1}\ldots u_{n}$ is fixed. Then, we have for $i\neq i':$ \begin{align*} cov(y_{i},y_{i'}) & =cov(z_{i},z_{i'}) =cov(\bar{y}_{i},\bar{y}_{i'}) =cov(\bar{z}_{i},\bar{z}_{i'})=0\\ var(y_{i}) & =var(z_{i})=\sum_{j}u_{j}^{2}\sigma^{2} \\ var(\bar{y}_{i}) & =var(\bar{z}_{i})=\sigma'^2+\sum_{j}u_{j}^{2}\sigma^{2} \end{align*} \item Let $x_{1}\ldots x_{n}$ be random variables in $\mathbb{R}$ such that the p.d.f. 
of $x_{i}$ is symmetric for all $i$, and let $u_{i}=ReLU(x_{i})$. We have for $i\neq i':$ \begin{align*} cov(y_{i},y_{i'}) & =cov(z_{i},z_{i'}) =cov(\bar{y}_{i},\bar{y}_{i'}) =cov(\bar{z}_{i},\bar{z}_{i'})=0\\ var(y_{i}) & =var(z_{i})=\frac{\sigma^{2}}{2}\sum_{j}var(x_{j}) \\ var(\bar{y}_{i}) & =var(\bar{z}_{i})=\sigma'^2+\frac{\sigma^{2}}{2}\sum_{j}var(x_{j}) \end{align*} \end{itemize} \end{lem} \begin{proof} \emph{(Lemma \ref{lem:covariance})} By an abuse of notation, we write $c_{0}=c_{n},c_{-1}=c_{n-1}$ and so on. First, note that $y_{i}=\sum_{j=1}^{n}c_{j-i}u_{j}d_{j}$ and $z_{i}=\sum_{j=1}^{n}c_{j-i}u_{j}d_{i}$. Observe that each term $c_{j-i}u_{j}d_{j}$ and $c_{j-i}u_{j}d_{i}$ has a symmetric p.d.f. because of $d_{j}$ and $d_{i}$, respectively. Thus, $y_{i}$ and $z_{i}$ have symmetric p.d.f.s. Now let us compute the covariance. \begin{align*} cov(y_{i},y_{i'}) &= \sum_{j,j'=1}^{n}cov\left(c_{j-i}u_{j}d_{j},c_{j'-i'}u_{j'}d_{j'}\right) \\ \begin{split} &= \sum_{j,j'=1}^{n}\mathbb{E}\left[c_{j-i}u_{j}d_{j}c_{j'-i'}u_{j'}d_{j'}\right] \\ &\quad-\mathbb{E}\left[c_{j-i}u_{j}d_{j}\right]\mathbb{E}\left[c_{j'-i'}u_{j'}d_{j'}\right] \end{split} \end{align*} Observe that $\mathbb{E}\left[c_{j-i}u_{j}d_{j}\right]=\mathbb{E}\left[c_{j-i}u_{j}\right]\mathbb{E}\left[d_{j}\right]=0$ because $d_{j}$ is independent of $c_{j-i}u_{j}$. Also, observe that if $j\neq j'$ then $\mathbb{E}\left[d_{j}d_{j'}\right]=0$ and thus $\mathbb{E}\left[c_{j-i}u_{j}d_{j}c_{j'-i'}u_{j'}d_{j'}\right]=\mathbb{E}\left[d_{j}d_{j'}\right]\mathbb{E}\left[c_{j-i}u_{j}c_{j'-i'}u_{j'}\right]=0$. Thus, the only non-null terms are those for which $j=j'$. We get: \begin{align*} cov(y_{i},y_{i'}) & =\sum_{j=1}^{n}\mathbb{E}\left[c_{j-i}u_{j}d_{j}c_{j-i'}u_{j}d_{j}\right]\\ & =\sum_{j=1}^{n}\mathbb{E}\left[c_{j-i}c_{j-i'}u_{j}^{2}\right] \end{align*} Assume $u$ is a fixed vector.
Then, $var(y_{i})=\sum_{j=1}^{n}u_{j}^{2}\sigma^{2}$ and $cov(y_{i},y_{i'})=0$ for $i\neq i'$ because $c_{j-i}$ is independent of $c_{j-i'}$. Now assume that $u_{j}=ReLU(x_{j})$ where $x_{j}$ is a random variable. Clearly, $u_{j}^{2}$ is independent of $c_{j-i}$ and $c_{j-i'}$. Thus: \begin{align*} cov(y_{i},y_{i'}) & =\sum_{j=1}^{n}\mathbb{E}\left[c_{j-i}c_{j-i'}\right]\mathbb{E}\left[u_{j}^{2}\right] \end{align*} For $i\neq i'$, $c_{j-i}$ and $c_{j-i'}$ are independent, and thus $\mathbb{E}\left[c_{j-i}c_{j-i'}\right]=\mathbb{E}\left[c_{j-i}\right]\mathbb{E}\left[c_{j-i'}\right]=0$. Therefore, $cov(y_{i},y_{i'})=0$ if $i\neq i'$. Let us compute the variance. We get $var(y_{i})=\sum_{j=1}^{n}var(c_{j-i})\cdot\mathbb{E}\left[u_{j}^{2}\right]$. Because the p.d.f. of $x_{j}$ is symmetric, $\mathbb{E}\left[x_{j}^{2}\right]=2\mathbb{E}\left[u_{j}^{2}\right]$ and $\mathbb{E}\left[x_{j}\right]=0$. Thus, $var(y_{i})=\frac{1}{2}\sum_{j=1}^{n}var(c_{j-i})\cdot\mathbb{E}\left[x_{j}^{2}\right]=\frac{1}{2}\sum_{j=1}^{n}var(c_{j-i})\cdot var(x_{j})$. Finally, note that $cov(\bar{y}_{i},\bar{y}_{i'})=cov(y_{i},y_{i'})+cov(b_{i},b_{i'})$. This yields the covariances of $\bar{y}$. The derivation of $cov(z_{i},z_{i'})$ and $cov(\bar{z}_{i},\bar{z}_{i'})$ is nearly identical and is left to the reader. \end{proof} \paragraph{Non-linearity function} We empirically found that reducing the number of non-linearities in the network simplifies the training of deep neural networks. To support this claim, we conduct a series of experiments on various DCNNs with a varying number of ReLU activations (to reduce the number of non-linearities, we replace some ReLU activations with the identity function). In a second experiment, we replace the ReLU activations with Leaky-ReLU activations and vary the slope of the Leaky-ReLU (a higher slope means an activation function that is closer to a linear function).
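The networks used in these experiments can be sketched in a few lines of numpy (a minimal sketch, not the code used for the paper's experiments; the circulant convention $C_{ij}=c_{j-i}$ follows the proof of Lemma~\ref{lem:covariance}, and the \texttt{relu\_every} and \texttt{slope} knobs are the two quantities varied here). Running it with pure ReLUs also gives a quick Monte Carlo check of Proposition~\ref{prop:initialization}:

```python
import numpy as np

def circulant(c):
    """circ(c) with C[i, j] = c[(j - i) mod n], as in the proofs."""
    j = np.arange(len(c))
    return c[(j[None, :] - j[:, None]) % len(c)]

def dcnn(x, depth, rng, relu_every=1, slope=0.0):
    """DC layers with the proposed initialization (sigma' = 0); a (leaky)
    ReLU follows every `relu_every`-th layer, except the last one."""
    n, z = len(x), x
    for layer in range(1, depth + 1):
        c = rng.normal(0.0, np.sqrt(2.0 / n), n)   # sigma = sqrt(2/n)
        d = rng.choice([-1.0, 1.0], n)             # random +-1 diagonal
        z = d * (circulant(c) @ z)
        if layer % relu_every == 0 and layer < depth:
            z = np.where(z > 0, z, slope * z)      # slope = 0 is plain ReLU
    return z

# Monte Carlo check: with pure ReLUs, each output coordinate should have
# variance (2/n)||x||^2, independently of the depth.
rng = np.random.default_rng(0)
n, depth, trials = 8, 3, 10000
x = rng.normal(size=n)
outs = np.stack([dcnn(x, depth, rng) for _ in range(trials)])
emp = outs.var(axis=0).mean()
pred = 2.0 / n * (x @ x)
print(emp, pred)
```

With $n=8$ and depth 3, the empirical variance and the predicted $\frac{2}{n}\|x\|_2^2$ agree to within a few percent.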
The results of these experiments are presented in Figures~\ref{fig:cifar10_factor} and \ref{fig:cifar10_leaky_relu}. In Figure~\ref{fig:cifar10_factor}, ``ReLU(DC)'' means that we interleave a ReLU activation function after every diagonal-circulant block, whereas ``ReLU(DCDC)'' means that we interleave a ReLU activation every other block, etc. In both Figure~\ref{fig:cifar10_factor} and Figure~\ref{fig:cifar10_leaky_relu}, we observe that reducing the non-linearity of the networks makes it possible to train deeper networks. This is an interesting result, since we can use this technique to adjust the number of parameters in the network without facing training difficulties. We obtain a maximum accuracy of 0.56 with one ReLU every three layers and Leaky-ReLUs with a slope of 0.5. We hence rely on this setting in the experimental section. \section{Empirical evaluation} \label{section:exp} This experimental section aims at answering the following questions: \begin{itemize} \item[] {\bf Q6.1} -- How do DCNNs compare to other structured approaches such as ACDC or LDR? \item[] {\bf Q6.2} -- How do DCNNs compare to other compression based techniques? \item[] {\bf Q6.3} -- How do DCNNs perform in the context of large scale real-world machine learning applications? \end{itemize} \subsection{Comparison with other structured approaches (Q6.1)} \begin{figure*}[ht] \centering \subfigure[]{ \includegraphics[scale=0.35]{figures/acdc_regression.pdf} \label{fig:adcd_regression} } \subfigure[]{ \includegraphics[scale=0.35]{figures/acdc_cifar10.pdf} \label{fig:acdc_cifar10} } \caption{Comparison of DCNNs and ACDC networks on two different tasks. Figure~\ref{fig:adcd_regression} shows the evolution of the training loss on a regression task with synthetic data.
Figure~\ref{fig:acdc_cifar10} shows the test accuracy on the CIFAR-10 dataset.} \end{figure*} {\bf Comparison with ACDC \cite{moczulski2015acdc}.} In Section~\ref{related_work}, we discussed the differences between the ACDC framework and our approach from a theoretical perspective. In this section, we conduct experiments to compare the performance of DCNNs with neural networks based on ACDC layers. We first reproduce the experimental setting from \cite{moczulski2015acdc}, and compare both approaches using only linear networks (i.e. networks without any ReLU activations). The synthetic dataset was created to reproduce the linear regression experiment proposed by~\cite{moczulski2015acdc}: we draw $X$ and $W$ from a uniform distribution on $[-1, +1]$ and $\epsilon$ from a normal distribution with mean 0 and variance $0.01$, and the relationship between $X$ and $Y$ is defined by $Y = XW + \epsilon$. The results are presented in Figure~\ref{fig:adcd_regression}. In this simple setting, both architectures demonstrate good performance; however, DCNNs offer a better convergence rate. In Figure~\ref{fig:acdc_cifar10}, we compare neural networks with ReLU activations on CIFAR-10. We found that networks based only on ACDC layers are difficult to train and offer poor accuracy on CIFAR-10. (We tried different initialization schemes, including the one from the original paper and the one we propose in this paper.) Moczulski et al. \cite{moczulski2015acdc} manage to train a large VGG network; however, since such networks are generally highly redundant, the contribution of the structured layers is difficult to quantify. We also observe that adding a single dense layer improves the convergence rate of ACDC networks in the linear case, which explains the good results of \cite{moczulski2015acdc}. However, it is difficult to characterize the true contribution of the ACDC layers when the network involves a large number of other expressive layers.
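(For reference, the synthetic regression data described above can be generated as follows; the dimensions and sample count are our own illustrative assumptions, as the text does not specify them.)

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, d_in, d_out = 1024, 16, 16           # assumed dimensions

X = rng.uniform(-1.0, 1.0, (n_samples, d_in))   # inputs on [-1, +1]
W = rng.uniform(-1.0, 1.0, (d_in, d_out))       # ground-truth weights
eps = rng.normal(0.0, np.sqrt(0.01), (n_samples, d_out))  # variance 0.01
Y = X @ W + eps                                 # linear targets

# Sanity check: ordinary least squares recovers W up to the noise level.
W_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(np.abs(W_hat - W).max())
```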
In contrast, deep DCNNs can be trained and offer good performance without additional dense layers (these results are in line with our experiments on the \textit{YouTube-8M}\xspace dataset). We can conclude that DCNNs are able to model complex relations at a low cost. \begin{figure*}[ht] \centering \subfigure[]{ \hspace{0.5cm} \centering \input{graphs/cifar10_type.tex} \label{fig:cifar10_type} }\hspace{0.9cm} \subfigure[]{ \centering \input{graphs/scatterplot.tex} \label{fig:cifar10_with_channels_xp} \hspace{-1.5cm} } \caption{Figure~\ref{fig:cifar10_type}: network size vs. accuracy for dense networks, DCNNs (our approach), DTNNs (our approach), neural networks based on Toeplitz matrices, and neural networks based on low-rank matrices. DCNNs outperform alternative structured approaches. Figure~\ref{fig:cifar10_with_channels_xp} shows the accuracy of different structured architectures given the number of trainable parameters.} \end{figure*} {\bf Comparison with Dense networks, Toeplitz networks and Low Rank networks.} We now compare DCNNs with other state-of-the-art structured networks by measuring the accuracy on a flattened version of the CIFAR-10 dataset. Our baseline is a dense feed-forward network with a fixed number of weights (9 million weights). We compare it with DCNNs, DTNNs (see below), Toeplitz networks, and Low-Rank networks~\cite{8099498}. We first consider Toeplitz networks, i.e. stacked Toeplitz matrices interleaved with ReLU activations, since Toeplitz matrices are closely related to circulant matrices. Since Toeplitz networks have a different structure (they do not include diagonal matrices), we also experiment with DTNNs, a variant of DCNNs where all the circulant matrices have been replaced by Toeplitz matrices. Finally, we conduct experiments using networks based on low-rank matrices as they are also closely related to our work.
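The close relationship between circulant matrices and convolutions invoked here can be made concrete: with the index convention $C_{ij}=c_{j-i}$ used in our proofs, multiplying by $circ(c)$ is a circular cross-correlation, which an FFT evaluates in $O(n\log n)$ instead of $O(n^{2})$. A quick numpy check of this identity:

```python
import numpy as np

n = 8
rng = np.random.default_rng(0)
c, x = rng.normal(size=n), rng.normal(size=n)

j = np.arange(n)
C = c[(j[None, :] - j[:, None]) % n]   # circ(c): C[i, j] = c[(j - i) mod n]

y_dense = C @ x                        # O(n^2) dense product
# Correlation theorem: sum_j c_{j-i} x_j = IFFT(FFT(x) * conj(FFT(c)))_i
y_fft = np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(c))).real

print(np.allclose(y_dense, y_fft))     # True
```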
For each approach, we report the accuracy of several networks with a varying depth, ranging from 1 to 40 (DCNNs, Toeplitz networks) and from 1 to 30 (DTNNs). For low-rank networks, we used a fixed-depth network and increased the rank of each matrix from 7 to 40. We also tried to increase the depth of low-rank networks, but we found that deep low-rank networks are difficult to train, so we do not report the results here. We compare all the networks based on the number of weights, from 21K (0.2\% of the dense network) to 370K weights (4\% of the dense network), and we report the results in Figure~\ref{fig:cifar10_type}. First, we can see that the size of the networks correlates positively with their accuracy, which demonstrates successful training in all cases. We can also see that DCNNs achieve the maximum accuracy of 56\% with 20 layers ($\sim$ 200K weights), which is as good as the dense network with only 2\% of the number of weights. Other approaches also offer good performance but they are not able to reach the accuracy of a dense network. \begin{table} \centering \small \caption{\small LDR networks compared with DCNNs on a flattened version of CIFAR-10. DCNNs outperform all LDR configurations with fewer weights.$^2$} \begin{tabular}{lcc} \toprule \textbf{Architectures} & \textbf{\#Params} & \textbf{Acc.} \\ \midrule \textit{Dense} & \textit{9.4M} & \textit{0.562} \\ \textbf{\textit{DCNN $(5\ layers)$}} & \textbf{49K} & \textbf{0.543} \\ \textbf{\textit{DCNN $(2\ layers)$}} & \textbf{21K} & \textbf{0.536} \\ LDR--TD $(r = 2)$ & 64K & 0.511 \\ LDR--TD $(r = 3)$ & 70K & 0.473 \\ Toeplitz-like $(r=2)$ & 46K & 0.483 \\ Toeplitz-like $(r =3)$ & 52K & 0.496 \\ \bottomrule \end{tabular} \label{table:xp_ldr} \end{table} \begin{table} \centering \small \caption{\small A depth-2 scattering transform on CIFAR-10 followed by an LDR or DC layer.
Networks with DC layers outperform all LDR configurations with fewer weights.} \begin{tabular}{lcc} \toprule \textbf{Architectures} & \textbf{\#Params} & \textbf{Acc.} \\ \midrule \textbf{DC $(1\ layer)$} & \textbf{124K} & \textbf{0.757} \\ \textbf{DC $(3\ layers)$} & \textbf{217K} & \textbf{0.785} \\ \textbf{Ensemble x5 DC $(3\ layers)$} & \textbf{1.08M} & \textbf{0.811} \\ LDR-SD $(r=1)$ & 140K & 0.701 \\ LDR-SD $(r=10)$ & 420K & 0.728 \\ Toeplitz-like $(r=1)$ & 110K & 0.711 \\ Toeplitz-like $(r=10)$ & 388K & 0.720 \\ \bottomrule \end{tabular} \label{table:xp_ldr_scattering} \end{table} \footnotetext[2]{Remark: the numbers may differ from the original experiments by \cite{Thomas_NIPS2018_8119} because we use the original dataset instead of a monochrome version.} \noindent {\bf Comparison with LDR networks~\cite{Thomas_NIPS2018_8119}.} We now compare DCNNs with the LDR framework using the network configuration experimented with in the original paper: a single LDR structured layer followed by a dense layer. In the LDR framework, we can change the size of a network by adjusting the rank of the residual matrix, effectively capturing matrices whose structure is close to a known structure without matching it exactly (e.g., in the LDR framework, Toeplitz matrices can be encoded with a residual matrix of rank 2, so a matrix that can be encoded with a residual of rank 3 can be seen as Toeplitz-like). The results are presented in Table~\ref{table:xp_ldr} and demonstrate that DCNNs outperform all LDR networks in terms of both size and accuracy. {\bf Exploiting image features.} Dense layers and DCNNs are not designed to capture task-specific features such as the translation invariance inherently useful in image classification. We can further improve the accuracy of such general purpose architectures on image classification, without dramatically increasing the number of trained parameters, by stacking them on top of fixed (i.e.
non-trained) transforms such as the scattering transform \cite{mallat2010recursive}. In this section, we compare the accuracy of various structured networks, enhanced with the scattering transform, on an image classification task, and run comparative experiments on CIFAR-10. Our test architecture consists of a depth-2 scattering transform on the RGB images, followed by batch normalization and an LDR or DC layer. To vary the number of parameters of the Scattering+LDR architecture, we increase the rank of the matrix (stacking several LDR matrices quickly exhausted the memory). Figure~\ref{fig:cifar10_with_channels_xp} and Table~\ref{table:xp_ldr_scattering} show the accuracy of these architectures given the number of trainable parameters. First, we can see that the DCNN architecture benefits greatly from the scattering transform and is able to reach a competitive accuracy over 78\%. We can also see that scattering followed by a DC layer systematically outperforms scattering + LDR or scattering + Toeplitz-like with fewer parameters. \subsection{Comparison with other compression based approaches (Q6.2)} \begin{table} \centering \caption{Comparison with compression based approaches} \small \begin{tabular}{lcr} \toprule \multicolumn{1}{c}{\textbf{Architecture}} & \multicolumn{1}{c}{\textbf{\#Params}} & \textbf{Error (\%)} \\ \hline \\ \textit{LeNet \cite{Lecun98gradient-basedlearning}} & \textit{4 257 674} & \textit{0.61} \\ \multirow{2}[0]{*}{\textbf{DCNN}} & \textbf{25 620} & \textbf{1.74} \\ & \textbf{31 764} & \textbf{1.60} \\ \multirow{2}[0]{*}{HashNet \cite{Chen_Hashing_Trick}} & 46 875 & 2.79 \\ & 78 125 & 1.99 \\ \multirow{2}[0]{*}{Dark Knowledge \cite{44873}} & 46 875 & 6.32 \\ & 78 125 & 2.16 \\ \bottomrule \end{tabular}% \label{tab:mnist}% \end{table}% We provide a comparison with other compression based approaches such as HashNet \cite{Chen_Hashing_Trick}, Dark Knowledge \cite{44873} and the Fast Food Transform (FF) \cite{7410530}.
Table~\ref{tab:mnist} shows the test error of DCNNs against other known compression techniques on the MNIST dataset. We can observe that DCNNs easily outperform HashNet \cite{Chen_Hashing_Trick} and Dark Knowledge \cite{44873} with fewer parameters. The Fast Food (FF) architecture \cite{7410530} achieves better performance, but it relies on convolutional layers and uses only one Fast Food layer as the final softmax layer. \subsection{DCNNs for large-scale video classification on the \textit{YouTube-8M}\xspace dataset (Q6.3)} To understand the performance of deep DCNNs on large scale applications, we conducted experiments on the \textit{YouTube-8M}\xspace video classification task, with 3.8 million training examples, introduced by \cite{abu2016youtube}. Notice that we favour this experiment over ImageNet applications because modern image classification architectures involve a large number of convolutional layers, and compressing convolutional layers is out of our scope. Also, as mentioned earlier, testing the performance of DCNN architectures mixed with a large number of expressive layers makes little sense. \textit{YouTube-8M}\xspace includes two datasets describing 8 million labeled videos. Both datasets contain audio and video features for each video. In the first dataset ({\em aggregated}), all audio and video features have been aggregated every 300 frames. The second dataset ({\em full}) contains the descriptors for all the frames. To compare the models, we use the GAP metric (Global Average Precision) proposed by~\cite{abu2016youtube}. On the simpler {\em aggregated} dataset, we compared off-the-shelf DCNNs with a dense baseline with 5.7M weights. On the full dataset, we designed three new compact architectures based on the state-of-the-art architecture introduced by \cite{abu2016youtube}. \noindent {\bf Experiments on the {\em aggregated} dataset with DCNNs:} We compared DCNNs with a dense baseline with 5.7 million weights.
The goal of this experiment is to discover a good trade-off between depth and model accuracy. Table~\ref{table:youtube_agg_xp} shows the results of our experiments on the {\em aggregated} \textit{YouTube-8M}\xspace dataset in terms of number of weights, compression rate and GAP. We can see that the compression ratio offered by the circulant architectures is high, at the cost of a small decrease in the GAP measure. The 32-layer DCNN is 46 times smaller than the original model in terms of number of parameters while achieving comparable performance. \begin{table} \centering \caption{ \small This table shows the GAP score for the \textit{YouTube-8M}\xspace dataset with DCNNs. We can see a large increase in the score with deeper networks.} \small \begin{tabular}{lccc} \toprule \textbf{Architecture} & \textbf{\#Weights} & \textbf{GAP@20} \\ \hline \\ \textit{original} & \textit{5.7M} & \textit{0.773} \\ 4 DC & 25 410 (\textit{\bf 0.44}) & 0.599 \\ 32 DC & 122 178 \textit{(2.11)} & 0.685 \\ 4 DC + 1 FC & 4.46M \textit{(77)} & \textbf{0.747} \\ \hline \end{tabular} \label{table:youtube_agg_xp} \end{table} \begin{table} \centering \caption{ \small This table shows the GAP score for the \textit{YouTube-8M}\xspace dataset with different layers represented with our DC decomposition.} \small \begin{tabular}{lccc} \toprule \textbf{Architecture} & \textbf{\#Weights} & \textbf{GAP@20} \\ \hline \\ \textit{original} & \textit{45M} & \textit{0.846} \\ DBoF with DC & 36M (\textit{80}) & 0.838 \\ FC with DC & 41M (\textit{91}) & \textbf{0.845} \\ MoE with DC & 12M (\textit{\bf 26}) & 0.805 \\ \hline \end{tabular} \label{table:youtube_full_xp} \end{table} \noindent {\bf Experiments with the DCNN Deep Bag-of-Frames architecture:} The Deep Bag-of-Frames architecture can be decomposed into three blocks of layers, as illustrated in
Figure~\ref{fig:archi_youtube}. The first block of layers, composed of the Deep Bag-of-Frames embedding (DBoF), computes an embedding of the input frames in order to build a simple representation of each video. A second block of fully connected layers (FC) reduces the dimensionality of the output of the embedding and merges the resulting outputs with a concatenation operation. Finally, the classification block uses a combination of Mixtures-of-Experts (MoE)~\cite{716791,45619} and Context Gating~\cite{DBLP:journals/corr/MiechLS17} to calculate the final class probabilities. Table~\ref{table:youtube_full_xp} shows the results on the full dataset in terms of number of weights, size of the model (MB) and GAP. Replacing the DBoF block reduces the size of the network without impacting the accuracy. We obtain the best compression ratio by replacing the MoE block with DCNNs: 26\% of the size of the original model, with a GAP score of 0.805 (95\% of the score obtained with the original architecture). We conclude that DCNNs are both theoretically sound and of practical interest in real, large scale applications. \begin{figure}[ht!] \centering \scalebox{.72}{\input{youtube/archi_youtube.tex}} \caption{This figure shows the state-of-the-art neural network architecture, initially proposed by \cite{abu2016youtube} and later improved by~\cite{DBLP:journals/corr/MiechLS17}, used in our experiments. } \label{fig:archi_youtube} \end{figure} \paragraph{Architectures \& Hyper-Parameters:} For the first set of experiments (e.g. experiments on CIFAR-10), we train all networks for 200 epochs with a batch size of 200 and Leaky-ReLU activations with varying slopes. We minimize the cross-entropy loss with the Adam optimizer and use a piecewise-constant learning rate starting at $5 \times 10^{-5}$ and decreased to $2.5\times10^{-5}$, $5\times10^{-6}$ and $1\times10^{-6}$ after 40K, 60K and 80K steps respectively.
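The piecewise-constant learning rate schedule just described can be expressed as a small helper function (a sketch of the schedule, not framework-specific code):

```python
def lr_schedule(step):
    """Piecewise-constant learning rate for the CIFAR-10 experiments:
    5e-5 initially, then decreased at 40K, 60K and 80K steps."""
    boundaries = (40_000, 60_000, 80_000)
    rates = (5e-5, 2.5e-5, 5e-6, 1e-6)
    for boundary, rate in zip(boundaries, rates):
        if step < boundary:
            return rate
    return rates[-1]

print(lr_schedule(50_000))
```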
For the \textit{YouTube-8M}\xspace dataset experiments, we built a neural network based on the state-of-the-art architecture initially proposed by \cite{abu2016youtube} and later improved by~\cite{DBLP:journals/corr/MiechLS17}. Note that no convolutional layer is involved in this application, since the input vectors are embeddings of video frames processed using state-of-the-art convolutional neural networks trained on ImageNet. We trained our models with the cross-entropy loss and used the Adam optimizer with a learning rate of 0.0002 and an exponential decay of 0.8 every 4 million examples. All fully connected layers are composed of 512 units. The DBoF, NetVLAD and NetFV blocks use cluster sizes of 8192, 64 and 64 respectively for video frames, and 4096, 32 and 32 for audio frames. We used 4 mixtures for the MoE layer. We used all the available 300 frames for the DBoF embedding. In order to stabilize and accelerate the training, we used batch normalization before each nonlinear activation, as well as gradient clipping. \section{Conclusion}\label{section:conclusion} This paper deals with the training of diagonal circulant neural networks. To the best of our knowledge, training such networks with a large number of layers had not been done before. We also endowed this kind of model with theoretical guarantees, hence enriching and refining previous theoretical work from the literature. More importantly, we showed that DCNNs outperform their competing structured alternatives, including the very recent general approach based on LDR networks. Our results suggest that stacking diagonal circulant layers with non-linearities improves the convergence rate and the final accuracy of the network. Formally proving these statements is one direction for future work.
We also believe that circulant matrices deserve particular attention in deep learning because of their strong ties with convolutions: a circulant matrix operator is equivalent to the convolution operator with circular padding (as shown in [5]). This fact makes any contribution to the area of circulant matrices particularly relevant to the field of deep learning, with impacts beyond the problem of designing compact models. As future work, we would like to generalize our results to deep convolutional neural networks. \bibliographystyle{ecai}
\section{Steady state response and Josephson harmonics in the RSJ} In this section, we present a detailed calculation of the steady-state amplitudes of the Josephson oscillation and its harmonics ($\omega_{J}, 2\omega_{J}, \ldots$) in an RSJ. This requires a self-consistent evaluation of the static and rf characteristics of the junction, since through the second Josephson relation $V/(I_{0} R) = \omega_{J}/\omega_{0}$ (where $\omega_{0} = I_{0} R/\varphi_{0}$), there exists an intimate connection between the Josephson oscillation frequency and the static voltage that develops across the junction in the running state. \par For evaluating the steady state response of the RSJ, we consider Eq. (4) of the main text with \begin{eqnarray} \varphi (t) = \omega_{J} t + \delta\varphi (t), \end{eqnarray} where $\delta\varphi (t)$ denotes the oscillatory part of the evolution due to Josephson oscillations (see Eq. (5) of the main text). As explained in the main text, we consider the effect of these internally generated oscillations on the phase evolution perturbatively by expanding the junction characteristics as a truncated series in $\delta\varphi(t)$ about the operating point set by $\omega_{J}t$. Furthermore, for computational efficiency, we expand both the Josephson oscillation amplitudes and the Josephson frequency corresponding to the static voltage across the junction, in terms of the inverse bias parameter $x = I_{0}/I_{B} = \omega_{0}/\omega_{B}\ll 1$ \footnotemark \;as \footnotetext{The symbol $\omega_{B} \equiv I_{B} R/\varphi_{0}$ is a parametrization of the bias current on a frequency scale.} \begin{eqnarray} & & p_{k}^{I, Q} = \sum_{a=0}^{2K -1} p_{k,a}^{I,Q} x^{a}\\ \label{Eqpumpsexample} & & \frac{V}{I_{B}R} = \frac{\omega_{J}}{\omega_{B}} \equiv \sum_{b=0}^{2K} v_{b} x^{b}, \end{eqnarray} where $K$ denotes the number of Josephson harmonics included in the analysis. \par For illustrative purposes, let us consider the $K=1$ (first Josephson harmonic) case.
First, we use Eq. (\ref{Eqpumpsexample}) to calculate the voltage term in Eq. (4), and obtain \begin{eqnarray} \dot{\varphi}= \omega_{J} [1- ( p_{1,0}^{I}+ p_{1,1}^{I} x) \sin\omega_{J} t + ( p_{1,0}^{Q}+ p_{1,1}^{Q} x) \cos\omega_{J} t]. \nonumber\\ \label{Eqvoltageexample} \end{eqnarray} Similarly, for the current through the junction, we obtain \begin{eqnarray} \omega_{0} \sin\varphi &=& \omega_{0} \sin \omega_{J} t + \omega_{0}\cos (\omega_{J}t) [ ( p_{1,0}^{I}+ p_{1,1}^{I} x) \cos\omega_{J} t \nonumber\\ & & \qquad \qquad+ ( p_{1,0}^{Q}+ p_{1,1}^{Q} x) \sin\omega_{J} t]. \nonumber\\ \label{Eqcurrentexample} \end{eqnarray} Using Eqs. (\ref{Eqcurrentexample}) and (\ref{Eqvoltageexample}) in Eq. (4) and collecting terms oscillating at $\omega_{J}$, we obtain \begin{eqnarray} (p_{1,0}^{I} + p_{1,1}^{I} x) v_{0} = x \\ p_{1,0}^{Q} + p_{1,1}^{Q} x = 0, \end{eqnarray} where we have separated the contributions of the two quadratures to the amplitude of the harmonic. This gives $p_{1,0}^{I} = 0, \;p_{1,1}^{I} = v_{0}^{-1}, \;p_{1,0}^{Q} = 0, \; p_{1,1}^{Q} =0$. Using these to evaluate the static response from Eq. (4), we obtain \begin{eqnarray} & & (v_{0} + v_{1} x + v_{2} x^{2} ) = 1 - \frac{x^{2}}{2} \nonumber\\ \Rightarrow & & v_{0} = 1; \; v_{1} = 0; \; v_{2} = -\frac{1}{2}. \end{eqnarray} Substituting the values of $v_{b}$'s in the expression for pump amplitudes gives \begin{subequations} \begin{align} p_{1,0}^{I} &=0; \quad p_{1,1}^{I} = 1\\ p_{1,0}^{Q} &=0; \quad p_{1,1}^{Q} = 0 \end{align} \label{Eqpumpexample1}% \end{subequations} and hence the amplitude of the first Josephson harmonic as $x \cos\omega_{J} t$.
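These perturbative amplitudes can be cross-checked by direct numerical integration. Assuming Eq. (4) of the main text is the standard overdamped RSJ equation $\dot{\varphi}=\omega_{B}-\omega_{0}\sin\varphi$ (a sketch in units where $\omega_{B}=1$, so that $x=\omega_{0}$), the time-averaged voltage should approach the exact result $\sqrt{1-x^{2}}\approx 1-x^{2}/2$:

```python
import math

def mean_voltage(x, t_max=2000.0, dt=0.01):
    """RK4 integration of dphi/dt = 1 - x*sin(phi) (omega_B = 1);
    returns the time-averaged voltage <dphi/dt> = (phi(T) - phi(0)) / T."""
    f = lambda phi: 1.0 - x * math.sin(phi)
    phi = 0.0
    for _ in range(int(t_max / dt)):
        k1 = f(phi)
        k2 = f(phi + 0.5 * dt * k1)
        k3 = f(phi + 0.5 * dt * k2)
        k4 = f(phi + dt * k3)
        phi += dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    return phi / t_max

x = 0.2
v = mean_voltage(x)
print(v, math.sqrt(1.0 - x**2), 1.0 - x**2 / 2.0)
```

For $x=0.2$ the three values agree to within a few times $10^{-4}$, the residual difference between $\sqrt{1-x^{2}}$ and its truncation being of order $x^{4}/8$.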
It should be noted that the resultant expression for the static voltage across the junction \begin{eqnarray} \frac{V}{I_{B}R} = \frac{\omega_{J}}{\omega_{B}} = 1 - \frac{x^{2}}{2} \end{eqnarray} yields the leading order term from the expansion of the exact expression in the parameter $I_{0}/I_{B}$, \begin{eqnarray} \frac{V}{I_{B}R} = \left(1 - \frac{I_{0}^{2}}{I_{B}^{2}} \right)^{1/2} = 1 - \frac{1}{2} \frac{I_{0}^{2}}{I_{B}^{2}} +\mathcal{O} \left(\frac{I_{0}^{4}}{I_{B}^{4}}\right), \end{eqnarray} thus demonstrating the consistency of our evaluation scheme. Similarly, on going to the next order of expansion (cubic) in $\delta\varphi$ and including the second Josephson harmonic in the analysis, we obtain, \begin{subequations} \begin{align} p_{1,0}^{I} &=0; & p_{1,1}^{I} &= 1; & p_{1,2}^{I} &= 0; & p_{1,3}^{I} &= \frac{1}{4}\\ p_{1,0}^{Q} &=0; & p_{1,1}^{Q} &= 0; & p_{1,2}^{Q} &= 0; & p_{1,3}^{Q} &= 0 \\ p_{2,0}^{I} &=0; & p_{2,1}^{I} &= 0; & p_{2,2}^{I} &= 0; & p_{2,3}^{I} &= 0 \\ p_{2,0}^{Q} &=0; & p_{2,1}^{Q} &=0; & p_{2,2}^{Q} &= -\frac{1}{4}; & p_{2,3}^{Q} &=0. \end{align} \label{Eqpumpexample2}% \end{subequations} These terms give an expression for the second Josephson harmonic as $ -\frac{x^{2}}{4} \sin 2\omega_{J} t$. The corresponding expression for the static voltage is \begin{eqnarray} \frac{V}{I_{B}R} = \frac{\omega_{J}}{\omega_{B}} = 1 - \frac{x^{2}}{2} - \frac{x^{4}}{8}, \end{eqnarray} now correct to fourth order in $x$. \section{Scattering coefficients of the RSJ} To derive the scattering matrix of the RSJ, we follow the method described in \cite{ArchanaSQUID} and first obtain an admittance matrix $\mathbb{Y}_{J}$ of the junction using the pump amplitudes obtained in Eq.
(8): \begin{widetext} \begin{eqnarray} \mathbb{Y}_{J} = \left( \begin{array}{ccccc} 0 && - \frac{i x}{2 (1+\Omega_{m})} \left[1 + \frac{x^{2}}{4}\left( \frac{1- \Omega_{m}}{1+\Omega_{m}}\right)\right] && \frac{i x}{2 (1-\Omega_{m})} \left[1 + \frac{x^{2}}{4}\left(\frac{1+ \Omega_{m}}{1-\Omega_{m}}\right)\right] \\ \\ - \frac{i x}{2\Omega_{m}}\left(1 - \frac{x^{2}}{4}\right) && 0 && - \frac{x^{2}}{4 (1- \Omega_{m})}\\ - \frac{i x}{2\Omega_{m}}\left(1 - \frac{x^{2}}{4}\right) & & - \frac{x^{2}}{4 (1+ \Omega_{m})} && 0 \end{array} \right). \label{EqYmat} \end{eqnarray} \end{widetext} Here, all admittances are normalized with respect to the characteristic admittance $Z_{C}^{-1} = R^{-1}$ corresponding to the shunt conductance. From this, it is straightforward to obtain the scattering matrix of the RSJ, correct to cubic order, by using the identity \cite{Pozar} \begin{eqnarray} \mathbb{S} &=& \mathbb{W}^{-1}. (1+\mathbb{Y}_{J})^{-1}(1-\mathbb{Y}_{J}) .\mathbb{W} \end{eqnarray} with coefficients \begin{subequations} \begin{align} r_{m} &= 1 + \frac{x^{2}}{1- \Omega_{m}^{2}},\\ r_{\pm} & = 1 \mp \frac{x^{2}}{2 \Omega_{m} (1\pm \Omega_{m})},\\ t_{d} &= \frac{-i x}{\sqrt{\Omega_{m}(1 + \Omega_{m})}}\left(1 + \frac{x^{2}}{4} \frac{3 + \Omega_{m}^{2}}{1-\Omega_{m}^{2}} \right), \\ t_{u} & = \frac{-i x}{\sqrt{\Omega_{m}(1 + \Omega_{m})}}\left(1 + \frac{x^{2}}{4} \frac{1 - \Omega_{m}}{1+\Omega_{m}} \right), \\ s_{d} &= \frac{i x}{\sqrt{\Omega_{m}(1 - \Omega_{m})}}\left(1 + \frac{x^{2}}{4} \frac{3 + \Omega_{m}^{2}}{1-\Omega_{m}^{2}} \right),\\ s_{u} &= \frac{-i x}{\sqrt{\Omega_{m}(1 - \Omega_{m})}}\left(1 + \frac{x^{2}}{4} \frac{1 + \Omega_{m}}{1-\Omega_{m}} \right),\\ v_{\pm\mp}&= \pm \frac{x^{2}}{2\Omega_{m}} \sqrt{\frac{1 \mp \Omega_{m}}{1 \pm \Omega_{m}}}. 
\end{align} \label{EqRSJScoeffs}% \end{subequations} Here, $\Omega_{m} = \omega_{m}/\omega_{B}$, where $\omega_{B} = 2 \pi I_{B} R/\Phi_{0}$, and the diagonal matrix \begin{eqnarray*} \mathbb{W} = {\rm diag}(\sqrt{|\Omega_{m}|}, \sqrt{|\Omega_{m}+\Omega_{J}|}, \sqrt{|\Omega_{m} - \Omega_{J}|}) \end{eqnarray*} transforms the basis vectors from dimensionless current amplitudes $I[\omega]$ into the relevant photon fluxes $a[\omega] = I[\omega]/\sqrt{\hbar|\omega| R}$. Figure 4(b) is a plot of the relative asymmetry obtained as a difference between net up- and downconverted amplitudes, normalized to the total converted power, calculated from the coefficients in Eqs. (\ref{EqRSJScoeffs}). \par The importance of higher pump harmonics is underscored by the degree of the terms that break the symmetry between the up and down frequency conversion amplitudes. The scattering coefficients are symmetric to leading order in the expansion parameter $x$; the lowest order asymmetric term is of order $x^{3}$ as shown by Eqs. (\ref{EqRSJScoeffs}); this necessarily involves higher order mixing products mediated by the second Josephson harmonic of strength $\sim x^{2}$ [Eq. (\ref{Eqpumpexample2})]. \subsection{Fluctuation Spectrum of the RSJ} We now show a representative leading order calculation of the noise spectrum of the RSJ, using the matrix method introduced here. Consider the admittance matrix derived for the junction in Eq. (\ref{EqYmat}) to leading order in $x$. The total admittance ($\mathbb{Y}_{T}$) may be calculated as \begin{eqnarray} \mathbb{Y}_{T} = (1 + \mathbb{Y}_{J}), \end{eqnarray} where $\mathbb{Y}_{J}$ is normalized with respect to the characteristic admittance $Z_{C}^{-1} = R^{-1}$.
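The matrix algebra in this subsection is simple enough to verify numerically. A sketch (keeping only the $O(x)$ entries of $\mathbb{Y}_{J}$ from Eq. (\ref{EqYmat}); the values of $x$ and $\Omega_{m}$ are chosen here purely for illustration):

```python
import numpy as np

x, Om = 0.05, 0.3   # illustrative values of the bias parameter and Omega_m

# O(x) admittance matrix Y_J in the (w_m, w_m + w_J, w_m - w_J) basis
Y = np.zeros((3, 3), dtype=complex)
Y[0, 1] = -1j * x / (2.0 * (1.0 + Om))
Y[0, 2] = +1j * x / (2.0 * (1.0 - Om))
Y[1, 0] = -1j * x / (2.0 * Om)
Y[2, 0] = -1j * x / (2.0 * Om)

Yt = np.eye(3) + Y                       # total admittance Y_T = 1 + Y_J
det = np.linalg.det(Yt)
adj = np.linalg.inv(Yt) * det            # adjugate: Z_T without the 1/det prefactor

print(det, 1.0 - x**2 / (2.0 * (1.0 - Om**2)))   # determinant
print(adj[0, 1], 1j * x / (2.0 * (1.0 + Om)))    # first-row entries of Z_T
print(adj[0, 2], -1j * x / (2.0 * (1.0 - Om)))
```

The determinant reproduces $1-x^{2}/[2(1-\Omega_{m}^{2})]$, and the first row of the adjugate reproduces the corresponding entries of the impedance matrix, with the remaining entries agreeing to first order in $x$.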
We can thus obtain the total impedance matrix of the RSJ, $Z_{T}$, as \begin{eqnarray} Z_{T} = \frac{1}{{\rm det} [\mathbb{Y}_{T}]} \left( \begin{array}{ccccc} 1 & & \frac{i x}{2 (1 + \Omega_{m})} & & \frac{-i x}{2 (1-\Omega_{m})} \\ \frac{i x}{2 \Omega_{m}} & & 1 & & 0\\ \frac{i x}{2\Omega_{m}} & & 0 & & 1 \end{array} \right), \label{Zmat} \end{eqnarray} where we have considered only terms to first order in $x$. The first row of $Z_{T}$ gives the impedance contribution at the modulation frequency $\omega_{m}$. We note that, in the limit of zero modulation frequency ($\Omega_{m} \rightarrow 0$), \begin{eqnarray} ({\rm det} [\mathbb{Y}_{T}])^{-1} &=& \left[1 - \frac{x^{2}}{2 (1 - \Omega_{m}^{2})}\right]^{-1} \nonumber\\ &=& 1 + \frac{x^{2}}{2} + \mathcal{O}(x^{4}). \label{EqdynRes} \end{eqnarray} Equation (\ref{EqdynRes}) corresponds to the dynamic resistance $R_{D}$ across the junction since, to first order in perturbation, we have \begin{eqnarray*} v_{0} = \frac{V_{\rm dc}}{I_{0}R} =\frac{1}{x} - \frac{x}{2}, \label{dc} \end{eqnarray*} leading to \begin{eqnarray} R^{-1} R_{D} \equiv R^{-1} \frac{d V_{\rm dc}}{d I_{B}} &=& \frac{d v_{0}}{d (1/x)} = 1 + \frac{x^{2}}{2}. \label{dynresis} \end{eqnarray} Thus, in this limit, we can write the power spectrum of the voltage fluctuations for the RSJ as \begin{eqnarray} S_{VV} [\omega_{m}] &=& |z_{11}|^{2} S_{II}[\omega_{m}] + |z_{12}|^{2}S_{II}[\omega_{J}+ \omega_{m}] \nonumber\\ & & \quad + |z_{13}|^{2}S_{II}[-\omega_{J}+\omega_{m}]\\ &=& \left[S_{II}[\omega_{m}] + \frac{x^{2}}{4} (S_{II}[\omega_{J}] + S_{II}[-\omega_{J}])\right] R_{D}^{2} \nonumber\\ &=& \left[\frac{2 k_{B} T}{R} + \frac{I_{0}^{2}}{I_{B}^{2}} \frac{\hbar \omega_{J}}{2 R} \coth\left(\frac{\hbar \omega_{J}}{2 k_{B} T}\right)\right] R_{D}^{2}. \nonumber\\ \label{noise} \end{eqnarray} Here, in the second step we have used the coefficients in the first row of $Z_{T}$ calculated in Eq.
(\ref{Zmat}) and the identity $S_{II}[\omega]+ S_{II}[-\omega] = 2\bar{S}_{II} = 2 R^{-1} \hbar \omega \coth(\hbar \omega/2 k_{B}T)$ \cite{ClerkRMP}. Equation (\ref{noise}) is the well known result \cite{Likharev, SQUIDvol1} showing that the noise appearing near zero frequency in the RSJ involves two contributions: (i) a direct contribution from the input Johnson noise of the resistor and (ii) noise that is downconverted from the Josephson harmonics and appears at the input. The second contribution is appropriately weighted by the square of the bias parameter, $x^{2}$. This can be easily understood in the scattering formalism presented here: the strength of the internally generated Josephson frequency component, which acts as the pump for mixing down the high frequency noise, is determined solely by the bias condition. The downconverted amplitude scales directly with the pump strength, a result well known in the literature on parametric systems. Consequently, the intensity of the noise fluctuations, which scales as the square of the amplitude, is weighted by $x^{2}$ [cf. Eq. (\ref{Eqpumpexample1})]. Extending this analysis to higher orders provides an analytical method to quantify the contribution of higher harmonics and their sidebands to the noise near zero frequency. \\ \section{Numerical simulation of RSJ mixing} \begin{figure} \includegraphics[width=\columnwidth]{FigS1.pdf}\\ \caption{Numerical calculation for the RSJ. (a) A representative spectral density of voltage fluctuations across the RSJ, calculated numerically for $I_{B} = 1.5\; I_{0}$. (b) Phases of different Josephson harmonics obtained from a direct numerical integration of Eq. (\ref{EqRSJnum}) with $i_{\rm RF}(\tau)=0$. (c) The asymmetry obtained from the numerically calculated spectrum, in the presence of an rf source term $i_{\rm RF}(\tau)$ as given in Eq. (\ref{EqSignum}).
It is calculated as the difference between the voltage spectral densities at the modulation frequency $\Omega_{m}$ and the sideband frequencies $(\pm 1 + \Omega_{m})$, respectively, also indicated with red arrows in (a).} \label{FigRSJnumerics}% \end{figure}% One can rewrite Eq. (4) as \begin{eqnarray} \sqrt{i_{B}^{2} -1} \frac{d \varphi}{d\tau} + \sin\varphi = i_{B} + i_{\rm RF}(\tau), \label{EqRSJnum} \end{eqnarray} where $i_{B} = I_{B}/I_{0}$ is the dimensionless static bias current and $\tau = \omega_{J}t$ is the dimensionless time measured in units of the inverse Josephson frequency $\omega_{J}^{-1}$. The term $i_{\rm RF}(\tau)$ denotes the rf drive at the signal and sideband frequencies of interest: \begin{eqnarray} i_{\rm RF}(\tau) &=& m(\tau) \cos(\Omega_{m}\tau) + s_{+}(\tau)\cos [(1+\Omega_{m})\tau] \nonumber\\ & & \qquad \qquad+ s_{-}(\tau) \cos [(1-\Omega_{m})\tau]. \label{EqSignum} \end{eqnarray} Here, $\Omega_{m} = \omega_{m}/\omega_{J}$ denotes the dimensionless frequency of the modulation signal and $1 \pm \Omega_{m}$ denote the dimensionless sideband frequencies. To be specific, we assume $m(\tau) = s_{\pm}(\tau) = 0.1$. \par Figure \ref{FigRSJnumerics} shows the results obtained by numerical integration of Eq. (\ref{EqRSJnum}), averaged over $\sim 10^{3}$ Josephson periods. An example of a typical spectrum is shown in Fig. \ref{FigRSJnumerics}(a). Figure \ref{FigRSJnumerics}(b) shows that as the bias current $I_{B}$ is decreased towards $I_{0}$, the phase configuration of the Josephson harmonics approaches that given in Eq. (\ref{Eqpumpexample2}). Concomitantly, the asymmetry between the net output powers at the modulation frequency and the sidebands grows, as shown in Fig. \ref{FigRSJnumerics}(c). The essential physics presented analytically is thus confirmed qualitatively by the numerical calculation.
For quantitative comparison, however, one would require a more detailed study that takes into account higher Josephson harmonics ($K > 2$) in the perturbative series expansion, and isolates the effect of reflections and higher order mixing products present in the full numerical calculation.
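The integration scheme described above is straightforward to reproduce. The following sketch is an illustrative reimplementation, not the code used for Fig. \ref{FigRSJnumerics}; the bias value, step size, and integration window are arbitrary choices. It integrates Eq. (\ref{EqRSJnum}) with $i_{\rm RF}(\tau) = 0$ by fourth-order Runge--Kutta and checks that the time-averaged phase velocity $\langle d\varphi/d\tau \rangle$ equals unity, i.e., that in the rescaled time the junction oscillates exactly at the Josephson frequency $\omega_{J}$.

```python
import math

def rsj_rhs(phi, i_b, i_rf=0.0):
    # Eq. (EqRSJnum): sqrt(i_B^2 - 1) dphi/dtau + sin(phi) = i_B + i_RF(tau)
    return (i_b + i_rf - math.sin(phi)) / math.sqrt(i_b * i_b - 1.0)

def mean_phase_velocity(i_b, tau_end=400.0 * math.pi, dt=5e-3):
    """RK4 integration of the undriven RSJ phase equation; returns the
    time-averaged dphi/dtau, which should converge to 1 for any i_b > 1."""
    phi = 0.0
    n = int(tau_end / dt)
    for _ in range(n):
        k1 = rsj_rhs(phi, i_b)
        k2 = rsj_rhs(phi + 0.5 * dt * k1, i_b)
        k3 = rsj_rhs(phi + 0.5 * dt * k2, i_b)
        k4 = rsj_rhs(phi + dt * k3, i_b)
        phi += dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    return phi / (n * dt)

# i_B = 1.5 I_0, the bias used for the representative spectrum
mean_rate = mean_phase_velocity(1.5)
```

Adding the drive term of Eq. (\ref{EqSignum}) and Fourier-transforming the resulting $d\varphi/d\tau$ record would then produce spectra of the kind shown in Fig. \ref{FigRSJnumerics}(a).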
\section{Introduction} \label{sec:Intro} There is an extensive literature dedicated to well-posedness of the Cauchy problem for the incompressible Euler equations of hydrodynamics. The first rigorous results on the local in time existence and uniqueness of solutions go back to the papers of Gyunter \cite{Gu} and Lichtenstein \cite{Li} in the late 1920's while the first global result was proved in 2D by Wolibner \cite{Wo} in 1933. Nevertheless, our understanding of the Cauchy problem remains incomplete especially in connection with the phenomenon of turbulence and persistence of smooth solutions in 3D for all time. Another important problem is to identify an optimal function space in which the Cauchy problem is locally well-posed. In this area substantial progress has been made in recent years. For example, Bardos and Titi \cite{BT} used the shear flow of DiPerna and Majda to construct solutions in 3D with an instantaneous loss of regularity in H\"{o}lder $C^\alpha$ and Zygmund $B^1_{\infty, \infty}$ spaces. More precisely, they found $C^\alpha$ initial data for which the corresponding (weak) solution does not belong to $C^\beta$ for any $1>\beta > \alpha^2$ and any $t>0$. This technique has also been used to obtain similar results in the Triebel-Lizorkin $F^1_{\infty, 2}$ space by Bardos, Lemarie and Titi and in the logarithmic Lipschitz spaces $\mathrm{logLip}^\alpha$ by the authors \cite{MY}. More recently, Bourgain and Li \cite{BL} using a combination of Lagrangian and Eulerian techniques obtained strong local ill-posedness results in the Sobolev spaces $W^{n/p+1,p}$ for any $1<p<\infty$ and in the Besov spaces $B^{n/p+1}_{p,q}$ with $1<p<\infty$ and $1<q\leq\infty$ and $n=2$ or $3$. In particular, they settled the borderline Sobolev case $H^{n/2+1}$. 
However, as far as we are aware the problem of local well-posedness in the classical $C^1$ space in both space dimensions as well as other spaces such as $B^1_{\infty, q}$ with $1< q < \infty$ has remained open; cf. comments on criticality of the $C^1$ space in \cite{BT}; see also the papers of Pak and Park \cite{PP} and Takada \cite{Ta}. Our goal in this paper is to settle the former case in 2D by showing that the Euler equations are locally ill-posed in $C^1(\mathbb{R}^2)$. Recall that a Cauchy problem is locally well-posed in a Banach space $X$ (in the sense of Hadamard) if for any initial data in $X$ there exist $T>0$ and a unique solution which persists in the space $C([0,T), X)$ and which depends continuously on the data. Otherwise, the problem is said to be ill-posed. The Cauchy problem for the Euler equations in 2D is usually written in the form \begin{align} \label{eq:Euler-u} &u_t + u{\cdot}\nabla u + \nabla\pi = 0, \qquad t \geq 0, \, x \in \mathbb{R}^2 \\ \label{eq:Euler-uu} &\mathrm{div}\, u = 0 \\ \label{eq:Euler-u-ic} &u(0) = u_0, \end{align} where $u$ is the velocity vector field and $\pi$ is the pressure function of the fluid. Our approach is inspired by the methods of Bourgain and Li \cite{BL} who for suitable initial vorticity data constructed Lagrangian flows with large deformation gradients and used them to show that there exist nearby solutions which lose their regularity instantaneously in time through norm inflation. The initial data has an odd symmetry and a stagnation point at the origin. Such properties also seem to play an important role in a paper of Kiselev and \v{S}verak \cite{KS}. Additional information about other recent ill-posedness results can be found in both of these references. 
We mention in passing yet another manifestation of local ill-posedness that occurs for the Euler as well as the (supercritical) quasi-geostrophic equations in which certain initial data defined in the periodic case by lacunary series lead to solutions that fail to be continuous in time when considered as curves in the classical H\"{o}lder $C^{1+\alpha}$ spaces for $0<\alpha<1$; we refer to Cheskidov and Shvydkoy \cite{CS} and \cite{MY} for details. The main result of the paper can be succinctly stated as follows \begin{theorem} \label{thm:0} The 2D incompressible Euler equations \eqref{eq:Euler-u} are locally ill-posed in the space $C^1$. \end{theorem} Before giving a more precise statement it will be convenient to use the vorticity formulation of the Euler equations. Recall that in two dimensions the vorticity of a vector field $u$ is a 2-form $\omega = d u^\flat$ which is identified with the function $$ \omega = \mathrm{rot}\, u = - \frac{\partial u_1}{\partial x_2} + \frac{\partial u_2}{\partial x_1}. $$ In this case the Cauchy problem \eqref{eq:Euler-u}-\eqref{eq:Euler-u-ic} can be rewritten as \begin{align} \label{eq:euler-v} &\omega_t + u{\cdot}\nabla \omega = 0, \qquad t \geq 0, \; x \in \mathbb{R}^2 \\ \label{eq:euler-vic} &\omega(0) = \omega_0 \end{align} where the velocity is recovered from $\omega$ using the Biot-Savart law \begin{equation} \label{eq:Biot-Savart} u = K \ast \omega = \nabla^\perp \Delta^{-1} \omega \end{equation} with kernel $K(x) {=} (2\pi)^{-1}(-x_2/|x|^2, x_1/|x|^2)$ and where $\nabla^\perp {=} (-\frac{\partial}{\partial x_2}, \frac{\partial}{\partial x_1})$ denotes the symplectic gradient of a function. Our strategy will be the following. First, as in \cite{BL} we choose an initial vorticity $\omega_0$ such that the Lagrangian flow of the corresponding velocity field retains a large gradient on a (possibly short) time interval. We then perturb $\omega_0$ to get a sequence of initial vorticities in $W^{1,p}$. 
Finally, we show that \textit{the assumption that the Euler equations are well-posed in} $C^1(\mathbb{R}^2)$ (in particular, that their solutions depend continuously on the initial data in the $C^1$ norm) leads to a contradiction with a result of Kato and Ponce, which for convenience we restate in the following form \begin{theorem*}[Kato-Ponce \cite{KP}] Let $1{<}p{<}\infty$ and $s{>}1{+}\frac{2}{p}$. For any $\omega_0 \in W^{s-1, p}(\mathbb{R}^2)$ and any $T>0$ there exists a constant $K=K(T,\omega,s,p)>0$ such that $$ \sup_{0 \leq t \leq T}\| \omega(t)\|_{W^{s-1,p}} \leq K. $$ \end{theorem*} Theorem \ref{thm:0} will therefore be a consequence of the following result \begin{theorem} \label{thm:1} Let $2 < p < \infty$. There exist $T>0$ and a sequence $\omega_{0,n} \in C^\infty_c(\mathbb{R}^2)$ with the following properties: \begin{enumerate} \item[1.] there exists a constant $C>0$ such that $\| \omega_{0,n} \|_{W^{1,p}} \leq C$ for all $n \in \mathbb{Z}_+$, and \item[2.] for any $M \gg 1$ there is $0 < t_0 \leq T$ such that $\| \omega_{n}(t_0)\|_{W^{1,p}} \geq M^{1/3}$ for all sufficiently large $n$ and all $p$ sufficiently close to $2$. \end{enumerate} \end{theorem} In Section \ref{sec:Lag} we provide some technical lemmas to construct an initial vorticity whose Lagrangian flow has a large gradient. The proof of the latter is given in Section \ref{sec:proof-Prop}. The last section contains the proof of Theorem \ref{thm:1}. \section{Vorticity and the Lagrangian flow} \label{sec:Lag} Given a smooth radial bump function $\varphi$ on $\mathbb{R}^2$ supported in the unit ball $B(0,1)$ with $0 \leq \varphi \leq 1$ define another function \begin{align} \label{eq:bump} \varphi_0(x_1, x_2) = \sum_{\varepsilon_1, \varepsilon_2 = \pm 1} \varepsilon_1 \varepsilon_2 \varphi(x_1 {-} \varepsilon_1, x_2 {-} \varepsilon_2).
\end{align} For a fixed positive integer $N_0 \in \mathbb{Z}_+$ and any $M \gg 1$ we set \begin{align} \label{eq:iv} \omega_0(x) = \omega_0^{M,N}(x) = M^{-2} N^{-\frac{1}{p}} \sum_{N_0 \leq k \leq N_0+N} \varphi_k(x), \qquad N = 1, 2, 3 \dots \end{align} where $2< p < \infty$ and where \begin{align*} \varphi_k(x) = 2^{(-1 + \frac{2}{p})k} \varphi_0 (2^k x). \end{align*} Observe that by construction $\varphi_0$ is an odd function in both $x_1$ and $x_2$, and for any $k \geq 1$ the support of $\varphi_k$ is compact and contained in the set \begin{equation} \label{eq:suppp} \mathrm{supp}\,{ \varphi_k } \subset \bigcup_{\varepsilon_1, \varepsilon_2 = \pm 1} B\big( (\varepsilon_1 2^{-k}, \varepsilon_2 2^{-k}), 2^{-(k+2)} \big). \end{equation} Combined with the uniform (in time) $L^\infty$ control of the vorticity in $\mathbb{R}^2$ this ensures the existence of a unique solution of the Cauchy problem \eqref{eq:euler-v}-\eqref{eq:euler-vic} with the initial data \eqref{eq:iv}; e.g., by a result of Yudovich \cite{Yu}, see also Majda and Bertozzi \cite{MB}. Moreover \begin{lem} \label{lem:omega-0} We have \begin{align} \label{eq:omega-0} \| \omega_0 \|_{L^p} + \| \omega_0 \|_{\dot{W}^{1,p}} \lesssim M^{-2} \end{align} with the bound independent of $N>0$ and $2<p<\infty$. \end{lem} \begin{proof} Since the supports in \eqref{eq:suppp} are disjoint we have \begin{align*} \| \omega_0 \|_{L^p}^p = M^{-2p} N^{-1} \sum_{N_0 \leq k \leq N_0 +N} 2^{-kp} \int_{\mathbb{R}^2} \big| \varphi_0(x) \big|^p dx \lesssim M^{-2p} \end{align*} and similarly \begin{align*} \Big\| \frac{\partial \omega_0}{\partial x_1} \Big\|_{L^p}^p = M^{-2p} N^{-1} \sum_{N_0 \leq k \leq N_0 +N} \int_{\mathbb{R}^2} 2^{2k} \Big| \frac{\partial\varphi_0}{\partial x_1} (2^kx) \Big|^p dx \simeq M^{-2p}. \end{align*} The estimate of the other partial derivative is analogous.
\end{proof} In particular, since $p>n=2$ from the results of Kato and Ponce it follows that there exists a unique velocity field $u \in C^1([0, \infty), W^{2,p}(\mathbb{R}^2))$ solving \eqref{eq:Euler-u}-\eqref{eq:Euler-uu} whose vorticity function $\omega \in C([0,\infty), W^{1,p}(\mathbb{R}^2))$ satisfies the initial condition \eqref{eq:iv} (see \cite{KP}, Lem. 3.1; Thm. III). The associated Lagrangian flow of $u = \nabla^\perp\Delta^{-1}\omega$, i.e., the solution of the initial value problem \begin{align} \label{eq:flow} &\frac{d}{dt}\eta(t,x) = u(t, \eta(t,x)) \; \big( {=} F_u(\eta(t,x)) \big) \\ \label{eq:flow-ic} &\eta(0,x) = x \end{align} is a curve of volume-preserving diffeomorphisms with $\omega\circ\eta \in C([0, \infty), W^{1,p}(\mathbb{R}^2))$, see e.g., \cite{KP} or \cite{BB}. Furthermore, the odd symmetry of $\omega_0$ is preserved by $\eta$ and hence retained by the vorticity $\omega$ for all time. From \eqref{eq:Biot-Savart} it then follows that the velocity field $u$ is symmetric with respect to the variables $x_1$ and $x_2$ and hence both coordinate axes are invariant under the flow $\eta$ with the origin $x_1=x_2=0$ as its hyperbolic stagnation point. \begin{lem} \label{lem:Riesz} Let $T>0$ and consider the flow $\eta(t)$ of the velocity field $u=\nabla^\perp\Delta^{-1}\omega$ for $0 \leq t \leq T$. Suppose that $\sup_{0\leq t \leq T} \| D\eta(t)\|_\infty \leq C_T$ for some $C_T>0$. Then \begin{equation} \label{eq:Riesz} \sup_{0 \leq t \leq T} \| R_{ii} \omega (t) \|_\infty \lesssim \big( 5/4 + T C_T \big)^{\frac{p-2}{p}} C_T M^{-2} \end{equation} where $R_{ij} = \partial_i \partial_j \Delta^{-1}$ denotes the double Riesz transform with $i, j = 1,2$. \end{lem} \begin{proof} Observe that $\mathrm{supp}\, \omega_0 \subset B(0, 5/4)$ by \eqref{eq:iv} and \eqref{eq:suppp}.
An estimate of the Biot-Savart operator \eqref{eq:Biot-Savart} gives a uniform bound on the velocity field so that the support of the vorticity can grow at most linearly in time and, since $\omega = \omega_0\circ\eta^{-1}$ by conservation of vorticity, we find that the support of $\omega(t)$ is contained in a ball of radius $r_t = 5/4 + tC_T$. Next, using the H\"older inequality we obtain \begin{align*} \sup_{x \in B(0,r_T)} | R_{ii} \omega(t,x) | &= \bigg| \frac{1}{2\pi} \int_{B(0,r_T)} \frac{x_i - y_i}{|x-y|^2} \frac{\partial \omega}{\partial x_i}(t,y) \, dy \bigg| \\ &\leq \frac{1}{2\pi} \left( \int_{B(0,r_T)} |x-y|^{-q} dy \right)^{1/q} \| \nabla \omega(t) \|_{L^p} \\ &\lesssim r_T^{\frac{2-q}{q}} \big\| D\eta^{-1}(t) \big\|_\infty \big\| \nabla\omega_0 \circ \eta^{-1}(t) \big\|_{L^p} \end{align*} where $1/p + 1/q =1$. Since $D\eta^{-1} = (D\eta)^{-1} \circ \eta^{-1}$ and $\eta(t)$ is volume-preserving, using the bound on the Jacobi matrix of the flow and inequality \eqref{eq:omega-0} of Lemma \ref{lem:omega-0} we can further estimate the expression on the right hand side by \begin{align*} \simeq r_T^{\frac{p-2}{p}} \| D\eta(t) \|_\infty \| \nabla\omega_0 \|_{L^p} \lesssim r_T^{\frac{p-2}{p}} C_T M^{-2} \end{align*} which gives \eqref{eq:Riesz}. \end{proof} \begin{remark} \label{rem:SGr} In fact, note that if $\xi: \mathbb{R}^2 \to \mathbb{R}^2$ is a volume-preserving diffeomorphism then the Jacobi matrix of its inverse can be computed from \begin{align*} D\xi^{-1} = (D\xi)^{-1}\circ\xi^{-1} = \left( \begin{matrix} \partial_2\xi_2 {\circ} \xi^{-1} & -\partial_2 \xi_1{\circ} \xi^{-1} \\ -\partial_1 \xi_2{\circ}\xi^{-1} & \partial_1 \xi_1{\circ} \xi^{-1} \end{matrix} \right). 
\end{align*} Thus given a smooth function $f: \mathbb{R}^2 \to \mathbb{R}$ we can express the gradient of the composition $\nabla( f \circ \xi^{-1}) = \nabla{f} \circ \xi^{-1} {\cdot} D\xi^{-1}$ using the scalar product and the symplectic gradient as \begin{equation} \label{eq:SGr} \nabla( f \circ \xi^{-1}) = \big( {-}\nabla{f} \circ \xi^{-1} \cdot \nabla^\perp{\xi_2} \circ \xi^{-1}, \nabla{f} \circ \xi^{-1} \cdot \nabla^\perp{\xi_1} \circ \xi^{-1} \big). \end{equation} \end{remark} The proof of the following result will be given in the next section. \begin{prop} \label{prop:Lag} Let $\eta(t)$ be the flow of the velocity field $u=\nabla^\perp\Delta^{-1}\omega$ with initial vorticity given by \eqref{eq:iv}. Given $M \gg 1$ we have $$ \sup_{0 \leq t \leq M^{-3}} \| D\eta(t) \|_\infty > M $$ for any sufficiently large integer $N>0$ in \eqref{eq:iv} and any $2<p<\infty$ sufficiently close to $2$. \end{prop} \begin{remark} In what follows it can be assumed that $2<p\leq3$. In this case all estimates on the flow $\eta$ or $D\eta$ can be made independent of the Lebesgue exponent $2<p < \infty$. \end{remark} We will also need a comparison result for solutions of the Lagrangian flow equations, namely \begin{lem} \label{lem:comp} Let $u$ and $v$ be smooth divergence-free vector fields on $\mathbb{R}^2$ and let $\eta$ and $\xi$ be the solutions of \eqref{eq:flow}-\eqref{eq:flow-ic} with the right-hand sides given by $F_u$ and $F_{u+v}$ respectively. Then $$ \sup_{0 \leq t \leq 1}{ \big( \| \xi(t) - \eta(t) \|_\infty + \| D\xi(t) - D\eta(t) \|_\infty \big) } \leq C\sup_{0 \leq t \leq 1} ( \| v(t) \|_\infty + \| Dv(t)\|_\infty ) $$ for some $C>0$ depending only on $u$ and its derivatives. \end{lem} For a standard proof one writes down the equation for the difference $\eta - \xi$ and applies Gronwall's inequality, see e.g., \cite{BL}; Lemma 4.1. 
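The scaling computations underlying Lemma \ref{lem:omega-0} can also be checked numerically. The sketch below is purely illustrative and not part of the proof: it fixes the standard smooth bump $\varphi(x) = \exp\big(1 - 1/(1-|x|^2)\big)$ on $B(0,1)$ (the lemma holds for any admissible $\varphi$; the values $p=2.5$ and $M=10$ are arbitrary), evaluates $\|\varphi\|_{L^p}$ and $\|\partial_1\varphi\|_{L^p}$ by quadrature, and then uses the disjointness of the supports \eqref{eq:suppp} together with the change of variables from the proof to confirm that $\| \omega_0 \|_{L^p} + \| \omega_0 \|_{\dot{W}^{1,p}} \lesssim M^{-2}$ uniformly in $N$.

```python
import numpy as np

def bump(r2):
    """Standard smooth radial bump on the unit ball (0 <= phi <= 1)."""
    out = np.zeros_like(r2)
    inside = r2 < 1.0
    out[inside] = np.exp(1.0 - 1.0 / (1.0 - r2[inside]))
    return out

# Quadrature of |phi|^p and |d_1 phi|^p over the unit ball.
p, h = 2.5, 1e-2
x = np.arange(-1.0, 1.0, h) + h / 2.0
X, Y = np.meshgrid(x, x)
phi = bump(X**2 + Y**2)
phi_p = np.sum(phi**p) * h**2
d1phi_p = np.sum(np.abs(np.gradient(phi, h, axis=1))**p) * h**2

def norms(M, N, N0=1):
    """||omega_0||_{L^p} and ||d_1 omega_0||_{L^p} for the data (eq:iv), via
    the scaling identities ||phi_k||_p^p = 2^{-kp} ||phi_0||_p^p and
    ||d_1 phi_k||_p^p = ||d_1 phi_0||_p^p (disjoint supports), with
    ||phi_0||_p^p = 4 ||phi||_p^p by construction."""
    ks = np.arange(N0, N0 + N + 1)
    lp = M**(-2 * p) / N * np.sum(2.0 ** (-ks * p)) * 4.0 * phi_p
    w1p = M**(-2 * p) / N * len(ks) * 4.0 * d1phi_p
    return lp ** (1.0 / p), w1p ** (1.0 / p)

# The bounds are essentially independent of N, as claimed.
l_small, w_small = norms(10.0, 10)
l_large, w_large = norms(10.0, 1000)
```

Both norms stay $\simeq M^{-2}$ as $N$ grows (up to the harmless factor $((N+1)/N)^{1/p}$), which is exactly the content of \eqref{eq:omega-0}.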
\section{Proof of Proposition \ref{prop:Lag}} \label{sec:proof-Prop} Let $T \leq M^{-3}$ and assume to the contrary that \begin{equation} \label{eq:D<} \| D\eta(t) \|_\infty \leq M \end{equation} for any $0 \leq t \leq T$. Since $D\eta(0) = Id$, shrinking the time interval $[0,T]$ slightly further, if necessary, we can arrange by continuity to have $C_T \leq M$; see Lemma \ref{lem:Riesz}. Then \eqref{eq:Riesz} gives \begin{align} \label{eq:Rii} \sup_{0\leq t \leq T}{\| R_{ii}\omega(t)\|_\infty} \lesssim (5/4 + M^{-3} M)^{(p-2)/p} M M^{-2} \simeq M^{-1} \end{align} for $i=1,2$. Note that since $M \gg 1$ the factor in the parentheses can be bounded by a universal constant (for example by 3) and so the bound in \eqref{eq:Rii} is independent of any Lebesgue exponent $p>2$. Differentiating the flow equations \eqref{eq:flow} in the $x$ variable we obtain the system \begin{align*} \frac{d}{dt} D\eta(t, x) &= \left( \begin{matrix} -R_{12} \omega(t, \eta(t,x)) & -R_{22} \omega(t, \eta(t,x)) \\ R_{11} \omega(t, \eta(t,x)) & R_{12} \omega(t, \eta(t,x)) \end{matrix} \right) D\eta(t, x) \\ &= \left( \begin{matrix} -\Lambda(t,x) & 0 \\ 0 & \Lambda(t,x) \end{matrix} \right) D\eta(t, x) + P(t, x) D\eta(t, x) \\ D\eta(0,x) &= Id \end{align*} where $\Lambda(t, x) = (R_{12}\omega)(t, \eta(t, x))$. Observe that by \eqref{eq:Rii} we have \begin{equation} \label{eq:P} \sup_{0 \leq t \leq T} \| P(t) \|_\infty \lesssim M^{-1}. \end{equation} Applying Duhamel's formula we can rewrite the above system in the form \begin{align} \nonumber D\eta(t,x) &= \left( \begin{matrix} e^{-\int_0^t \Lambda(\tau,x) d\tau} & 0 \\ 0 & e^{\int_0^t \Lambda(\tau,x) d\tau} \end{matrix} \right) \\ \label{eq:Duhamel} &\hskip 2cm + \int_0^t \left( \begin{matrix} e^{-\int_\tau^t \Lambda(\sigma,x) d\sigma} & 0 \\ 0 & e^{\int_\tau^t \Lambda(\sigma,x) d\sigma} \end{matrix} \right) P(\tau,x) D\eta(\tau,x) \, d\tau \\ \nonumber &= A(t,x) + B(t,x).
\end{align} For any $x \in \mathbb{R}^2$ and any $0 \leq \tau \leq t$ we have the inequalities \begin{align} \nonumber e^{\mp \int_\tau^t \Lambda(\sigma,x) d\sigma} &= e^{\mp (\int_0^t \Lambda(\sigma,x) d\sigma - \int_0^\tau \Lambda(\sigma,x) d\sigma) } \\ \label{eq:sup-e} &\leq e^{2\sup_{0 \leq \tau \leq t} | \int_0^\tau \Lambda(\sigma,x) d\sigma |} = \sup_{0 \leq \tau \leq t} e^{2 | \int_0^\tau \Lambda(\sigma,x) d\sigma |} \end{align} and therefore, solving for $A(t,x)$ on the right hand side of equation \eqref{eq:Duhamel} and using the bounds \eqref{eq:D<} and \eqref{eq:P} we find \begin{align*} e^{| \int_0^t \Lambda(\tau,x) d\tau |} \lesssim M + t \sup_{0 \leq \tau \leq t}{ e^{2|\int_0^\tau \Lambda(\sigma,x)d\sigma|} } \end{align*} for any $0 \leq t \leq T$. Recall that $\partial_i \partial_j \Delta^{-1}$ is a Calderon-Zygmund operator mapping continuously in $W^{1,p}$ for any $1<p<\infty$ and $\omega\circ\eta \in C([0,\infty), W^{1,p}(\mathbb{R}^2))$. Consequently, a simple continuity argument gives \begin{equation} \label{eq:e} e^{|\int_0^t \Lambda(\tau, x) d\tau|} \lesssim 2M \end{equation} and hence \begin{equation} \label{eq:2M} \left| \int_0^t \Lambda(\tau, x) d\tau \right| \lesssim \log{(2M)} \end{equation} for any $x \in \mathbb{R}^2$ and any $0 \leq t \leq T \leq M^{-3}$ provided that $M$ is chosen sufficiently large. Observe that this bound is independent of the choice of the integers $N$ and $N_0$ in \eqref{eq:iv} as well as the exponent $p$. In particular, we have \begin{equation} \label{eq:e2M} (2M)^{-1} \lesssim e^{\pm \int_0^t \Lambda(\tau, x) d\tau} \lesssim 2M, \qquad\quad 0 \leq t \leq T, \quad x \in \mathbb{R}^2. \end{equation} We will seek a contradiction with \eqref{eq:2M} and to this end we will need to examine the expression for $\Lambda(t,0)$. 
First, using \eqref{eq:Duhamel} we have \begin{align} \label{eq:fi} \eta(t,x) &= \eta(t,x) - \eta(t,0) = \int_0^1 D\eta (t, rx){\cdot}x \, dr \\ \nonumber &= \int_0^1 A(t, rx) \, dr \cdot x + \int_0^1 B(t, rx) \, dr \cdot x = \tilde{A}(t,x) + \tilde{B}(t,x) \end{align} where \begin{align} \nonumber | \tilde{B}(t,x) | &\leq |x| \int_0^1 | B(t, rx)| dr \\ \label{eq:beta} &\leq |x| \int_0^1 \int_0^t \sup_{0 \leq \tau \leq t} e^{2|\int_0^\tau \Lambda(\sigma, rx) d\sigma|} \|P(\tau)\|_\infty \|D\eta(\tau)\|_\infty d\tau dr \\ \nonumber &\lesssim t M^2 |x| \lesssim M^{-1} |x| \end{align} by \eqref{eq:D<}, \eqref{eq:P}, \eqref{eq:sup-e} and \eqref{eq:e} for any $0 \leq t \leq T$ and similarly \begin{equation} \label{eq:alfa} | \tilde{A}_i (t,x) | \leq \int_0^1 \Big| \sum_{j=1}^2 A_{ij}(t,rx) x_j \Big| \, dr \lesssim M |x_i| \qquad (i =1,2) \end{equation} where $A_{ij}(t,x)$ denote the entries of the matrix $A(t,x)$ in \eqref{eq:Duhamel}. Next, it is not difficult to check that by construction the components $\eta_1$ and $\eta_2$ of the Lagrangian flow of $u = \nabla^\perp \Delta^{-1}\omega$ with initial vorticity $\omega_0$ given by \eqref{eq:iv} are sign-preserving in the sense that $x_i \geq 0$ implies that $\eta_i (x) \geq 0$ for $i =1,2$. In fact, a quick inspection shows that $ (\Delta^{-1} \partial_2 \omega)(t, 0, \eta_2(t,x)) = (\Delta^{-1} \partial_2 \omega)(t, \eta_1(t,x),0) = 0 $ so that both components satisfy an O.D.E. $$ \frac{d}{dt} \eta_i(t,x) = F_i \big( t,\eta(t,x) \big) \eta_i(t,x) \qquad (i=1,2) $$ with some smooth functions $F_i$ and therefore $ \eta_i(t,x) = x_i e^{\int_0^t F_i(\tau, \eta(\tau, x)) d\tau} $ which implies the assertion. Combining this observation with the odd symmetry of $\omega_0$ as in \cite{BL} we find that the integrand of the expression for $\Lambda(t,0)$ is a non-negative function in $t$ and can be bounded below by its restriction to a subset of the first quadrant. 
Since the origin is a stagnation point of the flow we have $\eta(t,0)=0$ and using conservation of vorticity and change of variables we get \begin{align} \nonumber - \Lambda(t,0) &= -(R_{12}\omega) (t,\eta(t,0)) = -\frac{\partial}{\partial x_1}\Big|_{x_1=0} \frac{\partial}{\partial x_2} \Big|_{x_2=0} \int_{\mathbb{R}^2} \frac{1}{2\pi} \log{|x-y|} \, \omega(t, y) \, dy \\ \label{eq:L} &= \frac{1}{\pi} \int_{\mathbb{R}^2} \frac{\eta_1(t,x) \eta_2(t,x)}{(\eta_1^2(t,x) + \eta_2^2(t,x))^2} \, \omega_0(x) dx \\ \nonumber &\geq \frac{1}{\pi} \int_{x_1, x_2 \geq 0} \frac{\eta_1(t,x) \eta_2(t,x)}{(\eta_1^2(t,x) + \eta_2^2(t,x))^2} \, \omega_0(x) dx \\ \nonumber &= \frac{1}{\pi} \int_{x_1, x_2 \geq 0} \left( \frac{\eta_1(t,x)}{\eta_2(t,x)} + \frac{\eta_2(t,x)}{\eta_1(t,x)} \right)^{-1} \left( \eta_1^2(t,x) + \eta_2^2(t,x) \right)^{-1} \omega_0(x) dx. \end{align} In order to get a suitable lower bound on $\Lambda(t,0)$ we further restrict the integral to a sector of the first quadrant defined by $$ S = \left\{ x \in \mathbb{R}^2: \frac{1}{2} x_1 \leq x_2 \leq 2 x_1, x_1 \geq 0, x_2 \geq 0 \right\} $$ and observe that for $x \in S$ from \eqref{eq:fi}, \eqref{eq:beta} and \eqref{eq:alfa} we have \begin{align} \nonumber | \eta_1(t,x)| &= \left| \tilde{A}_1(t,x) + \tilde{B}_1(t,x) \right| \\ \label{eq:fi1} &\lesssim M x_2 + M^{-1} \sqrt{ x_1^2 + x_2^2} \lesssim \big( M + M^{-1} \big) x_2 \\ \nonumber &\simeq M x_2 \end{align} and similarly \begin{align} \label{eq:fifi1} | \eta_2(t,x)| \lesssim \big( M + M^{-1} \big) x_1 \simeq M x_1. 
\end{align} On the other hand, for the second component $\eta_2(t,x)$ and $x \in S$, integrating \eqref{eq:e2M} and using the sign-preserving property we obtain \begin{align*} \frac{x_2}{2M} \lesssim x_2 \int_0^1 e^{\int_0^t \Lambda(\tau, rx) d\tau} dr = \tilde{A}_2(t,x) = \eta_2(t,x) - \tilde{B}_2(t,x) \leq \eta_2(t,x) + | \tilde{B}_2(t,x)| \end{align*} which by \eqref{eq:beta} gives \begin{align*} 0 \leq x_2 \lesssim M \eta_2(t,x) + tM^3 \sqrt{x_1^2 + x_2^2} \end{align*} for any $x \in S$ and $0 \leq t \leq T$. Therefore, shrinking slightly the time interval once again, if needed, and using $x_1 \leq 2 x_2$ we obtain \begin{align*} 0 \leq x_2 \lesssim M \eta_2(t,x) \end{align*} for any $0 \leq t \leq \min{( T, M^{-3}/2\sqrt{5})}$. Put together with \eqref{eq:fi1} this implies \begin{align*} | \eta_1(t,x)| \lesssim M^2 \eta_2(t,x) \end{align*} and by an analogous argument \begin{align*} | \eta_2(t,x)| \lesssim M^2 \eta_1(t,x) \end{align*} for any $0 \leq t \leq \min{( T, M^{-3}/2\sqrt{5} )}$ and any $x \in S$ which combined give \begin{equation} \label{eq:fi12} M^{-2} \lesssim \frac{\eta_1(t,x)}{\eta_2(t,x)} \lesssim M^2 \end{equation} for any $0 \leq t \leq \min{( T, M^{-3}/2\sqrt{5} )}$ and any $x \in S$. We return to the estimate of $\Lambda(t,0)$ in \eqref{eq:L}. 
Substituting for $\omega_0$ from \eqref{eq:bump} and \eqref{eq:iv} and using \eqref{eq:fi1}, \eqref{eq:fifi1} and \eqref{eq:fi12} we obtain \begin{align*} - \pi \Lambda(t,0) &\geq \int_{S} \left( \frac{\eta_1(t,x)}{\eta_2(t,x)} + \frac{\eta_2(t,x)}{\eta_1(t,x)} \right)^{-1} \left( \eta_1^2(t,x) + \eta_2^2(t,x) \right)^{-1} \omega_0(x) \, dx \\ & \gtrsim M^{-6} N^{-\frac{1}{p}} \sum_{k=N_0}^{N_0+N} \int_{S} \frac{\varphi_k(x)}{|x|^2} \, dx \\ &\hskip -1.5cm \simeq M^{-6} N^{-\frac{1}{p}} \sum_{k=N_0}^{N_0+N} 2^{(-1 + \frac{2}{p})k} \int_{S \cap \mathrm{supp}(\varphi_k)} \sum_{\varepsilon_1, \varepsilon_2 = \pm 1} \frac{\varepsilon_1 \varepsilon_2 \varphi(2^k x_1 {-} \varepsilon_1, 2^k x_2 {-} \varepsilon_2)}{x_1^2 + x_2^2} \, dx_1 dx_2 \\ &\simeq M^{-6} N^{-\frac{1}{p}} \sum_{k=N_0}^{N_0+N} 2^{(-1 + \frac{2}{p})k} \int_{B_k} \frac{\varphi(2^k x_1 {-} 1, 2^k x_2 {-} 1)}{x_1^2 + x_2^2} \, dx_1 dx_2 \end{align*} where $B_k$ denotes the ball centered at $(2^{-k}, 2^{-k})$ of radius $1/2^{k+2}$, see \eqref{eq:suppp}. Thus, changing variables we can further bound the above expression from below by \begin{align*} &\gtrsim M^{-6} N^{-\frac{1}{p}} \frac{8}{25} \int_{|x|<\frac{1}{4}} \varphi(x) \, dx \sum_{k=N_0}^{N_0+N} 2^{(-1 + \frac{2}{p})k} \simeq M^{-6} N^{-\frac{1}{p}} \sum_{k=N_0}^{N_0+N} 2^{(-1 + \frac{2}{p})k}. \end{align*} Recall that from \eqref{eq:2M} with $t \simeq M^{-3}$ we now have \begin{align*} \log 2M &\gtrsim \left| \int_0^{M^{-3}} \Lambda(\tau, 0) \, d\tau \right| \gtrsim M^{-9} N^{-\frac{1}{p}} \sum_{k=N_0}^{N_0+N} 2^{(-1 + \frac{2}{p})k} \gtrsim M^{-9} \frac{ N^{ 1-\frac{1}{p} } }{ 2^{(1 - \frac{2}{p})(N_0 + N)} }. \end{align*} Finally, given $M \gg 1$ and $N_0>0$, first choose a large positive integer $N$ so that $N^{1/2} \geq 10 M^{10}$ and then pick an exponent such that $$ 2< p \leq \frac{2(N_0{+}N)}{N_0{+}N-1} $$ to obtain the desired contradiction: for such $p$ we have $(1 - \frac{2}{p})(N_0 + N) \leq 1$ and $N^{1-\frac{1}{p}} \geq N^{1/2}$, so that the right-hand side above is at least $\frac{1}{2} M^{-9} N^{1/2} \geq 5M$, which is incompatible with the upper bound $\log 2M$ once $M$ is sufficiently large. \section{Proof of Theorem \ref{thm:1}} \label{sec:proof} Let $2 < p < \infty$, $s=2$ and take $T =1$.
Let $\omega_0$ be the initial vorticity defined in \eqref{eq:iv} in Section \ref{sec:Lag}. As before, let $\omega(t)$ be the corresponding (smooth) solution of \eqref{eq:euler-v}-\eqref{eq:euler-vic} and let $\eta(t)$ denote the associated Lagrangian flow of $u = \nabla^\perp\Delta^{-1}\omega$. Let $M \gg 1$ be an arbitrarily large number. We consider two cases. If there exists $0 < t_0 \leq M^{-3}$ such that $\| \omega(t_0)\|_{W^{1,p}} > M^{1/3}$ then there is nothing to prove. We will therefore assume that \begin{equation} \label{eq:assump} \|\omega(t_0) \|_{W^{1,p}} \leq M^{1/3}, \qquad 0 \leq t_0 \leq M^{-3}. \end{equation} By Proposition \ref{prop:Lag} we can then find $0 < t_0 \leq M^{-3}$ and a point $x^\ast \in \mathbb{R}^2$ such that at least one of the entries $\partial\eta_i/\partial x_j$ of the Jacobi matrix (for example, the $i{=}j{=}2$ entry) satisfies $| \partial_2\eta_2 (t_0,x^\ast)| > M$. Therefore, by continuity, there is a $\delta >0$ such that \begin{align} \label{eq:M} \left| \frac{\partial \eta_2}{\partial x_2} (t_0,x) \right| > M \qquad \text{for all} \quad |x-x^\ast| < \delta. \end{align} Let $0 \leq \rho \leq 1$ be a smooth bump function on $\mathbb{R}^2$ with $\mathrm{supp}\, \rho \subset B(0,2)$ and such that $\rho \equiv 1$ on $B(0,1)$. For any $k \in \mathbb{Z}_+$ and $\lambda >0$ define \begin{equation} \label{eq:beta-pert} \beta_{k,\lambda}(x) = \frac{\lambda^{-1 + \frac{2}{p}}}{\sqrt{k}} \sum_{\varepsilon_1, \varepsilon_2 = \pm 1} \varepsilon_1 \varepsilon_2 \rho(\lambda(x-x^\ast_\varepsilon)) \sin{kx_1} \end{equation} where $x^\ast_\varepsilon = (\varepsilon_1 x^\ast_1, \varepsilon_2 x^\ast_2)$ and $x^\ast = (x^\ast_1, x^\ast_2)$. Observe that $\beta_{k,\lambda}$ are \textit{smooth} functions with compact support in $\mathbb{R}^2$: \begin{equation} \label{eq:supp} \mathrm{supp}\, (\beta_{k,\lambda}) \subset \bigcup_{\varepsilon_1, \varepsilon_2} B\big( x^\ast_\varepsilon, 2/\lambda \big). \end{equation} In the sequel we will need two technical lemmas.
\begin{lem} \label{lem:rem} For any $k \in \mathbb{Z}_+$ and $\lambda >0$ we have \begin{enumerate} \item[1.] $ \| \partial_j \Delta^{-1} \beta_{k,\lambda} \|_\infty \lesssim k^{-1/2} \lambda^{-1 + \frac{2}{p}} \| \rho\|_\infty $ \vskip 0.1cm \item[2.] $ \| \partial_i \partial_j \Delta^{-1} \beta_{k,\lambda} \|_\infty \lesssim k^{-1/2} \lambda^{-1 + \frac{2}{p}} \| \hat{\rho}\|_{L^1} $ \vskip 0.15cm \item[3.] $ \| \beta_{k,\lambda} \|_{W^{1,p}} \lesssim \big( k^{1/2}\lambda^{-1} + k^{-1/2} + k^{-1}\lambda^{-1} \big) \|\rho\|_{W^{1,p}} $ \end{enumerate} where $i, j =1, 2$ and the bounds depend on $\rho$ and $x^\ast$. \end{lem} \begin{proof} Observe that for any $x \in \mathbb{R}^2$ we have $$ \mathcal{F}^{-1}\mathcal{F}\big(\partial_j\Delta^{-1} \beta_{k,\lambda} \big)(x) = K \ast \beta_{k,\lambda}(x) = \int_{\mathbb{R}^2} K(x-y) \beta_{k,\lambda}(y) \, dy $$ where $\mathcal{F}, \mathcal{F}^{-1}$ denote the Fourier transform and its inverse, respectively, and where the kernel $K$ is homogeneous of degree $-1$, that is, $$ K(tx) = t^{-1}K(x) \qquad \text{for any} \quad t>0, \, x \neq 0. $$ In particular $K \in L^1_{\mathrm{loc}}(\mathbb{R}^2)$ and consequently we can obtain an $L^\infty$ bound for the integral operator by a direct calculation using the fact that $\beta_{k,\lambda}$ has compact support. Namely, for any $\epsilon>0$ we have \begin{align*} \left| \partial_j \Delta^{-1} \beta_{k,\lambda} (x) \right| &= \left| \left( \int_{|x-y|<\epsilon} + \int_{|x-y|>\epsilon} \right) K(x-y) \beta_{k,\lambda}(y) \, dy \right| \\ &\lesssim \| \beta_{k,\lambda}\|_\infty \int_{|x-y|<\epsilon} \frac{dy}{|x-y|} + \frac{1}{\epsilon} \int_{\mathrm{supp}\, \beta_{k,\lambda}} |\beta_{k,\lambda}(y)| \, dy \\ &\lesssim \| \beta_{k,\lambda}\|_\infty \int_{|y|<\epsilon} |y|^{-1} dy + \epsilon^{-1} \|\beta_{k,\lambda}\|_\infty \mu(\mathrm{supp}\, \beta_{k,\lambda}). 
\end{align*} Note that by \eqref{eq:supp} if $\lambda>1$ then $\mu(\mathrm{supp}\, \beta_{k,\lambda}) \leq 16\pi^2$ so that from \eqref{eq:beta-pert} we get $$ \| \partial_j \Delta^{-1} \beta_{k,\lambda} \|_\infty \lesssim C_{\epsilon,x^*} \|\beta_{k,\lambda}\|_\infty \lesssim C_{\epsilon, x^*} \| \rho \|_\infty k^{-1/2} \lambda^{-1 + \frac{2}{p}} $$ where $C_{\epsilon, x^*}>0$ is a constant depending only on $\epsilon >0$ and $x^*$. For the second assertion let $\xi_{\pm} = (\xi_1 \pm k/2\pi, \xi_2)$ and first compute the Fourier transform $$ \hat{\beta}_{k,\lambda}(\xi) = \frac{\lambda^{-1+\frac{2}{p}}}{\sqrt{k}} \sum_{\varepsilon_1, \varepsilon_2} \frac{\varepsilon_1 \varepsilon_2}{2i} \left( e^{-2\pi i \langle \xi_{-}, x^*_{\varepsilon} \rangle} \frac{1}{\lambda^2} \hat{\rho}\Big( \frac{\xi_{-}}{\lambda} \Big) - e^{-2\pi i \langle \xi_{+}, x^*_{\varepsilon} \rangle} \frac{1}{\lambda^2} \hat{\rho}\Big( \frac{\xi_{+}}{\lambda} \Big) \right). $$ Next, we estimate \begin{align*} \big| \partial_i \partial_j \Delta^{-1} \beta_{k,\lambda}(x) \big| &\simeq \Big| \mathcal{F}^{-1}\mathcal{F} \big( \partial_i \partial_j \Delta^{-1} \beta_{k,\lambda} \big) (x) \Big| \lesssim \| \hat{\beta}_{k,\lambda} \|_{L^1} \\ & \lesssim k^{-1/2} \lambda^{-1+\frac{2}{p}} \sum_{A=\pm} \int_{\mathbb{R}^2} \frac{1}{\lambda^2} \Big| \hat{\rho} \Big(\frac{\xi_A}{\lambda} \Big) \Big| d\xi \\ &\simeq k^{-1/2} \lambda^{-1+ \frac{2}{p}} \| \hat{\rho}\|_{L^1}. 
\end{align*} Finally, using once again the triangle inequality and the change of variables formula we compute \begin{align*} \Big\| \frac{\partial\beta_{k,\lambda}}{\partial x_1} \Big\|_{L^p} &\lesssim \frac{1}{\sqrt{k}} \Big\| \lambda^{2/p} \sum_{\varepsilon_1,\varepsilon_2} \varepsilon_1 \varepsilon_2 \frac{\partial \rho}{\partial x_1}\big( \lambda(\cdot - x^*_\varepsilon) \big) \Big\|_{L^p} + \frac{\sqrt{k}}{\lambda} \Big\| \lambda^{2/p} \sum_{\varepsilon_1,\varepsilon_2} \varepsilon_1 \varepsilon_2 \rho\big(\lambda(\cdot-x^*_\varepsilon)\big) \Big\|_{L^p} \\ &\simeq \frac{1}{\sqrt{k}} \left( \int_{\mathbb{R}^2} \Big| \sum_{\varepsilon_1, \varepsilon_2} \varepsilon_1\varepsilon_2 \frac{\partial\rho}{\partial x_1}(x) \Big|^p dx \right)^{1/p} + \frac{\sqrt{k}}{\lambda} \left( \int_{\mathbb{R}^2} \Big| \sum_{\varepsilon_1, \varepsilon_2} \varepsilon_1\varepsilon_2 \rho(x) \Big|^p dx \right)^{1/p} \\ &\lesssim k^{-1/2} \Big\| \frac{\partial \rho}{\partial x_1}\Big\|_{L^p} + k^{1/2}\lambda^{-1} \| \rho\|_{L^p}. \end{align*} Similarly, we find \begin{align*} \Big\| \frac{\partial\beta_{k,\lambda}}{\partial x_2} \Big\|_{L^p} \lesssim k^{-1/2} \Big\| \frac{\partial \rho}{\partial x_2} \Big\|_{L^p} \end{align*} and \begin{align*} \| \beta_{k,\lambda}\|_{L^p} \lesssim \frac{\lambda^{-1}}{\sqrt{k}} \left( \int_{\mathbb{R}^2} \Big| \lambda^{2/p} \sum_{\varepsilon_1, \varepsilon_2} \varepsilon_1\varepsilon_2 \rho\big( \lambda(x-x^\ast_\varepsilon) \big) \Big|^p dx \right)^{1/p} \lesssim k^{-1/2} \lambda^{-1} \| \rho\|_{L^p} \end{align*} which combined yield the lemma. 
\end{proof} Observe that choosing \begin{equation} \label{eq:kln} k= \lambda^2 \quad \text{and} \quad \lambda = 3n, \quad n \in \mathbb{Z}_+ \end{equation} and letting $\beta_n = \beta_{k,\lambda}$ in Lemma \ref{lem:rem} we immediately obtain \begin{align} \label{eq:n1} &\| \partial_j \Delta^{-1} \beta_n \|_\infty \rightarrow 0 \quad \text{and} \quad \| \partial_j \nabla^\perp \Delta^{-1} \beta_n \|_\infty \rightarrow 0 \quad \text{as} \; n \to \infty \end{align} as well as \begin{align} \label{eq:n2} &\| \beta_n\|_{W^{1,p}} \lesssim \|\rho\|_{W^{1,p}} < \infty \quad \text{for any} \; n \in \mathbb{Z}_+. \end{align} We will also need the following \begin{lem} \label{lem:remrem} With $k, \lambda$ and $n$ as in \eqref{eq:kln} we have \begin{enumerate} \item[1.] $ \| \partial_2 \beta_{k,\lambda} \partial_1 \eta_2(t_0) \|_{L^p} \lesssim k^{-1/2} \| \partial_1\eta_2(t_0)\|_{L^\infty(\cup B(x^*_\varepsilon,2))} \xrightarrow[n \to \infty]{} 0 $ \vskip .1cm \item[2.] $ \| \partial_1\beta_{k,\lambda} \partial_2\eta_2(t_0) \|_{L^p} \gtrsim M k^{1/2} \lambda^{-1} - k^{-1/2} \| \partial_2\eta_2(t_0)\|_{L^\infty(B(x^\ast,2))} \xrightarrow[n \to \infty]{} M $ \end{enumerate} where the constants in both estimates depend only on $\rho$, $x^\ast$ and $t_0$. 
\end{lem} \begin{proof} In the first case from \eqref{eq:beta-pert} we have \begin{align*} \bigg( \int_{\mathbb{R}^2} \Big| &\frac{\partial \beta_{k,\lambda}}{\partial x_2}(x) \frac{\partial \eta_2}{\partial x_1}(t_0, x) \Big|^p dx \bigg)^{1/p} \\ &= \left( \int_{ \cup B( x^\ast_\varepsilon, \frac{2}{\lambda}) } \bigg| \frac{1}{\sqrt{k}} \lambda^{\frac{2}{p}} \sum_{\varepsilon_1, \varepsilon_2} \varepsilon_1 \varepsilon_2 \frac{\partial \rho}{\partial x_2}( \lambda(x-x^*_\varepsilon)) \frac{\partial\eta_2}{\partial x_1}(t_0,x) \sin{kx_1} \bigg|^p dx \right)^{1/p} \\ &\leq \frac{1}{\sqrt{k}} \sum_{\varepsilon_1, \varepsilon_2} \left( \int_{ \cup B( x^\ast_\varepsilon, \frac{2}{\lambda}) } \lambda^2 \Big| \frac{\partial\rho}{\partial x_2}(\lambda(x-x^*_\varepsilon)) \Big|^p dx \right)^{1/p} \sup_{ \cup B( x^\ast_\varepsilon, 2) } \Big| \frac{\partial\eta_2}{\partial x_1}(t_0,x) \Big| \\ &= 4 k^{-1/2} \| \partial_1\eta_2(t_0)\|_{L^\infty(\cup B(x^*_\varepsilon,2))} \left( \int_{B(0,2)} \Big| \frac{\partial \rho}{\partial x_2}(x) \Big|^p dx \right)^{1/p} \end{align*} which gives the desired estimate. The second case is slightly more cumbersome. 
We have \begin{align*} \bigg( \int_{\mathbb{R}^2} \Big| &\frac{\partial \beta_{k,\lambda}}{\partial x_1}(x) \frac{\partial \eta_2}{\partial x_2}(t_0, x) \Big|^p dx \bigg)^{1/p} = \\ &= \Bigg( \int_{ \mathbb{R}^2 } \bigg| \frac{1}{\sqrt{k}} \lambda^{\frac{2}{p}-1} \sum_{\varepsilon_1, \varepsilon_2} \varepsilon_1 \varepsilon_2 \Big( k \rho(\lambda(x-x^*_\varepsilon)) \cos{kx_1} \, + \\ & \hskip 3cm + \lambda \frac{\partial\rho}{\partial x_1} (\lambda(x - x^*_\varepsilon)) \sin{kx_1} \Big) \frac{\partial \eta_2}{\partial x_2}(t_0,x) \bigg|^p dx \Bigg)^{1/p} \\ &\geq \Bigg( \int_{B(x^*,\frac{2}{\lambda}) \cap B(x^*,\delta)} \bigg| \sqrt{k} \lambda^{-1 + \frac{2}{p}} \cos{kx_1} \rho(\lambda(x-x^*)) \frac{\partial\eta_2}{\partial x_2}(t_0,x) \, + \\ &\hskip 3cm + \frac{1}{\sqrt{k}} \lambda^{\frac{2}{p}} \sin{kx_1} \frac{\partial\rho}{\partial x_1}(\lambda(x-x^*)) \frac{\partial\eta_2}{\partial x_2}(t_0,x) \bigg|^p dx \Bigg)^{1/p}. \end{align*} Taking $\lambda$ large enough so that $2/\lambda < \delta$ and using \eqref{eq:M} and the triangle inequality we can further estimate the above integral from below by \begin{align*} M \sqrt{k} \lambda^{-1} \bigg( \int_{B(x^*,\frac{2}{\lambda})} &\lambda^2 \big| \cos{kx_1} \rho(\lambda(x-x^*)) \big|^p dx \bigg)^{1/p} - \\ &- \frac{1}{\sqrt{k}} \bigg( \int_{B(x^*,\frac{2}{\lambda})} \lambda^2 \Big| \sin{kx_1} \frac{\partial\rho}{\partial x_1}(\lambda(x-x^*)) \frac{\partial\eta_2}{\partial x_2}(t_0,x)\Big|^p dx \bigg)^{1/p} \\ &\hskip - 2.2cm \geq M \sqrt{k} \lambda^{-1} \bigg( \int_{B(0,1)} \big| \cos{(k\lambda^{-1}x_1 + kx^*)} \big|^p dx \bigg)^{1/p} - \\ &- \frac{1}{\sqrt{k}} \| \partial_1\rho \|_{L^p(B(0,2))} \|\partial_2\eta_2(t_0)\|_{L^\infty(B(x^*,2))} \end{align*} where in the last step we changed variables $x \to \lambda x - \lambda x^*$ and used the fact that $\rho \equiv 1$ on the unit ball $B(0,1)$ by construction. 
It now suffices to observe that the integral term can be bounded from below for the choices of the parameters made in \eqref{eq:kln}. In fact, since $p>2$ we have \begin{align*} \bigg( \int_{B(0,1)} \big| \cos(k\lambda^{-1}x_1 {+} kx^*_1) \big|^p dx \bigg)^{1/p} &\geq \left( \int_{-\pi/6}^{\pi/6} \int_{-\pi/6}^{\pi/6} \cos^2 (\lambda x {+} \lambda^2 x^*_1) \, dx dy \right)^{1/2} \\ &= \frac{\pi}{3\sqrt{2}} \end{align*} by a straightforward calculation. \end{proof} For each $n \in \mathbb{Z}_+$ consider the following sequence of (smooth) initial vorticities with compact support \begin{equation} \label{eq:omega-seq} \omega_{0,n}(x) = \omega_0(x) + \beta_n(x). \end{equation} Note that from \eqref{eq:n2} and Lemma \ref{lem:omega-0} it follows that $\omega_{0,n}$ belongs to $W^{1,p}$ for any $n$ in $\mathbb{Z}_+$. Let $\omega_n(t)$ be the corresponding (smooth) solutions of the vorticity equations \eqref{eq:euler-v}-\eqref{eq:euler-vic}. We now come to the crucial step in our construction. For each $n \in \mathbb{Z}_+$ let $\eta_n(t)$ be the flow of volume-preserving diffeomorphisms of the associated velocity fields $u_n = \nabla^\perp\Delta^{-1}\omega_n$ as in \eqref{eq:flow}-\eqref{eq:flow-ic}. Assume that the data-to-solution map for the Euler equations is continuous from bounded sets in $C^1(\mathbb{R}^2)$ to $C([0,1], C^1(\mathbb{R}^2))$. It then follows from \eqref{eq:omega-seq} and \eqref{eq:n1} that \begin{equation} \label{eq:A} \sup_{0 \leq t \leq 1}\| D\Delta^{-1} \nabla^\perp ( \omega_n(t) - \omega(t) ) \|_\infty \longrightarrow 0 \quad \text{as} \quad n \to \infty \end{equation} as well as \begin{equation} \label{eq:T} \sup_{0 \leq t \leq 1} \| \Delta^{-1} \nabla^\perp(\omega_n(t) - \omega(t)) \|_\infty \longrightarrow 0 \quad \text{as} \quad n \to \infty \end{equation} (cf.\ also Thm.~2.12, inequality (2.21), of \cite{TTY}). 
Applying the comparison Lemma \ref{lem:comp} we then find \begin{align} \label{eq:LL} \sup_{0\leq t \leq 1} \big( \| \eta_n(t) - \eta(t) \|_\infty + \| D\eta_n(t) - D\eta(t) \|_\infty \big) = \theta_n \longrightarrow 0 \quad \text{as} \; n \to \infty \end{align} where $\eta(t)$ is the flow of the velocity field $u=\nabla^\perp\Delta^{-1}\omega$ with the initial vorticity $\omega_0$ given by \eqref{eq:iv} as in Proposition \ref{prop:Lag}. Using conservation of vorticity, formula \eqref{eq:SGr} of Remark \ref{rem:SGr} and the invariance of the $L^p$ norms under volume-preserving Lagrangian flows $\eta_n(t)$ (change of variables) we have \begin{align} \nonumber \qquad \| \omega_n(t_0) \|_{W^{1,p}} &\geq \| \nabla( \omega_{0,n}\circ\eta_n^{-1}(t_0)) \|_{L^p} \simeq \\ \nonumber &\hskip -2.8cm \| d\omega_{0,n}{\circ}\eta_n^{-1}(t_0) (\nabla^\perp\eta_{n,2}(t_0) {\circ}\eta_n^{-1}(t_0)) \|_{L^p} {+} \| d\omega_{0,n}{\circ}\eta_n^{-1}(t_0) (\nabla^\perp\eta_{n,1}(t_0){\circ}\eta_n^{-1}(t_0)) \|_{L^p} \\ \label{eq:BB} &\simeq \| d\omega_{0,n} (\nabla^\perp\eta_{n,2}(t_0)) \|_{L^p} + \| d\omega_{0,n} (\nabla^\perp\eta_{n,1}(t_0)) \|_{L^p} \\ \nonumber &\gtrsim \| d\omega_{0,n} ( \nabla^\perp\eta_{n,2}(t_0)) \|_{L^p}. \end{align} Since from the comparison estimate \eqref{eq:LL} we have $$ \| d\omega_{0,n}( \nabla^\perp\eta_2 - \nabla^\perp\eta_{n,2})(t_0)\|_{L^p} \lesssim \| D( \eta_2 - \eta_{n,2} )(t_0)\|_\infty \|\nabla\omega_{0,n}\|_{L^p} \leq \theta_n \|\nabla\omega_{0,n}\|_{L^p} $$ applying the triangle inequality and \eqref{eq:omega-seq} we can further bound the right side of the expression in \eqref{eq:BB} below by \begin{align} \label{eq:MT} &\| d\omega_{0,n} (\nabla^\perp\eta_2(t_0)) \|_{L^p} - \theta_n \| \nabla\omega_{0,n}\|_{L^p} \\ \nonumber &\hskip 1cm \gtrsim \| d\beta_n (\nabla^\perp\eta_2(t_0)) \|_{L^p} - \| d\omega_0(\nabla^\perp\eta_2(t_0))\|_{L^p} - \theta_n \| \nabla\omega_{0,n}\|_{L^p}. 
\end{align} Observe that by the assumption \eqref{eq:assump} we can bound the middle term on the right side of \eqref{eq:MT} as in \eqref{eq:BB} above by \begin{align} \nonumber \| d\omega_0( \nabla^\perp\eta_2(t_0)) \|_{L^p} &\leq \| \nabla\omega_0\circ\eta^{-1}(t_0) \cdot D\eta^{-1}(t_0) \|_{L^p} \\ \label{eq:WW} &\simeq \| \nabla( \omega_0\circ\eta^{-1} (t_0) )\|_{L^p} \leq \| \omega(t_0) \|_{W^{1,p}} \leq M^{1/3}. \end{align} It therefore remains to find a lower bound on the $\beta$-term in \eqref{eq:MT}. This however follows from the two estimates in Lemma \ref{lem:remrem}. Namely, we have \begin{align} \| d\beta_n( \nabla^\perp\eta_2(t_0)) \|_{L^p} &= \big\| -\partial_1\beta_n \partial_2\eta_2(t_0) + \partial_2\beta_n \partial_1\eta_2(t_0) \big\|_{L^p} \nonumber \\ \nonumber &\geq \| \partial_1\beta_n \partial_2\eta_2(t_0)\|_{L^p} - \| \partial_2 \beta_n \partial_1\eta_2(t_0)\|_{L^p} \\ \label{eq:bBeta} &\gtrsim M - \frac{1}{n} \| \partial_2\eta_2(t_0)\|_{L^\infty(B(x^*,2))} - \frac{1}{n} \| \partial_1\eta_2(t_0)\|_{L^\infty(\cup B(x^*_\varepsilon,2))} \\ \nonumber &\gtrsim M \end{align} provided that $n$ is sufficiently large. Combining \eqref{eq:BB}, \eqref{eq:MT}, \eqref{eq:WW} and \eqref{eq:bBeta} completes the proof of Theorem \ref{thm:1}. \bibliographystyle{amsplain}
1303.5031
\section{Introduction} This article studies perturbations of finite dimensional dynamical systems by small multiplicative L\'evy noise with heavy-tailed large jumps, with a focus on the exit behavior from a bounded neighborhood of their global attractor. The scenario we shall study is as follows. Let us consider a $d$-dimensional deterministic dynamical system $\dot u=f(u)$ on a positively invariant bounded domain $D$. We assume that the dynamical system has a global attractor $\mathcal{A}$ in $D$ and that uniformly over the initial conditions in $D$ the time averages of the trajectories converge to a unique invariant measure $P$ on $\mathcal{A}$. The most prominent examples of systems satisfying these settings are dynamical systems with a stable fixed point $\mathcal{A}= \{\mathfrak{s}\}$ or a stable limit cycle $\mathcal{A} = \mathcal{O}$. Clearly, in this case the paths of the dynamical system never leave $D$. This situation changes significantly in the presence of a perturbation by noise, however small its intensity $\varepsilon>0$ may be. In the generic situation, the perturbed solution always exits from $D$. However, the growth rate of the exit time shows an asymptotic behavior that strongly depends on the nature of the noise. Without any doubt, beginning with the pioneering works by Kramers \cite{Kramers-40} and Freidlin and Wentzell \cite{FW70}, the case of Gaussian perturbations has been studied quite exhaustively in the realm of large deviation theory. The literature on large deviation principles is enormous; representative examples for finite and infinite dimensional systems include the works \cite{Br91,Br96,BovierEGK-04, Day-83, FL82, Fr88}, where perturbed gradient dynamical systems were mainly considered. For the case of non-gradient and degenerate systems we refer to \cite{BerglundG-04, Da96, FreidlinW-98}. 
They all have in common that the first exit time grows in $\varepsilon$ at the order $\exp(\bar V/\varepsilon^2)$, known in the physics literature as Kramers' law, where $\bar V$ is the minimal amount of energy needed for a Brownian path to steer the perturbed system from the attractor $\mathcal{A}$ to a point on the boundary $\partial D$. In other words, $\bar V$ depends only on the dynamical system outside the attractor. The dynamics on the attractor, where no energy is needed to travel, is irrelevant. The exit scenario changes fundamentally if the perturbation is a L\'evy process with power-tailed (heavy-tailed) large jumps. In this case, the large jumps determine the exit behavior: it is possible to perform a time-scale separation of the big jumps from the small jumps and the Gaussian component such that on the new time scale the system's small-noise behavior becomes essentially that of a deterministic system perturbed by large jumps. Using this approach, the gradient case or the case with point attractors in finite and infinite dimensional systems has been treated in \cite{DHI13, Godovanchuk-82,ImkellerP-06,ImkPavSta-10,Pavlyukevich11}. Since the deterministic system converges to the stable state fast enough in comparison to the occurrence rate of large jumps, the exit occurs when the system jumps from a vicinity of $\mathfrak{s}$. The resulting exit rate turns out to be of a power order with respect to $1/\varepsilon$, and the asymptotic exit location in $D^c$ is given by the probability distribution of large jump increments conditioned to $D^c-\mathfrak{s}$. This is radically different from the case of Gaussian perturbations, where the exit occurs only on the boundary of $D$ due to the continuity of the paths. In the present paper, we generalize these results to the case where the global attractor $\mathcal{A}$ in $D$ is not necessarily a stable point. Once again the essential exit behavior is determined by the deterministic system perturbed by large jumps. 
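The dichotomy between exponential (Kramers) and polynomial exit rates can already be seen in a one-dimensional toy experiment, which is an illustration only and not the setting of this paper: for $\dot X = -X$ perturbed by $\varepsilon$ times a standard Cauchy process ($\alpha=1$), halving $\varepsilon$ roughly doubles the mean exit time from $D=(-1,1)$, and the exit position lands strictly inside $D^c$ via a single large jump:

```python
import numpy as np

# Toy experiment (illustration only): dX = -X dt + eps dL with L a standard
# Cauchy process, i.e. alpha = 1; D = (-1, 1).  The mean exit time grows like
# eps^{-alpha}, and exits overshoot into the interior of D^c by one big jump.
rng = np.random.default_rng(1)

def run(eps, n_paths=4000, dt=0.01, n_max=120_000):
    x = np.zeros(n_paths)
    t_exit = np.zeros(n_paths)
    alive = np.ones(n_paths, dtype=bool)
    for step in range(1, n_max + 1):
        idx = np.flatnonzero(alive)
        if idx.size == 0:
            break
        # increments of a standard Cauchy process over dt have law Cauchy(0, dt)
        x[idx] += -x[idx] * dt + eps * dt * rng.standard_cauchy(idx.size)
        out = idx[np.abs(x[idx]) >= 1.0]
        t_exit[out] = step * dt
        alive[out] = False
    return t_exit[~alive], x[~alive]

t1, x1 = run(0.1)
t2, _ = run(0.05)
print(t2.mean() / t1.mean())       # roughly (0.1/0.05)^alpha = 2, up to MC error
print(np.mean(np.abs(x1) > 1.5))   # a sizeable fraction lands deep inside D^c
```

A Gaussian perturbation of the same system would instead show mean exit times of order $\exp(c/\varepsilon^2)$ and exit positions concentrated on the boundary points $\pm 1$.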
However, we face the problem that --- in contrast to the case of gradient systems --- the convergence of the deterministic trajectory to a hyperbolic attractor as a set does not imply convergence towards a particular trajectory on the attractor. Instead, what replaces the deterministic control of the trajectory is its ergodic behavior, that is, the ``occupation statistics'' given by its time averages on the attractor. In this sense the exit event will be asymptotically triggered by the large jumps starting on $\mathcal{A}$ under the invariant measure $P$. The exit rate is again of a power order in $1/\varepsilon$, but the precise prefactor now depends on the large jump distribution and the ergodic measure $P$ concentrated on $\mathcal{A}$. The distribution of the exit location is hence given by the probability distribution for large jumps conditioned to $D^c-v$, where $v$ is averaged over $P$ on $\mathcal{A}$. Therefore, contrary to the aforementioned Gaussian case, the deterministic dynamics on the attractor turns out to be crucial for the asymptotics of the exit times. We can make this intuition rigorous for a very general class of additive and multiplicative L\'evy noises with a regularly varying L\'evy measure of index $-\alpha$, $\alpha >0$. In particular, our main result covers perturbations in the It\^o and Stratonovich sense, as well as in the sense of canonical (Marcus) integrals, where jumps in general do not occur along straight lines but follow the flow of a vector field which determines the multiplicative noise. Limit cycle attractors perturbed by Gaussian noise have been considered in the physics and other natural sciences literature for quite some time \cite{EFSV85, GM88, HLP09, KurSch91, LC98, SV87}. As an application of our main result we work out the example of the Van der Pol oscillator perturbed by multiplicative $\alpha$-stable noise. 
It has been well known for a long time that the first exit time and location problem for general Markov processes can be stated in terms of Poisson and Dirichlet problems for the generator of the process; consult for instance \cite{Wentzell-90}. However, the generators of the jump part in the case of L\'evy processes are non-local integro-differential operators, for which these problems are hard to solve, in particular in the case of the canonical Marcus noise. The advantages of our approach are, among others, its insensitivity to the boundary regularity of $D$ and the intuitive simplicity of the result. \section{Object of study and main result}\label{sec: object of study} \subsection{Deterministic dynamics}\label{subsec: deterministic dynamics} Consider a bounded domain $D\subset \mathbb{R}^d$, $d\geqslant 1$, with piecewise $\mathcal{C}^1$-smooth boundary and a vector field $f\in \mathcal{C}^2(D, \mathbb{R}^d)$ which points uniformly inward at the boundary. We are interested in the $d$-dimensional dynamical system given as the solution map $(t,x) \mapsto u(t; x),$ \mbox{$t\geqslant 0,$} $x \in D$ of the autonomous ordinary differential equation \begin{equation}\begin{aligned} \label{eq: det} \dot u&=f(u),\quad\quad u(0) = x. \end{aligned}\end{equation} We assume that the unique solution exists for all $x\in D$ and $t\geqslant 0$, and that the dynamical system defined by (\ref{eq: det}) has a global attractor $\mathcal{A}$ in $D$. \begin{rem}\label{rem D.2} Since by definition the global attractor attracts bounded sets in $D$, see for instance \cite{Temam97}, there exists a positively invariant set $\mathcal{I}$ with $\mathcal{A} \subset \mathcal{I} \subset D$ such that $\dist(\partial D, \partial \mathcal{I})>0$, and a time $\mathcal{S}>0$ such that for all $x\in D$ and $t\geqslant \mathcal{S}$ \begin{equation} u(t;x) \in \mathcal{I}. 
\end{equation} \end{rem} \paragraph{(D.1)} Let there exist a unique invariant probability measure $P$ on $\mathfrak{B}(\mathbb{R}^d)$ with $\supp(P) = \mathcal{A}$ such that all non-negative, measurable and bounded functions $\phi: \mathbb{R}^d \rightarrow \mathbb{R}$ satisfy \begin{equation} \lim_{t\rightarrow \infty} \sup_{x\in D} \Big| \frac{1}{t} \int_0^t \phi(u(s;x)) \, ds - \int_\mathcal{A} \phi(v) P(dv) \Big| = 0. \end{equation} \begin{dfn} For $\delta>0$ we define the reduced domain of attraction \begin{align*} D_{\delta} &:= D \setminus \mathcal{B}_{\delta}(\partial D). \end{align*} \end{dfn} \begin{rem} \label{lrm: properties of reduced domains} Due to the assumption that $f$ points uniformly inward at $\partial D$, there is $\delta_0\in (0,1)$ such that for all $\delta\in (0, \delta_0]$ \[ u(t, D_{\delta}) \subset D_{\delta} \qquad \mbox{ for all }t\geqslant 0. \] \end{rem} \subsection{The probabilistic perturbation } \noindent On a filtered probability space $(\Omega, \mathcal{F} ,\mathbb{P}, (\mathcal{F}_t)_{t\geqslant 0})$, satisfying the usual hypotheses in the sense of \cite{Protter-04}, we consider a L\'evy process $Z = (Z_t)_{t\geqslant 0}$ with values in $\mathbb{R}^m$, $m\geqslant 1$, with characteristic function \begin{equation} \mathbf{E} e^{i\langle u, Z_1 \rangle}=\exp\Big( -\frac{\langle Au,u\rangle}{2} +i\langle b,u\rangle +\int\big( e^{i\langle u, z \rangle}-1-i\langle u,z\rangle\mathbf{1}(\|z\|< 1)\big)\,\nu(dz)\Big),\ u\in\mathbb{R}^m. \end{equation} \noindent Let us denote by $N(dt, dz)$ the associated Poisson random measure with intensity measure $dt \otimes \nu(dz)$, and by $\tilde N(dt, dz) = N(dt, dz) - dt\, \nu(dz)$ the compensated Poisson random measure. 
Consequently, by the L\'evy--It\^o theorem \cite{Applebaum-09} the L\'evy process~$Z$ given above has the following almost surely pathwise additive decomposition \begin{equation}\label{eq: Levy-Ito} Z_t=b t+ A^{\frac{1}{2}} B_t+\int_{(0, t]}\int_{0<\|z\|<1} z \tilde N(ds, dz) + \int_{(0,t]}\int_{\|z\|\geqslant 1} z N(ds, dz),\qquad t\geqslant 0, \end{equation} with $B = (B_t)_{t\geqslant 0}$ a standard Brownian motion in $\mathbb{R}^m$. Furthermore, the random summands in (\ref{eq: Levy-Ito}) are independent. For further details on L\'evy processes we refer to \cite{Applebaum-09} and \cite{Sato-99}.\\ \noindent \textbf{(S.1)} The L\'evy measure $\nu$ of the process $Z$ is \textbf{regularly \mbox{varying at $\infty$}} with index $-\alpha$. Let $h\colon (0,\infty)\to (0,\infty)$ denote its tail, \begin{equation}\label{def: h} h(r):=\int_{\|y\|\geq r }\nu(dy). \end{equation} Then there exist $\alpha>0$ and a non-trivial self-similar Radon measure $\mu$ on $\bar \mathbb{R}^m\backslash\{0\}$ such that $\mu (\bar\mathbb{R}^m\backslash \mathbb{R}^m)=0$ and for any $a>0$ and any Borel set $A$ bounded away from the origin, i.e.\ $0\notin \overline{A}$, with $\mu(\partial A)=0$, the following limit holds true: \begin{equation}\label{eq: regular variation} \mu(aA)=\lim_{r\rightarrow \infty} \frac{\nu(raA)}{h(r)} = \frac{1}{a^\alpha} \lim_{r\rightarrow \infty} \frac{\nu(rA)}{h(r)}= \frac{1}{a^\alpha} \mu(A). \end{equation} In particular, following \cite{BinghamGT-87}, there exists a positive function $\ell$ slowly varying at infinity such that \[ h(r) = \frac{1}{r^{\alpha} \ell(r)},\qquad\mbox{ for all} \quad r>0. \] The self-similarity property of the limit measure $\mu$ implies that $\mu$ assigns no mass to spheres centred at the origin of $\mathbb{R}^m$ and has no atoms. 
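For the concrete measure $\nu(dy)=dy/\|y\|^{2+\alpha}$ on $\mathbb{R}^2$ (used in the Van der Pol example below) the scaling in \eqref{eq: regular variation} can be verified numerically. The sketch below evaluates $\nu$ on half-planes in polar coordinates and checks the index $-\alpha$ together with the closed-form value of $\nu(\{y_1>1\})$:

```python
import math
import numpy as np

# Numerical check (illustration only) of the scaling mu(aA) = a^{-alpha} mu(A)
# for nu(dy) = dy/||y||^(2+alpha) on R^2 and the half-plane A = {y_1 > c}.
alpha = 1.5

def nu_halfplane(c, n=4000):
    # polar coordinates: nu({y_1 > c}) = (1/alpha) * int (c/cos t)^(-alpha) dt
    t = np.linspace(-math.pi / 2, math.pi / 2, n + 1)[1:-1]
    g = (c / np.cos(t))**(-alpha) / alpha
    return 0.5 * np.sum((g[:-1] + g[1:]) * np.diff(t))   # trapezoid rule

ratio = nu_halfplane(2.0) / nu_halfplane(1.0)
# closed form for c = 1:  (1/alpha) * sqrt(pi) Gamma((alpha+1)/2) / Gamma(alpha/2+1)
exact = (math.pi**0.5 * math.gamma((alpha + 1) / 2)
         / (alpha * math.gamma(alpha / 2 + 1)))
print(ratio, 2.0**(-alpha))      # scaling with index -alpha
print(nu_halfplane(1.0), exact)
```

Doubling the half-plane offset $c$ multiplies the measure by $2^{-\alpha}$, exactly as \eqref{eq: regular variation} predicts for $aA$ with $a=2$.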
For more information on multivariate heavy tails and regular variation we refer the reader to Hult and Lindskog \cite{HultL-06-1} and Resnick \cite{Resnick-04}.\\ \noindent \textbf{(S.2)} \noindent Consider continuous maps $G\in \mathcal{C}(\mathbb{R}^d\times \mathbb{R}^m, \mathbb{R}^d)$ and $F, H: \mathbb{R}^d \rightarrow \mathbb{R}^{d\times m}$ and fix the notation \[ a(x, y) := F(x) F(y)^T \qquad \mbox{ for }x, y \in \mathbb{R}^d. \] We assume that there exists $L>0$ such that $f$, $G$, $H$ and $F$ satisfy the following properties. \begin{enumerate} \item \textbf{Local Lipschitz conditions: } For all $x, y \in D$ \begin{multline*} \|f(x) -f(y)\|^2 + \|a(x,x) - 2 a(x, y) + a(y,y)\| + \|H(x) - H(y)\|^2\\ + \|F(x) - F(y)\|^2 + \int_{\mathcal{B}_1} \|G(x,w)-G(y,w)\|^2 \nu(dw) \leqslant L^2 \|x-y\|^2. \end{multline*} \item \textbf{Local boundedness: } For all $x\in D$ \begin{align*} \|f(x)\|^2 + \|a(x,x)\| + \|H(x)\|^2 + \|F(x)\|^2 + \int_{\mathcal{B}_1} \|G(x,w)\|^2 \nu(dw) \leqslant L^2 (1+ \|x\|^2). \end{align*} \item \textbf{Large jump coefficient:} For all $x, y \in D$ and $w\in \mathbb{R}^m$ \begin{align*} \|G(x,w) - G(y, w)\| \leqslant L e^{L (\|w\| \wedge L)} \|x-y\|. \end{align*} \item \textbf{Local bound for $G$ in small balls:} There exists $\delta'>0$ such that for $w\in \mathcal{B}_{\delta'}(0)$ \begin{align*} \sup_{v\in \mathcal{B}_{\delta'}(\mathcal{A})} \|G(v, w)\| \leqslant L. \end{align*} \end{enumerate} \label{S.4 G to 0 for small balls} \begin{prp} \noindent Let the assumptions \textbf{(D.1)} and \textbf{(S.1-2)} be fulfilled. 
Then for $\varepsilon\in (0,1)$ and $x\in D$ the stochastic differential equation \begin{align} X_{t,x}^\varepsilon &= x + \int_0^t f(X^\varepsilon_{s,x})ds + \varepsilon \int_0^t H(X^\varepsilon_{s,x}) b \;ds + \varepsilon \int_0^t F(X^\varepsilon_{s, x}) d (A^{\frac{1}{2}}B_s) \nonumber\\ &\qquad + \int_0^t \int_{\|z\| \leqslant 1} G(X^\varepsilon_{s-,x}, \varepsilon z) \tilde N(ds, dz) + \int_0^t \int_{\|z\| > 1} G(X^\varepsilon_{s-,x}, \varepsilon z) N(ds, dz) \label{eq: sde} \end{align} has a unique local strong solution process $(X^\varepsilon_{t \wedge \mathbb{T},x})_{t\geqslant 0}$ with c\`adl\`ag paths in $\mathbb{R}^d$, which defines a strong Markov process with respect to $(\mathcal{F}_t)_{t\geqslant 0}$, where $\mathbb{T} = \mathbb{T}_x(\varepsilon)$ is the first exit time \begin{align*} \mathbb{T}_x(\varepsilon)&:=\inf\{t\geqslant 0\colon X^\varepsilon_{t,x}\notin D\}, \qquad \varepsilon>0, x\in D. \end{align*} \end{prp} \noindent The proof can be found, for instance, as Theorem 6.23 on page 367 of \cite{Applebaum-09}. \noindent Multiplicative perturbations in the sense of It\^o, Fisk--Stratonovich or Marcus are of special interest for applications. Assume that $Z$ is a pure jump process with $A=0$, $b=0$. For a globally Lipschitz continuous function $\Phi\colon \mathbb{R}^d\to\mathbb{R}^{d\times m}$ consider the It\^o and canonical SDEs \begin{align} \label{eq:ito} X_t&=x+\int_0^t f(X_s)dt+\varepsilon\int_0^t \Phi(X_{s-})dZ_s,\\ \label{eq:marcus} X_t^\diamond &=x+\int_0^t f(X_s^\diamond)dt+\varepsilon\int_0^t \Phi(X_{s-}^\diamond)\diamond dZ_s. \end{align} Then the It\^o SDE \eqref{eq:ito} is obtained from \eqref{eq: sde} with the jump increment \[ G(x,z):=\Phi(x)z \] and the Marcus SDE \eqref{eq:marcus} with \[ G(x,z):=\phi^z(x)-x, \] where $\phi^z(x) = y(1;x)$ is the solution of the ordinary differential equation \[ \dot y(s) = \Phi(y(s)) z, \qquad y(0) =x , \quad s\in [0,1]. 
\] If $L$ is the Lipschitz constant of the matrix function $\Phi$ then the Gronwall lemma implies that \[ \|G(x,z)-G(y,z)\| \leqslant L e^{L\|z\|} \|x-y\|\qquad \forall x, y \in D, z\in \mathbb{R}^m. \] \subsection{The main result}\label{subsec: the main results} \noindent For $x\in \mathbb{R}^d$, $U\in \mathfrak{B}(\mathbb{R}^d)$ with $x\notin U$ we denote the set of increments $z\in \mathbb{R}^m$ which send $x$ into $U$ by \begin{align}\label{def: event E} E^{U}(x)& :=\{z\in \mathbb{R}^m\colon x + G(x,z)\in U\}. \end{align} \noindent We define the measure $Q$ by setting, for $U \in \mathfrak{B}(\mathbb{R}^d)$, \begin{align*} Q(U) &:=\int_{\mathcal{A}} \mu(E^{U}(y))~P(dy). \end{align*} \begin{rem} Clearly, for \begin{align*} \lambda_\varepsilon &:= \int_{\mathcal{A}} \nu\Big(\frac{E^{D^c}(y)}{\varepsilon}\Big) P(dy) \qquad \mbox{ and }\qquad h_\varepsilon := h\Big(\frac{1}{\varepsilon}\Big), \quad\varepsilon\in (0,1), \end{align*} equation (\ref{eq: regular variation}) implies \begin{align*} \lim_{\varepsilon\rightarrow 0+} \frac{\lambda_\varepsilon}{h_\varepsilon} = Q(D^c). \end{align*} \end{rem} \begin{thm}\label{thm: first exit times} Let Hypotheses \textbf{(D.1)} and \textbf{(S.1-2)} be fulfilled and suppose that $Q(\partial D) = 0$ and $Q(D^c) >0$. 
Then for any $\gamma \in (0,\frac{1}{5})$, any $\theta>0$ and any $U\in \mathfrak{B}(\mathbb{R}^d)$ such that $Q(\partial U) =0$ the first exit time $\mathbb{T}_y(\varepsilon)$ satisfies \begin{align*} \lim_{\varepsilon\to 0} \sup_{y\in D_{\varepsilon^\gamma}} \Big|\mathbb{E}\left[ e^{-\theta Q(D^c) h_\varepsilon \mathbb{T}_y(\varepsilon)} \mathbf{1}\{X^\varepsilon_{\mathbb{T}_y(\varepsilon) ,y}\in U\}\right]- \frac{1}{1+ \theta} \frac{Q(U \cap D^c)}{Q(D^c)} \Big|=0. \end{align*} \end{thm} \begin{cor}\label{cor: first exit times} Under the assumptions of Theorem \ref{thm: first exit times} it follows that \begin{align*} Q(D^c) h_\varepsilon \mathbb{T}_x(\varepsilon) &\stackrel{d}{\to} \mbox{EXP}(1), \\[2mm] \mathbb{P}(X_{\mathbb{T}_x(\varepsilon), x}^\varepsilon \in U) &\to \frac{Q(U\cap D^c)}{Q(D^c)},\quad \varepsilon\to 0, \end{align*} where the convergence is uniform over all initial values $x \in D_{\varepsilon^\gamma}$. \end{cor} \subsection{Example: Van der Pol oscillator perturbed by $\alpha$-stable L\'evy noise } As a simple but illustrative application of Theorem \ref{thm: first exit times} we determine the law of the first exit time of a Van der Pol oscillator perturbed by small It\^o-multiplicative $\alpha$-stable L\'evy noise. More precisely, let $Z$ be a bivariate L\'evy process with the characteristic function \[ \mathbb{E}\left[ e^{i\langle u , Z_t \rangle}\right]=e^{-t c(\alpha) \|u\|^\alpha},\quad \alpha\in (0,2),\ u\in\mathbb{R}^2,\quad c(\alpha)=\frac{\pi}{2^\alpha}\frac{|\Gamma(-\frac{\alpha}{2})|}{\Gamma(1+\frac{\alpha}{2})}, \] and a L\'evy triplet $(0,\nu,0)$, where \begin{align*} \nu(dy)=\mathbf{1}_{\mathbb{R}^2\backslash \{0\}}(y)\frac{dy}{\|y\|^{2+\alpha}}. \end{align*} Clearly, $\nu$ is a regularly varying measure of index $-\alpha$ with the limit measure $\mu=\nu$ and a scaling function \[ h(r)=\int_{\|y\|\geq r}\frac{dy}{\|y\|^{2+\alpha}}=\frac{2\pi}{\alpha}\frac{1}{r^\alpha}. 
\] Consider the Van der Pol oscillator for $u = (u_1, u_2)$ and $f = (f_1, f_2)$, \begin{align*} \dot u&=f(u), \qquad \begin{cases} f_1(u_1, u_2)&=u_2,\\ f_2(u_1, u_2)&=-u_1 + (1-u_1^2)u_2, \end{cases} \end{align*} which has an unstable stationary solution $u\equiv 0$ and a unique periodic solution $u^\circ=(u_1^\circ(t),u_2^\circ(t))_{t\in [0, T^\circ ]}$ of period $T^\circ>0$; since none of the quantities involved depends on the initial value of $u^\circ$, we suppress it in the notation. It is well known that the set $\mathcal{A} =\{ (u_1^\circ(t),u_2^\circ(t))_{t\in [0, T^\circ ]}\}\subset \mathbb{R}^2$ is an exponentially orbitally stable limit cycle. In particular, for any bounded and measurable function $\phi: \mathbb{R}^2 \rightarrow (0,\infty)$ and any initial point $x\neq 0$ we have \begin{align*} &\frac{1}{t}\int_0^t \phi(u(s;x))\, ds\to \frac{1}{T^\circ} \int_0^{T^\circ} \phi(u^\circ(s)) ds =\int_\mathcal{A}\phi(v)P(dv),\\ &\mbox{ where }\quad P(B)=\frac{1}{T^\circ} \int_0^{T^\circ} \mathbf{1}_B(u^\circ(s)) ds,\quad \mbox{ for }B\in \mathfrak{B}(\mathbb{R}^2), \end{align*} and this convergence is uniform over all $x \in D$ bounded away from the origin. Consider now a Van der Pol oscillator perturbed by multiplicative It\^o noise \begin{align*} dX^\varepsilon_t=f(X^\varepsilon_t)dt+\varepsilon G(X^\varepsilon_t) dZ_t, \end{align*} where $(x_1, x_2) \mapsto G(x_1,x_2)$ is a $2\times 2$ matrix-valued function satisfying Hypotheses (S.1) and (S.2) of Section 2. Let $D$ be an open bounded invariant domain of attraction containing the limit cycle $\mathcal{A}$ with $\dist(\mathcal{A},\partial D)>0$, see Fig.~\ref{f:vdp}.
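The independence of the ergodic averages from the initial point can be illustrated numerically. Below is a minimal sketch, not part of the analysis: the integrator, step size, horizon, burn-in and the test function $\phi(u)=\|u\|$ are our choices. It integrates the unperturbed oscillator with a fixed-step RK4 scheme and compares the time averages of $\phi$ started from two different initial points.

```python
import math

def f(u):
    # Van der Pol vector field: u1' = u2, u2' = -u1 + (1 - u1^2) * u2
    return (u[1], -u[0] + (1.0 - u[0] ** 2) * u[1])

def rk4_step(u, dt):
    # One classical fourth-order Runge-Kutta step for du/dt = f(u)
    k1 = f(u)
    k2 = f((u[0] + 0.5 * dt * k1[0], u[1] + 0.5 * dt * k1[1]))
    k3 = f((u[0] + 0.5 * dt * k2[0], u[1] + 0.5 * dt * k2[1]))
    k4 = f((u[0] + dt * k3[0], u[1] + dt * k3[1]))
    return (u[0] + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0,
            u[1] + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0)

def time_average(phi, x0, t_end=400.0, dt=0.01, burn_in=100.0):
    # Approximates (1/t) * int_0^t phi(u(s; x0)) ds after discarding a burn-in,
    # mimicking the limit t -> infinity along the attracting cycle.
    u = x0
    for _ in range(int(burn_in / dt)):
        u = rk4_step(u, dt)
    acc, n = 0.0, int(t_end / dt)
    for _ in range(n):
        acc += phi(u) * dt
        u = rk4_step(u, dt)
    return acc / (n * dt)

phi = lambda u: math.hypot(u[0], u[1])
a = time_average(phi, (2.0, 0.0))    # start near the cycle
b = time_average(phi, (0.1, -0.3))   # start near the unstable origin
# Both averages approximate the cycle average (1/T°) ∫ phi(u°(s)) ds,
# independently of the initial point.
print(abs(a - b))
```

The two averages agree up to discretisation and phase-truncation error, reflecting that the limit measure $P$ does not depend on the initial point.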
\begin{figure}[t] \begin{center} \hfill a) \includegraphics[width=7cm]{VdP-alpha19-1.eps}\hfill b)\includegraphics[width=7cm]{VdP-domains-02-mod.eps}\hfill \end{center} \caption{a) A typical exit path of a Van der Pol oscillator perturbed by 1.9-stable L\'evy noise; b) the domains $G^\ominus (D^c-u^\circ(t))$ in the space of noise jumps for two different values of $t\in[0, T^\circ]$.\label{f:vdp}} \end{figure} Let \begin{align*} G_t&= G(u^\circ_1(t),u^\circ_2(t))\quad \text{ and }\quad G^\ominus_t:= \begin{cases} G^{-1}_t, &\det G_t\neq 0,\\ 0,&\text{ otherwise}. \end{cases} \end{align*} For any $\delta>0$ we can choose a small neighbourhood $\mathcal{B}_\delta(0)$ of the unstable fixed point~$0$ of the Van der Pol oscillator such that the domain $D^{(\delta)}=D \backslash \mathcal{B}_\delta(0)$ and $f$ satisfy Hypothesis (D.1). Let $x\in D^{(\delta)}$ and denote by $\mathbb T^{(\delta)}_x(\varepsilon)$ the first exit time from the domain $D^{(\delta)}$. We are now in a position to apply Theorem~\ref{thm: first exit times} and find that \begin{align*} \varepsilon^\alpha \frac{2\pi}{\alpha T^\circ} \int_0^{T^\circ}\Big[\int_{G_s^\ominus D^c} +\int_{G_s^\ominus \mathcal{B}_{\delta}(0)}\frac{dy}{\|y-u^\circ(s)\|^{2+\alpha}}\, \Big]d s \cdot \mathbb T_x^{(\delta)}(\varepsilon) \stackrel{d}{\to} \mbox{EXP}(1),\quad \varepsilon\to 0. \end{align*} Taking into account that $\int_{G_s^\ominus \mathcal{B}_{\delta}(0)}\frac{dy}{\|y-u^\circ(s)\|^{2+\alpha}} \to 0$ as $\delta\to 0$, we finally obtain the limiting law \begin{align*} \varepsilon^\alpha \bigg(\frac{2\pi}{\alpha T^\circ} \int_0^{T^\circ}\int_{G_s^\ominus D^c} \frac{dy}{\|y-u^\circ(s)\|^{2+\alpha}}\, d s\bigg) \cdot \mathbb T_x^{(0)}(\varepsilon) \stackrel{d}{\to} \mbox{EXP}(1),\quad \varepsilon\to 0, \end{align*} with the convergence uniform over all $x\in D \setminus \mathcal{B}_{\varepsilon^{\frac{1}{10}}}(\partial D^\times)$ with $D^\times = D\setminus \{0\}$.
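The closed form of the scaling function, $h(r)=2\pi/(\alpha r^\alpha)$, which enters the normalisation of the exit time above, can be verified by direct quadrature in polar coordinates. A minimal sketch (the quadrature parameters are our choices):

```python
import math

def nu_tail(r, alpha, n=100000, v_max=60.0):
    # nu(B_r^c) = ∫_{||y|| >= r} ||y||^{-(2+alpha)} dy
    #           = 2*pi * ∫_r^∞ s^{-(1+alpha)} ds        (polar coordinates)
    # Substituting s = r * e^v gives 2*pi * r^{-alpha} * ∫_0^∞ e^{-alpha*v} dv,
    # which we approximate by the midpoint rule on [0, v_max].
    dv = v_max / n
    total = sum(math.exp(-alpha * (i + 0.5) * dv) for i in range(n)) * dv
    return 2.0 * math.pi * r ** (-alpha) * total

for alpha in (0.5, 1.0, 1.9):
    for r in (0.5, 1.0, 3.0):
        exact = 2.0 * math.pi / (alpha * r ** alpha)
        assert abs(nu_tail(r, alpha) - exact) / exact < 1e-4
print("h(r) = 2*pi/(alpha * r^alpha) confirmed by quadrature")
```

The exponential substitution makes the improper integral finite-range, so a plain midpoint rule suffices for relative accuracy well below $10^{-4}$.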
\section{Small jumps dynamics\label{sec: small noise asymptotics}} The aim of this section is to determine the precise asymptotics of $(X^\varepsilon_{t,x})_{t\in [0, T_1]}$ up to the first large jump time $T_1$. This will be accomplished in Proposition \ref{prop: ergodicity}, which tells us that for times $t\in [0, T_1)$ the deterministic dynamics and its ergodicity property dominate, and at $t = T_1$ a single large jump occurs. We assume Hypotheses \textbf{(D.1)} and \textbf{(S.1-2)} to be satisfied in the sequel. \subsection{Asymptotics until the first large jump} Let $\rho^\varepsilon$, $\varepsilon\in(0,1]$, be a positive function which increases monotonically to infinity, $\rho^\varepsilon\nearrow \infty$ as $\varepsilon\searrow 0$, and denote by \begin{align*} \beta_\varepsilon := \nu(\mathcal{B}_{\rho^\varepsilon}^c). \end{align*} Consider the following $\varepsilon$-dependent L\'evy-It\^o decomposition $Z_t := \xi^\varepsilon_t + \eta_t^\varepsilon$ for all $t \geqslant 0$, $\varepsilon\in (0,1)$, \begin{equation} \label{def: b eps} \begin{aligned} &\eta^\varepsilon_t := \int_{(0,t]}\int_{\|z\| > \rho^\varepsilon } z N(ds, dz),\\ &\xi^\varepsilon_t := Z_t - \eta_t^\varepsilon = b_\varepsilon t + A^{\frac{1}{2}} B_t+\int_{(0, t]}\int_{0<\|z\|\leqslant \rho^\varepsilon} z \tilde N(ds, dz),\\ &b_\varepsilon := b + \mathbb{E}\Big[\int_{(0,1]}\int_{1< \|z\|\leqslant \rho^\varepsilon} z\, N(ds, dz)\Big] = b + \int_{1 < \|z\|\leqslant \rho^\varepsilon} z\, \nu(dz).
\end{aligned} \end{equation} The compound Poisson process $\eta^\varepsilon$ here is characterised by a family of i.i.d.\ waiting times $(\tau^\varepsilon_i)_{i\in \mathbb{N}}$ with \mbox{$\tau_i^\varepsilon \sim \mbox{EXP}(\beta_\varepsilon)$,} the renewal times \[ T_i^\varepsilon = \sum_{k=1}^i \tau_k^\varepsilon, \] and a family of i.i.d.\ large jumps $(W^\varepsilon_i)_{i\in\mathbb{N}}$, independent of $(\tau^\varepsilon_i)_{i\in \mathbb{N}}$, with $W^\varepsilon_i \sim \nu_\varepsilon$, where \begin{equation}\label{eq: truncated nu} \nu_\varepsilon(\cdot) = \frac{\nu(\cdot \cap \mathcal{B}_{\rho^\varepsilon}^c)}{\nu(\mathcal{B}_{\rho^\varepsilon}^c)}. \end{equation} The process $\xi^\varepsilon$ is a L\'evy process with jumps bounded from above by $\rho^\varepsilon$ and hence has finite moments of all orders. \subsection{Control of the small jump noise} In this subsection we show that the probabilities of deviations of bounded integrals driven by the small jump noise $\varepsilon \xi^\varepsilon$ defined in (\ref{def: b eps}) decay exponentially. \begin{lem} \label{lem: drift estimate} Let $(\delta_\varepsilon)_{\varepsilon\in(0,1]}$ be a monotone sequence with $\delta_\varepsilon\searrow 0$ as $\varepsilon\searrow 0$ satisfying in addition \begin{equation}\label{eq: rho-delta limit 0} \lim_{\varepsilon\rightarrow 0+} \varepsilon \frac{\rho^\varepsilon}{\delta_\varepsilon} = 0. \end{equation} Then for any $C>0$ there is $\varepsilon_0\in (0,1)$ such that for all $\varepsilon \in (0, \varepsilon_0]$ \begin{equation} \label{eq: drift estimate} \frac{\varepsilon \|b_\varepsilon\|}{\delta_\varepsilon} \leqslant C. \end{equation} \end{lem} \begin{proof} In order to prove (\ref{eq: drift estimate}) we center the process $\xi^\varepsilon$, setting $\tilde \xi_t := \xi^\varepsilon_t - b_\varepsilon t$, such that $\tilde \xi$ is a L\'evy martingale with jumps bounded from above by $\rho^\varepsilon$.
Since $\|b_\varepsilon\| \leqslant \|b\| + \|\int_{1< \|y\|\leqslant \rho^\varepsilon} y \nu(dy)\|$ we obtain by Jensen's inequality and the regular variation of the function $h$ defined by (\ref{def: h}) that \begin{align*} \Big\|\int_{1< \|y\|\leqslant \rho^\varepsilon} y \nu(dy)\Big\|^2 &\leqslant \int_{1 < \|y\|\leqslant \rho^\varepsilon} \|y\|^2 \nu(dy) = -\int_1^{\rho^\varepsilon} r^2 h(dr) \leqslant (\rho^\varepsilon)^2 h(1), \end{align*} such that $\|b_\varepsilon\| \leqslant \|b\| + \sqrt{h(1)} \rho^\varepsilon$, which gives the desired result with the help of (\ref{eq: rho-delta limit 0}). \end{proof} \begin{lem}\label{lem: quadratic variation} Let $(\delta_\varepsilon)_{\varepsilon\in(0,1]}$ be a monotone sequence with $\delta_\varepsilon\searrow 0$ as $\varepsilon\searrow 0$ and $p\geqslant 1$ satisfying \begin{equation}\label{eq: rho-delta limit} \lim_{\varepsilon\rightarrow 0+} \frac{\varepsilon\rho^\varepsilon}{\delta_{\varepsilon}^{(p+1)/2}} = 0. \end{equation} Then for all $T>0$ and $C>0$ there is $\varepsilon_0\in (0,1)$ such that for all $\varepsilon \in (0, \varepsilon_0]$ \begin{align*} \mathbb{P}([\varepsilon \tilde\xi]_{T} > C \delta_\varepsilon^p) &\leqslant e^{-C \delta_\varepsilon^{-1} +1}.
\end{align*} \end{lem} \begin{proof} The discontinuous part of the quadratic variation process $[\varepsilon \tilde \xi]_t^d = [\varepsilon \tilde \xi]_t - \trace(A) \varepsilon^2 t $ is a L\'evy subordinator and has the representation \begin{align*} [\varepsilon \tilde \xi]_t^d = \varepsilon^2 \sum_{s\leqslant t} \|\Delta \tilde \xi_s\|^2 = \varepsilon^2 \int_{(0, t]}\int_{0<\|z\|\leqslant \rho^\varepsilon} \|z\|^2 N(ds, dz),\quad t\geqslant 0\mbox{ a.s.} \end{align*} Since the jumps of $[\varepsilon \tilde \xi]_t^d$ by construction are bounded by $(\varepsilon\rho^\varepsilon)^2\leqslant 1$, its Laplace transform is well-defined for all $\lambda\in \mathbb{R}$ and $t\geqslant 0$ \begin{align*} \mathbb{E}\left[e^{\lambda [\varepsilon \tilde \xi ]_t^d}\right] &= \exp\big(t \int_{0< \|y\|\leqslant \rho^\varepsilon} (e^{\lambda\varepsilon^2 \|y\|^2} -1) \nu(dy)\big)\\ &= \exp\big( - t\int_{0< r\leqslant \rho^\varepsilon} (e^{\lambda\varepsilon^2 r^2}-1) h(dr) \big). \end{align*} For any $\lambda>0$ the exponential Chebyshev inequality yields \begin{align*} \mathbb{P}\big([\varepsilon \tilde \xi]_{T}^d >C\delta_\varepsilon^p \big) &\leqslant \mathbb{P}\big(e^{\lambda [\varepsilon \tilde \xi]^d_{T}} > e^{\lambda C \delta_\varepsilon^p}\big) \leqslant e^{-\lambda C \delta_\varepsilon^p} \mathbb{E}\big[e^{\lambda [\varepsilon \tilde \xi]^d_{T}}\big] \\ &= \exp\big(-\lambda C \delta_\varepsilon^p - T \int_{0< r\leqslant \rho^\varepsilon} (e^{\lambda \varepsilon^2 r^2} -1) h(dr) \big). \end{align*} We continue with the help of $e^{s}-1 \leqslant 2s$ for small $s$. Replacing $\lambda$ by $\delta_\varepsilon^{-(p+1)}$ we ensure the smallness of the argument, noting that by (\ref{eq: rho-delta limit}) $\sup_{0 < r\leqslant \rho^\varepsilon} \varepsilon^2 r^2 /\delta_\varepsilon^{p+1} \leqslant (\varepsilon \rho^\varepsilon)^2/\delta_\varepsilon^{p+1} \rightarrow 0$ for $\varepsilon\rightarrow 0+$.
We obtain \begin{align*} & \big|T \int_{0<r\leqslant \rho^\varepsilon} (e^{\varepsilon^2 r^2/\delta_\varepsilon^{p+1}}-1) h(dr) \big| \\ & \leqslant \big|2 T \varepsilon^2 /\delta_\varepsilon^{p+1} \big(\int_{0< r\leqslant 1} + \int_{1 < r\leqslant \rho^\varepsilon} \big) r^2 h(dr) \big|\\[1mm] & \leqslant 2 T \varepsilon^2 /\delta_\varepsilon^{p+1} \big|\int_{0< r\leqslant 1} r^2 h(dr) \big| + 2 T (\varepsilon \rho^\varepsilon)^2/\delta_\varepsilon^{p+1} \big|\int_{1 < r\leqslant \rho^\varepsilon} h(dr)\big|\\[2mm] & \leqslant c T (\varepsilon \rho^\varepsilon)^2/\delta_\varepsilon^{p+1}. \end{align*} Therefore by (\ref{eq: rho-delta limit}) there is $\varepsilon_0\in (0,1)$ such that $\varepsilon\in (0, \varepsilon_0]$ implies the final result \begin{align*} \mathbb{P}([\varepsilon\tilde \xi]_{T} > C\delta_\varepsilon^p) & \leqslant \exp\big(-C \delta_\varepsilon^{-1} + \trace(A) \varepsilon^{2} T + cT (\varepsilon \rho^\varepsilon)^2/\delta_\varepsilon^{p+1}\big) \leqslant \exp\big(-C\delta_\varepsilon^{-1}+1 \big). \end{align*} \end{proof} \noindent In the following lemma we estimate the deviation of the stochastic integral with respect to the (local) martingale part $\tilde \xi^\varepsilon$ of the small jumps noise process $\xi^\varepsilon$ \[ \tilde \xi^\varepsilon_t = A^{1/2} B_t + \int_{0< \|y\| \leqslant \rho^\varepsilon} y \tilde N(t, dy). \] \begin{lem}\label{lem: bounded stoch int} Let $(g_t)_{t\geqslant 0}$ be an adapted, c\`adl\`ag process with values in $\mathbb{R}^{m\otimes d}$ bounded by $C_g$ in a suitable matrix norm. For all $T>0$ and functions $\delta_\varepsilon$ and $\rho^\varepsilon$ satisfying (\ref{eq: rho-delta limit}) for $p=4$ there is $\varepsilon_0\in (0,1)$ and a constant $C_0>0$ such that for $\varepsilon \in (0, \varepsilon_0]$ \begin{align*} \mathbb{P}(\sup_{s\in [0, T]} \varepsilon \sum_{i=1}^d \big|\sum_{j=1}^m \int_0^s g_{r-}^{ij} d\tilde \xi^j(r) \big| > \delta_\varepsilon) \leqslant \exp(-C_0 \delta_\varepsilon^{-1} + \ln(6d)).
\end{align*} \end{lem} \begin{proof} Suppose $\max_{i,j} \sup_{t\geqslant 0} |g_t^{ij}| \leqslant C_g$ almost surely. We consider each component of the $d$-dimensional martingale \begin{align*} M_t^i = \sum_{j=1}^m \int_0^t g^{ij}_{s-} d\tilde \xi^j(s). \end{align*} By construction $\|\Delta_t M\| \leqslant m d C_g \rho^\varepsilon =: C \rho^\varepsilon$ almost surely. We estimate the probability of a deviation of size $\delta_\varepsilon$ from zero conditioned on small quadratic variation \begin{equation}\label{eq: estimate of small noise integral} \mathbb{P}(\sup_{s\in [0,T]} \|\varepsilon M_s\| > \delta_\varepsilon) \leqslant \mathbb{P}(\sup_{s\in [0,T]} \|\varepsilon M_s\| > \delta_\varepsilon ~|~ [\varepsilon M]_{T} \leqslant \delta_\varepsilon^4) + \mathbb{P}([\varepsilon M]_{T} > \delta_\varepsilon^4). \end{equation} \textbf{Step 1:} We estimate the first term of inequality (\ref{eq: estimate of small noise integral}). Following the lines of the proofs of Lemma~{26.19} and Theorem~{26.17} part (i) in \cite{Kallenberg-02} we find the following estimate. For any $\lambda>0$ \begin{equation*} \mathbb{P}(\sup_{s\in [0,T]} \varepsilon M_s^i > \delta_\varepsilon ~|~ [\varepsilon M]_{T} \leqslant \delta_\varepsilon^4) \leqslant \exp\big(-\lambda \delta_\varepsilon + \lambda^2 \Upsilon(\lambda C_g \varepsilon \rho^\varepsilon) \delta_\varepsilon^4 \big), \end{equation*} where $\Upsilon: (0, \infty) \rightarrow (0, \infty), \Upsilon(x) = - (x + \ln(1-x)_+) x^{-2}$. Replacing $\lambda$ by $\lambda_\varepsilon = \delta_\varepsilon^{-2}$ and keeping in mind that $\lim_{\varepsilon \rightarrow 0+} \Upsilon(\lambda_\varepsilon C_g \varepsilon \rho^\varepsilon) = \frac{1}{2}$ yields \begin{equation*} \mathbb{P}(\sup_{s\in [0,T]} \varepsilon M_s^i > \delta_\varepsilon ~|~ [\varepsilon M]_{T} \leqslant \delta_\varepsilon^4) \leqslant \exp\big(-\delta_\varepsilon^{-1}\big).
\end{equation*} The analogous estimate holds for the infimum of $-\varepsilon M^i$; choosing $\lambda_\varepsilon = d\delta_{\varepsilon}^{-2}$ instead, it provides for each $i$ \begin{equation*} \mathbb{P}(\sup_{s\in [0,T]} |\varepsilon M_s^i| > \frac{\delta_\varepsilon}{d} ~|~ [\varepsilon M]_{T} \leqslant \delta_\varepsilon^4) \leqslant \exp\big(-\delta_\varepsilon^{-1}+\ln(2)\big), \end{equation*} where the right-hand side does not depend on $i$, such that eventually \begin{align*} &\mathbb{P}(\sup_{s\in [0,T]} \|\varepsilon M_s\| > \delta_\varepsilon ~|~ [\varepsilon M]_{T} \leqslant \delta_\varepsilon^4) \\ & \leqslant \sum_{i=1}^d \mathbb{P}(\sup_{s\in [0,T]} |\varepsilon M_s^i| > \frac{\delta_\varepsilon}{d} ~|~ [\varepsilon M]_{T} \leqslant \delta_\varepsilon^4)\\[2mm] &\leqslant \exp\big(-\delta_\varepsilon^{-1}+\ln(2d)\big). \end{align*} \textbf{Step 2:} We treat the second term in inequality (\ref{eq: estimate of small noise integral}). The boundedness assumption on $g$ yields \begin{align*} [\varepsilon M]_t \leqslant \int_0^t \|g_{s-}^* g_{s-}\|^2 d[\varepsilon A^{\frac{1}{2}}B]_s + \int_0^t \|g_{s-}^* g_{s-}\|^2 d[\varepsilon \tilde \xi]_s^d \leqslant C^2 (\varepsilon^2 \trace(A) t+ [\varepsilon \tilde \xi]_t^d), \quad t\geqslant 0. \end{align*} Hence \begin{align*} \mathbb{P}([\varepsilon M]_{T} \geqslant \delta_\varepsilon^4) \leqslant \mathbb{P}(C^2 [\varepsilon \tilde \xi]_{T}^d \geqslant \frac{1}{2} \delta_\varepsilon^4) + \mathbb{P}( C^2 \trace(A) \varepsilon^{2} T \geqslant \frac{1}{2}\delta_\varepsilon^4). \end{align*} The second term vanishes by (\ref{eq: rho-delta limit}), which implies $\varepsilon^2 < \delta_\varepsilon^4$ for small $\varepsilon\in (0,1)$. The first term is treated as in Lemma~\ref{lem: quadratic variation}.
Eventually \begin{align*} \mathbb{P}([\varepsilon M]_{T} \geqslant \delta_\varepsilon^4) \leqslant \mathbb{P}([\varepsilon \tilde \xi]_{T}^d \geqslant \frac{1}{2 C^2}\delta_\varepsilon^4) \leqslant \exp(-\frac{\delta_\varepsilon^{-1}}{2 C^2} + 1). \end{align*} Combining Steps 1 and 2 yields a constant $\varepsilon_0\in (0,1)$ such that for all $\varepsilon\in (0, \varepsilon_0]$ \begin{align*} \mathbb{P}(\sup_{s\in [0,T]} \|\varepsilon M_s\| > \delta_\varepsilon) \leqslant \exp(-\min(1, \frac{1}{2 C^2})\delta_\varepsilon^{-1} + \ln(2de)). \end{align*} This finishes the proof. \end{proof} \subsection{Localization of $V^\varepsilon$ close to $u$ up to a fixed time} Let $V^\varepsilon$ be the solution of equation (\ref{eq: sde}), where the driving noise $Z$ is replaced by the $\varepsilon$-dependent small jumps part $\xi^\varepsilon$ of $Z$ as defined in (\ref{def: b eps}). The first large jump time $T_1>0$ is exponentially distributed with intensity $\beta_\varepsilon$, where $\beta_\varepsilon \searrow 0$ as $\varepsilon \searrow 0$. By definition then \[ V^\varepsilon_{t, x} = X^\varepsilon_{t, x} \qquad \mbox{ for } t\in [0, T_1). \] In order to study the fluctuations of $X^\varepsilon_{t,x}$ for $t< T_1$ we introduce \[ \mathbb{T}^*_x(\varepsilon) := \inf\{t>0~|~V^\varepsilon_{t,x} \notin D\}. \] \begin{lem}[Non-exit up to fixed times]\label{lem: localization in a ball} For any $T\geqslant 0$ there is $\varepsilon_0\in (0,1)$ such that for all $\varepsilon \in (0, \varepsilon_0]$ and $\delta_\varepsilon$ satisfying (\ref{eq: rho-delta limit}) \begin{align*} &~ \sup_{x\in D_{\delta_\varepsilon}} \mathbb{P}(\mathbb{T}^*_x(\varepsilon) \leqslant T) \leqslant \exp(-\delta_\varepsilon^{-1} + 2+ \ln(d)). \end{align*} \end{lem} \begin{proof} By Remark \ref{lrm: properties of reduced domains}, for any sufficiently small $\delta_\varepsilon$ and $x\in D_{\delta_\varepsilon}$ it holds that \[ \dist(u(t;x), \partial D)\geqslant \delta_\varepsilon \qquad \forall t\geqslant 0.
\] Since $\mathbb{T}_x^*(\varepsilon)$ denotes the exit time from $D$, we infer \begin{align*} \{\mathbb{T}_x^*(\varepsilon) \leqslant T\} &= \{\mathbb{T}_x^*(\varepsilon) \leqslant T\}\cap \{\sup_{t\in [0, \mathbb{T}_x^*(\varepsilon)]} \|V_{t,x} - u(t;x)\| > \delta_\varepsilon\}. \end{align*} We lighten notation, writing $V = V^\varepsilon$, $\mathbb{T}^* = \mathbb{T}^*_x(\varepsilon)$ etc. Then for $t\leqslant T$ it follows by definition that \begin{align} & V_{t\wedge \mathbb{T}^*,x} - u(t\wedge \mathbb{T}^*; x)\nonumber\\ & = \int_{0}^{t\wedge \mathbb{T}^*} f(V_{s,x}) - f(u(s; x)) ds + \varepsilon \int_{0}^{t\wedge \mathbb{T}^*} H(V_{s,x}) b_\varepsilon ds + \varepsilon \int_{0}^{t\wedge \mathbb{T}^*} F(V_{s,x}) dA^{\frac{1}{2}} B_s \nonumber\\ & \qquad + \int_{0}^{t\wedge \mathbb{T}^*}\int_{0< \|z\|\leqslant \rho^\varepsilon} G(V_{s-,x}, \varepsilon z) \tilde N(ds, dz). \label{eq: V-u} \end{align} We fix the constant \begin{equation}\label{def: CR} C_{D} := \sup_{\substack{v\in D\\ w\in \mathcal{B}_1}}\max\{L, \|f(v)\|, \|H(v)\|, \|F(v)\|, \|G(v,w)\|\}. \end{equation} The global Lipschitz property of $f$ on $D$ and the standard integral version of Gronwall's lemma yield \begin{multline}\label{eq: sup V-u} \sup_{x\in D_{\delta_\varepsilon}} \sup_{t\in [0, T\wedge \mathbb{T}_x^*]} \|V_{t,x} - u(t; x)\| \\ \leqslant e^{C_{D} T} \sup_{x\in D_{\delta_\varepsilon}}\sup_{t\in [0, T\wedge\mathbb{T}_x^*]} \|\varepsilon \int_{0}^{t} H(V_{s,x}) b_\varepsilon ds + \varepsilon \int_{0}^{t} F(V_{s,x}) dA^{\frac{1}{2}} B_s \\ + \int_{0}^{t}\int_{0< \|z\|\leqslant \rho^\varepsilon} G(V_{s-,x}, \varepsilon z) \tilde N(ds, dz)\|. \end{multline} The representation (\ref{eq: V-u}) has the (local) martingale part \begin{equation} M_{t,x} := \varepsilon \int_0^t F(V_{s, x}) d (A^{\frac{1}{2}}B_s) + \int_0^t \int_{0<\|z\| \leqslant \rho^\varepsilon} G(V_{s-,x}, \varepsilon z) \tilde N(ds, dz).
\label{eq: def m} \end{equation} The previous lemma yields for the $i$-th component $M^i_{t,x}$ and any $\lambda>0$ \begin{align} & \sup_{x\in D_{\delta_\varepsilon}} \mathbb{P}(\mathbb{T}_x^*(\varepsilon) \leqslant T) \nonumber\\ & = \sup_{x\in D_{\delta_\varepsilon}} \mathbb{P}( \mathbb{T}_x^*(\varepsilon) \leqslant T, \sup_{t\in [0, T \wedge \mathbb{T}^*]} \|V_{t,x}- u(t;x)\| > \delta_\varepsilon) \nonumber\\ & \leqslant \sup_{x\in D_{\delta_\varepsilon}} \mathbb{P}(\mathbb{T}^*_x(\varepsilon) \leqslant T, \sup_{t\in [0, T \wedge \mathbb{T}^*] } e^{C_{D} T} \varepsilon \|\int_0^t H(V_{s,x}) b_\varepsilon ds\| > \frac{\delta_\varepsilon}{2}) \nonumber\\ & \qquad + \sup_{x\in D_{\delta_\varepsilon}} \mathbb{P}( \mathbb{T}^*_x(\varepsilon) \leqslant T, \sup_{t\in [0,T \wedge \mathbb{T}^*]} e^{C_{D} T} \|M_{t,x}\| > \frac{\delta_\varepsilon}{2}~|~[\varepsilon \xi]_{T} \leqslant \delta_\varepsilon^4) \nonumber\\ & \qquad + \mathbb{P}([\varepsilon \xi]_{T} > \delta_\varepsilon^4)\nonumber\\ &~\leqslant \mathbb{P}(\varepsilon \|b_\varepsilon\| e^{C_{D} T} T C_{D} > \frac{\delta_\varepsilon}{2}) + \mathbb{P}([\varepsilon \xi]_{T} > \delta_\varepsilon^4)\nonumber\\ &\qquad + \sum_{i=1}^d \sup_{x\in D_{\delta_\varepsilon}}\mathbb{P}(\sup_{t\in [0, T \wedge \mathbb{T}^*]} M_{t,x}^i > \frac{\delta_\varepsilon}{2d}~|~ [\varepsilon \xi]_{T} \leqslant \delta_\varepsilon^4) + \sup_{x\in D_{\delta_\varepsilon}}\mathbb{P}(\sup_{t\in [0, T \wedge \mathbb{T}^* ]} M_{t,x}^i < -\frac{\delta_\varepsilon}{2d}~|~[\varepsilon \xi]_{T} \leqslant \delta_\varepsilon^4)\nonumber\\ &~\leqslant \exp(-\delta_\varepsilon^{-1} +1) + 2d \exp\big(-\lambda \frac{\delta_\varepsilon}{2d} + \lambda^2 \Upsilon(C_{D} \lambda) \delta_\varepsilon^2 \big).\label{eq: P(V-u)} \end{align} The first term in the third-to-last line involves a deterministic quantity; its vanishing is a direct consequence of Lemma~\ref{lem: drift estimate}.
We note that the last inequality is valid for any local martingale with jumps bounded from above by~$C_{D}$. This is satisfied since by (\ref{eq: rho-delta limit 0}) $\lim_{\varepsilon\rightarrow 0+} \varepsilon \rho^\varepsilon = 0$ and for $x\in D$ and $s\in [0, \mathbb{T}^*_x]$ \[ \|\Delta_s V^\varepsilon_{\cdot, x}\|\leqslant \sup_{\substack{v\in \mathcal{B}_{D}\\ w \in \mathcal{B}_{\varepsilon \rho^\varepsilon}}}\|G(v, w)\| \leqslant C_{D}, \] where the last inequality stems from \textbf{(S.2)} part 4. We may now replace in inequality (\ref{eq: P(V-u)}) $\lambda$ by $2d/ \delta_\varepsilon^2$ and exploit that $\lim_{r\rightarrow \infty}\Upsilon(r) = \frac{1}{2}$. This yields the desired estimate and finishes the proof. \end{proof} \begin{cor}[Localization up to a fixed time $T$]\label{cor: V close to u} For all $T>0$ there is $\varepsilon_0 \in (0,1)$ such that for all \mbox{$\varepsilon\in (0,\varepsilon_0]$} and $\delta_\varepsilon$ satisfying (\ref{eq: rho-delta limit}) it holds that \begin{align*} &~ \sup_{x\in D_{\delta_\varepsilon}} \mathbb{P}(\sup_{s\in [0, T]} \|V^\varepsilon_{s,x} - u(s;x)\| > \delta_\varepsilon)\leqslant \exp(-\delta_\varepsilon^{-1} + 3+ \ln(d)). \end{align*} \end{cor} \begin{proof} On the event $\{\mathbb{T}^*_x > T\}$ we repeat (\ref{eq: V-u}), (\ref{eq: sup V-u}) and (\ref{eq: P(V-u)}) replacing $t\wedge \mathbb{T}^*_x$ by $t\in [0, T]$. This directly yields the desired result. \end{proof} \subsection{Localization and ergodicity of $V^\varepsilon$} \begin{lem} [Non-exit] \label{cor: non-exit} For functions $\rho^\varepsilon$, $\delta_\varepsilon$ and $\beta_\varepsilon$ satisfying the relation (\ref{eq: rho-delta limit}) there exist constants $C>0$ and $\varepsilon_0\in (0,1)$ such that for all $\varepsilon \in (0,\varepsilon_0]$ \begin{align*} \sup_{x\in D_{\delta_\varepsilon}} \mathbb{P}(\exists\;t\in [0,T_1]: ~V^\varepsilon_{t,x} \notin D) \leqslant \frac{ C e^{-\delta_\varepsilon^{-1}}}{(\beta_\varepsilon\delta_\varepsilon)^2}.
\end{align*} \end{lem} \begin{proof} Due to the independence of $T_1$ and $V^\varepsilon$ we calculate \begin{align*} \mathbb{P}(\exists\;t\in [0,T_1]: ~V^\varepsilon_{t,x} \notin D) & \leqslant \mathbb{P}(\exists\;t\in [0,\frac{1}{\beta_\varepsilon \delta_\varepsilon}]: ~V^\varepsilon_{t,x} \notin D) + \mathbb{P}(T_1 > \frac{1}{\beta_\varepsilon \delta_\varepsilon}). \end{align*} By construction \[ \mathbb{P}(T_1 > \frac{1}{\beta_\varepsilon \delta_\varepsilon}) = e^{-\delta_\varepsilon^{-1}} \rightarrow 0. \] Recall by Remark \ref{rem D.2} that $t\geqslant \mathcal{S}$ and $x\in D$ imply $u(t;x) \in \mathcal{I}$. Hence \begin{align*} \mathbb{P}(\exists\;t\in [0,\frac{1}{\beta_\varepsilon \delta_\varepsilon}]: ~V^\varepsilon_{t,x} \notin D) & = \int_0^{(\beta_\varepsilon \delta_\varepsilon)^{-1}} \beta_\varepsilon e^{-\beta_\varepsilon s} \mathbb{P}(\exists\;t\in [0,s]: ~V^\varepsilon_{t,x} \notin D) ds\\ & \leqslant \sum_{k=1}^{\lceil (\beta_\varepsilon \delta_\varepsilon \mathcal{S})^{-1}\rceil} \int_{(k-1)\mathcal{S}}^{k \mathcal{S}} \beta_\varepsilon e^{-\beta_\varepsilon s} \mathbb{P}(\exists\;t\in [0,s]: ~V^\varepsilon_{t,x} \notin D) ds\\ & \leqslant \sum_{k=1}^{\lceil (\beta_\varepsilon \delta_\varepsilon \mathcal{S})^{-1}\rceil} \mathbb{P}(\exists\;t\in [0,k \mathcal{S}]: ~V^\varepsilon_{t,x} \notin D) e^{-\beta_\varepsilon \mathcal{S} k}. \end{align*} We denote \[ \mathcal{E}_x(\varepsilon) := \{\sup_{t\in [0, \mathcal{S}]} \|V_{t,x} -u(t;x)\| \leqslant \delta_\varepsilon\}. \] For the case $k=1$ and $x\in D_{\delta_\varepsilon}$, Corollary \ref{cor: V close to u} yields \[ \mathbb{P}(\mathbb{T}_x^* \in [0, \mathcal{S}]) = \mathbb{P}(\exists\;t\in [0,\mathcal{S}]: ~V^\varepsilon_{t,x} \notin D) \leqslant \mathbb{P}(\sup_{t\in [0, \mathcal{S}]} \|V^\varepsilon_{t,x} - u(t;x)\| > \delta_\varepsilon) \leqslant C e^{-\delta_\varepsilon^{-1}}.
\] Furthermore, Remark \ref{rem D.2} states that \[ V^\varepsilon_{\mathcal{S}, x} = V^\varepsilon_{\mathcal{S}, x} -u(\mathcal{S};x) + u(\mathcal{S};x) \in \mathcal{B}_{\delta_\varepsilon}(0) + \mathcal{I} \subset D_{2\delta_\varepsilon}. \] Exploiting the Markov property at time $\mathcal{S}$ we obtain \begin{align*} &\mathbb{P}(\mathbb{T}_x^* \in ((k-1)\mathcal{S}, k\mathcal{S}]) \\[2mm] &= \mathbb{P}(\{\forall\; t \in [0, (k-1)\mathcal{S}]: ~V^\varepsilon_{t,x} \in D\} \cap \{\exists\;t\in [(k-1)\mathcal{S},k\mathcal{S}]: ~V^\varepsilon_{t,x} \notin D\}) \\[2mm] &\leqslant \mathbb{P}(\{\forall t\in [0, (k-1)\mathcal{S}]: V^\varepsilon_{t,x} \in D\} \cap \{\exists t\in [(k-1)\mathcal{S}, k\mathcal{S}]: ~V^\varepsilon_{t,x}\notin D\} \cap \mathcal{E}_x) + \mathbb{P}(\mathcal{E}_x^c)\\ &\leqslant \sup_{x\in D_{2\delta_\varepsilon}} \mathbb{P}(\mathbb{T}_x^*(\varepsilon) \in [(k-2)\mathcal{S}, (k-1)\mathcal{S}]) + C e^{-\delta_\varepsilon^{-1}}. \end{align*} Therefore a recursive argument leads to \begin{align*} \mathbb{P}(\mathbb{T}_x^* \in ((k-1)\mathcal{S}, k\mathcal{S}]) \leqslant k C e^{-\delta_\varepsilon^{-1}}. \end{align*} Finally, summing up we obtain the desired result \begin{align*} \mathbb{P}(\exists\;t\in [0,\frac{1}{\beta_\varepsilon \delta_\varepsilon}]: ~V^\varepsilon_{t,x} \notin D) &\leqslant \sum_{k=1}^{\lceil(\beta_\varepsilon\delta_\varepsilon\mathcal{S})^{-1}\rceil} k C e^{-\delta_\varepsilon^{-1}} \leqslant \frac{ C e^{-\delta_\varepsilon^{-1}}}{(\beta_\varepsilon\delta_\varepsilon\mathcal{S})^2}. \end{align*} \end{proof} \noindent The proof further directly yields that at the time of the first large jump $T_1$ the small noise solution $V^\varepsilon$ is not far from $\mathcal{I}$. \begin{cor}\label{cor: at jump time in D delta} Let the assumptions of Lemma \ref{cor: non-exit} be fulfilled.
Then for all $\kappa >0$ there is $\varepsilon_0\in (0,1)$ such that for $\varepsilon\in (0, \varepsilon_0]$ \begin{equation} \sup_{x\in D_{\delta_\varepsilon}} \mathbb{P}(V^\varepsilon_{T_1,x} \notin \mathcal{B}_\kappa(\mathcal{I})) \leqslant \frac{ C e^{-\delta_\varepsilon^{-1}}}{(\beta_\varepsilon\delta_\varepsilon)^2}. \end{equation} \end{cor} \noindent We can now state and prove the main result of this section concerning the behavior of $(X^\varepsilon_{t,x})_{t\in [0, T_1]}$. \begin{prp}[Ergodicity including the first large jump] \label{prop: ergodicity} Let the functions $\rho^\varepsilon, \delta_\varepsilon, \beta_\varepsilon$ satisfy (\ref{eq: rho-delta limit}) for $p=4$ and \begin{equation}\label{eq: exponentiell delta beta} \lim_{\varepsilon\rightarrow 0+}\frac{e^{-\delta_\varepsilon^{-1}}}{(\beta_\varepsilon\delta_\varepsilon)^2} = 0. \end{equation} Consider a set $U\in \mathfrak{B}(\mathbb{R}^d)$ such that \begin{equation}\label{eq: boundary zero} \lim_{t\rightarrow \infty} \sup_{x\in D} \frac{1}{t} \int_0^t \mu(E^{\partial U}(u(s;x))) ds =0. \end{equation} Further, we consider a family $U^\varepsilon \in \mathfrak{B}(\mathbb{R}^d)$ such that for all $\kappa >0$ there exists $\varepsilon_0 \in (0,1)$ satisfying for $\varepsilon\in (0, \varepsilon_0]$ that $U^\varepsilon \bigtriangleup U \subset \mathcal{B}_{\kappa}(\partial U)$. Then \begin{equation} \lim_{\varepsilon \rightarrow 0+} \sup_{x\in D_{\delta_\varepsilon}} \big|\mathbb{E}\left[e^{-T_1 \lambda_\varepsilon} \mathbf{1}\{V^\varepsilon_{T_1,x} + G(V^\varepsilon_{T_1, x}, \varepsilon W) \in U^\varepsilon \}\right] - \int_\mathcal{A} \mathbb{P}(v + G(v, \varepsilon W)\in U) P(dv) \big| = 0. \end{equation} \end{prp} \begin{proof} Let $\theta\in (0,1)$.
Then by Hypothesis (D.2) there is $\mathcal{T} = \mathcal{T}_\theta>0$ such that for all $t\geqslant \mathcal{T}$ \begin{equation}\label{eq: limit measure approximation} \sup_{x\in D}\Big|\frac{1}{t} \int_0^{t} \phi(u(s;x)) ds - \int_\mathcal{A} \phi(v) P(dv)\Big|\leqslant \frac{\theta}{2}. \end{equation} In addition, we choose $\mathcal{T}>\mathcal{S}$. Furthermore, by (\ref{eq: boundary zero}) there exists $\kappa>0$ such that \[ \sup_{x\in D} \frac{1}{\mathcal{T}} \int_0^\mathcal{T} \mu(E^{\mathcal{B}_{\kappa}(\partial U)}(u(s;x))) ds \leqslant \frac{\theta}{2}. \] Once again we lighten notation, writing $V = V^\varepsilon$. Due to the independence of $T_1$ and $V$ we may continue for $x\in D_{\delta_\varepsilon}$ \begin{align*} & \mathbb{E}[e^{-T_1 \lambda_\varepsilon} \mathbf{1}\{V_{T_1, x}+ G(V_{T_1, x}, \varepsilon W) \in U^\varepsilon\}]\\[2mm] &\quad \leqslant \sum_{k= 0}^{\infty} \mathbb{E}\Big[\int_{k \mathcal{T}}^{(k+1) \mathcal{T}} \beta_\varepsilon e^{-(\beta_\varepsilon + \lambda_\varepsilon) s} \mathbf{1}\{V_{s, x}+ G(V_{s, x}, \varepsilon W) \in U^\varepsilon\} ds\Big].
\end{align*} We define \[ \mathcal{E}_x(\varepsilon) = \{\sup_{t\in [0, \mathcal{T}]} \|V_{t,x} -u(t;x)\| \leqslant \delta_\varepsilon\} \] and calculate \begin{align} & \mathbb{E}[\int_{k \mathcal{T}}^{(k+1) \mathcal{T}} \beta_\varepsilon e^{-(\beta_\varepsilon + \lambda_\varepsilon) s} \mathbf{1}\{ V_{s, x}+ G(V_{s, x}, \varepsilon W) \in U^\varepsilon\} ds] \nonumber\\ &\leqslant \beta_\varepsilon e^{-(\beta_\varepsilon + \lambda_\varepsilon) k\mathcal{T}} \mathbb{E}[\int_{k \mathcal{T}}^{(k+1) \mathcal{T}} \mathbf{1}\{V_{s, x}+ G(V_{s, x}, \varepsilon W) \in U^\varepsilon\} ds] \nonumber\\ & \leqslant \mathcal{T}\beta_\varepsilon e^{-(\beta_\varepsilon + \lambda_\varepsilon) k\mathcal{T}} ( \mathbb{E}[\frac{1}{\mathcal{T}}\int_{k \mathcal{T}}^{(k+1) \mathcal{T}} \mathbf{1}\{V_{s, x}+ G(V_{s, x}, \varepsilon W) \in U^\varepsilon\} \mathbf{1}(\mathcal{E}_x)ds] + \sup_{x\in D_{\delta_\varepsilon}}\mathbb{P}(\mathcal{E}_x^c))\nonumber\\ & = \mathcal{T}\beta_\varepsilon e^{-(\beta_\varepsilon + \lambda_\varepsilon) k\mathcal{T}} ( \mathbb{E}[\mathbb{E}[\frac{1}{\mathcal{T}}\int_{k \mathcal{T}}^{(k+1) \mathcal{T}} \mathbf{1}\{V_{s, x}+ G(V_{s, x}, \varepsilon W) \in U^\varepsilon\} \mathbf{1}(\mathcal{E}_x)ds~|~\mathcal{F}_{\mathcal{T}}]] + \sup_{x\in D_{\delta_\varepsilon}}\mathbb{P}(\mathcal{E}_x^c))\nonumber\\ & \leqslant \mathcal{T}\beta_\varepsilon e^{-(\beta_\varepsilon + \lambda_\varepsilon) k\mathcal{T}} (\sup_{y\in \mathcal{B}_{\delta_\varepsilon}(\mathcal{I})} \mathbb{E}[\frac{1}{\mathcal{T}}\int_{(k-1) \mathcal{T}}^{k \mathcal{T}} \mathbf{1}\{V_{s, y}+ G(V_{s, y}, \varepsilon W) \in U^\varepsilon\} ds]+ \sup_{x\in D_{\delta_\varepsilon}}\mathbb{P}(\mathcal{E}_x^c)).\label{eq: Markov property} \end{align} A recursive argument yields \begin{align*} & \mathbb{E}[\int_{k \mathcal{T}}^{(k+1) \mathcal{T}} \beta_\varepsilon e^{-(\beta_\varepsilon + \lambda_\varepsilon) s} \mathbf{1}\{ V_{s, x}+ G(V_{s, x}, \varepsilon W) \in U^\varepsilon\} ds] \\ &\leqslant
\mathcal{T}\beta_\varepsilon e^{-(\beta_\varepsilon + \lambda_\varepsilon) k\mathcal{T}} \Big(\sup_{y\in D_{\delta_\varepsilon}} \frac{1}{\mathcal{T}} \int_{0}^{\mathcal{T}} \mathbb{P}(V_{s, y} + G(V_{s, y} , \varepsilon W) \in U^\varepsilon)\, ds \\ &\qquad + (k+1)\sup_{y\in D_{\delta_\varepsilon}} \mathbb{P}(\mathcal{E}_y^c)\Big) = J. \end{align*} We choose $\varepsilon_0\in (0,1)$ small enough such that $\varepsilon\in (0, \varepsilon_0]$ implies \[ (U^\varepsilon \bigtriangleup U) + \mathcal{B}_{(1+Le^{L^2}) \delta_\varepsilon}(0) \subset \mathcal{B}_{\kappa}(\partial U). \] Hence we may continue \begin{align*} J &\leqslant \mathcal{T}\beta_\varepsilon e^{-(\beta_\varepsilon + \lambda_\varepsilon) k\mathcal{T}} \Big(\sup_{y\in D_{\delta_\varepsilon}} \frac{1}{\mathcal{T}} \int_{0}^{\mathcal{T}} \mathbb{P}\big(u(s;y) + G(u(s;y) , \varepsilon W) \in U^\varepsilon + \mathcal{B}_{(1+ L e^{L^2})\delta_\varepsilon}(0)\big)\, ds \\ &\qquad + (k+1) \exp(-\delta_\varepsilon^{-1} + 3+ \ln(d))\Big). \end{align*} The first summand in the brackets satisfies, due to the regular variation of $\nu$, the measure continuity and conditions (\ref{eq: boundary zero}) and (\ref{eq: limit measure approximation}), \begin{align*} & \frac{\beta_\varepsilon}{h_\varepsilon} \frac{1}{\mathcal{T}} \int_{0}^\mathcal{T} \mathbb{P}\big(u(s;x) + G(u(s;x), \varepsilon W) \in (U^\varepsilon + \mathcal{B}_{(1+ L e^{L^2})\delta_\varepsilon}(0)) \bigtriangleup U \big)\, ds \\ & \quad \leqslant \frac{1}{\mathcal{T}} \int_{0}^\mathcal{T} \frac{1}{h_\varepsilon} \nu\Big(\frac{1}{\varepsilon}E^{\mathcal{B}_{\kappa}(\partial U)}(u(s;x))\Big)\, ds \\ & \quad \leqslant (1+\theta) \frac{1}{\mathcal{T}} \int_{0}^\mathcal{T} \mu\Big(E^{\mathcal{B}_{\kappa}(\partial U)}(u(s;x))\Big)\, ds \leqslant (1+\theta)\frac{\theta}{2}.
\end{align*} Hence \begin{align*} & \sup_{y\in D_{\delta_\varepsilon}} \frac{1}{\mathcal{T}} \int_{0}^{\mathcal{T}} \mathbb{P}\big(u(s;y) + G(u(s;y) , \varepsilon W) \in U^\varepsilon + \mathcal{B}_{(1+ L e^{L^2})\delta_\varepsilon}(0)\big)\, ds\\ &\quad \leqslant (1+\theta) \frac{1}{\mathcal{T}} \int_{0}^\mathcal{T} \mu\Big(E^U(u(s;y))\Big)\, ds + (1+\theta) \frac{\theta}{2} \frac{h_\varepsilon}{\beta_\varepsilon}. \end{align*} We eventually obtain \begin{align*} \frac{1}{\mathcal{T}} \int_{0}^\mathcal{T} \mu\Big(E^U(u(s;x))\Big)\, ds \leqslant (1+\theta) \int_\mathcal{A} \mu\Big(E^U(v)\Big)\, P(dv). \end{align*} Summing up over $k$ we end up with an $\varepsilon_0 \in (0,1)$ such that for $\varepsilon \in (0, \varepsilon_0]$ \begin{align*} J &\leqslant (1+\theta)^2 \frac{\mathcal{T}~ \beta_\varepsilon}{1-e^{-\beta_\varepsilon \mathcal{T}}} \Big(\int_\mathcal{A} \mu\Big(E^U(v)\Big)\, P(dv) + (1+\theta) \frac{\theta}{2} \frac{h_\varepsilon}{\beta_\varepsilon} \Big) + \frac{\beta_\varepsilon}{(1-e^{-\beta_\varepsilon \mathcal{T}})^2} \exp(-\delta_\varepsilon^{-1} + 3+ \ln(d))\\ &\leqslant (1+\theta)^3 \left(\int_{\mathcal{A}} \mu(E^U(v))\, P(dv) + \frac{\theta}{2}\right). \end{align*} This completes the proof. \end{proof} \section{Proof of Theorem \ref{thm: first exit times}} In this section we exploit the results on $(X^\varepsilon_{t,x})_{t\in [0, T_1]}$ and the strong Markov property to pass from $[0, T_1]$ to $[T_{k-1}, T_k]$, in order to determine the first exit scenario of $(X^\varepsilon_{t,x})_{t\geqslant 0}$. The main step consists in the upper bound of the Laplace transform. \subsection{The upper bound} \begin{prp}\label{prp: the upper bound} Assume Hypotheses \textbf{(D.1)} and \textbf{(S.1-2)} to be satisfied. We choose $\delta_\varepsilon = \varepsilon^{\gamma}$ for $\gamma>0$ and $\rho^\varepsilon = \varepsilon^{-\rho}$ for $\rho\in (0,1)$ such that conditions (\ref{eq: rho-delta limit}) for $p=4$ and (\ref{eq: exponentiell delta beta}) are satisfied.
Furthermore we assume that \begin{equation}\label{eq: asymptotics symmetric difference} \int_{\mathcal{A}} \mu(E^{\partial D}(y))\, P(dy) = 0 \qquad \mbox{ and } \qquad \int_{\mathcal{A}} \mu(E^{D^c}(y))\, P(dy) >0. \end{equation} Then for all $\theta>0$, all $U\in \mathfrak{B}(\mathbb{R}^d)$ such that \begin{equation}\label{eq: Rand U null} \int_\mathcal{A} \mu(E^{\partial U}(y))\, P(dy) = 0 \end{equation} and all $C\in (0,1)$ there is $\varepsilon_0\in (0,1)$ such that for $\varepsilon\in (0,\varepsilon_0]$ the first exit time $\mathbb{T}_y = \mathbb{T}_y(\varepsilon)$ satisfies \begin{align*} \sup_{y\in D_{\delta_\varepsilon}} \mathbb{E}\left[ e^{-\theta Q(D^c) h_\varepsilon \mathbb{T}_y} \mathbf{1}\{X^\varepsilon_{\mathbb{T}_y ,y}\in U\}\right] \leqslant (1+C) \frac{1}{1+ \theta} \frac{Q(U \cap D^c)}{Q(D^c)}. \end{align*} \end{prp} \begin{proof} We start by lightening the notation. Whenever we consider the first jump $i=1$ we omit the index; hence we write $T = T_1 = T_1^\varepsilon$, $W = W_1 = W_1^\varepsilon$, etc. Define $\tau_i := T_{i}- T_{i-1}$. All processes will lose their $\varepsilon$ index. For convenience we abbreviate $Q = Q(D^c)$. For $y\in D_{\delta_\varepsilon}$ and $s, t\geqslant 0$ we define the events \begin{align*} A_{t,s,y} &:= \{X_{r,\cdot}\circ \theta_{s}(y)\in D \mbox{ for all } r\in [0, t]\},\\ B_{t,s,y} &:= \{X_{r,\cdot}\circ \theta_{s}(y)\in D \mbox{ for all } r\in [0, t),~ X_{t,\cdot} \circ \theta_{s}(y) \notin D\},\\ O_{t,s,y}(U) &:= \{X_{t, \cdot}\circ \theta_{s}(y) \in U\}. \end{align*} For $x\in D_{\delta_\varepsilon}$ and with the convention $T_0 = 0$ we note the trivial disjoint decomposition \[ \{\mathbb{T}_x <\infty\} = \bigcup_{k=1}^\infty \big(\{\mathbb{T}_x \in (T_{k-1}, T_k)\}\cup\{\mathbb{T}_x = T_k\}\big).
\] Furthermore, for $k\geqslant 1$ we have \[ \{\mathbb{T}_x = T_k\} = \bigcap_{i=1}^{k-1} A_{\tau_i, T_{i-1}, X_{T_{i-1}, x}} \cap B_{\tau_k, T_{k-1}, X_{T_{k-1}, x}} \] and analogously \[ \{\mathbb{T}_x \in (T_{k-1}, T_k) \} = \bigcap_{i=1}^{k-1} A_{\tau_i, T_{i-1}, X_{T_{i-1}, x}} \cap \{V^{k}_{t}\circ \theta_{T_{k-1}}(x) \notin D \mbox{ for some }t \in (0, \tau_k)\}. \] Therefore we may calculate \begin{align*} \mathbf{1}\{\mathbb{T}_x = T_k\} &= \prod_{i=1}^{k-1} \mathbf{1}(A_{\tau_i, T_{i-1}, X_{T_{i-1}, x}}) \mathbf{1}(B_{\tau_k, T_{k-1}, X_{T_{k-1}, x}}), \end{align*} for $k=1$ \begin{align*} \mathbf{1}\{\mathbb{T}_x \in (0, T_1) \} &= \mathbf{1}(\{V_{t,x} \notin D \mbox{ for some }t \in (0, T_1)\}) \end{align*} and for $k\geqslant 2$ \begin{align*} \mathbf{1}\{\mathbb{T}_x \in (T_{k-1}, T_k) \} = \prod_{i=1}^{k-1} \mathbf{1}(A_{\tau_i, T_{i-1}, X_{T_{i-1}, x}}) \mathbf{1}(\{V^{k}_{t}\circ \theta_{T_{k-1}}(x) \notin D_{\delta_\varepsilon} \mbox{ for some }t \in (0, \tau_k)\}). \end{align*} We choose $\kappa_\varepsilon := \lceil \frac{1}{h_\varepsilon}\rceil$. Hence \begin{align*} & \sup_{x\in D_{\delta_\varepsilon}}\mathbb{E} \left[e^{-\theta Q h_\varepsilon \mathbb{T}_{x}} \mathbf{1}\{X_{\mathbb{T}_x , x} \in U \}\right]\\ & \leqslant \sum_{k=1}^{\kappa_\varepsilon-1} \sup_{x\in D_{\delta_\varepsilon}}\mathbb{E}\left[ e^{-\theta Q h_\varepsilon \mathbb{T}_{x}} (\mathbf{1}\{\mathbb{T}_x = T_k\}+\mathbf{1}\{\mathbb{T}_x \in (T_{k-1}, T_k) \})\mathbf{1}(O_{\mathbb{T}_x, 0, x}(U))\right]\\[2mm] & \qquad + \sum_{k=\kappa_\varepsilon}^{\infty} \sup_{x\in D_{\delta_\varepsilon}}\mathbb{E}\left[ e^{-\theta Q h_\varepsilon \mathbb{T}_{x}} \mathbf{1}\{\mathbb{T}_x \in (T_{k-1}, T_k] \}\right]\\[2mm] & =: S_{1} + S_{2} + S_3, \end{align*} where $S_1$ and $S_2$ collect, over $k<\kappa_\varepsilon$, the terms containing $\mathbf{1}\{\mathbb{T}_x = T_k\}$ and $\mathbf{1}\{\mathbb{T}_x \in (T_{k-1}, T_k)\}$, respectively, and $S_3$ denotes the tail sum. First we treat the easiest sum.
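For later reference we record the elementary Laplace transform used repeatedly in the estimates below; since the first jump time $T_1$ is exponentially distributed with intensity $\beta_\varepsilon$ (as reflected in the density $\beta_\varepsilon e^{-\beta_\varepsilon s}$ appearing above), for any $\lambda\geqslant 0$
\begin{align*}
\mathbb{E}\big[e^{-\lambda T_1}\big]
  = \int_0^\infty \beta_\varepsilon e^{-\beta_\varepsilon s}\, e^{-\lambda s}\, ds
  = \frac{\beta_\varepsilon}{\lambda + \beta_\varepsilon},
\qquad\mbox{in particular}\qquad
\mathbb{E}\big[e^{-\theta Q h_\varepsilon T_1}\big]
  = \frac{1}{1+\frac{\theta Q h_\varepsilon}{\beta_\varepsilon}}.
\end{align*}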
\paragraph{1) Estimate of $S_3$:} Due to $T_k = \tau_1 + \dots + \tau_k$ and the independence and stationarity of $(\tau_i)$ we obtain \begin{align*} S_3 &\leqslant \sum_{k=\kappa_\varepsilon}^\infty \mathbb{E}[e^{-\theta Q h_\varepsilon T_1}]^k = \sum_{k=\kappa_\varepsilon}^\infty \frac{1}{(1+ \frac{\theta Q h_\varepsilon}{\beta_\varepsilon})^k} = \sum_{k=\kappa_\varepsilon}^\infty e^{-k \ln(1+\frac{\theta Q h_\varepsilon}{\beta_\varepsilon})}. \end{align*} Since $\ln(1+x)\geqslant \frac{x}{2}$ for $x\in [0,1]$, there is $\varepsilon_0 \in (0,1)$ such that for $\varepsilon\in (0,\varepsilon_0]$ \begin{align*} S_3 &\leqslant \sum_{k=\kappa_\varepsilon}^\infty e^{-k \frac{\theta Q h_\varepsilon}{2\beta_\varepsilon}} = \frac{e^{-\kappa_\varepsilon \frac{\theta Q h_\varepsilon}{2\beta_\varepsilon}}}{1-e^{-\frac{\theta Q h_\varepsilon}{2\beta_\varepsilon}}} \leqslant \frac{4 \beta_\varepsilon\, e^{-\kappa_\varepsilon \frac{\theta Q h_\varepsilon}{2\beta_\varepsilon}}}{\theta Q h_\varepsilon} \leqslant \frac{C}{3}. \end{align*} \paragraph{2) Estimate of $S_1$:} We continue \begin{align*} S_{1} & \leqslant \sum_{k=1}^{\kappa_\varepsilon} \sup_{x\in D_{\delta_\varepsilon}} \mathbb{E}\left[ e^{-\theta Q h_\varepsilon T_k}\mathbf{1}\{\mathbb{T}_x = T_k\}\mathbf{1}(O_{T_k, 0,x}(U))\right]\\ & \leqslant \sum_{k=1}^{\kappa_\varepsilon} \sup_{x\in D_{\delta_\varepsilon}}\mathbb{E}\left[ \prod_{i=1}^{k-1}e^{-\theta Q h_\varepsilon \tau_i} \mathbf{1}(A_{\tau_i, T_{i-1}, X_{T_{i-1}, x}})\, e^{-\theta Q h_\varepsilon \tau_k}\, \mathbf{1}(B_{\tau_k, T_{k-1}, X_{T_{k-1}, x}})\mathbf{1}(O_{\tau_{k}, T_{k-1},X_{T_{k-1}, x}}(U))\right]. \end{align*} Exploiting the same reasoning as in inequality (\ref{eq: Markov property}), with the strong Markov property of $X^\varepsilon$ at the jump times $(T_k)_{k\geqslant 1}$ instead of the Markov property at the deterministic times $k\mathcal{T}$, and the independence and stationarity of the increments, we estimate the $k$-th summand of $S_{1}$ by \begin{align} &\sup_{x\in D_{\delta_\varepsilon}} \mathbb{E}\bigg[ \prod_{i=1}^{k-1}e^{-\theta Q
h_\varepsilon \tau_i} \mathbf{1}(A_{\tau_i, T_{i-1}, X_{T_{i-1}, x}})\, e^{-\theta Q h_\varepsilon \tau_{k}} \mathbf{1}(B_{\tau_k, T_{k-1}, X_{T_{k-1}, x}}) \mathbf{1}(O_{\tau_{k}, T_{k-1},X_{T_{k-1}, x}}(U))\bigg]\nonumber\\ & \leqslant \sup_{x\in D_{\delta_\varepsilon}} \mathbb{E}\bigg[\bigg(e^{-\theta Q h_\varepsilon \tau_1} \mathbf{1}(A_{\tau_1, 0, x}) \mathbf{1}\{V_{T_1,x} \in D_{\delta_\varepsilon}\} + \mathbf{1}\{V_{T_1,x} \notin D_{\delta_\varepsilon}\}\bigg)\nonumber\\ & \qquad \mathbb{E}\bigg[\prod_{i=2}^{k-1}e^{-\theta Q h_\varepsilon \tau_i} \mathbf{1}(A_{\tau_i, T_{i-1}, X_{T_{i-1}, x}})\, e^{-\theta Q h_\varepsilon \tau_{k}} \mathbf{1}(B_{\tau_k, T_{k-1}, X_{T_{k-1}, x}}) \mathbf{1}(O_{\tau_{k}, T_{k-1},X_{T_{k-1}, x}}(U))~\bigg|~\mathcal{F}_{T_{1}}\bigg] \bigg] \nonumber\\ & \leqslant \sup_{x\in D_{\delta_\varepsilon}} \mathbb{E}\bigg[e^{-\theta Q h_\varepsilon T} \mathbf{1}(A_{x})\bigg]\nonumber \\ &\qquad \sup_{x\in D_{\delta_\varepsilon}} \mathbb{E}\bigg[ \prod_{i=1}^{k-2}e^{-\theta Q h_\varepsilon \tau_i} \mathbf{1}(A_{\tau_i, T_{i-1}, X_{T_{i-1}, x}})\, e^{-\theta Q h_\varepsilon \tau_{k-1}} \mathbf{1}(B_{\tau_{k-1}, T_{k-2}, X_{T_{k-2}, x}}) \mathbf{1}(O_{\tau_{k-1}, T_{k-2},X_{T_{k-2}, x}}(U)) \bigg]\nonumber\\ &\qquad + \sup_{x\in D_{\delta_\varepsilon}} \mathbb{P}(V_{T_1,x}\notin D_{\delta_\varepsilon}), \label{eq: Strong Markov estimate} \end{align} where we use the abbreviations \begin{align*} A_x &= A_{T_1, 0, x}, \\ B_x &= B_{T_1, 0, x}, \\ O_x^{U} &= O_{T_1, 0, x}(U).
\end{align*} The recursion from $k-1$ down to $1$ leads to \begin{align} &\sup_{x\in D_{\delta_\varepsilon}} \mathbb{E}\bigg[ \prod_{i=1}^{k-1}e^{-\theta Q h_\varepsilon \tau_i} \mathbf{1}(A_{\tau_i, T_{i-1}, X_{T_{i-1}, x}})\, e^{-\theta Q h_\varepsilon \tau_{k}} \mathbf{1}(B_{\tau_k, T_{k-1}, X_{T_{k-1}, x}}) \mathbf{1}(O_{\tau_{k}, T_{k-1},X_{T_{k-1}, x}}(U))\bigg]\nonumber \\ & \leqslant \bigg(\sup_{x\in D_{\delta_\varepsilon}} \mathbb{E}\bigg[e^{-\theta Q h_\varepsilon T} \mathbf{1}(A_{x})\bigg]\bigg)^{k-1} \sup_{x\in D_{\delta_\varepsilon}} \mathbb{E}\bigg[e^{-\theta Q h_\varepsilon T} \mathbf{1}(B_{x}) \mathbf{1}(O^U_{x}) \bigg]\nonumber\\ &\qquad + \sup_{x\in D_{\delta_\varepsilon}} \mathbb{P}(V_{T_1,x}\notin D_{\delta_\varepsilon}) \sum_{j=0}^{k-2} \bigg(\sup_{x\in D_{\delta_\varepsilon}} \mathbb{E}\bigg[e^{-\theta Q h_\varepsilon T} \mathbf{1}(A_{x})\bigg]\bigg)^{j}. \label{eq: Strong Markov estimate recursion} \end{align} In the same way we estimate the $k$-th summand of $S_2$ for $k\geqslant 1$: \begin{align*} &\sup_{x\in D_{\delta_\varepsilon}} \mathbb{E}\bigg[ \prod_{i=1}^{k-1}e^{-\theta Q h_\varepsilon \tau_i} \mathbf{1}(A_{\tau_i, T_{i-1}, X_{T_{i-1}, x}}) \mathbf{1}(\{V^{k}_{t}\circ \theta_{T_{k-1}}(x) \in D_{\delta_\varepsilon}^c \cap U \mbox{ for some }t \in (0, \tau_k)\})\bigg]\\ & \leqslant \bigg(\sup_{x\in D_{\delta_\varepsilon}} \mathbb{E}\bigg[e^{-\theta Q h_\varepsilon T} \mathbf{1}(A_{x})\bigg]\bigg)^{k-1} \sup_{x\in D_{\delta_\varepsilon}} \mathbb{P}(\{V^{\varepsilon}_{t, x} \in D_{\delta_\varepsilon}^c \cap U \mbox{ for some }t \in (0, T_1)\})\\ &\qquad + \sup_{x\in D_{\delta_\varepsilon}} \mathbb{P}(V_{T_1,x}\notin D_{\delta_\varepsilon}) \sum_{j=0}^{k-2} \bigg(\sup_{x\in D_{\delta_\varepsilon}} \mathbb{E}\bigg[e^{-\theta Q h_\varepsilon T} \mathbf{1}(A_{x})\bigg]\bigg)^{j}.
\end{align*} We now show that the main sum can be estimated by $1/(1+\theta)$, the Laplace transform of $\mathrm{EXP}(1)$ evaluated at $\theta$, up to a small error, and that both additional sums tend to zero as $\varepsilon\rightarrow 0$. \noindent \textbf{Starting with the first factor of the main sum} we obtain \begin{align*} \sup_{y\in D_{\delta_\varepsilon}} \mathbb{E}\left[ e^{-\theta Q h_\varepsilon T} \mathbf{1}(A_y) \right] &\leqslant \sup_{y\in D_{\delta_\varepsilon}} ~\mathbb{E}\left[e^{-\theta Q h_\varepsilon T} (1-\mathbf{1}\{V_{T, y} + G(V_{T, y}, \varepsilon W) \in D^c \})\right]. \end{align*} Proposition \ref{prop: ergodicity} and the independence of $W$ from $T$ and $V$ ensure the existence of $\varepsilon_0\in (0,1)$ such that for $\varepsilon \in (0, \varepsilon_0]$ \begin{align*} \sup_{y\in D_{\delta_\varepsilon}} \mathbb{E}\left[ e^{-\theta Q h_\varepsilon T} \mathbf{1}(A_y) \right] & \leqslant \frac{\beta_\varepsilon}{\theta Q h_\varepsilon + \beta_\varepsilon} \left(1 - (1-C) \int_\mathcal{A} \mathbb{P}(v + G(v, \varepsilon W)\in D^c)\, P(dv)\right). \end{align*} \noindent Since by definition \begin{align*} \mathbb{P}\Big(v + G(v, \varepsilon W)\in D^c\Big) = \frac{1}{\beta_\varepsilon} \nu\Big(\frac{1}{\varepsilon} E^{D^c}(v)\Big), \end{align*} $\mathcal{A}$ is compact and the distance $d(\mathcal{A}, \partial D) >0$, the regular variation of $\nu$ implies the existence of $\varepsilon_0\in (0,1)$ such that \begin{align*}\displaystyle \sup_{v\in \mathcal{A}} \bigg|\frac{\mathbb{P}\Big(v + G( v, \varepsilon W)\in D^c\Big)} {\displaystyle\frac{h_\varepsilon}{\beta_\varepsilon} \mu\left(E^{D^c}(v)\right) } -1\bigg| \leqslant C, \quad \mbox{ for all }\varepsilon\in (0, \varepsilon_0].
\end{align*} Hence there is $\varepsilon_0 \in (0,1)$ such that for $\varepsilon\in (0, \varepsilon_0]$ \begin{align} \sup_{y\in D_{\delta_\varepsilon}} \mathbb{E}\left[ e^{-\theta Q h_\varepsilon T} \mathbf{1}(A_y) \right] &~\leqslant \frac{\beta_\varepsilon}{ \theta Q h_\varepsilon + \beta_\varepsilon} \left(1 - (1-C)^2\frac{h_\varepsilon}{\beta_\varepsilon} \int_{\mathcal{A}} \mu\left(E^{D^c}(u)\right) P(du)\right) \nonumber\\ &~= \frac{\beta_\varepsilon}{ \theta Q h_\varepsilon + \beta_\varepsilon} \left(1 - (1-C)^2\frac{Q h_\varepsilon}{\beta_\varepsilon}\right). \label{eq: estimate A} \end{align} \noindent \textbf{The second factor of the main sum} can be treated analogously, and we obtain for sufficiently small $\varepsilon_0\in (0,1)$ that \begin{equation}\label{eq: estimate B} \sup_{y\in D_{\delta_\varepsilon}} \mathbb{E}\left[ e^{-\theta Q h_\varepsilon T} \mathbf{1}(B_y)\mathbf{1}(O^{U}_y) \right] \leqslant (1+C)^2 \frac{\beta_\varepsilon}{\theta Q h_\varepsilon + \beta_\varepsilon} \frac{Q(U\cap D^c) h_\varepsilon}{\beta_\varepsilon} \end{equation} for $\varepsilon \in (0, \varepsilon_0]$, where \[ Q(D^c \cap U) = \int_\mathcal{A} \mu\left(E^{D^c \cap U}(u)\right) P(du).
\] \paragraph{For the remainder sum} we exploit Corollary \ref{cor: at jump time in D delta}, which yields a constant $C'>0$ and $\varepsilon_0\in (0,1)$ such that for $\varepsilon \in (0,\varepsilon_0]$ \begin{align*} \sup_{y\in D_{\delta_\varepsilon}}\mathbb{P}(\mathbb{T}_y \in (0, T)) \leqslant \frac{ C' e^{-\delta_\varepsilon^{-1}}}{(\beta_\varepsilon\delta_\varepsilon)^2}, \end{align*} and obtain \begin{align*} \sup_{x\in D_{\delta_\varepsilon}} \mathbb{P}(V_{T_1,x}\notin D_{\delta_\varepsilon})& \sum_{k=1}^{\kappa_\varepsilon-1} \sum_{j=0}^{k-2} \bigg(\sup_{x\in D_{\delta_\varepsilon}} \mathbb{E}\bigg[e^{-\theta Q h_\varepsilon T} \mathbf{1}(A_{x})\bigg]\bigg)^{j} \\ & \leqslant \frac{ C' e^{-\delta_\varepsilon^{-1}}}{(\beta_\varepsilon\delta_\varepsilon)^2} \sum_{k=1}^{\kappa_\varepsilon-1} \sum_{j=0}^{k-2} \left(\frac{\beta_\varepsilon}{ \theta Q h_\varepsilon + \beta_\varepsilon} \left(1 - (1-C)^2\frac{Q h_\varepsilon}{\beta_\varepsilon}\right)\right)^j. \end{align*} Set \[ q_\varepsilon = \frac{\beta_\varepsilon}{ \theta Q h_\varepsilon + \beta_\varepsilon} \left(1 - (1-C)^2\frac{Q h_\varepsilon}{\beta_\varepsilon}\right). \] Then \begin{align}\label{eq: remainder sum} \sup_{x\in D_{\delta_\varepsilon}} \mathbb{P}(V_{T_1,x}\notin D_{\delta_\varepsilon})& \sum_{k=1}^{\kappa_\varepsilon-1} \sum_{j=0}^{k-2} \bigg(\sup_{x\in D_{\delta_\varepsilon}} \mathbb{E}\bigg[e^{-\theta Q h_\varepsilon T} \mathbf{1}(A_{x})\bigg]\bigg)^{j} \nonumber\\ & \leqslant \frac{ C' e^{-\delta_\varepsilon^{-1}}}{(\beta_\varepsilon\delta_\varepsilon)^2} \sum_{k=1}^{\kappa_\varepsilon-1} \frac{1-q_\varepsilon^{k-1}}{1-q_\varepsilon} \nonumber\\ &\leqslant \frac{ C' e^{-\delta_\varepsilon^{-1}}}{(\beta_\varepsilon\delta_\varepsilon)^2} \frac{\kappa_\varepsilon}{1-q_\varepsilon} \leqslant \frac{C}{3}.
\end{align} Eventually, inequalities (\ref{eq: estimate A}), (\ref{eq: estimate B}) and (\ref{eq: remainder sum}) combined imply the existence of $\varepsilon_0\in (0,1)$ such that \[ S_1 \leqslant (1+C)^2 \frac{\beta_\varepsilon}{\theta Q h_\varepsilon + \beta_\varepsilon} \frac{Q(D^c \cap U) h_\varepsilon}{\beta_\varepsilon} \sum_{k=1}^\infty \left( \frac{\beta_\varepsilon}{\theta Q h_\varepsilon + \beta_\varepsilon} \left(1-\frac{Qh_\varepsilon}{\beta_\varepsilon}\left(1 - C\right)^2\right)\right)^{k-1} + \frac{C}{3} \] for $\varepsilon \in (0,\varepsilon_0]$. \paragraph{3) Estimate of $S_2$:} For $k=1$ we exploit Corollary \ref{cor: non-exit}, which yields a constant $C'>0$ and $\varepsilon_0\in (0,1)$ such that for $\varepsilon \in (0,\varepsilon_0]$ \begin{align*} \sup_{y\in D_{\delta_\varepsilon}}\mathbb{P}(\{\mathbb{T}_y \in (0, T)\} \cap O_{\mathbb{T}_y, 0, y}(U)) &\leqslant \sup_{x\in D_{\delta_\varepsilon}} \mathbb{P}(V_{t,x} \in D_{\delta_\varepsilon}^c \cap U \mbox{ for some } t\in [0,T])\\ &\leqslant \frac{ C' e^{-\delta_\varepsilon^{-1}}}{(\beta_\varepsilon\delta_\varepsilon)^2}\\ &\leqslant ((1+C)^3-(1+C)^2) \frac{\beta_\varepsilon}{\theta Q h_\varepsilon + \beta_\varepsilon} \frac{Qh_\varepsilon}{\beta_\varepsilon}. \end{align*} The last estimate follows from the algebraic choice of $\delta_\varepsilon$.
Eventually, with the help of estimate (\ref{eq: remainder sum}) for the remainder sum, we obtain \begin{align*} S_2 & \leqslant ((1+C)^3-(1+C)^2) \frac{\beta_\varepsilon}{\theta Q h_\varepsilon + \beta_\varepsilon} \frac{Q(D^c \cap U) h_\varepsilon}{\beta_\varepsilon} \sum_{k=1}^\infty\left( \frac{\beta_\varepsilon}{ \theta Q h_\varepsilon + \beta_\varepsilon} \left(1 - (1-C)^2\frac{Q h_\varepsilon}{\beta_\varepsilon}\right)\right)^{k-1} + \frac{C}{3}. \end{align*} \paragraph{Conclusion:} We infer that there is a sufficiently small constant $\varepsilon_0\in (0,1)$ such that for $\varepsilon \in (0, \varepsilon_0]$ \begin{align*} \sup_{x\in D_{\delta_\varepsilon}} \mathbb{E}&\left[ e^{-\theta Q h_\varepsilon \mathbb{T}_x} \mathbf{1}\{X^\varepsilon_{\mathbb{T}_x ,x}\in U\}\right] \leqslant S_1 + S_2 + S_3\\ &\leqslant ~(1+C)^3 \frac{\beta_\varepsilon}{\theta Q h_\varepsilon + \beta_\varepsilon} \frac{Q(D^c \cap U)h_\varepsilon}{\beta_\varepsilon} \sum_{k=1}^\infty \left(\frac{\beta_\varepsilon}{\theta Q h_\varepsilon + \beta_\varepsilon} \left(1-\frac{Qh_\varepsilon}{\beta_\varepsilon}\left(1 - C \right)^3\right)\right)^{k-1} + C \\[2mm] &= ~\frac{\displaystyle (1+C)^3\frac{\beta_\varepsilon}{\displaystyle \theta Q h_\varepsilon + \beta_\varepsilon} \frac{Q(D^c \cap U)h_\varepsilon}{\beta_\varepsilon}}{\displaystyle 1- \frac{\beta_\varepsilon}{\theta Q h_\varepsilon + \beta_\varepsilon} \left(1-\frac{Qh_\varepsilon}{\beta_\varepsilon}\left(1 - C \right)^3\right)} + C\\[3mm] & = \frac{(1+C)^3}{\theta + (1 - C)^3}\frac{Q(D^c \cap U)}{Q} + C. \end{align*} By an appropriate renaming of the constant $C$ we conclude the proof. \end{proof} \subsection{The lower bound} \begin{prp}\label{prp: the lower bound} Let the assumptions of Proposition \ref{prp: the upper bound} be satisfied.
Then for all $\theta>0$, $U\in \mathfrak{B}(\mathbb{R}^d)$ satisfying (\ref{eq: Rand U null}) and $C\in (0,1)$ there is $\varepsilon_0\in (0,1)$ such that for $\varepsilon\in (0,\varepsilon_0]$ the first exit time $\mathbb{T}_y = \mathbb{T}_y(\varepsilon)$ satisfies \begin{align*} \inf_{y\in D_{\delta_\varepsilon}} \mathbb{E}\left[ e^{-\theta Q(D^c) h_\varepsilon \mathbb{T}_y} \mathbf{1}\{X^\varepsilon_{\mathbb{T}_y,y}\in U\}\right] \geqslant \frac{Q(D^c \cap U)}{Q(D^c)} \frac{1-C}{1+ \theta +C}. \end{align*} \end{prp} \begin{proof} We keep the notation introduced in the proof of Proposition \ref{prp: the upper bound}. For $y\in D_{\delta_\varepsilon}$ and $t, s\geqslant 0$ we define the event \begin{align*} A^-_{t,s,y} &= \{X_{r,\cdot}\circ \theta_{s}(y)\in D \mbox{ for all } r\in [0, t) \mbox{ and }X_{t,\cdot}\circ \theta_{s}(y)\in D_{\delta_\varepsilon}\} \end{align*} and use the abbreviation \[ A^-_{y} = A^-_{T_1, 0, y}. \] The same strong Markov property estimates as (\ref{eq: Strong Markov estimate}) and (\ref{eq: Strong Markov estimate recursion}) in the proof of Proposition \ref{prp: the upper bound}, with inverted inequalities and all nonnegative error terms dropped, yield \begin{align*} & \inf_{y\in D_{\delta_\varepsilon}} \mathbb{E}\left[ e^{-\theta Q h_\varepsilon \mathbb{T}_y} \mathbf{1}\{X^\varepsilon_{\mathbb{T}_y ,y}\in U\}\right]\\ & ~\geqslant ~\sum_{k=1}^\infty \left(\inf_{y\in D_{\delta_\varepsilon}} \mathbb{E}\left[ e^{-\theta Q h_\varepsilon T} \mathbf{1}(A_y^{-}) \right]\right)^{k-1} \inf_{y\in D_{\delta_\varepsilon}} \mathbb{E}\left[ e^{-\theta Q h_\varepsilon T} \mathbf{1}(B_y) \mathbf{1}(O^{U}_y)\right].
\end{align*} Proposition \ref{prop: ergodicity} yields $\varepsilon_0\in (0,1)$ such that for $\varepsilon\in (0, \varepsilon_0]$ \begin{align*} \inf_{y\in D_{\delta_\varepsilon}}\mathbb{E}\left[ e^{-\theta Q h_\varepsilon T} \mathbf{1}(A_y^-) \right] & \geqslant \frac{\beta_\varepsilon}{\theta Q h_\varepsilon + \beta_\varepsilon} \left(1- (1+C)^2\frac{Q h_\varepsilon}{\beta_\varepsilon}\right) \end{align*} and \begin{align*} \inf_{y\in D_{\delta_\varepsilon}}\mathbb{E}\left[ e^{-\theta Q h_\varepsilon T} \mathbf{1}(B_y) \mathbf{1}(O^{U}_y)\right] & \geqslant (1-C)^2 \frac{\beta_\varepsilon}{\theta Q h_\varepsilon + \beta_\varepsilon} \frac{h_\varepsilon}{\beta_\varepsilon} Q(D^c \cap U). \end{align*} This eventually yields $\varepsilon_0 \in (0,1)$ such that for $\varepsilon\in (0, \varepsilon_0]$ \begin{align*} & \inf_{x\in D_{\delta_\varepsilon}}\mathbb{E}\left[ e^{-\theta Q h_\varepsilon \mathbb{T}_{x}}\mathbf{1}\{X_{\mathbb{T}_{x} ,x}\in U\}\right] \\ &\qquad \geqslant (1-C)^2 \frac{\beta_\varepsilon}{\theta Q h_\varepsilon + \beta_\varepsilon}\frac{h_\varepsilon}{\beta_\varepsilon} Q(U\cap D^c) ~\sum_{k=1}^\infty \left(\frac{\beta_\varepsilon}{\theta Q h_\varepsilon + \beta_\varepsilon} \left(1- (1+C)^2\frac{h_\varepsilon}{\beta_\varepsilon} Q\right)\right)^{k-1} \\[2mm] &\qquad = \frac{Q(U \cap D^c)}{Q(D^c)} \frac{(1-C)^2}{\theta + (1 +C)^2}. \end{align*} An appropriate renaming of the constant $C$ finishes the proof. \end{proof} \bigskip \begin{proof} (Theorem \ref{thm: first exit times}) For $\rho \in (0,\frac{1}{2})$ we define $\rho^\varepsilon = \varepsilon^{-\rho}$ and verify conditions (\ref{eq: rho-delta limit}) for $p=4$ and (\ref{eq: exponentiell delta beta}) for this choice of $\rho^\varepsilon$ and $\delta_\varepsilon$. First, \[ \frac{\varepsilon \rho^\varepsilon}{\delta_\varepsilon^{(p+1)/2}} = \varepsilon^{1-\rho - \frac{5\gamma}{2}} \rightarrow 0, \mbox{ as }\varepsilon \rightarrow 0+, \mbox{ provided } \gamma < \tfrac{2}{5}(1-\rho).
\] Since for small $\varepsilon$ the intensity $\beta_\varepsilon \approx \varepsilon^{\alpha \rho}\, \ell(\tfrac{1}{\varepsilon^\rho})\, \mu(\mathcal{B}_1^c(0))$ is of polynomial order in $\varepsilon$, just as $\delta_\varepsilon$, the reasoning reduces to the fact that the exponential decay of $e^{-\delta_\varepsilon^{-1}}$ dominates $(\delta_\varepsilon \beta_\varepsilon)^{-2}$ in the limit $\varepsilon\rightarrow 0+$. This implies relation (\ref{eq: exponentiell delta beta}). Therefore the upper bound of Proposition \ref{prp: the upper bound} and the lower bound of Proposition \ref{prp: the lower bound} apply, which yields the desired result. \end{proof}
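Read together, the upper and lower bounds of Propositions \ref{prp: the upper bound} and \ref{prp: the lower bound} sandwich the Laplace transform: letting $C\rightarrow 0$, for every $\theta>0$ and every $U$ satisfying (\ref{eq: Rand U null}),
\begin{align*}
\lim_{\varepsilon\rightarrow 0+}\;
\sup_{y\in D_{\delta_\varepsilon}}
\bigg|\, \mathbb{E}\Big[e^{-\theta Q(D^c) h_\varepsilon \mathbb{T}_y}\,
\mathbf{1}\{X^\varepsilon_{\mathbb{T}_y,y}\in U\}\Big]
- \frac{1}{1+\theta}\, \frac{Q(U\cap D^c)}{Q(D^c)} \,\bigg| = 0.
\end{align*}
In other words, $Q(D^c)\, h_\varepsilon\, \mathbb{T}_y$ converges in distribution to a standard exponential time, uniformly over the starting points, while the exit locus is asymptotically distributed according to $Q(\,\cdot\, \cap D^c)/Q(D^c)$.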
\section{Introduction} When atoms or molecules are irradiated with laser fields that are intense enough to induce nonlinear effects, a wealth of fascinating phenomena may be observed~\cite{posthumus-2004}. This applies even to deceptively ``uninteresting'' systems such as the simplest molecule, H$_2^+$~\cite{giusti-suzor-1995}: above-threshold or tunneling ionization~\cite{gibson-1997}, bond softening~\cite{bucksbaum-1990}, bond hardening (light-induced bound states or vibrational trapping)~\cite{fransinski-1999}, charge resonance enhanced ionization~\cite{zuo-1993, chelkowski-1996}, above threshold dissociation~\cite{giusti-suzor-1990}, high harmonic generation~\cite{plummer-1995}, etc. This very same complexity in the molecular reaction, however, is what makes it possible to envision \emph{controlling} molecules with short (femtosecond time scale) and intense (10$^{11} - 10^{15}$ W/cm$^2$) laser pulses~\cite{shapiro-2003}. The short durations allow for \emph{coherent control}: the systems evolve uncoupled to the environment, and can be steered towards the desired outcomes without relying on the more traditional control parameters, i.e., average, thermodynamic functions such as the temperature. The high intensities trigger the strongly nonlinear, even non-perturbative, response of the systems. An essential ingredient for realizing such molecular control is the capability of shaping the laser pulses -- a technological area that has witnessed spectacular advances in recent years~\cite{weiner-2000}. Yet this complexity implies the need for challenging theoretical models. Not surprisingly, ionization is the first and most studied process, mainly because it could already be studied for atoms~\cite{protopapas-1997}, and because in this intensity regime it almost always occurs, whether or not it is accompanied by other phenomena.
Even in the absence of influence from nuclear dynamics, the ionization of molecules is significantly more complex than that of atoms, due to the electron emission from different atomic centers~\cite{muth-bohm-2000, litvinyuk-2005}. Two rather successful models for molecular ionization that have recently been suggested are the so-called molecular-orbital strong-field approximation~\cite{muth-bohm-2000} and the molecular extension of the Ammosov-Delone-Krainov (ADK) approximation~\cite{tong-2002}. However, these approaches are insufficient as \emph{general} tools~\cite{awasthi-2008}. A common feature of the approaches mentioned above is the use of the single-active-electron (SAE) approximation. One-electron molecular systems such as the hydrogen molecular ion H$_2^+$ are therefore perfect candidates to isolate the error introduced by the SAE approximation from further simplifications. Electron correlation gives rise to difficult and interesting phenomena such as non-sequential ionization~\cite{walker-1994}. In order to properly investigate the interaction of short and intense laser fields with molecules, one needs to perform explicitly time-dependent calculations, even though this may imply a heavy computational burden. Calculations of this kind, which propagate the time-dependent Schr{\"{o}}dinger equation (TDSE), have been presented for H$_2^+$ in the past, for example with the purpose of understanding the presence of maxima in the ionization yield for particular internuclear separations~\cite{enhanced-ionization}, or in order to disentangle the relationship between ionization and dissociation~\cite{ionization-dissociation, chelkowski-1996}. Recently, Selst{\o} {\em et al.}~\cite{selsto-2005} and Kjeldsen {\em et al.}~\cite{kjeldsen-2006} have reported calculations on the orientation dependence of the ionization yield -- lifting the commonly used assumption of a molecular axis parallel to the light polarization.
In this work we take a further step, and focus on the possibility of theoretically designing, via fixed-nuclei three-dimensional (3D) TDSE calculations, laser pulses able to control (in particular, significantly enhance) the ionization yields, taking H$_2^+$ as an example system. Some recent experimental breakthroughs in this area have triggered our interest. For example, Suzuki {\em et al.}~\cite{suzuki-2004} demonstrated the control of the multiphoton ionization channels of I$_2$ molecules by making use of a pulse-shaping system capable of varying the polarization direction in time. Simultaneously, Brixner {\em et al.}~\cite{brixner-2004} have made use of a similar polarization-shaping system to enhance ionization yields of diatomic molecules (K$_2$). Our focus is, however, on linearly polarized ultrashort pulses ($\approx$ 5~fs), so short that nuclear motion does not play a role during the pulse action -- in contrast to the studies in which the ionization is studied as the internuclear distance changes, leading to possible resonances. \section{Methodology} The optimization problem could be formulated in the language of quantum optimal-control theory (QOCT)~\cite{qoct}. It consists of a set of equations -- along with various suitable, iterative algorithms that solve them -- whose solution provides an optimized \emph{control field} that typically maximizes the expectation value of a \emph{target operator} $\hat{O}$. In order to enhance ionization, one would just define ${\hat{O}}$ as the projection onto unbound states, or, alternatively, the identity minus the projection onto the bound states: \begin{equation} \label{eq:operator} \hat{O} = \hat{1} - \sum^{\rm bound}_i \vert\varphi_i\rangle\langle\varphi_i\vert\,.
\end{equation} However, we have experienced numerical difficulties when attempting to solve the QOCT equations for this particular operator: The forward-backward propagations that must be performed in order to solve the QOCT equations proved to be, for our particular implementation, numerically unfeasible when using the operator given in Eq.~(\ref{eq:operator}) to define the target. This was due to the appearance of fields with unrealistically high frequencies and/or amplitudes. We believe that the reason lies in the fact that the backward propagation must be performed after acting with the operator $\hat{O}$ on the previously propagated wave function. This eliminates the smooth, \emph{numerically friendly} part of the wave function, enhancing, on the contrary, the high-frequency components. This procedure is repeated at each iteration, eventually making the propagation impossible. We do not claim, however, that other numerical implementations cannot successfully cope with this problem. Therefore, we have employed, and present here, a \emph{direct} optimization scheme, which is in fact much closer in spirit to the techniques utilized by the experimentalists~\cite{rabitz-2000}. In this scheme, we construct a merit function by considering the expectation value of the operator defined in Eq.~(\ref{eq:operator}) at the end of the propagation: \begin{equation} F(x) = \langle\Psi_x(T)\vert\hat{O}\vert\Psi_x(T)\rangle\,, \end{equation} where $x$ is the set of parameters that define the laser pulse, and $\vert\Psi_x(T)\rangle$ is the wave function that results from performing the propagation with the laser determined by $x$, at the final time $T$. Of course, the sum over the bound states has to be truncated; for the calculations presented below, we find it sufficient to include the lowest ten states.
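The evaluation of the merit function can be sketched numerically. The following toy code (our own illustrative discretization, not the actual {\tt octopus} implementation; the grid, quadrature weights and "bound states" are made-up stand-ins) computes $F(x)$ as the norm of the final wave function minus its summed overlaps with the retained bound states:

```python
import numpy as np

def ionization_yield(psi_T, bound_states, weights):
    """<Psi(T)| (1 - sum_i |phi_i><phi_i|) |Psi(T)> on a discrete grid."""
    # discrete inner product: <phi|psi> = sum_k w_k conj(phi_k) psi_k
    norm = np.sum(weights * np.abs(psi_T) ** 2)
    bound = sum(abs(np.sum(weights * np.conj(phi) * psi_T)) ** 2
                for phi in bound_states)
    return norm - bound

# toy 1D grid with two orthonormal "bound states"
x = np.linspace(-10.0, 10.0, 401)
w = np.full_like(x, x[1] - x[0])                 # uniform quadrature weights
phi0 = np.exp(-x**2 / 2);     phi0 /= np.sqrt(np.sum(w * phi0**2))
phi1 = x * np.exp(-x**2 / 2); phi1 /= np.sqrt(np.sum(w * phi1**2))

# a state with 60% bound character and 40% orthogonalized "continuum" noise
rng = np.random.default_rng(0)
cont = rng.standard_normal(x.size)
for phi in (phi0, phi1):
    cont -= np.sum(w * phi * cont) * phi         # project out bound components
cont /= np.sqrt(np.sum(w * cont**2))
psi = np.sqrt(0.6) * phi0 + np.sqrt(0.4) * cont

print(ionization_yield(psi, [phi0, phi1], w))    # ~ 0.4, the "continuum" weight
```

By construction the state carries 40\% of its norm outside the bound subspace, so the merit function recovers that weight; in the real calculation $\Psi_x(T)$ comes from the TDSE propagation and the $\varphi_i$ are the lowest ten field-free eigenstates.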
The merit function is evaluated by performing consecutive TDSE propagations: The resulting function values are fed into a recently developed derivative-free algorithm called NEWUOA~\cite{newuoa}. This algorithm seeks the maximum of a merit function $F(x)$ depending on $N$ variables $x$, and does not require the gradient $\nabla F$. It is very effective for $N$ larger than ten and smaller than a few hundred, which is the case considered here. In all our runs, around two hundred iterations sufficed to converge the ionization yields to within 1\%. The system and the TDSE propagations are modeled with our {\tt octopus} code~\cite{octopus}. We represent the wave functions on a regular rectangular real-space grid, and fix the nuclei at their equilibrium distance. The short duration of the pulses used here justifies this simplification. We perform calculations with the polarization direction both parallel and perpendicular to the molecular axis. The size of the simulation box is chosen large enough to ensure that very little of the electronic density has reached the grid boundaries at the end of the laser pulse. Nevertheless, we add absorbing boundaries to remove this charge: if the propagation is continued after the pulse, part of the density ``abandons'' the simulation box, and the remaining integrated density should approach (as it does) one minus the ionization probability calculated as the expectation value of Eq.~(\ref{eq:operator}). The laser pulse is treated in the dipole approximation and represented in the length gauge. The temporal shape of the pulse is given by a function $f(t)$, which we expand in a Fourier series: \begin{equation} f(t) = f_0 + \sum_{n=1}^N \left[ f_n \sqrt{\frac{2}{T}}\cos(\omega_n t) + g_n \sqrt{\frac{2}{T}}\sin(\omega_n t) \right]\,, \end{equation} with $\omega_n = 2\pi n/T$.
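The expansion can be sketched numerically; the example below also verifies the physical constraints on $f(t)$ discussed next (zero time average, taking $f_0=0$, and vanishing endpoints when $\sum_n f_n=0$). Variable names are illustrative:

```python
import numpy as np

def pulse(t, T, f_coeffs, g_coeffs):
    """f(t) = sum_n sqrt(2/T) [f_n cos(w_n t) + g_n sin(w_n t)], w_n = 2 pi n / T,
    with the constant term f_0 omitted (zero time average)."""
    f = np.zeros_like(t)
    for n, (fn, gn) in enumerate(zip(f_coeffs, g_coeffs), start=1):
        wn = 2.0 * np.pi * n / T
        f += np.sqrt(2.0 / T) * (fn * np.cos(wn * t) + gn * np.sin(wn * t))
    return f

T = 10.0
t = np.linspace(0.0, T, 2001)
f = pulse(t, T, f_coeffs=[0.3, -0.3], g_coeffs=[0.5, 0.2])  # sum of f_n is zero
dt = t[1] - t[0]
print(abs(np.sum(f[:-1]) * dt) < 1e-9)          # zero net field over [0, T]
print(abs(f[0]) < 1e-9 and abs(f[-1]) < 1e-9)   # f(0) = f(T) = 0
```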
In order to ensure a physically meaningful laser pulse~\cite{madsen}, we must have $\int_0^T {\rm d}t\, f(t)=0$, which implies $f_0=0$. Moreover, we must have $f(0)=f(T)=0$, where $T$ is the total propagation time. This imposes the following constraint: \begin{equation} \label{eq:condition-zero} \sum_{n=1}^N f_n = 0\,. \end{equation} The sum over frequencies is truncated according to physical considerations: any pulse shaper has a predefined range of frequencies it can work with. The feasibility of the numerical scheme depends on the possibility of truncating the previous expression at a reasonably low order. In the cases considered here, owing to the short duration of the pulses, we obtain no more than around 20 degrees of freedom when setting the maximum frequency to one Hartree. Evidently, by increasing the intensity of a pulse one can enhance the ionization yield. Our aim is to improve this yield by changing the pulse shape, not simply by lasing at larger intensity. Therefore, to ensure a \emph{fair} optimization search, we constrain the search to laser pulses whose time-integrated intensity (fluence) is fixed to some value $F_0$: \begin{equation} F_0 = \int_0^T\!\!\!\! f^2(t)\,{\rm d}t = \sum_{n=1}^N (f_n^2+g_n^2)\,. \end{equation} The search space $\lbrace f_n,g_n\rbrace$ is thus constrained to the hypersphere defined by the previous equation. Adding the condition given by Eq.~(\ref{eq:condition-zero}) further restricts the search space to a hyper-ellipsoid; by performing the appropriate unitary transformation, this can be brought back into a hypersphere: \begin{equation} F_0 = \sum_{n=1}^{2N-1} \xi_n^2\,.
\end{equation} This equal-fluence condition is automatically satisfied if we perform a further transformation to hyperspherical coordinates: \begin{eqnarray} \nonumber \xi_1 & = & F_0^{1/2} \cos(\theta_1)\,, \\\nonumber \xi_2 & = & F_0^{1/2} \sin(\theta_1) \cos(\theta_2)\,, \\\nonumber \dots & = & \dots \\\nonumber \xi_{2N-2} & = & F_0^{1/2} \sin(\theta_1) \dots \sin(\theta_{2N-3}) \cos(\theta_{2N-2})\,, \\ \xi_{2N-1} & = & F_0^{1/2} \sin(\theta_1) \dots \sin(\theta_{2N-3}) \sin(\theta_{2N-2})\,. \end{eqnarray} The set of angles $\lbrace \theta_j\rbrace_{j=1}^{2N-2}$ constitutes the $2N-2$ variables that define the search space for the optimization algorithm. \section{Results} The initial laser field, before optimization, is a linearly polarized eight-cycle pulse with a sinusoidal envelope, fixed peak intensity, and a wavelength of $\lambda=400$ nm -- a typical value for frequency-doubled titanium-sapphire lasers. Correspondingly, the initial frequency is $\omega_0= 0.114$ Ha, and the pulse length is $5.3$ fs. The maximum allowed frequency of the {\em optimized} pulse is set to $\omega_{\rm max}=2\,\omega_0$. The pulse polarization is fixed to be parallel or perpendicular to the molecular axis. During the optimization procedure, the polarization and the fluence $F_0$ are kept fixed, but the {\em peak} intensity may change from the initial value, which is selected in the range $I=0.5, 0.75, \ldots, 2\times 10^{15}$ W/cm$^2$. Figure~\ref{fig1} \begin{figure} \centerline{\includegraphics[width=0.7\columnwidth]{fig1.eps}} \caption{(color online). Ionization probability for the initial pulse (circles) and for the optimized pulse (squares) as a function of the peak intensity of the initial pulse. The polarization of the pulse is (a) parallel and (b) perpendicular to the molecular axis.
} \label{fig1} \end{figure} shows the ionization probabilities as a function of the peak intensity (of the initial guess pulse) for the initial and optimized pulses polarized parallel (a) and perpendicular (b) to the molecular axis, respectively. Overall, the pulse optimization leads to a significant increase in the ionization. As expected, the ionization yield is slightly larger for pulses polarized parallel to the molecular axis. To get more insight into the optimized ionization process, we plot in Fig.~\ref{fig2} \begin{figure} \centerline{\includegraphics[width=0.99\columnwidth]{fig2.eps}} \caption{(color online). (a) Initial and optimized pulses (parallel polarization) and (b) the occupation of selected single-electron states in the optimized ionization process, when $I=2\times 10^{15}$ W/cm$^2$. (c-d) Same as (a-b) but for perpendicular polarization. } \label{fig2} \end{figure} the initial and optimal laser pulses and the occupations of some single-electron states during the pulse interaction. The peak intensity of the initial pulse is $I=2\times 10^{15}$ W/cm$^2$. The optimized pulses of both parallel (a) and perpendicular (c) polarization exhibit large peaks near the end of the pulse. According to the corresponding occupations shown in Figs.~\ref{fig2}(b) and (d), these amplitude peaks account for almost all of the ionization: During the peaks the ground-state occupations rapidly collapse. The 2p $\sigma_{\rm u}$ (2p $\pi_{\rm u}$) excited state contributes to the process to a small extent in the parallel (perpendicular) case, whereas the remaining states are involved to a nearly negligible extent; overall, no excited bound state contributes significantly. Hence, within the constraints set here for the laser pulse, the optimal ionization of H$_2^+$ is a direct process obtained by focusing most of the available pulse energy in a very short time frame -- though keeping the time-integrated field at zero in accordance with Maxwell's equations (see Ref.~\cite{madsen}).
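The equal-fluence hyperspherical parametrization of the search space introduced in the Methodology section can be sketched as follows (an illustrative implementation under our own naming, not the production code; the fluence constraint holds by construction, so the optimizer can work on the unconstrained angles):

```python
import numpy as np

def fluence_sphere(theta, F0):
    """Map angles (theta_1, ..., theta_{M-1}) to xi on the sphere
    sum_n xi_n^2 = F0 via the standard hyperspherical parametrization."""
    M = len(theta) + 1
    xi = np.empty(M)
    s = np.sqrt(F0)              # running product sqrt(F0) sin(theta_1)...sin(theta_j)
    for j, th in enumerate(theta):
        xi[j] = s * np.cos(th)
        s *= np.sin(th)
    xi[-1] = s
    return xi

rng = np.random.default_rng(0)
theta = rng.uniform(0.0, np.pi, size=8)   # arbitrary point of the search space
xi = fluence_sphere(theta, F0=2.5)
print(np.isclose(np.sum(xi ** 2), 2.5))   # fluence constraint is automatic
```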
The electron densities during the ionization process are visualized in Fig.~\ref{fig3} \begin{figure} \centerline{\includegraphics[width=0.99\columnwidth]{fig3.eps}} \caption{(color online). (a-f) Snapshots of the electron densities in the optimized ionization process when the pulse polarization is parallel to the molecular axis and the intensity is $I=2\times 10^{15}$ W/cm$^2$. The black dots mark the positions of the nuclei. The time steps between snapshots are all equal. (g-l) The same for a pulse polarized perpendicular to the molecular axis. } \label{fig3} \end{figure} for both parallel (a-f) and perpendicular (g-l) polarizations. The previous optimizations have produced rather ``uninteresting'' solutions. It is a well-known fact that in a short intense laser pulse, most of the ionization occurs during the peaks in the electric field. Therefore, the optimizations have simply attempted to create short, intense bursts of light. This can be understood if we consider the process to happen in the quasi-static, tunneling regime, in which the total ionization can be approximated by considering, at each moment in time, the static ionization rate that corresponds to the instantaneous field amplitude. This ionization rate is nonlinear, and it is much larger at the electric-field peaks, which therefore cause most of the ionization. Note, however, that the cases discussed above lie in an intermediate regime between the tunneling and the multi-photon regime -- the Keldysh parameter, $\gamma$, is of the order of one. [The Keldysh parameter $\gamma$ is defined as $\sqrt{\vert E_I\vert/2U_p}$, where $E_I$ is the ionization potential of the system, and $U_p$ is the ponderomotive energy, given in atomic units by $(E_0/2\omega)^2$, $E_0$ being the peak amplitude of the electric field and $\omega$ the pulse frequency. Since our optimized lasers do not have a single frequency -- not even necessarily a dominant one -- we can only speak of approximate Keldysh parameters.]
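The definition just given translates directly into a short estimate; the numbers below (ionization energy of $\approx 1.1$ Ha for H$_2^+$ at equilibrium, and $E_0\approx 0.24$ a.u.\ for $I=2\times 10^{15}$ W/cm$^2$) are indicative values we supply for illustration:

```python
import numpy as np

def keldysh(E_I, E0, omega):
    """Approximate Keldysh parameter gamma = sqrt(|E_I| / (2 U_p)),
    ponderomotive energy U_p = (E0 / (2 omega))**2, all in atomic units."""
    U_p = (E0 / (2.0 * omega)) ** 2
    return np.sqrt(abs(E_I) / (2.0 * U_p))

# Indicative values: E_I ~ 1.1 Ha, omega_0 = 0.114 Ha, E0 ~ 0.24 a.u.
gamma = keldysh(1.1, 0.24, 0.114)
print(round(gamma, 2))  # of the order of one
```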
Alternatively, one can explain the simplicity of the pulses by noting that the maximum allowed frequency, $2\,\omega_0 = 0.228$~Ha, is smaller than any resonant transition energy from the ground state. As a consequence, the system does not significantly populate the excited states, and the only ionizing channel is direct transition to the continuum. The picture changes significantly, however, if we allow for a larger cutoff frequency. First, this increases the value of the Keldysh parameter associated with the process, which may shift the regime from a quasi-static to a more multi-photon-like character. Secondly, the excited bound states now become accessible through single-photon transitions. For example, Fig.~\ref{fig4} displays results obtained for a cutoff of $4\,\omega_0$. The intensity is here set to $0.5\times 10^{15}$~W/cm$^2$. Doubling the cutoff frequency of the search space has a significant effect on the total ionization yield: We now obtain an ionization probability of 0.99, whereas in the first optimization the yield was 0.20 (see Fig.~\ref{fig1}, top panel, first point in the series). Note that the initial yield before any optimization was only 0.005. Moreover, the manner in which the ionization occurs with the larger cutoff frequency is qualitatively very different. Figure~\ref{fig4} \begin{figure} \centerline{\includegraphics[width=0.9\columnwidth]{fig4.eps}} \caption{ (color online). Upper panel: Optimized laser pulse for the ionization when the cutoff frequency is $4\,\omega_0$ (see text) and the intensity is fixed to $0.5 \times 10^{15}$ W/cm$^2$. Lower panel: Occupation of a few lowest states during the pulse interaction. } \label{fig4} \end{figure} displays the evolution of the occupation of some of the bound states. The first excited state ($2{\rm p}\,\sigma_{\rm u}$) plays a significant role, which can be understood because the transition energy from the ground state is now accessible within the field search space.
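A quick consistency check of this argument, using approximate literature values (supplied by us, not taken from the calculations above) for the fixed-nuclei electronic energies of H$_2^+$ at $R=2$ a.u.:

```python
# Approximate electronic energies (Hartree) at R = 2 a.u.:
E_g = -1.1026   # 1s sigma_g (ground state)
E_u = -0.6675   # 2p sigma_u (first excited state)
gap = E_u - E_g             # vertical transition energy, ~0.435 Ha
w0 = 0.114                  # initial pulse frequency (Ha)
# The resonance is closed for the 2*w0 cutoff but open for the 4*w0 cutoff:
print(2 * w0 < gap < 4 * w0)
```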
In addition, a few other low-lying states contribute to the ionization process, entering in ascending order as a function of time. It should be noted, however, that only the $\sigma$ orbitals, whose nodes are perpendicular to the polarization axis, participate in the transitions; $\pi$ orbitals, for example, are not allowed, owing to their different symmetry. As a consequence of the involvement of several states in the ionization process, the structure of the optimized laser pulse shown in the upper panel of Fig.~\ref{fig4} is much more complicated than the single-burst fields obtained in the previous calculations. \section{Conclusions} In conclusion, we have shown with three-dimensional time propagations of the time-dependent Schr\"odinger equation how the precise temporal shape of a short intense laser pulse may significantly affect the total ionization yield of H$_2^+$ at fixed internuclear separation. Moreover, we have employed a gradient-free optimization technique to find the laser pulse that enhances the ionization. This optimization can be constrained in different ways, accounting for the limitations of physical sources -- not all frequencies and intensities are available, and not all possible shapes can be constructed with state-of-the-art pulse shapers (although the technology improves at a phenomenal rate). The results will differ depending on these constraints: The optimized laser pulse may be the single burst of electric field that one would expect for a process in the tunnelling regime, or a field with a more complicated structure that drives the system through intermediate states -- the ionization can be enhanced by resonant transitions. Whereas in the former case it would be easy to design pulses that maximize the ionization intuitively, in the latter an optimization algorithm such as the one presented in this work is necessary. \acknowledgments We thank M. F{\o}rre and L. Madsen for helpful discussions.
We acknowledge funding by the European Community through the e-I3 ETSF project (INFRA-2007-1.2.2: Grant Agreement Number 211956). AR acknowledges support by the Spanish MEC (FIS2007-65702-C02-01), ``Grupos Consolidados UPV/EHU del Gobierno Vasco'' (IT-319-07), CSIC, the Barcelona Supercomputing Center, ``Red Espanola de Supercomputacion'' and SGIker ARINA (UPV/EHU). ER acknowledges support by the Academy of Finland. AC and EKUG acknowledge support from the Deutsche Forschungsgemeinschaft within the SFB 658, and EKUG acknowledges hospitality from KITP-Sta. Barbara.
\section{Introduction} The short-time Fourier transform (STFT) has many applications, especially in signal and image processing, owing to its good time-frequency localization. In several dimensions, singularities along curves, such as edges in an image, often carry the key information. To capture such directional singularities, Candes \cite{candes1998ridgelets} first introduced the ridgelet transform, which is the wavelet transform in the Radon domain. Ridgelets are constant along the ridge or hyperplane. Nowadays curvelets and shearlets play an important role in representing directional features, as they give optimally sparse approximations for a class of bivariate functions exhibiting anisotropic features \cite{candes2004new,candes2005continuous,kutyniok2009resolution,han2020microlocal}. These directional representations are related to the wavelet transform in that they project a hyperplane singularity onto a point singularity and then take a one-dimensional wavelet transform. We recall that the short-time Fourier transform (STFT) \cite{grochenig2001foundations} of a function $f\in L^2({\mathbb R}^n)$ w.r.t. a window $g\in L^2({\mathbb R}^n)$ is the function $V_gf(x,\omega)=\int_{{\mathbb R}^n}f(t)\overline{g(t-x)}e^{-2\pi i\omega t}dt$, i.e., \begin{equation}\label{repif} V_gf(x,\omega)=\widehat{f.T_x\bar{g}}(\omega). \end{equation} Since the STFT enjoys an orthogonality relation, it gives a full reconstruction of a function/distribution $f$: for $f,g,h\in L^2({\mathbb R}^n)$ with $\widehat{f}, \widehat{g}\in L^1({\mathbb R}^n)$, \begin{equation} f(u)=\frac{1}{\langle h,g\rangle}\int_{{\mathbb R}^n}\int_{{\mathbb R}^n}V_gf(x,\omega)M_{\omega}T_x h(u)d\omega dx,\ \ \textrm{for\ all}\ u\in {\mathbb R}^n, \end{equation} where $M_{\omega}$ and $T_x$ are the modulation and translation operators, respectively. However, the STFT does not provide directional information about the function or distribution.
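On the finite cyclic group $\mathbb{Z}_N$ (with counting measures), the STFT and its orthogonality relation can be checked directly; a minimal numerical sketch, with our own normalization, where Parseval's identity for the DFT produces the factor $N$:

```python
import numpy as np

def stft_zn(f, g):
    """Discrete STFT on Z_N: V[x, k] = DFT_t( f[t] * conj(g[t - x]) )[k]."""
    N = len(f)
    V = np.empty((N, N), dtype=complex)
    for x in range(N):
        V[x] = np.fft.fft(f * np.conj(np.roll(g, x)))  # roll(g, x)[t] = g[t - x]
    return V

rng = np.random.default_rng(1)
N = 16
f = rng.standard_normal(N) + 1j * rng.standard_normal(N)
g = rng.standard_normal(N) + 1j * rng.standard_normal(N)
V = stft_zn(f, g)
# Discrete orthogonality relation: sum |V|^2 = N * ||f||^2 * ||g||^2
lhs = np.sum(np.abs(V) ** 2)
rhs = N * np.sum(np.abs(f) ** 2) * np.sum(np.abs(g) ** 2)
print(np.isclose(lhs, rhs))
```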
For directional sensitivity of the time-frequency decomposition, Grafakos and Sansing \cite{grafakos2008gabor} introduced a variant of the STFT of a function $f\in L^2({\mathbb R}^n)$ using the Gabor ridge functions $e^{2\pi i m (s-t)}g(s-t)$ on $S^{n-1}\times {\mathbb R} \times {\mathbb R}$ as \begin{equation} (\xi,x,\omega)\rightarrow \int_{{\mathbb R}^n}f(t)\overline{g(\xi.t-x)}e^{-2\pi i \omega (\xi.t)}dt. \end{equation} However, this transform does not provide a full reconstruction of the function $f$, and for this reason the authors in \cite{grafakos2008gabor} use derivatives of the Gabor ridge functions, i.e., ``weighted Gabor ridge functions'', for the analysis and synthesis of $f$. Moreover, this transform loses the Fourier-transform representation \eqref{repif} of the STFT, since $x\in{\mathbb R}^n$ and $\omega\in {\mathbb R}$. Modifying this idea, Giv \cite{giv2013directional} introduced the directional short-time Fourier transform and studied some of its useful properties, such as orthogonality and a full reconstruction formula. Recently, Mejjaoli and Omri \cite{HMSO2020spectral} introduced a generalized two-wavelet multiplier using the directional STFT and studied the $L^p$ boundedness and compactness of the two-wavelet multiplier. \\ Our aim in this paper is to give a generalization of the two-wavelet multiplier on a locally compact abelian topological group $G$, associated with right $H$-translation-invariant functions. To this end, as a generalization of the directional STFT, we define a transform $\mathcal{D}_H^gf(\omega,zH)$ of $f\in L^2(G)$ using a character $\omega\in \widehat{G}$ and a window $g\in L^2(G/H)$, where $H$ is a closed subgroup of $G$. In Section \ref{FoT}, we first define this STFT-like transform $\mathcal{D}_H^gf$ on $G/H$ for $f\in L^1(G)$ and $g\in L^\infty(G/H)$, and show that it satisfies certain orthogonality relations. In this section we also define the generalized multiplier and the generalized two-wavelet multiplier.
In Sections \ref{bound}, \ref{SchttenN} and \ref{lpbdd} we show that the generalized multiplier is bounded, of Schatten class, and $L^p$-bounded, $1\leq p\leq \infty$, for all symbols $\sigma$ in $L^p(\widehat{G}\times G/H)$, $1\leq p\leq \infty$. In Section \ref{Compt} we show that the generalized wavelet multipliers are compact operators for all symbols $\sigma$ in $L^1(\widehat{G}\times G/H)$. Finally, in Section \ref{LPOp} we define the generalized Landau-Pollak-Slepian operator and show that it is unitarily equivalent to a scalar multiple of the generalized two-wavelet multiplier. \section{Fourier like transform on locally compact abelian topological groups associated to the coset of closed subgroup}\label{FoT} Throughout, $G$ denotes a locally compact abelian (LCA) topological group with Haar measure $dm_{G}$, and $\widehat{G}$ is the dual group of $G$ with Haar measure $dm_{\widehat{G}}$ chosen to be the dual measure of $dm_{G}$. Let $H$ be a closed subgroup of $G$ with Haar measure $dm_{H}$. The annihilator of $H$ is the set $H^{\perp}\subset \widehat{G}$ given by $H^{\perp}=\lbrace \eta\in \widehat{G}: \langle y,\eta\rangle=1\ \textrm{for\ all}\ y\in H\rbrace$. Moreover, $H^{\perp}$ is a closed subgroup of $\widehat{G}$, topologically isomorphic to the character group of $G/H$, and we have $$(H^{\perp})^{\perp}=H\ \ \ \ \textrm{and}\ \ \ \ \widehat{H}=\widehat{G}/H^{\perp}.$$ Let $f$ be any function in $L^1(G)$. Then, according to Theorem 28.54 in \cite{EHKAR1997abstract}, the function $x\rightarrow \int_{H}f(xy)dm_{H}(y)$ depends only on the coset of $H$ containing $x$, and hence defines a function on the quotient group $G/H$. Moreover, this function, which we denote by $R_H$ (so that $R_H$ is the map $xH\rightarrow \int_{H}f(xy)dm_{H}(y)$), is Haar measurable on $G/H$ and belongs to $L^1(G/H)$.
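A finite sketch of the map $R_H$, taking $G=\mathbb{Z}_{12}$, $H=\lbrace 0,4,8\rbrace$ and counting Haar measures, which also verifies Weil's formula stated next (summing $R_H f$ over $G/H$ recovers the sum of $f$ over $G$):

```python
import numpy as np

# G = Z_12 with counting measure, H = {0, 4, 8} (order 3), so G/H ~ Z_4.
# R_H f(xH) = sum_{y in H} f(x + y) depends only on the coset x + H.
f = np.arange(12, dtype=float)   # an arbitrary f in L^1(G)
H = [0, 4, 8]
RH = [sum(f[(x + y) % 12] for y in H) for x in range(4)]  # coset reps 0, 1, 2, 3
# Weil-type formula with counting measures: sum over G/H of R_H f = sum over G of f
print(sum(RH) == f.sum())
```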
We normalize the Haar measures so that \begin{equation}\label{weilsformula} \int_{G/H}R_H(xH)dm_{G/H}(xH)=\int_{G/H}\int_{H}f(xy)dm_{H}(y)dm_{G/H}(xH)=\int_G f(x)dm_{G}(x). \end{equation} For $R_H\in L^1(G/H)$, the Fourier transform of $R_H$ is defined by $$\widehat{R_H}(\chi^+)=\int_{G/H}R_H(xH)\overline{\chi^+(xH)}dm_{G/H}(xH),$$ where the character $\chi^+$ of the group $G/H$ is defined by $\chi^+(xH)=\chi(x)$ for $\chi\in H^{\perp}$. The Fourier transform of a function on the quotient group $G/H$ is related to that of the corresponding function on $G$ as follows: \begin{align}\label{fourierslice} \widehat{R_H}(\chi^+)&=\int_{G/H}\int_{H}f(xy)dm_{H}(y)\overline{\chi^+(xH)}dm_{G/H}(xH)\nonumber\\ &=\int_{G/H}\int_{H}f(xy)\overline{\chi(xy)}dm_{H}(y)dm_{G/H}(xH)\nonumber\\ &=\widehat{f}(\chi). \end{align} Before describing the operator, we first recall the short-time Fourier transform (STFT) on a locally compact abelian group. Given an appropriate window $g\in L^2(G)$, the STFT of $f\in L^2(G)$ w.r.t. $g$ at a point $(x,\omega)\in G\times \widehat{G}$ is given by $$\mathcal{V}_gf(x,\omega)=\int_G f(y)\overline{g}(x^{-1}y)\overline{\omega(y)}dm_G(y)=\langle f,M_{\omega}T_xg\rangle=(f.T_x\overline{g})^{\wedge}(\omega).$$ The STFT enjoys the following orthogonality relation for $f,g\in L^2(G)$: \begin{align} \int_{G}\int_{\widehat{G}}|\mathcal{V}_g f(x,\omega)|^2dxd\omega=\|g\|_2^2 \|f\|_2^2. \end{align} For $f,g,h\in L^2(G)$ with $\widehat{f},\widehat{g}\in L^1(\widehat{G})$, we have the reconstruction formula \begin{align} f(u)=\frac{1}{\langle h,g\rangle}\int_G\int_{\widehat{G}} \mathcal{V}_g f(x,\omega)M_\omega T_x h(u)dm_{\widehat{G}}(\omega) dm_G(x)\ \ \textrm{for\ all}\ u\in G.
\end{align} Now, for the closed subgroup $H\subset G$, a window $g\in L^\infty(G/H)$ and $zH\in G/H$, we define an STFT-like transform of $f\in L^1(G)$, using right-$H$-translation-invariant functions on $G$, as a function on $\widehat{G}\times G/H$: \begin{equation}\label{Dg} \mathcal{D}_H^gf(\omega,zH)=\int_G f(x)\overline{\omega(x)g(z^{-1}xH)}dm_G(x)=\int_{G}M_{-\omega}f(x)\overline{g(z^{-1}xH)}dm_G(x). \end{equation} Next we show how the transform $\mathcal{D}_H^gf(\omega,zH)$ can be written in terms of the inner product on $G/H$ in a natural way. \begin{theorem}\label{transformip} If $g\in L^\infty(G/H)$ and $f\in L^1(G)$, then for every $(\omega,zH)\in \widehat{G}\times G/H$ \begin{equation} \mathcal{D}_H^gf(\omega,zH)=({f(x)\overline{g(z^{-1}xH)}})^{\wedge}(\omega)=\langle R_H(M_{-\omega}f),T_{zH}g\rangle_{G/H}=\left(R_H(M_{-\omega}f)*g\right)(zH). \end{equation} \end{theorem} \begin{proof} The result follows easily in view of Weil's formula \eqref{weilsformula} and the following computation: \begin{eqnarray*} \mathcal{D}_H^gf(\omega,zH)&=&\int_G f(x)\overline{\omega(x)g(z^{-1}xH)}dx\\ &=&\int_{G/H}\int_{H}f(xy)\overline{\omega(xy)g(z^{-1}xyH)}dm_{H}(y)dm_{G/H}(xH)\\ &=&\int_{G/H}R_H(M_{-\omega}f)(xH)\overline{g(z^{-1}xH)}dm_{G/H}(xH)\\ &=&\left(R_H(M_{-\omega}f)*g\right)(zH), \end{eqnarray*} where for functions $f_1,f_2\in L^1(G/H)$ the convolution on the quotient group is defined by $$ (f_1*f_2)(xH)=\int_{G/H}f_1(yH)f_2(y^{-1}xH)dm_{G/H}(yH). $$ \end{proof} The transform $\mathcal{D}_H^gf(\omega,zH)$ can be regarded as an STFT on the quotient group $G/H$ through the following theorem: \begin{theorem} If $f\in L^1(G)$, $\widehat{f}\in L^1(\widehat{G})$ and $g\in L^1(G/H)\cap L^\infty(G/H)$, then the transform $\mathcal{D}_H^gf(\omega,zH)$ is the STFT of the function $T_{-\omega}\widehat{f}$ with respect to the window $\widehat{g}$, evaluated at $(0,-zH)$. \end{theorem} \begin{proof} From the previous theorem we have $\mathcal{D}_H^gf(\omega,zH)=\langle R_H(M_{-\omega}f),T_{zH}g\rangle_{G/H}$.
Since $f\in L^1(G)$ and $\widehat{f}\in L^1(\widehat{G})$, we have $R_H(M_{-\omega}f)\in L^1(G/H)$ and $\widehat{R_H(M_{-\omega}f)}\in L^1(H^{\perp})$. So $R_H(M_{-\omega}f)\in L^2(G/H)$. Also $g\in L^2(G/H)$. Hence, using Plancherel's theorem and \eqref{fourierslice}, we can rewrite $\mathcal{D}_H^gf(\omega,zH)$ as follows, for $\eta\in H^{\perp}$: \begin{align*} \mathcal{D}_H^gf(\omega,zH)&=\langle \widehat{R_H(M_{-\omega}f)}(\eta),\widehat{T_{zH}g}(\eta)\rangle_{H^{\perp}}\\ &=\langle \widehat{M_{-\omega}f}(\eta),M_{-zH}\widehat{g}(\eta)\rangle_{H^{\perp}}\\ &=\langle T_{-\omega}\widehat{f}(\eta),M_{-zH}\widehat{g}(\eta)\rangle_{H^{\perp}}. \end{align*} This proves the theorem. \end{proof} The transform $\mathcal{D}_H^gf(\omega,zH)$ satisfies the following orthogonality relations: \begin{theorem}\label{orthothm} \begin{itemize} \item[(i)] For every directional window $g\in L^{\infty}(G/H)$, the operator $\mathcal{D}_H^g$ is bounded from $L^1(G)$ into $L^{\infty}(\widehat{G}\times G/H)$ and the operator norm satisfies \begin{equation}\label{Dgb} \|\mathcal{D}_H^g\|\leq \|g\|_{L^{\infty}(G/H)}. \end{equation} \item[(ii)] Suppose $g_1,g_2\in L^{\infty}(G/H)$ and $f_1,f_2\in L^1(G)\cap L^2(G)$. If at least one of the $g_i$'s is in $L^1(G/H)$, then \begin{equation}\label{orthogonalitysp} \int_{\widehat{G}\times G/H}\mathcal{D}_H^{g_1}f_1(\omega,zH)\overline{\mathcal{D}_H^{g_2}f_2(\omega,zH)}dm_{\widehat{G}}(\omega)dm_{G/H}(zH)=\langle f_1,f_2 \rangle_{L^2(G)} \langle g_2,g_1\rangle_{L^2(G/H)}. \end{equation} Moreover, if $g\in L^1(G/H)\cap L^\infty (G/H)$ and $f\in L^1(G)\cap L^2(G)$, then $\mathcal{D}_H^{g}f\in L^2(\widehat{G}\times G/H)$ and \begin{equation}\label{orthogonalitynorm} \|\mathcal{D}_H^{g}f\|_{L^2(\widehat{G}\times G/H)}=\|g\|_{L^2(G/H)}\|f\|_{L^2(G)}.
\end{equation} \end{itemize} \end{theorem} \begin{proof} The first part follows directly from Equation \eqref{Dg}.\\ Since $g_i\in L^{\infty}(G/H)$ and $f_i\in L^1(G)\cap L^2(G)$, we have $f_i(x)\overline{\omega(x)g_i(z^{-1}xH)}\in L^1(G)\cap L^2(G)$ for $i=1,2$. Using Plancherel's theorem we can write: \begin{align*} &\int_{\widehat{G}\times G/H}\mathcal{D}_H^{g_1}f_1(\omega,zH)\overline{\mathcal{D}_H^{g_2}f_2(\omega,zH)}dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\ &= \int_{\widehat{G}\times G/H}({f_1(x)\overline{g_1(z^{-1}xH)}})^{\wedge}(\omega)\overline{({f_2(x)\overline{g_2(z^{-1}xH)}})^{\wedge}(\omega)}dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\ &= \int_{{G}\times G/H} f_1(x)\overline{g_1(z^{-1}xH)}\overline{f_2(x)}g_2(z^{-1}xH)dm_{G}(x)dm_{G/H}(zH)\\ &= \int_{G} f_1(x)\overline{f_2(x)}\left(\int_{G/H}\overline{g_1(z^{-1}xH)}g_2(z^{-1}xH)dm_{G/H}(zH)\right)dm_{G}(x). \end{align*} Hence the proof of \eqref{orthogonalitysp} follows by noting that $$\langle g_2,g_1\rangle_{G/H} = \int_{G/H}\overline{g_1(z^{-1}xH)}g_2(z^{-1}xH)dm_{G/H}(zH),$$ since $G/H$ is a unimodular group. Equation \eqref{orthogonalitynorm} follows as a consequence of \eqref{orthogonalitysp}. \end{proof} \begin{cor}\label{corinversion} Suppose $g_1,g_2\in L^\infty(G/H)$ and $f\in L^1(G)\cap L^2(G)$. If at least one of the $g_i$'s is in $L^1(G/H)$ and $\langle g_2,g_1\rangle\neq 0$, then $$f(x)=\frac{1}{\langle g_2,g_1\rangle}\int_{\widehat{G}\times G/H}\mathcal{D}_H^{g_1}f(\omega,zH)\omega(x)g_2(z^{-1}xH)dm_{\widehat{G}}(\omega)dm_{G/H}(zH).$$ Moreover, for a non-zero function $g\in L^1(G/H)\cap L^\infty(G/H)$, $$f(x)=\frac{1}{\|g\|_{L^2(G/H)}^2}\int_{\widehat{G}\times G/H}\mathcal{D}_H^{g}f(\omega,zH)\omega(x)g(z^{-1}xH)dm_{\widehat{G}}(\omega)dm_{G/H}(zH).$$ \end{cor} \begin{proof} The proof follows by using Theorem~\ref{orthothm}. \end{proof} \begin{rem} The weak version of the above Corollary~\ref{corinversion} means that for every function $u$ in $L^1(G)\cap L^2(G)$ there exists a unique $f\in L^2(G)$ such that
$$\langle f,u\rangle=\frac{1}{\langle g_2,g_1\rangle}\int_{\widehat{G}\times G/H}\mathcal{D}_H^{g_1}f(\omega,zH)\overline{\mathcal{D}_H^{g_2}u(\omega,zH)}dm_{\widehat{G}}(\omega)dm_{G/H}(zH),$$ which is nothing but equation \eqref{orthogonalitysp}. \end{rem} \begin{prop} Assume that $g\in L^1(G/H)\cap L^{\infty}(G/H)$, $f\in L^1(G)\cap L^2(G)$ and $p\in[2,\infty]$. Then \begin{equation}\label{Dgp} \|\mathcal{D}_H^g(f)\|_{L^p(\widehat{G}\times G/H)}\leq \|g\|_{L^p(G/H)}\|f\|_{L^{p^{\prime}}(G)}. \end{equation} \end{prop} \begin{exmp} Here we give an example with $G=\mathbb R^n$ and, for a fixed $\theta\in S^{n-1}$, $H=\lbrace x\in G: x\cdot\theta=0\rbrace$. Any element of $G/H$ can be written as $xH=(x\cdot\theta) \theta H$, so the quotient group is $G/H=\lbrace t\theta H,\,t\in \mathbb R\rbrace$. Hence the right-$H$-translation-invariant functions can be written as \begin{align*} R_H f(xH)=R_H f(t\theta H)=\int_{H}f(t\theta h)dm_H(h)=\int_{z\cdot\theta=t}f(z)dz=R_\theta f(t), \end{align*} where $R_\theta f(t)$, viewed as a function on $S^{n-1}\times \mathbb{R}$, is the Radon transform of $f$. In this case the transform $\mathcal{D}_H^{g}f(\omega,zH)$ is represented, using Theorem~\ref{transformip}, as \begin{align*} \mathcal{D}_H^{g}f(\omega,zH)=\int_{G/H}R_H(M_{-\omega}f)(xH)\overline{g(z^{-1}xH)}dm_{G/H}(xH). \end{align*} Note that $H$ defines an equivalence relation on $G$: $g_1\sim g_2$ iff $g_2^{-1}g_1\in H$, i.e., for $g_1,g_2\in \mathbb{R}^n$, $g_1\cdot\theta=g_2\cdot\theta$. Hence the quotient group $G/H$ can be identified with $\mathbb{R}$.
Hence for $z\in \mathbb{R}$, $\mathcal{D}_H^{g}f(\omega,zH)$ is defined on $S^{n-1}\times \mathbb{R}\times \mathbb{R}^n$ as \begin{align*} \mathcal{D}_H^{g}f(\omega,zH)=\int_{t\in \mathbb R}R_\theta(M_{-\omega}f)(t)\overline{g(t-z)}dt=&\int_{t\in \mathbb R}\int_{x\cdot\theta=t}f(x)\overline{\omega(x)}dx\,\overline{g(t-z)}dt\\ =&\int_{t\in \mathbb R}\int_{x\cdot\theta=t}f(x)\overline{\omega(x)}\overline{g(x\cdot\theta-z)}dxdt\\ =&\int_{\mathbb{R}^n}f(x)\overline{\omega(x)}\overline{g(x\cdot\theta-z)}dx, \end{align*} which is the directional short-time Fourier transform of the function $f$ with respect to the window $g$ \cite{giv2013directional, HMSO2020spectral}. \end{exmp} We define the generalized multiplier and the generalized two-wavelet multiplier as follows. \begin{defn}\label{generalizedmultiplier} For $\sigma\in L^{\infty}(\widehat{G}\times G/H)$ and $0\neq g\in L^1(G/H)\cap L^{\infty}(G/H)$, we define the linear operator $M_{\sigma, g}:L^2(G)\rightarrow L^2(G)$ by $$M_{\sigma, g}(f)=(D_{H}^g)^{-1}(\sigma D_{H}^gf).$$ This operator is called the generalized multiplier. \end{defn} \begin{defn}\label{def} Let $u,v$ be measurable functions on $G$ and let $\sigma$ be a measurable function on $\widehat{G}\times G/H$. For $0\neq g\in L^1(G/H)\cap L^{\infty}(G/H)$, we define the generalized two-wavelet multiplier operator $P_{u,v,g}(\sigma)$ on $L^p(G)$, $1\leq p\leq\infty$, by $$ P_{u,v,g}(\sigma)(f)(t)=\int_{\widehat{G}\times G/H}\sigma(\omega,zH)D_{H}^{g}(uf)(\omega,zH)g_{\omega,zH}(t)v(t)dm_{\widehat{G}}(\omega)dm_{G/H}(zH),$$ where $g_{\omega,zH}(x)=g(z^{-1}xH)\omega(x)$.\\ In the weak sense, for $f\in L^p(G)$, $1\leq p\leq\infty$, and $h\in L^{p^{\prime}}(G)$, $$\langle P_{u,v,g}(\sigma)(f),h\rangle=\int_{\widehat{G}}\int_{G/H}\sigma(\omega,zH)D_{H}^{g}(uf)(\omega,zH)\overline{D_{H}^g(vh)(\omega,zH)}dm_{\widehat{G}}(\omega)dm_{G/H}(zH).$$ \end{defn} \begin{prop}\label{adjoint} Let $f$ be in $L^p(G)$ and $h$ in $L^{p^{\prime}}(G)$, where $1\leq p<\infty,\ \frac{1}{p}+\frac{1}{p^{\prime}}=1$.
Then $$P^{\ast}_{u,v,g}(\sigma)=P_{v,u,g}(\overline{\sigma}).$$ \end{prop} \begin{proof} For $f$ in $L^p(G)$ and $h$ in $L^{p^{\prime}}(G)$, \begin{align*} & \langle P_{u,v,g}(\sigma)(f),h\rangle_{L^2(G)}\\ & =\int_{\widehat{G}}\int_{G/H}\sigma(\omega,zH)D_{H}^{g}(uf)(\omega,zH)\overline{D_{H}^g(vh)(\omega,zH)}dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\ & = \overline{\int_{\widehat{G}}\int_{G/H}\overline{\sigma(\omega,zH)D_{H}^{g}(uf)(\omega,zH)}D_{H}^g(vh)(\omega,zH)dm_{\widehat{G}}(\omega)dm_{G/H}(zH)}\\ & = \overline{\langle P_{v,u,g}(\overline{\sigma})(h), f\rangle}\\ & = \langle f, P_{v,u,g}(\overline{\sigma})(h)\rangle. \end{align*} Hence we get $$P^{\ast}_{u,v,g}(\sigma)(h)=P_{v,u,g}(\overline{\sigma})(h),$$ which completes the proof. \end{proof} \begin{prop} Let $\sigma\in L^1(\widehat{G}\times G/H)\cup L^{\infty}(\widehat{G}\times G/H)$ and $u,v\in L^2(G)\cap L^{\infty}(G)$. Then $$\langle P_{u,v,g}(\sigma)(f),h\rangle_{L^2(G)}=\|g\|_{L^2(G/H)}^2\langle \overline{v}M_{\sigma,g}(uf),h\rangle_{L^2(G)}.$$ \end{prop} \begin{proof} In view of Definition~\ref{generalizedmultiplier} and Theorem~\ref{orthothm} we conclude \begin{equation} \begin{split} \langle P_{u,v,g}(\sigma)(f),h\rangle & =\int_{\widehat{G}}\int_{G/H}\sigma(\omega,zH)D_{H}^{g}(uf)(\omega,zH)\overline{D_{H}^g(vh)(\omega,zH)}dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\ & = \int_{\widehat{G}}\int_{G/H}D_{H}^g(M_{\sigma,g}(uf))(\omega,zH)\overline{D_{H}^g(vh)(\omega,zH)}dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\ & = \|g\|_{L^2(G/H)}^2\int_{G}M_{\sigma,g}(uf)(t)\overline{(vh)(t)}dt\\ & = \|g\|_{L^2(G/H)}^2\langle \overline{v}M_{\sigma,g}(uf),h\rangle_{L^2(G)}. \end{split} \end{equation}\end{proof} \section{$L^2$-boundedness of the generalized two-wavelet multiplier}\label{bound} In this section we show that the operators $$P_{u,v,g}(\sigma):L^2(G)\rightarrow L^2(G)$$ are bounded linear operators for all symbols $\sigma$ in $L^p(\widehat{G}\times G/H)$, $1\leq p\leq \infty$.
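The kind of norm estimate proved in this section can be checked numerically in the simplest discrete setting, taking $G=\mathbb{Z}_N$ with the trivial subgroup $H=\lbrace 0\rbrace$ (so that $G/H=G$ and $\mathcal{D}_H^g$ reduces to a discrete STFT) and counting measures throughout. This is an illustrative sketch under our own naming, not part of the general theory:

```python
import numpy as np

# G = Z_N, H = {0} (so G/H = G), all measures counting measures.
rng = np.random.default_rng(2)
N = 8
chars = np.exp(2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N)  # chars[w, x] = w(x)

def D(f, g):
    """D^g f(w, z) = sum_x f(x) conj(w(x) g(x - z)) on Z_N."""
    return np.array([[np.sum(f * np.conj(chars[w] * np.roll(g, z))) for z in range(N)]
                     for w in range(N)])

g = rng.standard_normal(N)
u = rng.standard_normal(N); u /= np.linalg.norm(u)   # ||u||_2 = 1
v = rng.standard_normal(N); v /= np.linalg.norm(v)   # ||v||_2 = 1
sigma = rng.standard_normal((N, N))

# Matrix of P_{u,v,g}(sigma) from the weak-sense definition:
# <P f, h> = sum_{w,z} sigma * D(u f) * conj(D(v h))
P = np.empty((N, N), dtype=complex)
for j in range(N):
    e_j = np.zeros(N); e_j[j] = 1.0
    Duf = D(u * e_j, g)
    for i in range(N):
        e_i = np.zeros(N); e_i[i] = 1.0
        P[i, j] = np.sum(sigma * Duf * np.conj(D(v * e_i, g)))

bound = np.abs(sigma).sum() * np.max(np.abs(g)) ** 2  # ||sigma||_1 * ||g||_inf^2
print(np.linalg.norm(P, 2) <= bound + 1e-9)           # operator norm obeys the bound
```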
Let us assume that $u$ and $v$ are in $L^2(G)\cap L^{\infty}(G)$ with $$\|u\|_{L^2(G)}=\|v\|_{L^2(G)}=1.$$ \begin{prop}\label{S infty1} Let $\sigma$ be in $L^1(\widehat{G}\times G/H)$. Then the generalized two-wavelet multiplier $P_{u,v,g}(\sigma)$ is in $S_{\infty}$. \end{prop} \begin{proof} For all functions $f$ and $h$ in $L^2(G)$, it follows from Definition \ref{def}, Equation \eqref{Dg} and the Cauchy--Schwarz inequality that \begin{equation} \begin{split} & |\langle P_{u,v,g}(\sigma)(f),h\rangle_{L^2(G)}| \\ & \leq\int_{\widehat{G}\times G/H}|\sigma(w,zH)D_{H}^g(uf)(w,zH)D_{H}^g(vh)(w,zH)|dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\ & \leq \|D_H^g(uf)\|_{L^{\infty}(\widehat{G}\times G/H)}\|D_H^g(vh)\|_{{L^{\infty}(\widehat{G}\times G/H)}} \int_{\widehat{G}\times G/H}|\sigma(w,zH)|dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\ & \leq \|u\|_{L^2(G)}\|f\|_{L^2(G)}\|g\|_{L^{\infty}(G/H)}^2\|v\|_{L^2(G)}\|h\|_{L^2(G)}\|\sigma\|_{L^1(\widehat{G}\times G/H)}. \end{split} \end{equation} Hence $$\|P_{u,v,g}(\sigma)\|_{S_{\infty}}\leq \|\sigma\|_{L^1(\widehat{G}\times G/H)}\|g\|_{L^{\infty}(G/H)}^2.$$ \end{proof} \begin{prop}\label{S infty} Let $\sigma$ be in $L^{\infty}(\widehat{G}\times G/H)$. Then the generalized two-wavelet multiplier operator $P_{u,v,g}(\sigma)$ is in $S_{\infty}$. \end{prop} \begin{proof} Here $P_{u,v,g}(\sigma):L^2(G)\rightarrow L^2(G)$.
For all functions $f$ and $h$ in $L^2(G)$, we have the following from Definition \ref{def}, the Cauchy--Schwarz inequality and the Plancherel formula \ref{orthogonalitynorm}: \begin{equation} \begin{split} & |\langle P_{u,v,g}(\sigma)(f),h\rangle_{L^2(G)}| \\ & \leq\int_{\widehat{G}\times G/H}|\sigma(\omega,zH)D_{H}^g(uf)(\omega,zH)D_{H}^g(vh)(\omega,zH)|dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\ & \leq \|D_H^g(uf)\|_{L^{2}(\widehat{G}\times G/H)}\|D_H^g(vh)\|_{{L^{2}(\widehat{G}\times G/H)}}\|\sigma\|_{L^{\infty}(\widehat{G}\times G/H)}\\ & \leq \|u\|_{L^{\infty}(G)}\|f\|_{L^2(G)}\|g\|_{L^2(G/H)}^2\|v\|_{L^{\infty}(G)}\|h\|_{L^2(G)}\|\sigma\|_{L^{\infty}(\widehat{G}\times G/H)}. \end{split} \end{equation}\end{proof} We can now extend the generalized two-wavelet multiplier $$P_{u,v,g}(\sigma):L^2(G)\rightarrow L^2(G)$$ to all symbols $\sigma$ in $L^p(\widehat{G}\times G/H)$, $1\leq p\leq\infty$, as an operator in $S_{\infty}$. This is the content of the following theorem. \begin{theorem}\label{Lpbounded} Let $\sigma\in L^p(\widehat{G}\times G/H)$, $1\leq p\leq\infty$. Then there exists a unique bounded linear operator $P_{u,v,g}(\sigma):L^2(G)\rightarrow L^2(G)$ such that
$$\|P_{u,v,g}(\sigma)\|_{S_{\infty}}\leq \|g\|_{L^{\infty}(G/H)}^{2/p}\|g\|_{L^2(G/H)}^{\frac{2(p-1)}{p}}(\|u\|_{L^{\infty}(G)}\|v\|_{L^{\infty}(G)})^{\frac{p-1}{p}}\|\sigma\|_{L^p(\widehat{G}\times G/H)}.$$ \end{theorem} \begin{proof} Let $f\in L^2(G)$ and define $T:L^1(\widehat{G}\times G/H)\cap L^{\infty}(\widehat{G}\times G/H)\rightarrow L^2(G)$ by $$T(\sigma):= P_{u,v,g}(\sigma)f.$$ From Proposition \ref{S infty1} we have $$\|P_{u,v,g}(\sigma)f\|_{L^2(G)}\leq \|f\|_{L^2(G)}\|g\|_{L^{\infty}(G/H)}^2\|\sigma\|_{L^1(\widehat{G}\times G/H)}$$ and from Proposition \ref{S infty} $$\|P_{u,v,g}(\sigma)f\|_{L^2(G)}\leq \|f\|_{L^2(G)}\|g\|_{L^{2}(G/H)}^2\|\sigma\|_{L^{\infty}(\widehat{G}\times G/H)}\|u\|_{L^{\infty}(G)}\|v\|_{L^{\infty}(G)}.$$ By the Riesz--Thorin interpolation theorem, $T$ extends uniquely to $L^p(\widehat{G}\times G/H)$, $1\leq p\leq\infty$, with $$\|T(\sigma)\|_{L^2(G)}=\|P_{u,v,g}(\sigma)f\|_{L^2(G)}.$$ So $$\|P_{u,v,g}(\sigma)f\|_{L^2(G)}\leq \|g\|_{L^{\infty}(G/H)}^{2/p}\|g\|_{L^2(G/H)}^{\frac{2(p-1)}{p}}(\|u\|_{L^{\infty}(G)}\|v\|_{L^{\infty}(G)})^{\frac{p-1}{p}}\|f\|_{L^2(G)}\|\sigma\|_{L^p(\widehat{G}\times G/H)}.$$ \end{proof} \section{Schatten class boundedness of the generalized two-wavelet multiplier}\label{SchttenN} Our aim is to show that the linear operators $$P_{u,v,g}(\sigma):L^2(G)\rightarrow L^2(G)$$ are in the Schatten class $S_p$ for every symbol $\sigma$ in $L^p(\widehat{G}\times G/H)$, $1\leq p\leq \infty$. \begin{prop} Let $\sigma$ be in $L^1(\widehat{G}\times G/H)$. Then the generalized two-wavelet multiplier $P_{u,v,g}(\sigma)$ is in $S_2$ and we have $$\|P_{u,v,g}(\sigma)\|_{S_2}\leq \|g\|_{L^{\infty}(G/H)}^2\|\sigma\|_{L^1(\widehat{G}\times G/H)}.$$ \end{prop} \begin{proof} Let $\{ \phi_{j}:j=1,2,\cdots\}$ be an orthonormal basis for $L^2(G)$.
$$\sum_{j=1}^{\infty}\|P_{u,v,g}(\sigma)(\phi_{j})\|_{L^2(G)}^{2}=\sum_{j=1}^{\infty}\langle P_{u,v,g}(\sigma)\phi_{j}, P_{u,v,g}(\sigma)\phi_{j}\rangle .$$ Now using Definition \ref{def}, Fubini's theorem, Parseval's identity and Proposition \ref{adjoint}, we have \begin{equation*} \begin{split} &\langle P_{u,v,g}(\sigma)\phi_{j}, P_{u,v,g}(\sigma)\phi_{j}\rangle\\ & =\int_{\widehat{G}\times G/H}\sigma(\omega,zH)D_H^g(u\phi _j)(\omega,zH)\overline{D_H^g(vP_{u,v,g}(\sigma)\phi _{j})(\omega,zH)} dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\ & = \int_{\widehat{G}\times G/H}\sigma(\omega,zH)\langle u\phi _j,\overline{g_{\omega,zH}}\rangle_{L^2(G)}\overline{\langle vP_{u,v,g}(\sigma)(\phi _j), \overline{g_{\omega,zH}}\rangle_{L^2(G)}}dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\ & = \int_{\widehat{G}\times G/H}\sigma(\omega,zH)\langle\phi_j, \overline{ug_{\omega,zH}}\rangle\overline{\langle P_{u,v,g}(\sigma)(\phi_j), \overline{vg_{\omega,zH}}\rangle}dm_{\widehat{G}}(\omega)dm_{G/H}(zH). \end{split} \end{equation*} So \begin{align*} &\sum_{j=1}^{\infty}\|P_{u,v,g}(\sigma)(\phi_{j})\|_{L^2(G)}^{2}\\ &= \sum_{j=1}^{\infty}\int_{\widehat{G}\times G/H}\sigma(\omega,zH)\langle\phi_j, \overline{ug_{\omega,zH}}\rangle\overline{\langle P_{u,v,g}(\sigma)(\phi_j), \overline{vg_{\omega,zH}}\rangle}dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\ & = \int_{\widehat{G}\times G/H}\sigma(\omega,zH)\sum_{j=1}^{\infty}\langle\phi_j, \overline{ug_{\omega,zH}}\rangle\langle P_{u,v,g}^{\ast}(\sigma)(\overline{vg_{\omega,zH}}),\phi_j \rangle dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\ & = \int_{\widehat{G}\times G/H}\sigma(\omega,zH)\langle P_{u,v,g}^{\ast}(\sigma)(\overline{vg_{\omega,zH}}),\sum_{j=1}^{\infty}\phi_{j}\langle \phi_{j},\overline{ug_{\omega,zH}}\rangle\rangle dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\ & = \int_{\widehat{G}\times G/H}\sigma(\omega,zH)\langle P_{u,v,g}^{\ast}(\sigma)(\overline{vg_{\omega,zH}}),\overline{ug_{\omega,zH}}\rangle_{L^2(G)} dm_{\widehat{G}}(\omega)dm_{G/H}(zH).
\end{align*} Thus \begin{equation*} \begin{split} &\sum_{j=1}^{\infty}\|P_{u,v,g}(\sigma)(\phi_{j})\|_{L^2(G)}^{2}\\ & \leq \|g\|_{L^{\infty}(G/H)}^2\int_{\widehat{G}\times G/H}|\sigma(\omega,zH)|\|P_{u,v,g}^{\ast}(\sigma)\|_{S_{\infty}}dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\ & \leq \|g\|_{L^{\infty}(G/H)}^4\|\sigma\|_{L^1(\widehat{G}\times G/H)}^2, \end{split} \end{equation*} and hence $\|P_{u,v,g}(\sigma)\|_{S_2}\leq \|g\|_{L^{\infty}(G/H)}^2\|\sigma\|_{L^1(\widehat{G}\times G/H)}$. \end{proof} \begin{prop} Let $\sigma$ be a symbol in $L^p(\widehat{G}\times G/H)$, $1\leq p <\infty$. Then the generalized two-wavelet multiplier $P_{u,v,g}(\sigma)$ is compact. \end{prop} \begin{proof} We know that $L^1(\widehat{G}\times G/H)\cap L^{\infty}(\widehat{G}\times G/H)$ is dense in $L^p(\widehat{G}\times G/H)$, $1\leq p <\infty$. Let $\sigma\in L^p(\widehat{G}\times G/H)$; then there exists $\{\sigma_n\}_{n\in\mathbb{N}}\subset L^1(\widehat{G}\times G/H)\cap L^{\infty}(\widehat{G}\times G/H)$ such that $\sigma _{n}\rightarrow\sigma$ in $L^{p}(\widehat{G}\times G/H)$ as $n\rightarrow\infty$.\\ Now by Theorem \ref{Lpbounded} \begin{align*} &\|P_{u,v,g}(\sigma_n)-P_{u,v,g}(\sigma)\|_{B(L^2(G))} =\|P_{u,v,g}(\sigma_n-\sigma)\|_{B(L^2(G))}\\ & \leq \|g\|_{L^{\infty}(G/H)}^{2/p}\|g\|_{L^2(G/H)}^{\frac{2(p-1)}{p}}(\|u\|_{L^{\infty}(G)}\|v\|_{L^{\infty}(G)})^{\frac{p-1}{p}}\|\sigma_{n}-\sigma\|_{L^p(\widehat{G}\times G/H)}. \end{align*} So $P_{u,v,g}(\sigma_n)\rightarrow P_{u,v,g}(\sigma)$ in $B(L^2(G))$. Hence $P_{u,v,g}(\sigma)$ is compact. \end{proof} \begin{theorem}\label{S1} Let $\sigma$ be in $L^1(\widehat{G}\times G/H)$. Then $P_{u,v,g}(\sigma):L^2(G)\rightarrow L^2(G)$ is in $S_1$. \end{theorem} \begin{proof} Since $\sigma\in L^1(\widehat{G}\times G/H)$, the operator $P_{u,v,g}(\sigma)$ is in $S_2$.
So there exists an orthonormal basis $\{\phi_j:j=1,2,3,\cdots\}$ for the orthogonal complement of the kernel of $P_{u,v,g}(\sigma)$ consisting of eigenvectors of $|P_{u,v,g}(\sigma)|$, and a subset $\{\psi_j:j=1,2,3,\cdots\}$ of $L^2(G)$, such that $$P_{u,v,g}(\sigma)(f)=\sum_{j=1}^{\infty}s_j\langle f,\phi_j\rangle_{L^2(G)}\psi_{j},$$ where $s_j$, $j=1,2,3,\cdots$, are the positive singular values of $P_{u,v,g}(\sigma)$ corresponding to $\phi_j$.\\ Then $$\|P_{u,v,g}(\sigma)\|_{S_1}=\sum_{j=1}^{\infty}s_j=\sum_{j=1}^{\infty}\langle P_{u,v,g}(\sigma)(\phi_j),\psi_j\rangle_{L^2(G)}.$$ Now by Bessel's inequality, the Cauchy--Schwarz inequality, Fubini's theorem and the fact that $\|u\|_{L^2(G)}=\|v\|_{L^2(G)}=1$, we get \begin{align*} & \sum_{j=1}^{\infty}\langle P_{u,v,g}(\sigma)(\phi_j),\psi_j\rangle_{L^2(G)}\\ & =\sum_{j=1}^{\infty}\int_{\widehat{G}\times G/H}\sigma(\omega,zH)D_{H}^g(u\phi_j)(\omega,zH)\overline{D_H^g(v\psi_j)(\omega,zH)} dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\ & = \int_{\widehat{G}\times G/H}\sigma(\omega,zH)\sum_{j=1}^{\infty}\langle\phi_j,\overline{ug_{\omega,zH}}\rangle_{L^2(G)}\langle\overline{vg_{\omega,zH}},\psi_{j}\rangle_{L^2(G)}dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\ & \leq \int_{\widehat{G}\times G/H}|\sigma(\omega,zH)|\Big(\sum_{j=1}^{\infty}|\langle\phi_j,\overline{ug_{\omega,zH}}\rangle_{L^2(G)}|^2\Big)^{1/2}\\ & \Big(\sum_{j=1}^{\infty}|\langle\overline{vg_{\omega,zH}},\psi_{j}\rangle_{L^2(G)}|^2\Big)^{1/2}dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\ & \leq \int_{\widehat{G}\times G/H}|\sigma(\omega,zH)|\|ug_{\omega,zH}\|_{L^2(G)}\|vg_{\omega,zH}\|_{L^2(G)}dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\ & \leq \|\sigma\|_{L^1(\widehat{G}\times G/H)}\|g\|_{L^2(G/H)}^2.
\end{align*} \end{proof} \begin{cor} For $\sigma\in L^1(\widehat{G}\times G/H)$, we have the following trace formula: $$\text{tr}(P_{u,v,g}(\sigma))=\int_{\widehat{G}\times G/H}\sigma(\omega,zH)\langle vg_{\omega,zH},ug_{\omega,zH}\rangle_{L^2(G)}dm_{\widehat{G}}(\omega)dm_{G/H}(zH).$$ \end{cor} \begin{proof} Let $\{ \phi_k:k=1,2,\cdots\}$ be an orthonormal basis for $L^2(G)$. By Theorem \ref{S1}, the generalized two-wavelet multiplier $P_{u,v,g}(\sigma)$ is in $S_1$.\\ Now, by the definition of the trace, Fubini's theorem and Parseval's identity, we have \begin{align*} & \text{tr}(P_{u,v,g}(\sigma))\\ & = \sum_{k=1}^{\infty}\langle P_{u,v,g}(\sigma)(\phi_k),\phi_k\rangle_{L^2(G)}\\ & =\sum_{k=1}^{\infty}\int_{\widehat{G}\times G/H}\sigma(\omega,zH)\langle \phi_{k},\overline{ug_{\omega,zH}}\rangle_{L^2(G)}\overline{\langle \phi_{k},\overline{vg_{\omega,zH}}\rangle_{L^2(G)}}dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\ & =\int_{\widehat{G}\times G/H}\sigma(\omega,zH)\sum_{k=1}^{\infty}\langle \phi_{k},\overline{ug_{\omega,zH}}\rangle_{L^2(G)}\overline{\langle \phi_{k},\overline{vg_{\omega,zH}}\rangle_{L^2(G)}}dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\ & = \int_{\widehat{G}\times G/H}\sigma(\omega,zH)\langle\overline{ug_{\omega,zH}},\overline{vg_{\omega,zH}}\rangle_{L^2(G)}dm_{\widehat{G}}(\omega)dm_{G/H}(zH). \end{align*} Since $\langle\overline{ug_{\omega,zH}},\overline{vg_{\omega,zH}}\rangle_{L^2(G)}=\langle vg_{\omega,zH},ug_{\omega,zH}\rangle_{L^2(G)}$, the asserted formula follows. \end{proof} We now show that $$P_{u,v,g}(\sigma):L^2(G)\rightarrow L^2(G)$$ is in $S_{p}$ for $\sigma\in L^p(\widehat{G}\times G/H)$, $1\leq p\leq \infty$. \begin{cor} Let $\sigma\in L^p(\widehat{G}\times G/H)$, $1\leq p\leq\infty$.
Then the generalized two-wavelet multiplier $P_{u,v,g}(\sigma):L^2(G)\rightarrow L^2(G)$ is in $S_p$ and we have $$\|P_{u,v,g}(\sigma)\|_{S_p}\leq\|g\|_{L^{\infty}(G/H)}^{2/p}\|g\|_{L^{2}(G/H)}^{\frac{2(p-1)}{p}}(\|u\|_{L^{\infty}(G)}\|v\|_{L^{\infty}(G)})^{\frac{p-1}{p}}\|\sigma\|_{L^p(\widehat{G}\times G/H)}.$$ \end{cor} \begin{proof} The proof follows from Theorem \ref{S1}, Proposition \ref{S infty} and the interpolation theorem in \cite{wong}.\end{proof} \section{$L^p-$boundedness of the generalized two-wavelet multiplier for $1\leq p\leq\infty$}\label{lpbdd} Our aim is to show that the linear operators $$P_{u,v,g}(\sigma):L^p(G)\rightarrow L^p(G)$$ are bounded for every symbol $\sigma$ in $L^r(\widehat{G}\times G/H)$, $1< r\leq \infty$, for all $p\in [\frac{2r}{r+1},\frac{2r}{r-1}]$, and for $r=1$, for all $p\in [1,\infty]$. Let us assume that $0\neq g\in L^{\infty}(G/H)$ throughout this section. \begin{prop}\label{L1} Let $\sigma$ be in $L^1(\widehat{G}\times G/H)$, $u\in L^{\infty}(G)$ and $v\in L^1(G)$. Then the generalized two-wavelet multiplier $$P_{u,v,g}(\sigma):L^1(G)\rightarrow L^1(G)$$ is a bounded operator and we have $$\|P_{u,v,g}(\sigma)\|_{B(L^1(G))}\leq \|u\|_{L^{\infty}(G)}\|v\|_{L^1(G)}\|g\|_{L^{\infty}(G/H)}^2\|\sigma\|_{L^1(\widehat{G}\times G/H)}.$$ \end{prop} \begin{proof} We know that $$P_{u,v,g}(\sigma)(f)(t)=\int_{\widehat{G}\times G/H}\sigma(\omega,zH)D_{H}^g(uf)(\omega,zH)g_{\omega,zH}(t)\overline{v(t)}dm_{\widehat{G}}(\omega)dm_{G/H}(zH).$$ Now by Equations \eqref{Dg} and \eqref{Dgb} \begin{align*} & \|P_{u,v,g}(\sigma)(f)\|_{L^1(G)}\\ & \leq \int_{\widehat{G}\times G/H}\int_{G}|\sigma(\omega,zH)||D_{H}^g(uf)(\omega,zH)||g_{\omega,zH}(t)v(t)|dm_{\widehat{G}}(\omega)dm_{G/H}(zH)dt\\ & \leq \int_{\widehat{G}\times G/H}\int_{G}|\sigma(\omega,zH)||\langle uf,g_{\omega,zH}\rangle_{L^2(G)}|\|g\|_{L^{\infty}(G/H)}|v(t)|dm_{\widehat{G}}(\omega)dm_{G/H}(zH)dt\\ & \leq \|g\|_{L^{\infty}(G/H)}^2 \|f\|_{L^1(G)}\|u\|_{L^{\infty}(G)}\int_{\widehat{G}\times G/H}|\sigma(\omega,zH)|dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\int_{G}|v(t)|dt\\ & = \|g\|_{L^{\infty}(G/H)}^2 \|f\|_{L^1(G)}\|u\|_{L^{\infty}(G)}\|\sigma\|_{L^1(\widehat{G}\times G/H)}\|v\|_{L^1(G)}. \end{align*} \end{proof} \begin{prop}\label{L infty} Let $\sigma$ be in $L^1(\widehat{G}\times G/H)$, $u\in L^1(G)$ and $v\in L^{\infty}(G)$. Then the generalized two-wavelet multiplier $$P_{u,v,g}(\sigma):L^{\infty}(G)\rightarrow L^{\infty}(G)$$ is a bounded operator and we have $$\|P_{u,v,g}(\sigma)\|_{B(L^{\infty}(G))}\leq \|u\|_{L^{1}(G)}\|v\|_{L^{\infty}(G)}\|g\|_{L^{2}(G/H)}^2\|\sigma\|_{L^1(\widehat{G}\times G/H)}.$$ \end{prop} \begin{proof} Let $f$ be in $L^{\infty}(G)$. Then by Equations \eqref{Dgb} and \eqref{Dg} we have \begin{align*} & |P_{u,v,g}(\sigma)(f)(t)| \\ & \leq\int_{\widehat{G}\times G/H}|\sigma(\omega,zH)||D_{H}^g(uf)(\omega,zH)||g_{\omega,zH}(t)||\overline{v(t)}|dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\ & \leq \|u\|_{L^{1}(G)}\|v\|_{L^{\infty}(G)}\|g\|_{L^{2}(G/H)}^2\|\sigma\|_{L^1(\widehat{G}\times G/H)}\|f\|_{L^{\infty}(G)}. \end{align*} Taking the supremum over $\|f\|_{L^{\infty}(G)}\leq 1$, we obtain $$\|P_{u,v,g}(\sigma)\|_{B(L^{\infty}(G))}\leq \|u\|_{L^{1}(G)}\|v\|_{L^{\infty}(G)}\|g\|_{L^{2}(G/H)}^2\|\sigma\|_{L^1(\widehat{G}\times G/H)}.$$\end{proof} \begin{theorem}\label{th} Let $u, v\in L^1(G)\cap L^{\infty}(G).$ Then for all $\sigma\in L^1(\widehat{G}\times G/H)$ there exists a unique bounded linear operator $$P_{u,v,g}(\sigma):L^p(G)\rightarrow L^p(G),~~~~~ 1\leq p\leq\infty,$$ such that
$$\|P_{u,v,g}(\sigma)\|_{B(L^p(G))}\leq \|u\|_{L^1(G)}^{1/p^{\prime}}\|v\|_{L^1(G)}^{1/p}\|u\|_{L^{\infty}(G)}^{1/p}\|v\|_{L^{\infty}(G)}^{1/p^{\prime}}\|g\|_{L^{\infty}(G/H)}\|\sigma\|_{L^1(\widehat{G}\times G/H)}.$$ \end{theorem} \begin{proof} We know that $$P_{u,v,g}(\sigma):L^{1}(G)\rightarrow L^1(G)$$ is the adjoint of $$P_{v,u,g}(\overline{\sigma}):L^{\infty}(G)\rightarrow L^{\infty}(G).$$ Now by Proposition \ref{L1}, Proposition \ref{L infty} and the interpolation theorem, for $1\leq p\leq \infty$, $$\|P_{u,v,g}(\sigma)\|_{B(L^p(G))}\leq \|u\|_{L^1(G)}^{1/p^{\prime}}\|v\|_{L^1(G)}^{1/p}\|u\|_{L^{\infty}(G)}^{1/p}\|v\|_{L^{\infty}(G)}^{1/p^{\prime}}\|g\|_{L^{\infty}(G/H)}\|\sigma\|_{L^1(\widehat{G}\times G/H)}.$$ \end{proof} We now give another version of the $L^p$-boundedness result of Theorem \ref{th}. \begin{theorem} Let $\sigma$ be in $L^1(\widehat{G}\times G/H)$, and let $u$ and $v$ be in $L^1(G)\cap L^{\infty}(G)$. Then there exists a unique bounded linear operator $P_{u,v,g}(\sigma):L^p(G)\rightarrow L^p(G)$, $1\leq p\leq \infty$, such that
$$\|P_{u,v,g}(\sigma)\|_{B(L^p(G))}\leq\max\Big(\|u\|_{L^1(G)}\|v\|_{L^{\infty}(G)},\|u\|_{L^{\infty}(G)}\|v\|_{L^1(G)}\Big)\|g\|_{L^{\infty}(G/H)}^2\|\sigma\|_{L^1(\widehat{G}\times G/H)}.$$ \end{theorem} \begin{proof} From Definition \ref{def}, \begin{align*} & P_{u,v,g}(\sigma)(f)(t)\\ &=\int_{\widehat{G}\times G/H}\sigma(\omega,zH)D_{H}^g(uf)(\omega,zH)g_{\omega,zH}(t)\overline{v(t)}dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\ & = \int_{\widehat{G}\times G/H}\sigma(\omega,zH)\int_G u(s)f(s)\overline{g_{\omega,zH}(s)}ds\, g_{\omega,zH}(t)\overline{v(t)}dm_{\widehat{G}}(\omega)dm_{G/H}(zH). \end{align*} Thus $P_{u,v,g}(\sigma)$ is the integral operator $$P_{u,v,g}(\sigma)(f)(t)=\int_{G}N(t;s)f(s)ds,$$ with kernel $$N(t;s)=\int_{\widehat{G}\times G/H}\sigma(\omega,zH) u(s)\overline{g_{\omega,zH}(s)}g_{\omega,zH}(t)\overline{v(t)}dm_{\widehat{G}}(\omega)dm_{G/H}(zH).$$ Now \begin{equation} \begin{split} \int_{G}|N(t;s)|dt & \leq \int_{G}\int_{\widehat{G}\times G/H}|\sigma(\omega,zH)| |u(s)||\overline{g_{\omega,zH}(s)}||g_{\omega,zH}(t)||\overline{v(t)}|dm_{\widehat{G}}(\omega)dm_{G/H}(zH)dt\\ & \leq \|u\|_{L^{\infty}(G)}\|g\|_{L^{\infty}(G/H)}\|g\|_{L^{\infty}(G/H)}\|v\|_{L^{1}(G)}\|\sigma\|_{L^{1}(\widehat{G}\times G/H)}.
\end{split} \end{equation} Similarly $$\int_{G}|N(t;s)|ds\leq \|u\|_{L^{1}(G)}\|g\|_{L^{\infty}(G/H)}\|g\|_{L^{\infty}(G/H)}\|v\|_{L^{\infty}(G)}\|\sigma\|_{L^{1}(\widehat{G}\times G/H)}.$$ Thus, by Schur's test \cite{Fo}, we conclude that $P_{u,v,g}(\sigma):L^p(G)\rightarrow L^p(G)$ is bounded and $$\|P_{u,v,g}(\sigma)\|_{B(L^p(G))}\leq\max\Big(\|u\|_{L^1(G)}\|v\|_{L^{\infty}(G)},\|u\|_{L^{\infty}(G)}\|v\|_{L^1(G)}\Big)\|g\|_{L^{\infty}(G/H)}^2\|\sigma\|_{L^1(\widehat{G}\times G/H)}.$$ \end{proof} \begin{prop}\label{p>1} Let $\sigma$ be in $L^1(\widehat{G}\times G/H)$, $v$ in $L^p(G)$ and $u$ in $L^{p^{\prime}}(G)$ for $1< p\leq\infty$. Then the generalized two-wavelet multiplier $P_{u,v,g}(\sigma): L^p(G)\rightarrow L^p(G)$ is a bounded linear operator and we have $$\|P_{u,v,g}(\sigma)\|_{B(L^p(G))}\leq \|u\|_{L^{p^{\prime}}(G)}\|v\|_{L^p(G)}\|g\|_{L^{\infty}(G/H)}^2\|\sigma\|_{L^1(\widehat{G}\times G/H)}.$$ \end{prop} \begin{proof} For any $f\in L^p(G)$, consider the linear functional $I_{f}:L^{p^{\prime}}(G)\rightarrow\mathbb{C}$ defined by $$I_{f}(h)=\langle h,P_{u,v,g}(\sigma)(f)\rangle_{L^2(G)}.$$ Now from Definition \ref{def}, Equation \eqref{Dg} and H\"older's inequality, we have \begin{align*} &|\langle P_{u,v,g}(\sigma)(f),h\rangle_{L^2(G)}|\\ & \leq \int_{\widehat{G}\times G/H}|\sigma(\omega,zH)||D_{H}^g(uf)(\omega,zH)||D_{H}^g(vh)(\omega,zH)|dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\ & = \int_{\widehat{G}\times G/H}|\sigma(\omega,zH)||\langle uf, g_{\omega,zH}\rangle_{L^2(G)}||\langle vh,g_{\omega,zH}\rangle_{L^2(G)}|dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\ & \leq \int_{\widehat{G}\times G/H}|\sigma(\omega,zH)||\langle u, \overline{f}g_{\omega,zH}\rangle_{L^2(G)}||\langle h,\overline{v}g_{\omega,zH}\rangle_{L^2(G)}|dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\ & \leq \int_{\widehat{G}\times G/H}|\sigma(\omega,zH)|\|u\|_{L^{p^{\prime}}(G)}\|v\|_{L^p(G)}\|g\|_{L^{\infty}(G/H)}^2\|f\|_{L^p(G)}\|h\|_{L^{p^{\prime}}(G)}dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\ & \leq \|\sigma\|_{L^1(\widehat{G}\times G/H)}\|u\|_{L^{p^{\prime}}(G)}\|v\|_{L^p(G)}\|g\|_{L^{\infty}(G/H)}^2\|f\|_{L^p(G)}\|h\|_{L^{p^{\prime}}(G)}. \end{align*} By the Riesz representation theorem, $$\|P_{u,v,g}(\sigma)(f)\|_{L^p(G)}=\|I_{f}\|_{B({L^{p^{\prime}}(G)})}.$$ Since $$|I_{f}(h)|\leq \|\sigma\|_{L^1(\widehat{G}\times G/H)}\|u\|_{L^{p^{\prime}}(G)}\|v\|_{L^p(G)}\|g\|_{L^{\infty}(G/H)}^2\|f\|_{L^p(G)}\|h\|_{L^{p^{\prime}}(G)},$$ we get $$\|I_{f}\|_{B({L^{p^{\prime}}(G)})}\leq \|\sigma\|_{L^1(\widehat{G}\times G/H)}\|u\|_{L^{p^{\prime}}(G)}\|v\|_{L^p(G)}\|g\|_{L^{\infty}(G/H)}^2\|f\|_{L^p(G)},$$ and hence $$\|P_{u,v,g}(\sigma)\|_{B({L^{p}(G)})}\leq \|\sigma\|_{L^1(\widehat{G}\times G/H)}\|u\|_{L^{p^{\prime}}(G)}\|v\|_{L^p(G)}\|g\|_{L^{\infty}(G/H)}^2.$$ \end{proof} We now extend Proposition \ref{p>1} to include the case $p=1$. \begin{theorem} Let $\sigma$ be in $L^1(\widehat{G}\times G/H)$, $v$ in $L^p(G)$ and $u$ in $L^{p^{\prime}}(G)$ for $1\leq p\leq\infty$. Then the generalized two-wavelet multiplier $P_{u,v,g}(\sigma): L^p(G)\rightarrow L^p(G)$ is a bounded linear operator and we have $$\|P_{u,v,g}(\sigma)\|_{B(L^p(G))}\leq \|u\|_{L^{p^{\prime}}(G)}\|v\|_{L^p(G)}\|g\|_{L^{\infty}(G/H)}^2\|\sigma\|_{L^1(\widehat{G}\times G/H)}.$$ \end{theorem} \begin{proof} This follows from Proposition \ref{p>1} together with Proposition \ref{L1}. \end{proof} Let us consider $0\neq g\in L^1(G/H)\cap L^{\infty}(G/H)\subset L^q(G/H)$, $1<q<\infty$, for the rest of this section.
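For the interpolation arguments in the remainder of this section it may help to record how the interpolation parameters are obtained. The following is a routine computation (no assumptions beyond the convexity relations already stated in the theorems below):

```latex
% The parameters \theta and t solve the standard convexity relations
% used when interpolating between the endpoint estimates:
%   \theta/1 + (1-\theta)/2 = 1/r   (between the L^1 and L^2 bounds),
%   t/r + (1-t)/r'          = 1/p   (between the L^r and L^{r'} bounds).
\begin{align*}
\theta &= \frac{2}{r}-1 = \frac{2-r}{r}, && r\in[1,2],\\
t &= \frac{\tfrac{1}{p}-\tfrac{1}{r^{\prime}}}{\tfrac{1}{r}-\tfrac{1}{r^{\prime}}}, && p\in[r,r^{\prime}].
\end{align*}
% Sanity check: r=1 gives \theta=1 (the pure L^1 endpoint) and r=2 gives
% \theta=0; likewise p=r gives t=1 and p=r' gives t=0, so the endpoint
% bounds are recovered.
```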
\begin{theorem} Let $\sigma$ be in $L^r(\widehat{G}\times G/H)$, $r\in [1,2]$, and $u,v\in L^1(G)\cap L^{\infty}(G).$ Then there exists a unique bounded linear operator $ P_{u,v,g}(\sigma):L^p(G)\rightarrow L^p(G)$ for all $p\in [r,r^{\prime}]$ and we have $$\|P_{u,v,g}(\sigma)\|_{B(L^p(G))}\leq c_1^tc^{1-t}_2\|\sigma\|_{L^r(\widehat{G}\times G/H)}, ~~~~\frac{t}{r}+\frac{1-t}{r^{\prime}}=\frac{1}{p}.$$ \end{theorem} \begin{proof} Consider the bilinear map $$I:(L^1(\widehat{G}\times G/H)\cap L^2(\widehat{G}\times G/H))\times(L^1(G)\cap L^2(G))\rightarrow L^1(G)\cap L^2(G),$$ $$(\sigma, f)\mapsto P_{u,v,g}(\sigma)(f).$$ By Proposition \ref{L1}, for $\sigma\in L^1(\widehat{G}\times G/H)$, $f\in L^1(G)$, $$\|I(\sigma,f)\|_{L^1(G)}=\|P_{u,v,g}(\sigma)(f)\|_{L^1(G)}\leq \|f\|_{L^1(G)}\|u\|_{L^{\infty}(G)}\|v\|_{L^1(G)}\|g\|_{L^{\infty}(G/H)}^2\|\sigma\|_{L^1(\widehat{G}\times G/H)}.$$ By Theorem \ref{Lpbounded} (with $p=2$), for $\sigma\in L^2(\widehat{G}\times G/H)$, $f\in L^2(G)$, \begin{equation*} \begin{split} \|I(\sigma, f)\|_{L^2(G)}& =\|P_{u,v,g}(\sigma)(f)\|_{L^2(G)}\leq \|f\|_{L^2(G)}\Big(\|u\|_{L^{\infty}(G)}\|v\|_{L^{\infty}(G)} \Big)^{1/2}\\ & \|\sigma\|_{L^2(\widehat{G}\times G/H)}\|g\|_{L^{\infty}(G/H)}\|g\|_{L^2(G/H)}.
\end{split} \end{equation*} By multilinear interpolation theory, we get a unique bounded operator $$I:L^r(\widehat{G}\times G/H)\times L^r(G)\rightarrow L^r(G)$$ such that $$\|I(\sigma, f)\|_{L^r(G)}\leq c_1\|f\|_{L^r(G)}\|\sigma\|_{L^r(\widehat{G}\times G/H)},$$ where $$c_1=\Big( \|g\|_{L^{\infty}(G/H)}^2\|u\|_{L^{\infty}(G)}\|v\|_{L^1(G)}\Big)^{\theta}\Big( \|g\|_{L^{2}(G/H)}^2\|g\|_{L^{\infty}(G/H)}\|u\|_{L^{\infty}(G)}\|v\|_{L^{\infty}(G)}\Big)^{\frac{1-\theta}{2}},$$ and $\frac{\theta}{1}+\frac{1-\theta}{2}=\frac{1}{r}.$\\ By the definition of $I$, $$\|P_{u,v,g}(\sigma)\|_{B(L^r(G))}\leq c_1 \|\sigma\|_{L^r(\widehat{G}\times G/H)}.$$ Also $P_{v,u,g}(\overline{\sigma})$ is the adjoint of $P_{u,v,g}(\sigma)$, so $P_{u,v,g}(\sigma)$ is a bounded linear operator on $L^{r^{\prime}}(G)$ with operator norm $$\|P_{u,v,g}(\sigma)\|_{B(L^{r^{\prime}}(G))}=\|P_{v,u,g}(\overline{\sigma})\|_{B(L^{r}(G))}\leq c_2\|\sigma\|_{L^r(\widehat{G}\times G/H)},$$ where $$c_2=\Big( \|g\|_{L^{\infty}(G/H)}^2\|u\|_{L^{1}(G)}\|v\|_{L^{\infty}(G)}\Big)^{\theta}\Big( \|g\|_{L^{2}(G/H)}^2\|g\|_{L^{\infty}(G/H)}\|u\|_{L^{\infty}(G)}\|v\|_{L^{\infty}(G)}\Big)^{\frac{1-\theta}{2}}.$$ By the interpolation theorem, for $p\in [r,r^{\prime}]$, $$\|P_{u,v,g}(\sigma)\|_{B(L^p(G))}\leq c_1^t c_2^{1-t}\|\sigma\|_{L^r(\widehat{G}\times G/H)}.$$ \end{proof} \begin{theorem} Let $\sigma$ be in $L^r(\widehat{G}\times G/H)$, $r\in[1,2)$, and $u,v\in L^r(G)\cap L^{\infty}(G).$ Then there exists a bounded linear operator $P_{u,v,g}(\sigma): L^p(G)\rightarrow L^p(G)$ for all $p\in[r,r^{\prime}]$ and we have \begin{align*} &\|P_{u,v,g}(\sigma)\|_{B(L^p(G))}\\ & \leq \|g\|_{L^{\infty}(G/H)}\|g\|_{L^{r^{\prime}}(G/H)}\Big( \|u\|_{L^r(G)}\|v\|_{L^{\infty}(G)}\Big)^t\Big( \|v\|_{L^r(G)} \|u\|_{L^{\infty}(G)}\Big)^{1-t}\|\sigma\|_{L^r(\widehat{G}\times G/H)}, \end{align*} where $t=\frac{r-p}{p(r-2)}$.
\end{theorem} \begin{proof} For any $f\in L^{r^{\prime}}(G)$ and $h\in L^r(G)$, \begin{align*} & |\langle P_{u,v,g}(\sigma)(f),h\rangle_{L^2(G)}|\\ & \leq \int_{\widehat{G}\times G/H}|\sigma(\omega,zH)||D_{H}^g(uf)(\omega,zH)||D_{H}^g(vh)(\omega,zH)|dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\ & \leq \|D_{H}^g(uf)\|_{L^{\infty}(\widehat{G}\times G/H)}\|D_{H}^g(vh)\|_{L^{r^{\prime}}(\widehat{G}\times G/H)}\|\sigma\|_{L^r(\widehat{G}\times G/H)}. \end{align*} By Equation \eqref{Dg} $$\|D_{H}^g(uf)\|_{L^{\infty}(\widehat{G}\times G/H)}\leq \|g\|_{L^{\infty}(G/H)}\|u\|_{L^r(G)}\|f\|_{L^{r^{\prime}}(G)}.$$ By Equation \eqref{Dgp} $$\|D_{H}^g(vh)\|_{L^{r^{\prime}}(\widehat{G}\times G/H)}\leq \|g\|_{L^{r^{\prime}}(G/H)}\|v\|_{L^{\infty}(G)}\|h\|_{L^{r}(G)}.$$ So \begin{align*} &|\langle P_{u,v,g}(\sigma)(f),h\rangle_{L^2(G)}|\\ &\leq \|\sigma\|_{L^r(\widehat{G}\times G/H)}\|g\|_{L^{\infty}(G/H)}\|u\|_{L^r(G)}\|f\|_{L^{r^{\prime}}(G)}\|g\|_{L^{r^{\prime}}(G/H)}\|v\|_{L^{\infty}(G)}\|h\|_{L^{r}(G)}. \end{align*} Thus $$\|P_{u,v,g}(\sigma)\|_{B(L^{r^{\prime}}(G))}\leq \|\sigma\|_{L^r(\widehat{G}\times G/H)}\|g\|_{L^{\infty}(G/H)}\|u\|_{L^r(G)}\|g\|_{L^{r^{\prime}}(G/H)}\|v\|_{L^{\infty}(G)}. $$ The bound for general $p\in[r,r^{\prime}]$ now follows by interpolating this estimate with the analogous bound on $L^{r}(G)$, which is obtained in the same way, with the roles of $u$ and $v$ interchanged, via the adjoint $P_{v,u,g}(\overline{\sigma})$. \end{proof} \section{Compactness of generalized two-wavelet multipliers}\label{Compt} Our aim is to show that the linear operators $$P_{u,v,g}(\sigma):L^p(G)\rightarrow L^p(G)$$ are compact for every symbol $\sigma$ in $L^1(\widehat{G}\times G/H)$. \begin{prop}\label{pr} Under the same hypothesis as Theorem \ref{th}, the generalized two-wavelet multiplier $P_{u,v,g}(\sigma):L^1(G)\rightarrow L^1(G)$ is compact. \end{prop} \begin{proof} Let $f_{n}\in L^1(G)$ be such that $f_n \rightarrow 0$ weakly in $L^1(G)$. We need to show that $P_{u,v,g}(\sigma)(f_n)\rightarrow 0$ in $L^1(G)$.
\begin{small} $$ \|P_{u,v,g}(\sigma)(f_n)\|_{L^1(G)} \leq \int_{\widehat{G}\times G/H}\int_{G}|\sigma(\omega,zH)||\langle f_n,g_{\omega,zH}u\rangle_{L^2(G)}||g_{\omega,zH}(t)v(t)|\,dt\,dm_{\widehat{G}}(\omega)dm_{G/H}(zH).$$ \end{small} Now $$|\sigma(\omega,zH)||\langle f_n,g_{\omega,zH}u\rangle_{L^2(G)}||g_{\omega,zH}(t)v(t)|\leq C\|g\|_{L^{\infty}(G/H)}^2 |\sigma(\omega,zH)|\|u\|_{L^{\infty}(G)}|v(t)|,$$ where $C=\sup_{n}\|f_n\|_{L^1(G)}<\infty$, since weakly convergent sequences are bounded. By the dominated convergence theorem we conclude that $$\lim_{n\rightarrow\infty}\|P_{u,v,g}(\sigma)f_n\|_{L^1(G)}=0.$$ \end{proof} \begin{theorem} Under the hypothesis of Theorem \ref{th}, the bounded operator $$P_{u,v,g}(\sigma): L^p(G)\rightarrow L^p(G)$$ is compact for $1\leq p\leq\infty$. \end{theorem} \begin{proof} We know that $P_{u,v,g}(\sigma): L^{\infty}(G)\rightarrow L^{\infty}(G)$ is the adjoint of an operator on $L^1(G)$ which is compact by Proposition \ref{pr}, and is therefore itself compact. Hence, by the interpolation theorem for compact operators, compactness on $L^1(G)$ and on $L^{\infty}(G)$ extends to compactness of $P_{u,v,g}(\sigma): L^p(G)\rightarrow L^p(G)$ for $1< p<\infty$. \end{proof} \section{Landau-Pollak-Slepian operator}\label{LPOp} Suppose that $C_i$ ($i=1,2$), $D$ and $\Omega$ are compact neighbourhoods of the identity elements of $G$, $G/H$ and $\widehat{G}$, respectively. Define the operators \begin{align*} Q_R:L^2(G/H\times \widehat{G})\rightarrow L^2(G/H\times \widehat{G})\ \ \textrm{and}\ \ P_{R_i}:L^2(G/H\times \widehat{G})\rightarrow L^2(G/H\times \widehat{G}) \end{align*} by \begin{align} Q_Rg(xH,\omega)=\chi_{D\times \Omega}(xH,\omega)g(xH,\omega) \end{align} and \begin{align} P_{R_i}g(xH,\omega)=\mathcal{D}_H^g\left(\chi_{C_i}(x)(\mathcal{D}_H^g)^{-1}g(xH,\omega)\right). \end{align} Now we present some properties of the above operators in the next proposition. \begin{prop} The operators $Q_R$ and $P_{R_i}$ are self-adjoint projections.
\end{prop} \begin{proof} For $h,\phi\in L^2(G/H\times \widehat{G})$, we have \begin{align*} \langle P_{R_i}h,\phi\rangle = \langle \mathcal{D}_H^g\left(\chi_{C_i}(x)(\mathcal{D}_H^g)^{-1}h\right),\phi\rangle. \end{align*} Now we can write the following in view of Theorem~\ref{orthothm}: \begin{align*} \langle P_{R_i}h,\phi\rangle =& \langle \chi_{C_i}(x)(\mathcal{D}_H^g)^{-1}h,(\mathcal{D}_H^g)^{-1}\phi\rangle \ \ \|g\|^2_{L^2(G/H)}\\ =& \langle (\mathcal{D}_H^g)^{-1}h,\chi_{C_i}(x)(\mathcal{D}_H^g)^{-1}\phi\rangle \ \ \|g\|^2_{L^2(G/H)}\\ =& \langle h,\mathcal{D}_H^g\left(\chi_{C_i}(x)(\mathcal{D}_H^g)^{-1}\phi\right) \rangle\\ =& \langle h,P_{R_i}\phi\rangle. \end{align*} Hence the operator $P_{R_i}$ is self-adjoint. Moreover, $P_{R_i}$ is a projection, since \begin{align*} \langle P_{R_i}^2h,\phi\rangle &= \langle P_{R_i}h,P_{R_i}\phi\rangle\\ &= \langle\mathcal{D}_H^g\left(\chi_{C_i}(x)(\mathcal{D}_H^g)^{-1}h\right),\mathcal{D}_H^g\left(\chi_{C_i}(x)(\mathcal{D}_H^g)^{-1}\phi\right)\rangle\\ &= \|g\|^2_{L^2(G/H)}\langle \chi_{C_i}(x)(\mathcal{D}_H^g)^{-1}h,\chi_{C_i}(x)(\mathcal{D}_H^g)^{-1}\phi\rangle\\ &= \|g\|^2_{L^2(G/H)}\langle \chi_{C_i}(x)(\mathcal{D}_H^g)^{-1}h,(\mathcal{D}_H^g)^{-1}\phi\rangle\\ &= \langle \mathcal{D}_H^g\left(\chi_{C_i}(x)(\mathcal{D}_H^g)^{-1}h\right),\phi\rangle\\ &= \langle P_{R_i}h,\phi\rangle. \end{align*} Similarly, $Q_R$ is a self-adjoint projection. \end{proof} The linear operator $P_{R_2} Q_R P_{R_1}: L^2(G/H\times \widehat{G})\rightarrow L^2(G/H\times \widehat{G})$ is called the generalized Landau-Pollak-Slepian operator. We will now obtain the relation between the generalized Landau-Pollak-Slepian operator and the generalized two-wavelet multiplier.
\begin{theorem} Let $u$ and $v$ be the functions on $G$ defined by \begin{align*} u=\frac{1}{\sqrt{|C_1|}}\chi_{C_1}(x)\ \ \ \textrm{and}\ \ \ v=\frac{1}{\sqrt{|C_2|}}\chi_{C_2}(x). \end{align*} Then the generalized Landau-Pollak-Slepian operator $$P_{R_2} Q_R P_{R_1}: L^2(G/H\times \widehat{G})\rightarrow L^2(G/H\times \widehat{G})$$ is unitarily equivalent to a scalar multiple of the generalized two-wavelet multiplier $$P_{u,v,g}\left( \chi_{D\times \Omega}\right): L^2(G)\rightarrow L^2(G).$$ In fact $$P_{R_2} Q_R P_{R_1}=\frac{\alpha(R_1,R_2)}{\|g\|_{L^2(G/H)}^2}\ \mathcal{D}_H^g\left(P_{u,v,g}\left( \chi_{D\times \Omega}\right)\right){\mathcal{D}_H^g}^{-1},$$ where $\alpha(R_1,R_2)=\sqrt{|C_1||C_2|}$ and $|C_i|$ is the Haar measure of the set $C_i$. \end{theorem} \begin{proof} Clearly, $\|u\|_{L^2(G)}=1=\|v\|_{L^2(G)}$. We have \begin{align*} \langle P_{u,v,g}\left( \chi_{D\times \Omega}\right)f_1,f_2\rangle _{L^2(G)}=\int_{D\times \Omega}D_{H}^{g}(uf_1)(\omega,zH)\overline{D_{H}^g(vf_2)(\omega,zH)}dm_{\widehat{G}}(\omega)dm_{G/H}(zH), \end{align*} where \begin{align*} D_{H}^{g}(uf_1)(\omega,zH) &=\int_G u(x)f_1(x)\overline{w(x)g(z^{-1}xH)}dm_G(x)\\ &=\frac{1}{\sqrt{|C_1|}}\int_{C_1} f_1(x)\overline{w(x)g(z^{-1}xH)}dm_G(x)\\ &=\frac{1}{\sqrt{|C_1|}}P_{R_1}\left(D_{H}^{g}(f_1)\right).
\end{align*} Hence we can write \begin{align*} & \langle P_{u,v,g}\left( \chi_{D\times \Omega}\right)f_1,f_2\rangle _{L^2(G)}\\ &=\frac{1}{\sqrt{|C_1||C_2|}}\int_{D\times \Omega}P_{R_1}\left(D_{H}^{g}(f_1)\right)\overline{P_{R_2}\left(D_{H}^{g}(f_2)\right)}dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\ &=\frac{1}{\sqrt{|C_1||C_2|}}\int_{G/H\times \widehat{G}}Q_RP_{R_1}\left(D_{H}^{g}(f_1)\right)\overline{P_{R_2}\left(D_{H}^{g}(f_2)\right)}dm_{\widehat{G}}(\omega)dm_{G/H}(zH)\\ &=\frac{1}{\sqrt{|C_1||C_2|}}\langle Q_RP_{R_1}\left(D_{H}^{g}(f_1)\right),P_{R_2}\left(D_{H}^{g}(f_2)\right)\rangle_{L^2(G/H\times \widehat{G})}\\ &=\frac{1}{\sqrt{|C_1||C_2|}}\langle P_{R_2}Q_RP_{R_1}\left(D_{H}^{g}(f_1)\right),\left(D_{H}^{g}(f_2)\right)\rangle_{L^2(G/H\times \widehat{G})}\\ &= \frac{\|g\|_{L^2(G/H)}^2}{\sqrt{|C_1||C_2|}}\langle \left(D_{H}^{g}\right)^{-1}P_{R_2}Q_RP_{R_1}\left(D_{H}^{g}(f_1)\right),f_2\rangle_{L^2(G)}. \end{align*} Since $f_1$ and $f_2$ are arbitrary elements of $L^2(G)$, this gives $$P_{u,v,g}\left( \chi_{D\times \Omega}\right)=\frac{\|g\|_{L^2(G/H)}^2}{\alpha(R_1,R_2)}\left(\mathcal{D}_{H}^{g}\right)^{-1}P_{R_2}Q_RP_{R_1}\mathcal{D}_{H}^{g},$$ which is equivalent to the asserted identity. \end{proof}
\section{Introduction} \smallskip A subgroup covering (coset covering) of the group $G$ is a collection of its subgroups (cosets of its subgroups) whose union is the whole group. A covering is called irredundant or minimal if none of its members can be omitted. B.H. Neumann observed \cite{NE} that if a group $G$ has a finite irredundant (right) coset covering $H_1x_1~,~ H_2x_2~,~...~,~H_nx_n$ then the index of the intersection of the subgroups $H_i$ is bounded above by some function of $n$. Let $f(n)$ (resp. $g(n)$) be the maximal possible value of $|G:\bigcap H_i|$ where $G$ is a group with a coset covering $\{H_ix_i | i=1,\dots,n\}$ (resp. subgroup covering $\{H_i | i=1,\dots,n\}$). Obviously we have $f(n)\geq g(n)$. M.J. Tomkinson proved \cite{T} that $f(n)=n!$ and that $g(n)\geq {{1}\over{2}} \cdot 3^{2(n-1)/3}$. Since no superexponential lower bound has been found for $g(n)$, its order of magnitude is conjectured to be exponential. Let the functions $f_1(n)$ and $g_1(n)$ be defined similarly to $f(n)$ and $g(n)$, with the additional restriction that the group $G$ is always assumed to be Abelian. (Note that $f_1(n)\leq f(n)$ and $g_1(n)\leq g(n)$.) L. Pyber pointed out (see \cite{py}) that the order of magnitude of $g_1(n)$ is itself interesting. We need the following definition. \begin{defin} Let $G$ be a fixed finite group. Let $f(G)$ (resp. $g(G)$) denote the minimal $k$ such that there exists an irredundant covering by $k$ cosets $\{H_ix_i | i=1,\dots,k\}$ (resp. subgroups $\{H_i | i=1,\dots,k\}$) of $G$ where $\bigcap H_i$ is trivial. (Note that the set of such subgroup coverings may be empty; in this case we define $g(G)$ to be infinite.) \end{defin} Now we have that $g(G)\geq f(G)$. Pyber's problem thus becomes that of finding a logarithmic lower bound for $g(A)$ in terms of $|A|$ when $A$ is an Abelian group. \begin{conjecture}[Pyber]\label{pybcon} There exists a fixed constant $c>1$ such that $g(A)> {\rm log}_c|A|$ for all finite Abelian groups $A$.
\end{conjecture} Actually we believe that (in contrast with $f(n)=n!$) the growth of $f_1(n)$ is bounded above by some exponential function, and thus \begin{conjecture}\label{pybcon2} There exists a fixed constant $c_2>1$ such that $f(A)> {\rm log}_{c_2}|A|$ for all finite Abelian groups $A$. \end{conjecture} We note that the worst known case (even for the function $f(A)$ and thus for $f_1(n)$) is the elementary Abelian $2$-group $A={C_2}^n~(n>1)$, where $f(A)=g(A)=n+1$ (see Corollary \ref{elemi}). This suggests that perhaps $2$ could be the true value for the constant $c$. We have two results related to Conjecture \ref{pybcon} and Conjecture \ref{pybcon2}. \begin{theorem}\label{fedthm} Let $A$ be an Abelian group of order $p_1^{\alpha_1}p_2^{\alpha_2}\dots p_n^{\alpha_n}$. Then $g(A)\geq f(A)\geq 1+\sum_{i=1}^n \alpha_i$. \end{theorem} It means in particular that the inequality of Conjecture \ref{pybcon} holds with $c=p_n$ where $p_n$ is the largest prime divisor of the order of $A$. Alon and F\"uredi in \cite{AF} prove the surprising result that if we want to cover all but one of the vertices of an $n$-dimensional cube, then we need at least $n$ hyperplanes. Actually they prove a more general result. \smallskip \noindent{\bf Theorem }(Alon, F\"uredi). {\it Let $h_1,h_2,\dots,h_n$ be positive integers and let $V$ be the set of all lattice points $(y_1,y_2,\dots,y_n)$ with $0\leq y_i\leq h_i$. If we want to cover all but one of the points of $V$, then we need at least $h_1+h_2+\dots+h_n$ hyperplanes.} \smallskip Our next result is an analogue of the previous one. We determine how many cosets we need if we want to cover all but one element of an Abelian group. This result yields a good lower bound for the size of an irredundant coset covering system if it contains a small coset. \begin{theorem}\label{mainthm} Let $A$ be an Abelian group of order $p_1^{\alpha_1}p_2^{\alpha_2}\dots p_n^{\alpha_n}$.
Let $\phi(A)$ denote the minimal number $k$ for which there exists a system of subgroups $H_1$,$H_2$,\dots,$H_k$ and elements $x_1,x_2,\dots,x_k$ such that $A\setminus\{1\}=\bigcup H_ix_i$. Then $\phi(A)=\sum_1^n \alpha_i (p_i-1)$. \end{theorem} \smallskip \begin{corollary}\label{mainthmc} Let $A$ be an Abelian group and let $\{H_ix_i | i=1...k\}$ be an irredundant coset covering of $A$. Then for all $i$ \begin{equation*} k\geq 1+{\rm log}_2|A:H_i| \end{equation*} \end{corollary} Note that Theorem \ref{mainthm} solves the special case of Conjecture \ref{pybcon2} when one of the cosets has size 1. In this case both conjectures hold with constant 2. Corollary \ref{mainthmc} shows that if the covering system contains a ``small'' subgroup of size less than $|A|^p$ for some $p<1$ then both conjectures hold with constant $c=c_2=2/(1-p)$. The proof of Theorem \ref{mainthm} uses character theory and some Galois theory. It is also worth mentioning that Theorem \ref{mainthm} implies that the blocking number of an affine space (i.e. the size of the smallest subset which intersects all hyperplanes) over the prime field GF($p$) is $1+n(p-1)$, which was proved (for arbitrary finite fields) by Brouwer, Schrijver and Jamison \cite{BS}, \cite{jamison} using the polynomial method. \medskip \smallskip From the combinatorial point of view, the most important special case of the previously described covering problems is when the group $A$ is an elementary Abelian group $(C_p)^n$, and the covering system consists of hyperplanes (or affine hyperplanes). More generally, we can speak about hyperplane coverings of vector spaces over arbitrary finite fields. Many questions about graph colorings, nowhere zero flows or nowhere zero vectors can be translated to questions about special hyperplane coverings. However, not much is known about such coverings. In Chapter 5 we present a character theoretic approach to hyperplane coverings in vector spaces over prime fields.
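For very small groups, the equality $\phi(A)=\sum\alpha_i(p_i-1)$ of Theorem \ref{mainthm} can be confirmed by exhaustive search. The following Python sketch (our own illustration; the helper names are hypothetical, and the brute-force subgroup enumeration is feasible only for tiny groups) enumerates all cosets avoiding the identity and finds the minimal number covering $A\setminus\{0\}$:

```python
from itertools import product, combinations

def elements(orders):
    """Elements of Z_{n1} x ... x Z_{nk}, written additively as tuples."""
    return list(product(*(range(n) for n in orders)))

def add(a, b, orders):
    return tuple((x + y) % n for x, y, n in zip(a, b, orders))

def subgroups(orders):
    """All subgroups, by brute force (check closure of each subset)."""
    G, zero = elements(orders), (0,) * len(orders)
    found = set()
    for r in range(1, len(G) + 1):
        for cand in combinations(G, r):
            S = set(cand)
            if zero in S and all(add(a, b, orders) in S for a in S for b in S):
                found.add(frozenset(S))
    return found

def phi(orders):
    """Minimal k such that A minus {0} is a union of k cosets."""
    G, zero = elements(orders), (0,) * len(orders)
    cosets = {frozenset(add(h, x, orders) for h in H)
              for H in subgroups(orders) for x in G}
    cosets = [C for C in cosets if zero not in C]   # cosets avoiding the identity
    target = set(G) - {zero}
    for k in range(1, len(G)):
        if any(set().union(*sel) == target for sel in combinations(cosets, k)):
            return k

def tau(orders):
    """tau = sum of alpha_i*(p_i - 1) over the prime decomposition of |A|."""
    n = 1
    for m in orders:
        n *= m
    t, p = 0, 2
    while n > 1:
        while n % p == 0:
            t, n = t + p - 1, n // p
        p += 1
    return t

# Theorem: phi(A) = tau(|A|), checked for C2 x C2, C4, C6, C2 x C4
for orders in [(2, 2), (4,), (6,), (2, 4)]:
    assert phi(orders) == tau(orders)
```

For instance, for $A=C_2\times C_2$ the two cosets $(0,1)+\{(0,0),(1,0)\}$ and $(1,0)+\{(0,0),(0,1)\}$ cover the three nonidentity elements, matching $\tau(4)=2$.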
The space of $n$-dimensional row vectors admits a natural scalar product. We prove the following: \begin{theorem} Let $p$ be an odd prime and let $A={\rm GF}(p)^n$. The space $A$ is covered by the hyperplanes ${{\mathbf x}_1}^{\bot},{{\mathbf x}_2}^{\bot},\dots,{{\mathbf x}_k}^{\bot}$ if and only if for all vectors ${\mathbf v}\in A$ the number of 0-1 combinations of the vectors ${{\mathbf x}_1},{{\mathbf x}_2},\dots,{{\mathbf x}_k}$ equal to ${\mathbf v}$ is even. \end{theorem} \medskip \noindent{\bf Conjecture} (Alon, Jaeger, Tarsi). {\it Let $F$ be a finite field with $q>3$ elements and let $M$ be a nonsingular $n$ by $n$ matrix over $F$. Then there exists a nowhere zero (column) vector $x$ (i.e. each component of $x$ is nonzero) such that the vector $Mx$ is also a nowhere zero vector.} \smallskip With an elegant application of the polynomial method of Alon, Nathanson and Ruzsa (see \cite{ANR}), Alon and Tarsi prove \cite{AT} that the latter conjecture holds if $F$ is a proper extension of some prime field GF($p$). Actually they prove more. For example, it follows from their results that if ${\mathbf v}$ is an arbitrary (column) vector and $M$ is a nonsingular matrix over $F$ then there exists a nowhere zero vector ${\mathbf x}$ such that $M{\mathbf x}-{\mathbf v}$ is a nowhere zero vector. It is reasonable to believe that the same statement holds over GF($p$) where $p$ is a prime bigger than $3$. This conjecture will be called the {\bf choosability} version of the Alon-Jaeger-Tarsi conjecture. \begin{proposition}\label{pimpa} A positive answer to Conjecture \ref{pybcon} implies the Alon-Jaeger-Tarsi conjecture for $F$=GF($p$) where $p\geq c^2$. \end{proposition} In Chapter 7 we discuss minimal hyperplane coverings. \begin{defin} Let $V$ be an $n$-dimensional vector space over the finite field $GF(q)$.
Let $h_q(n)$ ($l_q(n)$) denote the minimal number $k$ such that there is a collection of $k$ hyperplanes (affine hyperplanes) of $V$ which forms a minimal covering system and the intersection of these hyperplanes (the hyperplanes corresponding to these affine hyperplanes) is trivial. \end{defin} Obviously we have $h_q(n)\geq l_q(n)>n$. Let $$1+\varepsilon_q=\inf_n{l_q(n)/n}.$$ We conjecture that $\varepsilon_q>0$ if $q$ is an arbitrary prime power bigger than 2. This conjecture can be formulated in the following nice, self-contained way. \begin{conjecture}\label{thecon} Assume that for some prime power $q>2$ the $GF(q)$ vector space $V$ is covered irredundantly by $k$ affine hyperplanes $H_1+v_1,H_2+v_2,\dots, H_k+v_k$. Then the codimension of the intersection $\bigcap_i H_i$ is at most $k/(1+\varepsilon_q)$ for some fixed positive constant $\varepsilon_q$ which depends only on $q$. \end{conjecture} Using a result of Alon and Tarsi about nowhere zero points \cite{AT}, we prove the following. \begin{theorem} If $q$ is not a prime number then $\varepsilon_q\geq {1\over 2}$. \end{theorem} The $p=3$ case of Conjecture \ref{thecon} is especially interesting because it is strongly related to the next two conjectures. \smallskip \noindent{\bf Weak three flow conjecture} {\it There exists a fixed natural number $k$ such that if a graph $G$ is at least $k$-connected then it admits a nowhere zero $3$-flow.} \smallskip It is well known that the next conjecture (for $p=3$) would imply the weak 3-flow conjecture. \smallskip \noindent{\bf Additive basis conjecture}\quad (Jaeger, Linial, Payan, Tarsi).
{\it For every prime $p$ there is a constant $c(p)$ depending only on $p$ such that if $B_1,B_2,\dots ,B_{c(p)}$ are bases of the $GF(p)$ vector space $V$ then all elements of $V$ can be written as a zero-one linear combination of the elements of the union (as multisets) of the previous bases.} \smallskip We show that $\varepsilon_3>0$ is equivalent to the additive basis conjecture for $p=3$. For a prime number $p>3$ we show that $\varepsilon_p>1$ implies the choosability version of the Alon-Jaeger-Tarsi conjecture and that the latter one implies $\varepsilon_p\geq 0.5$. Note that Conjecture \ref{pybcon2} implies that $\varepsilon_p\geq {\log}_2p-1$. \bigskip \section{Notation and basics} \smallskip Let $A$ be a finite Abelian group. A linear character of $A$ is a homomorphism from $A$ to $\Bbb{C}^*$. The linear characters of $A$ form a group under pointwise multiplication (which is isomorphic to $A$), and they form a basis of the vector space of all functions $f:A\rightarrow \Bbb{C}$. The trivial character (which maps all elements of $A$ to $1$) will be denoted by $1_A$. The kernel of a linear character $\chi$ is the set of those group elements $g\in A$ for which $\chi(g)=1$. We denote the kernel of $\chi$ by ${\rm ker}(\chi)$. It is easy to see that the subgroup $H\leq A$ is the kernel of some linear character $\chi$ if and only if $A/H$ is cyclic. The group algebra $\Bbb{C}A$ consists of the formal linear combinations of the group elements. The fact that some Abelian groups are written additively can cause some confusion, because the concept of group algebra suggests that the group operation is the multiplication. For example, we will work in the group algebra of the additive group of a finite vector space $V$. In this structure all vectors from $V$ are linearly independent and the group algebra $\Bbb{C}V^+$ consists of the formal $\Bbb{C}$-linear combinations of the elements of $V$.
The product of two vectors ${\mathbf v}_1$ and ${\mathbf v}_2$ is the vector ${\mathbf z}={\mathbf v}_1+{\mathbf v}_2$ with coefficient $1$. If we add together ${\mathbf v}_1$ and ${\mathbf v}_2$ in the group algebra, the result has nothing to do with the element ${\mathbf z}$. Another source of confusion is that the identity element of the group algebra is the zero vector with coefficient one. The identity element of the group algebra is always denoted by $1$. For a good reference about characters and group algebras see \cite{i}. \smallskip Let $V$ be an $n$-dimensional vector space. A hyperplane of $V$ is a subspace of co-dimension 1. We say that the hyperplanes $H_1,H_2,\dots,H_k$ are independent (or the set $\{H_1,H_2,\dots,H_k\}$ is independent) if the co-dimension of their intersection is $k$. If $V$ is represented as the space of row vectors of length $n$ then there is a natural scalar product on $V$ defined by $({\mathbf x},{\mathbf y})=\sum_{i=1}^n x_iy_i$. The vectors ${\mathbf x}_1,{\mathbf x}_2,\dots,{\mathbf x}_k~\in V$ are linearly independent if and only if the hyperplanes ${{\mathbf x}_1}^{\bot},{{\mathbf x}_2}^{\bot},\dots,{{\mathbf x}_k}^{\bot}$ are independent. If $V$ is a row space, then the usual basis will always be denoted by ${\mathbf b}_1,{\mathbf b}_2,\dots$, their orthogonal spaces will be denoted by $B_1,B_2,\dots$, and we call them basis hyperplanes. An affine hyperplane is the set $H+{\mathbf v}$ where $H$ is a hyperplane and ${\mathbf v}$ is a vector. If $A=H+{\mathbf v}$ is an affine hyperplane, we say that $H$ is the hyperplane corresponding to $A$. A collection of affine hyperplanes $A_1,A_2,\dots,A_k$ is called independent if the corresponding hyperplanes are independent. \bigskip \section{Proof of Theorem 4.} Let $\Omega=\{H_1x_1,H_2x_2,\dots,H_kx_k\}$ be a coset system of the Abelian group $A$. We say that $\bigcap_{i=1}^k H_i$ is the {\bf subgroup intersection} of $\Omega$.
If $S$ is a subset of $A$, we say that $S$ is {\bf covered} by $\Omega$ if it is contained in the union of the elements of $\Omega$. Let $M$ be a subgroup of $A$. We denote by $\Omega/M$ the coset system in $A/M$ consisting of the images of the cosets $H_1x_1,H_2x_2,\dots,H_kx_k$ under the homomorphism $A\rightarrow A/M$. By abusing the notation, we denote by $\Omega\cap M$ the system consisting of the cosets $$H_1x_1\cap M,~H_2x_2\cap M,~\dots,~H_kx_k\cap M.$$ \begin{proof}[Proof of Theorem 4] For a natural number $n$ with prime decomposition $p_1^{\alpha_1}p_2^{\alpha_2}\dots p_m^{\alpha_m}$ let $\lambda(n)=\alpha_1+\alpha_2+\dots+\alpha_m$. We prove Theorem \ref{fedthm} by induction on the order of the Abelian group $|A|$. During the proof we will frequently use the fact that the coset structure of $A$ is translation invariant. Let $\Omega=\{H_1x_1,H_2x_2,\dots,H_kx_k\}$ be a minimal coset covering system of $A$ with trivial subgroup intersection. We have to show that $k\geq 1+\lambda(|A|)$. Let $K$ be a maximal subgroup of $A$ containing $H_1$. Note that $K$ has prime index in $A$ and so $\lambda(|A|)=\lambda(|K|)+1$. Using the fact that any translation of the system $\Omega$ is again a minimal coset covering system with trivial subgroup intersection, we can assume that $x_1\notin K$. This means that $H_1x_1$ is disjoint from $K$ and that $H_2x_2,\dots,H_kx_k$ cover $K$. Let $\Omega_1\subseteq \{H_2x_2,\dots,H_kx_k\}\subset\Omega$ be minimal with the property that it covers $K$. There are two possibilities. The first one is that the subgroup intersection of $\Omega_1$ is trivial. In this case the subgroup intersection of $\Omega_1\cap K$ is also trivial and then by induction we can deduce that $$k\geq 1+f(K)\geq 2+\lambda(|K|)=1+\lambda(|A|)$$ which finishes the proof. The second one is that the subgroup intersection of $\Omega_1$ is not trivial. Let $M_1$ denote the subgroup intersection of $\Omega_1$.
Since the factor group $K/(K\cap M_1)$ is covered minimally by $\Omega_1/M_1$ with trivial subgroup intersection, we have by induction that $|\Omega_1/M_1|=|\Omega_1|\geq 1+\lambda(|K/(K\cap M_1)|)$. Let $y_1$ be an element of $A$ which is not covered by the cosets in $\Omega_1$. Clearly the whole coset $M_1y_1$ does not intersect any coset from $\Omega_1$. Let $\Omega_2\subseteq\Omega$ be a minimal covering system for $M_1y_1$ and let $M_2$ be the subgroup intersection of $\Omega_1\cup\Omega_2$. Using translation invariance we have that $|\Omega_2|\geq 1+\lambda(|M_1/M_2|)$. \smallskip Now we define a process. Assume that $\Omega_i$, $M_i$ and $y_i$ are already constructed for $1\leq i\leq t$ and the subgroup intersection $M_t$ is still not trivial. Let $\Omega_{t+1}\subseteq\Omega$ be a minimal covering for $M_ty_t$. Let $M_{t+1}$ denote the subgroup intersection of the system $\bigcup_{i=1}^{t+1}\Omega_i$. If $M_{t+1}$ is not trivial then let $y_{t+1}$ be an element which is not covered by the system $\bigcup_{i=1}^{t+1}\Omega_i$. Using the induction hypothesis and translation invariance we get that $|\Omega_{t+1}|\geq 1+\lambda(|M_t/M_{t+1}|)$. Assume that $M_r$ is trivial, and thus $r$ is the length of the previous process. Now we have that $$|\Omega|\geq\sum_{i=1}^r |\Omega_i|\geq r+\lambda (|K|)\geq 1+\lambda(|A|).$$ \end{proof} Using Theorem \ref{fedthm} we obtain a precise result for $(C_2)^n$. \begin{corollary}\label{elemi} $f((C_2)^n)=n+1$ \end{corollary} \begin{proof} Theorem \ref{fedthm} implies that $f((C_2)^n)\geq n+1$. Let $H_i$~($1\leq i\leq n$) be the subgroup consisting of all elements with 0 at the $i$-th place. The group $(C_2)^n$ is the union of the subgroups $H_i$ and the one-element coset $\{(1,1,\dots,1)\}$. \end{proof} \section{Proof of Theorem 5.} \smallskip For a natural number $n$ with prime decomposition $n=\prod_{i=1}^{s} {p_i}^{\alpha_i}$, let $\tau(n)=\sum_{i=1}^{s}{\alpha_i}(p_i-1)$.
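The upper bound $\phi(A)\leq\tau(|A|)$ is realized by an explicit construction: peel off a subgroup $B$ of prime index $p$, use the $p-1$ nontrivial cosets of $B$ as covering members, and recurse inside $B$. For cyclic groups the construction can be sketched in a few lines of Python (our own illustration; the function names are hypothetical):

```python
def tau(n):
    """tau(n) = sum of alpha_i*(p_i - 1) over the prime decomposition of n."""
    t, p = 0, 2
    while n > 1:
        while n % p == 0:
            t, n = t + p - 1, n // p
        p += 1
    return t

def cover_cyclic(n):
    """Cosets covering Z_n minus {0}: peel off an index-p subgroup, recurse."""
    if n == 1:
        return []
    p = min(q for q in range(2, n + 1) if n % q == 0)   # smallest prime factor
    B = range(0, n, p)                                  # subgroup of index p
    # the p-1 cosets of B different from B itself
    cosets = [frozenset((b + r) % n for b in B) for r in range(1, p)]
    # B is isomorphic to Z_{n/p} via k -> p*k; recurse to cover B minus {0}
    cosets += [frozenset(p * k for k in C) for C in cover_cyclic(n // p)]
    return cosets

for n in (4, 6, 12, 30, 100):
    cov = cover_cyclic(n)
    assert set().union(*cov) == set(range(1, n))   # covers Z_n minus {0}
    assert len(cov) == tau(n)                      # with exactly tau(n) cosets
```

For $n=12$ this produces the odd residues $\{1,3,\dots,11\}$ (one coset), then $\{2,6,10\}$, $\{4\}$, $\{8\}$, i.e.\ $\tau(12)=4$ cosets in total.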
Let $\phi(A)$ denote the smallest number $k$ for which there is a collection of cosets $H_1x_1,H_2x_2,\dots,H_kx_k$ in the Abelian group $A$ such that $$A\setminus \{1\}=\cup_{i=1}^k H_ix_i.$$ \begin{lem}\label{lem0} $\phi(A)\leq\tau(|A|)$ \end{lem} \begin{proof} We go by induction on $|A|$. Let $B<A$ be a subgroup of index $p_1$. The group $A$ is a disjoint union of $p_1$ cosets of $B$: the $p_1-1$ cosets different from $B$ serve as members of the covering, and $B\setminus\{1\}$ can be covered by $\tau(|B|)$ cosets by induction. Since $(p_1-1)+\tau(|B|)=\tau(|A|)$, we obtain the result for $A$. \end{proof} \begin{lem}\label{lem1} Let $B$ and $C$ be two Abelian groups with $(|B|,|C|)=1$. Then $\phi(B\times C)\geq\phi (B)+\phi (C)$. \end{lem} \begin{proof} If two groups have coprime order then a subgroup of their direct product is a direct product of their subgroups. It follows that for $H\leq B\times C$ and $g\in B\times C$ there are subgroups $H_1\leq B$ , $H_2\leq C$ and elements $g_1\in B$ , $g_2\in C$ such that $Hg=\{(h_1g_1,h_2g_2)|h_1\in H_1 , h_2\in H_2\}$. Assume that $(B\times C)\setminus\{1\} = \bigcup_1^k{K_ig_i}$ where $K_i<B\times C$, $g_i\notin K_i$ and $k=\phi (B\times C)$. If $K_ig_i$ intersects $B\times\{1\}\leq B\times C$ then it does not intersect $\{1\}\times C$, otherwise $1$ would be an element of $K_ig_i$. (The analogous statement holds if $K_ig_i$ intersects $\{1\}\times C$.) This implies that $k\geq \phi (B)+\phi (C)$. \end{proof} \begin{lem}\label{lem3} If $H<G$ and $g\notin H$ then there exists a subgroup $K$ of $G$ such that $H\leq K$, $g\notin K$, and $G/K$ is cyclic. \end{lem} \begin{proof} Let $K$ be maximal with the property $H\leq K <G$, $g\notin K$. In the factor group $G/K$ every nontrivial subgroup $K_2$ contains $Kg$, otherwise the preimage of $K_2$ under the homomorphism $G\rightarrow G/K$ would be bigger than $K$ and would not contain $g$. It follows that $G/K$ can't be a direct product of two proper subgroups, because the two factors would intersect trivially while both would have to contain $Kg$. Using the structure theorem of finite Abelian groups we obtain that $G/K$ must be cyclic of prime power order.
\end{proof} \begin{lem}\label{lem4} If $P$ is an Abelian group of order $p^{\alpha}$ for some prime $p$ and integer $\alpha$, then $\phi(P)\geq \alpha (p-1)$. \end{lem} \begin{proof} Let $k=\phi(P)$ and $P\setminus \{1\}=\bigcup_1^k{H_i}g_i$ (where $g_i\notin H_i$). Using Lemma \ref{lem3} we obtain that there are subgroups $K_i$ $(1\leq i \leq k)$ with $H_i\leq K_i$, $g_i\notin K_i$ and $P/K_i$ cyclic for all $1\leq i\leq k$. Now we have $P\setminus\{1\}=\bigcup_1^k{K_i}g_i$ and for each $K_i$ there exists a linear character $\chi_i$ of $P$ such that $\ker{\chi_i}=K_i$. Clearly the product $\prod_1^k{(\chi_i-(\chi_i(g_i))1_P)}$ takes the value zero on every element $1\neq g\in P$ but it is nonzero on the element $1$. From this we obtain the following equality $$\prod_1^k{(\chi_i-\chi_i(g_i)1_P)}= \left(\prod_1^k{(1-\chi_i(g_i))}/|P|\right)\left(\sum_{\chi \in \irr{P}}\chi\right)$$ The linear characters of $P$ form a basis of the vector space of $P\rightarrow \Bbb{C}$ functions, and thus after expanding both sides of the above equation, the coefficients of the characters must coincide. On the left hand side each coefficient is a sum of roots of unity, and thus an algebraic integer. On the right hand side every character has coefficient $\prod_1^k{(1-\chi_i(g_i))}/|P|$, and thus this number is an algebraic integer. The $|P|$-th cyclotomic field $F$ is a normal extension of $\Bbb{Q}$, and the degree of the field extension $F/\Bbb{Q}$ is $p^{\alpha-1}(p-1)$. Using the fact that the Galois norm of an algebraic integer is an integer we deduce that $\norm{|P|}=$ $p^{\alpha p^{\alpha-1}(p-1)}$ divides $\prod_1^k{\norm{1-\chi_i(g_i)}}$ where $\norm{x}$ denotes the Galois norm of $x$ in the field extension $F/\Bbb{Q}$. An easy calculation shows that $\norm{1-\chi_i(g_i)}=p^{p^{\alpha-\logp{\ordo{\chi_i(g_i)}}}}\leq$ $p^{p^{\alpha-1}}$ where $\ordo{\chi_i(g_i)}$ denotes the multiplicative order of $\chi_i(g_i)$. Comparing exponents now gives $\alpha p^{\alpha-1}(p-1)\leq kp^{\alpha-1}$, that is, $k\geq\alpha(p-1)$, which completes the proof.
\end{proof} \begin{proof}[Proof of Theorem 5] According to Lemma \ref{lem0}, it is enough to prove that $$\phi(A)\geq \tau(|A|).$$ We go by induction on $|A|$. If $|A|$ is a prime power then Lemma \ref{lem4} yields the result. If $A$ is not a prime power then $A=B\times C$ where $(|B|,|C|)=1$ and using the statement for $B$ and $C$, Lemma \ref{lem1} completes the proof. \end{proof} \begin{proof}[Proof of Corollary \ref{mainthmc}] Let $g$ be an element of $H_ix_i$ which is not covered by $H_jx_j$ for all $j\neq i$. Lemma \ref{lem0} shows that there is a coset system $\Omega$ consisting of $\tau(|H_i|)$ cosets whose union is $H_ix_i\setminus\{g\}$. The union of the system $\Omega\cup\{H_jx_j|j\neq i, 1\leq j\leq k\}$ is $A\setminus \{g\}$, so translating it with $g^{-1}$ we can apply Theorem \ref{mainthm}. We obtain that $k-1+\tau(|H_i|)\geq \tau(|A|)$ and thus $k\geq 1+\tau(|A:H_i|)$. It means in particular that $k\geq 1+{\rm log}_2|A:H_i|$. \end{proof} \medskip \section{Hyperplane coverings and characters} \smallskip Now we describe our character theoretic approach to hyperplane covering problems. Let $p$ be a fixed prime number, let $\omega=e^{2\pi i/p}$ and let $A=(C_p)^n$. We regard $A$ as the $n$-dimensional row vector space over GF($p$). \begin{lem}\label{covhyp} The space $A$ is covered by the hyperplanes ${{\mathbf x}_1}^{\bot},{{\mathbf x}_2}^{\bot},\dots,{{\mathbf x}_k}^{\bot}$ if and only if the equation \begin{equation*} ({\mathbf x}_1-{\mathbf 1})({\mathbf x}_2-{\mathbf 1})\dots({\mathbf x}_k-{\mathbf 1})=0 \end{equation*} holds in the group algebra ${\Bbb C}[A]$, where ${\mathbf 1}$ denotes the identity element of $A$ (which is actually the zero vector, if we think of $A$ as a vector space). \end{lem} Note that subtraction in the previous lemma is the group algebra subtraction and not the vector subtraction.
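Both the group-algebra identity of Lemma \ref{covhyp} and the parity criterion stated in the introduction can be verified numerically on a small instance. The sketch below uses our own example (four lines covering ${\rm GF}(3)^2$) and a dictionary representation of group-algebra elements; all names are illustrative:

```python
from itertools import product, combinations

p, n = 3, 2
A = list(product(range(p), repeat=n))
zero = (0,) * n

# four row vectors whose orthogonal lines cover GF(3)^2 (our example)
xs = [(1, 0), (0, 1), (1, 1), (1, 2)]

# direct check: every v lies on some hyperplane x^perp
assert all(any(sum(a * b for a, b in zip(x, v)) % p == 0 for x in xs)
           for v in A)

def ga_mul(f, g):
    """Multiply two elements of the group algebra C[A], stored as dicts."""
    h = {}
    for a, ca in f.items():
        for b, cb in g.items():
            s = tuple((u + w) % p for u, w in zip(a, b))
            h[s] = h.get(s, 0) + ca * cb
    return h

# Lemma: the product (x_1 - 1)(x_2 - 1)...(x_k - 1) vanishes in C[A]
prod_elt = {zero: 1}
for x in xs:
    prod_elt = ga_mul(prod_elt, {x: 1, zero: -1})
assert all(c == 0 for c in prod_elt.values())

# parity criterion: every v arises from an even number of 0-1 combinations
for v in A:
    cnt = sum(1 for r in range(len(xs) + 1) for S in combinations(xs, r)
              if tuple(sum(col) % p for col in zip(zero, *S)) == v)
    assert cnt % 2 == 0
```

Here each vector $v$ is hit by exactly two of the sixteen $0$-$1$ combinations, so the parity condition holds.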
\begin{proof} The function \begin{equation*} f:(x_1,x_2,\dots,x_n)\rightarrow((y_1,y_2,\dots,y_n)\rightarrow \omega^{x_1y_1+x_2y_2+\dots+x_ny_n}) \end{equation*} gives an isomorphism between $A$ and its character group $A^*$. Moreover $f$ can be uniquely extended to an algebra isomorphism between the group algebra ${\Bbb C}[A]$ and the character algebra ${\Bbb C}[A^*]$. Note that the character algebra is just the algebra of all functions $A\rightarrow {\Bbb C}$ with the pointwise multiplication. Clearly we have that ${\mathbf x}^\bot = {\rm ker}(f({\mathbf x}))$ for all row vectors ${\mathbf x}\in A$. It follows that the space $A$ is covered by the hyperplanes ${{\mathbf x}_1}^{\bot},{{\mathbf x}_2}^{\bot},\dots,{{\mathbf x}_k}^{\bot}$ if and only if \begin{equation*} (f({\mathbf x}_1)-1_A)(f({\mathbf x}_2)-1_A)\dots(f({\mathbf x}_k)-1_A)=0. \end{equation*} Applying $f^{-1}$ to both sides of the previous equation we obtain the statement of the lemma. \end{proof} The previous lemma gives a characterization of covering systems in terms of orthogonal vectors. Our following theorem gives a group-algebra-free characterization of coverings in terms of orthogonal vectors if $p$ is an odd prime. \begin{theorem}\label{chariz} Let $p$ be an odd prime and let $A=(C_p)^n$. The space $A$ is covered by the hyperplanes ${{\mathbf x}_1}^{\bot},{{\mathbf x}_2}^{\bot},\dots,{{\mathbf x}_k}^{\bot}$ if and only if for all vectors ${\mathbf v}\in A$ the number of 0-1 combinations of the vectors ${{\mathbf x}_1},{{\mathbf x}_2},\dots,{{\mathbf x}_k}$ equal to ${\mathbf v}$ is even. \end{theorem} \begin{proof} Let $F$ be the algebraic closure of the field with two elements. Since $p$ is odd, $F$ contains a $p$-th root of unity $\omega$, and thus we can repeat everything we did over $\Bbb{C}$.
We obtain that the space $A$ is covered by the hyperplanes ${{\mathbf x}_1}^{\bot},{{\mathbf x}_2}^{\bot},\dots,{{\mathbf x}_k}^{\bot}$ if and only if the equation \begin{equation*} ({\mathbf x}_1-{\mathbf 1})({\mathbf x}_2-{\mathbf 1})\dots({\mathbf x}_k-{\mathbf 1})=0 \end{equation*} holds in the group algebra $F[A]$. Since $F$ has characteristic 2 we don't have to care about the signs in the previous formula. The rest of the proof is straightforward by expanding the formula. \end{proof} \section{On the Alon-Jaeger-Tarsi conjecture} \smallskip The following lemma shows the relationship between hyperplane coverings and the Alon-Jaeger-Tarsi conjecture. \begin{lem}\label{ketfug} Let $p$ be a fixed prime number and let $n$ be a fixed natural number. The following statements are equivalent. \begin{enumerate} \item The $n$-dimensional vector space over GF($p$) can't be covered by the union of two independent sets of hyperplanes. \item If $M$ is a nonsingular $n$ by $n$ matrix over GF($p$) then there exists a nowhere zero vector ${\mathbf x}$ such that $M{\mathbf x}$ is also a nowhere zero vector. \end{enumerate} \end{lem} \begin{proof} (1)$\Rightarrow$ (2) Let ${\mathbf x}_1,{\mathbf x}_2,\dots,{\mathbf x}_n$ denote the rows of $M$, and let $H_i={{\mathbf x}_i}^{\bot}$ for $1\leq i \leq n$. Since $M$ is nonsingular we have that the hyperplanes $H_1,H_2,\dots,H_n$ are independent. Let $S_i$ be the hyperplane consisting of the row vectors with a zero at the $i$-th component. It follows from (1) that there exists a vector ${\mathbf y}$ which is not contained in the union of the hyperplanes $H_i$ , $S_i$~ $(1\leq i\leq n)$. Clearly ${\mathbf y}$ is a nowhere zero vector such that $M{\mathbf y}^T$ is also a nowhere zero vector. (2)$\Rightarrow$ (1) Assume that $V$ is an $n$-dimensional vector space covered by two independent sets of hyperplanes $\Omega_1$ and $\Omega_2$. We can assume that both $\Omega_1$ and $\Omega_2$ are maximal independent sets.
It is easy to see that we can represent $V$ as a row space such that the hyperplanes in $\Omega_1$ are exactly the spaces formed by all vectors with a zero at a fixed component. Let ${\mathbf x_1},{\mathbf x_2},\dots,{\mathbf x_n}$ be a system of nonzero vectors whose orthogonal spaces are exactly the hyperplanes in $\Omega_2$. It is clear that the vectors ${\mathbf x_i}~ (1\leq i\leq n)$ are linearly independent. Let $M$ be a matrix whose row vectors are ${\mathbf x_i}$~$(1\leq i\leq n)$. Now $M$ contradicts the assumption of (2). \end{proof} \begin{proof}[Proof of Proposition \ref{pimpa}] Using Lemma \ref{ketfug} it is enough to show that if $V$ is covered by two independent sets of hyperplanes then $p<c^2$. Let $\Omega$ be the union of two independent hyperplane sets. Clearly $\Omega$ contains an independent set $\Delta$ of cardinality $k\geq |\Omega|/2$. Let $W$ denote the intersection of the hyperplanes in $\Omega$. Now, the factor space $V/W$ is covered irredundantly by the elements of $\Omega$ and the intersection of this covering system is trivial (in $V/W$). We also have that $d={\rm dim}V/W\geq k$. It follows that $|\Omega|\leq 2d$. If Conjecture \ref{pybcon} holds then ${\rm log}_c(p^d)< 2d$ which means $p<c^2$. \end{proof} \begin{defin} Let $M$ be an $n$ by $n$ matrix. We say that $M$ is an {\bf AJT matrix} if there is a nowhere zero (column) vector ${\mathbf x}$ such that $M{\mathbf x}$ is also a nowhere zero vector. \end{defin} Note that $M$ is not an AJT matrix if and only if the orthogonal spaces of the rows of $M$ cover all nowhere zero vectors. \begin{lem}\label{fedequ} Let $M$ be an $n$ by $n$ matrix over the field GF($p$) and let $\{{\mathbf x_1},{\mathbf x_2},\dots,{\mathbf x_n}\}$ be the rows of $M$. Moreover let ${\mathbf b_i}$ be the $i$-th row of the $n$ by $n$ identity matrix.
Then $M$ is an AJT matrix if and only if \begin{equation*} ({\mathbf b}_1-{\mathbf 1})({\mathbf b}_2-{\mathbf 1})\dots({\mathbf b}_n-{\mathbf 1}) ({\mathbf x}_1-{\mathbf 1})({\mathbf x}_2-{\mathbf 1})\dots({\mathbf x}_n-{\mathbf 1})\neq 0 \end{equation*} in the group algebra ${\Bbb C}[V^+]$, where $V$ denotes the space of $n$-dimensional row vectors. If $p$ is odd and $F$ is the algebraic closure of the field with two elements then the same statement holds if we replace ${\Bbb C}$ by $F$. \end{lem} \begin{proof} The proof is straightforward from Lemma \ref{covhyp} and Theorem \ref{chariz}. \end{proof} \begin{defin} Let $B$ be a subset of the $n$-dimensional GF($p$) space $V$. Let $C(B)$ denote the set of all vectors ${\mathbf v}$ for which the number of zero-one combinations of the elements from $B$ equal to ${\mathbf v}$ is odd. In particular if $B$ is a linearly independent set then $C(B)$ is the set of all zero-one combinations of the elements from $B$. We say that $C(B)$ is the {\bf cube} determined by the set $B$. Let $A_1,A_2,\dots,A_n$ be two-element subsets of GF($p$). We say that the vector set $\{(a_1,a_2,\dots,a_n)|a_i\in A_i\}$ is a {\bf combinatorial cube} in the $n$-dimensional row space. \end{defin} Using our character theoretic approach we obtain the following characterization of AJT matrices. \begin{theorem} Let $M$ be an $n$ by $n$ matrix over the field GF($p$) where $p>2$. Let $X$ be the set formed by the rows of $M$ and let $B$ be the ordinary basis of the $n$-dimensional row-space. $M$ is an AJT matrix if and only if the set $C(X)\cap (C(B)+{\mathbf v})$ has an odd number of points for some vector ${\mathbf v}$. \end{theorem} \begin{proof} Let $F$ be the algebraic closure of the field with two elements. Recall that the elements of the group algebra $F[V^+]$ are formal $F$-linear combinations of the group elements.
Using Lemma \ref{fedequ} and that $1=-1$ in characteristic 2 we get that $M$ is an AJT matrix if and only if \begin{equation*} ({\mathbf b}_1+{\mathbf 1})({\mathbf b}_2+{\mathbf 1})\dots({\mathbf b}_n+{\mathbf 1}) ({\mathbf x}_1+{\mathbf 1})({\mathbf x}_2+{\mathbf 1})\dots({\mathbf x}_n+{\mathbf 1})= \end{equation*} \begin{equation*} \sum_{S_1\subseteq \{1,2,\dots,n\}}~\sum_{S_2\subseteq \{1,2,\dots,n\}}~ \prod_{i\in S_1}{\mathbf b}_i~\prod_{i\in S_2} {\mathbf x}_i \end{equation*} \noindent is not zero in the group algebra $F[V^+]$. Let ${\mathbf y}$ be a fixed vector in $V$. To determine the coefficient of ${\mathbf y}$ in the previous product we have to compute the number of solutions of the following equation in $F[V^+]$, where $S_1\subseteq \{1,2,\dots,n\}$, ~$S_2\subseteq \{1,2,\dots,n\}$. \begin{equation*} \prod_{i\in S_1}{\mathbf b}_i~\prod_{i\in S_2}{\mathbf x}_i={\mathbf y} \end{equation*} Since it does not contain any addition, it can be translated into the following equation in $V$. \begin{equation*} \sum_{i\in S_1}{\mathbf b}_i~+\sum_{i\in S_2}{\mathbf x}_i={\mathbf y} \end{equation*} The number of the solutions of the previous equation has the same parity as $$|C(X)\cap(-C(B)+{\mathbf y})|=|C(X)\cap(C(B)-(1,1,\dots,1)+{\mathbf y})|$$ and this parity gives the coefficient of ${\mathbf y}$. It follows that $M$ is an AJT matrix if and only if there is a vector ${\mathbf v}$ for which $|C(X)\cap (C(B)+{\mathbf v})|$ is an odd number. \end{proof} As a consequence of the previous theorem we obtain the following. \begin{corollary} Let $M$ be an $n$ by $n$ matrix, and let $X$ be the set formed by the rows of $M$. Then $M$ is an AJT matrix if and only if there is a combinatorial cube which has odd intersection with $C(X)$. \end{corollary} \begin{proof} It is clear that if $M$ is an AJT matrix and $N$ is obtained from $M$ by multiplying the rows by nonzero scalars then $N$ is an AJT matrix too. Applying the previous theorem to all possible such $N$ the proof is straightforward.
\end{proof} \medskip \medskip \bigskip \section{Minimal hyperplane coverings} \begin{lem}\label{brumm} Let $q$ be a prime power which is not a prime. Let $V$ be an $n$-dimensional vector space over GF($q$) and let $B_1$ and $B_2$ be two bases of $V$. Then each vector ${\mathbf v}\in V$ can be written as a nowhere zero linear combination (i.e. no coefficient is zero) of $B_1\cup B_2$. \end{lem} \begin{proof} We write each vector as a row vector in the basis $B_1$. Let $M$ be a matrix whose rows are the vectors from $B_2$. According to the results of Alon and Tarsi in \cite{AT} there is a nowhere zero (row) vector ${\mathbf x}$ such that ${\mathbf v}-{\mathbf x}M$ is a nowhere zero vector ${\mathbf y}$. It means that ${\mathbf v}={\mathbf x}M+{\mathbf y}$ which yields the required linear combination. \end{proof} \begin{lem}\label{mats} Let $M$ be a matroid on the set $E$. If $|E|\geq r(E)k$ for a natural number $k$ then there is a subset $X\subseteq E$ such that $X$ as a matroid has $k$ disjoint bases. \end{lem} \begin{proof} Let $X$ be a minimal subset of $E$ with the property $|X|\geq r(X)k$. According to Edmonds' matroid base packing theorem, the maximal number of pairwise disjoint bases in $X$ equals $${\rm min}\left\{\left\lfloor{{|X|-|Y|}\over r(X)-r(Y)}\right\rfloor~:~Y\subseteq X~,r(Y)<r(X)\right\}.$$ The minimality of $X$ implies that for an arbitrary subset $Y\subset X$ with $r(Y)<r(X)$ we have that $|Y|< r(Y)k$. It follows that $|Y|-r(Y)k<|X|-r(X)k$ and so $(|X|-|Y|)/(r(X)-r(Y))>k$. \end{proof} \begin{theorem} Let $q$ be a prime power which is not a prime. Let $V$ be a vector space over GF($q$) which is covered irredundantly by $k$ affine hyperplanes $H_i+{\mathbf v}_i ~ (1\leq i\leq k)$. Then the co-dimension of the intersection of the hyperplanes $H_i~(1\leq i\leq k)$ is less than ${2\over 3}k$.
\end{theorem} \begin{proof} It is easy to see that the space $V/\bigcap_{1\leq i\leq k} H_i$ is covered irredundantly by the images of $H_i+{\mathbf v}_i$, so we can assume that $\bigcap H_i$ is trivial. We go by contradiction: assume that ${\rm dim}(V)=n\geq {2\over 3}k$. Without loss of generality we can assume that $H_1,H_2,\dots,H_n$ are independent hyperplanes, and ${\mathbf v}_i$ is the zero vector for $1\leq i\leq n$. We can choose a basis $B=\{{\mathbf b}_i|1\leq i\leq n\}$ such that the previous hyperplanes are exactly the orthogonal spaces of the basis elements. Let $W=\bigcap_{i>n} H_i$. From our assumption it follows that ${\rm dim}(V/W)\leq k-n\leq {1\over 2}n$. Let ${\mathbf p}_i$ be the image of ${\mathbf b}_i$ ($i=1,\dots,n$) under the homomorphism $V\rightarrow V/W$. From Lemma \ref{mats} it follows that there are two disjoint index sets $I_1,I_2\subset \{1,\dots,n\}$ such that $\{{\mathbf p}_i|i\in I_1\}$ and $\{{\mathbf p}_i|i\in I_2\}$ are bases of the same subspace $T\leq V/W$. Let $j$ be an element in $I_1$, and let ${\mathbf x}=\sum_{i=1}^{n} \lambda_i{\mathbf b}_i$ be an element in $H_j\leq V$ which is not covered by $H_i+{\mathbf v}_i$ for $i\neq j,~1\leq i\leq k$. Since the hyperplanes $H_l$~$(1\leq l\leq n)$ do not cover ${\mathbf x}$ for $l\neq j$, it follows that $\lambda_l\neq 0$ for all $l\neq j$. Let ${\mathbf y}=\sum_{i=1}^n \lambda_i{\mathbf p}_i$ and ${\mathbf y}_1=\sum_{i\in I_1\cup I_2} \lambda_i{\mathbf p}_i$. Lemma \ref{brumm} implies that ${\mathbf y}_1$ can be written as a nowhere zero linear combination of the vectors ${\mathbf p}_i$~$(i\in I_1\cup I_2)$, and thus ${\mathbf y}$ can be written in the form $\sum_{i=1}^{n} \mu_i{\mathbf p}_i$ where $\mu_i\neq 0$ for $1\leq i\leq n$. Let ${\mathbf z}= \sum_{i=1}^{n} \mu_i{\mathbf b}_i$. The vector ${\mathbf z}$ is a preimage of ${\mathbf y}$ under the homomorphism $V\rightarrow V/W$ and so ${\mathbf z}-{\mathbf x}\in W$.
Since ${\mathbf z}$ is a nowhere zero vector in the basis $B$ we have that it is not contained in $H_1,H_2,\dots,H_n$. Let $t>n$ be a number for which $H_t+{\mathbf v}_t$ contains ${\mathbf z}$. By definition of $W$, $H_t+{\mathbf v}_t$ contains the set ${\mathbf z}+W$. This contradicts the assumption that ${\mathbf x}$ is covered only by $H_j$. \end{proof} Note that the condition on $q$ was hidden in Lemma \ref{brumm}, where we used a result of \cite{AT}. This means that the choosability version of the Alon-Jaeger-Tarsi conjecture would imply the analogous statement for an arbitrary prime number bigger than $3$. It can also be seen (from the previous proof) that the following weak conjecture implies $\varepsilon_p>0$ if $p>2$. \medskip \noindent{\bf Weak conjecture} {\it For every prime $p>2$ there is a constant $c_2(p)$ depending only on $p$ such that if $B_1,B_2,\dots ,B_{c_2(p)}$ are bases of the $GF(p)$ vector space $V$ then all elements of $V$ can be written as a nowhere zero linear combination of the elements of the union (as multisets) of the previous bases.} \medskip The next result shows that the weak conjecture is equivalent to $\varepsilon_p>0$. \begin{lem} If $\varepsilon_p>0$ then the weak conjecture holds for $p$ with any $c_2(p)=k>{{1+\varepsilon_p}\over{\varepsilon_p}}$. \end{lem} \begin{proof} We go by contradiction. Assume that the weak conjecture is not true with $c_2(p)=k$. Let $n$ be the minimal dimension where the conjecture is false (with $c_2(p)=k$) and assume that the bases $B_1,B_2,\dots,B_k$ are forming a counterexample in the $n$ dimensional space $V$. Let $M$ be an $n$ by $nk$ matrix whose columns are the vectors from the previous bases. According to our assumption there is a vector ${\mathbf v}$ such that there is no nowhere zero vector ${\mathbf x}$ with $M{\mathbf x}={\mathbf v}$.
Let us say that an index set $I\subseteq \{1,\dots,nk\}$ is a blocking set if for all ${\mathbf x}\in {\rm GF}(p)^{nk}$ with $M{\mathbf x}={\mathbf v}$ there is a $j\in I$ such that the $j$-th coordinate of ${\mathbf x}$ is zero. Let $I$ be a minimal blocking set. First we prove by contradiction that $I=\{1,\dots,nk\}$. Assume that $P=\{1,\dots,nk\}\setminus I$ is not empty. Let $j$ be an element of $P$, let ${\mathbf y}$ be the $j$-th column of $M$ and let $W$ be the factor space $V/\langle{\mathbf y}\rangle$. Let $P_1,P_2,\dots,P_k$ be the images of the bases $B_1,B_2,\dots,B_k$ under the homomorphism $V\rightarrow W$. It is clear that each $P_i$ contains a basis of $W$ and by the minimality of $n$ it follows that each vector ${\mathbf x}\in W$ is a nowhere zero linear combination of the elements in $P_1,P_2,\dots,P_k$. In particular the image of ${\mathbf v}$ can be written as such a nowhere zero combination. It means that there is a vector ${\mathbf x}\in {\rm GF}(p)^{nk}$ for which $M{\mathbf x}={\mathbf v}$ and all but the $j$-th coordinate of ${\mathbf x}$ are not zero. This contradicts the assumption that $I$ is a blocking set. Now we have that $\{1,\dots,nk\}$ is a minimal blocking set and thus for each $j\in \{1,\dots,nk\}$ there is a vector ${\mathbf x}_j\in {\rm GF}(p)^{nk}$ such that all but the $j$-th coordinate of ${\mathbf x}_j$ are not zero and $M{\mathbf x}_j={\mathbf v}$. Let $U$ be the affine subspace consisting of all ${\mathbf x}$ for which $M{\mathbf x}={\mathbf v}$. For all $j\in \{1,\dots,nk\}$ let $H_j\leq U$ be the affine hyperplane consisting of those elements ${\mathbf x}$ whose $j$-th coordinate is zero. Now the affine space $U$ is covered irredundantly by the affine hyperplanes $H_j$. Since ${\rm dim}(U)=n(k-1)$, it follows that ${k\over{k-1}}\geq 1+\varepsilon_p$, which contradicts $k>{{1+\varepsilon_p}\over{\varepsilon_p}}$. \end{proof} \bigskip \section{colorings and flows} In this section we outline the relation between colorings, flows and hyperplane coverings.
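As a computational aside to the preceding sections: the AJT property can be tested by brute force in small dimensions. The following sketch (in Python; the helper name \texttt{is\_ajt} is ours) assumes that ``$M$ is an AJT'' means that there is a nowhere zero vector ${\mathbf x}$ over ${\rm GF}(p)$ such that $M{\mathbf x}$ is nowhere zero as well; the search is exponential and intended only as a sanity check.

```python
from itertools import product

def is_ajt(M, p):
    """Brute force: is there a vector x over GF(p) with every coordinate
    of x nonzero and every coordinate of M x nonzero?  (Assumed reading
    of "M is an AJT"; exponential search, for small sanity checks only.)"""
    n = len(M)
    for x in product(range(1, p), repeat=n):          # nowhere zero x
        Mx = [sum(M[i][j] * x[j] for j in range(n)) % p for i in range(n)]
        if all(c != 0 for c in Mx):                   # M x nowhere zero
            return True
    return False

# The identity matrix is an AJT over GF(3) ...
assert is_ajt([[1, 0], [0, 1]], 3)
# ... while this nonsingular matrix over GF(3) is not, consistent with
# the conjecture being stated only for primes bigger than 3.
assert not is_ajt([[1, 1], [1, 2]], 3)
```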
Let $G$ be a finite, loopless graph with vertex set $V(G)$ and edge set $E(G)$. Let $q$ be a prime power and let $W$ be the vector space of all functions $V(G)\rightarrow {\rm GF}(q)$. For two functions $f,g\in W$ we define their scalar product by $$(f,g)=\sum_{v\in V(G)}f(v)g(v).$$ We associate a vector ${\mathbf v}_e\in W$ to each edge $e\in E(G)$ such that ${\mathbf v}_e$ takes $1$ and $-1$ at the two different endpoints of $e$, and it takes $0$ everywhere else. \begin{lem} $G$ is colorable with $q$ colors if and only if the orthogonal spaces of the vectors ${\mathbf v}_e$ do not cover the whole space $W$. \end{lem} \begin{proof} We can think of $W$ as the set of all possible (not necessarily proper) colorings of $G$. It is clear that a vector ${\mathbf v}\in W$ is orthogonal to ${\mathbf v}_e$ for some $e\in E(G)$ if and only if ${\mathbf v}$ takes the same value at the endpoints of $e$. It means that $G$ has a proper coloring with $q$ colors if and only if there is a vector ${\mathbf v}\in W$ which is not contained in any of the spaces ${\mathbf v}_e^{\bot}$. \end{proof} Combining the previous lemma with Theorem \ref{chariz} one gets the following peculiar characterization of colorability. \medskip \begin{proposition} If $q$ is an odd prime then $G$ can be colored by $q$ colors if and only if there is a vector ${\mathbf v}\in W$ such that the number of zero-one combinations of the vectors ${\mathbf v}_e$ resulting in ${\mathbf v}$ is odd. \end{proposition} \medskip Note that the space ${\mathbf v}_e^{\bot}$ depends only on the one dimensional space spanned by ${\mathbf v}_e$. It means that the vectors ${\mathbf v}_e$ can be replaced by any nonzero representative from their one dimensional spaces, which gives an even stronger version of the previous proposition. We also note that the ``if'' direction remains true if we delete the condition that $q$ is a prime number. Let $G=(V,E)$ be a directed graph and let $A$ be an Abelian group.
An $A$-flow on $G$ is a function $f:E\rightarrow A$ such that for all $v\in V$ \begin{equation*} \sum_{e\in \delta^+ (v)}f(e)=\sum_{e\in \delta^- (v)}f(e), \end{equation*} where $\delta^+ (v)$ denotes the set of outgoing edges and $\delta^- (v)$ denotes the set of incoming edges of $v$. If $f(e)\neq 0$ for all $e\in E$ then $f$ is called a nowhere zero flow. Clearly the existence of a nowhere zero flow on $G$ is independent of the orientation of $G$. If $G$ is undirected we will say that it admits a nowhere zero $A$-flow if some (and thus every) orientation of it admits a nowhere zero $A$-flow. Let $G$ be a fixed graph with a fixed orientation and consider the set $B$ of all possible flows on $G$. It is clear that $B$ is a subgroup of the direct product $A^E$ and one can prove easily that $B\simeq A^{|E|-|V|+m}$, where $m$ denotes the number of connected components of $G$. For each edge $e$ there is a subgroup $B_e\leq B$ consisting of those flows which vanish on $e$. Clearly, $G$ has a nowhere zero flow if and only if the subgroups $B_e$ do not cover the group $B$. Moreover, it is also clear that the intersection of the subgroups $B_e$ is trivial. It means in particular that if $G$ is a graph which is ``edge-minimal'' with respect to the property of having no nowhere zero flow (i.e., $G$ has no nowhere zero flow, but after deleting any edge, the resulting graph always has one) then the number of edges is less than $g_1(|B|)$, where $g_1$ is the function defined in the introduction. Note that if $A$ has a finite field structure, then the group $B$ can be regarded as a vector space over $A$ with hyperplane system $\{B_e|e\in E\}$.
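As an illustration of the isomorphism $B\simeq A^{|E|-|V|+m}$, the flows on a small graph can be enumerated directly; a minimal sketch in Python (brute force, taking $A={\mathbb Z}_q$):

```python
from itertools import product

def flows(edges, n_vertices, q):
    """All Z_q-flows on a directed graph: assignments f on the edges with
    Kirchhoff's condition (out-flow = in-flow mod q) at every vertex."""
    result = []
    for f in product(range(q), repeat=len(edges)):
        net = [0] * n_vertices
        for (u, v), fe in zip(edges, f):
            net[u] = (net[u] + fe) % q   # fe leaves u ...
            net[v] = (net[v] - fe) % q   # ... and enters v
        if all(x == 0 for x in net):
            result.append(f)
    return result

# Directed triangle 0->1->2->0: |E| = 3, |V| = 3, one component,
# so the flow group should have q^(|E|-|V|+1) = q elements.
triangle = [(0, 1), (1, 2), (2, 0)]
fl = flows(triangle, 3, 3)
assert len(fl) == 3 ** (3 - 3 + 1)
# Nowhere zero flows: the two constant nonzero circulations around the cycle.
nz = [f for f in fl if all(x != 0 for x in f)]
assert len(nz) == 2
```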
\section{hierarchy of conjectures} \begin{picture}(100,100)(10,10) \put (0,80){case $p>3$} \put (0,50){$c_2=2$} \put (30,53){\vector(1,0){20}} \put (55,50){$\varepsilon_p\geq {\rm log}_2(p)-1$} \put (130,53){\vector(1,0){20}} \put (155,50){$\varepsilon_p>1$} \put (185,53){\vector(1,0){20}} \put (210,50){C-AJT} \put (243,53){\vector(1,0){20}} \put (268,50){$\varepsilon_p\geq 0.5$} \put (280,48){\vector(0,-1){20}} \put (268,20){$\varepsilon_p>0$} \put (264,25){\vector(-1,0){20}} \put (244,21){\vector(1,0){20}} \put (230,20){W} \put (208,23){\vector(1,0){20}} \put (190,20){AB} \put (223,60){\vector(1,1){20}} \put (240,85){AJT} \end{picture} \begin{picture}(50,50)(10,10) \put(0,40){case $p=3$} \put(0,0){$c_2=2$} \put(30,3){\vector(1,0){20}} \put(55,0){$\varepsilon_3\geq {\rm log}_2(3)$} \put(110,3){\vector(1,0){20}} \put(133,0){$\varepsilon_3>0$} \put(163,1){\vector(1,0){20}} \put(183,5){\vector(-1,0){20}} \put(187,0){W} \put(199,1){\vector(1,0){20}} \put(219,5){\vector(-1,0){20}} \put(222,0){AB} \put(240,3){\vector(1,0){20}} \put(262,0){WT} \end{picture} \begin{picture}(10,10)(10,10) \end{picture} \bigskip \medskip \begin{tabular}{ll} \begin{tabular}{l} AJT\\ C-AJT\\ AB\\ W\\ WT\\ \end{tabular} & \begin{tabular}{l} Alon-Jaeger-Tarsi conjecture\\ choosability version of AJT\\ additive basis conjecture\\ weak conjecture\\ weak three flow conjecture\\ \end{tabular} \end{tabular} \bigskip {\bf Acknowledgements} The author thanks N. Alon, P.P. P\'alfy, L. Pyber and C. Szegedy for their kind help and helpful remarks. \medskip
cond-mat/0411329
\section{Introduction} In view of various applications such as the study of two--dimensional melting \cite{Pie80}, investigations of mesoscale structure formation \cite{Joa01} or engineering of colloidal crystals on spherical surfaces \cite{Din02}, the self--assembly of sub-$\mu$m colloidal particles at water--air or water--oil interfaces has gained much interest in recent years. These particles are trapped at the interface if the colloid is only partially wetted by both the water and the oil. This configuration is stable against thermal fluctuations and appears to be the global equilibrium state, as it is observed experimentally that the colloids immersed in the bulk phases are attracted to the interface \cite{Pie80} (see also Sec.~\ref{sec:1coll}). The mutual interaction between the trapped colloids at distances close to contact, i.e., within the range of molecular forces, is dominated by strong van--der--Waals attraction. In order to avoid coagulation due to this attraction, the colloids can be stabilized sterically with polymers or with charges such that the colloids repel each other. Variants of charge stabilization may include the coverage with ionizable molecules which dissociate in water, or the labeling of colloids with charged fluorescent markers. For charge--stabilized colloids at large distances, the resulting repulsive force at water interfaces stems from a dipole--dipole interaction as shown theoretically for point charges on surfaces \cite{Hur85} and verified experimentally for polystyrene (PS) spheres on water--oil interfaces \cite{Ave02}. Nonetheless, charged colloids at interfaces also show attractions far beyond the range of van--der--Waals forces. The corresponding experimental evidence can be roughly classified as follows. 
(i) According to Refs.~\cite{Ghe97,Ghe01,Gar98a,Gar98b,Sta00}, PS spheres (radii $R=0.25\dots 2.5$ $\mu$m) on flat water--air interfaces using highly deionized water exhibit spontaneous formation of complicated metastable mesostructures. They are consistent with the presence of an attractive, secondary minimum in the intercolloidal potential at distances $d/R\approx 3\dots 10$ with a depth of a few $k_B T$. The use of water slightly contaminated by ions seems to move the minimum further out and to reduce its depth \cite{Sta00,Que01}. (ii) In Ref.~\cite{Nik02}, PMMA spheres with radius $R=0.75$ $\mu$m were trapped at the interface of water droplets immersed in oil. Here, the secondary minimum has been measured at a distance $d/R=7.6$ and is reported to be surprisingly steep. The tentative explanation of these findings given in Ref.~\cite{Nik02} invokes an analogue of long--ranged flotation or capillary forces which decay $\propto 1/d$. This interpretation was criticized in Ref.~\cite{Meg03} (with which the authors of Ref.~\cite{Nik02} agreed \cite{Nik03}) and in Ref.~\cite{For04} which both concluded that possible capillary forces in this system are much shorter ranged, i.e., $\propto d^{-7}$, but the authors of these references disagree with respect to the sign of this shorter--ranged force. In yet another twist of the story, after completion of our work we encountered the very recent Ref.~\cite{Kra04} in which the authors claim that long--ranged capillary forces $\propto 1/d$ caused by the colloidal charges persist for sub-$\mu$m particles. This conclusion is based on measurements of the meniscus shape around single glass spheres with radii 200 \dots 300 $\mu$m floating at water--oil and water--air interfaces. Motivated by the experimental data summarized above and the still incomplete theoretical understanding, here we undertake a quite general analysis of capillary interactions between two spherical colloids trapped at fluid interfaces. 
We characterize the system by a general stress field which acts on the interface, e.g., due to a discontinuous electrostatic field at the interface, and by forces on the colloid of gravitational and/or electrostatic origin. Special attention will be given to the role of a restoring force fixing the interface, e.g., due to gravity or interface pinning. In Sec.~\ref{sec:1coll} we present a free energy model for a single colloid trapped at an interface in the limit of small stresses and forces. The general solution for the interface deformation will be used in Sec.~\ref{sec:2coll} in order to determine the effective potential between two colloids within the superposition approximation. In view of the differing theoretical results in the literature, the derivation of the interface deformation and the resulting effective potential is presented in detail in order to reveal properly the subtleties involved. In Sec.~\ref{sec:discussion} we apply the results to the case of charged polymeric spheres on water interfaces. It turns out that the constraint of approximate mechanical isolation of the experimental systems renders the capillary interaction basically short--ranged. Long--ranged attractive forces $\propto 1/d$ only arise through a restoring force acting on the interface (which is, however, weak) or in the presence of an external electric field. We shall discuss the relation to previous theoretical results, especially in view of the experimental results reported in Refs.~\cite{Nik02,Kra04}. Directions for further research will be pointed out. In Sec.~\ref{sec:summary} we summarize our results. \section{Equilibrium state of a single colloid} \label{sec:1coll} In this section we consider as a first step the equilibrium state of a single colloid of radius $R$ at the interface between two fluid phases denoted as 1 and 2. 
The contact angle formed by the interface and the colloid surface is given by Young's law, \begin{equation} \label{eq:young} \cos \theta = \frac{\gamma_1 - \gamma_2}{\gamma} , \end{equation} where $\gamma$ is the surface tension between phases 1 and 2, and $\gamma_i$ is the surface tension between the colloid and phase $i$. As a reference state, with respect to which changes in free energy will be measured, we take a planar meniscus configuration with the colloid at such a height $h_{\rm ref}$ that Young's law is satisfied (Fig.~\ref{fig:ref}). This corresponds to the equilibrium configuration of an uncharged colloid at the interface if its weight can be neglected --- which for generic cases is a safe approximation for $R \lesssim 1$ $\mu$m \cite{Kra00} (see also Sec.~\ref{sec:discussion}). We model the colloid as a smooth sphere so that the system is invariant under rotations around the colloid axis perpendicular to the reference planar meniscus. The presence of charges induces a shift of the system (colloid and interface) with respect to the reference state. Here we neglect corresponding changes in the surface tensions $\gamma$ and $\gamma_i$; this approximation is expected to be valid provided the concentration of charges is sufficiently small. This shift is characterized by the meniscus profile $u(r)$ relative to the planar configuration and by the height $h$ of the colloid center. In the reference configuration, the charge distribution is assumed to be already in equilibrium. In the following we do not consider the degree of freedom ``charge density field'' explicitly but take it to be fixed to that of the reference configuration. This amounts to neglecting the feedback of the interface displacement on the charge distribution. It turns out to be useful to introduce the radius $r_0$ of the three--phase contact line and the angle $\xi$ as auxiliary variables (see Fig.~\ref{fig:pert}). 
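For concreteness, the reference geometry determined by Young's law can be evaluated numerically; a minimal sketch in Python with purely illustrative parameter values (none of the numbers below are taken from the experiments cited above):

```python
from math import acos, cos, sin, pi

# Illustrative values: a colloid with R = 0.5 um and surface tensions
# chosen such that (gamma_1 - gamma_2)/gamma = 0.3.
R = 0.5e-6                        # colloid radius [m]
gamma = 0.05                      # tension of the 1-2 interface [N/m]
gamma_1, gamma_2 = 0.030, 0.015   # colloid-phase surface tensions [N/m]

theta = acos((gamma_1 - gamma_2) / gamma)   # Young's law, Eq. (eq:young)
h_ref = -R * cos(theta)                     # height of the colloid center
r0_ref = R * sin(theta)                     # contact-line radius

assert 0 < theta < pi / 2       # cos(theta) > 0 here since gamma_1 > gamma_2
assert h_ref < 0 < r0_ref       # the center sits below the flat meniscus
# consistency of the reference geometry: r0^2 + h^2 = R^2
assert abs(r0_ref**2 + h_ref**2 - R**2) < 1e-18
```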
\begin{figure} \begin{center} \epsfig{file=fig1.eps,width=.95\textwidth} \caption{ Geometry of the reference state. The equilibrium contact angle $\theta$ fixes the height $h_{\rm ref} = - R \cos \theta$ of the colloid center, the contact radius $r_{0, \rm ref} = R \sin \theta$, and the auxiliary angular variable $\xi_{\rm ref}=\theta$.} \label{fig:ref} \end{center} \end{figure} \begin{figure} \begin{center} \epsfig{file=fig2.eps,width=.95\textwidth} \caption{ Description of deviations from the reference state. $u(r)$ is the meniscus profile and $h$ is the height of the colloid center. The angle $\xi$ and the radius $r_0 = R \sin \xi$ of the three--phase contact line are auxiliary variables, which depend on $u(r)$ and $h$ through the geometrical relationship $h = u(r_0) - R \cos \xi$. In this mesoscopic description $r_0$ is defined as the position where the meniscus profile intersects the surface of the sphere. $A_i$ is the surface area of the colloid exposed to phase $i$.} \label{fig:pert} \end{center} \end{figure} \subsection{The free energy} In this subsection we formulate a free energy functional for the degrees of freedom $u(r)$ and $h$. As shown later, for the cases of interest here the deviations from the reference configuration are small enough to justify a perturbative treatment, and the free energy can be expanded up to quadratic order in $\Delta h:=h-h_{\rm ref}$ and $u(r)$. We denote by $\Pi(r)$ the vertical force per unit area acting on the meniscus surface in the reference configuration. In the case of a charged colloid, $\Pi(r)$ is given by the $zz$--component of the difference of the Maxwell stress tensor right above and below the meniscus, plus the pressure difference acting across the meniscus (including an imbalance in osmotic pressure due to the different concentration of ions just above and below the meniscus) \cite{For04,SMN02}.
We introduce the following dimensionless parameter to measure the relative strength of this force: \begin{equation} \label{eq:epsPi} \varepsilon_\Pi := \frac{1}{2 \pi \gamma r_{0,\rm ref}} \int_{S_{\rm men,ref}} \!\!\!\!\!\! dA\; \Pi = \frac{1}{\gamma r_{0,\rm ref}} \int_{r_{0,\rm ref}}^\infty dr\; r\,\Pi(r) , \end{equation} where the integral extends over the flat meniscus $S_{\rm men,ref}$ (the plane $z=0$ with a circular hole of radius $r_{0, \rm ref}$). We assume $\Pi(r \to \infty) \sim r^{-n}$ with $n>2$, so that the integral converges. In the same spirit we introduce the total vertical force ${\bf F}= F{\bf e}_z$ acting on the charged colloid in the flat reference configuration, which includes gravity, the electrostatic force, and the total (i.e., hydrostatic and osmotic) pressure exerted by the fluids. This leads to the definition \begin{equation} \label{eq:epsF} \varepsilon_F := - \frac{F}{2 \pi \gamma r_{0,\rm ref}} . \end{equation} These two dimensionless parameters will appear naturally in the course of the calculations. If $\varepsilon_\Pi$ and $\varepsilon_F$ vanish, the reference configuration {\em is} the equilibrium state; it is the global minimum of the free energy functional given in Eqs.~(\ref{eq:Fdef}) and (\ref{eq:F}) below if $\Pi \equiv 0$, $F \equiv 0$. The aforementioned perturbative expansion of the free energy can be rephrased as an expansion in terms of the small parameters $\varepsilon_\Pi$ and $\varepsilon_F$. The free energy ${\cal F}$ of the colloid expressed relative to the aforementioned reference configuration consists of five terms: \begin{equation} \label{eq:Fdef} {\cal F} = {\cal F}_{\rm cont} + {\cal F}_{\rm men} + {\cal F}_{\rm vol} + {\cal F}_{\rm inter} + {\cal F}_{\rm coll} . \end{equation} In the following we discuss each contribution: \begin{itemize} \item {\it Fluid contact of the colloid}. 
With $A_i$ denoting the surface area of the colloid which is in contact with phase $i$, the surface free energy of the colloid due to its exposure to the phases 1 and 2 is \begin{equation} \label{eq:Fcont} {\cal F}_{\rm cont} = \gamma_1 A_1 + \gamma_2 A_2 - (\gamma_1 A_{1, \rm ref} + \gamma_2 A_{2, \rm ref} ) . \end{equation} In Appendix~\ref{sec:Fcont} we express this contribution as a function of $\Delta h$ and $u(r)$. The final result, valid up to corrections of at least third order in $\varepsilon_\Pi$ or $\varepsilon_F$, reads \begin{equation} \label{eq:Fcont_iso} {\cal F}_{\rm cont} \simeq \pi \gamma [ u(r_{0, \rm ref}) - \Delta h ]^2 + \pi \gamma (r_0^2 - r_{0,\rm ref}^2) . \end{equation} \item {\it Change of the meniscus area}. The free energy contribution due to variations in the meniscus area relative to the reference state reads \begin{equation} {\cal F}_{\rm men} = \gamma \int_{S_{\rm men}} \!\!\!\!\!\! dA \; \sqrt{1+|\nabla u|^2} - \gamma \int_{S_{\rm men,ref}} \!\!\!\!\!\! dA , \end{equation} where $S_{\rm men}$ is the surface of the fluid interface projected onto the plane $z=0$ (in which the reference interface is located), and $\nabla$ is the gradient operator on the flat reference interface. For small slopes ($|\nabla u| \ll 1$) one obtains: \begin{eqnarray} \label{eq:Fmen} {\cal F}_{\rm men} & \simeq & \gamma \int_{S_{\rm men}\backslash S_{\rm men,ref}} \!\!\!\!\!\! dA + \frac{1}{2} \gamma \int_{S_{\rm men}} \!\!\!\!\!\! dA \; |\nabla u|^2 \nonumber \\ & \simeq & \pi \gamma (r_{0,\rm ref}^2 - r_0^2) + \frac{1}{2} \gamma \int_{S_{\rm men,ref}} \!\!\!\!\!\! dA \; |\nabla u|^2 \;. \end{eqnarray} Since the $u$--dependent term is of second order in $u$, we have approximated the integration domain $S_{\rm men}$ by $S_{\rm men, ref}$; the corrections are at least of third order in the small parameters $\varepsilon_\Pi$ or $\varepsilon_F$.
The first term in this expression represents the change in the area of the meniscus which is cut out by the colloid, and in Eq.~(\ref{eq:Fdef}) it cancels the second term of Eq.~(\ref{eq:Fcont_iso}). \item {\it Volume forces on the fluids}. We consider the case that the only volume force acting on the fluid phases is gravity. The electrostatic forces are active only at the surfaces, where a net charge can accumulate. If the spherical colloid is replaced by a long cylinder of radius $r_{0, \rm ref}$ (Fig.~\ref{fig:cylinder}) the change in gravitational potential energy relative to the reference state due to the displacements of volumes of the fluid phases can be calculated easily: \begin{equation} \label{eq:Fvol} {\cal F}_{\rm vol} = \frac{1}{2} \gamma \int_{S_{\rm men,ref}} \!\!\!\!\!\! dA \; \frac{u^2(r)}{\lambda^2} = \pi \gamma \int_{r_{0, \rm ref}}^\infty dr \; r \, \frac{u^2(r)}{\lambda^2} \, , \end{equation} assuming that the mass density of phase 2 is larger than that of phase 1, i.e., $\varrho_2 > \varrho_1$; $\lambda$ denotes the capillary length \begin{equation} \lambda=\sqrt{\frac{\gamma}{|\varrho_2-\varrho_1|g}} \end{equation} in terms of the acceleration $g$ of gravity. For a spherical colloid ${\cal F}_{\rm vol}$ in Eq.~(\ref{eq:Fvol}) has to be corrected to account for the specific dependence of the displaced fluid volume on the colloid height $h$, yielding a slightly cumbersome expression \cite{PKDN93}. This correction is usually numerically small and we neglect it for reasons of simplicity. As will be shown in Subsec.~\ref{sec:lambda}, this is justified because the results of interest here are insensitive to the precise form of ${\cal F}_{\rm vol}$ in the limit $\lambda \rightarrow \infty$ ($\lambda \approx 1$ mm, which is much larger than any other relevant length scale of the system).
\begin{figure} \begin{center} \epsfig{file=fig3.eps,width=.95\textwidth} \caption{ Long cylinder of radius $r_{0, \rm ref}$ immersed vertically into the fluid interface between the phases 1 and 2.} \label{fig:cylinder} \end{center} \end{figure} \item {\it Force on the fluid interface}. The aforementioned surface force density $\Pi(r)$ acts on the fluid interface between phase 1 and phase 2. The free energy change due to the ensuing displacements of the meniscus is \begin{equation} \label{eq:Finter} {\cal F}_{\rm inter} \simeq - \int_{S_{\rm men}} \!\!\!\!\!\! dA \; \Pi u \simeq - \int_{S_{\rm men, ref}} \!\!\!\!\!\! dA \; \Pi u = - 2 \pi \int_{r_{0, \rm ref}}^\infty dr \; r \, \Pi(r) u(r) . \end{equation} Here $\Pi(r)$ is the surface force in the {\em reference} configuration ($\Pi>0$ corresponds to a force pointing upward). Changes in the force induced by meniscus deformations and colloidal displacements contribute terms of higher orders in $\varepsilon_\Pi$ or $\varepsilon_F$ in Eq.~(\ref{eq:Finter}). The replacement of $S_{\rm men}$ by $S_{\rm men,ref}$ in Eq.~(\ref{eq:Finter}) introduces terms of higher order, too. \item {\it Contribution from the colloid}. The free energy change due to a vertical displacement of the colloid is \begin{equation} \label{eq:Fcoll} {\cal F}_{\rm coll} \simeq - F \Delta h, \end{equation} where $F$ is the vertical force on the colloid in the {\em reference} configuration. Like for ${\cal F}_{\rm inter}$, changes of $F$ due to deviations from the reference configuration contribute to higher order terms. 
\end{itemize} In conclusion, by adding Eqs.~(\ref{eq:Fcont_iso}), (\ref{eq:Fmen}), (\ref{eq:Fvol}), (\ref{eq:Finter}), and (\ref{eq:Fcoll}) we obtain the following approximate expression for the total free energy, which is correct up to second order in $\varepsilon_\Pi$ or $\varepsilon_F$, and which is a function of $\Delta h$ and a functional of $u(r)$: \begin{equation} \label{eq:F} {\cal F} \simeq 2 \pi \gamma \int_{r_{\rm 0, ref}}^{\infty} dr \; r \left[ \frac{1}{2}\left(\frac{d u}{d r} \right)^2 + \frac{u^2}{2 \lambda^2} - \frac{1}{\gamma} \Pi \, u \right] + \pi \gamma [ u_0 - \Delta h ]^2 - F \Delta h , \end{equation} where $u_0 \equiv u(r_{0, \rm ref})$. \subsection{Minimization of the free energy} \label{sec:minF} The equilibrium configuration of the system minimizes the free energy expression~(\ref{eq:F}). The minimization proceeds in two stages. First we seek the minimum with respect to $\Delta h$ at fixed $u(r)$: \begin{equation} \label{eq:h_eq} \frac{\partial {\cal F}}{\partial (\Delta h)} = 0 \qquad \Rightarrow \qquad \Delta h = u_0 - \varepsilon_F r_{0, \rm ref} , \end{equation} where we have used the definition~(\ref{eq:epsF}). Using repeatedly the definitions of $h$ and $r_0$ in terms of the auxiliary angle $\xi$ (Fig.~\ref{fig:pert}) we can compute the change of the contact radius: \begin{equation} \label{eq:r_0,eq} \Delta r_0 \equiv r_0 - r_{0, \rm ref} = R(\sin \xi - \sin \theta) \simeq (\theta - \xi) h_{\rm ref} \simeq \frac{\Delta h - u_0}{r_{0, \rm ref}} h_{\rm ref} = - \varepsilon_F h_{\rm ref} , \end{equation} plus corrections of second order in $\varepsilon_\Pi$, $\varepsilon_F$. In the second step, we minimize Eq.~(\ref{eq:F}) with respect to $u(r)$ at fixed $\Delta h$. This is a problem of variations with a free boundary condition at $r=r_{0, \rm ref}$ \cite{CoHi}. 
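Before carrying out the minimization it is instructive to attach numbers to the quantities entering Eq.~(\ref{eq:F}); the sketch below (Python, with purely illustrative parameter values) computes the capillary length and checks the definition~(\ref{eq:epsPi}) for a dipole--like stress field $\Pi(r)=\Pi_0 (r_{0,\rm ref}/r)^6$, for which the integral evaluates to $\varepsilon_\Pi=\Pi_0 r_{0,\rm ref}/(4\gamma)$:

```python
from math import sqrt

# Capillary length for a water--air interface (illustrative values):
gamma = 0.072        # surface tension [N/m]
drho = 998.0         # density contrast [kg/m^3]
g = 9.81             # gravitational acceleration [m/s^2]
lam = sqrt(gamma / (drho * g))
assert 2e-3 < lam < 3e-3         # O(1 mm), far above colloidal scales

# epsilon_Pi of Eq. (eq:epsPi) for a dipole-like stress
# Pi(r) = Pi0 (r0/r)^6; the integral gives epsilon_Pi = Pi0*r0/(4*gamma).
r0 = 5e-7                        # contact radius [m] (illustrative)
Pi0 = 0.08 * gamma / r0          # amplitude with r0*Pi0/gamma = 0.08
closed_form = Pi0 * r0 / (4 * gamma)

# crude numerical check of the integral (midpoint rule, truncated tail)
N, rmax = 200000, 200 * r0
h = (rmax - r0) / N
integral = sum((r0 + (i + 0.5) * h) * Pi0 * (r0 / (r0 + (i + 0.5) * h))**6
               for i in range(N)) * h
eps_pi = integral / (gamma * r0)
assert abs(eps_pi - closed_form) / closed_form < 1e-3
assert abs(closed_form - 0.02) < 1e-12
```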
Variation with respect to $u(r \neq r_{0, \rm ref})$ yields a second--order ordinary differential equation, \begin{equation} \label{eq:younglaplace} \frac{d^2 u}{d r^2} + \frac{1}{r} \frac{d u}{d r} - \frac{1}{\lambda^2} u = - \frac{1}{\gamma} \Pi (r) , \end{equation} while variation with respect to $u_0$ provides a boundary condition: \begin{equation} \label{eq:u'0} u'_0 := \left. \frac{d u}{d r}\right|_{r=r_{0, \rm ref}} = \frac{u_0 - \Delta h}{r_{0, \rm ref}} = \varepsilon_F , \end{equation} where the last equality follows from Eq.~(\ref{eq:h_eq}). The second boundary condition one has to impose on Eq.~(\ref{eq:younglaplace}) follows from the requirement that the meniscus is asymptotically flat far from the colloid (assuming that $\Pi(r \to \infty)=0$): \begin{equation} \label{eq:u_infty} \lim_{r \to \infty} u(r) = 0 . \end{equation} Equation~(\ref{eq:younglaplace}) describes mechanical equilibrium of the interface such that the Laplace pressure balances the forces acting on the interface. The boundary condition~(\ref{eq:u'0}) expresses mechanical equilibrium of the colloidal particle: At the contact line (i.e., the circle with radius $r_0$ at $z=u_0$) the interface exerts a force onto the colloid which has a non--vanishing contribution only in $z$--direction with the magnitude $2 \pi \gamma r_{0} \sin[\arctan(u_0')] \approx 2 \pi \gamma r_{0,\rm ref} u'_0$. This contact line force is balanced by the total force $F$. The solution of Eqs.~(\ref{eq:younglaplace}--\ref{eq:u_infty}) can be written in terms of the modified Bessel functions of zeroth order: \begin{equation} \label{eq:solBessel} u(r) = \frac{1}{\gamma} I_0 \left(\frac{r}{\lambda}\right) \int_r^{\infty} \!\!\! ds \; s\, \Pi(s) K_0 \left(\frac{s}{\lambda}\right) + \frac{1}{\gamma} K_0 \left(\frac{r}{\lambda}\right) \left[ A + \int_{r_{0, \rm ref}}^r \!\!\! 
ds \; s\, \Pi(s) I_0\left(\frac{s}{\lambda}\right) \right] , \end{equation} where the integration constant $A$ is determined by the boundary condition~(\ref{eq:u'0}). \subsection{Asymptotic behavior in the limit $\lambda \to \infty$} \label{sec:lambda} For typical values of the parameters, $\lambda$ is of the order of millimeters and therefore much larger than any other length scale occurring for experiments with sub-micrometer colloids. In order to study the intermediate asymptotics ($r_{0, \rm ref}, r \ll \lambda$) of $u(r)$ as given by Eq.~(\ref{eq:solBessel}), we insert the asymptotic expansions of the Bessel functions \cite{AbSt} as $\lambda \to \infty$ and retain those terms which do not vanish in this limit. Assuming that $\Pi(r \to \infty)$ decays sufficiently fast, Eq.~(\ref{eq:solBessel}) reduces to \begin{equation} \label{eq:sol1} u (r) \simeq r_{\rm 0, ref} (\varepsilon_\Pi - \varepsilon_F) \ln\frac{C \lambda}{r} - \frac{1}{\gamma} \int_{r}^{+\infty} \!\!\!\! ds \; s \, \Pi(s) \ln \frac{s}{r} , \end{equation} where $C = 2 e^{-\gamma_{\rm E}} \simeq 1.12$ and $\gamma_{\rm E}$ is Euler's constant. In Eq.~(\ref{eq:sol1}), we have expressed the integration constant $A$ appearing in Eq.~(\ref{eq:solBessel}) in terms of $\varepsilon_F$ by using the boundary condition~(\ref{eq:u'0}). The first term in Eq.~(\ref{eq:sol1}) is a solution of the homogeneous part of Eq.~(\ref{eq:younglaplace}) (with $\lambda^{-1}=0$ in the equation), demonstrating that the limit $\lambda \to \infty$ is singular as long as $\varepsilon_\Pi \neq \varepsilon_F$. The second term corresponds to a particular solution of the inhomogeneous differential equation. If the surface force $\Pi(r)$ decays algebraically, $\Pi(r \to \infty) \propto r^{-n}$, this term decays like $r^{2-n}$ (since we have assumed $n>2$), so that the logarithmic contribution is dominant. If $n \leq 2$, the asymptotic behavior is no longer given by Eq.~(\ref{eq:sol1}) but a different dependence on $\lambda$ arises. 
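The constant $C$ originates from the small--argument behavior $K_0(x)\simeq \ln(C/x)$ of the modified Bessel function, which is the source of the logarithm in Eq.~(\ref{eq:sol1}); this can be checked with a short series computation (a sketch in Python using the standard power series for $I_0$ and $K_0$):

```python
from math import log, exp

EULER_GAMMA = 0.5772156649015329

def i0(x, terms=30):
    """Power series for the modified Bessel function I_0(x)."""
    s, t = 1.0, 1.0
    for k in range(1, terms):
        t *= (x / 2)**2 / k**2
        s += t
    return s

def k0(x, terms=30):
    """K_0(x) = -(ln(x/2) + gamma_E) I_0(x) + sum_k H_k (x/2)^{2k}/(k!)^2."""
    s, t, H = 0.0, 1.0, 0.0
    for k in range(1, terms):
        t *= (x / 2)**2 / k**2
        H += 1.0 / k
        s += H * t
    return -(log(x / 2) + EULER_GAMMA) * i0(x) + s

C = 2 * exp(-EULER_GAMMA)        # the constant appearing in Eq. (eq:sol1)
assert abs(C - 1.1229) < 1e-3
# For x = r/lambda << 1, K_0(x) approaches ln(C/x):
for x in (1e-2, 1e-3):
    assert abs(k0(x) - log(C / x)) < 1e-3
```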
At distances $r$ of the order of $\lambda$, the expression~(\ref{eq:sol1}) is not valid and a crossover to the exact solution~(\ref{eq:solBessel}) takes place in order to satisfy the boundary condition at infinity (Eq.~(\ref{eq:u_infty})). Figure \ref{fig:u(r)} sketches the behavior of the meniscus profile $u(r)$. As can be checked directly in the differential equation~(\ref{eq:younglaplace}), one has $u(r) \sim (\lambda^2/\gamma) \Pi(r) \propto r^{-n}$ asymptotically as $r \to +\infty$, which expresses the balance between the surface force $\Pi(r)$ on the meniscus and the gravitational force. There is, however, another intermediate regime $r \gg \lambda$ but not too large, in which $u(r)$ decays $\propto \exp(-r/\lambda)$ and which corresponds to the solution of Eq.~(\ref{eq:younglaplace}) with $\Pi$ set to zero, i.e., when gravity is balanced by the Laplace pressure induced by the meniscus curvature. \begin{figure} \begin{center} \epsfig{file=fig4.eps,width=.5\textwidth} \caption{ The meniscus solution for the parameter choice $\lambda/r_{0,\rm ref}= 10^2$, $\varepsilon_F=10^{-2}$, and a dipole--like stress field $r_{0,\rm ref}\:\Pi(r) / \gamma = 0.08\:(r_{0,\rm ref}/r)^6$ ($\varepsilon_\Pi=2\cdot 10^{-2}$). The solid line represents the solution $u(r)$ given by Eq.~(\ref{eq:solBessel}). The dashed line is the intermediate asymptotic solution given by Eq.~(\ref{eq:sol1}). Note that the capillary length $\lambda$ is typically of $O$(1 mm). In the present context we focus on the length scale $r_{0,\rm ref},\: r \ll \lambda$, for which Eq.~(\ref{eq:sol1}) holds. } \label{fig:u(r)} \end{center} \end{figure} These conclusions, as well as the functional dependence of $u(r)$ on $\Pi(r)$, are robust and independent of the details of the boundary condition at $r \to \infty$. In order to support this statement, we consider two cases which implement the distant boundary condition differently: \begin{itemize} \item {\it Pinned interface}. 
In the absence of gravity the interface is assumed to be pinned at a finite distance $L$ from the colloid. This corresponds to setting $\lambda^{-1}=0$ in Eq.~(\ref{eq:younglaplace}) and replacing Eq.~(\ref{eq:u_infty}) by the new boundary condition $u(L)=0$. In this case the solution of Eq.~(\ref{eq:younglaplace}) is given by \begin{equation} \label{eq:sol_pin} u (r) = r_{\rm 0, ref} (\tilde{\varepsilon}_\Pi - \varepsilon_F) \ln\frac{L}{r} - \frac{1}{\gamma} \int_{r}^{L} \!\!\!\! ds \; s \, \Pi(s) \ln \frac{s}{r} , \end{equation} where \begin{equation} \label{eq:epsPi_pin} \tilde{\varepsilon}_\Pi := \frac{1}{\gamma r_{0,\rm ref}} \int_{r_{0,\rm ref}}^L dr\; r\,\Pi(r) , \end{equation} in analogy to Eq.~(\ref{eq:epsPi}). In the intermediate asymptotic regime ($r_{0, \rm ref},\: r \ll L$), the meniscus profile is then given by \begin{equation} \label{eq:sol2} u (r) \simeq r_{\rm 0, ref} (\varepsilon_\Pi - \varepsilon_F) \ln\frac{L}{r} - \frac{1}{\gamma} \int_{r}^{+\infty} \!\!\!\! ds \; s \, \Pi(s) \ln \frac{s}{r} , \end{equation} assuming that $\Pi(r)$ decays sufficiently fast so that the integrals in Eqs.~(\ref{eq:sol_pin}) and (\ref{eq:epsPi_pin}) converge as $L \to \infty$. Equation (\ref{eq:sol2}) resembles Eq.~(\ref{eq:sol1}) with $C \lambda$ replaced by $L$. \item {\it Pinned curved reference interface}. In some experiments the interface between phases 1 and 2 is in fact closed, so that the colloidal particle lies at the surface of a large nonvolatile spherical droplet of phase 2 which is immersed in phase 1 \cite{Nik02} (Fig.~\ref{fig:droplet}) and fixed by certain means (e.g.~by a glass plate). \begin{figure} \begin{center} \epsfig{file=fig5.eps,width=.55\textwidth} \caption{ Colloid at the surface of a droplet of phase 2 immersed into phase 1. The radius $R_{\rm drop}$ of the droplet, which is spherical without the colloid, is usually much larger than the colloid radius $R$. 
(The deformation of the droplet has been exaggerated.)} \label{fig:droplet} \end{center} \end{figure} The free energy functional~(\ref{eq:F}) has to be modified to account for the curvature of the reference interface as well as for the constraint that the droplet volume remains unchanged under deformation. To determine the interface deformation, we minimize the functional and employ the boundary condition that the droplet is fixed at some point far from the colloid. The mathematical details and the corresponding solution for $u(r)$ are presented in Appendix~\ref{sec:droplet}. Here we quote only the intermediate asymptotic behavior ($r_{0, \rm ref},\; r \ll R_{\rm drop}$, where $R_{\rm drop}$ is the radius of the undeformed droplet): \begin{equation} \label{eq:sol3} u (r) \simeq r_{\rm 0, ref} (\varepsilon_\Pi - \varepsilon_F) \ln\frac{\tilde{C} R_{\rm drop}}{r} - \frac{1}{\gamma} \int_{r}^{+\infty} \!\!\!\! ds \; s \, \Pi(s) \ln \frac{s}{r} , \end{equation} with $\tilde{C}$ a numerical constant given by Eq.~(\ref{eq:tildeC}). Again Eq.~(\ref{eq:sol3}) closely resembles Eqs.~(\ref{eq:sol1}) and (\ref{eq:sol2}). \end{itemize} The physical reason for the occurrence of the singularity in the limit $\lambda,\; L,\; R_{\rm drop} \rightarrow +\infty$ is that a ``restoring force'' far from the particle is required to yield a well--defined unperturbed interface which allows one to determine the deformation $u(r)$ unambiguously. For example, if one takes the limit $\lambda \rightarrow +\infty$ in the Young--Laplace Eq.~(\ref{eq:younglaplace}), it is inconsistent to impose the boundary condition $u(r\rightarrow +\infty)=0$ on the corresponding solution. The special case $\varepsilon_\Pi=\varepsilon_F$, however, is not singular; this will be discussed in Sec.~\ref{sec:discussion}.
Furthermore, the comparison of Eqs.~(\ref{eq:sol1}), (\ref{eq:sol2}), and (\ref{eq:sol3}) demonstrates that the functional form of the intermediate asymptotic behavior is independent of how the restoring force is implemented. This corresponds to the so--called intermediate asymptotic behavior of the second kind \cite{Barenblatt}, characterized by the following features: \begin{enumerate} \item There is a length scale ($\lambda$, $L$, or $R_{\rm drop}$) which is much larger than the other length scales of the system under consideration and which seems --- at first sight --- to be irrelevant. \item Nevertheless, this length scale determines the dominant logarithmic (or more generally, the power--law) dependence. \item The detailed physical origin of this length scale (in the examples we have considered, gravity, pinning of a reference flat or curved interface) does not matter. \end{enumerate} Well known examples of this kind of asymptotic behavior are critical phenomena in phase transitions \cite{Goldenfeld}. In that case, it is the microscopic length scale given by the amplitude of the correlation length which cannot be set to zero although it is much smaller than the correlation length itself. This microscopic length scale is required to formulate the power--law behavior of certain properties of the system, but its detailed physical origin is unimportant for the universal decay exponents of the power laws. \section{Effective interaction potential of two floating colloids} \label{sec:2coll} In this section we consider the equilibrium state of two identical colloids floating at the interface at a fixed lateral distance $d$ and compute the effective interaction potential $V_{\rm men}(d)$ generated by the meniscus. The free energy can be derived along the same lines leading to Eq.~(\ref{eq:F}), but with due account for the fact that in the presence of two colloids the meniscus slope no longer exhibits rotational symmetry.
Nonetheless $V_{\rm men}$ depends only on the distance $d$ between the centers of the two spheres. The reference configuration is that of two colloids floating on a planar interface with the corresponding reference free energy being independent of $d$. In this case one has for the free energy relative to that of the reference configuration a contribution ${\cal F}_{\rm men}+{\cal F}_{\rm vol}+{\cal F}_{\rm inter}$ from the meniscus and a contribution of the form ${\cal F}_{\rm cont}+{\cal F}_{\rm coll}$ from each colloid. (The total free energy includes also the direct interaction between the colloids; this contribution will be considered in Subsec.~\ref{sec:elec}.) From Eqs.~(\ref{eq:Fcont_aniso}), (\ref{eq:Fmen}), (\ref{eq:Fvol}), (\ref{eq:Finter}), and (\ref{eq:Fcoll}) one obtains: \begin{equation} \label{eq:F2} {\cal \hat{F}} \simeq \gamma \int_{\mathbb{R}^2\backslash(S_1 \bigcup S_2)} \!\!\!\!\!\! dA \; \left[ \frac{1}{2} |\nabla \hat{u}|^2 + \frac{\hat{u}^2}{2 \lambda^2} - \frac{1}{\gamma} \hat{\Pi} \, \hat{u} \right] + \sum_{\alpha=1,2} \left\{ \frac{\gamma}{2 r_{0, \rm ref}} \oint_{\partial S_\alpha} \!\!\! d\ell \; [\Delta \hat{h}_\alpha - \hat{u}]^2 - \hat{F} \Delta \hat{h}_\alpha \right\} . \end{equation} Here, $\hat{u}$ is the meniscus profile in the presence of two colloids, $\Delta \hat{h}_\alpha$ are the corresponding heights, $\hat{\Pi}$ is the vertical force per unit area on the meniscus in the reference configuration, and $\hat{F}_\alpha$ is the force on each colloid. By symmetry, one has $\Delta \hat{h}_1=\Delta \hat{h}_2$ and $\hat{F}_1=\hat{F}_2$. $S_\alpha$ are the circular disks delimited by the contact lines of the colloids in the reference configuration, $\partial S_\alpha$ are the contact lines, with the convention that we trace them counterclockwise, and $S_{\rm men, ref} = \mathbb{R}^2\backslash(S_1 \bigcup S_2)$ (see Fig.~\ref{fig:2coll_top}). 
\begin{figure} \begin{center} \epsfig{file=fig6.eps,width=.9\textwidth} \caption{ Top view (projection onto the plane $z=0$) of the reference configuration with two colloids. $d$ is the distance between the colloid centers. $S_1$ and $S_2$ are disks of radius $r_{0, \rm ref}$, the corresponding circumferences (counterclockwise) are $\partial S_1$ and $\partial S_2$. The projection of the interface is $S_{\rm men, ref} = \mathbb{R}^2\backslash(S_1 \bigcup S_2)$.} \label{fig:2coll_top} \end{center} \end{figure} \subsection{Minimization of the free energy within the superposition approximation} The equilibrium configuration is the minimum of the free energy given by Eq.~(\ref{eq:F2}). The minimization procedure closely follows that of Subsec.~\ref{sec:minF}. First, minimizing with respect to $\Delta \hat{h}_\alpha$ at fixed meniscus height $\hat{u}$ leads to the height of the colloids, \begin{equation} \label{eq:h_eq2} \frac{\partial {\cal \hat{F}}}{\partial (\Delta \hat{h}_\alpha)} = 0 \qquad \Rightarrow \qquad \Delta \hat{h}_\alpha = \bar{\hat{u}}_\alpha - \varepsilon_{\hat{F}}\; r_{0, \rm ref} , \end{equation} where \begin{equation} \bar{\hat{u}}_\alpha := \frac{1}{2 \pi r_{0, \rm ref}} \oint_{\partial S_\alpha} \!\!\! d\ell \; \hat{u} \end{equation} is the mean height of the contact line. In the next step, we minimize with respect to $\hat{u}$ at fixed $\Delta \hat{h}_\alpha$.
Variation in the interior of the domain $\mathbb{R}^2\backslash(S_1 \bigcup S_2)$ provides a second--order partial differential equation, \begin{equation} \label{eq:younglaplace2} \nabla^2 \hat{u} - \frac{1}{\lambda^2} \hat{u} = - \frac{1}{\gamma} \hat{\Pi} , \end{equation} while variation at the boundary $\partial S_1 \bigcup \partial S_2$ provides the following {\it transversality conditions} \cite{CoHi}: \begin{equation} \label{eq:trans} \frac{\partial \hat{u} ({\bf r})}{\partial n_\alpha} = \frac{\hat{u}({\bf r})-\Delta \hat{h}_\alpha}{r_{0, \rm ref}} = \varepsilon_{\hat{F}} + \frac{\hat{u}({\bf r})-\bar{\hat{u}}_\alpha}{r_{0, \rm ref}} , \qquad {\bf r}=(x,y) \in \partial S_\alpha , \end{equation} where the last equality follows from Eq.~(\ref{eq:h_eq2}). In this expression, $\partial/\partial n_\alpha$ is the derivative in the outward normal direction of $\partial S_\alpha$. (In this way, the triad (${\bf e}_n, {\bf e}_t, {\bf e}_z$) is right-handed, where ${\bf e}_n$ is the unit vector in the outward normal direction, ${\bf e}_t$ is the unit vector in the counterclockwise tangent direction, and ${\bf e}_z$ is the unit vector in the positive $z$--direction.) When applying the transversality condition, in the context of Gauss' theorem one must keep in mind that the boundary of the region $\mathbb{R}^2\backslash(S_1 \bigcup S_2)$ consists of the contours $\partial S_\alpha$ traced in {\em clockwise direction} with the normals directed towards the interior of $S_\alpha$. (The boundary at $r \rightarrow \infty$ does not contribute due to Eq.~(\ref{eq:u_infty2}).) In the special case of a single colloid, rotational invariance reduces Eq.~(\ref{eq:trans}) to Eq.~(\ref{eq:u'0}). Finally, one has the additional boundary condition \begin{equation} \label{eq:u_infty2} \lim_{r \to \infty} \hat{u}({\bf r}) = 0 . 
\end{equation} Solving Eq.~(\ref{eq:younglaplace2}) with the boundary conditions given by Eqs.~(\ref{eq:trans}) and (\ref{eq:u_infty2}) is a difficult task. (We are only aware of the --- already very involved --- solution of the homogeneous Eq.~(\ref{eq:younglaplace2}), i.e., $\hat \Pi =0$, with the simplified boundary condition $\partial \hat u/\partial n_\alpha = \varepsilon_{\hat F}$ \cite{Kra91}.) For the present purpose, one can use the so-called {\em superposition approximation} \cite{Nico49,CHW81,PKDN93}, which yields the correct solution in the asymptotic limit of large separation $d \gg R$ between the colloids. Let ${u}_\alpha$ denote the equilibrium meniscus profile as if colloid $\alpha$ were alone, with ${\Pi}_\alpha$ and ${F}_\alpha$ denoting the corresponding forces. The superposition approximation then reads: \begin{eqnarray} \label{eq:superposition} \hat{u} & \simeq & {u}_1 + {u}_2 , \nonumber \\ \hat{\Pi} & \simeq & {\Pi}_1 + {\Pi}_2 , \\ \hat{F} & \simeq & {F}_1 = {F}_2 . \nonumber \end{eqnarray} Notice that the fields $u_\alpha({\bf r})$ and $\Pi_\alpha({\bf r})$ are defined in the domain $\mathbb{R}^2\backslash S_\alpha$, while the fields $\hat{u}({\bf r})$ and $\hat{\Pi}({\bf r})$ are defined in the smaller domain $\mathbb{R}^2\backslash(S_1 \bigcup S_2)$. Equations~(\ref{eq:younglaplace2}) and (\ref{eq:u_infty2}) are fulfilled by this approximate solution, but the boundary condition~(\ref{eq:trans}) is violated: using Eqs.~(\ref{eq:superposition}) and the boundary condition in Eq.~(\ref{eq:u'0}) for the single--colloid solution, one obtains \begin{equation} \label{eq:trans1} \left[ \frac{\partial \hat{u} ({\bf r})}{\partial n_1} - \varepsilon_{\hat{F}} - \frac{\hat{u}({\bf r})-\bar{\hat{u}}_1}{r_{0, \rm ref}} \right]_{{\bf r} \in \partial S_1} \simeq \left[ \frac{\partial {u}_2 ({\bf r})}{\partial n_1} - \frac{1}{r_{0, \rm ref}} \left\{ {u}_2 ({\bf r})- \frac{1}{2 \pi r_{0, \rm ref}} \oint_{\partial S_1} \!\!\!
d\ell \; {u}_2 \right\} \right]_{{\bf r} \in \partial S_1} , \end{equation} and a similar expression for the other colloid with the indices 1 and 2 interchanged. In general, this expression does not vanish as required by Eq.~(\ref{eq:trans}). If $d$ is large, Eq.~(\ref{eq:trans1}) can be evaluated by expanding ${u}_2$ into a Taylor series around the center of colloid 1, yielding to lowest order (${\bf e}_1$ is the outward directed normal unit vector of $\partial S_1$) \begin{equation} \label{eq:correction_sup} \left[ \frac{\partial \hat{u} ({\bf r})}{\partial n_1} - \varepsilon_{\hat{F}} - \frac{\hat{u}({\bf r})-\bar{\hat{u}}_1}{r_{0, \rm ref}} \right]_{{\bf r} \in \partial S_1} \simeq \frac{1}{4} r_{0, \rm ref} [ {\bf e}_1 {\bf e}_1 : \nabla \nabla {u}_2 (d) + \nabla^2 {u}_2 (d) ] \end{equation} in dyadic notation. Inserting the single--colloid solution given in Eq.~(\ref{eq:sol1}), one finds that this expression decays like $d^{-2}$ if $\varepsilon_F \neq \varepsilon_\Pi$, and like ${\Pi}(d) \sim d^{-n}$ if $\varepsilon_F = \varepsilon_\Pi$. The superposition solution can be used to determine the vertical displacement according to Eq.~(\ref{eq:h_eq2}): \begin{equation} \label{eq:z_superposition} \Delta \hat{h}_\alpha \simeq \Delta {h} + \bar{{u}}, \end{equation} where $\Delta {h}$ is the relative vertical displacement of an isolated colloid (Eq.~(\ref{eq:h_eq})) and \begin{equation} \label{eq:meanu} \bar{{u}} := \frac{1}{2 \pi r_{0, \rm ref}} \oint_{\partial S_1} \!\!\! d\ell \; {u}_2 \end{equation} is the average of the single--colloid meniscus height at the contact line of the other colloid. 
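The large--$d$ expansion of $\bar{u}$ used here can be illustrated numerically: for the single--colloid profile of Eq.~(\ref{eq:sol1}) the logarithmic part is harmonic and therefore averages over the contact circle exactly to its value at the center, so that $\bar{u} \simeq u(d) - r_{0,\rm ref}^2\, \Pi(d)/(4\gamma)$. A minimal sketch, with hypothetical parameters (units with $\gamma = r_{0,\rm ref} = 1$, dipole--like stress):

```python
import numpy as np

# Single-colloid profile in the intermediate regime, Eq. (sol1), for a
# dipole-like stress Pi(r) = Pi0 / r**6 (hypothetical values; gamma = r0 = 1).
Pi0, lam, eps_F = 0.08, 1.0e4, 1.0e-2
eps_Pi = Pi0 / 4.0
C = 2.0 * np.exp(-np.euler_gamma)
u = lambda r: (eps_Pi - eps_F) * np.log(C * lam / r) - (Pi0 / 16.0) / r**4
Pi = lambda r: Pi0 / r**6

def u_bar(d, r0=1.0, nphi=2000):
    """Average of u_2 over the contact circle of colloid 1, Eq. (meanu)."""
    phi = 2.0 * np.pi * np.arange(nphi) / nphi
    r2 = np.hypot(r0 * np.cos(phi) - d, r0 * np.sin(phi))  # distance to colloid 2
    return u(r2).mean()

d = 20.0
# Taylor expansion: u_bar - u(d) ~ -r0**2 * Pi(d) / (4 * gamma)
print(u_bar(d) - u(d), -Pi(d) / 4.0)
```

The two printed numbers agree up to corrections of relative order $(r_{0,\rm ref}/d)^2$.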
\subsection{Effective interaction potential} \label{sec:Vmen} The meniscus--induced effective potential between the two colloids (without their direct interaction) is defined as \begin{equation} \label{eq:Veff_a} V_{\rm men}(d) := {\cal \hat{F}} - {\cal F}_1 - {\cal F}_2 , \end{equation} where ${\cal \hat{F}}$ is the free energy of the two--colloid equilibrium configuration (Eq.~(\ref{eq:F2})) while ${\cal F}_1 = {\cal F}_2$ is the single--colloid equilibrium free energy (Eq.~(\ref{eq:F})). As noted before the energy of the reference configuration is independent of the separation $d$ and drops out from Eq.~(\ref{eq:Veff_a}). We insert Eqs.~(\ref{eq:superposition}), (\ref{eq:z_superposition}), and (\ref{eq:meanu}) into Eq.~(\ref{eq:Veff_a}) and exploit the invariance of the free energy under exchange of the colloids, i.e., the symmetry under exchanging indices $1 \leftrightarrow 2$. After carrying out some analytic manipulations one finds the following expression for the effective potential: \begin{eqnarray} \label{eq:Vmen1} V_{\rm men} (d) & \simeq & \int_{\mathbb{R}^2\backslash(S_1 \bigcup S_2)} \!\!\!\!\!\! dA \; \left[ \gamma (\nabla {u}_1) \cdot (\nabla {u}_2) + \frac{\gamma}{\lambda^2} {u}_1 {u}_2 - 2 {\Pi}_1 {u}_2 \right] \nonumber \\ & & \mbox{} - \int_{S_1} \!\!\! dA \; \left[ \gamma |\nabla {u}_2|^2 + \frac{\gamma}{\lambda^2} {u}_2^2 - 2 {\Pi}_2 {u}_2 \right] \nonumber \\ & & \mbox{} + \frac{\gamma}{r_{0, \rm ref}} \oint_{\partial S_1} \!\!\! d\ell \; [\bar{{u}} - {u}_2]^2 - 2 F \bar{{u}}\;. \end{eqnarray} The first integral accounts for the change in surface energy, gravitational potential energy, and surface--stress potential energy of the meniscus due to the overlap of the meniscus deformations caused by the two colloids. The second integral is the corresponding change due to the fact that the interface is reduced by an amount $S_1$ compared to the single--colloid case because of the presence of the second colloid. 
The third integral is the change in surface free energy of one colloid due to the extra meniscus deformation induced by the second colloid. The last term is the change in energy due to the vertical displacement of one colloid by this extra meniscus deformation. For the mathematical manipulations to follow, it is convenient to rewrite Eq.~(\ref{eq:Vmen1}) by applying Gauss' theorem to the integrals involving $\nabla {u}$ and by using the fact that the functions $u_\alpha$ fulfill Eq.~(\ref{eq:younglaplace}) individually, e.g., \begin{eqnarray} \gamma \int_{\mathbb{R}^2\backslash(S_1 \bigcup S_2)} \!\!\!\!\!\! dA \; (\nabla {u}_1) \cdot (\nabla {u}_2) & = & \gamma \int_{\mathbb{R}^2\backslash(S_1 \bigcup S_2)} \!\!\!\!\!\! dA \; [ \nabla \cdot ({u}_2 \nabla {u}_1) - {u}_2 \nabla^2 {u}_1 ] = \nonumber \\ & & \mbox{} - \gamma \oint_{\partial S_1} \!\!\! d\ell \; \frac{\partial ({u}_1 {u}_2)}{\partial n_1} + \int_{\mathbb{R}^2\backslash(S_1 \bigcup S_2)} \!\!\!\!\!\! dA \; \left[ {\Pi}_1 {u}_2 - \frac{\gamma}{\lambda^2} {u}_1 {u}_2 \right] . \nonumber \\ \end{eqnarray} Thus one obtains from Eq.~(\ref{eq:Vmen1}) \begin{eqnarray} \label{eq:Vmen2} V_{\rm men} (d) & \simeq & \mbox{} - \int_{\mathbb{R}^2\backslash(S_1 \bigcup S_2)} \!\!\!\!\!\! dA \; {\Pi}_1 \, {u}_2 + \int_{S_1} \!\!\! dA \; {\Pi}_2 \, {u}_2 \nonumber \\ & & \mbox{} - \gamma \oint_{\partial S_1} \!\!\! d\ell \; \frac{\partial ({u}_1 {u}_2)}{\partial n_1} - \frac{1}{2} \gamma \oint_{\partial S_1} \!\!\! d\ell \; \frac{\partial {u}_2^2}{\partial n_1} \nonumber \\ & & \mbox{} + \frac{\gamma}{r_{0, \rm ref}} \oint_{\partial S_1} \!\!\! d\ell \; [\bar{{u}} - {u}_2]^2 - 2 F \bar{{u}} . \end{eqnarray} In this form all the integrals, except the first one, are performed over bounded domains. This allows one to carry out a Taylor expansion which yields uniformly valid asymptotic expansions of the terms as $d \to \infty$.
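The term--by--term evaluation carried out next can be spot--checked numerically. For instance, for $\varepsilon_\Pi \neq \varepsilon_F$ the first integral in Eq.~(\ref{eq:Vmen2}) is dominated by the region near $S_1$, where $u_2 \approx u(d)$, so that it reduces to $2\pi\gamma r_{0,\rm ref}\,\varepsilon_\Pi\, u(d)$. A sketch with hypothetical parameters (units with $\gamma = r_{0,\rm ref} = 1$, dipole--like stress):

```python
import numpy as np
from scipy.integrate import quad

# Single-colloid profile, Eq. (sol1), for Pi(r) = Pi0 / r**6
# (hypothetical values; units with gamma = r0 = 1).
Pi0, lam, eps_F = 0.08, 1.0e4, 1.0e-2
eps_Pi = Pi0 / 4.0
C = 2.0 * np.exp(-np.euler_gamma)
u = lambda r: (eps_Pi - eps_F) * np.log(C * lam / r) - (Pi0 / 16.0) / r**4
Pi = lambda r: Pi0 / r**6

def overlap_integral(d, nphi=400, rmax=200.0):
    """Integral of Pi_1 * u_2 over the plane outside S_1 and S_2,
    in polar coordinates around colloid 1 (colloid 2 sits at distance d)."""
    phi = 2.0 * np.pi * (np.arange(nphi) + 0.5) / nphi
    def f(r1):
        r2 = np.hypot(r1 * np.cos(phi) - d, r1 * np.sin(phi))
        vals = np.where(r2 >= 1.0, u(np.maximum(r2, 1.0)), 0.0)  # exclude S_2
        return 2.0 * np.pi * r1 * Pi(r1) * vals.mean()
    return quad(f, 1.0, rmax, points=[2.0, 5.0, d - 1.0, d + 1.0], limit=200)[0]

d = 20.0
print(overlap_integral(d), 2.0 * np.pi * eps_Pi * u(d))  # leading-order estimate
```

For $d$ well inside the intermediate regime the numerical integral and the leading-order expression agree closely, as claimed.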
In the following calculations, the integrals are evaluated in polar coordinates with the origin at the center of $S_\alpha$; ${\bf r}_\alpha$ denotes the position vector with respect to this origin. In particular, ${\bf r}_2 = d {\bf e}_x$ is the center of colloid 1 with respect to the center of colloid 2, and ${\bf r}_1 = - d {\bf e}_x$ is the center of colloid 2 with respect to the center of colloid 1 (Fig.~\ref{fig:2coll_top}) so that, e.g., $u_\alpha = u(|{\bf r}_\alpha|)$. \begin{enumerate} \item In order to compute the leading asymptotic behavior of the first term in Eq.~(\ref{eq:Vmen2}) as $d \to \infty$ we distinguish two cases: \begin{enumerate} \item $\varepsilon_\Pi \neq \varepsilon_F$: In this case, ${u} \sim \log{r}$ (Eq.~(\ref{eq:sol1})) while ${\Pi} \sim r^{-n}$ ($n>2$) as $r \to \infty$. Asymptotically the main contribution stems from the region near $S_1$ and \begin{equation} \label{eq:pi1u2_a} \int_{\mathbb{R}^2\backslash(S_1 \bigcup S_2)} \!\!\!\!\!\! dA \; {\Pi}_1 \, {u}_2 \simeq {u}(d) \int_{\mathbb{R}^2\backslash S_1} \!\!\!\!\!\! dA \; {\Pi}_1 = 2 \pi \gamma r_{0, \rm ref}\; \varepsilon_\Pi\; {u}(d) , \end{equation} employing the definition~(\ref{eq:epsPi}). It will turn out that there is no need to compute the next--to--leading term in order to obtain $V_{\rm men} (d \to \infty)$: the leading term computed here enters the leading contribution to $V_{\rm men}$ and is not cancelled by other contributions. \item $\varepsilon_\Pi = \varepsilon_F$: In this case ${u} \sim r^{2-n}$ if ${\Pi}(r \to \infty) \sim r^{-n}$. The regions contributing mainly to the integral are the neighborhoods of $S_1$ and $S_2$. We employ the Taylor expansion of the integrand (which is valid up to a maximum order depending on how fast $\Pi$ decays) to evaluate the leading and the next--to--leading contributions if $n>4$ (using dyadic notation): \begin{eqnarray} \int_{\mathbb{R}^2\backslash(S_1 \bigcup S_2)} \!\!\!\!\!\!
dA \; {\Pi}_1 \, {u}_2 & \simeq & \int_{\mathbb{R}^2\backslash S_1} \!\!\!\!\!\! dA \; {\Pi}_1 \left[{u}_2 + {\bf r}_1 \cdot (\nabla {u})_2 + \frac{1}{2} {\bf r}_1 {\bf r}_1 : (\nabla \nabla {u})_2 + \dots \right]_{{\bf r}_2 = d {\bf e}_x} \nonumber \\ & & \mbox{} + \int_{\mathbb{R}^2\backslash S_2} \!\!\!\!\!\! dA \; {u}_2 \left[{\Pi}_1 + \dots \right]_{{\bf r}_1 = - d {\bf e}_x} \nonumber \\ & \simeq & 2 \pi \gamma r_{0, \rm ref}\; \varepsilon_\Pi \;{u}(d) + \frac{1}{2} \pi \nabla^2 {u} (d) \int_{r_{0,\rm ref}}^\infty dr\; r^3\,\Pi(r) \nonumber \\ & & \mbox{} + {\Pi}(d) \int_{\mathbb{R}^2\backslash S_2} \!\!\!\!\!\! dA \; {u}_2 + \dots \; . \end{eqnarray} We can simplify the result further by evaluating the last integral by repeated partial integration with the explicit solution for ${u}$ given in Eq.~(\ref{eq:sol1}): \begin{equation} \label{eq:int_uda} \int_{\mathbb{R}^2\backslash S_2} \!\!\!\!\!\! dA \; {u} (r_2) = \frac{1}{2} \pi r_{0, \rm ref}^3 \left[ \varepsilon_\Pi - \frac{2 {u}_0}{r_{0,\rm ref}} - \frac{1}{\gamma r_{0,\rm ref}^3} \int_{r_{0,\rm ref}}^\infty dr\; r^3\,\Pi(r) \right] . \end{equation} One finally obtains \begin{eqnarray} \label{eq:pi1u2_b} \int_{\mathbb{R}^2\backslash(S_1 \bigcup S_2)} \!\!\!\!\!\! dA \; {\Pi}_1 \, {u}_2 & \simeq & 2 \pi \gamma r_{0, \rm ref}\; \varepsilon_\Pi\; {u}(d) \\ & & \mbox{} + \left[ 2 \int_{\mathbb{R}^2\backslash S_2} \!\!\!\!\!\! dA \; {u} (r_2) - \frac{1}{2} \pi r_{0, \rm ref}^3 \left( \varepsilon_\Pi - \frac{2 {u}_0}{r_{0,\rm ref}} \right) \right] {\Pi} (d) + \dots \nonumber \end{eqnarray} where we have employed Eq.~(\ref{eq:younglaplace2}) (with $\lambda^{-1} = 0$ for simplicity). Convergence of the integral of ${u}$ imposes the more stringent constraint $n>4$ on the decay of ${\Pi}$ (see Eq.~(\ref{eq:int_uda})). 
\end{enumerate} The validity of formulae~(\ref{eq:pi1u2_a}) and (\ref{eq:pi1u2_b}) has been checked explicitly by comparing their predictions with the numerical evaluation of the integral for a surface stress of the form $\Pi \propto r^{-n}$ with $n>4$. \item The second term in Eq.~(\ref{eq:Vmen2}) can be estimated in the limit $d \to \infty$ by expanding the integrand into a Taylor series, which is uniformly valid in the integration domain: \begin{eqnarray} \label{eq:pi2u2} \int_{S_1} \!\!\! dA \; {\Pi}_2 \, {u}_2 & \simeq & \int_{S_1} \!\!\! dA \; \left[ {\Pi}_2 \, {u}_2 + \dots \right]_{{\bf r}_2 = d {\bf e}_x} \simeq \pi r_{0, \rm ref}^2 \, {\Pi} (d) \, {u} (d) + \dots \; . \end{eqnarray} \item The third term in Eq.~(\ref{eq:Vmen2}) reads: \begin{eqnarray} \label{eq:u1u2} \oint_{\partial S_1} \!\!\! d\ell \; \frac{\partial ({u}_1 {u}_2)}{\partial n_1} & = & \int_0^{2 \pi} \!\!\! d\varphi \; \left[ {\bf r}_1 \cdot \nabla ({u}_1 {u}_2) \right]_{r_1=r_{0, \rm ref}} \\ & \simeq & \int_0^{2 \pi} \!\!\! d\varphi \; \left\{ ({\bf r}_1 \cdot \nabla) \left[ {u}_1 \left( \phantom{ \frac{1}{2} } \!\!\! {u}_2 + {\bf r}_1 \cdot (\nabla {u})_2 + \right.\right.\right.\nonumber \\ & & \qquad \qquad \left. \left. \left. \frac{1}{2} {\bf r}_1 {\bf r}_1 : (\nabla \nabla {u})_2 + \dots \right)_{{\bf r}_2=d {\bf e}_x} \right] \right\}_{r_1=r_{0, \rm ref}} \nonumber \\ & \simeq & 2 \pi r_{0, \rm ref} {u}(d) \left. \frac{d {u}}{d r} \right|_{r=r_{0, \rm ref}} + \frac{\pi}{2} r_{0, \rm ref} \nabla^2 {u}(d) \left. \frac{d (r^2 {u})}{d r} \right|_{r=r_{0, \rm ref}} + \dots \nonumber \\ & \simeq & 2 \pi r_{0, \rm ref} \varepsilon_F {u}(d) - \frac{\pi}{2\gamma} r_{0, \rm ref}^3 \left(\varepsilon_F + \frac{2{u}_0}{r_{0, \rm ref}} \right) {\Pi}(d) + \dots \end{eqnarray} using the boundary condition for ${u}$ in Eq.~(\ref{eq:u'0}). \item The fourth term in Eq.~(\ref{eq:Vmen2}) is evaluated analogously: \begin{eqnarray} \oint_{\partial S_1} \!\!\! 
d\ell \; \frac{\partial {u}_2^2}{\partial n_1} & = & \int_0^{2 \pi} \!\!\! d\varphi \; \left[ {\bf r}_1 \cdot \nabla {u}_2^2 \right]_{r_1=r_{0, \rm ref}} \\ & \simeq & \int_0^{2 \pi} \!\!\! d\varphi \; \left\{ {\bf r}_1 \cdot \left[ (\nabla {u}^2)_2 + {\bf r}_1 \cdot (\nabla \nabla {u}^2)_2 + \dots \right]_{{\bf r}_2=d {\bf e}_x} \right\}_{r_1=r_{0, \rm ref}} \nonumber \\ & \simeq & \pi r_{0, \rm ref}^2 \nabla^2 {u}^2 (d) + \dots \; . \end{eqnarray} \item The fifth term in Eq.~(\ref{eq:Vmen2}) is given by \begin{eqnarray} \label{eq:varu_taylor} \frac{1}{r_{0,\rm ref}} \oint_{\partial S_1} \!\!\! d\ell \; [{u}_2 - \bar{{u}}]^2 & = & \int_0^{2 \pi} \!\!\! d\varphi \; {u}_2^2 - 2 \pi \bar{{u}}^2 \nonumber \\ & \simeq & \frac{1}{2} \pi r_{0, \rm ref}^2 \left[ \nabla^2 {u}^2 (d) - 2 {u} \nabla^2 {u} (d) \right] + \dots \nonumber \\ & \simeq & \frac{1}{2} \pi r_{0, \rm ref}^2 \left[ \nabla^2 {u}^2 (d) + \frac{2}{\gamma} {\Pi} (d) \, {u} (d) \right] + \dots \;. \end{eqnarray} \item In order to evaluate the sixth term in Eq.~(\ref{eq:Vmen2}), we expand $u_2$ in the definition given by Eq.~(\ref{eq:meanu}): \begin{equation} \label{eq:meanu_taylor} \bar{{u}} = \frac{1}{2 \pi r_{0, \rm ref}} \oint_{\partial S_1} \!\!\! d\ell \; {u}_2 \simeq {u}(d) + \frac{1}{4} r_{0, \rm ref}^2 \nabla^2 {u} (d) + \dots \simeq {u}(d) - \frac{1}{4\gamma} r_{0, \rm ref}^2 {\Pi} (d) + \dots \; . \end{equation} \end{enumerate} The asymptotic behavior of the effective potential is finally obtained from Eq.~(\ref{eq:Vmen2}) by collecting all terms. There are two qualitatively different cases: \begin{itemize} \item \underline{$\varepsilon_\Pi \neq \varepsilon_F$}: The limit $\lambda \to \infty$ is singular and the single--colloid meniscus profile exhibits a logarithmic dependence. 
One finds that the leading contribution is provided by the terms proportional to ${u}(d)$: \begin{equation} \label{eq:logV} V_{\rm men} (d) \simeq 2 \pi \gamma r_{0, \rm ref}\; (\varepsilon_F - \varepsilon_\Pi) \; {u}(d) \simeq \mbox{} - 2 \pi \gamma r_{0, \rm ref}^2\; (\varepsilon_F - \varepsilon_\Pi)^2\; \ln\frac{C \lambda}{d}\;, \qquad (R \ll d \ll \lambda) , \end{equation} which represents a long--ranged attractive effective potential, irrespective of the sign of the forces $F$ and $\Pi$ acting on the system. \item \underline{$\varepsilon_\Pi = \varepsilon_F$}: The single--colloid meniscus profile decays like ${u}(d) \sim d^{2-n}$ if ${\Pi}(d) \sim d^{-n}$. The leading contribution is proportional to ${\Pi}(d)$, because the terms proportional to ${u}(d)$ cancel each other: \begin{equation} \label{eq:shortrangeV} V_{\rm men} (d) \simeq \mbox{} - 2 \, {\Pi}(d) \int_{\mathbb{R}^2\backslash S} \!\!\!\!\!\! dA \; {u} , \qquad (R \ll d \ll \lambda) . \end{equation} This corresponds to a shorter--ranged effective interaction, which in principle can be either attractive or repulsive depending on the precise form of the function ${\Pi}(r)$. In the particular case that ${\Pi}(r)$ decays monotonically to zero, e.g., ${\Pi}(r) \propto r^{-n}$, it is easy to check that $V_{\rm men}$ amounts to a repulsive force, because the sign of ${u}$ is opposite to the sign of ${\Pi}$. We have seen that the error committed by the superposition approximation in satisfying one of the boundary conditions, Eq.~(\ref{eq:correction_sup}), decays like ${\Pi}(d)$, too. This suggests that the corrections to the superposition approximation could modify the precise value of the constant factor acting as an amplitude in Eq.~(\ref{eq:shortrangeV}), and {\it a priori} it cannot be excluded that there are cancellations leading to a vanishing amplitude, and therefore to an even faster decay for large $d$.
Thus the superposition approximation might not be reliable enough for calculating $V_{\rm men}$ if $\varepsilon_\Pi = \varepsilon_F$. \end{itemize} If one traces back the origin of the dominant contributions to $V_{\rm men}$, one finds that in both cases only the first, third, and sixth term in Eq.~(\ref{eq:Vmen2}) are relevant. They correspond physically to the effect of the overlap of the two single--colloid meniscus profiles and the change of the colloid height (first integral and the term $- 2 F \bar{{u}}$ in Eq.~(\ref{eq:Vmen1}), respectively). \section{Applications and discussion} \label{sec:discussion} Equations (\ref{eq:logV}) and (\ref{eq:shortrangeV}) describe the asymptotic behavior of the meniscus--induced effective intercolloidal potential and thus represent a central result of our analysis. They provide the explicit functional dependence on an arbitrary stress field $\Pi(r)$ which decays sufficiently fast. The assumptions entering their derivation are (i) that the deviations of the meniscus profile from the reference configuration are small, allowing one to confine the analysis to a free energy expression which is quadratic in the deviations, and (ii) the superposition approximation, which expresses the two--colloid equilibrium state in terms of the single--colloid state. The analysis shows that the limit $\lambda \to +\infty$ is non--singular only in the case $\varepsilon_F = \varepsilon_\Pi$. \subsection{Flotation force} Equation (\ref{eq:logV}) can be used to determine the flotation force. There are no stresses acting at the meniscus, $\Pi \equiv 0$, while the force $F$ on the colloids is due to their weight corrected by the buoyancy force. 
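Anticipating the estimate given below, the order of magnitude of the resulting interaction can be obtained with a short numerical sketch (assumed typical water--air values; here $Q = \varepsilon_F r_{0,\rm ref} = F/(2\pi\gamma)$ plays the role of a capillary charge, and buoyancy corrections to the weight are omitted):

```python
import numpy as np

# Order-of-magnitude estimate of the prefactor 2*pi*gamma*Q**2 of the logarithm,
# with Q = eps_F * r0_ref = F / (2*pi*gamma) and F ~ (4/3)*pi*R**3*rho*g
# (assumed typical values: water-air interface, colloid density ~ 1 g/cm^3).
gamma = 0.05            # N/m, surface tension
rho = 1.0e3             # kg/m^3, colloid mass density (order of magnitude)
g = 9.81                # m/s^2
kT = 1.38e-23 * 293.0   # J, thermal energy at room temperature

def flotation_prefactor(R):
    F = (4.0 / 3.0) * np.pi * R**3 * rho * g  # weight (buoyancy correction omitted)
    Q = F / (2.0 * np.pi * gamma)             # capillary charge
    return 2.0 * np.pi * gamma * Q**2

for R in (10e-6, 1e-6, 0.1e-6):               # colloid radius in m
    print(R, flotation_prefactor(R) / kT)
```

The prefactor scales as $R^6$: it is of order $k_B T$ for $R \approx 10\,\mu$m and utterly negligible for submicrometer colloids.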
Accordingly, Eq.~(\ref{eq:logV}) reduces to the flotation potential \begin{eqnarray} \label{eq:flotV} V_{\rm flot}(d) &=& 2 \pi \gamma r_{0,\rm ref} \varepsilon_F \, u(d)\; \\ & =& 2 \pi \gamma Q^2 \ln\frac{d}{C \lambda}\;, \qquad (R\ll d \ll \lambda) , \nonumber \end{eqnarray} where $Q := \varepsilon_F r_{0, \rm ref}$ is known as the capillary charge \cite{Kra00}, by analogy with two--dimensional electrostatics. The asymptotic form for $d \ll \lambda$ originates from a potential proportional to the modified Bessel function $K_0(d/\lambda)$ (see Eq.~(\ref{eq:solBessel})) \cite{Nico49,CHW81}. The order of magnitude of the capillary charge is $Q \sim r_{0, \rm ref} (R/\lambda)^2$. For a typical value \mbox{$\gamma = 0.05$ N m$^{-1}$} at room temperature and for colloids with a mass density of the order of \mbox{$1$ g cm$^{-3}$}, the prefactor of the logarithm can be estimated as \begin{equation} 2 \pi \gamma Q^2 \sim \left(\frac{R}{10 \rm \mu m}\right)^6 k_B T . \end{equation} Therefore, compared with the thermal energy, the flotation force is negligible for submicrometer--sized colloids. \subsection{Electrically charged colloids} \label{sec:elec} Another application is the case of electrically charged colloids. If one of the liquid phases is water, the charge of the colloid is screened (the Debye length of pure water is $\approx 1$ $\mu$m and smaller in the presence of an electrolyte), and the effective electric field is that of a dipole oriented perpendicularly to the fluid interface \cite{Hur85,RoEa93,GoHa98,Ave00a,Ave02}. The electrostatic field decays as $r^{-3}$ and the stress on the meniscus as $\Pi(r\to \infty) \propto r^{-6}$. Both the electrostatic stress and the osmotic ionic pressure decay in the same manner \cite{For04}. Thus the total intercolloidal potential at intermediate distances is \begin{eqnarray} \label{eq:Vtot} V_{\rm tot} = a \frac{k_B T}{d^3} + V_{\rm men}\qquad (a>0) \;. 
\end{eqnarray} If gravity is neglected both as a force on the colloid and as a restoring force for the interface $(\lambda \to \infty)$, then one can indeed show that the ensuing condition of mechanical isolation (no net force on the system) leads to $\varepsilon_\Pi = \varepsilon_F$, i.e., precisely the situation for which the limit $\lambda \to \infty$ is non--singular. To see this, we consider the total stress tensor ${\bf T}$, which consists of the Maxwell stress tensor (due to the electrostatic field) and a diagonal osmotic pressure tensor (due to the electrolytes) \cite{GDLM93,SMN02}. At interfaces ${\bf T}$ can be discontinuous. The total volume $V$ of the system is divided into volumes $V_1$, $V_2$, and $V_3$ (see Fig.~\ref{fig:forcebalance} for the explanation of the notation in the following equation). The total force reads (the superscript $^{+(-)}$ denotes evaluation on the positive (negative) side of the oriented surface, i.e., the side the arrows in Fig.~\ref{fig:forcebalance} point to (do not point to)) \begin{eqnarray} \label{eq:forcebalance} \oint_{S_{\rm tot}} \!\!\! d{\bf A} \cdot {\bf T} & = & \int_{V_1\bigcup V_2\bigcup V_3} \!\!\! dV \; (\nabla \cdot {\bf T}) + \int_{S_{\rm men,ref}\bigcup S_1 \bigcup S_2} \!\!\! d{\bf A} \cdot ({\bf T}^+ - {\bf T}^-) \nonumber \\ & = & \int_{V_1\bigcup V_2} \!\!\! dV \; (\nabla \cdot {\bf T}) + \int_{S_{\rm men,ref}} \!\!\! d{\bf A} \cdot ({\bf T}^+ - {\bf T}^-) + \nonumber \\ & & \left[ \int_{V_3} \!\!\! dV \; (\nabla \cdot {\bf T}) + \int_{S_1 \bigcup S_2} \!\!\! d{\bf A} \cdot ({\bf T}^+ - {\bf T}^-) \right] \nonumber \\ & = & \int_{S_{\rm men, ref}} \!\!\!\!\!\! dA \; \Pi \, {\bf e}_z + F {\bf e}_z \; . \end{eqnarray} In the first line, we have applied Gauss' theorem with due account of the possible discontinuities of the tensor ${\bf T}$ across the interfaces.
In the second line, $\nabla \cdot {\bf T} = {\bf 0}$ in the fluid phases $V_1$ and $V_2$, because the counterion distribution of the present reference configuration is the equilibrium distribution and thus locally force free. This distribution is considered to be fixed. The second term in the second line is the total force on the meniscus (which can have only a normal component), while the terms in square brackets sum up to the force $F$ acting on the colloid. Thus the vertical component of the total force is \begin{eqnarray} {\bf e}_z \cdot \oint_{S_{\rm tot}} \!\!\! d{\bf A} \cdot {\bf T} & = & 2 \pi \gamma r_{0, \rm ref} (\varepsilon_\Pi - \varepsilon_F) . \label{eq:forcebalance1} \end{eqnarray} If it vanishes, as is the case for an isolated system, then $\varepsilon_\Pi = \varepsilon_F$. According to Eq.~(\ref{eq:logV}) this implies that the long--ranged logarithmic contribution to $V_{\rm men}$ is absent and thus the limit $\lambda \to \infty$ is regular. Physically this means that there is no need for a restoring force acting on the fluid--fluid interface when the deformation is created by localized internal stresses. Instead, according to Eq.~(\ref{eq:shortrangeV}), one obtains a potential $V_{\rm men} \propto d^{-6}$ for the present case of a dipolar electric field (see above). This shorter--ranged potential cannot counterbalance the direct electrostatic dipolar repulsion $\sim d^{-3}$. Such a counterbalance would be needed for a straightforward explanation of the aforementioned experimentally observed attractions. \begin{figure} \begin{center} \epsfig{file=fig7.eps,width=.9\textwidth} \caption{ (color online) In the reference configuration the whole system is divided into volumes $V_1$, $V_2$, and $V_3$. Volume $V_1$ (enclosed by the upper dashed curve) includes phase 1, volume $V_2$ (enclosed by the lower dashed curve) includes phase 2, and volume $V_3$ includes the interior of the colloid.
The arrows indicate the direction in which the surfaces (including the infinitesimally displaced ones) are oriented: $S_{\rm tot}$ encloses the whole system, $S_{\rm men, ref}$ is the interface between phase 1 and phase 2, and $S_{1(2)}$ denotes the interface between the colloid and phase 1(2). } \label{fig:forcebalance} \end{center} \end{figure} The line of argument explaining the absence of the logarithmic contribution to $V_{\rm men}$ was already put forward in Refs.~\cite{Meg03} and \cite{For04}, where only the case ${\Pi}(r) \propto r^{-6}$ has been addressed. Our detailed analysis complements and generalizes these contributions. In Ref.~\cite{Meg03}, $V_{\rm men} (d)$ is estimated by taking into account only the degree of freedom ``meniscus profile'', $u({\bf r })$, and considering only the change in meniscus area due to the superposition of the dimples. This corresponds to retaining only the term $\propto (\nabla {u}_1) \cdot (\nabla {u}_2)$ in expression~(\ref{eq:Vmen1}). Although the type of $d$--dependence obtained that way is correct, the sign of the force turns out to be wrong (attraction instead of repulsion). The reason is that the contributions $\propto \Pi_1 u_2$ and $\propto F \bar{u}$ in Eq.~(\ref{eq:Vmen1}) are as important as the retained term. This is taken into account in the more detailed analysis of Ref.~\cite{For04}, where the limiting case $R = 0$ (point--particle) is considered from the outset, so that the ``height of the colloid'' is not an independent degree of freedom, i.e., $h = u(0)$. Correspondingly, $F$ is set to zero ($\varepsilon_F=0$), and its effect is modelled by a Dirac--delta contribution to the stress $\Pi(r)$ such that $\varepsilon_\Pi=0$.
The potential $V_{\rm men}(d)$ calculated this way corresponds to keeping the three terms mentioned above which are relevant (the first and third term in the first integral of Eq.~(\ref{eq:Vmen1}) and the last term of Eq.~(\ref{eq:Vmen1})) and to setting $\lambda \to \infty$, since this limit is regular when $\varepsilon_F=\varepsilon_\Pi=0$. Our analysis has shown that the terms which are dropped in the limit $R \to 0$ (second and third integral in Eq.~(\ref{eq:Vmen1}); second, fourth, and fifth terms in Eq.~(\ref{eq:Vmen2})) indeed yield a subdominant contribution to Eq.~(\ref{eq:shortrangeV}). However, the integral appearing as a prefactor in Eq.~(\ref{eq:shortrangeV}) is divergent for $R \to 0$, so that in Ref.~\cite{For04} a short--distance cutoff $a$ has been introduced, which is expected to be of the order of $R$. The precise value of $a$ depends on the details of the implementation of this cutoff; for the example used in Ref.~\cite{For04}, the application of Eq.~(\ref{eq:shortrangeV}) yields $a=r_{0, \rm ref}$. In the presence of gravity, mechanical isolation is violated and, as we have shown, $\varepsilon_\Pi - \varepsilon_F \propto (R/\lambda)^2$. (The force--balance argument (Eq.~(\ref{eq:forcebalance})) can easily be generalized to include the effect of the gravitational volume force.) In the case of a curved reference interface (corresponding to the experiment reported in Ref.~\cite{Nik02} using colloids trapped at the interface of water droplets in oil), Eq.~(\ref{eq:forcebalance}) is replaced by \begin{equation} \oint_{S_{\rm tot}} \!\!\! d{\bf A} \cdot {\bf T} = \int_{S_{\rm men, ref}} \!\!\!\!\!\! dA \; \Pi \, {\bf e}_r + F {\bf e}_z \; , \end{equation} where ${\bf e}_r$ is the radial unit vector of the unperturbed spherical droplet. Force balance in the vertical direction yields a factor ${\bf e}_r \cdot {\bf e}_z = \cos\psi$ (see Fig.~\ref{fig:droplet}) in the integral, which can be expanded for small angles in the limit $R_{\rm drop} \to \infty$.
This leads to a curvature--induced correction $\varepsilon_\Pi - \varepsilon_F \propto (r_{0,\rm ref}/R_{\rm drop})^2$ even for mechanical isolation. Thus we see that, independently of the details of the implementation of the distant boundary conditions, the logarithmic term in the potential $V_{\rm men}$ (Eq.~(\ref{eq:logV})) has a strength which is proportional to (colloid size/system length)$^4$, which is nevertheless too weak to explain the reported attractive total interaction. \subsubsection{The experiment reported in Ref.~\cite{Nik02}} Mechanical isolation is violated in the presence of an external homogeneous electric field $E {\bf e}_z$. For the experimental setup used in Ref.~\cite{Nik02} it cannot be ruled out that such an external field may have distorted the measurements \cite{Nik04}. Since this is the only experiment which provides quantitative information about the secondary minimum in $V_{\rm tot}$, we discuss the case of an external field in more detail. In this experiment position--recording measurements were performed on a hexagonal configuration of seven trapped colloids on a water droplet immersed in oil ($R_{\rm drop} \approx 32\,R = 24\,\mu$m). The droplet was confined between two glass plates and stuck to the upper glass plate with a contact angle close to $\pi$. Residual charges might have resided on the upper plate \cite{Nik04}. The measurements yielded the position of the secondary minimum, $d_{\rm min} \approx 7.6 R$, and the curvature at the minimum, $V_{\rm tot}''(d_{\rm min}) \approx 12.94 \, k_B T /R^2$. With regard to the latter value we remark that systematic corrections can be estimated to lower $V_{\rm tot}''$ by a factor 2 to 3. These systematic corrections include (i) multiparticle effects and (ii) center--of--mass movement of the droplet. We have estimated the effect of (i) by carrying out Langevin simulations of the seven particle system using the intercolloidal potential Eq.~(\ref{eq:Vtot1}) below.
As for the effect of (ii), any shape deformation induced by the moving colloids changes the center--of--mass position of the droplet since the droplet is fixed to the upper glass plate. The corresponding change in the gravitational energy of the droplet translates into a weak confining potential for the colloids which limits the stochastic movement of the center--of--mass of the seven colloids. This effect might be part of an explanation for the absence of center--of--mass movement observed in Ref.~\cite{Nik02}. According to Eqs.~(\ref{eq:logV}) and (\ref{eq:Vtot}), the total intercolloidal potential in an external field is \begin{eqnarray} \label{eq:Vtot1} V_{\rm tot} = a \frac{k_B T}{d^3} - b\ln \frac{C\lambda}{d} \qquad (a,b>0) \;. \end{eqnarray} Using the aforementioned experimental data for $d_{\rm min}$ and $V_{\rm tot}''(d_{\rm min})$, one obtains from Eq.~(\ref{eq:Vtot1}) $b\approx 249\:k_B T$, $a \approx 83\: d_{\rm min} ^3$ and $V_{\rm tot}(d_{\rm min}) \approx - 275 \:k_B T$ which is surprisingly deep, even if reduced by a factor 2 to 3 due to the systematic corrections mentioned above. Furthermore, from Eq.~(\ref{eq:logV}) we deduce $b = 2\pi\gamma r_{0,\rm ref}^2\; (\varepsilon_\Pi - \varepsilon_F)^2$ and with $\gamma\approx 0.05$ N/m we find $|\varepsilon_\Pi -\nolinebreak \varepsilon_F| \approx (2 \, \rm{nm}/r_{0,\rm ref})$. The long--ranged meniscus deformation (see Eq.~(\ref{eq:sol1})) is on the scale of nanometers. The short--ranged meniscus deformation near the colloid can only be evaluated with a specific microscopic model for $\Pi(r)$. For a rough estimate of $\varepsilon_\Pi$, we consider the colloid charge to be concentrated on the surface. The asymptotic behavior of the stress tensor in this case is given by $\Pi(r) = a/(4\pi)\: (k_B T)/r^6$ \cite{Hur85,For04}. If we assume this form of the stress tensor to hold for all $r$, we find $\varepsilon_\Pi \approx 10^{-4}/\sin^5\theta$ for the values of $\gamma$ and $a$ given above. 
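These fitted values follow from the two conditions $V_{\rm tot}'(d_{\rm min}) = 0$ and the quoted curvature, and can be reproduced with a short numerical sketch (lengths in units of $R$, energies in units of $k_B T$). Note that the cutoff $C\lambda \approx R_{\rm drop} = 32\,R$ used for the depth estimate, and the SI values used in the field estimate quoted below, are illustrative assumptions on our part.

```python
import math

# Rough numerical check of the fit of V_tot(d) = a*kT/d^3 - b*ln(C*lam/d)
# to the reported minimum position and curvature.  Lengths are in units of
# the colloid radius R, energies in units of k_B T.
d_min = 7.6                      # position of the secondary minimum
curv = 12.94                     # V_tot''(d_min), in k_B T / R^2

# V'(d_min) = 0 gives b = 3 a / d_min^3, and then V''(d_min) = 9 a / d_min^5:
a = curv * d_min**5 / 9.0        # in k_B T R^3
b = 3.0 * a / d_min**3           # in k_B T

# Depth of the minimum, assuming (our assumption) that the logarithm is
# cut off at the droplet scale C*lam ~ R_drop = 32 R:
C_lam = 32.0
V_min = a / d_min**3 - b * math.log(C_lam / d_min)
print(f"a ~ {a/d_min**3:.0f} d_min^3, b ~ {b:.0f} k_B T, V(d_min) ~ {V_min:.0f} k_B T")

# Field estimate |qE| = sqrt(2*pi*gamma*b), now in SI units:
gamma, kT, e = 0.05, 1.38e-23 * 293, 1.602e-19   # N/m, J, C
qE = math.sqrt(2 * math.pi * gamma * b * kT)     # N
print(f"|qE| ~ {qE/e:.1e} eV/m, E ~ {qE/(2e5*e):.1e} V/m for q ~ 2e5 e")
```

The first line of output reproduces $a \approx 83\, d_{\rm min}^3$, $b \approx 249\, k_B T$, and a depth of about $-275\, k_B T$; the second reproduces the field estimates discussed in the following paragraph.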
For not too small contact angles, this {\em a posteriori} justifies the perturbative approach which we have adopted. If the system has a net charge $q$, then $|q E| = 2 \pi \gamma r_{0,\rm ref} |\varepsilon_\Pi-\varepsilon_F| = \sqrt{2 \pi \gamma \, b}$ (see Eqs.~(\ref{eq:forcebalance1}) and (\ref{eq:logV})). Using the values for $b$ and $\gamma$ as given above we find \begin{equation} |q E| \approx 3.6 \times 10^9 \, \rm e V \, m^{-1} . \end{equation} For the value $q \approx 2 \times 10^5 \, \rm e$ quoted in Ref.~\cite{Nik02}, this yields a relatively small electric field $E \approx 1.8 \times 10^4 \, \rm V \, m^{-1}$, indicating how sensitive the system can be to spurious external fields. Alternatively, an electric field $E \sim 10^3 \, \rm V \, m^{-1}$ is sufficient for the meniscus--induced logarithmic potential to have a depth of the order $b \sim 1 \, k_B T$. Thus the external field offers the possibility of easily tuning the long--ranged capillary attraction and of manipulating the structures formed by the colloids at the interface. \subsubsection{The experiment and analysis reported in Ref.~\cite{Kra04}} In Ref.~\cite{Kra04}, experimental results for the meniscus deformation around glass particles of radii 200\dots 300 $\mu$m trapped at water--air or water--oil interfaces are reported. The data for the meniscus slope $u'(r_0)$ at the contact circle imply (Eq.~(\ref{eq:u'0})) $\varepsilon_F\approx 0.2$ (water--oil interface), about fifteen times larger than the corresponding $\varepsilon_{F,\rm g}$ caused by gravity alone. Furthermore, the reported meniscus shape for one sample contains a logarithmic part which is consistent with $\varepsilon_{F}-\varepsilon_\Pi \approx 0.1 \gg \varepsilon_{F,\rm g}$ (see Eq.~(\ref{eq:sol1})).
As in the experiment analyzed in the previous subsection, the experimental observations could be understood within the framework of the theoretical model we have developed in terms of an external electric field violating mechanical isolation. Yet, independently of us, the authors of Ref.~\cite{Kra04} have developed a theoretical model based on the same physical hypotheses as our approach, which, they claim, explains the observed long--ranged meniscus deformation. Here we would like to note two important errors which flaw their analysis: \begin{enumerate} \item Eq.~(3.15) in Ref.~\cite{Kra04}, in terms of which they interpret their observations, can be obtained by inserting the large--distance ($r \gg r_{0, \rm ref}$) asymptotic behavior of $\Pi(r)$ into our Eq.~(\ref{eq:sol1}) (equivalent to their Eq.~(3.14) up to an additive constant) both into the integral term {\em and into the definition of $\varepsilon_\Pi$} (Eq.~(\ref{eq:epsPi})). Since the dominant contribution to the integral defining $\varepsilon_\Pi$ stems from points $r \simeq r_{0,\rm ref}$, this procedure is clearly inadmissible. As a consequence, they obtain a wrong, nonvanishing prefactor of the logarithm, in spite of their explicit consideration of mechanical isolation (see their Eq.~(6.6)). \item In order to calculate the intercolloidal effective potential within the superposition approximation, the formula relating $V_{\rm men}(d)$ to the meniscus deformation $u(d)$ {\em in the presence of gravity alone} (i.e., Eq.~(\ref{eq:flotV})) is applied even though $\Pi \neq 0$. Thus, an additional term contributing to the leading logarithm is not included (compare their Eq.~(3.16) and our Eq.~(\ref{eq:logV})). \end{enumerate} A numerical analysis is also carried out in Ref.~\cite{Kra04}. A detailed study of the relation between the results of this numerical analysis and our theoretical predictions will be published elsewhere~\cite{ODDDKB04}.
\bigskip The presence of an external field may be consistent with the data from Refs.~\cite{Nik02, Kra04}, as well as with the presence of the secondary potential minimum observed in the experiments using planar troughs, in particular in the cases $d_{\rm min}>10 R$, which fall into the intermediate asymptotic regime considered here. But this remains an open problem. \subsection{Outlook} Given that nanometer--scale distortions of the meniscus already produce noticeable attractions, the surface topography of colloids might be relevant. In Ref.~\cite{Sta00}, colloidal surface roughness is proposed as an explanation of the attraction. The meniscus contact line is assumed to be pinned at defects on the colloid surface caused by surface roughness. This imposes a different boundary condition for the meniscus at contact, which is then deformed even in the absence of electrostatic forces. The corresponding analysis in Ref.~\cite{Sta00} is concerned only with the term $\propto (\nabla {u}_1) \cdot (\nabla {u}_2)$ in the expression~(\ref{eq:Vmen1}), leading to the conclusion that $V_{\rm men}(d)$ decays as $d^{-4}$ and corresponds to an attractive potential with a strength of $10^4 \, k_B T$ for meniscus deformations of the order of $50 \, {\rm nm}$. This conclusion, however, cannot be simply carried over to the case of charged colloids. As we have shown, the contributions of the terms $\propto \Pi_1 u_2$ and $\propto F \bar{u}$ in Eq.~(\ref{eq:Vmen1}) are relevant and can change the qualitative behavior of $V_{\rm men}(d)$ even in the limit of point--colloids. It would be worthwhile to generalize the analysis of Ref.~\cite{Sta00} along the lines of the arguments presented here in order to assess the importance of surface roughness. This should be complemented by more precise experimental information about the actual colloidal topography. Recently, an explanation based on a contaminated interface has been advocated~\cite{FMMH04}.
The air--water interface would actually be a two--dimensional emulsion consisting of hydrophilic (water) patches and hydrophobic (silicone oil) patches. The colloidal particles (hydrophobic in character according to Ref.~\cite{FMMH04}) would cluster in the hydrophobic patches. Thus, confinement of the colloids by finite--size hydrophobic patches would give the impression of an effective intercolloidal attraction. At present, this explanation is only of a qualitative nature. Thermal fluctuations of the interface position around its mean value $u({\bf r})$ also induce an effective interaction between the colloids which confine these fluctuations (Casimir--like force). Using a Gaussian model of the fluctuating interface in analogy to the procedure employed in Ref.~\cite{Gol96} for calculating fluctuation--induced forces between rods in a membrane, one finds for uncharged colloids a fluctuation--induced potential $V_{\rm fluc} \simeq -k_B T\; (r_{0, \rm ref}/d)^4$, which is too small and falls off more rapidly than the intercolloidal dipole repulsion. Here, the generalization to the charged case might give hints for the effective attractions between {\em very} small particles trapped at interfaces. Concerning particle sizes well below the Debye length, one should modify our analysis to account for the overlap of the screening ionic clouds (this would affect, e.g., the superposition approximation for the stress field $\Pi$, Eq.~(\ref{eq:superposition})). Finally, in Ref.~\cite{SCLN04} the attraction of particles trapped at a nematic--air interface is reported and an explanation in terms of a logarithmic meniscus deformation is proposed which parallels the explanation given in Ref.~\cite{Nik02}: in this case, the deformation would be caused by the elastic distortion induced by the particles on the nematic phase~\cite{SCLN04}.
Our detailed theoretical treatment shows that no long--range (logarithmic) meniscus distortion can arise on an interface if the system is mechanically isolated and the excess free energy of the perturbed interface is correctly described by an expression like Eq.~(\ref{eq:F}). Thus it appears that the simple explanation of the observed colloidal attractions given in Ref.~\cite{SCLN04} is not correct. However, it is not clear whether the free energy of a distorted nematic--air interface is equivalent to that of a simple fluid interface due to the long--ranged interactions in the nematic bulk caused by defects. Thus, a generalization of our theory to interfaces involving nematic phases would be desirable in order to assess the possibility of long--ranged colloidal attractions in more detail. \section{Summary} \label{sec:summary} We have analyzed the effective force induced by capillary deformation between two smooth spherical colloids floating at a fluid interface. The relevant degrees of freedom are the meniscus deformation and the height of the colloids (Fig.~\ref{fig:pert}), whose equilibrium values are given by minimization of a free energy functional. This functional was derived assuming small deviations from a reference configuration (Fig.~\ref{fig:ref}). It incorporates the surface tensions of the three interfaces involved (``colloid -- fluid phase 1'', ``colloid -- fluid phase 2'', and ``fluid phase 1 -- fluid phase 2''), the potential energy of the colloids under the action of a force $F$, the potential energy of the fluid interface in an arbitrary surface stress field $\Pi({\bf r})$, and the potential energy due to a restoring force acting on the interface (Eqs.~(\ref{eq:Fdef})--(\ref{eq:F})). The effective intercolloidal potential (Eq.~(\ref{eq:Veff_a})) was calculated in the limit of large separations by using the superposition approximation (Eq.~(\ref{eq:superposition})). 
We have shown in this limit that the contribution to the effective potential by the interfaces ``colloid -- fluid phases'' is subdominant. If the total force acting on the system, consisting of the two colloids and the meniscus, does not vanish, the presence of the restoring force acting on the fluid interface is essential --- although its precise form does not matter (Subsec.~\ref{sec:lambda}). In this case, the effective interaction is long--ranged and attractive (Eq.~(\ref{eq:logV})). If the total force vanishes, the restoring force is irrelevant altogether, the effective interaction is shorter--ranged (Eq.~(\ref{eq:shortrangeV})), and it cannot be computed reliably within the superposition approximation. As an application, we have considered the case of like--charged, micrometer--sized particles when the capillary deformation is due to the ensuing electrostatic field. We have discussed how one can tune the long--ranged attractive interaction by an external electric field, but we conclude that the experimentally observed attraction in an isolated system cannot be explained within the present model. Possible directions for future research such as colloidal surface roughness and fluctuations of the interface have been discussed. \begin{acknowledgments} We kindly acknowledge helpful discussions on the subject of the manuscript with M.~Nikolaides, A.~Bausch, K.~Danov, and P.~Kralchevsky. \end{acknowledgments}
gr-qc/0411130
\section{Introduction} There exist many approaches to the quantisation of systems with (first-class) constraints. They fall essentially into two categories, according to whether they implement the constraints before or after quantisation. In the first category lie schemes that attempt to quantise directly the reduced state space of a constrained system. Schemes of the second category are exemplified by Dirac quantisation: one quantises the theory before the imposition of the constraints and identifies the Hilbert space for the physical degrees of freedom as the zero-eigenspace of a quantum operator that represents the constraint. The present paper proposes a variation of the Reduced State Space Quantisation scheme. It is based upon the coherent-state path-integral quantisation method developed by Klauder \cite{KlDa84, Kla88}. In this method the only input necessary for the quantisation of a classical symplectic manifold $\Gamma$ is the introduction of a Riemannian metric $ds^2$ on $\Gamma$. The metric allows the definition of a Wiener process on $\Gamma$, which plays the role of a regulator for the rigorous definition of a path-integral for the coherent state matrix elements. The key suggestion of our proposal is that a Riemannian metric on the reduced state space may always be identified by exploiting the {\em geometry} associated with the classical reduction procedure. \section{Dirac vs Reduced state space quantisation} A Hamiltonian system with first-class constraints is described by a symplectic manifold $\Gamma$, equipped with a symplectic form $\Omega$ and a number of constraint functions $f^i : \Gamma \rightarrow {\bf R}$, such that $\{f^i, f^j \} = c^{ij}{}{}_k f^k$, for some structure {\em functions} $c^{ij}{}{}_k$. One defines the constraint surface $C$ as the submanifold of $\Gamma$ defined by the conditions $f^i = 0$. The restriction of the symplectic form $\Omega$ to $C$ is degenerate.
The reduced state space $\Gamma_{red}$ is defined as the space of all orbits on the constraint surface generated by the constraints through the Poisson bracket. In other words, $\Gamma_{red}$ is the quotient $C/ \sim$, where $\sim$ is an equivalence relation, defined as follows: $z \sim z'$, if $z'$ can be reached from $z$ through a canonical transformation generated by the constraints. We define the projection map $\pi:C \rightarrow \Gamma_{red}$ as $\pi(z) = [z]$, where $[z]$ is the orbit to which the point $z \in C$ belongs. $\Gamma_{red}$ is equipped with a symplectic form $\tilde{\Omega}$, such that $\pi^*\tilde{\Omega} = \Omega|_C$. The physical idea of reduced state space quantisation (RSSQ) is that the rules of quantisation are only meaningful when applied to the true physical degrees of freedom and are rather ambiguous when applied to the gauge degrees of freedom. It is suggested therefore that one should construct the quantum theory for the reduced state space $\Gamma_{red}$. In general, $\Gamma_{red}$ has a non-trivial topological structure, and generalisations of the canonical quantisation scheme are needed to pass into the quantum theory. The implementation of RSSQ, however, necessitates that one is able to fully solve the constraints classically, something that is very difficult if not impossible for interesting physical systems (e.g. general relativity). So for such systems not even the first step in the quantisation procedure can be implemented, leaving us completely in the dark about the properties of various quantum mechanical kinematical variables. Moreover, the reduced state space of field systems characterised by first-class constraints generically consists of non-local variables: the true degrees of freedom are not fields themselves. For this reason the spacetime character of the theory is not explicitly manifest in the reduced state space. In a reduced state space quantisation the quantum mechanical variables one constructs are not fields.
Furthermore, there exist quantities of potential interest quantum mechanically that do not commute with the constraints--such as the area and volume operators in loop quantum gravity \cite{RoSm95}. There is no way to describe such objects in the RSSQ scheme. In many systems of interest, such as General Relativity, the reduced state space is not a manifold but an orbifold: a singular structure arises as a result of taking the equivalence classes with respect to the action of the constraints. This creates additional technical problems in the quantisation of such systems. Dirac quantisation on the other hand involves the construction of a {\em kinematical} Hilbert space $H$, in which the basic observables of $\Gamma$ are represented by self-adjoint operators. One then constructs self-adjoint operators $\hat{f}^i$ to represent the constraints. The physical Hilbert space $H_{phys}$ is the subspace of $H$ consisting of all vectors $| \psi \rangle \in H$, such that $\hat{f}^i | \psi \rangle = 0$. Typically variations of this theme are employed for the construction of the physical Hilbert space, because the constraint operators generically have continuous spectrum around zero; hence `wave functions' that solve the quantum constraints are not square-integrable. This issue and the normal-ordering ambiguities associated with the definition of constraint operators are the major technical problems of Dirac quantisation. The method we propose here solves the constraints at the classical level, but it does allow for the existence of a kinematical Hilbert space (like Dirac quantisation) and of a map between that Hilbert space and the one corresponding to the physical degrees of freedom. \section{Geometric reduction and coherent state path integrals} \subsection{Coherent states and path-integrals} Our prescription for RSSQ is based upon Klauder's scheme of coherent state quantisation.
The Hilbert space $H$ associated with a classical symplectic manifold $\Gamma$ with symplectic form $\Omega$ is a suitable subspace of the space ${\cal L}^2(\Gamma, dz)$, where $dz$ is a shorthand for the Liouville form $\Omega \times \ldots \times \Omega$ on $\Gamma$. This subspace corresponds to a projection operator ${\bf E}$ on ${\cal L}^2(\Gamma, dz)$, whose matrix elements define a complex-valued function $K(z,z')$, $z, z' \in \Gamma$, such that \begin{eqnarray} \int dz' K(z,z') K(z',z'') = K(z,z''), \label{1}\\ K(z,z') = K^*(z',z). \label{2} \end{eqnarray} Moreover, the Hilbert space $H$ is equipped with a family of generalised coherent states $| z \rangle, z \in \Gamma$, such that \begin{eqnarray} \langle z| z' \rangle = K(z,z'). \label{3} \end{eqnarray} Conversely, given any complex-valued function on $\Gamma \times \Gamma$ that satisfies (\ref{2}) and is positive definite, one may construct uniquely a Hilbert space $H$, equipped with a family of generalised coherent states $|z \rangle$, such that (\ref{3}) holds--see \cite{Kla97} for a detailed exposition. Positivity in this context is defined as the requirement that \begin{eqnarray} \sum_{l,l'} c_l c^*_{l'} K(z_l,z_{l'}) \geq 0, \label{positivity} \end{eqnarray} for any finite set of complex numbers $c_l$ and state space points $z_l$, where equality in (\ref{positivity}) holds only for $c_l = 0$. It is important to note that two kernels $K_1$ and $K_2$ related by a phase transformation \begin{eqnarray} K_1(z,z') = e^{i \theta(z) - i \theta(z')} K_2(z,z'), \end{eqnarray} define the same Hilbert space, since the corresponding families of coherent states differ only by the phase $e^{i \theta}$. The overlap kernel may be computed by means of path-integrals, in a procedure developed by Klauder. The key ingredient is a Riemannian metric $ds^2$ on the classical phase space. This metric is employed for the purpose of regularisation of the path-integral expressions, which cannot otherwise be defined.
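The kernel conditions (\ref{2}) and (\ref{positivity}) can be made concrete with a standard example, the Glauber coherent-state overlap kernel on $\Gamma = {\bf R}^2 \cong {\bf C}$. The following toy check (our illustration, not part of the construction above) verifies Hermiticity and positive definiteness numerically for a finite set of phase-space points.

```python
import numpy as np

# Toy numerical check (our illustration) of Hermiticity and positivity for
# the standard Glauber coherent-state kernel on Gamma = C:
#   K(z, z') = exp(conj(z) z' - |z|^2/2 - |z'|^2/2)
def K(z, zp):
    return np.exp(np.conj(z) * zp - abs(z) ** 2 / 2 - abs(zp) ** 2 / 2)

# Well-separated sample points on a circle in phase space:
zs = [2.5 * np.exp(1j * np.pi * k / 4) for k in range(8)]

G = np.array([[K(z, zp) for zp in zs] for z in zs])  # Gram matrix K(z_l, z_l')
print(np.allclose(G, G.conj().T))        # Hermiticity: K(z,z') = K*(z',z)
print(np.linalg.eigvalsh(G).min() > 0)   # positivity of sum c_l c*_l' K(z_l,z_l')
```

Both checks print `True`: the Gram matrix of distinct coherent states is Hermitian and positive definite, as required for a reproducing kernel.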
The metric $ds^2$ defines a Wiener measure on the space of paths $\Pi$, which consists of suitable continuous functions (in general non-differentiable) from a subset of ${\bf R}$ to $\Gamma$: \begin{eqnarray} d \mu^{\nu}[z(\cdot)] = Dz^a (\cdot) e^{- \frac{1}{2 \nu} \int_0^T d \lambda \left(\frac{ds}{d\lambda} \right)^2}, \end{eqnarray} where $\nu$ is a diffusion coefficient for the Wiener process, which plays the role of a regularisation parameter. The overlap kernel is then obtained as \begin{eqnarray} \langle z_f|z_i \rangle = \lim_{\nu \rightarrow \infty} {\cal N}_{\nu} \int d \mu^{\nu}[z(\cdot)]e^{i \int A[z(\cdot)]}. \label{pathintegral} \end{eqnarray} In this expression the integration is over paths $z(\cdot)$, such that $z(0) = z_i$ and $z(T) = z_f$. The Wiener measure is conditioned on these boundary values. The object $A_a$ is a U(1) connection, which is related to the symplectic form of the manifold $\Gamma$ by means of the relation $\Omega = dA$. In coordinates in which $\Omega = dp_a \wedge dq^a$, the connection $A$ reads $A = p_a dq^a$. The parameter ${\cal N}_{\nu}$ is a normalisation constant, which may be fixed by the condition that $\langle z|z \rangle = 1$. The propagator also has a path-integral expression \begin{eqnarray} \langle z_f|e^{-i \hat{H}T}|z_i \rangle = \lim_{\nu \rightarrow \infty} {\cal N}_{\nu}\int d \mu^{\nu}[z(\cdot)]e^{i \int A_a(z) dz^a - i \int_0^T d \lambda \, h(z) }, \end{eqnarray} where $h(z)$ is the classical Hamiltonian function corresponding to the Hamiltonian operator $\hat{H}$. Klauder extended this procedure to deal with systems with constraints (either first-class or second-class) \cite{Kla97, Klaother}. His scheme involved a generalisation of the Dirac method--he showed that the projection operator onto the zero eigenspace of the constraint operators may be expressed as a path-integral over the space of paths on $\Gamma$.
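The conditioning of the Wiener measure on the boundary values $z(0) = z_i$ and $z(T) = z_f$ can be illustrated with a small Monte Carlo sketch, a one-dimensional toy with diffusion constant $\nu$ in which pinned paths are built from free ones in the standard Brownian-bridge manner. The numerical values are illustrative assumptions, not tied to the construction above.

```python
import random

# Toy Monte Carlo sketch (an illustrative assumption, not the full scheme):
# sample one-dimensional paths from a Wiener process with diffusion nu,
# conditioned on the boundary values z(0) = z_i and z(T) = z_f.
def brownian_bridge(z_i, z_f, nu, T, steps, rng):
    """One pinned path: free Wiener path, then forced to hit z_f at t = T."""
    dt = T / steps
    w, path = 0.0, [0.0]
    for _ in range(steps):
        w += rng.gauss(0.0, (nu * dt) ** 0.5)
        path.append(w)
    # conditioning: B(t) = z_i + (z_f - z_i) t/T + W(t) - (t/T) W(T)
    return [z_i + (z_f - z_i) * (k / steps) + path[k] - (k / steps) * path[-1]
            for k in range(steps + 1)]

rng = random.Random(1)
nu, T, steps, n = 2.0, 1.0, 100, 10000
mids = [brownian_bridge(0.0, 1.0, nu, T, steps, rng)[steps // 2] for _ in range(n)]
mean = sum(mids) / n
var = sum((m - mean) ** 2 for m in mids) / n
print(mean, var)   # midpoint statistics: mean ~ 0.5, variance ~ nu*T/4 = 0.5
```

Every sampled path is pinned exactly at both ends, while the midpoint fluctuates with the variance $\nu\, t(T-t)/T$ characteristic of the conditioned Wiener process.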
In this paper we suggest that it is not necessary to define the constraints at the quantum level. Rather, the reduction may be implemented at the classical level, in a way that allows the definition of path-integrals of the form (\ref{pathintegral}) with paths on the {\em reduced state space}. \subsection{Path-integrals on the reduced state space} Our proposal starts from the remark that the path-integral (\ref{pathintegral}) may also be employed to define coherent states on the reduced state space $\Gamma_{red}$. The symplectic potential $A$ on the reduced state space is easily obtained, since the reduced state space $\Gamma_{red}$ is also a symplectic manifold. The problem is to identify the correct metric $ds^2_{red}$ on $\Gamma_{red}$, through which to define the path-integral. In unconstrained systems the metric is chosen by symmetry requirements. It is usually a homogeneous metric, whose isometries correspond to the transitive action of a group on the state space. For classical mechanics on ${\bf R}^{2n}$ the corresponding group is the Weyl group, for the sphere $S^2$ that corresponds to a spin system it is the group $SU(2)$, and so on. This group--referred to as the canonical group--provides a symmetry of the kinematical description \cite{Ish84}. Conversely, the requirement of symmetry in the kinematical description allows one to select a homogeneous Riemannian metric to define the path-integral. But a symmetry of the kinematical description does not, in general, leave the dynamics or the constraints invariant. (We know in standard quantum mechanics that no physically interesting Hamiltonian commutes with both the position and the momentum that generate the Weyl group.) The metric $ds^2_{red}$ on $\Gamma_{red}$ therefore cannot be easily identified from the `symmetries' of the original theory. 
Still, $ds^2_{red}$ may be computed if one has chosen a Riemannian metric $ds^2$ on the unconstrained state space $\Gamma$, which can itself be chosen by invoking symmetry arguments. A metric $ds^2$ on $\Gamma$ defines by restriction a metric on the constraint surface $C$--hence a distance function $d(z,z')$ between points $z, z' \in C$. By definition, a point $\zeta$ of $\Gamma_{red}$ is a collection of points on $C$ generated by the action of the constraint functions. Hence a point $\zeta$ is identified with a submanifold of $C$. Moreover, the submanifolds corresponding to different points of the reduced state space are disjoint, as different equivalence classes are always disjoint. This implies that one may exploit the distance function on $C$ to define a distance function on $\Gamma_{red}$, namely \begin{eqnarray} d_{red}(\zeta, \zeta') = \inf_{z \in \zeta, z' \in \zeta'} d(z,z'). \label{distance} \end{eqnarray} From $d_{red}$ one may define a metric on $\Gamma_{red}$. To summarise: given a metric on $\Gamma$--which defines a quantisation of the system prior to the imposition of the constraints--one may construct uniquely a metric on $\Gamma_{red}$, through which one may construct the Hilbert space for the true degrees of freedom. There is a point of ambiguity in the present construction. The definition (\ref{distance}) of the distance in the reduced state space involves the infimum of the distances between the points of the two orbits. The infimum involves taking a limit, and it may turn out that the distance between specific orbits $\zeta$ and $\zeta'$ vanishes--even though these orbits are disjoint. In that case one has to identify the points $\zeta$ and $\zeta'$ in the reduced state space--the coherent states cannot distinguish between them. In many cases this turns out to be a benefit, rather than a disadvantage. 
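A brute-force numerical sketch of the definition (\ref{distance}): sample each orbit on a parameter grid and take the infimum of the pairwise distances. The specific orbits below (parallel lines and concentric circles in the plane) are illustrative assumptions:

```python
import numpy as np

# d_red(zeta, zeta') = inf over z in zeta, z' in zeta' of d(z, z'),
# estimated by brute force on a parameter grid.
def orbit_distance(orbit1, orbit2, params):
    P1 = np.array([orbit1(s) for s in params])  # sampled points of orbit zeta
    P2 = np.array([orbit2(s) for s in params])  # sampled points of orbit zeta'
    diff = P1[:, None, :] - P2[None, :, :]      # all pairwise differences
    return np.sqrt((diff ** 2).sum(-1)).min()   # infimum over the grid

s = np.linspace(-10.0, 10.0, 1001)
# two parallel line orbits a unit distance apart
d_lines = orbit_distance(lambda u: np.array([u, 0.0]),
                         lambda u: np.array([u, 1.0]), s)
# two concentric circular orbits, radii 1 and 2
d_circles = orbit_distance(lambda u: np.array([np.cos(u), np.sin(u)]),
                           lambda u: np.array([2.0 * np.cos(u),
                                               2.0 * np.sin(u)]), s)
```

Both infima equal one, the distance between the closest points of the respective orbits.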
We shall show later that the distance function typically vanishes at singular points (ones that render $\Gamma_{red}$ an orbifold). There is a physical interpretation of the reduction procedure we propose, which has to do with the uncertainty principle. A Riemannian metric on state space may be employed for a geometric expression of the uncertainty principle \cite{AnSav03}. Two points on the classical state space may be distinguished quantum mechanically only if their distance $\delta s$ is greater than one (in units such that $\hbar = 1$). The presence of a first-class constraint implies that points in the same orbit cannot be physically distinguished. For two orbits to be distinguishable, the distance between any point of one orbit and any point of the other should be larger than one (in terms of the original metric). This suggests defining the statistical distance between orbits--and consequently the metric on $\Gamma_{red}$--as the infimum of the distance between their points. \subsection{Relation to Dirac quantisation} An important feature of the construction described above is that it preserves some of the advantages of Dirac quantisation, namely the existence of quantum operators for variables that do not commute with the constraints. Indeed, one may employ the metric $ds^2$ on $\Gamma$ to construct a kinematical Hilbert space $H$, spanned by a family of coherent states $|z \rangle$, $z \in \Gamma$. We may then consider the subspace $H_C$ of $H$, spanned by all finite linear combinations $\sum_{l = 1}^{n} c_l |z_l \rangle$, where the points $z_l \in C$. On the other hand, the physical Hilbert space $H_{phys}$ is constructed from the metric $ds^2_{red}$ and is spanned by coherent states $|\zeta \rangle$. 
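For Gaussian coherent states this geometric criterion is explicit: the squared modulus of the overlap decays as $e^{-\delta s^2}$ in the flat phase-space distance $\delta s$, so two states are operationally distinguishable only for $\delta s \gtrsim 1$. A short numerical illustration (the Gaussian kernel is an illustrative assumption):

```python
import numpy as np

# |<z|z'>|^2 for Gaussian coherent states equals exp(-delta_s^2), where
# delta_s^2 = (x-x')^2 + (p-p')^2 is the flat phase-space distance (hbar = 1).
def overlap2(z, zp):
    dx, dp = z[0] - zp[0], z[1] - zp[1]
    K = np.exp(1j * (z[1] * zp[0] - zp[1] * z[0])
               - 0.5 * dx ** 2 - 0.5 * dp ** 2)
    return abs(K) ** 2

z0 = (0.0, 0.0)
close = overlap2(z0, (0.1, 0.0))  # delta_s = 0.1: nearly indistinguishable
unit = overlap2(z0, (1.0, 0.0))   # delta_s = 1: overlap e^{-1}, marginal
far = overlap2(z0, (3.0, 0.0))    # delta_s = 3: essentially orthogonal
```

The crossover at $\delta s \sim 1$ is what motivates taking the infimum of the inter-orbit distance as the statistical distance.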
The natural projection map $\pi: C \rightarrow \Gamma_{red}$, defined as $\pi(z) = [z]$, induces a map $i_{\pi}$ between $H_C$ and $H_{phys}$ \begin{eqnarray} i_{\pi} |z \rangle = | [z] \rangle, \label{map} \end{eqnarray} which by linearity can be extended to the whole of $H_C$. Moreover, if we denote by $E$ the projection map from $H$ to $H_C$, any vector $| \psi \rangle$ on $H$ may be mapped to a vector on $H_{phys}$, by mapping its projection $E | \psi \rangle$ on $H_C$ to $H_{phys}$ through the map $i_{\pi}$. The map $i_{\pi}$ is then extended to a map between the kinematical Hilbert space and the physical Hilbert space. It is not a projection operator in general (or even a self-adjoint operator), as the inner products on $H_C$ and on $H_{phys}$ cannot be simply related \footnote{Note that even in conventional Dirac quantisation the mapping from $H$ to $H_{phys}$ is not implemented by a self-adjoint operator, if the constraints do not have a continuous spectrum at 0. }. Still, $i_{\pi}$ may be employed to identify the `gauge-invariant part' of any operator in $H$, namely the part of the operator that can be `projected' to the physical Hilbert space. Thus we obtain some of the benefits of Dirac quantisation, without having to deal with issues such as operator ordering or the existence of a continuous spectrum for the constraints near zero. The map between $H$ and $H_{phys}$ is hardly unique--the physical predictions of the theory remain the same if one multiplies the right-hand side of (\ref{map}) with a $z$-dependent phase $e^{i \theta(z)}$. Each choice of a function $e^{i \theta(z)}$ leads to a different linear map. But this arbitrariness is not problematic, because in any case the definition of $H$ in most physical systems contains a large degree of arbitrariness--especially in field theories. 
In a nutshell, our procedure proposes a way to pass from the symplectic form $\Omega$ and Riemannian metric $ds^2$ on $\Gamma$ to the symplectic form $\Omega_{red}$ and Riemannian metric $ds^2_{red}$ on $\Gamma_{red}$. The knowledge of the symplectic form and the metric defines uniquely a Hilbert space equipped with a family of coherent states. We may thus construct one Hilbert space $H$ for $\Gamma$, via path integrals from $\Omega$ and $ds^2$, and one Hilbert space $H_{phys}$ for $\Gamma_{red}$, constructed from $\Omega_{red}$ and $ds^2_{red}$. The reduction procedure that takes us from $(\Omega, ds^2)$ to $(\Omega_{red}, ds^2_{red})$ defines a map from $H$ to $H_{phys}$, in a way that mirrors the projection map from the kinematical to the physical Hilbert space appearing in Dirac quantisation. \subsection{Relation with path-integrals on $\Gamma$} An important point that clarifies our proposal is that the path integral over the reduced state space may be equivalently rewritten as one over paths on $\Gamma$, similar (but not identical) to the one employed by Klauder in the Dirac quantisation scheme through the coherent-state path-integral. If we define the Wiener measure $d \tilde{\mu}^{\nu}[\zeta(\cdot)]$ over paths in $\Gamma_{red}$ through the metric $ds^2_{red}$ and denote by $\tilde{A}$ a U(1) connection on $\Gamma_{red}$ corresponding to $\Omega_{red}$, one writes the path-integral (\ref{pathintegral}) on the reduced state space as \begin{eqnarray} \langle \zeta_f|\zeta_i \rangle = \lim_{\nu \rightarrow \infty} {\cal N}_{\nu} \int d \tilde{\mu}^{\nu}[\zeta(\cdot)]e^{i \int \tilde{A}[\zeta(\cdot)]}. \label{pathintegral2} \end{eqnarray} Assuming that $\Gamma$ is a $2n$-dimensional manifold, and that we have $m < n$ first-class constraints, the constraint surface $C$ is a $(2n - m)$-dimensional manifold, and $\Gamma_{red}$ has dimension equal to $2(n-m)$. 
If the functions $\zeta^i, i = 1, \ldots, 2(n-m)$ define a coordinate system on $\Gamma_{red}$, their pull-backs to $C$, together with $m$ coordinates $v^i, i= 1, \ldots, m$ that span each gauge orbit, define a coordinate system on $C$. The symplectic form on $C$ does not depend on the coordinates $v^i$--it is written as \begin{eqnarray} \Omega|_C = \Omega_{ij}(\zeta) d \zeta^i \wedge d \zeta^j. \end{eqnarray} If $\tilde{A}$ is a $U(1)$ connection on $\Gamma_{red}$ that satisfies $d\tilde{A} = \Omega_{red}$, the most general connection one-form on $C$ that satisfies $dA = \Omega|_C$ may be written locally as \begin{eqnarray} \tilde{A}_i(\zeta) d \zeta^i + d \theta(\zeta,v), \end{eqnarray} in terms of a scalar function $\theta$ on $C$. Let us now consider an integral over paths on $C$ \begin{eqnarray} \int d \mu_{C}^{\nu}[z(\cdot)] e^{i \int A[z(\cdot)]}. \end{eqnarray} Let us, for the moment, refrain from specifying the metric on $C$ that determines the Wiener measure $d \mu_{C}^{\nu}[z(\cdot)]$, except for the fact that the measure is conditioned on the endpoints. The exponent in the path integral reads $ i \int \tilde{A}_i d \zeta^i + i (\theta(\zeta_f,v_f) - \theta(\zeta_i,v_i))$. The path integral then reads \begin{eqnarray} e^{i\theta(\zeta_f,v_f) - i \theta(\zeta_i,v_i)} \int d \mu_{C}^{\nu}[z(\cdot)] e^{i \int \tilde{A}_i d \zeta^i}, \end{eqnarray} since the integration measure is conditioned at the endpoints. The phases in front of the path-integral may be reabsorbed in a phase change of the coherent states, and may therefore be dropped. 
If we assume that the integration measure factorises into a piece along the orbits ($\mu_{gauge}$) and one across the orbits ($\mu_{red}$), \begin{eqnarray} \int d \mu_{C}^{\nu}[z(\cdot)] = \int d \mu_{red}^{ \nu}[\zeta(\cdot)] \int d \mu_{gauge}^{\nu}[v(\cdot)], \label{factor} \end{eqnarray} then we see that \begin{eqnarray} \int d \mu_{C}^{\nu}[z(\cdot)] e^{i \int A_a dz^a} = \int d \mu_{red}^{\nu}[\zeta(\cdot)] e^{i \int \tilde{A}_i d \zeta^i} \int d \mu_{gauge}^{\nu}[v(\cdot)]. \end{eqnarray} If the two measures are separately normalised, the overlap kernel on the reduced phase space may be obtained by a path integral on the constraint surface, \begin{eqnarray} \langle \zeta_f|\zeta_i \rangle = \lim_{\nu \rightarrow \infty} {\cal N}_{\nu} \int d \mu^{\nu}_{C}[z(\cdot)] e^{i \int A[z(\cdot)]}. \end{eqnarray} The crucial point is that the measure should satisfy the factorisation condition (\ref{factor}), which involves a suitable choice of metric on $C$. Recall that the metric is a bilinear functional on the tangent space $T_zC$ of the constraint surface. The degeneracy of the pulled-back symplectic form $\Omega|_C$ implies that each tangent space is split into the degeneracy subspace $D_zC$ of all tangent vectors $X$ such that $\Omega|_C(X,\cdot) = 0$ and its complement $\bar{D}_zC$, which may be naturally identified with $T_{\zeta}\Gamma_{red}$. The factorisation condition (\ref{factor}) for the Wiener measure may be obtained if the metric is block-diagonal with respect to the split $T_zC = D_zC \oplus \bar{D}_zC$; in other words, $g(X,Y) = 0$ if $X \in D_zC$ and $Y \in \bar{D}_zC$. Moreover, it is necessary that the restriction of the metric to $\bar{D}_zC$ does not depend on the variables $v$: the Hamiltonian vector fields generated by the constraints should leave this restriction invariant. The constraint surface $C$ is defined as the submanifold of $\Gamma$ on which the constraint functions $f^a, a = 1, \ldots, m$ vanish. 
Any ordinary integral over $C$ may be expressed as an integral over $\Gamma$ by the inclusion of the product of delta functions $\prod_a \delta(f^a(z))$. Similarly, we may turn the path integral on $C$ into a path integral over $\Gamma$ by the insertion of delta functions at each time point of the discretisation. We then write \begin{eqnarray} \langle \zeta_f|\zeta_i \rangle = \lim_{\nu \rightarrow \infty} {\cal N}_{\nu} \int d \mu^{\nu }[z(\cdot)] \prod_a \Delta[f^a] e^{i \int A[z(\cdot)]}, \end{eqnarray} where $\Delta[f^a] = \prod_t \delta (f^a(z(t)))$; the product refers to any discretisation of the paths in the path integral. The integration measure may be defined by any metric $ds^2$ on $\Gamma$ which reduces to a factorisable metric on $C$. Similarly, the connection on $\Gamma$ may be chosen arbitrarily, as long as it satisfies $dA = \Omega$. Exploiting the representation of the delta function as an integral, $\delta(f) = \int d N\, e^{-iNf}$, we write formally \begin{eqnarray} \Delta[f^a] = \int DN(\cdot) e^{-i \int dt\, N_a(t) f^a(t)}, \end{eqnarray} where $DN(\cdot)$ is a continuous limit of $\prod_a \prod_t dN^a_t$. We then write the formal expression \begin{eqnarray} \langle \zeta_f|\zeta_i \rangle = \lim_{\nu \rightarrow \infty} {\cal N}_{\nu} \int DN(\cdot) \int d \mu^{\nu}[z(\cdot)] e^{i \int A_a dz^a - i \int_0^T ds N_a(s) f^a(s)}. \end{eqnarray} This expression is {\em formally similar} to the one employed by Klauder in his implementation of Dirac quantisation through the coherent state path-integral \cite{Kla97}. 
The key difference is that the Wiener process in Klauder's scheme is defined by means of a homogeneous metric on $\Gamma$, while in the reduced-state-space quantisation (RSSQ) scheme the metric on $\Gamma$ is in general non-homogeneous and needs to satisfy the factorisation condition on the constraint surface described earlier\footnote{Moreover, in Klauder's version of Dirac quantisation the measure over $N(\cdot)$ is normalised to one, while here it is only a formal continuum limit of $\prod_t dN_t$.}. The difference in the metrics implies that the diffusion processes regularising the path-integral are different, and hence so are the final expressions for the overlap kernels. Since, however, the metric only appears for regularisation purposes, and both the constraints and the symplectic form are the same, the two methods are expected to yield the same results at the semi-classical level. \section{Examples} \subsection{A trivial example} We may consider a particle moving in Euclidean three-space ${\bf R}^3$. The state space $\Gamma = {\bf R}^6$ is spanned by the coordinates $(x^i,p_i), i = 1,2,3$ and is equipped with the natural symplectic form. We then consider the constraint $x^3=0$, which can be trivially shown to imply that $\Gamma_{red} = {\bf R}^4$, spanned by the variables $(x_1,x_2,p_1,p_2)$. 
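This reduction can be checked numerically. With the flat metric on ${\bf R}^6$, the gauge orbits are lines along $p_3$ at $x^3 = 0$ (the direction generated by the constraint function $x^3$), and the infimum distance between two orbits equals the Euclidean distance between their $(x_1,x_2,p_1,p_2)$ labels. A small Python sketch:

```python
import numpy as np

# Constraint x^3 = 0 on Gamma = R^6 with the flat metric: gauge orbits are
# lines along p_3.  The infimum distance between two orbits equals the
# Euclidean distance of their reduced labels.
rng = np.random.default_rng(2)
z1, z2 = rng.normal(size=4), rng.normal(size=4)  # labels (x1, x2, p1, p2)
p3 = np.linspace(-50.0, 50.0, 2001)              # gauge parameter grid

# squared distance between orbit points: |z1 - z2|^2 + (p3 - p3')^2,
# since x^3 = 0 on both orbits; minimise over the gauge parameters
dp2 = (p3[:, None] - p3[None, :]) ** 2
d_red = np.sqrt(np.linalg.norm(z1 - z2) ** 2 + dp2.min())
```

The minimum over the gauge parameters is attained when the two $p_3$ values coincide, leaving exactly the flat metric on $\Gamma_{red} = {\bf R}^4$.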
The coherent states on $\Gamma$ are the standard Gaussian coherent states, with overlap kernel \begin{eqnarray} \langle {\bf x}, {\bf p}| {\bf x'}, {\bf p'} \rangle = \exp \left( ip_ix'^i - i p_i' x^i -\frac{1}{2} |{\bf x} - {\bf x'}|^2 - \frac{1}{2}|{\bf p} - {\bf p'}|^2 \right), \end{eqnarray} which corresponds to the homogeneous Riemannian metric on ${\bf R}^6$: \begin{eqnarray} ds^2_{\Gamma} = \delta_{ij} dx^i dx^j + \delta^{ij} dp_i dp_j. \end{eqnarray} Since the orbits on the constraint surface are the lines of constant $(x_1,x_2,p_1,p_2)$ with $x^3 = 0$ (the gauge parameter being $p_3$), it is elementary to show that the distance between the lines corresponds to the reduced metric \begin{eqnarray} ds^2_{\Gamma_{red}} = dx_1^2 + dx_2^2 + dp_1^2 + dp_2^2, \end{eqnarray} which again corresponds to Gaussian coherent states for the reduced Hilbert space. \subsection{A spin system} We consider the state space ${\bf R}^4$, with variables $(x_1,x_2,p_1,p_2)$, equipped with the standard symplectic form \begin{eqnarray} \Omega = dp_1 \wedge dx_1 + dp_2 \wedge dx_2 \end{eqnarray} and a constraint \begin{eqnarray} \frac{1}{2} (p_1^2 +p_2^2 + x_1^2 +x^2_2) = k > 0. \end{eqnarray} The constraint surface is the three-sphere $S^3$. Employing the coordinates $(\theta, \phi, \chi)$ on $S^3$ through the definition \begin{eqnarray} \frac{1}{\sqrt{2}} (x_1 - i p_1) = \sqrt{k} \cos \frac{\theta}{2} e^{i(\phi + \chi)/2} \\ \frac{1}{\sqrt{2}} (x_2 - i p_2) = \sqrt{k} \sin \frac{\theta}{2} e^{i(\phi - \chi)/2}, \end{eqnarray} we obtain \begin{eqnarray} \Omega_C = \frac{k}{2} \sin \theta d \theta \wedge d \phi, \end{eqnarray} which implies that the degenerate direction corresponds to the vector field $\frac{\partial}{\partial \chi}$. Its orbits define the usual Hopf fibration of $S^3$, hence the reduced state space is $S^2$ equipped with the standard symplectic form. It is the state space of a classical spin system. 
As is well known, the single-valuedness of the $U(1)$ connection $\cos \theta d \phi$ that appears in the path-integral implies that in the quantum theory $k = 2 n$, for integer $n$. The homogeneous metric on ${\bf R}^4$ corresponding to Gaussian coherent states, \begin{eqnarray} ds^2 = dx_1^2 + dx^2_2 + dp_1^2 + dp_2^2, \end{eqnarray} reduces on the constraint surface $S^3$ to \begin{eqnarray} ds^2_C = \frac{k}{2}(d \theta^2 + d \phi^2 + d \chi^2 - 2 \cos \theta d \phi d \chi). \end{eqnarray} It is easy to minimise this metric over the orbits of constant $(\theta, \phi)$ to obtain the Riemannian metric on $S^2$, \begin{eqnarray} ds^2_{red} = \frac{k}{2} (d \theta^2 + \sin^2 \theta d \phi^2). \end{eqnarray} This is the {\em homogeneous} metric for a sphere of radius $k/2 = n$; for this metric and this connection one may obtain through the path-integral (\ref{pathintegral}) the quantum description of spin in terms of the irreducible representations of $SU(2)$ for each value of $n$ \cite{Kl81}. \subsection{Spinless relativistic particle} The relativistic particle illustrates the strengths of the method, perhaps better than any other example. The reason is that the Poincar\'e symmetry of the system leaves few alternatives for the form of the Riemannian metric on the constraint surface. The resulting coherent states are therefore fully covariant. The spinless relativistic particle of mass $m$ is described by the presymplectic manifold $C$ (the constraint surface), which is spanned by a {\em unit, future-directed, timelike vector} $I$ and an element $X$ of Minkowski spacetime. The topology of $C$ is then $V \times {\bf R}^4$, where $V$ is the mass-shell for particles of mass $m$. The symplectic form then reads \begin{eqnarray} \Omega_C = - m dI_{\mu} \wedge dX^{\mu}. \end{eqnarray} Note that we employ a $(+---)$ signature convention. 
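The fibre minimisation in the spin example above can be verified numerically: scanning the displacement $d\chi$ in $ds^2_C$ at fixed $(d\theta, d\phi)$ should reproduce the round metric $\frac{k}{2}(d\theta^2 + \sin^2\theta\, d\phi^2)$. A quick sketch (the sample values of $k$, $d\theta$ and $d\phi$ are arbitrary):

```python
import numpy as np

# Scan the fibre displacement dchi in
#   ds^2_C = (k/2)(dtheta^2 + dphi^2 + dchi^2 - 2 cos(theta) dphi dchi);
# the minimum over dchi should equal the round S^2 line element
#   (k/2)(dtheta^2 + sin^2(theta) dphi^2).
k, dtheta, dphi = 2.0, 0.3, 0.5
dchi = np.linspace(-3.0, 3.0, 600001)
ok = True
for theta in (0.4, 1.1, 2.0):
    ds2 = 0.5 * k * (dtheta**2 + dphi**2 + dchi**2
                     - 2.0 * np.cos(theta) * dphi * dchi)
    expected = 0.5 * k * (dtheta**2 + np.sin(theta)**2 * dphi**2)
    ok = ok and abs(ds2.min() - expected) < 1e-8
```

The minimum is attained at $d\chi = \cos\theta\, d\phi$, which is the analytic result of completing the square.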
It is easy to see that the vector field $I^{\mu} \frac{\partial}{\partial X^{\mu}}$ corresponds to the null direction of $\Omega_C$--it is the Hamiltonian vector field generated by the constraint $I^2 - 1 = 0$. The parameter $u = I\cdot X$ is the gauge degree of freedom along the orbits of the constraint, while the variables $I_{\mu}, Y^{\mu} = X^{\mu} - u I^{\mu}$ are projected on the reduced state space $\Gamma_{red}$. The reduced symplectic form reads \begin{eqnarray} \Omega_{red} = - m dI_{\mu} \wedge dY^{\mu}. \end{eqnarray} We next identify a Lorentz-invariant Riemannian metric on $C$. For this purpose we need a Lorentz-invariant notion of distance between pairs $(I, X)$ and $(I', X')$. On the mass-shell $V$ there exists a natural Riemannian metric \begin{eqnarray} ds^2_V = - dI^{\mu} dI_{\mu} \geq 0, \end{eqnarray} which may be employed to define the distance between $I$ and $I'$. To define a (positive definite) distance between $X$ and $X'$, let us note that if $I = I'$, then $2I_{\mu} I_{\nu} - \eta_{\mu \nu}$ defines a Riemannian metric on ${\bf R}^4$, and the corresponding squared distance between $X$ and $X'$ equals \begin{eqnarray} [2I_{\mu} I_{\nu} - \eta_{\mu \nu}](X^{\mu} - X'^{\mu}) (X^{\nu} - X'^{\nu}). \label{dist} \end{eqnarray} If $I \neq I'$, we need to boost $X'$ to the frame where $I = I'$ and then employ the expression (\ref{dist}) for the distance. If we denote by $\Lambda$ the unique Lorentz transformation (boost) that takes $I'$ into $I$, we may define the squared distance \begin{eqnarray} [2I_{\mu} I_{\nu} - \eta_{\mu \nu}](X^{\mu} - \Lambda^{\mu}{}_{\rho} X'^{\rho}) (X^{\nu} - \Lambda^{\nu}{}_{\sigma} X'^{\sigma}). 
\end{eqnarray} The construction above suggests the following definition of a Lorentz-invariant metric on $C$: \begin{eqnarray} ds^2_C = - c_1 dI^{\mu} dI_{\mu} + \frac{c_2}{m}[2I_{\mu} I_{\nu} - \eta_{\mu \nu}] (dX^{\mu} -I^{\mu} X\cdot dI - dI^{\mu} I \cdot X)\nonumber \\ \times (dX^{\nu} -I^{\nu} X\cdot dI - dI^{\nu} I \cdot X), \label{metric} \end{eqnarray} where $c_1, c_2$ are arbitrary positive numbers and the mass $m$ appears in the denominator to make the metric dimensionless ($\hbar = c = 1$). We shall choose for convenience $c_1 = c_2 = \frac{1}{2}$. To construct a metric on $\Gamma_{red}$, we write Eq. (\ref{metric}) in terms of the coordinate $u$, together with the physical variables $I^{\mu}, Y^{\mu}$, \begin{eqnarray} ds^2_C = - \frac{1}{2} dI^{\mu} dI_{\mu} + \frac{m^2}{2}[I_{\mu} I_{\nu} - \eta_{\mu \nu}] dY^{\mu} dY^{\nu} + \frac{m^2}{2} du^2. \end{eqnarray} It is then easy to find the distance between neighbouring orbits by minimising over $du$, thus obtaining a Lorentz-invariant metric on the reduced state space \begin{eqnarray} ds^2_{red} = - \frac{1}{2} dI^{\mu} dI_{\mu} + m^2 [I_{\mu} I_{\nu} - \eta_{\mu \nu}] dY^{\mu} dY^{\nu} \label{metred}. \end{eqnarray} \paragraph{Particle in 2d} For simplicity we next consider the case of a particle in two-dimensional Minkowski spacetime. In terms of $I = I^1$ and $q = I^0 X^1 - I^1 X^0$, we may write $\Omega_{red}$ as \begin{eqnarray} \Omega_{red} = m d\xi \wedge dq, \end{eqnarray} where \begin{eqnarray} \xi = \int^I \frac{dx}{\sqrt{1+x^2}} = \log (I + \sqrt{1+I^2}). \end{eqnarray} The coordinates $\xi $ and $q$ are global on $\Gamma_{red}$. Moreover, they define a set of Cartesian coordinates for the metric (\ref{metred}), since \begin{eqnarray} ds^2_{red} = \frac{1}{2} d \xi^2 + \frac{1}{2 m^2} dq^2. \end{eqnarray} The metric and symplectic form on $\Gamma_{red} = {\bf R}^2$ correspond to those of the standard Gaussian coherent states. 
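The variable $\xi$ is the rapidity: $\xi = \log(I + \sqrt{1+I^2}) = \sinh^{-1} I$, so that $I^1 = \sinh\xi$, $I^0 = \cosh\xi$, and a boost of rapidity $\alpha$ acts as the additive shift $\xi \rightarrow \xi + \alpha$. A numerical check:

```python
import numpy as np

# xi = log(I + sqrt(1+I^2)) = arcsinh(I); a boost of rapidity alpha maps
# (I^0, I^1) -> (I^0 cosh a + I^1 sinh a, I^1 cosh a + I^0 sinh a),
# hence xi -> xi + alpha.
def xi_of(I):
    return np.log(I + np.sqrt(1.0 + I ** 2))

I = np.linspace(-5.0, 5.0, 101)
I0 = np.sqrt(1.0 + I ** 2)                       # mass-shell: I.I = 1

alpha = 0.7
I0_b = I0 * np.cosh(alpha) + I * np.sinh(alpha)  # boosted I^0
I_b = I * np.cosh(alpha) + I0 * np.sinh(alpha)   # boosted I^1
```

The additive behaviour of $\xi$ is what makes the flat metric $\frac{1}{2}d\xi^2 + \frac{1}{2m^2}dq^2$ boost-invariant.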
The path-integral then leads to the overlap kernel \begin{eqnarray} \langle \xi, q| \xi', q' \rangle = \exp \left( im \xi'q - i m \xi q' - \frac{1}{2} (\xi -\xi')^2 - \frac{m^2}{2} (q-q')^2 \right). \label{rpkernel} \end{eqnarray} Even though the definition of the parameters $\xi$ and $q$ involved the choice of a coordinate system, the coherent states constructed from the kernel (\ref{rpkernel}) do not depend on this choice. The reason is that $q$ has a covariant definition as $q = \epsilon_{\mu \nu}I^{\mu} X^{\nu}$, in terms of the alternating tensor $\epsilon_{\mu \nu}$ in two dimensions, while $\xi \rightarrow \xi + c $ under a Lorentz boost in two dimensions, leaving the kernel (\ref{rpkernel}) invariant up to a change of phase. We may pull back the kernel $\langle \xi, q| \xi', q' \rangle$ on the constraint surface and write it in terms of $x = X^1$, $t = X^0$ and $p = m I^1$, which are the natural variables in the canonical description of time evolution. We obtain \begin{eqnarray} \langle x, p, t| x', p', t' \rangle = \exp \left( \frac{i}{m} p' (\sqrt{m^2 + p^2} x - pt) - \frac{i}{m} p (\sqrt{m^2 + p'^2} x' - p' t') \right.\nonumber \\ \left. - \frac{1}{2} [\log\frac{p + \sqrt{m^2 + p^2}}{p' + \sqrt{m^2 + p'^2}}]^2 - \frac{1}{2} (\sqrt{m^2 + p^2} x - p t - \sqrt{m^2 + p'^2}x' + p' t')^2 \right). \label{ckernel} \end{eqnarray} In the non-relativistic limit $p \ll m$, the overlap kernel reduces to \begin{eqnarray} \langle x, p, t| x', p', t' \rangle = \exp \left( i p' (x - pt/m) - i p(x' - p't'/m) \nonumber \right. \\ \left. - \frac{1}{2 m^2} (p - p')^2 - \frac{m^2}{2} (x - pt/m - x' + p't'/m)^2 \right). \label{nonrel} \end{eqnarray} For $t = t'= 0 $ this kernel defines a Gaussian family of coherent states of the Weyl group, typical in the description of non-relativistic particles. However, for $t \neq t'$, $\langle x, p, t| x', p', t' \rangle $ does not correspond to matrix elements $\langle x, p|e^{-i\hat{H}(t - t')} | x', p' \rangle$ of any self-adjoint Hamiltonian $\hat{H}$. 
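The limit can be checked numerically by comparing the complex logarithms of the kernels (\ref{ckernel}) and (\ref{nonrel}) at small momenta. The sample phase-space values below are arbitrary; note that $x - x'$ must be of order $1/m$ for the overlap not to be vanishingly small:

```python
import numpy as np

# Complex log-overlaps of the covariant kernel and its non-relativistic
# limit; the test point has p, p' << m.
def log_cov(x, p, t, xq, pq, tq, m):
    E, Eq = np.sqrt(m**2 + p**2), np.sqrt(m**2 + pq**2)
    phase = (pq * (E * x - p * t) - p * (Eq * xq - pq * tq)) / m
    real = -0.5 * np.log((p + E) / (pq + Eq))**2 \
           - 0.5 * (E * x - p * t - Eq * xq + pq * tq)**2
    return real + 1j * phase

def log_nr(x, p, t, xq, pq, tq, m):
    phase = pq * (x - p * t / m) - p * (xq - pq * tq / m)
    real = -(p - pq)**2 / (2.0 * m**2) \
           - 0.5 * m**2 * (x - p * t / m - xq + pq * tq / m)**2
    return real + 1j * phase

m = 100.0
args = (0.003, 1.2, 0.5, -0.002, 0.8, 0.1)   # x, p, t, x', p', t'
diff = abs(log_cov(*args, m) - log_nr(*args, m))
```

The two log-overlaps agree up to corrections of relative order $p^2/m^2$.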
The coherent states (\ref{rpkernel}) are {\em instantaneous}: they are properly defined only on the reduced state space, in which the Hamiltonian vanishes due to the constraint. For this reason the time-dependence in the arguments of (\ref{ckernel}) reflects solely the relation of the parameters on the reduced state space to a coordinate that is usually considered to play the role of time--see \cite{Sav99, SavAn00} for an interpretation. The coherent states are covariant under the Poincar\'e group; for any element $g$ of the Poincar\'e group, one may define the unitary operator $\hat{U}(g)$ as \begin{eqnarray} \hat{U}(g) |z \rangle = | g \cdot z \rangle, \end{eqnarray} where $z \in \Gamma_{red}$ and $g\cdot z$ denotes the symplectic action of the Poincar\'e group on $\Gamma_{red}$. For the one-parameter subgroup of time translations (in the $0$-direction) in particular, we obtain the transformation \begin{eqnarray} |\xi, q \rangle \rightarrow |\xi, q + \sinh \xi \, s \rangle, \end{eqnarray} which can be easily checked to be unitary. The matrix elements of the Hamiltonian in the coherent state basis have no relation to the pull-back of the coherent state overlap kernel on $C$, Eq. (\ref{ckernel}). The parameter $t = X^0$ cannot be identified with the time parameter of Schr\"odinger's equation, a fact that is also responsible for the non-definability of a covariant position operator for relativistic particles \cite{AnSav03}. \paragraph{Minisuperspace models} Minisuperspace models are quantum cosmological models characterised by a canonical state space ${\bf R}^{2n}$, with configuration variables $q^a$ and conjugate momenta $p_a$, and by a constraint of the form \begin{eqnarray} \frac{1}{2} g^{ab}(q) p_a p_b + V(q) = 0, \end{eqnarray} in terms of a metric $g$ on the configuration space with signature $-++ \ldots +$. In many respects their structure is similar to that of the relativistic particle. 
There exists, however, no spacetime covariance argument to predetermine the form of the metric. We have applied the present method of quantisation to minisuperspace models in Ref. \cite{AnSav04}, where we studied in detail a Robertson-Walker universe with a scalar field. \subsection{Divergent points and orbifold structure} In many systems with first-class constraints, there may exist exceptional points of the reduced state space that correspond to orbits of lower dimensionality than the generic orbit. This is usually the case when specific subsets of the constraint surface are invariant under some (or even all) of the first-class constraints. The symplectic structure at such points is typically divergent, with the result that the reduced state space is not well defined as a classical symplectic manifold. This is, in fact, one of the reasons that Dirac quantisation is preferred over RSSQ--one does not have to deal with such singular orbits in the Dirac method. We argue here that such divergent points do not pose any problem in the quantisation scheme through coherent states that is proposed here. The orbits corresponding to singular points in the reduced state space are absorbed in a redefinition of the reduced state space that is imposed by the consideration of the Riemannian structure. We shall consider for simplicity the case of a system with a single first-class constraint. Each orbit on the constraint surface $C$ is characterised by coordinates $\zeta^i$ that are constant along the orbit--and thus project to the reduced state space--and a single gauge coordinate $\lambda$ distinguishing points along the orbit. Let us assume that there exists a specific point on $C$, determined by the coordinates $(\zeta^i_0,\lambda_0)$, that is left invariant under the action of the constraint. 
The corresponding orbit $\gamma_0$ then consists only of the single point $(\zeta^i_0, \lambda_0)$, while a generic orbit is characterised by varying $\lambda$, namely it corresponds to a line on $C$. There exist other orbits $\gamma'_0$ characterised by the same parameters $\zeta^i_0$ as $\gamma_0$: they correspond to values of $\lambda > \lambda_0$ and $\lambda < \lambda_0$. There will be two such orbits if $\lambda$ runs over the full real axis, and only one if $\lambda$ is a periodic variable. To compute the metric on the reduced state space, one has to calculate the distance function between different orbits. The distance between $\gamma_0$ and the other orbits $\gamma_0'$ with the same value of $\zeta^i$ is the infimum of the distance of the points of $\gamma_0'$ from the point $(\zeta^i_0, \lambda_0)$. The infimum is always approached as $\lambda \rightarrow \lambda_0$, implying that the distance between $\gamma_0$ and $\gamma_0'$ will be equal to zero. Consequently, the Wiener process constructed from the metric on $\Gamma_{red}$ will fail to distinguish between $\gamma_0$ and $\gamma'_0$. The path-integration will, therefore, treat all orbits with the same value of $\zeta^i$ as a single point. The coherent states, therefore, will only be parameterised by the regular points of $\Gamma_{red}$. Recalling the relation between the state space metric and the uncertainty relation, one may say that the quantum uncertainties essentially wash out any divergences related to {\em accidental} symmetries of the classical constraint surface. The argument we provided here is very general and immediately generalises to systems with more than one constraint function. For a detailed example (a Robertson-Walker minisuperspace model) the reader is referred to \cite{AnSav04}. 
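A toy numerical illustration of this collapse (the orbits below, a fixed point and a ray in the $(\zeta,\lambda)$ plane, are illustrative assumptions):

```python
import numpy as np

# gamma_0 is the one-point orbit {(0, 0)}; gamma_0' is the ray
# {(0, lam): lam > 0}, a distinct orbit with the same zeta label.
lam = np.linspace(1e-6, 10.0, 100001)
gamma0 = np.array([0.0, 0.0])

# infimum distance between gamma_0 and the ray vanishes as lam -> 0+
d_singular = np.min(np.hypot(0.0 - gamma0[0], lam - gamma0[1]))

# a generic orbit at zeta = 1 keeps a finite distance from gamma_0
d_generic = np.min(np.hypot(1.0 - gamma0[0], lam - gamma0[1]))
```

The vanishing infimum means $\gamma_0$ and $\gamma_0'$ are identified in the reduced state space, while generic orbits remain distinguishable.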
\section{Conclusions} We have presented here a variation of the reduced-state-space quantisation procedure that is based on the coherent-state path-integral quantisation developed by Klauder. The key point of our construction is that a metric $ds^2_{red}$ on the reduced state space may be constructed purely geometrically from a metric on the unconstrained state space $\Gamma$. The metric $ds^2_{red}$ may then be employed for the definition of a path integral for the degrees of freedom in the reduced state space. The advantages of this method are the following: \vspace{0.3cm} i. While the method incorporates the constraints at the classical level, it can be easily related to Dirac quantisation. The metric on the reduced state space, constructed uniquely from a metric on $\Gamma$ (or on the constraint surface $C$), serves to define a map from a kinematical Hilbert space (related to the degrees of freedom on $\Gamma$) to a physical Hilbert space that corresponds to the true degrees of freedom. One may therefore enjoy all benefits of Dirac quantisation, for example the existence of quantum operators for kinematical variables, without any of its drawbacks. \vspace{0.25cm} ii. The singular points of the reduced state space are `smeared out' in the quantum theory, and do not appear as parameters in the coherent states. \vspace{0.25cm} iii. Finally, the method is purely geometrical; the basic ingredient is the identification of the distance between constraint orbits. As such it is particularly suitable for theories in which the state space involves geometric or combinatorial objects: for example, approaches to canonical quantum gravity that involve discretisation. \section*{ Acknowledgements} This work was supported by a Marie Curie Reintegration Grant of the European Commission and a research grant from the Empirikion Foundation. I would also like to thank N. Savvidou for many important discussions and remarks.
astro-ph/0411733
\section{Introduction} In Rocca-Volmerange et al. (2004), hereafter RV04, we showed that powerful radio sources are hosted by the most massive galaxies. Based on measurements of stellar masses with robust evolution models, the maximum mass is limited by the fragmentation limit at $\simeq$10$^{12}$ M$_{\odot}$~ (Rees \& Ostriker, 1977, Silk, 1977), clarifying the interpretation of the so-called $K$-$z$~relation in the $K$-band Hubble diagram. Moreover, this diagram constrains the galaxy types: only host galaxies of elliptical type fit the radio galaxy distribution from $z$=0 to 4. However, at $z$=4 the time-scale of mass accumulation becomes so short that forming massive galaxies requires dissipative time scales of the same order as gravitational time scales. Typical UV-to-radio SEDs of 3CR galaxies were compiled from observations of the ISO, IRAS and IRAM observatories by Andr\'eani et al. (2002). From the FIR emission, the authors conclude that there is emission from both a dusty torus and a larger-scale (cooler) dust distribution in the host galaxy. We propose a more detailed view of the dust temperatures through a multi-component approach including stars and jet. Statistically well identified by their high radio power, 3CR galaxies are also hosted by massive stellar populations, contributing to the optical and infrared emissions. In the near-infrared, the stellar emission of radio sources is often similar to that of massive elliptical galaxies. In the mid-infrared, the recent analysis of a significant sample of early-type galaxies observed with ISOCAM (Xilouris et al, 2004) shows that the emission is dominated by the PAH feature at 6.7$\mu$m, an excess of hot dust at 15$\mu$m and a cold component at $\sim$30-40K. The comparison of the FIR emission of elliptical galaxies with that of radio sources will give specific information on the AGN contribution. 
Using templates from the code P\'EGASE (www.iap.fr/pegase) we are able to predict the stellar emission at all galaxy ages, while the synchrotron radiation contributes to the SEDs when the AGN is active. Another objective is to estimate the cooling time-scale at the earliest phases of galaxy formation. Instead of using the classical cooling function of clouds of hydrogen and helium only, each dissipative source (stars, gas, dust and AGN) has to be considered individually during galaxy evolution. In Section 2, we compare the various components to the averaged observed SED of 3CR radio galaxies. Section 3 presents the best fit of the dust emission with the sum of two main blackbody laws; the hot one is an intense source of dissipation. Section 4 predicts the dissipation rates from the various sources (synchrotron, stars, gas and dust), to be compared to the dynamical time scale. The last section gathers the discussion and conclusion. \section{Stellar and synchrotron emissions} The striking similarity of the radio-to-UV SEDs of 3CR radio galaxies and quasars (see Figs. 1 to 4 in Andr\'eani et al, 2002) suggests similar properties of the various components of these complex systems. All the spectra present a typical gap from the radio emission for $\lambda < $1mm, suggesting similar dust properties in 3CR galaxies. \subsection{Stellar and nebular components} Radio galaxies are embedded in massive elliptical galaxies, even at high redshifts (van Breugel et al, 1998, Lacy et al, 2000, Pentericci et al, 2001): from high luminosities L$\simeq$ 3 to 7 L$_*$ (Papovich et al, 2001), they may reach up to 10 L$_*$ (McLure et al, 2004). The 3CR radio galaxies are the most powerful galaxies in the $K$-band Hubble diagram, bounding the distribution by the so-called $K$-$z$~sequence. In RV04, we checked with the evolution model P\'EGASE that only elliptical scenarios of 10$^{12}$M$_{\odot}$~ baryonic mass are able to explain the distribution of the brightest radio galaxies in the Hubble diagram. 
However, strong emission lines are typical of massive radio galaxies, while elliptical galaxies show no emission lines. To confirm our mass estimates, we also include in our modelling the nebular emission of gas ionized by the AGN component (Moy \& Rocca-Volmerange, 2002), computed with the code CLOUDY (Ferland, 1996). In the present paper (Fig. \ref{figure:Kz30}), the observed radio galaxies in the Hubble $K$~ diagram are well fitted by 10$^{12}$M$_{\odot}$ elliptical models and AGN emission lines. The emission line widths are assumed to be 10$\AA$~ at $z$=0. The redshift of elliptical galaxy formation is $z_{for}$=30, instead of $z_{for}$=10 in RV04. Our conclusions on galaxy formation limited by the fragmentation limit 10$^{12}$M$_{\odot}$~remain unchanged, allowing us to adopt this mass for 3CR powerful radio galaxy hosts. The NIR predictions from 1 to 5 $\mu$m~ are strongly dependent on the modelling of cold stellar populations. The effective temperatures of the giant and asymptotic giant branches, as well as their population density, are crucial but not accurately known. We check the P\'EGASE elliptical modelling of the two populations by comparing predictions to the observational template (see Fig.2 in Fioc \& Rocca-Volmerange, 1996). However, the separation of the stellar and hot dust components will be less robust between $\lambda$=3 and 5$\mu$m than for $\lambda >$ 5$\mu$m. Most scenarios of galaxy evolution take into account the extinction by dust, computed with a transfer model in ellipsoidal or slab geometries (Fioc \& Rocca-Volmerange, 1997). However, in the elliptical scenario galactic winds expel gas and dust at 1 Gyr, so that at $z$=0 our modelling only simulates the stellar emission. In the following, we shall adopt the SED model of elliptical galaxies for predictions of the underlying populations of radio sources. 
\begin{figure*} \centering \includegraphics[width=13cm]{rocca1.ps} \caption{The distribution of radio (squares) and field (diamonds, crosses) galaxies in the $K$-band Hubble diagram, compared to the predicted sequence of elliptical galaxies of mass 10$^{12}$M$_{\odot}$; the adopted redshift of formation is $z_{for}$=30. Sequences with other masses and redshifts of formation were already tested in Rocca-Volmerange et al. 2004. The main AGN emission lines appearing in the $K$~band for 0$< z <$ 30 (H$_{\alpha}$, [OIII]5007$\AA$~ + H$_{\beta}$, [OII]3727$\AA$~and Ly$_{\alpha}$1215$\AA$) are also plotted with widths of 10$\AA$. Previous conclusions are unchanged: 10$^{12}$M$_{\odot}$~ is the upper mass limit of galaxies and also the fragmentation limit predicted by models} \label{figure:Kz30} \end{figure*} \subsection{Synchrotron radiation} The synchrotron radiation in radio galaxies follows a power law F$_{\nu} \propto \nu^{\beta}$. Observations in the radio domain are well fitted by $\beta$ = -1.04 (Andr\'eani et al., 2002). The energy distribution and intensity of the dust emission emerge from the radio-to-UV SED template of 3CR radio galaxies: by comparing the stellar and synchrotron emissions to the composite spectrum of averaged 3CR galaxy SEDs, we discover a large unresolved bump from $\lambda \simeq$ 1 to 500 $\mu$m (log($\nu_{Hz}$) $\simeq$12 to 15). The average redshift $z$=0.5 of the observational sample has been corrected to the rest frame. The comparison is presented in Fig.\ref{figure:composantes}. \begin{figure*} \centering \includegraphics[width=13cm]{rocca2.ps} \caption{The observed energy distribution of 3CR galaxies from Andr\'eani et al, 2002 (full blue line) over the large wavelength range ($\lambda$ = 10$^{3}$~to 10$^{10}$ \AA) is plotted with the stellar SED of the elliptical template (dotted green line) from P\'EGASE and the synchrotron power law (full red line). 
The synchrotron radiation might contribute to the X-ray emission (see the discussion of the so-called $EXOs$~ sources below). } \label{figure:composantes} \end{figure*} \section{Main hot and cold dust components} The various grains found in the interstellar medium (Desert et al., 1990) contribute at different levels to the total FIR emission of radio sources. The global FIR emission is described as the superposition of blackbody (BB) laws depending on the distributions of grain temperature, size and nature. We do not consider the broad PAH features, generally regarded as signatures of star formation: in evolved elliptical galaxies, the star formation activity is null or very faint. The first result of the analysis is that a single BB temperature is unable to reproduce the IR bump of dust deduced after the stellar and synchrotron corrections. We then propose to limit the decomposition to only two BB laws. The second result is the excellent fit for two extreme temperatures: the hot component corresponds to a BB temperature of 340$K \pm$ 50$K$, the cold component to a BB temperature of 40$K \pm$ 16$K$. Fig. \ref{figure:temperatures} presents the stellar and synchrotron fluxes F($\lambda$) as a function of the wavelength $\lambda$~ and the two BB laws compared to the observed bump. The hot wavelength peak derived from the Wien law at 340K is 8.5 $\mu$m. Fluxes at mid-height correspond to an accuracy of $\pm$50K. The hot component is an extreme source of energy dissipation. A similar temperature was observed in the far-infrared continua of quasars (Wilkes et al. 1998), but quasars do not allow one to disentangle the contribution of the other components, in particular the stellar one. Also found in radio galaxies, this hot component may be attributed to the active galactic nucleus (AGN); however, the hot dust component at 260 K found in early-type normal galaxies from ISOCAM data (Ferrari et al., 2002) could be of similar nature, even if it is strikingly less luminous. 
\begin{figure*} \centering \includegraphics[width=13cm]{rocca3.ps} \caption{The three main fluxes contributing to the radiative energy of the 3CR radio galaxy sample are the elliptical SED (green line), the synchrotron power law (red line) and the two blackbody laws (pink lines) of 340K$\pm$50K and 40K$\pm$16K respectively. The dust emission (blue lines) is derived from observations after subtraction of the stellar + synchrotron emissions. For $\lambda >$ 3$\mu$m, the fit with the sum of two blackbody laws is robust (full line), while for $\lambda <$ 3$\mu$m, the peak at the temperature $\simeq$~900K is less accurate (dashed line).} \label{figure:temperatures} \end{figure*} The cold wavelength peak derived from the Wien law at 40K is 72$\mu$m. Fluxes at mid-height correspond to an accuracy of $\pm$16K. This cold component may be compared to the already known 20K-60K temperatures found by the {\it ISO} and {\it SPITZER} telescopes in dusty evolved galaxies (Blain et al, 2004, Xilouris et al, 2004). Qualitatively, this two-component result is confirmed by the splendid IRAC/SPITZER image of the nearby radio galaxy Centaurus A (Fazio et al, 2004). A third emission peak at $\simeq$~3 $\mu$m is suggested by the data (dashed line on Fig. \ref{figure:temperatures}). The disentangling of the cold stellar and very hot ($\simeq$900K) dust emission is less robust and of minor energetic importance. \section{The balance of energy dissipation rates} The classical cooling function predicted for initial clouds of hydrogen and helium (Rees \& Ostriker, 1977 and references therein) becomes insufficient, and may be wrong, when intense cooling sources are activated during galaxy formation. While the stellar and cold dust emissions, in particular in the case of rapid metal enrichment, depend on the star formation rate, the processes related to the AGN are dissipative only when the AGN is active. These cooling processes efficiently contribute to decreasing the cooling time-scale t$_{cool}$. 
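The two peak wavelengths quoted above follow directly from Wien's displacement law, $\lambda_{peak} = b/T$ with $b \simeq 2898\,\mu$m\,K. A minimal numerical check (an illustrative script, not part of the original analysis):

```python
# Wien's displacement law: the wavelength at which a blackbody of
# temperature T peaks. Used here only to check the quoted peak positions
# of the hot (340 K) and cold (40 K) dust components.
WIEN_B_UM_K = 2898.0  # Wien displacement constant in micron * kelvin

def peak_wavelength_um(temperature_k):
    """Peak wavelength (microns) of a blackbody at the given temperature (K)."""
    return WIEN_B_UM_K / temperature_k

print(peak_wavelength_um(340.0))  # hot component: about 8.5 micron
print(peak_wavelength_um(40.0))   # cold component: about 72 micron
```

The $\pm$50K and $\pm$16K temperature uncertainties translate into peak-wavelength ranges of roughly 7.4-10.0 $\mu$m and 52-121 $\mu$m respectively.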
We propose to estimate hereafter the radiative energy balance during the star formation evolution, when the AGN is active. We integrated all the previously considered fluxes (stellar SEDs, BB laws, power law) over their largest wavelength domains of emission. The dissipative energy rate $dE/dt$~ in erg s$^{-1}$~ is then computed for all sources. For simplicity, we separate stellar and AGN sources. Stellar sources evolve on the galaxy time-scale (14 Gyr), with passive and active evolution, while the AGN, assumed to become active at an arbitrary epoch, has a short lifetime ($<$10$^{8}$~ years). \begin{itemize} \item The supernova rate is predicted at all ages from the evolution scenario of elliptical galaxies. Adopting the average luminosity of 10$^{42.5}$erg s$^{-1}$~from the Nomoto models, the radiative energy rate is a minor component of the global emission. In fact, the largest fraction of the explosion energy heats the interstellar medium, up to the escape velocity. For more detailed predictions, the neutrinos could be taken into account, so that our results are a lower limit. \item From the gas ionized by massive stars, the hydrogen and oxygen lines are the strongest sources of dissipation through the main lines (Ly1215\AA, H$\alpha$, H$\beta$, [OIII]5007\AA, [OII]3727\AA). The evolution of the number of Lyman continuum photons N$_{Lyc}$~is predicted by P\'EGASE and the emission lines are those of a classical HII region at the electronic temperature 8000K. The line intensities are then derived by taking into account the metallicity evolution of the gas. \item Huge envelopes of gas ionised by the AGN are observed at all $z$~ in the most powerful radio galaxies, in particular around the most distant ones (van Ojik et al., 1997). The emission lines due to AGNs are computed with the code CLOUDY, assuming solar metallicity, a photo-ionisation parameter log U = -2 and a density of 100 cm$^{-3}$. 
\item The dissipative energy of the stellar emission is computed at all ages from the bolometric luminosity as a function of time, a standard output of the code P\'EGASE. \item The synchrotron radiation, mainly efficient in active AGNs, contributes a fraction of the total radiative energy. The power law is integrated over the whole energy range from X-rays to radio. \item The hot and cold dust components, as found in the previous section, dissipate energy at rates $dE/dt$~ deduced from the integration of the blackbody laws over their respective wavelength domains. \end{itemize} We plot all the dissipation rates in Fig. \ref{figure:bilan}, adopting an AGN active at 1 Gyr. The hot dust component is luminous when the AGN is active, while the cold component, also found in elliptical galaxies, is assumed to follow the evolution of the host galaxy metallicity. As a result, when the AGN is active, the most important source of dissipation is the hot dust component at $\simeq$340K. \section{Discussion and conclusion} Based on SEDs of 3CR radio galaxies observed from the X-ray to the radio domain, we identify the dust emission by subtraction of the elliptical galaxy template and of the synchrotron power law. Two main dust BB emissions (and a possible minor hotter one) are revealed in the $F(\lambda)$-$\lambda$~diagram, while the classical diagrams $F(\nu)$-$\lambda$~ (or $\nu$F($\nu$)-$\lambda$) (Haas et al, 1998) are less suited to the component separation. The hot dust emission peaks at 8.5$\pm$3$\mu$m with a blackbody law of 340K$\pm$50K. This component is in agreement with the standard AGN molecular torus model (Pier \& Krolik, 1993), embedded in a cooler component. By comparison with the IRAC/Spitzer image of Centaurus A (Fazio et al, 2004), the bright hot grain structure is not in the inner core but within a larger structure. 
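The dominance of the hot component can also be seen from a simple scaling argument: integrating a blackbody law over all wavelengths gives a radiated power per unit emitting area of $\sigma T^4$ (Stefan-Boltzmann), so per unit area the 340K dust outradiates the 40K dust by a factor $(340/40)^4 \simeq 5200$. A sketch of this scaling (illustrative only; the actual emitting areas of the two components, which also enter the balance, are not constrained here):

```python
# Stefan-Boltzmann scaling of the two dust components: the integrated
# blackbody emission per unit emitting area is sigma * T^4, so the ratio
# of the per-area dissipation rates depends only on the two temperatures.
SIGMA_SB = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def bb_power_per_area(temperature_k):
    """Total radiated power per unit area (W/m^2) of a blackbody."""
    return SIGMA_SB * temperature_k ** 4

ratio = bb_power_per_area(340.0) / bb_power_per_area(40.0)
print(f"hot/cold per-area dissipation ratio: {ratio:.0f}")
```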
Many speculations on the origin of the hot grains are possible; we only conclude on the presence of a large amount of dust photo-heated by the AGN. The uncertainty of the hot BB temperature at the maximum peak includes errors due to the calibration and to the modelling of the stellar emission. From Fig \ref{figure:temperatures}, the stellar emission is negligible at about 8 $\mu$m. The large error bar of $\pm$50K means that the resolved signatures of Polycyclic Aromatic Hydrocarbons (PAHs) at 7.7$\mu$m and 8.6$\mu$m respectively could be included in this peak. The most important uncertainty concerns the 3 $\mu$m zone, highly sensitive to the subtraction. Moreover, from the observational data (Andr\'eani et al, 2002), this zone is highly dispersed. The authors present the extreme value for quasars, which would indicate a strong emission peak. For radio galaxies, the $\simeq$~3 $\mu$m peak is significantly less energetic. The main difference with previous studies on active galaxies concerns star formation. While starburst activity is preferentially sought in the Ultra Luminous Infra Red Galaxies (ULIRGs) (Genzel \& Cesarsky, 2000), star formation is a minor component in radio sources, which are dominated by the evolved population of elliptical galaxies. However, the 8.5$\pm$3$\mu$m peak is also found in the ULIRGs. Generally attributed to PAHs, that peak is not thermal but due to the episodic excitation of PAHs. It is still premature to conclude on the link between the 8.5$\mu$m peak discovered in strong ULIRGs and the 8.5$\mu$m peak in powerful radio galaxies, but the similarity of the two peaks deserves further analyses of hidden AGN in starbursts and/or of star formation in AGN environments. 
\begin{figure*} \centering \includegraphics[width=13cm]{rocca4.ps} \caption{Various stellar (blue) dissipative rates $dE/dt$~ in erg.s$^{-1}$~as a function of time: stellar continuum (dashed light blue line), gas ionized by massive stars (dash-dot blue line), cold dust fitted on metals (dashed blue line). Dissipative rates due to the AGN formed at 1 Gyr (pink/red): gas ionized by the AGN (dotted clear pink line), synchrotron (dashed-double dot pink line), hot dust (dashed-dot red line). The total dust emission is the full pink line and the sum of all rates is the full green line. The radiative energy of supernova explosions (neutrinos excepted), too faint, is not plotted.} \label{figure:bilan} \end{figure*} The cool component at 40K is at about the same temperature as in elliptical galaxies (Xilouris et al, 2004), so that its origin is not necessarily linked to the presence of the AGN but to dusty stellar populations (low-mass AGB stars or others). Moreover, because stellar winds in elliptical galaxies must eject gas and dust, we need to justify the presence of dust. Would it be possible that stellar winds expel the interstellar gas, while the denser and more embedded dust component is retained in the environment of the galaxy centre? Grains, more massive than the gas particles, could be attracted towards the galaxy centre more rapidly than the gaseous component; the process may also depend on the value of the angular momentum. The time scale of grain infall towards the centre would then be shorter than the time scale of interstellar gas heating. Are these results on 3CR radio galaxies applicable to all AGNs? 3CR radio galaxies are the most powerful and massive galaxies hosting super-massive black holes (McLure \& Dunlop, 2002). If the dust emission, in particular the hot dust, is due to a process of dust collapsing towards the centre, the origin of the dust is not the AGN, which only heats the grains and makes them luminous. 
It could also be linked to a star formation process, if massive stars are totally embedded in dusty clouds, since no evidence of starburst signatures is revealed in the optical part of the SED. Whatever the origin of the energetic photons heating the hot dust structure, the energy released by this structure is considerable. An approximate dynamical time scale $t_{grav}$ = 1/(G$\rho$)$^{1/2}$ $\simeq$ 600 Myr becomes comparable to the cooling time scale during the AGN phase. The total energy dissipated by the 340K emission, integrated over the AGN duration (10$^{8}$ yr), gives a time scale $t_{cool} >$ 400 Myr. Our evaluations show that $t_{grav}$~ and $t_{cool}$~are quite comparable and that the dissipation may regulate the self-gravitational collapse models of galaxy formation (Rees \& Ostriker, 1977). The dissipation factor may be lower at higher redshifts, when the metal and dust masses are significantly lower. However, in the case of a massive initial gas reservoir, the collapse is extremely rapid, the metal enrichment follows and finally the grain emission dominates. Another source of uncertainty is the energy released by neutrinos from supernova explosions, so that the presence of the active nucleus might not be necessary to dissipate energy on a time scale comparable to the gravitational one. More detailed observations from the ISO archives and the rapidly increasing data set from the SPITZER satellite are required. \begin{acknowledgements} We thank Nick Seymour for reading the manuscript. \end{acknowledgements}
math-ph/0411069
\section{Introduction}\label{sec:int} We consider systems composed of electrons and nuclei, i.e., point particles which interact via the Coulomb interaction, with the negatively charged particles being fermions. Due to their important role in describing nature, such systems have been intensively investigated. In particular, the thermodynamic limit, i.e., the limit in which the system becomes large, has been studied extensively in \cite{lieleb:the}. In that work, it was shown that the thermodynamic limit exists for thermodynamic quantities, such as the pressure and the free energy density, provided that they are defined using Dirichlet boundary conditions. Furthermore, it was shown that these quantities possess the properties which are expected from phenomenological thermodynamics. In order to define the canonical and the grand canonical partition function, one has to confine the particles of the system to a bounded set $\Lambda \subset {\mathord{\mathbb R}}^3$, which we choose to be open. For the confined system to be well defined its Hamiltonian should be self-adjoint. This requires that one imposes suitable boundary conditions on the boundary of $\Lambda$. For each particular choice of boundary conditions one obtains a canonical and a grand canonical partition function. In order to study the thermodynamic limit one considers a sequence $\{ \Lambda_l \}$ of bounded open domains such that the volume of $\Lambda_l$ tends to infinity as $l \to \infty$. For systems with Dirichlet boundary conditions it was shown in \cite{lieleb:the} that the thermodynamic limits of the canonical and grand canonical partition functions exist for a large class of limiting sequences $\{ \Lambda_l \}$. Moreover, the limit is independent of the particular sequence. In this work we prove that, indeed, the same limit is obtained for systems with Neumann, periodic, or reflecting boundary conditions. 
We prove our result for limiting sequences which are obtained by scaling a bounded open set whose boundary is smooth, except for isolated edges and corners. This class of limiting sequences is smaller than the class for which the thermodynamic limit for systems with Dirichlet boundary conditions has been shown to exist. We want to point out that this restriction is only partially technical. For instance, there exist sequences of domains for which the thermodynamic limit of the ground state energy for Dirichlet boundary conditions exists, whereas for Neumann boundary conditions the ground state energy diverges to $- \infty$. Although such sequences are somewhat pathological, this demonstrates that the independence of boundary conditions for systems composed of electrons and nuclei cannot be considered trivial. We will also comment on possible more general classes of limiting sequences to which our proof applies. For notational simplicity, we only state and prove our results for systems composed of a single species of negatively charged fermions and a single species of positively charged particles being bosons. The results as well as their proofs generalize to multicomponent systems in a straightforward way. We state the main result and present its proof both for zero temperature and for positive temperature. Although the latter implies the former, we thereby obtain an independent and technically easier proof for the zero-temperature case. To prove the independence of the boundary conditions we use a sliding technique, which was introduced in \cite{conlieyau:the} and refined in \cite{grasch:ont}. Thereby, one decomposes space into simplices. By sliding and rotating the simplices one obtains a lower bound for the Hamiltonian of a large system in terms of Hamiltonians defined on the smaller simplices. Simplices which lie in the interior of the large system have Dirichlet boundary conditions. 
Simplices on the boundary, i.e., simplices which intersect the boundary of the large system, are subject to mixed boundary conditions. Using the fact that the many-body Coulomb potential can be bounded from below by a sum of one-body potentials \cite{lieyau:the}, we then show that the thermodynamic quantities in the boundary simplices are bounded. In the thermodynamic limit the sum of all the boundary contributions is proportional to the surface area. This is negligible compared to the bulk contribution, which is proportional to the volume. We want to point out that the independence of boundary conditions has been studied for systems with hard core interactions (see \cite{rob:sta}, \cite{rob:the}, and references given therein). The paper is organized as follows. In Section \ref{sec:modres} we introduce the model and state the results. In Section \ref{sec:the} we present the proofs. \section{Model and Statement of Results}\label{sec:modres} We shall first recall the definition of Dirichlet and Neumann boundary conditions \cite{reesim:ana}. Let $\Lambda$ be a bounded open set in ${\mathord{\mathbb R}}^3$. The Dirichlet Laplacian for $\Lambda$, $- \Delta^{\rm D}_{\Lambda}$, is the unique self-adjoint operator on $L^2(\Lambda)$ whose quadratic form is the closure of the form \begin{displaymath} \phi \mapsto \int_{\Lambda} | \nabla \phi|^2 \, dx \end{displaymath} with domain $C_0^{\infty}(\Lambda)$. The Neumann Laplacian for $\Lambda$, $-\Delta^{\rm N}_{\Lambda}$, is the unique self-adjoint operator on $L^2(\Lambda)$ whose quadratic form is \begin{displaymath} \phi \mapsto \int_{\Lambda} | \nabla \phi |^2 \, dx \end{displaymath} with domain $H^1(\Lambda) = \{ \ f \in L^2(\Lambda) \ | \ \nabla f \in L^2(\Lambda) \ (\mathrm{in\ the\ sense\ of\ distributions}) \ \}$. The model consists of electrons ($\hbar^2/2 = 1$, $m=1$, $|e|=1$) and nuclei with mass $M$ and charge $z$. We assume $z$ to be rational. The electrons are fermions, while the statistics of the nuclei is irrelevant. 
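The difference between the two Laplacians defined above is already visible in one dimension. As an illustrative worked example (not taken from the paper), for the interval $\Lambda = (0,L)$ one has

```latex
% Dirichlet: eigenfunctions sin(n\pi x/L), n \geq 1;
% Neumann:  eigenfunctions cos(n\pi x/L), n \geq 0.
\sigma\bigl(-\Delta^{\rm D}_{(0,L)}\bigr)
   = \Bigl\{ \bigl(\tfrac{n\pi}{L}\bigr)^{2} \ \Big| \ n \geq 1 \Bigr\} \, ,
\qquad
\sigma\bigl(-\Delta^{\rm N}_{(0,L)}\bigr)
   = \Bigl\{ \bigl(\tfrac{n\pi}{L}\bigr)^{2} \ \Big| \ n \geq 0 \Bigr\} \, .
```

Since the Neumann form domain $H^1(\Lambda)$ contains the Dirichlet one, the forms agree on the smaller domain and hence $-\Delta^{\rm N}_{\Lambda} \leq -\Delta^{\rm D}_{\Lambda}$ in the sense of forms; by the min-max principle the eigenvalues then satisfy $\lambda_n^{\rm N} \leq \lambda_n^{\rm D}$ for every $n$. In the example the spectra differ only by the single eigenvalue $0$, a boundary effect of the kind the thermodynamic limit is expected to wash out.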
Let $\Lambda \subset {\mathord{\mathbb R}}^3$ be an open set. The Hilbert space $\mathcal{H}_{\boldsymbol{N},\Lambda}$, with $\boldsymbol{N}=(n,k) \in {\mathord{\mathbb N}}^2$, for $n$ electrons and $k$ nuclei is the subspace of $L^2(\Lambda \times {\mathord{\mathbb Z}}_2)^{\otimes n} \otimes L^2(\Lambda)^{\otimes k}$ carrying the permutation symmetry appropriate to the given statistics. The Hamiltonian, acting on $\mathcal{H}_{\boldsymbol{N},\Lambda}$, is \begin{eqnarray*} H^B_{\boldsymbol{N},\Lambda} & = & - \sum_{j=1}^n \Delta^B_{\Lambda, x_j} - \frac{1}{M} \sum_{j=1}^{k} \Delta^{B}_{\Lambda, R_j} - z \sum_{i=1}^n \sum_{j=1}^{k} \frac{1}{|x_i - R_j |} \\ & & + \sum_{1 \leq i < j \leq n} \frac{1}{|x_i - x_j |} + z^2 \sum_{1 \leq i < j \leq k} \frac{1}{|R_i - R_j |} \; , \end{eqnarray*} where the electron coordinates are $x_i$, the nuclear coordinates are $R_i$, and $B$ denotes the type of boundary conditions: ${\rm N}$, ${\rm D}$, and ${\rm M}$ stand for Neumann, Dirichlet, and mixed boundary conditions, respectively. Variable particle numbers are accounted for by means of the direct sum \begin{eqnarray} \mathcal{H}_{\Lambda} = \bigoplus_{\boldsymbol{N}} \mathcal{H}_{\boldsymbol{N}, \Lambda} \label{sec:int:for1} \ \ , \qquad H^B_{\Lambda} = \bigoplus_{\boldsymbol{N}} H^B_{\boldsymbol{N}, \Lambda} \; . 
\end{eqnarray} The grand canonical partition function and the (finite volume) pressure are defined by \begin{eqnarray*} \Xi^B (\beta, \boldsymbol{\mu}, \Lambda) & = & \operatorname{Tr}_{\mathcal{H}_{\Lambda}} e^{- \beta ( H_{\Lambda}^B - \boldsymbol{\mu} \cdot \boldsymbol{N}) } = \sum_{\boldsymbol{N}} \operatorname{Tr}_{\mathcal{H}_{\boldsymbol{N}, \Lambda}} e^{- \beta ( H_{\boldsymbol{N}, \Lambda}^B - \boldsymbol{\mu} \cdot \boldsymbol{N})} \\ p^B (\beta, \boldsymbol{\mu} , \Lambda ) & = & ( \beta | \Lambda | )^{-1} \log \Xi^B(\beta, \boldsymbol{\mu}, \Lambda) \; , \end{eqnarray*} where $\boldsymbol{\mu} = (\mu_n , \mu_k ) \in {\mathord{\mathbb R}}^2$ stands for the chemical potentials of the electrons and the nuclei, $\beta > 0$ is the inverse temperature, and $\boldsymbol{N}$ denotes the particle number operator, for which we use the same symbol as for its eigenvalues. Here and below the volume of a subset $\Omega$ in ${\mathord{\mathbb R}}^3$ is denoted by $|\Omega|$. The canonical partition function of the system at reciprocal temperature $\beta$ and the free energy per unit volume are defined by \begin{eqnarray*} Z^B(\beta,\boldsymbol{N},\Lambda)&=& \operatorname{Tr}_{\mathcal{H}_{\boldsymbol{N},\Lambda}} e^{- \beta H^B_{\boldsymbol{N},\Lambda}} \\ f^B(\beta , \boldsymbol{N} , \Lambda) &=& - ( \beta |\Lambda |)^{-1} \log Z^B(\beta , \boldsymbol{N} , \Lambda ) \; . \end{eqnarray*} Furthermore, we consider the following zero temperature expressions, which we will denote as \begin{displaymath} G^B(\boldsymbol{\mu}, \Lambda) = {\rm inf}\, \sigma_{\mathcal{H}_{\Lambda}} (H_{\Lambda}^B - \boldsymbol{\mu} \cdot \boldsymbol{N} ) \ , \quad g^B(\boldsymbol{\mu}, \Lambda) = \frac{1}{|\Lambda|} G^B(\boldsymbol{\mu}, \Lambda) \; , \end{displaymath} \begin{displaymath} E^B( \boldsymbol{N} , \Lambda ) = {\rm inf}\, \sigma_{\mathcal{H}_{\Lambda}} (H^B_{\boldsymbol{N}, \Lambda}) \ , \quad e^B(\boldsymbol{N} , \Lambda ) = \frac{1}{|\Lambda|} E^B(\boldsymbol{N},\Lambda ) \; . 
\end{displaymath} \begin{definition} A sequence $\{\Lambda_l\}$ of bounded open sets in ${\mathord{\mathbb R}}^3$ is called a regular sequence of domains if: \begin{itemize} \item[(i)] For $l \to \infty$, $|\Lambda_l | \to \infty$. \item[(ii)] For each fixed $h \geq 0$ as $l \to \infty$ (with $\Lambda_l^c = {\mathord{\mathbb R}}^3 \setminus \Lambda_l$) \\ \mbox{$ | \{ \ x \in \Lambda_l \ | \ d(x,\Lambda^c_l) < h \ \}| /|\Lambda_l| \to 0 $} \ \ and \quad $ | \{ \ x \in \Lambda_l^c \ | \ d(x,\Lambda_l) \leq h \ \}| /|\Lambda_l| \to 0 \; . $ \item[(iii)] There exists a $\delta > 0$ such that for all $l$, $| \Lambda_l | / | B_l | \geq \delta $, where $B_l$ is the ball of smallest radius containing $\Lambda_l$. \end{itemize} \end{definition} \vspace{0.3cm} \noindent It was shown in \cite{lieleb:the} that for regular sequences $\{ \Lambda_l \}$ the thermodynamic limits \begin{displaymath} p^{\rm D}(\beta, \boldsymbol{\mu} ) = \lim_{l \to \infty} p^{\rm D}( \beta , \boldsymbol{\mu} , \Lambda_l ) \ , \qquad g^{\rm D}(\boldsymbol{\mu}) = \lim_{l \to \infty} g^{\rm D}(\boldsymbol{\mu}, \Lambda_l ) \end{displaymath} for Dirichlet boundary conditions exist and are independent of the particular sequence. To study the thermodynamic limit for the canonical ensemble, we consider systems with no net charge, i.e., \begin{displaymath} \boldsymbol{N} = ( n, k) \ \ \mathrm{with} \quad n = z k \; , \end{displaymath} since each nucleus carries charge $z$. We introduce the set \begin{displaymath} P_{S} = \{ \ (\rho_e, \rho_k) \in {\mathord{\mathbb R}}_+^2 \ | \ \rho_e = z \rho_k \ \} \; , \end{displaymath} corresponding to neutral charge configurations. 
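Condition (ii) of the definition above can be made concrete for a simple family of domains. For balls $\Lambda_l = B_l$ of radius $l$, the fraction of the volume lying within distance $h$ of the boundary is $1 - (1 - h/l)^3$, which vanishes as $l \to \infty$ for fixed $h$. A small illustrative check (not part of the paper):

```python
# Regularity condition (ii) for balls of radius l: the volume fraction
# within distance h of the boundary is 1 - (1 - h/l)^3 and tends to 0
# as l grows, so scaled balls form a regular sequence of domains.
def boundary_volume_fraction(radius, h):
    """Fraction of the ball of given radius lying within h of its boundary."""
    assert radius > h >= 0.0
    return 1.0 - (1.0 - h / radius) ** 3

for radius in (10.0, 100.0, 1000.0):
    print(radius, boundary_volume_fraction(radius, h=1.0))
```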
In \cite{lieleb:the}, it was also shown that for a regular sequence of domains $\{ \Lambda_l \}$ and neutral $\{ \boldsymbol{N}_l \}$, i.e., $\boldsymbol{N}_l \in {\mathord{\mathbb N}}^2 \cap P_S$, with \begin{displaymath} \lim_{l \to \infty} \frac{\boldsymbol{N}_l}{|\Lambda_l|} = \boldsymbol{\rho} \equiv (\rho_e , \rho_k) \in P_S \; , \end{displaymath} the limits for Dirichlet boundary conditions \begin{displaymath} \lim_{l \to \infty} f^{\rm D}(\beta , \boldsymbol{N}_l , \Lambda_l ) = f^{\rm D} ( \beta ,\boldsymbol{\rho}) \ , \qquad \lim_{l \to \infty} e^{\rm D}(\boldsymbol{N}_l , \Lambda_l ) = e^{\rm D} (\boldsymbol{\rho}) \; \end{displaymath} exist independently of the particular sequence and are convex functions of $\boldsymbol{\rho} \in P_S$. The vector $\boldsymbol{\rho}$ gives the particle densities. Furthermore, it was shown that the canonical and the grand canonical ensembles are equivalent, i.e., that \begin{equation} p^{\rm D}(\beta ,\boldsymbol{\mu} ) = \sup_{\boldsymbol{\rho} \in P_S} ( \boldsymbol{\rho} \cdot \boldsymbol{\mu} - f^{\rm D}( \beta , \boldsymbol{\rho}) ) \; , \qquad g^{\rm D}(\boldsymbol{\mu}) = {\rm inf}\,_{\boldsymbol{\rho} \in P_S} ( e^{\rm D}(\boldsymbol{\rho}) - \boldsymbol{\mu} \cdot \boldsymbol{\rho} ) \; . \label{sec:int:eoe} \end{equation} \vspace{0.3cm} \noindent We note that (\ref{sec:int:eoe}) implies that $g^{\rm D}(\boldsymbol{\mu})$ is concave and that $p^{\rm D}(\beta ,\boldsymbol{\mu} )$ is convex, and hence they are continuous functions of $\boldsymbol{\mu}$. We now state our main result. \begin{theorem} \label{sec:int:thm1} Let $\Lambda \subset {\mathord{\mathbb R}}^3$ be a bounded open set with smooth boundary, except for isolated edges and corners. Consider the sequence $\Lambda_L = L \Lambda$ for $L > 0$. Then \begin{itemize} \item[(a)] \quad $ g^{\rm N}(\boldsymbol{\mu} ) = \lim_{L \to \infty } | \Lambda_L |^{-1} G^{\rm N}(\boldsymbol{\mu}, \Lambda_L ) = g^{\rm D}(\boldsymbol{\mu})$. 
\item[(b)] \quad $p^{\rm N} ( \beta , \boldsymbol{\mu} ) = \lim_{L \to \infty} p^{\rm N}(\beta , \boldsymbol{\mu}, \Lambda_L ) = p^{\rm D}(\beta , \boldsymbol{\mu} )$. \end{itemize} \end{theorem} \vspace{0.5cm} \noindent We want to point out that sequences satisfying the assumption of Theorem \ref{sec:int:thm1} are regular sequences of domains. Theorem \ref{sec:int:thm1} has the following consequence. \begin{corollary} \label{sec:int:cor1} Let $\{ \Lambda_L \}$ be a sequence of domains as in Theorem \ref{sec:int:thm1}. Let $\{ \boldsymbol{N}_L \}$ be a sequence with neutral charge configuration, i.e., $\boldsymbol{N}_L \in {\mathord{\mathbb N}}^2 \cap P_S$, such that \begin{displaymath} \lim_{L \to \infty} \frac{\boldsymbol{N}_L}{| \Lambda_L| } = \boldsymbol{\rho} \in P_S \; . \end{displaymath} Then \begin{itemize} \item[(a)] \quad $ e^{\rm N}(\boldsymbol{\rho}) := \lim_{L \to \infty} e^{\rm N}(\boldsymbol{N}_L , \Lambda_L ) = e^{\rm D}(\boldsymbol{\rho}) \; ; $ \item[(b)] \quad $ f^{\rm N}(\beta ,\boldsymbol{\rho} ) := \lim_{L \to \infty} f^{\rm N}(\beta, \boldsymbol{N}_L , \Lambda_L ) = f^{\rm D}( \beta , \boldsymbol{\rho} ) \; . $ \end{itemize} \end{corollary} \begin{remark}{\rm Note that we only consider systems which consist of electrons and one type of spinless nuclei. The results and their proofs generalize in a straightforward way to multicomponent systems, with all negatively (or positively) charged particles being fermions.} \end{remark} \begin{remark}{\rm The essential technical requirement in the proof of Theorem \ref{sec:int:thm1} on the sequence of domains $\Lambda_L$, apart from being regular, is that the thermodynamic quantities of the boundary simplices are bounded, cf. Lemma \ref{sec:pro:lem2}. This in turn holds for all sequences satisfying the assertion of Lemma \ref{sec:pro:geom}, which is stated in the next section.
We want to point out that there do exist sequences of domains for which the thermodynamic limit does not exist for systems with Neumann boundary conditions, and yet for Dirichlet conditions the thermodynamic limit exists, cf. the example below.} \end{remark} \begin{example}{\rm We consider a system where the charge $z$ of the nuclei is one. Let $\{ \Lambda_l \}$ be the union of a large ball $B_l$ of radius $l$ and a shrinking ball $B_{l^{-4}}$ of radius $l^{-4}$ separated from the large ball by a constant distance. Let $\{ \boldsymbol{N}_l \}$ be a sequence with $\boldsymbol{N}_l \in {\mathord{\mathbb N}}^2 \cap P_S$ and $\lim_{l \to \infty} | \Lambda_l |^{-1} \boldsymbol{N}_l = \boldsymbol{\rho}$. The sequence $\{ \Lambda_l \}$ is a regular sequence of domains. We place one electron and a single nucleus in the small ball and put both in the Neumann ground state. In that situation the small ball has a neutral charge distribution and hence there is no Coulomb interaction with the large ball. This provides us with the following upper bound \begin{displaymath} e^{\rm N}(\boldsymbol{N}_l , \Lambda_l ) \leq |\Lambda_l|^{-1} E^{\rm D}(\boldsymbol{N}_l -(1,1), B_l) + |\Lambda_l|^{-1} | B_{l^{-4}}|^{-2} \int_{(B_{l^{-4}})^2} \frac{-1}{|x-y|} \, dx dy \; . \end{displaymath} The first term on the right hand side converges to $e^{\rm D}(\boldsymbol{\rho})$, while the second term diverges to $-\infty$ at rate $l$, since the mean value of $|x-y|^{-1}$ over two points of a ball of radius $r$ is of order $r^{-1}$. The same conclusion is easily seen to hold if we connect the small ball to the large ball by a thin tube provided that its thickness shrinks fast enough.} \end{example} \vspace{0.5cm} Finally we want to consider more general boundary conditions. Consider, for instance, a Laplacian $- \Delta^{\rm A}_{\Lambda}$ with boundary conditions such that \begin{equation} \label{eq:perb} - \Delta^{\rm N}_{\Lambda} \leq - \Delta^{\rm A}_{\Lambda} \leq - \Delta^{\rm D}_{\Lambda} \;. \end{equation} (Here and below operator inequalities are understood in the sense of forms \cite{reesim:ana}.)
Then Theorem \ref{sec:int:thm1} and Corollary \ref{sec:int:cor1}, respectively, imply that the same limits are obtained for systems with boundary conditions satisfying (\ref{eq:perb}). We note that periodic boundary conditions are of this type. Elastic boundary conditions, with elasticity $\sigma$, are defined as follows. Let $t^{\sigma}_{\Lambda}$ denote the quadratic form which is the closure of the form \begin{displaymath} \phi \mapsto \int_{\Lambda} | \nabla \phi |^2 \, dx + \sigma \int_{\partial \Lambda} |\phi |^2 \, dS \end{displaymath} with domain $H^1(\Lambda) \cap C(\overline{\Lambda})$ and where $dS$ is the surface measure of $\partial \Lambda$, the boundary of $\Lambda$. Let $-\Delta^{\sigma}_{\Lambda}$ denote the unique self-adjoint operator with quadratic form $t^{\sigma}_{\Lambda}$. Functions in the domain of $- \Delta^{\sigma}_{\Lambda}$ satisfy \begin{displaymath} \left. \left( \frac{\partial \phi}{\partial {n}} + \sigma \phi \right) \right\vert_{\partial \Lambda} = 0 \end{displaymath} at the boundary of $\Lambda$, where $\partial \phi / \partial {n}$ denotes the outward normal derivative. Note that boundary conditions with elasticity zero are Neumann boundary conditions. For positive elasticity, $\sigma > 0$, we have the operator inequality $ - \Delta^{\rm N}_{\Lambda} \leq - \Delta^{\sigma}_{\Lambda} \leq - \Delta^{\rm D}_{\Lambda}$. This implies the statement of the following theorem in the case where the elasticity is positive. That it indeed holds for negative elasticity will be shown in Section \ref{sec:pro4}. \begin{theorem} \label{sec:int:thm2} Let $\Lambda_L$ be as in Theorem \ref{sec:int:thm1}. Then, for elastic boundary conditions with real elasticity $\sigma$, $\lim_{L \to \infty} p^{\sigma}(\beta , \boldsymbol{\mu} , \Lambda_L) = p^{\rm D}(\beta, \boldsymbol{\mu})$. Let $\{ \boldsymbol{N}_L \}$ be a sequence as in Corollary \ref{sec:int:cor1}. Then $\lim_{L \to \infty} f^{\sigma} (\beta , \boldsymbol{N}_L , \Lambda_L ) = f^{\rm D}(\beta, \boldsymbol{\rho})$.
\end{theorem} \section{Proofs} \label{sec:the} First we show that Corollary \ref{sec:int:cor1} follows from Theorem \ref{sec:int:thm1}. In subsection \ref{subsec:32} we will prove Theorem \ref{sec:int:thm1}, which is our main result. The proof is based on a lemma which estimates the contributions from the boundary terms. The proof of that lemma is deferred to subsection \ref{subsec:33}. In subsection \ref{subsec:34} we prove Theorem \ref{sec:int:thm2} concerning elastic boundary conditions. \subsection{Proof of Corollary \ref{sec:int:cor1}} \noindent (a). Since $H^{\rm N}_{\Lambda_L} \leq H^{\rm D}_{\Lambda_L}$, we know that for $\boldsymbol{\rho} \in P_S$ \begin{equation} \label{sec:int:edr} e^{\rm D}(\boldsymbol{\rho}) \geq \limsup_{L \to \infty} e^{\rm N}(\boldsymbol{N}_L , \Lambda_L ) \; . \end{equation} For a given $\boldsymbol{\rho} \in P_S$ there exists, by the convexity of $e^{\rm D}$, a $\boldsymbol{\mu}$ such that \begin{displaymath} e^{\rm D}(\boldsymbol{\rho}') \geq {e^{\rm D}(\boldsymbol{\rho}) + \boldsymbol{\mu} \cdot ( \boldsymbol{\rho}' - \boldsymbol{\rho})} \ \ , \quad \mathrm{for \ all} \ \boldsymbol{\rho}' \in P_S \; . \end{displaymath} Hence \begin{equation} \label{sec:int:ied1} {\rm inf}\,_{\boldsymbol{\rho}' \in P_S } ( e^{\rm D}(\boldsymbol{\rho}') - \boldsymbol{\mu} \cdot \boldsymbol{\rho}' ) = e^{\rm D}(\boldsymbol{\rho}) - \boldsymbol{\mu} \cdot \boldsymbol{\rho} \; .
\end{equation} We have \begin{eqnarray*} \liminf_{L \to \infty} e^{\rm N}(\boldsymbol{N}_L , \Lambda_L ) & = & \liminf_{L \to \infty} \frac{E^{\rm N}(\boldsymbol{N}_L,\Lambda_L)}{|\Lambda_L|} \\ & = & \liminf_{L \to \infty} \frac{E^{\rm N}(\boldsymbol{N}_L , \Lambda_L ) - \boldsymbol{\mu} \cdot \boldsymbol{N}_L }{|\Lambda_L|} + \boldsymbol{\mu} \cdot \boldsymbol{\rho} \\ & \geq & \liminf_{L \to \infty} \left( {\rm inf}\,_{\boldsymbol{N}} \, \frac{E^{\rm N}(\boldsymbol{N}, \Lambda_L ) - \boldsymbol{\mu} \cdot \boldsymbol{N} }{|\Lambda_L|} \right) + \boldsymbol{\mu} \cdot \boldsymbol{\rho} \\ & = & g^{\rm D}(\boldsymbol{\mu}) + \boldsymbol{\mu} \cdot \boldsymbol{\rho} \\ & = & {\rm inf}\,_{\boldsymbol{\rho}' \in P_S} ( e^{\rm D}(\boldsymbol{\rho}') - \boldsymbol{\mu} \cdot \boldsymbol{\rho}' ) + \boldsymbol{\mu} \cdot \boldsymbol{\rho} \\ & = & e^{\rm D}(\boldsymbol{\rho}) \; , \end{eqnarray*} where we have used Theorem \ref{sec:int:thm1} (a) in the fourth, eq. (\ref{sec:int:eoe}) in the fifth and eq. (\ref{sec:int:ied1}) in the last line. The above inequality together with (\ref{sec:int:edr}) proves (a). \vspace{0.3cm} \noindent (b). The proof of (b) is analogous to (a). We know that for $\boldsymbol{\rho} \in P_S$ \begin{equation} \label{sec:int:fdb1} - f^{\rm D}(\beta , \boldsymbol{\rho} ) \leq \liminf_{L \to \infty} ( - f^{\rm N}(\beta , \boldsymbol{N}_L , \Lambda_L ) ) \; , \end{equation} where we used that the map $A \mapsto \mathrm{Tr} \, e^A$ is monotone with respect to the operator order.
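The monotonicity invoked here reduces, via the min--max principle, to a termwise comparison of eigenvalue sums; a brief sketch (assuming, as here, discrete spectra and trace class semigroups):

```latex
% If A \leq B in the sense of forms, the min-max principle yields
% \lambda_n(A) \leq \lambda_n(B) for every n, and therefore
\begin{displaymath}
 \operatorname{Tr} e^{-A} \; = \; \sum_n e^{- \lambda_n(A)}
 \; \geq \; \sum_n e^{- \lambda_n(B)} \; = \; \operatorname{Tr} e^{-B} \; .
\end{displaymath}
% Applied to A = \beta H^{\rm N}_{\Lambda_L} and B = \beta H^{\rm D}_{\Lambda_L}
% on the \boldsymbol{N}_L-particle sector, this gives
% Z^{\rm N} \geq Z^{\rm D}, i.e., - f^{\rm N} \geq - f^{\rm D}
% at fixed finite volume.
```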
Since $P_S \ni \boldsymbol{\rho} \mapsto - f^{\rm D}(\beta , \boldsymbol{\rho})$ is concave, there exists for a given $\boldsymbol{\rho} \in P_S$ a $\boldsymbol{\mu}$ such that \begin{displaymath} - f^{\rm D}(\beta , \boldsymbol{\rho}' ) \leq - f^{\rm D}(\beta , \boldsymbol{\rho} ) - \boldsymbol{\mu} \cdot ( \boldsymbol{\rho}' - \boldsymbol{\rho} ) \ \ , \quad \mathrm{for \ all} \ \boldsymbol{\rho}' \in P_S \; , \end{displaymath} and hence \begin{equation} \label{sec:int:snr} \sup_{ \boldsymbol{\rho}' \in P_S} \left( \boldsymbol{\rho}' \cdot \boldsymbol{\mu} - f^{\rm D}( \beta , \boldsymbol{\rho}') \right) = \boldsymbol{\rho} \cdot \boldsymbol{\mu} - f^{\rm D}( \beta , \boldsymbol{\rho} ) \; . \end{equation} We have \begin{eqnarray*} \lefteqn{\limsup_{L \to \infty} \left( - f^{\rm N}(\beta , \boldsymbol{N}_L , \Lambda_L ) \right) = } \\ & & = \limsup_{L \to \infty} \left( - \boldsymbol{\rho} \cdot \boldsymbol{\mu} + (\beta | \Lambda_L |)^{-1} \beta \boldsymbol{\mu} \cdot \boldsymbol{N}_L - f^{\rm N}(\beta , \boldsymbol{N}_L , \Lambda_L ) \right) \\ & & = - \boldsymbol{\rho} \cdot \boldsymbol{\mu} + \limsup_{L \to \infty} \left( ( \beta | \Lambda_L |)^{-1} \log \left( Z^{\rm N}(\beta ,\boldsymbol{N}_L , \Lambda_L ) e^{ \beta \boldsymbol{\mu} \cdot \boldsymbol{N}_L } \right) \right) \\ & & \leq - \boldsymbol{\rho} \cdot \boldsymbol{\mu} + \limsup_{L \to \infty} \left( ( \beta | \Lambda_L |)^{-1} \log \left( \sum_{\boldsymbol{N}} Z^{\rm N}(\beta ,\boldsymbol{N} , \Lambda_L ) e^{ \beta \boldsymbol{\mu} \cdot \boldsymbol{N} } \right) \right) \\ & & = - \boldsymbol{\rho} \cdot \boldsymbol{\mu} + p^{\rm D}(\beta , \boldsymbol{\mu}) \\ & & = - f^{\rm D}(\beta , \boldsymbol{\rho}) \; , \end{eqnarray*} where we have used Theorem \ref{sec:int:thm1} (b) in the fourth and eqns. (\ref{sec:int:eoe},\ref{sec:int:snr}) in the last line. The above inequality together with (\ref{sec:int:fdb1}) proves (b).
\qed \subsection{Proof of Theorem \ref{sec:int:thm1}} \label{subsec:32} \noindent To prove Theorem \ref{sec:int:thm1}, we will make use of the localization method of \cite{grasch:ont} (see also \cite{conlieyau:the}), where one breaks up ${\mathord{\mathbb R}}^3$ into simplices in the following way. Cutting the unit cube $W = [0,1]^3$ with all planes passing through the centre and an edge or a face diagonal of $W$, one obtains congruent simplices $\triangle_n \subset W$, ($n=1,...,24$). The simplices $\triangle_{\alpha} = \triangle_n + z$, with $\alpha = (z,n) \in {\mathord{\mathbb Z}}^3 \times \{ 1, ..., 24 \} = : I$, yield a partition of ${\mathord{\mathbb R}}^3$ up to their boundaries. We then choose a spherically symmetric $\varphi_0 \in C_0^{\infty}({\mathord{\mathbb R}}^3)$ with $\int \varphi_0^2 = 1$ and $\{ \varphi_0(x) \neq 0 \} = \{ |x| < 1/2 \}$. Let $\chi_{\alpha}$ be the characteristic function of $\triangle_{\alpha}$. Setting $\varphi(x) = \eta^{-3/2} \varphi_0(x/\eta)$ and $j_{\alpha} = (\chi_{\alpha} * \varphi^2 )^{1/2}$, we obtain a partition of unity, i.e., \begin{displaymath} \sum_{\alpha \in I} j_{\alpha}^2(x) = 1 \ \ , \; ( x \in {\mathord{\mathbb R}}^3 ) \; , \end{displaymath} with $j_{\alpha} \in C^{\infty}({\mathord{\mathbb R}}^3)$. There are congruent simplices $\triangle_{\alpha}^+$, which are scaled copies of $\triangle_{\alpha}$, such that \begin{eqnarray*} & \operatorname{supp} j_{\alpha} \subset \triangle_{\alpha}^+ & \\ & | \triangle_{\alpha}^+ | \leq | \triangle_{\alpha} | ( 1 + O(\eta)) & \end{eqnarray*} as $\eta \downarrow 0$. The following definitions depend on $\eta$ and $l > 0$ although the notation will not reflect this for simplicity. For the moment, let $\Lambda \subset {\mathord{\mathbb R}}^3$ be any bounded open set. For $y \in W$ and $R \in SO(3)$ we set \begin{displaymath} \Lambda^{y,R} = R^{-1} \Lambda - l y \; . 
\end{displaymath} We define the set \begin{displaymath} I(\Lambda) = \{ \alpha \in I | l \triangle_{\alpha}^+ \cap \Lambda^{y,R} \neq \emptyset \ \mathrm{for \ some} \ y \in W, R \in SO(3) \} \; . \end{displaymath} For $\alpha \in I(\Lambda)$, let $\mathcal{H}_{\alpha} := \mathcal{H}_{l \triangle_{\alpha}^+}$ be the many particle space for the simplices $l \triangle_{\alpha}^+$ as given in (\ref{sec:int:for1}). By $H^{\rm M}_{\alpha,y,R,\Lambda}$ we denote the Hamiltonian on $l \triangle_{\alpha}^+ \cap \Lambda^{y,R}$ with Neumann conditions on $(l \triangle_{\alpha}^+) \cap \partial \Lambda^{y,R}$ and Dirichlet conditions on the remaining part of the boundary. The operator $H^{\rm M}_{\alpha,y,R,\Lambda}$ acts on \begin{displaymath} \mathcal{H}_{\alpha , y, R, \Lambda} := \mathcal{H}_{l \triangle_{\alpha}^+ \cap \Lambda^{y,R}} \hookrightarrow \mathcal{H}_{\alpha} \end{displaymath} and hence on $\mathcal{H}_{\alpha}$ via the canonical embedding. Note, if $l\triangle_{\alpha}^+ \subset \Lambda^{y,R}$, then $H^{\rm M}_{\alpha,y,R,\Lambda} = H^{\rm D}_{l \triangle_{\alpha}^+}$ and $\mathcal{H}_{\alpha, y, R, \Lambda} = \mathcal{H}_{\alpha}$. We define the Hilbert space and a Hamiltonian, acting on it, as the direct integrals \begin{eqnarray*} \mathcal{H}_{I(\Lambda)} & = & \int^{\oplus}_{W \times SO(3)} dy d \mu (R) \bigotimes_{\alpha \in I(\Lambda)} \mathcal{H}_{\alpha} \; , \\ H^{\rm N}_{I(\Lambda)} & = & \int^{\oplus}_{W \times SO(3)} d y d \mu(R) \sum_{\alpha \in I(\Lambda) } H^{\rm M}_{\alpha ,y, R, \Lambda } \; , \end{eqnarray*} where $d \mu $ denotes the Haar measure on $SO(3)$. We shall define a map $J : \mathcal{H}_{\Lambda} \to \mathcal{H}_{I(\Lambda)}$ as follows. Let $j_{y,R,\alpha} : L^2(\Lambda) \to L^2(l \triangle_{\alpha}^+)$ be given by \begin{displaymath} ( j_{y,R,\alpha} \psi)(x) = j_{\alpha}(x/l) \psi( R(x + l y)) \; . 
\end{displaymath} Define \begin{eqnarray*} j_{y,R} : L^2(\Lambda) & \to & \bigoplus_{\alpha \in I(\Lambda)} L^2(l \triangle_{\alpha}^+ ) \\ j_{y,R} & = & \bigoplus_{\alpha \in I(\Lambda)} j_{y,R,\alpha} \; . \end{eqnarray*} This lifts to a map between the many particle spaces \begin{displaymath} \Gamma(j_{y,R} ) : \mathcal{H}_{\Lambda} \to \bigotimes_{\alpha \in I(\Lambda)} \mathcal{H}_{\alpha} \; , \end{displaymath} which acts as the $\boldsymbol{N}$-fold tensor product of $j_{y,R}$ on $\boldsymbol{N}$-particle states. We may now define \begin{displaymath} J : \mathcal{H}_{\Lambda} \to \mathcal{H}_{I(\Lambda)} \; , \qquad J = \int_{W \times SO(3)}^{\oplus} dy d \mu(R) \Gamma(j_{y,R}) \; . \end{displaymath} We note that the map $j^*_{y,R,\alpha} : L^2(l \triangle_{\alpha}^+ ) \to L^2(\Lambda)$ is given by \begin{displaymath} (j^*_{y,R,\alpha} \psi)(x) = j_{\alpha} (R^{-1}(x/l) - y) \psi(R^{-1}x - l y ) \; . \end{displaymath} Hence $j^*_{y,R} j_{y,R} : L^2(\Lambda) \to L^2(\Lambda)$ acts as multiplication by $\sum_{\alpha \in I(\Lambda)} j_{\alpha}^2(R^{-1}(x/l) - y )$. This function of $x \in \Lambda$ equals $1$. We conclude that $J^* J =1$, i.e., that $J$ is an isometry. We state the following lemma, cf. Lemma 7 in \cite{grasch:ont}. \begin{lemma} \label{sec:pro:lem1} Let $\eta = l^{-1}$. Then \begin{displaymath} H_{\Lambda}^{\rm N} \geq \kappa J^* H_{I(\Lambda)}^{\rm N} J - l^{-1} \mathbf{const} \cdot \boldsymbol{N} \end{displaymath} for large $l$, where $0 < \kappa \leq 1$ and $\kappa = 1 + O(l^{-1})$ as $l \to \infty$. \end{lemma} For the proof of this lemma we refer the reader to the proof of Lemma 7 in \cite{grasch:ont}, where the statement is for the Dirichlet Laplacian. With little modification of the proof given there one can prove Lemma \ref{sec:pro:lem1}. From now on, let $\Lambda$ be a fixed open set as in the assumption of Theorem \ref{sec:int:thm1}, i.e., bounded with smooth boundary, except for isolated edges and corners.
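The error term in Lemma \ref{sec:pro:lem1} can be traced back to the IMS localization formula; as a heuristic one-particle sketch (the actual proof in \cite{grasch:ont} must in addition localize the Coulomb interaction and exploit the average over $y$ and $R$):

```latex
% IMS localization: for any smooth quadratic partition of unity
% \sum_\alpha j_\alpha^2 = 1 one has the operator identity
\begin{displaymath}
 - \Delta \; = \; \sum_{\alpha \in I} j_{\alpha} \, ( - \Delta ) \, j_{\alpha}
 \; - \; \sum_{\alpha \in I} | \nabla j_{\alpha} |^2 \; ,
\end{displaymath}
% so cutting the kinetic energy into the pieces l \triangle_\alpha^+
% costs an error controlled by \sum_\alpha | \nabla j_\alpha |^2, which
% for the scaled partition j_\alpha( \cdot / l) becomes small relative
% to the local kinetic energy as l grows.
```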
We will consider the sequence of scaled copies $\Lambda_L = L \Lambda$, with $L > 0$. Note that the claim of Lemma \ref{sec:pro:lem1} of course holds for all $\Lambda_L$. Let $\triangle$ denote a simplex which is similar to one (and thus all) of the simplices $\triangle_{\alpha}$, i.e., equal up to dilations, translations, and rotations. By $\triangle_c$ we denote its intersection with $\Lambda_L$, i.e., $\triangle_c = \triangle \cap \Lambda_L$. Let $-\Delta^{\rm M}_{\triangle_c}$ $(H^{\rm M}_{\triangle_c})$ denote the Laplacian (Hamiltonian) on $\triangle_c$ with Neumann conditions on $\triangle \cap \partial \Lambda_L$ and Dirichlet conditions on the rest of $\partial \triangle_c$. We note that $-\Delta^{\rm M}_{\triangle_c}$ is the unique self-adjoint operator on $L^2(\triangle_c)$ whose quadratic form is the closure of the form $\phi \mapsto \int_{\triangle_c} | \nabla \phi |^2 \, dx$ with domain $\{ \, \phi \in H^1(\triangle_c) \cap C(\overline{\triangle_c}) \, | \, \phi \ \mathrm{vanishes \ in \ a \ neighborhood \ of} \ \partial \triangle \cap \Lambda_L \, \}$. The following lemma, whose proof is postponed to subsection \ref{subsec:33}, provides us with a bound for the contributions from the boundary simplices. \begin{lemma} \label{sec:pro:lem2} Let $\Lambda_L$ be a sequence of domains as in Theorem \ref{sec:int:thm1} and let $v > 0$. Then there exists a number $L_0$ and constants $C_E( \boldsymbol{\mu}, v )$ and $C_{\Xi}(\beta , \boldsymbol{\mu}, v)$ such that \begin{itemize} \item[(a)] \quad $ G^{\rm M}(\boldsymbol{\mu}, \triangle_c ) \geq C_E( \boldsymbol{\mu}, v ) > - \infty \; ; $ \item[(b)] \quad $ \operatorname{Tr}_{\mathcal{H}_{\triangle_c}} e^{ - \beta ( H^{\rm M}_{\triangle_c} - \boldsymbol{\mu} \cdot \boldsymbol{N} )} \leq C_{\Xi}(\beta , \boldsymbol{\mu}, v) < \infty \; , $ \end{itemize} for all $\triangle_c = \triangle \cap \Lambda_L$, with $L \geq L_0$ and $|\triangle| \leq v$.
\end{lemma} \vspace{0.5cm} \noindent For the proof of Theorem \ref{sec:int:thm1} we will also use the following lemma. \begin{lemma} \label{sec:pro:lem4} Let $\{ \boldsymbol{\mu}_l \} $ be a sequence in ${\mathord{\mathbb R}}^2$ with $\lim_{l \to \infty} \boldsymbol{\mu}_l = \boldsymbol{\mu}$ and let $\{ \Lambda_l \}$ be a regular sequence of domains. Then \begin{itemize} \item[(a)] \quad $ \lim_{l \to \infty} |\Lambda_l|^{-1} G^{\rm D}(\boldsymbol{\mu}_l, \Lambda_l )= \lim_{l \to \infty} |\Lambda_l|^{-1} G^{\rm D}(\boldsymbol{\mu}, \Lambda_l )= g^{\rm D}(\boldsymbol{\mu}) \;$ ; \item[(b)] \quad $ \lim_{l \to \infty} p^{\rm D}(\beta, \boldsymbol{\mu}_l, \Lambda_l ) = \lim_{l \to \infty} p^{\rm D}(\beta, \boldsymbol{\mu}, \Lambda_l ) = p^{\rm D}(\beta, \boldsymbol{\mu} ) \; . $ \end{itemize} \end{lemma} \begin{proof} We use the notation $\boldsymbol{\mu}_l = ({\mu_n}_l , {\mu_k}_l)$ and $\boldsymbol{\epsilon}=(\epsilon, \epsilon)$. For $\epsilon > 0$, there exists an $l_0$ such that for all $l \geq l_0$ \begin{eqnarray*} \mu_n - \epsilon \leq &{\mu_n}_l& \leq \mu_n + \epsilon \\ \mu_k - \epsilon \leq &{\mu_k}_l& \leq \mu_k + \epsilon \; . \end{eqnarray*} For (a), we note that \begin{displaymath} |\Lambda_l|^{-1} G^{\rm D}(\boldsymbol{\mu} + \boldsymbol{\epsilon}, \Lambda_l ) \leq |\Lambda_l|^{-1} G^{\rm D}(\boldsymbol{\mu}_l, \Lambda_l ) \leq |\Lambda_l|^{-1} G^{\rm D}(\boldsymbol{\mu} - \boldsymbol{\epsilon}, \Lambda_l ) \; , \qquad \forall \ l \geq l_0 \; . \end{displaymath} Hence \begin{displaymath} g^{\rm D}(\boldsymbol{\mu} + \boldsymbol{\epsilon}) \leq \liminf_{l \to \infty} |\Lambda_l|^{-1} G^{\rm D}(\boldsymbol{\mu}_l, \Lambda_l ) \leq \limsup_{l \to \infty} |\Lambda_l|^{-1} G^{\rm D}(\boldsymbol{\mu}_l, \Lambda_l ) \leq g^{\rm D}(\boldsymbol{\mu} - \boldsymbol{\epsilon}) \; , \end{displaymath} and, by the continuity of $\boldsymbol{\mu} \mapsto g^{\rm D}(\boldsymbol{\mu})$, (a) follows. 
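The first chain of inequalities uses that the finite volume ground state energy is monotone in the chemical potentials; a one-line justification (with inequalities between chemical potentials understood componentwise):

```latex
% If \mu' \geq \mu componentwise then, since \boldsymbol{N} has
% nonnegative components, for every \boldsymbol{N}
\begin{displaymath}
 E^{\rm D}(\boldsymbol{N}, \Lambda_l ) - \boldsymbol{\mu}' \cdot \boldsymbol{N}
 \; \leq \;
 E^{\rm D}(\boldsymbol{N}, \Lambda_l ) - \boldsymbol{\mu} \cdot \boldsymbol{N} \; ,
\end{displaymath}
% and taking the infimum over \boldsymbol{N} shows that
% \boldsymbol{\mu} \mapsto G^{\rm D}(\boldsymbol{\mu}, \Lambda_l ) is
% non-increasing, which gives the sandwich between \mu - \epsilon and
% \mu + \epsilon used in (a).
```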
For (b), we first note that, by equation (\ref{sec:int:eoe}), $\boldsymbol{\mu} \mapsto p^{\rm D}(\beta , \boldsymbol{\mu})$ is convex and hence continuous. In analogy to (a) we have, using that $A \mapsto \mathrm{Tr} \, e^A$ is (operator) monotone, \begin{displaymath} p^{\rm D}(\beta, \boldsymbol{\mu} - \boldsymbol{\epsilon}, \Lambda_l ) \leq p^{\rm D}(\beta, \boldsymbol{\mu}_l, \Lambda_l ) \leq p^{\rm D}(\beta, \boldsymbol{\mu} + \boldsymbol{\epsilon}, \Lambda_l ) \; , \qquad \forall \ l \geq l_0 \; . \end{displaymath} Hence \begin{displaymath} p^{\rm D}(\beta, \boldsymbol{\mu} - \boldsymbol{\epsilon}) \leq \liminf_{l \to \infty} p^{\rm D}(\beta, \boldsymbol{\mu}_l, \Lambda_l ) \leq \limsup_{l \to \infty} p^{\rm D}(\beta , \boldsymbol{\mu}_l, \Lambda_l ) \leq p^{\rm D}(\beta ,\boldsymbol{\mu} + \boldsymbol{\epsilon}) \; , \end{displaymath} and (b) follows by the continuity of $\boldsymbol{\mu} \mapsto p^{\rm D}(\beta , \boldsymbol{\mu})$. \end{proof} \vspace{0.3cm} \noindent {\it Proof of Theorem \ref{sec:int:thm1}.} (a) Since $H_{\Lambda_L}^{\rm D} - \boldsymbol{\mu} \cdot \boldsymbol{N} \geq H_{\Lambda_L}^{\rm N} - \boldsymbol{\mu} \cdot \boldsymbol{N}$, the inequality \begin{equation} \label{eq:ineq1} g^{\rm D}(\boldsymbol{\mu}) \geq \limsup_{L \to \infty} g^{\rm N}(\boldsymbol{\mu}, \Lambda_L) \end{equation} is obvious. We shall show the inequality $\liminf_{L \to \infty} g^{\rm N}(\boldsymbol{\mu}, \Lambda_L) \geq g^{\rm D}(\boldsymbol{\mu})$. We introduce \begin{displaymath} \boldsymbol{N}_{I(\Lambda)} = \int_{W \times SO(3)}^{\oplus} dy d\mu(R) \sum_{\alpha \in I(\Lambda)} \boldsymbol{N}_{\alpha} \; , \end{displaymath} where $\boldsymbol{N}_{\alpha}$ denotes the number operator of $\mathcal{H}_{\alpha}$. Note that $J^* \boldsymbol{N}_{I(\Lambda)} J = \boldsymbol{N}$. 
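The identity $J^* \boldsymbol{N}_{I(\Lambda)} J = \boldsymbol{N}$ holds because each $\Gamma(j_{y,R})$ maps $\boldsymbol{N}$-particle states to $\boldsymbol{N}$-particle states; schematically:

```latex
% On an N-particle state \Psi, the localized state \Gamma(j_{y,R}) \Psi
% still carries total particle number N, so
\begin{displaymath}
 \Big( \sum_{\alpha \in I(\Lambda)} \boldsymbol{N}_{\alpha} \Big)
 \, \Gamma(j_{y,R}) \Psi
 \; = \; \boldsymbol{N} \, \Gamma(j_{y,R}) \Psi \; ,
\end{displaymath}
% and hence, integrating over y and R and using J^* J = 1,
% J^* \boldsymbol{N}_{I(\Lambda)} J \Psi = \boldsymbol{N} \Psi .
```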
By Lemma \ref{sec:pro:lem1}, \begin{displaymath} H_{\Lambda}^{\rm N} - \boldsymbol{\mu} \cdot \boldsymbol{N} \geq \kappa J^* \left( H_{I(\Lambda)}^{\rm N} - \boldsymbol{\tilde{\mu}}_l \cdot \boldsymbol{N}_{I(\Lambda)} \right) J \; , \end{displaymath} where we have set $\boldsymbol{\tilde{\mu}}_l = (1/\kappa) (\boldsymbol{\mu} + l^{-1}\mathbf{const} )$. We define \begin{eqnarray*} I^{\rm int}_{y,R}(\Lambda) & = & \{ \alpha | l \triangle_{\alpha}^+ \subset \Lambda^{y,R} \} \\ I^{\rm b}_{y,R}(\Lambda) & = & \{ \alpha | l \triangle_{\alpha}^+ \cap \partial \Lambda^{y,R} \neq \emptyset \} \; . \end{eqnarray*} Let $\Psi \in \mathcal{H}_{\Lambda_L}$ be normalized to one and smooth. We observe that \begin{eqnarray*} \lefteqn{ \left( \Psi, ( H^{\rm N}_{\Lambda_L} - \boldsymbol{\mu} \cdot \boldsymbol{N} ) \Psi \right) } \\ & \geq & \kappa \int_{W \times SO(3)} dy d \mu (R) \left( \Gamma(j_{y,R}) \Psi, \sum_{\alpha \in I(\Lambda_L)} (H_{\alpha,y,R,\Lambda}^{\rm M} - \boldsymbol{\tilde{\mu}}_l \cdot \boldsymbol{N}_{\alpha} ) \Gamma(j_{y,R}) \Psi \right) \\ & \geq & \kappa \int_{W \times SO(3)} d y d \mu (R) \left( \sum_{ \alpha \in I^{\rm int}_{y,R} (\Lambda_L) } G^{\rm D}(\boldsymbol{\tilde{\mu}}_l, l \triangle_{\alpha}^+) + \sum_{ \alpha \in I^{\rm b}_{y,R} (\Lambda_L) } G^{\rm M}(\boldsymbol{\tilde{\mu}}_l, l \triangle_{\alpha}^+ \cap \Lambda_L^{y,R}) \right) \; . \end{eqnarray*} Hence \begin{eqnarray*} \lefteqn{ \liminf_{L \to \infty} |\Lambda_L|^{-1} G^{\rm N}(\boldsymbol{\mu},\Lambda_L) \geq } \\ & & \liminf_{L \to \infty} \kappa \int_{W \times SO(3)} d y d \mu (R) \left( \sum_{ \alpha \in I^{\rm int}_{y,R} (\Lambda_L) } \frac{| l \triangle_{\alpha}^+ |}{| \Lambda_L |} \cdot | l \triangle_{\alpha}^+ |^{-1} G^{\rm D}(\boldsymbol{\tilde{\mu}}_l, l \triangle_{\alpha}^+ ) \right. \\ & & \hspace{7cm} + \left. 
\sum_{ \alpha \in I^{\rm b}_{y,R} (\Lambda_L) } \frac{1}{| \Lambda_L |} C_E(\boldsymbol{\tilde{\mu}}_l, | l\triangle_{\alpha}^+ |) \right) \\ & & \geq \kappa ( 1 + O(l^{-1})) \cdot | l \triangle^+ |^{-1} G^{\rm D}(\boldsymbol{\tilde{\mu}}_l, l \triangle^+ ) \; , \end{eqnarray*} where we used that all simplices $\triangle_{\alpha}^+$ are congruent to a single one, which we denote by $\triangle^+$, and in the last inequality we used Lemma \ref{sec:pro:lem2} (a) and that both limits \begin{eqnarray} \label{sec:pro:for1} \lim_{L \to \infty} \sum_{ \alpha \in I^{\rm int}_{y,R} (\Lambda_L) } \frac{| l \triangle_{\alpha}^+ |}{| \Lambda_L |} & = & 1 + O(l^{-1}) \; , \\ \lim_{L \to \infty} \sum_{ \alpha \in I^{\rm b}_{y,R} (\Lambda_L) } \frac{1}{| \Lambda_L |} & = & 0 \; \label{sec:pro:for2} \end{eqnarray} are uniform in $R \in SO(3)$, $y \in W$. We omit the proof of these simple facts. By Lemma \ref{sec:pro:lem4} (a), the subsequent limit $l \to \infty$ yields \begin{equation} \label{eq:ineq2} \liminf_{L \to \infty} g^{\rm N}(\boldsymbol{\mu}, \Lambda_L) \geq g^{\rm D}(\boldsymbol{\mu}) \;. \end{equation} The two inequalities (\ref{eq:ineq1}) and (\ref{eq:ineq2}) show the claim. (b) The inequality \begin{equation} \label{eq:ineq3} p^{\rm D}(\beta , \boldsymbol{\mu}) \leq \liminf_{L \to \infty} p^{\rm N}(\beta , \boldsymbol{\mu}, \Lambda_L ) \end{equation} follows from $-\Delta^{\rm N} \leq - \Delta^{\rm D}$. The opposite inequality \begin{displaymath} \limsup_{L \to \infty} p^{\rm N}(\beta , \boldsymbol{\mu} , \Lambda_L ) \leq p^{\rm D}(\beta , \boldsymbol{\mu}) \end{displaymath} is seen as follows. We set $\boldsymbol{\mu}_l = \boldsymbol{\mu} + l^{-1} \mathbf{const}$. Let $\{ \varphi_i \}_{i \in I}$ be an eigenbasis of $J^* ( \kappa H_{I(\Lambda_L)} - \boldsymbol{\mu}_l \cdot \boldsymbol{N}_{I(\Lambda_L)} ) J$.
Then, using Lemma \ref{sec:pro:lem1}, we have \begin{eqnarray*} \Xi^{\rm N}(\beta, \boldsymbol{\mu}, \Lambda_L) & \leq & \operatorname{Tr}_{\mathcal{H}_{\Lambda_L}} e^{- \beta J^* (\kappa H_{I(\Lambda_L)} - \boldsymbol{\mu}_l \cdot \boldsymbol{N}_{I(\Lambda_L)} ) J } \\ & = & \sum_{i \in I} e^{- \beta ( J \varphi_i , ( \kappa H_{I(\Lambda_L)} - \boldsymbol{\mu}_l \cdot \boldsymbol{N}_{I(\Lambda_L)} ) J \varphi_i )} \\ & \leq & \sum_{i \in I} \left( J \varphi_i , e^{- \beta ( \kappa H_{I(\Lambda_L)} - \boldsymbol{\mu}_l \cdot \boldsymbol{N}_{I(\Lambda_L)} )} J \varphi_i \right) \\ & \leq & \operatorname{Tr}_{\mathcal{H}_{I(\Lambda_L)}} e^{- \beta ( \kappa H_{I(\Lambda_L)} - \boldsymbol{\mu}_l \cdot \boldsymbol{N}_{I(\Lambda_L)} ) } \\ & \leq & \int_{W \times SO(3)} dy d \mu (R) \prod_{ \alpha \in I^{\rm int}_{y,R} (\Lambda_L) } \operatorname{Tr}_{\mathcal{H}_{\alpha}} e^{- \beta ( \kappa H^{\rm D}_{\alpha} - \boldsymbol{\mu}_l \cdot \boldsymbol{N}_{\alpha} )} \\ & & \qquad \times \prod_{ \alpha \in I^{\rm b}_{y,R} (\Lambda_L) } \operatorname{Tr}_{\mathcal{H}_{\alpha,y,R,\Lambda_L}} e^{- \beta ( \kappa H^{\rm M}_{\alpha,y,R,\Lambda_L} - \boldsymbol{\mu}_l \cdot \boldsymbol{N}_{\alpha} )} \; , \end{eqnarray*} where, in the third line, we used Jensen's inequality with the spectral measure of $(\kappa H_{I(\Lambda_L)} - \boldsymbol{\mu}_l \cdot \boldsymbol{N}_{I(\Lambda_L)})$ for $J \varphi_i$. Since $0 < \kappa \leq 1$, we have $\kappa H^{\rm D}_{\alpha} \geq \kappa^2 T^{\rm D}_{\alpha} + \kappa V \cong H^{\rm D}_{\kappa^{-1}l \triangle_{\alpha}^+}$, where $T^{\rm D}_{\alpha}$ denotes the kinetic energy, $V$ is the Coulomb potential, and the unitary equivalence comes from scaling. Note that all the simplices $\triangle_{\alpha}^+$ are congruent to a single one $\triangle^+$.
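The unitary equivalence $\kappa^2 T^{\rm D}_{\alpha} + \kappa V \cong H^{\rm D}_{\kappa^{-1} l \triangle_{\alpha}^+}$ invoked above is the standard Coulomb scaling; a sketch using the dilation $(U_{\lambda} \psi)(x_1, \ldots , x_N) = \lambda^{-3N/2} \psi(x_1/\lambda, \ldots , x_N/\lambda)$ with $\lambda = \kappa^{-1}$:

```latex
% Dilation maps L^2(\Omega^N) unitarily onto L^2((\lambda\Omega)^N), with
\begin{displaymath}
 U_{\lambda}^* \left( - \Delta^{\rm D}_{\lambda \Omega} \right) U_{\lambda}
 \; = \; \lambda^{-2} \left( - \Delta^{\rm D}_{\Omega} \right) \; , \qquad
 U_{\lambda}^* \, \frac{e_i e_j}{|x_i - x_j|} \, U_{\lambda}
 \; = \; \lambda^{-1} \, \frac{e_i e_j}{|x_i - x_j|} \; ,
\end{displaymath}
% so, choosing \Omega = l \triangle_\alpha^+ and \lambda = \kappa^{-1} \geq 1,
% the operator \kappa^2 T^{\rm D}_\alpha + \kappa V is unitarily
% equivalent to the Hamiltonian with Dirichlet conditions on the
% enlarged simplex \kappa^{-1} l \triangle_\alpha^+.
```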
By Lemma \ref{sec:pro:lem2} (b), we have \begin{eqnarray*} \lefteqn{p^{\rm N}(\beta, \boldsymbol{\mu}, \Lambda_L )} \\ & = & (\beta | \Lambda_L |)^{-1} \log \Xi^{\rm N} (\beta , \boldsymbol{\mu}, \Lambda_L ) \\ & \leq & ( \beta |\Lambda_L |)^{-1} \log \left( \Xi^{\rm D} (\beta , \boldsymbol{\mu}_l , \kappa^{-1} l \triangle^{+})^{\sup_{y,R}(| I^{\rm int}_{y,R} (\Lambda_L) |)} \cdot C_{\Xi}(\kappa \beta, \kappa^{-1}\boldsymbol{\mu}_l, | l \triangle^+ | )^{\sup_{y,R}(|I^{\rm b}_{y,R} (\Lambda_L)|)} \right) \\ & \leq & \sup_{y,R} ( | I^{\rm int}_{y,R} (\Lambda_L) |) | \Lambda_L |^{-1} |\kappa^{-1} l \triangle^+ | \cdot p^{\rm D}( \beta , \boldsymbol{\mu}_l , \kappa^{-1} l \triangle^+ ) \\ & & \ + \sup_{y,R}( |I^{\rm b}_{y,R} (\Lambda_L)|) ( \beta |\Lambda_L |)^{-1} \cdot \log C_{\Xi}(\kappa \beta, \kappa^{-1} \boldsymbol{\mu}_l, |l \triangle^+ | ) \; ; \end{eqnarray*} note that $\Xi^{\rm D} \geq 1$ and $C_{\Xi} \geq 1$. Thus \begin{displaymath} \limsup_{L \to \infty} p^{\rm N}( \beta, \boldsymbol{\mu} , \Lambda_L ) \leq \kappa^{-3} (1 + O(l^{-1})) p^{\rm D}(\beta , \boldsymbol{\mu}_l , \kappa^{-1} l \triangle^+ ) \; , \end{displaymath} where we have used equations (\ref{sec:pro:for1}, \ref{sec:pro:for2}). Using Lemma \ref{sec:pro:lem4} (b), the subsequent limit $l \to \infty$ gives \begin{equation} \label{eq:ineq4} \limsup_{L \to \infty} p^{\rm N}(\beta, \boldsymbol{\mu}, \Lambda_L ) \leq p^{\rm D}(\beta , \boldsymbol{\mu} ) \; . \end{equation} The claim in (b) follows from eqns. (\ref{eq:ineq3}) and (\ref{eq:ineq4}). \qed \vspace{0.3cm} \subsection{Proof of Lemma \ref{sec:pro:lem2}} \label{subsec:33} To prove Lemma \ref{sec:pro:lem2}, we will first state a technical lemma reflecting the geometry of $\Lambda$. We recall that $\triangle$ denotes a simplex which is similar to one of the $\triangle_{\alpha}$, i.e., equivalent up to dilations, translations and rotations.
\begin{lemma} \label{sec:pro:geom} Let $\Lambda_L$ be a sequence of domains as in Theorem \ref{sec:int:thm1} and let $v>0$. Then there exist a constant $C_{\Lambda} > 0$ and a number $L_0$ (depending on $v$) such that for all $L \geq L_0$ and all simplices $\triangle$ with $|\triangle| \leq v$ which intersect $\partial \Lambda_L$, we can choose an open set $V$ containing $\triangle_c = \triangle \cap \Lambda_L$ and smooth coordinates \begin{displaymath} \varphi : V \to B_0 \ , \quad x \mapsto \varphi(x) = ( y_1(x),y_2(x), y_3(x)) \end{displaymath} with the properties: \begin{itemize} \item[(i)] $B_0$ is a ball centered around the origin and $\Lambda_L \cap V$ corresponds to either the half space restriction $\{x\in V \ |\ y_3(x)<0 \}$, the quarter space restriction $\{x\in V \ |\ y_2(x),y_3(x)<0 \}$, or the octant restriction $\{x\in V \ |\ y_1(x), y_2(x),y_3(x)<0 \}$. \item[(ii)] The Jacobian $D \varphi$ has determinant one and $D \varphi^{-1} (D \varphi^{-1})^T \geq C_{\Lambda}$. \end{itemize} \end{lemma} We want to point out that in the case where $\Lambda$ is a box, this lemma follows trivially by choosing $C_{\Lambda}=1$ and the coordinate maps to be an appropriate composition of a translation followed by a rotation. In that case, also the next lemma is trivial. \vspace{0.3cm} \noindent {\it Proof.} By assumption, $\Lambda$ is a bounded subset of ${\mathord{\mathbb R}}^3$ with smooth boundary, except for isolated edges or corners. This means that around any point $x_0 \in \partial \Lambda$ on the boundary of $\Lambda$ there is an open neighborhood $V_{x_0}$ on which we may choose smooth coordinates $(y_1,y_2,y_3)$ such that $\Lambda \cap V_{x_0}$ corresponds to either the half space restriction $\{x\in V_{x_0}\ |\ y_3(x)<0\ \}$, the quarter space restriction $\{x\in V_{x_0}\ |\ y_2(x),y_3(x)<0\ \}$, or the octant restriction $\{x\in V_{x_0}\ |\ y_1(x), y_2(x),y_3(x)<0\ \}$.
By rescaling a coordinate we can achieve that the coordinate map $\varphi$ has Jacobian determinant equal to one\footnote{This can be achieved as follows. Let $\phi : x \mapsto w(x) = (w_1(x),w_2(x),w_3(x))$ be a coordinate map with Jacobian determinant not necessarily equal to one. Then, we define new coordinates \begin{displaymath} y_3(w) = \int_0^{w_3} | \det D \phi^{-1}(w_1,w_2,s) | \, ds \ \ , \quad y_1 = w_1 \ , \ y_2 = w_2 \; . \end{displaymath} It follows that $dy = | \det D \phi^{-1} | \, dw$ and thus the Jacobian determinant of the coordinate map $x \mapsto y(w(x))$ is 1.}. By possibly adjusting the coordinates and choosing the neighborhood $V_{x_0}$ smaller, we can achieve that the images of the coordinate neighborhoods $V_{x_0}$ under the coordinate maps are balls centered at the origin. By compactness there exist constants $C_{\Lambda} > 0$ and $r > 0$ such that for each point $x_0 \in \partial \Lambda$ we can choose a coordinate map $\varphi : V_{x_0} \to B$ such that \begin{displaymath} D \varphi^{-1} ( D\varphi^{-1})^T \geq C_{\Lambda} \; \end{displaymath} and moreover $|x - x_0 | < r$ implies $x \in V_{x_0}$. Given such a collection of coordinate maps for $\Lambda$ we obtain, by scaling, a collection of coordinate charts for $\Lambda_L$ with properties {\it(i)} and {\it(ii)}. Moreover, the constant $r$ becomes $L r$ under this scaling. Thus for large $L$, $\triangle_c = \triangle \cap \Lambda_L$ is contained in some coordinate chart. \qed \vspace{0.5cm} Let $-{\Delta}^{\rm M}_{\varphi (\triangle_c)}$ denote the Laplacian on $\varphi ( \triangle_c)$ with mixed boundary conditions, i.e., $\varphi$ maps Dirichlet (Neumann) boundaries of $\triangle_c$ to Dirichlet (Neumann) boundaries of $\varphi(\triangle_c)$. \begin{lemma} \label{sec:pro:ineq} Let $\varphi$ be a coordinate map as in Lemma \ref{sec:pro:geom}. Then the map $ U : L^2(V) \to L^2(B_0)$ $f \mapsto f \circ \varphi^{-1}$ is unitary. 
Moreover, on the form domain of $ - \Delta^{\rm M}_{\triangle_c}$ \begin{displaymath} - \Delta^{\rm M}_{\triangle_c} \geq U^* C_{\Lambda} ( - \Delta^{\rm M}_{\varphi(\triangle_c)}) U \; . \end{displaymath} \end{lemma} \vspace{0.3cm} \noindent {\it Proof.} Since the Jacobian determinant of $\varphi$ is one, $U$ is unitary. By abuse of notation we write $f(y)$ for $(U f ) (y) = f \circ \varphi^{-1}(y)$. We set $g^{ij} = ( D\varphi^{-1} ( D \varphi^{-1})^{T} )_{ij}$. For functions $f$ in the form domain of $- \Delta^{\rm M}_{\triangle_c}$ we write $( f , - \Delta^{\rm M}_{\triangle_c} f ) $ in terms of the $y$ coordinates and estimate \begin{eqnarray*} ( f , - \Delta^{\rm M}_{\triangle_c} f ) &&= \int_{\varphi(\triangle_c)} \sum_{i,j} g^{ij} \overline{\derb{y_i}{f}} \hspace{-1.5mm}(y) \derb{y_j}{f}(y) \, dy \\ && \geq \int_{\varphi(\triangle_c)} \sum_{i,j } C_{\Lambda} \delta^{ij} \overline{\derb{y_i}{f}} \hspace{-1.5mm}(y) \derb{y_j}{f}(y) d y \\ && = C_{\Lambda} ( Uf , - {\Delta}^{\rm M}_{\varphi(\triangle_c)} Uf ) \; . \end{eqnarray*} \qed \begin{lemma} \label{sec:pro:thm1} {\rm \bf (Lieb-Thirring estimate)} Let $v > 0$ be fixed. There exists a number $L_0$ and a constant $C_M$ (depending on $\Lambda$), such that for all $L \geq L_0 $ and $|\triangle| \leq v$ we have \begin{displaymath} \left| \operatorname{Tr}_{L^2(\triangle_c)} ( - \triangle^{\rm M}_{\triangle_c} + V )_- \right| \leq C_M \int_{\triangle_c} | V_-(x)|^{5/2} \, dx \; , \end{displaymath} where $V$ is any locally integrable function on $\triangle_c = \triangle \cap \Lambda_L$ with negative part $V_- \in L^{5/2}$. (Note that $\operatorname{Tr} A_-$ denotes the trace over the negative eigenvalues of the selfadjoint operator $A$.) 
\end{lemma} \vspace{0.5cm} \noindent {\it Proof.} We first observe that if $\triangle \cap \partial \Lambda_L = \emptyset$ then the estimate is a simple consequence of the classical Lieb-Thirring inequality \cite{lie:the}, since in that case we have Dirichlet boundary conditions on the whole boundary. Thus for a given $v$ let $L_0$ be as in Lemma \ref{sec:pro:geom}. Assume that $\triangle$ intersects the boundary of $\Lambda_L$ and that $|\triangle| \leq v$. Let $\varphi : V \to B_0$ be a coordinate map with the properties stated in Lemma \ref{sec:pro:geom}. Thus $\triangle_c \subset V$. We consider first the case where $V \cap \Lambda_L = \{ x \in V \ | \ y_3(x) < 0 \}$. On $B_0$ we define the reflection $\tau : (y_1,y_2,y_3) \mapsto (y_1,y_2,- y_3)$. By $\varphi(\triangle_c)^\tau $ we denote the interior of the closure of $\varphi(\triangle_c) \cup \tau(\varphi(\triangle_c))$. Given a function $h$ on $\varphi(\triangle_c)$ we extend it to a function $h^{\tau}$ defined a.e. on $\varphi(\triangle_c)^\tau $ by setting \begin{displaymath} h^{\tau}(y) = \left\{ \begin{array}{ll} h(y) \, , & \mathrm{if} \ y_3 < 0 \\ h(\tau(y)) \, ,& \mathrm{if} \ y_3 > 0 \; . \end{array} \right. \end{displaymath} This establishes the isometric injection \begin{eqnarray*} j : L^2(\varphi(\triangle_c)) &\to& L^2(\varphi(\triangle_c)^\tau) \\ f &\mapsto& 2^{-1/2} f^{\tau} \; . \end{eqnarray*} Thus for any locally integrable function $W$ we have \begin{displaymath} j^* ( - {\Delta}^{\rm D}_{\varphi(\triangle_c)^{\tau}} + W^{\tau}) j = - {\Delta}^{\rm M}_{\varphi(\triangle_c)} + W \; , \end{displaymath} where $- {\Delta}^{\rm D}_{\varphi(\triangle_c)^{\tau}}$ denotes the Dirichlet Laplacian on $\varphi(\triangle_c)^{\tau}$ w.r.t. the Euclidean metric $\delta^{ij}$. By the Neumann condition, $j$ maps the domain of $ - {\Delta}^{\rm M}_{\varphi(\triangle_c)}$ into the domain of $- {\Delta}^{\rm D}_{\varphi(\triangle_c)^{\tau}}$.
We then conclude, using Lemma \ref{sec:pro:ineq}, \begin{eqnarray*} \operatorname{Tr}_{L^2(\triangle_c)}( - \Delta^{\rm M}_{\triangle_c} + V )_- &\geq& \operatorname{Tr}_{L^2(\triangle_c)}( - \Delta^{\rm M}_{\triangle_c} + V_- )_- \\ & \geq & C_{\Lambda} \operatorname{Tr}_{L^2(\varphi(\triangle_c))} ( - {\Delta}^{\rm M}_{\varphi(\triangle_c)} + C_{\Lambda}^{-1} V_- )_- \\ &\geq& C_{\Lambda} \operatorname{Tr}_{L^2(\varphi(\triangle_c)^{\tau})} ( - {\Delta}^{\rm D}_{\varphi(\triangle_c)^{\tau}} + C_{\Lambda}^{-1} V_-^{\tau})_- \\ &\geq& - C_{\Lambda}^{-3/2} C_{\rm LT} \int_{\varphi(\triangle_c)^{\tau}} | V_-^{\tau}(y)|^{5/2} \, dy \\ &=& - 2 C_{\Lambda}^{-3/2} C_{\rm LT} \int_{\triangle_c} |V_-(x)|^{5/2} \, dx \; , \end{eqnarray*} where, by abuse of notation, we have denoted $V_-\circ\varphi^{-1}$ by $V_-$. In the second-to-last step we used the classical Lieb-Thirring estimate with constant $C_{\rm LT}$. If $\Lambda \cap V$ has an edge or a corner the proof is essentially the same; we just have to perform several reflections, which affects the value of the constant in the inequality by at most a factor 8, since in that case we have to consider the volume obtained by reflecting $\varphi(\triangle_c)$ on all Neumann planes. Likewise we have to extend functions defined on $\varphi(\triangle_c)$. The details are left to the reader. It follows that the Lemma holds for $C_M = 16 \, C_{\Lambda}^{-3/2} C_{\rm LT}$. \qed \vspace{0.3cm} \noindent {\it Proof of Lemma \ref{sec:pro:lem2}.} Let $\triangle$ be a simplex with $|\triangle| \leq v$. Let $L_0$ be sufficiently large such that the assertions of Lemmas \ref{sec:pro:geom} and \ref{sec:pro:thm1} hold. Consider now $\triangle_c = \triangle \cap \Lambda_L$ for $L \geq L_0$. The Coulomb interaction is \begin{eqnarray*} V_c(x_1,...,x_n,R_1,...,R_k) & = & \sum_{1 \leq i < j \leq n} \frac{1}{|x_i - x_j |} - z \sum_{i=1}^n \sum_{j=1}^{k} \frac{1}{|x_i - R_j |} \\ & & + z^2 \sum_{1 \leq i < j \leq k} \frac{1}{|R_i - R_j |} \; .
\end{eqnarray*} We introduce the nearest neighbor, or Voronoi, cells $\{ \Gamma_j \}_{j=1}^k $ defined by \begin{displaymath} \Gamma_j = \{ \, x \, | \ |x - R_j | \leq | x -R_l | \ \mathrm{ for \ all} \ l \neq j \, \} \; . \end{displaymath} Furthermore, define the distance $D_j$ of $R_j$ to the boundary of $\Gamma_j$, i.e., \begin{displaymath} D_j = \mathrm{dist} ( R_j , \partial \Gamma_j ) = \frac{1}{2} \min \{ | R_l - R_j | , \, j \neq l \} \; . \end{displaymath} By Theorem 6 in \cite{lieyau:the}, we have the following inequality \begin{equation} \label{sec:pro:vcx} V_c ( x_1,..., x_n , R_1 ,... ,R_k ) \geq - \sum_{i=1}^{n} W(x_i) + \frac{1}{8} z^2 \sum_{j=1}^k D_j^{-1} \; , \end{equation} where for $x$ in the cell $\Gamma_j$ \begin{displaymath} W(x) = \frac{2 z + 1}{|x - R_j |} \; . \end{displaymath} We note that, in the situation considered here, the coordinates $x_i$ and $R_j$ all lie in $\triangle_c$. Using inequality (\ref{sec:pro:vcx}), we find \begin{displaymath} H^{\rm M}_{\boldsymbol{N}, \triangle_c} - \mu_n n - \mu_k k \geq \sum_{i=1}^n h_i - \mu_k k + \frac{1}{8} z^2 \sum_{j=1}^k D_j^{-1} \; , \end{displaymath} with $h_i = -\Delta^{\rm M}_{\triangle_c, x_i} - W(x_i) - \mu_n$. The fermion ground state energy of $\sum_{i=1}^n h_i$ is bounded below by $ 2 \sum_j e_j$, where $e_j$ are the negative eigenvalues of $h_i$ (the factor 2 accounts for spin). Hence by Lemma \ref{sec:pro:thm1} ($f_+(x) = \mathrm{max} \, ( f(x) , 0 )$) \begin{eqnarray} \lefteqn{H^{\rm M}_{\boldsymbol{N}, \triangle_c} - \mu_n n - \mu_k k } \nonumber \\ & \geq & - 2 C_M \int_{\triangle_c} |(W + \mu_n)_+ |^{5/2} dx - \mu_k k + \frac{1}{8} z^2 \sum_{j=1}^{k} D_j^{-1} \nonumber \\ & \geq & - 2^{5/2} C_M \int_{\triangle_c} ( |W|^{5/2} + | {\mu_n}_+|^{5/2} ) dx - \mu_k k + \frac{1}{8} z^2 \sum_{j=1}^k D_j^{-1} \; , \label{eq:newu} \end{eqnarray} where, for the second inequality, we have used that $(a+b)^{5/2} \leq 2^{3/2} ( a^{5/2} + b^{5/2} )$ for $a,b \geq 0$.
We estimate the first term using \begin{eqnarray*} \int_{\triangle_c} | W|^{5/2} dx & = & \sum_{j=1}^k \int_{\Gamma_j \cap \Delta_c} W_j(x)^{5/2} dx \\ & \leq & \sum_{j=1}^k \int_{|x - R_j | \leq R} (2z + 1)^{5/2} | x - R_j |^{-5/2} dx \\ & & + \sum_{j=1}^k \int_{ {x \in \Gamma_j \cap \triangle_c} \atop {|x - R_j | \geq R }} (2z + 1)^{5/2} R^{-5/2} dx \\ & \leq & (2z + 1)^{5/2} ( 4 \pi k R^{1/2} + | \triangle_c | R^{-5/2} ) \\ & \leq & (2z + 1 )^{5/2} 6 \left( \frac{4 \pi}{5} \right)^{5/6} | \triangle|^{1/6} k^{5/6} \; , \end{eqnarray*} where we have made the optimal choice $R = ( 5 |\triangle| / 4 \pi k )^{1/3}$. To estimate the term involving the $D_j$, we note that for $k \geq 2$, \begin{equation} \label{sec:pro:sd3} \sum_{j=1}^k D_j^3 \leq \lambda | \triangle | \; , \end{equation} for some constant $\lambda > 0$; this holds since the balls $B(R_j, D_j)$ are pairwise disjoint and contained in a neighborhood of $\triangle$ whose volume is bounded by a fixed multiple of $|\triangle|$. Using H\"older's inequality, i.e., \begin{displaymath} k = \sum_{j=1}^k D_j^{-3/4} D_j^{3/4} \leq \left( \sum_{j=1}^k D_j^{-1} \right)^{3/4} \left( \sum_{j=1}^k D_j^3 \right)^{1/4} \; , \end{displaymath} we find \begin{displaymath} k^{4/3} \lambda^{-1/3} |\triangle|^{-1/3} \leq \sum_{j=1}^k D_j^{-1} \end{displaymath} for $k \geq 2$. Inserting this into (\ref{eq:newu}), we have \begin{displaymath} H^{\rm M}_{\boldsymbol{N}, \triangle_c} - \mu_n n - \mu_k k \geq - C_1 | \triangle|^{1/6} k^{5/6} - C_2 | \triangle | |{\mu_n}_+ |^{5/2} - \mu_k k + C_3 |\triangle|^{-1/3} k^{4/3} ( 1 - \delta_{k1}) \end{displaymath} for some positive constants $0 < C_i < \infty \ , \; (i=1,2,3)$, which depend only on $z$ and $\Lambda$. The case $k=1$ is accounted for by $(1 - \delta_{k1} )$. We minimize with respect to $k$ with the result that \begin{displaymath} H^{\rm M}_{\boldsymbol{N}, \triangle_c} - \mu_n n - \mu_k k \geq C_E(\boldsymbol{\mu} , v ) \; , \ \forall \ \ |\triangle| \leq v \; , \end{displaymath} for some constant $ C_E(\boldsymbol{\mu}, v ) \in {\mathord{\mathbb R}}$. Hence we have shown (a).
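As an illustrative sanity check (not part of the proof), the optimization over $R$ above can be verified numerically: the function $4\pi k R^{1/2} + |\triangle| R^{-5/2}$ is minimized at $R = (5|\triangle|/4\pi k)^{1/3}$, where it takes the stated value $6(4\pi/5)^{5/6}|\triangle|^{1/6}k^{5/6}$. A short Python sketch, with arbitrary sample values of $k$ and $|\triangle|$:

```python
import math

def w_bound(k, vol, R):
    # Upper bound 4*pi*k*sqrt(R) + vol*R**(-5/2) on the |W|^(5/2) integral,
    # modulo the (2z+1)**(5/2) prefactor, as a function of the cutoff R.
    return 4 * math.pi * k * math.sqrt(R) + vol * R ** (-2.5)

def optimal_R(k, vol):
    # Stationarity: 2*pi*k*R**(-1/2) = (5/2)*vol*R**(-7/2), i.e. R**3 = 5*vol/(4*pi*k).
    return (5 * vol / (4 * math.pi * k)) ** (1 / 3)

k, vol = 7, 2.3                      # arbitrary sample values
R = optimal_R(k, vol)
claimed = 6 * (4 * math.pi / 5) ** (5 / 6) * vol ** (1 / 6) * k ** (5 / 6)
assert abs(w_bound(k, vol, R) - claimed) < 1e-9 * claimed
# the stationary point is indeed a minimum: nearby values of R do worse
assert w_bound(k, vol, 0.9 * R) > claimed and w_bound(k, vol, 1.1 * R) > claimed
```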
\vspace{0.3cm} To show (b) we decompose the kinetic energy \begin{displaymath} T^{\rm M}_{\triangle_c} = - \sum_{i=1}^n \Delta^{\rm M}_{ \triangle_c, x_i} - (1/M) \sum_{j=1}^k \Delta^{\rm M}_{\triangle_c, R_j} \end{displaymath} and use the same calculations as in (a). As a result \begin{eqnarray*} \lefteqn{H^{\rm M}_{\boldsymbol{N}, \triangle_c} - \mu_n n - \mu_k k } \\ & = & \frac{1}{2} T^{\rm M}_{\triangle_c} + \frac{1}{2} (T^{\rm M}_{\triangle_c} + 2 V_c - 2 \mu_n n - 2 \mu_k k ) \\ & \geq & \frac{1}{2} T^{\rm M}_{\triangle_c} + \phi(|\triangle|, \boldsymbol{\mu} , k ) \; , \end{eqnarray*} with \begin{displaymath} \phi(|\triangle|, \boldsymbol{\mu} , k ) := - 2^{3/2} C_1 | \triangle|^{1/6} k^{5/6} - C_2 | \triangle | |2 {\mu_n}_+|^{5/2} - 2 \mu_k k + C_3| \triangle|^{-1/3} k^{4/3} ( 1 - \delta_{k1}) \; . \end{displaymath} We estimate the grand canonical partition function as follows \begin{eqnarray} \Xi^{\rm M}(\beta , \boldsymbol{\mu} , \triangle_c ) &=& \operatorname{Tr}_{\mathcal{H}_{\triangle_c}} e^{- \beta ( H^{\rm M}_{\triangle_c} - \boldsymbol{\mu} \cdot \boldsymbol{N} )} \nonumber \\ &\leq& \sum_{n=0}^{\infty} \operatorname{Tr}_{\bigwedge^n L^2(\triangle_c \times {\mathord{\mathbb Z}}_2)} e^{ \beta \frac{1}{2} \sum_{i=1}^n \Delta^{\rm M}_{\triangle_c, x_i}} \nonumber \\ & & \ \times \sum_{k=0}^{\infty} \operatorname{Tr}_{L^2(\triangle_c)^{\otimes k}} e^{ \beta \frac{1}{2M} \sum_{j=1}^k \Delta^{\rm M}_{\triangle_c, R_j}} \cdot e^{- \beta \phi(|\triangle|, \boldsymbol{\mu} , k )} \label{sec:pro:xmb} \; . \end{eqnarray} If $\triangle$ does not intersect $\partial \Lambda_L$ then we have only Dirichlet boundary conditions, and in this case it is known that the desired bound exists. It remains to consider the case where $\triangle$ intersects the boundary of $\Lambda_L$. Let $\varphi : V \to B_0$ be a map with the properties as given in Lemma \ref{sec:pro:geom}. We shall first consider the case where $V \cap \Lambda_L = \{ x \in V \ | \ y_3(x) < 0 \ \}$.
We now use the reflection argument and the notation as introduced in the proof of Lemma \ref{sec:pro:thm1}. There we have shown that on the form domain of $- \triangle^{\rm M}_{\triangle_c}$, \begin{displaymath} - \Delta^{\rm M}_{\triangle_c} \geq U^* C_{\Lambda} ( - \Delta^{\rm M}_{\varphi(\triangle_c)} ) U = U^* j^* C_{\Lambda} ( - \Delta^{\rm D}_{\varphi(\triangle_c)^{\tau}} ) j U \; . \end{displaymath} Using this estimate we find \begin{displaymath} \sum_{n=0}^{\infty} \operatorname{Tr}_{\bigwedge^n L^2(\triangle_c \times {\mathord{\mathbb Z}}_2)} e^{ \beta \frac{1}{2} \sum_{i=1}^n \Delta^{\rm M}_{\triangle_c, x_i}} \leq \sum_{n=0}^{\infty} \operatorname{Tr}_{\bigwedge^n L^2(\varphi(\triangle_c)^{\tau} \times {\mathord{\mathbb Z}}_2 )} e^{\beta \frac{1}{2} C_{\Lambda} \sum_{i=1}^n \Delta^{\rm D}_{\varphi(\triangle_c)^{\tau}, x_i}} \; . \end{displaymath} The right hand side of this equation is the grand canonical partition function of an ideal Fermi gas with Dirichlet boundary conditions, which is known to be bounded above. Similarly we estimate \begin{eqnarray*} \operatorname{Tr}_{L^2(\triangle_c)^{\otimes k}} e^{ \beta \frac{1}{2M} \sum_{j=1}^k \Delta^{\rm M}_{\triangle_c, R_j}} & = & \left( \operatorname{Tr}_{L^2(\triangle_c)} e^{ \beta \frac{1}{2M} \Delta^{\rm M}_{\triangle_c}} \right)^k \\ & \leq & \left( \operatorname{Tr}_{L^2(\varphi(\triangle_c)^{\tau})} e^{ \beta \frac{1}{2M} C_{\Lambda} \Delta^{\rm D}_{\varphi(\triangle_c)^{\tau}}} \right)^k \\ & \leq & \left( \left(\frac{M}{2 \pi \beta C_{\Lambda} } \right)^{3/2} | \varphi(\triangle_c)^{\tau} | \right)^k \; , \end{eqnarray*} where the last inequality follows from a standard estimate \cite{fis:tfe}. Note that $| \varphi(\triangle_c)^{\tau} | \leq 2 | \triangle|$. We insert the above inequalities into eq. (\ref{sec:pro:xmb}). The sum over $k$ converges, due to the term with $k^{4/3}$. Thus we have shown (b) for the case where $V \cap \Lambda_L$ does not have any edges or corners. 
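The standard estimate from \cite{fis:tfe} invoked above is $\operatorname{Tr}\, e^{t \Delta^{\rm D}_{\Omega}} \leq (4 \pi t)^{-3/2} |\Omega|$. Its one-dimensional analogue on an interval of length $\ell$, namely $\sum_{n \geq 1} e^{-t \pi^2 n^2 / \ell^2} \leq (4\pi t)^{-1/2} \ell$, follows by comparing the sum with a Gaussian integral and can be checked numerically; the following sketch is illustrative only and plays no role in the proof:

```python
import math

def dirichlet_trace_1d(t, ell, nmax=5000):
    # Heat trace sum_{n>=1} exp(-t * (pi*n/ell)**2) on an interval of length ell,
    # truncated at nmax (the tail is negligible for the values of t used below).
    return sum(math.exp(-t * (math.pi * n / ell) ** 2) for n in range(1, nmax))

for t in (0.01, 0.1, 1.0):
    for ell in (1.0, 2.0):
        trace = dirichlet_trace_1d(t, ell)
        bound = ell / math.sqrt(4 * math.pi * t)
        assert trace <= bound   # the sum is dominated by the Gaussian integral
```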
If $V \cap \Lambda_L$ has an edge or a corner the proof is essentially the same; we just have to perform several reflections. We leave the details to the reader. It turns out that in the estimates above $\varphi(\triangle_c)^{\tau}$ is replaced by the volume obtained when reflecting $\varphi(\triangle_c)$ on all Neumann planes. Each of the three cases gives us a constant. Taking the largest we obtain the desired bound. \qed \subsection{Proof of Theorem \ref{sec:int:thm2}} \label{sec:pro4} \label{subsec:34} As mentioned in Section \ref{sec:modres}, the case $\sigma \geq 0$ is trivial. Thus let $\sigma < 0$. Everything in the proof of Theorem \ref{sec:int:thm1} holds if we replace Neumann boundary conditions with elastic boundary conditions. The only part of the proof which does not generalize trivially to elastic boundary conditions is the proof of Lemma \ref{sec:pro:lem2}. We circumvent this by showing that the Laplacian with elastic boundary conditions can be bounded from below in terms of the Laplacian with Neumann boundary conditions. We recall that $-\Delta^{\rm M}_{\triangle_c}$ is the unique self-adjoint operator on $L^2(\triangle_c)$ whose quadratic form is the closure of the form $\phi \mapsto \int_{\triangle_c} | \nabla \phi |^2 \, dx$ with domain $\mathcal{D} = \{ \, \phi \in H^1(\triangle_c) \cap C(\overline{\triangle}_c) \, | \, \phi \ \mathrm{vanishes \ in \ a \ neighborhood \ of} \ \partial \triangle \cap \Lambda_L \, \}$. Let $-\Delta^{{\rm M}, \sigma}_{\triangle_c}$ be the unique self-adjoint operator on $L^2(\triangle_c)$ whose quadratic form is the closure of the form \begin{displaymath} \phi \mapsto \int_{\triangle_c} | \nabla \phi |^2 \, dx + \sigma \int_{\triangle \cap \partial \Lambda_L} |\phi|^2 \, dS \; \end{displaymath} with domain $\mathcal{D}$.
Below we will show that for all $\triangle_c = \triangle \cap \Lambda_L$, with $|\triangle| \leq v$ and $L \geq 1$, \begin{equation} \label{eq:rest} - \Delta^{{\rm M},\sigma}_{\triangle_c} \geq \tau ( - \Delta^{\rm M}_{\triangle_c} ) - C \; , \end{equation} for some $\tau$, with $0 < \tau \leq 1$, and some finite constant $C \geq 0$ depending only on $\sigma$ and the geometry of $\Lambda$. Thus setting $\boldsymbol{c} = (C,C)$ we have \begin{eqnarray*} H^{{\rm M},\sigma}_{\triangle_c} &=& T^{{\rm M},\sigma}_{\triangle_c} + V \\ &\geq& \tau T^{\rm M}_{\triangle_c} + V - \boldsymbol{c} \cdot \boldsymbol{N} \\ &\geq& \tau^{-1} ( \tau^2 T^{\rm M}_{\triangle_c} + \tau V ) - \boldsymbol{c} \cdot \boldsymbol{N} \\ &\cong& \tau^{-1} H^{\rm M}_{\tau^{-1} \triangle_c} - \boldsymbol{c} \cdot \boldsymbol{N} \; . \end{eqnarray*} From \begin{displaymath} \operatorname{Tr}_{\mathcal{H}_{\triangle_c}} e^{- \beta(H^{{\rm M},\sigma}_{\triangle_c} - \boldsymbol{\mu} \cdot \boldsymbol{N})} \leq \operatorname{Tr}_{\mathcal{H}_{\tau^{-1} \triangle_c}} e^{- \tau^{-1} \beta(H^{\rm M}_{\tau^{-1} \triangle_c} - \tau (\boldsymbol{\mu} + \boldsymbol{c} ) \cdot \boldsymbol{N} ) } \leq C_{\Xi} ( \tau^{-1} \beta, \tau ( \boldsymbol{\mu} + \boldsymbol{c}), \tau^{-3} v ) \; \end{displaymath} it is now evident that for $\sigma <0$ an analog of Lemma \ref{sec:pro:lem2} for elastic boundary conditions holds. It remains to show (\ref{eq:rest}). Let \begin{eqnarray*} \xi : \overline{\Lambda} &\to& {\mathord{\mathbb R}}^3 \\ x &\mapsto& \xi(x) = (\xi_1(x),\xi_2(x),\xi_3(x)) \end{eqnarray*} be a real vector field, continuously differentiable in the closed region $\overline{\Lambda}$ and satisfying the boundary condition $\boldsymbol{n} \cdot \xi \leq - 1$ on $\partial \Lambda$, where $\boldsymbol{n}$ denotes the inward normal. First observe that such a vector field exists. If $\Lambda$ is a box or has smooth boundary this is clear.
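For example, if $\Lambda$ is the cube $(-1,1)^3$ the field $\xi(x) = x$ works: on the face $\{ x_i = \pm 1 \}$ the inward normal is $\mp e_i$, so $\boldsymbol{n} \cdot \xi = -1$ there. A small numerical spot check of this claim (illustrative only):

```python
import random

def inward_normal(x, tol=1e-12):
    # Inward normal of the cube (-1,1)^3 at a boundary point x lying on a face.
    for i in range(3):
        if abs(x[i] - 1.0) < tol:
            n = [0.0, 0.0, 0.0]
            n[i] = -1.0
            return n
        if abs(x[i] + 1.0) < tol:
            n = [0.0, 0.0, 0.0]
            n[i] = 1.0
            return n
    raise ValueError("not a face point of the cube")

random.seed(0)
for _ in range(1000):
    x = [random.uniform(-1.0, 1.0) for _ in range(3)]
    i = random.randrange(3)
    x[i] = random.choice((-1.0, 1.0))          # random point on a face
    xi = x                                     # candidate vector field xi(x) = x
    n = inward_normal(x)
    assert sum(n[j] * xi[j] for j in range(3)) <= -1.0 + 1e-12
```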
Consider now the general case, where the boundary of $\Lambda$ has isolated edges and corners. Since $\Lambda$ is bounded we can cover it with finitely many sufficiently small open sets $V_{\gamma}$, with the property that on each of these sets we can choose coordinates $x \mapsto y(x) = (y_1(x),y_2(x),y_3(x))$ such that $\Lambda \cap V_{\gamma}$ corresponds to either $V_\gamma$, the half space restriction $\{x\in V_{\gamma} \ |\ y_3(x)<0 \}$, the quarter space restriction $\{x\in V_{\gamma} \ |\ y_2(x),y_3(x)<0 \}$, or the octant restriction $\{x\in V_{\gamma} \ |\ y_1(x), y_2(x),y_3(x)<0 \}$, and such that there exists a vector field on $V_\gamma$ which is constant in the coordinate chart and satisfies the required property on $V_\gamma$. Pasting these local vector fields together by means of a partition of unity on $\Lambda$ subordinate to the open covering, we obtain a smooth vector field such that $\boldsymbol{n} \cdot \xi \leq - 1$ on $\partial \Lambda$. Given such a vector field on $\Lambda$, then $\xi_L(x) = \xi(x/L)$, for $L \in {\mathord{\mathbb R}}_{+}$, is a vector field on $\Lambda_L$ with $\boldsymbol{n} \cdot \xi_L \leq - 1$ (here $\boldsymbol{n}$ denotes the inward normal of $\Lambda_L$). For $\phi \in H^1(\triangle_c) \cap C(\overline{\triangle}_c)$ vanishing in a neighborhood of $\partial \triangle \cap \Lambda_L$ we have \begin{eqnarray*} \int_{\triangle \cap \partial \Lambda_L} | \phi |^2 \, dS \leq \int_{\triangle \cap \partial \Lambda_L} |\phi|^2 \, \xi_L \cdot ( - \boldsymbol{n}) \, dS = \int_{\triangle_c} \nabla \cdot ( \xi_L |\phi|^2) \, dx \; , \end{eqnarray*} where the equality follows from Gauss' Theorem. We calculate \begin{eqnarray*} \nabla \cdot ( \xi_L |\phi|^2) = (\nabla \cdot \xi_L )| \phi|^2 + \xi_L \cdot ( \nabla \phi^* ) \phi + \xi_L \cdot \phi^* ( \nabla \phi) \; , \end{eqnarray*} and for any $\epsilon > 0$ we have \begin{displaymath} | \xi_L \cdot ( \nabla \phi^* ) \phi + \xi_L \cdot \phi^* ( \nabla \phi)| \leq \frac{1}{\epsilon} | \nabla \phi |^2 + \epsilon | \xi_L \phi |^2 \; .
\end{displaymath} As a result \begin{eqnarray*} \int_{\triangle \cap \partial \Lambda_L} | \phi |^2 \, dS \leq \int_{\triangle_c} \left( \frac{1}{\epsilon} |\nabla \phi|^2 + \epsilon |\xi_L \phi|^2 + | \nabla \cdot \xi_L | | \phi |^2 \right) \, dx \; . \end{eqnarray*} This implies (\ref{eq:rest}): since $\sup |\xi_L| = \sup |\xi|$ and $\sup | \nabla \cdot \xi_L | = L^{-1} \sup | \nabla \cdot \xi |$ are bounded uniformly in $L \geq 1$, multiplying the last inequality by $|\sigma|$ and choosing, say, $\epsilon = 2 |\sigma|$ yields (\ref{eq:rest}) with $\tau = 1/2$ and a constant $C$ depending only on $\sigma$ and $\xi$. \qed \section*{Acknowledgement} D.H. thanks the Department of Mathematics at the University of Copenhagen, where this work was started.
nlin/0411045
\section{Introduction} Chaos in simple power system models has been studied extensively in recent papers. In Abed {\em et al.} [1993] and Tan {\em et al.} [1993], bifurcations and chaos in a three node power system with a dynamic load model were studied using a classical model for the generator. In Rajesh \& Padiyar [1999], the authors studied dynamic bifurcations in a similar system and reported the existence of chaos even with detailed models. However, in Rajesh \& Padiyar [1999] it was observed that the field voltage assumed unrealistic values at the onset of chaos, owing to the unmodeled effect of excitation limits. Though a limiter is fairly easy to model for simulation purposes, the effect of a limiter on dynamic bifurcations has been poorly understood, because bifurcation analysis demands smoothness of the functions describing the model. Limit-induced chaotic behavior in a Single Machine Infinite Bus system was studied in Ji \& Venkatasubramanian [1996] by extensive numerical simulations. In this paper, we approximate the limiter by a smooth function to facilitate bifurcation analysis and study the changes that arise when it is taken into account. The rest of the paper is organized as follows. Section 2 deals with the modeling of the system along with the limiter. Section 3 presents the results of a bifurcation analysis. Section 4 contains the discussions and Sec. 5, the conclusions. \section{System Modeling} The system as considered in Rajesh \& Padiyar [1999] is shown in Fig.~\ref{syst}. By a suitable choice of line impedances, we may regard the system as one of a generator supplying power to a local load which in turn is connected to a remote system modeled as an infinite bus. For the general reader's convenience, a brief explanation of the terms d-q and D-Q axis is provided here. The modeling and analysis of three phase synchronous machines is complicated by the fact that the basic machine equations are {\bf time varying}.
This is circumvented by the use of Park's transformation, which transforms the time varying machine equations into a time invariant set. The three phase stator quantities (like voltage, current and flux), when transformed into Park's frame, yield the corresponding d-q-o variables. When a generator is described in the d-q frame, the external network connected to it should naturally also be described in the same reference frame. However, the non-uniqueness of Park's transformation (each generator has its own d-q components) prevents us from doing so. In order to transform the entire network using a single transformation with reference to a common reference frame, Kron's transformation, in which the variables are denoted by D-Q-O, is used. For a complete, detailed and clear exposition of these concepts in power system modeling, the reader is referred to Padiyar [1996]. \begin{figure}[htbp] \centerline{\input{syst.tex}} \caption{The System} \label{syst} \end{figure} \subsection{Generator Model} {\bf Rotor Equations}\\ The rotor mechanical equations for the generator, as given by the swing equations, are \begin{eqnarray} \dot{\delta} = \omega_Bs_m\\ \dot{s_m} = \frac{-ds_m+ P_m - P_g}{2H} \\ \nonumber \end{eqnarray} where $d$ is the damping factor in per unit, $\omega_B$ is the system frequency in rad/s, $P_m$ is the input power to the generator and $s_m$ is the generator slip, defined by \begin{equation} s_m = \frac{\omega - \omega_B}{\omega_B} \end{equation} Two electrical circuits are considered on the rotor, the field winding on the d-axis and one damper winding on the q-axis.
The resulting equations are \begin{eqnarray} \dot{E^{\prime}_q} = \frac{-E'_q + (x_d-x_d')i_d + E_{fd}}{T^{\prime}_{do}}\\ \dot{E'_d} = \frac{-E'_d - (x_q-x'_q)i_q}{T'_{qo}}\\ \nonumber \end{eqnarray} The power delivered by the generator, $P_g$, can be expressed as \begin{equation} P_g = E'_qi_q + E'_di_d + (x'_d - x'_q)i_di_q \end{equation} {\bf Stator Equations}\\ Neglecting stator transients and the stator resistance, we have the following algebraic equations: \begin{eqnarray} E'_q + x'_di_d = v_q\\ E'_d - x'_qi_q = v_d\\ \nonumber \end{eqnarray} \subsection{Excitation System} The excitation system for the generator is represented by a single-time-constant, high-gain AVR and the limiter, as shown in Fig.~\ref{exci}. \begin{figure}[htbp] \centerline{\input{exci.tex}} \caption{Excitation System} \label{exci} \end{figure} The equation for this excitation system is given by \begin{equation} \dot{E}_{fdx} = \frac{ -E_{fdx} + K_A(V_{ref} - V_{t})}{T_A} \end{equation} \begin{equation} E_{fd} = \left\{ \begin{array}{ll} E_{fdx} \, , & \mathrm{if} \ E_{fd}^{min} < E_{fdx} < E_{fd}^{max} \\ E_{fd}^{min} \, , & \mathrm{if} \ E_{fdx} \leq E_{fd}^{min} \\ E_{fd}^{max} \, , & \mathrm{if} \ E_{fdx} \geq E_{fd}^{max} \end{array} \right. \label{eq:softi} \end{equation} The limiter shown in Fig.~\ref{exci} and defined by Eq.~(\ref{eq:softi}) is a soft or windup limiter. This limiter model cannot be directly used for bifurcation studies. An approximate model where the limiter is described by a smooth function is given below (see Fig.~\ref{flim}). Here, we consider symmetric limits, i.e.,
$| E_{fd}^{max}|\;=\;| E_{fd}^{min}|\;=\; E_{fdl}$ \begin{equation} E_{fd} \; = \; f_{lim}(E_{fdx}) \; = \frac{2E_{fdl}}{\pi}\tan^{-1}(aE_{fdx}\, \exp(bE^2_{fdx})) \end{equation} \begin{figure}[htbp] \includegraphics[height=1.5in, width=3in]{limfun.eps} \caption{The function representing the limiter} \label{flim} \end{figure} {\bf Remarks}\\ Such an approximation amounts to perturbing the vector field slightly, and hence the equilibrium structure of the system will also be slightly perturbed. In our studies, the focus will therefore be on how the limiter influences non-stationary solutions and their bifurcations. \subsection{Load Model} A dynamic load model as in Abed {\em et al.} [1993] is used, along with a constant power load $ (P_{ld},Q_{ld}) $ in parallel with it. Thus, the real and reactive load powers are specified by the following equations: \begin{eqnarray} P &=& P_{ld} + P_o + p_1\dot{\delta_L} + p_2\dot{V_L} + p_3V_L \label{eq:pld}\\ Q &=& Q_{ld} + Q_o + q_1\dot{\delta_L} + q_2V_L + q_3V_L^2 \label{eq:qld} \end{eqnarray} \subsection{Network Model} With the notation defined in Fig.
1, we can write the network equations in the D-Q reference frame as \begin{eqnarray} \hat{E_b} + \frac{\hat{i_3}}{\hat{Y_3}} = \hat{V_t} \label{eq:fir}\\ \hat{V_L} + \frac{\hat{i_1}}{\hat{Y_1}}=\hat{V_t} \end{eqnarray} \newline Further, \begin{eqnarray} \hat{V_t} \;\;\; = (v_q +jv_d)e^{j\delta} \label{eq:vt} \\ \hat{i}\;\;\; =\;\;\; (i_q+ji_d )e^{j\delta}\;\;\;=\;\;\;\hat{i_1}\;\;\; +\;\;\; \hat{i_3}\\ \hat{Y} = Y\angle{\phi} = \hat{Y_1} + \hat{Y_3} \label{eq:las} \end{eqnarray} From Eqs.~(\ref{eq:fir})--(\ref{eq:las}) we can write \begin{equation} (v_q+jv_d) = \frac{A_1 + B_1} {Y} \end{equation} where $A_1=E_bY_3e^{-j(\delta+\phi - \phi_3)}+ Y_1V_Le^{j(\delta_L-\delta-\phi +\phi_1)}$ \\and $B_1=(i_q+ji_d)e^{-j\phi}$.\\ Defining \begin{eqnarray} a = E_bY_3\cos(\delta+\phi-\phi_3) + Y_1V_L\cos(\delta_L-\delta-\phi+\phi_1)\\ b = -E_bY_3\sin(\delta+\phi-\phi_3) + Y_1V_L\sin(\delta_L-\delta-\phi+\phi_1) \end{eqnarray} \newline permits us to write \begin{eqnarray} i_q\cos(\phi) + i_d\sin(\phi) = Yv_q - a\\ i_d\cos(\phi) - i_q\sin(\phi) = Yv_d -b \end{eqnarray} \subsection{Derivation of the System Model} Substituting for $v_d$ and $v_q$ from the stator algebraic equations (7) and (8), we have\\ \begin{eqnarray} \left[\begin{array}{cc} \cos(\phi) & (\sin(\phi)-Yx^{\prime}_d)\\ -(\sin(\phi)-Yx^{\prime}_q) & \cos(\phi) \end{array} \right]\left[\begin{array}{c} i_q \\ i_d \end{array}\right] =\left[\begin{array}{c} Y_a\\Y_b\end{array}\right]\\ \nonumber \label{eq:sys} \end{eqnarray} where \hspace*{0.1cm} $Y_a=(YE^{\prime}_q -a)$\\ and \hspace*{0.3cm} $Y_b=(YE'_d -b)$. From Eq.~(\ref{eq:sys}), we can solve for the currents $i_d,\;i_q$ and subsequently obtain $v_d$ and $v_q$ from the stator algebraic equations. Further, from Eq.
(\ref{eq:vt}) we get \begin{eqnarray} |{\hat{V_t}}| =\sqrt{(v_q^2 + v_d^2)}\\ \theta = \delta + \tan^{-1}({\frac{v_d}{v_q}}) \label{eq:thet} \end{eqnarray} Defining \begin{eqnarray} r_1\;\;\;=\;\;\; \delta_L - \theta - \phi_1\\ r_2 \;\;\;=\;\;\;\delta_L-\phi_2\\ \nonumber \end{eqnarray} the power balance equations at bus 2 can be written as \begin{eqnarray} P = V_tV_LY_1\cos(r_1) - V_L^2Y_1\cos(\phi_1) + E_bV_LY_2\cos(r_2) \nonumber \\- V_L^2Y_2\cos(\phi_2)\label{eq:p}\\ Q = V_tV_LY_1\sin(r_1) + V_L^2Y_1\sin(\phi_1) + E_bV_LY_2\sin(r_2) \nonumber \\+ V_L^2Y_2\sin(\phi_2)\label{eq:q} \end{eqnarray} Substituting from Eqs.~(\ref{eq:sys})--(\ref{eq:thet}) and (\ref{eq:p})--(\ref{eq:q}) in Eqs.~(1)--(5) and (\ref{eq:pld})--(\ref{eq:qld}), we get \begin{eqnarray} \bf \dot{x} = \bf f(\bf x,\lambda) \end{eqnarray} where \begin{math} \bf x = \left[\begin{array}{ccccccc} \delta & s_m & E'_q & E'_d & E_{fdx}& \delta_L & V_L\end{array}\right]^T\end{math} and $\lambda$ is a bifurcation parameter. As a simplification, we shall also consider the system described by the One Axis Model for the generator, as the effect of the limiter on this case is interesting in itself. For this, we neglect the damper winding on the q-axis; in terms of modeling, this is done by omitting $E'_d$ as a state variable and substituting \begin{equation} E'_d = -(x_q - x'_q)i_q \end{equation} in Eqs.~(6) and (8). The state space structure remains the same, with the dimension being one less than that of the previous system. In this case, we have \begin{math} \bf x = \left[\begin{array}{cccccc} \delta & s_m & E'_q & E_{fdx}& \delta_L & V_L\end{array}\right]^T\end{math}. \section{Bifurcations} In this section, we illustrate the qualitative differences which arise on consideration of the limiter by studying bifurcations in the associated systems with {\bf AUTO97} (Doedel [1997]), a continuation and bifurcation software for ordinary differential equations.
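Before presenting the continuation results, the smooth limiter approximation $f_{lim}$ of Sec.~2 can be probed numerically. The sketch below checks that it is odd and saturates at $\pm E_{fdl}$; the values chosen for $a$, $b$ and $E_{fdl}$ are arbitrary illustrative choices, not the parameters used in the computations of this paper:

```python
import math

def f_lim(e_fdx, e_fdl=6.0, a=5.0, b=1.0):
    # Smooth windup-limiter approximation (2*E_fdl/pi) * atan(a*x*exp(b*x**2)).
    # e_fdl, a, b are arbitrary illustrative values, not the paper's parameters.
    return (2.0 * e_fdl / math.pi) * math.atan(a * e_fdx * math.exp(b * e_fdx ** 2))

assert f_lim(0.0) == 0.0
for x in (0.5, 1.0, 2.0, 3.0):
    assert abs(f_lim(x) + f_lim(-x)) < 1e-12    # odd, like the hard limiter
assert abs(f_lim(4.0) - 6.0) < 1e-6             # saturates at +E_fdl
assert abs(f_lim(-4.0) + 6.0) < 1e-6            # saturates at -E_fdl
```

The factor $\exp(bE_{fdx}^2)$ sharpens the transition, so that $f_{lim}$ stays close to the hard limiter of Eq.~(\ref{eq:softi}) while remaining smooth.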
The generator input power ($P_m$) is a very important parameter in practical power system operation. This is the parameter which is adjusted or varied by the power system operators (utility) to track the changes and variations in the system load (power demand) so as to maintain a stable operating condition. We hence consider $P_m$, i.e., the input power to the generator, as the bifurcation parameter. To describe the types of bifurcations, we shall use the following notation.\newline SNB: Saddle Node Bifurcation\\ HB: Hopf Bifurcation\\ CFB: Cyclic Fold Bifurcation\\ TR: Torus Bifurcation\\ PDB: Period Doubling Bifurcation\\ In all the bifurcation diagrams the state variable $E_{fdx}$ is plotted against the bifurcation parameter. In the case of periodic solutions, we use the maximum value of the variable, which is indicated by the circles. Filled circles refer to stable solutions and the unfilled ones to unstable solutions. \subsection{One Axis Model} {\bf Without limiter}\\ From Fig.~\ref{sax1}, we note that the stationary solutions undergo four bifurcations, labeled HB$^1$, HB$^2$, HB$^3$ and SNB$^4$. For $\lambda < \lambda^1$, the equilibrium point is stable, but as $\lambda$ is increased, the stationary point loses its stability at $\lambda = \lambda^1$ through HB$^1$. With a further increase in $\lambda$, the stationary point regains stability through HB$^2$, i.e., at $\lambda = \lambda^2$. It remains stable until $\lambda = \lambda^3$, where stability is lost through HB$^3$. Further, SNB$^4$ does not influence the stability of the stationary point. Next, we focus on the family of periodic solutions emerging from HB$^1$. Since HB$^1$ is supercritical, it gives birth to a family of stable periodic solutions indicated by the filled circles. This periodic solution loses its stability at TR$^5$ and, with a further increase in $\lambda$, gains it back through TR$^6$ and remains stable until TR$^7$.
Further on, there is no qualitative change in its behavior through TR$^7$, CFB$^8$ and TR$^9$. Next, we find that the branch emerging on continuation of HB$^2$ is the same as that from HB$^1$. On continuation of HB$^3$, we find a family of unstable periodic solutions which gain stability through CFB$^{10}$. This stable periodic solution encounters PDB$^{11}$, on continuation of which we find a period doubling cascade accumulating at a critical value $\lambda^c = 0.931$ (not shown here), which strongly suggests the onset of chaos. However, what is of interest here is the behavior of the system after TR$^5$. A torus bifurcation results in the emergence of quasi-periodic solutions. This is verified by simulation, as shown in Fig.~\ref{mas1}, which displays the quasi-periodic attractor. The bifurcation points are summarized in Table~\ref{tsax1}. \begin{small} \begin{table}[htbp] \begin{center} \caption{Bifurcation Points (see Fig.~\ref{sax1}) \label{tsax1} } \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|l|} \hline Point&HB$^1$&HB$^2$ &HB$^3$&SNB$^4$&TR$^5$&TR$^6$&TR$^7$& CFB$^8$&TR$^9$&CFB$^{10}$&PDB$^{11}$ \\ \hline $\lambda$ &0.583&1.0746&1.155&1.914&0.812&1.181&1.26&1.293&1.26&0.922&0.9278\\ \hline \end{tabular} \end{center} \end{table} \end{small} \begin{figure} \includegraphics[height=2in, width=3in]{UL7.ps} \caption{\scriptsize{$\lambda = P_m$, One Axis Model}} \label{sax1} \end{figure} \begin{figure} \includegraphics[height=2in, width=3in]{ma13.eps} \caption{\scriptsize{The Quasi-periodic trajectory when $P_m = 0.83$}} \label{mas1} \end{figure} {\bf With Limiter}\\ From Fig.~\ref{lima9}, we observe that the stable operating point loses its stability with HB$^1$, regains it at HB$^2$ and loses it again at HB$^3$ before encountering SNB$^4$, which is similar to the case without the limiter (see Fig.~\ref{sax1}).
Note that in Fig.~\ref{sax1}, for the bifurcations of the stationary branch, HB$^1$--SNB$^4$, $E_{fdx} < E_{fd}^{max}$, and hence we expect these bifurcations to occur at the same parameter values even with the limiter. However, this is not the case, as seen from Table~\ref{t42}, because the approximation shifts the equilibrium structure, as mentioned before. HB$^2$ and HB$^3$ occur very close together and hence cannot be distinguished in Fig.~\ref{lima9}. On continuation of HB$^1$, which is supercritical, we find that the stable periodic solutions do not undergo any bifurcation. HB$^2$ is also supercritical and its continuation yields the same stable periodic set obtained on continuation of HB$^1$. HB$^3$ is sub-critical; its continuation yields CFB$^5$, where stability is gained for a while before being lost at CFB$^6$, and is not shown here. Fig.~\ref{mal13} shows the time domain plot of the load bus voltage for $\lambda = 0.86$. The bifurcations are summarized in Table~\ref{t42}. \begin{small} \begin{table}[htbp] \begin{center} \caption{Bifurcation Points (see Fig.~\ref{lima9}) \label{t42}} \begin{tabular}{|l|l|l|l|l|l|l|} \hline Point&HB$^1$ & HB$^2$ & HB$^3$ & SNB$^4$&CFB$^5$&CFB$^6$\\ \hline $\lambda$ & 0.5145 &1.1827 &1.1857&1.884&1.1284&1.1311\\ \hline \end{tabular} \end{center} \end{table} \end{small} \begin{figure}[htbp] \includegraphics[height=2in, width=3in]{LIMA9.ps} \caption{$\lambda = P_m$, One Axis Model with limiter, continuation of HB$^1$ and HB$^2$} \label{lima9} \end{figure} \begin{figure}[htbp] \includegraphics[height=2in, width=3in]{mal13.eps} \caption{Sustained oscillations of load bus voltage with time when $\lambda = 0.86$ with the approximate limiter} \label{mal13} \end{figure} \subsection{Two Axis model} {\bf Without limiter}\\ We let $\lambda = P_m$ with reference to Fig.~\ref{sax2}. The stationary point undergoes two bifurcations: HB$^1$, where it loses its stability, and SNB$^2$, which does not influence the stability further.
HB$^1$ is a supercritical bifurcation and the family of stable periodic solutions emerging from it undergoes a period doubling cascade, starting with PDB$^3$ and accumulating at a critical value $\lambda^c = 1.315$. The chaotic attractor at $\lambda^c$ is shown in Fig.~\ref{mas2}, confirming the chaotic nature of the motion. The bifurcation points are summarized in Table~\ref{tsax2}. \begin{small} \begin{table}[htbp] \begin{center} \caption{Bifurcation Points (see Fig.~\ref{sax2}) \label{tsax2} } \begin{tabular}{|l|l|l|l|l|} \hline Point&HB$^1$& SNB$^2$&PDB$^3$&PDB$^4$ \\ \hline $\lambda$ & 1.2281&1.9607&1.311&1.314 \\ \hline \end{tabular} \end{center} \end{table} \end{small} \begin{figure}[htbp] \includegraphics[height=2in, width=3in]{UL4.ps} \caption{\scriptsize{$\lambda = P_m$, Two Axis Model}} \label{sax2} \end{figure} \begin{figure}[htbp] \includegraphics[height=2in, width=3in]{ma1.eps} \caption{\scriptsize{The Chaotic trajectory when $P_m$ = 1.315}} \label{mas2} \end{figure} {\bf With Limiter}\\ From Fig.~\ref{lima1}, we observe that the stable operating point loses stability at HB$^1$ and then encounters SNB$^2$ (not shown here). On continuation of HB$^1$, which is sub-critical, we find that the unstable periodic solution stabilizes at CFB$^3$. This stable periodic solution undergoes a period doubling cascade initiated at PDB$^4$. In Fig.~\ref{lima1}, we also show the period doubled solution and its subsequent bifurcation PDB$^5$. By numerical simulations, considering both the exact limiter and its function approximation, we verify that at $\lambda = 1.3$ the system behavior is chaotic. The time domain plots are shown in Figs.~\ref{ex5} and \ref{ex6}. The chaotic attractor subject to limits is shown in Fig.~\ref{ex12}. The bifurcations are summarized in Table~\ref{t41}.
\begin{small} \begin{table}[htbp] \begin{center} \caption{Bifurcation Points (see Fig.~\ref{lima1}) \label{t41}} \begin{tabular}{|l|l|l|l|l|l|} \hline Point&HB$^1$ & SNB$^2$ & CFB$^3$ & PDB$^4$&PDB$^5$\\ \hline $\lambda$ & 1.2729 &1.923 &1.2557&1.282&1.2912\\ \hline \end{tabular} \end{center} \end{table} \end{small} \begin{figure}[htbp] \includegraphics[height=2in, width=3in]{LIMA1.ps} \caption{$\lambda = P_m$, Two Axis Model with limiter} \label{lima1} \end{figure} \begin{figure}[htbp] \includegraphics[height=2in, width=3in]{ex5.eps} \caption{Chaotic oscillations of load bus voltage with time when $\lambda = 1.3$ with the approximate limiter} \label{ex5} \end{figure} \begin{figure}[htbp] \includegraphics[height=2in, width=3in]{ex6.eps} \caption{Chaotic oscillations of load bus voltage with time when $\lambda = 1.3$ with the exact limiter} \label{ex6} \end{figure} \begin{figure}[htbp] \includegraphics[height=2in, width=3in]{ex12.eps} \caption{The chaotic attractor subject to limits when $\lambda = 1.3$ } \label{ex12} \end{figure} \section{Discussions} The case studies with two different models were considered solely to illustrate the effect of the limiter on bifurcations in the system, which is interesting in its own right. The qualitative differences in system dynamics owing to modeling are, however, not discussed here (see Rajesh and Padiyar [1999] for a discussion). Another aspect worth mentioning is the differences between the bifurcation diagrams in this paper and those in the references. Abed {\em et al.} [1993] consider a simplified generator model (classical model) in which the excitation system is entirely absent and use a slightly different system for the bifurcation studies. In Ji and Venkatasubramanian [1996], a Single Machine Infinite Bus (SMIB) system (which is different from that considered in this paper), wherein the load model is absent, is studied.
This paper, however, focuses mainly on the bifurcations and changes which arise on the consideration of {\bf excitation limits}. When the One axis model is considered without the limiter, we observe stable quasi-periodic trajectories resulting from a TR bifurcation. With the limiter, however, we do not observe any bifurcations of periodic solutions, with the result that the entire branch from HB$^1$ to HB$^2$ in Fig.~\ref{lima9} is stable. When the Two axis model is considered without the limiter, we observe chaotic trajectories due to PDBs, which still occur with the limiter. However, we observe in this case that the system has multiple attractors (see Fig.~\ref{lima1}), namely, a stable equilibrium point and a stable periodic solution. Further, we observe that the PDBs in this case occur very close to the boundary of stable fixed point operation. This means that if the system operates close to this boundary and suffers a disturbance whose post-disturbance initial condition belongs to the chaotic region, the system can easily be driven into chaotic behavior. Another interesting aspect, seen by comparing Fig.~\ref{sax2} and Fig.~\ref{lima1}, is that stable equilibrium points close to the boundary of stable fixed point operation are surrounded by unstable limit cycles, which suggests that the region of attraction of the equilibrium points {\em shrinks} in the presence of limits. \section{Conclusions} An attempt has been made to analyze bifurcations in the presence of a limiter by approximating the limiter by a smooth function. This methodology provides good insight into studying bifurcations in a system with a soft limiter. The observations in the case studies illustrate that, in general, the limiter is capable of inducing spectacular qualitative changes in the system.
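The smooth-limiter methodology used throughout can be sketched as follows. The $\tanh$ form below is an illustrative stand-in only (the actual fitted function, with the constants $a$ and $b$ listed in Appendix A, is not reproduced here); the limiter window $E_{fd}^{max/min}$ is taken from Appendix A:

```python
import math

# Limiter window from Appendix A.
EFD_MAX, EFD_MIN = 5.0, -5.0

def hard_limiter(x):
    """Ideal (piecewise-linear, non-smooth) windup limiter."""
    return max(EFD_MIN, min(EFD_MAX, x))

def soft_limiter(x, sharpness=2.0):
    """Smooth surrogate for the hard limiter.

    Illustrative tanh stand-in for the paper's fitted smooth function
    (constants a, b of Appendix A): infinitely differentiable, close to
    linear in the interior, saturating at EFD_MAX / EFD_MIN.
    """
    mid = 0.5 * (EFD_MAX + EFD_MIN)
    amp = 0.5 * (EFD_MAX - EFD_MIN)
    return mid + amp * math.tanh(sharpness * (x - mid) / amp)
```

A smooth surrogate of this kind is what makes numerical continuation packages such as AUTO applicable in the first place, since they require a differentiable right-hand side.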
Developing a formal theory for bifurcations and analyzing the global system dynamics in the presence of limits would be a challenging task for further research in this area. \newpage {\bf References} \\ \noindent Abed E. H., Wang H. O., Alexander J. C., Hamdan A. M. A. and Lee H. C. [1993] ``Dynamic bifurcations in a power system model exhibiting voltage collapse,'' {\em Int. J. Bifurcations and Chaos} {\bf 3} (5), 1169-1176. \\ \noindent Doedel E. J., Fairgrieve T. F., Sandstede B., Champneys A. R., Kuznetsov Yu. A. and Wang X. [1997] ``AUTO97: (User's Manual) continuation and bifurcation software for ordinary differential equations.'' \\ \noindent Ji W. and Venkatasubramanian V. [1996] ``Hard limit induced chaos in a fundamental power system model,'' {\em International Journal of Electrical Power and Energy Systems} {\bf 18} (5), 279-295.\\ \noindent Padiyar K. R. [1996] ``Power System Dynamics and Control,'' John Wiley, Singapore. \\ \noindent Rajesh K. G. and Padiyar K. R. [1999] ``Bifurcation analysis of a three node power system with detailed models,'' {\em International Journal of Electrical Power and Energy Systems} {\bf 21}, 375-393.\\ \noindent Tan C. W., Varghese M., Varaiya P. P. and Wu F. W. [1995] ``Bifurcation, chaos and voltage collapse in power systems,'' {\em Proceedings of the IEEE} {\bf 83} (11), 1484-1496. \section*{Appendix A} \begin{itemize} \item Network parameters. \newline \begin{math} Y_1 = 4.9752, Y_2 = 1.6584, Y_3 = 0, \phi_1 = \phi_2=\phi_3 = -1.4711, E_b = 1.0\end{math} \item Generator parameters. \newline \begin{math} x_d = 1.79, \;\;\;x_q = 1.71, \;\;\;T'_{do}=4.3, T^{\prime}_{qo}=0.85, x_d'=0.169, \newline x_q'=0.23, H = 2.894, w_b = 377, d = 0.05, E_m = 1.0 \end{math} \item Load parameters.
\newline \begin{math} P_o = 0.4,\;\; Q_o = 0.8,\;\; p_1 = 0.24,\;\; q_1 = -0.02,\;\; p_2 = 1.7,\;\; q_2 = -1.866, \newline p_3 = 0.2,\;\; q_3 = 1.4 \end{math} \item AVR constants \newline \begin{math} K_A = 200,\;\; T_A = 0.05 \end{math} \item Limiter constants $a = 0.23\;,\;b = 0.1058\;,\;E_{fd}^{max} = 5\;,\;E_{fd}^{min} = -5$ \end{itemize} \end{document}
\section{\bf Introduction} The fundamental electron-hole (e-h) recombination process in organic light emitting diodes (OLEDs) can be written as \begin{equation} \label{CT} P^+ + P^- \to G + S/T \end{equation} where $P^{\pm}$ are charged polaronic states of the emissive molecule, $G$ is the ground state of the neutral molecule, $S$ is the singlet excited state and $T$ the triplet excited state of the neutral molecule. Eq.~\ref{CT} indicates that both singlet and triplet excitons are likely products of the e-h recombination process. The quantity that determines the efficiency of OLEDs is then the fraction of emissive singlets, $\eta$, formed in the above charge recombination process. Since electrons and holes are injected randomly in the device, only 25\% of the initial polaron-pair states $|P^+P^-\rangle$ are singlets. If it is now assumed that the rate constant for the e-h recombination in reaction \ref{CT} is equal for the singlet and the triplet channels (as would be true in the independent-electron limit), one arrives at a theoretical upper bound of 0.25 for $\eta$. Many experimental studies, however, point to violations of this upper bound in real systems \cite{cao,pk,Wilson01a,tandon,Greenham, Wohlgenannt02a,Virgili}, and $\eta$ values considerably larger than 0.25 have been claimed. At the same time, there exists other experimental work that views these results with skepticism \cite{Baldo99a,segal,Li04}. The latter authors argue that the value of $\eta$ is decided entirely by the fraction of initial bound polaron pairs $|P^+P^-\rangle$ that are singlets, i.e., 0.25, and that no further change in this quantity can take place, irrespective of the e-h recombination rates in the singlet and triplet channels in Eq.~(1). Theoretical attempts to resolve this paradox recognize the important role of electron correlations in all cases, but can nevertheless be classified into two broad categories.
In one, the focus has mostly been on the {\it lowest} singlet and triplet excited states, as in the independent electron model \cite{Shuai00a,shuai2,Tandon,Moushumi}, and the relative cross-sections of e-h recombination within the singlet and triplet channels were calculated in detail. Although the actual calculations differ considerably among the theoretical approaches in this category, the calculated yield of the lowest singlet exciton is greater than 0.25 in all cases. Within the second category of theoretical work, it is tacitly assumed (but not proved) that (i) the initial products of the e-h recombination are {\it higher energy} singlet and triplet states, and (ii) the yields of these excited singlets and triplets are in the ratio 1:3 (in conformity with independent-electron statistics). Within this category of models it is the subsequent relaxation from the high energy states to the lowest singlet and triplet excitons that gives the change in the ratio of emissive singlets to triplets \cite{hong,bittner}. As discussed later, from energetic considerations alone there is a distinct possibility that the products of the e-h recombination reaction are the high energy excited states rather than the lowest excitons. The question therefore arises as to which of these two approaches, if any, best describes the e-h recombination in real materials. In view of these diverse experimental and theoretical results, we revisit our original work \cite{Tandon,Moushumi} to investigate the possibility that the products of the e-h recombination reaction are high energy excited states, and to determine to what extent the singlet:triplet yield ratio is affected by this. We show that in both the singlet and triplet channels there exists a strong likelihood of bifurcation of the reaction paths, with one path leading to the lowest exciton and the other leading to a specific excited state.
Assuming now the applicability of Kasha's rule \cite{kasha} within both spin channels (i.e., assuming that in both channels the higher energy excited state decays to the lowest exciton on an ultrafast time scale), one can in principle estimate the total yields of the lowest singlet and triplet states and the overall singlet:triplet ratio. This is what is attempted in the present paper. In section 2 we present a brief review of the experimental results. In section 3 we present a mechanistic discussion of the e-h recombination for noninteracting electrons, with emphasis on the degeneracies that characterize this limit and their consequences. We follow this with a summary of the nature of excited states in conjugated polymers in section 4. These results are useful in obtaining a physical understanding of the numerical results that we obtain for the e-h recombination. In sections 5 and 6, we present our theoretical model for the e-h recombination as well as detailed numerical results for various cases. In the Conclusion section we end with some basic issues that need to be addressed in the future and discuss our viewpoints regarding these issues. \section{\bf Brief survey of experimental results} The basic difference between electroluminescence (EL) and photoluminescence (PL) in molecular materials lies in the initial process by which the excited state is formed. Independent of this step, in both cases, as per Kasha's rule \cite{kasha}, fluorescent emission occurs from the lowest excited singlet electronic state (with very few exceptions). This is because rapid internal conversion funnels higher excited states to the lowest excited state of the same spin symmetry, whenever the equilibrium geometries of the initial and final excited states are not very different. Thus the cross-section for the final process of light emission is nearly the same in both EL and PL.
However, while the formation of the emissive species, the singlet optical exciton, has a quantum efficiency (QE) of nearly 1 in the PL process, this QE is $\eta$ in the case of EL. Hence, the ratio of the EL to PL efficiency provides a lower bound on $\eta$ in the e-h recombination in the EL process. Using this principle, experimentally $\eta$ has been found to range from $\approx$ 0.25 to 0.66 in different materials \cite{cao,pk,tandon}. In OLEDs containing molecular components as the emissive species, such as aluminum tris(8-hydroxyquinoline) (Alq$_3$), $\eta$ has been determined to be close to the independent-electron statistical value \cite{Baldo99a,segal}. On the other hand, a considerably larger $\eta$ of 0.45 has been claimed in derivatives of poly(para-phenylene vinylene) (PPV) by Cao {\it et al.} \cite{cao} and Ho {\it et al.} \cite{pk}. $\eta$ has also been measured by photoinduced absorption detected magnetic resonance (PADMR) \cite{tandon,Wohlgenannt02a}. In this technique a magnetic field of about 0.1 Tesla is applied to Zeeman split the spin-1/2 states of the charged polarons at a temperature of about 20~K. Application of intense microwave radiation resonant with the Zeeman splitting leads to equal populations of the up and down spin states. This in turn leads to a higher probability of two neighboring polarons of opposite charge having antiparallel rather than parallel spin orientations. Thus, the recombination will yield fewer triplets than in the absence of a field and a saturating microwave, resulting in attenuation of the triplet absorption in the PADMR experiment. From measurement of this attenuation, it is possible to calculate $\eta$. Wohlgenannt {\it et al.} \cite{tandon,Wohlgenannt02a} have determined $\eta$ for a large number of polymeric materials this way and have found it to be strongly material dependent; in all the cases they studied, $\eta$ was determined to be larger than 0.25.
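As a consistency check on the PADMR conditions quoted above (a field of 0.1 Tesla with a saturating microwave), the spin-1/2 Zeeman resonance frequency can be estimated directly; the use of the free-electron $g$-factor for the polaron spins is our simplifying assumption:

```python
# Zeeman resonance frequency nu = g * mu_B * B / h for a spin-1/2 polaron.
g = 2.0023        # free-electron g-factor (assumed for the polaron spin)
mu_B = 9.274e-24  # Bohr magneton, J/T
h = 6.626e-34     # Planck constant, J s
B = 0.1           # applied field quoted in the text, T

nu = g * mu_B * B / h  # ~2.8e9 Hz, i.e. a ~2.8 GHz microwave field
```

The resulting frequency of roughly 2.8 GHz is consistent with the "intense microwave" driving described in the PADMR experiments.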
More recently, Wilson {\it et al.} \cite{Wilson01a} and Wohlgenannt {\it et al.} \cite{Wohlgenannt02a} have shown that $\eta$ varies with conjugation length, from 0.25 for small molecules to considerably larger values for long chain oligomers. These results contradict the claim in reference \cite{segal} that in electroluminescent devices with MEH-PPV as the emissive material the singlet fraction is (20$\pm$4)\%. An increase in the population of polaron pairs with antiparallel spins under resonance conditions also implies the formation of a higher fraction of singlet excitons. Monitoring the EL intensity under resonance conditions (ELDMR) should then give an enhancement consistent with the decrease in the PA intensity under the same conditions. This has indeed been observed by Segal {\it et al.} \cite{segal} as well as Li {\it et al.} \cite{Li04} in Alq$_3$. Since estimates based on other measurements give $\eta \sim 0.25$ for Alq$_3$ \cite{Baldo99a,segal}, Li {\it et al.} attribute this increase in EL to a reduced polaron population under resonance, leading to reduced quenching of the singlet excitons \cite{Li04}. A reduced polaron population, however, results from an increased recombination rate and hence an increased population of singlet excitons. Thus in polymeric materials the enhanced ELDMR could be due both to a higher recombination rate and to a reduced rate of quenching of the singlet excitons. Finally, a recent experimental paper by Lin {\it et al.} has claimed that $\eta$ can be {\it smaller} than 0.25 in LED devices under low electric field \cite{lin}. This work, however, has been severely criticized by authors who have pointed out that the experiments were carried out at room temperature, and that the absorption spectrum assigned to triplets by Lin {\it et al.} was actually due to charged polarons \cite{Osterbacka,Dhoot}.
Furthermore, the critical assumption in the model, namely, that no triplet states exist between the conduction band edge triplet and the lowest triplet, is not borne out by the exact triplet spectrum calculated for long conjugated chains \cite{MDas} using the widely accepted Pariser--Parr--Pople model \cite{PPP}. \section{Two-level picture of e-h recombination} In view of what follows, we discuss here briefly the recombination reaction (1) for independent electrons (H\"uckel model). The electronic structures of all the components in Eq.~(1) are given by single molecular configurations in this limit, and for arbitrary chain lengths or molecular sizes, only the highest occupied and lowest unoccupied molecular orbitals (HOMO and LUMO) are relevant. Each component of reaction (1) can therefore be described within a two-level scheme, as shown in Fig.~1. The total energies of the initial state $|P^+ \cdot P^-\rangle$ and the final state $|G \cdot S/T\rangle$ in Fig.~1 are identical. Hence, if we view the e-h recombination as a tunneling process, there occurs resonant tunneling from the initial polaronic states to the final neutral states if the matrix element of the perturbation (corresponding to the transfer term which causes an electron to hop between the conjugated chains) connecting the initial and final states is nonzero. The matrix elements $\langle G \cdot S|H|P^+P^-\rangle_S$ and $\langle G \cdot T|H|P^+P^-\rangle_T$, where $| P^+P^-\rangle_S$ and $| P^+P^-\rangle_T$ are singlet and triplet polaron-pair species, are identical for arbitrary interchain hopping, and hence the tunneling probability is the same in the singlet and the three triplet channels. This quantum resonant tunneling picture leads us to $\eta~=~0.25$. Notice that this requires the strict degeneracy \begin{equation} \label{degeneracy} E(P^+) + E(P^-) = E(G) + E(S/T) \end{equation} where the energy $E$ in each case refers to the total energy of the state in question.
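The 1:3 singlet:triplet counting behind the statistical $\eta = 0.25$ above can be verified mechanically. The following is a minimal numpy sketch of standard two-spin algebra, not part of the paper's calculations: two independent spin-1/2 carriers combine into one singlet and three triplets, as the eigenvalues of $S_{tot}^2$ show.

```python
import numpy as np

# Spin-1/2 operators (hbar = 1).
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]])
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

# Total-spin operator squared on the 4-dimensional two-spin product space.
S2 = sum(
    (np.kron(s, I2) + np.kron(I2, s)) @ (np.kron(s, I2) + np.kron(I2, s))
    for s in (sx, sy, sz)
)
eig = np.sort(np.linalg.eigvalsh(S2))  # eigenvalues S(S+1): [0, 2, 2, 2]

n_singlet = int(np.sum(np.isclose(eig, 0.0)))  # 1 state with S = 0
n_triplet = int(np.sum(np.isclose(eig, 2.0)))  # 3 states with S = 1
eta = n_singlet / (n_singlet + n_triplet)      # 0.25 for equal channel rates
```

With equal recombination rates in all four channels, the singlet fraction is $1/(1+3) = 0.25$, the bound quoted in the Introduction; unequal singlet and triplet rates break this result.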
If for any reason these equalities are not obeyed, in particular if $E(S) \neq E(T)$, there is no reason to have $\eta$ = 0.25. ~\\ \begin{figure}[h] \begin{center} {\includegraphics[width=10cm,height=3cm]{eh_2level.eps}} \end{center} \end{figure} \noindent FIG. 1.{\it{The HOMO and LUMO orbital occupancies for the initial polaronic pair state and the final neutral states for the total z-component of the spin $M_S$=0.}} \section{\bf Excited states in conjugated polymers} We briefly review here the nature of the excited states in conjugated polymers within correlated electron models. This will be useful in understanding the bifurcation of the e-h recombination paths (1) mentioned above. The noninteracting-electrons description of conjugated polymers is based on the $\pi$-electron H\"uckel model. The H\"uckel molecular orbitals (HMOs) are obtained as linear combinations of the atomic $2p_z$ orbitals, one at each carbon atom in conjugation. Usual quantum chemical approaches that go beyond H\"uckel theory use the HMOs as the starting point and include electron correlations via a configuration interaction (CI) scheme, using a restricted number of excited HMO configurations such as the singly and doubly excited configurations in the singles and doubles CI (SDCI) approach. However, when the strength of repulsion between two electrons occupying the same $2p_z$ orbital is comparable to the energy difference between the LUMO and the HOMO, it is preferable to start with the electron configurations in the atomic orbital (AO) basis. This is particularly important in the polymer limit, since approaches such as SDCI are not size-consistent, and size-consistent techniques such as perturbation methods do not converge in the regime of intermediate correlations. The guiding factors in this regime are the physical insights developed from a real space or AO picture.
In this section we will illustrate how this picture helps in understanding the excitations of a conjugated polymer chain. \begin{figure} \begin{center} {\includegraphics[width=12cm,height=7cm]{vb_sgn4_19jul.eps}} \begin{center} (a) \end{center} \vspace*{0.5cm} {\includegraphics[width=12cm,height=7cm]{vb_trpn4_19jul.eps}} \begin{center} (b) \end{center} \end{center} FIG. 2. {\it The VB basis states for singlets and triplets for butadiene. Each cross (X) represents a $2p_z$ orbital occupied by two electrons, a dot ($\cdot$) represents an empty orbital and a line (arrow) between two sites represents singlet (triplet) spin-pairing of the singly occupied $2p_z$ orbitals at the sites. In (a) the twenty singlet VB diagrams, yielding nine symmetrized singlet basis functions in the $^1A_g^+$ and six in the $^1B_u^-$ subspaces, are shown. The ground state lies in the $^1A_g^+$ subspace while the optically allowed excitations from the ground state lie in the $^1B_u^-$ subspace. Two other subspaces, corresponding to $^1A_g^-$ and $^1B_u^+$, are not shown. In (b) the fifteen triplet VB diagrams, the six basis states in the $^3B_u^+$ subspace, and the four basis states in the $^3A_g^-$ subspace are shown. The lowest triplet lies in the $^3B_u^+$ subspace.} \end{figure} We begin with the analysis of the simple case of butadiene, with four $2p_z$ orbitals (N=4) in conjugation. The number of electrons N$_e$ occupying the N orbitals is also 4. The models we deal with are nonrelativistic (since spin-orbit interactions are negligible) and hence the total spin, $S$, as well as the z-component of the total spin, $M_S$, are well defined. We can write 36 distinct (linearly independent) electron configurations corresponding to $M_S=0$ for the case of N$_e$ = N = 4. Constructing basis functions with fixed total spin, such as S = 0 singlet basis functions or S = 1 triplet basis functions, is nontrivial.
The approach that has proved to be useful and physically appealing is to use valence bond (VB) functions \cite{Ramasesha84a}, which are linear combinations of the Slater determinants corresponding to different MO configurations. The VB basis states are completely defined once (i) the orbital occupancies are specified and (ii) the spin-pairings among the singly occupied orbitals are given. VB singlets are represented by lines connecting the orbital pair (see Fig. 2 and discussion below). For VB basis states with $S \neq 0$ and $M_S=S$, besides the singlet pairings, it is also necessary to specify the $2S$ AOs with parallel spin occupancies. Within our VB theory such sites are connected by a triplet bond, represented by arrows (see Fig. 2). In Figs. 2 (a) and (b) we show the twenty singlet and fifteen triplet VB diagrams for $N=N_e=4$. Linear chains have mirror plane as well as inversion symmetries, implying that all basis functions as well as eigenstates can be classified as having even spatial parity ($A_g$) or odd spatial parity ($B_u$). The bipartite nature of linear polyenes (as well as most conjugated polymers) also implies charge-conjugation symmetry; thus each spatial symmetry subspace can be further partitioned into even and odd charge-conjugation symmetries, giving four different symmetry subspaces overall. We have given in Fig.~2(a) the VB basis functions -- superpositions of VB diagrams -- that form the S = 0 $A_g^+$ and $B_u^-$ subspaces (the other two subspaces, $A_g^-$ and $B_u^+$, are not shown). Similarly, in Fig.~2(b) we have shown the $B_u^+$ and $A_g^-$ S = 1 basis functions.
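The counts used above (36 $M_S=0$ configurations, twenty singlet and fifteen triplet spin-adapted functions for $N=N_e=4$) follow from the standard Weyl--Paldus dimension formula and can be checked in a few lines; the helper name `weyl_dim` is ours:

```python
from math import comb

def weyl_dim(n_orb, n_el, s):
    """Number of linearly independent spin-adapted basis states with total
    spin s, for n_el electrons in n_orb orbitals (Weyl-Paldus formula).
    Assumes n_el even and s a non-negative integer."""
    b = n_el // 2
    return (2 * s + 1) * comb(n_orb + 1, b - s) * comb(n_orb + 1, b + s + 1) // (n_orb + 1)

# Butadiene: N = N_e = 4.
n_singlets = weyl_dim(4, 4, 0)  # 20 singlet VB diagrams
n_triplets = weyl_dim(4, 4, 1)  # 15 triplet VB diagrams
n_quintets = weyl_dim(4, 4, 2)  # 1 quintet
n_ms0 = comb(4, 2) ** 2         # 36 determinants with M_S = 0

# Consistency: spin-adapted counts resolve the M_S = 0 determinant space.
assert n_singlets + n_triplets + n_quintets == n_ms0
```

The final assertion is the usual bookkeeping check: every $S$ value contributes exactly one $M_S=0$ component, so the spin-adapted dimensions must sum to the determinant count.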
Broadly, the basis functions in Figs.~2(a) and (b) can be classified as covalent, i.e., consisting of VB diagrams in which all atoms are singly occupied (the first two basis functions in the $^1A_g^+$ subspace, as well as the first two basis functions in the $^3B_u^+$ subspace), and ionic, with at least one doubly occupied site and one empty site (all other basis functions in Fig.~2). The ionic basis functions can be further classified into singly ionic (with one doubly occupied and one empty site), doubly ionic (with two doubly occupied and two empty sites), and so on (higher ionicities occurring for N$_e$=N $>$ 4). The ground state lies in the $A_g^+$ subspace (and is henceforth referred to as the 1$^1A_g^+$); with increasing electron-electron interactions, the ionicity of the ground state decreases and the wavefunction is more strongly dominated by the covalent basis functions (since the on-site part of the electron correlations raises the energies of basis functions with double occupancies). Optical excitation from this state is to the lowest $^1B_u^-$ state (hereafter 1$^1B_u^-$), which, as seen from Fig.~2(a), is necessarily ionic, as there exist no covalent VB diagrams in the $^1B_u^-$ subspace. In contrast, the lowest triplet states in the $^3B_u^+$ subspace again have strong covalent contributions, and hence are lower in energy than the 1$^1B_u^-$. There exists a ``band'' of such triplet states between the 1$^1A_g^+$ and the 1$^1B_u^-$ in the long chain limit. The 1$^3B_u^+$ is higher in energy than the 1$^1A_g^+$, since while there can be charge-transfer delocalization across an arbitrary singlet bond, there is no such charge-transfer across triplet bonds. From the physical nature of the basis functions, then, it is clear that several of the lowest triplets will occur below the 1$^1B_u^-$.
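The ordering argued above -- covalent-dominated triplet below the ionic singlet -- is already visible in the smallest correlated example, the half-filled two-site Hubbard model. The sketch below is our illustration, not a calculation from the paper; a phase convention with all hoppings entering as $-t$ is used, and the spectrum is convention-independent:

```python
import numpy as np

# Two-site Hubbard model at half filling, M_s = 0 sector.
# Basis ordering: |up,dn>, |dn,up> (covalent); |updn,0>, |0,updn> (ionic).
t, U = 1.0, 10.0
H = np.array([[0.0, 0.0, -t, -t],
              [0.0, 0.0, -t, -t],
              [-t, -t, U, 0.0],
              [-t, -t, 0.0, U]])
E = np.sort(np.linalg.eigvalsh(H))
# Spectrum: covalent-dominated singlet ground state at (U - sqrt(U^2 + 16 t^2))/2,
# purely covalent triplet at E = 0, and two ionic singlets near U.
```

The purely covalent combination gains no charge-transfer energy and sits at $E=0$, below the ionic singlets near $U$; only the covalent-ionic mixture in the singlet ground state is pushed lower, mirroring the long-chain argument in the text.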
If the electron correlations include intersite Coulomb interactions over and above the on-site correlations, singly ionic VB basis functions with neighboring double occupancy (particle) and vacancy (hole) (the third basis function in the $^1A_g^+$ subspace and the first basis function in the $^1B_u^-$ subspace) have lower energy than basis functions in which the double occupancy and the vacancy are further apart. In the long chain limit there are many more (practically infinitely many) basis functions of the latter class and few with short-range separations between the double occupancy and the vacancy. We see therefore that for the realistic strong Coulomb interactions that characterize conjugated polymers there will be a strong tendency toward exciton formation, with a few states, dominated by ionic VB functions with small separations between the double occupancy and the vacancy, splitting off from the continuum of singly ionic states. The above conjectures, based on the physical nature of the VB basis functions, have been substantiated by numerical calculations by several groups \cite{DGuo,Abe,Barford,adqc} and are also supported by experiments \cite{DGuo,Weiser,Leng,Liess,Blatchford,Sinclair,Samuel,Leng1,Hsu,Yan}. In Fig.~3 we show the schematic energy spectrum of conjugated polymers. The singlet and triplet exciton states below the conduction band edge that are labeled m$^1A^+_g$ and m$^3A^-_g$ have been discussed extensively in the context of optical nonlinearity and are characterized by very large transition dipole couplings to the 1$^1B^-_u$ and 1$^3B^+_u$ states, respectively \cite{DGuo,Abe,Barford,adqc}. Energetically, the m$^1A^+_g$ is degenerate with the m$^3A^-_g$ at the level of singles-CI \cite{Abe} and very slightly above the m$^3A^-_g$ within exact calculations \cite{Shimoi}.
Still higher energy singlet and triplet states with large transition dipoles to the m$^1A^+_g$ and m$^3A^-_g$ states are the n$^1B^-_u$ and n$^3B^+_u$ \cite{DGuo,Abe,Barford}, also included in Fig.~3. The n$^1B_u^-$ lies at the edge of the continuum threshold, as has been shown in earlier calculations \cite{DGuo,Abe,Barford}. The n$^3B^+_u$ has not been discussed previously. We have calculated the energy of this state exactly for a large range of parameters (see below) and have found it in all cases to be nearly degenerate with the m$^3A^-_g$ and invariably below the n$^1B_u^-$. Although the bulk of the energetics calculations are for linear chain polyacetylenes and polydiacetylenes \cite{DGuo,Abe,Barford}, work by different groups has indicated that the same basic energy level scheme applies also to conjugated polymers with aromatic groups, in the energy region up to and including the conduction band threshold \cite{soosprl,Chandross,shuai97}. It is natural to assume that the ground state of the polaron pair formed in the OLEDs, $|P^+P^-\rangle$, lies just at the bottom of the conduction band edge. This state is also included in Fig.~3, where we have made no distinction between $|P^+P^-\rangle_S$ and $|P^+P^-\rangle_T$, as the energy difference between the singlet and triplet polaron-pair states is tiny \cite{Kadashchuk} and would be invisible on the scale of Fig.~3. ~\\ \begin{center} {\includegraphics[width=7cm,height=7cm]{nrg_cong_level_6jul.eps}} \end{center} {FIG. 3.} {\it Schematic excitation spectrum of a conjugated polymer.
Dots indicate that there are several excitations between the labeled states.} \section{Correlated electron theory of e-h recombination} Our goal in this section is to lay out the formalism for detailed calculations of the relative formation rates of singlet and triplet excitons starting from an oppositely charged polaron pair (see reaction (1)), in view of the correlated electron description of the electronic structure of $\pi$-conjugated polymers (see Fig.~3). In contrast to our earlier work \cite{Tandon,Moushumi}, we recognize at the outset that e-h recombination can generate $S$ and $T$ states at energies higher than the lowest energy excitons. We calculate the yields of all such states under different conditions, using the techniques developed in \cite{Tandon,Moushumi}. Our procedure consists of computing the time-dependent evolution of oppositely charged polyene molecules under the influence of the composite two-chain Hamiltonian, as discussed below. As we limit ourselves to calculations based on the rigid bond approximation and small finite molecules, we assume that all high energy singlets and triplets decay in ultrafast times to the 1$^1B_u^-$ and 1$^3B_u^+$, respectively. We believe that this assumption is valid for the real systems \cite{MDas}. \subsection{The model system} Our model system consists of two polyene chains of equal length that lie directly on top of each other, separated by 4 \AA. We consider the charge recombination process of Eq.~\ref{CT}; there are two possible initial states: (i) a specific chain (say chain 1) is positively charged, with the other (chain 2) negatively charged, a configuration that we hereafter denote as $P^+_1P^-_2$, where the subscripts 1 and 2 are chain indices, or (ii) the superposition $P^+_1P^-_2 \pm P^+_2P^-_1$, in the same notation.
In our calculations we have chosen the first as the proper initial state, since experimentally in the OLEDs the symmetry between the chains is broken by the external electric field (we emphasize that the consequence of choosing the symmetric or antisymmetric superposition can be easily predicted from all the numerical calculations that follow). Even with initial state (i), the final state can consist of both $G_1 \cdot S_2$ and $G_2 \cdot S_1$ in the singlet channel. The same is true in the triplet channel, i.e., either of the two chains can be in the ground (excited) state. Hereafter we will write the initial states as $|i_S\rangle$ and $|i_T\rangle$, where the subscripts $S$ and $T$ correspond to spin states $S$ = 0 and 1. We consider only the $M_S$ = 0 triplet state; in the absence of an external magnetic field the e-h recombination rates for the three triplet channels with different $M_S$ are the same. The initial states are simply the product states with appropriate spin combinations, \begin{eqnarray} \label{initial_states} |i_S\rangle = 2^{-1/2}(|P^+_{1,\uparrow}\rangle |P^-_{2,\downarrow}\rangle - |P^+_{1,\downarrow}\rangle |P^-_{2,\uparrow}\rangle) \\ \nonumber \\ |i_T\rangle = 2^{-1/2}(|P^+_{1,\uparrow}\rangle |P^-_{2,\downarrow}\rangle + |P^+_{1,\downarrow}\rangle |P^-_{2,\uparrow}\rangle) \end{eqnarray} The overall Hamiltonian for our composite two-chain system consists of an intrachain part $H_{intra}$ and an interchain part $H_{inter}$. $H_{intra}$ describes individual chains within the Pariser-Parr-Pople (PPP) Hamiltonian \cite{PPP} for $\pi$-electron systems, written as, \begin{eqnarray} H_{intra} = -\sum_{<ij>,\sigma}t_{ij} (a^{\dagger}_{i,\sigma} a^{}_{j,\sigma} + H.C.)
+ \sum_i \epsilon_i n_i + \nonumber \\ \sum_i U_i n_{i,\uparrow}n_{i,\downarrow} + \sum _{i>j} V_{ij} (n_i -z_i) (n_j -z_j) \label{PPP} \end{eqnarray} \noindent where $a^{\dagger}_{i,\sigma}$ creates a $\pi$-electron of spin $\sigma$ on carbon atom $i$, $n_{i,\sigma} = a^{\dagger}_{i,\sigma}a^{}_{i,\sigma}$ is the number of electrons on atom $i$ with spin $\sigma$ and $n_i = \sum_{\sigma}n_{i,\sigma}$ is the total number of electrons on atom $i$, $\epsilon_i$ is the site energy and $z_i$ are the local chemical potentials. The hopping matrix elements $t_{ij}$ above are restricted to nearest neighbors and in principle can contain electron-phonon interactions, although a rigid bond approximation is used here. $U_i$ and $V_{ij}$ are the on-site and intrachain intersite Coulomb interactions. We use standard parameterizations for $H_{intra}$. The hopping integrals for single and double bonds are taken to be 2.232 eV and 2.568 eV, respectively, and all site energies, for simple polyenes with all sites equivalent, are set to zero. We choose the Hubbard interaction parameter $U_C$ for carbon to be 11.26 eV, and for the $V_{ij}$ we choose the Ohno parameterization \cite{Ohno64}, \begin{eqnarray} V_{ij} = 14.397\left[\left({{28.794}\over {U_i+U_j}}\right)^2~+~r^2_{ij} \right]^{-{{1} \over {2}}} \label{Ohno} \end{eqnarray} where the distance $r_{ij}$ is in \AA, ~$V_{ij}$ is in eV and the local chemical potential $z_C$ for $sp^2$ carbon is one. It should be noted that when heteroatoms like nitrogen are present, the on-site correlation energy, the site energy and the local chemical potential could be different from those for carbon \cite{Tandon}. For $H_{inter}$, we choose the following form, \begin{eqnarray} \label{inter} H_{inter} = -t_{\perp}\sum_{i,\sigma}(a^{\dagger}_{i \sigma} a^{\prime}_{i,\sigma} + H.C.) \nonumber \\ + X_{\perp}\sum_{i,\sigma}(n_i + n^{\prime}_i)(a^{\dagger}_{i \sigma} a^{\prime}_{i,\sigma} + H.C.)
+ \nonumber \\ \sum_{i,j} V_{i,j}(n_i -z_i)(n^{\prime}_j - z^{\prime}_j) \end{eqnarray} In the above, primed and unprimed operators refer to corresponding sites on different chains. Note that the interchain hopping $t_{\perp}$ is restricted to nearest interchain neighbors. The interchain Coulomb interaction $V_{i,j}$, however, includes the interaction between any site on one chain and any site on the other chain. In addition to the usual one-electron hopping that occurs within the zero differential overlap approximation, we have also included a many-electron site charge-bond charge repulsion $X_{\perp}$ (operating between nearest interchain neighbors only) that derives from multicenter Coulomb integrals \cite{PPP,Campbell90a}. This term should also occur within $H_{intra}$, but is usually ignored there because of its small magnitude relative to all other terms. In contrast, $t_{\perp}$ in $H_{inter}$ is expected to be much smaller than the intrachain hopping, and $X_{\perp}$ cannot be ignored in interchain processes, especially at large interchain separations \cite{Rice96a}. We have done calculations for both $X_{\perp}$ = 0 and $X_{\perp} \neq$ 0. \subsection{Time-evolution of the polaron pair state} Our approach consists of calculating the time-evolution of the initial states $|i_S\rangle$ and $|i_T\rangle$ [see Eqs. (3) and (4)] under the influence of the full Hamiltonian, and then evaluating the overlaps of the time-evolved states with all possible final states $|f_S\rangle$ and $|f_T\rangle$ of the individual neutral chains. In OLEDs, the $P^{\pm}$ are created at opposite ends of the device and they execute hopping motion towards each other under the influence of an external electric field ($P^{\pm} + G \to G + P^{\pm}$). The polaron wavefunctions remain unperturbed throughout this process, until they are within the radius of influence of each other.
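As a side note, the Ohno interpolation of Eq.~\ref{Ohno} above is easy to sketch numerically. A minimal sketch (the helper name is ours); it reduces to the on-site $U_C$ at $r = 0$ and to the unscreened $14.397/r$ eV at large separations:

```python
def ohno_v(r_ij, u_i=11.26, u_j=11.26):
    """Ohno intersite repulsion V_ij (eV) for distance r_ij (Angstrom).

    Interpolates between the on-site Hubbard parameter at r = 0 and the
    bare Coulomb law 14.397/r eV at large r (Eq. Ohno in the text).
    """
    return 14.397 / ((28.794 / (u_i + u_j)) ** 2 + r_ij ** 2) ** 0.5

# V(0) reduces to the carbon Hubbard parameter U_C = 11.26 eV.
assert abs(ohno_v(0.0) - 11.26) < 1e-9
# At large r the potential approaches 14.397/r eV.
assert abs(ohno_v(100.0) - 14.397 / 100.0) < 1e-3
```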
We define this particular instant as time $t$ = 0, and we visualize that the interchain interactions $H_{inter}$ are ``switched on'' suddenly from zero at this time. The subsequent intermolecular charge transfer (CT) is rapid (a few to several tens of femtoseconds for realistic interchain hopping $t_{\perp}$, see below). It is the ultrashort timescale of this CT process that justifies the choice of our initial state. In principle, given a Hamiltonian, propagation of any initial state is easily achieved by solving the time-dependent Schr\"odinger equation. One could use the interaction picture to separate the nontrivial evolution of the initial state from the trivial phase evolution of the product eigenstates of the subsystem Hamiltonians. In the context of the many-body PPP Hamiltonian such an approach is difficult to implement numerically, because the total number of eigenstates of the two-chain system is very large: the number of such states for two chains of six carbon atoms each is 853,776 in the $M_S$ = 0 subspace. Obtaining all the eigenstates of the two-component system and expressing the matrix elements of $H_{inter}$ in the basis of these eigenstates is therefore computationally very expensive. It is simpler to calculate the time evolution in the Schr\"odinger representation, determine the time-evolved states, and project them onto the desired final eigenstates (for instance, $|1^1A_g\rangle_1|1^1B_u\rangle_2$). This is the approach we take. Henceforth we refer to the initial states $|i_S\rangle$ and $|i_T\rangle$ collectively as $\Psi(0)$ and the time-evolved states as $\Psi(t)$. In principle, the time evolution can be done by operating on $\Psi(0)$ with the time evolution operator, \begin{equation} U(0,t) = \exp(-iHt) \label{time} \end{equation} where $H$ is the total Hamiltonian $H_{intra} + H_{inter}$.
This approach would, however, require obtaining a matrix representation of the exponential time evolution operator, which again requires the determination of the prohibitively large number of eigenstates of the composite two-chain system. We can avoid this problem by using small discrete time intervals and expanding the exponential operator in a Taylor series, stopping at the linear term. Such an approach, however, has the undesirable effect of spoiling unitarity, and for long time evolutions would lead to loss of normalization of the evolved state. The way around this dilemma has been proposed and used by others \cite{Crank47a,Varga62a} in different contexts, and involves the following truncated time-evolution scheme, \begin{eqnarray} (1 + iH{{\Delta t} \over {2}}) \Psi(t+\Delta t) = (1 - iH{{\Delta t} \over {2}}) \Psi(t) \label{evolve} \end{eqnarray} On the left hand side of this equation we evolve the state at time $(t+\Delta t)$ backward by $\Delta t/2$, while on the right hand side we evolve the state at time $t$ forward by $\Delta t/2$. By forcing these two to be equal, we ensure unitarity of the time evolution: it is easily seen that this scheme, which is accurate to order $\Delta t^2$, is exactly unitary. For a given many-body Hamiltonian and initial state, the right hand side of Eq.~\ref{evolve} is a known vector in the Hilbert space of the two-chain Hamiltonian. The left hand side corresponds to the action of a matrix on an as yet unknown vector, which is obtained by solving the above set of linear algebraic equations. After each evolution step, the evolved state is projected onto the space of neutral product eigenstates of the two-chain system.
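The unitary update of Eq.~\ref{evolve} can be sketched for a generic Hermitian Hamiltonian. This is a minimal illustration, not our production code: the function and variable names are ours, and a random Hermitian matrix stands in for the two-chain Hamiltonian.

```python
import numpy as np

def crank_nicolson_step(H, psi, dt):
    """Advance psi by dt via (1 + iH dt/2) psi(t+dt) = (1 - iH dt/2) psi(t)."""
    n = H.shape[0]
    lhs = np.eye(n) + 0.5j * dt * H                 # matrix acting on psi(t+dt)
    rhs = (np.eye(n) - 0.5j * dt * H) @ psi         # known vector at time t
    return np.linalg.solve(lhs, rhs)                # solve for psi(t+dt)

# Unitarity check: the norm is preserved over many steps.
rng = np.random.default_rng(1)
A = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
H = 0.5 * (A + A.conj().T)                          # random Hermitian stand-in
psi = np.zeros(8, dtype=complex)
psi[0] = 1.0
for _ in range(2000):
    psi = crank_nicolson_step(H, psi, 0.05)
assert abs(np.linalg.norm(psi) - 1.0) < 1e-9
```

A first-order Taylor expansion of $\exp(-iH\Delta t)$ would lose norm steadily; the Cayley form above is exactly unitary for Hermitian $H$, so only round-off error accumulates.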
The relative yield $I_{mn}(t)$ for a given product state $|m,n\rangle$ = $|m\rangle_1|n\rangle_2$ is then obtained from, \begin{equation} I_{mn}(t) = |\langle\Psi(t)|m,n \rangle|^{2} \label{overlap} \end{equation} In our case the states $|m,n\rangle$ can be any of the final states of interest, viz., $|(1^1A_g)_1(1^1B_u)_2\rangle$, $|(1^1A_g)_1(1^3B_u)_2\rangle$, etc. It is for efficient calculation of the overlaps in Eq.~\ref{overlap} (while at the same time maintaining spin purity) that we transform our exact eigenstates of the neutral system from the valence bond (VB) basis to the total $M_S$ basis. We emphasize that $I_{mn}(t)$ is a measure of the yield of the state $|m,n\rangle$ at time $t$ and is not a cross-section. \section{Numerical Results} \label{results} We first present the results of our calculations of recombination dynamics for pairs of ethylenes, butadienes and hexatrienes within the noninteracting H\"uckel model ($U_i = V_{ij} = X_{\perp} = 0$) and the interacting PPP model. These results have already been discussed in detail in reference \cite{tandon}, and hence our presentation will be brief. Our calculations clearly indicate that the yield of the lowest singlet exciton is considerably larger than that of the lowest triplet exciton within the PPP Hamiltonian. As already discussed in section I, however, the reaction products of the e-h recombination reaction need not be limited to the lowest excitations when e-e interactions are nonzero. A thorough search within the PPP model has, however, not detected significant yield of any other excited state \cite{tandon}. We discuss why this might be a consequence of the small sizes of our model systems. We then present new results of calculations with $H_{intra}$ as the simple Hubbard Hamiltonian ($V_{ij} = 0$) and as an extended Hubbard Hamiltonian with short-range intersite Coulomb interactions.
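The projection of Eq.~\ref{overlap}, with product final states built as tensor products of single-chain eigenstates, can be sketched as follows (a toy two-state-per-chain basis; all names are ours):

```python
import numpy as np

def product_state(state_chain1, state_chain2):
    """|m,n> = |m>_1 (x) |n>_2 as a vector in the two-chain Hilbert space."""
    return np.kron(state_chain1, state_chain2)

def yield_mn(psi_t, m, n):
    """I_mn(t) = |<psi(t)|m,n>|^2  (Eq. overlap in the text)."""
    return abs(np.vdot(psi_t, product_state(m, n))) ** 2

# Completeness check with a toy 2-state-per-chain basis: the yields over a
# complete orthonormal product basis sum to |psi|^2 = 1.
basis = np.eye(2)
psi = np.array([0.5, 0.5, 0.5, 0.5], dtype=complex)   # normalized two-chain state
total = sum(yield_mn(psi, basis[m], basis[n]) for m in range(2) for n in range(2))
assert abs(total - 1.0) < 1e-12
```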
In spite of the obvious limitations of our finite size calculations, a mechanism of e-h recombination appears to emerge from our work. We are able to show that in both singlet and triplet channels there occurs a bifurcation of the e-h recombination paths, with one branch leading to the lowest 1$^1B_u$ (1$^3B_u$) exciton and the other leading to the formation of the higher energy n$^1B_u$ (n$^3B_u$) CT states. The overall S:T ratio then depends on the relative weights of the two branches in each spin channel, but the fractional singlet exciton yield continues to be greater than 0.25. \subsection{Dynamics in the H\"uckel Model} While there is no difference in energy between singlets and triplets in the H\"uckel model, it is nevertheless possible to have spin singlet and triplet initial states $|i_S\rangle$ and $|i_T\rangle$, as well as singlet and triplet final states. In Fig. 4 we show the yield for the e-h recombination in the singlet channel, for pairs of ethylenes, butadienes and hexatrienes. The yields for the triplet channels are not shown separately; we have ascertained that they are identical to those in the singlet channel, as expected. \begin{figure} \begin{center} \centerline{\includegraphics[width=10cm,height=6cm]{fig2_paper.eps}} \end{center} {FIG. 4.} {\it Yield in the singlet channel as a function of time, for pairs of ethylenes (top panel), butadienes (middle panel), and hexatrienes (bottom panel), within the H\"uckel model ($U=V_{ij}=X_{\perp}=0$).} \end{figure} These calculations are for $t_{\perp} = 0.1$ eV, $X_{\perp} = 0$ within Eq.~\ref{inter}. We note that the yields $I_{mn}(t)$ oscillate with time. This is to be expected within our purely electronic Hamiltonian, within which an electron or hole jumps back and forth between the two molecular species.
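This back-and-forth hopping is simply the dynamics of a two-level system. A minimal sketch (names ours, $t_\perp = 0.1$ eV as in the figure) reproduces the few-to-tens-of-femtoseconds CT timescale quoted earlier:

```python
import numpy as np

HBAR = 0.6582   # hbar in eV fs
t_perp = 0.1    # interchain hopping in eV, as in the text

# Two-level Hamiltonian for a single carrier shared between the two chains.
H = np.array([[0.0, -t_perp], [-t_perp, 0.0]])

def transfer_probability(t_fs):
    """Probability that a carrier starting on chain 1 is on chain 2 at time t."""
    w, v = np.linalg.eigh(H)
    U = v @ np.diag(np.exp(-1j * w * t_fs / HBAR)) @ v.conj().T
    psi = U @ np.array([1.0, 0.0], dtype=complex)
    return abs(psi[1]) ** 2

# The exact dynamics has the Rabi form sin^2(t_perp * t / hbar),
# i.e. a full back-and-forth period of pi * hbar / t_perp ~ 21 fs.
for t in (5.0, 10.0, 20.0, 40.0):
    assert abs(transfer_probability(t) - np.sin(t_perp * t / HBAR) ** 2) < 1e-12
```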
These oscillations are the analogs of the Rabi oscillations \cite{Rabi37a,Allen87a} that occur upon the stimulation of a system with light, where absorption of light can occur only with nonzero damping. Within our purely electronic Hamiltonian, complete transition to the final states can likewise occur only in the presence of damping (for example, radiative and nonradiative relaxation of the final states), which has not been explicitly included in our Hamiltonian. The frequency of oscillation is higher for larger intermolecular transfer integral $t_{\perp}$, as expected. The frequency of the oscillation also depends upon the size of the molecule and is lower for larger molecules. The equality of the singlet and triplet yields found numerically conforms to the simple free-spin-statistics prediction that singlet and triplet formation are equally probable in the e-h recombination process with $M_S=0$ as the initial state. Since the $M_S=\pm 1$ cases always yield triplets, the spin statistics corresponding to 25\% singlets and 75\% triplets is vindicated in this case. \subsection{Dynamics in the PPP model} The results presented in this subsection are for interchain $V_{i,j}$ calculated using the Ohno parameters, and interchain hopping $t_{\perp}$ = 0.1 eV. We present results for both $X_{\perp}$ = 0 and 0.1 eV. The top left and top right plots in Fig. 5 show the yields $I_{mn}(t)$ in the singlet and triplet channels, respectively, for pairs of ethylenes, butadienes and hexatrienes, for the case of $X_{\perp}$ = 0. The same results are shown in the bottom left and bottom right plots for $X_{\perp}$ = 0.1 eV. \begin{center} \includegraphics[width=6.8cm,height=5.9cm]{fig3a_paper_nw.eps} \hspace*{0.6cm}\includegraphics[width=7cm,height=6.2cm]{fig3b_paper.eps} \hspace*{0.2cm}\includegraphics[width=7.1cm,height=6.2cm]{fig3c_paper.eps} \hspace*{0.1cm}\includegraphics[width=7cm,height=6.2cm]{fig3d_paper.eps} \end{center} \noindent FIG.
5. {\it Yields in the singlet and triplet channels within the PPP Hamiltonian. In all four cases the top panel corresponds to a pair of ethylenes, the middle panel to a pair of butadienes, and the bottom panel to a pair of hexatrienes. Top left: singlet channel, $t_{\perp}=0.1{\rm eV}, ~X_{\perp}=0$. Top right: triplet channel, $t_{\perp}=0.1{\rm eV}, ~X_{\perp}=0$. Bottom left: singlet channel, $t_{\perp}=0.1{\rm eV}, ~X_{\perp}=0.1 {\rm eV}$. Bottom right: triplet channel, $t_{\perp}= 0.1{\rm eV},~X_{\perp}=0.1 {\rm eV}$. The evolution for hexatrienes is tracked for 20 fs, while in the other cases it is tracked for 60 fs. Significant yield in the singlet channel occurs only for the final states $|(1^1A_g^+)_1(1^1B_u^-)_2 \rangle$ and $|(1^1B_u^-)_1(1^1A_g^+)_2\rangle$, between which the yields are identical for $X_\perp = 0$ but different for $X_\perp \ne 0$. A similar asymmetry is observed in the triplet channels for $X_\perp \ne 0$.} ~\\ ~\\ \noindent The differences from the H\"uckel model results are the following. First, the yields $I_{mn}(t)$ in both the singlet and triplet channels are considerably reduced in the PPP model. Second, the 1$^1B_u^-$ yield is now substantially higher than that of the 1$^3B_u^+$ in all cases. Finally, the higher yield of the singlet exciton holds for both $X_{\perp}$ = 0 and $X_{\perp} \neq$ 0. This contradicts the results obtained in references \cite{Shuai00a,shuai2}, which ignore the energy difference between the 1$^1B_u^-$ and the 1$^3B_u^+$. The only consequence of nonzero $X_{\perp}$ is the asymmetry between the yields of $(1^1A_g)_1(1^1B_u)_2$ and $(1^1A_g)_2 (1^1B_u)_1$ in the singlet channel, and a similar asymmetry in the triplet channels. The overall conclusion that emerges from the plots in Fig. 5 is that nonzero electron-electron interactions substantially enhance $\eta$.
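For bookkeeping, if $\eta$ denotes the fractional singlet yield and the three triplet $M_S$ channels contribute equally (as argued above, only the $M_S$ = 0 channel is simulated), one plausible way to tally the channel yields is the following sketch. The function name and this particular accounting are our own illustration:

```python
def singlet_fraction(y_singlet, y_triplet_ms0):
    """Fractional singlet yield, assuming the three triplet M_S channels
    contribute equally (only the M_S = 0 channel is simulated)."""
    return y_singlet / (y_singlet + 3.0 * y_triplet_ms0)

# Equal singlet and M_S = 0 triplet yields recover free-spin statistics:
assert singlet_fraction(1.0, 1.0) == 0.25
# A higher singlet-channel yield pushes the fraction above 0.25:
assert singlet_fraction(2.0, 1.0) == 0.4
```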
\subsection{Finite size effects and their origin} In what follows we will take the ground state energy E(1$^1A_g$) to be zero. In the infinite chain limit the sum of the energies of the two oppositely charged polarons, E($P^+$) + E($P^-$), must equal that of the lowest singlet continuum band state in the neutral chain, independent of Coulomb interactions. With the rigid bond simple Hubbard model ($V_{ij} = \epsilon_i$ = 0 in Eq.~\ref{PPP}) as $H_{intra}$, this implies that in the long chain limit E($P^+$) + E($P^-$) = E(1$^1B_u$). For nonzero intersite Coulomb interactions that are large enough to create an excitonic energy spectrum, E($P^+$) + E($P^-$) = E(n$^1B_u$) in the long chain limit. Neither of these equalities is obeyed in short chains. In Table 1, we have listed E($P^+$) + E($P^-$) and E(1$^1B_u$) for the simple Hubbard model with $t_{ij} = t(1\pm\delta)$, $t$ = 1, and $\delta$ = 0.2 for many different $U$, for N = 6. All quantities are in units of $t$. We have also included here E(n$^3B_u$), defined in section 4. In all cases E(1$^1B_u$) is significantly larger than E($P^+$) + E($P^-$), with the difference increasing with $U$. For nonzero intersite Coulomb interactions, E(n$^1B_u$) is similarly significantly higher than E($P^+$) + E($P^-$), as shown in Table 2 for N = 6, although the difference in energy decreases with increasing interaction strength, due to localization. For sufficiently large Coulomb interactions, E(n$^3B_u$) occurs below E($P^+$) + E($P^-$) for N = 6 within the extended Hubbard model. ~\\ ~\\ \noindent Table 1: {\it N = 6 energetics within the simple Hubbard model.
All energies are in units of t.}\\
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
$U$ & $E(P^{+}) + E(P^{-})$ & $E(1^1B_u)$ & $E(n^3B_u)$ \\
\hline
2 & 1.75 & 1.99 & 2.94 \\
4 & 2.70 & 3.05 & 3.58 \\
6 & 4.08 & 4.48 & 4.84 \\
8 & 5.70 & 6.12 & 6.40 \\
10 & 7.45 & 7.89 & 8.12 \\
12 & 9.28 & 9.72 & 9.92 \\
20 & 16.92 & 17.38 & 17.50 \\
\hline
\end{tabular}
\end{center}
~\\ \noindent Table 2: {\it N = 6 energetics within the extended Hubbard model. All energies are in units of t.}\\
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
$U$ & $V$ & $E(P^{+}) + E(P^{-})$ & $E(n^1B_u)$ & $E(n^3B_u)$ \\
\hline
10 & 3 & 7.52 & 9.05 & 7.67 \\
10 & 4 & 7.32 & 8.98 & 7.06 \\
12 & 4 & 9.26 & 10.66 & 8.77 \\
20 & 6 & 16.90 & 17.96 & 14.47 \\
30 & 10 & 26.65 & 27.51 & 20.38 \\
\hline
\end{tabular}
\end{center}
\vskip 1pc The reason for this particular finite size effect is as follows. In the charged $P^{\pm}$ chains there occurs a single carrier, a vacancy or a double occupancy, within the Hubbard and extended Hubbard models. Even for large Coulomb interactions, this carrier can be delocalized over the entire chain (see Fig.~6). In contrast, in the 1$^1B_u$ or the n$^1B_u$ (and also in the n$^3B_u$) both the vacancy and the double occupancy are present, and hence the overall space left for the delocalization of any one carrier is considerably smaller.
This reduced delocalization in the neutral chain increases the energies of the $^1B_u$ states (and also of the CT triplet state, the n$^3B_u$) \cite{shuai97}. Furthermore, the ``squeezed'' nature of the wavefunctions of the excited states of neutral chains implies that matrix elements of the type $\langle P^+P^-|H_{inter}|1^1A_gj^1B_u \rangle$ (and the corresponding matrix elements in the triplet channels) are also strongly modified in short chains. These finite size effects are larger in the higher energy states than in the lowest excitons (since only the higher energy CT states have delocalized character in the long chain limit), and hence it is to be expected that the calculated yields of the higher energy singlet and triplet excitations for {\it realistic} Coulomb parameters in short chains may not be representative of the results expected for long chains. In order to understand long chain behavior, we have to minimize the relative difference in the characters of the neutral excited states and the charged polaron states. Since it is not possible to enhance the delocalization of the double occupancy and the vacancy in the 1$^1B_u$ and the n$^3B_u$, we go to the opposite limit of very strong Coulomb correlations, where the 1$^1B_u$, the n$^3B_u$ and the $P^{\pm}$ are all nearly {\it equally localized}. As shown previously in the context of optical nonlinearity, the strong Coulomb interaction limit for short chains can mimic the behavior of long chains with intermediate Coulomb interactions \cite{DGuo}. \begin{figure} \begin{center} {\includegraphics[height=6cm,width=5cm]{finite.eps}} \end{center} FIG. 6. {\it A series of valence bond diagrams in which there occurs delocalization of the doubly occupied site (denoted by $x$) in polaron $P^-$ from left to right. The up arrow corresponds to an unpaired electron and a line between two sites denotes singlet spin-pairing of the singly occupied orbitals at the sites.
In the case of polaron $P^+$, the site with the double occupancy ($x$) is replaced by an empty orbital. In excited states containing both the double occupancy and the vacancy in short chains, the delocalization of each is considerably reduced, as they cannot pass one another without first going through a virtual state with mutual annihilation.} \end{figure} \subsection{Hubbard model simulations} We consider $H_{intra}$ as the simple Hubbard model ($V_{ij} = \epsilon_i = 0$ in Eq.~\ref{PPP}) with $t_{ij} = t(1\pm\delta)$, $t$ = 1 and $\delta = 0.2$. For $H_{inter}$ we choose the $V_{ij} = X_{\perp}$ = 0 limit of Eq.~\ref{inter}, and $t_{\perp}$ = 0.1. In what follows, we no longer discuss the oscillatory behavior of $I_{mn}(t)$, but instead present the total yields $Y_{mn}$, obtained by integrating $I_{mn}(t)$ over the total duration of the time evolution. As discussed in the previous subsection, within the simple Hubbard model E(1$^1B_u) >$ E($P^+) +$ E($P^-$), and we do not expect any significant yield of $^1B_u$ states higher than the 1$^1B_u$. On the other hand, with increasing $U$ the triplet energy difference $\Delta E(1^3B_u)$ = E($P^+$) + E($P^-$) $-$ E(1$^3B_u$) increases rapidly, while based on the discussions in section 4 we expect $\Delta E(n^3B_u)$ = E($P^+$) + E($P^-$) $-$ E(n$^3B_u$) to decrease. Simultaneously, many different triplet states occur below E($P^+$) + E($P^-$). It is of interest then to evaluate the yields of all these triplet states and to determine whether or not the results based on the PPP Hamiltonian survive for very strong Coulomb interactions. As indicated below, by simultaneously monitoring the $Y_{mn}$, the energy differences between the final and initial states, and the corresponding matrix elements, we are able to obtain a useful mechanistic viewpoint of the e-h recombination. In Figs. 7(a) and (b) we have summarized our results for the singlet channel.
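The total yield $Y_{mn}$ just defined is a simple time integral of $I_{mn}(t)$. A minimal sketch (names ours, with an illustrative Rabi-like oscillatory $I_{mn}$ rather than actual simulation data):

```python
import numpy as np

def total_yield(times, i_mn):
    """Y_mn: integrate the instantaneous yield I_mn(t) over the run
    (trapezoid rule on a sampled time grid)."""
    return float(np.sum(0.5 * (i_mn[1:] + i_mn[:-1]) * np.diff(times)))

t = np.linspace(0.0, 60.0, 6001)           # fs, as in the longest runs above
i_mn = 0.02 * np.sin(0.3 * t) ** 2         # illustrative oscillatory yield
y = total_yield(t, i_mn)

# Analytic value of the integral: 0.02 * (30 - sin(36)/1.2).
assert abs(y - 0.02 * (30.0 - np.sin(36.0) / 1.2)) < 1e-5
```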
In spite of a thorough search, we did not find significant yield for any $^1B_u$ state other than the 1$^1B_u$. The yield for the 1$^1B_u$ decreases with $U$ for both N = 4 and 6, as shown in Fig.~7(a). We have also evaluated the energy difference $\Delta E(1^1B_u) = E(P^+) + E(P^-) - E(1^1B_u)$ as a function of $U$. As shown in Fig.~7(b), $|\Delta E(1^1B_u)|$ (in units of $t$) increases and the matrix element $\langle P^+P^-|H_{inter}|1^1A_gj^1B_u \rangle$ decreases with increasing $U$, suggesting that the yield scales as $\langle P^+P^-|H_{inter}|1^1A_gj^1B_u \rangle/|\Delta E(1^1B_u)|$, as would be true in a tunneling process. We have confirmed this scaling behavior from our data. Interestingly, the matrix elements for N = 4 and 6 are nearly the same, and the higher yield in the longer chain is a simple consequence of the smaller $|\Delta E(1^1B_u)|$. \begin{figure} \begin{center} {\includegraphics[width=7cm,height=7cm]{figA_sg_yield.eps}} \hfill {\includegraphics[width=7cm,height=7cm]{figB_sg_nrg_mat.eps}} \end{center} FIG. 7. {\it (a) Yield of $1^1B_u$ as a function of $U$ for a pair of butadienes (open circles) and for a pair of hexatrienes (filled circles). (b) Energy difference $\Delta E(1^1B_u)$ (see text) and $\langle P^+P^-|H_{inter}|1^1A_g1^1B_u \rangle$ as functions of $U$ for a pair of butadienes (open circles and open squares, respectively) and for a pair of hexatrienes (filled circles and filled squares, respectively).} \end{figure} \vskip 1pc As expected, the behavior in the triplet channel is more complex. First of all, no $^3B_u$ state other than the 1$^3B_u$ and the n$^3B_u$ is generated in significant amounts, although several triplet states are found below the 1$^1B_u$ state for large $U$. This may be an artifact of the symmetry imposed by us on the two-chain model system (see section 7).
More importantly, with increasing $U$ the dominant triplet yield switches from the 1$^3B_u$ state to the n$^3B_u$ state, as seen in Fig.~8(a) (n = 5 and 7 in butadiene and hexatriene, respectively). In Fig.~8(b) we have shown the behavior of $|\Delta E(1^3B_u)|$ and $|\Delta E(n^3B_u)|$ (where the energy differences, in units of $t$, are again with respect to $E(P^+) + E(P^-)$), as well as the matrix elements $\langle P^+P^-|H_{inter}|1^1A_g1^3B_u \rangle$ and $\langle P^+P^-|H_{inter}|1^1A_gn^3B_u \rangle$, for the case of N = 4 (the behavior of these quantities for N = 6 is identical). The rapid increase of $\Delta E(1^3B_u)$ and the decrease of $\langle P^+P^-|H_{inter}|1^1A_g1^3 B_u \rangle$, shown in Fig.~8(b), explain the rapid decrease in the 1$^3B_u$ yield seen in Fig.~8(a). E(n$^3B_u$) is higher than E($P^+$) + E($P^-$) for all values of $U$ (which, as pointed out above, is a finite size effect) and $|\Delta E(n^3B_u)|$ decreases very slowly with increasing $U$. The matrix element $\langle P^+P^-|H_{inter}|1^1A_gn^3B_u \rangle$ remains almost constant over the complete range of $U$ we have studied. Thus the initial increase in the yield of the n$^3B_u$ followed by its saturation is expected from the behavior of $\langle P^+P^-|H_{inter}|1^1A_gn^3B_u \rangle/|\Delta E(n^3B_u)|$. Interestingly, $\langle P^+P^-|H_{inter}|1^1A_g1^3B_u \rangle$ continues to be larger than $\langle P^+P^-|H_{inter}|1^1A_gn^3B_u \rangle$ even in the region where the yield of the n$^3B_u$ is higher, indicating once again that both the matrix element and the energy difference between the initial and final states determine the yield in any given channel. Taken together, the results of Fig.~8 also suggest that in the triplet channels the matrix elements favor a higher yield for the 1$^3B_u$, but energetics favor a higher yield for the n$^3B_u$.
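This competition between matrix element and energy denominator can be made concrete with a back-of-the-envelope perturbative weight $|\langle H_{inter}\rangle/\Delta E|^2$. As an illustration (our own arithmetic, not a new calculation), plugging in the $U$ = 10, $V$ = 3 triplet-channel entries of Table 3 below (matrix elements in units of $10^{-3}t$, energies in $t$) shows the small denominator winning:

```python
# Triplet-channel entries from Table 3 (U = 10, V = 3), energies in units of t:
h_1b, de_1b = 3.21e-3, 6.93   # 1^3B_u: larger matrix element, large |Delta E|
h_nb, de_nb = 1.64e-3, 0.14   # n^3B_u: smaller matrix element, small |Delta E|

w_1b = (h_1b / de_1b) ** 2    # perturbative weight |<H'>/Delta E|^2
w_nb = (h_nb / de_nb) ** 2

# The small energy denominator dominates: the n^3B_u channel outweighs the
# 1^3B_u by orders of magnitude, consistent with the computed yields in the
# table (77.36 versus 0.18).
assert w_nb / w_1b > 100
```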
\begin{figure} \begin{center} {\includegraphics[width=7cm,height=7cm]{figA_trp_yield.eps}} \hfill {\includegraphics[width=7cm,height=7cm]{figB_trp_nrg_mat_n4.eps}} \end{center} FIG. 8. {\it (a) Yields for the $1^3B_u$ and $n^3B_u$ as a function of $U$, for a pair of butadienes (open circles and open squares, respectively) and for a pair of hexatrienes (filled circles and filled squares, respectively). (b) $|\Delta E(j^3B_u)|$ and $\langle P^+P^-|H_{inter}|1^1A_gj^3B_u\rangle$ versus $U$ for $j$ = 1 (open circles and open squares) and $j$ = n (filled circles and filled squares) for a pair of butadienes. Unlike in Fig.~7, we have plotted here the absolute energy differences, as the $n^3B_u$ can occur both above and below the $|P^+P^-\rangle$ due to finite size effects.} \end{figure} Time-dependent perturbation theory \cite{Merzbacher} shows that the transition probability to a specified excited state $|k\rangle$ from an initial state $|i\rangle$ is given by \begin{eqnarray} P_{i \to k} = \left |{{\langle k|H^{'}|i\rangle}\over{E_k-E_i}}\right |^2 \end{eqnarray} In this spirit, we compute the transition probability to all the dominant singlets as well as triplets. From these transition probabilities, we compute $\eta_{TP}$ for various values of the Hubbard parameter $U$. We also compute $\eta_D$ from the yields of all the singlet and triplet states obtained from our dynamical simulations. These two quantities are shown as functions of $U$ in Fig. 9 for a pair of hexatrienes. Similar behavior is also seen for a pair of butadienes. We note that the two different approaches give qualitatively the same behavior. This clearly vindicates our focus on the matrix elements of the interchain interactions and the energy differences between the initial and final states. \begin{figure} \begin{center} {\includegraphics[width=10cm,height=9.0cm]{n6_etaplot_nw.eps}} \end{center} \noindent {FIG.
9. {\it Variation of $\eta$ as a function of $U$ for a pair of hexatrienes.}} \end{figure} \vskip 1pc Although the above finite size calculations by themselves have limited scope, we believe that they are quite instructive. The behavior in the triplet channel clearly shows the bifurcation of the reaction paths, with the relative weights of the two paths being a strong function of the Coulomb parameter. The energy difference is large for the 1$^3B_u$ (which has the larger matrix element with the initial reactant state), while for the n$^3B_u$ both the matrix element and the energy difference are smaller. The 1$^1B_u$ has both a large matrix element (like the 1$^3B_u$) and a small energy difference (like the n$^3B_u$), and hence its yield is larger than the overall triplet yield, which is the sum of the yields of the 1$^3B_u$ and the n$^3B_u$. We believe that this particular result continues to be valid qualitatively for long chains with realistic Hubbard $U$. \subsection{Simulations within the extended Hubbard model} The simple Hubbard model does not lead to exciton formation, and the singlet yield is limited to the 1$^1B_u$. In order to see the bifurcation of the e-h reaction path in the singlet channel, one therefore has to work with $H_{intra}$ corresponding to extended Hubbard models which support an excitonic electronic structure. For moderate Coulomb interactions, as in the PPP model, the bifurcations are washed out due to the finite size effects discussed in section 6.3. We therefore perform our calculations again for very strong Coulomb interactions in $H_{intra}$, where finite size effects are minimized due to extreme localization, both in the charged and neutral systems. Furthermore, we restrict the intersite Coulomb interactions to nearest neighbors only, to minimize the particle-hole separation in the 1$^1B_u$ exciton state and generate a very strongly bound exciton.
This procedure ensures that there exist distinct delocalized CT states above the exciton even in short chains (see section 4). We have again chosen $X_{\perp}$ = 0 and $t_{\perp}$ = 0.1 in $H_{inter}$. In order to be consistent with the nonzero intrachain intersite Coulomb interaction, we have now included an interchain $V$ equal to 10\% of the intrachain $V$. The results of our calculations are shown in Table 3, where we have listed the yields of the two dominant singlet (1$^1B_u$ and n$^1B_u$) and dominant triplet (1$^3B_u$ and n$^3B_u$) states, the energy differences $\Delta E(j^1B_u)$ and $\Delta E(j^3B_u)$ ($j$ = 1 and n), defined as before with respect to E($P^+$) + E($P^-)$, and the relevant matrix elements of $H_{inter}$ between the initial and various final states. Several conclusions emerge from these data. (1) For such large Coulomb interactions the n$^3B_u$ is (for moderate $U$ = 10, $V$ = 3 and 4) energetically close to the initial state, even as the n$^1B_u$ is considerably higher in energy (we have already argued that the latter is a finite size effect \cite{Barford}). As in the previous subsection, the bifurcation in the triplet channel leads to a very high yield of the n$^3B_u$. Unlike in the previous case, though, the exciton character of the 1$^1B_u$ ensures a large $\Delta E(1^1B_u)$ here, and hence a small yield of the 1$^1B_u$. Thus in this narrow regime of Coulomb interactions, the triplet yield dominates over the singlet yield. This is an artifact of our restriction to short chains, as explained below. (2) As the Coulomb interactions are increased further, the bifurcation in the singlet channel reaction path sets in, and in this case the n$^1B_u$ yield dominates over that of the 1$^1B_u$. Indeed, in this region $\Delta E(n^3B_u)$ is moderately large once again even as $\Delta E(n^1B_u)$ is small (though still negative).
Thus in the limit of very large Coulomb interactions the overall singlet yield again dominates over the triplet yield, as within the PPP model, with the difference that here in both the spin channels the higher energy state dominates over the corresponding lower energy state. (3) Matrix elements of $H_{inter}$ between initial and final states are not independent of the energy difference between them: the smaller the energy difference, the larger the matrix element. This makes understanding the finite size effects extremely important, since in short chains, where the higher energy singlet and triplet states are much too high in energy, the matrix elements leading to these states are {\it simultaneously} small, thereby reducing the overall yields to these states. In both singlet and triplet channels, we expect the bifurcations of the reaction paths to play important roles in the long chain limit. (4) Finally, we note that even as the energy differences between the initial polaron-pair state and higher (lower) energy final states become small (large), the relevant matrix elements continue to be large for the lower states 1$^1B_u$ and 1$^3B_u$. We believe that this will continue to be true in the long chain limit, the implication of which is that the matrix elements favor the lower excitons, while the smaller energy difference favors the higher energy CT states. This result is the same as that observed in the triplet channel for the simple Hubbard model. The true total yields in either spin channel are therefore very difficult to calculate directly, and the implications of the above data must be drawn with care. ~\\\\ Table 3: {\it The energy differences, matrix elements and yields of singlet and triplet $B_u$ states for various values of the $U$ and $V$ parameters for N = 6, with $H_{intra}$ as the extended Hubbard model. For each set of $U$ and $V$, the first row corresponds to S = 0 and the second row corresponds to S = 1.
The total singlet to triplet yield ratio is small at the top but high at the bottom of the table. All energies are in units of $t$.}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline
$U$ & $V$ & $\Delta E(1B_u)$ & $\Delta E(nB_u)$ &
$H^{\prime}_{P^+ \otimes P^-~;~ G\otimes 1B}$ $(10^{-3})$ &
$H^{\prime}_{P^+ \otimes P^-~;~ G\otimes nB}$ $(10^{-3})$ &
Y(1B) & Y(nB) \\ \hline
10 & 3  & 0.84  & $-1.53$ & 4.12 & 0.35 & 16.84 &       \\
   &    & 6.93  & $-0.14$ & 3.21 & 1.64 & 0.18  & 77.36 \\ \hline
10 & 4  & 1.83  & $-1.65$ & 3.49 & 0.53 & 3.00  &       \\
   &    & 6.64  & 0.27    & 3.39 & 1.45 & 0.21  & 66.67 \\ \hline
12 & 4  & 1.60  & $-1.40$ & 3.33 & 0.54 & 4.27  &       \\
   &    & 8.74  & 0.50    & 2.88 & 1.41 & 0.11  & 17.88 \\ \hline
20 & 6  & 3.13  & $-1.06$ & 2.10 & 0.64 & 0.65  & 0.87  \\
   &    & 16.58 & 2.43    & 2.03 & 0.92 &       & 0.43  \\ \hline
30 & 10 & 6.86  & $-0.86$ & 1.53 & 0.59 & 0.11  & 0.76  \\
   &    & 26.42 & 6.26    & 1.70 & 0.68 &       &       \\ \hline
\end{tabular}
\end{center}
~\\ \section{Discussion and conclusions} \label{conclusion} Although our calculations are for finite systems, they have the advantage of being exact. Based on our experience \cite{DGuo}, we believe that true long chain behavior for realistic Coulomb interactions can be obtained by ``grafting together'' the different pieces of information discussed in section 6.
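The competition in Table 3 between matrix elements and energy denominators can be illustrated with a crude perturbative estimate, taking the relative yield of a final state to scale as $(H'/\Delta E)^2$. This Python sketch is only an illustration (the scaling assumption is ours, not the dynamical calculation used in the text), but it reproduces the qualitative ordering of the $U$ = 10, $V$ = 3 yields:

```python
# Crude perturbative illustration of the trends in Table 3:
# relative yield ~ (matrix element / energy difference)^2.
# This is NOT the time-dependent calculation used in the text;
# it only reproduces the qualitative ordering of the yields.

def relative_yield(h, de):
    """Second-order estimate: |<f|H'|i>|^2 / (E_f - E_i)^2."""
    return (h / de) ** 2

# U = 10, V = 3 entries of Table 3 (energies in units of t,
# matrix elements in units of 10^-3 t)
singlet = {"1Bu": relative_yield(4.12e-3, 0.84),
           "nBu": relative_yield(0.35e-3, -1.53)}
triplet = {"1Bu": relative_yield(3.21e-3, 6.93),
           "nBu": relative_yield(1.64e-3, -0.14)}

# Singlet channel: the 1Bu dominates; triplet channel: the nBu
# dominates, in line with the computed yields (16.84 vs --, 0.18 vs 77.36).
print(singlet["1Bu"] > singlet["nBu"])
print(triplet["nBu"] > triplet["1Bu"])
```

Such an estimate cannot, of course, capture the saturation of the matrix elements at small energy differences noted in point (3) above.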
The interpretation of the PPP model calculations is obvious: although $|\langle 1^1A_g \cdot 1^1B_u |H_{inter}| P^+P^- \rangle_S| \sim |\langle 1^1A_g \cdot 1^3B_u |H_{inter}| P^+P^- \rangle_T|$ with these parameters, the proximity of the S = 0 final state to the initial polaron-pair state relative to the S = 1 final state favors the singlet, thereby leading to $\eta$ considerably larger than 0.25 \cite{Tandon,Moushumi}. No significant yield to higher energy states is obtained within the PPP model at our chain lengths, but as we have shown, this is because the magnitudes of the matrix elements between initial and final states depend on the energy differences between them, and at these short chain lengths, the relevant energy differences are too large. Thus the PPP model results cannot be thought of as complete. The results in section 6.4 are very instructive. The relatively large (small) energy difference between the 1$^3B_u$ (n$^3B_u$) and the initial state composed of the polaron pairs that occurs here for large Hubbard $U$ is exactly what is expected at long chain lengths for intermediate $U$. We then expect that in long chains the dominant singlet yield is to the 1$^1B_u$ and the dominant triplet yield to the n$^3B_u$. The continuing higher yield of the singlet, even when compared to the total triplet yield, is then significant. The reason for the relatively lower triplet yield can be understood from Fig.~8(b): even as $|\langle 1^1A_g \cdot 1^3B_u |H_{inter}| P^+P^- \rangle_T|$ decreases with $U$, it remains larger than $|\langle 1^1A_g \cdot n^3B_u |H_{inter}| P^+P^- \rangle_T|$. In contrast, the 1$^1B_u$ remains energetically proximate to $|P^+P^-\rangle_S$, and hence the matrix element in the singlet channel also continues to be large. The resultant large $\eta$ within simple Hubbard models is therefore expected even in the long chain limit. The behavior within the extended Hubbard model is only slightly more complex.
Once again, we believe that increasing Coulomb correlations at fixed N is qualitatively equivalent to increasing N at fixed Coulomb correlations, since both have the same effect on the order and proximities of the most relevant energy states. We have not included the results for smaller $U$, $V$ in Table 3, because at these parameters the behavior continues to be similar to that within the PPP model, i.e., the yields are dominated by the lowest S = 0 and 1 excitons, with $\eta >$ 0.25. With stronger Coulomb interactions, in both the singlet and triplet channels the yield shifts from the lowest excitons to the higher n$B_u$ states. Nevertheless, $\eta$ is predicted to be greater than 0.25 for most of the parameter space. Unfortunately, we are unable to demonstrate the gradual shift from lower energy to higher energy products as would occur upon increasing N at fixed Coulomb correlations: our reaction products are either the lower energy states or the higher energy ones. We believe that the true long chain behavior is very likely a ``superposition'' of the calculated results for moderate and strong Coulomb interactions, viz., the larger matrix elements with the lower states as final states favor these, while the proximities in energy favor the higher CT states. For systems with relatively small singlet exciton binding energies, in the long chain limit we expect the yield in the singlet channel to be dominated by the 1$^1B_u$, as in the simple Hubbard model. For systems with large singlet exciton binding energy, it is conceivable that the n$^1B_u$ and the n$^3B_u$ both make significant contributions. The relative yield of the 1$^3B_u$ in these systems should be very small, and as a consequence we expect again the overall $\eta$ to be large. To summarize then, what varying chain lengths, Coulomb parameters and exciton binding energies do is redistribute the overall yields within each spin channel between the lowest exciton and the higher energy CT state.
However, even after this redistribution $\eta$ remains greater than 0.25. Clearly, completely convincing proof of the above conjecture would require long chain calculations that can treat the low and high energy states, as well as the polaron-pair states, of long chains with high precision. We are currently pursuing density matrix renormalization group calculations to test the ideas posed above. One apparent surprise in our results is the overall limitation to the 1$^3B_u$ and the n$^3B_u$ as products in the triplet channel e-h recombination. This is surprising, given that there exist so many covalent triplet states between the 1$^3B_u$ and the n$^3B_u$ even in short chains (Fig. 10). \begin{figure} \begin{center} {\includegraphics[width=10cm,height=7.5cm]{nrg_0526.eps}} \end{center} {FIG. 10.{ \it The triplet energy spectrum between the 1$^3{\rm B}_u^+$ and 1$^3{\rm A}_g^-$ in a N = 12 chain relative to the singlet ground state, within the PPP-Ohno Hamiltonian. Different symmetry subspaces are shown separately. The state marked by an asterisk is dipole-coupled to the 1$^3{\rm B}_u^+$. The 1$^1{\rm B}_u^-$ and the $m^1{\rm A}_g^+$ ($m$ = 8 in N = 12) are also included.}} \end{figure} Indeed, if these triplets were generated, $\eta$ could in principle have reached values smaller than 0.25. One reason these triplets are not generated within our calculations is the artificial mirror plane symmetry that we have imposed between the two polyene chains, as a consequence of which even the charge-separated polaron-pair states have $B_u^-$ and $B_u^+$ symmetries in singlet and triplet spin spaces, respectively, and the interchain hop maintains these symmetries. All low to intermediate energy triplets that belong to symmetry subspaces different from the $^3B_u^+$ are thereby excluded. There do exist, however, $^3B_u^+$ states above the 1$^3B_u$ but below the n$^3B_u$, and even these are not generated in significant amounts over a broad region of the parameter space.
We are currently pursuing exact calculations with all possible relative orientations between the two molecular components. While these calculations will obviously generate singlets and triplets belonging to all possible symmetry subspaces, we believe that they will demonstrate the existence of approximate ``sum rules'', i.e., that the total yield in each spin channel remains nearly conserved, and that the total yields are close to what we have already found in our current calculations. Finally, we address the issue of electron-phonon interactions, ignored in our calculations. Electron-phonon interactions play a dominant role in theories of e-h recombination in which intermolecular charge-transfer leads to higher energy singlets and triplets only, and differences in cross-sections arise from differences in structural relaxations to the lowest excitons \cite{hong,bittner}. As we have pointed out elsewhere, these theories ignore triplet states in between the high energy charge-transfer state and the lowest triplet, and such triplets should definitely be involved in nonradiative relaxation \cite{MDas}. Inclusion of these intermediate triplets will enhance the triplet relaxation and wipe out the differences between singlet and triplet relaxations that give rise to the difference between singlet and triplet yields within the above theories. The only way electron-phonon interactions can influence the singlet:triplet yield within our picture is if these interactions substantially change the relative energies of the most relevant states. It has been claimed by Conwell, for example, that the relaxed polaron energies E($P^+$)+E($P^-$) can be considerably below the E(n$^1B_u$) that is observed in nonlinear optical experiments \cite{Conwell}. This has, however, not been substantiated by any calculations, and we are currently investigating this possibility.
\vskip 1pc \section{Acknowledgments} Work in Bangalore was supported by the CSIR, India and DST, India, through /INT/US (NSF-RP078)/2001. Work in Arizona was supported by NSF-DMR-0101659, NSF-DMR-0406604 and NSF-INT-0138051. We are grateful to our experimental colleagues Z.V. Vardeny and M. Wohlgenannt for numerous stimulating discussions. S.M. acknowledges the hospitality of the Indian Institute of Science, Bangalore, where this work was completed.
cond-mat/0411302
\section{Introduction} The concept of entanglement is at the heart of modern ideas on quantum information processing and quantum computing [1--5]. There exist different suggestions for manipulating quantum entanglement, in particular, by employing spin systems or considering atomic systems interacting with electromagnetic fields [6]. The aim of the present paper is to analyze the relation between entanglement and an ensemble of radiating atoms. A specific feature of this analysis is the consideration of a large ensemble of many atoms under conditions of well developed collective effects and coherent radiation. First of all, it is necessary to emphasize that, generally, there are two kinds of entanglement, the entanglement of a quantum state and the entanglement produced by a quantum operation. And one has to distinguish between these two principally different notions. For pure quantum states, represented by wave functions, the concept of entanglement is straightforward. A pure state of a complex system is termed entangled if and only if it cannot be represented as a tensor product of wave functions pertaining to different parts of the given system. For mixed states, represented by statistical operators, the notion of entanglement is essentially more complicated. This is because functions and operators are fundamentally different mathematical quantities. Despite this, the name of a state is also commonly applied to statistical, or density, operators. One decides whether a statistical operator (a mixed state) is entangled or not according to the formal structure of the given operator. An operator $\hat\rho$ is called separable if and only if it can be written as a convex combination of product states: $$ \hat\rho = \sum_\nu \lambda_\nu \otimes_j \rho^{(j)}_\nu \; ; \qquad \lambda_\nu\geq 0 \; , \; \; \sum_\nu \lambda_\nu = 1 \; . $$ When $\hat\rho$ is not separable it is named entangled.
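For two qubits the separability definition above can be made concrete with a small numerical sketch (our illustration, not part of the original text), using the positive-partial-transpose test, which is necessary and sufficient in this low-dimensional case:

```python
import numpy as np

# Minimal two-qubit sketch of the separability definition:
# a convex mixture of product states versus a Bell-state projector.
# Entanglement is detected with the partial-transpose test,
# which is necessary and sufficient for two qubits.

def dm(psi):
    """Density operator |psi><psi| of a pure state vector."""
    psi = np.asarray(psi, dtype=complex)
    return np.outer(psi, psi.conj())

def partial_transpose(rho, d=2):
    """Transpose the second factor of a bipartite d x d operator."""
    t = rho.reshape(d, d, d, d)          # indices (i1, i2, j1, j2)
    return t.transpose(0, 3, 2, 1).reshape(d * d, d * d)

def is_ppt(rho):
    """True if the partial transpose has no negative eigenvalue."""
    return np.min(np.linalg.eigvalsh(partial_transpose(rho))) >= -1e-12

up, dn = np.array([1, 0]), np.array([0, 1])

# Separable: rho = sum_nu lambda_nu rho1_nu (x) rho2_nu
rho_sep = 0.5 * np.kron(dm(up), dm(dn)) + 0.5 * np.kron(dm(dn), dm(up))

# Entangled: projector onto the Bell state (|01> - |10>)/sqrt(2)
bell = (np.kron(up, dn) - np.kron(dn, up)) / np.sqrt(2)
rho_ent = dm(bell)

print(is_ppt(rho_sep))   # True: separable by construction
print(is_ppt(rho_ent))   # False: entangled
```

The same partial-transpose criterion is invoked later for the Werner operator.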
However, from the mathematical point of view, the properties of an operator are correctly defined not according to its formal appearance but according to its action on a given set of functions. In order to better understand the difference between the structure of an operator, be it a statistical operator or not, and its action, it is useful to set aside physical applications for a while and to make the point precise in mathematical terms. Then, for a given operator, one could define two types of entanglement. One is the entanglement {\it of an operator} and another one is the entanglement {\it produced by an operator}. In the first case, one treats an operator as entangled or not according to whether its formal structure is nonseparable or separable. In the second case, one should characterize an operator as {\it entangling} or not on the basis of its action over a set of disentangled functions. Note that entangled and entangling are quite different characteristics. In physical applications, one usually discusses the first type of entanglement for a particular class of operators, that is, for statistical, or density, operators. The entanglement of a statistical operator is commonly termed the entanglement of state. A measure for this type of entanglement can be well defined only for pure bipartite systems. No general measure of this type is known for multipartite or mixed systems. It is worth stressing that, whatever entanglement of a state one is talking about, be it the entanglement of formation, entanglement cost, relative entropy of entanglement, or entanglement of distillation, all of them pertain to the same type characterizing the entanglement of an operator. The entanglement {\it produced by an operator} is a qualitatively different notion. To my understanding, only this type of produced entanglement, or entanglement production, can be correctly defined in mathematical terms.
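The distinction can be illustrated with a hypothetical two-qubit example (not drawn from the text): the CNOT gate is an entangling operator, since it maps a factor function into an entangled function, which can be detected via the Schmidt rank of the output vector.

```python
import numpy as np

# Illustration of "entangling" as a property of an operator's ACTION:
# an operator is entangling if it maps some factor function
# f = phi1 (x) phi2 into a vector that is no longer a product.
# (Two-qubit example, not taken from the text.)

def schmidt_rank(psi, d=2):
    """Number of nonzero Schmidt coefficients; 1 <=> product state."""
    s = np.linalg.svd(np.asarray(psi, dtype=complex).reshape(d, d),
                      compute_uv=False)
    return int(np.sum(s > 1e-12))

# CNOT acts on the basis |ab> as |a, a XOR b>
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

plus = np.array([1, 1]) / np.sqrt(2)
zero = np.array([1, 0])
f = np.kron(plus, zero)          # a factor (disentangled) function

print(schmidt_rank(f))           # 1: product state
print(schmidt_rank(cnot @ f))    # 2: CNOT has produced entanglement
```

Whether CNOT is itself "entangled" in the structural sense is a separate question from its entangling action, which is precisely the point being made.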
And it is just the entanglement production that is considered in the present paper. A general measure of entanglement production, realized by an arbitrary operator and for systems of any nature, be it multipartite or mixed systems, has recently been introduced [7,8]. This measure is studied here for characterizing entanglement produced in the process of collective atomic radiation. \section{Entanglement Production} First, we need to describe a general way of quantifying entanglement production. Consider a composite system consisting of parts enumerated by the index $i\in\{ i\}$ pertaining to a label manifold $\{ i\}$. For the standard case of a discrete manifold, one has $i=1,2,\ldots$ In general, the label manifold can be continuous. Let each part be characterized by a single-partite Hilbert space $$ {\cal H}_i \equiv \overline{\cal L} \{ |n_i>\} \; , $$ being a closed linear envelope of a single-partite basis $\{|n_i>\}$. The composite Hilbert space \begin{equation} \label{1} {\cal H} \equiv \otimes_i {\cal H}_i \end{equation} is the tensor product, which is a closed linear envelope $$ {\cal H} \equiv \overline{\cal L} \{ |n >\} $$ of a multipartite basis $\{|n>\}$ consisting of the vectors \begin{equation} \label{2} |n>\; \equiv \otimes_i \; |n_i> \; . \end{equation} For a continuous label manifold $\{ i\}$ the products (1) and (2) are the continuous tensor products, introduced by von Neumann [9] and employed for particular cases in Refs. [10,11]. In each space ${\cal H}_i$, for functions $\varphi_i$ and $\varphi_i'$, a scalar product $(\varphi_i,\varphi_i')$ is defined. The norm, generated by this product, is \begin{equation} \label{3} ||\varphi_i||_{{\cal H}_i}\; \equiv \sqrt{(\varphi_i,\varphi_i)} \; , \end{equation} which is termed the vector norm. The composite space (1) can be decomposed into two sets of functions. 
One sort of functions $f\in{\cal H}$ is the class of {\it factor functions} \begin{equation} \label{4} f \equiv \otimes_i \varphi_i \qquad (\varphi_i\in{\cal H}_i) \; , \end{equation} which are called {\it disentangled functions}. All such functions form the {\it disentangled set} \begin{equation} \label{5} {\cal D} \equiv \{ \otimes_i\varphi_i|\; \varphi_i\in {\cal H}_i\} \; . \end{equation} The remaining vectors of ${\cal H}$ constitute the complement ${\cal H}\setminus{\cal D}$ whose elements cannot be represented as tensor products of $\varphi_i\in{\cal H}_i$. Hence, the set ${\cal H}\setminus{\cal D}$ can be named the {\it entangled set}. By construction, $$ {\cal H} = {\cal D} \cup {\cal H} \setminus {\cal D} \; . $$ The set ${\cal D}$ is not necessarily a subspace of ${\cal H}$. Nevertheless, on the basis of the norms (3), it is straightforward to define the restricted vector norm \begin{equation} \label{6} ||f||_{\cal D} \equiv \prod_i ||\varphi_i||_{{\cal H}_i} \end{equation} over the set ${\cal D}\subset{\cal H}$. Let an operator $A$ be given on ${\cal H}$. For instance, this can be an operator of an observable quantity, or an operator representing a measurement or any other operation on ${\cal H}$. Assume that the operator $A$ is bounded, possessing the norm $||A||_{\cal H}$. We may introduce the restricted norm \begin{equation} \label{7} ||A||_{\cal D} \equiv \sup_{f\in{\cal D}}\; \frac{||Af||_{\cal D}}{||f||_{\cal D}} \; , \end{equation} interpreted as the supremum of the expression corresponding to the standard definition of a norm over a Hilbert space, restricted to the subset ${\cal D}$ of disentangled functions. The operator norm $||A||$, generated by the vector norm $||Af||$, is also called the Hermitian or spectral norm. The norm (7) can be written as \begin{equation} \label{8} ||A||_{\cal D} = \sup_{f,f'\in{\cal D}} \; \frac{|(f,Af')|}{||f||_{\cal D} ||f'||_{\cal D}} \; , \end{equation} where $f,f'\neq 0$.
For a self-adjoint operator, this can be reduced to \begin{equation} \label{9} ||A||_{\cal D} = \sup_{f\in{\cal D}} \; \frac{|(f,Af)|}{||f||^2_{\cal D}} \; , \qquad (A^+=A) \; , \end{equation} where again $f\neq 0$. Any factor function $f\in{\cal D}$ can be written as an expansion $$ f=\otimes_i\; \sum_{n_i} a_{n_i}|n_i> \; , $$ which can be represented as $$ f=\sum_n c_n|n> \qquad (n=\{n_i\}) \; , $$ with $$ c_n =\prod_i a_{n_i} \; , \qquad |n>\; \equiv \otimes_i\; |n_i> \; . $$ Therefore, norm (9) takes the form \begin{equation} \label{10} ||A||_{\cal D} = \sup_{\{ c_n\} } \; \frac{|\sum_{mn} c_m^* c_n<m|A|n>|}{\sum_n |c_n|^2} \; . \end{equation} The latter, keeping in mind an orthonormalized basis, such that $<m|n>=\delta_{mn}$, can be simplified to \begin{equation} \label{11} ||A||_{\cal D} = \sup_n |<n|A|n>| \; . \end{equation} This representation is the most convenient for practical calculations. For each operator $A$ on ${\cal H}$, one may put into correspondence a nonentangling operator $A^\otimes$ having the structure of a tensor product $\otimes_i A_i$ of single-partite operators \begin{equation} \label{12} A_i \equiv {\rm Tr}_{\{ {\cal H}_{j\neq i}\} } \; A \; . \end{equation} To preserve the scale-invariant form of $A^\otimes$, we require the validity of the normalization condition \begin{equation} \label{13} {\rm Tr}_{\cal H} \; A^\otimes = {\rm Tr}_{\cal H} \; A \; , \end{equation} in which $$ {\rm Tr}_{\cal H}\; \otimes_i A_i = \prod_i {\rm Tr}_{{\cal H}_i} \; A_i \; . $$ The amount of entanglement produced by an operator $A$ is characterized [7,8] by the measure of entanglement production, which compares the actions of $A$ and $A^\otimes$ on the disentangled set ${\cal D}$. The nonentangling counterpart of $A$, taking account of condition (13), reads \begin{equation} \label{14} A^\otimes \equiv \frac{{\rm Tr}_{\cal H} A}{{\rm Tr}_{\cal H} \otimes_i A_i}\; \otimes_i A_i \; . 
\end{equation} The {\it measure of entanglement production} by an operator $A$ is \begin{equation} \label{15} \varepsilon(A) \equiv \log \; \frac{||A||_{\cal D}}{||A^\otimes||_{\cal D}} \; . \end{equation} Here the norm $||A||_{\cal D}$ is given by Eq. (11) and the norm $||A^\otimes||_{\cal D}$ can be calculated as $$ ||A^\otimes||_{\cal D} = \frac{{\rm Tr}_{\cal H} A}{{\rm Tr}_{\cal H} \otimes_i A_i}\; \prod_i ||A_i||_{{\cal H}_i} = ({\rm Tr}_{\cal H} A) \prod_i \frac{||A_i||_{{\cal H}_i}}{{\rm Tr}_{{\cal H}_i} A_i} \; . $$ Measure (15) will be used in what follows for particular physical applications. By construction, this measure satisfies all properties typical of any measure [7,8]. Several remarks are in order. When one is interested in the entangling properties of a family ${\cal A}\equiv\{ A\}$ of operators, one may calculate measure (15) for each of them and then find the maximal value $$ \varepsilon({\cal A}) \equiv \sup_{A\in{\cal A}}\; \varepsilon(A) $$ quantifying the entanglement production by the family ${\cal A}$. In particular, ${\cal A}$ can represent an algebra of observables. By replacing in the above expressions the disentangled set ${\cal D}$ by the entangled set ${\cal H}\setminus{\cal D}$, we can introduce the measure of disentanglement production $$ \delta(A) \equiv \log\; \frac{||A||_{{\cal H}\setminus{\cal D}}}{||A^\otimes||_{{\cal H}\setminus{\cal D}}} \; , $$ describing the disentangling properties of an operator $A$. The restricted norm (11) does not necessarily coincide with the standard norm, so that, generally speaking, $$ ||A||_{\cal D}\neq ||A||_{\cal H} \; , $$ though in many cases they can be identical. An example where these norms are different can be constructed as follows. Let $\varphi\in{\cal H}$, with $||\varphi||=1$, be written as $$ \varphi = \sum_n c_n|n> \qquad (n\equiv\{ n_i\}) \; . $$ Define the operator $$ A\equiv |\varphi><\varphi| \; , $$ for which $||A||_{\cal H}=1$.
At the same time, the restricted norm (11) gives $$ ||A||_{\cal D} =\sup_n | c_n|^2 \; . $$ In the case when $\sum_n|c_n|^2=1$ and at least two of the $c_n$ are nonzero, one has $\sup_n|c_n|^2<1$. Hence $$ ||A||_{\cal D} < ||A||_{\cal H} = 1 \; . $$ In general, the restricted norm $||A||_{\cal D}$ can be expressed through the standard norm $||A||_{\cal H}$ as follows. Introduce the projector $P_{\cal D}$, which projects the composite space ${\cal H}$ onto the disentangled set ${\cal D}$, so that $P_{\cal D}{\cal H}={\cal D}$. Then, we may write $$ ||A||_{\cal D} \equiv ||P_{\cal D} A P_{\cal D}||_{\cal H} \; . $$ From here, $||A||_{\cal D}\leq||A||_{\cal H}$. The projector $P_{\cal D}$, however, is nonlinear, hence $P_{\cal D} AP_{\cal D}$ is a nonlinear operator, even if $A$ is linear. Note that the operator norm is usually defined for linear operators. For pure bipartite systems, one measures entanglement by means of the reduced von Neumann entropy $$ S_N \equiv -{\rm Tr}_{{\cal H}_i}\; \hat\rho_i \log\hat\rho_i \; , $$ in which $$ \hat\rho_i \equiv {\rm Tr}_{{\cal H}_{j\neq i} }\; \hat\rho \; , $$ and $\hat\rho$ is a statistical operator of the bipartite system. Then $$ S_N = - \sum_n |c_n|^2 \log|c_n|^2 \; , $$ where $|c_n|$ is a Schmidt weight. The entropy $S_N$ is assumed to quantify the entanglement of a bipartite state. This should not be confused with the entanglement produced by the statistical operator $\hat\rho$, which is quantified by measure (15), yielding $$ \varepsilon(\hat\rho) = - \log \sup_n|c_n|^2 \; . $$ As is evident, these measures are, in general, different, coinciding only for the maximally entangled states, for which $|c_n|=const$.
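The pure-state example above is easy to check numerically. In this sketch (two qubits assumed, with Schmidt weights 0.8 and 0.2 chosen for illustration) the restricted norm (11), the measure $\varepsilon(\hat\rho) = -\log\sup_n|c_n|^2$, and the reduced von Neumann entropy $S_N$ are computed side by side:

```python
import numpy as np

# Two-qubit sketch of the pure-state example: for A = |phi><phi|
# with phi = sum_n c_n |n>, the restricted norm (11) is sup_n |c_n|^2,
# while the spectral norm is 1, so eps(rho) = -log sup_n |c_n|^2,
# generally different from S_N = -sum_n |c_n|^2 log |c_n|^2.
# Schmidt weights 0.8 and 0.2 are an arbitrary illustrative choice.

c = np.array([np.sqrt(0.8), 0.0, 0.0, np.sqrt(0.2)])  # coefficients in a product basis
A = np.outer(c, c)                                     # A = |phi><phi|

norm_H = np.max(np.linalg.eigvalsh(A))   # spectral norm: 1 for a projector
norm_D = np.max(np.abs(np.diag(A)))      # restricted norm (11): sup_n |c_n|^2

eps = -np.log(norm_D)                    # entanglement production, measure (15)
w = c[c > 0] ** 2
S_N = -np.sum(w * np.log(w))             # reduced von Neumann entropy

print(norm_D < norm_H)   # True: ||A||_D < ||A||_H = 1
print(eps, S_N)          # the two measures differ away from maximal entanglement
```

For the maximally entangled choice $|c_n|^2 = 1/2$ the two quantities coincide at $\log 2$, in agreement with the statement above.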
To stress once again that entanglement production by an operator and entanglement of an operator are rather different notions, let us consider the Werner operator [12], which is defined for bipartite systems of the composite space ${\cal H}={\cal H}_1\otimes{\cal H}_2$, with the single-partite space dimensionality $d\equiv{\rm dim}{\cal H}_i$. The Werner operator is $$ \hat W_\alpha \equiv \frac{1}{d^3-d} \left [ (d-\alpha)\hat 1 + (d\alpha - 1) \hat\sigma \right ] \; , $$ where $\alpha$ is a real number, $\hat 1$ is a unity operator $$ \hat 1 \equiv \sum_{i,j=1}^d \; |n_i n_j><n_i n_j| \; , $$ and $\hat\sigma$ is a flip operator $$ \hat\sigma \equiv \sum_{i,j=1}^d \; |n_i n_j><n_j n_i| \; . $$ The name of the latter operator comes from the property $$ \hat\sigma (\varphi_1\otimes\varphi_2) = \varphi_2 \otimes \varphi_1 \; . $$ Also, one has $$ {\rm Tr}_{\cal H} \hat W_\alpha =1 \; , \qquad \alpha= {\rm Tr}_{\cal H} \hat W_\alpha \hat \sigma \; . $$ The positive partial transpose criterion [13,14] tells us that $\hat W_\alpha$ is separable if and only if $\alpha\geq 0$. Because of this, the entanglement of the Werner operator for $\alpha\geq 0$ must be zero. However, this operator is {\it entangling} for all $\alpha\neq 1/2$, which means that it transforms nonentangled functions $f\in{\cal D}$ from the disentangled set ${\cal D}$, which are the factor functions of the form $$ f = \sum_{i=1}^d a_i|n_i> \otimes \sum_{j=1}^d b_j|n_j> \; , $$ into entangled functions $\hat W_\alpha f\in{\cal H}\setminus{\cal D}$. Therefore, the measure of entanglement production $\varepsilon(\hat W_\alpha)$ must be nonzero. For instance, in the case of two-dimensional single-partite spaces, when $d=2$, we have $$ \hat W_\alpha = \frac{1}{6}\; \left [ (2-\alpha)\hat 1 + (2\alpha -1)\hat\sigma \right ] \; .
$$ From here, we find $$ \hat W_\alpha^\otimes = \frac{1}{4}\; \hat 1 \; , \qquad ||\hat W_\alpha^\otimes||_{\cal D} = \frac{1}{4} \; , \qquad ||\hat W_\alpha||_{\cal D} = \frac{1}{6}\; \sup\{ 2-\alpha,1+\alpha \} \; . $$ Then, measure (15) becomes $$ \varepsilon(\hat W_\alpha) = \log\left ( 1 + \frac{1}{3}\left | 2\alpha -1 \right | \right ) \; , $$ which is nonzero for all $\alpha\neq 1/2$. Another example of an operator that is separable, but at the same time entangling, can be found in Ref. [8]. \section{Spin Entanglement} The process of radiation by resonant atomic systems can be formulated in terms of the evolution equations for spin operators. The latter are often called pseudospin operators in order to emphasize that they do not correspond to actual spins but are just mathematical structures possessing the properties of spin operators. For resonant, effectively two-level, atoms, such operators correspond to spin-1/2 operators, which is implied in what follows. It is convenient to deal with the spin operators $S_j^\alpha\equiv S^\alpha({\bf r}_j)$ being either the ladder operators $S_j^\pm$ or the $z$-component operator $S_j^z$. The former are used for describing dipole transitions, while the latter represents the population difference. These operators possess the properties $$ S_i^\pm S_i^z = \mp \; \frac{1}{2}\; S_i^\pm \; , \qquad S_i^z S_i^\pm = \pm \; \frac{1}{2}\; S_i^\pm \; , $$ $$ S_i^+ S_i^- = \frac{1}{2} + S_i^z \; , \qquad S_i^- S_i^+ = \frac{1}{2}- S_i^z \; , \qquad S_i^\pm S_i^\pm = 0 \; , \qquad S_i^z S_i^z = \frac{1}{4} $$ and satisfy the commutation relations $$ [ S_i^+,\; S_j^-] = 2\delta_{ij} S_i^z \; , \qquad [S_i^\pm,\; S_j^z] = \mp \delta_{ij} S_i^\pm \; . $$ Let us introduce the spin density matrices [15] of two types. 
One type is $$ R_n = \left [ R_n(i_1\ldots i_n,j_1\ldots j_n)\right ] \; , $$ with the elements \begin{equation} \label{16} R_n(i_1\ldots i_n,j_1\ldots j_n) \equiv \; < S_{j_n}^+\ldots S_{j_1}^+ S_{i_1}^-\ldots S_{i_n}^- > \; . \end{equation} And another type is $$ Z_n = \left [ Z_n(i_1\ldots i_n,j_1\ldots j_n) \right ] \; , $$ with the elements \begin{equation} \label{17} Z_n(i_1\ldots i_n,j_1\ldots j_n) \equiv \; < S_{j_n}^z\ldots S_{j_1}^z S_{i_1}^z\ldots S_{i_n}^z > \; . \end{equation} Here the angle brackets $<\ldots>$ imply statistical averaging with a statistical operator $\hat\rho$. Expression (16) defines the dipole-transition, or transverse, matrix. Equation (17) determines the population-difference, or longitudinal, matrix. These are matrices with respect to the indices $i$ and $j$. Spin density matrices [15] can be used for describing their entanglement production under magnetic transitions [8]. They also appear when studying the thermal entanglement caused by statistical operators of magnetic systems [8,16,17]. However, let us recall that in the present paper the spin operators will be applied to describe not magnetic systems but ensembles of radiating atoms. Similar pseudospin operators arise when considering atomic entanglement in two-component Bose-Einstein condensates [18] or in systems of atoms with several internal states [19,20]. The spatial variables ${\bf r}_i$ can be either discrete or continuous, which is not essential, since the final results will not depend on this difference. In the case of discrete ${\bf r}_i\in\Bbb{Z}^3$, one employs summation over $i=1,2,\ldots,N$, while for continuous ${\bf r}_i\in\Bbb{R}^3$, one should replace the sum $\sum_i$ by the integral $\int d{\bf r}_i$. Single-partite spaces can be constructed on the basis of plane waves.
For the discrete spatial variable, one has $$ \varphi_k({\bf r}_i) = \frac{1}{\sqrt{N}}\; e^{i{\bf k}\cdot{\bf r}_i} \qquad ({\bf r}_i\in\Bbb{Z}^3) \; ; $$ while for the continuous variable, $$ \varphi_k({\bf r}_i) = \frac{1}{\sqrt{V}}\; e^{i{\bf k}\cdot{\bf r}_i} \qquad ({\bf r}_i\in\Bbb{R}^3) \; , $$ $V$ being the system volume. Since the final results do not depend on whether ${\bf r}_i$ is discrete or continuous, we shall employ the simpler notation corresponding to the discrete case. The functions $\varphi_k({\bf r}_i)$ are orthonormal, $$ \sum_{i=1}^N \varphi_k^*({\bf r}_i) \varphi_p({\bf r}_i) = \delta_{kp} \; , $$ and form a complete basis, such that $$ \sum_k \varphi_k^*({\bf r}_i) \varphi_k({\bf r}_j) = \delta_{ij} \; . $$ Denoting the vector $$ |k>_i \; \equiv [\varphi_k({\bf r}_i)] $$ as a column with respect to $i$, we define the single-partite space $$ {\cal H}_i \equiv \overline{\cal L}\{|k>_i\} $$ as a closed linear envelope over the basis $\{|k>_i\}$. Then the composite space has the form ${\cal H}=\otimes_i{\cal H}_i$, in agreement with Eq. (1). The first-order spin density matrix (16) is \begin{equation} \label{18} R_1(i,j) =\; < S_j^+ S_i^->\; . \end{equation} For its trace, we have \begin{equation} \label{19} {\rm Tr}_{{\cal H}_i}\; R_1 \equiv \sum_{i=1}^N R_1(i,i) = \frac{N}{2}\; \left ( 1+ s \right ) \; , \end{equation} where the notation \begin{equation} \label{20} s \equiv \frac{2}{N} \sum_{i=1}^N < S_i^z> \end{equation} is introduced. The second-order spin density matrix (16) has the form \begin{equation} \label{21} R_2(i_1 i_2,j_1 j_2) = \; < S_{j_2}^+ S_{j_1}^+ S_{i_1}^- S_{i_2}^- > \; . \end{equation} Taking into consideration the properties of the spin operators, we see that \begin{equation} \label{22} \left [ ( 1 -\delta_{i_1i_2})(1-\delta_{j_1j_2}) -1 \right ] R_2(i_1i_2,j_1j_2) = 0 \; .
\end{equation} Studying the semi-diagonal element $$ R_2(il,jl) = R_2(li,lj) \; , $$ we shall invoke the decoupling \begin{equation} \label{23} < S_l^z S_j^+ S_i^-> \; = \; <S_l^z><S_j^+ S_i^-> \; , \end{equation} valid for $l\neq i,j$. Then we get \begin{equation} \label{24} R_2(li,lj) = (1 -\delta_{il})(1-\delta_{jl})\left ( \frac{1}{2}\; + < S_l^z>\right ) \; < S_j^+ S_i^-> \; . \end{equation} This can be used for calculating the partial traces \begin{equation} \label{25} R_1^1 \equiv {\rm Tr}_{{\cal H}_2} R_2 \; , \qquad R_1^2 \equiv {\rm Tr}_{{\cal H}_1} R_2 \end{equation} of $R_2$ defined on ${\cal H}={\cal H}_1\otimes{\cal H}_2$. The elements of $R_1^a$, with $a=1,2$, are $$ R_1^1(i,j) \equiv \sum_{l=1}^N R_2(il,jl) \; , \qquad R_1^2(i,j) \equiv \sum_{l=1}^N R_2(li,lj) \; . $$ The number of atoms $N$ is assumed to be large, $N\gg 1$. We find \begin{equation} \label{26} R_1^a(i,j) = \frac{N}{2}\; (1+s) R_1(i,j) \; . \end{equation} From here, it is easy to get \begin{equation} \label{27} {\rm Tr}_{\cal H}\; R_2 = \frac{N^2}{4} \; (1 + s)^2 \; . \end{equation} The nonentangling counterpart of $R_2$ is proportional to $R_1^1\otimes R_1^2$, with the proportionality constant obtained from the normalization condition $$ {\rm Tr}_{\cal H}\; R_2 = {\rm Tr}_{\cal H} \; R_2^\otimes \; . $$ As a result, we come to \begin{equation} \label{28} R_2^\otimes = \frac{R_1^1\otimes R_1^2}{{\rm Tr}_{\cal H} R_2} \; , \end{equation} where the equality $$ {\rm Tr}_{\cal H} \left ( R_1^1 \otimes R_1^2 \right ) = \left ( {\rm Tr}_{\cal H}\; R_2 \right )^2 $$ is used and ${\rm Tr}_{\cal H} R_2$ is given by Eq. (27). To find the entanglement produced by $R_2$, we need to define the restricted norms \begin{equation} \label{29} ||R_1||_{\cal D} = \sup_k\; <k|R_1|k> \; , \qquad ||R_2||_{\cal D} = \sup_{kp}\; <kp| R_2| pk> \; . 
\end{equation} For this purpose, we shall use the single-mode laser approximation for the correlator \begin{equation} \label{30} <S_i^+ S_j^->\; = \frac{w}{4}\; e^{-i{\bf k}_0\cdot{\bf r}_{ij}} \qquad (i\neq j) \; , \end{equation} where ${\bf k}_0$ is a fixed wave vector of laser propagation and ${\bf r}_{ij}\equiv {\bf r}_i-{\bf r}_j$. Then we obtain \begin{equation} \label{31} ||R_1||_{\cal D} = \sup\left\{ \frac{1}{2}\; ( 1 + s), \; \frac{N}{4}\; w \right \} \; . \end{equation} For matrices (25) and (28), we find \begin{equation} \label{32} ||R_1^a||_{\cal D} = \frac{N}{2}\; (1 + s)||R_1||_{\cal D} \; , \qquad ||R_2^\otimes||_{\cal D} = ||R_1||^2_{\cal D} \; . \end{equation} The measure of entanglement produced by $R_2$ reads \begin{equation} \label{33} \varepsilon(R_2) = \log\; \frac{||R_2||_{\cal D}}{||R_1||^2_{\cal D}} \; . \end{equation} To work out the norm $||R_2||_{\cal D}$, we shall invoke the decoupling \begin{equation} \label{34} <S_i^+ S_j^+ S_m^- S_n^->\; = \; <S_i^+ S_n^-><S_j^+ S_m^-> + <S_i^+ S_m^-><S_j^+ S_n^-> \; , \end{equation} valid under the condition that all indices are different. Then we get \begin{equation} \label{35} ||R_2||_{\cal D} = \sup\left\{ \frac{1}{2}\; (1+s)^2,\; \frac{N^2}{8}\; w^2 \right \} \; . \end{equation} Comparing Eqs. (31) and (35), we see that \begin{equation} \label{36} ||R_2||_{\cal D} = 2 ||R_1||^2_{\cal D} \; . \end{equation} Therefore measure (33) reduces to \begin{equation} \label{37} \varepsilon(R_2) = \log 2 \; . \end{equation} Thus, the entanglement produced by the transition spin density matrix $R_2$ is always constant. Now, let us turn to the longitudinal spin density matrices (17). The first-order matrix of this kind is \begin{equation} \label{38} Z_1(i,j) =\; <S_j^z S_i^z> \; . \end{equation} As is evident, \begin{equation} \label{39} {\rm Tr}_{{\cal H}_1} \; Z_1 = \sum_{i=1}^N Z_1(i,i) = \frac{N}{4} \; . 
\end{equation} The second-order longitudinal spin density matrix reads \begin{equation} \label{40} Z_2(i_1i_2,j_1j_2) = \; <S_{j_2}^z S_{j_1}^z S_{i_1}^z S_{i_2}^z > \; . \end{equation} The matrix $Z_2$ is defined on ${\cal H}={\cal H}_1\otimes{\cal H}_2$. The partial traces give the matrices \begin{equation} \label{41} Z_1^1 \equiv {\rm Tr}_{{\cal H}_2}\; Z_2 \; , \qquad Z_1^2 \equiv {\rm Tr}_{{\cal H}_1} \; Z_2 \; , \end{equation} whose elements are \begin{equation} \label{42} Z_1^1(i,j) = \sum_{l=1}^N Z_2(il,jl) \; , \qquad Z_1^2(i,j) = \sum_{l=1}^N Z_2(li,lj) \; . \end{equation} We find \begin{equation} \label{43} Z_2(li,lj) =\frac{1}{4}\; < S_j^z S_i^z > \; , \qquad Z_1^a(i,j) = \frac{N}{4}\; < S_j^z S_i^z> \; , \end{equation} for any $a=1,2$. From here, \begin{equation} \label{44} {\rm Tr}_{\cal H} \; Z_2 = \frac{N^2}{16} \; . \end{equation} The nonentangling counterpart of $Z_2$ is \begin{equation} \label{45} Z_2^\otimes = \frac{Z_1^1\otimes Z_1^2}{{\rm Tr}_{\cal H} Z_2} \; . \end{equation} For the restricted norms, we have \begin{equation} \label{46} ||Z_1||_{\cal D} = \frac{1}{4}\; ( 1 + Ns^2 ) \; , \qquad ||Z_2^\otimes||_{\cal D} = ||Z_1||^2_{\cal D} \; . \end{equation} The measure of entanglement produced by $Z_2$ is \begin{equation} \label{47} \varepsilon(Z_2) = \log \; \frac{||Z_2||_{\cal D}}{||Z_1||^2_{\cal D}} \; . \end{equation} Substituting into Eq. (47) the norm \begin{equation} \label{48} ||Z_2||_{\cal D} = \sup\left\{ \frac{3}{16}\; , \; \frac{N^2}{16}\; s^4\right\} \; , \end{equation} we see that the entanglement production (47) strongly depends on the value of $s$, which is the average population difference. When the latter is zero, \begin{equation} \label{49} \varepsilon(Z_2) = \log 3 \qquad (s=0) \; . \end{equation} But for any nonzero $s$, we have \begin{equation} \label{50} \varepsilon(Z_2) = 0 \qquad (s\neq 0) \; , \end{equation} in view of the large number of atoms, $N\gg 1$. The temporal behaviour of $s=s(t)$ depends on the particular physical system. 
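The limiting values above, $\varepsilon(R_2)=\log 2$ for any $s$ and $w$, and $\varepsilon(Z_2)$ jumping between $\log 3$ and $0$, follow directly from the restricted norms (31), (35), (46), and (48). A minimal numerical sketch (the values of $N$, $s$, and $w$ are purely illustrative; natural logarithms are used):

```python
import math

def eps_R2(s, w, N):
    """Entanglement measure produced by R_2, from Eqs. (31), (33), (35)."""
    norm_R1 = max((1 + s) / 2, N * w / 4)
    norm_R2 = max((1 + s)**2 / 2, N**2 * w**2 / 8)
    return math.log(norm_R2 / norm_R1**2)

def eps_Z2(s, N):
    """Entanglement measure produced by Z_2, from Eqs. (46)-(48)."""
    norm_Z1 = (1 + N * s**2) / 4
    norm_Z2 = max(3 / 16, N**2 * s**4 / 16)
    return math.log(norm_Z2 / norm_Z1**2)

N = 10**6
print(eps_R2(0.5, 0.3, N))  # log 2, independently of s and w
print(eps_Z2(0.0, N))       # log 3 at zero population difference
print(eps_Z2(0.5, N))       # ~0 for any nonzero s, since N >> 1
```

Whichever term dominates the suprema, the ratio $||R_2||_{\cal D}/||R_1||^2_{\cal D}$ equals 2, which is why $\varepsilon(R_2)$ stays constant, whereas $\varepsilon(Z_2)$ switches abruptly with $s$.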
\section{Collective Radiation} Our aim is to find the temporal evolution of the average population difference $s(t)$ under the collective radiation of atoms. In this section, the pseudospin equations governing the behaviour of $s(t)$ are derived. This is done for resonant, effectively two-level, atoms with dipole radiation transitions. One usually assumes that electric dipole and magnetic dipole transitions are described by equations possessing identical structure. However, there exists a difference between the equations for these two types of transitions. In order to analyze this difference, we consider here the case when the electric dipole transition is forbidden and radiation occurs through a magnetic dipole transition. The Hamiltonian of an atomic system emitting electromagnetic radiation is \begin{equation} \label{51} \hat H = \hat H_a + \hat H_f + \hat H_{af} \; . \end{equation} The first term \begin{equation} \label{52} \hat H_a =\sum_{i=1}^N \omega_0 \left ( \frac{1}{2} + S_i^z \right ) \end{equation} corresponds to resonant atoms, with transition frequency $\omega_0$. The field Hamiltonian is \begin{equation} \label{53} \hat H_f = \frac{1}{8\pi} \int \left ( {\bf E}^2 + {\bf H}^2 \right ) \; d{\bf r} \; , \end{equation} where ${\bf E}$ is the electric field and ${\bf H}=\nabla\times{\bf A}$ is the magnetic field. The vector potential ${\bf A}$ satisfies the Coulomb gauge condition \begin{equation} \label{54} \nabla \cdot {\bf A} = 0 \; . \end{equation} The last term $\hat H_{af}$ describes atom-field interactions [21]. When electric dipole transitions are forbidden, we need to consider other radiation transitions. We study here the case of magnetic dipole transitions. 
The matrix element of the magnetic dipole \begin{equation} \label{55} {\vec\mu}_{mn} \equiv \frac{1}{2c} \int {\bf r} \times {\bf j}_{mn}({\bf r})\; d{\bf r} \; , \end{equation} with $c$ the speed of light, is expressed through the current density $$ {\bf j}_{mn}({\bf r}) = -\; \frac{ie}{2m_e}\; \left ( \psi_m^*\nabla\psi_n - \psi_n\nabla\psi_m^* \right ) \; , $$ in which $e$ is the electron charge, $m_e$ is the electron mass, and $\psi_n=\psi_n({\bf r})$ are the wave functions of the states enumerated by the index $n=1,2$. Electric dipole transitions are forbidden under conserved parity, when \begin{equation} \label{56} \psi_m^*(-{\bf r})\psi_n(-{\bf r}) = \psi_m^*({\bf r})\psi_n({\bf r}) \end{equation} for all $m$ and $n$. Hence, the current density is antisymmetric, \begin{equation} \label{57} {\bf j}_{mn}(-{\bf r}) = -{\bf j}_{mn}({\bf r}) \; . \end{equation} Denote the transition magnetic dipole \begin{equation} \label{58} \vec\mu \equiv\vec\mu_{21} \; , \qquad \vec\mu^* =\vec\mu_{12} \; . \end{equation} The diagonal elements of the dipole (55) are also nonzero, $\vec\mu_{nn}\neq 0$. This is contrary to the case of electric dipole transitions, where parity is not conserved and only the nondiagonal dipole elements survive, the diagonal ones being identically zero. The existence of the nonzero diagonal elements $\vec\mu_{nn}$ underlies the differences between the following equations for electric and magnetic dipole-transition radiation. 
Under magnetic dipole transitions, the atom-field interaction Hamiltonian takes the form \begin{equation} \label{59} \hat H_{af} = - \sum_{i=1}^N {\bf M}_i \cdot {\bf B}_i + \hat H_{af}' \; , \end{equation} where the operator of the magnetic moment is \begin{equation} \label{60} {\bf M}_i \equiv \vec\mu {\bf S}_i^+ + \vec\mu^* {\bf S}_i^- \; , \end{equation} and the last term is \begin{equation} \label{61} \hat H_{af}' = - \sum_{i=1}^N \left (\vec\mu_0 + \Delta\vec\mu\; S_i^z \right ) \cdot {\bf B}_i \; , \end{equation} with the notation \begin{equation} \label{62} \vec\mu_0 \equiv \frac{1}{2}\left ( \vec\mu_{11} + \vec\mu_{22} \right ) \; , \qquad \Delta\vec\mu \equiv \vec\mu_{22} -\vec\mu_{11} \; . \end{equation} The total magnetic field is the sum \begin{equation} \label{63} {\bf B}_i = {\bf H}_0 + {\bf H}_i \end{equation} of an external magnetic field ${\bf H}_0$ and the radiation field ${\bf H}_i\equiv{\bf H}({\bf r}_i,t)$. Using the commutation relations for the operators ${\bf E}$, ${\bf A}$, and ${\bf H}$, which can be found in the book [22], we have the Heisenberg equations \begin{equation} \label{64} \frac{1}{c}\; \frac{\partial{\bf E}}{\partial t} = \nabla\times{\bf H} -\; \frac{4\pi}{c}\; {\bf j} \; , \qquad \frac{1}{c}\; \frac{\partial{\bf A}}{\partial t} = - {\bf E} \; , \end{equation} in which the current density is \begin{equation} \label{65} {\bf j} = - c\sum_{i=1}^N \left ({\bf M}_i + \vec\mu_0 +\Delta\vec\mu\; S_i^z \right ) \times \nabla \delta({\bf r} -{\bf r}_i) \; . \end{equation} The Heisenberg equations for the pseudospin operators are \begin{equation} \label{66} \frac{dS_i^-}{dt} = - i S_i^-\left ( \omega_0 - \Delta\vec\mu\cdot{\bf B}_i \right ) - 2i S_i^z\vec\mu \cdot{\bf B}_i \; , \qquad \frac{dS_i^z}{dt} = i\left ( \vec\mu S_i^+ - \vec\mu^* S_i^- \right ) \cdot {\bf B}_i \; . \end{equation} One usually considers the system of equations (64) and (66) for the field and pseudospin operators. 
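The structure of Eqs. (66) can be checked directly with $2\times 2$ pseudospin matrices. The sketch below treats $\vec\mu\cdot{\bf B}$ and $\Delta\vec\mu\cdot{\bf B}$ as scalar numbers for a single atom (the numerical values are purely illustrative) and verifies that $i[\hat H, S^-]$ and $i[\hat H, S^z]$ reproduce the right-hand sides of Eqs. (66):

```python
import numpy as np

# spin-1/2 (pseudospin) matrices
Sp = np.array([[0, 1], [0, 0]], dtype=complex)   # S^+
Sm = np.array([[0, 0], [1, 0]], dtype=complex)   # S^-
Sz = np.diag([0.5, -0.5]).astype(complex)

w0 = 2.7          # transition frequency omega_0 (illustrative)
a = 0.4 + 0.1j    # scalar standing for  mu . B   (illustrative)
d = 0.25          # scalar standing for  Delta mu . B  (illustrative, real)

# single-atom Hamiltonian from Eqs. (52) and (59)-(61),
# dropping the constant mu_0 . B term, which commutes with everything
H = w0 * (0.5 * np.eye(2) + Sz) - (a * Sp + np.conj(a) * Sm) - d * Sz

comm = lambda A, B: A @ B - B @ A

# dS^-/dt = i[H, S^-]  versus the first of Eqs. (66)
lhs = 1j * comm(H, Sm)
rhs = -1j * (w0 - d) * Sm - 2j * a * Sz
print(np.allclose(lhs, rhs))      # True

# dS^z/dt = i[H, S^z]  versus the second of Eqs. (66)
lhs_z = 1j * comm(H, Sz)
rhs_z = 1j * (a * Sp - np.conj(a) * Sm)
print(np.allclose(lhs_z, rhs_z))  # True
```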
However, it is possible to eliminate the field variables and to reduce the problem to equations for the pseudospin variables alone [21]. To this end, from Eqs. (64) we get the equation \begin{equation} \label{67} \left ( \nabla^2 - \; \frac{1}{c^2}\; \frac{\partial^2}{\partial t^2} \right ) {\bf A} = - \; \frac{4\pi}{c}\; {\bf j} \; , \end{equation} whose solution is \begin{equation} \label{68} {\bf A}({\bf r},t) = {\bf A}_{vac}({\bf r},t) + \frac{1}{c} \int {\bf j}\left ({\bf r}', t - \; \frac{|{\bf r}-{\bf r}'|}{c}\right ) \; \frac{d{\bf r}'}{|{\bf r}-{\bf r}'|} \; , \end{equation} where ${\bf A}_{vac}$ is the vacuum vector potential. Retardation in the time dependence of the spin operators can be treated in the Born approximation \begin{equation} \label{69} S_j^-\left ( t -\; \frac{r}{c}\right ) \cong S_j^-(t) \Theta(ct-r)\; e^{ik_0r} \; , \qquad S_j^z\left ( t - \; \frac{r}{c} \right ) \cong S_j^z(t) \Theta(ct-r) \; , \end{equation} where $\Theta(\cdot)$ is the unit-step function. Substituting the current density (65) into potential (68), we have \begin{equation} \label{70} {\bf A}_i = {\bf A}_i^+ + {\bf A}_i^- + {\bf A}_i' +{\bf A}_{vac} \; , \end{equation} where $$ {\bf A}_i^+(t) = -\sum_j \frac{1+ik_0 r_{ij}}{r_{ij}^3} \; {\bf r}_{ij} \times \vec\mu\; S_j^+\left ( t -\; \frac{r_{ij}}{c}\right ) \; , $$ ${\bf A}_i^-$ is the corresponding Hermitian conjugate term, and \begin{equation} \label{71} {\bf A}_i'(t) = -\sum_j \frac{1}{r_{ij}^3} \; {\bf r}_{ij} \times \left [ \vec\mu_0 + \Delta\vec\mu\; S_j^z\left ( t -\; \frac{r_{ij}}{c} \right ) \right ] \; , \end{equation} with the notation $$ {\bf r}_{ij} \equiv {\bf r}_i - {\bf r}_j \; , \qquad r_{ij}\equiv|{\bf r}_{ij}| \; . $$ Here the equality $$ \frac{\partial}{\partial r}\; S_j^-\left ( t-\; \frac{r}{c}\right ) = ik_0 S_j^- \left ( t-\; \frac{r}{c}\right ) $$ is taken into account, which results from the Born approximation (69). 
For the magnetic field \begin{equation} \label{72} {\bf H}_i(t) \equiv {\bf H}({\bf r}_i,t) = \nabla_i\times {\bf A}_i(t) \; , \end{equation} we obtain the expression \begin{equation} \label{73} {\bf H}_i = {\bf H}_i^+ + {\bf H}_i^- + {\bf H}_i' +{\bf H}_{vac} \; , \end{equation} in which $$ {\bf H}_i^+(t) = \sum_j \left [ k_0^2 \; \frac{\vec\mu-(\vec\mu\cdot{\bf n}_{ij}){\bf n}_{ij}}{r_{ij}} \; - \left ( 1 + ik_0 r_{ij}\right ) \; \frac{\vec\mu-3(\vec\mu\cdot{\bf n}_{ij}){\bf n}_{ij}}{r_{ij}^3} \right ] \; S_j^+\left ( t-\; \frac{r_{ij}}{c}\right ) \; , $$ \begin{equation} \label{74} {\bf H}_i'(t) = -\; \sum_j \frac{{\bf M}_j'-3({\bf M}_j'\cdot{\bf n}_{ij}){\bf n}_{ij}}{r_{ij}^3} \; , \end{equation} where ${\bf n}_{ij}\equiv{\bf r}_{ij}/r_{ij}$ and \begin{equation} \label{75} {\bf M}_j' \equiv \vec\mu_0 + \Delta\vec\mu\; S_j^z\left ( t-\; \frac{r_{ij}}{c}\right ) \; . \end{equation} In what follows, we shall replace summation by integration according to the rule $$ \sum_{j=1}^N \Longleftrightarrow \rho \int_V d{\bf r} \; , $$ in which $\rho\equiv N/V$ is the atomic density. Integrating over the spherical angle $\Omega({\bf n})$, related to the unit vector ${\bf n}\equiv{\bf r}/r$, we shall employ the properties $$ \int \left [ \vec\mu -(\vec\mu\cdot{\bf n}){\bf n} \right ] d\Omega({\bf n})= \frac{8\pi}{3}\; \vec\mu \; , $$ \begin{equation} \label{76} \int \left [ \vec\mu -3(\vec\mu\cdot{\bf n}){\bf n} \right ] \; d\Omega({\bf n})= 0 \; , \qquad \int (\vec\mu\cdot{\bf n}){\bf n} \; d\Omega({\bf n}) = \frac{4\pi}{3}\; \vec\mu \; . \end{equation} In the sums (71) and (74), there are terms with $j=i$, for which $r_{ij}=0$, and these diverge. They are the so-called self-action terms, whose divergence occurs because of the point-like representation of the atoms. To avoid this divergence, the self-action must be treated more accurately, which can be done as follows. Consider a single atom, located at ${\bf r}_j=0$. 
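The angular relations (76) are easy to verify numerically. A minimal sketch, using a simple midpoint quadrature over the unit sphere and an arbitrary fixed vector $\vec\mu$ (the numerical values are purely illustrative):

```python
import math

def sphere_integral(f, n_theta=200, n_phi=200):
    """Integrate a vector-valued f(n) over the unit sphere by midpoint quadrature."""
    total = [0.0, 0.0, 0.0]
    dth = math.pi / n_theta
    dph = 2 * math.pi / n_phi
    for i in range(n_theta):
        th = (i + 0.5) * dth
        for j in range(n_phi):
            ph = (j + 0.5) * dph
            n = (math.sin(th) * math.cos(ph),
                 math.sin(th) * math.sin(ph),
                 math.cos(th))
            w = math.sin(th) * dth * dph   # surface element
            v = f(n)
            for k in range(3):
                total[k] += w * v[k]
    return total

mu = (0.3, -0.2, 0.9)                      # arbitrary fixed vector
dot = lambda a, b: sum(x * y for x, y in zip(a, b))

# third relation of Eq. (76):  int (mu.n) n dOmega = (4 pi / 3) mu
I3 = sphere_integral(lambda n: tuple(dot(mu, n) * nk for nk in n))
# first relation:  int [mu - (mu.n) n] dOmega = (8 pi / 3) mu
I1 = sphere_integral(lambda n: tuple(mu[k] - dot(mu, n) * n[k] for k in range(3)))

print([x / (4 * math.pi / 3) for x in I3])  # each component ~ mu
print([x / (8 * math.pi / 3) for x in I1])  # each component ~ mu
```

The middle relation of Eq. (76) follows by subtracting three times the third integral from $4\pi\vec\mu$.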
At the point ${\bf r}$, it creates the vector potential $$ {\bf A}_s = {\bf A}_s^+ +{\bf A}_s^- +{\bf A}_s' \; , $$ in which, according to Eqs. (71), $$ {\bf A}_s^+({\bf r},t) = -\; \frac{1+ik_0 r}{r^3} \; {\bf r}\times \vec\mu\; S^+ \left ( t -\; \frac{r}{c}\right ) \; , \qquad {\bf A}_s'({\bf r},t) = -\; \frac{1}{r^3}\; {\bf r}\times \left [ \vec\mu_0 + \Delta\vec\mu\; S^z\left ( t -\; \frac{r}{c}\right ) \right ] \; . $$ The corresponding magnetic field $$ {\bf H}_s \equiv \nabla\times {\bf A}_s = {\bf H}_s^+ +{\bf H}_s^- +{\bf H}_s' \; , $$ contains, in agreement with Eqs. (74), the terms $$ {\bf H}_s^+({\bf r},t) = \left [ k_0^2 \; \frac{\vec\mu-(\vec\mu\cdot{\bf n}){\bf n}}{r}\; - \left ( 1 + ik_0r \right )\; \frac{\vec\mu-3(\vec\mu\cdot{\bf n}){\bf n}}{r^3} \right ] S^+\left ( t -\; \frac{r}{c}\right ) \; , $$ $$ {\bf H}_s'({\bf r},t) = -\; \frac{{\bf M}'-3({\bf M}'\cdot{\bf n}){\bf n}}{r^3} \; , $$ where $$ {\bf M}' \equiv \vec\mu_0 + \Delta\vec\mu\; S^z\left ( t -\; \frac{r}{c}\right ) \; . $$ To treat the self-action, we need to take the limit ${\bf r}\rightarrow 0$. Before doing this, let us average ${\bf H}_s$ over the spherical angles, which gives $$ {\bf H}_s \longrightarrow \frac{1}{4\pi} \; \int {\bf H}_s({\bf r},t) \; d\Omega({\bf r}) = \frac{2k_0^2}{3r}\left [ \vec\mu\; S^+\left ( t-\; \frac{r}{c}\right ) + \vec\mu^* S^-\left ( t-\; \frac{r}{c}\right ) \right ] \; . $$ Keeping in mind that $r\rightarrow 0$, we may expand the exponential $e^{ik_0r}$ as $$ e^{ik_0r} \simeq 1 + ik_0 r \qquad (r\rightarrow 0) \; . 
$$ To mimic the nonlocality of the atom, we may average the singular term $1/r$ over the distance $r$ between the electron wavelength $\lambda_e\equiv2\pi\hbar/m_ec$ and the radiation wavelength $\lambda_0\equiv 2\pi c/\omega_0$, so that $$ \frac{1}{\lambda_0-\lambda_e} \; \int_{\lambda_e}^{\lambda_0} \frac{dr}{r} = \frac{k_0}{2\pi} \; \ln\left ( \frac{m_ec^2}{\hbar\omega_0}\right ) \; , $$ where we take into account that $\lambda_e\ll\lambda_0$. Then, for the self-action magnetic field averaged in this way, we find $$ {\bf H}_s(t) = \frac{2}{3}\; ik_0^3\; \left [ \vec\mu^* S^-(t) -\vec\mu\; S^+(t) \right ] + \frac{k_0^3}{3\pi} \; \ln\left ( \frac{m_ec^2}{\hbar\omega_0} \right ) \left [ \vec\mu^* S^-(t) +\vec\mu\; S^+(t) \right ] \; . $$ Let us introduce the notation for the natural width \begin{equation} \label{77} \gamma_0 \equiv \frac{2}{3}\; |\vec\mu|^2 k_0^3 \end{equation} and the Lamb frequency shift \begin{equation} \label{78} \delta_L \equiv \frac{\gamma_0}{2\pi} \; \ln\left ( \frac{m_ec^2}{\hbar\omega_0} \right ) \; . \end{equation} Equations (66), for a single atom with no external fields, are transformed into the system of equations $$ \frac{dS^-}{dt} = - i (\omega_0 - \delta_L -i\gamma_0) S^- -\; \frac{\vec\mu^2}{|\vec\mu|^2}\; \left ( \gamma_0 + i\delta_L \right ) S^+ + \frac{\Delta\vec\mu\cdot\vec\mu}{|\vec\mu|^2}\; (\gamma_0 + i\delta_L) \left ( \frac{1}{2}\; - S^z\right ) \; , $$ \begin{equation} \label{79} \frac{dS^z}{dt} = - 2\gamma_0 \left ( S^z +\frac{1}{2} \right ) \; . \end{equation} These equations show that the influence of the radiation self-action reduces to the appearance of the longitudinal attenuation $\gamma_1=2\gamma_0$ and the transverse attenuation $\gamma_2=\gamma_0$. When the atom is immersed in a medium, one usually treats $\gamma_1$ and $\gamma_2$ as independent parameters, which is assumed in what follows. The Lamb shift can be included in the definition of the transition frequency $\omega_0$. 
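For orientation, Eqs. (77) and (78) can be evaluated numerically in CGS units, restoring $\hbar$ so that $\gamma_0 = 2|\vec\mu|^2 k_0^3/3\hbar$. The inputs below, a Bohr-magneton dipole and a 500 nm transition, are purely illustrative assumptions, not values taken from the text:

```python
import math

hbar = 1.0546e-27      # erg s
me_c2 = 8.187e-7       # electron rest energy, erg (~511 keV)
mu = 9.274e-21         # |mu|: one Bohr magneton, erg/G (illustrative assumption)
lam0 = 500e-7          # radiation wavelength, cm (illustrative assumption)

k0 = 2 * math.pi / lam0
omega0 = k0 * 2.9979e10                 # omega_0 = c k_0

# natural width, Eq. (77), and Lamb shift, Eq. (78)
gamma0 = (2 / 3) * mu**2 * k0**3 / hbar
delta_L = gamma0 / (2 * math.pi) * math.log(me_c2 / (hbar * omega0))

print(f"gamma0 ~ {gamma0:.3g} s^-1")    # of order 1e2 s^-1
print(f"delta_L ~ {delta_L:.3g} s^-1")
```

The smallness of $\gamma_0$ (of order $10^2\;{\rm s}^{-1}$ here, versus $\sim 10^8\;{\rm s}^{-1}$ for typical electric dipole transitions) reflects how slow magnetic dipole radiation is.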
Thus, the influence of the self-action will be taken into account by incorporating the related attenuation terms with $\gamma_1$ and $\gamma_2$ into the evolution equations, while in the magnetic field ${\bf H}=\nabla\times{\bf A}$ the self-action will be excluded. Then the field ${\bf H}={\bf H}({\bf r},t)$, in view of Eq. (72), can be rearranged to \begin{equation} \label{80} {\bf H} ={\bf H}_{non} + {\bf H}_{dip} + {\bf H}_{vac} \; , \end{equation} where the nondipole part \begin{equation} \label{81} {\bf H}_{non} = {\bf H}^+_{non} + {\bf H}_{non}^- \end{equation} is due to the field created by the radiating atoms, without the dipole part \begin{equation} \label{82} {\bf H}_{dip} = {\bf H}^+_{dip} + {\bf H}^-_{dip} + {\bf H}' \; . \end{equation} In the corresponding expressions for the nondipole field, \begin{equation} \label{83} {\bf H}_{non}^+({\bf r}_i,t) = \sum_{j(\neq i)} k_0^2\; \frac{\vec\mu-(\vec\mu\cdot{\bf n}_{ij}){\bf n}_{ij}}{r_{ij}} \; S^+_j\left ( t -\; \frac{r_{ij}}{c}\right ) \end{equation} and for the dipole part \begin{equation} \label{84} {\bf H}_{dip}^+({\bf r}_i,t) = - \sum_{j(\neq i)} (1+ik_0 r_{ij})\; \frac{\vec\mu-3(\vec\mu\cdot{\bf n}_{ij}){\bf n}_{ij}}{r^3_{ij}} \; S^+_j\left ( t -\; \frac{r_{ij}}{c}\right ) \end{equation} the self-action terms are omitted. The last term in Eq. (82), ${\bf H}'({\bf r},t)$, for ${\bf r}={\bf r}_i$, is given by ${\bf H}_i'(t)\equiv{\bf H}'({\bf r}_i,t)$ from Eq. (74), again with no self-action terms. Introduce the isotropic part of the total radiation field, \begin{equation} \label{85} {\bf H}_{rad} \equiv \frac{1}{4\pi} \int \left ( {\bf H}_{non} +{\bf H}_{dip} \right ) \; d\Omega({\bf r}) = \frac{1}{4\pi} \int {\bf H}_{non}({\bf r},t) \; d\Omega({\bf r}) \; . 
\end{equation} This can be decomposed as \begin{equation} \label{86} {\bf H}_{rad} = {\bf H}^+_{rad} + {\bf H}_{rad}^- \; , \end{equation} with \begin{equation} \label{87} {\bf H}^+_{rad}({\bf r}_i,t) =\frac{2}{3}\; k_0^2\vec\mu \; \sum_{j(\neq i)} \frac{1}{r_{ij}} \; S_j^+\left ( t -\; \frac{r_{ij}}{c}\right ) \; . \end{equation} Hence, the total radiation field can be represented as \begin{equation} \label{88} {\bf H} ={\bf H}_{rad} + \Delta{\bf H} + {\bf H}_{vac} \; , \end{equation} with the notation \begin{equation} \label{89} \Delta{\bf H} \equiv {\bf H}_{non} + {\bf H}_{dip} - {\bf H}_{rad} \; . \end{equation} The anisotropic part of the radiation field contributes to the expression \begin{equation} \label{90} \xi({\bf r},t) \equiv -2i \vec\mu \cdot \left ( {\bf H}_{vac} + {\bf H}_{non} + {\bf H}_{dip} - {\bf H}_{rad} \right ) \; , \end{equation} for which $$ \int \xi({\bf r},t) \; d\Omega({\bf r}) = 0 \; . $$ Equation (90) describes local field fluctuations. We also define \begin{equation} \label{91} \xi_0({\bf r},t) \equiv -\Delta\vec\mu \cdot {\bf H}({\bf r},t) \; , \end{equation} where ${\bf H}$ is given by Eq. (88), and the frequency shift \begin{equation} \label{92} \Delta_0 \equiv -\Delta\vec\mu \cdot {\bf H}_0 \; . \end{equation} Then we consider the statistical averaging of the pseudospin equations (66) in the frame of the scale-separation approach [21]. In this course, we define the statistical averages for the {\it transition function} \begin{equation} \label{93} u({\bf r},t) \equiv 2<S^-({\bf r},t)>\; , \end{equation} {\it coherence intensity} \begin{equation} \label{94} w({\bf r},t) \equiv 4 <S^+({\bf r},t) S^-({\bf r}+0,t)> \; , \end{equation} and {\it population difference} \begin{equation} \label{95} s({\bf r},t) \equiv 2<S^z({\bf r},t)> \; , \end{equation} in which the angle brackets $<\ldots>$ imply averaging over the pseudospin degrees of freedom, not involving the fluctuating fields (90) and (91), which are treated as random variables. 
Function (94) is to be understood as the limit $$ w({\bf r},t) = 4\lim_{|{\bf r}-{\bf r}'|\rightarrow+0}\; < S^+({\bf r},t) S^-({\bf r}',t)> \; . $$ In the mean-field approximation, the latter simplifies to $w=|u|^2$. Definitions (93), (94), and (95) are in agreement with Eqs. (20) and (30). Also, we introduce the effective field acting on the atoms, \begin{equation} \label{96} f = f_0 + f_{rad} + \xi \; , \end{equation} in which \begin{equation} \label{97} f_0 \equiv -2i\vec\mu \cdot {\bf H}_0 \end{equation} is due to an external magnetic field; the term \begin{equation} \label{98} f_{rad} \equiv -2i\; <\vec\mu \cdot{\bf H}_{rad}> \end{equation} is caused by the isotropic radiation field (85); and $\xi$ is the local fluctuating field (90). Allowing for Eqs. (86) and (87), field (98) takes the form \begin{equation} \label{99} f_{rad}({\bf r},t) = - i\gamma_0\rho \int\left [ G({\bf r}-{\bf r}',t) u({\bf r}',t) + \frac{\vec\mu^2}{|\vec\mu|^2}\; G^*({\bf r}-{\bf r}',t) u^*({\bf r}',t)\right ] \; d{\bf r}' \; , \end{equation} with the transfer function $$ G({\bf r},t) \equiv \frac{\exp(ik_0r)}{k_0r} \; \Theta(ct-r) \; . $$ Finally, from Eqs. (66) we derive the evolution equations $$ \frac{\partial u}{\partial t} = -i(\omega_0 +\Delta_0 +\xi_0 -i\gamma_2) u + fs \; , $$ $$ \frac{\partial w}{\partial t} = - 2\gamma_2 w + \left ( u^* f + f^* u \right ) s\; , $$ \begin{equation} \label{100} \frac{\partial s}{\partial t} = -\; \frac{1}{2}\left ( u^* f + f^* u\right ) - \gamma_1(s-\zeta) \; , \end{equation} where $\zeta$ is the stationary population difference of a single atom. These equations differ from the analogous equations for electric dipole transitions [21,23] by the presence in Eqs. (100) of the frequency shifts $\Delta_0$ and $\xi_0$. Both these shifts are due to $\Delta\vec\mu$, given in Eqs. (62), which is nonzero because of the nonvanishing diagonal elements $\vec\mu_{nn}$. The constant part of the total shift $\Delta_0+\xi_0$ can be neglected as compared to $\omega_0$. 
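In the mean-field, single-mode approximation, where $w=|u|^2$ and the effective field for the resonant mode reduces to $f\approx\gamma_2 g u$, with $g$ a real dimensionless coupling of the atoms through the radiation field (cf. [21]), Eqs. (100) close into $\partial w/\partial t = -2\gamma_2(1-gs)w$ and $\partial s/\partial t = -\gamma_2 g w - \gamma_1(s-\zeta)$. A minimal sketch integrating this reduced system with purely illustrative parameters; the tiny seed $w_0$ imitates the incoherent quantum stage:

```python
# illustrative parameters (gamma_2 = 1 sets the time unit)
g, gamma1, gamma2, zeta = 100.0, 0.01, 1.0, -1.0
s, w = 1.0, 1e-6          # inverted initial state, tiny coherence seed
dt, t, t_zero = 1e-5, 0.0, None

# forward-Euler integration of the reduced mean-field equations
while t < 0.3:
    ds = -gamma2 * g * w - gamma1 * (s - zeta)
    dw = -2.0 * gamma2 * (1.0 - g * s) * w
    s_new = s + dt * ds
    if t_zero is None and s > 0.0 >= s_new:
        t_zero = t            # moment the population difference crosses zero
    s, w = s_new, w + dt * dw
    t += dt

print(f"s(0.3 T2) = {s:.3f}")             # close to -s_0
print(f"zero crossing near t = {t_zero:.4f} T2")
```

The run shows the expected superradiant scenario: $w$ grows exponentially from the seed, a narrow burst develops, and $s$ sweeps rapidly from $+1$ through zero to nearly $-1$.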
The fluctuating part of this shift results in the appearance of an additional inhomogeneous broadening $\gamma_2^*$, so that the total line width becomes the sum $\gamma_2+\gamma_2^*$. \section{Entanglement by Superradiance} According to Sec. 3, the entanglement-production measure $\varepsilon(Z_2)$, caused by the spin density matrix $Z_2$ defined in Eq. (17), strongly depends on the value $s=s(t)$ of the population difference. Changing $s(t)$ thus governs the level of produced entanglement. To make such evolutional entanglement efficient, one needs $s(t)$ to cross the value $s=0$, at which the measure $\varepsilon(Z_2)$ jumps between expressions (49) and (50). Thus, we have to analyze the temporal behaviour of the population difference $s(t)$, which is described by Eqs. (100). An adequate method of quickly switching the population difference between $s\neq 0$ and $s=0$ is provided by the process of superradiance. At the initial time $t=0$, one prepares the atomic system in an inverted state, with $s\approx 1$. The superradiance burst is peaked at the delay time $t_0$, when $s\approx 0$, after which $s(t)$ goes rapidly to $s=-1$. The temporal dynamics of superradiance, described by Eqs. (100), has been investigated for atomic systems [21,23,24] as well as for spin superradiance [21,25,26]. Therefore, we shall not repeat here the solution of Eqs. (100) but go straight to the results. Let the atomic system at the initial time be inverted, with $s_0\equiv s(0)>0$. The effective coupling parameter of atomic interactions through the radiation field is \begin{equation} \label{101} g \equiv \rho \; \frac{\gamma_0}{\gamma_2} \; \int \frac{\sin(k_0r-kz)}{k_0r}\; d{\bf r} \; , \end{equation} where $k\equiv\omega/c$ is the wave number of the seed field selecting the longitudinal propagating mode. The frequency $\omega$ of the seed field is in resonance with the atomic transition frequency $\omega_0$. 
Since $\omega\approx \omega_0$, one has $k\approx k_0$. Substantial coherence develops in the system if the coupling parameter (101) is large, such that $gs_0\gg 1$. Consider a purely self-organized process, when no coherence is imposed upon the system at $t=0$, so that $w_0\equiv w(0)=0$. Then the relaxation process starts with the incoherent quantum stage of spontaneous radiation, which lasts until the crossover time \begin{equation} \label{102} t_c = \frac{T_2}{2gs_0} \; , \end{equation} where $T_2\equiv 1/\gamma_2$. For $gs_0\gg 1$, the crossover time is small, $t_c\ll T_2$. After time (102), coherent effects become important and a superradiance pulse arises, with the intensity of radiation peaking at the delay time \begin{equation} \label{103} t_0 = t_c \left ( 1 +\ln\left | \frac{2}{\gamma_3 t_c}\right | \right ) \; . \end{equation} Here $\gamma_3$ is the dynamic inhomogeneous broadening defined as $$ \gamma_3 \equiv {\rm Re}\; \lim_{\tau\rightarrow\infty} \; \frac{1}{\tau} \; \int_0^\tau dt \; \int_0^t \ll \xi^*(t)\xi(t')\gg \; \exp\left\{ - (i\omega_0 +\gamma_2 - \gamma_2 gs)(t-t')\right \} \; dt' \; , $$ where $\ll\ldots\gg$ implies the stochastic averaging over the random fluctuating fields (90). At the delay time (103), one has, approximately, $$ \gamma_3 t_c \approx \frac{1}{2gs_0} \ll 1 \; , $$ because of which $t_0>t_c$. The superradiant burst is rather narrow, with the pulse time \begin{equation} \label{104} \tau_p = \frac{T_2}{gs_0} \; . \end{equation} At the transient coherence stage, when $t_c<t\ll T_1$, where $T_1\equiv 1/\gamma_1$, the population difference behaves as \begin{equation} \label{105} s =\frac{1}{g}\; - s_0{\rm tanh}\left ( \frac{t-t_0}{\tau_p} \right ) \; . \end{equation} The first term here is small, provided that $g\gg 1$. After the time $t_0$, function (105) rapidly diminishes to a value close to $s=-s_0$. The population difference passes through $s=0$ at a time very close to $t_0$. 
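These characteristic times are straightforward to evaluate. Setting $s=0$ in Eq. (105) gives the zero-crossing time $t_{zero} = t_0 + \tau_p\,{\rm arctanh}[1/(gs_0)] \approx t_0 + T_2/(gs_0)^2$. A short sketch with illustrative values $g=100$, $s_0=1$, $T_2=1$, using the estimate $\gamma_3 t_c \approx 1/2gs_0$ quoted above:

```python
import math

g, s0, T2 = 100.0, 1.0, 1.0                  # illustrative values

t_c = T2 / (2 * g * s0)                      # crossover time, Eq. (102)
gamma3_tc = 1.0 / (2 * g * s0)               # estimate quoted after Eq. (103)
t_0 = t_c * (1 + math.log(abs(2 / gamma3_tc)))   # delay time, Eq. (103)
tau_p = T2 / (g * s0)                        # pulse time, Eq. (104)

def s(t):
    """Population difference at the transient stage, Eq. (105)."""
    return 1 / g - s0 * math.tanh((t - t_0) / tau_p)

# zero crossing of Eq. (105)
t_zero = t_0 + tau_p * math.atanh(1 / (g * s0))

print(f"t_c = {t_c:.4f} T2, t_0 = {t_0:.4f} T2, tau_p = {tau_p:.4f} T2")
print(f"t_zero - t_0 = {t_zero - t_0:.2e} T2")   # the crossing is very close to t_0
```

For these parameters, $t_{zero}-t_0$ is of order $10^{-4}\,T_2$, two orders of magnitude smaller than $t_0$ itself, confirming that the zero crossing is essentially pinned to the burst peak.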
Thus, the superradiant regime makes it possible to realize a very rapid passage of the population difference through zero. One can then invert the system again by a $\pi$-pulse. In the process of this inversion, $s$ also crosses zero. Upon regaining inversion, a new superradiant regime develops, and $s$ crosses zero after the delay time $t_0$, counted from the moment of the new inversion. This procedure can be repeated many times. Each time $s=0$, entanglement production reaches its maximum, with the measure $\varepsilon(Z_2)$ jumping from zero to $\log 3$, in accordance with Eqs. (49) and (50). These jumps are so sharp because of the large number of atoms, $N\gg 1$, and, correspondingly, owing to well-developed coherent effects. A series of superradiant bursts, generated by the repeated preparation of an inverted atomic system, can be called punctuated superradiance. Such a regime can also be realized for spin systems, resulting in punctuated spin superradiance [26,27]. This regime can be employed for regulating the related entanglement production and for generating a punctuated series of sharp signals of produced entanglement. The described regime, allowing for the generation of a regulated series of sharp entanglement pulses, can be called {\it punctuated entanglement production}. The nontrivial behaviour of the evolutional entanglement considered in this paper is due to the entanglement production $\varepsilon(Z_2)$ caused by the second-order spin density matrix $Z_2$. Generally speaking, we could also study the entanglement produced by higher-order spin density matrices $Z_n$. However, emphasis has been placed on $Z_2$ because of the special role of second-order density matrices in statistical mechanics [28] and in quantum information processing [29,30]. \vskip 5mm {\bf Acknowledgement} \vskip 2mm I am grateful to E.P. Yukalova for discussions and advice. \newpage
nucl-th/0411053
\section{Introduction} Lately we have been concerned with probing the behavior of the isospin-asymmetric equation of state (EOS) \cite{AS02}. With our recent work on neutron radii and neutron skins \cite{AS203}, we have explored applications of the EOS at densities typical of normal nuclei. It is also important to look into systems that are likely to constrain the behavior of the EOS at higher densities, where the largest model dependence is observed. Supernova explosions and neutron star formation/stability are phenomena where the nuclear EOS plays a crucial role. The symmetry energy determines the proton fraction in neutron stars in $\beta$ equilibrium and, in turn, the cooling rate and neutrino emission. Models of prompt supernova explosions and systematic analyses of neutron star masses often provide conflicting information on the ``softness" of the EOS and its incompressibility at equilibrium. On the other hand, collisions of neutron-rich nuclei, which are a main purpose of the Rare Isotope Accelerator (RIA), provide a unique opportunity to obtain terrestrial data suitable for constraining the properties of dense and highly asymmetric matter. Such reactions are capable of producing extended regions of space/time where both the total nucleon density and the neutron/proton asymmetry are large. Isospin-dependent Boltzmann-Uehling-Uhlenbeck (BUU) transport models \cite{BUU} include isospin-sensitive collision dynamics through the elementary $pp$, $nn$, and $np$ cross sections and the mean field. The latter is a crucial isospin-dependent mechanism, and is the focal point of this paper. The contribution to the mean field from the neutron/proton asymmetry can be measured through isospin-sensitive observables \cite{BAL}. In summary, this is a timely and exciting topic, which is stimulating new effort on both the experimental and the theoretical sides. At this time, it is fair to say that the model dependence of the isospin-asymmetric EOS is rather large. 
In fact, even the qualitative behavior of some predictions is controversial, as is the case, for instance, with the density dependence of the symmetry energy, upon which isospin diffusion in heavy-ion collisions is found to depend sensitively \cite{u01}. Thus any additional constraint is desirable and should be fully explored. As discussed in Ref.~\cite{BAL04}, nucleon-nucleus optical potential information can be exploited to constrain the strength and the energy dependence of the single-neutron/proton potentials in asymmetric nuclear matter. The basic idea is that, even though infinite nuclear matter is an idealized system, the single-nucleon potentials should bear a clear signature of the optical potential in the interior of the nucleus. In this paper we discuss the predictions for the single-neutron/proton potentials and the closely related symmetry potential as obtained from our Dirac-Brueckner-Hartree-Fock calculations of asymmetric matter. We compare with other predictions from the literature as well as with empirical optical potential information. We point out the large model dependence of the predictions for those observables that depend sensitively on the difference between neutron and proton properties in asymmetric matter. Additional experimental constraints are therefore important. Moreover, microscopic, parameter-free approaches are the best way to gain deeper insight into the isospin-dependent properties of nuclear matter. \section{The single-nucleon potentials} \subsection{Momentum dependence} Unless otherwise specified, we use the Bonn-B potential \cite{Mac89} and the relativistic Dirac-Brueckner-Hartree-Fock (DBHF) model outlined in Ref.~\cite{AS02}. We begin by examining the momentum dependence of $U_{n/p}$, the single-neutron/proton potential in neutron-rich matter. 
In Fig.~1, we show $U_{n/p}$ as a function of the momentum and for different values of the asymmetry parameter, $\alpha=(\rho_n - \rho_p)/(\rho_n + \rho_p)$, with $\rho_n$ and $\rho_p$ the neutron and proton densities. The total nucleon density considered in the figure is equal to 0.185 fm$^{-3}$ and corresponds to a Fermi momentum of 1.4 fm$^{-1}$, which is very close to our predicted saturation density. \begin{figure} \begin{center} \vspace*{0.5cm} \hspace*{-0.5cm} \psfig{figure=FIGURE1.ps,height=8.0cm} \vspace*{0.5cm} \caption{ The single-neutron (upper panel) and single-proton (lower panel) potential as a function of the nucleon momentum for three different values of the asymmetry parameter. The average Fermi momentum is 1.4 fm$^{-1}$. } \label{one} \end{center} \end{figure} For increasing values of $\alpha$, the proton potential becomes increasingly attractive while the opposite tendency is observed in $U_n$. This reflects the fact that the proton-neutron interaction, the one predominantly felt by the single proton as the proton density is depleted, is more attractive than the one between identical nucleons. Also, as appears reasonable, the dependence on $\alpha$ becomes weaker at larger momenta. \begin{figure} \begin{center} \vspace*{0.5cm} \hspace*{-0.5cm} \psfig{figure=FIGURE2.ps,height=8.0cm} \vspace*{0.5cm} \caption{ Comparison between DBHF and BHF predictions of the single-neutron (upper panel) and single-proton (lower panel) potential. The value of the asymmetry parameter is fixed to 0.4 and the average Fermi momentum is 1.4 fm$^{-1}$. } \label{two} \end{center} \end{figure} In Fig.~2 we show the DBHF results in comparison with those from (non-relativistic) conventional Brueckner-Hartree-Fock (BHF) calculations. We make the comparison to show the considerable difference between the two sets of results as well as to check that our BHF predictions are in qualitative agreement with other studies based on the conventional Brueckner G-matrix approach. 
An older work based on that approach can be found, for instance, in Ref.~\cite{BL91}, where separable representations of the nucleon-nucleon interaction are adopted. More recent calculations have been reported in Ref.~\cite{Tub04}, where the CD-Bonn potential \cite{CD} is used in conjunction with the BHF approximation. The role of the momentum dependence of the symmetry potential in heavy-ion collisions was recently examined \cite{DAS04} and found to be important. Symmetry potentials with and without momentum dependence and yielding similar predictions for the symmetry energy can lead to significantly different predictions of collision observables \cite{DAS04}. \subsection{Asymmetry dependence and the symmetry potential} Regarding $U_{n/p}$ as functions of the asymmetry parameter $\alpha$, one can easily verify that the following approximate relation applies \begin{equation} U_{n/p}(k,k_F,\alpha) \approx U_{n/p}(k,k_F,\alpha=0) \pm U_{sym}(k,k_F)\alpha \end{equation} with the $\pm$ referring to neutron/proton, respectively. Figure 3 displays the left-hand side of Eq.~(1) for fixed density and nucleon momentum and clearly reveals the linear behavior of $U_{n/p}$ as a function of $\alpha$. \begin{figure} \begin{center} \vspace*{0.5cm} \hspace*{-0.5cm} \psfig{figure=FIGURE3.ps,height=8.0cm} \vspace*{0.5cm} \caption{ The single-neutron (upper panel) and single-proton (lower panel) potential as a function of the asymmetry parameter for fixed average density ($k_F$= 1.4 fm$^{-1}$) and nucleon momentum ($k$= $k_F$). } \label{three} \end{center} \end{figure} Although the main focus of Fig.~3 is the $\alpha$ dependence, predictions are displayed for the Bonn A, B, and C potentials \cite{Mac89}. These three models differ mainly in the strength of the tensor force, which is mostly carried by partial waves with isospin equal to 0 and thus should fade away in the single-neutron potential as the neutron fraction increases. 
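The linear decomposition in Eq.~(1) can be sketched numerically: construct $U_{n/p}$ from assumed isoscalar and symmetry parts, then recover $U_{sym}$ from the neutron-proton difference. All numerical values below are invented for illustration and are not DBHF output.

```python
# Sketch of Eq. (1): U_{n/p}(k, kF, alpha) ~ U_{n/p}(k, kF, 0) +/- Usym(k, kF) * alpha.
# The isoscalar and symmetry parts are invented toy values (MeV) at fixed k, kF.

U0 = -65.0   # assumed isoscalar single-nucleon potential
USYM = 28.0  # assumed symmetry potential at the same k, kF

def u_np(alpha, neutron=True):
    """Single neutron/proton potential from the linear decomposition, Eq. (1)."""
    sign = 1.0 if neutron else -1.0
    return U0 + sign * USYM * alpha

def recover_usym(alpha):
    """Invert the decomposition: Usym = (U_n - U_p) / (2 alpha)."""
    return (u_np(alpha, True) - u_np(alpha, False)) / (2.0 * alpha)

for alpha in (0.2, 0.4, 0.6):
    print(alpha, u_np(alpha, True), u_np(alpha, False), recover_usym(alpha))
```

By construction the recovered symmetry potential is independent of $\alpha$, which is exactly the linearity that Fig.~3 exhibits for the actual DBHF potentials.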
Reduced differences among the three models are in fact observed in $U_n$ at the larger values of $\alpha$. Already several decades ago, it was pointed out that the real part of the nuclear optical potential depends on the asymmetry parameter as in Eq.~(1) \cite{Lane}. Thus, the quantity \begin{equation} \frac{U_{n} + U_p}{2} = U_0 , \end{equation} which is obviously the single-nucleon potential in the absence of asymmetry, should be a reasonable approximation to the isoscalar part of the optical potential. The momentum dependence of $U_0$ (which is shown in Fig.~1 as the $\alpha$=0 curve) is important for extracting information about the symmetric matter EOS and is reasonably agreed upon \cite{u01,u02,u03,u04,u05,u06,u07,u08,u09}. On the other hand, \begin{equation} \frac{U_{n} - U_p}{2\alpha} = U_{sym} \end{equation} should be comparable with the Lane potential, or isovector part of the nuclear optical potential \cite{Lane}. (Notice that in the two equations above the dependence upon density, momentum, and asymmetry has been suppressed for simplicity.) We have calculated $U_{sym}$ close to nuclear matter density and as a function of the momentum, or rather the corresponding kinetic energy. The predictions obtained with Bonn A, B, and C are shown in Fig.~4. They are compared with the phenomenological expression \cite{Lane} \begin{equation} U_{Lane} = a - bT \end{equation} where $T$ is the kinetic energy, $a \approx 22$--$34$ MeV, and $b\approx 0.1$--$0.2$. \begin{figure} \begin{center} \vspace*{4cm} \hspace*{-0.5cm} \psfig{figure=FIGURE4.ps,height=6.5cm} \vspace*{-2.5cm} \caption{ The symmetry potential as a function of the nucleon kinetic energy at nuclear matter density. The predictions obtained with Bonn A, B, and C are compared with empirical information from nuclear optical potential data (shaded area). See text for details. 
} \label{four} \end{center} \end{figure} The strength of the predicted symmetry potential decreases with energy, a behavior which is consistent with the empirical information. The same comparison is done in Ref.~\cite{BAL04} starting from a phenomenological formalism for the single-nucleon potential \cite{Bomb01,Rizzo}. There, it is shown that it is possible to choose two sets of parameters which lead to similar values of the symmetry energy but exactly opposite tendencies in the energy dependence of the symmetry potential, as well as opposite signs of the proton-neutron mass splitting. As a consequence, these two sets of parameters lead to very different predictions for observables in heavy-ion collisions induced by neutron-rich nuclei \cite{Rizzo}. Our effective masses for proton and neutron are shown in Fig.~5 as a function of $\alpha$ and at saturation density. The prediction of a larger effective mass for the neutron than for the proton is a trend shared with microscopic non-relativistic calculations \cite{BL91}. In the non-relativistic case, one can show from very elementary arguments based on the curvature of the single-particle potential that a more attractive potential, such as that of the proton, leads to a smaller effective mass. In our DBHF effective-mass approximation, we assume momentum-independent nucleon self-energies, $U_S$ and $U_V$, with a vanishing spatial component of the vector part. In this limit, following similar calculations of symmetric matter \cite{BM}, the one-body potential is written as \cite{AS02} \begin{equation} U_i(p) = \frac{m^*_i}{E^*_i}U_{S,i} + U_{V,i} \end{equation} where $E^*_i=\sqrt{(m^*_i)^2 + p^2}$, $m^*_i = m_i + U_{S,i}$, and $i=n$ or $p$ for neutrons or protons, respectively. Defining for convenience $U_{0,i} = U_{S,i} + U_{V,i}$, the expression above becomes a two-parameter formula which requires the fitting of two constants, just like in the non-relativistic case. 
Now, since the single-proton potential is more attractive (see Fig.~1), and both the neutron and proton potentials tend to the same limit at high momenta, it is easy to see from Eq.~(5), or rather its derivative, that the proton effective mass obtained in this way must be smaller than the neutron's. \begin{figure} \begin{center} \vspace*{4.0cm} \hspace*{-0.5cm} \psfig{figure=FIGURE5.ps,height=6.5cm} \vspace*{-2.5cm} \caption{ The proton and neutron effective mass as a function of the asymmetry parameter and for fixed average density ($k_F$= 1.4 fm$^{-1}$). } \label{five} \end{center} \end{figure} One can encounter in the literature the statement that relativistic mean field (RMF) models predict a switch in sign of the neutron-proton effective mass splitting whenever the scalar isovector $\delta$ meson is included \cite{Rizzo}. We suggest that the reason for this mechanism lies in the first-order nature of RMF calculations. The first-order contribution of the $\delta$ meson is known to be repulsive in $^{3}S_{1}$ (the most important partial wave for generating nuclear binding). Therefore, in a context where the proton density increases (thus making the $np$ interaction the dominant contribution to the single-proton potential), a first-order calculation would in fact generate additional {\it repulsion} in the single-proton potential; hence the larger proton effective mass. On the other hand, in more microscopic approaches, such as the one we use, the nucleon potential is iterated to all orders via the in-medium scattering equation. In particular, most of the attraction in the $np$ interaction is generated in second order, through the two-pion exchange. (The $\delta$ meson is included, of course, but mainly for the sake of an accurate reproduction of NN phase shifts.) As we already observed in Section IIA, increasing the $np$ contribution to the single-proton potential results in increased attraction. 
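A minimal numerical sketch of Eq.~(5): with momentum-independent scalar and vector self-energies, a more attractive (more negative) $U_S$ directly yields a smaller Dirac effective mass, while $U_n(p)$ and $U_p(p)$ approach the common limit $U_V$ at large momenta. The self-energy values below are invented for illustration, not fitted DBHF output.

```python
import math

# Sketch of Eq. (5): U_i(p) = (m*_i / E*_i) U_S,i + U_V,i, with
# momentum-independent self-energies. All values (MeV) are illustrative only.
M = 939.0  # free nucleon mass

def dirac_mass(us):
    """Dirac effective mass m* = m + U_S."""
    return M + us

def u_onebody(p, us, uv):
    """One-body potential of Eq. (5) at momentum p."""
    mstar = dirac_mass(us)
    estar = math.sqrt(mstar ** 2 + p ** 2)
    return (mstar / estar) * us + uv

# hypothetical self-energies: more attractive scalar field for the proton,
# common vector field, so U_n and U_p share the same high-momentum limit
us_n, us_p, uv = -300.0, -350.0, 250.0

print(dirac_mass(us_p) < dirac_mass(us_n))                   # smaller proton Dirac mass
print(u_onebody(0.0, us_p, uv) < u_onebody(0.0, us_n, uv))   # more attractive U_p
```

The sketch reproduces the qualitative argument in the text: the species with the deeper potential at low momentum necessarily carries the smaller effective mass in this two-parameter form.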
\begin{figure} \begin{center} \vspace*{0.5cm} \hspace*{-0.5cm} \psfig{figure=FIGURE6.ps,height=8.0cm} \vspace*{0.5cm} \caption{ Contribution from the asymmetry to the average potential energy per neutron (upper panel) and proton (lower panel). Average density as in the previous figures. } \label{six} \end{center} \end{figure} Some comments are in order concerning other DBHF calculations of asymmetric matter where the Dirac neutron effective mass is reported to be larger than the proton's \cite{plb,DFF}. In Ref.~\cite{plb}, the authors argue that the Dirac mass, $m^* = m + U_S$, should not be compared with the effective mass which can be extracted, for instance, from analyses based on non-relativistic optical models. Instead, an effective mass based on the energy dependence of the Schr\"odinger-equivalent potential should be calculated, in which case one finds that $m^*_n >m^*_p$. The arguments found in Ref.~\cite{plb} are well known and were already advanced in Ref.~\cite{fw}, where as many as six different definitions of the effective mass are introduced. Again, one must keep in mind the point we already made above. If the nucleon self-energy is written in terms of a scalar potential and {\it only} the time-like component of a vector potential, and both are taken to be momentum independent, it is then easy to see that the expansion of the single-particle energy is consistent to leading order with the non-relativistic single-particle energy. Thus it is reasonable, and in fact to be expected, that our Dirac masses would be qualitatively consistent with those from non-relativistic predictions, such as BHF calculations. In summary, the effective mass is just part of a convenient parametrization of the single-particle potential. Clearly, how many terms are retained in the nucleon self-energy and how their momentum dependence is handled will impact the parametrization. 
Ultimately, physical observables depending on the neutron/proton mean field must be correctly described, irrespective of the chosen parametrization. One conclusion that can be drawn from all of the above is that the parameters of the single-nucleon potential in asymmetric matter appear to be weakly correlated with observables such as the energy per particle or the symmetry energy, where proton and neutron contributions are averaged together. Constraints from ``differential'' or relative observables, namely those specifically sensitive to the difference between proton and neutron properties, are thus very much needed. Before closing, we also show for completeness the average potential energy per neutron/proton, where the momentum dependence has been integrated out. This is the proton/neutron potential energy contribution to the total energy per particle which then appears in the EOS. Actually, what we show in Fig.~6 are the average potential energies from which the part coming from the symmetric EOS has been subtracted out, that is, just the contribution from the asymmetry to the interaction potential energy, \begin{equation} \langle\Delta U_{n/p}\rangle(\rho,\alpha) = \langle U_{n/p}\rangle(\rho, \alpha) - \langle U\rangle(\rho, \alpha=0). \end{equation} Clearly, the contribution from the asymmetry, in both the momentum-dependent and the momentum-averaged potentials, turns out to be large and positive for neutrons, large and negative for protons. This component of the mean field will then be effective in separating the collision dynamics for neutrons and protons by making more neutrons unbound than protons (or, by making the neutrons more energetic, if already unbound). This effect can be discerned through observables such as the neutron/proton differential flow in heavy-ion collisions \cite{BAL}. \section{Conclusions} We have focused on some of the properties of neutrons and protons in neutron-rich matter. This is a topic of contemporary interest. 
Its relevance extends from the dynamics of colliding nuclei to nuclear astrophysics. Different models may be in fair agreement with respect to averaged properties of the EOS, and yet produce very different predictions of properties such as the symmetry potential and the closely related single-nucleon potentials and effective masses. Clearly, more stringent constraints are needed for the isospin-dependent properties of the EOS. Very good transport model calculations are available in the literature \cite{BUU,BAL}. However, a considerable amount of phenomenology is often involved in the input of these models (for instance, the mean field is based on some phenomenological interaction \cite{DAS03,DAS04} and/or the elementary cross sections are obtained from empirical data). We calculate all of the above ingredients {\it microscopically} and in a manner internally consistent with the underlying two-body force. We are presently studying the dependence on density {\it and} asymmetry of the in-medium isospin-dependent nucleon-nucleon cross sections with the purpose of obtaining a convenient parametrization as a function of energy, density, and asymmetry. Our microscopic information (both elementary cross sections and mean field) can be a valuable input for transport model calculations of heavy-ion dynamical observables. This combined effort will complement new data to be taken at RIA and eventually shed light on the less well known aspects of the nuclear equation of state. \\ \\ \begin{center} {\bf ACKNOWLEDGMENTS} \end{center} The authors acknowledge financial support from the U.S. Department of Energy under grant number DE-FG02-03ER41270.
\section{Introduction} The most commonly used large sample tests are the likelihood ratio \citep{Wilks1938}, Wald \citep{Wald1943} and Rao score \citep{Rao1948} tests. Recently, \cite{Terrell2002} proposed a new test statistic that shares the same first order asymptotic properties with the likelihood ratio ($LR$), Wald ($W$) and Rao score ($S_R$) statistics. The new statistic, referred to as the {\it gradient statistic} ($S_T$), is markedly simple. In fact, \cite{Rao2005} wrote: ``The suggestion by Terrell is attractive as it is simple to compute. It would be of interest to investigate the performance of the [gradient] statistic.'' The present paper goes in this direction. Let $\bm{x} = (x_{1}, \ldots, x_{n})^{\top}$ be a random vector of $n$ independent observations with probability density function $\pi(\bm{x}\mid\bm{\theta})$ that depends on a $p$-dimensional vector of unknown parameters $\bm{\theta} = (\theta_{1}, \ldots,\theta_{p})^{\top}$. Consider the problem of testing the composite null hypothesis $\mathcal{H}_{0}:\bm{\theta}_{2} = \bm{\theta}_{20}$ against $\mathcal{H}_{1}:\bm{\theta}_{2}\neq\bm{\theta}_{20}$, where $\bm{\theta} = (\bm{\theta}_{1}^{\top}, \bm{\theta}_{2}^{\top})^{\top}$, $\bm{\theta}_{1} = (\theta_{1}, \ldots,\theta_{q})^{\top}$ and $\bm{\theta}_{2} = (\theta_{q+1}, \ldots,\theta_{p})^{\top}$, $\bm{\theta}_{20}$ representing a $(p-q)$-dimensional fixed vector. Let $\ell$ be the total log-likelihood function, i.e. $\ell = \ell(\bm{\theta}) = \sum_{l=1}^{n}\log \pi(x_{l}\mid\bm{\theta})$. Let $\bm{U}(\bm{\theta}) = \partial\ell/\partial\bm{\theta} = (\bm{U}_{1}(\bm{\theta})^{\top}, \bm{U}_{2}(\bm{\theta})^{\top})^{\top}$ be the corresponding total score function partitioned following the partition of $\bm{\theta}$. 
The unrestricted and restricted maximum likelihood estimators of $\bm{\theta}$ are $\widehat{\bm{\theta}} = (\widehat{\bm{\theta}}_{1}^{\top}, \widehat{\bm{\theta}}_{2}^{\top})^{\top}$ and $\widetilde{\bm{\theta}} = (\widetilde{\bm{\theta}}_{1}^{\top}, \bm{\theta}_{20}^{\top})^{\top}$, respectively. The gradient statistic for testing $\mathcal{H}_{0}$ is \begin{equation}\label{grad_stat} S_{T} = \bm{U}(\widetilde{\bm{\theta}})^{\top}(\widehat{\bm{\theta}} - \widetilde{\bm{\theta}}). \end{equation} Since $\bm{U}_1(\widetilde{\bm{\theta}})= \bm{0}$, the gradient statistic in (\ref{grad_stat}) can be written as $S_{T} = \bm{U}_{2}(\widetilde{\bm{\theta}})^{\top}(\widehat{\bm{\theta}}_{2} - \bm{\theta}_{20})$. Clearly, $S_{T}$ has a very simple form: unlike $W$ and $S_{R}$, it requires neither the expected nor the observed information matrix, and involves no matrix inversion. Asymptotically, $S_{T}$ has a central chi-square distribution with $p-q$ degrees of freedom under $\mathcal{H}_{0}$. \cite{Terrell2002} points out that the gradient statistic ``is not transparently non-negative, even though it must be so asymptotically.'' His Theorem 2 implies that if the log-likelihood function is concave and is differentiable at $\widetilde{\bm{\theta}}$, then $S_{T}\ge 0$. In this paper we derive the asymptotic distribution of the gradient statistic for a composite null hypothesis under a sequence of Pitman alternatives converging to the null hypothesis at rate $n^{-1/2}$. In other words, the sequence of alternative hypotheses is $\mathcal{H}_{1n}:\bm{\theta}_{2}=\bm{\theta}_{20} + n^{-1/2}\bm{\epsilon}$, where $\bm{\epsilon} = (\epsilon_{q+1}, \ldots,\epsilon_{p})^{\top}$. Similar results were obtained for the likelihood ratio and Wald tests by \cite{Hayakawa1975}, and for the score test by \cite{HarrisPeers1980}. We then compare the local power properties of the competing tests. 
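To make the definition in (\ref{grad_stat}) concrete, the sketch below evaluates $S_{T}$ for testing the mean of a normal sample with the variance as the nuisance parameter $\bm{\theta}_{1}$; the likelihood-ratio statistic is computed alongside for comparison. The data values are invented for demonstration.

```python
import math

# Gradient statistic S_T = U_2(theta~)^T (theta^_2 - theta_20) for
# H0: mu = mu0 in a N(mu, sigma^2) sample, sigma^2 being the nuisance
# parameter. The data below are invented for demonstration only.

def gradient_stat(x, mu0):
    n = len(x)
    xbar = sum(x) / n
    # restricted MLE under H0: mu = mu0, sigma~^2 = sum (x - mu0)^2 / n
    s2_tilde = sum((xi - mu0) ** 2 for xi in x) / n
    # score for mu evaluated at the restricted MLE
    score_mu = n * (xbar - mu0) / s2_tilde
    # unrestricted MLE of mu is xbar
    return score_mu * (xbar - mu0)

def lr_stat(x, mu0):
    n = len(x)
    xbar = sum(x) / n
    s2_hat = sum((xi - xbar) ** 2 for xi in x) / n
    s2_tilde = sum((xi - mu0) ** 2 for xi in x) / n
    return n * math.log(s2_tilde / s2_hat)

x = [1.2, 0.7, 2.1, 1.5, 0.9, 1.8, 1.1, 1.6]
print(gradient_stat(x, mu0=1.0))  # nonnegative here; asymptotically chi-square(1)
print(lr_stat(x, mu0=1.0))        # first-order equivalent statistic
```

In this model $S_{T}$ reduces to $n(\bar{x}-\mu_0)^2/\tilde{\sigma}^2$, so its non-negativity is transparent even though, as noted above, this is not the case in general.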
Our results will be specialized to the case of the one-parameter exponential family. A brief discussion closes the paper. \section{Notation and preliminaries} Our notation follows that of \cite{Hayakawa1975, Hayakawa1977}. We introduce the following log-likelihood derivatives \[ y_{r} = n^{-1/2}\frac{\partial\ell}{\partial\theta_{r}},\quad y_{rs} = n^{-1}\frac{\partial^{2}\ell}{\partial\theta_{r}\partial\theta_{s}},\quad y_{rst} = n^{-3/2}\frac{\partial^{3}\ell}{\partial\theta_{r}\partial\theta_{s}\partial\theta_{t}}, \] their arrays $\bm{y} = (y_{1},\ldots,y_{p})^{\top}$, $\bm{Y}=((y_{rs}))$, $\bm{Y}_{...}=((y_{rst}))$, the corresponding cumulants $\kappa_{rs} = E(y_{rs})$, $\kappa_{r,s} = E(y_{r}y_{s})$, $\kappa_{rst} = n^{1/2}E(y_{rst})$, $\kappa_{r,st} = n^{1/2}E(y_{r}y_{st})$, $\kappa_{r,s,t} = n^{1/2}E(y_{r}y_{s}y_{t})$ and their arrays $\bm{K}= ((\kappa_{r,s}))$, $\bm{K}_{...} = ((\kappa_{rst}))$, $\bm{K}_{.,..} = ((\kappa_{r,st}))$ and $\bm{K}_{.,.,.} = ((\kappa_{r,s,t}))$. We make the same assumptions as in \cite{Hayakawa1975}. In particular, it is assumed that the $\kappa$'s are all $O(1)$ and they are not functionally independent; for instance, $\kappa_{r,s} = -\kappa_{rs}$. Relations among them were first obtained by \cite{Bartlett1953a, Bartlett1953b}. Also, it is assumed that $\bm{Y}$ is non-singular and that $\bm{K}$ is positive definite with inverse $\bm{K}^{-1} = ((\kappa^{r,s}))$ say. For triple-suffix quantities we use the following summation notation \[ \bm{K}_{...}\circ\bm{a}\circ\bm{b}\circ\bm{c} = \sum_{r,s,t=1}^{p}\kappa_{rst}a_{r}b_{s}c_{t}, \quad \bm{K}_{.,..}\circ\bm{M}\circ\bm{b} = \sum_{r,s,t=1}^{p}\kappa_{r,st}m_{rs}b_{t}, \] where $\bm{M}$ is a $p\times p$ matrix and $\bm{a}$, $\bm{b}$ and $\bm{c}$ are $p\times 1$ column vectors. 
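The triple-suffix summation conventions above map directly onto tensor contractions; a small numerical sketch using NumPy's einsum, with all array values invented, checks the two contractions against explicit loops.

```python
import numpy as np

# Sketch: the summation conventions for triple-suffix quantities,
#   K_... o a o b o c = sum_{r,s,t} kappa_{rst} a_r b_s c_t   and
#   K_.,.. o M o b    = sum_{r,s,t} kappa_{r,st} m_{rs} b_t,
# expressed as einsum contractions. All array values are invented.

p = 3
rng = np.random.default_rng(0)
K3 = rng.standard_normal((p, p, p))   # plays the role of ((kappa_{rst}))
a, b, c = (rng.standard_normal(p) for _ in range(3))
M = rng.standard_normal((p, p))

triple = np.einsum('rst,r,s,t->', K3, a, b, c)
mixed = np.einsum('rst,rs,t->', K3, M, b)

# explicit triple loop for comparison
triple_loop = sum(K3[r, s, t] * a[r] * b[s] * c[t]
                  for r in range(p) for s in range(p) for t in range(p))
print(np.isclose(triple, triple_loop))
```

The restricted sums such as $\bm{K}_{2..}\circ\bm{a}_{2}\circ\bm{b}\circ\bm{c}$ are obtained in the same way by slicing the first axis of the cumulant array before contracting.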
The partition $\bm{\theta} = (\bm{\theta}_{1}^{\top}, \bm{\theta}_{2}^{\top})^{\top}$ induces the corresponding partitions: \[ \bm{Y}= \begin{bmatrix} \bm{Y}_{11} & \bm{Y}_{12} \\ \bm{Y}_{21} & \bm{Y}_{22} \end{bmatrix}, \quad \bm{K}= \begin{bmatrix} \bm{K}_{11} & \bm{K}_{12} \\ \bm{K}_{21} & \bm{K}_{22} \end{bmatrix}, \quad \bm{K}^{-1}= \begin{bmatrix} \bm{K}^{11} & \bm{K}^{12} \\ \bm{K}^{21} & \bm{K}^{22} \end{bmatrix}, \] $\bm{a} = (\bm{a}_{1}^{\top},\bm{a}_{2}^{\top})^{\top}$, etc. Also, \[ \bm{K}_{2..}\circ\bm{a}_{2}\circ\bm{b}\circ\bm{c} = \sum_{r=q+1}^{p}\sum_{s,t=1}^{p}\kappa_{rst}a_{r}b_{s}c_{t}. \] Using a procedure analogous to that of \cite{Hayakawa1975}, we can write the asymptotic expansion of $S_{T}$ for the composite hypothesis up to order $n^{-1/2}$ as \begin{align*}\label{eqSg2} \begin{split} S_{T} &= -(\bm{Z}\bm{y}+\bm{\xi})^{\top}\bm{Y}(\bm{Z}\bm{y}+\bm{\xi}) -\frac{1}{2\sqrt{n}}\bm{K}_{...} \circ(\bm{Z}\bm{y} + \bm{\xi})\circ\bm{Y}^{-1}\bm{y}\circ\bm{Y}^{-1}\bm{y}\\ &\quad-\frac{1}{2\sqrt{n}}\bm{K}_{...} \circ(\bm{Z}\bm{y} + \bm{\xi})\circ(\bm{Z}_{0}\bm{y}-\bm{\xi})\circ(\bm{Z}_{0}\bm{y}-\bm{\xi}) + O_{p}(n^{-1}), \end{split} \end{align*} where $\bm{Z} = \bm{Y}^{-1} - \bm{Z}_{0}$, \[ \bm{Z}_{0} = \begin{bmatrix} \bm{Y}_{11}^{-1} & \bm{0} \\ \bm{0} & \bm{0} \end{bmatrix}, \quad \bm{\xi} = \begin{bmatrix} \bm{Y}_{11}^{-1}\bm{Y}_{12}\\ -\bm{I}_{p-q} \end{bmatrix}\bm{\epsilon}, \] $\bm{I}_{p-q}$ being the identity matrix of order $p-q$. 
We can now use a multivariate Edgeworth Type A series expansion of the joint density function of $\bm{y}$ and $\bm{Y}$ up to order $n^{-1/2}$ \citep{Peers1971}, which has the form \begin{align*} f_{1} &= f_{0}\biggl[1 + \frac{1}{6\sqrt{n}}(\bm{K}_{.,.,.} \circ\bm{K}^{-1}\bm{y}\circ\bm{K}^{-1}\bm{y}\circ\bm{K}^{-1}\bm{y} -3\bm{K}_{.,.,.}\circ\bm{K}^{-1}\circ\bm{K}^{-1}\bm{y})\\ &\quad-\frac{1}{\sqrt{n}}\bm{K}_{.,..}\circ\bm{K}^{-1}\bm{y}\circ\bm{D}\biggr] + O(n^{-1}), \end{align*} where \[ f_{0} = (2\pi)^{-p/2}|\bm{K}|^{-1/2}\exp\biggl\{-\frac{1}{2}\bm{y}^{\top}\bm{K}^{-1}\bm{y}\biggr\} \prod_{r,s=1}^{p}\delta(y_{rs} - \kappa_{rs}), \] $\bm{D} = ((d_{bc}))$, $d_{bc} = \delta'(y_{bc} - \kappa_{bc})/\delta(y_{bc} - \kappa_{bc})$, with $\delta(\cdot)$ being the Dirac delta function \citep{Bracewell}, to obtain the moment generating function of $S_{T}$, $M(t)$ say. From $f_{1}$ and the asymptotic expansion of $S_{T}$ up to order $n^{-1/2}$, we arrive, after long algebra, at \begin{align*}\label{mgf} \begin{split} M(t) &= (1-2t)^{-\frac{1}{2}(p-q)}\exp\biggr(\frac{t}{1-2t}\bm{\epsilon}^{\top}\bm{K}_{22.1}\bm{\epsilon}\biggr)\\ &\qquad\qquad\times\biggl[1 + \frac{1}{\sqrt{n}}(A_{1}d + A_{2}d^{2} + A_{3}d^{3})\biggr] + O(n^{-1}), \end{split} \end{align*} where $d = 2t/(1-2t)$, $\bm{K}_{22.1} = \bm{K}_{22} - \bm{K}_{21}\bm{K}_{11}^{-1}\bm{K}_{12}$, $A_{1} = -(\bm{K}_{...}\circ\bm{K}^{-1}\circ\bm{\epsilon}^{*} + 4\bm{K}_{.,..}\circ\bm{A}\circ\bm{\epsilon}^{*} + \bm{K}_{...}\circ\bm{A}\circ\bm{\epsilon}^{*} +\bm{K}_{...}\circ\bm{\epsilon}^{*}\circ\bm{\epsilon}^{*}\circ\bm{\epsilon}^{*})/4$, $A_{2} = -(\bm{K}_{...}\circ\bm{K}^{-1}\circ\bm{\epsilon}^{*} -\bm{K}_{...}\circ\bm{A}\circ\bm{\epsilon}^{*} - 2\bm{K}_{.,..}\circ\bm{\epsilon}^{*}\circ\bm{\epsilon}^{*}\circ\bm{\epsilon}^{*})/4$, $A_{3} = -\bm{K}_{...}\circ\bm{\epsilon}^{*}\circ\bm{\epsilon}^{*}\circ\bm{\epsilon}^{*}/12$, \[ \quad \bm{\epsilon}^{*} = \begin{bmatrix} \bm{K}_{11}^{-1}\bm{K}_{12}\\ -\bm{I}_{p-q} 
\end{bmatrix}\bm{\epsilon}, \quad \bm{A} = \begin{bmatrix} \bm{K}_{11}^{-1} & \bm{0}\\ \bm{0} & \bm{0} \end{bmatrix}. \] When $n\to\infty$, $M(t)\to (1-2t)^{-(p-q)/2}\exp\{2t\lambda/(1-2t)\}$, where $\lambda = \bm{\epsilon}^{\top}\bm{K}_{22.1}\bm{\epsilon}/2$, and hence the limiting distribution of $S_T$ is a non-central chi-square distribution with $p-q$ degrees of freedom and non-centrality parameter $\lambda$. Under $\mathcal{H}_{0}$, i.e. when $\bm{\epsilon} = \bm{0}$, $M(t) = (1-2t)^{-(p-q)/2} + O(n^{-1})$ and, as expected, $S_{T}$ has a central chi-square distribution with $p-q$ degrees of freedom up to an error of order $n^{-1}$. Also, from $M(t)$ we may obtain the first three moments of $S_{T}$ up to order $n^{-1/2}$ as $\mu_{1}'(S_{T}) = p-q + \lambda + 2A_{1}/\sqrt{n}$, $\mu_{2}(S_{T}) = 2(p-q+2\lambda) + 8(A_{1} + A_{2})/\sqrt{n}$ and $\mu_{3}(S_{T}) = 8(p-q+3\lambda) + 6(A_{1} + 2A_{2} + A_{3})/\sqrt{n}$. \section{Main result} The moment generating function of $S_{T}$ in a neighborhood of $\bm{\theta}_{2} = \bm{\theta}_{20}$ can be written, after some algebra, as \begin{align*} M(t) &= (1-2t)^{-\frac{1}{2}(p-q)}\exp\biggl(\frac{t}{1-2t}\bm{\epsilon}^{\top}\bm{K}_{22.1}^{\dagger}\bm{\epsilon}\biggr)\\ &\qquad\qquad\times\biggl[1 + \frac{1}{\sqrt{n}}\sum_{k=0}^{3}a_{k}(1-2t)^{-k}\biggr] + O(n^{-1}), \end{align*} where \begin{align}\label{as} \begin{split} a_{1} &= \frac{1}{4}\bigl\{\bm{K}_{...}^{\dagger}\circ(\bm{K}^{-1})^{\dagger}\circ(\bm{\epsilon}^{*})^{\dagger} -(4\bm{K}_{.,..} + 3\bm{K}_{...})^{\dagger}\circ\bm{A}^{\dagger}\circ(\bm{\epsilon}^{*})^{\dagger}\\ &\quad -2(\bm{K}_{...} + 2\bm{K}_{.,..})^{\dagger}\circ(\bm{\epsilon}^{*})^{\dagger} \circ(\bm{\epsilon}^{*})^{\dagger}\circ(\bm{\epsilon}^{*})^{\dagger}\\ &\quad-2(\bm{K}_{2..} + \bm{K}_{2,..})^{\dagger} \circ\bm{\epsilon}\circ(\bm{\epsilon}^{*})^{\dagger}\circ(\bm{\epsilon}^{*})^{\dagger}\bigr\},\\ a_{2} &= -\frac{1}{4}\bigl\{\bm{K}_{...}^{\dagger}\circ(\bm{K}^{-1} - 
\bm{A})^{\dagger}\circ(\bm{\epsilon}^{*})^{\dagger}\\ &\quad-(\bm{K}_{...} + 2\bm{K}_{.,..})^{\dagger}\circ(\bm{\epsilon}^{*})^{\dagger} \circ(\bm{\epsilon}^{*})^{\dagger}\circ(\bm{\epsilon}^{*})^{\dagger}\bigr\},\\ a_{3} &= -\frac{1}{12}\bm{K}_{...}^{\dagger}\circ(\bm{\epsilon}^{*})^{\dagger} \circ(\bm{\epsilon}^{*})^{\dagger}\circ(\bm{\epsilon}^{*})^{\dagger}, \end{split} \end{align} and $a_{0} = -(a_{1} + a_{2} + a_{3})$. The symbol ``$\dagger$'' denotes evaluation at $\bm{\theta} = (\bm{\theta}_{1}^\top, \bm{\theta}_{20}^\top)^\top$. Inverting $M(t)$, we arrive at the following theorem, our main result. \begin{theorem}\label{theorem1} The asymptotic expansion of the distribution of the gradient statistic for testing a composite hypothesis under a sequence of local alternatives converging to the null hypothesis at rate $n^{-1/2}$ is \begin{equation}\label{asymp} \Pr(S_{T}\leq x) = G_{f,\lambda}(x) + \frac{1}{\sqrt{n}}\sum_{k=0}^{3}a_{k}G_{f+2k,\lambda}(x) + O(n^{-1}), \end{equation} where $G_{m,\lambda}(x)$ is the cumulative distribution function of a non-central chi-square variate with $m$ degrees of freedom and non-centrality parameter $\lambda$. Here, $f = p-q$, $\lambda = \bm{\epsilon}^{\top}\bm{K}_{22.1}^{\dagger}\bm{\epsilon}/2$ and the $a_{k}$'s are given in~(\ref{as}). \end{theorem} If $q=0$, the null hypothesis is simple, $\bm{\epsilon}^{*} = -\bm{\epsilon}$ and $\bm{A} = \bm{0}$. Therefore, an immediate consequence of Theorem \ref{theorem1} is the following corollary. 
\begin{corollary} The asymptotic expansion of the distribution of the gradient statistic for testing a simple hypothesis under a sequence of local alternatives converging to the null hypothesis at rate $n^{-1/2}$ is given by (\ref{asymp}) with $f = p$, $\lambda = \bm{\epsilon}^{\top}\bm{K}^{\dagger}\bm{\epsilon}/2$, $a_{0} = \bm{K}_{...}^{\dagger}\circ\bm{\epsilon}\circ\bm{\epsilon}\circ\bm{\epsilon}/6$, $a_{1} = -\{\bm{K}_{...}^{\dagger}\circ(\bm{K}^{-1})^{\dagger}\circ\bm{\epsilon} -2\bm{K}_{.,..}^{\dagger}\circ\bm{\epsilon}\circ\bm{\epsilon}\circ\bm{\epsilon}\}/4$, $a_{2} = \{\bm{K}_{...}^{\dagger}\circ(\bm{K}^{-1})^{\dagger}\circ\bm{\epsilon} -(\bm{K}_{...} + 2\bm{K}_{.,..})^{\dagger}\circ\bm{\epsilon}\circ\bm{\epsilon}\circ\bm{\epsilon}\}/4$ and $a_{3} = \bm{K}_{...}^{\dagger}\circ\bm{\epsilon}\circ\bm{\epsilon}\circ\bm{\epsilon}/12$. \end{corollary} \section{Power comparisons between the rival tests} To first order, $S_{T}$, $LR$, $W$ and $S_{R}$ have the same asymptotic distributional properties under either the null or local alternative hypotheses. Up to an error of order $n^{-1}$ the corresponding criteria have the same size, but their powers differ in the $n^{-1/2}$ term. The power performance of the different tests may then be compared based on the expansions of their power functions, ignoring terms of order smaller than $n^{-1/2}$. \cite{HarrisPeers1980} presented a study of local power, up to order $n^{-1/2}$, for the likelihood ratio, Wald and score tests. They showed that none of the criteria is uniformly better than the others. Let $S_{i}$ ($i=1,2,3,4$) be, respectively, the likelihood ratio, Wald, score and gradient statistics. We can write their local powers as $\Pi_{i} = 1 - \Pr(S_{i}\leq x) = \Pr(S_{i} > x)$, where \[ \Pr(S_{i}\leq x) = G_{p-q,\lambda}(x) + \frac{1}{\sqrt{n}}\sum_{k=0}^{3}a_{ik}G_{p-q+2k,\lambda}(x) + O(n^{-1}). 
\] The coefficients that define the local powers of the likelihood ratio and Wald tests are given in \cite{Hayakawa1975}; those corresponding to the score and gradient tests are given in \cite{HarrisPeers1980} and in~(\ref{as}), respectively. All of them are complicated functions of joint cumulants of log-likelihood derivatives, but we can draw the following general conclusions: \vspace{-0.2cm} \begin{itemize} \item all four tests are locally biased; \item if $\bm{K}_{...} = \bm{0}$, the likelihood ratio, Wald and gradient tests have identical local powers; \item if $\bm{K}_{...} = 2\bm{K}_{.,.,.}$, the score and gradient tests have identical local powers. \end{itemize} \vspace{-0.2cm} Further classifications are possible for appropriate subspaces of the parameter space; see, for instance, \cite{HarrisPeers1980} and \cite{Hay-Puri85}. Therefore, there is no uniform superiority of one test with respect to the others. Hence the gradient test, which is very simple to compute, as pointed out by C.R.~Rao, is an attractive alternative to the likelihood ratio, Wald and score tests. \section{One-parameter exponential family} Let $\bm{x} = (x_1,\ldots,x_n)^{\top}$ be a random sample of size $n$, with each $x_{l}$ having probability density function $\pi(x;\theta)=\exp\{t(x;\theta)\}$, where $\theta$ is a scalar parameter. To test $\mathcal{H}_{0}:\theta=\theta_{0}$, where $\theta_{0}$ is a fixed known constant, the likelihood ratio, Wald, score and gradient statistics are, respectively, \[ S_{1} = 2\sum_{l=1}^{n}\{t(x_{l};\widehat{\theta}) - t(x_{l};\theta_{0})\}, \quad S_{2} = n(\widehat{\theta} - \theta_{0})^2K(\widehat{\theta}), \] \[ S_{3} = \frac{(\sum_{l=1}^{n}t'(x_{l};\theta_{0}))^2}{nK(\theta_{0})}, \quad S_{4} = (\widehat{\theta} - \theta_{0})\sum_{l=1}^{n}t'(x_{l};\theta_{0}), \] where $\widehat{\theta}$ is the maximum likelihood estimator of $\theta$ and $K=K(\theta)$ denotes the Fisher information for a single observation. 
Under $\mathcal{H}_{0}$, all four statistics asymptotically have a central chi-square distribution with one degree of freedom. Now, let $\kappa_{\theta\theta} = E\{t''(x;\theta)\}$, $\kappa_{\theta\theta\theta} = E\{t'''(x;\theta)\}$, $\kappa_{\theta,\theta\theta} = E\{t'(x;\theta)t''(x;\theta)\}$, $\kappa^{\theta,\theta} = -\kappa_{\theta\theta}^{-1}$, etc., where primes denote derivatives with respect to $\theta$; for instance, $t''(x;\theta) = {\rm d}^{2}t(x;\theta)/{\rm d}\theta^2$. The asymptotic expansion of the distribution of the gradient statistic for the null hypothesis $\mathcal{H}_{0}:\theta=\theta_{0}$ under the sequence of local alternatives $\mathcal{H}_{1n}:\theta = \theta_{0} + n^{-1/2}\epsilon$ is given by (\ref{asymp}) with $f=1$, $\lambda = K^{\dagger}\epsilon^2/2$, \[ a_{0} = \frac{\kappa_{\theta\theta\theta}^{\dagger}\epsilon^3}{6}, \quad a_{1} = -\frac{\kappa_{\theta\theta\theta}^{\dagger}(\kappa^{\theta,\theta})^{\dagger}\epsilon -2\kappa_{\theta,\theta\theta}^{\dagger}\epsilon^3}{4}, \] \[ a_{2} = \frac{\kappa_{\theta\theta\theta}^{\dagger}(\kappa^{\theta,\theta})^{\dagger}\epsilon -(\kappa_{\theta\theta\theta} + 2\kappa_{\theta,\theta\theta})^{\dagger}\epsilon^3}{4}, \quad a_{3} = \frac{\kappa_{\theta\theta\theta}^{\dagger}\epsilon^3}{12}. \] We now specialize to the case where $\pi(x;\theta)$ belongs to the one-parameter exponential family. Let $t(x;\theta) = -\log\zeta(\theta) - \alpha(\theta)d(x) + v(x)$, where $\alpha(\cdot)$, $\zeta(\cdot)$, $d(\cdot)$ and $v(\cdot)$ are known functions. Also, $\alpha(\cdot)$ and $\zeta(\cdot)$ are assumed to have continuous derivatives up to third order, with $\zeta(\cdot) > 0$, and $\alpha'(\theta)$ and $\beta'(\theta)$ are assumed to be different from zero for all $\theta$ in the parameter space, where $\beta(\theta) = \zeta'(\theta)/\{\zeta(\theta)\alpha'(\theta)\}$. 
Since $K = \alpha'(\theta)\beta'(\theta)$, $\sum_{l=1}^{n}t(x_{l};\theta) = -n\{\log\zeta(\theta) + \alpha(\theta)\bar{d} - \bar{v}\}$, $\sum_{l=1}^{n}t'(x_{l};\theta) = -n\alpha'(\theta)\{\beta(\theta) + \bar{d}\}$, with $\bar{d}=\sum_{l=1}^{n}d(x_{l})/n$ and $\bar{v}=\sum_{l=1}^{n}v(x_{l})/n$, we have \[ S_{1} = 2n\biggl[\log\biggl\{\frac{\zeta(\theta_{0})}{\zeta(\widehat{\theta})}\biggr\} + \{\alpha(\theta_{0}) - \alpha(\widehat{\theta})\}\bar{d}\biggr], \quad S_{2} = n(\widehat{\theta} - \theta_{0})^2\alpha'(\widehat{\theta})\beta'(\widehat{\theta}), \] \[ S_{3} = \frac{n\alpha'(\theta_{0})\{\beta(\theta_{0}) + \bar{d}\}^2}{\beta'(\theta_{0})}, \quad S_{4} = n(\theta_{0}-\widehat{\theta})\alpha'(\theta_{0})\{\beta(\theta_{0}) + \bar{d}\}. \] Let $\alpha' = \alpha'(\theta)$, $\alpha'' = \alpha''(\theta)$, $\beta' = \beta'(\theta)$ and $\beta'' = \beta''(\theta)$. It can be shown that $\kappa_{\theta\theta} = -\alpha'\beta'$, $\kappa_{\theta\theta\theta} = -(2\alpha''\beta' + \alpha'\beta'')$, $\kappa_{\theta,\theta\theta} = \alpha''\beta'$, $\kappa_{\theta,\theta,\theta} = \alpha'\beta'' - \alpha''\beta'$. 
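To illustrate, for the gamma model with known index $k$ one has $\alpha(\theta)=\theta$, $\zeta(\theta)=\theta^{-k}$, $d(x)=x$, hence $\beta(\theta)=-k/\theta$, $\beta'(\theta)=k/\theta^2$ and $K=k/\theta^2$, and the likelihood equation $\beta(\widehat{\theta})+\bar{d}=0$ gives $\widehat{\theta}=k/\bar{x}$. A short sketch checking the closed-form statistics against the generic definitions of the previous section (all numerical values are illustrative):

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(1)
k = 3.0                        # known gamma index
theta_true = 1.5               # rate parameter
x = rng.gamma(shape=k, scale=1.0 / theta_true, size=400)

def t(x, theta):               # log-density t(x; theta)
    return k * np.log(theta) - theta * x + (k - 1) * np.log(x) - lgamma(k)

def tprime(x, theta):          # d t / d theta
    return k / theta - x

theta0 = 1.5
n, xbar = len(x), x.mean()
theta_hat = k / xbar           # solves beta(theta_hat) + dbar = 0

# generic definitions of S1, S3, S4
S1 = 2 * np.sum(t(x, theta_hat) - t(x, theta0))
S3 = np.sum(tprime(x, theta0)) ** 2 / (n * k / theta0 ** 2)
S4 = (theta_hat - theta0) * np.sum(tprime(x, theta0))

# closed forms with alpha(theta) = theta, zeta(theta) = theta^{-k}
S1_cf = 2 * n * (k * np.log(theta_hat / theta0) + (theta0 - theta_hat) * xbar)
S3_cf = n * theta0 ** 2 * (k / theta0 - xbar) ** 2 / k
S4_cf = n * (theta0 - theta_hat) * (-k / theta0 + xbar)
```

The two routes agree to machine precision, which is a useful sanity check when deriving the $a$ coefficients for a new family.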
The coefficients that define the local powers of the tests that use $S_{1}$, $S_{2}$, $S_{3}$ and $S_{4}$ are \[ a_{10} = a_{20} = a_{30} = -a_{23} = 2a_{43} = -\frac{(2\alpha''\beta' + \alpha'\beta'')\epsilon^3}{6},\quad a_{11} = \frac{\alpha''\beta'\epsilon^3}{2}, \] \[ a_{12} = a_{33} = -a_{40} = \frac{(\alpha'\beta'' - \alpha''\beta')\epsilon^3}{6},\quad a_{31} = \frac{\alpha''\beta'\epsilon^3}{2} - \frac{(\alpha'\beta'' - \alpha''\beta')\epsilon}{2\alpha'\beta'},\quad \] \[ a_{21} = -a_{22} = \frac{\alpha''\beta'\epsilon^3}{2} - \frac{(2\alpha''\beta' + \alpha'\beta'')\epsilon}{2\alpha'\beta'}, \quad a_{32} = \frac{(\alpha'\beta'' - \alpha''\beta')\epsilon}{2\alpha'\beta'}, \quad a_{13} = 0, \] \[ a_{41} = \frac{\alpha''\beta'\epsilon^3}{2} + \frac{(2\alpha''\beta' + \alpha'\beta'')\epsilon}{4\alpha'\beta'}, \quad a_{42} = \frac{\alpha'\beta''\epsilon^3}{4} - \frac{(2\alpha''\beta' + \alpha'\beta'')\epsilon}{4\alpha'\beta'}. \] If $\alpha(\theta) = \theta$, $\pi(x;\theta)$ corresponds to a one-parameter natural exponential family. In this case, $\alpha'=1$, $\alpha''=0$ and the $a$'s simplify considerably. We now present some analytical comparisons among the local powers of the four tests for a number of distributions within the one-parameter exponential family. Let $\Pi_{i}$ and $\Pi_{j}$ be the power functions, up to order $n^{-1/2}$, of the tests that use the statistics $S_{i}$ and $S_{j}$, respectively, with $i\neq j$ and $i,j=1,2,3,4$. We have, \begin{equation}\label{diff_power} \Pi_{i} - \Pi_{j} = \frac{1}{\sqrt{n}}\sum_{k=0}^{3}(a_{jk} - a_{ik})G_{1+2k,\lambda}(x). \end{equation} It is well known that \begin{equation}\label{diff_G} G_{m,\lambda}(x) - G_{m+2,\lambda}(x) = 2g_{m+2,\lambda}(x), \end{equation} where $g_{\nu,\lambda}(x)$ is the probability density function of a non-central chi-square random variable with $\nu$ degrees of freedom and non-centrality parameter $\lambda$. 
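Both the identity (\ref{diff_G}) and the resulting first-order power difference (\ref{diff_power}) are straightforward to evaluate numerically; a minimal sketch using scipy (the coefficient vectors $a_{ik}$ would come from the expressions above, and the sample values in the check are arbitrary):

```python
import numpy as np
from scipy.stats import ncx2

# check G_{m,lam}(x) - G_{m+2,lam}(x) = 2 g_{m+2,lam}(x)
x = np.linspace(0.1, 20.0, 200)
for m in (1, 3, 5):
    for lam in (0.5, 2.0):
        lhs = ncx2.cdf(x, m, lam) - ncx2.cdf(x, m + 2, lam)
        rhs = 2 * ncx2.pdf(x, m + 2, lam)
        assert np.allclose(lhs, rhs, atol=1e-8)

def power_difference(a_j, a_i, lam, x, n):
    """Pi_i - Pi_j up to order n^{-1/2} at a scalar point x;
    a_i, a_j are the coefficient vectors (a_{.k}, k = 0..3)."""
    G = np.array([ncx2.cdf(x, 1 + 2 * kk, lam) for kk in range(4)])
    return np.sum((np.asarray(a_j) - np.asarray(a_i)) * G) / np.sqrt(n)
```

In particular, two tests with identical coefficient vectors have identical local powers to this order, as the sum then vanishes term by term.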
From~(\ref{diff_power}) and (\ref{diff_G}), we can state the following comparisons among the powers of the four tests. Here, we assume that $\theta > \theta_{0}$; the opposite inequalities hold if $\theta < \theta_{0}$. \begin{enumerate} \item Normal ($\theta>0$, $-\infty<\mu<\infty$ and $x\in$ I\!R): \begin{itemize} \item $\mu$ known: $\alpha(\theta) = (2\theta)^{-1}$, $\zeta(\theta) = \theta^{1/2}$, $d(x) = (x-\mu)^2$ and $v(x) = -\{\log(2\pi)\}/2$, $\Pi_{4} > \Pi_{3} > \Pi_{1} > \Pi_{2}$. \item $\theta$ known: $\alpha(\mu) = -\mu/\theta$, $\zeta(\mu) = \exp\{\mu^2/(2\theta)\}$, $d(x) = x$ and $v(x) = -\{x^2 + \log(2\pi\theta)\}/2$, $\Pi_{1} = \Pi_{2} = \Pi_{3} = \Pi_{4}$. \end{itemize} \item Inverse normal ($\theta>0$, $\mu>0$ and $x>0$): \begin{itemize} \item $\mu$ known: $\alpha(\theta) = \theta$, $\zeta(\theta) = \theta^{-1/2}$, $d(x) = (x-\mu)^2/(2\mu^2x)$ and $v(x) = -\{\log(2\pi x^3)\}/2$, $\Pi_{1} > \Pi_{4} > \Pi_{2} = \Pi_{3}$. \item $\theta$ known: $\alpha(\mu) = \theta/(2\mu^2)$, $\zeta(\mu) = \exp\{-\theta/\mu\}$, $d(x) = x$ and $v(x) = -\{\theta/(2x) - \log(\theta/(2\pi x^3))\}/2$, $\Pi_{4} > \Pi_{3} > \Pi_{1} > \Pi_{2}$. \end{itemize} \item Gamma ($k>0$, $k$ known, $\theta>0$ and $x>0$): $\alpha(\theta) = \theta$, $\zeta(\theta) = \theta^{-k}$, $d(x) = x$ and $v(x) = (k-1)\log(x) - \log\{\Gamma(k)\}$, where $\Gamma(\cdot)$ is the gamma function, $\Pi_{4} > \Pi_{1} > \Pi_{2} = \Pi_{3}$. \item Truncated extreme value ($\theta > 0$ and $x > 0$): $\alpha(\theta) = \theta^{-1}$, $\zeta(\theta) = \theta$, $d(x) = \exp(x) - 1$ and $v(x) = x$, $\Pi_{4} > \Pi_{3} > \Pi_{1} > \Pi_{2}$. \item Pareto ($\theta>0$, $k>0$, $k$ known and $x>k$): $\alpha(\theta) = 1 + \theta$, $\zeta(\theta) = (\theta k^\theta)^{-1}$, $d(x) = \log(x)$ and $v(x) = 0$, $\Pi_{4} > \Pi_{1} > \Pi_{2} = \Pi_{3}$.
\item Laplace ($\theta>0$, $-\infty<k<\infty$, $k$ known and $x > 0$): $\alpha(\theta) = \theta^{-1}$, $\zeta(\theta) = 2\theta$, $d(x) = |x - k|$ and $v(x) = 0$, $\Pi_{4} > \Pi_{3} > \Pi_{1} > \Pi_{2}$. \item Power ($\theta>0$, $\phi>0$, $\phi$ known and $x>\phi$): $\alpha(\theta) = 1 - \theta$, $\zeta(\theta) = \theta^{-1}\phi^\theta$, $d(x) = \log(x)$ and $v(x) = 0$, $\Pi_{4} > \Pi_{1} > \Pi_{2} = \Pi_{3}$. \end{enumerate} \section{Discussion} The gradient test is an interesting alternative to the classic large-sample tests, namely the likelihood ratio, Wald and Rao score tests. It is competitive with the other three tests since, as we showed, none of them is uniformly superior to the others in terms of second-order local power. Unlike the Wald and the score statistics, the gradient statistic does not require one to obtain, estimate or invert an information matrix, which can be an advantage in complex problems. Theorem 3 in \cite{Terrell2002} points to another important feature of the gradient test. It suggests that, in general, the chi-square approximation to the null distribution of the gradient statistic can be improved by using a less biased estimator of $\bm{\theta}$. It is well known that the maximum likelihood estimator can be bias-corrected using the results of \cite{CoxSnell1968} or the approach proposed by \cite{DavidFirth1993}. The effect of replacing the maximum likelihood estimator by its bias-corrected versions will be studied in future research. Note that, like $W$ but unlike $LR$ and $S_{R}$, the gradient statistic is not invariant under non-linear reparameterizations. However, we can improve its performance under the null hypothesis by choosing a parameterization under which the maximum likelihood estimator is nearly unbiased. Our results are quite general, and can be specialized to important classes of statistical models, such as generalised linear models.
Local power comparisons of the three usual large-sample tests in generalised linear models are presented by \cite{CordeiroBotterFerrari1994} and \cite{FerrariBotterCribari1997}. The extension of their studies to include the gradient test will be reported elsewhere. As a final remark, the power comparisons performed in the present paper consider the four tests in their original form, i.e. they are not corrected to achieve local unbiasedness; see \cite{RaoMukerjee1997} and references therein for this alternative approach. In fact, this approach can be explored in future work for the gradient test.
\section{Introduction} \label{sec:intro} The current ongoing flow of high accuracy astrophysical observations has important consequences for our understanding of the very early Universe. In particular, the widely accepted inflationary paradigm~\cite{Guth:1980zm, Linde:1981mu, Albrecht:1982wi,Linde:1983gd} (for a review, see \textsl{e.g.~} Refs.~\cite{Linde:2007fr,Martin:2003bt,Martin:2004um,Martin:2007bw}) is now under close scrutiny. According to this scenario, the Cosmic Microwave Background (CMB) anisotropies and the large scale structures originate from the unavoidable quantum fluctuations of the inflaton and gravitational fields in the very early Universe subsequently amplified during inflation~\cite{Mukhanov:1981xt,Hawking:1982cz,Starobinsky:1982ee, Guth:1982ec,Bardeen:1983qw}. One can show that the corresponding power spectrum of the cosmological fluctuations naturally acquires an almost scale invariant form which is fully consistent with all observations. Another crucial property of the inflationary power spectrum is that the slight deviations from scale invariance are linked to the microphysics of inflation~\cite{Stewart:1993bc,Mukhanov:1990me,Liddle:1994dx}. Therefore, by measuring these deviations, one can probe the shape of the inflaton potential and, therefore, learn about the physical origin of the inflaton field. \par It is often claimed from the above properties that observations give access to a limited part of the potential only, namely the one which is slow-rolled over by the inflaton when scales of astrophysical interest today left the Hubble radius. This observational window represents a range of approximately $7$ e-folds or three decades in wavenumbers. However, inflation does not consist of the slow-roll phase only and the pre and/or reheating period is also of fundamental importance since it allows us to understand how inflation is connected to the hot big-bang phase~\cite{Turner:1983he,Kofman:1997yn,Bassett:2005xm, Mazumdar:2010sa}. 
This physical phenomenon is related to a different part of the inflationary potential, usually the one located close to its true minimum, \textsl{i.e.~} a few decades in e-folds away from the observable window. \par Observation of pre-/reheating effects can be achieved in two ways. First, the power spectrum can evolve on large scales when the inflaton field oscillates around the minimum of its potential. However, this happens only in quite complicated models, typically those containing more than one field~\cite{Finelli:1998bu, Bassett:1998wg, Finelli:2000ya}. In fact, it was recently shown that this type of effect can also happen in single field inflation but on much smaller scales~\cite{Jedamzik:2010dq,Jedamzik:2010hq,Easther:2010mr}. Second, the duration of the pre-/reheating phase can significantly modify the position of the observational window mentioned above. Put differently, at fixed astrophysical scales today, changing the pre-/reheating duration is equivalent to moving the window along the potential, hence probing different values of the power spectrum spectral index, amplitude of the anisotropies and tensor-to-scalar ratio. Obviously, this cannot be done arbitrarily because CMB data impose accurate bounds on their value. Conversely, this opens up the possibility to constrain the pre-/reheating duration and/or its equation of state from CMB data~\cite{Martin:2006rs}. Notice that a direct detection of primordial gravitational waves would also allow us to probe the reheating temperature, as shown in Refs.~\cite{Nakayama:2008wy, Kuroyanagi:2009br}. \par The goal of this article is to address this question for the standard scenarios of inflation. It is traditional to study three categories of models usually considered as representative of the full inflationary space. These models are large field~\cite{Linde:1984st}, small field~\cite{Linde:1981mu,Albrecht:1982wi}, and hybrid inflation~\cite{Linde:1993cn}. 
Hybrid scenarios involve multiple fields and, therefore, the power spectrum can change during the preheating phase. This class of scenarios thus deserves a separate investigation. For this reason, in this article, we limit ourselves to the class of large and small field models. \par In the following, we will use the term ``reheating'' to refer to the pre-/reheating phases of the Universe, defined to have occurred just after the end of inflation and just before the radiation dominated era. So far, the constraints on the reheating energy scale are not numerous. Obviously, it should be less than the energy scale of inflation, which implies that $T_\mathrm{reh}\lesssim 10^{16}$ GeV. In addition, if one assumes that supersymmetry is the correct extension of the standard model of particle physics, then constraints from Big-Bang Nucleosynthesis (BBN) on unstable gravitinos lead to a reheating temperature $T_\mathrm{reh}\lesssim 10^7$ GeV~\cite{Khlopov:1984pf, Kallosh:1999jj, Giudice:1999yt, Lemoine:1999sc, Maroto:1999ch, Giudice:1999am, Buonanno:2000cp, Copeland:2005qe,Jedamzik:2006xz, Kawasaki:2008qe, Bailly:2009pe}. Notice that this constraint can nevertheless be avoided if one considers the scenario of Ref.~\cite{Mardon:2009gw}. Reheating itself should also proceed before BBN, which implies that $T_\mathrm{reh} \gtrsim 10$ MeV. We see that the reheating temperature is poorly constrained, in particular its lower limit. As a matter of fact, the work presented here precisely yields a lower limit on the reheating energy scale from the current seven-year Wilkinson Microwave Anisotropy Probe (WMAP7) data~\cite{Jarosik:2010iu, Komatsu:2010fb, Larson:2010gs}. \par In order to derive constraints on the reheating phase, we make use of Bayesian techniques and utilize a full numerical approach~\cite{Ringeval:2007am}. This has several advantages.
First, it is exact and rests only on the linear theory of cosmological perturbations: the method remains accurate when the slow-roll approximation breaks down, as one expects near the end of inflation. Second, and of particular importance for the present work, it permits a new treatment of reheating. Indeed, instead of viewing the reheating parameters as nuisance parameters, they can easily be included in the Bayesian data analysis process. Third, the evolution of cosmological perturbations in the hot big-bang eras already relies on numerical codes. Treating perturbations during inflation in the same way allows the whole procedure to be automated and easily extended to other scenarios. Fourth, the numerical approach allows us to address the question of the choice of priors in a particularly well-defined way. Indeed, from a physical point of view, our prior knowledge is of the inflationary theory and not of the shape of the primordial power spectra, which is actually a model prediction. Therefore, it is better, and easier, to choose prior probability distributions directly on the model parameters, such as the power index of the large field potentials. This reflects the fact that a model of inflation is not a disembodied mathematical structure that one only needs to ``fit'' but a physical scenario rooted in high energy physics that one needs to understand. \par This paper is organized as follows. In Sec.~\ref{sec:physicalorigin}, we extend the above discussion and explain in detail why the reheating epoch can be constrained with CMB data. In particular, we introduce the so-called reheating parameter which depends on the reheating duration and on the mean equation of state of the fluid dominating the Universe during this epoch. Then, using the slow-roll approximation, we analytically demonstrate that the accuracy of the WMAP7 data is now sufficient to obtain some constraints on the reheating era.
In Sec.~\ref{sec:cmbtoreh}, using a full numerical integration of the tensor and scalar power spectra coupled to Bayesian methods, we derive the constraints that any reheating model has to satisfy. Then, assuming specific values for the mean equation of state, we translate these constraints into new lower limits on the reheating energy density and/or reheating temperature. These results significantly improve the bounds coming from Big-Bang Nucleosynthesis. In Sec.~\ref{sec:conclusion}, we recap our main findings and discuss how our results are modified by the inclusion of other CMB data sets. In Appendix~\ref{appendix:lfreh}, we work out a typical example which illustrates the robustness of our assumptions: a noninstantaneous transition between reheating and the radiation dominated era when one considers the finite decay width of the inflaton field. Finally, as a by-product of our data analysis, Appendix~\ref{appendix:srpost} presents the updated WMAP7 constraints on the spectral index, tensor-to-scalar ratio and first order slow-roll parameters, marginalized over second order effects. \section{Physical origin of the constraint} \label{sec:physicalorigin} Before presenting and discussing the constraints on the reheating temperature, we explain why and how they can be inferred from high accuracy CMB observations. In particular, we use the slow-roll approximation to explicitly illustrate the method. \subsection{Parametrizing the reheating} \label{subsec:parameter} The evolution of scalar (density) perturbations is controlled by the so-called Mukhanov--Sasaki variable $v_{{\boldsymbol k}}$.
If matter is described by a scalar field (as is the case during inflation and pre-/reheating), then its equation of motion is given, in Fourier space, by~\cite{Mukhanov:1990me,Martin:2003bt,Martin:2004um,Martin:2007bw} \begin{equation} \label{eq:eqmotv} v_{{\boldsymbol k}}''+\left[k^2-\frac{\left(a\sqrt{\epsilon_1}\right)''} {a\sqrt{\epsilon_1}}\right]v_{{\boldsymbol k}}=0. \end{equation} Here, a prime denotes a derivative with respect to conformal time. The quantity $k$ is the comoving wave number and $\epsilon _1\equiv -\dot{H}/H^2$ is the first Hubble flow function~\cite{Schwarz:2001vv}, $H=\dot{a}/a$ being the Hubble parameter and $a$ the Friedmann--Lema\^{\i}tre--Robertson--Walker (FLRW) scale factor (a dot means derivative with respect to cosmic time). The quantity $v_{\boldsymbol k}$ is related to the curvature perturbation $\zeta_{\boldsymbol k}$ through the following expression: \begin{equation} \label{eq:zetavsv} \zeta_{\boldsymbol k}=\frac{1}{M_{_{\mathrm Pl}}}\frac{v_{\boldsymbol k}}{a\sqrt{2\epsilon _1}}\,, \end{equation} where $M_{_{\mathrm Pl}}$ stands for the reduced Planck mass. As a consequence, the power spectrum of $\zeta _{\boldsymbol k}$ can be expressed as \begin{equation} {\cal P}_{\zeta}(k)\equiv \frac{k^3}{2\pi ^2} \left\vert \zeta_{\boldsymbol k}\right\vert ^2 =\frac{k^3}{4\pi^2 M_{_{\mathrm Pl}}^2}\left\vert \frac{v_{\boldsymbol k}}{a\sqrt{\epsilon _1}}\right\vert ^2\, . \label{Pzeta} \end{equation} In order to calculate ${\cal P}_{\zeta}(k)$, one needs to integrate Eq.~(\ref{eq:eqmotv}), which requires the knowledge of the initial conditions for the mode function $v_{\boldsymbol k}$. 
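Equation~(\ref{eq:eqmotv}) lends itself to direct numerical integration. As a toy illustration, in the de Sitter limit of constant $\epsilon_1$ the effective frequency reduces to $k^2-2/\eta^2$ and the mode function is known in closed form, $v_{\boldsymbol k}=(2k)^{-1/2}\,{\rm e}^{-ik\eta}\left(1-i/(k\eta)\right)$. The sketch below, a simplified stand-in for the full integration performed in this work, starts from plane-wave initial data deep inside the Hubble radius and recovers this exact solution:

```python
import numpy as np
from scipy.integrate import solve_ivp

k = 1.0  # comoving wavenumber (illustrative)

def rhs(eta, y):
    # y = [Re v, Im v, Re dv/deta, Im dv/deta]
    vr, vi, dvr, dvi = y
    w2 = k**2 - 2.0 / eta**2   # de Sitter limit: (a sqrt(eps1))''/(a sqrt(eps1)) -> 2/eta^2
    return [dvr, dvi, -w2 * vr, -w2 * vi]

def exact(eta):
    # closed-form de Sitter mode function (plane wave deep inside the Hubble radius)
    return np.exp(-1j * k * eta) * (1.0 - 1j / (k * eta)) / np.sqrt(2.0 * k)

eta_i, eta_f = -100.0, -0.1    # integrate from sub- to super-Hubble scales
v0 = exact(eta_i)
dv0 = (np.exp(-1j * k * eta_i)
       * (-1j * k * (1.0 - 1j / (k * eta_i)) + 1j / (k * eta_i**2))
       / np.sqrt(2.0 * k))
sol = solve_ivp(rhs, (eta_i, eta_f),
                [v0.real, v0.imag, dv0.real, dv0.imag],
                method="DOP853", rtol=1e-10, atol=1e-12)
v_num = sol.y[0, -1] + 1j * sol.y[1, -1]
```

The numerical mode agrees with the analytic one to high accuracy, and the same machinery applies once the full $(a\sqrt{\epsilon_1})''/(a\sqrt{\epsilon_1})$ term is supplied by a background solver.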
Since, at the beginning of inflation, all the modes of astrophysical interest today were much smaller than the Hubble radius, the initial conditions are chosen to be the Bunch--Davies vacuum, which amounts to \begin{equation} \lim_{k/\mathcal{H} \rightarrow +\infty}v_{\boldsymbol k}=\frac{1}{\sqrt{2k}} {\rm e}^{-ik\eta }\, , \label{eq:initial} \end{equation} where $\eta $ denotes conformal time and $\mathcal{H}=aH$ is the conformal Hubble parameter. The importance of the curvature perturbation lies in the fact that it is directly related to CMB anisotropies, the two-point correlation function of which can be expressed in terms of the spectrum of $\zeta _{\boldsymbol k}$. Moreover, under very general conditions (including the assumption that inflation proceeds with only one field), $\zeta _{\boldsymbol k}$ is a conserved quantity on large scales and, therefore, can be used to propagate the inflationary spectrum from the end of inflation to the post-inflationary era~\cite{Martin:1997zd}. In other words, the power spectrum is not affected by the post-inflationary evolution, in particular by the pre-/reheating epoch. \par However, this does not mean that the reheating era has no effect on the inflationary predictions. On the contrary, the relation between the physical scales at present time and during inflation depends on the properties of this phase of evolution. As a consequence, in order to calibrate the inflationary spectrum with respect to the physical scales of astrophysical interest today, it is necessary to know how the reheating phase proceeded. Conversely, this also opens the possibility to constrain the physical conditions that prevailed at that time by means of CMB observations. \par In order to put the above considerations on a quantitative footing, let us rewrite Eq.~(\ref{eq:eqmotv}) in terms of the number of e-folds during inflation, $N\equiv \ln\left(a/a_{\rm ini}\right)$, where $a_{\rm ini}$ is the value of the scale factor at the beginning of inflation.
It takes the form \begin{equation} \label{eq:eomNefold} \frac{{\rm d}^2v_{\boldsymbol k}}{{\rm d}N^2}+\frac{1}{\cal H} \frac{{\rm d}{\cal H}}{{\rm d}N}\frac{{\rm d}v_{\boldsymbol k}}{{\rm d}N} +\left[\left(\frac{k}{\cal H}\right)^2 -U_{_\mathrm{S}}(N)\right]v_{\boldsymbol k}=0, \end{equation} where $U_{_\mathrm{S}}(N)$ is an effective potential for the perturbations which depends on the scale factor and its derivatives only. All the terms in this equation but $k/{\cal H}$ are completely specified by the inflationary background evolution. In practice, we are given a physical scale today, say $k/a_{\rm now}$ (for instance $k/a_{\rm now} = 0.05\, \mbox{Mpc}^{-1}$), and we need to express $k/{\cal H}$ in terms of $k/a_{\rm now}$ and quantities defined during inflation. Straightforward considerations lead to \begin{equation} \frac{k}{{\cal H}}=\frac{\Upsilon_{\boldsymbol k}}{H(N)}{\rm e}^{N_{_{\rm T}}-N}, \end{equation} where $N_{_{\rm T}}$ is the total number of e-folds during inflation and $\Upsilon _{\boldsymbol k}$ is defined by \begin{equation} \Upsilon_{\boldsymbol k}\equiv \frac{k}{a_{\rm now}}\left(1 + z_\mathrm{end}\right), \end{equation} with $z_\mathrm{end}$ being the redshift of the end of inflation. As expected, $\Upsilon _{\boldsymbol k}$ depends on the whole post-inflationary history through $z_\mathrm{end}$. During this post-inflationary history, only the reheating phase is poorly known and represents, by far, the main source of uncertainty for the inflationary predictions. For convenience, we rewrite $\Upsilon_{\boldsymbol k}$ as \begin{equation} \label{eq:defRrad} \Upsilon_{\boldsymbol k}=\frac{k}{a_{\rm now}} \left(\frac{\rho_{\rm end}}{\Omega _{\gamma}\rho_{\rm cri}}\right)^{1/4} R_{\rm rad}^{-1}, \end{equation} thus defining the new parameter $R_{\rm rad}$. This parameter plays a crucial role in this article.
In the above equation, $\rho_{\rm end}$ is the energy density at the end of inflation, $\rho_{\rm cri}$ is the present day critical energy density and $\Omega _{\gamma}\simeq 2.471\times 10^{-5}h^{-2}$ is the density parameter of radiation today. As a result $\Omega _{\gamma }\rho_{\rm cri}\equiv \rho_\gamma$ is the present day radiation energy density and does not depend on $h^2$. The above equations make clear that the parameter $R_\urad$ must be specified if one wants to compare an inflationary model to observations. \par In fact, the quantity $R_\urad$ has a simple physical interpretation. Let us assume that the reheating phase is dominated by a conserved effective fluid with energy density $\rho$ and pressure $P$. The fact that we assume the effective fluid to be conserved is not a limitation. For instance, in a simple model where the inflaton scalar field is coupled to radiation (see Appendix~\ref{appendix:lfreh}), the effective fluid is just defined by $\rho=\rho _{\phi}+\rho _{\gamma }$ and $P = P_{\phi}+\rho_{\gamma }/3$. The scalar field and the radiation are not separately conserved but the effective fluid is. Then, it is straightforward to show that \begin{equation} \label{eq:rhorehend} \rho(N)=\rho_\uend \exp\left\{-3\int _{N_{_\uT}}^{N}\left[1+w_{\mathrm{reh}}(n)\right]{\dd} n\right\}, \end{equation} where $w_{\mathrm{reh}}\equiv P/\rho$ is the equation of state function during reheating. Using this expression, one obtains \begin{equation} \label{eq:Rrad} \ln R_\urad = \frac{\Delta N}{4}\left(-1+3\bar{w}_{\mathrm{reh}}\right), \end{equation} where \begin{equation} \Delta N\equiv N_{\mathrm{reh}} - N_{_\uT}, \end{equation} is the total number of e-folds during reheating, $N_{\mathrm{reh}}$ being the number of e-folds at which reheating is completed and the radiation dominated era begins. 
The quantity $\bar{w}_{\mathrm{reh}}$ stands for the mean equation of state parameter \begin{equation} \bar{w}_{\mathrm{reh}}\equiv \frac{1}{\Delta N}\int _{N_{_{\rm T}}}^{N_{\mathrm{reh}}} w_{\mathrm{reh}}(n){\rm d}n. \end{equation} Therefore, the parameter $R_{\rm rad}$ only depends on what happens during reheating. To put it differently, it singles out, in the expression of $\Upsilon_{\boldsymbol k}$, the contribution coming from reheating. Equation~(\ref{eq:Rrad}) also allows us to understand why $R_{\rm rad}$ carries the subscript ``rad''. Indeed, if the effective fluid is equivalent to radiation, then $\bar{w}_\mathrm{reh}=1/3$ and $\ln R_\urad=0$. The physical interpretation is very clear: in this case, the reheating stage cannot be distinguished from the subsequent radiation dominated era and, therefore, cannot affect the inflationary predictions; as a consequence, $R_\urad=1$ in Eq.~(\ref{eq:defRrad}). \par In fact, one can even go further and express $R_\urad$ in an even more compact form. Using Eq.~(\ref{eq:rhorehend}), one can write $\rho_\ureh=\rho_\uend \exp\left[-3\Delta N(1+\bar{w}_{\mathrm{reh}})\right]$ from which, together with Eq.~(\ref{eq:Rrad}), one obtains \begin{equation} \label{eq:Rradw} \ln R_{\rm rad}=\frac{1-3\bar{w}_{\mathrm{reh}}}{12(1+\bar{w}_{\mathrm{reh}})}\ln \left( \frac{\rho_\ureh}{\rho_\uend}\right), \end{equation} where $\rho_\ureh$ has to be understood as the energy density at the end of the reheating era, \textsl{i.e.~} $\rho(N_\ureh)$. \par Let us summarize our discussion. In order to calculate the power spectrum of the inflationary cosmological perturbations, one needs to solve Eq.~(\ref{eq:eomNefold}).
In this formula, all the terms are accurately known during inflation except \begin{equation} \frac{k}{\cal H}=\frac{k}{a_{\rm now}} \left(\frac{\rho_\uend}{\rho_\gamma}\right)^{1/4}\frac{1}{H(N)R_\urad} {\rm e}^{N_{_{\rm T}}-N}, \end{equation} and the theoretical uncertainty in this expression solely comes from the parameter $R_\urad$ which depends on reheating only (more precisely, on the energy density at the end of reheating, $\rho_\ureh$, and the mean equation of state $\bar{w}_{\mathrm{reh}}$). \subsection{Why CMB observations constrain reheating} \label{subsec:why} Having discussed the physical interpretation of $R_{\rm rad}$, we now explain how the CMB observations can constrain its value. For this purpose, we reexpress $R_{\rm rad}$ in terms of quantities defined at Hubble radius crossing. One obtains \begin{eqnarray} \label{eq:Rradsr} \ln R_{\rm rad} &=& N_{_{\rm T}}-N_*+N_0 -\frac14\ln\left(\frac{H_*^2}{M_{_{\mathrm Pl}}^2\epsilon_{1*}}\right) \nonumber \\ &+& \frac{1}{4}\ln\left(\frac{3}{\epsilon_{1*}} \frac{V_{\rm end}}{V_*}\frac{3-\epsilon_{1*}} {3-\epsilon_{1\,\rm end}}\right), \end{eqnarray} where we have defined \begin{equation} N_0 \equiv \ln \left( \dfrac{k/a_{\rm now}}{\rho_\gamma^{1/4}} \right). \end{equation} In this formula, $N_*$ is the e-fold number at which the scale $k/a_{\rm now}$ exited the Hubble radius during inflation (all the quantities with a subscript ``*'' are evaluated at that time) and $V(\phi)$ is the inflaton potential. Despite the appearance of the first Hubble flow function, this equation is exact (moreover, we also have $\epsilon_{1 \rm end}=1$). At leading order, one has \begin{equation} \dfrac{H_*^2}{M_{_{\mathrm Pl}}^2\epsilon_{1*}} = 8\pi^2 P_*, \end{equation} where the amplitude of the scalar power spectrum at the pivot scale, $P_* = {\cal P}_{\zeta}(k_*)$, is directly related to the Cosmic Background Explorer (COBE) normalization. \par The above equation~(\ref{eq:Rradsr}) can be used in two different manners.
The first way is to assume something about $R_\urad$ and to derive the corresponding range of variation of the inflationary slow-roll predictions $N_*$ and $\epsilon_i(N_*)$. In other words, this determines how the inflationary predictions depend on the details of the reheating era. This approach is the one usually considered in the literature to compare inflationary predictions to the current constraints on the slow-roll parameters $\epsilon_{i*}$ (or spectral index and tensor-to-scalar ratio). Unfortunately, the assumptions on $R_\urad$ are rarely explicit and the comparison is only made for reasonably assumed values of $N_*$: typically $30$ and $60$ e-folds, as one may derive under generic assumptions~\cite{Liddle:2003as}. However, as Eq.~(\ref{eq:Rradsr}) explicitly shows, once $V(\phi)$ is chosen, and the tilt and amplitude of the scalar perturbations measured, $N_*$ is directly related to $R_\urad$, which itself, as already noticed, depends on the energy density $\rho_\ureh$ at which reheating ends and on $\bar{w}_{\mathrm{reh}}$. As a result, the range of variation of $N_*$ can only be known once a reheating model is assumed. Without such an assumption, from one model to another, an assumed value of $N_*$ may inconsistently imply that reheating occurs after nucleosynthesis, or even at energy densities higher than $\rho_\uend$. Such a model would therefore appear to be compatible with the power spectra favored by the CMB data while being totally inconsistent with standard cosmology. \par Let us now see how this works in practice. In order to be consistent with the standard cosmological model, $\ln R_\urad $ cannot take arbitrary values. One should have $\bar{w}_{\mathrm{reh}}<1$ to respect the energy positivity conditions of General Relativity, and $\bar{w}_\mathrm{reh} > -1/3$ by the very definition of reheating, which is not inflation. Notice that we impose conditions on the mean value of the equation of state only.
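As a simple numerical cross-check of this bookkeeping, the two expressions (\ref{eq:Rrad}) and (\ref{eq:Rradw}) for $\ln R_{\rm rad}$ must agree once $\rho_{\mathrm{reh}}/\rho_{\rm end}$ is obtained from Eq.~(\ref{eq:rhorehend}); a short sketch for a constant equation of state (all numbers are illustrative):

```python
import numpy as np

def ln_Rrad_from_duration(delta_N, wbar):
    # ln R_rad = (Delta N / 4)(-1 + 3 wbar)
    return 0.25 * delta_N * (3.0 * wbar - 1.0)

def ln_Rrad_from_density(rho_ratio, wbar):
    # ln R_rad = (1 - 3 wbar) / [12 (1 + wbar)] * ln(rho_reh / rho_end)
    return (1.0 - 3.0 * wbar) / (12.0 * (1.0 + wbar)) * np.log(rho_ratio)

for wbar in (-0.2, 0.0, 1.0 / 3.0, 0.5):
    for delta_N in (5.0, 20.0, 40.0):
        # constant equation of state: rho_reh = rho_end * exp[-3 dN (1 + wbar)]
        rho_ratio = np.exp(-3.0 * delta_N * (1.0 + wbar))
        assert np.isclose(ln_Rrad_from_duration(delta_N, wbar),
                          ln_Rrad_from_density(rho_ratio, wbar))
```

In particular, a radiation-like reheating ($\bar{w}_{\mathrm{reh}}=1/3$) gives $\ln R_{\rm rad}=0$ for any duration, consistent with the discussion above.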
In addition, reheating should occur after inflation and before BBN, \textsl{i.e.~} $\rho _{\rm nuc}<\rho _{\mathrm{reh}} <\rho_{\rm end}$, with \begin{equation} \rho_\unuc \equiv \left(10 \mbox{MeV} \right)^4. \end{equation} This allows us to explicitly use Eq.~(\ref{eq:Rradsr}). Combining it with Eq.~(\ref{eq:Rradw}), we can determine the range of variation of $\Delta N_*\equiv N_{_{\rm T}}-N_*\in [\Delta N_*^{\rm nuc},\Delta N_*^{\rm end}]$. Straightforward manipulations lead to \begin{eqnarray} \label{eq:Nnuc} \Delta N_*^{\rm nuc}&=&-N_0+ \ln \left(\frac{H_*}{M_{_{\mathrm Pl}}}\right)- \frac{1}{3(1+\bar{w}_{\mathrm{reh}})}\ln \frac{\rho_{\rm end}}{M_{_{\mathrm Pl}}^4} \nonumber \\ && +\frac{1-3 \bar{w}_\mathrm{reh}}{12(1+\bar{w}_{\mathrm{reh}})} \ln \frac{\rho_{\rm nuc}}{M_{_{\mathrm Pl}}^4}\, , \end{eqnarray} while if one chooses $\rho_{\mathrm{reh}}=\rho_{\rm end}$, one obtains \begin{eqnarray} \label{eq:Nend} \Delta N_*^{\rm end}&=&-N_0+ \ln \left(\frac{H_*}{M_{_{\mathrm Pl}}}\right) -\frac{1}{4}\ln \frac{\rho_{\rm end}}{M_{_{\mathrm Pl}}^4}.\nonumber \\ \end{eqnarray} Interestingly enough, the last equation no longer depends on $\bar{w}_{\mathrm{reh}}$. This is of course because requiring $\rho_\ureh=\rho_\uend$ means that one reheats the Universe immediately after inflation. It is important to notice that these equations are implicit in $\Delta N_*^{\mathrm{nuc}}$ and $\Delta N_*^{\mathrm{end}}$ because $H_*$ and $\rho_\uend$ are also functions of $\Delta N_*$. The corresponding range of variation of the inflationary predictions is determined by calculating $\Delta \epsilon_{i*}=\epsilon_i(\Delta N_*)$ with $\Delta N_*$ given above. To proceed further, one needs to specify the model of inflation. In the next section, we consider the prototypical scenario of chaotic inflation as well as the small field models. \par Before dealing with these explicit examples, let us briefly anticipate and discuss the second way of using Eq.~(\ref{eq:Rradsr}).
It consists of considering $R_\urad$ as an observable model parameter and including it in the data analysis, as it should be from a Bayesian point of view. If we are given a specific potential, then $V_\mathrm{end}$ is explicitly known. CMB data put a limit on $H_*^2/\epsilon_{1*}$ through the amplitude of the anisotropies, as well as on $\epsilon_{1*}$ from the tensor-to-scalar ratio. As a result, one expects CMB data to also give some information on $R_\urad$. This is the subject of Sec.~\ref{sec:cmbtoreh}, in which we perform a Bayesian data analysis of the WMAP7 data for both the large and small field models by including the reheating. For the first time, we find that $R_\urad$ is not only a nuisance parameter for inflation but ends up being constrained by the WMAP7 data. We then discuss the physical implications of these bounds and show that CMB data give us a lower bound on the energy scale at which the reheating ended. \subsection{Large field models} \label{subsec:largefield} We now consider the archetypal model of inflation, namely, large field inflation. This working example is important because it allows us to show explicitly which type of constraints one should expect. Large field models are characterized by the potential \begin{equation} \label{eq:lfpot} V(\phi)=M^4\left(\frac{\phi}{M_{_{\mathrm Pl}}}\right)^p, \end{equation} where $M$ is an energy scale which fixes the amplitude of the CMB anisotropies and $p$ is a free index. In this case, the slow-roll trajectory is explicitly known and one can calculate $\phi_*$, the field vacuum expectation value (VEV) at Hubble radius crossing, from $\phi_{\mathrm{end}}/M_{_{\mathrm Pl}}=p/\sqrt{2}$, the field VEV at which inflation stops. One gets~\cite{Martin:2006rs} \begin{equation} \phi_*^2 = 2 p M_{_{\mathrm Pl}}^2\Delta N_* + \phi_\mathrm{end}^2.
\end{equation} \par The reheating phase in large field models proceeds by parametric oscillations around the minimum of the potential and it is well known that the corresponding equation of state parameter is given by~\cite{Turner:1983he, Kofman:1997yn, Liddle:2003as, Martin:2003bt} \begin{equation} \label{eq:wrehlf} \bar{w}_{\mathrm{reh}}=\dfrac{p-2}{p+2}\,. \end{equation} In particular, for $p=2$, one obtains $\bar{w}_{\mathrm{reh}}=0$, that is to say, the oscillatory phase is equivalent to a matter dominated era (the quartic case corresponding to a radiation dominated era, and so on). Although this formula is derived without taking into account the coupling between the inflaton field and radiation, we show in Appendix~\ref{appendix:lfreh} that it is a very good approximation. \par Knowing explicitly the equation of state during reheating, we are now in a position where the implicit equations~(\ref{eq:Nnuc}) and~(\ref{eq:Nend}) can be solved exactly. After some algebra, one obtains \begin{eqnarray} \label{eq:Dnnuclf} \Delta N_*^{\rm nuc}&=&-\frac{p}{4}-\frac{p^2-2p+4}{12p}\Lambert{0} \biggl\{-\frac{12p}{p^2-2p+4} \nonumber \\ & & \times \exp\left[-\frac{12p\left({\cal N}^{\rm nuc}+p/4\right)} {p^2-2p+4}\right]\biggr\}, \end{eqnarray} and \begin{eqnarray} \label{eq:Dnendlf} \Delta N_*^{\rm end}&=&-\frac{p}{4}-\frac{p-2}{8}\Lambert{0} \Biggl\{-\frac{8}{p-2} \nonumber \\ & & \times \exp\left[-\frac{8\left({\cal N}^{\rm end}+p/4\right)} {p-2}\right]\Biggr\}, \end{eqnarray} where $\Lambert{0}$ denotes the Lambert function, the real branch being selected by the requirement that $\Delta N_*$ comes out positive. Both quantities $\mathcal{N}^\unuc$ and $\mathcal{N}^\uend$ depend only on the model parameter $p$ and the amplitude of the observed anisotropies.
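As a numerical sketch of Eq.~(\ref{eq:Dnendlf}), one can evaluate $\Delta N_*^{\rm end}$ for $p=4$ and propagate it to the slow-roll predictions through the trajectory given above. The numerical value of $\mathcal{N}^{\rm end}$ used below is a fiducial assumption (it follows from $N_0$ and the anisotropy amplitude); note also that, for $p>2$, the solution with $\Delta N_*>0$ corresponds to the $k=-1$ branch of SciPy's \texttt{lambertw}, the principal branch returning the spurious root $\Delta N_*\simeq -p/4$.

```python
import math
from scipy.special import lambertw

p = 4.0
calN_end = 57.4   # fiducial value of N^end (an assumption, set by N_0 and Q/T)

# Eq. (Dnendlf): Delta N_*^end in terms of the Lambert function
arg = -8.0 / (p - 2.0) * math.exp(-8.0 * (calN_end + p / 4.0) / (p - 2.0))
dN_end = -p / 4.0 - (p - 2.0) / 8.0 * lambertw(arg, -1).real
dN_spurious = -p / 4.0 - (p - 2.0) / 8.0 * lambertw(arg, 0).real  # ~ -p/4 < 0

# First-order slow-roll predictions for V ~ phi^p, using the trajectory
# phi_*^2 = 2 p Delta N_* + p^2/2 in reduced Planck units
eps1 = p / (4.0 * dN_end + p)
eps2 = 4.0 / (4.0 * dN_end + p)
ns = 1.0 - 2.0 * eps1 - eps2      # spectral index at first order
r = 16.0 * eps1                   # tensor-to-scalar ratio
```

One recovers $\Delta N_*^{\rm end}\simeq 58$ together with the familiar quartic predictions $n_{_{\mathrm S}}\simeq 0.95$ and $r\simeq 0.27$, illustrating why $p=4$ is disfavored by the data.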
Explicitly, $\mathcal{N}^\unuc$ reads \begin{equation} \label{eq:calnuc} \begin{aligned} \mathcal{N}^\unuc &= -N_0 +\dfrac{2}{3p}(p-1)\ln\left(2\pi\sqrt{120}\frac{Q_{\mathrm{rms-PS}}}{T}\right) \\ & -\frac{p+2}{6p}\ln\left[9 \frac{2^{(-p^2+p-6)/(p+2)}} {p^{(-p^2+2p-4)/(2p+4)}}\right] -\frac{p-4}{3p}\ln \frac{\rho_\unuc^{1/4}}{M_{_{\mathrm Pl}}}\,, \end{aligned} \end{equation} where the amplitude of the CMB anisotropies has been expressed in terms of the quadrupole moment \begin{equation} \dfrac{Q_{\mathrm{rms-PS}}}{T} = \sqrt{\dfrac{5 C_2}{4\pi}} \simeq 6\times 10^{-6}. \end{equation} In Eq.~(\ref{eq:calnuc}), the last term vanishes for $p=4$ since, as already noticed above, the phase of oscillations is equivalent to a radiation dominated era which cannot be distinguished from the subsequent hot big-bang epoch. On the other hand, the constant $\mathcal{N}^\uend$ can be expressed as \begin{equation} \label{eq:calend} \begin{aligned} \mathcal{N}^\uend = -N_0 &+\frac12\ln\left(2\pi\sqrt{120}\frac{Q_{\mathrm{rms-PS}}}{T}\right) \\ &- \frac{1}{4}\ln \left(9\frac{2^{1-p}}{p^{1-p/2}}\right). \end{aligned} \end{equation} \begin{figure*} \begin{center} \includegraphics[width=0.95\textwidth,clip=true]{lfnsR_w7} \caption{Reheating consistent slow-roll predictions for the large field models in the plane $(n_{_{\mathrm S}},r)$. The two contours are the one- and two-sigma WMAP confidence intervals (marginalized over second order slow-roll). The two lines represent the locus of the $p \gtrsim 1$ and $p=2$ models while the blue point annotated ``$16$'' corresponds to $p=4$. The annotations trace the energy scale at which the large field reheating ends and correspond to $\log(g_*^{1/4}T_{\mathrm{reh}}/\mbox{GeV})$.
Clearly, these values are limited from below to stay inside the two-sigma contours.} \label{fig:srlfnsR} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=0.95\textwidth,clip=true]{lf_w7} \caption{Reheating consistent slow-roll predictions for the large field models in the plane $(\epsilon_1,\epsilon_2)$. The two blue dot-dashed contours are the one- and two-sigma WMAP3 confidence intervals (marginalized over second order slow-roll) while the pink solid contours are the one- and two-sigma WMAP7 ones. As in Fig.~\ref{fig:srlfnsR}, the annotations trace the energy scale at which the large field reheating ends and correspond to $\log(g_*^{1/4}T_{\mathrm{reh}}/\mbox{GeV})$. The solid line represents the model $p \gtrsim 1$. This confirms that there now exists a lower bound on the value of $g_*^{1/4}T_{\mathrm{reh}}$ (see Sec.~\ref{sec:cmbtoreh}).} \label{fig:srlf} \end{center} \end{figure*} {}From Eqs.~(\ref{eq:Dnnuclf}) and (\ref{eq:Dnendlf}), we immediately deduce that, for the large field models, the range of allowed values of $\Delta N_*$ strongly depends on $p$. In Figs.~\ref{fig:srlfnsR} and~\ref{fig:srlf}, we have plotted the large field predictions, obtained from Eqs.~(\ref{eq:Dnnuclf}) and (\ref{eq:Dnendlf}), for the slow-roll parameters $\epsilon_{1}(\Delta N_*)$ and $\epsilon_{2}(\Delta N_*)$ compared to the one- and two-sigma WMAP7 confidence intervals (see Appendix~\ref{appendix:srpost}). The first figure represents the slow-roll predictions in the plane $(n_{_{\mathrm S}},r)$, while the second one corresponds to the plane $(\epsilon_1,\epsilon_2)$. The annotated values trace the quantity $\log(g_*^{1/4}T_{\mathrm{reh}}/\mbox{GeV})$, where the reheating temperature is defined by the relation \begin{equation} \label{eq:Trehdef} g_*^{1/4}T_{\mathrm{reh}}\equiv \left(\dfrac{30}{\pi^2} \rho_\ureh \right)^{1/4}, \end{equation} and $g_*$ is the number of relativistic degrees of freedom at that time. 
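Equation~(\ref{eq:Trehdef}) makes the figure annotations easy to reproduce. As a minimal sketch, taking for instance the bound $\rho_\ureh^{1/4}\simeq 17.3\,\mbox{TeV}$ derived in Sec.~\ref{sec:cmbtoreh}:

```python
import math

# rho_reh^(1/4) in GeV; the example value is the bound derived in the text
rho_reh_14 = 17.3e3

# Eq. (Trehdef): g_*^(1/4) T_reh = (30/pi^2)^(1/4) rho_reh^(1/4), in GeV
g14_Treh = (30.0 / math.pi**2)**0.25 * rho_reh_14
log10_Treh = math.log10(g14_Treh)   # the quantity annotated in the figures
```

This gives $g_*^{1/4}T_{\mathrm{reh}}\simeq 23\,\mbox{TeV}$, i.e. $\log(g_*^{1/4}T_{\mathrm{reh}}/\mbox{GeV})\simeq 4.4$ in the units used for the annotations.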
Large values of $p$ cannot explain the current measurements of $n_{_{\mathrm S}}$ and $r$, while low values can. Moreover, at fixed $p$, the annotated points drift out of the two-sigma contours when the reheating energy scale becomes too low. Therefore, it is clear that there now exists a lower bound on the reheating temperature. In Sec.~\ref{sec:cmbtoreh}, we derive the Bayesian two-sigma limits on $\rho_\ureh$ (or $g_*^{1/4}T_{\mathrm{reh}}$) by including the reheating parameter into the data analysis process. These plots make evident that the reheating in the large field models is already observable with the current CMB data and that, more than being a nuisance parameter, it is actually constrained. \par Let us also remark that the case $p=4$ is particularly interesting; see the blue point annotated ``$16$'' in Figs.~\ref{fig:srlfnsR} and~\ref{fig:srlf}. Indeed, the value $p=4$ is the extreme case in which $\Delta N_*$ is actually fixed to \begin{equation} \Delta N_*^{p=4} = -N_0 + \dfrac{1}{2} \ln \left( 2\pi\sqrt{120} \frac{Q_{\mathrm{rms-PS}}}{T}\right) \simeq 58.5\,. \end{equation} This is why this model is represented by a single point in Figs.~\ref{fig:srlfnsR} and~\ref{fig:srlf}. Making any other choice is equivalent to assuming a more complicated reheating model which should at least be specified. For instance, Ref.~\cite{Komatsu:2010fb} (see Fig.~19) uses two values, $50$ and $60$ e-folds, instead of one. From the above considerations, it is clear that $50$ is much too small. But, of course, one can always assume that the shape of the potential in the slow-roll regime is not the same as in the reheating regime (actually, this has to be the case for small field models, see below). In this case $V(\phi)\propto \phi^4$ is not relevant during the oscillations of the field and $\bar{w}_{\mathrm{reh}}\neq 1/3$. However, as discussed in the next section, even in this case, the reheating epoch is still constrained. The same range of variation for $\Delta N_*$ has also been used in Ref.~\cite{Finelli:2009bs} (see Fig.~2) for the $\phi^2$ model.
Compared to our results, $50$ is too high as a lower limit and excludes models which are in fact still allowed, while $60$ predicts a reheating energy scale higher than the energy scale at the end of inflation: $\rho_\mathrm{reh}> \rho_\mathrm{end}$. \subsection{Small field models} \label{subsec:smallfield} One of the reasons leading to such a strong reheating influence on the large field model predictions comes from Eq.~(\ref{eq:wrehlf}). Once the potential is chosen, the spectral index and tensor-to-scalar ratio are intimately linked to the way the reheating proceeds. One may therefore wonder how the reheating can influence the model predictions in a case where it is unrelated to the shape of the primordial power spectra. As a motivated example, we discuss in this section the case of the small field models ending with a reheating characterized by a mean equation of state $\bar{w}_\mathrm{reh}$. The small field potential reads \begin{equation} \label{eq:sfpot} V(\phi) = M^4 \left[1- \left( \dfrac{\phi}{\mu} \right)^p \right], \end{equation} where $\mu$ represents a VEV for the field $\phi$, $p$ a power index, and $M$ fixes the amplitude of the observed anisotropies. Inflation proceeds from small to large values of the field. For convenience, we denote by $\chi$ the field value in units of $\mu$, \textsl{i.e.~} $\chi \equiv \phi/\mu $. The slow-roll trajectory in terms of $\chi$ reads~\cite{Martin:2006rs} \begin{equation} \label{eq:sftraj} \Delta N_* = \dfrac{\mu^2}{2p M_{_{\mathrm Pl}}^2} \left(\chi_*^2 + \dfrac{2}{p-2} \chi_*^{2-p} - \chi_\mathrm{end}^2 + \dfrac{2}{2-p} \chi_\mathrm{end}^{2-p} \right), \end{equation} where, again, $\chi_\mathrm{end}$ is defined by $\epsilon_1(\chi_\mathrm{end}) = 1$, \textsl{i.e.~} \begin{equation} \label{eq:chiend} \chi_\mathrm{end}^{p-1} = \dfrac{\sqrt{2}}{p} \dfrac{\mu}{M_{_{\mathrm Pl}}} \left(1 - \chi_\mathrm{end}^p \right).
\end{equation} Neither of these equations has an explicit solution (unless $p=2$), but they can be solved numerically for a given set of model parameters $\mu$ and $p$. For this reason, instead of deriving the reheating allowed values for $\Delta N_*$, it is more convenient to derive the reheating allowed values for $\chi_*$. In fact, $\chi_*$ should lie between $\chi_*^\mathrm{nuc}$ and $\chi_*^\mathrm{end}$, the field values such that reheating ends, respectively, at BBN and just after inflation. After some algebra, one finds $\chi_*^\mathrm{nuc}$ to be the solution of \begin{equation} \label{eq:chistarnuc} \begin{aligned} \dfrac{\mu^2}{2p M_{_{\mathrm Pl}}^2} & \left(\chi_*^2 + \dfrac{2}{p-2} \chi_*^{2-p} \right) + \dfrac{3 \bar{w}_\mathrm{reh}}{3 + 3 \bar{w}_\mathrm{reh}} \ln\left(1 - \chi_*^p \right) \\ & - \dfrac{3 \bar{w}_\mathrm{reh}+1}{3 + 3 \bar{w}_\mathrm{reh}} \ln\left(\chi_*^{p-1} \right) \\ & = \dfrac{\mu^2}{2p M_{_{\mathrm Pl}}^2}\left(\chi_\mathrm{end}^2 + \dfrac{2}{p-2} \chi_\mathrm{end}^{2-p} \right) + \mathcal{F}^\unuc, \end{aligned} \end{equation} with \begin{equation} \begin{aligned} & \mathcal{F}^\unuc = -N_0 + \dfrac{1+3\bar{w}_\mathrm{reh}}{3+3\bar{w}_\mathrm{reh}}\ln\left(2\pi \sqrt{120} \dfrac{Q_{\mathrm{rms-PS}}}{T} \right) \\ & - \dfrac{1}{3+3\bar{w}_\mathrm{reh}} \ln\left[9 \left(\dfrac{\mu}{M_{_{\mathrm Pl}} p} \right)^{3\bar{w}_\mathrm{reh}+1} 2^{(3\bar{w}_\mathrm{reh}-1)/2}\right] \\ & - \dfrac{1}{3 +3 \bar{w}_\mathrm{reh}} \ln \left(1-\chi_\mathrm{end}^p \right) + \dfrac{1-3\bar{w}_\mathrm{reh}}{3+3\bar{w}_\mathrm{reh}} \ln \left(\dfrac{\rho_\unuc^{1/4}}{M_{_{\mathrm Pl}}} \right).
\end{aligned} \end{equation} Similarly, solving for $\rho_\ureh=\rho_\uend$ gives $\chi_*^\mathrm{end}$ as the solution of \begin{equation} \label{eq:chistarend} \begin{aligned} & \dfrac{\mu^2}{2p M_{_{\mathrm Pl}}^2} \left(\chi_*^2 + \dfrac{2}{p-2} \chi_*^{2-p} \right) + \dfrac{1}{4} \ln(1-\chi_*^p) - \dfrac{1}{2} \ln( \chi_*^{p-1}) \\ & = \dfrac{\mu^2}{2p M_{_{\mathrm Pl}}^2}\left(\chi_\mathrm{end}^2 + \dfrac{2}{p-2} \chi_\mathrm{end}^{2-p} \right) + \mathcal{F}^\uend, \end{aligned} \end{equation} where \begin{equation} \begin{aligned} \mathcal{F}^\uend = -N_0 & + \dfrac{1}{2}\ln\left(2\pi \sqrt{120} \dfrac{Q_{\mathrm{rms-PS}}}{T} \right) \\ & - \dfrac{1}{4} \ln\left[9 \left(\dfrac{\mu}{M_{_{\mathrm Pl}} p} \right)^{2} \left(1-\chi_\mathrm{end}^p \right) \right]. \end{aligned} \end{equation} As a result, for given values of $\mu$, $p$ and $\bar{w}_\mathrm{reh}$, one has first to solve Eq.~(\ref{eq:chiend}) to get $\chi_\mathrm{end}$, then Eqs.~(\ref{eq:chistarnuc}) and (\ref{eq:chistarend}) to obtain $\chi_*^\mathrm{nuc}$ and $\chi_*^\mathrm{end}$, from which $\Delta N_*^\mathrm{nuc}$ and $\Delta N_*^\mathrm{end}$ are deduced by using Eq.~(\ref{eq:sftraj}). From the value of $\chi_*$, one can also directly evaluate the two slow-roll parameters $\epsilon_{1*}$ and $\epsilon_{2*}$. Let us notice that some of the expressions above can be ill-defined for $p=2$. In this case, Eqs.~(\ref{eq:chistarnuc}) and (\ref{eq:chistarend}) should be rederived from the start, and one can show that this always leads to well-defined expressions. The rest is the same as for the large field models (see Sec.~\ref{subsec:largefield}), $\rho_\ureh$ being in one-to-one correspondence with the value of $\Delta N_*$ through Eqs.~(\ref{eq:Rrad}) and (\ref{eq:Rradw}) once a value of $\bar{w}_{\mathrm{reh}}$ has been chosen.
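Numerically, the chain of resolutions described above amounts to a few root findings. The sketch below solves Eq.~(\ref{eq:chiend}) for $\chi_\mathrm{end}$ and, rather than solving Eqs.~(\ref{eq:chistarnuc}) and (\ref{eq:chistarend}), simply inverts the trajectory~(\ref{eq:sftraj}) at a fixed $\Delta N_*$ for illustration; the values $p=3$, $\mu=M_{_{\mathrm Pl}}$ and $\Delta N_*=55$ are illustrative assumptions.

```python
import math
from scipy.optimize import brentq

p, mu = 3.0, 1.0          # mu in units of M_Pl (illustrative values)

# Eq. (chiend): chi_end^(p-1) = (sqrt(2)/p) (mu/M_Pl) (1 - chi_end^p)
f_end = lambda c: c**(p - 1.0) - math.sqrt(2.0) / p * mu * (1.0 - c**p)
chi_end = brentq(f_end, 1e-6, 1.0 - 1e-9)

# Eq. (sftraj): Delta N_* as a function of chi_*
def delta_N(chi):
    return mu**2 / (2.0 * p) * (chi**2 + 2.0 / (p - 2.0) * chi**(2.0 - p)
                                - chi_end**2 + 2.0 / (2.0 - p) * chi_end**(2.0 - p))

# Invert the trajectory for Delta N_* = 55 e-folds
chi_star = brentq(lambda c: delta_N(c) - 55.0, 1e-8, chi_end)
```

For these values one finds $\chi_\mathrm{end}\simeq 0.61$ and $\chi_*\simeq 6\times 10^{-3}$; the slow-roll parameters then follow by evaluating $\epsilon_i(\chi_*)$.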
It is worth emphasizing again that, in order to derive these results, no assumption has been made about the reheating epoch, which is entirely characterized by $\rho_\ureh$ and $\bar{w}_{\mathrm{reh}}$. \begin{figure*} \begin{center} \includegraphics[width=0.45\textwidth,clip=true]{sf_w7_p3_wm02} \includegraphics[width=0.45\textwidth,clip=true]{sfnsR_w7_p3_wm02} \includegraphics[width=0.45\textwidth,clip=true]{sf_w7_p3_w0} \includegraphics[width=0.45\textwidth,clip=true]{sfnsR_w7_p3_w0} \includegraphics[width=0.45\textwidth,clip=true]{sf_w7_p3_wp08} \includegraphics[width=0.45\textwidth,clip=true]{sfnsR_w7_p3_wp08} \caption{Reheating consistent slow-roll predictions for the small field models with an assumed generic value $p=3$ and for $\bar{w}_\mathrm{reh}=-0.2$ (top), $\bar{w}_\mathrm{reh}=0$ (middle) and $\bar{w}_\mathrm{reh}=0.8$ (bottom). The left panels show the predictions in the plane $(\epsilon_1,\epsilon_2)$ while the right panels display the corresponding ones in the plane $(n_{_{\mathrm S}},r)$. The reheating has a strong influence for low values of both $\bar{w}_\mathrm{reh}$ and $\mu/M_{_{\mathrm Pl}}$. As in Figs.~\ref{fig:srlfnsR} and~\ref{fig:srlf}, the annotations give the values of $\log(g_*^{1/4}T_{\mathrm{reh}}/\mbox{GeV})$. It is also interesting to notice that the ordering of successive $T_\mathrm{reh}$ values is reversed for $\bar{w}_\mathrm{reh}>1/3$: for $\bar{w}_\mathrm{reh}=0.8$, large reheating temperatures correspond to smaller spectral indices, while for $\bar{w}_\mathrm{reh}=0$ or $\bar{w}_\mathrm{reh}=-0.2$ (\textsl{i.e.~} $\bar{w}_\mathrm{reh}<1/3$), they correspond to larger $n_{_{\mathrm S}}$.} \label{fig:srsf} \end{center} \end{figure*} In Fig.~\ref{fig:srsf}, we have represented the slow-roll predictions for an assumed generic value of $p=3$ and various values of $\bar{w}_\mathrm{reh}$ ranging from $-0.2$ to $0.8$.
The annotations are the values of $\log(g_*^{1/4}T_{\mathrm{reh}}/\mbox{GeV})$ while the color scale traces the values of $\mu/M_{_{\mathrm Pl}}$. For small field models, the reheating energy scale is more strongly constrained when $\bar{w}_\mathrm{reh}$ and $\mu$ are small. In fact, these plots show that $\bar{w}_\mathrm{reh}$ and $\mu$ are degenerate: a low value of $\bar{w}_\mathrm{reh}$ can be rendered compatible with the data provided $\mu$ is super-Planckian. Conversely, small values of $\mu$ can be made compatible with the data for a high energy scale reheating if $\bar{w}_\mathrm{reh}<1/3$, or a low energy scale reheating for $\bar{w}_\mathrm{reh}>1/3$. In the next section, we perform a full analysis of the small field models, reheating included, in view of the WMAP7 data to quantify the above claims in terms of posterior probability distributions. \section{Inferring reheating from CMB data} \label{sec:cmbtoreh} In view of the previous results, the correct way to discuss how well the CMB data constrain a set of known inflationary models is to perform a Bayesian analysis of the data given the model parameters, including the reheating. Notice that this is different from constraining the slow-roll parameters, or the spectral index and tensor-to-scalar ratio, which only encode the shape of the primordial power spectra and know nothing about reheating (whereas a model of inflation does). \subsection{Exact numerical integration} \label{subsec:exact} The exact numerical integration method has been introduced in Refs.~\cite{Ringeval:2005yn, Martin:2006rs,Ringeval:2007am, Lorenz:2007ze} and consists of computing the primordial power spectra assuming only General Relativity and linear perturbation theory. Therefore, the only model parameters are the ones appearing in the inflaton potential together with the reheating parameter $R_\urad$, for the very reasons explained in Sec.~\ref{sec:physicalorigin}.
The numerical integration of the inflationary perturbations sets up the initial conditions for the subsequent cosmological perturbations from which the CMB anisotropies are deduced. For this purpose, we have used a modified version of the \texttt{CAMB} code~\cite{Lewis:1999bs} coupled to a Markov Chain Monte Carlo (MCMC) exploration of the parameter space implemented in the \texttt{COSMOMC} code~\cite{Lewis:2002ah} and given the WMAP7 data~\cite{Komatsu:2010fb, Larson:2010gs, Jarosik:2010iu}. Concerning the standard cosmological model, we have assumed a flat $\Lambda$CDM model having five parameters: the density parameter of baryons, $\Omega_\ub$, that of cold dark matter, $\Omega_\udm$, the Hubble parameter today, $H_0$, the optical depth $\tau$ encoding the redshift at which the Universe reionized, and the nuisance parameter $A_{_\uSZ}$ encoding the relative amplitude of the diffuse Sunyaev--Zel'dovich (SZ) effect compared to the analytical model of Ref.~\cite{Komatsu:2002wc}. In fact, as discussed in Ref.~\cite{Lewis:2002ah}, it is more convenient to sample the cosmological parameter space over the rescaled quantities $(\Omega_\ub h^2, \Omega_\udm h^2, \tau, \theta, A_{_\uSZ})$, where $H_0 = 100 h \, \mbox{km}/\mbox{s}/\mbox{Mpc}$ and $\theta$ measures the ratio of the sound horizon at last scattering to the angular diameter distance. Following Ref.~\cite{Komatsu:2010fb}, we have included the lensing corrections on the temperature and polarization power spectra and, to limit parameter degeneracies, complemented the WMAP7 data with the latest Hubble Space Telescope (HST) bound on $H_0$~\cite{Riess:2009pu}. Concerning the primordial parameters, they are now provided by our inflationary model parameters, up to some observationally convenient rescalings. For instance, we will prefer to sample on $P_*$, the amplitude of the scalar perturbations at the pivot scale, rather than on the potential normalization $M$, both being in one-to-one correspondence.
Similarly, it is more convenient to sample the reheating era over the parameter $R$, \begin{equation} \label{eq:Rrehdef} R \equiv R_\urad \dfrac{\rho_\uend^{1/4}}{M_{_{\mathrm Pl}}}\,, \end{equation} rather than $R_\urad$. As can be seen by comparing Eq.~(\ref{eq:Rradsr}) with the following exact expression: \begin{equation} \label{eq:Rrehsr} \begin{aligned} \ln R = N_{_\uT} - N_* + N_0 + \dfrac{1}{2} \ln\left(3 \dfrac{V_\mathrm{end}}{V_*} \dfrac{3 - \epsilon_{1*}}{3 - \epsilon_{1\mathrm{end}}} \right), \end{aligned} \end{equation} the values of $R_\urad$, contrary to those of $R$, explicitly depend on $P_*$. This would induce unwanted correlations between $R_\urad$ and $P_*$ which are therefore avoided by sampling the reheating over $R$. Notice that since $R$ and $R_\urad$ differ by a factor $\rho_\uend^{1/4}/M_{_{\mathrm Pl}}$, they are also in one-to-one correspondence once the model of inflation is specified~\cite{Martin:2006rs}. In order to perform the MCMC analysis, we still have to specify the prior probability distributions. Concerning the cosmological parameters, we have chosen wide flat priors around the preferred posterior values obtained by the WMAP team~\cite{Komatsu:2010fb}. The reheating energy scale being unknown, we assume a flat prior on $\ln R$ whose extension is given by the consistency conditions mentioned in Sec.~\ref{subsec:why}. Reheating should occur before nucleosynthesis and after the end of inflation while the energy positivity conditions imply $-1/3< \bar{w}_\mathrm{reh}<1$. As a result, we take a flat prior for $\ln R$ in the range~\cite{Martin:2006rs} \begin{equation} \ln\left(\dfrac{\rho_\unuc^{1/4}}{M_{_{\mathrm Pl}}} \right) < \ln R < -\dfrac{1}{3} \ln\left(\dfrac{\rho_\unuc^{1/4}}{M_{_{\mathrm Pl}}} \right) + \dfrac{4}{3} \ln\left(\dfrac{\rho_\uend^{1/4}}{M_{_{\mathrm Pl}}} \right).
\end{equation} The lower bound is $\simeq -47$ for $\rho_\unuc^{1/4} =10 \,\mbox{MeV}$ whereas the upper bound depends on $\rho_\uend$, and thus on the other inflationary model parameters. Finally, we have chosen a flat prior on the logarithm of $P_*$ around the value giving the right amplitude of the CMB anisotropies: $2.7 < \ln\left(10^{10} P_*\right) < 4.0$. Let us notice that since $P_*$ is well constrained, this translates into an upper bound $\ln (\rho_\uend^{1/4}/M_{_{\mathrm Pl}}) < -5.5$ (analogous to the upper bound on $H_*$ in the slow-roll approximation) that will therefore be inherited by $\ln R$, so that the maximal value of the upper bound is $\simeq 8.3$. The other prior choices on the primordial parameters are those concerning the inflaton potential and will be specified later. \par In the following, we perform the WMAP7 data analysis along those lines for both the large and small field models. As a first step, we sample over the rescaled reheating parameter $\ln R$ without any assumptions on $\bar{w}_\mathrm{reh}$. We show that it is actually constrained for all models. As can be checked from Eq.~(\ref{eq:Rrad}), this means that the CMB data restrict the \emph{a priori} possible values of $\Delta N$ and $\bar{w}_\mathrm{reh}$. In other words, this result shows that not including the reheating parameter when constraining inflationary models is no longer a reasonable option. In a second step, we break the degeneracy between $\bar{w}_\mathrm{reh}$ and $\Delta N$ and assume that $\bar{w}_\mathrm{reh}$ takes its natural values for the large field models given in Eq.~(\ref{eq:wrehlf}), or choose a specific value in the small field models. These reasonable extra assumptions translate the bounds on $\ln R$ into a lower limit on $\rho_\ureh$ and/or $g_*^{1/4}T_{\mathrm{reh}}$.
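The two prior bounds quoted above are easily reproduced; in the sketch below, the numerical value of the reduced Planck mass and the quoted limit $\ln(\rho_\uend^{1/4}/M_{_{\mathrm Pl}})<-5.5$ are the only inputs.

```python
import math

M_Pl = 2.435e18            # reduced Planck mass in GeV (assumed convention)
rho_nuc_14 = 0.01          # rho_nuc^(1/4) = 10 MeV, in GeV
ln_rho_end_14_max = -5.5   # maximal ln(rho_end^(1/4)/M_Pl) from the P_* limits

# Lower and maximal upper bounds of the flat prior on ln R
lnR_min = math.log(rho_nuc_14 / M_Pl)
lnR_max = -lnR_min / 3.0 + 4.0 / 3.0 * ln_rho_end_14_max
```

One recovers $\ln R > -47$ and a maximal upper bound of $\simeq 8.3$, as stated in the text.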
Unless otherwise specified, we have stopped the MCMC exploration according to the $R$--statistic~\cite{Gelman:1992} implemented in \texttt{COSMOMC}, such that the difference in variances between the different Markov chains does not exceed a few percent. Typically, this corresponds to a set of $300\,000$ to $500\,000$ samples depending on the underlying model of inflation. \begin{figure} \begin{center} \includegraphics[width=8.9cm]{lf_wmap7_hst_th_cosmo} \caption{Marginalized posterior probability distributions for the base and derived cosmological parameters in large field inflation. The black solid lines are without any assumptions on the large field reheating whereas the red dashed ones are under the prior $\bar{w}_\mathrm{reh}=(p-2)/(p+2)$. The latter are represented only when they differ from the former.} \label{fig:lfcosmo} \end{center} \end{figure} \subsection{Large field models} \label{subsec:lfnumerics} The potential for the large field models is given in Eq.~(\ref{eq:lfpot}). Together with $P_*$ and $\ln R$, there is only one additional primordial parameter, $p$, for which we have chosen a flat prior in the range $p\in[0.2,5]$. The upper bound is motivated by the previous constraints on large fields~\cite{Martin:2006rs} whereas the lower one is a theoretical prejudice associated with the non-naturalness of extremely small values of $p$ in any field theory. The marginalized posterior distributions for the sampled and derived cosmological parameters are represented in Fig.~\ref{fig:lfcosmo} for the two prior assumptions detailed in the following. The solid lines are without any assumption on the reheating whereas the dashed ones are under the natural equation of state $\bar{w}_\mathrm{reh}=(p-2)/(p+2)$. These probabilities are compatible with those already derived in the literature~\cite{Lorenz:2008je, Komatsu:2010fb} up to slight shifts coming from the different reheating assumptions.
This is the result of some tension between the large field models, which generically predict a large tensor-to-scalar ratio $r$, and its non-observation. Being more restrictive on the reheating gives less flexibility to the model, such that $H_0$ and $\Omega_\ub$ are slightly shifted to compensate for otherwise too high values of $r$. \begin{figure} \begin{center} \includegraphics[width=9cm]{lf_wmap7_hst_th} \caption{Marginalized posterior probability distributions (solid lines) and mean likelihoods (dotted) for the large field model primordial parameters. This is without assumption on the reheating era. Notice the lower bound on the reheating parameter $\ln R$ which correlates with the potential power $p$ (see also Fig.~\ref{fig:lf2D}). The energy scale at the end of large field inflation is also constrained.} \label{fig:lf1D} \end{center} \end{figure} In Fig.~\ref{fig:lf1D}, we have plotted the marginalized probability distribution for the large field primordial parameters without assumption on the reheating. It is particularly interesting to compare these plots to Fig.~18 of Ref.~\cite{Martin:2006rs} since this allows us to see the improvement in the parameter constraints in going from the WMAP3 to the WMAP7 data. In addition to the expected constraints on $P_*$, we find the $95\%$ confidence limit \begin{equation} \label{eq:lfpmax} p < 2.2\,, \end{equation} suggesting that $\phi^2$ inflation may now be considered under pressure. Let us emphasize that this result is robust against any possible reheating evolution since it is marginalized over $\ln R$. Concerning this last parameter, we find a $95\%$ lower bound: \begin{equation} \label{eq:lfRmin} \ln R > -28.9\,. \end{equation} In fact, as can be checked in Fig.~\ref{fig:lf2D}, these two parameters are correlated with each other and also with $\rho_\uend$. These correlations can be understood as follows.
From Eq.~(\ref{eq:Rrehsr}), the quantities $\ln R$, $p$ and $\ln\left(\rho_\uend/M_{_{\mathrm Pl}}^4\right)$ are related by the formula \begin{eqnarray} \ln \left(\frac{\rho_\uend}{M_{_{\mathrm Pl}}^4}\right)&=&\ln \left(128\pi^2P_*\right)-2N_0 +2\ln R-\frac{2}{1-n_{_{\mathrm S}}}\nonumber \\ & & -\frac{p}{2}\frac{1+n_{_{\mathrm S}}}{1-n_{_{\mathrm S}}}+ \ln\left[\frac{8p(1-n_{_{\mathrm S}})}{p+2}\right], \end{eqnarray} \begin{figure} \begin{center} \includegraphics[width=9cm]{lf_wmap7_hst_th_2D} \caption{Two-dimensional marginalized posterior probability distribution (point density) in the plane $(p, \ln R)$ and its one- and two-sigma confidence intervals. Correlations with the energy scale of large field inflation are traced by the color scale.} \label{fig:lf2D} \end{center} \end{figure} \noindent where $P_*$ and $n_{_{\mathrm S}}$ are well constrained quantities. As a result, at fixed $\ln R$, the larger the $p$ values, the lower the energy scale at the end of inflation has to be, which is exactly what is observed in Fig.~\ref{fig:lf2D}. Of course, $p$ cannot be too large since, in this case, the tensor-to-scalar ratio $r$ increases and rapidly becomes incompatible with the CMB data. In Fig.~\ref{fig:lf2D}, we also observe that the smaller $p$, the larger the allowed range of variation of $\ln R$. The upper limit on $\ln R$ does not depend on $p$ and just comes from the upper limit on the energy scale of inflation ($\ln R \lesssim 8.3$). On the other hand, the lower limit strongly depends on $p$ and represents a nontrivial result. This expresses the fact that, for a given $p$, there are values of $\ln R$ for which there is no way to obtain, at the same time, a consistent reheating epoch and CMB predictions compatible with the data. From this effect, we also obtain the energy scale of large field inflation, at the two-sigma level: \begin{equation} \label{eq:rhoendbound} 4.4 \times 10^{15} \mbox{GeV} < \rho_\uend^{1/4} < 1.2 \times 10^{16} \mbox{GeV}\,.
\end{equation} The upper limit just comes from the constraint on the energy scale of inflation while the lower limit originates from the fact that $p$ cannot be too large (recall that low values of $\rho_\uend$ mean large values of $p$). \begin{figure} \begin{center} \includegraphics[width=8.9cm]{lfwreh_wmap7_hst_th} \caption{Marginalized posterior probability distributions (solid lines) and mean likelihood (dotted lines) for the large field parameters when $\bar{w}_\mathrm{reh}=(p-2)/(p+2)$. This extra assumption on the reheating yields tighter constraints than in Fig.~\ref{fig:lf1D}. In particular, we find $\rho_\ureh^{1/4} > 17.3\, \mbox{TeV}$ at $95\%$ confidence level, as well as $p<2.1$.} \label{fig:lfwreh1D} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=9cm]{lfwreh_wmap7_hst_th_2D} \caption{One- and two-sigma marginalized limits in the plane $[p,\ln(\rho_\ureh/M_{_{\mathrm Pl}}^4)]$ for the large field models with $\bar{w}_\mathrm{reh}=(p-2)/(p+2)$. The point density traces the associated two-dimensional posterior while the color map shows correlations with the energy scale at which inflation ends.} \label{fig:lfwreh2D} \end{center} \end{figure} Assuming now that the reheating proceeds according to Eq.~(\ref{eq:wrehlf}), one obtains the marginalized posteriors plotted in Fig.~\ref{fig:lfwreh1D} and Fig.~\ref{fig:lfwreh2D}. To be consistent, we have modified our prior on $p$ by assuming a flat distribution in $p\in]1,5]$; the case $p=1$ is a limiting case that may be problematic. Indeed, values of $p<1$ would induce $\bar{w}_\mathrm{reh}<-1/3$ and inflation would not stop. Moreover, instead of using $\ln R$, we have used Eqs.~(\ref{eq:Rradw}) and (\ref{eq:Rrehdef}) to sample the parameter space over $\ln (\rho_\ureh/M_{_{\mathrm Pl}}^4)$ with a flat prior in $[\ln(\rho_\unuc/M_{_{\mathrm Pl}}^4), \ln(\rho_\uend/M_{_{\mathrm Pl}}^4)]$.
The upper bound on the $p$ posterior is slightly tighter due to the restriction made over the reheating: we find the two-sigma limit $p<2.1$. For the same reasons, the energy scale of large field inflation is a bit more constrained, and the two-sigma range becomes \begin{equation} 5.2 \times 10^{15}\,\mbox{GeV} < \rho_\uend^{1/4} < 9.1\times 10^{15}\, \mbox{GeV}. \end{equation} Certainly, the most interesting result is the lower bound on the reheating energy scale. At the $95\%$ confidence level, \begin{equation} \label{eq:lfrhorehmin} \rho_\ureh^{1/4} > 17.3\, \mbox{TeV}\,. \end{equation} The correlations between these three parameters are represented in Fig.~\ref{fig:lfwreh2D} and have the same origin as the ones displayed in Fig.~\ref{fig:lf2D}, up to the change of variable $R$ to $\rho_\ureh$. In particular, we see that the constraints on $\rho_\ureh$ are tighter for $p \lesssim 1.5$ than for $p \gtrsim 1.5$. This comes from the fact that the reheating is well constrained for a negative mean equation of state, which precisely corresponds to $p<2$ [see Eq.~(\ref{eq:wrehlf})]. The change of behavior around $p=1.5$ comes from this effect combined with a too high tensor-to-scalar ratio when $p\gtrsim 2$. Let us also emphasize that Eq.~(\ref{eq:lfrhorehmin}) is marginalized over all large field models. A theoretical preference for a given value of $p$ can lead to stronger bounds, for instance for $p \gtrsim 1$ or $p=2$ (see Fig.~\ref{fig:lfwreh2D}). Finally, one can check that the bounds found in this section are compatible with the expectations we have derived from the slow-roll predictions of Sec.~\ref{subsec:largefield}. \par In the next section, we perform a similar analysis for the small field models.
\subsection{Small field models} \label{subsec:sfnumerics} \begin{figure} \begin{center} \includegraphics[width=9cm]{sf_wmap7_hst_th} \caption{Marginalized posterior probability distributions (solid lines) and mean likelihoods (dotted) for the small field model primordial parameters. This is without assumption on the reheating era. Notice again the lower bound on the reheating parameter $\ln R$ and the slightly favored super-Planckian values of $\mu$. Correlations are displayed in Fig.~\ref{fig:sf2D}.} \label{fig:sf1D} \end{center} \end{figure} \begin{figure*} \begin{center} \includegraphics[width=18cm]{sf_wmap7_hst_th_2D} \caption{One- and two-sigma contours of the two-dimensional marginalized probability distributions for small field inflation. The point density traces the associated two-dimensional posterior while the color scale shows correlations with third parameters. Notice the correlation between $\mu$ and $p$, as well as with $\ln R$.} \label{fig:sf2D} \end{center} \end{figure*} The small field model potential of Eq.~(\ref{eq:sfpot}) involves an extra parameter compared to the large field models, namely the VEV $\mu$. The scale of this parameter being unknown, we have chosen a flat prior on $\log(\mu/M_{_{\mathrm Pl}})$ in the range $[-1,2]$. The lower and upper limits being chosen only for numerical convenience, one should keep in mind that the physical values of $\mu$ may be larger or smaller. The important point, however, is that such a prior includes both sub-Planckian and super-Planckian values without prejudice. Concerning the potential index $p$, we have chosen a flat prior in the range $p\in[2.4,10]$. The upper bound is arbitrary whereas the lower bound excludes $p=2$ since this model is a special case~\cite{Martin:2006rs}. In fact, there exists another reason that justifies the above choices.
In the limit $\mu/M_{_{\mathrm Pl}}\gg 1$, one can show, using a perturbative expansion in $M_{_{\mathrm Pl}}/\mu$, that the first two horizon flow functions $\epsilon_1$ and $\epsilon_2$ become independent of $\mu$ and $p$, namely $\epsilon_1=(4\Delta N_*+1)^{-1}$ and $\epsilon_2=4\epsilon_1$. Therefore, there would be no point in taking a larger upper bound on the $\mu$ prior since the corresponding physical predictions are no longer affected by this choice. All the other priors, both on the cosmological and primordial parameters, have been chosen as for the large field exploration. For the sake of clarity, we have not represented the marginalized posteriors for the cosmological parameters. In contrast to the large field models, these posteriors are almost the same regardless of what we assume for the reheating. The reason is that small field models do not have a tendency to produce a high tensor-to-scalar ratio. There is therefore no need for the cosmological parameters to compensate for such an effect and they decouple from the details of the inflationary and reheating phases. Finally, the cosmological parameter posteriors in small field inflation end up being very similar to the dashed curves (or solid curves when they are absent) plotted in Fig.~\ref{fig:lf1D}. \begin{figure} \begin{center} \includegraphics[width=8.8cm]{sfwrehm03_wmap7_hst_th} \caption{Marginalized posterior probability distributions (solid lines) and mean likelihoods (dotted lines) for the small field model. These posteriors assume a constant equation of state with (from bottom to top) $\bar{w}_\mathrm{reh}=-0.3$ (black lines), $-0.2$ (red lines), $-0.1$ (blue lines) and $\bar{w}_\mathrm{reh}=0$ (green lines). Only $\rho_\ureh$ is affected by such prior choices (bottom right panel).
The lower bound on the reheating energy scale becomes tighter as $\bar{w}_\mathrm{reh}$ decreases.} \label{fig:sfwreh1D} \end{center} \end{figure} In Figs.~\ref{fig:sf1D} and \ref{fig:sf2D}, we have plotted the marginalized posterior probability distributions (one- and two-dimensional, respectively) for the primordial small field parameters without assumption on the reheating. Again, we find the WMAP data to give a lower limit on the reheating parameter, at the two-sigma level \begin{equation} \label{eq:sfRrehmin} \ln R > -23.1\,. \end{equation} The parameters $\mu$ and $p$ are not constrained. However, the posteriors of Fig.~\ref{fig:sf1D} clearly show a tendency to favor super-Planckian values of $\mu$ together with large values of $p$. As can be seen in Fig.~\ref{fig:sf2D}, since $\mu$ can take arbitrarily low values, the energy scale of small field inflation is not constrained from below. We find only the consistency condition that $\rho_\uend^{1/4} < 9 \times 10^{15}\, \mbox{GeV}$ from the $P_*$ limits. The correlations between $\mu$ and $p$ can be understood from Sec.~\ref{subsec:smallfield} and come from the requirement of having the right spectral index. The bound of Eq.~(\ref{eq:sfRrehmin}) has the same origin but through the selection of the favored $\Delta N_*$ values. More details on these effects can be found in Refs.~\cite{Martin:2006rs, Ringeval:2007am}. \begin{figure} \begin{center} \includegraphics[width=9cm]{sfwrehm03_wmap7_hst_th_2D} \caption{One- and two-sigma marginalized limits in the plane $[\log(\mu / M_{_{\mathrm Pl}}),\ln(\rho_\ureh/M_{_{\mathrm Pl}}^4)]$ for the small field models with $\bar{w}_\mathrm{reh}=-0.3$. The point density traces the associated two-dimensional posterior while the color map shows correlations with the energy scale at which inflation ends.} \label{fig:sfwreh2D} \end{center} \end{figure} As we did for large fields, we now assume an equation of state parameter for the small field reheating.
In contrast to the large field models, we no longer have a relationship between $\bar{w}_\mathrm{reh}$ and the inflationary potential, and one may only assume fiducial values ranging from $-1/3$ to $1$. Again, the MCMC has now been sampled directly over $\ln(\rho_\ureh/M_{_{\mathrm Pl}}^4)$ rather than over $\ln R$ by making use of Eqs.~(\ref{eq:Rradw}) and (\ref{eq:Rrehdef}). The resulting one-dimensional posteriors for the primordial parameters have been plotted in Fig.~\ref{fig:sfwreh1D} for the four values $\bar{w}_\mathrm{reh}=-0.3$, $-0.2$, $-0.1$ and $\bar{w}_\mathrm{reh}=0$, and only when the posteriors are affected by this choice. Comparing Figs.~\ref{fig:sf1D} and \ref{fig:sfwreh1D} shows that the posteriors of $\mu$, $p$ (and $P_*$) are mostly independent of the details of the reheating. The only quantity changing accordingly is $\rho_\ureh$. This is not surprising since, at a given $R$, changing the value of $\bar{w}_\mathrm{reh}$ modifies the number of e-folds during which the Universe reheated. As a result, the constraint on $R$ for small fields translates into a lower bound on $\rho_\ureh$, but only when $\bar{w}_\mathrm{reh}$ is small. At $95\%$ confidence, we find \begin{equation} \begin{aligned} \label{eq:conssf} \bar{w}_\mathrm{reh} & = -0.3 \quad \Rightarrow \quad \rho_\ureh^{1/4} > 8.9 \times 10^5\,\mbox{GeV},\\ \bar{w}_\mathrm{reh} & = -0.2 \quad \Rightarrow \quad \rho_\ureh^{1/4} > 3.9 \times 10^2\, \mbox{GeV}, \end{aligned} \end{equation} while higher values of $\bar{w}_\mathrm{reh}$ do not constrain $\rho_\ureh$ more than the prior $\rho_\ureh>\rho_\unuc$. Physically, these results can be fully understood from Fig.~\ref{fig:srsf}. In particular, the fact that constraints on $\rho_\ureh$ can be derived for negative values of $\bar{w}_\mathrm{reh}$ is apparent from these plots. As expected, correlations between $\rho_\ureh$ and the other parameters are similar to the ones associated with $\ln R$.
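The role played by $\bar{w}_\mathrm{reh}$ can be illustrated with a short numerical sketch: for a constant equation of state, $\rho\propto a^{-3(1+\bar{w}_\mathrm{reh})}$, so the number of e-folds of reheating between $\rho_\uend$ and $\rho_\ureh$ is $N_\mathrm{reh}=\ln(\rho_\uend/\rho_\ureh)/[3(1+\bar{w}_\mathrm{reh})]$ (the fiducial energy scales below are illustrative, not fitted values):

```python
import math

def N_reh(rho_end_quarter_GeV, rho_reh_quarter_GeV, wbar):
    """E-folds of reheating for a constant equation of state wbar,
    from rho proportional to a^{-3(1+wbar)}."""
    return 4.0 * math.log(rho_end_quarter_GeV / rho_reh_quarter_GeV) / (3.0 * (1.0 + wbar))

# illustrative scales: rho_end^{1/4} = 9e15 GeV, rho_reh^{1/4} = 8.9e5 GeV
for w in (-0.3, -0.2, -0.1, 0.0):
    print(w, round(N_reh(9e15, 8.9e5, w), 1))
```

At fixed energy scales, a more negative $\bar{w}_\mathrm{reh}$ implies more e-folds of reheating, which is why the data are most constraining in that regime.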
In Fig.~\ref{fig:sfwreh2D}, we have represented the two-dimensional posterior and its one- and two-sigma contours in the plane $[\log(\mu/M_{_{\mathrm Pl}}),\ln(\rho_\ureh/M_{_{\mathrm Pl}}^4)]$. The color scale traces correlations with the energy scale of small field inflation. We recover the slightly disfavored sub-Planckian values of the vacuum expectation value. They clearly remain acceptable, but only if the reheating ends at a high energy. Again, the previous results are compatible and easily understandable with the slow-roll predictions of Sec.~\ref{subsec:smallfield}. \par To conclude this section, we have found that the current WMAP data give non-trivial information on the reheating era in the two most considered classes of prototypical inflationary models (large and small fields). Conversely, when the goal is to use CMB data to constrain these models, the reheating can definitely no longer be left out of the marginalization. \subsection{Importance sampling from slow-roll bounds} \label{subsec:sampling} The formulas derived in Sec.~\ref{subsec:why} link the reheating and inflationary model parameters to the spectral index and tensor-to-scalar ratio. As a result, they could be used to extract constraints on the reheating energy scale from already derived constraints on the slow-roll parameters by using importance sampling~\cite{Lewis:2002ah}. This method is, however, highly inefficient since most of the favored $(n_{_{\mathrm S}},r)$ values do not correspond to a consistent reheating model in a given inflationary framework. Nevertheless, it clearly illustrates how knowledge of the primordial power spectra shape, combined with an inflaton potential, translates into some information on the energy scale at which the reheating ends. For this reason, we briefly discuss this method in the following, although we prefer an exact numerical integration as performed above.
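For large field inflation, the inversion used by this method takes only a few lines. The snippet below relies on the standard leading-order slow-roll relations $r=16\epsilon_{1*}$ and $n_{_{\mathrm S}}-1=-2\epsilon_{1*}-\epsilon_{2*}$, with field values in units of $M_{_{\mathrm Pl}}$; the function name and the test point are ours:

```python
def invert_lfi(ns, r):
    """Invert (ns, r) into large-field quantities at leading order in slow roll.
    Returns (eps1, eps2, p, phi_star, Delta_N_star)."""
    eps1 = r / 16.0
    eps2 = (1.0 - ns) - 2.0 * eps1        # from ns - 1 = -2 eps1 - eps2
    p = 4.0 * eps1 / eps2                 # potential power for V ~ phi^p
    phi_star = (8.0 * eps1 / eps2**2) ** 0.5
    dN_star = (1.0 - eps1) / eps2         # e-folds from the slow-roll trajectory
    return eps1, eps2, p, phi_star, dN_star

# the massive (p = 2) large-field predictions at Delta N_* = 55 invert back consistently
print(invert_lfi(0.964, 0.1441))
```

Random $(n_{_{\mathrm S}},r)$ draws for which $p$ or $\Delta N_*$ come out negative, or for which the inferred $\rho_\ureh$ violates the BBN or $\rho_\ureh<\rho_\uend$ priors, are precisely the ones that make plain importance sampling inefficient.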
\par Assuming an inflationary model, with a known potential $V(\phi)$, one can generate any spectral index and tensor-to-scalar ratio such that, at leading order in slow-roll, \begin{equation} \label{eq:srtoobs} r = 16 \epsilon_{1*}, \qquad n_{_{\mathrm S}} - 1 = -2 \epsilon_{1*} - \epsilon_{2*}, \end{equation} with \begin{equation} \label{eq:epstoobs} \epsilon_{1*} = \dfrac{1}{2} \left(\dfrac{V_*'}{V_*}\right)^2, \qquad \epsilon_{2*} = -2 \left[\dfrac{V_*''}{V_*} - \left(\dfrac{V_*'}{V_*} \right)^2 \right]. \end{equation} A prime here is understood as a derivative with respect to the field $\phi$. The star still refers to the time at which the scale under consideration crossed the Hubble radius during inflation. Inverting Eq.~(\ref{eq:srtoobs}), or Eq.~(\ref{eq:epstoobs}), gives the value of $\phi_*$ (and possibly other parameters) leading to the required pair $(n_{_{\mathrm S}},r)$. For instance, in the large field models, one would find \begin{equation} \frac{\phi_*}{M_{_{\mathrm Pl}}} = \sqrt{\dfrac{8 \epsilon_{1*}}{\epsilon_{2*}^2}}, \qquad p = 4 \dfrac{\epsilon_{1*}}{\epsilon_{2*}}\,. \end{equation} {}From $\phi_*$, the slow-roll trajectory, and the field value at which inflation stops, $\phi_\mathrm{end}$, one can derive $\Delta N_*$. For instance, for large field inflation, one would obtain \begin{equation} \Delta N_*=\frac{1-\epsilon_{1*}}{\epsilon_{2*}}.
\end{equation} The energy scale at which reheating ends stems from Eqs.~(\ref{eq:Rrad}) and (\ref{eq:Rradw}): \begin{equation} \begin{aligned} \ln\left(\dfrac{\rho_\ureh^{1/4}}{M_{_{\mathrm Pl}}} \right) &=\dfrac{3+3 \bar{w}_\mathrm{reh}}{1-3 \bar{w}_\mathrm{reh}} (N_0 + \Delta N_*) \\ &- \dfrac{1+3 \bar{w}_\mathrm{reh}}{2(1-3 \bar{w}_\mathrm{reh})} \ln\left(8 \pi^2 P_*\right) + \ln \sqrt{\epsilon_{1*}} \\ &+ \dfrac{1}{1-3 \bar{w}_\mathrm{reh}} \ln\left(\dfrac{3}{\epsilon_{1*}} \dfrac{3-\epsilon_{1*}}{3 - \epsilon_{1\mathrm{end}}} \dfrac{V_\mathrm{end}}{V_*} \right), \end{aligned} \label{eq:srtorhoreh} \end{equation} provided $\bar{w}_\mathrm{reh}\ne 1/3$. As expected, if the reheating is radiation dominated, it cannot be distinguished from the usual radiation era and $\rho_\ureh$ cannot be inferred. Clearly, for a given set $(\epsilon_{1*}, \epsilon_{2*}, P_*)$, in an assumed model of inflation, the right-hand side of Eq.~(\ref{eq:srtorhoreh}) is uniquely determined, and hence so is $\rho_\ureh$. As already mentioned, such a method is not well suited: picking a random $(\epsilon_{1*}, \epsilon_{2*}, P_*)$ compatible with the power spectra shapes usually predicts a value of $\rho_\ureh$ that is incompatible either with BBN or with the underlying model, \textsl{i.e.~} $\rho_\ureh > \rho_\uend$. The reason is that inflationary physics is much more than Taylor expanding a potential and fitting the power spectra shape. The way out is to perform an exact numerical integration of the inflationary perturbations, including the reheating parameter, as we previously did. This method also has the advantage of freeing us from any assumption about the equation of state parameter $\bar{w}_\mathrm{reh}$. \section{Conclusion} \label{sec:conclusion} We now conclude our investigation by revisiting our main results.
The most important conclusion is that, both for large and small field scenarios, the reheating parameter $\ln R$ is now constrained by CMB data. The physical origin of this result is clear. For fixed physical length scales today, a change in $\ln R$ modifies the location of the CMB observable window along the inflationary potential, which is possible only for a limited range of $\ln R$ given the data accuracy. This conclusion is general and does not depend on the details of the reheating epoch. However, if one assumes a model for reheating, typically if one chooses a specific value of the mean equation of state, then it becomes possible to express the constraints mentioned above as limits on the energy density at the end of reheating or, equivalently, as constraints on the reheating temperatures. This leads to Eqs.~(\ref{eq:lfrhorehmin}) and~(\ref{eq:conssf}). These results are of particular interest for the supersymmetric extension of either large or small field inflationary models. Indeed, our result limits the reheating temperature from below whereas gravitino production gives an upper bound~\cite{Khlopov:1984pf, Kallosh:1999jj, Giudice:1999yt, Lemoine:1999sc, Maroto:1999ch, Giudice:1999am, Buonanno:2000cp, Copeland:2005qe,Jedamzik:2006xz, Kawasaki:2008qe, Bailly:2009pe}: $T_\mathrm{reh} < 10^{4}\,\mbox{TeV}$, where $T_\mathrm{reh}$ is given in Eq.~(\ref{eq:Trehdef}). Therefore, assuming $g_*\simeq 200$ in the large field case, we now have an allowed range of variation for the reheating temperature given by \begin{equation} 6 \, \mbox{TeV}\lesssim T_\mathrm{reh}\lesssim 10^4 \, \mbox{TeV}. \end{equation} \par By including the reheating parameter in our analysis, we can marginalize over all reheating histories to infer the inflationary parameter values in a robust way. For large field scenarios, we find that the power index is bounded from above, $p<2.2$ at the $95\%$ confidence limit.
This means that the prototypical model of inflation, namely, massive chaotic inflation, is now under pressure. Similarly, the small field models with sub-Planckian vacuum expectation values $\mu$ are slightly disfavored. In fact, $\mu$ is correlated with the potential power $p$, as represented in Fig.~\ref{fig:sf2D}. Without marginalization over $p$, large values of $p \gtrsim 6$ are actually necessary to allow the sub-Planckian values of $\mu$ to be inside the $95\%$ contour. If one has a theoretical prejudice for $\mu< M_{_{\mathrm Pl}}$, and for reasonable values of $p < 6$, then small field models can also be considered under pressure. \par Since the constraints on the reheating parameters are directly related to the ability of the data to determine the observable parameters, one could consider more data sets than the WMAP7 data. In fact, since only the accuracy on the primordial parameters matters, data sets improving the constraints on the standard cosmological parameters do not change the reheating bounds. On the other hand, small-scale CMB experiments may be decisive but, as mentioned in Ref.~\cite{Komatsu:2010fb}, they do not yet give a significant improvement on the determination of $n_{_{\mathrm S}}$ and $r$ due to their low accuracy at large multipoles. We have indeed tested that our limits do not change by including the Baryon Acoustic Oscillation~\cite{Percival:2009xn} or the Arcminute Cosmology Bolometer Array Receiver data~\cite{Reichardt:2008ay} in our analysis. In contrast, since the future Planck data are expected to improve the bounds on $n_{_{\mathrm S}}$, $r$ and even on new primordial observables, we should get unprecedented information on the inflationary reheating era. \begin{acknowledgments} This work is partially supported by the Belgian Federal Office for Science, Technical and Cultural Affairs, under the Inter-university Attraction Pole Grant No. P6/11. \end{acknowledgments}
2111.09321
\section{Introduction} \label{sec:intro} Cosmological observations imply that around 80\% of the total matter content in our universe is made up of dark matter (DM)~\cite{Planck:2018vyg}. The gravitational impact of DM on the dynamics of visible matter has been measured on a large range of astrophysical and cosmological scales. Nonetheless, despite substantial effort, searches at colliders~\cite{Kahlhoefer:2017dnp} and in direct~\cite{Undagoitia:2015gya} and indirect~\cite{Gaskins:2016cha} detection experiments have so far not yielded any clear hints of interactions other than gravitational between the DM and the standard model particles. While the aforementioned search strategies depend on the existence (and sufficient strength) of such an interaction, here we focus on a complementary path to constrain particle physics models of DM, by considering the DM imprint on the formation of cosmological structures and its potential contribution to the effective number of neutrinos, $\Delta N_{\rm eff}$. This is of particular relevance for very weakly interacting DM, potentially out-of-reach of other search strategies. An especially relevant probe in this direction is the Lyman-$\alpha$ forest, which provides a measurement of the positions of hydrogen clouds along the line-of-sight through the absorption lines of distant quasars~\cite{Viel:2005qj, Viel:2013fqw, Palanque-Delabrouille:2019iyz, Garzilli:2019qki}. Accordingly, Lyman-$\alpha$ forest observations probe structure on intermediate to small scales at redshifts of about $2$--$6$~\cite{Ikeuchi:1986qq, 10.1093/mnras/218.1.25P}. These small-scale structures can be washed out by DM free-streaming, which is caused by significant deviations in the DM momentum distribution compared to the standard cold dark matter (CDM) scenario.
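To give a sense of the scales probed, the suppression of the linear power spectrum by a thermal relic is commonly described by the transfer function fit $T(k)=[1+(\alpha k)^{2\nu}]^{-5/\nu}$ of Refs.~\cite{Bode:2000gq,Viel:2005qj}. The sketch below uses the numerical coefficients quoted in Ref.~\cite{Viel:2005qj} for thermal WDM; they are an external input taken from that reference, not derived here:

```python
def alpha_wdm(m_keV, omega_dm=0.25, h=0.7):
    """Breaking scale (in h^-1 Mpc) of the thermal-WDM transfer-function fit."""
    return 0.049 * m_keV**-1.11 * (omega_dm / 0.25)**0.11 * (h / 0.7)**1.22

def T_wdm(k, m_keV, nu=1.12):
    """Transfer function T(k) = sqrt(P_WDM(k) / P_CDM(k)), with k in h/Mpc."""
    return (1.0 + (alpha_wdm(m_keV) * k)**(2.0 * nu))**(-5.0 / nu)

# power suppression for a 5.3 keV thermal relic: negligible at k ~ 1 h/Mpc,
# order one at the small scales probed by the Lyman-alpha forest
for k in (1.0, 10.0, 50.0):
    print(k, T_wdm(k, 5.3)**2)
```

With these coefficients, a 5.3 keV relic suppresses half of the power around $k\sim 40\,h/$Mpc, illustrating why only data reaching such small scales are sensitive to keV-scale free-streaming.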
Various groups have analysed data of the Lyman-$\alpha$ flux power spectrum~\cite{Viel:2013fqw,Irsic:2017ixq,Palanque-Delabrouille:2019iyz,Garzilli:2019qki} and provided results for canonical warm dark matter (WDM), \emph{i.e.}~thermalised DM that freezes out relativistically in the early universe. In this scenario, masses below 5.3 keV~\cite{Palanque-Delabrouille:2019iyz} could be excluded under reasonable assumptions; see, however, Ref.~\cite{Garzilli:2019qki} for a critical discussion of these assumptions, under which the bound is reduced to 1.9 keV. Here we consider non-thermalised DM, \emph{i.e.}~a DM candidate that is so weakly coupled to the standard model that it never reaches thermal equilibrium with the primordial plasma of standard model particles. Such candidates are commonly referred to as \emph{feebly interacting massive particles} (FIMPs). In these scenarios we can, therefore, no longer rely on the standard freeze-out mechanism to produce the correct relic abundance of DM\@. However, despite its feeble interaction, DM may still be produced in sufficient amounts by scatterings or decays of other (thermalised) particles. There are mainly two such production mechanisms that have been considered in the literature. \emph{(i)} Freeze-in (FI)~\cite{Bolz:2000fu,Pradler:2006hh,McDonald:2001vt,Covi:2002vw,Asaka:2005cn,Frere:2006hp,Hall:2009bx} is the non-efficient production of DM from decays or scatterings of particles in the thermal bath, where \emph{non-efficient} refers to the fact that the respective production rate is small compared to the Hubble expansion rate. \emph{(ii)} The superWIMP (SW) mechanism~\cite{Covi:1999ty,Feng:2003uy} is the late decay of a frozen-out mother particle into DM\@. While both contributions may arise from the very same decay process, they typically take place at very different times. Hence, their characteristic momentum distributions -- relevant for their imprint on cosmological structures -- can be very different.
As the scales considered by Lyman-$\alpha$ data lie in the non-linear regime, assessing the impact of a certain DM model on the Lyman-$\alpha$ forest normally requires computationally expensive hydrodynamic simulations. However, on the basis of only the linear matter power spectrum -- which we obtain from a modified version of the Boltzmann code \textsc{class}~\cite{Blas:2011rf, Lesgourgues:2011rh} -- we can, to good approximation, use the results obtained for WDM to estimate Lyman-$\alpha$ constraints for the model considered here. To do so, we employ three different strategies, with varying degrees of sophistication and uncertainty. First, following the approach of~\cite{Bae:2017dpt}, we consider the velocity dispersion as the characteristic measure of the free-streaming of DM\@. Second, we use an analytical fit to the transfer function, which relates the linear matter power spectrum of a model to a CDM one, and constrain the fitting parameters, as was done in~\cite{Bode:2000gq,Viel:2005qj}. Finally, we make use of the area criterion~\cite{Murgia:2017lwo,Schneider:2016uqi}, which considers the integral over the one-dimensional linear power spectrum as a characteristic quantity constrained by Lyman-$\alpha$ data. Although all three methods allow us to derive limits in the pure FI or SW case, only the last one enables the analysis of the mixed scenario. In our analyses, we also study the conditions under which the FIMPs considered here could give rise to significant contributions to $\Delta N_{\rm eff}$, reaching the conclusion that this is not expected to provide any more stringent constraints on the FI or SW scenarios. Having derived general bounds for these models, we then consider a benchmark scenario with a top-philic simplified $t$-channel mediator model, introducing a coloured scalar top-partner and a singlet Majorana DM candidate, both odd under a discrete $Z_2$-symmetry that stabilises DM\@.
We thereby extend the work of~\cite{Garny:2018ali}, where Lyman-$\alpha$ constraints on the model were estimated by simple considerations of the free-streaming length. Furthermore, following~\cite{Harz:2018csl}, we take into account important bound state formation effects in the freeze-out process of the mediator, which are particularly relevant for the computation of the Lyman-$\alpha$ constraints towards high mediator masses. This paper is organised as follows. We begin in Sec.~\ref{sec:FIbasics} by discussing the different production mechanisms for FIMPs, as well as the corresponding Boltzmann equations. In Sec.~\ref{sec:cosmo}, we focus on the cosmological implications of FIMP DM, reviewing the observables that will constrain these models. We then focus on a specific realisation of our set-up, top-philic FIMPs, in Sec.~\ref{sec:topphilic}, before concluding in Sec.~\ref{sec:concl}. Finally, in App.~\ref{sec:more} we go into more detail about SW production, in App.~\ref{sec:somBSF} we provide all relevant expressions for Sommerfeld enhancement and bound state formation, and in App.~\ref{sec:fit_fluid} we discuss the various approximations and considerations made to extract the Lyman-$\alpha$ bounds. \section{FIMPs in the early universe} \label{sec:FIbasics} To understand the production of FIMPs, we first review the underlying formalism. The case of FIMP production from decays and scatterings and its impact on small-scale structures has already been addressed in several recent works~\cite{Heeck:2017xbu,Boulebnane:2017fxw,Bae:2017dpt,Ballesteros:2020adh,DEramo:2020gpr}. Nevertheless, here we briefly summarise the relevant steps of the computation and specify, where relevant, the new inputs compared to the previous literature.
We also detail our implementation of FIMP momentum distribution functions in the public Boltzmann code \textsc{class}\footnote{Our modified \textsc{class} version can be found at \href{https://github.com/dchooper/class_fisw}{https://github.com/dchooper/class\_fisw}.}. A complementary discussion of the Boltzmann equations for FI can be found in \emph{e.g.}~\cite{Belanger:2018ccd,Belanger:2020npe}. \subsection{Boltzmann equations} \label{sec:BE} In order to describe the momentum distribution of FIMPs, one has to solve the unintegrated Boltzmann equation for the DM phase-space distribution function $f_\chi(t,p)$, \begin{equation} \frac{\text{d} f_\chi}{\text{d} t}={\cal C}[f_\chi] \label{eq:fcoll} \end{equation} where $\chi$ refers to the DM particle, with $t$ and $p$ the proper time and momentum, and ${\cal C}$ refers to the collision terms responsible for FIMP production from the decays or scatterings of some mother particle $B$. The number density of any species $i$ can be obtained by integrating the distribution function $f_i (t,p)$ over momenta as \begin{equation}\label{eq:numDens} n_i= g_i \int \frac{\text{d}^3 p}{(2\pi)^3}f_i (t,p)\,, \end{equation} where $g_i$ is the number of degrees of freedom (dof) of the species $i$. It is usually appropriate to re-express proper time and momentum in terms of independent dimensionless variables. In the context of the DM studied here, the time variable $t$ is traded for $x={m_{\rm ref}}/{T}$, where $m_{\rm ref}$ denotes some reference mass (often the mass of the mother particle $B$ for FIMP production) and $T$ denotes the temperature of the standard model bath. The relation between $x$, or equivalently $T$, and $t$ can be easily obtained when entropy is conserved, which we will assume throughout this work. In this case, we have $\text{d}(s a^3)/\text{d} t=0$, where $s$ is the entropy density and $a$ the scale factor.
As a result, keeping in mind that $s\propto g_{*S} T^3$, one obtains \begin{equation} \frac{\text{d}\ln T}{\text{d} t}= -{\bar H} \quad {\rm with } \quad \bar H=\frac{H}{1+1/3\, \text{d}\ln g_{*S}/\text{d}\ln T} \, , \label{eq:barH} \end{equation} where $g_{*S}(T)$ denotes the number of relativistic dof in the thermal bath of temperature $T$ contributing to the entropy, and $H= \text{d}\ln a/\text{d} t$ is the Hubble expansion rate. In a radiation dominated era, the Hubble rate reduces to \begin{eqnarray} H&=&\frac{T^2}{M_0(T)} \quad {\rm with} \quad M_0(T)=M_\mathrm{Pl} \sqrt{\frac{45}{4 \pi^3 g_*(T)}}\,, \label{eq:M0} \end{eqnarray} where $M_\mathrm{Pl}=1.2 \times 10^{19}$ GeV is the Planck mass and $g_*(T)$ denotes the number of relativistic dof in the thermal bath of temperature $T$, this time contributing to the radiation energy density. Here we mostly consider scenarios for which $g_*(T),g_{*S}(T)$ are constant before FIMP production. As a result, $\text{d}\ln T/\text{d} t=-H$ and it is convenient to use \begin{equation} x=\frac{m_B}{T} \quad{\rm and}\quad q=\frac{p}{T} \label{eq:xq-basic} \end{equation} as time and momentum variables\footnote{In full generality, $q=p/T$ is not a time-independent variable, as the temperature scales as $T\propto g_{*S}^{-1/3} a^{-1}$ while $p\propto 1/a$.} and the Boltzmann equation from eq.~(\ref{eq:fcoll}) simply reduces to \begin{equation} x H \partial_x f_\chi={\cal C}[f_\chi] \, . \label{eq:fcollx} \end{equation} In App.~\ref{sec:gSx} we discuss the relevant choice of time and momentum variables for time-varying $g_*,g_{*S}$. We assume that the initial FIMP abundance is negligible. We use the compact notation ${\rm in}\to {\rm fin} + \chi $ for the DM particle $\chi$ production processes, including decays and scatterings. With ``in'' (``fin'') we refer to an ensemble of initial (final) state particles as a source for DM production.
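For orientation, eq.~(\ref{eq:M0}) is straightforward to evaluate. A minimal numerical sketch, assuming the full standard model value $g_*=106.75$ (an illustrative choice of ours):

```python
import math

M_PL = 1.2e19  # GeV, Planck mass as quoted in the text

def M0(g_star):
    """Rescaled Planck mass of eq. (M0)."""
    return M_PL * math.sqrt(45.0 / (4.0 * math.pi**3 * g_star))

def hubble(T_GeV, g_star=106.75):
    """Hubble rate H = T^2 / M0 (in GeV) in a radiation dominated era."""
    return T_GeV**2 / M0(g_star)

print(M0(106.75))      # ~ 7e17 GeV
print(hubble(100.0))   # ~ 1.4e-14 GeV at T = 100 GeV
```

The large hierarchy between $H$ and $T$ at electroweak-scale temperatures is what makes even tiny decay widths cosmologically efficient.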
In this context, the collision term in eq.~(\ref{eq:fcollx}) reads \begin{equation} {\cal C}[f_\chi]=\frac{1}{2 g_{\chi}E_\chi}\int \Pi_\alpha\frac{ \text{d}^3 p_\alpha}{(2\pi)^32E_\alpha}(2\pi)^4 \delta^4(P_{\rm fin}+ p_\chi-P_{\rm in}) f_{\rm in} (1\pm f_{\rm fin}) (1\pm f_\chi) |{\cal M}|^2_{{\rm in}\to {\rm fin} + \chi}\,. \label{eq:coll} \end{equation} In this expression the index $\alpha$ runs over all particles in the initial and final states except for DM, $P_\alpha$ is the sum of the four-momenta of initial or final state particles for $\alpha={\rm in}$ and ${\rm fin}$, $f_{\rm in}$ refers to the product of the distribution functions of the initial state particles, and $(1\pm f_{\rm fin})$ is the product of Pauli blocking (with a minus sign) or Bose-Einstein enhancing (with a plus sign) factors for final state particles. Furthermore, $|{\cal M}|^2_{{\rm in}\to {\rm fin} + \chi}$ denotes the amplitude squared {\it summed} over initial and final state quantum numbers. For concreteness, we will focus here on 2-body decays of the form $B\to A \chi$, and $2 \to 2$ scatterings of the form $B B'\to A' \chi$ for DM production. In particular, we consider a scenario where $B$ and $\chi$ are odd under a $Z_2$ symmetry that stabilises DM\@. We will also neglect spin statistics effects by taking $(1\pm f_{\rm fin})=1$, see \emph{e.g.}~\cite{Belanger:2020npe,DEramo:2020gpr,Ballesteros:2020adh} for some complementary studies. In this paper we focus on scenarios in which the mother particle is in kinetic equilibrium while producing the DM, with $m_B> m_\chi$. For $B$ in kinetic equilibrium, its distribution function can be written as (see \emph{e.g.}~\cite{Binder:2017rgn,Gondolo:1990dk} for a discussion) \begin{equation} f_B(x,q)=\frac{Y_B(x)}{Y_B^{\mathrm{eq}}(x)} f_B^{\mathrm{eq}}(x,q)\,, \label{eq:fBSW} \end{equation} where $f^{\mathrm{eq}}(x,q)$ denotes the usual equilibrium distribution function with zero chemical potential.
In order to derive an analytic estimate for the DM distribution function, we will consider a Maxwell-Boltzmann distribution for $B$, but we have explicitly checked numerically that the results do not change significantly when considering \emph{e.g.}~a Bose-Einstein distribution, see also~\cite{Heeck:2017xbu,Belanger:2018ccd,DEramo:2020gpr}. The average DM momentum at the time of production, and its subsequent redshifted value, provide a good tool to estimate the importance of cosmological constraints arising from small-scale structure, more specifically the Lyman-$\alpha$ flux power spectrum constraints and the number of extra relativistic dof, see \emph{e.g.}~\cite{Heeck:2017xbu,Bae:2017dpt,Ballesteros:2020adh,DEramo:2020gpr} and also \emph{e.g.}~\cite{Baldes:2020nuv} in a slightly different context. In particular, the rescaled $n^{\text{th}}$-moment of the distribution is obtained by evaluating \begin{equation} \langle q^n\rangle= \frac{\int \text{d}^3 q \, q^n f_\chi(q) }{\int \text{d}^3 q \, f_\chi(q)}\,, \label{eq:qn} \end{equation} where $f_\chi(q)$ is the FIMP distribution after production ($\equiv f_\chi(x,q)$ for $x\gg x_{\rm prod}$). \subsection{FIMPs from decays} \label{sec:FI} For DM production through decays, the collision term in the Boltzmann eq.~(\ref{eq:fcollx}) reduces to \begin{equation} {\cal C}_{\rm dec}[f_\chi]=\frac{x}{16 \pi g_{\chi} q\sqrt{q^2m_B^2 + m^2_\chi x^2}}\int_{\xi_-}^{\xi_+} \text{d} \xi_B f_B |{\cal M}|^2_{B\to A\chi} \,, \label{eq:Cdec} \end{equation} where $\xi_B = E_B/T$ and the values of $\xi_{\pm}$ are discussed in App.~\ref{sec:more}, see also~\citep{Boulebnane:2017fxw}. In what follows, we distinguish between the FI and the SW production from decays of a mother particle that is in kinetic equilibrium with the thermal bath. In the case of FI production, discussed in Sec.~\ref{sec:FI-dec}, the mother is in both kinetic and chemical equilibrium.
On the other hand, SW production refers to DM production after $B$ freeze-out, \emph{i.e.}~after $B$ chemically decouples, see Sec.~\ref{sec:SW}. Accordingly, the two contributions -- although stemming from the very same decay process -- can arise at different times with distinct mean momenta and momentum distributions. This is illustrated in Sec.~\ref{sec:FI-SW}. In this context, it is convenient to introduce the dimensionless ratio \begin{equation} R_\Gamma^{\rm prod}=\frac{M_0(T_{\rm prod}) \Gamma_{B\to A\chi}}{m_B^2}, \label{eq:Rgam} \end{equation} where $M_0(T_{\rm prod})$ corresponds to the rescaled Planck mass of eq.~(\ref{eq:M0}) with the number of relativistic dof estimated at the DM production temperature $T_{\rm prod}$. \subsubsection{Freeze-in from decays} \label{sec:FI-dec} The largest contribution to DM freeze-in from decays of a bath particle $B$ arises around $x_{\rm FI}=m_B/T\sim 3$~\cite{Hall:2009bx} due to the interplay of two competing effects. On the one hand, in a radiation dominated era, $\Gamma_{B\to A\chi}/H$ increases with $x$, leading the decay to become more efficient at late times. On the other hand, once the bath particle becomes non-relativistic, \emph{i.e.}~$x\gtrsim1$, its number density starts to decrease exponentially. Considering renormalisable interactions in the radiation dominated era and assuming\footnote{If the DM mass is not neglected in the computation of the DM distribution function, a further analytic expression for the latter would be needed; an expression for $\partial_x f_\chi$ is given in~\cite{Boulebnane:2017fxw}.
By integrating the distribution function numerically, Ref.~\cite{Boulebnane:2017fxw} showed that the analytic form of $f_\chi$ obtained in the limit $m_\chi\to 0$, Eq.~(\ref{eq:fdec}), is a very good approximation in the range of $q$ relevant to extract the Lyman-$\alpha$ constraints.} $m_\chi\ll m_B, m_A$ as well as a Maxwell-Boltzmann distribution for the mother bath particle $B$, \emph{i.e.}~$f_{B}= \exp(-E_B/T)$, we can obtain a simple analytic expression for $f_\chi$ of the form~\cite{Heeck:2017xbu,Bae:2017dpt} \begin{eqnarray} g_{\chi} f_\chi^\mathrm{FI,\,dec}(q)&=& 2g_B \frac{ R_\Gamma^{\rm FI}}{\delta^3} \sqrt{\frac{\pi \delta}{q}} \exp\left( - \frac{q}{\delta}\right),\label{eq:fdec}\\ &{\rm with} &\delta=\frac{m_B^2-m_A^2}{m_B^2}\label{eq:deltdec} \end{eqnarray} where we use the short-hand notation $f^\mathrm{FI,\,dec}_{\chi}(q)=f^\mathrm{FI,\,dec}_{\chi} (x\to\infty,q)$. Furthermore, $g_B$ is the number of dof of $B$, and $T_{\rm FI}= m_B/x_{\rm FI}$ is the temperature at FI production. Further details on the computation and the approximations involved are given in App.~\ref{sec:more}. Integrating eq.~(\ref{eq:fdec}) over momenta, one obtains the DM abundance from FI, \begin{equation} \Omega_\chi h^2|_\mathrm{FI,\,dec}=m_\chi\times\frac{135}{8\pi^3}\frac{g_B}{g_*\left( T_{\rm FI}\right)}R^\mathrm{FI}_\Gamma \frac{s_0h^2}{\rho_\mathrm{crit}}\,, \end{equation} where $\rho_\mathrm{crit}=3 M_\mathrm{Pl}^2 H_0^2/(8\pi)$ is the critical energy density, $s_0$ is the entropy density today, and $h$ is the rescaled Hubble parameter today, $h=H_0/(100\,\text{km}\,\text{s}^{-1}\text{Mpc}^{-1})\sim 0.7$.
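The abundance above follows from the momentum integral of eq.~(\ref{eq:fdec}), $\int \text{d} q\, q^2\, g_\chi f_\chi^{\mathrm{FI,\,dec}}=(3\pi/2)\,g_B R_\Gamma^{\rm FI}$, which is independent of $\delta$. A quick numerical cross-check (parameter values are illustrative):

```python
import math

def f_chi_fi(q, delta, g_B=1.0, R_gamma=1.0):
    """g_chi * f_chi^{FI,dec}(q) of eq. (fdec)."""
    return 2.0 * g_B * (R_gamma / delta**3) * math.sqrt(math.pi * delta / q) * math.exp(-q / delta)

delta, h = 0.9, 1e-4
w0 = w1 = 0.0
for i in range(400000):            # midpoint rule on q in [0, 40]
    q = (i + 0.5) * h
    f = f_chi_fi(q, delta)
    w0 += q**2 * f * h             # number-density integral
    w1 += q**3 * f * h             # first-moment integral
print(w0, 1.5 * math.pi)           # -> (3 pi / 2) g_B R_Gamma, independent of delta
print(w1 / w0, 2.5 * delta)        # mean momentum <q> = (5/2) delta
```

The same quadrature also reproduces the mean momentum quoted below, $\langle q\rangle=5\delta/2$, directly from the distribution.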
Making use of eqs.~(\ref{eq:qn}) and (\ref{eq:fdec}), the $n^{\rm th}$-moment of the rescaled DM momentum distribution of (\ref{eq:fdec}) is given by \begin{equation} \langle q^n\rangle|_\mathrm{FI,\,dec}= \frac{4}{3\sqrt{\pi}}\Gamma\left(\frac{5}{2}+n\right)\times \delta^{n}\,, \label{eq:qavFIdec} \end{equation} where $\Gamma$ denotes the Euler Gamma function. In particular, $\langle q\rangle|_\mathrm{FI,\,dec}=5/2 \times \delta$ while for thermal WDM one would get $\langle q\rangle_{\rm thermal}\simeq 3$, see \emph{e.g.}~\cite{Heeck:2017xbu} for a discussion. \subsubsection{FIMPs from superWIMP mechanism} \label{sec:SW} After the time at which $B$ gets chemically decoupled, usually referred to as the freeze-out time, around $x_{\rm FO}\sim 25$, the frozen-out particle eventually decays into DM and, hence, provides a contribution to the DM abundance. This DM production mechanism is usually referred to as the SW mechanism. Interestingly, the associated DM phase-space distribution might also peak at significantly higher $q$ values than in the case of FI production. To get an analytic expression of the DM phase-space distribution, we employ the ansatz of eq.~(\ref{eq:fBSW}) for the bath particle distribution, together with the non-relativistic expression for the $B$ equilibrium comoving density, $Y_B^{\mathrm{eq}}(x)$.
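The moments in eq.~(\ref{eq:qavFIdec}) follow from weighting $f_\chi\propto q^{-1/2}e^{-q/\delta}$ by $q^2$; a quick numerical cross-check (our own sketch, not part of the paper's pipeline):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def qn_FI_dec(n, delta=1.0):
    # moments of f_chi ∝ q^{-1/2} exp(-q/delta), weighted by q^2 as in eq. (qn)
    num, _ = quad(lambda q: q**(n + 1.5) * np.exp(-q / delta), 0, np.inf)
    den, _ = quad(lambda q: q**1.5 * np.exp(-q / delta), 0, np.inf)
    return num / den

analytic = lambda n, delta=1.0: 4 / (3 * np.sqrt(np.pi)) * gamma(2.5 + n) * delta**n

for n in (1, 2):
    print(n, qn_FI_dec(n), analytic(n))   # n=1 -> 5/2, n=2 -> 35/4 (delta = 1)
```

Both evaluations reproduce $\langle q\rangle = 5\delta/2$ and $\langle q^2\rangle = 35\delta^2/4$.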
After chemical decoupling only late $B$ decays can affect the $B$ abundance, so that $Y_B$ should satisfy \begin{equation} \frac{\text{d} \ln Y_B}{\text{d} x}=-R^\mathrm{SW}_\Gamma x \frac{K_1(x)}{K_2(x)}\quad \Rightarrow \quad Y_B(x)\simeq Y_{\rm FO} e^{- R_\Gamma^{\rm SW}(x^2-x^2_{\rm FO})/2 } \quad[x>x_{\rm FO} ]\,, \label{eq:YB} \end{equation} where $R^{\rm SW}_\Gamma$ is given by eq.~(\ref{eq:Rgam}) with $M_0=M_0( T_{\rm SW})$ and $Y_{\rm FO}$ is the roughly constant frozen-out bath particle abundance between $B$ chemical decoupling and complete decay to DM at $x_{\rm SW}$, \emph{i.e.}~$Y_B\simeq Y_{\rm FO}$ for $x_{\rm FO}\lesssim x\lesssim x_{\rm SW}$. In order to derive the above analytic expression we have further assumed that ${K_1(x)}/{K_2(x)} \simeq 1$ in the non-relativistic limit, as well as a constant number of relativistic dof\@. From eq.~(\ref{eq:YB}) it is clear that the characteristic temperature parameter at which the decay takes place is \begin{equation} x_{\rm SW}=\sqrt{\frac{2}{R_\Gamma^{\rm SW}}} \,. \label{eq:xSW} \end{equation} Plugging the above inputs into eq.~(\ref{eq:Cdec}) we can readily integrate over $\xi_B$ with the lower integration bound $\xi_{B\, \rm min}=q/\delta+\delta x^2/(4q)$ and get \begin{eqnarray} g_{\chi} \partial_x f_\chi^{\rm SW}(x,q)&=& \frac{Y_B(x)}{Y_B^{\mathrm{eq}}(x)} \times\frac{g_B }{ \delta}\frac{x^2 }{ q^2} R_\Gamma^{\rm SW} \exp\left( -q/\delta-\delta x^2/(4 q)\right)\,. \label{eq:dfXdxR} \end{eqnarray} Integrating eq.~(\ref{eq:dfXdxR}) over $x$ we obtain \begin{eqnarray} g_{\chi}f_\chi^{\rm SW}(q)&\simeq& \sqrt{8\pi} \, \frac{ C_\mathrm{SW} }{q \delta}\, \exp\!\left(-\frac{2 R^{\rm SW}_\Gamma q^2}{\delta^2}\right) \cr &{\rm with}& C_{\rm SW}=g_{*S}(x_{\rm SW}) Y_{\rm FO} \frac{R^{\rm SW}_\Gamma}{\delta}(2 \pi)^{3/2}\frac{ 2 \pi^2 }{45 }\,, \label{eq:fSW} \end{eqnarray} where $g_{*S}$ has to be evaluated at the temperature of SW decay.
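The quality of the $K_1/K_2\to 1$ approximation in eq.~(\ref{eq:YB}) can be probed numerically. The sketch below (our own illustration, using an assumed $R_\Gamma^{\rm SW}=7\times10^{-4}$ and $x_{\rm FO}=25$) integrates the exact rate equation with the full Bessel-function ratio and compares it to the closed form:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import kve  # exponentially scaled K_nu; the scaling cancels in the ratio

R_SW = 7e-4    # illustrative value, matching one benchmark of this section
x_FO = 25.0

# d lnY_B/dx = -R x K1(x)/K2(x), integrated with the full Bessel ratio
rhs = lambda x, lnY: [-R_SW * x * kve(1, x) / kve(2, x)]
sol = solve_ivp(rhs, (x_FO, 200.0), [0.0], rtol=1e-10, atol=1e-12,
                dense_output=True)

x = 150.0
Y_exact = np.exp(sol.sol(x)[0])                       # Y_B / Y_FO
Y_approx = np.exp(-R_SW * (x**2 - x_FO**2) / 2.0)     # eq. (YB) with K1/K2 -> 1
print(Y_exact, Y_approx)
```

Since $K_1(x)/K_2(x)<1$, the approximation slightly overestimates the decay rate; for these inputs the two abundances agree at the ten-percent level.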
To derive such a simple expression, we have assumed that the relevant $(x,q)$ parameter space for SW corresponds to $x\gg x_{\rm FO}$ and $2 q R^{\rm SW}_\Gamma\ll\delta$, see App.~\ref{sec:more} for details. In addition, the results derived here assumed that $g_{*S}$ is constant throughout SW production. While this is not always true, we have explicitly checked that when considering $g_{*S}=g_{*S}(x_{\rm SW})$ in eq.~(\ref{eq:fSW}) the results are in very good agreement with numerical calculations taking a time-dependent $g_{*S}$ into account, see the discussion in App.~\ref{sec:gSx}.\footnote{Notice that in~\cite{Merle:2015oja,Baumholzer:2019twf}, $B$ has been assumed to be kinetically decoupled since freeze-out time, \emph{i.e.}~eq.~(\ref{eq:fBSW}) does not hold. This is usually not the case when $B$ is charged under the standard model gauge group, which we assume here. Therefore, we cannot directly compare our results to theirs.} Finally, integrating eq.~(\ref{eq:fSW}) over momenta, we simply recover that the DM abundance arising from SW, $Y_\chi^{\rm SW}$, is equal to $Y_{\rm FO}$, confirming the consistency of our approach. We can also easily evaluate the $n^\mathrm{th}$-moments of the DM rescaled momentum distribution (eq.~(\ref{eq:fSW})) from SW production, which reduce to \begin{equation} \langle q^n\rangle|_{\rm SW}\simeq \left( 2 R^{\rm SW}_\Gamma\right)^{-n/2}\delta^n\,\Gamma\left(\frac{n}{2}+1\right)\, . \end{equation} In particular, for $n=1$, we have $\langle q\rangle|_{\rm SW}=\delta\sqrt{\frac{\pi}{8R^{\rm SW}_\Gamma}}$. \subsubsection{When superWIMP meets freeze-in} \label{sec:FI-SW} \begin{figure*}[t] \begin{center} \includegraphics[width=.495\textwidth]{plots/YStopDM.pdf} \includegraphics[width=.485\textwidth]{plots/PSDFISW.pdf} \vspace*{-5mm} \end{center} \caption{FIMP production from $B$ decays with $B$ in kinetic equilibrium with the standard model bath.
Two benchmarks are displayed taking $R_\Gamma= 7\times 10^{-4}$ (green curves) and $R_\Gamma= 7\times 10^{-8}$ (purple curves). \textbf{Left}: Bath particle (dashed curves) and DM (solid curves) comoving number density as a function of the time variable $x$. \textbf{Right}: FIMP distribution function multiplied by the momentum squared, $q^2 g_{\chi}f_\chi(q)$, as a function of the rescaled momentum $q$. The analytic FI and SW contributions are shown with grey dashed and dot-dashed curves respectively while the coloured solid lines correspond to the sum of the latter two. With the grey dotted curves we also show the results obtained by integrating eq.~(\ref{eq:Cdec}) without any approximation. } \label{fig:PSD} \end{figure*} As mentioned above, one single decay process can give rise to two types of FIMP DM production mechanisms: one from FI and another from SW. In Fig.~\ref{fig:PSD} we illustrate the evolution of the comoving number densities as a function of the temperature parameter $x$ (left), and the dependence of the DM distribution function $f_\chi (q)$ on the rescaled momentum (right), for two benchmarks taking $R_\Gamma= 7\times 10^{-4}$ (green curves) and $R_\Gamma= 7\times 10^{-8}$ (purple curves). We have assumed $m_B\gg m_A$, such that $\delta =1$ and $g_*=g_{*,S}= 106.75$ at both $T_{\rm FI}$ and $T_{\rm SW}$, \emph{i.e.}~$R_\Gamma=R_\Gamma^{\rm FI}=R_\Gamma^{\rm SW}$. In the left panel of Fig.~\ref{fig:PSD}, we show both $Y_B(x)$, the bath particle comoving abundance (dashed lines), and $Y_{\chi}(x)$, the DM comoving abundance (solid lines). At early times, $Y_B$ follows the equilibrium Maxwell-Boltzmann distribution, which is already becoming exponentially suppressed around $x\sim 1$. At chemical decoupling, for $x=x_{\rm FO}$, $Y_B$ freezes-out and remains constant, with $Y_B=Y_{\rm FO}$, up until $x\sim x_{\rm SW}$ where it fully decays to DM\@.
In parallel, the DM abundance is slowly produced up until $x_{\rm FI}\sim 3$ where it freezes in at a value $Y_{\chi} (x_{\rm FI})$. The second contribution to the DM abundance from the SW mechanism is produced around $x_{\rm SW}\sim 53$ and $5.3\times 10^3$ for $R_\Gamma= 7\times 10^{-4}$ and $R_\Gamma= 7\times 10^{-8}$, respectively, contributing around 2\% and 99\% to the relic DM abundance. If $Y_B(x_{\rm FO})$ is large enough compared to $Y_\chi(x_\mathrm{FI})$, the SW contribution can significantly affect the DM abundance, as visible for $R_\Gamma= 7\times 10^{-8}$ (purple curve). In the right panel of Fig.~\ref{fig:PSD}, we show the DM distribution multiplied by the rescaled momentum squared, $q^2 g_\chi f_\chi (q)$, as a function of $q$. The FI from $B$ decay contribution to $f_\chi (q)$, as in eq.~(\ref{eq:fdec}), is shown with grey dashed curves while the SW contribution from eq.~(\ref{eq:fSW}) is shown with dot-dashed curves. The sum of the latter two analytic results is shown with coloured solid curves. For comparison, we show with grey dotted curves the numerical result obtained integrating the collision term of eq.~(\ref{eq:Cdec}) without any approximations. We see that both the coloured and the grey dotted lines give rise to very similar results. More quantitatively, for $R_\Gamma= 7\times 10^{-4}$ ($7\times 10^{-8}$) we introduce a relative error below 1\% (around 2\%) in estimating the DM relic abundance by integrating the analytic result instead of the numerical one. This, in particular, illustrates that the analytic results derived in the previous section provide a very good estimate of the SW contribution to the DM abundance and distribution function. From this figure, it is also clear that the FI through decay distribution peaks around $q\sim {\cal O}(1)$, as expected from eq.~(\ref{eq:qavFIdec}), while the SW distribution is always expected to peak at larger $q$ values, giving rise to a multimodal DM distribution.
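The benchmark decay epochs and the warmth of the SW component follow directly from eq.~(\ref{eq:xSW}) and the SW moment formula; a short numerical cross-check (our own sketch, using the benchmark $R_\Gamma$ values of this section):

```python
import numpy as np
from scipy.integrate import quad

x_SW = lambda R: np.sqrt(2.0 / R)            # eq. (xSW)
for R in (7e-4, 7e-8):
    print(f"R_Gamma = {R:.0e}  ->  x_SW ~ {x_SW(R):.3g}")

# mean rescaled momentum of f_SW(q) ∝ (1/q) exp(-2 R q^2 / delta^2)
R, delta = 7e-4, 1.0
a = 2 * R / delta**2
num, _ = quad(lambda q: q**2 * np.exp(-a * q**2), 0, np.inf)
den, _ = quad(lambda q: q * np.exp(-a * q**2), 0, np.inf)
print(num / den, delta * np.sqrt(np.pi / (8 * R)))   # numeric vs analytic <q>
```

For $R_\Gamma=7\times10^{-4}$ this gives $x_{\rm SW}\approx 53$ and $\langle q\rangle|_{\rm SW}\approx 24$, i.e. an SW component peaking at much larger $q$ than the FI one.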
As a final comment, let us also mention that while here we illustrate the case where FI and SW contributions arise from the same mother particle, $B$, the most relevant contribution to each production mechanism could also originate from two different particles, see \emph{e.g.}~\cite{Baumholzer:2019twf}. \subsection{FIMPs from scatterings} \label{sec:FI-scat} FIMPs could also have been produced in the early universe through FI from scatterings. In the case of $B B'\to A' \chi$ scatterings, assuming Maxwell-Boltzmann distributions for the bath particles $B$ and $B'$, we have\footnote{In eq.~(\ref{eq:Cscat}), we have an extra factor of $1/2$ compared to~\cite{Heeck:2017xbu}, which we believe to be a typo, as our numerical integration fully agrees with the results of \cite{Bae:2017dpt}.} \begin{equation} {\cal C}_{\rm scat}[f_\chi]=\frac{1}{32\pi^2 g_{\chi} E p}\int_{s_\mathrm{min}} \text{d} s \int_{E_{A'}^\mathrm{min}}\text{d} E_{A'} \, \exp\left(-\frac{E+E_{A'}}{T}\right)\frac{\hat\sigma}{2}\frac{s}{ \sqrt{(p\cdot p_{A'})^2- (m_\chi m_{A'})^2}} \,, \label{eq:Cscat} \end{equation} where $\hat\sigma(s)$ denotes the reduced $B B'\to A' \chi$ cross-section, which is a function of the centre of mass energy squared $s$, satisfying \begin{equation} \frac{\text{d} \hat\sigma}{\text{d} t}= \frac{1}{8 \pi s} |{\cal M}|^2 \,, \label{eq:sighat} \end{equation} where the derivative is taken with respect to the Mandelstam variable $t$, and $|{\cal M}|^2$ is again the transition amplitude squared summed over initial and final state dof.
In the limit $m_\chi\ll m_{A'},m_B,m_{B'}$, eq.~\eqref{eq:Cscat} reduces to \begin{eqnarray} g_{\chi}f_\chi^\mathrm{FI,\,scat}(q)&=& \frac{1}{32 \pi^2 q^2}\frac{M_0^\mathrm{FI}}{m_{\rm ref}} \int_0^\infty \text{d} x\int_{\tilde s_\mathrm{min}}^\infty \text{d} \tilde s\frac{\hat \sigma \tilde s}{\tilde \Delta}\exp\left( - \frac{q\tilde s}{\tilde\Delta}-\frac{\tilde\Delta}{4 q}\right)\label{eq:fscat}\\ &{\rm with} &\Delta=s-m_{A'}^2 \quad {\rm and} \quad \tilde\Delta=\Delta/T^2 \,, \label{eq:deltscat} \end{eqnarray} where we again use the short-hand notation $f_\chi^\mathrm{FI,\,scat}(q)=f_\chi^\mathrm{FI,\,scat} (x\to\infty,q)$, which is the FIMP distribution today when produced through $2\to 2$ scatterings, in agreement with~\cite{Heeck:2017xbu}. In eq.~(\ref{eq:fscat}), we use a tilde to denote dimensionless variables rescaled by the temperature, \emph{e.g.}~$\tilde s = s/T^2$. As the details of the distribution function from FI through scatterings are quite model-dependent, see \emph{e.g.}~\cite{Bae:2017dpt}, we defer a more thorough discussion on the latter to Sec.~\ref{sec:topphilic}, in the context of a top-philic DM scenario.
Nevertheless, when $s_\mathrm{min} $ and $\hat \sigma$ can be assumed to be temperature-independent, it is possible to get a generic expression for $\langle q^n\rangle$ from eq.~(\ref{eq:qn}), namely \begin{equation} \langle q^n\rangle|_\mathrm{FI,\,scat}= \frac{4}{3\sqrt{\pi}}\Gamma\left(\frac{5}{2}+n\right)\times \left[1+\frac{\int \text{d} s \, \hat \sigma \, \left( -1+\left( 1-m_{A'}^2/s\right)^n\right)/s^{3/2}}{\int \text{d} s \, \hat \sigma/s^{3/2}}\right]\,, \label{eq:qnavsc} \end{equation} where the integrals over $s$ run from $s_\mathrm{min}= {\rm max}\left(\left( m_B+m_{B'}\right)^2, m^2_{A'}\right)$ to $\infty$.\footnote{In general, in eq.~(\ref{eq:fscat}), the lower integration limit on the centre of mass energy squared and the reduced cross-section could be explicit functions of the bath temperature, \emph{i.e.}~$s_\mathrm{min} = s_\mathrm{min}\left( T\right) \,$ and $\hat \sigma = \hat \sigma(s,T)$. This is, for example, the case when taking into account thermal corrections such as a temperature-dependent mass. In that case the results and implications of eqs.~\eqref{eq:qnavsc} and \eqref{eq:relicDensScat} do not apply.} The overall prefactor is nothing but $\langle q^n\rangle|_\mathrm{FI,\,dec}$ in the $\delta=1$ case. In addition, the second term in the square brackets vanishes when $m_{A'}$ is small with respect to one of the masses of the initial bath particles. Therefore, it is apparent that, when there is one initial state particle that is much heavier than the final state particles, the square brackets in eq.~(\ref{eq:qnavsc}) reduce to 1, and we recover the FI through decay result. This in particular implies that FI through decay and scattering distributions share the same $q$-dependence, \begin{equation}\label{eq: generalPSDScats} f_\chi(q)|_\mathrm{FI,\,scat}\propto q^{-1/2} \exp(-q)\,, \qquad [m_{A'},m_\chi\ll m_{B} \text{ or }m_{B'}] \,, \end{equation} which agrees with the distributions used in~\cite{DEramo:2020gpr}.
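To make the bracket of eq.~(\ref{eq:qnavsc}) concrete, consider the toy assumption of a constant $\hat\sigma$ (not a case worked out in the text): for $n=1$ a short calculation gives $1-m_{A'}^2/(3 s_\mathrm{min})$, which the sketch below verifies numerically.

```python
import numpy as np
from scipy.integrate import quad

# n = 1 bracket of eq. (qnavsc) with a constant sigma_hat (toy assumption);
# analytically it reduces to 1 - m_{A'}^2 / (3 s_min) in this case
s_min, mA2 = 1.0, 0.5          # units where s_min = 1; m_{A'}^2 = 0.5 s_min

num, _ = quad(lambda s: (-1 + (1 - mA2 / s)) / s**1.5, s_min, np.inf)
den, _ = quad(lambda s: 1 / s**1.5, s_min, np.inf)
bracket = 1 + num / den        # -> 5/6 for these inputs
q_mean = 2.5 * bracket         # prefactor = <q>|_{FI,dec} at delta = 1

print(bracket, q_mean)
```

The bracket comes out below one, so $\langle q\rangle$ stays below the pure-decay value $5/2$, as the inequality below states.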
Even when $m_{A'}$ is non-negligible, since $m_{A'}^2\leq s_\mathrm{min}$, the second term in the square brackets is always negative. We thus find that \begin{equation} \langle q^n\rangle|_\mathrm{FI,\,scat}\leq \frac{4}{3\sqrt{\pi}}\Gamma\left(\frac{5}{2}+n\right) \label{eq:qnavsclim} \end{equation} both for FI from scatterings and from decays. Finally, the contribution to the relic density from FI through scattering is given by \begin{align}\label{eq:relicDensScat} \Omega_\chi h^2|_\mathrm{FI,\,scat} = m_\chi\times \frac{s_0}{\rho_\mathrm{crit}/h^2}\frac{135M_0}{256\pi^5g_*(T_\mathrm{FI})} \times\left( \int_{s_\mathrm{min}}^{\infty}\text{d} s \; \frac{\hat \sigma}{s^{3/2}}\right), \end{align} assuming again that $s_\mathrm{min} $ and $\hat \sigma$ are temperature-independent. \subsection{FIMP distribution functions in {\mdseries \textsc{class}}} \label{sec:distrib} In order to precisely follow the cosmological evolution of the FIMPs, we have implemented the FIMP distribution functions in the public Boltzmann code \textsc{class}~\cite{Lesgourgues:2011rh}. For that purpose, it is convenient to introduce a new rescaled momentum variable, \begin{eqnarray} q_\star= \frac{p (t)}{T_\star(t)} \quad { \rm with}\quad T_\star (t)= c_\star T_\gamma(t_{\rm prod}) \frac{a_{\rm prod}}{a(t)} \label{eq:TNCDM_gen} \end{eqnarray} where $p\propto 1/a$ is the proper momentum, $c_\star$ is a constant factor that will be chosen for each FIMP production mode, and $a_{\rm prod}$ and $T_\gamma(t_{\rm prod})$ are the scale factor and the photon temperature at the time of production. The definition of $T_\star (t)$ is introduced in \textsc{class}~through the input variable ${\tt T_{ncdm}}$ which corresponds to the ratio of the temperatures $T_\star$ and $T_\gamma$ today.
Using eq.~(\ref{eq:TNCDM_gen}), the latter dimensionless variable takes the form \begin{equation} {\tt T_{ncdm}}=\frac{T_\star(t_0)}{T_\gamma(t_0)}=c_\star a_{\rm prod} \frac{ T_\gamma(t_{\rm prod}) }{ T_\gamma(t_0)}=c_\star\left(\frac{g_{*S}(t_0)}{ g_{*S}(t_{\rm prod})}\right)^{1/3} \,, \label{eq:TNCDMqCLASS} \end{equation} where $t_0$ refers to the time today, the scale factor today is $a_0=1$ and $g_{*S}(t_0)=3.91$. We see that, up to the constant prefactor $c_\star$, ${\tt T_{ncdm}}$ reduces to the ratio of the numbers of relativistic dof today and at production time, to the power 1/3, see \emph{e.g.}~\cite{Lesgourgues:2011rh,Ballesteros:2020adh} for other NCDM models. In practice, for our implementation of FI and SW in \textsc{class}, we have chosen the $c_\star$ prefactors in eq.~(\ref{eq:TNCDMqCLASS}) to be $c^{\rm FI}_\star=\delta$ and $c^{\rm SW}_\star= \delta/\sqrt{2 R^{\rm SW}_\Gamma}$. This implies that the distribution functions for FI from decay and SW of eqs.~(\ref{eq:fdec}) and~(\ref{eq:fSW}) take the following simpler forms: \begin{equation} \begin{dcases*} g_{\chi} f_\chi^\mathrm{FI,\,dec}(q_\star)= 2g_B \frac{ R_\Gamma^{\rm FI}}{\delta^3} \sqrt{\pi} \times \left[\frac{1}{q_\star^{1/2}}\, \exp\!\left(-q_\star\right)\right]\, & for FI through decays,\\ g_{\chi} f_\chi^{\rm SW}(q_\star)= \frac{4 \sqrt{\pi R_\Gamma^{\rm SW}} C_{\rm SW} }{ \delta^2} \times \left[\frac{1}{q_\star}\, \exp\!\left(-q_\star^2\right)\right]\, & for SW\,, \end{dcases*} \label{eq:fCL} \end{equation} where the superscript FI or SW in $R_\Gamma$ indicates that the number of relativistic dof in $M_0$ has to be determined at $T_{\rm FI}$ or $T_{\rm SW}$.
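As a quick numerical illustration of eq.~(\ref{eq:TNCDMqCLASS}) (our own sketch, assuming production in the full standard model bath, $g_{*S}=106.75$, and an illustrative $R_\Gamma^{\rm SW}=7\times10^{-4}$):

```python
g0 = 3.91   # g_{*S} today

def T_ncdm(c_star, g_prod):
    # eq. (TNCDMqCLASS): c_star * (g_{*S}(t0) / g_{*S}(t_prod))^{1/3}
    return c_star * (g0 / g_prod)**(1.0 / 3.0)

# FI through decays: c_star = delta = 1, production at g_{*S} = 106.75
print(T_ncdm(1.0, 106.75))
# SW with the same g_{*S} and an assumed R_Gamma^SW = 7e-4: c_star = delta/sqrt(2R)
print(T_ncdm(1.0 / (2 * 7e-4)**0.5, 106.75))
```

The FI value is $\approx 0.33$, while the SW one is enhanced by the large $1/\sqrt{2R_\Gamma^{\rm SW}}$ prefactor, reflecting the much later production.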
The resulting dimensionless variables ${\tt T_{ncdm}}$ which are provided as an input to the \textsc{class}~code then read \begin{equation} \begin{dcases*} {\tt T_{ncdm}^\mathrm{FI}}= \delta \times \left(\frac{g_{*S}(t_0)}{g_{*S}(T_{\rm FI})}\right)^{1/3} & and $T_{\rm FI}= \frac{1}{3} m_B$, \\ {\tt T_{ncdm}^\mathrm{SW}}= \frac{\delta}{ \sqrt{2 R_\Gamma^{\rm SW}}}\times \left(\frac{g_{*S}(t_0)}{g_{*S}(T_{\rm SW})}\right)^{1/3} & and $T_{\rm SW}= \sqrt{\frac{R_\Gamma^{\rm SW}}{2} }m_B$. \end{dcases*} \label{eq:TNCDM} \end{equation} Notice that the momentum dependence of the SW distribution in eq.~(\ref{eq:fCL}) is the same as in the case of moduli decay in a radiation dominated era, considered in~\cite{Ballesteros:2020adh}. Let us also mention that in the case of FI through scatterings and under the same assumptions used to derive eq.~(\ref{eq:qnavsc}), we expect a similar $q_\star$ dependence as in the case of FI through decays, but the prefactor would become cross-section dependent instead of decay-rate dependent, see Sec.~\ref{sec:FI-scat} for details. Finally, using the above parametrisation in eq.~(\ref{eq:qn}), the mean rescaled momenta $\langle q_\star\rangle$ and the mean rescaled squared momenta $\langle q_\star^2\rangle$ reduce to \begin{equation} \begin{dcases*} \langle q_\star\rangle_{\rm FI,\,dec}=\frac{5}{2}, \quad \langle q_\star^2\rangle_{\rm FI,\,dec} = \frac{35}{4} & for FI through decays,\\ \langle q_\star\rangle_{\rm SW}=\frac{\sqrt{\pi}}{2}, \quad \langle q_\star^2\rangle_{\rm SW}=1 & for SW.\\ \end{dcases*} \label{eq:qstarav} \end{equation} Following the discussion in Sec.~\ref{sec:FI-scat}, we can simply replace the equality sign with $\lesssim$ in the case of FI through scatterings. In the next section, we will use the different quantities introduced in this subsection to characterise the typical NCDM cosmological imprint of FIMP DM and the associated constraints.
\section{Imprint of FIMPs on cosmological observables} \label{sec:cosmo} Once FIMPs have been produced at a time where the standard model bath temperature is $T=T_{\rm prod}$, with $T_{\rm prod}= T_{\rm FI}$ ($T_{\rm SW}$) for production from the FI (SW) mechanism, the resulting DM particles free-stream. If their velocity is sufficiently large at late times, they can free-stream from overdense to underdense regions and prevent small-scale structure formation. Furthermore, if FIMPs are still relativistic at Big Bang Nucleosynthesis (BBN) or Cosmic Microwave Background (CMB) times, they constitute extra radiation dof that might be constrained by $\Delta N_{\rm eff}$ bounds. In Secs.~\ref{sec:lyman-alpha} and~\ref{sec:delta-neff} we study the resulting constraints on cosmological observables. We show that when the DM abundance $\Omega_\chi h^2=0.12$ results at 100\% from the FI or from the SW mechanism, Lyman-$\alpha$ data provide a lower bound on the DM mass of the form \begin{equation} m_\chi \gtrsim \begin{dcases*} m_{\rm FI}^{\rm lim} \times \delta \times \left( \frac{106.75}{g_{*S}(T_{\rm FI})} \right)^{1/3}& for FI through decays,\\ m_{\rm SW}^{\rm lim}\times \delta \times \left( \frac{106.75}{g_{*S}(T_{\rm SW})} \right)^{1/3} \times \left(R_\Gamma^{\rm SW}\right)^{-1/2} & for SW,\\ \end{dcases*} \label{eq:limsly} \end{equation} where the prefactors $m_{\rm FI,SW}^{\rm lim}$ are in the keV mass range, see the summary in Tab.~\ref{tab:NCDMnounds}. The results for FI are valid for FI from decays as well as for any FI from scattering scenario that would give rise to an equality in eq.~(\ref{eq:qnavsclim}).
Our results for FI are in very good agreement with the previous literature in~\cite{Boulebnane:2017fxw,Bae:2017dpt,Ballesteros:2020adh,DEramo:2020gpr} when using the same methodology,\footnote{Let us in particular emphasise that for the fit to the power spectrum and the area criterion, our results are obtained by switching the perfect fluid approximation off in \textsc{class}, which is the only valid approximation for generic NCDM, see the discussion in App.~\ref{sec:fit_fluid}.} see also~\emph{e.g.}~\cite{Dvorkin:2019zdi,Dvorkin:2020xga} for similar results obtained in a slightly different context. On the other hand, for mixed FI-SW scenarios a more detailed analysis is needed, see Sec.~\ref{sec:mixed}. \renewcommand{\arraystretch}{1.15} \begin{table}[t] \centering \begin{tabular}{ | c | c | c | c | } \hline Probe & NCDM test & $m_{\rm FI}^{\rm lim}$ [keV] & $m_{\rm SW}^{\rm lim}$ [keV] \\ \hline \hline \multirow{3}{2cm}{\centering Lyman-$\alpha$}&Velocity dispersion, see Sec.~\ref{sec:approx-lyman-al} & 16 & 3.8 \\ &Fits to transfer function, see Sec.~\ref{sec:pure-fi-sw} & 15 & 3.9 \\ &Area criterion, see Sec.~\ref{sec:mixed} & 15 & 3.8\\ \hline $\Delta N_{\rm eff}$ & see Sec.~\ref{sec:delta-neff} & $1.3\times 10^{-2}$& $3.4\times 10^{-3}$\\ \hline \end{tabular} \caption{ Mass scales in keV entering into the lower bounds on the FIMP masses of eq.~(\ref{eq:limsly}). They arise from the FIMP NCDM imprint on cosmological structures assuming that 100\% of the DM content results from the FI or the SW production mechanism. The values for $m_{\rm FI,SW}^{\rm lim}$ correspond to the bounds $m_{\rm WDM}^{\mathrm{Ly}\alpha}>5.3$ keV and $\Delta N_{\rm eff} (T_{\rm BBN})<0.31$.} \label{tab:NCDMnounds} \end{table} \subsection{FIMP free-streaming and Lyman-$\alpha$ bound} \label{sec:lyman-alpha} The Lyman-$\alpha$ forest flux power spectrum probes hydrogen clouds at redshifts $2\lesssim z \lesssim 6$.
It provides constraints on the matter power spectrum on small scales~\cite{Ikeuchi1986, Rees1986}. The scales tested by Lyman-$\alpha$ data, typically $0.5\,h^{-1}\text{Mpc} < \lambda < 100\,h^{-1}\text{Mpc}$~\cite{Murgia:2017lwo}, are in the non-linear regime, so that computationally expensive hydrodynamical N-body simulations would be required in order to properly test a given NCDM scenario. These expensive simulations have been performed for thermal WDM\@. Following the early work of~\cite{Viel:2013fqw}, the analysis of \cite{Palanque-Delabrouille:2019iyz} obtained a lower bound of $m_{\rm WDM}^{\mathrm{Ly}\alpha}=5.3\,$keV at $95\,\%$ confidence level (CL) from Lyman-$\alpha$ flux observations. It has, however, been argued that the assumptions made in this work about the instantaneous temperature and pressure effects of the intergalactic medium might have been too strong. Relaxing these assumptions, \cite{Garzilli:2019qki} found a lower bound of $m_{\rm WDM}^{\mathrm{Ly}\alpha}=1.9\,$keV at $95\,\%$ CL\@. We take the latter as a conservative bound on the thermal WDM mass, while the one of~\cite{Palanque-Delabrouille:2019iyz} will be considered as a stringent bound. To circumvent the need for new N-body simulations for these models, in this paper we implement the FIMP distribution functions discussed in Sec.~\ref{sec:distrib} in the Boltzmann code \textsc{class}. We use this to extract the linear matter power spectrum of our NCDM scenarios, as well as the corresponding transfer functions discussed in Sec.~\ref{sec:pure-fi-sw}. We then follow a strategy similar to those applied to NCDM in \emph{e.g.}~\cite{Murgia:2017lwo,Bae:2017dpt,Baldes:2020nuv,Ballesteros:2020adh,DEramo:2020gpr}. In Secs.~\ref{sec:approx-lyman-al} and~\ref{sec:pure-fi-sw} we extract a lower bound on the DM mass in pure FI and SW scenarios, making use of the DM velocity dispersion and of fits to the transfer functions. Notice that these constraints are only valid for FIMPs accounting for 100\% of the DM content.
In Sec.~\ref{sec:mixed}, we address the case of the mixed FI-SW scenarios, or equivalently cases where a given production mechanism cannot account for all the DM, by applying the area criterion introduced in~\cite{Murgia:2017lwo}. \subsubsection{Velocity dispersion} \label{sec:approx-lyman-al} If the DM distribution is simple, \emph{e.g.}~with one local maximum, one can expect that an estimate of the bound on the FIMP mass can be derived by comparing the typical velocity of the NCDM candidate to that of the thermal WDM for which dedicated hydrodynamical simulations have been performed. Here we follow the same approach as the one proposed in~\cite{Bae:2017dpt}, where an estimated Lyman-$\alpha$ bound was obtained by considering the root mean square (rms) velocity of DM today, $\sqrt{\langle p^2\rangle_0}/m_\chi$. Here $\langle p^2\rangle_0$ refers to today's second moment of the momentum distribution, directly related to the DM velocity dispersion today. When DM arises from one single production mechanism or production channel, $\sqrt{\langle p^2\rangle_0}/m_\chi= \sqrt{\langle q_\star^2\rangle}\,{\tt T_{ncdm}}\, T_\gamma(t_0)/m_\chi\,$. The lower bound \begin{equation} m_\chi\gtrsim 1.75 \,{\rm keV} \times \sqrt{\langle q_\star^2\rangle}{\tt T_{ncdm}}\times\left(\frac{m_{\rm WDM}^{\mathrm{Ly}\alpha}}{\rm keV}\right)^{4/3} \label{eq:Kamada} \end{equation} is obtained imposing that the rms velocity, $ \sqrt{\langle p^2\rangle_0}/m_\chi$, computed for a FIMP of mass $m_\chi$ equals the rms velocity for a thermal WDM candidate of mass $m_{\rm WDM}^{\mathrm{Ly}\alpha}$ saturating the \mbox{Lyman-$\alpha$} bound. Notice that $\sqrt{\langle q_\star^2\rangle}$ in eq.~(\ref{eq:Kamada}) corresponds to the warmness parameter $\tilde \sigma$ of~\cite{Bae:2017dpt} and that~\cite{Ballesteros:2020adh} derived the same constraints by equating the equations of state of the FIMP and the WDM, following the early work of~\cite{Colombi:1995ze}.
Eq.~(\ref{eq:Kamada}) was also used in~\cite{DEramo:2020gpr} in the context of FI to be compared to other methodologies. In those references it has already been argued that eq.~\eqref{eq:Kamada} can provide a very good estimate of the Lyman-$\alpha$ constraint for FIMPs\@. Additionally, in~\cite{Jedamzik:2005sx} the DM velocity is computed in order to derive constraints on the WDM arising from the SW mechanism, and it perfectly agrees with the rms velocity used here to extract Lyman-$\alpha$ constraints. Using the stringent WDM limit $m_{\rm WDM}^{\mathrm{Ly}\alpha}=5.3$ keV from~\cite{Palanque-Delabrouille:2019iyz}, the Lyman-$\alpha$ bound on FIMP DM of eq.~(\ref{eq:Kamada}) gives the lower bound on the DM mass reported in eq.~(\ref{eq:limsly}) with $m_{\rm FI}^{\rm lim}= 16$ keV and $m_{\rm SW}^{\rm lim}= 3.8$ keV, as given in the first line of Tab.~\ref{tab:NCDMnounds}. When using the conservative bound of $m_{\rm WDM}^{\mathrm{Ly}\alpha}=1.9$ keV from \cite{Garzilli:2019qki}, the prefactors in eq.~(\ref{eq:limsly}) reduce to $m_{\rm FI}^{\rm lim}=4.0$ keV and $m_{\rm SW}^{\rm lim}= 0.97$ keV. In the cases where NCDM would only account for part of the DM content, a dedicated analysis should be performed to compare to the case of thermal WDM~\cite{Baur:2017stq}. However, as suggested in~\cite{Bae:2017dpt}, when multiple production channels are at the origin of the DM relic abundance but the total DM distribution is unimodal, one can still use the rms velocity ${\sqrt{\langle p^2\rangle_0}}/{m_\chi}$ to extract a bound on the DM mass.
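The quoted prefactors can be reproduced directly from eq.~(\ref{eq:Kamada}) with the moments of eq.~(\ref{eq:qstarav}); the sketch below (our own check, assuming production at $g_{*S}=106.75$) recovers the 16 keV and 3.8 keV values:

```python
import numpy as np

g0, gstar = 3.91, 106.75
m_wdm = 5.3                                    # keV, stringent Lyman-alpha limit
dof = (g0 / gstar)**(1.0 / 3.0)                # (g_{*S}(t0)/g_{*S}(T_prod))^{1/3}

def m_lim(sqrt_q2, c_star):
    # eq. (Kamada) with T_ncdm = c_star * dof
    return 1.75 * sqrt_q2 * c_star * dof * m_wdm**(4.0 / 3.0)

m_FI = m_lim(np.sqrt(35.0 / 4.0), 1.0)         # FI: sqrt(<q_*^2>) = sqrt(35)/2, delta = 1
m_SW = m_lim(1.0, 1.0 / np.sqrt(2.0))          # SW prefactor, times (R_Gamma^SW)^{-1/2}
print(round(m_FI, 1), round(m_SW, 2))          # ~15.9 keV and ~3.8 keV
```

Rounding to two significant figures gives exactly the first line of the table of NCDM bounds.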
Considering the definition of the second moment of the momentum distribution, it can be shown that \begin{equation} \frac{\sqrt{\langle p^2\rangle_0}}{m_\chi}=\frac{ T_\gamma(t_0)}{m_\chi}\left(\sum_{\rm prod} \left(\frac{\Omega_\chi h^2|_{\rm prod}}{\Omega_\chi h^2}\right) \times \left(\langle q_\star^2\rangle {\tt T_{ncdm}^2}\right)|_{\rm prod}\right)^{1/2}\,, \label{eq:rms-mix} \end{equation} where the sum runs over the FIMP production mechanisms, $\Omega_\chi h^2|_{\rm prod}$ refers to the $\chi$ relic abundance from a given production channel while $\Omega_\chi h^2$ refers to the total relic abundance. A first naive estimate of the Lyman-$\alpha$ bound in the case of mixed scenarios could thus be extracted by comparing the quantity $\sqrt{\langle p^2\rangle_0}/m_\chi$ to that of thermal WDM saturating the Lyman-$\alpha$ bound when $\Omega_\chi h^2=0.12$. Within this framework, we get \begin{equation} m_\chi\gtrsim 1.75 \,{\rm keV} \times \left(\frac{m_{\rm WDM}^{\mathrm{Ly}\alpha}}{\rm keV}\right)^{4/3} \times \left[\sum_{\rm prod} \left(\frac{\Omega_\chi h^2|_{\rm prod}}{0.12}\right) \times \left(\langle q_\star^2\rangle {\tt T_{ncdm}^2}\right)|_{\rm prod}\right]^{1/2} \,, \label{eq:Kamada-mix} \end{equation} where it has been assumed that $\Omega_\chi h^2=0.12$ in order to compare to the thermal WDM constraints. Let us emphasise that eq.~(\ref{eq:Kamada-mix}) is only valid if the total FIMP distribution, arising from different production processes, is unimodal. This is, for example, the case for FIMPs produced from FI through scatterings and decays, as analysed in \emph{e.g.}~\cite{Bae:2017dpt}. When the DM distribution is multimodal, as \emph{e.g.}~in a mixed FI-SW scenario, the area criterion introduced in~\cite{Murgia:2017lwo} should be used instead, see the discussion in Sec.~\ref{sec:mixed} below.
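As a worked example of eq.~(\ref{eq:Kamada-mix}), consider a hypothetical scenario (not one of the paper's benchmarks) with two FI decay channels from different mother particles, produced when $g_{*S}=106.75$ and $10.75$ and contributing 70\% and 30\% of $\Omega_\chi h^2=0.12$, assuming the summed distribution stays unimodal:

```python
import numpy as np

g0 = 3.91
q2_FI = 35.0 / 4.0                   # <q_*^2> for FI through decays

def Tncdm2(g_prod, delta=1.0):       # T_ncdm^2 for FI, eq. (TNCDM)
    return delta**2 * (g0 / g_prod)**(2.0 / 3.0)

# hypothetical mix: 70% produced at g_{*S} = 106.75, 30% at g_{*S} = 10.75
weighted = 0.7 * q2_FI * Tncdm2(106.75) + 0.3 * q2_FI * Tncdm2(10.75)
m_bound = 1.75 * 5.3**(4.0 / 3.0) * np.sqrt(weighted)   # eq. (Kamada-mix), keV
print(m_bound)
```

The colder (earlier-produced) channel dilutes the warm component only in quadrature, so the bound lies between the two single-channel limits rather than at their abundance-weighted average.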
\subsubsection{Fits to transfer function} \label{sec:pure-fi-sw} In order to parametrise the small-scale suppression of the matter power spectrum within a given NCDM model with respect to the equivalent CDM case, one can express the ratio between the CDM power spectrum, $P_{\rm{CDM}}(k)$, and the power spectrum of some new DM species $X$, $P_{X}(k)$, in terms of the transfer function $T_X$, defined as \begin{equation} P_{X}(k) = P_{\rm{CDM}}(k) \, T^2_{X}(k) \,, \label{eq:pwdm} \end{equation} where $k$ is the wavenumber. It has been shown that the transfer function for some NCDM scenarios can be parametrised in terms of a finite set of parameters and physical inputs. In particular, in the thermal WDM case,~\cite{Bode:2000gq,Viel:2005qj} use the following parametrisation to describe the transfer function, \begin{equation} T_{X}(k) = \left(1+ (\alpha_{X} k)^{2\mu}\right)^{-5/\mu} \,, \label{eq:twdm} \end{equation} where $\mu$ is a dimensionless exponent and $\alpha_X$ is the breaking scale. A more general parametrisation that can be applied to a larger set of NCDM models was also introduced in~\cite{Murgia:2017lwo,Murgia:2018now,Archidiacono:2019wdp}. In the case of thermal WDM,~\cite{Viel:2005qj} obtained a very good fit for $\alpha$ and $\mu$ from dedicated N-body simulations. We will make use of this fit, but with a minor modification to the numerical prefactor motivated in App.~\ref{sec:fit_fluid}, where we also discuss the validity of this prescription. As such, the parameters we will use in eq.~(\ref{eq:twdm}) are $\mu=1.12$ and \begin{eqnarray} \alpha_{\rm{WDM}} &=& 0.045\left( \frac{m_{\rm WDM}}{1\,\text{keV}}\right)^{-1.11}\left(\frac{\Omega_{\rm WDM}}{0.25}\right)^{0.11}\left(\frac{h}{0.7}\right)^{1.22}h^{-1}\text{Mpc}\,,\label{eq: alphaWDM} \end{eqnarray} in terms of the WDM mass $m_{\rm WDM}$.
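To get a feel for the scales involved, one can evaluate the breaking scale and the half-mode wavenumber, where $T_X^2$ drops to $1/2$; this is our own illustration, assuming nominal values $\Omega_{\rm WDM}=0.25$ and $h=0.7$:

```python
def alpha_wdm(m_wdm_keV, omega_wdm=0.25, h=0.7):
    # eq. (alphaWDM): breaking scale in h^-1 Mpc (nominal Omega, h inputs)
    return 0.045 * m_wdm_keV**(-1.11) * (omega_wdm / 0.25)**0.11 * (h / 0.7)**1.22

mu = 1.12
def transfer(k, alpha):
    # eq. (twdm)
    return (1 + (alpha * k)**(2 * mu))**(-5 / mu)

a = alpha_wdm(5.3)
# half-mode wavenumber: T^2(k_half) = 1/2 solves to (alpha k)^{2 mu} = 2^{mu/10} - 1
k_half = (2**(mu / 10.0) - 1)**(1 / (2 * mu)) / a
print(a, k_half)   # alpha in h^-1 Mpc, k_half in h Mpc^-1
```

For $m_{\rm WDM}=5.3$ keV the suppression thus sets in at $k\sim{\cal O}(10)\,h\,{\rm Mpc}^{-1}$, well inside the Lyman-$\alpha$ window quoted above.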
\begin{figure*}[t] \begin{center} \includegraphics[width=.48\textwidth]{plots/PSDTransfer.pdf} \includegraphics[width=.45\textwidth]{plots/TransferFunctions.pdf} \vspace*{-5mm} \end{center} \caption{ \textbf{Left}: Distribution functions, $q^2 g_{\chi}f_\chi(q)$, as a function of the rescaled momentum~$q$ for an example FIMP model. \textbf{Right}: Corresponding transfer functions (continuous coloured curves) as a function of the wavenumber $k$. The thermal WDM transfer function for $m_{\rm WDM}^{\mathrm{Ly}\alpha} = 5.3$ keV (dashed curve) is also shown for comparison. These curves are obtained assuming the DM model considered in Sec.~\ref{sec:topphilic} and choosing the DM mass to be $50$ keV. Fixing $\Omega_\chi h^2=0.12$, the remaining model parameters were varied such that the relative FI contribution ranges from 40\% to 97\%, see Sec.~\ref{sec:topphilic} for details. Going from purple to green indicates going from warmer to colder DM or, equivalently, going to a smaller value of the area criterion parameter $\delta A_\chi$, see Sec.~\ref{sec:mixed} for details.} \label{fig:transfers} \end{figure*} In the case of FIMPs from the FI and SW production mechanisms, the transfer function can take multiple forms. In the right panel of Fig.~\ref{fig:transfers}, we illustrate the transfer functions computed with \textsc{class}~from the distributions shown in the left panel. They correspond to different benchmark scenarios, all giving rise to $\Omega_\chi h^2=0.12$ within the top-philic DM model described in Sec.~\ref{sec:topphilic}. Each benchmark has a different FI relative contribution to the total DM relic abundance, ranging from 40\% (dark purple) to 97\% (light green). This is visible in the left panel as the FI contribution to the DM distribution function, which peaks around $q=2.5$, increases in amplitude when going from the dark purple to the light green curve.
In the right panel, we see that when \emph{e.g.}~the FI contribution tends to 100\%, the FIMP transfer function recovers the shape of a thermal WDM transfer function, depicted with a dashed curve for $m^{\mathrm{Ly}\alpha}_{\rm WDM}=5.3$ keV. This has already been pointed out in earlier works, see~\cite{Boulebnane:2017fxw,Ballesteros:2020adh}. Similarly, for 100\% SW contribution, the FIMP transfer function resembles a WDM-like shape. For intermediate relative FI or SW contributions, though, the shape of the transfer function can strongly deviate from the thermal WDM one. Using our modified \textsc{class}~version, we have checked that the transfer function of eq.~(\ref{eq:twdm}) provides a very good fit to the case of DM produced purely through the FI or SW mechanisms. For the fitting curves using $\mu=1.12$, as in the thermal WDM case, we obtain \begin{eqnarray} \alpha_{\rm{FI,\,dec}} &=& 0.164\left( \frac{m_\chi}{1\,\text{keV} }\times \frac{1}{\delta}\right)^{-0.833} \left(\frac{g_{*S}(t_0)}{g_{*S}(T_{\rm FI})}\right)^{0.278} h^{-1}\text{Mpc}\,, \label{eq:alphFI}\\ \alpha_{\rm{SW}} &=& 0.0542 \left( \frac{m_\chi}{1\,\text{keV}}\times\frac{\sqrt{R_\Gamma^{\rm SW}}}{ \delta } \right)^{-0.833} \left(\frac{g_{*S}(t_0)}{g_{*S}(T_{\rm SW})}\right)^{0.278} h^{-1}\text{Mpc}\,, \label{eq:alphSW} \end{eqnarray} where the parameter dependency of the breaking scales was inspired by the analytic estimate of the Lyman-$\alpha$ bound of eq.~(\ref{eq:Kamada}). The numerical prefactors in eqs.~(\ref{eq:alphFI}) and~(\ref{eq:alphSW}), on the other hand, have been obtained by doing a one-parameter fit based on the actual transfer functions produced by \textsc{class}. In the case of FI, the fit was done over 15 models, with a final error on the prefactors of $\sim 1.5\%$. In the case of SW, we used 20 models for the fit, with an expected error of $\sim 2\%$.
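As a cross-check, the fitted breaking scales of eqs.~(\ref{eq:alphFI}) and~(\ref{eq:alphSW}) can be evaluated directly, and their pure power-law form makes the matching to $\alpha_{\rm WDM}$ analytically invertible. The sketch below assumes $\delta=1$, $R_\Gamma^{\rm SW}=1$, $g_{*S}(t_0)/g_{*S}(T_{\rm prod})=3.91/106.75$ and illustrative cosmological inputs; it is an illustration of the fit formulas, not our full pipeline:

```python
import numpy as np

# dilution factor (g_*S(t0)/g_*S(T_prod))^0.278, assuming production well
# before the QCD transition (illustrative assumption)
G_RATIO = (3.91 / 106.75)**0.278

def alpha_wdm(m_wdm_kev):
    """Thermal WDM breaking scale (illustrative Omega_WDM = 0.26, h = 0.674)."""
    return 0.045 * m_wdm_kev**-1.11 * (0.26 / 0.25)**0.11 * (0.674 / 0.7)**1.22

def alpha_fi(m_kev, delta=1.0):
    """Breaking scale for FI from decay, eq. (alphFI), in h^{-1} Mpc."""
    return 0.164 * (m_kev / delta)**-0.833 * G_RATIO

def alpha_sw(m_kev, delta=1.0, r_gamma=1.0):
    """Breaking scale for SW production, eq. (alphSW), in h^{-1} Mpc."""
    return 0.0542 * (m_kev * np.sqrt(r_gamma) / delta)**-0.833 * G_RATIO

def m_fi_lim(m_wdm_kev=5.3):
    """Invert alpha_FI(m) = alpha_WDM(m_WDM); both sides are power laws."""
    return (0.164 * G_RATIO / alpha_wdm(m_wdm_kev))**(1.0 / 0.833)
```

Under these assumptions, matching to the stringent 5.3 keV WDM bound returns $m_{\rm FI}^{\rm lim}\simeq 15$ keV, in line with the limits quoted in the text.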
In both cases, the fit has been optimised in the mass range where we expect the Lyman-$\alpha$ constraints to appear, based on eq.~(\ref{eq:limsly}) (see App.~\ref{sec:fit_fluid} for further discussions). In a similar spirit to what was done in \emph{e.g.}~\cite{Baldes:2020nuv}, we can now compare the breaking scales ($\alpha_\mathrm{FI}$ and $\alpha_\mathrm{SW}$) found in eqs.~(\ref{eq:alphFI}) and~(\ref{eq:alphSW}) with the breaking scale for WDM ($\alpha_\mathrm{WDM}$) from eq.~(\ref{eq: alphaWDM}), to obtain approximate Lyman-$\alpha$ bounds on FIMP DM. Assuming once more that the DM is produced at 100\% through the FI or SW mechanisms and that this accounts for all of the DM abundance, taking the stringent WDM limit $m_{\rm WDM}^{\mathrm{Ly}\alpha}=5.3$ keV from~\cite{Palanque-Delabrouille:2019iyz} we get $m_{\rm FI}^{\rm lim}= 15$ keV and $m_{\rm SW}^{\rm lim}= 3.9$ keV in eq.~(\ref{eq:limsly}), see the second line of Tab.~\ref{tab:NCDMnounds}. We can see that these bounds are in very good agreement with the approximate constraints found in Sec.~\ref{sec:approx-lyman-al} using the rms velocity. When using the conservative bound of $m_{\rm WDM}^{\mathrm{Ly}\alpha}=1.9$ keV from \cite{Garzilli:2019qki} these limits reduce to $m_{\rm FI}^{\rm lim}=3.5$ keV and $m_{\rm SW}^{\rm lim}<1.0$ keV. \subsubsection{Area criterion} \label{sec:mixed} An alternative approach to extract the Lyman-$\alpha$ bounds on NCDM scenarios is based on the area criterion introduced in~\cite{Murgia:2017lwo}, see also~\cite{Schneider:2016uqi,DEramo:2020gpr,Egana-Ugrinovic:2021gnu}. The methodology goes as follows. For a given DM scenario $X$, the 3D power spectrum $P_X(k)$ has to be computed.
The deviation from the corresponding CDM scenario is obtained by evaluating the ratio \begin{equation} r(k)= \frac{P_{1\mathrm{D}}^{X}(k)}{P_{1\mathrm{D}}^{\rm CDM}(k)}\quad {\rm with} \quad P^X_{1\mathrm{D}}(k)=\int_k^{\infty} \text{d} k' \, k' \, P_X(k')\,, \label{eq:P1D} \end{equation} where $P^X_{1\mathrm{D}}$ is the 1D power spectrum in the DM scenario $X$. This ratio is estimated over the range of scales probed by the Lyman-$\alpha$ observations. In~\cite{Murgia:2017lwo} the suggested range corresponding to the MIKE/HIRES+XQ-100 combined data set, used in~\cite{Irsic:2017ixq} to derive the stringent WDM bound considered here, was taken to be \begin{equation} [k_{\rm min},k_{\rm max}]=[0.5 \,\mathrm{h}/{\rm Mpc},20 \, \mathrm{h}/{\rm Mpc}]\,. \label{eq:Ly-range} \end{equation} More precisely, in order to quantify the suppression of the power spectrum in the NCDM model $X$, one should compute the area estimator \begin{equation} \delta A_X=\frac{A_{\rm CDM}-A_X}{A_{\rm CDM}} \quad {\rm with} \quad A_X=\int_{\rm k_{\rm min}}^{k_{\rm max}} \text{d} k' \, r(k')\,, \end{equation} and $A_{\rm CDM}= k_{\rm max}-k_{\rm min}$ by definition. As underlined by the authors of~\cite{Murgia:2017lwo}, who introduced this criterion, the area criterion involves some arbitrariness in the choice of the integration limits and should, therefore, only be used after careful calibration with an example WDM model. For the cosmological and precision parameters considered in our analysis, we get \begin{equation} \delta A_{\rm WDM}=0.33\quad {\rm for} \quad m_{\rm WDM}=5.3\, {\rm keV}. \label{eq:dAw} \end{equation} An NCDM scenario that would give rise to $\delta A_X=\delta A_{\rm WDM}$ above is thus expected to saturate the stringent WDM Lyman-$\alpha$ bound considered here.\footnote{Notice that in~\cite{Murgia:2017lwo}, a much smaller $\delta A_{\rm WDM}$ of 0.21 is reported for a 5.3 keV WDM\@. We have checked together with R.
Murgia of~\cite{Murgia:2017lwo} that the methodology followed here is correct. A discrepancy with the numerical results for $\delta A_{\rm WDM}$ quoted in~\cite{Murgia:2017lwo} has also been reported in \emph{e.g.}~\cite{DEramo:2020gpr}. This emphasises the importance of recomputing self-consistently the $\delta A_{\rm WDM}$ before applying any constraint to a new NCDM scenario.} Making use of the linear 3D power spectrum computed with our modified version of \textsc{class}~for pure FI and SW DM scenarios and comparing $\delta A_\mathrm{FI,\,dec}$ and $\delta A_{\rm SW}$ to the stringent bound provided by eq.~(\ref{eq:dAw}), we get a limit similar to the one derived in Secs.~\ref{sec:approx-lyman-al} and~\ref{sec:pure-fi-sw}. More precisely, for the prefactors of eq.~(\ref{eq:limsly}), we get $m_{\rm FI}^{\rm lim}= 15$ keV\footnote{Note that if we make use of the perfect fluid approximation in \textsc{class}, we obtain $m_{\rm FI}^{\rm lim}= 16$ keV, as in~\cite{DEramo:2020gpr}. However, we will switch this approximation off for NCDM from FI, see App.~\ref{sec:fit_fluid}.} and $m_{\rm SW}^{\rm lim}= 3.8$ keV, see the third line of Tab.~\ref{tab:NCDMnounds}. We have also checked that using the fits provided in eqs.~(\ref{eq:alphFI}) and (\ref{eq:alphSW}) instead of the $P(k)$ from \textsc{class}~gives rise to the same conclusions. It appears, therefore, that in the case of pure FI or SW, all three methodologies considered in Sec.~\ref{sec:lyman-alpha} agree with each other. In particular, this suggests that a very accurate estimate of the Lyman-$\alpha$ bound, for FIMP scenarios with unimodal distribution functions, can readily be extracted from eq.~(\ref{eq:Kamada}) without going through the detailed implementation of the NCDM model in \textsc{class}.
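The area criterion can be sketched numerically as follows. For illustration only, a power-law CDM spectrum $P_{\rm CDM}\propto k^{-2.5}$ over the relevant scales is assumed here (in our analysis $P_{\rm CDM}$ is computed with \textsc{class}), so the resulting $\delta A$ values are indicative rather than exact:

```python
import numpy as np

def _trapz(y, x):
    """Simple trapezoidal rule (avoids NumPy version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def delta_a(t2, kmin=0.5, kmax=20.0, k_uv=500.0, n=4000, slope=-2.5):
    """Area criterion deltaA = (A_CDM - A_X)/A_CDM for a squared transfer
    function t2(k); P_CDM(k) ~ k**slope is an illustrative assumption."""
    k = np.logspace(np.log10(kmin), np.log10(k_uv), n)   # h/Mpc
    p_cdm = k**slope
    def p1d(weight):
        # P1D(k) = int_k^infty dk' k' P(k'), truncated at k_uv where the
        # suppressed spectrum no longer contributes appreciably
        f = k * p_cdm * weight
        seg = 0.5 * (f[1:] + f[:-1]) * np.diff(k)
        out = np.zeros_like(k)
        out[:-1] = np.cumsum(seg[::-1])[::-1]            # reverse cumulative sum
        return out
    sel = k <= kmax
    r = p1d(t2(k))[sel] / p1d(np.ones_like(k))[sel]
    a_x = _trapz(r, k[sel])
    a_cdm = _trapz(np.ones_like(r), k[sel])
    return (a_cdm - a_x) / a_cdm

# squared transfer function of a ~5.3 keV thermal WDM (alpha ~ 6.8e-3 h^{-1} Mpc)
t2_wdm = lambda k, a=6.8e-3: (1.0 + (a * k)**2.24)**(-10.0 / 1.12)
```

A warmer spectrum (larger breaking scale $\alpha$) suppresses $r(k)$ more strongly over the fixed range of eq.~(\ref{eq:Ly-range}) and therefore yields a larger $\delta A$, which is the monotonicity the criterion relies on.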
In contrast, a more advanced approach proposed in~\cite{Archidiacono:2019wdp}, where a Lyman-$\alpha$ likelihood was developed for multiple NCDM models, allows for full Markov Chain Monte Carlo (MCMC) analyses. However, such analyses are hindered by the execution speed of the corresponding NCDM model in \textsc{class}. For the models considered here, the corresponding runtime needed to calculate the matter power spectrum is of the order of $\sim 30\,$min per model\footnote{See App.~\ref{sec:fit_fluid} for more details on the computation time.}, making MCMC analyses computationally infeasible. As such, here we limit ourselves to the more simplistic methods discussed above. In the case of mixed FI-SW scenarios, the DM distribution function is multimodal and the resulting transfer function can significantly deviate from the WDM one, as illustrated in Fig.~\ref{fig:transfers}. The area criterion is the only estimator of the Lyman-$\alpha$ bound that has been carefully tested against hydrodynamical simulations for a large ensemble of NCDM scenarios, see~\cite{Murgia:2017lwo,Murgia:2019igt}. For this reason, we make use of this criterion when considering mixed FI-SW models. In particular, for the set of benchmarks of Fig.~\ref{fig:transfers}, the gradient of colours in the curves corresponds to the value of the area criterion parameter. More precisely, going from purple to green curves we have $\delta A_{\chi}= \left( 0.75,0.54,0.38,0.29,0.20,0.18\right) $, respectively, \emph{i.e.}~the first three benchmarks are excluded when considering eq.~(\ref{eq:dAw}). We have also checked that the area criterion gives rise to a more conservative bound than the estimator of eq.~(\ref{eq:Kamada-mix}) for mixed scenarios. As a result, for mixed FI-SW scenarios, it is necessary to implement the exact NCDM model in \textsc{class}~in order to extract a reliable estimate of the Lyman-$\alpha$ bound.
\subsection{Bound from $\Delta N_{\rm eff} $} \label{sec:delta-neff} The FIMPs considered here can potentially affect the effective number of relativistic non-photonic species, $N_{\rm eff}$, entering in the computation of CMB and BBN observables, see \emph{e.g.}~\cite{Merle:2015oja, Baumholzer:2019twf,Ballesteros:2020adh}. Here we consider the possibility for the DM candidates to contribute as an extra fermionic species. Our goal is, therefore, to compute their $\Delta N_{\rm eff} (T)$ contribution at a given temperature $T$, corresponding to a given scale factor $a(T)$. It is instructive to first estimate for which mass range FIMPs arising from FI or SW are still relativistic. This is the case when the rescaled momentum $ \langle q_\star\rangle $ is larger than the ratio $m_\chi/T_\star$. Using eqs.~(\ref{eq:TNCDM}) and (\ref{eq:qstarav}), the condition on the FIMP mass becomes \begin{equation} m_\chi< \begin{dcases*} \frac{\delta}{a\left( T\right)} \times 2\times 10^{-7}\, {\rm keV} & for relativistic FIMP from FI,\\ \frac{\delta}{a\left( T\right) R_\Gamma^{1/2}} \times 5\times 10^{-8}\, {\rm keV} & for relativistic FIMP from SW,\, \end{dcases*} \label{eq:relativ} \end{equation} when $g_{*S} \left( T_{\rm prod}\right) = 106.75$. From Sec.~\ref{sec:lyman-alpha}, we know that for FIMPs from FI, Lyman-$\alpha$ forest data imply a lower bound on their mass of around $15$ keV\@. FIMPs from FI with larger masses cannot be further constrained by $\Delta N_{\rm eff}$ bounds from CMB data, as they are expected to be highly non-relativistic for $a(T_{\rm CMB})\sim 10^{-3}$. For FIMPs from SW, with mass $m_\chi> 10$ keV we would need $R_\Gamma> 10^{-14}$ for them to be non-relativistic at CMB time. On the other hand, since $a\left( T_{\rm BBN}\right)\sim 10^{-10}$, one can more easily get relativistic FIMPs from both FI and SW at BBN time.
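The thresholds of eq.~(\ref{eq:relativ}) are simple to encode; the sketch below counts a FIMP as relativistic when its mass lies below the quoted threshold (as the surrounding discussion implies), assuming $g_{*S}(T_{\rm prod})=106.75$ and the approximate scale factors quoted above:

```python
# approximate scale factors at CMB and BBN times (normalised to a_0 = 1)
A_CMB, A_BBN = 1e-3, 1e-10

def relativistic_fi(m_kev, a, delta=1.0):
    """FI (decay) FIMP of mass m (keV) still relativistic at scale factor a
    when m < (delta/a) * 2e-7 keV, cf. eq. (relativ)."""
    return m_kev < (delta / a) * 2e-7

def relativistic_sw(m_kev, a, delta=1.0, r_gamma=1.0):
    """SW FIMP still relativistic when
    m < (delta / (a * sqrt(R_Gamma))) * 5e-8 keV."""
    return m_kev < (delta / (a * r_gamma**0.5)) * 5e-8
```

For $\delta=1$, a 15 keV FI FIMP is non-relativistic at CMB time but still relativistic at BBN time, which is why the BBN bound is the relevant one in what follows.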
We will, therefore, focus on $\Delta N_{\rm eff}$ at BBN time and impose the bound~\cite{Hufnagel:2017dgo} \begin{equation} \Delta N_{\rm eff}(T_{\rm BBN})< 0.31\,, \label{eq:NeffBBN} \end{equation} at 95\% CL, see also~\cite{Pitrou:2018cgg,Patrignani:2241948}. We compute the FIMP contribution to the effective number of relativistic non-photonic species, $\Delta N_{\rm eff} \left( T\right)$, following~\cite{Merle:2015oja}. In general, at a given bath temperature $T$, we should evaluate \begin{eqnarray} \Delta N_{\rm eff} (T)&=&\frac{\rho_\chi (T)- m_\chi n_\chi(T)}{\rho_{\mathrm{rel} \, \nu} (T)/N_{\mathrm{eff}}^{\nu}}\cr &=&g_\chi\frac{60}{7 \pi^4} \left(\frac{T_\star}{T_\nu}\right)^{4} \times \int \text{d} q_\star q_\star^2 \left(\left(q_\star^2+\frac{m_\chi^2 }{T_\star^2}\right)^{1/2}-\frac{m_\chi}{T_\star}\right) f(q_\star)\,, \label{eq:Neff} \end{eqnarray} where $\rho_{\mathrm{rel} \, \nu} /N_{\rm eff}^{\nu}= 2\times \frac{7}{8} \frac{\pi^2}{30} T_\nu(T)^4$ is the energy density per relativistic standard model neutrino and $T_\star, T_\nu$ are time-dependent variables. For relativistic FIMPs at BBN time, this contribution reduces to \begin{eqnarray} \Delta N_{\rm eff}^{\mathrm{rel}} \left( T_{\rm BBN}\right) &\simeq&\left.\frac{\rho^{\mathrm{rel}}_\chi }{\rho_{\mathrm{rel} \, \nu} /N_{\mathrm{eff}}^{\nu}}\right|_{ T_{\rm BBN}} =\langle q_\star\rangle \times\left.\frac{ T_\star n_\chi }{\rho_{\mathrm{rel} \, \nu}/N_{\mathrm{eff}}^{\nu}}\right|_{ T_{\rm BBN}}\,. 
\label{eq:Neffrel} \end{eqnarray} Rescaling $T_\star n_\chi$ from BBN time to today, keeping in mind that $\langle q_\star\rangle$ is constant, and that $T_\nu\left( T\right) = T$ at BBN time, we get \begin{equation} \Delta N_{\rm eff}^{\mathrm{rel}}\left( T_{\rm BBN}\right)\simeq 5.0\times10^{-4} \times \langle q_\star\rangle {\tt T_{ncdm}} \times \left(\frac{\Omega_\chi h^2}{0.12}\right) \left(\frac{10 \, \rm keV}{m_\chi}\right)\,, \end{equation} where $\Omega_\chi h^2$ is the non-relativistic FIMP abundance today. Using the values of $\langle q_\star\rangle {\tt T_{ncdm}}$ obtained above and the condition eq.~(\ref{eq:NeffBBN}) we find \begin{equation} m_\chi \gtrsim \begin{dcases*} 1.3 \times 10^{-2}\, {\rm keV} \times \delta \left(\frac{\Omega_\chi^{\rm FI} h^2}{0.12}\right) \left( \frac{106.75}{g_{*S}(T_{\rm FI})} \right)^{1/3} & for FI,\\ 3.4 \times 10^{-3}\, {\rm keV}\times \delta \left(R_\Gamma^{\rm SW}\right)^{-1/2} \left(\frac{\Omega_\chi^{\rm SW} h^2}{0.12}\right) \left( \frac{106.75}{g_{*S}(T_{\rm SW})} \right)^{1/3}& for SW,\\ \end{dcases*} \label{eq:NeffFIMP} \end{equation} where $\Omega_\chi^{\rm FI, SW}$ refers to the FIMP abundance arising from FI or SW production.\footnote{ The FI constraint in eq.~(\ref{eq:NeffFIMP}) can be applied to FI through scatterings by setting $\delta = 1$ when the conditions to extract eq.~(\ref{eq:qstarav}) are met.} At first sight, the above constraints appear weaker than the Lyman-$\alpha$ ones, in agreement with~\cite{Ballesteros:2020adh}, see also~\cite{Li:2021okx}. Notice, however, that contrary to \emph{e.g.}~eq.~(\ref{eq:limsly}), the above constraints are applicable even when ${\Omega_\chi h^2}<{0.12}$.\footnote{Also notice that our results for SW do not agree with the results of~\cite{Baumholzer:2019twf}. Let us re-emphasise, however, that~\cite{Baumholzer:2019twf} assumed that $B$ is kinetically decoupled after FO, which is usually not the case if $B$ is charged under standard model symmetries.
} \section{FIMPs within a top-philic mediator model} \label{sec:topphilic} For an application of the above results and their comparison to other constraints we consider a simplified $t$-channel mediator DM model. It supplements the standard model with a singlet Majorana fermion, $\chi$, and a coloured scalar mediator, $\tilde t$, with gauge quantum numbers identical to the right-handed top quark. Imposing a $Z_2$ symmetry under which $\chi\to-\chi$ and $\tilde t\to -\tilde t$ (while standard model particles transform evenly), $\chi$ is stable for $m_\chi<m_{\tilde t}$ and, hence, constitutes a viable DM candidate. The renormalisable interactions allowed by the $Z_2$ and gauge symmetries are described by the Lagrangian \begin{equation} \mathcal{L}_\text{int} = |D_\mu \tilde t|^2 + \lambda_\chi \tilde t\, \bar{t}\,\frac{1-\gamma_5}{2}\chi +\text{h.c.} +\lambda_{H \tilde t} \,\tilde t^\dag\tilde t H^\dag H \,, \label{eq:stopmodel} \end{equation} where $D_\mu$ is the covariant derivative, $t$ the top quark Dirac field and $H$ the standard model Higgs doublet. The masses $m_\chi,m_{\tilde t}$ and the coupling $\lambda_\chi$ are the phenomenologically relevant parameters considered here. The latter governs the (feeble) DM interactions with the thermal bath. The Higgs portal coupling, $\lambda_{H \tilde t}$, affects the interactions of the mediator with the thermal bath. For DM production via FI, during which the mediator is in thermal equilibrium, the presence of this coupling does not affect the relevant dynamics. For the case of SW production, it can contribute to the mediator annihilation during its freeze-out, potentially lowering its abundance. However, to compete with the annihilation rate associated with the strong interactions of the mediator (which are further enhanced through non-perturbative effects, see below) requires the Higgs portal coupling to be very large. 
Here we assume $\lambda_{H \tilde t}$ to be well below unity, in which case it is totally negligible for the phenomenology considered. The model is reminiscent of a supersymmetric standard model. In fact, it may be realised as a limiting case of a non-minimal supersymmetric extension in which $\chi$ is a mixture of the bino and the fermionic component of an additional supermultiplet that is a singlet under the standard model gauge group~\cite{Belanger:2005kh,Ibarra:2008kn}. However, we will remain agnostic to a possible theoretical embedding of the simplified model, assuming that the above Lagrangian captures the relevant physics. In the context of FI and SW production, this model has been studied in~\cite{Garny:2018ali}. Similar results have been obtained for other spin-assignments~\cite{Belanger:2018sti,Calibbi:2021fld}. A variant of the model without an imposed $Z_2$ symmetry was discussed in~\cite{Arcadi:2013aba,Arcadi:2014tsa}, while its phenomenology in the case of thermalised DM can be found in~\cite{Ibarra:2015nca,Garny:2018icg}. In~\cite{Garny:2018ali}, constraints from Lyman-$\alpha$ forest observations have been estimated with a comparison of the respective limits on the free-streaming length obtained for WDM from~\cite{Baur:2017stq}. Here we revisit the phenomenology and improve the analysis with respect to~\cite{Garny:2018ali} in two main aspects. First, we improve the structure formation bounds utilising the methodology outlined in Secs.~\ref{sec:FIbasics} and~\ref{sec:cosmo}. In particular, computing the DM phase-space distribution and making use of the area criterion, we can derive a reliable and more stringent bound in the region of mixed FI and SW production. Second, we take into account bound state formation effects in the mediator freeze-out, which are relevant for the SW production of DM. In the following we will first discuss the mediator freeze-out in Sec.~\ref{sec:bound-state}. 
We will then detail the FI and SW production processes of DM within the model in Sec.~\ref{sec:DMprod}, before deriving the constraints on the model parameter space in Sec.~\ref{sec:pamaspace}. \subsection{Mediator freeze-out} \label{sec:bound-state} For parameter regions with a sizeable SW contribution to the DM production, the DM density (and, in general, its phase-space distribution) depends on the evolution of the mediator abundance governed by thermal freeze-out. This process is subject to non-perturbative effects. On the one hand, gluon exchange between the initial state mediators modifies their wave function, leading to an enhancement with respect to the tree-level annihilation rate at small relative velocities, \emph{i.e.}~the Sommerfeld enhancement~\cite{Sommerfeld:1931,Hisano:2002fk,Cirelli:2007xd}. On the other hand, mediator pairs can form bound states that affect the freeze-out dynamics leading to a further reduction of the mediator abundance, see \emph{e.g.}~\cite{Liew:2016hqo,Mitridate:2017izz,Biondini:2018pwp,Harz:2018csl}. Here we consider both effects in the non-relativistic limit using the computations derived in~\cite{Harz:2018csl}. Accordingly, for mediator pair-annihilation into gluons, we employ the $s$-wave annihilation Sommerfeld factor for a Coulomb potential. Mediator pair annihilation into quark pairs is $p$-wave suppressed and, hence, sub-dominant for small relative velocities. We take into account bound state formation (ionization) via one-gluon emission (absorption) and the leading bound state decay process into a pair of gluons. We consider the ground state configuration only. Furthermore, it is assumed that the rate of bound state number changing processes (formation, ionization or decay) is large compared to all other rates involved in the mediator freeze-out. In this case, the effects of bound state formation can be described by an effective annihilation cross-section~\cite{Mitridate:2017izz}. 
It reads \begin{equation} \label{eq:effsigmav} \langle\sigma_{\tilde t\tilde t^\dagger}v\rangle_\text{eff} = \langle\sigma_{\tilde t\tilde t^\dagger\to gg}v\rangle \times S_\text{Som} + \langle\sigma_{\tilde t\tilde t^\dagger\to q \bar q}v\rangle +\langle\sigma_{\tilde t\tilde t^\dagger\to {\cal B}g} v\rangle \times \frac{\Gamma_{\!{\cal B},\text{dec}}}{\Gamma_{\!{\cal B},\text{ion}} + \Gamma_{\!{\cal B},\text{dec}}} \end{equation} where $S_\text{Som}$ is the Sommerfeld enhancement factor, $\langle\sigma_{\tilde t\tilde t^\dagger\to {\cal B}g} v\rangle$ is the thermally averaged bound state formation cross-section, $\Gamma_{\!{\cal B},\text{ion}}$ is the respective ionization rate, ${\cal B} g \to \tilde t\tilde t^\dagger$, and $\Gamma_{\!{\cal B},\text{dec}}$ its decay rate, ${\cal B} \to gg$. For further details see App.~\ref{sec:somBSF}. We compute the thermally averaged annihilation cross-sections for the perturbative processes $\tilde t\tilde t^\dagger\to gg, q \bar q$ with \textsc{MadDM}~\cite{Ambrogi:2018jqj}. 
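Eq.~(\ref{eq:effsigmav}) combines the individual ingredients linearly, with the bound-state channel weighted by the fraction of bound states that decay before being ionized. The sketch below illustrates this combination; the $s$-wave Coulomb Sommerfeld factor used here is the textbook expression and stands in for the full computation of~\cite{Harz:2018csl}:

```python
import numpy as np

def sommerfeld_s(zeta):
    """s-wave Sommerfeld factor for an attractive Coulomb potential,
    S = 2 pi zeta / (1 - exp(-2 pi zeta)) with zeta = alpha_eff / v_rel
    (standard expression, used here purely for illustration)."""
    x = 2.0 * np.pi * zeta
    return x / (1.0 - np.exp(-x))

def sigv_eff(sv_gg, sv_qq, sv_bsf, s_som, gamma_dec, gamma_ion):
    """Effective cross-section of eq. (effsigmav): gluon channel enhanced by
    S_Som, p-wave quark channel unenhanced, bound-state formation weighted
    by the decay branching Gamma_dec / (Gamma_dec + Gamma_ion)."""
    return sv_gg * s_som + sv_qq + sv_bsf * gamma_dec / (gamma_dec + gamma_ion)
```

In the two limiting regimes the weighting behaves as expected: when ionization dominates, the bound-state channel drops out; when decay dominates, the full bound-state formation cross-section contributes.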
Assuming the maintenance of kinetic equilibrium via elastic gluon scattering, $\tilde t g\to \tilde t g$, throughout the entire freeze-out process, we compute the mediator abundance by solving the integrated Boltzmann equation\footnote{When the annihilations become inefficient, only the second term in the square brackets of eq.~(\ref{eq:YtildetBME}) remains and we recover eq.~(\ref{eq:YB}), used for the SW mechanism with $B=\tilde t$, in the $x>1$ limit.} \begin{equation} \label{eq:YtildetBME} \frac{\mbox{d} Y_{\tilde t}}{\mbox{d} x}=\frac{1}{ 3 H}\frac{\mbox{d} s}{\mbox{d} x}\,\left[\frac12 \langle\sigma_{\tilde t\tilde t^\dagger}v\rangle_\text{eff}\left(Y_{\tilde t}^2-{Y_{\tilde t}^\mathrm{eq}}^{2}\right) + \frac{ \Gamma_{\tilde t}}{s} \,Y_{\tilde t}\right]\,, \end{equation} where $Y_{\tilde t}$ denotes the summed abundance of the mediator and its antiparticle and $ \Gamma_{\tilde t}$ is the (thermally averaged) rate for the mediator decay, \emph{i.e.}~for $ {\tilde t}\to t \chi$. Figure~\ref{fig:Ymed} shows the effective annihilation cross-section and the resulting evolution of $Y_{\tilde t}$ for two example mediator masses including Sommerfeld enhancement only (dashed curves) and including bound state effects in addition (solid curves). In these plots, we choose $ \Gamma_{\tilde t}$ small such that the decay is inefficient in the displayed $x$-range. The presence of bound states leads to a prolonged freeze-out process, as bound state effects cause an enhancement of $\langle\sigma_{\tilde t\tilde t^\dagger}v\rangle_\text{eff}$ at large $x$. Towards larger mediator masses, the maximum of this enhancement is shifted to higher $x$, while the effect on the mediator abundance becomes smaller. For a mediator mass of $10^3$ ($10^6$)\,GeV, bound state effects reduce the abundance by a factor of 3.9 (1.9).
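A toy version of eq.~(\ref{eq:YtildetBME}) can be integrated with standard stiff ODE solvers. The sketch below assumes radiation domination, a constant $g_*=g_{*S}=106.75$, and a constant, purely illustrative cross-section instead of the Sommerfeld/bound-state enhanced rates used in our analysis:

```python
import numpy as np
from scipy.integrate import solve_ivp

M_PL = 1.22e19      # Planck mass in GeV
GSTAR = 106.75      # assumed constant g_* = g_*S (illustration)
G_DOF = 12.0        # internal dof of mediator + antiparticle (illustration)

def y_eq(x, m):
    """Non-relativistic Maxwell-Boltzmann equilibrium yield Y_eq = n_eq/s."""
    n_eq = G_DOF * (m * m / (2.0 * np.pi * x))**1.5 * np.exp(-x)
    s = (2.0 * np.pi**2 / 45.0) * GSTAR * (m / x)**3
    return n_eq / s

def solve_yield(m, sigv, gamma=0.0, x_span=(10.0, 100.0)):
    """Integrate dY/dx = (1/3H)(ds/dx)[sigv/2 (Y^2 - Yeq^2) + (Gamma/s) Y];
    for constant g_*, (1/3H) ds/dx = -s/(H x)."""
    def rhs(x, y):
        h = 1.66 * np.sqrt(GSTAR) * (m / x)**2 / M_PL   # Hubble rate
        s = (2.0 * np.pi**2 / 45.0) * GSTAR * (m / x)**3
        pref = -s / (h * x)
        return [pref * (0.5 * sigv * (y[0]**2 - y_eq(x, m)**2)
                        + gamma / s * y[0])]
    return solve_ivp(rhs, x_span, [y_eq(x_span[0], m)],
                     method="Radau", rtol=1e-8, atol=1e-30)

# illustrative inputs: m in GeV, sigv in GeV^-2; gamma = 0 mimics the plots
sol = solve_yield(m=1000.0, sigv=3e-9)
```

An implicit method such as \texttt{Radau} is needed because the annihilation rate exceeds the Hubble rate by many orders of magnitude while the yield tracks equilibrium; the decay term ($\gamma\neq0$) can then be switched on to mimic the late-time depletion entering the SW contribution.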
\begin{figure*}[t] \begin{center} \includegraphics[width=.49\textwidth,trim= {0.3cm 0.2cm 0.5cm 0.5cm},clip]{plots/Ymed1e3.pdf} \includegraphics[width=.49\textwidth,trim= {0.3cm 0.2cm 0.5cm 0.5cm},clip]{plots/Ymed1e6.pdf} \vspace*{-10mm} \end{center} \caption{ Evolution of the mediator abundance $Y_{\tilde t}$ (purple curves, left axes) and the effective annihilation cross-section $\langle\sigma_{\tilde t\tilde t^\dagger}v\rangle_\text{eff}$ (green curves, right axes) as a function of $x=m_{\tilde t}/T$ for $m_{\tilde t}=10^3$\,GeV (left panel) and $m_{\tilde t}=10^6$\,GeV (right panel). The solid curves take into account Sommerfeld enhancement and bound state formation effects (`Sommerfeld+BSF') while for the dashed curves only the former has been considered (`Sommerfeld only'). The purple dotted curves denote the mediator equilibrium abundance $Y_{\tilde t}^\mathrm{eq}$. } \label{fig:Ymed} \end{figure*} \subsection{Dark matter production processes} \label{sec:DMprod} The leading processes for DM production are scatterings of the form $X \tilde t \to X' \chi$, where $X,X'$ denote standard model particles, and mediator decays $ {\tilde t}\to t \chi$. The latter gives rise to both a FI and a SW contribution. The respective vacuum decay rate reads \begin{equation} \Gamma_{{\tilde t}\to t \chi} = \frac{\lambda_\chi^2}{16 \pi m_{\tilde t}^3} \left(m_{\tilde t}^2 - m_\chi^2 - m_t^2\right) \lambda^{1/2}\!\left(m_{\tilde t}^2 , m_\chi^2 , m_t^2\right)\,, \end{equation} where $\lambda(x,y,z)=x^2+y^2+z^2 - 2(xy+ xz + yz)$. Pair production of DM of the form $X X' \to \chi \chi$ is of higher order in the coupling $\lambda_\chi$ and, hence, neglected here. Among the scattering processes, we consider the leading ones in the strong coupling $\alpha_\text{s}$, \emph{i.e.}~$t \tilde t \to g \chi $ and $g \tilde t \to t \chi$, which are expected to contribute similarly.
However, the second process is subject to a soft divergence, which we regularise by introducing a thermal mass for the gluon~\cite{Binder:2019ikc}: \begin{equation} m_g^2\left( T \right) = \frac{4\pi \alpha_\text{s}}{6} T^2 \left( N_c + \frac{N_f}{2} + \frac{N_s}{2}\right) \, , \end{equation} where $T$ is the bath temperature, and $N_c, N_f, N_s$ are the number of colours and the numbers of active fermions and scalars in the thermal bath, respectively. The thermal mass enters the cross-section and the lower integration limit, $\tilde s_\text{min}$, in eq.~\eqref{eq:fscat}. Note that the process belongs to the ${\cal O} (\alpha_\text{s})$ corrections to the mediator decay at finite temperatures. Their rigorous computation can only be performed in thermal field theory, which is beyond the scope of this work.\footnote{See \emph{e.g.}~\cite{Biondini:2020ric,Jackson:2021dza} for recent advances in the treatment of thermal corrections relevant for FI.} Conservatively, we consider the size of the FI contribution from scattering as a rough estimate for the uncertainty of the total FI contribution to the relic density. The scattering processes contribute between 15 and 25$\%$ for a mediator mass in the range of $10^3$ GeV to $10^{10}$ GeV. Note that the thermal mass of the gluon introduces a temperature dependence in the cross-section, as well as in the minimal centre of mass energy. As a result, eqs.~(\ref{eq:qnavsc}) and~(\ref{eq:qnavsclim}) do not apply and the mean momentum shifts to a higher value $\langle q\rangle \approx 3.15$. However, this effect in the total distribution is marginal because the channel $g \tilde t \to t \chi$ is suppressed compared to the others when considering the gluon thermal mass as a regulator. This is illustrated in Fig.~\ref{fig:FIdistrimodel} for a parameter point with $\lambda_\chi = 10^{-7},\; m_{\tilde t} = 5.6\times 10^6\text{ GeV},$ and $ m_\chi = 10^{-3} \text{ GeV}$.
The total FI distribution is shown with a solid curve, while the contributions from the two scattering processes and the decay are shown with dot-dot-dashed, dotted and dashed curves, respectively. \begin{figure*}[t] \begin{center} \includegraphics[width=.7\textwidth]{plots/PSDFIDetailed.pdf} \vspace*{-5mm} \end{center} \caption{Contributions to the DM distribution function arising from FI, $q^2g_\chi f_\chi^{\rm FI}(q)$, as a function of $q$ for top-philic DM when taking $\lambda_\chi = 10^{-7},\; m_{\tilde t} = 5.6\times 10^6\text{ GeV},$ and $ m_\chi = 10^{-3} \text{ GeV}$. From top to bottom we have the total distribution arising from both decays and scattering (solid), as well as the decay (dashed) and the total scattering (dot-dashed) contributions. The latter divides into the $g\tilde t \to t\chi$ (dot-dot-dashed) and the $t\tilde t \to g\chi$ (dotted) contributions. Because of the gluon thermal mass considered to regularise the scattering cross-section for $g\tilde t \to t\chi$, the dot-dot-dashed curve has a mean momentum shifted to higher values than the expected $\langle q\rangle=2.5$ for FI. } \label{fig:FIdistrimodel} \end{figure*} For very small DM masses, the coupling $\lambda_\chi$ that yields the measured relic density can become large enough to render the decay efficient already close to the time of mediator freeze-out.\footnote{For the DM masses around the Lyman-$\alpha$ constraint, decays and scatterings are, however, at least about two orders of magnitude smaller than the Hubble rate for $x\lesssim 3$, justifying the commonly made approximations in the FI computation.} In this case, the distinction between the FI and SW production processes may be less obvious. For definiteness, we consider the contribution in the regime $x<7$ ($x>7$) to belong to the FI (SW) production. We only consider scatterings in the former while taking into account the full evolution of the mediator abundance, solving eq.~\eqref{eq:YtildetBME}, only in the latter regime. 
This value of $x$ has been chosen since scatterings are already completely negligible at this point. In addition, deviations from thermal equilibrium are still small even for the largest mediator masses considered here, which feature the earliest deviations from thermal equilibrium. Note that the SW contribution from early decays is only comparable to the FI contribution for very large mediator masses, where the larger mediator freeze-out abundance overcompensates the small ratio of masses $m_\chi/m_{\tilde t}$ entering the SW contribution to the DM relic density. \subsection{Viable parameter space and constraints} \label{sec:pamaspace} By numerically solving \begin{equation} \label{eq:lambdasol} \Omega_\chi h^2|_\text{FI}(\lambda_\chi) + \Omega_\chi h^2|_\text{SW}(\lambda_\chi)=0.12 \end{equation} we compute the required DM coupling, $\lambda_\chi$, that matches the measured relic density for a given DM and mediator mass in the considered parameter space. The resulting hyperplane is shown in Fig.~\ref{fig:paramspace} by displaying contours of equal $\lambda_\chi$ in the plane spanned by $m_\chi$ and $\Delta m= m_{\tilde t} - m_\chi$ (green curves in the left panel) and by drawing contours of equal $m_\chi$ in the plane spanned by $\lambda_\chi$ and $\Delta m$ (cyan curves in the right panel). Note that we have inverted the scale of the abscissa in the right panel to make the correspondence between the two projections more obvious. To the right of the thick black line in the left panel, $\Omega_\chi h^2|_\text{SW}(\lambda_\chi)>0.12$ for any $\lambda_\chi$ and so no solution for eq.~\eqref{eq:lambdasol} can be found. Approaching this boundary from the left, the coupling drops by orders of magnitude. This region is only visually resolved in the right panel. The black long-dashed curves denote contours of equal SW contribution. The 50\% curve divides the parameter space into the FI (to the left) and SW dominated regions (to the right). 
In the former the relic density is (asymptotically) proportional to $\lambda_{\chi}^2$ while in the latter the $\lambda_\chi$-dependence is mild. However, due to the prolonged freeze-out process discussed in Sec.~\ref{sec:bound-state}, even the SW contribution depends on $\lambda_\chi$ in a considerable part of the parameter space. In particular, in the region of large mediator masses and significant SW contribution, the mediator decays while mediator pair annihilations have not yet become fully inefficient. \begin{figure*}[t] \begin{center} \includegraphics[width=.465\textwidth]{plots/paramspace_mchi_Dm} \hspace*{5mm} \includegraphics[width=.4557\textwidth]{plots/paramspace_lam_Dm} \vspace*{-5mm} \end{center} \caption{Cosmologically viable parameter space ($\Omega h^2=0.12$) of the considered top-philic $t$-channel mediator model. \textbf{Left}: Projection onto the plane spanned by $m_\chi$ and $\Delta m= m_{\tilde t} - m_\chi$. The green contours denote decades of the coupling $\lambda_\chi$. For parameter points to the right of the thick black line, DM is over-abundant regardless of the coupling, \emph{i.e.}~no solution can be found. \textbf{Right}: Projection onto the $\Delta m$-$\lambda_\chi$-plane. The cyan contours denote decades of $m_\chi/$GeV. (To reduce clutter we only display every second line.) Note that the scale of the abscissa has been inverted allowing for a more direct comparison of the two projections. In both panels, the black, long-dashed curves denote contours of equal SW contribution to the total relic density. The grey dotted lines denote contours of equal decay length. Our constraints from the Lyman-$\alpha$ observations (Ly-$\alpha$) are shown in purple, while BBN bounds are displayed in red. Constraints from LHC searches for displaced vertices (DV) and $R$-hadrons are shown in royal blue and aqua blue, respectively. 
} \label{fig:paramspace} \end{figure*} For the computation of the Lyman-$\alpha$ bound on the top-philic DM parameter space, we have exploited the area criterion. As discussed in Sec.~\ref{sec:mixed}, this allows us to probe the mixed FI-SW scenarios encountered in this model. To this end, we have used our modified version of \textsc{class}~including the analytic FI from decay and SW DM distribution functions\footnote{Because of the prolonged freeze-out, one should a priori compute the DM distribution arising -- from both FI and SW -- fully numerically by integrating the collision term given in eq.~(\ref{eq:Cscat}), where $f_B$ would be obtained using eq.~(\ref{eq:fBSW}) with $Y_B=Y_{\tilde t}$ arising from the integrated Boltzmann eq.~(\ref{eq:YtildetBME}). We have checked that using our analytic distributions of Sec.~\ref{sec:distrib} with $Y_{\rm FO}=\Omega_\chi h^2|_{\rm SW} (\lambda_\chi) \times \rho_{\rm crit}/(s_0 h^2 m_\chi)$, we recover the numerical results up to a few percent error. } displayed in Sec.~\ref{sec:distrib}, together with fits to the numerically obtained contributions arising from FI via scatterings. We have followed the methodology described in Sec.~\ref{sec:mixed} for a selection of parameter points which were expected to lie near the Lyman-$\alpha$ limits. An example of this selection is shown in Fig.~\ref{fig:transfers} for $m_\chi=50$ keV. The Lyman-$\alpha$ observations constrain the parameter space towards small DM masses (in the FI dominated regime, \emph{i.e.}~to the left), towards small DM couplings and, hence, large mediator lifetimes (in the SW dominated regime, \emph{i.e.}~to the right), and towards large mediator masses (in the mixed regime, \emph{i.e.}~to the top). The exclusion is displayed as the purple shaded region in both panels of Fig.~\ref{fig:paramspace}. In the limit of FI and SW dominated production, the limits correspond to the ones in eq.~\eqref{eq:limsly} from the area criterion.
Note that the limits are considerably stronger than the ones estimated in~\cite{Garny:2018ali}, in particular in the region of similar contributions from FI and SW, providing an upper bound on the mediator mass of around $\Delta m = 2\times 10^9\,$GeV. Towards small mediator masses and towards large mediator lifetimes, the parameter space is constrained by two further observations. First, searches for long-lived coloured particles at the LHC constrain mediator masses up to the TeV scale. Here we illustrate the limits imposed by current data by considering searches for $R$-hadrons and displaced vertices. In the region of parameter space providing large lifetimes compared to the detector size, \emph{i.e.}~for $c\tau>100\,$m, we directly apply the limit from the 13\,TeV ATLAS search~\cite{ATLAS:2019gqq} for detector-stable $R$-hadrons containing a supersymmetric top-partner. For smaller lifetimes, we reinterpret the 13\,TeV ATLAS search for displaced vertices and missing transverse energy~\cite{ATLAS:2017tny} within our model using the recasting from~\cite{Calibbi:2021fld}. We use the squark cross-section prediction provided in~\cite{Beenakker:2016lwe} for the $\tilde t$ pair production at the LHC\@. Second, the decay of the coloured mediator during the epoch of BBN may spoil the successful predictions for the primordial abundances of light elements~\cite{Jedamzik:2006xz,Kawasaki:2017bqm,Jedamzik:2007qk,Kusakabe:2009jt}. We estimate these constraints by employing the results from \cite{Jedamzik:2006xz} for a hadronic branching ratio of 1. The relatively mild dependence of the limits on the mediator mass is approximately taken into account by linearly interpolating (and extrapolating) the results for 100\,GeV and 1\,TeV in log-log space. The same approach was followed in~\cite{Garny:2018ali}. The LHC and BBN bounds are shown in Fig.~\ref{fig:paramspace} as the blue and red shaded regions, respectively.
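The log-log mass interpolation of the BBN lifetime limits can be sketched as follows (the two anchor lifetimes below are placeholders, not the published limits of \cite{Jedamzik:2006xz}):

```python
import math

# Placeholder anchor points: maximal allowed mediator lifetime (in s) at
# m = 100 GeV and m = 1 TeV, for a hadronic branching ratio of 1.
# (Illustrative numbers only -- not the published BBN limits.)
ANCHORS = {100.0: 0.1, 1000.0: 0.03}

def tau_limit(mass_gev):
    """Linear inter-/extrapolation of log10(tau_max) in log10(m)."""
    (m1, t1), (m2, t2) = sorted(ANCHORS.items())
    slope = (math.log10(t2) - math.log10(t1)) / (math.log10(m2) - math.log10(m1))
    logt = math.log10(t1) + slope * (math.log10(mass_gev) - math.log10(m1))
    return 10.0 ** logt
```

By construction the interpolation reproduces the anchor values exactly and extrapolates the same power law beyond 1\,TeV.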
For small mediator masses, the smaller freeze-out energy density of the mediator required by eq.~\eqref{eq:lambdasol} allows for larger lifetimes. In this regime the BBN constraints arise dominantly from the observed primordial abundance of $^2$H. For larger mediator masses and correspondingly larger energy densities the stronger limits derived from $^4$He observations dominate, constraining considerably smaller lifetimes. For comparison, we highlight the contour with a lifetime of 1\,s as the dotted red curve. However, as the derivation of these bounds partly relies on an extrapolation of the results of~\cite{Jedamzik:2006xz}, we consider them as a rough estimate only and leave a dedicated analysis for future work. Notable developments of numerical tools for the reinterpretation of BBN bounds have been made more recently, see \emph{e.g.}~\cite{Depta:2020mhj,Arbey:2011nf}. The LHC searches for $R$-hadrons and displaced vertices exclude mediator masses up to around 1.3 and 1.5\,TeV, respectively. Note that the slight gap in their sensitivity for a DM mass between 1 and 10\,MeV -- corresponding to a mediator decay length of around 10 to 100\,m -- is expected to be closed when applying a reinterpretation of the null results of the ATLAS $R$-hadron search for intermediate lifetimes. For instance, the ATLAS search in~\cite{ATLAS:2018lob} performed for $R$-hadrons containing gluino bound states imposes limits down to a decay length of around 3\,m that are similarly strong as in the detector-stable regime. The null results of the CMS search for delayed jets~\cite{CMS:2019qjk} are expected to impose similar constraints for intermediate lifetimes, see \emph{e.g.}~\cite{Calibbi:2021fld} for a similar DM scenario. Finally, we stress that the interplay of the above constraints is specific to the presence of the imposed $Z_2$ symmetry that renders DM absolutely stable.
A variant of this model without a $Z_2$ symmetry has been studied, for instance, in~\cite{Arcadi:2013aba,Arcadi:2014tsa}. In general, allowing for a non-zero branching fraction of the mediator decay into standard model particles only can lower the SW contribution to the DM density and, hence, relax the upper bound on the mediator mass found here. Furthermore, such a decay mode would change the LHC bounds. However, the requirement of a sufficiently long DM lifetime and indirect detection limits from DM decay provide additional constraints. A study of such scenarios is beyond the scope of this work. \section{Conclusions} \label{sec:concl} Despite substantial experimental efforts dedicated to the search for DM, no indisputable signature of DM has been found in (astro-)particle physics experiments. As a complementary path to unveil the nature of DM, here we explored the imprint of non-cold DM, in the form of FIMPs, on cosmological observables. In particular, we provided generic lower bounds on the DM mass when DM is produced through the FI and SW mechanisms. Our FI bound is valid for FI via 2-body decays, and we discussed the applicability of this bound to the case of production via $2\to 2$ scatterings. We first revisited the Boltzmann equations relevant for extracting the DM momentum distribution arising from these two production mechanisms and provided simple analytic expressions for these. Our results are given in eqs.~(\ref{eq:fCL}). For FI we confirmed the result from previous literature, while the expression derived for the SW scenario -- where DM arises from the late decay of a frozen-out mother particle -- constitutes a new result. These analytic expressions can also be used to describe mixed FI-SW scenarios, where contributions from both FI and SW can be similarly important. Due to their relatively large velocity dispersion at the time of structure formation, FIMPs from FI and SW production can affect clustering on small scales.
Interestingly, the associated free-streaming effect can be constrained with Lyman-$\alpha$ forest data, as in the case of thermal WDM\@. For the purpose of exploiting this probe, we implemented the analytic DM momentum distribution for FI and SW in the Boltzmann code \textsc{class} (which we will make publicly available). This allowed us to calculate the linear 3D matter power spectra and the corresponding transfer functions for both pure FI and SW DM production, as well as for mixed FI-SW scenarios where both contributions are relevant. In the case of pure FI and SW production, the transfer functions are similar in shape to the one of thermal WDM\@. This enabled us to provide generic fits to the transfer functions, the breaking scale of which depends on the DM model parameters: the DM mass, the mother particle mass, the decay width, and the number of relativistic dof at the time of production, see eqs.~(\ref{eq:alphFI}) and~(\ref{eq:alphSW}). These novel results can be used to evaluate the effects of FIMP production on the linear matter power spectrum for the pure FI and SW scenarios, obviating the need to run a numerical Boltzmann code such as \textsc{class}. For the mixed FI-SW scenario, however, the corresponding distribution and transfer function can significantly deviate from the thermal WDM case, requiring a numerical computation. Usually, to calculate general Lyman-$\alpha$ bounds on these NCDM models, one should run computationally expensive hydrodynamical simulations in order to properly model the NCDM scenarios in the non-linear regime. Here we instead followed three alternative approaches to estimate the Lyman-$\alpha$ bound. The first one exploits the root mean square velocity of the DM particles today, while the second builds on the fits to the DM transfer functions that we provided and constrains the DM breaking scale.
The third one makes use of the area criterion, which measures the suppression of the 1D NCDM matter power spectrum compared to the CDM one within the range of scales probed by the relevant cosmological experiments. After careful calibration checks on thermal WDM, see eqs.~(\ref{eq: alphaWDM}),~(\ref{eq:dAw}) as well as App.~\ref{sec:fit_fluid}, we reinterpreted the existing bound from Lyman-$\alpha$ forest observations on the WDM mass in terms of generic lower bounds on the DM mass for pure FI and SW scenarios. Our results for each method are given in eq.~(\ref{eq:limsly}) and Tab.~\ref{tab:NCDMnounds}, assuming a lower bound on the thermal WDM mass given by $m_{\rm WDM}^{{\rm Ly}\alpha}=5.3$ keV\@. All three methods are in good agreement, which can be traced back to the fact that FI and SW production give rise to a cut in the matter power spectrum very similar to the one of thermal WDM\@. In the case of FI from 2-body decays, we recovered a lower bound on the DM mass of 15 keV (when $T_{\rm FI}> T_\mathrm{EW}$) in agreement with previous results, while the bound from SW could exclude much larger DM masses depending on the decay width and mass of the mother particle. For mixed FI-SW scenarios, we reached the conclusion that the area criterion provides a conservative estimate of the DM mass bound. When FIMPs arising from FI and SW are still relativistic at the time of BBN or CMB, they might provide a non-negligible contribution to $\Delta N_{\rm eff}$. We obtained a generic lower bound on the DM mass of similar form as in the case of the Lyman-$\alpha$ bound. However, imposing $\Delta N_{\rm eff} (T_{\rm BBN})< 0.31$, the resulting bound appears much looser, see Tab.~\ref{tab:NCDMnounds}. Notice, though, that the latter bound can be applied without the need of any Boltzmann code or hydrodynamical simulations and is also applicable in general to mixed scenarios, see eq.~(\ref{eq:NeffFIMP}).
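Both the transfer-function fits and the area criterion are straightforward to evaluate numerically. A minimal sketch, using the standard WDM-like parametrisation $T(k)=\big[1+(\alpha k)^{2\nu}\big]^{-5/\nu}$ with placeholder values for $\alpha$ and $\nu$ (the model-dependent values are given in eqs.~(\ref{eq:alphFI}) and~(\ref{eq:alphSW})); here the squared transfer function stands in for the 1D power-spectrum ratio entering the area criterion, and the $k$ range is illustrative:

```python
def transfer(k, alpha=0.05, nu=1.12):
    """WDM-like transfer function T(k) = [1 + (alpha*k)^(2 nu)]^(-5/nu).
    k in h/Mpc, alpha in Mpc/h; parameter values are placeholders."""
    return (1.0 + (alpha * k) ** (2.0 * nu)) ** (-5.0 / nu)

def k_half(alpha=0.05, nu=1.12):
    """Scale where the power spectrum T^2 is suppressed to one half,
    solved analytically from [1 + (alpha*k)^(2 nu)]^(-10/nu) = 1/2."""
    return (2.0 ** (nu / 10.0) - 1.0) ** (1.0 / (2.0 * nu)) / alpha

def trapz(ys, xs):
    """Simple trapezoidal integration (stdlib only)."""
    return sum(0.5 * (y0 + y1) * (x1 - x0)
               for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])))

def delta_A(ratio, k_min=0.5, k_max=20.0, n=2000):
    """Area criterion: relative suppression of the power-spectrum area,
    delta_A = 1 - A_NCDM / A_CDM with A = int dk P(k)/P_CDM(k).
    `ratio(k)` is the NCDM-to-CDM power ratio (= 1 for CDM)."""
    ks = [k_min + i * (k_max - k_min) / n for i in range(n + 1)]
    a_ncdm = trapz([ratio(k) for k in ks], ks)
    a_cdm = k_max - k_min
    return 1.0 - a_ncdm / a_cdm

# Example: a WDM-like suppression of the power ratio
suppression = lambda k: transfer(k, alpha=0.1) ** 2
```

A sharper cut (larger $\alpha$) increases $\delta A$; the Lyman-$\alpha$ bound then follows by comparing $\delta A$ with the value obtained for the limiting thermal WDM mass.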
Having seen the general application, we turned our attention to an example model, namely a coloured $t$-channel DM model. Here we revisited the top-philic DM model, taking special care in the treatment of non-perturbative effects, such as Sommerfeld and bound-state enhancement effects on the coloured mediator annihilation cross-section at early times, as well as in the computation of the DM production via $2\to 2$ scatterings. These are of particular importance in this model for SW and FI production, respectively. The two panels of Fig.~\ref{fig:paramspace} summarise the viable parameter space of FIMPs arising from FI and SW production in this scenario, complementarily bounded by cosmological (Lyman-$\alpha$, BBN) and particle physics (LHC $R$-hadron and displaced vertex searches) observables. In particular, the Lyman-$\alpha$ bound derived in the first part of this paper plays an important role. On the one hand, it excludes small DM masses, ${\cal O}(15\,{\rm keV})$, in the region of dominant FI production. On the other hand, it constrains the parameter space towards small couplings and, hence, large mediator lifetimes in the case of dominant SW production. In the latter case, Lyman-$\alpha$ observations supersede BBN constraints for mediator masses above $10^4$ GeV and reach DM masses up to ${\cal O}(100\,{\rm GeV})$. Here we have shown the importance of structure formation bounds in constraining FIMPs arising from the FI and SW mechanisms, and illustrated the need to consider bounds from both particle physics and cosmology to fully understand these scenarios. In particular, the case of mixed NCDM models -- giving rise to a multimodal momentum distribution -- has, to our knowledge, not been discussed thoroughly in the literature. This case can naturally appear in FIMP scenarios with a decaying mother particle at the origin of the DM production.
For the corresponding transfer function, which significantly deviates from the standard WDM scenario or from mixed warm + cold DM scenarios, no dedicated hydrodynamical simulations have been run, and we can only provide a conservative lower bound on the DM mass. In the future, it would be interesting to provide a thorough analysis of this case to validate our estimates and to check if other probes, such as reionization, the luminosity function at high redshift, or the 21 cm signal, could help to test these models further and distinguish them from the WDM-like DM scenarios. \acknowledgments We would like to thank S.~Junius for discussions and for providing us with his recasting of DV+MET searches. We would also like to thank R.~Murgia for clarifications on \mbox{Lyman-$\alpha$} constraints for FIMPs as well as F.~D'Eramo and A.~Lenoci for discussions. LLH is a Research Associate and QD benefits from a FRIA PhD Grant of the Fonds de la Recherche Scientifique F.R.S.-FNRS. LLH, QD and DH acknowledge support of the FNRS research grant number F.4520.19 and the IISN convention 4.4503.15. JH~acknowledges support from the Collaborative Research Center TRR 257 and the F.R.S.-FNRS (Charg\'e de recherches). DH is further supported by the Academy of Finland grant no. 328958.
\section{Introduction} In the past decades photoelectron spectroscopy has developed into the most powerful experimental technique to determine the detailed electronic properties of matter. On the experimental side, a wide range of energy-, momentum- and spin-resolving electron spectrometers has been employed. The large majority of spectrometers records single-electron spectra, e.g. the kinetic energy distribution upon photon absorption. However, the emission of two electrons excited by absorption of a single photon is also possible, which may be due to a strong electron-electron correlation. This process can be identified if two time-resolving electron detectors are used. Both detectors are connected via a coincidence logic. Only if both electrons hit the detectors within an appropriately small relative time window is the two-electron event registered. The coincidence rates are often rather small, in the range of 0.1 to 10 events/s. Experimentally it is challenging to discriminate these true coincidence events from the large number of single-electron events originating from different photons.\linebreak \indent To achieve the experimentally desired signal-to-noise ratios, adequate event statistics are required. The resulting measurement time scales with the inverse of the repetition rate of the pulsed excitation. Therefore high repetition rates are favored, which are only limited by the requirement that the slowest electrons must reach the detector within one excitation period. The latter is typically of the order of a few MHz and is fulfilled at synchrotron light sources in special operation modes as well as with pulsed UV laser sources. In Fig.\ref{true_random_cartoon} electron pair emission due to single-photon absorption, known as double photoemission (DPE), is illustrated. An unwanted background signal arises if two photons are independently absorbed, as this will also lead to the emission of an electron pair.
It is customary to refer to the latter process as `random'\ coincidences whereas the genuine DPE signal is known as `true'\ coincidences. As we discuss later, the process of interest scales linearly with the primary flux, while the undesirable process scales quadratically with the flux. This unwanted pathway very quickly overwhelms the genuine signal and the operation has to proceed with a much reduced flux. Let us quote a numerical example for electron pair emission due to primary electron impact, known as (e,2e). If one utilizes an experiment with energy dispersive elements, as described later, the sample current is $10^{-13}$ A. This is many orders of magnitude lower than for Auger electron spectroscopy using a primary electron beam, for which primary currents are of the order of a few $\mu$A. The requirement of low primary flux calls for a detection scheme with a large detection probability. This means that the detector solid angle should be as large as possible while a large electron kinetic energy window should be covered. There exist different solutions for experiments in the gas phase or at surfaces.\cite{1098_Jensen,1039_Thurgate,889_Herrmann,1038_Gotter,1619_Ullrich,1006_Hattass, 1712_Eland,2334_Penent}\linebreak \indent These schemes can be separated into energy dispersive detection and time-of-flight (ToF) spectroscopy. The latter asks for a pulsed excitation source, which in the case of photon beams is more involved. For an efficient measurement scheme a repetition rate of the order of a few MHz is needed, because once the slowest electron has been detected the next excitation can commence.
This requirement can be fulfilled at synchrotron light sources in special operation modes, but the available beamtime is restricted.\linebreak \indent More recent developments in laser technology using high-order harmonic generation (HHG) make it possible to perform in-house DPE experiments.\cite{1794_Chiang,1865_Chiang,1932_Chiang,2268_Chiang,2111_Truetzschler} While it is possible to carry out this type of experiment also with an energy dispersive element together with a standard vacuum-ultraviolet (VUV) light source, the available photon energies are limited to a few lines. These sources employ the discharge of the noble gases He, Ar and Ne.\cite{1833_Schoenhense} We use in our work a monochromatized He light source.\cite{MBS_lamp,Scienta_mono} For ToF spectroscopy, HHG light sources provide more options.\linebreak \begin{figure}[t] \includegraphics[width=7.5cm]{Fig1.pdf} \caption{Sketch of the emission of an electron pair via single photon absorption (top) or absorption of two photons (bottom). The first case is the process of interest and leads to true coincidences. The other pathway leads to unwanted background commonly referred to as `random'\ coincidences. \label{true_random_cartoon}} \end{figure} \indent It is well established how to determine the individual contributions of `random'\ and `true'\ coincidences. Here we discuss a novel approach for ToF spectroscopy similar to a recently reported concept in Auger-photoelectron coincidence studies of surfaces exploiting synchrotron radiation; for a recent overview of coincidence spectroscopy we refer to the literature.\cite{2325_Leitner,1938_Arion} \indent In this work we compare this with the common approach of a double logarithmic analysis.
We will demonstrate that both approaches yield the same characteristic parameters for different DPE experiments in which the intensity levels differ by orders of magnitude.\linebreak \begin{figure}[b] \includegraphics[width=7.5cm]{Fig2.pdf} \caption{ We display an example of the sum of `true'\ and `random'\ coincidence count rates as a function of the single photoemission count rate in a double logarithmic manner. The arrow indicates the singles rate for which the numbers of `true'\ and `random'\ coincidences are equal.\label{log_log_scienta}} \end{figure} \indent We start by considering how a genuine pair emission can be identified and how the statistical probabilities for `true'\ and `random'\ events have to be formulated. Second, we introduce the concept of delayed coincidences and how the relevant quantities can be experimentally determined. Third, we provide experimental data to prove the concept and relate it to a known procedure. Finally, we provide an outlook on how the `random'\ events can be removed from coincidence event spectra. \section{signature of `true'\ coincidences} Although our discussion will be valid for a variety of experimental approaches, we will present data obtained from two different set-ups for the investigation of solid surfaces. Our ToF instrument consists of a pair of lenses. The lens axes include an angle of 90$^\circ$ and have an angular opening of $\pm$15$^\circ$.\cite{1804_Huth} As excitation sources a pulsed electron gun and an HHG light source are available. The light source can operate at a repetition rate of 0.2-1 MHz and provides a photon energy range from 13 to 39~eV.\cite{1865_Chiang,2111_Truetzschler,2268_Chiang} The energy dispersive experiment utilizes a pair of hemispherical analyzers.\cite{1588_Schumann} Also in this case the two electron-optical axes include an angle of 90$^\circ$ with an angular acceptance of $\pm$15$^\circ$ within the scattering plane. This instrument has proven to be very versatile.
We reported on experiments with primary electrons, positrons, He$^{2+}$ ions and photons from laboratory and synchrotron sources.\cite{1825_Brandt,1853_Wei,2019_DiFilippo,2113_Li,2250_Schumann}\linebreak \indent Let us define the probability to find one photon in a light pulse by $P_1(\lambda)$ while the probability $P_2(\lambda)$ describes the probability to find two photons in a pulse. The quantity $\lambda$ is the average number of photons in a pulse and hence proportional to the primary flux. If we assume a Poisson distribution and $\lambda\ll 1$ we obtain the following expressions: \begin{equation} P_{1}(\lambda)=\lambda\,, \qquad P_{2}(\lambda)=\frac{1}{2}\lambda^2 \label{poisson1} \end{equation} The key result is that the probability $P_{2}(\lambda)$ scales quadratically with $\lambda$ and therefore with the flux, while the term $P_{1}(\lambda)$ displays a linear relation. This explains the flux dependence of `random'\ and `true'\ coincidences, respectively.\linebreak \indent This difference in the flux dependence can be observed by presenting the coincidence rate as a function of the singles rate in a double logarithmic manner, see Fig.\ref{log_log_scienta}. The dashed lines indicate the behavior for low and high flux. In a simplified analysis one can evaluate these regions separately. The slopes are close to 1 and 2, indicative of the expected behavior. The intersection of those lines occurs at a primary flux for which the numbers of `true'\ and `random'\ coincidences are equal. We will present below an improved procedure to analyze the data. The key point is that this analysis proves that `true'\ coincidences exist. Furthermore, one can adjust the primary flux such that the ratio of true-to-random coincidences (termed TR ratio) is at an acceptable level. The usual wisdom is to aim for a value above 1. This type of analysis can be used for energy dispersive experiments and ToF spectrometers.
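The expansions in Eq.(\ref{poisson1}) can be checked against the full Poisson distribution; a short numerical sketch:

```python
import math

def poisson(n, lam):
    """Poisson probability to find n photons in a pulse of mean lam."""
    return lam ** n * math.exp(-lam) / math.factorial(n)

lam = 1e-3  # average number of photons per pulse, well below 1
p1, p2 = poisson(1, lam), poisson(2, lam)

# For lam << 1: P1 ~ lam (linear in flux), P2 ~ lam^2/2 (quadratic in flux),
# so halving the flux halves the singles rate but quarters the random rate.
```

The quadratic scaling of $P_2$ is the reason why reducing the primary flux improves the TR ratio, at the price of longer measurement times.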
Implicit is the assumption that the primary flux can be increased to such a high level that `random'\ coincidences dominate the count rate. While this is in many cases not a constraint, it can be an issue. In such a case the log-log plot cannot be generated.\linebreak \indent In the pioneering experiment of Bothe and Geiger a different approach, appropriate for energy dispersive experiments, was discussed.\cite{1389_Bothe} It is based on the measurement of the arrival time difference of the two emitted electrons at their respective detectors, see Fig.\ref{hist_cartoon}. The implementation for an instrument which consists of a pair of hemispherical analyzers has been discussed in detail; here we recall only the conceptual points. \cite{1537_vanRiesen,1425_vanRiessen,1588_Schumann} For a valid coincidence event the arrival times ($t_{left}$ and $t_{right}$) of the two electrons at the respective detectors with respect to the coincidence trigger are known. Therefore we can plot the coincidence intensity as a function of the time difference $dt$=$t_{left}-t_{right}$. A schematic curve is shown in Fig.\ref{hist_cartoon}. The emergence of a peak is evidence of `true'\ coincidences as discussed in the literature.\cite{1389_Bothe,1098_Jensen,1624_Amaldi,1494_McCarthy,1696_Hayes, 1039_Thurgate,888_Kirschner}. There is no temporal relation between the `random'\ coincidences, which explains the constant intensity outside the peak region. Apart from proving the existence of `true'\ coincidences, this approach allows the straightforward determination of the TR ratio of `true'\ to `random'\ events. \begin{figure}[b] \includegraphics[width=8.0cm]{Fig3.pdf} \caption{Sketch of the arrival time histogram. The emergence of the peak is proof of the emission of electron pairs. The width $t_{true}$ is a consequence of the time dispersion of the spectrometer. The histogram has a total width of $t_{tot}$.\label{hist_cartoon}} \end{figure} \indent The total width of the $dt$ curve is given by $t_{tot}$.
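Determining the TR ratio from such an arrival-time histogram amounts to comparing the counts inside the peak region with the flat background level outside it. A schematic sketch (the histogram numbers are made up; real data would come from the measured $dt$ spectrum):

```python
def tr_ratio(counts, peak_lo, peak_hi):
    """Estimate the true-to-random ratio from a dt histogram.

    `counts` is a list of bin contents; [peak_lo, peak_hi) is the region of
    interest containing the coincidence peak (width t_true). The flat level
    outside the peak estimates the `random' contribution per bin.
    """
    outside = counts[:peak_lo] + counts[peak_hi:]
    bg_per_bin = sum(outside) / len(outside)   # flat `random' level
    in_peak = sum(counts[peak_lo:peak_hi])
    randoms = bg_per_bin * (peak_hi - peak_lo) # randoms under the peak
    trues = in_peak - randoms
    return trues / randoms

# Illustrative histogram: flat background of 5 counts/bin plus a peak
hist = [5] * 20 + [5 + 20, 5 + 40, 5 + 20] + [5] * 20
tr = tr_ratio(hist, 20, 23)
```

Since the background level is estimated from many bins, even a few hundred events suffice for a stable TR value, in line with the evolution discussed below.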
The base width of the peak is $t_{true}$ as indicated by the pair of vertical dashed lines, see Fig.\ref{hist_cartoon}. This result is used to define a region of interest where `true'\ coincidences are located. The peak width is in accordance with the time dispersion of the spectrometer.\cite{1493_Volkel,1479_Imhof, 1638_Kugeler} \linebreak \indent The generation of a log-log plot with suitable statistics takes a considerable time and will interrupt the actual measurement, because largely different primary fluxes have to be used.\linebreak \indent In this respect the use of a histogram is very powerful. We can illustrate this point by observing how the histogram evolves. For this we take a data file from a DPE experiment on a Pb surface. The experiment in question captured the coincidence of a $5d$ core electron and the related Auger electron upon excitation with 40.8~eV photons, see Fig.~8 of Aliaev et al.\cite{2176_Aliaev}. The coincidence rate was around 1~cps for this study. The curve of Fig.\ref{Pb_hist} (a) shows the histogram after data acquisition of 200 coincidence events, which for the given coincidence rate translates to 200 s. Even after this short time it is clear that `true'\ coincidences exist and the TR ratio can be computed. The evolution of the TR ratio as a function of the counted events is displayed in Fig.\ref{Pb_hist} (b) and one can see that the value of the TR ratio is accurately known after around 2000 counts. At the beginning of a new study the required intensity level for a sufficiently good TR ratio is not known, hence the primary flux needs to be varied. For a log-log plot the picture emerges only after covering a sufficiently large flux window of 2-3 orders of magnitude, see Fig.\ref{log_log_scienta}. This is different for the histogram measurements, where a single measurement gives a result. \begin{figure}[t] \includegraphics[width=8.0cm]{Fig4.pdf} \caption{ Panel (a) shows the arrival time histogram for 200 coincidence counts.
The evolution of the TR ratio as a function of the accumulated counts is plotted in panel (b). The data are from a measurement on a Pb surface, see Fig.~8 (a) in Aliaev et al.\cite{2176_Aliaev}.\label{Pb_hist}} \end{figure} \section{delayed coincidence for ToF} Our ToF experiment is not set up to record two electrons per photon pulse on one of the MCP detectors. This limitation is not as severe as it appears, because previous work has shown that the emission of two electrons into the same angular range is strongly reduced. This was related to an important concept of solid state theory known as the exchange-correlation hole.\cite{1057_Schumann,1094_Schumann,1302_Hattass,1214_Schumann,1430_Schumann,2304_Schumann} The angular range of the reduced intensity region is of the order of 60$^\circ$ and exceeds the angular acceptance of the entrance lenses, which is $\pm15^{\circ}$.\linebreak \indent It is highly desirable to be able to perform an equally fast determination of the TR ratio for a ToF experiment. This becomes possible if we employ a second coincidence circuit. For simplicity, and also reflecting our experimental situation, we assume that the time $t_{rep}$ between two subsequent primary pulses is longer than the time-of-flight of the slowest detectable electron. In the first circuit, called A, the two multichannel plate (MCP) signals of the different spectrometers are used. The second circuit, called B, receives one MCP signal which is delayed by the time $t_{rep}$ between two consecutive primary pulses. A schematic view of the two coincidence circuits is shown in Fig.\ref{cartoon_delayed}. While circuit A records `true'\ and `random'\ coincidences, circuit B can only record `random'\ events. As indicated by Fig.\ref{true_random_cartoon}, we define a `random'\ event as being caused by two independent photons. This is clearly the case for circuit B. \begin{figure}[b] \includegraphics[width=9.0cm]{Fig5.pdf} \caption{Schematic signals from two detectors.
The dashed vertical lines indicate the times when photons from the different pulses arrive at the sample. The time between subsequent pulses is given by $t_{rep}$. Circuit A triggers if both detectors supply a signal after photon absorption at the first pulse and display temporal overlap. Circuit B triggers if the temporal overlap exists between the delayed signal of detector 1 and a signal from detector 2 from the next pulse. \label{cartoon_delayed}} \end{figure} The probability to find two photons in the same pulse is given by $P_{2}(\lambda)$. This is in contrast to finding one photon in one pulse while the second photon is in the subsequent pulse. For this scenario the probability is given by the product of probabilities $P_{1}(\lambda)^2$. For $\lambda\ll1$ we can simplify $P_{1}(\lambda)^2=2\cdot P_{2}(\lambda)$ by using the expansions of Eq.(\ref{poisson1}). In the case of a `random'\ coincidence, circuit A will record the first electron in either detector 1 or 2. The second electron is then registered in the opposite detector. In other words, there exist two detection combinations, in contrast to circuit B. This means that the `random'\ rate for both circuits is the same and we can write: \begin{equation} c_A=r+t\,, \qquad c_B=r \label{counter} \end{equation} The coincidence rates for circuits A and B are given by $c_A$ and $c_B$, while the terms $r$ and $t$ refer to the `random'\ and `true'\ coincidence rates. These terms are low-count-rate approximations as discussed in the literature.\cite{2371_Kossmann} With this we define $ratio$ as: \begin{equation} ratio=\frac{c_A}{c_B}=\frac{r+t}{r}=1+\frac{t}{r} \label{counter_ratio} \end{equation} We immediately see that $ratio$ is closely related to the quantity of interest, namely the TR ratio. We recall that the `random'\ rate varies quadratically with the flux while the `true'\ events scale linearly with the flux.
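The relation $c_A = r + t$, $c_B = r$ can be illustrated with a toy Monte Carlo of the two circuits (all photon statistics and emission probabilities below are made-up illustrative numbers, not measured values; detector efficiencies and flight-time dispersion are ignored):

```python
import math
import random

random.seed(7)

LAM = 0.3       # average number of photons per pulse (made-up)
P_PAIR = 0.02   # prob. a photon ejects a detectable electron pair ('true')
P_SINGLE = 0.4  # prob. a photon ejects one detectable single electron
N_PULSES = 300_000

def n_photons():
    """Poisson sample via Knuth's algorithm (stdlib only)."""
    limit = math.exp(-LAM)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= random.random()
    return k - 1

def pulse_hits():
    """Whether detectors 1 and 2 fire within one photon pulse."""
    hit1 = hit2 = False
    for _ in range(n_photons()):
        if random.random() < P_PAIR:        # correlated pair: both fire
            hit1 = hit2 = True
        elif random.random() < P_SINGLE:    # uncorrelated single electron
            if random.random() < 0.5:
                hit1 = True
            else:
                hit2 = True
    return hit1, hit2

c_a = c_b = 0
prev_hit1 = False
for _ in range(N_PULSES):
    h1, h2 = pulse_hits()
    c_a += h1 and h2           # circuit A: same-pulse coincidence
    c_b += prev_hit1 and h2    # circuit B: detector-1 signal delayed by t_rep
    prev_hit1 = h1

ratio = c_a / c_b
```

Because consecutive pulses are statistically independent, circuit B measures the uncorrelated (accidental) rate, and the excess of $c_A$ over $c_B$ reflects the correlated pair emission, so $ratio$ exceeds 1 whenever `true'\ pairs are present.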
Since the singles rate also depends linearly on the flux, a presentation of $ratio$ as a function of the inverse singles rate is expected to yield a line with an intercept of 1. If we label the singles rate with $s$ we can formulate: \begin{equation} ratio=a_1+a_2 \cdot \frac{1}{s} \label{counter_ratio2} \end{equation} \begin{figure}[b] \includegraphics[width=7.5cm]{Fig6.pdf} \caption{ In (a) the measurement of the $ratio$ as a function of the inverse singles rate is shown. These DPE data come from a SrRuO$_3$ surface with a photon energy of 30 eV. The solid line is a linear fit to the data and reveals an intercept $a_1$=1.03$\pm 0.35$. The horizontal dashed line indicates $ratio=2$, which is equivalent to TR=1. In panel (b) we plot the delayed coincidence rate as a function of the singles rate via a log-log plot. We find for the slope $b_2$=1.98$\pm 0.04$. \label{SrRuO_30eV}} \end{figure} We expect the intercept $a_1$ to be close to 1 according to Eq.(\ref{counter_ratio}). In this case the slope $a_2$ can be identified with the singles rate $s$ for which $ratio$ is close to 2 or, equivalently, the TR ratio is 1.\linebreak \indent A single photon pulse will lead with a probability $p_s$ to the detection of an electron in one spectrometer. The time between subsequent pulses is given by $t_{rep}$. Therefore the single electron rate is: \begin{equation} s=\frac{p_s}{t_{rep}} \label{singles} \end{equation} Similarly we get for the delayed coincidence rate $c_b$: \begin{equation} c_b=\frac{p_s^2}{t_{rep}}=s^2 \cdot t_{rep} \label{delayed_rate} \end{equation} If we take the decadic logarithm of both sides we obtain: \begin{equation} \log(c_b)=\log(t_{rep})+2\cdot \log(s)=b_1+b_2\cdot \log(s) \label{delayed_rate_log} \end{equation} This means that fitting the data in a double logarithmic fashion will yield the fit parameters $b_1$ and $b_2$. For our experiment with $t_{rep}=2 \cdot 10^{-6}$ s we expect $b_1$=-5.699 and $b_2$=2.
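The expected intercept follows directly from the pulse period; as a quick numerical check:

```python
import math

t_rep = 2e-6                       # s, time between photon pulses
b1_expected = math.log10(t_rep)    # intercept of the log-log relation: -5.699
b2_expected = 2.0                  # slope: delayed (random) rate scales as s^2

# e.g. at a singles rate of s = 2000 cps the delayed coincidence rate is
s = 2000.0
c_b = s**2 * t_rep                 # = 8.0 cps
```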
In order to explore the validity of the previous statements we have performed experiments using a ToF instrument with an HHG light source as described previously.\cite{1804_Huth,2111_Truetzschler,2268_Chiang} The spectrometer settings were identical to those reported in our previous work. We selected a SrRuO$_3$ sample as target, which we investigated with a few different photon energies.\linebreak \indent In Fig.\ref{SrRuO_30eV} (a) we show the behavior of $ratio$ as a function of the inverse singles rate $s^{-1}$ for a photon energy of 30~eV. It is apparent that the data are well described by a line with an intercept of $a_1$=1.03$\pm$0.35, in agreement with the expected value of 1. The horizontal dashed line marks the level for $ratio$=2 or TR=1. For the other parameter we find $a_2$=2013.5 cps, which is the singles rate for which the TR ratio is near 1. In Fig.\ref{SrRuO_30eV} (b) we show the delayed coincidence rate as a function of the singles rate in a double-logarithmic manner. A linear fit reveals a slope of $b_2$=1.98$\pm0.04$, which agrees within the error with the expected value for `random'\ coincidences. The vertical dashed line marks the singles rate $s=a_2$, and the crossing point with the solid line occurs at a delayed coincidence rate of 7.87 cps, which is identical to the `true'\ rate. We obtain for the fit parameter $b_1$=-5.6479, which is within the expected range.\linebreak \begin{figure}[t] \includegraphics[width=7.5cm]{Fig7.pdf} \caption{We summarize the flux dependent measurements for a variety of photon energies. In panels (a)-(c) we show the variation of the fit parameters $a_1$, $b_1$ and $b_2$ with their error bars. The dashed horizontal lines in each plot are the expected values as discussed in the text. \label{summary_delayed}} \end{figure} \indent We summarize our findings for a variety of photon energies in Fig.\ref{summary_delayed}.
The three panels show the variation of the fit parameters $a_1$, $b_1$ and $b_2$ together with the expected values as indicated by the dashed horizontal lines in each plot. The blue data points for $a_1$ in Fig.\ref{summary_delayed}(a) are obtained if only the data with the highest primary flux are retained. In this case $a_1$=1 is almost exactly fulfilled. The key point is that the material-independent parameters are essentially in agreement with the prediction. \indent In the description so far we have employed 4 parameters which can be determined by fitting the experimental data. It is appropriate to perform calibration measurements with the aim to keep some parameters fixed. The intercept $a_1$ from Eq.(\ref{counter_ratio2}) can easily be measured by operating at a high singles rate. This is best done at lower photon energies, because there we observe a lower `true'\ rate for the same singles rate. For the evaluation of the `true'\ and singles rate for TR=1 we set $a_1$=1 and $b_2$=2. In other words, in our analysis we adjust only two parameters. We compare the results with the usual double-logarithmic analysis of the coincidence rate $r_a$ versus the singles rate $s$, formulated as: \begin{equation} \log(r_a)=\log(a \cdot s + b\cdot s^2) \label{loglog_eq} \end{equation} The parameters $a$ and $b$ refer to the contribution of `true'\ and `random'\ coincidences, respectively. Once the parameters $a$ and $b$ are obtained in the analysis, it is straightforward to determine the singles and `true'\ rate for TR=1. These are given by $s=a/b$ and $t=a^2/b$. The outcome of this analysis and the comparison with the result from the delayed coincidence is plotted in Fig.\ref{delay_loglog}. Both ways of analysis clearly give essentially the same result. Furthermore, the variation of the `true'\ rate covers more than 2 orders of magnitude and starts at a low rate of 0.02 cps for a photon energy of 15 eV.
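The extraction of the TR=1 point from the fit parameters $a$ and $b$ amounts to simple arithmetic; the sketch below uses illustrative values for $a$ and $b$, not our fitted ones:

```python
# Given fit parameters a ('true' rate ~ a*s) and b ('random' rate ~ b*s^2),
# the TR=1 condition a*s = b*s^2 yields s = a/b and t = a*s = a**2/b.
a = 3.9e-3            # illustrative 'true' coefficient (dimensionless)
b = 2.0e-6            # illustrative 'random' coefficient (1/cps)

s_tr1 = a / b         # singles rate at TR = 1
t_tr1 = a**2 / b      # 'true' rate at TR = 1
print(s_tr1, t_tr1)
```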
The demonstration of consistent results proves that the delayed coincidence is a valid approach. Therefore, the evaluation of the TR ratio without the need for a flux variation is possible. Hence it is possible to monitor the TR ratio during a measurement, similar to those for the energy dispersive set-up shown in Fig.\ref{Pb_hist}. \begin{figure}[t] \includegraphics[width=8.5cm]{Fig8.pdf} \caption{Photon energy dependence of the `true'\ (a) and singles (b) count rate for TR=1. The black data points refer to the result from the delayed coincidences while the red data points stem from a double-logarithmic analysis. \label{delay_loglog}} \end{figure} \section{Removal of `random'\ background} Besides the knowledge of the `random'\ contribution to the count rate, it is also beneficial to be able to remove the `random'\ background from the energy spectra without the need to collect an additional spectrum at very high primary flux. This possibility exists for the energy dispersive instrument, and we have discussed our implementation in the literature.\cite{1588_Schumann,2250_Schumann} It turns out that this can also be achieved with a ToF instrument, as shown in a recent work on APECS at a synchrotron light source.\cite{2325_Leitner} Effectively, the concept of the delayed coincidence was used, which was facilitated by the possibility to record the absolute time for each photon bunch. One can make use of a delayed coincidence circuit to obtain the equivalent information. The key point is to change the operation of the coincidence circuit and to add an additional channel in which the signal level indicates the presence of a delayed coincidence. Rather than using the AND operation with the two MCP signals, one selects the OR option. Therefore, each time at least one spectrometer records an electron impact, a trigger is generated. At this point the detector signals are read and stored in a list as for the usual AND operation.
In Table \ref{table} we provide the schematic data structure. The 1st column is the recorded event number. The 2nd and 3rd columns indicate whether the respective spectrometer, labeled as $left$ and $right$, has recorded kinetic energy and emission angle information. Consequently the entry is either $yes$ or $no$ for each of these two columns. \begin{table}[b] \caption{Schematic data structure.} \begin{ruledtabular} \begin{tabular}{ccccc} event & left spect & right spect & coin & delayed coin\\ \hline 1 & yes & yes & yes & no \\ 2 & yes & no & no &no \\ 3 & no & yes & no &yes \\ 4 & no & yes & no & no \\ 5 & yes & no & no &no \\ 6 & no & yes &no &yes \\ .. & .. & .. & .. &.. \\ \end{tabular} \end{ruledtabular} \label{table} \end{table} If for a given event both spectrometers registered an electron, this is a coincidence event, which gives the entry $yes$ in the column $coin$. This is the case for the 1st event in Table \ref{table}. This means that the usual AND condition of the coincidence circuit can be recovered in a post-experiment analysis and the energy spectrum containing both `true'\ and `random'\ coincidences can be computed.\linebreak \indent Alongside the coincidence circuit, which was changed to an OR operation, we operate a delayed coincidence circuit in the AND mode as sketched in Fig.\ref{cartoon_delayed} as circuit B. This provides the signal for the last column of the table. It has an entry if the delayed coincidence circuit has triggered. This is the case for event 3, where only the $right$ spectrometer records an electron. However, the $left$ spectrometer has an entry for event 2. This means that the events 2 and 3 originate from two photon pulses separated exactly by the time $t_{rep}$. Therefore we can identify a delayed `random'\ coincidence in a post-experiment analysis and compute the corresponding energy spectrum. If we subtract this spectrum from the one containing `true'\ and `random'\ coincidences, we effectively obtain the `true'\ spectrum.
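A minimal sketch of this post-experiment analysis, reproducing the example of Table \ref{table}. For simplicity the sketch assumes that consecutive entries in the event list stem from pulses separated exactly by $t_{rep}$; in the real instrument this timing information is supplied by the delayed coincidence circuit itself:

```python
# Post-experiment analysis of the OR-mode event list (illustrative sketch).
# Each event records whether the left/right spectrometer fired.
events = [
    {"left": True,  "right": True},   # event 1
    {"left": True,  "right": False},  # event 2
    {"left": False, "right": True},   # event 3
    {"left": False, "right": True},   # event 4
    {"left": True,  "right": False},  # event 5
    {"left": False, "right": True},   # event 6
]

for i, ev in enumerate(events):
    # regular coincidence: AND of both spectrometers within one event
    ev["coin"] = ev["left"] and ev["right"]
    # delayed coincidence: left fired in the previous event (pulse n) and
    # right fires in this one (pulse n+1), matching circuit B
    ev["delayed"] = i > 0 and events[i - 1]["left"] and ev["right"]

print([ev["coin"] for ev in events])     # coin column of the table
print([ev["delayed"] for ev in events])  # delayed-coin column of the table
```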
This procedure is allowed because we have established above that the number of `random'\ coincidences in the regular coincidence circuit is identical to that in the delayed circuit. \section{revised definition of $\lambda$} % In the discussion so far we have introduced the primary intensity via the average number of photons per pulse $\lambda$. However, we are discussing electron detection; hence we have to introduce a parameter which describes the probability for electron emission upon photon absorption and detection within a given spectrometer. This is a material and photon energy dependent quantity which we abbreviate with $y$ for yield. The focus is on `random'\ events, which are mainly determined by single electron emission; hence we ignore double electron emission at this point. The value of $y$ will also depend on the detailed spectrometer settings, which determine the solid angle of detection and the probed energy window. This will lead to a singles count rate $s$ which is easily determined. Conceptually, the photon flux can be measured in absolute units, and with the known time between pulses $t_{rep}$ the relation between $\lambda$ and $y$ reads: % % \begin{equation} y=\frac{s\cdot t_{rep}}{\lambda} \label{def_y} \end{equation} % This means that the average number of detected electrons per pulse is simply $y \cdot \lambda$. In a next step one would like to know the probability function for emitted electrons; in other words, what is the probability that $m$ electrons are detected? For this evaluation one needs first the distribution function for $n$ photons in a pulse, which we have abbreviated with $P_n(\lambda)$ in Eq.(\ref{poisson1}). % % Further, we do not assume that $\lambda\ll1$. If $n$ photons are absorbed by the sample, the number of electrons $m$ registered by one spectrometer is $m\leq n$.
The probability for this to occur is determined by the binomial distribution: % \begin{equation} el_{n,m}(y)= \binom{n}{m} \cdot y^m \cdot (1-y)^{n-m} \label{binomial} \end{equation} % This finally leads to the probability for the detection of $m$ electrons: % \begin{equation} P_m^{elec}(\lambda,y)=\sum_{n=m}^{\infty} P_n(\lambda) \cdot el_{n,m}(y) \label{binomia2l} \end{equation} % If the photon distribution is a Poisson distribution then one can show that $P_m^{elec}$ is also a Poisson distribution with a new parameter $\lambda_{eff}=\lambda \cdot y$. Hence we need to demand that $\lambda_{eff}\ll 1$ in order to derive the equivalent expressions of Eq.(\ref{poisson1}). From Fig.\ref{SrRuO_30eV} we can see that the highest singles rate was 15000 cps. With $t_{rep}$=2000 ns we obtain $\lambda_{eff}$=0.03. Clearly the condition $\lambda_{eff}\ll 1$ is fulfilled in our DPE measurements and the approximations made in Eq.(\ref{poisson1}) are valid.\linebreak \indent In the discussion of the delayed coincidence we have ignored the possibility that a count in detector 1 has a `true'\ counterpart in detector 2. This will lead to additional events; however, the `true'\ coincidence rate is more than 2 orders of magnitude smaller than the singles rate, see Fig.\ref{delay_loglog}. Hence this contribution can be safely ignored.\linebreak \indent This observation is not tied to the current DPE measurement on SrRuO$_3$, but is a rather general result of our activities on surfaces. This includes also observations from Auger-photoelectron coincidences. This in turn means that the assumption that $y$ contains only single electron events is justified. This should not be read as an indication that electron pair emission from surfaces is a weak effect.
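The statement that binomial thinning of a Poisson distribution yields again a Poisson distribution with parameter $\lambda_{eff}=\lambda \cdot y$ can be verified numerically; the values of $\lambda$ and $y$ below are illustrative:

```python
from math import exp, factorial, comb

def poisson(n, lam):
    """P_n(lambda): Poisson probability of n photons in a pulse."""
    return lam**n * exp(-lam) / factorial(n)

def p_elec(m, lam, y, n_max=60):
    """P_m^elec: sum over n >= m of P_n(lambda) times binomial thinning."""
    return sum(
        poisson(n, lam) * comb(n, m) * y**m * (1 - y)**(n - m)
        for n in range(m, n_max + 1)
    )

lam, y = 0.5, 0.2          # illustrative photon number and yield
lam_eff = lam * y

# The thinned distribution equals a Poisson distribution with lam_eff
for m in range(4):
    assert abs(p_elec(m, lam, y) - poisson(m, lam_eff)) < 1e-12
```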
A combined (e,2e) and DPE study revealed that 15-40$\%$ of the electrons detected by one spectrometer have a counterpart emitted somewhere in the half-space.\cite{2021_Schumann} % \section{Comparison energy dispersive versus ToF performance} A variety of coincidence spectrometers have been developed addressing different scientific questions. There is no rule which determines the best approach. Here we only want to provide a simple estimate of the performance relation between an energy dispersive and a ToF approach. For simplicity we assume that the emission of pairs is isotropic and constant within an energy window. We assume for the moment that the energy dispersive and ToF instruments cover the same solid angle $\Omega$ and energy window. Later we will discuss the deviations from this description.\linebreak \indent An example resembling this situation is a pair of hemispherical analyzers compared to a pair of ToF spectrometers which are based on the entrance lens of a hemispherical analyzer. In our activities both approaches have been realized.\cite{1335_vanRiessen,1588_Schumann,1804_Huth,2111_Truetzschler,2268_Chiang} The key number characterizing the performance of the hemispheres is the time dispersion. This is essentially the quantity $t_{true}$, see Figs.\ref{hist_cartoon} and \ref{Pb_hist}. A ToF instrument requires a pulsed excitation source, and the time between subsequent pulses is $t_{rep}$. Let us assume that the average number of primary particles per pulse is given by $\alpha$, which yields the desired ratio of `true'\ to `random'\ coincidences. This means that the primary flux for the ToF experiment is: \begin{equation} f_{ToF}=\frac{\alpha}{t_{rep}} \label{int_tof} \end{equation} For a proper comparison with an energy dispersive instrument we have to allow the same number of primary particles within the coincidence window. The temporal width of this window is characterized by the value of $t_{true}$, see Fig.\ref{hist_cartoon}.
This will ensure the same ratio of `true'\ to `random'\ and we obtain for the primary flux $f_{disp}$: \begin{equation} f_{disp}=\frac{\alpha}{t_{true}} \label{int_disp} \end{equation} This allows us to compare the primary flux of the two types of experiments, and we obtain: \begin{equation} \frac{f_{disp}}{f_{ToF}}=\frac{t_{rep}}{t_{true}} \label{flux_relation} \end{equation} Let us use some typical values for our instruments, which are $t_{true}$=20~ns and $t_{rep}$=1000~ns. This tells us that the energy dispersive system is a factor of 50 more efficient, or equivalently a coincidence rate higher by this factor is possible.\linebreak \indent However, the assumption of the same solid angle for both types of spectrometer is not valid. While our ToF instrument captures 3$\%$ of the half sphere, this value is reduced to 0.7$\%$ for the hemisphere set-up. Since the solid angle enters quadratically into the detection efficiency, the performance advantage is reduced to about 3.\linebreak \indent The energy window captured by the ToF and energy dispersive instrument can be selected. The requirement to operate with a low primary flux asks for efficient detection; hence a sizable window is required at the expense of energy resolution. In the context of this work we operated the hemispherical analyzers with a pass energy of 150~eV, which provides an energy window of 13.5~eV. The typical settings of the ToF instrument for coincidence measurements have been described.\cite{2194_Huth} It is possible to cover an energy window of 30 eV, but only over a range of 10~eV does the spectrometer capture electrons with the maximum solid angle of 3$\%$. Outside this range the solid angle drops significantly. This means that the detection efficiency for a ToF is about a factor of 1.25 larger than for the hemisphere. For a coincidence experiment one has to consider the square of this value, which gives a factor of 1.56.
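The chain of estimates in this section can be collected in a few lines; the numbers are those quoted in the text:

```python
# Step-by-step performance estimate: hemispheres vs. ToF
t_true = 20e-9     # time dispersion of the hemispheres, 20 ns
t_rep  = 1000e-9   # pulse separation of the ToF source, 1000 ns

flux_ratio = t_rep / t_true                # 50: raw advantage of hemispheres

omega_tof, omega_disp = 0.03, 0.007        # solid angle fractions of half sphere
solid_angle = (omega_tof / omega_disp)**2  # enters quadratically, ~18.4

advantage = flux_ratio / solid_angle       # ~2.7, i.e. "about 3"

window = 1.25**2                           # energy-window factor squared, 1.56
print(advantage / window)                  # roughly 1.7, i.e. of order 2
```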
From this one concludes that the performance advantage of the hemisphere reduces to a factor of 2. In this estimate other experimental considerations have been ignored, but the main conclusion is that both approaches provide performance on equal terms. \section{summary} For coincidence experiments it is vital to know the ratio of `true'\ to `random'\ events, which is sensitive to the primary flux. We reviewed the established procedures on how to identify `true'\ electron pair emission. A powerful tool known from energy dispersive coincidence experiments was found to have its counterpart in ToF instrumentation. This approach we termed the delayed coincidence circuit. We discussed the relevant parameters of this procedure and how they can be determined in an experiment. In order to support this notion we performed photon energy dependent DPE studies. These resulted in intensity levels varying by more than 2 orders of magnitude. The characteristic values of the delayed coincidence are in agreement with the predicted behavior and with the concurrently obtained double-logarithmic analysis.\linebreak \indent This experimental proof of concept allows one to monitor the TR ratio during the actual measurement. Therefore there is no need to resort to the time-consuming flux-dependence measurement for the conventional double-logarithmic analysis, which interrupts the actual measurement. Furthermore, it is not required to go to such a high primary flux that the coincidence rate is dominated by `random'\ coincidences.\linebreak \indent We have also outlined how it becomes possible to remove the `random'\ contribution from the spectra by using the novel approach. \bibliographystyle{aip}
\section{Introduction} \label{sec:intro} A fundamental connection between three dimensional topology and higher categories is the $(2+1)$-dimensional topological quantum field theory ($\mathrm{TQFT}$) introduced in \cite{witten1988topological,atiyah1988topological}. A $(2+1)$-$\mathrm{TQFT}$ associates to every closed oriented $2$-manifold a finite dimensional vector space and to every $3$-manifold a vector in the vector space corresponding to its boundary. These assignments should satisfy certain axioms \cite{atiyah1988topological}. The empty set is considered as a closed $2$-manifold and the vector space associated to it is required to be $\mathbb{C}$. Then the vector corresponding to a closed $3$-manifold becomes a complex scalar called the partition function or path integral, which is an invariant of $3$-manifolds. Invariants arising from $\mathrm{TQFT}$s are called quantum invariants. Quantum invariants have largely been constructed by state-sum models from monoidal categories and Hopf algebras. Reshetikhin and Turaev constructed an invariant of $3$-manifolds using modular tensor categories, which is believed to be the mathematical realization of Witten's $\mathrm{TQFT}$ from non-abelian Chern-Simons theories \cite{reshetikhin1991invariants}. Turaev and Viro gave a state-sum invariant of $3$-manifolds (Turaev-Viro invariant) from a ribbon fusion category \cite{turaev1992state}. Later Barrett and Westbury generalized this construction (Turaev-Viro-Barrett-Westbury invariant or $\text{TVBW}$ invariant) by using spherical fusion categories \cite{barrett1996invariants}. These invariants can all be extended to define a $(2+1)$-$\mathrm{TQFT}$.
Apart from these categorical constructions, another approach is by using certain Hopf algebras, among which the Kuperberg invariant \cite{kuperberg1996noninvolutory} and the Hennings invariant \cite{kauffman1995invariants}\cite{hennings1996invariants} are non-semisimple generalizations of the Turaev-Viro invariant and the Reshetikhin-Turaev invariant, respectively. A special case of the Kuperberg invariant (and also the Turaev-Viro invariant) reduces to the Dijkgraaf-Witten theory \cite{dijkgraaf1990topological}. The study of $(2+1)$-$\mathrm{TQFT}$s has led to applications in quantum groups, $3d$ topology, and knot theory. For example, the Turaev-Viro invariant can distinguish certain $3$-manifolds which are homotopy equivalent. The main result of this paper is a construction of a state-sum invariant of $3$-manifolds from what we call a spherical multi-fusion category ($\text{SMFC}$). When the $\text{SMFC}$ is a fusion category, the invariant reduces to the $\text{TVBW}$ invariant. It is straightforward to extend the construction to obtain a $(2+1)$-$\mathrm{TQFT}$. However, for simplicity, here we only focus on quantum invariants. Our contribution touches on the following three aspects. Firstly, we introduced the concept of $\text{SMFC}$s. The current definitions of spherical categories and multi-fusion categories are in general not compatible; a multi-fusion category can never be spherical unless it is a fusion category (see Section \ref{subsec:multifusion}). Thus an $\text{SMFC}$ is not a spherical category in the usual sense. We weakened the definition of sphericity based on the construction of state-sum invariants. Explicitly, let $\mathcal{C} = \bigoplus\limits_{i,j \in I} \mathcal{C}_{ij}$ be a multi-fusion category, where $\mathcal{C}_{ij}$, called the $(i,j)$-sector, satisfies $\mathcal{C}_{ik} \otimes \mathcal{C}_{kj} \subset \mathcal{C}_{ij}$.
Here $I$ is called the index set and for each $i \in I$, $\mathcal{C}_{ii}$ is a fusion category with unit $\mathbf{1}_i$, and the unit of $\mathcal{C}$ is $\mathbf{1} = \bigoplus\limits_{i\in I}\mathbf{1}_i$. If $f \in \End(X), X \in \mathcal{C}_{ij}$, instead of requiring the left trace of $f$ to equal the right trace of $f$ on the nose, i.e., $\Tr^l(f) = \Tr^r(f)$, which in general does not hold, we define $\mathcal{C}$ to be spherical if $|\Tr^l(f)| = |\Tr^r(f)|$, where $|\Tr^l(f)|$, $|\Tr^r(f)|$ are the scalars that satisfy $\Tr^l(f) = |\Tr^l(f)| id_{\mathbf{1}_j}$, $\Tr^r(f) = |\Tr^r(f)| id_{\mathbf{1}_i}$. When $I$ consists of one element, this definition reduces to the usual one. Another motivation for this weakening comes from the graphical calculus of multi-fusion categories. The new definition of sphericity guarantees that isotopic colored graphs in the sphere have the same evaluation. Secondly, the construction of quantum invariants is more general than the $\text{TVBW}$ approach. In the $\text{TVBW}$ model, only the $1$-simplices are colored, while in our model both the $1$-simplices and $0$-simplices are colored. Let $M$ be a closed oriented $3$-manifold and $\mathcal{T}$ be a triangulation of $M$ whose vertices are ordered. Let $\mathcal{C}$ be an $\text{SMFC}$ with index set $I$. A coloring $F$ of $\mathcal{T}$ assigns to each $0$-simplex ordered by $i$ an element $f^0_i \in I$ and to each $1$-simplex $(ij)$ a simple object $f^1_{ij}\in \mathcal{C}_{f^0_if^0_j}$.
Then the partition function is defined to be \begin{align} \label{equ:partition2} Z_{\mathcal{C}}(M,\mathcal{T}) = \sum\limits_{F = (f^0,f^1)} \prod\limits_{\tau \in \mathcal{T}^0} K^{-1} \prod\limits_{\alpha \in \mathcal{T}^1} d_{f^1_{\alpha}} \, \prod\limits_{\beta \in \mathcal{T}^2} \theta_F(\beta)^{-1} \, \prod\limits_{\gamma \in \mathcal{T}^3} \tilde{Z}^{\epsilon(\gamma)}_F(\gamma;B), \end{align} where $K$, $d_{(\cdot)}$, $\theta_{F}(\cdot)$, and $\tilde{Z}_{F}^{\epsilon(\cdot)}(\cdot, B)$ are certain scalars associated to simplices of various dimensions. See Section \ref{sec:main} for details. The main result is as follows. \begin{theorem}[Main] \label{thm:main} The partition function $Z_{\mathcal{C}}(M,\mathcal{T})$ is independent of the triangulation $\mathcal{T}$, and thus $Z_{\mathcal{C}}(M):= Z_{\mathcal{C}}(M,\mathcal{T})$ is an invariant of closed oriented $3$-manifolds. Moreover, this construction extends to a $(2+1)$-$\mathrm{TQFT}$. \end{theorem} Lastly, by studying a class of $\text{SMFC}$s coming from categorical groups and some additional cohomological data, we recovered the $(2+1)$-$\mathrm{TQFT}$ in \cite{kapustin2013higher} which is obtained from higher gauge theory. The latter $\mathrm{TQFT}$ was not known to have a categorical construction. The rest of the paper is organized as follows. In Section \ref{sec:review} we provide a review of basic category theory and propose the concept of $\text{SMFC}$s. Section \ref{sec:main} contains the main construction of quantum invariants. In Section \ref{sec:generalized}, we define generalized categorical groups and study the $\text{SMFC}$s obtained from them. Finally, in Section \ref{sec:SET}, we make some connections to symmetry enriched topological phases. \section{Spherical Multi-fusion Categories} \label{sec:review} We assume the reader has a background in basic category theory, and especially in monoidal (tensor) category theory.
In Section \ref{subsec:pivotal} we set up some notations and briefly review monoidal categories with additional structures such as duals and pivotal structures. We also review graphical calculus, which is a convenient way to represent morphisms. There are many references on this subject; for instance, see \cite{bojko2001lectures}\cite{Etingof_finitetensor}\cite{turaev1994quantum}\cite{kassel1995quantum}, etc. In Section \ref{subsec:multifusion}, we first recall the definition of (multi-)fusion categories and then introduce the concept of a spherical multi-fusion category, which is not spherical according to the usual definition unless the category is a fusion category. Spherical multi-fusion categories are natural generalizations of spherical fusion categories. Throughout, let $\mathcal{C}$ be a category. Denote by $\mathcal{C}^0$ the set of objects and by $\Hom_{\mathcal{C}}(X, Y)$ or simply $\Hom(X, Y)$ the set of morphisms between an object $X$ and an object $Y$. If $X=Y$, $\Hom(X,X)$ is also written as $\End(X)$. Compositions of morphisms will be read from right to left; namely, if $f \in \Hom(X, Y), g \in \Hom(Y, Z)$, then $g\circ f \in \Hom(X, Z)$. \subsection{Pivotal Categories and Graphical Calculus} \label{subsec:pivotal} Let $\mathcal{C}$ be a rigid monoidal category, that is, a category endowed with the tuple $(\otimes, \mathbf{1}, a, l, r, (\cdot)^*)$, where $\otimes: \mathcal{C} \times \mathcal{C} \longrightarrow \mathcal{C}$ is the tensor product functor, $\mathbf{1}$ is the unit object, and $a, l, r$ are natural isomorphisms: \begin{align*} a_{X,Y,Z}: & (X \otimes Y) \otimes Z \overset{\simeq}{\longrightarrow} X \otimes (Y \otimes Z), \\ l_X: &\mathbf{1} \otimes X \overset{\simeq}{\longrightarrow} X,\\ r_X: & X \otimes \mathbf{1} \overset{\simeq}{\longrightarrow} X, \quad X, Y, Z \in \mathcal{C}^0, \end{align*} which satisfy the Pentagon Equation and Triangle Equation, and $(\cdot)^*$ is the contravariant functor of taking duals.
For each object $X$, denote the birth (also called co-evaluation) and death (also called evaluation) morphisms by $b_X$ and $d_X$ respectively: \begin{align*} b_X: \mathbf{1} \longrightarrow X \otimes X^*, \qquad d_X: X^* \otimes X \longrightarrow \mathbf{1}. \end{align*} A pivotal structure on a rigid monoidal category is a natural isomorphism $\delta: Id_{\mathcal{C}} \longrightarrow (\cdot)^{**}$. Thus for each object $X$, there is an isomorphism: \begin{align*} \delta_{X}: X \overset{\simeq}{\longrightarrow} X^{**}, \end{align*} such that $\delta_{X} \otimes \delta_{Y} \overset{\cdot}{=} \delta_{X \otimes Y}$, where \lq$\overset{\cdot}{=}$' means \lq equal' up to a composition of certain canonical isomorphisms. With the pivotal structure $\delta$, we can define another set of \lq birth' and \lq death' morphisms, \begin{align*} b_{X}': \mathbf{1} \longrightarrow X^* \otimes X, \qquad d_{X}': X \otimes X^* \longrightarrow \mathbf{1}, \end{align*} where $b_{X}':= (id_{X^*} \otimes \delta_{X}^{-1})b_{X^*}$, $d_{X}':= d_{X^*}(\delta_X \otimes id_{X^*})$. A pivotal category is a rigid monoidal category with a chosen pivotal structure. Given a morphism $f \in \End(X)$, define the left trace $\Tr^l(f) \in \End(\mathbf{1})$ by $\Tr^l(f) = d_X(id_{X^*} \otimes f)b_X'$ and the right trace $\Tr^r(f) \in \End(\mathbf{1})$ by $\Tr^r(f) = d_{X}'(f \otimes id_{X^*})b_X$. Then a pivotal category is called spherical if $\Tr^l(f) = \Tr^r(f)$ for all endomorphisms $f$. Every pivotal category $\mathcal{C}$ is equivalent (in a properly defined sense) to a strict pivotal category $\hat{\mathcal{C}}$ where all the structural isomorphisms $a, l, r, \delta$ are the identity map \cite{barrett1999spherical}\cite{ng2007higher}. If $\mathcal{C}$ is spherical, so is $\hat{\mathcal{C}}$. In a strict pivotal category $\mathcal{C}$, graphical calculus is a convenient way to represent and manipulate morphisms.
We sketch the rules for graphical calculus following the conventions in \cite{turaev1994quantum}\cite{turaev2010on}. A graph diagram is a collection of rectangles \footnote{In \cite{turaev1994quantum}, they are called coupons.} and directed arcs (including circles) in $\mathbb{R} \times [0,1]$, satisfying the conditions: \begin{itemize} \item The longer sides of each rectangle are parallel to $\mathbb{R} \times \{0\}$ and the shorter sides perpendicular to $\mathbb{R} \times \{0\}$. \item The rectangles and arcs are mutually disjoint from each other, except that every non-circular arc starts and ends transversely either on $\mathbb{R} \times \{0,1\}$ or on the horizontal sides of a rectangle. \end{itemize} A $\mathcal{C}$-colored (or colored, for short) graph diagram is a graph diagram $\mathcal{G}$ which further satisfies: \begin{itemize} \item Each arc is labeled by an object of $\mathcal{C}$ and each rectangle is labeled by a morphism obeying the following rule. For a rectangle $\gamma$ labeled by $f$, denote by $\beta_1, \cdots, \beta_{m}$ the set of arcs incident to the bottom of $\gamma$, and by $\beta^1, \cdots, \beta^n$ the set of arcs incident to the top of $\gamma$, both listed from left to right. Note that some $\beta_{i_1}$ and $\beta_{i_2}$ might be the same arc if this arc intersects $\gamma$ twice. For each $\beta_i$ (resp. $\beta^j$), define $\epsilon_i$ (resp. $\epsilon^j$) to be $+1$ if $\beta_i$ (resp. $\beta^j$) is directed downwards near the rectangle, and $-1$ otherwise. Denote the label on the arc $\beta_i$ (resp. $\beta^j$) by $X_i$ (resp. $X^j$), then we require \begin{align*} f \in \Hom(\bigotimes\limits_{i=1}^{m} X_i^{\epsilon_i}, \bigotimes\limits_{j=1}^{n} (X^j)^{\epsilon^j}), \end{align*} where for an object $X$, $X^{+1}:= X$ and $X^{-1}:= X^*$. If $m=0$ or $n=0$, then let the corresponding object be the unit $\mathbf{1}$.
For instance, in Figure \ref{fig:examplegraph}, we represent the labels by putting an object next to each arc and a morphism inside each rectangle. Then $f \in \Hom(X_1 \otimes X_2 \otimes X_3^*,\, Y_1^* \otimes Y_2)$. Note that we have omitted and will never draw the lines $\mathbb{R} \times \{0,1\}$. \end{itemize} Given a $\mathcal{C}$-colored graph diagram $\mathcal{G}$, denote by $\beta_1, \cdots, \beta_{m}$ (resp. $\beta^1, \cdots, \beta^{n}$) the set of arcs incident to $\mathbb{R} \times \{0\}$ (resp. $\mathbb{R} \times \{1\}$), and define the $X_i\,$'s, $\epsilon_i\,$'s, $X^j\,$'s, and $\epsilon^j\,$'s in the same way as above. Let $S(\mathcal{G}) = \bigotimes\limits_{i=1}^{m} X_i^{\epsilon_i}$ and $ T(\mathcal{G}) = \bigotimes\limits_{j=1}^{n} (X^j)^{\epsilon^j}$. Then $\mathcal{G}$ can be interpreted as a morphism $F(\mathcal{G}) \in \Hom(S(\mathcal{G}), T(\mathcal{G}))$ by the following rules: \begin{enumerate} \item If there is a color-preserving isotopy between $\mathcal{G}$ and $\mathcal{G}'$ relative to $\mathbb{R} \times \{0,1\}$, then $F(\mathcal{G}) = F(\mathcal{G}')$. \item If $\mathcal{G}$ is cut into two colored graph diagrams $\mathcal{G}_1$ (lower) and $\mathcal{G}_2$ (upper) by the line $\mathbb{R} \times \{\frac{1}{2}\}$, then $F(\mathcal{G}) = F(\mathcal{G}_2)F(\mathcal{G}_1)$. \item If $\mathcal{G}$ is separated into two disjoint colored graph diagrams $\mathcal{G}_1$ (left) and $\mathcal{G}_2$ (right) by the line $\{0\} \times \mathbb{R}$, then $F(\mathcal{G}) = F(\mathcal{G}_1) \otimes F(\mathcal{G}_2)$. \item Reversing the direction of an arc and changing its color to the dual at the same time does not change $F(\mathcal{G})$. \item If $\mathcal{G}$ is one of the diagrams in Figure \ref{fig:rigid pic}, then $F(\mathcal{G})$ is the corresponding morphism listed below. \end{enumerate} It is not hard to see that the above rules uniquely determine $F(\mathcal{G})$. However, it takes more effort to check that these rules are consistent.
See \cite{turaev2010on} for more details. \begin{figure} \centering \begin{tikzpicture}[scale = 0.5] \begin{scope}[decoration={ markings, mark=at position 0.5 with {\arrow{>}}}] \draw [postaction={decorate}] (0,1) -- (0,0) node[left]{$X_1$}; \draw [postaction={decorate}] (1.5,1) -- (1.5,0) node[left]{$X_{2}$}; \draw [postaction={decorate}] (3,0)node[right]{$X_3$} -- (3,1) ; \draw [postaction={decorate}] (0,2) -- (0,3) node[left]{$Y_1$}; \draw [postaction={decorate}] (3,3)node[right]{$Y_2$} -- (3,2) ; \draw (-0.5,1) rectangle (3.5,2)node[pos = 0.5]{$f$}; \draw [postaction={decorate}] (5, 0) -- (5,3)node[right]{$Z$}; \draw [postaction={decorate}] (6.5,1.5) circle (0.5cm) ; \draw (7.2, 1.5) node[right]{$W$}; \end{scope} \end{tikzpicture} \caption{Example of a colored graph diagram. $f$ is a morphism from $X_1 \otimes X_2 \otimes X_3^*$ to $ Y_1^* \otimes Y_2$, and the whole diagram is a morphism from $X_1 \otimes X_2 \otimes X_3^* \otimes Z^*$ to $Y_1^* \otimes Y_2 \otimes Z^*$.}\label{fig:examplegraph} \end{figure} \begin{figure} \centering \begin{tikzpicture}[scale = 0.5] \begin{scope}[xshift = -0.5cm, decoration={ markings, mark=at position 0.5 with {\arrow{>}}}] \draw [postaction={decorate}] (-0.2,1) -- (-0.2,0)node[left]{$X_1$}; \draw [postaction={decorate}] (1,0)node[right]{$X_m$} -- (1,1); \draw (0.5, 0.5) node{$\cdots$}; \draw (-0.5,1) rectangle (1.5, 2) node[pos = 0.5]{$f$}; \draw [postaction={decorate}] (-0.2,3) node[left]{$Y_1$} -- (-0.2,2); \draw [postaction={decorate}] (1,3)node[right]{$Y_n$} -- (1,2); \draw (0.5, 2.5) node{$\cdots$}; \draw (0.5,-2) node{$a)\; f$}; \end{scope} \begin{scope}[xshift = 3cm, decoration={ markings, mark=at position 0.5 with {\arrow{>}}}] \draw [postaction={decorate}] (0,3) -- (0,0) node[right]{$X$}; \draw (0,-2) node{$b)\; id_X$}; \end{scope} \begin{scope}[xshift = 6cm, decoration={ markings, mark=at position 0.5 with {\arrow{>}}}] \draw [postaction={decorate}] (0,1) node[left]{$X$} arc(-180:0:1cm) ; \draw (1,-2) node{$c)\; 
b_X$}; \end{scope} \begin{scope}[xshift = 10cm, decoration={ markings, mark=at position 0.5 with {\arrow{>}}}] \draw [postaction={decorate}] (0,0) arc(180:0:1cm) node[right]{$X$}; \draw (1,-2) node{$d)\; d_X$}; \end{scope} \begin{scope}[xshift = 14cm, decoration={ markings, mark=at position 0.5 with {\arrow{>}}}] \draw [postaction={decorate}] (2,1) node[right]{$X$} arc(0:-180:1cm); \draw (1,-2) node{$e)\; b_X'$}; \end{scope} \begin{scope}[xshift = 19cm, decoration={ markings, mark=at position 0.5 with {\arrow{>}}}] \draw [postaction={decorate}] (2,0) arc(0:180:1cm) node[left]{$X$}; \draw (1,-2) node{$f)\; d_X'$}; \end{scope} \end{tikzpicture} \caption{$F(\mathcal{G})$ for some colored graph diagrams} \label{fig:rigid pic} \end{figure} A graph diagram is called closed if it does not have any free ends, i.e., it is disjoint from $\mathbb{R} \times \{0,1\}$. If $\mathcal{G}$ is closed, then $F(\mathcal{G}) \in \Hom(\mathbf{1},\mathbf{1})$. We can view closed colored graph diagrams as sitting in $\mathbb{R}^2$, and isotopy in $\mathbb{R}^2$ does not change its value. For instance, given $f \in \End(X)$, the left and right trace of $f$ are represented by the two closed colored graph diagrams in Figure \ref{fig:trace}. Note that these two diagrams are not isotopic in $\mathbb{R}^2$. However they become isotopic when we embed $\mathbb{R}^2$ in $\mathbb{S}^2 = \mathbb{R}^2 \cup \{\infty\}$. Thus, a necessary condition for isotopic closed graph diagrams in $\mathbb{S}^2$ to represent the same morphism is $\Tr^l = \Tr^r$, i.e., $\mathcal{C}$ is spherical. In fact, this condition is also sufficient. Let $\mathcal{G}$ be a closed colored graph diagram in $\mathbb{S}^2$ (defined similarly as above), then we can remove a point $pt$ in the complement of $\mathcal{G}$, and view $\mathcal{G}$ as in $\mathbb{R}^2 = \mathbb{S}^2 \setminus \{pt\}$, and interpret $\mathcal{G}$ as a morphism $F(\mathcal{G}) \in \End(\mathbf{1})$. 
In \cite{turaev2010on} it is shown that if $\mathcal{C}$ is spherical, then $F(\mathcal{G})$ is well-defined for $\mathcal{G}$ in $\mathbb{S}^2$. If $\mathcal{C}$ is spherical, then define $\Tr(f):= \Tr^l(f) = \Tr^r(f).$ In particular, for an object $X$, define the dimension of $X$ to be $d_X:= \Tr(id_X)$, which is represented by a circle labeled by $X$. The direction of the circle is irrelevant since $\mathcal{C}$ is spherical. Also, by the rules of graphical calculus, $d_{X} = d_{X^*}$. \begin{figure} \centering \begin{tikzpicture}[scale = 0.5] \begin{scope}[decoration={ markings, mark=at position 0.5 with {\arrow{>}}}] \draw [postaction={decorate}] (2,1) arc(0:-180:1cm); \draw [postaction={decorate}] (0,4) arc(180:0:1cm); \begin{scope}[xshift = 2cm, yshift = 1cm] \draw [postaction={decorate}] (0,1) -- (0,0)node[left]{$X$}; \draw (-0.5,1) rectangle (0.5, 2) node[pos = 0.5]{$f$}; \draw [postaction={decorate}] (0,3) node[left]{$X$} -- (0,2); \end{scope} \draw (0,1) -- (0,4); \end{scope} \begin{scope}[xshift = 5cm, decoration={ markings, mark=at position 0.5 with {\arrow{>}}}] \draw [postaction={decorate}] (0,1) arc(-180:0:1cm); \draw [postaction={decorate}] (2,4) arc(0:180:1cm); \begin{scope}[xshift = 0cm, yshift = 1cm] \draw [postaction={decorate}] (0,1) -- (0,0)node[left]{$X$}; \draw (-0.5,1) rectangle (0.5, 2) node[pos = 0.5]{$f$}; \draw [postaction={decorate}] (0,3) node[left]{$X$} -- (0,2); \end{scope} \draw (2,1) -- (2,4); \end{scope} \end{tikzpicture} \caption{$\Tr^l(f)$ (Left) and $\Tr^r(f)$ (Right)}\label{fig:trace} \end{figure} \subsection{Multi-fusion Categories} \label{subsec:multifusion} Let $\mathcal{C}$ be a rigid monoidal category. $\mathcal{C}$ is called $\mathbb{C}$-linear if all $\Hom$ sets are finite dimensional vector spaces over $\mathbb{C}$, and the composition and tensor product of morphisms are $\mathbb{C}$-linear w.r.t each component. An object $X$ in a $\mathbb{C}$-linear category is called simple if $\End(X) = \mathbb{C}\, id_X$. 
An idempotent, i.e., a morphism $f \in \End(Y)$ such that $f^2 = f$, is called split if there are morphisms $g \in \Hom(Z, Y), h \in \Hom(Y, Z)$ for some $Z$, such that $hg = id_{Z}$ and $gh = f$. A $\mathbb{C}$-linear category is called semi-simple if it has direct sums, all idempotents split, and every object is isomorphic to a direct sum of simple objects \cite{muger2003subfactors}. There is a unique zero object denoted by $0$ in a semi-simple category. \begin{remark} \begin{itemize} \item Another way to define a semi-simple category is to require a priori the category to be Abelian. This is equivalent to the current definition \cite{muger2003subfactors}. We avoid the terminology \lq Abelian category' since we will not deal with kernels and cokernels. \item Given a $\mathbb{C}$-linear category $\mathcal{C}$, there is a canonical way to embed it as a full subcategory into a category $\DSC{\mathcal{C}}$ which has direct sums \cite{gabriel1997representations}. Roughly speaking, to define $\DSC{\mathcal{C}}$ one just formally introduces direct sums of objects of $\mathcal{C}$ as objects of $\DSC{\mathcal{C}}$ and defines the morphism spaces in the most natural way. There is also a standard way, called idempotent completion (or Karoubi envelope or Cauchy completion), to embed $\mathcal{C}$ into a category $\IC{\mathcal{C}}$ in which all idempotents split. Moreover, $\IC{(\DSC{\mathcal{C}})} = \DSC{(\IC{\mathcal{C}})}$. We will discuss this further in Section \ref{subsec:idempotent}. \end{itemize} \end{remark} \begin{definition} A multi-fusion category over $\mathbb{C}$ is a $\mathbb{C}$-linear rigid monoidal category which is semi-simple and has finitely many isomorphism classes of simple objects. A fusion category is a multi-fusion category in which the unit $\mathbf{1}$ is simple. \end{definition} Let $\mathcal{C}$ be a multi-fusion category, and $\mathbf{1} = \bigoplus\limits_{i \in I} \mathbf{1}_i$ where the $\mathbf{1}_i\,$'s are simple objects. 
One can show that $\mathbf{1}_i \otimes \mathbf{1}_j \simeq \delta_{i,j} \mathbf{1}_i$. Moreover, for any simple object $X$, there are unique $i, j \in I$ such that $\mathbf{1}_i \otimes X \simeq X \simeq X \otimes \mathbf{1}_j$. Let $\mathcal{C}_{ij}$ be the full subcategory spanned by such simple objects. Then we have \begin{align*} \mathcal{C} = \bigoplus\limits_{i,j \in I} \mathcal{C}_{ij}. \end{align*} It follows that $\mathcal{C}_{ik} \otimes \mathcal{C}_{kj} \subset \mathcal{C}_{ij}$. Each $\mathcal{C}_{ii}$ is a fusion category with the unit $\mathbf{1}_i$ and each $\mathcal{C}_{ij}$ is a $\mathcal{C}_{ii}$-$\mathcal{C}_{jj}$ bi-module category. We call $\mathcal{C}$ an $|I| \times |I|$ multi-fusion category with index set $I$, $\mathcal{C}_{ij}$ the sector indexed by $(i,j)$, and call an object homogeneous if it belongs to some sector. If two homogeneous objects are from different sectors, then the only morphism between them is $0$. More generally, if $X = \bigoplus\limits_{i,j \in I}X_{ij}, \, Y = \bigoplus\limits_{i,j \in I}Y_{ij}, X_{ij}, Y_{ij} \in \mathcal{C}_{ij}$, then any morphism $f \in \Hom(X,Y)$ can be written as $f = (f_{ij})_{i,j \in I}$, where $f_{ij} \in \Hom(X_{ij}, Y_{ij})$. Here are some examples of multi-fusion categories. \begin{example} \label{example} \begin{enumerate} \item \textbf{The \boldmath{$n \times n$}-matrix \boldmath{$\mathcal{M}_n$}}: the index set is $I = \{1,2,\cdots, n\}$. Each $(i,j)$-sector contains exactly one simple object $E_{ij}$. The tensor product obeys the matrix multiplication rule: $E_{ik} \otimes E_{k'j} = \delta_{k,k'}E_{ij}$. Moreover, $\mathbf{1}_i = E_{ii}, \, E_{ij}^* = E_{ji}$. All structural isomorphisms and the $b_A\,$'s, $d_{A}\,$'s are identity maps. 
\item \textbf{\boldmath{$G$}-graded fusion category}: For a finite group $G$, let $\mathcal{C} = \bigoplus\limits_{g} \mathcal{C}_g$ be a $G$-graded fusion category, that is, a fusion category such that $\mathcal{C}_{g} \otimes \mathcal{C}_{g'} \subset \mathcal{C}_{gg'}$. Define a multi-fusion category $\tilde{\mathcal{C}}$ whose index set is $G$ and whose $(g,g')$-sector is $\mathcal{C}_{g^{-1}g'}$. The tensor product and dual in $\tilde{\mathcal{C}}$ are the same as those in $\mathcal{C}$. Note that all the diagonals $\tilde{\mathcal{C}}_{gg}$ are copies of $\mathcal{C}_{e}$. \end{enumerate} \end{example} Now let $\mathcal{C}$ be a multi-fusion category which is also pivotal. Note that $\mathbf{1}$ is not a homogeneous object. If $A \in \mathcal{C}_{ij}$, then $A^* \in \mathcal{C}_{ji},\, A \otimes A^* \in \mathcal{C}_{ii}$, thus the birth map $b_A: \mathbf{1} \longrightarrow A \otimes A^*$ can be equivalently viewed as a morphism in $\Hom(\mathbf{1}_i, A \otimes A^*)$ since all other components of $b_A$ are zero. Similarly, we regard $b_{A}' \in \Hom(\mathbf{1}_j, A^* \otimes A),\, d_A \in \Hom(A^* \otimes A, \mathbf{1}_j),\, d_A' \in \Hom(A \otimes A^*, \mathbf{1}_i)$. Since $\mathcal{C}$ is pivotal, graphical calculus still makes sense in $\mathbb{R} \times [0,1]$. However, since we will be mostly interested in homogeneous objects and also to avoid the zero object, we refine the notion of graphical calculus reviewed in Section \ref{subsec:pivotal}. Let $\mathcal{G}$ be a graph diagram in $\mathbb{R} \times [0,1]$ endowed with the standard counterclockwise orientation. The complement of $\mathcal{G}$ is divided into a disjoint union of connected regions, which are denoted by $R_1, R_2, \cdots$. 
Then a colored graph diagram is a graph diagram $\mathcal{G}$ satisfying the following condition: \begin{itemize} \item Each $R_i$ is labeled by an index $g_i \in I$, each arc is labeled by an object of $\mathcal{C}$, and each rectangle is labeled by a morphism obeying the following rules. For each arc $\beta$, let $R_i, R_j$ be the two regions bounded by $\beta$ such that the direction of $\beta$ together with the arrow pointing from $R_i$ to $R_j$ matches the orientation of $\mathbb{R} \times [0,1]$. Then we require the object labeling $\beta$ to be from $\mathcal{C}_{g_i g_j}$. See Figure \ref{fig:arc} for an illustration. The requirement on the labeling of rectangles is the same as that mentioned in Section \ref{subsec:pivotal}. \end{itemize} \begin{figure} \centering \begin{tikzpicture}[scale = 0.5] \begin{scope}[decoration={ markings, mark=at position 0.5 with {\arrow{>}}}] \draw [postaction={decorate}] (0,2)node[left]{$A$} -- (0,0); \draw (-1,1)node{$i$}; \draw (1,1)node{$j$}; \draw (0,-1)node{$A \in \mathcal{C}_{ij}$}; \end{scope} \begin{scope}[xshift = 6cm,decoration={ markings, mark=at position 0.5 with {\arrow{>}}}] \draw [postaction={decorate}] (0,0) -- (0,2)node[left]{$B$}; \draw (-1,1)node{$i$}; \draw (1,1)node{$j$}; \draw (0,-1)node{$B \in \mathcal{C}_{ji}$}; \end{scope} \end{tikzpicture} \caption{Rules of labeling an arc in a graph diagram.}\label{fig:arc} \end{figure} The rules for interpreting colored graph diagrams as morphisms are the same as before. One can check that the additional requirement on the labeling of arcs is consistent with these rules. For instance, if $A \in \mathcal{C}_{ij}$ labels an arc $\beta$, then reversing the direction of $\beta$ and changing $A$ to $A^* \in \mathcal{C}_{ji}$ still yields a well-defined coloring. Let $X \in \mathcal{C}_{ij}, f \in \End(X)$; then one sees directly that $\Tr^l(f) \in \End(\mathbf{1}_j), \Tr^r(f) \in \End(\mathbf{1}_i)$. See Figure \ref{fig:trace2}. 
Therefore, if $i \neq j$, then $\Tr^l(f)$ can never be equal to $\Tr^r(f)$. We conclude that a pivotal $n \times n$ multi-fusion category for $n > 1$ cannot be spherical according to the existing definition of sphericity. However, since the $\mathbf{1}_i\,'$s are simple, we have $\Tr^l(f) = |\Tr^l(f)| id_{\mathbf{1}_j}, \Tr^r(f) = |\Tr^r(f)| id_{\mathbf{1}_i}$ for some complex numbers $|\Tr^l(f)|,\, |\Tr^r(f)|$. When interpreting a closed colored graph diagram as a morphism in some $\End(\mathbf{1}_i)$, which equals some complex number times $id_{\mathbf{1}_i}$, what we are really interested in is the complex number rather than the morphism itself. This motivates us to propose the following weakened definition: \begin{definition} \label{def:spherical_multi} A spherical multi-fusion category $(\text{SMFC})$ is a pivotal multi-fusion category $\mathcal{C}$ such that $|\Tr^l(f)| = |\Tr^r(f)|$ for all $f \in \End(X), X \in \mathcal{C}_{ij}$. Define the trace of $f$ to be $\Tr(f):= |\Tr^l(f)|$, and the dimension of $X$ to be $d_{X}:= \Tr(id_{X})$. 
\end{definition} \begin{figure} \centering \begin{tikzpicture}[scale = 0.5] \begin{scope}[decoration={ markings, mark=at position 0.5 with {\arrow{>}}}] \draw [postaction={decorate}] (2,1) arc(0:-180:1cm); \draw [postaction={decorate}] (0,4) arc(180:0:1cm); \begin{scope}[xshift = 2cm, yshift = 1cm] \draw [postaction={decorate}] (0,1) -- (0,0)node[left]{$X$}; \draw (-0.5,1) rectangle (0.5, 2) node[pos = 0.5]{$f$}; \draw [postaction={decorate}] (0,3) node[left]{$X$} -- (0,2); \end{scope} \draw (0,1) -- (0,4); \draw (-1,2.5)node{$j$}; \draw (1,2.5)node{$i$}; \end{scope} \begin{scope}[xshift = 5cm, decoration={ markings, mark=at position 0.5 with {\arrow{>}}}] \draw [postaction={decorate}] (0,1) arc(-180:0:1cm); \draw [postaction={decorate}] (2,4) arc(0:180:1cm); \begin{scope}[xshift = 0cm, yshift = 1cm] \draw [postaction={decorate}] (0,1) -- (0,0)node[left]{$X$}; \draw (-0.5,1) rectangle (0.5, 2) node[pos = 0.5]{$f$}; \draw [postaction={decorate}] (0,3) node[left]{$X$} -- (0,2); \end{scope} \draw (2,1) -- (2,4); \draw (3,2.5)node{$i$}; \draw (1,2.5)node{$j$}; \end{scope} \end{tikzpicture} \caption{$\Tr^l(f)$ (Left) and $\Tr^r(f)$ (Right) for $X \in \mathcal{C}_{ij}$}\label{fig:trace2} \end{figure} For fusion categories, the above definition coincides with the existing definition of sphericity. Just as in the case of spherical categories, graphical calculus in a $\text{SMFC}$ can also be generalized from the plane to $\mathbb{S}^2$. One point to keep in mind is that when interpreting closed diagrams, it is the scalar but not the morphism that remains invariant under isotopy. At the end of this section, we introduce a special class of $\text{SMFC}$s, which are the ingredients that will be used to construct invariants of $3$-manifolds in Section \ref{sec:main}. 
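The sector bookkeeping behind these notions can be made concrete in the matrix category $\mathcal{M}_n$ of Example \ref{example}, whose fusion rules are simple enough to verify mechanically. A minimal sketch (our own ad hoc encoding, not part of the text: the simple object $E_{ij}$ is the pair $(i,j)$, and the Python value None stands for the zero object):

```python
from itertools import product

# The matrix category M_n: a simple object E_ij is encoded as the pair (i, j),
# and None encodes the zero object.
n = 3
simples = [(i, j) for i in range(n) for j in range(n)]

def tensor(a, b):
    """E_ik (x) E_k'j = delta_{k,k'} E_ij."""
    (i, k), (kp, j) = a, b
    return (i, j) if k == kp else None

def dual(a):
    i, j = a
    return (j, i)       # E_ij^* = E_ji

for a in simples:
    i, j = a
    assert tensor((i, i), a) == a == tensor(a, (j, j))   # 1_i = E_ii act as units
    assert tensor(a, dual(a)) == (i, i)                  # A (x) A^* lies in C_ii

# The fusion rule is associative (with None absorbing, as the zero object does)
for a, b, c in product(simples, repeat=3):
    ab, bc = tensor(a, b), tensor(b, c)
    assert (tensor(ab, c) if ab else None) == (tensor(a, bc) if bc else None)
```

Since every simple object here has dimension $1$, all the scalars $|\Tr^l|$ and $|\Tr^r|$ agree trivially, illustrating the weakened sphericity of Definition \ref{def:spherical_multi}.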
Let $\Label{\mathcal{C}}$ be a complete set of representatives, i.e., a set of simple objects that contains exactly one representative from each isomorphism class of simple objects, and let $\Label{\mathcal{C}}_{ij}$ be the subset of $\Label{\mathcal{C}}$ whose objects are from $\mathcal{C}_{ij}$, so that $\Label{\mathcal{C}} = \bigsqcup\limits_{i,j \in I} \Label{\mathcal{C}}_{ij}$. Define the dimension of $\mathcal{C}_{ij}$ to be $K(\mathcal{C}_{ij}):= \sum\limits_{a \in \Label{\mathcal{C}}_{ij}} d_a^2$, the dimension of the $i$-th row to be $K(\mathcal{C}_i):= \sum\limits_{j \in I}K(\mathcal{C}_{ij})$, and the dimension of $\mathcal{C}$ to be $K(\mathcal{C}) = \sum\limits_{i,j \in I} K(\mathcal{C}_{ij})$. \begin{definition} A $\text{SMFC}$ $\mathcal{C}$ is called $\text{special}$ if $K(\mathcal{C}_i)$ is the same for all $i \in I$. \end{definition} \section{Construction of Quantum Invariants} \label{sec:main} The Turaev-Viro-Barrett-Westbury $(\text{TVBW})$ invariant is a quantum invariant of $3$-manifolds constructed from a spherical fusion category. In this section, we generalize this construction to produce a quantum invariant of $3$-manifolds from a $\text{special}$ $\text{SMFC}$ as defined in Section \ref{subsec:multifusion}. Let $\mathcal{C} = \bigoplus\limits_{i,j \in I} \mathcal{C}_{ij}$ be a $\text{special}$ $\text{SMFC}$ with index set $I$. Recall that $\mathcal{C}$ is $\text{special}$ if $K(\mathcal{C}_i)$ is the same for all $i \in I$. In this case, denote $K(\mathcal{C}_i)$ by $K$. As before, let $\Label{\mathcal{C}} = \bigsqcup\limits_{i,j \in I} \Label{\mathcal{C}}_{ij}$ be a complete set of representatives of the simple objects, with $\Label{\mathcal{C}}_{ij} \subset \mathcal{C}_{ij}$. 
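The $\text{special}$ condition holds automatically in the $G$-graded example $\tilde{\mathcal{C}}$ of Example \ref{example}: the $(g,g')$-sector is $\mathcal{C}_{g^{-1}g'}$, so every row dimension equals $\sum_{h \in G} K(\mathcal{C}_h) = K(\mathcal{C})$. A quick numerical sketch of this bookkeeping (the group $S_3$ and the sector dimensions below are arbitrary placeholder choices of ours, not taken from any actual category):

```python
from itertools import permutations

# Row dimensions of the multi-fusion category C~ built from a G-graded fusion
# category C.  G = S_3 as permutation tuples; K[h] plays the role of K(C_h).
G = list(permutations(range(3)))
mul = lambda g, h: tuple(g[h[i]] for i in range(3))              # composition g o h
inv = lambda g: tuple(sorted(range(3), key=lambda i: g[i]))      # inverse permutation

K = {h: 1.0 + idx for idx, h in enumerate(G)}                    # placeholder K(C_h)

# The (g, g')-sector of C~ is C_{g^{-1} g'}, so the g-th row dimension is:
def row_dim(g):
    return sum(K[mul(inv(g), gp)] for gp in G)

rows = [row_dim(g) for g in G]
assert all(abs(r - rows[0]) < 1e-9 for r in rows)   # all rows agree: C~ is special
assert abs(rows[0] - sum(K.values())) < 1e-9        # each row dimension = K(C)
```

The point is simply that $g' \mapsto g^{-1}g'$ permutes $G$, so each row sums the same sector dimensions in a different order.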
By definition, $K(\mathcal{C}_i) = \sum\limits_{j \in I} \sum\limits_{a \in \Label{\mathcal{C}}_{ij}} d_a^2$. For any two homogeneous objects $X,Y \in \mathcal{C}_{ij}$, a pairing on $\Hom(X,Y) \times \Hom(Y,X)$ is defined as: \begin{align} \label{equ:pairingdef} \langle\;,\;\rangle: \Hom(X,Y) \times \Hom(Y,X) &\longrightarrow \mathbb{C} \nonumber \\ (\ \phi\quad\; , \quad\psi\ ) \quad\qquad &\mapsto \Tr(\phi\psi) \end{align} Recall from Section \ref{subsec:multifusion} that $\Tr(\phi\psi) = |\Tr^l(\phi\psi)| = |\Tr^r(\phi\psi)|$. The pairing is non-degenerate. Thus, there are natural isomorphisms $\Hom(X,Y)\simeq \Hom(Y,X)^*$, $\Hom(Y,X) \simeq \Hom(X,Y)^*$. The $\text{TVBW}$ invariant from spherical fusion categories is defined on a triangulation of $3$-manifolds \cite{barrett1996invariants}, or more generally on a polytope decomposition \cite{kirillov2010turaev}. The invariant introduced below can likewise be defined on either triangulations or polytope decompositions; for simplicity, we restrict ourselves to triangulations. Let $M$ be a closed oriented $3$-manifold. By a triangulation of $M$ is meant a $\Delta$-complex whose underlying space is homeomorphic to $M$. An ordered triangulation is one whose vertices are ordered by $0,1, \cdots$. Let $\mathcal{T}$ be an ordered triangulation of $M$. Denote by $\mathcal{T}^i$ the set of $i$-simplices of $\mathcal{T}$. For each $i$-simplex $\sigma$ of $\mathcal{T}$, the ordering on $\mathcal{T}^0$ induces a relative ordering on the vertices of $\sigma$, by which we identify $\sigma$ with the standard $i$-simplex $(0,1,\cdots, i)$. The invariant of $3$-manifolds to be defined will only depend on this relative ordering for each simplex. \begin{definition} Let $M, \mathcal{T}, \mathcal{C}$ be as above and $\Label{\mathcal{C}}$ be an arbitrary complete set of representatives. 
A $\mathcal{C}$-coloring of the pair $(M, \mathcal{T})$ is a pair of functions $F = (f^0, f^1)$, $f^0: \mathcal{T}^0 \longrightarrow I$, $f^1: \mathcal{T}^1 \longrightarrow \Label{\mathcal{C}}$, such that for each $1$-simplex $\alpha = (01)$ with the induced ordering on its vertices, \begin{align*} f^1_{01} \in \Label{\mathcal{C}}_{f^0_0,f^0_1}. \end{align*} \end{definition} In the above definition, we have identified a $1$-simplex with the standard one $(01)$. Under the absolute ordering, a $1$-simplex whose vertices are ordered by $(i,j), \ i < j$ is subject to the condition that $f^1_{ij} \in \Label{\mathcal{C}}_{f^0_i,f^0_j}$. In the following we will use this identification for other simplices as well. Assume a coloring $F$ has been given. For each $2$-simplex $\beta = (012)$, we have $f^1_{ij} \in \Label{\mathcal{C}}_{f^0_i,f^0_j}$, $0 \leq i < j \leq 2$; then $f^1_{01} \otimes f^1_{12} \in \mathcal{C}_{f^0_0,f^0_2}$ is in the same sector as $f^1_{02}$. Define, \begin{align*} V^{+}_{F}(\beta) = \Hom(f^1_{02}, f^1_{01} \otimes f^1_{12}), \qquad V^{-}_{F}(\beta) = \Hom(f^1_{01} \otimes f^1_{12},f^1_{02}). \end{align*} Then by the non-degenerate pairing in Equation \ref{equ:pairingdef}, $V^{+}_{F}(\beta) \simeq V^{-}_{F}(\beta)^*$, $V^{-}_{F}(\beta) \simeq V^{+}_{F}(\beta)^*$. For each $3$-simplex $\gamma = (0123)$, define a linear functional $\tilde{Z}^{+}_{F}(\gamma)$, \begin{align*} \tilde{Z}^{+}_{F}(\gamma): V^{-}_F(123) \otimes V^{+}_F(023) \otimes V^{-}_F(013) \otimes V^{+}_F(012) \longrightarrow \mathbb{C} \end{align*} as follows. 
For $\phi_{123} \otimes \phi_{023}\otimes \phi_{013}\otimes \phi_{012}$ in the domain, define, \begin{align} \label{equ:Zptilde} \tilde{Z}^{+}_{F}(\gamma) (\phi_{123} \otimes \phi_{023}\otimes \phi_{013}\otimes \phi_{012}) = \Tr(\phi_{013}(id \otimes \phi_{123})(\phi_{012} \otimes id)\phi_{023}), \end{align} or graphically as the diagram given in Figure \ref{fig:6jbox} (Left), where the value of $\tilde{Z}^{+}_{F}(\gamma)$ is the evaluation of the diagram with each box colored by the corresponding $\phi_{ijk}$. One can check that the requirement on the coloring makes the composition of the $\phi_{ijk}\,$'s in Equation \ref{equ:Zptilde} well defined. By the non-degenerate pairing, $\tilde{Z}^{+}_{F}(\gamma)$ induces a linear map, \begin{align*} Z^{+}_{F}(\gamma): V^{+}_F(023) \otimes V^{+}_F(012) \longrightarrow V^{+}_F(123) \otimes V^{+}_F(013), \end{align*} such that \begin{align*} \langle Z^{+}_{F}(\gamma)(\phi_{023} \otimes \phi_{012}), \phi_{123} \otimes \phi_{013} \rangle = \tilde{Z}^{+}_{F}(\gamma) (\phi_{123} \otimes \phi_{023}\otimes \phi_{013}\otimes \phi_{012}). \end{align*} Similarly, we define \begin{align*} \tilde{Z}^{-}_{F}(\gamma): V^{+}_F(123) \otimes V^{-}_F(023) \otimes V^{+}_F(013) \otimes V^{-}_F(012) \longrightarrow \mathbb{C} \end{align*} by \begin{align} \label{equ:Zmtilde} \tilde{Z}^{-}_{F}(\gamma) (\phi_{123}' \otimes \phi_{023}'\otimes \phi_{013}'\otimes \phi_{012}') = \Tr(\phi_{023}'(\phi_{012}' \otimes id)(id \otimes \phi_{123}')\phi_{013}'), \end{align} or graphically as the diagram shown in Figure \ref{fig:6jbox} (Right). In the same way, this induces a linear map, \begin{align*} Z^{-}_{F}(\gamma): V^{+}_F(123) \otimes V^{+}_F(013) \longrightarrow V^{+}_F(023) \otimes V^{+}_F(012). \end{align*} \begin{figure} \centering \includegraphics[scale=1]{6jbox} \caption{Graphical definition of $\tilde{Z}^{+}_{F}(0123)$ (Left) and $\tilde{Z}^{-}_{F}(0123)$ (Right). 
Here we put the vertex colors in circles to distinguish them from edge colors.}\label{fig:6jbox} \end{figure} Let $V^{-}_F(\gamma) = V^{+}_F(023) \otimes V^{+}_F(012)$ and $V^{+}_F(\gamma) = V^{+}_F(123) \otimes V^{+}_F(013)$. Define $\epsilon(\gamma) = +$ if the orientation on $\gamma$ induced from that of $M$ coincides with the one determined by the ordering of its vertices, and define $\epsilon(\gamma) = -$ otherwise. Then we have \begin{align*} Z^{\epsilon(\gamma)}_{F}(\gamma): V^{-\epsilon(\gamma)}_F(\gamma) \longrightarrow V^{+\epsilon(\gamma)}_F(\gamma), \end{align*} where we adopt the convention $++ = -- = +,\, +- = -+ = -$. One observation on the definition of $\epsilon(\gamma)$ is as follows. If $\gamma = (0123)$ is a $3$-simplex, then $\epsilon(\gamma)(0123)$ matches the orientation of $M$. A boundary face $\beta = (ijk)$ of $\gamma$ is called positive if its orientation induced by the ordering of its vertices matches $\partial (\epsilon(\gamma)(0123))$, and is called negative otherwise. Then $V_F^{+}(\beta)$ appears as a component in the domain of $Z_F^{\epsilon(\gamma)}(\gamma)$ if it is negative and as a component in the codomain otherwise. Let $V_F = \bigotimes\limits_{\beta \in \mathcal{T}^2} V^{+}_F(\beta)$. Since $M$ is closed, each $2$-simplex $\beta$ is the common face of exactly two $3$-simplices $\gamma_1, \gamma_2$ ($\gamma_1$ could be the same as $\gamma_2$), and moreover, the signs of $\beta$ in $\gamma_1$ and $\gamma_2$ are opposite. Thus, if $V^{+}_F(\beta)$ appears as a component in the domain of $Z^{\epsilon(\gamma_1)}_{F}(\gamma_1)$, then it must appear as a component in the codomain of $Z^{\epsilon(\gamma_2)}_{F}(\gamma_2)$. From this observation, we have $V_F = \bigotimes\limits_{\gamma \in \mathcal{T}^3} V^{-\epsilon(\gamma)}_F(\gamma) =\bigotimes\limits_{\gamma \in \mathcal{T}^3} V^{+\epsilon(\gamma)}_F(\gamma)$ (up to permutation of tensor components or viewed as an unordered tensor product). 
This implies $\bigotimes\limits_{\gamma \in \mathcal{T}^3} Z^{\epsilon(\gamma)}_{F}(\gamma)$ is an endomorphism of $V_F$. \begin{definition} Let $M, \mathcal{T}, \mathcal{C}$ be as above. The partition function of the pair $(M, \mathcal{T})$ is defined to be, \begin{align} \label{equ:partition} Z_{\mathcal{C}}(M,\mathcal{T}) = \sum\limits_{F = (f^0,f^1)} K^{-|\mathcal{T}^0|} \prod\limits_{\alpha \in \mathcal{T}^1} d_{f^1_{\alpha}} \, \Tr(\bigotimes\limits_{\gamma \in \mathcal{T}^3} Z^{\epsilon(\gamma)}_{F}(\gamma)), \end{align} where the summation is over all colorings. \end{definition} \begin{remark} In the $\text{TVBW}$ construction, the relevant factor involving $|\mathcal{T}^0|$ is $K(\mathcal{C}')^{-|\mathcal{T}^0|}$, where $K(\mathcal{C}')$ is the dimension of a spherical fusion category $\mathcal{C}'$. However, in the definition of the current invariant, the corresponding factor is $K^{-|\mathcal{T}^0|}$, where $K = K(\mathcal{C}_i)$ is the dimension of the $i$-th row, i.e., the direct sum $\bigoplus\limits_{j\in I}\mathcal{C}_{ij}$, and we require $K$ to be independent of $i \in I$. This requirement is needed in the proof of invariance of the partition function under the Pachner $1$-$4$ move. \end{remark} The main result is as follows. \begin{theorem} \label{thm:main} The partition function $Z_{\mathcal{C}}(M,\mathcal{T})$ is independent of the choice of $a)$ the complete set of representatives $\Label{\mathcal{C}}$, $b)$ the ordering on the vertices of $\mathcal{T}$, and $c)$ the triangulation $\mathcal{T}$. Therefore, $Z_{\mathcal{C}}(M):= Z_{\mathcal{C}}(M,\mathcal{T})$ is an invariant of closed oriented $3$-manifolds. \begin{proof} The proof is mostly parallel to that in the case of spherical fusion categories in \cite{barrett1996invariants}. To avoid repetition, we only illustrate why $\mathcal{C}$ is required to be $\text{special}$. 
Let $(01234)$ be the standard $4$-simplex whose boundary is partitioned into $\mathcal{T} \sqcup \mathcal{T}'$ with $\mathcal{T} = (0124) \cup (0234), \mathcal{T}' = (1234) \cup (0134) \cup (0123) $. Then $\mathcal{T}$ and $\mathcal{T}'$ share all vertices and edges except that $\mathcal{T}'$ has one edge $(13)$ of its own. Let $F$ be a coloring on $\mathcal{T}$, and $F'$ be an extension of $F$ to a coloring on $\mathcal{T}'$. To emphasize the change of coloring on $(13)$, in the following we will write $\sum_{F'}$ as $\sum_{13}$, with the understanding that the colors on all other simplices are fixed. We also drop the subscript $F$ in $Z^{\pm}_{F}(\cdot)$. As in \cite{barrett1996invariants}, the following two facts hold. \begin{align} \label{equ:Z2-3} Z^+(0124)_{2,3}Z^+(0234)_{1,2} = \sum\limits_{13}d_{13} Z^+(1234)_{1,2}Z^+(0134)_{1,3}Z^+(0123)_{2,3}, \end{align} \begin{align} \label{equ:Zorthogonal} d_{02}\sum\limits_{13} d_{13} Z^{-}(0123)Z^{+}(0123) = id, \end{align} where in the first equation the map on either side is from $V^{+}(034) \otimes V^{+}(023)\otimes V^{+}(012)$ to $V^{+}(234) \otimes V^{+}(124)\otimes V^{+}(014)$, and $Z^+(\cdot)_{i,j}$ means $Z^+(\cdot)$ acts on the $i$-th and $j$-th components. The invariance of $Z_{\mathcal{C}}(M)$ under the Pachner $2$-$3$ move is proved with Equation \ref{equ:Z2-3}. To prove the invariance under the Pachner $1$-$4$ move, it suffices to show, \begin{align} \label{equ:Z1-4} & Z^+(0234) \\ =& \sum\limits_{1,01,12,13,14}K^{-1} d_{01}d_{12}d_{13}d_{14} \Tr_3(Z^-(0124)_{2,3}Z^+(1234)_{1,2}Z^+(0134)_{1,3}Z^+(0123)_{2,3}) \nonumber, \end{align} where $\Tr_3(\cdot)$ means taking partial trace with respect to the $3$rd component, and the summation on the right side is over all colorings which change colors only on the vertex $1$ and on the edges $(01),(12),(13),(14)$. We prove Equation \ref{equ:Z1-4}. \begin{align*} \text{RHS} &\overset{Equ. 
\ref{equ:Z2-3}}{=} \sum\limits_{1,01,12,14}K^{-1} d_{01}d_{12}d_{14} \Tr_3(Z^-(0124)_{2,3}Z^+(0124)_{2,3}Z^+(0234)_{1,2}) \\ &\overset{Equ. \ref{equ:Zorthogonal}}{=} \sum\limits_{1,01,12}K^{-1} d_{01}d_{12}d_{02}^{-1} \Tr_3(Z^+(0234)_{1,2}) \\ &=\sum\limits_{1,01,12}K^{-1} d_{01}d_{12}d_{02}^{-1}\dim {V^+(012)} \quad Z^+(0234)\\ &= Z^+(0234) \\ \end{align*} The last equality above is due to the following property. For fixed $c \in \Label{\mathcal{C}}_{ij}$, \begin{align} \label{equ:fixedc} \sum\limits_{k \in I}\sum\limits_{\substack{a \in \Label{\mathcal{C}}_{ik}, \\ b \in \Label{\mathcal{C}}_{kj}}} d_ad_bd_c^{-1} N_{ab}^c =& \sum\limits_{k \in I}\sum\limits_{\substack{a \in \Label{\mathcal{C}}_{ik}, \\ b \in \Label{\mathcal{C}}_{kj}}} d_ad_bd_{\bar{c}}^{-1} N_{\bar{c}a}^{\bar{b}} =& \sum\limits_{k \in I}\sum\limits_{a \in \Label{\mathcal{C}}_{ik}} d_a^2 &= K(\mathcal{C}_i), \end{align} where $N_{ab}^c:= \dim \Hom(a\otimes b,c),\, \bar{c}:= c^{*}$. \end{proof} \end{theorem} \begin{example} As an example, we compute the invariant of $\mathbb{S}^3$ with the standard orientation. We use the notations in the proof of Theorem \ref{thm:main}. Let $\gamma_1, \gamma_2$ be two copies of the standard $3$-simplex $(0123)$, glued together along their corresponding faces, one with positive orientation and the other with negative orientation. Thus their union $\gamma_1 \cup \gamma_2$ is a triangulation of $\mathbb{S}^3$. With this triangulation we have, \begin{align*} Z_{\mathcal{C}}(\mathbb{S}^3) &= \sum\limits_{\substack{0,1,2,3 \\ 01,02,03,12,13,23}} K^{-4} \prod\limits_{(i,j):0 \leq i<j\leq 3} d_{ij} \quad \Tr(Z^{-}(0123)Z^{+}(0123)) \\ &\overset{Equ. \ref{equ:Zorthogonal}}{=} \sum\limits_{\substack{0,1,2,3 \\ 01,02,03,12,23}} K^{-4} d_{01}d_{03}d_{12}d_{23} \dim V^+(023) \dim V^+(012) \\ &\overset{Equ. \ref{equ:fixedc}}{=} \sum\limits_{\substack{0,2,3 \\02,03,23}} K^{-3} d_{03}d_{23}d_{02} \dim V^+(023) \\ &\overset{Equ. 
\ref{equ:fixedc}}{=} \sum\limits_{\substack{0,3 \\03}} K^{-2} d_{03}^2 \\ &= \frac{|I|}{K}. \end{align*} \end{example} In the following, we give a formula for $Z_F = \Tr(\bigotimes\limits_{\gamma \in \mathcal{T}^3} Z^{\epsilon(\gamma)}_{F}(\gamma))$ with respect to a chosen basis. For simplicity, we assume the category $\mathcal{C}$ is multiplicity free, that is, for any three simple objects $a,b,c$, $\Hom(c, a \otimes b)$ has dimension either $0$ or $1$. In the latter case, we call $(a,b,c)$ {\it admissible}. Now, for any admissible $(a,b,c)$, we choose a basis element $B_{c}^{ab} \in \Hom(c, a\otimes b)$ and $B_{ab}^c \in \Hom(a \otimes b, c)$ such that, \begin{align*} \langle B_{c}^{ab} , B_{ab}^c \rangle = \Tr(B_{ab}^cB_{c}^{ab}) = \theta(a,b,c), \end{align*} where $\theta(a,b,c)$ is a certain constant to be specified later. Graphically, $B_{c}^{ab}, B_{ab}^c$ and their relations are represented as in Figure \ref{fig:basis}. \begin{figure} \centering \includegraphics[scale=1]{basis} \caption{Graphical representation of $B_{c}^{ab}$ (Left), $B_{ab}^c$ (Middle), and their relation (Right). Here $i,j,k$ are indices coloring regions.}\label{fig:basis} \end{figure} \begin{figure} \centering \includegraphics[scale=1]{6j} \caption{Graphical representations of $\tilde{Z}^{+}_F(0123;B)$ (Left) and $\tilde{Z}^{-}_F(0123;B)$ (Right). 
Here we put the vertex colors in circles to distinguish them from edge colors.}\label{fig:6j} \end{figure} \begin{proposition} With the notations as above, the invariant has the \lq state-sum' formula: \begin{align} \label{equ:partition2} Z_{\mathcal{C}}(M,\mathcal{T}) = \sum\limits_{F = (f^0,f^1)} \prod\limits_{\tau \in \mathcal{T}^0} K^{-1} \prod\limits_{\alpha \in \mathcal{T}^1} d_{f^1_{\alpha}} \, \prod\limits_{\beta \in \mathcal{T}^2} \theta_F(\beta)^{-1} \, \prod\limits_{\gamma \in \mathcal{T}^3} \tilde{Z}^{\epsilon(\gamma)}_F(\gamma;B), \end{align} where $\tilde{Z}^{\epsilon(\gamma)}_F(\gamma;B)$ is defined as the evaluation of the diagrams in Figure \ref{fig:6j}. \begin{proof} Fix a coloring $F$ on the triangulation $\mathcal{T}$. For each $2$-simplex $\beta = (012)$, set \begin{align*} B^{+}(\beta) = B_{f^1_{02}}^{f^1_{01}f^1_{12}} \in V^{+}_{F}(\beta), \quad B^{-}(\beta) = B^{f^1_{02}}_{f^1_{01}f^1_{12}} \in V^{-}_{F}(\beta), \quad \theta_F(\beta) = \langle B^+(\beta),B^-(\beta)\rangle. \end{align*} Then it follows that, for any $3$-simplex $\gamma = (0123)$, \begin{align*} Z_F^{+}(0123)(B^+(023)\otimes B^+(012)) = \frac{\tilde{Z}^+_F(0123;B)}{\theta_F(123)\theta_F(013)} B^+(123)\otimes B^+(013), \end{align*} where $\tilde{Z}^+_F(0123;B) = \tilde{Z}_F^{+}(0123)(B^-(123) \otimes B^+(023)\otimes B^-(013) \otimes B^+(012))$ is the evaluation of the colored graph in Figure \ref{fig:6j} (Left). Similarly, we have \begin{align*} Z_F^{-}(0123)(B^+(123)\otimes B^+(013)) = \frac{\tilde{Z}^-_F(0123;B)}{\theta_F(023)\theta_F(012)} B^+(023)\otimes B^+(012), \end{align*} where $\tilde{Z}^-_F(0123;B) = \tilde{Z}_F^{-}(0123)(B^+(123) \otimes B^-(023)\otimes B^+(013) \otimes B^-(012))$ is the evaluation of the colored graph in Figure \ref{fig:6j} (Right). 
Then we have \begin{align*} Z_F = \Tr(\bigotimes\limits_{\gamma \in \mathcal{T}^3} Z^{\epsilon(\gamma)}_{F}(\gamma)) = \prod\limits_{\beta \in \mathcal{T}^2} \theta_F(\beta)^{-1} \, \prod\limits_{\gamma \in \mathcal{T}^3} \tilde{Z}^{\epsilon(\gamma)}_F(\gamma;B). \end{align*} \end{proof} \end{proposition} A common choice of $\theta(a,b,c)$ is to have $\theta(a,b,c) = 1$ for all admissible $(a,b,c)$, in which case the formula in Equation \ref{equ:partition2} does not involve contributions from $2$-simplices. Another common choice in the physics literature, assuming $\mathcal{C}$ is unitary, which implies the quantum dimension of any non-zero object is positive, is to have $\theta(a,b,c) = \sqrt{d_ad_bd_c}$. \section{Invariants from Generalized Categorical Groups} \label{sec:generalized} In this section, we study a class of $\text{special}$ $\text{SMFC}$s obtained from what we call generalized categorical groups. We first review $2$-groups and categorical groups, and then introduce the notion of generalized categorical groups, which generalize categorical groups. By a process called idempotent completion, we turn a generalized categorical group into a $\text{special}$ $\text{SMFC}$. The partition functions (and $(2+1)$-$\mathrm{TQFT}$s) from such $\text{SMFC}$s are shown to contain the ones in \cite{kapustin2013higher}, though the latter $\mathrm{TQFT}$s were not previously known to admit a categorical construction. \subsection{2-groups} \label{subsec:2group} A $2$-group is a triple $\mathcal{G} = (G, A, \beta)$, where $G$ is a finite group, $A$ is a finite Abelian group endowed with a $G$-action, and $\beta \in H^3(G, A)$ is a $3$rd cohomology class, where the cohomology group $H^3(G, A)$ is defined with respect to the $G$-action on $A$. By abuse of language, we also write $\beta$ for a co-cycle representing this class. Different choices of representative co-cycles correspond to equivalent $2$-groups.
We write the product in $A$ multiplicatively instead of additively. Denote the unit element in a group by $e$ and the inverse of an element $g$ by $g^{-1}$ or $\overline{g}$. Then $\beta$ being a co-cycle means that for any $g_0, g_1, g_2, g_3 \in G$, \begin{align} \label{equ:3rd cocycle} e &= \delta \beta (g_0,g_1,g_2,g_3) \nonumber \\ &= \lsup{g_0}{\beta(g_1, g_2, g_3)} \overline{\beta(g_0g_1, g_2, g_3)} \beta(g_0, g_1g_2, g_3) \overline{\beta(g_0, g_1, g_2g_3)} \beta(g_0, g_1, g_2) . \end{align} A typical example of a $2$-group comes from the homotopy $2$-type $(\pi_1(X), \pi_2(X), \beta)$ of a complex $X$, where $\pi_2(X)$ is endowed with the monodromy action of $\pi_1(X)$ and $\beta$ is the Postnikov invariant of $X$ \cite{maclane1949cohomology}. Actually, $2$-groups classify homotopy $2$-types \cite{maclane19503}. A categorical group is a rigid monoidal category in which all objects and all morphisms are invertible. From a $2$-group $\mathcal{G} = (G, A, \beta)$, a categorical group $\mathcal{C}(\mathcal{G})$ can be constructed as follows. \begin{enumerate} \item $\mathcal{C}(\mathcal{G})^0 = G$; $\Hom_{\mathcal{C}(\mathcal{G})}(g_1, g_2)$ is $A$ if $g_1 = g_2$ and the empty set otherwise; composition of morphisms is multiplication in $A$. \item For $g_1, g_2 \in G, h_1 \in \Hom(g_1, g_1), h_2 \in \Hom(g_2, g_2)$, $g_1 \otimes g_2:= g_1g_2$, $h_1 \otimes h_2:= h_1 \lsup{g_1}{(h_2)}$, $\mathbf{1}:= e$. \item For $g_1, g_2, g_3 \in G,$ the association isomorphism is defined to be \begin{equation*} \beta(g_1,g_2,g_3): (g_1 \otimes g_2) \otimes g_3 \longrightarrow g_1 \otimes (g_2 \otimes g_3). \end{equation*} \item The dual of an object $g$ is $g^* := \overline{g}$. \end{enumerate} It is straightforward to check that the above defines a categorical group. For instance, the Pentagon Equation that the association isomorphism needs to satisfy translates exactly to the co-cycle condition in Equation \ref{equ:3rd cocycle}.
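The co-cycle condition in Equation \ref{equ:3rd cocycle} can be verified mechanically by brute force. The following sketch (our own encoding, written additively for $G = A = \mathbb{Z}_n$ with the trivial action, so the $\lsup{g_0}{(\cdot)}$ term disappears and inverses become minus signs) checks it for the \lq carry' cochain $\beta(a,b,c) = a\lfloor (b+c)/n \rfloor \bmod n$; the function names are ours, not from the text.

```python
# Brute-force check of the 3-co-cycle condition (Equation equ:3rd cocycle),
# written additively for G = A = Z_n with the trivial G-action on A.
from itertools import product

def is_3cocycle(beta, n):
    """Check delta(beta)(g0,g1,g2,g3) = beta(g1,g2,g3) - beta(g0+g1,g2,g3)
    + beta(g0,g1+g2,g3) - beta(g0,g1,g2+g3) + beta(g0,g1,g2) = 0 (mod n)."""
    for g0, g1, g2, g3 in product(range(n), repeat=4):
        d = (beta(g1, g2, g3) - beta((g0 + g1) % n, g2, g3)
             + beta(g0, (g1 + g2) % n, g3) - beta(g0, g1, (g2 + g3) % n)
             + beta(g0, g1, g2)) % n
        if d != 0:
            return False
    return True

# the "carry" cochain beta(a,b,c) = a * floor((b+c)/n) mod n
def carry(n):
    return lambda a, b, c: (a * ((b + c) // n)) % n

print(all(is_3cocycle(carry(n), n) for n in (2, 3, 4)))  # True
print(is_3cocycle(lambda a, b, c: 0, 5))                 # trivial co-cycle: True
```

The carry cochain passes the check for every $n$ tried, while a generic cochain such as $(a,b,c) \mapsto a+b+c \bmod 2$ fails it, as expected.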
Actually it is also true that up to some appropriately defined equivalence, $2$-groups are in one-to-one correspondence with categorical groups \cite{Etingof_finitetensor}. One can also \lq linearize' the categorical group $\mathcal{C}(\mathcal{G})$ by redefining the objects as formal direct sums of elements of $G$ and the morphisms from $g \in G$ to itself as linear spans of elements of $A$ with coefficients in $\mathbb{C}$, namely, $\Hom(g,g) = \mathbb{C}[A]$. (If $g_1 \neq g_2$, then $\Hom(g_1,g_2)$ is redefined to be the zero vector space.) The composition and tensor product are extended linearly. We still denote the \lq linearized' category as $\mathcal{C}(\mathcal{G})$. Note that if $A$ is not the trivial group, then $\mathcal{C}(\mathcal{G})$ is not a semisimple category since $\Hom(g,g)$ is not isomorphic to $\mathbb{C}$. A more fundamental reason is that there are idempotents in $\Hom(g,g)$ which do not split. We show in Section \ref{subsec:idempotent} that by a process called idempotent completion, $\mathcal{C}(\mathcal{G})$ can be turned into a semisimple category, or more specifically a $\text{SMFC}$. Before doing that, we first show in Section \ref{subsec:generalized} that the notion of categorical groups can be generalized so that the tensor product and Pentagon solution encode more data than a $3$-co-cycle. \subsection{Generalized Categorical Groups} \label{subsec:generalized} Here we explore more general structures in a rigid monoidal category whose underlying objects and morphisms are the same as those of a categorical group. To be more precise, let $\mathcal{C}(G, A)$ be a rigid monoidal category such that the objects form a finite group $G$ under tensor product, $\Hom(g_1, g_2) = \delta_{g_1,g_2}\mathbb{C}[A]$ for $g_1, g_2 \in G$, and the composition of morphisms is multiplication in $\mathbb{C}[A]$, where $A$ is a finite Abelian group on which $G$ acts. Of course, a categorical group arising from a $2$-group is such a category.
We show below that a more general form of the tensor product of morphisms and of the solution to the Pentagon Equation can be defined in $\mathcal{C}(G, A)$ beyond those in a categorical group. Let $\hat{A}$ be the group of complex characters on $A$. The action of $G$ on $A$ induces an action on $\hat{A}$ and extends by linearity to an action on $\mathbb{C}[A]$. Specifically, for $g \in G, \chi \in \hat{A}$, $\lsup{g}{\chi}: = \chi(\lsup{\overline{g}}{(\cdot)})$. Let $h_1, h_2 \in A$, viewed as morphisms $h_1 \in \Hom(g_1, g_1), h_2 \in \Hom(g_2,g_2)$, and define $h_1 \otimes h_2 \in \Hom(g_1g_2, g_1g_2)$ by \begin{align*} h_1 \otimes h_2:= \lambda(\overline{g_1})(h_2) h_1 \lsup{g_1}{h_2}, \end{align*} where $\lambda: G \longrightarrow \hat{A}$ is some map. Extend the definition linearly to define the tensor product of morphisms which are linear combinations of group elements. Thus, compared to the tensor product in $\mathcal{C}(\mathcal{G})$, an extra coefficient $\lambda(\overline{g_1})(h_2)$ is introduced in the current setting. It is straightforward to check that the associativity $(h_1 \otimes h_2) \otimes h_3 = h_1 \otimes (h_2 \otimes h_3)$ is equivalent to the condition \begin{align*} \lambda(\overline{g_2} \, \overline{g_1}) = \lambda(\overline{g_2}) \lsup{\overline{g_2}}{\lambda(\overline{g_1})}, \end{align*} which means $\lambda$ is a co-cycle in $Z^1(G, \hat{A})$. Again, choices of $\lambda$ within the same cohomology class correspond to equivalent structures on the category, thus we can view $\lambda \in H^1(G, \hat{A})$. The case of $\mathcal{C}(\mathcal{G})$ corresponds to the trivial cohomology class. Before looking at the association isomorphisms, we first recall some properties of characters. For each $\chi \in \hat{A}$, define $P_{\chi} \in \mathbb{C}[A]$ by \begin{equation} P_{\chi}:= \frac{1}{|A|} \sum\limits_{h \in A} \overline{\chi(h)}\, h. \end{equation} By standard character theory, the following properties hold.
\begin{itemize} \item $P_{\chi_1}P_{\chi_2} = \delta_{\chi_1, \chi_2} P_{\chi_1}$; $h = \sum\limits_{\chi \in \hat{A}} \chi(h)P_{\chi}$; in particular, $e = \sum\limits_{\chi \in \hat{A}} P_{\chi}$. \item $\lsup{g}{P_{\chi}} = P_{\lsup{g}{\chi}}$; $id_g \otimes P_{\chi} = P_{\lambda(g)\lsup{g}{\chi}}$. \end{itemize} The first property means $\{P_{\chi}: \chi \in \hat{A}\}$ forms a set of complete orthogonal idempotents, and in particular a basis of $\mathbb{C}[A]$. A general element in $\mathbb{C}[A]$ is of the form $\sum\limits_{\chi}c_{\chi} P_{\chi}$, $c_{\chi} \in \mathbb{C}$, and it is invertible if and only if $c_{\chi} \neq 0$ for all $\chi$. Thus, for $g_1, g_2, g_3 \in G$, the association isomorphism $a(g_1,g_2,g_3)$ takes the form \begin{align*} a(g_1,g_2,g_3) = \sum\limits_{\chi} a(g_1,g_2,g_3)_{\chi} P_{\chi},\quad a(g_1,g_2,g_3)_{\chi} \neq 0. \end{align*} The Pentagon Equation is then equivalent to the condition, \begin{align} \label{equ:general_pentagon} a(g_1g_2, g_3,g_4)_{\chi} a(g_1,g_2, g_3g_4)_{\chi} = a(g_1,g_2, g_3)_{\chi}a(g_1,g_2g_3,g_4)_{\chi}a(g_2, g_3,g_4)_{\lambda(\overline{g_1})\lsup{\overline{g_1}}{\chi}}. \end{align} Let $C^1(\hat{A})$ be the group of all maps from $\hat{A}$ to the non-zero complex numbers $\mathbb{C}^{\times}$. Given $\lambda \in Z^1(G, \hat{A})$, we define a new action of $G$ on $\hat{A}$ by $\phi(g, \chi) := \lambda(g)\lsup{g}{\chi}$. Note that in this action, $\hat{A}$ is viewed as a set but not a group, and $\phi(g, \cdot)$ is not a group automorphism unless $\lambda$ is the trivial co-cycle. However, the induced action on $C^1(\hat{A})$ defined by $\phi(g, \psi) := \psi(\phi(\overline{g}, \cdot))$ is an action by automorphisms whether or not $\lambda$ is trivial. We denote by $C^1(\hat{A})_{\phi}$ the group $C^1(\hat{A})$ with the induced action defined above.
Then Equation \ref{equ:general_pentagon} is equivalent to \begin{align} a(g_1g_2, g_3,g_4) a(g_1,g_2, g_3g_4) = a(g_1,g_2, g_3)a(g_1,g_2g_3,g_4)\phi(g_1,a(g_2, g_3,g_4)), \end{align} where $a(g_1, g_2, g_3)$ is viewed as an element in $C^1(\hat{A})_{\phi}$. Hence, we have $a \in Z^3(G, C^1(\hat{A})_{\phi})$. The above defined category depends on the data $(G, A, \lambda, a)$, where $\lambda \in H^1(G, \hat{A}), a \in H^3(G, C^1(\hat{A})_{\phi})$. We call such a category a {\it generalized categorical group} and denote it by $\mathcal{C}(G, A, \lambda, a)$. If $\lambda$ is the trivial co-cycle, then the action $\phi: G \times \hat{A} \longrightarrow \hat{A}$ coincides with the action of $G$ on $\hat{A}$ induced by the given action of $G$ on $A$, namely, $\phi(g, \chi) = \lsup{g}{\chi} = \chi(\lsup{\overline{g}}{(\cdot)})$. Note that $A \simeq \hat{\hat{A}}$ is a subgroup of $C^1(\hat{A})_{\phi}$. That is, given $h \in A, \chi \in \hat{A}$, $h$ is viewed as an element of $C^1(\hat{A})_{\phi}$ by $h(\chi) := \chi(h)$. If $\lambda$ is trivial, it can be shown that the embedding $\iota: A \hookrightarrow C^1(\hat{A})_{\phi}$ is $G$-equivariant. Then we have the induced map $\iota_*: H^3(G, A) \longrightarrow H^3(G, C^1(\hat{A})_{\phi})$. Let $a = \iota_*(\beta)$ for some $\beta \in H^3(G, A)$, then $a(g_1, g_2, g_3)_{\chi} = \chi(\beta(g_1,g_2,g_3))$, and hence \begin{align*} a(g_1, g_2, g_3) = \sum\limits_{\chi} a(g_1, g_2, g_3)_{\chi} P_{\chi} = \sum\limits_{\chi} \chi(\beta(g_1,g_2,g_3)) P_{\chi} = \beta(g_1, g_2,g_3). \end{align*} Therefore, we recover the categorical group constructed from the $2$-group $(G, A, \beta)$ when $\lambda$ is trivial and $a = \iota_*(\beta)$. More generally, if $\lambda$ is not necessarily trivial, let $\omega \in H^3(G,\mathbb{C}^{\times} ) \simeq H^3(G, U(1))$, $\beta \in H^3(G, A)$, and let \begin{align} \label{equ:a_omega_beta} a(g_1,g_2,g_3)_{\chi}:= \omega(g_1,g_2,g_3) \chi(\beta(g_1,g_2,g_3)).
\end{align} Then Equation \ref{equ:general_pentagon} can be rewritten as, \begin{align} \delta \omega(g_1,g_2,g_3,g_4) \cdot \chi (\delta\beta (g_1,g_2,g_3,g_4)) \cdot \lambda(\overline{g_1})(\beta(g_2,g_3,g_4)) = 1, \end{align} which is equivalent to \begin{align} \label{equ:cup} \lambda(\overline{g_1})(\beta(g_2,g_3,g_4)) =\overline{\lambda(g_1)}(\lsup{g_1}{\beta(g_2,g_3,g_4)}) = 1, \end{align} where the first equality above is due to the fact that $\lambda(g_1) \lsup{g_1}{\lambda(\overline{g_1})} = 1$ since $\lambda$ is a co-cycle. Define the following map: \begin{align*} \langle \cdot, \cup \cdot\rangle: H^1(G, \hat{A}) \otimes H^3(G, A) \overset{\cup}{\longrightarrow} H^4(G, \hat{A} \otimes A) \overset{\text{(eval)}_*}{\longrightarrow} H^4(G, U(1)), \end{align*} where $\cup$ is the cup product and $\text{eval}: \hat{A} \otimes A \longrightarrow U(1)$ is the evaluation map, which commutes with the $G$-action. (We assume $G$ acts on $U(1)$ trivially.) To be more precise, the formula for $\langle\lambda, \cup \beta \rangle$ is given by, \begin{align*} \langle\lambda, \cup \beta \rangle (g_1, g_2, g_3, g_4) = \text{eval}(\lambda(g_1), \lsup{g_1}{(\beta(g_2,g_3,g_4))}). \end{align*} Then Equation \ref{equ:cup} means, \begin{align} \label{equ:cup_condition} \langle\lambda, \cup \beta \rangle = e \in H^4(G, U(1)). \end{align} Thus we obtain the generalized categorical group $\mathcal{C}(G, A, \lambda, a(\omega, \beta))$, where $\omega \in H^3(G, U(1)), \beta \in H^3(G, A)$ satisfy Equation \ref{equ:cup_condition} and $a(\omega, \beta)$ is defined by Equation \ref{equ:a_omega_beta}. We will use this category in Section \ref{subsec:invariants_categorical}. \subsection{Idempotent Completion} \label{subsec:idempotent} Let $\mathcal{C}$ be a category. The idempotent completion $\IC{\mathcal{C}}$ of $\mathcal{C}$, also called the Karoubi envelope or Cauchy completion, is a category defined as follows.
The objects of $\IC{\mathcal{C}}$ consist of pairs $(X, \phi)$, where $X$ is an object of $\mathcal{C}$ and $\phi: X \rightarrow X$ is an idempotent, i.e., $\phi^2 = \phi$. Given two objects $(X, \phi), (X', \phi')$ of $\IC{\mathcal{C}}$, \begin{equation*} \Hom_{\IC{\mathcal{C}}}((X, \phi), (X', \phi')):= \phi' \circ \Hom_{\mathcal{C}}(X, X')\circ \phi = \{\psi \in \Hom_{\mathcal{C}}(X, X'):\ \phi' \psi = \psi = \psi \phi\}. \end{equation*} The composition of morphisms in $\IC{\mathcal{C}}$ is the same as that in $\mathcal{C}$. Now let $\mathcal{C}$ be the generalized categorical group $\mathcal{C}(G, A, \lambda, a)$ defined in Section \ref{subsec:generalized}. Recall that $\{P_{\chi}: \chi \in \hat{A}\}$ forms a set of complete orthogonal idempotents. It follows that the idempotents in $\mathbb{C}[A]$ are of the form \begin{equation} \sum\limits_{\chi \in \hat{A}} c_{\chi} P_{\chi}, \quad c_{\chi} \in \{0,1\}. \end{equation} Thus there are in total $2^{|A|}$ idempotents in $\mathbb{C}[A]$. It also follows that, \begin{align*} \Hom((g, P_{\chi}), (g', P_{\chi'})) = \delta_{g,g'}\delta_{\chi, \chi'}\,\mathbb{C} \{P_{\chi}\}, \end{align*} and that, \begin{align*} (g, P_{\chi_1} + P_{\chi_2}) \simeq (g, P_{\chi_1}) \oplus (g, P_{\chi_2}),\, \chi_1 \neq \chi_2. \end{align*} Therefore $\IC{\mathcal{C}}$ is a semisimple category whose non-zero simple objects are $\{(g, P_{\chi})\ : g \in G, \chi \in \hat{A}\}$. Since the zero morphism is an idempotent, $(g, 0)$ is the zero object for any $g$. We abbreviate $(g, P_{\chi})$ as $(g, \chi)$ when no confusion arises. Now we study the monoidal structure on $\IC{\mathcal{C}}$. Recall that for $P_{\chi_i} \in \Hom_{\mathcal{C}}(g_i,g_i), i =1,2$, we have $P_{\chi_1} \otimes P_{\chi_2} = P_{\chi_1}P_{\phi(g_1, \chi_2)} = \delta_{\phi(\overline{g_1}, \chi_1),\chi_2}P_{\chi_1}$.
Then for two simple objects $(g_1, P_{\chi_1}), (g_2, P_{\chi_2})$ of $\IC{\mathcal{C}}$, define \begin{align*} (g_1, P_{\chi_1}) \otimes (g_2, P_{\chi_2}):= (g_1 \otimes g_2, P_{\chi_1} \otimes P_{\chi_2}) = (g_1g_2,\delta_{\phi(\overline{g_1}, \chi_1),\chi_2}P_{\chi_1}). \end{align*} Thus for the tensor product to be a non-zero object, $\chi_2$ must equal $\phi(\overline{g_1}, \chi_1)$. For three objects $(g_i, P_{\chi_i}), i=1,2,3$ with $\chi_2 = \phi(\overline{g_1}, \chi_1), \chi_3 = \phi(\overline{g_2}, \chi_2) =\phi(\overline{g_1g_2}, \chi_1)$, we have \begin{align*} \{(g_1, P_{\chi_1}) \otimes (g_2, P_{\chi_2})\} \otimes (g_3, P_{\chi_3}) =(g_1g_2g_3,P_{\chi_1})=(g_1, P_{\chi_1}) \otimes \{(g_2, P_{\chi_2}) \otimes (g_3, P_{\chi_3}) \}, \end{align*} and we define the association isomorphism by {\footnotesize \begin{align*} a(g_1, g_2, g_3)_{\chi_1}P_{\chi_1}: \{(g_1, P_{\chi_1}) \otimes (g_2, P_{\chi_2})\} \otimes (g_3, P_{\chi_3}) \overset{\simeq}{\longrightarrow} (g_1, P_{\chi_1}) \otimes \{(g_2, P_{\chi_2}) \otimes (g_3, P_{\chi_3}) \}, \end{align*}} namely, the association isomorphism is the $P_{\chi_1}$-component of $a(g_1,g_2,g_3)$ in the $\{P_{\chi}: \chi \in \hat{A}\}$ basis. If either $\chi_2$ or $\chi_3$ is not given as above, then the tensor product of the $(g_i,P_{\chi_i})$'s is the zero object and we define the corresponding association isomorphism to be the unique zero morphism (also the identity morphism). It is straightforward to check that the association isomorphism satisfies the Pentagon Equation. The unit object is defined to be $(e,e) = (e, \sum_{\chi} P_{\chi}) = \oplus_{\chi} (e, P_{\chi})$. Note that $(e, P_{\chi'}) \otimes (g, P_{\chi}) = \delta_{\chi', \chi}(g, P_{\chi}) = (g, P_{\chi}) \otimes (e,P_{\phi(\bar{g}, \chi')} )$, hence the category $\IC{\mathcal{C}}$ is a multi-fusion category indexed by $\hat{A}$.
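The fusion rule just described is simple enough to simulate directly. The sketch below (our own encoding, assuming $G = \mathbb{Z}_2$ acting on $A = \mathbb{Z}_3$ by inversion and trivial $\lambda$, so that $\phi$ is just the induced action on $\hat{A} \simeq \mathbb{Z}_3$) checks associativity of the truncated tensor product on simple objects and that, for each $\chi$, there are exactly $|G|$ simple objects $(g,\chi)$.

```python
# Fusion rules of the idempotent completion: a simple object is a pair
# (g, chi) with chi in A-hat, and
#   (g1, chi1) (x) (g2, chi2) = delta_{phi(g1^-1, chi1), chi2} (g1 g2, chi1).
# Here G = Z_2 (so g^-1 = g) acts on A-hat ~ Z_3 by inversion; lambda trivial.
from itertools import product

def phi(g, chi):                 # action of G on A-hat
    return chi if g == 0 else (-chi) % 3

def fuse(a, b):                  # returns a simple object, or None (zero object)
    if a is None or b is None:
        return None
    (g1, c1), (g2, c2) = a, b
    return ((g1 + g2) % 2, c1) if c2 == phi(g1, c1) else None

simples = list(product(range(2), range(3)))

# associativity of the (partial) fusion: (a x b) x c == a x (b x c)
assoc = all(fuse(fuse(a, b), c) == fuse(a, fuse(b, c))
            for a, b, c in product(simples, repeat=3))
print(assoc)  # True

# (g, chi) lies in the sector (chi, phi(g^-1, chi)); each "row" chi therefore
# contains |G| simple objects, all of quantum dimension 1.
rows = {chi: [s for s in simples if s[1] == chi] for chi in range(3)}
print(all(len(r) == 2 for r in rows.values()))  # True
```

For instance, `fuse((1, 1), (0, 2))` is non-zero because $\phi(1,1) = 2$, and equals `(1, 1)`, matching the rule $(g_1,\chi_1)\otimes(g_2,\chi_2) = (g_1g_2,\chi_1)$.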
Specifically, let $\IC{\mathcal{C}}_{\chi_1, \chi_2}$ be spanned additively by \begin{align*} \{(g, P_{\chi}): (e, P_{\chi_1}) \otimes (g, P_{\chi}) = (g, P_{\chi}) = (g, P_{\chi}) \otimes (e, P_{\chi_2}) \}. \end{align*} Then we have \begin{align*} \IC{\mathcal{C}} = \bigoplus\limits_{\chi_1, \chi_2 \in \hat{A}} \IC{\mathcal{C}}_{\chi_1,\chi_2}, \end{align*} and $(g, P_{\chi}) \in \IC{\mathcal{C}}_{\chi, \phi(\overline{g}, \chi)}$. For each $\chi$, $(e, P_{\chi})$ is the unit in the fusion category $\IC{\mathcal{C}}_{\chi,\chi}$. For a simple object $(g, P_{\chi})$, define the dual $(g, P_{\chi})^*:= (\overline{g}, P_{\phi(\overline{g}, \chi)})$. We sum up the properties of $\IC{\mathcal{C}}$ in the following proposition. \begin{proposition} Let $\mathcal{C} = \mathcal{C}(G,A, \lambda, a)$ be a generalized categorical group, then $\IC{\mathcal{C}}$ is a $\text{SMFC}$ indexed by $\hat{A}$ where, \begin{enumerate} \item the simple objects correspond to elements of $G \times \hat{A}$; \item $(g, \chi)$ is in the sector $(\chi, \phi(\overline{g}, \chi))$; \item $(g_1, \chi_1) \otimes (g_2, \chi_2) = \delta_{\phi(\overline{g_1}, \chi_1), \chi_2} (g_1g_2, \chi_1)$; \item the quantum dimension of each simple object is $1$, and the dimension of each row is $|G|$, thus $\IC{\mathcal{C}}$ is $\text{special}$. \end{enumerate} \end{proposition} \iffalse \subsection{More General Structures on $\IC{\mathcal{C}(\mathcal{G})}$} In Section (?), it was shown that the idempotent completion of $\mathcal{C}(\mathcal{G})$ produces a multi-fusion category. In this section we show that there is a more general monoidal structure one can put on $\IC{\mathcal{C}(\mathcal{G})}$. To avoid confusion, denote by $\IC{\mathcal{C}(\mathcal{G})}$ the category with more general monoidal structure. It the same category as $\IC{\mathcal{C}(\mathcal{G})}$ as an Abelian category and also has the same tensor product on objects. We define a more general association isomorphism on it.
For three objects $(g_i, \chi_i), i=1,2,3$ with $\chi_2 = \lsup{g_1^{-1}}{\chi_1}, \chi_3 = \lsup{g_2^{-1}}{\chi_2} = \lsup{(g_1g_2)^{-1}}{\chi_1}$, \begin{align*} \{(g_1, \chi_1) \otimes (g_2, \chi_2)\} \otimes (g_3, \chi_3) = (g_1, \chi_1) \otimes \{(g_2, \chi_2) \otimes (g_3, \chi_3)\} = (g_1g_2g_3, \chi_1), \end{align*} Thus the most general association isomorphism is a scalar $\omega(g_1,g_2, g_3)_{\chi_1}$. Let $C^1(\tilde{A})$ be the group of all maps from $\tilde{A}$ to $\mathbb{C}^{\times}$. We then view $\omega$ as a function from $G^3 $ to $C^1(\tilde{A})$. The Pentagon Equation yields the condition on $\omega$: \begin{equation} \omega(g_1,g_2, g_3)_{\chi_1} \omega(g_1,g_2g_3,g_4)_{\chi_1} \omega(g_2,g_3, g_4)_{\lsup{g_1^{-1}}{\chi_1}} = \omega(g_1g_2, g_3,g_4)_{\chi_1}\omega(g_1,g_2, g_3g_4)_{\chi_1} \end{equation} There is an induced action of $G$ on $C^1(\tilde{A})$ by $\lsup{g}{\phi}(\chi):= \phi(\lsup{g^{-1}}{\chi})$. Then the above equation is equivalent to \begin{equation} \omega(g_1,g_2, g_3) \omega(g_1,g_2g_3,g_4) \lsup{g_1}{(\omega{g_2,g_3, g_4})} = \omega(g_1g_2, g_3,g_4)\omega(g_1,g_2, g_3g_4), \end{equation} which precisely means that $\omega$ is a three cocycle with coefficient in $C^1(\tilde{A})$. As usual, one can show different choice of $\omega$ within the same cohomology class corresponds to equivalent monoidal structures. Thus we can view $\omega$ as an element in $H^3(G, C^1(\tilde{A}))$. We denote by $\IC{\mathcal{C}(\mathcal{G})}_{\omega}$ the resulting multi-fusion category. One can check the quantum dimension of each simple object is still $1$. We now analyze $C^1(\tilde{A})$. The group $\mathbb{C}^{\times }$ is naturally identified with the group of constant maps from $\tilde{A}$ to $\mathbb{C}^{\times}$. Let $C^1(\tilde{A})'$ be the subgroup of $C^1(\tilde{A})$ consisting of maps which map the unit element of $\tilde{A}$ to $1$ in $\mathbb{C}^{\times}$. 
It is clear that $\mathbb{C}^{\times} $ and $C^1(\tilde{A})'$ are also submodules and as $G$-modules, \begin{align*} C^1(\tilde{A}) = C^1(\tilde{A})' \oplus \mathbb{C}^{\times}. \end{align*} Also $A$ is a submodule of $C^1(\tilde{A})$, since $A$ is the exactly the group of characters on $\tilde{A}$. Actually, we have $A \subset C^1(\tilde{A})'$. Then, \begin{equation} H^3(G, C^1(\tilde{A})) \simeq H^3(G, C^1(\tilde{A})') \oplus H^3(G, \mathbb{C}^{\times}), \end{equation} and we have the long exact sequence, \begin{equation} \cdots \ \longrightarrow H^3(G, A) \overset{\tau}{\longrightarrow} H^3(G, C^1(\tilde{A})') \longrightarrow H^3(G, C^1(\tilde{A})'/A) \longrightarrow H^4(G, A) \longrightarrow \ \cdots \end{equation} If we choose $\beta \in H^3(G, A)$, and take $\omega = \tau(\beta)$. Then $\omega(g_1, g_2, g_3)_{\chi} = \chi(\beta(g_1,g_2, g_3))$, which is exactly the association isomorphism that we defined in Section (?) from idempotent completion. There is an alternative way to characterize $H^3(G,C^1(\tilde{A}))$. Let $\tilde{A} = \sqcup_{i \in I} \tilde{A}$ be the disjoint union of $G$-orbits, and let $G_i$ be the stabilizer of some element in $\tilde{A}$. Let $G/G_i$ be the set of left cosets of $G_i$. Then $G/G_i \simeq A_i$, and \begin{align*} C^1(\tilde{A}) = \bigoplus\limits_{i \in I} C^1(\tilde{A}_i) = \bigoplus\limits_{i \in I} C^1(G/G_i). \end{align*} Therefore \begin{equation} H^3(G, C^1(\tilde{A})) = \bigoplus\limits_{i \in I} H^3(G, C^1(G/G_i)) = \bigoplus\limits_{i \in I} H^3(G_i, \mathbb{C}^{\times}), \end{equation} where the second equality above is by Shapiro's lemma (?). \fi \subsection{Invariants from Generalized Categorical Groups} \label{subsec:invariants_categorical} Throughout this section, let $\mathcal{C} = \mathcal{C}(G, A, \lambda, a)$ be a generalized categorical group. 
We study the invariant of $3$-manifolds $Z_{\IC{\mathcal{C}}}(\cdot)$, where $\IC{\mathcal{C}}$ is the $\text{SMFC}$ constructed in Section \ref{subsec:idempotent}. Let $M$ be a closed oriented $3$-manifold and $\mathcal{T}$ be an ordered triangulation of $M$. We have $I = \hat{A}$ and $\Label{\IC{\mathcal{C}}} = G \times \hat{A}$. Recall from Section \ref{sec:main} that a $\IC{\mathcal{C}}$-coloring is a pair of maps $F = (f^0,f^1),\, f^0: \mathcal{T}^0 \longrightarrow \hat{A}, \,f^1 : \mathcal{T}^1 \longrightarrow G \times \hat{A}$ such that for any $1$-simplex $(01)$, \begin{align*} f^1_{01} \in \IC{\mathcal{C}}_{f^0_0,f^0_1}. \end{align*} Let $f^1_{01} = (g_{01}, \chi_{01})$, then \begin{align*} \chi_{01} = f^0_0, \quad \text{and} \quad f^0_1 = \phi(\overline{g_{01}}, f^0_0). \end{align*} Thus $\chi_{01}$ is uniquely determined by the coloring $f^0$. For any $2$-simplex $(012)$, \begin{align*} V^{\pm}_{F}(012) = \begin{cases} \mathbb{C}, & g_{02} = g_{01}g_{12} \\ 0 , & \text{otherwise.} \end{cases} \end{align*} Combining these observations, we have the following definition. \begin{definition} \label{def:admissible_color} Let $\mathcal{C}, M, \mathcal{T}$ be as above. An admissible $\IC{\mathcal{C}}$-coloring is a pair of maps $\tilde{F} = (f^0,g)$, $f^0: \mathcal{T}^0 \longrightarrow \hat{A},\, g: \mathcal{T}^1\longrightarrow G$ such that, \begin{itemize} \item for any $1$-simplex $(01)$, $f^0_1 = \phi(\overline{g_{01}}, f^0_0)$; \item for any $2$-simplex $(012)$, $g_{02} = g_{01}g_{12}$. \end{itemize} \end{definition} Given an admissible $\IC{\mathcal{C}}$-coloring $\tilde{F} = (f^0,g)$, choose any path $p_i$ in $\mathcal{T}^1$ consisting of the edges $(v_0v_1)-(v_1v_2)- \cdots - (v_{m-1}v_m)$ where $v_0 = 0, v_m = i$, and let $g_{p_i} = g_{v_0v_1} \cdots g_{v_{m-1}v_m}$, where $g_{v_{k-1}v_k}$ is defined to be $\overline{g_{v_kv_{k-1}}}$ if $v_{k-1} > v_{k}$.
It is not hard to see that $f^0_i = \phi(\overline{g_{p_i}},f^0_0)$, and thus the choice of a path $p_i$ connecting vertex $0$ to $i$ is irrelevant. \begin{proposition} The invariant $Z_{\IC{\mathcal{C}}}(M)$ is given by the formula, \begin{align} Z_{\IC{\mathcal{C}}}(M) &= \frac{1}{|G|^{|\mathcal{T}^0|}}\sum\limits_{\tilde{F}} \prod\limits_{\gamma = (ijkl) \in \mathcal{T}^3} \{a(g_{ij}, g_{jk}, g_{kl})_{f^0_{i}}\}^{\epsilon(\gamma)} \nonumber \\ &= \frac{1}{|G|^{|\mathcal{T}^0|}}\sum\limits_{\tilde{F}} \prod\limits_{\gamma = (ijkl) \in \mathcal{T}^3} \{a(g_{ij}, g_{jk}, g_{kl})_{\phi(\overline{g_{p_i}},f^0_0)}\}^{\epsilon(\gamma)} \end{align} \begin{proof} In $\IC{\mathcal{C}}$, for any admissible triple $(a,b,c)$ of simple objects, one can choose $B_{ab}^c$ and $B_{c}^{ab}$ (see Section \ref{sec:main}) to be the identity map, and $\theta(a,b,c) = 1$. The quantum dimension of each simple object is $1$ and the dimension of each row is $K = |G|$. Admissible colorings correspond to those colorings whose contribution to the summation term in Equation \ref{equ:partition} or \ref{equ:partition2} is not zero. Thus we only need to consider admissible colorings. Given an admissible coloring $\tilde{F} = (f^0,g)$, for a $3$-simplex $\gamma = (0123)$, the evaluation of Figure \ref{fig:6j} (Left) is seen to be $a(g_{01},g_{12},g_{23})_{f^0_{0}}$. \end{proof} \end{proposition} \begin{corollary} Let $\mathcal{C} = \mathcal{C}(G, A, \lambda, a(\omega, \beta))$ be as defined in Section \ref{subsec:generalized}, then \begin{align} Z_{\IC{\mathcal{C}}}(M) &= \frac{1}{|G|^{|\mathcal{T}^0|}}\sum\limits_{\tilde{F}} \prod\limits_{\gamma = (ijkl) \in \mathcal{T}^3} \{\omega(g_{ij}, g_{jk}, g_{kl})\,f^0_{i}(\beta(g_{ij}, g_{jk}, g_{kl}))\}^{\epsilon(\gamma)}.
\end{align} \end{corollary} The partition function in the above corollary matches exactly the $(2+1)$-$\mathrm{TQFT}$ (the dual model) constructed from higher gauge theory in \cite{kapustin2013higher}, where a finite gauge group is replaced by a finite $2$-group. Thus we have provided a categorical construction of such $\mathrm{TQFT}$s. According to \cite{kapustin2013higher}, the $\mathrm{TQFT}$s thus obtained are more general than Dijkgraaf-Witten theory and provide new symmetry protected phases of matter. \section{$2D$ Symmetry Enriched Topological Phases} \label{sec:SET} Symmetry plays an important role in understanding topological phases of matter. A useful approach to studying topological phases is to construct exactly solvable lattice models. When a topological phase also possesses a global symmetry acting on its anyon excitations, it is called a symmetry enriched topological ($\text{SET}$) phase. $\text{SET}$s in two spatial dimensions are of great interest in condensed matter physics. In \cite{cheng2016exactly,barkeshli2016reflection,heinrich2016symmetry,chang2015enriching}, exactly solvable models for a wide class of $2D$ bosonic $\text{SET}$s are constructed. When the global symmetry is onsite and unitary, the input to their models is a unitary $G$-graded fusion category, where $G$ is the global symmetry group. In this section, we show that their construction of $\text{SET}$s extends to the framework of multi-fusion categories. Let $\mathcal{D} = \bigoplus\limits_{g \in G}\mathcal{D}_g$ be a $G$-graded unitary fusion category and let $\tilde{\mathcal{D}}$ be the multi-fusion category obtained from $\mathcal{D}$ as given in Example \ref{example}. That is, $\tilde{\mathcal{D}} = \bigoplus\limits_{g,h \in G}\tilde{\mathcal{D}}_{g,h}$ is indexed by $G$ where $\tilde{\mathcal{D}}_{g,h} = \mathcal{D}_{\bar{g}h}$.
The tensor products in $\tilde{\mathcal{D}}$ are the same as those in $\mathcal{D}$, and for any $g \in G$ the unit $\mathbf{1}_g$ in $\tilde{\mathcal{D}}_{g,g}$ is the unit $\mathbf{1}$ in $\mathcal{D}_{e}$ (and also the unit in $\mathcal{D}$). $\tilde{\mathcal{D}}$ is spherical since $\mathcal{D}$ is spherical. (Unitarity implies sphericity.) Also, for any $g \in G$, $K(\tilde{\mathcal{D}}_{g}) = \sum\limits_{h} K(\tilde{\mathcal{D}}_{g,h}) = \sum\limits_{h} K(\mathcal{D}_{g^{-1}h}) = K(\mathcal{D})$. Thus, $\tilde{\mathcal{D}}$ is a $\text{special}$ $\text{SMFC}$. Assume $\mathcal{D}$ is multiplicity free. As in Section \ref{sec:main}, for any admissible $(a,b,c)$ of simple objects, we choose a basis element $B_{c}^{ab} \in \Hom(c, a\otimes b)$ and $B_{ab}^c \in \Hom(a \otimes b, c)$ such that, \begin{align*} \langle B_{c}^{ab} , B_{ab}^c \rangle = \Tr(B_{ab}^cB_{c}^{ab}) = \theta(a,b,c), \end{align*} where $\theta(a,b,c) = \sqrt{d_ad_bd_c}$. Let $M$ be an oriented $3$-manifold and $\mathcal{T}$ be a triangulation of $M$. If $M$ has no boundary, then the partition function $Z_{\tilde{\mathcal{D}}}(M)$ is given by Equation \ref{equ:partition} or Equation \ref{equ:partition2} as a state-sum model. By definition, a $\tilde{\mathcal{D}}$-coloring $F = (f^0,f^1)$ assigns to each vertex $k$ a group element $f^0_k \in G$ and assigns to each $1$-simplex $(ij)$ a simple object $f^1_{ij} \in \tilde{\mathcal{D}}_{f^0_i,f^0_j} = \mathcal{D}_{\overline{f^0_i}f^0_j}$. It is straightforward to check that the partition function $Z_{\tilde{\mathcal{D}}}(M)$ thus obtained is the same as the one given in \cite{barkeshli2016reflection}.
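As a sanity check, when all the categorical data is trivial (e.g. $A$ trivial and all co-cycles trivial), the state sums above reduce to a count of triangle-compatible edge colorings, i.e., flat $G$-connections, normalized by $|G|^{|\mathcal{T}^0|}$. The sketch below (our own encoding, not from the text) evaluates this count on the boundary of the $4$-simplex, a triangulation of $S^3$ with $5$ vertices, and recovers the expected value $1/|G|$.

```python
# Untwisted special case of the state sum: with trivial cocycle data,
# Z(M) = |G|^{-|T0|} * #{ g : edges -> G  with  g_ik = g_ij g_jk on triangles }.
# We triangulate S^3 as the boundary of the 4-simplex (5 vertices, 10 edges,
# 10 triangles) and take G = Z_n; the expected value for S^3 is 1/|G|.
from itertools import product

def z_untwisted(n):
    verts = range(5)
    edges = [(i, j) for i in verts for j in verts if i < j]
    tris = [(i, j, k) for i in verts for j in verts for k in verts if i < j < k]
    count = 0
    for vals in product(range(n), repeat=len(edges)):
        g = dict(zip(edges, vals))
        if all((g[i, j] + g[j, k]) % n == g[i, k] for i, j, k in tris):
            count += 1
    return count / n ** 5        # |T0| = 5 vertices

print(z_untwisted(2))            # 0.5, i.e. 1/|G| for G = Z_2
```

Since the boundary of the $4$-simplex is simply connected, a flat coloring is determined by $|G|^{4}$ gauge choices at the non-base vertices, giving $|G|^4/|G|^5 = 1/|G|$, in agreement with the brute-force count.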
More generally, when $M$ is bounded by a surface $\partial M$, the wave function associated with $M$ is defined by \begin{align} \Psi(\partial M, F) = \sum\limits_{\tilde{F}: \tilde{F}_{| \partial M} = F} \prod\limits_{\tau \in \mathcal{T}^0} K^{-1} \prod\limits_{\alpha \in \mathcal{T}^1} d_{\tilde{f}^1_{\alpha}} \, \prod\limits_{\beta \in \mathcal{T}^2} \theta_{\tilde{F}}(\beta)^{-1} \, \prod\limits_{\gamma \in \mathcal{T}^3} \tilde{Z}^{\epsilon(\gamma)}_{\tilde{F}}(\gamma;B), \end{align} where $F$ is a coloring of $\mathcal{T}$ restricted to $\partial M$ and the summation on the right hand side is over all colorings $\tilde{F} = (\tilde{f}^0, \tilde{f}^1)$ extending $F$. For $g \in G$, let $\lsup{g}{F}$ be the coloring $(g. f^0, f^1)$, namely, the color on each vertex is multiplied on the left by $g$ while the color on each edge remains unaltered. Clearly, $\lsup{g}{F}$ is a well-defined coloring and $\Psi(\partial M, F) = \Psi(\partial M, \lsup{g}{F})$. \vspace{1cm} \noindent\textbf{Acknowledgment} $\quad$ The first author acknowledges the support from the Simons Foundation and would like to thank Meng Cheng and Ryan Thorngren for helpful discussions. The second author is partially supported by NSF grant DMS-1411212. \bibliographystyle{plain}
\section{Invariant verification} \label{sec:reachalgo} A subproblem for invariant verification is to compute $\reachtube{\H}$, or more specifically, the reachtubes for the set of trajectories $\TL$ in a given mode, up to a time bound. This is a difficult problem, even when $\TL$ is generated by white-box models. The algorithms in~\cite{donze2010breach,DMV:EMSOFT2013,FanMitra:2015df} approximate reachtubes using simulations and sensitivity analysis of ODE models generating $\TL$. Here, we begin with a probabilistic method for estimating sensitivity from black-box simulators. \subsection{Discrepancy functions} \label{sec:disc} Sensitivity of trajectories is formalized by the notion of discrepancy functions \cite{DMV:EMSOFT2013}. For a set $\TL$, a {\em discrepancy function\/} is a uniformly continuous function $\beta: \reals^n \times \reals^n \times \nnreals \rightarrow \nnreals$, such that for any pair of identically labeled trajectories $\langle \tau_1,\ell \rangle, \langle \tau_2, \ell \rangle \in \TL$, and any $t \in \tau_1.{\mathit dom} \cap \tau_2.{\mathit dom}$: \begin{inparaenum}[(a)] \item $\beta$ upper-bounds the distance between the trajectories, i.e., \begin{align} |\tau_1(t) - \tau_2(t)| \leq \beta(\tau_1.\mathop{\mathsf {fstate}},\tau_2.\mathop{\mathsf {fstate}},t), \label{eq:discrepancy} \end{align} and \item $\beta$ converges to $0$ as the initial states converge, i.e., for any trajectory $\tau$ and $t \in \tau.{\mathit dom}$, if a sequence of trajectories $\tau_1,\ldots, \tau_k, \ldots$ has $\tau_k.\mathop{\mathsf {fstate}} \rightarrow \tau.\mathop{\mathsf {fstate}}$, then $\beta(\tau_k.\mathop{\mathsf {fstate}},$ $\tau.\mathop{\mathsf {fstate}},t)$ $\rightarrow 0$. \end{inparaenum} In~\cite{DMV:EMSOFT2013} it is shown how, given a $\beta$, condition~(a) can be used to over-approximate reachtubes from simulations, and condition~(b) can be used to make these approximations arbitrarily precise.
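For instance, condition (a) suggests a simple bloating construction: simulate a single trajectory from the center of a ball of initial states and widen it at each sample time by $\beta$ evaluated at the ball's radius. The sketch below illustrates this on a toy scalar system $\dot{x} = -2x$, for which $\beta(x_1,x_2,t) = |x_1 - x_2|\,e^{-2t}$ is an exact discrepancy function; the system and all names are our own choices, not from the text.

```python
# Over-approximating a reachtube from one simulation via condition (a):
# bloat the center trajectory by beta evaluated at the initial ball's radius.
# Toy system xdot = -2x, for which beta(x1,x2,t) = |x1-x2| e^{-2t} is exact.
import math

def simulate(x0, dt, steps):     # closed-form solution, standing in for a
    return [x0 * math.exp(-2 * i * dt) for i in range(steps + 1)]  # simulator

def beta(delta, t):              # discrepancy bound at initial distance delta
    return delta * math.exp(-2 * t)

def reachtube(center, radius, dt, steps):
    """Intervals [lo, hi] covering, at each sample time, every state reachable
    from the initial ball [center - radius, center + radius]."""
    traj = simulate(center, dt, steps)
    return [(x - beta(radius, i * dt), x + beta(radius, i * dt))
            for i, x in enumerate(traj)]

tube = reachtube(center=1.0, radius=0.2, dt=0.1, steps=20)
# trajectories started inside the initial ball stay inside the tube:
ok = all(lo <= x <= hi
         for x0 in (0.85, 1.0, 1.15)
         for (lo, hi), x in zip(tube, simulate(x0, 0.1, 20)))
print(ok)  # True
```

Condition (b) corresponds to the fact that the tube's width, $2\beta(\mathit{radius}, t)$, shrinks to zero as the initial radius does, so partitioning the initial set refines the over-approximation.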
Techniques for computing $\beta$ from ODE models are developed in~\cite{FanMitra:2015df,FanM:EMSOFT2016,HFMMK:CAV2014}, but these are not applicable here in the absence of such models. Instead, we present a simple method for discovering discrepancy functions that only uses simulations. Our method is based on classical results on PAC learning linear separators~\cite{KearnsVazirani}. We recall these before applying them to find discrepancy functions. \vspace{-10pt} \subsubsection{Learning linear separators.} \label{Sec: discrepancy} For $\Gamma \subseteq \reals\times\reals$, a \emph{linear separator} is a pair $(a,b) \in \reals^2$ such that \begin{align} \forall (x,y) \in \Gamma.\ x \leq ay + b. \label{eq:separator} \end{align} Let us fix a subset $\Gamma$ that has an (unknown) linear separator $(a_*,b_*)$. Our goal is to discover some $(a,b)$ that is a linear separator for $\Gamma$ by sampling points in $\Gamma$~\footnote{We prefer to present the learning question in this form as opposed to one where we learn a Boolean concept because it is closer to the task at hand.}. The assumption is that elements of $\Gamma$ can be drawn according to some (unknown) distribution ${\cal D}$. With respect to ${\cal D}$, the \emph{error} of a pair $(a,b)$ from satisfying Equation~\ref{eq:separator} is defined to be $\mathsf{err}_{{\cal D}}(a,b) = {\cal D}(\{(x,y) \in \Gamma\: |\: x > ay+b\})$ where ${\cal D}(X)$ is the measure of set $X$ under distribution ${\cal D}$. Thus, the error is the measure of points (w.r.t.\ ${\cal D}$) for which $(a,b)$ is not a linear separator. There is a very simple (probabilistic) algorithm that finds a pair $(a,b)$ that is a linear separator for a large fraction of points in $\Gamma$, as follows. \begin{enumerate} \itemsep0em \item\label{alg:linsep1} Draw $k$ pairs $(x_1,y_1), \ldots (x_k,y_k)$ from $\Gamma$ according to ${\cal D}$; the value of $k$ will be fixed later.
\item\label{alg:linsep2} Find $(a,b) \in \reals^2$ such that $x_i \leq ay_i + b$ for all $i \in \{1,\ldots, k\}$. \end{enumerate} Step~\ref{alg:linsep2} involves checking feasibility of a linear program, and so can be done efficiently. This algorithm, with high probability, finds a linear separator for a large fraction of points. \begin{proposition} \proplabel{linear-sep-learn} Let $\epsilon, \delta \in \plreals$. If $k \geq \frac{1}{\epsilon}\ln\frac{1}{\delta}$ then, with probability $\geq 1-\delta$, the above algorithm finds $(a,b)$ such that $\mathsf{err}_{{\cal D}}(a,b) < \epsilon$. \end{proposition} \begin{proof} The result follows from the PAC-learnability of concepts with low VC-dimension~\cite{KearnsVazirani}. However, since the proof is very simple in this case, we reproduce it here for completeness. Let $k$ be as in the statement of the proposition, and suppose the pair $(a,b)$ identified by the algorithm has error $> \epsilon$. We will bound the probability of this happening. Let $B = \{(x,y)\: |\: x > ay+b\}$. We know that ${\cal D}(B) > \epsilon$. The algorithm chose $(a,b)$ only because no element from $B$ was sampled in Step~\ref{alg:linsep1}. The probability that this happens is $\leq (1-\epsilon)^k$. Observing that $(1-s) \leq e^{-s}$ for any $s$, we get $(1-\epsilon)^k \leq e^{-\epsilon k} \leq e^{-\ln \frac{1}{\delta}} = \delta$. This gives us the desired result. \end{proof} \subsubsection{Learning discrepancy functions} Discrepancy functions will be computed from simulation data independently for each mode. Let us fix a mode $\ell \in \L$, and a domain $[0,T]$ for each trajectory. The discrepancy functions that we learn from simulation data will take one of two forms; we discuss how each is obtained below. \vspace{-10pt} \paragraph{Global exponential discrepancy (GED)} is a function of the form \[ \beta(x_1,x_2,t) = |x_1 - x_2| Ke^{\gamma t}. \] Here $K$ and $\gamma$ are constants.
Thus, for any pair of trajectories $\tau_1$ and $\tau_2$ (for mode $\ell$), we have \[ \forall t \in [0,T].\ |\tau_1(t) - \tau_2(t)| \leq |\tau_1.\mathop{\mathsf {fstate}} - \tau_2.\mathop{\mathsf {fstate}}| Ke^{\gamma t}. \] Taking logs on both sides and rearranging terms, we have \[ \forall t.\ \ln \frac{|\tau_1(t) - \tau_2(t)|}{|\tau_1.\mathop{\mathsf {fstate}} - \tau_2.\mathop{\mathsf {fstate}}|} \leq \gamma t + \ln K. \] It is easy to see that a global exponential discrepancy is nothing but a linear separator for the set $\Gamma$ consisting of pairs $(\ln \frac{|\tau_1(t) - \tau_2(t)|}{|\tau_1.\mathop{\mathsf {fstate}} - \tau_2.\mathop{\mathsf {fstate}}|}, t)$ for all pairs of trajectories $\tau_1,\tau_2$ and time $t$. Using the sampling-based algorithm described before, we can construct a GED for a mode $\ell \in \L$, where sampling from $\Gamma$ reduces to using the simulator to generate traces from different states in $\TL_{\sf init, \ell}$. \propref{linear-sep-learn} guarantees the correctness, with high probability, for any separator discovered by the algorithm. However, for our reachability algorithm to not be too conservative, we need $K$ and $\gamma$ to be small. Thus, when solving the linear program in Step~\ref{alg:linsep2} of the algorithm, we search for a solution minimizing $\gamma T + \ln K$. \vspace{-10pt} \paragraph{Piece-wise exponential discrepancy (PED).} The second form of discrepancy functions we consider depends upon dividing up the time domain $[0,T]$ into smaller intervals, and finding a global exponential discrepancy for each interval. Let $0 = t_0,t_1,\ldots t_N = T$ be an increasing sequence of time points. Let $K, \gamma_1, \gamma_2, \ldots \gamma_N$ be such that for every pair of trajectories $\tau_1,\tau_2$ (of mode $\ell$), for every $i \in \{1,\ldots, N\}$, and $t \in [t_{i-1},t_i]$, $|\tau_1(t) - \tau_2(t)| \leq |\tau_1(t_{i-1}) - \tau_2(t_{i-1})| Ke^{\gamma_i t}$.
Under such circumstances, the discrepancy function itself can be seen to be given as \[ \beta(x_1,x_2,t) = |x_1 - x_2| Ke^{\sum_{j=1}^{i-1}\gamma_j(t_j - t_{j-1}) + \gamma_i (t-t_{i-1})} \qquad \mbox{for } t \in [t_{i-1},t_i]. \] If the time points $0 = t_0,t_1,\ldots t_N = T$ are fixed, then the constants $K, \gamma_1, \gamma_2, \ldots $ $\gamma_N$ can be discovered using the learning approach described for GED; here, to discover $\gamma_i$, we take $\Gamma_i$ to be the pairs obtained by restricting the trajectories to be between times $t_{i-1}$ and $t_i$. The sequence of time points $t_i$ is also constructed dynamically by our algorithm, based on the following approach. Our experience suggests that a value for $\gamma$ that is $\geq 2$ results in very conservative reachtube computation. Therefore, the time points $t_i$ are constructed inductively to be as large as possible, while ensuring that $\gamma_i < 2$. \vspace{-10pt} \subsubsection{Experiments on learning discrepancy} We used the above algorithm to learn discrepancy functions for dozens of modes with complex, nonlinear trajectories. Our experiments suggest that around 10-20 simulation traces are adequate for computing both global and piece-wise discrepancy functions. For each mode we use a set $S_{\sf train}$ of simulation traces that start from independently drawn random initial states in $\TL_{\sf init,\ell}$ to learn a discrepancy function. Each trace may have $100$-$10000$ time points, depending on the relevant time horizon and sample times. Then we draw another set $S_{\sf test}$ of $1000$ simulation traces for validating the computed discrepancy. For every pair of traces in $S_{\sf test}$ and for every time point, we check whether the computed discrepancy satisfies Equation~\ref{eq:discrepancy}.
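The GED learning step can be sketched in a few lines of Python. This is our own minimal reimplementation under simplifying assumptions, not the tool's code: traces are one-dimensional, and a grid sweep over candidate $\gamma$ values (with the tightest feasible intercept $\ln K$ for each) stands in for the two-variable linear program; all names are hypothetical. Note that \propref{linear-sep-learn} with $\epsilon = \delta = 0.01$ would call for $k \geq \frac{1}{0.01}\ln\frac{1}{0.01} \approx 461$ sampled points.

```python
import itertools
import math
import random

# Our own minimal sketch of GED learning, not DryVR's implementation.
# Step 1: log-transform pairwise trace distances into the set Gamma.
def log_points(traces):
    pts = []
    for tr1, tr2 in itertools.combinations(traces, 2):
        d0 = abs(tr1[0][1] - tr2[0][1])  # initial-state distance
        for (t, x1), (_, x2) in zip(tr1, tr2):
            d = abs(x1 - x2)
            if d0 > 0 and d > 0:
                pts.append((t, math.log(d / d0)))
    return pts

# Step 2: in place of the 2-variable LP, sweep candidate gammas and
# take the smallest feasible intercept b = ln K for each, minimizing
# the objective gamma*T + ln K from the text.
def learn_ged(pts, T, gammas):
    best = None
    for g in gammas:
        b = max(y - g * t for t, y in pts)  # tightest feasible ln K
        obj = g * T + b
        if best is None or obj < best[0]:
            best = (obj, g, b)
    _, g, b = best
    return math.exp(b), g  # (K, gamma)

# Traces of the contracting mode x' = -x from random initial states:
random.seed(0)
ts = [0.05 * i for i in range(21)]
traces = [[(t, x0 * math.exp(-t)) for t in ts]
          for x0 in [1 + random.random() for _ in range(10)]]
pts = log_points(traces)
K, gamma = learn_ged(pts, T=1.0, gammas=[-2 + 0.1 * k for k in range(41)])
# The learned pair separates every sampled point and detects contraction:
print(gamma < 0 and all(y <= gamma * t + math.log(K) + 1e-9 for t, y in pts))
```

On this contracting example the sweep returns a negative $\gamma$, i.e., the learned bound shrinks with time, and by construction it is a valid separator for all sampled points.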
We observe that for $|S_{\sf train}| > 10$ the computed discrepancy function is correct for $96\%$ of the points in $S_{\sf test}$ and for $|S_{\sf train}| > 20$ it is correct for more than $99.9\%$, across all experiments. \subsection{Verification algorithm} \label{sec:verfication_algorithm} In this section, we present algorithms to solve the bounded verification problem for hybrid systems using learned exponential discrepancy functions. We first introduce an algorithm $\mathit{GraphReach}$ (Algorithm \ref{alg:ComputeRT}) which takes as input a hybrid system $\H = \langle \L, \Theta, G, \TL \rangle$ and returns a set of reachtubes---one for each vertex of $G$---such that their union over-approximates $\reachtube{\H}$. $\mathit{GraphReach}$ maintains two data-structures: \begin{inparaenum}[(a)] \item $RS$ accumulates pairs of the form $\langle RT, v\rangle$, where $v \in \V$ and $RT$ is its corresponding reachtube; \item $\mathit{VerInit}$ accumulates pairs of the form $\langle S, v \rangle$, where $v \in \V$ and $S\subset \reals^n$ is the set of states from which the reachtube in $v$ is to be computed. \end{inparaenum} Each $v$ could be in multiple such pairs in $RS$ and $\mathit{VerInit}$. Initially, $RS = \emptyset$ and $\mathit{VerInit} = \{\langle \Theta, v_{\sf init}\rangle\}$. $\mathit{LearnDiscrepancy(S_{\sf init},d,\ell)}$ computes the discrepancy function for mode $\ell$, from initial set $S_{\sf init}$ and up to time $d$ using the algorithm of Section \ref{sec:disc}. $\mathit{ReachComp}(S_{\sf init},d,\beta)$ first generates finite simulation traces from $S_{\sf init}$ and then bloats the traces to compute a reachtube using the discrepancy function $\beta$. This step is similar to the algorithm for dynamical systems given in~\cite{DMV:EMSOFT2013}. The $\mathit{GraphReach}$ algorithm proceeds as follows: first, a topologically sorted array of the vertices of the DAG $G$ is computed in $\mathit{Order}$ (\lnref{ln: init2}).
The pointer $ptr$ iterates over $\mathit{Order}$ and for each vertex $\mathit{curv}$ the following is computed. The variable $\mathit{dt}$ is set to the maximum transition time to other vertices from $\mathit{curv}$ (\lnref{ln: dwt}). For each possible initial set $S_{\sf init}$ corresponding to $\mathit{curv}$ in $\mathit{VerInit}$, the algorithm computes a discrepancy function (\lnref{ln: disc}) and uses it to compute a reachtube from $S_{\sf init}$ up to time $\mathit{dt}$ (\lnref{ln: reachtube}). For each successor $\mathit{nextv}$ of $\mathit{curv}$, the restriction of the computed reachtube $RT$ to the corresponding transition time interval ${\sc elab}((\mathit{curv,nextv}))$ is set as an initial set for $\mathit{nextv}$ (\lnsref{ln: forloopbegin}{ln: forloopend}). \begin{algorithm}[h!] \caption{$\mathit{GraphReach}(\H)$ computes bounded time reachtubes for each vertex of the transition graph $G$ of hybrid system $\H$.} \label{alg:ComputeRT} \SetKwInOut{Input}{input} \SetKwInOut{Initially}{initially} {$RS \gets \emptyset; \mathit{VerInit} \gets \{\langle \Theta, v_{\sf init} \rangle\}; \mathit{Order} \gets \mathit{TopSort}(G)$\;} \lnlabel{ln: init2} {\bf for} {$ptr = 0: len(Order)-1$} { {$\mathit{curv} \gets \mathrm{Order}[ptr]$ \;} \lnlabel{ln: currentV} {$\ell \gets {\sc vlab}(\mathit{curv})$\;} \lnlabel{ln: currentL} $\mathit{dt} \gets \textrm{max} \{t' \in \nnreals \:| \:\exists vs \in \V, (\mathit{curv}, vs) \in \E, (t,t') \gets {\sc elab} \left( (\mathit{curv}, vs) \right) \}$\; \lnlabel{ln: dwt} {\bf for} {$S_{\sf init} \in \{S~|~ \langle S,\mathit{curv} \rangle \in \mathit{VerInit}\}$} { {$\beta \gets \mathit{LearnDiscrepancy}(S_{\sf init},\mathit{dt},\ell)$\;} \lnlabel{ln: disc} {$ RT \gets \mathit{ReachComp}(S_{\sf init},\mathit{dt},\beta)$\;} \lnlabel{ln: reachtube} {$RS \gets RS \cup \langle RT, \mathit{curv} \rangle $\;} {\bf for} {$\mathit{nextv} \in \mathit{curv}.succ$} { $(t,t') \gets {\sc elab} \left( (\mathit{curv}, nextv) \right)$\;
\lnlabel{ln: forloopbegin} $\mathit{VerInit} \gets \mathit{VerInit} \cup \langle \mathit{Restr}(RT,(t,t')), nextv\rangle$\; \lnlabel{ln: forloopend} } } } \Return $RS$ \; \end{algorithm} The invariant verification algorithm $\mathit{VerifySafety}$ decides safety of $\H$ with respect to a given unsafe set $\U$ and uses $\mathit{GraphReach}$. The detailed pseudocode appears in Appendix~\ref{appendix:safetyveri}. This algorithm proceeds in a way similar to the simulation-based verification algorithms for dynamical and hybrid systems~\cite{DMV:EMSOFT2013,fan2016automatic}. Given initial set $\Theta$ and transition graph $G$ of $\H$, this algorithm partitions $\Theta$ into several subsets, and then for each subset $S$ it checks whether the computed over-approximate reachtube $RS$ from $S$ intersects with $\U$: \begin{inparaenum}[(a)] \item If $RS$ is disjoint from $\U$, the system is safe starting from $S$; \item if a certain part of a reachtube $RT$ is contained in $\U$, the system is declared unsafe and $RT$ with the corresponding path of the graph are returned as counter-example witnesses; \item if neither of the above conditions holds, then the algorithm performs refinement to get a more precise over-approximation of $RS$. \end{inparaenum} Several refinement strategies are implemented in {{\sc DryVR}} to accomplish the last step. Broadly, these strategies rely on splitting the initial set $S$ into smaller sets (this gives tighter discrepancy in the subsequent vertices) and splitting the edge labels of $G$ into smaller intervals (this gives smaller initial sets in the vertices). The above description focuses on invariant properties, but the algorithm and our implementation in {{\sc DryVR}} can verify a useful class of temporal properties. These are properties in which the time constraints only refer to the time since the last mode transition.
For example, for the $\auto{Powertrn}$ benchmark the tool verifies requirements like ``after $4$s in $\Lmode{normal}$ mode, the air-fuel ratio should be contained in $[14.6,14.8]$ and after $4$s in $\Lmode{powerup}$ it should be in $[12.4,12.6]$''. \vspace{-10pt} \paragraph{Correctness} Given a correct discrepancy function for each mode, we can prove the soundness and relative completeness of Algorithm~\ref{alg:safetyveri}. This analysis closely follows the proof of Theorem~19 and Theorem~21 in~\cite{duggirala2015dynamic}. Combining this with the probabilistic correctness of $\mathit{LearnDiscrepancy}$, we obtain the following probabilistic soundness guarantee. \begin{theorem} If the $\beta$'s returned by $\mathit{LearnDiscrepancy}$ are always discrepancy functions for corresponding modes, then $\mathit{VerifySafety}(\H,\U)$ (Algorithm~\ref{alg:safetyveri}) is sound. That is, if it outputs ``SAFE'', then $\H$ is safe with respect to $\U$ and if it outputs ``UNSAFE'' then there exists an execution of $\H$ that enters $\U$. \end{theorem} \section{Appendix} \label{app:A} \subsection{ADAS and autonomous vehicle benchmarks} \label{app:adas} We provide more details for the different scenarios used for testing ADAS and Autonomous driving control systems. Recall that each vehicle model in {Simulink\textsuperscript{\textregistered}}\ has several continuous variables including the $x, y$-coordinates of the vehicle on the road, its velocity, heading, steering angle, etc. The vehicle can be controlled by two input signals, namely the throttle (acceleration or brake) and the steering speed. By choosing appropriate values of these input signals, we have defined the following modes for each vehicle \begin{inparaenum}[(a)] \item \Lmode{cruise}: move forward at constant speed, \Lmode{speedup}: constant acceleration, \Lmode{brake}: constant (slow) deceleration, \Lmode{em\_brake}: constant (hard) deceleration.
\end{inparaenum} We have designed lane switching modes \Lmode{ch\_left} and \Lmode{ch\_right} in which the acceleration and steering are controlled in such a manner that the vehicle switches to its left (resp. right) lane in a certain amount of time. For each vehicle, we mainly analyze four variables: absolute position ($sx$) and velocity ($vx$) orthogonal to the road direction ($x$-axis), and absolute position ($sy$) and velocity ($vy$) along the road direction ($y$-axis). The throttle and steering information can be expressed using the four variables. We will use subscripts to distinguish between different vehicles. The following scenarios are constructed by defining appropriate sets of initial states and transition graphs labeled by the modes of two or more vehicles. \begin{description} \item[$\auto{MergeBehind}$:] Initial condition: Vehicle A is in the left lane and vehicle B is in the right lane; initial positions and speeds are in some range; A is in \Lmode{cruise} mode, and B is in \Lmode{cruise} or \Lmode{speedup}. Transition graph: Vehicle A goes through the mode sequence \Lmode{speedup}, \Lmode{ch\_right}, \Lmode{cruise} with specified intervals of time to transit from one mode to another. Requirement: A merges behind B within a time bound and maintains at least a given safe separation. \item[$\auto{MergeAhead}$:] Initial condition: Same as $\auto{MergeBehind}$, except that B is in \Lmode{cruise} or \Lmode{brake} mode. Transition graph: Same structure as $\auto{MergeBehind}$ with different timing parameters. Requirement: A merges ahead of B and maintains at least a given safe separation. \item[$\auto{AutoPassing}$:] Initial condition: Vehicle A behind B in the same lane, with A in \Lmode{speedup} and B in \Lmode{cruise}; initial positions and speeds are in some range.
Transition graph: A goes through the mode sequence \Lmode{ch\_left}, \Lmode{speedup}, \Lmode{brake}, and \Lmode{ch\_right}, \Lmode{cruise} with specified time intervals in each mode to complete the overtake maneuver. If B switches to \Lmode{speedup} before A enters \Lmode{speedup} then A aborts and changes back to the right lane. If B switches to \Lmode{brake} before A enters \Lmode{ch\_left}, then A should adjust the time to switch to \Lmode{ch\_left} to avoid collision. Requirement: Vehicle A overtakes B while maintaining minimal safe separation. \item[$\auto{AEB}$:] (Emergency brakes) Initial condition: Vehicle A behind B in the same lane with A in \Lmode{cruise}, B is stopped (in \Lmode{cruise} mode with velocity $0$). Initial positions and speeds are in some range. Transition graph: A transits from \Lmode{cruise} to \Lmode{em\_brake} over a given interval of time or several disjoint intervals of time. Requirement: Vehicle A stops behind B and maintains at least a given safe separation. \item[$\auto{MergeBetween}$:] Initial condition: Vehicle A, B, C are all in the same lane, with A behind B, B behind C, and in the \Lmode{cruise} mode; initial positions and speeds are in some range. Transition graph: A goes through the mode sequence \Lmode{ch\_left}, \Lmode{speedup}, \Lmode{brake}, and \Lmode{ch\_right}, \Lmode{cruise} with specified time intervals in each mode to overtake B. C transits from \Lmode{cruise} to \Lmode{speedup} then transits back to \Lmode{cruise}, so C is always ahead of A. Requirement: Vehicle A merges between B and C and any two vehicles maintain at least a given safe separation. \end{description} \subsection{Automatic transmission control} \label{ssec:gear} We provide some details about the Automatic transmission control benchmark that we have modeled as a hybrid system that combines white-box and black-box components and that we have verified using {\sc DryVR}'s safety verification algorithm.
This is a slightly modified version of the Automatic Transmission model provided by {Mathworks\textsuperscript{\textregistered}}\ as a {Simulink\textsuperscript{\textregistered}}\ demo \cite{Matlab_trans}. It is a model of an automatic transmission controller that exhibits both continuous and discrete behavior. The model has been previously used by S-taliro~\cite{S-Taliro} for falsifying certain requirements. We are not aware of any verification results for this system. For our experiments, we made some minor modifications to the {Simulink\textsuperscript{\textregistered}}\ model to create the hybrid system $\auto{ATS}$. This allows us to simulate the vehicle from any one of the four modes, namely, \Lmode{gear1}, \Lmode{gear2}, \Lmode{gear3} and \Lmode{gear4}. Although the system has many variables, we are primarily interested in the car speed ($v$), engine RPM (Erpm), impeller torque ($T_i$), output torque ($T_o$), and transmission RPM (Trpm), and therefore, use simulations that record these. The transition graph of $\auto{ATS}$ encodes transition sequences and intervals for shifting from \Lmode{gear1} through to \Lmode{gear4}. The requirement of interest is that the engine RPM is less than a specified maximum value, which in turn is important for limiting the thermal and mechanical stresses on the cylinders and camshafts. A typical unsafe set $\U_t$ could be Erpm $>4000$. \subsection{Safety verification algorithm} \label{appendix:safetyveri} The safety verification algorithm is shown in Algorithm~\ref{alg:safetyveri}. It proceeds along the lines of the simulation-based verification algorithms presented in~\cite{DMV:EMSOFT2013, FanMitra:2015df, DuggiralaMV:2015c2e2}.
\begin{algorithm} \caption{$\mathit{VerifySafety}(\H,\U)$ verifies safety of hybrid system $\H$ with respect to unsafe set $\U$.} \label{alg:safetyveri} \SetKwInOut{Input}{input} \SetKwInOut{Initially}{initially} {\bf initially}{${\ensuremath{\cap}}.push(Partition(\Theta))$} {\bf while} {${\ensuremath{\cap}} \neq \emptyset$} { $S \gets {\ensuremath{\cap}}.pop()$\; $RS \gets \mathit{GraphReach}(\H)$ \; \uIf {$RS \cap \U = \emptyset$} { continue\;} \uElseIf {$\exists (x,l,t) \in RT$ s.t. $\langle RT,v \rangle \in RS$ and $(x,l,t) \subseteq \U $} {\Return UNSAFE, $\langle RT,v \rangle$} {\bf else} { {${\ensuremath{\cap}}.push (Partition(S))$ \;} {Or, $G \gets RefineGraph(G)$ \;} } } \Return SAFE \end{algorithm} \section{Conclusions} \label{sec:cs} The work presented in this paper takes an alternative view that complete mathematical models of hybrid systems are unavailable. Instead, the available system description combines a black-box simulator and a white-box transition graph. Starting from this point of view, we have developed the semantic framework, a probabilistic verification algorithm, and results on simulation relations and sequential composition for reasoning about complex hybrid systems over long switching sequences. Through modeling and analysis of a number of automotive control systems using implementations of the proposed approach, we hope to have demonstrated their promise. One direction for further exploration in this vein, is to consider more general timed and hybrid automata models of the white-box, and develop the necessary algorithms and the reasoning techniques. \section{Other examples} \subsection{ADAS and autonomous vehicle benchmarks} \label{ssec:adas} This is a suite of benchmarks we have created representing various common scenarios used for testing ADAS and Autonomous driving control systems. The hybrid system for a scenario is constructed by putting together several individual vehicles.
The higher-level decisions (paths) followed by the vehicles are captured by transition graphs while the detailed dynamics of each vehicle comes from a black-box {Simulink\textsuperscript{\textregistered}} \ simulator from {Mathworks\textsuperscript{\textregistered}}~\cite{simulinkcar}. Each vehicle has several continuous variables including the $x, y$-coordinates of the vehicle on the road, its velocity, heading, and steering angle. The vehicle can be controlled by two input signals, namely the throttle (acceleration or brake) and the steering speed. By choosing appropriate values for these input signals, we have defined the following modes for each vehicle --- \Lmode{cruise}: move forward at constant speed, \Lmode{speedup}: constant acceleration, \Lmode{brake}: constant (slow) deceleration, \Lmode{em\_brake}: constant (hard) deceleration. In addition, we have designed lane switching modes \Lmode{ch\_left} and \Lmode{ch\_right} in which the acceleration and steering are controlled in such a manner that the vehicle switches to its left (resp. right) lane in a certain amount of time. For each vehicle, we mainly analyze four variables: absolute position ($sx$) and velocity ($vx$) orthogonal to the road direction ($x$-axis), and absolute position ($sy$) and velocity ($vy$) along the road direction ($y$-axis). The throttle and steering are captured using the four variables. We will use subscripts to distinguish between different vehicles. The following scenarios are constructed by defining appropriate sets of initial states and transition graphs labeled by the modes of two or more vehicles. In all of these scenarios a primary safety requirement is that the vehicles maintain safe separation. See Appendix~\ref{app:adas} for more details on initial states and transition graphs of each scenario. \begin{description} \itemsep0em \item[$\auto{Merge}$:] Vehicle A in the left lane is behind vehicle B in the right lane.
A switches through modes \Lmode{cruise}, \Lmode{speedup}, \Lmode{ch\_right}, and \Lmode{cruise} over specified intervals to merge behind B. Variants of this scenario involve $B$ also switching to \Lmode{speedup} or \Lmode{brake}. \item[$\auto{AutoPassing}$:] Vehicle A starts behind B in the same lane, and goes through a sequence of modes to overtake B. If B switches to \Lmode{speedup} before A enters \Lmode{speedup} then A aborts and changes back to the right lane. \item[$\auto{Merge3}$:] Same as $\auto{AutoPassing}$ with a third car C always ahead of $B$. \item[$\auto{AEB}$:] Vehicle A cruises behind B and B stops. A transits from \Lmode{cruise} to \Lmode{em\_brake} possibly over several different time intervals as governed by different sensors and reaction times. \end{description} \subsection{Experiments on safety verification} \label{sec:reachexp} \begin{figure} \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth,trim={1.0cm 0.8cm 1.5cm 0.8cm},clip]{Car_reachTube.pdf} \caption{Safe reachtube. } \label{fig:AutoPassingA} \end{subfigure} ~ ~ \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth,trim={1.0cm 0.8cm 1.5cm 0.8cm},clip]{Car_simulation.pdf} \caption{Unsafe execution. } \label{fig:AutoPassingB} \end{subfigure} \vspace{-8pt} \caption{\small $\auto{AutoPassing}$ verification. Vehicle A's (red) modes are shown above each subplot. Vehicle B (green) is in \Lmode{cruise}. Top: $sx_A,sx_B$. Bottom: $sy_A, sy_B$. } \label{fig:AutoPassing} \end{figure} The algorithms have been implemented in {{\sc DryVR}} and have been used to automatically verify the benchmarks from Section~\ref{sec:prelims} and an Automatic Transmission System (Appendix~\ref{ssec:gear}). The transition graph, the initial set, and unsafe set are given in a text file. {{\sc DryVR}} uses simulators for modes, and outputs either ``Safe'' or ``Unsafe''. Reachtubes or counter-examples computed during the analysis are also stored in text files.
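To fix intuition for such an input, here is one hypothetical way to encode a white-box transition graph as plain data, together with the topological sort that $\mathit{GraphReach}$ relies on. This is not {\sc DryVR}'s actual file format; the field names, mode sequence, and timing intervals below are our own illustration.

```python
# Hypothetical encoding of a white-box transition graph (a DAG whose
# vertices carry modes and whose edges carry switching-time intervals).
graph = {
    'vertices': {0: 'cruise', 1: 'speedup', 2: 'ch_left', 3: 'cruise'},
    'edges': {(0, 1): (2.0, 3.0),   # e.g. switch to speedup after 2-3 s
              (1, 2): (1.0, 2.0),
              (2, 3): (2.0, 2.5)},
    'init_vertex': 0,
}

def topological_order(g):
    """Kahn's algorithm; the graph must be a DAG, as the model requires."""
    indeg = {v: 0 for v in g['vertices']}
    for (_, dst) in g['edges']:
        indeg[dst] += 1
    order, frontier = [], [v for v, d in indeg.items() if d == 0]
    while frontier:
        v = frontier.pop()
        order.append(v)
        for (src, dst) in g['edges']:
            if src == v:
                indeg[dst] -= 1
                if indeg[dst] == 0:
                    frontier.append(dst)
    assert len(order) == len(g['vertices']), 'not a DAG'
    return order

print(topological_order(graph))  # [0, 1, 2, 3]
```

The reachability pass then visits vertices in this order, so every predecessor's reachtube restriction is available before a vertex's own reachtube is computed.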
The implementation is in Python using MATLAB's Python API for accessing the {Simulink\textsuperscript{\textregistered}} ~simulators. Py-GLPK~\cite{python-glpk-package} is used to find the parameters of discrepancy functions; either global (GED) or piece-wise (PED) discrepancy can be selected by the user. Z3~\cite{de2008z3} is used for reachtube operations. At this stage, all the benchmarks we are working on heavily rely on {{Mathworks\textsuperscript{\textregistered}}} {{Simulink\textsuperscript{\textregistered}}}. We don't have a public {{Mathworks\textsuperscript{\textregistered}}} license to release the tool, and it is complicated for users to build a connection between {{\sc DryVR}} and their own {{Simulink\textsuperscript{\textregistered}}} models. We will release {{\sc DryVR}} soon after we move the black-box benchmarks to different open-source software. Figure~\ref{fig:AutoPassing} shows example plots of computed safe reachtubes and counter-examples for a simplified $\auto{AutoPassing}$ in which vehicle B stays in \Lmode{cruise} throughout. As before, vehicle A goes through a sequence of modes to overtake B. Initially, for both $i \in \{A,B\}$, $sx_i = vx_i = 0$ and $vy_i = 1$, i.e., both are cruising at constant speed at the center of the right lane; initial positions along the lane are $sy_A\in [0,2], sy_B \in [15,17]$. Figure~\ref{fig:AutoPassingA} shows the lateral positions ($sx_A$ in red and $sx_B$ in green, in the top subplot), and the positions along the lane ($sy_A$ in red and $sy_B$ in green, in the bottom plot). Vehicle A moves to left lane ($sx$ decreases) and then back to the right, while B remains in the right lane, as A overtakes B (bottom plot). The unsafe set $(|sx_A-sx_B|<2 \ \&\ |sy_A-sy_B|<2)$ is proved to be disjoint from the computed reachtube. With a different initial set, $sy_B \in [30,40]$, {\sc DryVR}\ finds a counter-example (Figure~\ref{fig:AutoPassingB}).
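The safe/unsafe/refine trichotomy behind this verdict can be illustrated with simple interval arithmetic. In the tool the set operations go through Z3; the following standalone Python sketch (our own, with hypothetical names and box values) classifies a single hyperrectangle of a reachtube against the unsafe set $|sx_A-sx_B|<2 \ \&\ |sy_A-sy_B|<2$:

```python
# Illustrative sketch (names ours): classifying one reachtube box
# against the AutoPassing unsafe set via interval arithmetic.

def diff_interval(lo1, hi1, lo2, hi2):
    """Interval of a - b for a in [lo1, hi1], b in [lo2, hi2]."""
    return lo1 - hi2, hi1 - lo2

def abs_interval(lo, hi):
    """Interval of |x| for x in [lo, hi]."""
    if lo <= 0 <= hi:
        return 0.0, max(-lo, hi)
    return min(abs(lo), abs(hi)), max(abs(lo), abs(hi))

def classify(box, gap=2.0):
    """box: dict var -> (lo, hi) over sxA, sxB, syA, syB."""
    dx = abs_interval(*diff_interval(*box['sxA'], *box['sxB']))
    dy = abs_interval(*diff_interval(*box['syA'], *box['syB']))
    if dx[0] >= gap or dy[0] >= gap:
        return 'SAFE'      # box disjoint from the unsafe set
    if dx[1] < gap and dy[1] < gap:
        return 'UNSAFE'    # box contained in the unsafe set
    return 'REFINE'        # inconclusive: split the initial set

# A is provably one lane over (lateral gap at least 3.7 > 2):
safe_box = {'sxA': (-4.2, -3.8), 'sxB': (-0.1, 0.1),
            'syA': (10, 12), 'syB': (15, 17)}
print(classify(safe_box))  # SAFE
```

A box that straddles the boundary yields the third outcome, which is precisely what triggers the refinement strategies (splitting initial sets or edge-label intervals) described earlier.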
\begin{table} \begin{center} \centering \resizebox{\columnwidth}{!}{% \begin{tabular}{|c|rlcccr|} \hline Model & TH & Initial set & $\U$ & Ref & Safe & Runtime \\ \hline \makecell[c]{$\auto{Powertrn}$ \\ (5 vers, 6 edges)} & 80 & \makecell[l]{$\lambda \in [14.6,14.8]$} & $\U_{p}$ & 2 & \cmark & 217.4s \\ \cline{1-7} {$\auto{AutoPassing}$} & 50 & $sy_A \in [-1,1]$ $sy_B \in [14,16]$ & $\U_{c}$ & 4 & \cmark & 208.4s \\ \cline{2-7} (12 vers, 13 edges) & 50 & $sy_A \in [-1,1]$ $sy_B \in [4,6.5]$ & $\U_{c}$ & 5 & \xmark & 152.5s \\ \hline {$\auto{Merge}$ } & 50 & $sx_A \in [-5,5]$ $sy_B\in [-2,2]$ & $\U_{c}$ & 0 & \cmark & 55.0s \\ \cline{2-7} (7 vers, 7 edges) & 50 & $sx_A \in [-5,5]$ $sy_B \in [2,10]$ & $\U_{c}$ & - & \xmark & 38.7s \\ \hline {$\auto{Merge3}$} & 50 & \makecell[l]{$sy_A \in [-3,3]$ $sy_B \in [14, 23]$\\$sy_C \in [36, 45]$} & $\U_{c}$ & 4 & \cmark & 197.6s \\ \cline{2-7} (6 vers, 5 edges) & 50 & \makecell[l]{$sy_A \in [-3,3]$ $sy_B \in [14,15]$ \\ $sy_C \in [16, 20]$} & $\U_{c}$ & - & \xmark & 21.3s \\ \hline \makecell[c]{ {$\auto{ATS}$ }\\ (4 vers, 3 edges)} & 50 & Erpm $\in [900,1000]$& $\U_{t}$ & 2 & \cmark & 109.2s \\ \cline{2-7} \hline \end{tabular}% } \vspace{-14pt} \caption{\small Safety verification results. Numbers below benchmark names: \# vertices and edges of $G$, TH: duration of shortest path in $G$, Ref: \# refinements performed; Runtime: overall running time.} \label{table:results} \end{center} \end{table}% Table~\ref{table:results} summarizes some of the verification results obtained using {\sc DryVR}. $\auto{ATS}$ is an automatic transmission control system (see Appendix~\ref{ssec:gear} for more details). These experiments were performed on a laptop with Intel Core i7-6600U CPU and 16 GB RAM. Only the initial ranges of the salient continuous variables are shown in the table. The unsafe sets are discussed with the model description. For example, $\U_c$ means two vehicles are too close.
For all the benchmarks, the algorithm terminated in a few minutes, which includes the time to simulate, learn discrepancy functions, generate reachtubes, and check the safety of the reachtubes, over all refinements. For the results presented in Table~\ref{table:results}, we used GED. The reachtube generated by PED for $\auto{Powertrn}$ is more precise, but for the rest, the reachtubes and the verification times using both GED and PED were comparable. In addition to the $\mathit{VerifySafety}$ algorithm, {\sc DryVR}\ also looks for counter-examples by quickly generating random executions of the hybrid system. If any of these executions is found to be unsafe, {\sc DryVR}\ will return ``Unsafe'' without starting the $\mathit{VerifySafety}$ algorithm. \section{Introduction} \label{sec:intro} The starting point of existing hybrid system verification approaches is the availability of nice mathematical models describing the transitions and trajectories. This central conceit severely restricts the applicability of the resulting approaches. Real-world control system ``models'' are typically a heterogeneous mix of simulation code, differential equations, block diagrams, and hand-crafted look-up tables. Extracting clean mathematical models from these descriptions is usually infeasible. At the same time, rapid developments in Advanced Driving Assist Systems (ADAS), autonomous vehicles, robotics, and drones now make the need for effective and sound verification algorithms stronger than ever before. The {\sc DryVR}\ framework presented in this paper aims to narrow the gap between sound and practical verification for control systems. \vspace{-10pt} \paragraph{Model assumptions} Consider an ADAS feature like an automatic emergency braking system (AEB). The high-level logic deciding the timing of when and for how long the brakes are engaged after an obstacle is detected by sensors is implemented in a relatively clean piece of code and this logical module can be seen as a {\em white-box\/}.
In contrast, the dynamics of the vehicle itself, with hundreds of parameters, are more naturally viewed as a {\em black-box\/}. That is, it can be simulated or tested with different initial conditions and inputs, but it is nearly impossible to write down a nice mathematical model. The empirical observation motivating this work is that many control systems, and especially automotive systems, share this combination of white and black boxes (see other examples in Sections~\ref{ex:powertrain}, \ref{ssec:adas}, and \ref{ssec:gear}). In this paper, we view hybrid systems as a combination of a white-box that specifies the mode switches and a black-box that can simulate the continuous evolution in each mode. Suppose the system has a set of modes $\L$ and $n$ continuous variables. The mode switches are defined by a {\em transition graph} $G$, which is a directed acyclic graph (DAG) whose vertices and edges define the allowed mode switches and the switching times. The black-box is a set of trajectories $\TL$ in $\reals^n$ for each mode in $\L$. We do not have a closed-form description of $\TL$; instead, we have a {\em simulator\/} that can generate sampled data points on individual trajectories for a given initial state and mode. Combining a transition graph $G$, a set of trajectories $\TL$, and a set of initial states in $\reals^n$, we obtain a hybrid system for which executions, reachability, and trace containment can be defined naturally. We have studied a suite of automotive systems such as powertrain control~\cite{jin2014powertrain}, automatic transmission control~\cite{Matlab_trans}, and ADAS features like automatic emergency braking (AEB), lane-change, and auto-passing, that are naturally represented in the above style. In verifying a lane-change or merge controller, once the maneuver is activated, the mode transitions occur within certain time intervals.
In testing a powertrain control system, the mode transitions are brought about by the driver, and it is standard to describe typical driver classes using time-triggered signals. Similar observations hold in other examples. \vspace{-10pt} \paragraph{Safety verification algorithm} With black-box modules in our hybrid systems, we address the challenge of providing guaranteed verification. Our approach is based on the idea of simulation-driven reachability analysis~\cite{fan2016automatic,DMV:EMSOFT2013,DuggiralaMV:2015c2e2}. For a given mode $\ell \in \L$, finitely many simulations of the trajectories of $\ell$ and a {\em discrepancy function\/} bounding the sensitivity of these trajectories are used to over-approximate the reachable states. For the key step of computing discrepancy for modes that are now represented by black-boxes, we introduce a probabilistic algorithm that learns the parameters of exponential discrepancy functions from simulation data. The algorithm transforms the problem of learning the parameters of the discrepancy function into the problem of learning a linear separator for a set of points in $\reals^2$ that are obtained by transforming the simulation data. A classical result in PAC learning ensures that any such discrepancy function works with high probability for all trajectories. We performed dozens of experiments with a variety of black-box simulators and observed that 15-20 simulation traces typically give a discrepancy function that works for nearly 100\% of all simulations. The reachability algorithm for the hybrid system proceeds along the vertices of the transition graph in a topologically sorted order, and this gives a sound bounded-time verification algorithm, provided the learned discrepancy function is correct. \vspace{-10pt} \paragraph{Reasoning} The white-box transition graphs in our modeling identify the switching sequences under which the black-box modules are exercised.
Complex systems have involved transition graphs that describe subtle sequences in which the black-box modules are executed. To enable the analysis of such systems, we identify reasoning principles that establish the safety of a system under a complex transition graph based on its safety under a simpler transition graph. We define a notion of forward simulation between transition graphs that provides a sufficient condition for when one transition graph ``subsumes'' another --- if $G_1$ is simulated by $G_2$, then the reachable states of a hybrid system under $G_1$ are contained in the reachable states of the system under $G_2$. Thus the safety of the system under $G_2$ implies its safety under $G_1$. Moreover, we give a simple polynomial-time algorithm that can check whether one transition graph is simulated by another. Our transition graphs are acyclic, with transitions having bounded switching times. Therefore, the executions of the systems we analyze are over a bounded time and have a bounded number of mode switches. An important question to investigate is whether establishing safety for bounded time enables one to conclude the safety of the system for an arbitrarily long time and for arbitrarily many mode switches. With this in mind, we define a notion of sequential composition of transition graphs $G_1$ and $G_2$, such that the switching sequences allowed by the composed graph are the concatenations of the sequences allowed by $G_1$ with those allowed by $G_2$. Then we prove a sufficient condition on a transition graph $G$ such that the safety of a system under $G$ implies the safety of the system under arbitrarily many compositions of $G$ with itself. \vspace{-10pt} \paragraph{Automotive applications} We have implemented these ideas to create the {\bf D}ata-d{\bf r}iven S{\bf y}stem for {\bf V}erification and {\bf R}easoning ({\sc DryVR}). The tool is able to automatically verify, or find counter-examples in, all the benchmark scenarios mentioned above within a few minutes.
Reachability analysis combined with compositional reasoning enabled us to infer the safety of systems with respect to arbitrary transitions and durations. \vspace{-10pt} \paragraph{Related work} Most automated verification tools for hybrid systems rely on analyzing a white-box mathematical model of the system. They include tools based on decidability results~\cite{doty95,hh95,ck99,adm02,Dutertre04timedsystems,fre05}, semi-decision procedures that over-approximate the reachable set of states through symbolic computation~\cite{gm99,mt00,bt00,kv00,st00,Frehse:cav11,ariadne,flow}, tools using abstractions~\cite{adi03,efhkost03-1,efhkost03-2,Henzinger_refinement,seg07,07-HSolver,dkl07,JKWB:HSCC:2007,HareFMSD,14-sttt,14-AGAR,15-PLC-CEGAR,HareTACAS16}, and tools using approximate decision procedures for fragments of first-order logic~\cite{dreach}. More recently, there has been interest in developing simulation-based verification tools~\cite{Julius:2007:RTG:1760804.1760833,SensitivityDM,donze2010breach,Kanade09,staliro-tool-paper,Fainekos:2009:RTL:1609208.1609591,DRJqest13,DuggiralaMV:2015c2e2}. Even though these are simulation-based tools, they often rely on being able to analyze a mathematical model of the system. The types of analysis they rely on include instrumentation to extract a symbolic trace from a simulation~\cite{Kanade09}, stochastic optimization to search for counter-examples~\cite{staliro-tool-paper,Fainekos:2009:RTL:1609208.1609591}, and sensitivity analysis~\cite{SensitivityDM,donze2010breach,DRJqest13,DuggiralaMV:2015c2e2}. Some of the simulation-based techniques only work for systems with linear dynamics~\cite{Sim2Veri,ILABS}. Recent work on the APEX tool~\cite{o2016apex} for verifying trajectory planning and tracking in autonomous vehicles is related to our approach in that it targets the same application domain.
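The discrepancy-learning step described above reduces to fitting a linear upper bound over points $(t, \ln(\mbox{distance ratio}))$ computed from pairs of simulations. The Python sketch below is a simplified illustration only: it replaces the paper's PAC-learning-based linear-separator step with a plain grid search over candidate slopes, and every name in it (\texttt{learn\_discrepancy}, \texttt{trace\_pairs}) is ours, not part of {\sc DryVR}'s actual API.

```python
import math

def learn_discrepancy(trace_pairs, gammas=None):
    """Fit K, gamma so that |x1(t) - x2(t)| <= K * exp(gamma * t) * |x1(0) - x2(0)|
    holds at every sampled point of every given pair of scalar traces.

    trace_pairs: list of (times, xs1, xs2) with distinct initial states.
    Returns (K, gamma) minimizing the bound at the final sample time.
    """
    # Transform samples into points (t, log of the distance ratio),
    # mirroring the reduction to a linear-separation problem in R^2.
    pts = []
    for times, xs1, xs2 in trace_pairs:
        d0 = abs(xs1[0] - xs2[0])
        for t, a, b in zip(times, xs1, xs2):
            pts.append((t, math.log(abs(a - b) / d0)))
    horizon = max(t for t, _ in pts)
    if gammas is None:
        gammas = [g / 10.0 for g in range(-30, 31)]  # candidate slopes
    best = None
    for g in gammas:
        ln_k = max(v - g * t for t, v in pts)  # smallest feasible intercept
        bound_at_horizon = ln_k + g * horizon  # log of the bound at the horizon
        if best is None or bound_at_horizon < best[0]:
            best = (bound_at_horizon, math.exp(ln_k), g)
    return best[1], best[2]
```

By construction, the returned $(K, \gamma)$ makes the exponential bound valid on all the sampled points; how well it generalizes to unseen trajectories is exactly what the PAC-learning argument in the text addresses.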
\section{Modeling/semantic framework} \label{sec:prelims} We introduce a powertrain control system from~\cite{jin2014powertrain} as a running example to illustrate the elements of our hybrid system modeling framework. \subsection{Powertrain control system} \label{ex:powertrain} This benchmark ($\auto{Powertrn}$) models a highly nonlinear engine control system. The relevant state variables of the model are the intake manifold pressure ($p$), the air-fuel ratio ($\lambda$), the estimated manifold pressure ($pe$), and the integrator state ($i$). The overall system can be in one of four modes: $\Lmode{startup}$, $\Lmode{normal}$, $\Lmode{powerup}$, and $\Lmode{sensorfail}$. A {Simulink\textsuperscript{\textregistered}}\ diagram describes the continuous evolution of the above variables. In this paper, we mainly work with the {\em Hybrid I/O Automaton Model} in the suite of powertrain control models. The {Simulink\textsuperscript{\textregistered}}\ model consists of continuous variables describing the dynamics of the powertrain plant and sample-and-hold variables for the controller. One of the key requirements to verify is that the engine maintains the air-fuel ratio within a desired range in different modes for a given set of driver behaviors. This requirement has implications for fuel economy and emissions. For testing purposes, the control system designers work with sets of driver profiles that essentially define families of switching signals across the different modes. Previous verification results on a simplified version of the powertrain control model have been reported in~\cite{DFMV:CAV2015, FDMV:ARCH2015}. \vspace{-10pt} \subsection{Transition graphs} \label{ssec:modes} We will use $\L$ to denote a finite set of {\em modes\/} or locations of the system under consideration. The discrete behavior or mode transitions are specified by what we call a transition graph over $\L$.
\begin{definition} A {\em transition graph\/} is a labeled, directed acyclic graph $G = \langle \L, \V, \E, {\sc vlab}, {\sc elab} \rangle$, where \begin{inparaenum}[(a)] \item $\L$ is the set of vertex labels, also called the set of {\em modes\/}, \item $\V$ is the set of vertices, \item $\E\subseteq \V \times \V$ is the set of edges, \item ${\sc vlab}: \V \rightarrow \L$ is a vertex labeling function that labels each vertex with a mode, and \item ${\sc elab}: \E\rightarrow \nnreals \times \nnreals$ is an edge labeling function that labels each edge with a nonempty, closed, bounded interval defined by a pair of non-negative reals. \end{inparaenum} \end{definition} Since $G$ is a DAG, there is a nonempty subset $\V_{\sf init} \subseteq \V$ of vertices with no incoming edges and a nonempty subset $\V_{\sf term} \subseteq \V$ of vertices with no outgoing edges. We define the set of initial locations of $G$ as $\L_{\sf init} = \{ \ell \ | \ \exists \ v \in \V_{\sf init}, {\sc vlab}(v) = \ell \}$. A (maximal) {\em path\/} of the graph $G$ is a sequence $\pi = v_1, t_1, v_2, t_2, \ldots, v_k$ such that \begin{inparaenum}[(a)] \item $v_1 \in \V_{\sf init}$, \item $v_k \in \V_{\sf term}$, and \item for each subsequence $v_i, t_i, v_{i+1}$, we have $(v_i, v_{i+1}) \in \E$ and $t_i \in {\sc elab}((v_i,v_{i+1}))$. \end{inparaenum} $\paths{G}$ is the set of all possible paths of $G$. For a given path $\pi = v_1, t_1, v_2, t_2, \ldots,$ $v_k$, its {\em trace\/}, denoted by ${\sc vlab}(\pi)$, is the sequence ${\sc vlab}(v_1), t_1, {\sc vlab}(v_2), t_2, \ldots,$ ${\sc vlab}(v_k)$. Since $G$ is a DAG, a trace of $G$ can visit the same mode only finitely many times. ${\trace{}}{G}$ is the set of all traces of $G$. An example transition graph for the $\auto{Powertrn}$ system of Section~\ref{ex:powertrain} is shown in Figure~\ref{fig:power_graph}. The set of vertices is $\V = \{0,\ldots, 4\}$, and the ${\sc vlab}$'s and ${\sc elab}$'s appear adjacent to the vertices and edges.
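A transition graph of this kind is straightforward to represent concretely. The Python sketch below builds a small hypothetical three-mode graph (not the one in Figure~\ref{fig:power_graph}) and enumerates its traces, i.e., mode sequences interleaved with the switching-time intervals from ${\sc elab}$; the representation is our own illustration, not {\sc DryVR}'s internal one.

```python
# Minimal transition-graph representation: a DAG with mode-labeled
# vertices (vlab) and interval-labeled edges (elab), as in the definition.
graph = {
    "vertices": [0, 1, 2],
    "edges": {(0, 1): (1.0, 2.0), (1, 2): (0.5, 1.5)},  # elab: time intervals
    "vlab": {0: "startup", 1: "normal", 2: "powerup"},   # hypothetical modes
}

def traces(graph):
    """Yield every trace: alternating modes and switching-time intervals,
    from an initial vertex (no incoming edge) to a terminal one (no outgoing)."""
    edges = graph["edges"]
    succ = {}
    for (u, v) in edges:
        succ.setdefault(u, []).append(v)
    has_in = {v for (_, v) in edges}
    inits = [v for v in graph["vertices"] if v not in has_in]

    def walk(v, acc):
        if v not in succ:                       # terminal vertex: emit trace
            yield acc + [graph["vlab"][v]]
            return
        for u in succ[v]:
            yield from walk(u, acc + [graph["vlab"][v], edges[(v, u)]])

    for v in inits:
        yield from walk(v, [])
```

For the graph above, `list(traces(graph))` yields the single trace `['startup', (1.0, 2.0), 'normal', (0.5, 1.5), 'powerup']`; since the graph is a DAG, the enumeration always terminates.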
\begin{figure}[h] \includegraphics[scale=0.4]{power_graph1.pdf} \vspace{-10pt} \caption{\small A sample transition graph for the $\auto{Powertrn}$ system.} \label{fig:power_graph} \end{figure} \vspace{-8pt} \subsubsection{Trace containment} We will develop reasoning techniques based on reachability, abstraction, composition, and substitutivity. To this end, we will need to establish containment relations between the behaviors of systems. Here we define containment of transition graph traces. Consider transition graphs $G_1$ and $G_2$ with modes $\L_1$ and $\L_2$, and a mode map ${\sc lmap}: \L_1 \rightarrow \L_2$. For a trace $\sigma = \ell_1, t_1, \ell_2, t_2, \ldots, \ell_k \in {\trace{}}{G_1}$, simplifying notation, we denote by ${\sc lmap}(\sigma)$ the sequence ${\sc lmap}(\ell_1), t_1, {\sc lmap}(\ell_2), t_2,$ $\ldots, {\sc lmap}(\ell_k)$. We write $G_1 \preceq_{{\sc lmap}} G_2$ iff for every trace $\sigma \in {\trace{}}{G_1}$, there is a trace $\sigma' \in {\trace{}}{G_2}$ such that ${\sc lmap}(\sigma)$ is a prefix of $\sigma'$. \begin{definition} Given graphs $G_1, G_2$ and a mode map ${\sc lmap}: \L_1 \rightarrow \L_2$, a relation $R \subseteq \V_1 \times \V_2$ is a {\em forward simulation relation from $G_1$ to $G_2$\/} iff \begin{enumerate}[(a)] \itemsep0em \item for each $v \in \V_{1 {\sf init}}$, there is $u \in \V_{2 {\sf init}}$ such that $(v,u) \in R$, \item for every $(v,u) \in R$, ${\sc lmap}({\sc vlab}_1(v)) = {\sc vlab}_2(u)$, and \item for every $(v,v') \in \E_1$ and $(v,u)\in R$, there exists a finite set of vertices $u_1, \ldots, u_k$ with each $(u,u_j) \in \E_2$ such that: \begin{inparaenum}[(i)] \item for each $u_j$, $(v',u_j) \in R$, and \item ${\sc elab}_1((v,v')) \subseteq \cup_j {\sc elab}_2((u,u_j))$. \end{inparaenum} \end{enumerate} \end{definition} \begin{proposition} \label{prop:graphsim} If there exists a forward simulation relation from $G_1$ to $G_2$ with ${\sc lmap}$, then $G_1 \preceq_{{\sc lmap}} G_2$.
\end{proposition} \vspace{-10pt} \subsubsection{Sequential composition of graphs} We will find it convenient to define the \emph{sequential composition} of two transition graphs. Intuitively, the traces of the composition of $G_1$ and $G_2$ will be those that can be obtained by concatenating a trace of $G_1$ with a trace of $G_2$. To keep the definitions and notation simple, we will assume (when taking sequential compositions) that $|\V_{\sf init}| = |\V_{\sf term}| = 1$; this is true of the examples we analyze, and it is easy to generalize to the case when it does not hold. Under this assumption, the unique vertex in $\V_{\sf init}$ will be denoted as $v_{\sf init}$ and the unique vertex in $\V_{\sf term}$ will be denoted as $v_{\sf term}$. \begin{definition} \label{def:seq-comp} Given graphs $G_1 = \langle \L, \V_1, \E_1, {\sc vlab}_1, {\sc elab}_1 \rangle$ and $G_2 = \langle \L, \V_2, \E_2,$ ${\sc vlab}_2, {\sc elab}_2 \rangle$ such that ${\sc vlab}_1(v_{1 {\sf term}}) = {\sc vlab}_2(v_{2 {\sf init}})$, the \emph{sequential composition} of $G_1$ and $G_2$ is the graph $G_1\seqcomp G_2 = \langle \L, \V, \E, {\sc vlab}, {\sc elab} \rangle$ where \begin{enumerate}[(a)] \itemsep0em \item $\V = (\V_1 \cup \V_2) \setminus \{v_{2 {\sf init}}\}$, \item $\E = \E_1 \cup \{(v_{1 {\sf term}},u)\: |\: (v_{2 {\sf init}},u) \in \E_2\} \cup \{(v,u) \in \E_2\: |\: v \neq v_{2 {\sf init}}\}$, \item ${\sc vlab}(v) = {\sc vlab}_1(v)$ if $v \in \V_1$ and ${\sc vlab}(v) = {\sc vlab}_2(v)$ if $v \in \V_2$, \item For an edge $(v,u) \in \E$, ${\sc elab}((v,u))$ equals \begin{inparaenum}[(i)] \item ${\sc elab}_1((v,u))$, if $u \in \V_1$, \item ${\sc elab}_2((v_{2 {\sf init}},u))$, if $v = v_{1 {\sf term}}$, \item ${\sc elab}_2((v,u))$, otherwise. \end{inparaenum} \end{enumerate} \end{definition} Given our definition of trace containment between graphs, we can prove a very simple property about sequential composition.
\begin{proposition} \label{prop:seqcomp-containment} Let $G_1$ and $G_2$ be two graphs with modes $\L$ that can be sequentially composed. Then $G_1 \preceq_{{\sf id}} G_1\seqcomp G_2$, where ${\sf id}$ is the identity map on $\L$. \end{proposition} The proposition follows from the fact that every path of $G_1$ is a prefix of a path of $G_1\seqcomp G_2$. Later, in Section \ref{sec: sub_exp}, we will see examples of sequential composition. \subsection{Trajectories} The evolution of the system's continuous state variables is formally described by continuous functions of time called {\em trajectories\/}. Let $n$ be the number of continuous variables in the underlying hybrid model. A {\em trajectory\/} for an $n$-dimensional system is a continuous function of the form $\tau: [0,T] \rightarrow \reals^n$, where $T \geq 0$. The interval $[0,T]$ is called the {\em domain\/} of $\tau$ and is denoted by $\tau.{\mathit dom}$. The first state $\tau(0)$ is denoted by $\tau.\mathop{\mathsf {fstate}}$, the last state is $\tau.\mathop{\mathsf {lstate}} = \tau(T)$, and the duration is $\tau.\mathop{\mathsf {ltime}} = T$. For a hybrid system with mode set $\L$, each trajectory is labeled by a mode in $\L$. A {\em trajectory labeled by $\L$\/} is a pair $\langle \tau, \ell \rangle$ where $\tau$ is a trajectory and $\ell \in \L$. A {\em $T_1$-prefix\/} of $\langle \tau, \ell\rangle$, for any $T_1 \in \tau.{\mathit dom}$, is the labeled trajectory $\langle \tau_1, \ell\rangle$ with $\tau_1:[0,T_1] \rightarrow \reals^n$ such that for all $t \in [0, T_1]$, $\tau_1(t) = \tau(t)$. A set of labeled trajectories $\TL$ is {\em prefix-closed\/} if for any $\langle \tau,\ell \rangle \in \TL$, all of its prefixes are also in $\TL$. A set $\TL$ is {\em deterministic\/} if for any pair $\langle \tau_1, \ell_1 \rangle, \langle \tau_2,\ell_2 \rangle \in \TL$, if $\tau_1.\mathop{\mathsf {fstate}} = \tau_2.\mathop{\mathsf {fstate}}$ and $\ell_1 = \ell_2$, then one is a prefix of the other.
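On sampled data, the prefix and determinism conditions above can be checked directly. A small Python sketch, under the assumption (ours, not {\sc DryVR}'s) that a trajectory is stored as a list of $(t, x)$ sample pairs with matching sample times on the shared domain:

```python
def is_prefix(tau1, tau2, tol=1e-9):
    """True if sampled trajectory tau1 is a prefix of tau2.
    Each trajectory is a list of (time, state) pairs."""
    if len(tau1) > len(tau2):
        return False
    return all(abs(t1 - t2) <= tol and abs(x1 - x2) <= tol
               for (t1, x1), (t2, x2) in zip(tau1, tau2))

def violates_determinism(ltrajs, tol=1e-9):
    """Check a set of labeled trajectories [(tau, mode), ...]: any two with
    the same mode and the same first state must be prefix-comparable."""
    for i, (t1, l1) in enumerate(ltrajs):
        for (t2, l2) in ltrajs[i + 1:]:
            if l1 == l2 and abs(t1[0][1] - t2[0][1]) <= tol:
                if not (is_prefix(t1, t2) or is_prefix(t2, t1)):
                    return True
    return False
```

Two samplings of the same trajectory over nested domains pass the check, while two trajectories that start at the same state in the same mode but later diverge are flagged as a determinism violation.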
A deterministic, prefix-closed set of labeled trajectories $\TL$ describes the behavior of the continuous variables in modes $\L$. We denote by $\TL_{\sf init,\ell}$ $=\{ \tau.\mathop{\mathsf {fstate}} \ | \ \langle \tau,\ell \rangle \in \TL \}$ the set of initial states of trajectories in mode $\ell$. Without loss of generality, we assume that $\TL_{{\sf init},\ell}$ is a connected, compact subset of $\reals^n$. We assume that trajectories are defined for unbounded time, that is, for each $\ell \in \L$, $T >0$, and $x \in \TL_{{\sf init},\ell}$, there exists a $\langle \tau, \ell \rangle \in \TL$ with $\tau.\mathop{\mathsf {fstate}} = x$ and $\tau.\mathop{\mathsf {ltime}} = T$. In the control theory and hybrid systems literature, trajectories are assumed to be generated from models like ordinary differential equations (ODEs) and differential algebraic equations (DAEs). Here, we avoid over-reliance on the models generating the trajectories and on closed-form expressions. Instead, {{\sc DryVR}} works with sampled data of $\tau(\cdot)$ generated from simulations or tests. \begin{definition} \label{def:sims} A {\em simulator\/} for a (deterministic and prefix-closed) set $\TL$ of trajectories labeled by $\L$ is a function (or a program) ${\sc sim}$ that takes as input a mode label $\ell \in \L$, an initial state $x_0 \in \TL_{\sf init,\ell}$, and a finite sequence of time points $t_1, \ldots, t_k$, and returns a sequence of states ${\sc sim}(x_0,\ell,t_1), \ldots, {\sc sim}(x_0,\ell, t_k)$ such that there exists $\langle\tau,\ell\rangle \in \TL$ with $\tau.\mathop{\mathsf {fstate}} = x_0$ and, for each $i\in \{1,\ldots, k\}$, ${\sc sim}(x_0,\ell,t_i) = \tau(t_i)$. \end{definition} The trajectories of the $\auto{Powertrn}$ system are described by a {Simulink\textsuperscript{\textregistered}}\ diagram.
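Definition~\ref{def:sims} only requires a sampling oracle, which any simulation code can provide. The following toy stand-in (closed-form scalar linear dynamics per mode, with hypothetical mode names; emphatically not the $\auto{Powertrn}$ Simulink model) illustrates the interface:

```python
import math

# Per-mode dynamics x' = a * x, solved in closed form; purely illustrative.
MODE_RATE = {"normal": -0.5, "powerup": 0.2}

def sim(x0, mode, times):
    """Return the state of the mode's trajectory from x0 at each query time,
    mimicking the interface sim(x0, mode, t1), ..., sim(x0, mode, tk)."""
    a = MODE_RATE[mode]
    return [x0 * math.exp(a * t) for t in times]
```

A real simulator would instead wrap an ODE solver or a Simulink run behind the same signature; as noted in the text, validated simulations can be substituted without compromising soundness.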
The diagram has several switch blocks and input signals that can be set appropriately to generate simulation data using the {Simulink\textsuperscript{\textregistered}}\ ODE solver. For simplicity, we assume that the simulations are perfect (as in the last equality of Definition~\ref{def:sims}). The formal guarantees of soundness of {{\sc DryVR}} are not compromised if we use \emph{validated simulations} instead. \vspace{-10pt} \paragraph{Trajectory containment} Consider sets of trajectories, $\TL_1$ labeled by $\L_1$ and $\TL_2$ labeled by $\L_2$, and a mode map ${\sc lmap}: \L_1 \rightarrow \L_2$. For a labeled trajectory $\langle \tau,\ell \rangle \in \TL_1$, we denote by ${\sc lmap}(\langle \tau,\ell \rangle)$ the labeled trajectory $\langle \tau,{\sc lmap}(\ell) \rangle$. We write $\TL_1 \preceq_{{\sc lmap}} \TL_2$ iff for every labeled trajectory $\langle \tau,\ell \rangle \in \TL_1$, ${\sc lmap}(\langle \tau,\ell \rangle) \in \TL_2$. \subsection{Hybrid systems} \begin{definition} An $n$-dimensional {\em hybrid system\/} $\H$ is a 4-tuple $\langle \L, \Theta, G, \TL \rangle$, where \begin{inparaenum}[(a)] \item $\L$ is a finite set of modes, \item $\Theta \subseteq \reals^n$ is a compact set of initial states, \item $G = \langle \L, \V, \E, {\sc vlab}, {\sc elab} \rangle$ is a transition graph with set of modes $\L$, and \item $\TL$ is a set of deterministic, prefix-closed trajectories labeled by $\L$. \end{inparaenum} \end{definition} A {\em state\/} of the hybrid system $\H$ is a point in $\reals^n \times \L$. The set of initial states is $\Theta \times \L_{\sf init}$. The semantics of $\H$ is given in terms of executions, which are sequences of trajectories consistent with the modes defined by the transition graph.
An {\em execution\/} of $\H$ is a sequence of labeled trajectories $\alpha = \langle \tau_1, \ell_1\rangle, \ldots, \langle \tau_{k-1}, \ell_{k-1}\rangle, \ell_k$ in $\TL$, such that \begin{inparaenum}[(a)] \item $\tau_1.\mathop{\mathsf {fstate}} \in \Theta$ and $\ell_1 \in \L_{\sf init}$, \item the sequence $\pathof{\alpha}$ defined as $\ell_1, \tau_1.\mathop{\mathsf {ltime}}, \ell_2, \ldots, \ell_k$ is in ${\trace{}}{G}$, and \item for each pair of consecutive trajectories, $\tau_{i+1}.\mathop{\mathsf {fstate}} = \tau_i.\mathop{\mathsf {lstate}}$. \end{inparaenum} The set of all executions of $\H$ is denoted by ${\exec{}}{\H}$. The first and last states of an execution $\alpha = \langle \tau_1, \ell_1\rangle, \ldots, \langle \tau_{k-1}, \ell_{k-1}\rangle, \ell_k$ are $\alpha.\mathop{\mathsf {fstate}} = \tau_1.\mathop{\mathsf {fstate}}$ and $\alpha.\mathop{\mathsf {lstate}} = \tau_{k-1}.\mathop{\mathsf {lstate}}$, and the first and last modes are $\alpha.\mathit{fmode} = \ell_1$ and $\alpha.\mathit{lmode} = \ell_k$. A state $\langle x, \ell \rangle$ is {\em reachable\/} at time $t$ and vertex $v$ (of graph $G$) if there exist an execution $\alpha = \langle \tau_1, \ell_1\rangle, \ldots, \langle \tau_{k-1}, \ell_{k-1}\rangle, \ell_k \in {\exec{}}{\H}$, a path $\pi = v_1, t_1, \ldots, v_k$ in $\paths{G}$, $i \in \{1,\ldots, k\}$, and $t' \in \tau_i.{\mathit dom}$ such that ${\sc vlab}(\pi) = \pathof{\alpha}$, $v = v_i$, $\ell = \ell_i$, $x = \tau_i(t')$, and $t = t' + \sum_{j=1}^{i-1} t_j$. The set of reachable states, reachtube, and states reachable at a vertex $v$ are defined as follows.
\begin{itemize}[\null] \itemsep0pt \item $\reachtube{\H} = \{\langle x,\ell,t \rangle\: |\: \mbox{for some } v,\ \langle x, \ell \rangle \mbox{ is reachable at time $t$ and vertex $v$}\}$ \item $\reach{\H} = \{\langle x,\ell \rangle\: |\: \mbox{for some } v,t,\ \langle x, \ell \rangle \mbox{ is reachable at time $t$ and vertex $v$}\}$ \item $\reach{\H}^v = \{\langle x,\ell \rangle\: |\: \mbox{for some } t,\ \langle x, \ell \rangle \mbox{ is reachable at time $t$ and vertex $v$}\}$ \end{itemize} Given a set of (unsafe) states $\U \subseteq \reals^n \times \L$, the {\em bounded safety verification problem\/} is to decide whether $\reach{\H} \cap \U = \emptyset$. In Section~\ref{sec:reachalgo} we will present {\sc DryVR}'s algorithm for solving this decision problem. \begin{remark} Defining paths in a graph $G$ to be maximal (i.e., ending in a vertex in $\V_{{\sf term}}$), coupled with the definition above for executions of $\H$, ensures that for a vertex $v$ with outgoing edges in $G$, the execution must leave the mode ${\sc vlab}(v)$ within time bounded by the largest time in the labels of the outgoing edges from $v$. \end{remark} An instance of the bounded safety verification problem is defined by (a) the hybrid system for $\auto{Powertrn}$, which is itself defined by the transition graph of Figure~\ref{fig:power_graph} and the trajectories defined by the {Simulink\textsuperscript{\textregistered}}\ model, and (b) the unsafe set ($\U_p$): in \Lmode{powerup} mode, $t>4\wedge \lambda \notin [12.4,12.6]$; in \Lmode{normal} mode, $t>4 \wedge \lambda \notin [14.6,14.8]$. Containment between graphs and trajectories can be leveraged to conclude the containment of the sets of reachable states of two hybrid systems. \begin{proposition} \label{prop:hybrid-contain} Consider a pair of hybrid systems $\H_i = \langle \L_i, \Theta_i, G_i, \TL_i \rangle$, $i \in \{1,2\}$, and a mode map ${\sc lmap}: \L_1 \to \L_2$.
If $\Theta_1 \subseteq \Theta_2$, $G_1 \preceq_{{\sc lmap}} G_2$, and $\TL_1 \preceq_{{\sc lmap}} \TL_2$, then $\reach{\H_1} \subseteq \reach{\H_2}$. \end{proposition}
{\\notin}{{$\notin\ $}}1 {\\not\\exists}{{$\not\exists\ $}}1 {\\ne}{{$\ne\ $}}1 {\\to}{{$\to\ $}}1 {\\implies}{{$\implies\ $}}1 {<}{{$<\ $}}1 {>}{{$>\ $}}1 {=}{{$=\ $}}1 {~}{{$\neg\ $}}1 {|}{{$\mid$}}1 {'}{{$^\prime$}}1 {\{\cal A}}{{$\forall\ $}}1 {\\E}{{$\exists\ $}}1 {\\nE}{{$\nexists\ $}}1 {\\/}{{$\vee\,$}}1 {\\vee}{{$\vee\,$}}1 {/\\}{{$\wedge\,$}}1 {\\wedge}{{$\wedge\,$}}1 {=>}{{$\Rightarrow\ $}}1 {->}{{$\rightarrow\ $}}1 {<=}{{$\Leftarrow\ $}}1 {<-}{{$\leftarrow\ $}}1 {~=}{{$\neq\ $}}1 {\\U}{{$\cup\ $}}1 {\{\ensuremath{\cap}}}{{$\cap\ $}}1 {|-}{{$\vdash\ $}}1 {-|}{{$\dashv\ $}}1 {<<}{{$\ll\ $}}2 {>>}{{$\gg\ $}}2 {||}{{$\|$}}1 {[}{{$[$}}1 {]}{{$\,]$}}1 {[[}{{$\langle$}}1 {]]]}{{$]\rangle$}}1 {]]}{{$\rangle$}}1 {<=>}{{$\Leftrightarrow\ $}}2 {<->}{{$\leftrightarrow\ $}}2 {(+)}{{$\oplus\ $}}1 {(-)}{{$\ominus\ $}}1 {_i}{{$_{i}$}}1 {_j}{{$_{j}$}}1 {_{i,j}}{{$_{i,j}$}}3 {_{j,i}}{{$_{j,i}$}}3 {_0}{{$_0$}}1 {_1}{{$_1$}}1 {_2}{{$_2$}}1 {_n}{{$_n$}}1 {_p}{{$_p$}}1 {_k}{{$_n$}}1 {-}{{$\ms{-}$}}1 {@}{{}}0 {\\delta}{{$\delta$}}1 {\\R}{{$\R$}}1 {\\Rplus}{{$\Rplus$}}1 {\{\cal N}}{{${\cal N}$}}1 {\\times}{{$\times\ $}}1 {\\tau}{{$\tau$}}1 {\\alpha}{{$\alpha$}}1 {\\beta}{{$\beta$}}1 {\\gamma}{{$\gamma$}}1 {\\ell}{{$\ell\ $}}1 {--}{{$-\ $}}1 {\\TT}{{\hspace{1.5em}}}3 } \lstdefinelanguage{ioaNums}[]{ioa} { numbers=left, numberstyle=\tiny, stepnumber=2, numbersep=4pt } \lstdefinelanguage{ioaNumsRight}[]{ioa} { numbers=right, numberstyle=\tiny, stepnumber=2, numbersep=4pt } \newcommand{\lstinline[language=IOA]}{\lstinline[language=IOA]} \lstnewenvironment{IOA}% {\lstset{language=IOA}} {} \lstnewenvironment{IOANums}% { \if@firstcolumn \lstset{language=IOA, numbers=left, firstnumber=auto} \else \lstset{language=IOA, numbers=right, firstnumber=auto} \fi } {} \lstnewenvironment{IOANumsRight}% { \lstset{language=IOA, numbers=right, firstnumber=auto} } {} \newcommand{\figioa}[5]{ \begin{figure}[#1] \hrule \mathit{false} {\scriptsize \bf #2} \lstinputlisting[language=ioaLang]{#5} 
\mathit{false} \hrule \mathit{false} \caption{#3} \label{fig: #4} \end{figure} } \newcommand{\linefigioa}[9]{ } \newcommand{\twofigioa}[8]{ \begin{figure}[#1] \hrule \mathit{false} {\scriptsize \bf #2} \\ \two{#5}{#6} { \lstinputlisting[language=ioaLang]{#7} } { \lstinputlisting[language=ioaLang]{#8} } \mathit{false} \hrule \mathit{false} \caption{#3} \label{fig: #4} \end{figure} } \lstdefinelanguage{ioaLang}{% basicstyle=\ttfamily\small, keywordstyle=\rmfamily\bfseries\small, identifierstyle=\small, keywords={assumes,automaton,axioms,backward,bounds,by,case,choose,components,const,d,det,discrete,do,eff,else,elseif,ensuring,enumeration,evolve,fi,fire,follow,for,forward,from,hidden,if,in,% input,initially,internal,invariant,let, local,od,of,output,pre,schedule,signature,so,% simulation,states,variables, tasks, stop,tasks,that,then,to,trajdef,trajectory,trajectories,transitions,tuple,type,union,urgent,uses,when,where,while,yield}, literate= {\\in}{{$\in$}}1 {\\preceq}{{$\preceq$}}1 {\\subset}{{$\subset$}}1 {\\subseteq}{{$\subseteq$}}1 {\\supset}{{$\supset$}}1 {\\supseteq}{{$\supseteq$}}1 {\\rho}{{$\rho$}}1 {\\infty}{{$\infty$}}1 {<}{{$<$}}1 {>}{{$>$}}1 {=}{{$=$}}1 {~}{{$\neg$}}1 {|}{{$\mid$}}1 {'}{{$^\prime$}}1 {\{\cal A}}{{$\forall$}}1 {\\E}{{$\exists$}}1 {\\/}{{$\vee$}}1 {/\\}{{$\wedge$}}1 {=>}{{$\Rightarrow$}}1 {->}{{$\rightarrow$}}1 {<=}{{$\leq$}}1 {>=}{{$\geq$}}1 {~=}{{$\neq$}}1 {\\U}{{$\cup$}}1 {\{\ensuremath{\cap}}}{{$\cap$}}1 {|-}{{$\vdash$}}1 {-|}{{$\dashv$}}1 {<<}{{$\ll$}}2 {>>}{{$\gg$}}2 {||}{{$\|$}}1 {<=>}{{$\Leftrightarrow$}}2 {<->}{{$\leftrightarrow$}}2 {(+)}{{$\oplus$}}1 {(-)}{{$\ominus$}}1 } \lstdefinelanguage{bigIOALang}{% basicstyle=\ttfamily, keywordstyle=\rmfamily\bfseries, identifierstyle=, keywords={assumes,automaton,axioms,backward,by,case,choose,components,const,% d,det,discrete,do,eff,else,elseif,ensuring,enumeration,evolve,fi,for,forward,from,hidden,if,in% input,initially,internal,invariant,local,od,of,output,pre,schedule,signature,so,% 
tasks, simulation,states,stop,tasks,that,then,to,trajdef,trajectories,transitions,tuple,type,union,urgent,uses,when,where,yield}, literate= {\\in}{{$\in$}}1 {\\preceq}{{$\preceq$}}1 {\\subset}{{$\subset$}}1 {\\subseteq}{{$\subseteq$}}1 {\\supset}{{$\supset$}}1 {\\supseteq}{{$\supseteq$}}1 {<}{{$<$}}1 {>}{{$>$}}1 {=}{{$=$}}1 {~}{{$\neg$}}1 {|}{{$\mid$}}1 {'}{{$^\prime$}}1 {\{\cal A}}{{$\forall$}}1 {\\E}{{$\exists$}}1 {\\/}{{$\vee$}}1 {/\\}{{$\wedge$}}1 {=>}{{$\Rightarrow$}}1 {->}{{$\rightarrow$}}1 {<=}{{$\leq$}}1 {>=}{{$\geq$}}1 {~=}{{$\neq$}}1 {\\U}{{$\cup$}}1 {\{\ensuremath{\cap}}}{{$\cap$}}1 {|-}{{$\vdash$}}1 {-|}{{$\dashv$}}1 {<<}{{$\ll$}}2 {>>}{{$\gg$}}2 {||}{{$\|$}}1 {<=>}{{$\Leftrightarrow$}}2 {<->}{{$\leftrightarrow$}}2 {(+)}{{$\oplus$}}1 {(-)}{{$\ominus$}}1 } \lstnewenvironment{BigIOA}% {\lstset{language=bigIOALang,basicstyle=\ttfamily} \csname lst@SetFirstLabel\endcsname} {\csname lst@SaveFirstLabel\endcsname\vspace{-4pt}\noindent} \lstnewenvironment{SmallIOA}% {\lstset{language=ioaLang,basicstyle=\ttfamily\scriptsize} \csname lst@SetFirstLabel\endcsname} {\csname lst@SaveFirstLabel\endcsname\noindent} \newcommand{\relax\ifmmode \mathit true \else \em true \/\fi}{\relax\ifmmode \mathit true \else \em true \/\fi} \newcommand{\relax\ifmmode \mathit false \else \em false \/\fi}{\relax\ifmmode \mathit false \else \em false \/\fi} \newcommand{{\operatorname{\texttt{Real}}}}{{\operatorname{\texttt{Real}}}} \newcommand{{\operatorname{\texttt{Bool}}}}{{\operatorname{\texttt{Bool}}}} \newcommand{{\operatorname{\texttt{Char}}}}{{\operatorname{\texttt{Char}}}} \newcommand{{\operatorname{\texttt{Int}}}}{{\operatorname{\texttt{Int}}}} \newcommand{{\operatorname{\texttt{Nat}}}}{{\operatorname{\texttt{Nat}}}} \newcommand{{\operatorname{\texttt{AugmentedReal}}}}{{\operatorname{\texttt{AugmentedReal}}}} \newcommand{{\operatorname{\texttt{String}}}}{{\operatorname{\texttt{String}}}} \newcommand{{\operatorname{\texttt{Discrete}}}}{{\operatorname{\texttt{Discrete}}}} 
\newcommand{\Rightarrow}{\Rightarrow} \newcommand{\Leftrightarrow}{\Leftrightarrow} \newlength{\bracklen} \newcommand{\sem}[1]{\settowidth{\bracklen}{[} [\hspace{-0.5\bracklen}[#1]\hspace{-0.5\bracklen}]} \newcommand{1.4}{1.4} \renewcommand{\arraystretch}{1.4} \newcommand{\mathcal{S}}{\mathcal{S}} \newcommand{\mathcal{V}}{\mathcal{V}} \newcommand{\mathcal{FV}}{\mathcal{FV}} \newcommand{\mathcal{V}_\mathit{spec}}{\mathcal{V}_\mathit{spec}} \newcommand{\mathcal{V}_\mathit{A}}{\mathcal{V}_\mathit{A}} \newcommand{\mathcal{V}_\mathit{sigs}}{\mathcal{V}_\mathit{sigs}} \newcommand{\mathcal{V}_\mathit{sorts}}{\mathcal{V}_\mathit{sorts}} \newcommand{\mathcal{V}_\mathit{ops}}{\mathcal{V}_\mathit{ops}} \newcommand{\mathit{sort}}{\mathit{sort}} \newcommand{\mathit{sig}}{\mathit{sig}} \newcommand{\mathit{id}}{\mathit{id}} \newcommand{\lsl`->`}{\lsl`->`} \newcommand{\super}[2]{\ensuremath{\mathit{#1}^\mathit{#2}}} \newcommand{\tri}[3]{\ensuremath{\mathit{#1}^\mathit{#2}_\mathit{#3}}} \newcommand{\ensuremath{\mathit{Assumptions}}}{\ensuremath{\mathit{Assumptions}}} \newcommand{\actPred}[3][\pi]{\tri{P}{#2,#1}{#3}} \newcommand{\actualTypes}[1]{\super{actualTypes}{#1}} \newcommand{\actuals}[1]{\super{actuals}{#1}} \newcommand{\autActVars}[2][\pi]{\vars{#2}{},\vars{#2,#1}{}} \newcommand{\bracket}[2]{\mathit{#1}[\mathit{#2}]} \newcommand{\compVars}[1]{\super{compVars}{#1}} \newcommand{\mathit{context}}{\mathit{context}} \newcommand{\ensuring}[2]{\tri{ensuring}{#1}{#2}} \newcommand{\initPred}[1]{\tri{P}{#1}{init}} \newcommand{\initVals}[1]{\super{initVals}{#1}} \newcommand{\initially}[2]{\tri{initially}{#1}{#2}} \newcommand{\invPred}[2]{\tri{Inv}{#1}{#2}} \newcommand{\knownVars}[1]{\super{knownVars}{#1}} \newcommand{\localPostVars}[2]{\tri{localPostVars}{#1}{#2}} \newcommand{\localVars}[2]{\tri{localVars}{#1}{#2}} \newcommand{\locals}[1]{\bracket{Locals}{#1}} \newcommand{\nam}[1]{\rho^{\mathit{#1}}} \newcommand{\otherActPred}[3][\pi]{\otherTri{P}{#2,#1}{#3}} 
\newcommand{\otherParams}[2]{\otherTri{params}{#1}{#2}} \newcommand{\otherSub}[2]{\otherTri{\sigma}{#1}{#2}} \newcommand{\otherTri}[3]{\tri{\smash{#1'}}{#2}{#3}} \newcommand{\otherVars}[2]{\otherTri{vars}{#1}{#2}} \newcommand{\params}[2]{\tri{params}{#1}{#2}} \newcommand{\postVars}[1]{\super{postVars}{#1}} \newcommand{\pre}[2]{\tri{Pre}{#1}{#2}} \newcommand{\prog}[2]{\tri{Prog}{#1}{#2}} \newcommand{\prov}[2]{\tri{Prov}{#1}{#2}} \newcommand{\stateSorts}[1]{\super{stateSorts}{#1}} \newcommand{\stateVars}[1]{\super{stateVars}{#1}} \newcommand{\states}[1]{\bracket{States}{#1}} \newcommand{\sugActPred}[3][\pi]{\tri{P}{#2,#1}{#3,desug}} \newcommand{\sugLocalVars}[2]{\ifthenelse{\equal{}{#2}}% {\tri{localVars}{#1}{desug}}% {\tri{localVars}{#1}{#2,desug}}} \newcommand{\sugVars}[2]{\ifthenelse{\equal{}{#2}}% {\tri{vars}{#1}{desug}}% {\tri{vars}{#1}{#2,desug}}} \newcommand{\cVars}[1]{\super{cVars}{#1}} \newcommand{\dot{\varrho}}{\dot{\varrho}} \newcommand{\map}[2]{\tri{\dot{\varrho}}{#1}{#2}} \newcommand{\types}[1]{\super{types}{#1}} \newcommand{\vars}[2]{\tri{vars}{#1}{#2}} \newcommand{\subActPred}[3][\pi]{\sub{#2,#1}{#3}(\tri{P}{#2,#1}{#3,desug})} \newcommand{\subLocalVars}[2]{\sub{#1}{#2}(\tri{localVars}{#1}{#2,desug})} \newcommand{\hat{A}}{\hat{A}} \newcommand{\renameAction}[1]{\ensuremath{\rho_{#1}(\vars{\hat{A}{#1},\pi}{})}} \newcommand{\renameComponent}[1]{\ensuremath{\rho_{#1}\hat{A}_{#1}}} \newenvironment{Syntax}{\[\begin{subSyntax}}{\end{subSyntax}\]\vspace{-.3in}} \newenvironment{subSyntax}{\begin{array}{l}}{\end{array}} \newcommand{\ms}[1]{\ifmmode% \mathord{\mathcode`-="702D\it #1\mathcode`\-="2200}\else% $\mathord{\mathcode`-="702D\it #1\mathcode`\-="2200}$\fi} \newcommand{\kw}[1]{{\bf #1}} \newcommand{\tcon}[1]{{\tt #1}} \newcommand{\syn}[1]{{\tt #1}} \newcommand{\pvskw}[1]{{\sc #1}} \newcommand{\pvsid}[1]{{\operatorname{\mathit{#1}}}} \def{\cal A}{{\cal A}} \def{\cal B}{{\cal B}} \def\D{{\cal D}} \def\mathit{true}{{\cal T}} \newcommand{{\bf v}}{{\bf v}} 
\newcommand{{\bf w}}{{\bf w}} \newcommand{{\bf x}}{{\bf x}} \newcommand{{\bf y}}{{\bf y}} \newcommand{{\bf a}}{{\bf a}} \newcommand{{\bf b}}{{\bf b}} \newcommand{{\bf c}}{{\bf c}} \newcommand{{\bf q}}{{\bf q}} \newcommand{{\bf s}}{{\bf s}} \newcommand{{\bf m}}{{\bf m}} \newcommand{{\bf u}}{{\bf u}} \newcommand{\arrow}[1]{\mathrel{\stackrel{#1}{\rightarrow}}} \newcommand{\sarrow}[2]{\mathrel{\stackrel{#1}{\rightarrow_{#2}}}} \newcommand{\concat}{\mathbin{^{\frown}}} \newcommand{\mathrel{\diamond}}{\mathrel{\diamond}} \def\CC{{\mathscr C}} \lstdefinelanguage{pvs}{ basicstyle=\tt \scriptsize, keywordstyle=\sc \scriptsize, identifierstyle=\it \scriptsize, emphstyle=\tt \scriptsize, mathescape=true, tabsize=20, sensitive=false, columns=fullflexible, keepspaces=false, flexiblecolumns=true, basewidth=0.05em, moredelim=[il][\rm]{//}, moredelim=[is][\sf \scriptsize]{!}{!}, moredelim=[is][\bf \scriptsize]{*}{*}, keywords={and, begin, cases, const, do, external, else, exists, end, endcases, endif, fi,for, forall, from, hidden, in, if, importing, let, lambda, lemma, measure, not, or, of, return, recursive, stop, theory, that,then, type, types, type+, to, theorem, var, with,while}, emph={nat, setof, sequence, eq, tuple, map, array, enumeration, bool, real, exp, nnreal, posreal}, literate= {(}{{$($}}1 {)}{{$)$}}1 {\\in}{{$\in\ $}}1 {\\mapsto}{{$\rightarrow\ $}}1 {\\preceq}{{$\preceq\ $}}1 {\\subset}{{$\subset\ $}}1 {\\subseteq}{{$\subseteq\ $}}1 {\\supset}{{$\supset\ $}}1 {\\supseteq}{{$\supseteq\ $}}1 {\\forall}{{$\forall$}}1 {\\le}{{$\le\ $}}1 {\\ge}{{$\ge\ $}}1 {\\gets}{{$\gets\ $}}1 {\\cup}{{$\cup\ $}}1 {\\cap}{{$\cap\ $}}1 {\\langle}{{$\langle$}}1 {\\rangle}{{$\rangle$}}1 {\\exists}{{$\exists\ $}}1 {\\bot}{{$\bot$}}1 {\\rip}{{$\rip$}}1 {\\emptyset}{{$\emptyset$}}1 {\\notin}{{$\notin\ $}}1 {\\not\\exists}{{$\not\exists\ $}}1 {\\ne}{{$\ne\ $}}1 {\\to}{{$\to\ $}}1 {\\implies}{{$\implies\ $}}1 {<}{{$<\ $}}1 {>}{{$>\ $}}1 {=}{{$=\ $}}1 {~}{{$\neg\ $}}1 {|}{{$\mid$}}1 
{'}{{$^\prime$}}1 {\{\cal A}}{{$\forall\ $}}1 {\\E}{{$\exists\ $}}1 {\\/}{{$\vee\,$}}1 {\\vee}{{$\vee\,$}}1 {/\\}{{$\wedge\,$}}1 {\\wedge}{{$\wedge\,$}}1 {->}{{$\rightarrow\ $}}1 {=>}{{$\Rightarrow\ $}}1 {->}{{$\rightarrow\ $}}1 {<=}{{$\Leftarrow\ $}}1 {<-}{{$\leftarrow\ $}}1 {~=}{{$\neq\ $}}1 {\\U}{{$\cup\ $}}1 {\{\ensuremath{\cap}}}{{$\cap\ $}}1 {|-}{{$\vdash\ $}}1 {-|}{{$\dashv\ $}}1 {<<}{{$\ll\ $}}2 {>>}{{$\gg\ $}}2 {||}{{$\|$}}1 {[}{{$[$}}1 {]}{{$\,]$}}1 {[[}{{$\langle$}}1 {]]]}{{$]\rangle$}}1 {]]}{{$\rangle$}}1 {<=>}{{$\Leftrightarrow\ $}}2 {<->}{{$\leftrightarrow\ $}}2 {(+)}{{$\oplus\ $}}1 {(-)}{{$\ominus\ $}}1 {_i}{{$_{i}$}}1 {_j}{{$_{j}$}}1 {_{i,j}}{{$_{i,j}$}}3 {_{j,i}}{{$_{j,i}$}}3 {_0}{{$_0$}}1 {_1}{{$_1$}}1 {_2}{{$_2$}}1 {_n}{{$_n$}}1 {_p}{{$_p$}}1 {_k}{{$_n$}}1 {-}{{$\ms{-}$}}1 {@}{{}}0 {\\delta}{{$\delta$}}1 {\\R}{{$\R$}}1 {\\Rplus}{{$\Rplus$}}1 {\{\cal N}}{{${\cal N}$}}1 {\\times}{{$\times\ $}}1 {\\tau}{{$\tau$}}1 {\\alpha}{{$\alpha$}}1 {\\beta}{{$\beta$}}1 {\\gamma}{{$\gamma$}}1 {\\ell}{{$\ell\ $}}1 {--}{{$-\ $}}1 {\\TT}{{\hspace{1.5em}}}3 } \lstdefinelanguage{BigPVS}{ basicstyle=\tt, keywordstyle=\sc, identifierstyle=\it, emphstyle=\tt , mathescape=true, tabsize=20, sensitive=false, columns=fullflexible, keepspaces=false, flexiblecolumns=true, basewidth=0.05em, moredelim=[il][\rm]{//}, moredelim=[is][\sf \scriptsize]{!}{!}, moredelim=[is][\bf \scriptsize]{*}{*}, keywords={and, begin, cases, const, do, datatype, external, else, exists, end, endif, endcases, fi,for, forall, from, hidden, in, if, importing, let, lambda, lemma, measure, not, or, of, return, recursive, stop, theory, that,then, type, types, type+, to, theorem, var, with,while}, emph={nat, setof, sequence, eq, tuple, map, array, first, rest, add, enumeration, bool, real, posreal, nnreal}, literate= {(}{{$($}}1 {)}{{$)$}}1 {\\in}{{$\in\ $}}1 {\\mapsto}{{$\rightarrow\ $}}1 {\\preceq}{{$\preceq\ $}}1 {\\subset}{{$\subset\ $}}1 {\\subseteq}{{$\subseteq\ $}}1 {\\supset}{{$\supset\ $}}1 
{\\supseteq}{{$\supseteq\ $}}1 {\\forall}{{$\forall$}}1 {\\le}{{$\le\ $}}1 {\\ge}{{$\ge\ $}}1 {\\gets}{{$\gets\ $}}1 {\\cup}{{$\cup\ $}}1 {\\cap}{{$\cap\ $}}1 {\\langle}{{$\langle$}}1 {\\rangle}{{$\rangle$}}1 {\\exists}{{$\exists\ $}}1 {\\bot}{{$\bot$}}1 {\\rip}{{$\rip$}}1 {\\emptyset}{{$\emptyset$}}1 {\\notin}{{$\notin\ $}}1 {\\not\\exists}{{$\not\exists\ $}}1 {\\ne}{{$\ne\ $}}1 {\\to}{{$\to\ $}}1 {\\implies}{{$\implies\ $}}1 {<}{{$<\ $}}1 {>}{{$>\ $}}1 {=}{{$=\ $}}1 {~}{{$\neg\ $}}1 {|}{{$\mid$}}1 {'}{{$^\prime$}}1 {\{\cal A}}{{$\forall\ $}}1 {\\E}{{$\exists\ $}}1 {\\/}{{$\vee\,$}}1 {\\vee}{{$\vee\,$}}1 {/\\}{{$\wedge\,$}}1 {\\wedge}{{$\wedge\,$}}1 {->}{{$\rightarrow\ $}}1 {=>}{{$\Rightarrow\ $}}1 {->}{{$\rightarrow\ $}}1 {<=}{{$\Leftarrow\ $}}1 {<-}{{$\leftarrow\ $}}1 {~=}{{$\neq\ $}}1 {\\U}{{$\cup\ $}}1 {\{\ensuremath{\cap}}}{{$\cap\ $}}1 {|-}{{$\vdash\ $}}1 {-|}{{$\dashv\ $}}1 {<<}{{$\ll\ $}}2 {>>}{{$\gg\ $}}2 {||}{{$\|$}}1 {[}{{$[$}}1 {]}{{$\,]$}}1 {[[}{{$\langle$}}1 {]]]}{{$]\rangle$}}1 {]]}{{$\rangle$}}1 {<=>}{{$\Leftrightarrow\ $}}2 {<->}{{$\leftrightarrow\ $}}2 {(+)}{{$\oplus\ $}}1 {(-)}{{$\ominus\ $}}1 {_i}{{$_{i}$}}1 {_j}{{$_{j}$}}1 {_{i,j}}{{$_{i,j}$}}3 {_{j,i}}{{$_{j,i}$}}3 {_0}{{$_0$}}1 {_1}{{$_1$}}1 {_2}{{$_2$}}1 {_n}{{$_n$}}1 {_p}{{$_p$}}1 {_k}{{$_n$}}1 {-}{{$\ms{-}$}}1 {@}{{}}0 {\\delta}{{$\delta$}}1 {\\R}{{$\R$}}1 {\\Rplus}{{$\Rplus$}}1 {\{\cal N}}{{${\cal N}$}}1 {\\times}{{$\times\ $}}1 {\\tau}{{$\tau$}}1 {\\alpha}{{$\alpha$}}1 {\\beta}{{$\beta$}}1 {\\gamma}{{$\gamma$}}1 {\\ell}{{$\ell\ $}}1 {--}{{$-\ $}}1 {\\TT}{{\hspace{1.5em}}}3 } \lstdefinelanguage{pvsNums}[]{pvs} { numbers=left, numberstyle=\tiny, stepnumber=2, numbersep=4pt } \lstdefinelanguage{pvsNumsRight}[]{pvs} { numbers=right, numberstyle=\tiny, stepnumber=2, numbersep=4pt } \newcommand{\lstinline[language=PVS]}{\lstinline[language=PVS]} \lstnewenvironment{BigPVS}% {\lstset{language=BigPVS}} {} \lstnewenvironment{PVSNums}% { \if@firstcolumn \lstset{language=pvs, numbers=left, 
firstnumber=auto} \else \lstset{language=pvs, numbers=right, firstnumber=auto} \fi } {} \lstnewenvironment{PVSNumsRight}% { \lstset{language=pvs, numbers=right, firstnumber=auto} } {} \newcommand{\figpvs}[5]{ \begin{figure}[#1] \hrule \mathit{false} {\scriptsize \bf #2} \lstinputlisting[language=pvs]{#5} \mathit{false} \hrule \mathit{false} \caption{#3} \label{fig: #4} \end{figure} } \newcommand{\linefigpvs}[9]{ } \newcommand{\twofigpvs}[8]{ \begin{figure}[#1] \hrule \mathit{false} {\scriptsize \bf #2} \\ \two{#5}{#6} { \lstinputlisting[language=pvsLang]{#7} } { \lstinputlisting[language=pvsLang]{#8} } \mathit{false} \hrule \mathit{false} \caption{#3} \label{fig: #4} \end{figure} } \lstdefinelanguage{pvsproof}{ basicstyle=\tt \scriptsize, mathescape=true, tabsize=4, sensitive=false, columns=fullflexible, keepspaces=false, flexiblecolumns=true, basewidth=0.05em, } \lstdefinelanguage{pseudo}{ basicstyle=\scriptsize, keywordstyle=\bf \scriptsize, identifierstyle=\it \scriptsize, emphstyle=\tt \scriptsize, mathescape=true, tabsize=20, sensitive=false, columns=fullflexible, keepspaces=false, flexiblecolumns=true, basewidth=0.05em, moredelim=[il][\rm]{//}, moredelim=[is][\sf \scriptsize]{!}{!}, moredelim=[is][\bf \scriptsize]{*}{*}, keywords={automaton,and, choose,const,continue, components, discrete, do, eff, external,else, elseif, evolve, end, fi,for, forward, from, hidden, in,input,internal,if,invariant, initially, imports, let, or, output, operators, od, of, pre, return, round, such,satisfies, stop, signature, simulation, trajectories,trajdef, transitions, that,then, type, types, to, tasks, upon, variables, vocabulary, wait, when,where, with,while}, emph={set, seq, tuple, map, array, enumeration}, literate= {(}{{$($}}1 {)}{{$)$}}1 {\\in}{{$\in\ $}}1 {\\preceq}{{$\preceq\ $}}1 {\\subset}{{$\subset\ $}}1 {\\subseteq}{{$\subseteq\ $}}1 {\\supset}{{$\supset\ $}}1 {\\supseteq}{{$\supseteq\ $}}1 {\\forall}{{$\forall$}}1 {\\le}{{$\le\ $}}1 {\\ge}{{$\ge\ $}}1 
{\\gets}{{$\gets\ $}}1 {\\cup}{{$\cup\ $}}1 {\\cap}{{$\cap\ $}}1 {\\langle}{{$\langle$}}1 {\\rangle}{{$\rangle$}}1 {\\exists}{{$\exists\ $}}1 {\\bot}{{$\bot$}}1 {\\rip}{{$\rip$}}1 {\\emptyset}{{$\emptyset$}}1 {\\notin}{{$\notin\ $}}1 {\\not\\exists}{{$\not\exists\ $}}1 {\\ne}{{$\ne\ $}}1 {\\to}{{$\to\ $}}1 {\\implies}{{$\implies\ $}}1 {<}{{$<\ $}}1 {>}{{$>\ $}}1 {=}{{$=\ $}}1 {~}{{$\neg\ $}}1 {|}{{$\mid$}}1 {'}{{$^\prime$}}1 {\{\cal A}}{{$\forall\ $}}1 {\\E}{{$\exists\ $}}1 {\\/}{{$\vee\,$}}1 {\\vee}{{$\vee\,$}}1 {/\\}{{$\wedge\,$}}1 {\\wedge}{{$\wedge\,$}}1 {=>}{{$\Rightarrow\ $}}1 {->}{{$\rightarrow\ $}}1 {<=}{{$\Leftarrow\ $}}1 {<-}{{$\leftarrow\ $}}1 {~=}{{$\neq\ $}}1 {\\U}{{$\cup\ $}}1 {\{\ensuremath{\cap}}}{{$\cap\ $}}1 {|-}{{$\vdash\ $}}1 {-|}{{$\dashv\ $}}1 {<<}{{$\ll\ $}}2 {>>}{{$\gg\ $}}2 {||}{{$\|$}}1 {[}{{$[$}}1 {]}{{$\,]$}}1 {[[}{{$\langle$}}1 {]]]}{{$]\rangle$}}1 {]]}{{$\rangle$}}1 {<=>}{{$\Leftrightarrow\ $}}2 {<->}{{$\leftrightarrow\ $}}2 {(+)}{{$\oplus\ $}}1 {(-)}{{$\ominus\ $}}1 {_i}{{$_{i}$}}1 {_j}{{$_{j}$}}1 {_{i,j}}{{$_{i,j}$}}3 {_{j,i}}{{$_{j,i}$}}3 {_0}{{$_0$}}1 {_1}{{$_1$}}1 {_2}{{$_2$}}1 {_n}{{$_n$}}1 {_p}{{$_p$}}1 {_k}{{$_n$}}1 {-}{{$\ms{-}$}}1 {@}{{}}0 {\\delta}{{$\delta$}}1 {\\R}{{$\R$}}1 {\\Rplus}{{$\Rplus$}}1 {\{\cal N}}{{${\cal N}$}}1 {\\times}{{$\times\ $}}1 {\\tau}{{$\tau$}}1 {\\alpha}{{$\alpha$}}1 {\\beta}{{$\beta$}}1 {\\gamma}{{$\gamma$}}1 {\\ell}{{$\ell\ $}}1 {--}{{$-\ $}}1 {\\TT}{{\hspace{1.5em}}}3 } \newcommand{\abs}[1]{\left\lvert#1\right\rvert} \newcommand{\pair}[1]{\left\langle#1\right\rangle} \newcommand{\floor}[1]{\left\lfloor#1\right\rfloor} \newcommand{\ceil}[1]{\left\lceil#1\right\rceil} \newcommand{\norm}[1]{\left\lvert\left\lvert#1\right\rvert\right\rvert} \newcommand{\argmin}[2]{\underset{#2}{\operatorname{argmin}} #1} \newcommand{\argmax}[2]{\underset{#2}{\operatorname{argmax}} #1} \newcommand{\maxel}[2]{\underset{#2}{\operatorname{max}} #1} \newcommand{\minel}[2]{\underset{#2}{\operatorname{min}} #1} 
\newcommand{\supel}[2]{\underset{#2}{\operatorname{sup}} #1} \newcommand{\infel}[2]{\underset{#2}{\operatorname{inf}} #1} \newcommand{\sgn}[1]{\operatorname{sgn} \left( #1 \right)} \newcommand{\lima}[2]{\underset{#2}{\operatorname{lim}} #1} \newcommand{\ds}[1]{\left\llbracket#1\right\rrbracket} \newcommand{\mathrel{\stackrel{\scriptscriptstyle\Delta}{=}}}{\mathrel{\stackrel{\scriptscriptstyle\Delta}{=}}} \newenvironment{proofof}[1]{\trivlist\item[]\emph{Proof of #1}:}{\unskip\nobreak\hskip 1em plus 1fil\nobreak\qed\parfillskip=0pt\endtrivlist} \def\sim{\sim} \def\hat{\nu}{\hat{\nu}} \def\vec{V}_I{\vec{V}_I} \def\vec{V}_T{\vec{V}_T} \def\dot{\nu}_{2min}{\dot{\nu}_{2min}} \def\dot{\nu}_{2max}{\dot{\nu}_{2max}} \newcommand{\reacht}[1]{\relax\ifmmode {\sf Reach}_{#1}(t) \else ${\sf Reach}_{#1}(t)$\fi} \newcommand{\reachi}[2]{\relax\ifmmode {\sf Reach}_{#1}(#2) \else ${\sf Reach}_{#1}(#2)$\fi} \newcommand{\breach}[2]{\relax\ifmmode {\sf Reach}^{#2}_{#1} \else ${\sf Reach}^{#2}_{#1}$\fi} \newcommand{\breachi}[3]{\relax\ifmmode {\sf Reach}^{#2}_{#1}(#3) \else ${\sf Reach}^{#2}_{#1}(#3)$\fi} \newcommand{\breacht}[2]{\relax\ifmmode {\sf Reach}^{#2}_{#1}(t) \else ${\sf Reach}^{#2}_{#1}(t)$\fi} \newcommand{\INDSTATE}[1][1]{\STATE\hspace{#1\algorithmicindent}} \defC{C} \def\sigma{\sigma} \def\Omega{\Omega} \defT{T} \defY{Y} \defy{y} \def\Gamma{\Gamma} \def\I{{\ensuremath{\cap}}} \newcommand{\comp}[1]{ {#1}' } \defe{e} \newcommand{\varvec}[1]{ \mathbf{#1} } \def\overrightarrow{\Delta V}{\overrightarrow{\Delta V}} \def\textbf{(Right)}\xspace{\textbf{(Right)}\xspace} \def\textbf{(Left)}\xspace{\textbf{(Left)}\xspace} \def\circ{\circ} \subsection{Experiments on trace containment reasoning} \label{sec: sub_exp} \paragraph{Graph simulation} Consider the $\auto{AEB}$ system of Section~\ref{ssec:adas} with the scenario where Vehicle B is stopped ahead of vehicle A, and A transits from \Lmode{cruise} to \Lmode{em\_brake} to avoid colliding with B. 
In the actual system ($G_2$ of Figure~\ref{fig: em_brake_graphs}), two different sensor systems trigger the obstacle detection and emergency braking at time intervals $[1,2]$ and $[2.5,3.5]$ and take the system from vertex $0$ (\Lmode{cruise}) to two different vertices labeled with \Lmode{em\_brake}. \begin{figure}[ht] \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth]{brake_2_modes.png} \caption{\small Transition graph $G_1$.} \label{fig: em_brake_graphsA} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth]{brake_3_modes.png} \caption{Transition graph $G_2$.} \label{fig: em_brake_graphsB} \end{subfigure} \begin{subfigure}[b]{0.35\textwidth} \includegraphics[width=\textwidth]{Abstraction_reachTube.pdf} \caption{$\auto{AEB}$ Reachtubes.} \label{fig:abstraction} \end{subfigure} \caption{\small Graphs and reachtubes for the Automatic Emergency Braking $\auto{AEB}$ system.} \label{fig: em_brake_graphs} \end{figure} To illustrate trace containment reasoning, consider a simpler graph $G_1$ that allows a single transition of A from \Lmode{cruise} to \Lmode{em\_brake} over the bigger interval $[0.5, 4.5]$. Using Proposition~\ref{prop:hybrid-contain} and checking that graph $G_2 \preceq_{\sf id} G_1$, it follows that verifying the safety of $\auto{AEB}$ with $G_1$ is adequate to infer the safety with $G_2$. Figure~\ref{fig:abstraction} shows that the safe reachtubes returned by the algorithm for $G_1$ (in red) indeed contain the reachtubes for $G_2$ (in blue and gray). \vspace{-10pt} \paragraph{Sequential composition} We revisit the $\auto{Powertrn}$ example of Section~\ref{ex:powertrain}. The initial set $\Theta$ and unsafe set are the same as in Table \ref{table:results}.
Let $G_A$ be the graph ($v_0$,\Lmode{startup}) $\xrightarrow{[5,10]}$ ($v_1$,\Lmode{normal}) $\xrightarrow{[10,15]}$ ($v_2$,\Lmode{powerup}), and $G_B$ be the graph ($v_0$,\Lmode{powerup}) $\xrightarrow{[5,10]}$ ($v_1$,\Lmode{normal}) $\xrightarrow{[10,15]}$ ($v_2$,\Lmode{powerup}). The graph $G_1 =$ ($v_0$,\Lmode{startup}) $\xrightarrow{[5,10]}$ ($v_1$,\Lmode{normal}) $\xrightarrow{[10,15]}$ ($v_2$,\Lmode{powerup}) $\xrightarrow{[5,10]}$ ($v_3$,\Lmode{normal}) $\xrightarrow{[10,15]}$ ($v_4$,\Lmode{powerup}) can be expressed as the composition $G_1 = G_A \seqcomp G_B$. Consider the two hybrid systems $\H_i = \langle \L, \Theta_i, G_i, \TL \rangle$, $i \in \{A,B\}$, with $\Theta_A = \Theta$ and $\Theta_B = \reach{\H_A}^{v_2}$. {{\sc DryVR}}'s estimate of $\Theta_{B}$ had $\lambda$ in the range from $14.68$ to $14.71$. The reachset $\reach{\H_B}^{v_2}$ computed by {{\sc DryVR}} had $\lambda$ from $14.69$ to $14.70$. The remaining variables were also observed to satisfy the containment condition. Therefore, $\reach{\H_B}^{v_2} \subseteq \Theta_{B}$. Consider the two hybrid systems $\H_i = \langle \L, \Theta, G_i, \TL \rangle$, $i \in \{1,2\}$, where $G_1$ (defined above) is $G_A \seqcomp G_B$, and $G_2 = G_A \seqcomp G_B \seqcomp G_B \seqcomp G_B$. Using Theorem~\ref{thm:seqcomp-mainresult}, it suffices to analyze $\H_1$ to verify $\H_2$. $\H_1$ was proved safe by {\sc DryVR}\ without any refinement. As a sanity check, we also verified the safety of $\H_2$; {{\sc DryVR}} proved $\H_2$ safe without any refinement as well. \section{Reasoning principles for trace containment} \label{sec:trace-containment} For a fixed unsafe set $\U$ and two hybrid systems $\H_1$ and $\H_2$, proving $\reach{\H_1} \subseteq \reach{\H_2}$ and the safety of $\H_2$ allows us to conclude the safety of $\H_1$.
Proposition~\ref{prop:hybrid-contain} establishes that proving containment of traces, trajectories, and initial sets of two hybrid systems ensures the containment of their respective reach sets. These two observations together give us a method for concluding the safety of one system from the safety of another, provided we can check trace containment of two graphs and trajectory containment of two trajectory sets. In our examples, the set of modes $\L$ and the set of trajectories $\TL$ are often the same between the hybrid systems we care about. So in this section we present different reasoning principles to check trace containment between two graphs. Semantically, a transition graph $G$ can be viewed as a one-clock timed automaton, i.e., one can construct a timed automaton $T$ with a single clock variable such that the timed traces of $T$ are exactly the traces of $G$. This observation, coupled with the fact that checking the timed language containment of one-clock timed automata~\cite{ow-lics} is decidable, allows one to conclude that checking if $G_1 \preceq_{{\sc lmap}} G_2$ is decidable. However, the algorithm in~\cite{ow-lics} has non-elementary complexity. Our next observation establishes that forward simulation between graphs can be checked in polynomial time. Combined with Proposition~\ref{prop:graphsim}, this gives a simple sufficient condition for trace containment that can be efficiently checked. \begin{proposition} \proplabel{prop:check-sim} Given graphs $G_1$ and $G_2$, and mode map ${\sc lmap}$, checking if there is a forward simulation from $G_1$ to $G_2$ can be done in polynomial time. \end{proposition} \begin{proof} The result can be seen to follow from the algorithm for checking timed simulations between timed automata~\cite{cerans} and the correspondence between transition graphs and one-clock timed automata; the fact that the automata have only one clock ensures that the region construction is poly-sized as opposed to exponential-sized.
However, in the special case of transition graphs there is a more direct algorithm, which we describe here, that does not involve the region construction. Observe that if $\{R_i\}_{i\in I}$ is a family of forward simulations between $G_1$ and $G_2$, then $\cup_{i \in I} R_i$ is also a forward simulation. Thus, like classical simulations, there is a unique largest forward simulation between two graphs that is the greatest fixpoint of a functional on relations over the vertices of the transition graphs. Therefore, starting from the relation $\V_1 \times \V_2$, one can progressively remove pairs $(v,u)$ such that $v$ is not simulated by $u$, until a fixpoint is reached. Moreover, in this case, since $G_1$ is a DAG, one can guarantee that the fixpoint will be reached in $|\V_1|$ iterations. \end{proof} Executions of hybrid systems are for bounded time and a bounded number of mode switches. This is because our transition graphs are acyclic and the labels on edges are bounded intervals. Sequential composition of graphs allows one to consider longer switching sequences of longer duration. We now present observations that will allow us to conclude the safety of a hybrid system with long switching sequences based on the safety of the system under short switching sequences. To do this we begin by observing simple properties about sequential composition of graphs. In what follows, all hybrid systems we consider will be over a fixed set of modes $\L$ and trajectory set $\TL$. Also, ${\sf id}$ will be the identity function on $\L$. Our first observation is that trace containment is consistent with sequential composition. \begin{proposition} \label{prop:cong-seqcomp} Let $G_i, G_i'$, $i \in \{1,2\}$, be four transition graphs over $\L$ such that $G_1\seqcomp G_2$ and $G_1'\seqcomp G_2'$ are defined, and $G_i \preceq_{{\sf id}} G_i'$ for $i \in \{1,2\}$. Then $G_1\seqcomp G_2 \preceq_{{\sf id}} G_1'\seqcomp G_2'$.
\end{proposition} Next we observe that sequential composition of graphs satisfies the ``semi-group property''. \begin{proposition} \label{prop:semi-group} Let $G_1,G_2$ be graphs over $\L$ for which $G_1\seqcomp G_2$ is defined. Let $v_{1 {\sf term}}$ be the unique terminal vertex of $G_1$. Consider the following hybrid systems: $\H = \langle \L, \Theta, G_1\seqcomp G_2, \TL\rangle$, $\H_1 = \langle\L, \Theta, G_1, \TL\rangle$, and $\H_2 = \langle\L, \reach{\H_1}^{v_{1 {\sf term}}}, $$ G_2,$ $ \TL\rangle$. Then $\reach{\H} = \reach{\H_1} \cup \reach{\H_2}$. \end{proposition} Consider a graph $G$ such that $G\seqcomp G$ is defined. Let $\H$ be the hybrid system with transition graph $G$, and $\H'$ be the hybrid system with transition graph $G\seqcomp G$; the modes, trajectories, and initial set for $\H$ and $\H'$ are the same. Now by Propositions~\ref{prop:seqcomp-containment} and~\ref{prop:hybrid-contain}, we can conclude that $\reach{\H} \subseteq \reach{\H'}$. Our main result of this section is that under some conditions, the converse also holds. This is useful because it allows us to conclude the safety of $\H'$ from the safety of $\H$. In other words, we can conclude the safety of a hybrid system for long, possibly unbounded, switching sequences (namely $\H'$) from the safety of the system under short switching sequences (namely $\H$). \begin{theorem} \label{thm:seqcomp-mainresult} Suppose $G$ is such that $G\seqcomp G$ is defined. Let $v_{{\sf term}}$ be the unique terminal vertex of $G$. For natural number $i \geq 1$, define $\H_i = \langle \L, \Theta, G^i, \TL\rangle$, where $G^i$ is the $i$-fold sequential composition of $G$ with itself. In particular, $\H_1 = \langle \L, \Theta, G, \TL\rangle$. If $ \reach{\H_1}^{v_{{\sf term}}} \subseteq \Theta $ then for all $i$, $\reach{\H_i} \subseteq \reach{\H_1}$. \end{theorem} \begin{proof} Let $\Theta_1 = \reach{\H_1}^{v_{{\sf term}}}$. From the condition in the theorem, we know that $\Theta_1 \subseteq \Theta$.
Let us define $\H_i' = \langle \L, \Theta_1, G^i, \TL\rangle$. Observe that from Proposition~\ref{prop:hybrid-contain}, we have $\reach{\H_i'} \subseteq \reach{\H_i}$. The theorem is proved by induction on $i$. The base case (for $i = 1$) trivially holds. For the induction step, assume that $\reach{\H_i} \subseteq \reach{\H_1}$. Since $\seqcomp$ is associative, using Proposition~\ref{prop:semi-group} and the induction hypothesis, we have $ \reach{\H_{i+1}} = \reach{\H_1} \cup \reach{\H_i'} \subseteq \reach{\H_1} \cup \reach{\H_i} = \reach{\H_1}. $ \end{proof} Theorem~\ref{thm:seqcomp-mainresult} allows one to determine the set of reachable states of a set of modes $\L$ with respect to graph $G^i$, provided $G$ satisfies the conditions in the statement. This observation can be generalized. If a graph $G_2$ satisfies conditions similar to those in Theorem~\ref{thm:seqcomp-mainresult}, then using Proposition~\ref{prop:semi-group}, we can conclude that the reachable set with respect to graph $G_1\seqcomp G_2^i\seqcomp G_3$ is contained in the reachable set with respect to graph $G_1\seqcomp G_2\seqcomp G_3$. The formal statement of this observation and its proof are omitted in the interest of space, but we will use it in our experiments. \input{substitutivity_exp}
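The greatest-fixpoint computation of the largest forward simulation described above can be sketched on plain labelled graphs. The following toy implementation (the graph encoding as dictionaries and the function name are our own, and it only checks label-matching of successors, ignoring the interval structure of the actual transition graphs) starts from the full relation and removes non-simulated pairs until a fixpoint is reached:

```python
from itertools import product

def largest_simulation(edges1, edges2, nodes1, nodes2):
    """Greatest-fixpoint computation of the largest simulation relation.

    edges1/edges2: dict mapping a node to a list of (label, successor) pairs.
    Starting from the full relation nodes1 x nodes2, remove pairs (v, u)
    for which some labelled move of v cannot be matched by u, and repeat
    until nothing changes (the fixpoint).
    """
    rel = set(product(nodes1, nodes2))
    changed = True
    while changed:
        changed = False
        for (v, u) in list(rel):
            for (a, v2) in edges1.get(v, []):
                # u must match the a-move of v with an a-move staying in rel
                if not any(b == a and (v2, u2) in rel
                           for (b, u2) in edges2.get(u, [])):
                    rel.discard((v, u))
                    changed = True
                    break
    return rel
```

For acyclic $G_1$, as noted above, the outer loop stabilizes after at most $|\V_1|$ passes.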
cond-mat/9605118
\section{\bf Introduction} The most interesting feature of high temperature superconductors (HTSCs) is that the superconducting transition is broadened in the presence of a magnetic field. This broadening has been understood in terms of thermodynamic fluctuations near the superconducting phase transition; it has attracted great interest and many theories have been developed \cite{ike,ull,ztc,moor,over}. It is expected that fluctuations of the normal phase into the superconducting phase above the transition temperature $T_{c}$ can give rise to additional contributions to both thermodynamic and transport quantities. These fluctuations can be viewed as superconducting droplets that spontaneously appear and disappear on time scales of $\hbar /k_{B} \mid T-T_{c} \mid$ \cite{tink}. Our discussion is based on the lowest Landau level approximation. According to this approximation, the applied field forces the superconducting electrons to move in the lowest Landau orbitals perpendicular to the field, thereby reducing the effective dimensionality of the system. Within this approximation, it has been shown \cite{ztc} that physical properties exhibit scaling with the scaling variable $x=[\frac{T-T_{c}(B)}{(TB)^{n}}]$, where $n$=2/3 for 3-dimensional (3D) systems and 1/2 for 2D systems; this scaling has been observed experimentally \cite{welp}.\\ \par If the quartic term is dropped in the GL free energy, an analytic expression for the specific heat is obtained which diverges at $T=T_{c}(B)$ \cite{thou}. Close to $T_{c}(B)$, this term plays an important role and cannot be ignored. The effect of including the quartic term for layered superconductors was studied {\it implicitly} by Quader and Abrahams \cite{qua}. However, if the variation of the order parameter along the direction of the applied field is smooth enough, the problem reduces to the ordinary 3D GL equation \cite{tink2}.
Sufficiently near $T_{c}$, the coherence length along the $c$-axis, $\xi_{c}$, will be so large that $\xi_{c} > s$, the interplanar distance, and the 3D description of GL theory is justified. \par An exact evaluation of the partition function $Z$ is not possible in the presence of the quartic term. However, the partition function can be evaluated within a Hartree-type approximation, in which $\mid \psi \mid^{4}$ is replaced by 2$< \mid \psi \mid^{2} > \mid \psi \mid^{2}$ \cite{mask}. In this paper, we take the quartic term into account and derive the specific heat in the Hartree approximation. \section{\bf Hartree Approximation} The free energy functional for a superconductor in a uniform flux density $B$ near $T_{c}$ has the form \cite{thou} \begin{equation} F_{s}=F_{n}+\int dV [\frac{1}{2m} \mid(-{\it i}\hbar \nabla - eB \times r) \psi \mid^2 + \alpha \mid \psi \mid^{2} + \frac{\beta}{2} \mid \psi \mid^{4} +\cdots] \end{equation} \noindent where $\alpha= \alpha_{0} t$ and $\beta=\beta_{0}$. To calculate the fluctuations above $T_{c}(B)$, we can use the free energy functional of Eq.~(1). If only the terms quadratic in $\psi$ are kept, the specific heat diverges at $T=T_{c}(B)$. Close to $T_{c}(B)$, however, the terms quartic in the fluctuations cannot be ignored. Within the Hartree approximation, \begin{equation} \Delta F[\psi]=F_{s}-F_{n}=\int dV [\alpha \mid \psi \mid^{2} +\beta < \mid \psi \mid^{2} > \mid \psi \mid^{2} + \frac{1}{2m} \mid(-{\it i}\hbar \nabla - eB \times r) \psi \mid^2 ] \end{equation} \noindent In the free energy expression, we have dropped the term $B^{2}/2\mu_{0}$ because $B$ is taken to be the mean value of the induction and its fluctuations are neglected. The free energy can be diagonalized in terms of the solutions of the equation
\begin{equation} \frac{1}{2m} (-{\it i}\hbar \nabla - eB \times r)^{2} \psi + \alpha \psi + \beta < \mid \psi \mid^2 > \psi = E \psi \end{equation} Writing $\psi=\sum_{q} C_{q} \psi_{q}(r)$, we find that the eigenvalues are \begin{equation} E_{q}=\alpha+(2n+1)\frac{e \hbar B}{m}+\frac{\hbar^2 k_{z}^2}{2m} +\beta < \mid \psi \mid^{2} >= \alpha_{B}+2n \frac{e \hbar B}{m}+\frac{\hbar^2 k_{z}^2}{2m} +\beta < \mid \psi \mid^{2} > \end{equation} with degeneracy $eB/\pi \hbar$ per unit area, where $\alpha_{B}= \alpha+\frac{e \hbar B}{m}$. In the vicinity of the upper critical field, the dominant contribution comes from the lowest Landau level ($n$=0). The order parameter averages are given by \cite{mask} \begin{equation} < \mid \psi \mid^2 >= k_{B}T \sum_{q} \frac{1}{E_{q}} \end{equation} and the specific heat can be calculated as \cite{thou}, \begin{equation} C=k_{B}T^{2} \sum_{q} \frac{1}{E_{q}^{2}}(\frac{dE_{q}}{dT})^{2} \end{equation} \subsection {\it Specific Heat in 2D Systems} In the 2D case, for the applied field perpendicular to the film, the fluctuations are zero-dimensional and the $k_{z}$ component is suppressed.
The average of $\mid \psi \mid^{2}$ for a film of thickness $d$ can be evaluated as \cite{lee}, \begin{equation} < \mid \psi \mid^2 > =\frac{-\alpha_{B}+\sqrt{\alpha_{B}^{2}+(\frac{eB}{d \pi \hbar}) \beta k_{B}T}}{2 \beta} \end{equation} \noindent This along with Eq.~(6) gives \begin{equation} {C_{2D}}={{\left(\frac{eB}{\pi \hbar}\right) kT^2 \left(\frac{\alpha_{0 }}{2T_c} + \frac{1}{4} \frac{1}{\sqrt{\left( {\alpha_{B}}^2 + \beta kT \left( \frac{eB}{d \pi \hbar} \right ) \right )}} \left( \frac{2 \alpha_{0} \alpha_{B}}{T_c} + \beta k \left( \frac{eB}{d \pi \hbar}\right) \right) \right)}\over{ \frac{1}{4} \left(\alpha_{B} + \sqrt{\left( {\alpha_{B}}^2 + \beta kT \left( \frac{eB}{d \pi \hbar} \right) \right)} \right )^2}} \end{equation} Since $\beta$ is independent of temperature while $\alpha$ decreases, $\alpha_{B} \rightarrow 0$ as $T \rightarrow T_{c}(B)$, and \begin{equation} < \mid \psi \mid^2 >=\frac{1}{2}\left(\frac{eB k_{B}T}{d \pi \hbar \beta}\right)^{1/2} \end{equation} which is non-zero and gives rise to a non-divergent specific heat. \subsection {Specific Heat in 3D Systems} Replacing the sum by an integration, the average of $\mid \psi \mid^{2}$ is obtained as \begin{equation} < \mid \psi \mid^2 >= \frac{(\frac{eB}{4 \pi \hbar})k_{B}T \sqrt{\frac{m}{2}}}{\sqrt{\alpha_{B}+\beta < \mid \psi \mid^2 >}} \end{equation} which gives \begin{equation} < \mid \psi \mid^2 >=\left(\frac{m}{2}\right)^{1/3} \left(\frac{eB k_{B}T}{4 \pi \hbar}\right)^{2/3} \beta^{-1/3} \end{equation} as $T \rightarrow T_{c}(B)$.
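The 3D self-consistency condition of Eq.~(10) has the form $x = A/\sqrt{\alpha_{B}+\beta x}$, with $x = \langle \mid\psi\mid^{2}\rangle$ and $A$ collecting the field- and temperature-dependent prefactor, and can be solved by simple fixed-point iteration. The sketch below (in arbitrary units, with our own variable names, not part of the paper) illustrates this; at $\alpha_{B}=0$ the iteration reproduces the closed-form limit $x=(A^{2}/\beta)^{1/3}$:

```python
def hartree_psi_sq(A, alpha_B, beta, tol=1e-12, max_iter=10000):
    """Solve x = A / sqrt(alpha_B + beta * x) by fixed-point iteration.

    x plays the role of <|psi|^2>. Near the fixed point the map is a
    contraction (its derivative there has magnitude 1/2 at alpha_B = 0),
    so plain iteration converges.
    """
    x = 1.0
    for _ in range(max_iter):
        x_new = A / (alpha_B + beta * x) ** 0.5
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")
```

The same routine with $\alpha_{B}>0$ gives the self-consistent average away from $T_{c}(B)$.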
Hence the specific heat is non-divergent and is given by \begin{equation} {C_{3D}}={{({eB\over \pi \hbar})kT^2 \left(\frac{\alpha_0}{T_c} + \frac{2}{3}\left( \sqrt{\frac{m}{2}} \frac{eB}{\pi \hbar} \beta k\right)^ \frac{2}{3} \frac{1}{T^\frac{1}{3}}\right)}\over{\left( \alpha_{B} + \left(\sqrt{\frac{m}{2}} \left(\frac{eB}{\pi \hbar} \right) kT\right)^ \frac{2}{3}\right)^\frac{3}{2}}} \end{equation} \section{\bf Discussion} The fluctuation specific heat above $T_{c}(B)$ in the 3D case differs by a factor $\sqrt{2}$ from its value at a corresponding distance below $T_{c}(B)$, because the coherence length for $\psi(r)$ is smaller by a factor $\sqrt{2}$ below $T_{c}(B)$ than above it \cite{tink}. In the 2D case, by contrast, the specific heat is the same on both sides, as has been observed experimentally. Fig.~1 shows the temperature dependence of the fluctuation specific heat in 2D systems for different values of $\alpha_{0}$ and $\beta_{0}$. It is noted from Fig.~1 that the width and the shape of the transition depend on $\alpha_{0}$ and $\beta_{0}$. Fig.~1(a) corresponds to $\alpha_0 = 10$, $\beta_0 = 5000$ and Fig.~1(b) to $\alpha_0 = 100$, $\beta_0 = 10000$. In both figures the dashed and solid curves correspond to high (4~T) and low (2~T) external magnetic fields, respectively. For our calculation we have taken $\frac{dB_{c_{2}}}{dT} = -3.2$. It has been shown \cite{kkn1} that the field dependence of the specific heat below $T_{c}(B)$ can be accounted for in the mean-field approximation by evaluating the interactions between vortices. The vortex structure, which influences the specific heat below $T_{c}(B)$, has not been included in the present calculation. Therefore, below the transition temperature we do not expect agreement of our result with the experimental data. However, we note from Figs.~1(a) and 1(b) that the width of the transition increases with magnetic field.
This is in accordance with experiments on both low-$T_{c}$ \cite{bar} and high-$T_c$ materials \cite{isrg,janod}. Our result also gives behaviour above $T_{c}(B)$ similar to that obtained by Bray \cite{bray} using screening theory. \section{\bf Conclusion} We have presented and discussed the results for the fluctuation specific heat, both with and without the quartic term in the free energy. An inspection of our results indicates that the interaction term plays an important role. The specific heat does not diverge, in contrast to the GL theory without the $\mid \psi \mid^4$ term. We have also shown that the sharpness of the transition in the fluctuation specific heat decreases with increasing magnetic field, as is observed in experiment. Above $T_{c}(B)$, the shape of the fluctuation specific heat curve is also similar to that obtained from screening theory, in agreement with experiment. \newpage
hep-ph/9605315
\section{Introduction} \label{sec:introduction} It is generally believed that QCD undergoes chiral restoration at sufficiently high temperatures. This is supported by lattice simulations \cite{K95}, as well as by a variety of model calculations. As the temperature grows, the value of the quark condensate increases from its negative $T=0$ value and approaches zero. As shown in Ref. \cite{GL}, in the exact chiral limit (zero current quark masses) chiral symmetry dictates the form of the first two terms ($\sim T^2$ and $T^4$) in the low-temperature expansion of the quark condensate. At higher temperatures we do not have fundamental knowledge of the behavior of $\langle \overline{q}q \rangle$; however, most model calculations show a phase transition at temperatures $T \sim {\rm 150 - 200}$~MeV. Lattice calculations also show a dramatic change of $\langle \overline{q}q \rangle$ at similar temperatures. In this letter we study the temperature dependence of the quark condensate in the two-flavor Nambu--Jona-Lasinio (NJL) model \cite{NJL}. There have already been several studies \cite{HK85,BMZ87,Lutz92} of chiral restoration in this model. Our investigation brings a new important element: it includes {\em meson loops} in a self-consistent way. Previous studies have been performed at the quark-loop level only. Attempts have been made to include meson loops in the NJL model, but self-consistency was not completely fulfilled \cite{Heid}. Our approximation is symmetry-conserving \cite{DSTL95,NBCRG96}, hence it is consistent with all requirements of chiral symmetry. The key ingredient is the self-consistency in solving the equation for the scalar density with meson loops present. This makes the approach consistent with the requirements of chiral symmetry, such as the Goldstone theorem, the Gell-Mann--Oakes--Renner and Goldberger-Treiman relations, or one-loop chiral expansions.
We find important qualitative and quantitative differences in the temperature dependence of the quark condensate in our calculation with meson loops compared to the case with quark loops only. With quark loops only, the condensate remains flat at low temperatures, whereas in our case it changes considerably. We show that in the exact chiral limit this change agrees with the prediction of chiral perturbation theory \cite{GL}. We also find that meson loops decrease the temperature of chiral restoration by about 10\%. \section{Definition of the model} \label{sec:model} The Lagrangian of the two-flavor NJL model with scalar-isoscalar and pseu\-do\-sca\-lar-isovector interactions is \begin{equation} {\cal L}=\bar q({\rm i}\partial^\mu \gamma_\mu - m)q + {\frac{1}{2 a^2}}\left( (\bar q q)^2 + (\bar q{\rm i}\gamma_5 \mbox{\boldmath $\tau$} q)^2\right) \; , \label{eq:lagr} \end{equation} where $q$ is the quark field, $m$ is the current quark mass, and $1/a^2$ is the coupling constant. It is convenient to apply the formalism of the effective action \cite{ItzZub} to the Lagrangian (\ref{eq:lagr}). Details of this procedure are given in Ref. \cite{NBCRG96}. Meson fields are introduced in the usual way (partial bosonization), with $\Phi = (\Phi_0, \mbox{\boldmath $\Phi$})$ related to the sigma and pion mean fields. At the quark-loop level the effective action is \begin{equation} {I}(\Phi) = \int d^4x \left ( \half{a^2} \Phi^2 - a^2 m \Phi_0 + \half{a^2} m^2 \right ) - \half{\rm Tr}\,\ln (D^{\dagger }D) \;, \label{eq:Seffq} \end{equation} where $D$ is the Dirac operator, $D = \partial_\tau - {\rm i}{\mbox{\boldmath $\alpha$} \cdot \mbox{\boldmath $\nabla$} } + \beta \Phi_0 + {\rm i} \beta \gamma_5 {\mbox{\boldmath $\tau$}} \cdot {\mbox{\boldmath $\Phi$}}$. We work in Euclidean space-time ($\tau$, ${\mbox{\boldmath $x$}}$).
In Eq.~(\ref{eq:Seffq}) we have replaced the usual ${\rm Tr}\,\ln D$ term with $\half{\rm Tr}\,\ln (D^{\dagger }D)$, which is allowed in the absence of anomalies. In fact, this replacement is necessary for the introduction of the proper-time regulator \cite{pt} used in many NJL calculations, and also in this paper. Meson loops bring an additional term to the effective action \cite{NBCRG96,ItzZub} \begin{equation} {\Gamma}(\Phi) = {I}(\Phi) + \half {\rm Tr}\,\ln ({K}^{-1}) \; . \label{eq:Seffm} \end{equation} The inverse {\em meson propagator} matrix $K$ is defined as $K^{-1}_{ab}(x,y) = \frac{\delta^2 I \left( \Phi \right)} {\delta \Phi_a(x) \delta \Phi_b(y)}$. In Eqs.~(\ref{eq:Seffq},\ref{eq:Seffm}) ${\rm Tr}$ denotes the full trace, including functional space, isospin, and in addition color and spinor trace for quarks. In the $N_c$-counting scheme, the quark loop term ${I}(\Phi)$ is the leading contribution of order ${\cal O}(N_c)$, and the meson loop term $\frac{1}{2}{\rm Tr}\,\ln {K}$ is of order ${\cal O}(1)$. Thus the one-meson-loop contributions give the first correction to the leading-$N_c$ results. Using standard methods, Green's functions can be obtained from Eq.~(\ref{eq:Seffm}) via differentiation with respect to mean meson fields. Of particular importance is the one-point function, which gives the expectation value of the sigma field. The condition \begin{eqnarray} \label{GAP} \frac{\delta \Gamma(\Phi)} {\delta \Phi_0(x)}_{\mid \Phi_0(x)=S} = && a^2 (S-m) - \half {\rm Tr} \left( (D^{\dagger} D)^{-1} \frac{\delta (D^{\dagger }D)}{\delta \Phi_0(x)} \right )_{\Phi_0(x)=S} \nonumber \\ && + \half {\rm Tr} \left ( K \frac{\delta K^{-1}} {\delta \Phi_0(x)} \right)_{\Phi_0(x)=S} = 0 \end{eqnarray} yields the equation for the vacuum expectation value of $\Phi_0$, which we denote by $S$. 
Introducing \begin{eqnarray} \label{prop} K_\sigma(S,Q^2) & = & \left ( 4 N_c f(S,Q^2)(Q^2+4S^2) + a^2 m/S \right )^{-1} \; , \nonumber \\ K_\pi(S,Q^2) & = & \left ( 4 N_c f(S,Q^2) Q^2 + a^2 m/S \right )^{-1} \;, \end{eqnarray} and retaining terms up to order ${\cal O}(N_c^0)$, Eq.~(\ref{GAP}) can be written in the form \cite{NBCRG96} \begin{eqnarray} \label{gap0} & & a^2 \left(S - m \right) - 8 N_c \, S g(S) \nonumber \\ & & + S \frac{N_c}{4 \pi^4} \int d^4 Q \left\{ \left [2 f(S,0) + \frac{d}{dS^2} \left (f(S,Q^2)(Q^2 + 4 S^2) \right ) \right] K_\sigma(S,Q^2) \right. \nonumber \\ & & + \left. 3 \left [2 f(S,0) + \frac{d}{dS^2} f(S,Q^2) Q^2 \right] K_\pi(S,Q^2) \right\} = 0. \end{eqnarray} Functions $g$ and $f$ in the above expressions are the {\em quark bubble functions}. Their form is very simple if no cut-offs were present. In this case we would have \mbox{$g(S) = \int {d^4k \over (2\pi)^4} {1 \over k^2 + S^2}$} and \mbox{$f(S,Q^2) = \int {d^4k \over (2\pi)^4} {1 \over k^2 + S^2} {1 \over (k+Q)^2 + S^2 }$}, and Eq.~(\ref{gap0}) could be interpreted via standard Feynman diagrams (see Fig.~\ref{fig:0}). In the presence of a cut-off these functions are complicated. In the case of the proper-time cut-off \cite{pt} used here we have \cite{NBCRG96} \begin{equation} \label{g0r} g(S) = \int {d^4k \over (2\pi)^4} \int\limits_{\Lambda_f^{-2}}^{\infty} ds \, \exp\left\{-s [k^2+S^2]\right\} = {\Lambda_f^2 \over 16 \pi^2} \, E_2\left[{S^2 \over \Lambda_f^2} \right] \end{equation} and \begin{eqnarray} \label{f0r} f(S,Q^2) &=& \int {d^4k \over (2\pi)^4} \int\limits_{\Lambda_f^{-2}}^{\infty} ds \, s \int\limits_0^1 du \exp\left\{-s [k^2 + S^2 + u(1-u) Q^2] \right\} \nonumber \\ &=& {1 \over 16 \pi^2} \int\limits_0^1 du \, E_1\left[{S^2 \over \Lambda_f^2} + u (1-u) {Q^2 \over \Lambda_f^2} \right], \end{eqnarray} where $\Lambda_f$ is the quark cut-off, and the exponential integral is defined as \mbox{$E_n(x) \equiv \int\limits_1^{\infty} dt \, {e^{-xt} \over t^n}$}. 
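The exponential integrals $E_n(x)$ appearing in the regularized bubble functions can be evaluated by straightforward quadrature. The helper below (our own sketch, not part of the paper) computes $E_n(x)$ from its defining integral; the standard recurrence $E_{2}(x)=e^{-x}-xE_{1}(x)$ provides a consistency check:

```python
import math

def expint(n, x, t_max=80.0, steps=200000):
    """E_n(x) = int_1^infty e^{-x t} / t^n dt, by composite Simpson rule.

    t_max truncates the tail of the (rapidly decaying, for x > 0)
    integrand; `steps` must be even for Simpson's rule.
    """
    h = (t_max - 1.0) / steps
    total = 0.0
    for i in range(steps + 1):
        t = 1.0 + i * h
        w = 1.0 if i in (0, steps) else (4.0 if i % 2 else 2.0)
        total += w * math.exp(-x * t) / t ** n
    return total * h / 3.0
```

In a production code one would instead use a library routine (e.g. an `expn`-type special function), but the quadrature above suffices to tabulate $g(S)$ and $f(S,Q^2)$.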
The one-meson-loop gap equation (\ref{gap0}) also requires the introduction of a regulator for meson momenta. In other words, we have to regularize the divergent integral over $d^4Q$. In Ref. \cite{NBCRG96} this was achieved by the substitution \mbox{$\int d^4Q \longrightarrow \pi^2 \int\limits_0^{\Lambda_b^2} dQ^2 \, Q^2$}, where $\Lambda_b$ was the four-dimensional Euclidean meson momentum cutoff. In the present study at finite temperatures, we employ the three-dimensional cutoff procedure, i.e., we make the replacement \begin{equation} \label{3dc} \int d^4Q \longrightarrow 4 \pi \int d\omega \int\limits_0^{\Lambda_b} dq \, q^2 \,\,\,\,\,\, , \end{equation} where $Q=(\omega,{\bf q})$ and $q = |{\bf q}|$. The form (\ref{3dc}) is convenient for the implementation of the boundary conditions satisfied by temperature Green's functions. \vfill \begin{figure}[b] \xslide{fig0.ps}{3cm}{30}{370}{560}{490} \caption{Diagrammatic representation of Eq.~(\ref{gap0}). The cross represents the first term, the second term is the one-quark-loop contribution, and the subsequent terms are the meson-loop contributions. The solid lines represent the quark propagator $1/(D^{\dagger} D)$, the dashed lines correspond to the meson propagators $K$ of Eqs.~(\ref{prop}), the external dashed line represents the scalar-isoscalar coupling, and the vertices follow from the form of $(D^{\dagger} D)$.} \label{fig:0} \end{figure} \section{Finite temperature} \label{sec:gap} For calculations at finite temperature $T$ we adopt the imaginary time formalism \cite{Kapusta}. This can be done by making the following replacement in the quark momentum integrals \begin{equation} \label{itf} \int {d^4k \over (2\pi)^4} F(k) = \int {dE \over 2\pi} \int {d^3k \over (2\pi)^3} F(E,{\bf k}) \rightarrow T \sum_{j=-\infty}^{\infty} \int {d^3k \over (2\pi)^3} F(E_j,{\bf k}). \end{equation} Here $F(k)=F(E,{\bf k})$ is an arbitrary integrand, and the sum runs over the fermionic Matsubara frequencies $E_j = (2j+1)\pi T$.
The integral over the meson four-momenta should be also replaced by the sum of the form (\ref{itf}). In this case, however, the sum runs over the bosonic Matsubara frequencies $\omega_n = 2\pi n T$. With this prescription we can turn to the calculation of the functions which are the finite temperature analogs of $g(S)$ and $f(S,Q^2)$. We find \begin{eqnarray} g(S,T) &=& T \sum_j \int {d^3k \over (2\pi)^3} \int\limits_{\Lambda_f^{-2}}^{\infty} ds \, \exp\left\{ -s \left[ E_j^2 + {\bf k}^2 + S^2 \right] \right\} \nonumber \\ &=& {T \Lambda_f \over 8 \pi^{{3 \over 2}} } \sum_j E_{3 \over 2} \left[{{S^2+E_j^2} \over \Lambda_f^2} \right] \end{eqnarray} and \begin{eqnarray} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!& &f(S,n,q,T) = T \sum_j \int {d^3k \over (2\pi)^3} \int_{\Lambda_f^{-2}}^{\infty} ds \, s \int\limits_0^1 du \times \nonumber \\ \!\!\!\!\!\!\!\!\!\!\!\!& & \exp\left\{ -s \left[ S^2 + u(1-u)(\omega_n^2 + {\bf q}^2) + \left[ {\bf k} - {\bf q} (1-u) \right]^2 + \left[E_j - \omega_n (1-u) \right]^2 \right] \right\} \nonumber \\ \!\!\!\!\!\!\!\!\!\!\!\!& &= {T \over 8 \pi^{{3 \over 2}} \Lambda_f} \sum_j \int\limits_0^1 du \, E_{1\over 2}\left[{S^2 \over \Lambda_f^2} + u(1-u) {\omega_n^2 + {\bf q}^2 \over \Lambda_f^2} \right. + \left. {[E_j-\omega_n(1-u)]^2 \over \Lambda_f^2} \right] \;. \end{eqnarray} Analogously, the inverse meson propagators become \begin{eqnarray} \label{propT} K_\sigma(S,n,q,T) & = & \left ( 4 N_c f(S,n,q,T)(\omega_n^2 + {\bf q}^2+4S^2) + a^2 m/S \right )^{-1} \; , \nonumber \\ K_\pi(S,n,q,T) & = & \left ( 4 N_c f(S,n,q,T)(\omega_n^2 + {\bf q}^2) + a^2 m/S \right )^{-1} \;. 
\end{eqnarray} Finally, we can write the finite-temperature analog of Eq.~(\ref{gap0}): \begin{eqnarray} \label{gapT} & & a^2 \left(S - {m} \right) - 8 N_c S \, g(S,T) + {2 S N_c T \over \pi^2} \sum_n \int\limits_0^{\Lambda_b} dq \, q^2 \times \nonumber \\ & & \left\{ \left [2 f(S,0,0,T) + \frac{d}{dS^2} \left (f(S,n,q,T) (\omega_n^2 + {\bf q}^2 + 4S^2) \right ) \right] K_\sigma(S,n,q,T) \right. \nonumber \\ & & + \left. 3 \left [2 f(S,0,0,T) + \frac{d}{dS^2} f(S,n,q,T) (\omega_n^2 + {\bf q}^2) \right] K_\pi(S,n,q,T) \right\} = 0 \;.\nonumber \\ \end{eqnarray} If chiral symmetry is broken, then the above equation has a nontrivial solution for $S$. The quark condensate and $S$ are related by the formula \begin{equation} \label{qq} \langle \overline{q}q \rangle = - a^2 (S - m) \;, \end{equation} which follows immediately from the fact that $\langle \overline{q}q \rangle = \delta \Gamma(\Phi)/\delta m$ and Eq.~(\ref{eq:Seffm}). \section{Low-temperature expansion in the chiral limit} \label{sec:lowT} Before presenting our numerical results for $\langle {\overline q} q \rangle_T$ let us consider the low-temperature expansion. As shown by Gasser and Leutwyler \cite{GL}, {\em in the chiral limit} the low-temperature expansion of the quark condensate has the form \begin{equation} \label{eq:gl} \langle \overline{q} q \rangle_T = \langle \overline{q} q \rangle_0 \left ( 1 - \frac{T^2}{8 F_\pi^2} - \frac{T^4}{384 F_\pi^4} + ... \right ) . \end{equation} First, let us do the $N_c$ counting in this formula. Since $F_\pi \sim {\cal O}(\sqrt{N_c})$, subsequent terms in the expansion are suppressed by $1/N_c$. Since our one-meson-loop calculation accounts for first subleading effects in the $1/N_c$ expansion, we can hope for reproducing only the $T^2$ term in Eq.~(\ref{eq:gl}). Further terms would require more loops. 
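As a numerical illustration of Eq.~(\ref{eq:gl}) (a trivial sketch with our own function name), the first two correction terms already reduce the condensate noticeably at moderate temperatures; with $F_\pi=93$~MeV the ratio drops to about $0.85$ at $T=100$~MeV:

```python
def condensate_ratio(T, f_pi=93.0):
    """Chiral-limit ratio <qq>_T / <qq>_0 from the first terms of the
    Gasser-Leutwyler low-temperature expansion, Eq. (eq:gl).

    T and f_pi are in MeV; only the T^2 and T^4 terms are kept, so this
    is reliable only at low T.
    """
    return 1.0 - T**2 / (8.0 * f_pi**2) - T**4 / (384.0 * f_pi**4)
```

Since $F_\pi \sim {\cal O}(\sqrt{N_c})$, each successive term is suppressed by a further power of $1/N_c$, as noted above.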
Using standard techniques \cite{Kapusta}, the sum over the bosonic Matsubara frequencies in Eq.~(\ref{gapT}) can be converted to a contour integral in the complex energy plane. By deforming this contour we collect all contributions from the singularities of the integrand, weighted with the thermal Bose distribution. At low temperatures, the dominant contribution comes from the lowest lying pion pole, and other singularities are negligible. Thus, the third term in (\ref{gapT}) becomes \begin{equation} \label{ae1} {3T \over \pi^2} \sum_n \int\limits_0^{\Lambda_b} dq \, q^2 {1 \over \omega_n^2 + q^2} = {3 \over 2\pi^2 } \int\limits_0^{\Lambda_b} dq \, q \left[ 1 + {2 \over e^{q/T} - 1} \right]. \end{equation} Writing Eq. (\ref{ae1}) we have approximated the function $f(M,n,q,T)$, appearing in the pion propagator, by its value at $n=q=0$. For sufficiently large cutoff $\Lambda_b$, the integral over the thermal distribution function in (\ref{ae1}) can be expressed by the Riemann zeta function $\zeta(2) = \pi^2/6$. Thus, the final result for (\ref{ae1}) is $ 3\Lambda^2_b/4 \pi^2 + T^2/2$. Inserting the above result into the gap equation (\ref{gapT}) we find, with $m=0$, the following equality: \begin{equation} \label{ae2} h(S,T) \equiv a^2 - 8N_c g(S,T) + {3\Lambda_b^2 \over 4 \pi^2} + \half T^2 = 0. \end{equation} Eq. (\ref{ae2}) defines implicitly the function $S(T)$, which satisfies the equation \begin{equation} \label{ae3} {dS \over dT^2} = - { \partial h(S,T) / \partial T^2 \over \partial h(S,T) / \partial S} = \left[ 16 N_c {\partial g(S,T) \over \partial S} \right]^{-1}. \end{equation} Here we have neglected the term $\partial g(S,T) / \partial T^2$, since it is exponentially suppressed by the factor $\exp(-S/T)$. 
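The pole decomposition underlying Eq.~(\ref{ae1}) rests on the standard identity $T\sum_n (\omega_n^2+q^2)^{-1} = \frac{1}{2q}\left[1+\frac{2}{e^{q/T}-1}\right]$ for bosonic Matsubara frequencies $\omega_n = 2\pi n T$, which is easy to verify numerically (a sketch in arbitrary units, with our own function names):

```python
import math

def matsubara_sum(q, T, n_max=200000):
    """Truncated bosonic sum  T * sum_{n=-n_max}^{n_max} 1/(omega_n^2 + q^2),
    with omega_n = 2*pi*n*T."""
    total = 1.0 / q**2                     # n = 0 term
    for n in range(1, n_max + 1):
        w = 2.0 * math.pi * n * T
        total += 2.0 / (w * w + q * q)     # n and -n contribute equally
    return T * total

def bose_closed_form(q, T):
    """(1/2q) * [1 + 2/(exp(q/T) - 1)]  =  (1/2q) * coth(q/(2T))."""
    return (1.0 + 2.0 / math.expm1(q / T)) / (2.0 * q)
```

The `1` in the bracket gives the vacuum ($T$-independent) piece $3\Lambda_b^2/4\pi^2$ after the $q$-integration, and the thermal factor gives the $T^2/2$ piece via $\zeta(2)=\pi^2/6$.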
Furthermore, the leading-$N_c$ term on the right hand side of (\ref{ae3}) can be rewritten using the relations \cite{NBCRG96} $\partial g(S,T)/ \partial S = -2S f(S,0)$ and \mbox{$4 N_c f(S,0) = \overline{F}_{\pi}^2/S^2$}, where $\overline{F}_{\pi}$ is the leading-$N_c$ piece of the pion decay constant. Collecting these equalities we arrive at $dS/dT^2 = -S/(8\overline{F}_{\pi}^2)$, which finally gives \begin{equation} \label{ae4} S(T) = S(0) \left[ 1 - {T^2 \over 8\overline{F}_{\pi}^2} \right]. \end{equation} The proportionality (\ref{qq}) implies that the above expression coincides (in the large-$N_c$ limit) with Eq.~(\ref{eq:gl}). Hence our method is consistent with a basic requirement of chiral symmetry at the one-meson-loop level. \section{Results} \label{sec:res} In the exact chiral limit the model has 3 parameters: $a$, $\Lambda_f$, and $\Lambda_b$. In this paper we arbitrarily fix $\Lambda_b/\Lambda_f = \half$. The remaining 2 parameters are fixed by reproducing the physical value of $F_\pi=93{\rm MeV}$ and a chosen value for $\langle \overline{q} q \rangle_0$. For the case of $m \neq 0$ we have an extra parameter, $m$, which is fitted by requiring that the pion has its physical mass. We compare results with meson loops to results with the quark loop only ($\Lambda_b = 0$). Parameters for the two calculations are adjusted in such a way that the values of $F_\pi$, $\langle \overline{q} q \rangle_0$, and $m_\pi$ are the same. The calculation of $F_\pi$ with meson loops, although straightforward, is rather tedious, so we do not present it here. The method has been presented in detail in Refs.~\cite{NBCRG96,thesis}. The only difference in our calculation is that the three-momentum cut-off (\ref{3dc}) rather than the four-momentum cut-off of Ref.~\cite{NBCRG96} is used. \begin{figure}[b] \xslide{fig1.ps}{13.5cm}{45}{160}{550}{680} \caption{Dependence of the quark condensate on $T^2$ in the chiral limit $m_\pi=0$.
The curves correspond to the calculation with meson loops (solid line), with quark loops only (dashed line), and the lowest-order chiral expansion (dotted line). The parameters for the solid line and dashed line are adjusted in such a way that $F_\pi = 93{\rm MeV}$ and $\langle \overline{q} q \rangle_0 =-(184{\rm MeV})^3$. For the solid line $a=175{\rm MeV}$, $\Lambda_f = 723{\rm MeV}$, and $\Lambda_b = \half \Lambda_f$, whereas for the dashed line $a=201{\rm MeV}$, $\Lambda_f = 682{\rm MeV}$, and $\Lambda_b = 0$. } \label{fig:1} \end{figure} Figure \ref{fig:1} shows the dependence of $\langle \overline{q} q \rangle$ on $T^2$. The solid line represents the case with meson loops. We note that at low temperatures the curve has a finite slope, as required by Eq.~(\ref{ae4}). The slope is close to the leading-order Gasser-Leutwyler result (dotted curve). As explained earlier, the slopes would overlap in the large-$N_c$ limit. This behavior is radically different from the case with quark loops only (dashed curve). In that case at low temperatures \mbox{$\langle \overline{q} q \rangle_T - \langle \overline{q} q \rangle_0 \sim e^{-M/T}$}, where $M$ is the mass of the constituent quark. All derivatives of this function vanish at $T=0$, and $\langle \overline{q} q \rangle$ is flat at the origin. We can also see from the figure that the fall-off of the condensate is faster when the meson loops are included. In fact, for the parameters of Fig.~\ref{fig:1} we observe an interesting phenomenon: at $T = 162{\rm MeV}$ the condensate abruptly jumps to 0. There is a first-order phase transition, with a latent heat necessary to melt the quark condensate. Such behavior is not present in calculations without meson loops \cite{HK85,BMZ87,Lutz92}. We note that with meson loops present the chiral restoration temperature is $162{\rm MeV}$, {\em i.e.} about 10\% less than the $176{\rm MeV}$ of the quark-loop-only case.
\begin{figure}[b] \xslide{fig2.ps}{13.5cm}{45}{160}{550}{680} \caption{Same as Fig.~\ref{fig:1} for $m_\pi=139~{\rm MeV}$, $F_\pi = 93{\rm MeV}$, and $\langle \overline{q} q \rangle_0 =-(174{\rm MeV})^3$. For the solid line $a=164{\rm MeV}$, $\Lambda_f = 678{\rm MeV}$, $\Lambda_b = \half \Lambda_f$, and $m = 15{\rm MeV}$, whereas for the dashed line $a=175{\rm MeV}$, $\Lambda_f = 645{\rm MeV}$, $\Lambda_b = 0$, and $m = 15{\rm MeV}$. } \label{fig:2} \end{figure} Figure \ref{fig:2} shows the same study, but for the physical value of $m_\pi$. We note that now $\langle \overline{q} q \rangle$ (solid line) is also flat at the origin, since the pion is no longer massless, and at low $T$ we have \mbox{$\langle \overline{q} q \rangle_T - \langle \overline{q} q \rangle_0 \sim e^{-m_\pi/T}$}. Nevertheless, the region of this flatness is small, and at intermediate temperatures the curve remains close to the Gasser-Leutwyler expansion. We note again that meson loops considerably speed up the melting of the condensate compared to the case of quark loops only. However, there is no first-order phase transition such as in Fig.~\ref{fig:1}. Instead, we observe a smooth cross-over, typical for the case of $m \neq 0$. The faster change of the quark condensate in our study is not surprising. It is caused by the presence of light pions, which are known to play a dominant role at low temperatures \cite{Heid}. The behavior of $\langle \overline{q} q \rangle$ reflects this general feature. Concluding, we stress that the inclusion of meson loops in the NJL model qualitatively and quantitatively changes the results in comparison to calculations at the quark-loop level. In particular, we find a finite slope of $\langle \overline{q} q \rangle$ vs. $T^2$ at the origin in the chiral limit, faster melting of the condensate, and a lower chiral restoration temperature.
hep-th/9605097
\section{Introduction} In 1981 Polyakov \cite{Pol} showed that when non-critical strings are quantised so as to maintain reparametrisation invariance, the scale of the metric becomes a dynamical degree of freedom even though it decouples classically. Although the associated action is that of a soluble quantum field theory, the Liouville theory, the integration measure is not the usual one encountered in the functional approach to quantum field theory. Consequently it was unclear how to proceed until David, Distler and Kawai \cite{DDK} showed that the effect of the measure could be accounted for by a simple renormalisation of the action. In this paper we study the effects of boundaries on this approach, extending their results to the cases of open string theory and of the coupling of boundary conformal field theories to 2D quantum gravity. We start in section 2 with a brief review of the coupling of the minimal models to closed 2D quantum gravity. In section 3 we present our solution for the example of Polyakov's non-critical open bosonic string. The key point is the choice of boundary conditions on the Liouville field; thus, we discuss the Weyl anomaly cancellation for Neumann, Dirichlet and free boundary conditions. We use a linear Coulomb gas perturbative expansion \cite{MS,DF} to find the renormalised central charge of the conformally extended Liouville theory that describes the gravitational sector. As expected, this will be shown to be the same central charge as that calculated for the coupling on closed surfaces. Since the metric is to be written as a reference metric multiplied by the exponential of the Liouville field, the theory must be independent of a shift in this field together with a compensating Weyl transformation on the reference metric. This leads to the dressing of primary operators, which acquire conformal weight $(1,1)$ in the bulk and conformal weight $(1/2,1/2)$ on the boundary.
Consequently, we show that the Liouville field renormalisation is equal to the one found for closed surfaces, both in the bulk and on the boundary of the open surfaces. This only works for Neumann and free boundary conditions on the Liouville field. The Dirichlet boundary conditions freeze the Liouville boundary quantum dynamics, so that it is not possible to cancel all the boundary terms in the Weyl anomaly by a shift in the boundary values of the Liouville field without leading to a discontinuity in the metric as the boundary is approached. Due to the presence of the boundary we find new renormalised couplings to 2D gravity. Requiring Weyl invariance at the quantum level, we show that they are all determined by the bulk (closed-surface) couplings, as would be expected. We also show how the Coulomb gas screening charge selection rule is a crucial condition for the cancellation of non-local and Weyl anomalous contributions to the correlation functions due to zero modes. In section 4 we analyse the semi-classical limit, which singles out the free boundary conditions on the Liouville field as being the most natural. We define the open string susceptibility, the anomalous gravitational scaling dimensions and a new mass critical exponent. In the context of Yang-Mills theory this mass exponent has an interesting physical interpretation as the critical exponent associated with the Feynman propagator for a test particle which interacts with the gauge fields. In section 5 we generalise the open string analysis to a natural Feigin-Fuchs representation of $c\leq1$ minimal conformal field theories on open random surfaces. Finally, we present our conclusions. \section{Minimal Models On Closed Random Surfaces} We now review the aspects of the approach of David, Distler and Kawai \cite{DDK} to minimal models on closed random surfaces that will be useful when we consider boundaries. 
The Coulomb gas representation of conformal field theories due to Dotsenko and Fateev \cite{MS,DF} has a natural Lagrangian interpretation. We introduce the action \begin{eqnarray} {S_M}[\Phi,\tilde{g}]&={1\over{8\pi}}\int{d^2}\xi\sqrt{\tilde{g}} \left[{1\over{2}}{\tilde{g}^{ab}}{\partial_a}\Phi{\partial_b} \Phi+i\left(\beta-1/\beta\right)\tilde{R}\Phi\right]+\nonumber\\ &+{\mu^2}\int{d^2}\xi\sqrt{\tilde{g}}\left({e^{i\beta\Phi}}+ {e^{-i/\beta\Phi}}\right)\label{30} \end{eqnarray} \noindent to define the minimal unitary series of conformal field theories on closed surfaces. This is a conformally extended Liouville theory \cite{PM} with imaginary coupling, $i\beta$, on a surface with metric $\tilde{g}_{ab}$ and curvature $\tilde R$. The central charge of the matter theory is ${c_M}=1-6{{(\beta-1/\beta)}^2}$ which means the minimal models \cite{BPZ} are at the rational points ${\beta^2}=(2+k')/(2+k)$. The primary fields are vertex operators given by \[ U(jj')=\int{d^2}\xi \sqrt{\tilde{g}}\exp\left[-i\left(j\beta-{j'\over{\beta}}\right) \Phi\right] \] where $j,j'\geq{0}$ are half-integer spins labelling pairs of representations of the Virasoro algebra $A_1$. To couple this theory to gravity we treat $\tilde{g}_{ab}$ as a dynamical variable and add a cosmological constant term ${\mu_0^2}\int{d^2}\xi \sqrt{\tilde{g}}$ to the action. In the conformal gauge $\tilde{g}_{ab}$ is decomposed as a reparametrisation of ${e^{\varphi}}{{\hat{g}}_{ab}}$. Integrating over the matter field and reparametrisations generates a Weyl anomaly which yields a kinetic term for $\varphi$ if the matter central charge is not balanced by the corresponding reparametrisation ghost charge. 
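As a quick numerical illustration (ours, not from the paper), the rational points $\beta^2=(2+k')/(2+k)$ can be fed into $c_M=1-6(\beta-1/\beta)^2$ to recover familiar minimal-model central charges; for instance $k'=2$, $k=1$ gives $\beta^2=4/3$ and the Ising value $c_M=1/2$:

```python
from fractions import Fraction
import math

def c_matter(beta_sq):
    # c_M = 1 - 6 (beta - 1/beta)^2, evaluated from beta^2
    b = math.sqrt(beta_sq)
    return 1 - 6 * (b - 1/b)**2

def beta_sq(kp, k):
    # rational points beta^2 = (2 + k')/(2 + k) of the unitary series
    return Fraction(2 + kp, 2 + k)

# k'=2, k=1 -> beta^2 = 4/3 -> Ising, c_M = 1/2
print(round(c_matter(beta_sq(2, 1)), 10))   # 0.5
# k'=3, k=2 -> beta^2 = 5/4 -> tricritical Ising, c_M = 7/10
print(round(c_matter(beta_sq(3, 2)), 10))   # 0.7
```

The particular $(k',k)$ pairs above are our illustrative choices; any pair of non-negative integers lands on a point of the unitary series.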
For this non-critical theory there results a Liouville field theory for $\varphi$ \[ {S_L}\left[\varphi,\hat{g}\right]=-{{d-26}\over{48\pi}}\int{d^2}\xi \sqrt{\hat{g}}\left({1\over{2}}\varphi\hat{\Delta}\varphi+ \hat{R}\varphi\right)+{\mu_1^2}\int{d^2}\xi \sqrt{\hat{g}}{e^{\varphi}}, \] where $\hat\Delta$ is the covariant Laplacian $-(1/\sqrt{\hat g}){\partial_a}\sqrt {\hat g}{{\hat g}^{ab}} {\partial_b}$. The functional integral volume element for this theory is induced by the inner product on variations of the Liouville field \[ \left\|\delta\varphi\right\|^2_{\tilde{g}}=\int{d^2}\xi \sqrt{\hat{g}}{e^{\varphi}}{{(\delta\varphi)}^2}. \] This theory is highly non-linear and its complete solution has not yet been found \cite{DDK,LQT,GNM,KPZ}. The reason is the presence of $e^{\varphi}$ in the inner product, which means that the volume element is not the usual one that occurs in quantum field theory. According to David, Distler and Kawai this may be replaced by a conventional field theory measure provided the Liouville mode and its couplings to 2D quantum gravity are renormalised: \[ {S_L}\left[\phi,\hat{g}\right]={1\over{8\pi}}\int{d^2}\xi \sqrt{\hat{g}}\left[{1\over{2}} \phi\hat{\Delta}\phi+i\left(\gamma+{1\over{\gamma}}\right) \hat{R}\phi\right]+{\mu_2^2}\int{d^2}\xi\sqrt{\hat{g}}{e^{\alpha\phi}}. \] Since the separation of ${\tilde g}_{ab}$ into the scale $e^\phi$ and reference metric ${{\hat g}}_{ab}$ is arbitrary, the new theory is required to be invariant under simultaneous shifts in $\phi$ and compensating scalings of ${{\hat g}}_{ab}$. Thus a form of Weyl invariance must be preserved at the quantum level. When we integrate $\phi$ we generate a background Weyl anomaly which we add to the background anomaly coming from the integration of the matter field and the reparametrisation ghosts. The theory is Weyl invariant at the quantum level if this anomaly is absent and the amplitude is independent of the conformal factor of the reference metric. 
The anomaly cancellation sets the total central charge of the system to zero. This gives $\gamma=\pm{i}\beta$. Also the Liouville field renormalisation parameter $\alpha$ must satisfy $1-\alpha(\beta+ 1/\beta)+{\alpha^2}=0$ if we choose $\gamma=-i\beta$. Then we have two branches ${\alpha_{+}}=\beta$ and ${\alpha_{-}}=1/\beta$. The dressed vertex operators of vanishing conformal weight are \[ {U_D}(jj')=\int{d^2}\xi\sqrt{\hat{g}}\exp\left[\left(l\beta-{l'\over {\beta}}\right)\phi\right]\exp\left[-i\left(j\beta-{j'\over {\beta}}\right)\Phi\right] \] \noindent where $l=-j$, $l'=j'+1$ or $l=j+1$, $l'=-j'$. It is important to note that Weyl invariance at the quantum level is only possible because we have imposed an independent charge conservation selection rule \cite{DDK,MS,DF} on the matter and the gravitational sectors. For each sector the Gaussian integrals over $\Phi$ and $\phi$ yield contributions of the form of the exponential of \[ {\mathcal{F}^N}[g]= {1\over{16\pi}}\int{d^2}\xi'{d^2}\xi''\sqrt{g(\xi')}{J^N}(\xi') G(\xi',\xi'') \sqrt{g(\xi'')}{J^N}(\xi'') \] \noindent where $g_{ab}$ stands for either $\tilde{g}_{ab}$ or $\hat{g}_{ab}$, $J^N$ is the coefficient of the term in the action that is linear in the field and $G(\xi,\xi')$ is the covariant Laplacian's Green's function which satisfies \[ \Delta{G}(\xi,\xi')={{{\delta^2}(\xi-\xi')}\over{\sqrt{g(\xi)}}}- {1\over{\int{d^2}\xi'' \sqrt{g(\xi'')}}}, \] is symmetric in its arguments and orthogonal to the constant zero-mode \[ \int{d^2}\xi\sqrt{g(\xi)}G(\xi,\xi')=0. 
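As a sanity check (ours), the dressing condition $1-\alpha(\beta+1/\beta)+{\alpha^2}=0$ can be verified symbolically: both branches ${\alpha_{+}}=\beta$ and ${\alpha_{-}}=1/\beta$ solve it identically:

```python
import sympy as sp

alpha, beta = sp.symbols('alpha beta', positive=True)

# dressing condition for vanishing conformal weight of e^{alpha*phi}
expr = 1 - alpha*(beta + 1/beta) + alpha**2

# substituting either branch reduces the condition to zero
print(sp.simplify(expr.subs(alpha, beta)),
      sp.simplify(expr.subs(alpha, 1/beta)))   # 0 0
```

This is immediate by hand as well: the quadratic in $\alpha$ has product of roots $1$ and sum $\beta+1/\beta$.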
\] Due to the presence of the Laplacian's zero-mode we find a non-local Weyl anomaly: \begin{eqnarray*} &{\delta_\rho}{\mathcal{F}^N}=-{Q\over{8\pi}}\int{d^2}\xi \sqrt{g}{J^N}{\delta_\rho}\ln\int{d^2}\xi\sqrt{g}- \nonumber\\ &-{1\over{8\pi\int{d^2}\xi\sqrt{g}}}\int{d^2}\xi \sqrt{g}{J^N}\int{d^2}\xi'{d^2}\xi'' \sqrt{g(\xi')}\rho(\xi')G(\xi',\xi'') \sqrt{g(\xi'')}{J^N}(\xi''), \end{eqnarray*} \noindent where $Q$ is either $i(\beta-1/\beta)$ or $i(\gamma+1/\gamma)$. When we integrate the zero mode of the fields in each sector the charge selection rule gives $\int{d^2}\xi\sqrt{g}{J^N}=0$ for all non-zero contributions to the amplitude, leading to the cancellation of the non-local anomaly. Using a simple scaling argument, David, Distler and Kawai's approach leads to the critical exponents of the random surfaces. We find the susceptibility exponent \cite{DDK,ZCKT} $\Gamma({\chi_c})=2-{\chi_c}(\beta+1/\beta)/ (2{\alpha_{\pm}})$, where ${\chi_c}=2-2h$ is the Euler characteristic of the closed Riemann surface given in terms of its genus $h$. It is related to the world-sheet integral of $\tilde{R}$ by the Gauss-Bonnet theorem: \begin{equation} \int{d^2}\xi\sqrt{\tilde{g}}\tilde{R}=4\pi{\chi_c}\label{5}. \end{equation} The semi-classical limit corresponds to $\beta\to+\infty$ and, as expected, it selects the solution $\alpha_{+}=\beta$. We also get the gravitational scaling dimensions of the matter primary fields \cite{DDK} $\Delta(jj')=1-\beta(jj')_{\pm}/\alpha_{\pm}$. Here $\beta(jj')_{\pm}$ defines the coefficient of the two possible dressings of the primary field $U(jj')$. When this is combined with the bare conformal weight using the equation which defines $\alpha$ it gives the KPZ equation \cite{KPZ}: \[ \Delta-{\Delta_0}=-{\alpha^2}\Delta(\Delta-1). \] These results for the critical exponents of a $c\leq1$ minimal conformal field theory on closed random surfaces agree with the KPZ light-cone analysis on the sphere \cite{KPZ}. 
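A small worked instance of the KPZ equation may be helpful (our illustrative inputs, not taken from the text): for Ising matter, $\beta^2=4/3$, the dressing branch $\alpha=1/\beta$ gives $\alpha^2=3/4$, and the spin operator has bare weight $\Delta_0=1/16$; solving $\Delta-\Delta_0=-\alpha^2\Delta(\Delta-1)$ then reproduces the well-known dressed dimension $1/6$:

```python
import sympy as sp

Delta = sp.symbols('Delta')

# Illustrative inputs (our assumption): Ising matter with beta^2 = 4/3,
# dressed on the alpha = 1/beta branch, so alpha^2 = 3/4; the bare
# conformal weight of the spin operator is Delta_0 = 1/16.
alpha_sq = sp.Rational(3, 4)
Delta_0 = sp.Rational(1, 16)

kpz = sp.Eq(Delta - Delta_0, -alpha_sq * Delta * (Delta - 1))
roots = sp.solve(kpz, Delta)
print(sorted(roots))   # [-1/2, 1/6]; the physical root is 1/6
```

The root $1/6$ agrees with the standard KPZ value for the gravitationally dressed Ising spin operator.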
Distler, Hlousek and Kawai \cite{DHK} also used this conformal gauge approach to calculate the Hausdorff dimension of the random surfaces. All the results are in striking agreement with those of the theory of dynamically triangulated random surfaces \cite{KM}. Our aim in this paper is to see if this picture still holds when the random surfaces have boundaries and the minimal model becomes a boundary conformal field theory. The main issue is the choice of boundary conditions on the gravitational sector. Since the Liouville theory has a natural generalisation in the presence of boundaries, we expect the coupling of the minimal boundary conformal field theories to 2D quantum gravity to be again described by two conformally extended Liouville theories which are complementary. We start by presenting our solution in the simple case of Polyakov's open bosonic string. \section{Open String 2D Quantum Gravity} \subsection{Free boundary conditions} For simplicity let us consider Polyakov's open bosonic string partition function $Z$ for the topology of a disc \cite{Dur,OAL,Man}. We take free boundary conditions on the string field $X^{\mu}$ and on the Liouville conformal gauge factor $\varphi$. In the case of the reparametrisation ghosts $\theta^a$ we consider diffeomorphisms which preserve the parameter domain in $\mathcal{R}^2$ but allow for general reparametrisations along the boundary. This means that the component of $\theta^a$ along the outward normal to the boundary must vanish, $\tilde{n}\cdot\theta=0$, but its component along the tangent $\tilde{t}\cdot\theta$ is kept free, just like the boundary values of $X^{\mu}$ and $\varphi$. More precisely, we initially require that $X^{\mu}$, $\varphi$ and $\tilde{t}\cdot\theta$ take prescribed values $Y^{\mu}$, $\psi$ and $\eta$ on the boundary and then we integrate over these boundary values \cite{Man}. 
The functional $Z[Y,\psi,\eta]$, obtained as an intermediate step, has the physical interpretation of being the tree-level (in the sense of string loops) contribution to the wave-functional of the vacuum for closed string theory in the Schr\"odinger representation. The quantum partition function is thus given by \[ Z=\int{\mathcal{D}_{\tilde{g}}}(Y,\psi,\eta) Z[Y,\psi,\eta] \] \noindent where the wave functional is \begin{equation} Z[Y,\psi,\eta]=\int{\mathcal{D}_{\tilde{g}}}X {\mathcal{D}_{\tilde{g}}}\tilde{g}\exp\left\{- S[X,\tilde{g}]\right\}\label{29}. \end{equation} \noindent The action consists of the standard bosonic string matter action of Brink, Di Vecchia and Howe plus renormalisation counterterms: \[ S[X,\tilde{g}]={1\over{16\pi}}\int{d^2}\xi\sqrt{\tilde{g}} {\tilde{g}^{ab}}{\partial_{a}}{X^{\mu}}{\partial_{b}}{X^{\nu}} {\eta_{\mu\nu}}+{\mu_0^2} \int{d^2}\xi\sqrt{\tilde{g}}+{\lambda_0}\oint{d}\tilde{s}+{\nu_0}\oint {d}\tilde{s}{k_{\tilde{g}}}. \] \noindent The cosmological constant terms in the area ${\mu_0^2}\int{d^2}\xi\sqrt{\tilde{g}}$, the invariant length of the boundary ${\lambda_0}\oint{d}\tilde{s}$ and the integral of its geodesic curvature ${\nu_0}\oint{d}\tilde{s}{k_{\tilde{g}}}$ are the non-trivial pure gravity contributions to the action in two dimensions. The first two are necessary as counterterms due to short distance singularities. Although the geodesic curvature counterterm is not associated with divergences, we will see that it is absolutely necessary for our solution. Here we note that this term can be written as $({\nu_0}/2)\int{d^2}\xi\sqrt{\tilde{g}}\tilde{R}$ if we use the Gauss-Bonnet theorem \begin{equation} \int{d^2}\xi\sqrt{\tilde{g}}\tilde{R}+2\oint{d}\tilde{s} {k_{\tilde{g}}}=4\pi{\chi_o}\label{18} \end{equation} \noindent where ${\chi_o}$ is the Euler characteristic of the open Riemann surface. It is given by ${\chi_o}=2-2h-b$ where $h$ is the genus of the surface and $b$ the number of smooth boundaries. 
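A minimal numerical sketch of the open-surface Gauss-Bonnet theorem, eq. (\ref{18}), for the flat unit disc (our illustrative geometry): there $\tilde{R}=0$ in the bulk, the boundary circle has geodesic curvature $k_{\tilde{g}}=1$, and ${\chi_o}=1$ since $h=0$, $b=1$:

```python
import numpy as np

# Flat unit disc: R = 0 in the bulk, geodesic curvature k_g = 1 along the
# unit circle, Euler characteristic chi_o = 2 - 2h - b with h = 0, b = 1.
h_genus, b = 0, 1
chi_o = 2 - 2*h_genus - b                     # = 1

theta = np.linspace(0, 2*np.pi, 4000, endpoint=False)
ds = np.full_like(theta, 2*np.pi/theta.size)  # boundary line elements
bulk = 0.0                                    # \int d^2 xi sqrt(g) R
boundary = 2*np.sum(1.0*ds)                   # 2 \oint ds k_g, with k_g = 1
print(np.isclose(bulk + boundary, 4*np.pi*chi_o))   # True
```

The check is trivial for the disc, but it fixes the relative factor of $2$ between the bulk and boundary terms that the geodesic-curvature counterterm relies on.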
Note also that in the open string the Gauss-Bonnet theorem cannot fix both the integrals of the scalar curvature $\tilde{R}$ and of the geodesic curvature $k_{\tilde{g}}$, so that we should allow one of these as a pure gravity contribution to the action. To calculate $Z$ let us first determine the wave functional $Z[Y,\psi,\eta]$. We start by separating $X^{\mu}$ into two parts ${X^{\mu}}={X_c^{\mu}}+{\bar{X}^{\mu}}$. We define $X_c^{\mu}$ and $\bar{X}^{\mu}$ in such a way that the string action gets split into two independent pieces, one for $X_c^{\mu}$ which contains all the dependence on the boundary value $Y^{\mu}$ and another for $\bar{X}^{\mu}$. This is easily done if we fix $X_c^{\mu}$ using $Y^{\mu}$, \begin{equation} \tilde{\Delta}{X_c^{\mu}}=0,\quad{X_c^{\mu}}{|_{B}}={Y^{\mu}}, \label{6} \end{equation} \noindent and impose on $\bar{X}^{\mu}$ a homogeneous Dirichlet boundary condition ${\bar{X}^{\mu}}{|_{B}}=0$. Here we have used the notation $B$ to indicate that the fields are evaluated at a point $\xi$ on the boundary $B$. Eq. (\ref{6}) is solved in terms of $Y^{\mu}$ using the homogeneous Green's function for the Laplacian with Dirichlet boundary conditions defined for the metric $\tilde{g}_{ab}$. We will separate the boundary value $Y^\mu$ into a constant piece and a piece orthogonal to it with respect to the natural metric on the boundary, i.e. we write ${Y^{\mu}}={Y_0^{\mu}}+{\bar{Y}^{\mu}}$ where $\oint{d}\tilde{s}{\bar{Y}^{\mu}}=0$. Then if $\partial_{\tilde{n}}$ is the outward normal derivative on the boundary the solution is \begin{equation} {X_c^{\mu}}(\xi')={Y_0^{\mu}}-\oint{d}\tilde{s}(\xi) {\partial_{\tilde{n}}} {\tilde{G}_{D}} (\xi,\xi'){\bar{Y}^{\mu}}(\xi)\label{7} \end{equation} \noindent if the point $\xi'$ is not on the boundary and ${X_c^{\mu}}{|_{B}}={Y^{\mu}}$ if it is. 
Of course here we have considered \begin{equation} \tilde{\Delta}{\tilde{G}_D}(\xi,\xi')={{{\delta^2}(\xi-\xi')}\over {\sqrt{\tilde{g}(\xi)}}}\label{8} \end{equation} \noindent where ${\tilde{G}_D}(\xi,\xi')=0$ if either argument lies on the boundary. In this case we can integrate eq. (\ref{8}) leading to an integral condition on its outward normal derivative \[ \oint{d}\tilde{s}(\xi){\partial_{\tilde{n}}}{\tilde{G}_D}(\xi,\xi')=-1, \] which allows the decomposition of $X_c^{\mu}$ given in eq. (\ref{7}). The string action can now be cast in the form $S[X,\tilde{g}]={S_c} [{X_c},\tilde{g}]+S[\bar{X},\tilde{g}]$. The action for $\bar{X}^{\mu}$ is just the free bosonic action where the kinetic kernel is the covariant Laplacian. To find ${S_c}[{X_c},\tilde{g}]$ as a boundary action we take a total derivative and use eq. (\ref{6}). We may write the result introducing the boundary kinetic kernel ${\tilde{K}_D}(\xi,\xi')=-1/(8\pi) {\partial_{\tilde{n}}}{\partial_{\tilde{n}'}}{\tilde{G}_D} (\xi,\xi')$: \[ {S_c}[{X_c},\tilde{g}]={1\over{2}}\oint{d}\tilde{s}(\xi)d\tilde{s} (\xi')Y(\xi)\cdot{\tilde{K}_D}(\xi,\xi')Y(\xi'). \] In standard fashion \cite{Pol,Pho,Dur,OAL,Man} the functional integration measure ${\mathcal{D}_{\tilde{g}}}X$ is characterised by an $\mathcal{L}^2$ norm for variations of $X^{\mu}$ \[ \left\|\delta{X}\right\|^2_{\tilde{g}}=\int{d^2}\xi\sqrt{\tilde{g}} \delta {X}\cdot\delta{X},\quad\int{\mathcal{D}_{\tilde{g}}}\delta{X}{e^{- \left\|\delta{X}\right\|^2_{\tilde{g}}}}=1. \] \noindent When we integrate $X^{\mu}$ keeping $Y^{\mu}$ fixed ${\mathcal{D}_{\tilde{g}}}X$ is actually ${\mathcal{D}_{\tilde{g}}}\bar{X}$. 
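The integral condition $\oint{d}\tilde{s}\,{\partial_{\tilde{n}}}{\tilde{G}_D}=-1$ can be checked numerically on the flat unit disc (our illustrative geometry and source point), where the Dirichlet Green's function of $\tilde{\Delta}=-\nabla^2$ is given by the standard image-charge construction:

```python
import numpy as np

# Image-charge Green's function of -grad^2 on the flat unit disc with
# Dirichlet boundary conditions (illustrative geometry, not from the paper).
def G_D(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    ystar = y / np.dot(y, y)                 # image point y/|y|^2
    return (np.log(np.linalg.norm(y) * np.linalg.norm(x - ystar))
            - np.log(np.linalg.norm(x - y))) / (2*np.pi)

y = np.array([0.3, 0.2])                     # interior source point (assumed)
M = 2000
thetas = np.linspace(0, 2*np.pi, M, endpoint=False)
h = 1e-5                                     # radial step for the derivative

total = 0.0
for t in thetas:
    nvec = np.array([np.cos(t), np.sin(t)])  # outward normal on |x| = 1
    dG = (G_D((1+h)*nvec, y) - G_D((1-h)*nvec, y)) / (2*h)
    total += dG * (2*np.pi/M)                # ds = dtheta on the unit circle
print(round(total, 6))                       # -1.0
```

Since $-{\partial_{\tilde{n}}}{\tilde{G}_D}$ is just the Poisson kernel of the disc, the result is independent of the chosen interior point, which is the content of the decomposition in eq. (\ref{7}).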
For the integration over the metric ${\mathcal{D}_{\tilde{g}}}\tilde{g}$ we need to consider the similar $\mathcal{L}^2$ norm for $\tilde{g}_{ab}$ \[ \left\|\delta\tilde{g}\right\|^2_{\tilde{g}}=\int{d^2}\xi \sqrt{\tilde{g}} \left({\tilde{g}^{ac}}{\tilde{g}^{bd}}+u{\tilde{g}^{ab}} {\tilde{g}^{cd}} \right)\delta{\tilde{g}_{ab}}\delta{\tilde{g}_{cd}} \] \noindent where $u$ is a non-negative constant. In the conformal gauge we decompose the integration over $\tilde{g}_{ab}$ into an integration over $\varphi$ and an integration over $\theta^a$. On the disc an arbitrary infinitesimal variation of $\tilde{g}_{ab}$ is $\delta{\tilde{g}_{ab}}=\delta\varphi {\tilde{g}_{ab}}+{\tilde{\nabla}_a} \delta{\theta_b}+{\tilde{\nabla}_b}\delta{\theta_a}$, where $\tilde {\nabla}_a$ is the covariant derivative in the metric $\tilde{g}_{ab}$. The variations of $\tilde{g}_{ab}$ induced by the reparametrisation ghosts and by Weyl transformations are not orthogonal. They intersect in the conformal Killing vectors ${\tilde{P}_{ab}}(\delta\theta)=0$, where $\tilde{P}_{ab}$ acts on vectors to make symmetric, traceless tensor fields ${\tilde{P}_{ab}}(\delta\theta)={\tilde{\nabla}_a} \delta{\theta_b}+{\tilde{\nabla}_b}\delta{\theta_a}-{\tilde{g}_{ab}} {\tilde{\nabla}_e}\delta{\theta^e}$. The adjoint acts on such tensor fields $h_{ab}$ to make vectors, $({\tilde{P}^{\dagger}}h)_b=-2{\tilde{\nabla}_a}{h^a_b}$. Then redefining $\varphi$ we write \[ \left\|\delta\tilde{g}\right\|^2_{\tilde{g}}=2(1+2u)\int{d^2}\xi \sqrt{\tilde{g}}{{(\delta \varphi)}^2}+\int{d^2}\xi\sqrt{\tilde{g}}{\tilde{g}^{ac}} {\tilde{g}^{bd}}{\tilde{P}_{ab}}(\delta\theta){\tilde{P}_{cd}}(\delta \theta). \] \noindent We now split $\theta^a$ into a field $\bar{\theta}^a$ vanishing at the boundary and another field $\vartheta^a$ such that at the boundary ${\vartheta^a}=\eta{\tilde{t}^a}$. 
Assuming that $\vartheta^a$ is fixed by its boundary value in some way we obtain: \[ \left\|\delta\tilde{g}\right\|^2_{\tilde{g}}=2(1+2u)\int{d^2}\xi \sqrt{\tilde{g}}{{(\delta \varphi)}^2}+\int{d^2}\xi\sqrt{\tilde{g}}\delta\bar{\theta}\cdot {\tilde{P}^{\dagger}}\tilde{P}(\delta\bar{\theta}). \] Omitting the renormalisation counterterms we integrate $\bar{X}^{\mu}$ and $\bar{\theta}$ to find \[ Z[Y,\psi,\eta]=\exp\left\{- {S_c}[{X_c},\tilde{g}]\right\}\int{\mathcal{D}_{\tilde{g}}}\varphi {{\left(Det'\tilde{\Delta}\right)}^{-d/2}}{{\sqrt{Det' {\tilde{P}^{\dagger}} \tilde{P}}}\over{Vol(CKV)}} \] \noindent where the prime denotes the omission of the zero modes and we have divided by the volume of the space of conformal Killing vectors $Vol(CKV)$. As is well known, these infinite determinants generate a Weyl anomaly \cite{Pol,Pho,Dur,OAL,Man}. If we use the covariant heat kernel to regularise them it is easy to see that the Weyl anomaly only depends on the values of the heat kernels for a small proper time cutoff $\sqrt{\varepsilon}$. This means that the Weyl anomaly is a local phenomenon which only reflects the structure of the world-sheet at short distances. Since $\sqrt{\varepsilon}$ can be made infinitesimally small, the bulk and boundary contributions to the anomaly must be independent. 
Using locality, reparametrisation invariance, dimensional analysis and the commutativity of Weyl transformations we are led to the following expansion in powers of the proper time cutoff $\sqrt{\varepsilon}$: \begin{eqnarray*} &{\delta_{\rho}}\ln\left[{{\left(Det'\tilde{\Delta}\right)}^{-d/2}} {{\sqrt{Det'{\tilde{P}^{\dagger}}\tilde{P}}}\over{Vol(CKV)}}\right]= {{d-26}\over{48\pi}}\int{d^2}\xi\sqrt{\tilde{g}}\tilde{R}\rho+ {{d-26}\over{24\pi}}\oint{d}\tilde{s}{k_{\tilde{g}}}\rho+\nonumber\\ &+{C_1}\oint{d}\tilde{s}{\partial_{\tilde{n}}}\rho+ {{C_2}\over{\varepsilon}}\int{d^2}\xi\sqrt{\tilde{g}}\rho+ {{C_3}\over{\sqrt{\varepsilon}}}\oint{d}\tilde{s}\rho+ O(\sqrt{\varepsilon}), \end{eqnarray*} \noindent where the $C_i$ are dimensionless constants which can be determined exactly \cite{Dur,Man}. Here we will not worry about them because they are all absorbed in the renormalisation counterterms. Integrating the infinitesimal variation leads to the usual Liouville action plus background contributions depending on the reference metric of the conformal gauge $\hat{g}_{ab}$: \[ Z[Y,\psi,\eta]=\exp\left\{- {S_c}[{X_c},\tilde{g}]\right\}\int{\mathcal{D}_{\tilde{g}}}\varphi {{\left(Det'\hat{\Delta}\right)}^{-d/2}}{{\sqrt{Det'{\hat{P}^ {\dagger}} \hat{P}}}\over{Vol(CKV)}}\exp\left\{- {S_L}[\varphi,\hat{g}]\right\}, \] \noindent where the Liouville action is given by \begin{eqnarray*} {S_L}\left[\varphi,\hat{g}\right]&=-{{d-26}\over{48\pi}}\int{d^2}\xi \sqrt{\hat{g}}\left({1\over{2}}{\hat{g}^{ab}}{\partial_a} \varphi{\partial_b}\varphi+\hat{R}\varphi\right)-{{d-26}\over{24\pi}} \oint{d}\hat{s}{k_{\hat{g}}}\varphi+\nonumber\\ &+{\mu_1^2}\int{d^2}\xi \sqrt{\hat{g}}{e^{\varphi}}+{\lambda_1}\oint{d}\hat{s}{e^{\varphi/2}}+ {\nu_1}\oint{d}\hat{s}{\partial_{\hat{n}}}\varphi. \end{eqnarray*} \noindent Here $\mu_1^2$, $\lambda_1$ and $\nu_1$ are arbitrary finite constants left over from the renormalisation process. 
Next we start the integration of the Liouville mode and determine the renormalisation of the couplings to 2D quantum gravity. \subsubsection{Anomaly cancellation for coupling renormalisation} To integrate the Liouville mode we start by taking the Coulomb gas perturbative approach, expanding the area cosmological constant counterterm. In each order of perturbation theory we split $\varphi$ into two fields $\varphi_c$, $\bar{\varphi}$ in exactly the same way as we split $X^{\mu}$ previously. As before the Liouville action becomes the sum of two independent pieces, ${S_L}[{\varphi_c},\hat{g}]$, which contains all the dependence on the boundary value $\psi$, and ${S_L}[\bar{\varphi},\hat{g}]$. We further split $\psi={\psi_0}+\bar{\psi}$ into a constant $\psi_0$ and an orthogonal piece $\bar{\psi}$. The field $\varphi_c$ is now expressed in terms of $\bar{\psi}$ and $\psi_0$: \begin{equation} {\varphi_c}(\xi')={\psi_0}-\oint{d}\hat{s}(\xi){\partial_{\hat{n}}} {\hat{G}_D} (\xi,\xi')\bar{\psi}(\xi)\label{13}. \end{equation} Let us take the lowest order in the area cosmological constant perturbative expansion. When we integrate $\varphi$ we consider a fixed value of $\psi$. Then ${\mathcal{D}_{\tilde{g}}}\varphi= {\mathcal{D}_{\tilde{g}}}\bar{\varphi}$ and the lowest order contribution to the wave functional is given by \[ {Z^{00}}[Y,\psi,\eta]=\exp\left\{- {S_c}[{X_c},\tilde{g}]-{S_c^0}[{\varphi_c},\hat{g}]\right\} {{\left(Det'\hat{\Delta}\right)}^{-d/2}}{{\sqrt{Det'{\hat{P}^ {\dagger}} \hat{P}}}\over{Vol(CKV)}}{\bar{Z}^0}[Y,\psi,\eta] \] \noindent where \[ {\bar{Z}^0}[Y,\psi,\eta]=\int{\mathcal{D}_{\tilde{g}}}\bar{\varphi} \exp\left\{-{\bar{S}^0}[\bar{\varphi},\hat{g}]\right\}. 
\] \noindent Above we have introduced the lowest order Liouville actions for $\bar{\varphi}$ \[ {\bar{S}^0}[\bar{\varphi},\hat{g}]=-{{d-26}\over{48\pi}}\int{d^2} \xi\sqrt{\hat{g}}\left({1\over{2}}\bar{\varphi}\hat{\Delta} \bar{\varphi}+\hat{R}\bar{\varphi}\right)+{\nu_1}\oint{d}\hat{s} {\partial_{\hat{n}}}\bar{\varphi} \] and for $\varphi_c$ \begin{eqnarray} &{S_c^0}\left[{\varphi_c},\hat{g}\right]=-{{d-26}\over{48\pi}} \int{d^2}\xi \sqrt{\hat{g}}\left({1\over{2}}{\hat{g}^{ab}}{\partial_a} {\varphi_c}{\partial_b}{\varphi_c}+\hat{R}{\varphi_c}\right)- \nonumber\\ &-{{d-26}\over{24\pi}} \oint{d}\hat{s}{k_{\hat{g}}}{\varphi_c}+ {\lambda_1}\oint{d}\hat{s}{e^{{\varphi_c}/2}}\label{14}. \end{eqnarray} The functional integration measure for the integral over $\bar{\varphi}$ is conformally invariant but non-linear in the Liouville field: \[ \left\|\delta{\bar{\varphi}}\right\|^2_{\tilde{g}}=\int{d^2}\xi \sqrt{\hat{g}}{e^{\varphi}}{{(\delta{\bar{\varphi}})}^2}. \] \noindent To proceed we need to use David, Distler and Kawai's renormalisation ansatz \cite{DDK}. We may consider a canonical measure in the background $\hat{g}_{ab}$, \[ \left\|\delta{\bar{\phi}}\right\|^2_{\hat{g}}=\int{d^2}\xi \sqrt{\hat{g}}{{(\delta{\bar{\phi}})}^2}, \] \noindent provided we renormalise the Liouville field and its couplings to 2D gravity. Observe that this renormalisation involves the whole Liouville field. As pointed out by Symanzik \cite{SYM}, in the presence of a boundary we should expect to take independent bulk and boundary renormalisations. Since the boundary pieces of the Liouville mode are fixed at this stage, we do not need to worry about them for the time being. We also note that the canonical measure can only be introduced if a set of background counterterms is included: \[ {S_R}(\hat{g})={\mu_3^2}\int{d^2}\xi\sqrt{\hat{g}}+{\lambda_3} \oint{d}\hat{s}+{\nu_3}\oint{d}\hat{s}{k_{\hat{g}}}. 
\] \noindent When we renormalise the field $\bar{\varphi}\to\alpha\bar{\phi}$ and its couplings to gravity we get the following renormalised lowest order Liouville action: \[ {\bar{S}^0}[\bar{\phi},\hat{g}]={1\over{8\pi}}\int{d^2} \xi\sqrt{\hat{g}}\left({1\over{2}}\bar{\phi}\hat{\Delta}\bar{\phi}+ Q\hat{R}\bar{\phi}\right)+{\nu_2}\oint{d}\hat{s}{\partial_{\hat{n}}} \bar{\phi}. \] The renormalised parameters of the theory are determined by requiring invariance under a shift in $\phi$ and a compensating Weyl transformation of the reference metric. Once $\phi$ has been integrated out the result is required to be invariant under Weyl transformations of the metric alone. For the moment we integrate $\bar{\phi}$. To do so we need to follow Alvarez \cite{OAL} and set $\nu_2$ to zero because the standard way to deal with a term that is linear in the field is to shift the integration variable, in this case by a constant, but this would spoil the homogeneous Dirichlet condition on $\bar\phi$. Next we change variables as follows $\sqrt{8\pi}\bar{\phi}\to\bar{\phi}+{\hat{O}_Q^0}$. Here we have set \[ {\hat{O}_Q^0} (\xi')=\int{d^2}\xi\sqrt{\hat{g}(\xi)}{\hat{J}_Q^0}(\xi) {\hat{G}_D}(\xi,\xi'), \quad{\hat{O}_Q^0}{|_{B}}=0 \] \noindent and introduced the current ${\hat{J}_Q^0}=Q\hat{R}$. As a result we get the free field integrand \[ {S_F}[\bar{\phi},\hat{g}]={1\over{2}}\int{d^2}\xi\sqrt{\hat{g}} \bar{\phi}\hat{\Delta}\bar{\phi} \] \noindent plus the non-local functional \begin{equation} {\mathcal{F}_D^0}[\hat{g}]={{Q^2}\over{16\pi}}\int{d^2}\xi{d^2} \xi'\sqrt{\hat{g}(\xi)}\hat{R}(\xi){\hat{G}_D}(\xi,\xi') \sqrt{\hat{g}(\xi')} \hat{R}(\xi')\label{11}. \end{equation} \noindent Because there is no zero mode ${\hat{G}_D}(\xi,\xi')$ is Weyl invariant for distinct values of its arguments (coincident values require regularisation which introduces dependence on the scale of the metric). Thus, the Weyl anomaly associated with eq. 
(\ref{11}) is determined by the scaling of the current \begin{equation} {\delta_{\rho}}\sqrt{\hat{g}}\hat{R}=\sqrt{\hat{g}} \hat{\Delta}\rho\label{17}. \end{equation} Integrating by parts we find: \begin{equation} {\delta_{\rho}}{\mathcal{F}_D^0}={{Q^2}\over{8\pi}}\int{d^2}\xi \sqrt{\hat{g}}\hat{R}\rho+{{Q^2}\over{8\pi}} \oint{d}\hat{s}(\xi)\int{d^2}\xi'\rho(\xi){\partial_{\hat{n}}} {\hat{G}_D}(\xi,\xi') \sqrt{\hat{g}(\xi')}\hat{R}(\xi')\label{35}. \end{equation} \noindent The product of functional determinants resulting from the integration over the matter field, the reparametrisations and $\bar \phi$ also varies under a Weyl transformation: \begin{eqnarray} &{\delta_{\rho}}\ln\left[{{\left(Det'\hat{\Delta}\right)}^{-(d+1)/2}} {{\sqrt{Det'{\hat{P}^{\dagger}}\hat{P}}}\over{Vol(CKV)}}\right]= {{d-25}\over{48\pi}}\int{d^2}\xi\sqrt{\hat{g}}\hat{R}\rho+ {{d-25}\over{24\pi}}\oint{d}\hat{s}{k_{\hat{g}}}\rho+\nonumber\\ &+{{C'}_1}\oint{d}\hat{s}{\partial_{\hat{n}}}\rho+ {{{C'}_2}\over{\varepsilon}}\int{d^2}\xi\sqrt{\hat{g}}\rho+ {{{C'}_3}\over{\sqrt{\varepsilon}}}\oint{d}\hat{s}\rho+ O(\sqrt{\varepsilon})\label{36}, \end{eqnarray} \noindent where the ${C'}_i$ are dimensionless constants which as before can be determined exactly. Ignoring the counterterms for the moment we cancel the bulk local piece of the Weyl anomaly between eqs. (\ref{35}) and (\ref{36}) if we set \[ Q=\pm\sqrt{{25-d}\over{6}}. \] \noindent Since $\rho$ is an arbitrary infinitesimal Weyl scaling in the bulk and on the boundary of the surface we also need to deal with the non-local term and with the local boundary contribution in the geodesic curvature found respectively in eqs. (\ref{35}) and (\ref{36}). To do so we have to consider the integration over the boundary values of the Liouville field. First we integrate $Y^{\mu}$ and $\eta$. 
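The value of $Q$ can be cross-checked against total central charge balance. Assuming (our normalisation convention, consistent with the $1/8\pi$ kinetic term used above) that a Liouville field with background charge $Q$ contributes $c_L=1+6Q^2$, while the matter contributes $d$ and the reparametrisation ghosts $-26$:

```python
import sympy as sp

Q, d = sp.symbols('Q d', real=True)

# Assumed normalisation: c_L = 1 + 6*Q**2 for a Liouville field with
# background charge Q; anomaly cancellation requires d + c_L - 26 = 0.
total_charge = d + (1 + 6*Q**2) - 26

sols = sp.solve(sp.Eq(total_charge, 0), Q)
print([sp.simplify(s**2) for s in sols])   # both equal (25 - d)/6
```

Both roots square to $(25-d)/6$, reproducing $Q=\pm\sqrt{(25-d)/6}$.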
The boundary measures for these fields are induced by the natural reparametrisation invariant inner products on variations of the boundary values: \[ \left\|\delta{Y}\right\|^2_{\tilde{g}}=\oint{d}\tilde{s}\delta{Y} \cdot\delta{Y},\quad \left\|\delta\eta\right\|^2_{\tilde{g}}=\oint{d}\tilde{s} {{(\delta\eta)}^2}. \] \noindent As the formalism is explicitly reparametrisation invariant, the integration over $\eta$ is trivial, leading to an overall factor. For the boundary matter field we find: \begin{equation} \int{\mathcal{D}_{\tilde{g}}}Y\exp\left\{-{S_c}[{X_c},\tilde{g}] \right\}={{\left({{Det'{\tilde{K}_D}}\over{\oint{d}\tilde{s}}} \right)}^{-d/2}}\int{\prod_{\mu}}d{Y_0^{\mu}}\label{12}. \end{equation} Above we took into account the zero mode of the boundary kernel $\hat{K}_D$. Its existence can be seen by considering the eigenvalue problem \[ \oint{d}\hat{s}(\xi){\hat{K}_D}(\xi,\xi'){\hat{v}_N}(\xi)= {\hat{\lambda}_N} {\hat{v}_N}(\xi'). \] \noindent These eigenfunctions form a complete and orthonormal set of functions on the boundary: \[ {\sum_N}{\hat{v}_N}(\xi){\hat{v}_N}(\xi')={\hat{\delta}_B}(\xi-\xi'), \quad \oint{d}\hat{s}(\xi){\hat{v}_N}(\xi){\hat{v}_M}(\xi)={\delta_{NM}}. \] \noindent Here the boundary delta function is defined by $\oint{d}\hat{s}(\xi){\hat{\delta}_B}(\xi-\xi')f(\xi)=f(\xi')$. Then the eigenvalues may be expressed as \begin{eqnarray*} &{\hat{\lambda}_N}=\oint{d}\hat{s}(\xi)d\hat{s}(\xi'){\hat{v}_N}(\xi) {\hat{K}_D} (\xi,\xi'){\hat{v}_N}(\xi')=\nonumber\\ &=-{1\over{8\pi}}\oint{d} \hat{s}(\xi)d\hat{s}(\xi'){\hat{v}_N} (\xi) {\partial_{\hat{n}}}{\partial_{\hat{n}'}}{\hat{G}_D}(\xi,\xi') {\hat{v}_N}(\xi'). \end{eqnarray*} Now define $\hat V_N$ to be the solution of Laplace's equation with boundary value $\hat{v}_N$: \[ \hat{\Delta}{\hat{V}_N}=0,\quad{\hat{V}_N}{|_B}={\hat{v}_N}. 
\] \noindent This has the solution \[ {\hat{V}_N}(\xi')=-\oint{d}\hat{s}(\xi){\partial_{\hat{n}}}{\hat{G}_D} (\xi,\xi'){\hat{v}_N}(\xi), \] enabling us to write the eigenvalues as \[ {\hat{\lambda}_N}={1\over{8\pi}}\oint{d}\hat{s}(\xi){\hat{V}_N}(\xi) {\partial_{\hat{n}}}{\hat{V}_N}(\xi)= {1\over{8\pi}}\int{d^2}\xi\sqrt{\hat{g}}{\hat{g}^{ab}}{\partial_a} {\hat{V}_N}{\partial_b}{\hat{V}_N}. \] \noindent Thus ${\hat{\lambda}_N}\geq{0}$ and it is only zero when $\hat V_N$ is constant. Denoting this solution by $N=0$ and using the normalisation condition we conclude that $\hat K_D$ has the zero mode ${\hat{v}_0}={{\left(\oint{d}\hat{s}\right)}^{-1/2}}$. The determinant in eq. (\ref{12}) will generate a new boundary term for the Liouville action. This is the gluing anomaly found in \cite{Man}. The kernel $\tilde{K}_D$ has a boundary heat kernel which can only be sensitive to short distance effects, and since the boundary has no intrinsic geometry it can only be sensitive to the invariant length of the boundary. As a consequence covariance and dimensional analysis lead to a contribution to the Weyl anomaly which can be absorbed into the cosmological constant counterterm in the invariant world-sheet length of the boundary. To cancel the remaining terms in the Weyl anomaly we have to integrate $\psi$. Just as in the case of $\bar{\varphi}$ we have a non-linear inner product on variations of $\psi$: \[ \left\|\delta\psi\right\|^2_{\tilde{g}}=\oint{d}\hat{s}{e^{\psi/2}} {{(\delta\psi)}^2}. \] \noindent We will assume, following David, Distler and Kawai, that we can use the inner product that is more usual for a quantum field in the background $\hat{g}_{ab}$, \[ \left\|\delta\Psi\right\|^2_{\hat{g}}=\oint{d}\hat{s} {{(\delta\Psi)}^2}, \] \noindent provided we renormalise ${\psi_0}\to{\alpha_0}{\Psi_0}$, and $\bar{\psi}\to {\alpha_B}\bar{\Psi}$ as well as their couplings to 2D quantum gravity. 
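On the flat unit disc (our illustrative choice) the eigenvalue formula above can be evaluated in closed form: the harmonic extension of the boundary mode $\cos(N\theta)$ is $r^N\cos(N\theta)$, and the Dirichlet-energy expression gives ${\hat{\lambda}_N}\propto N$, which extends to $\hat{\lambda}_0=0$ on the constant mode:

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
n = sp.symbols('n', integer=True, positive=True)

# Harmonic extension of the boundary mode cos(n*theta) into the flat unit disc
V = r**n * sp.cos(n*th)

# Dirichlet energy density in polar coordinates: (dV/dr)^2 + (dV/dtheta)^2/r^2
dens = sp.diff(V, r)**2 + sp.diff(V, th)**2 / r**2

# E = \int d^2 xi sqrt(g) g^{ab} dV dV, with area element r dr dtheta
E = sp.integrate(sp.integrate(dens * r, (r, 0, 1)), (th, 0, 2*sp.pi))

# lambda_n = E/(8*pi), normalising the mode by \oint cos^2(n*theta) = pi
lam = sp.simplify(E / (8*sp.pi) / sp.pi)
print(lam)   # n/(8*pi): strictly positive for n >= 1
```

Taking $n\to 0$ reproduces the zero mode ${\hat{v}_0}$ identified in the text; the linear growth in $n$ is the familiar Dirichlet-to-Neumann spectrum of the disc.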
Note that this means we need to introduce an independent field renormalisation for $\bar{\varphi}_c$, the component of $\varphi_c$ orthogonal to the zero mode $\psi_0$. According to eq. (\ref{13}), its explicit expression in terms of $\bar{\psi}$ involves a coupling to 2D gravity. Thus we must also consider ${\bar{\varphi}_c} \to{\bar{\alpha}_B} {\bar{\phi}_c}$. This is to be done in each order of the perturbative expansion in the length cosmological constant. Note that we have allowed for a different renormalisation of $\psi_0$ and $\bar{\psi}$. This is because we take independent bulk and boundary renormalisations and $\psi_0$ is related to the zero mode of the Laplacian on closed surfaces that would be generated if we glued together two disc shaped topologies to obtain a sphere, corresponding to the inner product of the closed string vacuum with itself. Thus $\psi_0$ is really associated with the Liouville field in the bulk and should be renormalised accordingly. Now when we decompose $\psi$ into $\psi_0$ and $\bar{\psi}$ eq. (\ref{14}) can be rewritten as: \begin{eqnarray*} &{S_c^0}[\bar{\psi},{\psi_0},\hat{g}]=-{{d-26}\over{12}}\oint{d}\hat{s} (\xi)d\hat{s}(\xi')\bar{\psi}(\xi){\hat{K}_{D}}(\xi,\xi')\bar{\psi} (\xi')- {{d-26}\over{24\pi}}\oint{d}\hat{s}{k_{\hat{g}}}\bar{\psi}+\nonumber\\ &+{\lambda_1} \oint{d}\hat{s}{e^{\bar{\psi}+{\psi_0}}}+ {{d-26}\over{48\pi}}\int{d^2}\xi\sqrt{\hat{g}(\xi)}\hat{R}(\xi) \oint{d}\hat{s}(\xi') {\partial_{\hat{n}'}}{\hat{G}_D}(\xi,\xi')\bar{\psi}(\xi')-\nonumber\\ &-{{d-26}\over{12}}{\chi_o}{\psi_0}. 
\end{eqnarray*} Introducing the coupling renormalisation parameters $Q_0$, $Q_B$ and $\bar{Q}_B$ we write the renormalised lowest order boundary action \begin{eqnarray} &{S_c^{00}}[\bar{\Psi},{\Psi_0},\hat{g}]={1\over{2}}\oint{d}\hat{s} (\xi) d\hat{s}(\xi')\bar{\Psi}(\xi){\hat{K}_D}(\xi,\xi')\bar{\Psi}(\xi')+ \oint{d}\hat{s} {\hat{H}_D^{00}}{\bar{\Psi}}+\nonumber\\ &+{{{Q_0}{\chi_o}}\over{2}}{\Psi_0},\label{insertp} \end{eqnarray} \noindent where we have the current \begin{equation} {\hat{H}_D^{00}}(\xi)=-{{Q_B}\over{8\pi}}\int{d^2}\xi'\sqrt{\hat{g} (\xi')}\hat{R}(\xi') {\partial_{\hat{n}}}{\hat{G}_D}(\xi,\xi')+{{\bar{Q}_B}\over{8\pi}} {k_{\hat{g}}}(\xi)\label{16}. \end{equation} To integrate this we shift out the linear piece in $\bar{\Psi}$. We introduce the Green's function of $\hat{K}_D$ defined by \begin{equation} \oint{d}\hat{s}(\xi''){\hat{K}_D}(\xi,\xi''){\hat{G}_K}(\xi'',\xi')= {\hat{\delta}_B}(\xi-\xi') -{1\over{\oint{d}\hat{s}(\xi''')}}\label{15}. \end{equation} The last term on the right-hand side of eq. (\ref{15}) is necessary to ensure consistency when the equation is integrated with respect to $\hat s (\xi)$, since \[ \oint{d}\hat{s}(\xi){\hat{K}_D}(\xi,\xi') =0. \] Its value is fixed by the zero mode of $\hat{K}_D$ we have calculated before. Also ${\hat{G}_K}(\xi,\xi')$ is symmetric in its arguments and is orthogonal to the constant zero mode \begin{equation} \oint{d}\hat{s}(\xi){\hat{G}_K}(\xi,\xi')=0\label{55}. \end{equation} Then we can consider the shift $\bar{\Psi}\to\bar{\Psi}+{\hat{\mathcal{F}}_K^{00}}$ where \[ {\hat{\mathcal{F}}_K^{00}}(\xi')=\oint{d}\hat{s}(\xi) {\hat{H}_D^{00}}(\xi) {\hat{G}_K}(\xi,\xi') \] \noindent is also orthogonal to the zero mode. 
Thus the integration leads to \[ \int{\mathcal{D}_{\hat{g}}}(\bar{\Psi},{\Psi_0})\exp\{- {S_c^{00}}[\bar{\Psi},{\Psi_0},\hat{g}]\}={e^{\mathcal{F}_B^{00}}} {{\left({{Det'{\tilde{K}_D}}\over{\oint{d}\tilde{s}}}\right)}^{-d/2}} \int{d}{\Psi_0}{e^{-{Q_0}{\chi_o}{\Psi_0}/2}} \] \noindent where \begin{equation} {\mathcal{F}_B^{00}}={1\over{2}}\oint{d}\hat{s}(\xi)d\hat{s}(\xi') {\hat{H}_D^{00}}(\xi){\hat{G}_K}(\xi,\xi'){\hat{H}_D^{00}}(\xi') \label{31}. \end{equation} \noindent The determinant only changes the background renormalisation counterterm in the world-sheet length. The important contribution to the Weyl anomaly comes from eq. (\ref{31}). To calculate it we first need the Weyl transformation associated with eq. (\ref{16}). Using eq. (\ref{17}) and the corresponding transformation of the geodesic curvature \[ {\delta_{\rho}}d\hat{s}{k_{\hat{g}}}={1\over{2}}d\hat{s} {\partial_{\hat{n}}}\rho, \] we take a total derivative and introduce the boundary kernel $\hat{K}_D$ to find: \begin{eqnarray} &{\delta_{\rho}}[d\hat{s}(\xi'){\hat{H}_D^{00}}(\xi')]= {Q_B}\oint{d}\hat{s} (\xi)\rho(\xi)d\hat{s}(\xi'){\hat{K}_D}(\xi,\xi')+\nonumber\\ &+{1\over{16\pi}}( {\bar{Q}_B}-2{Q_B})d\hat{s}(\xi'){\partial_{\hat{n}'}}\rho(\xi') \label{22}. \end{eqnarray} Then eqs. (\ref{15}) and (\ref{16}) lead us to \begin{equation} {\delta_{\rho}}{\mathcal{F}_B^{00}}=-{{Q_B^2}\over{8\pi}} \oint{d}\hat{s}(\xi)\int{d^2}\xi'\rho(\xi){\partial_{\hat{n}}} {\hat{G}_D}(\xi,\xi') \sqrt{\hat{g}(\xi')}\hat{R}(\xi')+{{Q_B^2}\over{4\pi}}\oint{d}\hat{s} {k_{\hat{g}}}\rho\label{32}. \end{equation} \noindent Above we have taken ${\bar{Q}_B}=2{Q_B}$, which is the condition needed to eliminate the contribution associated with the outward normal derivative of $\rho$: \[ {{\bar{Q}_B}-2{Q_B}\over 16\pi}\oint{d}\hat{s}(\xi)d\hat{s}(\xi') {\partial_{\hat{n}}}\rho(\xi){\hat{G}_K}(\xi,\xi'){\hat{H}_D^{00}} (\xi').
\] \noindent Also we note that the zero mode integration defines a net charge selection rule for the gravitational sector, just like in the closed string. This allows us to ignore the non-local contributions to eq. (\ref{32}) coming from the zero mode of the kernel $\hat{K}_D$. They will be generated by eq. (\ref{15}) and by the non-local Weyl anomaly associated with ${\hat{G}_K}(\xi,\xi')$. We find (see Appendix A): \begin{equation} {\delta_{\rho}}{\hat{G}_K}(\xi,\xi')=-{1\over{2\oint{d}\hat{s} (\xi''')}} \oint{d}\hat{s}(\xi'') \rho(\xi'')\left[{\hat{G}_K}(\xi'',\xi)+{\hat{G}_K}(\xi'',\xi') \right]\label{56}. \end{equation} This will also contribute when the two points approach each other. In this case we must also include the contribution coming from the regularisation of $\hat{G}_K$ at coincident points. We use the reparametrisation invariant heat kernel \[ {\hat{G}_{K\varepsilon}}(\xi,\xi')={\int_{\varepsilon}^{\infty}}dt \left[{\hat{\mathcal{G}}_K}(t,\xi,\xi')-{1\over{\oint{d}\hat{s} (\xi'')}} \right] \] where $\hat{\mathcal{G}}_K$ satisfies the generalised heat equation \[ {\partial\over\partial t}{\hat{\mathcal{G}}_K}(t,\xi,\xi') =\oint{d}\hat{s}(\xi''){\hat{K}_D}(\xi,\xi'') {\hat{\mathcal{G}}_K}(t,\xi'',\xi'), \quad{\hat{\mathcal{G}}_K} (0,\xi,\xi')={\hat{\delta}_B}(\xi-\xi'). \] For coincident arguments the regularisation of the Green's function is controlled by the small-$t$ behaviour of the heat kernel, which is computable in a standard perturbation series \cite{Man}. This leads to: \begin{equation} {\delta_{\rho}}{\hat{G}_{K\varepsilon}}(\xi',\xi')=4\rho(\xi')- {1\over{\oint{d}\hat{s}(\xi''')}} \oint{d}\hat{s}(\xi'') \rho(\xi''){\hat{G}_K}(\xi'',\xi')\label{25}. \end{equation} All these non-local contributions always decouple one of the variables, so they will generate terms in eq. (\ref{32}) which are all proportional to the net charge on the whole surface. To eliminate the remaining terms between eqs.
(\ref{35}), (\ref{36}) and (\ref{32}) we need ${Q_B}=Q$. If we finally tune the background cosmological counterterm contributions to zero we get a Weyl invariant lowest order partition function. This shows that we need to include the counterterm in the geodesic curvature because otherwise the finite contribution coming from the reparametrisation ghosts cannot be eliminated. Of course in this particular lowest order case we have a null contribution to the partition function because the net charge, $\oint{d}\hat{s}(\xi){\hat{H}_D^{00}}(\xi)$, is the topological background gravity charge which for the disc is never zero due to the Gauss-Bonnet theorem. However all the terms we have discussed will persist in the more complicated expressions that satisfy the charge selection rule. This analysis still leaves the parameter $Q_0$ undetermined. To find it we make the connection with the closed string partition function. As explained earlier this is obtained by identifying the arguments of two copies of $Z[Y,\psi,\eta]$ and integrating over these boundary values. This corresponds to gluing together two discs along their boundaries to produce a sphere. The closed string partition function is \[ {Z_{closed}}=\int{\mathcal{D}_{\tilde{g}}}(Y,\psi,\eta) {Z_{open}^1}[Y,\psi,\eta]{Z_{open}^2}[Y,\psi,\eta]. \] \noindent When we integrate the string field $X^{\mu}$ and the reparametrisation ghosts in each open string wave functional we find: \begin{eqnarray*} &{Z_{closed}}=\int{\mathcal{D}_{\tilde{g}}}(\psi,{\varphi_1}, {\varphi_2}) \exp\{-{S_L}[{\varphi_1},{\hat{g}_1}]-{S_L}[{\varphi_2}, {\hat{g}_2}]\} \nonumber\\ &{{\left(Det'{\hat{\Delta}_1}\right)}^{-d/2}} {{\sqrt{Det'{\hat{P}^{\dagger}_1} {\hat{P}_1}}}\over{{Vol_1}(CKV)}} {{\left(Det'{\hat{\Delta}_2}\right)}^{-d/2}} {{\sqrt{Det'{\hat{P}^{\dagger}_2} {\hat{P}_2}}}\over{{Vol_2}(CKV)}}. 
\end{eqnarray*} \noindent Here the boundary fields $Y^{\mu}$, $\eta$ have already been integrated and absorbed in the length renormalisation counterterm. The next step is to perturb in each area renormalisation counterterm and in the common length cosmological constant. Just like before we split each field $\varphi_i$, $i=1,2$ in two independent fields $\varphi_{ci}$, $\bar{\varphi}_i$. In the present case we only need to consider the lowest order in the perturbative expansion. Then we have the following decomposition \begin{eqnarray*} &{Z_{closed}}={Z_B^{00}}{\bar{Z}_{open}^1}{\bar{Z}_{open}^2} \nonumber\\ &{{\left(Det'{\hat{\Delta}_1}\right)}^{-d/2}} {{\sqrt{Det'{\hat{P}^{\dagger}_1} {\hat{P}_1}}}\over{{Vol_1}(CKV)}} {{\left(Det'{\hat{\Delta}_2}\right)}^{-d/2}} {{\sqrt{Det'{\hat{P}^{\dagger}_2} {\hat{P}_2}}}\over{{Vol_2}(CKV)}} \end{eqnarray*} \noindent where the boundary partition function is \[ {Z_B^{00}}=\int{\mathcal{D}_{\tilde{g}}}(\psi,{\psi_0})\exp\{- {S_B^{00}}[\psi,{\psi_0},\hat{g}]\}. \] \noindent Above we have used the simple property that the outward normal derivative of one of the open surfaces is just the inward normal derivative of the other at the common boundary, plus the Gauss-Bonnet theorems given in eqs. (\ref{5}) and (\ref{18}) to find the boundary action: \[ {S_B^{00}}(\bar{\psi},{\psi_0},\hat{g})=-{{d-26}\over{12}} \oint{d}\hat{s}d {\hat{s}'}\bar{\psi}(\xi){\hat{K}_{D}}(\xi,\xi')\bar{\psi}(\xi')- {{d-26}\over{12}}{\chi_c}{\psi_0}. \] Next we renormalise the fields and their couplings to 2D gravity to consider canonical measures in the background $\hat{g}_{ab}$. 
When we integrate $\bar{\phi}_i$ we get the same Weyl anomaly for each field and, again using the property of the normal derivative at the common boundary, we can easily see that the boundary contributions cancel up to the usual length renormalisation counterterm, leading to ${Q_i}=Q$, $i=1,2$, where the $Q_i$ define the renormalisation of the coupling of the $\varphi_i$ to the scalar curvature $\hat{R}_i$. The boundary integration is just equal to the zero mode charge selection rule. If ${Q_0}=Q$, this is exactly the selection rule we get for the closed string. \subsubsection{Anomaly cancellation for Liouville field renormalisation} So far we have only been able to determine parameters associated with the renormalisation of the couplings to 2D quantum gravity. To go further and calculate the Liouville field renormalisation we need to consider higher orders in the Coulomb gas perturbative expansion. In the case of the couplings we have seen that the renormalised central charge of the conformally extended Liouville field theory is exactly the same as the corresponding central charge of the same theory on a closed surface. We have also proved that the boundary couplings are fixed by this value of the central charge. All of this is a consequence of the quantum Weyl invariance of the theory. Now we want to find out if the bulk field renormalisation is equal to the corresponding closed string parameter, and if the boundary field renormalisation is actually the same as its bulk counterpart, as should happen when we interpret the Liouville field as an arbitrary Weyl scaling defined everywhere on the surface, including its boundary. As Symanzik's work makes clear, this is not something we should take for granted. We will now show that this is also a consequence of the quantum Weyl invariance assumed for the theory.
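As a quick arithmetic check of the central charge statement, one can verify that the Liouville central charge ${c_{\phi}}=1+6{Q^2}$, evaluated at the closed-string value ${Q^2}=(25-d)/6$ (both relations are made explicit below in the text), cancels ${c_M}+{c_{gh}}=d-26$ for any target dimension $d$. The following numerical sketch is ours and not part of the derivation:

```python
# Sanity check (ours): the Liouville central charge c_phi = 1 + 6 Q^2,
# with the DDK value Q^2 = (25 - d)/6, cancels the matter + ghost
# central charge c_M + c_gh = d - 26 for every target dimension d.
def c_phi(d):
    Q2 = (25.0 - d) / 6.0          # renormalised coupling squared
    return 1.0 + 6.0 * Q2          # equals 26 - d

for d in (-100.0, -10.0, 0.0, 1.0):
    total = c_phi(d) + (d - 26.0)  # matter + ghosts contribute d - 26
    assert abs(total) < 1e-12      # total central charge vanishes
```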
We start with the case where we have a single Liouville vertex operator in the bulk: \begin{equation} \int{d^2}\xi\sqrt{\hat{g}}{e^{{\alpha_0}{\Psi_0}+\alpha\bar{\phi}+ {\bar{\alpha}_B} {\bar{\phi}_c}}}\label{24}. \end{equation} \noindent In this case we find the following action for $\bar{\phi}$ \[ {S_L^{1}}\left[\bar{\phi},\hat{g}\right]={1\over{8\pi}}\int{d^2}\xi \sqrt{\hat{g}}\left({1\over{2}} \bar{\phi}\hat{\Delta}\bar{\phi}+{\hat{J}_Q^1}\bar{\phi}\right) \] \noindent where we need the current \[ {\hat{J}_Q^1}(\xi)=Q{\hat{R}}(\xi)-8\pi\alpha{{{\delta^2}(\xi-\xi')} \over{\sqrt{\hat{g}(\xi)}}}. \] \noindent By shifting $\bar{\phi}$ we generate the functional: \begin{equation} {\mathcal{F}_D^1}[\hat{g}]={\mathcal{F}_D^0}[\hat{g}]-\alpha{Q} \int{d^2} \xi\sqrt{\hat{g}(\xi)}{\hat{R}}(\xi){\hat{G}_D}(\xi,\xi')+4\pi {\alpha^2} {\hat{G}_D}(\xi',\xi')\label{20}. \end{equation} \noindent On the other hand we also find the following renormalised boundary action \begin{eqnarray*} &{S_c^{10}}[\bar{\Psi},{\Psi_0},\hat{g}]={1\over{2}}\oint{d}\hat{s} (\xi) d\hat{s}(\xi')\bar{\Psi}(\xi){\hat{K}_D}(\xi,\xi')\bar{\Psi}(\xi')+ \oint{d}\hat{s} {\hat{H}_D^{10}}{\bar{\Psi}}+\nonumber\\ &+\left({{{Q_0}{\chi_o}}\over{2}}-{\alpha_0}\right) {\Psi_0} \end{eqnarray*} \noindent where we have introduced the current \[ {\hat{H}_D^{10}}(\xi)={\hat{H}_D^{00}}(\xi)+ {\bar{\alpha}_B}{\partial_{\hat{n}}}{\hat{G}_D}(\xi,\xi'). \] \noindent In this case we get \begin{eqnarray} &{\mathcal{F}_B^{10}}={\mathcal{F}_B^{00}}+{\bar{\alpha}_B}\oint{d} \hat{s} (\xi)d\hat{s}(\xi'') {\hat{H}_D^{00}}(\xi){\hat{G}_K}(\xi,\xi''){\partial_{\hat{n}''}} {\hat{G}_D}(\xi'',\xi') +\nonumber\\ &+{1\over{2}}{\bar{\alpha}_B^2}\oint{d}\hat{s}(\xi) d\hat{s}(\xi''){\partial_{\hat{n}}}{\hat{G}_D}(\xi,\xi') {\hat{G}_K}(\xi,\xi'') {\partial_{\hat{n}''}}{\hat{G}_D}(\xi'',\xi')\label{23}.
\end{eqnarray} To analyse the anomaly cancellation in this order of the perturbative expansion we first recall that although ${\hat{G}_D}(\xi,\xi')$ is Weyl invariant for distinct values of its arguments, at coincident points it requires regularisation, which introduces dependence on the scale of the metric. To calculate the corresponding Weyl transformation we represent ${\hat{G}_D}(\xi,\xi')$ in terms of the Green's function $\hat{G}(\xi,\xi')$ considered on the whole plane \[ {\hat{G}_D}(\xi,\xi')=\hat{G}(\xi,\xi')-{\hat{H}_D}(\xi,\xi'), \] where ${\hat{H}_D}(\xi,\xi')$ satisfies the boundary-value problem: \[ \hat{\Delta}{\hat{H}_D}(\xi,\xi')=0,\quad{\hat{H}_D} (\xi,\xi'){|_{\xi'\in B}} =\hat{G}(\xi,\xi'){|_{\xi'\in B}}. \] When $\xi=\xi'$ is in the bulk ${\hat{H}_D}(\xi,\xi)$ is Weyl invariant. Also on the whole plane there is no zero mode. Thus the Weyl transformation of ${\hat{G}_D}(\xi,\xi)$ is just given by the corresponding well known local change of $\hat{G}(\xi,\xi)$ \cite{Pol,Pho}: \begin{equation} {\delta_{\rho}}{\hat{G}_{D\varepsilon}}(\xi,\xi)={{\rho(\xi)}\over {4\pi}},\quad\xi \not\in{B}\label{21}. \end{equation} Then applying eqs. (\ref{17}), (\ref{21}) we conclude that the Weyl anomaly of eq. (\ref{20}) is given by \[ {\delta_{\rho}}{\mathcal{F}_D^1}={\delta_{\rho}}{\mathcal{F}_D^0}- \alpha{Q}\oint{d}\hat{s}(\xi)\rho(\xi){\partial_{\hat{n}}}{\hat{G}_D} (\xi,\xi')+({\alpha^2}-\alpha{Q})\rho(\xi'). \] On the other hand, ignoring the non-local zero mode contributions, which are all proportional to the net charge on the whole surface, given in this order by $\oint{d}\hat{s}{\hat{H}_D^{10}}$, we use eq. (\ref{22}) and ${\bar{Q}_B}=2{Q_B}$ to find the Weyl anomaly of eq. (\ref{23}): \[ {\delta_{\rho}}{\mathcal{F}_B^{10}}={\delta_{\rho}} {\mathcal{F}_B^{00}}+ {\bar{\alpha}_B}{Q_B}\oint{d}\hat{s}(\xi)\rho(\xi) {\partial_{\hat{n}}} {\hat{G}_D}(\xi,\xi').
\] Thus we can easily see that to ensure Weyl invariance at the quantum level we must further set ${Q_B}=Q$, ${\bar{\alpha}_B}=\alpha$ and \[ 1-\alpha{Q}+{\alpha^2}=0. \] \noindent Here we took into account the contribution to the Weyl anomaly of the $\sqrt{\hat{g}}$ present in eq. (\ref{24}). Introducing the value of $Q$ we find: \[ {\alpha_{\pm}}={1\over{2\sqrt{6}}}\left(\sqrt{25-d}\pm\sqrt{1-d} \right). \] As we noted previously, these renormalised parameters only cancel the local contributions to the Weyl anomaly. As in the lowest order case we have to assume the charge selection rule associated with the zero mode integration to eliminate the non-local pieces. To find the renormalised parameters of the charge selection rule we need to glue the two discs to form a sphere enabling us to use the closed string result. We already know the value of $Q_0$ but now we also want the value of $\alpha_0$. The calculation goes exactly as before, all boundary contributions cancel out up to the length renormalisation counterterm and we find a zero mode integral which corresponds to a closed string selection rule with two bulk vertex operator charges $\alpha_0$ and a background gravity charge $Q_0=Q$. This implies that ${\alpha_0}=\alpha$ as expected. With this calculation we are able to guarantee Weyl invariance at the quantum level for insertions of arbitrary numbers of gravitational Liouville vertex operators in the bulk. To see what happens when operators are inserted on the boundary let us consider the simplest case of just one such operator, \begin{equation} \oint{d}\hat{s}{e^{{\alpha_0}{\Psi_0}/2+{\alpha_B}\bar{\Psi}/2}} \label{26}. \end{equation} In this case only the boundary integration over $\psi$ gets changed. 
The renormalised boundary Liouville action is \begin{eqnarray*} &{S_c^{01}}[\bar{\Psi},{\Psi_0},\hat{g}]={1\over{2}}\oint{d} \hat{s}(\xi)d\hat{s}(\xi')\bar{\Psi}(\xi){\hat{K}_D}(\xi,\xi') \bar{\Psi}(\xi')+ \oint{d}\hat{s}{\hat{H}_D^{01}}\bar{\Psi}+\nonumber\\ &+\left({{{Q_0}{\chi_o}} \over{2}}- {{\alpha_0}\over{2}}\right){\Psi_0} \end{eqnarray*} \noindent where we have introduced the current \[ {\hat{H}_D^{01}}(\xi)={\hat{H}_D^{00}}(\xi)-{{\alpha_B}\over{2}} {\hat{\delta}_B}(\xi-\xi'). \] The relevant functional is now: \[ {\mathcal{F}_B^{01}}={\mathcal{F}_B^{00}}-{{\alpha_B}\over{2}}\oint{d} \hat{s}(\xi){\hat{H}_D^{00}}(\xi){\hat{G}_K}(\xi,\xi')+ {{\alpha_B^2}\over{8}}{\hat{G}_K}(\xi',\xi'). \] To find our last renormalised parameter, $\alpha_B$, we need the local Weyl transformation of $\hat{G}_K$ at coincident points given in eq. (\ref{25}). Thus the local anomaly vanishes if all the other parameters keep their previous values and $1/2-{\alpha_B}Q/2+{\alpha_B^2}/2=0$, where the $1/2$ term comes from the Weyl transformation of $d\hat{s}$ in eq. (\ref{26}). Thus ${\alpha_B}=\alpha$. Since, as before, the non-local contributions cancel due to the charge selection rule, this result shows that the full perturbative expansion is Weyl invariant at the quantum level for the values of the renormalised parameters found. Whenever we couple distinct Liouville vertex operators in higher orders there are no additional anomalous Weyl contributions. \subsubsection{Comments} Our results for the non-critical open string show that the gravitational sector can be interpreted as a conformally extended boundary Liouville field theory. In this picture $Q$ defines the central charge of the Liouville theory ${c_{\phi}}=1+6{Q^2}$, which has its value fixed by demanding that it cancel the central charges of the matter and ghost systems, ${c_M}+{c_{gh}}=d-26$. Thus the central charge of the theory with boundary is equal to the central charge of the theory without boundary.
This is to be expected since anomalies are local effects. We have interpreted the Liouville field as an arbitrary Weyl scaling all over the open surfaces. Then we found that the value of $\alpha$ is exactly right to define a Liouville vertex operator $\int{d^2}\xi\sqrt{\hat{g}}{e^{\alpha\phi}}$ of zero conformal weight. In the extended field theory it corresponds to a primary field $:{e^{\alpha\phi}}:$ of weight $(1,1)$. As expected, $\alpha$ has the same value it takes when the surfaces are closed. We also found the right value for $\alpha_B$, in the sense that the boundary vertex operator $\oint{d}\hat{s}{e^{{\alpha_B}\phi/2}}$ has zero conformal weight, corresponding to the boundary primary field $:{e^{{\alpha_B}\phi/2}}:$ of conformal weight $(1/2,1/2)$. This means that the renormalisation of the Liouville field is the same all over the surface and is equal to the renormalisation on the closed surface, as it should be. We are now in a position to see that Dirichlet boundary conditions on the Liouville field imply that the metric is discontinuous as the boundary is approached. In this case the calculation stops at $S^{00}_c$ in eq. (\ref{insertp}), since we do not integrate over boundary values of the Liouville field, but leave them fixed. The Weyl anomaly of eq. (\ref{36}) must now be cancelled by the Weyl transformation of $S^{00}_c$, together with a shift in the boundary value of the Liouville field, $\Psi$. This fixes the latter to be $\delta\Psi=-Q\rho$. Now the full metric is a reparametrisation of ${\hat{g}}_{ab}{e^{\alpha\Psi}}$, which should be invariant under this simultaneous Weyl transformation on ${\hat{g}}_{ab}$ and shift in $\Psi$, since the separation into reference metric and Liouville field is arbitrary. However, it is not, because $Q\ne 1/\alpha$: the correct relation, $1-\alpha Q +\alpha^2=0$, has an extra quantum piece.
One way out of this would be to assume that the Liouville field is renormalised differently in the bulk and on the boundary, a phenomenon that occurs in $\phi^4$ theory in four dimensions \cite{SYM}. However, this implies that the metric is discontinuous as the boundary is approached, and also that the functionals obtained by imposing Dirichlet boundary conditions cannot be sewn together to make closed surface functionals. As for the closed string, we also find the need to restrict the validity of the approach to target space dimensions $d\leq1$. Only in this way do we have real renormalised parameters such that $e^{\alpha\phi}$ and $e^{{\alpha_B}\phi/2}$ can be interpreted as real Weyl scalings for a real scalar renormalised Liouville field $\phi$. From this we can see that our results extend very naturally those found for the closed string by David, Distler and Kawai. Since the analysis is fully local and we can choose the moduli integration measure to be independent of the conformal factor of the metric, our results also generalise immediately to higher genus Riemann surfaces with just one boundary. Clearly more general boundary structures can also be considered. Here, for simplicity, we have analysed only the one-loop random-surface functional defined in Euclidean space. Our results hold for an arbitrary number of loops. We may also consider non-smooth boundaries \cite{Man}. \subsubsection{Tachyon gravitational dressings} Since this formalism is only valid for $d\leq1$, the string serves as a toy model for the more realistic $c\leq{1}$ minimal series of boundary conformal field theories \cite{JC}. With this in mind let us see how a bulk tachyon vertex operator gets dressed by the gravitational sector.
Taking an $n$-point function of bulk tachyons with momenta $p_j$ such that the momenta sum to zero, we can easily see by following the same path of calculations that the operator $\int{d^2}{\xi_j}\sqrt{\tilde{g}_j}{e^{i{p_j}\cdot{X}({\xi_j})}}$ gets dressed to $\int{d^2}{\xi_j}\sqrt{\hat{g}_j}{e^{{\gamma_j}\phi}}{e^{i{p_j} \cdot{X} ({\xi_j})}}$, where quantum Weyl invariance demands \[ {\Delta_j^0}-{\gamma_j}({\gamma_j}-Q)=1,\quad{\Delta_j^0}= {p_j^2}. \] \noindent The above equation shows that the dressed bulk tachyon vertex operator has zero conformal weight. The primary Liouville field $: {e^{{\gamma_j}\phi}}:$ dresses the tachyon field $:{e^{i{p_j}\cdot{X}( {\xi_j})}}:$ in such a way that $:{e^{{\gamma_j}\phi}}{e^{i{p_j} \cdot{X}({\xi_j})}}:$ has conformal weight $(1,1)$. Note that if we solve for $\gamma_j$ in terms of $Q$ and $p_j^2$ we get ${\gamma_j}=(1/2) \left[Q\pm\sqrt{{Q^2}+4({p_j^2}-1)}\right]$. Just as $\alpha$ should be real for an arbitrary real Weyl scaling, so should $\gamma_j$. This implies that $d\leq1$ and ${p_j^2}\geq{0}$. If we now consider a boundary tachyon vertex operator $\oint{d} {\hat{s}_j}{e^{i{p_j}\cdot{X}({\xi_j})/2}}$ we may follow the same steps to find the dressed operator $\oint{d}{\hat{s}_j} {e^{{\gamma_j}\phi/2+i{p_j}\cdot{X}({\xi_j})/2}}$, where $\gamma_j$ satisfies the same equation as its bulk counterpart. This means that the dressed field $:{e^{i{p_j}\cdot{X}({\xi_j})/2}} {e^{{\gamma_j}\phi/2}}:$ considered on the boundary of the surface has conformal weight $(1/2,1/2)$ as a consequence of the quantum Weyl invariance of the theory. \subsection{Neumann boundary conditions} The choice of boundary conditions always depends on the specific physical applications we have in mind. So far we have argued that in a proper coupling to 2D quantum gravity, the boundary conditions on the Liouville field have to be such that it can be interpreted as an arbitrary Weyl scaling on the whole surface and not just on its interior.
As we said, this rules out Dirichlet boundary conditions, but we are free to choose Neumann boundary conditions for the conformal factor. To see what happens in this case let us, for simplicity, also take Neumann boundary conditions on the matter field, ${\partial_{\tilde{n}}}{X^{\mu}}=0$, and on the reparametrisation ghosts, $\tilde{n}\cdot\delta\theta=0$. Consider first the partition function. We can then follow the same reasoning as in the case of free boundary conditions with much more ease, because the Neumann boundary condition simply eliminates most of the boundary contributions we had to worry about before. Thus we write the following renormalised Liouville action \begin{eqnarray*} &{S_L^N}\left[\phi,\hat{g}\right]={1\over{8\pi}}\int{d^2}\xi \sqrt{\hat{g}}\left({1\over{2}} \phi\hat{\Delta}\phi+Q\hat{R}\phi\right)+{{\bar{Q}_B}\over{8\pi}} \oint{d}\hat{s}{k_{\hat{g}}}\phi+\nonumber\\ &+{\mu_2^2}\int{d^2}\xi \sqrt{\hat{g}}{e^{\alpha\phi}}+{\lambda_2}\oint{d}\hat{s} {e^{{\alpha_B}\phi/2}}. \end{eqnarray*} \noindent Here $Q$, $\bar{Q}_B$ refer to coupling renormalisation and $\alpha$, $\alpha_B$ are their field renormalisation counterparts. To ensure quantum Weyl invariance (see Appendix B) we must satisfy the charge conservation selection rule, tune the local reference counterterms to zero, and set ${\bar{Q}_B}=2Q$, ${\alpha_B}=\alpha$, $Q=\pm\sqrt{(25-d)/6}$ and $1-\alpha{Q}+{\alpha^2}=0$. Starting from a general open string bulk tachyon amplitude it is clear that we may follow the steps of the partition function calculation to find the equation for the gravitational dressing of the bulk tachyon vertex operator. A tachyon vertex operator with momentum $p_j$ gets dressed by the coupling to 2D quantum gravity as $\int{d^2}{\xi_j}\sqrt{\hat{g}}{e^{{\gamma_j}\phi}} {e^{i{p_j}\cdot{X}({\xi_j})}}$, where ${\Delta_j^0}-{\gamma_j}({\gamma_j}-Q)=1$, ${\Delta_j^0}= {p_j^2}$.
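As a numerical sanity check (ours, not part of the derivation), the roots $\alpha_\pm$ quoted earlier and the dressing exponents $\gamma_j$ can be verified against their defining quadratics, $1-\alpha Q+\alpha^2=0$ and ${\Delta_j^0}-{\gamma_j}({\gamma_j}-Q)=1$, with $Q=\sqrt{(25-d)/6}$:

```python
import math

def Q_of(d):
    # positive root for the background charge, Q = sqrt((25 - d)/6)
    return math.sqrt((25.0 - d) / 6.0)

def alpha_roots(d):
    # alpha_pm = (sqrt(25 - d) +/- sqrt(1 - d)) / (2 sqrt(6)), real for d <= 1
    return tuple((math.sqrt(25.0 - d) + s * math.sqrt(1.0 - d)) / (2.0 * math.sqrt(6.0))
                 for s in (+1.0, -1.0))

def gamma_roots(d, p2):
    # gamma_j = (Q +/- sqrt(Q^2 + 4 (p^2 - 1))) / 2
    Q = Q_of(d)
    disc = math.sqrt(Q * Q + 4.0 * (p2 - 1.0))
    return ((Q + disc) / 2.0, (Q - disc) / 2.0)

d = -2.0                     # any target dimension d <= 1 will do
Q = Q_of(d)
for a in alpha_roots(d):
    assert abs(1.0 - a * Q + a * a) < 1e-12        # 1 - alpha Q + alpha^2 = 0
for p2 in (0.0, 0.5, 2.0):
    for g in gamma_roots(d, p2):
        assert abs(p2 - g * (g - Q) - 1.0) < 1e-12  # Delta - gamma(gamma - Q) = 1
```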
For the boundary tachyon vertex operator the coupling to gravity leads to the dressed operator $\oint{d}{\hat{s}_j}{e^{{\gamma_j}\phi/2+ i{p_j}\cdot{X}({\xi_j})/2}}$ of zero weight, where $\gamma_j$ satisfies the same equation as its bulk counterpart. Thus, so far, we conclude that our results are exactly the same for Neumann and for free boundary conditions on the Liouville field. \section{Critical exponents and the saddle point limit} \subsection{The open string susceptibility and Yang-Mills Feynman mass exponents} Let us consider again the case of free boundary conditions and start with Polyakov's sum over random surfaces with the topology of a disc. Generalising the closed string case \cite{ZCKT}, the quantum partition function may now be written as an integral of the partition function for surfaces constrained to have fixed area, $A$, and perimeter, $L$, $\Gamma (A,L)$: \[ Z={\int_0^{+\infty}}\Gamma(A,L){e^{-{\mu_0^2}A-{\lambda_0}L}}dAdL. \] \noindent After integrating out the matter and reparametrisation ghost fields we renormalise the Liouville field and its couplings to 2D gravity to find the following integral for $\Gamma (A,L)$: \begin{eqnarray} &\Gamma(A,L)=\int{\mathcal{D}_{\hat{g}}}(\bar{\Psi},\bar{\phi}) d{\bar{\Psi}_0}{{\left(\oint{d}\hat{s}\right)}^{1/2}}{e^{-{S_c^{00}} [\bar{\Psi},{\bar{\Psi}_0},\hat{g}]-{\bar{S}^0} [\bar{\phi},\hat{g}]}}\delta\left(\int{d^2}\xi\sqrt{\hat{g}} {e^{\alpha\phi}}-A\right)\nonumber\\ &\delta\left(\oint{d}\hat{s} {e^{\alpha\phi/2}}-L\right)\label{28}. \end{eqnarray} \noindent Here we have factored out the cosmological constant counterterms left over from the renormalisation process. Note that in this process the initially infinite constants $\mu_0^2$ and $\lambda_0$ are changed into the finite constants $\mu_2^2$ and $\lambda_2$. As discussed before, we have set ${\nu_2}=0$. To calculate the critical exponents we apply David, Distler and Kawai's scaling argument \cite{DDK}.
Consider the shift of the integration variable $\phi$ by a constant, $\phi\to\phi+\rho/\alpha$. Since we keep $\hat{g}_{ab}$ fixed our functional integral should scale. Recall that the theory is invariant under arbitrary scalings of the reference metric once we have integrated $\phi$. So it is only invariant under a shift of the integration variable provided this is compensated by a Weyl transformation of the reference metric. Because we consider a translationally invariant quantum measure in eq. (\ref{28}), the scaling behaviour is determined by the change in the action $S\equiv{S_c^{00}}[\bar{\Psi},{\bar{\Psi}_0},\hat{g}]+{\bar{S}^0} [\bar{\phi},\hat{g}]$, and in the delta functions which are used to fix the area $A$ and the perimeter $L$ of the surface. Since the shift is constant, only the zero mode $\bar{\Psi}_0$ is actually changed. Thus the shift in the action is \[ S\to{S}+{{Q{\chi_o}}\over{2\alpha}}\rho, \] and the shifts in the delta functions are \begin{eqnarray*} &\delta\left(\int{d^2}\xi\sqrt{\hat{g}}{e^{\alpha\phi}}-A\right)\to {e^{-\rho}}\delta\left(\int{d^2}\xi\sqrt{\hat{g}}{e^{\alpha\phi}}- {e^{-\rho}}A\right),\\ &\delta\left(\oint{d}\hat{s}{e^{\alpha\phi/2}}-L\right)\to{e^{-\rho/2}} \delta\left(\oint{d}\hat{s}{e^{\alpha\phi/2}}-{e^{-\rho/2}}L\right). \end{eqnarray*} Then we get the following scaling law: \begin{equation} \Gamma(A,L)={e^{-{\rho\over{2}}\left({{{\chi_o}Q}\over {\alpha}}+3\right)}} \Gamma\left({e^{-\rho}}A,{e^{-\rho/2}}L\right)\label{42}. \end{equation} To be able to introduce critical exponents we have to define the partition function for fixed area $A$, $\Sigma(A)$, and the partition function for fixed perimeter $L$, $\Omega(L)$. Factoring out the appropriate counterterms we write: \[ \Sigma(A,{\lambda_2})=\int\Gamma(A,L){e^{-{\lambda_2}L}}dL,\quad \Omega(L,{\mu_2^2})=\int\Gamma(A,L){e^{-{\mu_2^2}A}}dA. \] Then, from eq.
(\ref{42}), we get: \begin{equation} \Sigma(A,{\lambda_2})={e^{-\rho\left({{{\chi_o}Q}\over{2\alpha}}+ 1\right)}} \Sigma\left({e^{-\rho}}A,{\lambda_2}{e^{\rho/2}}\right)\label{50}, \end{equation} \begin{equation} \Omega(L,{\mu_2^2})={e^{-{\rho\over{2}}\left({{{\chi_o}Q}\over{\alpha}}+ 1\right)}} \Omega\left({e^{-\rho/2}}L,{\mu_2^2}{e^{\rho}}\right)\label{51}. \end{equation} The open string susceptibility exponent is defined just like in the closed string. In the case ${\lambda_2}=0$ we can continue to use the scaling argument. As $A\to+\infty$: \[ \Sigma(A)\sim{A^{\sigma({\chi_o})-3}}, \] with \[ \sigma({\chi_o})=2-{{{\chi_o}Q}\over{2\alpha}}. \] \noindent The last result is just the expected open string version of the closed string critical exponent. If we take the positive root for $Q$ and the corresponding negative one for $\alpha$ we find that in the semi-classical limit $d\to-\infty$: \[ \sigma({\chi_o})={{d-19}\over{12}}{\chi_o}+2. \] For the open string we can also consider the asymptotic limit $L\to+\infty$ and introduce a mass critical exponent in close analogy with the asymptotic limit $A\to+\infty$. Here we take ${\mu_2^2}=0$. This case was considered by Durhuus, Olesen and Petersen \cite{LSW} in connection with the calculation of the Wilson loop quark-antiquark potential. We define $\omega({\chi_o})$ by \[ \Omega(L)\sim{L^{\omega({\chi_o})-3}}. \] \noindent Thus we find \[ \omega({\chi_o})=2-{{{\chi_o}Q}\over{\alpha}}, \] \noindent to which we associate the semi-classical limit \[ \omega({\chi_o})={{d-19}\over{6}}{\chi_o}+2. \] We can interpret $\omega({\chi_o})$ in the context of Yang-Mills gluon dynamics. To see this, first note that the wave functional given in eq. (\ref{29}) models the Wilson loop, $W$, for Yang-Mills theory \cite{OAL,LSW}. Consider the first quantised functional integral representing the propagator of a particle of mass $\lambda_2$ moving under the influence of a Yang-Mills field.
At coincident points its trace is a gauge invariant expression \[ \int{\mathcal{D}}Y\,tr\,P\,e^{-{\lambda_2}\oint{d}\tilde{s}- \oint dY\cdot{\cal A}} =tr\,G_{\cal A}(x,x). \] If this is averaged over the Yang-Mills field we get \[ <tr\,G_{\cal A}(x,x)>_{\cal A}=\int{\mathcal{D}Y}\,e^{-{\lambda_2} \oint{d}\tilde{s}}\,W =\int dL\,e^{-\lambda_2L}\int{\mathcal{D}Y}\,\delta(L-\oint{d}\tilde{s}) \,W \] but this last functional integral is just what we mean by $\Omega$. Substituting the form that holds for $\mu_2^2=0$ we get \[ <tr\,G_{\cal A}(x,x)>_{\cal A} \propto{{\lambda_2}^{{\chi_o}Q/\alpha}}={{\lambda_2}^ {2-\omega({\chi_o})}}, \] valid for small $\lambda_2$, corresponding to large $L$. Thus $\omega({\chi_o})$ is the critical exponent associated with the Feynman propagator of a test particle which interacts with the Yang-Mills gauge fields. So far we have expanded the cosmological terms so as to linearise the contribution of the exponential terms to the action. We will now discuss a different approach based on the semi-classical expansion. \subsection{The saddle point expansion} When we consider the partition function with free boundary conditions and integrate the matter and ghost fields in the conformal gauge ${\tilde{g}_{ab}}={e^{\varphi}}{\hat{g}_{ab}}$, the result is \[ \Gamma(A,L)=\int{\mathcal{D}_{\tilde{g}}}(\psi,\varphi){e^{-{S_L} [\varphi, \hat{g}]}}\delta\left(\int{d^2}\xi\sqrt{\hat{g}}{e^{\varphi}}-A\right) \delta\left(\oint{d}\hat{s}{e^{\varphi/2}}-L\right) \] \noindent where the Liouville action is given by \[ {S_L}[\varphi,\hat{g}]={{26-d}\over{48\pi}}\int{d^2}\xi\sqrt{\hat{g}} \left({1\over{2}}{\hat{g}^{ab}}{\partial_a}\varphi{\partial_b}\varphi+ \hat{R}\varphi\right)+{{26-d}\over{24\pi}}\oint{d}\hat{s}{k_{\hat{g}}} \varphi. 
\] Representing the delta functions by integrals over (imaginary) Lagrange multipliers $p,q,$ gives the Euclidean action \begin{eqnarray*} &{S_L}[\varphi,\hat{g},p,q]={{26-d}\over{48\pi}}\int{d^2}\xi\sqrt{\hat{g}} \left({1\over{2}}{\hat{g}^{ab}}{\partial_a}\varphi{\partial_b}\varphi+ \hat{R}\varphi\right)+{{26-d}\over{24\pi}}\oint{d}\hat{s}{k_{\hat{g}}} \varphi-\nonumber\\ &-p\left(\int{d^2}\xi\sqrt{\hat{g}}{e^{\varphi}}-A\right)-q\left(\oint {d}\hat{s}{e^{\varphi/2}}-L\right). \end{eqnarray*} \noindent This action is invariant under M\"obius transformations on the upper half-plane, i.e. $SL(2,\mathcal{R})$ invariant. These transformations preserve the conformal gauge, mapping the upper half-plane onto itself $\omega\to\omega'=(a\omega+b)/(c\omega+d)$, $\varphi(\omega,\bar{\omega})\to\varphi(\omega',\bar{\omega}')+2\ln |d\omega'/d\omega|$, where $a,b,c,d\in\mathcal{R}$ and $ad-bc=1$. It will be more convenient to work on the unit disc obtained from the upper half-plane by the complex M\"obius transformation $\omega\to{z}=(i-\omega)/(i+\omega)$, $\varphi(\omega,\bar{\omega})\to\varphi(z,\bar{z})+2\ln |dz/d\omega|$. To obtain the corresponding invariance on the unit disc we map the $SL(2,\mathcal{R})$ transformation through this change of variable. The result is the conformal mapping of the unit disc onto itself $z\to{z'}=\exp(i{\theta_0})(z+{c_0})/(1+{\bar{c}_0}z)$, $\varphi(z,\bar{z})\to\varphi(z',\bar{z}')+2\ln |dz'/dz|$, where ${\theta_0}\in\mathcal{R}$ and $|{c_0}|<1$. In the saddle point approximation we expand around the solution of the following classical problem: \[ \tilde{R}=\eta,\quad{2}{k_{\tilde{g}}}=k,\quad \int{d^2}\xi\sqrt{\tilde{g}}=A,\quad\oint{d}\tilde{s}=L \] \noindent where $\eta=p\gamma$, $\gamma=48\pi/(26-d)$ and $2k=q\gamma$. This is a boundary-value problem for the Liouville field of the conformal gauge ${\tilde{g}_{ab}}={e^{\varphi}} {\hat{g}_{ab}}$.
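This classical problem is simply the stationarity condition of the action above; a sketch of the variation (our rederivation, keeping the boundary term generated by the kinetic piece):

```latex
\delta{S_L}={{26-d}\over{48\pi}}\int{d^2}\xi\sqrt{\hat{g}}
\left(\hat{\Delta}\varphi+\hat{R}-\gamma{p}\,{e^{\varphi}}\right)
\delta\varphi
+{{26-d}\over{48\pi}}\oint{d}\hat{s}
\left({\partial_{\hat{n}}}\varphi+2{k_{\hat{g}}}-{{\gamma{q}}\over{2}}
\,{e^{\varphi/2}}\right)\delta\varphi\ .
```

Setting the bulk and boundary variations to zero and using $\gamma=48\pi/(26-d)$, $\eta=p\gamma$ and $2k=q\gamma$ gives $\hat{R}+\hat{\Delta}\varphi=\eta{e^{\varphi}}$ in the bulk and ${\partial_{\hat{n}}}\varphi+2{k_{\hat{g}}}=k{e^{\varphi/2}}$ on the boundary, while the multiplier integrations enforce the area and perimeter constraints.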
The classical field $\varphi_c$ must satisfy the Liouville equation \[ \hat{R}+\hat{\Delta}\varphi_c=\eta{e^{\varphi_c}},\quad\int{d^2}\xi \sqrt{\hat{g}}{e^{\varphi_c}}=A \] \noindent subject to the boundary condition \cite{GNM}: \[ 2{k_{\hat{g}}}+{\partial_{\hat{n}}}\varphi_c=k{e^{\varphi_c/2}}, \quad\oint{d}\hat{s}{e^{\varphi_c/2}}=L. \] \noindent Here $\eta$ and $k$ are not independent. Applying the Gauss-Bonnet theorem gives $\eta{A}+kL=4\pi$. Let us now solve this problem on the unit disc. In the polar coordinates $z=\rho{e^{i\theta}}$, $\rho\in[0,1]$, $\theta\in[0,2\pi]$ we assume that $\varphi_c$ only depends on $\rho$ to find the following solution: \[ {\varphi_c}(\rho)=2\ln\left\{{{2A}\over{L}}{{\left[1+\left({{4\pi{A}} \over{L^2}}-1 \right){\rho^2}\right]}^{-1}}\right\}. \] \noindent Due to the $\theta$ independence this is the metric of a spherical cap of length $L$ and area $A$. Note that ${\rho^2}{e^{\varphi_c}}>0$ leads to ${L^2}<4\pi{A}$. We also note that ${\eta_c}=(8\pi/A)\left[1-{L^2}/(4\pi{A})\right]$, ${k_c}=(2L/A)\left(1-2\pi{A}/{L^2}\right)$. The saddle point tree level approximation is given by the classical functional ${e^{-{S_L}[{\varphi_c},\hat{g},{p_c},{q_c}]}}$. Introducing the new coordinate $\varrho$ such that $\rho$ is given by $\rho=\tan(\varrho/2)/ \sqrt{4\pi{A}/{L^2}-1}$, we obtain \[ \Gamma(A,L)={{\left({A\over{L}}\right)}^{(d-26)/6}} {{\left({{L^2}\over{4\pi{A}}}\right)}^{2{L^2}/A\gamma}} {e^{-2{L^2}/A\gamma}}. \] \noindent In the semi-classical limit $d\to-\infty$ we get \[ \Gamma(A,L)={e^{d\rho/12}}\Gamma\left({e^{-\rho}}A,{e^{-\rho/2}}L \right). \] \noindent If we take the branch $\alpha_{-}$, ${\chi_o}=1$ and the limit $d\to-\infty$ we reproduce this scaling law from eq. (\ref{42}) so that in the case of the disc topology both methods match in the asymptotic limit $A\to+\infty$, $L\to+\infty$ such that $A/{L^2}\to{const}$.
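Two statements in this construction lend themselves to a quick numerical check: the Gauss-Bonnet relation $\eta{A}+kL=4\pi$ satisfied by ${\eta_c}$ and ${k_c}$, and the matching of the semi-classical scaling law with the $d\to-\infty$ limit of eq. (\ref{42}) on the branch $\alpha_-$ with ${\chi_o}=1$. A minimal sketch in Python (toy values of $A$ and $L$; this check is ours, not part of the original derivation):

```python
import math

# Toy values with L**2 < 4*pi*A, as required for the cap solution to exist.
A, L = 3.0, 2.0
assert L**2 < 4.0 * math.pi * A

# Classical curvatures of the spherical cap:
eta_c = (8.0 * math.pi / A) * (1.0 - L**2 / (4.0 * math.pi * A))
k_c = (2.0 * L / A) * (1.0 - 2.0 * math.pi * A / L**2)

# Gauss-Bonnet for disc topology: eta*A + k*L = 2*pi*chi_o = 4*pi
assert abs(eta_c * A + k_c * L - 4.0 * math.pi) < 1e-12

# Branch alpha_-: Q/alpha -> (19 - d)/6 as d -> -infinity, so the scaling
# exponent -(1/2)*(Q/alpha + 3) of eq. (42) approaches d/12.
def q_over_alpha(d):
    Q = math.sqrt((25.0 - d) / 6.0)
    return Q / ((Q - math.sqrt(Q * Q - 4.0)) / 2.0)

for d in (-1e4, -1e6):
    assert abs(q_over_alpha(d) - (19.0 - d) / 6.0) < 1e-2
    assert abs(-0.5 * (q_over_alpha(d) + 3.0) / (d / 12.0) - 1.0) < 1e-2
```

The last assertion confirms that $Q/{\alpha_-}\to(19-d)/6$, so that the exponent of eq. (\ref{42}) indeed approaches $d\rho/12$ for large negative $d$.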
If we go to one loop we must consider \[ \Gamma(A,L)={e^{-{S_L}[{\varphi_c},\hat{g},{p_c},{q_c}]}}\int {\mathcal{D}_{g_c}}(\phi,\chi)\delta\left(\int{d^2}\xi\sqrt{g_c} \chi\right) \delta\left(\oint{d}{s_c}\phi\right){e^{-{S_1} [\chi,\phi,{g_c}]}}. \] \noindent Here $\chi$ is the quantum fluctuation around the classical solution and $\phi$ is the free value it takes on the boundary. The metric ${g_c}^{ab}$ is given by ${e^{{\varphi_c}}}{\hat{g}^{ab}}$ and the one loop action is \[ {S_1}[\chi,\phi,{g_c}]={1\over{2\gamma}}\int{d^2}\xi\sqrt{g_c} {{g_c}^{ab}}{\partial_a}\chi{\partial_b}\chi-{1\over{2\gamma}}{\eta_c} \int{d^2}\xi\sqrt{g_c}{\chi^2}-{1\over{4\gamma}}{k_c}\oint{d}{s_c} {\phi^2}. \] \noindent Let us separate $\chi$ into a fixed background field $\chi_b$ and a homogeneous Dirichlet field $\bar{\chi}$. Introducing the operator ${\mathcal{O}_c}={\Delta_c}-{\eta_c}$ we specify $\chi_b$ as the solution to the boundary-value problem ${\mathcal{O}_c}{\chi_b}=0$, ${\chi_b}{|_B}=\phi$. Thus: \begin{eqnarray*} &\Gamma(A,L)={e^{-{S_L}[{\varphi_c},\hat{g},{p_c},{q_c}]}}\int {\mathcal{D}_{g_c}}\phi\,\delta\left(\oint{d}{s_c}{\chi_b}\right) \exp\left(-{1\over{2\gamma}}\oint{d}{s_c}{\chi_b}{\partial_{nc}} {\chi_b}\right)\nonumber\\ &\int{\mathcal{D}_{g_c}}\bar{\chi}\, \delta\left[\int\sqrt{g_c}\left(\bar{\chi}+{\chi_b}\right)\right] \exp\left(- {1\over{2\gamma}}\int{d^2}\xi\sqrt{g_c}\bar{\chi}{\mathcal{O}_c^D} \bar{\chi}\right). \end{eqnarray*} \noindent We can use the delta function fixing the boundary integral of $\phi$ to eliminate the constant zero mode of the covariant Laplacian $\Delta_c$. However, unlike the closed string case we still have another delta function which involves the other orthogonal modes of $\chi$. Unfortunately this means we are left with a functional integral too difficult to solve here. All these calculations can be attempted taking homogeneous Neumann boundary conditions on the Liouville field $\partial_{\hat n}\varphi=0$.
The results for the critical exponents using the scaling argument are the same. However, we run into difficulties in performing the semi-classical expansion because the classical solution $\varphi_c$ does not satisfy homogeneous Neumann boundary conditions: if the full Liouville field does, then the classical field and the quantum fluctuation are not independent, but rather are related to each other on the boundary by $\partial_{\hat n}\varphi_c+\partial_{\hat n}\chi=0$. So we conclude that the free boundary conditions are much better suited for the semi-classical expansion. \subsection{The tachyon gravitational scaling dimensions} Let us now calculate the gravitational scaling dimensions of the tachyon vertex operators for free boundary conditions. For the anomalous gravitational scaling dimension of the bulk tachyon vertex operator we consider the expectation value of the 1-point function at fixed area $A$ \begin{eqnarray*} &<{W_j}>(A)={1\over{\Gamma(A)}}\int{\mathcal{D}_{\hat{g}}} (\bar{\phi},\bar{\Psi})d{\bar{\Psi}_0} {{\left(\oint{d}\hat{s}\right)}^{1/2}} {e^{-{S_c^{00}}[\bar{\Psi},{\bar{\Psi}_0}, \hat{g}]-{\bar{S}^0}[\bar{\phi},\hat{g}]}}\nonumber\\ &\delta\left(\int{d^2}\xi\sqrt{\hat{g}}{e^{\alpha\phi}}-A\right) \int{d^2}{\xi_j} \sqrt{\hat{g}}{e^{i{p_j}\cdot{X}({\xi_j})}} {e^{{\gamma_j}\phi}}. \end{eqnarray*} \noindent By definition, the bulk gravitational scaling dimension is given as in the closed string by $<{W_j}>(A)\sim{A^{1-{\Delta_j}}}$. Applying the scaling argument we find ${\Delta_j}=1-{\gamma_j}/\alpha$ and this leads to the KPZ equation for the anomalous gravitational dimension in the open string: \[ {\Delta_j}-{\Delta_j^0}=-{\alpha^2}{\Delta_j}({\Delta_j}-1). \] Similarly we define the anomalous gravitational scaling dimension of the boundary tachyon vertex operator by $<{W_j^B}>(A)\sim{A^{1/2-{\Delta_j^B}}}$. Then the scaling argument gives ${\Delta_j^B}={\Delta_j}/2$.
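These dimensions follow from the same constant-shift argument used for the scaling laws; a sketch (ours) for the bulk operator:

```latex
\phi\to\phi+{\rho\over{\alpha}}:\qquad
{e^{{\gamma_j}\phi}}\to{e^{{\gamma_j}\rho/\alpha}}\,{e^{{\gamma_j}\phi}}\ ,
\qquad
\delta\left(\int{d^2}\xi\sqrt{\hat{g}}{e^{\alpha\phi}}-A\right)\to
{e^{-\rho}}\,\delta\left(\int{d^2}\xi\sqrt{\hat{g}}{e^{\alpha\phi}}-
{e^{-\rho}}A\right)\ .
```

The factors coming from the shift of the action and from the delta function are common to the numerator and to $\Gamma(A)$, and cancel in the ratio, leaving $<{W_j}>(A)={e^{{\gamma_j}\rho/\alpha}}<{W_j}>({e^{-\rho}}A)$. Comparison with $<{W_j}>(A)\sim{A^{1-{\Delta_j}}}$ then gives ${\Delta_j}=1-{\gamma_j}/\alpha$, and for a boundary operator the half-charge dressing ${e^{{\gamma_j}\phi/2}}$ halves the exponent, which is the origin of ${\Delta_j^B}={\Delta_j}/2$.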
We can also define critical exponents associated with the expectation values at fixed length $L$. These should also be interpreted as anomalous gravitational scaling dimensions. In this case the asymptotic limits are $<{W_j}>(L)\sim{L^{1-{\Delta_j}}}$ and $<{W_j^B}>(L)\sim{L^{1/2-{\Delta_j^B}}}$, where $\Delta_j$ and $\Delta_j^B$ are given as in the case of fixed area $A$. \subsection{A connection with matrix models} These results generalise to other models and physical systems. As we observed before the open string is a toy model for the $c\leq1$ boundary conformal field theories \cite{JC} coupled to 2D quantum gravity. In the next section we show that similar results can be written down for this more realistic class of models. Here we finish by considering a comparison with exact results of matrix models at genus zero \cite{MSS}. According to ref. \cite{MSS} we may deduce from matrix models calculations the following exact expression for $\Gamma(A,L)$ when the surface has the topology of a disc: \[ \Gamma(A,L)={A^x}{L^y}{e^{-{L^2}/A}}, \] where $x=-Q/\alpha$ and $y=-3+Q/\alpha$. This formula is consistent with our scaling laws given in eqs. (\ref{42}), (\ref{50}) and (\ref{51}). Substituting it into the definitions of $\Sigma(A)$ and $\Omega(L)$ we find: \[ \sigma(1)=x+y/2+7/2,\quad\omega(1)=2x+y+5. \] When we substitute back the values of $x$ and $y$ we get the same results for $\sigma(1)$ and $\omega(1)$ as we did using David, Distler and Kawai's scaling argument. This is an indication that our results should be in agreement with those obtained in models of dynamically triangulated open random surfaces. However it should be emphasised that a full comparison is beyond the scope of the present work. \section{Minimal Models On Open Random Surfaces} The open string analysis can now be easily extended to $c\leq1$ minimal conformal field theories on open random surfaces if we represent the matter sector by a conformally extended Liouville theory.
The curious affinity between the matter and gravitational sector Liouville theories that emerges for closed surfaces generalises to the case with boundaries. We simply take the matter action of eq. (\ref{30}) with additional boundary terms: \begin{eqnarray*} &{S_M}[\Phi,\tilde{g}]={1\over{8\pi}}\int{d^2}\xi\sqrt{\tilde{g}} \left[ {1\over{2}}{\tilde{g}^{ab}}{\partial_a}\Phi{\partial_b}\Phi+i\left( \beta-1/\beta\right)\tilde{R}\Phi\right]+\nonumber\\ &+{i\over{4\pi}}\left( \beta-1/\beta\right)\oint{d}\tilde{s}{k_{\tilde{g}}}\Phi\ +{\mu^2}\int{d^2}\xi\sqrt{\tilde{g}}\left({e^{i\beta\Phi}}+ {e^{-i/\beta\Phi}}\right)+\nonumber\\ &+\lambda\oint{d}\tilde{s}\left[ {e^{i\beta\Phi/2}}+{e^{-i/(2\beta)\Phi}}\right]. \end{eqnarray*} \noindent This is the conformally extended Toda field theory defined on an open surface for the Lie algebra $A_1$. It has recently been considered as a Coulomb gas description of the $c\leq1$ minimal conformal matter in the case of Neumann boundary conditions imposed on the matter field \cite{JS}. Here we assume without proof that the same is true when the matter satisfies free boundary conditions. In fact for both free and Neumann boundary conditions we have a fully Weyl invariant non-critical theory at the quantum level to all orders in the Coulomb gas perturbation theory. For definiteness we take here the free boundary conditions on all fields. The central charge of the matter theory is ${c_M}=1-6{{\left(\beta-1/\beta\right)}^{2}}$. Requiring that the sum of this and the central charges of the gravitational sector Liouville field and the reparametrisation ghosts vanish gives $\gamma=\pm{i}\beta$, where $\gamma$ is related to our previous string parameter $Q$ by $Q=i(\gamma+1/\gamma)$. The Liouville field renormalisation parameter must satisfy the equation $1-\alpha(\beta+1/\beta)+{\alpha^2}=0$ which, as before, gives us two branches ${\alpha_+}=\beta$ and ${\alpha_-}=1/\beta$.
All the boundary renormalisation parameters relate to $\alpha$ and $\gamma$ as in the string case. We find dressed vertex operators of vanishing conformal weight in the bulk \[ {U^D}(jj')=\int{d^2}\xi\sqrt{\hat{g}}\exp\left[\left(l\beta+ {{l'}\over{\beta}}\right) \phi\right]\exp\left[-i\left(j\beta-{{j'}\over{\beta}}\right)\Phi\right] \] \noindent where $l=-j$, $l'=j'+1$ or $l=j+1$, $l'=-j'$. On the boundary we also define dressed primary vertex operators of vanishing conformal weight consistent with the need to consider the Liouville field as an arbitrary Weyl scaling on the whole surface \[ {U_B^D}(jj')=\oint{d}\hat{s}\exp\left[\left(l\beta+{{l'}\over{\beta}} \right){\phi\over{2}} \right]\exp\left[-i\left(j\beta-{{j'}\over{\beta}}\right){\Phi\over{2}} \right]. \] \noindent As occurred for the string, Dirichlet boundary conditions on the Liouville field imply that we have no dynamical quantum degrees of freedom on the boundary, and hence no boundary vertex operators, although they still allow the cancellation of the Weyl anomaly provided the metric has a discontinuity as the boundary is approached. The open string formulas for the critical exponents generalise to these models. Thus the susceptibility exponent is $\sigma({\chi_o})=2-{\chi_o}Q/(2\alpha)$ and the Feynman mass exponent is $\omega({\chi_o})=2-{\chi_o}Q/ \alpha$. The semi-classical limit is obtained for $\beta\to+\infty$ and, just like for closed surfaces, selects the classical branch ${\alpha_+}=\beta$. As in the open string the saddle point expansion singles out the free boundary conditions on the Liouville field. Similarly we find the same expressions for the anomalous gravitational scaling dimensions of the primary vertex operators. In the end the gravitational scaling dimension of a boundary operator is half that of a bulk operator, the latter being related to its bare conformal dimension by the KPZ equation.
\section{Conclusions} In this paper we have shown how to extend the approach of David, Distler and Kawai to the coupling of boundary conformal field theories to 2D quantum gravity. The organising principle behind their approach is Weyl invariance at the quantum level applied to a perturbative expansion analogous to the Coulomb gas. We used this to determine the renormalised parameters, gravitational dressings and surface critical exponents such as the susceptibility of random surfaces, the anomalous gravitational scaling dimensions of primary vertex operators and the Feynman mass exponent. The crucial problem is the choice of boundary conditions on the Liouville field. We have discussed free, Neumann and Dirichlet boundary conditions on the Liouville field. The first two lead to similar results within this perturbative approach, but Dirichlet conditions imply that the metric is discontinuous as the boundary is approached. We have also considered the semi-classical expansion and advocated the free boundary conditions for the Liouville field, since homogeneous Neumann boundary conditions do not allow a clean split between the classical and quantum pieces of the field, but rather couple them together. As would be expected the bulk properties are equal for open and closed surfaces. This approach may also be naturally extended to higher genus and more complex boundary structures. Unfortunately as for closed surfaces the results only apply to the weak coupling of $c\leq1$ boundary conformal field theories to gravity. In the case of the open string this means unrealistic target space dimensions $d\leq1$. Finally, we found the same close affinity between the matter sector when represented by a Liouville theory and the gravitational sector in this weak Coulomb gas phase as occurs in the case of closed surfaces. \vspace{1cm} \centerline{\bf{Note added}} \vspace{0.25cm} After submitting this paper we were informed of refs. \cite{JAS} and \cite{JAM}. In ref.
\cite{JAS} the open string 2D quantum gravity with Neumann boundary conditions has been analysed. The results agree with ours. In ref. \cite{JAM} it is conjectured that Neumann and free boundary conditions are equivalent, although as our discussion shows the free boundary conditions are in fact better suited to the semi-classical expansion. \vspace{1cm} \centerline{\bf{Appendix A}} \vspace{0.25cm} \leftline{THE NON-LOCAL WEYL CHANGE OF ${\hat{G}_K}(\xi,\xi')$} \vspace{0.25cm} Start by taking eq. (\ref{15}) and multiply it by $d\hat{s}(\xi)$. Since $d\hat{s}(\xi)d\hat{s}(\xi''){\hat{K}_D}(\xi,\xi'')$ and $d\hat{s}(\xi){\hat{\delta}_B}(\xi-\xi')$ are Weyl invariant use ${\delta_{\rho}}d\hat{s}(\xi)=(1/2)\rho(\xi)d\hat{s}(\xi)$ to get \begin{equation} d\hat{s}(\xi)\oint{d}\hat{s}(\xi''){\hat{K}_D}(\xi,\xi''){\delta_{\rho}} {\hat{G}_K}(\xi'',\xi')=-{{\rho(\xi)d\hat{s}(\xi)}\over{2\oint{d}\hat{s} (\eta)}}+ {{d\hat{s}(\xi)\oint{d}\hat{s}(\zeta)\rho(\zeta)}\over{2{{\left[\oint {d}\hat{s}(\eta)\right]}^2}}}\label{57}. \end{equation} Next multiply eq. (\ref{57}) by ${\hat{G}_K}(\xi,\xi''')$ and integrate on $\xi$. Using eqs. (\ref{15}) and (\ref{55}) find: \[ {\delta_\rho}{\hat{G}_K}(\xi,\xi')=- {{\oint{d}\hat{s}(\xi''){\hat{G}_K}(\xi'',\xi)\rho(\xi'')}\over{2\oint{d} \hat{s}(\xi''')}}+{{\oint{d}\hat{s}(\xi''){\delta_\rho}{\hat{G}_K} (\xi'',\xi')}\over{\oint{d}\hat{s}(\xi''')}} \] Finally the Weyl transformation of eq. (\ref{55}) \[ \oint{d}\hat{s}(\xi''){\delta_\rho}{\hat{G}_K}(\xi'',\xi')= -{1\over{2}}\oint{d}\hat{s}(\xi'')\rho(\xi''){\hat{G}_K}(\xi'',\xi') \] leads to eq. (\ref{56}). \vspace{1cm} \centerline{\bf{Appendix B}} \vspace{0.25cm} \leftline{ANOMALY CANCELLATION FOR NEUMANN 2D QUANTUM GRAVITY} \vspace{0.25cm} The charge conservation selection rule is $\int{d^2}\xi\sqrt{\hat{g}}{{\hat{J}}_N^{MM'}}=0$. 
Here we have written ${{\hat{J}}_N^{MM'}}=Q\hat{R}+{\bar{Q}_B}{k_{\hat{g}}} {\hat{\delta}_B^2} -8\pi\alpha{\sum_{P=1}^M} {\delta^2}(\xi-{\xi_P})/\sqrt{\hat{g}(\xi)}- 4\pi{\alpha_B}{\sum_{P=1}^{M'}} {\hat{\delta}_B^2}(\xi-{\xi_P})$, where $\int{d^2}\xi\sqrt{\hat{g}} {\hat{\delta}_B^2}=\oint{d}\hat{s}$. The Neumann Green's function satisfies \[ \hat{\Delta}{\hat{G}_N}(\xi,\xi')={{{\delta^2}(\xi-\xi')}\over {\sqrt{\hat{g}(\xi)}}}-{1\over{\int{d^2}\xi''\sqrt{\hat{g}(\xi'')}}}, \quad {\partial_{\hat{n}}}{\hat{G}_N}(\xi,\xi')=0, \] \[ \int{d^2}\xi\sqrt{\hat{g}(\xi)}{\hat{G}_N}(\xi,\xi')=0. \] We find the non-local functional: \[ {\mathcal{F}_N^{MM'}}={1\over{16\pi}}\int{d^2}\xi'{d^2}\xi'' \sqrt{\hat{g}(\xi')}{\hat{J}_N^{MM'}}(\xi'){\hat{G}_N}(\xi',\xi'') \sqrt{\hat{g}(\xi'')}{\hat{J}_N^{MM'}}(\xi''). \] Consider just one bulk Liouville vertex operator. Then: \begin{eqnarray*} &{\delta_\rho}{\mathcal{F}_N^{10}}={Q\over{8\pi}}\int{d^2}\xi \sqrt{\hat{g}} {\hat{J}_N^{10}}\rho+{\alpha^2}\rho({\xi_1})-{Q\over{8\pi}} \int{d^2}\xi \sqrt{\hat{g}}{\hat{J}_N^{10}}{\delta_\rho}\ln\int{d^2}\xi\sqrt{\hat{g}}- \nonumber\\ &-{1\over{8\pi\int{d^2}\xi\sqrt{\hat{g}(\xi)}}}\int{d^2}\xi \sqrt{\hat{g}}{\hat{J}_N^{10}}\int{d^2}\xi'{d^2}\xi'' \sqrt{\hat{g}(\xi')}\rho(\xi'){\hat{G}_N}(\xi',\xi'') \sqrt{\hat{g}(\xi'')}{\hat{J}_N^{10}}(\xi''), \end{eqnarray*} where ${\bar{Q}_B}=2Q$. This eliminates the terms in ${\partial_{\hat{n}}}\rho$. The $\alpha^2$ term comes from the Weyl change of ${\hat{G}_N}(\xi,\xi)$, $\xi\not\in{B}$. We have: \[ {\hat{G}_N}(\xi,\xi')=\hat{G}(\xi,\xi')+{\hat{H}_N}(\xi,\xi') \] where ${\hat{H}_N}(\xi,\xi')$ is defined by: \[ \hat{\Delta}{\hat{H}_N}(\xi,\xi')=-{1\over{\int{d^2}\xi'' \sqrt{\hat{g}(\xi'')}}},\quad{\partial_{\hat{n}}}{\hat{H}_N} (\xi,\xi')=-{\partial_{\hat{n}}}\hat{G}(\xi,\xi'). \] Then the local change is ${\delta_{\rho}}{\hat{G}_{N\varepsilon}}(\xi,\xi)=\rho(\xi)/(4\pi)$. Thus $Q=\pm\sqrt{(25-d)/6}$ and $1-\alpha{Q}+{\alpha^2}=0$. 
Now consider just one boundary Liouville vertex operator. When $\xi=\xi'$ is on the boundary, ${\hat{H}_N}(\xi,\xi)$ is divergent because $\hat{G}(\xi,\xi)$ is singular. In a neighbourhood of order $\sqrt{\varepsilon}$ around $\xi$ the boundary may be taken to be flat. Then ${\hat{G}_N}(\xi,\xi)$ is defined by the method of images so that ${\hat{H}_N}(\xi,\xi)=\hat{G}(\xi,\xi)$. Hence the local change now is ${\delta_{\rho}}{\hat{G}_{N\varepsilon}}(\xi,\xi)=\rho(\xi)/(2\pi)$. Thus ${\alpha_B}=\alpha$.
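The doubling of the coincidence-limit coefficient at the boundary, which is what fixes ${\alpha_B}=\alpha$, is easy to see numerically in the simplest flat model, the upper half-plane with a single image charge (a toy version of the locally flat neighbourhood used above; this illustration is ours):

```python
import math

# Toy model: Neumann Green's function on the flat upper half-plane y >= 0,
# built from the free 2D Green's function by a single image charge.
def g_free(p, q):
    # free 2D Green's function of the Laplacian, -(1/2*pi) * ln|p - q|
    return -math.log(math.hypot(p[0] - q[0], p[1] - q[1])) / (2.0 * math.pi)

def g_neumann(p, q):
    # image charge reflected in the boundary y = 0
    return g_free(p, q) + g_free(p, (q[0], -q[1]))

def log_coeff(p):
    # numerically extract the coefficient of -ln(eps) in G_N(p, p + eps*x_hat)
    e1, e2 = 1e-3, 1e-4
    g1 = g_neumann(p, (p[0] + e1, p[1]))
    g2 = g_neumann(p, (p[0] + e2, p[1]))
    return (g2 - g1) / math.log(e1 / e2)

c_bulk = log_coeff((0.0, 1.0))  # interior point: the image stays far away
c_bdry = log_coeff((0.0, 0.0))  # boundary point: the image sits on the source

assert abs(c_bulk - 1.0 / (2.0 * math.pi)) < 1e-3
assert abs(c_bdry - 1.0 / math.pi) < 1e-6
```

The ratio of the two coefficients is the factor of two that converts the bulk result ${\delta_{\rho}}{\hat{G}_{N\varepsilon}}(\xi,\xi)=\rho(\xi)/(4\pi)$ into $\rho(\xi)/(2\pi)$ on the boundary.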
hep-ph/9605382
\section{Introduction} \label{sec:1} In the past few years it has become clear that topological defects produced in the early universe may have a considerably richer microstructure than had previously been imagined\cite{sym rest}. In particular, the core of a defect acquires additional features at each subsequent symmetry breaking which preserves the topology of the object. The new microphysics associated with additional core structure has been exploited by several authors to provide a new, defect-based scenario for electroweak baryogenesis\cite{{BDT},{DMEWBG}}. The purpose of the present paper is to constrain general particle physics theories by demanding that the microphysics of defects in these models be consistent with the requirements of the standard cosmology. The basic idea, due originally to Davis and Shellard\cite{{D&S},{D&S 89},{V&S}}, is as follows. If a spontaneously broken field theory admits linear topological defects - {\it cosmic strings} - which subsequently become superconducting, then an initially weak current on a closed string loop will automatically tend to amplify as the loop undergoes dissipative contraction. This current may become sufficiently strong to modify the dynamics and halt the contraction so that the loop settles down in an equilibrium state known as a {\it vorton}. The population of vorton states produced by such a mechanism is tightly constrained by empirical cosmological considerations. It was first pointed out by Davis and Shellard that to avoid obtaining a present day cosmological closure factor ${\mit\Omega}_{\rm v}$ greatly exceeding unity, any theory giving rise to stable vorton creation by superconductivity that sets in during string formation is ruled out if the symmetry breaking scale is above some critical value.
One of the first attempts to estimate this critical scale\cite{C} indicated that it probably could not exceed that of electroweak symmetry breaking at about $10^2 \GeV$ by more than a few orders of magnitude. Such strong limits are of course dependent on the supposition that the vortons are absolutely stable on timescales as long as the present age of the universe. However, even if the vortons only survive for a few minutes, this would be sufficient to significantly affect primordial nucleosynthesis and hence provide limits of a weaker but nevertheless still interesting kind\cite{vorton papers}. In all previous work it was supposed that the relevant superconductivity sets in during or very soon after the primary phase transition in which the strings are formed. What is new in the present work is an examination of the extent to which the limits discussed above are weakened if it is supposed that superconductivity sets in during a distinct secondary phase transition occurring at what may be a very much lower temperature than the string formation scale\cite{DPWP}. The structure of the paper is as follows. In Section IIA we shall, for completeness, give a brief introductory review of string superconductivity. In Section IIB we describe the mechanism of formation of a vorton from an originally distended string loop and in Section IIC we summarize the basic properties of vorton equilibrium states. In Section IIIA we first comment on how it can be that the formation scale and the superconductivity scale can be separated by many orders of magnitude. We then demonstrate, using a suitably simplified statistical description of the string network, how to estimate the vorton abundance for a generic theory as a function of the temperature and the symmetry breaking scales. In Section IIIB we apply this procedure to the relatively simple case when the superconductivity develops during the early period when dissipation is mainly due to the friction of the ambient medium. 
In Section IIIC we go on to treat the more complicated situation that arises if the superconductivity does not develop until the much later stage in which dissipation is mainly due to gravitational radiation and in Section IIID we briefly comment on stability issues. In Section IV we consider the comparatively weak bounds that are obtained if it is supposed that the vortons are stable only for a few minutes. Finally, in Section V we consider the rather stronger bounds that are obtained if the vortons are of a kind that is sufficiently stable to survive as a constituent of the dark matter in the universe at the present epoch. We conclude in Section VI. \section{Consequences of Cosmic String Superconductivity} \subsection{Currents in the Witten model.} In so far as it has been developed at the present time, the quantitative theory of vorton structure has been entirely based on the supposition that the essential features are describable in terms of a simple bosonic superconductivity model of the kind introduced by Witten\cite{W}. This category of models consists of spontaneously broken gauged $U(1) \times U(1)$ field theories, which generalise the even simpler category of spontaneously broken gauged $U(1)$ field theories on which the standard Kibble description of non superconducting cosmic strings is based. The Kibble model is characterised by a potential $V$ with a quartic dependence on a complex Higgs field $\phi$ of the familiar form ${\tilde \lambda}(\vert\phi\vert^2 -v^2)^2$. Here ${\tilde \lambda}$ is a dimensionless coupling and $v$ is a mass scale of order the ``Kibble mass" $\mx$, whose square is identifiable with the string tension, $\T$, which is (in this model) constant and equal to the mass per unit length. The string is defined as the region in which $\vert\phi\vert$ is topologically excluded from its vacuum value as given by $v\simeq\mx$.
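The vacuum structure behind these statements can be checked numerically on the radial slice $\rho=\vert\phi\vert$ (a side illustration of ours; identifying the curvature $8{\tilde \lambda}v^2$ at the minimum with a mass squared of order $\mx^{\,2}$ assumes ${\tilde \lambda}$ of order unity):

```python
# Radial slice V(rho) = lam*(rho**2 - v**2)**2 of the Kibble-model
# potential, with rho = |phi| (toy parameter values).
lam, v = 0.8, 2.5

def V(r):
    return lam * (r * r - v * v) ** 2

# True vacuum at |phi| = v; the symmetric point rho = 0 carries the
# false-vacuum energy density lam*v**4 trapped inside the string core.
assert V(v) == 0.0
assert abs(V(0.0) - lam * v**4) < 1e-12

# Curvature at the minimum: V''(v) = 8*lam*v**2, setting the scalar mass
# scale of the symmetry breaking (finite-difference estimate).
h = 1e-4
second = (V(v + h) - 2.0 * V(v) + V(v - h)) / h**2
assert abs(second - 8.0 * lam * v**2) < 1e-3
```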
In addition to $\phi$, the Witten model contains a second complex scalar field, $\sigma$, and is characterised by a quartic potential depending on several dimensionless parameters that, like ${\tilde \lambda}$, are assumed to be of order unity. Witten's potential function also depends on a second mass parameter, $m_\sigma$ say, which determines the temperature scale below which $\sigma$ gives rise to a current carrying condensate on the vortex. In the Witten model the vortex defects are cosmic strings in which the tension $\T$ is no longer constant but variable, as a function of the current magnitude $\vert j\vert$, attaining its maximum (Kibble) value only when the current magnitude vanishes, so that in general one has $\T\leq \mx^{\,2}$. In earlier discussions\cite{{D&S},{D&S 89},{V&S},{C},{vorton papers}} of vorton physics it was implicitly or explicitly supposed that the magnitude of $m_\sigma$ was not very different from that of the original symmetry breaking mass parameter $\mx$. In that case, the formation of the $\sigma$ condensate could be considered as part of the same symmetry breaking phase transition as that by which the strings themselves were formed. Our purpose here is to consider scenarios in which $m_\sigma$ may be very much smaller than $\mx$, as will occur when successive phase transitions at two entirely distinct cosmological epochs are involved\cite{DPWP}. It is only after the second phase transition that a condensate with amplitude $\vert\sigma\vert$ and angular variable phase, $\theta$ say, will form on the string world sheet.
There then exists an identically conserved worldsheet phase current with components \begin{equation} \tilde j^\alp = {1\over2\pi}\varepsilon^{\alp\bet}\partial_\bet\theta \ , \label{old 1} \end{equation} where $\varepsilon^{\alp\bet}$ are the components of the antisymmetric unit surface element tensor induced on the 2-dimensional worldsheet. As well as this topologically conserved phase current, there will also be a dynamically conserved particle number current with components given\cite{V&S} by \begin{equation} j_\alp = 2{\tilde\Sigma}(\partial_\alp\theta - e A_\alp) \ , \label{old 2} \end{equation} where ${\tilde \Sigma}$ is the surface integral of $\vert \sigma\vert^2$ over the vortex core cross section. Here $A_\alp$ are the induced components of the electromagnetic background potential and $e$ is the coupling constant associated with the carrier field if it is gauged. When $e$ is non zero $\tilde j^\alp$ will be gauge dependent, but $j^\alp$ is physically well defined and will determine a corresponding electric surface current given by \begin{equation} I^\alp=ej^\alp\ . \label{plus 1} \end{equation} \subsection{Formation of Vorton States.} In the case of a closed string loop the conserved surface currents characterised above will determine a corresponding pair of integral quantum numbers that are expressible in terms of circuit integration round the loop. These are given by \begin{equation} N = \oint\tilde j^\alp d\ell_\alp\ ,\hskip 1 cm Z = \oint j^\alp d\ell_\alp \ , \label{old 3} \end{equation} where $d\ell_\alp$ are the components of the length element normal to the circuit in the worldsheet. Note that even when $\tilde j^\alp$ is gauge dependent, $N$ is well defined. A non conducting Kibble type string loop must ultimately decay by radiative and frictional drag processes until it disappears completely.
However, since a Witten type conducting string loop is characterised by the classically conserved quantum numbers $N$ and $Z$, such a loop may be saved from disappearance by reaching a state in which the energy attains a minimum for given non zero values of these numbers. In view of a widespread misunderstanding about this point, it is to be emphasised that the existence of such energy minimising ``vorton" states does not require that the carrier field be gauge coupled. If there is indeed a non vanishing charge coupling then the loop will of course be characterised by a corresponding total electric charge \begin{equation} Q=\oint I^\alp d\ell_\alp \ , \label{plus 2} \end{equation} in terms of which the particle number will be expressible directly as $Z=Q/e$. However, the important point is that even in the uncoupled case, for which $I^\alp$, and hence also $Q$, vanish, the quantum number $Z$ will nevertheless remain perfectly well defined. Although the essential physical properties of a vorton state will be fully determined by the specification of the relevant pair of integers $N$ and $Z$, it is not true that any arbitrary choice of these two numbers will characterise a viable vorton configuration. This is because the requirement that the strictly classical string description should remain valid turns out to be rather restrictive. To start with, it is evident that to avoid decaying completely like a non conducting loop, a conducting loop must have a non zero value for at least one of the numbers $N$ and $Z$. In fact, one would expect that both these numbers should be reasonably large compared with unity to diminish the likelihood of quantum decay by barrier tunneling. However, even for moderately large values of $N$ and $Z$ there will be further restrictions on the admissible values of their ratio $Z/N$ due to the necessity of avoiding spontaneous particle emission as a result of current saturation.
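The distinct characters of the two quantum numbers, $N$ topological and $Z$ dynamical, can be made concrete by evaluating the circuit integrals of eq. (\ref{old 3}) on a discretised loop (toy values; uncoupled case $e=0$, so that $j_\alpha=2{\tilde\Sigma}\partial_\alpha\theta$; the periodic wiggle added to the phase is our own illustration of the insensitivity of $N$ to smooth deformations):

```python
import math

# Toy discretised loop: phase with uniform gradient plus a periodic wiggle.
ell_v = 5.0                      # loop circumference (arbitrary units)
two_sigma = 1.4                  # 2*Sigma_tilde, condensate normalisation
k = 2.0 * math.pi * 3 / ell_v    # uniform phase gradient, 3 units of winding
omega = 2.2                      # phase frequency

n = 100_000
dl = ell_v / n
winding = 0.0   # N = (1/2*pi) * closed integral of (d theta/d ell) d ell
number = 0.0    # Z = 2*Sigma_tilde * closed integral of (d theta/d t) d ell
for i in range(n):
    ell = (i + 0.5) * dl
    # theta = omega*t - k*ell + 0.3*sin(2*pi*ell/ell_v), single-valued wiggle
    dtheta_dl = -k + 0.3 * (2.0 * math.pi / ell_v) * math.cos(2.0 * math.pi * ell / ell_v)
    winding += dtheta_dl * dl / (2.0 * math.pi)
    number += two_sigma * omega * dl

# The wiggle integrates to zero: N is a pure winding number (3 here, up to
# the orientation of the circuit), while Z sees only omega and the loop size.
assert abs(abs(winding) - 3) < 1e-6
assert abs(number - two_sigma * omega * ell_v) < 1e-6
```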
The existence of a maximum amplitude for the string current was originally predicted by Witten himself\cite{W}. However, quantitative knowledge about the current saturation phenomenon remained undeveloped until the appearance of an important pioneering investigation by Babul, Piran, and Spergel\cite{BPS}, who undertook a detailed numerical analysis of the mechanism whereby the presence of a current tends to diminish the string tension $\T$. In a non conducting string, the tension $\T$ is identifiable with the energy per unit length ${\cal U}$, but in the conducting case the diminution of $\T$ is accompanied by an augmentation of ${\cal U}$ such that \begin{equation} \T\leq \mx^{\,2}\leq {\cal U}\ . \label{plus 3} \end{equation} The analysis of~\cite{BPS} provided an empirical ``equation of state'' specifying $\T$ as a non-linear function of the current magnitude $\vert j\vert$ and hence of the energy density ${\cal U}$. The minimum of $\T$, and hence the maximum allowed value for ${\cal U}$, is obtained when the current amplitude $\vert j\vert$ reaches a critical saturation value with order of magnitude \begin{equation} \vert j \vert^2\approx {\cal U}-\T \lesssim m_\sigma^{\,2}\ . \label{plus 4} \end{equation} Such knowledge of the equation of state is precisely what is required for investigating string dynamics. We now apply this to vorton equilibrium states. The phase angle, $\theta$, is expressible in terms of the background time coordinate $t$ and a coordinate $\ell$ representing arc length round the loop by \begin{equation} \theta = \omega t-k\ell \ , \label{old 4} \end{equation} where $\omega$ and $k$ are constants.
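The connection with the quantum numbers can be made explicit: integrating the currents (\ref{old 1}) and (\ref{old 2}) round a loop of circumference $\lv$ with this ansatz (neglecting the gauge coupling, and up to orientation conventions) reduces the circuit integrals (\ref{old 3}) to \begin{equation} N={k\lv\over 2\pi}\ ,\hskip 1 cm Z=2{\tilde\Sigma}\,\omega\lv\ , \end{equation} which may be inverted to express $\omega$ and $k$ in terms of the quantum numbers.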
If the total circumference of the vorton configuration is denoted by $\lv$ then it can be seen that these constants will be given in terms of the corresponding quantum numbers by \begin{equation} \omega=\frac{Z}{2{\tilde \Sigma}\lv}\ ,\hskip 1 cm k=\frac{2\pi N}{\lv}\ . \label{plus 5} \end{equation} The general condition\cite{C ring} for purely centrifugally supported equilibrium is that the rotation velocity should be given by a formula of the same simple form as the one\cite{C 89} for the speed of extrinsic wiggle propagation, namely \begin{equation} v^2=\frac{\T}{{\cal U}} \ . \label{plus 6} \end{equation} Note that this relationship applies when the electrical coupling is absent or, as will commonly be the case, negligible because of the smallness of the fine structure constant\cite{P ring}. \subsection{Properties of Vortons.} The random distribution of initial values of the quantum numbers $N$ and $Z$ leads to the formation of a range of qualitatively different kinds of vorton states. We shall now briefly review the reasons why it is to be expected that the most numerous initially will be of approximately {\it chiral} type, meaning that their rotation velocity is comparable with the speed of light as given by $v=1$. Note, however, that it may be that vortons of the very different {\it subsonic} type, with $v\ll 1$, will be favoured by natural selection in the long run. In the case of a {\it spacelike} current, with $\omega^2<k^2$ (the only possibility envisaged in~\cite{BPS}), the velocity given by~(\ref{plus 6}) is to be interpreted simply as the {\it phase speed}: \begin{equation} v={\omega\over k}={\pi Z\over{\tilde \Sigma} N }\ .
\label{plus 7}\end{equation} Since we are assuming that $m_\sigma$ is small compared with $\mx$, the saturation limit (\ref{plus 4}) implies \begin{equation} \frac{{\cal U}-\T}{\T}\ll 1 \ , \label{plus 8}\end{equation} which obviously, by (\ref{plus 6}), implies the property of approximate chirality, $v\simeq1$. It is expected on dimensional grounds that, although the sectional integral ${\tilde \Sigma}$ is a function of the current, it will not differ greatly from order unity. This behaviour has been observed numerically in particular cases\cite{BPS,P} (see also \cite{CP}). It can therefore be deduced that approximate chirality requires the vorton to be characterised by a pair of quantum numbers having roughly comparable orders of magnitude, \begin{equation} \vert Z\vert \approx N \ , \label{plus 9} \end{equation} where we use an orientation convention such that $N$ is always positive. On purely statistical grounds one would expect that the two quantum numbers would most commonly be formed with comparable orders of magnitude, so as to satisfy (\ref{plus 9}), though not necessarily with an exact ratio small enough to give a spacelike vorton current. In fact, although the limit (\ref{plus 4}) definitely excludes the possibility of vorton states with $N\gg \vert Z\vert$, it turns out that there is nothing to prevent the existence of non-chiral vortons having a current that is not just marginally timelike ($\omega^2>k^2$), but for which $\vert Z\vert\gg N$. In the {\it timelike} case the velocity $v$ in (\ref{plus 6}) is not the phase speed but the {\it current velocity} given by \begin{equation} v=k/\omega \ .
\label{current velocity} \end{equation} A preliminary application\cite{Moriand 95} of the principles described in the following sections suggests that subsonic vortons, which will necessarily be characterised by $\vert Z\vert \gg N$, will initially be formed in much smaller numbers than the more familiar chiral variety described by (\ref{plus 9}). This means that if ordinary chiral vortons are sufficiently stable to survive over cosmologically significant timescales (a question that must be left open for future research) then they can provide us with much stronger constraints on admissible particle theories than the more exotic subsonic variety. In order to keep our discussion as clear and simple as possible, we shall therefore say no more about subsonic vortons and restrict our attention to the chiral variety. Chiral vortons are only marginally affected by the electromagnetic coupling, $e$\cite{P ring}. Therefore, they can be described by the simplest kind of elastic string formalism in which electromagnetic effects are ignored altogether. This means\cite{C ring} that the mass energy $E_{\rm v}$ of such a vorton will be given in terms of its circumference $\lv$ by \begin{equation} E_{\rm v}= \lv({\cal U}+\T)\simeq \lv \mx^{\,2}\ . \label{old 5} \end{equation} In order to evaluate this quantity all that remains is to work out $\lv$. In the earliest work on vorton states it was always presumed that they would be circular, with radius therefore given by $\Rv=\lv/2\pi$ and angular momentum quantum number $J$ given\cite{C ring} by $J=NZ$, or equivalently by $J^2= {\cal U}\T\lv^{\,4}/4\pi^2$. Thus, eliminating $J$, one obtains the required result as \begin{equation} \lv=(2\pi)^{1/2}\vert NZ\vert^{1/2}({\cal U}\T)^{-1/4} \simeq(2\pi)^{1/2}\vert NZ\vert^{1/2}\mx^{-1} \ .
\label{old 6} \end{equation} More recent work\cite{Cam} has established that even in cases where the vorton configuration is strongly distorted, the expression~(\ref{old 6}) will remain perfectly valid. Combining this with~(\ref{old 5}), and recalling that for chiral vortons $|Z|\approx N$, we thus obtain a final estimate of the vorton mass energy as \begin{equation} E_{\rm v}\simeq(2\pi)^{1/2}\vert NZ\vert^{1/2}\mx\approx N\mx\ . \label{plus 10} \end{equation} The preceding formulae are based on a classical description of the string dynamics. This is valid only if the length $\lv$ is large compared with the relevant quantum wavelengths, of which the longest is the Compton wavelength associated with the carrier mass $m_\sigma$. It can be seen from (\ref{old 6}) that this condition, namely \begin{equation} \lv\gg m_\sigma^{-1} \ , \label{minimum} \end{equation} will only be satisfied if the product of the quantum numbers $N$ and $Z$ is sufficiently large. A loop that does not satisfy this requirement will never stabilise as a vorton. After its length has been reduced to the order of magnitude given by (\ref{minimum}) by a classical contraction process, it will presumably undergo a rapid quantum decay whereby it will finally disappear completely, just as if there were no current. \section{The Vorton Abundance} \subsection{Basic postulates: a scheme based on two mass scales.} The present analysis will be carried out within the framework of the usual FRW model in which the universe evolves in approximate thermal equilibrium with a cosmological background temperature $\Th$. The effective number of massless degrees of freedom at temperature $\Th$ is denoted by $g^*$.
Note that $g^*\approx 1$ at low temperatures but that in the range where vorton production is likely to occur, from the electroweak scale through to grand unification, $g^* \simeq 10^2$ is a reasonable estimate. Any vorton formation process must occur during the radiation dominated era, which ended when the temperature of the universe dropped below about $10^{-9} \GeV$, after which the universe soon became effectively transparent. The relevant cosmological quantities are the age of the universe, given by $t \approx H^{-1}$ where $H$ is the Hubble parameter, and the radiation dominated time-temperature relationship \begin{equation} t\approx {\mP\over\sqrt{g^*}\,\Th^2} \ , \label{plus 18} \end{equation} where $\mP$ is the Planck mass. During this cosmological evolution, the particle physics gauge group is assumed to undergo a series of successive spontaneous symmetry breaking phase transitions, expressible schematically as \begin{equation} \GGUT \mapsto \cdots \H \cdots \mapsto \GEW \mapsto SU(3)\times U(1) \ . \label{old 7} \end{equation} Here $\GGUT$ is the ``grand unified'' group, $\H$ is some hypothetical intermediate symmetry group (such as that of the axion phase), and $\GEW$ is the standard model group $SU(3)\times SU(2)\times U(1)$ or one of its non-standard (e.g. supersymmetric) extensions. The role of the Witten model is to provide an approximate description of an evolution process dominated by two distinct steps in this chain. When a semi-simple symmetry group, $\G$ say, is broken down to a subgroup, $\H$ say, the topological criterion for cosmic string formation is that the first homotopy group of the quotient should be non trivial: \begin{equation} \pi_1\{\G/\H\} \neq 1 \ . \label{old 8} \end{equation} Our primary supposition is that such a process occurs at some particular cosmological temperature, $\Tx$, which we assume to be of the same order of magnitude as the relevant Kibble mass scale $\mx$.
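As a rough numerical orientation, the time-temperature relation (\ref{plus 18}) can be evaluated directly; the short script below is an illustrative sketch (the Planck mass and the natural-unit conversion factor are standard values inserted by hand, not quantities quoted in the text), confirming that at the electroweak scale the corresponding cosmological age is of order $10^{-10}$ seconds.

```python
# Numerical check of the radiation-era time-temperature relation
# t ~ m_P / (sqrt(g*) T^2), in natural units (hbar = c = k_B = 1).
# The Planck mass and the GeV^-1 -> seconds conversion are standard
# reference values inserted here for illustration only.
M_PLANCK_GEV = 1.2e19        # Planck mass in GeV
GEV_INV_TO_SEC = 6.6e-25     # 1 GeV^-1 expressed in seconds

def age_in_seconds(T_gev, g_star=100.0):
    """Cosmological age t ~ m_P/(sqrt(g*) T^2), converted to seconds."""
    return M_PLANCK_GEV / (g_star ** 0.5 * T_gev ** 2) * GEV_INV_TO_SEC

# At the electroweak scale, T ~ 100 GeV with g* ~ 100, this gives
# t of order 10^-10 s.
print(age_in_seconds(100.0))
```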
This mass scale is interpretable as being of the order of the mass of the Higgs particle responsible for the symmetry breaking, according to the simple model discussed in the previous section. Our next basic postulate is that a current carrying field, characterised by the independent mass scale $m_\sigma$, condenses on the ensuing string defect at a subsequent stage, when the background temperature has dropped to a lower value, $\Ts$, which we assume to have the same order of magnitude as the mass scale $m_\sigma$. The formation of a condensate with finite amplitude characterised by the dimensionless sectional integral ${\tilde \Sigma}$ does not in itself imply a non zero expectation value for the corresponding local current vector, $j$. However, one expects that thermal fluctuations will give rise to a non zero value for its squared magnitude $\vert j\vert^2$, and hence that a random walk process will result in a spectrum of finite values for the corresponding string loop quantum numbers $N$ and $Z$. Therefore, in the long run, those loops for which these numbers satisfy the minimum length condition (\ref{minimum}) are predestined to become stationary vortons, provided of course that the quantum numbers are strictly conserved during the subsequent motion, a requirement whose validity depends on the condition that string crossing processes later on are statistically negligible. We describe these loops as {\it protovortons}. Note that the protovortons will not become vortons in the strict sense until a lower temperature, the vorton relaxation temperature $\Tr$ say (whose value will not be relevant for our present purpose), since the loops must first lose their excess energy. Whereas frictional drag and electromagnetic radiation losses will commonly ensure rapid relaxation, there may be cases in which the only losses are due to the much weaker mechanism of gravitational radiation.
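The random walk buildup of the quantum numbers can be illustrated with a toy simulation (a minimal sketch, not from the text, under the crude assumption that successive fluctuation cells of wavelength $\lambda$ contribute independent phase increments of random sign): the net winding accumulated over $L/\lambda$ cells then has a root mean square value of order $\sqrt{L/\lambda}$, which is the scaling invoked for the estimate (\ref{N}) below.

```python
# Toy illustration (not from the text) of the random-walk buildup of a
# loop's winding number: model a loop of length L as L/lambda cells, each
# contributing an independent phase step of random sign, and measure the
# root mean square of the net winding over many trials.
import random

def rms_winding(n_cells, n_trials=4000, seed=1):
    """RMS net winding after n_cells independent +/-1 phase steps."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        winding = sum(rng.choice((-1, 1)) for _ in range(n_cells))
        total += winding * winding
    return (total / n_trials) ** 0.5

# For L/lambda = 400 cells the RMS winding is close to sqrt(400) = 20.
print(rms_winding(400))
```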
As the string network evolves, the distribution rarefies due to the damping out of its fine structure, first by friction and later by radiation reaction. However, not all of its lost energy goes directly into the corresponding frictional heating of the background or emitted radiation. There will always be a certain fraction, $\varepsilon$ say, that goes into loops which evolve without subsequent collisions with the main string distribution. It is this process that provides the raw material for vorton production. Such loops will ultimately be able to survive as vortons if the current induced by random fluctuations during the carrier condensation process is sufficient for the condition (\ref{minimum}) to be satisfied, i.e. provided their winding number and particle number are large enough to satisfy \begin{equation} \vert NZ \vert^{1/2} \gg {\Tx\over\Ts}\ . \label{minprod} \end{equation} Any loop that fails to satisfy this condition is doomed to lose all its energy and disappear. In favourable circumstances (namely those considered in Section IIIB), most of the loops that emerge in this way at $\Ts$ will satisfy the condition~(\ref{minprod}) and thus be describable as protovortons. However, in other cases (namely those considered in Section IIIC), the majority of the loops that emerge during the period immediately following the carrier condensation will be too small to have acquired sufficiently large quantum numbers by this stochastic mechanism. These loops will therefore not be viable in the long run and are classified as {\it doomed loops}. Nevertheless, even in such unfavourable circumstances, the monotonic increase of the damping length $L_{\rm min}$ will ensure that at a lower temperature $\Tf<\Ts$ a later, and less prolific, generation of emerging loops will after all be able to qualify as protovortons. We refer to $\Tf$ as the protovorton formation temperature.
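In terms of the underlying mass scales, the threshold (\ref{minprod}) is simply the quantum number form of the minimum length condition: inserting (\ref{old 6}) into (\ref{minimum}) gives \begin{equation} (2\pi)^{1/2}\vert NZ\vert^{1/2}\,\mx^{-1}\gg m_\sigma^{-1} \quad\Longrightarrow\quad \vert NZ\vert^{1/2}\gg {\mx\over (2\pi)^{1/2}\, m_\sigma}\approx {\Tx\over\Ts}\ , \end{equation} using the order of magnitude identifications $\mx\approx\Tx$ and $m_\sigma\approx\Ts$.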
The scenario summarised above is based on the accepted understanding of the Kibble mechanism\cite{V&S}, according to which, after the temperature has dropped below $\Tx$, the effect of various damping mechanisms will remove most of the structure below an effective smoothing length, $L_{\rm min}$, which will increase monotonically as a function of time, so that nearly all the surviving loops will have a length $L=\oint d\ell$ that satisfies the inequality \begin{equation} L\gtrsim L_{\rm min}\ . \label{loopl} \end{equation} There will thus be a distribution of string loops, of which the most numerous will be relatively short ones, with $L\approx L_{\rm min}$, that are on the verge of emerging, or that have already emerged, as protovortons or doomed loops as the case may be. Whereas on larger scales closed loops and wiggles on very long string segments will be tangled together, on the shortest scales, characterised by the lower cutoff $L_{\rm min}$, loops will be of a relatively smooth form. It is these smallest loops that are candidates for subsequent transformation into vortons. The total number density of small loops with length and radial extension of the order of $L_{\rm min}$ will (due to the rapid fall off of the spectrum that is expected for larger scales) be not much less than the number density of all closed loops, and so will be given by an expression of the form \begin{equation} n\approx \nu\, L_{\rm min}^{-3}\ , \label{plus 22} \end{equation} where $\nu$ is a time-dependent parameter which we will discuss later. The theory reviewed above was originally developed on the assumption that the string evolution is governed by Goto-Nambu type dynamics. In the kind of scenario we are considering, this condition will obviously be satisfied as long as the cosmological temperature $\Th$ is greater than or comparable with the carrier condensation temperature $\Ts$.
Moreover, the usual Goto-Nambu type description, and its consequences as described above, will remain valid for a while after the strings have become superconducting, since the currents will initially be too weak to have significant dynamical effects. The Goto-Nambu theory inevitably breaks down at some temperature above $\Tr$. However, there may be cases for which such a description will break down even before the protovorton formation temperature $\Tf$ is reached. The typical length scale of string loops at the transition temperature, $L_{\rm min}(\Ts)$, is considerably greater than the relevant thermal correlation length, $\Ts^{-1}$, that will presumably characterise the local current fluctuations at that time. It is because of this that string loop evolution is modified after current carrier condensation. The inequality \begin{equation} L_{\rm min}(\Ts)\gg \Ts^{-1} \ , \label{plus 24} \end{equation} and the fact that, by (\ref{loopl}), the length of any loop present at the time of the condensation will satisfy $L\gtrsim L_{\rm min}(\Ts)$, mean that the random walk effect can build up reasonably large, and typically comparable, initial values of the quantum numbers $\vert Z\vert$ and $N$. The reason is that for a loop of length $L$, the expected root mean square values produced in this way from carrier field fluctuations of wavelength $\lambda$ can be estimated as \begin{equation} \vert Z\vert \approx N \approx \sqrt{L\over\lambda} \ .
\label{N} \end{equation} At the time of the condensation, a typical loop is characterised by \begin{equation} L\approx L_{\rm min}(\Ts)\ , \label{typil} \end{equation} and \begin{equation} \lambda\approx\Ts^{-1}\ , \label{lambda} \end{equation} so that one obtains the estimate \begin{equation} \vert Z\vert \approx N \approx \sqrt{L_{\rm min}(\Ts)\Ts} \ , \label{plus 29} \end{equation} which, by~(\ref{plus 24}), is large compared with unity. For current condensation during the friction dominated regime discussed in Section IIIB, we shall see that this will always be sufficient to satisfy the requirement (\ref{minprod}). However, this condition will not hold for condensation later on in the radiation damping regime discussed in Section IIIC. In the latter case, typical small loops that free themselves from the main string distribution at or soon after the time of current condensation will be doomed loops, since they do not satisfy~(\ref{minprod}). However, there will always be a minority of longer loops for which~(\ref{minprod}) will be satisfied, namely those exceeding a minimum length given, according to (\ref{plus 29}), by \begin{equation} L\approx {\Tx^{\,2}\over \Ts^{\,3}} \ . \label{longenuf} \end{equation} This condition is still not quite sufficient to qualify them as protovortons, since such exceptionally long loops will be very wiggly and collision prone. It is not until a later time, at a lower temperature $\Tf$, that free protovorton loops will emerge. In this case, the typical wavelength of the carrier field will still be given by~(\ref{lambda}), and the final value of the length of a typical loop is \begin{equation} L\approx L_{\rm min}(\Tf) \ .
\label{xif} \end{equation} The new estimate for the values of the quantum numbers is \begin{equation} \vert Z\vert \approx N \approx \sqrt{L_{\rm min}(\Tf)\Ts\over \zf} \ , \label{Nlater} \end{equation} where we have included a blueshift factor, $\zf$, whose value is not immediately obvious but which is needed to allow for the net effect on the string of weak stretching due to the cosmological expansion and stronger shrinking due to wiggle damping during the period in which the temperature cools from $\Ts$ to $\Tf$. In the earlier friction dominated regime $\Tf$ is identifiable with $\Ts$, so the problem does not arise, and in the radiation damping era cosmological stretching will in fact be negligible. Therefore, the net effect is that $\zf$ will be small compared with unity, the hard part of the problem being to estimate how much so. The value given by (\ref{Nlater}) will increase monotonically as $\Tf$ diminishes. The required value of $\Tf$, at which the formation of the protovorton loops will actually occur, is that for which the function in~(\ref{Nlater}) reaches the minimum qualifying value given by~(\ref{minprod}). This value is thus obtainable in principle by solving the equation \begin{equation} {L_{\rm min}(\Tf)\over \zf} \approx {\Tx^2\over \Ts^3}\ , \label{fff} \end{equation} but this can only be done in practice when we have found the $\Tf$-dependence of $L_{\rm min}(\Tf)$ and $\zf$. We discuss the $\Tf$-dependence of $\zf$ shortly. The number density of protovorton loops at the temperature $\Tf$ will be comparable with the total loop number density at that time, so that by (\ref{plus 22}) it will be expressible as \begin{equation} \nf\approx\varepsilon\,\nu_{\rm f}\, L_{\rm min}(\Tf)^{-3} \ , \label{newf} \end{equation} where $\varepsilon$ is an efficiency factor of order unity, and $\nu_{\rm f}$ is the value of the dimensionless parameter $\nu$ at that time.
If the current condenses in the friction dominated regime, $\nu_{\rm f}$ will simply have a value of order unity. However, if the condensation does not occur until later on, in the radiation dominated era, $\nu_{\rm f}$ will have a lower value which is not so easy to evaluate. The number of protovorton loops in a comoving volume will be approximately conserved during their subsequent evolution. Therefore, since volumes will scale proportionally to the inverse of the entropy density, it follows that the number density $\nv$ of the resulting vortons at a lower temperature $\Th$ will be given in terms of the number density $\nf$ of the protovorton loops at the time of condensation by \begin{equation} {\nv\over\nf}\approx f\left( {\Th\over\Tf}\right)^{3} \ . \label{add 1} \end{equation} Here $f$ is a dimensionless adjustment factor that we expect to be small but not very small compared with unity, and that will be given by \begin{equation} f\simeq{\varepsilon\, g^*\over g^*_{\rm f}} \ , \label{plus 27} \end{equation} where $g^*_{\rm f}$ is the value of $g^*$ at the protovorton formation temperature $\Tf$. Using~(\ref{plus 10}), the corresponding mass density will be given by \begin{equation} \rhv\approx N\mx \nv\ . \label{plus 28} \end{equation} Thus, the mass density of the distribution of the protovortons in the range $\Tf \gtrsim \Th \gtrsim \Tr$, and of the mature vortons after their formation in the range $\Th \lesssim \Tr$, is given by the general formula \begin{equation} \rhv \approx f\nu_{\rm f}\, {\Tx\Ts^{1/2}\over\zf^{1/2}L_{\rm min}(\Tf)^{5/2}} \Big({\Th\over\Tf}\Big)^3 \ .
\label{add 2} \end{equation} When the dependences of $\nu_{\rm f}$, $L_{\rm min}(\Tf)$ and $\Tf$ on the fundamental parameters $\Tx$ and $\Ts$ are known, the formula (\ref{add 2}) will allow us to place limits on $\Ts$ by determining how the presence of the corresponding population of remnant vortons would affect the course of cosmic evolution. In the following sections we shall derive several constraints on such a population by demanding that it not significantly interfere with the cornerstones of the standard cosmology. However, before we can do so, it remains to obtain at least rough estimates of the required values of the dependent variables. This turns out to be fairly easy in the case of condensation during the friction dominated era that will be discussed in the next section. However, the derivation of firm conclusions is less straightforward for the kind of scenario discussed in Section IIIC, in which the current condensation occurs in the radiation dominated regime. \subsection{Condensation in the friction damping regime.} According to the standard picture\cite{KEH}, the evolution of a cosmic string network is initially dominated by the frictional drag of the thermal background. The relevant dynamical damping timescale, $\tau$, during this period is approximately given by \begin{equation} \tau\approx{\Tx^{\,2}\over\beta\,\Th^3} \ , \label{plus 19} \end{equation} where $\beta$ is a dimensionless drag coefficient that depends on the details of the underlying field theory but that is typically expected\cite{KEH,V&V} to be of order unity. In this regime the large scale structure is frozen and retains the Brownian random walk form~(\ref{plus 22}). However, the microstructure is smoothed out below a correlation length $L_{\rm min}$ given by \begin{equation} L_{\rm min}\approx\sqrt{\tau t} \ , \label{plus 20} \end{equation} where $t$ is the Hubble time.
Neglecting the very weak $g^*$-dependence, the required correlation length is thus found to be given by \begin{equation} L_{\rm min}\approx \left({\mP\over\beta}\right)^{1/2}{\Tx\over\Th^{\,5/2}}\ . \label{old 13} \end{equation} The friction dominated regime continues until the temperature $\Th$ drops below a critical value $\Th_\star$ given by \begin{equation} \Th_\star \approx\frac{\Tx^{\,2}}{\beta\,\mP} \ , \label{old 10} \end{equation} at which $\tau$ is comparable with $t$. Setting $\Th$ equal to $\Ts$ in (\ref{old 13}) in order to obtain the relevant value of $L_{\rm min}(\Ts)$, and using (\ref{plus 29}) and (\ref{typil}), the required expectation value for the quantum number $N$ can be estimated as \begin{equation} N\approx\left({\mP\over\beta\Ts}\right)^{1/4} \left({\Tx\over \Ts}\right)^{1/2} \ . \label{old 21} \end{equation} It follows from (\ref{plus 10}) and (\ref{old 6}) that a typical vorton in this relic distribution will have a mass-energy given by \begin{equation} E_{\rm v}\approx\left({\mP\over\beta\Ts}\right)^{1/4} \left({\Tx^3\over\Ts}\right)^{1/2} \ , \label{plus 30} \end{equation} which corresponds to a vorton circumference \begin{equation} \lv\approx \left(\frac{\mP}{\beta\Ts}\right)^{1/4}\big(\Tx\Ts\big)^{-1/2} \ . \label{old 20} \end{equation} It can thus be confirmed using (\ref{old 10}) that the postulate $\Ts > \Th_\star$ automatically ensures that these vortons will indeed satisfy the minimum length requirement (\ref{minimum}), though only marginally when $\Ts$ is at the lower end of this range.
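For orientation, the friction-regime estimates (\ref{old 21})--(\ref{old 20}) are easily evaluated for sample parameter values; the numbers below are assumptions chosen purely for illustration, not values advocated in the text.

```python
# Illustrative evaluation of the friction-regime vorton estimates,
# N ~ (m_P/(beta T_s))^(1/4) (T_x/T_s)^(1/2), E_v ~ N T_x, l_v ~ N/T_x,
# in natural units (GeV). All parameter values are assumptions chosen
# purely for illustration.
M_PLANCK = 1e19   # Planck mass, GeV
T_X = 1e15        # string formation scale, with m_x ~ T_x
T_S = 1e13        # carrier condensation scale, with m_sigma ~ T_s
BETA = 1.0        # drag coefficient, taken to be of order unity

# T_s lies above T_star ~ T_x^2/(beta m_P) ~ 1e11 GeV, so condensation
# indeed occurs in the friction dominated regime.
N = (M_PLANCK / (BETA * T_S)) ** 0.25 * (T_X / T_S) ** 0.5
E_v = N * T_X            # vorton mass-energy, GeV
l_v = N / T_X            # vorton circumference, GeV^-1

# The minimum length condition l_v >> 1/m_sigma is met only marginally
# here (l_v * m_sigma is a few), in line with the remark in the text.
print(N, E_v, l_v * T_S)
```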
From (\ref{newf}) and (\ref{old 13}), the number density of these protovorton loops at formation is \begin{equation} \nf\approx \nu_\star\left({\beta\,\Ts\over\mP}\right)^{3/2} \left({\Ts^{\,2}\over\Tx}\right)^3\ . \label{plus 25} \end{equation} It follows from (\ref{add 1}) that at later times the number density of their mature vorton successors will be \begin{equation} \nv\approx\nu_\star\, f\left({\beta\Ts\over \mP}\right)^{3/2} \left({\Ts\Th\over\Tx}\right)^3 \ . \label{plus 26} \end{equation} Thus, after the temperature has fallen below the value $\Tr$, the resulting mass density of the relic vorton population will be \begin{equation} \rhv\approx \nu_\star\, f N\left({\beta\Ts\over\mP}\right)^{3/2}\, \left({\Ts\over\Tx}\right)^2 \Ts\Th^3\ , \label{old 17} \end{equation} which by (\ref{old 21}) gives our final estimate as \begin{equation} \rhv\approx \nu_\star\, f\left({\beta\Ts\over\mP}\right)^{5/4} \left({\Ts\over\Tx}\right)^{3/2}\Ts\Th^3 \ . \label{plus 31} \end{equation} \subsection{Condensation in the radiation damping regime.} For strings formed at low energies, for example in some non-standard electroweak symmetry breaking transition, the scenario of the preceding subsection is the only one that needs to be considered. However, for strings formed at much higher energies, in particular for the commonly considered case of GUT strings, current condensation could occur during the extensive temperature range below $\Th_\star$. The minimum length requirement is only marginally satisfied by typical loops when condensation occurs near the end of the friction dominated regime. Therefore, if $\Ts < \Th_\star$, typical loops present during the transition will not be long enough to qualify as protovortons.
This means that the vorton formation temperature $\Tf$ will not coincide with $\Ts$ as it did in the friction dominated regime, but rather will have a distinctly lower value. In these scenarios the final stage of protovorton formation will be preceded by a period of evolution in the temperature range $\Th_\star\gtrsim \Th\gtrsim\Tf$. During this interval friction will be negligible and the only significant dissipation mechanism will be that of radiation reaction. Moreover, during the first part of this period, in the range $\Th_\star > \Th >\Ts$, the only radiation mechanism will be gravitational, which is so weak that to begin with it will have no perceptible effect at all. Thus, there will be an interval during which the smoothing length remains roughly constant at its value at the end of the friction dominated era, given, according to (\ref{old 13}) and (\ref{old 10}), by \begin{equation} L_{\rm min}(\Th_\star)\approx {\beta^2\mP^3\over\Tx^4} \ . \label{xicrit} \end{equation} During the last stage before the protovortons are formed, in the range $\Ts>\Th>\Tf$, there will already be currents on the strings. However, in practice, even in the coupled case the expected currents will be too weak for electromagnetic radiation damping to be important. Therefore, gravitational radiation is the only important effect throughout the range $\Th_\star>\Th>\Tf$. The resulting gravitational smoothing scale will be the length of the shortest loop for which the survival time exceeds the cosmological timescale (\ref{plus 18}).
From dimensional considerations this can be estimated by the expression \begin{equation} t\approx {L_{\rm min}\over{\mit\Gamma}\, G\,{\cal U}}\ , \label{add 5} \end{equation} where ${\cal U}\simeq \T \simeq \Tx^{\,2}$ is the mass energy density of the string. Here ${\mit\Gamma}$ is a dimensionless coefficient and, for GUT strings, the gravitational factor will be given by $G\,{\cal U}\simeq (\mx/\mP)^2\approx 10^{-6}$. The validity of the formula (\ref{add 5}) has been confirmed in many particular cases by numerical simulations\cite{V&S}, the value of the coefficient turning out to be typically ${\mit\Gamma}\approx 10^2$. Equating (\ref{add 5}) to the cosmological timescale, we have \begin{equation} L_{\rm min}\approx{{\mit\Gamma}\over\sqrt{g^*}\,\mP}\left({\Tx\over \Th}\right)^2 \ . \label{add 7} \end{equation} This formula is valid once its value becomes larger than that given by~(\ref{xicrit}). This occurs at a critical value $\Th_\dagger<\Th_\star$ which, from (\ref{add 7}), is given by \begin{equation} \Th_\dagger\approx \Big({{\mit\Gamma}\over\sqrt{g^*}}\Big)^{1/2} {\Tx^{\,3}\over\beta\,\mP^{\,2}} \ . \label{Trad} \end{equation} The relation (\ref{add 7}) may also be expressed as \begin{equation} L_{\rm min}\approx\kappa t\ , \label{defalpha} \end{equation} where $\kappa$ is a constant given by \begin{equation} \kappa\approx {\mit\Gamma}\, \Big({\Tx\over\mP}\Big)^2 \ .
\label{alpha} \end{equation} To avoid ambiguity we can of course simply use the formula (\ref{defalpha}) as a defining relation to specify the parameter $\kappa$ during the Hubble damping ``doldrum'' regime $\Th_\star \gtrsim \Th \gtrsim \Tra$. However, with this convention $\kappa$ will have to be considered as a function of time, starting with unit value at $\Th_\star$ and decreasing to the very low value (\ref{alpha}) at which it levels off at $\Th\approx \Tra$. This description of the network evolution is illustrated graphically in figure~1. In order to apply the formula (\ref{add 2}) for the final vorton mass density we must evaluate the dimensionless coefficient $\nu$ that determines the protovorton number density. It was easy to do this in the friction dominated case, for which we could assume a constant value $\nu_\star$ of the order of unity. However, $\nu$ may subsequently be reduced by an amount whose estimation is not so straightforward. It is reasonable to expect that in the radiation dominated regime the string distribution tends towards a scale invariant form, albeit one that will not be quite so simple as the Brownian form that prevailed in the friction dominated era. Although there may already be approximate scaling on large scales, it is clear that scaling on the smaller scales that matter for our present purpose can only be obtained after the parameter $\kappa$ defined by (\ref{defalpha}) has settled down to a constant value. This occurs at temperatures below $\Tra$, for which a detailed analysis is beyond the scope of the simulations that have been achieved so far. For simplicity we shall use a crude, but we hope sufficiently robust, description based on simple and quite natural physical considerations.
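For orientation, the plateau value (\ref{alpha}) of $\kappa$ can be checked numerically. The following sketch is illustrative only (it is not part of the original derivation); it uses the fiducial values quoted above, namely ${\mit\Gamma}\approx 10^2$ and a GUT-scale formation temperature $\Tx\approx 10^{16}$ GeV, together with $\mP\approx 1.2\times 10^{19}$ GeV.

```python
# Rough numerical check of the plateau value kappa ~ Gamma (Tx/mP)^2
# for GUT strings, using the fiducial values quoted in the text.
Gamma = 1e2   # gravitational radiation coefficient, ~10^2 from simulations
Tx = 1e16     # string formation (Higgs-Kibble) temperature in GeV
mP = 1.2e19   # Planck mass in GeV

kappa = Gamma * (Tx / mP) ** 2
print(f"kappa ~ {kappa:.1e}")  # of order 1e-4
```

This confirms that during the gravitational radiation regime the smoothing length is a very small fraction, of order $10^{-4}$, of the Hubble scale.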
In so far as $\nu$ is concerned, the plausible conjecture that it should be describable by a scaling solution is to be interpreted as meaning that it should be a function only of the dimensionless ratio $R/t$ (where, it is to be recalled, $R$ denotes the radial scale under consideration and $t$ is the Hubble time). It is reasonable to expect that for values of $R$ in the range $\kappa\ll R/t \ll 1$, the value of $\nu$ should be given by a simple power law of the form \begin{equation} \nu\approx\nu_\star \Big({R\over t}\Big)^\zeta \ , \label{scaling} \end{equation} with constant index $\zeta$. We expect that, in the radiation dominated regime, the appropriate value of the index should be close to but perhaps slightly greater than a lower limit given by \cite{T&B 86} \begin{equation} \zeta={3\over 2} \ . \label{index} \end{equation} (The analogue for the matter era in which we are situated today is a value slightly greater than a lower limit given by $\zeta=2$.) Assuming that the formula (\ref{scaling}) still gives the right order of magnitude at the lower end of its range, $\nu$ will be given in the radiation dominated regime by the constant value \begin{equation} \nu \approx \nu_\star\, \kappa^\zeta \label{barnu} \end{equation} with $\kappa$ given by (\ref{alpha}). This means that the corresponding value of the loop number density itself will be given according to (\ref{defalpha}) by \begin{equation} n\approx \nu_\star\, \kappa^{\zeta-3}\, t^{-3} \ . \label{nloop} \end{equation} Before evaluating the required result (\ref{add 2}), it remains to obtain the value $\Tf$.
To do this we have to solve the equation (\ref{fff}) that results from the minimum length requirement, which, from (\ref{add 7}), reduces to \begin{equation} \zf\Tf^{\,2}\approx{{\mit\Gamma}\,\Ts^{\,3}\over\sqrt\aa\,\mP} \ . \label{quation} \end{equation} However, before we can solve this deceptively simple equation in practice, we need to know the $\Th$ dependence of the factor $\z$. This is the most delicate part of the calculation, since it involves competing effects of comparable magnitude. As the string distribution evolves, the time-dependent blueshift factor $\z$ is the factor by which the supporting string length has shrunk since the time of the condensation at the temperature $\Ts$. This shrinking can be accounted for as the net result of three main effects, two of which are comparatively easy to evaluate. The weakest of these effects is the stretching due to the expansion of the universe. This will always be more than compensated (except in the ``doldrum'' period, in which the compensation is only marginal) by shrinking due to the steady damping out of the short wavelength wiggles that give the most important contribution to the total string length per unit volume in the radiation dominated era. If these two effects were the only ones it would be relatively easy to estimate $\z$, since it would simply be proportional to the total string length, ${\mit\Sigma}$ say, in a comoving volume that can conveniently be taken to be a cubic thermal wavelength. This is given by \begin{equation} {\mit\Sigma}\approx \Lambda\Th^{-3} \label{longsum} \end{equation} where $\Lambda$ is the total string length per unit volume. Now, note that the main contribution to the total length of the string distribution is provided by short wavelength modes with scale of order the smoothing length $L_{\rm min}$.
Therefore, $\Lambda$ can be estimated as $nL_{\rm min}$, where $n$ is given by (\ref{plus 22}). This implies \begin{equation} \Lambda\approx \nu\, L_{\rm min}^{-2} \ , \label{Lamb} \end{equation} in which $\nu$ is given by (\ref{barnu}). If the only effect of the damping were to smooth out the short wavelength wiggles on the main part of the string distribution, it would be possible to identify $\z$ with the ratio of ${\mit\Sigma}$ to its value $\LSs$ at the time of the current condensation. However, this would not allow for a third important effect, namely the loss of the small loops that are continually liberated from the string distribution at the lower end of the spectrum. Whether these loops survive as vortons or disappear altogether, the effect on the main part of the string distribution is that the corresponding string lengths must be subtracted at each stage. Thus, the total shrinking factor $\z$ will be the ratio of ${\mit\Sigma}$, not to its original known value $\LSs$, but to a value that is considerably reduced in such a way as to take account of this new effect. In terms of the variation $\delta{\mit\Sigma}$ of ${\mit\Sigma}$, the variation $\delta\z$ of $\z$ is given by \begin{equation} {\delta\z\over\z} \simeq {\delta{\mit\Sigma} + \Delta{\mit\Sigma} \over {\mit\Sigma}} \label{looploss} \end{equation} where $\Delta{\mit\Sigma}$ is the length of string irreversibly chopped off in the form of small loops per comoving thermal volume within the short time interval $\delta t$ under consideration. The delicate question is that of quantifying $\Delta{\mit\Sigma}$.
It is not hard to obtain an order of magnitude estimate, but since the final result is rather sensitively dependent on this quantity a more accurate estimate would be desirable. In the absence of a detailed numerical investigation, we express $\Delta{\mit\Sigma}$ as a fraction of the total lost length, \begin{equation} \Delta{\mit\Sigma} \simeq -\p\, \delta{\mit\Sigma} \ , \label{efficiency} \end{equation} where $\p$ is a dimensionless efficiency factor that must lie in the range $0 <\p < 1$. This factor is roughly identifiable with the coefficient introduced, using the same notation, in (\ref{newf}). However, in that context it was sufficient to know that it should be of order unity. Whatever the exact value of $\p$, the substitution of (\ref{efficiency}) in (\ref{looploss}) provides a differential equation that can be solved to give \begin{equation} \z\approx\Big({{\mit\Sigma}\over\LSs}\Big)^{1-\p} \label{shift} \end{equation} and using~(\ref{add 7}) and (\ref{Lamb}) we finally obtain \begin{equation} \z\approx\Big({\Th\over\Ts}\Big)^{1-\p} \ . \label{blue} \end{equation} It is to be remarked that if the efficiency $\p$ of loop production were zero, the carrier field would be blueshifted by a factor that would be precisely the inverse of that by which the background radiation is redshifted. However, if substantial loop production occurs, there will be a blueshift by only a moderate factor. Assuming the particle number weighting factor can be taken to have the fixed value ${g^*_\sigma}$,~(\ref{quation}) and~(\ref{blue}) then give \begin{equation} {\Tf\over\Ts}=\Big({{\mit\Gamma}\,\Ts\over\sqrt{g^*_\sigma}\,\mP}\Big)^{1/(3-\p)} \ .
\label{solution} \end{equation} Using this result in conjunction with the estimates (\ref{add 7}) and (\ref{barnu}), the formula (\ref{add 2}) gives the mass density of the resulting vorton distribution as \begin{equation} \rhv\approx\ef\aa\nu_\star{\mit\Gamma}^{\zeta-5/2}\Big({\mP\over\Tx}\Big)^{5-2\zeta} \Big({\Ts\over\mP}\Big)^{5/2} \Big({{\mit\Gamma}\,\Ts\over\sqrt{g^*_\sigma}\,\mP}\Big)^{(3+\p)/(6-2\p)}\, \Tx\Th^3 \ . \label{fmassden} \end{equation} If we adopt the value (\ref{index}) for $\zeta$, this simplifies to \begin{equation} {\rhv\over\Tx\Th^3}\approx\ef\aa\nu_\star{\mit\Gamma}^{-1/2}\Big({\mP\over\Tx}\Big)^{2} \Big({\Ts\over\mP}\Big)^{5/2} \Big({{\mit\Gamma}\,\Ts\over\sqrt{g^*_\sigma}\,\mP}\Big)^{(3+\p)/(6-2\p)}\ , \label{den3} \end{equation} but there remains an unsatisfactory degree of sensitivity to the uncertain efficiency factor $\p$. This is because, although this index will always be quite small, the factor $\Ts/\mP$ on which it acts is very tiny in the cases of interest. \subsection{Stability Issues} Before we consider cosmological constraints we would like to say a few words about stability. One of our postulates is that superconducting current conservation is sufficiently effective to allow the protovortons that emerge at the temperature $\Tf\lesssim\Ts$ to settle down as dynamically stable bodies, at a possibly lower relaxation temperature $\Tr\lesssim\Tf$. This does not exclude the possibility that in the very long run they may finally decay by quantum tunneling or other ``secular'' instability mechanisms.
In this case they would ultimately disappear when the thermal background reached a corresponding vorton death temperature with an even lower value, $\Td$ say. This means that a complete analysis of vorton formation and evolution could involve five successive temperature scales related by \begin{equation} \Tx \gtrsim \Ts \gtrsim \Tf \gtrsim \Tr \gtrsim \Td \ . \label{old 9} \end{equation} Protovortons are small compared with the ever expanding scales characterising the rest of the string distribution and hence undergo few extrinsic collisions. Also, they are sufficiently smooth to avoid destructive fragmentation by self collisions. It therefore follows that in most cases the relevant quantum numbers $N$ and $Z$ will be conserved. As a consequence, the statistical properties of the future vorton population will be predetermined by those of the corresponding protovorton loops at the time of their emergence at the temperature $\Tf$. It is therefore unnecessary for our present purpose to consider how long it takes for the protovorton loops to settle down and become proper stationary vortons. Thus, the value of $\Tr$ will not play any role in the discussion that follows. This is convenient because the details of protovorton loop decay have not yet been adequately studied, and will obviously be sensitively dependent on whether the current is electromagnetically coupled, in which case we would expect the later stages of the protovorton loop contraction to be relatively rapid. The final decay temperature $\Td$ is also absent from the quantitative formulae that we will derive. Indeed, its only role is to characterise the two principal kinds of scenario that we consider.
In Section V, we discuss vortons which survive until the present epoch, which requires that $\Td$ should not much exceed $10^{-12} \GeV$ (corresponding to the observed 3 degree radiation temperature). However, in Section IV we adopt the weaker supposition that the vortons survive at least until the time of nucleosynthesis, which requires only that $\Td$ should not much exceed $10^{-4} \GeV$. This means that the only temperature scales that remain as variable parameters in the analysis that follows are the Higgs-Kibble temperature $\Tx$, the condensation temperature $\Ts$, and the protovorton formation temperature $\Tf$, which is not truly independent but is a function of $\Ts$ and $\Tx$. \section{The Nucleosynthesis Constraint.} One of the most robust predictions of the standard cosmological model is the abundances of the light elements that were fabricated during primordial nucleosynthesis at a temperature $\Th_{_{\rm N}}\approx 10^{-4} \GeV$. In order to preserve this well established picture, it is necessary that the energy density in vortons at that time, $\rhv(\Th_{_{\rm N}})$, should have been small compared with the background energy density in radiation, $\rhN\approx\aa\Th_{_{\rm N}}^4$. Assuming that carrier condensation occurs during the friction damping regime and that $\aa$ has dropped to a value of order unity by the time of nucleosynthesis, it can be seen from (\ref{plus 31}) that this restriction, \begin{equation} \rhv(\Th_{_{\rm N}}) \ll \rhN \ , \label{old 22} \end{equation} is expressible as \begin{equation} \ef\nu_\star{g^*_\sigma}^{-1}\beta^{5/4}\mP^{-5/4}\Tx^{-3/2}\Ts^{\,15/4}\ll\Th_{_{\rm N}}\ . \label{old 24} \end{equation} Below we apply this constraint to some specific examples.
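Since the constraint (\ref{old 24}) is applied numerically in the subsections that follow, a short script is a convenient way to evaluate it. The sketch below is illustrative only: it takes the fiducial inputs quoted in the text (${g^*_\sigma}\approx 10^2$, $\Th_{_{\rm N}}\approx 10^{-4}$ GeV, $\mP\approx 1.2\times 10^{19}$ GeV) and sets the order-unity efficiency and drag factors to one, which is an assumption rather than a derived value.

```python
# Illustrative evaluation of the nucleosynthesis constraint (old 24),
# with the order-unity efficiency and drag factors set to one (an assumption).
mP = 1.2e19  # Planck mass in GeV
TN = 1e-4    # nucleosynthesis temperature in GeV
gs = 1e2     # particle number weighting factor g*_sigma

def Tx_max_immediate():
    """Bound on Tx for immediate condensation (Ts = Tx):
    Tx <~ mP^(5/9) * TN^(4/9) * gs^(4/9)."""
    return mP ** (5 / 9) * TN ** (4 / 9) * gs ** (4 / 9)

def Ts_max_gut(TGUT=1e16):
    """Bound on the condensation temperature Ts for GUT strings:
    Ts <~ (gs*TN)^(4/15) * TGUT^(2/5) * mP^(1/3)."""
    return (gs * TN) ** (4 / 15) * TGUT ** (2 / 5) * mP ** (1 / 3)

print(f"Tx_max (Ts = Tx)     ~ {Tx_max_immediate():.1e} GeV")  # ~10^9-10^10 GeV
print(f"Ts_max (GUT strings) ~ {Ts_max_gut():.1e} GeV")        # ~10^12 GeV
```

Both outputs reproduce, to within the neglected order-unity factors, the limits of roughly $10^9$ GeV and $10^{12}$ GeV obtained in the next two subsections.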
\subsection{Case $\bf \Tx \approx\Ts$.} The case for which carrier condensation occurs at or very soon after the time of string formation has been studied previously and yields rather strong restrictions for very long lived vortons \cite{C}. If it is only assumed that the vortons survive for a few minutes, which is all that is needed to reach the nucleosynthesis epoch, we obtain a much weaker restriction. Setting $\Ts$ equal to $\Tx$ in (\ref{old 24}) gives \begin{equation} \left(\frac{\ef\nu_\star}{{g^*_\sigma}}\right)^{4/9}\Tx \ll \left(\frac{\mP}{\beta}\right)^{5/9}\Th_{_{\rm N}}^{4/9}\ . \label{old 25} \end{equation} Taking ${g^*_\sigma} \approx 10^2$ and assuming (in view of the low value of the index) that the net efficiency factor $(\ef\nu_\star)^{4/9}$ and the drag factor $\beta^{5/9}$ are of the order of unity yields the inequality \begin{equation} \Tx\lesssim 10^9\ \GeV \ . \label{plus 34} \end{equation} This is the condition that must be satisfied by the formation temperature of {\it cosmic strings that become superconducting immediately}, subject to the rather conservative assumption that the resulting vortons last for at least a few minutes. It is to be observed that this condition rules out the formation of such strings during any conceivable GUT transition, but is consistent by a wide margin with their formation at temperatures close to that of the electroweak symmetry breaking transition. \subsection{Case $\bf \Tx\simeq\Th_{_{\rm GUT}}\approx10^{16}$ GeV.} Here we wish to calculate the highest temperature at which GUT strings can become superconducting without violating the nucleosynthesis constraints.
Setting $\Tx$ equal to $\Th_{_{\rm GUT}}$ in (\ref{old 24}), and again using ${g^*_\sigma}\approx 10^2$, we obtain \begin{equation} \Ts \lesssim (\ef\nu_\star)^{-4/15}\big({g^*_\sigma}\Th_{_{\rm N}}\big)^{\,4/15} \Th_{_{\rm GUT}}^{\ 2/5} \left(\frac{\mP}{\beta}\right)^{1/3} \approx 10^{12}\ \GeV \ , \label{old 26} \end{equation} where, in the last step, we have neglected the dependence on order of unity quantities. It can be checked, using the Kibble formula (\ref{old 10}), that the maximum value given by (\ref{old 26}) is at least marginally consistent with the assumption that current condensation occurs in the friction-dominated regime. The validity of our derivation is thereby confirmed. It follows that the nucleosynthesis constraint will always be satisfied when $\Ts$ lies in the radiation damping epoch. Therefore, subject again to the rather conservative assumption that the resulting vortons last for at least a few minutes, theories in which GUT cosmic strings become superconducting above $10^{12}$ GeV are inconsistent with the observational data. \section{The Dark Matter Constraint.} In this section we consider the rather stronger constraints that can be obtained if at least a substantial fraction of the vortons are sufficiently stable to last until the present epoch. It is generally accepted that the virial equilibrium of galaxies, and particularly of clusters of galaxies, requires the existence of a cosmological distribution of ``dark'' matter. This matter must have a density considerably in excess of the baryonic matter density, $\rhb\approx 10^{-31}$ gm/cm$^3$.
On the other hand, on the same basis, it is also generally accepted that to be consistent with the formation of structures such as galaxies it is necessary that the total amount of this ``dark'' matter should not greatly exceed the critical closure density, namely \begin{equation} \rho_{\rm c}\approx 10^{-29}\ {\rm gm \ cm^{-3}} \ . \label{add 15} \end{equation} As a function of temperature, the critical density scales like the entropy density, so that it is given by \begin{equation} \rho_{\rm c}(\Th)\approx \aa\mc\Th^3\ , \label{plus 35} \end{equation} where $\mc$ is a constant mass factor. Since $\aa\approx1$ at the present epoch, the required value of $\mc$ (which is roughly the critical mass per black body photon) can be estimated as \begin{equation} \mc\approx 10^{-28}\mP\approx 1\ \hbox{eV}\ . \label{plus 36} \end{equation} However, for comparison with the density of vortons that were formed as a result of current condensation at an earlier epoch characterised by $\Ts$, what one needs is the corresponding factor ${g^*_\sigma}\mc$, which can be estimated to be \begin{equation} {g^*_\sigma} \mc\approx 10^{-26}\mP\approx 10^2\,\hbox{eV}\ . \label{plus 37} \end{equation} (This distinction was obscured in the previous derivations of the dark matter constraint \cite{C,Moriand 95}, in which the value quoted for $\mc$ should be interpreted as meaning the value of ${g^*_\sigma}\mc$, which is what one actually needs.) The general dark matter constraint is \begin{equation} \Ov \equiv {\rhv\over\rho_{\rm c}} \lesssim 1\ .
\label{plus 38} \end{equation} In the case of vortons formed as a result of condensation during the friction damping regime, the relevant estimate for the vortonic dark matter fraction is obtainable from (\ref{plus 31}) as \begin{equation} \Ov\approx\beta^{5/4}\left({\ef\nu_\star\mP\over{g^*_\sigma}\mc}\right) \left({\Ts\over\mP}\right)^{9/4} \left({\Ts\over\Tx}\right)^{3/2} . \label{old 27} \end{equation} In particular, this formula applies to the case in which the carrier condensation occurs very soon after the strings themselves are formed, as was supposed in earlier work. However, if we want to strengthen the nucleosynthesis limit (\ref{old 26}) for the general category of strings formed at the GUT scale, then we are obliged to consider the case of vortons formed as a result of condensation during the gravitational radiation damping regime. In this case, equation~(\ref{den3}) gives the relevant estimate for the vortonic dark matter fraction as \begin{equation} \Ov\approx\ef\nu_\star{\mit\Gamma}^{-1/2}\Big({\mP^2\over\mc\Tx}\Big) \Big({\Ts\over\mP}\Big)^{5/2} \Big({{\mit\Gamma}\,\Ts\over\sqrt{g^*_\sigma}\,\mP}\Big)^{(3+\p)/(6-2\p)}\ . \label{add 18} \end{equation} Let us now examine some specific examples. \subsection{Case $\bf \Tx \approx\Ts$.} The formula (\ref{old 27}) is applicable to the case considered in earlier work \cite{C}, in which it was supposed that vortons sufficiently stable to last until the present epoch were formed as the result of the carrier condensation occurring at or very soon after the time of string formation. This example provides the strongest limits on $\Tx$.
Setting $\Ts$ equal to $\Tx$ in (\ref{old 27}) one obtains \begin{equation} \beta^{5/9}{\Tx\over\mP} \left({\nu_\star\mP\over{g^*_\sigma}\mc}\right)^{4/9} \lesssim 1\ . \label{plus 39} \end{equation} Substituting the estimates above (supposing, as before, that the efficiency and drag factors are of order unity), we obtain \begin{equation} \Tx \lesssim 10^7\, \GeV\ . \label{plus 40} \end{equation} This result is based on the assumption that the vortons in question are stable enough to survive until the present day. Thus, this constraint is naturally more severe than its analogue in the previous section. It is to be remarked that vortons produced in a phase transition occurring at or near the limit that has just been derived would give a significant contribution to the elusive dark matter in the universe. However, if they were produced at the electroweak scale, i.e. with $\Tx\approx\Ts \approx\TEW$, where $\TEW\approx 10^2\, \GeV$, then they would constitute such a small dark matter fraction, $\Ov\approx 10^{-9}$, that they would be very difficult to detect. \subsection{Case $\bf \Tx\simeq\Th_{_{\rm GUT}}\approx10^{16}$ GeV.} For the most commonly considered case, namely that of strings formed during the GUT transition, the nucleosynthesis limit (\ref{old 26}) is already sufficient for the exclusion of carrier condensation in the friction damping regime. To obtain the stronger limit that is applicable if the vortons are sufficiently stable to survive as a dark matter constituent, we need to consider the case in which the condensation occurs during the regime of gravitational damping. In this case, the relevant dark matter fraction is given by (\ref{add 18}).
Setting $\Tx$ equal to $\Th_{_{\rm GUT}}$ in this formula, and dropping the order of unity coefficients, we obtain the corresponding limit \begin{equation} {\Ts\over\mP}\lesssim\Big({\mc\Th_{_{\rm GUT}}\over\mP^2}\Big)^{(3-\p)/(9-2\p)} \ . \label{add 19} \end{equation} If the loop production were extremely efficient, $\p\simeq 1$, this would already give the numerical limit \begin{equation} \Ts\lesssim 10^{10}\ \GeV\ , \label{oldlimit} \end{equation} which is significantly stronger than the more conservative limit (\ref{old 26}) that pertains if the vortons only survive for a few minutes. However, contrary to what one might have guessed, the highest conceivable loop production efficiency is not what maximises ultimate vorton production. This is because it merely tends to enhance the charge and current loss rate by production not of protovortons but of doomed loops. Thus if, instead of supposing that the loop production efficiency $\p$ is close to a hundred per cent, one makes the plausible supposition that it does not much exceed fifty per cent, then one obtains \begin{equation} \Ts\lesssim 10^{9}\ \GeV\ . \label{newlimit} \end{equation} This limit is still compatible by a very large margin with one of the most obvious, albeit rather extreme, possibilities \cite{DPWP} that comes to mind, namely that in which the strings are formed at the GUT level, $\Tx\approx\Th_{_{\rm GUT}}$, but no current carrier condenses until the electroweak level, $\Ts\approx\TEW$. \section{Conclusions} We have explored the constraints and implications, both for particle physics and cosmology, arising from the existence of populations of remnant vortons of more general types than have previously been considered.
Specifically, we have envisaged the possibility of cosmic string superconductivity arising by condensation of the relevant carrier field at energy scales significantly below that of string formation. We have seen that there are two qualitatively very different possibilities. In scenarios for which the carrier condensation occurs at comparatively high energy, during the friction damping regime, a substantial majority of the superconducting string loops will ultimately survive as vortons. Such scenarios are more easily excluded on observational grounds than the alternative possibility, which is that superconductivity does not set in until a later stage, in which case only a minority of the loops initially present ultimately become vortons. We have shown that large classes of particle physics models can provisionally be ruled out as incompatible with these cosmological considerations, and in particular we have shown that models admitting GUT strings must not allow any string superconductivity giving stable vortons to set in much above $10^{9}$ GeV. The excluded regions of parameter space are shown in figure~2. Our conclusions are, however, dependent on a number of more or less ``conventional'' assumptions, whose validity will need to be systematically scrutinised in future work. Invalidation of these conventional assumptions in specific theoretical contexts, particularly those concerning the long term stability of the vortons, would mean that in such circumstances the constraints given here might need to be considerably relaxed. On the other hand, our constraints may be considerably tightened by the use of more detailed observational data and the ensuing limits on the populations of various kinds of vortons that can exist today. On the constructive side, we have shown that it is possible for various conceivable symmetry breaking schemes to give rise to a remnant vorton density sufficient to make up a significant portion of the dark matter in the universe.
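The headline dark matter limits on the condensation temperature quoted in Section V can likewise be reproduced with a short numerical sketch. As before this is illustrative only: the inputs ($\mc\approx 10^{-28}\mP$, $\Th_{_{\rm GUT}}\approx 10^{16}$ GeV, $\mP\approx 1.2\times 10^{19}$ GeV) are the fiducial values used in the text, and order-unity coefficients are dropped.

```python
# Numerical sketch of the dark matter limit (add 19) on the condensation
# temperature Ts for GUT strings, dropping order-unity coefficients.
mP = 1.2e19      # Planck mass in GeV
mc = 1e-28 * mP  # critical mass per blackbody photon, ~1 eV
TGUT = 1e16      # GUT scale in GeV

def Ts_max(eps):
    """Ts <~ mP * (mc*TGUT/mP^2)^((3-eps)/(9-2*eps)),
    where eps is the loop production efficiency."""
    expo = (3 - eps) / (9 - 2 * eps)
    return mP * (mc * TGUT / mP ** 2) ** expo

print(f"eps = 1.0 : Ts <~ {Ts_max(1.0):.1e} GeV")  # ~10^10 GeV
print(f"eps = 0.5 : Ts <~ {Ts_max(0.5):.1e} GeV")  # ~10^9 GeV
```

The two cases reproduce the limits (\ref{oldlimit}) and (\ref{newlimit}) respectively, confirming the sensitivity of the bound to the assumed efficiency factor.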
\section{Acknowledgements} R.B. is supported in part by the US Department of Energy under Grant \#DE-FG0291ER40688; B.C. is supported by the CNRS; A.D. is supported by PPARC; and M.T. is supported in part by funds provided by the U.S. Department of Energy under cooperative research agreement \#DF-FC02-94ER40818. We have benefitted from helpful conversations with many colleagues including Warren Perkins, Patrick Peter, Tsvi Piran, Paul Shellard and Neil Turok.
2104.01387
\section{Introduction \label{sec:intro}} Flows in magnetic confinement devices are known to strongly suppress turbulence and to affect neoclassical \citep{Hinton_Wong_1985_neoclassical_transport_rotating,helander1998neoclassical} and turbulent \citep{burrell1997_ExB_effects} transport, as well as stability \citep{chu_Greene_1995effect_toroidal_flow}. In particular, sheared flows such as zonal flows can reduce turbulent transport by shearing the turbulent eddies \citep{lin_Hahm_1998_Zonal_flow_suppress_turbulent,fujisawa2008_zonal_flow_review,terry2000_turb_suppress_shear_flow}. The study of the interaction of plasma flows and turbulence is critical \citep{stroth2011_plasma_flow_interaction,groebner_Burrel_1990role} to understanding the edge region of tokamaks and the L-H transition. It has been experimentally verified that the radial electric field plays a key role in the transition \citep{burrell1999tests_ExB_turbulence}. Flows can also have applications in healing magnetic islands in stellarators \citep{hegna2012_flow_healing_island}. Flows are also an essential object of study in space and astrophysical plasmas \citep{Goedbloed2004_MHD_book,beskin2009mhd,Istomin1996stability,Davis2020magnetohydrodynamics_AGN}. If the flow speed is assumed to be of the order of the sound speed, ideal MHD is a good description of the plasma, since gyroradius corrections do not enter \citep{morozov1980steady_flow,freidberg2014ideal,Goedbloed2004_MHD_book}. The theory of steady axisymmetric plasma flows is well developed \citep{hameiri1983_MHD_flow_equilibrium,hassam1996_NF_poloidal_sonic,guazzotto2005_MHD_toroidal_poloidal_flow,mcclements2010steady,tasso1998axisymmetric,Abel_2013_multiscale_GK,beskin1997axisymmetric}.
Toroidal angular momentum conservation, which follows from Noether's theorem, facilitates the reduction of axisymmetric flows to the study of a modified Grad-Shafranov (GS) equation which includes the effect of steady plasma flow \citep{hameiri1983_MHD_flow_equilibrium,hassam1996_NF_poloidal_sonic,tasso1998axisymmetric,Goedbloed1997stationary_symmetric_MHD_flows,beskin1997axisymmetric}. Similarly, for plasma systems with helical symmetry, a helical Grad-Shafranov equation with large flows can be obtained \citep{andreussi_Morrison_2012hamiltonian,Villata_Tsinganos_1993exact_helical_MHD_eqb}. Global solutions of such flow-modified Grad-Shafranov equations provide useful models of plasma jets in space \citep{bogoyavlenskij2000_Exact_astro_jets_MHD_Eqb,Villata_Ferrari_1994exact_helical_astro_jets, bogoyavlenskij2000_helical_asttro_jets,beskin1997axisymmetric}. While the standard GS equation for static equilibrium is always elliptic, the flow-modified GS equation can be elliptic, parabolic, or hyperbolic, depending on the flow's Mach number. Therefore, even with symmetry, complicated flows such as transonic flows \citep{morawetz1985weak} that modify the characteristics in a given domain can appear \citep{lifschitz_Goedbled1997transonic}. There is considerable interest in understanding the effects of symmetry-breaking three-dimensional perturbations and the stability of three-dimensional MHD flows in astrophysical plasmas \citep{Igumenshchev2003three,Mckinney2009stability,Istomin1996stability,Igumenshchev2008_MAD_numerics}. In particular, 2D and 3D simulations of magnetically arrested disks show very different behaviors \citep{White2019_MAD_resolution,Igumenshchev2008_MAD_numerics}. However, high-resolution 3D simulations are required to carefully capture the effects of symmetry breaking \citep{White2019_MAD_resolution}. In the absence of symmetry, reduction to a Grad-Shafranov-like equation fails in general.
Non-symmetric toroidal MHD equilibria with flows share the same mathematical difficulty as static non-symmetric MHD equilibria, namely, the appearance of singularities on rational surfaces. Although large steady shear flows are desirable in various confinement geometries, steady flows show a strong alignment with a symmetry direction when one is present \citep{helander2007rapid,helander_Simakov_PRL_2008_intrinsic_ambipolarity,Helander2014,spong2005generation}. In axisymmetric tokamaks, the persistent flow is in the toroidal symmetry direction \citep{Throumoulopoulos_tasso_Weitzner2006_nonexistence_purely_poloidal_flow,tasso1998axisymmetric}. While the toroidal angular momentum in a tokamak is typically conserved over several inverse collision frequencies \citep{Helander2012stellarator_v_tokamak}, poloidal flows are damped due to neoclassical effects. Axisymmetry makes tokamaks intrinsically ambipolar, i.e., the radial fluxes of ions and electrons are equal and independent of the radial electric field in the leading order of the gyroradius expansion, leaving the flow in the symmetry direction unconstrained. In the absence of intrinsic ambipolarity, the rotation is damped to the value required for ambipolar radial transport, which typically leads to flows of the order of diamagnetic flows \citep{helander_Simakov_PRL_2008_intrinsic_ambipolarity, Helander2014}. Plasma rotation in tokamaks and stellarators therefore differs significantly, because stellarators do not, in general, have a symmetry direction and are not intrinsically ambipolar \citep{Helander2012stellarator_v_tokamak}. The only class of stellarators that has been shown to support large flows is the quasisymmetric one \citep{tessarotto1996,canik_HSX_2007_improved_neoclassical,spong2005generation,helander2007rapid,helander_Simakov_PRL_2008_intrinsic_ambipolarity}. In a quasisymmetric stellarator, the continuous symmetry comes from the symmetry of the magnitude of the magnetic field.
As a result of the symmetry, such stellarators are also approximately intrinsically ambipolar \citep{helander_Simakov_PRL_2008_intrinsic_ambipolarity}. However, it is difficult, if not impossible, to obtain exact global quasisymmetry in a volume \citep{Garren1991a,Landreman_Sengupta_ho_2019}. The breakdown of quasisymmetry leads to flow damping \citep{spong2005generation,Simakov_Helander_2011plasma_rotation_QS,helander_Simakov_PRL_2008_intrinsic_ambipolarity}. It has been further argued in the literature \citep{sugama2011_QS_large_flow,Simakov_Helander_2011plasma_rotation_QS,tessarotto1996} that quasisymmetry is not enough to accommodate the large centrifugal forces from sonic flows, and that one needs axial or helical symmetry. As a result of the difficulties imposed by the three-dimensional geometry, the lack of symmetry that leads to magnetic resonances on rational surfaces, and the stringent restrictions on plasma flows from neoclassical physics, a complete mathematical description of the characteristics of a generic 3D ideal MHD equilibrium, let alone of the stability of such systems, is not available in the standard literature. The analytic description of nonsymmetric 3D flows has relied on various approximations, such as incompressibility \citep{kamchatnov1982topological_soliton_mhd} or large aspect ratio \citep{kovrizhnykh1989_3D_avg_MHD_flow,kovrizhnykh1980_3D_avg_MHD}. Several variational formalisms for the 3D plasma flow problem exist \citep{hameiri1998variational,greene1969variational,Ilgisonis_Pastukhov_2000_variational,Vladimirov_Moffatt_1995_variational_MHD}. A nontrivial result from the variational formalism is the proof that under certain conditions ideal MHD allows more than one kind of cross-helicity \citep{hameiri1981spectral_C,Ilgisonis_Pastukhov_2000_variational,Vladimirov_Moffatt_1995_variational_MHD}.
We shall call the additional symmetry vector associated with the conserved quantity Hameiri's vector, because it was first discovered by \cite{hameiri1981spectral_C} and later derived independently by \cite{Ilgisonis_Pastukhov_2000_variational} and \cite{Vladimirov_Moffatt_1995_variational_MHD} (see the discussion in \citep{hameiri1998variational} and \citep{Ilgisonis_Pastukhov_2000_variational}). A major disadvantage of the flow-modified variational formalism is that, unlike in the static limit, the energy functional for the flow problem does not possess a minimum unless the equilibrium flow is small enough or is parallel to the magnetic field \citep{hameiri1998variational}. In this work, instead of addressing the flow damping problem in a 3D system, we take a step back and analyze ideal MHD equilibrium with steady flows in a non-symmetric geometry. Our goal here is to construct a class of perturbative solutions of non-symmetric ideal MHD with steady flows and to obtain consistency conditions for steady flows when the magnetic field lines close on themselves. The flows that we consider are larger than diamagnetic flows but are not sonic. We show that the same procedure that allowed us to obtain non-symmetric ideal MHD static equilibria in a low shear stellarator can be generalized to MHD with such flows. In particular, we highlight the crucial role of closed field lines \citep{Grad1971plasma,weitzner2014ideal, Weitzner2016} in avoiding magnetic resonances and going to arbitrarily high order in the perturbative analysis. We also show that for a special class of flows that possess the additional symmetry, Hameiri's vector, one can obtain a generalized Grad-Shafranov equation (GGS) similar to the quasisymmetric generalized Grad-Shafranov equation first derived in \citep{Burby2020} (see also the related works of \cite{burby2020_GGS,Constantin2020}). The GGS reduces to the standard flow-modified Grad-Shafranov equation in symmetric geometries.
We explicitly obtain the necessary conditions that need to be satisfied on rational surfaces. If these conditions are not satisfied, current singularities can develop on rational surfaces \citep{Soloviev_Shafranov_v5_1970plasma, Grad1967toroidal_confinement,Boozer_coords_1981}. The outline of the paper is as follows. First, in section \ref{prelim}, we study the ideal MHD equilibrium with flows and identify the characteristic surfaces. Then, in section \ref{Linearization} we study the perturbation of a class of exact solutions and obtain the dispersion relation. Then, we outline in section \ref{nearly_parallel} how the perturbation theory can be carried out to all orders for nearly parallel flows. Next, we utilize Hameiri's vector to derive the GGS and the necessary conditions on rational surfaces in section \ref{sec:GGS_C}. Finally, we discuss the implications of our results in section \ref{sec:discussion}. \section{Preliminaries} \label{prelim} We employ the standard model of ideal magnetohydrodynamics to represent the plasma in a steady flow state, but we add the simplifying assumption that the entropy is constant in the entire plasma. The system is \begin{subequations} \begin{align} \bm{\nabla}\cdot (\rho \bm{u} )&=0 \label{density_eqn}\\ \rho \bm{u} \cdot \bm{\nabla} \bm{u} +\bm{\nabla} p &=\bm{J} \times \bm{B},\quad \bm{J}=\bm{\nabla}\times \bm{B} \label{momentum_eqn}\\ \bm{\nabla} \cdot \bm{B}&=0 \label{divB}\\ -\bm{E}=\bm{\nabla} \Phi &= \bm{u} \times \bm{B}. \label{Efield} \end{align} \label{ideal_MHD_flows} \end{subequations} It is often convenient to replace \eqref{Efield} by the equivalent expression \begin{align} \bm{\nabla} \times \left( \bm{u} \times \bm{B}\right)=0 \label{NoEfield} \tag{\ref{Efield}*} \end{align} While \eqref{Efield} and \eqref{NoEfield} are equivalent, the form \eqref{Efield} makes explicit the need to select specific EMFs, equivalently the periods associated with the potential $\Phi$.
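The rewriting of the Lorentz force used in the $\Pi$-form of the momentum equation, \eqref{Pi_momentum_eqn}, rests on the standard vector identity $\left(\bm{\nabla}\times \bm{B}\right)\times \bm{B}=\bm{B}\cdot\bm{\nabla}\bm{B}-\bm{\nabla}\left(B^2/2\right)$. The following Python snippet, an illustrative aside of ours with an arbitrarily chosen quadratic test field, spot-checks this identity numerically:

```python
# Illustrative numerical spot-check (not part of the original analysis) of
#   (curl B) x B = (B . grad) B - grad(|B|^2 / 2),
# the identity that converts J x B in the momentum equation into the Pi-form.
import random

def B(x, y, z):
    # arbitrary smooth quadratic test field (hypothetical, for the check only);
    # central differences are exact for quadratics, up to rounding
    return (y * z + x**2, x * z - y**2, x * y + z**2)

def jacobian(p, h=1e-3):
    # dB[i][j] = d B_j / d x_i, by central differences
    dB = []
    for i in range(3):
        q1, q2 = list(p), list(p)
        q1[i] += h
        q2[i] -= h
        dB.append([(a - b) / (2 * h) for a, b in zip(B(*q1), B(*q2))])
    return dB

def identity_residual(p):
    Bp, dB = B(*p), jacobian(p)
    curl = (dB[1][2] - dB[2][1], dB[2][0] - dB[0][2], dB[0][1] - dB[1][0])
    JxB = (curl[1] * Bp[2] - curl[2] * Bp[1],
           curl[2] * Bp[0] - curl[0] * Bp[2],
           curl[0] * Bp[1] - curl[1] * Bp[0])
    BgradB = [sum(Bp[i] * dB[i][j] for i in range(3)) for j in range(3)]
    # grad(|B|^2/2)_j = sum_i B_i * dB_i/dx_j, by the chain rule
    gradB2h = [sum(Bp[i] * dB[j][i] for i in range(3)) for j in range(3)]
    return max(abs(JxB[j] - (BgradB[j] - gradB2h[j])) for j in range(3))

random.seed(0)
for _ in range(10):
    p = [random.uniform(-2, 2) for _ in range(3)]
    assert identity_residual(p) < 1e-8
```

Since the test field is quadratic, the central differences are exact up to rounding, and the residual sits at floating-point noise level.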
As noted we add the assumption \begin{align} p=p(\rho) \label{pressure_eqn} \end{align} to close the system. For concreteness, we shall assume an adiabatic equation of state of the form $p\sim \rho^\gamma$, where $\gamma$ is the ratio of specific heats. If we introduce \begin{align} \Pi\equiv p+\frac{1}{2}\bm{B}\cdot\bm{B} \label{Pi_eqn} \end{align} then an equivalent form for \eqref{momentum_eqn} is \begin{align} \rho \bm{u} \cdot \bm{\nabla} \bm{u} +\bm{\nabla} \Pi =\bm{B}\cdot\bm{\nabla} \bm{B} \label{Pi_momentum_eqn} \tag{\ref{momentum_eqn}*} \end{align} We start with a formal mathematical exploration of the system \eqref{density_eqn}-\eqref{pressure_eqn}. It is clear that there are eight scalar first-order differential equations for the eight scalar unknowns $\rho,\Phi,\bm{u},\bm{B}$. If we were to use \eqref{NoEfield} instead of \eqref{Efield}, there would be one fewer unknown but also a hidden solvability condition for \eqref{NoEfield} that adds an unknown and leads back to \eqref{Efield}. We now wish to determine the ``type" of the system: elliptic, hyperbolic, parabolic, or mixed. A closely related issue is the description of the standing waves in the linearized system. We linearize about a constant state for $\rho, \u$ and $\bm{B}$ and look for waves propagating as $\exp{(i \bm{k} \cdot \bm{x})}$. Such solutions are special cases of simple waves in fluid dynamics \citep{Courant_Friedrichs_1999supersonic}. With $c^2\equiv dp/d\rho$, we obtain the following linear system \begin{subequations} \begin{align} (\bm{k} \cdot \bm{u}) \delta \rho + \rho (\bm{k} \cdot \delta\bm{u}) &=0 \label{17a}\\ \rho (\bm{k} \cdot \bm{u})\delta \bm{u} +c^2 \bm{k} \delta \rho&= \delta \bm{B} (\bm{k} \cdot \bm{B})-\bm{k} (\delta \bm{B} \cdot \bm{B}) \label{17b}\\ \bm{k} \times (\delta \bm{u} \times \bm{B})+\bm{k} \times (\bm{u} \times \delta \bm{B}) &=0 \label{17c}\\ \bm{k} \cdot \delta \bm{B} &=0.
\label{17d} \end{align} \label{k_pertb_eqn_system} \end{subequations} Given the first order perturbations $\delta \rho$, $\delta \u$ and $\delta \bm{B}$, they satisfy the system \eqref{17a}-\eqref{17d}. From \eqref{17c} we find \begin{align} \delta \bm{u} (\bm{k} \cdot \bm{B})-\bm{B} (\bm{k} \cdot \delta \bm{u})- \delta \bm{B} (\bm{k} \cdot \bm{u}) =0 \label{17cs} \tag{\ref{17c}*} \end{align} while \eqref{17b} together with \eqref{17a} yields \begin{align} \rho (\bm{u} \cdot \bm{k})^2 \delta \bm{u} -\rho \bm{k} c^2 (\bm{k} \cdot \delta \bm{u})= \left( \bm{k} \times \left( \delta \u (\bm{k} \cdot \bm{B})-\bm{B} (\bm{k} \cdot \delta \u) \right) \right) \times \bm{B} \label{pre_disp_eqn} \end{align} Taking components of \eqref{pre_disp_eqn} in the $\bm{k}$ and $\bm{B}$ directions, we obtain \begin{subequations} \begin{align} \rho \left( (\bm{k} \cdot \bm{u})^2-c^2k^2\right) \bm{k} \cdot \delta \bm{u} = k^2 B^2 (\bm{k} \cdot \delta \bm{u})-k^2 (\bm{k} \cdot \bm{B})(\bm{B}\cdot \delta \bm{u})\\ \rho (\bm{k} \cdot \bm{u})^2(\bm{B}\cdot \delta \bm{u})-\rho (\bm{k}\cdot \bm{B})c^2 (\bm{k} \cdot \delta\bm{u}) =0. \end{align} \end{subequations} A simple calculation shows that the condition for a solution is the ``dispersion relation'': \begin{align} \left( \rho (\bm{k} \cdot \bm{u})^2\right)^2 -\rho (\bm{k} \cdot \bm{u})^2\left( \rho c^2 +B^2\right) k^2 +k^2 (\bm{k} \cdot \bm{B})^2\rho c^2=0 \label{The_disp} \end{align} We observe that this condition is satisfied identically for all vectors $\bm{k}$ perpendicular to $\u$ and $\bm{B}$. The expression also shows connections to the fast-slow wave combination found in ideal magnetohydrodynamic wave propagation studies. Note, however, that no Alfv\'en wave appears. Moreover, the magnitude of $\bm{k}$ does not appear in the system; only its direction occurs. We study the dispersion relation \eqref{The_disp} and its solutions in much more detail in Appendix \ref{app:The_disp}.
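In fact, the reality of the solutions can be seen directly: regarded as a quadratic in $X=\rho(\bm{k}\cdot\bm{u})^2$, \eqref{The_disp} has discriminant $k^4(\rho c^2+B^2)^2-4k^2(\bm{k}\cdot\bm{B})^2\rho c^2 \ge k^4(\rho c^2-B^2)^2\ge 0$ by the Cauchy-Schwarz inequality $(\bm{k}\cdot\bm{B})^2\le k^2B^2$, and both roots are non-negative since their sum and product are. The following Python snippet, an illustrative numerical aside of ours, confirms this for random parameters:

```python
# Illustrative check (ours, not from the original analysis): the dispersion
# relation, viewed as a quadratic in X = rho*(k.u)^2,
#   X^2 - X*(rho*c2 + B2)*k2 + k2*(k.B)^2*rho*c2 = 0,
# always has real, non-negative roots (the fast-slow pair).
import math
import random

def roots(rho, c2, Bvec, kvec):
    k2 = sum(k * k for k in kvec)
    B2 = sum(b * b for b in Bvec)
    kB = sum(k * b for k, b in zip(kvec, Bvec))
    bcoef = (rho * c2 + B2) * k2      # sum of the two roots
    ccoef = k2 * kB**2 * rho * c2     # product of the two roots
    disc = bcoef**2 - 4 * ccoef
    # discriminant bounded below by k2^2 * (rho*c2 - B2)^2 >= 0
    assert disc >= k2**2 * (rho * c2 - B2)**2 - 1e-9 * bcoef**2
    s = math.sqrt(max(disc, 0.0))
    return (bcoef - s) / 2, (bcoef + s) / 2

random.seed(1)
for _ in range(100):
    rho, c2 = random.uniform(0.1, 5), random.uniform(0.1, 5)
    Bv = [random.uniform(-2, 2) for _ in range(3)]
    kv = [random.uniform(-2, 2) for _ in range(3)]
    x1, x2 = roots(rho, c2, Bv, kv)
    assert x1 >= -1e-9 and x2 >= x1   # both roots non-negative
```

Note that, as remarked above, only the direction of $\bm{k}$ matters: the relation is homogeneous of degree four in $\bm{k}$, so rescaling $\bm{k}$ rescales both roots without changing the resonance condition.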
In particular, we show that real physical solutions of \eqref{The_disp} exist. An alternate interpretation of the relation is also significant. We might pose the question: What are the characteristic surfaces for the steady flow system? Characteristic surfaces are defined by the following property. Suppose one specifies $\rho, \u$ and $\bm{B}$ on a given surface. Can one then determine the normal derivative of these quantities on that surface? If so, the surface is not characteristic. On a characteristic surface, the flow variables are not arbitrary; they must be related. These surfaces define important properties of any solution. Suppose $\chi=0$ is such a surface and define $\bm{k}$ as the gradient of $\chi$. Hence $\bm{k}$ is normal to the surface. If one further defines $\delta \rho, \delta \u$ and $\delta \bm{B}$ as the gradients of these variables dotted into the normal to the surface, then these quantities satisfy the system \eqref{k_pertb_eqn_system}, except that there would be inhomogeneous terms added to the equations, terms given solely in terms of the variables on the given surface. With this interpretation of the system, the relation \eqref{The_disp} is the condition that one cannot give the unknown functions arbitrarily on the surface in question. Such surfaces, the characteristic surfaces, are thus defined by \eqref{The_disp}. A physical problem without such real surfaces is typically elliptic. When they exist, there are usually hyperbolic properties of the system as well. Their appearance indicates the complexity of steady magnetohydrodynamic flow. Flows with both elliptic and hyperbolic characteristics can occur in ideal magnetohydrodynamic equilibria. The mixed nature (neither fully elliptic nor hyperbolic) of MHD leads to difficult unresolved mathematical issues. \section{A simple linearized problem} \label{Linearization} In order to gain some insight into the nature of steady flow states we examine a simple linearized problem.
We start from an exact solution of the system \eqref{density_eqn}-\eqref{Efield} \begin{subequations} \begin{align} \bm{B} &= (0,f(x),g(x))\\ \bm{u} &= (0,v(x),w(x))\\ p&=p(x)\\ \Pi&=p(x)+\frac{1}{2}(f^2+g^2)\\ -\bm{E}&=\Phi_{,x}= v(x)g(x)-w(x)f(x). \end{align} \label{eqb_exact_soln} \end{subequations} We linearize about this state and introduce the perturbed variables with the structure \begin{subequations} \begin{align} \B^{(1)}&= (B^{(1)}_x(x) \sin{(my + nz)},B^{(1)}_y(x) \cos{(my + nz) },B^{(1)}_z(x) \cos{(my + nz) })\\ \bm{\u}^{(1)}&= (u^{(1)}_x(x) \sin{(my + nz)},u^{(1)}_y(x) \cos{(my + nz) },u^{(1)}_z(x) \cos{(my + nz) })\\ (p^{(1)},\Pi^{(1)},\Phi^{(1)})&= (p^{(1)}(x),\Pi^{(1)}(x),\Phi^{(1)}(x))\cos{(my+nz)} \end{align} \label{linear_amplitudes} \end{subequations} We next give the linearized system of equations for the many amplitudes in \eqref{linear_amplitudes}. The two divergence relations are \begin{subequations} \begin{align} B^{(1)}_{x,x}-(m B^{(1)}_y + n B^{(1)}_z)=0 \label{23a}\\ - \rho^{(1)}( m v(x)+n w(x))+(\rho(x)u^{(1)}(x))_{,x}-\rho(m v^{(1)}+n w^{(1)}) =0\label{23b} \end{align} \label{divergent_reln} \end{subequations} while $\Pi^{(1)}$ is specified as \begin{align} \Pi^{(1)}= c^2 \rho^{(1)}+f(x) B^{(1)}_y +g(x) B^{(1)}_z \label{24} \end{align} where $c^2$ is the sound speed $dp/d\rho$ at the corresponding value of $\rho(x)$.
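The first relation \eqref{23a} results from inserting the ansatz \eqref{linear_amplitudes} into $\bm{\nabla}\cdot\bm{B}^{(1)}=0$ and cancelling the common factor $\sin(my+nz)$. The following Python snippet, an illustrative numerical aside of ours with arbitrarily chosen amplitude profiles, spot-checks this step:

```python
# Illustrative spot-check (ours, not from the original analysis) that the
# perturbation ansatz turns div B = 0 into the amplitude relation
#   B1x'(x) - (m*B1y(x) + n*B1z(x)) = 0
# once the common factor sin(m*y + n*z) is cancelled.
import math
import random

m, n = 2, 3
B1x = lambda x: x * x          # arbitrary (hypothetical) amplitude profiles
B1y = lambda x: 1.0 + x
B1z = lambda x: 2.0 - x
dB1x = lambda x: 2 * x         # exact derivative of B1x

def B1(x, y, z):
    th = m * y + n * z
    return (B1x(x) * math.sin(th), B1y(x) * math.cos(th), B1z(x) * math.cos(th))

def div_numeric(x, y, z, h=1e-5):
    # divergence of B1 by central differences
    d = 0.0
    for i, q in enumerate(((h, 0, 0), (0, h, 0), (0, 0, h))):
        d += (B1(x + q[0], y + q[1], z + q[2])[i]
              - B1(x - q[0], y - q[1], z - q[2])[i]) / (2 * h)
    return d

random.seed(2)
for _ in range(20):
    x, y, z = (random.uniform(-1, 1) for _ in range(3))
    predicted = (dB1x(x) - (m * B1y(x) + n * B1z(x))) * math.sin(m * y + n * z)
    assert abs(div_numeric(x, y, z) - predicted) < 1e-7
```

The second relation follows in the same way from $\bm{\nabla}\cdot(\rho\bm{u})=0$, the background gradient $\bm{u}\cdot\bm{\nabla}\rho^{(1)}$ supplying the term proportional to $\rho^{(1)}$.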
Ohm's law reads \begin{subequations} \begin{align} \Phi^{(1)}_{,x}&=v^{(1)} g + v B^{(1)}_z - w^{(1)} f - w B^{(1)}_y \label{25a}\\ -m \Phi^{(1)}&= w(x) B^{(1)}_x - u^{(1)} g(x) \label{25b}\\ -n \Phi^{(1)}&= -v(x) B^{(1)}_x + u^{(1)} f(x) \label{25c} \end{align} \label{PhiOne_eqn} \end{subequations} Finally the three momentum balance equations are \begin{subequations} \begin{align} u^{(1)}(m v+n w)\rho(x)+\Pi^{(1)}_{,x}&=B^{(1)}_{x}(m f + n g) \label{26a}\\ \rho \left( u^{(1)} v'-(m v+n w)v^{(1)}\right) -m \Pi^{(1)}&= B^{(1)}_{x}f'(x)-(m f+n g)B^{(1)}_y \label{26b}\\ \rho \left( u^{(1)} w'-(m v+n w)w^{(1)}\right) -n \Pi^{(1)}&= B^{(1)}_{x}g'(x)-(m f+n g)B^{(1)}_z \label{26c} \end{align} \label{momentum_balance_eqn} \end{subequations} We introduce the notation \begin{align} \bar{f}= m f + n g, \quad \bar{v} = m v + n w \label{notation} \end{align} We see that $\rho^{(1)}$ occurs only in \eqref{23b} and \eqref{24}. It is convenient to eliminate $\rho^{(1)}$ from the problem, and we replace \eqref{24} by \begin{align} \bar{v} \Pi^{(1)}= c^2\left( (\rho u^{(1)})_{,x}-\rho(m v^{(1)} + n w^{(1)})\right) +\bar{v}\left( f(x) B^{(1)}_y +g(x)B^{(1)}_z\right) \label{24s}\tag{\ref{24}*} \end{align} Provided the series retains the angular structure given in \eqref{linear_amplitudes} in all orders, the case $m=n=0$ never arises in the two Ohm's law equations \eqref{25b} and \eqref{25c}.
Thus, we may replace \eqref{25b},\eqref{25c} by the following \begin{subequations} \begin{align} \bar{v} \Phi^{(1)}&= u^{(1)}\left( v g - w f\right) = u^{(1)} \Phi^{(0)}_{,x} \label{25bs}\tag{\ref{25b}*}\\ -\bar{f} \Phi^{(1)}&= \left( w f - v g\right) B^{(1)}_x =-B^{(1)}_x \Phi^{(0)}_{,x} \label{25cs}\tag{\ref{25c}*} \end{align} \label{VbfbPhiOne} \end{subequations} We may carry out similar reductions of the momentum balance: from \eqref{26b} and \eqref{26c} we obtain \begin{subequations} \begin{align} \rho u^{(1)} \bar{v}'-\rho \bar{v} (m v^{(1)} + n w^{(1)})-(m^2+n^2)\Pi^{(1)}= B^{(1)}_x \bar{f}' -\bar{f} (m B^{(1)}_y + n B^{(1)}_z) \label{26bs} \tag{\ref{26b}*}\\ \rho u^{(1)} (n v' -m w')-\rho \bar{v} (n v^{(1)} -m w^{(1)})= B^{(1)}_x (n f' - m g' ) -\bar{f} (n B^{(1)}_y - m B^{(1)}_z) \label{26cs} \tag{\ref{26c}*} \end{align} \label{rhouOne_vprime} \end{subequations} The $x$ component \eqref{26a} then takes the simpler form \begin{align} (v g - w f)\Pi^{(1)}_{,x} = (\bar{f}^2-\rho\bar{v}^2)\Phi^{(1)} \label{26as}\tag{\ref{26a}*} \end{align} At this point we have five equations (\ref{23a},\ref{25a},\ref{24s}) and (\ref{26bs},\ref{26cs}) in which we may eliminate $u^{(1)}$ and $B^{(1)}_x$ by means of (\ref{25bs},\ref{25cs}), leaving the unknowns $(\Phi^{(1)}, \Phi^{(1)}_{,x},v^{(1)},w^{(1)},B^{(1)}_y,B^{(1)}_z)$ and $\Pi^{(1)}$. Hence if we consider $\Phi^{(1)}$ and $\Pi^{(1)}$ as given then we may solve for $\Phi^{(1)}_{,x}$. Thus, we obtain a first order linear differential equation for $\Phi^{(1)}$ of the form $$\Phi^{(1)}_{,x}= a(x) \Phi^{(1)} +b(x) \Pi^{(1)}.$$ However, the construction fails with the vanishing of the determinant of the system of five equations for $(\Phi^{(1)}_{,x},v^{(1)},w^{(1)},B^{(1)}_y,B^{(1)}_z)$ in terms of $\Phi^{(1)}$ and $\Pi^{(1)}$. The zeros of this determinant constitute the resonant singularities of the system.
We then obtain the determinant of this system by expressing $B^{(1)}_x$ and $u^{(1)}$ in terms of $\Phi^{(1)}$ through (\ref{25bs},\ref{25cs}) and ignoring the terms proportional to $\Phi^{(1)}$ or $\Pi^{(1)}$. We find \begin{align} \Delta = \begin{vmatrix} \frac{\bar{f}}{\Phi^{(0)}_{,x}} & 0 & 0 & -m & -n\\ 1 & -g & f & w & -v\\ 0 & \rho \bar{v} & 0 & -\bar{f} & 0\\ 0 & 0 & \rho \bar{v} & 0 & -\bar{f}\\ \frac{\rho c^2 \bar{v}}{\Phi^{(0)}_{,x}} & -\rho c^2 m & -\rho c^2 n & f\bar{v} & g\bar{v} \end{vmatrix} \label{Det} \end{align} and after straightforward reduction, dropping the nonvanishing overall factor $\rho/\Phi^{(0)}_{,x}$, we obtain \begin{align} \Delta= (\rho \bar{v}^2-\bar{f}^2)(\rho \bar{v}^2(c^2+(f^2+g^2)/\rho)-c^2\bar{f}^2) \label{28} \end{align} For the particular background state, resonances are associated with Alfv\'en waves, the first factor, and a particular form of the fast-slow wave, the second factor. It is clear that for a general equilibrium there will be resonances when $\bar{v}=\pm \bar{f}/\sqrt{\rho}$ or $\pm \bar{f} c/\sqrt{c^2\rho + (f^2+g^2)}$. For wide ranges of the ratio of the mode indices $m/n$, resonant surfaces will appear. Just as it was possible to find formal expansions in mode amplitudes of ideal MHD equilibrium, it may be of interest to explore the possibility of formal expansions of flows with no resonant singularities. In more mathematically precise language we can state the main idea as follows. One starts with a homogeneous linear system of five equations in seven unknowns of the form $A X = 0$, where $X$ is a 7-component vector and $A$ is a $5\times 7$ rectangular matrix. Next one writes $X = (x,y)$, where $x$ denotes the first five components of $X$, and $y$ denotes the last two components, $\Phi^{(1)}$ and $\Pi^{(1)}$.
Then the value of $y$ is fixed, which leads to the inhomogeneous linear system $ax = b$, where $a$ is the restriction of $A$ to the subspace $y=0$ and $b = -A(0,y)$. When $\text{det}(a)$ is non-zero, one therefore gets expressions for each component of $x$ that are linear in $y$. When $\text{det}(a)=0$, there is no solution for $x$ in general unless $b$ is in the image of $a$. The quantity $\Delta$ in eq. \eqref{Det} is precisely this $\text{det}(a)$. Such a possibility depends on arranging the state so that it is in resonance everywhere or nowhere. Note that when $\Delta = \text{det}(a)=0$, there is still a solution for $x$ provided $y$ can be adjusted so that $A(0,y)$ is in the image of $a$. Such a situation is possible for parallel flows, i.e., flows where $\bm{B}$ and $\u$ are parallel, or \begin{align} \u= (0,v(x),w(x))=\lambda(x)(0,B_y(x),B_z(x)). \label{29} \end{align} In this case $\bar{v}=\lambda \bar{f}$, so that \eqref{28} becomes \begin{align} \Delta= \bar{f}^4\left( \rho (\lambda(x))^2-1\right) \left( \rho (\lambda(x))^2(c^2+(f^2+g^2)/\rho)-c^2 \right) =0 \label{30} \end{align} If $\lambda(x)$ is chosen so that the second or the third factor vanishes, then $\Delta$ is identically zero for every mode. Such flows may well be of some interest, but they seem extremely pathological and do not appear to be realizable. However, the case $\bar{f}=0$ is of more interest and similar to the equilibrium case. In particular we assume the second factor never vanishes and $\bar{f}=0$ for $m=M, n=N$ where $M$ and $N$ are relatively prime. Thus, we assume \begin{align} \B^{(0)}=(0,N,-M)\mu(x) \label{31} \end{align} where $\mu(x)$ is an arbitrary function, while $\u$ is given by (\ref{29},\ref{31}). We expand in amplitude about this state and describe expansions to all orders in the amplitude.
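Substituting the parallel-flow relation $\bar{v}=\lambda\bar{f}$ into \eqref{28} shows that the resulting $\Delta$ vanishes for every mode $(m,n)$ simultaneously once $\rho\lambda^2=1$ (the Alfv\'enic case) or $\rho\lambda^2\left(c^2+(f^2+g^2)/\rho\right)=c^2$. The following Python snippet, an illustrative numerical aside of ours, confirms this substitution for random backgrounds and mode numbers:

```python
# Illustrative check (ours, not from the original analysis): substituting the
# parallel-flow relation vbar = lambda*fbar into the reduced determinant
#   Delta = (rho*vbar^2 - fbar^2) * (rho*vbar^2*(c2 + (f^2+g^2)/rho) - c2*fbar^2)
# gives Delta = 0 for EVERY mode (m, n) when lambda satisfies either
#   rho*lambda^2 = 1   or   rho*lambda^2*(c2 + (f^2+g^2)/rho) = c2.
import math
import random

def Delta(rho, c2, f, g, m, n, v, w):
    fbar = m * f + n * g
    vbar = m * v + n * w
    return ((rho * vbar**2 - fbar**2)
            * (rho * vbar**2 * (c2 + (f * f + g * g) / rho) - c2 * fbar**2))

random.seed(3)
for _ in range(50):
    rho, c2 = random.uniform(0.5, 3), random.uniform(0.5, 3)
    f, g = random.uniform(-2, 2), random.uniform(-2, 2)
    m, n = random.randint(-5, 5), random.randint(1, 5)
    for lam2 in (1 / rho, c2 / (rho * c2 + f * f + g * g)):
        lam = math.sqrt(lam2)
        v, w = lam * f, lam * g          # parallel flow u = lambda * B
        scale = 1 + abs(rho) * (m * f + n * g)**4
        assert abs(Delta(rho, c2, f, g, m, n, v, w)) < 1e-9 * scale
```

Such $\lambda$ makes one factor vanish identically in $x$ for all $(m,n)$, which is what renders these flows resonant everywhere rather than on isolated surfaces.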
\section{Nearly parallel flows} \label{nearly_parallel} In the last section we showed that perturbations of steady flows are typically subject to the appearance of singularities on particular surfaces. However, for the parallel flows given by (\ref{29},\ref{31}) singular surfaces need not appear. In this section we examine flows with these properties and lay out the argument that one may construct a formal series solution to all orders without the appearance of any singularities when the lowest order system is a parallel flow. We do not give the full details of the proof of expansions to all orders, but we consider the conclusion valid. The process closely parallels the development in \citep{weitzner2014ideal}, where such a formalism is fully developed. The expansion parameter of the series is the amplitude of the flow and field components. We assume that the lowest order field and flow state is given by (\ref{29},\ref{31}). In first order we add fields and flows given by a sum of terms of the form \eqref{linear_amplitudes}, subject to the condition that none of them is resonant. We may then continue to construct an expansion order by order. The series we construct are assumed to have the same structure of angular dependence in all orders. The linearized system used to determine the characteristics of the system closely parallels \eqref{k_pertb_eqn_system}. In this expansion at each order we must solve a system similar to \eqref{23a},\eqref{24s},\eqref{25a}, \eqref{25bs},\eqref{25cs},\eqref{26a},\eqref{26bs},\eqref{26cs}, where sums of products of lower order terms may appear added to the right hand side of the equations. When the mode is not resonant, so that $\Delta$ given by \eqref{30} is not zero, one may solve the linear homogeneous system.
Our task is to indicate how one should be able to select the series so that order by order there is no net resonant term arising in the combination of sums of products of lower orders which form the inhomogeneous terms of the generalization of the system. Before we address the central issue in this work we must characterize the structure of the resonance more fully. At resonance the quantities $\bar{v}$ and $\bar{f}$ are identically zero. Thus, from \eqref{25bs},\eqref{25cs}, \eqref{26as} and \eqref{VbfbPhiOne} we find that \begin{subequations} \begin{align} u^{(1)}(x)&=B_x^{(1)}=0 \label{u1Bx1}\\ \Phi^{(1)}&=0 \label{PhiOneis0} \quad \quad (\text{from \eqref{VbfbPhiOne}})\\ \Pi^{(1)}&=0 \label{PiOneis0} \quad \quad (\text{from \eqref{26as}}). \end{align} \label{resonant_eqn1} \end{subequations} Finally, the system is closed with the two conditions \begin{subequations} \begin{align} m B^{(1)}_y + n B^{(1)}_z &=0\\ m v^{(1)}+ n w^{(1)}&=0 \end{align} \label{resonant_eqn2} \end{subequations} The state given by \eqref{resonant_eqn1}, \eqref{resonant_eqn2} is the resonant mode in any order, provided \begin{align} \frac{m}{n}=\frac{M}{N}. \label{resonant_eqn3} \end{align} We finally turn to the inhomogeneous system where we expand the solution to some order in the amplitude, say order $P$. We obtain a system of the form \eqref{23a},\eqref{24s}, \eqref{PhiOne_eqn},\eqref{momentum_balance_eqn} where the index one is replaced by $P$ and inhomogeneous terms are added to the right hand sides of the equations. These terms are sums of products of terms of order lower than $P$. We can solve the system provided that there is no net contribution from a resonant term, for if there were such a term we could not guarantee the construction of a periodic solution of the system. We must show that the series can be arranged so that no net inhomogeneous term is present. We modify the structure of the solution in orders $(P-2)$ and $(P-1)$ so that no net singular terms arise in order $P$.
We assume that before modification there are terms in order $P$ with $m$ and $n$ satisfying \eqref{resonant_eqn3}. We add a resonant term in order $(P-2)$ with angular dependence $\exp{i(\mu y +\nu z)}$ and undetermined $x$ dependence. There are two such independent additions $B_y^{(P-2)}$ and $v^{(P-2)}$, say. We assume that there are non-resonant terms in the first order of the form \eqref{linear_amplitudes}. When these terms beat against each other there will be terms in order $P-1$ with angular dependence $\exp{i((\mu\pm m) y +(\nu\pm n) z)}$. Finally, in order $P$, terms from order $(P-1)$ beat again against the first order terms with angular structure $\exp{i(m y +n z)}$. We choose the as yet undetermined $x$ dependence so as to satisfy the two constraints. The quantities $u^{(P)},B_x^{(P)},\Phi^{(P)},\text{ and }\Pi^{(P)}$ which were all zero in the solution of the homogeneous data problem are all non-zero and are determined. \section{Analysis of steady flows that admit an additional symmetry vector }\label{sec:GGS_C} We have so far outlined how one can perturbatively construct non-symmetric steady plasma flows in ideal MHD. We considered flows that are nearly parallel to avoid resonances. Compared to non-symmetric MHD equilibrium without flows, the steady flow system \eqref{ideal_MHD_flows} is far more complicated due to extra equations and nonlinearities from the flow variables. In particular, it is not clear if the resonances can be avoided. Fortunately, analytical progress can be made if the steady flows possess a symmetry vector, the Hameiri vector \citep{hameiri1983_MHD_flow_equilibrium}, which is closely related to the quasisymmetry vector \citep{Burby2020,Rodriguez2020a}. In the following, we employ a more formal approach to investigate non-symmetric 3D flows that come equipped with the Hameiri vector. We shall now state our goals and the main results obtained in this section. 
Our primary objective is to investigate the properties of a class of non-symmetric 3D flow systems that have flow components both parallel and perpendicular to the magnetic field and admit Hameiri's symmetry vector. Under the additional assumption that the density does not change along this vector, we show that non-symmetric generalizations of Bernoulli's law, angular momentum conservation, a generalized Grad-Shafranov (GGS) equation, and associated generalized Hamada conditions can be obtained. Therefore, our main results show that the description of these special non-symmetric flows parallels that of symmetric flows. The organization of this section is as follows. We begin with a discussion of the Hameiri vector and point out its close connections with the quasisymmetry vector in section \ref{subsec:Hameiri_C_and QS}. We then discuss the key assumptions that we make in order to carry out the analysis in section \ref{subsec:basic_assumptions}. We derive the generalizations of Bernoulli's law in section \ref{subsec:gen_Bernoulli}. We show that a generalized Grad-Shafranov equation can be constructed following the approach of \cite{Burby2020} in section \ref{subsec:GGS_flow}. We discuss the generalization of the Hamada conditions in section \ref{subsec:last_Hamada} and summarize the results in section \ref{subsec:summary}. \subsection{ Hameiri's vector $\textbf{C}$ and weak quasisymmetry }\label{subsec:Hameiri_C_and QS} We briefly summarize a key result due to Hameiri, which is essential to our work.
Assuming a steady flow with nested toroidal flux surfaces labelled by $\psi$, \cite{hameiri1983_MHD_flow_equilibrium} showed that ideal MHD allows an additional cross-helicity of the form $\int d^3\bm{r}\:\u\cdot\bm{C}$, where the integral is taken over the plasma volume, if there exists a vector $\bm{C}$ such that \begin{align} \bm{C}\cdot \bm{\nabla} \psi=0,\quad \bm{\nabla} \cdot \bm{C} =0, \quad \bm{\nabla} \times \left(\bm{B} \times \frac{1}{\rho}\bm{C} \right)=0, \quad \bm{\nabla} \times \left( \u \times \bm{C} \right)=0. \label{C_conditions} \end{align} Hameiri has shown that in axisymmetry, $\bm{C}$ is $R^2\bm{\nabla} \varphi$ in the usual $(R,Z,\varphi)$ cylindrical coordinates used in the tokamak literature. Furthermore, from \eqref{C_conditions} we can see that the vector $\bm{C}$ is analogous to a magnetic field, since it is frozen-in with the flow and lies on flux surfaces. We shall make the additional assumption that the density is constant in the direction of the Hameiri vector, i.e. $\bm{C}\cdot\bm{\nabla} \rho=0$. Writing $\bm{C}=\rho \bm{Q}$, we get an equivalent set of conditions on $\bm{Q}$, \begin{align} \bm{Q}\cdot \bm{\nabla} \psi=0,\quad \bm{\nabla} \cdot (\rho\bm{Q}) =0, \quad \bm{\nabla} \times \left(\bm{B} \times \bm{Q} \right)=0, \quad \bm{\nabla} \times \left( \rho\u \times \bm{Q} \right)=0. \label{Q_n_u_conditions} \end{align} The conditions on $\bm{Q}$ from \eqref{Q_n_u_conditions} read \begin{subequations} \begin{align} \bm{Q}\cdot \bm{\nabla} \rho=0 \label{QDrho}\\ \bm{B} \times \bm{Q} =\bm{\nabla} \psi \label{BxQ} \\ \bm{\nabla} \cdot \bm{Q} =0, \label{divQ} \end{align} \label{Q_conditions} \end{subequations} while the conditions on $\u$ from \eqref{Q_n_u_conditions} and \eqref{ideal_MHD_flows} are \begin{subequations} \begin{align} \bm{\nabla} \cdot (\rho\u) =0 \label{div_rho_u}\\ \u\times \bm{B} = \bm{\nabla} \Phi(\psi) \label{uxB}\\ \bm{\nabla} \times \left( \rho\u \times \bm{Q} \right)=0.
\label{curl_rho_uxQ} \end{align} \label{u_conditions} \end{subequations} We want to point out that the conditions given in \eqref{Q_conditions} are closely related to the ``weak quasisymmetry" conditions obtained in \citep{Rodriguez2020a}. If instead of $\Q\cdot \bm{\nabla} \rho=0$, we assume $\Q\cdot \bm{\nabla} B=0$ and $\rho=\rho(\psi,B)$ we get exactly the weak quasisymmetry (QS) conditions. This close connection of ideal MHD flows with quasisymmetry was pointed out earlier by Helander \citep{helander2007rapid,Helander2014}. The analogy with ``weak QS" becomes more evident when we look at the internal consistency of \eqref{Q_conditions}. For given $\bm{B}$ and $\rho$, the system \eqref{Q_conditions} is an overdetermined system for $\bm{Q}$ as it represents four constraints for the three components of $\bm{Q}$. From \eqref{BxQ} we get \begin{align} \bm{Q}= \frac{1}{B^2}\left( \left( \B\cdot \Q\right) \bm{B} -\B\times \bm{\nabla} \psi \right). \label{Q_form} \end{align} The component $\left( \B\cdot \Q\right)$ can be chosen such that \eqref{QDrho} is satisfied. The divergence-free condition \eqref{divQ} then imposes the following condition on $\bm{B}$ \begin{align} \bm{J}\cdot\bm{\nabla}\psi=\bm{B}\cdot\bm{\nabla} \left( \B\cdot \Q\right)-\Q\cdot \bm{\nabla} B^2, \label{J_del_psi} \end{align} which must be satisfied in order for \eqref{Q_conditions} to have a consistent solution. Such a consistency condition indeed appears in QS systems \citep{Rodriguez2020a,Burby2020} without the last term since $\Q\cdot \bm{\nabla} B=0$. If we assume that $\bm{Q}$ is given we can construct a $\bm{B}$ consistent with $\bm{Q}$. 
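The representation \eqref{Q_form} inverts \eqref{BxQ} precisely because $\bm{B}\cdot\bm{\nabla}\psi=0$ on flux surfaces. The following Python snippet, an illustrative numerical aside of ours with randomly generated vectors, confirms that \eqref{Q_form} returns $\bm{B}\times\bm{Q}=\bm{\nabla}\psi$ whenever $\bm{\nabla}\psi$ is perpendicular to $\bm{B}$:

```python
# Illustrative check (ours, not from the original analysis) that
#   Q = ( (B.Q) B - B x grad(psi) ) / |B|^2
# satisfies  B x Q = grad(psi)  whenever  B . grad(psi) = 0.
import random

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def Q_from_B(B, gradpsi, BdotQ):
    # BdotQ is the free parallel component (B . Q), chosen arbitrarily here
    B2 = sum(b * b for b in B)
    BxG = cross(B, gradpsi)
    return tuple((BdotQ * B[i] - BxG[i]) / B2 for i in range(3))

random.seed(4)
for _ in range(50):
    B = [random.uniform(-2, 2) for _ in range(3)]
    w = [random.uniform(-2, 2) for _ in range(3)]
    gradpsi = cross(B, w)                 # guarantees B . grad(psi) = 0
    Q = Q_from_B(B, gradpsi, random.uniform(-2, 2))
    BxQ = cross(B, Q)
    assert max(abs(BxQ[i] - gradpsi[i]) for i in range(3)) < 1e-9
```

The parallel component $(\bm{B}\cdot\bm{Q})$ is left free by this inversion; as stated above, it is the remaining conditions \eqref{QDrho} and \eqref{divQ} that constrain it.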
Using \eqref{BxQ}, we can write $\bm{B}$ in terms of $\bm{Q}$ as \begin{align} \bm{B}=\frac{1}{Q^2}\left( \left( \B\cdot \Q\right) \bm{Q} +\bm{Q}\times\bm{\nabla} \psi \right), \label{BinQform} \end{align} which is analogous to the symmetry flux coordinates in a tokamak \citep{haeseleerflux_coordinates} and the form used by \cite{Burby2020} to study the strong form of QS. To ensure that $\bm{\nabla}\cdot\bm{B}=0$ is satisfied by \eqref{BinQform}, we must have \begin{align} \Q\cdot \bm{\nabla} \left( \B\cdot \Q\right) +\bm{\nabla}\cdot \left( \bm{Q}\times\bm{\nabla}\psi \right) -\bm{B}\cdot\bm{\nabla} Q^2=0. \label{divB_extra_condition} \end{align} In the following, we shall assume that there exists a Hameiri vector of the form $\bm{C}=\rho \bm{Q}$, where $\bm{C}\cdot \bm{\nabla} \rho=0$ and $\bm{Q}$ satisfies \eqref{Q_conditions} and \eqref{divB_extra_condition} and is given. \subsection{Underlying assumptions} \label{subsec:basic_assumptions} Before proceeding further, we shall discuss the various assumptions that we make in our analysis. We begin with our fundamental assumption that there exist non-symmetric plasma flows that possess nested flux surfaces. In our formal exploration, we are not forced to make any specific assumptions about the rotational transform of the magnetic field or the magnetic shear. The existence of nested flux surfaces is not guaranteed, but this assumption is commonly made in the literature. In the following, we shall derive the necessary conditions (the Hamada conditions) required to suppress current singularities on the rational surfaces and preserve the nested surfaces. It will be apparent that the Hamada conditions cannot possibly be satisfied in a toroidal volume unless the magnetic shear is weak. With the analysis of the previous sections in mind, we therefore consciously restrict to weak magnetic shear in the following. Our second important assumption is the existence of Hameiri's symmetry vector $\bm{C}$.
Theoretically, $\bm{C}$ can be obtained by applying Noether's theorem to the one-fluid ideal MHD Lagrangian \citep{Ilgisonis_Pastukhov_2000_variational}. In particular, $\bm{C}$ is related to the relabelling symmetry of ideal MHD \citep{Ilgisonis_Pastukhov_2000_variational,hameiri1998variational,Vladimirov_Moffatt_1995_variational_MHD}. However, since the system of equations \eqref{C_conditions} determining $\bm{C}$ is a nonlinear overdetermined system, it is difficult to prove the existence of such a vector. Moreover, unlike for QS, a systematic study of $\bm{C}$ has, to the best of our knowledge, not been carried out. As discussed in detail in \citep{hameiri1998variational}, an important necessary condition that needs to be satisfied in order for $\bm{C}$ to exist in a toroidal domain \citep{hameiri1983_MHD_flow_equilibrium} is that on each closed magnetic field line, \begin{align} \oint \frac{d\ell}{B}\rho = m(\psi). \label{rho_condition} \end{align} Here, $m(\psi)$ is assumed to be a smooth function and $d\ell$ denotes the differential element along the magnetic field. If condition \eqref{rho_condition} is not satisfied, then the flow must be nearly parallel \citep{hameiri1998variational}. We then assumed that $\bm{C}\cdot\bm{\nabla}\rho=0$, which implies that the density (and hence, from \eqref{pressure_eqn}, the pressure) possesses a continuous symmetry along $\bm{C}$. This choice has been made solely for simplicity and analytical tractability. Let us briefly discuss some of the consequences of the assumption $\Q\cdot \bm{\nabla} \rho=0=\Q\cdot \bm{\nabla} p$. Firstly, we note that \eqref{rho_condition} is satisfied identically. Secondly, if $\bm{Q}$ lines do not close on themselves, then density and pressure cannot vary on a flux surface, i.e., $p=p(\psi)$, $\rho=\rho(\psi)$. Therefore, such systems must be subsonic \citep{tasso1998axisymmetric}, since large centrifugal forces lead to variations of density and pressure on flux surfaces.
When $\bm{Q}$ lines are closed, flows do not have to be subsonic, and pressure variations on flux surfaces are allowed. Although we shall not make any assumption on the closure of $\bm{Q}$ lines, it is interesting to note that in QS, the symmetry lines close helically or toroidally depending on the type of QS. The final assumption, that $\bm{Q}$ satisfies \eqref{divB_extra_condition}, is due to the fact that \eqref{BinQform} is divergence-free only in axisymmetry \citep{Burby2020}. The condition \eqref{divB_extra_condition} is not really an additional constraint on $\bm{Q}$ (or $\bm{C}=\rho \bm{Q}$), since it follows directly from \eqref{BxQ}. Any $\bm{Q}$ satisfying \eqref{BxQ} with a divergence-free physical magnetic field necessarily satisfies \eqref{divB_extra_condition}. We have included the condition \eqref{divB_extra_condition} only because of its use in subsequent calculations. \subsection{ Generalized Bernoulli's law and angular momentum }\label{subsec:gen_Bernoulli} It is well known \citep{hameiri1983_MHD_flow_equilibrium,Throumoulopoulos_tasso_Weitzner2006_nonexistence_purely_poloidal_flow} that five scalar functions are needed to describe steady axisymmetric ideal MHD flows. Out of these, two functions are given by the electrostatic potential $\Phi'(\psi)$ and the entropy (assumed constant). The remaining three functions, denoted by $(\Lambda(\psi), H(\psi), I(\psi))$, are associated with the parallel component of the flow, the Bernoulli law, and angular momentum, respectively. In the following, our goal is to derive these quantities systematically for non-symmetric flows with the help of the Hameiri vector $\bm{C}=\rho \bm{Q}$.
For a given magnetic field $\bm{B}$ and $\bm{Q}$ that satisfy \eqref{Q_conditions} and \eqref{J_del_psi}, it can be shown \citep{hameiri1998variational} that a steady state flow satisfying \eqref{u_conditions} is given by \begin{align} \u = \frac{\Lambda(\psi)}{\rho}\bm{B} - \Phi'(\psi) \bm{Q}, \label{u_form} \end{align} which clearly resembles the form of the axisymmetric steady flow of a tokamak \citep{hameiri1983_MHD_flow_equilibrium}. We also get the flux function $\Lambda(\psi)$ from \eqref{u_form}. From \eqref{u_form}, we note that any two vectors from $\u,\bm{B}$ and $\bm{Q}$ can be used as basis vectors on a flux surface. We shall choose $\bm{B}$ and $\bm{Q}$ since the commutator of the two derivatives $\bm{B}\cdot\bm{\nabla},\Q\cdot \bm{\nabla} $ vanishes (as shown in Appendix \ref{app:useful}). Choosing $\bm{B},\bm{Q},\bm{\nabla}\psi$ as basis vectors, we shall now obtain the components of the ideal MHD force balance equation \eqref{momentum_eqn}. We find it convenient to rewrite \eqref{momentum_eqn} in the form of a vorticity equation \begin{align} \bm{\nabla}\left( \frac{1}{2}u^2 + h \right) = \u \times \left( \bm{\nabla}\times \u\right) +\frac{1}{\rho}\bm{J}\times \bm{B}. \label{vorticity_eqn} \end{align} Here, we have defined $h$ such that $\bm{\nabla} h= (1/\rho)\bm{\nabla} p(\rho)$, and we have used the standard vector identity $\u\cdot\bm{\nabla} \u = \bm{\nabla} (u^2/2) -\u\times (\bm{\nabla}\times \u)$. Dotting with $\bm{B}$ and $\bm{Q}$ respectively, and using \eqref{uxB} and \eqref{u_form}, we get \begin{subequations} \begin{align} \bm{B}\cdot\bm{\nabla} \left( \frac{1}{2}u^2 + h \right) + \bm{\nabla}\cdot \left( \Phi'(\psi)\u \times \bm{\nabla} \psi \right)=0 \label{temp_BD_vort}\\ \rho \Q\cdot \bm{\nabla} \left( \frac{1}{2}u^2 + h \right) - \bm{J}\cdot\bm{\nabla}\psi + \bm{\nabla}\cdot \left( \Lambda(\psi) \u \times \bm{\nabla} \psi\right)=0.
\label{temp_QD_vort} \end{align} \label{temp_BD_QD_vort} \end{subequations} To evaluate the divergence terms that appear in \eqref{temp_BD_QD_vort} we use \eqref{uxB} and \eqref{uxdlpsi_n_its_div}. We note that the assumption $\Q\cdot \bm{\nabla} \rho=0$ helps us to simplify \eqref{temp_QD_vort}. Simplifying \eqref{temp_BD_QD_vort} using \eqref{QDrho}, \eqref{J_del_psi} and \eqref{uxdlpsi_n_its_div} we get \begin{subequations} \begin{align} \bm{B}\cdot\bm{\nabla} \left( \frac{1}{2}u^2 + h + \Phi'(\psi)\left( \u\cdot \Q \right) \right) -\Q\cdot \bm{\nabla} \left( \left( \u\cdot \B\right) \Phi'(\psi)\right)=0 \label{BD_vort}\\ \bm{B}\cdot\bm{\nabla} \left( \left( \bm{B} -\Lambda(\psi)\u \right)\cdot \bm{Q}\right)- \Q\cdot \bm{\nabla} \left( \frac{1}{2}\rho u^2 +B^2 -\Lambda(\psi) \left( \u\cdot \B\right) \right) =0. \label{QD_vort} \end{align} \label{BD_QD_vort} \end{subequations} In axisymmetry, the $\Q\cdot \bm{\nabla} $ terms in \eqref{BD_QD_vort} vanish identically, and we get two homogeneous magnetic differential equations. Solving the homogeneous magnetic differential equations, we get two flux functions which denote Bernoulli's law and momentum conservation in the symmetry ($\bm{Q}$) direction \citep{hameiri1983_MHD_flow_equilibrium,tasso1998axisymmetric}. The situation is similar here except that the magnetic differential equations are non-homogeneous; therefore, we need to check for consistency conditions. We postpone the discussion on the consistency conditions until section \ref{subsec:last_Hamada}. 
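For concreteness, in the axisymmetric limit one may take $\bm{Q}=R^2\bm{\nabla}\varphi$, so that $\Q\cdot \bm{\nabla} f=\partial_\varphi f=0$ for any axisymmetric scalar $f$, with $\left( \u\cdot \Q \right)=R u_\varphi$ and $\left( \B\cdot \Q\right)=R B_\varphi$. The two homogeneous magnetic differential equations obtained from \eqref{BD_QD_vort} then integrate to
\begin{align*}
\frac{1}{2}u^2 + h + \Phi'(\psi)\, R u_\varphi = H(\psi), \qquad R B_\varphi - \Lambda(\psi)\, R u_\varphi = I(\psi),
\end{align*}
the standard Bernoulli and angular momentum relations for axisymmetric MHD flows \citep{hameiri1983_MHD_flow_equilibrium,tasso1998axisymmetric}.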
Assuming that the solvability conditions are satisfied, we solve \eqref{BD_QD_vort} and obtain the generalized Bernoulli and the generalized angular momentum equations \begin{subequations} \begin{align} &\frac{1}{2}u^2 + h + \Phi'(\psi)\left( \u\cdot \Q \right) =H(\psi) +\mathcal{H}\quad \quad \quad \text{(Bernoulli)} \label{Bernoulli} \\ &\left( \B\cdot \Q\right)-\Lambda(\psi)\left( \u\cdot \Q \right) = I(\psi)+\mathcal{I},\quad \quad\text{(angular momentum)} \label{Q_momentum} \end{align} \label{Bernouili_n_Q_momentum} \end{subequations} where $\mathcal{H}$ and $\mathcal{I}$, defined by \begin{align} \mathcal{H} = \Q\cdot \bm{\nabla} \int\frac{d\ell}{B} \left( \u\cdot \B\right) \Phi'(\psi), \quad \mathcal{I} =\Q\cdot \bm{\nabla} \int \frac{d\ell}{B}\left( \frac{1}{2}\rho u^2 +B^2 -\Lambda(\psi) \left( \u\cdot \B\right) \right), \label{cH_n_cI} \end{align} are single-valued and doubly-periodic functions if the Hamada conditions (see \eqref{solvability_u_B_circulations} below) are satisfied. They vanish identically in the axisymmetric case. As is well known \citep{beskin2009mhd,hameiri1998variational,hameiri1983_MHD_flow_equilibrium}, the availability of conserved quantities is very helpful in characterizing flow patterns. \subsection{ Generalized Grad-Shafranov equation for flows with the Hameiri vector }\label{subsec:GGS_flow} We are now in a position to obtain the GGS from the $\bm{\nabla} \psi \cdot $ component of the vorticity equation \eqref{vorticity_eqn}. The step-by-step derivation is provided in Appendix \ref{app:GGS}. The steps to derive the GGS are identical to the ones in the derivation of the flow-modified classical Grad-Shafranov equation in axisymmetry or helical symmetry. However, there are important differences, which we shall now discuss.
Firstly, in the absence of axial/helical symmetry, the vector \begin{align} \bm{w}= \left( \bm{\nabla} \times \bm{Q} \right) \times \bm{Q} +\bm{\nabla} Q^2, \label{w_form} \end{align} which denotes the deviation of $\bm{Q}$ from an axial/helical symmetry vector \citep{Burby2020}, appears in the GGS. We can check that \eqref{divB_extra_condition}, the condition required to ensure $\bm{\nabla}\cdot\bm{B}=0$ when $\bm{B}$ is expressed in terms of $\bm{Q}$ (equation \eqref{BinQform}), is identical to \begin{align} \Q\cdot \bm{\nabla} \left( \B\cdot \Q\right) - \bm{B}\cdot \bm{w}=0. \end{align} Obviously, if $\bm{Q}$ is an axial/helical symmetry vector, both terms individually vanish and $\bm{B}$ is automatically divergence-free. Secondly, the geometric quantity \begin{align} \omega_Q\equiv\bm{Q}\cdot \bm{\nabla} \times \bm{Q}, \label{QdotcurlQ} \end{align} which does not appear in the axisymmetric GS but appears in the helically symmetric GS, also appears in the GGS. Finally, terms appear with the force-like quantity \begin{align} \bm{F_Q}= \bm{J}\times \bm{Q} + \bm{\nabla} \left( \B\cdot \Q\right), \label{FQ_form} \end{align} which vanishes identically in static MHD \citep{Burby2020}.
With the definitions (\eqref{w_form}, \eqref{QdotcurlQ} and \eqref{FQ_form}) in mind, we can write the flow-modified GGS in the following form \begin{align} \bm{\nabla}\cdot \left( \left( 1-\frac{\Lambda^2}{\rho}\right) \frac{1}{Q^2} \bm{\nabla} \psi\right) +\rho \frac{dH}{d\psi}+&\left( \u\cdot \B\right) \frac{d\Lambda}{d\psi} + \frac{\left( \B\cdot \Q\right)}{Q^2}\frac{dI}{d\psi} - \rho\left( \u\cdot \Q \right) \frac{d\Phi'}{d\psi} +\mathcal{N}=0, \label{GGS} \end{align} where \begin{align} \mathcal{N}=\frac{I+\mathcal{I}}{Q^2}\left( \frac{\omega_Q}{Q^2} + \frac{\bm{F_Q}\cdot \bm{\nabla}\psi}{|\bm{\nabla}\psi|^2}\right)&+ \left( \rho \bm{\nabla} \mathcal{H} + \frac{\left( \B\cdot \Q\right)}{Q^2}\bm{\nabla} \mathcal{I}\right)\cdot \frac{\bm{\nabla}\psi}{|\bm{\nabla}\psi|^2} \nonumber \\ &+\frac{\bm{w}\cdot \bm{\nabla} \psi}{Q^2} \left( \frac{1}{Q^2}\left( 1-\frac{\Lambda^2}{\rho} \right) - \frac{\rho \Phi'\left( \u\cdot \Q \right)}{|\bm{\nabla}\psi|^2}\right). \label{cN} \end{align} In axisymmetry \citep{hameiri1983_MHD_flow_equilibrium}, $\mathcal{N}=0$. For helical symmetry, an additional term of the form $(I\omega_Q)/Q^4$ appears. To study force balance other than ideal MHD force balance, we can replace $\bm{F_Q}$ by other forces \citep{Rodriguez_Sengupta_Bhattacharjee_QS_flow}. For flows purely in the symmetry direction, $\Lambda(\psi)=0$, while for strictly parallel flows, $\Phi'(\psi)=0$. One can easily extend the GGS to include other non-relativistic forces. Unlike in symmetric geometries, the coefficients and derivatives in the flow-modified GGS depend on all three spatial variables \citep{Constantin2020,Burby2020}. Hence, we need to impose the additional condition \begin{align} \bm{Q} \cdot \bm{\nabla} \psi=0. \label{Qdelpsi_s} \end{align} Whether such solutions to the GGS can be constructed is a challenging open problem.
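As a consistency check, in the axisymmetric limit $\bm{Q}=R^2\bm{\nabla}\varphi$ we have $Q^2=R^2$, $\bm{w}=0$, $\omega_Q=0$ and $\mathcal{N}=0$, and with $\left( \B\cdot \Q\right)=R B_\varphi$ and $\left( \u\cdot \Q \right)=R u_\varphi$ the GGS \eqref{GGS} reduces to
\begin{align*}
\bm{\nabla}\cdot \left( \left( 1-\frac{\Lambda^2}{\rho}\right) \frac{\bm{\nabla} \psi}{R^2}\right) +\rho \frac{dH}{d\psi}+\left( \u\cdot \B\right) \frac{d\Lambda}{d\psi} + \frac{B_\varphi}{R}\frac{dI}{d\psi} - \rho R u_\varphi \frac{d\Phi'}{d\psi}=0,
\end{align*}
the familiar flow-modified Grad-Shafranov equation for axisymmetric equilibria \citep{hameiri1983_MHD_flow_equilibrium}, up to notational conventions.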
\subsection{ The generalized Hamada conditions }\label{subsec:last_Hamada} We now return to the consistency conditions that must be satisfied in order for \eqref{BD_QD_vort} to have non-singular smooth solutions. When the rotational transform is irrational, we can flux-surface average \eqref{BD_QD_vort}. Since both $\bm{B}$ and $\bm{C}$ are tangential to the flux surfaces, there is no inconsistency. On the other hand, for rational rotational transform, the vanishing of the two equations under the closed-field-line $\oint d\ell/B$ integral is not automatic and leads to nontrivial constraints \citep{Newcomb_1959_MDE}, the so-called \textit{Hamada conditions} \citep{Hamada1962_coordinates,Helander2014}. We recall that in ideal MHD equilibrium with scalar pressure $p(\psi)$, the integral constraints that must be satisfied on rational surfaces are \citep{Grad1967toroidal_confinement,Soloviev_Shafranov_v5_1970plasma,Grad1971plasma} \begin{align} \oint \frac{d\ell}{B} = c_1(\psi), \quad \oint \frac{d\ell}{B} B^2= \oint \bm{B}\cdot \bm{d\ell} = c_2(\psi), \quad c'_2(\psi) + p'(\psi) c_1(\psi)=0, \label{ideal_MHD_dlB_oonstraints} \end{align} where $c_1(\psi)$ and $c_2(\psi)$ are single-valued continuous functions of $\psi$ that reduce to constants in the vacuum limit. We give a straightforward proof of these conditions in Appendix \ref{app:consistency_conditions}. The Hamada conditions \eqref{ideal_MHD_dlB_oonstraints} are identically satisfied in symmetric geometries but are not satisfied in general for non-symmetric geometry with arbitrary pressure and rotational transform profiles \citep{Boozer_coords_1981,Weitzner2016}, leading to singular currents on rational surfaces \citep{Loizu_2015_existence_current_sheets,Loizu_2015_magnetic_islands_singular_currents}. In the following, we shall obtain the generalized Hamada conditions for steady flows with Hameiri's vector $\bm{C}=\rho\bm{Q}$. We note that \eqref{rho_condition} is already in the form of a Hamada condition.
To find the other conditions, we carry out the $\oint d\ell/B$ integrals of \eqref{BD_QD_vort} (denoted by $\langle \,\cdot\, \rangle$ brackets). Using the commutation of the two derivatives, we get two homogeneous equations of the form $\Q\cdot \bm{\nabla} \langle A \rangle =0$, which implies that $\langle A \rangle$ must be a flux function. Thus, we obtain two consistency conditions \begin{subequations} \begin{align} \langle\u \cdot\bm{B} \rangle&= \oint \u\cdot \bm{d\ell}=C_1(\psi) \label{solvability_u_circulations}\\ \bigg\langle B^2 + \frac{1}{2}\rho u^2 -\Lambda \left( \u\cdot \B\right)\bigg\rangle &= \oint \bm{B}\cdot \bm{d\ell}+\oint \frac{d\ell}{B}\left( \frac{1}{2}\rho u^2 \right) -\Lambda(\psi) C_1(\psi) =C_2(\psi), \label{solvability_B_circulations} \end{align} \label{solvability_u_B_circulations} \end{subequations} which show that the circulations of $\u$ and $\bm{B}$ along the closed field line are required to be flux functions in order that \eqref{BD_QD_vort} be solvable. The final Hamada condition is obtained by closed-field-line averaging of \eqref{GGS} on a rational flux surface, which leads to the following equation for $C_2'(\psi)$ (see Appendix \ref{app:derivation_final_Hamada}) \begin{align} C'_2(\psi)+C_1(\psi)\frac{d\Lambda}{d\psi}= H\partial_\psi \langle\rho\rangle -\Phi'\partial_\psi \langle\rho\left( \u\cdot \Q \right) \rangle -\langle \mathcal{N} \rangle + \partial_\psi\bigg\langle \rho \mathcal{H} +\frac{\mathcal{I}}{Q^2}\left( \B\cdot \Q\right) \bigg\rangle. \label{C2prime_comstraint} \end{align} Equations \eqref{solvability_u_B_circulations} and \eqref{C2prime_comstraint}, together with \eqref{rho_condition}, describe integral constraints that need to be satisfied on each rational flux surface. These are the generalized Hamada conditions that include the effect of steady flows. In Appendix \ref{app:consistency_conditions} we show that when these consistency conditions are satisfied, magnetic resonances on rational surfaces are avoided.
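As a simple check, in the static limit ($\u=0$, hence $\Lambda=\Phi'=0$ and $C_1=0$) the circulation constraints reduce to
\begin{align*}
\oint \u\cdot \bm{d\ell}=0, \qquad \oint \bm{B}\cdot \bm{d\ell}=C_2(\psi),
\end{align*}
so that \eqref{solvability_B_circulations} recovers the second of the classical conditions \eqref{ideal_MHD_dlB_oonstraints}, with \eqref{rho_condition} and \eqref{C2prime_comstraint} playing the roles of the first and third.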
Given the complicated nature of the generalized Hamada conditions, it is doubtful that they will be satisfied by a generic steady flow. Furthermore, since the Hamada conditions need to be satisfied on each rational surface, the magnetic shear needs to be sufficiently weak to avoid passage through multiple low-order rational surfaces. \subsection{Summary of section \ref{sec:GGS_C} } \label{subsec:summary} In section \ref{sec:GGS_C}, we discussed a special class of non-symmetric steady flows endowed with Hameiri's symmetry vector $\bm{C}=\rho \bm{Q}$, where $\bm{C}\cdot \bm{\nabla} \rho =0$ and $\bm{Q}$ satisfies the overdetermined system \eqref{Q_conditions}. If the integral constraints \eqref{rho_condition}, \eqref{solvability_u_B_circulations}, and \eqref{C2prime_comstraint} are satisfied, then the description of such flows involves a GGS \eqref{GGS} and four flux functions similar to axisymmetric flows \citep{hameiri1983_MHD_flow_equilibrium, tasso1998axisymmetric}, namely $\Lambda(\psi),\Phi'(\psi),H(\psi),I(\psi)$. Note that only these four appear in the GGS since we have assumed a barotropic equation of state with constant entropy; the additional flux function $m(\psi)$ is needed to ensure the existence of Hameiri's symmetry vector. The integral constraints \eqref{solvability_u_B_circulations} are necessary to avoid current singularities (see Appendix \ref{app:consistency_conditions}). They furnish two more flux functions $C_1(\psi), C_2(\psi)$, which measure the circulations of $\u$ and $\bm{B}$ in a closed-field-line system. Physically, they amount to the constraint that the flows should not lead to accumulation of charges inside the closed magnetic flux tubes on rational surfaces. Finally, non-symmetric geometry introduces several extra terms grouped together as $\mathcal{N}$, such as $\mathcal{H}$, $\mathcal{I}$, and $\bm{w}\cdot \bm{\nabla} \psi$.
\section{Discussion}\label{sec:discussion} We studied several classes of non-symmetric ideal MHD equilibrium with flows larger than the diamagnetic flows. Such flows could occur in stellarators. We showed that the techniques developed in carrying out perturbation methods to all orders for static MHD equilibrium \cite{weitzner2014ideal} can also be extended to MHD equilibrium with flows. The basic idea is that if the field lines are closed, i.e., the rotational transform is rational and the magnetic shear is weak, one can systematically eliminate magnetic resonances at each order by utilizing the ``free-function'' that one gets by solving the magnetic differential equation from lower orders. Plasma flows introduce extra resonances, which are absent in static MHD. It is, in general, very complicated to eliminate all such resonances. However, for the class of equilibrium with nearly parallel flows, one can do so. It is to be noted that exact solutions with large parallel flows can be constructed \citep{kamchatnov1982topological_soliton_mhd} when the flow is incompressible. We have argued that compressible, nearly parallel flows are also possible in the MHD model of plasma. Although the present work on the analysis of magnetic resonances has been carried out in rectangular coordinates with periodic boundary conditions, extensions to polar and toroidal coordinates should be fairly direct. The principal issue is behavior near the magnetic axis. To deal with this complication, one must require that components of the fields and flows vanish sufficiently rapidly near the axis. The interactions through the nonlinear terms typically do not destroy these properties. We leave the detailed near-axis analysis for the future. We have studied a particular class of perpendicular flows by taking a more formal approach. Our approach is based on utilizing a symmetry vector found by Hameiri \citep{hameiri1981spectral_C,hameiri1998variational}.
The generalizations of Bernoulli's law and conservation of angular momentum were also obtained thanks to Hameiri's symmetry vector. We obtained a flow-modified generalized Grad-Shafranov equation for such flows, together with the constraints resulting from closed magnetic field lines. It is clear from the nature of the constraints that if such non-parallel flows exist, they must be special. There exists a close connection between Hameiri's vector and the quasisymmetry vector that will be further discussed elsewhere \citep{Rodriguez_Sengupta_Bhattacharjee_QS_flow}. We have not attempted to estimate the damping of the flows due to neoclassical effects, which is an essential but difficult question \citep{Simakov_Helander_2009_neoclassical_mom_transp}. We shall address this in detail in a forthcoming paper \citep{Rodriguez_Sengupta_Bhattacharjee_QS_flow}. We only make a few observations here. First, the neoclassical effects are tied to the parallel electric field, which does not appear in ideal MHD. If the parallel electric field can be shown to be self-consistently small or higher-order in the Larmor radius, then the MHD description prevails, and flow damping would be minimized. Therefore, our results are valid within the framework of ideal MHD. Second, the techniques developed here can be easily extended to more sophisticated MHD models that are relevant to astrophysical plasmas. Our results here do not necessarily contradict earlier results \citep{Simakov_Helander_2009_neoclassical_mom_transp, sugama2011_QS_large_flow} that argue that large flows cannot be supported even if the stellarator is quasisymmetric but not completely axisymmetric. We showed that it is challenging to avoid magnetic resonances unless the flow is nearly parallel. Except for very special flows, the generalized Hamada conditions on rational surfaces cannot be satisfied, severely limiting the possibility of steady non-symmetric flows in generic stellarators.
We expect that kinetic constraints will further restrict the class of available solutions. \section*{ Acknowledgements} The authors would like to sincerely thank the anonymous reviewer for their excellent, meticulous, and constructive suggestions that have significantly improved the organization and presentation of the results in the paper. In addition, WS would like to thank E. Rodriguez, A. Bhattacharjee, A. B. Hassam, E. J. Paul, and J. Juno for helpful discussions. This research was partly funded by the US DOE grant no. DEFG02-86ER53223 and Simons Foundation/SFARI (560651, AB).
\section{Byzantine Perturbed Gradient Descent}\label{sec:perturbed_bgd} In this section, we describe our algorithm, \emph{Byzantine Perturbed Gradient Descent} (ByzantinePGD), which provably finds a second-order stationary point of the population loss $F(\cdot)$ in the distributed setting with Byzantine machines. As mentioned, ByzantinePGD robustly aggregates gradients from the worker machines, and performs multiple rounds of carefully calibrated perturbation to combat the effect of Byzantine machines. We now elaborate. It is well-known that naively aggregating the workers' messages using standard averaging can be arbitrarily skewed in the presence of just a single Byzantine machine. In view of this, we introduce the subroutine $\mathsf{GradAGG}\{ \widehat{\mathbf{g}}_i(\mathbf{w}) \}_{i=1}^m$, which robustly aggregates the gradients $\{ \widehat{\mathbf{g}}_i(\mathbf{w}) \}_{i=1}^m$ collected from the $ m $ workers. We stipulate that $\mathsf{GradAGG}$ provides an estimate of the true population gradient $ \nabla F(\cdot) $ with accuracy $ \Delta $, \emph{uniformly} across $\mathcal{W}$. This property is formalized using the terminology of \emph{inexact gradient oracle}. \begin{definition}[Inexact gradient oracle]\label{def:inexact_oracle} We say that $\mathsf{GradAGG}$ provides a $\Delta$-inexact gradient oracle for the population loss $F(\cdot)$ if, for every $\mathbf{w}\in\mathcal{W}$, we have $ \twonms{\mathsf{GradAGG}\{ \widehat{\mathbf{g}}_i(\mathbf{w}) \}_{i=1}^m - \nabla F(\mathbf{w})} \le \Delta$. \end{definition} Without loss of generality, we assume that $\Delta \le 1$ throughout the paper. In this section, we treat $\mathsf{GradAGG}$ as a given black box; in Section~\ref{sec:robust_estimation}, we discuss several robust aggregation algorithms and characterize their inexactness $\Delta$. 
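For intuition, one natural instance of such a robust aggregator is the coordinate-wise median (robust aggregators of this flavor are discussed in Section~\ref{sec:robust_estimation}). A toy sketch with hypothetical worker reports, showing that a single Byzantine report cannot skew the median while it arbitrarily skews the naive average:

```python
import numpy as np

def grad_agg(worker_grads):
    # Coordinate-wise median: a minority of arbitrarily corrupted reports
    # cannot move the median of any coordinate outside the honest range.
    return np.median(np.stack(worker_grads), axis=0)

true_grad = np.array([1.0, -2.0, 0.5])
reports = [true_grad.copy() for _ in range(4)]     # 4 honest workers
reports.append(np.array([1e6, -1e6, 1e6]))         # 1 Byzantine worker

robust = grad_agg(reports)
naive = np.mean(np.stack(reports), axis=0)

assert np.allclose(robust, true_grad)              # median is unharmed
assert np.linalg.norm(naive - true_grad) > 1e5     # average is ruined
```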
We emphasize that in the Byzantine setting, the output of $\mathsf{GradAGG}$ can take values adversarially within the error bounds; that is, $\mathsf{GradAGG}\{ \widehat{\mathbf{g}}_i(\mathbf{w})\}_{i=1}^m$ may output an arbitrary vector in the ball $\mathbb{B}_{\nabla F(\mathbf{w})}(\Delta)$, and this vector can depend on the data in all the machines and all previous iterations of the algorithm. The use of robust aggregation with bounded inexactness, however, is not yet sufficient to guarantee convergence to an approximate local minimizer. As mentioned, the Byzantine machines may create fake local minima that trap a vanilla gradient descent iteration. Our ByzantinePGD algorithm is designed to escape such fake minima as well as any existing saddle points of $ F $. \subsection{Algorithm}\label{sec:algorithm} We now describe the details of our algorithm, given in the left panel of Algorithm~\ref{alg:main_alg}. We focus on unconstrained optimization, i.e., $\mathcal{W} = \mathbb{R}^d$. In Section~\ref{sec:robust_estimation}, we show that the iterates $\mathbf{w}$ during the algorithm actually stay in a bounded $\ell_2$ ball centered at the initial iterate $\mathbf{w}_0$, and we will discuss the statistical error rates within the bounded space. In each parallel iteration, the master machine sends the current iterate $\mathbf{w}$ to all the worker machines, and the worker machines send back $\{\widehat{\mathbf{g}}_i(\mathbf{w})\}$. The master machine aggregates the workers' gradients using $\mathsf{GradAGG}$ and computes a robust estimate $\widehat{\mathbf{g}}(\mathbf{w})$ of the population gradient $\nabla F(\mathbf{w})$. The master machine then performs a gradient descent step using $\widehat{\mathbf{g}}(\mathbf{w})$. This procedure is repeated until it reaches a point $\widetilde{\mathbf{w}}$ with $\twonms{ \widehat{\mathbf{g}}(\mathbf{w}) } \le \epsilon$ for a pre-specified threshold $\epsilon$.
At this point, $\widetilde{\mathbf{w}}$ may lie near a saddle point whose Hessian has a large negative eigenvalue. To escape this potential saddle point, the algorithm invokes the $\mathsf{Escape}$ routine (right panel of Algorithm~\ref{alg:main_alg}), which performs $Q$ rounds of \emph{perturbation-and-descent} operations. In each round, the master machine perturbs $\widetilde{\mathbf{w}}$ randomly and independently within the ball $\mathbb{B}_{\widetilde{\mathbf{w}}}(r)$. Let $\mathbf{w}_0^\prime$ be the perturbed vector. Starting from the $\mathbf{w}_0^\prime$, the algorithm conducts at most $T_{\mathrm{th}}$ parallel iterations of $\Delta$-inexact gradient descent (using $\mathsf{GradAGG}$ as before): \begin{equation}\label{eq:perturbed_iterates} \mathbf{w}_t^\prime = \mathbf{w}_{t-1}^\prime - \eta \widehat{\mathbf{g}}(\mathbf{w}_{t-1}^\prime),~t \le T_{\mathrm{th}}. \end{equation} During this process, once we observe that $\twonm{\mathbf{w}_{t}^\prime - \mathbf{w}_{0}^\prime} \ge R$ for some pre-specified threshold $R$ (this means the iterate moves by a sufficiently large distance in the parameter space), we claim that $\widetilde{\mathbf{w}}$ is a saddle point and the algorithm has escaped it; we then resume $\Delta$-inexact gradient descent starting from $\mathbf{w}_{t}^\prime$. If after $Q$ rounds no sufficient move in the parameter space is ever observed, we claim that $\widetilde{\mathbf{w}}$ is a second-order stationary point of $F(\mathbf{w})$ and output $\widetilde{\mathbf{w}}$. \begin{figure*} \noindent\begin{minipage}[t]{\textwidth} \centering% \setlength{\fboxsep}{0.5pt} \fbox{% \begin{minipage}[t]{.48\textwidth} $\mathsf{ByzantinePGD}(\mathbf{w}_0, \eta, \epsilon, r, Q, R, T_{\mathrm{th}})$ \begin{algorithmic} \STATE $\mathbf{w} \leftarrow \mathbf{w}_0$ \WHILE{ $\mathrm{true}$ } \STATE \textit{\underline{Master}}: send $\mathbf{w}$ to worker machines. 
\PARFOR{$i\in[m]$} \STATE \textit{\underline{Worker $i$}}: compute $ \widehat{\mathbf{g}}_i(\mathbf{w}) $ \STATE send to master machine. \ENDPARFOR \STATE \textit{\underline{Master}}: \STATE $\widehat{\mathbf{g}}(\mathbf{w}) \leftarrow \mathsf{GradAGG}\{ \widehat{\mathbf{g}}_i(\mathbf{w}) \}_{i=1}^m$. \IF{$ \twonms{\widehat{\mathbf{g}}(\mathbf{w})} \le \epsilon$} \STATE \textit{\underline{Master}}: $\widetilde{\mathbf{w}} \leftarrow \mathbf{w}$, \STATE $(\mathsf{esc}, \mathbf{w}, \widehat{\mathbf{g}}(\mathbf{w}))$ $\leftarrow$ $\mathsf{Escape}$ ($\widetilde{\mathbf{w}}$, $\eta$, $r$, $Q$, $R$, $T_{\mathrm{th}}$). \IF{$ \mathsf{esc} = \mathrm{false}$} \STATE \textbf{return} $\widetilde{\mathbf{w}}$. \ENDIF \ENDIF \STATE \textit{\underline{Master}}: $\mathbf{w} \leftarrow \mathbf{w} - \eta\widehat{\mathbf{g}}(\mathbf{w}) $. \ENDWHILE \vspace{0.05in} \end{algorithmic} \end{minipage} } \fbox{ \begin{minipage}[t]{.48\textwidth} $\mathsf{Escape}(\widetilde{\mathbf{w}}, \eta, r, Q, R, T_{\mathrm{th}})$ \begin{algorithmic} \FOR{$k = 1,2, \ldots, Q$} \STATE \textit{\underline{Master}}: sample $ \mathbf{p}_k\sim\text{Unif}(\mathbb{B}_0(r))$, \STATE $\mathbf{w}' \leftarrow \widetilde{\mathbf{w}} + \mathbf{p}_k$, $\mathbf{w}_0^\prime \leftarrow \mathbf{w}'$. \FOR{$t = 0, 1, \ldots, T_{\mathrm{th}}$} \STATE \textit{\underline{Master}}: send $\mathbf{w}'$ to worker machines. \PARFOR{$i\in[m]$} \STATE \textit{\underline{Worker $i$}}: compute $ \widehat{\mathbf{g}}_i(\mathbf{w}') $ \STATE send to master machine. \ENDPARFOR \STATE \textit{\underline{Master}}: $\widehat{\mathbf{g}}(\mathbf{w}') \leftarrow \mathsf{GradAGG}\{ \widehat{\mathbf{g}}_i(\mathbf{w}') \}_{i=1}^m$. \IF{$ \twonms{\mathbf{w}' - \mathbf{w}_0^\prime} \ge R$} \STATE \textbf{return} $(\mathrm{true}, \mathbf{w}', \widehat{\mathbf{g}}(\mathbf{w}'))$. 
\ELSE \STATE $\mathbf{w}' \leftarrow \mathbf{w}' - \eta\widehat{\mathbf{g}}(\mathbf{w}') $ \ENDIF \ENDFOR \ENDFOR \STATE \textbf{return} $(\mathrm{false}, \mathbf{w}', \widehat{\mathbf{g}}(\mathbf{w}'))$. \vspace{0.18in} \end{algorithmic} \end{minipage} } \captionof{algorithm}{Byzantine Perturbed Gradient Descent (ByzantinePGD)} \label{alg:main_alg} \end{minipage} \end{figure*} \subsection{Convergence Guarantees}\label{sec:convergence} In this section, we provide the theoretical result guaranteeing that Algorithm~\ref{alg:main_alg} converges to a second-order stationary point. In Theorem~\ref{thm:main}, we let $F^* := \min_{\mathbf{w}\in\mathbb{R}^d} F(\mathbf{w})$, $\mathbf{w}_0$ be the initial iterate, and $F_0 := F(\mathbf{w}_0)$. \begin{theorem}[ByzantinePGD]\label{thm:main} Suppose that Assumptions~\ref{asm:lip_assumptions} holds, and assume that $\mathsf{GradAGG}$ provides a $ \Delta $-inexact gradient oracle for $F(\cdot)$ with $\Delta \le 1$. Given any $\delta\in(0,1)$, choose the parameters for Algorithm~\ref{alg:main_alg} as follows: step-size $\eta = \frac{1}{L_F}$, $\epsilon = 3\Delta$, $r = 4 \Delta^{3/5} d^{3/10}\rho_F^{-1/2}$, $R = \Delta^{2/5} d^{1/5} \rho_F^{-1/2}$, \begin{align*} Q &= 2 \log \bigg( \frac{ \rho_F( F_0 - F^*) }{48 L_F \delta (\Delta^{6/5}d^{3/5} + \Delta^{7/5}d^{7/10}) } \bigg), \text{ and}\\ T_{\mathrm{th}} &= \frac{L_F}{384 (\rho_F^{1/2} + L_F) (\Delta^{2/5}d^{1/5} + \Delta^{3/5}d^{3/10}) }. 
\end{align*} Then, with probability at least $1- \delta$, the output of Algorithm~\ref{alg:main_alg}, denoted by $\widetilde{\mathbf{w}}$, satisfies the bounds \begin{equation} \label{eq:first_second_order} \begin{aligned} \twonms{\nabla F(\widetilde{\mathbf{w}})} & \le 4\Delta, \\ \lambda_{\min} \big(\nabla^2 F (\widetilde{\mathbf{w}})\big) & \ge -1900 \big( \rho_F^{1/2} + L_F \big) \Delta^{2/5}d^{1/5} \log \Big( \frac{10}{\Delta} \Big), \end{aligned} \end{equation} and the algorithm terminates within $\frac{2(F_0 - F^*)L_F}{3\Delta^2} Q$ parallel iterations. \end{theorem} We prove Theorem~\ref{thm:main} in Appendix~\ref{prf:main}.\footnote{We make no attempt at optimizing the multiplicative constants in Theorem~\ref{thm:main}.} Let us now parse the above theorem and discuss its implications. Focusing on the scaling with $ \Delta $, we may read off from Theorem~\ref{thm:main} the following result: \begin{observation}\label{obs:achievable} Under the above setting, within $\widetilde{\mathcal{O}} (\frac{1}{\Delta^2})$ parallel iterations, ByzantinePGD outputs an $(\mathcal{O}(\Delta), \widetilde{\mathcal{O}}(\Delta^{2/5}))$-second-order stationary point $ \widetilde{\mathbf{w}} $ of $F(\cdot)$;\footnote{Here, by using the symbol $\widetilde{\mathcal{O}}$, we ignore logarithmic factors and only consider the dependence on $\Delta$.} that is, $$\twonms{\nabla F(\widetilde{\mathbf{w}})} \le 4\Delta \text{~~and~~} \lambda_{\min}(\nabla^2 F(\widetilde{\mathbf{w}})) \ge -\widetilde{\mathcal{O}}(\Delta^{2/5}).$$ \end{observation} In terms of the iteration complexity, it is well-known that for a smooth non-convex $F(\cdot)$, gradient descent requires at least $\frac{1}{\Delta^2}$ iterations to achieve $\twonms{\nabla F(\widetilde{\mathbf{w}})}\le \mathcal{O}(\Delta)$~\cite{nesterov1998introductory}; up to logarithmic factors, our result matches this complexity bound.
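To make the overall control flow of Algorithm~\ref{alg:main_alg} concrete, here is a minimal Python sketch of ByzantinePGD run against a generic $\Delta$-inexact gradient oracle. The function names, the ball-sampling helper, and the termination logic are our own illustrative choices and omit the paper's exact bookkeeping; `g_hat` stands in for $\mathsf{GradAGG}$ applied to the workers' messages.

```python
import numpy as np

def sample_ball(rng, d, radius):
    """Draw a uniform sample from the d-dimensional l2 ball of given radius."""
    v = rng.normal(size=d)
    v /= np.linalg.norm(v)
    return radius * rng.uniform() ** (1.0 / d) * v

def byzantine_pgd(g_hat, w, eta, eps, r, R, T_th, Q, rng):
    """Inexact gradient descent with random-perturbation escape rounds.

    g_hat(w) is a Delta-inexact estimate of grad F(w)."""
    while True:
        g = g_hat(w)
        if np.linalg.norm(g) <= eps:
            w_tilde = w                          # candidate output
            escaped = False
            for _ in range(Q):                   # Q independent perturbation rounds
                w_cur = w_tilde + sample_ball(rng, w.size, r)
                w_start = w_cur.copy()
                for _ in range(T_th):
                    w_cur = w_cur - eta * g_hat(w_cur)
                    if np.linalg.norm(w_cur - w_start) >= R:
                        escaped, w = True, w_cur  # moved far: escape succeeded
                        break
                if escaped:
                    break
            if not escaped:
                return w_tilde                   # no round escaped: declare success
        else:
            w = w - eta * g                      # ordinary inexact gradient step
```

On a toy strongly convex objective with an exact oracle ($\Delta = 0$), the sketch returns a point whose gradient norm is below $\epsilon$, matching the first-order part of the guarantee.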
In addition, our $\mathcal{O}(\Delta)$ first-order guarantee is clearly order-wise \emph{optimal}, as the gradient oracle is $\Delta$-inexact. It is currently unclear to us whether our $\widetilde{\mathcal{O}}(\Delta^{2/5})$ second-order guarantee is optimal. We provide a converse result showing that one cannot hope to achieve a second-order guarantee better than $ {\mathcal{O}}(\Delta^{1/2})$. \begin{proposition}\label{obs:second_order_lb} There exists a class of real-valued $1$-smooth and $1$-Hessian Lipschitz differentiable functions $\mathcal{F}$ such that, for any algorithm that only uses a $\Delta$-inexact gradient oracle, there exists $f\in\mathcal{F}$ such that the output of the algorithm $\widetilde{\mathbf{w}}$ must satisfy $\twonms{\nabla f(\widetilde{\mathbf{w}})} > \Delta/2$ and $\lambda_{\min}(\nabla^2 f(\widetilde{\mathbf{w}})) < -\Delta^{1/2}/2$. \end{proposition} We prove Proposition~\ref{obs:second_order_lb} in Appendix~\ref{prf:second_order_lb}. Again, we emphasize that our results above are in fact not restricted to the Byzantine distributed learning setting. They apply to any non-convex optimization problem (distributed or not) with inexact information for the gradients, including those with noisy but non-adversarial gradients; see Section~\ref{sec:related} for comparison with related work in such settings. As a byproduct, we can show that with a different choice of parameters, ByzantinePGD can be used in the standard (non-distributed) setting with access to the \emph{exact} gradient $\nabla F(\mathbf{w})$, and the algorithm converges to an $(\epsilon, \widetilde{\mathcal{O}}(\sqrt{\epsilon}))$-second-order stationary point within $\mathcal{O}(\frac{1}{\epsilon^2})$ iterations: \begin{theorem}[Exact gradient oracle]\label{thm:exact_oracle} Suppose that Assumption~\ref{asm:lip_assumptions} holds, and assume that for any query point $\mathbf{w}$ we can obtain the exact gradient, i.e., $\widehat{\mathbf{g}}(\mathbf{w}) \equiv\nabla F(\mathbf{w})$.
For any $\epsilon \in (0, \min\{ \frac{1}{\rho_F}, \frac{4}{L_F^2\rho_F} \})$ and $\delta\in(0,1)$, we choose the parameters in Algorithm~\ref{alg:main_alg} as follows: step-size $\eta = 1/L_F$, $Q=1$, $r=\epsilon$, $R = \sqrt{\epsilon/\rho_F}$, and $T_{\mathrm{th}} = \frac{L_F}{12\rho_F(R+r)}$. Then, with probability at least $1-\delta$, Algorithm~\ref{alg:main_alg} outputs a $\widetilde{\mathbf{w}}$ satisfying the bounds \begin{align*} \twonms{\nabla F(\widetilde{\mathbf{w}})} \le & \epsilon, \\ \lambda_{\min}(\nabla^2 F(\widetilde{\mathbf{w}})) \ge & -60\sqrt{\rho_F \epsilon} \log \Big(\frac{8\rho_F\sqrt{d}(F_0 - F^*)}{\delta\epsilon^2} \Big), \end{align*} and the algorithm terminates within $\frac{2L_F(F_0 - F^*)}{\epsilon^2}$ iterations. \end{theorem} We prove Theorem~\ref{thm:exact_oracle} in Appendix~\ref{apx:exact_oracle}. The convergence guarantee above matches that of the original PGD algorithm~\cite{jin2017escape} up to logarithmic factors. Moreover, our proof is considerably simpler, and our algorithm only requires gradient information, whereas the original PGD algorithm also needs function values. \section{Conclusion}\label{sec:conclusion} In this paper, we study security issues that arise in large-scale distributed learning because of the presence of saddle points in non-convex loss functions. We observe that in the presence of non-convexity and Byzantine machines, escaping saddle points becomes much more challenging. We develop ByzantinePGD, a computation- and communication-efficient algorithm that is able to provably escape saddle points and converge to a second-order stationary point, even in the presence of Byzantine machines. We also discuss three different choices of the robust gradient and function value aggregation subroutines in ByzantinePGD---median, trimmed mean, and the iterative filtering algorithm.
We characterize their performance in statistical settings, and argue for their near-optimality in different regimes including the high-dimensional setting. \section{Introduction}\label{sec:intro} Distributed computing has become increasingly important in modern data-intensive applications. In many applications, large-scale datasets are distributed over multiple machines for parallel processing in order to speed up computation. In other settings, the data sources are naturally distributed, and for privacy and efficiency considerations, the data are not transmitted to a central machine. An example is the recently proposed \emph{Federated Learning} paradigm~\cite{mcmahan2017federated,konevcny2016federated,konevcny2015federated}, in which the data are stored and processed locally in end users' cellphones and personal computers. In a standard worker-server distributed computing framework, a single master machine is in charge of maintaining and updating the parameter of interest, and a set of worker machines store the data, perform local computation and communicate with the master. In this setting, messages received from worker machines are prone to errors due to data corruption, hardware/software malfunction, and communication delay and failure. These problems are only exacerbated in a decentralized distributed architecture such as Federated Learning, where some machines may be subject to malicious and coordinated attacks and manipulation. A well-established framework for studying such scenarios is the \emph{Byzantine} setting~\cite{lamport1982byzantine}, where a subset of machines behave completely arbitrarily---even in a way that depends on the algorithm used and the data on the other machines---thereby capturing the unpredictable nature of the errors. Developing distributed algorithms that are robust in the Byzantine setting has become increasingly critical. In this paper we focus on robust distributed optimization for statistical learning problems.
Here the data points are generated from some unknown distribution $\mathcal{D}$ and stored locally in $m$ worker machines, each storing $n$ data points; the goal is to minimize a population loss function $F:\mathcal{W}\rightarrow \mathbb{R}$ defined as an expectation over $\mathcal{D}$, where $\mathcal{W}\subseteq\mathbb{R}^d$ is the parameter space. We assume that an $\alpha \in (0, 1/2)$ fraction of the worker machines are Byzantine; that is, their behavior is arbitrary. This Byzantine-robust distributed learning problem has attracted attention in a recent line of work~\cite{alistarh2018sgd,blanchard2017byzantine,chen2017distributed,feng2014distributed,su2016fault,su2016non,yin2018byzantine}. This body of work develops robust algorithms that are guaranteed to output an approximate minimizer of $F$ when it is convex, or an approximate stationary point in the non-convex case. However, fitting complicated machine learning models often requires finding a \emph{local minimum} of \emph{non-convex} functions, as exemplified by training deep neural networks and other high-capacity learning architectures~\cite{soudry2016no,ge2016matrix,ge2017no}. It is well-known that many of the stationary points of these problems are in fact saddle points and far away from any local minimum~\cite{kawaguchi2016deep,ge2017no}. These tasks hence require algorithms capable of efficiently \emph{escaping saddle points} and converging approximately to a local minimizer. In the centralized setting without Byzantine adversaries, this problem has been studied actively in recent work~\cite{ge2015escaping,jin2017escape,carmon2016accelerated,jin2017accelerated}. A main observation of the present paper is that the interplay between non-convexity and Byzantine errors makes escaping saddle points much more challenging.
In particular, by orchestrating their messages sent to the master machine, \emph{the Byzantine machines can create fake local minima near a saddle point of $ F $ that is far away from any true local minimizer}. Such a strategy, which may be referred to as \textbf{saddle point attack}, foils existing algorithms as we elaborate below: \begin{itemize}[leftmargin=3mm] \item \textbf{Challenges due to non-convexity:} When $F$ is convex, gradient descent (GD) equipped with a robust gradient estimator is guaranteed to find an approximate global minimizer (with accuracy depending on the fraction of Byzantine machines)~\cite{chen2017distributed,yin2018byzantine,alistarh2018sgd}. However, when $F$ is non-convex, such algorithms may be trapped in the neighborhood of a saddle point; see Example 1 in Appendix~\ref{apx:hardness}. \item \textbf{Challenges due to Byzantine machines:} Without Byzantine machines, vanilla GD~\cite{lee2016converge}, as well as its more efficient variants such as perturbed gradient descent (PGD)~\cite{jin2017escape}, is known to converge to a local minimizer with high probability. However, Byzantine machines can manipulate PGD and GD (even robustified) into a fake local minimum near a saddle point; see Example 2 in Appendix~\ref{apx:hardness}. \end{itemize} We discuss and compare with existing work in more detail in Section~\ref{sec:related}. The observations above show that existing robust and saddle-escaping algorithms, as well as their naive combination, are insufficient against saddle point attack. Addressing these challenges requires the development of new robust distributed optimization algorithms. \subsection{Our Contributions} In this paper, we develop \textbf{\emph{ByzantinePGD}}, a computation- and communication-efficient first-order algorithm that is able to escape saddle points and the fake local minima created by Byzantine machines, and converge to an approximate local minimizer of a non-convex loss.
To the best of our knowledge, our algorithm is the first to achieve such guarantees under \emph{adversarial} noise. Specifically, ByzantinePGD aggregates the empirical gradients received from the normal and Byzantine machines, and computes a robust estimate $ \widehat{\mathbf{g}}(\mathbf{w}) $ of the true gradient $ \nabla F(\mathbf{w}) $ of the population loss $ F $. Crucial to our algorithm is the injection of random perturbation to the iterates $ \mathbf{w} $, which serves the dual purpose of escaping saddle points and fake local minima. Our use of perturbation thus plays a more significant role than in existing algorithms such as PGD~\cite{jin2017escape}, as it also serves to combat the effect of Byzantine errors. To achieve this goal, we incorporate two crucial innovations: (i) we use multiple rounds of a larger, yet carefully calibrated, amount of perturbation, which is necessary to survive the saddle point attack; (ii) we use the moving distance in the parameter space as the criterion for successful escape, eliminating the need to (robustly) evaluate function values. Consequently, our analysis is significantly different from, and arguably simpler than, that of PGD. We develop our algorithmic and theoretical results in a flexible, two-part framework, decomposing the \emph{optimization} and \emph{statistical} components of the problem. \paragraph*{The optimization part:} We consider a general problem of optimizing a population loss function $ F $ given an \emph{inexact gradient oracle}. For each query point~$\mathbf{w}$, the $\Delta$-inexact gradient oracle returns a vector $ \widehat{\mathbf{g}}(\mathbf{w}) $ (possibly chosen adversarially) that satisfies $\twonms{\widehat{\mathbf{g}}(\mathbf{w}) - \nabla F(\mathbf{w})} \le \Delta$, where $\Delta$ is non-zero but bounded.
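As a concrete illustration of this definition, the following Python sketch wraps a true gradient in one possible worst-case $\Delta$-inexact oracle. The particular adversarial strategy (reporting a zero gradient whenever the true gradient is small, i.e., faking a stationary point) is our own illustrative instance, not the only adversary the definition allows.

```python
import numpy as np

def inexact_oracle(grad_F, delta):
    """Wrap grad_F in a Delta-inexact oracle: any answer within l2 distance
    delta of the true gradient is allowed. This instance reports a zero
    gradient whenever the true gradient has norm <= delta (a 'fake stationary
    point'), and otherwise shrinks the gradient's norm by exactly delta."""
    def g_hat(w):
        g = grad_F(w)
        n = np.linalg.norm(g)
        if n <= delta:
            return np.zeros_like(g)   # still within distance delta of g
        return g * (1.0 - delta / n)  # error of exactly delta in norm
    return g_hat
```

Any algorithm relying only on `g_hat` cannot distinguish a true stationary point from a faked one when the gradient norm is below $\Delta$, which is exactly the source of the $\mathcal{O}(\Delta)$ floor in the first-order guarantee.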
Given access to such an inexact oracle, we show that ByzantinePGD outputs an approximate local minimizer; moreover, \emph{no} other algorithm can achieve significantly better performance in this setting in terms of the dependence on $ \Delta $: \begin{theorem}[Informal; see Sec.~\ref{sec:convergence}]\label{thm:informal_main} Within $\widetilde{\mathcal{O}}(\frac{1}{\Delta^2})$ iterations, ByzantinePGD outputs an approximate local minimizer $\widetilde{\mathbf{w}}$ that satisfies $\twonms{\nabla F(\widetilde{\mathbf{w}})} \lesssim \Delta $ and $\lambda_{\min}\big( \nabla^2 F(\widetilde{\mathbf{w}}) \big) \gtrsim -\Delta^{2/5}$, where $ \lambda_{\min} $ is the minimum eigenvalue. In addition, given only access to a $\Delta$-inexact gradient oracle, \textbf{no} algorithm is guaranteed to find a point $ \widetilde{\mathbf{w}} $ with $\twonms{\nabla F(\widetilde{\mathbf{w}})} < \Delta/2 $ \textbf{or} $\lambda_{\min}\big( \nabla^2 F(\widetilde{\mathbf{w}}) \big) > -\Delta^{1/2}/2$. \end{theorem} Our algorithm is communication-efficient: it only sends gradients, and the number of parallel iterations in our algorithm \emph{matches} the well-known iteration complexity of GD for non-convex problems in the non-Byzantine setting~\cite{nesterov1998introductory} (up to log factors). In the exact gradient setting, a variant of the above result in fact matches the guarantees for PGD~\cite{jin2017escape}---as mentioned, our proof is simpler. Additionally, beyond Byzantine distributed learning, our results apply to any non-convex optimization problem (distributed or not) with inexact information for the gradients, including those with noisy but non-adversarial gradients. Thus, we believe our results are of independent interest in broader settings. \paragraph*{The statistical part:} The optimization guarantee above can be applied whenever one has a robust aggregation procedure that serves as an inexact gradient oracle with a bounded error $\Delta$.
We consider three concrete examples of such robust procedures: median, trimmed mean, and iterative filtering~\cite{diakonikolas2016robust,diakonikolas2017being}. Under statistical settings for the data, we provide explicit bounds on their errors $ \Delta $ as a function of the number of worker machines $m$, the number of data points on each worker machine $n$, the fraction of Byzantine machines $\alpha$, and the dimension of the parameter space $d$. Combining these bounds with the optimization result above, we obtain concrete statistical guarantees on the output $ \widetilde{\mathbf{w}} $. Furthermore, we argue that our first-order guarantees on $\twonms{\nabla F(\widetilde{\mathbf{w}})}$ are often nearly \emph{optimal} when compared against a universal statistical lower bound. This is summarized below: \begin{theorem}[Informal; see Sec.~\ref{sec:robust_estimation}]\label{thm:statistical} When combined with each of the following three robust aggregation procedures, ByzantinePGD achieves the following statistical guarantees: \\ (i) median: $\twonms{\nabla F(\widetilde{\mathbf{w}})} \lesssim \frac{\alpha \sqrt{d}}{\sqrt{n}} + \frac{d}{\sqrt{nm}} + \frac{\sqrt{d}}{n} $;\\ (ii) trimmed mean: $\twonms{\nabla F(\widetilde{\mathbf{w}})} \lesssim \frac{\alpha d}{\sqrt{n}} + \frac{d}{\sqrt{nm}}$;\\ (iii) iterative filtering: $\twonms{\nabla F(\widetilde{\mathbf{w}})} \lesssim \frac{\sqrt{\alpha}}{\sqrt{n}} + \frac{\sqrt{d}}{\sqrt{nm}}$. \\ Moreover, \textbf{no} algorithm can achieve $ \twonms{\nabla F(\widetilde{\mathbf{w}})} = o\big( \frac{\alpha}{\sqrt{n}} + \frac{\sqrt{d}}{\sqrt{nm}} \big)$. \end{theorem} We emphasize that the above results are established under a very strong adversary model: the Byzantine machines are allowed to send messages that depend arbitrarily on each other and on the data on the normal machines; they may even behave adaptively during the iterations of our algorithm.
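The first two aggregation rules in the theorem above are simple to state in code; the Python sketch below shows one minimal version, assuming each worker reports a gradient vector of the same dimension. The iterative filtering procedure of~\cite{diakonikolas2016robust} is considerably more involved and is omitted here.

```python
import numpy as np

def median_agg(grads):
    """GradAGG via the coordinate-wise median of the workers' reports."""
    return np.median(np.stack(grads), axis=0)

def trimmed_mean_agg(grads, beta):
    """GradAGG via the coordinate-wise beta-trimmed mean: in every coordinate,
    discard the beta*m largest and beta*m smallest reports, then average the
    remaining values."""
    G = np.sort(np.stack(grads), axis=0)
    m = G.shape[0]
    k = int(beta * m)
    return G[k:m - k].mean(axis=0)
```

For instance, with $m = 9$ workers of which two are Byzantine and send arbitrarily large vectors, both rules recover the honest gradient, provided the trimming level exceeds the Byzantine fraction $\alpha$.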
Consequently, this setting requires robust \emph{functional} estimation (of the gradient function), which is a much more challenging problem than the robust \emph{mean} estimation setting considered by existing work on median, trimmed mean and iterative filtering. To overcome this difficulty, we make use of careful covering net arguments to establish certain error bounds that hold \emph{uniformly} over the parameter space, regardless of the behavior of the Byzantine machines. Importantly, our inexact oracle framework allows such arguments to be implemented in a transparent and modular manner. \paragraph*{Notation} For an integer $N>0$, define the set $[N] := \{1,2,\ldots, N\}$. For matrices, denote the operator norm by $\twonms{\cdot}$; for symmetric matrices, denote the largest and smallest eigenvalues by $\lambda_{\max}(\cdot)$ and $\lambda_{\min}(\cdot)$, respectively. The $d$-dimensional $\ell_2$ ball centered at $\mathbf{w}$ with radius $r$ is denoted by $\mathbb{B}_{\mathbf{w}}^{(d)}(r)$, or $\mathbb{B}_{\mathbf{w}}(r)$ when it is clear from the context. \section*{Appendix} \section{Challenges of Escaping Saddle Points in the Adversarial Setting} \label{apx:hardness} We provide two examples showing that, in the non-convex setting with saddle points, an inexact oracle can lead to much worse sub-optimal solutions than in the convex setting, and that in the adversarial setting, escaping saddle points can be inherently harder than in the adversary-free case. Consider standard gradient descent using exact or $\Delta$-inexact gradients. Our first example shows that Byzantine machines have a more severe impact in the non-convex case than in the convex case. \paragraph{Example 1.}\label{ex:convexity} Let $d=1$ and consider the functions $F^{(1)}(w) = (w-1)^2$ and $F^{(2)}(w) = (w^2-1)^2/4$.
Here $F^{(1)}$ is strongly convex with a unique local minimizer $w^*=1$, whereas $F^{(2)}$ has two local (in fact, global) minimizers $w^*=\pm 1$ and a saddle point (in fact, a local maximum) $w=0$. Proposition~\ref{ppn:bad_example_2} below shows the following: for the convex $F^{(1)}$, gradient descent (GD) finds a near-optimal solution with sub-optimality proportional to $\Delta$, regardless of initialization; for the nonconvex $F^{(2)}$, GD initialized near the saddle point $w=0$ suffers from an $\Omega(1)$ sub-optimality gap. \begin{proposition}\label{ppn:bad_example_2} Suppose that $\Delta \le 1/2$. Under the setting above, the following holds.\\ (i) For $F^{(1)}$, starting from any $w_0$, GD using a $\Delta$-inexact gradient oracle finds $w$ with $F^{(1)}(w) - F^{(1)}(w^*) \le \mathcal{O} (\Delta )$. \\ (ii) For $F^{(2)}$, there exists an adversarial strategy such that starting from a $w_0$ sampled uniformly from $[-r,r]$, GD with a $\Delta$-inexact gradient oracle outputs $w$ with $F^{(2)}(w) - F^{(2)}(w^*) \ge \frac{9}{64}, \forall w^*=\pm1$, with probability $\min\{1, \frac{\Delta}{r}\}$. \end{proposition} \begin{proof} Since $F^{(2)}(w) = \frac{1}{4}(w^2-1)^2$, we have $\nabla F^{(2)}(w)=w^3 - w$. For any $w\in[-\Delta, \Delta]$, $| \nabla F^{(2)}(w) | \le \Delta$ (since $\Delta \le 1/2$). Thus, the adversarial oracle can always output $\widehat{g}(w) = 0$ when $w\in[-\Delta, \Delta]$, and we have $| \widehat{g}(w) - \nabla F^{(2)}(w) | \le \Delta$. Thus, if $w\in[-\Delta, \Delta]$, the iterate can no longer move with this adversarial strategy. Then, we have $F^{(2)}(w) - F^{(2)}(w^*) \ge F^{(2)}(\Delta) - 0 \ge \frac{9}{64}$ (since $\Delta \le 1/2$). The result for the convex function $F^{(1)}$ is a direct corollary of Theorem 1 in~\cite{yin2018byzantine}. \end{proof} Our second example shows that escaping saddle points is much harder in the Byzantine setting than in the non-Byzantine setting. 
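Before turning to the second example, we note that the adversarial strategy in the proof above can be simulated directly. In the Python sketch below (the step size and iteration count are arbitrary choices of ours), exact GD started at $w_0 = 0.1$ reaches the global minimizer $w^* = 1$ of $F^{(2)}$, while the $\Delta$-inexact run stays frozen near the saddle point and suffers the $\Omega(1)$ gap.

```python
import numpy as np

F2 = lambda w: (w**2 - 1.0) ** 2 / 4.0   # minima at w = +/-1, local max at 0
grad_F2 = lambda w: w**3 - w

def gd_with_adversary(w, delta, eta, steps):
    """GD where the oracle answers 0 whenever |grad| <= delta; this is a valid
    Delta-inexact reply, and it freezes any iterate with small true gradient."""
    for _ in range(steps):
        g = grad_F2(w)
        g_hat = 0.0 if abs(g) <= delta else g
        w = w - eta * g_hat
    return w
```

With $\Delta = 0.5$ the iterate never moves from $w_0 = 0.1$ (its true gradient has magnitude $0.099 \le \Delta$), so the sub-optimality is $F^{(2)}(0.1) \approx 0.245 \ge 9/64$; with $\Delta = 0$ the same loop converges to $1$.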
\paragraph{Example 2.} Let $d=2$, and assume that in the neighborhood $\mathbb{B}_0(b)$ of the origin, $F$ takes the quadratic form $F(\mathbf{w}) \equiv \frac{1}{2}w_1^2 - \frac{\lambda}{2} w_2^2$, with $\lambda > \epsilon_H$.\footnote{$F(\mathbf{w}) \equiv \frac{1}{2}w_1^2 - \frac{\lambda}{2} w_2^2$ holds locally around the origin, not globally; otherwise $F(\mathbf{w})$ has no minimum.} The origin $\mathbf{w}_0 = 0$ is not an $(\epsilon_g, \epsilon_H)$-second-order stationary point, but rather a saddle point. Proposition~\ref{ppn:bad_example} below shows that exact GD escapes the saddle point almost surely, while GD with an inexact oracle can fail to do so. \begin{proposition}\label{ppn:bad_example} Under the setting above, if one chooses $r < b$ and samples $\mathbf{w}$ from $\mathbb{B}_0(r)$ uniformly at random, then: \\ (i) Using exact gradient descent, with probability $1$, the iterate $\mathbf{w}$ eventually leaves $\mathbb{B}_0(r)$.\\ (ii) There exists an adversarial strategy such that, when we update $\mathbf{w}$ using a $\Delta$-inexact gradient oracle, if $\Delta \ge \lambda r$, with probability $1$, the iterate $\mathbf{w}$ cannot leave $\mathbb{B}_0(r)$; otherwise with probability $\frac{2}{\pi}\Big( \arcsin \big(\frac{\Delta}{\lambda r}\big) + \frac{\Delta}{\lambda r}\sqrt{1 - (\frac{\Delta}{\lambda r})^2} \Big)$ the iterate $\mathbf{w}$ cannot leave $\mathbb{B}_0(r)$. \end{proposition} \begin{proof} Since $F(\mathbf{w}) = \frac{1}{2}w_1^2 - \frac{1}{2}\lambda w_2^2$, $\forall~\mathbf{w}\in\mathbb{B}_0(r)$, we have $\nabla F(\mathbf{w})=[w_1,~-\lambda w_2]^\top$. Sample $\mathbf{w}_0$ uniformly at random from $\mathbb{B}_0(r)$, and we know that with probability $1$, $w_{0,2} \neq 0$. Then, by running exact gradient descent $\mathbf{w}_{t+1} = \mathbf{w}_t - \eta \nabla F(\mathbf{w}_t) $, we can see that the second coordinate of $\mathbf{w}_t$ is $w_{t,2} = (1+\eta \lambda)^t w_{0,2}$.
When $w_{0,2} \neq 0$, we know that as $t$ gets large, we eventually have $|w_{t,2}|>r$, which implies that the iterate leaves $\mathbb{B}_0(r)$. On the other hand, suppose that we run $\Delta$-inexact gradient descent, i.e., $\mathbf{w}_{t+1} = \mathbf{w}_t - \eta \widehat{\mathbf{g}}(\mathbf{w}_t)$ with $\twonms{\widehat{\mathbf{g}}(\mathbf{w}_t) - \nabla F(\mathbf{w}_t)} \le \Delta$. In the first step, if $| w_{0,2} | \le \frac{\Delta}{\lambda}$, the adversary can simply replace $\nabla F(\mathbf{w}_0)$ with $\widehat{\mathbf{g}}(\mathbf{w}_0) = [w_{0,1},~0]^\top$ (one can check that here we have $\twonms{\widehat{\mathbf{g}}(\mathbf{w}_0) - \nabla F(\mathbf{w}_0)} \le \Delta$), and then the second coordinate of $\mathbf{w}_1$ does not change, i.e., $w_{1,2}=w_{0,2}$. In the following iterations, the adversary can keep using the same strategy and the second coordinate of $\mathbf{w}$ never changes, and then the iterates cannot escape $\mathbb{B}_0(r)$, since $F(\mathbf{w})$ is a strongly convex function in its first coordinate. To compute the probability of getting stuck at the saddle point, we only need to compute the area of the region $\{\mathbf{w} \in \mathbb{B}_0(r) : |w_2| \le \frac{\Delta}{\lambda}\}$, which can be done via simple geometry. \end{proof} \paragraph*{Remark.} Even if we choose the largest possible perturbation in $\mathbb{B}_0(r)$, i.e., sample $\mathbf{w}$ from the circle $\{\mathbf{w}\in\mathbb{R}^2 : \twonms{\mathbf{w}} = r\}$, the stuck region still exists. We can compute the length of the arc $\{ \twonms{\mathbf{w}} = r : |w_2| \le \frac{\Delta}{\lambda}\}$ and find the probability of being stuck. One can find that when $\Delta \ge \lambda r$, the probability of being stuck in $\mathbb{B}_0(r)$ is still $1$; otherwise, the probability of being stuck is $\frac{2}{\pi} \arcsin\big(\frac{\Delta}{\lambda r}\big)$. The above examples show that the adversary can significantly alter the landscape of the function near a saddle point.
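For concreteness, the Example-2 dynamics can also be simulated; in the Python sketch below (the numerical parameter values are our own), the adversary zeroes the second gradient coordinate whenever $|w_2| \le \Delta/\lambda$, so the inexact run never leaves $\mathbb{B}_0(r)$, while exact GD escapes quickly along the negative-curvature direction.

```python
import numpy as np

lam = 0.5
grad = lambda w: np.array([w[0], -lam * w[1]])  # grad of w1^2/2 - lam*w2^2/2

def run_gd(w, delta, eta, steps):
    """GD with the Example-2 adversary: when |w2| <= delta/lam, report
    [w1, 0] instead of the true gradient. The replacement has l2 error
    lam*|w2| <= delta, so it is a valid Delta-inexact answer."""
    for _ in range(steps):
        g = grad(w)
        if delta > 0 and abs(w[1]) <= delta / lam:
            g = np.array([w[0], 0.0])           # erase the escape direction
        w = w - eta * g
    return w
```

Starting from $\mathbf{w}_0 = (0.05, 0.02)$ inside $\mathbb{B}_0(0.1)$ with $\Delta = \lambda r = 0.05$, the second coordinate is frozen forever; with $\Delta = 0$, it grows by a factor $1 + \eta\lambda$ per step and leaves the ball.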
We counter this by exerting a large perturbation on the iterate so that it escapes this bad region. The amount of perturbation is carefully calibrated to ensure that the algorithm finds a descent direction ``steep'' enough to be preserved under $\Delta$-corruption, while not compromising the accuracy. Multiple rounds of perturbation are performed, boosting the escape probability exponentially. \section{Proof of Theorem~\ref{thm:main}}\label{prf:main} We first analyze the gradient descent step with a $\Delta$-inexact gradient oracle. \begin{lemma}\label{lem:each_iter} Suppose that $\eta = 1/ L_F$. For any $\mathbf{w}\in\mathcal{W}$, suppose that we run the following inexact gradient descent step: \begin{equation}\label{eq:iter_w} \mathbf{w}' = \mathbf{w} - \eta\widehat{\mathbf{g}}(\mathbf{w}), \end{equation} with $\twonms{\widehat{\mathbf{g}}(\mathbf{w}) - \nabla F(\mathbf{w})} \le \Delta$. Then, we have \[ F(\mathbf{w}') \le F(\mathbf{w}) - \frac{1}{2L_F}\twonms{\nabla F(\mathbf{w})}^2 + \frac{1}{2L_F}\Delta^2. \] \end{lemma} \begin{proof} Since $F(\mathbf{w})$ is $L_F$-smooth, we know that \begin{align*} F(\mathbf{w}') \le & F(\mathbf{w}) + \innerps{\nabla F(\mathbf{w})}{ \mathbf{w}' - \mathbf{w}} + \frac{L_F}{2}\twonms{\mathbf{w}' - \mathbf{w}}^2 \\ = & F(\mathbf{w}) - \innerps{\nabla F(\mathbf{w})}{\frac{1}{L_F} (\widehat{\mathbf{g}}(\mathbf{w}) - \nabla F(\mathbf{w})) } - \innerps{\nabla F(\mathbf{w})}{\frac{1}{L_F}\nabla F(\mathbf{w})} \\ &+ \frac{1}{2L_F}\twonms{\widehat{\mathbf{g}}(\mathbf{w}) - \nabla F(\mathbf{w}) + \nabla F(\mathbf{w})}^2 \\ \le & F(\mathbf{w}) - \frac{1}{2L_F}\twonms{\nabla F(\mathbf{w})}^2 + \frac{1}{2L_F}\Delta^2. \end{align*} \end{proof} Let $\epsilon$ be the threshold on $\twonms{\widehat{\mathbf{g}}(\widetilde{\mathbf{w}})}$ that the algorithm uses to determine whether or not to add perturbation. Choose $\epsilon = 3\Delta$.
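Lemma~\ref{lem:each_iter} can be sanity-checked numerically; the Python sketch below (a random quadratic test function, our own choice) verifies the claimed descent inequality for one inexact step with a $\Delta$-bounded gradient error.

```python
import numpy as np

def descent_holds(A, w, e, delta):
    """Check F(w') <= F(w) - ||grad F(w)||^2/(2 L_F) + Delta^2/(2 L_F) for one
    inexact step w' = w - (grad F(w) + e)/L_F on F(w) = w^T A w / 2, where the
    error e has norm at most delta and L_F is the largest eigenvalue of A."""
    L = np.linalg.eigvalsh(A).max()
    F = lambda v: 0.5 * v @ A @ v
    g = A @ w
    w_next = w - (g + e) / L
    return F(w_next) <= F(w) - g @ g / (2 * L) + delta**2 / (2 * L) + 1e-9
```

For a positive semidefinite $A$ the quadratic is exactly $L_F$-smooth, so the inequality holds for every admissible error vector, matching the lemma.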
Suppose that at a particular iterate $\widetilde{\mathbf{w}}$, we observe $\twonms{\widehat{\mathbf{g}}(\widetilde{\mathbf{w}})} > \epsilon$. Then, we know that \[ \twonms{\nabla F(\widetilde{\mathbf{w}})} \ge \twonms{\widehat{\mathbf{g}}(\widetilde{\mathbf{w}})} - \Delta \ge 2\Delta. \] According to Lemma~\ref{lem:each_iter}, by running one iteration of the inexact gradient descent step, the decrease in function value is at least \begin{equation}\label{eq:func_decrease_no_perturb2} \frac{1}{2L_F}\twonms{\nabla F(\widetilde{\mathbf{w}})}^2 - \frac{1}{2L_F}\Delta^2 \ge \frac{3\Delta^2}{2L_F}. \end{equation} We proceed to analyze the perturbation step, which happens when the algorithm arrives at an iterate $\widetilde{\mathbf{w}}$ with $\twonms{\widehat{\mathbf{g}}(\widetilde{\mathbf{w}})} \le \epsilon$. In this proof, we slightly abuse notation. Recall that in equation~\eqref{eq:perturbed_iterates} in Section~\ref{sec:algorithm}, we use $\mathbf{w}_t^\prime$ $(0\le t \le T_{\mathrm{th}})$ to denote the iterates of the algorithm in the saddle point escaping process. Here, we simply use $\mathbf{w}_t$ to denote these iterates. We start with the definition of the \emph{stuck region} at $\widetilde{\mathbf{w}}\in\mathcal{W}$.
\begin{definition}\label{def:stuck_region} Given $\widetilde{\mathbf{w}}\in\mathcal{W}$, and parameters $r$, $R$, and $T_{\mathrm{th}}$, the stuck region $\mathbb{W}_S(\widetilde{\mathbf{w}}, r, R, T_{\mathrm{th}}) \subseteq \mathbb{B}_{\widetilde{\mathbf{w}}}(r)$ is a set of $\mathbf{w}_0 \in \mathbb{B}_{\widetilde{\mathbf{w}}}(r) $ which satisfies the following property: there exists an adversarial strategy such that when we start with $\mathbf{w}_0$ and run $T_{\mathrm{th}}$ gradient descent steps with $\Delta$-inexact gradient oracle $\widehat{\mathbf{g}}(\mathbf{w})$: \begin{equation}\label{eq:stuck_gd} \mathbf{w}_t = \mathbf{w}_{t-1} - \eta \widehat{\mathbf{g}}(\mathbf{w}_{t-1}),~ t = 1,2,\ldots, T_{\mathrm{th}}, \end{equation} we observe $\twonms{ \mathbf{w}_t - \mathbf{w}_0 } < R$, $\forall~t \le T_{\mathrm{th}}$. \end{definition} When it is clear from the context, we may simply use the terminology stuck region $\mathbb{W}_S$ at $\widetilde{\mathbf{w}}$. The following lemma shows that if $\nabla^2 F(\widetilde{\mathbf{w}})$ has a large negative eigenvalue, then the stuck region has a small width along the direction of the eigenvector associated with this negative eigenvalue. \begin{lemma}\label{lem:moving_dist} Assume that the smallest eigenvalue of $\mathbf{H} := \nabla^2F(\widetilde{\mathbf{w}})$ satisfies $\lambda_{\min}(\mathbf{H}) \le -\gamma < 0$, and let the unit vector $\mathbf{e}$ be the eigenvector associated with $\lambda_{\min}(\mathbf{H})$. Let $\mathbf{u}_0, \mathbf{y}_0 \in \mathbb{B}_{\widetilde{\mathbf{w}}}(r)$ be two points such that $\mathbf{y}_0 = \mathbf{u}_0 + \mu_0\mathbf{e}$ with some $\mu_0 \ge \mu \in (0, r)$. Choose step size $\eta = \frac{1}{L_F}$, and consider the stuck region $\mathbb{W}_S(\widetilde{\mathbf{w}}, r, R, T_{\mathrm{th}})$. 
Suppose that $r$, $R$, $T_{\mathrm{th}}$, and $\mu$ satisfy \footnote{Without loss of generality, here we assume that $\frac{2}{\eta\gamma} \log_{9/4}(\frac{2(R + r)}{\mu})$ is an integer, so that $T_{\mathrm{th}}$ is an integer.} \begin{align} & T_{\mathrm{th}} = \frac{2}{\eta\gamma} \log_{9/4}(\frac{2(R + r)}{\mu}), \label{eq:Tth_def} \\ & R \ge \mu, \label{eq:R_mu} \\ & \rho_F(R + r)\mu \ge \Delta, \label{eq:Delta_R} \\ & \gamma \ge 24 \rho_F(R+r) \log_{9/4} (\frac{2(R + r)}{\mu}). \label{eq:gamma_R_log} \end{align} Then, either $\mathbf{u}_0 \notin \mathbb{W}_S$ or $\mathbf{y}_0 \notin \mathbb{W}_S$. \end{lemma} We prove Lemma~\ref{lem:moving_dist} in Appendix~\ref{prf:moving_dist}. With this lemma, we proceed to analyze the probability that the algorithm escapes saddle points. In particular, we bound the probability that $\mathbf{w}_0 \in \mathbb{W}_S(\widetilde{\mathbf{w}}, r, R, T_{\mathrm{th}})$ when $\lambda_{\min}(\nabla^2F(\widetilde{\mathbf{w}})) \le -\gamma$ and $\mathbf{w}_0$ is drawn from $\mathbb{B}_{\widetilde{\mathbf{w}}}(r)$ uniformly at random. \begin{lemma}\label{lem:vol_stuck_region} Assume that $\lambda_{\min}(\nabla^2F(\widetilde{\mathbf{w}})) \le -\gamma < 0$, and let the unit vector $\mathbf{e}$ be the eigenvector associated with $\lambda_{\min}(\nabla^2F(\widetilde{\mathbf{w}}))$. Consider the stuck region $\mathbb{W}_S(\widetilde{\mathbf{w}}, r, R, T_{\mathrm{th}})$ at $\widetilde{\mathbf{w}}$, and suppose that $r$, $R$, $T_{\mathrm{th}}$, and $\mu$ satisfy the conditions in~\eqref{eq:Tth_def}-\eqref{eq:gamma_R_log}. Then, when we sample $\mathbf{w}_0$ from $\mathbb{B}_{\widetilde{\mathbf{w}}}(r)$ uniformly at random, the probability that $\mathbf{w}_0 \in \mathbb{W}_S(\widetilde{\mathbf{w}}, r, R, T_{\mathrm{th}})$ is at most $\frac{2\mu\sqrt{d}}{r}$.
\end{lemma} \begin{proof} Since the starting point $\mathbf{w}_0$ is uniformly distributed in $\mathbb{B}_{\widetilde{\mathbf{w}}}(r)$, to bound the probability of getting stuck, it suffices to bound the volume of $\mathbb{W}_S$. Let $\mathds{1}_{\mathbb{W}_S}(\mathbf{w})$ be the indicator function of the set $\mathbb{W}_S$. For any $\mathbf{w}\in\mathbb{R}^d$, let $w^{(1)}$ be the projection of $\mathbf{w}$ onto the $\mathbf{e}$ direction, and $\mathbf{w}^{(-1)}\in\mathbb{R}^{d-1}$ be the remaining component of $\mathbf{w}$. Then, we have \begin{align*} \mathrm{Vol}(\mathbb{W}_S) = & \int_{ \mathbb{B}_{\widetilde{\mathbf{w}}}^{(d)}(r) } \mathds{1}_{\mathbb{W}_S}(\mathbf{w}) \mathrm{d} \mathbf{w} \\ = & \int_{ \mathbb{B}_{\widetilde{\mathbf{w}}}^{(d-1)}(r) } \mathrm{d} \mathbf{w}^{(-1)} \int_{\widetilde{w}^{(1)} - \sqrt{r^2 - \twonms{\widetilde{\mathbf{w}}^{(-1)} - \mathbf{w}^{(-1)}}^2 }}^{\widetilde{w}^{(1)} + \sqrt{r^2 - \twonms{\widetilde{\mathbf{w}}^{(-1)} - \mathbf{w}^{(-1)}}^2 }} \mathds{1}_{\mathbb{W}_S}(\mathbf{w}) \mathrm{d} w^{(1)} \\ \le & 2\mu \int_{ \mathbb{B}_{\widetilde{\mathbf{w}}}^{(d-1)}(r) } \mathrm{d} \mathbf{w}^{(-1)} \\ = & 2\mu \mathrm{Vol}(\mathbb{B}_0^{(d-1)}(r)), \end{align*} where the inequality is due to Lemma~\ref{lem:moving_dist}. Then, we know that the probability of getting stuck is \begin{align*} \frac{ \mathrm{Vol}(\mathbb{W}_S) }{ \mathrm{Vol}(\mathbb{B}_0^{(d)}(r)) } \le & 2\mu \frac{\mathrm{Vol}(\mathbb{B}_0^{(d-1)}(r))}{\mathrm{Vol}(\mathbb{B}_0^{(d)}(r))} = \frac{2\mu}{\sqrt{\pi} r} \frac{\Gamma(\frac{d}{2} + 1)}{\Gamma(\frac{d}{2} + \frac{1}{2})} \le \frac{2\mu}{\sqrt{\pi} r} \sqrt{\frac{d}{2} + \frac{1}{2}} \le \frac{2\mu\sqrt{d}}{r}, \end{align*} where we use the fact that $\frac{\Gamma(x+1)}{\Gamma(x+\frac{1}{2})} < \sqrt{x + \frac{1}{2}}$ for any $x\ge 0$. \end{proof} We then analyze the decrease in the value of the population loss function $F(\cdot)$ during the perturbation step.
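Incidentally, the Gamma-function inequality invoked at the end of the preceding proof is easy to verify numerically; a small Python check over a grid of test points of our choosing (for instance $x = d/2$ up to moderate dimensions, staying below the overflow range of `math.gamma`):

```python
import math

def gamma_ratio_ok(x):
    """Verify Gamma(x+1)/Gamma(x+1/2) < sqrt(x + 1/2), the fact used to bound
    the ratio of ball volumes Vol(B^(d-1))/Vol(B^(d)) in the proof."""
    return math.gamma(x + 1.0) / math.gamma(x + 0.5) < math.sqrt(x + 0.5)
```

The ratio behaves like $\sqrt{x}$ for large $x$, so the bound $\sqrt{x + 1/2}$ is tight up to lower-order terms.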
Assume that we successfully escape the saddle point, i.e., there exists $t \le T_{\mathrm{th}}$ such that $\twonms{\mathbf{w}_t - \mathbf{w}_0} \ge R$. The following lemma provides the decrease of $F(\cdot)$. \begin{lemma}\label{lem:func_value_decay} Suppose that $\lambda_{\min}(\nabla^2F(\widetilde{\mathbf{w}})) \le -\gamma < 0$, and at $\widetilde{\mathbf{w}}$, we observe $\twonms{\widehat{\mathbf{g}}(\widetilde{\mathbf{w}})} \le \epsilon=3\Delta$. Assume that $\mathbf{w}_0 \in \mathbb{B}_{\widetilde{\mathbf{w}}}(r)$ and that $\mathbf{w}_0 \notin \mathbb{W}_S(\widetilde{\mathbf{w}}, r, R, T_{\mathrm{th}})$. Let $t \le T_{\mathrm{th}}$ be the step such that $\twonms{\mathbf{w}_t - \mathbf{w}_0} \ge R$. Then, we have \begin{equation}\label{eq:func_decay_main} F(\widetilde{\mathbf{w}}) - F(\mathbf{w}_t) \ge \frac{L_F}{4T_{\mathrm{th}}} R^2 - \frac{ \Delta^2 T_{\mathrm{th}}}{L_F} -4\Delta r - \frac{L_F}{2}r^2. \end{equation} \end{lemma} We prove Lemma~\ref{lem:func_value_decay} in Appendix~\ref{prf:func_value_decay}. Next, we choose the quantities $\mu$, $r$, $R$, and $\gamma$ such that (i) the conditions~\eqref{eq:Tth_def}-\eqref{eq:gamma_R_log} in Lemma~\ref{lem:moving_dist} are satisfied, (ii) the probability of escaping the saddle point in Lemma~\ref{lem:vol_stuck_region} is at least a constant, and (iii) the decrease in function value in~\eqref{eq:func_decay_main} is large enough. We first choose \begin{align} \mu & = \Delta^{3/5}d^{-1/5}\rho_F^{-1/2}, \label{eq:choice_mu} \\ r & = 4 \Delta^{3/5} d^{3/10}\rho_F^{-1/2}, \label{eq:choice_r} \\ R &= \Delta^{2/5} d^{1/5} \rho_F^{-1/2}. \label{eq:choice_bigR} \end{align} One can check that, according to Lemma~\ref{lem:vol_stuck_region}, when we draw $\mathbf{w}_0$ from $\mathbb{B}_{\widetilde{\mathbf{w}}}(r)$ uniformly at random, the probability that $\mathbf{w}_0 \in \mathbb{W}_S(\widetilde{\mathbf{w}}, r, R, T_{\mathrm{th}})$ is at most $1/2$.
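The identities behind these choices can be sanity-checked numerically; the sketch below (with arbitrary sample values of $\Delta$, $d$, and $\rho_F$) verifies that $\rho_F R \mu = \Delta$, that $R \ge \mu$ whenever $\Delta \le 1$, and that the stuck-probability bound $\frac{2\mu\sqrt{d}}{r}$ equals exactly $1/2$:

```python
# Sanity check of the choices (choice_mu)-(choice_bigR): rho_F * R * mu = Delta,
# R >= mu when Delta <= 1, and the stuck probability bound 2*mu*sqrt(d)/r is
# exactly 1/2. The sample values below are arbitrary.
from math import isclose, sqrt

for Delta in [1e-3, 1e-2, 0.5]:
    for d in [1, 10, 1000]:
        for rho_F in [0.1, 1.0, 7.0]:
            mu = Delta ** 0.6 * d ** -0.2 * rho_F ** -0.5
            r = 4 * Delta ** 0.6 * d ** 0.3 * rho_F ** -0.5
            R = Delta ** 0.4 * d ** 0.2 * rho_F ** -0.5
            assert isclose(rho_F * R * mu, Delta)      # condition (Delta_R), with equality
            assert R >= mu                             # condition (R_mu), since Delta <= 1
            assert isclose(2 * mu * sqrt(d) / r, 0.5)  # stuck probability is exactly 1/2
```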
Since we assume that $\Delta \le 1$, one can also check that the condition~\eqref{eq:R_mu} is satisfied. In addition, since $\rho_F R \mu = \Delta$, the condition~\eqref{eq:Delta_R} is also satisfied. According to~\eqref{eq:Tth_def}, we have \begin{equation}\label{eq:Tth_def2} T_{\mathrm{th}} = \frac{2L_F}{\gamma} \log_{9/4}(\frac{2d^{2/5}}{\Delta^{1/5}} + 8d^{1/2}). \end{equation} In the following, we choose \begin{equation}\label{eq:choice_gamma} \gamma = 768 (\rho_F^{1/2} + L_F) (\Delta^{2/5}d^{1/5} + \Delta^{3/5}d^{3/10}) \log_{9/4}(\frac{2d^{2/5}}{\Delta^{1/5}} + 8d^{1/2}), \end{equation} which implies \begin{equation}\label{eq:Tth_def3} T_{\mathrm{th}} = \frac{L_F}{384 (\rho_F^{1/2} + L_F) (\Delta^{2/5}d^{1/5} + \Delta^{3/5}d^{3/10}) }. \end{equation} Then we check that condition~\eqref{eq:gamma_R_log} holds. We have \[ 24 \rho_F(R+r) \log_{9/4} (\frac{2(R + r)}{\mu}) = 24 \rho_F^{1/2} (\Delta^{2/5} d^{1/5} + 4\Delta^{3/5}d^{3/10}) \log_{9/4} (\frac{2d^{2/5}}{\Delta^{1/5}} + 8d^{1/2}) \le \gamma. \] Next, we consider the decrease in function value in~\eqref{eq:func_decay_main}. Using the equations~\eqref{eq:Tth_def2} and~\eqref{eq:choice_gamma}, we can show the following three inequalities by direct algebraic manipulation. \begin{align} &\frac{L_F}{4T_{\mathrm{th}}} R^2 \ge 6 \frac{\Delta^2 T_{\mathrm{th}}}{L_F}, \label{eq:func_decay_1} \\ &\frac{L_F}{4T_{\mathrm{th}}} R^2 \ge 24 \Delta r, \label{eq:func_decay_2} \\ &\frac{L_F}{4T_{\mathrm{th}}} R^2 \ge 3L_Fr^2. \label{eq:func_decay_3}
\end{align} By adding up~\eqref{eq:func_decay_1},~\eqref{eq:func_decay_2}, and~\eqref{eq:func_decay_3}, we obtain \[ \frac{L_F}{4T_{\mathrm{th}}} R^2 \ge 2 \frac{\Delta^2 T_{\mathrm{th}}}{L_F} + 8 \Delta r + L_Fr^2, \] which implies that when we successfully escape the saddle point, we have \begin{equation}\label{eq:func_decay_4} F(\widetilde{\mathbf{w}}) - F(\mathbf{w}_t) \ge \frac{L_F}{8T_{\mathrm{th}}} R^2 = 48(\rho_F^{-1/2} + L_F\rho_F^{-1}) (\Delta^{6/5}d^{3/5} + \Delta^{7/5}d^{7/10}). \end{equation} Then, one can check that the average decrease in function value during the successful round of the $\mathsf{Escape}$ process is \begin{equation}\label{eq:func_decay_ave} \frac{F(\widetilde{\mathbf{w}}) - F(\mathbf{w}_t)}{ t } \ge \frac{F(\widetilde{\mathbf{w}}) - F(\mathbf{w}_t)}{ T_{\mathrm{th}} } \ge \frac{2 ( \Delta^{8/5}d^{4/5} + \Delta^2 d)}{L_F} > \frac{3\Delta^2}{2L_F}. \end{equation} Recall that according to~\eqref{eq:func_decrease_no_perturb2}, when the algorithm is not in the $\mathsf{Escape}$ process, the function value is decreased by at least $\frac{3\Delta^2}{2L_F}$ in each iteration. Therefore, if the algorithm successfully escapes the saddle point, during the $\mathsf{Escape}$ process, the average decrease in function value is \emph{larger} than that of the iterations which are not in this process. So far, we have chosen the algorithm parameters $r$, $R$, $T_{\mathrm{th}}$, as well as the final second-order convergence guarantee $\gamma$. Now we proceed to analyze the total number of iterations and the failure probability of the algorithm. According to Lemma~\ref{lem:vol_stuck_region} and the choice of $\mu$ and $r$, we know that at each point with $\twonms{\widehat{\mathbf{g}}(\widetilde{\mathbf{w}})} \le \epsilon$, the algorithm can successfully escape this saddle point with probability at least $1/2$. To boost the probability of escaping saddle points, we need to repeat the process for $Q$ independent rounds in $\mathsf{Escape}$.
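For intuition, the $Q$-round perturb-and-descent procedure can be sketched in code; this is our schematic illustration on a toy two-dimensional saddle with an inexact gradient oracle, not the paper's verbatim pseudocode, and the step size, radii, and round count below are arbitrary:

```python
# Schematic sketch (our illustration) of the Q-round Escape process: perturb
# within the ball B(w~, r), run inexact gradient descent for at most T_th
# steps, and declare success once ||w_t - w_0|| >= R.
import random
from math import sqrt

def dist(a, b):
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def escape(g_hat, w_tilde, r, R, T_th, Q, eta, rng):
    for _ in range(Q):                                  # Q independent rounds
        while True:                                     # rejection-sample w_0 from B(w~, r)
            u = [rng.uniform(-r, r) for _ in w_tilde]
            if sqrt(sum(x * x for x in u)) <= r:
                break
        w0 = [a + b for a, b in zip(w_tilde, u)]
        w = list(w0)
        for _ in range(T_th):                           # inexact gradient descent
            w = [a - eta * g for a, g in zip(w, g_hat(w))]
            if dist(w, w0) >= R:
                return w                                # moved at least R: escaped
    return None                                         # all Q rounds got stuck

# Toy saddle F(x, y) = (x^2 - y^2) / 2, with a Delta-inexact gradient oracle
# (a gradient error of norm 0.5 * Delta in the first coordinate).
Delta = 1e-3
g_hat = lambda w: (w[0] + 0.5 * Delta, -w[1])
w_out = escape(g_hat, [0.0, 0.0], r=0.01, R=1.0, T_th=100, Q=4,
               eta=0.5, rng=random.Random(0))
assert w_out is not None                                # escaped the saddle at the origin
```

The random perturbation gives the iterate a component along the negative-curvature direction ($y$ here), which then grows geometrically under gradient descent, exactly the mechanism the induction argument in Lemma~\ref{lem:moving_dist} formalizes.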
For each successful round, the function value decrease is at least \[ 48(\rho_F^{-1/2} + L_F\rho_F^{-1}) (\Delta^{6/5}d^{3/5} + \Delta^{7/5}d^{7/10}) \ge 48 L_F\rho_F^{-1} (\Delta^{6/5}d^{3/5} + \Delta^{7/5}d^{7/10}), \] and the function value can decrease by at most $F_0 - F^*$. Therefore, the total number of saddle points that we need to escape is at most \begin{equation}\label{eq:total_saddle} \frac{\rho_F (F_0 - F^*) }{48 L_F (\Delta^{6/5}d^{3/5} + \Delta^{7/5}d^{7/10}) }. \end{equation} Therefore, by the union bound, the failure probability of the algorithm is at most \[ \frac{\rho_F (F_0 - F^*) }{48 L_F (\Delta^{6/5}d^{3/5} + \Delta^{7/5}d^{7/10}) } (\frac{1}{2})^Q, \] and to make the failure probability at most $\delta$, one can choose \begin{equation}\label{eq:choice_Q} Q \ge 2 \log\left( \frac{ \rho_F( F_0 - F^*) }{48 L_F \delta (\Delta^{6/5}d^{3/5} + \Delta^{7/5}d^{7/10}) } \right). \end{equation} Again, the total function value decrease is at most $F_0 - F^*$, and in each \emph{effective} iteration, the function value decreases by at least $\frac{3\Delta^2}{2L_F}$. (Here, the effective iterations are the iterations when the algorithm is not in the $\mathsf{Escape}$ process and the iterations when the algorithm successfully escapes the saddle points.) Therefore, the total number of effective iterations is at most \begin{equation}\label{eq:total_effective_iter} \frac{2(F_0 - F^*)L_F}{3\Delta^2}. \end{equation} Combining with~\eqref{eq:choice_Q}, we know that the total number of parallel iterations is at most \[ \frac{4(F_0 - F^*)L_F}{3\Delta^2} \log\left( \frac{\rho_F (F_0 - F^*) }{48 L_F \delta (\Delta^{6/5}d^{3/5} + \Delta^{7/5}d^{7/10}) } \right).
\] When the algorithm terminates and the saddle-point escaping processes are all successful, the output of the algorithm $\widetilde{\mathbf{w}}$ satisfies $\twonms{\widehat{\mathbf{g}}(\widetilde{\mathbf{w}})} \le \epsilon$, which implies that $\twonms{\nabla F(\widetilde{\mathbf{w}})} \le 4\Delta$, and \begin{equation}\label{eq:lambda_min_init} \begin{aligned} \lambda_{\min}(\nabla^2 F(\widetilde{\mathbf{w}})) & \ge -\gamma = -768 (\rho_F^{1/2} + L_F) (\Delta^{2/5}d^{1/5} + \Delta^{3/5}d^{3/10}) \log_{9/4}(\frac{2d^{2/5}}{\Delta^{1/5}} + 8d^{1/2}) \\ &\ge -950 (\rho_F^{1/2} + L_F) (\Delta^{2/5}d^{1/5} + \Delta^{3/5}d^{3/10}) \log(\frac{2d^{2/5}}{\Delta^{1/5}} + 8d^{1/2}). \end{aligned} \end{equation} We next show that we can simplify the guarantee as \begin{equation}\label{eq:lambda_min_final} \lambda_{\min}(\nabla^2 F(\widetilde{\mathbf{w}})) \ge -1900 (\rho_F^{1/2} + L_F) \Delta^{2/5}d^{1/5} \log(\frac{10}{\Delta}). \end{equation} We can see that if $\Delta \le \frac{1}{\sqrt{d}}$, then $ \Delta^{3/5}d^{3/10} \le \Delta^{2/5}d^{1/5}$ and $\frac{2d^{2/5}}{\Delta^{1/5}} + 8d^{1/2} \le \frac{10}{\Delta}$. Thus, the bound~\eqref{eq:lambda_min_final} holds. On the other hand, when $\Delta > \frac{1}{\sqrt{d}}$, we have $\Delta^{2/5}d^{1/5} > 1$ and thus \[ 1900 (\rho_F^{1/2} + L_F) \Delta^{2/5}d^{1/5} \log(\frac{10}{\Delta}) > L_F. \] By the smoothness of $F(\cdot)$, we know that $\lambda_{\min}(\nabla^2 F(\widetilde{\mathbf{w}})) \ge -L_F$. Therefore, the bound~\eqref{eq:lambda_min_final} still holds, and this completes the proof. \subsection{Proof of Lemma~\ref{lem:moving_dist}}\label{prf:moving_dist} We prove the lemma by contradiction. Suppose that $\mathbf{u}_0,\mathbf{y}_0 \in \mathbb{W}_S$.
Let $\{\mathbf{u}_t\}$ and $\{\mathbf{y}_t\}$ be two sequences generated by the following two iterations: \begin{align} \mathbf{u}_t &= \mathbf{u}_{t-1} - \eta \widehat{\mathbf{g}}(\mathbf{u}_{t-1}), \label{eq:seq_ut_stuck} \\ \mathbf{y}_t &= \mathbf{y}_{t-1} - \eta \widehat{\mathbf{g}}(\mathbf{y}_{t-1}), \label{eq:seq_yt_stuck} \end{align} respectively, where $\twonms{\widehat{\mathbf{g}}(\mathbf{w}) - \nabla F(\mathbf{w})} \le \Delta$ for any $\mathbf{w} \in \mathcal{W}$. According to our assumption, we have $\forall~t \le T_{\mathrm{th}}$, $\twonms{\mathbf{u}_t - \mathbf{u}_0} < R$ and $\twonms{\mathbf{y}_t - \mathbf{y}_0} < R$. Define $\mathbf{v}_t := \mathbf{y}_t - \mathbf{u}_t$, $\boldsymbol\delta_t := \widehat{\mathbf{g}}(\mathbf{u}_t) - \nabla F(\mathbf{u}_t)$, and $\boldsymbol\delta'_t := \widehat{\mathbf{g}}(\mathbf{y}_t) - \nabla F(\mathbf{y}_t)$. Then we have \begin{align*} \mathbf{y}_{t+1} & = \mathbf{y}_t - \eta(\nabla F(\mathbf{y}_t) + \boldsymbol\delta'_t) \\ & = \mathbf{u}_t + \mathbf{v}_t - \eta(\nabla F(\mathbf{u}_t + \mathbf{v}_t) + \boldsymbol\delta'_t) \\ & = \mathbf{u}_t + \mathbf{v}_t - \eta\nabla F(\mathbf{u}_t) - \eta \left[ \int_0^1\nabla^2 F(\mathbf{u}_t + \theta\mathbf{v}_t) \mathrm{d} \theta \right]\mathbf{v}_t - \eta\boldsymbol\delta'_t \\ & = \mathbf{u}_{t+1} + \eta\boldsymbol\delta_t + \mathbf{v}_t - \eta \left[ \int_0^1 \nabla^2 F(\mathbf{u}_t + \theta\mathbf{v}_t) \mathrm{d} \theta \right]\mathbf{v}_t - \eta\boldsymbol\delta'_t, \end{align*} which yields \begin{equation}\label{eq:iter_vt} \mathbf{v}_{t+1} = (\mathbf{I} - \eta\mathbf{H}) \mathbf{v}_t - \eta\mathbf{Q}_t \mathbf{v}_t + \eta(\boldsymbol\delta_t - \boldsymbol\delta'_t), \end{equation} where \begin{equation}\label{eq:def_qtprimte} \mathbf{Q}_t := \int_0^1 \nabla^2 F(\mathbf{u}_t + \theta\mathbf{v}_t) \mathrm{d} \theta - \mathbf{H}.
\end{equation} By the Hessian Lipschitz property, we know that \begin{equation}\label{eq:op_norm_qtprime2} \begin{aligned} \twonms{\mathbf{Q}_t} \le & \rho_F(\twonms{\mathbf{u}_t - \widetilde{\mathbf{w}}} + \twonms{\mathbf{y}_t - \widetilde{\mathbf{w}}} ) \\ \le & \rho_F(\twonms{\mathbf{u}_t - \mathbf{u}_0} + \twonms{\mathbf{u}_0 - \widetilde{\mathbf{w}}} + \twonms{\mathbf{y}_t - \mathbf{y}_0} + \twonms{\mathbf{y}_0 - \widetilde{\mathbf{w}}} ) \\ \le & 2\rho_F(R+r). \end{aligned} \end{equation} We let $\psi_t$ be the norm of the projection of $\mathbf{v}_t$ onto the $\mathbf{e}$ direction, and $\phi_t$ be the norm of the projection of $\mathbf{v}_t$ onto the remaining subspace. By definition, we have $\psi_0 = \mu_0 \ge \mu > 0$ and $\phi_0 = 0$. According to~\eqref{eq:iter_vt} and~\eqref{eq:op_norm_qtprime2}, we have \begin{align} \psi_{t+1} & \ge (1+\eta\gamma)\psi_t - 2 \eta \rho_F(R+r) \sqrt{\psi_t^2 + \phi_t^2} - 2\eta\Delta, \label{eq:iter_psi} \\ \phi_{t+1} & \le (1+\eta\gamma)\phi_t + 2 \eta \rho_F(R+r) \sqrt{\psi_t^2 + \phi_t^2} + 2\eta \Delta. \label{eq:iter_phi} \end{align} In the following, we use induction to prove that $\forall~t \le T_{\mathrm{th}}$, \begin{equation}\label{eq:induction_arg} \psi_t \ge (1+\frac{1}{2}\eta\gamma)\psi_{t-1} \quad \text{and} \quad \phi_t \le \frac{t}{T_{\mathrm{th}}}\psi_t. \end{equation} We know that~\eqref{eq:induction_arg} holds when $t=0$, since $\phi_0 = 0$ and the first inequality is vacuous at $t=0$. Then, assume that for some $t < T_{\mathrm{th}}$, we have $\forall~\tau \le t$, $\psi_\tau \ge (1+\frac{1}{2}\eta\gamma) \psi_{\tau-1}$ and $\phi_\tau \le \frac{\tau}{T_{\mathrm{th}}} \psi_\tau$. We show that~\eqref{eq:induction_arg} holds for $t+1$. First, we show that $\psi_{t+1} \ge (1+\frac{1}{2}\eta\gamma) \psi_{t}$. Since $\psi_\tau \ge \psi_{\tau-1}$ for all $1 \le \tau \le t$, we know that $\psi_{t} \ge \psi_0 \ge \mu$.
Therefore, according to~\eqref{eq:Delta_R}, we have \begin{equation}\label{eq:delta_up_psi} \Delta \le \rho_F(R+r) \mu \le \rho_F(R+r) \psi_t. \end{equation} In addition, since $t < T_{\mathrm{th}}$, the induction hypothesis implies \begin{equation}\label{eq:phi_psi} \phi_t \le \psi_t. \end{equation} Combining~\eqref{eq:delta_up_psi},~\eqref{eq:phi_psi} and~\eqref{eq:iter_psi},~\eqref{eq:iter_phi}, we get \begin{align} \psi_{t+1} & \ge (1+\eta\gamma)\psi_t - 2 \eta \rho_F(R+r) \sqrt{2\psi_t^2} - 2\eta \rho_F(R+r) \psi_t > (1+\eta\gamma)\psi_t - 6\eta \rho_F(R+r) \psi_t, \label{eq:iter_psi2} \\ \phi_{t+1} & \le (1+\eta\gamma)\phi_t + 2 \eta \rho_F(R+r) \sqrt{2\psi_t^2} + 2\eta \rho_F(R+r) \psi_t < (1+\eta\gamma)\phi_t + 6\eta \rho_F(R+r) \psi_t. \label{eq:iter_phi2} \end{align} According to~\eqref{eq:gamma_R_log}, we have $\gamma \ge 24 \rho_F(R+r) \log_{9/4} (\frac{2(R + r)}{\mu}) > 12\rho_F(R+r)$. Combining this with~\eqref{eq:iter_psi2}, we know that $\psi_{t+1} \ge (1+ \frac{1}{2}\eta\gamma)\psi_t $. Next, we show that $\phi_{t+1} \le \frac{t+1}{T_{\mathrm{th}}}\psi_{t+1}$. By~\eqref{eq:iter_psi2} and~\eqref{eq:iter_phi2}, it suffices to show \begin{equation}\label{eq:suffice1} (1+\eta \gamma)\phi_t + 6\eta\rho_F(R+r)\psi_t \le \frac{t+1}{T_{\mathrm{th}}} [1+\eta\gamma - 6\eta\rho_F(R+r)] \psi_t. \end{equation} According to the induction assumption, we have $\phi_{t} \le \frac{t}{T_{\mathrm{th}}}\psi_{t}$. Then, to show~\eqref{eq:suffice1}, it suffices to show that \begin{equation}\label{eq:suffice2} (1+\eta \gamma)t + 6\eta\rho_F(R+r)T_{\mathrm{th}} \le (t+1) [1+\eta\gamma - 6\eta\rho_F(R+r)]. \end{equation} Since $t+1\le T_{\mathrm{th}}$, we know that to show~\eqref{eq:suffice2}, it suffices to show \begin{equation}\label{eq:suffice3} 12 \eta\rho_F(R+r)T_{\mathrm{th}} \le 1.
\end{equation} Then, according to~\eqref{eq:Tth_def} and~\eqref{eq:gamma_R_log}, we know that~\eqref{eq:suffice3} holds, which completes the induction. Next, according to~\eqref{eq:induction_arg}, we know that \begin{align*} \twonms{\mathbf{u}_{T_{\mathrm{th}}} - \mathbf{y}_{T_{\mathrm{th}}}} &\ge \psi_{T_{\mathrm{th}}} \ge (1+\frac{1}{2} \eta \gamma)^{T_{\mathrm{th}}}\mu_0 \\ & \ge (1+\frac{1}{2} \eta \gamma)^{ \frac{2}{\eta\gamma} \log_{9/4}(\frac{2(R+r)}{\mu}) }\mu_0 \\ & \ge \frac{2(R+r)}{\mu} \cdot \mu_0 \ge 2(R+r), \end{align*} where the third inequality is due to the fact that $\eta = \frac{1}{L_F}$ and thus $\eta \gamma \le 1$, and the last inequality uses $\mu_0 \ge \mu$. On the other hand, since we assume that $\mathbf{u}_0,\mathbf{y}_0 \in \mathbb{W}_S$, we know that \[ \twonms{\mathbf{u}_{T_{\mathrm{th}}} - \mathbf{y}_{T_{\mathrm{th}}}} \le \twonms{\mathbf{u}_{T_{\mathrm{th}}} - \mathbf{u}_0 } + \twonms{\mathbf{y}_{T_{\mathrm{th}}} - \mathbf{y}_0 } + \twonms{\mathbf{u}_0 - \mathbf{y}_0} < 2(R +r), \] which leads to a contradiction and thus completes the proof. \subsection{Proof of Lemma~\ref{lem:func_value_decay}}\label{prf:func_value_decay} Recall that we have the iterations $\mathbf{w}_{\tau+1} = \mathbf{w}_\tau - \eta \widehat{\mathbf{g}}(\mathbf{w}_\tau)$ for all $\tau < t$. Let $\boldsymbol\delta_\tau = \nabla F(\mathbf{w}_{\tau}) - \widehat{\mathbf{g}}(\mathbf{w}_\tau)$, and then $\twonms{\boldsymbol\delta_\tau} \le \Delta$.
By the smoothness of $F(\cdot)$ and the fact that $\eta = \frac{1}{L_F}$, we have \begin{equation}\label{eq:smoothness_decay} \begin{aligned} F(\mathbf{w}_\tau ) - F(\mathbf{w}_{\tau+1}) \ge & \innerps{\nabla F(\mathbf{w}_\tau)}{\mathbf{w}_\tau - \mathbf{w}_{\tau+1}} - \frac{L_F}{2}\twonms{\mathbf{w}_{\tau} - \mathbf{w}_{\tau+1}}^2 \\ = & \innerp{\frac{\mathbf{w}_\tau - \mathbf{w}_{\tau+1}}{\eta} + \boldsymbol\delta_\tau}{\mathbf{w}_\tau - \mathbf{w}_{\tau+1}} - \frac{L_F}{2}\twonms{\mathbf{w}_\tau - \mathbf{w}_{\tau+1}}^2 \\ = & \frac{L_F}{2}\twonms{\mathbf{w}_\tau - \mathbf{w}_{\tau+1}}^2 + \innerps{\boldsymbol\delta_\tau}{\mathbf{w}_\tau - \mathbf{w}_{\tau+1}} \\ \ge & \frac{L_F}{4} \twonms{\mathbf{w}_\tau - \mathbf{w}_{\tau+1}}^2 - \frac{ \twonms{\boldsymbol\delta_\tau}^2 }{L_F} \\ \ge & \frac{L_F}{4} \twonms{\mathbf{w}_\tau - \mathbf{w}_{\tau+1}}^2 - \frac{ \Delta^2 }{L_F}. \end{aligned} \end{equation} By summing up~\eqref{eq:smoothness_decay} for $\tau = 0,1,\ldots, t-1$, we get \begin{equation}\label{eq:decay_w0_wt} F(\mathbf{w}_0 ) - F(\mathbf{w}_t) \ge \frac{L_F}{4} \sum_{\tau=0}^{t-1} \twonms{\mathbf{w}_\tau - \mathbf{w}_{\tau+1}}^2 - \frac{ \Delta^2 t}{L_F}. \end{equation} Considering the $k$-th coordinate of the iterates and applying the Cauchy--Schwarz inequality, we have \[ \sum_{\tau=0}^{t-1} (w_{\tau, k} - w_{\tau+1, k})^2 \ge \frac{1}{t} (w_{0,k} - w_{t,k})^2, \] which implies \begin{equation}\label{eq:cauthy_dist} \sum_{\tau=0}^{t-1} \twonms{\mathbf{w}_\tau - \mathbf{w}_{\tau+1}}^2 \ge \frac{1}{t} \twonms{\mathbf{w}_0 - \mathbf{w}_t}^2. \end{equation} Combining~\eqref{eq:decay_w0_wt} and~\eqref{eq:cauthy_dist}, we obtain \begin{equation}\label{eq:decay_w0_wt2} F(\mathbf{w}_0 ) - F(\mathbf{w}_t) \ge \frac{L_F}{4t} \twonms{\mathbf{w}_0 - \mathbf{w}_t}^2 - \frac{ \Delta^2 t}{L_F} \ge \frac{L_F}{4T_{\mathrm{th}}} R^2 - \frac{ \Delta^2 T_{\mathrm{th}}}{L_F}.
\end{equation} On the other hand, by the smoothness of $F(\cdot)$, we have \begin{equation}\label{eq:tw_w0} F(\widetilde{\mathbf{w}}) - F(\mathbf{w}_0) \ge \innerps{\nabla F(\widetilde{\mathbf{w}})}{\widetilde{\mathbf{w}} - \mathbf{w}_0} - \frac{L_F}{2}\twonms{\mathbf{w}_0 - \widetilde{\mathbf{w}}}^2 \ge -(\epsilon+\Delta)r - \frac{L_F}{2}r^2. \end{equation} Adding up~\eqref{eq:decay_w0_wt2} and~\eqref{eq:tw_w0}, we obtain \begin{equation}\label{eq:function_value_inc} F(\widetilde{\mathbf{w}}) - F(\mathbf{w}_t) \ge \frac{L_F}{4T_{\mathrm{th}}} R^2 - \frac{ \Delta^2 T_{\mathrm{th}}}{L_F} -(\epsilon+\Delta)r - \frac{L_F}{2}r^2, \end{equation} which completes the proof. \section{Proof of Theorem~\ref{thm:exact_oracle}}\label{apx:exact_oracle} First, when we run gradient descent iterations $\mathbf{w}' = \mathbf{w} - \eta\nabla F(\mathbf{w})$, according to Lemma~\ref{lem:each_iter}, we have \begin{equation}\label{eq:exact_iter} F(\mathbf{w}') \le F(\mathbf{w}) - \frac{1}{2L_F}\twonms{\nabla F(\mathbf{w})}^2. \end{equation} Suppose at $\widetilde{\mathbf{w}}$, we observe that $\twonms{\nabla F(\widetilde{\mathbf{w}})} \le \epsilon$, and then we start the $\mathsf{Escape}$ process. When we have an exact gradient oracle, we can still define the stuck region $\mathbb{W}_S$ at $\widetilde{\mathbf{w}}$ as in the definition of the stuck region in Appendix~\ref{prf:main}, by simply replacing the inexact gradient oracle with the exact oracle. Then, we can analyze the size of the stuck region according to Lemma~\ref{lem:moving_dist}. Assume that the smallest eigenvalue of $\mathbf{H} := \nabla^2F(\widetilde{\mathbf{w}})$ satisfies $\lambda_{\min}(\mathbf{H}) \le -\gamma < 0$, and let the unit vector $\mathbf{e}$ be the eigenvector associated with $\lambda_{\min}(\mathbf{H})$. Let $\mathbf{u}_0, \mathbf{y}_0 \in \mathbb{B}_{\widetilde{\mathbf{w}}}(r)$ be two points such that $\mathbf{y}_0 = \mathbf{u}_0 + \mu_0\mathbf{e}$ with some $\mu_0 \ge \mu \in (0, r)$.
Consider the stuck region $\mathbb{W}_S(\widetilde{\mathbf{w}}, r, R, T_{\mathrm{th}})$. Suppose that $r$, $R$, $T_{\mathrm{th}}$, and $\mu$ satisfy \begin{align} & T_{\mathrm{th}} = \frac{2}{\eta\gamma} \log_{9/4}(\frac{2(R + r)}{\mu}), \label{eq:exact_Tth_def} \\ & R \ge \mu, \label{eq:exact_R_mu} \\ & \gamma \ge 24 \rho_F(R+r) \log_{9/4} (\frac{2(R + r)}{\mu}). \label{eq:exact_gamma_R_log} \end{align} Then, there must be either $\mathbf{u}_0 \notin \mathbb{W}_S$ or $\mathbf{y}_0 \notin \mathbb{W}_S$. In addition, according to Lemma~\ref{lem:vol_stuck_region}, if conditions~\eqref{eq:exact_Tth_def}-\eqref{eq:exact_gamma_R_log} are satisfied, then, when we sample $\mathbf{w}_0$ from $\mathbb{B}_{\widetilde{\mathbf{w}}}(r)$ uniformly at random, the probability that $\mathbf{w}_0 \in \mathbb{W}_S(\widetilde{\mathbf{w}}, r, R, T_{\mathrm{th}})$ is at most $\frac{2\mu\sqrt{d}}{r}$. In addition, according to~\eqref{eq:function_value_inc} in the proof of Lemma~\ref{lem:func_value_decay}, assume that $\mathbf{w}_0 \in \mathbb{B}_{\widetilde{\mathbf{w}}}(r)$ and that $\mathbf{w}_0 \notin \mathbb{W}_S(\widetilde{\mathbf{w}}, r, R, T_{\mathrm{th}})$. Let $t \le T_{\mathrm{th}}$ be the step such that $\twonms{\mathbf{w}_t - \mathbf{w}_0} \ge R$. Then, we have \begin{equation}\label{eq:exact_func_decay_main} F(\widetilde{\mathbf{w}}) - F(\mathbf{w}_t) \ge \frac{L_F}{4T_{\mathrm{th}}} R^2 - \epsilon r - \frac{L_F}{2}r^2. \end{equation} Combining~\eqref{eq:exact_Tth_def} and~\eqref{eq:exact_gamma_R_log}, we know that the first term on the right hand side of~\eqref{eq:exact_func_decay_main} satisfies \begin{equation}\label{eq:exact_func_first} \frac{L_F}{4T_{\mathrm{th}}} R^2 \ge 3\rho_F R^3. \end{equation} Choose $R = \sqrt{\epsilon/\rho_F}$ and $r = \epsilon$. Then, we know that when $\epsilon \le \min\{ \frac{1}{\rho_F}, \frac{4}{L_F^2\rho_F} \}$, we have $\epsilon r \le \rho_F R^3$ and $\frac{1}{2} L_F r^2 \le \rho_FR^3$. 
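These two conditions can be verified numerically; a minimal sketch with arbitrary sample values of $L_F$ and $\rho_F$, taking $\epsilon$ slightly inside the allowed range:

```python
# Numerical check that with R = sqrt(eps/rho_F), r = eps, and
# eps <= min(1/rho_F, 4/(L_F^2 * rho_F)), both eps*r <= rho_F*R^3 and
# 0.5*L_F*r^2 <= rho_F*R^3 hold. The sample constants are arbitrary.
from math import sqrt

for L_F, rho_F in [(1.0, 1.0), (10.0, 0.5), (2.0, 8.0)]:
    eps = 0.9 * min(1.0 / rho_F, 4.0 / (L_F ** 2 * rho_F))  # strictly inside the range
    R, r = sqrt(eps / rho_F), eps
    assert eps * r <= rho_F * R ** 3
    assert 0.5 * L_F * r ** 2 <= rho_F * R ** 3
```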
Combining these facts with~\eqref{eq:exact_func_decay_main} and~\eqref{eq:exact_func_first}, we know that, when the algorithm successfully escapes the saddle point, the decrease in function value satisfies \begin{equation}\label{eq:exact_func_dec} F(\widetilde{\mathbf{w}}) - F(\mathbf{w}_t) \ge \rho_F R^3. \end{equation} Therefore, the average function value decrease during the $\mathsf{Escape}$ process is at least \begin{equation}\label{eq:exact_func_dec_ave} \frac{F(\widetilde{\mathbf{w}}) - F(\mathbf{w}_t) }{T_{\mathrm{th}}} \ge \frac{12}{L_F} \epsilon^2. \end{equation} When we have exact gradient oracle, we choose $Q=1$. According to~\eqref{eq:exact_iter} and~\eqref{eq:exact_func_dec_ave}, for the iterations that are not in the $\mathsf{Escape}$ process, the function value decrease in each iteration is at least $\frac{1}{2L_F}\epsilon^2$; for the iterations in the $\mathsf{Escape}$ process, the function value decrease on average is $\frac{12}{L_F} \epsilon^2$. Since the function value can decrease at most $F_0 - F^*$, the algorithm must terminate within $\frac{2L_F(F_0 - F^*)}{\epsilon^2}$ iterations. The we proceed to analyze the failure probability. We can see that the number of saddle points that the algorithm may need to escape is at most $\frac{F_0 - F^*}{\rho_F R^3} $. Then, by union bound the probability that the algorithm fails to escape one of the saddle points is at most \[ \frac{2\mu \sqrt{d}}{r} \cdot \frac{F_0 - F^*}{\rho_F R^3} \] By letting the above probability to be $\delta$, we obtain \[ \mu = \frac{\delta \epsilon^{5/2}}{2\sqrt{\rho_F d}(F_0 - F^*)}, \] which completes the proof. \section{Proof of Proposition~\ref{obs:second_order_lb}}\label{prf:second_order_lb} We consider the following class of one-dimensional functions indexed by $ s\in \mathbb{R} $: \[ \mathcal{F} = \{f_s(\cdot): f_s(w) = \Delta^{3/2}\sin (\Delta^{-1/2}w + s), s\in\mathbb{R}\}. 
\] Then, for each function $f_s(\cdot)\in\mathcal{F}$, we have \[ \nabla f_s(w) = \Delta \cos (\Delta^{-1/2}w + s), \] and \[ \nabla^2 f_s (w) = -\Delta^{1/2} \sin(\Delta^{-1/2}w + s). \] Thus, we always have $| \nabla f_s(w) | \le \Delta, \forall w$. Therefore, the $\Delta$-inexact gradient oracle can simply output $0$ all the time. In addition, we verify that for all $ s $ and $w $, $ |\nabla^2 f_s(w) | \le \Delta^{1/2} \le 1$ and $ |\nabla^3 f_s(w) | =| -\cos (\Delta^{-1/2}w + s) | \le 1 $ under the assumption that $ \Delta \le 1 $, so all the functions in $\mathcal{F}$ are $1$-smooth and $1$-Hessian Lipschitz as claimed. In this case, the output of the algorithm does not depend on $s$, that is, on the actual function that we aim to minimize. Consequently, for any output $\widetilde{w}$ of the algorithm, there exists $s\in\mathbb{R}$ such that $ \Delta^{-1/2}\widetilde{w} + s = \pi/4$, and thus $ | \nabla f_s(\widetilde{w}) | =\Delta /\sqrt{2} $ and $\lambda_{\min}(\nabla^2 f_s(\widetilde{w})) = -\Delta^{1/2} /\sqrt{2}$. \section{Proof of Proposition~\ref{ppn:size_w_space}}\label{prf:size_w_space} Suppose that during all the iterations, the $\mathsf{Escape}$ process is called $E+1$ times. In the first $E$ calls, the algorithm escapes the saddle points, and in the last $\mathsf{Escape}$ process, the algorithm does not escape and outputs $\widetilde{\mathbf{w}}$. For the first $E$ processes, there might be up to $Q$ rounds of perturb-and-descent operations, and we only consider the successful descent round. We can then partition the algorithm into $E+1$ segments. We denote the starting and ending iterates of the $t$-th segment by $\mathbf{w}_t$ and $\widetilde{\mathbf{w}}_t$, respectively, and denote the length (number of inexact gradient descent iterations) by $T_t$.
When the algorithm reaches $\widetilde{\mathbf{w}}_t$, we randomly perturb $\widetilde{\mathbf{w}}_t$ to $\mathbf{w}_{t+1}$, and thus we have $\twonms{\widetilde{\mathbf{w}}_t - \mathbf{w}_{t+1}} \le r$ for every $t = 0,1,\ldots, E-1$. According to~\eqref{eq:total_effective_iter}, we know that \[ \sum_{t=0}^E T_t \le \frac{2(F_0 - F^*)L_F}{3\Delta^2} := \widetilde{T}, \] and according to~\eqref{eq:total_saddle}, we have \[ E \le \frac{\rho_F (F_0 - F^*) }{48 L_F (\Delta^{6/5}d^{3/5} + \Delta^{7/5}d^{7/10}) }. \] According to~\eqref{eq:decay_w0_wt2}, we know that \[ F(\mathbf{w}_t) - F(\widetilde{\mathbf{w}}_t) \ge \frac{L_F}{4T_t} \twonms{\mathbf{w}_t - \widetilde{\mathbf{w}}_t}^2 - \frac{\Delta^2T_t}{L_F}, \] which implies \[ \twonms{\mathbf{w}_t - \widetilde{\mathbf{w}}_t} \le \frac{2}{\sqrt{L_F}} \sqrt{T_t(F(\mathbf{w}_t) - F(\widetilde{\mathbf{w}}_t))} + \frac{2\Delta T_t}{L_F}. \] Then, by Cauchy-Schwarz inequality, we have \begin{equation}\label{eq:total_dist_1} \sum_{t=0}^E \twonms{\mathbf{w}_t - \widetilde{\mathbf{w}}_t} \le 2 \sqrt{\frac{\widetilde{T}}{L_F} \sum_{t=0}^E (F(\mathbf{w}_t) - F(\widetilde{\mathbf{w}}_t))} + \frac{2\Delta\widetilde{T}}{L_F}. \end{equation} On the other hand, we have \[ \sum_{t=0}^E ( F(\mathbf{w}_t) - F(\widetilde{\mathbf{w}}_t) ) + \sum_{t=0}^{E-1} ( F(\widetilde{\mathbf{w}}_t) - F(\mathbf{w}_{t+1}) ) = F(\mathbf{w}_0) - F(\widetilde{\mathbf{w}}_E) \le F(\mathbf{w}_0) - F^*. 
\] According to~\eqref{eq:tw_w0}, we have \[ F(\widetilde{\mathbf{w}}_t) - F(\mathbf{w}_{t+1}) \ge -4\Delta r - \frac{L_F}{2}r^2, \] and thus \begin{equation}\label{eq:total_dist_2} \sum_{t=0}^E ( F(\mathbf{w}_t) - F(\widetilde{\mathbf{w}}_t) ) \le F(\mathbf{w}_0) - F^* + E(4\Delta r + \frac{L_F}{2}r^2). \end{equation} Combining~\eqref{eq:total_dist_1} and~\eqref{eq:total_dist_2}, and using the bounds for $\widetilde{T}$ and $E$, we obtain \begin{equation}\label{eq:total_dist_4} \sum_{t=0}^E \twonms{\mathbf{w}_t - \widetilde{\mathbf{w}}_t} \le C_1 \frac{F(\mathbf{w}_0) - F^*}{\Delta}, \end{equation} where $C_1>0$ is a quantity that only depends on $L_F$ and $\rho_F$. In addition, we have \begin{equation}\label{eq:total_dist_3} \sum_{t=0}^{E-1} \twonms{\widetilde{\mathbf{w}}_t - \mathbf{w}_{t+1}} \le Er \le C_2 \frac{F(\mathbf{w}_0) - F^*}{\Delta^{3/5}d^{3/10} + \Delta^{4/5}d^{2/5}}, \end{equation} where $C_2>0$ is a quantity that only depends on $L_F$ and $\rho_F$. Combining~\eqref{eq:total_dist_4} and~\eqref{eq:total_dist_3}, and using the triangle inequality, we know that \[ \twonms{\widetilde{\mathbf{w}}_{E} - \mathbf{w}_0} \le C_1 \frac{F(\mathbf{w}_0) - F^*}{\Delta} + C_2 \frac{F(\mathbf{w}_0) - F^*}{\Delta^{3/5}d^{3/10} + \Delta^{4/5}d^{2/5}} \le C \frac{F(\mathbf{w}_0) - F^*}{\Delta}. \] Here, the last inequality is due to the fact that we consider the regime where $\Delta\rightarrow 0$, and $C$ is a quantity that only depends on $L_F$ and $\rho_F$. As a final note, the analysis above also applies to any iterate prior to the final output, and thus, all the iterates during the algorithm stay in the $\ell_2$ ball centered at $\mathbf{w}_0$ with radius $C \frac{F(\mathbf{w}_0) - F^*}{\Delta}$. \section{Robust Estimation of Gradients}\label{apx:robust_estimation} \subsection{Iterative Filtering Algorithm}\label{apx:filtering} We describe an iterative filtering algorithm for robust mean estimation.
The algorithm was originally proposed for robust mean estimation for the Gaussian distribution in~\cite{diakonikolas2016robust} and extended to sub-Gaussian distributions in~\cite{diakonikolas2017being}; the algorithm was then reinterpreted in~\cite{steinhardt2017resilience}. Here, we present the algorithm using the interpretation in~\cite{steinhardt2017resilience}. Suppose that $ m $ random vectors $\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_m \in \mathbb{R}^d$ are drawn i.i.d.\ from some distribution with mean~$\boldsymbol\mu$. An adversary observes all these vectors and changes an $\alpha$ fraction of them in an arbitrary fashion, and we only have access to the corrupted data points $\widehat{\mathbf{x}}_1, \widehat{\mathbf{x}}_2, \ldots, \widehat{\mathbf{x}}_m$. The goal of the iterative filtering algorithm is to output an accurate estimate of the true mean $\boldsymbol\mu$ even when the dimension~$d$ is large. We provide the detailed procedure in Algorithm~\ref{alg:iter_filtering}. Here, we note that the algorithm parameter $\sigma$ needs to be chosen properly in order to achieve the best possible statistical error rate. \begin{algorithm}[h] \caption{Iterative Filtering~\cite{diakonikolas2016robust,diakonikolas2017being,steinhardt2017resilience}} \begin{algorithmic} \REQUIRE corrupted data $\widehat{\mathbf{x}}_1, \widehat{\mathbf{x}}_2, \ldots, \widehat{\mathbf{x}}_m\in\mathbb{R}^d$, $\alpha \in [0,\frac{1}{4})$, and algorithm parameter $\sigma > 0$. \STATE $\mathcal{A} \leftarrow [m]$, $c_i \leftarrow 1$, and $\tau_i \leftarrow 0$, $\forall~i\in\mathcal{A}$.
\WHILE{ true } \STATE Let $\mat{W}\in\mathbb{R}^{|\mathcal{A}| \times |\mathcal{A}|}$ be a minimizer of the convex optimization problem: \STATE \[ \min_{\substack{0 \le W_{ji}\le \frac{3+\alpha}{(1-\alpha)(3-\alpha)m} \\ \sum_{j\in\mathcal{A}} W_{ji}=1}} \max_{\substack{ \mathbf{U} \succeq 0 \\ \operatorname{tr}(\mathbf{U}) \le 1 } } \sum_{i\in\mathcal{A}} c_i (\widehat{\mathbf{x}}_i - \sum_{j\in\mathcal{A}}\widehat{\mathbf{x}}_jW_{ji})^\top \mathbf{U} (\widehat{\mathbf{x}}_i - \sum_{j\in\mathcal{A}}\widehat{\mathbf{x}}_jW_{ji}), \] \STATE and $\mathbf{U}\in\mathbb{R}^{d\times d}$ be a maximizer of the convex optimization problem: \[ \max_{\substack{ \mathbf{U} \succeq 0 \\ \operatorname{tr}(\mathbf{U}) \le 1 } } \min_{\substack{0 \le W_{ji}\le \frac{3+\alpha}{(1-\alpha)(3-\alpha)m} \\ \sum_{j\in\mathcal{A}} W_{ji}=1}} \sum_{i\in\mathcal{A}} c_i (\widehat{\mathbf{x}}_i - \sum_{j\in\mathcal{A}}\widehat{\mathbf{x}}_jW_{ji})^\top \mathbf{U} (\widehat{\mathbf{x}}_i - \sum_{j\in\mathcal{A}}\widehat{\mathbf{x}}_jW_{ji}). \] \STATE $\forall~i\in\mathcal{A},~\tau_i \leftarrow (\widehat{\mathbf{x}}_i - \sum_{j\in\mathcal{A}}\widehat{\mathbf{x}}_jW_{ji})^\top \mathbf{U} (\widehat{\mathbf{x}}_i - \sum_{j\in\mathcal{A}}\widehat{\mathbf{x}}_jW_{ji})$. \IF{$ \sum_{i\in\mathcal{A}} c_i\tau_i > 8m\sigma^2 $} \STATE $\forall~i\in\mathcal{A},~c_i\leftarrow (1-\frac{\tau_i}{\tau_{\max}})c_i$, where $\tau_{\max} = \max_{i\in\mathcal{A}}\tau_i$. \STATE $\mathcal{A} \leftarrow \mathcal{A} \setminus \{i:c_i \le \frac{1}{2}\}$. 
\ELSE \STATE \textbf{return} $\widehat{\boldsymbol\mu} = \frac{1}{|\mathcal{A}|} \sum_{i\in\mathcal{A}}\widehat{\mathbf{x}}_i$ \ENDIF \ENDWHILE \end{algorithmic}\label{alg:iter_filtering} \end{algorithm} \subsection{Proof of Theorem~\ref{thm:iterative_filtering}}\label{prf:iterative_filtering} To prove Theorem~\ref{thm:iterative_filtering}, we first state a result that bounds the error of the iterative filtering algorithm when the original data points $\{ \mathbf{x}_i \} $ are deterministic. The following lemma is proved in~\cite{diakonikolas2017being,steinhardt2017resilience}; also see~\cite{su2018securing} for additional discussion. \begin{lemma}\label{lem:deterministic} \cite{diakonikolas2017being,steinhardt2017resilience} Let $\mathcal{S}:=\{\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_m\}$ be the set of original data points and $\boldsymbol\mu_\mathcal{S}:=\frac{1}{m} \sum_{i=1}^m \mathbf{x}_i$ be their sample mean. Let $\widehat{\mathbf{x}}_1, \widehat{\mathbf{x}}_2, \ldots, \widehat{\mathbf{x}}_m$ be the corrupted data. If $\alpha \le \frac{1}{4}$, and the algorithm parameter $\sigma$ is chosen such that \begin{equation}\label{eq:condi_sigma} \twonm{ \frac{1}{m}\sum_{i=1}^m (\mathbf{x}_i - \boldsymbol\mu_\mathcal{S})(\mathbf{x}_i - \boldsymbol\mu_\mathcal{S})^\top } \le \sigma^2, \end{equation} then the output of the iterative filtering algorithm satisfies $\twonms{\widehat{\boldsymbol\mu} - \boldsymbol\mu_\mathcal{S}} \le \mathcal{O}(\sigma\sqrt{\alpha})$.
\end{lemma} By the triangle inequality, we have \begin{equation}\label{eq:mean_triangle} \twonms{\widehat{\boldsymbol\mu} - \boldsymbol\mu} \le \twonms{\widehat{\boldsymbol\mu} - \boldsymbol\mu_{\mathcal{S}}} + \twonms{\boldsymbol\mu_\mathcal{S} - \boldsymbol\mu }, \end{equation} and \begin{align} \twonm{ \frac{1}{m}\sum_{i=1}^m (\mathbf{x}_i - \boldsymbol\mu_\mathcal{S})(\mathbf{x}_i - \boldsymbol\mu_\mathcal{S})^\top } = & \frac{1}{m} \twonm{ ([\mathbf{x}_1,\cdots,\mathbf{x}_m] - \boldsymbol\mu_\mathcal{S} \vect{1}^\top) ([\mathbf{x}_1,\cdots,\mathbf{x}_m] - \boldsymbol\mu_\mathcal{S} \vect{1}^\top)^\top} \nonumber \\ =& \frac{1}{m}\twonm{ [\mathbf{x}_1,\cdots,\mathbf{x}_m] - \boldsymbol\mu_\mathcal{S} \vect{1}^\top }^2 \nonumber \\ \le & \frac{1}{m} \Big( \twonms{[\mathbf{x}_1,\cdots,\mathbf{x}_m] - \boldsymbol\mu \vect{1}^\top} + \sqrt{m} \twonms{\boldsymbol\mu - \boldsymbol\mu_\mathcal{S}} \Big)^2, \label{eq:opnorm_triangle} \end{align} where $\vect{1}$ denotes the all-one vector.\footnote{We note that a similar derivation also appears in~\cite{su2018securing}.} By choosing \[ \sigma = \Theta(\frac{1}{\sqrt{m}} \twonms{[\mathbf{x}_1,\cdots,\mathbf{x}_m] - \boldsymbol\mu \vect{1}^\top} + \twonms{\boldsymbol\mu - \boldsymbol\mu_\mathcal{S}} ) \] in Lemma~\ref{lem:deterministic} and combining with the bounds~\eqref{eq:mean_triangle} and~\eqref{eq:opnorm_triangle}, we obtain that \begin{equation}\label{eq:sufficient} \twonms{\widehat{\boldsymbol\mu} - \boldsymbol\mu} \lesssim \frac{\sqrt{\alpha}}{\sqrt{m}}\twonms{[\mathbf{x}_1,\cdots,\mathbf{x}_m] - \boldsymbol\mu \vect{1}^\top} + \twonms{\boldsymbol\mu - \boldsymbol\mu_\mathcal{S}}. \end{equation} With the above bound in hand, we now turn to the robust gradient estimation problem, where the data points are drawn i.i.d. from some unknown distribution. Let $\widehat{\mathbf{g}}(\mathbf{w}):=\mathsf{filter}\{\widehat{\mathbf{g}}_i(\mathbf{w})\}_{i=1}^m$, where $\mathsf{filter}$ represents the iterative filtering algorithm.
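For intuition, the filtering loop of Algorithm~\ref{alg:iter_filtering} can be sketched numerically. The snippet below is a simplified variant, down-weighting points by their squared projections onto the top eigenvector of the weighted covariance with a spectral-norm stopping rule, rather than the exact min--max formulation above; the function name, iteration cap, and toy data are illustrative.

```python
import numpy as np

def iterative_filtering(points, sigma2, max_iter=50):
    """Robust mean estimation by spectral filtering (simplified sketch).

    Repeatedly down-weights points by their squared projection onto the
    top eigenvector of the weighted covariance, mirroring the soft-removal
    step c_i <- (1 - tau_i / tau_max) c_i of the algorithm above.
    """
    m, d = points.shape
    c = np.ones(m)  # per-point weights, all start at 1
    for _ in range(max_iter):
        mu = (c[:, None] * points).sum(axis=0) / c.sum()
        centered = points - mu
        cov = (c[:, None] * centered).T @ centered / c.sum()
        eigvals, eigvecs = np.linalg.eigh(cov)  # ascending eigenvalues
        if eigvals[-1] <= 8 * sigma2:  # covariance looks clean: stop
            return mu
        u = eigvecs[:, -1]             # direction of largest variance
        tau = (centered @ u) ** 2      # outlier scores
        c = np.maximum(c * (1.0 - tau / tau.max()), 0.0)
    return (c[:, None] * points).sum(axis=0) / c.sum()

# Toy check: 90 clean N(0, I_5) points plus 10 coordinated outliers.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(size=(90, 5)), np.full((10, 5), 10.0)])
robust = iterative_filtering(data, sigma2=1.0)  # stays near the origin
naive = data.mean(axis=0)                       # dragged toward the outliers
```

On this toy instance the naive mean is pulled by roughly $\alpha \cdot 10$ per coordinate, while the filtered estimate remains in the $\mathcal{O}(\sigma\sqrt{\alpha})$ error regime promised by Lemma~\ref{lem:deterministic}.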
In light of~\eqref{eq:sufficient}, we know that in order to bound the gradient estimation error $\sup_{\mathbf{w}\in\mathcal{W}} \twonms{\widehat{\mathbf{g}}(\mathbf{w}) - \nabla F(\mathbf{w})}$, it suffices to bound the quantities \[ \sup_{\mathbf{w}\in\mathcal{W}} \twonms{[\nabla F_1(\mathbf{w}),\cdots,\nabla F_m(\mathbf{w})] - \nabla F(\mathbf{w}) \vect{1}^\top} \] and \[ \sup_{\mathbf{w}\in\mathcal{W}}\twonms{\frac{1}{m}\sum_{i=1}^m\nabla F_i(\mathbf{w}) - \nabla F(\mathbf{w})}. \] Here, we recall that $\nabla F_i(\mathbf{w})$ is the true gradient of the empirical loss function on the $i$-th machine, and $\widehat{\mathbf{g}}_i(\mathbf{w})$ is the (possibly) corrupted gradient. We first bound $\sup_{\mathbf{w}\in\mathcal{W}}\twonms{\frac{1}{m}\sum_{i=1}^m\nabla F_i(\mathbf{w}) - \nabla F(\mathbf{w})}$. Note that we have $\frac{1}{m}\sum_{i=1}^m\nabla F_i(\mathbf{w}) = \frac{1}{nm} \sum_{i=1}^m\sum_{j=1}^n \nabla f(\mathbf{w};\mathbf{z}_{i,j})$. Using the same method as in the proof of Lemma 6 in~\cite{chen2017distributed}, we can show that for each fixed $\mathbf{w}$, with probability at least $1-\delta$, \[ \twonms{\frac{1}{m}\sum_{i=1}^m\nabla F_i(\mathbf{w}) - \nabla F(\mathbf{w})} \le \frac{2\sqrt{2} \zeta}{\sqrt{nm}} \sqrt{d\log 6 + \log\Big(\frac{1}{\delta}\Big)}. \] For some $\delta_0 > 0$ to be chosen later, let $\mathcal{W}_{\delta_0} = \{\mathbf{w}^1,\mathbf{w}^2,\ldots,\mathbf{w}^{N_{\delta_0}}\}$ be a finite subset of $\mathcal{W}$ such that for any $\mathbf{w}\in\mathcal{W}$, there exists some $\mathbf{w}^\ell \in \mathcal{W}_{\delta_0}$ such that $\twonms{\mathbf{w}^\ell - \mathbf{w}} \le \delta_0$. Standard $ \epsilon $-net results from~\cite{vershynin2010introduction} ensure that $N_{\delta_0} \le (1+\frac{D}{\delta_0})^d$. 
Then, by the union bound, we have with probability $1-\delta$, for all $\mathbf{w}^\ell \in \mathcal{W}_{\delta_0}$, \begin{equation}\label{eq:in_delta_net} \twonms{\frac{1}{m}\sum_{i=1}^m\nabla F_i(\mathbf{w}^\ell) - \nabla F(\mathbf{w}^\ell)} \le \frac{2\sqrt{2} \zeta}{\sqrt{nm}} \sqrt{d\log 6 + \log\Big(\frac{N_{\delta_0}}{\delta}\Big)}. \end{equation} When~\eqref{eq:in_delta_net} holds, by the smoothness of $f(\cdot;\mathbf{z})$ we know that for all $\mathbf{w}\in\mathcal{W}$, \[ \twonms{\frac{1}{m}\sum_{i=1}^m\nabla F_i(\mathbf{w}) - \nabla F(\mathbf{w})} \le \frac{2\sqrt{2} \zeta}{\sqrt{nm}} \sqrt{d\log 6 + \log\Big(\frac{N_{\delta_0}}{\delta}\Big)} + 2L\delta_0. \] By choosing $\delta_0 = \frac{1}{nmL}$ and $\delta = \frac{1}{(1+mnDL)^d}$, we obtain that with probability at least $1-\frac{1}{(1+mnDL)^d}$, for all $\mathbf{w}\in\mathcal{W}$, \begin{equation}\label{eq:unif_grad_norm} \twonms{\frac{1}{m}\sum_{i=1}^m\nabla F_i(\mathbf{w}) - \nabla F(\mathbf{w})} \lesssim \frac{\zeta}{\sqrt{nm}}\sqrt{d\log(1+nmDL)}. \end{equation} We next bound $\sup_{\mathbf{w}\in\mathcal{W}} \twonms{[\nabla F_1(\mathbf{w}),\cdots,\nabla F_m(\mathbf{w})] - \nabla F(\mathbf{w}) \vect{1}^\top}$. We note that when the gradients are sub-Gaussian distributed, similar results for the centralized setting have been established in~\cite{charikar2017learning}. One can check that for every $i$, $\nabla F_i(\mathbf{w}) - \nabla F(\mathbf{w})$ is $\frac{\zeta}{\sqrt{n}}$-sub-Gaussian. Define $\mathbf{G}(\mathbf{w}) := [\nabla F_1(\mathbf{w}),\cdots,\nabla F_m(\mathbf{w})] - \nabla F(\mathbf{w}) \vect{1}^\top$. 
Using a standard concentration inequality for the norm of a matrix with independent sub-Gaussian columns~\cite{vershynin2010introduction}, we obtain that for each fixed $\mathbf{w}$, with probability at least $1-\delta$, \[ \twonms{ \frac{1}{m}\mathbf{G}(\mathbf{w})\mathbf{G}(\mathbf{w})^\top - \frac{1}{n}\mat{\Sigma}(\mathbf{w}) } \lesssim \frac{\zeta^2}{n}\left( \sqrt{\frac{d}{m}} + \frac{d}{m} + \frac{1}{m}\log\Big(\frac{1}{\delta}\Big) + \sqrt{\frac{1}{m}\log\Big(\frac{1}{\delta}\Big)} \right), \] which implies that \[ \frac{1}{\sqrt{m}} \twonms{\mathbf{G}(\mathbf{w})} \lesssim \frac{\sigma}{\sqrt{n}} + \frac{\zeta}{\sqrt{n}} \left( \sqrt{\frac{d}{m}} + \frac{d}{m} + \frac{1}{m}\log\Big(\frac{1}{\delta}\Big) + \sqrt{\frac{1}{m}\log\Big(\frac{1}{\delta}\Big)} \right)^{1/2}. \] Recall the $ \delta_0 $-net $\mathcal{W}_{\delta_0} = \{\mathbf{w}^1,\mathbf{w}^2,\ldots,\mathbf{w}^{N_{\delta_0}}\}$ as defined above. Then, we have with probability at least $1-\delta$, for all $\mathbf{w}^\ell \in \mathcal{W}_{\delta_0}$ \begin{equation}\label{eq:in_delta_net_op} \frac{1}{\sqrt{m}} \twonms{\mathbf{G}(\mathbf{w}^\ell)} \lesssim \frac{\sigma}{\sqrt{n}} + \frac{\zeta}{\sqrt{n}} \left( \sqrt{\frac{d}{m}} + \frac{d}{m} + \frac{1}{m}\log\Big(\frac{N_{\delta_0}}{\delta}\Big) + \sqrt{\frac{1}{m}\log\Big(\frac{N_{\delta_0}}{\delta}\Big)} \right)^{1/2}. \end{equation} For each $\mathbf{w}$ with $\twonms{\mathbf{w}^\ell - \mathbf{w}} \le \delta_0$, we have \begin{align*} \twonms{\mathbf{G}(\mathbf{w}^\ell) - \mathbf{G}(\mathbf{w}) } \le & \fbnorms{ \mathbf{G}(\mathbf{w}^\ell) - \mathbf{G}(\mathbf{w}) } \\ \le & \left( \sum_{i=1}^m \twonms{(\nabla F_i(\mathbf{w}^\ell) - \nabla F(\mathbf{w}^\ell)) - (\nabla F_i(\mathbf{w}) - \nabla F(\mathbf{w}))}^2 \right)^{1/2} \\ \le & 2L\delta_0\sqrt{m}. 
\end{align*} This implies that when the bound~\eqref{eq:in_delta_net_op} holds, we have for all $\mathbf{w}\in\mathcal{W}$, \begin{equation}\label{eq:unif_op_norm} \frac{1}{\sqrt{m}} \twonms{\mathbf{G}(\mathbf{w})} \lesssim \frac{\sigma}{\sqrt{n}} + \frac{\zeta}{\sqrt{n}} \left( \sqrt{\frac{d}{m}} + \frac{d}{m} + \frac{1}{m}\log\Big(\frac{N_{\delta_0}}{\delta}\Big) + \sqrt{\frac{1}{m}\log\Big(\frac{N_{\delta_0}}{\delta}\Big)} \right)^{1/2} + 2L\delta_0. \end{equation} Choose $\delta_0 = \frac{1}{nmL}$, in which case the last term above is a higher-order term. In this case, choosing $\delta = \frac{1}{(1+mnDL)^d}$, we have with probability at least $1-\frac{1}{(1+mnDL)^d}$, for all $\mathbf{w}\in\mathcal{W}$, \begin{align} \frac{1}{\sqrt{m}} \twonms{\mathbf{G}(\mathbf{w})} \lesssim & \frac{\sigma}{\sqrt{n}} + \frac{\zeta}{\sqrt{n}} \left( \Big(\frac{d}{m} + \sqrt{\frac{d}{m}}\Big) \log(1+nmDL) \right)^{1/2} \nonumber \\ \lesssim & \frac{\sigma}{\sqrt{n}} + \frac{\zeta}{\sqrt{n}} \left( 1+\sqrt{\frac{d}{m}} \right)\sqrt{\log(1+nmDL)}. \label{eq:unif_op_norm_2} \end{align} Combining the bounds~\eqref{eq:sufficient},~\eqref{eq:unif_grad_norm}, and~\eqref{eq:unif_op_norm_2}, we obtain that with probability at least $1-\frac{2}{(1+mnDL)^d},$ \[ \sup_{\mathbf{w}\in\mathcal{W}}\twonms{\widehat{\mathbf{g}}(\mathbf{w}) - \nabla F(\mathbf{w})} \lesssim \left( (\sigma + \zeta)\sqrt{\frac{\alpha}{n}} + \zeta\sqrt{\frac{d}{nm}} \right)\sqrt{\log(1+nmDL)}, \] which completes the proof. \subsection{Median and Trimmed Mean}\label{apx:med_tm} In this section, we present for completeness the error bounds of the median and trimmed mean operations in the Byzantine setting, established in~\cite{yin2018byzantine}. \begin{asm}\label{asm:lip_each_loss} For any $\mathbf{z}\in\mathcal{Z}$, the $ k $-th partial derivative $\partial_kf(\cdot;\mathbf{z})$ is $L_k$-Lipschitz for each $k\in[d]$. Let $\widehat{L} := (\sum_{k=1}^d L_k^2)^{1/2}$.
\end{asm} For the median-based algorithm, one needs to use the notion of the \emph{absolute skewness} of a one-dimensional random variable $X$, defined as $S(X) := {\EXPS{|X - \EXPS{X}|^3}}/{\operatorname{Var}(X)^{3/2}}$. Define the following upper bounds on the standard deviation and absolute skewness of the gradients: \[ v := \sup_{\mathbf{w}\in\mathcal{W}} \big( \EXPS{\twonms{\nabla f(\mathbf{w}; \mathbf{z}) - \nabla F(\mathbf{w})}^2} \big)^{1/2}, \quad s := \sup_{\mathbf{w}\in\mathcal{W}}\max_{k\in[d]} S\big(\partial_k f(\mathbf{w}; \mathbf{z})\big). \] Then one has the following guarantee for the median-based algorithm. \begin{theorem}[median]\label{thm:median_inexactness} ~\cite{yin2018byzantine} Suppose that Assumption~\ref{asm:lip_each_loss} holds. Assume that \[ \alpha + \bigg( \frac{ d\log( 1 + nmD\widehat{L} ) }{ m(1-\alpha) } \bigg)^{1/2} + c_1\frac{ s }{\sqrt{n}} \le \frac{1}{2}-c_2 \] for some constants $c_1,c_2>0$. Then, with probability $1 - o(1)$, $\mathsf{GradAGG}\equiv {\sf med}$ provides a $\Delta_{{\sf med}}$-inexact gradient oracle with \[ \Delta_{{\sf med}} \le \frac{c_3}{\sqrt{n}}v \big(\alpha + (\frac{d\log( n m D \widehat{L} )}{m})^{1/2} + \frac{s}{\sqrt{n}} \big) + \mathcal{O}(\frac{1}{nm}), \] where $c_3$ is an absolute constant. \end{theorem} Therefore, the median operation provides a $\widetilde{\mathcal{O}} ( v (\frac{\alpha}{\sqrt{n}} + \sqrt{\frac{d}{nm}} + \frac{s}{ n }) )$-inexact gradient oracle. If each partial derivative is of size $\mathcal{O}(1)$, the quantity $v$ is of the order $\mathcal{O}(\sqrt{d})$ and thus one has $\Delta_{{\sf med}} \lesssim \frac{\alpha \sqrt{d}}{\sqrt{n}} + \frac{d}{\sqrt{nm}} + \frac{\sqrt{d}}{n}$. For the trimmed mean algorithm, one needs to assume that the gradients of the loss functions are sub-exponential. \begin{asm}\label{asm:sub_exp_grad} For any $\mathbf{w} \in \mathcal{W}$, $\nabla f(\mathbf{w}; \mathbf{z})$ is $\xi$-sub-exponential.
\end{asm} In this setting, we have the following guarantee. \begin{theorem}[trimmed mean]\label{thm:trmean_inexactness} ~\cite{yin2018byzantine} Suppose that Assumptions~\ref{asm:lip_each_loss} and~\ref{asm:sub_exp_grad} hold. Choose $ \beta = c_4 \alpha \le \frac{1}{2} - c_5 $ with some constants $c_4 \ge 1$, $c_5 >0$. Then, with probability $1-o(1)$, $\mathsf{GradAGG} \equiv {\sf trmean}_\beta$ provides a $\Delta_{{\sf tm}}$-inexact gradient oracle with \[ \Delta_{{\sf tm}} \le c_6 \xi d \Big(\frac{\alpha}{\sqrt{n}} + \frac{1}{\sqrt{nm}} \Big) \sqrt{\log(n m D \widehat{L}) }, \] where $c_6$ is an absolute constant. \end{theorem} Therefore, the trimmed mean operation provides a $\widetilde{\mathcal{O}} ( \xi d (\frac{\alpha}{\sqrt{n}} + \frac{1}{\sqrt{nm}} ) )$-inexact gradient oracle. \subsection{Lower Bound for First-Order Guarantee}\label{apx:lower_bound} In this section, we prove Observation~\ref{obs:lower_bound}. We consider the simple mean estimation problem with random vector $\mathbf{z}$ drawn from a distribution $\mathcal{D}$ with mean $\boldsymbol\mu$. The loss function associated with $\mathbf{z}$ is $f(\mathbf{w};\mathbf{z}) = \frac{1}{2}\twonms{\mathbf{w}-\mathbf{z}}^2$. The population loss is $F(\mathbf{w}) = \frac{1}{2}(\twonms{\mathbf{w}}^2 - 2\boldsymbol\mu^\top\mathbf{w} + \EXPS{\twonms{\mathbf{z}}^2})$, and $\nabla F(\mathbf{w}) = \mathbf{w} - \boldsymbol\mu$. We first provide a lower bound for distributed mean estimation in the Byzantine setting, which is proved in~\cite{yin2018byzantine}. \begin{lemma}\label{lem:lower_bound_mean} ~\cite{yin2018byzantine} Suppose that $\mathbf{z}$ is Gaussian distributed with mean $\boldsymbol\mu$ and covariance $\sigma^2\mathbf{I}$. Then, any algorithm that outputs an estimate $\widetilde{\mathbf{w}}$ of $\boldsymbol\mu$ must, with constant probability, satisfy \[ \twonms{\widetilde{\mathbf{w}} - \boldsymbol\mu} = \Omega(\frac{\alpha}{\sqrt{n}} + \sqrt{\frac{d}{nm}} ).
\] \end{lemma} Since $\nabla F(\widetilde{\mathbf{w}}) = \widetilde{\mathbf{w}} - \boldsymbol\mu$, the above bound directly implies the lower bound on $\twonms{\nabla F(\widetilde{\mathbf{w}})}$ in Observation~\ref{obs:lower_bound}. \section{Related Work}\label{sec:related} \begin{table*}[h] \centering \begin{tabular}{|c|c|c|c|c|} \hline Algorithm & PGD & Neon+GD & Neon2+GD & \textbf{ByzantinePGD} \\ \hline Byzantine-robust? & no & no & no & yes \\ \hline Purpose of perturbation & escape SP & escape SP & escape SP & \shortstack{escape SP \\ \& robustness} \\ \hline Escaping method & GD & NC search & NC search & inexact GD \\ \hline Termination criterion & decrease in $F$ & decrease in $F$ & distance in $\mathcal{W}$ & distance in $\mathcal{W}$ \\ \hline Multiple rounds? & no & no & no & yes \\ \hline \end{tabular} \caption{Comparison with PGD, Neon+GD, and Neon2+GD. SP = saddle point.} \label{tab:comparison_perturbation} \end{table*} \begin{table*}[h] \centering \begin{tabular}{|c|c|c|} \hline & Robust Aggregation Method & Non-convex Guarantee \\ \hline \citet{feng2014distributed} & geometric median & no \\ \hline \citet{chen2017distributed} & geometric median & no \\ \hline \citet{blanchard2017byzantine} & Krum & first-order \\ \hline \citet{yin2018byzantine} & median, trimmed mean & first-order \\ \hline \citet{xie2018generalized} & mean-around-median, marginal median & first-order \\ \hline \citet{alistarh2018sgd} & martingale-based & no \\ \hline \citet{su2018securing} & iterative filtering & no \\ \hline \textbf{This work} & median, trimmed mean, iterative filtering & second-order \\ \hline \end{tabular} \caption{Comparison with other Byzantine-robust distributed learning algorithms.} \label{tab:comparison_byzantine} \end{table*} \paragraph*{Efficient first-order algorithms for escaping saddle points} Our algorithm is related to a recent line of work which develops efficient first-order algorithms for escaping saddle points. 
Although vanilla GD converges to local minimizers almost surely~\cite{lee2016converge,lee2017first}, achieving convergence in polynomial time requires a more careful algorithmic design~\cite{du2017gradient}. Such convergence guarantees are enjoyed by several GD-based algorithms; examples include PGD~\cite{jin2017escape}, Neon+GD~\cite{xu2017first}, and Neon2+GD~\cite{allen2017neon2}. The general idea of these algorithms is to run GD and add a perturbation to the iterate when the gradient is small. While our algorithm also uses this idea, the design and analysis techniques of our algorithm are significantly different from the work above in the following aspects (also summarized in Table~\ref{tab:comparison_perturbation}). \begin{itemize}[leftmargin=3mm] \item In our algorithm, besides helping with escaping saddle points, the random perturbation has the additional role of defending against adversarial errors. \item The perturbation used in our algorithm needs to be larger, yet carefully calibrated, in order to account for the influence of the inexactness of gradients across the iterations, especially iterations for escaping saddle points. \item We run inexact GD after the random perturbation, while Neon+GD and Neon2+GD use negative curvature (NC) search. It is not immediately clear whether NC search can be robustified against Byzantine failures. Compared to PGD, our analysis is arguably \emph{simpler} and more \emph{straightforward}. \item Our algorithm does not use the value of the loss function (hence no need for robust function value estimation); PGD and Neon+GD assume access to the (exact) function values. \item We employ \emph{multiple} rounds of perturbation to boost the probability of escaping saddle points; this technique is not used in PGD, Neon+GD, or Neon2+GD.
\end{itemize} \paragraph*{Inexact oracles} Optimization with an inexact oracle (e.g.\ noisy gradients) has been studied in various settings such as general convex optimization~\cite{bertsekas2000errors,devolder2014inexact}, robust estimation~\cite{prasad2018robust}, and structured non-convex problems~\cite{balakrishnan2014EM,chen2015fast,candes2014wirtinger,zhang2016provable}. Particularly relevant to us is the recent work by~\citet{jin2018minimizing}, who consider the problem of minimizing $F$ when only given access to the gradients of another \emph{smooth} function $\widehat{F}$ satisfying $\|{\nabla\widehat{F}(\mathbf{w}) - \nabla F(\mathbf{w})}\|_\infty \le \Delta/\sqrt{d},~\forall\mathbf{w}$. Their algorithm uses Gaussian smoothing on $\widehat{F}$. We emphasize that the inexact gradient setting considered by them is much more benign than our Byzantine setting, since (i) their inexactness is defined in terms of $\ell_\infty$ norm whereas the inexactness in our problem is in $\ell_2$ norm, and (ii) we assume that the inexact gradient can be \emph{any} vector within $\Delta$ error, and thus the smoothing technique is not applicable in our problem. Moreover, the iteration complexity obtained by~\citet{jin2018minimizing} may be a high-degree polynomial of the problem parameters and thus not suitable for distributed implementation. \paragraph*{Byzantine-robust distributed learning} Solving large scale learning problems in distributed systems has received much attention in recent years, where communication efficiency and Byzantine robustness are two important topics~\cite{shamir2014communication,lee2015distributed,yin2017gradient,blanchard2017byzantine,chen2018draco,damaskinos2018asynchronous}. Here, we compare with existing Byzantine-robust distributed learning algorithms that are most relevant to our work, and summarize the comparison in Table~\ref{tab:comparison_byzantine}. 
A general idea of designing Byzantine-robust algorithms is to combine optimization algorithms with a robust aggregation (or outlier removal) subroutine. For convex losses, the aggregation subroutines analyzed in the literature include geometric median~\cite{feng2014distributed,chen2017distributed}, median and trimmed mean~\cite{yin2018byzantine}, iterative filtering for the high dimensional setting~\cite{su2018securing}, and martingale-based methods for the SGD setting~\cite{alistarh2018sgd}. For non-convex losses, to the best of our knowledge, existing works only provide first-order convergence guarantees (i.e., small gradients), by using aggregation subroutines such as the Krum function~\cite{blanchard2017byzantine}, median and trimmed mean~\cite{yin2018byzantine}, and mean-around-median and marginal median~\cite{xie2018generalized}. In this paper, we make use of subroutines based on median, trimmed mean, and iterative filtering. Our analysis of median and trimmed mean follows~\citet{yin2018byzantine}. Our results based on the iterative filtering subroutine, on the other hand, are new: \begin{itemize}[leftmargin=3mm] \item The problem that we tackle is harder than what is considered in the original iterative filtering papers~\cite{diakonikolas2016robust,diakonikolas2017being}. There, they only consider robust estimation of a single mean parameter, whereas we guarantee robust gradient estimation over the parameter space. \item Recent work by~\citet{su2018securing} also makes use of the iterative filtering subroutine for the Byzantine setting. They only study strongly convex loss functions, and assume that the gradients are sub-exponential and $ d \le \mathcal{O}(\sqrt{mn})$. Our results apply to the non-convex case and do not require the aforementioned condition on $d$ (which may therefore scale, for example, linearly with the sample size $ mn $), but we impose the stronger assumption of sub-Gaussian gradients.
\end{itemize} \paragraph*{Other non-convex optimization algorithms} Besides first-order GD-based algorithms, many other non-convex optimization methods that can provably converge to an approximate local minimum have received much attention in recent years. For specific problems such as phase retrieval~\cite{candes2014wirtinger}, low-rank estimation~\cite{chen2015fast,zhao2015nonconvex}, and dictionary learning~\cite{agarwal2014learning,sun2015complete}, many algorithms are developed by leveraging the particular structure of the problems, and they either use a smart initialization~\cite{candes2014wirtinger,tu2015low} or initialize randomly~\cite{chen2018gradient,chatterji2017alternating}. Other algorithms are developed for general non-convex optimization, and they can be classified into gradient-based~\cite{ge2015escaping,levy2016power,xu2017first,allen2017natasha,allen2017neon2,jin2017accelerated}, Hessian-vector-product-based~\cite{carmon2016accelerated,agarwal2016linear,royer2018complexity,royer2018newton}, and Hessian-based~\cite{nesterov2006cubic,curtis2017trust} methods. While algorithms using Hessian information can usually achieve better convergence rates---for example, $\mathcal{O}(\frac{1}{\epsilon^{3/2}})$ by~\citet{curtis2017trust}, and $\mathcal{O}(\frac{1}{\epsilon^{7/4}})$ by~\citet{carmon2016accelerated}---gradient-based methods are easier to implement in practice, especially in the distributed setting we are interested in. \paragraph*{Robust statistics} Outlier-robust estimation is a classical topic in statistics~\cite{huber2011robust}. The coordinate-wise median aggregation subroutine that we consider is related to the median-of-means estimator~\cite{nemirovskii1983problem,jerrum1986random}, which has been applied to various robust inference problems~\cite{minsker2015geometric,lugosi2016risk,minsker2017distributed}.
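The median-of-means idea referenced above is simple to state: partition the sample into groups, average within each group, and take the median of the group averages. A minimal one-dimensional sketch (the group count and toy data are illustrative):

```python
import numpy as np

def median_of_means(samples, num_groups):
    """Median-of-means: partition the data, average each group,
    and return the median of the group averages."""
    groups = np.array_split(np.asarray(samples, dtype=float), num_groups)
    return np.median([g.mean() for g in groups])

rng = np.random.default_rng(1)
data = rng.normal(5.0, 1.0, size=1000)
data[:20] = 1e6  # 2% gross outliers

mom = median_of_means(data, num_groups=50)  # stays near the true mean 5
naive = data.mean()                          # destroyed by the outliers
```

Because the median over group averages tolerates a minority of contaminated groups, the estimator survives a small fraction of arbitrarily corrupted samples, which is the same mechanism that makes the coordinate-wise median a natural gradient aggregator in the Byzantine setting.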
A recent line of work develops efficient robust estimation algorithms in high-dimensional settings~\cite{bhatia2015robust,diakonikolas2016robust,lai2016agnostic,charikar2017learning,steinhardt2017resilience,li2017robust,bhatia2017consistent,klivans2018efficient,liu2018regression}. In the centralized setting, the recent work~\cite{diakonikolas2018sever} proposes a scheme, similar to the iterative filtering procedure, that iteratively removes outliers for gradient-based optimization. \section{Robust Estimation of Gradients}\label{sec:robust_estimation} The results in the previous section can be applied as long as one has a robust aggregation subroutine $\mathsf{GradAGG}$ that provides a $\Delta$-inexact gradient oracle of the population loss $ F $. In this section, we discuss three concrete examples of $\mathsf{GradAGG}$: \emph{median}, \emph{trimmed mean}, and a high-dimensional robust estimator based on the \emph{iterative filtering} algorithm~\cite{diakonikolas2016robust,diakonikolas2017being,steinhardt2017resilience}. We characterize their inexactness $\Delta$ under the statistical setting in Section~\ref{sec:setup}, where the data points are sampled independently according to an unknown distribution~$\mathcal{D}$. To describe our statistical results, we need the standard notions of sub-Gaussian and sub-exponential random vectors. \begin{definition}[sub-Gaussianity and sub-exponentiality]\label{def:subexp} A random vector $\mathbf{x}$ with mean $\boldsymbol\mu$ is said to be $\zeta$-sub-Gaussian if $ \EXPS{\exp(\lambda \innerps{\mathbf{x} - \boldsymbol\mu}{\mathbf{u}})} \le e^{\frac{1}{2}\zeta^2\lambda^2 \twonms{\mathbf{u}}^2}, \forall~\lambda, \mathbf{u} $. It is said to be $\xi$-sub-exponential if $\EXPS{\exp(\lambda \innerps{\mathbf{x} - \boldsymbol\mu}{\mathbf{u}})} \le e^{\frac{1}{2}\xi^2\lambda^2 \twonms{\mathbf{u}}^2},~\forall~|\lambda| < \frac{1}{\xi},\mathbf{u}$.
\end{definition} We also need the following result (proved in Appendix~\ref{prf:size_w_space}), which shows that the iterates of ByzantinePGD in fact stay in a bounded set around the initial iterate $\mathbf{w}_0$. \begin{proposition}\label{ppn:size_w_space} Under the choice of algorithm parameters in Theorem~\ref{thm:main}, all the iterates $\mathbf{w}$ in ByzantinePGD stay in the $\ell_2$ ball $\mathbb{B}_{\mathbf{w}_0}(D/2)$ with $D := C\frac{F_0 - F^*}{\Delta}$, where $C>0$ is a number that only depends on $L_F$ and $\rho_F$. \end{proposition} Consequently, for the convergence guarantees of ByzantinePGD to hold, we only need $\mathsf{GradAGG}$ to satisfy the inexact oracle property (Definition~\ref{def:inexact_oracle}) within the bounded set $\mathcal{W} = \mathbb{B}_{\mathbf{w}_0}(D/2)$, with $D$ given in Proposition~\ref{ppn:size_w_space}. As shown below, the three aggregation procedures indeed satisfy this property, with their inexactness $ \Delta $ depending mildly (logarithmically) on the radius $D$. \subsection{Iterative Filtering Algorithm} We start with a recently developed high-dimensional robust estimation technique called the \emph{iterative filtering} algorithm~\cite{diakonikolas2016robust,diakonikolas2017being,steinhardt2017resilience} and use it to build the subroutine $\mathsf{GradAGG} $. As can be seen below, iterative filtering can tolerate a constant fraction of Byzantine machines even when the dimension grows---an advantage over simpler algorithms such as median and trimmed mean. We relegate the details of the iterative filtering algorithm to Appendix~\ref{apx:filtering}. Again, we emphasize that the original iterative filtering algorithm was proposed for robustly estimating a single parameter vector, whereas in our setting, since the Byzantine machines may introduce unspecified probabilistic dependencies across the iterations, we need to prove an error bound for robust gradient estimation uniformly across the parameter space $\mathcal{W}$.
We prove such a bound for iterative filtering under the following two assumptions on the gradients and the smoothness of each loss function $f(\cdot;\mathbf{z})$. \begin{asm}\label{asm:sub_guass_grad} For each $\mathbf{w} \in \mathcal{W}$, $\nabla f(\mathbf{w}; \mathbf{z})$ is $\zeta$-sub-Gaussian. \end{asm} \begin{asm}\label{asm:lip_each_loss_2} For each $\mathbf{z}\in\mathcal{Z}$, $f(\cdot; \mathbf{z})$ is $L$-smooth. \end{asm} Let $\mat{\Sigma}(\mathbf{w})$ be the covariance matrix of $\nabla f(\mathbf{w};\mathbf{z})$, and define $\sigma:=\sup_{\mathbf{w}\in\mathcal{W}}\twonms{\mat{\Sigma}(\mathbf{w})}^{1/2}$. We have the following bounds on the inexactness parameter of iterative filtering. \begin{theorem}[Iterative Filtering]\label{thm:iterative_filtering} Suppose that Assumptions~\ref{asm:sub_guass_grad} and~\ref{asm:lip_each_loss_2} hold. Use the iterative filtering algorithm described in Appendix~\ref{apx:filtering} for $\mathsf{GradAGG}$, and assume that $\alpha \le \frac{1}{4}$. With probability $1-o(1)$, $\mathsf{GradAGG}$ provides a $\Delta_{\mathsf{ftr}}$-inexact gradient oracle with \[ \Delta_{\mathsf{ftr}} \le c \left( (\sigma + \zeta)\sqrt{\frac{\alpha}{n}} + \zeta\sqrt{\frac{d}{nm}} \right)\sqrt{\log(nmDL)}, \] where $c$ is an absolute constant. \end{theorem} The proof of Theorem~\ref{thm:iterative_filtering} is given in Appendix~\ref{prf:iterative_filtering}. Assuming bounded $ \sigma$ and $ \zeta $, we see that iterative filtering provides an $\widetilde{\mathcal{O}} \big( \sqrt{\frac{\alpha}{n}} + \sqrt{\frac{d}{nm}} \big) $-inexact gradient oracle. \subsection{Median and Trimmed Mean} The median and trimmed mean operations are two widely used robust estimation methods. While the dependence of their performance on $d$ is not optimal, they are conceptually simple and computationally fast, and still have good performance in low dimensional settings. We apply these operations in a coordinate-wise fashion to build $\mathsf{GradAGG}$. 
Formally, for a set of vectors $\mathbf{x}^i \in \mathbb{R}^d$, $i \in [m]$, their coordinate-wise median $\mathbf{u} := {\sf med}\{\mathbf{x}^i\}_{i=1}^m$ is a vector with its $ k $-th coordinate being $u_k = {\sf med}\{x^i_k\}_{i=1}^m$ for each $k\in[d]$, where $ {\sf med} $ is the usual (one-dimensional) median. The coordinate-wise $ \beta $-trimmed mean $\mathbf{u} := {\sf trmean}_\beta\{\mathbf{x}^i\}_{i=1}^m$ is a vector with $u_k = \frac{1}{(1-2\beta)m}\sum_{x \in \mathcal{U}_k}x$ for each $k\in[d]$, where $\mathcal{U}_k$ is a subset of $\{x^1_k,\ldots, x^m_k\}$ obtained by removing the largest and smallest $\beta$ fraction of its elements. For robust estimation of the gradient in the Byzantine setting, the error bounds of median and trimmed mean have been studied by~\citet{yin2018byzantine}. For completeness, we record their results below as an informal theorem; details are relegated to Appendix~\ref{apx:med_tm}. \begin{theorem}[Informal]\label{thm:med_tm} ~\cite{yin2018byzantine} Under appropriate smoothness and probabilistic assumptions,\footnote{Specifically, for median we assume that gradients have bounded skewness, and for trimmed mean we assume that the gradients are sub-exponentially distributed.} with high probability, the median operation provides a $\Delta_{{\sf med}}$-inexact gradient oracle with $\Delta_{{\sf med}} \lesssim \frac{\alpha \sqrt{d}}{\sqrt{n}} + \frac{d}{\sqrt{nm}} + \frac{\sqrt{d}}{n} $, and the trimmed mean operation provides a $\Delta_{{\sf tm}}$-inexact gradient oracle with $\Delta_{{\sf tm}} \lesssim \frac{\alpha d}{\sqrt{n}} + \frac{d}{\sqrt{nm}}$. \end{theorem} \subsection{Comparison and Optimality}\label{sec:comparison} In Table~\ref{tab:comparison}, we compare the above three algorithms in terms of the dependence of their gradient inexactness $\Delta$ on the problem parameters $\alpha$, $n$, $m$, and $d$.
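The coordinate-wise median and $\beta$-trimmed mean defined above translate directly into a few lines of code. The sketch below is illustrative (for the trimming step, the library routine scipy.stats.trim_mean provides the same per-coordinate behavior); the toy gradients are hypothetical.

```python
import numpy as np

def coord_median(vectors):
    """Coordinate-wise median of m vectors in R^d."""
    return np.median(np.stack(vectors), axis=0)

def coord_trimmed_mean(vectors, beta):
    """Coordinate-wise beta-trimmed mean: in each coordinate, drop the
    largest and smallest beta-fraction of values and average the rest."""
    x = np.sort(np.stack(vectors), axis=0)  # sort within each coordinate
    m = x.shape[0]
    k = int(beta * m)                       # values trimmed per side
    return x[k:m - k].mean(axis=0)

# 8 honest gradients near (1, 2) plus 2 Byzantine gradients:
grads = [np.array([1.0, 2.0])] * 8 + [np.array([100.0, -100.0])] * 2
med = coord_median(grads)                 # -> array([1., 2.])
tm = coord_trimmed_mean(grads, beta=0.2)  # -> array([1., 2.])
```

Both aggregators discard the two Byzantine vectors in each coordinate here; for this to happen in general, the Byzantine fraction must satisfy $\alpha < \frac{1}{2}$ for the median and $\beta \ge \alpha$ for the trimmed mean.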
We see that when $d = \mathcal{O}(1)$, the median and trimmed mean algorithms have better inexactness due to a better scaling with $\alpha$. When $d$ is large, iterative filtering becomes preferable. \begin{table}[h] \centering \begin{tabular}{|c|c|} \hline & Gradient inexactness $\Delta$ \\ \hline median & $\widetilde{\mathcal{O}}\big(\frac{\alpha \sqrt{d}}{\sqrt{n}} + \frac{d}{\sqrt{nm}} + \frac{\sqrt{d}}{n}\big)$ \\ \hline trimmed mean & $ \widetilde{\mathcal{O}}\big(\frac{\alpha d}{\sqrt{n}} + \frac{d}{\sqrt{nm}}\big) $ \\ \hline iterative filtering & $ \widetilde{\mathcal{O}}\big(\frac{\sqrt{\alpha}}{\sqrt{n}} + \frac{\sqrt{d}}{\sqrt{nm}}\big) $ \\ \hline \end{tabular} \caption{Statistical bounds on gradient inexactness $\Delta$.} \label{tab:comparison} \end{table} Recall that according to Observation~\ref{obs:achievable}, with $\Delta$-inexact gradients the ByzantinePGD algorithm converges to an $(\mathcal{O}(\Delta), \widetilde{\mathcal{O}}(\Delta^{2/5}))$-second-order stationary point. Combining this general result with the bounds in Table~\ref{tab:comparison}, we obtain explicit statistical guarantees on the output of ByzantinePGD. To understand the statistical optimality of these guarantees, we provide a converse result below. \begin{observation}\label{obs:lower_bound} There exists a statistical learning problem in the Byzantine setting such that the output $\widetilde{\mathbf{w}}$ of \emph{any} algorithm must satisfy $\twonms{\nabla F(\widetilde{\mathbf{w}})} = \Omega\big(\frac{\alpha}{\sqrt{n}} + \frac{\sqrt{d}}{\sqrt{nm}}\big)$ with a constant probability. \end{observation} We prove Observation~\ref{obs:lower_bound} in Appendix~\ref{apx:lower_bound}. 
In view of this observation, we see that in terms of the first-order guarantee (i.e., on $\twonms{\nabla F(\widetilde{\mathbf{w}})}$) and up to logarithmic factors, trimmed mean is optimal if $d=\mathcal{O}(1)$, the median is optimal if $d = \mathcal{O}(1)$ and $n \gtrsim m$, and iterative filtering is optimal if $\alpha = \Theta(1)$. The statistical optimality of their second-order guarantees (i.e., on $ \lambda_{\min} (\nabla^2 F(\widetilde{\mathbf{w}})) $) is currently unclear to us, and we believe this is an interesting problem for future investigation. \subsection*{Acknowledgements} D. Yin is partially supported by Berkeley DeepDrive Industry Consortium. Y. Chen is partially supported by NSF CRII award 1657420 and grant 1704828. K. Ramchandran is partially supported by NSF CIF award 1703678. P. Bartlett is partially supported by NSF grant IIS-1619362. The authors would like to thank Zeyuan Allen-Zhu for pointing out a potential way to improve our initial results, and Ilias Diakonikolas for discussing references~\cite{diakonikolas2016robust,diakonikolas2017being,diakonikolas2018sever}. \bibliographystyle{plainnat} \section{Problem Setup}\label{sec:setup} We consider empirical risk minimization for a statistical learning problem where each data point $ \mathbf{z} $ is sampled from an unknown distribution $\mathcal{D}$ over the sample space $\mathcal{Z}$. Let $f(\mathbf{w}; \mathbf{z})$ be the loss function of a parameter vector $\mathbf{w}\in\mathcal{W}\subseteq\mathbb{R}^d$, where $\mathcal{W}$ is the parameter space. The population loss function is therefore given by $F(\mathbf{w}) := \ensuremath{\mathbb{E}}_{\mathbf{z}\sim\mathcal{D}}[f(\mathbf{w};\mathbf{z})]$. We consider a distributed computing system with one \emph{master} machine and $m$ \emph{worker} machines, $\alpha m$ of which are Byzantine machines and the other $ (1-\alpha)m $ are normal. Each worker machine has $n$ data points sampled i.i.d.~from $\mathcal{D}$. 
Denote by $\mathbf{z}_{i,j}$ the $j$-th data point on the $i$-th worker machine, and let $F_i(\mathbf{w}) := \frac{1}{n}\sum_{j=1}^n f(\mathbf{w}; \mathbf{z}_{i,j}) $ be the empirical loss function on the $i$-th machine. The master machine and worker machines can send and receive messages via the following communication protocol: In each parallel iteration, the master machine sends a parameter vector $\mathbf{w}$ to all the worker machines, and then each \emph{normal} worker machine computes the gradient of its empirical loss $F_i(\cdot)$ at $\mathbf{w}$ and sends the gradient to the master machine. The Byzantine machines may be jointly controlled by an adversary and send arbitrary or even malicious messages. We denote the unknown set of Byzantine machines by $\mathcal{B}$, where $|\mathcal{B}|=\alpha m$. With this notation, the gradient sent by the $i$-th worker machine is \begin{equation}\label{eq:def_grad_value_pair} \widehat{\mathbf{g}}_i(\mathbf{w}) = \begin{cases} \nabla F_i(\mathbf{w}) & i \in [m]\setminus\mathcal{B}, \\ * & i \in \mathcal{B}, \end{cases} \end{equation} where the symbol $*$ denotes an arbitrary vector. As mentioned, the adversary is assumed to have complete knowledge of the algorithm used and the data stored on all machines, and the Byzantine machines may collude~\cite{lynch1996distributed} and adapt to the output of the master and normal worker machines. We only make the mild assumption that the adversary cannot \emph{predict} the random numbers generated by the master machine. We consider the scenario where $F(\mathbf{w})$ is non-convex, and our goal is to find an approximate local minimizer of $F(\mathbf{w})$. Note that a first-order stationary point (i.e., one with a small gradient) is not necessarily close to a local minimizer, since the point may be a \emph{saddle point} whose Hessian matrix has a large negative eigenvalue.
Accordingly, we seek to find a \emph{second-order stationary point} $\widetilde{\mathbf{w}}$, namely, one with a small gradient and a nearly positive semidefinite Hessian: \begin{definition}[Second-order stationarity]\label{def:second_stationary} We say that $\widetilde{\mathbf{w}}$ is an $(\epsilon_g, \epsilon_H)$-second-order stationary point of a twice differentiable function $F(\cdot)$ if $\twonms{\nabla F(\widetilde{\mathbf{w}})} \le \epsilon_g$ and $\lambda_{\min}\big(\nabla^2 F(\widetilde{\mathbf{w}})\big) \ge - \epsilon_H$. \end{definition} In the sequel, we make use of several standard concepts from continuous optimization. \begin{definition}[Smooth and Hessian-Lipschitz functions]\label{def:lipschitz} A function $ h $ is called $ L $-smooth if $\sup_{\mathbf{w}\neq\mathbf{w}'} \frac{\twonms{\nabla h(\mathbf{w}) - \nabla h(\mathbf{w}')}}{\twonms{\mathbf{w} - \mathbf{w}'}} \le L $, and $ \rho $-Hessian Lipschitz if $\sup_{\mathbf{w}\neq\mathbf{w}'} \frac{\twonms{\nabla^2h(\mathbf{w}) - \nabla^2h(\mathbf{w}')}}{\twonms{\mathbf{w} - \mathbf{w}'}} \le \rho $. \end{definition} Throughout this paper, the above properties are imposed on the \emph{population} loss function $F(\cdot)$. \begin{asm}\label{asm:lip_assumptions} $F$ is $L_F$-smooth, and $\rho_F$-Hessian Lipschitz on $\mathcal{W}$. \end{asm}
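To make the aggregation step of this protocol concrete, here is a minimal sketch of the coordinate-wise median and trimmed mean rules whose guarantees are compared in Table~\ref{tab:comparison}. The worker count, dimension, and attack values below are toy choices for illustration, not taken from the paper:

```python
import numpy as np

def coordinate_median(gradients):
    """Coordinate-wise median of the m reported gradients (one per row)."""
    return np.median(gradients, axis=0)

def trimmed_mean(gradients, beta):
    """Coordinate-wise beta-trimmed mean: in every coordinate, discard the
    smallest and largest beta-fraction of the m values, average the rest."""
    m = gradients.shape[0]
    k = int(np.floor(beta * m))
    s = np.sort(gradients, axis=0)        # sort each coordinate separately
    return s[k:m - k].mean(axis=0)

# Toy round: m = 10 workers, d = 3, the first two workers are Byzantine.
rng = np.random.default_rng(0)
true_grad = np.array([1.0, -2.0, 0.5])
grads = true_grad + 0.1 * rng.standard_normal((10, 3))
grads[:2] = 1e6                           # Byzantine machines report garbage
robust = coordinate_median(grads)         # stays close to the honest gradient
```

A plain average of the reported gradients would be dragged arbitrarily far by the two corrupted rows, while both robust rules remain near the honest gradient. Iterative filtering is more involved (it iteratively down-weights workers using spectral scores) and is omitted from this sketch.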
\section*{Glossary} Here we provide an overview of the acronyms used throughout this paper and also in common use in the literature. \\ \begin{tabular}{ll} BBH & Binary black hole\\ BH & Black hole \\ BNS & Binary neutron star\\ BSSN & Baumgarte-Shapiro-Shibata-Nakamura \\ CBM & Compact binary mergers \\ CMB & Cosmic microwave background \\ DM & Dark matter \\ ECO & Exotic Compact Object\\ EFT & Effective Field theory \\ EMRI & Extreme-mass-ratio inspiral\\ EOB & Effective One Body model\\ EOS & Equation of state\\ eV & electron Volt\\ GR & General Relativity \\ GSF & Gravitational self-force \\ GRB & Gamma-ray burst\\ GW & Gravitational Wave \\ HMNS & Hypermassive neutron star\\ IMBH & Intermediate-mass black hole\\ IVP & Initial Value Problem\\ LVC & LIGO Scientific and Virgo Collaborations\\ MBH & Massive black hole\\ NK & Numerical kludge model\\ NSB & Neutron star binary\\ NS & Neutron star\\ NR & Numerical Relativity\\ PBH & Primordial black hole\\ PN & Post-Newtonian \\ PM & Post-Minkowskian\\ QNM & Quasinormal modes\\ sBH & Black hole of stellar origin\\ SGWB & Stochastic GW background \\ SM & Standard Model \\ SMBBH & Supermassive binary black hole\\ SOBBH & Stellar-origin binary black hole\\ SNR & Signal-to-noise ratio \\ ST & Scalar-tensor \end{tabular} \newpage \maketitle \tableofcontents \markboth{Black holes, gravitational waves and fundamental physics: a roadmap}{Black holes, gravitational waves and fundamental physics: a roadmap} \newpage \section*{Preface} The long-held promise of {\it gravitational-wave astronomy} as a new window onto the universe has finally materialized with the dramatic discoveries of the LIGO-Virgo collaboration in the past few years. We have taken but the first steps along a new, exciting avenue of exploration that has now opened before us. 
The questions we will tackle in the process are cross-cutting and multidisciplinary, and the answers we will get will no doubt reshape our understanding of black-hole-powered phenomena, of structure formation in the universe, and of gravity itself, at all scales. The harvesting of useful information from gravitational-wave (GW) signals and the understanding of its broader implications demand a cross-disciplinary effort. What exactly will GWs tell us about how, when and in which environment black holes were formed? How fast do black holes spin and how have some of them grown to become supermassive? GWs from merging black holes probe the environment in which they reside, potentially revealing the effect of dark matter or new fundamental degrees of freedom. The analysis of GWs will allow for precise tests of General Relativity, and of the black hole paradigm itself. However, to be able to collect and interpret the information encoded in the GWs, one has to be equipped with faithful and accurate theoretical models of the predicted waveforms. To accomplish the far-reaching goals of gravitational-wave science it is of paramount importance to bring together expertise over a very broad range of topics, from astrophysics and cosmology, through general-relativistic source modelling to particle physics and other areas of fundamental science. In 2016, a short time before the announcement of the first gravitational-wave detection, a cross-disciplinary initiative in Europe led to the establishment of the new COST networking Action on ``Black holes, gravitational waves and fundamental physics'' (``GWverse''). 
GWverse aims to maintain and consolidate leadership in black-hole physics and gravitational-wave science, linking three scientific communities that are currently largely disjoint: one specializing in gravitational-wave detection and analysis, another in black-hole modelling (in both astrophysical and general-relativistic contexts), and a third in strong-gravity tests of fundamental physics. The idea is to form a single, interdisciplinary exchange network, facilitating a common language and a framework for discussion, interaction and learning. The Action will support the training of the next generation of leaders in the field, and the very first ``native'' GW/multi-messenger astronomers, ready to tackle the challenges of high-precision GW astronomy with ground and space-based detectors. \vskip 1cm \noindent Leor Barack \noindent Vitor Cardoso \noindent Samaya Nissanke \noindent Thomas Sotiriou \newpage \input WG1.tex \newpage \input WG2.tex \newpage \input WG3.tex \newpage \phantomsection \addcontentsline{toc}{part}{\bf Postscript} \begin{center} {\large \bf Postscript} \end{center} Gravity sits at the heart of some of the most important open problems in astrophysics, cosmology and fundamental physics, making it a subject of strong interdisciplinarity. Black holes are, in some ways, the ``atoms'' of gravity, the ``simplest'' astrophysical objects, yet they harbour some of the most remarkable predictions of GR, including that of its own ultimate failure. Gravitational-wave astronomy will allow us to test models of BH formation, growth and evolution, as well as models of GW generation and propagation. It will provide evidence for event horizons and ergoregions, test GR itself and may reveal the existence of new fundamental fields. The synthesis of these results has the potential to shed light on some of the most enigmatic issues in contemporary physics. This write-up summarized our present knowledge and key open challenges. 
We hope that it can serve as a guide for the exciting road ahead. \section*{Acknowledgements} This article is based upon work from COST Action CA16104 ``GWverse'', supported by COST (European Cooperation in Science and Technology). We would like to thank Walter del Pozzo for useful comments. A.A. acknowledges partial support from the Polish National Science Center (NCN) through the grant UMO-2016/23/B/ST9/02732 and is currently supported by the Carl Tryggers Foundation through the grant CTS 17:113. L.B. acknowledges support from STFC through Grant No. ST/R00045X/1. V.C. acknowledges financial support provided under the European Union's H2020 ERC Consolidator Grant ``Matter and strong-field gravity: New frontiers in Einstein's theory'' grant agreement no. MaGRaTh--646597. T.P.S. acknowledges partial support from Science and Technology Facilities Council Consolidated Grant ST/P000703/1. K. B. acknowledges support from the Polish National Science Center (NCN) grant: Sonata Bis 2 (DEC-2012/07/E/ST9/01360). E. B. acknowledges financial support from projects 176003 and 176001 by the Ministry of Education and Science of the Republic of Serbia. T. B. was supported by the TEAM/2016-3/19 grant from FNP. P. C. acknowledges support from the Austrian Research Fund (FWF), Project P 29517-N16, and by the Polish National Center of Science (NCN) under grant 2016/21/B/ST1/00940. B. K. acknowledges support from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme ERC-2014-STG under grant agreement No 638435 (GalNUC) and from the Hungarian National Research, Development, and Innovation Office grant NKFIH KH-125675. G. N. is supported by the Swiss National Science Foundation (SNF) under grant 200020-168988. P. P. acknowledges financial support provided under the European Union's H2020 ERC, Starting Grant agreement no.~DarkGRA--757480. U. S. 
acknowledges H2020-MSCA-RISE-2015 Grant No.~690904, NSF Grant No.~PHY-090003 and PRACE Tier-0 Grant No.~PPFPWG. The Flatiron Institute is supported by the Simons Foundation. P. A. S. acknowledges support from the Ram{\'o}n y Cajal Programme of the Ministry of Economy, Industry and Competitiveness of Spain. E. B. is supported by NSF Grants No. PHY-1607130 and AST-1716715. K. B., A. C., A. G., N. I., T. P. and G. Z. acknowledge financial support from the Slovenian Research Agency. S.~B. acknowledges support by the EU H2020 under ERC Starting Grant, no.~BinGraSp-714626. P. C-D. is supported by the Spanish MINECO (grants AYA2015-66899-C2-1-P and RYC-2015-19074) and the Generalitat Valenciana (PROMETEOII-2014-069). D. G. is supported by NASA through Einstein Postdoctoral Fellowship Grant No. PF6–170152 by the Chandra X-ray Center, operated by the Smithsonian Astrophysical Observatory for NASA under Contract NAS8–03060. R. E. acknowledges financial support provided by the Scientific and Technical Research Council of Turkey (TUBITAK) under the grant no. 117F296. J. A. F. is supported by the Spanish MINECO (AYA2015-66899-C2-1-P), by the Generalitat Valenciana (PROMETEOII-2014-069), and by the H2020-MSCA-RISE-2017 Grant No.~FunFiCO-777740. F. M. R. is supported by the Scientific and Technological Research Council of Turkey (T\"{U}B\.{I}TAK) project 117F295. I. R. was supported by the POLONEZ programme of the National Science Centre of Poland which has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sk{\l}odowska-Curie grant agreement No.~665778. A. S. thanks the Spanish Ministry of Economy, Industry and Competitiveness, the Spanish Agencia Estatal de Investigaci\'on, the Vicepresidència i Conselleria d'Innovaci\'o, Recerca i Turisme del Govern de les Illes Balears, the European Commission, the European Regional Development Funds (ERDF). N. S.
acknowledges support from DAAD Germany-Greece Grant (ID 57340132) and GWAVES (pr002022) of ARIS-GRNET (Athens). H. W. acknowledges financial support by the Royal Society UK under the University Research Fellowship UF160547-BHbeyGR and the Research Grant RGF-R1-180073. K. Y. is supported by T\"UB\.ITAK-117F188. \vskip 2cm \input{Affilliation.tex} \raggedright{} \nocite{apsrev41Control} \bibliographystyle{apsrev4-1} \section{Introduction} \label{Sec:introduction1} In the last two years, strong-field gravity astrophysics research has been undergoing a momentous transformation thanks to the recent discoveries of five binary black hole (BBH) mergers that were observed in gravitational waves (GWs) by the LIGO and Virgo detectors. This was compounded last year by the multi-messenger discovery of a binary neutron star (BNS) merger measured in GWs and detected in every part of the electromagnetic (EM) spectrum, allowing us to place compact object mergers in their full astrophysical context. These measurements have opened up an entirely new window onto the Universe, and given rise to a new, rapidly growing and observationally-driven field of GW astrophysics. Despite the multiple scientific breakthroughs and ``firsts'' that these discoveries signify, the measured properties of the BBH and BNS mergers have immediately brought up accompanying challenges and pertinent questions to the wider astrophysics community as a whole. Here, we aim to provide an up-to-date and encompassing review of the astrophysics of compact object mergers and future prospects and challenges. Section~\ref{Sec:LVC} first introduces and briefly details the LIGO and Virgo observations of BBH and BNS mergers. Section~\ref{Sec:BHgenesis} then discusses the astrophysics of BHs, in particular BBHs, from their genesis to archeology, for BHs that span more than ten decades in their mass range.
In the case of stellar-mass BBH mergers, Section~\ref{Sec:binaryevolution} details the formation of compact binary mergers, in particular BBHs, through isolated stellar binary evolution. Section~\ref{Sec:nbody} then reviews how one could dynamically form such events in order to explain the observed merger rate and distribution of masses, mass ratios, and spins. Section~\ref{Sec:PBHandDM} explores the intriguing possibility that at least a fraction of dark matter (DM) in the Universe is in the form of primordial BHs (PBHs), an area that has been invigorated by the recent LIGO and Virgo observations of BBH mergers. Section~\ref{Sec:FormSMBH} presents an overview of the formation of supermassive BBHs through galaxy mergers, and Section~\ref{Sec:PTA} introduces efforts underway to probe the astrophysics of such supermassive BBHs with pulsar timing arrays. Turning our attention to the mergers themselves as multi-messenger sources, Section~\ref{Sec:NumericalRelativity} reviews the state-of-the-art numerical modelling of compact object mergers, in particular, systems with NSs in which we have already observed accompanying EM radiation. For detailed and exhaustive discussion of the modelling of BBH mergers, we refer the reader to Chapter II. Section~\ref{Sec:EMfollowup} provides a summary of the observational efforts by a wide range of facilities and instruments in following up GW mergers in light of the first BNS merger discovery measured in both GWs and EM. Focusing entirely on EM observations, Section~\ref{Sec:BHBandAGN} reviews observations of active galactic nuclei as probes of BBH systems and Section~\ref{Sec:XrayGammaRay} concludes by summarising recent advances in high-energy observations of X-ray binaries. Finally, Section~\ref{Sec:cosmography} provides an extensive review of how observations of GWs can impact the field of cosmology, that is, our understanding of the origins, evolution and fate of the Universe.
\section{LIGO and Virgo Observations of Binary Black Hole Mergers and a Binary Neutron Star } \label{Sec:LVC} \vspace{-3mm} {\it Contributors:} E.~Porter, M.~Hendry, I.~S.~Heng \vspace{3mm} On September 14th, 2015, the discovery of GWs from the merger of two BHs during the first advanced detector era run, commonly called O1, by the two LIGO observatories heralded the dawn of GW astronomy~\cite{Abbott:2016blz}. This event was quickly followed up by two other BBH mergers: one of lower significance on October 12th, 2015, and another on December 26th, 2015~\cite{Abbott:2016nmj,TheLIGOScientific:2016pea}; see Table~\ref{tab:definition_sources} for the source properties of the published GW mergers. These detections, as exemplified by this white paper, have had a major impact on the fields of astrophysics and fundamental physics~\cite{TheLIGOScientific:2016wfe,TheLIGOScientific:2016htt,TheLIGOScientific:2016pea,Abbott:2016ymx,TheLIGOScientific:2016xzw,Abbott:2016cjt}. The fact that all O1 detections were BBH mergers has had significant ramifications on our understanding of astrophysical populations~\cite{Abbott:2016nhf,Abbott:2016drs,TheLIGOScientific:2016pea,Abbott:2016ymx}. The detected BHs were more massive than any BHs that had been previously detected in low-mass X-ray binaries, requiring a re-evaluation of the models of stellar evolution in binary systems~\cite{TheLIGOScientific:2016htt}. From just these three events, the LIGO Scientific and Virgo collaborations (LVC) constrained the rate of BBH mergers to between 9-240 Gpc$^{-3}$ yr$^{-1}$~\cite{TheLIGOScientific:2016pea} (see \cite{Abbott:2017vtc} for an updated BBH merger rate of 12-213 Gpc$^{-3}$ yr$^{-1}$). The non-detection of BNSs and NS-BH binaries allowed constraints of $< 12,600$ Gpc$^{-3}$ yr$^{-1}$ and $< 3,600$ Gpc$^{-3}$ yr$^{-1}$ respectively~\cite{Abbott:2016ymx}. At the time of this run, the LVC had over 60 MOUs signed with external telescopes, satellites and neutrino detectors.
No EM counterparts were found relating to the BBH mergers~\cite{Abbott:2016gcq,Abbott:2016iqz,Aasi:2013wya}. To detect and extract astrophysical information, GW astronomy uses the method of matched filtering~\cite{1991PhRvD..44.3819S}. This method is the optimal linear filter for signals buried in noise, and is very much dependent on the phase modelling of a GW template. Within the LVC, the GW templates are constructed using both analytical and numerical relativity~\cite{TheLIGOScientific:2016qqj}. In this case, the phase evolution of the template is a function of a number of frequency-dependent coefficients. Alternative theories of gravity predict that these coefficients should be individually modified if general relativity (GR) is not the correct theory of gravity. While GR predicts specific values for these coefficients, one can treat each coefficient as a free variable and use Bayesian inference to test for deviations in the values of the parameters from the nominal GR value. All tests conducted by the LVC displayed no deviations from GR~\cite{TheLIGOScientific:2016src,TheLIGOScientific:2016pea,Abbott:2017vtc,Abbott:2017oio}. In addition, searches for generic GW transients, or GW-bursts, typically do not require a well-known or accurate waveform model and are robust against uncertainties in the GW signature. GW-burst searches are designed to detect transients with durations between $10^{-3} - 10$ seconds with minimal assumptions about the expected signal waveform. Such searches are, therefore, sensitive to GW transients from a wide range of progenitors, ranging from known sources such as BBH mergers to poorly-modeled signals such as core-collapse supernovae as well as transients that have yet to be discovered. An overview of GW-burst searches performed by the LVC can be found in~\cite{TheLIGOScientific:2016uux}. Both GW-burst and compact binary coalescence (CBC) searches detected the first GW signal from BBH mergers, GW150914.
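The matched-filtering statistic described above can be illustrated with a minimal sketch. This is a toy example with white noise and an ad-hoc chirp-like template; real LVC pipelines whiten the data with the measured noise spectrum and maximise over large template banks:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 1024.0                                   # sample rate in Hz (toy value)
t = np.arange(0, 1.0, 1.0 / fs)

# Ad-hoc chirp-like template, normalised to unit energy.
template = np.sin(2 * np.pi * (30.0 * t + 40.0 * t**2))
template /= np.linalg.norm(template)

# Simulated detector output: white noise plus an injected signal.
data = 0.5 * rng.standard_normal(t.size) + 5.0 * template

# For white noise, the matched filter is the correlation of the data with
# the unit-norm template over all lags; the peak is the detection statistic.
snr = np.correlate(data, template, mode="full")
peak = np.abs(snr).max()                      # large compared to noise-only lags
```

Because the template phase evolution decorrelates quickly away from the true lag, the statistic peaks sharply where the injected signal sits, which is what makes accurate phase modelling so critical.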
In November 2016, the {\it second} Advanced Era Observation run, O2, began. Once again, in January and June 2017, two BBH mergers were observed by the two LIGO detectors~\cite{Abbott:2017vtc,Abbott:2017oio}. At the end of July 2017, the Advanced Virgo detector joined the global network of detectors. On August 14th, all three detectors observed the merger of a BBH system. In previous detections, using only the two LIGO detectors, the sources were located to 1000s of square degrees in the sky. In this case, due to the addition of Advanced Virgo, this system was localised to within 60 square degrees. While not greatly advancing our understanding of the formation mechanisms of such systems, this detection did have a major effect in the field of fundamental physics. Due to the misalignment of the three detectors, for the first time we were able to test the tensorial nature of GWs. This event allowed the LVC to conclude that the GW signals were tensorial in nature, as is predicted by GR~\cite{Abbott:2017oio}. Burst searches were also used as an independent analysis to complement matched filtering analyses for the detection of GW170104~\cite{Abbott:2017vtc}. Burst searches further identified a coherent signal, corresponding to GW170608, with a false-alarm rate of 1 in $\sim 30$ years~\cite{Abbott:2017gyy} and validated the detection of GW170814 with a false-alarm rate $< 1$ in 5900 years~\cite{Abbott:2017oio}. Note that, given the ``unmodelled'' nature of burst searches, the estimated event significances from burst searches tend to be lower than matched-filtered searches for the same event, especially for lower-mass compact binary signals. On August 17th, the first BNS merger was observed by the LIGO and Virgo detectors~\cite{TheLIGOScientific:2017qsa}. This event was very quickly associated with a short gamma-ray burst (sGRB) detected by both the Fermi and Integral satellites~\cite{GBM:2017lvd}. Within 10 hours, the host galaxy had been optically identified. 
Within 16 days, the source had been identified across all bands of the EM spectrum. This single event heralded the true beginning of multi-messenger astronomy, and raised as many questions as it answered. While confirming the hypothetical link between BNS mergers and sGRBs, the delay between the gamma and X-ray signals (9 days) suggested that not all sGRBs are the same~\cite{Monitor:2017mdv}. This fact generated a number of studies regarding equation of state models, and the possible remnant of such mergers. This one event also allowed the LVC to update the BNS event rate from $< 12,600$ Gpc$^{-3}$ yr$^{-1}$ in O1, to 320-4740 Gpc$^{-3}$ yr$^{-1}$ in O2~\cite{TheLIGOScientific:2017qsa}. Perhaps the most interesting results from this event concern fundamental physics. The delay between the detection of GWs and gamma-rays was 1.74 seconds. This places a bound on the fractional difference between the speed of light and the speed of GWs of $-3\times10^{-15}\leq \Delta c/c \leq 7\times 10^{-16}$~\cite{Monitor:2017mdv}. This single result has implications for certain alternative theories of gravity. For instance, the fact that GWs seem to travel at the same speed as light strongly constrains the family of alternative theories of gravity that require $v_+, v_{\mathrm{\times}} \neq v_{\rm{light}}$ (e.g., beyond Horndeski, quartic/quintic Galileon, Gauss-Bonnet, if they are supposed to explain cosmology), as well as theories that predict a massive graviton. Furthermore, by investigating the Shapiro delay, the GW170817 detection also rules out MOND and dark-matter-emulating MOND-like theories (e.g., TeVeS), as according to these theories, the GWs would have arrived 1000 days after the gamma-ray detection. The detection of GWs by the Advanced LIGO and Advanced Virgo detectors has had a major effect on our understanding of the Universe, sparking the fields of GW and multi-messenger astronomy and cosmology~\cite{GBM:2017lvd,Abbott:2017xzu}.
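The speed bound quoted above follows from simple light-travel arithmetic; a sketch assuming, as in the LVC analysis, the conservative lower limit of $\sim$26 Mpc on the source distance and attributing the whole 1.74 s delay to propagation:

```python
# Fractional GW/EM speed difference implied by the GW170817/GRB170817A delay,
# attributing the entire delay to propagation over the conservative 26 Mpc
# distance lower limit (this gives the upper-bound side of the constraint).
MPC_IN_M = 3.0857e22        # metres per megaparsec
C = 2.998e8                 # speed of light in m/s

distance_m = 26.0 * MPC_IN_M
delay_s = 1.74
travel_time_s = distance_m / C          # ~2.7e15 s of light travel time
dv_over_v = delay_s / travel_time_s     # ~7e-16, matching the quoted bound
```

The lower-bound side ($-3\times10^{-15}$) comes instead from allowing the gamma-rays to have been emitted up to $\sim$10 s after the GW signal.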
It is becoming increasingly clear that combining EM and GW information will be the only way to better explain observed phenomena in our Universe. The third Advanced Detector Observation run (O3) will begin in early 2019, and will run for a year~\cite{Aasi:2013wya}. We expect the detected events to be dominated by BBH mergers at a rate of one per week. However, we also expect on the order of ten BNS events during this time, and possibly a NS-BH discovery (and potentially more than one such system). Given the effects of one GW detection on both astrophysics and fundamental physics, we expect O3 to fundamentally change our view of the Universe. \begin{table*}[t] \begin{center} \scriptsize \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & GW150914 & GW151226 & LVT151012 & GW170104 & GW170608 & GW170814 & GW170817 \\ \hline $m_1/{\rm ~M}_{\odot}$ & $36.2^{+5.2}_{-3.8}$ & $14.2^{+8.3}_{-3.7}$ & $23^{+18}_{-6}$ & $31.2^{+8.4}_{-6.0}$ & $12^{+7}_{-2}$ & $30.5^{+5.7}_{-3.0}$ & $(1.36,1.60)$ \\ $m_2/{\rm ~M}_{\odot}$ & $29.1^{+3.7}_{-4.4}$ & $7.5^{+2.3}_{-2.3}$ & $13^{+4}_{-5}$ & $19.4^{+5.3}_{-5.9}$ & $7^{+2}_{-2}$ & $25.3^{+2.8}_{-4.2}$ & $(1.16,1.36)$ \\ ${\mathcal M}/{\rm ~M}_{\odot}$ & $28.1^{+1.8}_{-1.5}$ & $8.88^{+0.33}_{-0.28}$ & $15.1^{+1.4}_{-1.1}$ & $21.1^{+2.4}_{-2.7}$ & $7.9^{+0.2}_{-0.2}$ & $24.1^{+1.4}_{-1.1}$ & $1.186^{+0.001}_{-0.001}$\\ $q$ & $0.81^{+0.17}_{-0.20}$ & $0.52^{+0.40}_{-0.29}$ & $0.57^{+0.38}_{-0.37}$ & $0.62^{+}_{-}$ & $0.6^{+0.3}_{-0.4}$ & $0.83^{+}_{-}$ & $(0.73,1)$ \\ $M_f/{\rm ~M}_{\odot}$ & $62.3^{+3.7}_{-3.1}$ & $20.8^{+6.1}_{-1.7}$ & $35^{+14}_{-4}$ & $48.7^{+5.7}_{-4.6}$ & $18.0^{+4.8}_{-0.9}$ & $53.2^{+3.2}_{-2.5}$ & $---$ \\ $\chi_{eff}$ & $-0.06^{+0.14}_{-0.29}$ & $0.21^{+0.20}_{-0.10}$ & $0.03^{+0.31}_{-0.20}$ & $-0.12^{+0.21}_{-0.30}$ & $0.07^{+0.23}_{-0.09}$ & $0.06^{+0.12}_{-0.12}$ & $0.00^{+0.02}_{-0.01}$ \\ $a_f$ & $0.68^{+0.05}_{-0.06}$ & $0.74^{+0.06}_{-0.06}$ & $0.66^{+0.09}_{-0.10}$ & $0.64^{+0.09}_{-0.20}$ & 
$0.69^{+0.04}_{-0.05}$ & $0.70^{+0.07}_{-0.05}$ & $---$ \\ $D_L$ / Mpc & $420^{+150}_{-180}$ & $440^{+180}_{-190}$ & $1020^{+500}_{-490}$ & $880^{+450}_{-390}$ & $340^{+140}_{-140}$ & $540^{+130}_{-210}$ & $40$ \\ $z$ & $0.090^{+0.029}_{-0.036}$ & $0.094^{+0.035}_{-0.039}$ & $0.201^{+0.086}_{-0.091}$ & $0.18^{+0.08}_{-0.07}$ & $0.07^{+0.03}_{-0.03}$ & $0.11^{+0.03}_{-0.04}$ & $0.0099$ \\ \hline \end{tabular} \end{center} \caption{Source properties of the published BBH and BNS discoveries (June 2018) by the LIGO and Virgo detectors.} \label{tab:definition_sources} \end{table*} \section{Black hole genesis and archaeology } \label{Sec:BHgenesis} \vspace{-3mm} {\it Contributors:} M.~Colpi and M.~Volonteri \vspace{3mm} \subsection{Black Hole Genesis} Gravity around BHs is so extreme that gravitational energy is converted into EM and kinetic energy with high efficiency when gas and/or stars skim the event horizon of astrophysical BHs. Black holes of stellar origin (sBHs) with masses close to those of known stars power galactic X-ray sources in binaries, while supermassive black holes (SMBHs) with masses up to billions of solar masses power luminous quasars and active nuclei at the centre of galaxies. BHs are key EM sources in our cosmic landscape. According to General Relativity (GR), Kerr BHs, described by their mass $M_{\rm BH}$ and spin vector ${\bf S}$ of magnitude $S=\chi_{\rm spin} \, G \, M_{\rm BH}^2/c$ (with $-1\leq \chi_{\rm spin}\leq 1$), are the unique end state of unhalted gravitational collapse. Thus, understanding astrophysical BHs implies understanding the conditions under which gravitational equilibria lose their stability irreversibly. The chief and only example we know is the case of NSs, which can exist up to a maximum mass $M_{\rm max}^{\rm NS}$, around $2.2{\rm ~M}_{\odot}-2.6{\rm ~M}_{\odot}$.
No baryonic microphysical state emerges in nuclear matter, described by the standard model, capable of reversing the collapse to a BH state during the contraction of the iron core of a supernova progenitor. The existence of $M_{\rm max}^{\rm NS}$ is due to the non-linearity of gravity, which is sourced not only by the mass ``charge'' but also by pressure/energy density, according to the Oppenheimer-Volkoff equation. Thus, sBHs carry a mass exceeding $M_{\rm max}^{\rm NS}$. Discovering sBHs lighter than this value (not known yet to high precision) would provide direct evidence of the existence of PBHs arising from phase transitions in the early universe. As of today, we know formation scenarios in the mass range between $5{\rm ~M}_{\odot} -40{\rm ~M}_{\odot}$, resulting from the core-collapse of very massive stars. The high masses of the sBHs revealed by the LVC, up to $36{\rm ~M}_{\odot}$, hint at formation sites of low metallicity, $Z$\footnote{In astrophysics ``metallicity'' refers to the global content of heavy elements above those produced by primordial nucleosynthesis}, below $0.5\%$ of the solar value $Z_\odot=0.02$ \cite{Belczynski:2010tb,Spera15,Belczynski:2016obo}. Theory extends this range up to about $40-60{\rm ~M}_{\odot}$~\cite{Giacobbo18} and predicts the existence of a gap, between about $60 \lesssim M_{\rm BH}/{\rm ~M}_{\odot} \lesssim 150$, since in this window pair instabilities during oxygen burning lead either to substantial mass losses or (in higher mass stellar progenitors) the complete disruption of the star~\cite{Woosley17,Belczynski:2016jno,Spera:2017fyx}. sBHs heavier than $150{\rm ~M}_{\odot}$ can form at $Z \, < \, 1\% \, Z_\odot$, if the initial mass function of stars extends further out, up to hundreds of solar masses.
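For orientation, the mass windows quoted above can be collected into a small look-up. The boundaries are the approximate values from the text, not sharp physical limits, and the function name is ours:

```python
def sbh_formation_hint(mass_msun):
    """Rough formation-channel hint for a compact object of the given mass,
    using the approximate boundaries quoted in the text (masses in M_sun)."""
    if mass_msun <= 2.6:
        return "NS possible (at or below M_max^NS, ~2.2-2.6 M_sun)"
    if mass_msun < 5:
        return "poorly constrained; a BH this light could hint at a PBH"
    if mass_msun <= 60:
        return "core collapse of (very) massive, low-metallicity stars"
    if mass_msun < 150:
        return "pair-instability gap: disfavoured from single stellar collapse"
    return "very massive progenitors at Z < 1% Z_sun (extended IMF)"
```

For example, the $36{\rm ~M}_{\odot}$ primary of GW150914 falls in the core-collapse window, while a merger component found inside the $60$--$150{\rm ~M}_{\odot}$ gap would point to a non-stellar-collapse origin, as discussed below.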
The majestic discovery of BBHs, detected by LVC interferometers~\cite{Abbott:2016blz,TheLIGOScientific:2016htt,Abbott:2016nmj,Abbott:2017vtc,Abbott:2017gyy,Abbott:2017oio} at the time of their coalescence, further indicates, from an astrophysical standpoint, that in nature sBHs have the capability of pairing to form binary systems, contracted to such an extent that GW emission drives their slow inspiral and final merger on observable cosmic timescales. As GWs carry exquisite information on the individual masses and spins of the BHs, and on the luminosity distance of the source, detecting a population of coalescing sBHs with the LVC in their advanced configurations, and with the next generation of ground-based detectors~\cite{Evans:2016mbw,2012CQGra..29l4013S}, will let us reconstruct the mass spectrum and evolution of sBHs out to very large redshifts. Observations teach us that astrophysical BHs interact with their environment, and that there are two ways to increase the mass: through accretion, through mergers, or both. These are the two fundamental processes that drive BH mass and spin evolution. Gas or stars accreted onto a BH carry angular momentum, either positive or negative, depending on the orientation of the disk angular momentum relative to the BH spin. As a consequence, the spin changes in magnitude and direction~\cite{King05,Perego09,Berti:2008af}. In a merger, the spin of the new BH is the sum of the individual and orbital angular momenta of the two BHs, prior to merging~\cite{Barausse:2012qz,Rezzolla16}. An outstanding and unanswered question is whether sequences of multiple accretion-coalescence events can let sBHs grow, in some (rare) cases, up to the realm of SMBHs. If this were true, the ``only'' collapse to a BH occurring in nature would be driven by the instability of NSs at $M_{\rm max}^{\rm NS}$.
SMBHs are observed as luminous quasars and active galactic nuclei, fed by accretion of gas~\cite{Merloni16}, or as massive dark objects at the centre of quiescent galaxies which perturb the stellar and/or gas dynamics in the nuclear regions~\cite{Kormendy13}. The SMBH mass spectrum currently observed extends from about $5\times 10^4{\rm ~M}_{\odot}$ (the SMBH in the galaxy RGG118~\cite{Baldassare15}) up to about $1.2\times 10^{10}{\rm ~M}_{\odot}$ (SDSS J0100+2802~\cite{Wu15}), as illustrated in Figure \ref{seeds-BHmass-function}. The bulk of active and quiescent SMBHs are nested at the centre of their host galaxies, where the potential well is the deepest. The correlation between the SMBH mass $M_\bullet$ and the stellar velocity dispersion $\sigma$ in nearby spheroids, and even in disk/dwarf galaxies~\cite{2018ApJ...858..118N}, hints towards a concordant evolution, established in the centre-most region controlled by powerful AGN outflows. Extrapolated to lower mass disk or dwarf galaxies, this correlation predicts BH masses of $M_\bullet\sim 10^3{\rm ~M}_{\odot}$ at $\sigma$ as low as $10{\rm ~km} {\rm ~s}^{-1}$, typical of nuclear star clusters (globular clusters) \cite{vandenbosch16}. We remark that only BHs of mass in excess of $10^3{\rm ~M}_{\odot}$ can grow a stellar cusp. Lighter BHs would random walk, and thus would have a gravitational sphere of influence smaller than both the mean stellar separation and the random-walk mean path length. Observations suggest that SMBHs have grown in mass through repeated episodes of gas accretion and (to a minor extent) through mergers with other BHs. This complex process initiates with the formation of a {\it seed} BH of yet unknown origin~\cite{Volonteri10}. The concept of a seed has emerged to explain the appearance of a large number of SMBHs of a billion suns at $z\sim 6,$ shining when the universe was only 1 Gyr old~\cite{Jiang16}.
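The timing problem that motivates seeds can be made quantitative with a standard back-of-the-envelope estimate: for Eddington-limited accretion with radiative efficiency $\epsilon$, the mass grows as $M(t)=M_{\rm seed}\,e^{[(1-\epsilon)/\epsilon]\,t/t_{\rm Edd}}$, with $t_{\rm Edd}=Mc^2/L_{\rm Edd}\approx 450$ Myr. A sketch, assuming $\epsilon=0.1$ (our illustrative choice, not a calculation from this article):

```python
import math

EPS = 0.1          # assumed radiative efficiency
T_EDD_YR = 4.5e8   # Eddington timescale M c^2 / L_Edd, roughly 450 Myr

def time_to_grow_yr(m_seed_msun, m_final_msun):
    """Years of uninterrupted Eddington-limited accretion needed to grow
    m_seed to m_final: t = ln(m_final/m_seed) * eps/(1-eps) * t_Edd."""
    return math.log(m_final_msun / m_seed_msun) * EPS / (1.0 - EPS) * T_EDD_YR

t_light = time_to_grow_yr(1e2, 1e9)   # light seed: ~0.8 Gyr, barely fits
t_heavy = time_to_grow_yr(1e5, 1e9)   # heavy seed: ~0.46 Gyr, comfortable
```

A $100{\rm ~M}_{\odot}$ light seed thus only just reaches $10^9{\rm ~M}_{\odot}$ within the $\sim$1 Gyr available at $z\sim 6$, and only if accretion is never interrupted; this is why heavy seeds or phases of super-Eddington accretion are invoked below.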
Furthermore, the comparison between the local SMBH mass density, as inferred from the $M_\bullet-\sigma$ relation, and the limits imposed by the cosmic X-ray background light, resulting from unresolved AGN powered by SMBHs in the mass interval $10^{8-9}{\rm ~M}_{\odot}$, indicates that radiatively efficient accretion played a large part in the building of SMBHs below $z\sim 3,$ and that information on their initial mass spectrum has been lost~\cite{Marconi04}. Thus, SMBHs are believed to emerge from a population of seeds of yet unconstrained initial mass, in a mass range {\it intermediate} between those of sBHs and SMBHs, about $10^2{\rm ~M}_{\odot}$ to $10^5{\rm ~M}_{\odot}$, and therefore they are sometimes dubbed intermediate-mass BHs (IMBHs). Seeds are IMBHs that form ``early'' in cosmic history (at redshift $z\sim 20$, when the universe was only 180 Myr old). They form in extreme environments, and grow over cosmic time by accretion and mergers. Different formation channels have been proposed for the seeds~\cite{Volonteri10,Schleicher13,Latif:2016qau}. {\it Light seeds} refer to IMBHs of about $100{\rm ~M}_{\odot}$ that form from the relativistic collapse of massive Pop III stars, but the concept extends to higher masses, up to $\sim 10^3{\rm ~M}_{\odot}$. These seeds likely arise from runaway collisions of massive stars in dense star clusters of low metallicity \cite{Mapelli16,Devecchi12}, or from mergers of sBHs in star clusters subjected to gas-driven evolution~\cite{Lupi14}. The progenitors of light seeds are massive stars; however, light seeds could also be the end result of repeated mergers among BHs~\cite{Gerosa:2017kvu}. Finding merging sBHs with the LVC detectors with masses in the pair-instability gap would be a clear hint of a second generation of mergers resulting from close dynamical interactions. 
\begin{figure}[!t] \begin{center} \includegraphics[width=1.00\textwidth]{seeds-BHmass-function} \caption{Cartoon illustrating the BH mass spectrum encompassing the whole astrophysically relevant range, from sBHs to SMBHs, through the unexplored (light-green) zone where BH seeds are expected to form and grow. Vertical black lines denote the two sBH masses in GW150914, the mass $M_\bullet$ of RGG118 (the lightest SMBH known as of today, in the dwarf galaxy RGG118), of SgrA* in the Milky Way, and of J0100+2802 (the heaviest SMBH ever recorded). The mass distribution of sBHs, drawn from the observations of the Galactic sBH candidates, has been extended to account for the high-mass tail following the discovery of GW150914. The minimum (maximum) sBH mass is set equal to $3{\rm ~M}_{\odot}$ ($60{\rm ~M}_{\odot}$), and the theoretically predicted pair-instability gap is depicted as a narrow darker-grey strip. The SMBH distribution has been drawn scaling their mass according to the local galaxy mass function and the $M_\bullet$-$\sigma$ correlation. The decline below $\sim 10^5{\rm ~M}_{\odot}$ is set arbitrarily: BHs of $\sim 10^{4-5}{\rm ~M}_{\odot}$ may not be ubiquitous in low-mass galaxies, as a nuclear star cluster is often in place in these galaxies, which may or may not host a central IMBH~\cite{Graham09}. The black stars and dashed tracks illustrate the possibility that a SMBH at high redshift forms as an sBH-only seed (born on the left side of the sBH gap) or as a light seed (on the right of the gap) which then grows through phases of super-Eddington accretion~\cite{Lupi16}. The red circle and dotted track illustrate the possibility of a {\it genetic} divide between sBHs and SMBHs, and that a heavy seed forms through the direct collapse of a supermassive protostar in a metal-free, atomic-hydrogen-cooling DM halo~\cite{Latif13,Schleicher13}. The seed later grows via gas accretion and mergers with SMBHs in other dark halos. 
} \label{seeds-BHmass-function} \end{center} \end{figure} Accretion onto sBHs occurs in X-ray binaries, and there is no evidence of accretion from the interstellar medium onto isolated sBHs in the Milky Way. But in the gas-rich, dense environments characteristic of galaxy halos at high redshifts, single sBHs might accrete and grow sizably, despite their initially small gravitational sphere of influence, if specific dynamical conditions are met. For instance, in rare cases they may be captured in dense gas clouds within the galaxy~\cite{Lupi16}. Another possibility is that a sBH forms at the very center of the galaxy, where large inflows may temporarily deepen the potential well and allow it to grow significantly. This ``winning sBH'' must be significantly more massive than all other sBHs in the vicinity to avoid being ejected by scatterings and to be retained at the center of the potential well by dynamical friction. Similar conditions can also be present in nuclear star clusters characterized by high escape velocities. After ejection of the bulk of the sBHs, the few remaining isolated BHs can grow by tidally disrupting stars and by gas accretion~\cite{Stone17}, sparking their growth to become IMBHs. {\it Heavy seeds} refer instead to IMBHs of about $10^{4-5}{\rm ~M}_{\odot}$ resulting from the monolithic collapse of massive gas clouds, forming in metal-free halos with virial temperatures $T_{\rm vir} \gtrsim 10^4$~K which happen to be exposed to an intense H$_2$ photodissociating ultraviolet flux~\cite{Latif13,Dijkstra14,Habouzit16,Regan17,Latif:2016qau}. These gas clouds do not fragment; they condense into a single massive proto-star which is constantly fueled by an influx of gas that lets it grow large and massive. The star then contracts sizably and may form a quasi-star \cite{Begelman10}, or it may encounter the GR instability that leads the whole star to collapse directly into a BH. 
Heavy seeds might also form in major gas-rich galaxy mergers over a wider range of redshifts, as mergers trigger massive nuclear inflows \cite{Mayer15}. Figure \ref{seeds-BHmass-function} is a cartoon summarising the current knowledge of BHs in our Universe, and the link that may exist between sBHs and SMBHs, which is established by seed BHs along the course of cosmic evolution. The seeds of the first SMBHs are still elusive to most instruments that exist today, preventing us from setting constraints on their nature. Seed BHs are necessarily a {\it transient} population of objects, and inferring their initial mass function and spin distribution from observations is possible only if they can be detected, either through EM or GW observations, at very high $z$, as high as $\sim 20$ (even $z\sim 40$, as discussed recently). Since, according to GR, BHs of any flavour captured in binaries are loud sources of GWs at the time of their merging, unveiling seeds and MBHs through cosmic ages via their GW emission at coalescence would provide unique and invaluable information on {\sl BH genesis and evolution}. The {\sl Gravitational Wave Universe} is the universe we can sense using GWs as messengers~\cite{Colpi17,Audley:2017drz}. In this universe, BBHs are key sources carrying invaluable information on their masses, spins and luminosity distance, encoded in the GW signal. There is one key condition that needs to be fulfilled: the BHs we aim to detect must pair and form a binary with a GW coalescence time smaller than the Hubble time, possibly close to the redshift of their formation. This condition, enabling the detection of seeds at very high redshifts, is extremely challenging to fulfil. BHs in binaries form at ``large'' separation. Thus, nature has to provide additional dissipative processes leading to the contraction of the BBH down to the scale where GWs drive the inspiral. 
This requires a strong coupling of the two BHs with the environment, before and after forming a binary system. As we now discuss, understanding this coupling is a current challenge in contemporary astrophysics, cosmology and computational physics~\cite{Colpi14}. \subsection{Black Hole Binaries: the difficulty of pairing} Due to the weakness of gravity, BBH inspirals driven by GW emission occur on a timescale: \begin{eqnarray} \nonumber t_{\rm coal} & =& {\frac{5 c^5}{256G^3}} {\cal G}(e)(1-e^2)^{7/2} {\frac{{a}^4}{\nu\, M_{\rm BBH}^3}} \\ & = & {\frac{5\cdot 2^4}{256}} {\cal G}(e)(1-e^2)^{7/2}{\frac{GM_{\rm BBH}}{ \nu c^3}{\tilde a}^4}, \label{tcoal} \end{eqnarray} where $M_{\rm BBH}$ is the total mass of the BBH, $a$ and $e$ are the semi-major axis and eccentricity, respectively (${\cal G}(e)$ is a weak function of $e$, with ${\cal G}(0)=1$), $\nu=\mu/M_{\rm BBH}$ is the symmetric mass ratio ($\nu=1/4$ for equal-mass binaries), with $\mu$ the reduced mass of the binary, and ${\tilde a}=a/R_{\rm G}$ is the separation in units of $R_{\rm G}=2\,GM_{\rm BBH}/c^2$. The values of $a$ and $e$ at the time of formation of the binary determine $t_{\rm coal}$, and this is typically the longest timescale in the binary's evolution. A (circular) binary hosting two equal-mass seed BHs, with total mass $M_{\rm BBH}=10^3{\rm ~M}_{\odot}$ ($M_{\rm BBH}=10^5{\rm ~M}_{\odot}$), would reach coalescence in $0.27$ Gyr, corresponding to the cosmic time at redshift $z\sim 15,$ if the two BHs are at an initial separation of $a\sim \nu^{1/4}\, 4.84\times 10^4 R_{\rm G}$ ($\nu^{1/4}\,1.5\times 10^4 R_{\rm G}$), corresponding to $a\sim \nu^{1/4}\,1$ AU ($\nu^{1/4}\,30$ AU). For the case of two equal-mass MBHs of $10^6{\rm ~M}_{\odot}$ coalescing at $z\sim 3$ (close to the peak of the star-formation rate and of AGN activity in the universe), the corresponding separation is about one milli-parsec. These are {\it tiny} scales, and to reach these separations the binary needs to harden under a variety of dissipative processes. 
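Equation (\ref{tcoal}) is straightforward to evaluate numerically. The following is a minimal sketch in cgs units, with ${\cal G}(e)\simeq 1$; as an illustration it evaluates the case of two $30{\rm ~M}_{\odot}$ BHs on a circular $50{\rm ~R}_{\odot}$ orbit (a configuration discussed in Section~\ref{Sec:binaryevolution}), which takes of order the Hubble time to merge:

```python
# Minimal sketch of Eq. (tcoal), the Peters-type coalescence time,
# in cgs units and with G(e) ~ 1. Constants are standard cgs values.
G, c = 6.674e-8, 2.998e10          # gravitational constant, speed of light
MSUN, RSUN = 1.989e33, 6.957e10    # solar mass (g), solar radius (cm)
GYR = 3.156e16                     # one Gyr in seconds

def t_coal(m1, m2, a, e=0.0):
    """Coalescence time (s) for a binary with semi-major axis a (cm)."""
    m_bbh = m1 + m2
    nu = m1 * m2 / m_bbh**2        # symmetric mass ratio
    return (5.0 * c**5 / (256.0 * G**3)) * (1.0 - e**2)**3.5 \
        * a**4 / (nu * m_bbh**3)

# Two 30 Msun BHs on a circular 50 Rsun orbit: t_coal ~ 17 Gyr,
# i.e. of order the Hubble time.
print(t_coal(30 * MSUN, 30 * MSUN, 50 * RSUN) / GYR)
```

The steep $a^4$ scaling is the whole point: halving the separation shortens the coalescence time by a factor of 16, which is why the hardening mechanisms discussed below are decisive.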
The quest for efficient mechanisms of binary shrinking, on AU scales for sBHs and BH seeds, and on sub-galactic scales for MBHs, makes merger-rate predictions extremely challenging, as Nature has to set rather fine-tuned conditions for a BBH to reach these critical separations. Only below these critical distances does the binary contract driven by GW emission. The merger occurs when the GW frequency (to leading order equal to twice the Keplerian frequency) reaches a maximum value, \begin{equation} f_{\rm GW}^{\rm max}\sim {\frac{c^3} {\pi 6^{3/2} GM_{\rm BBH}}}=4.4\times 10^3 \left ({\frac{{\rm ~M}_{\odot}}{ M_{\rm BBH}}}\right )\,\rm Hz. \label{fmax} \end{equation} This frequency $f_{\rm GW}^{\rm max}$ scales with the inverse of the total mass of the binary, $M_{\rm BBH}$, as it is determined by the size of the horizons of the two BHs at the time of coalescence. Coalescing sBHs in binaries occur in galactic fields~\cite{Belczynski:2010tb,Dominik:2013tma}, and/or in stellar systems such as globular clusters and/or nuclear star clusters \cite{Portegies00,Benacquista13,Rodriguez:2016avt,Antonini16,Haster16}. Thus, sBHs trace phenomena {\it inside galaxies}. Since there is a time delay between the formation of the binary and its coalescence, dictated by the efficiency of the hardening processes, sBHs can merge in galaxies of all types, because during this lapse of time, which can be of the order of Gyr, host galaxies undergo strong evolution. Instead, coalescing IMBHs, the seeds of the SMBHs, formed in DM halos at high redshifts, and thus trace a different environment. Since they form in pristine gas clouds, their pairing requires either in-situ formation, e.g., from the fissioning of rotating supermassive stars \cite{Reisswig13}, or halo-halo mergers on cosmological scales, subject to rapid evolution and embedded in a cosmic web that feeds baryons through gaseous streams~\cite{Valiante17}. 
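The scaling of Eq. (\ref{fmax}) can be checked directly. In this sketch the $65{\rm ~M}_{\odot}$ GW150914-like total mass, the $250{\rm ~M}_{\odot}$ binary and the redshift $z=1$ are purely illustrative numbers, chosen to anticipate the low-frequency caveat for very massive binaries discussed later:

```python
import math

G, c, MSUN = 6.674e-8, 2.998e10, 1.989e33   # cgs constants

def f_gw_max(m_bbh):
    """Eq. (fmax): peak GW frequency (Hz), twice the Keplerian frequency
    at r = 6 G m_bbh / c^2 (the Schwarzschild ISCO of the total mass)."""
    return c**3 / (math.pi * 6.0**1.5 * G * m_bbh)

print(f_gw_max(MSUN))        # ~4.4e3 Hz, the normalization in Eq. (fmax)
print(f_gw_max(65 * MSUN))   # ~68 Hz for a GW150914-like total mass
# For a very massive binary at redshift z, the observed frequency is
# reduced by a further factor (1 + z):
print(f_gw_max(250 * MSUN) / (1.0 + 1.0))  # ~8.8 Hz, below the LIGO band
```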
Coalescing MBHs refer exclusively to galaxy-galaxy mergers of different morphological types \cite{Colpi14} occurring during the clustering of cosmic structures, which encompasses a wide range of redshifts, from $z\sim 9$ to $z\sim 0$, passing through the era of cosmic reionization and of cosmic high noon, when the average star-formation rate peaks. In the following Sections we describe in detail the different channels proposed for the formation and pairing of BBHs at all scales. For each physical scenario we review the state of the art, the challenges and unanswered questions, and the most promising lines of research for the future. Ample space is devoted to stellar-mass objects (BNS and BH-NS binaries and BBHs, with a particular focus on the latter), for which we discuss separately the three main formation channels: pairing of isolated binaries in the field, the various flavors of dynamical formation processes, and relics from the early universe. We then move on to discuss the state of the art of our understanding of MBH binary pairing and evolution, the current theoretical and observational challenges, and the role of future surveys and pulsar timing arrays (PTAs) in unveiling the cosmic population of these elusive systems. \section{The formation of compact object mergers through classical binary stellar evolution}\label{Sec:binaryevolution} \vspace{-3mm} {\it Contributors:} K.~Belczynski, T.~Bulik, T.~M. Tauris, G.~Nelemans \vspace{3mm} \subsection{Stellar-origin black holes} The LIGO/Virgo detections of BBH mergers can be explained with stellar-origin BHs~\cite{Belczynski:2016obo} or with primordial BHs that formed from density fluctuations right after the Big Bang \cite{Garcia-Bellido:2017imq}. Stars of different ages and chemical compositions can form BHs and subsequently BBH mergers. 
In particular, the first metal-free (population~III) stars could have produced BBH mergers in the early Universe ($z\approx10$), while the local ($z\approx 0-2$) Universe is most likely dominated by mergers formed by subsequent generations of more metal-rich population~II and I stars~\cite{Belczynski:2016ieo}. The majority of population~I/II stars (hereafter: stars) are found in galactic fields ($\sim 99\%$), where they do not experience frequent or strong dynamical interactions with other stars. In contrast, a small fraction of stars ($\sim 1\%$) are found in dense structures like globular or nuclear clusters, in which stellar densities are high enough that stars interact dynamically with other stars. Here, we briefly summarize basic concepts of isolated (galactic-field) stellar and binary evolution that leads to the formation of BBH mergers. \subsubsection{Single star evolution} Detailed evolutionary calculations with numerical stellar codes that include rotation (like BEC, MESA or the Geneva code, e.g., \cite{ywl10,Paxton2015,Eggenberger2012,Georgy2013}) allow us to calculate the evolution of massive stars. Note that these are neither (detailed) hydrodynamic nor multi-dimensional calculations (as such computations are well beyond current computing capabilities); rather, they solve the basic equations of stellar structure/radiation, energy transport and element diffusion with corrections for the effects of rotation. These calculations are burdened with uncertainties in the treatment of various physical processes (nuclear reaction rates, convection and mixing, transport of angular momentum within a star, wind mass loss, pulsations and eruptions), yet progress is being made to improve on stellar modeling. Stellar models are used to predict the structure and physical properties of massive stars at the time of core collapse, after nuclear fusion stops generating the energy and (radiation) pressure that support the star. 
This is also the point at which (at the latest) a transition to hydrodynamical calculations is made to assess the fate of a collapsing star \cite{Oconnor2011,Fryer2012,Ertl2016}. For a star to form a BH, it is required either that the explosion engine is weak or delayed (so energy can leak from the center of the collapsing star) or that the infalling stellar layers are dense and massive enough to choke the explosion engine adopted in a given hydrodynamical simulation. In consequence, BHs form either in weak supernova explosions (with some material that is initially ejected falling back and accreting onto the BH) or without a supernova explosion at all (in a so-called direct collapse). Note that signatures of BH formation may already have been detected. For example, in SN~1987A there is no sign of a pulsar \cite{Alp:2018oek}, although the pulsar may still appear when dust obscuration decreases, or it may simply beam in another direction. Further evidence is the disappearance, with no sign of a supernova, of a $25{\rm ~M}_{\odot}$ supergiant star \cite{Adams2017}, although this could potentially be a long-period pulsating Mira variable star that will re-emerge after a deep decline in luminosity. Stellar evolution and core-collapse simulations favor the formation of BHs with masses $M_{\rm BH} \sim 5-50 {\rm ~M}_{\odot}$ and possibly with very high masses $M_{\rm BH} \gtrsim 135 {\rm ~M}_{\odot}$. The low-mass limit is set by the so-called ``first mass gap'', named after the scarcity of compact objects in the mass range $2-5{\rm ~M}_{\odot}$ \cite{Bailyn1998,Ozel2010}. However, this mass gap may be narrower than previously thought, as potential objects that fill the gap are discovered \cite{frb+08,vbk11,Linares:2018ppq}. The second gap arises from the occurrence of pair-instability SNe (PISNe), as discussed below. 
The first mass gap may be explained either by an observational bias in the determination of BH masses in Galactic X-ray binaries \cite{Kreidberg2012} or in terms of the timescale of the development of the supernova explosion engine: for short timescales ($\sim\!100\;{\rm ms}$) a mass gap is expected, while for longer timescales ($\sim\!1\;{\rm s}$) a mass gap does not appear and NSs and BHs should be present in the $2-5{\rm ~M}_{\odot}$ mass range \cite{Belczynski:2011bn}. The mass threshold between NSs and BHs is not yet established, but realistic equations of state indicate that this threshold lies somewhere in the range $2.0-2.6 {\rm ~M}_{\odot}$. The second limit at $M_{\rm BH} \sim 50 {\rm ~M}_{\odot}$ is caused by (pulsational) PISNe \cite{Heger2002,Woosley2016}. Massive stars with He cores in the mass range $45 \lesssim M_{\rm He} \lesssim 65 {\rm ~M}_{\odot}$ are subject to pulsational PISNe before they undergo core collapse. These pulsations are predicted to remove the outer layers of a massive star (above the inner $40-50{\rm ~M}_{\odot}$), and therefore this process limits the BH mass to $\sim 50{\rm ~M}_{\odot}$. BHs within these two limits ($M_{\rm BH} \sim 5-50 {\rm ~M}_{\odot}$) are the result of the evolution of stars with an initial mass $M_{\rm ZAMS} \approx 20-150{\rm ~M}_{\odot}$. For high-metallicity stars (typical of stars in the Milky Way disk, $Z={\rm ~Z}_{\odot}=0.01-0.02$; \cite{Asplund2009,Villante2014}) BHs form up to $\sim 15{\rm ~M}_{\odot}$, for medium-metallicity stars ($Z=10\% {\rm ~Z}_{\odot}$) BHs form up to $\sim 30{\rm ~M}_{\odot}$, while for low-metallicity stars ($Z=1\% {\rm ~Z}_{\odot}$) BHs form up to $\sim 50{\rm ~M}_{\odot}$~\cite{Belczynski:2009xy,Kruckow:2018slo}. The remaining question is whether stars can form BHs above $\sim 50{\rm ~M}_{\odot}$. 
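The metallicity dependence quoted above can be condensed into a simple lookup. This is a schematic sketch only: the maximum masses are the approximate values given in the text, while the metallicity boundaries between the three regimes are assumptions made for illustration, not model output:

```python
# Schematic lookup of the approximate maximum stellar-origin BH mass
# (below the PISN limit) as a function of metallicity, encoding the
# approximate values quoted in the text. The regime boundaries are
# illustrative assumptions, not outputs of stellar-evolution models.
ZSUN = 0.017   # assumed solar metallicity (mass fraction)

def max_bh_mass(metallicity):
    """Approximate maximum stellar-origin BH mass in Msun."""
    if metallicity >= 0.5 * ZSUN:     # high Z (Milky Way disk)
        return 15.0
    if metallicity >= 0.05 * ZSUN:    # medium Z (~10% solar)
        return 30.0
    return 50.0                       # low Z (~1% solar), PISN-limited

print(max_bh_mass(ZSUN), max_bh_mass(0.1 * ZSUN), max_bh_mass(0.01 * ZSUN))
```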
Stars with He cores in the mass range $65\lesssim M_{\rm He} \lesssim 135 {\rm ~M}_{\odot}$ are subject to PISNe \cite{Bond1984a,Fryer2001,Heger2002} that totally disrupt the star and therefore do not produce a BH. However, it is expected that stars with He cores as massive as $M_{\rm He} \gtrsim 135 {\rm ~M}_{\odot}$, although subject to pair instability, are too massive to be disrupted, and they could possibly form massive BHs ($M_{\rm BH} \gtrsim 135 {\rm ~M}_{\odot}$). If these massive BHs exist, then a second mass gap will emerge, with no BHs in the mass range $M_{\rm BH} \simeq 50-135 {\rm ~M}_{\odot}$ \cite{Heger2003,Yusof2013,Spera2015,Marchant2016}. If these massive BHs exist, and if they find their way into merging BBH binaries, then GW observatories will eventually find them \cite{Belczynski:2014iua,Marchant2016}. The existence of very massive BHs would constrain the extent of the stellar initial mass function (IMF) and the wind mass-loss rates for the most massive stars ($M_{\rm ZAMS}>300{\rm ~M}_{\odot}$) that can produce these BHs. So far, there are no physical limitations on the existence of such massive stars \cite{Massey2011,Crowther2012}. Note that the most massive stars known today are found in the LMC, with current masses of $\sim 200{\rm ~M}_{\odot}$ \cite{Crowther2010}. BH formation may be accompanied by a natal kick. Natal kicks are observed for Galactic radio pulsars, which move significantly faster (with average 3-dimensional speeds of $\sim 400{\rm ~km} {\rm ~s}^{-1}$, e.g., \cite{Hobbs2005}) than their typical progenitor stars ($10-20{\rm ~km} {\rm ~s}^{-1}$). These high velocities are argued to be associated with some supernova asymmetry: either asymmetric mass ejection \cite{Janka1994,Tamborra2014,jan17} or asymmetric neutrino emission \cite{Kusenko1996,Fryer2006b,Socrates2005}. 
Note that neutrino kick models all require (possibly unrealistically) strong magnetic fields, and simulations of core collapse without magnetic fields are unable to produce significant neutrino kicks. Naturally, in these simulations the authors find the need for asymmetric mass ejection to explain natal kicks (e.g., \cite{Tamborra2014}). Although BH natal kicks as high as those observed for NSs cannot yet be observationally excluded, it is unlikely for BHs to receive such large natal kicks \cite{Mandel2016b,Repetto2017}. It appears that some BHs may form without a natal kick \cite{ntv99,Mirabel2017}, while some may form with a kick of the order of $\sim 100 {\rm ~km} {\rm ~s}^{-1}$ \cite{Repetto2017}. The BH natal spin may simply depend on the angular momentum content of the progenitor star at the time of core collapse. Massive stars are known to rotate, with the majority of massive stars spinning at moderate surface velocities (about 90\% at $\sim 100 {\rm ~km} {\rm ~s}^{-1}$) and with some stars spinning rather rapidly (10\% at $\sim 400 {\rm ~km} {\rm ~s}^{-1}$). During its evolution, a star may transport angular momentum from its interior to its atmosphere. Angular momentum is then lost from the star when the outer stellar layers are removed. The envelope removal in massive stars that are progenitors of BBH mergers is easily accomplished either by stellar winds or by mass transfer to a close companion star. However, the efficiency of angular momentum transport is unknown. Two competing models are currently considered in the literature: very effective angular momentum transport by a magnetic dynamo \cite{Spruit1999,Spruit2002}, included in the {\tt MESA} stellar evolution code, which leads to solid-body rotation of the entire star; and mild angular momentum transport through meridional currents \cite{Eggenberger2012,Ekstrom2012}, included in the {\tt Geneva} code, which leads to differential rotation of the star. 
Asteroseismology, which probes the internal rotation of stars, has not yet provided any data on massive stars (i.e., the progenitors of BHs). The available measurements for intermediate-mass stars (B-type main-sequence stars) show that some stars are well described by solid-body rotation and some by differential rotation \cite{Aerts2008}. Depending on the adopted model, the angular momentum content of a star at core collapse could be very different. During BH formation some angular momentum may be lost, affecting the natal BH spin, if material is ejected in a supernova explosion. Whether BH formation is accompanied by mass loss is not at all clear, and estimates that use different assumptions on mass ejection in the core-collapse process are underway~\cite{Belczynski:2017gds,Schroder:2018hxk}. At the moment, from the modeling perspective, the BH natal spin is mostly unconstrained. \subsubsection{Binary star evolution} The majority ($\gtrsim 70\%$) of massive O/B stars, the potential progenitors of NSs and BHs, are found in close binary systems \cite{Sana2012}. The evolution of massive stars in binaries deviates significantly from that of single stars \cite{pjh92,wl99,bhl+01,lan12}. The main uncertainties affecting the calculation of BH merger rates are the metallicity, the common-envelope phase, and the natal kick a BH receives at birth. These factors also determine the two main BH properties: mass and spin. Two main scenarios have been proposed for BBH merger formation from stars that evolve in galactic fields: classical isolated binary evolution, similar to that developed for double neutron stars (e.g., \cite{Tutukov1993,Lipunov1997,Belczynski:2001uc,Voss2003,Mennekens2014,Eldridge2016,Belczynski:2016obo,Stevenson2017}), and chemically homogeneous evolution (e.g., \cite{Maeder1987, Yoon2005,Marchant2016,Mandel2016a,deMink2016,Woosley2016}). 
Classical binary evolution starts with two massive stars in a wide orbit ($a \gtrsim 50-1000 {\rm ~R}_{\odot}$); the binary components then interact with each other through mass transfer, decreasing the orbit below $\sim 50{\rm ~R}_{\odot}$ in common envelope (CE) evolution \cite{Webbink1984,Ivanova2013}. Depending on their mass, both stars collapse to BHs, either with or without a supernova explosion, forming a compact BBH binary. The orbital separation of two BHs which merge within a Hubble time is below $\sim 50{\rm ~R}_{\odot}$ (for a circular orbit and two $30{\rm ~M}_{\odot}$ BHs \cite{Peters:1964zz}). \cite{Heuvel2017} highlight that for the massive stars that are expected to form BHs, the mass ratio in the second mass-transfer phase is much less extreme, which means a CE phase may be avoided. In the chemically homogeneous evolution scenario, two massive stars in a low-metallicity environment form in a very close binary ($\lesssim 50{\rm ~R}_{\odot}$) and interact strongly through tides \cite{Zahn1992,Kushnir2016}. Tidal interactions lock the stars in rapid rotation and allow for very effective mixing of elements in their stellar interiors, which inhibits radial expansion of the stars. Hence, these stars remain compact throughout their evolution and collapse to BHs without experiencing a CE phase \cite{Marchant2016}. This evolutionary scheme may well explain the most massive LIGO/Virgo BBH mergers, as the enhanced tidal mixing required in this channel only works for the most massive stars ($\gtrsim 30{\rm ~M}_{\odot}$). It also predicts that both binary components evolve while rotating fairly rapidly, which may produce rapidly spinning BHs, unless angular momentum is lost very efficiently in the last phases of stellar evolution or during BH formation. 
\subsubsection{Reconciling observations and theory} There seems to be some confusion in the community as to what was expected and predicted by stellar/binary evolution models prior to the first LIGO/Virgo detections. In particular, it is often claimed that the LIGO/Virgo detections of BBH mergers with very massive BHs were surprising or unexpected. In fact, before 2010 most models indicated that BNS are the dominant GW sources for ground-based detectors (however, see also Ref.~\cite{Voss2003}, predicting LIGO detection rates strongly dominated by BBH binaries), and that stellar-origin BHs form with small masses of $\sim 10 {\rm ~M}_{\odot}$ \cite{Abadie:2010cf}. The models before 2010 were limited to calculations for stars with high metallicity (typical of the current Milky Way population), and this introduced a dramatic bias into the predictions. However, already around 2010 it was shown that stars at low metallicities can produce much more massive ($30-80 {\rm ~M}_{\odot}$) BHs than those observed in the Milky Way \cite{Zampieri2009,Mapelli2009,Belczynski:2009xy}. Additionally, it was demonstrated that binaries at low metallicities are more likely, by one or two orders of magnitude, to produce BBH mergers than high-metallicity stars~\cite{Belczynski:2010tb}. This led directly to the pre-detection predictions that {\em (i)} the first LIGO/Virgo detection was expected when the detector (BNS) sensitivity range reached about $50-100$ Mpc (the first detection was made at $70$ Mpc), {\em (ii)} BBH mergers would be the first detected sources, and {\em (iii)} the BBH merger chirp-mass distribution may reach $30{\rm ~M}_{\odot}$~\cite{Belczynski:2010tb,Dominik:2012kk,Dominik:2013tma,Dominik:2014yma}. Additionally, studies of the future evolution of X-ray binaries like IC10 X-1 and NGC300 X-1 \cite{Bulik2011} suggested that there exists a large population of merging BH binaries with masses in excess of $20 {\rm ~M}_{\odot}$. 
Post-detection binary evolution studies expanded on earlier work to show agreement of the calculated BBH merger rates and BBH masses with LIGO/Virgo observations~\cite{Eldridge2016,Belczynski:2016obo,Stevenson2017,Kruckow:2018slo}. The range of calculated merger rates ($10-300 {\rm ~Gpc}^{-3} {\rm ~yr}^{-1}$) comfortably coincides with the observed rate estimates ($12-213 {\rm ~Gpc}^{-3} {\rm ~yr}^{-1}$ for the LIGO/Virgo $90\%$ credible interval). Note that these classical binary evolution rates are typically much higher than the rates predicted for dynamical BBH formation channels ($5-10 {\rm ~Gpc}^{-3} {\rm ~yr}^{-1}$, \cite{Rodriguez2016b,Askaretal2017}). The most likely detection mass range predicted from classical isolated binary evolution is a total BBH merger mass of $20-100 {\rm ~M}_{\odot}$ (e.g., \cite{Kruckow:2018slo}). Examples of merger rate and mass predictions for BBH mergers are given in Figures~\ref{fig.nature} and \ref{fig:kruckow}. A similar match between observed LIGO/Virgo BH masses and model predictions is obtained from the dynamical formation channel \cite{Rodriguez2016b,Askaretal2017}. Note that this makes these two channels indistinguishable at the moment, although the merger rates are likely to be much smaller for the dynamical channel. A caveat of concern for the prospects of LIGO/Virgo detecting BBH mergers with masses above the PISN gap is related to the relatively low GW frequencies of such massive BBH binaries, with chirp masses above $100\;{\rm ~M}_{\odot}$. During the inspiral, the emitted frequencies are expected to peak approximately at the innermost stable circular orbit (ISCO), before the plunge-in phase and the actual merger. Hence, the emitted frequencies are most likely less than 100~Hz, and with redshift corrections the frequencies to be detected are easily lower by a factor of two or more. 
A frequency this low is close to the (seismic noise) edge of the detection window of LIGO/Virgo and may not be resolved. \begin{figure} \begin{center} \hspace*{-0.2cm} \includegraphics[width=9.2cm]{figure1.pdf} \begin{center}\caption{ \emph{Left:} Redshifted total merger mass distribution for two population synthesis models \cite{Belczynski:2017gds}: M10 (low BH natal kicks) and M23 (high BH natal kicks). The O2 LIGO sensitivity is marked; the most likely detections are expected when models are closest to the sensitivity curve. We also mark LIGO/Virgo BBH merger detections (vertical positions have no meaning), all of which fall within the most likely detection region between $20-100{\rm ~M}_{\odot}$. \emph{Right:} Source-frame BBH merger-rate density of several population synthesis models for the local Universe ($z=0$). The current LIGO O1/O2 BBH merger rate is $12$--$213{\rm ~Gpc}^{-3} {\rm ~yr}^{-1}$ (blue double-headed arrow). Note that the models with fallback-attenuated BH natal kicks (M10, M20) are at the LIGO upper limit, while models with high BH natal kicks are at the LIGO lower limit (M13, M23). Models with small (M26) and intermediate (M25) BH kicks fall near the middle of the LIGO estimate. } \end{center} \label{fig.nature} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{kruckow_IZw18.pdf} \caption{\label{fig:kruckow}Distribution of simulated double compact object binaries in the total mass--chirp mass plane for a metallicity of $Z = 0.0002$. Three islands of data are visible, corresponding to BBH, mixed BH-NS and BNS systems. The colour code indicates the merger rate per pixel for a Milky~Way equivalent galaxy. The three solid grey lines indicate constant mass ratios of 1, 3 and 10 (from top to bottom). Observed LIGO/Virgo sources are shown with black crosses, and event names are given for the four most massive cases. The lowest-mass BBH mergers can only be reproduced with a higher metallicity. 
Figure taken from Ref.~\cite{Kruckow:2018slo}.} \end{center} \end{figure} The current LIGO/Virgo broad range of the empirically determined local BBH merger-rate density ($12-213 {\rm ~Gpc}^{-3} {\rm ~yr}^{-1}$) can easily be explained by uncertainties in key input physics parameters of population synthesis modelling, such as the slope of the IMF or the efficiency of the CE ejection; see e.g. Table~5 in \cite{Kruckow:2018slo}. Alternatively, it may be explained by altering BH natal kicks~\cite{Belczynski:2016obo} from full NS natal kicks (corresponding to a low rate estimate) to almost no BH kicks (corresponding to a high rate estimate). Once LIGO/Virgo narrows its empirical estimate it may be possible to use the merger-rate density to constrain the input physics applied in modelling, although it should be cautioned that there is a large degree of degeneracy \cite{Voss2003,Kruckow:2018slo}. LIGO/Virgo provides an estimate of the effective spin parameter, which measures the projected BH spin components ($a_1, a_2$) parallel to the binary angular momentum, weighted by the BH masses ($M_1, M_2$): \begin{equation} \chi_{\rm eff}\equiv \frac{M_1 a_1 \cos\Theta_1 + M_2 a_2 \cos\Theta_2}{M_1+M_2}\;, \label{eq.xeff} \end{equation} where $\Theta_{1,2}$ are the angles between the BH spins and the orbital angular momentum vector. So far, for the six LIGO/Virgo BBH detections the effective spins cluster at low values, $|\chi_{\rm eff}| < 0.35$ (e.g., \cite{Abbott:2017vtc}). This defies the expectations for the main BBH formation channels. If the spin magnitudes of stellar-origin BHs are significant, as estimated for several Galactic and extra-galactic X-ray binaries \cite{Fragos2015}, then the dynamical formation channel (random BH capture) predicts isotropic spin orientations and hence a symmetric $\chi_{\rm eff}$ distribution centered on zero, while the classical binary evolution channel mostly predicts aligned BH spins (aligned stellar spins that are only moderately misaligned by BH natal kicks).
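The effective spin defined above is simple to evaluate; a minimal numerical sketch (the masses, spin magnitudes, and tilt angles below are illustrative, not fits to any event):

```python
import math

def chi_eff(m1, a1, theta1, m2, a2, theta2):
    """Mass-weighted projection of the two BH spins onto the orbital
    angular momentum; angles in radians, spin magnitudes in [0, 1]."""
    return (m1 * a1 * math.cos(theta1) + m2 * a2 * math.cos(theta2)) / (m1 + m2)

# Aligned, rapidly spinning BHs give a large positive value
print(chi_eff(30, 0.9, 0.0, 25, 0.8, 0.0))                  # ~0.85
# Spins confined to the orbital plane give zero, whatever the magnitudes
print(chi_eff(30, 0.9, math.pi / 2, 25, 0.8, math.pi / 2))  # ~0
```

The second call illustrates the first of the five explanations discussed below: in-plane spins null $\chi_{\rm eff}$ regardless of the spin magnitudes.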
Hence, in the latter case one expects a distribution peaked at high values of $\chi_{\rm eff}$. On the one hand this tension is rather unfortunate, as it does not allow one to distinguish between these two very different scenarios of BBH merger formation. On the other hand, it is a great opportunity to learn something new about stars and BHs that was not expected and is not easily understood in the framework of current knowledge. There are five potential explanations of this apparent tension. First, there could be a mechanism that puts both BH spins in the plane of the binary orbit, producing $\chi_{\rm eff}=0$ independent of BH masses and spin magnitudes. Such a mechanism is proposed to operate in triple stars \cite{Antonini:2017tgo}. Note that triple stars are a minority of stars ($10-20\%$ of all field stars) and that the proposed mechanism requires very specific tuning to operate, so it is not clear how likely it is to have worked for all LIGO/Virgo sources. Second, there could be a mechanism that forces the BH spins into opposite directions so that they cancel out. For approximately equal-mass BBH binaries (typical of LIGO/Virgo sources) this would imply a 180-degree flip of the spins. No mechanism is known to produce such configurations. Third, both BH spin magnitudes may be very small, reducing the effective spin parameter to $\chi_{\rm eff}\approx 0$ independent of the other merger parameters. This has already been proposed and is used in studies of angular momentum transport in stars \cite{Belczynski:2017gds}. Fourth, LIGO/Virgo BHs may not have been produced by stars, but may instead come, for example, from a primordial population for which small spins are naturally expected \cite{Garcia-Bellido:2017imq,Clesse:2017bsw}. Fifth, it may be the case that the spin of a BH (at least its direction) is not mainly determined by the angular momentum of the progenitor star, but is rather a result of the physics of the collapse (spin tossing, e.g., \cite{Tauris2017}).
In that case, there is no reason to assume that the spins in mergers formed from isolated binary evolution are aligned; they may instead be isotropic. Note that as long as these five options, which need to be tested and developed further, remain open, one cannot determine the main formation channel of BBH mergers from spins, as was proposed in several recent studies \cite{Kushnir2016,Zaldarriagaetal2018,Hotokezaka2017,Hotokezaka:2017dun}. The issue of spins is rather fundamental, as the effective spin parameter most likely contains information on natal BH spin magnitudes and therefore on stellar astrophysics regarding angular momentum transport in massive stars, which is still unconstrained by electromagnetic observations. Possibly, the spin of the second-formed BH could have been increased in binary evolution by accretion of mass from a companion star. However, it is argued that BHs cannot accrete a significant amount of mass in the binary evolution leading to BBH formation~\cite{Belczynski:2007xg,Belczynski:2016obo,Belczynski:2017gds}. This is partly due to the very low accretion rates during a CE, $1-10\%$ of the Bondi-Hoyle accretion rate \cite{Ricker2008,MacLeod2017,Murguia-Berthier:2017bdf,Holgado:2017vut}, and to rapid Roche-lobe overflow (RLO) in massive progenitor binaries, which leads to the ejection of most of the exchanged matter from the binary system (due to super-Eddington mass-transfer rates). The amount of mass accreted by BHs in binary systems ($\lesssim 1-3 {\rm ~M}_{\odot}$) cannot significantly spin up the massive BHs ($10-30{\rm ~M}_{\odot}$) that are detected by LIGO/Virgo. It is important to note the most challenging parts of the evolutionary predictions in the context of BBH formation. In the classical binary evolution channel, the two most uncertain aspects of the input physics are related to CE evolution and natal BH kicks. Although some observational constraints on both processes exist, they are rather weak. Systems entering CE evolution have recently been reported.
However, they are not as massive as stars that could produce NSs or BHs \cite{Tylenda2016}. The search for CE traces in the form of IR outbursts has so far yielded no clear detection of emerging or receding X-ray binaries, as expected in this scenario \cite{Oskinova:2018sfn}. BH natal kicks are only measured indirectly, from the positions and motions of X-ray binaries hosting BHs, and usually only lower limits on natal kicks are derived \cite{ntv99,Mandel2016b,Repetto2017,Belczynski:2015tba}. On theoretical grounds, reliable models for CE evolution \cite{Ivanova2013} and supernovae \cite{Janka2016} are missing. In the chemically homogeneous evolution channel, the largest uncertainties are connected with the efficiency of the mixing, the number of massive binaries that can form in very close orbits, and the strength of tidal interactions in close binaries. Since initial orbital period distributions are measured only in the very local Universe \cite{Sana2012}, it is not clear whether they apply to the majority of all stars, and thus it is not fully understood how many stars are potentially subject to this kind of evolution. An even deeper problem exists with our understanding of tides and their effectiveness in close binaries \cite{Zahn1992,Kushnir2016}, and effective tides are the main component of the input physics in chemically homogeneous evolution. Astrophysical inferences from GW observations are currently limited. First, it is not known which formation channel (or what mixture of them) produces the known LIGO/Virgo BBH mergers. Since each channel is connected to a specific set of conclusions (for example, the isolated binary channel informs about CE evolution and natal kicks, while the dynamical channel informs predominantly about stellar interactions in dense clusters), it is not clear which physics we are testing with GW observations. Second, within each channel there is degeneracy, such that multiple model parameters are only weakly constrained by observations.
Since nobody has so far been able to deliver a comprehensive study of the large multi-dimensional parameter space, inferences on individual model parameters (e.g. the strength of BH natal kicks or the CE efficiency) are hindered by various untested model-parameter degeneracies. However, it is already possible to test several aspects of stellar evolution, as some processes leave unambiguous signatures in GW observations. For example, the existence of the first and the second mass gap, if confirmed by LIGO/Virgo, will constrain core-collapse supernovae and PISNe, respectively. Careful studies with detailed exposure of caveats are needed to transform future observations into astrophysical inferences. GW events resulting from the merger of stellar-mass BHs are generally not expected to produce electromagnetic counterparts. Nevertheless, a (marginal) transient signal detected by the Fermi Gamma-ray Burst Monitor 0.4~seconds after GW150914 was reported \cite{cbg+16}. This claim encouraged several theoretical speculations about a possible origin. It has been suggested \cite{dk17} that a tiny fraction of a solar mass of baryonic material could be retained in a circumbinary disk around the progenitor binary, which probably shed a total mass of $>10\;{\rm ~M}_{\odot}$ during its prior evolution. The sudden mass loss and recoil of the merged BH may then shock and heat this material, causing a transient electromagnetic signal. It will be interesting to see whether any further electromagnetic signals will be associated with BBH mergers in the near future. \subsection{BNS mergers} The formation of double NSs has been studied in detail following the discovery of the Hulse-Taylor pulsar \cite{ht75}, and currently more than 20 such BNS systems are known -- see \cite{Tauris2017} for details and a review of their formation and evolution.
Whereas no BBH binaries had been detected prior to GW150914, detailed knowledge of BNS systems had been available for many years from Galactic radio pulsar observations \cite{lk12}. LIGO/Virgo has currently only detected one BNS merger event (GW170817), located in the lenticular (S0) galaxy NGC~4993, and thus the local empirical BNS merger-rate density still remains rather uncertain: $1540^{+3200}_{-1220} {\rm ~Gpc}^{-3} {\rm ~yr}^{-1}$ ($90\%$ credible limits~\cite{TheLIGOScientific:2017qsa}). The study of double NSs is relevant for the study of BHs because it gives independent constraints on the evolution of the similar massive-binary populations from which binary BHs are formed. In particular, the question of which stars form NSs and which form BHs, and whether and how this depends on previous binary interactions, can likely only be answered observationally, through significant statistics on the relative abundances of double BH, BNS and NS-BH binaries. There are two major sites producing BNS mergers: isolated binaries in galactic fields (the main contributor), and dense environments in globular and nuclear clusters. Neither of these sites (nor any combination of them) can easily reproduce the preliminary estimated LIGO/Virgo event rate, even if all elliptical host galaxies are included within the current LIGO/Virgo horizon~\cite{Belczynski:2017mqx}. The local supernova rate can be estimated to be about $10^5 {\rm ~Gpc}^{-3} {\rm ~yr}^{-1}$, so the current empirical BNS merger rate from LIGO/Virgo would imply a very high efficiency of BNS binary formation. This apparent tension may be resolved if BNS mergers are allowed to originate from a wide range of host galaxies and if the low end of the LIGO/Virgo merger-rate estimate is used ($320 {\rm ~Gpc}^{-3} {\rm ~yr}^{-1}$).
Population synthesis studies seem to agree that rates as high as $200-600 {\rm ~Gpc}^{-3} {\rm ~yr}^{-1}$ can possibly be reached if favorable conditions are assumed for classical binary evolution \cite{Chruslinska2018,Kruckow:2018slo,Vigna-Gomez:2018dza}. In the coming years, the statistics of the empirical BNS merger rate will improve significantly and reveal whether current theoretical BNS merger rates need a revision. It is interesting to note, however, that calibrations against the rates of observed short gamma-ray bursts and against the rate of mergers required to reproduce the abundances of heavy r-process elements favor a merger-rate density significantly smaller than the current empirical rate announced by LIGO/Virgo \cite{Kruckow:2018slo}. The main uncertainties of the theoretically predicted merger rate of BNS binaries are also related to CE evolution and SNe (similar to the case of BBH mergers). CE evolution is needed to efficiently remove orbital angular momentum, tightening the binary orbit enough to allow a merger event within a Hubble time. However, the onset criterion and the efficiency of the in-spiral in a CE remain uncertain \cite{Ivanova2013,ktl+16}. The kick velocities imparted to newborn NSs span a wide range, from a few ${\rm km\,s}^{-1}$ (almost symmetric SNe) to more than 1000~${\rm km\,s}^{-1}$, and are sometimes difficult to determine \cite{Verbunt18}. The kick magnitude seems to be related to the mass of the collapsing core, its density structure and the amount of surrounding envelope material \cite{jan17}. Additional important factors for the predicted merger rates include the slope of the initial-mass function and the efficiency of mass accretion during RLO \cite{Kruckow:2018slo}. All taken together, the predicted merger rates of double NSs in a Milky~Way equivalent galaxy vary by more than two orders of magnitude~\cite{Abadie:2010cf}.
The empirical merger rate that LIGO/Virgo will determine at design sensitivity in a few years is of utmost importance for constraining the input physics behind the rate predictions. Besides the detection rates, the mass spectrum and the spin rates of the double NS mergers will reveal important information about their origin. Although their precise values cannot be determined due to degeneracy, the overall distribution of estimated NS masses will reveal information on their formation process (electron-capture vs iron-core collapse SNe), as well as constraining the nuclear-matter equation-of-state. The latter will also be constrained from tidal deformations of the NSs in their last few orbits~\cite{TheLIGOScientific:2017qsa}. An important observational signature of the merger event of BNS binaries is the detection of the ring-down signal of either a meta-stable highly massive NS or a BH remnant. Such information would set constraints on the fundamental equation-of-state of nuclear matter at high densities. Whereas LIGO/Virgo is not sensitive enough to detect a ring-down signal, it is hoped that third-generation GW detectors might be able to do so. Another important observational input is the distribution of mass ratios in BNS merger events. This distribution could provide important information about the formation of NSs and the nature of the supernovae (e.g. electron-capture vs iron core-collapse supernovae). Optical follow-up will in many cases reveal the location of a double NS merger (e.g., \cite{GBM:2017lvd}). This will provide information on their formation environments \cite{sha+17,Coulter:2017wya} and kinematics \cite{fb13}, besides crucial information on heavy r-process nucleosynthesis \cite{2010MNRAS.406.2650M}. \subsection{Mixed BH-NS mergers} The formation of mixed BH-NS mergers is expected to follow similar scenarios as double NS or double BH systems \cite{Tutukov1993,Voss2003}, with all the associated uncertainties.
It is perhaps somewhat surprising that LIGO/Virgo detected a double NS merger (GW170817) before a mixed BH-NS merger, since (at least some) population synthesis codes predict a detection rate of mixed BH-NS systems that is an order of magnitude larger than the expected detection rate of double NS systems (with large uncertainties, e.g.,~\cite{Kruckow:2018slo}). Hence, if these predictions are correct, GW170817 is a statistically rare event and detections of mixed BH-NS systems are expected already in the upcoming O3/O4 LIGO runs. The detection of mixed BH-NS mergers is interesting for two reasons: (i) a key question is whether BH-NS and NS-BH binaries may be distinguished from one another (i.e. the formation order of the two compact objects, which leads to a (mildly) recycled pulsar in the latter case), and (ii) the detected in-spiral of mixed BH-NS mergers may reveal interesting deviations from GR and pure quadrupole radiation, given the difference in compactness between BHs and NSs \cite{wil94}. \section{Dynamical Formation of Stellar-mass Binary Black Holes}\label{Sec:nbody} \vspace{-3mm} {\it Contributors:} B.~Kocsis and A.~Askar \vspace{3mm} \subsection{Introduction}\label{sec:dynamicsintroduction} The recent GW observations of six BBH mergers (GW150914, LVT151012, GW151226, GW170104, GW170608, GW170814) and a BNS merger (GW170817) opened ways to test the astrophysical theories explaining the origin of these sources \cite{Abbott:2016nmj,Abbott:2016blz,Abbott:2017vtc,Abbott:2017gyy,Abbott:2017oio,TheLIGOScientific:2017qsa}. As discussed earlier, the component masses of these merging sources span a range of $8$--$35\ensuremath \mathrm{M}_{\odot}$~\cite{Abbott:2017vtc}, which is different from the distribution of BHs seen in X-ray binaries, $5$--$17\ensuremath \mathrm{M}_{\odot}$ \cite{Farretal2011}, with two possible exceptions (NGC~300 X-1 and IC~10 X-1).
The event rates of BBH mergers are estimated to be between $40$--$213\,\ensuremath{\rm Gpc}^{-3}\ensuremath{\rm yr}^{-1}$ for a power-law BH mass function and between $12$--$65\,\ensuremath{\rm Gpc}^{-3}\ensuremath{\rm yr}^{-1}$ for a uniform-in-log BH mass function~\cite{Abbott:2017vtc}, which is higher than previous theoretical expectations of dynamically formed mergers, for instance see \cite{Abadie:2010cf}. The event rates of BNS mergers is currently based on a single measurement which suggests a very high value of $1540^{+3200}_{-1220}\,\ensuremath{\rm Gpc}^{-3}\ensuremath{\rm yr}^{-1}$~\cite{TheLIGOScientific:2017qsa} (c.f.~\cite{Belczynski:2017mqx}). How do we explain the observed event rates and the distribution of masses, mass ratios, and spins? Several astrophysical merger channels have been proposed to explain observations. Here we review some of the recent findings related to {\bf dynamics}, their limitations and directions for future development. These ideas represent alternatives to the classical binary evolution picture, in which the stars undergo poorly understood processes, such as common envelope evolution. In all of these models the separation between the compact objects is reduced dynamically to less than an AU, so that GWs may drive the objects to merge within a Hubble time, $t_{\rm Hubble}=10^{10}\,\ensuremath{\rm yr}$. \subsection{Merger rate estimates in dynamical channels} \paragraph{Dynamical formation and mergers in globular clusters} Although about 0.25\% of the stellar mass is currently locked in globular clusters (GCs) \cite{Rosenblatt1988,Boker2008,Harris2015}, dynamical encounters greatly catalyze the probability of mergers compared to that in the field. Within the first few million years of GC evolution, BHs become the most massive objects. 
Due to dynamical friction, they will efficiently segregate to the cluster center \cite{Spitzer1969}, where they can dynamically interact and form binaries with other BHs \cite{Sigurdsson1993,Ivanova2005}. The dense environments of GCs can also lead to binary-single and binary-binary encounters involving BHs that could result in their merger. Collisional systems like GCs can also undergo core collapse, during which central densities can become very large, leading to many strong dynamical interactions. The encounter rate is proportional to $\mathcal{R}\sim \int dV \,\langle n_{*}^2 \rangle \, \sigma_{\rm cs} v$, where $n_*$ is the stellar number density, $\sigma_{\rm cs}\sim GMb/v^2$ is the capture cross section, $M$ is the total mass, $b$ is the impact parameter, and $v$ is the typical velocity dispersion. Note the scaling with $\langle n_*^2\rangle$, where $\langle n_{*}^2\rangle^{1/2} \sim 10^5\ensuremath{\rm pc}^{-3}$ in GCs and $\sim 1\ensuremath{\rm pc}^{-3}$ in the field. Estimates using Monte Carlo methods to simulate realistic GCs yield merger rates of at least $\mathcal{R}_{\rm GC}\sim 5 \, \ensuremath{\rm Gpc}^{-3}\ensuremath{\rm yr}^{-1}$ \cite{Rodriguez2016b,Askaretal2017}, falling below the current limits on the observed rates. Rate estimates from direct $N$--body simulations yield a similar value of $\mathcal{R}_{\rm GC}\sim 6.5 \, \ensuremath{\rm Gpc}^{-3}\ensuremath{\rm yr}^{-1}$ \cite{Park2017}. In particular, these papers have shown that low-mass GCs below $10^5\mathrm{M}_{\odot}$ make a negligible contribution to the rates, whereas initially more massive GCs (above $10^6\mathrm{M}_{\odot}$) contribute significantly.
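The $\langle n_*^2\rangle$ scaling above already explains why GCs dominate over the field; a toy comparison holding the cross-section and velocity factors fixed (the densities are the fiducial values quoted above):

```python
# Per unit volume the encounter rate scales as <n*^2> * sigma_cs * v.
# Holding sigma_cs and v fixed, compare a GC core to the Galactic field
# using the fiducial number densities quoted in the text.
n_gc = 1e5      # pc^-3, <n*^2>^(1/2) in a GC core
n_field = 1.0   # pc^-3, field value
boost = (n_gc / n_field) ** 2
print(f"encounter-rate boost per unit volume: {boost:.0e}")   # 1e+10
```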
\cite{Askaretal2017} argue that the actual merger rates from BHs originating in GCs could be 3 to 5 times larger than their estimated value of $\sim 5 \, \ensuremath{\rm Gpc}^{-3}\ensuremath{\rm yr}^{-1}$, due to uncertainties in the initial GC mass function, the initial mass function of stars in GCs, the maximum initial stellar mass, and the evolution of BH progenitors. Furthermore, BBH merger rates can be significant in young clusters with masses $\sim 10^4\,\ensuremath \mathrm{M}_{\odot}$ \cite{2016MNRAS.459.3432M,2017MNRAS.467..524B}. A simple robust upper limit may be derived by assuming that all BHs merge once in each GC in a Hubble time: \begin{eqnarray} \mathcal{R} & \leq & \frac12f_{\rm BH} N_{*}\frac{ n_{\rm GC}}{t_{\rm Hubble}} \nonumber \\ & < & \frac12 \frac{ \int_{20\ensuremath \mathrm{M}_{\odot}}^{150\ensuremath \mathrm{M}_{\odot}} f_{\rm IMF}(m) dm}{ \int_{0.08\ensuremath \mathrm{M}_{\odot}}^{150\ensuremath \mathrm{M}_{\odot}} m f_{\rm IMF}(m) dm} \times 10^{5.5}\ensuremath \mathrm{M}_{\odot} \times\frac{0.8\ensuremath{\rm Mpc}^{-3}}{10^{10}\ensuremath{\rm yr}} \nonumber \\ & = & 38\,\ensuremath{\rm Gpc}^{-3}\ensuremath{\rm yr}^{-1} \label{eq:GCrate} \end{eqnarray} where $f_{\rm BH}$ is the fraction of stars that turn into BHs for a given stellar initial mass function $f_{\rm IMF}$, $N_{*}$ is the initial number of stars in a GC, $n_{\rm GC}$ is the cosmic number density of GCs, and $f_{\rm IMF}(m) \propto (m/0.5\ensuremath \mathrm{M}_{\odot})^{-2.3}$ for $m>0.5\ensuremath \mathrm{M}_{\odot}$ and $(m/0.5\ensuremath \mathrm{M}_{\odot})^{-1.3}$ otherwise \cite{2001MNRAS.322..231K}. The result is not sensitive to the assumed upper bound on the mass of the BH progenitor, which is set by the pair-instability supernova. Recent estimates find 110--$130\,\ensuremath \mathrm{M}_{\odot}$ for GC metallicities \cite{Spera:2017fyx}.
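The upper limit in Eq.~(\ref{eq:GCrate}) can be reproduced numerically from the broken power-law IMF quoted above; a sketch using SciPy (the small difference from the quoted $38\,{\rm Gpc}^{-3}\,{\rm yr}^{-1}$ reflects rounding in the original estimate):

```python
from scipy.integrate import quad

def imf(m):
    """Kroupa-like broken power law (normalization cancels in the ratio):
    f ~ (m/0.5)^-1.3 below 0.5 Msun and (m/0.5)^-2.3 above."""
    return (m / 0.5) ** (-1.3 if m < 0.5 else -2.3)

n_bh, _ = quad(imf, 20, 150)                                    # BH progenitors
m_form, _ = quad(lambda m: m * imf(m), 0.08, 150, points=[0.5]) # stellar mass formed
f_bh_per_msun = n_bh / m_form                                   # BH progenitors per Msun

M_gc = 10**5.5     # Msun, typical initial GC stellar mass
n_gc = 0.8         # Mpc^-3, cosmic number density of GCs
t_hub = 1e10       # yr, Hubble time

rate = 0.5 * f_bh_per_msun * M_gc * n_gc / t_hub   # mergers Mpc^-3 yr^-1
print(rate * 1e9)                                  # ~40 Gpc^-3 yr^-1, close to Eq. (GCrate)
```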
However, note that the mass in GCs may have been higher than currently by a factor of $\sim 5$, since many GCs evaporated or were tidally disrupted \cite{Gnedinetal2014,2014MNRAS.444.3738A}. This effect increases the rates by at most a factor of 2 at $z<0.3$, but by more than a factor of 10 at $z>2.5$~\cite{Fragione:2018vty}. \paragraph{Dynamical and relativistic mergers in galactic nuclei} The densest concentrations of stellar BHs in the Universe are expected in the centers of nuclear star clusters (NSCs), where the density in the mass-segregated inner regions around the SMBH exceeds $n_*\sim 10^{10}\ensuremath{\rm pc}^{-3}$ \cite{OLearyetal2009}. In contrast to GCs, the escape velocity from the central regions of NSCs, deep in the potential of the SMBH, is so high that compact objects are not expected to be ejected by dynamical encounters or supernova birth kicks. In these regions, close BH binaries may form through GW emission in close single-single encounters \cite{OLearyetal2009}. Binary mergers may also be induced by the secular Kozai-Lidov effect of the SMBH \cite{Antonini_Perets2012,Hoang:2017fvh,PetrovichAntonini2017,2018MNRAS.477.4423A,Arca-Sedda:2017wea,Hamers:2018hxv} and by tidal effects \cite{Fernandez:2018uiy}. Detailed estimates give $\mathcal{R}_{\rm NSC}\sim 5$--$15\,\ensuremath{\rm Gpc}^{-3}\ensuremath{\rm yr}^{-1}$ \cite{OLearyetal2009,Hoang:2017fvh,Antonini_Perets2012,PetrovichAntonini2017,Belczynski:2017mqx}, below the observed value. Higher values may be possible for top-heavy BH mass functions and if black holes are distributed in disk configurations \cite{OLearyetal2009,Szolgyen:2018zra}. These numbers are sensitive to the uncertain assumptions on the total supply of BHs in the NSCs, either formed in situ or delivered by infalling GCs \cite{2012ApJ...750..111A,2013ApJ...763...62A,Gnedinetal2014,ArcaSedda2015,Arca-Sedda:2017qcq}.
If all BHs merged once in galactic nuclei, an upper limit similar to Equation~(\ref{eq:GCrate}) is $\mathcal{R}_{\rm NSC}< 30\,\ensuremath{\rm Gpc}^{-3}\ensuremath{\rm yr}^{-1}$. Due to the high escape velocity from NSCs and the steady supply of infalling objects, this bound may in principle be exceeded. \paragraph{Mergers facilitated by ambient gas} Dynamical friction facilitates mergers in regions where a significant amount of gas embeds the binary, since the gas may carry away angular momentum efficiently. In particular, this may happen in star-forming regions \cite{2016MNRAS.462.3812T,Belczynski:2016ieo,2018ApJ...856...47T}, in accretion disks around SMBHs in active galactic nuclei (AGN) \cite{Barausse:2007dy,Kocsisetal2011}, or if the stellar envelope is ejected in a stellar binary \cite{Tagawa:2018vls}. In AGN, the accretion disk may serve to capture stellar-mass binaries from the NSC \cite{Bartosetal2017}, or help to form them in its outskirts by fragmentation \cite{Stoneetal2017}. The rate estimates are of the order of $1\,\ensuremath{\rm Gpc}^{-3}\ensuremath{\rm yr}^{-1}$, below the observed value. Nevertheless, mergers in this channel deserve attention, as they are associated with electromagnetic counterparts (the population of AGNs), which may be used in a statistical sense \cite{Bartosetal2017b}. \paragraph{Isolated triples} The stellar progenitors of BHs are massive stars, which mostly reside not only in binaries but in many cases in triples. The gravity of the triple companion drives eccentricity oscillations through the Kozai-Lidov effect, which may lead to GW mergers after close encounters. However, the rate estimates are around or below $6 \, \ensuremath{\rm Gpc}^{-3}\ensuremath{\rm yr}^{-1}$, below the current observational range \cite{Silsbee_Tremaine2017,2017ApJ...841...77A}.
The rates may be higher, $2$--$25 \, \ensuremath{\rm Gpc}^{-3}\ensuremath{\rm yr}^{-1}$, for low-metallicity triples \cite{Rodriguez:2018jqu}, and may be further increased by non-hierarchical configurations \cite{Arca-Sedda:2018qgq,Banerjee:2018pmh} and by quadruple systems \cite{Fang_Thompson_Hirata2018}. \paragraph{Mergers in dark matter halos} The first metal-free stars (Pop III) may form binaries dynamically and merge in DM halos \cite{2014MNRAS.442.2963K}, but the expected rates are below the observed rates~\cite{Belczynski:2016ieo}. Furthermore, two dynamical channels have been suggested that lead to a high number of BH mergers in DM halos. If PBHs constitute a significant fraction of DM, the merger rates following GW captures in single-single encounters match the observed rates \cite{Bird:2016dcv}. However, this would also lead to the dispersion of weakly bound GCs in ultrafaint dwarf galaxies, contradicting at least one observed system \cite{Brandt:2016aco}. More recent estimates show that the LIGO rates are matched by this channel even if only $1\%$ of the DM is in PBHs \cite{Ali-Haimoud:2017rtz}. The second PBH channel requires only a $0.1\%$ fraction of DM to be PBHs, given that they form binaries dynamically in the very early universe \cite{Sasaki:2016jop}. While these sources can match the observed rates, as discussed in the following Sec.~\ref{Sec:PBHandDM}, we await further strong theoretical arguments or observational evidence for the existence of these PBHs \cite{Sasaki:2018dmp}. \subsection{Advances in numerical methods in dynamical modeling} Recent years have brought significant advances in modelling the dynamical environments leading to GW events. \subsubsection{Direct $N$-body integration} State-of-the-art direct $N$--body simulations have been used to model the dynamics of GCs to interpret GW observations. These methods now reach $N=10^6$, so that stars are represented close to 1:1 in the cluster \cite{Wangetal2016}.
Comparisons between the largest direct $N$--body and Monte Carlo simulations show good agreement \cite{Wangetal2016,Rodriguezetal2016a}. However, due to their high numerical cost, only a small number of initial conditions have been examined to date. Further development is needed to account for a higher number of primordial binaries, larger initial densities, a realistic mass spectrum, and a nonzero level of rotation and anisotropy. Large direct $N$-body simulations have also been used to study the dynamics of nuclear star clusters (NSCs) in galactic nuclei with an SMBH, and the formation of NSCs from the infall of GCs \cite{ArcaSedda2015}. Recent simulations have reached $N=10^6$ in regions embedding NSCs \cite{Panamarev:2018bwq}; here, the ratio of simulated stars to real stars is 1:100. To interpret GW mergers in these systems, a 1:1 simulation of the innermost region of the NSC would be most valuable, even if the total simulated number is of the order of $10^5$--$10^6$. Including a mass distribution and primordial binaries would also be useful. Direct $N$-body methods were also recently used to simulate the NSC in AGN with stellar captures by the accretion disk, to predict the rate of tidal disruption events and the formation of a nuclear stellar disk \cite{Kennedyetal2016,Panamarev:2018who}. The most important avenue for development is to relax the assumption of a rigid, fixed disk in these simulations. Indeed, the disk may be expected to be significantly warped by the star cluster \cite{Kocsis_Tremaine2011,Peretsetal2018}, and the stars and BHs captured in the disk may grow into IMBHs and open gaps in the disk \cite{Kocsisetal2011}. Furthermore, an initially nonzero number of binaries, the binary-disk interaction, and a mass spectrum would be important to incorporate to make GW predictions.
\subsubsection{Monte Carlo methods} State-of-the-art Monte Carlo methods have also improved significantly during the past years, providing a numerically efficient way to model the evolution of GCs accurately. Recent developments showed that BH subclusters do not necessarily decouple and evaporate on short timescales, and that GCs with longer relaxation times can retain BHs up to a Hubble time \cite{Morscheretal2015,Arcasedda2018,Askar2018}. These methods have been used to predict the evolutionary pathways to some of the observed mergers~\cite{Rodriguez:2016avt,Chatterjeeetal2017}, to interpret the distributions of masses and spins \cite{Rodriguez2016b,Askaretal2017}, and to study the formation of IMBHs~\cite{Gierszetal2015}. Most recently, 2.5th-order post-Newtonian dissipative effects were incorporated in order to re-simulate binary-single interactions involving three BHs drawn from the results of these Monte Carlo codes, which increased the rate of eccentric mergers by a factor of 100~\cite{Rodriguez:2017pec,Samsingetal2017,Samsing:2018isx}. The implementation of post-Newtonian terms and tidal dissipation \cite{Samsing:2018uoo} for computing the outcomes of strong binary-single and binary-binary encounters involving at least two BHs could further increase the merger rates for BBHs expected to originate from GCs. Moreover, the implementation of two-body gravitational and tidal capture within these codes could provide more insight into the role of dynamics in forming potentially merging BBHs. Further development is needed to include resonant multibody effects. Moreover, simulations of galactic nuclei with Monte Carlo methods would be valuable for tracking the evolution of binaries and accounting for long-term global effects such as resonant relaxation.
\subsubsection{Secular symplectic $N$-body integration} Systems described by a spherical gravitational potential, such as a galactic nucleus or a GC, are affected by strong global resonances, in which the anisotropies of the system drive a rapid secular change in the angular momentum vectors of the objects, called resonant relaxation \cite{Rauch_Tremaine1996}. Vector and scalar resonant relaxation, which affect the distribution of orbital planes and the eccentricities, respectively, are expected to reach statistical equilibrium within a few Myr and a few hundred Myr. A secular symplectic $N$-body integration method was recently developed \cite{Kocsis_Tremaine2015}. Preliminary results show that objects such as BHs, which are heavier than the average star in a GC or a nuclear star cluster, tend to be distributed in a disk \cite{Szolgyen:2018zra}. Since mergers happen more easily in BH disks, if such disks exist, future studies are necessary to explore the formation, evolution, and expected properties of such configurations. The statistical equilibrium phase-space distribution of resonant relaxation is known only for a limited number of idealized configurations \cite{Touma_Tremaine2014,Roupasetal2017,Takacs:2017wnn,Fouvry2018}. Interestingly, resonant relaxation has strong similarities to other systems in condensed-matter physics, such as point vortices and liquid crystals \cite{Touma_Tremaine2014,Kocsis_Tremaine2015,Roupasetal2017}. This multidisciplinary connection may be used to study these probable sites of BH mergers \cite{Roupasetal2017}. \subsubsection{Semianalytical methods} Semianalytical methods are designed to include the largest number of physical effects in an approximate way. The formation of the Galactic bulge from the infall of GCs was examined in great detail with this technique \cite{Gnedinetal2014,2014MNRAS.444.3738A}. It was shown that, during this process, more than 95\% of the initial GC population is destroyed.
Thus, GCs were much more common at cosmologically early times. Since one way to form IMBHs in GCs is by runaway collisions of stars or BHs \cite{PortegiesZwart_McMillan2002}, their numbers may also be expected to be higher than previously thought (although this conclusion may be different in other IMBH formation channels \cite{Gierszetal2015}). GCs generate a high rate of mergers between stellar-mass BHs and $10^2$--$10^3\ensuremath \mathrm{M}_{\odot}$ IMBHs, detectable in the near future with Advanced LIGO and Virgo detectors at design sensitivity at $z>0.6$ \cite{Fragione:2017blf,2018MNRAS.477.4423A,Arca-Sedda:2017wea}. BH mergers with IMBHs of $10^3$--$10^4\ensuremath \mathrm{M}_{\odot}$ may also be detected with LISA from the local Universe. \subsection{Astrophysical interpretation of dynamical sources} How can dynamical models test the astrophysical interpretation of GW sources? GW detectors can measure the binary component masses, the spin magnitudes and directions, the binary orbital plane orientation, the eccentricity, the distance to the source, and the sky location. The observed distributions of these parameters may be compared statistically to the predicted distributions for each channel. Some smoking gun signatures of the astrophysical environment are also known for individual sources, as discussed below. \subsubsection{Mass distribution} The mass distribution of mergers depends on the theoretically poorly known BH mass function times the mass-dependent likelihood of mergers. In particular, mergers of massive objects are favored in GCs, due to the mass dependence of binary formation in triple encounters, binary exchange interactions, dynamical friction, and the Spitzer instability. Monte Carlo and $N$-body simulations show that the likelihood of merger is proportional to $M^4$ in GCs \cite{OLearyetal2016}. 
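As an illustration of how strongly an $M^4$ merger likelihood reshapes the merging population, one can fold it with a toy BH mass function; the following is a minimal sketch, in which the $M^{-2.3}$ birth slope and the 5--50 $\mathrm{M}_\odot$ range are hypothetical placeholders, not fitted values.

```python
import numpy as np

def merger_mass_pdf(masses, mass_function, weight_power=4):
    """Weight a BH mass function by the M^weight_power merger likelihood
    (M^4 in GCs, per the simulations cited above) and renormalize to a PDF."""
    weighted = mass_function * masses**weight_power
    return weighted / np.sum(weighted * np.gradient(masses))

# Toy (hypothetical) birth mass function: dN/dM ~ M^-2.3 on [5, 50] Msun.
m = np.linspace(5.0, 50.0, 2000)
dm = m[1] - m[0]
birth = m**-2.3
birth_pdf = birth / np.sum(birth * dm)
merged_pdf = merger_mass_pdf(m, birth)

mean_birth = np.sum(m * birth_pdf * dm)    # ~11 Msun for this toy slope
mean_merged = np.sum(m * merged_pdf * dm)  # ~37 Msun: pushed to the massive end
```

The mean mass of the merging population sits far above that of the birth population, which is the qualitative effect the mass-distribution tests below exploit.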
The 2-dimensional total mass and mass ratio distribution of mergers may be used to test this prediction \cite{Zevinetal2017} and to determine the underlying BH mass function in these environments. The mass function of mergers may also vary with redshift, as the most massive BHs are ejected earlier \cite{Rodriguez2016b}. Recently, Ref.~\cite{Kocsisetal2018} introduced a parameter to discriminate among different astrophysical channels: \begin{equation} \alpha = -(m_1+m_2)^2 \frac{\partial^2 \mathcal{R}}{\partial m_1\partial m_2}\,. \end{equation} This dimensionless number is $1 \pm 0.03$ for PBHs formed dynamically in the early universe, assuming a flat PBH mass function (for an arbitrary PBH mass function the results can be substantially different~\cite{Chen:2018czv}), and $10/7$ for BHs which form by GW emission in collisionless systems such as DM halos. For BHs which form by GW emission in collisional systems that exhibit mass segregation, $\alpha$ varies with the component masses. In galactic nuclei it ranges between $10/7$ for the low-mass components in the population and $-5$ for the highest-mass component. It would be very useful to make predictions for $\alpha$ for all other merger channels. \subsubsection{Spin distribution} Using the empirically measured rotation rates of Wolf-Rayet (WR) stars, the birth spins of massive BHs are expected to be small~\cite{Amaro-Seoane:2015umi}. If BHs have undergone previous mergers, their spin is distributed around 0.7 \cite{Gerosa:2017kvu,Fishbachetal2017}. Dynamical effects may also spin up the WR-descendent BH \cite{Zaldarriagaetal2018}. If a BH acquires a significant amount of mass due to accretion, it is expected to be highly spinning. Thus, second-generation mergers are distinguishable in mass and spin from first-generation mergers. Monte Carlo simulations predict that $10\%$ of mergers in GCs are second generation \cite{OLearyetal2016,Rodriguez:2017pec}. 
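The channel discriminator $\alpha$ introduced in the mass-distribution discussion above can be estimated numerically for any model merger rate density via a finite-difference approximation of the mixed partial derivative. A minimal sketch follows; the bilinear toy rate is purely illustrative (chosen because its mixed partial is analytic), not a physical model.

```python
def alpha(rate, m1, m2, h=0.1):
    """alpha = -(m1+m2)^2 * d^2 R / (dm1 dm2), with the mixed partial
    estimated by a central finite difference of step h (in solar masses)."""
    d2R = (rate(m1 + h, m2 + h) - rate(m1 + h, m2 - h)
           - rate(m1 - h, m2 + h) + rate(m1 - h, m2 - h)) / (4.0 * h * h)
    return -(m1 + m2)**2 * d2R

# Toy (hypothetical) rate density R = A*m1*m2, for which
# d^2R/dm1dm2 = A exactly, so alpha = -A*(m1+m2)^2.
A = 1e-3
toy_rate = lambda m1, m2: A * m1 * m2
print(alpha(toy_rate, 30.0, 30.0))  # -> -3.6 (= -A*(30+30)^2)
```

Applied to a tabulated $\mathcal{R}(m_1,m_2)$ from a population model, the same estimator can be compared against the reference values ($1$ for flat-mass-function PBHs, $10/7$ for collisionless systems) quoted above.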
Results from Monte Carlo simulations have also been utilized to investigate the role of spin in gravitational recoil kicks on merged BHs \cite{Morawski:2018kfs}. This has important implications for repeated mergers in dense environments like GCs: the results of Ref.~\cite{Morawski:2018kfs} suggest that about 30\% of merging BHs could be retained in GCs. According to a recent X-ray observing campaign, 7 out of the 22 Active Galactic Nuclei analyzed are candidates for hosting high-spin SMBHs (with spin $>98\%$ of the maximal value), see Table 1 of Ref.~\cite{Brenneman:2013oba}. Some of the X-ray binaries show evidence of highly spinning stellar-mass BHs \cite{McClintocketal2014}. However, with the exception of a single source, current LVC sources are consistent with zero effective spin \cite{Farretal2017}. If spinning, the relative directions of the spins are expected to be uncorrelated (isotropic) for spherical dynamical systems. This is different from the standard common envelope channel, where, if the BHs are spinning, their spins are generally expected to be aligned with the orbital angular momentum vector and counteralignment is unlikely \cite{Farretal2017,Farretal2018}. \subsubsection{Eccentricity distribution} Since GW emission circularizes binaries \cite{Peters:1964zz}, they are expected to be nearly circular close to merger unless they form with a very small pericenter separation. Indeed, GW sources in GCs are expected to have a relatively small eccentricity in the LIGO band close to merger \cite{OLearyetal2006}. A moderate eccentricity of $e=0.1$ is expected for $10\%$ of GC sources at the low-frequency edge of the Advanced LIGO design sensitivity \cite{Rodriguez:2017pec,Samsingetal2017}. However, they may have a high eccentricity in the LISA band \cite{OLearyetal2006,Breiviketal2016,Samsing:2018isx,DOrazio:2018jnv,Samsing:2018ykz} or the DeciHz band~\cite{Chen:2017gfm}. 
Field triples may also yield some eccentric LIGO mergers \cite{Silsbee_Tremaine2017,2017ApJ...841...77A,Rodriguez:2018jqu,Arca-Sedda:2018qgq}. Eccentricity may be much higher for sources forming by GW emission in close encounters \cite{OLearyetal2009}. The eccentricity distribution of merging BH binaries in GCs is expected to have three distinct peaks, corresponding to binaries which merge outside of the cluster after ejection following a binary-single hardening interaction, binary mergers which happen within a cluster in between two binary-single interactions, and mergers which happen during a binary-single interaction \cite{Samsing:2018isx,DOrazio:2018jnv}. For GW capture binaries, the typical eccentricity at the last stable orbit (LSO, extrapolating \cite{Peters:1964zz}) is set by the velocity dispersion of the source population $\sigma$ as $e_{\rm LSO} \sim 0.03 (\sigma/1000\,{\rm km}\,{\rm s}^{-1})^{35/32}\,(4\eta)^{-35/16}$, where $\eta=m_1m_2/(m_1+m_2)^2$ is the symmetric mass ratio \cite{Gondan:2017wzd}. The heavier stellar BHs and IMBHs are expected to merge with a higher eccentricity, around $0.1$, while the lower-mass BHs will have an eccentricity around $10^{-3}$ \cite{Gondan:2017wzd}. At design sensitivity, Advanced LIGO and Virgo may be most sensitive to eccentricity well before merger \cite{Gondan:2017hbp}. \subsubsection{Sky location distribution} Since GW sources are observed from beyond $100\,\ensuremath{\rm Mpc}$, they are expected to be isotropically distributed. The sky location measurement accuracy is typically insufficient to identify a unique host galaxy counterpart for individual mergers. However, the angular power spectrum of the sky locations of all mergers may be measured. This may be useful to determine the typical galaxy types of mergers, particularly whether the sources are in active galactic nuclei \cite{Bartosetal2017b}. 
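The $e_{\rm LSO}$ scaling for GW capture binaries quoted in the eccentricity discussion above is straightforward to evaluate; a minimal sketch, with the normalization 0.03 and the exponents taken directly from the scaling relation (the masses and velocity dispersion below are illustrative choices):

```python
def e_lso(sigma_kms, m1, m2):
    """Typical eccentricity at the last stable orbit for GW capture binaries:
    e_LSO ~ 0.03 (sigma / 1000 km/s)^(35/32) * (4*eta)^(-35/16)."""
    eta = m1 * m2 / (m1 + m2)**2          # symmetric mass ratio
    return 0.03 * (sigma_kms / 1000.0)**(35.0 / 32.0) \
                * (4.0 * eta)**(-35.0 / 16.0)

# Equal masses in a high-dispersion (galactic-nucleus-like) environment:
print(e_lso(1000.0, 10.0, 10.0))   # eta = 1/4, so e_LSO = 0.03 exactly
# Unequal masses (e.g. a 10 Msun BH captured by a 100 Msun IMBH)
# give 4*eta < 1 and hence a larger e_LSO:
print(e_lso(1000.0, 10.0, 100.0))
```

This reproduces the trend stated above: the more unequal (heavier-companion) captures merge with higher residual eccentricity.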
\subsubsection{Smoking gun signatures} Which other features may help to conclusively identify the host environment of individual GW sources? \paragraph{Resolved clusters with electromagnetic counterparts} Recently it was shown that several inspiraling stellar-mass black hole binaries in Milky Way GCs are expected to be in the LISA band \cite{Kremer:2018tzm}. Therefore, if LISA identifies these GW sources on the sky, it will constrain the event rates corresponding to the GC channel. \paragraph{Modulation of the GW phase due to environmental effects} In the case of LISA sources, extreme mass ratio inspirals may be significantly affected by an SMBH companion or gas-driven migration \cite{Yunesetal2011,Kocsisetal2011}. For stellar-mass BH mergers, the most important effect of a perturbing object is a Doppler phase shift, which accumulates mostly at low GW frequencies \cite{Meironetal2017}. LIGO and LISA together will be able to detect the SMBH provided that the orbital period around the SMBH is less than $\mathcal{O}(\ensuremath{\rm yr})$ \cite{Meironetal2017,Inayoshi:2017hgw}. \paragraph{GW astrophysical echoes} The identification of a secondary signal lensed by the SMBH using LVC may confirm that a LVC merger takes place in a galactic nucleus. This manifests as a GW echo with a chirp waveform nearly identical to the primary signal but with a typically fainter amplitude, depending on the direction and distance from the SMBH \cite{Kocsis2013}. If the distance between the source and the SMBH is less than $10^3 M_{\rm SMBH}$, the relative amplitude of the echo is of the order of $10\%$, and the time delay is less than a few hours. Studies of the expected parameters of sources with GW echoes are underway. \paragraph{Double mergers and mergers with electromagnetic counterparts} Finally, a case is conceivable in which a binary-single or binary-binary encounter results in a double merger, i.e. 
two mergers from the same direction within a short timespan \cite{Samsing:2017xod}. Such an observation would indicate a dense dynamical host environment. Furthermore, binary-single or binary-binary encounters involving at least 2 BHs and one or more stellar objects may also lead to BBH mergers with transient electromagnetic counterparts associated with the disruption or accretion of stellar matter onto the BHs. Observation of such a counterpart would also indicate that the merging BHs originated in a dense dynamical environment. The inclusion of tidal and gravitational dissipation effects in the computation of such strong encounters \cite{Samsing:2018uoo} within simulations of GCs could help to constrain the rates of BBH mergers for which an electromagnetic counterpart could be expected. \section{Primordial Black Holes and Dark Matter}\label{Sec:PBHandDM} \vspace{-3mm} {\it Contributors:} G.~Bertone, C.~T.~Byrnes, D.~Gaggero, J.~Garc\'ia-Bellido, B.~J.~Kavanagh \vspace{3mm} \subsection{Motivation and Formation Scenarios} The nature of Dark Matter (DM) is one of the most pressing open problems of modern cosmology. The evidence for this mysterious form of matter started to grow in the early 20th century~\cite{Bertone:2016nfn}, and it is today firmly established thanks to a wide array of independent observations~\cite{Bertone:2010zza}. Its nature is still unknown, and the candidates that have been proposed to explain it span over 90 orders of magnitude in mass, from ultra-light bosons to massive BHs~\cite{Bergstrom:2000pn,Bertone:2004pz,Feng:2010gw}. An intriguing possibility is that at least a fraction of DM is in the form of PBHs~\cite{Carr:2016drx}. The recent LVC detections~\cite{Bird:2016dcv,Clesse:2016vqa} have revived the interest in these objects, and prompted a reanalysis of the existing bounds~\cite{Carr:2016drx,Garcia-Bellido:2017xvr} and new prospects for phenomenological signatures~\cite{Garcia-Bellido:2017fdg,Clesse:2017bsw}. 
PBHs might be produced from early-universe phase transitions, topological defects (e.g., cosmic strings, domain walls), condensate fragmentation, bubble nucleation, and large-amplitude small-scale fluctuations produced during inflation (see the reviews~\cite{Green:2014faa,Sasaki:2018dmp} and references therein). Excluding the possibility of BH relics, non-evaporating PBHs could range in mass from $10^{-18}$ to $10^6$ solar masses today, although the most interesting range is that associated with the observed LIGO BBHs, around a few to tens of solar masses. For those PBHs, amongst the most promising production mechanisms is the one first proposed in \cite{GarciaBellido:1996qt}, from high peaks in the matter power spectrum generated during inflation. When those fluctuations re-enter the horizon during the radiation era, their gradients induce a gravitational collapse that cannot be overcome even by the radiation pressure of the expanding plasma, producing BHs with a mass of order the horizon mass~\cite{Clesse:2015wea}. Most of these PBHs survive until today, and may dominate the expansion of the Universe at matter-radiation equality. One important question is what fraction $f_{\rm PBH}= \Omega_\mathrm{PBH}/\Omega_\mathrm{DM}$ of DM should be made of $\mathcal{O}(10\,\mathrm{M}_\odot)$ PBHs in order to match the BBH merger rate inferred by LIGO/Virgo ($\mathcal{R} \simeq 10$--$200$~Gpc$^{-3}$~yr$^{-1}$ \cite{Abbott:2017vtc}). If one considers PBH binaries that form within virialized structures, the corresponding rate is compatible with $f_{\rm PBH} = 1$ \cite{Bird:2016dcv}. However, PBH binary systems can also form in the early Universe (deep in the radiation era) \cite{Nakamura:1997sm,Ioka:1998nz}, if the orbital parameters of the pair allow the gravitational pull to overcome the Hubble flow and decouple from it. 
A recent calculation of the associated merger rate today \cite{Sasaki:2016jop} -- significantly extended in \cite{Ali-Haimoud:2017rtz} -- provides a much larger estimate than in the former scenario. This result can be translated into a bound on $f_{\rm PBH}$, which is potentially stronger than any other astrophysical and cosmological constraint in the same range. The constraint has recently been put on more solid grounds in \cite{Kavanagh:2018ggo} by taking into account the impact of the DM mini-halos around PBHs, which have a dramatic effect on the orbits of PBH binaries, but a surprisingly subtle effect on their merger rate today. Several aspects of this calculation are still under debate, including the PBH mass distribution, the role of a circumbinary accretion disk of baryons, the impact of initial clustering of PBHs (see \cite{Ali-Haimoud:2018dau}), and the survival until the present time of the binary systems that decoupled in the radiation era. The critical collapse threshold to form a PBH is typically taken to be $\delta_c\equiv {\delta\rho_c/\rho}\sim0.5$, with an exponential sensitivity to the background equation of state, the initial density profile, and angular momentum. Because any initial angular momentum suppresses PBH formation, PBHs are expected to spin slowly, a potential way to discriminate between them and (rapidly rotating) astrophysical BHs \cite{Chiba:2017rvs}. Inflationary models which generate PBHs are required to generate a much larger amplitude of perturbations on small scales compared to those observed on CMB scales. This can be achieved either through multiple-field effects or an inflection point in single-field inflation~\cite{Ezquiaga:2017fvi,Garcia-Bellido:2017mdw}. These typically generate a reasonably broad PBH mass distribution, in contrast to the monochromatic mass spectrum usually assumed when interpreting observational constraints. 
In addition, the softening of the equation of state during the QCD phase transition greatly enhances the formation probability of solar-mass PBHs compared to that of the more massive BHs in the mergers LIGO detected \cite{Byrnes:2018clq}. BHs below the Chandrasekhar mass limit would be a smoking gun of a primordial origin. At non-linear order, scalar and tensor perturbations couple, implying that the large-amplitude perturbations required to generate PBHs will also generate a stochastic background of GWs. For the LIGO mass range, the corresponding GW frequency is constrained by pulsar timing arrays unless the scalar perturbations are non-Gaussian~\cite{Nakama:2016gzw,Garcia-Bellido:2017aan}. \subsection{Astrophysical probes} \begin{figure} \centering \includegraphics[width=0.79\textwidth]{PBH_constraints.pdf} \caption{\textbf{Summary of astrophysical constraints on PBHs in the mass range $M \in [10^{-2},\,10^5]\,M_\odot$.} Details of the constraints are given in the main text; we plot here the most conservative. We emphasize that astrophysical constraints may have substantial systematic uncertainties and that the constraints shown here apply only for monochromatic mass functions.} \label{fig:constraints} \end{figure} Constraints on the PBH abundance are typically phrased in terms of $f_\mathrm{PBH} = \Omega_\mathrm{PBH}/\Omega_\mathrm{DM}$, the fraction of the total DM density which can be contributed by PBHs. Even in the case where $f_\mathrm{PBH} < 1$, a future detection via astrophysical probes or GW searches remains possible \cite{Sasaki:2016jop}. We outline selected astrophysical constraints below, summarizing them in Fig.~\ref{fig:constraints}, where we focus on PBHs around the solar-mass range, most relevant for GW signals. 
{\it Micro-lensing:} The MACHO \cite{Allsman:2000kg} and EROS \cite{Tisserand:2006zx} collaborations searched for micro-lensing events in the Magellanic Clouds in order to constrain the presence of Massive Compact Halo Objects (MACHOs) in the Milky Way halo. Considering events on timescales of $\mathcal{O}$(1--800) days constrains $f_\mathrm{PBH} < 1$ for masses in the range $M_\mathrm{PBH} \in [10^{-7}, 30]\,M_\odot$ (though these constraints come with a number of caveats, see e.g. \cite{Hawkins:2015uja,Garcia-Bellido:2017xvr}). Another promising target is M31: an important fraction of the Andromeda and Milky Way DM halos can be probed by micro-lensing surveys, and an interesting hint comes from the observation of $56$ events on $\mathcal{O}$(1--100) day timescales from this region of interest, which may suggest a MACHO population with $f \sim 0.5$ and mass $1 \,M_\odot$ or lighter \cite{CalchiNovati:2005cd,Lee:2015boa}. A recent search for lensing of Type Ia Supernovae obtained constraints on all PBH masses larger than around $10^{-2}\,M_\odot$, although substantial PBH fractions $f_\mathrm{PBH} \lesssim 0.6$ are still compatible with the data \cite{Zumalacarregui:2017qqd}. For wide mass distributions the SN lensing constraints go away \cite{Garcia-Bellido:2017imq} and PBHs could still constitute all of the DM. {\it Early Universe constraints:} PBHs are expected to accrete gas in the early Universe, emitting radiation and injecting energy into the primordial plasma. This in turn can lead to observable distortions of the CMB spectrum and can affect CMB anisotropies \cite{Carr1981}. Early calculations overestimated the effect \cite{Ricotti:2007au,Chen:2016pud}, with more recent studies finding weaker constraints \cite{Blum:2016cjs,Ali-Haimoud:2016mbv}. In spite of this, data from COBE/FIRAS \cite{Blum:2016cjs,Clesse:2016ajp} and PLANCK \cite{Ali-Haimoud:2016mbv} can still rule out PBHs with masses $M_\mathrm{PBH} \gtrsim 100 \,M_\odot$ as the dominant DM component. 
It should be noted that these constraints depend sensitively on the details of accretion onto PBHs in the early Universe (see e.g.~Ref.~\cite{Poulin:2017bwe}). {\it Dynamical constraints:} The presence of PBHs is also expected to disrupt the dynamics of stars. Wide halo binaries may be perturbed by PBHs, and the observation of such systems constrains the PBH abundance above $\simeq 20 \,M_\odot$ \cite{Monroy-Rodriguez:2014ula} (although significant fractions $f_\mathrm{PBH} \lesssim 0.2$ are still allowed). PBHs are also expected to dynamically heat, and thereby deplete, stars in the centre of dwarf galaxies. Observations of stellar clusters in Eridanus II \cite{Brandt:2016aco} and Segue I \cite{Koushiappas:2017chw} have been used to constrain PBHs heavier than $\mathcal{O}(1)\,M_\odot$ to be sub-dominant DM components, unless the clusters have an IMBH at their center~\cite{Li:2016utv}. {\it Radio and X-ray constraints:} If PBHs exist in the inner part of the Galaxy, which contains high gas densities, a significant fraction of them would inevitably form an accretion disk and emit a broad-band spectrum of radiation. A comparison with existing catalogs of radio and X-ray sources in the Galactic Ridge region already rules out the possibility that PBHs constitute all of the DM in the Galaxy, even under conservative assumptions on the physics of accretion~\cite{Gaggero:2016dpq}. During the next decade, the SKA experiment will provide an unprecedented increase in sensitivity in the radio band; in particular, SKA1-MID will have the unique opportunity to probe the presence of a subdominant population of PBHs in the Galaxy in the $10$--$100 \, M_\odot$ mass range, even if it amounts to a fraction as low as $\simeq 1$\% of the DM. 
In Fig.~\ref{fig:constraints}, we show only the most conservative of these constraints, which suggest that PBHs may still constitute all of the DM for masses close to $10\,M_\odot$, or a smaller fraction ($f_\mathrm{PBH} \gtrsim 0.1$) for masses up to $\mathcal{O}(100\,M_\odot)$. We highlight that such astrophysical constraints may have large systematic uncertainties and that these constraints apply only to PBHs with a monochromatic mass function. Recent studies have begun re-evaluating these constraints for extended mass functions \cite{Green:2016xgy,Bellomo:2017zsr,Kuhnel:2017pwq,Garcia-Bellido:2017xvr}. For physically motivated mass functions, it may be possible to achieve up to $f_\mathrm{PBH} \sim 0.1$ for PBHs in the mass range $25$--$100 \,M_\odot$ \cite{Carr:2017jsz}. Relaxing certain dynamical constraints (which typically have large uncertainties) or considering more general mass functions may accommodate an even larger PBH fraction \cite{Carr:2017jsz,Lehmann:2018ejc}. \subsection{Discriminating PBHs from astrophysical BHs} A number of observations may help discriminate PBHs from ordinary astrophysical BHs. The detection of GWs from a binary system including a compact object lighter than standard stellar BHs, say below 1 solar mass, would point towards the existence of PBHs. This can be deduced from the highest frequency reached in the GW chirp signal, $f_{\rm ISCO} = 4400\,{\rm Hz}\, (M_\odot/M)$, and it is in principle possible already with Advanced LVC in the next run O3~\cite{Garcia-Bellido:2017fdg}. The detection of GWs at redshift $z \gtrsim 40$ would imply a non-standard cosmology, or the existence of PBHs~\cite{Koushiappas:2017kqm}. Further insights on the origin of BHs might be obtained through the analysis of `environmental' effects, which are discussed in Sec.~\ref{Sec:DM}, and through the analysis of the spatial distribution and mass function of X-ray and radio sources powered by BHs. 
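The sub-solar-mass argument above rests on the ISCO frequency scaling $f_{\rm ISCO} = 4400\,{\rm Hz}\,(M_\odot/M)$; a minimal numerical sketch, with the example masses chosen for illustration:

```python
def f_isco_hz(total_mass_msun):
    """Highest (ISCO) frequency of the GW chirp: f_ISCO = 4400 Hz * (Msun/M)."""
    return 4400.0 / total_mass_msun

# A 1 Msun total-mass binary chirps up to 4.4 kHz; anything lighter goes higher,
# beyond the reach of standard stellar-collapse BHs:
print(f_isco_hz(1.0))    # -> 4400 Hz
print(f_isco_hz(0.2))    # -> 22000 Hz, unambiguously sub-solar
print(f_isco_hz(60.0))   # ~73 Hz, typical of the heavy LIGO BBHs
```

The inverse scaling with total mass is what lets the chirp cutoff frequency alone flag a sub-Chandrasekhar (and hence candidate primordial) component.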
\section{Formation of supermassive black hole binaries in galaxy mergers}\label{Sec:FormSMBH} \vspace{-3mm} {\it Contributors:} M.~Colpi, M.~Volonteri, A.~Sesana \vspace{3mm} When two galaxies, each hosting a central SMBH, merge, the SMBHs start a long journey that should bring them from separations of tens of kpc (1 kpc = $3.086\times 10^{21}$ cm) down to milli-parsec scales, below which they merge through GWs. The initial {\it pairing} of BHs in merging galaxies, their binding into a {\it binary} on parsec scales, and their crossing into the GW-driven inspiral are the three main stages of this dynamical journey, sketched in Figure \ref{journey} \cite{Colpi14}. In some cases, the two SMBHs become bound in the core of the merger remnant, cross into the GW-driven regime, and eventually coalesce. However, there are cases in which the two SMBHs never form a binary: one of the SMBHs may remain stranded, unable to pair and bind \cite{1994MNRAS.271..317G}. \begin{figure} \begin{center} \includegraphics[width=1.00\textwidth]{journey} \caption{Cartoon illustrating the journey travelled by SMBHs with masses in the range $10^{6-8}{\rm ~M}_{\odot}$ during major galaxy-galaxy collisions. The $x$-axis shows the SMBH separation in the various panels, while the $y$-axis shows the timescale. The journey starts when two galaxies (embedded in their DM halos) collide on kpc scales (right-most plot). The inset shows a selected group of galaxies from the cosmological simulation described in Ref.~\cite{Khan:2016vln}. The inset in the second panel (from right to left), from Ref.~\cite{2015MNRAS.447.2123C}, depicts the merger of two disc galaxies and their embedded SMBHs. Pairing occurs when the two SMBHs are in the midst of the new galaxy that has formed, at separations of a few kpc. The SMBHs sink under the action of star- and gas-dynamical friction. 
In this phase, SMBHs may find themselves embedded in star-forming nuclear discs, so that their dynamics can be altered by the presence of massive gas clouds. Scattering off the clouds makes the SMBH orbit stochastic, potentially broadening the distribution of sinking times during the pairing phase~\cite{Fiacconi13,2015MNRAS.446.1765L,2017MNRAS.464.2952T,2015MNRAS.453.3437L}. Furthermore, feedback from supernovae and AGN triggering by one or both of the SMBHs affects the dynamics, as these processes alter the thermodynamics of the gas and its density distribution, which in turn affect the process of gas-dynamical friction on the massive BHs. It is expected that eventually the SMBHs form a Keplerian binary, on pc scales. Then, individual scatterings off stars harden the binary down to the GW-driven domain. This is an efficient mechanism (and not a bottleneck) if the relic galaxy displays some degree of triaxiality and/or rotation. In this case, there exists a large enough number of stars on low-angular-momentum orbits capable of interacting with the binary and extracting orbital energy~\cite{2015ApJ...810...49V}. The binary in this phase can also be surrounded by a (massive) circum-binary disc \cite{2012A&A...545A.127R}. In a process reminiscent of Type II planet migration, the two SMBHs can continue to decrease their separation and eventually cross the GW boundary. Then, GW radiation controls the orbital decay down to coalescence.} \label{journey} \end{center} \end{figure} In the cosmic context, the growth of DM halos through mergers with other halos gives us the backdrop upon which all the subsequent evolution develops. This is the first clock: the halo merger rate evolves over cosmic time and peaks at different redshifts for different halo masses. 
Furthermore, the cosmic merger rate predicts that not all mergers occur with the same frequency: {\it major} mergers, of halos of comparable masses (from 1:1 down to $\sim$ 1:4), are rare, while minor mergers (mass ratio $<$ 1:10) are more common. However, the dynamical evolution is much faster in major mergers than in minor mergers: if the mass ratio of the merging halos and galaxies is too small, the satellite halo is tidally disrupted early on in its dynamical evolution and its central SMBH is left on a wide orbit, too far from the center of the larger galaxy to ever merge with its SMBH \cite{Callegari09,2018MNRAS.tmp..160T}. A population of wandering SMBHs is therefore predicted to exist in galaxies \cite{2005MNRAS.358..913V,2010ApJ...721L.148B,2016MNRAS.460.2979V}. For mergers where the SMBHs can pair, i.e., find themselves in the newly formed core of the merger remnant, the second hurdle is to get close enough to form a gravitationally bound binary \cite{Yu2002,2007Sci...316.1874M,2015MNRAS.449..494R,2017MNRAS.471.3646P}. For SMBHs embedded in a singular isothermal sphere, a binary forms at a separation of $a = GM_{\rm BBH}/2\sigma^2 \simeq 0.2 \, {\rm pc} \, (M_{\rm BBH}/10^6{{\rm ~M}_{\odot}}) \, \left(\sigma/100 {\rm ~km} {\rm ~s}^{-1}\right)^{-2}$, below which the mass of the SMBHs exceeds the enclosed mass in stars, gas, and DM. The journey from the beginning of the merger, at tens of kpc, to the formation of the binary on pc to sub-pc scales takes between a few tens of Myr for compact galaxies at very high redshift ($z>3$--$4$) and several Gyr for larger, less dense galaxies at low redshift \cite{Yu2002,Volonteri:2002vz,2017ApJ...840...31D}. 
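The binary-formation separation $a = GM/2\sigma^2$ quoted above, together with its $\simeq 0.2$ pc fiducial value, can be checked directly with SI constants; a minimal sketch:

```python
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
MSUN = 1.989e30        # solar mass, kg
PC = 3.086e16          # parsec, m

def binary_formation_sep_pc(m_bbh_msun, sigma_kms):
    """Separation below which the SMBH pair outweighs the enclosed mass of a
    singular isothermal sphere of dispersion sigma: a = G*M_BBH / (2*sigma^2)."""
    sigma = sigma_kms * 1e3                     # km/s -> m/s
    return G * m_bbh_msun * MSUN / (2.0 * sigma**2) / PC

# Fiducial values from the text: M_BBH = 1e6 Msun, sigma = 100 km/s.
print(binary_formation_sep_pc(1e6, 100.0))      # ~0.215 pc, matching ~0.2 pc
```

The linear scaling with $M_{\rm BBH}$ and inverse-square scaling with $\sigma$ follow immediately from the expression.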
In the cases where the SMBHs form a bound binary within the Hubble time, the final crossing into the GW regime hinges on exchanges of energy through scattering with low-angular-momentum stars in the nucleus of the galaxy \cite{Begelman:1980vb}, on extraction of angular momentum through gravitational torques from a gas disc, which may result in the shrinking of the binary separation \cite{1980ApJ...241..425G}, or on a combination of the two processes. Recent results of direct $N$-body simulations, Monte Carlo methods, and scattering experiments are giving an optimistic view of what has been considered the main bottleneck of the binary evolution for almost 40 years: the ``final parsec problem,'' i.e., running out of low-angular-momentum stars \cite{Begelman:1980vb}. The evolution of SMBH binaries through stellar scattering seems to continue at a nearly constant rate, leading to merger in less than $\sim$1 Gyr \cite{2006ApJ...642L..21B,2015ApJ...810...49V,2015MNRAS.454L..66S} once rotation, triaxiality, and the granularity of the stellar distribution are taken into account. \begin{figure} \begin{center} \includegraphics[width=1.00\textwidth]{time_coal_gal_star_gas} \caption{Relation between ``time$_{\rm start}$,'' the cosmic time of onset of a galaxy-galaxy collision hosting SMBHs, and ``time$_{\rm coal}$,'' at which the SMBHs merge, for circular binaries with a primary BH of $10^8 {\rm ~M}_{\odot}$ and mass ratios $q=1$ and 0.1. Redshifts at start and coalescence are given in the same plot. Dotted and dashed lines refer to sinking times associated with the merger of the host halos and galaxies, the pairing of the SMBHs, and the shrinking of the binary subject to either stellar or gaseous processes. The black solid line represents ``time$_{\rm coal}={\rm time}_{\rm start}$.'' The Figure shows that galaxies colliding at $z\sim 1$ can display time delays as long as 3 Gyr. 
\label{delay}} \end{center} \end{figure} If the binary environment is dominated by gas, rather than stars, the binary is expected to evolve through interaction with the accretion disk(s) surrounding the SMBHs, through the so-called circumbinary disk phase. The disks are formed by the gas that inflows towards the SMBHs with even a small amount of angular momentum; in the single-SMBH case these are referred to as ``accretion discs'' \cite{1973A&A....24..337S}. Depending on the mass ratio $q$, the binary may or may not clear a gap within the circumbinary disk. When no gap is cleared (valid for $q \ll 1$), the less massive hole behaves as a fluid element within the accretion disk of the primary, thus experiencing what is known as Type I migration. When a gap is cleared (generally valid for $q > 0.01$), the formation of a central cavity slows down the evolution of the binary (Type II migration). The energy and angular momentum transfer between the binary and the circumbinary disk is mediated by streams leaking from the inner rim into the cavity. The pace of the supermassive binary BH (SMBBH) evolution depends on the detailed dynamics of the streams crossing the gap, which are partially accreted by the two SMBHs and partially flung back to the disk, extracting the binary's energy. Simulations generally concur that accretion into the central cavity is efficient enough to bring the SMBBH into the GW-dominated regime ($\sim 0.01$ pc) within $\sim$1-100 Myr \cite{ArmitageNarajan2005,2008ApJ...672...83M,Haiman09,rds+11,2012A&A...545A.127R,2012ApJ...749..118S,2013MNRAS.436.2997D, 2015ApJ...807..131S}. The role of AGN feedback on these scales, however, is still largely unexplored and may bring surprises. In summary, SMBHs in merging galaxies have a long journey that takes between 1 and 10 Gyr, depending on the cosmic time, the masses and mass ratios of the host galaxies, the galaxy morphologies, the stellar and gas content, and the masses of the SMBHs themselves. 
Figure \ref{delay} is an illustration of the delay timescales in the dynamics of SMBH mergers, during the pairing process in major mergers. This is a simplified example for circular binaries with a primary SMBH of $10^8 {\rm ~M}_{\odot}$ and mass ratios $q=1$ and 0.1. Here we included the time for halos to merge \cite{Boylan2008}, and then added an additional timescale for the BHs to pair, based on the formalism of \cite{2014ApJ...789..156M}, motivated by the recent results of \cite{2018MNRAS.tmp..160T}, who find that the halo merger timescale is insufficient to estimate the pairing timescale. We assumed 100 Myr from binary formation to coalescence for the gas-driven case and the fit proposed by \cite{2015MNRAS.454L..66S} for the stellar-driven case. Theoretical studies and predictions, however, are still far from being complete: this is a severely multi-scale problem, intimately connected with the processes of galaxy clustering on cosmological scales, which involves rich physics. Determining the distribution of merging times in SMBH mergers is critical, as only through detailed knowledge of this distribution and the associated processes are we able to bracket the uncertainties in the estimates of the SMBH merger rates, relevant for the LISA mission, and to provide reliable estimates of the expected GW signal in the nHz band, relevant to PTAs. \section{Probing supermassive black hole binaries with pulsar timing arrays} \label{Sec:PTA} \vspace{-3mm} {\it Contributors:} C.~Mingarelli, M.~Kramer, A.~Sesana \vspace{3mm} Millisecond pulsars are excellent clocks over long timespans, making them ideal tools to search for GWs~\cite{saz78,det79, Backer:1982, hd83, r89, Burke-Spolaor:2015xpf,l15, l17}. Indeed, an array of millisecond pulsars forms a galactic-scale nanohertz GW detector called a Pulsar Timing Array (PTA, \cite{fb90}). The GWs change the proper distance between the Earth and the pulsars, which induces a timing delay or advance in the pulsar pulses. 
The difference between the expected pulse arrival time and the actual arrival time, called the timing residual, is used to search for signatures of low-frequency GWs. The frequency band where PTAs operate is set by the length of the pulsar observations and by their cadence. Briefly, the lower limit is set by $1/T_\mathrm{obs}$, where $T_\mathrm{obs}$ is the total observation time, and the high-frequency limit is set by $1/\Delta t$, where $\Delta t$ is the cadence of the pulsar observations. This sets the sensitivity band between 1 nHz and 100 nHz. The most promising signals in the PTA band are due to the expected cosmic population of SMBBHs. The systems of interest here have masses $>10^8~M_\odot$, and can spend tens of millions of years emitting GWs in the relevant frequency band. The incoherent superposition of their signals gives rise to a stochastic GW background (GWB, see, e.g., \cite{rm95, jb03, wl03, shm+04, svc08}). On top of this, particularly nearby and/or very massive SMBBHs might be individually resolved~\cite{svv09, mls+2017}, providing interesting targets for multimessenger observations. Moreover, the memory effect following the merger of those systems may also give rise to detectable bursts of GW radiation~\cite{vHL10, mcc17}. Gravitational radiation may also originate from cosmic strings~\cite{Siemens:2006vk, lss09, sbs13}, and primordial GWs from, e.g., inflation~\cite{Lasky:2015lej}. Here we focus on the stochastic GWB and continuous GW sources, but it is worth mentioning that the correlation function present in GWB searches depends on the underlying theory of gravity, the distribution of GW power on the sky, and the intra-pulsar distances. Indeed, additional GWB polarizations such as breathing modes can in principle be detected with PTA experiments, e.g. \cite{ljp08, ss12}, as can departures from GWB isotropy~\cite{msmv13, tg13, ms14, Cornish:2014rva, grt+14,TaylorEtAl:2015}. 
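The band limits described above follow directly from the observing-campaign parameters; a minimal sketch (the $T_\mathrm{obs}$ and cadence values are assumed for illustration, not from the text):

```python
# Sketch of the PTA sensitivity band: f_low = 1/T_obs and f_high = 1/dt,
# following the convention used in the text (cadence limit, not Nyquist).
YR = 3.156e7  # seconds per year

def pta_band(t_obs_yr, cadence_days):
    """Return (f_low, f_high) in Hz for a PTA campaign."""
    f_low = 1.0 / (t_obs_yr * YR)
    f_high = 1.0 / (cadence_days * 86400.0)
    return f_low, f_high

# Assumed example: a 15-yr dataset with 14-day observation cadence.
f_lo, f_hi = pta_band(t_obs_yr=15, cadence_days=14)
print(f"{f_lo:.1e} Hz -- {f_hi:.1e} Hz")
```

For these assumed values the band spans roughly $2\times10^{-9}$ to $8\times10^{-7}$ Hz, consistent with the nHz regime quoted in the text.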
Clustering of large-scale structure, resulting in an overdensity of merging SMBBHs, can lead to GWB anisotropy, as can nearby continuous GW sources which are individually unresolvable, but can contribute to GWB anisotropy at the level of $\sim 20\%$ of the isotropic component, see \cite{mls+2017}. Moreover, pulsars separated by less than $\sim 3 ^{\circ}$ violate the short-wavelength approximation~\cite{ms14, mm18}, used to write down the Hellings and Downs curve, and exhibit an enhanced response to GWs which may help in their detection. World-wide, PTA experiments have been taking data for over a decade, with efforts in North America led by the North American Nanohertz Observatory for GWs (NANOGrav, see e.g. \cite{Arzoumanian:2017puf}), in Europe by the European PTA (EPTA)~\cite{desvignes+:2016}, and in Australia by the Parkes PTA (PPTA)~\cite{ShannonEtAl:2015}. The union of these PTAs forms the International Pulsar Timing Array (IPTA)~\cite{VerbiestEtAl:2016}, in which the PTAs share data in an effort to accelerate the first detection of nHz gravitational radiation, and to learn more about the individual millisecond pulsars. For a more comprehensive overview of PTA experiments, see e.g.~\cite{Burke-Spolaor:2015xpf,l15,l17}. \subsection{The Gravitational-Wave Background} The cosmic merger history of SMBBHs is expected to form a low-frequency GWB, which may be detected by PTAs in the next few years~\cite{Siemens:2013zla,rsg15,tve+16}. The time to detection depends strongly on the number of pulsars in the array, the total length of the dataset, and the underlying astrophysics which affects SMBBH mergers, such as the stellar hardening process, interactions with accretion disks, binary eccentricity, and potentially SMBH stalling. 
Searches for the GWB have resulted in evermore stringent limits on the amplitude $A$ of the GWB, reported at a reference frequency of 1/yr: \begin{equation} h_c = A \left( \frac{f}{\mathrm{yr^{-1}}}\right)^{-2/3} \, , \end{equation} where $h_c$ is the characteristic strain of the GWB~\cite{Phinney:2001di}. This simple power-law scaling assumes that the SMBBH systems are circular when emitting GWs, and that they are fully decoupled from their environment. Binary eccentricity and interactions with gas and stars around the binary can deplete the GW signal at very low frequencies (equivalently, at wide binary separations), causing the GW strain spectrum to turn over~\cite{ks11, rws+14, scm15, ArzoumanianEtAl:2016}. On the other hand, the amplitude of the GWB is affected by the abundance and mass range of the cosmic population of SMBHs. Therefore, future detections of a stochastic GWB will allow us to constrain both the overall population of SMBBHs and the physics driving their local dynamics~\cite{Sesana13,2017PhRvL.118r1102T, 2017MNRAS.468..404C, rph+18, lbh+17,rhh+16, iih18}. Current non-detections have been used in \cite{Middleton:2015oda} to challenge the popular $M_\mathrm{BH}-M_\mathrm{bulge}$ relation from \cite{kh13}. However, a full analysis taking into account uncertainties in the merger rate, SMBBH dynamics and the possibility of stalling is needed to draw firm conclusions \cite{2018NatCo...9..573M}. The current upper limits on $A$ from the various PTA experiments are similar and are improving with time. From the EPTA this is $A<3\times 10^{-15}$~\cite{Lentati:2015qwp}, from the PPTA this is $A<1\times 10^{-15}$~\cite{ShannonEtAl:2015}, from NANOGrav this is $A <1.45\times 10^{-15}$~\cite{Arzoumanian:2018saf}, and the IPTA limit is $A<1.7\times 10^{-15}$~\cite{VerbiestEtAl:2016}. 
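The $f^{-2/3}$ power law above is straightforward to evaluate; this sketch uses the NANOGrav 11-yr upper limit quoted above as an illustrative amplitude:

```python
# Characteristic strain of a circular-binary GWB, h_c = A * (f / yr^-1)^(-2/3).
# The default amplitude is the NANOGrav 11-yr upper limit quoted in the text.
F_YR = 1.0 / 3.156e7   # reference frequency 1/yr, in Hz

def h_c(f_hz, A=1.45e-15):
    """Characteristic strain of the GWB at frequency f_hz (Hz)."""
    return A * (f_hz / F_YR) ** (-2.0 / 3.0)

# At the reference frequency f = 1/yr the strain equals A;
# at lower frequencies the background is stronger.
print(h_c(F_YR))
print(h_c(1e-9))
```

The negative spectral slope is why PTA sensitivity at the lowest accessible frequencies (longest datasets) matters most for a GWB detection.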
In order to improve on these limits, more sensitive observations are needed, which can and will be provided by improved instrumentation and new telescopes like FAST, MeerKAT and eventually the SKA \cite{Janssen:2014dka}. Additionally, systematics need to be addressed. For instance, solar system ephemeris errors can mimic a GWB signal, if the underlying data are sufficiently sensitive~\cite{thk+16}. Here, mitigation techniques can be applied, as already done in the recent analysis of the NANOGrav 11-year data \cite{Arzoumanian:2018saf}, while other effects, such as interstellar weather, may be best addressed with multi-frequency observations. Future IPTA results will take these and other effects into account. \begin{figure*} \centering \includegraphics[width=4in]{gwlandscape2.pdf} \caption[The Gravitational-Wave Landscape]{The spectrum of gravitational radiation from low-frequency (PTA) to high-frequency (LIGO). At very low frequencies pulsar timing arrays can detect both the GWB from supermassive black hole binaries, in the $10^8-10^{10}~M_\odot$ range, and radiation from individual binary sources which are sufficiently strong. We assume 20 pulsars with 100 ns timing precision with a 15 year dataset for IPTA, and 100 pulsars timed for 20 years with 30~ns timing precision for SKA. Both estimates assume 14-day observation cadence. This Figure was reproduced with minor changes from~\cite{mm18}, who in turn also used free software from~\cite{Moore:2014lga}.} \label{fig:gwspectrum} \end{figure*} \subsection{Continuous Gravitational Waves} Individual nearby and very massive SMBBHs emitting nHz GWs can also be detected with PTA experiments~\cite{lwk+11,2012PhRvD..85d4034B,abb+14,Babak:2015lua}. These SMBBHs likely reside in giant elliptical galaxies, though the timescale between the galaxy merger and the subsequent SMBH merger is poorly understood. 
Unlike LIGO or LISA sources, these SMBBHs are in the PTA band for many millions of years, and will likely merge outside both the PTA and LISA bands. The sky- and polarization-averaged strain of a continuous GW source is (in geometric units, $G=c=1$) \begin{equation} h = \sqrt{\frac{32}{5}}\frac{\mathcal{M}_c^{5/3}[\pi f (1+z)]^{2/3}}{D_L} \, , \end{equation} where $\mathcal{M}_c^{5/3} = [q/(1+q)^2] M^{5/3}$ is the binary chirp mass, $q<1$ is the binary mass ratio, $M$ is the total binary mass, $f$ is the GW frequency and $D_L$ is the luminosity distance to the binary. The time to detection of continuous GW sources has been estimated in various ways: namely, by simulating a GW source population based on cosmological simulations~\cite{svv09,sv10,rsg15}, or by using an underlying galaxy catalog to estimate which nearby galaxies can host SMBBH systems that are PTA targets~\cite{mls+2017,spl+14}. Both approaches largely agree that at least one SMBBH system will be detected in the next decade or so, with the details depending on the amount of red noise in the pulsars. Detecting individual SMBBH systems is expected to shed light on the so-called {\it final parsec problem}, providing valuable insights into how SMBBHs merge. If the pulsar distances are well measured, it may also be possible to measure the SMBH spins using the pulsar terms~\cite{Mingarelli2012}. Importantly, resolving an individual SMBBH with PTAs will open new avenues in multimessenger astronomy \cite{srr+12, tmh12, Burke-Spolaor:2013aba, m17}. The combined GW and electromagnetic signals will allow us to pin down the properties of the binary and its environment, and to study the dynamics of the SMBBH pairing and the coupling with the surrounding gaseous disc, if present~\cite{Sesana13,Burke-Spolaor:2015xpf}. Even a single joint detection will allow us to nail down the distinctive electromagnetic properties of accreting SMBBHs. Analogous systems can then be searched for in the archival data of large surveys, to quantify their overall cosmic population. 
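The strain formula above can be evaluated numerically by converting the chirp mass and luminosity distance into geometric units (seconds). The binary parameters in this sketch are assumed for illustration, not taken from the text:

```python
import math

# Sky- and polarization-averaged strain of a circular SMBBH,
# h = sqrt(32/5) * Mc^{5/3} [pi f (1+z)]^{2/3} / D_L   (G = c = 1),
# evaluated with masses and distance converted to seconds.
G = 6.674e-11; c = 2.998e8; MSUN = 1.989e30; MPC = 3.086e22  # SI units

def cw_strain(m_tot_msun, q, f_hz, d_l_mpc, z=0.0):
    """Strain of a circular SMBBH of total mass m_tot_msun, mass ratio q<1."""
    mc = (q / (1.0 + q) ** 2) ** (3.0 / 5.0) * m_tot_msun * MSUN  # chirp mass, kg
    mc_s = G * mc / c ** 3          # chirp mass in seconds
    d_s = d_l_mpc * MPC / c         # luminosity distance in seconds
    return (math.sqrt(32.0 / 5.0) * mc_s ** (5.0 / 3.0)
            * (math.pi * f_hz * (1.0 + z)) ** (2.0 / 3.0) / d_s)

# Assumed example: an equal-mass 10^9 Msun binary at 20 Mpc emitting at 10 nHz.
print(f"{cw_strain(1e9, 1.0, 1e-8, 20.0):.1e}")
```

For such a nearby, very massive system the strain comes out at a few $\times 10^{-15}$, i.e. comparable to current PTA upper limits, which is why only the closest and heaviest binaries are plausible individually resolvable sources.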
A promising way forward is to check candidates mined systematically in time-domain surveys for peculiar spectral signatures, hence producing credible targets for PTAs. \section{Numerical Simulations of Stellar-mass Compact Object Mergers}\label{Sec:NumericalRelativity} \vspace{-3mm} {\it Contributor:} A.~Perego \vspace{3mm} \subsection{Motivations} Compact binary mergers (CBMs) comprising at least one NS (i.e. BNS or BH-NS mergers) are unique cosmic laboratories for fundamental physics at the extreme. These powerful stellar collisions involve all fundamental interactions in a highly dynamical and intense regime (see, e.g., \cite{Rosswog:2015nja,Shibata:2011jka,Baiotti:2016qnr,Paschalidis:2016agf} and references therein for more comprehensive overviews). Their study is extremely challenging and requires multidimensional, multiphysics and multiscale models. General relativistic hydrodynamical (GRHD) simulations, including a detailed, microphysical description of matter and radiation, are necessary to model the merger and to produce robust predictions for a large variety of observables. Their comparison with observations will provide an unprecedented test of our understanding of the fundamental laws of Nature, in regimes that will never be accessible in terrestrial laboratories. CBMs are events of primary relevance in astrophysics and fundamental physics. They are intense sources of neutrinos \cite{1989Natur.340..126E} and GWs \cite{Peters:1964zz}, and primary targets for the present generation of ground-based GW detectors. Both theoretical and observational arguments support CBMs as progenitors of short gamma-ray bursts (sGRBs) \cite{1986ApJ...308L..43P,Berger:2013jza}. At the same time, they are the places where the heaviest elements in the Universe (including gold and uranium) are synthesized and ejected into space, via the so-called $r$-process nucleosynthesis \cite{1974ApJ...192L.145L,1989ApJ...343L..37M,1999ApJ...525L.121F}. 
The radioactive decay of these freshly synthesized, neutron-rich elements in the ejecta powers a peculiar EM transient, called a kilonova (or macronova), hours to days after the merger \cite{1998ApJ...494L..45P,Kulkarni:2005jw}. Despite happening far away from the merger remnant, the $\gamma$-ray emission, the $r$-process nucleosynthesis, and the kilonova transient are extremely sensitive to physical processes happening where the gravitational curvature is strongest. In particular, the equation of state (EOS) of NS matter is thought to play a central role in all the emission processes, since it determines the compactness of the merging NSs, their tidal deformations, the lifetime and spin frequency of the remnant, and the amount and properties of the ejected mass. On August 17, 2017, the first detection of GWs from a CBM event compatible with a BNS merger marked the beginning of the multimessenger astronomy era \cite{TheLIGOScientific:2017qsa,GBM:2017lvd} and remarkably confirmed several years of intense and productive lines of research. The GW signal, GW170817, provided the link to associate a sGRB, GRB170817A \cite{Monitor:2017mdv}, and the kilonova transient AT2017gfo \cite{Pian:2017gtc,Tanvir:2017pws,Coulter:2017wya} with a CBM localized in the galaxy NGC4993. Moreover, this single event set constraints on the EOS of NS matter and gave an independent measure of the Hubble constant \cite{Abbott:2017xzu}. Many relevant aspects of the merger and of the emission processes are, however, not yet fully understood and relate to open questions in fundamental physics and astrophysics. The interpretation of the observational data strongly relies on the theoretical modeling of compact binary sources in GR. In recent years, the unprecedented growth of computing power has allowed the development of increasingly sophisticated numerical models. The most advanced simulations in Numerical Relativity (NR) are set up using a first-principles, ab-initio approach. 
Neutrino radiation and magnetic fields are thought to play a key role during the merger and its aftermath. Their inclusion in detailed NR simulations, together with microphysical, finite-temperature EOSs for the description of matter at and above nuclear saturation density, is one of the present challenges in the field. Reliable predictions require not only the inclusion of all the relevant physics, but also accurate numerical schemes and numerically converging results. Robust and computationally stable discretizations are also essential to perform long-term and high-resolution simulations. \subsection{Recent results for binary neutron star mergers} \subsubsection{Gravitational waves and remnant properties} A primary outcome of BNS simulations is accurate gravitational waveforms. Numerical models provide consistent and continuous GW signals for all three major phases: inspiral, merger, and post-merger/ring-down. As discussed earlier, key parameters encoded in the waveform are the masses and the spins of the coalescing objects, the quadrupolar tidal polarizability (depending on the NS EOS), and possibly the residual binary eccentricity. For the inspiral phase, state-of-the-art simulations cover at least 10 to 20 orbits before coalescence~\cite{Bernuzzi:2014owa,2017PhRvD..96h4060K}. Due to the cold character of this phase, zero-temperature EOSs for the NS matter are often used. High-order finite difference operators have been shown to be key to reaching high-confidence numerical results \cite{Radice:2013hxh}. Error control on the numerical results is essential for using NR waveforms in the analysis of data streams coming from GW detectors. State-of-the-art analyses of the error budget include the study of numerical convergence, requiring multiple resolutions (4 or 5), and an estimate of the error due to the finite-distance extraction of the GW signal within the computational domain. 
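The convergence studies mentioned above can be illustrated with a standard self-convergence test. This sketch (not the authors' actual pipeline) recovers the convergence order $p$ from results at three resolutions, assuming the error model $Q(h) = Q_{\rm exact} + C h^p$:

```python
import math

# Self-convergence test: for a quantity Q computed at grid spacings
# h, h/r, h/r^2 (refinement factor r), the error model Q(h) = Q_exact + C h^p
# gives (Q_h - Q_{h/r}) / (Q_{h/r} - Q_{h/r^2}) = r^p, so p follows from a log.
def convergence_order(q_h, q_h2, q_h4, refinement=2.0):
    """Estimate the convergence order p from three self-convergent results."""
    ratio = (q_h - q_h2) / (q_h2 - q_h4)
    return math.log(ratio) / math.log(refinement)

# Synthetic data: exact value 1.0 with error 0.1*h^2 (a second-order scheme).
q = [1.0 + 0.1 * h ** 2 for h in (1.0, 0.5, 0.25)]
print(convergence_order(*q))
```

No knowledge of the exact solution is needed, which is why this kind of three-resolution test is the workhorse for NR error budgets; a fourth or fifth resolution, as mentioned above, checks that the measured order is stable.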
Only recently have initial conditions for rotating NSs become available \cite{Tichy:2012rp}. They were used to follow consistently, for the first time, the inspiral phase in numerical relativity, including spin precession as well as spin-spin and spin-orbit couplings \cite{Bernuzzi:2013rza,2015PhRvD..92l4007D,2017PhRvD..95d4045D}. The high computational costs have limited the exploration of the wide BNS parameter space. However, thanks to the large set of presently available waveforms, detailed comparisons with Analytical Relativity results (including post-Newtonian and Effective One Body approaches) are nowadays possible~\cite{Bernuzzi:2014owa,2016PhRvD..93f4082H,Hinderer:2016eia}. Moreover, large databases of NR waveforms have led to the production of purely NR-based waveform models \cite{2017CQGra..34j5014D}. These results are critical to improve the quality of the semi-analytic waveforms employed in current GW data analysis. The merger and its aftermath are highly dynamical and non-linear phases. A systematic study of the remnant properties and of its GW emission is presently made possible by large and extensive sets of NR simulations (e.g., \cite{2013PhRvD..88d4026H,Bauswein:2013jpa,Zappa:2017xba}), sometimes extending for tens of ms after the merger. Detailed analysis of the post-merger gravitational waveforms has revealed the presence of characteristic high-frequency peaks, which may be accessible to third-generation GW detectors. These features in the GW spectrum are associated with properties of the nuclear EOS \cite{2013PhRvD..88d4026H,Bauswein:2015vxa,Rezzolla:2016nxn}, with the presence of a magnetised long-lived massive NS \cite{2017PhRvD..95f3016C}, with the development of one-arm spiral instabilities \cite{Paschalidis:2015mla,Radice:2016gym,Lehner:2016wjg}, or with the convective excitation of inertial modes inside the remnant \cite{DePietri:2018tpx}. However, a firm understanding of these structures in the GW frequency spectrum is still in progress. 
The remnant fate depends primarily on the masses of the colliding NSs and on the nuclear EOS. The collapse timescales of metastable remnants to BHs crucially rely on physical processes that are still not fully understood, including angular momentum redistribution (possibly of magnetic origin) and neutrino cooling. A prompt collapse to a BH, expected in the case of sufficiently massive NSs and a soft EOS, results in the most luminous GW emission at merger. For stiffer EOSs or lower NS masses, differential rotation and thermal support can temporarily prevent the collapse of an object whose mass is larger than the mass of a maximally rotating NS, forming a so-called hypermassive NS (HMNS). Mergers producing an HMNS emit the largest amount of energy in GWs, since an intense luminosity is sustained for several dynamical timescales. If the remnant mass is below the maximum mass of a maximally rotating NS or even of a non-rotating NS, a supramassive or a massive NS can form, respectively. Gravitational waveforms for a supramassive or massive NS are similar to those of the HMNS case, but weaker. The detection of this part of the spectrum is beyond the capability of the present generation of GW detectors \cite{Abbott:2017dke}. NR simulations consistently take into account the angular momentum emitted in GWs and in ejected mass, and predict the angular momentum of the final remnant to be $0.6 \lesssim J/M^2 \lesssim 0.85$. In the HMNS and supramassive NS cases, the super-Keplerian value of $J$ provides a large reservoir to power subsequent angular momentum ejection. Within the first dynamical timescale, this leads to the formation of a massive, thick disk around the central remnant. \subsubsection{Matter ejection and electromagnetic counterparts} Numerical simulations are necessary to quantitatively model matter ejection from BNS mergers. Different ejection mechanisms result in different ejecta properties. 
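The remnant taxonomy described above can be summarized schematically by comparing the remnant mass with the maximum masses of non-rotating and rotating configurations. The thresholds in this sketch (a maximally rotating maximum mass of $\sim 1.2\,M_{\rm TOV}$ and a prompt-collapse threshold of $\sim 1.5\,M_{\rm TOV}$) are illustrative assumptions, not values from the text:

```python
# Schematic BNS remnant classification. M_TOV is the maximum mass of a
# non-rotating NS; k_rot and k_prompt are ASSUMED order-of-magnitude
# multipliers for the maximally rotating and prompt-collapse thresholds.
def remnant_type(m_rem, m_tov, k_rot=1.2, k_prompt=1.5):
    """Return 'stable NS', 'supramassive NS', 'HMNS', or 'prompt BH'."""
    if m_rem <= m_tov:
        return "stable NS"            # stable even without rotation
    if m_rem <= k_rot * m_tov:
        return "supramassive NS"      # held up by uniform rotation
    if m_rem <= k_prompt * m_tov:
        return "HMNS"                 # held up by differential rotation / thermal support
    return "prompt BH"

for m in (2.0, 2.5, 2.9, 3.5):
    print(m, remnant_type(m, m_tov=2.2))
```

The decision chain mirrors the text: the softer the EOS (lower $M_{\rm TOV}$) or the heavier the binary, the further down the chain the remnant lands.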
In addition to GW emission, BNS simulations predict the ejection of both tidal (cold) and shocked (hot) \textit{dynamic ejecta}, a few ms after the GW peak \cite{2012MNRAS.426.1940K,Bauswein:2013jpa,2013PhRvD..88d4026H,Sekiguchi:2015dma,Foucart:2015gaa,Radice:2016dwd,2017PhRvD..96l4005B}. The most recent NR simulations employing microphysical, finite-temperature EOSs predict between $10^{-4}$ and a few $10^{-3}$ M$_{\odot}$ of dynamic ejecta, moving at $v \sim 0.3\,c$. This range covers both intrinsic variability in the binary (e.g., the NS masses) and uncertainties in the nuclear EOS. However, results for the same systems from different groups agree only within a factor of a few. Differences in the treatments of the NS surface and of the floor atmosphere, as well as in the microphysical content and in the extraction of matter properties at finite radii, possibly account for these discrepancies. In contrast to what was previously thought, state-of-the-art simulations including weak reactions show that neutrino emission and absorption decrease the neutron richness of the equatorial ejecta and, more significantly, of the high-latitude ejecta. The former can still synthesize the heaviest $r$-process elements \cite{Wanajo:2014wha}, while for the latter the formation of lanthanides can be inhibited. These results have been recently confirmed by parametric studies based on NR models \cite{Goriely:2015,2018CQGra..35c4001M}. Long-term (100s of ms) simulations of the merger aftermath show \textit{wind ejecta} coming from the remnant and the accretion disk. These winds are powered by neutrino absorption \cite{Perego:2014fma,2017ApJ...846..114F} or, over the longer disk lifetime, by viscous processes of magnetic origin and nuclear recombination inside the disk \cite{2013MNRAS.435..502F,Just:2015ypa,Foucart:2015gaa,Siegel:2017jug,Fujibayashi:2017puw}. 
If an HMNS has formed, the neutrino emission is expected to be more intense, and the differential rotation can also power winds of magnetic origin \cite{Siegel:2014ita}. The unbound mass can add up to several $10^{-2}$ M$_\odot$, depending on the disk mass, and moves at $\sim 0.1~c$. Weak processes, including neutrino irradiation, decrease the neutron richness, particularly at high latitudes and for very long-lived remnants \cite{2017MNRAS.472..904L}. This implies the production of the first {\it r}-process peak in $\nu$-driven winds and viscous ejecta. In the former case, the nucleosynthesis of lanthanides is highly suppressed, while in the latter the wider distribution of neutron richness leads to the production of possibly all $r$-process elements, from the first to the third peak (e.g., \cite{Siegel:2017jug, 2017MNRAS.472..904L,2017PhRvD..96l3015W,Just:2015ypa}). Only a few simulations of the merger aftermath are presently available, and only a few of them were performed in an NR framework. While the study of the viscosity-driven ejecta has led to partial agreement between different groups, simulations of $\nu$- and magnetically-driven winds are still very few, and additional work is required. The different compositions and kinematic properties of the different ejecta channels have implications for the magnitude, color, and duration of the kilonova (see \cite{2016ARNPS..66...23F} and \cite{Metzger:2016pju} for recent reviews). The presence of lanthanides in the ejecta is expected to significantly increase the photon opacity, due to the presence of millions of mostly unknown absorption lines \cite{Barnes:2013wka}. Detailed radiative transfer codes provide the most accurate models for the EM emission over timescales ranging from a few hours up to a few weeks \cite{Tanaka:2013ana,2015MNRAS.450.1777K,Wollaeger:2017ahm}. Due to the uncertainties in the opacity treatment and in the geometry of the ejecta, large differences can still arise between different models. 
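The sensitivity of the kilonova to the ejecta opacity can be illustrated with the photon diffusion timescale, $t_{\rm peak} \sim \sqrt{\kappa M_{\rm ej}/(4\pi c v)}$. Order-unity prefactors are dropped and the parameter values below are assumed for illustration, not taken from the text:

```python
import math

# Order-of-magnitude kilonova peak time: the light curve peaks roughly when
# the photon diffusion time through the expanding ejecta equals the elapsed
# time, giving t_peak ~ sqrt(kappa * M_ej / (4 pi c v)) up to O(1) factors.
C = 2.998e10        # speed of light, cm/s
MSUN = 1.989e33     # solar mass, g
DAY = 86400.0       # seconds per day

def t_peak_days(kappa_cm2_g, m_ej_msun, v_over_c):
    """Kilonova peak time (days) for opacity kappa, ejecta mass, and velocity."""
    m = m_ej_msun * MSUN
    v = v_over_c * C
    return math.sqrt(kappa_cm2_g * m / (4.0 * math.pi * C * v)) / DAY

# Assumed parameters: lanthanide-poor ("blue") vs lanthanide-rich ("red") ejecta.
print(f"blue: {t_peak_days(0.5, 0.01, 0.2):.1f} d")
print(f"red:  {t_peak_days(10.0, 0.04, 0.1):.1f} d")
```

Even this crude scaling reproduces the hierarchy discussed in the text: low-opacity, lanthanide-free material peaks within about a day, while lanthanide-rich material peaks on a timescale of days to a week.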
More phenomenological approaches, characterized by significantly lower computational costs and accuracy, allow a more systematic and extensive exploration of the different parameters in the ejecta properties \cite{2014MNRAS.439..757G,Rosswog:2016dhy}. Based on these models, it was suggested that, due to the absorption of neutrinos at high latitudes, the presence of lanthanide-free material (characterized by lower opacity) could result in a bluer and earlier ($\sim$ a few hours) emission, compared with the redder and dimmer emission powered by material enriched in lanthanides \cite{2014MNRAS.441.3444M,Perego:2014fma,Martin:2015hxa}. \subsubsection{GW170817 and its counterparts} The detection of GW170817 and of its counterparts represents an unprecedented chance to test our understanding of BNS mergers. The analysis of the gravitational waveform and the subsequent parameter estimation made use of large sets of approximate waveform templates. The most recent results obtained in NR could potentially improve this analysis and set more stringent constraints on the intrinsic binary parameters and on the EOS of NS matter. The interpretation of AT2017gfo as a kilonova transient revealed the need to consider at least two different components to explain both the early blue and the subsequent red emission \cite{Tanvir:2017pws,Villar:2017wcc}. More recently, the observed light curves have been traced back to the properties and the geometry of the ejecta predicted by the most recent numerical models. This analysis has confirmed the multi-component and anisotropic character of the BNS ejecta, as well as the central role of weak interactions in setting the ejecta composition \cite{Tanaka:2017qxj,Perego:2017wtu,2017ApJ...848L..34M}. Finally, the combination of the information extracted from the GW and EM signals made it possible to set more stringent constraints on the nuclear EOS, in a genuine multimessenger approach \cite{2017ApJ...850L..34B,Radice:2017lry}. 
Both very stiff and very soft EOSs seem to be disfavored by this kind of analysis. A similar approach has been used to show that the formation of an HMNS is the most probable outcome of this event \cite{2017ApJ...850L..19M}. \subsection{Recent results for black hole-neutron star mergers} Modelling of BH-NS mergers in GR shares many similarities with BNS merger modelling, but also reveals differences and peculiarities (see \cite{Shibata:2011jka} for a review). The larger ranges in masses and spins expected for stellar-mass BHs significantly increase the parameter space. The large mass ratio between the colliding objects makes NR simulations more expensive, since the inspiral requires larger initial distances to cover a sufficient number of orbits, and the dissimilar lengthscales require higher resolutions. These numerical challenges have limited the number of models and waveforms that are presently available in comparison with the BNS case. While the final remnant is always a spinning BH, the presence of a disk around it depends on the mass ratio and the BH spin \cite{2013PhRvD..88d1503K,2013ApJ...776...47D,Foucart:2014}. Larger BHs with moderate spins swallow the NS during the dynamical plunge phase, while the decrease of the last stable orbit in the case of aligned, fast-spinning BHs leads more easily to the tidal disruption of the NS (in the form of mass shedding) and to the formation of a massive accretion disk (with masses possibly in excess of 0.1 M$_\odot$). Misalignment between the orbital and the BH angular momentum induces non-trivial precession, encoded both in the GW signal and in the amount of mass outside the BH horizon. The GW spectrum shows a characteristic cutoff frequency directly related to the orbital frequency at the NS tidal disruption or at the last stable orbit. Since this frequency decreases with increasing BH spin and with decreasing compactness, its detection could provide direct constraints on the EOS of NS matter. 
Dynamical mass ejection from merging BH-NS systems is expected to be caused by the tidal torque when the NS is disrupted, and to happen predominantly along the equatorial plane. The ejected mass can be significantly larger than in the BNS case, and neutrino irradiation is expected to have only a minor effect in changing the ejecta composition~\cite{Roberts:2016igt}. Similarly to the BNS case, the viscous evolution of the disk drives significant matter outflows over the disk lifetime, while the absence of a central NS reduces the emission of neutrinos and the ejection of $\nu$- or magnetically-driven winds \cite{2015PhRvD..92f4034K}. The weaker effect of neutrino processes on the dynamic ejecta always results in robust $r$-process nucleosynthesis between the second and the third $r$-process peaks, while viscous disk winds can still synthesize all {\it r}-process nuclei. The high lanthanide content in the ejecta is expected to produce a redder kilonova transient a few days after the merger \cite{2017CQGra..34o4001F}. \subsection{Perspectives and future developments} A complete and dense exploration of the wide parameter space, including a consistent treatment of NS and BH spins and of their evolution, is one of the major challenges in the study of CBMs in NR. The extraction of waveforms from long inspiral simulations at high resolution, employing accurate high-order numerical schemes, is necessary to build a robust NR waveform database. These templates, in combination with more analytical approaches, can guide the construction of complete and coherent semi-analytical waveforms for GW data analysis, spanning many orbits from the inspiral to the actual coalescence. The large variety of properties in the colliding systems potentially translates into a broad distribution of properties for the remnant and for the ejecta. Long-term simulations of the merger and of its aftermath are the necessary tools to provide a complete and accurate description of the ejecta. 
The inclusion of the physics necessary to model the remnant and the matter ejection is still in its early stages. In particular, the consistent inclusion of neutrino radiation and magnetic-field evolution in NR models is extremely challenging. Leakage schemes for neutrino radiation are presently implemented in many CBM models (see, e.g., the introduction of \cite{2016ApJS..223...22P} for a detailed discussion). They provide a robust and physically motivated treatment of neutrino cooling. However, they are too inaccurate to model the long-term evolution and neutrino absorption in optically thin conditions. State-of-the-art simulations include gray moment schemes \cite{Radice:2016dwd,2016PhRvD..94l3016F}. Nevertheless, the significant dependence of the neutrino cross-sections on particle energy requires spectral schemes. Large velocity gradients inside the computational domain make the accurate transformation of the neutrino energy spectrum between different observers, and its transport, highly non-trivial. Moreover, the application of moment schemes to colliding rays in free-streaming conditions leads to closure artifacts in the funnel above the remnant. Monte Carlo radiation schemes represent an appealing alternative, but their computational cost is still far beyond our present capabilities. The usage of Monte Carlo techniques to provide more physical closures in moment schemes seems a more viable approach \cite{Foucart:2018}. Neutrino masses have a potentially high impact on the propagation and flavor evolution of neutrinos from CBMs. New classes of neutrino oscillations, including the so-called matter-neutrino resonances, can appear above the remnant of BNS and BH-NS mergers and influence the ejecta nucleosynthesis \cite{2016PhRvD..93d5021M,2017PhRvD..96d3001T}. However, numerical modelling of these phenomena is still in its infancy \cite{2016PhRvD..94j5006Z,2017PhRvD..95b3011F,2017PhRvD..96l3015W}. 
General relativistic magneto-hydrodynamics codes are presently available, and they have started to access the spatial scales necessary to consistently resolve the magneto-rotational instabilities (MRIs, \cite{1960PNAS...46..253C}). The latter appear in the presence of differential rotation and locally amplify the magnetic field. However, global simulations obtained with unprecedented spatial resolution (of the order of 12.5 m) do not show convergence yet, showing that we are still unable to resolve the most relevant scales for magnetic-field amplification in a self-consistent way \cite{Kiuchi:2017zzg}. In the past, sub-grid models have been used as alternatives to ab-initio treatments \cite{2015PhRvD..92d4045P,2016CQGra..33p4001E,2017PhRvD..95f3016C}. More recently, two different formulations of an effective MHD-turbulent viscosity in General Relativity have been proposed and used in both global BNS and long-term aftermath simulations \cite{Radice:2017zta,Shibata:2017jyf}. Moreover, the role of bulk viscosity in the post-merger phase has begun to be investigated \cite{Alford:2017rxf}. Accurate modelling of the GW and EM signals of CBMs is key to setting tight constraints on the EOS of NS matter, which still represents one of the major sources of uncertainty in both fundamental physics and numerical models. If the connection between CBMs and short GRBs is confirmed by future detections, the knowledge of the remnant fate, as well as of the environment around it, will be crucial to address the problem of short GRB engines, including jet formation, collimation, and break-out. The accurate modelling of small-scale magnetic-field amplification, as well as of heat redistribution due to neutrino transport, is key to predicting the lifetime of the remnant for cases of astrophysical interest. 
In fact, the presence of rapidly rotating and magnetized massive NSs \cite{Perego:2017fho,2017PhRvD..95f3016C}, or of fast-spinning BHs, is anticipated to play an essential role in solving the puzzle of the short GRB central engine. The high computational costs required by long-term, high-resolution, numerically accurate and multiphysics models of CBMs point to the need to develop a new generation of numerical schemes and codes for the new generations of large supercomputers. These codes will need, for example, to improve scalability and to employ vectorization more heavily within the hybrid (shared \& distributed) parallelization paradigm. This is perhaps the greatest challenge in the NR field for the years to come. \section{Electromagnetic Follow-up of Gravitational Wave Mergers }\label{Sec:EMfollowup} \vspace{3mm} {\it Contributor:} A.~Horesh \vspace{3mm} The year 2017 will be remembered as the year in which extraordinary achievements in observational astrophysics were made. On 2017 August 17, the LIGO and Virgo detectors detected for the first time GWs from the merger of two NSs, an event dubbed GW\,170817. Adding to the excitement was the detection of gamma-ray emission, only two seconds after the merger event, by the {\it Fermi} satellite. The sensational discovery of a GW signal with coincident EM emission led to one of the most comprehensive observational campaigns worldwide. A few hours after the GW detection, the LIGO and Virgo detectors managed to pinpoint the position of the GW event to an error region of $34$ square degrees (see Fig.~\ref{fig:gw17} below). This area was small enough that international teams of astronomers, using more than a hundred instruments around the world on the ground and in space, could conduct an efficient search for EM counterparts. Roughly $11$ hours after the GW event, an optical counterpart was announced by the 1M2H (`Swope') project team \cite{Coulter:2017wya,Kilpatrick:2017mhz}. 
Many other teams around the world independently identified and observed the counterpart in parallel to the Swope team discovery. The optical detection of the counterpart pinpointed the source to a precise location in the galaxy NGC4993 at a distance of $40$\,Mpc (which is also consistent with the distance estimate from the GW measurements; see Sec.~\ref{Sec:LVC}). Overall, the counterpart (for which we will use the official IAU name AT\,2017gfo) was detected across the spectrum, with detections in the ultraviolet, optical, infrared (e.g., \cite{Coulter:2017wya,Kilpatrick:2017mhz,Drout:2017ijr,Smartt:2017fuw,Tanvir:2017pws,Chornock:2017sdf,Arcavi:2017xiz,Shappee:2017zly,Kasliwal:2017ngb,Andreoni:2017ppd,McCully:2017lgx,Pian:2017gtc,Cowperthwaite:2017dyu}) and later on also in the X-ray~(e.g., \cite{Troja:2017nqp,Haggard:2017qne,Margutti:2017cjl}) and in the radio~(e.g., \cite{Hallinan:2017woc,Mooley:2017enz}). Below we briefly summarize the observational picture with respect to theoretical predictions. \subsection{The High-energy Counterpart} For many years it was hypothesized that short gamma-ray bursts (GRBs), which are by now routinely observed, originate from NS mergers (e.g., \cite{Narayan:1992iy}). The theoretical model includes the launching of a relativistic jet during the short period of accretion onto the merger remnant. This energetic jet is believed to be responsible for the prompt gamma-ray emission. In addition, the interaction of the jet with the interstellar medium produces afterglow emission in the optical, X-ray and also radio wavelengths. The short GRB ($T_{90} = 2.0 \pm 0.5$\,sec) that was discovered about $1.7$\,seconds after the GW170817 merger event \cite{Goldstein:2017mmi} was unusual compared to other short GRBs observed so far. Most notable is the low equivalent isotropic energy $E_{iso} \approx 5 \times 10^{46}$\,erg (in the $10 - 1000$\,keV band), which is $\sim 4$ orders of magnitude lower than a typical short GRB energy. 
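The quoted isotropic-equivalent energy follows directly from the measured gamma-ray fluence and the source distance. The sketch below uses a representative fluence value (an assumption chosen for illustration, not a number quoted in this text):

```python
import math

MPC_CM = 3.086e24  # centimeters per megaparsec

def isotropic_energy(fluence_erg_cm2, distance_mpc):
    """Isotropic-equivalent energy: E_iso = 4 * pi * d^2 * fluence."""
    d_cm = distance_mpc * MPC_CM
    return 4.0 * math.pi * d_cm**2 * fluence_erg_cm2

# Assumed, representative 10--1000 keV fluence of ~2.8e-7 erg/cm^2
# for a burst at 40 Mpc (values chosen for illustration only):
e_iso = isotropic_energy(2.8e-7, 40.0)
print(f"E_iso ~ {e_iso:.1e} erg")
```

Comparing the result, of order $5\times 10^{46}$\,erg, with a typical short-GRB isotropic energy of $\sim 10^{51}$\,erg reproduces the $\sim 4$ orders-of-magnitude deficit noted above.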
In addition to the initial detection by {\it Fermi}-GBM, there were observations made by the {\it INTEGRAL} satellite, which also detected the short GRB with a fluence of $\sim 1.4 \times 10^{-7}$\,erg\,cm$^{-2}$ (at $75 - 2000$\,keV~\cite{Savchenko:2017ffs}). Following the initial detection, and once the candidate counterpart had been localized, X-ray observations were made within less than a day of the merger event. The initial X-ray observations by both the Neil Gehrels Swift Observatory and the Nuclear Spectroscopic Telescope Array ({\it NuSTAR}) resulted in null-detections. However, later on, on day $\sim 9$ after the merger, the {\it Chandra} telescope detected X-ray emission from AT\,2017gfo for the first time \cite{Troja:2017nqp}, with an initial isotropic luminosity of $\approx 9 \times 10^{38}$\,erg\,s$^{-1}$. In the following days, the X-ray luminosity appeared to continue to slowly rise \cite{Troja:2017nqp,Haggard:2017qne}. Late-time observations ($109$\,days after the merger) by {\it Chandra} showed that the X-ray emission still continued to slowly rise \cite{Margutti:2018xqd}. In order to reconcile the relatively faint prompt gamma-ray emission and the late onset of the X-ray emission, it was suggested that AT\,2017gfo was a regular short GRB, but one that is observed slightly off-axis, by $\sim 10$\,degrees from the jet axis~\cite{Troja:2017nqp}. However, this interpretation has yet to be tested against observations in other wavelengths, such as the radio (see below). \subsection{The Optical and Infrared Counterpart} Over four decades ago, a prediction was made by \cite{1974ApJ...192L.145L,Eichler:1989} that neutron-rich material can be tidally ejected in a NS merger. About a decade later, numerical simulations also showed that NS mergers exhibit such mass ejection, where the ejected mass is expected to be in the range $10^{-5} - 10^{-2}$\,M$_{\odot}$ with velocities of $0.1 - 0.3$\,c (e.g., \cite{Rosswog:1998hy}). 
It was also predicted that heavy elements would form in the neutron-rich ejected material via the r-process. As discussed in detail in the previous Section~\ref{Sec:NumericalRelativity}, additional material is also expected to be ejected by accretion disk winds and from the interface of the two merging stars (the latter material being ejected mostly in the polar direction). In general, each ejected mass component may have a different neutron fraction, leading to a different r-process element composition. As the ejecta from the NS merger is radioactive, it can power transient emission, as proposed in Ref.~\cite{Li:1998bw}. The initial prediction was that the emission would be supernova-like, blue, and peaking on a one-day timescale. Later on, more detailed calculations by \cite{2010MNRAS.406.2650M} showed that the peak of the emission would be weaker, at a level of $10^{41}$\,erg\,s$^{-1}$. The next theoretical developments were made by \cite{Barnes:2013wka,Tanaka:2013ana,2014MNRAS.439..757G}, who for the first time calculated the effect of the heavy r-process element opacities on the emission. They found that due to very high opacities, the emission is expected to be even weaker, with its spectral peak in the infrared (instead of the optical) and with the peak timescale delayed. Since the various ejecta mass components may have different compositions, they may also have different opacities. Thus in principle, one component may produce a blue, short-lived emission (the so-called `blue component') while another may emit in the infrared on week-long timescales (the so-called `red component'). The optical emission from AT\,2017gfo peaked at a faint absolute magnitude of $M_{V} \approx -16$ in one day and began to rapidly decline, while the infrared emission had a somewhat fainter and later peak (at approximately 2 days) compared to the optical emission. 
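The role of the opacity in setting the peak time and band can be made explicit with a standard diffusion argument (an order-of-magnitude sketch, not a fit to the observations discussed here): the light curve peaks roughly when the photon diffusion time equals the expansion time, giving
\begin{equation}
t_{\rm peak} \sim \left(\frac{\kappa\, M_{\rm ej}}{4\pi c\, v}\right)^{1/2} \approx 8\,{\rm days}\, \left(\frac{\kappa}{10\,{\rm cm^2\,g^{-1}}}\right)^{1/2} \left(\frac{M_{\rm ej}}{0.03\,M_\odot}\right)^{1/2} \left(\frac{v}{0.1\,c}\right)^{-1/2}.
\end{equation}
Lanthanide-rich ejecta, with $\kappa \sim 10\,{\rm cm^2\,g^{-1}}$, thus peaks on week timescales (and, being cooler by then, in the infrared), while lanthanide-poor ejecta with $\kappa \sim 0.1$--$1\,{\rm cm^2\,g^{-1}}$ peaks within a day or two in the optical.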
The overall observed brightness of the source in the optical and infrared bands and its evolution over time have shown, in general, an astonishing agreement with the predicted properties of such an event. The multiple photometric and spectroscopic data sets obtained by the various groups paint the following picture: at early times, about $0.5$ days after the merger, the ejecta temperature was high, at $T\sim 11,000$\,K (e.g., \cite{Shappee:2017zly}). A day later the spectral peak was at $\approx 6000$\,\AA, and the temperature had decreased to $T\approx 5000$\,K~\cite{Pian:2017gtc}. From this point onwards the spectral peak quickly moved into the infrared band. The combined optical and IR emission and its evolution were found to be consistent with an energy source powered by the radioactive decay of r-process elements (e.g., \cite{Drout:2017ijr,Smartt:2017fuw,Tanvir:2017pws}). Based on both the photometric and spectroscopic analysis, the ejecta mass and velocity were estimated to be ${\rm M}_{ej} \approx 0.02 - 0.05$\,M$_{\odot}$ and $0.1 - 0.3$\,c, respectively~\cite{Chornock:2017sdf,Arcavi:2017xiz,Smartt:2017fuw,Shappee:2017zly,Andreoni:2017ppd,Drout:2017ijr,McCully:2017lgx,Pian:2017gtc,Cowperthwaite:2017dyu}. One of the main claims with regard to this event is that the observations provide evidence for the formation of r-process elements. This conclusion is mainly driven by the evolution of the infrared emission~\cite{Drout:2017ijr,Kasliwal:2017ngb}. \cite{Tanvir:2017pws}, for example, obtained late-time IR measurements using the Hubble Space Telescope and argue that the slower evolution of the IR emission compared to the optical requires high-opacity r-process heavy elements with atomic mass number $> 195$. The infrared spectrum shows broad features which are presumably comprised of blended r-process elements \cite{Tanvir:2017pws}. 
These broad spectral features were compared to existing model predictions and show a general agreement, albeit with some remaining inconsistencies that need to be explained (e.g., \cite{Kasen:2017sxr,Metzger:2016pju,Kasliwal:2017ngb}). There is still an ongoing debate about the composition of the heavier elements. Ref.~\cite{Smartt:2017fuw} claims that the observed infrared spectral features can be matched with Cs and Te lines. Others estimate the overall fraction of lanthanides in the ejecta and find it to be in the range ${\rm X}_{lan} \approx 10^{-4} - 10^{-2}$. It seems that at early times (up to 3-5 days), the ejecta that dominates the emission has a very low lanthanide fraction, and that there are discrepancies between the predicted and observed optical light curves (e.g., \cite{Arcavi:2017xiz,McCully:2017lgx}). Several works (e.g., \cite{Kasen:2017sxr,Metzger:2016pju,Cowperthwaite:2017dyu,Nicholl:2017ahq,Chornock:2017sdf,Kasliwal:2017ngb,Pian:2017gtc,Smartt:2017fuw}) explain the early {\it vs.} late-time behaviour of the emission by an ejecta with two components (as also predicted in the literature). The first component is the ``blue'' kilonova, with a high electron fraction and thus a low fraction of heavy elements, which dominates the optical emission early on. The second component is a ``red'' kilonova, comprised of heavy r-process elements, which produces the slow IR evolution and the broad spectral features observed at late times. Still, there are claims that even at late times the emission can originate from an ejecta with only low-mass elements~\cite{Waxman:2017sqv}. \subsection{The Radio Counterpart} In addition to the radio afterglow emission on short time scales (days), radio emission is also expected on longer time scales (months to years). The latter is not a result of the relativistic jet, but rather originates from the interaction of the slower dynamical ejecta with the interstellar medium (ISM)~\cite{Nakar:2011cw}. 
This emission is expected to be rather weak, with the strength of the peak emission depending on the velocity of the dynamical ejecta, the ISM density, and some microphysical parameters. While radio afterglows of short GRBs (not accompanied by GWs) have been detected in the past (e.g., \cite{Fong:2015oha}), long-term radio emission was never observed until GW\,170817 (including in the previous cases where kilonova candidates were discovered~\cite{Horesh:2016dah,Fong:2016orv}). As in the other wavelengths, radio observations were undertaken within a day of the GW170817 merger discovery and in the following days. The early-time observations, performed by the Karl G. Jansky Very Large Array (VLA), the Australia Telescope Compact Array (ATCA), the Giant Metrewave Radio Telescope (GMRT) and the Atacama Large Millimeter Array (ALMA), all resulted in null-detections \cite{Hallinan:2017woc,Alexander:2017aly,Kim:2017skw}. Late-time VLA observations, however, finally revealed a radio counterpart $\approx 16$\,days after the merger \cite{Hallinan:2017woc}. The initial radio emission was weak, at a level of only a couple of tens of $\mu$Jy (at both $3$ and $6$\,GHz). Follow-up ATCA observations confirmed the radio detection. Upon detection, a long-term radio monitoring campaign was initiated, and the results are reported in Ref.~\cite{Mooley:2017enz}. They report that the radio emission still continued to rise at $> 100$\,days after the merger. The multiple-frequency observations also show that the radio emission is optically thin, with a spectral index of $\alpha = -0.6$. \cite{Hallinan:2017woc} compared the radio observations to several predictions, including an on-axis jet, a slightly off-axis jet, and a completely off-axis jet. In addition, they included in their comparison a model in which the jet forms a hot, wide-angled, mildly relativistic cocoon. 
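The quoted optically thin spectral index lends itself to a simple consistency check between the radio and X-ray bands: extrapolating the radio flux density as a single power law up to X-ray energies. All input values below are illustrative assumptions of the same order as the measurements described in this section:

```python
JY_CGS = 1e-23      # erg s^-1 cm^-2 Hz^-1 per Jansky
NU_1KEV = 2.418e17  # frequency of a 1 keV photon, in Hz

def flux_density(nu_hz, f0_jy, nu0_hz, alpha):
    """Optically thin power law: F_nu = F0 * (nu / nu0)^alpha."""
    return f0_jy * (nu_hz / nu0_hz) ** alpha

# Illustrative inputs: ~20 microJy at 6 GHz, spectral index -0.6
f_x_jy = flux_density(NU_1KEV, 20e-6, 6e9, -0.6)
nu_f_nu = NU_1KEV * f_x_jy * JY_CGS  # nu * F_nu at 1 keV, erg s^-1 cm^-2
print(f"extrapolated 1 keV flux: {nu_f_nu:.1e} erg/s/cm^2")
```

The result, of order $10^{-15}$\,erg\,s$^{-1}$\,cm$^{-2}$, is comparable to the X-ray fluxes implied by the luminosities reported above, which is what motivates attributing the two bands to a common origin.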
This cocoon, formed as the jet works its way out of the dynamical ejecta, may also lead to gamma-ray, X-ray, and radio emission, but with different characteristics than the emission produced by either the highly relativistic jet or the slower dynamical ejecta. In fact, \cite{Hallinan:2017woc} find that both an on-axis jet model and a slightly off-axis one are expected to produce bright radio emission in the first few days after the merger, which by day 16 should start fading, a prediction which does not match the rising observed radio source. The model most consistent with current data is the cocoon model with either a choked, successful, or `structured' jet (see e.g., \cite{Lamb:2017ych,Margutti:2018xqd}). In fact, the choked-jet cocoon model may also explain the relatively low energy of the gamma-ray emission \cite{Gottlieb:2017pju}. If both the late-time radio and X-ray emission originate from interaction of the material (whether it is the dynamical ejecta or a cocoon) with the ISM, one can test whether the observed emission in both wavelengths is indeed connected as expected. For now, it seems that the observed X-ray emission fits the prediction based on extrapolating the observed radio emission to higher energies. The X-rays and the radio emission also have roughly the same spectral slope. This suggests that there is no additional power source in play at this time. \subsection{Many Open Questions} While vast amounts of data have been collected for this amazing merger event, and while a flood of papers report many types of analysis and conclusions (by no means are we attempting to cover all of them here), many open questions remain. For example, is this event connected to short GRBs, or do we still have to prove this connection? Which r-process elements form, and at what time? Do the observations really require the production of heavy r-process elements, or can they all be explained by lighter-element components? 
How many components does the ejecta have, and which one dominates the emission, and when? Are there any other emission power sources in play, such as a magnetar (even if short-lived)? Are we really seeing a cocoon with a choked or successful jet, or is there some other scenario that can explain the radio emission combined with all the other evidence? As scientists around the world are still working on analyzing all the data in hand for this event, and are still collecting new data, they are also gearing up for the future, preparing for the next year when the LIGO and Virgo detectors switch back on. At that time, hopefully more events with EM signatures will be discovered, and more answers (and surely more new questions) will present themselves. \section{X-ray and gamma-ray binaries } \label{Sec:XrayGammaRay} \vspace{-3mm} {\it Contributor:} M.~Chernyakova \vspace{3mm} The population of Galactic X-ray sources above 2 keV is dominated by X-ray binaries, see e.g. \cite{Grimm2002}. A typical X-ray binary contains either a NS or a BH accreting material from a companion star. Due to the angular momentum in the system, accreted material does not flow directly onto the compact object, but instead forms a differentially rotating disk around it, known as an accretion disk. X-ray binaries can be further divided into two different classes, regardless of the nature of the compact object, according to the mass of the companion star: high-mass X-ray binaries and low-mass X-ray binaries. The secondary of a low-mass X-ray binary system is a low-mass star, which transfers matter by Roche-lobe overflow. High-mass X-ray binaries comprise a compact object orbiting a massive OB-class star, and are strong X-ray emitters via the accretion of matter from the OB companion. At the moment 114 high-mass X-ray binaries~\cite{Liu2006} and 187 low-mass X-ray binaries~\cite{Liu2007} are known. 
Black hole X-ray binaries are interacting binary systems in which X-rays are produced by material accreting from a secondary companion star onto a BH primary \cite{1973A&A....24..337S}. While some material accretes onto the BH, a portion of this infalling material may also be removed from the system via an outflow, in the form of a relativistic plasma jet or an accretion disk wind, see e.g. \cite{BHrev2006} for a review. Currently, the known Galactic BH X-ray population is made up of 19 dynamically confirmed BHs and 60 BH candidates \cite{BHcat2016}. The vast majority of these Galactic BH X-ray objects are low-mass X-ray binaries. Most of these systems are transient, cycling between periods of quiescence and outburst. This behaviour is associated with changing geometries of mass inflow and outflow, e.g. \cite{BHrev2006}. At higher energies, however, the situation is drastically different. While current Cherenkov telescopes have detected around 80 Galactic sources (see the TeVCat catalogue~\cite{TeVCat}), only 7 binary systems are regularly observed at TeV energies. The properties of PSR B1259-63, LS 5039, LSI +61 303, HESS J0632+057 and 1FGL J1018.6-5856 are reviewed in \cite{dubus_review13}. Since 2013 two more Galactic binaries have been discovered in the TeV sky, PSR J2032+4127 \cite{PSRJ2032_ATel_TeV} and HESS J1832-093 \cite{2016MNRAS.457.1753E}, but the number of binaries observed at TeV energies remains extremely small, and the reason why these systems are able to accelerate particles so efficiently is not yet known. These systems are called gamma-ray-loud binaries (GRLBs), as the peak of their spectral energy distribution lies in the GeV--TeV energy range. All GRLB systems host compact objects orbiting around a massive young star of O or Be spectral type. 
This suggests that the observed $\gamma$-ray emission is produced as a result of the interaction of the relativistic outflow from the compact object with the non-relativistic wind and/or radiation field of the massive companion star. However, in most cases neither the nature of the compact object (BH or NS?) nor the geometry (isotropic or anisotropic?) of the relativistic wind from the compact object is known. Only in the PSR B1259-63 and PSR J2032+4127 systems is the compact object known to be a young rotation-powered pulsar which produces a relativistic pulsar wind. The interaction of the pulsar wind with the wind of the Be star leads to a huge GeV flare, during which up to 80\% of the spin-down luminosity is released \cite{Abdo2011,Chernyakova2015}. In all other cases the source of the high-energy activity of GRLBs is uncertain. It can be either accretion onto the compact object or dissipation of its rotational energy. In these systems the orbital period is much shorter than in PSR B1259-63 and PSR J2032+4127, and the compact object spends most of the time in the dense wind of the companion star. The optical depth of the wind to free-free absorption is large enough to suppress most of the radio emission within the orbit, including the pulsed signal of a rotating NS~\cite{zdz10}, making the direct detection of a possible pulsar impossible. In Ref.~\cite{Massi2017} the authors tried to deduce the nature of the compact source in LSI +61 303 by studying the relation between the X-ray luminosity and the photon index of its X-ray spectrum. It turned out that the existing X-ray observations of the system follow the same anti-correlation trend as BH X-ray binaries \cite{2015PKAS...30..565Y}. The hypothesis of a microquasar nature for LSI +61 303 allowed the observed radio morphology to be explained \cite{BoschRamon:2004se}, and explains the observed superorbital period as a beat frequency between the orbital and jet-precession periods~\cite{Massi2016}. 
At the same time, it was shown that a model in which the compact source is a pulsar naturally explains the keV-TeV spectrum of LSI +61 303~\cite{zdz10}. The authors argued that the radio source has a complex, varying morphology, and that the jet emission is unlikely to dominate the spectrum through the whole orbit. Within this model the superorbital period of the source is explained by the timescale of the gradual build-up and decay of the disk of the Be star. This hypothesis is also supported by optical observations confirming the superorbital variability of the Be-star disk \cite{fortuny15}. A number of multi-wavelength campaigns are currently ongoing, aiming to resolve the nature of these peculiar systems. GeV observations have revealed a few more binaries visible up to a few GeV. Among them are Cyg X-1 \cite{Zanin2016,Zdz_CygX1_2017} and Cyg X-3 \cite{2016ATel.9502....1C}. However, contrary to the GRLBs described above, these systems are transients, seen only during flares or, in the case of Cyg X-1, during the hard state. In addition, the peak of the spectral energy distribution of these systems occurs at much lower energies than in the case of the binaries visible at TeV energies. From these observations it seems that wind collision can accelerate particles more efficiently than accretion, but more sensitive observations are needed to prove this and understand the reason. Hopefully CTA \cite{CTA} observations will be able to shed light on the details of the physical processes taking place in these systems. 
\section{Supermassive black hole binaries in the cores of AGN} \label{Sec:BHBandAGN} \vspace{-3mm} {\it Contributors:} E.~Bon, E.~M.~Rossi, A.~Sesana, A.~Stamerra \vspace{3mm} Following galaxy mergers, it is expected that the two galactic cores should eventually end up close to each other. In this process, the term dual SMBHs refers to the stage where the two embedded SMBHs are still widely separated (gravitationally bound to the surrounding gas and stars and not to one another), while SMBBHs denotes the evolutionary stage where they are gravitationally bound to one another in a close-orbiting system. Bound SMBBHs on centi-pc scales are the most relevant to GW emission (and therefore to this roadmap). In the approximation of circular orbits, these systems emit GWs at twice their orbital frequency, i.e. $f_{\rm GW}=2/P_{\rm orb} \gtrsim 1$ nHz. As we discuss in the next Section, this is the frequency range at which PTAs are most sensitive \cite{Lentati:2015qwp,ShannonEtAl:2015,VerbiestEtAl:2016,Arzoumanian:2017puf}, and they have a concrete chance of making a direct detection of these systems within the next decade \cite{Siemens:2013zla,rsg15,tve+16}. The presence of a SMBBH with an orbital period of several years introduces a natural timescale in the system. In fact, numerical simulations of SMBBHs embedded in circumbinary accretion disks consistently display periodic behavior of the gas leaking into the cavity \cite{1994ApJ...421..651A,2008ApJ...672...83M,2012A&A...545A.127R,2012ApJ...749..118S,2013MNRAS.436.2997D,Farris:2014iga}. This led to the notion that SMBBHs might be detected via the periodicity of their lightcurves. However, the diffusion time of the gas within the mini-disks surrounding the two holes is generally longer than the binary period, and it is not at all clear that the periodic supply of gas through the cavity will in turn result in periodicities in the binary lightcurves or spectra \cite{srr+12}. 
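The orbital-frequency scaling quoted above is straightforward to verify numerically; a minimal sketch (the helper name and the sample periods are ours, for illustration):

```python
YEAR_S = 3.156e7  # seconds per year

def f_gw_nhz(p_orb_years):
    """GW frequency of a circular binary, f_GW = 2 / P_orb, in nHz."""
    return 2.0 / (p_orb_years * YEAR_S) * 1e9

# Orbital periods of order years land squarely in the PTA band:
for p_yr in (1.0, 5.0, 20.0):
    print(f"P = {p_yr:4.1f} yr -> f_GW = {f_gw_nhz(p_yr):5.1f} nHz")
```

Periods from roughly one to a few tens of years thus map onto GW frequencies of a few to tens of nHz, exactly where PTA sensitivity peaks.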
Even though the cross-gap accretion rate is generally periodic, only light generated at the accretion stream/minidisk or outgoing stream/circumbinary disk shocks is guaranteed to follow this modulation \cite{2014ApJ...785..115R,Farris:2014iga}. Since some periodicity seems inevitable, we focus on this signature in the following discussion. We note, however, that several spectral signatures of close SMBBHs have also been proposed, including a dimming at UV wavelengths \cite{tmh12}, double K$\alpha$ lines \cite{srr+12}, notches in the spectral continuum \cite{2014ApJ...785..115R}, and a steepening of the thermal spectrum compared to the standard thin disk model \cite{Ryan:2016vcm}. Among the many mechanisms proposed to explain the emission variability of active galactic nuclei (AGN), besides outflows, jet precession, disk precession, disk warping, spiral arms, flares, and other kinds of accretion disk instabilities, one of the most intriguing possibilities involves the existence of a SMBBH system in their cores~\cite{Komossa:2003wz,Bogdanovic:2007bu,Bogdanovic:2014cua,Graham:2015gma,Nguyen:2016qnk}, as well as tidal disruption events (see Ref.~\cite{2015JHEAp...7..148K} and references therein). The variability of the light emitted from AGN was tracked long before they were recognized as active galaxies. There are light curves showing AGN variability over 100 years of observations, see for example \cite{Guo:2014ora,Oknyanskij2016}, with variability timescales of decades. In fact, many AGN show variability on different time scales, depending on the time scales of the processes that drive the variability and the speeds at which variations propagate: the speed of light $c \sim 3 \cdot 10^5$\,km/s, the orbital velocity $v_{orb}\sim(GM/R)^{1/2}$, or the sound speed $v_s\sim(kT/m)^{1/2}$, where $c > v_{orb} \gg v_s$. The shortest timescale corresponds to the light-crossing timescale, on which reverberation mapping campaigns are based. 
Orbital timescales are longer~\cite{Hagai2013}, and are dominant in the case of SMBBH systems. Recently, a possible connection between AGN variability time scales and orbital radius was presented in~\cite{Bon:2018ibp}, indicating that variability time scales may not be random, and that they might correspond to orbital time scales. To identify possible candidates, one searches for periodic variations in their light and radial velocity curves. We expect that strictly periodic variability should correspond to orbital motion exclusively, while the other processes can produce only quasi-periodic signals. Unfortunately, AGN were identified only about 70 years ago, so observing records span just a few decades, which is of the order of the orbital time scales, and therefore not long enough to trace many orbits in historical light curves, not to mention the radial velocity curves \cite{Begelman:1980vb,Popovic:2011uy,Bon:2012,Bon:2016-NGC5548,Li:2016hcm}, which are harder to obtain because of the faintness of the sources, and for which the records are even shorter. Therefore, for AGN it is very hard to prove that a signal is actually periodic, especially since AGN light curves closely resemble red-noise variability curves \cite{Simm:2015ova,Vaughan:2016pyf}. As a consequence, standard methods like Fourier and Lomb-Scargle analysis \cite{Scargle:1982bw} may show peaks of seemingly high significance, but the derived p-values may not be valid \cite{Vaughan:2016pyf, Bon:2016-NGC5548, Bon:2017tgh}. Keeping in mind the aforementioned difficulties, a number of AGN have been proposed to display significant periodic variability in their light curves~\cite{Sillanpaa:1988zz,Graham:2015gma,Graham:2015tba,Charisi:2016fqw,Liu:2016msr,Bhatta:2016dsn,Kovacevic:2017sjl}. Notable examples include the blazar OJ287 (11.5 yr period, \cite{Sillanpaa:1988zz, Bhatta:2016dsn}), the quasar PG1302-102 (6 yr period, \cite{Graham:2015gma}), and the blazar PG 1553+113 (2 yr period, \cite{Ackermann:2015wda}). 
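The red-noise pitfall mentioned above is easy to demonstrate. The following sketch (purely illustrative, using `scipy.signal.lombscargle`) builds a random-walk light curve with no true periodicity, yet its periodogram still shows a prominent peak at long trial periods:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(42)

# A pure red-noise "light curve": a random walk with no true period,
# sampled unevenly over ~50 years (all values are illustrative).
n = 500
t = np.sort(rng.uniform(0.0, 50.0, n))
y = np.cumsum(rng.normal(size=n))
y -= y.mean()

# Scan trial periods of 2-50 years (lombscargle takes angular frequencies)
periods = np.linspace(2.0, 50.0, 400)
power = lombscargle(t, y, 2.0 * np.pi / periods)

best = periods[np.argmax(power)]
print(f"strongest peak at ~{best:.1f} yr despite no true periodicity")
```

A periodogram peak alone is therefore not evidence of a SMBBH; its significance must be assessed against a red-noise null model, e.g. by repeating the search on many simulated stochastic light curves.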
Among those candidates, there are only a few that show periodic light and radial velocity curves at the same time \cite{Bon:2016-NGC5548,Bon:2012,Li:2016hcm,Li:2017eqf}, and which therefore could be recognized as SMBBH candidates, like NGC4151 with a 15.9 year periodicity \cite{Bon:2012}, NGC5548 with a $\approx$15 year periodicity \cite{Li:2016hcm,Bon:2016-NGC5548}, and Akn120 with a $\approx$20 year periodicity \cite{Li:2017eqf}. We note that simulating the emission from such systems is very complex \cite{Cuadra:2008xn,Farris:2014iga,Kimitake:2007fs,Tang:2017eiz,Ryan:2016vcm}, especially for eccentric, high mass-ratio systems \cite{Bogdanovic:2007bu,2017MNRAS.466.1170M,2013PASJ...65...86H}. Among AGN, the class of blazars is dominated by the emission from the jet, due to beaming effects caused by the small viewing angle. Blazars show high variability in all wavebands from radio to gamma-rays. The high-energy emission likely originates on small jet scales and can therefore be modulated by the orbital motion of the SMBH binary system. The modulation can be funneled through variations in the accretion rate induced by the perturbation of the disk by the companion SMBH, as suggested for OJ287, or through helical paths induced by precession \cite{Sobacchi:2016yez}, as suggested to explain the clear signature of a periodic modulation in the gamma-ray blazar PG 1553+113 \cite{Ackermann:2015wda}. A more complex interplay among the different components in the jet, emitting at different wavelengths, is possible in the framework of a binary SMBH system \cite{Rieger:2004ay}. We mention that quasi-periodicities in the jet emission can also be induced by intrinsic oscillatory disk instabilities, which can mimic periodic behaviour. The continuous gamma-ray monitoring of blazars by the {\em Fermi}-LAT satellite is providing new possible candidates showing periodic or quasi-periodic emission (see e.g., \cite{Prokhorov:2017amk,Sandrinelli:2015ijk}). 
Similarly, a 14 year periodicity is found in the X-ray and optical variations of 3C 273, while in OJ 287 the optical variability may not always be consistent with the radio. Even the detection of periodic variations of the jet could indicate the presence of a SMBBH \cite{Kun:2014tva}. The time-domain window has only been opened in the past decade, with dedicated surveys such as CRTS, PTF and Pan-STARRS, and has already produced several SMBBH candidates. Many of them, however, have already been severely constrained by PTA upper limits on the stochastic GW background they would imply \cite{Sesana:2017lnk}. This confirms our poor understanding of SMBBH appearance. More sophisticated numerical simulations, including 3-D grids, radiative transport schemes, feedback from the accreting sources, etc., are needed to better understand the emission properties and peculiar signatures of SMBBHs. Under their guidance, future candidates should then be proposed based on a systematic cross-check of variability coupled with peculiar spectral features. \subsection{Modeling electromagnetic signatures of merging SMBBHs} Although the identification of compact SMBBHs might have an important impact on GW observations with PTAs in the near future, looking further ahead, LISA is expected to detect tens to hundreds of coalescing SMBBHs throughout the Universe. Electromagnetic observations related to the merger of SMBBHs are important both for cosmology and for the astrophysics of galactic nuclei. Pinning down the host galaxy of the merger will allow us to combine the redshift and the distance inferred from gravitational waves to constrain cosmological parameters (\cite{Tamanini:2016zlh}, see next Section) and to study the large-scale galactic environment of merging SMBBHs, adding to our understanding of the process of galaxy formation. 
On the other hand, electromagnetic observations will give us access to the properties of matter in the relatively close environment of a merger, and to the gas and stellar (hydro)dynamics as they adjust in response to the merger. Given the importance of such identifications, there has been an extensive effort to predict observable electromagnetic signatures that can occur in near coincidence with the event (``prompt signals'') or afterwards (``afterglows''). In the following, a few examples will be given; notably, some of them have also inspired recent models for possible electromagnetic counterparts to SOBBH mergers, of the kind detected by LIGO and Virgo. As mentioned previously, SMBBH mergers, especially at high redshifts, can happen in a gaseous environment that provides each SMBH with a ``minidisc,'' fed through streams leaking from a circumbinary disc. Those minidiscs are likely to be retained even after gravitational radiation comes to dominate the orbital decay, providing a distinctive modulation of the emerging luminosity as the binary spirals in \cite{2018MNRAS.476.2249T,2018ApJ...853L..17B}. During the final orbits, the surviving gas between the black holes gets squeezed, possibly producing super-Eddington outflows, as discussed in \cite{armitage,2016MNRAS.457..939C} (but see \cite{2017MNRAS.468L..50F}). Full GR simulations have also been employed to study the possible formation of precessing jets during the inspiral and merger, in an attempt to identify distinctive signatures \cite{Palenzuela:2010nf,2012ApJ...749L..32M,2014PhRvD..90j4030G}. General relativity predicts that a newly formed black hole receives a recoil, because GWs carry away a non-zero linear momentum (e.g., \cite{1983MNRAS.203.1049F,Campanelli:2007ew}).
This recoil affects the circumbinary disc, bound to the black hole: the imparted kick shocks the gas, producing a slowly rising afterglow lasting $\sim 10$ yr \cite{2008ApJ...676L...5L,2008ApJ...684..835S,2008ApJ...682..758S,2009PhRvD..80b4012M,rossi}. In stellar-mass black hole mergers, similar phenomena can occur, but on much shorter timescales \cite{perna,dk17}. Contrary to the SMBH case, however, providing a gas-rich environment for the merger is a challenge. A possible avenue involves cold relic discs, formed as a result of weak supernovae, where accretion is suppressed until either the ``squeezing'' or the ``kick'' heats them up again, in the same configuration envisaged for SMBHs. As already mentioned, the next theoretical challenge for these dynamical models is to predict realistic lightcurves and spectra, which will require non-trivial radiative transfer calculations (see, e.g., \cite{2016ApJ...819...48S}). With solid predictions in hand, appropriate strategies can be devised to coordinate electromagnetic follow-ups, to take full advantage of multimessenger astronomy in the LISA era. \section{Cosmology and cosmography with gravitational waves}\label{Sec:cosmography} \vspace{-3mm} {\it Contributors:} C.~Caprini, G.~Nardini, N.~Tamanini \vspace{3mm} The recent direct measurement of GWs by the Earth-based interferometers LIGO and Virgo opened up a new observational window onto the universe and, right from the first detection, led to the discovery of a new, unexpected source: fairly massive stellar-origin BBHs. This demonstrates the great potential of GW observations to improve our knowledge of the universe. Concerning cosmology, it is beyond doubt that a detection of a stochastic GW background (SGWB) from the early universe would be revolutionary from this point of view: similar to the discovery of the CMB, which constitutes a milestone in our understanding of the universe, rich in consequences that we are still investigating.
Furthermore, the plethora of new GW detections expected in the next decades by both Earth- and space-based interferometers will not only deliver fundamental information on the emitting astrophysical sources, but will also bring complementary and independent data, with respect to standard EM observations, that can be used for cosmological purposes. In particular, by means of GW detections, we can probe the history of the universe at both early and late times, shedding new light on some of the most elusive cosmological mysteries, such as dark energy, DM and the origin of cosmic inhomogeneities. In this Section, we overview how the observation of GWs can enhance our knowledge of the history of the universe. \subsection{Standard sirens as a probe of the late universe} Within the theory of GR, a binary system composed of two compact astrophysical objects orbiting around each other emits a GW signal with the two polarizations~\cite{Maggiore:1900zz} \begin{eqnarray} h_\times(t) & = &\frac{4}{d_L(z)} \left( \frac{G \mathcal{M}_c(z)}{c^2} \right)^{5/3} \left( \frac{\pi f(t)}{c} \right)^{2/3} \sin[\Phi(t)] \cos\iota\,, \nonumber \\ & & \label{eq:hx}\\ h_+(t) & = & h_\times(t) \frac{1+\cos^2 \iota}{2 \cos\iota} \cot[\Phi(t)] ~, \label{eq:h+} \end{eqnarray} where $h_{\times,+}(t)$ are the GW strains in the transverse-traceless gauge (we neglect here post-Newtonian contributions). In these expressions $\iota$ is the orientation of the orbital plane with respect to the detector, $z$ is the redshift of the source, $d_L(z)$ is the luminosity distance, $f$ is the GW frequency at the observer, $\Phi(t)$ is the phase of the GW, and $\mathcal{M}_c(z)=(1+z)(m_1 m_2)^{3/5}/(m_1+m_2)^{1/5}$ is the redshifted chirp mass, with $m_{1,2}$ being the masses of the two binary bodies. An accurate detection of the GW signal allows one to reconstruct all the parameters in Eqs.~(\ref{eq:hx}) and (\ref{eq:h+}) within some (correlated) uncertainties~\cite{Schutz:1986gp}.
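To get a feel for the numbers in Eq.~(\ref{eq:hx}), one can evaluate the strain amplitude for a fiducial system; the masses, distance, redshift and frequency below are illustrative (roughly GW150914-like) and are not taken from the text:

```python
import math

G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m / s
MSUN = 1.989e30    # kg
MPC = 3.086e22     # m

def chirp_mass(m1, m2):
    """Chirp mass M_c = (m1 m2)^{3/5} / (m1 + m2)^{1/5}."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def h_cross_amplitude(m1, m2, z, d_L, f, iota=0.0):
    """Amplitude of h_x in Eq. (eq:hx): intrinsic masses m1, m2 (kg),
    redshift z, luminosity distance d_L (m), observed GW frequency f (Hz)."""
    Mcz = (1.0 + z) * chirp_mass(m1, m2)    # redshifted chirp mass
    return (4.0 / d_L) * (G * Mcz / c**2) ** (5.0 / 3.0) \
        * (math.pi * f / c) ** (2.0 / 3.0) * math.cos(iota)

# Illustrative, roughly GW150914-like system: 30+30 Msun at 400 Mpc, f = 100 Hz
h = h_cross_amplitude(30 * MSUN, 30 * MSUN, 0.09, 400 * MPC, 100.0)
print(f"h_x amplitude ~ {h:.1e}")
```

For these fiducial values the strain comes out at the $10^{-21}$ scale, the typical order of magnitude of the signals detected by LIGO and Virgo.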
In particular, thanks to the reconstruction of $d_L$, binaries can be employed as reliable cosmological distance indicators. Eqs.~(\ref{eq:hx}) and (\ref{eq:h+}) highlight three key aspects of using inspiralling binaries as cosmological distance indicators: \textit{i)} The measurement of $d_L$ from GW signals is not affected by systematic uncertainties in the modelling of the source, since the dynamics of compact binary systems is directly determined by GR. This is in contrast with type-Ia supernovae (SNIa), which require the cosmic distance ladder, i.e.~cross-calibration with local measurements of sources of known distance, to overcome unknown systematics in the determination of their luminosity distance. \textit{ii)} Due to the scaling $\propto d_L^{-1}$, GW cosmological indicators are usable even at large distances, where EM sources, whose intensity scales as $d_L^{-2}$, are too faint. This implies that, for the same amount of emitted energy, a source producing GWs can be observed at greater distances than a source emitting EM waves. \textit{iii)} The measurement of the quoted waveform does not allow one to determine the redshift of the source: in fact, Eqs.~(\ref{eq:hx}) and (\ref{eq:h+}) are invariant under the transformation $m_i\to m_i (1+z)$ at fixed $d_L$. In other words, the waveform detected from a system with intrinsic masses $m_i$ at redshift $z$ and luminosity distance $d_L$ is identical to that produced by a local system with masses $m_i (1+z)$ at the same luminosity distance: the phase evolution fixes only the redshifted chirp mass $\mathcal{M}_c(z)$, and the amplitude then fixes $d_L$, while $z$ and the intrinsic masses remain degenerate. The luminosity distance $d_L$ is tightly linked to the redshift $z$ in a given cosmological setup.
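The mass-redshift degeneracy can be made concrete with a short sketch: the phasing of the waveform fixes only the detector-frame (redshifted) chirp mass, so, absent redshift information, the inferred intrinsic chirp mass depends on the assumed $z$ (the value below is illustrative, roughly GW150914-like):

```python
# Detector-frame (redshifted) chirp mass, as inferred from the phasing;
# the value is illustrative, roughly GW150914-like.
Mcz_obs = 28.0  # Msun

# Without independent redshift information, the intrinsic chirp mass
# slides along M_c = M_c(z) / (1 + z) as the assumed z varies:
for z in [0.0, 0.1, 0.5, 1.0, 2.0]:
    print(f"assumed z = {z:3.1f}  ->  intrinsic M_c = {Mcz_obs / (1 + z):5.2f} Msun")
```

Every pair $(m_i, z)$ along this family produces the same observed waveform, which is why an independent redshift measurement is needed to turn a GW detection into a cosmological data point.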
For a universe that is homogeneous and isotropic on large scales, the luminosity distance is given by \begin{equation} d_L(z) = \frac{c}{H_0}\frac{1+z}{\sqrt{\Omega_k}} \sinh \left[ \sqrt{\Omega_k} \int_0^z \frac{H_0}{H(z')} dz' \right] \,, \label{eq:dL_z} \end{equation} with $\Omega_k$ being the present value of the density parameter of the spatial curvature, $H(z)$ the Hubble rate as a function of redshift, and $H_0=H(z=0)$. Therefore, if besides $d_L$ (reconstructed from the waveform) one can independently establish the redshift $z$ of a GW source, one obtains a data point useful to constrain the relation~(\ref{eq:dL_z}). The parameters of any given cosmological model, which enter implicitly through $H(z)$, whose dynamics is determined by the Einstein equations, can then be statistically constrained by fitting Eq.~(\ref{eq:dL_z}) against a number of $(z,d_L(z))$ data points. This can be done using observations of GW inspiralling binaries, provided a determination of their redshift is available by some means. For this reason, well-detectable binaries for which the redshift is also known (or at least estimated) are dubbed {\it standard sirens}, in analogy with the SNIa used as standard candles in cosmography~\cite{Holz:2005df}. Of course, to obtain a robust bound, it is crucial to fit Eq.~(\ref{eq:dL_z}) with as many standard sirens as possible, at the most diverse redshifts, especially if the cosmological model at hand contains many parameters. The outcome of such an analysis can be remarkable. It constitutes the first robust cosmological test not using EM radiation as the only messenger of astronomical information, and it allows one to probe the validity of the cosmic distance ladder out to far distances. As demonstrated by the recent analysis of the LVC detection GW170817, discussed in section \ref{sub:status} below, it also provides a measurement of $H_0$ that is independent of the calibrations necessary to establish constraints using SNIa.
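As an illustration, the flat ($\Omega_k\to 0$) limit of Eq.~(\ref{eq:dL_z}) for a $\Lambda$CDM expansion history can be integrated numerically; the cosmological parameters below are fiducial, not fitted values:

```python
import math

C_KM_S = 2.998e5  # speed of light, km/s

def luminosity_distance(z, H0=70.0, Om=0.3, n=2000):
    """Flat (Omega_k -> 0) limit of Eq. (eq:dL_z) for LambdaCDM:
    d_L = c (1+z)/H0 * int_0^z dz'/E(z'), with E(z) = sqrt(Om (1+z)^3 + 1 - Om).
    H0 in km/s/Mpc; returns d_L in Mpc (simple trapezoidal quadrature)."""
    E = lambda zp: math.sqrt(Om * (1.0 + zp) ** 3 + (1.0 - Om))
    h = z / n
    integral = 0.5 * (1.0 / E(0.0) + 1.0 / E(z))
    for i in range(1, n):
        integral += 1.0 / E(i * h)
    integral *= h
    return C_KM_S / H0 * (1.0 + z) * integral

print(f"d_L(z=0.01) = {luminosity_distance(0.01):.1f} Mpc")   # close to cz/H0
print(f"cz/H0       = {0.01 * C_KM_S / 70.0:.1f} Mpc")
print(f"d_L(z=1)    = {luminosity_distance(1.0):.0f} Mpc")
```

At $z\ll 1$ this reduces to the Hubble law $d_L\simeq cz/H_0$, the leading-order relation used in low-redshift standard-siren analyses.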
In particular, if the current tension between the $H_0$ measurements from SNIa and from the CMB (assuming $\Lambda$CDM) persists, standard sirens will be a very useful observable for deciphering its origin. \subsubsection{Redshift information} \label{sub:method} $ $\\[1mm] \noindent The cosmography procedure just described assumes that the redshift of the GW sources can be acquired. This is, however, not straightforward. Because of their intrinsic nature, coalescing BHs do not guarantee any signature besides GWs. Nevertheless, EM counterparts can be envisaged for BBHs surrounded by matter, or for compact binaries where at least one of the two bodies is not a BH. For this reason, an EM counterpart is expected for merging massive BHs at the centres of galaxies, which may be surrounded by an accretion disk, and for BNSs, whose merger produces distinctive EM emissions, including gamma-ray bursts and kilonovae. On the other hand, extreme mass ratio inspiral (EMRI) systems and stellar-origin BBHs (SOBBHs) are not expected to produce significant EM radiation at merger, although their merging environment is still unclear. Of course, in order to fit Eq.~(\ref{eq:dL_z}) with as many data points ($z$, $d_L$) as possible, it would help substantially to determine the redshift of all detectable standard sirens, independently of whether or not they exhibit an EM counterpart. There are mainly two ways to obtain redshift information for a standard siren, depending on whether an EM counterpart is observed: \begin{description} \item[\it Method with EM counterpart:] This method relies on EM telescopes to recognize the galaxy hosting the GW source \cite{Schutz:1986gp}. Reaching a good sky localization ($\mathcal O(10\,$deg$^2)$ or below) as soon as possible after the detection of the GW signal is essential to alert EM telescopes and point them towards the solid angle containing the direction of the GW event, to look for EM transients.
Once such a transient is detected, the GW event can be associated with the nearest galaxy, whose redshift can be measured either spectroscopically or photometrically. For this method it is important to have GW detectors able to rapidly reach a well-beamed sky localization of the GW source. A network of GW interferometers improves not only the sensitivity to a standard-siren signal (improving the reconstruction of $d_L$) but also the identification of the sky solid angle containing the source, thanks to spatial triangulation. \item[\it Method without EM counterpart:] This method allows one to determine the sky localization of the standard siren well after the GW event, with clear practical advantages (in particular, the presence of the source does not need to be recognized in real time, and the SNR required for a good sky localization does not need to be reached before the stage at which the EM signal might be triggered). It adopts a statistical approach \cite{Schutz:1986gp,MacLeod:2007jd}. Indeed, given a galaxy catalogue, the galaxy hosting the standard siren must be one of those contained in the box given by the identified solid angle (with its experimental error) times a properly-guessed redshift range. This range is obtained by applying a reasonable prior to the redshift obtained by inverting Eq.~(\ref{eq:dL_z}). In this procedure one must of course take into account the dependence of $H(z)$ on the cosmological parameters, which will affect the final posterior on the parameters themselves. The redshift of the standard siren can then be estimated as the weighted average of the redshifts of all the galaxies within the error box (an additional prior on the conformation of each galaxy may also be included). Due to the large uncertainty of this statistical approach, the method is effective only when a limited number of galaxies can be identified within the error volume, and only if a sufficiently large number of GW events is observed.
\end{description} These procedures suffer from some major uncertainties. The redshift appearing in Eq.~(\ref{eq:dL_z}) is the one due to the Hubble flow, and consequently the contribution of the peculiar velocity of the host galaxy to the measured redshift must be subtracted. Moreover, the actual geodesic followed by the GW is not the one resulting from the homogeneous and isotropic metric assumed in Eq.~(\ref{eq:dL_z}), and in fact the lensing contribution to $d_L$ due to cosmic inhomogeneities should be removed. Although in principle very precise lensing maps and galaxy catalogues are helpful to estimate these sources of uncertainty \cite{Jonsson:2006vc,Shapiro:2009sr,Hirata:2010ba,Hilbert:2010am}, the lensing and peculiar-velocity effects still have to be treated as a (large) systematic error that can be reduced only by means of numerous detections (the uncertainty due to peculiar velocities dominating at low redshift, and the lensing uncertainty dominating at high redshift). \subsubsection{Standard sirens with current GW data: GW170817} \label{sub:status} $ $\\[1mm] \noindent The GW170817 event, corresponding to the coalescence of a BNS, is the prototypical example of cosmography via standard sirens with an EM counterpart. For that event, the LVC interferometer network recovered the luminosity distance $d_L= 43.8^{+2.9}_{-6.9}$ Mpc at 68\% C.L., and a sky localization of 31 deg$^2$ \cite{TheLIGOScientific:2017qsa,GBM:2017lvd}, corresponding to the solid angle shown in the left panel of Fig.~\ref{fig:gw17} (green area). The optical telescopes exploring this portion of the sky identified an EM transient associated with the galaxy NGC4993, which is known to be receding from us at a speed of $3327\pm 72$\,km s$^{-1}$.
Although part of the peculiar velocity of the galaxy NGC4993 could be subtracted \cite{GBM:2017lvd}, the remaining uncertainties on the peculiar motion eventually yield the estimate $v_H=3017\pm 166$\,km s$^{-1}$ for its Hubble-flow velocity, corresponding to $z\simeq 10^{-2}$ (note that at this redshift the lensing uncertainty is negligible). Using the Hubble law $d_L=cz/H_0$, which is the leading-order term in the expansion of Eq.~(\ref{eq:dL_z}) at small $z$, one obtains the posterior distribution for $H_0$ presented in Fig.~\ref{fig:gw17} (right panel), providing the measurement $H_0 = 70.0^{+12.0}_{-8.0}\,\rm{km\, s^{-1} Mpc^{-1}}$ at 68\% C.L.~\cite{Abbott:2017xzu}. The uncertainty on $H_0$ is too large to make the measurement competitive with CMB \cite{Ade:2015xua} and SNIa constraints \cite{Riess:2016jrr}. Nevertheless, this represents a local estimate of $H_0$ that does not depend on the cosmic distance ladder, and the first cosmological measurement not relying only on EM radiation. Moreover, the large number of events similar to GW170817 expected to be observed in the future will eventually yield constraints on $H_0$ at the level of both local and CMB measurements. \begin{figure}[t] \begin{minipage}[h]{.45\linewidth} \centering \includegraphics[width=\linewidth]{GW170817_MMA_Skymap.png} \end{minipage}\hfill \begin{minipage}[h]{0.49\linewidth} \vspace*{5mm}\centering \includegraphics[width=\linewidth]{hubble_posterior.pdf} \end{minipage} \caption{\it{Left panel: The multi-messenger sky localization of GW170817 and the identification of the host galaxy. Right panel: The posterior distribution of $H_0$ compared to the recent CMB and SNIa constraints~\cite{Ade:2015xua, Riess:2016jrr}. Figures taken from Refs.~\cite{GBM:2017lvd,Abbott:2017xzu}}.
} \label{fig:gw17} \end{figure} \subsubsection{Cosmological forecasts with standard sirens} $ $\\[1mm] \noindent The potential of current and forthcoming GW detectors for cosmology has been widely studied in the literature. To appreciate the capability of standard sirens to probe the late universe, here we report the most recent forecasts (see e.g.~Refs.~\cite{Dalal:2006qt,Nissanke:2009kt,DelPozzo:2011yh,Nissanke:2013fka} (LVC) and \cite{Menou:2008dv,Stavridis:2009ys,VanDenBroeck:2010fp,Shang:2010ta,Sereno:2011ty,Petiteau:2011we} (LISA) for previous studies). Such analyses are expected to improve continuously over the upcoming years of data taking. \begin{description} \item[\it Forecasts with ground-based interferometers:] Ground-based interferometers can detect binary systems composed of NSs and/or stellar-origin BHs, with masses ranging from a few up to tens of solar masses. By means of the planned LVC-KAGRA-LIGO India network, BNSs will allow a tight constraint to be put on $H_0$. For example, assuming the $\Lambda$CDM model, $H_0$ is expected to be measured with roughly a $\sim$1\% error after $\sim$100 detections without counterpart~\cite{Chen:2017rfc, Seto:2017swx}. The result would be even tighter if, for some SOBBHs, the host galaxy could be identified, or if BNSs with counterpart are considered instead, as only $\sim$10 such detections would yield the same level of accuracy~\cite{Chen:2017rfc,Nishizawa:2016ood}. The forecasts improve drastically for third-generation ground-based detectors, such as the Einstein Telescope (ET) \cite{Sathyaprakash:2009xt,Taylor:2012db,Cai:2016sby}. In addition, besides the two approaches mentioned above for the redshift measurement, a further method, feasible only with an ET-like device, will become available~\cite{Messenger:2011gi} (see also \cite{Taylor:2011fs} for a similar idea). Thanks to the tidal effects in a BNS waveform, the redshift-mass degeneracy can be broken.
Consequently, by measuring the tidal effect in the waveform and assuming prior knowledge of the NS equation of state, the redshift $z$ entering the redshifted chirp mass can be reconstructed from the waveform itself. In this way, with more than 1000 detected BNS events, the ET is expected to constrain the parameters $H_0$ and $\Omega_m$ of the $\Lambda$CDM model with uncertainties of roughly $\sim 8\%$ and $\sim 65\%$, respectively~\cite{DelPozzo:2015bna}. However, if the rate of BNS mergers in the universe turns out to be at the upper end of the currently allowed range, the ET will be able to see up to $\sim\!\!10^7$ BNS mergers, improving these constraints by two orders of magnitude. \item[\it Forecasts with space-based interferometers:] Space-based GW interferometry will open the low-frequency window (mHz to Hz) in the GW landscape, complementary to Earth-based detectors (Hz to kHz) and PTA experiments (nHz). The Laser Interferometer Space Antenna (LISA), selected by ESA, is currently the only planned space mission designed to detect GWs~\cite{Audley:2017drz}. Several new classes of GW astrophysical sources will be observed by LISA, including SOBBHs, EMRIs and massive BBHs from $10^4$ to $10^7$ solar masses. These sources can not only be conveniently employed as standard sirens, but will also be detected in different redshift ranges, making LISA a unique cosmological probe, able to measure the expansion rate of the universe from local ($z\sim 0.01$) to very high ($z\sim 10$) redshift. The current forecasts, produced taking into account only massive BBHs \cite{Tamanini:2016zlh} (for which an EM counterpart is expected) or SOBBHs \cite{Kyutoku:2016zxn,DelPozzo:2017kme} (for which no EM counterpart is expected), estimate constraints on $H_0$ down to the few-percent level.
However, joining all the possible GW sources usable as standard sirens with LISA in a single analysis should not only provide better results for $H_0$, which will likely be constrained at the sub-percent level, but will also open up the possibility of constraining other cosmological parameters. The massive BBH data points at high redshift will, moreover, be useful to test alternative cosmological models predicting deviations from the $\Lambda$CDM expansion history at relatively early times \cite{Caprini:2016qxs,Cai:2017yww}. Finally, more advanced future missions, such as DECIGO or BBO, which at the moment have only been proposed on paper, may be able to probe the cosmological parameters, including the equation of state of dark energy, with ultra-high precision~\cite{Cutler:2009qv,Nishizawa:2010xx,Kawamura:2011zz,Arabsalmani:2013bj}. They might also be able to detect the effect of the expansion of the universe directly in the phase of the binary GW waveform \cite{Seto:2001qf,Nishizawa:2011eq}, although the contribution of peculiar accelerations would complicate such a measurement \cite{Bonvin:2016qxr,Inayoshi:2017hgw}. \end{description} \subsubsection{Future prospects}$ $\\[1mm] \noindent The recent GW170817 event has triggered numerous studies on the use of standard sirens for cosmography, and forthcoming experimental and theoretical developments might change the priorities in the field. As mentioned above, the network of Earth-based GW interferometers is expected to detect an increasing number of GW sources employable as standard sirens, with or without EM counterparts. This will eventually yield a measurement of the Hubble constant competitive with the CMB and SNIa constraints, which might be used to alleviate the tension between these two datasets. On the other hand, higher-redshift ($z \gtrsim 1$) standard-siren data will probably be obtained only with third-generation detectors or space-based interferometers.
With these high-redshift data we will start probing the expansion of the universe at large distances, meaning that a clean, not exclusively EM-based, measurement of other cosmological parameters will become a reality. Furthermore, the precision of cosmography via standard sirens might be boosted by improvements in other astronomical observations; this will be the case if, e.g., galaxy catalogues and lensing maps improve substantially in the future, as expected. Concerning present forecasts, there is room for improvement, especially regarding third-generation Earth-based interferometers and space-borne GW detectors. The full cosmological potential of future GW experiments such as ET and LISA has still to be assessed. For ET we do not yet have a full cosmological analysis taking into account all the possible GW sources that will be used as standard sirens, using both the method with counterpart (BNSs) and the method without counterpart (SOBBHs and BNSs). Moreover, we still need to understand fully to what extent the information on the NS equation of state can be used to infer the redshift of BNSs and NS-BH binaries, with the latter potentially representing a new, interesting future source of standard sirens \cite{Nissanke:2009kt,Nissanke:2013fka,Vitale:2018wlg}. Regarding LISA, several issues still need to be addressed in order to produce reliable forecasts. For instance, the EMRI detection rate is still largely unknown \cite{Babak:2017tow}, although these sources might turn out to be excellent standard-siren candidates at redshifts $0.1\lesssim z \lesssim 1$~\cite{Tamanini:2016uin}. In addition, the prospects for massive BBH mergers as standard sirens would be more robust if an up-to-date model of the EM counterpart were implemented in the investigations. Performing these analyses, and combining the results for all the different types of standard sirens observable by LISA, will allow for a full assessment of the cosmological potential of the mission.
Most of the literature on standard sirens assumes GR and a universe that is homogeneous and isotropic on large scales. However, breakthroughs may occur by generalizing the aforementioned analyses to theories beyond GR, and forecasts for scenarios not fulfilling these assumptions are in fact a flourishing topic with potentially revolutionary results. For example, some models of modified gravity, formulated to provide cosmic acceleration, have been strongly constrained by the measurement of the speed of GWs (compared to the speed of light) with GW170817 \cite{Creminelli:2017sry,Ezquiaga:2017ekz,Baker:2017hug,Sakstein:2017xjx}. In general, in theories beyond GR or admitting extra dimensions, the cosmological propagation of GWs changes, and a comparison between the values of $d_L$ measured from GWs and inferred from EM observations can provide a strong test of the validity of these theories (see \cite{Belgacem:2017ihm,Amendola:2017ovw,Linder:2018jil,Pardo:2018ipy} for recent works). Finally, we mention that the large number of GW sources expected to be observed by third-generation interferometers such as ET, or by future space-borne detectors such as DECIGO/BBO, might even be used to test the cosmological principle \cite{Yagi:2011bt,Namikawa:2015prh,Cai:2017aea} and to extract cosmological information by cross-correlating their spatial distribution with galaxy catalogues or lensing maps \cite{Camera:2013xfa,Oguri:2016dgk,Raccanelli:2016fmc}. \subsection{Interplay between GWs from binaries and from early-universe sources} The gravitational interaction is so weak that GWs propagate practically unperturbed along their path from the source to us. GWs produced in the early universe can thus carry a unique imprint of the pre-CMB era, during which the universe was not transparent to photons. In this sense, GW detection can provide, for the first time, direct and clean access to epochs that are very hard to probe by any other observational means.
GWs of cosmological origin appear to our detectors (much like the CMB) as a SGWB \cite{Romano:2016dpx,Caprini:2018mtu}. Several pre-CMB phenomena sourcing GWs might have occurred along the cosmological history, from inflation to the epoch of the QCD phase transition. In such a case, the SGWB would consist of the sum of all the individual contributions, each of them potentially differing from the others in its spectral shape as a function of frequency, or in other properties such as chirality and/or Gaussianity. It is hence crucial to have precise predictions for all proposed cosmological sources, in order to isolate the individual components of the detected SGWB, with the aim of reconstructing the early-time history of the universe. On the other hand, binary systems can also be detected as a SGWB. This happens for those (independent) astrophysical events that are individually too weak to be resolved. The level of their ``contamination'' of the cosmological SGWB thus depends on the sensitivity and resolution of the available detector. Crucially, in LIGO-like and LISA-like experiments the astrophysical component might be stronger than the cosmological one, so that the information hidden in the pre-CMB signal would be impossible to recover, unless the astrophysical contribution is known in great detail and methods are found to subtract it. \subsubsection{Status and future prospects} $ $\\[-1mm] \noindent No measurement of the SGWB has been achieved yet, but upper bounds have been inferred from the observations. These are usually expressed in terms of the SGWB energy density, which is given by $\Omega_{\rm GW}(f)=(f/\rho_c) ~\partial \rho_{\rm GW}/\partial f$, with $\rho_c$ and $\rho_{\rm GW}$ being the critical and the SGWB energy densities, respectively.
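Both the upper bounds and the astrophysical predictions are commonly expressed as power laws in this quantity. A minimal sketch of evaluating such a power-law component across detector bands follows; the amplitude is illustrative, of the order predicted for the background of inspiralling binaries at 25 Hz, and the $f^{2/3}$ slope is the one characteristic of GW-driven circular inspirals:

```python
def omega_gw(f, omega_ref, f_ref=25.0, alpha=2.0 / 3.0):
    """Power-law SGWB spectrum Omega_GW(f) = Omega_ref * (f / f_ref)^alpha,
    with reference frequency f_ref = 25 Hz and alpha = 2/3 for binaries."""
    return omega_ref * (f / f_ref) ** alpha

omega_ref = 1.2e-9  # illustrative amplitude at 25 Hz for the binary background
for f, band in [(25.0, "ground-based band"), (3.0e-3, "LISA band")]:
    print(f"{band}: Omega_GW({f:g} Hz) = {omega_gw(f, omega_ref):.2e}")
```

This is the kind of extrapolation that allows the same astrophysical binary background to be quoted in both the LVC and the LISA frequency bands.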
Specifically, assuming the frequency shape $\Omega_{\rm GW}(f)=\Omega_\alpha (f/25\,{\rm Hz})^\alpha$, the LVC found the 95\% C.L.~limits $\Omega_{\alpha=0} < 1.7\times 10^{-7}$, $\Omega_{\alpha=2/3} < 1.3\times 10^{-7}$ and $\Omega_{\alpha=3} < 1.7\times 10^{-8}$ in the $\mathcal O (10)$ -- $\mathcal O (100)\,$Hz frequency band~\cite{TheLIGOScientific:2016dpb}. The latest Pulsar Timing Array analysis, done with the data of the NANOGrav collaboration, yields $h^2\Omega_{\alpha=0} < 3.4\times 10^{-10}$ at 95\% C.L.~at $3.17\times 10^{-8}$\,Hz~\cite{Arzoumanian:2018saf}. Since the astrophysical sources are not expected to produce a SGWB with an amplitude exceeding these bounds, the latter are relevant only for the most powerful cosmological signals~\cite{Lasky:2015lej, Abbott:2017mem}. However, it is probably only a matter of years before the first detection of the astrophysical SGWB by the LVC. Based on the current detection rates, the SOBBHs and BNSs lead to the power-law SGWBs $\Omega_{\rm BBH}(f)=1.2^{+1.9}_{-0.9}\times 10^{-9}\left(f/25\,{\rm Hz}\right)^{2/3}$ and $\Omega_{\rm BBH+BNS}(f)=1.8^{+2.7}_{-1.3}\times 10^{-9}\left(f/25\,{\rm Hz}\right)^{2/3}$ in the frequency bands of both LVC and LISA~\cite{Abbott:2017xzg}. These signals are expected to be measurable by the LIGO and Virgo detectors after around 40 months of observation time~\cite{Abbott:2017xzg}, while in LISA they will reach a signal-to-noise ratio (SNR) of $\mathcal O(10)$ within around one month of data taking~\cite{CosWGpreparation}. On the other hand, given the large uncertainties on EMRIs~\cite{Babak:2017tow}, it is not clear whether they too can give rise to an observable SGWB component in the LISA band. Focusing exclusively on the SGWB contribution from astrophysical binaries, two general aspects are particularly worth investigating. The first concerns the detailed prediction and characterization of this component.
For the time being, the literature does not offer any technique to disentangle the cosmological SGWB component from a generic astrophysical one. At present, it is believed that component separation will be (partially) feasible in the LISA data only for the SGWB signal due to galactic binaries. In this case, indeed, the anisotropy of the spatial distribution of the sources (confined to the galactic plane), together with the motion of the LISA constellation, generates a SGWB signal modulated on a yearly basis, which makes it possible to distinguish and subtract this component from the total SGWB~\cite{Adams:2013qma}. Although a similar approach is not possible for the extra-galactic SGWB component, this example illustrates well how detailed predictions of the astrophysical signatures might suggest subtraction techniques allowing one to perform component separation and hopefully isolate the information coming from the early universe (similarly to what is done for foreground subtraction in CMB analyses). The second aspect concerns the possibility of exploiting third-generation detectors, such as the ET and Cosmic Explorer, to subtract the astrophysical SGWB component \cite{Regimbau:2016ike}. Their exquisite sensitivity may allow one to resolve many of the SOBBHs and BNSs that give rise to the SGWB in present detectors, including the full LVC-KAGRA-LIGO India network. This would clean the access to the cosmological SGWB down to a level of $\Omega_{\rm GW}\simeq 10^{-13}$ after five years of observation \cite{Regimbau:2016ike}. This technique not only applies in the frequency bandwidth of the Earth-based devices, but can also be used to clean the LISA data and reach a potential SGWB of cosmological origin in the LISA band. Note that LISA might also help beat down the level of the SGWB from SOBBHs by exploiting possible multi-band detections of the same source \cite{Sesana:2016ljz}.
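The lead time available in such multi-band observations can be estimated from the leading-order (Newtonian) time to coalescence; the chirp mass and band-entry frequency below are illustrative:

```python
import math

G = 6.674e-11     # m^3 kg^-1 s^-2
c = 2.998e8       # m / s
MSUN = 1.989e30   # kg
YEAR = 3.156e7    # s

def time_to_merger(mc_z, f):
    """Leading-order (Newtonian) time to coalescence from observed GW
    frequency f (Hz), for detector-frame chirp mass mc_z (kg):
    tau = (5/256) (pi f)^(-8/3) (G mc_z / c^3)^(-5/3)."""
    return 5.0 / 256.0 * (math.pi * f) ** (-8.0 / 3.0) \
        * (G * mc_z / c ** 3) ** (-5.0 / 3.0)

# Illustrative GW150914-like binary (detector-frame chirp mass ~28 Msun)
# caught by LISA in its inspiral phase at f ~ 0.02 Hz:
tau = time_to_merger(28.0 * MSUN, 0.02)
print(f"time to merger from 0.02 Hz: {tau / YEAR:.1f} yr")
```

For these fiducial values the merger in the ground-based band follows the LISA-band inspiral by a few years, which sets the scale of the advance warning.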
The SOBBH would be detected first by LISA, during its inspiral phase; some years later, when the binary has reached the merger stage, it would reappear in Earth-based interferometers. The latter can therefore be alerted in advance, possibly leading to an increase in the number of detected BBHs (depending on their LISA SNR). \part{Modelling black-hole sources of gravitational waves: prospects and challenges} \label{WG2} \phantomsection \addcontentsline{toc}{part}{\bf Chapter II: Modelling black-hole sources of gravitational waves: prospects and challenges} \begin{center} {\large \bf Chapter II: Modelling black-hole sources of gravitational waves: prospects and challenges} \end{center} \begin{center} Editor: Leor Barack \end{center} \vskip 1cm \setcounter{section}{0} \section{Introduction} \label{Sec:introduction2} The detection and characterization of gravitational-wave (GW) sources rely heavily on accurate models of the expected waveforms. This is particularly true for black hole binaries (BBHs) and other compact objects, for which accurate models are both necessary and hard to obtain. To appreciate the state of affairs, consider the following three examples. (i) While GW150914, the first BH merger event detected by LIGO, had initially been identified using a template-free search algorithm, some of the subsequent events, which were not as bright, would likely have been altogether missed if template-based searches had not been performed. 
(ii) While the error bars placed on the extracted physical parameters of detected BH mergers have so far come primarily from instrumental noise statistics, systematic errors from the finite accuracy of available signal models are only marginally smaller and would actually dominate the total error budget for sources with other spin configurations or greater mass disparity; in the case of the binary neutron star (BNS) GW170817, deficiencies in available models of tidal effects already restrict the quality of science extractable from the signal. (iii) Even with a perfectly accurate model at hand, analysis of GW170817 would not have been possible within the timescale of weeks in which it was carried out, without the availability of a suitable reduced-order representation of the model, necessary to make such an analysis computationally manageable. Indeed, accurate and computationally efficient models underpin all of GW data analysis. They do so now, and will do so even more in the future as more sensitive and broader-band instruments go online. The response of detectors like ET, and especially LISA, will be source-dominated, with some binary GW signals occurring at high signal-to-noise ratio (SNR) and remaining visible in band through many more wave cycles. The accuracy standard of models needs to increase commensurately with detector sensitivity, or else modelling error would restrict our ability to fully exploit the detected signals. As a stark example, consider that, in a scenario that is not unlikely, LISA's output will be dominated by a bright massive BH (MBH) merger signal visible with an SNR of several hundred. This signal would have to be carefully ``cleaned out'' of the data in order to enable the extraction and analysis of any other sources buried underneath; any model inaccuracies would form a systematic noise residual, potentially hiding dimmer sources. 
In the case of Extreme Mass Ratio Inspirals (EMRIs), where $O(10^5)$ wave cycles are expected in the LISA band at a low SNR, a precise model is a crucial prerequisite for both detection and parameter extraction. This chapter reviews the current situation with regard to the modelling of GW sources within GR, identifying the major remaining challenges and drawing a roadmap for future progress. To make our task manageable, we focus mainly on sources involving a pair of BHs in a vacuum environment in GR, but, especially in Sec.~\ref{Sec:HE}, we also touch upon various extensions beyond vacuum, GR and the standard model (SM) of particle physics. Isolated vacuum BHs in GR are remarkably simple objects, described in exact form by the Kerr family of solutions to Einstein's field equations (see, however, Sec.~\ref{Sec:MR} of this chapter). But let two such BHs interact with each other, and the resulting system displays remarkably complicated dynamics, with no known exact solutions. Even numerical solutions have for decades proven elusive, and despite much progress following the breakthrough of 2005 they remain computationally very expensive---prohibitively so for mass ratios smaller than $\sim 1:10$---and problematic for certain astrophysically relevant BH spin configurations. Systematic analytical approximations are possible and have been developed based around expansions of the field equations in the weak-field or extreme mass-ratio regimes, and these may be combined with fully numerical solutions to inform waveform models across broader areas of the parameter space. To facilitate the fast production of such waveforms, suitable for GW search pipelines, effective and phenomenological models have been developed, which package together and interpolate results from systematic numerical simulations and analytical approximations. 
With the rapid progress in GW experiments, there is now, more than ever, a need for a concentrated community effort to improve existing models in fidelity, accuracy and parameter-space reach, as well as in computational efficiency. The structure of this chapter is as follows. In Secs.\ \ref{Sec:perturbations} and \ref{Sec:PN} we review the two main systematic approximations to the BBH problem: perturbation theory including the gravitational self-force (GSF) and post-Newtonian (PN) approaches, respectively. Section \ref{Sec:NR} surveys progress and prospects in the Numerical Relativity (NR) modelling of inspiralling and merging BHs in astrophysical settings, and Sec.\ \ref{Sec:HE} similarly reviews the role of NR in studying the dynamics of compact objects in the context of alternative theories of gravity and beyond the SM. Section \ref{Sec:EOB} then reviews the Effective One Body (EOB) approach to the BBH problem, and the various phenomenological models that have been developed to facilitate fast production of waveform templates. Section \ref{Sec:DA} reviews the unique and highly involved challenge of data-analysis in GW astronomy, with particular emphasis on the role of source models; this data-analysis challenge sets the requirements and accuracy standards for such models. Finally, Sec.\ \ref{Sec:MR} gives a mathematical relativist's point of view, commenting on a variety of (often overlooked) foundational questions that are yet to be resolved in order to enable a mathematically rigorous and unambiguous interpretation of GW observations. \section{Perturbation Methods} \label{Sec:perturbations} \vspace{-3mm} {\it Contributor:} B. Wardell \vspace{3mm} Exact models for GWs from BBHs can only be obtained by exactly solving the full Einstein field equations. However, there is an important regime in which a perturbative treatment yields a highly accurate approximation. 
For BBH systems in which one of the BHs is much less massive than the other, one may treat the mass ratio as a small perturbation parameter. Then, the Einstein equations are amenable to a perturbative expansion in powers of this parameter. Such an expansion is particularly suitable for EMRIs, systems in which the mass ratio may be as small as $10^{-6}$, or even smaller \cite{Magorrian:1997hw,Gair:2004iv,AmaroSeoane:2007aw,Gair:2008bx,AmaroSeoane:2012km}. In such cases, it has been established \cite{Hinderer:2008dm,Isoyama:2012bx,Burko:2013cca} that it will be necessary to incorporate information at second-from-leading perturbative order to achieve the accuracy that will be required for optimal parameter estimation by the planned LISA mission \cite{LISA,Barack:2003fp,Babak:2010ej,Gair:2010bx,Babak:2017tow}. Aside from EMRIs, a perturbative expansion is likely to also be useful as a model for Intermediate Mass Ratio Inspirals (IMRIs): systems where the mass ratio may be as large as $\sim10^{-2}$. Such systems, if they exist, are detectable in principle by Advanced LIGO and Virgo, and are indeed being looked for in the data of these experiments~\cite{Abbott:2017iws,Haster:2015cnn}. The perturbative approach (often called the \emph{self-force}\footnote{Strictly speaking, the term \emph{self-force} refers to the case where local information about the perturbation in the vicinity of the smaller object is used; other calculations of, e.g., the flux of GW energy far from the binary also rely on a perturbative expansion \cite{Glampedakis:2002ya,Mino:2003yg,Hughes:2005qb,Sago:2005fn,Drasco:2005is,Drasco:2005kz,Sundararajan:2007jg,Fujita:2009us}, but are not referred to as GSF calculations.} approach) yields a set of equations for the motion of the smaller object about the larger one. For a detailed technical review of self-force physics, see \cite{Poisson:2011nh}, and for a more recent, pedagogical review, see \cite{Barack:2018yvs}. 
At zeroth order in the expansion, one recovers the standard geodesic equations for a test particle in orbit around (i.e.~moving in the background spacetime of) the larger BH. At first order, we obtain coupled equations for an accelerated worldline forced off a geodesic by the GSF, which itself arises from the metric perturbation to the background spacetime sourced by the stress-energy of the smaller BH. In the GSF approach, it can be convenient to treat the smaller BH as a ``point particle'', with the GSF being computed from an effective \emph{regularized} metric perturbation \cite{DeWitt:1960fc,Hobbs:1968a,Mino:1996nk,Quinn:1996am,Detweiler:2002mi}. Such an assumption is not strictly necessary, but has been validated by more careful treatments whereby both BHs are allowed to be extended bodies, and the point-particle-plus-regularization prescription is recovered by appropriately allowing the smaller BH to shrink down to zero size \cite{Gralla:2008fg,Pound:2009sm,Pound:2010pj,Pound:2012dk}. These more careful treatments have also allowed the GSF prescription to be extended to second perturbative order \cite{Rosenthal:2005it,Rosenthal:2006iy,Detweiler:2011tt,Gralla:2012db,Pound:2012nt,Pound:2012dk,Pound:2014xva} and to the fully non-perturbative case \cite{Harte:2008xq,Harte:2009yr,Harte:2011ku}. The goal of the GSF approach is to develop efficient methods for computing the motion of the smaller object and the emitted GWs in astrophysically-relevant scenarios. These involve, most generally, a spinning (Kerr) large BH, and a small (possibly spinning) compact object in a generic (possibly inclined and eccentric) inspiral orbit around it. The data analysis goals of the LISA mission (which demand that the phase of the extracted waveform be accurate to within a fraction of a radian over the entire inspiral) require all contributions to the metric perturbation at first order, along with the dissipative contributions at second order \cite{Hinderer:2008dm}. 
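The zeroth-order (test-particle) dynamics described above can be made concrete with a minimal sketch: a circular timelike geodesic of Schwarzschild spacetime in its radial effective-potential form, in geometric units $G=c=1$ with background mass $M=1$. The function names and the sample radius $r=10M$ are illustrative assumptions, not tied to a specific EMRI.

```python
# Sketch of the zeroth-order (test-particle) dynamics: a circular
# geodesic of Schwarzschild spacetime in geometric units G = c = 1,
# with background mass M = 1.
import math

def circular_EL(r, M=1.0):
    """Specific energy E and angular momentum L of a circular geodesic."""
    E = (1.0 - 2.0 * M / r) / math.sqrt(1.0 - 3.0 * M / r)
    L = math.sqrt(M * r * r / (r - 3.0 * M))
    return E, L

def r_dot_squared(r, E, L, M=1.0):
    """Radial equation (dr/dtau)^2 = E^2 - (1 - 2M/r)(1 + L^2/r^2)."""
    return E * E - (1.0 - 2.0 * M / r) * (1.0 + L * L / (r * r))

# For a circular orbit at r = 10M the radial velocity vanishes, and the
# orbital frequency obeys the Kepler-like relation Omega^2 = M / r^3.
E, L = circular_EL(10.0)
print(r_dot_squared(10.0, E, L))   # ~ 0 up to round-off
print(math.sqrt(1.0 / 10.0**3))    # M * Omega for r = 10M
```

At first order in the mass ratio, the GSF pushes the worldline off this geodesic; computing that correction is precisely the subject of the calculations discussed in this section.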
\subsection{Recent developments} Focused work on the GSF problem has been ongoing for at least the last two decades, during which time there has been substantial progress. Early work developed much of the mathematical formalism, particularly in how one constructs a well-motivated and unambiguous regularized first-order metric perturbation \cite{DeWitt:1960fc,Hobbs:1968a,1982JPhA...15.3737G,Mino:1996nk,Quinn:1996am,Detweiler:2002mi,Gralla:2008fg,Pound:2009sm}. More recent work has focused on the conceptual challenges of extending these initial results to second perturbative order \cite{Rosenthal:2005it,Rosenthal:2006iy,Detweiler:2011tt,Gralla:2012db,Pound:2012nt,Pound:2012dk,Pound:2014xva}, and on turning the formal mathematical prescriptions into practical numerical schemes \cite{Poisson:Wiseman:1998,Barack:1999wf,Barack:2001gx,Anderson:2005gb,Barack:2007jh,Vega:2007mc}. As a result, we are now at the point where first-order GSF calculations are possible for almost any orbital configuration in a Kerr background spacetime \cite{vandeMeent:2017bcc}. Indeed, we now have not one, but three practical schemes for computing the regularized first-order GSF \cite{Wardell:2015kea}. Some of the most recent highlights from the substantial body of work created by the GSF community are discussed below. 
\subsubsection{First-order gravitational self-force for generic orbits in Kerr spacetime} Progress in developing tools for GSF calculations has been incremental, starting out with the simplest toy model of a particle with scalar charge in a circular orbit about a Schwarzschild BH, and then extending to eccentric, inclined, and generic orbits about a spinning Kerr BH \cite{Barack:2000zq,Burko:2000xx,Burko:2001kr,Detweiler:2002gi,DiazRivera:2004ik,Haas:2007kz,Ottewill:2007mz,Ottewill:2008uu,Lousto:2008mb,Vega:2009qb,Canizares:2009ay,Dolan:2010mt,Thornburg:2010tq,Canizares:2010yx,Warburton:2010eq,Warburton:2011hp,Diener:2011cc,Dolan:2011dx,Casals:2012qq,Ottewill:2012aj,Casals:2013mpa,Warburton:2013lea,Vega:2013wxa,Wardell:2014kea,Warburton:2014bya,Gralla:2015rpa,Thornburg:2016msc,Heffernan:2017cad}. Progress has also been made towards developing tools for the conceptually similar, but computationally more challenging GSF problem; again starting out with simple circular orbits in Schwarzschild spacetime before extending to eccentric equatorial orbits and, most recently, fully generic orbits in Kerr spacetime \cite{Barack:2002ku,Barack:2005nr,Anderson:2005gb,Barack:2007tm,Sago:2009zz,Shah:2010bi,Akcay:2010dx,Barack:2010tm,Keidl:2010pm,Warburton:2011fk,Dolan:2012jg,Shah:2012gu,Dolan:Capra16,Akcay:2013wfa,Pound:2013faa,Isoyama:2014mja,Merlin:2014qda,Osburn:2015duj,Gralla:2016qfw}. 
Along the way, there have been many necessary detours in order to establish the most appropriate choice of gauge \cite{Keidl:2010pm,Hopper:2012ty,Pound:2013faa,Osburn:2014hoa,Merlin:2014qda,Pound:2015fma,Shah:2016juc,Chen:2016plo}, reformulations of the regularization procedure \cite{Haas:2006ne,Barack:2007we,Wardell:2011gb,Vega:2011wf,Dolan:2012jg,Heffernan:2012su,Heffernan:2012vj,Pound:2013faa,Warburton:2013lea,Linz:2014pka,Wardell:2015ada}, and various numerical methods and computational optimisations \cite{Barack:2008ms,Vega:2009qb,Canizares:2009ay,Field:2009kk,Field:2010xn,Canizares:2010yx,Hopper:2010uv,Akcay:2010dx,Hopper:2012ty,Dolan:2012jg,Akcay:2013wfa,Osburn:2014hoa,Hopper:2015jxa,Hopper:2017iyq,Barack:2017oir}. \subsubsection{Extraction of gauge-invariant information} Both the regularized metric perturbation and the GSF associated with it are themselves gauge-dependent \cite{Barack:2001ph,Gralla:2011zr,Pound:2015fma}, but their combination encapsulates gauge-invariant information. A series of works have derived a set of gauge-invariant quantities accessible from the regularized metric and GSF, which quantify {\em conservative} aspects of the dynamics in EMRI systems beyond the geodesic approximation. (Strictly speaking, they are only ``gauge invariant'' within a particular, physically motivated class of gauge transformations. Nevertheless, even with this restriction the gauge invariance is very useful for comparisons with other results.) Examples include Detweiler's \emph{red-shift} invariant \cite{Detweiler:2008ft}, the frequency of the innermost stable circular orbit (ISCO) in Schwarzschild \cite{Barack:2009ey} and Kerr \cite{Isoyama:2014mja}, the periastron advance of slightly eccentric orbits in Schwarzschild \cite{Barack:2010ny} and Kerr \cite{vandeMeent:2016hel}, spin (geodetic) precession \cite{Dolan:2013roa,Shah:2015nva,Akcay:2016dku,Akcay:2017azq}, and, most recently, quadrupolar and octupolar tidal invariants \cite{Dolan:2014pja,Nolan:2015vpa}. 
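Two of these invariants can be made concrete at geodesic (test-mass) order in Schwarzschild spacetime, parametrized by the gauge-invariant frequency variable $y=(M\Omega)^{2/3}$. The sketch below uses only standard test-mass formulas; the $O({\rm mass~ratio})$ GSF corrections computed in the cited works are not included, and the function names are illustrative.

```python
# Sketch: gauge-invariant quantities at geodesic (test-mass) order in
# Schwarzschild, parametrized by y = (M * Omega)^(2/3). The O(mass-ratio)
# GSF corrections from the literature are omitted here.
import math

def redshift_geodesic(y):
    """Detweiler redshift z = 1/u^t = sqrt(1 - 3y) for circular orbits."""
    return math.sqrt(1.0 - 3.0 * y)

# Schwarzschild ISCO: r = 6M, i.e. M * Omega = 6^(-3/2) and y = 1/6.
M_Omega_isco = 6.0 ** -1.5
y_isco = M_Omega_isco ** (2.0 / 3.0)
print(M_Omega_isco)                 # ~ 0.0680
print(redshift_geodesic(y_isco))    # sqrt(1/2) ~ 0.707
```

The GSF shifts both the ISCO frequency and the redshift away from these geodesic values by terms linear in the mass ratio, and it is precisely those shifts that the comparisons described next exploit.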
The most important outcome from the development of these gauge invariants is the synergy it has enabled, both within the GSF programme \cite{Sago:2008id} (e.g., by allowing for direct comparisons between results computed in different gauges) and, as described next, with other approaches to the two-body problem. \subsubsection{Synergy with PN approximations, EOB theory, and NR} One of the most fruitful outcomes arising from the development of GSF gauge invariants is the synergy it has enabled between the GSF and PN and EOB theories. With a gauge-invariant description of the physical problem available, it is possible to make direct connections between GSF, PN and EOB approximations. This synergy has worked in a bidirectional way: GSF calculations have been used to determine previously unknown coefficients in both PN and EOB expansions \cite{Detweiler:2008ft,Blanchet:2010zd,Barack:2010ny,Blanchet:2009sd,Akcay:2012ea,Bini:2014zxa,Blanchet:2014bza,Bini:2015xua,Bini:2015bfb,Kavanagh:2015lva,Akcay:2015pjz,Akcay:2015pza,Hopper:2015icj,Kavanagh:2016idg,Bini:2016qtx,Bini:2016dvs,Kavanagh:2017wot}; and EOB and PN calculations have been used to validate GSF results \cite{Bini:2014zxa}, and even to assess the region of validity of the perturbative approximation \cite{LeTiec:2011bk,LeTiec:2011dp,Tiec:2013twa}. More on this in Sec.~\ref{Sec:PN}. Mirroring the synergy between GSF and PN/EOB, there have emerged methods for making comparisons between GSF and NR. This started out with direct comparisons of the periastron advance of slightly eccentric orbits \cite{LeTiec:2011bk,Tiec:2013twa}. More recently, a similar comparison was made possible for Detweiler's redshift \cite{Zimmerman:2016ajr,LeTiec:2017ebm}, facilitated by an emerging understanding of the relation between Detweiler's redshift and the horizon surface gravity of the small BH. 
\subsubsection{New and efficient calculational approaches} Despite the significant progress in developing numerical tools for computing the GSF, it is still a computationally challenging problem, particularly in cases where high accuracy is required. This challenge has prompted the development of new and efficient calculational approaches to the problem. Initial GSF results were obtained in the Lorenz gauge \cite{Barack:2005nr,Barack:2007tm,Barack:2009ey,Barack:2010tm,Akcay:2010dx,Barack:2011ed,Warburton:2011fk,Akcay:2013wfa}, where the regularization procedure is best understood. Unfortunately, the details of a Lorenz-gauge calculation---in which one must solve coupled equations for the $10$ components of the metric perturbation---are tedious and cumbersome, making it difficult to implement and even more difficult to achieve good accuracy. In the Schwarzschild case, other calculations based on variations of Regge-Wheeler gauge \cite{Detweiler:2005kq,Field:2009kk,Field:2010xn,Hopper:2010uv,Hopper:2012ty,Osburn:2015duj,Hopper:2017qus,Hopper:2017iyq} were found to be much easier to implement and yielded much more accurate results. However, with regularization in Regge-Wheeler gauge less well understood, those calculations were restricted to the computation of gauge-invariant quantities. Perhaps more importantly, they are restricted to Schwarzschild spacetime, meaning they cannot be used in astrophysically realistic cases where the larger BH is spinning. The radiation gauge---in which one solves the Teukolsky equation \cite{Teukolsky:1972my,Teukolsky:1973ha} for a single complex pseudo-scalar $\psi_4$ (or, equivalently, $\psi_0$)---retains much of the simplicity of the Regge-Wheeler gauge, but has the significant benefit of being capable of describing perturbations of a Kerr BH. 
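For concreteness, the Regge-Wheeler equation just mentioned reduces, for each multipole $\ell$ and frequency $\omega$, to a 1D scattering problem $d^2\psi/dr_*^2 + [\omega^2 - V_\ell(r)]\psi = 0$. The sketch below (geometric units with $M=1$; function names are illustrative) tabulates the odd-parity potential barrier and the tortoise coordinate:

```python
# Sketch: the Regge-Wheeler equation is a 1D wave equation
#   d^2 psi/dr_*^2 + (omega^2 - V_l(r)) psi = 0,
# with, for odd-parity gravitational perturbations (spin s = 2),
#   V_l(r) = (1 - 2M/r) * (l(l+1)/r^2 - 6M/r^3),
# and tortoise coordinate r_* = r + 2M ln(r/2M - 1). G = c = 1, M = 1.
import math

def rw_potential(r, ell=2, M=1.0):
    f = 1.0 - 2.0 * M / r                       # metric function
    return f * (ell * (ell + 1) / r**2 - 6.0 * M / r**3)

def tortoise(r, M=1.0):
    return r + 2.0 * M * math.log(r / (2.0 * M) - 1.0)

# The potential vanishes at the horizon (r = 2M) and at infinity, with a
# single peak near r ~ 3M for the quadrupole (ell = 2):
for r in (2.5, 3.3, 10.0, 50.0):
    print(f"r = {r:5.1f}M   r_* = {tortoise(r):9.3f}   V_2 = {rw_potential(r):.5f}")
```

In practice such equations are solved mode by mode with ingoing-wave boundary conditions at the horizon and outgoing-wave conditions at infinity; the sketch only tabulates the potential barrier that governs the scattering.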
Furthermore, recent progress has clarified subtle issues related to regularization \cite{Pound:2013faa} (including metric completion \cite{Merlin:2016boc,vandeMeent:2017fqk}) in radiation gauge, paving the way for high-accuracy calculations of both gauge-invariant quantities and the GSF~\cite{vandeMeent:2015lxa,vandeMeent:2016pee,vandeMeent:2016hel,vandeMeent:2017bcc}. Within these last two approaches (using Regge-Wheeler and radiation gauges) \emph{functional methods} \cite{Leaver:1986JMP,Mano:1996vt,Sasaki:2003xr} have emerged as a particularly efficient means of achieving high accuracy when computing the metric perturbation. Fundamentally, these methods rely on the fact that solutions of the Teukolsky (or Regge-Wheeler) equation can be written as a convergent series of hypergeometric functions. This essentially reduces the problem of computing the metric perturbation to the problem of evaluating hypergeometric functions. The approach has proved very successful, enabling both highly accurate numerical calculations \cite{Shah:2013uya,Shah:2014tka,Johnson-McDaniel:2015vva,vandeMeent:2017bcc} and even exact results in the low-frequency--large-radius (i.e. PN) regime \cite{Bini:2014zxa,Bini:2015bla,Bini:2015mza,Bini:2015xua,Bini:2015bfb,Kavanagh:2015lva,Hopper:2015icj,Kavanagh:2016idg,Bini:2016qtx,Bini:2016dvs,Kavanagh:2017wot}. A different type of analytic treatment is possible for modelling the radiation from the last stage of inspiral into a nearly extremal BH, thanks to the enhanced conformal symmetry in this scenario \cite{Porfyriadis:2014fja,Hadar:2014dpa,Hadar:2015xpa,Hadar:2016vmk}. This method, based on a scheme of matched asymptotic expansions, has so far been applied to equatorial orbits \cite{Gralla:2015rpa,Compere:2017hsi}, with the GSF neglected. \subsubsection{Cosmic censorship} Independently of the goal of producing accurate waveforms for LISA data analysis, the GSF programme has also yielded several other important results. 
One particular area of interest has been in the relevance of the GSF to answering questions about cosmic censorship. Calculations based on test-particle motion made the surprising discovery that a test particle falling into a Kerr BH had the potential to increase the BH spin past the extremal limit, thus yielding a naked singularity \cite{Jacobson:2009kt}. (Analogous cases exist where an electric charge falling into a charged (Reissner-Nordstr\"om) BH may cause the charge on the BH to increase past the extremal limit \cite{Hubeny:1998ga,Saa:2011wq,Gao:2012ca}.) The intuitive expectation is that this is an artifact of the test-particle approximation, and that by including higher-order terms in this approximation the GSF may in effect act as a ``cosmic censor'' by preventing overspinning or overcharging and restoring cosmic censorship \cite{Barausse:2010ka}. Several works have explored this issue in detail, studying the self-force on electric charges falling into a Reissner-Nordstr\"om BH \cite{Zimmerman:2012zu,Revelar:2017sem} and on a massive particle falling into a Kerr BH \cite{Colleoni:2015afa,Colleoni:2015ena}. These works demonstrated with explicit calculations how the overspinning or overcharging scenarios are averted once the full effect of the self-force is taken into account (a result later rigorously proven in a more general context \cite{Sorce:2017dst}). \subsection{Remaining challenges and prospects} While the perturbative (self-force) approach has proven highly effective to date, there remain several important and challenging areas for further development. Here, we list a number of the most important future challenges and prospects for the GSF programme. \subsubsection{Efficient incorporation of self-force information into waveform models} There are at least two key aspects to producing an EMRI model: (i) computing the GSF; and (ii) using the GSF to actually drive an inspiral. 
Unfortunately, despite the substantial advances in calculational approaches, GSF calculations are still much too slow to be useful on their own as a means for producing LISA gravitational waveforms. Existing work has been able to produce first-order GSF-driven inspirals for a small number of cases \cite{Diener:2011cc,Warburton:2011fk,Osburn:2015duj,Warburton:2017sxk}, but when one takes into account the large parameter space of EMRI systems it is clear that these existing methods are inadequate. It is therefore important to develop efficient methods for incorporating GSF information into EMRI models and waveforms. There has been some promising recent progress in this direction, with fast ``kludge'' codes producing approximate (but not sufficiently accurate) inspirals \cite{Glampedakis:2002ya,Hughes:2005qb,Drasco:2005kz,Sundararajan:2007jg,Sundararajan:2008zm,Chua:2017ujo,Berry:2012im}, and with the emergence of mathematical frameworks based on near-identity transformations \cite{vandeMeent:2018rms}, renormalization group methods~\cite{Galley:2016zee} and two-timescale expansions \cite{Moxon:2017ozd}. To complicate matters, inspirals generically go through a number of transient resonances, when the instantaneous radial and polar frequencies of the orbit momentarily form a low-order rational ratio. During such resonances, approximations based on adiabaticity break down \cite{Flanagan:2010cd,Flanagan:2012kg,vandeMeent:2013sza,vandeMeent:2014raa,Brink:2013nna}. Works so far have mapped the locations of resonances in the inspiral parameter space, studied how the orbital parameters (including energy and angular momentum) experience a ``jump'' upon resonant crossing, and illustrated how the magnitude of the jump depends sensitively on the precise resonant phase (the relative phase between the radial and polar motions at resonance). 
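The resonance condition described above can be illustrated with a toy model: the frequency tracks below are assumed, purely illustrative monotonic functions (not a real Kerr inspiral), and the sketch simply scans for the moments where their ratio crosses a low-order rational value.

```python
# Toy sketch of the resonance condition: omega_r(t) and omega_theta(t)
# are assumed, purely illustrative frequency tracks; the scan locates
# times where Omega_r / Omega_theta crosses a low-order rational p/q.
from fractions import Fraction

def omega_r(t):       # assumed radial-frequency track (arbitrary units)
    return 0.40 + 0.45 * t

def omega_theta(t):   # assumed polar-frequency track
    return 1.00 + 0.20 * t

def find_resonances(t0=0.0, t1=1.0, n=20001, max_order=4):
    """Find crossings of Omega_r/Omega_theta = p/q with p, q <= max_order."""
    targets = sorted({Fraction(p, q)
                      for p in range(1, max_order + 1)
                      for q in range(1, max_order + 1)})
    dt = (t1 - t0) / (n - 1)
    hits = []
    for frac in targets:
        c = float(frac)
        prev = omega_r(t0) - c * omega_theta(t0)
        for i in range(1, n):
            t = t0 + i * dt
            cur = omega_r(t) - c * omega_theta(t)
            if prev * cur <= 0.0:      # sign change: resonance crossed
                hits.append((c, t))
                break
            prev = cur
    return hits

for ratio, t in find_resonances():
    print(f"Omega_r/Omega_theta = {ratio:.4f} crossed near t = {t:.4f}")
```

In a real inspiral the tracks come from the (GSF-driven) evolution of the orbital elements, and the size of the ``jump'' across each crossing depends on the resonant phase, as described above.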
The impact of resonances on the detectability of EMRIs with LISA was studied in Refs.~\cite{Berry:2016bit,Berry:2017cty} using an approximate model of the resonant crossing. But so far there has been no actual calculation of the orbital evolution through a resonance. Now that GSF codes for generic orbits are finally at hand, such calculations become possible, in principle. There is a vital need to perform such calculations, in order to allow orbital evolution methods to safely pass through resonances without a significant loss in the accuracy of the inspiral model. \subsubsection{Producing accurate waveform models: self-consistent evolution and second-order gravitational self-force} Possibly the most challenging outstanding obstacle to reaching the sub-radian phase accuracy required for LISA data analysis is the fact that the first-order GSF on its own is insufficient and one must also incorporate information at second perturbative order \cite{Hinderer:2008dm}. There are ongoing efforts to develop tools for computing the second order GSF \cite{Pound:2009sm,Pound:2012nt,Warburton:2013lea,Pound:2014xva,Pound:2014koa,Pound:2015wva,Wardell:2015ada,Miller:2016hjv,Pound:2017psq,Moxon:2017ozd}, but, despite significant progress, a full calculation of the second-order metric perturbation has yet to be completed. One of the challenges of the GSF problem when considered through second order is that it is naturally formulated as a self-consistent problem, whereby the coupled equations for the metric perturbation and for the particle worldline are evolved simultaneously. Indeed, this self-consistent evolution has yet to be completed even for the first-order GSF (it has, however, been done for the toy-model scalar charge case \cite{Diener:2011cc}). Even when methods are developed for computing the second order GSF, it will remain a further challenge to incorporate this information into a self-consistent evolution scheme. 
\subsubsection{Gravitational Green Function} One of the first proposals for a practical method for computing the GSF was based on writing the regularized metric perturbation in terms of a convolution, integrating the Green function for the wave equation along the worldline of the particle \cite{Poisson:Wiseman:1998,Anderson:2005gb}. It took several years for this idea to be turned into a complete calculation of the GSF, and even then the results were restricted to the toy-model problem of a scalar charge moving in Schwarzschild spacetime \cite{Casals:2013mpa,Wardell:2014kea}. Despite this deficiency, the Green function approach has produced a novel perspective on the GSF problem. In principle, the methods used for a scalar field in Schwarzschild spacetime should be applicable to the GSF problem in Kerr. The challenge is two-fold: (i) to actually adapt the methods to the Kerr problem and to the relevant wave equation, which is the linearized Einstein equation in the Lorenz gauge; and (ii) to explore whether and how one can instead work with the much simpler Teukolsky wave equation. \subsubsection{Internal-structure effects} The vast majority of GSF calculations to date have been based on the assumption that the smaller object is spherically symmetric and non-spinning. This idealization ignores the possibility that the smaller BH may be spinning, or more generally that other internal-structure effects may be relevant. Unfortunately, this picture is inadequate; certain internal-structure effects can make important contributions to the equations of motion. For example, the coupling of the small body's spin to the larger BH's spacetime curvature (commonly referred to as the Mathisson-Papapetrou force) is expected to contribute to the phase evolution of a typical EMRI at the same order as the conservative piece of the first-order GSF \cite{Ruangsri:2015cvg}. 
Furthermore, internal-structure effects are likely to be even more important in the case that the smaller body is a NS. While there has been some progress in assessing the contribution from the smaller body's spin to the motion \cite{Faye:2006gx,Han:2010tp,Ruangsri:2015cvg,Harms:2016ctx,Warburton:2017sxk,Maia:2017gxn,Maia:2017yok,Lukes-Gerakopoulos:2017vkj}, existing work has focused on flux-based calculations or on the PN regime. It remains an outstanding challenge to determine the influence of internal-structure effects on the GSF, especially at second order. \subsubsection{EMRIs in alternative theories of gravity} In all of the discussion so far, we have made one overarching assumption: that BHs behave as described by General Relativity (GR). However, with EMRIs we have the exciting prospect of not simply assuming this fact, but of testing its validity with exquisite precision. There is initial work on the self-force in the context of scalar-tensor gravity \cite{Zimmerman:2015hua}, but much more remains to be done to establish exactly what EMRIs can do to test the validity of GR (and how) when pushed to its most extreme limits. Much more on this in Chapter III (see, in particular, Sec.~\ref{sec:ringdown} therein). \subsubsection{Open tools and datasets} While there has been significant progress in developing tools for computing the GSF, much of it has been ad hoc, with individual groups developing their own private tools and codes. Now that a clear picture has emerged of exactly which are the most useful methods and tools, the community has begun to combine their efforts. This has led to the development of a number of initiatives, including (i) tabulated results for Kerr quasinormal modes and their excitation factors~\cite{Cardosoweb,Bertiweb}; (ii) open source ``kludge'' codes for generating an approximate waveform for EMRIs \cite{EKS,Chua:2017ujo}; and (iii) online repositories of self-force results \cite{BHPC}. 
It is important for such efforts to continue, so that the results of the many years of development of GSF tools and methods are available to the widest possible user base. One promising initiative in this direction is the ongoing development of the \emph{Black Hole Perturbation Toolkit} \cite{BHPT}, a free and open source set of codes and results produced by the GSF community. \section{Post-Newtonian and Post-Minkowskian Methods} \label{Sec:PN} \vspace{-3mm} {\it Contributor:} A. Le Tiec \vspace{3mm} \subsection{Background} The PN formalism is an approximation method in GR that is well suited to describe the orbital motion and the GW emission from binary systems of compact objects, in a regime where the orbital velocity is small compared to the speed of light and the gravitational fields are weak. This approximation method has played a key role in the recent detections, by the LIGO and Virgo observatories, of GWs generated by inspiralling and merging BH and NS binaries \cite{Abbott:2016blz,Abbott:2016nmj,Abbott:2017vtc,Abbott:2017oio,TheLIGOScientific:2017qsa}, by providing accurate template waveforms to search for those signals and to interpret them. Here we give a brief overview of the application of the PN approximation to binary systems of compact objects, focusing on recent developments and future prospects. See the review articles \cite{FuIt.07,Blanchet:2011wga,Schafer:2009dq,Foffa:2013qca,Rothstein:2014sra,Blanchet:2013haa,Porto:2016pyg,Schafer:2018kuf} and the textbooks \cite{Maggiore:1900zz,PoissonWill} for more information. In PN theory, relativistic corrections to the Newtonian solution are incorporated in a systematic manner into the equations of motion (EOM) and the radiation field, order by order in the small parameter $v^2/c^2 \sim Gm/(c^2r)$, where $v$ and $r$ are the typical relative orbital velocity and binary separation, $m$ is the sum of the component masses, and we used the fact that $v^2 \sim Gm/r$ for bound motion. 
(The most promising sources for current and future GW detectors are bound systems of compact objects.) Another important approximation method is the post-Minkowskian (PM) approximation, or non-linearity expansion in Newton's gravitational constant $G$, which assumes weak fields ($Gm/c^2 r \ll 1$) but unrestricted speeds ($v^2 / c^2 \lesssim 1$), and perturbs about the limit of special relativity. In fact, the construction of accurate gravitational waveforms for inspiralling compact binaries requires a combination of PN and PM techniques in order to solve two coupled problems, namely the problem of motion and that of wave generation. \hspace{-0.07cm}The two-body EOM have been derived in a PN framework using three well-developed sets of techniques in classical GR: (i) the PN iteration of the Einstein field equations in harmonic coordinates \cite{Blanchet:1998vx,Blanchet:2000ub,deAndrade:2000gf,Blanchet:2003gy,Mitchell:2007ea}, (ii) the Arnowitt-Deser-Misner (ADM) canonical Hamiltonian formalism \cite{Jaranowski:1997ky,Damour:2000kk,Damour:2001bu}, and (iii) a surface integral approach pioneered by Einstein, Infeld and Hoffmann \cite{Itoh:2001np,Itoh:2003fy,Itoh:2003fz}. By the early 2000s, each of these approaches had independently produced a computation of the EOM for binary systems of non-spinning compact objects through the 3rd PN order (3PN).\footnote{By convention, ``$n$PN'' refers to EOM terms that are $O(1/c^{2n})$ smaller than the Newtonian acceleration, or, in the radiation field, smaller by that factor relative to the standard quadrupolar field.} More recently, the application of effective field theory (EFT) methods \cite{Goldberger:2004jt}, inspired by quantum field theory, has provided an additional independent derivation of the 3PN EOM \cite{Foffa:2011ub}. All of those results were shown to be in perfect agreement.
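Schematically, and consistently with the footnote convention above, the PN-expanded EOM dress the Newtonian relative acceleration with a series of relativistic corrections,
\[
\frac{\mathrm{d}\mathbf{v}}{\mathrm{d}t} = -\frac{Gm}{r^{2}}\,\mathbf{n}\left[1 + \mathcal{O}\!\left(\frac{1}{c^{2}}\right) + \mathcal{O}\!\left(\frac{1}{c^{4}}\right) + \mathcal{O}\!\left(\frac{1}{c^{5}}\right) + \mathcal{O}\!\left(\frac{1}{c^{6}}\right) + \cdots\right],
\]
where $\mathbf{n}$ is the unit separation vector, the even powers $1/c^{2n}$ are the conservative $n$PN corrections, and the odd $1/c^{5}$ term is the leading (2.5PN) radiation-reaction contribution.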
Moreover, the 3.5PN terms---which constitute a 1PN relative correction to the leading radiation-reaction force---are also known \cite{Iyer:1993xi,Iyer:1995rn,Jaranowski:1996nv,Pati:2002ux,Konigsdorffer:2003ue,Nissanke:2004er}. At the same time, the problem of radiation (i.e., computing the field in the far/wave zone) has been extensively investigated within the multipolar PM wave generation formalism of Blanchet and Damour \cite{Blanchet:1985sp,Blanchet:1986dk,Blanchet:1998in}, using the ``direct integration of the relaxed Einstein equation'' approach of Will, Wiseman and Pati \cite{Will:1996zj,Pati:2000vt}, and more recently with EFT techniques \cite{Goldberger:2009qd,Ross:2012fc}. The application of these formalisms to non-spinning compact binaries has, so far, resulted in the computation of the GW phase up to the relative 3.5PN (resp. 3PN) order for quasi-circular (resp.~quasi-eccentric) orbits \cite{Blanchet:2001aw,Blanchet:2001ax,Blanchet:2004ek,Blanchet:2005tk,Arun:2007rg,Arun:2007sg,Arun:2009mc,Moore:2016qxz}, while amplitude corrections in the GW polarizations are known to 3PN order, and even to 3.5PN order for the quadrupolar mode \cite{Arun:2004ff,Kidder:2007gz,Kidder:2007rt,Blanchet:2008je,Faye:2012we,Mishra:2015bqa}. \subsection{Recent developments} Over the last five years or so, significant progress on PN modelling of compact binary systems has been achieved on multiple fronts, including (i) the extension of the EOM to 4PN order for non-spinning bodies (with partial results also obtained for aspects of the two-body dynamics at the 5PN order \cite{Blanchet:2010zd,Barack:2010ny,Foffa:2013gja}), (ii) the inclusion of spin effects in the binary dynamics and waveform, (iii) the comparison of several PN predictions to those from GSF theory, and (iv) the derivation of general laws controlling the mechanics of compact binaries. 
\subsubsection{4PN equations of motion for non-spinning compact-object binaries} Recently, the computation of the two-body EOM has been extended to 4PN order, by using both the canonical Hamiltonian framework in ADM-TT coordinates \cite{Jaranowski:2012eb,Jaranowski:2013lca,Damour:2014jta,Jaranowski:2015lha,Damour:2016abl,Damour:2017ced} and a Fokker Lagrangian approach in harmonic coordinates \cite{Bernard:2015njp,Bernard:2016wrg,Bernard:2017bvn,Marchand:2017pir,Bernard:2017ktp}. Partial results at 4PN order have also been obtained using EFT techniques \cite{Foffa:2012rn,Foffa:2016rgu,Porto:2017dgs}. All of those high-order PN calculations resort to a point-particle model for the (non-spinning) compact objects, and rely on dimensional regularization to treat the local ultraviolet (UV) divergences that are associated with the use of point particles. The new 4PN results have been used to inform the EOB framework \cite{Damour:2015isa}, a semi-analytic model of the binary dynamics and wave emission (see Sec.\ \ref{Sec:EOB} below). The occurrence at the 4PN order of infrared (IR) divergences of spatial integrals led to the introduction of several \textit{ambiguity parameters}: one in the ADM Hamiltonian approach and two in the Fokker Lagrangian approach. One of those IR ambiguity parameters was initially fixed by requiring agreement with an analytical GSF calculation \cite{Bini:2013zaa} of the so-called Detweiler redshift along circular orbits \cite{Detweiler:2008ft,Sago:2008id}. Recently, however, Marchand et al. \cite{Marchand:2017pir} gave the first complete (i.e., ambiguity-free) derivation of the 4PN EOM. The last remaining ambiguity parameter was determined from first principles, by resorting to a matching between the near-zone and far-zone fields, together with a computation of the conservative 4PN tail effect in $d$ dimensions, making it possible to treat both UV and IR divergences using dimensional regularization.
Another interesting (and related) feature of the binary dynamics at the 4PN order is that it becomes \textit{non-local} in time \cite{Damour:2014jta,Bernard:2015njp}, because of the occurrence of a GW tail effect at that order: gravitational radiation that gets scattered off the background spacetime curvature backreacts on the orbital motion at later times, such that the binary's dynamics at a given moment in time depends on its entire past history \cite{Blanchet:1987wq,Foffa:2011np,Galley:2015kus}. \subsubsection{Spin effects in the binary dynamics and gravitational waveform} \hspace{-0.05cm}Since stellar-mass and/or supermassive BHs may carry significant spins \cite{Reynolds:2013rva,Miller:2014aaa}, much effort has recently been devoted to including spin effects in PN template waveforms. In particular, spin-orbit coupling terms linear in either of the two spins have been computed up to the next-to-next-to-leading order, corresponding to 3.5PN order in the EOM, using the ADM Hamiltonian framework \cite{Hartung:2011te,Hartung:2013dza}, the PN iteration of the Einstein field equations in harmonic coordinates \cite{Marsat:2012fn,Bohe:2012mr}, and EFT techniques \cite{Levi:2015uxa}. Spin-spin coupling terms proportional to the product of the two spins have also been computed to the next-to-next-to-leading order, corresponding to 4PN order in the EOM, using the ADM Hamiltonian and EFT formalisms \cite{Hartung:2011ea,Levi:2011eq,Levi:2014sba,Levi:2015ixa,Levi:2016ofk}. The leading-order 3.5PN cubic-in-spin and 4PN quartic-in-spin contributions to the binary dynamics are also known for generic compact bodies \cite{Hergt:2007ha,Hergt:2008jn,Marsat:2014xea,Levi:2014gsa,Vaidya:2014kza}, as well as all higher-order-in-spin contributions for BBHs (to leading PN order) \cite{Vines:2016qwa}. All these results are summarized in Fig.~\ref{fig:PNtable}.
2PN BH binary spin precession was recently revisited using multi-timescale methods~\cite{Gerosa:2015tea,Kesden:2014sla}, uncovering new phenomenology such as precessional instabilities~\cite{Gerosa:2015hba} and nutational resonances~\cite{Zhao:2017tro}. Spin-related effects on the far-zone field have also been computed to high orders, for compact binaries on quasi-circular orbits. To linear order in the spins, those effects are known up to the relative 4PN order in the GW energy flux and phasing \cite{Bohe:2013cla,Marsat:2013caa}, and to 2PN in the wave polarizations~\cite{Buonanno:2012rv,Mishra:2016whh}. At quadratic order in the spins, the contributions to the GW energy flux and phasing have been computed to 3PN order \cite{Porto:2010zg,Bohe:2015ana}, and partial results were derived for amplitude corrections to 2.5PN order \cite{Porto:2012as}. The leading 3.5PN cubic-in-spin effects in the GW energy flux and phasing are known as well \cite{Marsat:2014xea}. \begin{figure}[h!] \includegraphics[width=0.7\linewidth]{figPNtable.pdf} \caption{Contributions to the two-body Hamiltonian in the PN spin expansion, for arbitrary-mass-ratio binaries with spin-induced multipole moments. Contributions in red are yet to be calculated. LO stands for ``leading order'', NLO for ``next-to-leading order'', and so on. SO stands for ``spin-orbit''. Figure from Ref.~\cite{Vines:2016qwa}.} \label{fig:PNtable} \end{figure} Finally, some recent works have uncovered remarkable relationships between the PN \cite{Vines:2016qwa} and PM \cite{Vines:2017hyw} dynamics of a binary system of spinning BHs with an arbitrary mass ratio on the one hand, and that of a test BH in a Kerr background spacetime on the other hand. Those results are especially relevant for the ongoing development of EOB models for spinning BH binaries (see Sec.\ \ref{Sec:EOB}), and in fact give new insight into the energy map at the core of such models.
\subsubsection{Comparisons to perturbative gravitational self-force calculations} The GWs generated by a coalescing compact binary system are not the only observable of interest. As we have described in Sec.~\ref{Sec:perturbations}, over recent years, several \textit{conservative} effects on the orbital dynamics of compact-object binaries moving along quasi-circular orbits have been used to compare the predictions of the PN approximation to those of the GSF framework, by making use of gauge-invariant quantities such as (i) the Detweiler redshift \cite{Detweiler:2008ft,Blanchet:2009sd,Blanchet:2010zd,Damour:2009sm,Blanchet:2013txa,Blanchet:2014bza}, (ii) the relativistic periastron advance \cite{Barack:2010ny,LeTiec:2011bk,Tiec:2013twa,vandeMeent:2016hel}, (iii) the geodetic spin precession frequency \cite{Dolan:2013roa}, and (iv) various tidal invariants \cite{Dolan:2014pja,Nolan:2015vpa}, all computed as functions of the circular-orbit frequency of the binary. Some of these comparisons were extended to generic bound (eccentric) orbits~\cite{Barack:2011ed,Akcay:2015pza,Akcay:2016dku}. All of those comparisons showed perfect agreement in the common domain of validity of the two approximation schemes, thus providing crucial tests for both methods. Building on recent progress on the second-order GSF problem \cite{Pound:2012nt,Pound:2012dk,Pound:2014xva,Pound:2015fma,Miller:2016hjv,Pound:2017psq}, we expect such comparisons to be extended to second order in the mass ratio, e.g. by using the redshift variable \cite{Pound:2014koa}. 
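For concreteness, the redshift invariant of item (i) is the rate of the particle's proper time with respect to the coordinate time associated with the helical symmetry, in the conventions of \cite{Detweiler:2008ft},
\[
z \equiv \frac{\mathrm{d}\tau}{\mathrm{d}t} = \frac{1}{u^{t}},
\]
viewed as a function of the invariant circular-orbit frequency $\Omega$; the $\mathcal{O}(m_1/m_2)$ correction to $z(\Omega)$ is what both the PN and GSF calculations deliver, which is what makes the comparison gauge-invariant.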
Independently, the BH perturbative techniques of Mano, Suzuki and Takasugi \cite{Mano:1996vt,Mano:1996gn} have been applied to compute analytically, up to very high orders, the PN expansions of the GSF contributions to the redshift for circular \cite{Bini:2013rfa,Bini:2014nfa,Bini:2015bla,Kavanagh:2016idg} and eccentric \cite{Bini:2015xua,Bini:2015bfb,Hopper:2015icj,Bini:2016qtx,Bini:2016dvs} orbits, the geodetic spin precession frequency \cite{Bini:2014ica,Bini:2015mza,Kavanagh:2017wot}, and various tidal invariants \cite{Bini:2014zxa,Shah:2015nva,Kavanagh:2015lva}. Additionally, using similar techniques, some of those quantities have been computed numerically, with very high accuracy, allowing the extraction of the exact, analytical values of many PN coefficients \cite{Shah:2013uya,Johnson-McDaniel:2015vva,vandeMeent:2015lxa}. \subsubsection{First law of compact binary mechanics} The conservative dynamics of a binary system of compact objects has a fundamental property now known as the \textit{first law of binary mechanics} \cite{LeTiec:2011ab}. Remarkably, this variational formula can be used to relate local physical quantities that characterize each body (e.g. the redshift) to global quantities that characterize the binary system (e.g. the binding energy). For point-particle binaries moving along circular orbits, this law is a particular case of a more general result, valid for systems of BHs and extended matter sources \cite{Friedman:2001pf}. Using the ADM Hamiltonian formalism, the first law of \cite{LeTiec:2011ab} was generalized to spinning point particles, for spins (anti-)aligned with the orbital angular momentum~\cite{Blanchet:2012at}, and to non-spinning binaries moving along generic bound (eccentric) orbits \cite{Tiec:2015cxa}. The derivation of the first law for eccentric motion was then extended to account for the non-locality in time of the orbital dynamics due to the occurrence at the 4PN order of a GW tail effect~\cite{Blanchet:2017rcn}.
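For non-spinning point particles on circular orbits, the first law takes the compact form
\[
\delta M - \Omega\, \delta L = z_1\, \delta m_1 + z_2\, \delta m_2 ,
\]
where $M$ and $L$ are the ADM mass and angular momentum of the binary, $\Omega$ is the circular-orbit frequency, and $z_1$, $z_2$ are the redshifts of the two particles \cite{LeTiec:2011ab}.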
These various laws were derived on general grounds, assuming only that the conservative dynamics of the binary derives from an autonomous canonical Hamiltonian. (First-law-type relationships have also been derived in the context of linear BH perturbation theory and the GSF framework \cite{Gralla:2012dm,Tiec:2013kua,Fujita:2016igj}.) Moreover, they have been checked to hold true up to 3PN order, and even up to 5PN order for some logarithmic terms. So far the first laws have been applied to (i) determine the numerical value of the aforementioned ambiguity parameter appearing in derivations of the 4PN two-body EOM \cite{Bini:2013zaa}, (ii) calculate the exact linear-in-the-mass-ratio contributions to the binary's binding energy and angular momentum for circular motion \cite{LeTiec:2011dp}, (iii) compute the shift in the frequencies of the Schwarzschild and Kerr innermost stable circular orbits induced by the (conservative) GSF~\cite{Barack:2009ey,Damour:2009sm,Barack:2010tm,LeTiec:2011dp,Akcay:2012ea,Isoyama:2014mja,vandeMeent:2016hel}, (iv) test the weak cosmic censorship conjecture in a scenario where a massive particle subject to the GSF falls into a nonrotating BH along unbound orbits \cite{Colleoni:2015afa,Colleoni:2015ena}, (v) calibrate the effective potentials that enter the EOB model for circular~\cite{Barausse:2011dq,Akcay:2012ea} and mildly eccentric orbits~\cite{Akcay:2015pjz,Bini:2015bfb,Bini:2016qtx}, and spin-orbit couplings for spinning binaries \cite{Bini:2015xua}, and (vi) define the analogue of the redshift of a particle for BHs in NR simulations, thus allowing further comparisons to PN and GSF calculations \cite{Zimmerman:2016ajr,LeTiec:2017ebm}. \subsection{Prospects} On the theoretical side, an important goal is to extend the knowledge of the GW phase to the relative 4.5PN accuracy, at least for non-spinning binaries on quasi-circular orbits. (Partial results for some specific tail-related effects were recently derived \cite{Marchand:2016vox}.) 
This is essential in order to keep model systematics as a sub-dominant source of error when processing observed GW signals \cite{Abbott:2016wiq}. Accomplishing that will require, in particular, the calculation of the mass-type quadrupole moment of the binary to 4PN order and the current-type quadrupole and mass-type octupole moments to 3PN order. Moreover, some spin contributions to the waveform---both in the phasing and amplitude corrections---still have to be computed to reach the 4PN level, especially at quadratic order in the spins. Most of the PN results reviewed here have been established for circularized binaries, and often for spins aligned or anti-aligned with the orbital angular momentum. It is important to extend this large body of work to generic, eccentric, precessing systems. Progress on the two-body scattering problem would also be desirable \cite{Damour:2014afa,Bini:2018ywr,Bini:2017wfr}. Additionally, much of what has been achieved in the context of GR could be done as well for well-motivated alternative theories of gravitation, such as scalar-tensor gravity (e.g. \cite{Mirshekari:2013vb,Lang:2013fna,Lang:2014osa,Sennett:2016klh,Bernard:2018hta}) or quadratic gravity \cite{Yagi:2011xp,Yagi:2013mbt}. The first law of binary mechanics reviewed above could be extended to generic, precessing spinning systems, and the effects of higher-order spin-induced multipoles should be investigated. Recent work on the PM approximation applied to the gravitational dynamics of compact binaries has given new insight into the EOB model \cite{Damour:2016gwp,Bini:2017xzy,Vines:2017hyw,Damour:2017zjx}, and in particular into the energy map therein. These two lines of research may improve our physical understanding of the general relativistic two-body problem.
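The premium placed on high-order phase accuracy reflects the large number of GW cycles a light binary accumulates in band. As a rough illustration (a sketch using only the leading-order Newtonian quadrupole formula $N=\frac{1}{32\pi^{8/3}}(G\mathcal{M}/c^{3})^{-5/3}(f_{\min}^{-5/3}-f_{\max}^{-5/3})$; the component masses and band edges below are illustrative, not taken from any reference in the text):

```python
import math

G = 6.674e-11     # gravitational constant [m^3 kg^-1 s^-2]
C = 2.998e8       # speed of light [m/s]
M_SUN = 1.989e30  # solar mass [kg]

def chirp_mass(m1, m2):
    """Chirp mass in solar masses: (m1 m2)^(3/5) / (m1 + m2)^(1/5)."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def n_cycles(m1, m2, f_min, f_max):
    """Leading-order (Newtonian quadrupole) number of GW cycles
    accumulated between frequencies f_min and f_max [Hz]."""
    tau = G * chirp_mass(m1, m2) * M_SUN / C**3  # chirp mass in seconds
    return (tau ** (-5.0 / 3.0) / (32.0 * math.pi ** (8.0 / 3.0))
            * (f_min ** (-5.0 / 3.0) - f_max ** (-5.0 / 3.0)))

# A light (BNS-like) system accumulates thousands of in-band cycles,
# a heavy BBH only tens: high-order PN phasing matters most for the former.
n_bns = n_cycles(1.4, 1.4, 20.0, 1000.0)
n_bbh = n_cycles(36.0, 29.0, 20.0, 300.0)
print(round(chirp_mass(1.4, 1.4), 2), round(n_bns), round(n_bbh))
```

A phase error of even a fraction of a cycle over such long signals is measurable, which is why sub-dominant PN terms matter.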
On the observational side, future GW detections from inspiralling compact-object binaries will allow testing GR in the strong-field/radiative regime, by constraining possible deviations of the various PN coefficients that appear in the expression for the phase from their GR values. This, in particular, can be used to test some important nonlinear features of GR such as the GW tail, tail-of-tail and nonlinear memory effects. Indeed, the first detections of GWs from inspiralling BH binaries have already been used to set bounds on these PN coefficients, including an $O(10\%)$ constraint on the leading tail effect at the 1.5PN order~\cite{TheLIGOScientific:2016pea}; see Fig.~\ref{fig:test_GR} (or Ref.~\cite{Abbott:2017oio} for a more up-to-date version thereof). More detections with a wider network of increasingly sensitive interferometric GW detectors will of course improve those bounds. \begin{figure}[h!] \includegraphics[width=\linewidth]{figGRtest.pdf} \caption{Two GW detections of inspiralling BH binaries were used to set bounds on possible deviations of various PN coefficients in the GW phase from their GR values. Figure from Ref.~\cite{TheLIGOScientific:2016pea}.} \label{fig:test_GR} \end{figure} Finally, following the official selection of the LISA mission by the European Space Agency (ESA), with a launch planned for 2034, we foresee an increased level of activity in source modelling of binary systems of MBHs and EMRIs, two promising classes of sources for a mHz GW antenna in space. This will motivate more work at the interface between the PN approximation and GSF theory. \section{Numerical Relativity and the Astrophysics of Black Hole Binaries }\label{Sec:NR} \vspace{-3mm} {\it Contributor:} P.
Schmidt \vspace{3mm} The year 2005 marked a remarkable breakthrough: the first successful numerical simulation of---and the extraction of the GWs from---an inspiralling pair of BHs through their merger and final ringdown~\cite{Pretorius:2005gq, Campanelli:2005dd, Baker:2005vv} (see e.g. \cite{Sperhake:2014wpa} for a review). NR provides us with accurate gravitational waveforms as predicted by GR. BBHs cover an eight-dimensional parameter space spanned by the mass ratio $q=m_1/m_2$, the spin angular momenta $\mathbf{S}_i$ and the eccentricity $e$. Simulations are computationally extremely expensive; thus the large BBH parameter space is still sparsely sampled. Nevertheless, NR waveforms already play a crucial part in the construction and verification of semi-analytic waveform models used in GW searches, which facilitated the first observations of GWs from BBH mergers~\cite{Abbott:2016blz,Abbott:2016nmj,TheLIGOScientific:2016pea,Abbott:2017vtc,Abbott:2017oio}. Furthermore, they play a key role in the estimation of source properties and in facilitating important tests of GR in its most extreme dynamical regime. Since the initial breakthrough, NR has made significant progress: from the first simulations of equal-mass non-spinning BBHs spanning only the last few orbits~\cite{Pretorius:2005gq,Campanelli:2005dd, Baker:2005vv}, to a realm of simulations exploring aligned-spin~\cite{Campanelli:2006uy, Hannam:2007wf, Dain:2008ck} as well as precessing quasi-circular binaries~\cite{Campanelli:2006fy, Campanelli:2008nk, Schmidt:2010it}, eccentric-orbit binaries~\cite{Sperhake:2007gu, Hinder:2007qu}, and evolutions long enough to reach into the early-inspiral regime where they can be matched onto PN models~\cite{Szilagyi:2015rwa}. Today, several codes are capable of stably evolving BBHs and extracting their GW signal.
They can roughly be divided into two categories: finite-differencing codes including BAM~\cite{Bruegmann:2006at}, the Einstein Toolkit~\cite{Loffler:2011ay}, LazEv~\cite{Campanelli:2005dd, Campanelli:2006gf, Campanelli:2006uy, Campanelli:2007ew}, MAYA~\cite{Vaishnav:2007nm}, LEAN~\cite{Sperhake:2006cy} as well as the codes described in Refs.~\cite{Baker:2006yw, Pretorius:2004jg, Pretorius:2005gq}; and (pseudo-)spectral codes such as the Spectral Einstein Code (\texttt{SpEC})~\cite{Scheel:2006gg}. Other evolution codes currently under development include~\cite{Hilditch:2015aba} and~\cite{Clough:2015sqa}. To date, these codes have together produced several thousand BBH simulations~\cite{Mroue:2013xna, Chu:2015kft, Jani:2016wkt, Healy:2017psd}. \subsection{Current status} \label{s:NR} {\it Excision and puncture.} In BBH simulations one numerically solves the vacuum Einstein equations with initial conditions that approximate a pair of separate BHs at some initial moment. An obvious complication is the presence of spacetime singularities inside the BHs, where the solution diverges. There are two approaches to this problem. The first is {\it excision}~\cite{0264-9381-4-5-013}, whereby a region around the singularity is excised (removed) from the numerical domain. As no information can propagate outwards from the interior of the BH, the physical content of the numerical solution outside the BHs is unaffected. Since the excision boundary is spacelike (not timelike), one cannot and does not specify boundary conditions on it. Instead, the main technical challenge lies in ensuring that the excision boundary remains spacelike. The risk, for example, is that part of the boundary may become timelike if numerical noise is not properly controlled~\cite{Cook:2004kt,Szilagyi:2014fna,Kidder:2000yq}. It must also be ensured that non-physical gauge modes do not lead to numerical instabilities. Excision is used in \texttt{SpEC}, for example.
The second common way to deal with the BH singularities is to choose singularity-avoiding coordinates. This is achieved by representing BHs as compactified topological wormholes~\cite{Brandt:1997tf} or infinitely long cylinders (``trumpets'') \cite{Hannam:2008sg}, known as {\it puncture} initial data. Specific gauge conditions allow the punctures to move across the numerical grid, giving this approach the name moving punctures~\cite{Campanelli:2005dd, Baker:2005vv, vanMeter:2006vi, Gundlach:2006tw, Hannam:2008sg}. {\it Initial Data.} No exact solutions are known, in general, for the BBH metric on the initial spatial surface, so one resorts to approximate initial conditions. Two types of initial data are commonly used: conformally flat and conformally curved. Most simulations that incorporate moving punctures use conformally flat initial data. Under this assumption, three of the four constraint equations (themselves a subset of the full Einstein field equations) are given analytically in terms of the Bowen-York solutions~\cite{Bowen:1980yu}. The maximum dimensionless spin attainable in this approach is $a/m=0.93$, known as the Bowen-York limit~\cite{Dain:2002ee}. In order to go beyond this limit, conformally curved initial data have to be constructed. For codes that use excision, these can be obtained by solving the extended conformal thin sandwich (CTS) equations~\cite{York:1998hy} with quasi-equilibrium boundary conditions~\cite{Gourgoulhon:2001ec, Grandclement:2001ed, Cook:2001wi,Pfeiffer:2002iy}. The initial spatial metric is proportional to a superposition of the metrics of two boosted Kerr-Schild BHs~\cite{Lovelace:2008tw}. More recently, the first non-conformally flat initial data within the moving punctures framework have been constructed~\cite{Ruchlin:2014zva, Zlochower:2017bbg} by superposing the metrics and extrinsic curvatures of two Lorentz-boosted, conformally Kerr BHs.
\\ {\it Evolution Systems.} The successful evolution of a BBH spacetime further requires a numerically stable formulation of the Einstein field equations and appropriate gauge choices. Long-term stable evolutions today are most commonly performed with either a variant of the generalised harmonic~\cite{Pretorius:2005gq, Friedrich:2000qv, Lindblom:2005qh} or the Baumgarte-Shapiro-Shibata-Nakamura (BSSN) formulation~\cite{PhysRevD.52.5428, Baumgarte:1998te}. Another formulation, Z4, combines constraint-preserving boundary conditions with an evolution system very close to BSSN~\cite{Bona:2003fj, Bona:2004yp, Gundlach:2005eh,Hilditch:2012fp}. More advanced Z4-type formulations, which are conformal and traceless, were developed in Refs.~\cite{Bernuzzi:2009ex,Alic:2011gg,Dumbser:2017okk}.\\ {\it Gravitational Wave Extraction.} The GW signal emitted through the inspiral, merger and ringdown is usually extracted from the Newman-Penrose curvature scalar $\Psi_4$~\cite{Newman:1961qr, Stewart:1990aa} associated with the computed metric. The extraction of the signal is typically performed on spheres of constant coordinate radius some distance from the binary, followed by extrapolation to infinity. The method of Cauchy-Characteristic Extraction (CCE)~\cite{Bishop:1996gt, Winicour:1999ba} allows us to extract the observable GW signal directly at future null infinity by matching the Cauchy evolution onto a characteristic evolution that extends the simulation to null infinity. In CCE, the gravitational waveform is most naturally extracted from the Bondi news function~\cite{Bondi21, Sachs103}. \subsection{Challenges} \label{s:challenges} {\it High spins.} Numerical simulations of close-to-maximally spinning BHs are still challenging to carry out due to difficulties in the construction of initial data as well as increasingly demanding accuracy requirements during the evolutions.
This particularly affects binary configurations of unequal masses and arbitrary spin orientation, despite significant developments made in the past five years~\cite{Mroue:2013xna, Chu:2015kft, Jani:2016wkt, Healy:2017psd, Husa:2015iqa}. While spins up to 0.994 have been evolved stably~\cite{Scheel:2014ina}, extensive work is still underway to reach the proposed Novikov-Thorne limit of 0.998~\cite{Thorne:1974ve} and beyond. \\ {\it High mass ratios.} The highest-mass-ratio fully general-relativistic BBH simulation available to date has $q=100$~\cite{Lousto:2010ut}. (However, this simulation follows a relatively small number of orbital cycles.) For spinning BHs, simulations of mass ratio $q=18$ have been performed~\cite{Husa:2015iqa}. Both of these have been obtained in the moving punctures approach. For spectral methods, mass ratios higher than $q\sim 10$ are numerically very challenging. Generally, higher mass ratios are more computationally expensive as the disparate lengthscales demand higher spatial resolution, and because the number of orbital cycles increases in proportion to $q$. The Courant condition poses another problem in explicit evolution schemes, as the fine spatial resolution forces a correspondingly small time step, which becomes increasingly challenging for high mass ratios. Implicit schemes could provide a potential solution to this~\cite{Lau:2011we}, but have not yet been applied to the fully relativistic binary problem. Improvements in numerical technique, and perhaps a synergy with the perturbative methods reviewed in Sec.\ \ref{Sec:perturbations}, will be crucial for overcoming this problem in the hope of extending the reach of NR towards the extreme-mass-ratio regime. \\ {\it Long simulations.} It is crucial for simulations to track the binary evolution from the early inspiral, where it can be matched to a PN model, all the way down to the final merger and ringdown.
However, such long evolutions are computationally very expensive, requiring many months of CPU time with existing codes. Note that the run time is not simply linear in the evolution time: a longer run would usually also require a larger spatial domain in order to keep spatial boundaries out of causal contact. To date, only one complete numerical simulation reaching well into the PN regime has been produced, using a modified version of \texttt{SpEC}~\cite{Szilagyi:2015rwa}. Significant modifications to existing codes would need to be made in order to be able to generate long simulations on a production scale. \\ {\it Waveform accuracy.} As the sensitivity of GW detectors increases, further work will be required to improve models and assess their systematic errors \cite{Abbott:2016wiq}. Already, for large precessing spins, or large mass ratios, or non-negligible eccentricities, systematic modelling errors limit the accuracy with which LIGO and Virgo can extract source parameters (see~\cite{Hannam:2009rd} for a more detailed discussion). This is due, in part, to relatively large errors in current models of high multipole-mode contributions, which we expect in the coming years to become resolvable in detected signals. \\ {\it Beyond General Relativity.} GW observations from merging compact binaries allow us to probe the strong-field regime and test GR (cf.\ Chapter III). While theory-agnostic ways to test deviations from GR are commonly used, testing a selection of well-motivated alternative theories through direct waveform comparison is desirable. First steps have been taken to simulate BBHs in alternative theories of gravity that admit BH solutions~\cite{Berti:2013gfa, Okounkova:2017yby}. However, many issues remain to be addressed, not least of which is the fact that some of these theories may not even possess a well-posed initial value formulation (see~\cite{Papallo:2017qvl, Delsate:2014hba} for examples).
\subsection{Numerical Relativity and GW observations} \label{s:obs} NR plays an important role in GW astrophysics and data analysis. The semi-analytical waveform models employed to search for BBHs (see Sec.\ \ref{Sec:EOB} below) model the complete radiative evolution from the early inspiral, through the merger and to the final ringdown stage, with the later stages calibrated to BBH simulations~\cite{Khan:2015jqa, Bohe:2016gbl}. These models and more sophisticated ones incorporating more physical effects, for example precession~\cite{Apostolatos:1994mx, Kidder:1995zr}, are used to determine the fundamental properties of the GW source, i.e. its masses and spin angular momenta as well as extrinsic quantities such as the orbital orientation relative to the observatories~\cite{Hannam:2013oca, Pan:2013rra}. Alternatively, pure NR waveforms may be used directly~\cite{Abbott:2016apu,Lange:2017wki} or by means of surrogate models~\cite{Blackman:2015pia, Blackman:2017dfb, Blackman:2017pcm}. But due to the computational cost of simulations, these have only been attempted so far on a very restricted portion of the parameter space. Despite this restriction, pure NR surrogate models have the advantage of incorporating more physical effects that may be limited or entirely neglected in currently available semi-analytic models. The ringdown phase after the merger may be described by perturbation theory (see e.g.~\cite{Kokkotas:1999bd} for a review), but the amplitudes of the excited quasi-normal modes can only be obtained from numerical simulations. These are of particular interest as measuring the amplitudes of individual quasinormal modes would make it possible to map the final state of the merger to the properties of the progenitor BHs~\cite{Kamaretsos:2011um, Kamaretsos:2012bs} and to test the BH nature of the source~\cite{Berti:2005ys} (see also Sections~\ref{sec:nohair} and \ref{sec:hairyBHs}).
In order to estimate the mass and spin angular momentum of the remnant BH, fits to NR simulations are essential~\cite{Barausse:2009uz, Hemberger:2013hsa, Hofmann:2016yih, Healy:2016lce, Jimenez-Forteza:2016oae,Buonanno:2007sv,Rezzolla:2007rz,Lousto:2009mf,Bernuzzi:2009ex,Alic:2011gg,Dumbser:2017okk}. Independent measurements of the binary properties from the inspiral portion of the GW signal and from the later stages of the binary evolution using fits from NR make it possible, when combined, to test the predictions of GR~\cite{Ghosh:2016qgn, TheLIGOScientific:2016src}. While the phenomenological fit formulae have seen much improvement in recent years, modelling the final state of precessing BH binaries from numerical simulations still remains an open challenge. Generically, when BHs merge, anisotropic emission of GWs leads to the build-up of a net linear momentum. Due to the conservation of momentum, once the GW emission subsides, the remnant BH recoils (a ``kick''). While the recoil builds up during the entire binary evolution, it is largest during the non-linear merger phase. Kick velocities can be as high as several thousand ${\rm km}/{\rm s}$, with astrophysical consequences: a BH whose recoil velocity is larger than the escape velocity of its galaxy may leave its host. Numerical simulations are necessary to predict the recoil velocities~\cite{Gonzalez:2006md, Gonzalez:2007hi, Brugmann:2007zj, Lousto:2011kp} (a convenient surrogate model for these velocities was constructed recently in \cite{Gerosa:2018qay}). It has been suggested that it may be possible to directly measure the recoil speed from the GW signal by observing the induced differential Doppler shift throughout the inspiral, merger and ringdown~\cite{Gerosa:2016vip}. Numerical simulations of the strong-field regime are also crucial for exploring other spin-related phenomena such as ``spin-flips''~\cite{Lousto:2015uwa} caused by spin-spin coupling, whose signatures may be observable.
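To put the recoil velocities in context, the sketch below compares a merger kick to the escape speed of a crude point-mass galaxy model, $v_{\rm esc}=\sqrt{2GM/R}$; the host-galaxy masses and radii are assumed for illustration only and are not taken from the references above.

```python
import math

G = 6.674e-11     # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30  # solar mass [kg]
PC = 3.086e16     # parsec [m]

def escape_velocity_kms(mass_msun, radius_pc):
    """v_esc = sqrt(2 G M / R) for a crude point-mass galaxy model, in km/s."""
    return math.sqrt(2.0 * G * mass_msun * M_SUN / (radius_pc * PC)) / 1.0e3

# Assumed, purely illustrative host-galaxy parameters:
v_dwarf = escape_velocity_kms(1.0e9, 1.0e3)        # dwarf: 1e9 M_sun within 1 kpc
v_elliptical = escape_velocity_kms(1.0e12, 1.0e4)  # elliptical: 1e12 M_sun within 10 kpc

kick = 3000.0  # km/s, near the upper end of simulated recoil velocities
print(kick > v_dwarf, kick > v_elliptical)  # a superkick exceeds both escape speeds here
```

Even with these rough numbers, a kick of a few thousand km/s comfortably exceeds the escape speed of a small host, which is the sense in which a recoiling remnant "may leave its host".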
Understanding the spin evolution and correctly modelling spin effects is crucial for mapping out the spin distribution of astrophysical BHs from GW observations. Waveform models as well as fitting formulae for remnant properties are prone to systematic modelling errors. For GW observations with low SNR, the statistical uncertainty dominates over the systematic modelling error in the parameter measurement accuracy. Improved sensitivities for current and future GW observatories (including, especially, LISA \cite{Berti:2006ew,Cutler:2007mi}) will allow for high SNR observations reaching into the regime where systematic errors are accuracy limiting. This includes strongly inclined systems, higher-order modes, eccentricity, precession and kicks, all of which can be modelled more accurately through the inclusion of results from NR (see~\cite{Abbott:2016wiq} for a detailed discussion in the context of the first BBH observation GW150914). \section{Numerical relativity in fundamental physics }\label{Sec:HE} \vspace{-3mm} {\it Contributor:} U. Sperhake \vspace{3mm} The standard model of particle physics and Einstein's theory of GR provide us with an exquisite theoretical framework to understand much of what we observe in the Universe. From high-energy collisions at particle colliders to planetary motion, the GW symphony of NS and BH binaries and the cosmological evolution of the Universe at large, the theoretical models give us remarkably accurate descriptions. And yet, there are gaps in this picture that prompt us to believe that something is wrong or incomplete. Galactic rotation curves, strong gravitational lensing effects, X-ray observations of galactic halos and the cosmic microwave background cannot be explained in terms of the expected gravitational effects of the visible matter \cite{Bertone:2004pz}. 
Either we are prepared to accept the need to modify the laws of gravity, or there exists a form of {\em dark matter} (DM) that at present we cannot explain satisfactorily within the SM (or both). Different DM candidates and their status are reviewed in Section~\ref{sec:BH}. Likewise, the accelerated expansion of the Universe \cite{Kowalski:2008ez} calls for an exotic form of matter dubbed {\em dark energy} or (mathematically equivalently) the introduction of a cosmological constant with a value many orders of magnitude below the zero-point energy estimated by quantum field theory---the {\em cosmological constant problem}. Further chinks in the armor of the SM+GR model of the universe include the {\em hierarchy problem}, i.e.~the extreme weakness of gravity relative to the other forces, and the seeming irreconcilability of GR with quantum theory. Clearly, gravity is at the center of some of the most profound contemporary puzzles in physics. But it has now also given us a new observational handle on these puzzles, in the form of GWs \cite{Abbott:2016blz}. Furthermore, the aforementioned 2005 breakthrough in NR \cite{Pretorius:2005gq,Baker:2005vv,Campanelli:2005dd} has given us the tools needed to systematically explore the non-linear strong-field regime of gravity. For much of its history, NR was motivated by the modeling of astrophysical sources of GWs, as we have described in Sec.~\ref{Sec:NR}. As early as 1992, however, Choptuik's milestone discovery of critical behaviour in gravitational collapse \cite{Choptuik:1992jv} demonstrated the enormous potential of NR as a tool for exploring a much wider range of gravitational phenomena. In this section we review the discoveries made in this field and highlight the key challenges and goals for future work. \subsection{Particle laboratories in outer space} DM, by its very definition, generates little if any electromagnetic radiation; rather, it interacts with its environment through gravity.
Many DM candidates have been suggested (see also Section~\ref{Sec:DM}), ranging from primordial BH clouds to weakly interacting massive particles (WIMPs) and ultralight bosonic fields \cite{Bertone:2004pz,Feng:2010gw,Khlopov:2017vcj}. The latter are a particularly attractive candidate in the context of BH and GW physics due to their specific interaction with BHs. Ultralight fields, also referred to as weakly interacting slim particles (WISPs), typically have mass parameters orders of magnitude below an electron volt and arise in extensions of the SM of particle physics or extra dimensions in string theory. These include {\em axions} or axion-like particles, dark photons, non-topological solitons (so-called {\em Q-balls}) and condensations of bosonic states \cite{Bertone:2004pz,Arvanitaki:2010sy,Ackerman:mha,Klasen:2015uma}. For reference, we note that a mass $10^{-10}\,{\rm eV}$ ($10^{-20}\,{\rm eV}$) corresponds to a Compton wavelength of $\mathcal{O}(1)\,{\rm km}$ ($\mathcal{O}(10^2)\,{\rm AU}$) which covers the range of Schwarzschild radii of astrophysical BHs. At first glance, one might think the interaction between such fundamental fields and BHs is simple; stationary states are constrained by the no-hair theorems and the field either falls into the BH or disperses away. In practice, however, the situation is more complex---and more exciting. NR simulations have illustrated the existence of long-lived, nearly periodic states of single, massive scalar or Proca (i.e.~vector) fields around BHs \cite{Witek:2012tr,Okawa:2014nda,Zilhao:2015tya}. Intriguingly, these configurations are able to extract rotational energy from spinning BHs through {\em superradiance} \cite{Zeldovich:1971,Zeldovich1972,Misner:1972kx,Brito:2015oca}, an effect akin to the Penrose process \cite{Penrose:1971uk,Brito:2015oca}. Further details on this process are provided in Section~\ref{sec:superradiance}. 
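The mass--wavelength correspondence quoted above is a direct unit conversion; a minimal Python check using the reduced Compton wavelength $\lambda=\hbar/(mc)$ (the constants are standard CODATA values):

```python
HBARC_EV_M = 1.9732705e-7   # hbar*c in eV*m
AU_M = 1.495979e11          # astronomical unit in m

# Reduced Compton wavelength: lambda = hbar/(m c) = hbar*c / (m c^2)
def compton_wavelength_m(mass_ev):
    return HBARC_EV_M / mass_ev

lam_km = compton_wavelength_m(1e-10) / 1e3   # O(1) km, as quoted
lam_au = compton_wavelength_m(1e-20) / AU_M  # O(10^2) AU, as quoted
print(f"{lam_km:.1f} km, {lam_au:.0f} AU")
```

These two length scales bracket the Schwarzschild radii of stellar-mass and supermassive BHs, which is why this mass window is the interesting one for superradiance.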
In the presence of a confining mechanism that prevents the field from escaping to infinity, this may even lead to a runaway instability dubbed the {\em BH bomb} \cite{Press:1972zz,Cardoso:2004nk}. Another peculiar consequence arising in the same context is the possibility of floating orbits where dissipation of energy through GW emission is compensated by energy gain through superradiance~\cite{Cardoso:2011xi,Fujita:2016yav}. Naturally, non-linear effects will limit the growth of the field amplitude or the lifetime of floating orbits, and recent years have seen the first numerical studies to explore the role of non-linearities in superradiance. These simulations have shown that massive, real scalar fields around BHs can become trapped inside a potential barrier outside the horizon, form a bound state and may grow due to superradiance \cite{Witek:2012tr}. Furthermore, beating phenomena result in a more complex structure in the evolution of the scalar field. These findings were confirmed in \cite{Okawa:2014nda}, which also demonstrated that the scalar clouds can source GW emission over long timescales. The amplification of GWs through superradiance around a BH spinning close to extremality has been modeled in Ref.~\cite{East:2013mfa} and found to be maximal if the peak frequency of the wave is above the superradiant threshold but close to the dominant quasi-normal mode frequency of the BH. The generation of GW templates for the inspiral and merger of hairy BHs and the identification of possible smoking gun effects distinguishing them from their vacuum counterparts remains a key challenge for future NR simulations. Numerical studies of the non-linear saturation of superradiance are very challenging due to the long time scales involved. 
A particularly convenient example of superradiance in spherical symmetry arises through the interaction of charged, massless scalar fields with Reissner-Nordstr{\"o}m BHs in asymptotically anti-de Sitter spacetime or in a cavity; here energy is extracted from the BH charge rather than its rotation. The scalar field initially grows in accordance with superradiance, but eventually saturates, leaving behind a stable hairy BH \cite{Bosch:2016vcp,Sanchis-Gual:2016tcm}. Recently, the instability of spinning BHs in AdS backgrounds was studied in full generality~\cite{Chesler:2018txn}. The system displays extremely rich dynamics and it is unclear whether there is a stationary final state. The superradiant growth of a complex, massive vector field around a near-extremal Kerr BH has recently been modeled in axisymmetry \cite{East:2017ovw}. Over $9\,\%$ of the BH mass can be extracted before the process gradually saturates. Massive scalar fields can also pile up at the center of ``normal'' stars. Due to gravitational cooling, this pile-up does not lead to BH formation but to stable configurations composed of the star with a ``breathing'' scalar field \cite{Brito:2015yga}. The hairy BH configurations considered so far are long-lived but not strictly stationary. A class of genuinely stationary hairy BH spacetimes with scalar or vector fields has been identified in a series of papers \cite{Herdeiro:2014goa,Brihaye:2014nba,Benone:2014ssa,Herdeiro:2016tmi,Chodosh:2015oma}. The main characteristic of these systems is that the scalar field is {\em not stationary}---thus bypassing the no-hair theorems---but the spacetime metric and energy-momentum are. A subclass of these solutions smoothly interpolates between the Kerr metric and boson stars. A major challenge for numerical explorations is to evaluate whether these solutions are non-linearly stable and might thus be astrophysically viable.
A recent study of linearized perturbations around the hairy BH solution of \cite{Herdeiro:2014goa} has found unstable modes with characteristic growth rates similar to or larger than those of a massive scalar field on a fixed Kerr background. However, such solutions may still be of some astrophysical relevance~\cite{Degollado:2018ypf}. See also Sec.~\ref{sec:circumvent} of Chapter III for a related discussion. \subsection{Boson stars\label{sec:boson_stars}} The idea of stationary localized, soliton-like configurations made up of the type of fundamental fields discussed in the previous section goes back to Wheeler's ``gravitational-electromagnetic entities'' or {\em geons} of the 1950s \cite{Wheeler:1955zz}. While Wheeler's solutions turn out to be unstable, replacing the electromagnetic field with a complex scalar field leads to ``Klein-Gordon geons'', first discovered by Kaup \cite{Kaup:1968zz} and now more commonly referred to as {\em boson stars}. The simplest type of boson star, i.e.~a stationary solution to the Einstein-complex-Klein-Gordon system, is obtained for a non-self-interacting field with harmonic time dependence $\phi \propto e^{i\omega t}$ where the potential contains only a mass term $V(\phi)= m^2 |\phi|^2$. The resulting one-parameter family of these so-called {\em mini boson star} solutions is characterized by the central amplitude of the scalar field and leads to a mass-radius diagram qualitatively similar to that of static NSs; a maximum mass value of $M_{\rm max}=0.633\,M_{\rm Planck}^2/m$ separates the stable and unstable branches~\cite{Jetzer:1991jr,Mundim:2010hi,Liebling:2012fv}. For a particle mass $m=30\,{\rm GeV}$, for instance, one obtains $M_{\rm max}\sim 10^{10}\,{\rm kg}$ with radius $R\sim 6\times 10^{-18}\,{\rm m}$ and a density $10^{48}$ times that of a NS \cite{Mielke:1997re}.
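The quoted mini-boson-star numbers follow directly from the maximum-mass formula; a short Python sketch reproducing the $m=30\,{\rm GeV}$ example (the constants are standard values; the comparison with the corresponding Schwarzschild radius is our addition):

```python
M_PLANCK_GEV = 1.2209e19   # Planck mass in GeV/c^2
GEV_IN_KG = 1.78266e-27    # 1 GeV/c^2 in kg
G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8                # speed of light, m/s

# Maximum mass of a mini boson star, M_max = 0.633 * M_Planck^2 / m
# (all masses in natural units, converted to kg at the end).
def mini_boson_star_mmax_kg(m_gev):
    return 0.633 * M_PLANCK_GEV**2 / m_gev * GEV_IN_KG

m_max = mini_boson_star_mmax_kg(30.0)   # ~ 10^10 kg for m = 30 GeV
r_s = 2.0 * G * m_max / C**2            # Schwarzschild radius of that mass
print(f"M_max ~ {m_max:.1e} kg, R_s ~ {r_s:.1e} m")
```

The Schwarzschild radius obtained this way is of the same order as the quoted stellar radius $R\sim 6\times 10^{-18}\,{\rm m}$, confirming that these configurations sit close to the compactness limit.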
A wider range of boson star models is obtained by adding self-interaction terms to the potential $V(\phi)$ which result in more astrophysically relevant bulk properties of the stars. For a quartic term $\lambda |\phi|^4/4$, for example, the maximal mass is given by $M_{\rm max}\sim 0.1\,\lambda^{1/2}\,({\rm GeV}/m)^2\,M_{\odot}$ \cite{Colpi:1986ye}; for further types of boson stars with different potential terms see the reviews \cite{Mielke:1997re,Liebling:2012fv,Helfer:2016ljl} and references therein. A particularly intriguing feature of rotating boson stars exemplifies their macroscopic quantum-like nature: the ratio of angular momentum to the conserved particle number must be an integer, which prevents a continuous transition from rotating to non-rotating configurations \cite{Kobayashi:1994qi,Mielke:1997re}. More recently, stationary, soliton-like configurations have also been found for complex, massive Proca fields \cite{Brito:2015pxa}. For real scalar fields, in contrast, stationary solutions do not exist, but localized, periodically oscillating solutions dubbed {\em oscillatons} have been identified in Ref.~\cite{Seidel:1991zh}. Boson stars are natural candidates for DM~\cite{Liebling:2012fv}, but may also act as BH mimickers \cite{Guzman:2009zz,Cardoso:2016olt}. In the new era of GW observations, it is vital to understand the GW generation in boson-star binaries and search for specific signatures that may enable us to distinguish them from BH or NS systems. Recent perturbative calculations of the tidal deformation of boson stars demonstrate that the inspiral part of the waveform may allow us to discriminate boson stars, at least with third-generation detectors \cite{Sennett:2017etc}.
Numerical studies of dynamic boson stars have so far mostly focused on the stability properties of single stars and confirmed the stable and unstable nature of the branches on either side of the maximum-mass configuration; see e.g.~\cite{Balakrishna:1997ej,ValdezAlvarado:2012xc,Collodel:2017biu,Sanchis-Gual:2017bhw}. The modeling of head-on collisions of boson stars \cite{Palenzuela:2006wp,Bezares:2017mzk} reveals rich structure in the scalar radiation and shows that the merger leads to the formation of another boson star. Head-on collisions have also served as a testbed for confirming the validity of the hoop conjecture in high-energy collisions \cite{Choptuik:2009ww}. Inspiralling configurations result in BH formation, in dispersion of the scalar field to infinity, or in non-rotating stars \cite{Palenzuela:2007dm,Palenzuela:2017kcg}, possibly a consequence of the quantized angular momentum, which makes it difficult to form spinning boson stars. Binary boson star systems thus remain largely uncharted territory, especially regarding the calculation of waveform templates for use in GW data analysis and the question of whether the quantized angular momentum of spinning boson stars obstructs their formation through mergers. \subsection{Compact objects in modified theories of gravity} New physical phenomena and the signatures of new ``modified'' theories are typically encountered when probing extreme regimes not accessible to previous experiments and observations. Quantum effects, for instance, become prominent on microscopic scales and their observation led to the formulation of quantum theory while classical physics still provides an accurate description of macroscopic systems. Likewise, Galilean invariance and Newtonian theory accurately describe slow motion and weakly gravitating systems but break down at velocities comparable to the speed of light or in the regime of strong gravity.
We therefore expect modifications of GR, if present, to reveal themselves in the study of extreme scales such as the large-scale dynamics of the universe or the strong curvature regime near compact objects. The dawn of GW observations provides us with unprecedented opportunities to probe such effects. The modeling of compact objects in alternative theories of gravity represents one of the most important challenges for present and future NR studies, so that theoretical predictions can be confronted with observations. This task involves additional mathematical, numerical and conceptual difficulties as compared to the GR case; a more detailed discussion of these is given in Sec.~\ref{IVPandNumerics} below. A convenient way to classify the numerous modified theories of gravity is provided by the assumptions underlying Lovelock's theorem and, more specifically, which of these assumptions are dropped \cite{Berti:2015itd}. Unfortunately, for most of these candidate theories, well-posed formulations are not known (cf.~Table 1 in \cite{Berti:2015itd}) or are presently available only in the form of a continuous limit to GR or a linearization around some background \cite{Delsate:2014hba,Okounkova:2017yby}. Prominent exceptions are (single- or multi-) scalar-tensor (ST) theories of gravity \cite{Fujii:2003pa}, which include Brans-Dicke theory \cite{Brans:1961sx} and, through mathematical equivalence, a subset of $f(R)$ theories \cite{DeFelice:2010aj}. ST theories inherit the well-posedness of GR through the Einstein frame formulation \cite{Damour:1992we}; see also \cite{Salgado:2005hx,Salgado:2008xh,KijowskiJakubiecUniversality}. Furthermore, ST theories give rise to what would be a most conspicuous strong-field deviation from GR, the {\em spontaneous scalarization} of NSs \cite{Damour:1993hw} (for a similar effect in vector-tensor theory see \cite{Ramazanoglu:2017xbl}).
For these reasons, almost all NR studies have focused on this class of theories, even though its parameter space is significantly constrained by solar system tests and binary pulsar observations \cite{Bertotti:2003rm}. The structure of equilibrium models of NSs in ST theories has been studied extensively in the literature (e.g.~\cite{Damour:1993hw,Doneva:2013qva,Mendes:2014ufa,Silva:2014fca, Horbatsch:2015bua,Palenzuela:2015ima,Ramazanoglu:2016kul,Morisaki:2017nit}) and leads to a mass-radius diagram that contains one branch of GR or GR-like models plus possible additional branches of strongly scalarized stars. The presence or absence of these additional branches depends on the coupling between the scalar and tensor sector of the theory. Early numerical studies considered the collapse of dust in spherical symmetry \cite{Shibata:1994qd,Scheel:1994yr,Scheel:1994yn,Harada:1996wt}, which leads to a hairless BH in agreement with the no-hair theorems, even though departures from GR are possible during the dynamic stages of the collapse. In a sequence of papers, Novak {\em et al.} studied the collapse of NSs into a BH \cite{Novak:1997hw}, the transition of NSs between stable branches \cite{Novak:1998rk} and the formation of NSs through gravitational collapse \cite{Novak:1999jg}. In all cases, strong scalar radiation is generated for that part of the ST theory's parameter space that admits spontaneously scalarized equilibrium models. In \cite{Gerosa:2016fri}, the collapse of stellar cores to BHs was found to be the most promising scenario to generate detectable scalar radiation for parameters allowed by the Cassini and pulsar observations; galactic sources at a distance $D=10\,{\rm kpc}$ may be detected with present and future GW observatories or used to further constrain the theory's parameters. All these simulations, however, consider {\em massless ST theory}.
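The qualitative difference a field mass makes is the Yukawa suppression $\sim e^{-r/\lambda}$ of static scalar interactions beyond the field's Compton wavelength. A minimal Python sketch, where the $1\,{\rm AU}$ evaluation point and the mass $m=10^{-15}\,{\rm eV}$ are values we have chosen for illustration:

```python
import math

HBARC_EV_M = 1.9732705e-7   # hbar*c in eV*m
AU_M = 1.495979e11          # astronomical unit in m

# Static interactions of a scalar of mass m are Yukawa-suppressed,
# ~ exp(-r/lambda), with lambda = hbar/(m c) the reduced Compton wavelength.
def yukawa_suppression(mass_ev, r_m):
    lam_m = HBARC_EV_M / mass_ev
    return math.exp(-r_m / lam_m)

# For m = 1e-15 eV, lambda ~ 2e8 m, so at solar-system scales (~1 AU)
# the suppression exponent is ~750 and the scalar force is utterly negligible:
s = yukawa_suppression(1e-15, AU_M)
print(s)   # underflows to 0.0 in double precision
```

This is the mechanism by which massive ST theories evade the solar-system and binary-pulsar bounds discussed next.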
For massive fields, low frequency interactions decay exponentially with distance, so that the pulsar and Cassini constraints may no longer apply \cite{Alsing:2011er}. In consequence, massive ST theory still allows for very strongly scalarized equilibrium stars if $m\gtrsim 10^{-15}\,{\rm eV}$ \cite{Ramazanoglu:2016kul,Morisaki:2017nit}. This has dramatic consequences for the GW signals that can be generated in massive ST theory as compared with its massless counterpart: GW amplitudes can be orders of magnitude larger and the waves are spread out into a nearly monochromatic signal over years or even centuries due to the dispersion of the massive field. GW searches may therefore be directed at historic supernovae such as SN1987A and either observe a signal or constrain the theory's parameter space \cite{Sperhake:2017itk}. Numerical studies of binary systems in ST theory are rather scarce. The no-hair theorems strongly constrain possible deviations of pure BH spacetimes from GR. They can be bypassed, however, through non-trivial potentials \cite{Healy:2011ef} or boundary conditions \cite{Berti:2013gfa} which leads to scalar wave emission. Nevertheless, NS systems appear to be the more natural candidate to search for imprints of ST theory. Dynamical scalarization has indeed been observed in simulations of the merger of two NSs with initially vanishing scalar charge \cite{Barausse:2012da,Palenzuela:2013hsa}. Beyond GR and ST theory, we are only aware of one numerical study \cite{Okounkova:2017yby}, which simulated the evolution of BH binaries in the dynamical Chern-Simons (dCS) theory linearized around GR. LIGO observations may then measure or constrain the dCS length scale to $\lesssim \mathcal{O}(10)\,{\rm km}$. With the dawn of GW astronomy \cite{Abbott:2016blz}, the topics discussed so far in this section are becoming important subjects of observational studies with LIGO, Virgo and future GW detectors. 
These studies are still in an early stage and the generation of precision waveforms for scenarios involving modifications of gravity, fundamental fields or more exotic compact objects will be a key requirement for fully exploiting the scientific potential of this new channel for observing the universe. The analysis of GW events has so far concentrated on testing the consistency of the observed signals with GR predictions, establishing bounds on phenomenological parametrizations of deviations from GR and obtaining constraints from the propagation of the GW signal. An extended study of GW150914 demonstrated consistency between the merger remnant's mass and spin obtained separately from the low-frequency inspiral and the high-frequency postinspiral signal \cite{TheLIGOScientific:2016src}. The Compton wavelength of the graviton was constrained with a 90\,\% lower bound of $10^{13}$\,km (corresponding to an upper bound on the mass of $\sim 10^{-22}\,{\rm eV}$) and parametrizations of violations of GR using high-order PN terms have been constrained; see also \cite{Abbott:2017gyy, Abbott:2017vtc}. The first binary NS observation \cite{TheLIGOScientific:2017qsa}, combined with electromagnetic observations, limited the fractional difference between the propagation speed of GWs and that of light to between $-3\times 10^{-15}$ and $+7\times 10^{-16}$ \cite{Monitor:2017mdv}. Analysis of the polarization of the first triple-coincidence detection GW170814 found Bayes factors of 200 (1000) favoring purely tensor polarization over purely vector (purely scalar) radiation. In summary, the GW observations have not yet identified an inconsistency with the predictions of vacuum GR for BBH signals or GR predictions for NS systems.
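The correspondence between the quoted Compton-wavelength bound and the graviton-mass bound is again a unit conversion; a one-function Python check (the constant is the standard CODATA value):

```python
import math

HBARC_EV_M = 1.9732705e-7   # hbar*c in eV*m

# A lower bound on the graviton Compton wavelength lambda_g translates
# into an upper bound on its mass: m c^2 = 2*pi*hbar*c / lambda_g.
def graviton_mass_bound_ev(lambda_km):
    return 2.0 * math.pi * HBARC_EV_M / (lambda_km * 1e3)

m_g = graviton_mass_bound_ev(1e13)   # lambda_g > 1e13 km from GW150914
print(f"m_g < {m_g:.1e} eV")         # ~ 1e-22 eV, as quoted
```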
Chapter III contains detailed discussions of many of the topics raised here, including motivation and brief descriptions of certain broad classes of alternatives to GR (Sec.~\ref{sec:theories}), numerics beyond GR (Sec.~\ref{sec:NRbeyond}), and the nature of compact objects beyond GR (Sec.~\ref{sec:compactbeyond}). \subsection{High-energy collisions of black holes} The hierarchy problem of physics consists in the vast discrepancy between the weak coupling scale $(\approx 246\,{\rm GeV})$ and the Planck scale $1.22\times 10^{19}\,{\rm GeV}$ or, equivalently, the relative weakness of gravity compared with the other interactions. A possible explanation has been suggested in the form of ``large'' extra spatial dimensions \cite{Antoniadis:1998ig,ArkaniHamed:1998rs} or extra dimensions with a warp factor \cite{Randall:1999ee,Randall:1999vf}. On short lengthscales $\lesssim 10^{-4}\,{\rm m}$, gravity is at present poorly constrained by experiment and would, according to these models, be diluted due to the steeper fall-off in higher dimensions. All other interactions, on the other hand, would be constrained to a 3+1 dimensional brane and, hence, be unaffected. In these {\em braneworld} scenarios, the fundamental Planck scale would be much smaller than the four-dimensional value quoted above, possibly as low as $\mathcal{O}({\rm TeV})$, which inspired the name {\em TeV gravity}. This fundamental Planck scale determines the energy regime where gravity starts dominating the interaction, leading to the exciting possibility that BHs may be formed in particle collisions at the LHC or in high-energy cosmic rays hitting the atmosphere \cite{Banks:1999gd,Dimopoulos:2001hw,Giddings:2001bu}. The analysis of experimental data employs so-called Monte-Carlo event generators \cite{Frost:2009cf} which require as input the cross section for BH formation and the loss of energy and momentum through GW emission.
In the ultrarelativistic limit, the particles may be modeled as pointlike or, in GR, as BHs. In $D=4$ spacetime dimensions, high-energy collisions of BHs are by now well understood. Head-on collisions near the speed of light radiate about $14\,\%$ of the center-of-mass energy $M$ in GWs \cite{Sperhake:2008ga,Healy:2015mla}. The impact parameter separating merging from scattering collisions with boost velocity $v$ is $b/M=(2.50\pm0.05)/v$ \cite{Shibata:2008rq}. Grazing collisions exhibit zoom-whirl behaviour \cite{Pretorius:2007jn,Sperhake:2009jz} and can radiate up to $\sim 50\,\%$ of the total energy in GWs \cite{Sperhake:2012me}. The collision dynamics furthermore become insensitive to the structure of the colliding objects -- BHs or matter balls -- at high velocities \cite{Choptuik:2009ww,East:2012mb,Rezzolla:2012nr,Sperhake:2012me, Sperhake:2015siy}, as expected when kinetic energy dominates the energy budget. Finally, NR simulations of hyperbolic BH encounters are in good agreement with predictions by the EOB method \cite{Damour:2014afa}. The key challenge is to generalize these results to the higher-$D$ scenarios relevant for TeV gravity (there are partial, perturbative results~\cite{Galtsov:2010vtu,Berti:2010gx,Galtsov:2012pcw}). Considering the symmetries of the collision experiments, it appears plausible to employ rotational symmetry in higher-$D$ numerics, which is vital for keeping the computational costs under control and underlies all NR studies performed so far \cite{Pretorius:2004jg,Zilhao:2010sr,Yoshino:2011zz,Yoshino:2011zza, Cook:2016soy}. Nonetheless, NR in higher $D$ faces new challenges. The extraction of GWs is more complex, but now tractable either with perturbative methods \cite{Kodama:2003jz,Kodama:2003kk,Witek:2010xi}, using the Landau-Lifshitz pseudotensor \cite{Yoshino:2009xp} or projections of the Weyl tensor \cite{Godazgar:2012zq,Cook:2016qnt}.
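The empirical $D=4$ scattering threshold quoted above, $b/M=(2.50\pm0.05)/v$, is trivial to evaluate; a minimal Python sketch using the central value (the sample boost velocities are arbitrary choices):

```python
# Threshold impact parameter separating merging from scattering
# high-energy BH collisions in D=4 (central value of b/M = 2.50/v,
# with v the boost velocity in units of c).
def b_scatter_over_m(v):
    return 2.50 / v

for v in (0.6, 0.9, 0.99):
    print(f"v = {v}: b/M ~ {b_scatter_over_m(v):.2f}")
```

Note that as $v\to 1$ the threshold approaches $b/M \approx 2.5$, of order the Schwarzschild radius of the combined system, consistent with the hoop-conjecture picture.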
Likewise, initial data can be obtained by generalizing four-dimensional techniques \cite{Zilhao:2011yc}. Studies performed so far, however, indicate that, for reasons not yet fully understood, obtaining numerically stable evolutions is harder than in $D=4$ \cite{Okawa:2011fv,Cook:2017fec}. Results for the scattering threshold are limited to $v\lesssim 0.5\,c$ \cite{Okawa:2011fv} and the emission of GWs has only been computed for non-boosted collisions of equal and unequal mass BH binaries \cite{Witek:2010az,Witek:2014mha,Cook:2017fec}. These simulations show a strong suppression of the radiated energy with $D$ beyond its peak value $E_{\rm rad}/M \approx 9\times 10^{-4}$ at $D=5$ for equal-mass systems, but reveal more complex behaviour for low mass ratios. A remarkable outcome of BH grazing collisions in $D=5$ is the possibility of super-Planckian curvature in a region outside the BH horizons \cite{Okawa:2011fv}. \subsection{Fundamental properties of black holes and non-asymptotically flat spacetimes} Recent years have seen a surge of NR applications to non-asymptotically flat spacetimes in the context of the gauge-gravity duality, cosmological settings and for the exploration of fundamental properties of BH spacetimes. We list here a brief selection of some results and open questions; more details can be found in the reviews \cite{Sperhake:2013qa,Cardoso:2014uka}. Cosmic censorship has for a long time been a topic of interest in NR, but to date no generic, regular and asymptotically flat class of initial data are known to result in the formation of naked singularities in four spacetime dimensions. Higher-dimensional BHs, however, have a much richer phenomenology \cite{Emparan:2008eg}, including in particular black rings which may be subject to the Gregory-Laflamme instability \cite{Gregory:1993vy}. 
Thin black rings have indeed been found to cascade to a chain of nearly circular BHs connected by ever thinner segments in finite time \cite{Figueras:2015hkb}, in the same way as infinite black strings \cite{Lehner:2010pn}. Similarly, ultraspinning topologically spherical BHs in $D\ge 6$ dimensions are unstable \cite{Shibata:2010wz} and ultimately form ever thinner rings in violation of the weak cosmic censorship conjecture \cite{Figueras:2017zwa}. We have already mentioned Choptuik's discovery of critical phenomena in the collapse of spherical scalar fields \cite{Choptuik:1992jv}. In asymptotically Anti-de Sitter (AdS) spacetimes, the dynamics change through the confining mechanism of the AdS boundary, allowing the scalar field to recollapse again and again until a BH forms \cite{Bizon:2011gg,Jalmuzna:2011qw,Buchel:2012uh,Bizon:2013xha,Olivan:2015fmy}; see also \cite{Bantilan:2017kok} for non-spherically symmetric configurations. NR simulations of asymptotically AdS spacetimes are very challenging due to the complex outer boundary conditions, in particular away from spherical symmetry, but recent years have seen the first simulations of BH collisions in AdS \cite{Bantilan:2012vu,Bantilan:2014sra} which, assuming gauge-gravity duality, would imply far-from-hydrodynamic behaviour in heavy-ion collisions at early times after the merger. Using a cosmological constant with the opposite sign leads to asymptotically de Sitter (dS) spacetimes, widely believed to describe the Universe we live in. NR studies in dS have explored the possible impact of local structures on the cosmological expansion \cite{Bentivegna:2012ei,Yoo:2013yea,Yoo:2014boa, Bentivegna:2015flc,Mertens:2015ttp} and found that such inhomogeneities do not significantly affect the global expansion.
Further work has explored the robustness of inflation under inhomogeneities \cite{Clough:2016ymm}, the propagation of light in an expanding universe \cite{Bentivegna:2016fls} and the impact of extreme values of the cosmological constant on the physics of BH collisions \cite{Zilhao:2012bb}. \section{Effective-One-Body and Phenomenological models}\label{Sec:EOB} \vspace{-3mm} {\it Contributor:} T. Hinderer \vspace{3mm} This section surveys the status of two main classes of models that are currently used in GW data analysis: (i) the EOB approach, which describes both the dynamics and waveforms in the time domain for generic spinning binaries, and (ii) the phenomenological approach (Phenom), which provides a closed-form description of the waveform in the frequency domain and includes the dominant spin effects. The discussion below reviews mainly the current state-of-the-art models; a more comprehensive overview of prior work can be found in review articles such as Refs.~\cite{Buonanno:2014aza,Hannam:2013pra,Damour:2008yg}. \subsection{Effective-one-body models} The EOB approach was introduced in ~\cite{Buonanno:1998gg,Buonanno:2000ef} as a method to combine information from the test-particle limit with PN results (for early ideas in this spirit, see~Ref.\cite{Maheshwari:2016edp}). The model comprises a Hamiltonian for the inspiral dynamics, a prescription for computing the GWs and corresponding radiation reaction forces, and a smooth transition to the ringdown signals from a perturbed final BH. 
The idea is to map the conservative dynamics of the relative motion of two bodies, with masses $m_{1,2}$, spins ${\bm S}_{1,2}$, orbital separation $\bm x$ and relative momentum $\bm p$, onto an auxiliary Hamiltonian description of an effective particle of mass $\mu =m_1\,m_2/(m_1+m_2)$ and effective spin $\bm{S}_*(\bm{S}_1,\bm{S}_2,\bm{x},\bm{p})$ moving in an effective spacetime $g_{\mu \nu}^{\rm eff}(M,\bm{S}_{\rm Kerr};\nu)$ that is characterized by the total mass $M =m_1+m_2$, symmetric mass ratio $\nu= \mu/M$, and total spin $\bm{S}_{\rm Kerr}(\bm{S}_1,\bm{S}_2)$. The basic requirements on this mapping are that (i) the test-particle limit reduces to a particle in Kerr spacetime, and (ii) in the weak-field, slow-motion limit the EOB Hamiltonian reduces to the PN Hamiltonian, up to canonical transformations. These considerations, together with insights from results in quantum-electrodynamics~\cite{Brezin:1970zr}, scattering theory~\cite{Damour:2016gwp,Vines:2017hyw}, and explicit comparisons of the action variables~\cite{Buonanno:1998gg} all lead to the ``energy map'' given by \begin{equation} H=M \sqrt{1+2\nu \left(\frac{H_{\rm eff}}{\mu}-1\right)}, \end{equation} where $H$ and $H_{\rm eff}$ are the Hamiltonians describing the physical binary system and the effective particle respectively. The details of the effective Hamiltonian $H_{\rm eff}$ are less constrained by theoretical considerations, and different descriptions have been proposed that differ in the structure of the Hamiltonian, the form of the potentials characterizing the effective spacetime, and the spin mapping between the physical and effective systems. The differences between these choices reflect current limitations of theoretical knowledge; they all agree in the PN and nonspinning test-particle limit. 
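A quick numerical sanity check of the energy map is to verify the test-particle limit; the following Python sketch (units $G=c=1$; the masses and effective energy are arbitrary illustrative values) confirms that for $\nu \to 0$ one recovers $H \to M + (H_{\rm eff}-\mu)$, i.e.~total rest mass plus the effective particle's binding energy:

```python
import math

# EOB "energy map": H = M * sqrt(1 + 2*nu*(H_eff/mu - 1)), units G = c = 1.
def eob_hamiltonian(m1, m2, h_eff):
    m_tot = m1 + m2
    mu = m1 * m2 / m_tot      # reduced mass
    nu = mu / m_tot           # symmetric mass ratio
    return m_tot * math.sqrt(1.0 + 2.0 * nu * (h_eff / mu - 1.0))

m1, m2 = 1.0, 1.0e-6               # extreme mass ratio (illustrative)
mu = m1 * m2 / (m1 + m2)
h_eff = 0.95 * mu                  # a bound effective energy (illustrative)
binding = eob_hamiltonian(m1, m2, h_eff) - (m1 + m2)
print(binding, h_eff - mu)         # agree up to corrections of O(nu^2)
```

Expanding the square root to first order in $\nu$ and using $M\nu=\mu$ reproduces this limit analytically.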
For the structure of $H_{\rm eff}$, the incarnation of the EOB model of Refs.~\cite{Barausse:2009aa, Barausse:2009xi, Barausse:2011ys,Taracchini:2013rva,Bohe:2016gbl} imposes that the limiting case of a spinning test-particle in Kerr spacetime must be recovered, to linear order in the test-spin. The version of the model of Refs.~\cite{Damour:2001tu,Damour:2008qf,Nagar:2011fx,Balmelli:2015lva,Damour:2014sva} does not include test-particle spin effects, which enables a more compact description. These different choices also result in different spin mappings, for both the Kerr parameter and the spin of the effective particle. Finally, these two branches of EOB models also employ different ways to re-write the potentials, which are calculated as a Taylor series in a PN expansion, in a ``re-summed'' form. This means that an empirically motivated non-analytic representation is used that consists either of a Padé resummation~\cite{Damour:2001tu,Damour:2008qf,Nagar:2011fx} or has a logarithmic form~\cite{Barausse:2009xi, Barausse:2011ys}. In addition to the above choices for describing the strong-field, comparable-mass regime, the models also include parameterized terms whose coefficients are functions of the mass and spin parameters that are fixed by comparisons to NR results. In the models of Refs.~\cite{Barausse:2009xi, Barausse:2011ys,Taracchini:2013rva,Bohe:2016gbl}, these calibration parameters are constrained by the requirement that the model must reproduce the GSF results for the ISCO shift in Schwarzschild~\cite{Barack:2009ey}. The radiative sector in the EOB model is described by so-called factorized waveforms (instead of a PN Taylor series expansion) that are motivated by the structure of waveforms in the test-particle limit and have the form~\cite{Damour:2007xr,Damour:2008gu} \begin{equation} \label{hlm} h_{\ell m}^{\rm insp-plunge}(t)=h_{\ell m}^{(N,\epsilon)}\,\hat{S}_{\rm eff}^{(\epsilon)}\, T_{\ell m}\, e^{i\delta_{\ell m}}\, f_{\ell m}\,N_{\ell m}\,.
\end{equation} Here, $h_{\ell m}^{(N,\epsilon)}$ is the Newtonian contribution, and $\hat{S}_{\rm eff}^{(\epsilon)}$ is a certain effective ``source term'' that, depending on the parity $\epsilon$ of the mode, is related to either the energy or the angular momentum. The factor $T_{\ell m}$ contains the leading order logarithms arising from tail effects, the term $e^{i\delta_{\ell m}}$ is a phase correction due to sub-leading order logarithms in hereditary contributions, while the function $f_{\ell m}$ collects the remaining PN terms. Finally, the factor $N_{\ell m}$ is a phenomenological correction that accounts for deviations from quasi-circular motion~\cite{Damour:2002vi} and is calibrated to NR results. The modes from Eq.~(\ref{hlm}) are used to construct dissipative forces $\bm{\mathcal{F}}$, in terms of which the equations of motion are given by \begin{eqnarray} &&\frac{d\bm{x}}{d t}=\frac{\partial H}{\partial \bm{p}}\,, \quad \quad \frac{d\bm{p}}{d t}=-\frac{\partial H}{\partial \bm{x}} +\bm{\mathcal{F}}\,, \qquad \qquad\\ && \frac{d\bm{S}_{1,2}}{d t}=\frac{\partial H}{\partial \bm{S}_{1,2}}\times \bm{S}_{1,2}. \end{eqnarray} This EOB description of the inspiral-plunge dynamics is smoothly connected to the merger-ringdown signal in the vicinity of the peak in the amplitude $|h_{22}|$. To perform this matching, the initial nonspinning models and the aligned-spin models SEOBNRv1~\cite{Barausse:2011ys} and SEOBNRv2~\cite{Taracchini:2013rva,Taracchini:2012ig} (as well as the models of the IHES group~\cite{Damour:2012ky,Damour:2001tu}) used a superposition of damped sinusoids similar to quasi-normal modes~\cite{Buonanno:2000ef}. This method is inspired by results for the infall of a test-particle into a BH and the close-limit approximation~\cite{Price:1994pm}.
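The structure of Hamilton's equations with a dissipative force can be sketched numerically. The following toy uses a Newtonian Hamiltonian and an ad hoc drag force standing in for the radiation-reaction force $\bm{\mathcal{F}}$; the function names (`grad`, `evolve`, `H_newt`) and the integrator choice are our own illustrative assumptions, not the EOB implementation:

```python
import math

def grad(f, q, eps=1e-6):
    """Central-difference gradient of a scalar function f at the point q."""
    g = []
    for i in range(len(q)):
        qp, qm = list(q), list(q)
        qp[i] += eps
        qm[i] -= eps
        g.append((f(qp) - f(qm)) / (2 * eps))
    return g

def evolve(H, force, x0, p0, dt, n_steps):
    """Integrate dx/dt = dH/dp, dp/dt = -dH/dx + F with a semi-implicit Euler step."""
    x, p = list(x0), list(p0)
    for _ in range(n_steps):
        dHdp = grad(lambda pp: H(x, pp), p)
        x = [xi + dt * v for xi, v in zip(x, dHdp)]
        dHdx = grad(lambda xx: H(xx, p), x)
        F = force(x, p)
        p = [pi + dt * (-di + fi) for pi, di, fi in zip(p, dHdx, F)]
    return x, p

def H_newt(x, p):
    """Toy Newtonian Hamiltonian H = |p|^2/2 - 1/|x| (mu = M = 1)."""
    return 0.5 * (p[0] ** 2 + p[1] ** 2) - 1.0 / math.hypot(x[0], x[1])
```

With the dissipative force switched off the energy is approximately conserved along a circular orbit, while a drag force $\bm{F}\propto-\bm{p}$ steadily removes orbital energy, mimicking the qualitative effect of $\bm{\mathcal{F}}$.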
More recently, a simpler phenomenological fit of the amplitude and phase inspired by the rotating source approximation \cite{Baker:2008mj}, which was adapted to the EOB context in \cite{Damour:2014yha}, has become standard in SEOBNRv4~\cite{Bohe:2016gbl}. This method provides a more stable and controlled way to connect the inspiral-plunge to the ringdown. A further key input into the merger-ringdown model is the frequency of the least-damped quadrupolar quasinormal mode $\sigma_{220}$ of the remnant, whose mass and spin are obtained from an NR fitting formula in terms of the initial parameters; the currently most up-to-date fit is given in~\cite{Bohe:2016gbl}. Generic spin precession effects are also included in the model by starting from a calibrated spin-aligned model and transforming to the precessing frame as dictated by the precession equations derived from the EOB Hamiltonian \cite{Pan:2010hz,Babak:2016tgq}, without further calibrations. The most recent refinement for generic precessing binaries from Ref.~\cite{Babak:2016tgq} is known as SEOBNRv3. For non-vacuum binaries involving e.g. neutron stars, exotic compact objects, or condensates of fundamental fields around BHs, several effects of matter change the GWs from the inspiral relative to those from a BH binary, as described in Chapter III, Sec.~4.4, which also covers the signatures from such systems generated during the merger and ringdown. Effects during the inspiral include spin-induced deformations of the objects, tidal effects, the presence of a surface instead of an event horizon, tidal excitation of internal oscillation modes of the objects, and more complex spin-tidal couplings. Two classes of EOB models for such systems are currently available, corresponding to the two different baseline EOB models for BHs.
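For illustration, the superposition of damped sinusoids used in the earlier ringdown-matching prescription can be sketched as follows; the mode parameters here are placeholders, not fitted quasinormal-mode values:

```python
import math

def ringdown(t, modes):
    """Ringdown ansatz: a superposition of damped sinusoids, one per
    quasinormal mode; each mode is (amplitude, frequency, damping_time, phase)."""
    return sum(A * math.exp(-t / tau) * math.cos(2.0 * math.pi * f * t + phi)
               for (A, f, tau, phi) in modes)
```

In the actual models the frequencies and damping times are those of the remnant BH's quasinormal modes, and the amplitudes and phases are fixed by the matching conditions at the attachment point.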
One is known as TEOBRESUMS \cite{Nagar:2018zoe} and incorporates the effects of rotational deformation and adiabatic tidal effects \cite{Damour:2009wj,Bini:2012gu} in a re-summed form inspired by \cite{Bini:2014zxa} and augmented as described in \cite{Bernuzzi:2014owa}. The other model is known as SEOBNRv4T and includes the spin-induced quadrupole as well as dynamical tides from the objects' fundamental oscillation modes \cite{Steinhoff:2016rfi,Hinderer:2016eia}. \subsection{Phenomenological (Phenom) models} The aim of the Phenom models is to provide a simple, efficient, closed-form expression for the GW signal in the frequency domain~\cite{Ajith:2007qp} by assuming the schematic form \begin{equation} \tilde{h}_{\rm phen}(f;\vec{\alpha}; \vec{\beta}) := A(f;\vec{\alpha})e^{i\Psi(f;\vec{\beta})}, \end{equation} where $\vec{\alpha}$ and $\vec{\beta}$ are amplitude and phase parameters in the model. Phenomenological (``Phenom'') models were first developed for nonspinning binaries in Refs.~\cite{Ajith:2007qp,Ajith:2007kx} and subsequently refined to include aligned spins~\cite{Ajith:2009bn}, a model known as ``PhenomB''. This model employed only a single weighted combination of the individual BH spins characterizing the dominant spin effect in the GWs, and was further refined in~\cite{Santamaria:2010yb} (``PhenomC''), and in~\cite{Khan:2015jqa,Husa:2015iqa} (``PhenomD''). The latter has been calibrated using NR data for mass ratios up to 1:18 and dimensionless spins up to $0.85$ (with a larger spin range for equal masses). An effective description of the dominant precession effects has also been developed~\cite{Hannam:2013oca,Schmidt:2012rh,Schmidt:2014iyl} (``PhenomP''). The PhenomP model provides an approximate mapping for obtaining a precessing waveform from any non-precessing model, with PhenomD being currently used as the baseline.
To construct these state-of-the-art Phenom models, the GW signal is divided into three main regimes: an inspiral, intermediate region, and merger-ringdown. The ansatz for the inspiral model is the PN result for the frequency-domain phasing obtained from energy balance in the stationary phase approximation (``TaylorF2''), accurate to 3.5PN in the nonspinning and linear-in-spin sector, and to 2PN in spin-spin terms. This has the form \begin{equation} \Psi_{\rm TF2}=2\pi f t_c-\phi_c-\pi/4+\frac{3}{128\nu}(\pi M f)^{-5/3}\sum_{i=0}^7 c_i (\pi M f)^{i/3}, \label{eq:TF2phase} \end{equation} where $c_i$ are PN coefficients. The Phenom models add to this auxiliary terms so that the inspiral phase becomes \begin{equation} \Psi=\Psi_{\rm TF2}+\frac{1}{\nu}\sum_{i=0}^6 \sigma_i f^{i/3}, \end{equation} where $\sigma_i$ are phenomenological coefficients with $\sigma_{1}=\sigma_2=0$. A different ansatz is made for the late inspiral and merger-ringdown signals, which are likewise closed-form expressions involving the frequency and phenomenological parameters; in total the model involves 17 phenomenological parameters. For the late inspiral portion, these are mapped to the set of two physical parameters $(\nu, \chi_{\rm PN})$, where $\chi_{\rm PN}$ is defined by \begin{eqnarray} \chi_{\rm PN}&=& \chi_{\rm eff}-\frac{38\nu}{113}\left(\chi_1+\chi_2\right)\,,\\ \chi_{\rm eff}&=&\frac{m_1}{M} \vec{\chi}_1\cdot \hat{L}_N+\frac{m_2}{M} \vec{\chi}_2\cdot \hat{L}_N\,,\label{chi} \end{eqnarray} where $\hat L_N$ is the direction of the Newtonian angular momentum and $\chi_i=S_i/m_i^2$. $\chi_{\rm PN}$ describes the dominant spin effect (the dependence on $M$ is only a rescaling). The mapping is done by assuming a polynomial dependence $\sigma_i=\sum_{j,k} b_{ijk}\,\nu^j \chi_{\rm PN}^k$ to quadratic order in $\nu$ and third order in $\chi_{\rm PN}$, so that the coefficients vary smoothly across the parameter space.
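The TaylorF2 phasing of Eq.~(\ref{eq:TF2phase}) can be evaluated directly. The following minimal sketch works in geometric units and, for illustration only, retains just the Newtonian coefficient $c_0=1$; the actual $c_i$ are the 3.5PN coefficients:

```python
import math

def taylorf2_phase(f, M, nu, c, t_c=0.0, phi_c=0.0):
    """Stationary-phase-approximation (TaylorF2) phase, geometric units G = c = 1.
    `c` is the list of PN coefficients c_i; c = [1.0] keeps leading order only."""
    v = (math.pi * M * f) ** (1.0 / 3.0)          # PN expansion parameter
    series = sum(ci * v ** i for i, ci in enumerate(c))
    return 2.0 * math.pi * f * t_c - phi_c - math.pi / 4.0 \
        + 3.0 / (128.0 * nu) * v ** -5 * series
```

At leading order (and with $t_c=\phi_c=0$), $\Psi_{\rm TF2}+\pi/4$ scales as $f^{-5/3}$, which is the characteristic chirp behaviour of the inspiral phase.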
Finally, the coefficients are calibrated to a large set of hybrid waveforms, which themselves are formed using the uncalibrated SEOBNRv2 model. Spin effects in the Phenom models are described by several different combinations of parameters. In the aligned-spin baseline model, the description of the early inspiral depends on both spin parameters $\chi_1,\chi_2$ from PN. In the later inspiral regime, spin effects are described by the effective combination $\chi_{\rm PN}$ described above, while the merger-ringdown model is expressed in terms of the total spin. For generic spin orientations, an additional parameter $\chi_p$ that characterizes the most readily measurable effects of precession is included in the model. The definition of the precession parameter $\chi_p$ is motivated by the observation that waveforms are simpler in the coprecessing frame that is aligned with the direction of the dominant radiation emission~\cite{Schmidt:2010it}; it is given by~\cite{Schmidt:2014iyl} \begin{equation} \chi_p=\frac{1}{B_1 m_1^2}{\rm max}(B_1 S_{1\perp},B_2 S_{2\perp}), \end{equation} where $B_1=2+3 m_2/(2 m_1)$ and $B_2=2+3 m_1/(2 m_2)$, assuming the convention $m_1>m_2$. Starting from an aligned-spin frequency-domain waveform model, an approximate precessing waveform is constructed by ``twisting up'' the non-precessing waveform with the precessional motion based on a single-spin PN description for the precession angles~\cite{Schmidt:2012rh,Hannam:2013oca}. While there are some broad similarities of PhenomP with the way the precessing EOB model is constructed, the two approaches differ in several details, as explained in Sec.\ IV of Ref.~\cite{Babak:2016tgq}. For non-BH objects, the effects of rotational and tidal deformations are included in the phasing using either PN information \cite{Krishnendu:2017shb,Vines:2011ud,Damour:2012yf} or, for the case of binary neutron stars, using a model calibrated to NR \cite{Dietrich:2017aum,Dietrich:2018uni}.
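The definition of $\chi_p$ above translates directly into code; this minimal sketch (our own function, with the in-plane spin magnitudes $S_{i\perp}$ as inputs) also handles the mass-ordering convention explicitly:

```python
def chi_p(m1, m2, s1_perp, s2_perp):
    """Effective precession spin parameter chi_p (convention m1 >= m2).
    s1_perp, s2_perp: magnitudes of the in-plane spin angular momenta S_{i,perp}."""
    if m1 < m2:  # enforce the convention by swapping labels
        m1, m2 = m2, m1
        s1_perp, s2_perp = s2_perp, s1_perp
    B1 = 2.0 + 1.5 * m2 / m1
    B2 = 2.0 + 1.5 * m1 / m2
    return max(B1 * s1_perp, B2 * s2_perp) / (B1 * m1 ** 2)
```

When only the primary carries an in-plane spin, $\chi_p$ reduces to the dimensionless in-plane spin $S_{1\perp}/m_1^2$ of the heavier object.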
\subsection{Remaining challenges} Important advances that all EOB and Phenom models aim to address in the near future are higher modes, parameter-space coverage (especially in the mass ratio), inclusion of all available theoretical knowledge, reduction of systematic errors, and going beyond circular orbits. To date, most of the effort in calibrating the EOB and Phenom models has focused on the $(2,2)$ mode, although for the special case of nonspinning binaries an EOB multipolar approximant is available~\cite{Pan:2011gk}. An accurate model of higher modes is important for robust data analysis, especially for spinning binaries. Work is ongoing to address this within the EOB model~\cite{Cotesta:2018fcv} and with studies that will inform the Phenom approach~\cite{Bustillo:2015ova}. The parameter space over which current models have been calibrated and tested is limited by available NR simulations of sufficient accuracy and length (see Sec.\ \ref{Sec:NR}), and systematic uncertainties remain a concern. Extending the range in parameter space ties into the pressing issue that many available results from GSF calculations are not currently incorporated in these models. One of the obstacles is that GSF data for circular orbits only make sense above the ``light ring'' radius and cannot directly inform the EOB model across and below that radius. This issue is the subject of further ongoing work. Efforts are also underway to include effects of eccentricity~\cite{Huerta:2016rwp,Hinder:2017sxy,Hinderer:2017jcs}. Effects such as the motion of the center-of-mass, BH radiation absorption, and radiation reaction for spins are also not yet included in current models. For EOB models, efficiency for data analysis is a further concern, because computing EOB waveforms is relatively time-consuming.
To overcome this issue, reduced-order models for EOB waveforms in the frequency domain have been developed for aligned-spin binaries~\cite{Purrer:2015tud, Bohe:2016gbl}, with work underway on the larger parameter space of precessing binaries. Also ongoing is work on theoretical aspects of spin effects in the EOB model, aimed at devising an improved description that is more robust and computationally efficient. \subsection{Tests of GR with EOB and Phenom models} The effective models described above have been used for tests of GR and exotic objects, e.g. in \cite{TheLIGOScientific:2016src}. One of the parameterized tests makes use of the general framework ``TIGER'' \cite{Agathos:2013upa} that has been implemented for data analysis, where parameterized fractional deviations from the GR coefficients are included in the frequency-domain phase evolution from Eq.~(\ref{eq:TF2phase}). In recent studies, e.g.~\cite{TheLIGOScientific:2016src,Abbott:2017oio}, this framework was used on top of the PhenomD inspiral model \cite{Khan:2015jqa,Husa:2015iqa} to obtain bounds on non-GR effects, where the tests also included parameterized deviations in the intermediate and merger-ringdown regimes. Work is ongoing to implement tests of GR within the EOB approach~\cite{Brito:2018rfr}. Finally, work is also underway to include in these models the option to test for exotic physics manifest in spin-quadrupole and tidal effects during the inspiral. Despite this recent progress on frameworks to test GR and the nature of BHs, substantial further work will be necessary to refine the level of sophistication and physical realism of such tests. \section {Source modelling and the data-analysis challenge}\label{Sec:DA} \vspace{-3mm} {\it Contributor:} J. R. Gair \vspace{3mm} Models of GW sources form an essential component of data analysis for GW detectors.
Indeed, many of the scientific objectives of current and future GW detectors cannot be realised without having readily available accurate waveform models. In this section we first provide a brief overview of the primary techniques used for GW data analysis, and follow this with a discussion of the additional challenges that data analysis for future GW detectors will pose and the corresponding requirements that this will place on waveform models. \subsection{Overview of current data analysis methods} The LIGO-Virgo Collaboration (LVC) uses a variety of techniques to first identify and then characterise GW transients in their data. The majority of these make use of signal models in some way. We briefly describe these methods in the following. \subsubsection{Unmodelled searches} A number of LVC analysis pipelines are targeted towards ``burst'' sources for which the waveforms are not well modelled. These include Coherent Wave Burst~\cite{2008CQGra..25k4029K}, X-pipeline~\cite{2010NJPh...12e3034S,2012PhRvD..86b2003W} and BayesWave~\cite{2015CQGra..32m5012C}. All three algorithms make use of the fact that there are multiple detectors in the LVC network, which allows signals (common between different detectors) to be distinguished from noise (generally uncorrelated between detectors). The algorithms differ in their implementation. Coherent Wave Burst and X-pipeline operate on spectrograms (time-frequency maps) of the data, computed using wavelet transforms in the first case and a Fourier transform in the second. Both algorithms then identify ``hot'' pixels that have particularly large power, carry out clustering to identify candidate events, i.e., continuous tracks of excess power, in individual detectors, and then impose consistency in the properties of the tracks identified in the different detectors. BayesWave takes a slightly different approach, using Bayesian techniques to construct a model for the noise in each detector and any signal present.
The signal is constructed as a superposition of wavelets, and reversible jump Markov Chain Monte Carlo is used to add or remove components from the signal and noise models. These model-free searches are powerful tools for source identification. Indeed, the first algorithm to find GW150914, the first GW event detected by LIGO, was Coherent Wave Burst~\cite{Abbott:2016blz,TheLIGOScientific:2016uux}, because it was the only online search pipeline running at the time of the event. However, these algorithms are not as sensitive as searches based on models, and the second clear GW event, GW151226, was only found with high significance by the template-based searches~\cite{Abbott:2016nmj}. In addition, parameter estimation can only be done using models. Moreover, of the three algorithms, only BayesWave is truly independent of waveform models. Both Coherent Wave Burst and X-pipeline use signal injections to determine the optimal choice of the various tunable thresholds in the algorithms that maximize the distinction between signal and noise. The injections use realistic GW signal waveforms, for which models are needed. BayesWave does not do tuning in this way, although the development of that algorithm and demonstration of its performance was done using signal injections. \subsubsection{Template based searches} The primary search pipelines within the LVC Compact Binary Coalescence (CBC) group use matched filtering. This involves using a precomputed bank of templates of possible GW signals and computing their overlap, i.e., inner product, with the data. The template bank needs to fully cover the parameter space so that if a signal is present, at least one of the templates will recover it confidently. There are two primary matched filtering based searches used by the LVC --- pyCBC~\cite{Canton:2014ena,Usman:2015kfa} and gstLAL~\cite{2012ApJ...748..136C,2014PhRvD..89b4003P}.
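A minimal sketch of the matched-filtering inner product described above, using toy frequency-domain arrays and a one-sided power spectral density; this is an illustration of the technique, not the pyCBC or gstLAL implementation:

```python
def inner_product(a, b, psd, df):
    """Noise-weighted inner product <a|b> = 4 Re sum_f a(f) b*(f) / Sn(f) df
    over positive frequencies, for frequency-domain series a, b."""
    return 4.0 * df * sum((x * y.conjugate() / s).real
                          for x, y, s in zip(a, b, psd))

def matched_filter_snr(data, template, psd, df):
    """SNR of a (normalized) template filtered against the data."""
    norm = inner_product(template, template, psd, df) ** 0.5
    return inner_product(data, template, psd, df) / norm
```

When the data equal the template exactly, the recovered SNR is the optimal value $\sqrt{\langle h|h\rangle}$, which is the basis for placing a template bank dense enough that some template always recovers most of a true signal's SNR.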
The two searches differ in a number of ways, such as the choice of template bank and placement, and in the various consistency checks that are used to compare the signals identified in the different detectors and to compare the post-signal extraction residual to the expected noise distribution. We refer the interested reader to the previous references for more details. All the BBH systems identified by LIGO to date were found by both of these pipelines with very high significance, and several of them would not have been identified using template-free methods only. Matched filtering is only possible if models of GW signals are available that have sufficient fidelity to true signals. \subsubsection{Parameter estimation} The other primary area where GW signal models are essential is in parameter estimation. Once a potential signal has been identified by one of the search pipelines described above, the signal is characterized using the separate LALInference pipeline~\cite{2015PhRvD..91d2003V} (though other parameter inference methods are also used~\cite{Pankow:2015cra,Lange:2018pyp}). LALInference constructs a posterior probability distribution for the source parameters using Bayes' theorem. This relies on a model for the likelihood, which is taken to be the likelihood of the noise (assumed Gaussian on the short stretches of data around each signal). The noise is computed as the observed data minus the signal, and is therefore a function of the signal parameters and requires an accurate model of potential signals. LALInference includes two different algorithms --- LALInferenceNest~\cite{2010PhRvD..81f2003V} and LALInferenceMCMC~\cite{2008ApJ...688L..61V}. The first uses nested sampling to determine the posterior and associated model evidence, while the second uses Markov Chain Monte Carlo techniques based on the Metropolis-Hastings algorithm.
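The Metropolis-Hastings sampling mentioned above can be sketched with a toy one-parameter Gaussian likelihood standing in for the full multi-dimensional GW likelihood; the function names and the toy "constant signal plus noise" data model are our own illustrative assumptions:

```python
import math
import random

def metropolis(log_post, theta0, step, n_samples, seed=1):
    """Minimal Metropolis-Hastings sampler with a symmetric Gaussian proposal."""
    rng = random.Random(seed)
    chain = []
    theta, lp = theta0, log_post(theta0)
    for _ in range(n_samples):
        prop = theta + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:  # accept/reject step
            theta, lp = prop, lp_prop
        chain.append(theta)
    return chain

# Toy problem: data = constant "signal" A_true + unit-variance Gaussian noise;
# the Gaussian noise likelihood plays the role of the LALInference likelihood.
rng = random.Random(0)
data = [1.0 + rng.gauss(0.0, 1.0) for _ in range(100)]
log_post = lambda A: -0.5 * sum((d - A) ** 2 for d in data)
chain = metropolis(log_post, 0.0, 0.3, 4000)
```

After discarding burn-in, the chain samples the Gaussian posterior on the amplitude, whose mean coincides with the sample mean of the data; the real pipelines do the same exploration over the full physical parameter space with waveform templates inside the likelihood.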
Parameter estimation is essential to extract physical information from identified GW events, and the resulting posterior distributions summarise all our information about the properties of the source. Any physical effect that we wish to probe using GW observations must be included in the signal model used in parameter estimation. The recent NSB event observed by LIGO, GW170817~\cite{TheLIGOScientific:2017qsa}, showed some evidence for tidal effects in the signal, which highlighted deficiencies in signal models with tides and the need for further work in that area~\cite{Abbott:2018wiz}. \subsection{Challenges posed by future detectors} The next LIGO-Virgo observing run, O3, is scheduled to start in early 2019, and is expected to have a factor of $\sim2$ improvement in sensitivity over the O2 run that finished in August 2017. LIGO/Virgo are expected to complete their first science runs at design sensitivity in the early 2020s~\cite{Aasi:2013wya}. Based on the earlier science runs, several tens of BBH systems and a small number of NSB events~\cite{Abbott:2016nhf,TheLIGOScientific:2017qsa} are likely to be detected in O3. These events will be similar to sources previously identified and so the primary challenge will be computational --- the LVC will need the capability to process multiple events simultaneously, which implies a need for accurate waveform models that are as fast to generate as possible. Further in the future there are plans for third generation ground based detectors, such as the Einstein Telescope~\cite{2012CQGra..29l4013S} and Cosmic Explorer~\cite{Evans:2016mbw}, and a space-based GW detector, the Laser Interferometer Space Antenna (LISA)~\cite{Audley:2017drz}. These new detectors will observe new types of source which will pose new modelling challenges. 
\subsubsection{Third-generation ground-based detectors} Planned third-generation ground-based detectors, like the Einstein Telescope (ET)~\cite{2012CQGra..29l4013S} or the Cosmic Explorer~\cite{Evans:2016mbw}, aim to improve on advanced detectors in two ways. Firstly, they aim to have an increase in sensitivity by an order of magnitude. Secondly, they aim to improve low-frequency sensitivity, with the ultimate goal of sensitivity as low as $1$Hz, compared to $\sim30$Hz for the LIGO/Virgo detectors in O2, and $10$Hz at design sensitivity. For the ET, these aims will be achieved by increasing the arm-length to $10$km, siting the detector underground and using cryogenic cooling of the mirrors. ET will also have three arms in a triangular configuration, so that it is equivalent to having two detectors at the same site. The increase in sensitivity will increase the number of sources detected by about a factor of a thousand. Data analysis must be able to keep pace with such a high source volume, which means algorithms must run quickly. This places constraints on the computational cost of waveform models, which is discussed further below. The improvement in low frequency sensitivity significantly increases the duration of any given signal in band. At leading order, the time to coalescence of a binary scales like $f^{-8/3}$~\cite{Peters:1964zz}. The NSB event GW170817 was in the LIGO band for $\sim40$s, starting from $30$Hz, and generated 3000 waveform cycles~\cite{TheLIGOScientific:2017qsa}. For a detector with low frequency cut-off at $3$Hz, the same source would be in band for $\sim5$h and generate $\sim10^5$ cycles. This longer duration places more stringent requirements on waveform models, since fractional errors in the waveforms will need to be small enough that the templates are accurate to within 1 cycle of the $10^5$ in band.
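The leading-order scalings quoted above can be checked directly; the functions below (our own names) encode $t_c\propto f^{-8/3}$ and the corresponding number-of-cycles scaling $N\propto f^{-5/3}$:

```python
def time_in_band_ratio(f_new, f_old):
    """Leading-order (quadrupole) scaling of the time to coalescence
    when lowering the low-frequency cutoff: t_c proportional to f**(-8/3)."""
    return (f_old / f_new) ** (8.0 / 3.0)

def cycle_ratio(f_new, f_old):
    """Corresponding scaling of the number of GW cycles, N proportional to f**(-5/3)."""
    return (f_old / f_new) ** (5.0 / 3.0)
```

Lowering the cutoff from $30$Hz to $3$Hz multiplies the in-band duration by $10^{8/3}\approx460$, turning GW170817's $\sim40$ seconds into roughly five hours, and multiplies the 3000 cycles by $10^{5/3}\approx46$, giving $\sim10^5$ cycles, consistent with the numbers quoted in the text.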
This is mitigated partially by the fact that most of the additional cycles are generated in the weak field where analytic models are well understood. Third-generation detectors also offer the prospect of detection of new classes of sources. These include higher-mass BH systems, made possible by the improved low-frequency sensitivity~\cite{2012CQGra..29l4013S}, and possibly intermediate mass ratio inspirals (the inspirals of stellar origin BHs into intermediate mass BHs with mass $\sim 100M_\odot$)~\cite{brown:2007,Gair:2010dx}. The former do not pose additional modelling challenges, as these signals will be well represented by rescaling templates for lower mass systems. The latter, however, lie in a regime where both finite mass-ratio effects and strong-field, higher-order PN effects become important. Modelling the strong-field dynamics requires numerical techniques, which are limited in the number of cycles that can be modelled, while finite mass-ratio effects require perturbative techniques, which are in turn limited by the size of the mass-ratio corrections. Any IMRIs observed are likely to be at very low SNR and so will only be identifiable in the data if accurate templates are available. Initial attempts to construct hybrid IMRI models have been made~\cite{Huerta:2011a,Huerta:2011b,Huerta:2012a,Huerta:2012b}, but considerable work is still required. It is also hoped that it will be possible to test GR to high precision with third-generation ground-based detectors~\cite{2012CQGra..29l4013S}. This requires development of models in alternative theories. This challenge is common to space-based detectors and is discussed further in the next section. \subsubsection{The Laser Interferometer Space Antenna} LISA is a space-based GW detector that has been selected as the third large mission that ESA will launch in its current programme, with a provisional launch date of 2034.
LISA will comprise three satellites, arranged in an approximately equilateral triangular formation with $2.5$ million km long arms and with laser links passing in both directions along each arm. By precisely measuring the phase of the outgoing and incoming laser light in each arm, LISA can do interferometry and detect GWs. It will operate at lower frequency than the LIGO/Virgo detectors, in the range $0.1$mHz--$0.1$Hz, with peak sensitivity around a few mHz. The lower frequency sensitivity means that the typical systems that LISA will observe have higher mass, $M\sim 10^4$--$10^7M_\odot$. Such massive BHs (MBHs) are observed to reside in the centres of lower mass galaxies. LISA is expected to observe mergers of binaries comprised of two such MBHs, which are expected to occur following mergers between the BH host galaxies. MBHs are typically surrounded by clusters of stars, which include BHs similar to those observed by LIGO that were formed as the endpoint of stellar evolution. LISA is also expected to observe the EMRIs of such stellar origin BHs into MBHs. In addition to these MBH sources, LISA will also observe stellar compact binaries in the Milky Way; it could detect some of the stellar origin BH binaries that LIGO will observe and may detect sources of cosmological origin~\cite{Audley:2017drz}. The latter sources do not pose particular modelling challenges, but the BH sources do. In contrast to the LIGO-Virgo network, there will be only one LISA constellation. While the three LISA arms allow, in principle, the construction of two independent data streams, there will inevitably be some correlation between noise in these channels. In addition, LISA sources are long-lived, lasting months or years in the data set, and so there will be hundreds of sources overlapping in the data.
These properties make it much more difficult to construct unmodelled source pipelines like those used in LIGO, and so LISA will rely even more heavily on having models of potential signals in order to identify them in the data. Typical MBH binary signals will have SNR in the tens to hundreds, with a few as high as a thousand. This allows much more precise estimates of parameters, but places much higher demands on the fidelity of waveform models. A template accurate to a few percent is fine for characterising a source that has SNR of a few tens, as is typical for LIGO/Virgo, but for an SNR of one thousand, the residual after extracting that source will have an SNR in the tens, biasing parameter estimation and contaminating the extraction of subsequent sources. Templates need to be two orders of magnitude more accurate for use in LISA. This accuracy comes coupled to the need for longer duration templates, as for the third-generation ground based detectors, since the signals are present in the data stream for months. MBH binaries detected by LISA are additionally expected to have high spins~\cite{Barausse:2012fy}, in contrast to the observed LIGO/Virgo sources which are all consistent with low or zero spin~\cite{TheLIGOScientific:2016pea,Abbott:2017vtc,Abbott:2017oio,Abbott:2017gyy,Wysocki:2017isg,Wysocki:2018mpo}. Finally, MBH binaries are more likely to show precession. The likely presence of these physical effects in observed signals, coupled with the necessity of model-based searches for LISA, places strong requirements on the MBH binary waveform models that must be available by the time LISA flies. For EMRIs, expected SNRs are in the tens~\cite{Babak:2017tow,Berry:2016bit}, but this SNR is accumulated over $\sim10^5$ cycles. This makes unmodelled EMRI searches impossible and imposes the requirement on modelled searches that the EMRI templates match the true signals to better than one cycle over $10^5$.
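The template-accuracy argument above can be made concrete with a rough estimate, assuming (our own simplifying assumption) that the residual amplitude scales linearly with the fractional template error:

```python
def residual_snr(frac_error, snr):
    """Rough leading-order SNR of the residual d - h left after subtracting
    a template with fractional waveform error `frac_error` from a signal
    of the given SNR (assumes the residual scales linearly with the error)."""
    return frac_error * snr
```

A few-percent template error leaves a residual with SNR below one for a typical LIGO/Virgo source (SNR of a few tens), but an SNR in the tens for a LISA MBH binary at SNR one thousand, which is why LISA templates must be roughly two orders of magnitude more accurate.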
In addition, all of these cycles are generated in the strong-field regime where accurate modelling of the signals is more challenging. This drives the requirement for GSF EMRI models accurate to second order in the mass ratio described above. These models must include the effect of high spin in the central MBH, and eccentricity and inclination of the orbit of the smaller BH~\cite{AmaroSeoane:2007aw,AmaroSeoane:2012tx}. EMRI models will also have to be computationally efficient. Naively, matching $10^5$ cycles in a parameter space with $8$ intrinsic (and $6$ extrinsic) parameters requires $(10^5)^8 \sim 10^{40}$ templates. This is a crude overestimate, but illustrates the complexity of a template-based EMRI search, and the need to be able to generate large numbers of templates in a small computational time. Semi-coherent methods have been proposed~\cite{Gair:2004iv} that have less stringent requirements, but still need templates accurate for $10^3$ or more cycles. Finally, one of LISA's primary science objectives is to carry out high precision tests of GR, including both tests of strong-field gravity and tests of the nature of compact objects by using EMRIs to probe the gravitational field structure in the vicinity of BHs. Many different tests have been proposed and we refer the reader to~\cite{Gair:2012nm} for a comprehensive review. Several methods exist for phenomenological tests, which assess consistency of the observed signal with the predictions of GR. However, understanding the significance of any constraints that LISA places, and interpreting any deviations that are identified, requires models for deviations from the predictions of GR in alternative theories. We require strong-field predictions, most likely relying on numerical simulations, to compare to the merger signals from MBH binaries, which will be observable with high significance. We also require predictions for the sorts of deviations that might be present in the inspiral phase of EMRIs.
The latter need to be accurate to a part in $10^5$ and must allow for confusion between gravitational effects and effects of astrophysical origin, e.g., the presence of matter in the system, perturbations from distant objects, etc. \subsection{Computationally efficient models} As described above, a significant obstacle to data analysis for future detectors is computational. In order to analyse a large number of potential signals and search large parameter spaces, we require models that capture all the main physics reliably but can be evaluated rapidly. We describe the current status and outstanding challenges here. \subsubsection{Reduced-order models} In the context of LIGO/Virgo, interest in developing computationally efficient representations of waveform models arose due to the high cost of parameter estimation algorithms~\cite{2015PhRvD..91d2003V}. This led to the development of reduced-order or surrogate models. The gstLAL search algorithm~\cite{2012ApJ...748..136C,2014PhRvD..89b4003P} uses a singular-value decomposition (SVD) of the signal and noise parameter spaces to efficiently identify candidate signals. However, for parameter estimation we also need to quickly map such a representation onto physical parameters. One approach is to take frequency-domain representations of the waveforms, construct an SVD of these, and finally fit the dependence of the SVD coefficients on the source parameters~\cite{2014CQGra..31s5010P,Purrer:2015tud}. An alternative approach is to find a reduced-order representation of the waveform parameter space. A set of templates covering the whole parameter space is constructed, and waveforms are then added sequentially to the reduced-basis set using a greedy algorithm: at each stage the template that is least well represented by the current reduced basis is added to the set~\cite{2011PhRvL.106v1102F,2012PhRvD..86h4046F}. 
This procedure is terminated once the representation error of the basis falls below a specified threshold; typically the number of templates in the final basis is one or more orders of magnitude smaller than the size of the original set. Representing an arbitrary template with the basis is then achieved by requiring the basis representation to match the waveform at a specific set of points, i.e., interpolation rather than projection. These points are chosen by another greedy algorithm, which picks the points at which the difference between the next basis waveform and its interpolation on the current basis is largest. This representation allows the likelihood of the data to be written as a quadrature-rule sum over the values of the desired waveform at the interpolation points~\cite{2013PhRvD..87l4005C}. This is typically a far smaller number of points than the original time series, and so is much cheaper to evaluate than the full waveform. For numerical waveforms an additional step is needed, in which the values of the waveform at the desired points are interpolated across parameter space~\cite{2014PhRvX...4c1006F}. The first GW application of this reduced-order quadrature method used a simple sine-Gaussian burst model~\cite{2013PhRvD..87l4005C}, and the first implementation within the LIGO/Virgo analysis infrastructure was for a NS waveform model~\cite{2015PhRvL.114g1104C}. Reduced-order models and associated quadrature rules have subsequently been developed for a number of other waveform models, including NR simulations for higher-mass BH binaries~\cite{Blackman:2015pia,Blackman:2017dfb,Blackman:2017pcm} and phenomenological inspiral-merger-ringdown waveform models~\cite{2016PhRvD..94d4031S}. Parameter estimation for GW170817~\cite{TheLIGOScientific:2017qsa} would arguably not have been possible within the timescale on which that paper was written if the reduced-order model waveform had not been available. 
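The greedy reduced-basis construction described above can be sketched in a few lines. The toy below is illustrative only: a one-parameter family of sinusoids stands in for real waveform templates, and the grid sizes and tolerance are arbitrary choices, not a real LIGO/LISA configuration:

```python
import numpy as np

# Minimal sketch of greedy reduced-basis construction on a toy family
# of "waveforms" h(t; f) = sin(2*pi*f*t). All sizes are illustrative.

t = np.linspace(0.0, 1.0, 2048)
freqs = np.linspace(10.0, 20.0, 500)                     # training parameters
training = np.array([np.sin(2.0 * np.pi * f * t) for f in freqs])
training /= np.linalg.norm(training, axis=1, keepdims=True)

def greedy_basis(train, tol=1e-6):
    """Grow an orthonormal basis by repeatedly adding the training
    waveform that is least well represented, until every projection
    error is below `tol`."""
    basis = [train[0]]
    while True:
        B = np.array(basis)
        resid = train - (train @ B.T) @ B       # residual after projection
        errs = np.linalg.norm(resid, axis=1)
        worst = int(np.argmax(errs))
        if errs[worst] < tol:
            return B
        basis.append(resid[worst] / errs[worst])  # Gram-Schmidt step

basis = greedy_basis(training)
print(len(basis), "basis elements represent", len(training), "templates")
```

The compression here is modest because sinusoids are a simple family; for realistic template banks the reduction is the order-of-magnitude gain quoted above.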
Some of the existing reduced-order models and reduced-order quadratures will be useful for the analysis of data from future detectors. However, the increased duration of signals will typically increase the size of the reduced basis, and so work will be needed to optimize the models. In addition, new types of waveform models including higher spins, lower mass ratios, or eccentricities do not yet have reduced-order models available, and so these will have to be developed for application to LISA searches for massive BHs and EMRIs. It is possible that some eventual advanced phenomenological models may on their own provide a competitive alternative to reduced-order models. \subsubsection{Kludges} To model EMRIs, ``kludge'' models have been developed that capture the main qualitative features of true inspiral signals. There are two main kludge approaches. The analytic kludge (AK~\cite{Barack:2003fp}) starts from GW emission from Keplerian orbits, as described by~\cite{1963PhRv..131..435P,Peters:1964zz}, and then adds various strong-field effects on top---evolution of the orbit through GW emission, perihelion precession, and orbital-plane precession. The model is semi-analytic and hence cheap to generate, which means it has been used extensively to scope out LISA data analysis~\cite{Gair:2004iv,2007CQGra..24S.551A,Babak:2007zd}. However, it rapidly dephases from true EMRI signals. Versions of the AK that include deviations from GR arising from an excess quadrupole moment~\cite{Barack:2006pq} and generic changes in the spacetime structure~\cite{Gair:2011ym} have also been developed. The numerical kludge model (NK) is built around Kerr geodesic trajectories, with the parameters of the geodesic evolved as the object inspirals, using a combination of PN results and fits to perturbative computations~\cite{Gair:2005ih}. 
A waveform is generated from the trajectory by identifying the Boyer-Lindquist coordinates in the Kerr spacetime with spherical polar coordinates and using a flat-space GW emission formula~\cite{Babak:2006uv}. The NK model is quite accurate when compared to trajectories computed using BH perturbation theory~\cite{Babak:2006uv}. Versions of the NK model including conservative GSF corrections have also been developed~\cite{2009PhRvD..79h4021H}. Current kludge models are fast but not sufficiently accurate to be used in LISA data analysis. The NK model can be improved using fits to improved perturbative results as these become available. An osculating-element formalism has been developed~\cite{Pound:2007th,Gair:2010iv} that can be used to compute the appropriate corrections to the phase and orbital-parameter evolution for an arbitrary perturbing force (see~\cite{Warburton:2011fk} for an example using the GSF). This model is likely to be sufficiently reliable for use in preliminary data analysis, but will have to be continually improved as the GSF calculations described in Sec.\ \ref{Sec:perturbations} are further developed. Although much faster than GSF models, the NK model is likely to be too expensive computationally for the first stages of data analysis, in which source candidates are identified using a large number of templates. The AK model in its current form is potentially fast enough, but is not sufficiently accurate. However, a hybrid model called the ``Augmented Analytic Kludge'' was recently proposed~\cite{2015CQGra..32w2002C,Chua:2017ujo} that corrects the AK phase and orbital evolution equations so that they match numerical kludge trajectories. The ``Augmented Analytic Kludge'' model is almost as fast as the AK model and almost as accurate as the NK model, so it is a promising approach to data analysis, though much work is needed to develop the preliminary model to include the full range of physical effects. 
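As a concrete illustration of the kind of ingredient the AK model is built from, the sketch below integrates Peters' circular-orbit decay equation in geometric units ($G=c=1$); the masses, separation, and step size are arbitrary toy numbers, not a calibrated EMRI configuration:

```python
import numpy as np

# Radiation-reaction decay of a circular binary (Peters 1964):
# da/dt = -(64/5) m1 m2 (m1 + m2) / a**3, in geometric units G = c = 1.
# All numbers below are illustrative.

def evolve_circular(a0, m1, m2, dt, n_steps):
    """Forward-Euler integration of Peters' circular-orbit decay."""
    m_total = m1 + m2
    a = a0
    history = [a0]
    for _ in range(n_steps):
        a += dt * (-(64.0 / 5.0) * m1 * m2 * m_total / a**3)
        history.append(a)
    return np.array(history)

# An extreme mass ratio: the separation shrinks only very slowly, which
# is why an EMRI accumulates ~1e5 cycles in band before plunge.
a = evolve_circular(a0=10.0, m1=1.0, m2=1e-5, dt=1.0, n_steps=1000)
print(a[0], a[-1])
```

The kludges layer perihelion and orbital-plane precession and relativistic corrections on top of this kind of leading-order secular evolution.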
Computationally efficient models will be essential for identifying sources in data from current and future GW detectors. However, they can only be constructed if accurate physical waveform models have been developed, and the final stage of any parameter estimation pipeline must use the most accurate physical model available in order to best extract the source parameters. Accurate source modelling underpins all GW data analysis. \section{The view from Mathematical GR}\label{Sec:MR} \vspace{-3mm} {\it Contributor:} Piotr T.~Chru\'sciel \vspace{3mm} The detection of gravitational waves presents formidable challenges to the mathematical relativist: can one invoke mathematical GR to provide an unambiguous interpretation of observations, and a rigorous underpinning for some of the interpretations already made? Here we briefly discuss some of the issues arising, focusing on questions that are tractable using available methods in mathematical GR. We shall sidestep a host of other important problems, such as that of the global well-posedness of the Cauchy problem for binary BHs, or the related problem of \emph{cosmic censorship}, which are currently completely out of reach of mathematical GR. \subsection{Quasi-Kerr remnants?} \label{ss23V18.1} A working axiom in the GW community is that \emph{Kerr BHs exhaust the collection of BH end states}. There is strong evidence for this, based on the ``no-hair'' theorem, which asserts that \emph{suitably regular stationary, analytic, non-degenerate, connected, asymptotically flat vacuum BHs belong to the Kerr family}; see Ref.~\cite{Chrusciel:2012jk} for precise definitions and a list of many contributors (see also Secs.~\ref{sec:nohair} and \ref{sec:hairyBHs} of Chapter III for a discussion of no-hair theorems in the context of beyond-GR phenomenology). 
The analyticity assumption is highly undesirable and rather curious: it implies that knowledge of a metric in a small space-time region determines the metric throughout the universe. This assumption was relaxed in~\cite{Alexakis:2009ch} for near-Kerr configurations and in~\cite{AIK3} for slowly rotating horizons, but the general case remains open. Next, the non-degeneracy assumption is essentially that of non-vanishing surface gravity: the maximally spinning Kerr BHs are degenerate in this sense. One often discards the extremal-Kerr case by declaring that it is unstable. This might well be the case, but it is perplexing that the spin distribution of observed SMBH candidates is clearly biased towards high spin, with a possible peak at extremality~\cite{Brenneman:2013oba,Reynolds:2013qqa}. (Note, however, that these studies have an a priori assumption built in, that the BHs are Kerr.) Finally, while the connectedness assumption is irrelevant when talking about an isolated BH, it is quite unsatisfactory to have it around: what mechanism could keep a many-BH vacuum configuration in a stationary state? All this makes it clear that removing the undesirable assumptions of \emph{analyticity, non-degeneracy, and connectedness} in the uniqueness theorems should be a high priority for mathematically minded relativists. \subsection{Quasi-normal ringing?} \label{ss23V18.2} The current paradigm is that, after a merger, the final BH settles to a stationary one by emitting radiation which, at the initial stage, has a characteristic profile determined by \emph{quasi-normal modes}. This characteristic radiation field provides a very useful tool for determining the final mass and angular momentum of the solution. (Note that the extraction of these parameters from the ringdown profile assumes the final state to be Kerr, taking us back to the issues raised in the previous paragraph...) All this seems to be well supported by numerics. But we are still far behind by way of mathematical proof. 
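To illustrate how the ringdown profile encodes the remnant's mass and spin, the sketch below evaluates a damped sinusoid using fitting formulae for the fundamental $l=m=2$ Kerr quasi-normal mode. The coefficients are the commonly quoted fits of Berti, Cardoso \& Will (2006), reproduced here from memory, so treat the numbers as indicative rather than authoritative:

```python
import numpy as np

# Fundamental (l, m, n) = (2, 2, 0) Kerr quasi-normal mode as a damped
# sinusoid. Fit coefficients follow Berti, Cardoso & Will (2006),
# quoted from memory -- indicative values only.

def qnm_22(mass, spin):
    """Return (omega, tau) of the (2, 2, 0) mode, with the mass in
    geometric units so that omega has units of 1/mass."""
    f1, f2, f3 = 1.5251, -1.1568, 0.1292   # fit for M*omega
    q1, q2, q3 = 0.7000, 1.4187, -0.4990   # fit for the quality factor Q
    omega = (f1 + f2 * (1.0 - spin)**f3) / mass
    quality = q1 + q2 * (1.0 - spin)**q3
    return omega, 2.0 * quality / omega    # tau = 2 Q / omega

def ringdown(t, mass, spin, amp=1.0):
    """Damped sinusoid h(t) = A exp(-t/tau) cos(omega t)."""
    omega, tau = qnm_22(mass, spin)
    return amp * np.exp(-t / tau) * np.cos(omega * t)

# A moderately spinning remnant, in units of the remnant mass:
omega, tau = qnm_22(mass=1.0, spin=0.7)
print(omega, tau)
```

Measuring both the frequency and the damping time of the dominant mode pins down the two parameters (mass, spin), which is precisely the inference that presupposes the final state to be Kerr.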
Work by Dyatlov~\cite{Dyatlov,Dyatlov2} has rigorously established that quasi-normal modes are part of an asymptotic expansion of the time-behaviour of linear waves on the \emph{Kerr-de Sitter} background with a nonzero cosmological constant. The asymptotically flat Kerr case turns out to be much more difficult to tackle, and so far no rigorous mathematical statements are available for it in the literature. The whole problem becomes even more difficult when non-linearities are introduced, with no results available so far. The recently announced proofs of nonlinear stability of the Schwarzschild solution within specific families of initial data~\cite{DHRT,KlainermanSzeftel} might serve as a starting point for further studies of this important problem. \subsection{Quasi-local masses, momenta and angular momenta?} It was mind-blowing, and still is, when LIGO detected two BHs of about 35 and 30 solar masses merging into one, of about 60 solar masses. But the question arises, what do these numbers mean? Assuming the end state to be Kerr, the last part of the statement is clear. But how can the pre-merger masses be determined, or even defined, given the non-stationarity of the BBH configuration? Each NR group uses its own method for assigning mass and spin parameters to the initial datasets used in their simulations, and it is not {\it a priori} clear how these numbers relate to each other. EOB models likewise employ their own definitions of mass and spin. Any differences are most likely irrelevant at the current level of detection accuracy. But one hopes that it will become observationally important at some stage, and is a fundamental issue in our formulation of the problem in any case. 
To address the issue requires choosing benchmark definitions of quasi-local mass, momentum, and angular momentum that can be blindly calculated on numerical initial datasets, without knowing whether the data were obtained, e.g., by solving the spacelike constraints with an Ansatz, or by evolving some other Ansatz, or by matching spacelike and characteristic data. One can imagine a strategy whereby one first locates the apparent horizons within a unique preferred time-slicing (e.g., maximal, keeping in mind that apparent horizons are slicing-dependent), and then calculates the chosen quasi-local quantities for those. There exists a long list of candidates for quasi-local mass, with various degrees of computational difficulty, which could be used after extensive testing, all to be found in~\cite{Szabados:2009eka}; alphabetically: Bartnik's, Brown-York's, Hawking's (directly readable from the area), Kijowski's, Penrose's, with the currently-most-sophisticated one due to Chen, Wang and Yau (see~\cite{ChenWangYau1804} and references therein). The issue is not whether there is an \emph{optimal} definition of quasi-local quantities, which is an interesting theoretical question on its own, unlikely to find a universally agreed answer, but whether there is a \emph{well-defined and computationally convenient one} that can be used for scientific communication. Such a definition would need to have an unambiguous Newtonian limit, necessary for making contact with non-GR astrophysical observations. Incidentally, in recent work~\cite{Chen:2019obg}, the Wang and Yau mass has been calculated for balls of fixed radius receding to infinity along null geodesics. The key new observation is that the leading order volume integrand is the square of the Bondi news function. This is accompanied by an integral over the surface of the bounding spheres which deserves further investigation, numerically or otherwise. 
\subsection{Quasi-mathematical numerical relativity?} To an outsider, NR looks like a heroic struggle with a quasi-impossible task, which, after years of inspired attempts, resulted in a maze of incredibly complicated codes that manage to perform the calculations related to the problem at hand. It is conceivable that there is no way to control the whole construction in a coherent way. However, some mathematically minded outsiders would like to be convinced that the numerical calculations are doing what they are supposed to do; compare~\cite{KGO,Sarbach:2012pr}. In other words, is it possible to show that at least some, if not most or all, of the current numerical approximations to the Einstein equations would converge to a real solution of the problem at hand if the numerical accuracy could be increased without limit? Standard convergence tests, or checks that the constraints are preserved, are of course very important, but they cannot on their own settle the point. A more rigorous proof of convergence is desirable. \part{Black holes and fundamental physics} \label{WG3} \phantomsection \addcontentsline{toc}{part}{\bf Chapter III: Gravitational waves and fundamental physics} \begin{center} {\large \bf Chapter III: Gravitational waves and fundamental physics} \end{center} \begin{center} Editor: Thomas P.~Sotiriou \end{center} \vskip 1cm \newcommand{\pp}[1]{{\textcolor{red}{\sf{[PP: #1]}} }} \setcounter{section}{0} \section{Introduction} \label{Sec:introduction3} Gravity is arguably the most poorly understood fundamental interaction. It is clear that General Relativity (GR) is not sufficient for describing the final stages of gravitational collapse or the very early universe. Additionally, a deeper understanding of the role of gravitation seems to be a necessary ingredient for solving almost any other major challenge in fundamental physics, cosmology, and astrophysics, such as the hierarchy problem, or the dark matter (DM) and dark energy problems. 
Gravitational waves (GWs) promise to turn gravity research into a data-driven field and potentially provide much needed insights. However, one needs to overcome a major obstacle first: that of extracting useful information about fundamental physics from GW observations. The reason this is a challenge should have become apparent in the previous chapter. The signal is buried inside noise and extracting it requires precise modelling. Doing so in GR is already a major feat, and it only gets harder when one tries to add new ingredients to the problem. Nonetheless, there is very strong motivation. Black hole binary (BBH) mergers are among the most violent events in the universe, and some of the most interesting and exotic phenomena are expected to take place in the vicinity of BHs. This chapter focusses on how BHs can be used to probe fundamental physics through GWs. The next section sets the background by discussing some examples of beyond-GR scenarios. Sec.~\ref{Sec:detection} summarises the techniques that are being used or developed in order to probe new fundamental physics through GWs. Sec.~\ref{Sec:nonKerr} is devoted to the efforts to probe the nature and structure of the compact objects that are involved in binary mergers. Finally, Sec.~\ref{Sec:DM} provides a thorough discussion of what GW observations might reveal about the nature of DM. \section{Beyond GR and the standard model} \label{Sec:beyondGR} \subsection{Alternative theories as an interface between new physics and observations} \label{sec:theories} The most straightforward way to test new fundamental physics with GWs from BBH coalescences would be the following: select one's favourite scenario and model the system in full detail, extract a waveform (or, better, a template bank), and then look for the prediction in the data. The technical difficulties of doing so will become apparent below, where it will also be apparent that the required tools are at best incomplete and require further development. 
However, there is a clear non-technical drawback in this approach as well: it assumes that one knows what new fundamental physics to expect and how to model it to the required precision. One should contrast this with the fact that most quantum gravity candidates, for example, are not developed enough to give unique and precise predictions for the dynamics of a binary system. Moreover, ideally one would want to obtain the maximal amount of information from the data, instead of just looking for very specific predictions, in order not to miss unexpected new physics. Hence, the question one needs to ask is: what is the optimal way to extract new physics from the data? GW observations test gravity itself and the way matter interacts through gravity. Hence, at the theoretical level, they can test GR and the way GR couples to the Standard Model (SM) and its extensions. This clearly suggests that the new fundamental physics that can leave an imprint on GW observations can most likely be effectively modelled using an alternative theory of gravity. Recall that alternative theories of gravity generically contain extra degrees of freedom that can be nonminimally coupled to gravity. The advantages of this approach are: (i) alternative theories of gravity can act as {\em effective field theories} for describing certain effects and phenomena ({\em e.g.}~violations of Lorentz symmetry or parity) or be eventually linked to a specific more fundamental theory; (ii) they can provide a complete framework for obtaining predictions for binary evolutions and waveforms, as they come with fully nonlinear field equations; (iii) their range of validity is broad, so they allow one to combine constraints coming from the strong gravity regime with many other bounds from {\em e.g.}~the weak field regime, cosmology, astrophysics, laboratory tests, etc. 
The major drawback of this approach is that it requires theory-dependent modelling, which can be tedious and requires one to focus on specific alternative theories of gravity. In this Section we will give a brief overview of some alternative theories that can be used to model new physics in GW observations. This is not meant to be a comprehensive list, nor is it our intent to pinpoint interesting candidates. We simply focus on theories that have received significant attention in the literature as regards their properties in the strong gravity regime, and to which we plan to refer in the coming Sections. It is worth mentioning that a complementary approach is to use theory-agnostic {\em strong field parametrisations}. Their clear advantage is that they simplify the modelling drastically and they render it theory-independent. However, they fall short in points (i)-(ii) above. In particular, it is not always straightforward to interpret them physically, they typically describe only part of the waveform, and they do not allow one to combine constraints with other observations unless they are interpreted in the framework of a theory. Strong field parametrisations and their advantages and disadvantages will be discussed in Secs.~\ref{sec:propagation}, \ref{sec:inspiral} and \ref{sec:ringdown}. \subsection{Scalar-tensor theories} \vspace{-3mm} {\it Contributors:} T.~P.~Sotiriou, K.~Yagi \vspace{3mm} \label{sec:sttheories} One of the simplest ways to modify GR is to introduce a scalar field that is nonminimally coupled to gravity. In many cases, the term scalar-tensor theories refers to theories described by the action \begin{eqnarray} \label{staction} S_{\rm st}&=&\frac{1}{16\pi }\int d^4x \sqrt{-g} \Big(\varphi R-\frac{\omega(\varphi)}{\varphi} \nabla^\mu \varphi \nabla_\mu \varphi-V(\varphi) \Big) +S_m(g_{\mu\nu},\psi)\,, \end{eqnarray} where $g$ is the determinant of the spacetime metric $g_{\mu\nu}$, $R$ is the Ricci scalar of the metric, and $S_m$ is the matter action. 
$\psi$ is used to collectively denote the matter fields, which are coupled minimally to $g_{\mu\nu}$. The functions $\omega(\varphi)$ and $V(\varphi)$ need to be specified to identify a specific theory within the class. Brans-Dicke theory is a special case with $\omega=\omega_0=$constant and $V=0$~\cite{Brans:1961sx}. It is common in the literature to rewrite the action in terms of a different set of variables. In particular, the conformal transformation $\hat{g}_{\mu\nu}=G \varphi \, g_{\mu\nu}$, where $G$ is Newton's constant, and the scalar field redefinition $2\varphi d\phi=\sqrt{2\omega(\varphi)+3} \, d\varphi$, can be employed to bring the action~(\ref{staction}) to the form \begin{equation} \label{stactionein} S_{\rm st}=\frac{1}{16\pi G}\int d^4x \sqrt{-\hat{g}} \Big(\hat{R}-2 \hat{g}^{\nu\mu}\partial_\nu \phi \partial_\mu \phi-U(\phi)\Big)+S_m(g_{\mu\nu},\psi)\,, \end{equation} where $U(\phi)=V(\varphi)/(G\varphi)^2$. The set of variables $(\hat{g}_{\mu\nu}, \phi)$ is known as the {\em Einstein frame}, whereas the original set of variables $(g_{\mu\nu},\varphi)$ is known as the {\em Jordan frame}. Quantities with a hat are defined with $\hat{g}_{\mu\nu}$. In the Einstein frame, scalar-tensor theories seemingly take the form of Einstein's theory with a minimally coupled scalar field (hence the name). However, this comes at a price: $\phi$ now couples to the matter field $\psi$, as can be seen if $S_m$ is written in terms of $\hat{g}_{\mu\nu}$, $\phi$, and $\psi$. Hence, if one sticks to the Einstein frame variables, one would infer that matter experiences an interaction with $\phi$ (fifth force), and hence it cannot follow geodesics of $\hat{g}_{\mu\nu}$. In the Jordan frame instead, there is no interaction between $\varphi$ and $\psi$ (by definition), and this implies that matter follows geodesics of $g_{\mu\nu}$. 
This line of thinking allows one to ascribe a specific physical meaning to the Jordan frame metric, but it does not change the fact that the two frames are equivalent descriptions of the same theory (see Ref.~\cite{Sotiriou:2007zu} and references therein for a more detailed discussion). Scalar-tensor theories are well studied but generically disfavoured by weak field observations, {\em e.g.}~\cite{Bertotti:2003rm,Perivolaropoulos:2009ak}. In brief, weak field tests dictate that the scalar field, if it exists, should be in a trivial configuration around the Sun and around weakly gravitating objects, such as test masses in laboratory experiments. These precision tests translate to very strong constraints on scalar-tensor theories; strong enough to make it unlikely that there can be any interesting phenomenology in the strong gravity regime, which is our focus here. There is one notable known exception~\cite{Damour:1993hw}. To introduce these models it is best to first set $V=U=0$ and then consider the equation of motion for $\phi$, as obtained by varying action (\ref{stactionein}). It reads \begin{equation} \hat{\Box} \phi=-4\pi G \alpha(\phi) T \,, \end{equation} where $T\equiv T^\mu_{\phantom{a}\mu}$, $T_{\mu\nu}\equiv -2 (-g)^{-1/2} \delta S_m/\delta g^{\mu\nu}$ is the Einstein frame stress energy tensor, and \begin{equation} \alpha(\phi)=- [2\omega(\varphi)+3]^{-1/2}\,. \end{equation} Solutions with $\phi=\phi_0=$ constant are admissible only if $\alpha(\phi_0)=0$. This condition identifies a special class of theories and translates to $\omega(\varphi_0)\to \infty$ in the Jordan frame, where $\varphi_0$ is the corresponding value of $\varphi$. For $\phi=\phi_0$ solutions the corresponding metric is identical to the GR solution for the same system. However, these solutions are not unique. It turns out that they are energetically preferred for objects that are less compact than a certain threshold, controlled by the value of $\alpha'(\phi_0)$. 
When the compactness exceeds the threshold, a nontrivial scalar configuration is preferred, leading to deviations from GR~\cite{Damour:1992we,Damour:1993hw,Damour:1996ke}. The phenomenon is called {\em spontaneous scalarization} and it is known to happen for neutron stars (NSs). A NS binary endowed with non-trivial scalar monopole configurations would lose energy through dipole emission~\cite{Will:2014kxa} and this would affect the orbital dynamics. More specifically, this would be an additional, lower-order contribution to the usual quadrupolar GW emission that changes the period of binary pulsars. Hence, binary pulsar constraints are extremely efficient in constraining spontaneous scalarization, and the original scenario is almost ruled out \cite{Freire:2012mg,Shao:2017gwu}, with the exception of rapidly rotating stars. However, dipolar emission can be avoided if the scalar-scalar interaction between the stars is suppressed. This is the case if the scalar is massive \cite{Alsing:2011er,Ramazanoglu:2016kul}. The mass defines a characteristic distance beyond which the fall-off of the scalar profile is exponential (as opposed to $1/r$), so if the value of the mass is in the right range, the stars can be scalarized and yet the binary will not exhibit any appreciable dipolar emission above a given separation. One can then evade binary pulsar constraints and still hope to see some effect in GW emission from the late inspiral, merger, and ringdown. There is also the possibility that NSs will \emph{dynamically} scalarize once they come close enough to each other~\cite{Barausse:2012da,Palenzuela:2013hsa,Shibata:2013pra,Taniguchi:2014fqa,Sennett:2016rwa}. It should be noted that the known scalarization scenario faces the following issue. Usually, calculations assume flat asymptotics with a vanishing gradient for the scalar. 
However, realistically the value of the scalar far away from the star changes on cosmological timescales, and it turns out that this change is sufficient to create conflict with observations~\cite{Sampson:2014qqa,Anderson:2017phb}. This problem can be pushed into the future by tuning initial data, so technically it can easily be avoided. Nonetheless, one would eventually need to do away with the tuning, presumably by improving the model. Black holes in scalar-tensor theories have received considerable attention, most notably in the context of no-hair theorems ({\em e.g.}~\cite{Hawking:1972qk,Bekenstein:1995un,Sotiriou:2011dz}) and their evasions (see also Ref.~\cite{Sotiriou:2015pka} for a discussion). Sections~\ref{sec:nohair} and \ref{sec:hairyBHs} are entirely devoted to this topic, so we will not discuss it further here. The action of eq.~(\ref{staction}) is the most general one can write that is quadratic in the first derivatives of the scalar, and hence it presents itself as a sensible effective field theory for a scalar field nonminimally coupled to the metric. However, this action can be generalized further if one is willing to include more derivatives. 
Imposing the requirement that variation leads to field equations that are second order in derivatives of both the metric and the scalar leads to the action \begin{eqnarray} \label{hdaction} S_H&=&\int d^4 x \sqrt{-g}\left(L_2+L_3+L_4+L_5\right)\,, \end{eqnarray} where \begin{eqnarray} L_2 &=& K(\phi,X) , \\ L_3 &=& -G_3(\phi,X) \Box \phi , \\ L_4 &=& G_4(\phi,X) R + G_{4X} \left[ (\Box \phi)^2 -(\nabla_\mu\nabla_\nu\phi)^2 \right] , \\ L_5 &=& G_5(\phi,X) G_{\mu\nu}\nabla^\mu \nabla^\nu \phi \nonumber\\ &&\qquad- \frac{G_{5X}}{6} \left[ (\Box \phi)^3 - 3\Box \phi(\nabla_\mu\nabla_\nu\phi)^2 + 2(\nabla_\mu\nabla_\nu\phi)^3 \right] \,, \end{eqnarray} where $X\equiv-\frac{1}{2} \nabla^\mu \phi \nabla_\mu \phi$, $G_i$ need to be specified to pin down a model within the class, and $G_{iX}\equiv \partial G_i/\partial X$. Horndeski was the first to write down this action in an equivalent form~\cite{Horndeski:1974wa} and it was rediscovered fairly recently in Ref.~\cite{Deffayet:2009mn}. Theories described by this action are referred to as {\em generalized scalar-tensor theories}, Horndeski theories, {\em generalized galileons}, or simply scalar-tensor theories. We will reserve the last term here for the action~(\ref{staction}) in order to avoid confusion. The term generalized galileons comes from the fact that these theories were (re)discovered as curved space generalization of a class of flat space scalar theories that are symmetric under $\phi \to \phi + c_\mu x^\mu +c$, where $c_\mu$ is a constant one-form and $c$ is a constant (Galilean symmetry) \cite{Nicolis:2008in}. In fact, the numbering of the $L_i$ terms in the Lagrangian is a remnant of the original Galileons, where the $i$ index indicates the number of copies of the field. This is no longer true in action (\ref{hdaction}), but now the $L_i$ term contains $i-2$ second derivatives of the scalar. 
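As a consistency check on the notation just introduced, the simplest choice of the free functions reduces the action to GR with a minimally coupled scalar (a sketch using the conventions above):

```latex
% With K(\phi,X)=X, G_3=0, G_4=(16\pi G)^{-1}, and G_5=0, all
% X-derivatives of G_4 vanish and the Lagrangians collapse to
\begin{equation}
S_H \;\to\; \int d^4x\,\sqrt{-g}\,
\Big(\frac{R}{16\pi G}-\frac{1}{2}\nabla^\mu\phi\nabla_\mu\phi\Big)\,,
\end{equation}
% using X \equiv -\tfrac{1}{2}\nabla^\mu\phi\nabla_\mu\phi.
```

Any departure from these constant or vanishing choices of the $G_i$ is thus a genuine deformation away from GR plus a minimal scalar.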
As regards strong gravity phenomenology, a lot of attention has been given to the shift-symmetric version of action (\ref{hdaction}). If $G_i=G_i(X)$, {\em i.e.}~$\partial G_i/\partial \phi=0$, then (\ref{hdaction}) is invariant under $\phi \to \phi+$constant. This symmetry protects the scalar from acquiring a mass from quantum corrections. It has been shown in Ref.~\cite{Hui:2012qt} that asymptotically flat, static, spherically symmetric BHs in shift-symmetric Horndeski theories have a trivial (constant) scalar configuration and are hence described by the Schwarzschild solution. This proof can be extended to slowly-rotating BHs, in which case the solution is the slowly rotating limit of the Kerr spacetime~\cite{Sotiriou:2013qea}. However, there is a specific, unique term in (\ref{hdaction}) that circumvents the no-hair theorem and leads to BHs that differ from those of GR and have nontrivial scalar configurations (hairy BHs) \cite{Sotiriou:2013qea}: $\phi {\cal G}$, where ${\cal G}\equiv R^2 - 4 R_{\mu\nu} R^{\mu\nu} + R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}$ is the Gauss-Bonnet invariant.\footnote{Though this term does not seem to be part of action (\ref{hdaction}), it actually corresponds to the choice $G_5\propto \ln|X|$~\cite{Kobayashi:2011nu}.} This singles out \begin{equation} \label{phiG} S_{\phi{\cal G}}=\frac{1}{16\pi G}\int d^4x \sqrt{-g} \Big(R-\partial^\mu \phi \partial_\mu \phi+\alpha\phi {\cal G}\Big)+S_m(g_{\mu\nu},\psi)\,, \end{equation} as the simplest action within the shift-symmetric Horndeski class that has hairy BHs. $\alpha$ is a coupling constant with dimensions of length squared. Hairy BHs in this theory \cite{Yunes:2011we,Sotiriou:2014pfa} are very similar to those found earlier in the (non-shift-symmetric) theory with exponential coupling \cite{Campbell:1991kz,Kanti:1995vq}, {\em i.e.}~$e^\phi {\cal G}$. 
The theory with exponential coupling is known as Einstein-dilaton Gauss-Bonnet theory and it arises as a low-energy effective theory of heterotic strings~\cite{Metsaev:1987zx,Maeda:2009uy}. The subclass of theories described by the action \begin{equation} \label{fG} S_{f(\phi){\cal G}}=\frac{1}{16\pi G}\int d^4x \sqrt{-g} \Big(R-\frac{1}{2}\partial^\mu \phi \partial_\mu \phi+f(\phi){\cal G}\Big)+S_m(g_{\mu\nu},\psi)\,, \end{equation} seems to have interesting BH phenomenology in general. This theory admits GR solutions if $f'(\phi_0)=0$ for some constant $\phi_0$. Assuming this is the case, it has recently been proven in Ref.~\cite{Silva:2017uqg} that stationary, asymptotically flat BH solutions will be identical to those of GR, provided $f''(\phi_0){\cal G}<0$. A similar proof, but restricted to spherical symmetry, can be found in Ref.~\cite{Antoniou:2017acq}. Theories that do not satisfy the condition $f''(\phi_0){\cal G}<0$ exhibit an interesting phenomenon \cite{Doneva:2017bvd,Silva:2017uqg}: BH scalarization, similar to the NS scalarization discussed above. See also Ref.~\cite{Antoniou:2017hxj} for a study of hairy BH solutions in this class of theories and Ref.~\cite{Blazquez-Salcedo:2018jnn} for a very recent exploration of linear stability. The same class of theories can exhibit spontaneous scalarization in NSs~\cite{Silva:2017uqg}. On the other hand, in shift-symmetric Horndeski theories it is known that for NSs the scalar configuration will be trivial~\cite{Barausse:2015wia,Barausse:2017gip,Lehebel:2017fag}, provided that the $\phi {\cal G}$ term is absent. Even if it is present, the scalar exhibits a faster-than-$1/r$ fall-off \cite{Yagi:2011xp,Yagi:2015oca}. Hence, for shift-symmetric theories BH binaries are probably the prime strong gravity system of interest. 
It should be noted that the speed of GWs in generalized scalar-tensor theories can differ from the speed of light under certain circumstances, and this has been used to obtain constraints recently~\cite{Lombriser:2016yzn,Lombriser:2015sxa,Sakstein:2017xjx,Ezquiaga:2017ekz,Creminelli:2017sry,Baker:2017hug}. These will be discussed in Sec.~\ref{sec:propagation}. As mentioned above, action (\ref{hdaction}) is the most general one that leads to second order equations upon variation. Nonetheless, it has also been shown that there exist theories outside this class that contain no other degrees of freedom than the metric and the scalar field \cite{Zumalacarregui:2013pma,Gleyzes:2014dya,Gleyzes:2014qga,Langlois:2015cwa,Crisostomi:2016czh,Achour:2016rkg}. These models are often referred to as {\em beyond-Horndeski} theories. Last but not least, dynamical Chern-Simons (dCS) gravity~\cite{Jackiw:2003pm,Alexander:2009tp} is a scalar-tensor theory that introduces gravitational parity violation at the level of the field equations. It draws motivation from the standard model~\cite{Weinberg:1996kr}, heterotic superstring theory~\cite{Green:1987mn}, loop quantum gravity~\cite{Ashtekar:1988sw,Alexander:2004xd,Taveras:2008yf,Calcagni:2009xz,Gates:2009pt} and effective field theories for inflation~\cite{Weinberg:2008hq}. dCS gravity is not a member of the Horndeski class. The scalar is actually a pseudo-scalar, {\em i.e.}~it changes sign under a parity transformation. Moreover, it couples to the Pontryagin density $*R^{\alpha\phantom{a}\gamma\delta}_{\phantom{a}\beta} R_{\phantom{a}\alpha\gamma\delta}^{\beta}$, where $R_{\phantom{a}\beta\gamma\delta}^{\alpha}$ is the Riemann tensor, $*R^{\alpha\phantom{a}\gamma\delta}_{\phantom{a}\beta} \equiv {1\over2} \epsilon^{\gamma\delta\mu\nu}R_{\phantom{a}\beta\mu\nu}^{\alpha}$ and $\epsilon^{\gamma\delta\mu\nu}$ is the Levi-Civita tensor. This coupling term gives rise to third-order derivatives in the field equations. 
The fact that the theory has higher-order equations implies that it is unlikely to be predictive as it stands~\cite{Delsate:2014hba} (see also \cite{Crisostomi:2017ugk}). Hence, most work has circumvented this problem with an approach inspired by effective field theory, which relies on allowing the coupling term to introduce only perturbative corrections. This issue will be discussed in some detail in Sec.~\ref{IVPandNumerics}. dCS gravity has received a lot of attention in the strong gravity regime. Non-rotating BH and NS solutions are the same as in GR, as such solutions do not break parity. On the other hand, slowly-rotating~\cite{Yunes:2009hc,Konno:2009kg,Yagi:2012ya}, rapidly-rotating~\cite{Stein:2014xba} and extremal~\cite{McNees:2015srl,Chen:2018jed} BHs and slowly-rotating NSs~\cite{Yagi:2013mbt} all differ from the corresponding GR solutions. In particular, such rotating compact objects acquire a scalar \emph{dipole} hair and a correction to the quadrupole moment, which modify gravitational waveforms from compact binary inspirals~\cite{Sopuerta:2009iy,Pani:2011xj,Yagi:2012vf,Canizares:2012is} and mergers~\cite{Okounkova:2017yby}. Future GW observations will allow us to place bounds on the theory that are orders of magnitude stronger than the current bounds from Solar System~\cite{AliHaimoud:2011fw} and table-top experiments~\cite{Yagi:2012ya}. Another interesting phenomenon in dCS gravity is gravitational amplitude birefringence, where right-handed and left-handed circularly-polarized GWs are amplified and suppressed by opposite amounts during their propagation from a source to Earth; this can be used to probe the evolution of the pseudo-scalar field~\cite{Alexander:2007kv,Yunes:2008bu,Yunes:2010yf}. Such GW observations are complementary~\cite{Yagi:2017zhb} to Solar System~\cite{Smith:2007jm} and binary pulsar~\cite{Yunes:2008ua,AliHaimoud:2011bk} probes. 
For a general introduction to standard scalar-tensor theories see Refs.~\cite{Fujii:2003pa,2004cstg.book.....F}, for generalized scalar-tensor theories see Ref.~\cite{Deffayet:2013lga}, and for a recent brief review on gravity and scalar fields see Ref.~\cite{Sotiriou:2015lxa}. \subsection{Lorentz violations} \vspace{-3mm} {\it Contributor:} T.~P.~Sotiriou \vspace{3mm} \label{sec:LVtheories} Lorentz symmetry is a fundamental symmetry of the standard model of particle physics, hence it is important to question whether it plays an equally important role in gravity. To address this question one needs to study Lorentz-violating (LV) theories of gravity, for two distinct reasons: (i) to quantify how much one is allowed to deviate from GR without contradicting {\em combined} observations that are compatible with Lorentz symmetry, one needs to model the deviations in the framework of a consistent theory; (ii) studying the properties of such theories can provide theoretical insights. We will briefly review two examples of LV theories here. Einstein-aether theory (\ae-theory) \cite{Jacobson:2000xp} is described by the action \begin{equation} \label{Saetheory} S_{ae}= \frac{1}{16\pi G}\int d^{4}x\sqrt{-g} (-R -M^{\alpha\beta}{}_{\mu\nu} \nabla_\alpha u^\mu \nabla_\beta u^\nu) \end{equation} where \begin{equation} M^{\alpha\beta}{}_{\mu\nu} = c_1 g^{\alpha\beta}g_{\mu\nu}+c_2\delta^{\alpha}_{\mu}\delta^{\beta}_{\nu} +c_3 \delta^{\alpha}_{\nu}\delta^{\beta}_{\mu}+c_4 u^\alpha u^\beta g_{\mu\nu}\,, \end{equation} the $c_i$ are dimensionless coupling constants and the aether $u_\mu$ is forced to satisfy the constraint $u^2\equiv u^\mu u_\mu=1$. This constraint can be enforced by adding to the action the Lagrange multiplier term $\lambda (u^2-1)$ or by restricting the variation of the aether so as to respect the constraint. 
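To anticipate the role of the $c_i$, a frequently quoted result (standard in the \ae-theory literature, though conventions vary) is the propagation speed of the spin-2 mode that corresponds to GWs,
\begin{equation}
s_2^2=\frac{1}{1-c_{13}}\,, \qquad c_{13}\equiv c_1+c_3\,,
\end{equation}
in units of the speed of light, so the double-sided GW-speed bound discussed in Sec.~\ref{sec:propagation} essentially forces $c_{13}\simeq 0$.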
Due to the constraint, the aether has to be timelike, thereby defining a {\em preferred threading} of spacetime by timelike curves, exactly like an observer would. It cannot vanish in any configuration, including flat space. Hence, local Lorentz symmetry, and more precisely boost invariance, is violated. Action (\ref{Saetheory}) is the most general one that is quadratic in the first derivatives of a vector field that couples to GR and satisfies the unit constraint. Indeed, \ae-theory is a good effective field theory for Lorentz-symmetry breaking with this field content \cite{Withers:2009qg}. It should be emphasised that the aether brings 3 extra degrees of freedom into the theory, 2 vector modes and a scalar (longitudinal) mode. The speed of any mode, including the usual spin-2 mode that corresponds to GWs, can differ from the speed of light and is controlled by a certain combination of the $c_i$ parameters. One can straightforwardly obtain an LV theory with a smaller field content from action (\ref{Saetheory}) by restricting the aether to be hypersurface orthogonal \cite{Jacobson:2010mx}. This amounts to imposing \begin{equation} \label{ho} u_\mu=\frac{\partial_\mu T}{\sqrt{g^{\lambda\nu}\partial_\lambda T \partial_\nu T}}\, \end{equation} where $T$ is a scalar and the condition is imposed before the variation ({\em i.e.~}one varies with respect to $T$). This condition already incorporates the unit constraint $u^2=1$ and removes the vector modes from the theory. It can be used to rewrite the action in terms of the metric and $T$ only. $T$ cannot vanish in any regular configuration that solves the field equations and its gradient will always be timelike. Hence, the aether is always orthogonal to some spacelike hypersurfaces over which $T$ is constant. That is, $T$ defines a preferred foliation and acts as a preferred time coordinate. 
Note that $T$ appears in the action only through $u_\mu$, and the latter is invariant under $T\to f(T)$, so this is a symmetry of the theory and the preferred foliation can be relabelled freely. One can actually use the freedom to choose the time coordinate and write this theory directly in the preferred foliation \cite{Jacobson:2010mx}. Indeed, if $T$ is selected as the time coordinate, the $T=$ constant surfaces define a foliation by spacelike hypersurfaces with extrinsic curvature $K_{ij}$, while $N$ and $N^i$ are the lapse function and shift vector of this foliation. Then $u_\mu=N \delta^0_\mu$, and action (\ref{Saetheory}) takes the form \begin{equation} \label{SBPSHIR} S_T= \frac{1}{16\pi G'}\int dT d^3x \, N\sqrt{h}\left(K_{ij}K^{ij} - \lambda K^2 + \xi {}^{(3)}\!R + \eta a_ia^i \right)\,, \end{equation} where $a_i\equiv \partial_i \ln N$ and \begin{equation} \label{HLpar} \frac{G'}{G}=\xi=\frac{1}{1-(c_{1}+c_3)}, \quad \lambda=\frac{1+c_2}{1-(c_{1}+c_{3})},\quad \eta=\frac{c_1+c_4}{1-(c_1+c_3)}\,. \end{equation} Since $T$ was chosen as a time coordinate, action (\ref{SBPSHIR}) is no longer invariant under general diffeomorphisms. It is, however, invariant under the subset of diffeomorphisms that preserve the foliation, or else the transformations $T\to T'=f(T)$ and $x^i\to x'^i=x'^i(T,x^i)$, which are now the defining symmetry of the theory from an EFT perspective. In fact, this action can be thought of as the infrared (low-energy) limit of a larger theory with the same symmetry, called Ho\v rava gravity \cite{Horava:2009uw}. The most general action of Ho\v rava gravity is \cite{Blas:2009qj} \begin{equation} \label{SBPSHfull} S_{H}= \frac{1}{16\pi G_{H}}\int dT d^3x \, N\sqrt{h}\left(L_2+\frac{1}{M_\star^2}L_4+\frac{1}{M_\star^4}L_6\right)\,, \end{equation} where $L_2$ is precisely the Lagrangian of action (\ref{SBPSHIR}). 
$L_4$ and $L_6$ collectively denote all the operators that are compatible with this symmetry and contain 4 and 6 spatial derivatives respectively (in the $T$ foliation). The existence of higher order spatial derivatives implies that perturbations will satisfy higher order dispersion relations, and $M_\star$ is a characteristic energy scale that suppresses the corresponding terms. Ho\v rava gravity has been argued to be power-counting renormalizable and hence a quantum gravity candidate \cite{Horava:2009uw}. The basic principle is that the higher-order spatial derivative operators modify the behaviour of the propagator in the ultraviolet and remove/regulate divergences (see Refs.~\cite{Visser:2009fg,Visser:2009ys} for a discussion in the context of a simple scalar model). The {\em projectable} version of the theory \cite{Horava:2009uw,Sotiriou:2009gy}, in which the lapse $N$ is restricted to have vanishing spatial derivatives, has actually been shown to be renormalizable (beyond power-counting) in 4 dimensions \cite{Barvinsky:2015kil}, whereas the 3-dimensional version \cite{Sotiriou:2011dr} is asymptotically free \cite{Barvinsky:2017kob}. This restricted version does however suffer from serious viability issues at low energies \cite{Horava:2009uw,Sotiriou:2009bx,Charmousis:2009tc,Blas:2009yd,Koyama:2009hc}. We will not discuss the precise form of $L_4$ and $L_6$ any further, as it is not important for low-energy physics. It should be stressed though that such terms lead to higher order corrections to the dispersion relation of GWs and that of the extra scalar excitation Ho\v rava gravity propagates. This implies that the speeds of GW polarizations can differ from the speed of light and acquire a frequency dependence. The relevant constraints from GW observations are discussed in Sec.~\ref{sec:propagation} (see also \cite{Sotiriou:2017obf} for a critical discussion). 
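Schematically (with the dimensionless coefficients of the higher-order terms suppressed), the resulting dispersion relation for a propagating mode takes the form
\begin{equation}
\omega^2=s^2 k^2\left(1+\frac{k^2}{M_\star^2}+\frac{k^4}{M_\star^4}\right)\,,
\end{equation}
where $s$ is the low-energy speed of the mode, and the $L_4$ and $L_6$ operators generate the $k^4$ and $k^6$ contributions respectively.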
At large wavenumber $k$, the dispersion relations scale as $\omega^2 \propto k^6$ (since by construction the leading order ultraviolet operators in the action contain 2 temporal and 6 spatial derivatives) and this implies that an excitation can reach any speed, provided it has a short enough wavelength. More surprisingly, it has been shown that Ho\v rava gravity exhibits instantaneous propagation even in the infrared limit, described by action (\ref{SBPSHIR}). In this limit, dispersion relations are linear but, both at the perturbative \cite{Blas:2011ni} and the nonperturbative level \cite{Bhattacharyya:2015uxt}, there exists an extra degree of freedom that satisfies an elliptic equation on constant preferred time hypersurfaces. This equation is not preserved by time evolution and hence it transmits information instantaneously within these hypersurfaces. For a detailed discussion see Ref.~\cite{Bhattacharyya:2015gwa}. It should be clear from the discussion above that both \ae-theory and Ho\v rava gravity can exhibit superluminal propagation, and this makes BHs particularly interesting in these theories. For both theories BHs differ from their GR counterparts and have very interesting features that will be discussed further in Sec.~\ref{sec:hairyBHs}. For a review see Ref.~\cite{Barausse:2013nwa}. For early reviews on \ae-theory and Ho\v rava gravity see Refs.~\cite{Jacobson:2008aj} and \cite{Sotiriou:2010wn}, respectively. For the most recent {\em combined} constraints see Refs.~\cite{Gumrukcuoglu:2017ijh,Oost:2018tcv} and references therein. \subsection{Massive gravity and tensor-tensor theories} \vspace{-3mm} {\it Contributor:} S.~F.~Hassan \vspace{3mm} \label{sec:massivegrav} In the appropriate limit GR reproduces Newton's famous inverse square law of gravitation. At the same time, linear excitations in GR (GWs) satisfy a linear dispersion relation without a mass term. 
In particle theory terminology, both of these statements imply that the graviton --- the boson that mediates the gravitational interaction --- is massless. Extensions of GR that attempt to give the graviton a mass are commonly referred to as massive gravity, bimetric gravity, and multimetric gravity theories. One of the original motivations for the study of these theories was the hope that they could provide a resolution to the cosmological constant problem by changing the gravitational interaction beyond some length scale (controlled by the mass of the graviton). However, a more fundamental motivation emerges from comparing the structure of GR with that of the standard model of particle physics, which successfully accounts for all known particle interactions. This is elaborated below. The standard model describes particle physics in terms of fields of spin 0 (the Higgs field), spin 1/2 (leptons and quarks) and spin 1 (gauge bosons). At the most basic level, these fields are governed by the respective field equations, namely, the Klein-Gordon equation, the Dirac equation and the Maxwell/Yang-Mills/Proca equations. The structures of these equations are uniquely fixed by the spin and mass of the field along with basic symmetry and consistency requirements, in particular the absence of ghost instabilities, for example, in spin-1 theories. From this point of view, GR is the unique ghost-free theory of a massless spin-2 field: the metric $g_{\mu\nu}$. Then the Einstein equation, $R_{\mu\nu} - \frac{1}{2} g_{\mu\nu} R=0$, with all its intricacies is simply the counterpart for a spin-2 field of the massless Klein-Gordon equation $\Box \phi=0$, or of the Maxwell equation $\partial_\mu F^{\mu\nu}=0$. 
However, to describe physics, the standard model contains a second level of structure: at the fundamental level, all fields appear in multiplets allowing them to transform under symmetries, in particular under gauge symmetries, which are in turn responsible for the renormalizability and hence for the quantum consistency of the theory. The observed fields are related to the fundamental ones through spontaneous symmetry breaking and through mixings. For example, to obtain a single Higgs field, one needs to start with four scalar fields arranged in a complex $SU(2)$ doublet, rather than the one scalar field of the final theory. Similarly, although electrodynamics is very well described by Maxwell's equations as a standalone theory, its origin in the standard model is much more intricate, requiring the spontaneous breaking of a larger gauge symmetry and a specific mixing of the original gauge fields into the vector potential of electromagnetism. Now, one may contrast these intricate features of the standard model with GR, which is the simplest possible theory of a single massless spin-2 field with no room for any further structure. The comparison suggests embedding GR in setups with more than one spin-2 field and investigating the resulting models. The aim would be to see if the new structures could address some of the issues raised above, either in analogy with the standard model or through new mechanisms. This would shed light on a corner of the theory space that has remained unexplored so far. However, implementing this program is not as straightforward as it may seem. It has been known since the early 1970s that adding other spin-2 fields to GR generically leads to ghost instabilities. The instability was first noted in the context of massive gravity, which contains an interaction potential between the gravitational metric $g_{\mu\nu}$ and a nondynamical, predetermined metric $f_{\mu\nu}$. However, it also plagues dynamical theories of two or more spin-2 fields. 
Recent developments since 2010 have shown that one can indeed construct ghost-free theories of massive gravity (with a fixed auxiliary metric) \cite{deRham:2010kj,Hassan:2011hr}, or fully dynamical ghost-free bimetric theories \cite{Hassan:2011zd}. In particular, the bimetric theory is the theory of a gravitational metric that interacts with an extra spin-2 field in specific ways and contains several parameters. It propagates one massless and one massive spin-2 mode, which are combinations of the two metrics. Consequently, it contains massive gravity and GR as two different limits. For a review of bimetric theory see \cite{Schmidt-May:2015vnx}. In the massive gravity limit, as the name suggests, the theory has a single massive spin-2 field that mediates gravitational interactions in the background of a fixed auxiliary metric. The most relevant parameter then is the graviton mass, which is constrained by different observations (for a review see \cite{deRham:2016nuf}). In particular, consistency with the detection of GWs by LIGO restricts the graviton mass to $m < 7.7\times 10^{-23}{\rm eV}$ \cite{TheLIGOScientific:2016src,Abbott:2017vtc}. However, in this limit, it becomes difficult to describe both large scale and small scale phenomena consistently with the same fixed auxiliary metric. Of course, the bimetric theory is more relevant phenomenologically close to the GR limit, where gravity is described by a mostly massless metric interacting with a massive spin-2 field. Now the two most relevant parameters are the spin-2 mass $m$ and the Planck mass $M$ of the second spin-2 field, with $M\ll M_P$, where $M_P$ is the usual Planck mass of the gravitational metric. The presence of the second parameter $M$ significantly relaxes the bounds on the spin-2 mass $m$. In the most commonly assumed scenarios, the stability of cosmological perturbations in the very early Universe requires $M \lesssim 100 {\rm GeV}$ \cite{Akrami:2015qga}. 
In this case, GWs will contain a very small admixture of the massive mode; hence, $m$ is not strongly constrained by LIGO and can be large. A new feature is that now the massive spin-2 field can be a DM candidate. Depending on the specific assumptions, the relevant allowed mass ranges are estimated as $10^{-4} {\rm eV}\lesssim m \lesssim 10^7 {\rm eV}$ \cite{Aoki:2017cnz}, or the higher range $1 {\rm TeV} \lesssim m \lesssim 100 {\rm TeV}$ \cite{Babichev:2016bxi}. While some studies of cosmology in bimetric theory have been carried out, many important features of the theory are yet to be fully explored. One of these is the problem of causality and the well-posedness of the initial value problem. In particular, bigravity provides a framework for investigating models where the causal metric need not always correspond to the gravitational metric \cite{Hassan:2017ugh,Kocic:2018yvr}. Another interesting question is whether the bimetric interaction potential can be generated through some kind of Higgs mechanism, in analogy with the Proca mass term for spin-1 fields. From a phenomenological point of view, it is interesting to investigate the massive spin-2 field present in the theory as a DM candidate. Strong field effects in bimetric gravity are expected to show larger deviations from GR than weak field effects. While ghost-free bimetric models have interesting properties, they do not admit enlarged symmetries, since the two spin-2 fields have only nonderivative interactions. Constructing ghost-free theories of spin-2 multiplets with gauge or global symmetries is a more difficult problem, which requires constructing ghost-free derivative interactions. So far this is an unresolved problem, which needs to be further investigated. If such theories exist, they will provide the closest analogues of the standard model in the spin-2 sector. 
\section{Detecting new fields in the strong gravity regime}\label{Sec:detection} As has been discussed in some detail in Section \ref{Sec:beyondGR}, if new physics makes an appearance in the strong gravity regime, it can most likely be modelled as a new field that couples to gravity. In order to detect this field with GW observations, one has to model its effects on the dynamics of the binary system and produce waveforms that can be compared with observations. The 3 distinct phases of a binary coalescence --- inspiral, merger, and ringdown --- require different modelling techniques. The inspiral and the ringdown are amenable to different types of perturbative treatment. During most of the inspiral, the two members of the binary are well separated and, even though they are strongly gravitating objects individually, they interact weakly with each other. Hence, one can model this interaction perturbatively and this modelling can be used to produce a partial waveform for the inspiral phase. It should be stressed that this gives rise to a perturbation expansion whose parameters depend on the nature and structure of the members of the binary. Computing these parameters, known as {\em sensitivities}, generally requires modelling the objects composing the binary in a nonperturbative fashion. Moreover, if the objects are endowed with extra field configurations, then one needs to model them as moving in an ambient field (in principle created by the companion), in order to faithfully capture the extra interaction. This is the same calculation one performs to obtain constraints from binary pulsars~\cite{Yagi:2013qpa,Sampson:2014qqa}. An alternative is to try to prepare theory-agnostic, parametrised templates. Three different ways to achieve this will be discussed in detail in Sec.~\ref{sec:inspiral}. The ringdown can be modelled with linear perturbation theory around a known configuration for a compact object, but there are significant challenges. 
The most obvious ones are: (i) one needs first to have fully determined the structure of the quiescent object that will be used as a background; (ii) the equations describing linear perturbations around such a background are not easy to solve ({\em e.g.}~separability is an issue); (iii) both the background and the perturbation equations depend crucially on the scenario one is considering. An in-depth discussion of perturbation theory around BHs, and its use to describe the ringdown phase, will be given in Sec.~\ref{sec:ringdown}. The merger phase is highly nonlinear and modelling it requires full-blown numerical simulations. A prerequisite for performing such simulations is to have an initial value formulation and to show that the {\em initial value problem} (IVP) is well-posed (in an appropriate sense, as will be discussed in more detail below). This has not been addressed for the vast majority of gravity theories that are currently being considered. Sec.~\ref{IVPandNumerics} focuses on the current state of the art in modelling the merger phase. It contains a critical discussion of the IVP in alternative theories of gravity and a summary of known numerical results. Last but not least, one could attempt to detect new fields (polarisations) by the effect they have on GW propagation, as opposed to generation. This amounts to parametrizing the dispersion relation of the GWs and constraining the various parameters that enter this parametrisation, such as the mass, the speed, and the parameters of terms that lead to dispersive effects. The next section is devoted to propagation constraints. \subsection{Gravitational wave propagation} \vspace{-3mm} {\it Contributor:} T. P. Sotiriou \vspace{3mm} \label{sec:propagation} According to GR, GWs are massless and propagate at the speed of light, $c$. Hence, they satisfy the dispersion relation $E^2=c^2 p^2$, where $E$ is energy and $p$ is momentum. Deviations from GR can introduce modifications to this dispersion relation. 
LIGO uses the following parametrisation to model such deviations: $E^2=c^2 p^2+A c^\alpha p^\alpha$, with $\alpha\geq 0$ \cite{Mirshekari:2011yq}\footnote{A more general parametrisation \cite{Kostelecky:2016kfm} has been used in Ref.~\cite{Monitor:2017mdv}.}. When $\alpha=0$, $A$ represents a mass term in the dispersion relation. The current upper bound on the mass from GW observations is $7.7\times 10^{-23}{\rm eV}/c^2$ \cite{Abbott:2017vtc}. When $\alpha=2$, $A$ represents a correction to the speed of GWs. The fact that GWs from the BNS merger GW170817 almost coincided with the gamma ray burst GRB 170817A has given a spectacular double-sided constraint on the speed of GWs, which should agree with the speed of light to better than a part in $10^{15}$~\cite{Monitor:2017mdv,TheLIGOScientific:2017qsa}. All other corrections introduce dispersive effects which accumulate with the distance the wave had to travel to reach the detector. This enhances the constraints in principle. However, it should be noted that $A$ has dimensions $[A]=[E]^{2-\alpha}$, which implies that corrections with $\alpha>2$ are suppressed by positive powers of $c p/M_\star$, where $M_\star$ denotes some characteristic energy scale; it is useful to think of the constraints on $A$ as constraints on $M_\star$. GWs have long wavelengths, so one expects $\alpha>2$ corrections to be heavily suppressed unless the aforementioned energy scale is unexpectedly small. For instance, for $\alpha=3$ the current upper bound on $M_\star$ is in the $10^9\,{\rm GeV}$ range, whereas for $\alpha=4$ it drops well below the eV range \cite{Abbott:2017vtc,Mirshekari:2011yq,Yunes:2016jcc,Sotiriou:2017obf}. It is important to stress that all of the aforementioned constraints need to be interpreted within the framework of a theory (as is the case for any parametrisation). The mass bound can be turned into a constraint on the mass of the graviton in massive gravity theories, discussed in Sec.~\ref{sec:massivegrav}. 
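As a rough numerical illustration (the distance and time delay below are round illustrative figures, not the precise values used in the published analyses), the two headline bounds above can be checked with a few lines of arithmetic:

```python
# Back-of-the-envelope checks of the two propagation bounds quoted above.
# The GW170817 distance (~40 Mpc) and delay (~1.7 s) are round illustrative
# figures, not the values used in the published LIGO/Virgo analyses.

HBAR_C_EV_M = 1.9732705e-7   # hbar*c in eV*m
C = 2.998e8                  # speed of light in m/s
MPC_M = 3.086e22             # one megaparsec in metres

def compton_wavelength_m(m_ev):
    """Reduced Compton wavelength hbar/(m*c) for a mass given in eV/c^2."""
    return HBAR_C_EV_M / m_ev

def speed_deviation(distance_mpc, delay_s):
    """Fractional speed difference |c_gw - c|/c implied by an arrival-time
    difference delay_s accumulated over distance_mpc."""
    return C * delay_s / (distance_mpc * MPC_M)

# m_g < 7.7e-23 eV/c^2 corresponds to lambda_g > ~2.6e15 m (~0.1 pc):
print(f"lambda_g > {compton_wavelength_m(7.7e-23):.2e} m")

# ~1.7 s delay over ~40 Mpc gives |c_gw - c|/c of a few times 1e-16:
print(f"|c_gw - c|/c ~ {speed_deviation(40.0, 1.7):.1e}")
```

The graviton-mass bound thus corresponds to a Compton wavelength of astrophysical (sub-parsec) size, while the timing coincidence alone pins the GW speed to the quoted part-in-$10^{15}$ level.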
Interestingly, it is not the strongest constraint one can obtain \cite{deRham:2016nuf,Will:2018gku}. All other bounds can in principle be interpreted as constraints on Lorentz symmetry. However, terms with odd powers of $\alpha$ require spatial parity violations as well, and the bounds on terms with $\alpha>2$ are probably too weak to give interesting constraints on Lorentz-violating gravity theories \cite{Sotiriou:2017obf}. The double-sided bound on the speed of GWs certainly yields a very strong constraint on Lorentz-violating theories of gravity. However, one has to take into account that such models generically have a multidimensional parameter space even at low energies and generically exhibit extra polarisations, the existence/absence of which can potentially give stronger or additional constraints \cite{Sotiriou:2017obf}. For details on the implications of this bound for Ho\v rava gravity and for Einstein-\ae ther theory, discussed in Sec.~\ref{sec:LVtheories}, see Refs.~\cite{Gumrukcuoglu:2017ijh} and \cite{Oost:2018tcv} respectively. Lorentz symmetry breaking is not the only scenario in which GWs propagate at a different speed than light. Lorentz-invariant theories can exhibit such behaviour as well if they satisfy two conditions: (i) they have extra fields that are in a nontrivial configuration between the emission and the detection locations; (ii) these fields couple to metric perturbations in a way that modifies their propagation. The second condition generally requires specific types of nonminimal coupling between the extra fields and gravity. It is satisfied by certain generalised scalar-tensor theories \cite{Lombriser:2015sxa}, discussed in Sec.~\ref{sec:sttheories}. Indeed, the bound on the speed has been used to obtain strong constraints on such theories as models of dark energy~\cite{Lombriser:2015sxa,Lombriser:2016yzn,Sakstein:2017xjx,Ezquiaga:2017ekz,Creminelli:2017sry,Baker:2017hug}. 
It should be stressed, however, that, due to condition (i), such constraints are inapplicable if one does not require that the scalar field be the dominant cosmological contribution that drives the cosmic speed-up. Hence, they should not be interpreted as viability constraints on the theories themselves, but only on the ability of such models to account for dark energy. In summary, modifications in the dispersion relation of GWs can be turned into constraints on deviations from GR. How strong these constraints are depends on the theory or scenario one is considering. The clear advantage of propagation constraints is that they are straightforward to obtain and they do not require modeling of the sources in some alternative scenario. This, however, comes with a hidden caveat: they are intrinsically conservative, as they assume that there is no deviation from the GR waveform at the source. Moreover, as should be clear from the discussion above, propagation constraints have other intrinsic limitations and can only provide significant constraints for models that lead to deviations in the dispersion relation of GWs at relatively long wavelengths. Hence, one generally expects to get significantly stronger or additional constraints by considering the deviation from the GR waveform at the source. \subsection{Inspiral and parametrized templates} \vspace{-3mm} {\it Contributor:} K. Yagi \vspace{3mm} \label{sec:inspiral} \subsubsection{Post-Newtonian parametrisation} Rather than comparing GW data to template waveforms in non-GR theories to probe each theory one at a time, it is sometimes more efficient to prepare parameterized templates that capture deviations away from GR in a generic way, so that one can carry out model-independent tests of gravity with GWs. We will mainly focus on parameterized modifications in the inspiral part of the waveform and discuss what needs to be done to construct parameterized waveforms in the merger-ringdown part. 
One way to prepare such parameterized templates is to treat the coefficient of each post-Newtonian (PN) term in the waveform phase as an independent parameter~\cite{Arun:2006yw,Arun:2006hn,Mishra:2010tp}. For gravitational waveforms of non-spinning BBH mergers in GR, such coefficients only depend on the masses of the individual BHs. Thus, if GR is correct, any two measurements of these coefficients give us the masses, and the measurement of a third coefficient allows us to carry out a consistency test of GR. This idea is similar to what has been done with binary pulsar observations through measurements of post-Keplerian parameters~\cite{Stairs:2003eg,Perrodin:2017bxr}. However, it is nontrivial to perform such tests for spinning BBHs. Moreover, this formalism does not allow us to probe non-GR corrections entering at negative PN orders due to e.g. scalar dipole emission. \begin{figure}[t] \includegraphics[width=0.7\textwidth]{beta-constraint-only-GW-event-GW151226.pdf} \caption{\label{beta-bound} Upper bounds on the ppE parameter $\beta$ with GW150914 and GW151226 entering at different PN orders. For comparison, we also present bounds from binary pulsar observations and Solar System experiments. This figure is taken and edited from~\cite{Yunes:2016jcc}. } \end{figure} \subsubsection{Parameterized post-Einsteinian (ppE) formalism} In this formalism the modified waveform in the Fourier domain is given by $\tilde h (f) = \tilde h_{\mbox{\tiny GR}}(f)\, e^{i \beta v^{2n-5}}$~\cite{Yunes:2009ke}. Here $\tilde h_{\mbox{\tiny GR}}$ is the GR waveform, $\beta$ is the ppE parameter representing the magnitude of the non-GR deviation, while $v$ is the relative velocity of the binary components. The above correction enters at $n$th PN order. Since $n$ is arbitrary, this formalism can capture deviations from GR that enter at negative PN orders. 
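As a concrete sketch, the ppE correction above amounts to a frequency-dependent phase shift applied to a GR waveform. The snippet below is illustrative only: the construction of $v$ from a fiducial mass scale in geometric units is an assumption, since conventions in the literature differ (e.g. some authors build the expansion variable from the chirp mass).

```python
import numpy as np

def ppe_waveform(h_gr, f, m_sec, beta, n):
    """Apply the ppE phase correction h(f) = h_GR(f) * exp(i*beta*v^(2n-5)).

    h_gr  : complex frequency-domain GR waveform samples
    f     : frequency array [Hz]
    m_sec : fiducial mass scale in seconds (G*M/c^3); convention-dependent
    beta  : ppE amplitude parameter (beta = 0 recovers GR)
    n     : PN order at which the correction enters (may be negative)
    """
    v = (np.pi * m_sec * f) ** (1.0 / 3.0)      # characteristic orbital velocity
    return h_gr * np.exp(1j * beta * v ** (2 * n - 5))
```

Since the correction is a pure phase, the signal amplitude is untouched; a Fisher or Bayesian analysis then bounds $\beta$ at each assumed $n$, as in Fig.~\ref{beta-bound}.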
It has been used in~\cite{Yunes:2016jcc} to probe strong and dynamical field gravity with GW150914~\cite{Abbott:2016blz} and GW151226~\cite{Abbott:2016nmj}. Bounds on $\beta$ at each PN order using a Fisher analysis are shown in Fig.~\ref{beta-bound}. Here, the authors included non-GR corrections only in the inspiral part of the waveform. Having these bounds on generic parameters at hand, one can map them to bounds on violations of fundamental pillars of GR. For example, a $-4$PN deviation maps to a time variation of the gravitational constant $G$, which allows us to test the strong equivalence principle. 0PN (2PN) corrections can be used to probe violations of Lorentz (parity) invariance. \subsubsection{Phenomenological waveforms} The LIGO/Virgo Collaboration (LVC) used a generalized IMRPhenom waveform model~\cite{TheLIGOScientific:2016src} by promoting phenomenological parameters $\vec p$ in the GR waveform to $\vec p(1 + \delta p)$, where $\delta p$ corresponds to the fractional deviation from GR. For the inspiral part of the waveform, one can show that these waveforms are equivalent to the ppE waveform. LVC's bounds on non-GR parameters using a Bayesian analysis on the real data set are consistent with the Fisher bounds in Fig.~\ref{beta-bound}. One important future direction to pursue is to come up with a meaningful parameterization of the waveform in the merger and ringdown phases. The generalized IMRPhenom waveform model used by LVC does contain non-GR parameters in the merger-ringdown part of the waveform, though it is not clear what they mean physically or how one can map such parameters to e.g. coupling constants in each non-GR theory. To achieve the above goal, one needs to carry out as many BBH merger simulations as possible in theories beyond GR~\cite{Healy:2011ef,Berti:2013gfa,Okounkova:2017yby,Jai-akson:2017ldo,Hirschmann:2017psw} to give us some insight on how we should modify the waveform from GR in the merger-ringdown phase. 
\subsection{Ringdown and black hole perturbations beyond General Relativity} \vspace{-3mm} {\it Contributor:} P. Pani \vspace{3mm} \label{sec:ringdown} Depending on the physics being tested, the post-merger ``ringdown'' phase of a compact-binary coalescence might be better suited than the inspiral or the merger. For instance, in the inspiral phase the nature of the binary components shows up only at high PN order (though corrections due to extra polarizations can show up at low PN order) whereas the merger phase requires time-consuming and theory-dependent numerical simulations. On the other hand, the post-merger phase --- during which the remnant BH settles down to a stationary configuration --- can be appropriately described by perturbation theory and it is therefore relatively simple to model beyond GR. If the remnant is a BH in GR, the ringdown phase is well-described by a superposition of exponentially damped sinusoids, \begin{equation} h_+ +i h_\times \sim \sum_{lmn} A_{lmn}(r) e^{-t/\tau_{lmn}}\sin (\omega_{lmn} t+\phi_{lmn})\,, \label{ringdown} \end{equation} called quasinormal modes (QNMs)~\cite{Kokkotas:1999bd,Ferrari:2007dd,Berti:2009kk,Konoplya:2003ii}. Here, $\omega_{lmn}$ is the characteristic oscillation frequency of the final BH, $\tau_{lmn}$ is the damping time, $A_{lmn}$ is the amplitude at a distance $r$, $\phi_{lmn}$ is the phase, $l$ and $m$ ($|m|\leq l$) are angular indices describing how radiation is distributed on the final BH's sky, and $n$ is an overtone index labeling the (infinite, countable) number of modes. Equation~(\ref{ringdown}) sets the ground for \emph{GW spectroscopy}: QNMs are the fingerprints of the final BH and extracting their frequency and damping time allows for a consistency check of the GW templates~\cite{TheLIGOScientific:2016qqj}, tests of gravity~\cite{Berti:2015itd,TheLIGOScientific:2016src,Yunes:2016jcc}, and of the no-hair properties of the Kerr solution~\cite{Cardoso:2016ryw}. 
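A minimal numerical sketch of Eq.~(\ref{ringdown}) is given below; the mode values used in the test are purely illustrative, not fits to any event.

```python
import numpy as np

def ringdown_signal(t, modes):
    """Superpose exponentially damped sinusoids, schematically following
    h(t) = sum_k A_k * exp(-t/tau_k) * sin(omega_k*t + phi_k).

    t     : array of times measured from the start of the ringdown
    modes : iterable of (A, tau, omega, phi) tuples, one per (l,m,n) mode
    """
    h = np.zeros_like(t, dtype=float)
    for amp, tau, omega, phi in modes:
        h += amp * np.exp(-t / tau) * np.sin(omega * t + phi)
    return h
```

Fitting such a template to post-merger data and reading off $(\omega_{lmn},\tau_{lmn})$ of the dominant mode is the starting point of the GW spectroscopy programme described next.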
As a corollary of the GR uniqueness theorem~\cite{Carter:1971zc,Robinson}, the entire QNM spectrum of a Kerr BH depends only on its mass and spin. Thus, detecting several modes allows for multiple null-hypothesis tests of the Kerr metric~\cite{Gossan:2011ha,Berti:2016lat,Meidam:2014jpa} and, in turn, of one of the pillars of GR. \subsubsection{Background} BH perturbation theory beyond GR is well established in the case of nonspinning background solutions. Extending seminal work done in GR~\cite{Vishveshwara,Regge:1957td,Zerilli:1970se,Chandra}, the BH metric is approximated as $g_{\mu\nu} = g_{\mu\nu}^{(0)}+h_{\mu\nu}$, where $g_{\mu\nu}^{(0)}$ is the background solution, namely the static BH metric that solves the modified Einstein equations in a given theory, whereas $|h_{\mu\nu}|\ll1$ are the metric perturbations. Likewise, all possible extra fields (e.g.\ the scalar field in a scalar-tensor theory) are linearized around their own background values. Perturbations of spherically symmetric objects can be conveniently decomposed in a basis of spherical harmonics, and modes with different harmonic indices $(l,m)$ and different parity decouple from each other. At linear order, the dynamics is described by two systems of ordinary differential equations: the axial (or odd parity) sector and the polar (or even parity) sector. When supplemented by physical boundary conditions at the horizon and at infinity~\cite{Kokkotas:1999bd,Berti:2009kk}, each resulting linear system defines an eigenvalue problem whose solutions are the (complex) QNMs, $\omega_{lmn}-i/\tau_{lmn}$. Remarkably, for a Schwarzschild BH within GR the odd and even parity sectors are isospectral~\cite{Chandra}, whereas this property is generically broken in other gravity theories. Perturbations of spinning BHs beyond GR are much more involved. 
In GR, the gravitational perturbations of a Kerr BH can be separated using the Newman-Penrose formalism~\cite{Teukolsky:1973ha,Chandra} and the corresponding QNMs are described by a single master equation. This property does not generically hold beyond GR or for other classes of modes. To treat more generic cases, one could perform a perturbative analysis in the spin (the so-called slow-rotation approximation~\cite{Pani:2012bp,Pani:2013pma}) or instead solve the corresponding set of partial differential equations with spectral methods and other elliptic solvers~\cite{Dias:2015nua}. In general, the spectrum of spinning BHs beyond GR is richer and more difficult to study. This fact has limited the development of parametrized approaches to ringdown tests~\cite{Barausse:2014tra}, which clearly require taking into account the spin of the final object. Remarkably, there is a tight relation between the BH QNMs in the eikonal limit ($l\gg1$) and some geodesic properties associated with the spherical photon orbit (the photon sphere)~\cite{Ferrari:1984zz,Cardoso:2008bp,Yang:2012he}. This ``null geodesic correspondence'' is useful as it requires only manipulation of background quantities, which are easy to obtain. Furthermore, it provides a clear physical insight into the BH QNMs in terms of waves trapped within the photon sphere, slowly leaking out on a timescale given by the geodesic instability time scale. Within this approximation, the QNMs of BHs beyond GR have been recently studied in Ref.~\cite{Blazquez-Salcedo:2016enn} and a parametrized approach has been proposed in Ref.~\cite{Glampedakis:2017dvb}. There are however two important limitations. First, the correspondence is strictly valid only in the eikonal limit, $l\gg1$, whereas the GW signal is typically dominated by the lowest-$l$ modes. 
More importantly, the geodesic properties are ``kinematical'' and do not take into account dynamical aspects, e.g.\ those related to extra degrees of freedom~\cite{Blazquez-Salcedo:2016enn} or nonminimal couplings~\cite{Konoplya:2017wot}. For example, Schwarzschild BHs in two different theories of gravity will share the same geodesics but will have in general different QNM spectra~\cite{Barausse:2008xv,Blazquez-Salcedo:2016enn}. \subsubsection{Signatures} There are several distinctive features of linearized BH dynamics that can be used to perform tests of gravity in the strong-field regime. \paragraph{Ringdown tests \& the no-hair theorem} For BBH mergers within GR, the QNMs excited to larger amplitudes are usually the $(2,2,0)$ and $(3,3,0)$ gravitational modes~\cite{Buonanno:2006ui,Berti:2007fi,Barausse:2011kb,Baibhav:2017jhs}. Because the Kerr metric depends only on two parameters, extracting $\omega_{220}$ and $\tau_{220}$ allows one to estimate the mass and spin of the final object, whereas extracting further subleading modes provides multiple independent consistency checks of the Kerr metric, since the QNMs are generically different in extensions of GR. Since QNMs probe a highly dynamical aspect of the theory, the latter statement is true even for those theories in which the Kerr metric is the only stationary vacuum solution~\cite{Barausse:2008xv}. These tests require high SNR in the ringdown phase and will be best performed with next generation detectors, especially with LISA~\cite{Berti:2004bd,Berti:2005ys,Berti:2007zu,Berti:2016lat}, although coherent mode-stacking methods may be feasible also with advanced ground-based detectors~\cite{Yang:2017zxs}. Furthermore, the ability to perform accurate tests will rely on understanding theoretical issues such as the starting time of the ringdown~\cite{Baibhav:2017jhs,Bhagwat:2017tkm} and on the modelling of higher harmonics~\cite{Brito:2018rfr}. 
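As an illustration of this consistency check, the fundamental $l=m=2$ QNM of Kerr is well approximated by known fitting formulae, $M\omega_{220} = f_1 + f_2(1-j)^{f_3}$ and $Q_{220} \equiv \omega_{220}\tau_{220}/2 = q_1 + q_2(1-j)^{q_3}$; the coefficients below are the Berti--Cardoso--Will values quoted from the literature and should be treated as approximate. Inverting the fits recovers the remnant mass and spin from a measured frequency and damping time:

```python
import numpy as np

# Approximate (2,2,0) fit coefficients (Berti, Cardoso & Will 2006).
F1, F2, F3 = 1.5251, -1.1568, 0.1292   # M*omega(j) fit
Q1, Q2, Q3 = 0.7000, 1.4187, -0.4990   # quality factor Q(j) fit

def mass_spin_from_ringdown(omega, tau):
    """Estimate remnant mass M and dimensionless spin j (G=c=1) from the
    measured (2,2,0) frequency omega and damping time tau."""
    Q = 0.5 * omega * tau                      # quality factor: mass-independent
    j = 1.0 - ((Q - Q1) / Q2) ** (1.0 / Q3)    # invert Q(j) for the spin
    M = (F1 + F2 * (1.0 - j) ** F3) / omega    # then invert M*omega(j) for the mass
    return M, j
```

Repeating the estimate with a subleading mode, e.g.\ $(3,3,0)$, and checking that both return the same $(M, j)$ realises the null-hypothesis test of the Kerr metric described above.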
Such tests are based on the prompt ringdown response and implicitly assume that the remnant object has a horizon. If an exotic ultracompact object, rather than a BH, forms as a result of beyond-GR corrections, the ringdown signal depends strongly on the final object's compactness. If the compactness is comparable to or slightly larger than that of a neutron star, the prompt ringdown will show distinctive signatures~\cite{Pani:2009ss,Macedo:2013jja,Chirenti:2016hzd}. On the other hand, for objects as compact as BHs, the prompt ringdown is identical to that of a Kerr BH, but the signal is characterized by ``echoes'' at late times~\cite{Cardoso:2016rao,Cardoso:2016oxy,Cardoso:2017cqb} (see also Sec.~\ref{sec:ECOs}). \paragraph{Extra ringdown modes} In addition to the shift of the gravitational modes discussed above, virtually any extension of GR predicts extra degrees of freedom~\cite{Berti:2015itd} which may be excited during the merger~\cite{Blazquez-Salcedo:2016enn,Okounkova:2017yby,Tattersall:2017erk, Tattersall:2018nve}. Although the amplitude of such modes is still poorly estimated, this type of test calls for novel ringdown searches including two modes of different nature~\cite{Brustein:2017koc}. \paragraph{Isospectrality} Isospectrality of Schwarzschild BHs in GR has a bearing also on Kerr BHs. Since this property is generically broken in alternative theories, a clear signature of beyond-GR physics would be the appearance of a ``mode doublet'' (i.e., two modes with very similar frequency and damping time) in the waveform~\cite{Barausse:2014tra,Blazquez-Salcedo:2016enn}. \paragraph{Instabilities} BHs in Einstein's theory are (linearly mode) stable. A notable exception relates to the superradiant instability triggered by minimally-coupled light bosonic fields~\cite{Brito:2015oca} (see also Section~\ref{Sec:DM} below). 
For astrophysical BHs, this instability is ineffective except for ultralight bosons with masses below $10^{-10}\,{\rm eV}$, in which case it has important phenomenological consequences (see Sec.~\ref{sec:superradiance}). The situation is different in modified gravity: for example, certain BH solutions are known to be unstable in scalar-tensor theories in the presence of matter~\cite{Cardoso:2013fwa,Cardoso:2013opa}, in theories with higher-order curvature couplings~\cite{Silva:2017uqg,Blazquez-Salcedo:2018jnn}, and in massive (bi)-gravity~\cite{Babichev:2013una,Brito:2013wya}. These instabilities are not necessarily pathological, but simply indicate the existence of a different, stable, ``ground state'' in the spectrum of BH solutions to a given theory. Finally, as discussed in Sec.~\ref{sec:ECOs}, extensions of GR might predict exotic compact objects without an event horizon. These objects are typically unstable and their instability can also lead to peculiar signatures in the GW signal~\cite{Cardoso:2017cqb,Cunha:2017qtt}. \subsubsection{A parametrised ringdown approach?} It should be clear from the previous discussion that the Teukolsky formalism is insufficient to study BH perturbation theory in alternative theories of gravity. Extending it is hence a major open problem. One might be even more ambitious and attempt to develop a theory-agnostic parametrisation for the ringdown, similar in spirit to that discussed in Sec.~\ref{sec:inspiral} for the inspiral phase. This is a two-faceted problem, since the quiescent BH that acts as the endpoint of the evolution is, in general, not a Kerr BH. Hence, one first needs to parametrise deviations from the Kerr spacetime and then propose a meaningful and accurate parametrisation of the perturbations around such a non-Kerr background. There have been several attempts to parametrise stationary deviations from the Kerr spacetime. 
\emph{Bumpy} metrics were constructed in Refs.~\cite{Collins:2004ex,Vigeland:2009pr,Vigeland:2010xe} within GR, starting from a perturbation around the Schwarzschild metric and applying the Newman-Janis transformation~\cite{Newman:1965tw} to construct a rotating configuration. The \emph{quasi-Kerr} metric uses the Hartle-Thorne metric~\cite{Hartle:1968si}, valid for slowly-rotating configurations, and modifies the quadrupole moment~\cite{Glampedakis:2005cf}. Non-Kerr metrics can also be found, with the extra requirement that a Carter-like, second-order Killing tensor exists~\cite{Vigeland:2011ji}. This approach and metric were later simplified in Ref.~\cite{Johannsen:2015pca}. Note however that rotating BH solutions in alternative theories of gravity do not possess such a Killing tensor in general. Johannsen and Psaltis~\cite{Johannsen:2011dh} proposed the \emph{modified Kerr} metric by starting from a spherically symmetric and static metric (that does not necessarily satisfy the Einstein equations) and applying the Newman-Janis transformation. This approach was later extended in Ref.~\cite{Cardoso:2014rha}. Finally, modified axisymmetric spacetime geometries can be found by adopting a continued-fraction expansion to obtain rapid convergence of the series~\cite{Rezzolla:2014mua,Konoplya:2016jvv}. See e.g. Refs.~\cite{Lin:2015oan,Ghasemi-Nodehi:2016wao} for other parameterizations of deviations from Kerr. Properties of various parameterized Kerr metrics have been studied in~\cite{Johannsen:2013rqa}. See also Ref.~\cite{Yagi:2016jml} for more details on such generic parameterizations of the Kerr metric. It should be stressed that a shortfall of all of the aforementioned parametrisations is that they do not have a clear physical interpretation, which makes them rather {\em ad hoc}. Moreover, they only address half of the problem of parametrizing the ringdown. 
So far their use in modelling waveforms has been limited, but see Refs.~\cite{Gair:2011ym,Moore:2017lxy} and Ref.~\cite{Glampedakis:2017dvb} for applications to extreme-mass-ratio inspirals and BH ringdown respectively. At this stage there has been very little progress on how one can take the second step, {\em i.e.}~parametrise linear perturbations around such spacetimes in a theory-agnostic way. Along these lines, a general theory of gravitational perturbations of a Schwarzschild metric has been developed in Ref.~\cite{Tattersall:2017erk} and applied to Horndeski theory in Ref.~\cite{Tattersall:2018nve}. \subsection{Merger, numerics, and complete waveforms beyond General Relativity\label{IVPandNumerics}} \vspace{-3mm} {\it Contributors:} C. Palenzuela, T. P. Sotiriou \vspace{3mm} \label{sec:NRbeyond} The merger of binary compact objects will test the highly dynamical and strongly non-linear regime of gravity, which can only be modelled by using numerical simulations. As already discussed above, the recent direct detection of GWs from BBHs \cite{Abbott:2016blz,Abbott:2016nmj,Abbott:2017vtc,Abbott:2017oio,Abbott:2017gyy}, and of the BNS merger~\cite{TheLIGOScientific:2017qsa} with an associated short GRB \cite{Monitor:2017mdv} and a plethora of concurrent electromagnetic signals from the same source \cite{GBM:2017lvd}, has already provided constraints on deviations from GR in various forms and contexts ({\em e.g.~}\cite{Yunes:2016jcc,TheLIGOScientific:2016src,Sakstein:2017xjx,Ezquiaga:2017ekz,Creminelli:2017sry,Baker:2017hug,Gumrukcuoglu:2017ijh,Oost:2018tcv}). However, most of these constraints use either partial, potentially parametrised, waveforms or rely on propagation effects. In fact, the true potential of testing GR is currently limited by the lack of knowledge of GW emission during the merger phase in alternatives to GR~\cite{Yunes:2016jcc}. 
This problem is particularly acute for heavier BH mergers, where only a short part of the inspiral is detected. This suggests that constraints could be strengthened significantly in most cases if one had complete, theory-specific, waveforms. However, performing stable and accurate numerical simulations that would produce such waveforms requires an understanding of several complex issues. Probably the first among them is the well-posedness of the system of equations that describes evolution in a given alternative scenario to GR. In the next section we briefly describe the issue of well-posedness and discuss some techniques that have been used so far to obtain a well-posed evolution system in alternative theories. In Sec.~\ref{sec:numerics} we overview some recent numerical studies of nonlinear evolution beyond GR. \subsubsection{Initial value formulation and predictivity beyond GR} \label{sec:ivp} Modelling the evolution of a binary system for given initial data is a type of initial value problem (IVP). An IVP is well-posed if the solution exists, is unique and does not change abruptly for small changes in the initial data. A theory with an ill-posed IVP cannot make predictions. The IVP is well-posed in GR \cite{1952AcM....88..141F} but it is not clear if this is true for most of its contenders. This is a vastly overlooked issue and a systematic exploration of the IVP in many interesting alternative theories, such as those discussed in Sec.~\ref{sec:theories}, is pending. One class of alternative theories of gravity known to be well-posed is that of scalar-tensor theories described by action (\ref{staction}). As discussed in Sec.~\ref{sec:sttheories}, after suitable field redefinitions, these theories take the form of GR plus a scalar field with a canonical kinetic term and potential non-minimal couplings between the scalar and standard model fields, see action (\ref{stactionein}). 
Since these couplings do not contain more than two derivatives, one can argue for well-posedness using the known results for GR and the fact that lower-order derivative terms are not relevant for this discussion. Interestingly, most alternative theories actually include modifications to the leading order derivative terms in the modified Einstein's equations, so a theory-specific study is necessary. This has been attempted in very few cases and results are mostly very preliminary. In particular, there is evidence that dynamical Chern-Simons gravity is ill-posed~\cite{Delsate:2014hba}. Certain generalised scalar-tensor theories appear to be ill-posed in a generalised harmonic gauge when linearised over a generic, weak field background \cite{Papallo:2017ddx}; however, note that this result is gauge-dependent, and hence not conclusive. Finally, in certain Lorentz-violating theories that exhibit instantaneous propagation, describing evolution might require solving a mixture of hyperbolic and elliptic equations (where the latter are not constraints as in GR) \cite{Bhattacharyya:2015uxt,Bhattacharyya:2015gwa}. \subsubsection{Well-posedness and effective field theories} One is tempted to use well-posedness as a selection criterion for alternative theories of gravity, as a physical theory certainly needs to be predictive (in an appropriate sense). However, alternatives to GR can be thought of as effective field theories --- truncations of a larger theory and hence inherently limited in their range of validity. This complicates the question of what one should do when a given theory turns out to be ill-posed. In fact, it even affects how one should view its field equations and the dynamics they describe in the first place. Effective field theories (EFTs) can often contain spurious degrees of freedom ({\em e.g.}~ghosts) that lead to pathological dynamics. 
In linearised theories it is easy to remove such degrees of freedom and the corresponding pathologies, but there is no unique prescription for doing so in general. Hence, instead of setting aside theories that appear to be ill-posed when taken at face value, perhaps one should be looking for a way to `cure' them and render them predictive at the nonlinear level. A very well-known EFT, viscous relativistic hydrodynamics, requires such `curing' to control undesirable effects of higher order derivatives, and a prescription for doing so was given long ago~\cite{Israel:1976tn,1976PhLA...58..213I,Israel:1979wp}. A similar prescription applicable to gravity theories has been given recently~\cite{2017PhRvD..96h4043C}. Roughly speaking, this approach treats the theory as a gradient expansion and, hence, attributes the pathologies to runaway energy transfer to the ultraviolet, which in turn renders the gradient expansion inapplicable. As a result, it attempts to modify the equations so as to prevent such transfer and to ensure that the system remains within the regime of validity of the effective description throughout the evolution. Another approach is to consider the theory as arising from a perturbative expansion in a certain parameter, say $\lambda$. If that is the case, then higher order corrections in $\lambda$ have been neglected and, consequently, the solutions can be trusted only up to a certain order in $\lambda$. Moreover, they have to be perturbatively close (in $\lambda$) to solutions with $\lambda=0$, which are solutions of GR. Hence, one can iteratively generate fully dynamical solutions order-by-order in $\lambda$. This process is effectively an {\em order-reduction} algorithm and yields a well-posed system of equations. 
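Schematically, assuming field equations of the illustrative form $G_{\mu\nu} = \lambda\, S_{\mu\nu}[g]$, order reduction amounts to expanding $g_{\mu\nu} = \sum_k \lambda^k g^{(k)}_{\mu\nu}$ and solving
\begin{align}
{\cal O}(\lambda^0):&\quad G_{\mu\nu}\big[g^{(0)}\big] = 0\,,\nonumber\\
{\cal O}(\lambda^1):&\quad \delta G_{\mu\nu}\big[g^{(1)};g^{(0)}\big] = S_{\mu\nu}\big[g^{(0)}\big]\,,
\end{align}
where $\delta G_{\mu\nu}$ denotes the Einstein operator linearised about $g^{(0)}$. Each order then requires only the solution of GR, or of linear wave equations on a GR background, so the truncated system inherits the well-posedness of GR.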
It has a long history in gravity theories in different contexts ({\em e.g.~}\cite{Bel:1985zz,Simon:1990ic,Sotiriou:2008ya}), but it has only recently been used for nonlinear, dynamical evolution in alternative theories \cite{Benkel:2016kcq,Benkel:2016rlz,Okounkova:2017yby}. These two `cures' do not necessarily give the same results. Moreover, there is no reason to believe that there cannot be other ways to address this problem, so this remains a crucial open question. In principle, the way that an EFT is obtained from a more fundamental theory should strongly suggest which is the way forward when considering nonlinear evolution. However, in practice one often starts with an EFT and hopes to eventually relate it to some fundamental theory. Hence, it seems wise to consider all possible approaches. In principle, different theories might require different approaches. \subsubsection{Numerical simulations in alternative theories} \label{sec:numerics} As mentioned above, one can straightforwardly argue that the IVP is well-posed in standard scalar-tensor theories described by action (\ref{staction}). However, as will be discussed in detail in Sec.~\ref{sec:nohair}, BHs in this class of theories are generically identical to those of GR and carry no scalar configuration. Hence, even though gravitational radiation in these theories can in principle contain a longitudinal component, it is highly unlikely that it will be excited in a BBH coalescence. In Sec.~\ref{sec:hairyBHs} we outline the conditions under which BHs can differ from their GR counterparts in standard scalar-tensor theories.\footnote{It might hence be preferable for readers that are not familiar with the literature on hairy BHs to return to this section after going through Secs.~\ref{sec:nohair} and \ref{sec:hairyBHs}.} When these conditions are satisfied there should be an imprint in GWs from BH binaries. It is worth highlighting here a couple of cases where numerical simulations have to be used to address this question. 
The first has to do with the role of asymptotics. It has been shown that time-dependent asymptotics for the scalar field could lead to scalar radiation during the coalescence~\cite{Berti:2013gfa}, though this emission would probably be undetectable for realistic asymptotic values of the scalar field gradient. The second case has to do with whether matter in the vicinity of a BH can force it to develop a non-trivial scalar configuration. This has been shown in idealised setups only in Refs. \cite{Cardoso:2013fwa,Cardoso:2013opa}. However, numerical simulations for the same phenomenon in NSs have been performed in~\cite{Barausse:2012da,Shibata:2013pra,Palenzuela:2013hsa}. $f(R)$ theories of gravity (see Ref.~\cite{Sotiriou:2008rp} for a review), which are dynamically equivalent to a specific subclass of scalar-tensor theories, are also well-posed \cite{LanahanTremblay:2007sg,Sotiriou:2008rp}. A comparative study of BNS mergers in GR with those of a one-parameter model of $f(R)=R+aR^2$ gravity was performed in~\cite{Sagunski:2017nzb}. A well-posed extension of scalar-tensor theories is Einstein-Maxwell-Dilaton gravity. It has its origin in low-energy approximations of string theory and it includes, apart from the metric and a scalar, a U(1) gauge field. The scalar field couples exponentially to the gauge field, allowing for deviations from GR even for BHs with an asymptotically constant scalar field. BBH simulations have shown that these deviations are rather small for reasonable values of the hidden charge~\cite{Hirschmann:2017psw}, leading to weak constraints on the free parameters of the theory. Finally, some first numerical results have recently appeared in scalar-Gauss--Bonnet gravity, and specifically the theory described by action (\ref{phiG}), and in Chern-Simons gravity (see Sec.~\ref{sec:sttheories} for more details on the theories). 
Refs.~\cite{Benkel:2016rlz,Benkel:2016kcq} studied scalar evolution in scalar-Gauss--Bonnet gravity and Ref.~\cite{Okounkova:2017yby} performed the first binary evolution in Chern-Simons gravity. These results are notable for using a perturbative expansion in the free parameter of the theory, described in the previous section, in order to circumvent potential issues with well-posedness (as discussed above, if the field equations are taken at face value, well-posedness is known to be an issue for Chern-Simons gravity~\cite{Delsate:2014hba} and it might be an issue for scalar-Gauss--Bonnet gravity~\cite{Papallo:2017ddx}). \section{The nature of Compact Objects}\label{Sec:nonKerr} \label{sec:compactbeyond} \subsection{No-hair theorems} \vspace{-3mm} {\it Contributors:} C.~Herdeiro, T.~P.~Sotiriou \vspace{3mm} \label{sec:nohair} The discovery of the Schwarzschild solution in 1915-16~\cite{Schwarzschild:1916uq}, shortly after Einstein's presentation of GR~\cite{Einstein:1915ca}, triggered a half-a-century long quest for its rotating counterpart, which ended with the discovery of the Kerr solution in 1963~\cite{Kerr:1963ud} - see~\cite{Kerr:2007dk} for a historical account of this discovery. At this time, unlike in Schwarzschild's epoch, it was already considered plausible that these metrics could represent the endpoint of the gravitational collapse of stars, even though the name \textit{black hole} to describe them only became widespread after being used by Wheeler in 1967-68~\cite{Wheeler:1998vs}. The extraordinary significance of the Kerr metric became clear with the establishment of the \textit{uniqueness theorems} for BHs in GR - see~\cite{Chrusciel:2012jk} for a review. The first such theorem was Israel's, for the static case~\cite{Israel:1967wq}. 
Then, Carter~\cite{Carter:1971zc} and Robinson~\cite{Robinson:1975bv} constructed a theorem stating: \textit{An asymptotically flat stationary and axisymmetric vacuum spacetime that is non-singular on and outside a connected event horizon is a member of the two-parameter Kerr family}. The axial symmetry assumption was subsequently shown to be unnecessary (assuming analyticity, nondegeneracy, and causality \cite{Carter:1997im}); for BHs, stationarity actually implies axial symmetry, via the ``rigidity theorem,'' which relates the teleologically defined ``event horizon'' to the locally defined ``Killing horizon''~\cite{Hawking:1971vc}. The upshot of these theorems is that a stationary vacuum BH, despite possessing an infinite number of non-trivial, appropriately defined multipolar moments~\cite{Hansen:1974zz}, has only two degrees of freedom: its mass and angular momentum. This simplicity was written in stone by Wheeler's dictum ``\textit{BHs have no hair}''~\cite{Ruffini:1971bza}. In parallel to these developments, the astrophysical evidence for BHs piled up over the last half century - see $e.g.$~\cite{Narayan:2013gca}. In fact, the very conference where the Kerr solution was first presented (the first \textit{Texas symposium on relativistic astrophysics}), held in December 1963, was largely motivated by the discovery of \textit{quasars} in the early 1960s and the growing belief that relativistic phenomena related to highly compact objects should be at the source of the extraordinary energies emitted by such very distant objects~\cite{Schucking:1989}. Gravitational collapse was envisaged to play a key role in the formation of these highly compact objects and such collapse starts from matter distributions, rather than vacuum. Thus, one may ask if BHs in the presence of matter, rather than in vacuum, still have \textit{no hair}. In this respect, Wheeler's dictum was a conjecture, rather than a synthetic expression of the uniqueness theorems. 
It hypothesised a much more general and ambitious conclusion than that allowed solely by these theorems. The conjecture stated: \textit{gravitational collapse leads to equilibrium BHs uniquely determined by mass, angular momentum and electric charge - asymptotically measured quantities subject to a Gauss law - and no other independent characteristics (hair)}~\cite{Misner:1974qy}. Here, the Gauss law plays a key role to single out the physical properties that survive gravitational collapse, because they are anchored to a fundamental (gauged) symmetry, whose conserved charge cannot be cloaked by the event horizon. The uniqueness theorems (which can be generalised to electro-vacuum~\cite{Chrusciel:2012jk}), together with the no-hair conjecture, laid down the foundations for the \textit{Kerr hypothesis}: that the myriad of BHs that exist in the Cosmos are well described, when near equilibrium, by the Kerr metric. This astonishing realisation was elegantly summarised by S. Chandrasekhar~\cite{Chandrasekhar:1989}: \textit{In my entire scientific life, extending over forty-five years, the most shattering experience has been the realisation that an exact solution of Einstein's field equations of GR, discovered by the New Zealand mathematician, Roy Kerr, provides the absolutely exact representation of untold numbers of massive BHs that populate the Universe.} The Kerr hypothesis can be tested both theoretically and by confrontation with observations. On the theory front, the key question is whether BHs that are not described by the Kerr metric can constitute solutions of reasonable field equations that describe gravity and potentially other fields. Additionally, one can ask whether such BH solutions can form dynamically and are sufficiently stable to be astrophysically relevant. On the observational front, the question is to what extent the BHs we observe are compatible with the Kerr hypothesis and what constraints one can extract from them.
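The ``two degrees of freedom'' statement can be made quantitative through the multipole moments of~\cite{Hansen:1974zz}: for the Kerr metric, in geometric units, all mass ($M_\ell$) and current ($S_\ell$) moments are fixed by the lowest two,
\[
M_\ell + i\,S_\ell = M\,(ia)^\ell\,,\qquad a\equiv J/M\,,
\]
so that $M_0=M$, $S_1=J$, $M_2=-Ma^2$, and so on. An independently measured higher multipole deviating from this tower would therefore signal a departure from the Kerr hypothesis.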
A set of results on the non-existence of non-Kerr BHs in specific models, either by including different matter terms in Einstein's GR or by considering theories of gravity beyond Einstein's GR (or both), has been established since the 1970s; these are known as \textit{no-hair theorems} - see $e.g.$ the reviews~\cite{Bekenstein:1996pn,Herdeiro:2015waa,Sotiriou:2015pka,Volkov:2016ehx}.\footnote{Often, the uniqueness theorems are also called no-hair theorems, under the rationale that they show that in vacuum (or electro-vacuum), BHs have no independent multipolar hair, and only two (or three, ignoring magnetic charge) degrees of freedom. Here we use no-hair theorems to refer to no-go results for non-Kerr BHs in models beyond electro-vacuum GR.} No-hair theorems apply to specific models or classes of models and rely on assumptions regarding symmetries, asymptotics, etc. As will become evident in the next section, dropping one or more of these assumptions in order to circumvent the corresponding no-hair theorem can be a good tactic for finding non-Kerr BHs (see also Ref.~\cite{Sotiriou:2015pka} for a discussion). More generally, understanding the various sorts of theorems, their assumptions and limitations is instructive. A large body of work was dedicated to the case of scalar hair, partly due to the conceptual and technical simplicity of scalar fields. Indeed, one of the earliest examples of a no-hair theorem was provided by Chase~\cite{Chase:1970}, following~\cite{Penney:1968zz}. Chase inquired whether a static, regular BH spacetime could be found in GR minimally coupled to a real, massless scalar field. Without assuming spherical symmetry, he showed that the scalar field necessarily becomes singular at a simply-connected event horizon. Thus, a static BH cannot support scalar hair in this model.
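The backbone of this and of several subsequent scalar no-hair proofs is a short integral argument, sketched here schematically for a static, massless scalar obeying $\Box\phi=0$ in the BH exterior region ${\cal V}$ (an illustration of the common strategy, not a substitute for any specific theorem):
\[
0=\int_{\cal V}\sqrt{-g}\,\phi\,\Box\phi\;\mathrm{d}^4x
=-\int_{\cal V}\sqrt{-g}\,\nabla_\mu\phi\,\nabla^\mu\phi\;\mathrm{d}^4x
+\oint_{\partial{\cal V}}\phi\,\nabla^\mu\phi\;\mathrm{d}S_\mu\,.
\]
For suitable fall-off at infinity and regularity at the horizon the boundary term vanishes, while staticity makes $\nabla_\mu\phi$ spacelike, so that $\nabla_\mu\phi\,\nabla^\mu\phi\geq 0$ in the exterior; the bulk integral can then vanish only if $\phi$ is constant, i.e.\ the BH supports no scalar hair.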
The 1970s witnessed the formulation of influential no-hair theorems, notably by Bekenstein, who developed a different approach for establishing the non-existence of scalar hair, applicable to higher spin fields as well~\cite{Bekenstein:1971hc,Bekenstein:1972ky,Bekenstein:1972ny}. An elegant no-hair theorem was given by Hawking~\cite{Hawking:1972qk} for minimally coupled scalar fields without self-interactions and for Brans-Dicke theory \cite{Brans:1961sx}. Hawking's theorem assumes stationarity (as opposed to staticity) and no other symmetry, and its proof is particularly succinct. This theorem was later generalised by Sotiriou and Faraoni~\cite{Sotiriou:2011dz} to scalars that exhibit self-interactions and to the class of scalar-tensor theories described by action (\ref{staction}). Hui and Nicolis~\cite{Hui:2012qt} have proven a no-hair theorem for shift-symmetric (and hence massless) scalar fields that belong to the Horndeski class discussed in Sec.~\ref{sec:sttheories}. This proof assumes staticity and spherical symmetry, though it can be easily generalized to slowly-rotating BHs~\cite{Sotiriou:2013qea}. It is also subject to a notable exception, as pointed out in Ref.~\cite{Sotiriou:2013qea}: a linear coupling between the scalar and the Gauss-Bonnet invariant is shift symmetric and yet circumvents the theorem [see Sec.~\ref{sec:sttheories} and action (\ref{phiG})]. Finally, a no-hair theorem has been given in Ref.~\cite{Silva:2017uqg} for stationary BH solutions of the theory described by action (\ref{fG}), provided $f''(\phi_0){\cal G}<0$; a similar proof, but restricted to spherical symmetry, can be found in Ref.~\cite{Antoniou:2017acq}. \subsection{Non-Kerr black holes} \vspace{-3mm} {\it Contributors:} C.~Herdeiro, T.~P.~Sotiriou \vspace{3mm} \label{sec:hairyBHs} Despite the many no-hair theorems discussed above, hairy BH solutions are known to exist in many different contexts.
In GR, the 1980s and 1990s saw the construction of a variety of hairy BH solutions, typically in theories with non-linear matter sources - see $e.g.$ the reviews~\cite{Bizon:1994dh,Bekenstein:1996pn,Volkov:1998cc}. The paradigmatic examples are the coloured BHs, found in Einstein-Yang-Mills theory~\cite{Volkov:1989fi,Volkov:1990sva,1990JMP....31..928K,Bizon:1990sr}. However, these BHs are unstable and have no known formation mechanism. Hence, they should not be considered as counterexamples to the weak no-hair conjecture, and it is unlikely that they play a role in the astrophysical scene. Below we will review some more recent attempts to find hairy BH solutions that are potentially astrophysically relevant. We first focus on cases that are covered by no-hair theorems and discuss how dropping certain assumptions can lead to hairy BHs. We then move on to various cases of hairy BHs in theories and models that are simply not covered by no-hair theorems. This is not meant to be an exhaustive review but merely a discussion of some illuminating examples in order to highlight the interesting BH physics that GWs could potentially probe. \subsubsection{Circumventing no-hair theorems} \label{sec:circumvent} No-hair theorems rely on assumptions, such as: (i) the asymptotic flatness of the metric and other fields; (ii) stationarity (or more restrictive symmetries); (iii) the absence of matter; (iv) stability, energy conditions and other theorem-specific conditions. Removing any one of these assumptions can lead to BH hair. For example, standard scalar-tensor theories described by the action (\ref{staction}) are covered by the no-hair theorems discussed in the previous section \cite{Chase:1970,Bekenstein:1971hc,Bekenstein:1972ky,Bekenstein:1972ny,Hawking:1972qk,Sotiriou:2011dz}.
Nonetheless, BHs with scalar hair exist in these theories if one allows for anti-de Sitter (AdS) asymptotics ({\em e.g.}~Ref.~\cite{Torii:2001pg}), or attempts to ``embed'' the BHs in an evolving universe \cite{Jacobson:1999vr}, or allows for matter to be in their vicinity \cite{Cardoso:2013fwa}. The case of AdS asymptotics, though interesting theoretically, is not relevant for astrophysics. The fact that cosmic evolution could potentially endow astrophysical BHs with scalar hair is certainly enticing, but the effect should be very small \cite{Horbatsch:2011ye}. Finally, it is not yet clear if the presence of matter in the vicinity of a BH could induce a scalar charge that is detectable with current observations. See Ref.~\cite{Sotiriou:2015pka} for a more detailed discussion. Circumventing the theorems by violating stability arguments will be discussed in Sec.~\ref{sec:BHscalarization}. A perhaps less obvious way to obtain hairy solutions is to strictly impose the symmetry assumptions on the metric only and relax them for other fields. Even though the field equations relate the fields, their symmetries do not need to match exactly in order to be compatible (see, for instance, Ref.~\cite{Smolic:2016dmh} for a recent discussion). A well-studied case is that of a complex, massive scalar field minimally coupled to gravity \cite{Herdeiro:2014goa,Herdeiro:2015gia}. The scalar has a time-dependent phase, but its stress-energy tensor remains time-independent (thanks to some tuning) and the metric can remain stationary. See also~\cite{Kleihaus:2015iea,Herdeiro:2015tia} for generalisations and~\cite{Chodosh:2015oma} for an existence proof of the solutions. BHs with Proca (massive vector) hair have also been found using the same approach \cite{Herdeiro:2016tmi}.
Whether or not these hairy BHs are relevant for astrophysical phenomena depends on three main factors: (1) the existence of (ultra-light) massive bosonic fields in Nature; (2) the existence of a formation mechanism; and (3) their stability properties. The first factor is an open issue. Ultra-light bosonic fields of the sort necessary for the existence of these BHs with astrophysical masses do not occur in the standard model of particle physics. Some beyond-the-standard-model scenarios, however, motivate the existence of this type of ``matter,'' notably the axiverse scenario in string theory~\cite{Arvanitaki:2009fg}. The second point has been answered positively by fully non-linear numerical simulations~\cite{East:2017ovw,Herdeiro:2017phl}. There is at least one formation channel for these BHs, via the superradiant instability of Kerr BHs in the presence of ultralight bosonic fields - see~\cite{Brito:2015oca} for a review of this instability. The hairy BHs that result from this dynamical process, however, are never very hairy, being always rather Kerr-like. Concerning the third point, it has been recognised since their discovery that these BHs could be afflicted by the superradiant instability themselves~\cite{Herdeiro:2015gia,Herdeiro:2014jaa}. Recent work~\cite{Ganchev:2017uuo} succeeded in computing the corresponding timescales, and reported that the timescale of the strongest superradiant instability that can afflict hairy BHs is roughly 1000 times longer than that of the strongest instability that can afflict the Kerr BH. This study, albeit based on the analysis of a very small sample of solutions, leaves room for the possibility that some of these BHs can form dynamically and be sufficiently stable to play a role in astrophysical processes \cite{Degollado:2018ypf}. However, a more detailed analysis needs to be performed before definite conclusions can be drawn.
Hairy BHs have also been found in certain shift-symmetric scalar-tensor theories by allowing the scalar to depend linearly on time while requiring that the metric be static \cite{Babichev:2013cya,Babichev:2017guv}. This is possible because shift symmetry implies that the scalar only appears in the equations through its gradient, and the linear time dependence renders the gradient time independent. The linear stability of such solutions has been explored in Refs.~\cite{Ogawa:2015pea,Babichev:2017lmw,Babichev:2018uiw}. It remains largely unexplored whether these BHs can form dynamically and hence whether they are relevant for astrophysics. \subsubsection{Black holes with scalar hair} The most straightforward way to find hairy BH solutions is to consider theories that are not covered by no-hair theorems. In the case of scalar-tensor theories, it is known that couplings between the scalar (or pseudoscalar) and quadratic curvature invariants, such as the Gauss-Bonnet invariant ${\cal G}\equiv R^2 - 4 R_{\mu\nu} R^{\mu\nu} + R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}$ or the Pontryagin density ${}^{*}\!R^{\alpha}{}_{\beta}{}^{\gamma\delta} R^{\beta}{}_{\alpha\gamma\delta}$, can lead to scalar hair, see {\em e.g.}~\cite{Campbell:1991kz,Kanti:1995vq,Yunes:2011we,Sotiriou:2013qea,Sotiriou:2014pfa}. Indeed, the existence of hairy BHs singles out specific classes of theories, such as dynamical Chern-Simons (dCS) gravity~\cite{Jackiw:2003pm,Alexander:2009tp} and the scalar-Gauss-Bonnet theories described by the action (\ref{fG}) of Sec.~\ref{sec:sttheories} (which have hairy BH solutions provided that $f'(\phi_0)\neq 0$ for any constant $\phi_0$ \cite{Sotiriou:2013qea,Silva:2017uqg}). Sec.~\ref{sec:sttheories} already contains a discussion of these theories and their BH phenomenology, so we refer the reader there for details.
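As a concrete illustration of the invariants entering these couplings, the following sketch (a minimal computer-algebra check using {\tt sympy}; all symbol and variable names are ours) verifies that for the Schwarzschild metric, which is Ricci-flat, the Gauss-Bonnet invariant reduces to the Kretschmann scalar, ${\cal G}=R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}=48M^2/r^6$:

```python
import sympy as sp

# Schwarzschild metric in Schwarzschild coordinates, signature (-,+,+,+),
# geometric units G = c = 1.  All symbol names here are ours.
t, r, th, ph, M = sp.symbols('t r theta phi M', positive=True)
x = [t, r, th, ph]
f = 1 - 2 * M / r
g = sp.diag(-f, 1 / f, r**2, r**2 * sp.sin(th)**2)
ginv = g.inv()
n = 4

# Christoffel symbols Gamma^a_{bc} = (1/2) g^{ad}(d_c g_{db} + d_b g_{dc} - d_d g_{bc})
Gam = [[[sp.simplify(sum(ginv[a, d] * (sp.diff(g[d, b], x[c])
                                       + sp.diff(g[d, c], x[b])
                                       - sp.diff(g[b, c], x[d]))
                         for d in range(n)) / 2)
         for c in range(n)] for b in range(n)] for a in range(n)]

# Riemann tensor R^a_{bcd}
def riem(a, b, c, d):
    expr = sp.diff(Gam[a][b][d], x[c]) - sp.diff(Gam[a][b][c], x[d])
    expr += sum(Gam[a][c][e] * Gam[e][b][d]
                - Gam[a][d][e] * Gam[e][b][c] for e in range(n))
    return sp.simplify(expr)

R = [[[[riem(a, b, c, d) for d in range(n)]
       for c in range(n)] for b in range(n)] for a in range(n)]

# Schwarzschild is a vacuum solution: the Ricci tensor R_{bd} = R^a_{bad}
# vanishes, so the R^2 and R_{mu nu} R^{mu nu} pieces of G drop out.
ricci = [[sp.simplify(sum(R[a][b][a][d] for a in range(n)))
          for d in range(n)] for b in range(n)]
assert all(comp == 0 for row in ricci for comp in row)

# Hence G equals the Kretschmann scalar R_{abcd} R^{abcd}.  The metric is
# diagonal, so raising/lowering indices only multiplies by diagonal factors.
K = sp.simplify(sum(g[a, a] * ginv[b, b] * ginv[c, c] * ginv[d, d]
                    * R[a][b][c][d]**2
                    for a in range(n) for b in range(n)
                    for c in range(n) for d in range(n)))
assert sp.simplify(K - 48 * M**2 / r**6) == 0  # G = 48 M^2 / r^6
```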
See also Sec.~\ref{sec:numerics} for a discussion on the first attempts to simulate the dynamical evolution of hairy BHs and to produce waveforms for binaries that contain them. \subsubsection{Black hole scalarization} \label{sec:BHscalarization} In the previous section the focus was on theories that are not covered by no-hair theorems and have hairy BHs for all masses. An interesting alternative was recently pointed out in Refs.~\cite{Doneva:2017bvd,Silva:2017uqg}: certain theories that marginally manage to escape no-hair theorems \cite{Silva:2017uqg,Antoniou:2017acq} can have hairy BHs only in certain mass ranges, outside which BHs are identical to those of GR. The theories belong to the class described by action (\ref{fG}), and the phenomenon resembles the ``spontaneous scalarization'' of stars that happens in certain models of the standard scalar-tensor theories of action (\ref{staction}); see Sec.~\ref{sec:sttheories} for more details. In the mass ranges where hairy BHs exist, the GR BHs of the same mass are expected to be unstable, and this instability is supposed to give rise to the scalar hair. However, depending on the model, the hairy solutions can also be unstable \cite{Blazquez-Salcedo:2018jnn}, and this issue requires further investigation. It is also not clear whether other theories can exhibit BH scalarization, and the astrophysical significance of the effect has not yet been explored. \subsubsection{Black holes in theories with vector fields and massive/bimetric theories} In analogy to the generalised scalar-tensor theories discussed in Sec.~\ref{sec:sttheories} and described by action (\ref{hdaction}), one can construct the most general vector-tensor theories with second-order equations of motion. These theories can be considered as generalised Proca theories and they contain new vector interactions \cite{Heisenberg:2014rta,Jimenez:2016isa}.
Their phenomenology can be distinct from that of Horndeski scalars, and BH solutions with vector hair are known to exist \cite{Chagoya:2016aar,Minamitsuji:2016ydr,Heisenberg:2017xda,Heisenberg:2017hwb,Fan:2016jnz,Cisterna:2016nwq,Babichev:2017rti}. Depending on the order of the derivative interactions, these BH solutions can have primary ({\em i.e.}~the new charge is independent) or secondary ({\em i.e.}~the new charge is not independent) Proca hair. The presence of a temporal vector component significantly enlarges the possibilities for the existence of hairy BH solutions without the need for tuning the models \cite{Heisenberg:2017xda,Heisenberg:2017hwb}. Horndeski scalar-tensor and the generalised Proca theories can be unified into scalar-vector-tensor theories, for both the gauge-invariant and the gauge-broken cases \cite{Heisenberg:2018acv}. In the $U(1)$ gauge-invariant and shift-symmetric theories, the presence of cubic scalar-vector-tensor interactions is crucial for obtaining scalar hair, which manifests itself around the event horizon. The inclusion of quartic-order scalar-vector-tensor interactions enables the presence of regular BH solutions endowed with scalar and vector hair \cite{Heisenberg:2018vti}. It is worth mentioning that these new hairy BH solutions are stable against odd-parity perturbations \cite{Heisenberg:2018mgr}. In massive and bimetric gravity theories the properties of the BH solutions depend heavily on the choices one makes for the fiducial or dynamical second metric \cite{Volkov:2012wp,Volkov:2013roa,Brito:2013xaa,Volkov:2014ooa,Babichev:2015xha,Torsello:2017cmz}. If the two metrics are forced to be diagonal in the same coordinate system (bidiagonal solutions), their horizons need to coincide if they are not singular \cite{Deffayet:2011rh}. It should be stressed that generically there is not enough coordinate freedom to force the metrics to be bidiagonal.
The analysis of linear perturbations also reveals an unstable mode with a Gregory-Laflamme instability \cite{Babichev:2013una,Brito:2013wya}, which persists in the bi-Kerr geometry. Perturbations of the non-bidiagonal geometry are better behaved \cite{Babichev:2014oua}. Non-singular and time-dependent numerical solutions were recently found in \cite{Rosen:2017dvn}. \subsubsection{Black holes in Lorentz-violating theories} Lorentz symmetry is central to the definition of a BH thanks to the assumption that the speed of light is the maximum attainable speed. Superluminal propagation is typical of Lorentz-violating theories\footnote{It might be worth stressing that superluminal propagation does not necessarily lead to causal conundrums in Lorentz-violating theories.} and GW observations have already given a spectacular constraint: the detection of a BNS merger (GW170817) with coincident gamma-ray emission has constrained the speed of GWs to match that of light to a part in $10^{15}$ \cite{Monitor:2017mdv}. However, Lorentz-violating theories generically exhibit extra polarizations \cite{Sotiriou:2017obf}, whose speed remains virtually unconstrained \cite{Gumrukcuoglu:2017ijh}. See also Sec.~\ref{sec:propagation}. Moreover, BHs in such theories are expected to always be hairy. The reason has essentially been explained in the discussion of Sec.~\ref{sec:LVtheories}: Lorentz-violating theories can generally be written in a covariant way (potentially after restoring diffeomorphism invariance through the introduction of a Stueckelberg field) and in this setup Lorentz symmetry breaking can be attributed to some extra field that has the property of being nontrivial in any solution to the field equations. That same property will then endow BHs with hair. Indeed, this is known to be true for the theories reviewed in Sec.~\ref{sec:LVtheories}. The structure of such BHs depends on the causal structure of the corresponding theory.
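As an aside, the order of magnitude of the quoted GW170817 bound follows from elementary arithmetic; the sketch below uses representative round numbers (a $\sim 40$~Mpc distance and a $\sim 1.7$~s gamma-ray delay, attributed entirely to a speed difference), not the published analysis:

```python
# Order-of-magnitude sketch of the GW170817 speed bound; the distance and
# time lag below are representative round numbers, not the published fits.
C = 299_792_458.0            # speed of light [m/s]
MPC = 3.0857e22              # one megaparsec [m]

distance = 40 * MPC          # approximate distance to the source [m]
delay = 1.7                  # observed gamma-ray arrival lag [s], all of it
                             # conservatively attributed to a speed difference

travel_time = distance / C   # light travel time, ~1.3e8 yr in seconds
bound = delay / travel_time  # fractional GW-photon speed difference
print(f"|v_gw - c|/c ~ {bound:.1e}")  # ~4e-16, i.e. a part in 10^15
```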
In particular, in Einstein-aether theory (\ae-theory), though the speeds of linear perturbations can exceed the speed of light, they satisfy an upper bound set by the choice of the $c_i$ parameters. Indeed, massless excitations travel along the null cones of effective metrics built from the metric that governs light propagation and the aether \cite{Eling:2006ec}. This makes the causal structure quasi-relativistic, when one adopts the perspective of the effective metric with the widest null cone \cite{Bhattacharyya:2015gwa}. Stationary BHs turn out to have multiple nested horizons, each of which is a Killing horizon of one of the effective metrics and acts as a causal boundary for a specific excitation \cite{Eling:2006ec,Barausse:2011pu}. In Ho\v rava gravity there is no maximum speed, as already explained in Sec.~\ref{sec:LVtheories}. There is a preferred foliation, all propagating modes have dispersion relations that scale as $\omega^2\propto k^6$ for large wave numbers $k$, and hence there is arbitrarily fast propagation. Even at low momenta, where the dispersion relations are truncated to be linear, there is also an instantaneous (elliptic) mode, both at the perturbative \cite{Blas:2011ni} and the nonperturbative level \cite{Bhattacharyya:2015uxt}. It would be tempting to conclude that BHs cannot exist in Ho\v rava gravity. Nonetheless, there is a new type of causal horizon that allows one to properly define a BH, dubbed the universal horizon \cite{Barausse:2011pu,Blas:2011ni}. It is not a null surface of any metric, but a spacelike leaf of the preferred foliation that cloaks the singularity. Future-directed signals can only cross it in one direction and this shields the exterior from receiving any signal, even an instantaneous one, from the interior.
Universal horizons persist in rotating BHs in Ho\v rava gravity \cite{Barausse:2012qh,Sotiriou:2014gna,Bhattacharyya:2015gwa,Barausse:2013nwa,Barausse:2015frm} and have been rigorously defined without resorting to symmetries in Ref.~\cite{Bhattacharyya:2015gwa}. Even though BHs in Lorentz-violating theories and their GR counterparts have very different causal structures, their exteriors can be very similar and hard to tell apart with electromagnetic observations \cite{Barausse:2011pu,Barausse:2013nwa,Barausse:2015frm}. Nevertheless, the fact that Lorentz-violating theories have additional polarisations and their BHs have hair suggests very strongly that binary systems will emit differently than in GR. Hence, confronting model-dependent waveforms with observations is likely to yield strong constraints on Lorentz violation in gravity. \subsection{Horizonless exotic compact objects\label{sec:ECOs}} \vspace{-3mm} {\it Contributors:} V.~Cardoso, V.~Ferrari, P.~Pani, F.~H.~Vincent \vspace{3mm} BHs are the most dramatic prediction of GR. They are defined by an event horizon, a null hypersurface that causally disconnects the interior from our outside world. While initially considered only as curious mathematical solutions to GR, BHs have by now acquired a central role in astrophysics: they are commonly accepted to be the outcome of gravitational collapse, to power active galactic nuclei, and to grow hierarchically through mergers during galaxy evolution. The BH paradigm explains a variety of astrophysical phenomena and is so far the simplest explanation for all observations. It is, however, useful to remember that the only evidence for BHs in our universe is the existence of dark, compact and massive objects. In addition, BHs come hand in hand with singularities and unresolved quantum effects. Hence, it is useful to propose and to study exotic compact objects (ECOs) -- dark objects more compact than NSs but without an event horizon.
Perhaps the strongest theoretical motivation to study ECOs comes from quantum gravity. Quantum modifications to GR are expected to shed light on theoretical issues such as the pathological inner structure of Kerr BHs and information loss in BH evaporation. While there is a host of recent research on the quantum structure of BHs, it is not clear whether these quantum effects have no consequence for physics outside the horizon or will rather lead to new physics that resolves singularities and does away with horizons altogether. Any such detection, however small the chance, would do no less than shake the foundations of physics. As there is no complete quantum gravity approach available yet, the study of ECOs, at various levels of phenomenological detail, is absolutely crucial to connect to current GW observations. In summary, ECOs should be used both as a testing ground to {\it quantify} the observational evidence for BHs and as a means to understand the signatures of alternative proposals and to search for them in GW and electromagnetic data~\cite{Cardoso:2017cqb}. {\em Classification:} Compact objects can be conveniently classified according to their compactness, $M/R$, where $M$ and $R$ are the mass and (effective) radius of the object, respectively. NSs have $M/R\approx 0.1-0.2$, a Schwarzschild BH has $M/R=1/2$, whereas a nearly extremal Kerr BH has $M/R\approx1$. The study of the dynamics of light rays and of GWs shows that the ability of ECOs to mimic BHs is tied to the existence of an unstable light ring in the geometry~\cite{Abramowicz:2002vt,Cardoso:2017cqb,Cardoso:2017njb}. Accordingly, two important categories of this classification are~\cite{Cardoso:2017cqb,Cardoso:2017njb}: \begin{enumerate} \item \emph{Ultracompact objects (UCOs)}, whose exterior spacetime has a photon sphere. In the static case, this requires $M/R>1/3$. UCOs are expected to be very similar to BHs in terms of geodesic motion.
NSs are not in this category, unless they are unrealistically compact and rapidly spinning. \item \emph{Clean-photon-sphere objects (ClePhOs)}, so compact that the light travel time from the photon sphere to the surface and back is longer than the characteristic time scale, $\tau_{\rm geo}\sim M$, associated with null geodesics along the photon sphere. These objects are therefore expected to be very similar to BHs at least over dynamical time scales $\sim M$. In the static case, this requires $M/R>0.492$~\cite{Cardoso:2017cqb,Cardoso:2017njb}. \end{enumerate} Some models of ECOs have been proposed in attempts to challenge the BH paradigm. Others are motivated by (semiclassical) quantum gravity scenarios that suggest that BHs would either not form at all~\cite{Mazur:2004fk,Mathur:2005zp,Mathur:2008nj,Barcelo:2015noa,Danielsson:2017riq,Danielsson:2017pvl,Berthiere:2017tms,Cardoso:2017njb}, or that new physics should drastically modify the structure of the horizon~\cite{Giddings:2013kcj,Giddings:2014nla,Giddings:2017mym}. In the presence of exotic matter fields (e.g. ultralight scalars), even classical GR can lead to ECOs. The most characteristic case is perhaps that of \emph{boson stars}~\cite{feinblum68,Kaup:1968zz,ruffini69}: self-gravitating solutions of the Einstein-Klein-Gordon theory. They have no event horizon, no hard surface, and are regular in their interior. Depending on the scalar self-interactions, they can be as massive as supermassive compact objects and --~when rapidly spinning~-- as compact as UCOs. Further details about these objects can be found elsewhere~\cite{liebling17}. Though boson stars are solutions of a well-defined theory, most ECO candidates currently arise from phenomenological considerations and employ some level of speculation.
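The compactness thresholds of this classification can be collected into a small utility (a minimal sketch in geometric units $G=c=1$; the function name and the sharp cutoffs, taken from the static-case values quoted above, are ours):

```python
def classify_compact_object(mass: float, radius: float) -> str:
    """Classify a static dark compact object by its compactness M/R in
    geometric units (G = c = 1), using the static-case thresholds quoted
    in the text: photon sphere at M/R > 1/3, ClePhO at M/R > 0.492,
    Schwarzschild BH at M/R = 1/2.  The names and sharp cuts are ours."""
    compactness = mass / radius
    if compactness >= 0.5:
        return "black hole"
    if compactness > 0.492:
        return "ClePhO"
    if compactness > 1 / 3:
        return "UCO"
    return "ordinary compact object"

# A typical NS (M/R ~ 0.1-0.2) falls outside both categories:
print(classify_compact_object(1.4, 10.0))  # ordinary compact object
print(classify_compact_object(1.0, 2.5))   # M/R = 0.40  -> UCO
print(classify_compact_object(1.0, 2.02))  # M/R ~ 0.495 -> ClePhO
```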
For instance, certain attempts to include quantum-gravity effects in the physics of gravitational collapse postulate corrections to the geometry at a Planck distance away from the horizon, regardless of the mass (and, hence, of the curvature scale) of the object. Since $l_{\rm Planck}\ll M$, these objects are naturally classified as ClePhOs. One example is that of fuzzballs, ensembles of horizonless microstates that emerge as rather generic solutions to several string theories~\cite{Mathur:2005zp,Mathur:2008nj}. In these solutions the classical horizon arises as a coarse-grained description of the (horizonless) microstate geometries. Another example is that of so-called gravitational-vacuum stars, or gravastars, ultracompact objects supported by a negative-pressure fluid~\cite{Mazur:2001fv,Mazur:2004fk,Cattoen:2005he,Chirenti:2007mk}, which might arise from one-loop effects in curved spacetime in the hydrodynamical limit~\cite{Mottola:2006ew}. Other examples of ClePhOs inspired by quantum corrections include black stars~\cite{Barcelo:2009tpa}, superspinars~\cite{Gimon:2007ur}, collapsed polymers~\cite{Brustein:2016msz,Brustein:2017kcj}, $2-2$-holes~\cite{Holdom:2016nek}, etc.~\cite{Cardoso:2017njb}. We should highlight that for most of these models, $1-2M/R\sim 10^{-39}-10^{-46}$ for stellar to supermassive dark objects. Thus, their phenomenology is very different from that of boson stars, for which $1-2M/R\sim {\cal O}(0.1)$ at most. The speculative nature of most ECOs implies that their formation process has not been consistently explored (with notable exceptions~\cite{liebling17}). If they exist, they are likely subject to the so-called ergoregion instability. This was first found in Ref.~\cite{Friedman:1978wla} for scalar and electromagnetic perturbations (see also~\cite{CominsSchutz,YoshidaEriguchi}), then shown to affect gravitational perturbations as well in Ref.~\cite{KokkotasRuoffAndersson}, and recently proved rigorously in Ref.~\cite{Moschidis:2016zjy}.
The instability affects any spinning compact object with an ergoregion but without a horizon, and its time scale may be shorter than the age of the Universe~\cite{Cardoso:2007az,Chirenti:2008pf}. The endpoint of the instability is unknown, but a possibility is that the instability removes angular momentum~\cite{Cardoso:2014sna}, leading to a slowly-spinning ECO~\cite{Brito:2015oca}, or perhaps it is quenched by dissipation within the object~\cite{Maggio:2017ivp} (although the effects of viscosity in ECOs are practically unknown). Another interesting question is the outcome of a potential ECO coalescence. All known equilibrium configurations of ECOs have a maximum mass above which the object is unstable against radial perturbations and is expected to collapse. The configuration with maximum mass is also the most compact (stable) one. Therefore, the more an ECO mimics a BH, the closer it is to its own maximum mass. Thus, if two ECOs are compact enough to reproduce the inspiral of a BH coalescence (small enough deviations in the multipole moments, tidal heating, and tidal Love numbers relative to the BH case; see below), their merger would likely produce an ECO with mass above the critical mass of the model. Hence, the final state could be expected to be a BH. In other words, one might face a ``short blanket'' problem: ECOs can mimic well \emph{either} the post-merger ringdown phase of a BH \emph{or} the pre-merger inspiral of two BHs, but they may find it difficult to mimic the entire inspiral-merger-ringdown signal of a BH coalescence with an \emph{ECO+ECO $\to$ ECO} process. The only way out of this problem is to have a mass-radius diagram that resembles that of BHs for a wide range of masses. No classical model is yet known that behaves this way. It is worth noting that, even if the BH paradigm is correct, ECOs might be lurking in the universe along with BHs and NSs.
Modelling the EM and GW signatures of these exotic sources is necessary to detect them, and may even provide clues for other outstanding puzzles in physics, such as DM (see also Section~\ref{Sec:DM} below). \subsection{Testing the nature of compact objects} \vspace{-3mm} {\it Contributors:} V.~Cardoso, V.~Ferrari, M.~Kramer, P.~Pani, F.~H.~Vincent, N.~Wex \vspace{3mm} \subsubsection{EM diagnostics} EM radiation emitted from the vicinity of BH candidates can probe the properties of spacetime in the strong-field region and may constrain the nature of the compact object. We focus our discussion on three popular EM probes, namely \textit{shadows}, \textit{X-ray spectra}, and \textit{quasi-periodic oscillations}. We do not discuss polarization here, nor effects on stellar trajectories. Because all EM tests can eventually be traced back to geodesic motion, EM probes may distinguish between BHs and ECOs with $M/R<0.492$, whereas it is much more challenging to tell a ClePhO from a BH through EM measurements. \paragraph{Shadows} BHs appear on a bright background as a dark patch on the sky, due to photons captured by the horizon. This feature is known as the BH shadow~\cite{Bardeen1973,FalckeMeliaAgol2000}. The Event Horizon Telescope~\cite{Doeleman:2009te}, aiming to obtain sub-millimeter images of the shadow of the supermassive object Sgr~A* at the center of the Milky Way and of the supermassive object at the center of the elliptical galaxy M87, is a strong motivation for these studies. A rigorous and more technical definition of the BH shadow is the following~\cite{Cunha:2018acu}: it is the set of directions on an observer's local sky that asymptotically approach the event horizon when photons are ray traced backwards in time along them. Thus, by this definition, shadows are intrinsically linked with the existence of a horizon and, strictly speaking, an ECO cannot have a shadow.
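For scale, the angular diameter of the shadow of a Schwarzschild BH of mass $M$ at distance $D$ is $2\sqrt{27}\,GM/(c^2 D)$; a minimal sketch, with illustrative Sgr~A*-like parameters (not fitted values):

```python
import math

# Angular diameter of a Schwarzschild shadow, theta = 2*sqrt(27)*G*M/(c^2*D).
# The Sgr A* mass and distance below are illustrative, not fitted values.
G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
C = 299_792_458.0    # speed of light [m/s]
M_SUN = 1.989e30     # solar mass [kg]
PC = 3.0857e16       # parsec [m]
MICROARCSEC = math.pi / (180 * 3600 * 1e6)   # one microarcsecond [rad]

def shadow_diameter_uas(mass_solar: float, distance_pc: float) -> float:
    """Shadow angular diameter in microarcseconds for a Schwarzschild BH."""
    r_g = G * mass_solar * M_SUN / C**2      # gravitational radius [m]
    theta = 2 * math.sqrt(27) * r_g / (distance_pc * PC)
    return theta / MICROARCSEC

# Sgr A*-like numbers: M ~ 4.3e6 Msun at D ~ 8.3 kpc
print(f"{shadow_diameter_uas(4.3e6, 8.3e3):.0f} uas")  # ~53 uas
```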
However, in practice ultracompact horizonless objects (in particular UCOs and ClePhOs) might be very efficient in mimicking the exterior spacetime of a Kerr BH. It is therefore interesting to study photon trajectories in such spacetimes and to contrast them with those occurring in the Kerr case. In the analysis of shadows, one generally either considers parametrized spacetimes~\cite{Johannsen:2011dh,grenzebach14,Ghasemi-Nodehi:2015raa,wang17} (which allow one to tune the departure from Kerr but might not map to known solutions of any theory), or takes into account a specific alternative theoretical framework~\cite{amarilla12,amarilla13,wei13,vincent14,tinchev14,moffat15,schee16,cunha17} or a particular compact object within GR~\cite{hioki09,nedkova13,cunha15,vincent16}. Most of these studies obtain differences with respect to the standard Kerr spacetime that are smaller than the current instrumental resolution ($< 10\,\mu$as). Some studies report more noticeable differences in the case of naked singularities~\cite{hioki09}, exotic matter that violates some energy condition~\cite{nedkova13}, or in models that allow for large values of the non-Kerrness parameters~\cite{tinchev14,moffat15}. Moreover, demonstrating a clear difference in the shadow is not enough to infer that a test can be made, since such a difference might be degenerate with the mass, spin, distance, and inclination of the source. Finally, the fact that horizonless objects lead to shadow-like regions that can bear a clear resemblance to the Kerr case~\cite{vincent16} shows the extreme difficulty of an unambiguous test based on such observables (for prospective tests with current observations see, e.g., Refs.~\cite{2015ApJ...814..115P,Mizuno:2018lxz}). \paragraph{$K\alpha$ iron line and continuum-fitting} The X-ray spectra of BHs in X-ray binaries and AGNs are routinely used to constrain the spin parameter of BHs, assuming a Kerr metric~\cite{reynolds03,mcclintock11}.
In particular, the iron K$\alpha$ emission line, which is the strongest line observed and is strongly affected by relativistic effects, is an interesting probe. Many authors have recently investigated the X-ray spectral observables associated with non-Kerr spacetimes~\cite{Bambi:2015kza}. The same distinction presented above for the tests based on shadows can be made between parametric studies~\cite{johannsen13b,moore15,ni16}, specific alternative theoretical frameworks~\cite{harko09,harko09b,vincent14,moore15,schee09}, and alternative objects within classical GR~\cite{cao16,ni16b}. Although differences with respect to Kerr are commonly found in various frameworks, these are generally degenerate with other parameters, such as the mass, spin and inclination of the source. Moreover, the precise shape of the iron line depends on the subtle radiative transfer in the accretion disk, which is ignored in theoretical studies that generally favor simple analytic models. In this respect the shadow method has a clear advantage, being probably the EM probe least affected by astrophysical systematics. \paragraph{QPOs} Quasi-periodic oscillations (QPOs) are narrow peaks in the power spectra routinely observed in the X-ray light curves of binaries. QPO frequencies are of the order of the Keplerian frequencies in the innermost regions of the accretion flow~\cite{remillard06}. A series of recent works has been devoted to studying the QPO observables associated with alternative compact objects, be it in the context of parametric spacetimes~\cite{johannsen11,bambi16}, alternative theoretical frameworks~\cite{vincent14,maselli15,chen16}, or alternative objects within GR~\cite{franchini17}. Although the QPO frequencies can be measured with great accuracy, QPO diagnostics suffer from the same limitations as the X-ray spectrum: degeneracies and astrophysical uncertainties. The non-Kerr parameters are often degenerate with the object's spin. Moreover, it is currently not even clear what the correct model for QPOs is.
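Both the spin measurements and the QPO frequency scale above trace back to the Kerr ISCO. A minimal sketch of the standard Bardeen-Press-Teukolsky ISCO formula and the corresponding prograde equatorial orbital frequency (the 10 $M_\odot$ mass below is purely illustrative):

```python
import math

# Bardeen-Press-Teukolsky ISCO radius for a Kerr BH and the prograde
# equatorial orbital frequency there, which sets the scale of the highest
# QPO frequencies.  The 10 Msun mass is an illustrative choice.
G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30

def r_isco_hat(chi):
    """Prograde ISCO radius in units of M, for dimensionless spin 0 <= chi <= 1."""
    z1 = 1 + (1 - chi**2)**(1/3) * ((1 + chi)**(1/3) + (1 - chi)**(1/3))
    z2 = math.sqrt(3 * chi**2 + z1**2)
    return 3 + z2 - math.sqrt((3 - z1) * (3 + z1 + 2 * z2))

def f_orbital(M_kg, chi):
    """Orbital frequency [Hz] at the ISCO: M*Omega = 1/(rhat^{3/2} + chi)."""
    rhat = r_isco_hat(chi)
    return c**3 / (2 * math.pi * G * M_kg) / (rhat**1.5 + chi)

print(r_isco_hat(0.0))               # 6.0 (Schwarzschild)
print(f_orbital(10 * M_sun, 0.0))    # ~220 Hz for a 10 Msun BH
```

The steep dependence of $r_{\rm ISCO}$ on $\chi$ (from $6M$ at $\chi=0$ to $M$ at $\chi=1$) is precisely what makes the iron-line and continuum-fitting methods sensitive to spin, and what makes non-Kerr deviations degenerate with it.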
\paragraph{Pulsar-BH systems} High-precision timing observations of radio pulsars provide very sensitive probes of the spacetime in the vicinity of compact objects. Indeed, the first evidence for the existence of gravitational waves was obtained from observations of binary pulsars \cite{tay94}; light propagation in strong gravitational fields can be precisely tested with Shapiro delay experiments \cite{ksm+06}; relativistic spin-precession can be studied by examining pulse shape and polarisation properties of pulsars \cite{kra98,bkk+08}. The very same techniques can be applied to pulsars found around BHs \cite{wk99}. While shadows, accretion-disk spectra, and QPOs probe the near field of the BH, i.e.\ on a scale of a few Schwarzschild radii, one does not expect to find a pulsar that close to a BH. The lifetime of such a pulsar-BH system due to gravitational wave damping is very short, which makes a discovery of such a system extremely unlikely. This is also true for a pulsar around Sgr A$^*$, although observational evidence still points towards an observable pulsar population in the Galactic Centre~\cite{Wharton:2011dv}. Consequently, a pulsar-BH system, once discovered, is expected to provide a far-field test, i.e.\ a test of only the leading multipole moments of the BH spacetime, in particular the mass $M_\bullet$, spin $S_\bullet$ and --~for IMBHs and Sgr~A$^*$~-- the quadrupole moment $Q_\bullet$ \cite{Liu:2011ae,Liu:2014uka,Zhang:2017qbb}. On the one hand, the measurement of $M_\bullet$, $S_\bullet$ and $Q_\bullet$ can be used to test the {\em Kerr hypothesis}. On the other hand, a pulsar can provide a test complementary to the near-field tests, and break potential degeneracies of, for instance, a test based on the shadow of the BH~\cite{Ghasemi-Nodehi:2015raa,Psaltis:2015uza,Mizuno:2018lxz}.
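The Kerr-hypothesis test just mentioned is over-determined: for a Kerr BH the no-hair relations fix the quadrupole in terms of mass and spin, so an independent timing measurement of $(M_\bullet, S_\bullet, Q_\bullet)$ provides a consistency check. A minimal sketch in geometric units ($G=c=1$), with purely illustrative numbers:

```python
# For a Kerr BH the no-hair relation fixes the quadrupole, Q = -chi^2 M^3
# (geometric units G = c = 1, with chi = S/M^2), so independent measurements
# of (M, S, Q) over-determine the spacetime.  Numbers below are illustrative,
# not real measurements.
def kerr_quadrupole(M, S):
    chi = S / M**2          # dimensionless spin
    return -chi**2 * M**3

M = 1.0                     # mass in geometric units (illustrative)
S = 0.6 * M**2              # spin angular momentum corresponding to chi = 0.6
Q_kerr = kerr_quadrupole(M, S)
print(Q_kerr)               # -0.36; a measured Q deviating from this would flag a non-Kerr object
```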
A pulsar in a sufficiently tight orbit (period $\lesssim$ few days) about a stellar-mass BH is expected to show a measurable amount of orbital shrinkage due to the emission of gravitational waves. Alternatives to GR generally show the existence of ``gravitational charges'' associated with additional long-range fields, like e.g.\ the scalar field in scalar-tensor theories. Any asymmetry in such ``gravitational charges'' generally leads to the emission of dipolar radiation, which is a particularly strong source of energy loss, as it appears at the 1.5 post-Newtonian level in the equations of motion. Of particular interest here are theories that give rise to extra gravitational charges only for BHs and therefore evade any present binary pulsar tests. Certain shift-symmetric Horndeski theories are known to have such properties \cite{Yunes:2011we,Yagi:2015oca,Sotiriou:2013qea,Sotiriou:2014pfa}, whereby a star that collapses to a BH suddenly acquires a scalar charge in a nontrivial manner \cite{Benkel:2016rlz,Benkel:2016kcq} (cf.~Sections~\ref{sec:sttheories} and \ref{sec:hairyBHs}). Based on mock timing data, Liu et al.\ \cite{Liu:2014uka} have demonstrated the capability of utilizing a suitable pulsar-BH system to put tight constraints on any form of dipolar radiation. Finally, there are interesting considerations of how a pulsar-BH system could be used to constrain quantum effects related to BHs. For instance, there could be a change in the orbital period caused by the mass loss from enhanced evaporation of the BH, e.g.\ due to an extra spatial dimension. An absence of any such change in the timing data of the pulsar would lead to constraints on the effective length scale of the extra spatial dimension \cite{Simonetti:2010mk}. If quantum fluctuations of the spacetime geometry outside a BH do occur on macroscopic scales, the radio signals of a pulsar viewed in a (nearly) edge-on orbit around a BH could be used to look for such metric fluctuations.
Such fluctuations are expected to modify the propagation of the radio signals and therefore lead to characteristic residuals in the timing data \cite{Estes:2016wgv}. Given the prospects and scientific rewards promised by PSR-BH systems, searches are on-going to discover these elusive objects. Pulsars orbiting stellar-mass BHs are expected to be found in or near the Galactic plane. Since binary evolution requires such a system to survive two supernova explosions, this implies a low systemic velocity, placing it close to its original birth place. On-going deep Galactic plane surveys, such as the High Time Resolution Universe survey \cite{cck+18} or upcoming surveys using the MeerKAT or FAST telescopes, clearly have the potential to uncover such systems. Looking at regions of high stellar density, one can even expect to find millisecond pulsars around BHs due to binary exchange interactions, making globular clusters \cite{Hessels:2014yja} and the Galactic Centre \cite{Eatough:2015jka} prime targets for current and future surveys. As discussed, finding a pulsar orbiting Sgr A* would be particularly rewarding. Past surveys are likely to have been limited by sensitivity and scattering effects due to the turbulent interstellar medium, although the discovery of a radio magnetar in the Galactic Center \cite{efk+13} indicates that the situation may be more complicated than anticipated \cite{ddb+17}. Searches with sensitive high-frequency telescopes will ultimately provide the answer \cite{Goddi:2017pfy}. Meanwhile, sensitive timing observations of pulsars in globular clusters can also probe the proposed existence of Intermediate Mass Black Holes (IMBHs) in the cluster centres by sensing how the pulsars ``fall'' in the cluster potential.
In some clusters, prominent claims \cite{kbl17} can be safely refuted \cite{frk+17}, while in other clusters IMBHs may still exist in the centre \cite{psl+17a,psl+17b} or their potential mass can at least be constrained meaningfully \cite{brf+17}. In summary, current and future radio pulsar observations have the potential to study BH properties over a wide mass range, from stellar-mass to supermassive BHs, providing important complementary information to the observations presented in this chapter. \subsubsection{GW diagnostics} In contrast to EM probes, GW tests can also probe the interior of compact objects and are much less affected by the astrophysical environment~\cite{Barausse:2014tra}. Thus, they are best suited to constraining all classes of ECOs. \paragraph{GW tests based on the ringdown phase} The remnant of a binary merger is a highly distorted object that approaches a stationary configuration by emitting GWs during the ``ringdown'' phase. If the remnant is a BH in GR, the ringdown can be modeled as a superposition of damped sinusoids described by the quasinormal modes (QNMs) of the BH~\cite{Kokkotas:1999bd,Ferrari:2007dd,Berti:2009kk} (see Sec.~\ref{sec:ringdown}). If the remnant is an ECO, the ringdown signal is different: \begin{itemize} \item For UCOs, the ringdown signal can be qualitatively similar to that of a BH, but the QNMs are different from their Kerr counterpart~\cite{Pani:2009ss,Macedo:2013jja,Chirenti:2016hzd}. The rates of binary mergers that allow for QNM spectroscopic tests depend on the astrophysical models of the BH population~\cite{Berti:2016lat}. The estimated rates are lower than $1/{\rm yr}$ for current detectors even at design sensitivity. On the other hand, rates are higher for third-generation, Earth-based detectors and range from a few to $100$ events per year for LISA, depending on the astrophysical model~\cite{Berti:2016lat}.
Even if the ringdown frequencies of a single source are hard to measure with current detectors, coherent mode stacking methods using multiple sources may enhance the ability of aLIGO/aVirgo to perform BH spectroscopy~\cite{Yang:2017zxs}. Such a procedure requires careful control of how the ringdown in alternative theories depends on the parameters of the system (mass, spin, etc.). \item For ClePhOs, the prompt post-merger ringdown signal is identical to that of a BH, because it is excited at the light ring~\cite{Cardoso:2016rao,Cardoso:2017cqb}. However, ClePhOs generically support quasi-bound trapped modes~\cite{1991RSPSA.434..449C,Chandrasekhar:1992ey} which produce a modulated train of pulses at late times. The frequency and damping time of this sequence of pulses are described by the QNMs of the ClePhO (which are usually completely different from those of the corresponding BH)~\cite{Correia:2018apm,Vicente:2018mxl}. These modes were dubbed ``GW echoes'' and appear after a delay time that, in many models, scales as $\tau_{\rm echo}\sim M|\log(1-2M/R)|$~\cite{Cardoso:2016oxy}. Such a logarithmic dependence is key to allowing tests of Planckian corrections at the horizon scale~\cite{Cardoso:2016oxy,Abedi:2016hgu,Cardoso:2017cqb}. Models of ultracompact stars provide GW echoes with a different scaling~\cite{Cardoso:2017njb,Pani:2018flj}, the latter being a possible smoking gun of an exotic state of matter in the merger remnant. \item In addition to gravitational modes, matter modes can be excited in ECOs~\cite{Yunes:2016jcc}. So far, this problem has been studied only for boson stars~\cite{Bezares:2017mzk,Palenzuela:2017kcg} and it is unclear whether matter QNMs would be highly redshifted for more compact ECOs~\cite{Cardoso:2017njb}.
\end{itemize} \paragraph{GW tests based on the inspiral phase} The nature of the binary components also has a bearing on the GW inspiral phase, chiefly through three effects: \begin{itemize} \item \emph{Multipolar structure.} As a by-product of the BH uniqueness and no-hair theorems~\cite{Robinson}, the mass and current multipole moments $(M_{\ell}, {S}_\ell)$ of any stationary, isolated BH can be written only in terms of the mass $M\equiv M_0$ and spin $\chi\equiv {S_1}/{M^2}$. The quadrupole moment of the binary components enters the GW phase at $2$PN relative order, whereas higher multipoles enter at higher PN order~\cite{Blanchet:2013haa}. The multipole moments of an ECO are generically different, e.g.\ ${M}_2^{\rm ECO}={M}_2^{\rm Kerr}+\delta q(\chi, M/R)M^3$, and it is therefore possible to constrain the dimensionless deviation $\delta q$ by measuring the $2$PN coefficient of the inspiral waveform. This was recently used to constrain ${\cal O}(\chi^2)$ parametrized deviations in $\delta q$~\cite{Krishnendu:2017shb}. It should be mentioned that ECOs will generically display higher-order spin corrections in $\delta q$ and that --~at least for the known models of rotating ultracompact objects~\cite{Pani:2015tga,Uchikata:2016qku,Yagi:2015hda}~-- the multipole moments approach those of a Kerr BH in the high-compactness limit. Moreover, the quadrupole PN correction is degenerate with the spin-spin coupling. Such a degeneracy can be broken using the I-Love-Q relations~\cite{Yagi:2013bca,Yagi:2016bkt} for ECOs, as computed for instance in the case of gravastars~\cite{Pani:2015tga,Uchikata:2016qku}. \item \emph{Tidal heating.} In a binary coalescence the inspiral proceeds through energy and angular momentum loss in the form of GWs emitted to infinity. If the binary components are BHs, a fraction of the energy loss is due to absorption at the horizon~\cite{Alvi:2001mx,Hughes:2001jr,Taylor:2008xy,Poisson:2009di,Chatziioannou:2012gq,Cardoso:2012zn}.
This effect introduces a $2.5$PN $\times\log v$ correction to the GW phase of spinning binaries, where $v$ is the orbital velocity. The sign of this correction depends on the spin~\cite{Hughes:2001jr,Taylor:2008xy,Poisson:2009di}, since highly spinning BHs can amplify radiation due to superradiance~\cite{Brito:2015oca}. In the absence of a horizon, GWs are not expected to be absorbed significantly, and tidal heating is negligible~\cite{Maselli:2017cmm,Maggio:2017ivp}. Highly spinning supermassive binaries detectable by LISA will provide a golden test of this effect~\cite{Maselli:2017cmm}. \item \emph{Tidal deformability.} The gravitational field of the companion induces tidal deformations, which produce an effect at $5$PN relative order, proportional to the so-called tidal Love numbers of the object~\cite{PoissonWill}. Remarkably, the tidal Love numbers of a BH are identically zero: for static BHs~\cite{Binnington:2009bb,Damour:2009vw,Fang:2005qq,Gurlebeck:2015xpa}, for spinning BHs to first order in the spin~\cite{Poisson:2014gka,Pani:2015hfa,Pani:2015nua}, and to second order in the spin for axisymmetric perturbations~\cite{Pani:2015hfa}. On the other hand, the tidal Love numbers of ECOs are small but finite~\cite{Pani:2015tga,Uchikata:2016qku,Porto:2016zng,Cardoso:2017cfl}. Thus, the nature of ECOs can be probed by measuring the tidal deformability, similarly to what is done to infer the nuclear equation of state in NS binaries~\cite{Flanagan:2007ix,Hinderer:2007mb,TheLIGOScientific:2017qsa}. Analysis of the LIGO data shows that interesting bounds on the tidal deformability can be imposed already, to the level that some boson star models (approximated through a polytropic fluid) can be excluded~\cite{Johnson-McDaniel:2018uvs}. The tidal Love numbers of a ClePhO vanish logarithmically in the BH limit~\cite{Cardoso:2017cfl}, providing a way to probe horizon scales.
For Planckian corrections near the horizon, the tidal Love numbers are about $4$ orders of magnitude smaller than those of a NS. It is therefore expected that current and future ground-based detectors will not be sensitive enough to detect such a small effect, while LISA might be able to measure the tidal deformability of highly-spinning supermassive binaries~\cite{Maselli:2017cmm}. \end{itemize} Finally, it is possible that the low-frequency modes of ECOs are excited during the inspiral, leading to resonances in the emitted GW flux~\cite{Pani:2010em,Macedo:2013qea,Macedo:2013jja}. Low-frequency modes certainly arise in the gravitational sector, as already discussed. In addition, fluid modes at low frequency can also be excited, although this issue is poorly studied. \paragraph{Challenges in modeling ECO coalescence waveforms} With the notable exception of boson stars~\cite{liebling17}, very little is known about the dynamical formation of isolated ECOs or about their coalescence. While the early inspiral and post-merger phases can be modelled within a PN expansion and perturbation theory, respectively, searches for ECO coalescences require a full inspiral-merger-ringdown waveform. A combination of numerical and semianalytical techniques --~analogous to what is done to model the waveform of BH binaries precisely~\cite{Taracchini:2013rva,Husa:2015iqa,Khan:2015jqa}~-- is still missing. Concerning the post-merger ringdown part alone, it is important to develop accurate templates of GW echoes. While considerable progress has recently been made~\cite{Nakano:2017fvh,Mark:2017dnq,Volkel:2017kfj,Bueno:2017hyj,Maselli:2017tfq,Wang:2018mlp,Correia:2018apm,Wang:2018gin}, a complete template which is both accurate and practical is missing. This is crucial to improve current searches for echoes in LIGO/Virgo data~\cite{Abedi:2016hgu,Ashton:2016xff,Abedi:2017isz,Conklin:2017lwb,Westerweck:2017hus,Abedi:2018pst,Nielsen:2018lkf,Lo:2018sep}.
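To see why the logarithmic scaling of the echo delay gives access to Planckian physics, one can evaluate it directly; the sketch below places the ClePhO surface one Planck length outside the would-be horizon, uses a fiducial remnant mass, and drops model-dependent $O(1)$ prefactors.

```python
import math

# Order-of-magnitude echo delay, tau ~ (GM/c^3) |log(1 - 2M/R)|, for a
# surface a Planck length outside r = 2M.  Remnant mass is fiducial and
# model-dependent O(1) prefactors are dropped.
G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30
l_planck = 1.616e-35                       # m

M_geom = G * (62 * M_sun) / c**2           # 62 Msun remnant, mass in meters
R = 2 * M_geom + l_planck                  # surface just outside 2M
frac = l_planck / R                        # equals 1 - 2M/R, evaluated without cancellation
tau_echo = (M_geom / c) * abs(math.log(frac))
print(f"tau_echo ~ {tau_echo * 1e3:.0f} ms")   # tens of milliseconds
```

Despite the surface sitting $\sim 40$ orders of magnitude in fractional radius from the horizon, the delay comes out only a factor $\sim 10^2$ longer than the light-crossing time $GM/c^3$, which is what makes such corrections observationally accessible.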
Model-independent burst searches have recently been reported, and will be instrumental in the absence of compelling models~\cite{Tsang:2018uie}. \section{The dark matter connection\label{Sec:DM}} \vspace{-3mm} {\it Contributors:} G.~Bertone, D.~Blas, R.~Brito, C.~Burrage, V.~Cardoso, L.~Urena-L\'opez \vspace{3mm} The nature and properties of DM and dark energy in the Universe are among the outstanding open issues of modern Cosmology. They seem to be the agents responsible for the formation of large-scale structure and the present accelerated phase of the cosmic expansion. Quite surprisingly, there is a concordance model that fits all available observations. This model is dubbed $\Lambda$CDM because it assumes that the main matter components at late times are in the form of a cosmological constant for the dark energy~\cite{Carroll:2000fy,Copeland:2006wr} and a pressureless component known as cold DM~\cite{Peter:2012rz}. These two assumptions, together with the theoretical basis of GR, make up a consistent physical view of the Universe~\cite{Ade:2015xua}. The nature of the missing mass in the Universe has proven difficult to determine, because it interacts very feebly with ordinary matter. Very little is known about the fundamental nature of DM, and models range from ultralight bosons with masses $\sim 10^{-22} \mbox{ \rm eV}$ to BHs with masses of order $10\, M_{\odot}$. Looking for matter with unknown properties is extremely challenging, which explains to a good extent why DM has never been directly detected in any experiment so far. However, the equivalence principle upon which GR stands -- tested to remarkable accuracy with known matter -- offers a solid starting point. All forms of energy gravitate and fall similarly in an external gravitational field. Thus, gravitational physics can help us unlock the mystery of DM: even if blind to other interactions, it should still ``see'' gravity.
The feebleness with which DM interacts with baryonic matter, along with its small density in local astrophysical environments, poses the question of how to look for DM signals with GWs. \subsection{BHs as DM} \label{sec:BH} In light of the LIGO discoveries there has been a revival of interest in the possibility that DM could be composed of BHs with masses in the range $1-100 M_{\odot}$ \cite{Kashlinsky:2016sdv,Clesse:2016vqa,Bird:2016dcv,Sasaki:2016jop,Wang:2016ana}. For such BHs to be numerous enough to make up the DM, they would need to be produced from the collapse of large primordial density fluctuations \cite{Carr:1974nx,Carr:1975qj,Carr:2016drx}. The distribution of the BH masses that form depends on the model of inflation \cite{Byrnes:2012yx}. Such BHs can be produced with sufficiently large masses that they would not have evaporated by the current epoch. Alternatively, DM could be composed of ultracompact horizonless objects for which Hawking radiation is suppressed~\cite{Raidal:2018eoo}. Different formation scenarios and constraints on such objects were reviewed in Chapter I, Section~\ref{Sec:PBHandDM}. If all of the DM is composed of such heavy compact objects, a key signature is the frequency of microlensing events \cite{Paczynski:1985jf}. Microlensing is the amplification, for a short period of time, of the light from a star when a compact object passes close to the line of sight between us and the star. How frequent and how strong these events will be depends on the distribution of BH masses \cite{Griest:1990vu, DeRujula:1990wq,Alcock:1996yv}. It has been claimed that DM composed entirely of primordial BHs in this mass range is excluded by microlensing \cite{Carr:2016drx}; however, that study assumed that the BH mass distribution was a delta function.
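The mass dependence of event durations can be made concrete with the Einstein-radius crossing time $t_E = R_E/v$; the distances and transverse velocity below are fiducial choices for lensing towards the Magellanic Clouds, not values taken from the surveys cited here.

```python
import math

# Einstein-radius crossing time t_E = R_E / v, with
# R_E = sqrt(4GM/c^2 * D_l * D_ls / D_s).
# Lens/source distances and velocity are fiducial (assumed) values.
G, c = 6.674e-11, 2.998e8
M_sun, kpc = 1.989e30, 3.086e19

def t_E_days(M_lens, D_l=25 * kpc, D_s=50 * kpc, v=200e3):
    D_ls = D_s - D_l
    r_E = math.sqrt(4 * G * M_lens / c**2 * D_l * D_ls / D_s)
    return r_E / v / 86400.0

print(f"t_E(1 Msun)  ~ {t_E_days(M_sun):.0f} days")
print(f"t_E(10 Msun) ~ {t_E_days(10 * M_sun):.0f} days")   # scales as sqrt(M)
```

Since $t_E \propto \sqrt{M}$, a broadened mass distribution spreads events over a wide range of durations, which is why the shape of the mass function matters for the constraints discussed next.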
If the mass distribution is broadened, the tension with the microlensing data weakens~\cite{Green:2017qoa,Green:2016xgy,Bellomo:2017zsr}, although whether realistic models can be compatible with the data remains a subject of debate~\cite{Clesse:2017bsw}. Further observational signatures include the dynamical heating of dwarf galaxies through two-body interactions between BHs and stars~\cite{Koushiappas:2017chw,Brandt:2016aco}, electromagnetic signatures if regions of high BH density also have high gas densities (such as the center of the galaxy) \cite{Gaggero:2016dpq,Inoue:2017csr}, constraints from the CMB due to energy injection into the plasma in the early universe \cite{Ali-Haimoud:2016mbv,Poulin:2017bwe}, and from the (absence of a) stochastic background of GWs~\cite{Wang:2016ana}. At the very least, primordial BHs in the LIGO mass range can be a component of the DM in our universe. Future GW observations determining the mass and spatial distribution of BHs in our galaxy will be key to testing this hypothesis. \subsection{The DM Zoo and GWs} Despite the accumulation of evidence for the existence of DM, the determination of its fundamental properties remains elusive. Observations related to GWs may change this. GWs and their progenitors `live' in a DM environment to which they are sensitive (at least gravitationally, though other interactions may occur). Furthermore, the DM sector may include new particles and forces that can change the nature and dynamics of the sources. Before making any classification of DM candidates, we recall that the preferred models are those that: {\it i)} explain {\it all} the DM, {\it ii)} include a {\it natural} generation mechanism, {\it iii)} can be added to the SM to solve other (astrophysical or fundamental) puzzles, {\it iv)} can be tested beyond the common DM gravitational effects. None of these properties is really necessary, but their advantages are clear.
A first distinction one can make when classifying DM models is whether the DM is a distribution of fundamental particles or of compact objects. The masses of compact objects are determined by astrophysical evolution from initial overdensities in a matter field. As a consequence, the range and distribution of masses is quite broad. These models and their bounds are discussed in Sec.~\ref{sec:BH} (see also Sec.~\ref{Sec:BHgenesis} for more details about astrophysical BHs). Fundamental particles have a fixed mass, which is constrained to $m_\chi \lesssim M_P$ if gravity is to be treated within an EFT formalism (see \cite{Garny:2015sjg} for a recent study of the top range). Furthermore, there is no candidate within the SM for these fundamental particles. Particles with typical energy scales in the electroweak range (WIMPs) have been preferred in the past because they satisfy the four points above \cite{Bertone:2004pz}. Other popular candidates include axions and sterile neutrinos (see e.g.\ Ref.~\cite{Feng:2010gw}). The first classification of particle models is based on the DM spin. Fermionic DM can only exist with masses above\footnote{All the limits of this section apply to candidates that represent {\it all} the DM. The bounds are less stringent for fractional components.} $m_\chi \gtrsim\,$keV. This is because the phase space available for fermions in virial equilibrium in dwarf spheroidals is limited \cite{Tremaine:1979we}. The limit $m_\chi \gtrsim\,$keV always applies for models where the DM was generated thermally. In this case, the distribution of DM particles is too `hot' for lighter masses and there is a suppression of the growth of cosmological structures at small scales, in tension with observations \cite{Baur:2015jsy}. Other `out-of-equilibrium' production mechanisms allow for `cold' distributions for all possible masses, see e.g.\ \cite{DAgnolo:2017dbv} and references therein.
The extreme situation is that of models where a bosonic candidate is generated `at rest', as happens in the cases of misalignment or inflationary generation, see \cite{Marsh:2015xka}. The fundamental limitation is then $m_\chi \gtrsim 10^{-22}\,$eV, which comes from the wiping out of structure at scales smaller than the de Broglie wavelength of the candidate; this wavelength must be small enough to allow for the existence of dwarf spheroidals. Ultra-light candidates incorporate oscillating modes with coherence times long enough to generate new phenomena as compared to standard candidates (see below). The complexity of the DM sector is also unknown. Models that incorporate natural candidates for DM (e.g.\ supersymmetric models or axion models) provide concrete proposals, but one should keep an open mind when thinking about DM phenomenology. The existence of a DM-SM coupling beyond the gravitational one may imply an extra interaction of gravitating bodies among themselves or with the DM background. This can modify the orbital properties and open new emission channels, if the DM candidate is light enough. Besides, different DM models may admit different DM distributions, depending on their production mechanism and type of interactions. The common feature is the existence of spherical DM halos, but more complex structures (mini-halos, DM disks, breathing modes, \ldots) may occur. Finally, the distribution of DM close to the galactic center is also quite uncertain~\cite{Pato:2015dua}. This is because baryonic effects are more important there. Still, one may be able to distinguish certain features, such as the existence of huge overdensities, or solitons, characteristic of some DM models~\cite{Schive:2014dra}. For instance, the prediction of solitons in the centers of galaxies for masses $m\sim (10^{-22}\div 10^{-21})\,$eV has been recently used in \cite{Bar:2018acw} to claim that this mass range is in tension with data.
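The origin of the $m_\chi \gtrsim 10^{-22}\,$eV bound can be illustrated by evaluating the reduced de Broglie wavelength at a typical dwarf-spheroidal virial velocity; the $v \sim 10$ km/s value below is our assumed fiducial number.

```python
# Reduced de Broglie wavelength lambda_bar = hbar/(m v) for an ultralight
# boson at a fiducial dwarf-spheroidal virial velocity v ~ 10 km/s.
hbar_c = 1.973e-7            # hbar*c in eV*m
c = 2.998e8                  # m/s
kpc = 3.086e19               # m

m_eV = 1e-22                 # boson mass in eV
v = 10e3                     # transverse velocity in m/s (assumed)
lam_bar = hbar_c / (m_eV * v / c)               # meters
print(f"lambda_bar ~ {lam_bar / kpc:.1f} kpc")  # kpc scale
```

The wavelength comes out at the kpc scale, comparable to the size of dwarf spheroidals themselves, which is why lighter masses would wipe out these systems.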
\subsection{The cosmological perspective on DM: scalar field dark matter} Requiring a consistent cosmology limits some of the DM candidates, {\it if} they are to solve both the dark energy and DM puzzles. An intense experimental search in recent years has successfully imposed stringent limits on the interactions between DM and ``ordinary'' matter~\cite{Aprile:2017iyp,Cui:2017nnn,Hoferichter:2017olk}. The Weakly Interacting Massive Particle (WIMP) hypothesis~\cite{Queiroz:2017kxt}, while appealing, is under extreme pressure. Thus, alternative models need to be seriously considered~\cite{Hooper:2017eje}. Among the vast number of possibilities, scalar fields have become increasingly tempting~\cite{Copeland:2006wr,Joyce:2016vqv}. Scalar field DM is a model where the properties of the DM can be represented by a relativistic scalar field $\phi$ endowed with an appropriate scalar potential $V(\phi)$~\cite{Matos:2000ss,Hu:2000ke,Hui:2016ltb,Suarez:2013iw,Marsh:2015xka}. In the relativistic regime, the equation of motion for the scalar is the Klein-Gordon equation $\partial_\mu \partial^\mu \phi - \partial_\phi V=0$, whereas the non-relativistic regime leads to a Schr\"odinger-type equation for the wave function. At the quantum level, the scalar represents the mean field value of a coherent state of boson particles with a large occupation number. Although many possibilities exist for the potential $V(\phi)$ in DM models, it should possess a minimum at some critical value $\phi_c$ around which we can define a mass scale $m$ for the boson particle, $m^2\equiv \partial^2_\phi V(\phi_c)$. The simplest possibility is the parabolic potential $V(\phi) = (1/2) m^2 \phi^2$ (with $\phi_c =0$), but higher-order terms could play a role at higher energies.
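The definition $m^2\equiv\partial^2_\phi V(\phi_c)$ can be checked numerically for any candidate potential; a minimal sketch for the parabolic potential plus an illustrative quartic correction (the $\lambda$ term is our own toy addition, and it does not shift $V''$ at the minimum):

```python
# Finite-difference check that m^2 = V''(phi_c) at the minimum phi_c = 0,
# for V = (1/2) m^2 phi^2 + lam * phi^4.  The quartic term is illustrative.
m, lam = 2.0, 0.1

def V(phi):
    return 0.5 * m**2 * phi**2 + lam * phi**4

h, phi_c = 1e-4, 0.0
V2 = (V(phi_c + h) - 2 * V(phi_c) + V(phi_c - h)) / h**2
print(V2)   # ~ m^2 = 4
```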
One representative example of the scalar field hypothesis is the axion from the Peccei-Quinn mechanism to solve the strong CP problem, for which the potential has the trigonometric form $V(\phi) = m^2 f^2 [1-\cos(\phi/f)]$, where $f$ is the so-called decay constant of the axion particle~\cite{Marsh:2015xka}. In contrast to the parabolic potential, the axion potential is periodic, is bounded from above by $V(\phi) \leq 2 m^2 f^2$, and includes contributions from higher-order terms $\phi^4,\phi^6,\ldots$~\cite{Zhang:2017dpp,Cedeno:2017sou}. For simplicity, we present below a summary of results obtained for the parabolic potential and the constraints arising from cosmological and astrophysical observations. \subsubsection{Cosmological background dynamics} Because of the resemblance of the KG equation to that of the harmonic oscillator, there are two main stages in the evolution of the scalar field: a damped phase when the Hubble parameter satisfies $H \gg m$, during which $\phi \simeq {\rm const.}$, and a stage of rapid field oscillations around the minimum of the potential, corresponding to $H \ll m$. A piece-wise solution for the two regimes can be envisaged from semi-analytical studies, but the real challenge arises in numerical simulations, for which one desires a continuous solution for all the field variables. The two stages can be joined smoothly in numerical simulations~\cite{Urena-Lopez:2015gur}. The evolution of the energy density $\rho_\phi = (1/2) \dot{\phi}^2 + (1/2) m^2\phi^2$ should transition from $\rho_\phi = {\rm const.}$ to $\rho_\phi \propto a^{-3}$ (the behavior expected for a DM component) before the time of radiation-matter equality, in order to obtain the correct amount of DM at the present time. A lower bound on the boson mass, $m \geq 10^{-26}\,{\rm eV}$, arises from this requirement~\cite{Urena-Lopez:2015gur}. \subsubsection{Cosmological linear perturbations} For this to be a successful model, the scalar field DM must allow the formation of structure.
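Before moving on, the two-stage background evolution described above (frozen field for $H\gg m$, then $\rho_\phi\propto a^{-3}$ oscillations) can be verified with a quick toy integration; the radiation-era background $a\propto t^{1/2}$, units with $m=1$, and step sizes below are our own choices, not those of the cited simulations.

```python
# Toy integration of the background equation  phi'' + 3 H phi' + m^2 phi = 0
# in a radiation-dominated universe (a ~ t^(1/2), H = 1/(2t)), with m = 1.
# Early on (H >> m) the field is frozen; once H << m it oscillates and
# rho = phi'^2/2 + m^2 phi^2/2 should redshift as a^-3, i.e. as t^(-3/2).
m = 1.0

def deriv(t, phi, dphi):
    H = 1.0 / (2.0 * t)
    return dphi, -3.0 * H * dphi - m * m * phi

def rk4_step(t, phi, dphi, dt):
    k1p, k1v = deriv(t, phi, dphi)
    k2p, k2v = deriv(t + dt/2, phi + dt/2 * k1p, dphi + dt/2 * k1v)
    k3p, k3v = deriv(t + dt/2, phi + dt/2 * k2p, dphi + dt/2 * k2v)
    k4p, k4v = deriv(t + dt, phi + dt * k3p, dphi + dt * k3v)
    return (phi + dt/6 * (k1p + 2*k2p + 2*k3p + k4p),
            dphi + dt/6 * (k1v + 2*k2v + 2*k3v + k4v))

def rho(phi, dphi):
    return 0.5 * dphi**2 + 0.5 * m**2 * phi**2

t, phi, dphi, dt = 0.01, 1.0, 0.0, 0.005
samples = {}
while t < 200.0:
    phi, dphi = rk4_step(t, phi, dphi, dt)
    t += dt
    for target in (100.0, 200.0):
        if target not in samples and t >= target:
            samples[target] = rho(phi, dphi) * t**1.5   # rho * a^3 up to a constant

ratio = samples[100.0] / samples[200.0]
print(f"rho*a^3 at t=100 vs t=200: ratio = {ratio:.3f}")  # ~1, i.e. rho ~ a^-3
```

The comoving energy density is conserved to within WKB corrections of order $H/m$, confirming the pressureless-matter behavior of the oscillating phase.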
Generically, it is found that small density fluctuations $\delta\rho/\rho$ grow (decay) for wavelengths $k^2 < 2 a^2 m H$ ($k^2 > 2 a^2 m H$), which indicates the existence of an effective time-dependent Jeans wavenumber defined by $k^2_J = 2 a^2 m H$, where $a$ is the scale factor. The most noticeable aspect of the growth of linear perturbations is that the power spectrum has the same amplitude as that of CDM for large enough scales (i.e.\ $k < k_J$), and that a sharp cut-off appears for small enough scales (i.e.\ $k > k_J$). The straightforward interpretation is that SFDM naturally predicts a dearth of small-scale structure in the Universe, and that this dearth is directly related to the boson mass. This time the lower bound on the boson mass is somewhat improved, to $m \geq 10^{-24}\,{\rm eV}$~\cite{Urena-Lopez:2015gur,Hlozek:2014lca}, but one is limited by the non-linear effects on smaller scales to impose a better constraint. \subsubsection{Cosmological non-linear perturbations} Although $N$-body simulations have been adapted to the field description required for SFDM, they cannot capture the full field dynamics, and the best option remains to solve directly the so-called Schr\"odinger-Poisson system. Some studies suggest that the non-linear process of structure formation proceeds as in CDM for large enough scales~\cite{Schive:2014dra,Mocz:2017wlg,Schwabe:2016rze}. The gravitationally bound objects that one could identify with DM galaxy halos all have a common structure: a central soliton surrounded by an NFW-like envelope created by the interference of the Schr\"odinger wave functions~\cite{Schive:2014dra,Schive:2014hza}. In terms of the standard nomenclature, the scalar-field DM model belongs to the non-cusp, or cored, types of DM models (towards which evidence is marginally pointing).
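To attach numbers to the Jeans wavenumber defined above, here is a rough estimate of the corresponding physical length today (a sketch in $\hbar=c=1$ units; the value of $H_0$ and the unit conversions are assumed inputs, not taken from the text):

```python
# Present-day Jeans length lambda_J = 2*pi*a/k_J, with k_J^2 = 2*a^2*m*H,
# evaluated at a = 1 and H = H0 (hbar = c = 1 units throughout).
import math

eV_inv_to_m = 1.97327e-7   # 1 eV^-1 in metres (hbar*c), assumed conversion
kpc_in_m = 3.0857e19       # 1 kpc in metres, assumed conversion

m_boson = 1e-22            # boson mass in eV
H0 = 1.43e-33              # Hubble rate today in eV (~67 km/s/Mpc), assumed

k_over_a = math.sqrt(2.0 * m_boson * H0)   # physical Jeans wavenumber in eV
lambda_J_kpc = 2.0 * math.pi / k_over_a * eV_inv_to_m / kpc_in_m
print(f"lambda_J ~ {lambda_J_kpc:.0f} kpc for m = 1e-22 eV")
```

For $m=10^{-22}\,$eV this gives a few tens of kpc: the scale below which the power spectrum is cut off.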
Simulations suggest a close relationship between the soliton mass $M_s$ and the total halo mass $M_h$, of the form $M_s = \gamma(a) M^{1/3}_h$, so that massive halos also have massive central solitons~\cite{Schive:2014dra,Schive:2014hza,Mocz:2017wlg}. Here $\gamma(a)$ is a time-dependent coefficient of proportionality. These results then predict a minimum halo mass $M_{h,{\rm min}} = \gamma^{3/2}(1) \simeq (m/10^{-22}\, {\rm eV})^{-3/2} \, 10^7 M_\odot$ when the galaxy halo is just the central soliton. But the soliton scale radius $r_s$ is related to its mass via $M_s r_s \simeq 2 (m_{\rm Pl}/m)^2$, where $m_{\rm Pl} = G^{-1/2}$ is the Planck mass. For the aforementioned values of the soliton mass, $r_s = 300 \, {\rm pc}$ if $M_s=10^7 \, M_\odot$, in remarkable agreement with observations~\cite{Urena-Lopez:2017tob} (see, however, \cite{Deng:2018jjz}). Furthermore, the previous relation between the soliton mass and the host halo mass can be used to predict the presence of a second peak in the velocity of rotation curves, closer to the galactic center~\cite{Bar:2018acw}. This prediction is in tension with data for masses in the range $m\sim (10^{-22}\div 10^{-21})\,$eV \cite{Bar:2018acw}. Finally, non-linear results also constrain the scalar mass through comparison with observations of the Lyman-$\alpha$ forest~\cite{Palanque-Delabrouille:2013gaa}. These constraints require dedicated $N$-body simulations with gas and stars, so that the flux of quasars can be calculated. The constraints are obtained indirectly and are subject to uncertainties arising from our limited knowledge of the formation of realistic galaxies. To a first approximation, we can estimate the power spectrum of the transmitted flux of the Lyman-$\alpha$ forest at linear order and obtain a lower limit on the boson mass $m > 10^{-21}\,{\rm eV}$ if SFDM is to fulfill the DM budget completely~\cite{Kobayashi:2017jcf}.
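Taking the quoted soliton mass--radius relation at face value, one can invert it to see which boson mass corresponds to a given core; the sketch below does this for $M_s=10^7\,M_\odot$ and $r_s=300\,$pc (a back-of-the-envelope check in $\hbar=c=1$ units; the unit conversions are assumed numerical values):

```python
import math

# Invert M_s * r_s ~ 2 (m_Pl/m)^2 for the boson mass m (hbar = c = 1).
M_sun_eV = 1.116e66   # solar mass in eV, assumed conversion
pc_eV = 1.564e23      # 1 pc in eV^-1 (3.086e16 m / 1.973e-7 m per eV^-1)
m_Pl_eV = 1.221e28    # Planck mass G^(-1/2) in eV

M_s = 1e7 * M_sun_eV  # soliton mass quoted in the text
r_s = 300.0 * pc_eV   # soliton radius quoted in the text

m_boson = m_Pl_eV * math.sqrt(2.0 / (M_s * r_s))
print(f"m ~ {m_boson:.1e} eV")
```

The result, $m\sim 8\times10^{-22}\,$eV, sits inside the $10^{-22}$--$10^{-21}\,$eV window discussed in the surrounding text.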
In conclusion, scalar-field models of dark energy can account for the whole of DM as well, provided the scalar has a mass of order $10^{-21}\, {\rm eV}$. This model also explains the seemingly cored density profile in the central parts of galaxies. \subsection{The DM environment} In the following we provide a (hopefully agnostic) view on how different DM models may affect GW emission by compact objects. The environment in which compact objects live -- be it the interstellar medium plasma, accretion disks or DM -- can influence the GW signal in different ways: \begin{itemize} \item {\it By modifying the structure of the compact objects themselves.} Accretion disks, for example, alter the spacetime multipolar structure relative to the standard Kerr geometry. Accretion disks are also known to limit the parameter space of Kerr BHs, preventing them from spinning beyond $cJ/(GM^2)\sim 0.998$~\cite{Thorne:1974ve}. Likewise, if DM behaves as heavy particles, its effects parallel those of baryonic matter either in accretion disks or in the interstellar medium. In models where a particle description is insufficient (for example, if DM assumes the form of fundamental light bosons), Kerr BHs can either be unstable, or not be an equilibrium solution of the field equations at all~\cite{Cardoso:2016ryw,Sotiriou:2015pka,Herdeiro:2015waa} (see also the discussion in Section~\ref{Sec:nonKerr}). Some of the simplest forms of axionic DM have a milder effect instead, contributing to the spin-down of astrophysical BHs~\cite{Arvanitaki:2010sy,Brito:2014wla,Brito:2017wnc}. Independently of the nature of DM, compact stars evolving in DM-rich environments may accrete a significant amount of DM: DM is captured by the star due to gravitational deflection and a non-vanishing cross-section for collision with the star material~\cite{Press:1985ug,Gould:1989gw,Goldman:1989nd,Bertone:2007ae,Brito:2015yga}.
The DM material eventually thermalizes with the star, and accumulates inside a finite-size core~\cite{Brito:2015yfh,Brito:2015yga,Gould:1989gw,Goldman:1989nd}. \item {\it By altering the way that GWs are generated.} The different equilibrium configuration of the compact objects, together with the possible self-gravity effects of DM and the different structure of the field equations, gives rise, generically, to a GW output different from that predicted in vacuum GR. \item The environment -- interstellar dust, a cosmological constant or DM -- must of course also affect the way that GWs {\it and} electromagnetic waves propagate. It is an everyday experience that light can be significantly affected by even low-density matter. For example, low-frequency light in a plasma travels at the group velocity $v_g=c\sqrt{1-\omega_p^2/\omega^2}$, with $\omega_p$ the plasma frequency, $\omega_p^2=\frac{n_0 q^2}{4\pi\epsilon_0 \,m}$. Here, $n_0$ is the particle number density, $q$ the electron charge and $m$ its mass. Thus, light is delayed with respect to GWs by an amount $\delta t$ directly proportional to the total distance they both traversed; in terms of the mass density $\rho_0 = n_0 m$ of the charged component, \begin{eqnarray} \delta t&=&\frac{d \rho_0 q^2}{8\pi\epsilon_0\,c\,m^2 \omega^2}\nonumber\\ &=& 6.7\, \frac{\rho_0}{1\,{\rm GeV/cm}^3} \left(\frac{6\,{\rm GHz}}{\omega}\right)^2\,{\rm days}\,, \end{eqnarray} where in the last equality we substituted numbers for GW170817, the first observation of GWs with an electromagnetic counterpart~\cite{GBM:2017lvd}. These numbers could be interesting: given the observed time delay of several days for radio waves, one may derive constraints on models where DM consists of milli-charged matter (i.e., models where $q$ and $m$ may be a fraction of the electron charge and mass). Cold DM affects the propagation of GWs by inducing a small frequency-dependent modification of the propagation speed~\cite{Flauger:2017ged}.
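The days-scale delay quoted in the equation above can be cross-checked numerically (a sketch in SI units; the source distance of $\sim 40\,$Mpc, roughly that of GW170817, is an assumed input):

```python
import math

# Evaluate delta_t = d * rho0 * q^2 / (8*pi*eps0*c*m^2*omega^2) in SI units,
# for rho0 = 1 GeV/cm^3 of electron-like charges and omega = 2*pi * 6 GHz.
q = 1.602e-19        # C
m_e = 9.109e-31      # kg
eps0 = 8.854e-12     # F/m
c = 2.998e8          # m/s
GeV_per_c2_kg = 1.783e-27

d = 40 * 3.086e22            # ~40 Mpc in metres (assumed source distance)
rho0 = GeV_per_c2_kg / 1e-6  # 1 GeV/cm^3 in kg/m^3
omega = 2 * math.pi * 6e9    # angular frequency in rad/s

dt = d * rho0 * q**2 / (8 * math.pi * eps0 * c * m_e**2 * omega**2)
print(f"delta_t ~ {dt/86400:.1f} days")
```

This reproduces the quoted delay to within a factor of order unity; the precise prefactor depends on the assumed distance.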
Furthermore, the effect of viscous fluids or oscillators on the passage of GWs is discussed in Refs.~\cite{Thorne_notes,Annulli:2018quj}. These calculations indicate that the effect may be too small to be measurable in the near future. \end{itemize} These different effects have, so far, been explored and quantified in the context of a few possible mechanisms where DM plays a role. \subsection{Accretion and gravitational drag} \begin{figure}[th] \begin{tabular}{c} \includegraphics[scale=1,clip]{accretion_board4-04} \end{tabular} \caption{\label{Fig_accretion_drag} Depiction of a BBH evolving in a possible interstellar or DM environment. Each individual BH accretes and exerts a gravitational pull on the surrounding matter. Both effects contribute to decelerate the BHs, leading to a faster inspiral. Credit: Ana Sousa.} \end{figure} The formation and growth of compact objects may lead to large overdensities of DM around them \cite{Gondolo:1999ef,Bertone:2005xz,Zhao:2005zr}. Whatever the nature and amount of DM surrounding them, a binary of two BHs or compact stars evolving in a DM-rich environment will be subjected to at least two effects, depicted in Fig.~\ref{Fig_accretion_drag}. As it moves through the medium it accretes the surrounding material, but it also exerts a gravitational pull (known as ``gravitational drag'') on the whole medium, which affects the inspiral dynamics. To quantify these effects, it is important to know how DM behaves. Collisionless DM causes, generically, a gravitational drag different from that of normal fluids~\cite{Macedo:2013qea,Macedo:2013jja}. The gravitational drag caused by DM which is coherent on large scales may be suppressed~\cite{Hui:2016ltb}, but further work is necessary to understand this quantitatively.
The phase evolution of a pair of BHs, taking into account gravitational radiation, accretion and drag, as well as the DM self-gravity, was studied recently~\cite{Barausse:2014tra,Macedo:2013qea,Macedo:2013jja,Eda:2014kra,Yue:2017iwc,Hannuksela:2018izj}. These estimates are all Newtonian, and the results depend crucially on the local DM density. Perhaps more importantly, back-reaction on the DM distribution was not taken into account in a systematic manner. The only available studies including backreaction look only at the very last orbits of the binary, and are unlikely to capture well the full physics of DM dispersal~\cite{Berti:2013gfa}. \subsection{The merger of two dark stars} Gravitational drag and accretion have a cumulative effect: even though their magnitude is small, the effect piles up over a large number of orbits. By contrast, drag and accretion have a tiny effect during the merger. The merger signal mostly carries an imprint of the configuration of the colliding objects. For BHs, the no-hair results strongly suggest that the colliding objects belong to the Kerr family, and it seems hopeless to find any DM imprints here. However, the no-hair results can be bypassed, as they assume stationarity of the spacetime or certain matter properties, as explained in Sec.~\ref{Sec:nonKerr}~\cite{Cardoso:2016ryw,Sotiriou:2015pka,Herdeiro:2015waa}. Detailed investigations of BHs other than Kerr are still to be done. Smoking-gun effects arising from the merger of such objects are, likewise, unknown. As we discussed, compact stars can grow DM cores at their center. It is conceivable that the DM core might imprint a signature on different phases of the coalescence of two such objects. The effects of tidal deformations on the gravitational-wave signal produced by binary DM stars have been investigated in \cite{Maselli:2017vfi}. Moreover, the imprint on the post-merger spectrum, so far only for analog-mechanical models, has been studied in~\cite{Ellis:2017jgp}.
\subsection{Non-perturbative effects: superradiance and spin-down} \label{sec:superradiance} Spinning BHs are huge energy reservoirs, whose rotational energy can ultimately be extracted. In particular, due to the absence of a Pauli exclusion principle for bosons, bosonic waves can effectively extract energy from spinning BHs through a process known as rotational superradiance~\cite{Brito:2015oca}. The superradiant energy extraction might be used to rule out (or detect) ultralight bosons that might be a significant component of DM, such as the QCD axion or axion-like particles predicted by the string axiverse scenario~\cite{Arvanitaki:2010sy}, dark photons~\cite{Pani:2012vp,Pani:2012bp} and even massive spin-2 fields~\cite{Brito:2013wya}. For boson masses in the range $10^{-21}$~--~$10^{-11}$ eV, the Compton wavelength is of the order of the horizon of typical astrophysical BHs, the gravitational coupling of the two objects is strongest, and long-lived quasi-bound states of bosonic particles around BHs can form. For rotating BHs these states are typically superradiant, thus becoming an effective tap on the rotational energy. The extracted energy condenses as a bosonic cloud around the BH, producing ``gravitational atoms''~\cite{Arvanitaki:2010sy,Brito:2014wla,East:2017ovw}. The evolution of these systems leads to several observational channels. For complex bosonic fields, GW emission is suppressed and the end-state of the superradiant instability might be a Kerr BH with bosonic hair~\cite{Herdeiro:2014goa,Herdeiro:2016tmi}. These solutions are themselves unstable~\cite{Ganchev:2017uuo}, which, over the age of the Universe, would likely lead to a slowly-rotating BH surrounded by a bosonic cloud populating the most unstable modes.
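The mapping between the quoted boson-mass range and BH masses follows from requiring the gravitational coupling $\alpha = GMm_b/(\hbar c)$ (horizon size over Compton wavelength) to be of order unity; the sketch below makes this quantitative (the choice $\alpha=0.5$ is an illustrative assumption):

```python
import math

# BH mass for which the gravitational coupling
# alpha = r_g / lambda_C = G*M*m_b/(hbar*c) reaches a given value.
G = 6.674e-11        # m^3 kg^-1 s^-2
hbar = 1.055e-34     # J s
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg
eV_per_c2_kg = 1.783e-36

def bh_mass_solar(m_boson_eV, alpha=0.5):
    m_b = m_boson_eV * eV_per_c2_kg
    return alpha * hbar * c / (G * m_b) / M_sun

print(f"m = 1e-11 eV -> M ~ {bh_mass_solar(1e-11):.0f} M_sun")
print(f"m = 1e-21 eV -> M ~ {bh_mass_solar(1e-21):.1e} M_sun")
```

Boson masses near $10^{-11}\,$eV pair with stellar-mass BHs and those near $10^{-21}\,$eV with the heaviest supermassive BHs, spanning the astrophysical BH mass range.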
On the other hand, for real bosonic fields this cloud would disperse over long timescales through the emission of nearly-monochromatic GWs, either observable individually or as a very strong stochastic GW background~\cite{Arvanitaki:2014wva,Arvanitaki:2016qwi,Yoshino:2014wwa,Baryakhtar:2017ngi,Brito:2017zvb,Brito:2017wnc}. These are appealing GW emitters, with appropriate amplitude and frequency range for Earth- and space-based detectors. Although estimates from Refs.~\cite{Brito:2017zvb,Brito:2017wnc} suggest that current LIGO data can already be used to constrain some range of scalar field masses, an analysis of real data has not yet been performed. These models also predict that BHs affected by the instability would spin down in the process, and therefore the mere observation of rotating BHs can place severe limits on these particles, thereby strongly constraining the theory. In fact, observations of spinning BHs can already be used to bound the photon~\cite{Pani:2012vp,Pani:2012bp} and graviton masses~\cite{Brito:2013wya}. These studies demonstrate that BHs have an enormous potential as theoretical laboratories, where particle physics can be put to the test using current observations. All these studies neglect the possible decay of ultralight bosonic fields into Standard Model particles, which is justified given the current constraints on ultralight bosons in this mass range (see e.g.~\cite{Essig:2013lka,Marsh:2015xka}). However, the superradiant instability leads to the generation of boson condensates around BHs with a very large particle number, which could enhance their decay through channels other than the gravitational one. For the coupling typical of the QCD axion with photons, stimulated decay into photon pairs becomes significant for axion masses $\gtrsim 10^{-8}$ eV, and consequently for BHs with masses $\lesssim 0.01 M_{\odot}$~\cite{Rosa:2017ury}. If such systems exist, they would lead to extremely bright lasers with frequencies in the radio band.
For the expected mass of the QCD axion, $\sim 10^{-5}$ eV, such a lasing effect would occur around primordial BHs with masses around $\sim 10^{24}\,{\rm kg}$ and lead to millisecond radio bursts, which could suggest a link to the observed fast radio bursts~\cite{Rosa:2017ury}. In addition to these effects, the coupling of light DM bosons with Standard Model particles may also render NSs unstable to superradiant amplification. Rotational superradiance is in fact a generic process and can occur in any rotating object, as long as some sort of dissipation mechanism is at work~\cite{Brito:2015oca}. Thus, even NSs can be superradiantly unstable against some models of DM, such as dark photons coupled to charged matter, which can already be used to put constraints on these models~\cite{Cardoso:2017kgn}. An important question that remains to be fully investigated is the following: if ultralight bosons do exist and we detect them through one or several of these channels, will we be able to start using compact objects to do precision particle physics? In particular, can one infer fundamental properties such as their mass, spin and fundamental interactions? This will require a better understanding of higher-spin fields around rotating BHs. Although some progress has been made in dealing with massive vector fields using time-domain simulations, either in full GR~\cite{Zilhao:2015tya,East:2017ovw} or by fixing the background geometry to be a Kerr BH~\cite{Witek:2012tr,East:2017mrj}, frequency-domain computations, which are more appropriate to span the parameter space, have mainly been dealt with by employing a small-spin approximation~\cite{Pani:2012vp,Pani:2012bp,Brito:2013wya}, an analytical matched-asymptotics approach~\cite{Baryakhtar:2017ngi} or a field-theory approach~\cite{Endlich:2016jgc}, these last two being valid only in the Newtonian limit.
Computations of the instability rates without employing any of these approximations have only recently started to become available~\cite{Cardoso:2018tly,Frolov:2018ezx}. A computation of the stochastic background coming from vector or tensor fields is also still lacking. Finally, besides some estimates of the GWs from a bosenova collapse due to axion self-interactions~\cite{Yoshino:2012kn}, a detailed analysis of how self-interactions or interactions with other fields could affect the GW emission and its detectability is still missing. \subsection{Pulsar timing} The gravitational drag from particle DM candidates on the orbital motion of binary pulsars has been studied in \cite{Pani:2015qhr,Caputo:2017zqh}. This may be important to constrain DM models that generate a thick DM disk \cite{Caputo:2017zqh} or to study the DM density closer to the galactic center, if a pulsar binary is discovered there and can be timed to high precision. DM, especially in the form of light bosons, can also produce other peculiar effects, accessible through EM observations. In models where DM comes in the form of an ultralight boson, the DM configuration is able to support itself through pressure, which is in turn caused by a time-dependence of the field. This time-periodicity gives rise to an oscillating gravitational potential, which is able to produce tiny, but potentially measurable, differences in the time-of-arrival signal from precise clocks, such as pulsars~\cite{Khmelnitsky:2013lxt,Blas:2016ddr}. \subsection{Mini-galaxies around BHs} In addition, in these theories, BHs can grow (permanent or long-lived) time-oscillating, non-axisymmetric bosonic structures~\cite{Brito:2015oca}. Planets or stars in the vicinity of these BHs are subject to a periodic forcing, leading to Lindblad and co-rotation resonances.
This phenomenon is akin to the forcing that spiral arms exert on the stars and gas of a galaxy, and it is conceivable that the same type of pattern appears, on smaller scales, around supermassive hairy BHs~\cite{Ferreira:2017pth,Hannuksela:2018izj}. \subsection{GW detectors as DM probes} Finally, there are interesting proposals to use the GW detectors themselves as probes of DM, when used as extremely sensitive accelerometers~\cite{Bekenstein:2006fi,Hall:2016usm,Englert:2017det}.
\section{Introduction} The cornerstones of Auslander-Reiten duality are the following formulas for modules over an artin algebra $\Lambda$: \[D\operatorname{Ext}^1_\Lambda(C,-)\cong\operatorname{\overline{Hom}}_\Lambda(-,D\operatorname{Tr} C)\quad\text{and}\quad \operatorname{Ext}^1_\Lambda(-,D\operatorname{Tr} C)\cong D\operatorname{\underline{Hom}}_\Lambda(C,-).\] They were established by Auslander and Reiten \cite{AR1975} and later generalised to modules over arbitrary rings \cite{Au1978}. The crucial ingredient is the explicit construction of the \emph{Auslander-Reiten translate} by taking the dual of the transpose $D\operatorname{Tr} C$ of a finitely presented module $C$. There are several options to generalise this. A very elegant approach due to Bondal and Kapranov \cite{BK1989} uses the notion of a \emph{Serre functor} for a triangulated category. In particular, this reveals the close connection between Auslander-Reiten duality and Serre duality. For abelian categories, Auslander and Reiten established in \cite{AR1974} a generalisation by introducing the concept of a \emph{dualising variety}. Further approaches include the work of Reiten and Van den Bergh for hereditary abelian categories \cite{RV2002} and that of Lenzing and Zuazua \cite{LZ2004}. The aim of this paper is to establish Auslander-Reiten duality more generally for Grothendieck abelian categories that have a sufficient supply of finitely presented objects. We mimic the construction of the Auslander-Reiten translate by invoking the existence of flat covers in certain functor categories. In fact, we show that the Auslander-Reiten translate is the representing object of a specific functor; so our approach is somewhat similar to Neeman's account of Grothendieck duality via Brown representability \cite{Ne1996}. Motivated by the general setting of Grothendieck abelian categories, we obtain a coherent formulation of Auslander-Reiten duality which seems to be new even for module categories.
For a finitely presented module we provide an explicit construction of the Auslander-Reiten translate; it is a modification of the original construction due to Auslander and Reiten. For an abelian category $\A$ let $\overline\A$ denote the \emph{stable category modulo injectives}, which is obtained from $\A$ by identifying two morphisms $\phi,\phi'\colon X\to Y$ if \[\operatorname{Ext}^1_\A(-,\phi)=\operatorname{Ext}^1_\A(-,\phi').\] When $\A$ has enough injective objects this means $\phi-\phi'$ factors through an injective object. We write $\operatorname{\overline{Hom}}_\A(-,-)$ for the morphisms in $\overline\A$. Analogously, the \emph{stable category modulo projectives} $\underline\A$ is defined. \begin{thm}\label{th:intro} Let $\A$ be a Grothendieck abelian category that is locally finitely presented. Fix a finitely presented object $C$ and set $\Gamma=\operatorname{End}_\A(C)$. Then for every injective $\Gamma$-module $I$ there exists an object $\tau_C(I)$ in $\overline\A$ and a natural isomorphism \begin{equation}\label{eq:repres} \operatorname{Hom}_\Gamma(\operatorname{Ext}^1_\A(C,-),I)\cong\operatorname{\overline{Hom}}_\A(-,\tau_C(I)). \end{equation} \end{thm} We refer to Theorem~\ref{th:main} for the proof of this result and continue with some consequences. Another version of Auslander-Reiten duality involves the stable category modulo projectives. We say that $\A$ has \emph{enough projective morphisms} if every object $X$ in $\A$ admits an epimorphism $\pi\colon X' \to X$ such that $\operatorname{Ext}^1_\A(\pi,-)=0$. \begin{cor}\label{co:intro1} For an injective $\Gamma$-module $I$ there is a natural monomorphism \begin{equation}\label{eq:proj-repres} \operatorname{Ext}^1_\A(-,\tau_C(I))\longrightarrow \operatorname{Hom}_\Gamma(\operatorname{\underline{Hom}}_\A(C,-),I) \end{equation} which is an isomorphism when $\A$ has enough projective morphisms. 
\end{cor} A necessary and sufficient condition for \eqref{eq:proj-repres} to be an isomorphism is given in Theorem~\ref{th:second-main}. For an intriguing symmetry between \eqref{eq:repres} and \eqref{eq:proj-repres}, see Appendix~\ref{ap:ext}. Note that the r\^{o}les of \eqref{eq:repres} and \eqref{eq:proj-repres} are quite different. The first one provides the Auslander-Reiten translate as a representing object, while the second seems to be more suitable for applications. For instance, \eqref{eq:proj-repres} is used for constructing almost split sequences. Also, \eqref{eq:proj-repres} identifies with Serre duality for categories of quasi-coherent sheaves over projective schemes. There is an equivalent formulation of the isomorphism \eqref{eq:repres} in terms of the defect of an exact sequence \cite{Au1978}. Given an exact sequence \[\xi\colon 0\longrightarrow X\longrightarrow Y\longrightarrow Z\longrightarrow 0\] in $\A$, the \emph{covariant defect} $\xi_*$ and the \emph{contravariant defect} $\xi^*$ are defined by the exactness of the following sequences: \begin{gather*} 0\longrightarrow \operatorname{Hom}_\A(Z,-)\longrightarrow\operatorname{Hom}_\A(Y,-)\longrightarrow\operatorname{Hom}_\A(X,-)\longrightarrow \xi_*\longrightarrow 0\\ 0\longrightarrow \operatorname{Hom}_\A(-,X)\longrightarrow\operatorname{Hom}_\A(-,Y)\longrightarrow\operatorname{Hom}_\A(-,Z)\longrightarrow \xi^*\longrightarrow 0 \end{gather*} \begin{cor} For an injective $\Gamma$-module $I$ there is a natural isomorphism \[\operatorname{Hom}_\Gamma(\xi^*(C),I)\cong\xi_*(\tau_C(I)).\] \end{cor} In applications one often deals with an abelian category that is $k$-linear over a commutative ring $k$. 
In that case the isomorphism \eqref{eq:repres} gives for any injective $k$-module $I$ a natural isomorphism \begin{equation}\label{eq:k-repres} \operatorname{Hom}_k(\operatorname{Ext}^1_\A(C,-),I)\cong\operatorname{\overline{Hom}}_\A(-,\tau(C,I)) \end{equation} by setting $\tau(C,I)=\tau_C(\operatorname{Hom}_k(\Gamma,I))$. For instance, when $\A$ is the category of modules over a $k$-algebra $\Lambda$ and $C$ is a finitely presented $\Lambda$-module, then \[\tau(C,I)=\operatorname{Hom}_k(\operatorname{Tr} C,I).\] A case of particular interest is given by a non-singular projective scheme $\bbX$ of dimension $d\ge 1$ over a field $k$. For the category $\A$ of quasi-coherent $\mathcal O_\bbX$-modules and a coherent $\mathcal O_\bbX$-module $C$ we have \[\tau(C,k)=\Sigma^{d-1}(C\otimes_{\bbX}\omega_\bbX)\] where $\omega_\bbX$ is the dualising sheaf and $\Sigma^{d-1}$ denotes the $(d-1)$st syzygy in a minimal injective resolution. In that case the isomorphism \eqref{eq:k-repres} is a variation of Serre duality \cite{Gr1968,Se1955}. In fact, a more familiar form of Serre duality is given by the natural isomorphism \[\operatorname{Ext}^{1}_{\bbX}(-, \Sigma^{d-1}(C\otimes_\bbX\omega_\bbX))\cong \operatorname{Ext}^{d}_{\bbX}(-, C\otimes_\bbX\omega_\bbX)\cong \operatorname{Hom}_k(\operatorname{Hom}_\bbX(C,-),k)\] which identifies with \eqref{eq:proj-repres} since $\operatorname{\underline{Hom}}_\bbX(C,-)=\operatorname{Hom}_\bbX(C,-)$. This provides a precise connection between Auslander-Reiten duality and Serre duality. 
For an arbitrary Grothendieck abelian category the construction of the \emph{Auslander-Reiten translate} $\tau_C(I)$ is far from explicit; it involves the existence of flat covers which was an open problem for about twenty years \cite{BEE2001}.\footnote{There is an alternative proof of Theorem~\ref{th:intro} which obtains the Auslander-Reiten translate from Brown representability, using the fact that the homotopy category of complexes of injective objects $\mathbf K(\operatorname{Inj}\A)$ is a compactly generated triangulated category \cite{Kr2015,St2014}.} When $\A$ is the category of modules over a ring, we provide an explicit description of the Auslander-Reiten translate, making also the connection with the dual of the transpose of Auslander and Reiten; see Definition~\ref{de:DTr} and Theorem~\ref{th:defect1}. This paper has three parts. First we deal with the general case of a Grothendieck abelian category, then we consider the Auslander-Reiten translate for module categories, and the final section is devoted to Auslander-Reiten duality for categories of quasi-coherent sheaves. \section{Auslander-Reiten duality for Grothendieck abelian categories} In this section we introduce the Auslander-Reiten translate for Grothendieck abelian categories that are locally finitely presented (Theorem~\ref{th:main}). Following \cite{Br1970}, a Grothendieck abelian category $\A$ is \emph{locally finitely presented} if the finitely presented objects generate $\A$. Recall that an object $X$ in $\A$ is \emph{finitely presented} if the functor $\operatorname{Hom}_\A(X,-)$ preserves filtered colimits. We denote by $\operatorname{fp}\A$ the full subcategory of finitely presented objects in $\A$. Note that the isomorphism classes of finitely presented objects form a set when $\A$ is locally finitely presented. We begin with some preparations. \subsection*{Modules} Let $\A$ be an additive category. 
We write $(\A^\mathrm{op},\Ab)$ for the category of additive functors $\A^\mathrm{op}\to\Ab$ where $\Ab$ denotes the category of abelian groups. The morphisms between functors are the natural transformations and we obtain an abelian category. Note that (co)kernels and (co)products are computed pointwise: for instance, a sequence $X\to Y\to Z$ of morphisms in $(\A^\mathrm{op},\Ab)$ is exact if and only if the sequence $X(A)\to Y(A)\to Z(A)$ is exact in $\Ab$ for all $A$ in $\A$. When $\A$ is essentially small, then the morphisms between two functors in $(\A^\mathrm{op},\Ab)$ form a set. Now fix a set $\mathcal C$ of objects in $\A$ and view $\mathcal C$ as a full subcategory of $\A$. We set $\operatorname{Mod}\mathcal C=(\mathcal C^\mathrm{op},\Ab)$ and call the objects \emph{$\mathcal C$-modules}. For example, if $\mathcal C$ consists of one object $C$, then $\operatorname{Mod}\mathcal C$ is the category of modules over the endomorphism ring of $C$. \subsection*{Restriction and coinduction} Let $\A$ be an additive category. For a full subcategory $\mathcal C\subseteq\A$ there is the \emph{restriction functor} \[(\A^\mathrm{op},\Ab)\longrightarrow (\mathcal C^\mathrm{op},\Ab),\qquad F\mapsto F|_\mathcal C\] and its right adjoint, the \emph{coinduction functor} \[\operatorname{coind}_\mathcal C\colon (\mathcal C^\mathrm{op},\Ab)\longrightarrow (\A^\mathrm{op},\Ab)\] given by \[\operatorname{coind}_\mathcal C I(X)=\operatorname{Hom}(\operatorname{Hom}_\A(-,X)|_\mathcal C,I)\qquad \text{for }I\in (\mathcal C^\mathrm{op},\Ab),\,X\in\A.\] Thus for $F\in (\A^\mathrm{op},\Ab)$ and $I\in (\mathcal C^\mathrm{op},\Ab)$, there is an isomorphism \begin{equation}\label{eq:coind} \operatorname{Hom}(F|_\mathcal C,I)\cong \operatorname{Hom}(F,\operatorname{coind}_\mathcal C I). \end{equation} \begin{lem}\label{le:coind} The functor $\operatorname{coind}_\mathcal C$ preserves injectivity. 
\end{lem} \begin{proof} The restriction functor is exact, and any right adjoint of an exact functor preserves injectivity. \end{proof} \subsection*{Finitely presented functors} Let $\A$ be an additive category. We denote by $\operatorname{Fp}(\A^\mathrm{op},\Ab)$ the category of finitely presented functors $F\colon\A^\mathrm{op}\to\Ab$. Recall that $F$ is \emph{finitely presented} (or \emph{coherent}) if it fits into an exact sequence \[\operatorname{Hom}_\A(-,X)\longrightarrow \operatorname{Hom}_\A(-,Y)\longrightarrow F\longrightarrow 0.\] Note that $\operatorname{Fp}(\A^\mathrm{op},\Ab)$ is an abelian category when $\A$ admits kernels. Then the assignment $X\mapsto\operatorname{Hom}_\A(-,X)$ identifies $\A$ with the full subcategory of projective objects in $\operatorname{Fp}(\A^\mathrm{op},\Ab)$ by Yoneda's lemma. \subsection*{Flat functors and flat covers} Let $\mathcal C$ be an essentially small additive category. We consider the category $(\mathcal C^\mathrm{op},\Ab)$ of additive functors $F\colon\mathcal C^\mathrm{op}\to\Ab$. Recall that $F$ is \emph{flat} if it can be written as a filtered colimit of representable functors. The following result describes the connection between locally finitely presented categories and categories of flat functors. \begin{thm}[{Breitsprecher \cite{Br1970}}]\label{th:flat} Let $\A$ be a locally finitely presented Grothendieck abelian category. Then the functor \begin{equation*}\label{eq:flat} \A\longrightarrow ((\operatorname{fp}\A)^\mathrm{op},\Ab),\qquad X\mapsto \operatorname{Hom}_\A(-,X)|_{\operatorname{fp}\A} \end{equation*} identifies $\A$ with the full subcategory of flat functors $(\operatorname{fp}\A)^\mathrm{op}\to\Ab$. Moreover, the functor admits an exact left adjoint.\qed \end{thm} A morphism $\pi\colon F\to G$ in $(\mathcal C^\mathrm{op},\Ab)$ is a \emph{flat cover} of $G$ if the following holds: \begin{enumerate} \item $F$ is flat and every morphism $F'\to G$ with $F'$ flat factors through $\pi$. 
\item $\pi$ is \emph{right minimal}, that is, an endomorphism $\phi\colon F\to F$ satisfying $\pi\phi=\pi$ is invertible. \end{enumerate} A \emph{minimal flat presentation} of $G$ is an exact sequence \[F_1\longrightarrow F_0\stackrel{\pi}\longrightarrow G\longrightarrow 0\] such that $F_0\to G$ and $F_1\to\operatorname{Ker}\pi$ are flat covers. A \emph{projective cover} and a \emph{minimal projective presentation} are defined analogously, replacing the term flat by projective.\footnote{This definition of a projective cover is equivalent to the usual one which requires the kernel to be superfluous.} \begin{thm}[{Bican--El Bashir--Enochs \cite{BEE2001}}] Every additive functor $\mathcal C^\mathrm{op}\to\Ab$ admits a flat cover.\qed \end{thm} The following consequence is straightforward; see \cite[Theorem~2.2]{Kr2014}. \begin{cor}\label{co:flatcover} Let $\A$ be a locally finitely presented Grothendieck abelian category. Then every functor $F\colon \A^\mathrm{op}\to\Ab$ that preserves filtered colimits belongs to $\operatorname{Fp}(\A^\mathrm{op},\Ab)$ and admits a minimal projective presentation. \end{cor} \begin{proof} Choose a minimal flat presentation of $F|_{\operatorname{fp}\A}$ and apply Theorem~\ref{th:flat}. \end{proof} The next lemma will be needed to identify injective objects in a locally finitely presented Grothendieck abelian category. \begin{lem}\label{le:domdim} Let $\mathcal C$ be an essentially small additive category and consider the following conditions in $(\mathcal C^\mathrm{op},\Ab)$. \begin{enumerate} \item Given a minimal injective copresentation $0\to G\to I^0\to I^1$ such that $G$ is flat, then $I^0$ and $I^1$ are flat. \item Given a minimal flat presentation $F_1\to F_0\to G\to 0$ such that $G$ is injective, then $F_0$ and $F_1$ are injective. \end{enumerate} Then \emph{(1)} implies \emph{(2)}. \end{lem} \begin{proof} Fix a minimal flat presentation $F_1\to F_0\xrightarrow{\pi} G\to 0$ such that $G$ is injective. 
Let $F_0\to E(F_0)$ be an injective envelope. Then $\pi$ factors through this since $G$ is injective. On the other hand, the morphism $E(F_0)\to G$ factors through $\pi$ since $E(F_0)$ is flat. The minimality of $\pi$ implies that $F_0$ is a direct summand of $E(F_0)$, and therefore $F_0$ is injective. Now choose a minimal injective copresentation $0\to F_1\to I^0\to I^1$. This gives rise to the following commutative diagram with exact rows. \[\begin{tikzcd} F_1\arrow{r}\arrow{d}& F_0\arrow{r}\arrow{d}&G\arrow{r}\arrow{d}&0\\ I^0\arrow{r}& F_0\oplus I^1\arrow{r}&H\arrow{r}&0 \end{tikzcd}\] The vertical morphisms are monomorphisms, and therefore $G\to H$ splits. A left inverse $H\to G$ induces the following commutative diagram with exact rows since $I^0$ and $I^1$ are flat. \[\begin{tikzcd} I^0\arrow{r}\arrow{d}& F_0\oplus I^1\arrow{r}\arrow{d}&H\arrow{r}\arrow{d}&0\\ F_1\arrow{r}& F_0\arrow{r}&G\arrow{r}&0 \end{tikzcd}\] Now the minimality of the flat presentation implies that the composition $F_1\to I^0\to F_1$ is invertible. Thus $F_1$ is injective. \end{proof} Let $\A$ be a locally finitely presented Grothendieck abelian category. Set $\mathcal C=\operatorname{fp}\A$ and consider the functor $h\colon\A\to (\mathcal C^\mathrm{op},\Ab)$ from Theorem~\ref{th:flat}. Then $X\in\A$ is injective if and only if $h(X)$ is injective, since $h$ has an exact left adjoint. In fact, $h$ takes an injective copresentation in $\A$ to one in $(\mathcal C^\mathrm{op},\Ab)$. Thus $\mathcal C$ satisfies condition (1) in Lemma~\ref{le:domdim}. \begin{cor}\label{co:domdim} Let $\A$ be a locally finitely presented Grothendieck abelian category and consider a minimal projective presentation \[\operatorname{Hom}_\A(-,X_1)\longrightarrow \operatorname{Hom}_\A(-,X_0)\longrightarrow F\longrightarrow 0\] of a functor $F$ in $\operatorname{Fp}(\A^\mathrm{op},\Ab)$.
If $F|_{\operatorname{fp}\A}$ is an injective object in $((\operatorname{fp}\A)^\mathrm{op},\Ab)$, then $X_0$ and $X_1$ are injective objects in $\A$. \end{cor} \begin{proof} The sequence \[\operatorname{Hom}_\A(-,X_1)|_{\operatorname{fp}\A}\longrightarrow \operatorname{Hom}_\A(-,X_0)|_{\operatorname{fp}\A}\longrightarrow F|_{\operatorname{fp}\A}\longrightarrow 0\] is a minimal flat presentation of $F|_{\operatorname{fp}\A}$ in $((\operatorname{fp}\A)^\mathrm{op},\Ab)$. Thus the assertion follows from Lemma~\ref{le:domdim}. \end{proof} \subsection*{The defect of an exact sequence} We recall the following notion from \cite[II.3]{Au1978}. Let $\A$ be an abelian category. For an exact sequence \[\xi\colon 0\longrightarrow X\longrightarrow Y\longrightarrow Z\longrightarrow 0\] in $\A$ the \emph{covariant defect} $\xi_*$ and the \emph{contravariant defect} $\xi^*$ are defined by the exactness of the following sequences: \begin{gather*} 0\longrightarrow \operatorname{Hom}_\A(Z,-)\longrightarrow\operatorname{Hom}_\A(Y,-)\longrightarrow\operatorname{Hom}_\A(X,-)\longrightarrow \xi_*\longrightarrow 0\\ 0\longrightarrow \operatorname{Hom}_\A(-,X)\longrightarrow\operatorname{Hom}_\A(-,Y)\longrightarrow\operatorname{Hom}_\A(-,Z)\longrightarrow \xi^*\longrightarrow 0 \end{gather*} The functors $\xi_*\colon\A\to\Ab$ given by the exact sequences $\xi$ in $\A$ form an abelian category, with morphisms the natural transformations. We denote this category by $\operatorname{Eff}(\A,\Ab)$, because the objects are precisely the finitely presented functors $F\colon\A\to\Ab$ that are \emph{locally effaceable} \cite{Gr1957}, that is, for each $x\in F(X)$ there exists a monomorphism $\phi\colon X\to Y$ such that $F(\phi)(x)=0$. 
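For instance, take $\A=\Ab$ and the non-split exact sequence \[\xi\colon 0\longrightarrow \bbZ\xrightarrow{\ p\ }\bbZ\longrightarrow\bbZ/p\longrightarrow 0\] for a prime $p$. Evaluating the defining sequences gives $\xi_*(M)\cong M/pM$ and $\xi^*(M)\cong\operatorname{Coker}(\operatorname{Hom}(M,\bbZ)\to\operatorname{Hom}(M,\bbZ/p))$ for every abelian group $M$; in particular $\xi^*(\bbZ/p)\cong\bbZ/p$, since $\operatorname{Hom}(\bbZ/p,\bbZ)=0$. Note that $\xi^*$ vanishes on projective and $\xi_*$ on divisible abelian groups, while both defects vanish identically if and only if $\xi$ splits.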
The assignment $F\mapsto D_\A(F)$ given by \[D_\A(F)(X)=\operatorname{Ext}^2(F,\operatorname{Hom}_\A(X,-))\] yields an equivalence \begin{equation}\label{eq:eff} D_\A\colon \operatorname{Eff}(\A,\Ab)^\mathrm{op}\xrightarrow{\ \sim\ }\operatorname{Eff}(\A^\mathrm{op},\Ab), \end{equation} where $\operatorname{Ext}^2(-,-)$ is computed in the abelian category $\operatorname{Fp}(\A,\Ab)$ and the inverse is given by $D_{\A^\mathrm{op}}$. Note that $D_\A(\xi_*)=\xi^*$ and $D_{\A^\mathrm{op}}(\xi^*)=\xi_*$. When the context is clear we write $D$ instead of $D_\A$. We continue with some further properties of locally effaceable functors. For a discussion of the following result, see also \cite{Oo1963}. \begin{lem}\label{le:hom-ext} Suppose that every object $X\in\A$ admits a monomorphism $\iota \colon X\to Y$ such that $\operatorname{Ext}^1_\A(-,\iota)=0$. Then the assignment $\phi\mapsto\operatorname{Ext}^1_\A(-,\phi)$ induces for all objects $X,X'\in\A$ an isomorphism \[\operatorname{\overline{Hom}}_\A(X,X')\xrightarrow{\ \sim\ }\operatorname{Hom}(\operatorname{Ext}^1_\A(-,X),\operatorname{Ext}^1_\A(-,X')).\] \end{lem} Clearly, the assumption on $\A$ is satisfied when $\A$ has enough injective objects. \begin{proof} Let $\xi\colon 0\to X\xrightarrow{\iota} Y\to Z\to 0$ be an exact sequence such that $\operatorname{Ext}^1_\A(-,\iota)=0$. Then we have \[\xi^*\cong\operatorname{Ext}^1_\A(-,X)\qquad\text{and}\qquad\xi_*\cong\operatorname{\overline{Hom}}_\A(X,-).\] Now apply the duality \eqref{eq:eff}. \end{proof} \begin{lem}\label{le:eff-adj} The inclusion $\operatorname{Eff}(\A^\mathrm{op},\Ab)\to \operatorname{Fp}(\A^\mathrm{op},\Ab)$ admits a right adjoint \begin{equation*} \operatorname{eff}\colon \operatorname{Fp}(\A^\mathrm{op},\Ab)\longrightarrow \operatorname{Eff}(\A^\mathrm{op},\Ab). 
\end{equation*} \end{lem} \begin{proof} The right adjoint takes a functor $F=\operatorname{Coker} \operatorname{Hom}_\A(-,\phi)$ given by a morphism $\phi\colon X\to Y$ in $\A$ to $\operatorname{Coker} \operatorname{Hom}_\A(-,\phi')$ where $\phi'\colon X\to\Im\phi$ is the morphism induced by $\phi$. \end{proof} \subsection*{Auslander-Reiten duality} Let $\A$ be an abelian category and $\mathcal C$ a set of objects in $\A$. Then every object $X$ in $\A$ gives rise to a $\mathcal C$-module \[\operatorname{Ext}^1_\A(\mathcal C, X) := \operatorname{Ext}^1_\A(-, X)|_\mathcal C.\] \begin{thm}\label{th:main} Let $\A$ be a locally finitely presented Grothendieck abelian category and $\mathcal C$ a set of finitely presented objects in $\A$. Then for every injective $\mathcal C$-module $I$ there exists an object $\tau_\mathcal C(I)$ in $\overline\A$ and a natural isomorphism \begin{equation*}\label{eq:tau} \operatorname{Hom}(\operatorname{Ext}^1_\A(\mathcal C,-),I)\cong\operatorname{\overline{Hom}}_{\A}(-,\tau_\mathcal C(I)). \end{equation*} \end{thm} The special case that $\mathcal C$ consists of a single object is precisely Theorem~\ref{th:intro} from the introduction. The theorem suggests the following definition. \begin{defn}\label{de:tau} The object $\tau_\mathcal C(I)$ is called the \emph{Auslander-Reiten translate} with respect to $\mathcal C\subseteq\operatorname{fp}\A$ and $I\in\operatorname{Mod}\mathcal C$. The assignment $I\mapsto\tau_\mathcal C(I)$ yields a functor \[\tau_\mathcal C\colon\operatorname{Inj}\operatorname{Mod}\mathcal C\longrightarrow\overline\A.\] \end{defn} \begin{proof}[Proof of Theorem~\ref{th:main}] We fix $\mathcal C\subseteq\operatorname{fp}\A$ and $I\in\operatorname{Mod}\mathcal C$. 
The functor $\operatorname{coind}_\mathcal C(I)$ in $(\A^\mathrm{op},\Ab)$ preserves filtered colimits and therefore admits a minimal presentation \[0\to\operatorname{Hom}_\A(-,T_2)\to\operatorname{Hom}_\A(-,T_1)\to\operatorname{Hom}_\A(-,T_0)\to \operatorname{coind}_\mathcal C(I)\to 0\] by Corollary~\ref{co:flatcover}. In $\A$ this gives rise to an exact sequence \[\eta\colon 0\longrightarrow T_2 \longrightarrow T_1\longrightarrow \bar T_0\longrightarrow 0,\] where $\bar T_0$ denotes the image of the morphism $T_1\to T_0$. We set \[\tau_\mathcal C(I):=T_2.\] Note that $T_1$ is an injective object when $I$ is injective. This follows from Corollary~\ref{co:domdim}, because $\operatorname{coind}_\mathcal C(I)$ is an injective object in $((\operatorname{fp}\A)^\mathrm{op},\Ab)$ by Lemma~\ref{le:coind}. Thus \begin{equation}\label{eq:eta} \operatorname{eff}\operatorname{coind}_\mathcal C(I)\cong \eta^*\cong\operatorname{Ext}^1_\A(-,\tau_\mathcal C(I)). \end{equation} Now fix an object $X\in\A$. Then we have \begin{align*} \operatorname{Hom}(\operatorname{Ext}^1_\A(-,X)|_\mathcal C,I) &\cong\operatorname{Hom}(\operatorname{Ext}^1_\A(-,X),\operatorname{coind}_\mathcal C(I))\\ &\cong\operatorname{Hom}(\operatorname{Ext}^1_\A(-,X),\operatorname{eff}\operatorname{coind}_\mathcal C(I))\\ &\cong\operatorname{Hom}(\operatorname{Ext}^1_\A(-,X),\operatorname{Ext}^1_\A(-,\tau_\mathcal C(I)))\\ &\cong\operatorname{\overline{Hom}}_{\A}(X,\tau_\mathcal C(I)). \end{align*} Let us label the $n$th isomorphism by ($n$). Then (1) and (2) are adjunctions, given by \eqref{eq:coind} and Lemma~\ref{le:eff-adj}, (3) uses \eqref{eq:eta}, and (4) follows from Lemma~\ref{le:hom-ext}. This completes the proof. \end{proof} One may wonder whether the functor $\operatorname{Ext}_\A^1(\mathcal C,-)\colon\overline\A\to\operatorname{Mod}\mathcal C$ admits a right adjoint.
In fact, the proof of Theorem~\ref{th:main} provides for an arbitrary $\mathcal C$-module $I$ an object $\tau_\mathcal C(I)$ and a natural monomorphism \[\operatorname{Hom}(\operatorname{Ext}^1_\A(\mathcal C,-),I)\longrightarrow\operatorname{\overline{Hom}}_{\A}(-,\tau_\mathcal C(I)).\] It is easily seen that this is not invertible in general. \begin{rem} Let $\mathcal C\subseteq\D\subseteq\operatorname{fp}\A$ and denote by $i\colon\mathcal C\to\D$ the inclusion. Then we have $\tau_\mathcal C=\tau_\D\mathop{\circ} i_*$ where $i_*$ is the right adjoint of restriction $\operatorname{Mod}\D\to\operatorname{Mod}\mathcal C$. \end{rem} \begin{rem}\label{re:stable} Consider the stable category modulo projectives $\underline\A$. For a set of objects $\mathcal C\subseteq\A$ let $\underline\mathcal C$ denote the corresponding subcategory of $\underline\A$. Then we have \[\operatorname{Ext}^1_\A(\mathcal C,X)\in\operatorname{Mod}\underline\mathcal C\subseteq\operatorname{Mod}\mathcal C\] for all $X\in\A$ and one can replace $\operatorname{Mod}\mathcal C$ by $\operatorname{Mod}\underline\mathcal C$ in Theorem~\ref{th:main}. \end{rem} \subsection*{Auslander-Reiten duality relative to a base} Let $k$ be a commutative ring. Suppose that $\A$ is a $k$-linear and locally finitely presented Grothendieck abelian category. \begin{cor}\label{co:base} There is a functor \[\operatorname{fp}\A\times\operatorname{Inj}\operatorname{Mod} k\longrightarrow\overline\A,\qquad (C,I)\mapsto\tau(C,I)\] and a natural isomorphism \[ \operatorname{Hom}_{k}(\operatorname{Ext}^1_\A(C,-),I)\cong\operatorname{\overline{Hom}}_{\A}(-,\tau(C,I)). \] \end{cor} \begin{proof} Fix $C\in\operatorname{fp}\A$ and set $\Gamma=\operatorname{End}_\A(C)$. Observe that $\operatorname{Hom}_k(\Gamma,-)$ induces a functor $\operatorname{Mod} k\to\operatorname{Mod}\Gamma$ that takes injectives to injectives. 
Then we obtain $\tau(C,-)$ from the Auslander-Reiten translate $\tau_C$ by setting \[\tau(C,-)=\tau_C(\operatorname{Hom}_k(\Gamma,-)).\qedhere\] \end{proof} \subsection*{A formula for the defect} There is a reformulation of the isomorphism in Theorem~\ref{th:main} in terms of the defect of an exact sequence. \begin{thm}\label{th:defect} Let $\A$ be a locally finitely presented Grothendieck abelian category. Fix a set $\mathcal C$ of finitely presented objects and an exact sequence $\xi\colon 0\to X\to Y\to Z\to 0$ in $\A$. Then for every injective $\mathcal C$-module $I$ there is a natural isomorphism \[\operatorname{Hom}(\xi^*|_\mathcal C,I)\cong\xi_*(\tau_\mathcal C(I)).\] \end{thm} The above isomorphism can be rewritten using the duality \eqref{eq:eff}. Thus we obtain for $F$ in $\operatorname{Eff}(\A^\mathrm{op},\Ab)$ a natural isomorphism \[ \operatorname{Hom}(F|_\mathcal C,I)\cong D(F)(\tau_\mathcal C(I)). \] \begin{proof}[Proof of Theorem~\ref{th:defect}] Fix $\xi\colon 0\to X\xrightarrow{\phi} Y\xrightarrow{\psi} Z\to 0$. It is not hard to see that \[\xi_*\cong\operatorname{Coker} \operatorname{\overline{Hom}}_\A(\phi,-)\qquad\text{and}\qquad\xi^*\cong \operatorname{Ker}\operatorname{Ext}_\A^1(-,\phi).\] Combining this with the isomorphism in Theorem~\ref{th:main} we get \begin{align*} \operatorname{Hom}(\xi^*|_\mathcal C,I) &\cong\operatorname{Hom}(\operatorname{Ker}\operatorname{Ext}_\A^1(\mathcal C,\phi),I)\\ &\cong\operatorname{Coker}\operatorname{Hom}(\operatorname{Ext}_\A^1(\mathcal C,\phi),I)\\ &\cong\operatorname{Coker}\operatorname{\overline{Hom}}_\A(\phi,\tau_\mathcal C(I))\\ &\cong \xi_*(\tau_\mathcal C(I)).\qedhere \end{align*} \end{proof} \subsection*{Auslander-Reiten duality modulo projectives} Let $\A$ be an abelian category. 
Recall that the \emph{stable category modulo projectives} $\underline\A$ is obtained from $\A$ by identifying two morphisms $\phi,\phi'\colon X\to Y$ if \[\operatorname{Ext}^1_\A(\phi,-)=\operatorname{Ext}^1_\A(\phi',-).\] We write $\operatorname{\underline{Hom}}_\A(-,-)$ for the morphisms in $\underline\A$. The following result is a refinement of Corollary~\ref{co:intro1} from the introduction. \begin{thm}\label{th:second-main} Let $\A$ be a locally finitely presented Grothendieck abelian category and $\mathcal C$ a set of finitely presented objects in $\A$. Then for every injective $\mathcal C$-module $I$ there is a natural monomorphism \begin{equation*} \alpha\colon \operatorname{Ext}^1_\A(-,\tau_\mathcal C(I))\longrightarrow \operatorname{Hom}(\operatorname{\underline{Hom}}_\A(\mathcal C,-),I). \end{equation*} This is an isomorphism for all $I$ if and only if for every object $X\in\A$ there exists an epimorphism $\pi\colon X_\mathcal C\to X$ such that for every morphism $\phi\colon C\to X$ with $C\in\mathcal C$ \[\phi\text{ factors through }\pi\qquad\iff\qquad\operatorname{Ext}^1_\A(\phi,-)=0.\] \end{thm} The condition for $\alpha$ to be an isomorphism expresses the fact that $\A$ has locally enough projective morphisms. Clearly, the condition is satisfied when $\A$ has enough projective objects, but there are also other examples; see Proposition~\ref{pr:proj-scheme}. \begin{proof}[Proof of Theorem~\ref{th:second-main}] For $X\in\A$ we have \begin{align*} \operatorname{Ext}^1_\A(X,\tau_\mathcal C(I))&\cong\operatorname{Hom}(\operatorname{\underline{Hom}}_\A(-,X),\operatorname{Ext}^1_\A(-,\tau_\mathcal C(I)))\\ &\cong\operatorname{Hom}(\operatorname{\underline{Hom}}_\A(-,X),\operatorname{eff}\operatorname{coind}_\mathcal C(I))\\ &\subseteq\operatorname{Hom}(\operatorname{\underline{Hom}}_\A(-,X),\operatorname{coind}_\mathcal C(I))\\ &\cong\operatorname{Hom}(\operatorname{\underline{Hom}}_\A(-,X)|_\mathcal C,I).
\end{align*} Let us label the $n$th isomorphism by ($n$). Then (1) uses Yoneda's lemma, (2) follows from \eqref{eq:eta}, and (3) is the adjunction \eqref{eq:coind}. Now assume that there exists an epimorphism $\pi\colon X_\mathcal C\to X$ such that for \[F=\operatorname{Coker}\operatorname{Hom}_\A(-,\pi)\] we have $F|_\mathcal C\cong\operatorname{\underline{Hom}}_\A(-,X)|_\mathcal C$. Then $F$ is effaceable and we can apply Lemma~\ref{le:eff-adj}. Thus \begin{align*} \operatorname{Hom}(\operatorname{\underline{Hom}}_\A(-,X)|_\mathcal C,I)&\cong \operatorname{Hom}(F|_\mathcal C,I)\\ &\cong \operatorname{Hom}(F,\operatorname{coind}_\mathcal C(I))\\ &\cong \operatorname{Hom}(F,\operatorname{eff}\operatorname{coind}_\mathcal C(I))\\ &\cong \operatorname{Hom}(F,\operatorname{Ext}^1_\A(-,\tau_\mathcal C(I)))\\ &\subseteq\operatorname{Hom}(\operatorname{Hom}_\A(-,X),\operatorname{Ext}^1_\A(-,\tau_\mathcal C(I)))\\ &\cong \operatorname{Ext}^1_\A(X,\tau_\mathcal C(I)) \end{align*} and we conclude that $\alpha$ is an isomorphism. Finally, assume that $\alpha$ is an isomorphism for all $I$. An injective envelope $\eta\colon\operatorname{\underline{Hom}}_\A(\mathcal C,X)\to J$ corresponds under $\alpha$ to an extension \[0\longrightarrow\tau_\mathcal C(J)\longrightarrow X_\mathcal C\xrightarrow{\ \pi\ }X\longrightarrow 0.\] From the functoriality of $\alpha$ one sees that any morphism $\phi\colon C\to X$ factors through $\pi$ iff the composition of $\operatorname{\underline{Hom}}_\A(\mathcal C,\phi)$ with $\eta$ equals zero. The latter condition means $\operatorname{Ext}^1_\A(\phi,-)=0$. \end{proof} \begin{rem}\label{re:pi} For $X\in\A$ there exists an essentially unique morphism $\pi\colon X_\mathcal C\to X$ having the following properties: \begin{enumerate} \item A morphism $\phi\colon C\to X$ with $C\in\mathcal C$ factors through $\pi$ iff $\operatorname{Ext}^1_\A(\phi,-)=0$. \item Every morphism $X'\to X$ satisfying (1) factors through $\pi$.
\item Every endomorphism $\varepsilon\colon X_\mathcal C\to X_\mathcal C$ satisfying $\pi\varepsilon=\pi$ is invertible. \end{enumerate} This follows from \cite[Theorem~1.1]{Kr2014}. Thus the crucial issue for Theorem~\ref{th:second-main} is whether $\pi$ is an epimorphism. \end{rem} \section{Auslander-Reiten duality for module categories} This section is devoted to giving an explicit construction of the Auslander-Reiten translate for module categories (Definition~\ref{de:DTr}). This is closely related to the original construction of Auslander and Reiten \cite{Au1978,AR1974} but it is not the same. \subsection*{Stable module categories and the transpose} Let $\Lambda$ be a ring. Given $\Lambda$-modules $X$ and $Y$, we set \[\operatorname{\underline{Hom}}_\Lambda(X,Y)=\operatorname{Hom}_\Lambda(X,Y)/\{\phi\mid \phi\text{ factors through a projective module}\}\] and \[\operatorname{\overline{Hom}}_\Lambda(X,Y)=\operatorname{Hom}_\Lambda(X,Y)/\{\phi\mid \phi\text{ factors through an injective module}\}.\] Keeping the objects of $\operatorname{Mod}\Lambda$ but changing the morphisms, we obtain the \emph{stable category modulo projectives} $\operatorname{\underline{Mod}}\Lambda$. Let $\operatorname{\underline{mod}}\Lambda$ denote its full subcategory of finitely presented $\Lambda$-modules. Analogously, the \emph{stable category modulo injectives} $\operatorname{\overline{Mod}}\Lambda$ is defined. For a finitely presented $\Lambda$-module $X$ having a projective presentation \[P_1\longrightarrow P_0\longrightarrow X\longrightarrow 0\] the \emph{transpose} $\operatorname{Tr} X$ is defined by the exactness of the following sequence of $\Lambda^\mathrm{op}$-modules \[P_0^*\longrightarrow P_1^*\longrightarrow \operatorname{Tr} X\longrightarrow 0\] where $P^*=\operatorname{Hom}_\Lambda(P,\Lambda)$.
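For instance, let $\Lambda=k[x]/(x^2)$ for a field $k$ and $C=k=\Lambda/(x)$. The projective presentation \[\Lambda\xrightarrow{\ x\ }\Lambda\longrightarrow k\longrightarrow 0\] dualizes to $\Lambda\xrightarrow{x}\Lambda\to\operatorname{Tr} k\to 0$, since $\Lambda^*\cong\Lambda$ and $x$ is central; hence $\operatorname{Tr} k\cong k$ as a $\Lambda^\mathrm{op}$-module. Note also that $\operatorname{Tr} P=0$ for every finitely generated projective module $P$, which explains why the transpose is only well defined after passing to the stable category.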
\begin{lem}\label{le:Tr} The transpose induces mutually inverse equivalences \[(\operatorname{\underline{mod}}\Lambda)^\mathrm{op}\xrightarrow{\ \sim\ }\operatorname{\underline{mod}}(\Lambda^\mathrm{op})\qquad\text{and}\qquad \operatorname{\underline{mod}}(\Lambda^\mathrm{op})\xrightarrow{\ \sim\ }(\operatorname{\underline{mod}}\Lambda)^\mathrm{op}.\] \end{lem} \begin{proof} See Proposition~2.6 in \cite{AB1969}. \end{proof} \subsection*{Injective modules over quotient rings} Let $\pi\colon \Gamma\to\bar\Gamma$ be a surjective ring homomorphism. Then restriction of scalars along $\pi$ yields a functor $\operatorname{Mod}\bar\Gamma\to\operatorname{Mod} \Gamma$ that is fully faithful. This functor has a right adjoint which takes a $\Gamma$-module $I$ to $\bar I=\operatorname{Hom}_\Gamma(\bar\Gamma,I)$. Thus $\pi$ induces a monomorphism $\varepsilon_I\colon\bar I\to I$ in $\operatorname{Mod}\Gamma$ which identifies $\bar I$ with the largest submodule of $I$ that is annihilated by $\operatorname{Ker}\pi$ and gives the isomorphism \begin{equation}\label{eq:DTr-inj} \operatorname{Hom}_{\bar\Gamma}(-,\bar I)\xrightarrow{\sim} \operatorname{Hom}_\Gamma(-,I). \end{equation} Let $\bar I\to E(\bar I)$ denote an injective envelope in $\operatorname{Mod}\Gamma$. \begin{lem}\label{le:DTr-inj} For an injective $\Gamma$-module $I$, we have in $\operatorname{Mod}\bar\Gamma$ \[\bar I\xleftarrow{\sim} \operatorname{Hom}_\Gamma(\bar\Gamma,\bar I) \xrightarrow{\sim}\operatorname{Hom}_\Gamma(\bar\Gamma,E(\bar I)).\] \end{lem} \begin{proof} The first isomorphism is given by $\varepsilon_{\bar I}$. The functor $\operatorname{Hom}_\Gamma(\bar\Gamma,-)$ preserves injectivity since it is right adjoint to an exact functor. Thus $\bar I\to\operatorname{Hom}_\Gamma(\bar\Gamma,E(\bar I))$ is a split monomorphism, and it is an isomorphism, because the composition with $\varepsilon_{E(\bar I)}$ is an injective envelope. 
\end{proof} \subsection*{The dual of the transpose} Let $\Lambda$ be a ring and fix a finitely presented $\Lambda$-module $C$. Set $\Gamma=\operatorname{End}_\Lambda(C)$ and $\bar\Gamma=\operatorname{\underline{End}}_\Lambda(C)$. The transpose $\operatorname{Tr} C$ is a $\Lambda^\mathrm{op}$-module and there is an isomorphism \[\gamma\colon\operatorname{\underline{End}}_\Lambda(C)\xrightarrow{\ \sim\ }\operatorname{\underline{End}}_{\Lambda^\mathrm{op}}(\operatorname{Tr} C)^\mathrm{op}\] by Lemma~\ref{le:Tr}. For an injective $\Gamma$-module $I$ we view $\bar I=\operatorname{Hom}_\Gamma(\bar\Gamma,I)$ as an $\operatorname{End}_{\Lambda^\mathrm{op}}(\operatorname{Tr} C)^\mathrm{op}$-module via $\gamma$ and denote by $E(\bar I)$ an injective envelope. This yields the assignment \[\operatorname{Inj} \operatorname{Mod}\operatorname{End}_\Lambda(C)\longrightarrow \operatorname{Inj} \operatorname{Mod}\operatorname{End}_{\Lambda^\mathrm{op}}(\operatorname{Tr} C)^\mathrm{op},\qquad I\mapsto E(\bar I).\] \begin{defn}\label{de:DTr} The \emph{dual of the transpose} (or \emph{Auslander-Reiten translate}) of $C$ with respect to $I$ is the $\Lambda$-module \[\tau_C(I):=\operatorname{Hom}_{\operatorname{End}_{\Lambda^\mathrm{op}}(\operatorname{Tr} C)^\mathrm{op}}(\operatorname{Tr} C,E(\bar I)).\] \end{defn} In Corollary~\ref{co:ARformula} we will see that this definition is consistent with the previous Definition~\ref{de:tau}. In particular, the definition does not depend on any choice when $\tau_C(I)$ is viewed as an object in $\operatorname{\overline{Mod}}\Lambda$. The definition of $\tau_C(I)$ is a variation of Auslander's definition of the dual of the transpose in \cite[I.3]{Au1978}. To be precise, Auslander starts with an injective module over $\operatorname{End}_{\Lambda^\mathrm{op}}(\operatorname{Tr} C)^\mathrm{op}$ whereas the above definition takes as input an injective module over $\operatorname{End}_\Lambda(C)$. 
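To illustrate Definition~\ref{de:DTr}, take $\Lambda=\bbZ$ and $C=\bbZ/p$ for a prime $p$. The presentation $\bbZ\xrightarrow{p}\bbZ\to\bbZ/p\to 0$ gives $\operatorname{Tr} C\cong\bbZ/p$, and $\Gamma\cong\bar\Gamma\cong\bbZ/p$ since $\operatorname{Hom}_\bbZ(\bbZ/p,\bbZ)=0$. Every module over the field $\bbZ/p$ is injective, so for $I=\bbZ/p$ we obtain $\bar I\cong E(\bar I)\cong\bbZ/p$ and therefore \[\tau_C(I)=\operatorname{Hom}_{\bbZ/p}(\bbZ/p,\bbZ/p)\cong\bbZ/p.\] Indeed, $\operatorname{Hom}_\Gamma(\operatorname{Ext}^1_\bbZ(\bbZ/p,X),I)\cong\operatorname{Hom}(X/pX,\bbZ/p)\cong\operatorname{\overline{Hom}}_\bbZ(X,\bbZ/p)$ for every abelian group $X$, because every homomorphism from a divisible group to $\bbZ/p$ vanishes; this agrees with Corollary~\ref{co:ARformula} below.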
Keeping this difference in mind, the following is the analogue of Theorem~III.4.1 in \cite{Au1978}. \begin{thm}\label{th:defect1} Let $\xi\colon 0\to X\to Y\to Z\to 0$ be an exact sequence of $\Lambda$-modules and $C$ a finitely presented $\Lambda$-module. For an injective $\operatorname{End}_\Lambda(C)$-module $I$, there is a natural isomorphism \[\operatorname{Hom}_{\operatorname{End}_\Lambda(C)}(\xi^*(C),I)\cong \xi_*(\tau_C(I)).\] \end{thm} \begin{proof} Set $\Sigma=\operatorname{End}_{\Lambda^\mathrm{op}}(\operatorname{Tr} C)^\mathrm{op}$ and $\bar \Sigma=\operatorname{\underline{End}}_{\Lambda^\mathrm{op}}(\operatorname{Tr} C)^\mathrm{op}$. Note that $\bar\Gamma\cong\bar\Sigma$ by Lemma~\ref{le:Tr}. In the following we use that $\operatorname{Hom}_\Sigma(\bar\Sigma,E(\bar I))\cong \bar I$ by Lemma~\ref{le:DTr-inj}. Also, the isomorphism \eqref{eq:DTr-inj} is used for $\Gamma$ and $\Sigma$. We fix an exact sequence \[\xi\colon 0\longrightarrow X\xrightarrow{\ \phi\ } Y\xrightarrow{\ \psi\ } Z\longrightarrow 0.\] A projective presentation $P_1\to P_0\to C\to 0$ induces the following commutative diagram with exact rows. \[\begin{tikzcd}[column sep=small] {}&&X\otimes_\Lambda P_0^*\arrow{r}\arrow{d}{\wr}&X\otimes_\Lambda P_1^*\arrow{r}\arrow{d}{\wr}&X\otimes_\Lambda\operatorname{Tr} C\arrow{r}&0\\ 0 \arrow{r}&\operatorname{Hom}_\Lambda(C,X) \arrow{r}&\operatorname{Hom}_\Lambda(P_0,X) \arrow{r}&\operatorname{Hom}_\Lambda(P_1,X) \end{tikzcd}\] Therefore $\xi$ induces the following commutative diagram with exact rows and columns.
\[\begin{tikzcd}[column sep=small] &0\arrow{d}&0\arrow{d}&0\arrow{d}\\ 0\arrow{r}&\operatorname{Hom}_\Lambda(C,X)\arrow{r}\arrow{d}&\operatorname{Hom}_\Lambda(P_0,X)\arrow{r}\arrow{d}&\operatorname{Hom}_\Lambda(P_1,X)\arrow{r}\arrow{d}&X\otimes_\Lambda\operatorname{Tr} C\arrow{r}\arrow{d}&0\\ 0\arrow{r}&\operatorname{Hom}_\Lambda(C,Y)\arrow{r}\arrow{d}&\operatorname{Hom}_\Lambda(P_0,Y)\arrow{r}\arrow{d}&\operatorname{Hom}_\Lambda(P_1,Y)\arrow{r}\arrow{d}&Y\otimes_\Lambda\operatorname{Tr} C\arrow{r}\arrow{d}&0\\ 0\arrow{r}&\operatorname{Hom}_\Lambda(C,Z)\arrow{r}&\operatorname{Hom}_\Lambda(P_0,Z)\arrow{r}\arrow{d}&\operatorname{Hom}_\Lambda(P_1,Z)\arrow{r}\arrow{d}&Z\otimes_\Lambda\operatorname{Tr} C\arrow{r}\arrow{d}&0\\ &&0&0&0 \end{tikzcd}\] Now we apply the snake lemma and use adjunctions plus the isomorphism from Lemma~\ref{le:DTr-inj}, as explained above. This yields \begin{align*} \operatorname{Hom}_\Gamma(\xi^*(C),I) &=\operatorname{Hom}_\Gamma(\operatorname{Coker}\operatorname{Hom}_\Lambda(C,\psi),I)\\ &\cong\operatorname{Hom}_\Gamma(\operatorname{Coker}\operatorname{\underline{Hom}}_\Lambda(C,\psi),I)\\ &\cong\operatorname{Hom}_{\bar\Gamma}(\operatorname{Coker}\operatorname{\underline{Hom}}_\Lambda(C,\psi),\bar I)\\ &\cong \operatorname{Hom}_{\bar\Sigma}(\operatorname{Ker} (\phi\otimes_\Lambda\operatorname{Tr} C),\bar I)\\ &\cong \operatorname{Hom}_{\Sigma}(\operatorname{Ker} (\phi\otimes_\Lambda\operatorname{Tr} C),E(\bar I))\\ &\cong \operatorname{Coker} \operatorname{Hom}_\Sigma(\phi\otimes_\Lambda\operatorname{Tr} C,E(\bar I))\\ &\cong\operatorname{Coker} \operatorname{Hom}_\Lambda(\phi,\operatorname{Hom}_\Sigma(\operatorname{Tr} C,E(\bar I)))\\ &=\xi_*(\tau_C(I)) \end{align*} and the proof is complete. 
\end{proof} The following result says that the dual of the transpose $\tau_C(I)$ is unique up to morphisms that factor through an injective module; it is the representing object of the functor \[\operatorname{Hom}_{\operatorname{End}_\Lambda(C)}(\operatorname{Ext}^1_\Lambda(C,-),I).\] For the analogues in Auslander's work \cite{Au1978}, see Proposition~I.3.4 and Corollary~III.4.3. \begin{cor}\label{co:ARformula} Let $\Lambda$ be a ring and $C$ a finitely presented $\Lambda$-module. Sending an injective $\operatorname{End}_\Lambda(C)$-module $I$ to $\tau_C(I)$ gives a functor \[\tau_C\colon\operatorname{Inj}\operatorname{Mod}\operatorname{End}_\Lambda(C)\longrightarrow\operatorname{\overline{Mod}}\Lambda\] such that \begin{align} \operatorname{Hom}_{\operatorname{End}_\Lambda(C)}(\operatorname{Ext}^1_\Lambda(C,-),I)&\cong\operatorname{\overline{Hom}}_\Lambda(-,\tau_C(I)) \intertext{and} \label{eq:Ext}\operatorname{Hom}_{\operatorname{End}_\Lambda(C)}(\operatorname{\underline{Hom}}_\Lambda(C,-),I)&\cong\operatorname{Ext}^1_\Lambda(-,\tau_C(I)). \end{align} \end{cor} \begin{proof} Both isomorphisms follow from the isomorphism in Theorem~\ref{th:defect1} by choosing an appropriate exact sequence $\xi\colon 0\to X\to Y\to Z\to 0$. In the first case choose $Y$ to be injective. Then $\xi_*\cong\operatorname{\overline{Hom}}_\Lambda(X,-)$ and $\xi^*=\operatorname{Ext}^1_\Lambda(-,X)$. In the second case choose $Y$ to be projective. Then $\xi^*\cong\operatorname{\underline{Hom}}_\Lambda(-,Z)$ and $\xi_*\cong\operatorname{Ext}^1_\Lambda(Z,-)$. In particular, the assignment $I\mapsto\tau_C(I)$ is functorial. \end{proof} We have identified the Auslander-Reiten translate $\tau_C(I)$ as the representing object of a specific functor. The following remark suggests that $\tau_C(I)$ is the `universal kernel' of certain epimorphisms that are right determined by $C$. 
\begin{rem} Fix a finitely presented $\Lambda$-module $C$ and recall from \cite{Au1978} that a morphism $\alpha\colon X\to Y$ is \emph{right $C$-determined} if for any morphism $\alpha'\colon X'\to Y$ we have \[\Im\operatorname{Hom}_\Lambda(C,\alpha')\subseteq \Im\operatorname{Hom}_\Lambda(C,\alpha)\quad\iff\quad\text{$\alpha'$ factors through $\alpha$}.\] Now let \[\xi\colon 0\longrightarrow\tau_C(I)\longrightarrow X\xrightarrow{\ \alpha\ }Y\longrightarrow 0\] be an exact sequence of $\Lambda$-modules and $H$ the kernel of the composite $\operatorname{End}_\Lambda(C)$-linear map \[\operatorname{Hom}_\Lambda(C,Y)\twoheadrightarrow \operatorname{\underline{Hom}}_\Lambda(C,Y)\xrightarrow{\eta} I.\] Then $\xi$ corresponds to $\eta$ under the isomorphism \eqref{eq:Ext} if and only if $\alpha$ is right $C$-determined with $H=\Im\operatorname{Hom}_\Lambda(C,\alpha)$. This follows from the functoriality of \eqref{eq:Ext}. \end{rem} \begin{rem} In Theorem~\ref{th:defect1} one can replace the finitely presented $\Lambda$-module $C$ by a set of finitely presented modules, as in Theorem~\ref{th:defect}. \end{rem} \subsection*{Examples} (1) Let $\Lambda$ be a $k$-algebra over a commutative ring $k$. For a finitely presented $\Lambda$-module $C$ and an injective $k$-module $I$ we have $\tau(C,I)=\operatorname{Hom}_k(\operatorname{Tr} C,I)$ and therefore \[\operatorname{Hom}_k(\operatorname{Ext}^1_\Lambda(C,-),I)\cong\operatorname{\overline{Hom}}_\Lambda(-,\tau(C,I)).\] For an elementary proof see \cite{Kr2003}. (2) Let $\Lambda$ be a noetherian algebra over a complete local ring $k$. Then Matlis duality over $k$ composed with the transpose $\operatorname{Tr}$ identifies the noetherian $\Lambda$-modules (modulo projectives) with the artinian $\Lambda$-modules (modulo injectives); see \cite{Au1978}. (3) Let $\Lambda$ be a Dedekind domain.
Then the Auslander-Reiten translate identifies the noetherian $\Lambda$-modules (modulo projectives) with the artinian $\Lambda$-modules (modulo injectives). \section{Auslander-Reiten duality for quasi-coherent sheaves} This section is devoted to giving an explicit construction of the Auslander-Reiten translate for the category of quasi-coherent modules over a scheme (Definition~\ref{de:translate-scheme}). In particular, we explain the connection to Serre duality for a non-singular projective scheme over a field. Let $k$ be a field and $\bbX$ a quasi-compact and quasi-separated scheme over $k$, given by a morphism $f\colon\bbX \to\bbY=\operatorname{Spec} k$. Consider the category $\operatorname{Qcoh}\bbX$ of quasi-coherent $\mathcal O_\bbX$-modules and note that every object is a filtered colimit of finitely presented $\mathcal O_\bbX$-modules \cite[I.6.9.12]{GD1971}. Thus the Grothendieck abelian category $\operatorname{Qcoh}\bbX$ is locally finitely presented. From now on assume that the scheme $\bbX$ is locally noetherian. Then an $\mathcal O_\bbX$-module is coherent if and only if it is finitely presented. For an abelian category $\A$ let $\bfD(\A)$ denote its derived category. In \cite{Ne1996}, Neeman shows that Grothendieck duality is a formal consequence of Brown representability, that is, the derived direct image functor \[\mathbf Rf_*\colon\mathbf D(\operatorname{Qcoh}\bbX)\longrightarrow\mathbf D(\operatorname{Qcoh}\bbY)\] has a right adjoint \[f^!\colon\mathbf D(\operatorname{Qcoh}\bbY)\longrightarrow\mathbf D(\operatorname{Qcoh}\bbX).\] Now fix objects $C,X\in\operatorname{Qcoh}\bbX$ and suppose that $C$ is coherent. The following lemma may be deduced from a more general statement in SGA 6; see Proposition~3.7 in \cite[Exp.~I]{BGI1971}.
\begin{lem}\label{le:rigidity} There is in $\bfD(\operatorname{Qcoh}\bbX)$ a natural isomorphism \[\sigma\colon X\overset{\mathbf L}{\otimes}_{\bbX}\operatorname{\mathbf{R}\mathcal{H}\!\!\;\mathit{om}}_\bbX(C,\mathcal O_\bbX)\xrightarrow{\sim} \operatorname{\mathbf{R}\mathcal{H}\!\!\;\mathit{om}}_\bbX(C,X).\] \end{lem} \begin{proof} Given a ring $\Lambda$ and $\Lambda$-modules $M,N$, there is a natural morphism \[N\otimes_\Lambda\operatorname{Hom}_\Lambda(M,\Lambda)\longrightarrow\operatorname{Hom}_\Lambda(M,N)\] which is an isomorphism when $M$ is finitely generated projective. This gives the morphism $\sigma$ which is an isomorphism because $C$ is locally isomorphic to a bounded above complex of finitely generated projective modules (since $\bbX$ is locally noetherian) while $X$ is a bounded below complex. For the affine case, see also \cite [Lemma~3.1]{KL2006}. \end{proof} \begin{lem}\label{le:gr-duality} There is a natural isomorphism \begin{equation*} \operatorname{Hom}_k(\operatorname{{\mathbf R}Hom}_{\mathbf D(\bbX)}(C,X),k)\cong \operatorname{Hom}_{\mathbf D(\bbX)}(X,\operatorname{\mathbf{R}\mathcal{H}\!\!\;\mathit{om}}_\bbX(\operatorname{\mathbf{R}\mathcal{H}\!\!\;\mathit{om}}_\bbX(C,\mathcal O_\bbX),f^!k)). 
\end{equation*} \end{lem} \begin{proof} Combining Grothendieck duality, Lemma~\ref{le:rigidity}, and tensor-hom adjunction, we obtain the following chain of isomorphisms: \begin{align*} \operatorname{Hom}_k(\operatorname{{\mathbf R}Hom}_{\mathbf D(\bbX)}(C,X),k) &\cong\operatorname{Hom}_{\mathbf D(\bbY)}(\mathbf Rf_*\operatorname{\mathbf{R}\mathcal{H}\!\!\;\mathit{om}}_\bbX(C,X),k)\\ &\cong\operatorname{Hom}_{\mathbf D(\bbX)}(\operatorname{\mathbf{R}\mathcal{H}\!\!\;\mathit{om}}_\bbX(C,X),f^!k)\\ &\cong\operatorname{Hom}_{\mathbf D(\bbX)}(X\overset{\mathbf L}{\otimes}_{\bbX}\operatorname{\mathbf{R}\mathcal{H}\!\!\;\mathit{om}}_\bbX(C,\mathcal O_\bbX),f^!k)\\ &\cong\operatorname{Hom}_{\mathbf D(\bbX)}(X,\operatorname{\mathbf{R}\mathcal{H}\!\!\;\mathit{om}}_\bbX(\operatorname{\mathbf{R}\mathcal{H}\!\!\;\mathit{om}}_\bbX(C,\mathcal O_\bbX),f^!k)).\qedhere \end{align*} \end{proof} Given a complex $M$ of $\mathcal O_\bbX$-modules and $n\in\bbZ$, let $\bfi(M)$ denote a K-injective resolution \cite{Sp1988} and set \[Z^n(M)=\operatorname{Ker} (M^n\to M^{n+1}).\] \begin{defn}\label{de:translate-scheme} The \emph{Auslander-Reiten translate} of a coherent $\mathcal O_\bbX$-module $C$ is the $\mathcal O_\bbX$-module \[\tau(C,k):=Z^{-1}\bfi (\operatorname{\mathbf{R}\mathcal{H}\!\!\;\mathit{om}}_\bbX(\operatorname{\mathbf{R}\mathcal{H}\!\!\;\mathit{om}}_\bbX(C,\mathcal O_\bbX),f^!k)).\] \end{defn} The following result shows that this definition is consistent with the notation from Corollary~\ref{co:base}. In particular, the definition does not depend on any choice when $\tau(C,k)$ is viewed as an object of the stable category modulo injectives. \begin{thm}\label{th:Qcoh} Let $\bbX$ be a locally noetherian scheme over a field $k$. 
For a coherent $\mathcal O_\bbX$-module $C$ there is a natural isomorphism \begin{align*} \operatorname{\overline{Hom}}_{\bbX}(-,\tau(C,k)) &\xrightarrow{\sim}\operatorname{Hom}_k(\operatorname{Ext}^1_{\bbX}(C,-),k) \intertext{and a natural monomorphism} \operatorname{Ext}^1_{\bbX}(-,\tau(C,k))&\hookrightarrow \operatorname{Hom}_k(\operatorname{Hom}_{\bbX}(C,-),k) \end{align*} which is an isomorphism if and only if $H^0(\operatorname{\mathbf{R}\mathcal{H}\!\!\;\mathit{om}}_\bbX(\operatorname{\mathbf{R}\mathcal{H}\!\!\;\mathit{om}}_\bbX(C,\mathcal O_\bbX),f^!k))=0$. \end{thm} \begin{proof} Set $\A=\operatorname{Qcoh}\bbX$. The assignment $M\mapsto \bfi(M)$ yields a fully faithful and exact functor $\bfD(\A)\to\mathbf K(\operatorname{Inj}\A)$; see \cite{Sp1988}. Thus we can apply Lemma~\ref{le:cycles} and get both morphisms from the isomorphism in Lemma~\ref{le:gr-duality}. \end{proof} Let $\bbX$ be a non-singular proper scheme of dimension $d\ge 1$ over a field $k$. Then the above calculation simplifies since \[\operatorname{\mathbf{R}\mathcal{H}\!\!\;\mathit{om}}_\bbX(\operatorname{\mathbf{R}\mathcal{H}\!\!\;\mathit{om}}_\bbX(C,\mathcal O_\bbX),f^!k)\cong C\overset{\mathbf L}{\otimes}_\bbX f^!k\qquad\text{and}\qquad f^!k\cong\omega_\bbX[d]\] where $\omega_\bbX$ denotes the dualising sheaf. The first isomorphism is clear since $C$ is isomorphic to a bounded complex of locally free sheaves, and for the second isomorphism see \cite[IV.4]{Ha1966}. Thus \[\tau(C,k)=Z^{d-1}\bfi (C\otimes_{\bbX}\omega_\bbX).\] Moreover, the isomorphism in Lemma~\ref{le:gr-duality} gives \[\operatorname{Hom}_k(\operatorname{Hom}_\bbX(C,-),k) \cong\operatorname{Ext}^{d}_{\bbX}(-,C\otimes_\bbX\omega_\bbX)\cong \operatorname{Ext}^{1}_{\bbX}(-, \Sigma^{d-1}(C\otimes_\bbX\omega_\bbX))\] where $\Sigma^{d-1}$ denotes the $(d-1)$st syzygy in a minimal injective resolution. 
This isomorphism is a variation of Serre duality \cite[Exp.~XII]{Gr1968} and equals the isomorphism from Theorem~\ref{th:second-main}. In particular, we have $\operatorname{\underline{Hom}}_\bbX(C,-)=\operatorname{Hom}_\bbX(C,-)$. The following is an application of Theorem~\ref{th:second-main}. \begin{prop}\label{pr:proj-scheme} Let $\bbX$ be a non-singular proper scheme of dimension $d\ge 1$ over a field, and fix a pair of $\mathcal O_\bbX$-modules $C,X$ such that $C$ is coherent. Then there exists an epimorphism $\pi\colon X_C\to X$ such that for every morphism $\phi\colon C\to X$ the following holds: \[\phi \text{ factors through }\pi\quad\iff \quad\operatorname{Ext}^1_\bbX(\phi,-)=0\quad\iff\quad\phi=0.\] \end{prop} \begin{proof} The construction of $\pi$ is given in the proof of Theorem~\ref{th:second-main}. \end{proof} \begin{rem} (1) There is a canonical choice for $\pi\colon X_C\to X$; see Remark~\ref{re:pi}. (2) If $X$ is coherent, then there is a choice for $\pi\colon X_C\to X$ such that $X_C$ is coherent. (3) One may conjecture that the second morphism in Theorem~\ref{th:Qcoh} induces an isomorphism \[\operatorname{Ext}^1_{\bbX}(-,\tau(C,k))\xrightarrow{\sim} \operatorname{Hom}_k(\operatorname{\underline{Hom}}_{\bbX}(C,-),k).\] \end{rem} \subsection*{Some questions} Let $\A$ be a Grothendieck abelian category. (1) Is there an alternative construction of the Auslander-Reiten translate $\tau_C$ for a finitely presented object $C\in\A$ that is not based on the existence of flat covers? (2) Are there examples where the morphism \[\operatorname{Ext}^1_\A(-,\tau_C(I))\longrightarrow \operatorname{Hom}_\Gamma(\operatorname{\underline{Hom}}_\A(C,-),I)\] from Theorem~\ref{th:second-main} is not invertible? (3) Suppose that $\operatorname{fp}\A$ is $k$-linear and Ext-finite over a field $k$. When is the Auslander-Reiten translate $\tau(C,k)$ finitely presented for all $C$ in $\operatorname{fp}\A$? 
And when does the Auslander-Reiten translate induce an equivalence between the projectively stable and the injectively stable category associated with $\operatorname{fp}\A$? An important class where both properties hold is given by categories $\A$ such that $\operatorname{fp}\A$ is a dualising $k$-variety \cite{AR1974}. However, weighted projective lines \cite{GL1987} provide interesting examples where these properties hold but $\operatorname{fp}\A$ is not a dualising $k$-variety; see also \cite{CL2015,LZ2004}. (4) Suppose that $\A$ is locally noetherian and that $\operatorname{fp}\A$ is $k$-linear and Ext-finite over a complete local ring $k$. When does the Auslander-Reiten translate (via Matlis duality over $k$) induce an equivalence between the projectively stable category of noetherian objects and the injectively stable category of artinian objects? \begin{appendix} \section{Complexes of injectives and the stable category} Let $\A$ be an abelian category and let $\operatorname{Inj}\A$ denote the full subcategory of injective objects. The category of chain complexes in $\operatorname{Inj}\A$, with chain maps up to homotopy as morphisms, is denoted by $\mathbf K(\operatorname{Inj}\A)$. Suppose that $\A$ has enough injective objects. Then we denote by \[\bfi\colon \A\longrightarrow\mathbf K(\operatorname{Inj}\A)\] the fully faithful functor that takes an object in $\A$ to an injective resolution. For an integer $n$ consider the functor \[Z^n\colon\mathbf K(\operatorname{Inj}\A)\longrightarrow\overline\A,\qquad X\mapsto\operatorname{Ker} (X^n\to X^{n+1}).\] Note that for $X\in\mathbf K(\operatorname{Inj}\A)$ there is a natural morphism $\bfi Z^0(X)\to X$. \begin{lem}\label{le:cycles} For objects $X\in\A$ and $Y\in\mathbf K(\operatorname{Inj}\A)$ the following holds. 
\begin{enumerate} \item If $\operatorname{Hom}_{\mathbf K(\operatorname{Inj}\A)}(-,Y)$ vanishes on complexes concentrated in degree zero, then $Z^0$ induces a natural isomorphism \[\operatorname{Hom}_{\mathbf K(\operatorname{Inj}\A)}(\mathbf i(X),Y)\xrightarrow{\sim} \operatorname{\overline{Hom}}_{\A}(X,Z^{0}(Y)).\] \item There is a natural monomorphism \[\operatorname{Ext}^1_\A(X,Z^{-1}(Y))\hookrightarrow \operatorname{Hom}_{\mathbf K(\operatorname{Inj}\A)}(\mathbf i(X),Y)\] which is an isomorphism for all $X\in\A$ if and only if $H^0(Y)=0$. \end{enumerate} \end{lem} \begin{proof} The proof is straightforward. The second morphism is the composition of \[\operatorname{Ext}^1_\A(X,Z^{-1}(Y))\xrightarrow{\sim}\operatorname{Hom}_{\mathbf K(\operatorname{Inj}\A)}(\bfi(X),\bfi Z^{-1}(Y)[1])\] and the morphism induced by $\bfi Z^{-1}(Y)[1]\to Y$. \end{proof} \section{Natural maps of extension functors}\label{ap:ext} We rewrite the isomorphisms \eqref{eq:repres} and \eqref{eq:proj-repres} from the introduction in terms of natural transformations between extension functors. This is based on Lemma~\ref{le:hom-ext} and reveals the symmetry between both formulas. Let $\A$ be a Grothendieck abelian category that is locally finitely presented. Fix a finitely presented object $C$ and an injective module $I$ over $\Gamma=\operatorname{End}_\A(C)$. Then for $X\in\A$ there is a natural isomorphism \begin{align*} \operatorname{Hom}_\Gamma(\operatorname{Hom}(\operatorname{Hom}_\A(X,-),\operatorname{Ext}^1_\A(C,-)),I)\cong\operatorname{Hom}(\operatorname{Ext}^1_\A(-,X),\operatorname{Ext}^1_\A(-,\tau_C(I))). \intertext{When $\A$ has enough projective morphisms we have also} \operatorname{Hom}_\Gamma(\operatorname{Hom}(\operatorname{Ext}^1_\A(X,-),\operatorname{Ext}^1_\A(C,-)),I)\cong\operatorname{Hom}(\operatorname{Hom}_\A(-,X),\operatorname{Ext}^1_\A(-,\tau_C(I))). 
\end{align*} \end{appendix} \subsection*{Acknowledgements} I am grateful to Amnon Neeman for his help with the algebraic geometry in this paper.
2212.08888
\section{Introduction} It has been repeatedly shown that the user and product information associated with reviews is helpful for sentiment polarity prediction~\cite{tang-etal-2015-user-product,chen-etal-2016-neural-sentiment,ma-etal-2017-cascading}. Just as the same user is expected to have a consistent narrative style and vocabulary, the reviews belonging to the same product are expected to exhibit similar vocabulary for specific terms. Most previous work models user and product identities as representation vectors which are implicitly learned during the training process and focuses only on the interactions between the user or product and the review text~\cite{dou-2017-capturing,long-etal-2018-dual,amplayo-2019-rethinking,zhang-etal-2021-ma-bert}. This brings with it two major shortcomings: i) the associations between users and products are not fully exploited, and, ii) the text of historical reviews is not used. To tackle the first shortcoming, \newcite{amplayo-etal-2018-cold} propose to incorporate similar user and product representations for review sentiment classification. However, their approach requires a complex selective gating mechanism while ignoring the associations between users and products. To tackle the second shortcoming, \newcite{lyu-etal-2020-improving} propose to explicitly use historical reviews in the training process. However, their approach needs to incrementally store review representations during the training process, which results in a more complex model architecture. 
\begin{figure} \centering \includegraphics[width=\linewidth]{Figures/idea_sketch_large_text.pdf} \caption{Our proposed idea of representing users and products with their historical reviews, which can directly inform user and product preferences, and incorporating the associations between users and products.} \label{fig:idea_sketch} \end{figure} As shown in Figure~\ref{fig:idea_sketch}, we propose two simple strategies to address the aforementioned issues. Firstly, we use pre-trained language models (PLMs) to pre-compute the representations of all historical reviews belonging to the same user/product. Historical review representations are then used to initialize user/product representations by average pooling over all tokens before again average pooling over all reviews. This allows historical review text to inform the user and product preference, which we believe is potentially more advantageous than implicitly learned user/product representations. In addition, time and memory costs are minimized since the representations of historical reviews are average pooled and the pre-computation is performed only once. Secondly, we propose a user-product cross-context module, which cooperates with historical representations of users and products to gather sentiment polarity information from the reviews of other users/products. This module interacts along four dimensions: user-to-user, product-to-product, user-to-product and product-to-user. The former two are used to obtain similar user (product) information, which is useful to model user (product) preference especially when a user (product) has limited reviews. The latter two are used to model the product preference of the user (what kind of products do they like and what kind of ratings would they give to similar products?) and the user preference associated with a product (what kinds of users like such products and what kinds of ratings would they give to this product?). 
We apply our approach to various English PLMs and test on three benchmark English datasets -- IMDb, Yelp-2013, Yelp-2014. We find that our approach yields consistent improvements across PLMs and achieves substantial improvements over previous state-of-the-art models. We also show the superior performance of our approach when the number of reviews for each user is limited. Our contributions are two effective, cooperative strategies for improving sentiment analysis with user and product information: \begin{enumerate} \setlength\itemsep{-0.5em} \item initializing user and product representations using their historical reviews. \item a user-product cross-context module which cooperates with Contribution 1 to efficiently incorporate textual associations between users and products from a larger context. \end{enumerate} \section{Related Work} Neural models have been widely used in sentiment classification \cite{socher-etal-2013-recursive,kim-2014-convolutional,ibm_cnn,tang-etal-2015-document,wang2016attention}. Most existing works focus purely on text. However, \newcite{tang-etal-2015-user-product} point out the importance of incorporating user and product information for review sentiment classification, where such information is available. Subsequently, various methods based on CNN and LSTM have been explored to inject user and product information~\cite{tang-etal-2015-user-product,chen-etal-2016-neural-sentiment,ma-etal-2017-cascading,dou-2017-capturing,long-etal-2018-dual,amplayo-etal-2018-cold,amplayo-2019-rethinking}. After the emergence of PLMs such as BERT~\cite{bert}, the performance of this task has significantly improved. \newcite{zhang-etal-2021-ma-bert}, for example, proposed a multi-attribute encoder using bilinear projections between attributes and texts on top of BERT~\cite{bert} to make better use of attribute (user and product) information. How to utilize the user and product information has been investigated comprehensively in the aforementioned studies. 
However, most focus on modeling users and products as randomly initialized embedding vectors implicitly learned in training. \newcite{lyu-etal-2020-improving} demonstrate that explicitly using historical reviews can substantially improve the performance of a sentiment classification model by incrementally storing their representations during the training process. However, the magnitude of the user and product matrix is difficult to control when the number of reviews grows very large. Moreover, the associations between users and products are not fully utilized. Although~\newcite{amplayo-etal-2018-cold} propose to make use of information about similar users and similar products, the information about users associated with the current product is not taken into consideration. \section{Methodology} An overview of our approach is shown in Figure~\ref{fig:model_architecture}. We firstly feed the review text, $D$, into a PLM encoder to obtain its representation, $H_{D}$. $H_{D}$ is then fed into a user-product cross-context module consisting of multiple attention functions together with the corresponding user embedding vector, $u$ and product embedding vector, $p$. The output of the user-product module is concatenated with $H_{D}$ and fed into a linear classification layer to obtain the distribution over all sentiment labels. The architecture design is novel in two ways: 1) the user and product embedding matrix is initialized using representations of historical reviews of the corresponding users/products, 2) a user-product cross-context module that cooperates with 1) to incorporate textual associations between users and products. \begin{figure*} \centering \includegraphics[scale=0.185]{Figures/model_architecture_with_dashedlines_x.pdf} \caption{Our model architecture. We initialize user representation matrix $E_{U}$ and product representation matrix $E_{P}$. 
The user vector $E_{u_{i}}$ and product vector $E_{p_{j}}$ are fed into the user-product cross-context module with document representation $H_{D}$. The dashed lines indicate the direct interactions of historical reviews in the cross-context module.} \label{fig:model_architecture} \end{figure*} \subsection{Incorporating Textual Information of Historical Reviews} For the purpose of making use of the textual information of historical reviews, we initialize all user and product embedding vectors using the representations of their historical reviews. Specifically, assume that we have a set of users $U=\{u_{1},\ldots,u_{N}\}$ and products $P=\{p_{1},\ldots,p_{M}\}$. Each user $u_{i}$ and product $p_{j}$ have their corresponding historical reviews: $u_{i}=\{D_{1}^{u_{i}}, \ldots, D_{n_{i}}^{u_{i}}\}$ and $p_{j}=\{D_{1}^{p_{j}}, \ldots, D_{m_{j}}^{p_{j}}\}$. For a certain user $u_{i}$, we firstly feed $D_{1}^{u_{i}}$ into the transformer encoder to obtain its representation $H_{D_{1}}^{u_{i}} \in \mathbf{R}^{L\times h}$, then we average $H_{D_{1}}^{u_{i}}$ along its first dimension: \begin{equation} \bar{H}_{D_{1}}^{u_{i}} = \frac{\sum{H_{D_{1}}^{u_{i}}}}{T_{D_{1}}^{u_{i}}} \end{equation} where $\bar{H}_{D_{1}}^{u_{i}}\in \mathbf{R}^{1\times h}$, $L$ is the maximum sequence length, $h$ is the hidden size of the transformer encoder, $T_{D_{1}}^{u_{i}}$ is the total number of tokens in $D_{1}^{u_{i}}$ excluding special tokens. Therefore, we simply sum the representations of all tokens in $D_{1}^{u_{i}}$ and then average to obtain a document vector $\bar{H}_{D_{1}}^{u_{i}}$. The same procedure is used to generate the document vectors of all documents in $u_{i}=\{D_{1}^{u_{i}}, \ldots, D_{n_{i}}^{u_{i}}\}$. Finally, we obtain the representation of $u_{i}$ by: \begin{equation} E_{u_{i}} = \frac{\sum_{k=1}^{n_{i}}{\bar{H}_{D_{k}}^{u_{i}}}}{n_{i}} \end{equation} where $E_{u_{i}} \in \mathbf{R}^{1\times h}$ is the initial representation of user $u_{i}$. 
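The two-stage average pooling described above can be sketched in plain Python (a minimal illustration; the function names are ours and the token vectors would in practice come from the PLM encoder):

```python
def mean_pool_tokens(token_vectors):
    """Average a review's token representations into one document vector
    (the per-review pooling, excluding special tokens)."""
    t = len(token_vectors)           # number of tokens T
    h = len(token_vectors[0])        # hidden size h
    return [sum(vec[k] for vec in token_vectors) / t for k in range(h)]

def init_user_embedding(reviews):
    """Average the document vectors of all historical reviews of a user
    (or product) to obtain its initial embedding E_{u_i}."""
    doc_vecs = [mean_pool_tokens(r) for r in reviews]
    n = len(doc_vecs)                # number of historical reviews n_i
    h = len(doc_vecs[0])
    return [sum(d[k] for d in doc_vecs) / n for k in range(h)]
```

For instance, a user with two reviews whose token vectors pool to $[2,2]$ and $[0,0]$ is initialized with the embedding $[1,1]$.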
The same process is applied to generate the representations of all the other users as well as all products. Finally, we have $E_{U} \in \mathbf{R}^{N\times h}$ and $E_{P} \in \mathbf{R}^{M\times h}$ as the user and product embedding matrices respectively. Moreover, in order to control the magnitude of $E_{U}$, $E_{P}$ to prevent them from being too large or too small, we propose scaling heuristics for $E_{U}$ and $E_{P}$: \begin{equation} \hat{E_{U}} = f_{U}E_{U}, f_{U} = \frac{F\_Norm(E)}{F\_Norm(E_{U})} \label{user_scaling} \end{equation} \begin{equation} \hat{E_{P}} = f_{P}E_{P}, f_{P} = \frac{F\_Norm(E)}{F\_Norm(E_{P})} \label{product_scaling} \end{equation} where $F\_Norm$ is the Frobenius norm and $E$ is a matrix whose elements $E_{i,j}$ are drawn from a normal distribution $\mathcal{N}(0,1)$. \subsection{User-Product Information Integration} Having enriched user and product representations with historical reviews, we propose a user-product cross-context module for the purpose of garnering sentiment clues from textual associations between users and products. In this module, we adopt four attention operations: \textit{user-to-user, product-to-product, user-to-product} and \textit{product-to-user}. We use \textit{Multi-Head-Attention}~\cite{transformer} in all four attention operations. Specifically, for \textit{Multi-Head-Attention(Q,K,V)}, we use the user representation $E_{u_{i}}$ or product representation $E_{p_{j}}$ as \textit{Q} and the user matrix $E_{U}$ or product matrix $E_{P}$ as \textit{K} and \textit{V}. It is important to note that, before using $E_{u_{i}}$ and $E_{p_{j}}$, we fuse the document information $H_{cls} \in \mathbf{R}^{1\times h}$, the representation of the \textit{[CLS]} token, into them as follows: \begin{equation} E_{u_{i}} = g_{u}(E_{u_{i}}, H_{cls}), E_{p_{j}} = g_{p}(E_{p_{j}}, H_{cls}), \end{equation} where $g_{u}$ and $g_{p}$ represent two linear layers combining $E_{u_{i}}/E_{p_{j}}$ and $H_{cls}$. 
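The Frobenius-norm scaling of Equations~\ref{user_scaling} and~\ref{product_scaling} amounts to rescaling an embedding matrix so that its norm matches that of a reference matrix. A minimal sketch (deterministic for illustration: we pass the reference norm $F\_Norm(E)$ in explicitly instead of sampling $E\sim\mathcal{N}(0,1)$; function names are ours):

```python
import math

def frobenius_norm(matrix):
    """F_Norm: square root of the sum of squared entries."""
    return math.sqrt(sum(x * x for row in matrix for x in row))

def rescale(matrix, reference_norm):
    """Scale `matrix` by f = F_Norm(E) / F_Norm(matrix), mirroring the
    scaling heuristic; `reference_norm` stands in for F_Norm(E)."""
    f = reference_norm / frobenius_norm(matrix)
    return [[f * x for x in row] for row in matrix]
```

For example, a matrix with Frobenius norm 5 rescaled against a reference norm of 10 simply has every entry doubled.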
\paragraph{User-to-User Attention} We use $E_{u_{i}}$ as the query and $E_{U}$ as the keys and values to gather information from similar users: \begin{equation} E_{u_{i}}^{uu} = Attn_{uu}(E_{u_{i}},E_{U}, E_{U}) \end{equation} \paragraph{Product-to-Product Attention} We use $E_{p_{j}}$ as the query and $E_{P}$ as the keys and values to gather information from similar products: \begin{equation} E_{p_{j}}^{pp} = Attn_{pp}(E_{p_{j}},E_{P}, E_{P}) \end{equation} \paragraph{User-to-Product Attention} We use $E_{u_{i}}$ as the query and $E_{P}$ as the keys and values to gather information from products associated with $u_{i}$: \begin{equation} E_{u_{i}}^{up} = Attn_{up}(E_{u_{i}},E_{P}, E_{P}) \end{equation} \paragraph{Product-to-User Attention} We use $E_{p_{j}}$ as the query and $E_{U}$ as the keys and values to gather information from users associated with $p_{j}$: \begin{equation} E_{p_{j}}^{pu} = Attn_{pu}(E_{p_{j}},E_{U}, E_{U}) \end{equation} We also employ two \textit{Multi-Head-Attention} operations between $E_{u_{i}}$/$E_{p_{j}}$~(query) and $H_{D}$~(key and value). The corresponding outputs are $E_{u_{i}}^{D}$ and $E_{p_{j}}^{D}$. We then combine the output of the user-product cross-context module and $H_{cls}$ to form the final representations. In $Attn_{uu}$ and $Attn_{pp}$, we add attention masks to prevent $E_{u_{i}}$ and $E_{p_{j}}$ from attending to themselves. Thus we also incorporate $E_{u_{i}}$ and $E_{p_{j}}$ as their \textit{self-attentive} representations: \begin{equation} \begin{split} H_{d} = & g(E_{u_{i}}^{uu}, E_{p_{j}}^{pp}, E_{u_{i}}^{up}, E_{p_{j}}^{pu}, E_{u_{i}}^{D}, E_{p_{j}}^{D}, \\ & E_{u_{i}}, E_{p_{j}},H_{cls}) \end{split} \end{equation} $H_{d}$ is fed into the classification layer to obtain the sentiment label distribution. We use \textit{Cross-Entropy} to calculate the loss between our model predictions and the gold labels. 
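All four attention operations share the same shape: a single user or product vector acts as the query, and an embedding matrix supplies the keys and values. The sketch below uses single-head scaled dot-product attention without learned projections (a simplification of the Multi-Head-Attention actually used; function names are ours), including the self-masking applied in $Attn_{uu}$ and $Attn_{pp}$:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values, mask_index=None):
    """Single-head scaled dot-product attention: `query` is one vector
    (e.g. E_{u_i}); `keys`/`values` are rows of a matrix (e.g. E_U).
    `mask_index` excludes one row, as when u_i must not attend to itself."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, row)) / math.sqrt(d)
              for row in keys]
    if mask_index is not None:
        scores[mask_index] = float("-inf")   # masked row gets zero weight
    weights = softmax(scores)
    h = len(values[0])
    return [sum(w * v[j] for w, v in zip(weights, values)) for j in range(h)]
```

With identical keys the output is the plain average of the values; masking one row removes its value from the mix, which is the effect of the self-masks in $Attn_{uu}$ and $Attn_{pp}$.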
\section{Experiments} \begin{table}[!htb] \centering \small \input{Tables/tbl-01-2-split} \caption{Number of documents per split and average document length for IMDb, Yelp-2013 and Yelp-2014.} \label{data_statistics_train} \end{table} \begin{table}[!htb] \centering \resizebox{\linewidth}{!}{ \input{Tables/tbl-01-1-stats} } \caption{Number of users and products with the average number of documents per user and product in IMDb, Yelp-2013 and Yelp-2014.} \label{data_statistics_user_product} \end{table} \begin{table*}[!htb] \centering \small \input{Tables/tbl-03-more-models-results} \caption{Results of our approach on various PLMs on the dev sets of IMDb, Yelp-2013 and Yelp-2014. We show the results of the baseline vanilla attention model for each PLM as well as the results of the same PLM with our proposed approach. We report the average of five runs with two metrics, Accuracy~($\uparrow$) and RMSE~($\downarrow$).} \label{tbl-03-more-models-results}% \end{table*} \subsection{Datasets} Our experiments are conducted on three benchmark English document-level sentiment analysis datasets: IMDb, Yelp-2013 and Yelp-2014~\cite{tang-etal-2015-user-product}. Statistics of the three datasets are shown in Table~\ref{data_statistics_train}. The IMDb dataset has the longest documents with an average length of approximately 395 words. All three are fine-grained sentiment analysis datasets: Yelp-2013 and Yelp-2014 have 5 classes, IMDb has 10 classes. Each review is accompanied by its corresponding anonymized user ID and product ID. The average number of reviews for each user/product is shown in Table~\ref{data_statistics_user_product}. \subsection{Experimental Setup} The pre-trained language models we employed in our experiments are BERT-base-uncased, BERT-large-uncased~\cite{bert}, SpanBERT-base, SpanBERT-large~\cite{joshi2020spanbert} and Longformer~\cite{beltagy2020longformer}. We use the implementations from Huggingface~\cite{Wolf2019HuggingFacesTS}. 
The hyperparameters are empirically selected based on the performance on the dev set. We adopt an early stopping strategy where we stop training when the performance on the dev set decreases. The maximum sequence length is set to 512 for all models in order to fully utilize the textual information in documents. For evaluation, we employ two metrics, \textit{Accuracy} and \textit{RMSE}~(Root Mean Square Error), which are calculated using the scripts in \newcite{scikit-learn}\footnote{https://scikit-learn.org/stable/modules/classes.html\#module-sklearn.metrics}. All experiments are conducted on one Nvidia GeForce RTX 3090 GPU. \subsection{Results} \begin{table*}[!htb] \centering \small \input{Tables/tbl-02-main-results} \caption{Experimental results on the test sets of IMDb, Yelp-2013 and Yelp-2014. We report the average results of five runs with two metrics, Accuracy~($\uparrow$) and RMSE~($\downarrow$). The best performance is in bold.} \label{tbl-02-main-results}% \end{table*} \begin{table*}[!htb] \centering \small \input{Tables/tbl-04-ablation-studies} \caption{Results of ablation studies on the dev sets of IMDb, Yelp-2013 and Yelp-2014.} \label{tbl-04-ablation-studies-results}% \end{table*} In order to validate the effectiveness of our approach, we first conduct experiments with several PLMs (BERT, SpanBERT and Longformer). Results on the dev sets of IMDb, Yelp-2013 and Yelp-2014 are shown in Table~\ref{tbl-03-more-models-results}. We compare our approach to a vanilla user and product attention baseline where 1) the user and product representation matrices are randomly initialized and 2) we simply employ multi-head attention between user/product and document representations without the user-product cross-context module. Our approach is able to achieve consistent improvements over the baseline with all PLMs on all three datasets. 
For example, our approach gives improvements over the baseline of 4.3 accuracy on IMDb, 1.6 accuracy on Yelp-2013 and 1.7 accuracy on Yelp-2014 for BERT-base. Moreover, our approach can give further improvements for large PLMs such as Longformer-large: improvements of 4.8 accuracy on IMDb, 2.8 accuracy on Yelp-2013 and 2.1 accuracy on Yelp-2014. The improvements over the baseline are statistically significant ($p < 0.01$). We compare our approach to previous approaches on the test sets of IMDb, Yelp-2013 and Yelp-2014. These include pre-BERT neural baseline models using CNN~\cite{ibm_cnn, kim-2014-convolutional} and LSTM~\cite{yang2016hierarchical_han} -- UPNN~\cite{tang-etal-2015-user-product}, NSC~\cite{chen-etal-2016-neural-sentiment}, UPDMN~\cite{dou-2017-capturing}, CMA~\cite{ma-etal-2017-cascading}, HCSC~\cite{amplayo-etal-2018-cold}, DUPMN~\cite{long-etal-2018-dual}, HUAPA~\cite{aaai-18-jiajun-chen}, RRP-UPM~\cite{cikm19-memory}, CHIM~\cite{amplayo-2019-rethinking} -- and two state-of-the-art models based on BERT including IUPC~\cite{lyu-etal-2020-improving} and MA-BERT~\cite{zhang-etal-2021-ma-bert}. We use BERT-base for a fair comparison with IUPC and MA-BERT, which both use BERT-base. The results are shown in Table~\ref{tbl-02-main-results}. Our model obtains the best performance on both accuracy and RMSE on IMDb, Yelp-2013 and Yelp-2014. Specifically, our model achieves absolute improvements in accuracy of 1.7, 1.6 and 1.2 on IMDb, Yelp-2013 and Yelp-2014 respectively compared to previous state-of-the-art results. As for RMSE, which indicates how \textit{close} the predicted labels are to ground-truth labels, our model outperforms earlier state-of-the-art models on RMSE by 0.011 on IMDb, 0.018 on Yelp-2013 and 0.010 on Yelp-2014. \begin{table*}[!htb] \centering \input{Tables/tbl-06-longformer-len} \caption{Results of Longformer under different maximum sequence lengths on the dev sets of IMDb and Yelp-2013. 
The truncated examples are the percentage of examples that exceed the corresponding max sequence length.} \label{tbl-06-longformer-len}% \end{table*} \begin{figure} \centering \includegraphics[width=\linewidth]{Figures/user_ratios_plot_xy.pdf} \caption{Experimental results of IUPC, MA-BERT and our approach under different proportions of reviews from 10\% to 100\% on the dev sets of IMDb and Yelp-2013.} \label{fig:user_ratio} \end{figure} \subsection{Ablation Studies} Results of an ablation analysis are shown in Table~\ref{tbl-04-ablation-studies-results}. The first row results are from a BERT model without user and product information. The next three rows correspond to \begin{enumerate} \setlength\itemsep{-0.5em} \item \textit{User-Product Information}, where we use the same method as the baseline vanilla attention model in Table~\ref{tbl-03-more-models-results} to inject user-product information. \item \textit{Textual Information}, our proposed approach of using historical reviews to initialize user and product representations. \item \textit{User-Product Cross-Context}, our proposed module incorporating the associations between users and products. \end{enumerate} The results show, firstly, that user and product information is highly useful for sentiment classification, and, secondly, that both textual information of historical reviews and user-product cross-context can improve sentiment classification. For example, \textit{Textual Information} gives \char`\~1 accuracy improvement on all three datasets while giving \char`\~0.04 RMSE improvement on IMDb and Yelp-2014, and \char`\~0.02 RMSE improvement on Yelp-2013. \textit{User-Product Cross-Context} achieves large improvements on IMDb of 2.8 accuracy compared to the improvements on Yelp-2013 and Yelp-2014 of 0.6 and 0.5 accuracy respectively. 
\subsection{Performance under Varying Amounts of Reviews per User/Product} \begin{figure} \centering \includegraphics[width=\linewidth]{Figures/effect_of_scaling_factor_bert_spanbert_3lines.pdf} \caption{Effect of varying the scaling factor for the User and Product Matrices on the dev sets of Yelp-2013~(left) and IMDb~(right). We include results of BERT-base~(top) and SpanBERT-base~(bottom). The left and right y-axes in each subplot represent \textit{Accuracy} and \textit{RMSE} respectively. The x-axis represents the scaling factor. The vertical green dashed line is the scaling factor from the Frobenius norm heuristic. The two horizontal dashed lines~(blue and orange) are the accuracy and RMSE produced by the Frobenius norm heuristic respectively.} \label{fig:effect_of_scaling_factor} \end{figure} We investigate model performance with different amounts of reviews belonging to the same user/product. We randomly sample a proportion of each user's reviews (from 10\% to 100\%). Then we use the sampled training data, where each user only has part of their total reviews (e.g. 10\%), to train sentiment classification models. We conduct experiments on Yelp-2013 using IUPC~\cite{lyu-etal-2020-improving}, MA-BERT~\cite{zhang-etal-2021-ma-bert} and our approach. The results are shown in Figure~\ref{fig:user_ratio}, where the x-axis represents the proportion of reviews that we used in experiments. When the proportion of reviews lies between 10\% and 50\%, our approach obtains superior performance compared to MA-BERT and IUPC, while the performance gain decreases when users have more reviews. The results show the advantage of our approach under a low-review scenario for users. \begin{table*}[!htb] \centering \small \resizebox{\textwidth}{!}{ \input{Tables/tbl-05-cases} } \caption{Example reviews from the dev set of Yelp-2013 and the corresponding predictions of each model. 
Very Negative~(VN), Negative~(N), Neutral~(Ne), Positive~(P), Very Positive~(VP).} \label{tbl-05-cases}% \end{table*} \subsection{Scaling Factor for User/Product Matrix} We conduct experiments with different scaling factors (see Equations~\ref{user_scaling} and~\ref{product_scaling}) on the dev sets of Yelp-2013 and IMDb using BERT-base. In this experiment, we apply the same scaling factor to both the user and product matrices. The results are shown in Figure~\ref{fig:effect_of_scaling_factor}, where we use scaling factors ranging from 0.05 to 1.5 in intervals of 0.05. The results show that our proposed scaling factor~(green dashed lines in Figure~\ref{fig:effect_of_scaling_factor}) based on the Frobenius norm yields competitive performance: the best accuracy according to the blue dashed line. Although the RMSE of the Frobenius norm heuristic is not always optimal, it is still lower than that of most other scaling factors~(except the RMSE of SpanBERT-base on IMDb). Moreover, the Frobenius norm heuristic can reduce the effort needed to tune the scaling factor: the optimal scaling factor varies across models and datasets, whereas the Frobenius norm heuristic consistently provides a competitive dynamic scaling factor. \subsection{Effect of Maximum Sequence Length} Document length can make document-level sentiment classification more challenging, especially for fine-grained classification, which requires the model to capture subtle expressions of sentiment polarity in documents. However, PLMs often have a fixed maximum sequence length (usually 512 WordPiece~\cite{wu2016google_nmt_system} tokens). A commonly used method for dealing with this constraint is to keep only the first 512 tokens of documents longer than the maximum length. This has been shown, however, not to be the best strategy~\cite{sun2019fine_bert_qiu}, because the expression of sentiment polarity could appear towards the end of a document. 
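As a sketch of this setup, head truncation and the fraction of truncated documents can be computed over pre-tokenized documents as follows (illustrative code, not the authors' implementation):

```python
def truncate_head(tokens, max_len=512):
    """Keep only the first max_len tokens (the common head-truncation strategy)."""
    return tokens[:max_len]

def truncation_rate(token_docs, max_len=512):
    """Fraction of documents that exceed max_len and would be truncated."""
    return sum(len(doc) > max_len for doc in token_docs) / len(token_docs)
```

Applied to a corpus, `truncation_rate` gives the "truncated examples" percentages reported in Table~\ref{tbl-06-longformer-len}.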
Therefore, in order to investigate the importance of text at the tail end of a long review for sentiment polarity prediction, we conduct experiments on the dev sets of IMDb and Yelp-2013 using Longformer-base~\cite{beltagy2020longformer}. We adopt various maximum sequence lengths, from 64 up to the 2048 tokens handled by Longformer. To focus purely on review texts, we do not include user/product information in this experiment. The results are shown in Table~\ref{tbl-06-longformer-len}. When reviews longer than the maximum length are truncated, the performance of sentiment classification is substantially reduced. For example, on IMDb, when the maximum length is set to 128 and 256, 96.3\% and 68.7\% of examples are truncated and the accuracy drops \char`\~40\% compared to the best performance. However, the effect is smaller for Yelp-2013. For example, when 63.7\% and 29.3\% of examples are truncated, the accuracy only drops \char`\~10\% and \char`\~5\% compared to the best accuracy. This is not surprising given that Yelp reviews are shorter than IMDb reviews (see Table~\ref{data_statistics_train}). \subsection{Examples} Some cases sampled from the dev set of Yelp-2013 and the corresponding predictions from Vanilla BERT without user and product information, IUPC~\cite{lyu-etal-2020-improving}, MA-BERT~\cite{zhang-etal-2021-ma-bert} and our model are shown in Table~\ref{tbl-05-cases}. \paragraph{Example 1}This is a straightforward positive review, since it clearly conveys satisfaction with the restaurant. Thus all models make the correct prediction. \paragraph{Example 2} This is similar to the first example in narrative style, but the ground-truth sentiment label is Positive rather than Very Positive, since this user tends not to give very high ratings. This example shows the importance of user information. \paragraph{Example 3} This review conveys a very negative attitude. 
However, this user tends not to give very poor ratings, and the reviews this store has received are not bad. With both user and product information, our model makes the correct prediction of Neutral. \paragraph{Example 4} All models, regardless of whether they use user and product information, predict Neutral or Negative, while in fact the review label is Very Positive. This is a difficult example where the sentiment is subtly expressed. \section{Conclusion and Future Work} In order to make the best use of user and product information in document-level sentiment classification, we propose a text-driven approach: 1) explicitly utilizing historical reviews to initialize user and product representations, and 2) modeling associations between users and products with an additional user-product cross-context module. The experiments conducted on three benchmark datasets, IMDb, Yelp-2013 and Yelp-2014, demonstrate that our approach substantially outperforms previous state-of-the-art approaches and is effective for several PLMs. We also show that our method obtains superior performance when the number of reviews for a user/product is limited. For future work, we aim to apply our approach to more tasks where there is a need to learn representations for various types of attributes. \section{Appendix} \label{sec:appendix} \subsection{Hyperparameters} We show the Learning Rate and Batch Size used to train our models on all datasets in Table~\ref{tbl-07-hyperparameters}. 
\begin{table*}[!htb] \centering \input{Tables/tbl-07-hyperparameters} \caption{The hyperparameters used to fine-tune all models on all datasets including Learning Rate~(LR) and Batch Size~(BS).} \label{tbl-07-hyperparameters}% \end{table*} \section*{Acknowledgements} This work was funded by Science Foundation Ireland through the SFI Centre for Research Training in Machine Learning (18/CRT/6183).
2101.02510
\section{Introduction} One of the most typical properties of social networks is the presence of \emph{homophily}~\cite{mcpherson_homophily_1987,shrum_friendship_1988,moody_race_2001,mcpherson_birds_2001}, i.e. the increased tendency of an edge to exist between two nodes if they share the same underlying characteristic, such as race, gender, class and a variety of other social parameters. More broadly, when the underlying similarity parameter is not specified \emph{a priori}, the same homophily pattern is known as \emph{community structure}~\cite{fortunato_community_2010}. Another pervasive pattern encountered in the same kinds of network is \emph{transitivity}~\cite{rapoport_spread_1953,holland_transitivity_1971,holland_local_1975}, i.e. the increased probability of observing an edge between two nodes if they have a neighbor in common. Although these patterns are indicative of two distinct mechanisms of network formation, namely choice or constraint homophily~\cite{kossinets_origins_2009} and triadic closure~\cite{granovetter_strength_1973}, respectively, they are generically conflated in non-longitudinal data. This is because both processes can result in the same kinds of observation: (1) the preferred connection between nodes of the same kind can induce the presence of triangles involving similar nodes, and (2) the tendency of triangles to be formed can induce the formation of groups of nodes with a higher density of connections between them, when compared to the rest of the network~\cite{foster_clustering_2011,foster_communities_2010}. This conflation means we cannot reliably interpret the underlying mechanisms of network formation merely from the abundance of triangles or observed community structure in network data. In this work we present a solution to this problem, consisting of a principled method to disentangle homophily and community structure from triadic closure in network data. 
This is achieved by formulating a generative model that includes community structure in a first instance, and an iterated process of triadic closure in a second. Based on this model, we develop a Bayesian inference algorithm that is capable of identifying which edges are more likely to be due to community structure or triadic closure, in addition to the underlying community structure itself. Several authors have demonstrated that triadic closure can induce community structure and homophily in networks. Foster \textit{et al.}~\cite{foster_clustering_2011,foster_communities_2010} have shown that maximum entropy network ensembles conditioned on prescribed abundances of triangles tend to possess high modularity. A more recent analysis of this kind of ensemble by López \textit{et al.}~\cite{lopez_transitions_2020} showed that it is marked by a spontaneous size-dependent formation of ``triangle clusters.'' Bianconi \textit{et al.}~\cite{bianconi_triadic_2014} have investigated a network growth model, where nodes are progressively added to the network, and connected in such a way as to increase the number of triangles, and have shown that it is capable of producing networks with emergent community structure. The effect of triangle formation on apparent community structure has been further studied by Wharrie \textit{et al.}~\cite{wharrie_micro-_2019}, who showed that those patterns can even mislead methods specifically designed to avoid the detection of spurious communities in random networks. More recently, Asikainen \textit{et al.}~\cite{asikainen_cumulative_2020} have shown that iterated triadic closure can exacerbate homophily present in the original network, via a simple macroscopic model. 
The approach presented in this work differs from the aforementioned ones primarily in that it runs in the reverse direction: instead of only defining a conceptual network model that demonstrates the interlink between triadic closure and homophily given prescribed parameters, the proposed method operates on empirical network data, and reconstructs the underlying generative process, decomposing it into distinct community structure and triadic closure components. As we show, this reconstruction yields a detailed interpretation of the underlying mechanisms of network formation, allowing us to identify macro-scale structures that emerge spontaneously from micro-scale higher-order interactions~\cite{battiston_networks_2020,benson_simplicial_2018}, and in this way we can separate them from inherently macro-scale structures. Our method is based on the nonparametric Bayesian inference of a modified version of the stochastic block model (SBM)~\cite{holland_stochastic_1983,peixoto_bayesian_2019} with the addition of triadic closure edges, and therefore leverages the statistical evidence available in the data, without overfitting. Importantly, our method is capable of determining when the observed structure can be attributed to an actual preference of connection between nodes, as described by the SBM, rather than an iterated triadic closure process occurring on top of a substrate network. As a result, we can distinguish between ``true'' and \emph{apparent} community structure caused by increased transitivity. As we also demonstrate, this decomposition yields an edge prediction method that tends to perform better in many instances than the SBM used in isolation. Our manuscript is organized as follows. In Sec.~\ref{sec:model} we describe our model and its inference procedure. In Sec.~\ref{sec:artificial} we demonstrate how it can be used to disambiguate triadic closure from community structure in artificially generated networks. 
In Sec.~\ref{sec:empirical} we perform an analysis of empirical networks, in view of our method. In Sec.~\ref{sec:prediction} we show how our model can improve edge prediction. We conclude in Sec.~\ref{sec:conclusion}. \section{Stochastic block model with triadic closure (SBM/TC)}\label{sec:model} \begin{figure} \resizebox{\columnwidth}{!}{ \begin{tabular}{cccc} \multicolumn{4}{c}{\larger Generative process}\\\hline\\[-.7em] Seminal edges & \multicolumn{2}{c}{Triadic closure} & Observed network\\ \includegraphics[width=.33\columnwidth]{diagram-black.pdf}& \multicolumn{2}{c}{\includegraphics[width=.33\columnwidth]{diagram-red.pdf}}& \includegraphics[width=.33\columnwidth]{diagram.pdf}\\ \multicolumn{4}{c}{\larger Statistical inference}\\\hline\\[-.7em] Observed network & \multicolumn{2}{c}{Posterior distribution} & Marginal probabilities\\[.5em] \multirow{2}{*}[1cm]{\includegraphics[width=.33\columnwidth]{diagram.pdf}}& \includegraphics[width=.15\columnwidth]{diagram-snap-1.pdf}& \includegraphics[width=.15\columnwidth]{diagram-snap-2.pdf}& \multirow{2}{*}[1cm]{\includegraphics[width=.33\columnwidth]{diagram-gm.pdf}}\\ &\includegraphics[width=.15\columnwidth]{diagram-snap-3.pdf} & \includegraphics[width=.15\columnwidth]{diagram-snap-5.pdf} & \end{tabular}} \caption{Schematic representation of the generative process considered (top) and the associated inference procedure (bottom). The generative process consists in the placement of seminal edges according to a SBM, and the addition of triadic closure edges conditioned on the seminal edges (shown in red). The inference procedure runs in the reverse direction, and given an observed graph, it produces a posterior distribution of possible divisions of seminal and triadic closure edges, with which marginal probabilities of the edge identities can be obtained.\label{fig:diagram}} \end{figure} Community structure and triadic closure are generally interpreted as different processes of network formation. 
With the objective of allowing their identification \emph{a posteriori} from network data, our approach consists in defining a generative network model that encodes both processes explicitly. More specifically, our generative model consists of two steps, with the first one being the generation of a substrate network containing ``seminal'' edges, placed according to an arbitrary mixing pattern between nodes, and an additional layer containing triadic closure edges, potentially connecting two nodes if they share a common neighbor in the substrate network (see Fig.~\ref{fig:diagram}). The final network is obtained by ``erasing'' the identity of the edges, i.e. whether they are seminal or due to closure of a triangle. Conversely, the inference procedure moves in the opposite direction, i.e. given a simple graph, with no annotations on the edges, we consider the posterior distribution of all possible divisions into seminal and triadic closure edges, weighted according to their plausibility. 
We will denote the seminal edges with an adjacency matrix $\bm{A}$, and for its generation we will use the degree-corrected stochastic block model (DC-SBM)~\cite{karrer_stochastic_2011}, conditioned on a partition $\bm{b}$ of the nodes into $B$ groups, where $b_i\in[1,B]$ is the group membership of node $i$, which has a marginal distribution given by~\cite{peixoto_nonparametric_2017} \begin{multline}\label{eq:sbm} P(\bm{A}|\bm{b}) = \frac{\prod_{r<s}e_{rs}!\prod_re_{rr}!!\prod_ik_i!} {\prod_{i<j}A_{ij}!\prod_iA_{ii}!!\prod_re_r!}\times \prod_r\frac{\prod_k\eta_k^r!}{n_r!q(e_r,n_r)}\times\\ {\frac{B(B+1)}{2} + E - 1 \choose E}^{-1}, \end{multline} where $e_{rs}=\sum_{ij}A_{ij}\delta_{b_i,r}\delta_{b_j,s}$ is the number of edges between groups $r$ and $s$ (or twice that for $r=s$), $e_r=\sum_se_{rs}$, $k_i=\sum_jA_{ij}$ is the degree of node $i$, $n_r=\sum_i\delta_{b_i,r}$ is the number of nodes in group $r$, $\eta^r_k=\sum_i\delta_{b_i,r}\delta_{k_i,k}$ is the number of nodes in group $r$ with degree $k$, $E=\sum_{ij}A_{ij}/2$ is the total number of edges, and $q(m,n)$ is the number of restricted partitions of integer $m$ into at most $n$ parts. We refer to Ref.~\cite{peixoto_nonparametric_2017} for a detailed derivation of this marginal likelihood, including also the extension for hierarchical partitions that is straightforward to incorporate, as well as latent multigraphs~\cite{peixoto_latent_2020} (see appendix~\ref{app:multigraph}), both of which we have used in our analysis. This model is capable of generating networks with arbitrary degree distributions and mixing patterns between groups of nodes, including homophily~\cite{peixoto_bayesian_2019}. The triadic closure edges are represented by an additional set of $N$ ``ego'' graphs $\bm{g}$, attributed to each node $u$ of $\bm{A}$, where $\bm{g}(u)$ is the ego graph of node $u$. 
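The only nonstandard term in the DC-SBM marginal likelihood of Eq.~\ref{eq:sbm} is the restricted partition count $q(m,n)$, which satisfies the standard recurrence $q(m,n)=q(m,n-1)+q(m-n,n)$, with $q(0,n)=1$. A minimal memoized sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def q(m, n):
    """Number of partitions of integer m into at most n parts."""
    if m == 0:
        return 1
    if n == 0 or m < 0:
        return 0
    # Partitions with all parts <= n-1, plus those with at least one part of size n
    # (by conjugation, "at most n parts" equals "largest part at most n").
    return q(m, n - 1) + q(m - n, n)
```

For example, $q(5,5)=7$, the total number of integer partitions of 5.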
The ego graph $\bm{g}(u)$ is allowed only to contain nodes that are neighbors of $u$ in $\bm{A}$ (excluding $u$ itself), and edges that do not exist in $\bm{A}$, so that any existing edge in $\bm{g}(u)$ amounts to a triadic closure in $\bm{A}$. The adjacency of $\bm{g}(u)$ is given by \begin{equation} g_{ij}(u) = \begin{cases} 1, &\text{ if } (i,j) \in \bm{g}(u), \\ 0, &\text{ otherwise.} \end{cases} \end{equation} Let us denote the existence of an open triad $(i,u,j)$ in $\bm{A}$ with \begin{equation} m_{ij}(u)=A_{ui}A_{uj}(1-A_{ij}), \end{equation} such that $m_{ij}(u) = 1$ if the open triad exists, or $0$ otherwise, and we adopt the convention $A_{uu} = 0$ throughout. Therefore, an edge $(i,j)$ can exist in $\bm{g}(u)$ only if $m_{ij}(u) = 1$. Based on this, the ego networks are generated independently with probability \begin{equation}\label{eq:p_g} P(\bm{g}(u)|\bm{A},p_u) = \prod_{i<j}\left[p_um_{ij}(u)\right]^{g_{ij}(u)}\left[1-p_um_{ij}(u)\right]^{1-g_{ij}(u)}, \end{equation} where $p_u \in [0,1]$ is a probability associated with node $u$ that controls the degree to which its neighbors in $\bm{A}$ end up connected in $\bm{g}(u)$. This process may result in the same edge $(i,j)$ existing in different graphs $\bm{g}(u)$, if $i$ and $j$ share more than one common neighbor in $\bm{A}$. We therefore consider the resulting simple graph $\bm{G}(\bm{A},\bm{g})$, constructed by ignoring any multiplicities introduced by the various ego graphs, i.e. with adjacency given by \begin{equation} G_{ij}(\bm{A},\bm{g}) = \begin{cases} 1, & \text{ if } A_{ij} + \sum_u g_{ij}(u) > 0,\\ 0, & \text{ otherwise.} \end{cases} \end{equation} The joint probability of the above process is then given by \begin{equation} P(\bm{G},\bm{g},\bm{A}|\bm p, \bm{b}) = \bm{1}_{\{\bm{G} = \bm{G}(\bm{A},\bm{g})\}}P(\bm{A}|\bm{b})\prod_uP(\bm{g}(u)|\bm{A},p_u), \end{equation} where $\bm{1}_{\{x\}}$ is the indicator function. 
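A forward sample of the ego graphs according to Eq.~\ref{eq:p_g} can be sketched as follows, using a dense 0/1 adjacency matrix for brevity (illustrative code, not our implementation):

```python
import itertools
import random

def sample_triadic_closure(A, p, seed=0):
    """For each node u, close each open triad (i, u, j) independently
    with probability p[u]; return the ego graphs and the final graph G."""
    rng = random.Random(seed)
    n = len(A)
    ego = {u: set() for u in range(n)}
    G = [row[:] for row in A]  # G starts as the seminal graph A
    for u in range(n):
        nbrs = [v for v in range(n) if A[u][v]]
        for i, j in itertools.combinations(nbrs, 2):
            if A[i][j] == 0 and rng.random() < p[u]:  # open triad -> closed
                ego[u].add((i, j))
                G[i][j] = G[j][i] = 1  # multiplicities are ignored in G
    return ego, G
```

For a path $0$--$1$--$2$ with $p_1=1$, the single open triad $(0,1,2)$ is always closed, producing a triangle.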
Unfortunately, the marginal probability of the final graph \begin{equation} P(\bm{G}) = \sum_{\bm{g},\bm{A},\bm{b}}\int P(\bm{G},\bm{g},\bm{A}|\bm p, \bm{b})P(\bm p)P(\bm{b})\,\mathrm{d}\bm p, \end{equation} with $P(\bm p)$ and $P(\bm{b})$ being prior probabilities, does not lend itself to a tractable computation. Luckily, however, this will not be needed for our inference procedure. Instead, we are interested in the posterior distribution \begin{equation} P(\bm{g},\bm{A},\bm{b} | \bm{G}) = \frac{P(\bm{G},\bm{g},\bm{A}|\bm{b})P(\bm{b})}{P(\bm{G})}, \end{equation} which describes the probability of a decomposition of an observed simple graph $\bm{G}$ into its seminal graph $\bm{A}$, the underlying community structure $\bm{b}$, and the triadic closures represented by the ego graphs $\bm{g}$. (Although the marginal distribution $P(\bm{G})$ appears in the denominator of the above equation, we will see later on that it is just a normalization constant that does not in fact need to be computed.) The marginal likelihood \begin{equation} P(\bm{G},\bm{g},\bm{A}|\bm{b}) = P(\bm{G}|\bm{A},\bm{g})P(\bm{g}|\bm{A})P(\bm{A}|\bm{b}) \end{equation} can be computed easily via \begin{align} P(\bm{g}|\bm{A}) &= \prod_u\int_0^1P(\bm{g}(u)|\bm{A},p)P(p)\,\mathrm{d} p\nonumber\\ &= \prod_u\left[{\sum_{i<j}m_{ij}(u) \choose \sum_{i<j}g_{ij}(u)}^{-1} \frac{1}{1+\sum_{i<j}m_{ij}(u)}\right],\label{eq:marginal_simple} \end{align} where we have used a uniform prior $P(p)=1$, and with the remaining likelihood term being only the indicator function, $P(\bm{G}|\bm{A},\bm{g})=\bm{1}_{\{\bm{G} = \bm{G}(\bm{A},\bm{g})\}}$. Although this choice of priors makes the calculation very simple, it implies that we expect the observed graphs to always have a large fraction of triadic closures. 
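Note that the per-node factor in Eq.~\ref{eq:marginal_simple} depends only on the number of open triads $M_u=\sum_{i<j}m_{ij}(u)$ and the number of closures $E_u=\sum_{i<j}g_{ij}(u)$, since it is the Beta integral $\int_0^1 p^{E_u}(1-p)^{M_u-E_u}\,\mathrm{d}p = [(M_u+1)\binom{M_u}{E_u}]^{-1}$. A minimal sketch of its logarithm:

```python
import math

def log_marginal_ego(M, E):
    """log P(g(u)|A) for one ego graph under a uniform prior on p_u:
    M = number of open triads (i, u, j); E = number of them that are closed."""
    return -math.log(math.comb(M, E)) - math.log(1 + M)
```

For instance, with $M=3$ open triads of which $E=1$ is closed, the marginal is $1/(3\cdot 4)=1/12$.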
In Appendix~\ref{app:general} we describe a slight modification of this model that makes it more versatile with respect to the abundance of triadic closures, at the expense of yielding somewhat longer expressions for the likelihood. We note that we made use of the modifications specified there in our ensuing analysis, as they can only improve the use of the model. \subsection{Iterated triadic closures} \begin{figure} \includegraphics[width=.7\columnwidth]{diagram-snap-1-l.pdf} \caption{Example network illustrating how iterated triadic closures are implemented in the model. The initial network (black edges) receives the first generation of triadic closures (red edges). The second generation (green edges) can only close triads involving at least one edge of the first generation (red). The third generation (blue edges) in turn can only close triads involving at least one edge belonging to the second generation (green). \label{fig:iteration}} \end{figure} Triadic closures increase the number of edges in the network, and in this way can introduce opportunities for new triadic closures, involving both older and newer edges. This leads naturally to a dynamical model, where generations of triadic closures are progressively introduced to the network. We can incorporate this in our model via ``layers'' of ego graphs $\bm{g}^{(l)}$ representing edges introduced in generation $l\in [1,\dots,L]$. For our formulation, it will be useful to define the cumulative network at generation $l$, defined recursively by \begin{equation} A^{(l)}_{ij}= \begin{cases} 1, & \text{ if } A_{ij}^{(l-1)}+\sum_ug_{ij}^{(l)}(u) > 0,\\ 0, & \text{ otherwise,} \end{cases} \end{equation} with boundary conditions $\bm{A}^{(0)}=\bm{A}$, and $\bm{g}^{(0)}(u)$ being empty graphs for all $u$, and we will denote the final generation as $\bm{A}^{(L)}=\bm{G}$. 
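A forward sketch of this iterated process (see Fig.~\ref{fig:iteration}) follows; the code is illustrative only, and adopts the convention that at the first generation the seminal edges play the role of the "preceding generation":

```python
import itertools
import random

def iterate_closures(A, p, L, seed=0):
    """Run L generations of triadic closure on substrate A. A triad
    (i, u, j) is eligible at generation l only if (u, i) or (u, j)
    was added exactly at generation l - 1."""
    rng = random.Random(seed)
    n = len(A)
    cum = [row[:] for row in A]   # cumulative adjacency A^(l)
    last = [row[:] for row in A]  # edges added at the previous generation
    for _ in range(L):
        new = [[0] * n for _ in range(n)]
        for u in range(n):
            nbrs = [v for v in range(n) if cum[u][v]]
            for i, j in itertools.combinations(nbrs, 2):
                eligible = (last[u][i] or last[u][j]) and cum[i][j] == 0
                if eligible and rng.random() < p[u]:
                    new[i][j] = new[j][i] = 1
        for i in range(n):
            for j in range(n):
                cum[i][j] |= new[i][j]
        last = new
    return cum
```

On a path $0$--$1$--$2$--$3$ with all $p_u=1$, the first generation closes $(0,2)$ and $(1,3)$, and only the second generation can add $(0,3)$, since that triad is created by first-generation edges.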
The formation of new triadic closure layers is done according to the probability \begin{multline} P(\bm{g}^{(l)}(u)|\bm{A}^{(l-1)}, \bm{g}^{(l-1)}, p_u^{(l)}) = \\ \prod_{i<j}\left[p_u^{(l)}m_{ij}^{(l)}(u)\right]^{g_{ij}^{(l)}(u)} \left[1-p_u^{(l)}m_{ij}^{(l)}(u)\right]^{1-g_{ij}^{(l)}(u)}, \end{multline} where an open triad $(i,u,j)$ at generation $l$ is denoted by \begin{equation} m_{ij}^{(l)}(u) = w_{ij}^{(l)}(u)\left(1-A_{ij}^{(l-1)}\right), \end{equation} so that $m_{ij}^{(l)}(u) \in \{0, 1\}$, where \begin{multline} w_{ij}^{(l)}(u) =\\ \begin{cases} 1, & \text{ if } A_{ui}^{(l-1)}\sum_vg^{(l-1)}_{uj}(v) + A_{uj}^{(l-1)}\sum_vg^{(l-1)}_{ui}(v) > 0,\\ 0, & \text{ otherwise.} \end{cases} \end{multline} determines whether or not the open triad $(i,u,j)$ at generation $l$ has at least one of the edges $(u,i)$ or $(u,j)$ formed exactly at the preceding generation $l-1$. This restriction means that triadic closures at generation $l$ can only close new triads that have been introduced at generation $l-1$, not previously. The reason for this is a matter of identifiability: an edge at generation $l$ that closes an open triad that has been formed at generation $l' < l$ could also have been generated in any of the intermediate generations $[l',l-1]$, thus introducing an inevitable ambiguity in the inference. The above restriction removes the ambiguity, and forces the new generations to form triadic closures which could not have existed in the preceding generations (see Fig.~\ref{fig:iteration}). Note that this restriction does not alter the generality of the model, since the same final networks can still be formed despite this restriction, simply by choosing appropriate values for $\{p_u^{(l)}\}$. With the above, the joint likelihood of all generations is given by \begin{multline} P(\{\bm{g}^{(l)}\},\bm{A}|\bm{b},\bm p) =\\ P(\bm{A}|\bm{b})\prod_{l=1}^{L}\prod_uP(\bm{g}^{(l)}(u)|\bm{A}^{(l-1)}, \bm{g}^{(l-1)}, p_u^{(l)}). 
\end{multline} Following the same calculation as before, we obtain the marginal likelihood \begin{equation} P(\{\bm{g}^{(l)}\},\bm{A}|\bm{b}) = P(\bm{A}|\bm{b})\prod_{l=1}^{L}P(\bm{g}^{(l)}|\bm{A}^{(l-1)}, \bm{g}^{(l-1)}), \end{equation} with the individual terms in the product being entirely analogous to Eq.~\ref{eq:marginal_simple}, \begin{multline} P(\bm{g}^{(l)}|\bm{A}^{(l-1)}, \bm{g}^{(l-1)}) = \\ \prod_u\left[{\sum_{i<j}m_{ij}^{(l)}(u) \choose \sum_{i<j}g_{ij}^{(l)}(u)}^{-1} \frac{1}{1+\sum_{i<j}m_{ij}^{(l)}(u)}\right]. \end{multline} Finally, the posterior distribution for the reconstruction becomes \begin{equation}\label{eq:posterior} P(\{\bm{g}^{(l)}\},\bm{A},\bm{b} | \bm{G}) = \frac{P(\bm{G},\{\bm{g}^{(l)}\},\bm{A}|\bm{b})P(\bm{b})}{P(\bm{G})}. \end{equation} Note that for $L=1$ we recover the previous model. Having to specify $L$ beforehand is not a strict necessity, since the inference will only occupy new generations if this yields a more parsimonious description of the network.\footnote{If we wanted to treat $L$ as an unknown, we should introduce a prior for $L$, $P(L)$, and include that in the posterior as well. However, with the parametrization in Appendix~\ref{app:general}, generations which are unpopulated with edges have no contribution to the marginal likelihood. 
Therefore we can simply set $L$ to be a sufficiently large value, for example $L={N\choose 2}$, since for later generations it is impossible to add new edges.} \subsection{Inference algorithm} \begin{figure*} \begin{tabular}{P{.33\textwidth}P{.33\textwidth}P{.33\textwidth}} \includegraphics[width=.33\textwidth]{random.pdf} & \includegraphics[width=.33\textwidth]{random_triadic.pdf}& \includegraphics[width=.33\textwidth]{random_triadic_inferred.pdf}\\ (a) Random seminal edges& (b) Triadic closure edges and spurious communities found with SBM ($\Sigma_{\text{SBM}}=801.7$ nats)& (c) Inference of the SBM/TC model ($\Sigma_{\text{SBM/TC}}=590.6$ nats) \end{tabular} \caption{\textbf{(a)} Example artificial network generated as a fully random graph with a geometric degree distribution, $N=100$ nodes and $E=94$ edges, and \textbf{(b)} a process of triadic closure based on network (a) with parameter $p_u=0.8$ for every node, with closure edges shown in red. Also shown are the partition found by fitting the SBM to the resulting network and the description length obtained. \textbf{(c)} The result of inferring the SBM/TC model, which uncovers a single partition --- no community structure --- and the closure edges shown in red (the thickness of the edges corresponds to the marginal probabilities $\pi_{ij}$ and $1-\pi_{ij}$ for the seminal and closure edges, respectively). Also shown is the description length of the SBM/TC fit. \label{fig:random}} \end{figure*} The posterior distribution of Eq.~\ref{eq:posterior} can be written exactly, up to a normalization constant. However, this fact alone does not allow us to directly sample from this distribution, which can only be done in very special cases. Instead, we rely here on Markov chain Monte Carlo (MCMC), implemented as follows. We begin with an arbitrary choice of $\{\bm{g}^{(l)}\}$, $\bm{A}$ and $\bm{b}$ that is compatible with our observed graph $\bm{G}$. 
We then consider modifications of these quantities, and accept or reject them according to the Metropolis-Hastings criterion~\cite{metropolis_equation_1953,hastings_monte_1970}. More specifically, we consider moves of the kind $P(\{{\bm{g}'}^{(l)}\},\bm{A}'|\{{\bm{g}}^{(l)}\},\bm{A})$, and accept them according to the probability \begin{equation} \min\left(1, \frac{P(\{{\bm{g}'}^{(l)}\},\bm{A}',\bm{b} | \bm{G})P(\{{\bm{g}}^{(l)}\},\bm{A}|\{{\bm{g}'}^{(l)}\},\bm{A}')} {P(\{\bm{g}^{(l)}\},\bm{A},\bm{b} | \bm{G})P(\{{\bm{g}'}^{(l)}\},\bm{A}'|\{{\bm{g}}^{(l)}\},\bm{A})}\right), \end{equation} which, as we mentioned before, does not require the computation of the intractable marginal probability $P(\bm{G})$. We also consider moves that change the community structure, according to a proposal $P(\bm{b}'|\bm{b})$, and accept them with probability \begin{equation} \min\left(1, \frac{P(\bm{A}|\bm{b}')P(\bm{b}')P(\bm{b}|\bm{b}')} {P(\bm{A}|\bm{b})P(\bm{b})P(\bm{b}'|\bm{b})}\right). \end{equation} For the latter we use the merge-split moves described in Ref.~\cite{peixoto_merge-split_2020}. Iterating the moves described above eventually produces samples from the target posterior distribution. In Appendix~\ref{app:mcmc} we specify the details of the particular move proposals we use. Given samples from the posterior distribution, we can use them to summarize it in a variety of ways. A useful quantity is the marginal probability $\pi_{ij}$ of an edge $(i,j)$ being seminal, which is given by \begin{equation} \pi_{ij} = \sum_{\{\bm{g}^{(l)}\},\bm{A},\bm{b}}A_{ij}P(\{\bm{g}^{(l)}\},\bm{A},\bm{b} | \bm{G}). \end{equation} Conversely, the reciprocal quantity, \begin{equation} 1-\pi_{ij}, \end{equation} corresponds to the probability that edge $(i,j)$ is due to triadic closure, occurring in any generation or ego graph. Therefore, the quantity $\bm\pi$ gives us a concise summary of the posterior decomposition of a network, and we will use it throughout our analysis. 
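In practice, $\pi_{ij}$ is estimated as the fraction of posterior samples in which edge $(i,j)$ is seminal. A minimal estimator (illustrative code; samples are represented as sets of seminal edges):

```python
def edge_marginals(samples, edges):
    """samples: list of sets of seminal edges, one set per posterior draw.
    edges: the edges of the observed graph G.
    Returns pi[(i, j)] = estimated probability that (i, j) is seminal;
    1 - pi[(i, j)] is then the probability it is a triadic closure."""
    T = len(samples)
    return {e: sum(e in s for s in samples) / T for e in edges}
```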
(It is easy to devise and compute other summaries, such as the marginal probability of an edge belonging to a given triadic generation, or a particular ego graph, but we will not have use for those in our analysis.) \section{Distinguishing community structure from triadic closure}\label{sec:artificial} Here we illustrate how triadic closure can be mistaken for community structure, and how our inference method is capable of uncovering it. We begin by considering an artificial example, where we first sample a fully random network with a geometric degree distribution, $N=100$ nodes and $E=94$ edges, as shown in Fig.~\ref{fig:random}a. This network does not possess any community structure, since the probability of observing an edge is just proportional to the product of the degrees of the endpoint nodes --- indeed if we fit a DC-SBM to it, we uncover, correctly, only a single group. Conditioned on this network, Fig.~\ref{fig:random}b shows sampled triadic closure edges, according to the model described previously, where each node has the same probability $p_u=0.8$ of having neighbors connected in their ego graphs. In the same figure we show the result of fitting the DC-SBM on the network obtained by ignoring the edge types. That approach finds five assortative communities, corresponding to regions of higher densities of edges induced by the random introduction of transitive edges. One should not, however, interpret the presence of these regions as a special affinity between the respective groups of nodes, since they are a result of a random process that has no relation to that particular division of the network --- indeed, if we run the whole process again from the beginning, the nodes will most likely end up clustered in completely different ``communities.'' If we now perform the inference of our SBM with triadic closure (SBM/TC), we obtain the result shown in Fig.~\ref{fig:random}c. 
Not only are we capable of distinguishing the seminal from the triadic closure edges (AUC ROC = $0.92$), but we also correctly identify the presence of a single group of nodes, which is in full accordance with the completely random nature in which the network has been generated. In other words, with the SBM/TC we are not misled by the density heterogeneity introduced by triadic closures into thinking that the network possesses real community structure, and we realize instead that it can be better explained by a different process. In the artificial example considered above, the result obtained with the SBM/TC model is more appealing, since it more closely matches the known generative process that was used. However, in more realistic situations, we will need to decide if it provides a better description of the data without such privileged information. In view of this, we can make our model selection argument more formal in the following way. Suppose we are considering a partition $\bm{b}^{(1)}$ found by inferring the SBM on a given network, as well as another partition $\bm{b}^{(2)}$ and ego graphs $\{\bm{g}^{(l)}\}$ found with the SBM/TC model. We can decide which one provides a better description of a network via the posterior odds ratio, \begin{align} \Lambda &= \frac{P(\bm{b}^{(2)}, \{\bm{g}^{(l)}\}, \mathcal{H}_{\text{SBM/TC}}|\bm{G})} {P(\bm{b}^{(1)}, \mathcal{H}_{\text{SBM}}|\bm{G})}\\ &= \frac{P(\bm{G},\{\bm{g}^{(l)}\},\bm{A},\bm{b}^{(2)})} {P(\bm{G},\bm{b}^{(1)})} \times \frac{P(\mathcal{H}_{\text{SBM/TC}})}{P(\mathcal{H}_{\text{SBM}})}, \end{align} where $P(\mathcal{H}_{\text{SBM/TC}})$ and $P(\mathcal{H}_{\text{SBM}})$ are the prior probabilities for either model.
In case these are the same, we have \begin{equation} \Lambda = \mathrm{e}^{-(\Sigma_{\text{SBM/TC}} - \Sigma_{\text{SBM}})}, \end{equation} where $\Sigma_{\text{SBM/TC}}$ and $\Sigma_{\text{SBM}}$ are the description lengths of both hypotheses, given by \begin{align} \Sigma_{\text{SBM/TC}} &= -\ln P(\bm{G},\{\bm{g}^{(l)}\},\bm{A},\bm{b}^{(2)}),\\ \Sigma_{\text{SBM}} &= -\ln P(\bm{G},\bm{b}^{(1)}). \end{align} The description length~\cite{grunwald_minimum_2007} measures the amount of information necessary to encode both the data and the model parameters, and hence accounts both for the quality of fit and the model complexity. The above means that the model that is most likely \emph{a posteriori} is the one that \emph{most compresses} the data under its parametrization, and thus the criterion amounts to an implementation of Occam's razor, since it points to the best balance between model complexity and quality of fit. Before we employ the above criterion to select between both models considered, it is important to emphasize that the pure SBM is ``nested'' inside the SBM/TC, since the former amounts to the special case of the latter when there are zero triadic closure edges. In particular, if we use the more general parametrization described in Appendix~\ref{app:general}, in the situation with zero triadic edges, i.e., all $\{\bm{g}^{(l)}\}$ are empty graphs $\bm{g}_{\text{empty}}$ and $\bm{A}=\bm{G}$, we have \begin{equation} P(\bm{G},\{\bm{g}^{(l)}=\bm{g}_{\text{empty}}\},\bm{A}=\bm{G},\bm{b}) \geq \frac{P(\bm{G},\bm{b})}{N+1}. \end{equation} Therefore, in general, we must have \begin{multline} \underset{\{\bm{g}^{(l)}\},\bm{A},\bm{b}}{\max}\; \ln P(\bm{G},\{\bm{g}^{(l)}\},\bm{A},\bm{b}) \geq\\ \underset{\bm{b}}{\max}\; \ln P(\bm{G},\bm{b}) - \ln(N+1). \end{multline} Since the last logarithm term becomes negligible for large networks, typically the use of the SBM/TC can only reduce the description length of the data.
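Given the two description-length values, the posterior odds ratio is a one-liner; a small sketch (illustrative naming), using the convention that $\Lambda > 1$ favours the SBM/TC:

```python
import math

def posterior_odds(dl_sbm_tc, dl_sbm):
    """Posterior odds ratio Lambda = exp(-(Sigma_SBM/TC - Sigma_SBM)),
    assuming equal prior probabilities for the two hypotheses.
    Description lengths are in nats; Lambda > 1 favours the SBM/TC."""
    return math.exp(-(dl_sbm_tc - dl_sbm))
```

For instance, for the student cooperation network analysed later ($\Sigma_{\text{SBM/TC}}=935.1$ and $\Sigma_{\text{SBM}}=1145.6$ nats), the odds favour the SBM/TC by a factor of $\mathrm{e}^{210.5}$.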
Therefore, in situations where there is no evidence for triadic closure, both models should yield approximately the same description length value. In Fig.~\ref{fig:random} we show the description lengths for both models for the particular example discussed previously, where we can see that the SBM/TC provides a substantially better compression of the data, therefore yielding a more parsimonious and hence more probable account of how the data was generated --- which happens also to be the true one in this controlled setting. \begin{figure} \begin{tabular}{cc} \multicolumn{2}{c}{Inference using SBM}\\ \begin{overpic}[width=.5\columnwidth]{pp-overlap-modelsbm-N10000-B10-ak5.pdf} \put(0,0){(a)} \end{overpic}& \begin{overpic}[width=.5\columnwidth]{pp-Be-modelsbm-N10000-B10-ak5.pdf} \put(0,0){(b)} \end{overpic}\\[1em] \multicolumn{2}{c}{Inference using SBM/TC}\\ \begin{overpic}[width=.5\columnwidth]{pp-overlap-modeltriadic_sbm-N10000-B10-ak5.pdf} \put(0,0){(c)} \end{overpic}& \begin{overpic}[width=.5\columnwidth]{pp-Be-modeltriadic_sbm-N10000-B10-ak5.pdf} \put(0,0){(d)} \end{overpic} \end{tabular} \caption{\label{fig:recovery}Recovery of community structure for artificial networks generated from the PP model with added triadic closure, as described in the text, for networks with $N=10^4$ nodes, average degree $\avg{k}=5$, $B=10$ planted groups, and uniform triadic closure probability $p_u=p$ shown in the legend. Figures (a) and (b) correspond to inferences done using the SBM, and (c) and (d) with the SBM/TC model. All results were averaged over 10 network realizations.
The vertical dashed line marks the detectability transition value $c^*_+$, described in the text.} \end{figure} We proceed with a more systematic analysis of how triadic closure can interfere with community detection, using artificial networks generated by the SBM, more specifically the special case known as the planted partition model (PP), where the $B$ groups have equal size, and the number of edges between groups is given by \begin{equation} e_{rs} = 2E\left[\frac{c}{B}\delta_{rs} + \frac{1-c}{B(B-1)}(1-\delta_{rs})\right], \end{equation} where $c\in[0,1]$ determines the affinity between the (dis)assortative groups. For this model, we know that there are critical values \begin{equation} c^*_{\pm} = \frac{1}{B} \pm \frac{B-1}{B\sqrt{\avg{k}}}, \end{equation} such that if $c\in [c^*_{-}, c^*_{+}]$ then no algorithm can infer a partition that is correlated with the true one from a single network realization, as the network becomes infinitely large, $N\to\infty$~\cite{decelle_asymptotic_2011}. Starting from a network generated with the PP model, we include triadic closure edges via the global probability $p_u=p$ for every node in the network. Based on the resulting network, we attempt to recover the original communities, using the SBM and the SBM/TC model. A result of this analysis is shown in Fig.~\ref{fig:recovery}, where we compute the maximum overlap~\cite{peixoto_revealing_2020} $q\in[0,1]$ between the inferred partition $\hat\bm{b}$ and the true partition $\bm{b}$, defined as \begin{equation} q = \underset{\mu}{\max}\;\frac{1}{N}\sum_i\delta_{\mu(\hat b_i), b_i}, \end{equation} where $\mu(r)$ is a bijection between the group labels in $\hat\bm{b}$ and $\bm{b}$, as well as the effective number of inferred groups $B_e=\mathrm{e}^S$, where $S$ is the group label entropy \begin{equation}\label{eq:eB} S = -\sum_r \frac{n_r}{N}\ln \frac{n_r}{N}.
\end{equation} As can be seen in Fig.~\ref{fig:recovery}a, the presence of triadic closure edges can have a severe negative effect on the recovery of the original partitions when using the SBM. In Fig.~\ref{fig:recovery}b we see that the number of groups uncovered can be orders of magnitude larger than in the original partition, especially when the latter is not even detectable. This shows that the apparent communities that arise out of the formation of triangles substantially overshadow the underlying true community structure. The situation changes considerably when we use the SBM/TC instead, as shown in Fig.~\ref{fig:recovery}c. In this case, the presence of triadic closure has no noticeable effect on the detectability of the true community structure, and we obtain a recovery performance indistinguishable from the SBM in the case with no additional edges. As seen in Fig.~\ref{fig:recovery}d, the same is true for the number of groups inferred. These results point to a robust capacity of the SBM/TC model to reliably distinguish between actual community structure and the density fluctuations that result from triadic closures. \section{Empirical networks}\label{sec:empirical} \begin{figure} \begin{tabular}{P{.5\columnwidth}P{.5\columnwidth}} \includegraphics[width=.5\columnwidth]{student_cooperation-marginal-sbm-modesbm.pdf} & \includegraphics[width=.5\columnwidth]{student_cooperation-marginal-triadic_sbm-modecombined.pdf}\\ (a) SBM, $\Sigma_{\text{SBM}}=1145.6$ nats & (b) SBM/TC, $\Sigma_{\text{SBM/TC}}=935.1$ nats \end{tabular} \caption{Network of cooperation between students~\cite{Fire_2012}. (a) Fit of the SBM, yielding $B=9$ communities. (b) Fit of the SBM/TC, uncovering a single community, and triadic closure edges shown in red.
The thickness of the edges corresponds to the marginal probabilities $\pi_{ij}$ and $1-\pi_{ij}$ for the seminal and closure edges, respectively.\label{fig:student_cooperation}} \end{figure} \begin{figure*} \begin{tabular}{P{.33\textwidth}P{.33\textwidth}P{.33\textwidth}} \begin{overpic}[width=.33\textwidth]{add_health_comm26-marginal-sbm-modesbm.pdf} \put(95,55){\color{black}$\bm{\swarrow}$} \end{overpic} & \begin{overpic}[width=.33\textwidth]{add_health_comm26-marginal-triadic_sbm-modesbm.pdf} \put(0,0){\includegraphics[width=.33\textwidth]{add_health_comm26-marginal-triadic_sbm-modesbm-grade-lines.pdf}} \end{overpic}& \includegraphics[width=.33\textwidth]{add_health_comm26-marginal-triadic_sbm-modetriadic.pdf}\\ & Seminal edges & Triadic closure edges \\[.3em] (a) SBM ($\Sigma_{\text{SBM}}=8757.0$ nats) & \multicolumn{2}{c}{ (b) SBM/TC ($\Sigma_{\text{SBM/TC}}=8456.3$ nats)} \end{tabular} \caption{Network of friendships between high school students --- Adolescent health (comm26)~\cite{Moody_2001}. (a) Fit of the SBM, yielding $B=26$ communities. (b) Fit of the SBM/TC, uncovering $B=9$ communities, with seminal (black) and triadic closure (red) edges shown separately in the left and right figures. The thickness of the edges corresponds to the marginal probabilities $\pi_{ij}$ and $1-\pi_{ij}$ for the seminal and closure edges, respectively. The red dashed lines delineate the known divisions into grades, as numbered.\label{fig:add_health}} \end{figure*} \begin{figure*} \begin{tabular}{cc} \includegraphics[width=.5\textwidth]{netscience-marginal-sbm-modesbm.pdf} & \includegraphics[width=.5\textwidth]{netscience-marginal-triadic_sbm-modecombined.pdf}\\ (a) SBM, $\Sigma_{\text{SBM}}=3816.3$ nats & (b) SBM/TC, $\Sigma_{\text{SBM/TC}}=3009.9$ nats \end{tabular} \caption{Network of collaborations between network scientists~\cite{Newman_2006}. (a) Fit of the SBM, yielding $B=27$ communities.
(b) Fit of the SBM/TC, uncovering only $B=3$ groups, and triadic closure edges shown in red. The thickness of the edges corresponds to the marginal probabilities $\pi_{ij}$ and $1-\pi_{ij}$ for the seminal and closure edges, respectively.\label{fig:netscience}} \end{figure*} \begin{figure} \begin{tabular}{P{.5\columnwidth}P{.5\columnwidth}} \includegraphics[width=.5\columnwidth]{c-datastudent_cooperation-l12.pdf} & \includegraphics[width=.5\columnwidth]{c-dataadd_health_comm26-l6.pdf}\\ {\smaller (a) Cooperation between students } & {\smaller (b) Adolescent health (comm26)}\\ \includegraphics[width=.5\columnwidth]{c-datanetscience-l6.pdf} & \includegraphics[width=.5\columnwidth]{c-datafootball-l6.pdf}\\ {\smaller (c) Scientific collaborations in Network Science } & {\smaller(d) NCAA college football 2000} \end{tabular} \caption{Posterior predictive distributions of the clustering coefficient, as described in the text, for the SBM and SBM/TC as indicated in the legend, for different datasets. The vertical line shows the empirical value $C(\bm{G})$.\label{fig:c-pred}} \end{figure} \begin{figure} \includegraphics[width=.4\textwidth]{football-marginal-sbm-modesbm.pdf} \caption{Network of games between American college football teams (NCAA college football 2000)~\cite{Girvan_2002}. The node colors show the fit of the SBM and SBM/TC, both yielding the same $B=11$ communities. The SBM yields a description length of $\Sigma_{\text{SBM}}=1761.1$ nats and the SBM/TC, $\Sigma_{\text{SBM/TC}}=1767.6$ nats.\label{fig:football}} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{c-posterior-zscore-summary.pdf} \caption{Values of the z-score for the posterior predictive distributions of the clustering coefficient, as described in the text, for the SBM and SBM/TC as indicated in the legend, for different datasets. The solid horizontal lines mark the values $-2$ and $2$.
\label{fig:zscore}} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{cA-avg-summary.pdf} \caption{Values of the clustering coefficient (Eq.~\ref{eq:clustering}) computed for the original network, $C(\bm{G})$, and for the inferred seminal network, $C_S(\bm{G})$, averaged over the posterior distribution according to Eq.~\ref{eq:c_A}, as shown in the legend, for different datasets. \label{fig:c_A}} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{eB-avg-summary.pdf} \caption{Values of the effective number of inferred groups, as given by Eq.~\ref{eq:eB}, for the SBM and SBM/TC as indicated in the legend, for different datasets. \label{fig:eB}} \end{figure} We investigate the use of our method with a variety of empirical networks. We begin with a network of cooperation among students while doing their homework for a course at Ben-Gurion University~\cite{Fire_2012}. In Fig.~\ref{fig:student_cooperation}a we show the network and a fit of the DC-SBM, which finds 9 assortative communities. Based on this result --- and knowing that the partitions found by inferring the SBM as we do here point to statistically significant results that cannot be attributed to mere random fluctuations~\cite{peixoto_bayesian_2019} --- we would be tempted to posit that these divisions are uncovering latent social parameters that could explain the observed cooperation between these groups of students. However, if we employ instead the SBM/TC, we obtain the result shown in Fig.~\ref{fig:student_cooperation}b, which uncovers instead only a single group, and an abundance of triadic closure edges. This is not unlike the artificial example considered in Fig.~\ref{fig:random}, and points to a very different interpretation, namely, that there is no measurable \emph{a priori} predisposition for students to work with each other in groups, and that the resulting network stems instead from students choosing to work together if they already share a mutual partner.
Indeed, if we inspect the description lengths obtained with each model, we immediately recognize the SBM/TC as the most plausible explanation, and therefore we deem the community structure found by the SBM an unlikely one by comparison. We move now to another social network, this time of friendships between high school students~\cite{Moody_2001}. We show the results of our analysis in Fig.~\ref{fig:add_health}. Using the SBM we find $B=26$ groups, shown in Fig.~\ref{fig:add_health}a, which at first seems like a reasonable explanation for this network. Instead, with the SBM/TC we find only $B=9$ groups and a substantial amount of triadic closure edges, as seen in Fig.~\ref{fig:add_health}b. Unlike in the previous example, the SBM/TC still finds enough evidence for a substantial amount of community structure, although with fewer groups than the pure SBM. The groups found with the SBM/TC have a strong correlation with the student grades, as shown in Fig.~\ref{fig:add_health}b, except for the 11th and 12th grades, which seem to intermingle more, and for which the model finds evidence of more detailed internal social structures. This indicates that most of the subdivisions of the grades found by the pure SBM are in fact better explained by triadic closure edges, and that the \emph{a priori} friendship preferences within these grades are far more homogeneous than the SBM fit would lead us to conclude. One particularly striking feature of this analysis is that it imputes some seemingly clear communities entirely to triadic closure. A good example is the group highlighted with an arrow in Fig.~\ref{fig:add_health}a, formed by students in the 8th grade. According to the SBM/TC, this group has arisen due to the formation of triangles among an initially poorly connected subset of students, formed by all friends of a single student, rather than from an initial affinity between them. The SBM/TC explanation is again more plausible, due to its smaller description length.
We move now to an additional example, this time of collaborations between researchers in network science~\cite{Newman_2006}, shown in Fig.~\ref{fig:netscience}. For this network, the SBM finds $B=27$ communities. The interpretation here is the same as in previous analyses of the same network, namely that these communities are groups that tend to work together, with the occasional collaboration across groups. On the other hand, when we employ the SBM/TC, the difference this time is quite striking. Most of the community structure found with the pure SBM vanishes and is replaced by a substrate network with a substantial ``core-periphery'' mixing pattern formed of two main groups, where the ``core'' (blue nodes) is composed of the perceived initiators of the collaborations with the ``periphery'' (yellow nodes), which end up being connected in the final network simply by virtue of the all-to-all nature of multi-way collaborations, captured here by triadic closure edges. The core-periphery pattern is not perfect, as we observe seminal edges between nodes of every type, but most commonly these exist between core and periphery nodes, and among the core nodes themselves, which therefore seem to have a predisposition toward wider collaborations. The difference between the description lengths of both models is substantial, indicating that the SBM/TC interpretation is indeed far more plausible. Lastly, we consider the network of American football games between colleges during the fall of 2000~\cite{Girvan_2002}, shown in Fig.~\ref{fig:football}. For this network we observe an interesting result, namely that the SBM and SBM/TC yield the exact same inference, which means that the SBM/TC assigns a very low probability to triadic closure edges.
Although we might expect this to occur for a network that has very few or no triangles, and therefore substantial evidence against triadic closure, this is not the case for the particular network in question, which has in fact an abundance of triangles, in addition to clear assortative communities. The reason is that, in this particular case, the SBM is fully capable of accounting for the triangles observed, which can therefore be characterized as a ``side-effect'' of the homophily between nodes of the same group, instead of an excess that needs additional explanation. We will revisit this particular case in the following, from a different angle. One natural criticism of the SBM as a useful hypothesis for real networks, stylized as it clearly is, is that it assumes conditional independence for the placement of every edge. One consequence of this is that the probability of observing a spontaneous triadic closure edge will scale as $O(B/N)$, for a network with $N$ nodes and $B$ groups, assuming the group affinities are uniform for all groups. Therefore, if $B \ll N$, we should not expect any abundance of triangles, which is at odds with what we observe in many empirical datasets. One problem with this logic is that we do not know \emph{a priori} the precise relationship between $B$ and $N$ for finite empirical networks, and therefore we cannot rule out the SBM hypothesis based simply on an observed abundance of triangles. Auspiciously, with the SBM/TC at hand, we are in the perfect position to evaluate the SBM in that regard, and to understand how many of the observed triangles can be attributed to an incidental link placement due to community structure, or whether they are instead better explained by explicit triadic closure edges.
A common way of quantifying the amount of triangles in a network $\bm{G}$ is via its clustering coefficient $C(\bm{G})\in[0,1]$, which gives the fraction of triads in the network that are closed into a triangle, and is given by \begin{equation}\label{eq:clustering} C(\bm{G}) = \frac{\sum_{ijk}G_{ij}G_{jk}G_{ki}}{\sum_ik_i(k_i-1)}. \end{equation} A meaningful way to evaluate whether a given model $P(\bm{G}|\bm\theta)$ with parameters $\bm\theta$ can capture what is seen in the data is to compute the posterior predictive distribution, \begin{equation} P(C|\bm{G}) = \sum_{\bm{G}'}\delta(C-C(\bm{G}'))\sum_{\bm\theta}P(\bm{G}'|\bm\theta)P(\bm\theta|\bm{G}). \end{equation} This involves sampling parameters $\bm\theta$ from the posterior $P(\bm\theta|\bm{G})$, generating new networks $\bm{G}'$ from the model $P(\bm{G}'|\bm\theta)$, and obtaining the resulting population of $C(\bm{G}')$ values, which can then be compared to the observed value $C(\bm{G})$; in this way we can determine if the model used is capable of capturing this aspect of the data. In Fig.~\ref{fig:c-pred} we show the results of this comparison for the SBM and SBM/TC (in Appendix~\ref{app:predictive} we give more details about how $\bm\theta$ should be chosen in each case) using four datasets. For three of the four networks we observe what one might expect: although the SBM is capable of accounting for a substantial amount of triangles (far more than one would expect by naively assuming $B\ll N$), it falls short of explaining what is actually seen in the data. The SBM/TC, on the other hand, accounts for a range of possibilities that comfortably includes what is observed in the data, with a sufficiently high probability. For the remaining network in Fig.~\ref{fig:c-pred}d, NCAA college football 2000, as before, we observe a different picture. Namely, both models produce posterior predictive distributions that are essentially identical, and fully compatible with what is seen in the data.
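The global clustering coefficient defined above can be computed directly from the adjacency matrix; a minimal pure-Python sketch (illustrative naming):

```python
def clustering_coefficient(adj):
    """Global clustering coefficient: the number of closed ordered
    triads, sum_ijk G_ij G_jk G_ki, divided by the total number of
    ordered triads, sum_i k_i (k_i - 1). `adj` is a symmetric 0/1
    adjacency matrix (list of lists) with a zero diagonal."""
    n = len(adj)
    closed = sum(adj[i][j] * adj[j][k] * adj[k][i]
                 for i in range(n) for j in range(n) for k in range(n))
    triads = sum(sum(row) * (sum(row) - 1) for row in adj)
    return closed / triads if triads else 0.0
```

A triangle yields $C=1$ and a path of three nodes yields $C=0$, as expected.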
Therefore we can say with a fair amount of confidence that the fairly high clustering coefficient observed for this network can in principle be attributed to community structure alone, rather than triadic closure, contradicting the intuition obtained from the asymptotic case where $B\ll N$, which is not applicable to this network. We extend the previous analysis to a larger set of empirical networks, as shown in Fig.~\ref{fig:zscore}, by summarizing the compatibility of the posterior predictive distribution via the z-score, \begin{equation} z = \frac{C(\bm{G})-\avg{C}}{\sigma_C}, \end{equation} where $\avg{C}$ and $\sigma_C$ are the mean and standard deviation of the posterior predictive distribution. As we can see, there are a number of networks for which the z-score values lie in the plausible interval $[-2,2]$ for both models, but there is a much larger fraction of the data for which the values for the SBM point to a decisive incompatibility, whereas the SBM/TC yields credible values more systematically. We can further exploit the decomposition that the SBM/TC provides by quantifying precisely, for any given network, how much of the observed clustering can be attributed to triadic closure directly, or to community structure indirectly. We can do so by computing the mean clustering coefficient of the substrate seminal network from the posterior distribution, \begin{equation}\label{eq:c_A} C_S(\bm{G}) = \sum_{\bm{A},\bm{g},\bm{b}}C(\bm{A})P(\bm{A},\bm{g},\bm{b}|\bm{G}). \end{equation} We can then compare this value with the coefficient for the observed network $C(\bm{G})$, as we show in Fig.~\ref{fig:c_A}. We identify a variety of scenarios, including situations where the seminal network (and hence the community structure) accounts for the majority of the observed clustering, but most commonly we observe that a substantial fraction can be attributed to more direct triadic closure. 
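Computing the z-score introduced below from samples of the posterior predictive distribution is straightforward; a short sketch (illustrative naming, using the sample standard deviation):

```python
import statistics

def predictive_zscore(observed, predictive_samples):
    """z-score of the observed clustering coefficient against the
    posterior predictive distribution; values inside [-2, 2] are
    read as a plausible fit."""
    mean = statistics.mean(predictive_samples)
    sd = statistics.stdev(predictive_samples)
    return (observed - mean) / sd
```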
Nevertheless, in many cases the values of $C_S(\bm{G})$ do not drop to negligible values, showing that the presence of triangles cannot be wholly attributed to either mechanism in these cases. Indeed, this variability seems to indicate that the mere presence of a high or low density of triangles, as captured by the clustering coefficient, cannot be used by itself to evaluate whether triadic closure or community structure is the leading underlying mechanism of network formation. Another aspect of the suitability of triadic closure as a more plausible network model is that it tends to come together with a less pronounced inferred community structure, since part of the density heterogeneity found is attributed to the former mechanism, rather than the latter. In Fig.~\ref{fig:eB} we characterize this difference by the effective number of groups found with both models. We see that the discrepancy between them is once again quite varied: in some cases it can be quite small, indicating that triadic closure plays a minor role, while in other cases it can be quite extreme, indicating the dominant role that triadic closure has in the respective networks. Overall, what we seem to extract from these empirical networks is that, in the majority of cases (though not all), the observed structure is better explained by a heterogeneous combination of underlying mixing patterns with a further distortion by an additional tendency of forming triangles. The precise balance between these two components varies considerably in general, and needs to be assessed for each individual network.
\section{Edge prediction}\label{sec:prediction} \begin{figure} \begin{tabular}{c} Student cooperation\\ \includegraphics[width=\columnwidth]{student_cooperation-predict-f0.95.pdf}\\ Scientific collaborations in Network Science\\ \includegraphics[width=\columnwidth]{netscience-predict-f0.95.pdf}\\ Adolescent health (comm26)\\ \includegraphics[width=\columnwidth]{add_health_comm26-predict-f0.95.pdf}\\ NCAA college football 2000\\ \includegraphics[width=\columnwidth]{football-predict-f0.95.pdf} \end{tabular} \caption{Distributions of Precision and Recall values, according to the SBM and SBM/TC model, for four empirical networks, and a fraction $f=0.05$ of omitted edges and corresponding number of omitted non-edges. The results were obtained for $200$ different realizations of missing edges and non-edges.\label{fig:prediction}} \end{figure} As with every kind of empirical assessment, network data is subject to measurement errors or omissions. A common use of network models is to predict such erroneous and missing information from what is more precisely known~\cite{clauset_hierarchical_2008, guimera_missing_2009}. The SBM has been successfully used as such a model~\cite{guimera_missing_2009, peixoto_reconstructing_2018}, since the latent group assignments and the affinities between them can be learned from partial network information, which in turn can be used to infer what has been distorted or left unobserved. Another common approach to edge prediction consists of attributing a higher probability to a potential edge if it happens to form a triangle~\cite{adamic_friends_2003}. As we have been discussing in this work, these two properties --- group affinity and triadic closure --- point to related but distinct processes of edge formation, and approaches of edge prediction that rely exclusively on either one will be maximally efficient only if it happens to be the dominant underlying mechanism, which, as we have seen in the last section, is typically not the case. 
However, with the SBM/TC model we have introduced, it should be possible to accommodate both mechanisms at the same time, and in this way improve edge prediction in more realistic settings. In the following, we show how this can be done, and demonstrate it with a few examples. The scenario we consider is the general one presented in Ref.~\cite{peixoto_reconstructing_2018}, where we make $n_{ij}$ measurements of node pair $(i,j)$ and record the number of times $x_{ij}$ that an edge has been observed. Based on this data, we infer the underlying network $\bm{G}$ according to the posterior distribution \begin{equation} P(\bm{G} | \bm{n}, \bm{x}) = \frac{P(\bm{x}|\bm{G},\bm{n})P(\bm{G})}{P(\bm{x}|\bm{n})}. \end{equation} The measurement model corresponds to a situation where the probabilities of observing missing and spurious edges are unknown, which amounts to the marginal probability~\cite{peixoto_reconstructing_2018} \begin{equation} P(\bm{x}|\bm{G},\bm{n}) = {\mathcal{E} \choose \mathcal{T}}^{-1}\frac{1}{\mathcal{E}+1}{\mathcal{M}-\mathcal{E} \choose \mathcal{X}-\mathcal{T}}^{-1}\frac{1}{\mathcal{M}-\mathcal{E}+1}, \end{equation} where we have \begin{align} \mathcal{M}&=\sum_{i<j}n_{ij}, & \mathcal{X}&=\sum_{i<j}x_{ij},\\ \mathcal{E}&=\sum_{i<j}n_{ij}G_{ij}, & \mathcal{T}&=\sum_{i<j}x_{ij}G_{ij}. \end{align} The network model comes into play via the prior $P(\bm{G})$. For the SBM/TC model this is \begin{equation} P(\bm{G}) = \sum_{\{\bm{g}^{(l)}\}, \bm{A}, \bm{b}}P(\bm{G}, \{\bm{g}^{(l)}\}, \bm{A}, \bm{b}). \end{equation} Once more, we avoid an intractable computation by sampling instead from a joint posterior with the model parameters, i.e.
\begin{equation} P(\bm{G}, \{\bm{g}^{(l)}\}, \bm{A}, \bm{b} | \bm{n}, \bm{x}) = \frac{P(\bm{x}|\bm{G},\bm{n})P(\bm{G}, \{\bm{g}^{(l)}\}, \bm{A}, \bm{b})}{P(\bm{x}|\bm{n})}, \end{equation} so that the desired posterior distribution can be obtained by marginalization, \begin{equation} P(\bm{G} | \bm{n}, \bm{x}) = \sum_{\{\bm{g}^{(l)}\}, \bm{A}, \bm{b}}P(\bm{G}, \{\bm{g}^{(l)}\}, \bm{A}, \bm{b} | \bm{n}, \bm{x}). \end{equation} In order to perform our comparison, we consider the following particular setup for the data $(\bm n,\bm x)$. Given a true network $\bm{G}$ we select a random subset $\bm{P}_t$ of the edges (``true positives''), and an equal-sized random subset $\bm{N}_t$ of ``non-edges'' (``true negatives''), i.e., node pairs $(i,j)$ for which $G_{ij}=0$, such that $|\bm{P}_t|=|\bm{N}_t|=f E$, where $f\in[0,1]$ is a free parameter and $E$ is the total number of edges. We then set $n_{ij}\to\infty$ for all node pairs neither in $\bm{P}_t$ nor in $\bm{N}_t$, with $x_{ij}=n_{ij}$ if $G_{ij}=1$ and $x_{ij}=0$ otherwise --- these are parts of the network about which we are perfectly certain. For the node pairs in $\bm{P}_t$ and $\bm{N}_t$ we set $n_{ij}=x_{ij}=0$, meaning we lack any data about them. We then compute the posterior marginal probability \begin{equation} p_{ij} = \sum_{\bm{G}}G_{ij}P(\bm{G}|\bm n,\bm x), \end{equation} and we use it to evaluate the quality of the reconstruction. We do so by computing the Precision and Recall, defined as \begin{align} \text{Precision} &= \frac{\sum_{(i,j)\in \bm{P}_t}p_{ij}}{\sum_{(i,j)\in \bm{P}_t\cup\bm{N}_t}p_{ij}},\\ \text{Recall} &= \frac{\sum_{(i,j)\in \bm{P}_t}p_{ij}}{|\bm{P}_t|}, \end{align} which measure the fraction of correctly predicted edges relative to the total number of predicted edges and to the total number of true edges, respectively. In Fig.~\ref{fig:prediction} we show the results of the above analysis for some of the networks studied previously, using both the SBM/TC model and the pure SBM.
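The Precision and Recall defined above are simple sums over the posterior marginal probabilities of the held-out pairs; a minimal sketch (illustrative naming):

```python
def precision_recall(p, true_edges, true_nonedges):
    """Precision and Recall computed from the posterior marginal
    probabilities p[(i,j)] of the held-out node pairs: Precision is
    the probability mass on true edges relative to all held-out
    pairs, Recall is that mass relative to the number of true edges."""
    tp = sum(p[e] for e in true_edges)
    total = tp + sum(p[e] for e in true_nonedges)
    precision = tp / total if total > 0 else 0.0
    recall = tp / len(true_edges)
    return precision, recall
```

A perfect reconstruction, placing probability one on every true edge and zero elsewhere, yields Precision = Recall = 1.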
For most of them, the SBM/TC model yields a superior predictive performance, sometimes substantially so. This shows that while community detection via the SBM can to some extent detect the patterns induced by triadic closure, the more explicit SBM/TC model does a better job at this, corroborating the model selection arguments we have used previously. For the network of games between American college football teams, the situation is once again different, and we observe indistinguishable results between the SBM and SBM/TC. For this network, as the previous analysis has established, triadic closure seems to play an insignificant role, despite the relative abundance of triangles. As a consequence, in this case the SBM/TC model offers no advantage in edge prediction, but importantly, it does not degrade it either. In a recent work, Ghasemian \textit{et al}~\cite{ghasemian_stacking_2020} performed a large-scale analysis of over two hundred edge prediction methods on over five hundred networks belonging to various domains. Although the overall conclusion of that work was that no single method dominates on every dataset, the predictive performance of the different methods was far from uniform, with the method above based on the SBM providing the single best performance overall.\footnote{Ghasemian \textit{et al}~\cite{ghasemian_stacking_2020} considered only a simplified version of the method described, where only the best-scoring partition was used, instead of an average over the posterior distribution. Furthermore, they have used only the version of the SBM with noninformative priors, which is known to underfit, as opposed to the nested SBM~\cite{peixoto_hierarchical_2014, peixoto_nonparametric_2017} which removes this problem.
Accounting for both of these issues has been shown before to improve edge prediction systematically~\cite{valles-catala_consistencies_2018}, and could potentially have pushed the result of the analysis in Ref.~\cite{ghasemian_stacking_2020} even more in favor of the SBM approach.} Interestingly, the situations where the SBM approach yielded inferior performance were precisely for social networks, for which some predictors based on triadic closure performed better. Although our results above fall short of a thorough and systematic analysis of the many domains of network data, since we consider only a handful of networks, they nevertheless seem to give a good indication that combining group affinity with triadic closure could potentially eliminate this shortcoming for this particular class of network data. \section{Conclusion}\label{sec:conclusion} We have presented a generative model and corresponding inference scheme that is capable of differentiating community structure from triadic closure in empirical networks. We have shown that although these features are generically conflated in traditional network analysis, our method can pick them apart, allowing us to tell whether an observed abundance of triangles is a byproduct of an underlying homophily between nodes, or whether it arises out of a local property of transitivity. Likewise, we have also shown how our method can evade the detection of spurious communities, which are not due to homophily, but arise instead simply out of a random formation of triangles. Our approach shows how local and global (or mesoscale) generative processes can be combined into a single model. Since it contains a mixture of both mechanisms, our method is able to decompose them for a given observed network according to their inferred contributions. 
By employing our method on several empirical networks, we were able to demonstrate a wide variety of scenarios, in which a large number of triangles is caused predominantly by triadic closure, by a mixture of community structure and triadic closure, or by community structure alone. These findings seem to indicate that local and global network properties tend to mix in nontrivial ways, and we should refrain from automatically concluding that an observed local property (e.g. large number of triangles) cannot have a global cause (e.g. group homophily), and likewise an observed global property (e.g. community structure) cannot have a purely local cause (e.g. triadic closure). Several authors have previously shown that triadic closure can induce the formation of community structure in networks~\cite{foster_clustering_2011,foster_communities_2010,lopez_transitions_2020,bianconi_triadic_2014,wharrie_micro-_2019, asikainen_cumulative_2020}. This introduces a problem of interpretation for community detection methods that do not account for this, which, to the best of our knowledge, happens to be the vast majority of them. This is true also for inference methods based on the SBM which, although not susceptible to finding spurious communities formed by a fully random placement of edges~\cite{guimera_modularity_2004} (unlike non-inferential methods, which tend to overfit~\cite{ghasemian_evaluating_2019}), cannot evade those arising from triadic closure~\cite{wharrie_micro-_2019}. Our approach provides a solution to this interpretation problem, allowing us to reliably rule out triadic closure when identifying communities in networks. We have also shown how incorporating triadic closure together with community structure can improve edge prediction, without degrading the performance in situations where it is not present. 
This further demonstrates the usefulness of approaches that model networks at multiple scales, and points to a general way of systematically improving our understanding of network data.
\section{Introduction}\label{sec:Intro} A wireless ad-hoc network is a self-organized decentralized network that consists of independent radio transceivers (transmitter/receiver) and does not rely on any existing infrastructure. The network nodes (stations) communicate over radio channels. Each node broadcasts a signal over a fixed range and any node within this transmission range receives the signal. Communication with nodes outside the transmission range is done using multi-hops, i.e., intermediate nodes pass the message forward and form a communication path from the source node to the desired target node. The twenty-first century witnesses widespread deployment of wireless networks for professional and private applications. The field of wireless communication continues to experience unprecedented market growth. For a comprehensive survey of this field see~\cite{pahlavan2005}. Let $S$ be a set of points in the $d$-dimensional Euclidean space representing radio stations. A \emph{range assignment} for $S$ is a function $\rho : S \rightarrow \mathds{R}^+$ that assigns each point a transmission range (radius). The cost of a range assignment, representing the power consumption of the network, is defined as $cost(\rho) = \sum_{v \in S}(\rho(v))^\alpha$ for some real constant $\alpha \geq 1$, where $\alpha$ varies between $1$ and values higher than $6$, depending on different environmental factors~\cite{pahlavan2005}. A range assignment $\rho$ induces a \emph{directed communication graph} $G_{\rho}=(S,E_{\rho})$, where $E_{\rho}=\{(u,v):\rho(u) \geq |uv|\}$ and $|uv|$ denotes the Euclidean distance between $u$ and $v$. A range assignment $\rho$ is \emph{valid} if the induced (communication) graph $G_{\rho}$ is strongly connected. For ease of presentation, throughout the paper we refer to the terms `assigning a range $|uv|$ to a point $u \in S$' and `adding a directed edge $(u,v)$' as equivalent. 
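As a minimal sketch of these definitions (illustrative Python, not part of the paper), one can build the induced graph $G_\rho$, test validity via strong connectivity, and evaluate the cost for a hypothetical $\alpha=2$; the points and ranges below are made up:

```python
from math import dist

def induced_edges(points, rho):
    # E_rho = {(u, v) : rho(u) >= |uv|}, with Euclidean distances.
    n = len(points)
    return {(u, v) for u in range(n) for v in range(n)
            if u != v and rho[u] >= dist(points[u], points[v])}

def is_valid(points, rho):
    # A range assignment is valid iff G_rho is strongly connected.
    edges = induced_edges(points, rho)
    n = len(points)
    def reaches_all(src):
        seen, stack = {src}, [src]
        while stack:
            u = stack.pop()
            for a, b in edges:
                if a == u and b not in seen:
                    seen.add(b)
                    stack.append(b)
        return len(seen) == n
    return all(reaches_all(s) for s in range(n))

def cost(rho, alpha=2.0):
    # Power consumption: sum of rho(v)^alpha over all stations.
    return sum(r ** alpha for r in rho)

pts = [(0.0, 0.0), (1.0, 0.0), (3.0, 0.0)]   # made-up stations
rho = [1.0, 2.0, 2.0]                        # each point covers a neighbour
# is_valid(pts, rho) -> True; cost(rho) -> 9.0
```

Shrinking any range in `rho` above (e.g. giving the middle station range $1$) disconnects the induced graph, illustrating why validity constrains the minimization.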
We consider the $d$-D \textsc{Minimum Cost Range Assignment} (\textsc{MinRange}) problem, that takes as input a set $S$ of $n$ points in $\mathds{R}^d$, and whose objective is finding a valid \emph{range assignment} for $S$ of minimum cost. This problem has been considered extensively in various settings, for different values of $d$ and $\alpha$, with additional requirements and modifications. Some of these works are mentioned in this section. In \cite{Kirousis}, Kirousis et al. considered the $1$D \textsc{MinRange}\ problem (the radio stations are placed arbitrarily on a line) and showed an $O(n^4)$-time algorithm which computes an optimal solution for the problem. Later, Das et al.~\cite{DasGN07} improved the running time to $O(n^3)$. Here, we propose an $O(n^2)$-time exact algorithm, which improves the running time of the best known algorithm by a factor of $n$ without increasing the space complexity. The novelty of our method lies in separating the range assignment into two, \emph{left} and \emph{right}, range assignments (elaborated in Section~\ref{sec:Linear}). This counterintuitive approach allows us to achieve the aforementioned result and, moreover, to compute an optimal range assignment in $1$D with the additional requirement that the induced graph is a $t$-spanner, for a given $t \geq 1$. A directed graph $G=(S,E)$ is a $t$-spanner for a set $S$, if for every two points $u,v \in S$ there exists a path in $G$ from $u$ to $v$ of length at most $t|uv|$. The importance of avoiding flooding the network when routing was one of the reasons that led researchers to consider the combination of range assignment and $t$-spanners, e.g.,~\cite{KarimRCK09,Shpungin,WYLX02,WangLi}, as well as the combination of range assignment and hop-spanners, e.g.,~\cite{Clementi00,Kirousis}. While bounded-hop spanners bound the number of intermediate nodes forwarding a message, $t$-spanners bound the relative distance a message is forwarded. 
For the $1$D bounded-hop range assignment problem, Clementi et al.~\cite{Clementi00} showed a 2-approximation algorithm whose running time is $O(hn^3)$. To the best of our knowledge, we are the first to show an algorithm that computes an optimal solution for the range assignment with the additional requirement that the induced graph is a $t$-spanner. While the $1$D version of the \textsc{MinRange}\ problem can be solved optimally, for any $d \geq 2$ and $\alpha \geq 1$, it has been proven to be $NP$-hard (in~\cite{Kirousis} for $d\geq3$ and $1 \leq \alpha < 2$ and later in~\cite{Clementi} for $d \geq 2$ and $\alpha > 1$). However, some versions can be approximated within a constant factor. For $\alpha = 2$ and any $d \geq 2$ Kirousis et al.~\cite{Kirousis} gave a 2-approximation algorithm based on the minimum spanning tree (although they addressed the case of $d\in \{2,3\}$ their result holds for any $d \geq 2$). The best known approximation ratio, $1.5$, is for the case $\alpha=1$~\cite{Ambuhl05}. We show a new approximation algorithm for this case\footnote{Values of $\alpha$ smaller than $2$ correspond to areas such as corridors and large open indoor areas~\cite{pahlavan2005}.} with an improved approximation ratio of $1.5-\epsilon$, for a suitable constant $\epsilon>0$. We do not focus on increasing $\epsilon$ but rather on showing that there exists an approximation ratio for this problem that is strictly less than $1.5$. This is in contrast to classic problems, such as metric TSP and strongly connected sub-graph problems, for which the $1.5$ ratio bound has not yet been breached. \section{Minimum Cost Range Assignment in 1D}\label{sec:Linear} In the $1$D version of the \textsc{MinRange}\ problem, the input set $S=\{v_1,...,v_n\}$ consists of points located on a line. For simplicity, we assume that the line is horizontal and for every $i<j$, $v_i$ is to the left of $v_j$. 
Given two indices $1 \leq i < j \leq n$, we denote by $S_{i,j}$ the subset $\{v_i,...,v_j\}\subseteq S$. We present two polynomial-time algorithms for finding optimal range assignments, the first, in Section~\ref{sec:LinMinCostRange}, for the basic $1$D \textsc{MinRange} \ problem, and the second, in Section~\ref{subsec:spanner}, subject to the additional requirement that the induced graph is a $t$-spanner (the $1$D \textsc{MinRangeSpanner} \ problem). Our new approach for solving these problems requires introducing a variant of the \emph{range assignment}. Instead of assigning each point in $S$ a radius, we assign each point two directional ranges, \emph{left range assignment}, $\rho^l : S \rightarrow \mathds{R}^+$, and \emph{right range assignment}, $\rho^r : S \rightarrow \mathds{R}^+$. A pair of assignments $(\rho^l,\rho^r)$ is called a \emph{left-right assignment}. Assigning a point $v \in S$ a left range $\rho^l(v)$ and a right range $\rho^r(v)$ implies that in the induced graph, $G_{\rho^{lr}}$, $v$ can reach every point to its left up to distance $\rho^l(v)$ and every point to its right up to distance $\rho^r(v)$. That is, $G_{\rho^{lr}}$ contains the directed edge $(v_i,v_j)$ iff one of the following holds: (i) $i<j$ and $|v_iv_j| \leq \rho^r(v_i)$, or \ (ii) $j<i$ and $|v_iv_j| \leq \rho^l(v_i)$. The cost of an assignment $(\rho^l,\rho^r)$, is defined as $cost(\rho^l,\rho^r) = \sum_{v \in S}(\max\{\rho^l(v),\rho^r(v)\})^\alpha$. Our algorithms find a \emph{left-right assignment} of minimum cost that can be converted into a \emph{range assignment} $\rho$ with the same cost by assigning each point $v \in S$ a range $\rho(v)=\max\{\rho^l(v),\rho^r(v)\}$. Note that any valid \emph{range assignment} for $S$ can be converted to a \emph{left-right assignment} with the same cost, by assigning every point $v \in S$, $\rho^l(v)=\rho^r(v)=\rho(v)$. 
To be more precise, either $\rho^l(v)$ or $\rho^r(v)$ should be reduced to $|vu|$, where $u$ is the farthest point in the directional range (for Lemma~\ref{lem:range_ij} to hold). Therefore, a minimum cost \emph{left-right assignment}, implies a minimum cost \emph{range assignment}. In addition to the $cost$ function, we define $cost'(\rho^l,\rho^r) = \sum_{v \in S}((\rho^l(v))^\alpha+(\rho^r(v))^\alpha),$ and refine the term of \emph{optimal solution} to include only solutions that minimize $cost'(\rho^l,\rho^r)$ among all solutions, $(\rho^l,\rho^r)$, with minimum $cost(\rho^l,\rho^r)$. \subsection{An Optimal Algorithm for the 1D \textsc{MinRange} \ Problem}\label{sec:LinMinCostRange} Das et al.~\cite{DasGN07} state three basic lemmas regarding properties of an optimal range assignment. The following three lemmas are adjusted versions of these lemmas for a \emph{left-right assignment}. \begin{lemma}\label{lem:range_ij} In an optimal solution $(\rho^l, \rho^r)$ for every $v_i \in S$, either $\rho^l(v_i)=0$ or $\rho^l(v_i)=|v_i v_j|$ and similarly, either $\rho^r(v_i)=0$ or $\rho^r(v_i)=|v_i v_k|$ for some $j \leq i \leq k$. \end{lemma} \begin{lemma}\label{lem:range_line} Given three indices $1\leq i < j < k \leq n$, consider an optimal solution for $S_{i,k}$, denoted by $(\rho^l, \rho^r)$, subject to the condition that $\rho^l(v_j)\geq |v_i v_j|$ and $\rho^r(v_j) \geq |v_j v_k|$, then, \begin{itemize} \item for all $m=i,...,j-1$, $\rho^r(v_m)=|v_m v_{m+1}|$ and $\rho^l(v_m)=0$; and \item for all $m=j+1,...,k$, $\rho^l(v_m)=|v_m v_{m-1}|$ and $\rho^r(v_m)=0$. \end{itemize} \end{lemma} \begin{lemma}\label{lem:range_12} In an optimal solution $(\rho^l, \rho^r)$, $\rho^l(v_1)=0$ and $\rho^r(v_1)=|v_1v_2|$. \end{lemma} Lemma~\ref{lem:range_ij} allows us to simplify the notation $\rho^x(v_i)=|v_i v_j|$ for $x \in \{l,r\}$ and $1 \leq i,j \leq n$, and write $\rho^x(i)=j$ for short. We solve the \textsc{MinRange}\ problem using dynamic programming. 
Given $1 \leq i <n$, we denote by $OPT(i)$ the cost of an optimal solution for the sub-problem defined by the input $S_{i,n}$, subject to the condition that $\rho^r(i)=i+1$. Note that the cost of an optimal solution for the whole problem is $OPT(1)$. In Section~\ref{subsec:cubic} we present an algorithm with $O(n^3)$ running time and $O(n^2)$ space (the same time and space as in~\cite{DasGN07}). Then, in Section~\ref{subsec:Quad} we reduce the running time to $O(n^2)$. \subsubsection{A Cubic-Time Algorithm}\label{subsec:cubic} Algorithm~\textsc{1DMinRA}\ (Algorithm~\ref{alg}) applies dynamic programming to compute the values $OPT(i)$ for every $1\leq i < n$ and store them in a table, $T$. Finally, it outputs the value $T[1]$. In our computation we use a $2$-dimensional matrix, \emph{Sum}, storing for every $1\leq i<j \leq n$ the sum $\sum_{m=i}^{j-1} |v_m v_{m+1}|^\alpha$. \begin{algorithm}[htp] \caption{\textsc{1DMinRA}($S$)}\label{alg} \begin{algorithmic} \FOR {$i = n-1$ \textbf{downto} $1$} \FOR {$j = n$ \textbf{downto} $i+1$} \STATE \[ \text{Sum}[i,j] \gets \left\{ \begin{array}{ll} |v_{n-1} v_n|^\alpha &\: ,i=n-1 \\ \text{Sum} [i+1,n]+|v_i v_{i+1}|^\alpha &\: ,j=n \\ \text{Sum} [i,j+1]-|v_j v_{j+1}|^\alpha &\: ,\text{otherwise} \end{array} \right. \] \ENDFOR \ENDFOR \FOR {$i = n-1$ \textbf{downto} $1$} \STATE \[ \text{T}[i] \gets \left\{ \begin{array}{cl} 2|v_i v_{i+1}|^\alpha &\: ,i=n-1 \\ \begin{aligned} \min_{ \substack{i < k < n \\ k < k' \leq n}} \{& \text{Sum}[i,k'-1] + \text{T}[k'-1] - |v_{k'-1} v_{k'}|^\alpha\\[-2mm] & + \max\{ |v_i v_k|^\alpha, |v_k v_{k'}|^\alpha \}\} \end{aligned} &\: ,\text{otherwise} \end{array} \right. \] \ENDFOR \RETURN $\text{T}[1]$ \end{algorithmic} \end{algorithm} While the table $T$ maintains only the costs of the solutions, the optimal assignment can be easily retrieved by backtracking the cells that led to the optimal cost and assigning the associated ranges (described in the proof of Lemma~\ref{lem:OPT(i)}).\\ 
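To make the computation of $T[i]$ concrete, the following memoised sketch computes the optimal cost (0-based indices; illustrative code, not the paper's implementation). It returns only the cost, omitting the backtracking step, and for clarity it recomputes the prefix sums each time, which costs an extra factor of $n$ that the \emph{Sum} matrix avoids:

```python
# Top-down sketch of the dynamic program behind 1DMinRA (0-based indices).
from functools import lru_cache

def min_range_cost_1d(xs, alpha=1.0):
    xs = sorted(xs)
    n = len(xs)
    d = [abs(xs[m + 1] - xs[m]) ** alpha for m in range(n - 1)]  # edge costs
    D = lambda a, b: abs(xs[b] - xs[a]) ** alpha

    @lru_cache(maxsize=None)
    def opt(i):
        # OPT(i): optimal cost for xs[i:], subject to rho_r(i) = i + 1.
        if i == n - 2:
            return 2 * d[n - 2]
        best = float("inf")
        for k in range(i + 1, n - 1):        # i < k <= n-2  (0-based)
            for kp in range(k + 1, n):       # k < k' <= n-1 (0-based)
                c = (sum(d[i:kp - 1]) + opt(kp - 1) - d[kp - 1]
                     + max(D(i, k), D(k, kp)))
                best = min(best, c)
        return best

    return opt(0)

# e.g. min_range_cost_1d([0, 1, 3]) == 5.0, realized by rho = (1, 2, 2)
```

The base case and the min over $(k,k')$ mirror the two branches of the $T[i]$ assignment in Algorithm~\ref{alg}.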
\paragraph*{Correctness.} We prove that for every $1 \leq i \leq n-1$, the value assigned to cell $T[i]$ by the algorithm equals $OPT(i)$. Trivially, $OPT(n-1)$ indeed equals $2|v_{n-1} v_n|^\alpha$. Assume that at the start of the $i$-th iteration $T[i']=OPT(i')$ holds for every $i<i'<n$; the correctness of the computation done during the $i$-th iteration is given in Lemma~\ref{lem:OPT(i)}. \begin{lemma}\label{lem:OPT(i)} Given an index $i$ with $1 \leq i \leq n-1$, \[ OPT(i) = \min_{ \substack{i < k < n \\ k < k' \leq n}} \left\{ \sum_{m=i}^{k'-2} |v_m v_{m+1}|^\alpha + OPT(k'-1) - |v_{k'-1} v_{k'}|^\alpha + \max\{ |v_i v_k|^\alpha, |v_k v_{k'}|^\alpha \} \right\}. \] \end{lemma} \begin{proof} Let $X_i$ denote the right-hand side of the equation; we prove $OPT(i) = X_i$. \begin{description}[topsep=2mm, itemsep=2mm, leftmargin=2.5mm] \item [$OPT(i) \leq X_i$:] \ We show that all costs that appear as $\min$ function arguments in $X_i$ correspond to valid assignments and thus infer, by the optimality of $OPT(i)$, that the above inequality holds. Consider an argument with parameters $k$ and $k'$. We associate it with an assignment $(\rho^l,\rho^r)$ defined as follows (see Fig.~\ref{fig:OPT(i)}(a)). For $m \geq k'-1 $ the assignment is inductively defined by $OPT(k'-1)$. For every $i \leq m < k$, $\rho^l(m)=m$ and $\rho^r(m)=m+1$, for every $k < m < k'$, $\rho^l(m)=m-1$ and $\rho^r(m)=m$ ($v_{k'-1}$ is reassigned) and for $k$, $\rho^l(k)=i$, $\rho^r(k)=k'$. By the validity of $OPT(k'-1)$, every two points among $S_{k'-1,n}$ are (strongly) connected. Our assignment for $S_{i,k'-1}$ guarantees the connectivity between every two points in $S_{i,k'-1}$, and thus between every two points in $S_{i,n}$. \item[$OPT(i) \geq X_i$:] \ Consider an optimal solution $(\rho^l,\rho^r)$ for the points $S_{i,n}$ subject to the condition that $\rho^r(i)=i+1$. Let $v_k$ be a point to the right of $v_i$ with $\rho^l(k)=i$ and let $\rho^r(k)=k'$. 
Note that since $v_i$ is the leftmost point and the induced graph is strongly connected, such a point necessarily exists. Next we show that there is no edge directed either left or right connecting two points on different sides of $v_{k'}$ in $G_{\rho^{lr}}$, except for possibly an edge $(v_j, v_{k'-1})$ with $j>k'$. Assume towards contradiction that the former does not hold, i.e., there exists $i<t<k'$, with $\rho^r(t)\geq k'$; then, reassigning $\rho^r(k)= \max \{t,k\}$ maintains the connectivity, and reduces the value of $cost'$ without increasing the value of $cost$ in contradiction to the optimality of the solution. Now, let $v_j$ be a point to the right of $v_{k'}$ with $\rho^l(j)=j'\in \left[ i,k' \right]$; we show that $j' \geq k'-1$. Consider a point $v_t$ with $j'<t<k'$; as we have shown, $\rho^r(t)<k'$. By symmetric arguments we have $\rho^l(t)> j'$ (see Fig.~\ref{fig:OPT(i)}(b)). Namely, there is no edge going out of the interval $(v_{j'},v_{k'})$. Thus, connectivity can be achieved only if this interval is empty of vertices, i.e., either $j'=k'-1$ or $j'=k'$ (note that $k'-1>i$). The above observation allows us to divide the problem into two independent subproblems, one for the points $S_{i,k'-1}$ subject to the constraints $\rho^l(k)=i$ and $\rho^r(k)=k'$, and the other for the points $S_{k'-1,n}$ subject to the artificial constraint $\rho^r(k'-1)=k'$ that guarantees the existence of a path from $k'-1$ to $k'$, due to the solution of the first subproblem, but should not be paid for. Regarding the first subproblem, by Lemma~\ref{lem:range_line}, in an optimal assignment, for every $i \leq m < k$, $\rho^l(m)=m$ and $\rho^r(m)=m+1$, and for every $k < m \leq k'-1$, $\rho^l(m)=m-1$ and $\rho^r(m)=m$. Thus, its cost is $\sum_{m=i}^{k'-2} |v_m v_{m+1}|^\alpha + \max \{|v_k v_i|^\alpha ,|v_k v_{k'}|^\alpha\}$. The cost of an optimal solution to the second subproblem is $OPT(k'-1) - |v_{k'-1} v_{k'}|^\alpha$. 
Hence, the $cost$ of an optimal solution to the whole problem is the sum of the above costs and the lemma follows. \end{description} \end{proof} \begin{figure}[tbh] \centering \includegraphics[width=1\textwidth]{OPT_i_.pdf} \caption{(a) An illustration of the assignment associated with $OPT(i)$ with respect to given parameters $k$ and $k'$. In gray are range assignments associated with $OPT(k'-1)$. (b) An illustration of the proof of Lemma~\ref{lem:OPT(i)}. In dashed arrows, the impossible ranges of $v_t$ and in gray, the alternative assignment of lower $cost'$.} \label{fig:OPT(i)} \end{figure} \paragraph*{Complexity.} Obviously, Algorithm~\textsc{1DMinRA}\ requires $O(n^2)$ space. Regarding the running time, $O(n)$ iterations are performed during the algorithm, each iteration takes $O(n^2)$ time. Therefore, the total running time is $O(n^3)$ and Lemma~\ref{lem:alg1DRA} follows. \begin{lemma}\label{lem:alg1DRA} Algorithm~\textsc{1DMinRA}\ runs in $O(n^3)$ time using $O(n^2)$ space. \end{lemma} \subsubsection{A Quadratic-Time Algorithm}\label{subsec:Quad} In this section we consider Algorithm~\textsc{1DMinRA}\ from the previous section and reduce its running time to $O(n^2)$. Consider the equality stated in Lemma~\ref{lem:OPT(i)}. Observe that given fixed values $i$ and $k'$, the value $k$ that minimizes the argument of the $\min$ function with respect to $i$ and $k'$ is the value $k$ that minimizes $\max\{ |v_i v_k|^\alpha, |v_k v_{k'}|^\alpha\}$, namely, the index of the point closest to the midpoint of the segment $\overline{v_i v_{k'}}$, denoted by $c(i,k')$. Thus, \[ OPT(i)= \min_{i+1 < k' \leq n} \left\{ \begin{aligned} \sum_{m=i}^{k'-2} |v_m v_{m+1}|^\alpha &+ OPT(k'-1) - |v_{k'-1} v_{k'}|^\alpha \\ & + \max\{ |v_i v_{c(i,k')}|^\alpha, |v_{c(i,k')} v_{k'}|^\alpha \} \end{aligned} \right\}. \] Consider Algorithm~\textsc{1DMinRA}\ after applying the above modification in the computation of $T[i]$. 
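The collapsed inner minimization can be sketched as a binary search for $c(i,k')$ (illustrative code; it assumes the coordinates are sorted and $k' \geq i+2$, so that at least one candidate index exists):

```python
# Find c(i, kp): the index strictly between i and kp whose point is
# closest to the midpoint of the segment from xs[i] to xs[kp].
from bisect import bisect_left

def closest_to_midpoint(xs, i, kp):
    mid = (xs[i] + xs[kp]) / 2
    # Binary search restricted to the open index range (i, kp).
    j = bisect_left(xs, mid, i + 1, kp)
    candidates = [m for m in (j - 1, j) if i < m < kp]
    return min(candidates, key=lambda m: abs(xs[m] - mid))
```

Since each lookup is $O(\log n)$ (or $O(1)$ with simple precomputation), the double minimization over $(k,k')$ collapses to a single loop over $k'$, which is what yields the quadratic total running time.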
Since there are only $O(n)$ sub-problems to compute, each in $O(n)$ time, the running time reduces to $O(n^2)$ and the following theorem follows. \begin{theorem} The $1$D \textsc{MinRange}\ problem can be solved in $O(n^2)$ time using $O(n^2)$ space. \end{theorem} \subsection{An Optimal Algorithm for the 1D \textsc{MinRangeSpanner} \ Problem}\label{subsec:spanner} Given a set $S=\{v_1,...,v_n\}$ of points in $1D$ and a value $t \geq 1$, the 1D \textsc{MinRangeSpanner}\ problem aims to find a minimum cost range assignment for $S$, subject to the requirement that the induced graph is a $t$-spanner. We present a polynomial-time algorithm which solves this problem optimally and follows the same guidelines as Algorithm~\textsc{1DMinRA}. We begin by providing the key notions required for understanding the correctness of the algorithm, followed by its description. Due to space limitations, we do not supply a formal proof. The first and most significant observation is that the problem can still be divided into two subproblems in the same way as in Algorithm~\textsc{1DMinRA}, by similar arguments to those of Lemma~\ref{lem:OPT(i)}. In Lemma~\ref{lem:OPT(i)} we show that any assignment that does not satisfy the conditions required for the division can be adjusted to a new assignment with a lower value of $cost'$ that preserves connectivity. The new assignment, however, also preserves the lengths of the shortest paths, which makes the argument legitimate for this problem as well. The two problems (\textsc{MinRange}\ and \textsc{MinRangeSpanner}\ ) differ when it comes to solving each of the above subproblems. Consider the left subproblem, i.e., of the form described in Lemma~\ref{lem:range_line}. The optimal assignment for it is no longer necessarily the one stated in the lemma, since it does not ensure the existence of $t$-spanning paths. 
Therefore, our algorithm divides problems of this form into smaller subproblems handled recursively (see Fig.~\ref{fig:range_spanner}, right). Dealing with such subproblems requires defining new parameters: a rightmost input point $v_j$, and the lengths of the shortest paths connecting $v_i$ to $v_j$, $v_j$ to $v_i$ and $v_i$ to $v_{i+1}$ not involving points in $S_{i,j}$ except for the endpoints, denoted by $\overrightarrow{\delta}$, $\overleftarrow{\delta}$, and $\delta^i$, respectively. Regarding the computation of a subproblem, since points may be covered now by vertices outside the subproblem domain, we allow $v_k$ to have either a right or a left range equal to $0$ (in the terms of Algorithm~\textsc{1DMinRA}, either $k=i$ or $k=k'$). Another key observation is that any directed graph $G$ over $S$ is a $t$-spanner for $S$ iff for every $1 \leq i <n$ there exists a $t$-spanning path from $v_i$ to $v_{i+1}$ and from $v_{i+1}$ to $v_i$. Moreover, if $G$ is strongly connected, then the addition of an edge between consecutive points does not affect the length of the shortest path between any other pair of consecutive points. Therefore, for subproblems with $j=i+1$ we assign $\rho^r(i)=i+1$ (resp. $\rho^l(i+1)=i$) iff $\overrightarrow{\delta}/|i,i+1|>t$ (resp. $\overleftarrow{\delta}/|i,i+1|>t$), thus ensuring that the induced graph is a $t$-spanner. Our algorithm may consider solutions in which an assignment to a node is charged more than once in the total cost; however, for every such solution, there exists an equivalent one in which the charging is done properly and is preferred by the algorithm due to its lower cost. 
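The consecutive-pairs observation lends itself to a direct check (illustrative sketch for collinear points, with Dijkstra over the induced directed edges; not code from the paper):

```python
import heapq

def shortest(xs, edges, s, t):
    # Dijkstra over directed edges, weighted by Euclidean length on the line.
    dist = {s: 0.0}
    pq = [(0.0, s)]
    while pq:
        du, u = heapq.heappop(pq)
        if u == t:
            return du
        if du > dist.get(u, float("inf")):
            continue
        for a, b in edges:
            if a == u:
                nd = du + abs(xs[b] - xs[a])
                if nd < dist.get(b, float("inf")):
                    dist[b] = nd
                    heapq.heappush(pq, (nd, b))
    return float("inf")

def is_t_spanner_on_line(xs, edges, t):
    # Per the observation: it suffices to check t-spanning paths between
    # consecutive points, in both directions.
    return all(shortest(xs, edges, i + d, i + 1 - d) <= t * (xs[i + 1] - xs[i])
               for i in range(len(xs) - 1) for d in (0, 1))

chain = {(0, 1), (1, 0), (1, 2), (2, 1)}   # bidirected path on points 0,1,2
cycle = {(0, 1), (1, 2), (2, 0)}           # directed 3-cycle
```

For the bidirected chain on points $0,1,2$ every consecutive pair is connected directly, so it is a $1$-spanner; the directed cycle forces the detour $1 \to 2 \to 0$ of length $3$, so it is a $t$-spanner only for $t \geq 3$.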
We denote by $OPT(i,j,\overrightarrow{\delta},\overleftarrow{\delta}, \delta^i)$ the cost of an optimal solution to the sub-problem defined by the input $S_{i,j}$ subject to the parameters $\overrightarrow{\delta},\overleftarrow{\delta}$, and $\delta^i$ representing the lengths of the shortest external paths as defined earlier in this section. Let $\Delta_{i,j}=\{2|v_l v_i|+|v_i v_j|, 2|v_j v_r|+|v_i v_j| : l \leq i < j \leq r\}$. We compute $OPT(i,j,\overrightarrow{\delta},\overleftarrow{\delta}, \delta^i)$ for every $1\leq i<j \leq n$, and the corresponding $\overrightarrow{\delta},\overleftarrow{\delta}, \delta^i \in \Delta_{i,j} $ iteratively, while in stage $x$ all subproblems with $j-i=x$ are solved. The computation is derived from the equalities below. For simplicity of presentation, we overload notation and write $|i,j|$ to mean $|v_i v_j|$. In addition, we write $\varnothing$ in place of $\delta^i$ where $\delta^i=\overrightarrow{\delta}$. \[ \begin{aligned} OPT(&i, i+1,\overrightarrow{\delta}, \overleftarrow{\delta}, \delta^i)= \overrightarrow{r} + \overleftarrow{r}, \text{where}\\ &\overrightarrow{r} = \left\{ \begin{array}{ll} |i,i+1| \text{ (*\emph{assigning $\rho^r(i)=i+1$}*)}&\: ,\overrightarrow{\delta}/|i,i+1|> t \\ 0 &\: ,otherwise \end{array} \right. \end{aligned} \] and $\overleftarrow{r}$ is defined symmetrically. 
\noindent For $j>i+1$, we have \[ \begin{aligned} &OPT(i, j,\overrightarrow{\delta}, \overleftarrow{\delta}, \delta^i)= \\ &\min_{\substack{i \leq k \leq j \\ k \leq k' \leq j}} \left\{ \begin{array}{ll} \begin{aligned} |i,i+1|^\alpha &+ OPT(i,\, i+1, \,|i,i+1|, \,|i+1,j|+\overleftarrow{\delta}, \,\varnothing) \\ &+ OPT(i+1, \,j, \,\infty, \,\overleftarrow{\delta}-|i+1, i|, \,\varnothing) \end{aligned} &\: ,i=k \\[4mm] \begin{aligned} |i,i+1|^\alpha &+ OPT(i,\,i+1,\,\delta^i,\,|i,i+1|,\,\varnothing) \\ &+ OPT(i+1,\,j,\,|i,i+1|+\overrightarrow{\delta},\,\overleftarrow{\delta}-|i,i+1|,\,\varnothing) \end{aligned} &\: ,k=k' \\[4mm] \begin{aligned} &\max\{|i,k|^\alpha, |k,k'|^\alpha\} \\ &+ OPT(i,\,i+1,\,\delta^i,\,|i+1,k|+|k,i|,\,\varnothing) \\ &+ OPT(i+1,\,k,\,\infty,\,|k,i+1|,\,\infty) \\ &+ OPT(k,\,k'-1,\,|k,k'-1|,\,\infty,\,|k,k+1|) \\ &+ OPT(k'-1,\,j,\,|k'-1,i|+\overrightarrow{\delta},\,\overleftarrow{\delta}-|i,k'-1|,\,|k'-1,k|+|k,k'|). \\ \end{aligned} &\: ,i \neq k \neq k' \\ \end{array} \right.\\ \end{aligned} \] We permit either $i=k$ (and then $k'=i+1$) or $k=k'$ (and then $k'=i+1$), but not both. \begin{figure}[thb] \centering \includegraphics[width=1\textwidth]{range_spanner.pdf} \caption{An illustration of the algorithm for the \textsc{MinRangeSpanner}\ problem. The ranges are illustrated in black arrows and the division into subproblems in dashed lines.} \label{fig:range_spanner} \end{figure} \paragraph*{Complexity.} Let $\Delta$ be the set of all distinct distances in $S$, then for every $v_i,v_j \in S $, $|\Delta_{i,j}|=|\Delta|=O(n)$. We fill a table with $O(n^2 |\Delta|^3)$ cells, each cell is computed in $O(n^2)$ time, thus, the total running time is $O(n^4 |\Delta|^3)$. As we have focused on presenting a simple and intuitive solution, rather than reducing the running time, a more careful analysis achieves a better bound on the time complexity. 
For example, the relevant domain of $\overrightarrow{\delta},\overleftarrow{\delta}$, and $\delta^i$ can be estimated more precisely with respect to $t$. Moreover, Observation~\ref{obs:lr_ranges} allows reducing the running time by a factor of $n$. This is done by decreasing the number of relevant combinations of $i$ and $k'$ that have to be checked by the algorithm, for fixed indices $j$ and $k$ with $i < k < k'< j $, to $O(n)$, using similar arguments to those in Lemma~\ref{lem:OPT(i)} (see Fig.~\ref{fig:OPT(i)_improved}). \begin{observation}\label{obs:lr_ranges} Consider an optimal assignment $(\rho^l,\rho^r)$ and a point $v_k \in S$. Let $\rho^l(k)=i$ and let $k^{i}$ denote the minimal index with $k < k^{i}$ and $|v_k v_{k^{i}}|\geq|v_k v_{i}|$, then $k^{i+1} \leq \rho^r(k) \leq k^{i}$. \end{observation} \begin{figure}[bht] \centering \includegraphics[width=0.75\textwidth]{OPT_i__improved.pdf} \caption{An illustration of Observation~\ref{obs:lr_ranges}. Every pair of symmetric arcs indicates equal distances from $v_k$. The marked domains indicate the legal values of $\rho^r(k)$ for different values of $i$.} \label{fig:OPT(i)_improved} \end{figure} \section{The \textsc{MinRange}\ Problem in Higher Dimensions}\label{sec:PlaneMinCostRange } In this section we focus on the \textsc{MinRange}\ problem for dimension $d \geq 2$ and $\alpha =1$. Like all versions of the problem for $d \geq 2$ and $\alpha \geq 1$, it is known to be $NP$-hard. Currently, the algorithm achieving the best approximation ratio for $\alpha=1$ and any $d\geq 2$ is the \emph{Hub} algorithm with a ratio of $1.5$. This algorithm was proposed by G. Calinescu, P.-J. Wan, and F. Zaragoza for the general metric case, and analyzed by Amb\"{u}hl et al. in~\cite{Ambuhl05} for the restricted Euclidean case. We show a new approximation algorithm and bound its approximation ratio from above by $1.5-\epsilon$ for $\epsilon=5/10^5$. 
Although in some cases our phrasing is restricted to the plane, all arguments hold for higher dimensions as well. In our algorithm we use two existing algorithms, the \emph{Hub} algorithm and the algorithm for the $1$D \textsc{MinRange}\ problem introduced by Kirousis et al.~\cite{Kirousis}, to which we refer as the \emph{$1$D RA} algorithm. We observe that the latter algorithm outputs an optimal solution for any ordered set $V=\{v_1,...,v_n\}$ with distance function $h$ that satisfies the following \emph{line alike\ } condition: for every $1 \leq i \leq j < k \leq l \leq n$, it holds that $h(v_i,v_l) \geq h(v_j,v_k)$. We use this algorithm for subsets of the input set that roughly lie on a line. \subsection{Our Approach}\label{sec:approach} Presenting our approach requires acquaintance with the \emph{Hub} algorithm. The \emph{Hub} algorithm finds the minimum enclosing disk $C$ of $S$ centered at point $hub \in S$. Then, the algorithm sets $\rho(hub)=r_{min}$, where $r_{min}$ is $C$'s radius. Finally, it directs the edges of $MST(S)$ towards the \emph{hub} and for each directed edge $(v,u)$ sets $\rho(v)=|vu|$. The cost of this assignment is $w(MST(S))+r_{min} \leq w(MST(S))+(w(MST(S))+w(e_{M}))/2,$ where $e_M$ is the longest edge in $MST(S)$ and the weight function $w$ is defined with respect to Euclidean lengths. To guide the reader, we give an intuition and a rough sketch of our algorithm. We characterize the instances where the \emph{Hub} algorithm gives a better approximation than 1.5, and to generalize these cases we slightly modify it. Furthermore, we show an algorithm that prevails in the cases where the modified \emph{Hub} algorithm fails to give an approximation ratio lower than 1.5. Before we elaborate more on the aforementioned characterization, we introduce another piece of terminology. 
Given a graph $G$ over $S$ and two points $p,q \in S$, the \emph{stretch factor} from $p$ to $q$ in $G$ is $\delta_G(p,q)/|pq|$, where $\delta_G(p,q)$ denotes the Euclidean length of the shortest path between $p$ and $q$ in $G$. We use $\sim$$large$ when referring to values greater than fixed thresholds, some with respect to $w(MST(S))$, defined later. Consider $MST(S)$ and its longest path $P_M$. If one of the following conditions holds, then the \emph{Hub} algorithm or its modification results in a better constant approximation than $1.5$: \ (A1) there exists a $\sim$$large$ edge in $MST(S)$; \ (A2) a $\sim$$large$ fraction of $P_M$ consists of disjoint sub-paths connecting pairs of points with $\sim$$large$ stretch factor, not dominated by one sub-path of at least half the fraction; or \ (A3) the weight $w(MST(S)\backslash P_M)$ is $\sim$$large$. Otherwise, there are three possible cases: \ (B1) the graph $MST(S)$ is roughly a line; \ (B2) there are two points in $P_M$ with $\sim$$large$ stretch factor, i.e., there is a $\sim$$large$ `hill' in $P_M$, and $MST(S)$ roughly consists of two $1$D paths; or \ (B3) the optimal solution uses edges connecting the two sides of the `hill', covering a $\sim$$large$ fraction of it. The last three cases are approximated using the following method. We consider every two edges connecting the two sides of the `hill' as the edges in the optimal solution that separate the uncovered remains of the path into two independent sub-paths, i.e., not connected by an edge. (The points in both sub-paths may be connected to the middle covered area.) Note that such two edges exist. We direct the covered area to achieve a strongly connected sub-graph and solve each of the two sub-paths separately using two techniques. 
The first uses the \emph{$1$D RA} algorithm with a distance function implied by the input, satisfying the \emph{line alike\ } condition, and applies adjustments to the output; the second uses the \emph{Hub} algorithm. A $(1.5-\epsilon)$-approximation is obtained for cases (B1) and (B2) using the first technique, and for case (B3) using the second technique. The algorithm computes several solutions, using the aforementioned methods, and returns the one of minimum cost. \subsection{The Approximation Algorithm}\label{sec:alg} The algorithm uses the following three procedures, which are defined precisely at the end of the algorithm's description. \begin{itemize}[leftmargin=*] \item The \emph{flatten} procedure $f$ - a method performing shortcuts between pairs of points on a given path $P$, resulting in a path without two points of stretch factor greater than $c_s$. \item The \emph{distance function} $h_S$ - a distance function defined for an ordered set $P\subseteq S$, satisfying the \emph{line alike} condition. \item The \emph{adjustment} transformation $g$ - a function adjusting an optimal range assignment for an ordered set $P\subseteq S$ with distance function $h$, to a valid assignment for $P$. \end{itemize} \vspace{0.5mm} Let $R$ be the forest obtained by omitting from $MST(S)$ the edges of its longest path, $P_M$. Given a point $v \in P_M$, let $T(v)$ denote the tree of $R$ rooted at $v$. For every $u \in T(v)$ let $r(u)$ denote the root of the tree in $R$ containing $u$, namely, $v$. For a set of points $V \subset P_M$, let $T(V)$ denote the union $\bigcup_{v \in V} T(v)$.
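As a concrete reference for Solution (i) below, the \emph{Hub} algorithm admits a short sketch in code. The following Python fragment is illustrative only: it assumes planar points given as coordinate tuples, builds $MST(S)$ by Prim's algorithm on the complete Euclidean graph, and all helper names are ours.

```python
import math

def dist(p, q):
    """Euclidean distance between two planar points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def mst_edges(points):
    """Prim's algorithm on the complete Euclidean graph; O(n^3) sketch."""
    in_tree = {points[0]}
    edges = []
    while len(in_tree) < len(points):
        v, u = min(((v, u) for v in points if v not in in_tree for u in in_tree),
                   key=lambda e: dist(*e))
        edges.append((v, u))
        in_tree.add(v)
    return edges

def hub_assignment(points):
    """Hub algorithm sketch: rho(hub) = radius of the minimum enclosing
    disk centered at a point of S; every MST edge is directed towards
    the hub and charged to its tail."""
    hub = min(points, key=lambda c: max(dist(c, p) for p in points))
    rho = {p: 0.0 for p in points}
    rho[hub] = max(dist(hub, p) for p in points)   # r_min
    # Root MST(S) at the hub (BFS) so every edge points towards it.
    adj = {p: [] for p in points}
    for v, u in mst_edges(points):
        adj[v].append(u)
        adj[u].append(v)
    parent, queue, seen = {}, [hub], {hub}
    while queue:
        x = queue.pop(0)
        for y in adj[x]:
            if y not in seen:
                seen.add(y)
                parent[y] = x
                queue.append(y)
    for v, u in parent.items():   # directed edge (v, u) towards the hub
        rho[v] = max(rho[v], dist(v, u))
    return rho
```

On three collinear points at unit spacing, the hub is the middle point and the total cost is $w(MST(S))+r_{min}=2+1=3$, matching the bound above.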
For ease of presentation, we assume the path $P_M$ has \emph{left} and \emph{right} endpoints, thus, the \emph{left} and \emph{right} relations over $P_M$ are naturally defined. \\[2mm] \textbf{The main algorithm scheme:} \\ Compute four solutions and return the one of minimal cost. In case of multiple assignments to a point in a solution, the maximal among the ranges counts.\\ \underline{Solution} (i): apply the \emph{Hub} algorithm.\\ \underline{Solution} (ii): apply a variant of the \emph{Hub} algorithm - find a point $c \in P_M$ that minimizes the value $r_c=\max\{|cp_1|,|cp_z|\}$, where $p_1$ and $p_z$ are the endpoints of the path $P_M$. Assign $c$ the range $r_c$, direct $P_M$ towards $c$ and bi-direct all edges in $R$.\\[1mm] (* \textit{The rest of the algorithm handles cases (B1)-(B3) defined in Section~\ref{sec:approach}} *)\\[1mm] \textbf{For} every edge $e \in P_M$ do : \begin{indentpar}{0.4cm} Let $P_{e^l}$ and $P_{e^r}$ be the two paths of $P_M\backslash e$, to the left and to the right of $e$, respectively.\\ Apply the \emph{flatten procedure} $f$ on $P_{e^l}$ and $P_{e^r}$ to obtain the sub-paths\\ $P_{l'}=(p_{1},p_{2},...,p_{m})$ and $P_{r'}=(p_{m+1},p_{m+2},...,p_z)$, respectively.\\ (* \textit{Note $R$ has been changed during the \emph{flatten procedure}} *)\\ \textbf{For} every $4$ points $p_{l}, p_{l'}, p_{r'}, p_{r}$ with $l \leq l' \leq m < r' \leq r$ in the flattened sub-paths: \begin{indentpar}{0.6cm} In both solutions (iii) and (iv) direct the path $P_x=(p_l,...,p_m,p_{m+1},...,p_r)$ towards $p_{l}$ and for each point $p_i$ with $1 \leq i \leq z$ direct $T(p_i)$ towards $p_i$ and assign $p_i$ a range $w(T(p_i))$. Perform the least cost option among the following two, either add the edge $(p_l,p_r)$, or add the two edges, one from $u_l$ to $u_{r'}$ for $u_l \in T(p_l), u_{r'} \in T(p_{r'})$ of minimal length and the other from $u_{l'}$ to $u_r$ for $u_{l'} \in T(p_{l'}), u_r \in T(p_r)$ of minimal length.
(see illustration in Fig.~\ref{fig:alg2}). As for the two sub-paths $P_l=(p_{1},p_{2},...,p_{l})$ and $P_r=(p_{r},p_{r+1},...,p_z)$, assign them ranges as follows:\vspace{0.1cm} \\ \underline{Solution} (iii): apply the \emph{Hub} algorithm separately on each sub-path.\\ \underline{Solution} (iv): apply the \emph{$1$D RA} algorithm separately on each sub-path with respect to the \emph{distance function $h_S$} and perform the \emph{transformation $g$} on the assignment received. \end{indentpar} \end{indentpar} \begin{figure}[bht] \centering \includegraphics[width=0.6\textwidth]{alg2.pdf} \caption{The two sub-paths $P_l$ and $P_r$ as defined in the algorithm.} \label{fig:alg2} \end{figure} \begin{description}[topsep=0.2cm, itemsep=0.1cm] \item[The \emph{flatten} procedure $f$.] Let $c_s = 5/4$. Given a path $P=(v_i,..,v_n)$, set $Q_{P} = \{\}$. Let $j>i$ be the maximal index such that $\delta_{P}(v_i,v_j) > c_s|v_i v_j|$. If such an index does not exist, let $j=i+1$. Else ($j>i+1$), add the edge $(v_i,v_j)$ to $P$, remove the edge $(v_{j-1},v_j)$ from $P$, move the sub-path $(v_i,..,v_{j-1})$ from $P$ to the forest $R$, and update $Q_{P} = Q_{P} \cup \{ (v_i,v_j)\}$. \ Finally, repeat with the sub-path $(v_j,..,v_n)$ without initializing $Q_P$. \end{description} The definitions for $h_S$ and $g$ are given with respect to the sub-paths $P_l$ and $P_{l'}$; the definitions for the sub-paths $P_r$ and $P_{r'}$ are symmetric. \begin{description}[topsep=0.2cm, itemsep=0.1cm] \item[The distance function $h_S$.] For every two points $p_j,p_k$ with $1\leq j \leq k \leq l$ we define, $$h_S(p_j,p_k) = \min_{ \substack{u \in T(p_{j'}), 1\leq j' \leq j \\ v \in T(p_{k'}), k\leq k' \leq m}} |uv|.$$ \item[The \emph{adjustment} transformation $g$.] Given an assignment $\rho':P_l \rightarrow \mathds{R}^+$, we transform it into an assignment $g(\rho')=\rho: P_{l'} \rightarrow \mathds{R}^+$.
First, we assign ranges as follows: \[ \rho(p_j)= \left\{ \begin{array}{l} \begin{aligned} c_s \cdot \rho'(p_j)+ c_k \cdot w(T(p_j)), &\: 1 \leq j \leq l, \\ c_k \cdot w(T(p_j)), &\: l < j \leq m , \end{aligned} \end{array} \right. \] \end{description} where $c_k = 1+8(1+c_s) = 19$. The multiplication by $c_s$ handles the gaps caused by points breaking the \emph{line alike\ } condition with respect to the Euclidean metric. The role of the additive part, together with the second stage of the transformation, elaborated next, is to overcome the absence of points outside the path. In the second stage, for every $p_j$ with $1 \leq j \leq m$, let $1\leq j^- < j$ be the minimal index for which there exists $u \in T(p_{j^-})$ with $|p_j u| \leq c_k\cdot w(T(p_j))$, and let $j< j^+ \leq m$ be the maximal index for which there exists $u \in T(p_{j^+})$ with $|p_j u| \leq c_k\cdot w(T(p_j))$; then direct the sub-path between $p_{j^-}$ and $p_{j^+}$ towards $p_j$. See Fig.~\ref{fig:transformation} for illustration. \begin{figure}[htb] \centering \includegraphics[width=0.78\textwidth]{transformation.pdf} \caption{An illustration of the second stage in the adjustment transformation $g$.} \label{fig:transformation} \end{figure} The indexing of points in $P_M$ and notations introduced in the algorithm are used throughout this section. The tree $T(p_i) \in R$, for $1 \leq i \leq z$, is sometimes denoted by $T_i$ for short. \subsection{The Validity of the Output}\label{sec:validity} We consider each solution separately and show it forms a \emph{valid} assignment $\rho$. \noindent \textbf{Validity of Solution (i).} \ Follows from the validity of the \emph{Hub} algorithm. \noindent \textbf{Validity of Solution (ii).} \ The subgraph of $G_{\rho}$ induced by the points of $P_M$ is strongly connected due to the validity of the \emph{Hub} algorithm. All trees in $R$ are bi-directed trees sharing a common point with $P_M$; therefore, the whole graph, $G_{\rho}$, is strongly connected.
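Solutions (iii) and (iv), treated next, operate on the flattened sub-paths produced by procedure $f$. A minimal Python sketch of $f$, under the assumption of planar points given as coordinate tuples (function names are ours), is:

```python
import math

def dist(p, q):
    """Euclidean distance between two planar points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def flatten(path, c_s=1.25):
    """Sketch of procedure f: repeatedly shortcut to the maximal index j
    with path-distance(v_i, v_j) > c_s * |v_i v_j|.  Returns the
    flattened path, the shortcut pairs Q_P, and the detours moved to
    the forest R (each detour stays attached at its first point)."""
    flat, Q, R = [path[0]], [], []
    i = 0
    while i < len(path) - 1:
        j, d = i + 1, 0.0        # default: advance by one edge
        for k in range(i + 1, len(path)):
            d += dist(path[k - 1], path[k])   # delta_P(v_i, v_k)
            if d > c_s * dist(path[i], path[k]):
                j = k            # keep the maximal violating index
        if j > i + 1:            # shortcut (v_i, v_j); move detour to R
            Q.append((path[i], path[j]))
            R.append(path[i:j])
        flat.append(path[j])
        i = j
    return flat, Q, R
```

On a collinear path nothing is shortcut (every stretch factor is $1$), while a hairpin such as $(0,0),(1,0),(0,0.1)$ collapses to its two endpoints, with the detour moved to $R$.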
\noindent \textbf{Validity of Solution (iii).} \ Each tree in $R$ induces a strongly connected subgraph of $G_{\rho}$; thus, it suffices to show the connectivity of the minor obtained from $G_{\rho}$ by contracting all trees of $R$. The subgraph of $G_{\rho}$ induced by the points of the middle sub-path $P_x$ forms either one directed cycle or two directed cycles sharing common vertices. By the correctness of the \emph{Hub} algorithm, each of the sub-paths $P_l$ and $P_r$ induces a strongly connected subgraph of $G_{\rho}$. In addition, each of them shares a common vertex with the middle sub-path; thus, the whole graph, $G_{\rho}$, is strongly connected. \noindent \textbf{Validity of Solution (iv).} \ Since we already verified the validity of Solution (iii), we are only left to show that each of $P_l$ and $P_r$ induces a strongly connected subgraph in $G_{\rho}$. We consider the sub-path $P_l$; the case of $P_r$ is symmetric. Let $\rho_{l}: P_l\rightarrow \mathds{R}^+$ denote the assignment obtained by applying the \emph{$1$D RA} algorithm on the sub-path $P_l$ with respect to $h_S$. Due to the validity of $\rho_{l}$, the graph induced by $\rho_{l}$ with respect to $h_S$ is strongly connected. Let $(p_i,p_j)$ be an edge in this graph; we show that there exists a directed path from $p_i$ to $p_j$ in $G_{\rho}$. Assume w.l.o.g., $1 \leq i < j \leq l$. By the definition of $h_S$, there exist $u \in T(p_{i'})$ and $v \in T(p_{j'})$ with $1 \leq i' \leq i < j \leq j'\leq m$, such that $\rho_l (p_i) \geq h_S (p_i,p_j)=|uv|$. Since the final assignment $\rho$ is obtained after applying the adjustment transformation $g$, we have $\rho(p_i) \geq c_s |uv|$.
\begin{description}[topsep=0.2cm, itemsep=0.1cm] \item[Case 1:] $|uv| \leq (c_k-1)\cdot w(T(p_{i'}))$ or $|uv| \leq (c_k-1)\cdot w(T(p_{j'}))$.\\ Assume, w.l.o.g., that the second condition holds; then $|p_{j'}u| \leq c_k \cdot w(T(p_{j'}))$ and thus, by the definition of transformation $g$, the directed path from $p_{i'}$ to $p_{j'}$ and the edge $(p_{j'}, u)$ are contained in $G_{\rho}$. Together with the directed path in $T(p_{i'})$ from $u$ to the root $p_{i'}$ they form a cycle containing both $p_{i}$ and $p_{j}$. \item[Case 2:] both $|uv| > (c_k-1)\cdot w(T(p_{i'}))$ and $|uv| > (c_k-1)\cdot w(T(p_{j'}))$.\\ Since $p_{i'},p_{j'} \in P_{l'}$, and $P_{l'}$ is the output sub-path after performing the \emph{flatten} procedure, we have $\delta_{P_{l'}}(p_{i'},p_{j'}) \leq c_s|p_{i'} p_{j'}|$. Therefore, \begin{align*} |p_i p_j| \ &\leq \ \delta_{P_{l'}}(p_i, p_j) \ = \ \delta_{P_{l'}}(p_{i'},p_{j'})-(\delta_{P_{l'}}(p_{i'},p_{i})+\delta_{P_{l'}}(p_{j'},p_{j}))\\ &\leq c_s|p_{i'} p_{j'}|-(|p_i p_j|-(w(T_{i'})+w(T_{j'})+|u v|))\\ &\leq c_s(w(T_{i'})+w(T_{j'})+|uv|)-|p_i p_j|+w(T_{i'})+w(T_{j'})+|u v|\\ &\leq (c_s+1)(w(T_{i'})+w(T_{j'}))+(c_s+1)|uv|-|p_i p_j|\\ &\leq (c_s+1)(2|uv|/(c_k-1))+(c_s+1)|uv|-|p_i p_j|\\ &\leq (c_s+1)(2|uv|/(8(1+c_s)))+(c_s+1)|uv|-|p_i p_j|\\ \Rightarrow |p_i p_j| &\leq (\frac{1+c_s}{2}+\frac{1}{8})|uv| = c_s|uv|. \end{align*} In this case, $\rho(p_i) \geq c_s|uv| \geq |p_i p_j|$, hence the edge $(p_i,p_j)$ itself is contained in $G_{\rho}$. \end{description} \subsection{The Approximation Ratio} Let $SOL$ denote the cost of the output of the algorithm for the input set $S$ and let $\rho^*: S \rightarrow \mathds{R}^+$ denote an optimal assignment for $S$ of cost $OPT$. We show that $SOL \leq (1.5-\epsilon)OPT$. Let $W=w(MST(S))$, $r = w(R)/W$, $e_{M}$ denote the longest edge in $MST(S)$ and $l = w(e_{M})/W$. As shown in~\cite{Ambuhl05}, $OPT \geq W+w(e_M)=W(1+l)$.
Next we show several upper bounds on the ratio $SOL/OPT$, corresponding to the four solutions computed during the algorithm, and finally conclude that the minimum among them is at most $1.5-\epsilon$.\\[2mm] \noindent \textbf{Approximation bound for Solution (i).} \ Due to the analysis of the \emph{Hub} algorithm done in~\cite{Ambuhl05}, we have $SOL \leq W+(w(P_{M})+w(e_{M}))/2 = W(1.5-r/2+l/2)$. Therefore, \begin{align}\label{eq:i} \frac{SOL}{OPT} \leq \frac{W(1.5-r/2+l/2)}{W(1+l)} = \frac{1.5-(r-l)/2}{1+l}. \end{align} Assume $SOL > (1.5-\epsilon)OPT$; then \begin{align*} 1.5-\epsilon < \frac{SOL}{OPT} < \frac{1.5-(r-l)/2}{1+l} < 1.5-r/2, \end{align*} which implies $r < 2\epsilon$, and the following corollary follows. \begin{corollary}\label{cor:Soli} One of the following holds: $SOL \leq (1.5-\epsilon)OPT$ or $r < 2\epsilon$. \end{corollary} The following lemma is crucial for introducing the bounds for Solutions (ii) and (iv). \begin{lemma}\label{lem:sf} Let $c_s$ and the notation $Q_{P}$ be defined as in procedure $f$. Given a constant $\delta$, \begin{enumerate} \item there exist two pairs of points $(u,v),(y,w)$ connecting two disjoint sub-paths in $P_M$, each pair with stretch factor greater than $c_s$, that satisfy $\delta_{P_M}(u,v) \geq \delta$ and $\delta_{P_M}(y,w) \geq \delta$; or \item there exists an edge $e \in {P_M}$ defining $P_{e^l}$ and $P_{e^r}$ (the two paths of $P_M\backslash e$, to the left and to the right of $e$, respectively), such that for every $(u,v)\in Q_{P_{e^l}}\cup Q_{P_{e^r}}$, $\delta_{P_M}(u,v)<\delta$. \end{enumerate} \end{lemma} \begin{proof} If the first condition holds, we are done. Otherwise, fix $e$ to be the rightmost edge in $P_M$. If the second condition does not hold for $e$, then there exists exactly one pair $(u,v)\in Q_{P_{e^l}}$ with $\delta_{P_M}(u,v)\geq \delta$. Replace $e$ with the consecutive edge to its left in ${P_M}$. Continue the process until for every $(u,v)\in Q_{P_{e^l}}$, $\delta_{P_M}(u,v)<\delta$.
Note that this condition holds when $P_{e^l}$ contains a single edge. If the process ends with a separating edge $e$ for which there exists a pair $(u,v)\in Q_{P_{e^r}}$ with $\delta_{P_M}(u,v)\geq \delta$, then $u$ is a common endpoint with the preceding edge in the process and there exists a point $w\in P_{e^l}$ such that the two pairs $(w,u),(u,v)$ satisfy the first condition. \end{proof} \noindent \textbf{Approximation bound for Solution (ii).} \ Consider the value $c_h W$, where $c_h=20 \epsilon$. One of the two conditions of Lemma~\ref{lem:sf}, denoted by L\ref{lem:sf}.1 and L\ref{lem:sf}.2, respectively, must hold for $\delta = c_h W$. We start by assuming that condition L\ref{lem:sf}.1 holds, which leads to Lemma~\ref{lem:approx1}. For every $q \in P_M$, the cost of Solution (ii) is at most $w(P_M)+\max\{|q p_1|,|q p_z|\}+2w(R)$. Consider the path $\sim$$f(P_M)$ obtained from $P_M$ after applying the \emph{flatten} procedure $f$ on the sub-paths $P_{e^l}$ and $P_{e^r}$. Note that the length of this path is at most $w(P_M)-2c_h W(1-1/c_s)$ and it shares common endpoints with $P_M$. Let $\tilde{c}$ be the point on $\sim$$f(P_M)$ closest to its midpoint. The midpoint may lie on an edge of $P_M$ or on a shortcut performed by $f$. If it lies on a shortcut, we undo it and only one shortcut remains. Thus, the point $\tilde{c}$ is at Euclidean distance at most \ \( \frac{1}{2}[ w(P_M)-c_h W(1-\frac{1}{c_s})+w(e_M) ] \) \ from both endpoints and we have \[ SOL \leq W(1-r+ \frac{1}{2}[(1-r)-c_h(1-\frac{1}{c_s})+l] +2r) = W[1.5-\frac{1}{2}(c_h(1-\frac{1}{c_s})-l-r)]. \] This implies, \begin{align}\label{eq:ii} \frac{SOL}{OPT} \leq \frac{1.5-\frac{1}{2}(c_h(1-\frac{1}{c_s})-l-r)}{1+l}. \end{align} \begin{lemma}\label{lem:approx1} If condition L\ref{lem:sf}.1 holds for $\delta = c_h W$ then $SOL \leq (1.5-\epsilon)OPT$.
\end{lemma} \begin{proof} Assume towards contradiction that $SOL > (1.5-\epsilon)OPT$; then by Corollary~\ref{cor:Soli}, $r<2\epsilon$, and together with equation~(\ref{eq:ii}) we obtain \\ \begin{align*} 1.5-\epsilon & < \frac{1.5-\frac{1}{2}(c_h(1-\frac{1}{c_s})-l-r)}{1+l} \ < \ 1.5-\frac{1}{2}(c_h(1-\frac{1}{c_s})-r) \\ & < 1.5-\frac{1}{2}(20\epsilon(\frac{1}{5})-2\epsilon) \ < \ 1.5-\epsilon \qquad \qquad \Rightarrow\ \text{ \ \ contradiction.} \end{align*} \end{proof} From now on, assume condition L\ref{lem:sf}.2 holds for $\delta = c_h W$ and let $e \in P_M$ be the edge satisfying the condition. Let $t = (\sum_{(u,v)\in Q_{P_{e^l}}}\delta_{P_M}(u,v) + \sum_{(u,v)\in Q_{P_{e^r}}}\delta_{P_M}(u,v))\ / \ W$. We give an additional bound on the cost of Solution (ii) using the same arguments used for the case where condition L\ref{lem:sf}.1 holds. Let $\tilde{c}$ be the point on $\sim$$f(P_M)$ closest to its midpoint. Since every pair $(u,v)\in Q_{P_{e^l}}\cup Q_{P_{e^r}}$ satisfies $\delta_{P_M}(u,v) \leq c_h W$, the point $\tilde{c}$ is at Euclidean distance at most \ \( \frac{1}{2} [ w(P_M)-(t-c_h) \cdot W(1-\frac{1}{c_s})+w(e_M)] \) \ from both endpoints, thus \begin{align}\label{eq:iii} \frac{SOL}{OPT} \leq \frac{1.5-\frac{1}{2}((t-c_h)(1-\frac{1}{c_s})-l-r)}{1+l}. \end{align} Note that although the analysis for equation~(\ref{eq:iii}) considers the path $\sim$$f(P_M)$, the \emph{flatten} procedure $f$ is performed after Solution (ii) is computed. In the analysis of Solutions (iii) and (iv) we have $w(R)\leq(r+t)W$, since applying $f$ on $P_{e^l}$ and $P_{e^r}$ moves portions of the paths connecting points in $Q_{P_{e^l}}\cup Q_{P_{e^r}}$ to $R$.
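The bound for Solution (iv) below relies on the \emph{$1$D RA} algorithm applied with $h_S$, whose correctness rests on the \emph{line alike\ } condition stated at the beginning of the section. That condition can be checked by brute force; the following Python sketch is illustrative (the argument $h$ is any user-supplied symmetric distance on $1$-based indices, and the function name is ours):

```python
def is_line_alike(n, h):
    """Check h(v_i, v_l) >= h(v_j, v_k) for all 1 <= i <= j < k <= l <= n.
    h takes a pair of 1-based indices; O(n^4) brute force."""
    for i in range(1, n + 1):
        for l in range(i + 1, n + 1):
            for j in range(i, l + 1):
                for k in range(j + 1, l + 1):
                    if h(i, l) < h(j, k):
                        return False
    return True
```

For instance, $h(i,j)=|i-j|$ satisfies the condition, since a nested pair of indices spans at most as far as the enclosing pair.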
Consider the iteration of the algorithm for the edge $e$ and a choice of $4$ points $p_{l}, p_{l'}, p_{r'}, p_{r} \in P_M$ satisfying: $p_{r}$ is the rightmost point in $P_M$ with $u_r \in T(p_r)$ connected (in either direction) in $G_{\rho^*}$ to a point in $T(p_{l'})$ for $p_{l'}$ to the left of $e$ and, symmetrically, $p_{l}$ is the leftmost point in $P_M$ with $u_l \in T(p_l)$ connected (in either direction) to a point in $T(p_{r'})$ for $p_{r'}$ to the right of $e$. That is, there is no edge in $G_{\rho^*}$ connecting a point in $T(p)$ for $p \in P_l\backslash \{p_l\}$ and a point in $T(q)$ for $q \in P_{r'}$, and no edge connecting a point in $T(p)$ for $p \in P_r\backslash \{p_r\}$ and a point in $T(q)$ for $q \in P_{l'}$. Let $x$ denote the ratio $ w(P_x) / W. $ \\[2mm] \noindent \textbf{Approximation bound for Solution (iii).} \ Performing the \emph{Hub} algorithm on $P_l$ and $P_r$, separately, results in two assignments with a total cost of at most $1.5(W-w(P_x)-w(R)) + w(e_M)/2 + w(e_M)/2 $. Directing the path $P_x$ and the trees in $R$, and assigning all roots in $R$ their assignment, together with adding the two edges, $(u_l, u_{r'})$ for $u_l \in T(p_l), u_{r'} \in T(p_{r'})$ of minimal length and $(u_{l'},u_r)$ for $u_{l'} \in T(p_{l'}), u_r \in T(p_r)$ of minimal length (or the edge $(p_l,p_r)$ instead if it is cheaper) costs at most \ \(w(P_x)+ 2\cdot w(R) + |u_l u_{r'}|+|u_{l'} u_{r}|.
\) Overall, we have a total cost of at most \ \( W + \frac{1}{2}(1-x + (r+t)) \cdot W + l W + |u_l u_{r'}|+|u_{l'} u_{r}|.\) Since there is an edge connecting a point in $T(p_l)$ with a point in $T(p_{r'})$ (in some direction) and an edge connecting a point in $T(p_{l'})$ with a point in $T(p_r)$ in the optimal solution, we have $OPT \geq W + |u_l u_{r'}|+|u_{l'} u_{r}| - w(e_M)$, hence, \begin{align}\label{eq:iv} \frac{SOL}{OPT} & \leq \frac{ W(1-l+ \frac{1}{2}(1-x + (r+t)) + 2 l) +|u_l u_{r'}|+|u_{l'} u_{r}| } {W(1-l) + |u_l u_{r'}|+|u_{l'} u_{r}|} \nonumber\\ & \leq 1+ \frac{W(\frac{1}{2}(1-x + (r+t)) + 2l)} {W(1-l) + |u_l u_{r'}|+|u_{l'} u_{r}|} \leq 1+ \frac{1}{2}(1-x + (r+t)) + 2l. \end{align} \noindent \textbf{Approximation bound for Solution (iv).} \ Let $\rho_{l}: P_l\rightarrow \mathds{R}^+$ and $\rho_{r}: P_r\rightarrow \mathds{R}^+$ denote the assignments obtained by applying the \emph{$1$D RA} algorithm on $P_l$ and $P_r$, respectively, with respect to the distance function $h_S$. Let $\rho': P_l\cup P_r \rightarrow \mathds{R}^+$ denote the union of the two assignments and let $OPT'$ denote the cost of $\rho'$, i.e., $OPT' = \sum_{v \in P_l \cup P_r} \rho'(v)$. \begin{claim}\label{cl:OPT'} $OPT' \leq OPT$. \end{claim} \begin{proof} We show that the optimal assignment $\rho^*$ can be adjusted to an assignment\\ \mbox{$\rho:P_l \cup P_r \rightarrow \mathds{R}^+$}, valid for $P_l$ and $P_r$, separately, with respect to $h_S$, of the same cost.\\ We define, $$\rho(p_l)=\max_{ \substack{v\in T(p_i),\\ l\leq i \leq m}} \{\rho^*(v)\}, \qquad \rho(p_r)=\max_{ \substack{v\in T(p_i),\\ m+1\leq i \leq r}} \{\rho^*(v)\},$$ and for every $p_j$ with $j \in \{1,..,l-1\} \cup \{r+1,..,z\}$, \ \ $\rho(p_j)=\max_{v\in T(p_j)} \{ \rho^*(v)\}.$ Let $u$ and $v$ be two points in the same sub-path, w.l.o.g., $P_l$, and let $(u=u_1,u_2,...,u_k=v)$ be the path from $u$ to $v$ in $G_{\rho^*}$. 
Consider the sequence $(u=y_1,y_2,...,y_k=v)$, obtained by replacing every $u_i \in T(P_l)$ with $y_i=r(u_i)$, and every $u_i \in T(P_x\cup P_r)$ with $y_i=p_l$, for $1 \leq i \leq k$. We prove the above sequence forms a path from $u$ to $v$ in the graph induced by $\rho$ with respect to $P_l$ and $h_S$, and conclude that $\rho$ is valid and $OPT' \leq OPT$. Consider a pair of consecutive nodes in the above sequence, $(y_i,y_{i+1})$. Note that if $u_i=p_j$ (resp. $u_{i+1}=p_j$) for $1 \leq j < l$, then $u_{i+1}=p_h$ (resp. $u_{i}=p_h$) for $1 \leq h \leq m$; thus, for every $1 \leq i < k$, by the definition of $\rho$ and $h_S$ we have, $\rho(y_i)\geq \rho^*(u_i) \geq |u_i u_{i+1}|\geq h_S(y_i, y_{i+1})$ and the claim follows. \end{proof} Let $\sim$$g(\rho'):P_{l'}\cup P_{r'} \rightarrow \mathds{R}^+$ be the union of the assignments obtained after applying the transformation $g$ on each of $P_l$ and $P_r$, separately. The following lemma bounds its cost. \begin{lemma}\label{lem:g(rho)} $cost(\sim$$g(\rho')) \leq [c_s + (c_k + 2c_s(c_k +1))(r+t)]OPT.$ \end{lemma} \begin{proof} Consider applying transformation $g$ on $P_{l}$ and $P_{r}$. By multiplying the range of every point in $P_{l}\cup P_{r}$ by $c_s$ we obtain an assignment of cost $c_s\cdot OPT'$. As for the additive part and the second stage, we analyze the cost with respect to $P_{l'}$; the case of $P_{r'}$ is symmetric. Every $p_j \in P_{l'}$ is responsible for an additional cost of at most $X_j = c_k \cdot w(T_j) + \delta_{P_{l'}}(p_{j^-},p_{j^+}) \leq c_k \cdot w(T_j) + c_s|p_{j^-} p_{j^+}| \leq c_k \cdot w(T_j) + c_s[2c_k \cdot w(T_j) + w(T_{j^-}) + w(T_{j^+}) ]$, where $p_{j^-}$ and $p_{j^+}$ are defined as in the definition of $g$. The first element in the summation is the range added to $p_j$ itself, and the second is the cost of directing the path between $p_{j^-}$ and $p_{j^+}$ towards $p_j$ (depicted in Fig.~\ref{fig:transformation} in black).
Seemingly, to compute $cost(\sim$$g(\rho'))$ we should sum $X_j$ over all $p_j \in P_{l'} \cup P_{r'}$ and add it to the cost $c_s\cdot OPT'$; however, we observe that it suffices to consider a point $p_i$ only once as $p_{j^-}$, for the rightmost point $p_j$ such that $i=j^-$, and only once as $p_{j^+}$, for the leftmost point $p_k$ such that $i=k^+$. Thus, we can charge $p_{j^-}$ and $p_{j^+}$ themselves once for each of the elements $c_s \cdot w(T_{j^-})$ and $c_s \cdot w(T_{j^+})$ in the overall summation. Namely, charge every point $p_j$ for a total range increase of $Y_j=w(T_j)[c_k + c_s(2c_k +2)]$. Summing $Y_j$ over all $p_j \in P_{l'}\cup P_{r'}$ and adding the cost $c_s \cdot OPT'$, using Claim~\ref{cl:OPT'}, gives \begin{align*} cost(\sim g(\rho')) & \leq c_s \cdot OPT' + \sum_{1 \leq j \leq z} w(T_j)[c_k + c_s(2c_k +2)]\\ & = c_s \cdot OPT' + [c_k + 2c_s(c_k +1)](w(R))\\ & \leq [c_s + (c_k + 2c_s(c_k +1))(r+t)]OPT. \end{align*} \end{proof} Note that $g$ has already assigned to every $p_j \in P_{l'}\cup P_{r'}$ a range greater than $w(T(p_j))$. Directing all trees in $R$ towards their roots, directing the path $P_x$ and adding the edge $(p_l,p_r)$ adds to the cost at most $(2x+(r+t)) W < (2x+(r+t)) OPT$, and together with Lemma~\ref{lem:g(rho)} we obtain \begin{align}\label{eq:v} \frac{SOL}{OPT} & \leq c_s + (c_k + 2c_s(c_k +1)+1)(r+t)+2x. \end{align} \begin{lemma}\label{lem:approx2} If condition L\ref{lem:sf}.2 holds for $\delta = c_h W$, then $SOL \leq (1.5-\epsilon)OPT$. \end{lemma} \begin{proof} Assume towards contradiction that $SOL > (1.5-\epsilon)OPT$. By Corollary~\ref{cor:Soli}, $r<2\epsilon$, and together with equation~(\ref{eq:iii}) we obtain, \begin{align*} 1.5-\epsilon &< \frac{1.5-\frac{1}{2}((t-c_h)(1-\frac{1}{c_s})-l-r)}{1+l} < 1.5-\frac{1}{2}((t-20\epsilon)\frac{1}{5}-r) \\ &< 1.5-\frac{t}{10}+3\epsilon \ \ \qquad \qquad \Rightarrow \ \ \ t < 40\epsilon.
\end{align*} Replacing $r$ and $t$ with the above upper bounds in equation~(\ref{eq:v}) gives, \begin{align*} 1.5-\epsilon &< c_s + (c_k + 2c_s(c_k +1)+1)(r+t)+2x \\ &< \frac{5}{4}+70(42 \epsilon)+2x \ \ \qquad \Rightarrow \ \ \ x > \frac{1}{8}-1472\epsilon, \end{align*} and by equation~(\ref{eq:iv}) we have, \begin{align*} 1.5-\epsilon &< 1+ \frac{1}{2}(1-x + (r+t)) + 2l < 1+ \frac{1}{2}(1-(\frac{1}{8}-1472\epsilon)+ 42\epsilon) + 2l \\ &< 1.5 - \frac{1}{16}+757\epsilon+2l \ \ \ \ \Rightarrow \ \ \ l > \frac{1}{32}-380\epsilon. \end{align*} The lower bound on $l$ together with equation~(\ref{eq:i}) imply, \begin{align*} 1.5-\epsilon &< \frac{1.5-(r-l)/2}{1+l} < \frac{1}{2} + \frac{1}{1+l} < \frac{1}{2} + \frac{1}{1+\frac{1}{32}-380\epsilon} \qquad \Rightarrow \ \ \ \epsilon > \frac{8}{10^5} \end{align*} in contradiction to our choice of $\epsilon=\frac{5}{10^5}$. \end{proof} We conclude with Theorem~\ref{theo:2Dmain}, derived from Lemma~\ref{lem:sf} together with Lemmas~\ref{lem:approx1} and~\ref{lem:approx2}. \begin{theorem}\label{theo:2Dmain} Given a set $S$ of points in $\mathds{R}^d$ for $d \geq 2$, a minimum cost range assignment $(1.5-\epsilon)$-approximation can be computed in polynomial time for $S$, where $\epsilon=\frac{5}{10^5}$. \end{theorem} The reader may notice that our algorithm yields a better approximation bound than stated in the above theorem. However, we preferred simplicity of presentation over a more complicated analysis resulting in a tighter bound. \bibliographystyle{plain}
\section{Introduction} Hilbert functions and graded Betti numbers are widely studied invariants in commutative algebra. In particular, the problem of transforming an ideal into another that has the same Hilbert function and graded Betti numbers greater than or equal to those of the original ideal is one of interest to many researchers. One of the earliest results in this direction was Macaulay's Theorem \cite{Ma}, which states that if $A=K[x_1,...,x_n]$ is a polynomial ring over a field $K$, then there exists a lex segment ideal realizing the Hilbert function of any homogeneous ideal of $A$. Later, Bigatti \cite{B}, Hulett \cite{H}, and Pardue \cite{P} proved that lex segment ideals attain the highest Betti numbers among all ideals having the same Hilbert function. There are two main conjectures in this area of research. The first is a conjecture of Eisenbud, Green, and Harris \cite{EGH1}, \cite{EGH2}, which asserts that for a homogeneous ideal $I$ containing a homogeneous regular sequence $(f_1,...,f_r)$ with degrees $e_1\leq e_2\leq \cdots \leq e_r$, there exists a lex-plus-powers ideal $L+P$ which has the same Hilbert function as $I$, where $P=(x_1^{e_1},...,x_r^{e_r})$. The second is Evans' Lex-Plus-Powers Conjecture \cite{FR}, which proposes that in this situation, the graded Betti numbers satisfy $b_{ij}(L+P)\geq b_{ij}(I)$ for all $i,j$. Recently, many authors have proved a series of results related to these conjectures, for example \cite{A}, \cite{CK}, \cite{CM}, \cite{C}, \cite{F}, \cite{MM}, \cite{MP}, and \cite{R}. A strong result has been shown by Mermin and Murai in \cite[Theorem 8.1]{MM}. They prove that the Lex-Plus-Powers Conjecture holds when $(f_1,...,f_r)$ is a regular sequence of monomials. Notice that, under this assumption, the Eisenbud-Green-Harris Conjecture easily follows from Clements and Lindstr{\"o}m's Theorem \cite{CL}. A generalization of the Mermin-Murai result was shown by Caviglia and Sbarra \cite{CS}.
In their article, the authors study homogeneous ideals $I$ containing $P+\widetilde{L}$, where $\widetilde{L}$ is a piecewise lex ideal, that is, an ideal which is the sum of extensions to $A$ of lex segment ideals $L_i\subset K[x_1,...,x_i]$. The quotient rings $A/(P+\widetilde{L})$ are known as Shakin rings. Their result states that there is a lex ideal $L$ such that $P+\widetilde{L}+L$ and $I$ have the same Hilbert function, and the graded Betti numbers do not decrease when we replace $I$ by the ideal $P+\widetilde{L}+L$. Unfortunately, the upper bound for the graded Betti numbers was only shown when char($K$)=0. The main theorem of this paper removes the assumption on the characteristic of the field $K$ in the above result. In Section 2, we describe the operations used in \cite{MM} to replace the ideal $I$ by a strongly-stable-plus-$P$ ideal whose graded Betti numbers are an upper bound for those of the monomial ideal $I$. We prove that strongly stable ideals are fixed under these operations. Section 3 contains the proof of our main theorem using a result of Caviglia and Kummini \cite{CK} to reduce the problem to the characteristic zero case. \section{Shifting and Compression} Throughout this paper, $A=K[x_1,...,x_n]$ is a polynomial ring over a field $K$, where char($K$) is arbitrary, and $P=(x_1^{e_1},...,x_r^{e_r})$, for some $r\leq n$ and $2\leq e_1\leq e_2\leq \cdots \leq e_r$. Furthermore, throughout this section, assume $I$ is a monomial ideal containing $P+J$ where $J$ is a strongly stable ideal. Recall that $J$ is strongly stable if it satisfies the combinatorial property that whenever $x_i m\in J$, then $x_j m\in J$ for all monomials $m$ and for all $j<i$ \cite{G}. In the proof of their main theorem, Mermin and Murai show that there exists a strongly-stable-plus-$P$ ideal $B$ with the same Hilbert function as the ideal $I$ such that $b_{ij}(B)\geq b_{ij}(I)$ for all $i,j$, \cite[Proposition 8.7]{MM}.
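The strongly stable property just recalled is a finite check on the generators: it suffices to verify the variable exchanges $x_j\cdot(g/x_i)$, $j<i$, for each minimal generator $g$ and each $x_i$ dividing $g$. The following Python sketch is purely illustrative (monomials encoded as exponent tuples over $x_1,...,x_n$; function names are ours):

```python
def divides(g, m):
    """g | m for monomials given as exponent tuples."""
    return all(a <= b for a, b in zip(g, m))

def in_ideal(m, gens):
    """Monomial membership: m lies in (gens) iff some generator divides it."""
    return any(divides(g, m) for g in gens)

def is_strongly_stable(gens):
    """Whenever x_i*m is in J, so is x_j*m for every j < i; it suffices
    to test the exchange x_j*(g/x_i) for each generator g and each
    variable x_i dividing g."""
    for g in gens:
        for i, e in enumerate(g):
            if e == 0:
                continue
            for j in range(i):            # j < i: an earlier variable
                swapped = list(g)
                swapped[i] -= 1           # divide by x_i
                swapped[j] += 1           # multiply by x_j
                if not in_ideal(tuple(swapped), gens):
                    return False
    return True
```

For example, $(x_1^2,\, x_1x_2,\, x_2^2)$ is strongly stable, while $(x_2)$ is not, since the exchange $x_1$ is not in the ideal.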
In this section, we recall the operations that Mermin and Murai use to construct the ideal $B$ from $I$, and show that in addition to the properties above, we also have $J\subset B$. We will conclude this section with the proof of the following proposition: \begin{Proposition}\label{MainProp} If $I$ is a monomial ideal containing $P+J$, then there exists a strongly-stable-plus-$P$ ideal $B$ with the same Hilbert function as $I$ such that $b_{ij}(B)\geq b_{ij}(I)$ for all $i,j$ and $J\subset B$. \end{Proposition} For pairs of variables $a>_{\text{lex}} b$, the ideal $B$ is constructed in \cite{MM} in finitely many steps by replacing $I$ with any of the following ideals: \begin{enumerate} \item Shift$_{a,b}(I)$, \item Shift$_{a,b,t}(I)+P$, \item $T=T'+P$ as in Proposition \ref{Compression}. \end{enumerate} We introduce the definitions of the basic operations used above, and prove that strongly stable ideals do not move after replacing $I$ by any of these ideals. \begin{Definition} Let $I$ be a monomial ideal, and fix variables $a>_{lex} b$ and $t\in \mathbb{Z}_{\geq 0}$. The {\bf $(a,b,t)$-shift} of $I$, denoted Shift$_{a,b,t}(I)$, is the $K$-vector space generated by monomials of the form: \begin{equation*} \begin{Bmatrix*} \begin{split} fa^sb^r &| fa^sb^r\in I, r<t\\ fa^sb^{s+t} &| fa^sb^{s+t}\in I\\ fa^lb^{s+t} &| fa^lb^{s+t} \in I \text{ or } fa^sb^{l+t}\in I\\ fa^sb^{l+t} &| fa^lb^{s+t} \in I \text{ and } fa^sb^{l+t}\in I \end{split} \end{Bmatrix*} \end{equation*} where the set is taken over all monomials $f$ such that $a\nmid f$ and $b\nmid f$, and over all integers $0\leq s<l$. \end{Definition} \begin{Remark} Notice that when $fa^lb^{s+t} \in I$ and $fa^sb^{l+t}\in I$, then both monomials $fa^lb^{s+t}$ and $fa^sb^{l+t}$ will be generators of Shift$_{a,b,t}(I)$. \end{Remark} \begin{Definition} The $(a,b)$-shift of $I$ is the $(a,b,0)$-shift of $I$ as defined above. 
\end{Definition} \begin{Remark} For $t\neq 0$, Shift$_{a,b,t}(I)$ does not necessarily fix ideals generated by powers of variables. Thus, in order to preserve the ideal $P$ when applying the shifting operation for $t\neq 0$, Mermin and Murai use the operation Shift$_{a,b,t}(I)+P$. \end{Remark} \begin{Proposition}\label{Shifting} Let $I$ be a monomial ideal containing $P+J$. Fix variables $a >_{\text{lex}} b$ and $t>0$. Then, $J\subset \text{Shift}_{a,b,t}(I)$. \end{Proposition} \textit{Proof.} Write $m=m'a^{\alpha}b^{\beta}\in J$, where $a\nmid m'$ and $b\nmid m'$. If $\beta\leq \alpha+t$, then it is clear that $m\in \text{Shift}_{a,b,t}(I)$. The only case where we need to use the assumption that $J$ is strongly stable is when $\beta >\alpha+t$. Here, we need to show that $m'a^{\beta-t}b^{\alpha+t}\in I$. Let $N=\beta-(\alpha+t)$. Since $J$ is strongly stable and $N>0$, we have $m\cdot \frac{a^N}{b^N}\in J\subset I$. We see that $m\cdot\frac{a^N}{b^N}=m'a^{\beta-t}b^{\alpha+t}$. Since both $m=m'a^{\alpha}b^{(\beta-t)+t}\in I$ and $m'a^{\beta-t}b^{\alpha+t}\in I$, it follows that $m\in\text{Shift}_{a,b,t}(I)$. \qed The final operation used to transform the ideal $I$ in the proof of Mermin and Murai is a compression. The following definition is described by Mermin in \cite{Me}: \begin{Definition} Let $I$ be a monomial ideal, and fix variables $a>_{lex} b$. Write $I$ as a direct sum of the form $I=\bigoplus\limits_f fV_f$, where the sum is taken over all monomials $f$ in $K[\{x_1,...,x_n\}\setminus \{a,b\}]$ and the $V_f$ are $K[a,b]$-ideals. The {\bf $\{a,b\}$-compression} of $I$ is the ideal $\bigoplus\limits_f fN_f$, where $N_f\subset K[a,b]$ are the lex ideals with the same Hilbert function as $V_f$. \end{Definition} \begin{Proposition}\label{Compression} Let $I$ be a monomial ideal containing $P+J$. Fix variables $a >_{\text{lex}} b$.
Let $I'$ be the ideal of $A$ generated by all the minimal generators of $I$ except for $b^{e_b}$, let $T'$ be the $\{a,b\}$-compression of $I'$, and let $T=T'+P$. Then, $J\subset T$. \end{Proposition} \textit{Proof.} As in the definition of $\{a,b\}$-compression, write $I'=\bigoplus\limits_f fV_f$ with $f\in \text{Mon}(K[\{x_1,...,x_n\}\setminus \{a,b\}])$ and $V_f\subset K[a,b]$. Let $T'=\bigoplus\limits_f fN_f$ be the $\{a,b\}$-compression of $I'$. First, suppose $b^{e_b}$ is not a minimal generator of $I$. In this case, $I'=I$, and therefore, $T'$ is the $\{a,b\}$-compression of $I$. Since strongly stable ideals are $\{a,b\}$-compressed, as stated in Proposition 3.8 of \cite{Me}, we have $J\subset T'$. If instead $b^{e_b}$ is a minimal generator of $I$, let $m=m'a^{\alpha}b^{\beta}$ be a monomial in $J$ with $a\nmid m'$, $b\nmid m'$. Clearly, if $\beta \geq e_b$, then $m\in P\subset T$. So we may assume $\beta <e_b$. Since $J$ is strongly stable, we have: \begin{equation*} m=m'a^{\alpha}b^{\beta}<_{lex} m'a^{\alpha +1}b^{\beta -1}<_{lex} \cdots <_{lex} m'a^{\alpha +\beta}\in J. \end{equation*} Furthermore, all of these monomials are in $I'$. Thus, \begin{equation*} a^{\alpha}b^{\beta}<_{lex}a^{\alpha +1}b^{\beta -1}<_{lex} \cdots <_{lex}a^{\alpha +\beta}\in V_{m'}. \end{equation*} These are the lex-largest monomials of degree $\alpha +\beta$ in $K[a,b]$, hence they are also elements of the lex ideal $N_{m'}$. In particular, this implies that $m\in T'$. \qed\\ We conclude this section with the proof of the main proposition:\\ \textit{Proof of Proposition \ref{MainProp}.} By Proposition 8.7 of \cite{MM}, there exists a strongly-stable-plus-$P$ ideal $B$ with the same Hilbert function as $I$ and $b_{ij}(B)\geq b_{ij}(I)$ for all $i,j$. Furthermore, by Propositions \ref{Shifting} and \ref{Compression}, strongly stable ideals do not move under the operations used to construct the ideal $B$.
Hence, when $J\subset I$, we also have $J\subset B$.\qed \section{Main Result} In the previous section, we showed that strongly stable ideals do not move under any of the three above operations. Now, we apply this to the situation in which $J=\widetilde{L}$ is a piecewise lex ideal. We begin by recalling the definition introduced by Shakin \cite{S}. \begin{Definition} For each $1\leq i\leq n$, let $A_{(i)}$ be the polynomial ring over $K$ in the first $i$ variables. An ideal $\widetilde{L}\subset A$ is called a {\bf piecewise lex ideal} if it can be written as a sum: \begin{equation*} \widetilde{L}=L_{(1)}A+L_{(2)}A+...+L_{(n)}A \end{equation*} where $L_{(i)}$ is a lex ideal in the ring $A_{(i)}$ for each $i$. \end{Definition} Since piecewise lex ideals are strongly stable, we have shown that when $I$ contains $P+\widetilde{L}$, the ideal $B$ does as well. \begin{Theorem} Let $I \subset A$ be a homogeneous ideal with $P+\widetilde{L}\subset I$. There exists a lex ideal $L$ such that \begin{enumerate}[(i)] \item $P+\widetilde{L}+L$ has the same Hilbert function as $I$. \item $b_{ij}(P+\widetilde{L}+L)\geq b_{ij}(I)$ for all $i, j$. \end{enumerate} \end{Theorem} \textit{Proof.} Without loss of generality, by a standard upper-semicontinuity argument, we may assume $I$ is a monomial ideal containing $P+\widetilde{L}$ by replacing $I$ with in$(I)$. By Proposition \ref{MainProp}, there is a strongly-stable-plus-$P$ ideal $B$ with the same Hilbert function as $I$ and such that $b_{ij}(B)\geq b_{ij}(I)$. Furthermore, we have that $P+\widetilde{L}\subset B$. Since $B$ is a strongly-stable-plus-$P$ ideal, the graded Betti numbers $b_{ij}(B)$ do not depend on char($K$) by \cite[Corollary 3.7]{CK}. Hence, we can assume char($K$)=0. The characteristic zero result of Caviglia and Sbarra, \cite[Theorem 3.4]{CS}, gives a lex ideal $L$ such that $P+\widetilde{L}+L$ has the same Hilbert function as $B$ and $b_{ij}(P+\widetilde{L}+L)\geq b_{ij}(B)$ for all $i, j$.
Again, since $P+\widetilde{L}+L$ is strongly-stable-plus-$P$, the Betti numbers do not depend on the characteristic, so the inequality also holds for arbitrary char($K$). \qed \begin{bibdiv} \begin{biblist} \bib{A}{article}{ title={On the Eisenbud-Green-Harris conjecture} author={Abedelfatah, A.} date={2015} journal={Proc. Amer. Math. Soc.} volume={143} pages={105--115} } \bib{B}{article}{ title={Upper bounds for the Betti numbers of a given Hilbert function} author={Bigatti, A.} date={1993} journal={Comm. Algebra} volume={21} pages={2317--2334} } \bib{CK}{article}{ title={Poset embeddings of Hilbert functions and Betti numbers} author={Caviglia, G.} author={Kummini, M.} date={2014} journal={J. Algebra} volume={410} pages={244--257} } \bib{CM}{article}{ title={Some cases of the Eisenbud-Green-Harris conjecture} author={Caviglia, G.} author={Maclagan, D.} date={2008} journal={Math. Res. Lett.} volume={15} pages={427--433} } \bib{CS}{article}{ title={Distractions of Shakin rings} author={Caviglia, G.} author={Sbarra, E.} date={2014} journal={J. Algebra} volume={419} pages={318--331} } \bib{C}{article}{ title={An application of liaison theory to the Eisenbud-Green-Harris conjecture} author={Chong, K. F. E.} note={arXiv:1311.0939v1 [math.AC]} } \bib{CL}{article}{ title={A generalization of a combinatorial theorem of Macaulay} author={Clements, G.F.} author={Lindstr{\"o}m, B.} date={1969} journal={J. Combinatorial Theory} volume={7} pages={230--238} } \bib{EGH1}{article}{ title={Higher Castelnuovo theory} author={Eisenbud, D.} author={Green, M.} author={Harris, J.} date={1993} journal={Ast\'erisque} volume={218} pages={187--202} } \bib{EGH2}{article}{ title={Cayley-Bacharach theorems and conjectures} author={Eisenbud, D.} author={Green, M.} author={Harris, J.} date={1996} journal={Bull. Amer. Math.
Soc} volume={33} pages={295--324} } \bib{F}{article}{ title={Almost complete intersections and the lex-plus-powers conjecture} author={Francisco, C.} date={2004} journal={J. Algebra} volume={276} pages={737--760} } \bib{FR}{article}{ title={Lex-plus-powers ideals} author={Francisco, C.} author={Richert, B.} date={2007} journal={Lect. Notes Pure Appl. Math.} volume={254} pages={113--144} } \bib{G}{article}{ title={Generic initial ideals} author={Green, M.} date={2010} booktitle={Six lectures on commutative algebra} series={Mod. Birkh{\"a}user Class.} publisher={Birkh{\"a}user Boston} address={Boston} pages={119--186} } \bib{H}{article}{ title={Maximum Betti numbers of homogeneous ideals with a given Hilbert function} author={Hulett, H.} date={1993} journal={Comm. Algebra} volume={21} pages={2335--2350} } \bib{K}{article}{ title={Algebraic shifting} author={Kalai, G.} date={2002} journal={Adv. Stud. Pure Math.} volume={33} pages={121--163} } \bib{Ma}{article}{ label={Ma} title={Some properties of enumeration in the theory of modular systems} author={Macaulay, F.} date={1927} journal={Proc. London Math. Soc.} volume={26} pages={531--555} } \bib{Me}{article}{ label={Me} title={Compressed ideals} author={Mermin, J.} date={2008} journal={Bull. London Math. Soc.} volume={40} pages={77--87} } \bib{MM}{article}{ title={The Lex-Plus-Powers Conjecture holds for pure powers} author={Mermin, J.} author={Murai, S.} date={2011} journal={Adv. Math.} volume={226} number={3} pages={3511--3539} } \bib{MP}{article}{ title={Lexifying ideals} author={Mermin, J.} author={Peeva, I.} date={2006} journal={Math. Res. Lett.} volume={13} pages={409--422} } \bib{MH}{article}{ title={Algebraic Shifting and graded Betti numbers} author={Murai, S.} author={Hibi, T.} date={2009} journal={Trans. Amer. Math. Soc.} volume={361} pages={1853--1865} } \bib{P}{article}{ title={Deformation classes of graded modules and maximal Betti numbers} author={Pardue, K.} date={1996} journal={Illinois J. 
Math.} volume={40} pages={564--585} } \bib{R}{article}{ title={A study of the lex plus powers conjecture} author={Richert, B.} date={2004} journal={J. Pure Appl. Algebra} volume={186} pages={169--183} } \bib{S}{article}{ title={Piecewise lexsegment ideals} author={Shakin, D.} date={2003} journal={Sb. Math.} volume={194} pages={1701--1724} } \end{biblist} \end{bibdiv} \end{document}
2105.07279
\section{INTRODUCTION}\label{section:1} The second-order elastic constants are essential materials parameters, playing pivotal roles in many research areas of engineering \cite{R1, R2}, medicine \cite{R3,R4}, condensed matter physics~\cite{R5,R6, R7}, materials science \cite{R8}, geophysics \cite{R9}, and chemistry \cite{R10}. Moreover, the second-order elastic constants encode how acoustic waves propagate \cite{R54}. Despite these critical practical features, the second-order elastic constants have been measured for only a tiny fraction of known crystalline materials. This scarcity of data is due to the unavailability of large single crystals for many materials and the difficulty of precise experimental measurements \cite{R11}. The lack of such experimental data limits scientists' ability to design and develop novel materials. With the development of high-performance computing resources and density functional theory (DFT) \cite{R12}, the determination of the elastic constants of many materials has become practical. DFT is a robust technique for solving many-body problems via the Kohn-Sham equations \cite{R13,R14}, and several quantum-chemistry and solid-state physics software packages have been built upon it. A number of packages can calculate the second-order elastic stiffness tensor of 2D and 3D crystals. For example, {\small VASP} \cite{R15} can calculate second-order elastic constants using strain--stress relationships, and {\small CRYSTAL14} \cite{R16} can compute the piezoelectric and photoelastic tensors. Some software calculates the elastic constants using internal or external packages.
For example, {\small WIEN2k} \cite{R17} uses internal packages such as IRelast \cite{R18} and Elast (for cubic systems) to calculate the second-order elastic constants, while the \textsf{ElaStic} code \cite{R19} is an external package used with {\small Quantum Espresso} \cite{R53}, Exciting \cite{R87}, and {\small WIEN2k}. With the availability of these packages and the creation of elastic constant databases \cite{R20}, tools for their analysis and visualization are more in demand than ever. In the last two decades, owing to the observation of anomalous mechanical properties in some materials, much effort has been devoted to discovering and investigating materials with such features. Negative linear compressibility (NLC) \cite{R21}, negative Poisson's ratio (NPR) (\textit{auxetic} materials) \cite{R22, R23}, and highly anisotropic elastic moduli \cite{R24} are the most prominent anomalous elastic responses to stress and strain. These characteristics can be revealed by analyzing and visualizing elastic tensors. When a material is loaded in uniaxial tension, it extends in the direction of the applied load, and a lateral deformation accompanies this extension. These lateral deformations are quantified by a mechanical property known as Poisson’s ratio, defined as the negative of the ratio of the lateral (transverse) strain to the longitudinal strain under uniaxial stress. In a material with a positive Poisson’s ratio, when compressive (tensile) stress acts in one direction, the material tends to expand (shrink) in the perpendicular direction. Materials with an NPR show the opposite behavior (Fig.~\ref{fig:wide_1}(a)). This feature was considered in 1998 \cite{R30}, although NPR was first produced by Lakes in 1987 from conventional low-density open-cell polymer foams \cite{R25}.
In recent years, there has been increasing interest in exploring the possibility of the \textit{auxetic} phenomenon in 2D and 3D materials in order to design and develop high-performance nanoscale electromechanical devices. In addition to NPR, another unusual elastic property, NLC \cite{R26, R27}, which results from applying hydrostatic pressure to a 3D material and leads to an expansion in one direction (Fig.~\ref{fig:wide_1}(b)), has been observed in some materials. NLC was first reported in tellurium in 1922 \cite{R28}. Recent discoveries suggest that NLC is not as rare as previously considered and that many materials can offer such a property \cite{R26, R29}. Currently, two software packages and a Python library are available to analyze second-order elastic tensors and visualize the elastic properties of 3D materials with such features in mind. The first code is {\small ElAM}, developed by Marmier \cite{R30}. {\small ElAM}, implemented in Fortran90, is command-line driven and can output 2D cut figures in PostScript (PS) format and 3D surfaces in the Virtual Reality Modelling Language (VRML) format. The second, developed by R. Gaillac \textit{et al.} \cite{R31}, is {\small ELATE}, a Python module for manipulating elastic tensors and a standalone online application for routine analysis of elastic tensors. In this code, a Python module generates an HTML web page with embedded Javascript for dynamical plots. Notably, this code can import elastic data directly using the Materials API \cite{R20}. The \textsc{MechElastic} Python library was developed by Sobhit Singh \textit{et al.} \cite{R55}. This library uses {\small ELATE} as a module to analyze elastic anisotropy and auxetic features, and it allows direct visualization of the 3D spherical plot as well as 2D projections on the XY, XZ, and YZ planes.
Powered by the {\small ELATE} module, \textsc{MechElastic} can, in addition to these features, calculate compressive and shear velocities, velocity ratios, and the Debye velocity estimate when the mass density is supplied. Further, this package can plot equation of state (EOS) curves for energy and pressure for a variety of EOS models, such as Murnaghan, Birch, Birch-Murnaghan, and Vinet, by reading input energy/pressure versus volume data obtained from numerical calculations or experiments. The \texttt{matplotlib} \cite{R56} and \texttt{pyvista} \cite{R57} packages are used to visualize 2D figures and 3D surfaces in \textsc{MechElastic}. \begin{figure} \includegraphics[scale=1.0]{Fig_1.pdf} \caption{\label{fig:wide_1}Schematic representation of the directional (a) Poisson's ratio ($\nu$), (b) linear compressibility ($\beta$), (c) shear modulus (\textit{G}), and (d) Young’s modulus (\textit{E}). Blue arrows represent the direction of the exerted stress, and pink arrows show the axis along which the response is measured.} \end{figure} The present work’s primary motivation is to introduce a comprehensive and efficient program that accommodates all the features of these codes in one place, adds some new features, and addresses their shortcomings. Table \ref{Tab:0} compares the features of \textsc{El\textit{A}Tools} with those of \textsc{ElAM}, \textsc{ELATE}, and \textsc{MechElastic}. Note that other new features may be added in future updates of these packages.
\\\\ \begin{sidewaystable} \def\tikz\fill[scale=0.35](0,.39) -- (.25,0) -- (1,.8) -- (.25,.15) -- cycle;{\tikz\fill[scale=0.35](0,.39) -- (.25,0) -- (1,.8) -- (.25,.15) -- cycle;} \centering \caption{Comparison of \textsc{El\textit{A}Tools} with the available primary tools for analyzing anisotropic elastic properties.} \label{Tab:0} \centering \begin{tabular}{cccccc} \hhline{======} \multicolumn{2}{c}{\textbf{Features}} & \multicolumn{1}{l}{\textbf{MechElastic}} & \multicolumn{1}{l}{\textbf{ELAM}} & \multicolumn{1}{l}{\textbf{ELATE}} & \multicolumn{1}{l}{\textbf{ElATools}} \\ \hhline{======} \multicolumn{2}{c}{Main mechanical properties in 3D (see \textbf{Appendix A} )} & \cmark & \cmark & \cmark & \cmark \\ \hhline{------} \multicolumn{2}{c}{Main mechanical properties in 2D} & \cmark & \xmark & \xmark & \cmark \\ \hhline{------} \multicolumn{2}{c}{Hardness information} & \cmark & \xmark & \xmark & \cmark \\ \hhline{------} \multicolumn{2}{c}{Elastic tensor eigenvalues in 3D} & \cmark & \xmark & \cmark & \cmark \\ \hhline{------} \multicolumn{2}{c}{Elastic tensor eigenvalues in 2D} & \cmark & \xmark & \xmark & \cmark \\ \hhline{------} \multirow{8}{*}{Visualization of the 3D surfaces in 3D materials} & Shear modulus & \cmark & \cmark & \cmark & \cmark \\ \cline{2-6} & Poisson's ratio & \cmark & \cmark & \cmark & \cmark \\ \cline{2-6} & Pugh ratio & \cmark & \xmark & \xmark & \cmark \\ \cline{2-6} & Linear compressibility & \cmark & \cmark & \cmark & \cmark \\ \cline{2-6} & Bulk modulus & \cmark & \xmark & \xmark & \cmark \\ \cline{2-6} & Young's modulus & \cmark & \cmark & \cmark & \cmark \\ \cline{2-6} & Phase velocities & \xmark & \xmark & \xmark & \cmark \\ \cline{2-6} & Group velocities & \xmark & \xmark & \xmark & \cmark \\ \cline{2-6} & Power flow angles & \xmark & \xmark & \xmark & \cmark \\ \cline{2-6} & Minimum thermal conductivity & \xmark & \xmark & \xmark & \cmark \\ \hhline{------} \multicolumn{2}{c}{\begin{tabular}[c]{@{}c@{}}Visualization of the 2D 
projections on an arbitrary plane in 3D materials\end{tabular}} & \xmark & \xmark & \xmark & \cmark \\ \hhline{------} \multirow{3}{*}{Visualization of 2D polar covers in 2D materials} & Shear modulus & \xmark & \xmark & \xmark & \cmark \\ \cline{2-6} & Poisson's ratio & \xmark & \xmark & \xmark & \cmark \\ \cline{2-6} & Young's modulus & \xmark & \xmark & \xmark & \cmark \\ \hhline{------} \multicolumn{2}{c}{\begin{tabular}[c]{@{}c@{}}Visualization of the 2D heat-map and polar heat-map in 3D and 2D~materials\end{tabular}} & \xmark & \xmark & \xmark & \cmark \\ \hhline{------} \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}The database of elastic tensors \\(Materials Project’s database)\end{tabular}} & Offline & \xmark & \xmark & \xmark & \cmark \\ \cline{2-6} & Online & \cmark & \xmark & \cmark & \cmark \\ \hhline{------} \multirow{5}{*}{\begin{tabular}[c]{@{}c@{}}Various output formats for custom drawing\\or displaying with different software\end{tabular}} & VRML format & \xmark & \cmark & \xmark & \cmark \\ \cline{2-6} & HTML (offline) format & \xmark & \xmark & \xmark & \cmark \\ \cline{2-6} & agr format & \xmark & \xmark & \xmark & \cmark \\ \cline{2-6} & gpi format & \xmark & \xmark & \xmark & \cmark \\ \cline{2-6} & dat format & \xmark & \xmark & \xmark & \cmark \\ \hhline{======} \end{tabular} \end{sidewaystable} The highlighted features of \textsc{El\textit{A}Tools} are as follows: \begin{itemize} \item Compute and display the main mechanical properties, such as Young’s modulus, shear modulus, p-wave modulus, universal anisotropy index \cite{R32}, Chung-Buessem anisotropy index \cite{R32}, log-Euclidean anisotropy parameter \cite{R33}, Kleinman's parameter \cite{R58}, hardness information, Cauchy pressure, Poisson’s ratio, and Pugh’s ratio, according to the three averaging schemes of Voigt, Reuss, and Hill \cite{R34} - Many of these features are included in the {\small ElAM}, {\small ELATE}, and \textsc{MechElastic} codes.
\item Investigation of mechanical stability via calculation of the six (three) eigenvalues of the elastic tensor of 3D (2D) materials - This option exists only in {\small ELATE} (for 3D materials) and \textsc{MechElastic} (for 2D and 3D materials). \item Visualization of the 3D surfaces and 2D projections on any desired plane for shear modulus, Poisson’s ratio, Pugh ratio, linear compressibility, bulk modulus, and Young’s modulus - {\small ELATE} and \textsc{MechElastic} only depict these features on the XY, XZ, and YZ planes. Currently, visualization of the 3D surfaces and 2D projections on any desired plane for bulk modulus and Pugh ratio exists in neither {\small ElAM} nor {\small ELATE}. \item Visualization of the 3D surfaces and 2D projections on any desired plane for phase velocities, group velocities, and power flow angles (including primary, fast-secondary, and slow-secondary modes) in 3D materials - These options exist in none of {\small ElAM}, {\small ELATE}, or \textsc{MechElastic}. \item Visualization of the 2D polar covers for Poisson’s ratio, shear modulus, and Young’s modulus in 2D materials - This option does not exist in {\small ELATE}, \textsc{MechElastic}, or {\small ElAM}. \item An offline/online database of more than 13000 elastic tensors taken from the Materials API (Materials Project database) - This option is also available online in {\small ELATE} and \textsc{MechElastic}.
\item Supports various output formats for custom drawing or displaying with different software: ``.dat'' files for standard plotting, ``.wrl'' files for visualization of the 3D surfaces with the view3dscene software \cite{R35}, ``.agr'' files for visualization of the 2D projections, which can be opened with X{\scriptsize MGRACE} \cite{R36}, ``.html'' files for visualization of the 3D surfaces in any Web browser, and ``.gpi'' script files for visualization of the 3D surfaces, 2D heat maps (for 3D materials), polar heat maps (for 2D materials), and 2D projections, which can be run with G{\footnotesize NUPLOT} \cite{R37} - {\small ElAM} generates only VRML (for 3D visualization) and PS formats (for 2D cut visualization). {\small ELATE} provides images online in PNG format only. \textsc{MechElastic} displays properties using the \texttt{matplotlib} and \texttt{pyvista} Python libraries. \item A user-friendly code with a terminal-based graphical user interface (GUI). A summary of this terminal-based GUI is included in \textbf{Supplementary Information}. - This feature does not exist in {\small ElAM}, {\small ELATE}, or \textsc{MechElastic}. \end{itemize} The rest of the paper is arranged as follows: the theoretical background of elasticity and the analysis of elastic tensors are explained in detail in the next section. Package descriptions, including the workflow, code structure, installation, input and output files, visualization data, and test cases, are presented in Sec. 3. Sec. 4 presents a summary and outlook. Finally, appendices A, B, C, and D and the Supplementary Information file are provided as complementary parts. \section{ Theoretical Background} \label{section:2} \subsection{Hooke's law and elastic tensor of crystals } The shape of a solid material changes when it is subjected to stress. Provided that the stress is below a specific value, the elastic limit, the strain is recoverable (Fig.~\ref{fig:wide_2}).
\begin{figure} \includegraphics[scale=1]{Fig_2.pdf} \caption{\label{fig:wide_2}Schematic stress–strain curves and definitions of the angles used to describe directions in the calculations of \textsc{El\textit{A}Tools}. The elastic regime corresponds to the portion of the diagram where the strain is proportional to the stress.} \end{figure} This means that the material returns to its original shape when the stress is removed. In this elastic regime, according to Hooke’s law, for sufficiently low stresses the amount of strain is proportional to the magnitude of the applied stress: \begin{equation} \label{1)} \varepsilon =S\sigma , \end{equation} where \textit{S} is a constant called the \textit{elastic compliance constant} (ECC), or the \textit{compliance}. Equivalently, \begin{equation}\label{2)} \sigma =C\varepsilon ,\, \, \, E\equiv C=1/S, \end{equation} where \textit{C} is the \textit{elastic stiffness constant} (ESC), or \textit{stiffness}, and \textit{E} is Young’s modulus. The general form of Hooke's law can be written in tensor form, \begin{equation} \label{3)} \varepsilon _{ij} =S_{ijkl} \sigma _{kl} , \end{equation} where the \textit{S$_{ijkl}$} are the ECCs of the crystal. Also, as the tensor form of Eq.\eqref{2)}, \begin{equation} \label{4)} \sigma _{ij} =C_{ijkl} \varepsilon _{kl} , \end{equation} where the \textit{C$_{ijkl}$} are the ESCs. Eq.\eqref{3)} and Eq.\eqref{4)} each stand for nine equations, with nine terms on the right-hand side. The \textit{S$_{ijkl}$} and \textit{C$_{ijkl}$} are 4${}^{th}$-rank tensors, and \textit{$\epsilon$$_{ij}$} and \textit{$\sigma$$_{ij}$} are second-rank tensors. Hence, \textit{C$_{ijkl}$} (\textit{S$_{ijkl}$}) consists of 81 \textit{stiffness} (\textit{compliance}) constants of the crystal.
Due to the inherent symmetries of $\epsilon$$_{ij}$, $\sigma$$_{ij}$, and \textit{S$_{ijkl}$} or \textit{C$_{ijkl}$} (the minor symmetries inherited from the symmetric stress and strain tensors, together with the major symmetry $C_{ijkl}=C_{klij}$), the number of independent components of the 4\textit{${}^{th}$}-rank tensor reduces to 21 in the least symmetric case. A further reduction results from the symmetry of the crystal: 21 for triclinic, 13 for monoclinic, 9 for orthorhombic, 7 (or 6) for trigonal, 7 (or 6) for tetragonal, depending on the crystal class, 5 for hexagonal, and 3 for cubic. \subsection{Transformation law, Christoffel equation, and representation surfaces of elastic properties } A 4\textit{${}^{th}$-}rank tensor is defined (like tensors of lower rank) by its transformation law \cite{R38}. The 81 tensor components \textit{$A_{\, ijkl}$} representing a physical quantity are said to form a 4${}^{th}$-rank tensor if they transform on a change of axes to \textit{$A'_{\, ijkl}$}, where \begin{equation} \label{5)} A'_{\, ijkl} =a_{im} \, a_{jn} \, a_{ko} \, a_{lp} A_{mnop} . \end{equation} It can be shown that the 4\textit{${}^{th}$}-rank tensors \textit{S$_{ijkl}$} and \textit{C$_{ijkl}$} follow this rule \cite{R38}: \begin{equation} \label{6)} \left. \begin{array}{l} {\varepsilon '_{ij} =a_{ik} a_{jl} \varepsilon _{kl} ,} \\ {\varepsilon _{kl} =S_{klmn} \sigma _{mn} ,} \\ {\sigma _{mn} =a_{om} a_{pn} \sigma '_{op} ,} \end{array}\right\}\varepsilon '_{ij} =a_{ik} a_{jl} \, S_{klmn} \, a_{om} a_{pn} \, \sigma '_{op} . \end{equation} Comparing this last equation with Eq.\eqref{3)}, we have: \begin{equation} \label{7)} S'_{ijkl} =a_{im} a_{jn} a_{ko} a_{lp} S_{mnop} \, , \end{equation} which is the required transformation law. To express the anisotropic form of Hooke's law in matrix notation, we use the Voigt notation scheme.
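The transformation law of Eq.\eqref{5)} is a fourfold contraction with the rotation matrix, which is compact to express with \texttt{numpy.einsum}. The sketch below is illustrative only (it is not part of \textsc{El\textit{A}Tools}); the isotropic compliance tensor and its parameter values are assumptions, chosen so that the rotated tensor must equal the original:

```python
import numpy as np

def rotate_tensor4(S, a):
    # 4th-rank transformation law: S'_ijkl = a_im a_jn a_ko a_lp S_mnop
    return np.einsum('im,jn,ko,lp,mnop->ijkl', a, a, a, a, S)

# Illustrative isotropic compliance tensor (hypothetical Lame-like parameters).
lam, mu = 0.5, 1.0
I3 = np.eye(3)
S = (np.einsum('ik,jl->ijkl', I3, I3) + np.einsum('il,jk->ijkl', I3, I3)) / (4 * mu) \
    - lam / (2 * mu * (3 * lam + 2 * mu)) * np.einsum('ij,kl->ijkl', I3, I3)

# Rotation by 30 degrees about the z axis (an orthogonal matrix).
t = np.deg2rad(30.0)
a = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0,        0.0,       1.0]])

Sp = rotate_tensor4(S, a)
# An isotropic tensor is invariant under any rotation of the axes.
print(np.allclose(Sp, S))  # True
```

For an anisotropic crystal, the same routine yields the compliances in the rotated (measurement) frame used throughout this section.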
In $S'_{ijkl}$ and $S_{mnop}$, the first two suffixes are abbreviated into a single one running from 1 to 6, and the last two are abbreviated in the same way, according to the following Voigt scheme: \begin{equation} \label{8)} \begin{array}{ccccccc} {\rm Tensor\; notation:} & 11 & 22 & 33 & 23,32 & 31,13 & 12,21 \\ {\rm Matrix\; notation:} & 1 & 2 & 3 & 4 & 5 & 6 \end{array} \end{equation} Therefore, the components of the stress (\textit{$\sigma$}) and strain (\textit{$\epsilon$}) tensors are written with a single suffix running from 1 to 6, \begin{equation} \label{9)} \begin{array}{l} {\sigma _{ij} =\left(\begin{array}{ccc} {\sigma _{11} } & {\sigma _{12} } & {\sigma _{13} } \\ {\sigma _{12} } & {\sigma _{22} } & {\sigma _{23} } \\ {\sigma _{13} } & {\sigma _{23} } & {\sigma _{33} } \end{array}\right)\stackrel{{\rm Voigt}\, \, {\rm scheme}}{\longrightarrow}\left(\begin{array}{ccc} {\sigma _{1} } & {\sigma _{6} } & {\sigma _{5} } \\ {\sigma _{6} } & {\sigma _{2} } & {\sigma _{4} } \\ {\sigma _{5} } & {\sigma _{4} } & {\sigma _{3} } \end{array}\right),} \\\\ {\varepsilon _{ij} =\left(\begin{array}{ccc} {\varepsilon _{11} } & {\varepsilon _{12} } & {\varepsilon _{13} } \\ {\varepsilon _{12} } & {\varepsilon _{22} } & {\varepsilon _{23} } \\ {\varepsilon _{13} } & {\varepsilon _{23} } & {\varepsilon _{33} } \end{array}\right)\stackrel{{\rm Voigt}\, \, {\rm scheme}}{\longrightarrow}\left(\begin{array}{ccc} {\varepsilon _{1} } & {{\textstyle\frac{1}{2}} \varepsilon _{6} } & {{\textstyle\frac{1}{2}} \varepsilon _{5} } \\ {{\textstyle\frac{1}{2}} \varepsilon _{6} } & {\varepsilon _{2} } & {{\textstyle\frac{1}{2}} \varepsilon _{4} } \\ {{\textstyle\frac{1}{2}} \varepsilon _{5} } & {{\textstyle\frac{1}{2}} \varepsilon _{4} } & {\varepsilon _{3} } \end{array}\right).} \end{array} \end{equation} According to this scheme, we have for the compliances \cite{R38}: \begin{center} \noindent $S_{mnop}=S_{ij}$, when both \textit{i} and \textit{j} are 1, 2, or 3, \noindent $2S_{mnop}=S_{ij}$, when either \textit{i} or \textit{j} is 4, 5, or 6, \noindent $4S_{mnop}=S_{ij}$, when both \textit{i} and \textit{j} are 4, 5, or 6. \end{center} \noindent Therefore, Eq.\eqref{3)} takes the shorter form: \begin{equation} \label{10)} \varepsilon _{i} =S_{ij} \sigma _{j} \, \, \, \, (i,j=1,{\rm \; }2,...,{\rm \; }6). \end{equation} The Voigt scheme replaces the cumbersome second- and 4\textit{${}^{th}$}-rank tensors in 3-dimensional space by vectors and matrices in a 6-dimensional vector space. The reason for introducing the factors of $\frac{1}{2}$ in Eq.\eqref{9)} and the factors of 2 and 4 into the definitions of the \textit{S$_{ij}$} is to enable writing Eq.\eqref{10)} in a compact form. Using \textit{S$_{ij}$} in Eq.\eqref{10)} and Eq.\eqref{5)}, we obtain a general and straightforward compliance transformation relation for any crystal from the crystal system (\textit{T}) to the measurement system (${T'}$): \begin{equation} \label{11)} T'_{\, ijkl} =r_{i\alpha } \, r_{j\beta } \, r_{k\gamma } \, r_{l\delta } T_{\alpha \beta \gamma \delta } , \end{equation} where \textit{r} represents the components of the rotation matrix (the direction cosines). In general, tension produces not only longitudinal and lateral strains but shear strains as well. Therefore, spherical coordinates are suitable for describing such stresses and the responses that materials give to them.
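The factors of 1, 2, and 4 relating the Voigt matrix $S_{ij}$ to the tensor components $S_{mnop}$ are easy to get wrong in code, so a small conversion sketch may help. The helper below and the cubic compliance values are hypothetical, for illustration only:

```python
import numpy as np

# Voigt index -> tensor index pair
VOIGT = {0: (0, 0), 1: (1, 1), 2: (2, 2), 3: (1, 2), 4: (0, 2), 5: (0, 1)}

def compliance_voigt_to_tensor(Sv):
    """Expand a 6x6 Voigt compliance matrix to the full S_ijkl,
    dividing by 2 for each suffix in {4,5,6} as described in the text."""
    S = np.zeros((3, 3, 3, 3))
    for I, (i, j) in VOIGT.items():
        for J, (k, l) in VOIGT.items():
            f = (2.0 if I > 2 else 1.0) * (2.0 if J > 2 else 1.0)
            for p, q in {(i, j), (j, i)}:        # minor symmetry in the first pair
                for r, s in {(k, l), (l, k)}:    # minor symmetry in the second pair
                    S[p, q, r, s] = Sv[I, J] / f
    return S

# Hypothetical cubic compliances (units of 1/GPa), for illustration only.
S11, S12, S44 = 0.0077, -0.0021, 0.0126
Sv = np.zeros((6, 6))
Sv[:3, :3] = S12
np.fill_diagonal(Sv[:3, :3], S11)
Sv[3, 3] = Sv[4, 4] = Sv[5, 5] = S44

S = compliance_voigt_to_tensor(Sv)
print(S[0, 0, 0, 0] == S11, S[1, 2, 1, 2] == S44 / 4)  # True True
```

With the full tensor in hand, the rotated compliances of Eq.\eqref{11)} follow by the contraction shown earlier.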
We choose \textbf{\textit{r}}$\mathrm{\equiv}$\textbf{\textit{a}} to be the first unit vector in the new basis set \cite{R30,R31,R50}, \begin{equation} \label{12)} \textbf{\textit{a}}=\left(\begin{array}{c} {\sin (\theta )\cos (\varphi )} \\ {\sin (\theta )\sin (\varphi )} \\ {\cos (\theta )} \end{array}\right);\, \, \, \, \, \, \, 0\, \le \theta \le \pi ,\, \, \, 0\le \varphi \le \pi . \end{equation} This unit vector (\textbf{\textit{a}}) suffices to determine Young's modulus (\textit{E}), the linear compressibility (\textit{$\beta$}), and the bulk modulus (\textit{B}). However, some elastic properties, such as the shear modulus (\textit{G}) and Poisson's ratio ($\nu$), require a second, perpendicular direction. Therefore, we define the unit vector \textbf{\textit{b}}, which is perpendicular to \textbf{\textit{a}} (see Fig.~\ref{fig:wide_2}), as follows \cite{R30,R31}: \begin{equation} \label{13)} \textbf{\textit{b}}\equiv \left(\begin{array}{c} {\cos (\theta )\cos (\varphi )\cos (\gamma )-\sin (\varphi )\sin (\gamma )} \\ {\cos (\theta )\sin (\varphi )\cos (\gamma )+\cos (\varphi )\sin (\gamma )} \\ {-\sin (\theta )\cos (\gamma )} \end{array}\right),\, \, \, \, 0\le \, \gamma \le 2\pi . \end{equation} With these two vectors defined, Eq.\eqref{11)} becomes: \begin{equation} \label{14)} T'_{\, \alpha \beta \gamma \delta } =a_{\alpha i} \, a_{\beta j} \, b_{\gamma k} \, b_{\delta l} T_{ijkl} , \end{equation} Using this equation, we can calculate the representation surfaces for the elastic properties.
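The two direction vectors can be generated and checked numerically. The function names below are assumptions (not from the package); \texttt{unit\_b} uses the standard parametrization of a unit vector perpendicular to \textbf{\textit{a}}, swept through all orientations by the angle $\gamma$:

```python
import numpy as np

def unit_a(theta, phi):
    # First unit vector (spherical angles theta, phi), cf. Eq. (12).
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def unit_b(theta, phi, gamma):
    # Unit vector perpendicular to a, rotated about a by the angle gamma.
    return np.array([np.cos(theta) * np.cos(phi) * np.cos(gamma) - np.sin(phi) * np.sin(gamma),
                     np.cos(theta) * np.sin(phi) * np.cos(gamma) + np.cos(phi) * np.sin(gamma),
                     -np.sin(theta) * np.cos(gamma)])

# Spot-check orthonormality for a few random angle triples.
rng = np.random.default_rng(0)
for theta, phi, gamma in rng.uniform(0.0, np.pi, size=(5, 3)):
    a, b = unit_a(theta, phi), unit_b(theta, phi, gamma)
    assert np.isclose(a @ a, 1.0) and np.isclose(b @ b, 1.0) and np.isclose(a @ b, 0.0)
print("a and b are orthonormal")
```

Scanning these angles over their ranges is what traces out the representation surfaces discussed below.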
For instance, from Eq.\eqref{10)}, we know that Young's modulus is obtained by applying a purely normal stress (see Fig.~\ref{fig:wide_1}(d)), \begin{equation} \label{15)} E(\textbf{\textit{a}})\equiv \frac{1}{S'_{1111} } =\frac{1}{S'_{11} } =\frac{1}{a_{i} a_{j} a_{k} a_{l} S_{ijkl} } \, \, \, \, ({\rm by}\, {\rm Einstein{'} s\; summation\; rule}). \end{equation} The volume compressibility of a crystal is the proportional decrease in the volume of a crystal subjected to unit hydrostatic pressure, whereas the linear compressibility is the relative decrease in length of a line when the crystal is subjected to unit hydrostatic pressure (HP). Hence, it is obtained by applying an isotropic stress ($P_{HP}$) in tensor form, so that \textit{$\varepsilon$$_{ij}$}=-($P_{HP}$)\textit{S$_{ijkk}$} \cite{R30,R38}. Since the relative extension in the \textbf{\textit{a}} direction is \textit{$\varepsilon$$_{ij}$a$_{i}$a$_{j}$}, we have: \begin{equation} \label{16)} \beta (\textbf{\textit{a}})=a_{i} a_{j} S_{ijkk} . \end{equation} On the other hand, the relationship between \textit{$\beta$} and \textit{B} can be expressed as follows \cite{R38}, \begin{equation} \label{17)} B(\textbf{\textit{a}})=\frac{1}{\beta (\textbf{\textit{a}})} =\frac{1}{a_{i} a_{j} S_{ijkk} } . \end{equation} As mentioned, \textit{G} and $\nu$ are not as straightforward to represent, since they depend on two directions (\textit{\textbf{a}} and \textit{\textbf{b}}).
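The directional Young's modulus and linear compressibility reduce to one-line contractions once the full compliance tensor is available. In the sketch below, the cubic compliance values are hypothetical and the helper names are assumptions for illustration; for a cubic crystal, the modulus along a cube axis must equal $1/S_{11}$ and the linear compressibility must be isotropic:

```python
import numpy as np

# Hypothetical cubic compliances (1/GPa), chosen only to exercise the formulas.
S11, S12, S44 = 0.0077, -0.0021, 0.0126
S = np.zeros((3, 3, 3, 3))
for i in range(3):
    S[i, i, i, i] = S11
    for j in range(3):
        if i != j:
            S[i, i, j, j] = S12                         # S_iijj = S12
            S[i, j, i, j] = S[i, j, j, i] = S44 / 4.0   # tensor shear components

def young(a):
    # Directional Young's modulus: E(a) = 1 / (a_i a_j a_k a_l S_ijkl)
    return 1.0 / np.einsum('i,j,k,l,ijkl->', a, a, a, a, S)

def lin_compressibility(a):
    # Directional linear compressibility: beta(a) = a_i a_j S_ijkk
    return np.einsum('i,j,ijkk->', a, a, S)

a100 = np.array([1.0, 0.0, 0.0])
a111 = np.ones(3) / np.sqrt(3.0)
print(np.isclose(young(a100), 1.0 / S11))                    # True
print(np.isclose(lin_compressibility(a100), S11 + 2 * S12))  # True
print(np.isclose(lin_compressibility(a111), S11 + 2 * S12))  # True: isotropic for cubic
```

Evaluating \texttt{young} over the angular grid of the previous sketch produces the 3D representation surface of \textit{E}.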
The shear modulus is obtained by applying a pure shear stress (Fig.~\ref{fig:wide_1}(d)), and Poisson's ratio by applying a purely normal stress, in the vector form of Eq.\eqref{3)}; this results in: \begin{equation} \label{18)} G(\textbf{\textit{a}},\textbf{\textit{b}})=\frac{1}{4S'_{1212} } =\frac{1}{4S'_{66} } =\frac{1}{4} \frac{1}{a_{i} b_{j} a_{k} b_{l} S_{ijkl} } , \end{equation} \begin{equation} \label{19)} \nu (\textbf{\textit{a}},\textbf{\textit{b}})=-\frac{S'_{1122} }{S'_{1111} } =-\frac{S'_{12} }{S'_{11} } =-\frac{a_{i} a_{j} b_{k} b_{l} S_{ijkl} }{a_{i} a_{j} a_{k} a_{l} S_{ijkl} } . \end{equation} To illustrate these equations, we derive Young's modulus of a cubic crystal. For a cubic crystal, lattice symmetry leaves three independent constants: \textit{C}$_{11}$, \textit{C}$_{12}$, and \textit{C}$_{44}$ in the \textit{C$_{ij}$}, and \textit{S}$_{11}$, \textit{S}$_{12}$, and \textit{S}$_{44}$ in the \textit{S$_{ij}$}. Using Eq.\eqref{15)} and Eq.\eqref{12)}, we have: \begin{equation} \label{20)} \begin{array}{l} {S'_{1111} =a_{11} a_{11} a_{11} a_{11} S_{1111} +a_{12} a_{12} a_{12} a_{12} S_{2222} +a_{13} a_{13} a_{13} a_{13} S_{3333} +} \\\\ {\qquad a_{11} a_{11} a_{12} a_{12} S_{1122} +a_{11} a_{11} a_{13} a_{13} S_{1133} +a_{12} a_{12} a_{13} a_{13} S_{2233} +} \\\\ {\qquad a_{11} a_{11} a_{12} a_{12} S_{2211} +a_{11} a_{11} a_{13} a_{13} S_{3311} +a_{12} a_{12} a_{13} a_{13} S_{3322} +} \\\\ {\qquad 4[a_{12} a_{13} a_{12} a_{13} S_{2323} ]+4[a_{13} a_{11} a_{13} a_{11} S_{3131} ]+4[a_{11} a_{12} a_{11} a_{12} S_{1212} ]} \\\\ {\qquad =S_{11} (a_{11}^{4} +a_{12}^{4} +a_{13}^{4} )+(S_{44} +2S_{12} )[a_{12}^{2} a_{13}^{2} +a_{11}^{2} a_{13}^{2} +a_{11}^{2} a_{12}^{2} ],} \end{array} \end{equation} with further simplification, \begin{equation}
\label{21)} \frac{1}{E} =S'_{1111} =S_{11} -2(S_{11} -S_{12} -\frac{1}{2} S_{44} )(a_{11}^{2} a_{12}^{2} +a_{11}^{2} a_{13}^{2} +a_{12}^{2} a_{13}^{2} ). \end{equation} To calculate the orientation-dependent Poisson's ratio, shear modulus, and Young's modulus of a 2D material, the unit vectors of Eq.\eqref{12)} and Eq.\eqref{13)} reduce to: \begin{equation} \label{22)} \textbf{a}=\left(\begin{array}{c} {\cos (\varphi )} \\ {\sin (\varphi )} \\ {0} \end{array}\right);\, \, \, \textbf{b}=\left(\begin{array}{c} {-\sin (\varphi )} \\ {\cos (\varphi )} \\ {0} \end{array}\right)\, ;\, \, \, \, \, \, \, \, 0\le \varphi \le 2\pi . \end{equation} For a 2D system, according to Hooke's law (Eq.\eqref{3)} and Eq.\eqref{4)}), the relationship between \textit{$\sigma$} and the corresponding strain tensor \textit{$\varepsilon$} can be described by the stiffness tensor \textit{C$_{ij}$}; for orthotropic symmetry under plane-stress conditions it reads \cite{R39}: \begin{equation} \label{23)} \left(\begin{array}{c} {\sigma _{11} } \\ {\sigma _{22} } \\ {\sigma _{12} } \end{array}\right)=\left(\begin{array}{ccc} {C_{11} } & {C_{12} } & {0} \\ {C_{12} } & {C_{22} } & {0} \\ {0} & {0} & {C_{66} } \end{array}\right)\left(\begin{array}{c} {\varepsilon _{11} } \\ {\varepsilon _{22} } \\ {2\varepsilon _{12} } \end{array}\right), \end{equation} where Voigt notation has been used for \textit{C$_{ij}$}. Here \textit{C}$_{11}$ (\textit{S}$_{11}$), \textit{C}$_{22}$ (\textit{S}$_{22}$), \textit{C}$_{12}$=\textit{C}$_{21}$ (\textit{S}$_{12}$=\textit{S}$_{21}$), and \textit{C}$_{66}$ (\textit{S}$_{66}$) represent \textit{C}$_{1111}$ (\textit{S}$_{1111}$), \textit{C}$_{2222}$ (\textit{S}$_{2222}$), \textit{C}$_{1122}$ (\textit{S}$_{1122}$), and \textit{C}$_{1212}$ (\textit{S}$_{1212}$), respectively.
So, using the previous equations, the in-plane Young's modulus, shear modulus, and Poisson's ratio can be written as: \begin{equation} \label{24)} {E(\varphi )=\frac{1}{S_{11} \cos ^{4} (\varphi )+S_{22} \sin ^{4} (\varphi )+(2S_{12} +S_{66} )\cos ^{2} (\varphi )\sin ^{2} (\varphi )}} , \end{equation} \begin{equation} \label{25)} \nu (\varphi )=-\frac{[S_{11} -S_{66} +S_{22} ]\cos ^{2} (\varphi )\sin ^{2} (\varphi )+S_{12} \cos ^{4} (\varphi )+S_{12} \sin ^{4} (\varphi )}{[2S_{12} +S_{66} ]\cos ^{2} (\varphi )\sin ^{2} (\varphi )+S_{11} \cos ^{4} (\varphi )+S_{22} \sin ^{4} (\varphi )} , \end{equation} \begin{equation} \label{26)} \frac{1}{4G(\varphi )} =[S_{11} +S_{22} -2S_{12} ]\cos ^{2} (\varphi )\sin ^{2} (\varphi )+\frac{1}{4} S_{66} [\cos ^{4} (\varphi )+\sin ^{4} (\varphi )-2\sin ^{2} (\varphi )\cos ^{2} (\varphi )] . \end{equation} Equations \eqref{24)}, \eqref{25)}, and \eqref{26)} apply to hexagonal, square, and rectangular 2D crystal systems (see Fig.~\ref{fig:wide_31}), which have two, three, and four independent elastic constants, respectively. Two-dimensional oblique systems have six independent elastic constants (\textit{C}$_{16}$ and \textit{C}$_{26}$ are non-zero).
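Equations \eqref{24)}--\eqref{26)} lend themselves to a compact implementation. The sketch below is an illustrative pure-Python version (not the \textsc{El\textit{A}Tools} implementation); the stiffness values are hypothetical, and the compliances follow from inverting the plane-stress stiffness matrix of Eq.\eqref{23)}. At $\varphi=0$ the results must reduce to the axis values $E_{x}=(C_{11}C_{22}-C_{12}^{2})/C_{22}$ and $\nu_{xy}=C_{12}/C_{22}$.

```python
import math

def compliances_2d(c11, c22, c12, c66):
    """Invert the rectangular 2D stiffness matrix of Eq. (23);
    C66 is decoupled, so S66 = 1/C66."""
    det = c11 * c22 - c12 * c12
    return c22 / det, c11 / det, -c12 / det, 1.0 / c66   # s11, s22, s12, s66

def moduli_2d(s11, s22, s12, s66, phi):
    """Orientation-dependent E, nu, G of a 2D crystal, Eqs. (24)-(26)."""
    c, s = math.cos(phi), math.sin(phi)
    c2s2, c4, s4 = (c * s) ** 2, c ** 4, s ** 4
    d = s11 * c4 + s22 * s4 + (2.0 * s12 + s66) * c2s2    # d = 1/E(phi)
    e = 1.0 / d
    nu = -((s11 + s22 - s66) * c2s2 + s12 * (c4 + s4)) / d
    g = 1.0 / (4.0 * ((s11 + s22 - 2.0 * s12) * c2s2
                      + 0.25 * s66 * (c4 + s4 - 2.0 * c2s2)))
    return e, nu, g

# Hypothetical 2D stiffnesses (N/m).
c11, c22, c12, c66 = 100.0, 60.0, 20.0, 30.0
S = compliances_2d(c11, c22, c12, c66)
e0, nu0, g0 = moduli_2d(*S, 0.0)
```

Along $\varphi=0$ the shear modulus reduces to $G=1/S_{66}=C_{66}$, consistent with Eq.\eqref{18)}.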
For such oblique systems, the above equations take the form~\cite{R59,R60}: \begin{equation} \label{27)} \begin{split} \frac{1}{E(\varphi )} =S_{11} \cos ^{4} (\varphi )+S_{22} \sin ^{4} (\varphi )+(2S_{12} +S_{66} )\cos ^{2} (\varphi )\sin ^{2} (\varphi )\\ +2S_{16} \cos ^{3} (\varphi )\sin (\varphi )+2S_{26} \sin ^{3} (\varphi )\cos (\varphi ) , \end{split} \end{equation} \begin{equation} \label{28)} \begin{split} \nu (\varphi )=-E(\varphi )\big([S_{11} +S_{22} -S_{66} ]\cos ^{2} (\varphi )\sin ^{2} (\varphi )+S_{12} [\cos ^{4} (\varphi )+\sin ^{4} (\varphi )]\\ +S_{16} [\sin ^{3} (\varphi )\cos (\varphi )-\cos ^{3} (\varphi )\sin (\varphi )]+S_{26} [\cos ^{3} (\varphi )\sin (\varphi )-\sin ^{3} (\varphi )\cos (\varphi )]\big) , \end{split} \end{equation} \begin{equation} \label{29)} \begin{split} { \frac{1}{4G(\varphi )} =[S_{11} +S_{22} -2S_{12} ]\cos ^{2} (\varphi )\sin ^{2} (\varphi )+\frac{1}{4} S_{66} [\cos ^{4} (\varphi )+\sin ^{4} (\varphi )-2\sin ^{2} (\varphi )\cos ^{2} (\varphi )]}\\{+ S_{16} [\sin ^{3} (\varphi )\cos (\varphi ) - \cos ^{3} (\varphi )\sin (\varphi ) ] + S_{26} [\cos ^{3} (\varphi )\sin (\varphi ) - \sin ^{3} (\varphi )\cos (\varphi ) ]}. \end{split} \end{equation} In Eq.\eqref{28)} the prefactor $E(\varphi )$ is given by Eq.\eqref{27)}, since $\nu (\varphi )=-S'_{12}(\varphi )/S'_{11}(\varphi )$ and $E(\varphi )=1/S'_{11}(\varphi )$. \begin{figure} \includegraphics[scale=1.1]{Fig_3.pdf} \caption{\label{fig:wide_31}Classification of crystal systems and independent elastic constants for 2D materials.} \end{figure} An important relation in materials science is the connection between the elastic constant tensor and the elastic wave velocities of a solid. Since sound is an elastic wave traveling in a homogeneous medium (e.g., a perfect crystal), the C$_{ijkl}$ contain the information about how these sound waves propagate.
Knowing the elastic constants makes it possible to predict the sound velocities in a material through the \textit{Christoffel} equation \cite{R62,R63}; the dispersion relation for these waves is determined by solving \begin{equation} \label{30)} [M_{il}-\rho \omega^{2} \delta_{il} ]s_{l}=0 . \end{equation} For a plane wave with wave vector \textbf{q}, frequency $\omega$, and polarization $\hat{s}$ in a material with density $\rho$, the \textit{Christoffel} matrix ($M_{il}$) is defined as \begin{equation} \label{31)} M_{il}=q_{j}C_{ijkl}q_{k}. \end{equation} Eq.\eqref{30)} is a simple eigenvalue problem that can be routinely solved for arbitrary \textbf{q}; the result is a set of three frequencies and polarization vectors for each value of \textbf{q}. The \textit{Christoffel} matrix is real and symmetric, so the eigenvalues are real and the polarization vectors \{$\hat{\textbf{s}}$\} constitute an orthogonal basis. For further simplification we use the reduced elastic constant tensor $\tilde{C}_{ijkl}$ = $\rho^{-1}$C$_{ijkl}$ and the corresponding reduced \textit{Christoffel} matrix ($\tilde{M}_{il}$). Because the velocities are independent of wavelength, we treat \textbf{q} not as a wave vector with dimension of inverse length but as a dimensionless unit vector that defines only the direction of travel of the plane wave. The entries of $\tilde{M}_{il}$ then have the dimension of velocity squared rather than frequency squared. Therefore, Eq.\eqref{30)} reduces to \begin{equation} \label{32)} [\tilde{M}_{il}-v^{2}_{p} \delta_{il} ]s_{l}=0;\,\, v^{2}_{p}=\omega^{2}/q^{2}, \end{equation} where $v_{p}$ is the velocity of a plane wave traveling in the direction of $\hat{\textbf{q}}$. Calculating a material's sound velocities from Eq.\eqref{32)} is thus a straightforward eigenvalue problem.
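This eigenvalue problem can be sketched compactly: build $M_{il}$ from the Voigt stiffness matrix and a unit $\mathbf{q}$, then extract the three eigenvalues with the closed-form solution for real symmetric $3\times3$ matrices. The code below is illustrative pure Python, not the \textsc{El\textit{A}Tools} solver; the cubic constants and density are made-up values, with $C$ in GPa and $\rho$ in g/cm$^{3}$ so that the velocities come out in km/s.

```python
import math

# Voigt mapping of a symmetric index pair (0-based): 11->1, 22->2, 33->3,
# 23->4, 13->5, 12->6 become 0..5 here.
VOIGT = {(0, 0): 0, (1, 1): 1, (2, 2): 2, (1, 2): 3, (2, 1): 3,
         (0, 2): 4, (2, 0): 4, (0, 1): 5, (1, 0): 5}

def christoffel_velocities(C, rho, q):
    """Phase velocities along unit vector q from Eq. (32),
    with M_il = q_j C_ijkl q_k and C given as a Voigt 6x6 matrix.
    Returns [slow secondary, fast secondary, primary]."""
    n = math.sqrt(sum(x * x for x in q))
    q = [x / n for x in q]
    M = [[sum(q[j] * C[VOIGT[(i, j)]][VOIGT[(k, l)]] * q[k]
              for j in range(3) for k in range(3))
          for l in range(3)] for i in range(3)]
    # Closed-form eigenvalues of the real symmetric 3x3 matrix M.
    p1 = M[0][1] ** 2 + M[0][2] ** 2 + M[1][2] ** 2
    tr = M[0][0] + M[1][1] + M[2][2]
    if p1 == 0.0:                                   # already diagonal
        eigs = sorted([M[0][0], M[1][1], M[2][2]])
    else:
        m = tr / 3.0
        p = math.sqrt((sum((M[i][i] - m) ** 2 for i in range(3)) + 2.0 * p1) / 6.0)
        B = [[(M[i][j] - (m if i == j else 0.0)) / p for j in range(3)]
             for i in range(3)]
        detB = (B[0][0] * (B[1][1] * B[2][2] - B[1][2] * B[2][1])
                - B[0][1] * (B[1][0] * B[2][2] - B[1][2] * B[2][0])
                + B[0][2] * (B[1][0] * B[2][1] - B[1][1] * B[2][0]))
        phi = math.acos(max(-1.0, min(1.0, detB / 2.0))) / 3.0
        eig1 = m + 2.0 * p * math.cos(phi)
        eig3 = m + 2.0 * p * math.cos(phi + 2.0 * math.pi / 3.0)
        eigs = sorted([eig1, eig3, tr - eig1 - eig3])
    return [math.sqrt(e / rho) for e in eigs]

def cubic_stiffness(c11, c12, c44):
    C = [[0.0] * 6 for _ in range(6)]
    for i in range(3):
        for j in range(3):
            C[i][j] = c11 if i == j else c12
    for i in range(3, 6):
        C[i][i] = c44
    return C

# Along [100] in a cubic crystal, M is diagonal: v_P = sqrt(C11/rho)
# and the two secondary modes are degenerate with v_S = sqrt(C44/rho).
C = cubic_stiffness(118.0, 53.5, 59.4)   # illustrative GPa values
vs1, vs2, vp = christoffel_velocities(C, 5.32, (1, 0, 0))
```

Along [110] the same routine reproduces the textbook cubic results $\rho v^{2}\in\{(C_{11}-C_{12})/2,\ C_{44},\ (C_{11}+C_{12}+2C_{44})/2\}$, which exercises the non-diagonal branch.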
From Eq.\eqref{32)} we obtain three velocities, one primary (P) and two secondary (S), corresponding to the longitudinal and transverse polarizations, respectively. The velocity of a plane wave is generally referred to as the \textit{phase velocity} ($v_{p}$). Real sound, however, is never purely monochromatic nor purely planar; hence, we consider a wave packet with a small spread in wavelength and direction of travel. The velocity of the wave packet formed by the superposition of these phase waves is called the \textit{group velocity} ($v_{g}$). It is the velocity at which acoustic energy travels through a non-dispersive, homogeneous medium, and is defined by \begin{equation} \label{33)} \mathbf{v}_{g} \equiv \mathbf{\nabla}_{\mathbf{q}} v_{p}, \end{equation} where $v_{p}$ is a \textit{scalar} function of $\hat{\textbf{q}}$, and $\textbf{v}_{g}$ is a vector-valued function, which generally does not lie in the direction of $\hat{\textbf{q}}$. The angle between the phase and group velocity directions is called the \textit{power flow angle} ($\psi$) and satisfies \begin{equation} \label{34)} v_{p} = v_{g} \cos(\psi); \quad \cos(\psi)=\hat{\mathbf{n}}_{p}\cdot \hat{\mathbf{n}}_{g}, \end{equation} where $\hat{\textbf{n}}_{p}$ and $\hat{\textbf{n}}_{g}$ are the normalized directions of $v_{p}$ and $v_{g}$, respectively. \subsection{Mechanical properties and elastic anisotropy of 2D and 3D materials} From the elastic constants, other basic elastic properties, including the elastic moduli, can be obtained. The elastic response of an isotropic system is generally described by \textit{B} and \textit{G}, which may be obtained by averaging the single-crystal elastic constants. The averaging methods most often used are the Voigt \cite{R40}, Reuss \cite{R41}, and Hill \cite{R34} bounds.
In the Voigt and Reuss approximations, these moduli take the following form: \begin{equation} \label{35)} \begin{array}{l} {B_{V} =9^{-1} ([C_{11} +C_{22} +C_{33} ]+2[C_{12} +C_{23} +C_{31} ]),} \\\\ {G_{V} =15^{-1} ([C_{11} +C_{22} +C_{33} ]-[C_{12} +C_{23} +C_{13} ]+3[C_{44} +C_{55} +C_{66} ]),} \\\\ {B_{R} =([S_{11} +S_{22} +S_{33} ]+2[S_{12} +S_{23} +S_{13} ])^{-1},} \\\\ {G_{R} =15 (4[S_{11} +S_{22} +S_{33} ]-4[S_{12} +S_{23} +S_{31} ]+3[S_{44} +S_{55} +S_{66} ])^{-1}.} \end{array} \end{equation} The arithmetic mean of the Voigt and Reuss bounds, termed the Voigt-Reuss-Hill (VRH) average, is often a better approximation to the actual elastic behavior of a polycrystalline material, \begin{equation} \label{36)} \begin{array}{l} {B_{VRH} =\frac{1}{2} (B_{V} +B_{R} ),} \\\\ {G_{VRH} =\frac{1}{2} (G_{V} +G_{R} ).} \end{array} \end{equation} Young's modulus (\textit{E}) and Poisson's ratio (\textit{$\nu$}) of an isotropic material are then given by: \begin{equation} \label{37)} E=\frac{9BG}{3B+G} ,\, \, \, \, \nu=\frac{3B-2G}{2(3B+G)} . \end{equation} The elastic anisotropy is a crucial measure of the anisotropy of chemical bonding and can be quantified from the elastic constants. For all crystal systems the response is, in general, anisotropic, and one must account for such contributions to quantify the extent of anisotropy accurately. For this purpose, Ranganathan \textit{et al}. \cite{R32} introduced a universal anisotropy index (\textit{A}${}^{U}$), \begin{equation} \label{38)} A^{U} =5\frac{G_{V} }{G_{R} } +\frac{B_{V} }{B_{R} } -6. \end{equation} It is noteworthy that for weakly anisotropic, i.e., nearly isotropic, materials all such averages lead to similar results for the elastic moduli.
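For cubic symmetry the Voigt and Reuss bounds of Eq.\eqref{35)} have closed forms, which allows a compact sanity check of Eqs.\eqref{36)}--\eqref{38)}. A pure-Python sketch with illustrative constants (for general symmetry, as in \textsc{El\textit{A}Tools}, the full $6\times6$ compliance tensor must be used):

```python
def vrh_cubic(c11, c12, c44):
    """Voigt/Reuss/Hill averages for a cubic crystal, Eqs. (35)-(38).
    For cubic symmetry B_V = B_R, so A^U reduces to 5(G_V/G_R - 1)."""
    bv = br = (c11 + 2.0 * c12) / 3.0
    gv = (c11 - c12 + 3.0 * c44) / 5.0
    gr = 5.0 * (c11 - c12) * c44 / (4.0 * c44 + 3.0 * (c11 - c12))
    b, g = 0.5 * (bv + br), 0.5 * (gv + gr)          # Eq. (36)
    e = 9.0 * b * g / (3.0 * b + g)                  # Eq. (37)
    nu = (3.0 * b - 2.0 * g) / (2.0 * (3.0 * b + g))
    au = 5.0 * gv / gr + bv / br - 6.0               # Eq. (38)
    return b, g, e, nu, au

# Illustrative cubic constants (GPa); an elastically isotropic tensor,
# i.e. C44 = (C11 - C12)/2, must give A^U = 0 exactly.
b, g, e, nu, au = vrh_cubic(118.0, 53.5, 59.4)
b_iso, g_iso, e_iso, nu_iso, au_iso = vrh_cubic(118.0, 53.5, (118.0 - 53.5) / 2.0)
```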
Mechanical behavior such as \textit{ductile} or \textit{brittle} can be represented by the ratio of \textit{G} to \textit{B}, \textit{i.e.,} the Pugh ratio \textit{G}/\textit{B}, by simply considering \textit{B} as the resistance to fracture and \textit{G} as the resistance to plastic deformation. The critical value of the Pugh ratio separating \textit{ductile} and \textit{brittle} materials is around 0.57. If \textit{G/B}~$\mathrm{<}$~0.57, the material is more \textit{ductile}; otherwise, it behaves in a \textit{brittle} manner \cite{R5,R42}. Hence, a higher Pugh ratio indicates a more brittle material. The Cauchy pressure (\textit{P$_{C}$}) is another characteristic describing the brittleness and ductility of metals and compounds and is defined for the different symmetries \cite{R43,R51,R52} by: \begin{equation} \label{39)} P_{C}^{c} =C_{12} -C_{44} \, \ (\mathrm{cubic\ symmetry}), \end{equation} \begin{equation} \label{40)} P_{C}^{a} =C_{13} -C_{44} ,\quad P_{C}^{b} =C_{12} -C_{66} \, \ (\mathrm{hexagonal,\ trigonal,\ and\ tetragonal\ symmetries}), \end{equation} \begin{equation} \label{41)} P_{C}^{a} =C_{23} -C_{44} ,\quad P_{C}^{b} =C_{13} -C_{55} ,\quad P_{C}^{c} =C_{12} -C_{66} \, \ (\mathrm{orthorhombic\ symmetry}). \end{equation} For covalent materials with brittle atomic bonds, \textit{P$_{C}$} is negative, because in this case the material's resistance to shear strain, \textit{i.e.}, \textit{C}$_{44}$, is much larger than its resistance to volume change, \textit{i.e}., \textit{C}$_{12}$ (for cubic symmetry). However, \textit{P$_{C}$} is positive for metallic-like bonding, where the electrons are largely delocalized. For an isotropic crystal, \textit{A${}^{U}$} is zero; the departure of \textit{A${}^{U}$} from zero defines the extent of the elastic anisotropy.
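These two empirical indicators reduce to a few lines of code. A hedged sketch (the 0.57 threshold and Eq.\eqref{39)} are from the text; the input numbers are illustrative, not data from the paper):

```python
def pugh_label(g, b):
    """Pugh criterion: G/B < 0.57 suggests ductile, otherwise brittle."""
    return "ductile" if g / b < 0.57 else "brittle"

def cauchy_pressure_cubic(c12, c44):
    """Cubic Cauchy pressure, Eq. (39): P_C = C12 - C44.
    Negative values indicate brittle covalent bonding, positive values
    metallic-like bonding."""
    return c12 - c44

# Illustrative values (GPa): G/B = 0.62 >= 0.57 and P_C < 0, so both
# indicators point toward brittle behavior here.
label = pugh_label(46.5, 75.0)
p_c = cauchy_pressure_cubic(53.5, 59.4)
```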
In addition to these properties, \textbf{Appendix A} provides a complete list of the various parameters and moduli related to the elastic and mechanical properties of materials that \textsc{El\textit{A}Tools} is able to calculate. In this work, the relations among \textit{E}, \textit{G}, \textit{$\nu$}, and the elastic stiffness constants for a 2D system are given by: \begin{equation} \label{42)} \begin{split} {E_{x} =\frac{C_{11} C_{22} -C_{12} C_{21} }{C_{22} } ,} \\ {E_{y} =\frac{C_{11} C_{22} -C_{12} C_{21} }{C_{11} } , }\\ {G_{xy} =C_{66} ,} \\ {\, \nu_{xy} =\frac{C_{21} }{C_{22} } ,\quad \nu_{yx} =\frac{C_{12} }{C_{11} } ,}\\ \end{split} \end{equation} where \textit{E$_{l}$}= \textit{$\sigma$$_{l}$}/\textit{$\varepsilon$$_{l}$} is Young's modulus along the axis \textit{l}; \textit{$\nu$$_{lk}$} =-\textit{d$\varepsilon$$_{k}$}/\textit{d$\varepsilon$$_{l}$} is the Poisson's ratio for tensile strain applied in the \textit{l} direction with the response strain in the \textit{k} direction; and \textit{G$_{xy}$} is the shear modulus in the \textit{xy}-plane. \section{Software description and Features} \label{section:3} \subsection{Workflow and structure of \textsc{El\textit{A}Tools}} \begin{figure} \includegraphics[scale=0.9]{Fig_4.pdf} \caption{\label{fig:wide_3}The flowchart of \textsc{El\textit{A}Tools}. The red dashed line represents the Calculation Kernel (CK) block. The green dashed line represents the post-processing stage.} \end{figure} The simple workflow of the \textsc{El\textit{A}Tools} package is illustrated in Fig.~\ref{fig:wide_3}. First, the type of material, either 2D or 3D, is chosen to determine the calculations in the Calculation Kernel (CK) block. Then, \textsc{El\textit{A}Tools} reads the \textit{C$_{ij}$} data as an input file. At this point, the \textit{C$_{ij}$} data can be extracted from the output files of the IRelast \cite{R18}, IRelast2D \cite{R64}, Elast \cite{R17}, AELAS \cite{R44}, and ElaStic \cite{R19} packages.
The output files of these packages are supported as input files by \textsc{El\textit{A}Tools} (see Sec. 3.3). As mentioned earlier, for 3D materials, \textsc{El\textit{A}Tools} has an offline/online database of more than 13000 elastic tensors taken from the Materials API (Materials Project database). The user can proceed to the calculation steps of \textsc{El\textit{A}Tools} simply by entering the Materials API ID of the structure. The developers of \textsc{El\textit{A}Tools} intend to keep the elastic tensor database updated according to the latest release of the Materials Project database. Subsequently, the \textit{C$_{ij}$} tensor enters the mechanical stability check stage. There are four equivalent formulations of the Born elastic stability condition for a crystal, valid regardless of the crystal symmetry: (1) the second-order elastic \textit{stiffness} tensor \textit{C$_{ij}$} is positive definite; (2) all eigenvalues of \textit{C$_{ij}$} are positive; (3) all the leading principal minors of \textit{C$_{ij}$} are positive; (4) an arbitrary set of minors of \textit{C$_{ij}$} are all positive. Method (2) is used in \textsc{El\textit{A}Tools}. If the stability conditions are satisfied, the tensor enters the CK block; otherwise, the program stops with a mechanical instability error. In the CK block, the calculations are divided into two branches. If the system is 2D, the main mechanical properties such as \textit{E$_{x}$}, \textit{E$_{y}$}, \textit{G}$_{xy}$, \textit{$\nu$$_{xy}$}, \textit{$\nu$}$_{yx}$, etc. are determined, and then enter the orientation-dependent (OD) calculation step. At this stage, according to the equations of Sec. 2.2, the OD Young's modulus, shear modulus, and Poisson's ratio in the (001) plane are calculated, together with their maximum and minimum values in that plane. For a 3D system, the process is similar.
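The stability gate can be illustrated with a positive-definiteness test. \textsc{El\textit{A}Tools} uses the eigenvalue criterion (method 2); the sketch below uses the equivalent Cholesky criterion, since a symmetric matrix is positive definite exactly when its Cholesky factorization exists. Pure Python, with illustrative cubic constants:

```python
import math

def is_positive_definite(C):
    """Born stability check: a crystal is mechanically stable iff its
    stiffness matrix is positive definite. Tested here via an attempted
    Cholesky factorization, which succeeds only for positive definite C
    (equivalent to the all-positive-eigenvalues criterion)."""
    n = len(C)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                d = C[i][i] - s
                if d <= 0.0:
                    return False       # non-positive pivot => not PD
                L[i][i] = math.sqrt(d)
            else:
                L[i][j] = (C[i][j] - s) / L[j][j]
    return True

def cubic_stiffness_6x6(c11, c12, c44):
    C = [[0.0] * 6 for _ in range(6)]
    for i in range(3):
        for j in range(3):
            C[i][j] = c11 if i == j else c12
    for i in range(3, 6):
        C[i][i] = c44
    return C

# Cubic Born criteria: C11 - C12 > 0, C11 + 2C12 > 0, C44 > 0.
stable = is_positive_definite(cubic_stiffness_6x6(118.0, 53.5, 59.4))
unstable = is_positive_definite(cubic_stiffness_6x6(50.0, 60.0, 59.4))  # C11 < C12
```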
For a 3D system, \textsc{El\textit{A}Tools} first reads the \textit{hkl}-index plane (\textit{e.g.} (100)) entered by the user. Then, the polycrystalline Young's modulus, bulk modulus, shear modulus, P-wave modulus, Poisson's ratio, and Pugh ratio are calculated using the Voigt, Reuss, and Hill averaging schemes. In addition, anisotropy indices, Cauchy pressures, and hardness information are determined. Subsequently, we arrive at the spatial dependence (SD) calculation process. The SD and the 2D projection of Young's modulus, bulk modulus, shear modulus, Poisson's ratio, and linear compressibility are calculated. To calculate the elastic wave properties, such as the phase and group velocities, \textsc{El\textit{A}Tools} can solve the \textit{Christoffel} equation at the user's request. Then, similar to the previous steps, it enters the spatial calculation process, and the SD and the 2D projection of the phase and group velocities (primary and two secondary modes) are calculated. Finally, the calculations obtained from these two branches are sorted and saved. The main mechanical properties are sorted in \textsf{DATA.out}. \textsc{El\textit{A}Tools} creates two directories named \textsf{DatFile-hkl} and \textsf{PicFile-hkl}, and stores files in ``dat'' format and temporary figures of the properties in these directories, respectively. The temporary figures help to get an overview of the properties. Then, we enter the post-processing stage for visualization of the 3D spherical plots and their 2D projections with higher quality and detail. There are four plugins for visualizing data in the post-processing stage: \textsf{dat2gnu.x}, \textsf{dat2agr.x}, \textsf{dat2wrl.x}, and \textsf{dat2html.x}. The \textsf{dat2gnu.x} and \textsf{dat2agr.x} generate files for 2D graphical representations of the elastic properties in ``gpi'' and ``agr'' formats, respectively, which can be opened with the G{\scriptsize NUPLOT} and X{\scriptsize MGRACE} programs.
The \textsf{dat2wrl.x} and \textsf{dat2html.x} plugins are prepared for 3D graphical representations of the elastic properties, producing files in ``wrl" and ``html" formats. The wrl format can be visualized and explored with a VRML-capable browser, such as View3dscene \cite{R35}. The HTML format can be opened in any Web browser; the embedded JavaScript uses \textsf{plotly.js} \cite{R65}, a free, open-source graphing library. With these formats we can represent dynamic parametric surfaces, making the spatial representation of the mechanical properties more straightforward and fully interactive. \subsection{Installation and Requirements} \textsc{El\textit{A}Tools} is written in \textsf{Fortran90} and is compiled with the Intel Fortran (ifort) or GNU Fortran (gfortran) compiler. Before installing \textsc{El\textit{A}Tools}, the following libraries and packages should be installed: G{\scriptsize NUPLOT} and LAPACK (Linear Algebra Package). The LAPACK library handles the numerical linear algebra, e.g., computing the elastic compliance constants. G{\scriptsize NUPLOT} is used to plot the temporary figures and in the post-processing stage. The elastic \textit{stiffness} constants (\textit{C$_{ij}$}) can be calculated with any of the IRelast, Elast, AELAS, or ElaStic packages; \textsc{El\textit{A}Tools} supports the output of these packages. \textsc{El\textit{A}Tools} is distributed as a compressed tar file \textsf{elatools\_1.**.tar.gz}, which uncompresses into several directories: \textsf{soc}, \textsf{doc}, \textsf{db}, and \textsf{bin}. The \textsf{soc} directory contains the \textsf{f90} files and the \textsf{Makefile}. For the compilation, the \textsf{Makefile} must be modified for one's system. The \textsf{doc} directory contains a copy of the short user guide and the examples directory. The \textsf{db} directory contains the elastic constant database files; the path of these files must be specified before installation.
More details are provided in the short user guide. After installation, the executable files (\textsf{Elatools.x}, \textsf{dat2gnu.x}, \textsf{dat2agr.x}, \textsf{dat2html.x}, and \textsf{dat2wrl.x}) are saved in the \textsf{bin} directory. Finally, the code is run by executing \textsf{Elatools.x}. \subsection{Input and Output files} The only input data for \textsc{El\textit{A}Tools} are the elastic \textit{stiffness} constants, which can be calculated by other packages. For convenience, however, \textsc{El\textit{A}Tools} also supports the output files of many packages: IRelast (or IRelast2D) with the \textsf{INVELC-matrix} output file, Elast with the \textsf{elast.output} output file, AELAS with the \textsf{ELADAT} output file, ElaStic with the \textsf{ElaStic\_2nd.out} output file, and the \textsf{Cij.dat} (3D system) or \textsf{Cij-2D.dat} (2D system) file for the output of any other package (see \textbf{Appendix B} for more details). Several output files are generated in each run: \begin{itemize} \item Spatial-dependence and 2D projection files for 3D materials. In this case, fifteen files \textsf{3d\_pro.dat} (\textsf{pro=bulk}, \textsf{young}, \textsf{poisson}, \textsf{comp}, \textsf{shear}, \textsf{pp}, \textsf{pf}, \textsf{ps}, \textsf{gp}, \textsf{gf}, \textsf{gs}, \textsf{pfp}, \textsf{pff}, \textsf{pfs}, \textsf{km}) are generated. Among these files, the \textsf{3d\_poisson.dat} file includes the maximum value, minimum positive value, minimum negative value, and average value of Poisson's ratio. The \textsf{3d\_shear.dat} file contains the maximum positive value, minimum positive value, and average value of the shear modulus, and the \textsf{3d\_comp.dat} file contains the positive and negative values of the linear compressibility. Also, for the 2D projection onto any plane, nine files \textsf{2dcut\_pro.dat} (\textsf{pro=bulk}, \textsf{young}, \textsf{poisson}, \textsf{comp}, \textsf{shear}, \textsf{km}, \textsf{gveloc}, \textsf{pveloc}, and \textsf{pfaveloc}) are generated.
It should be noted that the \textsf{2dcut\_p/g/pfveloc.dat} files include the primary, fast secondary, and slow secondary modes. See Table \ref{Tab:2} for more details on the \textsf{3d\_pro.dat} and \textsf{2dcut\_pro.dat} files. \item Orientation-dependent files for 2D materials. In this case, three files \textsf{pro} (\textsf{pro= young, poisson, shear}) are generated. Also, the \textsf{poisson\_2d\_sys.dat} file contains the maximum value, the minimum positive value, and the minimum negative value of Poisson's ratio. \item \textsf{DATA.dat} file. This file contains the \textit{C$_{ij}$}, \textit{S$_{ij}$}, the main properties, and the minimal and maximal values of Young's modulus, bulk modulus, shear modulus, Poisson's ratio, linear compressibility, power flow angle, phase, and group velocities, as well as the angles and directions along which these extrema occur. \item Temporary files. These files are used for post-processing. \end{itemize} \subsection{Visualization and Post-processing} \begin{figure} \includegraphics[scale=1]{Fig_5.pdf} \caption{\label{fig:wide_4}(a) The spatial-dependence and (b) 2D projection in (110) plane of linear compressibility and Poisson's ratio of ZnAu$_{2}$(CN)$_{4}$ structure.} \end{figure} \begin{figure} \includegraphics[scale=0.2]{Fig_5-2.pdf} \caption{\label{fig:wide_4-2}Poisson's ratio heat maps with respect to $\theta$ and $\phi$ angles for ZnAu$_{2}$(CN)$_{4}$ compound. (a) Maximum positive values, (b) minimum positive values, and (c) negative values of Poisson's ratio.} \end{figure} In the post-processing stage, four tools \textsf{dat2gnu.x}, \textsf{dat2agr.x}, \textsf{dat2wrl.x}, and \textsf{dat2html.x} are designed to visualize the results, with output files in gpi, agr, wrl, and html formats, respectively.
In Figs.~\ref{fig:wide_4}--\ref{fig:wide_9}, we show the corresponding plots for the ZnAu$_{2}$(CN)$_{4}$ (space group P6222) \cite{R45}, CrB$_{2}$ (space group P6/mmm) \cite{R46}, GaAs (space group F-43m) \cite{R85}, (space group C2/m) \cite{R84}, {$\delta$}-phosphorene (space group Pmc21), and Pd$_{2}$O$_{6}$Se$_{2}$ (monolayer) structures \cite{R47}, produced with these post-processing tools together with the G{\scriptsize NUPLOT}, X{\scriptsize MGRACE}, and view3dscene programs. In the following, these test cases are examined. A list of the main elastic properties and anisotropy indices of ZnAu$_{2}$(CN)$_{4}$, CrB$_{2}$, GaAs, $\delta$-phosphorene ($\delta$-P), and the Pd$_{2}$O$_{6}$Se$_{2}$ monolayer is given in \textbf{Appendix C}. \begin{figure} \includegraphics[scale=0.8]{Fig_6.pdf} \caption{\label{fig:wide_5} (a) The spatial-dependent and 2D projection in (b) polar and (c) cartesian coordinates in (110) plane of Poisson's ratio and linear compressibility of the CrB$_{2}$ structure.} \end{figure} \textbf{Test case (1): ZnAu$_{2}$(CN)$_{4}$}. Fig.~\ref{fig:wide_4} shows the spatial dependence and the 2D projection in the (110) plane of Poisson's ratio and the linear compressibility of the ZnAu$_{2}$(CN)$_{4}$ structure. The negative linear compressibility of this compound is shown in Fig.~\ref{fig:wide_4}(a). In these figures, directions corresponding to positive values of the linear compressibility are plotted in green, and those of NLC are plotted in red. The NLC of ZnAu$_{2}$(CN)$_{4}$ was predicted in Ref. \cite{R45} and is evident in the \textit{z}-direction. The spatial dependence and the 2D projection in the (110) plane of Poisson's ratio are shown in Fig.~\ref{fig:wide_4}(b). For Poisson's ratio, which can be negative in some directions, three categories of colors are used: directions corresponding to maximum (minimum) positive values of Poisson's ratio are plotted in translucent blue (green), and those of NPR are plotted in red.
Calculations of \textsc{El\textit{A}Tools} based on the elastic tensor in Ref. \cite{R45} show that this structure, in addition to the NLC, has a small NPR (-0.02) in the (110) plane. Using G{\scriptsize NUPLOT} and the \textsf{dat2gnu.x} tool, heat maps with respect to the $\theta$ and $\phi$ angles are shown in Fig.~\ref{fig:wide_4-2}. These 2D heat maps show the variation of Poisson's ratio with the $\theta$ and $\phi$ angles in spherical space. The NPR is clearly visible in Fig.~\ref{fig:wide_4-2}(c). For comparison, ELATE cannot display this NPR feature, owing to the unavailability of a 2D representation in the (110) plane (it can display only the three planes (100), (010), and (001)). Hence, the ability to select a custom plane is a unique feature of \textsc{El\textit{A}Tools}. \textbf{Test case (2): CrB$_{2}$}. The CrB$_{2}$ compound is investigated to further evaluate \textsc{El\textit{A}Tools} and its post-processing tools. The elastic tensor is taken from the calculations of Ref. \cite{R46}. The Young's modulus and bulk modulus are shown in 3D and 2D (the (010) plane) in Fig.~\ref{fig:wide_5}. The spatial-dependence files are generated by \textsf{dat2wrl.x} and rendered with the View3dscene program (Fig.~\ref{fig:wide_5}(a)). The orientation-dependent files in polar (Cartesian) coordinates are generated by \textsf{dat2gnu.x} (\textsf{dat2agr.x}) and displayed with G{\scriptsize NUPLOT} (X{\scriptsize MGRACE}) (see Fig.~\ref{fig:wide_5}(b) and (c)). For an isotropic system, the spatial dependence would be a sphere, and the polar and Cartesian projections a circle and a straight line, respectively. Fig.~\ref{fig:wide_5}(a) shows that the bulk modulus and Young's modulus of CrB$_{2}$ are anisotropic. The projections onto the (010) plane show more details of the anisotropy of the bulk modulus and Young's modulus.
\textbf{Test case (3): GaAs.} We employ gallium arsenide (GaAs) as an example to illustrate the elastic-wave capabilities of \textsc{El\textit{A}Tools}; the output figures are briefly commented on here. The values of $C_{ij}$ and $\rho$ were taken from Ref. \cite{R85}. In Figs. \ref{fig:wide_61} and \ref{fig:wide_62}, the phase and group velocities for the primary, fast secondary, and slow secondary modes are calculated by \textsc{El\textit{A}Tools} and rendered with the \textsf{dat2html.x} (using plotly.js) and \textsf{dat2gnu.x} (using G{\scriptsize NUPLOT}) post-processing codes. From Figs. \ref{fig:wide_61} and \ref{fig:wide_62} it is clear that the distinction between the fast secondary (FS) and slow secondary (SS) modes always refers to the phase velocity, since the group velocity of the FS mode can be lower than that of the SS mode for certain propagation directions. According to Table \ref{Tab:8} and these figures, the minimum and maximum anisotropy are associated with the primary (P) and SS modes, respectively. In Figs. \ref{fig:wide_61}(a) and \ref{fig:wide_62}(a), the P modes are not spherical, and their anisotropy is higher than that of the other modes. Also, as shown in Figs. \ref{fig:wide_61}(c) and \ref{fig:wide_62}(c), the P-mode propagation patterns are more complex, indicating higher anisotropy than the other modes.
\begin{figure} \includegraphics[scale=0.9]{Fig_7.pdf} \caption{\label{fig:wide_61}(a) The spatial-dependent, (b) 2D projection in polar coordinates on (100) plane, and (c) heat maps of phase velocity ($\nu_{p}$) for primary, fast, and slow secondary modes of the GaAs compound.} \end{figure} \begin{figure} \centering \includegraphics[scale=0.9]{Fig_8.pdf} \caption{\label{fig:wide_62}(a) The spatial-dependent, (b) 2D projection in polar coordinates on (100) plane, and (c) heat maps of group velocity ($\nu_{g}$) for primary, fast, and slow secondary modes of the GaAs compound.} \end{figure} \begin{figure} \includegraphics[scale=0.9]{Fig_9.pdf} \caption{\label{fig:wide_6}The orientation-dependent in-plane (a) Poisson's ratio, (b) Young's modulus, and (c) shear modulus of \textit{$\delta$}-phosphorene.} \end{figure} \textbf{Test case (4): \textit{$\boldsymbol{\delta}$}-Phosphorene}. Wang \textit{et al.} \cite{R47} discovered that \textit{$\delta$}-phosphorene is a superior 2D \textit{auxetic} material with a high NPR. In Fig.~\ref{fig:wide_6}, the Poisson's ratio, shear modulus, and Young's modulus are calculated by \textsc{El\textit{A}Tools} and displayed with \textsf{dat2gnu.x} and G{\scriptsize NUPLOT}. As shown in Fig.~\ref{fig:wide_6}(a), the maximum NPR (-0.267) occurs at an angle of 90 degrees. This value is in excellent agreement with Wang \textit{et al.} \cite{R47}, and the orientation-dependent Young's modulus of this structure (see Fig.~\ref{fig:wide_6}(b)) also agrees well with their results. \textsc{El\textit{A}Tools} can also calculate the shear modulus of 2D materials; Fig.~\ref{fig:wide_6}(c) shows this feature for \textit{$\delta$}-phosphorene. \textbf{Test case (5): Pd$_{2}$O$_{6}$Se$_{2}$ monolayer}. In this test case, the mechanical properties of the Pd$_{2}$O$_{6}$Se$_{2}$ monolayer, with an oblique 2D crystal system, are investigated. The values of the $C_{ij}$ were taken from the Computational 2D Materials Database (C2DB) \cite{R47}.
In Fig.~\ref{fig:wide_10}, the polar heat maps of the Poisson's ratio, Young's modulus, and shear modulus are calculated by \textsc{El\textit{A}Tools} and visualized with \textsf{dat2gnu.x} and \textsc{Gnuplot}. The main elastic properties and anisotropy indices of this monolayer are listed in Table \ref{Tab:9}. It is clear from this table that the Pd$_{2}$O$_{6}$Se$_{2}$ monolayer has an NPR ($-0.49$). This result is clearly visible in the polar heat maps (see Fig.~\ref{fig:wide_10}). Polar heat maps, like their 3D counterparts, are useful tools for visualizing the mechanical properties of 2D materials and for spotting small-magnitude NPR values. \begin{figure} \includegraphics[scale=1.2]{Fig_10.pdf} \caption{\label{fig:wide_10} The polar heat maps of Poisson's ratio, Young's modulus, and shear modulus of the Pd$_{2}$O$_{6}$Se$_{2}$ monolayer.} \end{figure} \textbf{Test case (6): NPR analysis of cubic symmetry materials.} Arbitrarily large positive and negative values of Poisson's ratio can occur in solids with cubic material symmetry \cite{R48,R49}. To investigate this matter, we have used \textsc{El\textit{A}Tools} to calculate the Poisson's ratio of a hypothetical set of systems with cubic symmetry, whose three independent elastic coefficients (\textit{C}$_{11}$, \textit{C}$_{12}$, and \textit{C}$_{44}$) range between 0 and 100 GPa (in steps of 0.5 GPa), subject to the mechanical stability criteria. 
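The structure of this scan can be sketched as follows. This is a hedged illustration, not the \textsc{El\textit{A}Tools} implementation: it uses a much coarser grid, filters each $(C_{11}, C_{12}, C_{44})$ triple with the Born stability criteria for cubic crystals, and evaluates only the closed-form axial Poisson's ratio $\nu_{[100]} = C_{12}/(C_{11}+C_{12})$; the true $v_{min}$ and $v_{max}$ require the full orientation search (in particular along [110], where the NPR appears) that \textsc{El\textit{A}Tools} performs.

```python
# Sketch of the cubic-symmetry grid scan described above (illustrative only).
def is_stable_cubic(c11, c12, c44):
    """Born mechanical-stability criteria for a cubic crystal (GPa)."""
    return c44 > 0 and c11 > abs(c12) and c11 + 2.0 * c12 > 0

def poisson_100(c11, c12):
    """Closed-form Poisson's ratio for uniaxial stress along [100]."""
    return c12 / (c11 + c12)

def scan(step=10.0, cmax=100.0):
    """Grid over (C11, C12, C44), keeping only mechanically stable triples."""
    results = []
    n = int(cmax / step)
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            for k in range(1, n + 1):
                c11, c12, c44 = i * step, j * step, k * step
                if is_stable_cubic(c11, c12, c44):
                    results.append((c11, c12, c44, poisson_100(c11, c12)))
    return results

pts = scan(step=25.0)
print(len(pts), min(p[3] for p in pts), max(p[3] for p in pts))
```

For a stable positive grid, $\nu_{[100]}$ stays in $(0, 0.5)$; the extreme values reported in the figures come from off-axis directions.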
\begin{figure} \includegraphics[scale=0.8]{Fig_11.pdf} \caption{\label{fig:wide_7}The minimum and maximum values of the Poisson's ratio with respect to (a, b) (\textit{C}$_{44}$, \textit{C}$_{12}$) and (c, d) (\textit{C}$_{44}$, \textit{C}$_{11}$).} \end{figure} \begin{figure} \includegraphics[scale=0.8]{Fig_12.pdf} \centering \caption{\label{fig:wide_8}Five slices at constant values of \textit{C}$_{44}$ in which \textit{v$_{min}$} is a function of the \textit{C}$_{11}$ and \textit{C}$_{12}$ coefficients.} \end{figure} Fig.~\ref{fig:wide_7} shows the minimum (\textit{v$_{min}$}) and maximum (\textit{v$_{max}$}) values of the Poisson's ratio with respect to (\textit{C}$_{44}$, \textit{C}$_{12}$) and (\textit{C}$_{44}$, \textit{C}$_{11}$). As shown in Figs.~\ref{fig:wide_7}(a) and (b), \textit{C}$_{44}$ plays a decisive role in producing negative values of \textit{v$_{min}$}. Moreover, comparing these two figures shows that, as \textit{C}$_{44}$ increases, \textit{C}$_{11}$ plays a more prominent role than \textit{C}$_{12}$ in the NPR. For a closer look, five slices at constant \textit{C}$_{44}$, obtained by combining Figs.~\ref{fig:wide_7}(a-b) and Figs.~\ref{fig:wide_7}(c-d), are shown in Fig.~\ref{fig:wide_8} and Fig.~\ref{fig:wide_9}. In Fig.~\ref{fig:wide_8}, the \textit{v}$_{min}$ values are almost all positive when \textit{C}$_{44}$ = 1 GPa and \textit{C}$_{11}$ and \textit{C}$_{12}$ range from 1 to 100 GPa. Few materials with such elastic coefficients have been found. When \textit{C}$_{44}$ increases from 1 to 50 GPa, \textit{v$_{min}$} changes its sign from positive to negative for all \textit{C}$_{11}$ and \textit{C}$_{12}$. As can be seen, when \textit{C}$_{44}$ reaches 100 GPa, the largest negative value of \textit{v$_{min}$} is $-3$. In Fig.~\ref{fig:wide_9}, when \textit{C}$_{44}$ = 1 GPa the maximum value of \textit{v$_{max}$} is less than one, and with increasing \textit{C}$_{44}$, \textit{v$_{max}$} increases and can reach 4. 
As shown in both Figs.~\ref{fig:wide_8} and~\ref{fig:wide_9}, the patterns of change in \textit{v$_{min}$} and \textit{v$_{max}$} are the same when \textit{v$_{min}$} $\mathrm{<}$ $-1$ and \textit{v$_{max}$} $\mathrm{>}$ 1. In general, it can be concluded that the \textit{C}$_{44}$ coefficient plays a more critical role than the other two coefficients (\textit{C}$_{11}$ and \textit{C}$_{12}$) in the NPR of materials. These negative values of \textit{v$_{min}$} also appear when the crystal is stretched along the [110] direction. Many cubic compounds fall within this range of elastic coefficients and can therefore exhibit an NPR. Finally, we have prepared a documentation \href{https://yalameha.gitlab.io/elastictools/}{website} that provides more examples and tutorials for \textsc{El\textit{A}Tools}. \begin{figure}[H] \includegraphics[scale=0.8]{Fig_13.pdf} \centering \caption{\label{fig:wide_9}Five slices at constant values of \textit{C}$_{44}$ in which \textit{v$_{max}$} is a function of the \textit{C}$_{11}$ and \textit{C}$_{12}$ coefficients.} \end{figure} \section{Summary and outlook}\label{section:4} We introduced \textsc{El\textit{A}Tools}, a \textsf{Fortran90} code designed to analyze the second-order elastic tensors of three- and two-dimensional crystal systems. \textsc{El\textit{A}Tools} offers a helpful tool for detecting elastic anisotropy, NLC, and NPR (\textit{auxetic}) materials. Four post-processing programs specifically designed for the visualization of the results are provided. In addition, \textsc{El\textit{A}Tools} includes the elastic-constant database of the Materials Project for 3D materials, allowing offline/online use. Furthermore, the code can generate data for machine learning to detect and predict elastic and anisotropy properties. The authors plan to extend \textsc{El\textit{A}Tools} to analyze other tensorial properties, such as piezoelectric and photoelastic tensors. 
\subsection{Appendix A} A list of the main elastic properties and anisotropy indices of two-dimensional and three-dimensional materials is provided in Table \ref{Tab:2}. The elastic moduli B, E, and G are defined by Eqs.\eqref{35)}, \eqref{36)}, and \eqref{37)}. The isotropic Poisson's ratio in 3D materials and the Poisson's ratio in 2D materials are defined by Eqs.\eqref{37)} and \eqref{42)}, respectively. The P-wave modulus (M), also known as the \textit{longitudinal} modulus, is associated with homogeneous isotropic linear elastic materials. This modulus describes the ratio of axial stress to axial strain in a uniaxial strain state \cite{R66}, and is defined as follows: \begin{equation} \label{43)} M = B + 4G/3. \end{equation} Pugh's ratio, or the B/G ratio, indicates the ductility or brittleness of a given material. The critical value of Pugh's ratio is found to be 1.75: materials with B/G $>$ 1.75 are ductile, whereas those with B/G $<$ 1.75 are brittle in nature \cite{R6,R23,R67}. Lam\'e's first ($\lambda_{1}$) and second ($\lambda_{2}$) parameters help to parameterize Hooke's law in 3D for homogeneous and isotropic materials using the stress and strain tensors. $\lambda_{1}$ provides a measure of compressibility, and $\lambda_{2}$ is associated with the shear stiffness of a given material \cite{R66}. These two parameters are specified as follows: \begin{equation} \label{44)} \lambda_{1} = \dfrac{\nu E}{(1+\nu)(1-2\nu)},\,\, \lambda_{2}= \dfrac{E}{2(1+\nu)}. \end{equation} Kleinman's parameter ($\xi$) describes the stability of a solid under stretching or bending, and is defined as follows: \begin{equation} \label{45)} \xi = \dfrac{C_{11}+8C_{12}}{7C_{11}-2C_{12}}. \end{equation} $\xi$ = 1 implies that bond stretching dominates, whereas $\xi$ = 0 implies that bond bending dominates. Thermal conductivity, responsible for conducting heat energy, is an important physical parameter for practical applications. 
It decreases with increasing temperature toward a limiting value known as the minimum thermal conductivity ($\kappa_{m}$). The value of $\kappa_{m}$ can be obtained using the Cahill \cite{R69} and Clarke \cite{R68} models from the following expressions: \begin{equation} \label{46)} \kappa_{m}^{Clarke} = 0.87k_{B} M_{a}^{-2/3} E^{1/2} \rho^{1/6}, \end{equation} \begin{equation} \label{47)} \kappa_{m}^{Cahill} = (k_{B}/2.48)n^{2/3} (\nu_{l} + 2\nu_{t} ), \end{equation} where $k_{B}$ is Boltzmann's constant, E and $\rho$ are Young's modulus and the density of the material, respectively, and $M_{a}$ is the mean mass of the atoms in the unit cell, given by $M_{a} = M/(mN_{A})$ (M and m are the molar mass and the total number of atoms in each unit cell, respectively, and $N_{A}$ is Avogadro's constant). In Cahill's model, \textit{n} is the number density of atoms per unit volume, and $\nu_{l}$ and $\nu_{t}$ are the longitudinal and transverse sound velocities, respectively (see Eqs. \eqref{63)}, \eqref{64)}, and \eqref{65)}). Currently, \textsc{El\textit{A}Tools} calculates the $\kappa_{m}$ value using Clarke's model. Cauchy's pressure ($P_{C}$) is associated with the angular characteristic of atomic bonding in a given material and is defined for different symmetries by Eqs. \eqref{39)}, \eqref{40)}, and \eqref{41)}. Elastic anisotropy is an important property to characterize for a comprehensive understanding of the mechanical and physical properties of materials. This property influences a variety of physical processes, such as geophysical exploration of the Earth's interior \cite{R70}, the development of plastic deformation in crystals \cite{R71}, enhanced positively charged defect mobility \cite{R72}, microscale cracking in ceramics \cite{R73}, and the alignment or misalignment of quantum dots \cite{R74}. Various methods have been reported in the literature to quantify the elastic anisotropy based on the elastic moduli and the $C_{ij}$ tensor. 
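Before turning to the anisotropy indices, the minimum-thermal-conductivity models above can be sketched as a short script. This is an illustrative sketch in SI units, not the \textsc{El\textit{A}Tools} source; the material numbers at the bottom are made up, and the Cahill estimate reuses the sound-velocity relations of Eqs. (63)-(64).

```python
# Hedged sketch of the Clarke and Cahill minimum-thermal-conductivity
# estimates in SI units; all material numbers below are illustrative.
from math import sqrt

K_B = 1.380649e-23      # Boltzmann constant, J/K
N_A = 6.02214076e23     # Avogadro constant, 1/mol

def kappa_min_clarke(molar_mass, atoms_per_cell, young, density):
    """kappa_min = 0.87 k_B M_a^(-2/3) E^(1/2) rho^(1/6)  [W m^-1 K^-1]."""
    m_a = molar_mass / (atoms_per_cell * N_A)   # mean atomic mass, kg
    return 0.87 * K_B * m_a ** (-2.0 / 3.0) * sqrt(young) * density ** (1.0 / 6.0)

def kappa_min_cahill(n, bulk, shear, density):
    """kappa_min = (k_B / 2.48) n^(2/3) (v_l + 2 v_t), n in atoms/m^3."""
    v_l = sqrt((3.0 * bulk + 4.0 * shear) / (3.0 * density))  # longitudinal
    v_t = sqrt(shear / density)                               # transverse
    return K_B / 2.48 * n ** (2.0 / 3.0) * (v_l + 2.0 * v_t)

# Hypothetical compound: M = 0.060 kg/mol, 2 atoms per cell, E = 100 GPa,
# B = 100 GPa, G = 50 GPa, rho = 5000 kg/m^3, n = 5e28 atoms/m^3
print(kappa_min_clarke(0.060, 2, 100e9, 5000.0))    # ~1.2 W m^-1 K^-1
print(kappa_min_cahill(5e28, 100e9, 50e9, 5000.0))  # ~0.9 W m^-1 K^-1
```

Both models give values of order 1 W m$^{-1}$ K$^{-1}$ for typical crystalline solids, which is the expected scale of the amorphous limit.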
Ranganathan and Ostoja-Starzewski \cite{R75} derived a universal anisotropy index $A^{U}$ to provide a measure of elastic anisotropy. This index is called \textit{universal} because of its applicability to all crystal symmetries, and it can be defined as follows \cite{R75}: \begin{equation} \label{48)} A^{U} = \dfrac{B_{V}}{B_{R}}+5\dfrac{G_{V}}{G_{R}}-6. \end{equation} Following the Ranganathan and Ostoja-Starzewski equation, Li et al.\cite{R86} suggested the following anisotropy index (A$^{R}$) for 2D materials: \begin{equation} \label{49)} A^{R} = \dfrac{B_{V}}{B_{R}} + 2\dfrac{G_{V}}{G_{R}} - 3, \end{equation} where $B_{V}$ ($B_{R}$) and $G_{V}$ ($G_{R}$) are the area and shear moduli in the Voigt (Reuss) approximation, respectively, which can be defined as follows \cite{R86}: \begin{equation} \label{50)} B_{R} = \dfrac{1}{S_{11}+S_{22}+2S_{12}}, B_{V} = \dfrac{C_{11}+C_{22}+2C_{12}}{4}, \end{equation} \begin{equation} \label{51)} G_{R} = \dfrac{2}{S_{11}+S_{22}-2S_{12}+S_{66}}, G_{V} = \dfrac{C_{11}+C_{22}-2C_{12}+4C_{66}}{8}. \end{equation} Zener proposed an anisotropy factor ($A^{Z}$) for crystals of cubic symmetry, defined as the ratio of the extreme values of the orientation-dependent shear moduli, given by \cite{R76} \begin{equation} \label{52)} A^{Z} = \dfrac{2C_{44}}{C_{11}-C_{12}}. \end{equation} On the other hand, Chung and Buessem \cite{R70} observed that a cubic crystal is isotropic when the Voigt average of the shear moduli over all possible orientations, $G_{V}$, is equal to the inverse of the orientation-averaged shear compliance (Reuss average), $G_{R}$, which motivated the adoption of the factor \begin{equation} \label{53)} A^{CB} = \dfrac{G_{V}-G_{R}}{G_{V}+G_{R}}. \end{equation} $A^{CB}$ = 0 for isotropic materials, and any positive deviation from this limiting value indicates anisotropic behavior. With this definition, one can determine whether a given cubic crystal is more anisotropic than another. 
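For the 2D case, the Voigt/Reuss moduli of Eqs. (50)-(51) and the resulting index $A^{R}$ of Eq. (49) can be evaluated directly from the in-plane stiffness constants. The sketch below (illustrative, not the \textsc{El\textit{A}Tools} source) assumes an orthotropic sheet, for which the required compliances have the closed form $S_{11}=C_{22}/D$, $S_{22}=C_{11}/D$, $S_{12}=-C_{12}/D$ with $D=C_{11}C_{22}-C_{12}^{2}$, and $S_{66}=1/C_{66}$:

```python
# Sketch of the 2D Voigt/Reuss area and shear moduli and the Ranganathan-type
# 2D anisotropy index A^R for an orthotropic 2D sheet (illustrative only).
def anisotropy_2d(c11, c22, c12, c66):
    """Return A^R = B_V/B_R + 2 G_V/G_R - 3 from in-plane stiffnesses."""
    d = c11 * c22 - c12 ** 2
    s11, s22, s12, s66 = c22 / d, c11 / d, -c12 / d, 1.0 / c66
    bv = (c11 + c22 + 2.0 * c12) / 4.0
    br = 1.0 / (s11 + s22 + 2.0 * s12)
    gv = (c11 + c22 - 2.0 * c12 + 4.0 * c66) / 8.0
    gr = 2.0 / (s11 + s22 - 2.0 * s12 + s66)
    return bv / br + 2.0 * gv / gr - 3.0

# An isotropic sheet (C22 = C11, C66 = (C11 - C12)/2) gives A^R = 0:
print(round(anisotropy_2d(100.0, 100.0, 30.0, 35.0), 10))  # 0.0
```

Any deviation from the isotropic relation $C_{66}=(C_{11}-C_{12})/2$, or from $C_{11}=C_{22}$, makes $A^{R}$ strictly positive, mirroring the behavior of $A^{U}$ in 3D.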
Kube's log-Euclidean anisotropy index ($A^{L}$) is defined as follows: \begin{equation} \label{54)} A^{L} = \sqrt{\left[\ln\left(\dfrac{B_{V}}{B_{R}}\right)\right]^{2} +5 \left[\ln\left(\dfrac{G_{V}}{G_{R}}\right)\right]^{2}}. \end{equation} This is the most general definition of elastic anisotropy at present, as it was constructed to allow definitive comparisons between any two crystals. Isotropy is indicated by $A^{L}$ = 0, and any positive value is a measure of the elastic anisotropy. Two anisotropy indices analogous to the above equation, A$^{SU}$ and A$^{K}$, have been proposed by Li et al. for 2D materials as follows \cite{R86}: \begin{equation} \label{55)} A^{SU} = \sqrt{\left(\dfrac{{B_{V}}}{B_{R}}-1\right)^2+2\left(\dfrac{{G_{V}}}{G_{R}}-1\right)^2}, \end{equation} \begin{equation} \label{56)} A^{K} = \sqrt{\left(\ln \dfrac{{B_{V}}}{B_{R}}\right)^2+2\left(\ln \dfrac{{G_{V}}}{G_{R}}\right)^2}. \end{equation} Since hardness is essential for a full description of the mechanical behavior, various semi-empirical relations have been proposed to estimate it from the elastic moduli. In the \textsc{El\textit{A}Tools} package, the following semi-empirical correlations \cite{R78} between the Vickers hardness ($H_{V}$) and B, G, E, $\nu$, and B/G, the so-called macroscopic models for hardness prediction \cite{R79,R80,R81,R82}, are used: \begin{equation} \label{57)} H_{1a} = 0.0963B, \end{equation} \begin{equation} \label{58)} H_{1b} = 0.0607E, \end{equation} \begin{equation} \label{59)} H_{2} = -2.899+0.1769G, \end{equation} \begin{equation} \label{60)} H_{3} = 0.0635E, \end{equation} \begin{equation} \label{61)} H_{4} = \dfrac{B(1-2\nu)}{6(1+\nu)}, \end{equation} \begin{equation} \label{62)} H_{5} = 2\left(\dfrac{G^{3}}{B^{2}}\right)^{0.585}-3. \end{equation} To determine the aptitude of the above methods in predicting hardness for different types of materials, we have used the model proposed by Singh et al. \cite{R55}. They found that the best-performing of these five hardness models correlates with the crystal class and the energy bandgap ($E_{g}$). 
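The five macroscopic hardness correlations can be collected into one small helper. This is an illustrative sketch, not the \textsc{El\textit{A}Tools} source; for $H_{5}$ it uses Chen et al.'s published form $H_{V} = 2(k^{2}G)^{0.585}-3$ with $k = G/B$ (i.e. $(G^{3}/B^{2})^{0.585}$), and the diamond-like input values at the bottom are rough literature-scale numbers, not computed data:

```python
# Hedged sketch of the macroscopic Vickers-hardness correlations listed
# above; all quantities in GPa.  H5 follows Chen et al.'s model,
# H_v = 2 (k^2 G)^0.585 - 3 with k = G/B.
def hardness_models(B, G, E, nu):
    """Return the five macroscopic hardness estimates (GPa) as a dict."""
    return {
        "H1a": 0.0963 * B,
        "H1b": 0.0607 * E,
        "H2":  -2.899 + 0.1769 * G,
        "H3":  0.0635 * E,
        "H4":  B * (1.0 - 2.0 * nu) / (6.0 * (1.0 + nu)),
        "H5":  2.0 * (G ** 3 / B ** 2) ** 0.585 - 3.0,
    }

# Diamond-like input (B ~ 443 GPa, G ~ 535 GPa, E ~ 1140 GPa, nu ~ 0.07):
h = hardness_models(443.0, 535.0, 1140.0, 0.07)
print(round(h["H5"], 1))   # ~95, close to the measured hardness of diamond
```

The spread among the five estimates for a given material is exactly why a selection guide by crystal class and bandgap (Table below) is needed.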
Table \ref{Tab:1} provides a selection guide for the best method for calculating the hardness of different types of compounds. The longitudinal ($\nu_{l}$), transverse ($\nu_{t}$), and average ($\nu_{m}$) elastic wave velocities can be calculated from B, G, and $\rho$ as follows \cite{R83}: \begin{equation} \label{63)} \nu_{l} = \sqrt{\dfrac{3B+4G}{3\rho}}, \end{equation} \begin{equation} \label{64)} \nu_{t} = \sqrt{\dfrac{G}{\rho}}, \end{equation} \begin{equation} \label{65)} \nu_{m} = [\dfrac{1}{3}(\dfrac{2}{\nu_{t}^3}+\dfrac{1}{\nu_{l}^3})]^{-1/3}, \end{equation} where G and B denote $G_{VRH}$ and $B_{VRH}$, respectively. Conversely, these equations imply that one can obtain the elastic moduli and elastic constants by measuring the elastic wave velocities with ultrasonic waves. \begin{table}[H] \caption{A guide to select the best hardness calculation method as a function of the crystal class and bandgap ($E_{g}$). This model was proposed by Singh et al. \cite{R55}.} \centering \label{Tab:1} \begin{tabular}{cccccc} \hhline{======} \textbf{Type of material} & \textbf{Cubic} & \textbf{Hexagonal} & \textbf{Orthorhombic} & \textbf{Rhombohedral} & \textbf{General} \\ \hhline{======} \begin{tabular}[c]{@{}c@{}}\textbf{Insulator}\\\textbf{($E_{g}$ \textgreater{} 2 eV)}\end{tabular} & $H_{2}$ & $H_{1b}$ & $H_{2}$ & $H_{2}$ & $H_{2}$ \\ \hhline{------} \begin{tabular}[c]{@{}c@{}}\textbf{Semiconductor}\\\textbf{(0 \textless{} $E_{g}$ \textless{} 2 eV)}\end{tabular} & $H_{5}$ & $H_{1b}$, $H_{3}$ & - & $H_{2}$ & $H_{5}$ \\ \hhline{------} \begin{tabular}[c]{@{}c@{}}\textbf{Metal}\\\textbf{($E_{g}$ = 0)}\end{tabular} & $H_{1a}$ & $H_{4}$ & $H_{4}$ & $H_{4}$ & $H_{4}$ \\ \hhline{======} \end{tabular} \end{table} \begin{table}[H] \caption{List of the main elastic (wave) properties and anisotropy indices of 2D and 3D materials.} \centering \label{Tab:2} \begin{tabular}{ccl} \toprule \hhline{===} \multicolumn{2}{c}{\textbf{Properties}} & \textbf{\textbf{Formulae(s)}} 
\\ \hhline{===} \multirow{10}{*}{Elastic moduli and elastic parameters} & Bulk modulus (B) & Eqs.\eqref{35)} and \eqref{36)} \\ & Young's modulus (E) & Eq.\eqref{37)} \\ & Shear modulus (G) & Eqs.\eqref{35)} and \eqref{36)} \\ & P-wave modulus (M) & Eq.\eqref{43)} \\ & Poisson's ratio ($\nu$) & Eqs.\eqref{37)} and \eqref{42)} \\ & Pugh's ratio (B/G) & B/G \\ & Lame's first parameter ($\lambda_{1}$) & Eq.\eqref{44)} \\ & Lame's second parameter ($\lambda_{2}$)& Eq.\eqref{44)} \\ & Kleinman's parameter ($\xi$) & Eq.\eqref{45)} \\ & Minimum thermal conductivity ($\kappa_{m}$) & Eq.\eqref{46)} \\ \hhline{---} \multirow{3}{*}{Cauchy's pressures (P$_{C}$)} & Cubic symmetry & Eq.\eqref{39)} \\ & Hex., Trig., and Tetra. symmetries & Eq.\eqref{40)} \\ & Orthorhombic symmetry & Eq.\eqref{41)} \\ \hhline{---} \multirow{7}{*}{Elastic anisotropy indices} & Universal anisotropy index (A$^{U}$) & Eq.\eqref{48)} \\ & Zener's anisotropy index (A$^{Z}$) & Eq.\eqref{52)} \\ & Ranganathan anisotropy index (A$^{R}$) & Eq.\eqref{49)} \\ & Chung-Buessem anisotropy index (A$^{CB}$) & Eq.\eqref{53)} \\ & Kube's log-Euclidean anisotropy index (A$^{L}$) & Eq.\eqref{54)} \\ & 2D anisotropy index (A$^{SU}$) & Eq.\eqref{55)} \\ & Kube anisotropy index (A$^{K}$) & Eq.\eqref{56)} \\ \hhline{---} \multirow{6}{*}{Hardness methods} & H$_{1a}$ & Eq.\eqref{57)} \\ & H$_{1b}$ & Eq.\eqref{58)} \\ & H$_{2}$ & Eq.\eqref{59)} \\ & H$_{3}$ & Eq.\eqref{60)} \\ & H$_{4}$ & Eq.\eqref{61)} \\ & H$_{5}$ & Eq.\eqref{62)} \\ \hhline{---} \multirow{3}{*}{Elastic wave properties} & Longitudinal elastic wave velocity ($v_{l}$) & Eq.\eqref{63)} \\ & Transverse elastic wave velocity ($v_{t}$) & Eq.\eqref{64)} \\ & Mean elastic wave velocity ($v_{m}$) & Eq.\eqref{65)} \\ \bottomrule \hhline{===} \end{tabular} \end{table} \subsection{Appendix B} List of input, output, and temporary files in Table \ref{Tab:31}. 
The three executables \textsf{dat2gnu.x}, \textsf{dat2html.x}, and \textsf{dat2wrl.x} are called with the input command “\textsf{pro}” (for 3D representations and 2D projections) or “\textsf{hmpro}” (for 2D heat maps). The executable \textsf{dat2agr.x} accepts the input commands \textsf{box} (\textsf{boxpro}) and \textsf{polar} (\textsf{polarpro}), used for 2D projections in Cartesian and polar coordinates, respectively. The full details of the input commands and the displayable features of each of these post-processing codes are listed in Tables \ref{Tab:32}, \ref{Tab:33}, and \ref{Tab:34}. \begin{table}[H] \centering \small \caption{List of input, output, and temporary files related to the \textsf{Elatools.x}, \textsf{dat2gnu.x}, \textsf{dat2agr.x}, \textsf{dat2wrl.x}, and \textsf{dat2html.x} executables.} \label{Tab:31} \begin{threeparttable} \begin{tabular}{ccccc} \hhline{=====} \textbf{Program} & \textbf{Input command} & \textbf{Input file(s)} & \textbf{Output file(s)} & \textbf{Temporary file(s)} \\ \hhline{=====} \textbf{\textsc{El\textit{A}Tools}} & - & \begin{tabular}[c]{@{}c@{}}Cij.dat,\\Cij-2D.dat,\\INVELC-matrix,\\elast.output,\\ELADAT,\\ElaStic\_2nd.out\end{tabular} & \begin{tabular}[c]{@{}c@{}}Sij.dat, DATA.dat,\\ 2dcut\_pro.dat \tnote{1} ,\\3d\_pro.dat,\\pro\_2d\_sys.dat\end{tabular} & \begin{tabular}[c]{@{}c@{}}HKL, MESH,\\.aelastpro,\\.MaMiout, \\ 3d\_SD.dat\end{tabular} \\ \hhline{-----} \textbf{dat2gnu} & pro, hmpro, phmpro \tnote{2} & \begin{tabular}[c]{@{}c@{}}2dcut\_pro.dat,\\HKL,\\.MaMiout,\\3d\_SD.dat\end{tabular} & gpi files & .SDdat \\ \hhline{-----} \textbf{dat2agr} & polar, box, boxpro, polarpro & \begin{tabular}[c]{@{}c@{}}2dcut\_pro.dat\end{tabular} & agr files & - \\ \hhline{-----} \textbf{dat2wrl} & pro & \begin{tabular}[c]{@{}c@{}}3d\_pro.dat,\\.aelastpro,\\.MaMiout\end{tabular} & wrl files & - \\ \hhline{-----} \textbf{dat2html} & pro & \begin{tabular}[c]{@{}c@{}}3d\_pro.dat\\MESH\end{tabular} & html files & - \\ \hhline{=====} \end{tabular} 
\begin{tablenotes} \item[1]pro: bulk, comp, poisson, young, shear, pp, pf, ps, gp, gf, gs, pfp, pff, pfs, km, etc. \item[2] The full list of input commands is given in Tables \ref{Tab:32}, \ref{Tab:33}, and \ref{Tab:34}. Note that the current features and options of the \textsc{ElaTools} package may increase in future versions. \end{tablenotes} \end{threeparttable} \end{table} \begin{table}[H] \centering \caption{List of input commands and input files related to the \textsf{dat2gnu.x} executable.} \label{Tab:32} \begin{tabular}{cccc} \hhline{====} \textbf{Input command} & \multicolumn{1}{c}{\textbf{Input file}} & \textbf{Property} & \textbf{Type of graph} \\ \hhline{====} poi & 2dcut\_poisson.dat & $\nu$ & \multirow{18}{*}{\begin{tabular}[c]{@{}c@{}}Polar coordinates\\ for 3D system\end{tabular}} \\ young & 2dcut\_young.dat & E & \\ bulk & 2dcut\_bulk.dat & B & \\ shear & 2dcut\_shear.dat & G & \\ comp & 2dcut\_comp.dat & $\beta$ & \\ pp & 2dcut\_pveloc.dat & $\nu_{p}$: P-mode & \\ ps & 2dcut\_pveloc.dat & $\nu_{p}$: Slow-mode & \\ pf & 2dcut\_pveloc.dat & $\nu_{p}$: Fast-mode & \\ gp & 2dcut\_gveloc.dat & $\nu_{g}$: P-mode & \\ gs & 2dcut\_gveloc.dat & $\nu_{g}$: Slow-mode & \\ gf & 2dcut\_gveloc.dat & $\nu_{g}$: Fast-mode & \\ pfp & 2dcut\_pfaveloc.dat & PFA: P-mode & \\ pfs & 2dcut\_pfaveloc.dat & PFA: Slow-mode & \\ pff & 2dcut\_pfaveloc.dat & PFA: Fast-mode & \\ pall & 2dcut\_pveloc.dat & $\nu_{p}$: All modes & \\ gall & 2dcut\_gveloc.dat & $\nu_{g}$: All modes & \\ pfall & 2dcut\_pfaveloc.dat & PFA: All modes & \\ km & 3d\_km.dat & $\kappa_{m}$ & \\ \hhline{----} hmpoi & 3d\_poisson.dat & $\nu$ & \multirow{15}{*}{\begin{tabular}[c]{@{}c@{}}Heat map diagram\\ for 3D system\end{tabular}} \\ hmyoung & 3d\_young.dat & E & \\ hmbulk & 3d\_bulk.dat & B & \\ hmcomp & 3d\_comp.dat & $\beta$ & \\ hmshear & 3d\_shear.dat & G & \\ hmpall & \begin{tabular}[c]{@{}c@{}}3d\_pp.dat,\\3d\_ps.dat,\\3d\_pf.dat\end{tabular} & $\nu_{p}$: All modes & \\ hmgall & 
\begin{tabular}[c]{@{}c@{}}3d\_gp.dat,\\3d\_gs.dat,\\3d\_gf.dat\end{tabular} & $\nu_{g}$: All modes & \\ hmpfall & \begin{tabular}[c]{@{}c@{}}3d\_pfp.dat,\\3d\_pfs.dat,\\3d\_pff.dat\end{tabular} & PFA: All modes & \\ hmkm & 3d\_km.dat & $\kappa_{m}$ & \\ \hhline{----} 2dpoi & poisson\_2d\_sys.dat & $\nu$ & \multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Polar coordinates\\ for 2D system\end{tabular}} \\ 2dyoung & young\_2d\_sys.dat & E & \\ 2dshear & shear\_2d\_sys.dat & G & \\ \hhline{----} phmpoi & poisson\_2d\_sys.dat & $\nu$ & \multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Polar heat map diagram\\ for 2D system\end{tabular}} \\ phmyou & young\_2d\_sys.dat & E & \\ phmshe & shear\_2d\_sys.dat & G & \\ \hhline{====} \end{tabular} \end{table} \begin{table}[H] \centering \caption{List of input commands and input files related to the \textsf{dat2wrl.x} and \textsf{dat2html.x} executables.} \label{Tab:33} \begin{tabular}{cccc} \hhline{====} \textbf{Input command} & \textbf{Input file(s)} & \textbf{Property} & \textbf{Type of graph} \\ \hhline{====} poi & 3d\_poisson.dat & $\nu$ & \multirow{21}{*}{\begin{tabular}[c]{@{}c@{}}Spherical coordinates \\ for 3D system\end{tabular}} \\ young & 3d\_young.dat & E & \\ bulk & 3d\_bulk.dat & B & \\ shear & 3d\_shear.dat & G & \\ comp & 3d\_comp.dat & $\beta$ & \\ pp & 3d\_pp.dat & $\nu_{p}$: P-mode & \\ ps & 3d\_ps.dat & $\nu_{p}$: Slow-mode & \\ pf & 3d\_pf.dat & $\nu_{p}$: Fast-mode & \\ gp & 3d\_gp.dat & $\nu_{g}$: P-mode & \\ gs & 3d\_gs.dat & $\nu_{g}$: Slow-mode & \\ gf & 3d\_gf.dat & $\nu_{g}$: Fast-mode & \\ pfp & 3d\_pfp.dat & PFA: P-mode & \\ pfs & 3d\_pfs.dat & PFA: Slow-mode & \\ pff & 3d\_pff.dat & PFA: Fast-mode & \\ pall & \begin{tabular}[c]{@{}c@{}}3d\_pp.dat,\\3d\_ps.dat,\\3d\_pf.dat\end{tabular} & $\nu_{p}$: All modes & \\ gall & \begin{tabular}[c]{@{}c@{}}3d\_gp.dat,\\3d\_gs.dat,\\3d\_gf.dat\end{tabular} & $\nu_{g}$: All modes & \\ km & 3d\_km.dat & $\kappa_{m}$ & \\ \hhline{====} \end{tabular} \end{table} 
\begin{table}[H] \centering \caption{List of input commands and input files related to the \textsf{dat2agr.x} executable.} \label{Tab:34} \begin{tabular}{cccc} \hhline{====} \textbf{Input command} & Input file(s) & \textbf{Property} & \textbf{Type of graph} \\ \hhline{====} box & \begin{tabular}[c]{@{}c@{}}2dcut\_young.dat,\\2dcut\_shear.dat,\\2dcut\_bulk.dat,\\2dcut\_comp.dat,\\2dcut\_poisson.dat\end{tabular} & E, G, B, $\beta$, and $\nu$: multiplot & \multirow{10}{*}{\begin{tabular}[c]{@{}c@{}}Polar coordinates \\of 2D cuts \\ in the 3D system\end{tabular}} \\ boxpoi & 2dcut\_poisson.dat & $\nu$ & \\ boxyoung & 2dcut\_young.dat & E & \\ boxbulk & 2dcut\_bulk.dat & B & \\ boxshear & 2dcut\_shear.dat & G & \\ boxcomp & 2dcut\_comp.dat & $\beta$ & \\ boxkm & 2dcut\_km.dat & $\kappa_{m}$ & \\ boxpall & 2dcut\_pveloc.dat & $\nu_{p}$: All modes & \\ boxgall & 2dcut\_gveloc.dat & $\nu_{g}$: All modes & \\ boxpfall & 2dcut\_pfveloc.dat & PFA: All modes & \\ \hhline{----} polar & \begin{tabular}[c]{@{}c@{}}2dcut\_young.dat,\\2dcut\_shear.dat,\\2dcut\_bulk.dat,\\2dcut\_comp.dat,\\2dcut\_poisson.dat\end{tabular} & E, G, B, $\beta$, and $\nu$: multiplot & \multirow{10}{*}{\begin{tabular}[c]{@{}c@{}}Cartesian coordinates\\of 2D cuts \\ in the 3D system\end{tabular}} \\ polarpoi & 2dcut\_poisson.dat & $\nu$ & \\ polaryoung & 2dcut\_young.dat & E & \\ polarbulk & 2dcut\_bulk.dat & B & \\ polarshear & 2dcut\_shear.dat & G & \\ polarcomp & 2dcut\_comp.dat & $\beta$ & \\ polarkm & 2dcut\_km.dat & $\kappa_{m}$ & \\ polarpall & 2dcut\_pveloc.dat & $\nu_{p}$: All modes & \\ polargall & 2dcut\_gveloc.dat & $\nu_{g}$: All modes & \\ polarpfall & 2dcut\_pfveloc.dat & PFA: All modes & \\ \hhline{====} \end{tabular} \end{table} \subsection{Appendix C} List of the main elastic properties and anisotropy indices of the ZnAu$_{ 2}$(CN)$_{ 4}$, GaAs, CrB$_{2}$, $\delta$-phosphorene ($\delta$-P), and Pd$_{2}$O$_{6}$Se$_{2}$ compounds. 
\textsc{El\textit{A}Tools} also calculates a measure of the anisotropy \textit{A}$_{M}$ of each elastic modulus \textit{M}, defined as follows: \begin{equation} \label{66)} A_{M}=\begin{cases}\dfrac{M_{MAX}}{M_{MIN}} & \text{if } \operatorname{sign}(M_{MAX})=\operatorname{sign}(M_{MIN}), \\ \infty & \text{otherwise.}\end{cases} \end{equation} $A_{M}$ is particularly interesting because a marked anisotropy of the mechanical properties is often associated with anomalous mechanical behavior, such as NPR and NLC. As can be seen in the tables of this appendix, when $A_{M}$ is infinite, the material has anomalous mechanical properties. \begin{table}[H] \centering \label{Tab:3} \caption{The main elastic properties of the ZnAu$_{2}$(CN)$_{4}$ compound.} \begin{tabular}{cccccccc} \hhline{========} \begin{tabular}[c]{@{}c@{}}\textbf{Elastic}\\\textbf{properties}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Bulk}\\\textbf {modulus}\\\textbf{(GPa)}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Shear}\\\textbf{modulus}\\\textbf{(GPa)}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Young}\\\textbf{modulus}\\\textbf{(GPa)}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Poisson’s}\\\textbf{ratio}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Pugh}\\\textbf{ratio}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{P-wave}\\\textbf{modulus}\\\textbf{(GPa)}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Linear}\\\textbf{Compressibility}\\\textbf{(TPa$^{-1}$)}\end{tabular} \\ \hhline{========} \textbf{Max} & 5.747×10$^{2}$ & 12.10 & 28.25 & 1.255 & - & - & 62.328 \\ \hhline{--------} \textbf{Min} & -5.477×10$^{2}$ & 3.18 & 7.26 & -0.021 & - & - & -51.689 \\ \hhline{--------} \textbf{Voigt} & 55.756 & 8.753 & 24.954 & 0.4254 & 6.3696 & 67.4267 & - \\ \hhline{--------} \textbf{Reuss} & 13.705 & 4.618 & 12.456 & 0.4597 & 2.9674 & 19.8627 & - \\ \hhline{--------} \begin{tabular}[c]{@{}c@{}}\textbf{Average}\\\textbf{(Hall)}\end{tabular} & 34.730 & 6.686 & 18.705 & 0.4426 & 5.1945 & 43.6447 & - \\ \hhline{--------} 
\textbf{$\textbf{A}_{\textbf{M}}$} & $\infty$~ & 3.804 & 3.893 & $\infty$~ & - & - & $\infty$~ \\ \hhline{========} \end{tabular} \end{table} \begin{table}[H] \centering \label{Tab:4} \caption{The main elastic properties of the CrB$_{2}$ compound.\\} \begin{tabular}{cccccccc} \hhline{========} \begin{tabular}[c]{@{}c@{}}\textbf{Elastic}\\\textbf{properties}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Bulk }\\\textbf{modulus}\\\textbf{(GPa)}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Shear}\\\textbf{modulus}\\\textbf{(GPa)}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Young}\\\textbf{modulus}\\\textbf{(GPa)}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Poisson’s}\\\textbf{ratio}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Pugh}\\\textbf{ratio}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{P-wave}\\\textbf{modulus}\\\textbf{(GPa)}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Linear}\\\textbf{Compressibility}\\\textbf{(TPa$^{-1}$)}\end{tabular} \\ \hhline{========} \textbf{Max} & 11.922×10$^{2}$ & 197.40 & 463.18 & 0.453 & - & - & 1.869 \\ \hhline{--------} \textbf{Min} & 5.351×10$^{2}$ & 125.04 & 260.22 & 0.162 & - & - & 0.839 \\ \hhline{--------} \textbf{Voigt} & 295.122 & 169.833 & 427.497 & 0.2586 & 1.7377 & 521.5667 & - \\ \hhline{--------} \textbf{Reuss} & 282.009 & 161.560 & 406.966 & 0.2595 & 1.7455 & 497.4226 & - \\ \hhline{--------} \begin{tabular}[c]{@{}c@{}}\textbf{Average}\\\textbf{(Hall)}\end{tabular} & 288.565 & 165.697 & 417.231 & 0.2635 & 1.7415 & 509.4947 & - \\ \hhline{--------} \textbf{$\textbf{A}_{\textbf{M}}$} & 2.227 & 1.579 & 1.780 & 2.793 & - & - & 2.228 \\ \hhline{========} \end{tabular} \end{table} \begin{table}[H] \centering \caption{List of Maximum (Max), minimum (Min) phase and group velocities, and anisotropy values of GaAs compound.} \label{Tab:8} \begin{tabular}{ccccccc} \hhline{=======} \multirow{2}{*}{\textbf{Property }} & \multicolumn{3}{c}{\textbf{Phase velocity (km/s)}} & 
\multicolumn{3}{c}{\textbf{Group velocity (km/s)}} \\ \cline{2-7} & \textbf{P mode} & \textbf{FS mode} & \textbf{SS mode} & \textbf{P mode} & \textbf{FS mode} & \textbf{SS mode} \\ \hhline{=======} \textbf{Max} & 5.398 & 3.346 & 3.346 & 5.398 & 3.369 & 3.490 \\ \textbf{Min} & 4.731 & 2.805 & 2.475 & 4.731 & 2.935 & 2.476 \\ \textbf{A$_{M}$} & 1.14 & 1.19 & 1.35 & 1.14 & 1.15 & 1.41 \\ \hhline{=======} \end{tabular} \end{table} \begin{table}[H] \centering \label{Tab:5} \caption{The main elastic properties of the $\delta$-P 2D compound.} \begin{tabular}{ccccc} \hhline{=====} \begin{tabular}[c]{@{}c@{}}\textbf{Elastic}\\\textbf{properties}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Area modulus}\\\textbf{(N/m)}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Shear modulus}\\\textbf{(N/m)}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Young modulus}\\\textbf{(N/m)}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Poisson’s}\\\textbf{ratio}\end{tabular} \\ \hhline{=====} \textbf{Max} & - & 66.452 & 142.877 & 0.290 \\ \hhline{-----} \textbf{Min} & - & 24.500 & 62.277 & -0.267 \\ \hhline{-----} \textbf{Voigt} & 47.608 & 47.909 & - & - \\ \hhline{-----} \textbf{Reuss} & 44.395 & 35.808 & - & - \\ \hhline{-----} \textbf{\textit{xy}-plane} & - & 24.500 & - & -0.159 \\ \hhline{-----} \textbf{\textit{yx}-plane} & - & - & - & -0.267 \\ \hhline{-----} \textbf{\textit{x}-direction} & - & - & 84.872 & - \\ \hhline{-----} \textbf{\textit{y}-direction} & - & - & 142.868 & - \\ \hhline{=====} \end{tabular} \end{table} \begin{table}[H] \centering \caption{The main elastic properties of the Pd$_{2}$O$_{6}$Se$_{2}$ 2D compound. 
\\} \label{Tab:9} \begin{tabular}{ccccc} \hhline{=====} \begin{tabular}[c]{@{}c@{}}\textbf{Elastic}\\\textbf{properties}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Area modulus}\\\textbf{(N/m)}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Shear modulus}\\\textbf{(N/m)}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Young modulus}\\\textbf{(N/m)}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Poisson's}\\\textbf{ratio}\end{tabular} \\ \hhline{=====} \textbf{Max} & - & 16.682 & 65.892 & 1.315 \\ \hhline{-----} \textbf{Min} & - & 4.857 & 6.573 & -0.492 \\ \hhline{-----} \textbf{Voigt} & 23.445 & 15.828 & - & - \\ \hhline{-----} \textbf{Reuss} & 7.686 & 7.524 & - & - \\ \hhline{-----} \textbf{\textit{xy}-plane} & - & 14.930 & - & 0.168 \\ \hhline{-----} \textbf{\textit{yx}-plane} & - & - & - & 0.164 \\ \hhline{-----} \textbf{\textit{x}-direction} & - & - & 39.731 & - \\ \hhline{-----} \textbf{\textit{y}-direction} & - & - & 38.390 & - \\ \hhline{=====} \end{tabular} \end{table} \begin{table}[H] \centering \label{Tab:6} \caption{List of anisotropy indices of ZnAu$_{2}$(CN)$_{4}$, CrB$_{2}$, $\delta$-phosphorene ($\delta$-P), and Pd$_{2}$O$_{6}$Se$_{2}$ compounds.} \begin{tabular}{ccccc} \hhline{=====} \multirow{2}{*}{\textbf{Anisotropy index}} \\ \multirow{2}{*}{\textbf{and Cauchy pressure}} & \multicolumn{4}{c}{\textbf{Compounds }} \\ \cline{2-5} & \textbf{\textbf{\textbf{\textbf{ZnAu$_{2}$(CN)$_{4}$}}}} & \textbf{\textbf{CrB$_{2}$}} & \textbf{\textbf{$\delta$-P}} & \textbf{Pd$_{2}$O$_{6}$Se$_{2}$} \\ \hhline{=====} \textbf{\textbf{A$^{U}$}} & 7.5448 & 0.3025 & - & - \\ \hhline{-----} \textbf{\textbf{A$^{L}$}} & 4.7370 & 0.3025 & - & - \\ \hhline{-----} \textbf{\textbf{A$^{CB}$}} & 0.3092 & 0.0250 & - & - \\ \hhline{-----} \textbf{\textbf{A$^{SU}$}} & - & - & 0.4833 & 2.5760 \\ \hhline{-----} \textbf{\textbf{A$^{R}$}} & - & - & 0.7482 & 4.2574 \\ \hhline{-----} \textbf{\textbf{A$^{K}$}} & - & - & 0.1814 & 0.6657 \\ \hhline{-----} 
\textbf{P$_{C}^{a}$} & 48.50 & 25.90 & - & - \\ \hhline{-----} \textbf{P$_{C}^{c}$} & 26.50 & -15.60 & - & - \\ \hhline{=====} \end{tabular} \end{table} \subsection{Appendix D} The colors available in the visualization of elastic properties in \textsc{El\textit{A}Tools} are listed below. Personalization of the colors is described in the Supplementary Information file. \begin{table}[H] \centering \caption{List of default colors in the 3D and 2D visualization of elastic properties such as Young's modulus, bulk modulus, shear modulus, linear compressibility, and Poisson's ratio.} \label{Tab:7} \begin{tabular}{cccc} \hhline{====} \begin{tabular}[c]{@{}c@{}}\textbf{3D or 2D representation}\\\textbf{of elastic properties}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Positive value or}\\\textbf{maximum positive}\\\textbf{value}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Minimum positive}\\\textbf{value}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Negative value or}\\\textbf{minimum negative}\\\textbf{value}\end{tabular} \\ \hhline{====} \textbf{Young's modulus} & green & - & - \\ \hhline{----} \textbf{Bulk modulus} & green & - & - \\ \hhline{----} \textbf{Shear modulus} & blue & green & - \\ \hhline{----} \textbf{Linear compressibility} & green & - & red \\ \hhline{----} \textbf{Poisson's ratio} & blue & green & red \\ \hhline{====} \end{tabular} \end{table} \section{Acknowledgement} Parviz Saeidi is acknowledged for the valuable comments on the first draft of the manuscript. \section{References} \bibliographystyle{apsrev4-1}
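As a numerical cross-check of the anisotropy-index table, the $A^{R}$ row can be reproduced from the tabulated Voigt and Reuss moduli via the 2D analogue of the Ranganathan universal index, $A^{R} = 2G_{V}/G_{R} + K_{V}/K_{R} - 3$. This formula is inferred here from the tabulated values themselves and should be verified against the \textsc{El\textit{A}Tools} documentation; a minimal sketch:

```python
# Reproduce the A^R row of the anisotropy table from the Voigt/Reuss
# area (K) and shear (G) moduli listed above, using the 2D analogue of
# the Ranganathan index A^R = 2*G_V/G_R + K_V/K_R - 3 (inferred formula;
# verify against the ElATools documentation).
moduli = {
    "delta-P":  dict(KV=47.608, KR=44.395, GV=47.909, GR=35.808),
    "Pd2O6Se2": dict(KV=23.445, KR=7.686, GV=15.828, GR=7.524),
}
AR = {name: 2 * m["GV"] / m["GR"] + m["KV"] / m["KR"] - 3
      for name, m in moduli.items()}
for name, value in AR.items():
    print(f"{name}: A^R = {value:.4f}")  # tabulated values: 0.7482 and 4.2574
```

Both computed values agree with the tabulated $A^{R}$ entries to about $10^{-3}$.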
2211.08646
\section{Introduction} \label{s_introduction} Due to the explosive growth of wireless communication devices, spectrum congestion is becoming a severe problem, putting increasing pressure on existing radar, communication, and other wireless systems. To cope with this issue, the concept of integrated sensing and communication~(ISAC) has been proposed~\cite{liu2020jointradar}. ISAC allows sensing and communication functions to share a hardware platform and spectrum, which reduces the overall hardware cost and improves spectrum efficiency. In ISAC systems, massive multiple-input multiple-output~(MIMO) with a phased array is widely acknowledged as one of the vital techniques because the spatial diversity it provides can be leveraged to improve sensing accuracy and support high-speed communication. However, the following two bottlenecks have limited the performance of phased array-enabled ISAC systems. \emph{First}, the power consumption of the phased array is relatively high due to the extensive use of complicated components such as phase shifters and power splitters. \emph{Second}, the gain of the phased array is limited because the antenna spacing is typically half a wavelength and is hard to shrink due to hardware implementation difficulties~\cite{wan2021terahertz}. Recently, \emph{holographic radio} has been proposed as a new paradigm to address the above drawbacks of massive MIMO enabled by phased arrays~\cite{pizzo2020spatially}. According to the vision of holographic radio, the antenna array in wireless systems is composed of a tremendous number of inexpensive antenna elements with low power consumption, tiny size, and ultra-close element spacing. Thus, by exploiting the high spatial diversity provided by numerous elements, high directive gain can be achieved for ISAC with an acceptable power consumption~\cite{deng2021reconfigurableholographic}.
To fulfill this vision, reconfigurable holographic surfaces~(RHSs) are viewed as a promising solution~\cite{yurduseven2017design}. Specifically, the RHS is a type of metamaterial antenna whose sub-wavelength radiation elements are compactly arranged on a printed circuit board~(PCB). Because the radiation amplitudes of the RHS elements are tunable, the radiation pattern of the RHS can be customized without the use of complicated phase shifters, which significantly reduces the power consumption and cost~\cite{deng2021reconfigurable}. Besides, the feed of the RHS is embedded in the PCB, enabling the RHS to directly transmit or receive wireless signals with a low-profile structure. This is significantly different from reconfigurable intelligent surfaces~(RISs), another type of metamaterial antenna~\cite{zhang2021reconfigurable}, which create the desired radiation pattern by reflecting the signals from external feeds and tuning the reflection phase shifts of their elements\footnote{\textcolor{black}{The RIS and RHS also differ in terms of system model and hardware implementation. Specifically, since the feeds of the RIS are separate from the metasurface, there is an extra reflection path between the BS and the RIS in the RIS model compared with the RHS model. Another difference in the system model is that the phase shifts of RIS elements are tunable, whereas the radiation amplitudes are adjusted in the RHS. As for the implementation, the RIS and the BS antenna are deployed in different locations, while the RHS is compactly integrated with the BS and serves as the antenna of the BS.}}. \begin{figure*}[!t] \setlength{\abovecaptionskip}{-0pt} \setlength{\belowcaptionskip}{-18pt} \centering \includegraphics[height=2in]{RHS_scenario_a.pdf} \caption{System scenario.} \label{f_scenario} \vspace{-4mm} \end{figure*} In this article, we propose \emph{holographic ISAC}, where holographic radio is used to further improve the performance of ISAC systems.
We design an RHS-enabled holographic beamforming scheme to realize holographic ISAC and evaluate its feasibility through experiments. Our contributions can be summarized as follows. \begin{itemize} \item \textbf{Working Principles of RHSs:} The RHS is a planar antenna in which the surface wave is first injected through the feeds and then radiated into free space through metamaterial elements. Since the electromagnetic responses of the metamaterial elements are tunable, the RHS does not rely on phase shifters to achieve a diversity of radiation patterns. \item \textbf{Beamforming Scheme for Holographic ISAC:} To serve communication users and detect targets at the same time, we propose a holographic beamforming scheme~\cite{zhang2022holographic}, where the communication streams and radar waveforms are first precoded at the base station~(BS) and then modulated by the RHS. The beamformers at the BS and the RHS are carefully selected to optimize the performance metrics for ISAC. \item \textbf{Implementation of Holographic ISAC:} Different from the theoretical investigation in~\cite{zhang2022holographic}, we build a hardware prototype consisting of an ISAC transceiver module, a communication user module, and a target module to validate the feasibility of holographic ISAC. Simulation and experimental results verify that, with the aid of the RHS, the proposed platform is able to simultaneously sense the target and communicate with the user at a lower power consumption than the phased array-based platform. \item \textbf{Challenges of Holographic ISAC:} In addition to the holographic beamforming scheme and prototype verification, we also discuss relevant topics including the fundamental designs for the RHS, the limitations and trade-offs of holographic ISAC, and the optimization of the holographic ISAC transceiver. \end{itemize} The rest of this article is organized as follows. In Section~\ref{s_rhs}, the hardware structure and working principle of the RHS are described.
A holographic beamforming scheme for ISAC systems is introduced in Section~\ref{s_hbi}. In Section~\ref{s_phis}, a hardware prototype of the holographic ISAC system is presented. The corresponding experimental results are reported in Section~\ref{s_pe}. In Section~\ref{s_frdkc}, future research directions and key challenges are discussed. Finally, conclusions are drawn in Section~\ref{s_c}. \vspace{-1mm} \section{RHS: Hardware and Principles} \label{s_rhs} \begin{figure*}[!t] \setlength{\abovecaptionskip}{-0pt} \setlength{\belowcaptionskip}{-18pt} \centering \includegraphics[height=1.8in]{diagram6_a.pdf} \caption{Block diagram of the proposed holographic beamforming scheme.} \label{f_scheme} \vspace{-4mm} \end{figure*} \vspace{-1mm} \subsection{Hardware Structure} \label{ss_hs} \vspace{-1mm} The RHS is a special type of leaky-wave antenna with controllable radiation patterns enabled by reconfigurable metamaterial elements. After the electromagnetic wave is injected into the antenna from the feeds, it propagates in the waveguide and excites the metamaterial elements embedded on the waveguide to leak the energy into free space. The radiation pattern of the RHS is formed by superposing the radiated signals, which are also referred to as the object waves, from all the elements. Thus, by controlling the radiation amplitudes of the elements, the radiation pattern can be altered to generate the desired beams. Fig.~\ref{f_scenario} illustrates the hardware structure of the RHS. It consists of three parts, i.e., the feeds, waveguide, and metamaterial elements, which are elaborated on as follows: \begin{itemize} \item \textbf{Feed:} The feeds of the RHS are mounted at the bottom or the edge of the waveguide surface. The other side of each feed is connected to an RF chain, which injects the electromagnetic waves, also called reference waves, into the RHS through the feed.
\item \textbf{Waveguide:} The waveguide of the RHS is a planar medium where the reference wave propagates. The thickness of the waveguide is typically on the order of millimeters, leading to the ultra-thin characteristic of the RHS. \item \textbf{Metamaterial element:} The metamaterial elements, arranged as a 1D or 2D array, are laid on top of the waveguide. Their electromagnetic responses can be independently adjusted by applying different bias voltages, enabling the RHS to control its radiation pattern. \end{itemize} Compared with the phased array, the structure of the RHS is much simpler because the RHS does not rely on complex feeding circuits and phase shifters. Moreover, the simpler structure also leads to lower power consumption since power-consuming components such as amplifiers, phase shifters, and power splitters are not necessary. \vspace{-2mm} \subsection{Working Principle} \vspace{-1mm} \label{ss_wp} The working principle of the RHS is to first construct the interference pattern between the reference and object waves, and then excite the interference pattern recorded by the RHS to produce the desired radiation pattern~\cite{fong2010scalar}. The details of this principle are elaborated on as follows. As shown in Fig.~\ref{f_scenario}, we consider an RHS with $M$ elements and $K$ feeds. Let $x_{\mathrm{ref}, m}$ and $x_{\mathrm{obj}, m}$ denote the reference and the object waves at the location of the $m$-th element, respectively. Here, the reference wave $x_{\mathrm{ref}, m}$ is the superposition of the reference waves from all the feeds, which is determined by the locations of the feeds, the location of the $m$-th element, and the propagation vector in the waveguide. The object wave $x_{\mathrm{obj}, m}$, whose main lobe points towards direction $(\theta_0, \phi_0)$, is determined by the location of the $m$-th element and the propagation vector towards direction $(\theta_0, \phi_0)$ in free space.
Based on the holographic interference principle~\cite{deng2021reconfigurable}, the interference pattern $x_{\mathrm{int}, m}$ at the location of the $m$-th radiation element can be expressed as $x_{\mathrm{ref}, m} x^*_{\mathrm{obj}, m}$. When the interference pattern is recorded by the RHS elements and excited by the reference wave $x_{\mathrm{ref}, m}$, the wave radiated by the RHS is proportional to the desired object wave $x_{\mathrm{obj}, m}$; in this way, the desired radiation pattern with its main lobe pointing towards direction $(\theta_0, \phi_0)$ is generated. Since the interference pattern is complex-valued, it cannot be directly recorded by the RHS elements, whose phases are not adjustable. Thus, we adopt a strategy of tuning the amplitudes of the elements by using a real-valued pattern, which is also referred to as a holographic pattern\footnote{\textcolor{black}{The information loss caused by using a real interference pattern rather than a complex interference pattern is very small. The readers may refer to~\cite{smith2017analysis} for more details.}}. The basic idea of this strategy is to radiate more energy when the reference wave and the object wave are in phase and radiate less energy when the waves are out of phase. Typically, the real part of the interference pattern $\text{Re}[x_{\mathrm{int}, m}]$ is chosen to construct this holographic pattern because $\text{Re}[x_{\mathrm{int}, m}]$ is negatively related to the phase difference between $x_{\mathrm{ref}, m}$ and $x_{\mathrm{obj}, m}$. \vspace{-2mm} \section{Holographic Beamforming for ISAC} \label{s_hbi} In this section, we first introduce the holographic beamforming scheme for a multi-user case, then discuss the performance metrics for holographic ISAC, and finally design the beamformers to optimize the ISAC performance.
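The amplitude-only holographic principle of Section~\ref{ss_wp} can be illustrated with a toy numerical sketch for a 1D RHS with a single feed. All numbers here (element spacing, waveguide index, carrier frequency) are illustrative assumptions, not values from the prototype: the sketch builds $\text{Re}[x_{\mathrm{ref}, m} x^*_{\mathrm{obj}, m}]$, maps it to a radiation amplitude in $[0,1]$, and checks that exciting this pattern with the reference wave steers the beam towards the desired direction.

```python
import numpy as np

# Toy 1D RHS: M elements along x, single feed at x = 0, amplitude-only control.
# Spacing, waveguide index, and frequency below are illustrative assumptions.
c = 3e8
f = 12e9                       # carrier frequency (Hz)
k0 = 2 * np.pi * f / c         # free-space wavenumber
ks = 1.8 * k0                  # waveguide wavenumber (assumed index 1.8)
M = 64
x = np.arange(M) * 0.004       # element positions, 4 mm spacing (m)

theta0 = np.deg2rad(30)        # desired beam direction

x_ref = np.exp(-1j * ks * x)                     # reference wave at each element
x_obj = np.exp(-1j * k0 * np.sin(theta0) * x)    # desired object wave
interference = x_ref * np.conj(x_obj)            # x_int,m = x_ref,m * x_obj,m^*

# Amplitude-only "holographic" pattern: map Re[x_int] from [-1, 1] to [0, 1].
amp = (np.real(interference) + 1) / 2

def array_factor(theta):
    # Field radiated towards angle theta; excitation = amplitude * reference wave.
    steer = np.exp(1j * k0 * np.sin(theta) * x)
    return np.abs(np.sum(amp * x_ref * steer))

angles = np.deg2rad(np.linspace(-90, 90, 721))
gains = np.array([array_factor(t) for t in angles])
peak = np.rad2deg(angles[np.argmax(gains)])
print(f"beam peak at {peak:.1f} deg")  # expected near the 30 deg target
```

The constructive term $\tfrac14\sum_m 1$ hidden inside $\mathrm{amp}_m\,x_{\mathrm{ref},m}$ is what makes the beam form at $\theta_0$ even though only amplitudes, never phases, are tuned.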
\begin{figure*}[!t] \setlength{\abovecaptionskip}{-0pt} \setlength{\belowcaptionskip}{-18pt} \centering \includegraphics[height=2.5in]{prototype_2_a.pdf} \caption{Illustration of the RHS-assisted ISAC prototype.} \label{f_prototype} \vspace{-4mm} \end{figure*} \vspace{-2mm} \subsection{Holographic Beamforming Scheme} \label{ss_hbs} \textcolor{black}{As shown in Fig.~\ref{f_scenario}, the proposed ISAC system consists of a BS, an RHS\footnote{\textcolor{black}{The number of RHS feeds can be $1$ in the system~\cite{zhang2018experimental}. The overall performance of a single-feed system, e.g., the achievable data rate, is lower than that of a multi-feed system ($K > 1$) due to the lower flexibility in beam steering and shaping. However, the power consumption and cost of a system using a single-feed multi-beam antenna are lower than those of a system using a multi-feed antenna because fewer hardware components such as RF chains are required.}}, a MIMO antenna array, multiple downlink users, and multiple targets. The BS, the RHS, and the MIMO antenna array are co-located with wired connections, unlike a usual RIS-assisted ISAC system where these terminals are not co-located. The BS acts as a terminal that feeds signals to the RHS for transmission and processes the echoes received via the MIMO antenna array. Signals are transmitted only via the RHS, while the MIMO antenna array is used solely to receive the echo signals from the targets. In order to send different data streams to the downlink users and sense multiple targets at the same time, we propose the holographic beamforming scheme for the RHS-aided ISAC system, where the BS and the RHS perform digital and analog beamforming, respectively, to transmit ISAC signals.} \textcolor{black}{Specifically, we divide the timeline of the scheme into cycles, and each cycle contains two steps, i.e., the transmission and reception steps.
In the transmission step, the BS and the RHS cooperate with each other to conduct transmit beamforming and emit ISAC signals. In the reception step, the ISAC signals are recorded by the communication users for decoding, and the echo signals reflected by the targets are received by the MIMO antenna array for target sensing. The block diagram of the signal chain in each cycle is illustrated in Fig.~\ref{f_scheme} and is further described in the following.} \begin{itemize} \item \textcolor{black}{\textbf{Transmission step:} In this step, the ISAC beams that simultaneously serve communication users and detect targets are generated and transmitted. Specifically, $L$ data streams and $K$ radar waveforms are first separately precoded by the BS via different digital beamformers to generate $K$ precoded data streams and $K$ precoded radar waveforms. The $k$-th precoded data stream and radar waveform are added together, so that $K$ summed signals are produced in total. These signals are then sent to the $K$ RF chains, and the RF chains use the input baseband signals to modulate the carrier signals and deliver the modulated signals to the feeds of the RHS. The injected signals from the feeds are converted to radiation signals through the waveguide and metamaterial elements, where the signals are processed by the analog beamformer that determines the radiation amplitudes of the RHS elements.} \item \textcolor{black}{\textbf{Reception step:} The operations of communication and sensing are performed in parallel in this step. To be specific, the communication users receive the ISAC signals transmitted by the RHS to retrieve their data streams, while the BS receives the echo signals reflected by the targets via the MIMO antenna array and then performs radar signal processing.} \end{itemize} \textcolor{black}{Note that the RHS needs to transmit multiple beams in different directions in order to simultaneously serve users and sense targets.
Specifically, some of the emitted beams are directed at users to convey communication information, and the others are utilized to sense targets. To generate these beams, we first create multiple holographic patterns. For each desired direction, $K$ patterns can be generated, each corresponding to one feed. Thus, the number of generated patterns is equal to the product of the number of directions and the number of feeds. Next, all the holographic patterns are superposed with weights to derive the analog beamformer~\cite{deng2022HDMA}. The weights of the holographic patterns have to be optimized to promote the ISAC performance, which is discussed in the following subsection. It should be emphasized that the number of beams can be different from the number of feeds. For example, we can generate multiple beams with only one feed by using the above technique.} \vspace{-4mm} \subsection{ISAC Performance Metrics} \label{ss_ipm} \vspace{-1mm} Since the ISAC tasks include sensing and communication, the related performance metrics can be broadly categorized into two types, i.e., sensing and communication metrics. \emph{Sensing performance metrics:} Typically, the concept ``sensing'' has connotations of detection, estimation, and recognition. Specifically, detection refers to the process of deciding whether a target exists or not. Estimation refers to inferring the values of target parameters such as distance and velocity. Recognition refers to identifying what the sensed target is. As the meanings of detection, estimation, and recognition vary, their performance metrics are different. For example, the performance of detection can be evaluated by the detection probability (the probability of making a correct decision on the existence of the target), while the estimation performance is measured by the mean squared error (MSE, the average value of the squared error between the true and the estimated values of a target parameter).
\emph{Communication performance metrics:} Similar to the sensing tasks, the communication tasks also have different metrics. Two widely used metrics are channel capacity and bit error ratio~(BER). Channel capacity is defined as the maximal mutual information of the channel, and the capacity of an additive white Gaussian noise~(AWGN) channel can be calculated by the well-known Shannon formula. In contrast, BER is the fraction of erroneous bits among all received bits, which measures the reliability of a communication system. \vspace{-4mm} \subsection{Holographic Beamformer Optimization} \label{ss_hbo} \vspace{-1mm} \textcolor{black}{In this paper, our aim is to optimize the sensing performance\footnote{\textcolor{black}{Typically, the radar first estimates the angles of arrival of the targets by transmitting omnidirectional waveforms. The range is then estimated by transmitting waveforms towards these known target directions. In this paper, we focus on the latter case.}} given constraints on the communication quality\footnote{The proposed scheme can be easily extended to the case where the communication performance is optimized, and thus we omit it in this paper.}. The sensing performance is promoted by minimizing the beampattern mismatch error, i.e., the difference between the transmit beampattern and a desired pattern, where the desired pattern has peaks in the target directions~(as in~\cite{stoica2007on}). Besides, we use the channel capacity to evaluate the communication quality between the RHS and the communication users.} \textcolor{black}{This problem is challenging to solve because the optimizations of the digital and analog beamformers are coupled with each other. To efficiently handle this challenge, the optimization problem is first decoupled into two subproblems, i.e., the digital and the analog beamforming subproblems, where the digital/analog beamformer is optimized given the other.
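As a concrete illustration of the two metric families above, the short sketch below evaluates a beampattern mismatch error against a desired pattern that peaks in an assumed target direction, together with the Shannon capacity of an AWGN link. All numbers (bandwidth, SNR, the two patterns) are made-up stand-ins, not values from the prototype or the optimization problem:

```python
import numpy as np

# -- Sensing side: beampattern mismatch error against a desired pattern --
# (desired pattern peaks around an assumed 20-degree target direction)
angles = np.linspace(-90, 90, 181)
desired = np.where(np.abs(angles - 20) <= 5, 1.0, 0.0)
achieved = np.exp(-0.5 * ((angles - 20) / 4) ** 2)  # some transmit beampattern
mismatch_mse = np.mean((achieved - desired) ** 2)

# -- Communication side: AWGN channel capacity via the Shannon formula --
bandwidth_hz = 20e6          # assumed bandwidth
snr_db = 10.0                # assumed SNR
snr = 10 ** (snr_db / 10)
capacity_bps = bandwidth_hz * np.log2(1 + snr)  # ~69.2 Mbit/s

print(f"beampattern MSE = {mismatch_mse:.4f}")
print(f"capacity = {capacity_bps / 1e6:.1f} Mbit/s")
```

In the beamformer optimization described next, a quantity of the first kind is the objective and a quantity of the second kind appears as a constraint.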
Next, a holographic beamforming optimization algorithm is designed to solve the optimization problem by iteratively optimizing the digital beamformer and the analog beamformer. Specifically, we first initialize the analog beamformer by summing the holographic patterns with equal weights. Next, the digital and the analog beamforming subproblems are sequentially solved in each iteration. The iteration terminates when the difference in the beampattern mismatch error~\cite{stoica2007on} between two adjacent iterations is less than a predetermined positive constant $\epsilon$. In the following, we elaborate on the methods we use to tackle the two subproblems\footnote{\textcolor{black}{The running time of the proposed algorithm is less than $1$s in a moderate setting ($3$ users and $3$ targets), which is acceptable for scenarios with slow-moving targets or communication users such as pedestrians. Besides, when implementing the proposed ISAC system in practice, the computational tasks can be offloaded to edge servers with greater processing power, which reduces the computation delay. In the future, a low-complexity, non-iterative algorithm can also be developed to support real-time requirements in fast-moving scenarios.}}}. \textcolor{black}{\emph{Optimization of Digital Beamformer:} The digital beamformers for communication and radar sensing can be obtained by applying the zero-forcing~(ZF) method~\cite{liu2020joint}. The basic idea is to first enforce the cancellation of the inter-user interference and the radar interference for all the communication users and then derive the corresponding digital beamformers based on the channel information.} \emph{Optimization of Analog Beamformer:} To optimize the weights in the analog beamformer, the subproblem is first transformed into a quadratic program by reformulating its objective function and constraints.
Next, the semidefinite relaxation~(SDR) technique can be applied to solve the quadratic problem. \begin{figure}[!t] \setlength{\abovecaptionskip}{-0pt} \setlength{\belowcaptionskip}{-18pt} \centering \includegraphics[height=1.8in]{experiment_scenario_2_a.pdf} \caption{Experimental layout.} \label{f_experiment} \vspace{-2mm} \end{figure} \vspace{-2mm} \section{Prototype of the Holographic ISAC System} \label{s_phis} In this section, we develop a hardware prototype of the holographic ISAC system. The implementation of the RHS is first described, and then the hardware modules which comprise the ISAC system are introduced. \vspace{-2mm} \subsection{Implementation of the RHS} \label{ss_ir} As shown in Fig.~\ref{f_prototype}, we design a 1D RHS whose dimensions are $15 \times 3 \times 0.17$cm$^3$. The RHS consists of two SMA connectors, a multi-layer substrate, and 16 metamaterial elements. One of the SMA connectors serves as the feed, connecting the RHS to the RF chain. The other SMA connector connects the RHS to a $50\Omega$ RF load that absorbs the energy remaining in the substrate. The substrate functions as a waveguide and is composed of four layers. The first and the third layers are made of F4B, and the second and the fourth layers are made of copper. The second layer of the substrate is the ground layer carrying a voltage of $0$V. The fourth layer is the DC feed line layer. There are $16$ feed lines, each connecting to a metamaterial element through a via hole. The other end of each feed line links to an output pin of the FPGA, which applies a bias voltage to the element. The metamaterial elements arranged on the top of the substrate are complementary electric-LC resonators~(CELCs), and two PIN diodes (MADP-000907-14020) are laid on each metamaterial element. Since the two PIN diodes are in parallel, they share the same bias voltage, which means each element can be switched between two states, i.e., the ON and OFF states.
At the $11$GHz working frequency, the radiated energy of an element in the ON state is much greater than that in the OFF state, which forms the basis for the adjustment of the radiation pattern of the RHS. \vspace{-2mm} \subsection{Hardware Modules of the Holographic ISAC Prototype} \label{ss_hmp} The ISAC prototype is composed of three modules, i.e., the BS, user, and target modules, which are elaborated on below. \subsubsection{ISAC transceiver module} This module serves as an ISAC BS which transmits ISAC signals and receives echo signals for radar detection. To fulfill this task, an Intel NUC is employed as the host computer. It controls the radiation amplitudes of the RHS elements via the FPGA. It also connects to a USRP N210, which is able to simultaneously transmit and receive signals. Since the working frequency of the RHS ($12$GHz) is beyond the frequency range of the USRP ($0-6$GHz), a frequency converter is employed to up-convert the low-frequency signal transmitted by the USRP or down-convert the high-frequency signal received by the Rx antenna. The Rx antenna connected to the frequency converter is a standard horn antenna (LB-75-20-C-SF) with a frequency range of $10-15$GHz. \subsubsection{User module} The user module receives and decodes the ISAC signal from the ISAC transceiver module to retrieve the communication stream. Specifically, the Rx antenna first receives the ISAC signal and sends it to the frequency converter. The frequency converter down-converts the received signal and transmits it to a USRP, which further down-converts the signal to baseband. The baseband signal is finally sent to the NUC via an Ethernet cable for decoding. \subsubsection{Target module} This module is used to simulate radar targets by generating controllable radar echo signals~\cite{ma2021spatial}. It consists of an Rx antenna, a Tx antenna, a frequency converter, a USRP, and an Intel NUC.
Once the Rx antenna receives the ISAC signal transmitted by the RHS, the target module is triggered; it adds a delay to the ISAC signal and emits the delayed signal through the Tx antenna. The delay can be adjusted by the PC application running on the Intel NUC in order to simulate targets located at different ranges. \section{Performance Evaluation} \label{s_pe} \begin{figure}[!t] \setlength{\abovecaptionskip}{-0pt} \setlength{\belowcaptionskip}{-18pt} \centering \includegraphics[height=1.2in]{plot_compare_2_a.pdf} \caption{Radiation patterns of the RHS and phased array.} \label{f_pattern} \vspace{-2mm} \end{figure} \begin{figure*}[!t] \setlength{\abovecaptionskip}{-0pt} \setlength{\belowcaptionskip}{-18pt} \centering \includegraphics[height=2in]{sensing_commun2_a.pdf} \caption{Radar sensing and communication performances of the holographic ISAC system.} \label{f_result} \vspace{-2mm} \end{figure*} In this section, we evaluate the performance of the proposed ISAC system. The experimental layout is shown in Fig.~\ref{f_experiment}. We deploy the proposed prototype in an anechoic chamber with a size of $4 \times 3 \times 2.5$m$^3$. The ISAC transceiver module is located at the center of the chamber. The user and the target modules are placed in different directions with respect to the ISAC transceiver module, and the distance between the ISAC transceiver module and the user/target module is $1.7$m. In the ISAC transceiver module, the ISAC signal transmission and the radar signal reception are performed in a time-division manner. Specifically, the BS first transmits the ISAC signal with a duration of $12\mu$s and then listens for the echo signal to determine the presence of the target. Since there is only one RF chain in the BS, the digital beamformer is fixed to $1$, and the analog beamformer is optimized to promote the ISAC performance. Fig.~\ref{f_pattern} shows the radiation patterns of the RHS and the phased array.
The phased array contains $5$ antenna elements, whose relative phases can be independently tuned by phase shifters. To simultaneously detect the target and serve the communication user, the RHS or the phased array generates two beams that point towards directions $0^\circ$ and $-30^\circ$. We can observe that the gains of the RHS towards the directions of interest are slightly higher than those of the phased array. Besides, the power consumption of the phased array ($5$W) is substantially larger than that of the RHS ($0.16$W), which indicates that the RHS is able to support ISAC with similar performance and lower power consumption compared with the phased array. Fig.~\ref{f_result} illustrates the ISAC performance of the proposed platform. We place the target module in directions $-50^\circ$, $0^\circ$, and $20^\circ$, respectively, to simulate targets in different directions. The delays added to the ISAC signal by the target module are set to $20\mu$s, $30\mu$s, and $26.7\mu$s, which correspond to targets located $6$km, $9$km, and $8$km away, respectively. In order to sense the target, one of the main lobes of the radiation pattern is steered towards directions $-50^\circ$, $0^\circ$, and $20^\circ$ in different cycles, while the other main lobe keeps pointing towards the direction of the user module, i.e., $60^\circ$, to support downlink communication. It can be observed from Fig.~\ref{f_result} that the estimated range is close to the real range, which demonstrates the feasibility of sensing with holographic ISAC. Besides, we can also observe from Fig.~\ref{f_result} that the communication symbols received by the user module are the same as those transmitted by the BS, and the data rate between the BS and the user is $5$Mbit/s, which shows that communication between the BS and the user can be supported while the BS performs radar sensing at the same time.
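As a sanity check, the delay/range pairs quoted above can be reproduced with a one-line conversion. The quoted values follow range $= c\tau$, i.e., the programmed delay is interpreted as a one-way propagation time; note that a conventional monostatic echo delay would instead give $c\tau/2$:

```python
# Cross-check of the delay-to-range pairs quoted above. The quoted values
# follow range = c * tau; for a true round-trip echo delay the radar range
# equation would use c * tau / 2 instead.
c = 3.0e8  # speed of light (m/s)
pairs = [(20.0, 6.0), (30.0, 9.0), (26.7, 8.0)]  # (delay in us, quoted range in km)
for tau_us, quoted_km in pairs:
    range_km = c * tau_us * 1e-6 / 1e3
    print(f"{tau_us:5.1f} us -> {range_km:.2f} km (quoted: {quoted_km} km)")
```

All three pairs agree with the quoted ranges to within $0.01$ km under this convention.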
\section{Future Research Directions and Key Challenges} \label{s_frdkc} In the previous sections, we have introduced the concept of holographic ISAC and shown the potential benefits of this concept compared to a traditional phased array. In this section, we introduce future research directions for RHS-based ISAC and the corresponding key challenges. \subsection{Fundamental Designs of the RHS} The design of the RHS layout parameters is critical in the optimization of holographic ISAC systems. In the following, we discuss the designs of two vital parameters, i.e., the RHS size and element spacing, as well as the scaling of the RHS to higher frequencies. \begin{itemize} \item \textbf{Design of RHS size:} To meet the increasing demand for communication capacity and sensing accuracy, the size of the RHS needs to be enlarged to provide higher antenna gain. However, the signal attenuation in the waveguide cannot be ignored for large-scale RHSs, which significantly decreases the efficiency of the antenna. Consequently, the size of the RHS should be optimized to balance radiation efficiency and antenna gain. \item \textbf{Design of element spacing:} Since a narrow spacing between two nearby RHS elements leads to a large number of elements for a given antenna aperture, the directionality of the RHS can be improved by reducing the element spacing. However, the element spacing cannot be decreased without limit because the effect of mutual coupling increases when the element spacing decreases, which degrades the radiation performance of the RHS. \item \textcolor{black}{\textbf{Scaling to higher frequencies:} Driven by higher data rate requirements and the accommodation of more users, communication systems are moving towards unused spectrum with higher frequencies and larger bandwidths, such as the millimeter wave~(mmWave) band. The proposed ISAC scenario also has the potential for scaling to higher frequencies, where the RHS can be easily integrated with mmWave circuits to reduce the profile and weight of the system.
Novel designs of the metamaterial structure are required to achieve a high antenna gain of the RHS in the mmWave band to compensate for the severe attenuation of mmWave transmission.} \end{itemize} \subsection{Limitations and Trade-offs of Holographic ISAC} It is essential to theoretically analyze the limitations and trade-offs to further verify the superiority of holographic ISAC. To provide a general framework for the analysis, a performance bound that unifies both radar and communication is necessary. For traditional ISAC systems, many existing works are devoted to developing closed-form expressions of the performance bounds by exploiting the inherent relation between information theory and detection theory~\cite{liu2020jointradar}. However, due to the differences in hardware structures and working principles, the performance bounds developed for traditional ISAC systems cannot be directly applied to holographic ISAC. A new performance bound needs to be developed for RHS-based ISAC systems. \subsection{Optimization of the Holographic ISAC Transceiver} As a metamaterial antenna, the RHS can also be utilized for signal reception, indicating that the concept of holographic ISAC can be extended to the scenario where both the TX and RX antennas of the BS are replaced by RHSs. However, several challenges lie in designing such a scheme. \begin{itemize} \item \textcolor{black}{\textbf{Channel Estimation:} Channel information is critical to the optimization of communication performance, while it is challenging to estimate the communication channels due to the hybrid analog-digital structure of the RHSs. A straightforward method is to leverage the amplitude-controllable capability of the RHSs and estimate the channel of each element in a time-division manner. Specifically, in each time slot, only one element is turned on, i.e., the radiation amplitude of this element is maximized.
The radiation amplitudes of the other elements are set to $0$ in order to avoid the interference caused by these elements. The time overhead of such a method is proportional to the size of the RHS, indicating that the time cost may become relatively large for large-scale RHSs. Thus, novel methods that can simultaneously estimate the channels of multiple elements should be developed to reduce the corresponding burden of channel estimation.} \item \textcolor{black}{\textbf{AoA Estimation:} Considering the unique structure of the RHS, where signals are first modulated by the superposition of holographic patterns and then received by the feeds, the estimation of angles of arrival~(AoAs) using RHSs is more complicated than that using fully-digital arrays, where the signals are directly received by the feeds. To address this issue, the signals that arrive at the RHS from unknown AoAs should be modeled, and the maximum likelihood principle can be applied to estimate the AoAs based on the model and the signals received by the RHS feeds. To reduce the related computational overhead, an efficient estimation algorithm is required to perform the multi-dimensional search and find the optimal solution for the AoAs.} \item \textbf{Joint Design:} In the design of a holographic ISAC transceiver, the beamformers for transmission and the combiners for reception need to be jointly optimized, which is much more complicated than solely optimizing the beamformers or the combiners. \end{itemize} \section{Conclusion} \label{s_c} In this article, we have introduced RHS-enabled holographic ISAC, a new paradigm for integrating sensing and communication functions. We have presented the concept of RHSs and developed a hybrid beamforming scheme for holographic ISAC based on the working principle of RHSs. In particular, we have built a hardware prototype of the RHS-enabled holographic ISAC system.
Simulation and experimental results have shown that the power consumption of the RHS is lower than that of a phased array with a similar antenna gain, unveiling the great potential for energy saving brought by holographic ISAC. We have also discussed future research directions and key challenges of holographic ISAC. \bibliographystyle{IEEEtran}
\section{Introduction} \addcontentsline{toc}{section}{Introduction} \enlargethispage{\baselineskip} Let $\alpha_{1},\ldots,\alpha_{n},\beta_{1},\ldots,\beta_{n}\in \mathbb{C}$. The generalized hypergeometric equation \begin{equation} \label{generalized hypergeometric equation} \big[z(\theta+\alpha_{1})\cdots (\theta+\alpha_{n})-(\theta+\beta_{1}-1)\cdots (\theta+\beta_{n}-1)\big]f(z) = 0\text{, where }\theta = z\, d/dz, \end{equation} is a generalization of the Euler-Gauss hypergeometric equation, corresponding to the case $n=2$, which was introduced by Euler in the $18^{th}$ century and studied in the $19^{th}$ century by, among others, Gauss, Klein, Riemann and Schwarz.\\ \noindent There exists an $n$-dimensional basis of solutions to (\ref{generalized hypergeometric equation}) in a neighborhood of $z=0$, called the Frobenius basis (at $z=0$). In the case that the local exponents are pairwise distinct (the non-resonant case) this basis is given by $z^{1-\beta_{1}} F_{1},\ldots, z^{1-\beta_{n}} F_{n}$, for some analytic functions $F_{1},\ldots,F_{n}$, known as Clausen-Thomae hypergeometric functions, which are defined on some open neighborhood of $0$. In the case that all local exponents equal $1$ (the maximally unipotent case) the Frobenius basis is of the following form: \begin{align*} f_{0} &= 1+h_{0}\\ f_{1} &= f_{0}\log(z)+ h_{1}\\ f_{2} &= \frac{1}{2} f_{0} \log^{2}(z)+h_{1} \log(z)+h_{2}\\ & \vdots\\ f_{n-1} &= \frac{1}{(n-1)!} f_{0} \log^{n-1}(z)+ \sum_{l=0}^{n-2} \frac{1}{l!} h_{n-1-l} \log^{l}(z). \end{align*} where the $h_{l}$ are analytic, vanish at $z=0$, and are the unique functions with these properties.\\ \noindent We are mainly interested in the monodromy corresponding to the Frobenius basis.
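To make the maximally unipotent Frobenius basis concrete, the holomorphic member $f_{0}=1+h_{0}$ can be generated from the coefficient recursion of $\theta^{n}f = z(\theta+\alpha_{1})\cdots(\theta+\alpha_{n})f$, the sign convention used in the Mellin-Barnes computations below. The following sketch uses illustratively chosen "quintic" parameters $\alpha=(1/5,2/5,3/5,4/5)$ (our choice, not fixed by the text); for these, the normalization $z\to 5^{5}z$ turns the coefficients into the familiar numbers $(5m)!/(m!)^{5}$.

```python
from fractions import Fraction

# Coefficients c_m of the holomorphic solution f0 = sum_m c_m z^m of
#   theta^n f = z (theta + a_1) ... (theta + a_n) f   (all beta_k = 1),
# via the recursion c_{m+1} (m+1)^n = c_m * prod_k (m + a_k), c_0 = 1.
def f0_coefficients(alphas, terms):
    n = len(alphas)
    c, out = Fraction(1), [Fraction(1)]
    for m in range(terms - 1):
        num = Fraction(1)
        for a in alphas:
            num *= (a + m)           # prod_k (m + a_k)
        c = c * num / Fraction((m + 1) ** n)
        out.append(c)
    return out

# Illustrative "quintic" parameters (an assumption for this sketch):
alphas = [Fraction(k, 5) for k in range(1, 5)]
cs = f0_coefficients(alphas, 4)
# After the normalization z -> 5^5 z the coefficients become (5m)!/(m!)^5:
normalized = [c * Fraction(5) ** (5 * m) for m, c in enumerate(cs)]
print(normalized)  # values 1, 120, 113400, 168168000
```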
Important to us will be the explicit form of the matrices that are used in the proof of Levelt's theorem \cite{unpublished}, from which one can deduce the explicit form of the monodromy matrices corresponding to (\ref{generalized hypergeometric equation}) in a certain basis. It turns out that we can actually find the corresponding basis of functions explicitly; these functions are known as Mellin-Barnes integrals and the corresponding basis is called the Mellin-Barnes basis. The advantage of this basis is that the functions are defined on a large region, whereas the functions in the Frobenius basis are generally determined by power series with finite radius of convergence (although they can be analytically extended). Our intention, of course, is to express the functions in the Frobenius basis as linear combinations of Mellin-Barnes integrals, so that we can easily continue them along a path. In the next section it will be explained in detail how this is done. \\ \noindent In the non-resonant case it follows immediately that the monodromy matrix around $0$ in the Frobenius basis around $z=0$ equals $\text{diag}(e^{-2\pi i\beta_{1}},\ldots,e^{-2\pi i\beta_{n}})$. Theorem \ref{nonresonantthm} explains the general structure of the monodromy group by giving the explicit form of the monodromy matrix around $1$ in the Frobenius basis around $z=0$: its $(k,l)$ entry, with $k,l=1,2,\ldots,n$, is \begin{align} \delta_{kl}+c e^{2\pi i\beta_{k}} \prod_{m=1}^{n} \frac{\sin(\pi(\beta_{l}-\alpha_{m}))}{\sin(\pi(\beta_{l}-\beta_{m}))}. \end{align} Here $c=2i (-1)^{n} e^{\pi i(\beta_{1}-\alpha_{1}+\ldots+\beta_{n}-\alpha_{n})}$ and the factor $\sin(\pi(\beta_{l}-\beta_{l}))$ should be read as $1$.
This shows in particular that all monodromy matrices have algebraic entries when the parameters $\alpha_{1},\ldots,\alpha_{n},\beta_{1},\ldots,\beta_{n}$ are rational, a property that is not shared with the maximally unipotent case.\\ \noindent Our main theorem about the maximally unipotent case, Theorem \ref{main?}, will need the following result. Suppose that $\alpha_{1},\ldots,\alpha_{n}\in\mathbb{C}\setminus\mathbb{Z}$ are such that $(X-e^{-2\pi i\alpha_{1}})\cdots (X-e^{-2\pi i\alpha_{n}})$ is a product of cyclotomic polynomials; then we can find a number $r\in\mathbb{N}$ and numbers $a_{1},\ldots,a_{r},b_{1},\ldots,b_{r}\in\mathbb{N}$ such that \begin{align*} (X-e^{-2\pi i\alpha_{1}})\cdots (X-e^{-2\pi i\alpha_{n}}) = \frac{X^{a_{1}}-1}{X^{b_{1}}-1}\cdots \frac{X^{a_{r}}-1}{X^{b_{r}}-1}. \end{align*} When this is the case it will turn out that, equivalently, we could investigate the equation \begin{align*} \theta^{n}f=Cz(\theta+\alpha_{1})\cdots (\theta+\alpha_{n})f\text{, where }C = \frac{a_{1}^{a_{1}}\cdots a_{r}^{a_{r}}}{b_{1}^{b_{1}}\cdots b_{r}^{b_{r}}}, \end{align*} which has its own Frobenius basis $f_{n-1}^{C},\ldots,f_{1}^{C},f_{0}^{C}$. This corresponds to the normalization $z\to Cz$, i.e.\ $f_{k}^{C}(z)=f_{k}(C z)$ for $k=0,\ldots,n-1$. In fact this is precisely what the authors of \cite{CYY} do for the case $n=4$; in that case the hypergeometric equations arise from Calabi-Yau threefolds. They showed, using a basis that shows resemblance to the Mellin-Barnes basis, that the entries of the corresponding monodromy matrices contain geometric invariants of these Calabi-Yau threefolds. In particular, they gave a neat expression for the monodromy matrices. Generalizing their result to arbitrary $n$ has been our motivation to study the maximally unipotent case.
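As a concrete instance of the cyclotomic condition (our illustrative choice, the "quintic" case $n=4$ mentioned in connection with \cite{CYY}): for $\alpha=(1/5,2/5,3/5,4/5)$ one has $r=1$, $a_{1}=5$, $b_{1}=1$, since the product of linear factors equals $(X^{5}-1)/(X-1)=X^{4}+X^{3}+X^{2}+X+1$, and then $C=5^{5}/1^{1}=3125$. A quick numerical check:

```python
import numpy as np

# For alpha = (1/5, 2/5, 3/5, 4/5), the roots e^{-2 pi i alpha_k} are the
# primitive 5th roots of unity, so the monic polynomial with these roots is
# (X^5 - 1)/(X - 1), whose coefficients are all 1.
alphas = np.array([1, 2, 3, 4]) / 5
roots = np.exp(-2j * np.pi * alphas)
coeffs = np.poly(roots)             # monic polynomial, highest degree first
print(np.round(coeffs.real, 10))    # [1. 1. 1. 1. 1.]
C = 5**5 // 1**1                    # a_1^{a_1} / b_1^{b_1}
print(C)                            # 3125
```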
Our main theorem gives us insight into the general form of the monodromy matrices in the case that $(X-e^{-2\pi i\alpha_{1}})\cdots (X-e^{-2\pi i\alpha_{n}})$ is a product of cyclotomic polynomials; in particular it provides us with a practical method to determine the monodromy matrices. We will see that all matrices in the corresponding monodromy group have their entries in $\mathbb{Q}(\zeta(3) (2\pi i)^{-3},\zeta(5) (2\pi i)^{-5},\ldots,\zeta(m)(2\pi i)^{-m})$, with $m$ the largest odd number below $n$.\\ I would like to thank Frits Beukers, who supervised my master thesis, on which much of this article is based, for advising me to publish my results and for helping me along the way. I am thankful to Willem Pranger for pointing out numerous points of substantive improvement in my master thesis, and consequently in this article. I thank Julian Lyczak and Merlijn Staps for their proof of Theorem \ref{integerC}. \section{Monodromy groups of the generalized hypergeometric equation} \subsection{The Mellin-Barnes basis} Let $z_{0}$ be an element of $\{0,1,\infty\}$, the set of singularities corresponding to (\ref{generalized hypergeometric equation}). We will denote the monodromy matrix around $z_{0}$ by $M_{z_{0}}$. For (\ref{generalized hypergeometric equation}) we know that $M_{0}$ has eigenvalues $e^{-2\pi i\beta_{1}},\ldots,e^{-2\pi i\beta_{n}}$ and $M_{\infty}$ has eigenvalues $e^{2\pi i\alpha_{1}},\ldots,e^{2\pi i\alpha_{n}}$. Here and in the rest of this article we will demand that these two sets of eigenvalues are disjoint, i.e.\ that $\alpha_{k}$ differs from $\beta_{l}$ modulo $1$ for all $k,l=1,2,\ldots,n$. A matrix will be called a (pseudo-)reflection if this matrix minus the identity has rank $1$.
The following theorem gives us insight into the general form of the monodromy matrices corresponding to this case. \begin{theorem} \label{Levelt} \textbf{(Levelt)}\index{Levelt's theorem} Let $a_{1},\ldots,a_{n},b_{1},\ldots,b_{n}\in\mathbb{C}\setminus\{0\}$ be such that $a_{i}\neq b_{j}$ for all $1\leq i,j\leq n$. Then there exist $A,B\in GL(n,\mathbb{C})$ with eigenvalues $a_{1},\ldots,a_{n}$ and $b_{1},\ldots,b_{n}$ respectively such that $AB^{-1}$ is a reflection. Moreover, the pair $A, B$ is uniquely determined up to conjugation. \end{theorem} \noindent What is important about Levelt's theorem is its proof \cite{unpublished}. It shows us explicitly what the monodromy matrices look like in a particularly chosen basis, namely \begin{align*} A = \left(\begin{array}{ccccc} 0 & 1 & 0 & \ldots & 0\\ 0 & 0 & 1 & \ldots & 0\\ \vdots & & & & \vdots\\ 0 & 0 & 0 & \ldots & 1\\ -A_{n} & -A_{n-1} & -A_{n-2} & \ldots & -A_{1} \end{array}\right) \text{ and } B = \left(\begin{array}{ccccc} 0 & 1 & 0 & \ldots & 0\\ 0 & 0 & 1 & \ldots & 0\\ \vdots & & & & \vdots\\ 0 & 0 & 0 & \ldots & 1\\ -B_{n} & -B_{n-1} & -B_{n-2} & \ldots & -B_{1} \end{array}\right), \end{align*} where $A_{1},\ldots, A_{n}, B_{1},\ldots, B_{n}$ are defined through $(X-a_{1})\cdots (X-a_{n})=X^{n}+A_{1} X^{n-1}+\ldots+A_{n}$ and $(X-b_{1})\cdots (X-b_{n})=X^{n}+B_{1} X^{n-1}+\ldots+B_{n}$.\\ \noindent It is known that $M_{1}$ has $n-1$ eigenvalues equal to $1$ and is thus a reflection (and so is $M_{\infty}^{-1} M_{1}^{-1} M_{\infty}$). In particular $M_{0}$ and $M_{\infty}^{-1}$, satisfying the relation $M_{0}M_{1}M_{\infty}=\mathbb{I}$, play the role of $A$ and $B$ in Levelt's theorem. It turns out that we can actually find an explicit basis of functions in which $M_{0}$ equals the matrix $A$ used in Levelt's theorem, with $a_{k}=e^{-2\pi i\beta_{k}}$ for $k=1,\ldots,n$. In the following we will choose the argument of $z$ in $(0,2\pi)$, which determines $z^{s}=|z|^{s} e^{i\text{arg}(z) s}$.
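Levelt's pair $(A,B)$ is easy to experiment with numerically. The sketch below (with eigenvalues chosen by us as test data, not taken from the text) builds the two companion matrices from prescribed eigenvalues and checks that $AB^{-1}$ is indeed a reflection, i.e.\ that $AB^{-1}-\mathbb{I}$ has rank $1$; this holds because $A-B$ differs from zero only in the last row.

```python
import numpy as np

def companion(roots):
    """Bottom-row companion matrix of the monic polynomial with the given
    roots, in the form used in Levelt's theorem (last row -A_n, ..., -A_1)."""
    coeffs = np.poly(roots)            # [1, A_1, ..., A_n], highest first
    n = len(roots)
    M = np.zeros((n, n), dtype=complex)
    M[:-1, 1:] = np.eye(n - 1)
    M[-1, :] = -coeffs[1:][::-1]       # -A_n, -A_{n-1}, ..., -A_1
    return M

# Arbitrary eigenvalues with a_i != b_j (our test data):
a = np.exp(2j * np.pi * np.array([0.1, 0.25, 0.7]))
b = np.exp(2j * np.pi * np.array([0.05, 0.4, 0.9]))
A, B = companion(a), companion(b)
# A B^{-1} is a (pseudo-)reflection: A B^{-1} - I has rank 1.
R = A @ np.linalg.inv(B) - np.eye(3)
print(np.linalg.matrix_rank(R))        # 1
```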
\begin{definition} Let $\alpha_{1},\ldots,\alpha_{n},\beta_{1},\ldots,\beta_{n}\in\mathbb{C}$ be such that $\alpha_{k}$ differs from $\beta_{l}$ modulo $1$ for all $k,l=1,2,\ldots,n$. We define for $j=0,1,\ldots,n-1$ and $z\in\mathbb{C}\setminus\mathbb{R}_{\geq 0}$ \begin{equation} \label{MellinBarnes} I_{j}(z) = \frac{(-1)^{n}}{(2\pi i)^{n}}\int_{L} \left(\prod_{k=1}^{n} \Gamma(\alpha_{k}+s)\Gamma(1-\beta_{k}-s)\right) e^{(2j-n)\pi i s}z^{s} ds. \end{equation} Here $L$ is a path from $i\infty$ to $-i\infty$ that bends in such a way that all points $-\alpha_{k}-m$ with $m\in\mathbb{Z}_{\geq 0}$ are on the left of it and all points $1-\beta_{k}+m$ with $m\in\mathbb{Z}_{\geq 0}$ are on the right of it; for $|s|$ big enough we require it to be on the imaginary axis.\\ \end{definition} \begin{remark} Here by `left' and `right' we mean the following: $L$ divides $\mathbb{C}\setminus L$ into two connected components, and the component that contains all $s$ with negative real part and $|s|$ big enough will be referred to as the left component, the other as the right component. The requirement that $L$ is on the imaginary axis for big $|s|$ is not necessary but will turn out to be convenient in what follows. \end{remark} \noindent Let us argue that the Mellin-Barnes integrals (\ref{MellinBarnes}) are well defined. Stirling's formula\index{Stirling's formula} tells us that for $a,b\in\mathbb{R}$, $a$ bounded, we have \begin{align*} |\Gamma(a+bi)| = \mathcal{O}(|b|^{a-1/2} e^{-\pi |b|/2})\text{ as }|b|\to\infty. \end{align*} We deduce that $|\Gamma(\alpha_{k}+it)\Gamma(1-\beta_{k}-it)|=\mathcal{O}(|t|^{1+\Re(\alpha_{k}-\beta_{k})} e^{-\pi|t|})$ as $|t|\to\infty$. Hence for $j=0,1,\ldots,n-1$ \begin{align} \label{stirling} \left| \left(\prod_{k=1}^{n} \Gamma(\alpha_{k}+it)\Gamma(1-\beta_{k}-it)\right) e^{(2j-n)\pi i\cdot it}\, z^{it}\right|\\ = \mathcal{O}(|t|^{n+\sum_{k=1}^{n} \Re(\alpha_{k}-\beta_{k})} e^{-\text{arg}(z)\pi |t|})\text{ as }|t|\to\infty.
\end{align} Since the argument of $z$ is positive we conclude that the integrals $I_{j}$ converge.\\ \begin{proposition} \label{MBversch} Let $N\in\mathbb{N}$. Denote by $i_{j,z}$ the integrand of $I_{j}(z)$. Denote by $R(N)$ the set of singularities of $i_{j,z}(s)$ between $L$ and $L+N$, and by $R(\infty)$ and $R(-\infty)$ the sets of singularities on the right respectively on the left of $L$. Denote by $I_{j}^{N}$ the integral $I_{j}$ where the path $L$ has been replaced by $L+N$. We have (for a fixed choice of $\pm$) \begin{align} I_{j}(z) = I_{j}^{\pm N}(z) \pm 2\pi i\sum_{p\in R(\pm N)} \text{Res}_{p}( i_{j,z}). \end{align} In particular we have for $|z|^{\pm 1}<1$ that \begin{align} I_{j}(z) =\pm 2\pi i\sum_{p\in R(\pm \infty)} \text{Res}_{p}(i_{j,z}). \end{align} \end{proposition} \noindent\textbf{Proof. }For $T>0$ big enough consider the path $L(T)$ that coincides with $L$ but runs from $iT$ to $-iT$. Now connect the paths $L(T)$ and $L(T)\pm N$ (for a fixed choice of $\pm$) by two linear segments $L_{-}(T)$ and $L_{+}(T)$ from $-iT$ to $\pm N-iT$ and from $\pm N+iT$ to $iT$ respectively. Thus we get a closed path, and by the residue theorem \begin{align*} \int_{L(T)}+\int_{L_{-}(T)}-\int_{L(T)\pm N}+\int_{L_{+}(T)} i_{j,z}(s) ds = \pm 2\pi i\sum_{p\in R(\pm N)} \text{Res}_{p}( i_{j,z}). \end{align*} For the first part of the proposition it suffices to show that the integrals over $L_{\pm}(T)$ tend to $0$ as $T\to\infty$. For this we use the Stirling approximation: $|i_{j,z}(t\pm iT)|= \mathcal{O}(T^{n+2nN+\sum_{k=1}^{n} \Re(\alpha_{k}-\beta_{k})} e^{-\text{arg}(z)\pi T})$. This tends to $0$ as $T\to\infty$; as the integration intervals are finite this proves that the integrals over $L_{\pm}(T)$ tend to $0$ as $T\to\infty$.\\ \noindent Now for the second part of the proposition we should prove that the integral over $L\pm N$ tends to $0$ as $N\to\infty$ whenever $|z|^{\pm 1}<1$. We will prove this only for the $|z|<1$ case; the other case is analogous.
We see that for $s$ on $L$ we have \begin{align*} |\Gamma(\alpha_{k}+s+N)\Gamma(1-\beta_{k}-(s+N))| = \left|\prod_{j=0}^{N-1}\frac{\alpha_{k}+s+j}{1-\beta_{k}+s+j}\right| |\Gamma(\alpha_{k}+s)\Gamma(1-\beta_{k}-s)|. \end{align*} We notice that uniformly on $L$ \begin{align*} \lim_{j\to\infty} \left|\frac{\alpha_{k}+s+j}{1-\beta_{k}+s+j}\right|\leq \lim_{j\to\infty} 1+\frac{|\alpha_{k}+1-\beta_{k}|}{|1-\beta_{k}+s+j|} = 1, \end{align*} where we have used that the real part of $s$ is bounded on $L$. In particular for $j$ big enough we have uniformly on $L$ that \begin{align*} \left|\frac{\alpha_{k}+s+j}{1-\beta_{k}+s+j}\right|\leq |z|^{-\frac{1}{2n}}. \end{align*} We conclude that the integrand of the integral over $L+N$ satisfies the same inequality as in (\ref{stirling}), but with a factor $|z|^{\frac{N}{2}}$ in front of it. Since $|z|<1$ we conclude that the integral over $L+N$ converges to $0$. \begin{flushright}$\square$\end{flushright} \begin{theorem} \label{MBbasis} The functions $I_{0},\ldots,I_{n-1}$ form a basis $\mathcal{I}$ of solutions, the Mellin-Barnes basis\index{Mellin-Barnes basis}, of the generalized hypergeometric equation (\ref{generalized hypergeometric equation}). \end{theorem} \noindent\textbf{Proof. }Let us prove that they are solutions to the generalized hypergeometric equation. First we notice that \begin{align*} \theta e^{(2j-n)\pi i s}z^{s} = z e^{(2j-n)\pi i s} s z^{s-1} = s e^{(2j-n)\pi i s} z^{s}.
\end{align*} Thus \begin{align*} z(\theta+\alpha_{1})\cdots (\theta+\alpha_{n}) I_{j} &= \frac{(-1)^{n}}{(2\pi i)^{n}}\int_{L} \left(\prod_{k=1}^{n} \Gamma(\alpha_{k}+s)\Gamma(1-\beta_{k}-s)\right)\\ &\times (s+\alpha_{1})\cdots (s+\alpha_{n}) e^{(2j-n)\pi i s} z^{s+1} ds \end{align*} \begin{align*} &= \frac{(-1)^{n}}{(2\pi i)^{n}}\int_{L} \left(\prod_{k=1}^{n}\Gamma(\alpha_{k}+s+1)\Gamma(1-\beta_{k}-s)\right) e^{(2j-n)\pi i s} z^{s+1} ds\\ &= \frac{(-1)^{n}}{(2\pi i)^{n}}\int_{L+1} \left(\prod_{k=1}^{n} \Gamma(\alpha_{k}+s)(1-\beta_{k}-s)\Gamma(1-\beta_{k}-s)\right) e^{(2j-n)\pi i s}(-1)^{n} z^{s} ds\\ &= (\theta+\beta_{1}-1)\cdots (\theta+\beta_{n}-1)I_{j}^{1}(z)\\ &+ 2\pi i (\theta+\beta_{1}-1)\cdots (\theta+\beta_{n}-1)\sum_{p\in R(1)} \text{Res}_{p}( i_{j,z}) \end{align*} by Proposition \ref{MBversch}. Now if there are indeed singularities in $R(1)$ they must be of the form $s=1-\beta_{k}$. The residue corresponding to such a pole is a linear combination of terms of the form $\log^{l}(z) z^{1-\beta_{k}}$ for $0\leq l<n$. If such a term appears then $\beta_{k}$ must have degeneracy at least $l+1$. We notice using the Leibniz rule that \begin{align*} & (\theta+\beta_{k}-1)^{l+1} \log^{l}(z) z^{1-\beta_{k}} = (\theta+\beta_{k}-1)^{l} l\log^{l-1}(z) z^{1-\beta_{k}}\\ &= \ldots = (\theta+\beta_{k}-1) l(l-1)\cdots 1\cdot z^{1-\beta_{k}}=0. \end{align*} Hence \begin{align*} &(\theta+\beta_{1}-1)\cdots (\theta+\beta_{n}-1)\\ &\frac{(-1)^{n}}{(2\pi i)^{n}} \int_{L+1} \left(\prod_{k=1}^{n} \Gamma(\alpha_{k}+s)\Gamma(1-\beta_{k}-s)\right) e^{(\log(z)+(2j-n)\pi i)s} ds\\ &= (\theta+\beta_{1}-1)\cdots (\theta+\beta_{n}-1) I_{j}(z) \end{align*} and we conclude that the $I_{j}$ are solutions to the hypergeometric equation. Suppose $I_{0},\dots,I_{n-1}$ do not form a basis.
Then there exists a polynomial $p$ of degree at most $n-1$, not identically zero, such that \begin{align*} \int_{L} \left(\prod_{k=1}^{n} \Gamma(\alpha_{k}+s)\Gamma(1-\beta_{k}-s)\right) e^{-\pi i n s}p(e^{2\pi i s}) z^{s} ds = 0. \end{align*} This is only possible if no terms of the form $\log^{l}(z) z^{1-\beta_{k}}$ occur (when evaluated in a neighborhood of $z=0$), i.e.\ if all singularities of the original integrand are removed by $p(e^{2\pi i s})$ (see remark \ref{combilogremark} for clarification). This implies that $p$ must have all $e^{-2\pi i\beta_{k}}$ as roots (with the same multiplicity as $\beta_{k}$), and this is a contradiction since it requires $p$ to have degree at least $n$. \begin{flushright}$\square$\end{flushright} \begin{theorem} \label{MBmonodromy} Suppose $\alpha_{k}$ differs from $\beta_{l}$ modulo $1$ for all $1\leq k,l\leq n$. The monodromy matrices in the Mellin-Barnes basis are \begin{align*} M_{0} &= \left( \begin{array}{ccccc} 0 & 1 & 0 & \hdots & 0\\ 0 & 0 & 1 & \hdots & 0\\ \vdots & \vdots & & & \vdots\\ 0 & 0 & 0 & \hdots & 1\\ -B_{n} & -B_{n-1} & -B_{n-2} & \hdots & -B_{1} \end{array} \right) \end{align*} \begin{align*} M_{1} &= \left( \begin{array}{ccccc} 1+\frac{A_{n}-B_{n}}{B_{n}} & \frac{A_{n-1}-B_{n-1}}{B_{n}} & \frac{A_{n-2}-B_{n-2}}{B_{n}} & \hdots & \frac{A_{1}-B_{1}}{B_{n}}\\ 0 & 1 & 0 & \hdots & 0\\ 0 & 0 & 1 & \hdots & 0\\ \vdots & \vdots & & & \vdots\\ 0 & 0 & 0 & \hdots & 1 \end{array} \right) \end{align*} \begin{align*} M_{\infty} &= \left( \begin{array}{ccccc} -\frac{A_{n-1}}{A_{n}} & -\frac{A_{n-2}}{A_{n}} & -\frac{A_{n-3}}{A_{n}} & \hdots & -\frac{A_{0}}{A_{n}}\\ 1 & 0 & 0 & \hdots & 0\\ 0 & 1 & 0 & \hdots & 0\\ \vdots & \vdots & & & \vdots\\ 0 & 0 & \hdots & 1 & 0 \end{array} \right) \end{align*} where $z^{n}+B_{1}z^{n-1}+\ldots+B_{n-1}z+B_{n}$ is the polynomial with roots $e^{-2\pi i\beta_{k}}$, $k=1,2,\ldots,n$, and $z^{n}+A_{1}z^{n-1}+\ldots+A_{n-1}z+A_{n}$ is the polynomial with roots $e^{-2\pi
i\alpha_{k}}$, $k=1,2,\ldots,n$.\\ \end{theorem} \noindent\textbf{Proof. }By construction we have $I_{j}\to I_{j+1}$ under a counterclockwise loop around $0$ for $j=0,1,\ldots,n-2$. Notice that \begin{align*} -B_{n}I_{0}-\ldots-B_{1} I_{n-1} &= \frac{(-1)^{n}}{(2\pi i)^{n}}\int_{L} \left(\prod_{k=1}^{n} \Gamma(\alpha_{k}+s)\Gamma(1-\beta_{k}-s)\right) e^{-\pi i n s} z^{s}\\ & \times \left(e^{2\pi i n s} - \prod_{k=1}^{n} (e^{2\pi i s}-e^{-2\pi i\beta_{k}})\right) ds. \end{align*} Now consider what happens when we lower the argument of $z$ by $2\pi$. By the same arguments as used in the proof of Proposition \ref{MBversch} we have that \begin{align*} \frac{(-1)^{n}}{(2\pi i)^{n}}\int_{L} \left(\prod_{k=1}^{n}\Gamma(\alpha_{k}+s)\Gamma(1-\beta_{k}-s)\right) e^{-2\pi i s-\pi i n s} z^{s} \prod_{k=1}^{n} (e^{2\pi i s}-e^{-2\pi i\beta_{k}}) ds \end{align*} is equal to $2\pi i$ times the sum of its residues corresponding to its singularities to the right of $L$ for $|z|<1$. But it has no (non-removable) singularities in that region, so it vanishes. We conclude that when we lower the argument by $2\pi$ then $-B_{n}I_{0}-\ldots-B_{1} I_{n-1}$ transforms to $I_{n-1}$, i.e.\ a counterclockwise loop around the origin corresponds to the transformation $I_{n-1}\to -B_{n}I_{0}-\ldots-B_{1} I_{n-1}$.\\ \noindent From the Frobenius basis around $\infty$ it is clear that $M_{\infty}^{-1}$ should have eigenvalues $e^{-2\pi i \alpha_{1}},\ldots,e^{-2\pi i \alpha_{n}}$. Furthermore, we know that $M_{0} M_{\infty} = M_{\infty}^{-1} M_{1}^{-1} M_{\infty}$ is a reflection. Hence we may apply Levelt's theorem (Theorem \ref{Levelt}) to conclude that \begin{align*} (M_{\infty})^{-1} =\left( \begin{array}{ccccc} 0 & 1 & 0 & \hdots & 0\\ 0 & 0 & 1 & \hdots & 0\\ \vdots & \vdots & & & \vdots\\ 0 & 0 & 0 & \hdots & 1\\ -A_{n} & -A_{n-1} & -A_{n-2} & \hdots & -A_{1} \end{array} \right) \end{align*} The forms of $M_{\infty}$ and $M_{1}$ now easily follow.
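The explicit matrices of the theorem are easy to check numerically. The following sketch (with parameters chosen arbitrarily by us as test data) builds $M_{0}$, $M_{1}$ and $M_{\infty}$ for $n=3$ from the coefficients $A_{i}$, $B_{i}$ and verifies the relation $M_{0}M_{1}M_{\infty}=\mathbb{I}$ together with the reflection property of $M_{1}$.

```python
import numpy as np

# Arbitrarily chosen rational parameters (our test data, not from the text):
n = 3
alpha = np.array([1/7, 2/7, 4/7])
beta = np.array([1/3, 1/2, 2/3])
Apoly = np.poly(np.exp(-2j*np.pi*alpha))   # [1, A_1, ..., A_n]
Bpoly = np.poly(np.exp(-2j*np.pi*beta))    # [1, B_1, ..., B_n]

M0 = np.zeros((n, n), dtype=complex)       # companion matrix of the B-polynomial
M0[:-1, 1:] = np.eye(n - 1)
M0[-1, :] = -Bpoly[1:][::-1]               # -B_n, ..., -B_1

M1 = np.eye(n, dtype=complex)              # first row (A_i - B_i)/B_n, +1 on diagonal
M1[0, :] = (Apoly[1:][::-1] - Bpoly[1:][::-1]) / Bpoly[-1]
M1[0, 0] += 1

Minf = np.zeros((n, n), dtype=complex)     # first row -A_{n-1}/A_n, ..., -A_0/A_n
Minf[1:, :-1] = np.eye(n - 1)
Minf[0, :] = -Apoly[:-1][::-1] / Apoly[-1]

print(np.allclose(M0 @ M1 @ Minf, np.eye(n)))   # True
print(np.linalg.matrix_rank(M1 - np.eye(n)))    # 1: M1 is a reflection
```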
\begin{flushright}$\square$\end{flushright} \subsection{The non-resonant case} In this section we will consider the case where $\beta_{1},\ldots,\beta_{n}$ are distinct modulo $1$ and the $\alpha_{1},\ldots,\alpha_{n}$ are distinct from the $\beta_{1},\ldots,\beta_{n}$ modulo $1$\index{Non-resonant}. Though our research is mainly aimed at the maximally unipotent case, we treat the non-resonant case because it requires barely any extra work, and the results can be compared with those of the maximally unipotent case. In the Frobenius basis at $0$, denoted by $f_{1},\ldots,f_{n}$, we have \begin{align*} M_{0} = \left(\begin{array}{cccc} e^{-2\pi i\beta_{1}} & 0 & \hdots & 0\\ 0 & e^{-2\pi i\beta_{2}} & \hdots & 0\\ \vdots & & \ddots & 0\\ 0 & 0 & \hdots & e^{-2\pi i\beta_{n}}\end{array}\right). \end{align*} We would also like to express the monodromy matrices $M_{1}$ and $M_{\infty}$ in the Frobenius basis at $z=0$. For this purpose we will prove the following proposition about the transformation matrix\index{Transformation matrix} between the Mellin-Barnes basis and the Frobenius basis at $z=0$. \begin{proposition} We have \begin{align} \left(\begin{array}{c}I_{0}\\ \vdots \\ I_{n-1}\end{array}\right) = V D \left(\begin{array}{c}f_{1}\\ \vdots \\ f_{n}\end{array}\right) \end{align} where $V$ is the Vandermonde matrix\index{Vandermonde matrix} $V_{kl} = e^{-2\pi i k\beta_{l}}$, with $k=0,1,\ldots,n-1$ and $l=1,\ldots,n$, and $D$ is the diagonal matrix with entries \begin{align*} D_{ll} = \frac{1}{(2i)^{n-1}} e^{\pi i n\beta_{l}} \frac{\Gamma(\alpha_{1}-\beta_{l}+1)\cdots \Gamma(\alpha_{n}-\beta_{l}+1)}{\Gamma(\beta_{1}-\beta_{l}+1)\cdots \Gamma(\beta_{n}-\beta_{l}+1)} \left(\prod_{m=1,m\neq l}^{n} \frac{1}{\sin(\pi(\beta_{m}-\beta_{l}))}\right) \end{align*} for $l=1,\ldots,n$. \end{proposition} \noindent\textbf{Proof. 
}Using Proposition \ref{MBversch} we conclude that \begin{align*} I_{k} &= \frac{(-1)^{n}}{(2\pi i)^{n-1}} \sum_{l=1}^{n} \sum_{m=0}^{\infty} \lim_{s\to 1-\beta_{l}+m} (s-1+\beta_{l}-m)\Gamma(1-\beta_{l}-s)\\ &\times \Gamma(\alpha_{l}+s)\left(\prod_{p=1,p\neq l}^{n} \Gamma(\alpha_{p}+s)\Gamma(1-\beta_{p}-s)\right) e^{(2k-n)\pi i s}z^{s}\\ &= \frac{1}{(2\pi i)^{n-1}} \sum_{l=1}^{n} \sum_{m=0}^{\infty} \frac{(-1)^{m}}{m!} e^{\pi i(n-2k)\beta_{l}} (-1)^{nm} z^{1-\beta_{l}+m}\\ &\times \Gamma(\alpha_{l}-\beta_{l}+1+m)\left(\prod_{p=1,p\neq l}^{n}\Gamma(\alpha_{p}-\beta_{l}+1+m)\Gamma(\beta_{l}-\beta_{p}-m)\right)\\ &= \frac{1}{(2i)^{n-1}} \sum_{l=1}^{n} e^{\pi i(n-2k)\beta_{l}} \left(\prod_{p=1,p\neq l}^{n} \frac{1}{\sin(\pi(\beta_{p}-\beta_{l}))}\right)\\ &\times \frac{\Gamma(\alpha_{1}-\beta_{l}+1)\cdots \Gamma(\alpha_{n}-\beta_{l}+1)}{\Gamma(\beta_{1}-\beta_{l}+1)\cdots \Gamma(\beta_{n}-\beta_{l}+1)} z^{1-\beta_{l}}\sum_{m=0}^{\infty} \frac{(\alpha_{1}-\beta_{l}+1)_{m}\cdots (\alpha_{n}-\beta_{l}+1)_{m}}{(\beta_{1}-\beta_{l}+1)_{m}\cdots (\beta_{n}-\beta_{l}+1)_{m}} z^{m}. \end{align*} Therefore \begin{align*} I_{k} &= \frac{1}{(2i)^{n-1}} \sum_{l=1}^{n} e^{\pi i(n-2k)\beta_{l}} \frac{\Gamma(\alpha_{1}-\beta_{l}+1)\cdots \Gamma(\alpha_{n}-\beta_{l}+1)}{\Gamma(\beta_{1}-\beta_{l}+1)\cdots \Gamma(\beta_{n}-\beta_{l}+1)} \left(\prod_{p=1,p\neq l}^{n} \frac{1}{\sin(\pi(\beta_{p}-\beta_{l}))}\right) f_{l}(z). \end{align*} \begin{flushright}$\square$\end{flushright} \begin{theorem} \label{nonresonantthm} Define $c=2i (-1)^{n} e^{\pi i(\beta_{1}-\alpha_{1}+\ldots+\beta_{n}-\alpha_{n})}$. In the Frobenius basis at $z=0$ the monodromy matrix around $z=1$ satisfies \begin{align} (M_{1})_{kl} &= \delta_{kl}+c e^{2\pi i\beta_{k}} \prod_{m=1}^{n} \frac{\sin(\pi(\beta_{l}-\alpha_{m}))}{\sin(\pi(\beta_{l}-\beta_{m}))} \end{align} where $k,l=1,2,\ldots,n$ and the factor $\sin(\pi(\beta_{l}-\beta_{l}))$ should be read as $1$. \end{theorem} \noindent\textbf{Proof. 
}We calculate \begin{align*} \sum_{m=0}^{n-1} \frac{A_{n-m}-B_{n-m}}{B_{n}} e^{-2\pi i m\beta_{l}} &= \frac{1}{B_{n}} \left(\prod_{m=1}^{n} (e^{-2\pi i\beta_{l}}-e^{-2\pi i\alpha_{m}})-\prod_{m=1}^{n} (e^{-2\pi i\beta_{l}}-e^{-2\pi i\beta_{m}})\right)\\ &= (2i)^{n}e^{2\pi i(\beta_{1}+\ldots+\beta_{n})} e^{-\pi i(\alpha_{1}+\ldots+\alpha_{n})} e^{-\pi i n \beta_{l}} \prod_{m=1}^{n} \sin(\pi(\alpha_{m}-\beta_{l}))\\ &= 2i e^{2\pi i(\beta_{1}+\ldots+\beta_{n})} e^{-\pi i(\alpha_{1}+\ldots+\alpha_{n})} \tilde{D}^{-1}_{ll} \sin(\pi(\alpha_{l}-\beta_{l})) \prod_{m=1,m\neq l}^{n} \frac{\sin(\pi(\alpha_{m}-\beta_{l}))}{\sin(\pi(\beta_{m}-\beta_{l}))}, \end{align*} where \begin{align*} \tilde{D}_{ll}\frac{\Gamma(\alpha_{1}-\beta_{l}+1)\cdots \Gamma(\alpha_{n}-\beta_{l}+1)}{\Gamma(\beta_{1}-\beta_{l}+1)\cdots \Gamma(\beta_{n}-\beta_{l}+1)} = D_{ll}. \end{align*} To complete the proof we will have to determine the inverse of $V$. We notice that this inverse is determined by \begin{align*} \prod_{m=1,m\neq k}^{n} \frac{z-e^{-2\pi i\beta_{m}}}{e^{-2\pi i\beta_{k}}-e^{-2\pi i\beta_{m}}} = (V^{-1})_{k,0}+(V^{-1})_{k,1} z+\ldots+(V^{-1})_{k,n-1} z^{n-1}. \end{align*} We will only need the first column of $V^{-1}$; the $k^{th}$ entry of this column is \begin{align*} \prod_{m=1,m\neq k}^{n} \frac{-e^{-2\pi i\beta_{m}}}{e^{-2\pi i\beta_{k}}-e^{-2\pi i\beta_{m}}} = (-1)^{n-1}e^{2\pi i\beta_{k}} e^{-\pi i(\beta_{1}+\ldots+\beta_{n})} \tilde{D}_{kk}.
\end{align*} We conclude that the matrix $M_{1}-\mathbb{I}$ equals $2i (-1)^{n-1}e^{\pi i(\beta_{1}-\alpha_{1}+\ldots+\beta_{n}-\alpha_{n})}$ times \begin{align} \left(\begin{array}{c} e^{2\pi i\beta_{1}} \tilde{D}_{11}\\ \vdots \\ e^{2\pi i\beta_{n}} \tilde{D}_{nn} \end{array}\right) \left(\begin{array}{c} \sin(\pi(\alpha_{1}-\beta_{1})) \tilde{D}_{11}^{-1} \prod_{m=1,m\neq 1}^{n} \frac{\sin(\pi(\alpha_{m}-\beta_{1}))}{\sin(\pi(\beta_{m}-\beta_{1}))} \\ \vdots \\ \sin(\pi(\alpha_{n}-\beta_{n})) \tilde{D}_{nn}^{-1} \prod_{m=1,m\neq n}^{n} \frac{\sin(\pi(\alpha_{m}-\beta_{n}))}{\sin(\pi(\beta_{m}-\beta_{n}))}\end{array}\right)^{T} \end{align} which implies the desired result. \begin{flushright}$\square$\end{flushright} Though the form of $M_{1}$ is the easiest to find, the following proposition shows that the form of $M_{\infty}$ can easily be deduced from the form of $M_{1}$. \begin{proposition} \label{inverserank1} Let $M$ be an $n\times n$ matrix with rank $\leq 1$. Suppose that $\mathbb{I}+M$ is invertible. Then \begin{align*} (\mathbb{I}+M)^{-1} = \mathbb{I}-\frac{1}{1+\text{Tr}(M)}M. \end{align*} \end{proposition} \noindent\textbf{Proof. }Since $M$ has rank $\leq 1$ it can be written as $M_{kl}=u_{k}v_{l}$ for $n$-dimensional vectors $u$ and $v$. Thus we notice that \begin{align*} (M^{2})_{kl} = \sum_{m=1}^{n} u_{k}v_{m}u_{m}v_{l} = \text{Tr}(M) M_{kl}. \end{align*} Since $M$ has rank $\leq 1$ we know that it has $n-1$ eigenvalues equal to $0$. The condition that $\mathbb{I}+M$ is invertible thus boils down to $\text{Tr}(M)\neq -1$. We see that \begin{align*} (\mathbb{I}+M)(\mathbb{I}-\frac{1}{1+\text{Tr}(M)}M) = \mathbb{I}+M-\frac{1}{1+\text{Tr}(M)} M-\frac{\text{Tr}(M)}{1+\text{Tr}(M)}M=\mathbb{I}. \end{align*} \begin{flushright}$\square$\end{flushright} \begin{corollary} Suppose $\alpha_{1},\ldots,\alpha_{n},\beta_{1},\ldots,\beta_{n}$ are distinct modulo $1$.
Then in the Frobenius basis at $z=0$ the monodromy matrix around $z=\infty$ satisfies \begin{align} (M_{\infty})_{kl} = e^{2\pi i\alpha_{k}}\delta_{kl}+\frac{4}{c} e^{2\pi i(\beta_{k}+\alpha_{k})} \prod_{m=1}^{n} \frac{\sin(\pi(\beta_{l}-\alpha_{m}))}{\sin(\pi(\beta_{l}-\beta_{m}))} \end{align} where $k,l=1,2,\ldots,n$. \end{corollary} \noindent\textbf{Proof. }We know that $1+\text{Tr}(M_{1}-\mathbb{I})=1+(A_{n}-B_{n})/B_{n} = -c^{2}/4$. Hence \begin{align*} M_{\infty} &= (\mathbb{I}+4 c^{-2}(M_{1}-\mathbb{I})) M_{0}^{-1}, \end{align*} leading to the desired result. \begin{flushright}$\square$\end{flushright} We conclude this section with the remark that when $\alpha_{1},\ldots,\alpha_{n},\beta_{1},\ldots,\beta_{n}\in\mathbb{Q}$ the corresponding monodromy group consists of matrices with algebraic entries. In the next section it will become clear that this is no longer implied in the maximally unipotent case. \subsection{The maximally unipotent case} In this section we will consider the case where $\beta_{1}=\ldots=\beta_{n}=1$\index{Maximally unipotent}. \noindent In what follows it will turn out that our results become more elegant when we slightly alter the Frobenius basis. We will consider the ordered basis $\{f_{n-1}/(2\pi i)^{n-1},f_{n-2}/(2\pi i)^{n-2},\ldots,f_{0}\}$ instead. Notice that in this basis we have \begin{align*} M_{0} = \left( \begin{array}{ccccc} 1 & 1 & \frac{1}{2} & \hdots & \frac{1}{(n-1)!}\\ 0 & 1 & 1 & \hdots & \frac{1}{(n-2)!}\\ 0 & 0 & 1 & \hdots & \frac{1}{(n-3)!}\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & \ldots & 1 \end{array} \right). \end{align*} Thus $M_{0}$ has in particular rational entries. Note that we can write $M_{0}=e^{N}$, where $N$ is our notation for the matrix whose non-zero entries are ones on the superdiagonal.
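The identity $M_{0}=e^{N}$ can be verified directly: $N$ is nilpotent with $N^{n}=0$, so the exponential series terminates after $n$ terms and the $(i,j)$ entry of $e^{N}$ is $1/(j-i)!$ for $j\geq i$. A minimal numerical check of this claim:

```python
import numpy as np
from math import factorial

# N: ones on the superdiagonal; since N^n = 0, exp(N) is the finite sum
# I + N + N^2/2! + ... + N^{n-1}/(n-1)!, with (exp N)_{ij} = 1/(j-i)!.
n = 5
N = np.diag(np.ones(n - 1), k=1)
expN = sum(np.linalg.matrix_power(N, k) / factorial(k) for k in range(n))
M0 = np.array([[1 / factorial(j - i) if j >= i else 0.0
                for j in range(n)] for i in range(n)])
print(np.allclose(expN, M0))  # True
```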
In this newly defined basis we have the following theorem.\\ \begin{theorem} \label{InaarF} The matrix $T$ that transforms functions in the Mellin-Barnes basis $\mathcal{I}$ to the ordered basis $\{f_{n-1}/(2\pi i)^{n-1},f_{n-2}/(2\pi i)^{n-2},\ldots,f_{0}\}$ is given by $T=Q\Phi$. Here $Q$ is the Vandermonde-type matrix $Q_{kl} = (k-\frac{n}{2})^{l}/l!$, where $k,l=0,1,\ldots,n-1$, and \begin{align*} \Phi =\left( \begin{array}{ccccc} \phi(0) & \frac{\phi'(0)}{2\pi i} & \frac{\phi''(0)}{2!(2\pi i)^{2}} & \hdots & \frac{\phi^{(n-1)}(0)}{(n-1)!(2\pi i)^{n-1}}\\ 0 & \phi(0) & \frac{\phi'(0)}{2\pi i} & \hdots & \frac{\phi^{(n-2)}(0)}{(n-2)!(2\pi i)^{n-2}}\\ 0 & 0 & \phi(0) & \hdots & \frac{\phi^{(n-3)}(0)}{(n-3)!(2\pi i)^{n-3}}\\ \vdots & \vdots & & & \vdots\\ 0 & 0 & 0 & \hdots & \phi(0) \end{array} \right), \end{align*} where $\phi$ is the function \begin{align} \phi(s) = \frac{\Gamma(\alpha_{1}+s)\cdots\Gamma(\alpha_{n}+s)}{\Gamma(\alpha_{1})\cdots\Gamma(\alpha_{n})}\Gamma(1-s)^{n}. \end{align} \end{theorem} \noindent\textbf{Proof. }Let $k\in\{0,1,\ldots,n-1\}$.
We see that for $|z|<1$ \begin{align*} & I_{k}(z) = \frac{(-1)^{n}}{(2\pi i)^{n}}\int_{L}\frac{\Gamma(\alpha_{1}+s)\cdots\Gamma(\alpha_{n}+s)}{\Gamma(\alpha_{1})\cdots\Gamma(\alpha_{n})} \Gamma(-s)^{n} e^{(2k-n)\pi i s} z^{s} ds\\ &= \sum_{m=0}^{\infty} \frac{(-1)^{n}}{(n-1)!} \frac{d^{n-1}}{ds^{n-1}}|_{s=m}\frac{\Gamma(\alpha_{1}+s)\cdots\Gamma(\alpha_{n}+s)}{\Gamma(\alpha_{1})\cdots\Gamma(\alpha_{n})} (s-m)^{n}\Gamma(-s)^{n}\frac{e^{(2k-n)\pi i s}}{(2\pi i)^{n-1}} z^{s}\\ &= \sum_{l=0}^{n-1} \frac{\log^{n-1-l}(z)}{(n-1-l)!}\sum_{m=0}^{\infty} \frac{z^{m}}{l!} \frac{d^{l}}{ds^{l}}|_{s=m} \frac{\Gamma(\alpha_{1}+s)\cdots\Gamma(\alpha_{n}+s)}{\Gamma(\alpha_{1})\cdots\Gamma(\alpha_{n})} (m-s)^{n}\Gamma(-s)^{n}\frac{e^{(2k-n)\pi i s}}{(2\pi i)^{n-1}}\\ &=\sum_{l=0}^{n-1} a_{k,l}(z)\frac{1}{(n-1-l)!}\frac{\log^{n-1-l}(z)}{(2\pi i)^{n-1-l}} \end{align*} for suitable analytic functions $a_{k,0},\ldots,a_{k,n-1}$ in a neighborhood of $z=0$ that satisfy in particular \begin{align*} a_{k,l}(0) &= \frac{ (2\pi i)^{-l}}{l!} \frac{d^{l}}{ds^{l}}|_{s=0} \frac{\Gamma(\alpha_{1}+s)\cdots\Gamma(\alpha_{n}+s)}{\Gamma(\alpha_{1})\cdots\Gamma(\alpha_{n})}\Gamma(1-s)^{n} e^{(2k-n)\pi i s}\\ &=\sum_{m=0}^{l} \frac{(k-\frac{n}{2})^{m}}{m!}\frac{\phi^{(l-m)}(0)}{(l-m)!(2\pi i)^{l-m}}. \end{align*} Here we have used the Leibniz rule. By definition we have $I_{k}(z) = \sum_{l=0}^{n-1} T_{kl} f_{n-1-l}/(2\pi i)^{n-1-l}$ in the Frobenius basis. Since $\log^{k}(z)/k!$ is the only term in $f_{k}$ which is a power of a logarithm multiplied by a constant term we can apply Proposition \ref{combilogs} to find \begin{align*} T_{kl} - a_{k,l}(0) = 0\text{ for }l=0,1,\ldots,n-1. \end{align*} \begin{flushright}$\square$\end{flushright} \begin{proposition} \label{combilogs} Let $m\in\mathbb{N}$ and let $a_{0},\ldots,a_{m}$ be analytic functions in a neighborhood of $0$. 
Suppose that for all $z$ in this neighborhood, with argument in $(0,2\pi)$, we have \begin{align*} \sum_{j=0}^{m} a_{j}(z)\log^{j}(z) = 0. \end{align*} Then we have $a_{j}(0)=0$ for all $0\leq j\leq m$. \end{proposition} \noindent\textbf{Proof. }Suppose the statement of the proposition is false. Denote by $0\leq r\leq m$ the largest number such that $a_{r}(0)\neq 0$. We can write \begin{align*} a_{r}(z) = -\sum_{j=0}^{r-1} a_{j}(z)\log^{j-r}(z)-\sum_{j=r+1}^{m} a_{j}(z)\log^{j-r}(z). \end{align*} Taking the limit $z\to 0$ yields $a_{r}(0)=0$, contradicting our assumption that $r$ was the largest number such that $a_{r}(0)\neq 0$. Here we have used that $\log^{j-r}(z)\to 0$ for $j<r$, while for the terms with $j>r$ the maximality of $r$ gives $a_{j}(0)=0$, so that $a_{j}(z)/z$ is bounded and the standard limit $z\log^{j-r}(z)\to 0$ applies. \begin{flushright}$\square$\end{flushright} \begin{remark} \label{combilogremark} By induction it follows that the analytic functions $a_{j}$ must in fact vanish identically. \end{remark} \begin{theorem} \label{M1stelling} In the ordered basis $\{f_{n-1}/(2\pi i)^{n-1},f_{n-2}/(2\pi i)^{n-2},\ldots,f_{0}\}$ we have $M_{1} = \mathbb{I}+u v^T$. Here \begin{align*} u =\left( \begin{array}{c} (T^{-1})_{00}\\ (T^{-1})_{10} \\ \vdots \\ (T^{-1})_{(n-1)0} \end{array} \right) \textit{ and } v = \left( \begin{array}{cccc} \frac{V^{(0)}(0)}{0!}\\ \frac{1}{2\pi i}\frac{V^{(1)}(0)}{1!}\\ \vdots \\ \frac{1}{(2\pi i)^{n-1}}\frac{V^{(n-1)}(0)}{(n-1)!} \end{array} \right) \end{align*} and the function $V$ is defined by \begin{align*} V(s) = (-1)^{n}\phi(s)e^{-\pi i n s} \prod_{k=1}^{n} (e^{2\pi i s}-e^{-2\pi i \alpha_{k}}). \end{align*} \end{theorem} \noindent\textbf{Proof.
}From Theorem \ref{MBmonodromy} we obtain in the Mellin-Barnes basis \begin{align*} M_{1} &= M_{0}^{-1} M_{\infty}^{-1}\\ &= \left( \begin{array}{ccccc} (-1)^{n}A_{n} & (-1)^{n}A_{n-1}+\binom{n}{1} & (-1)^{n}A_{n-2}-\binom{n}{2} & \hdots & (-1)^{n}A_{1}\pm\binom{n}{n-1}\\ 0 & 1 & 0 & \hdots & 0\\ 0 & 0 & 1 & \hdots & 0\\ \vdots & \vdots & & & \vdots\\ 0 & 0 & 0 & \hdots & 1 \end{array} \right) \end{align*} Now we notice that the $(0,l)$th entry of $(M_{1}-\mathbb{I})T$ is \begin{align*} ((M_{1}-\mathbb{I})T)_{0l} &= \sum_{k=0}^{n-1} (-1)^{n}\left[A_{n-k}-(-1)^{n-k}\binom{n}{k}\right] \frac{(2\pi i)^{n-1-l}}{l!} \phi_{k}^{(l)}(0) \end{align*} where \begin{align*} \phi_{k}(s) = \frac{\phi(s)}{(2\pi i)^{n-1}}e^{-\pi i n s}e^{2\pi i k s}. \end{align*} We see that \begin{align*} \sum_{k=0}^{n-1}\binom{n}{k} (-1)^{n-k}\phi_{k}^{(l)}(0) &= \frac{d^{l}}{ds^{l}}|_{s=0} \frac{\phi(s)}{(2\pi i)^{n-1}}e^{-\pi i n s}\sum_{k=0}^{n-1} \binom{n}{k} (-1)^{n-k}e^{2\pi i k s}\\ &= \frac{d^{l}}{ds^{l}}|_{s=0}\frac{\phi(s)}{(2\pi i)^{n-1}}e^{-\pi i n s}\left((e^{2\pi i s}-1)^{n}-e^{2\pi i n s}\right)\\ &= 0 - \phi_{n}^{(l)}(0) = -A_{0} \phi_{n}^{(l)}(0). \end{align*} Therefore \begin{align*} ((M_{1}-\mathbb{I})T)_{0l} &= (2\pi i)^{n-1-l}\frac{(-1)^{n}}{l!} \sum_{k=0}^{n} A_{n-k} \phi_{k}^{(l)}(0)\\ &= (2\pi i)^{n-1-l}\frac{(-1)^{n}}{l!} \frac{d^{l}}{ds^{l}}|_{s=0}\frac{\phi(s)}{(2\pi i)^{n-1}}e^{-\pi i n s} \prod_{k=1}^{n} (e^{2\pi i s}-e^{-2\pi i \alpha_{k}})\\ &= (2\pi i)^{-l}\frac{V^{(l)}(0)}{l!}. \end{align*} Here we used the Leibniz rule. Of course all other entries of $(M_{1}-\mathbb{I})T$ are zero.
We conclude that in the ordered basis $\{f_{n-1}/(2\pi i)^{n-1},f_{n-2}/(2\pi i)^{n-2},\ldots,f_{0}\}$ we have \begin{align*} M_{1} &= \mathbb{I}+T^{-1}(M_{1}^{\mathcal{I}}-\mathbb{I})T\\ &= \mathbb{I}+\left( \begin{array}{c} (T^{-1})_{00}\\ (T^{-1})_{10} \\ \vdots \\ (T^{-1})_{(n-1)0} \end{array} \right) \left( \begin{array}{cccc} \frac{V^{(0)}(0)}{0!}& \frac{1}{2\pi i}\frac{V^{(1)}(0)}{1!}& \hdots & \frac{1}{(2\pi i)^{n-1}}\frac{V^{(n-1)}(0)}{(n-1)!} \end{array} \right). \end{align*} Here the superscript $\mathcal{I}$ indicates that the particular matrix is expressed in the Mellin-Barnes basis. \begin{flushright}$\square$\end{flushright} Using Proposition \ref{inverserank1} we get the following corollary. \begin{corollary} \label{M1stelling2} In the ordered basis $\{f_{n-1}/(2\pi i)^{n-1},f_{n-2}/(2\pi i)^{n-2},\ldots,f_{0}\}$ we have $M_{1}^{\mathcal{F}} = e^{N}+u v^T$. Here \begin{align*} u =\left( \begin{array}{c} (T^{-1})_{00}\\ (T^{-1})_{10} \\ \vdots \\ (T^{-1})_{(n-1)0} \end{array} \right) \textit{ and } v = \left( \begin{array}{cccc} \frac{W^{(0)}(0)}{0!}\\ \frac{1}{2\pi i}\frac{W^{(1)}(0)}{1!}\\ \vdots \\ \frac{1}{(2\pi i)^{n-1}}\frac{W^{(n-1)}(0)}{(n-1)!} \end{array} \right) \end{align*} and the function $W$ is defined by $W(s) = (-1)^{n} e^{-2\pi i(\alpha_{1}+\ldots+\alpha_{n})} e^{2\pi i s} V(s)$. \end{corollary} \section{The case where $(X-e^{-2\pi i\alpha_{1}})\cdots (X-e^{-2\pi i\alpha_{n}})$ is a product of cyclotomic polynomials} Theorem \ref{M1stelling} shows us that for large $n$ the expressions for the monodromy matrices seem to become rather cumbersome. Therefore we will, in this chapter, limit our study of the monodromy matrices in the maximally unipotent case to the case where $(X-e^{-2\pi i\alpha_{1}})\cdots (X-e^{-2\pi i\alpha_{n}})$ is a product of cyclotomic polynomials. This is actually not such a big restriction, since it seems to be a case of particular interest (see for example \cite{CYY}).
In particular, many Calabi-Yau differential equations are of this form. \subsection{Polynomials with roots in the cyclotomic field}\index{Cyclotomic field} \begin{proposition} \label{polynoomst} Let $p\in\mathbb{Q}[X]$ be monic and suppose all its roots are roots of unity not equal to $1$. Then there exists a number $r\in\mathbb{N}$ and numbers $a_{1},\ldots,a_{r},b_{1},\ldots,b_{r}\in\mathbb{N}$ such that \begin{align} \label{polynoomvorm} p(X) = \frac{(X^{a_{1}}-1)\cdots (X^{a_{r}}-1)}{(X^{b_{1}}-1)\cdots (X^{b_{r}}-1)}. \end{align} \end{proposition} \noindent\textbf{Proof.} This follows immediately from the fact that the $k$-th cyclotomic polynomial satisfies \begin{align*} \Phi_{k}(X) = \prod_{d|k} (X^{d}-1)^{\mu(k/d)}, \end{align*} where $\mu$ denotes the M\"obius function. \begin{flushright}$\square$\end{flushright} \begin{theorem} \label{gammaproduct} Let $\alpha_{1},\ldots,\alpha_{n}\in \mathbb{Q}\cap (0,1)$ and suppose that $(X-e^{-2\pi i\alpha_{1}})\cdots (X-e^{-2\pi i\alpha_{n}})$ has integer coefficients. Then there exist a number $r\in\mathbb{N}$ and numbers $a_{1},\ldots,a_{r},b_{1},\ldots,b_{r}\in\mathbb{N}$ such that \begin{align*} \prod_{k=1}^{n} \Gamma(\alpha_{k}+s) = C^{-s}\frac{\Gamma(a_{1}s)\cdots \Gamma(a_{r}s)}{\Gamma(b_{1}s)\cdots \Gamma(b_{r}s)} (2\pi)^{\frac{n}{2}}\sqrt{\frac{a_{1}\cdots a_{r}}{b_{1}\cdots b_{r}}}\text{ where } C = \frac{a_{1}^{a_{1}}\cdots a_{r}^{a_{r}}}{b_{1}^{b_{1}}\cdots b_{r}^{b_{r}}}. \end{align*} \end{theorem} \noindent\textbf{Proof.} By Proposition \ref{polynoomst} we find a number $r\in\mathbb{N}$ and numbers $a_{1},\ldots,a_{r},b_{1},\ldots,b_{r}\in\mathbb{N}$ such that \begin{align*} \prod_{k=1}^{n} \Gamma(\alpha_{k}+s) = \frac{\left(\prod_{j=0}^{a_{1}-1}\Gamma(\frac{j}{a_{1}}+s)\right)\cdots \left(\prod_{j=0}^{a_{r}-1}\Gamma(\frac{j}{a_{r}}+s)\right)}{\left(\prod_{j=0}^{b_{1}-1}\Gamma(\frac{j}{b_{1}}+s)\right)\cdots \left(\prod_{j=0}^{b_{r}-1}\Gamma(\frac{j}{b_{r}}+s)\right)}.
\end{align*} This is due to the fact that a bijection can be made between the terms in which the gamma functions are evaluated and the roots of the corresponding polynomials. According to the multiplication theorem for the Gamma function this equals \begin{align*} \frac{\left(\Gamma(a_{1}s)(2\pi)^{\frac{a_{1}-1}{2}} a_{1}^{\frac{1}{2}-a_{1}s}\right)\cdots\left(\Gamma(a_{r}s)(2\pi)^{\frac{a_{r}-1}{2}} a_{r}^{\frac{1}{2}-a_{r}s}\right)}{\left(\Gamma(b_{1}s)(2\pi)^{\frac{b_{1}-1}{2}} b_{1}^{\frac{1}{2}-b_{1}s}\right)\cdots \left(\Gamma(b_{r}s)(2\pi)^{\frac{b_{r}-1}{2}} b_{r}^{\frac{1}{2}-b_{r}s}\right)}\\ =C^{-s}\frac{\Gamma(a_{1}s)\cdots \Gamma(a_{r}s)}{\Gamma(b_{1}s)\cdots \Gamma(b_{r}s)} (2\pi)^{\frac{n}{2}}\sqrt{\frac{a_{1}\cdots a_{r}}{b_{1}\cdots b_{r}}} \end{align*} where we have used that $a_{1}+\ldots+a_{r} = n+b_{1}+\ldots+b_{r}$. \begin{flushright}$\square$\end{flushright} \begin{remark} Notice that we can rewrite this formula as \begin{align*} \prod_{k=1}^{n} \Gamma(\alpha_{k}+s) = C^{-s}\frac{\Gamma(a_{1}s+1)\cdots \Gamma(a_{r}s+1)}{\Gamma(b_{1}s+1)\cdots \Gamma(b_{r}s+1)} (2\pi)^{\frac{n}{2}}\sqrt{\frac{b_{1}\cdots b_{r}}{a_{1}\cdots a_{r}}} \end{align*} which implies the appealing form \begin{align} \label{appeal} C^{s}\prod_{k=1}^{n} \frac{\Gamma(\alpha_{k}+s)}{\Gamma(\alpha_{k})} = \frac{\Gamma(a_{1}s+1)\cdots \Gamma(a_{r}s+1)}{\Gamma(b_{1}s+1)\cdots \Gamma(b_{r}s+1)}. \end{align} \end{remark} The proof of the following proposition is due to Julian Lyczak and Merlijn Staps. \begin{proposition} \label{integerC} The number $C$ of Theorem \ref{gammaproduct} is an integer. \end{proposition} \noindent\textbf{Proof.
}Let $m\in\mathbb{N}$. The number of factors of the product $(X^{b_{1}}-1)\cdots (X^{b_{r}}-1)$ of which $e^{2\pi i/m}$ is a root cannot exceed the number of factors of the product $(X^{a_{1}}-1)\cdots (X^{a_{r}}-1)$ of which $e^{2\pi i/m}$ is a root; otherwise $(X^{a_{1}}-1)\cdots (X^{a_{r}}-1)(X^{b_{1}}-1)^{-1}\cdots (X^{b_{r}}-1)^{-1}$ could not be a polynomial. We conclude that $|\{j:m|a_{j}\}|\geq |\{j:m|b_{j}\}|$ for all $m\in\mathbb{N}$. Now let $p$ be prime and let $k\in\mathbb{N}$. Define $A_{k}=\{a_{j}:p^{k}|a_{j}\}$ and $B_{k}=\{b_{j}:p^{k}|b_{j}\}$ and consider the rational function \begin{align*} q(X) = \prod_{a\in A_{k}} (X^{a}-1)/\prod_{b\in B_{k}} (X^{b}-1). \end{align*} Suppose $q(X)$ is not a polynomial; then there exists a root of unity $\zeta\neq 1$ such that there are more factors of the form $(X^{b}-1)$ than of the form $(X^{a}-1)$ that have $\zeta$ as a root. This root is of the form $\zeta=e^{2\pi i l/m}$ for some $l,m\in\mathbb{N}$, where $m>1$. In particular, $|\{a\in A_{k}:m|a\}|<|\{b\in B_{k}:m|b\}|$. However, because $|A_{k}|=|\{j:p^{k}|a_{j}\}|\geq |\{j:p^{k}|b_{j}\}|=|B_{k}|$ we must have $\gcd(m,p)=1$, and this would imply $|\{j:p^{k}m|a_{j}\}|<|\{j:p^{k}m|b_{j}\}|$, which is a contradiction. We must conclude that $q(X)$ is a polynomial, thus by comparing degrees we have \begin{align*} \sum_{a\in A_{k}} a \geq \sum_{b\in B_{k}} b. \end{align*} Denote by $\mathcal{A}_{j}$ the largest integer such that $p^{\mathcal{A}_{j}}| a_{j}$ and by $\mathcal{B}_{j}$ the largest integer such that $p^{\mathcal{B}_{j}} | b_{j}$. The proposition is now proved by the observation that \begin{align*} \sum_{j=1}^{r} \mathcal{A}_{j} a_{j} = \sum_{k=1}^{\infty} \sum_{a\in A_{k}} a\geq \sum_{k=1}^{\infty} \sum_{b\in B_{k}} b = \sum_{j=1}^{r} \mathcal{B}_{j} b_{j}. \end{align*} \begin{flushright}$\square$\end{flushright} \begin{corollary} \label{faculteit} Let $r\in\mathbb{N}$ and let $a_{1},\ldots,a_{r},b_{1},\ldots,b_{r}\in \mathbb{N}$.
Suppose that \begin{align} \label{poly1} \frac{(X^{a_{1}}-1)\cdots (X^{a_{r}}-1)}{(X^{b_{1}}-1)\cdots (X^{b_{r}}-1)} \end{align} is a polynomial. Then $\frac{a_{1}!\cdots a_{r}!}{b_{1}!\cdots b_{r}!}$ is an integer. \end{corollary} \noindent\textbf{Proof. }Notice that by multiplying with $(X-1)$ we may assume (\ref{poly1}) to be non-constant. Without loss of generality (\ref{poly1}) is irreducible (this follows from Proposition \ref{polynoomst}). Thus there exists an $\mathcal{N}\in\mathbb{N}$ such that $\{\alpha_{1},\ldots,\alpha_{n}\}=\{m/\mathcal{N}:0<m<\mathcal{N},\gcd(m,\mathcal{N})=1\}$. It follows from (\ref{appeal}) that \begin{align*} \frac{a_{1}!\cdots a_{r}!}{b_{1}!\cdots b_{r}!} = \alpha_{1}\cdots \alpha_{n} \frac{a_{1}^{a_{1}}\cdots a_{r}^{a_{r}}}{b_{1}^{b_{1}}\cdots b_{r}^{b_{r}}}. \end{align*} Let $p$ be a prime divisor of $\mathcal{N}$ and denote by $m$ its multiplicity. We follow the proof of Proposition \ref{integerC} until we define the polynomial \begin{align*} q(X) = \prod_{a\in A_{k}} (X^{a}-1)/\prod_{b\in B_{k}} (X^{b}-1) \end{align*} for $k\leq m$ (with the same notation). Notice that indeed there must exist an $a_{j}$ such that $p^{m}|a_{j}$ because $e^{2\pi i/\mathcal{N}}$ must be a root of our original polynomial. In this case, each $e^{-2\pi i\alpha_{j}}$ must be a root of $q(X)$: it is a root of our original polynomial and cannot be a root of any factor not corresponding to $A_{k}$. By comparing degrees we conclude that \begin{align*} \sum_{a\in A_{k}} a \geq n+\sum_{b\in B_{k}} b. \end{align*} We obtain \begin{align*} -m n+\sum_{j=1}^{r} \mathcal{A}_{j} a_{j} = -m n+\sum_{k=1}^{\infty} \sum_{a\in A_{k}} a\geq \sum_{k=1}^{\infty} \sum_{b\in B_{k}} b = \sum_{j=1}^{r} \mathcal{B}_{j} b_{j} \end{align*} which proves our corollary.
\begin{flushright}$\square$\end{flushright} \subsection{A general expression for the monodromy matrices of the maximally unipotent case} \noindent If, instead of the generalized hypergeometric equation, we consider the equation \begin{align} \label{hypergeoC} \theta^{n}f=Cz(\theta-\alpha_{1})\cdots (\theta-\alpha_{n})f \end{align} then a solution $f$ of this equation for $C=1$, i.e. of the hypergeometric case, induces the solution $f(C z)$ for general $C\in\mathbb{C}\setminus\{0\}$. In other words, normalization of $z$ provides us with solutions to a related differential equation. Let us use our knowledge of the hypergeometric equation to find `a Frobenius basis' for (\ref{hypergeoC}). Denote this Frobenius basis by $f_{0}^{C},\ldots,f_{n-1}^{C}$. We know that a basis of solutions is given by $f_{0}(Cz),\ldots,f_{n-1}(Cz)$. Notice that \begin{align*} f_{j}(Cz) &= \frac{\log^{j}(Cz)}{j!}+\sum_{m=0}^{j} \frac{\log^{m}(Cz)}{m!}h_{m}(Cz)\\ &= \sum_{m=0}^{j} \frac{\log^{m}(z)}{m!} \frac{\log^{j-m}(C)}{(j-m)!}+\sum_{m=0}^{j} \frac{(\log(z)+\log(C))^{m}}{m!}h_{m}(Cz)\\ &= \sum_{m=0}^{j} \frac{\log^{j-m}(C)}{(j-m)!} f_{m}^{C}(z). \end{align*} We conclude that \begin{align*} \left(\begin{array}{c} f_{n-1}^{C}(z)/(2\pi i)^{n-1} \\ \vdots \\ f_{0}^{C}(z) \end{array}\right) = C^{-\frac{N}{2\pi i}} \left(\begin{array}{c} f_{n-1}(Cz)/(2\pi i)^{n-1} \\ \vdots \\ f_{0}(Cz) \end{array}\right). \end{align*} Again $N$ is the matrix whose only nonzero entries are ones on the superdiagonal. Notice that in this case our monodromy group is generated by $M_{0},M_{1/C}$ and $M_{\infty}$. \\ From now on we choose $C$ to be the constant from the previous paragraph, that is \begin{align*} C = \frac{a_{1}^{a_{1}}\cdots a_{r}^{a_{r}}}{b_{1}^{b_{1}}\cdots b_{r}^{b_{r}}}. \end{align*} \begin{theorem} Let $\alpha_{1},\ldots,\alpha_{n}\in \mathbb{Q}\cap (0,1)$ and suppose that $(X-e^{-2\pi i\alpha_{1}})\cdots (X-e^{-2\pi i\alpha_{n}})$ has integer coefficients.
Then the solution $f_{0}^{C}$ of (\ref{hypergeoC}) has integer coefficients in its power series expansion. \end{theorem} \noindent\textbf{Proof. }From the above discussion we infer that \begin{align*} f_{0}^{C}(z) = {_{n}F_{n-1}}(\alpha_{1},\ldots,\alpha_{n};1,\ldots,1|Cz) = \sum_{m=0}^{\infty} \frac{(a_{1}m)!\cdots (a_{r}m)!}{(b_{1}m)!\cdots (b_{r}m)!}\frac{z^{m}}{m!^{n}}, \end{align*} where we have used (\ref{appeal}). Without loss of generality $(X-e^{-2\pi i\alpha_{1}})\cdots (X-e^{-2\pi i\alpha_{n}})$ is irreducible. Let $p\leq m$ be prime. Let $\mathcal{N}$ be as in Corollary \ref{faculteit}. Suppose $p\nmid\mathcal{N}$. We have \begin{align*} \frac{(a_{1}m)!\cdots (a_{r}m)!}{(b_{1}m)!\cdots (b_{r}m)!} &= \left(\prod_{k=1}^{n} \prod_{l=0}^{m-1} \frac{\alpha_{k}+l}{m}\right) \frac{(a_{1}m)^{a_{1}m}\cdots (a_{r}m)^{a_{r}m}}{(b_{1}m)^{b_{1}m}\cdots (b_{r}m)^{b_{r}m}}\\ &= \left(\prod_{k=1}^{n} \prod_{l=0}^{m-1} \frac{\mathcal{N}\alpha_{k}+\mathcal{N} l}{\mathcal{N}}\right) \left(\frac{a_{1}^{a_{1}}\cdots a_{r}^{a_{r}}}{b_{1}^{b_{1}}\cdots b_{r}^{b_{r}}}\right)^{m}. \end{align*} Because $\gcd(p,\mathcal{N})=1$ we have $\{0,\mathcal{N},2\mathcal{N},\ldots,(p^{l}-1)\mathcal{N}\}\equiv\{0,1,\ldots,p^{l}-1\} \mod{p^{l}}$. Thus at least $[m/p^{l}]$ of $\mathcal{N} \alpha_{k}, \mathcal{N} \alpha_{k}+\mathcal{N},\ldots,\mathcal{N} \alpha_{k}+(m-1)\mathcal{N}$ must be divisible by $p^{l}$. We conclude that \begin{align*} p^{n([m/p]+[m/p^{2}]+\ldots)}|\prod_{k=1}^{n} \prod_{l=0}^{m-1} (\mathcal{N}\alpha_{k}+\mathcal{N} l), \end{align*} and this is enough, since $v_{p}(m!^{n})=n([m/p]+[m/p^{2}]+\ldots)$ while $p\nmid\mathcal{N}$ and $C$ is an integer by Proposition \ref{integerC}. Now suppose $p|\mathcal{N}$ with multiplicity $e$.
We notice that \begin{align*} \frac{(a_{1}m)!\cdots (a_{r}m)!}{(b_{1}m)!\cdots (b_{r}m)!} &= \left(\prod_{k=1}^{n} \prod_{l=0}^{m-1} \frac{\mathcal{N}\alpha_{k}+\mathcal{N} l}{\mathcal{N}\alpha_{k}}\right) \left(\frac{a_{1}!\cdots a_{r}!}{b_{1}!\cdots b_{r}!}\right)^{m}. \end{align*} We should prove that \begin{align*} p|\frac{a_{1}!\cdots a_{r}!}{b_{1}!\cdots b_{r}!}. \end{align*} If this is not the case then we deduce from the proof of Corollary \ref{faculteit} that \begin{align*} (X-e^{-2\pi i\alpha_{1}})\cdots (X-e^{-2\pi i\alpha_{n}}) = \prod_{a\in A_{e}} (X^{a}-1)/\prod_{b\in B_{e}} (X^{b}-1). \end{align*} Thus $(X-e^{-2\pi i\alpha_{1}})\cdots (X-e^{-2\pi i\alpha_{n}})=q(X^{p^{e}})$ for some polynomial $q$ that must necessarily be cyclotomic and irreducible. We conclude that there must exist an $\mathcal{M}\in\mathbb{N}$ such that $\varphi(\mathcal{N})=p^{e}\varphi(\mathcal{M})$, where $\varphi$ is the Euler totient function\index{Euler's totient function}. Also we deduce that $p^{e}|n$. Since $e^{2\pi i p^{e}/\mathcal{N}}$ is a root of $q$ we must have $\mathcal{N}/p^{e}|\mathcal{M}$. Hence \begin{align*} \varphi(\mathcal{N}) = p^{e}\varphi(\mathcal{M}) \geq p^{e}\varphi(\mathcal{N}/p^{e}) = \varphi(\mathcal{N}) \frac{p}{p-1} > \varphi(\mathcal{N}), \end{align*} a contradiction. \begin{flushright}$\square$\end{flushright} The authors of \cite{CYY} point out that this result holds for all Picard-Fuchs equations (i.e. the $n=4$ case); it is actually used as part of the definition of a Calabi-Yau type differential equation by the authors of \cite{AESZ}. A folklore conjecture that goes back to Bombieri and Dwork states that all power series $y_{0}(z)\in\mathbb{Z}[[z]]$ that satisfy a homogeneous linear differential equation have a geometrical origin.\\ Matrices that have the form of $\Phi$ from Theorem \ref{InaarF} have a certain homomorphism\index{Homomorphism} property.
Explicitly, for a function $C(s)$ we have \begin{align*} & \left( \begin{array}{ccccc} \phi(0) & \frac{\phi'(0)}{2\pi i} & \frac{\phi''(0)}{2!(2\pi i)^{2}} & \hdots & \frac{\phi^{(n-1)}(0)}{(n-1)!(2\pi i)^{n-1}}\\ 0 & \phi(0) & \frac{\phi'(0)}{2\pi i} & \hdots & \frac{\phi^{(n-2)}(0)}{(n-2)!(2\pi i)^{n-2}}\\ 0 & 0 & \phi(0) & \hdots & \frac{\phi^{(n-3)}(0)}{(n-3)!(2\pi i)^{n-3}}\\ \vdots & \vdots & & & \vdots\\ 0 & 0 & 0 & \hdots & \phi(0) \end{array} \right) \left( \begin{array}{ccccc} C(0) & \frac{C'(0)}{2\pi i} & \frac{C''(0)}{2!(2\pi i)^{2}} & \hdots & \frac{C^{(n-1)}(0)}{(n-1)!(2\pi i)^{n-1}}\\ 0 & C(0) & \frac{C'(0)}{2\pi i} & \hdots & \frac{C^{(n-2)}(0)}{(n-2)!(2\pi i)^{n-2}}\\ 0 & 0 & C(0) & \hdots & \frac{C^{(n-3)}(0)}{(n-3)!(2\pi i)^{n-3}}\\ \vdots & \vdots & & & \vdots\\ 0 & 0 & 0 & \hdots & C(0) \end{array} \right)\\ &= \left( \begin{array}{ccccc} \phi_{C}(0) & \frac{\phi_{C}'(0)}{2\pi i} & \frac{\phi_{C}''(0)}{2!(2\pi i)^{2}} & \hdots & \frac{\phi_{C}^{(n-1)}(0)}{(n-1)!(2\pi i)^{n-1}}\\ 0 & \phi_{C}(0) & \frac{\phi_{C}'(0)}{2\pi i} & \hdots & \frac{\phi_{C}^{(n-2)}(0)}{(n-2)!(2\pi i)^{n-2}}\\ 0 & 0 & \phi_{C}(0) & \hdots & \frac{\phi_{C}^{(n-3)}(0)}{(n-3)!(2\pi i)^{n-3}}\\ \vdots & \vdots & & & \vdots\\ 0 & 0 & 0 & \hdots & \phi_{C}(0) \end{array} \right) \end{align*} where $\phi_{C}(s) = \phi(s) C(s)$. Notice that the second matrix in the product is simply $C^{\frac{N}{2\pi i}}$ when $C(s)$ is defined to be $C^{s}$. The results we have found so far adapt naturally to the new basis (where $z$ is normalized with $C$): we simply substitute $\phi$ by $\phi_{C}$ (compare this with Theorem \ref{InaarF}). It should be clear why this basis is interesting: with our particular choice of $C$ we have the appealing form \begin{align*} \phi_{C}(s) = \frac{\Gamma(a_{1}s+1)\cdots \Gamma(a_{r}s+1)}{\Gamma(b_{1}s+1)\cdots \Gamma(b_{r}s+1)}\Gamma(1-s)^{n}. \end{align*} \begin{definition} Let $j\in\mathbb{N}$.
By $\pi_{j}$ we denote the set of integer partitions\index{Integer partition} of $j$, i.e. the set of finite (not necessarily strictly) decreasing sequences of natural numbers $p_{1},p_{2},\ldots$ such that $p_{1}+p_{2}+\ldots=j$. Any function $g$ whose domain contains $\mathbb{N}$ can be extended to partitions by multiplication, i.e. $g(p) = g(p_{1})g(p_{2})\cdots$. Additionally, we define $\pi_{0}=\{0\}$ and $g(0)=1$. \end{definition} The following theorem will provide us with a practical method to obtain the monodromy matrices in the ordered basis $f_{n-1}^{C}/(2\pi i)^{n-1},\ldots, f_{1}^{C}/(2\pi i), f_{0}^{C}$.\\ \begin{theorem} \label{main?} \textbf{(Main Theorem)}\\ Let $\alpha_{1},\ldots,\alpha_{n}\in \mathbb{Q}\cap (0,1)$ and suppose that $(X-e^{-2\pi i\alpha_{1}})\cdots (X-e^{-2\pi i\alpha_{n}})$ is a product of cyclotomic polynomials. Let $r\in\mathbb{N}$ and $a_{1},\ldots,a_{r},b_{1},\ldots,b_{r}\in\mathbb{N}$ be as in Theorem \ref{gammaproduct} and define $\zeta(1)=0$ for convenience. In the ordered basis $f_{n-1}^{C}/(2\pi i)^{n-1},\ldots,f_{1}^{C}/(2\pi i),f_{0}^{C}$ of (\ref{hypergeoC}) we have $M_{1/C} = \mathbb{I} - v_{-} v_{+}^{T}$, where \begin{align*} \label{entries} v_{-,j} &= \sum_{l=0}^{n-1-j} c_{l+j} \sum_{p\in\pi_{l}} \frac{1}{M(p)} c_{p}^{-}\frac{\zeta(p)}{(2\pi i)^{p}}\text{ and }v_{+,j} = \sum_{p\in \pi_{j}} \frac{1}{M(p)} c_{p}^{+}\frac{\zeta(p)}{(2\pi i)^{p}} \end{align*} for $j=0,1,\ldots,n-1$. Here the coefficients $c_{j},c_{j}^{\pm}\in\mathbb{Q}$ are given by $c_{0}^{\pm}=1$ and \begin{align*} c_{j}^{\pm} &= \frac{1}{j}\left(\pm n- (\pm 1)^{j}\sum_{m=1}^{r} (a_{m}^{j}-b_{m}^{j})\right)\text{ and }c_{j} = \frac{1}{(n-1)!}\frac{a_{1}\cdots a_{r}}{b_{1}\cdots b_{r}}\frac{d^{j}}{dz^{j}}\left. \prod_{m=1}^{n-1} \left(z-m+\frac{n}{2}\right)\right|_{z=0} \end{align*} (the definition for $c_{j}$ also being valid for $j=0$) and the function $M:\pi_{0}\cup \pi_{1} \cup\cdots\to\mathbb{N}$ by $M(p_{1},p_{2},\cdots) = |\{k:p_{k}=1\}|! 
|\{k:p_{k}=2\}|!\cdots$\\ \noindent In particular, all matrices in the corresponding monodromy group have their entries in $\mathbb{Q}(\zeta(3) (2\pi i)^{-3},\zeta(5) (2\pi i)^{-5},\ldots,\zeta(m)(2\pi i)^{-m})$, with $m$ the largest odd number below $n$.\\ \end{theorem} \noindent\textbf{Proof. }We use the function $V$ from Theorem \ref{M1stelling}. After conjugation with the matrix $C^{\frac{N}{2\pi i}}$ we have the same theorem but with function $\phi_{C}(s)=C^{s}\phi(s)$ instead. Notice that \begin{align*} (-1)^{n}e^{\pi i(\alpha_{1}+\ldots+\alpha_{n})} V_{C}(s) &:= \phi(s) C^{s}\prod_{k=1}^{n} (e^{\pi i(\alpha_{k}+s)}-e^{-\pi i(\alpha_{k}+s)})\\ &= (2\pi i)^{n}\Gamma(1-s)^{n}C^{s}\prod_{k=1}^{n} \frac{1}{\Gamma(\alpha_{k})\Gamma(1-\alpha_{k}-s)}\\ &= (2\pi i)^{n} \frac{\Gamma(1-s)^{n}}{\Gamma(\alpha_{1})^{2}\cdots \Gamma(\alpha_{n})^{2}} \frac{\Gamma(1-b_{1}s)\cdots \Gamma(1-b_{r}s)}{\Gamma(1-a_{1}s)\cdots \Gamma(1-a_{r}s)}\\ &= i^{n} \Gamma(1-s)^{n} \frac{a_{1}\cdots a_{r}}{b_{1}\cdots b_{r}}\frac{\Gamma(1-b_{1}s)\cdots \Gamma(1-b_{r}s)}{\Gamma(1-a_{1}s)\cdots \Gamma(1-a_{r}s)}. \end{align*} We remark that one must have $\alpha_{1}+\ldots+\alpha_{n}=\frac{n}{2}$.
Using the formula \begin{align*} \log \Gamma(1+s) &= -\gamma s+\sum_{p=2}^{\infty} \frac{(-1)^{p}}{p} \zeta(p) s^{p} \end{align*} yields \begin{align*} (-1)^{n}\frac{b_{1}\cdots b_{r}}{a_{1}\cdots a_{r}} V_{C}(s) &= \exp\left(\sum_{p=2}^{\infty} c_{p}^{+} \zeta(p) s^{p}\right)\\ &= 1+\sum_{r=1}^{\infty} \frac{1}{r!} \left(\sum_{p_{1}=1}^{\infty} c_{p_{1}}^{+} \zeta(p_{1}) s^{p_{1}}\right) \left(\sum_{p_{2}=1}^{\infty} c_{p_{2}}^{+} \zeta(p_{2}) s^{p_{2}}\right) \cdots \left(\sum_{p_{r}=1}^{\infty} c_{p_{r}}^{+} \zeta(p_{r}) s^{p_{r}}\right)\\ &= 1+\sum_{j=1}^{\infty} s^{j} \sum_{r=1}^{j} \frac{1}{r!} \sum_{p_{1}+\cdots+p_{r}=j} c_{p_{1}}^{+} \zeta(p_{1})\cdots c_{p_{r}}^{+} \zeta(p_{r})\\ &= \sum_{j=0}^{\infty} \left(\sum_{p\in\pi_{j}} \frac{1}{M(p)} c_{p}^{+}\frac{\zeta(p)}{(2\pi i)^{p}}\right) (2\pi i s)^{j}, \end{align*} where $p c_{p}^{+} = n-(a_{1}^{p}+\ldots+a_{r}^{p}-b_{1}^{p}-\ldots-b_{r}^{p})$. To complete the proof we will have to know the inverse of $Q\Phi C^{\frac{N}{2\pi i}}$. The inverse of $\Phi C^{\frac{N}{2\pi i}}$ is obvious from the homomorphism property of this type of matrix. We remark that the inverse of $Q$ is determined by \begin{align*} \prod_{m=0,m\neq k}^{n-1}\frac{(z-m+\frac{n}{2})}{k-j} = \frac{(Q^{-1})_{0,k}}{0!}+\frac{(Q^{-1})_{1,k}}{1!} z+\ldots+\frac{(Q^{-1})_{n-1,k}}{(n-1)!} z^{n-1}. \end{align*} Fortunately we will only need the first column. We find \begin{align*} (Q^{-1})_{l,0} = \frac{(-1)^{n-1}}{(n-1)!}\frac{d^{l}}{dz^{l}}|_{z=0} \prod_{m=1}^{n-1} (z-m+\frac{n}{2}). \end{align*} We notice that \begin{align*} \frac{1}{\phi_{C}(s)} &= \Gamma(1-s)^{-n} \frac{\Gamma(b_{1}s)\cdots\Gamma(b_{r}s)}{\Gamma(a_{1}s)\cdots\Gamma(a_{r}s)}\\ &= \exp\left(\sum_{p=2} c_{p}^{-}\zeta(p) s^{p}\right)\\ &= \sum_{j=0}^{\infty} \left(\sum_{p\in\pi_{j}} \frac{1}{M(p)} c_{p}^{-}\frac{\zeta(p)}{(2\pi i)^{p}}\right) (2\pi i s)^{j}, \end{align*} where $p c_{p}^{-} = -n-(-1)^{p}(a_{1}^{p}+\ldots+a_{r}^{p}-b_{1}^{p}-\ldots-b_{r}^{p})$. 
It follows that \begin{align*} (n-1)!(-1)^{n-1}(C^{-\frac{N}{2\pi i}}\Phi^{-1}Q^{-1})_{j,0} &=\sum_{l=0}^{n-1-j} \frac{d^{l+j}}{dz^{l+j}}|_{z=0} \prod_{m=1}^{n-1} \left(z-m+\frac{n}{2}\right) \sum_{p\in\pi_{l}} \frac{1}{M(p)} c_{p}^{-}\frac{\zeta(p)}{(2\pi i)^{p}}. \end{align*} The last part of the theorem follows from the fact that $M_{0}$ has integer coefficients and \begin{align*} \frac{\zeta(2p)}{(2\pi i)^{2p}} = -\frac{B_{2p}}{2(2p)!}, \end{align*} where $B_{2p}$ is the $2p$-th Bernoulli number\index{Bernoulli number}. \begin{flushright}$\square$\end{flushright} \begin{remark} Notice that the above theorem produces a practical method to determine monodromy matrices. Given $\alpha_{1},\ldots,\alpha_{n}\in\mathbb{Q}\cap (0,1)$ one has to write the corresponding polynomial in the form (\ref{polynoomvorm}) and then simply calculate the coefficients $c_{j}^{\pm},c_{j}$. \end{remark} \begin{remark} For the last part of the main theorem the $\alpha_{k}$ need not actually lie in $(0,1)$, as can be seen from the multiplicative property of the gamma function and the homomorphism property of the $\Phi$ matrix. \end{remark} We point out that in the Frobenius basis $f_{0},f_{1},\ldots,f_{n-1}$ the monodromy matrices can be obtained by a trivial transformation, namely inverting the conjugation by $C^{\frac{N}{2\pi i}}$. Hence the entries are in $\mathbb{Q}(\log(C)(2 \pi i)^{-1},\zeta(3) (2\pi i)^{-3},\zeta(5) (2\pi i)^{-5},\ldots,\zeta(m)(2\pi i)^{-m})$, with $m$ the largest odd number below $n$. \subsection{Applications of the main theorem} As one can check, the case $n=2$ yields \begin{align*} M_{1} = \left(\begin{array}{cc} 1 & 0\\ -\frac{a_{1}\cdots a_{r}}{b_{1}\cdots b_{r}} & 1\end{array}\right). \end{align*} The results are summarized in the following table.
\begin{align*} \begin{array}{ l || c } \text{Case} & \frac{a_{1}\cdots a_{r}}{b_{1}\cdots b_{r}} \\ \hline (z+1)^{2} = \frac{(z^{2}-1)^{2}}{(z-1)^{2}} & 4 \\ (z^{2}+z+1) = \frac{z^{3}-1}{z-1} & 3 \\ (z^{2}+1) = \frac{z^{4}-1}{z^{2}-1} & 2 \\ (z^{2}-z+1) = \frac{(z^{6}-1)(z-1)}{(z^{3}-1)(z^{2}-1)} & 1 \end{array} \end{align*} \noindent Let us look at the case $n=3$. Using the identity $c_{2}^{-}+3=c_{2}^{+}$ we obtain the matrix \begin{align*} M_{1} = \left(\begin{array}{ccc} 1+b d & 0 & -b^{2} d\\ 0 & 1 & 0\\ -d & 0 &1+ b d\end{array}\right) \end{align*} where \begin{align*} b = \frac{c_{2}^{+}}{24}\text{ and }d=\frac{a_{1}\cdots a_{r}}{b_{1}\cdots b_{r}}. \end{align*} All the corresponding cases are worked out in the following table.\\ \begin{align*} \begin{array}{ l || c | c | c } \text{Case} & C & 24 b & d/2 \\ \hline (z+1)^{3} = \frac{(z^{2}-1)^{3}}{(z-1)^{3}} & 64 & -3 & 4\\ (z^{2}+z+1)(z+1) = \frac{(z^{2}-1)(z^{3}-1)}{(z-1)^{2}} & 108 & -4 & 3\\ (z^{2}+1)(z+1) = \frac{z^{4}-1}{z-1} & 256 & -6 & 2\\ (z^{2}-z+1)(z+1) = \frac{z^{6}-1}{z^{3}-1} & 1728 & -12 & 1 \end{array} \end{align*} From this table we read off that $bd=-1$ in all cases and we deduce the even nicer form \begin{align*} M_{1} = \left(\begin{array}{ccc} 0 & 0 & -1/d \\ 0 & 1 & 0\\ -d & 0 & 0\end{array}\right). \end{align*} \noindent Let us apply the theorem to the case $n=4$. This case corresponds to the Picard-Fuchs equation, given by \begin{align} [\theta^{4}-C z(\theta-A)(\theta+A-1)(\theta-B)(\theta+B-1)]f = 0. \end{align} These differential equations arise from Calabi-Yau threefolds\index{Calabi-Yau threefold} (see \cite{CYY}).
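Before working out the $n=4$ cases, we note that the $n=3$ table above can be reproduced directly from the definitions of $C$, $d$ and $c_{2}^{+}$. The following sketch (in Python; the exponents $a_{j},b_{j}$ are read off from the factorisations in the table) also confirms the observation $bd=-1$ made above.

```python
from fractions import Fraction
from math import prod

# Exponents (a's and b's) read off from the factorisations in the n = 3 table.
cases = {
    "(z+1)^3":        ([2, 2, 2], [1, 1, 1]),
    "(z^2+z+1)(z+1)": ([2, 3], [1, 1]),
    "(z^2+1)(z+1)":   ([4], [1]),
    "(z^2-z+1)(z+1)": ([6], [3]),
}

n = 3
table = {}
for name, (a, b) in cases.items():
    C = prod(x**x for x in a) // prod(x**x for x in b)   # C = prod a^a / prod b^b
    d = prod(a) // prod(b)                               # d = prod a / prod b
    # c_2^+ = (1/2)(n - sum(a^2) + sum(b^2)), and b = c_2^+ / 24 as in the text
    c2p = Fraction(n - sum(x**2 for x in a) + sum(x**2 for x in b), 2)
    assert (c2p / 24) * d == -1                          # the observation b d = -1
    table[name] = (C, int(c2p), d // 2)                  # columns C, 24 b, d/2

for name, row in table.items():
    print(name, row)
```

Each printed row reproduces one line of the table, in the column order $C$, $24b$, $d/2$.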
Let us apply the main theorem. Using that $c_{2}^{-}+4 = c_{2}^{+}$ and $c_{3}^{-} = -c_{3}^{+}$, we can write $M_{1}$ as \begin{align*} \left(\begin{array}{cccc} 1+a & 0 & ab/d & a^{2}/d\\ -b & 1 & -b^{2}/d & -ab/d\\ 0 & 0 & 1 & 0\\ -d & 0 & -b & 1-a \end{array}\right) \end{align*} when we identify \begin{align*} d &= \frac{a_{1}\cdots a_{r}}{b_{1}\cdots b_{r}}, a = d c_{3}^{+}\frac{\zeta(3)}{(2\pi i)^{3}}\text{ and }b = -\frac{d c_{2}^{+}}{24}. \end{align*} The authors of \cite{CYY} point out that the entries of $M_{1/C}$ contain geometric invariants belonging to the corresponding Calabi-Yau threefolds. The 14 corresponding cases are worked out in the following table. \begin{align*} \begin{array}{ l | r || c | c | c | c } \text{Case} & \text{Polynomial} & C & d & 24 b & (2\pi i)^{3} a/\zeta(3) \\ \hline (1/5,2/5,3/5,4/5) & \frac{X^{5}-1}{X-1} & 3125 & 5 & 50 & -200\\ (1/10,3/10,7/10,9/10) & \frac{(X-1)(X^{10}-1)}{(X^{2}-1)(X^{5}-1)} & 800 000 & 1 & 34 & -288\\ (1/2,1/2,1/2,1/2) & \frac{(X^{2}-1)^{4}}{(X-1)^{4}} & 256 & 16 & 64 & -128\\ (1/3,1/3,2/3,2/3) & \frac{(X^{3}-1)^{2}}{(X-1)^{2}} & 729 & 9 & 54 & -144\\ (1/3,1/2,1/2,2/3) & \frac{(X^{2}-1)^{2}(X^{3}-1)}{(X-1)^{3}} & 432 & 12 & 60 & -144\\ (1/4,1/2,1/2,3/4) & \frac{(X^{2}-1)(X^{4}-1)}{(X-1)^{2}} & 1024 & 8 & 56 & -176\\ (1/8,3/8,5/8,7/8) & \frac{X^{8}-1}{X^{4}-1} & 65 536 & 2 & 44 & -296\\ (1/6,1/3,2/3,5/6) & \frac{X^{6}-1}{X^{2}-1} & 11664 & 3 & 42 & -204\\ (1/12,5/12,7/12,11/12) & \frac{(X^{2}-1)(X^{12}-1)}{(X^{4}-1)(X^{6}-1)} & 2 985 984 & 1 & 46 & -484\\ (1/4,1/4,3/4,3/4) & \frac{(X^{4}-1)^{2}}{(X^{2}-1)^{2}} & 4096 & 4 & 40 & -144\\ (1/4,1/3,2/3,3/4) & \frac{(X^{3}-1)(X^{4}-1)}{(X-1)(X^{2}-1)} & 1728 & 6 & 48 & -156\\ (1/6,1/4,3/4,5/6) & \frac{(X-1)(X^{4}-1)(X^{6}-1)}{(X^{2}-1)^{2} (X^{3}-1)} & 27 648 & 2 & 32 & -156\\ (1/6,1/6,5/6,5/6) & \frac{(X-1)^{2} (X^{6}-1)^{2}}{(X^{2}-1)^{2} (X^{3}-1)^{2}} & 186 624 & 1 & 22 & -120\\ (1/6,1/2,1/2,5/6) & \frac{(X^{2}-1)(X^{6}-1)}{(X-1)(X^{3}-1)} & 6912 & 4 & 52 &
-256 \end{array} \end{align*} This is in agreement with the results of \cite{CYY}.
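As a further sanity check (again ours, not part of the text), the $d$ column of the table can be recomputed from the exponents appearing in each cyclotomic-quotient polynomial:

```python
from fractions import Fraction

# (numerator exponents, denominator exponents, d as listed in the table)
cases = {
    "(1/5,2/5,3/5,4/5)":      ([5], [1], 5),
    "(1/10,3/10,7/10,9/10)":  ([1, 10], [2, 5], 1),
    "(1/2,1/2,1/2,1/2)":      ([2, 2, 2, 2], [1, 1, 1, 1], 16),
    "(1/3,1/3,2/3,2/3)":      ([3, 3], [1, 1], 9),
    "(1/3,1/2,1/2,2/3)":      ([2, 2, 3], [1, 1, 1], 12),
    "(1/4,1/2,1/2,3/4)":      ([2, 4], [1, 1], 8),
    "(1/8,3/8,5/8,7/8)":      ([8], [4], 2),
    "(1/6,1/3,2/3,5/6)":      ([6], [2], 3),
    "(1/12,5/12,7/12,11/12)": ([2, 12], [4, 6], 1),
    "(1/4,1/4,3/4,3/4)":      ([4, 4], [2, 2], 4),
    "(1/4,1/3,2/3,3/4)":      ([3, 4], [1, 2], 6),
    "(1/6,1/4,3/4,5/6)":      ([1, 4, 6], [2, 2, 3], 2),
    "(1/6,1/6,5/6,5/6)":      ([1, 1, 6, 6], [2, 2, 3, 3], 1),
    "(1/6,1/2,1/2,5/6)":      ([2, 6], [1, 3], 4),
}
for name, (num, den, d_table) in cases.items():
    d = Fraction(1)
    for a in num:
        d *= a
    for b in den:
        d /= b
    assert d == d_table, name  # d = (a1*...*ar)/(b1*...*br) matches the table
print("d matches the table in all 14 cases")
```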
\section{Introduction} \label{sec:intro} Pulsar Wind Nebulae (PWNe) are a class of Supernova Remnants (SNRs) whose broad-band emission is mostly non-thermal and is powered by a fast-spinning magnetised neutron star, usually also observed as a pulsar. Most of the rotational energy lost by such a star goes into the acceleration of a highly relativistic magnetised wind, with a particle content dominated by electron-positron pairs. In the case of young pulsars, the wind is surrounded by the debris of the supernova explosion, which expands at a much lower (non-relativistic) velocity. When the wind first impacts on the SNR, a reverse shock is launched towards the pulsar. At this shock (the termination shock, TS hereafter) the wind is slowed down and its bulk energy is efficiently converted into an outflow of relativistic particles that are then responsible for the nebular emission. Before discussing in more detail the current status of our understanding of such systems, let me briefly review the main reasons why these sources are of interest for several different fields of physics and astrophysics. Pulsars are thought to be the primary leptonic antimatter factories in the Galaxy, but the exact amount of pair production in their magnetospheres, the so-called pair multiplicity $\kappa$, is not well established. PWNe shine with luminosities that are a substantial fraction of the pulsar total spin-down power, $\dot E$, much larger than what comes out in the form of pulsed emission. Therefore, modelling of these nebulae provides the tightest constraints on pulsar multiplicities \cite{arons12,nicco11}, at a time when assessing the role of pulsars as positron producers is particularly topical: PAMELA \cite{pamela} and AMS-02 \cite{ams02} have measured an anomalous rise of the $e^+/e^-$ ratio that has no obvious explanation if the $e^+$ come only from cosmic-ray interactions in the Galaxy. 
Possible interpretations of this anomaly involve dark-matter related processes, but their credibility can only be assessed after subtraction of all relevant astrophysical backgrounds, of which pulsars are likely the main contributors. Understanding PWNe is also important for cosmic-ray physics in other respects. In these sources we observe the workings of the most relativistic shocks in Nature, an extreme version of the shocks that are usually invoked as particle accelerators to the highest energies observed, $\approx 10^{20}$ eV. PeV energies and acceleration efficiencies of tens of per cent are inferred in the class prototype, the Crab Nebula, and they are reached in an environment that is highly hostile to particle acceleration according to standard theory. In fact, the most commonly invoked acceleration process in astrophysics, the $1^{\rm st}$ order Fermi mechanism, or diffusive shock acceleration (DSA), does not work at highly relativistic shocks unless the magnetic field is very weak \cite{sironi09}. A few alternative acceleration mechanisms have been proposed, but the viability of any of them depends on the flow magnetisation and composition. To understand particle acceleration it then becomes essential to unveil these two properties of the pulsar wind, which can only be assessed through detailed dynamical and radiation modelling of these systems. In the following I will discuss the current status of this kind of studies and their implications for particle acceleration mechanisms. \section{The pulsar wind magnetisation} \label{sec:sigma} The best available model of PWN emission and dynamics is based on a magnetohydrodynamical (MHD) description of the pulsar outflow. The general idea is that the fast rotation of a magnetised neutron star induces an electric field that is strong enough to extract charges from the star. Electrons are certainly extracted. 
Whether positive charges (protons or, more generally, ions) are also extracted is an open question. Charge extraction alone is insufficient to guarantee complete screening of the electric field parallel to the magnetic field everywhere in the star's surroundings, so the extracted electrons are accelerated in gaps of unscreened field and become energetic enough to produce pairs upon interaction with ambient photons, either thermal radiation from the star or photons self-produced while moving along curved magnetic field lines. The pair-production process is not fully understood, but the general consensus is that each electron extracted from the star generates a number $\kappa$ of pairs, with $10^3<\kappa<10^7$ in the case of young energetic pulsars \cite{arons12}. These pairs then become part of the cold relativistic outflow that is referred to as the pulsar wind, and end up in the PWN. Indeed, a direct estimate of the pulsar multiplicity $\kappa$ can be obtained from radiation modelling of PWNe: as we will see, this exercise is not free of ambiguities, but estimates generally agree with the theoretical limits above. One important thing to notice is that if ions are also extracted from the star, they too will end up as part of the wind. In addition, if $\kappa <m_i/m_e$, with $m_i$ and $m_e$ the ion and electron masses respectively, the hadronic component of the wind carries more energy than the leptonic one. The pulsar wind is highly magnetised at its origin, but is thought to become matter dominated before approaching the termination shock: the ratio between Poynting flux and particle kinetic energy, $\sigma$, changes from $\sigma \gg 1$ to $\sigma\lesssim 1$. Again, the process is not well understood. At large distances from the pulsar, the wind is estimated to be cold and to have a bulk Lorentz factor $\Gamma$ in the range $10^4 -10^6$. 
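As an aside, the pair injection rates implied by such multiplicities are easy to estimate at the order-of-magnitude level. A minimal sketch (ours, not from the text), assuming the standard Goldreich-Julian particle rate $\dot N_{\rm GJ}\sim c\,\Phi/e$ with open-field-line voltage $\Phi\sim(\dot E/c)^{1/2}$ and Crab-like numbers:

```python
import math

# Order-of-magnitude sketch in Gaussian units; Crab-like numbers are assumptions.
e = 4.803e-10       # electron charge, esu
c = 2.998e10        # speed of light, cm/s
Edot = 5.0e38       # Crab-like spin-down power, erg/s

Phi = math.sqrt(Edot / c)   # open-field-line voltage, statvolt
N_GJ = c * Phi / e          # Goldreich-Julian rate of primaries, ~1e34 per second

# pair injection rate for the quoted range of multiplicities
for kappa in (1e3, 1e7):
    print(f"kappa = {kappa:.0e}: pair injection rate ~ {kappa * N_GJ:.1e} /s")
```

With these numbers $\dot N_{\rm GJ}\sim 10^{34}\,{\rm s^{-1}}$, so the quoted range $10^3<\kappa<10^7$ spans pair injection rates of roughly $10^{37}$ to $10^{41}$ pairs per second.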
Its structure, as inferred from analytical modelling and numerical simulations \cite{spit06}, is thought to be well approximated by a split-monopole outflow: the wind expansion is radial, the magnetic field is predominantly azimuthal and decreases with distance as $1/r$, and the energy flux is anisotropic, maximal in the equatorial plane of pulsar rotation and varying as $\sin^2\theta$, with $\theta$ the colatitude. In a region of angular extent equal to the inclination angle between the pulsar rotation and magnetic axes the wind is striped, meaning that regions of different field polarity alternate, separated by a current sheet. This is a place where magnetic reconnection is likely to take place \cite{klprev}. Since the shock position is determined by the balance between the wind ram pressure and the pressure of the downstream plasma, which is relatively uniform, the anisotropy of the wind energy flux causes the termination shock surface to be oblate: closer to the star along the polar axis and at larger distance in the equatorial plane. This anisotropic energy outflow is at the origin \cite{lyub02} of the {\it jet-torus} morphology observed in the X-ray emission of the Crab Nebula and in a number of other nebulae. The interaction of the anisotropic pulsar wind with the surrounding SNR has been extensively studied by means of numerical simulations within the framework of 2D \cite{kl04,ldz04} (and more recently also 3D \cite{porth14}) relativistic MHD. These studies have provided important insights into the wind magnetisation at the shock front, which will be briefly summarised here. Most of the numerical work has focused on the Crab Nebula, which is one of the best studied objects in the Universe, with data available over 20 decades in frequency and high-resolution images in the radio, optical and X-rays. 
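The ram-pressure argument for the shock position can be made quantitative with a rough estimate (ours, not from the text; Crab-like numbers assumed, and the nebular pressure crudely approximated by the magnetic pressure of the field strength quoted later):

```python
import math

# Hedged order-of-magnitude estimate of the termination shock radius.
Edot = 5.0e38            # erg/s, Crab-like spin-down power (assumed)
c = 2.998e10             # cm/s
B = 150e-6               # G, nebular field (assumed, see text)
P_neb = B**2 / (8 * math.pi)   # crude nebular pressure ~ magnetic pressure

# isotropic wind ram pressure Edot/(4 pi r^2 c) = P_neb  =>  r_TS
r_TS = math.sqrt(Edot / (4 * math.pi * c * P_neb))
print(f"r_TS ~ {r_TS:.1e} cm ~ {r_TS / 3.086e18:.2f} pc")
```

This gives a few tenths of a parsec, within a factor of a few of the observed $\sim 0.1$ pc for the Crab, which is reasonable given how crudely the downstream pressure is estimated here.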
The general assumption of these numerical studies is that particle acceleration occurs at the TS: accelerated particles are then advected with the MHD flow from their injection site into the nebula, meanwhile suffering adiabatic and radiative losses. The latter are mainly due to synchrotron emission, but Inverse Compton Scattering (ICS) must also be taken into account. The synchrotron emission spectrum cuts off at a photon energy $\epsilon_{\rm sync}\approx 100$ MeV; higher energy photons, detected from the Crab up to $\epsilon_\gamma\approx 10$ TeV, are due to ICS: the same particles responsible for the lower energy emission upscatter the synchrotron photons and external radiation fields (CMB and infrared background) to higher energies. The different dependence on magnetic field of the two contributions allows one to place tight constraints on the nebular magnetic field strength by comparing models with data. As far as the magnetisation at the shock is concerned, a lower limit can be set based on the very existence of an X-ray jet. Within the MHD framework, the polar jet, showing flow speeds of order 0.5 c, results from the hoop stress associated with the toroidal magnetic field. Even if $\sigma<1$ at the termination shock, the magnetic field builds up in the immediate post-shock region and, unless $\sigma$ is too small, it reaches energy equipartition with the particles. When this happens the flow is diverted towards the axis and the jet is formed. Indeed, 2D simulations show that in the case of the Crab Nebula a polar jet is formed as soon as $\langle \sigma\rangle>0.01$, averaged in latitude \cite{ldz04}. As we will discuss later, this value of $\sigma$ is such that DSA cannot be at work along most of the shock front. For $\sigma\approx 0.025$ the morphology of the simulated nebula is very close to the observed one. 
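The $\approx 100$ MeV synchrotron cutoff mentioned above is close to the classical radiation-reaction ("burn-off") limit, a textbook estimate (not derived in the text) obtained by balancing the fastest plausible acceleration rate, $eBc$, against synchrotron losses, which yields a photon energy independent of the field strength:

```python
# Burn-off limit: eps_max ~ (9/4) m_e c^2 / alpha, independent of B.
# Balancing eBc against P_sync = (2/3)(e^4/m^2 c^4) c gamma^2 B^2 gives
# gamma_max^2 = (3/2) m^2 c^4 / (e^3 B); inserting into the characteristic
# synchrotron energy (3/2) hbar gamma^2 eB/(m c) cancels B.
alpha = 1.0 / 137.036    # fine-structure constant
mec2_MeV = 0.511         # electron rest energy, MeV

eps_max = (9.0 / 4.0) * mec2_MeV / alpha
print(f"eps_max ~ {eps_max:.0f} MeV")
```

The result is $\sim 160$ MeV; with a sub-equipartition accelerating electric field the limit moves down, consistent with the observed $\approx 100$ MeV cutoff.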
However, for such a value of $\sigma$ the above-mentioned comparison between computed synchrotron and ICS emission and the data shows a discrepancy \cite{ldz06,volpi08}: the ICS emission is overestimated, indicating that the magnetic field in the simulated nebula is lower than in reality. Attempts at solving this discrepancy by increasing $\sigma$ result in emission maps with polar jets much brighter than the equatorial features, at odds with the observed morphology. A possible solution to this puzzle has recently been suggested by the results of the first 3D MHD simulations performed for the Crab Nebula: the exceedingly strong jets are likely an artefact of axisymmetry. In 3D, the development of kink instabilities (which cannot develop in 2D) converts a large fraction of the magnetic energy in the nebula into poloidal field lines, much reducing the magnetic tension, as predicted by \cite{beg98}. As a result, for a given value of the magnetisation at the shock, the polar jet is much weaker in 3D than in 2D, so that one can hope to reproduce both the spectrum and the high energy morphology with $\sigma\approx 1$. The 3D simulations, however, have so far been run only for about 100 years of evolution \cite{porth14}. Given that magnetic energy is shown to be effectively dissipated (due to numerical resistivity), it is not clear that the average field strength will be as large as inferred from observations, $B\approx 150\ \mu$G, when the system age reaches 1000 yr, the Crab Nebula's actual age. Further investigation is clearly needed, but one conclusion that seems safe to draw from these preliminary results is that the actual value of the magnetisation at the shock can only be larger than what is inferred from 2D simulations, $\langle\sigma\rangle>0.03$. This conclusion has important implications for particle acceleration. 
\section{What is the process of particle acceleration?} \label{sec:acc} I already mentioned that the pulsar wind TS is in theory a hostile environment for particle acceleration, and yet it accelerates particles with very high efficiency and up to very high energies. The resulting particle spectrum is a broken power-law, $N(E)\propto E^{-\gamma_e}$, with $\gamma_e\approx1.5$ at low energies and $\gamma_e\approx 2.2$ at high energies. The slope of the high energy spectrum is what one expects from DSA at a relativistic shock, but this process only works if the magnetisation at the shock is sufficiently low \cite{sironi09}, $\sigma\lesssim10^{-3}$, whereas we just saw that global MHD simulations of the Crab Nebula lead to infer $\langle \sigma \rangle>0.03$. According to current studies, only a few per cent of the wind energy flows through sectors of the TS where $\sigma \lesssim 10^{-3}$ \cite{hepro4}, which makes DSA unlikely as the main acceleration process. Alternative proposals that have received some attention are: 1) driven magnetic reconnection at the TS; 2) resonant absorption of ion-cyclotron waves in an ion-doped plasma. The viability of both processes depends on the wind composition and pulsar multiplicity. Driven reconnection has recently been investigated \cite{sironi11} in a set-up appropriate to describe the Crab Nebula. The idea is that if the pulsar wind keeps its stripes of opposite magnetic field polarity all the way to the TS, compression there causes the field to reconnect. A number of reconnection islands develop in the flow, where unscreened electric fields can effectively accelerate particles: the slope and the extension in energy of the resulting particle spectrum depend on the flow magnetisation and on the ratio between the wavelength of the stripes and the particle Larmor radius. 
The latter ratio can be expressed in terms of $\kappa$, and the final result is that in order to reproduce the spectrum of radio particles one would need $\sigma>30$ and $\kappa > 10^7$. Even leaving aside the fact that such a high value of $\kappa$ is difficult to explain on theoretical grounds, a wind with such a large number of pairs is actually likely to reconnect before reaching the TS (see \cite{hepro4} and references therein), causing $\sigma$ to decrease below the required minimum. The requirements on $\kappa$ would be less severe if the process took place at high latitudes, since there the shock is closer to the pulsar and the particle density scales as $r^{-2}$. However, at high latitudes one does not expect any stripes. An alternative proposal, which works for any $\sigma$ but requires that most of the energy of the pulsar wind be carried by ions, is resonant absorption by the pairs of the cyclotron radiation emitted by such ions. The idea is that at the crossing of the TS, the sudden enhancement of the magnetic field sets the plasma into gyration. The pairs quickly thermalise through emission and absorption of cyclotron waves, but ions with the same initial Lorentz factor (the wind is cold, so that all particles were moving with the same bulk Lorentz factor) react on time-scales that are longer by a factor $m_i/m_e$. If the wind is sufficiently cold ($\delta u/u < m_e/m_i$, with $u$ the flow four-velocity) before the TS, the ions emit waves with large power not only at the fundamental frequency of their gyration, but up to a frequency $m_i/m_e$ times higher, which can then be resonantly absorbed by the pairs. The resulting acceleration efficiency $\epsilon_{\rm acc}$, spectral slope $\gamma_e$ and maximum energy $E_{\rm max}$ all depend on the fraction of energy carried by the ions, $U_i/U_{\rm tot}$. 
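The dependence on composition can be made explicit with a small sketch (ours, under the usual assumptions: one ion per primary electron, $\kappa$ pairs per primary, and a common bulk Lorentz factor, so that $U_i/U_\pm = m_i/(\kappa\, m_e)$):

```python
# Wind energy budget vs. pair multiplicity (illustrative sketch; assumes
# n_i = n_primary, n_pairs = kappa * n_primary, common Lorentz factor).
m_i_over_m_e = 1836.15   # proton-to-electron mass ratio

def ion_energy_fraction(kappa):
    """U_i / (U_i + U_pairs) for a cold wind with a common bulk Lorentz factor."""
    u_ratio = m_i_over_m_e / kappa   # U_i / U_pairs = n_i m_i / (n_pm m_e)
    return u_ratio / (1.0 + u_ratio)

for kappa in (1e2, 1836, 1e4, 1e6):
    frac = ion_energy_fraction(kappa)
    print(f"kappa = {kappa:.0e}: ions carry {frac:.1%} of the wind energy")
```

Ions dominate the energy budget ($U_i/U_{\rm tot} > 1/2$) exactly when $\kappa < m_i/m_e \approx 1836$, which is the condition stated in the text.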
PIC simulations show a wide variety of outcomes: $\epsilon_{\rm acc}=$ a few (30) \%, $\gamma_e > 3$ ($<2$), $E_{\rm max}/(m_i \Gamma c^2)=0.2$ (0.8) for $U_i/U_{\rm tot}=0.6$ (0.8) \cite{amato06}. Once again the pulsar multiplicity plays a crucial role: since ions can only come from the star, they are at most as numerous as the primary electrons. Therefore, if $\kappa>m_i/m_e$, then $n_i m_i \Gamma c^2<n_\pm m_e \Gamma c^2$ and ions cannot dominate the wind energy budget. It is interesting to notice that if ions are present, they would be PeV particles \cite{hepro4}. \section{The pulsar wind composition} \label{sec:compo} The most direct way of estimating the pulsar multiplicity is from modelling of the nebular emission: when this is done, one finds that in the Crab Nebula the radio emission requires $\kappa \approx 10^6$, while the X-ray emission implies $\kappa \approx 10^4$. While X-ray emitting particles have very short synchrotron lifetimes and must be tracing the current pair injection rate of the pulsar, radio emitting particles suffer little losses and might in principle be fossil \cite{atah96}. Clarifying their origin is fundamental to obtaining a correct estimate of $\kappa$ and constraining the process of particle acceleration. A recent study has taken into account three different scenarios for the injection of radio emitting particles and tried to discriminate among them based on the emission morphology \cite{olmi14}. The result is that a scenario in which these particles are injected only for a short time after the SN explosion, with no further reacceleration, can actually be excluded. On the other hand, it does not seem possible to exclude either of the two other scenarios: A) ongoing injection at the TS followed by advection with the flow; B) uniform distribution in the nebula (as due to continuous reacceleration by spatially distributed turbulence in the body of the nebula). 
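The lifetime argument can be checked with a standard back-of-the-envelope estimate (our own sketch, assuming $B\approx 150\,\mu$G and the monochromatic approximation for the synchrotron spectrum):

```python
import math

# Synchrotron cooling times in a 150 microGauss field (assumed Crab-like value).
B = 150e-6                                   # G
m_e, c = 9.109e-28, 2.998e10                 # g, cm/s
e, sigma_T = 4.803e-10, 6.652e-25            # esu, cm^2

def lifetime_yr(nu_obs):
    """Cooling time of electrons whose synchrotron emission peaks near nu_obs [Hz]."""
    # invert nu ~ (3/4pi) gamma^2 eB/(m_e c) for gamma
    gamma = math.sqrt(4 * math.pi * m_e * c * nu_obs / (3 * e * B))
    # standard synchrotron cooling time t = 6 pi m_e c / (sigma_T B^2 gamma)
    return 6 * math.pi * m_e * c / (sigma_T * B**2 * gamma) / 3.15e7

print(f"radio (1 GHz): t ~ {lifetime_yr(1e9):.1e} yr")    # >> Crab age: can be fossil
print(f"X-ray (1 keV): t ~ {lifetime_yr(2.4e17):.1e} yr") # << Crab age: current injection
```

With these numbers the radio-emitting electrons live far longer than the $\sim 1000$ yr age of the Crab, while the X-ray emitting ones cool in decades, which is why only the latter must trace the present injection rate.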
When synchrotron emission is computed on top of the MHD simulations, these two different hypotheses on the spatial distribution of radio particles produce very similar results: the emission maps are so similar that comparison with observations does not allow one to discriminate between the two cases. In addition, time variability in the inner region is found in both cases, with very similar features. This is an important result, because the existence of variable radio features in the inner nebula, the {\it radio wisps}, could naively be taken as evidence supporting ongoing injection of radio particles at the TS. In fact, wisp variability arises in simulated maps from the properties of the MHD flow: the moving bright features, seen at radio, optical and X-ray frequencies, are associated, in simulated maps, with locally enhanced magnetic field and Doppler boosting. However, there is more to this. Multi-wavelength observation campaigns of the Crab Nebula have shown that the {\it wisps} are not coincident at the different wavelengths \cite{schwei13}. Within the MHD description of the flow, such an effect can only result from differences in the acceleration site of particles of different energies. If synchrotron emission maps are computed at the various frequencies, after assuming different acceleration sites for particles of different energies, it is possible to gain some insight from multi-wavelength variability. A recent effort in this direction \cite{olmi15} shows that, if the MHD description holds, observations can only be explained if radio emitting particles are uniform in the nebula or accelerated in a wide sector of the shock front, while the higher energies are reached only within a smaller angular sector around the equatorial plane. 
The general picture that emerges is that the conditions for acceleration of radio particles must be met over most of the shock surface, while X-ray emitting particles might require more stringent conditions that are only reached around the equator. It is possible, for example, that the latter can indeed only be accelerated where the magnetisation is low enough, but this would have to occur for a larger fraction of the flow than currently thought. \section{Summary and Conclusions} \label{sec:concl} Unveiling the mechanism of particle acceleration in PWNe is an especially challenging and fascinating problem, with implications for our general understanding of how the highest energies are reached in Nature. The answer to this question requires knowledge of plasma parameters that must be inferred from detailed modelling of the spectral and morphological properties of the emission, which so far has only been done for the Crab Nebula. Current 2D MHD models, very successful at explaining the high energy morphology of the Crab Nebula, require too low a $\sigma$ and lead to a nebular magnetic field that is too low to account for the combined synchrotron and ICS spectrum. The first 3D studies show that kink instabilities might play an important role in tangling the field, reducing the hoop stress and making it possible to reproduce the morphology with a larger $\sigma$. Current 3D simulations, however, have too short a duration and probably also excessive dissipation. Further investigation is needed, but a conclusion that seems unavoidable is that along most of the TS $\sigma$ is too high for DSA to operate. The two alternative mechanisms that have been proposed have opposite requirements on the pulsar multiplicity: driven magnetic reconnection requires $\kappa$ to be larger than currently believed, while ion-cyclotron absorption can only work if ions are extracted from the star and $\kappa<m_i/m_e$. 
The main unknown in determining the value of $\kappa$ is the origin of the low energy (radio emitting) particles, which has so far proven very difficult to assess based on emission morphology and time variability: the only scenario that could be excluded is that of fossil particles with no reacceleration. More work is needed, including detailed studies of objects other than the Crab Nebula. \acknowledgments{Attendance of this conference has been made possible by the grant PRIN-INAF 2012.}
\section{Introduction} What graphic should I use to make a new startup more eye-catching than Instagram? Which image caption will help an under-reported piece of shocking news spread? Should I put an image of a cat in my YouTube video if I want millions of views? These questions plague professionals and regular internet users on a daily basis. The impact of advertisements, marketing strategies, political campaigns, non-profit organizations, social causes, authors and photographers, to name a few, hinges on their ability to reach and be noticed by a large number of people. Understanding what makes content viral has thus been studied extensively by marketing researchers~\cite{berger2011drives,berger2011arousal,chen2012and,berger2013contagious}. \begin{figure}[t] \begin{center} \subfigure[Example viral images.]{ \includegraphics[scale=0.18]{Viral_exemplars.jpg} \label{fig:viral_exem} } \subfigure[Example non-viral images.]{ \includegraphics[scale=0.18]{Non_viral_exemplars.jpg} \label{fig:non_viral_exem} } \end{center} \caption{ Top: Images with high viral scores in our dataset depict internet ``celebrity'' memes, e.g. ``Grumpy Cat''; Bottom: Images with low viral scores in our dataset. The picture of Peter Higgs (Higgs Boson) was popular, but was not reposted multiple times and is hence not considered viral. \vspace{-20pt} } \label{fig:Viral_nonviral_exem} \vspace{-20pt} \end{figure} Many factors, such as the time of day and day of the week when the image was uploaded, the title used with the image, etc., affect whether an image goes viral or not~\cite{Leskovec}. To what extent is virality dependent on these external factors, and how much of the virality depends on the image content itself? How well can state-of-the-art computer vision image features and humans predict virality? Which visual attributes correlate with image virality? In this paper, we address these questions. We introduce three image databases collected from Reddit and a virality score. 
Our work identifies several interesting directions for deeper investigation where computer vision techniques can be brought to bear on this complex problem of understanding and predicting image virality. \vspace{-5pt} \section{Related Work} \vspace{-5pt} Most existing works~\cite{leskovec2007dynamics,barabasi2005origin,1309.2963} study how people share content on social networking sites \emph{after} it has been posted. They use the network dynamics soon after the content has been posted to detect an oncoming snowballing effect and predict whether the content will go viral or not. We argue that predicting virality after the content has already been posted is too late for some applications. It is not feasible for graphic designers to ``try out'' various designs to see if they become viral or not. In this paper, we are interested in understanding the relations between the content itself (even before it is posted online) and its potential to be viral\footnote{In fact, if the machine understands what makes an image viral, one could use ``machine teaching''~\cite{JohnsCVPR2015} to train humans (e.g., novice graphic designers) what viral images look like.}. There exist several qualitative theories of the kinds of content that are likely to go viral~\cite{berger2011arousal,berger2013contagious}. Only a few works have quantitatively analyzed content, for instance Tweets~\cite{suh2010want} and New York Times articles~\cite{berger2012makes}, to predict their virality. However, in spite of visual media being a large part of our online experience, the connections between its content and its virality have not been analyzed. This forms the focus of our work. Virality of text data such as Tweets has been studied in \cite{nagarajan2010qualitative,suh2010want}. The diffusion properties were found to be dependent on their content and on features like embedded URLs and hashtags. Generally, the diffusion of content over networks has been studied more than its causes~\cite{1309.2963}. 
The work of Leskovec~\etal \cite{leskovec2007dynamics} models the propagation of recommendations over a network of individuals through a stochastic model, while Beutel~\etal \cite{beutel2012interacting} approach viral diffusion as an epidemiological problem. Qualitative theories about what makes people share content have been proposed in marketing research. Berger~\etal~\cite{berger2011arousal,berger2012makes,berger2013contagious}, for instance, postulate a set of factors (STEPPS) suggesting that social currency, triggers, emotion, public (publicity), practical value, and stories make people share. Analyzing viral images has received very little attention. Guerini~\etal~\cite{1309.3908} have provided correlations between low-level visual data and popularity on a non-anonymous social network (Google+), as well as links between emotion and virality~\cite{guerini2015deep}. Khosla~\etal~\cite{www14_khosla} recently studied image popularity measured as the number of views a photograph has on Flickr. However, both previous works~\cite{1309.3908,www14_khosla} have only extracted image statistics for natural photographs (Google+, Flickr). Images and the social interactions on Reddit are qualitatively different (\eg many Reddit images are edited). In this sense, the work dealing with images most similar to ours is the concurrently introduced viral \emph{meme} generator of Wang~\etal, which combines NLP and Computer Vision (low-level features)~\cite{WangWen:2015}. However, our work delves deep into the role of intrinsic visual content (such as high-level image attributes), the visual context surrounding an image, temporal context and textual context in image virality. Lakkaraju~\etal~\cite{Leskovec} analyzed the effects of time of day, day of the week, number of resubmissions, captions, category, etc. on the virality of an image on Reddit. However, they do not analyze the content of the image itself. 
Several works in computer vision have studied complex meta-phenomena (as opposed to understanding the ``literal'' content in the image such as objects, scenes, 3D layout, etc.). Isola~\etal~\cite{isola2011makes} found that some images are consistently more memorable than others across subjects and analyzed the image content that makes images memorable~\cite{isola2011understanding}. Image aesthetics was studied in~\cite{dhar2011high}, image emotion in~\cite{borth2013large}, and object recognition in art in~\cite{Crowley14a}. The importance of objects~\cite{spain2011}, attributes~\cite{turakhia2013attribute} as well as scenes~\cite{berg2012}, as defined by the likelihood that people mention them first in descriptions of the images, has also been studied. We study a distinct complex phenomenon: image virality. \vspace{-5pt} \section{Datasets and Ground Truth Virality}\label{Creating} \subsection{Virality Score} \vspace{-5pt} Reddit is the main engine of viral content around the world. Last month, it had over 170M unique visitors representing every single country. It has over 353K categories (subreddits) on an enormous variety of topics. We focus only on the image content. These images are sometimes rare photographs, or photos depicting comical or absurd situations, or Redditors sharing a personal emotional moment through the photo, or expressing their political or social views through the image, and so on. Each image can be upvoted or downvoted by a user. Viral content tends to be resubmitted multiple times as it spreads across the network of users\footnote{These statistics are available through Reddit's API.}. Viral images are thus the ones that have many upvotes, few downvotes, \emph{and} have been resubmitted often by different users. The latter is what differentiates virality from popularity. 
Previously, Guerini~\etal defined multiple virality metrics based on upvotes, shares or comments; Khosla~\etal defined popularity as the number of views; and Lakkaraju~\etal defined popularity as the number of upvotes. We found that the correlation between popularity as defined by the number of upvotes and virality that also accounts for resubmissions (detailed definition next) is -0.02. This quantitatively demonstrates the distinction between these two phenomena. See Fig.~\ref{fig:viral_pop_sample} for qualitative examples. The focus of this paper is to study image virality (as opposed to popularity). \begin{figure}[t] \centering \includegraphics[scale=0.225]{popularity_virality.jpg} \caption[]{Virality ($V_h$) vs. popularity ($A_h$) in images. All images have a similar popularity score, but their virality scores vary quite a bit. ``Grumpy Cat'' is more viral than Peter Higgs due to the number of resubmissions ($m_h$), which plays a critical role in our virality metric $V_h$. Clearly virality and popularity are two different concepts.} \label{fig:viral_pop_sample} \end{figure} Let the score $S_h^n$ be the difference between the number of upvotes and downvotes an image $h$ received at its $n^{th}$ resubmission to a category. Let $t$ be the time of the resubmission of the image and $c$ be the category (\textit{subreddit}) to which it was submitted. $\bar{S}_c^t$ is the average score of all submissions to category $c$ at time $t$. We define $A_h^n$ to be the ratio of the score of image $h$ at resubmission $n$ to the average score of all images posted to the category in that hour~\cite{Leskovec}. \vspace{-7pt} \begin{equation}\label{Jure_eq} A_h^n=\frac{S_h^n}{\bar{S}_{c}^t} \end{equation} We add an offset to $S_h^n$ so that the smallest score $\min_h \min_n S_h^n$ is 0. 
We define the overall (across all categories) virality score for image $h$ as \vspace{-7pt} \begin{equation}\label{Viral_rank} V_{h}=\max_n A_h^n \log\left(\frac{m_h}{\bar{m}}\right) \end{equation} where $m_h$ is the number of times image $h$ was resubmitted, and $\bar{m}$ is the average number of times any image has been resubmitted. If an image is resubmitted often, its virality score will be high. This ensures that images that became popular when they were posted, but were not reposted, are not considered to be viral (Fig.~\ref{fig:viral_pop_sample}). These often involve images where the content itself is less relevant, but current events draw attention to the image, such as a recent tragedy, a news flash, or a personal success story, e.g. ``Omg, I lost 40 pounds in 2 weeks''. On the other hand, images with multiple submissions seem more ``flexible'', working with different titles about multiple situations, and are, arguably, intrinsically viral. Examples are shown in Fig.~\ref{fig:viral_exem}. \subsection{Viral Images Dataset} \label{sec:10k_dataset} \vspace{-5pt} We use images from the Reddit data collected in~\cite{Leskovec} to create our dataset. Lakkaraju \etal~\cite{Leskovec} crawled 132k entries from Reddit over a period of 4 years. The entries often correspond to multiple submissions of the same image. We only include in our dataset images from categories (subreddits) that had at least 100 submissions, so that we have an accurate measure for $\bar{m}$ in Equation~\ref{Viral_rank}. We discarded animated GIFs. This left us with a total of 10078 images from 20 categories, with $\bar{m}=6.7$ submissions per image. We decided to use images from Reddit instead of other social networking sites such as Facebook and Google+~\cite{1309.3908} because users post images on Reddit \textit{``\textsc{4thelulz}''} (i.e. just for fun) rather than for personal social popularity \cite{berger2012makes}. 
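As an illustration (variable names and the toy data below are ours, not from the paper; the natural logarithm is assumed), Equations~\ref{Jure_eq} and \ref{Viral_rank} can be sketched in a few lines:

```python
import math
from collections import defaultdict

# Minimal sketch of the virality score (Eqs. 1-2).
# submissions: list of (image_id, category, hour, score), score = upvotes - downvotes.
def virality_scores(submissions):
    offset = -min(s for _, _, _, s in submissions)      # shift so the smallest score is 0
    cat_hour = defaultdict(list)
    for _, c, t, s in submissions:
        cat_hour[(c, t)].append(s + offset)

    per_image = defaultdict(list)
    for h, c, t, s in submissions:
        mean_ct = sum(cat_hour[(c, t)]) / len(cat_hour[(c, t)])
        per_image[h].append((s + offset) / mean_ct)     # A_h^n (Eq. 1)

    m_bar = sum(len(v) for v in per_image.values()) / len(per_image)
    return {h: max(A) * math.log(len(A) / m_bar)        # V_h (Eq. 2)
            for h, A in per_image.items()}

# toy data: "grumpy" is reposted often, "higgs" scores high once, "meh" does neither
subs = [("grumpy", "funny", 0, 900), ("grumpy", "aww", 1, 800), ("grumpy", "funny", 2, 700),
        ("higgs", "news", 0, 1000),
        ("meh", "funny", 0, 10), ("meh", "news", 0, 20)]
scores = virality_scores(subs)
print(sorted(scores, key=scores.get, reverse=True))     # → ['grumpy', 'meh', 'higgs']
```

Note how the one-shot hit is penalized: "higgs" has a high $A_h^n$ but a single submission, so the $\log(m_h/\bar{m})$ factor pushes its score below that of the frequently resubmitted image.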
We also prefer Reddit over Flickr~\cite{www14_khosla} because images on Reddit are posted anonymously, and hence breed the purest form of ``internet trolling''. \subsection{Viral and Non-Viral Images Dataset}\label{viral_nonviral} \vspace{-5pt} Next, we create a dataset of 500 images containing the 250 most viral and the 250 least viral images according to Equation~\ref{Viral_rank}. This stark contrast in the virality scores of the two sets of images gives us a clean dichotomy to explore as a first step in studying this complex phenomenon. Recall that non-viral images include both images that did not get enough upvotes and images that may have had many upvotes on one submission but were not reposted multiple times. \subsubsection{Random Pairs Dataset}\label{viral_nonviral_random} \vspace{-5pt} In contrast to the clean dichotomy represented in the dataset above, we also create a dataset of pairs of images where the difference in the virality of the two images in a pair is less stark. We pair a random image from the 250 most viral images with a random image from the $> 10k$ images with virality lower than the median, and similarly pair a random image from the 250 least viral images with a random image with virality higher than the median. We collect 500 such pairs. Removing pairs that happen to have both images from the top/bottom 250 viral images leaves us with 489 pairs. We report our final human and computer results on this dataset, and refer to it as $(500_p)$ in Table~\ref{table:Method_Performance}. Training was done on the other 4550 pairs that can be formed from the remaining 10k images by pairing above-median viral images with below-median viral images. \subsection{Viral Categories Dataset}\label{subreddit} \vspace{-5pt} For our last dataset, we work with the five most viral categories: funny, WTF, aww, atheism and gaming. We identify images that are viral in only one of the categories and not the others.
To do so, we compute the ratio between an image's virality scores with respect to the category that gave it the highest score among all categories it was submitted to, and the category that gave it the second-highest score. That is, \vspace{-10pt} \begin{equation} \label{eq:ratio} V^{c}_{h}=\frac{V^{{c}^1}_{h}}{V^{{c}^2}_{h}} \end{equation} where $V^{{c}^k}_{h}$ is the virality score image $h$ received on the category $c$ that gave it the $k^{th}$ highest score among all categories. \vspace{-10pt} \begin{equation} V^{{c}^k}_{h}=A_h^{c^k}\pi\left(\log\left(\frac{m_h^{c^k}}{\bar{m}_h}\right)\right) \end{equation} where $A_h^{c^k}$ is as defined in Equation~\ref{Jure_eq} for the category that gave image $h$ the $k^{th}$ highest score among all categories it was submitted to, $\pi(x)$ is the percentile rank of $x$, $m_h^{c^k}$ is the number of times image $h$ was submitted to that category, and $\bar{m}_h$ is the average number of times image $h$ was submitted to each of its categories. We take the percentile rank instead of the actual $\log$ value to avoid negative values in the ratio in Equation~\ref{eq:ratio}. To form our dataset, we only considered the top 5000 ranked viral images in our Viral Images dataset (Section~\ref{sec:10k_dataset}). These contained 1809 funny, 522 WTF, 234 aww, 123 atheism and 95 gaming images. Of these, we selected the 85 images per category with the highest score in Equation~\ref{eq:ratio} to form our Viral Categories Dataset. \vspace{-5pt} \section{Understanding Image Virality}\label{understanding} \vspace{-5pt} \begin{figure}[t] \centering \subfigure[WTF]{ \includegraphics[scale=0.21,clip=true,draft=false,]{baby_fusion.jpg} \label{fig:WTF_blur} } \subfigure[atheism]{ \includegraphics[scale=0.21,clip=true,draft=false,]{einstein_fusion.jpg} \label{fig:atheism_blur} } \vspace{-10pt} \caption[]{Examples of temporal contextual priming through blurring in viral images.
Looking at the images on the left in both \subref{fig:WTF_blur} and \subref{fig:atheism_blur}, what do you think the actual images depict? Did your expectations of the images turn out to be accurate?\vspace{-15pt}} \label{fig:blur_answers} \end{figure} Consider the viral images of Fig.~\ref{fig:blur_answers}, where face swapping~\cite{bitouk2008face}, contextual priming~\cite{torralba2003contextual}, and scene gist~\cite{oliva2001modeling} make the images quite different from what we might expect at first glance. An analogous scenario researched in NLP is understanding the semantics of \textit{``That's what she said!''} jokes~\cite{kiddon2011s}. We hypothesize that images that do not present such a visual challenge or contradiction -- where the semantic perception of an image does not change significantly on closer examination -- are perhaps ``boring''~\cite{leskovec2007dynamics,berger2012makes} and less likely to be viral. This contradiction need not stem from the objects or attributes within the image, but may also arise from the context of the image: be it the images surrounding the image, the images viewed before it, the title of the image, and so on. Perhaps an interplay between these different contexts and the resultant inconsistent interpretations of the image is necessary to create a visual double entendre leading to image virality. With this in mind, we define four forms of context that we will study to explore image virality. \begin{packed_enumerate} \item \textbf{Intrinsic context}: This refers to visual content that is intrinsic to the pixels of the image. \item \textbf{Vicinity context}: This refers to the visual content of images surrounding the image (spatial vicinity). \item \textbf{Temporal context}: This refers to the visual content of images seen before the image (temporal vicinity). \item \textbf{Textual context}: This non-visual context refers to the title or caption of the image.
These titles can sometimes manifest themselves as visual content (e.g. if text is photoshopped into the image). Word graffiti has both textual and intrinsic context, and requires both NLP and computer vision to understand. \end{packed_enumerate} \subsection{Intrinsic context} \vspace{-5pt} We first examine whether humans and machines can predict, just by looking at an image, whether it is viral or not, and what the dominant topic (most suitable category) for the image is. For machine experiments, we use state-of-the-art image features such as DECAF6 deep features~\cite{donahue2013decaf}, gist~\cite{oliva2001modeling}, HOG~\cite{dalal2005histograms}, tiny images~\cite{torralba200880}, etc., using the implementation of~\cite{xiao2010sun}. We conduct our human studies on Amazon Mechanical Turk (AMT). We suspected that workers familiar with Reddit may perform differently at recognizing virality and categories than those unfamiliar with Reddit. So we created a qualification test that every worker had to take before doing any of our tasks. The test included questions about widely spread Reddit memes and jargon, so that anyone familiar with Reddit could easily get a high score, while workers who are not would get a very poor score. We thresholded this score to identify a worker as familiar with Reddit or not. Every task was done by 20 workers. Images were shown at 360 $\times$ 360. Machine accuracies were computed on the same test set as the human studies. Human accuracies are computed using a majority vote across workers. As a result, (1) accuracies reported for different subsets of workers (e.g. those familiar with Reddit and those not) can each be lower than the overall accuracy, and (2) we cannot report error bars on our results. We found that accuracies across workers on our tasks varied by $\pm 2.6\%$. On average, 73\% of the worker responses matched the majority vote response per image.
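The majority-vote aggregation used to compute human accuracies can be sketched as follows. This is a minimal illustration; the layout of worker responses (one list of labels per image) is an assumption on our part:

```python
# Sketch of majority-vote aggregation across the 20 workers per task;
# the data layout (one list of worker labels per image) is hypothetical.
from collections import Counter

def majority_vote(responses_per_image):
    """Return one label per image: the most common worker response."""
    return [Counter(votes).most_common(1)[0][0]
            for votes in responses_per_image]

def accuracy(predictions, ground_truth):
    correct = sum(p == g for p, g in zip(predictions, ground_truth))
    return correct / len(ground_truth)
```

Note that because accuracy is computed on the aggregated labels rather than per worker, no variance over workers (and hence no error bar) is available for the reported numbers.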
\vspace{-5pt} \subsubsection{Predicting Topics}\label{subreddit_disc} \vspace{-5pt} We start with our topic classification experiment, where a practical application is to help a user determine which category to submit their image to. We use our Viral Categories Dataset (Section~\ref{subreddit}). See Fig.~\ref{fig:montage_family_App} in the Appendix. The images generally do seem distinct from one category to another. For instance, images in the aww category tend to contain cute baby animals in the center of the image, images in atheism tend to have text or religious symbols, and images in WTF are often explicit and tend to provoke feelings of disgust, fear and surprise. After being trained with a sample montage of 55 images per category, the 20 qualified workers achieved a category identification accuracy of $87.84\%$ on 25 test images, where most of the confusion was between funny and gaming images. Prior familiarity with Reddit did not influence the accuracies because of the training phase. The machine performance using a variety of features can be seen in Fig.~\ref{fig:prediction_plot1}. A performance of $62.4\%$ was obtained using DECAF6~\cite{CloudCV} (chance accuracy would be 20\%). Machine and human confusion matrices can be found in Appendix II.
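The machine baseline amounts to a classifier over fixed, precomputed image features. Below is a minimal sketch; a nearest-centroid rule stands in for the SVM, and random vectors stand in for the DECAF6/gist features:

```python
# Sketch of the 5-way category baseline: a classifier over precomputed
# image features. A nearest-centroid rule stands in for the SVM, and
# random placeholder vectors stand in for DECAF6/gist descriptors.
import numpy as np

def fit_centroids(X, y, n_classes):
    # One mean feature vector ("centroid") per category.
    return np.stack([X[y == c].mean(axis=0) for c in range(n_classes)])

def predict(X, centroids):
    # Assign each image to the category with the closest centroid.
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 64))    # placeholder feature vectors
y = rng.integers(0, 5, size=400)  # 5 categories: funny, WTF, aww, atheism, gaming
centroids = fit_centroids(X, y, 5)
acc = (predict(X, centroids) == y).mean()
```

On real features one would substitute the precomputed descriptor matrix for `X` and report accuracy on a held-out split, as in Fig.~\ref{fig:prediction_plot1}; on the random placeholders above, accuracy stays near the 20\% chance level.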
\begin{figure}[t] \centering \subfigure[Category classification]{ \includegraphics[scale=0.27,clip=true,draft=false,]{Subreddit_Acc.png} \label{fig:prediction_plot1} } \subfigure[Virality prediction]{ \includegraphics[scale=0.27,clip=true,draft=false,]{Virality_Acc.png} \label{fig:prediction_plot2} } \vspace{-10pt} \caption[]{Machine accuracies on our Viral Categories (Section~\ref{subreddit}) and Viral \& Non-Viral Images datasets (Section~\ref{viral_nonviral}--tested on Top/Bottom 250 pairs), using different image features.\vspace{-15pt} } \label{fig:prediction} \end{figure} \vspace{-10pt} \subsubsection{Predicting Virality}\label{virality_prediction} \vspace{-5pt} Now, we consider the more challenging task of predicting whether an image is viral or not from its content alone, using our Viral and Non-Viral Images Dataset (Section~\ref{viral_nonviral}). We asked subjects on AMT whether they think a given image would be viral (i.e. ``become very viral on social networking websites like Facebook, Twitter, Reddit, Imgur, etc. with a lot of people liking, re-tweeting, sharing or upvoting the image?''). Classification accuracy was $65.40\%$, where chance is $50\%$. In each of these tasks, we also asked workers whether they had seen the image before, to get a sense of their bias due to familiarity with the image. We found that 9\%, 1.5\% and 3\% of the images had been seen before by the Reddit workers, non-Reddit workers and all workers, respectively. While this is a small sample, classification accuracies for this subset were high: $75.27\%$, $93.53\%$ and $91.15\%$. Note that viral images are likely to be seen even by non-Reddit users through other social networks. Moreover, we found that workers who were familiar with Reddit had about the same overall accuracy as workers who were not ($63.24\%$ and $63.08\%$ respectively). They did, however, have different classification strategies. Reddit workers had a hit rate of $40.64\%$, while non-Reddit workers had a hit rate of $28.96\%$.
This means that Reddit workers were more likely to recognize an image as viral when they saw one (but may have misclassified non-viral images as viral), whereas non-Reddit workers were more conservative in calling images viral. Both hit rates being under $50\%$ indicates a general bias towards labeling images as non-viral. This may be due to the unnaturally uniform prior over viral and non-viral images in the dataset used for this experiment. Overall, workers who have never seen an image before and are not familiar with Reddit can predict its virality better than chance. This shows that intrinsic image content is indicative of virality, and that image virality on communities like Reddit is not just a consequence of snowballing effects instigated by chance. \begin{figure}[t] \centering \includegraphics[scale=0.23,clip=true,draft=false,]{Viral_vm.png} \label{fig:vm_metric} \vspace{-7pt} \caption[] {Machine accuracy using our virality metric, averaged across 5 random train/test splits; the test set contained 2078 random images each time. Notice that all descriptors produce chance-like results ($50\%$). Novel image understanding techniques need to be developed to predict virality. \vspace{-20pt} } \label{fig:machine_score_prediction} \end{figure} Machine performance using our metric for virality is shown in Fig.~\ref{fig:machine_score_prediction}. Other metrics can be found in Appendix I. We see that current vision models have a hard time differentiating between these viral and non-viral images under any criterion. The SVM was trained with both linear and non-linear kernels on 5 random splits of our dataset of $\sim$10k images, using 250, 500, 1000, 2000 and 4000 images for training, and 1039 images of each class for testing. The performance of the machine on the same set of images as used in the human studies, using a variety of features to predict virality, is shown in Fig.~\ref{fig:prediction_plot2}.
Training was performed on the top and bottom 2000 images, excluding the top and bottom 250 images used for testing. DECAF features achieve the highest accuracy at 59\%; this is above chance, but lower than human performance (65.4\%). The wide variability of images on Reddit (seen throughout the paper) and the poor performance of state-of-the-art image features indicate that automatic prediction of image virality will require advanced image understanding techniques. \vspace{-10pt} \subsubsection{Predicting Relative Virality} \vspace{-5pt} \label{sec:relative_virality} Predicting the virality of individual images is a challenging task for both humans and machines. We therefore consider making relative predictions of virality. That is, given a pair of images, is it easier to predict which of the two images is more likely to be viral? In psychophysics, this setup is called a two-alternative forced choice (2AFC) task. We created image pairs consisting of a random viral image and a random non-viral image from our Viral and Non-Viral Images dataset (Section~\ref{viral_nonviral}). We asked workers which of the two images is more likely to go viral. Accuracies were: all workers\footnote{62.12\% of AMT Workers were Reddit workers.}: $71.76\%$, Reddit workers: $71.68\%$ and non-Reddit workers: $68.68\%$, noticeably higher than the $65.40\%$ on the absolute task and the 50\% chance. An SVM using DECAF6 image features achieved an accuracy of 61.60\%, similar to the SVM classification accuracy on the absolute task (Fig.~\ref{fig:prediction_plot2}). \vspace{-10pt} \subsubsection{Relative Attributes and Virality} \vspace{-5pt} \label{viral_attributes} Now that we have established that a non-trivial portion of virality depends on the image content, we wish to understand what kinds of images tend to be viral, i.e. what properties of images are correlated with virality.
We had subjects on AMT annotate the same pairs of images used in the experiment above with relative attribute annotations~\cite{parikh2011relative}. In other words, for each pair of images, we asked them which image has a stronger presence of an attribute than the other. Each image pair thus has a relative attribute annotation $\in\{-1,0,+1\}$ indicating whether the first image has a stronger, equal or weaker presence of the attribute than the second image. In addition, each image pair has a virality annotation $\in\{-1,+1\}$ based on our ground truth virality score, indicating whether the first or the second image is more viral. We can thus compute the correlation between each relative attribute and relative virality. We selected 52 attributes that capture the spatial layout of the scene, the aesthetics of the image, the subject of the image, how it made viewers feel, whether it was photoshopped, explicit, funny, etc. Inspiration for these attributes came from familiarity with Reddit, work on understanding image memorability~\cite{isola2011understanding}, and representative emotions on the valence/arousal circumplex~\cite{berger2011arousal,guerini2015deep}. See Fig.~\ref{fig:relative_graph} for the entire list of attributes we used. As seen in Fig.~\ref{fig:relative_graph}, synthetically generated (photoshopped), cartoonish and funny images are most likely to be viral, while beautiful images that make people feel calm, relaxed and sleepy (low arousal emotions~\cite{berger2011arousal}) are least likely to be viral. Overall, correlation values between any individual attribute and virality are low, due to the wide variation in the kinds of images found on communities like Reddit. \begin{figure}[t] \centering \subfigure[Correlations of human-annotated attributes with virality]{ \includegraphics[scale=0.285,clip=true,draft=false,]{Relative_attributes_sort5.png} \label{fig:relative_graph}} \subfigure[Correlation of attribute combinations with virality ($>5000$ pairs).
The Force condition puts tiebreakers on neutral attributes.]{ \includegraphics[scale=0.19,clip=true,draft=false,]{Optimal_Behaviour5_Corr.png} \label{fig:Optimal_Behaviour}} \hspace{10pt} \subfigure[Correlation of attribute combinations with virality after priming (Top/Bottom $250$ pairs: Section~\ref{viral_nonviral})]{ \includegraphics[scale = 0.26]{merge_attribute_corr_graph.png} \label{fig:relative_performance}} \vspace{-20pt} \caption{The role of attributes in image virality.\vspace{-20pt}} \end{figure} We further studied virality prediction with combinations of attributes. We start by identifying the single (relative) attribute with the highest (positive or negative) correlation with (relative) virality. We then greedily find the second attribute that, when added to the first one, increases virality prediction the most. For instance, funny images tend to be viral, and images with animals tend to be viral. But images that are funny \emph{and} have animals may be even more likely to be viral. The attribute to be added can be the attribute itself ($\uparrow$) or its negation ($\downarrow$). This helps deal with attributes that are negatively correlated with virality. For instance, synthetically generated images that are \emph{not} beautiful are more likely to be viral than images that have either property alone. In this way, we greedily add attributes. Table~\ref{table:Combined_Relative_Attributes} shows the attributes that combine to correlate well with virality. We exclude ``likely to go viral'' and ``memorable'' from this analysis because those are high-level concepts in themselves, and would not add to our understanding of virality. A combination of 38 attributes leads to a virality predictor that achieves an accuracy of $81.29\%$. This can be viewed as a hybrid human-machine predictor of virality: the attributes were annotated by humans, but selected via statistical analysis.
We see that this significantly outperforms humans alone ($71.76\%$) and the machine alone ($59.00\%$, see Table~\ref{table:Method_Performance}). One could train a classifier on top of the attribute predictors to further boost performance, but the semantic interpretability provided by Table~\ref{table:Combined_Relative_Attributes} would be lost. Our analysis begins to give us an indication of which image properties need to be reliably predicted in order to automatically predict virality. We also explore the effects of ``attribute priming'': if the first attribute in the combination is one that is negatively correlated with virality, how easy is it to recover from that and \emph{make} the image viral? Consider the scenario where an image is very ``relaxed'' (inversely correlated with virality). Is it possible for a graphics designer to induce virality by altering other attributes of the image? Fig.~\ref{fig:relative_performance} shows the correlation trajectories as more attributes are greedily added to a ``seed'' attribute that is positively $(+)$, negatively $(-)$, or neutrally $(N)$ correlated with virality. We see that in all these scenarios, an image can be made viral by adding just a few attributes. Table~\ref{table:Combined_Relative_Attributes} lists which attributes are selected for 3 different ``seed'' attributes. Interestingly, while sexual is positively correlated with virality, when seeded with animal, \emph{not} sexual increases the correlation with virality. As a result, when we select our five attributes greedily, the combination that correlates best with virality is: animal, synthetically generated, not beautiful, explicit and \emph{not} sexual.
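The greedy combination search can be sketched as follows. This is a minimal illustration: the data layout (one $\pm1$ annotation column per attribute, $\pm1$ relative virality labels per image pair) and the conjunction-based predictor are assumptions on our part:

```python
# Sketch of the greedy attribute-combination search. `attrs` maps an
# attribute name to a +/-1 annotation per image pair (first image has
# more/less of the attribute); `virality` is +/-1 per pair. At each step
# we add the attribute (or its negation) whose conjunction with the
# current combination correlates best with relative virality.
import numpy as np

def correlation(pred, virality):
    # Pearson correlation between +/-1 predictions and +/-1 labels.
    return float(np.corrcoef(pred, virality)[0, 1])

def greedy_select(attrs, virality, k):
    chosen = []
    mask = np.ones(len(virality), dtype=bool)  # pairs satisfying all picks so far
    for _ in range(k):
        best, best_corr, best_mask = None, -np.inf, None
        for name, col in attrs.items():
            if any(name == n for n, _ in chosen):
                continue
            for sign in (1, -1):  # the attribute itself, or its negation
                new_mask = mask & (sign * col > 0)
                c = correlation(np.where(new_mask, 1, -1), virality)
                if c > best_corr:
                    best, best_corr, best_mask = (name, sign), c, new_mask
        chosen.append(best)
        mask = best_mask
    return chosen
```

A sign of $-1$ in the output corresponds to the $\downarrow$ (negated) entries in Table~\ref{table:Combined_Relative_Attributes}.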
\vspace{-10pt} \subsubsection{Automated Relative Virality Prediction} \vspace{-5pt} To create an automated relative virality prediction classifier, we start with our complete $\sim$10k image dataset and have AMT workers do the same task as in Section~\ref{viral_attributes}: we divide the images into viral (top half in rank) vs. non-viral (lower half in rank), and randomly pair them up for relative attribute annotation for the top 5\footnote{Tagging all 52 relative attributes accurately for all $5k$ image pairs in the dataset is expensive.} performing attributes from our greedy search in Fig.~\ref{fig:relative_performance}: Animal, Synthetically Generated (SynthGen), Beautiful, Explicit and Sexual. Note that all of our top-5 attributes are visual. Correlation trajectories of combined attributes for our entire dataset in a hybrid human-machine virality predictor can be seen in Fig.~\ref{fig:Optimal_Behaviour}. \begin{table}[t] \tiny \centering \scriptsize{ \resizebox{\textwidth}{!}{% \begin{tabular}[H]{|c||c|c|c|c|c|} \cline{1-3} \Xhline{2\arrayrulewidth} & 1 & 2 & 3 & 4 & 5 \\ \cline{1-6} \Xhline{2\arrayrulewidth} Attribute (+) & $\uparrow$ synth. gen. & $\uparrow$ animal & $\downarrow$ beautiful & $\uparrow$ explicit & $\downarrow$ sexual \\ \cline{1-6} Virality Correlation & 0.3036 & 0.3067 & 0.3813 & 0.3998 & 0.4236 \\ \cline{1-6} \Xhline{3\arrayrulewidth} Attribute (-) & $\uparrow$ beautiful & $\uparrow$ synth. gen. & $\uparrow$ animal & $\uparrow$ dynamic & $\uparrow$ annoyed \\ \cline{1-6} Virality Correlation & -0.1510 & 0.2383 & 0.3747 & 0.3963 & 0.4097 \\ \cline{1-6} \Xhline{3\arrayrulewidth} Attribute (N) & $\uparrow$ religious & $\uparrow$ synth. gen. & $\uparrow$ animal & $\downarrow$ beautiful & $\uparrow$ dynamic \\ \cline{1-6} Virality Correlation & 0.0231 & 0.1875 & 0.3012 & 0.3644 & 0.3913 \\ \cline{1-6} \Xhline{2\arrayrulewidth} \end{tabular}} \vspace{-5pt} {\caption{Correlation of human-annotated attribute combinations with virality.
Combinations are ``primed'' with the first attribute.\vspace{-10pt}}\label{table:Combined_Relative_Attributes}} } \end{table} \begin{table}[t] \centering \renewcommand\footnoterule{} \scriptsize{ \begin{tabular}[H]{|c|c|c|} \cline{1-3} Dataset & Classification Method & Performance \\ \cline{1-3} \Xhline{2\arrayrulewidth} & Chance & $50\%$ \\ \cline{1-3} \Xhline{2\arrayrulewidth} All images & SVM + image features & $53.40\%$ \\ \cline{2-3} \Xhline{2\arrayrulewidth} & Human (500) & $71.76\%$ \\ \cline{2-3} Top/Bottom & SVM + image features $(500)$ & $61.60\%$ \\ \cline{2-3} 250 viral & Human annotated Atts.-1 $(500)$ & $56.77\%$ \\ \cline{2-3} (Section~\ref{viral_nonviral}) & Human annotated Atts.-3 $(500)$ & $68.53\%$ \\ \cline{2-3} & Human annotated Atts.-5 $(500)$ & $71.47\%$ \\ \cline{2-3} & Human annotated Atts.-11 $(500)$ & $73.56\%$ \\ \cline{2-3} & Human annotated Atts.-38 $(500)$ & $\mathbf{81.29\%}$ \\ \cline{2-3} \Xhline{2\arrayrulewidth} Top/Bottom & Khosla~\etal Popularity API~\cite{www14_khosla} $(500_p)$ & $51.12\%$ \\ \cline{2-3} 250 viral & SVM + image features $(500_p)$ & $58.49\%$ \\ \cline{2-3} paired with & Human $(500_p)$ & $60.12\%$\\ \cline{2-3} random imgs. & Human annotated Atts.-5 $(500_p)$& $65.18\%$ \\ \cline{2-3} (Section~\ref{viral_nonviral_random}) & SVM + Deep Attributes-5 $(500_p)$ & $\mathbf{68.10\%}$ \\ \cline{1-3} \end{tabular} } \vspace{-10pt} \captionof{table}{Relative virality prediction across different datasets \& methods. \vspace{-15pt}} \label{table:Method_Performance} \end{table} With all the annotations, we then train relative attribute predictors for each of these attributes with DECAF6 deep features~\cite{donahue2013decaf} and an SVM classifier through 10-fold cross-validation to obtain relative attribute predictions on all image pairs (Section~\ref{viral_nonviral_random}).
The relative attribute prediction accuracies we obtain are: Animal: $70.14\%$, SynthGen: $45.15\%$, Beautiful: $56.26\%$, Explicit: $47.15\%$, Sexual: $49.18\%$ (chance: $33.33\%$), including neutral pairs. Furthermore, we get Animal: $87.91\%$, SynthGen: $67.69\%$, Beautiful: $81.73\%$, Explicit: $65.23\%$, Sexual: $71.13\%$ for $+/-$ relative labels, excluding neutral (tied) pairs (chance: $50\%$). Combining these automatic attribute predictions to in turn (automatically) predict virality, we get an accuracy of $68.10\%$. If we use ground truth relative attribute annotations for these 5 attributes, we achieve $65.18\%$ accuracy, better than human performance ($60.12\%$) at predicting relative virality directly from images. Using our deep relative attributes, machines can predict relative virality more accurately than humans! This is because (1) humans do not fully understand what makes an image viral (hence the need for a study like this and for automatic approaches to predicting virality) and (2) the attribute classifiers trained by the machine may have latched on to biases of viral content. The resulting learned notion of attributes may be different from human perception of these attributes. Although our predictor works well above chance, notice that extracting attributes from these images is non-trivial, given the diversity of images in the dataset. While detecting faces and animals is typically considered to work reliably enough~\cite{girshick2013rich}, recall that images on Reddit are challenging due to their non-photorealism, embedded textual content and image composition. To quantify the qualitative difference between the images in typical vision datasets and our dataset, we trained a classifier to classify an image as belonging to either our Virality Dataset or the SUN dataset~\cite{xiao2010sun,torralba2011unbiased}. We extracted DECAF6 features from our dataset and from a similar number of images from the SUN dataset.
The resulting classifier was able to classify a new image as coming from one of the two datasets with $90.38\%$ accuracy, confirming the qualitative differences. Moreover, the metric developed for popularity~\cite{www14_khosla} applied to our dataset outputs chance-like results (Table~\ref{table:Method_Performance}). Thus, our datasets provide a new regime to study image understanding problems. \subsection{Vicinity context} \vspace{-5pt} \begin{figure*}[t] \centering \begin{tabular}{ccccc} \subfigure[car pair\label{fig:img_pair_1}]{ \includegraphics[scale=0.11]{car_example_crop.jpg} } \multirow{-2}[2.5]{*}{\subfigure[car set]{\includegraphics[scale=0.09]{car_total_crop.jpg} \label{fig:img_complete_1}}} & \multirow{-2}[2.5]{*}{\subfigure[set saliency]{\includegraphics[scale=0.09]{car_total_saliency.jpg}\label{fig:img_complete_2}}} & \multirow{-2}[2.5]{*}{\subfigure[Sort 4]{\includegraphics[scale=0.28]{Fig1b_small.png}\label{fig:mash1}}} & \multirow{-2}[2.5]{*}{\subfigure[Sort 2]{\includegraphics[scale=0.28]{Fig2b_small.png}\label{fig:mash2}}} \\ \subfigure[pair saliency]{\includegraphics[scale=0.11]{car_example_saliency.jpg} \label{fig:img_pair_2}}\\ \end{tabular} \vspace{-10pt} \caption[]{The value of how red a car is, or whether one car is more red than the other~\subref{fig:img_pair_1}, does not change if more images are added to the pool~\subref{fig:img_complete_1}. However, an image that may seem more viral than another, as visualized through saliency~\cite{judd2009learning} (e.g. the red vintage Ferrari in~\subref{fig:img_pair_2}), may start seeming less viral depending on the images added to the mix; see~\subref{fig:img_complete_2}. In our experiments, workers are asked to sort four images in ascending order of their virality in one experimental design~\subref{fig:mash1}, while they are asked to sort only 2 of the images in another design~\subref{fig:mash2}, after being shown all 4 of them.
In both cases, there are only two target images of interest (viral: green, non-viral: red), while the other two images are proxy images (yellow) added to the mix. These proxy images are chosen to be close (in gist space) to the viral target image (top row), to the non-viral target image (middle row), or at random (bottom row). \vspace{-15pt} } \label{fig:affinity} \end{figure*} Reasoning about pairs of images, as we did with relative virality above, leads to the question of the impact of images in the vicinity of an image on human perception of its virality. We designed an AMT experiment to explore this (Fig.~\ref{fig:affinity}). Recall that in the previous experiment involving relative virality prediction, we formed pairs of images, where each pair contained a viral and a non-viral image. We now append these pairs with two ``proxy'' images. These proxies are selected to be either similar to the viral image, similar to the non-viral image, or random. Similarity is measured using the gist descriptor~\cite{oliva2001modeling}. The $4^{th}$ and $6^{th}$ most similar images are selected from our Viral Images dataset (Section~\ref{sec:10k_dataset}). We do not select the two closest images, to avoid near-identical matches and to ensure that the task did not seem like a ``find-the-odd-one-out'' task. We study these three conditions in two different experimental settings. In the first, workers are asked to sort all four images from what they believe is the least viral to the most viral. In the second experimental design, workers were still shown all four images, but were asked to only annotate which one of the two images from the original pair is more viral than the other. This lets us test whether the mere presence of the ``proxy'' images affects the perception of virality. In both cases, we only check the relative ranking of the viral and non-viral image.
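Proxy selection reduces to a nearest-neighbor query in gist space; a minimal sketch, with random placeholder vectors standing in for actual gist descriptors:

```python
# Sketch of proxy selection: pick the 4th and 6th nearest neighbors of a
# target image in gist-descriptor space (skipping the closest matches to
# avoid near-duplicates). The gist vectors here are random placeholders.
import numpy as np

def select_proxies(target_gist, pool_gists, ranks=(4, 6)):
    """Return indices into pool_gists of the rank-th most similar images
    (rank 1 = most similar, by Euclidean distance in gist space)."""
    dists = np.linalg.norm(pool_gists - target_gist, axis=1)
    order = np.argsort(dists)           # most similar first
    return [int(order[r - 1]) for r in ranks]

rng = np.random.default_rng(0)
pool = rng.normal(size=(1000, 512))     # placeholder gist descriptors
target = pool[42] + 0.01 * rng.normal(size=512)
proxies = select_proxies(target, pool)  # indices of the two proxy images
```

In the experiment, the pool is the Viral Images dataset and the target is either the viral image, the non-viral image, or (in the random condition) ignored in favor of a uniformly sampled pair.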
\begin{wraptable}{r}{4.7cm} \vspace{-10pt} \scriptsize{ \begin{tabular}{|c|c|c|} \toprule & {Sort 4} & {Sort 2} \\ \midrule {Viral-NN } & $65.16\%$ & $66.64\%$ \\ \midrule {Non-viral-NN} & $68.60\%$ & $65.56\%$ \\ \midrule {Random} & $\mathbf{52.24\%}$ & $65.00\%$ \\ \bottomrule \end{tabular} } \vspace{-10pt} \captionof{table}{Human ranking accuracy across different proxy images.} \label{table:KNN_Classification} \end{wraptable} Worker accuracy in each of the six scenarios is shown in Table~\ref{table:KNN_Classification}. We see that when asked to sort all four images, identifying the truly viral images is harder in the presence of random proxies: they tend to confuse workers, and performance at predicting virality drops to nearly chance. The presence of carefully selected proxies, on the other hand, can still make the target viral image salient. When asked to sort just the two images of interest, performance is overall higher (because the task is less cumbersome). But more importantly, performance is very similar across the three conditions (Sort 2). This suggests that perhaps the mere presence of the proxy images does not impact virality prediction. Developing group-level image features that can reason about such higher-order phenomena has not been well studied in the vision community. Visual search and saliency have been studied to identify which images or image regions pop out, but models of changes in the relative ordering of the same set of images based on the presence of other images have not been explored. Such models may allow us to select the ideal set of images to surround an image with in order to increase its chances of going viral. \subsection{Temporal context}\label{temporal_context} \vspace{-5pt} Having examined the effect of images in the spatial vicinity on image virality, we now study the effects of temporal aspects.
In particular, we show users the same pairs of images used in the relative virality experiment in Section~\ref{sec:relative_virality} at 4 different resolutions, one after the other: $8 \times 8$, $16 \times 16$, $32 \times 32$ and $360 \times 360$ (original). We choose blurring to simulate first-impression judgements at thumbnail sizes, when images are `previewed'. At each stage, we asked workers which image they think is more likely to be viral. Virality prediction performance was $47.08\%$, $49.08\%$, $51.28\%$ and $62.04\%$, respectively. Virality prediction is reduced to chance even at $32 \times 32$, a resolution at which humans have been shown to recognize semantic content in images very reliably~\cite{torralba200880}. Subjects reported being surprised for 65\% of the images. We found a -0.04 correlation between true virality and surprise, and a -0.07 correlation between predicted virality and surprise. Perhaps people are bad at estimating whether they were truly surprised or not, and asking them may not be effective; or surprise truly is not correlated with virality. \subsection{Textual context} \vspace{-5pt} As a first experiment to evaluate the role of the title of the image, we show workers pairs of images and ask them which one they think is more likely to be viral. We then reveal the titles of the images, and ask them the same question again. We found that access to the title barely improved virality prediction ($62.04\%$ vs. $62.82\%$). This suggests that the title does not sway subjects after they have already judged the content. Our second experiment had the reverse setup. We first showed workers the titles alone, and asked them which title is more likely to make an image viral. We then showed them the images (along with the titles), and asked them the same question. Workers' prediction of relative virality was worse than chance using the title alone ($46.68\%$).
Interestingly, having been primed by the title, even with access to the image, performance did not improve significantly above chance ($52.92\%$) and is significantly lower than their performance when viewing an image without being primed by the title ($62.04\%$). This suggests that image content seems to be the prime signal in human perception of image virality. However, note that these experiments do not analyze the role of text that may be embedded in the image (memes!). \vspace{-5pt} \section{Conclusions} \vspace{-5pt} We studied viral images from a computer vision perspective. We introduced three new image datasets from Reddit, the main engine of viral content around the world. We defined a virality score using Reddit metadata. We found that virality can be predicted more accurately as a relative concept. While humans can predict relative virality from image content, machines are unable to do so using low-level features. High-level image understanding is key. We identified five key visual attributes that correlate with virality: Animal, Synthetically Generated, (Not) Beautiful, Explicit and Sexual. We predict these relative attributes using deep image features. Using these deep relative attribute predictions as features, machines (SVM) can predict virality with an accuracy of $68.10\%$ (higher than human performance: $60.12\%$). Finally, we study how human prediction of image virality varies with different ``contexts'' -- intrinsic, spatial (vicinity), temporal and textual. This work is a first step in understanding the complex but important phenomenon of image virality. We have demonstrated the need for advanced image understanding to predict virality, as well as the qualitative difference between our datasets and typical vision datasets. This opens up new opportunities for the vision community. Our datasets and annotations will be made publicly available. \vspace{-5pt} \section{Acknowledgements} This work was supported in part by ARO YIP 65359NSYIP to D.P.
and NSF IIS-1115719. We would also like to thank Stanislaw Antol, Michael Cogswell, Harsh Agrawal, and Arjun Chandrasekaran for their feedback and support. \vspace{-5pt} \section*{Appendix I: Virality Metrics}\label{App1} \begin{figure}[!h] \centering \subfigure[Virality metric: $V_{h}$] { \includegraphics[scale=0.23,clip=true,draft=false,]{viral_vm_EDIT.png} \label{fig:vm_metric_App} } \subfigure[Maximum upvotes: $\max_n\{A_h^n\}$]{ \includegraphics[scale=0.23,clip=true,draft=false,]{viral_up_EDIT.png} \label{fig:up_metric_App} } \subfigure[Number of resubmissions: $m_h$]{ \includegraphics[scale=0.23,clip=true,draft=false,]{viral_sh_EDIT.png} \label{fig:sh_metric_App} } \vspace{-7pt} \caption[] {Machine accuracy using our virality metric~\ref{fig:vm_metric_App}, and other metrics~\ref{fig:up_metric_App},~\ref{fig:sh_metric_App}. Notice that all descriptors produce chance-like results. Novel image understanding techniques need to be developed to predict virality. } \label{fig:machine_score_prediction_App} \end{figure} Machine performance using different metrics for virality is shown in Fig.~\ref{fig:machine_score_prediction_App}. We see that current vision models have a hard time differentiating between these viral and non-viral images, under any criteria. The SVM was trained with both linear and non-linear kernels on 5 random splits of our dataset of $\sim$10k images, using 250, 500, 1000, 2000, 4000 images for training, and 1039 images of each class for testing. Recall that we define our virality score for image $h$ across all resubmissions $n$ as \vspace{-7pt} \begin{equation}\label{Viral_rank} V_{h}=\max_n A_h^n \log\left(\frac{m_h}{\bar{m}}\right) \end{equation} where $m_h$ is the number of times image $h$ was resubmitted, and $\bar{m}$ is the average number of times any image has been resubmitted. $A_h^n$ is a normalized score based on the metric of~\cite{Leskovec}.
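As a concrete illustration (not from the paper's code release), the score in Eq.~(\ref{Viral_rank}) is a one-liner once the normalized per-resubmission scores $A_h^n$ and the dataset-wide average resubmission count $\bar{m}$ are available; the numbers below are made up:

```python
import math

def virality_score(upvote_scores, m_bar):
    """Sketch of V_h = max_n A_h^n * log(m_h / m_bar).

    upvote_scores: normalized scores A_h^n, one per resubmission of image h
                   (assumed precomputed following the metric of Leskovec et al.)
    m_bar:         average number of resubmissions per image in the dataset
    """
    m_h = len(upvote_scores)                 # number of resubmissions of h
    return max(upvote_scores) * math.log(m_h / m_bar)

# Resubmitted more often than average, with one strong submission -> positive:
v_hot = virality_score([0.2, 0.9, 0.4], m_bar=1.5)
# A single submission (below-average resubmission count) -> negative:
v_cold = virality_score([0.3], m_bar=1.5)
```

Note that the $\log$ factor makes the score negative for images resubmitted less often than average, regardless of how well any single submission scored.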
The other metrics we experimented with included maximum upvotes $\max_n \{A_h^n\}$ (Fig.~\ref{fig:up_metric_App}) across submissions and number of resubmissions $m_h$ (Fig.~\ref{fig:sh_metric_App}). \begin{figure}[!t] \centering \subfigure[AMT workers]{ \includegraphics[scale=0.20,clip=true,draft=false,]{AMT_subreddit_matrix.png} \label{fig:AMT_confusion_App} } \subfigure[SVM-DECAF6]{ \includegraphics[scale=0.20,clip=true,draft=false,]{SVM_subreddit_deep_matrix.png} \label{fig:SVM_confusion_App} } \vspace{-10pt} \caption[]{Confusion matrices in \subref{fig:AMT_confusion_App}, \subref{fig:SVM_confusion_App} for topic (subreddit) classification show that humans \& machines can do a reasonable job of recognizing the dominant topic of a viral image. Both find aww to be the easiest to recognize and both tend to confuse gaming images with funny. Overall, $90.4\%$ of the images are correctly classified by either the machine or humans ($9.6\%$ are misclassified by both). } \label{fig:subreddit_exem_App} \end{figure} \section*{Appendix II: Category Prediction Confusion Matrices} The confusion matrices in Figure~\ref{fig:subreddit_exem_App} depict how a human and a computer perform at identifying categories/topics (subreddits) in viral images (Section~\ref{subreddit_disc}). A computer achieves $62.4\%$ accuracy using SVM + DECAF6 after training on 60 images, while a human achieves $87.84\%$ on the same test images (chance is $20\%$). Recall that this group of AMT workers had to undergo training to learn how to identify the categories just like the computer, with the same exemplars (55 for training + 5 for validating that the workers paid attention during training). Both humans and machines tend to confuse gaming images with funny ones. They are also both remarkably good at identifying the aww category. While for a human it might be trivial, we suspect that it is also easy for the machine, since texture (animal fur, feathers, skin) plays an important role that DECAF6 is encoding.
Atheism, on the other hand, is very simple for a human, but complex for a machine. We are inclined to believe that since identifying whether an image contains religious content or not involves high-level semantics and often text, the task of detecting atheism is challenging for a machine. As a result, the machine tends to confuse it with funny or gaming, as some of these images also contain text. Examples from all 5 categories are shown in Fig.~\ref{fig:montage_family_App}. \begin{figure*}[!t] \centering \subfigure[SUN dataset] { \includegraphics[scale=0.5,clip=true,draft=false,]{SUN_montage.jpg} \label{fig:SUN_mini_App} } \subfigure[Flickr dataset used in~\cite{www14_khosla} for image popularity]{ \includegraphics[scale=0.5,clip=true,draft=false,]{Flickr_montage.jpg} \label{fig:Flickr_mini_App} } \subfigure[Reddit dataset used in our work for image virality. Virality scores are shown in the bottom of each image. `Philosoraptor', a celebrity meme, scores remarkably higher than other synthetic images.] { \includegraphics[scale=0.625,clip=true,draft=false,]{viral_montage_scores.jpg} \label{fig:Reddit_mini_App} } \vspace{-7pt} \caption[] {There is a qualitative difference in each dataset~\cite{torralba2011unbiased}. Notice that the images in~\ref{fig:SUN_mini_App} and~\ref{fig:Flickr_mini_App} are still constrained to photographs of everyday objects and scenes. However, a majority of images in~\ref{fig:Reddit_mini_App} are highly complex: they include text, cartoons, and out-of-context objects. In our dataset, this is the norm, rather than the exception. } \label{fig:dataset_montage_App} \end{figure*} Potential applications for subreddit classification include a recommendation system that tells a user which category to submit their image to for marketing or personal purposes. Automatically identifying a subreddit (or topic) of the image can also provide context when generating image descriptions~\cite{Ordonez:2011:im2text,karpathy2014deep}.
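To make the shape of the category-prediction pipeline concrete, here is a hedged Python sketch. It is not the paper's implementation: the SVM is replaced by a simple nearest-centroid rule, and random vectors stand in for the 4096-dimensional DECAF6 features, so only the structure of the computation is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, dim = 5, 4096            # 5 subreddits; chance accuracy is 20%
train_X = rng.normal(size=(55, dim))   # random stand-ins for DECAF6 features
train_y = np.arange(55) % n_classes    # 11 training exemplars per category

# One centroid per subreddit in feature space.
centroids = np.stack([train_X[train_y == c].mean(axis=0)
                      for c in range(n_classes)])

def classify(features):
    # Assign each image to the subreddit with the nearest feature centroid.
    d = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

test_X = rng.normal(size=(20, dim))
labels = classify(test_X)           # one predicted subreddit per test image
```

With real features, swapping the centroid rule for a trained one-vs-rest linear SVM would reproduce the setup described above.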
\section*{Appendix III: Dataset Comparison} Viral images are very different from standard images analyzed in different computer vision datasets. In Figure~\ref{fig:SUN_mini_App}, we see how the SUN dataset images favor open spaces in outdoor and indoor environments. Images from Flickr (Figure~\ref{fig:Flickr_mini_App}) have more variation than SUN images, yet the images still follow many traditional photographic rules: rule of thirds, principal subject(s) in the picture, all co-occurring objects in context, and they are colorful just like the SUN dataset. Recall that Flickr has become an online platform for photographers and amateurs to share their most beautiful pictures, an attribute that correlates \textit{inversely} with virality. Viral images, on the other hand, seem nonsensical, chaotic, and very unpredictable. They do not follow photographic rules. They can have cartoons, text, and (non)photorealistic content embedded separately or together, yet they still express an idea that is easily understandable to humans. We hope that the vision community will study automatic image understanding in these kinds of images (Figure~\ref{fig:Reddit_mini_App}). \vspace{-15pt} \begin{figure}[!t] \centering \includegraphics[scale=0.24,clip=true,draft=false,]{exemplar_montage.png} \vspace{-10pt} \caption[]{ Example images from the 5 most viral categories (top to bottom): funny, WTF, aww, atheism, gaming. } \label{fig:montage_family_App} \end{figure} \clearpage {\footnotesize \bibliographystyle{ieee}
\subsection{Local wellposedness in $H^s$ for $s>1$} It is well known from the works of Klainerman and Machedon \cite{KlainermanMachedon1,KlainermanMachedon2,KlainermanMachedon3}, Klainerman and Selberg \cite{Klainerman1,Klainerman2}, and Selberg \cite{Selberg}, that the wave map equation (\ref{Mainwavemap}) is locally wellposed in $H^s\times H^{s-1}$ for $s>1$. In this subsection we recall the necessary regularity results from these works without giving proofs and refer the reader to the above-cited works, and especially the survey \cite{Klainerman2}, for details.\\ Since the spaces in which one can prove existence and uniqueness involve spacetime Fourier transforms even when one considers only local-in-time solutions, we have to be more precise about the Banach spaces which are used to hold the solutions and the nonlinearities. \\ We shall denote by $\mathcal{F}(u)$ the spacetime Fourier transform of $u$. For $s,\,b\in R$ and a tempered distribution $u\in \mathcal{S'}(R^3)$, define \begin{equation}\label{HyperbolicSobolev} \|u\|_{X^{s,b}(R^3)}:=\left(\,\int_{R^3}(1+|\xi|^2)^{s}\,(1+||\xi|-|\tau||)^{2b}\,|\mathcal{F}(u)(\xi,\tau)|^2\,d\xi d\tau\,\right)^{\frac{1}{2}}, \end{equation} and set \begin{equation}\label{HyperbolicSpace} X^{s,b}(R^3):=\left\{u\in\mathcal{S'}(R^3):\,\|u\|_{X^{s,b}(R^3)}<\infty\right\}. \end{equation} We record the following wellposedness result for equation (\ref{Mainwavemap}) in the subcritical space $\dot{H}^s\times H^{s-1}$. We shall always assume that the initial data $(u_0,\,u_1)$ for (\ref{Mainwavemap}) satisfies the ``admissibility condition'' that $|u_0|\equiv 1$ and $\ud_0\,u_1\equiv 0$. \begin{theorem}\label{th:subcriticallocalwellposedness} Let $s>1$ and $\frac{1}{2}<b<\min\{s-\frac{1}{2},\,1\}$. Suppose that $(u_0,\,u_1)\in \dot{H}^{s}\times H^{s-1}$ and that $u_0$ equals a constant $u_{\infty}\in S^2$ for large $x$.
Then for $T=T(\|(u_0-u_{\infty},\,u_1)\|_{H^s\times H^{s-1}})>0$ sufficiently small, there exists a unique solution $u$ to equation (\ref{Mainwavemap}) with initial data $(u_0,\,u_1)$ on $R^2\times (-T,T)$ in the sense of distributions, which satisfies the following properties \begin{eqnarray*} &(1)& u-u_{\infty}\in C(I, H^s\times H^{s-1});\label{eq:wellpro1}\\ &(2)& {\rm there\,\,exists}\,\,\overline{u}\in L^2(R^3)\,\,{\rm with}\,\,\overline{u}|_{R^2\times I}\equiv u-u_{\infty} \,\,{\rm and\,\,}\nabla_{x,t}\overline{u}\in X^{s-1,b},\label{eq:wellpro2} \end{eqnarray*} where $I=(-T,\,T).$ \end{theorem} \smallskip \noindent {\it Remark.} The above theorem provides a rigorous definition of solutions to equation (\ref{Mainwavemap}). Property (2) is important, as (1) by itself is not sufficient to guarantee uniqueness when $s$ is close to $1$. One could of course choose to work directly with smooth wave maps, instead of these low-regularity wave maps. However, below we shall need to extend a locally (in space) defined map to a global one, and it is much more convenient to have such extensions in the framework of $H^s$ solutions, rather than smooth solutions.\\ Solutions from Theorem \ref{th:subcriticallocalwellposedness} can be extended to a maximal interval of existence; more precisely, we have \begin{corollary}\label{cor:maximalextension} Let $s>1$. Suppose that $(u_0,\,u_1)\in \dot{H}^{s}\times H^{s-1}$ and that $u_0$ equals a constant $u_{\infty}\in S^2$ for large $x$. Then there exists $T_+\in(0,\infty],\,T_{-}\in [-\infty,0)$, such that for any $T_{-}<T_1<T_2<T_{+}$, $u$ is a distributional solution to equation (\ref{Mainwavemap}), satisfying (1) and (2) on $I=(T_1,\,T_2)$, and that if $T_{+}<\infty$, then \begin{equation}\label{eq:blowupcriteria} \lim_{t\to T_+}\|\OR{u}(t)\|_{\dot{H}^s\times H^{s-1}}=\infty. \end{equation} A similar conclusion holds for $T_{-}$. Such $u$ is unique.
In addition, if $(u_0,\,u_1)\in \dot{H}^{s_1}\times H^{s_1-1}$ for some $s_1>s$, then $u$ satisfies (\ref{eq:wellpro1}) and (\ref{eq:wellpro2}) with $s$ being replaced by $s_1$ on any $I=(T_1,\,T_2)\Subset (T_{-},\,T_{+})$. $T_+$ and $T_{-}$ are called the maximal time of existence for the solution $u$. \end{corollary} \subsection{Critical wellposedness results} Perhaps not surprisingly, our work depends crucially on the regularity results of Tao \cite{TaoHighDimension,TaoSmallEnergy}, Tataru \cite{Tataru1}, and Sterbenz-Tataru \cite{Tataru4,TataruSterbenz}. See also the work of Krieger and Schlag \cite{KriegerSchlag}. In this section we recall some important results for wave maps in the energy space from \cite{Tataru1,TaoSmallEnergy,TataruSterbenz}, that will be needed below.\\ In order to control the solution at the $\dot{H}^1\times L^2$ level of regularity, we need to use more sophisticated spaces. The precise definitions of these spaces are not very important for us, but we shall need the following properties that we briefly review below. \\ Fix a radial function $\Phi\in C_c^{\infty}(R^2)$ with $\Phi|_{B_1}\equiv 1$ and ${\rm supp}\,\Phi\Subset B_2$. Let $\Psi(x):=\Phi(x)-\Phi(2x)$, and $\Psi_k(x)=\Psi(x/2^k)$ for each $k\in \mathbb{Z}$. Then ${\rm supp}\,\Psi\Subset B_2\backslash B_{1/2}$, and $$\sum_{k\in \mathbb{Z}}\Psi_k\equiv 1,\,\,\,\,{\rm for}\,\,|\xi|\neq 0.$$ Recall that the Littlewood-Paley projection $P_k$ and $P_{<k}$ are defined as $$\widehat{P_kf}(\xi)=\Psi_k(\xi)\,\widehat{f}(\xi),$$ and $$P_{<k}f=\sum_{k'<k}P_{k'}f.$$ We will also use the notations $u_k:=P_ku$ and $u_{<k}=P_{<k}u$. Then $$\sum_{k\in\mathbb{Z}}P_kf=f,$$ for all $f\in L^2(R^2)$. 
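Numerically, the projections $P_k$ are simply Fourier multipliers. The following one-dimensional NumPy sketch is not from the paper; it uses a $C^1$ smoothstep bump as a stand-in for the $C_c^{\infty}$ cutoff $\Phi$, but otherwise mirrors the definitions above and checks the resummation $\sum_{k}P_kf=f$, up to the mean value (since $\sum_k\Psi_k$ vanishes at $\xi=0$):

```python
import numpy as np

# A minimal 1-d sketch of the Littlewood-Paley projections P_k as Fourier
# multipliers.  A C^1 bump stands in for the C_c^infty function Phi;
# Psi(xi) = Phi(xi) - Phi(2 xi) as in the text.
def Phi(xi):
    a = np.clip(2.0 - np.abs(xi), 0.0, 1.0)   # 1 for |xi|<=1, 0 for |xi|>=2
    return a * a * (3.0 - 2.0 * a)            # smooth ramp in between

def Psi(xi):
    return Phi(xi) - Phi(2.0 * xi)

def P_k(f, k, L=64.0):
    """Project samples f (on a length-L periodic grid) onto frequencies ~ 2^k."""
    n = f.size
    xi = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular grid frequencies
    return np.fft.ifft(np.fft.fft(f) * Psi(xi / 2.0**k)).real

n = 1024
x = np.linspace(0.0, 64.0, n, endpoint=False)
f = np.exp(-(x - 32.0) ** 2)
# The dyadic pieces telescope back to f once the zero frequency (the mean,
# which every Psi_k annihilates) is added back:
recon = sum(P_k(f, k) for k in range(-6, 8)) + f.mean()
```

Only finitely many blocks are needed because the grid resolves a bounded range of frequencies; the telescoping $\sum_k\Psi(\xi/2^k)=\Phi(\xi/2^{k_{\max}})-\Phi(\xi/2^{k_{\min}-1})$ equals $1$ on that range.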
We use the same definitions as in \cite{TaoSmallEnergy} for the spaces $S[k],\,N[k]$, which are translation invariant Banach spaces of distributions on $\R^2_x\times \R_t$ containing Schwartz functions whose partial Fourier transform in the $x$ variable is supported in $\{2^{k-3}\leq |\xi|\leq 2^{k+3}\}$, $\{2^{k-4}\leq |\xi|\leq 2^{k+4}\}$ respectively. For each $k$, we shall use the space $S[k]$ to hold the frequency localized piece $P_ku$ of the solution $u$, and use the space $N[k]$ to hold the frequency localized piece $P_kf$ of the nonlinearity $f:=u\,\partial_{\alpha}\ud\partial^{\alpha}u$. Define the $S(1)$ norm as \begin{equation} \|f\|_{S(1)}:=\|f\|_{L^{\infty}}+\sup_k\|P_kf\|_{S[k]}. \end{equation} The spaces $S[k]$ and $N[k]$ satisfy the following properties. \begin{theorem}\label{th:propertySN} There exists a small universal constant $\kappa>0$, such that\\ (1)\,{\bf (Algebra property)}\, For Schwartz functions $\phi,\,\psi$ with $\psi\in S[k_2]$, we have \begin{equation}\label{eq:algebra} \|P_k(\phi\psi)\|_{S[k]}\lesssim 2^{-\kappa (k_2-k)_{+}}\|\phi\|_{S(1)}\|\psi\|_{S[k_2]}; \end{equation} (2)\,{\bf (Product property)}\, For Schwartz functions $f,\,\psi$ with $f\in N[k_2]$, we have \begin{equation}\label{eq:product} \|P_k(f\psi)\|_{N[k]}\lesssim 2^{-\kappa (k_2-k)_{+}}\|\psi\|_{S(1)}\|f\|_{N[k_2]}; \end{equation} (3)\,{\bf (Null form estimate)} \,For Schwartz functions $\phi,\,\psi$ with $\phi\in S[k_1]$, $\psi\in S[k_2]$, we have \begin{equation}\label{eq:null} \|P_k(\partial^{\alpha}\phi\,\partial_{\alpha}\psi)\|_{N[k]}\lesssim 2^{-\kappa (\max\{ k_1,\,k_2\}-k)_{+}}\|\phi\|_{S[k_1]}\|\psi\|_{S[k_2]}; \end{equation} (4)\,{\bf (Trilinear estimate)} \,For Schwartz functions $\phi,\,\varphi,\,\psi$ with $\phi\in S[k_1]$, $\varphi\in S[k_2]$ and $\psi\in S[k_3]$, we have \begin{multline}\label{eq:trilinear} \|P_k(\phi\,\partial^{\alpha}\varphi\partial_{\alpha}\psi)\|_{N[k]}\\ \lesssim 2^{-\kappa 
(k_1-\min\{k_2,\,k_3\})_+}2^{-\kappa(\max\{k_1,k_2,k_3\}-k)_+}\|\phi\|_{S[k_1]}\|\varphi\|_{S[k_2]}\|\psi\|_{S[k_3]}. \end{multline} (5)\,{\bf (Linear wave estimate)} \,For solution $u^L$ to the linear wave equation $$\partial_{tt}u^L-\Delta u^L=f,$$ with initial data $(u_0,\,u_1)$, we have \begin{equation}\label{eq:inhomogeneousenergyestimates} \|P_ku^L\|_{S[k]}\lesssim \|P_k(u_0,\,u_1)\|_{\HL}+\|P_kf\|_{N[k]}. \end{equation} (6)\,{\bf ($S[k]$ controls energy)} \,For $u\in S[k]$, we have \begin{equation}\label{eq:Senergy} \|\nabla_{x,t}u\|_{L^{\infty}_tL^2_x}\lesssim \|u\|_{S[k]}. \end{equation} \end{theorem} \smallskip \noindent {\it Remark.} These estimates were proved in \cite{TaoSmallEnergy}, and some of them are slightly more general than those stated in the main summary of the properties of $S[k],\,N[k]$ in Theorem 3 from \cite{TaoSmallEnergy}. However, they can be found elsewhere in that paper. More precisely, the algebra estimate (\ref{eq:algebra}) is a consequence of equation (125) and (126) at page 516; the product estimate (\ref{eq:product}) is a consequence of (119) at page 510; the null form estimate (\ref{eq:null}) is (134) at page 523; the trilinear estimate (\ref{eq:trilinear}) is taken from the first formula at page 529. We also note that $P_{k'}$ is bounded from $S[k]$ to $S[k]$, by the translation invariance of the Banach space $S[k]$. (6) implies that $\|u\|_{L^{\infty}}\lesssim \|u\|_{S[k]}$. Another useful property of $S[k]$ is the weak stability of $S[k]$: if $u_i\to u$ in the sense of distributions and $u_i\in S[k]$ with $\|u_i\|_{S[k]}\leq 1$, then $\|u\|_{S[k]}\leq 1$. See a similar statement in (vii) of page 323 in \cite{TataruRough}. We will use these estimates extensively below.\\ Tao \cite{TaoSmallEnergy} introduced a very useful notation to keep track of multilinear expressions. 
More precisely, for scalar functions $\phi_1,\dots,\phi_l$, we use $L\left(\phi_1,\dots,\phi_l\right)$ to denote a multilinear expression of the form \begin{equation*} L\left(\phi_1,\dots,\phi_l\right):=\int K\left(y_1,\dots,y_l\right)\phi_1(x-y_1)\cdots\phi_l(x-y_l)\,dy_1\cdots dy_l, \end{equation*} with a measure $K$ of bounded mass. In many cases, $\phi_1,\dots,\phi_l$ could also be expressions involving components $\phi^{j_1}_1,\dots,\phi^{j_l}_l$ and in such cases, we also assume that $K$ depends on $j_1,\dots,j_l$, but for ease of notation, we shall suppress this dependence. By the translation invariance of the spaces $S[k],\,N[k]$, the estimates in Theorem \ref{th:propertySN} extend to expressions of the form $L(\phi,\psi),\,L(\partial^{\alpha}\phi,\partial_{\alpha}\psi)$ instead of just $\phi\,\psi$ and $\partial^{\alpha}\phi\,\partial_{\alpha}\psi$.\\ Let us record here the following useful lemma from \cite{TaoSmallEnergy}. \begin{lemma}\label{lm:commutinglemma} For Schwartz functions $f,\,g$, we have \begin{equation} P_k(f\,g)-P_kf\cdot g=2^{-k}L\left(f,\,\nabla g\right). \end{equation} \end{lemma} \smallskip \noindent {\it Proof.} This lemma is taken from \cite{TaoSmallEnergy}; we include the short proof for the convenience of readers. We have \begin{eqnarray*} &&P_k(f\,g)(x)-P_kf(x)\,g(x)\\ &&=\int 4^k\check{\Psi}(2^ky)f(x-y)g(x-y)\,dy-\int 4^k\check{\Psi}(2^ky)f(x-y)g(x)\,dy\\ &&=\int_0^1\int_{R^2}-4^k\check{\Psi}(2^ky)f(x-y)\,y_j\partial^jg(x-ty)\,dtdy\\ &&=-2^{-k}\int_0^1\int_{R^2}4^k(2^ky_j)\,\check{\Psi}(2^ky)f(x-y)\,\partial^jg(x-ty)\,dtdy\\ &&=2^{-k}L(f,\,\nabla g). \end{eqnarray*} The proof is complete.\\ Let us recall the definition of {\it frequency envelope} introduced in \cite{TaoSmallEnergy}. Fix positive $\vartheta$ such that $\vartheta\leq \frac{\kappa}{100}$, where $\kappa$ is as in Theorem \ref{th:propertySN}.
\begin{definition} $(c_k)\in \ell^2$ is called a frequency envelope if $c_k>0$ and $c_{k_1}\leq 2^{\vartheta |k_1-k_2|}c_{k_2}$. \end{definition} For any frequency envelope $c=(c_k)$, define the norm $S(c)$ as \begin{equation} \|\phi\|_{S(c)}:=\|\phi\|_{L^{\infty}}+\sup_kc_k^{-1}\|P_k\phi\|_{S[k]}, \end{equation} and the space $S(c)$ as \begin{equation} S(c):=\{f\in L^{\infty}:\,\|f\|_{S(c)}<\infty\}. \end{equation} Note that $1\in S(c)$. The main property of the space $S(c)$ that we shall use below is that $S(c)$ is a Banach algebra. \begin{lemma} $S(c)$ is a Banach algebra. \end{lemma} \smallskip \noindent {\it Proof:} This was proved in \cite{TaoSmallEnergy}. We include the short proof for the convenience of readers. We need to prove \begin{equation} \|\phi\,\psi\|_{S(c)}\lesssim \|\phi\|_{S(c)}\,\|\psi\|_{S(c)}. \end{equation} We note that $\|\phi\|_{S(1)}\lesssim_c\|\phi\|_{S(c)}$ and $\|\phi_{<k}\|_{S(c)}\lesssim \|\phi\|_{S(c)}$. We can normalize $\|\phi\|_{S(c)}=\|\psi\|_{S(c)}=1$. For each $k\in \mathbb{Z}$, we have \begin{eqnarray*} &&\left\|P_k(\phi\,\psi)\right\|_{S[k]}\\ &&=\left\|P_k\left(\phi_{>k-10}\psi\right)+P_k\left(\phi_{\leq k-10}\,\psi_{>k-10}\right)+P_k\left(\phi_{\leq k-10}\psi_{\leq k-10}\right)\right\|_{S[k]}.
\end{eqnarray*} Note that $$P_k\left(\phi_{\leq k-10}\psi_{\leq k-10}\right)\equiv 0.$$ We get that \begin{eqnarray*} \left\|P_k(\phi\,\psi)\right\|_{S[k]}&\lesssim& \sum_{k_1>k-10}\left\|P_k\left(P_{k_1}\phi\,\psi\right)\right\|_{S[k]}+\sum_{k_2>k-10}\left\|P_k\left(\phi_{\leq k-10}\,P_{k_2}\psi\right)\right\|_{S[k]}\\ &\lesssim&\sum_{k_1>k-10}2^{-\kappa (k_1-k)_+}\left\|P_{k_1}\phi\right\|_{S[k_1]}\,\|\psi\|_{S(1)}\\ &&\hspace{.3in}+\,\sum_{k_2>k-10}2^{-\kappa (k_2-k)_+}\left\|P_{k_2}\psi\right\|_{S[k_2]}\,\|\phi_{\leq k-10}\|_{S(1)}\\ &\lesssim&\sum_{k_1>k-10}2^{-\kappa (k_1-k)}c_{k_1}+\sum_{k_2>k-10}2^{-\kappa (k_2-k)}c_{k_2}\\ &\lesssim&\sum_{k'>k-10}2^{-(\kappa-\vartheta)(k'-k)}c_{k}\lesssim c_k, \end{eqnarray*} and this finishes the proof.\\ Let us recall the following global wellposedness theorem for wave maps from Tao \cite{TaoSmallEnergy}. \begin{theorem}\label{th:globalregularity} There exists an $\varepsilon>0$ sufficiently small such that the following is true. Suppose that $(u_0,u_1)$ is smooth, $u_0-u_{\infty},\,u_1$ are compactly supported, and that $u_1^{\dagger}\cdot u_0\equiv 0$. Assume that $\left(\|P_k(u_0,\,u_1)\|_{\HL}\right)$ lies under a frequency envelope $c=(c_k)$ \footnote{For non-negative sequences $(a_k)$ and $(b_k)$, we say that $(a_k)$ lies under $(b_k)$ if $a_k\leq b_k$ for each $k$.} with $$\|c_k\|_{\ell^2}\leq \varepsilon.$$ Then the wave map $u$ with initial data $(u_0,u_1)$ is global, and moreover \begin{equation}\label{eq:GlobalControl} \|P_ku\|_{S[k]}+\sup_{t\in R}\|P_k\OR{u}(t)\|_{\HL}\leq Cc_k, \end{equation} for some universal $C$.
\end{theorem} \smallskip \noindent {\it Remark:} By approximation by smooth maps, and the wellposedness for equation (\ref{Mainwavemap}) in $\dot{H}^s\times H^{s-1}$ for $s>1$, we can relax the smoothness requirement for the initial data in the above theorem to $(u_0,\,u_1)\in \dot{H}^s\times H^{s-1}$.\\ Fix $\epsilon_{\ast}>0$ sufficiently small, so that classical wave maps with energy smaller than $C\epsilon_{\ast}$ exist globally, for a sufficiently large universal $C>1$. In later sections, we shall need the following local-in-space smoothness result, when the initial data is locally but not globally smooth. \begin{lemma}\label{lm:localsmoothness} Let $(u_0,\,u_1)\in \dot{H}^s\times H^{s-1}$ for some $s>1$, and suppose that $(u_{0}-u_{\infty},\,u_1)$ is compactly supported, with $|u_0|\equiv 1$ and $u_0\cdot u_1\equiv 0$. Assume that $(u_0,\,u_1)$ is smooth in $B_1$, and that $\initial\leq\epsilon_{\ast}.$ Then the global solution $u$ is smooth in $\{(x,t):\,|x|<1-|t|\}$. \end{lemma} \smallskip \noindent {\it Proof:} By the small energy global existence result and the subcritical Cauchy theory, we get that \begin{equation}\label{eq:Hsnorm} \sup_{0\leq t<1}\|u(t)\|_{\dot{H}^s}\leq C\left( \|(u_0-u_{\infty},\,u_1)\|_{H^s\times H^{s-1}}\right). \end{equation} Denote $\lambda:=C\left( \|(u_0-u_{\infty},\,u_1)\|_{H^s\times H^{s-1}}\right)$. Fix $\br>0$ small, and take $B_{\br}(\bx)\subset B_1$. Set $$\bu_0=\frac{1}{\pi\br^2}\int_{B_{\br}(\bx)}u_0.$$ Then by the Sobolev inequality, we obtain that \begin{equation}\label{eq:smallinsmallball} \left\|u_0-\bu_0\right\|_{L^{\infty}\left(B_{\br}(\bx)\right)}\lesssim_{\lambda}\br^{s-1}. \end{equation} Take a smooth cutoff function $\eta$ such that $\eta\equiv 1$ in $B_{\br-\br^{1+\delta}}(\bx)$ with some $\delta\in (0,\,2(s-1))$, and $\eta\equiv 0$ outside $B_{\br}(\bx)$. In addition, we can require that \begin{equation}\label{eq:smalleta} |\nabla \eta|\lesssim \br^{-1-\delta}.
\end{equation} Define \begin{equation*} \left(\tu_0,\,\tu_1\right)=\left(P\left[\eta\,(u_0-\bu_0)+\bu_0\right],\,\eta\, u_1\right), \end{equation*} where for each vector $v\neq 0$ $$Pv=\frac{v}{|v|}.$$ Then $\tu_0,\,\tu_1$ are smooth, and \begin{equation}\label{eq:localequal11} \left(\tu_0,\,\tu_1\right)\equiv (u_0,\,u_1),\,\,\,\,{\rm in}\,\,B_{\br-\br^{1+\delta}}(\bx). \end{equation} Moreover, we can verify by direct computation, thanks to (\ref{eq:smallinsmallball}) and (\ref{eq:smalleta}), that $$\left\|\left(\tu_0,\,\tu_1\right)\right\|_{\HL}\lesssim\epsilon_{\ast},$$ if $\br$ is chosen sufficiently small. Hence the solution $\tu$ to the wave map equation with the initial data $\left(\tu_0,\,\tu_1\right)$ is smooth and global. By (\ref{eq:localequal11}), $u\equiv \tu$ for $|x-\bx|<\br-\br^{1+\delta}-|t|$, and is thus smooth for $|x-\bx|<\br-\br^{1+\delta}-|t|$. By moving $\bx$ around and using finite speed of propagation, we conclude that $u$ is smooth in $\{(x,t):\,|x|<1-2\br^{1+\delta}-|t|,\,|t|<\br\}$. We can apply the same technique at $|t|=\br,\,2\br$ and so on, and conclude recursively that $u$ is smooth in $\{(x,t):\,|x|<1-2k\br^{1+\delta}-|t|,\,|t|<k\br\}$ for $k=1,\,2,\dots$ with $(k+2)\br<1.$ Hence, $u$ is smooth in $\{(x,t):\,|x|<1-C\br^{\delta},\,\,|t|<1-3\br\}$. Since $\br$ can be taken arbitrarily small, the lemma follows.\\ Since the global regularity result for small energy requires that the initial data belongs to a subcritical space $H^s\times H^{s-1}$ for some $s>1$, \footnote{See however Tataru \cite{TataruRough} where a notion of finite energy solution was introduced. } we shall need the following lemma when we deal with some initial data which is $C^{\infty}(R^2\backslash\{0\})$ but may fail to be in $H^s\times H^{s-1}$ globally for any $s>1$. \begin{lemma}\label{lm:exteriornice} Suppose that $(u_0,\,u_1)\in C^{\infty}(R^2\backslash\{0\})$, and that $(u_0-u_{\infty},\,u_1)$ is compactly supported.
Assume that \begin{equation} \|(u_0,\,u_1)\|_{\HL}\leq \epsilon_{\ast}. \end{equation} Then there exists a unique smooth $u\in C^{\infty}(\{(x,t):\,|x|>|t|\})$ such that $u$ solves the wave map equation in $\{(x,t):\,|x|>|t|\}$. Moreover \begin{equation}\label{eq:exteriornice} \lim_{t\to 0}\|\OR{u}(\cdot,\,t)-(u_0,\,u_1)\|_{\HL(|x|>|t|)}= 0. \end{equation} Similar results hold if we assume instead that $(u_0,\,u_1)\in H^{s}_{{\rm loc}}\times H^{s-1}_{{\rm loc}}(R^2\backslash\{0\})$, and in this case, $\OR{u}\in H^{s}_{{\rm loc}}\times H^{s-1}_{{\rm loc}}(|x|>|t|)$. \end{lemma} \smallskip \noindent {\it Proof.} We shall prove only the first part of the lemma. The proof of the second part is clear from the same argument. Let us first prove the existence of $u$ claimed in the lemma. For any $r>0$, since $$\int_{B_r\backslash B_{\frac{r}{2}}}|\nabla u_0|^2+|u_1|^2\,dx\leq\epsilon_{\ast}^2,$$ we can find $\br\in \left(\frac{r}{2},\,r\right)$ with $$\int_{|x|=\br}|\spartial u_0|^2\,d\sigma\lesssim \frac{\epsilon^2_{\ast}}{\br}.$$ Denote $$\overline{u}_0=\frac{1}{2\pi \br}\int_{\partial B_{\br}}u_0.$$ Then by the Sobolev inequality, we get that $$\left\|u_0-\overline{u}_0\right\|_{L^{\infty}(\partial B_{\br})}\lesssim \epsilon_{\ast}.$$ Thus from the fact that $|u_0|\equiv 1$, we see that $\left|\overline{u}_0\right|\gtrsim 1$. Take a smooth cutoff function $\eta$ such that $\eta\equiv 1$ for $|x|\ge\br$ and $\eta\equiv 0$ for $|x|<\frac{\br}{2}$ with $|\nabla \eta|\lesssim (\br)^{-1}$. Define \begin{equation*} \left(\widetilde{u}_0,\,\widetilde{u}_1\right)=\left\{\begin{array}{lr} (u_0,\,u_1) & {\rm in}\,\,B_{\br}^c;\\ \left(P\left[\eta(r)(u_0(\br \theta)-\overline{u}_0)+\overline{u}_0\right],\,0\right) & {\rm in}\,\, B_{\br}. \end{array}\right.
\end{equation*} Then $$\left(\tu_0-u_{\infty},\,\tu_1\right)\in H^s\times H^{s-1},$$ for $s<\frac{3}{2}$, and direct computation shows that $$\left\|\left(\widetilde{u}_0,\,\widetilde{u}_1\right)\right\|_{\HL}\lesssim \epsilon_{\ast}.$$ Note also that $\left(\widetilde{u}_0,\,\widetilde{u}_1\right)$ is smooth for $|x|>\br$. Hence by small data theory and Lemma \ref{lm:localsmoothness} the solution $\tu$ to the wave map equation with initial data $\left(\widetilde{u}_0,\,\widetilde{u}_1\right)$ is global, and is smooth in $|x|>\br+|t|$. By taking $r\to 0+$ and the finite speed of propagation, we see that $$u=\lim_{r\to 0+}\tu$$ exists in $|x|>|t|$ and is smooth. We now turn to the proof of (\ref{eq:exteriornice}). Let $\tu$ be the solution as before, corresponding to $\br$, then \begin{equation}\label{eq:equal90} \tu\equiv u,\,\,\,{\rm for }\,\,\,|x|>\br+|t|, \end{equation} and $\tu$ is continuous in $\HL$ for $t\in(0,\,1]$. For any $\epsilon>0$, we can choose $\br$ sufficiently small, such that $$\|(u_0,\,u_1)\|_{\HL(B_{4\br})}<\epsilon.$$ Then by energy flux identity (say for $t>0$ and any $\epsilon>0$), \begin{eqnarray*} &&\int_{t+\epsilon<|x|<4\br-t}\nablaxtu(x,t)\,dx\\ &&\hspace{.3in}+\,\frac{1}{\sqrt{2}}\int_0^t\int_{|x|=4\br-t}\left(\frac{|\nabla u|^2}{2}+\frac{|\partial_tu|^2}{2}-\frac{x}{|x|}\cdot\nabla u\,\partial_tu\right)\,d\sigma ds\\ &&\hspace{.3in}+\,\frac{1}{\sqrt{2}}\int_0^t\int_{|x|=t+\epsilon}\left(\frac{|\nabla u|^2}{2}+\frac{|\partial_tu|^2}{2}+\frac{x}{|x|}\cdot\nabla u\,\partial_tu\right)\,d\sigma ds\\ &&\,\,=\,\int_{B_{4\br}\backslash B_{\epsilon}}\nablaxtu(x,0)\,dx, \end{eqnarray*} we see that \begin{equation}\label{eq:localsmall90} \|\OR{u}(t)\|_{\HL\left(B_{2\br}\backslash B_{|t|}\right)}\leq \|(u_0,\,u_1)\|_{\HL\left(B_{4\br}\right)}<\epsilon,\,\,\,{\rm for}\,\,|t|<\br. 
\end{equation} Since $\tu$ is continuous in the energy space and $\OR{\tu}(x,0)=(u_0,\,u_1)(x)$ for $|x|>\br$, we see that for sufficiently small $t_{1}\in(0,\,\br)$ and $|t|<t_1$, \begin{equation}\label{eq:tucontinuous} \|\OR{\tu}(t)-(u_0,\,u_1)\|_{\HL(|x|>\br)}<\epsilon. \end{equation} Combining (\ref{eq:localsmall90}), (\ref{eq:tucontinuous}) and (\ref{eq:equal90}), we conclude that for $|t|\leq t_{1}$ \begin{eqnarray*} &&\|\OR{u}(t)-(u_0,\,u_1)\|_{\HL(|x|>|t|)}\\ &&\leq \|\OR{u}(t)-(u_0,\,u_1)\|_{\HL(|x|>2\br)}+\|\OR{u}(t)-(u_0,\,u_1)\|_{\HL(|t|<|x|<2\br)}\\ &&\leq \|\OR{\tu}(t)-(u_0,\,u_1)\|_{\HL(|x|>2\br)}+2\epsilon\\ &&\leq 3\epsilon. \end{eqnarray*} Since $\epsilon>0$ is arbitrary, the lemma is proved.\\ By finite speed of propagation and small data global existence, understanding the energy concentration is important for studying the dynamics of the wave maps. To measure the energy concentration, let us define for a wave map $u$ the ``{\it energy concentration radius}'' \begin{multline}\label{eq:concentrationradiusast} \hspace{2in}r(\epsilon_{\ast},\,t):=\\ \inf\left\{r>0:\,{\rm there\,\,exists\,\,}\bx\,\,{\rm such\,\,that\,\,}\int_{B_r(\bx)}\nablaxtu(x,t)\,dx>\epsilon_{\ast}\right\}. \end{multline} We adopt the convention that if the set is empty, then the infimum is infinity. The small energy global existence result, Theorem \ref{th:globalregularity}, and the finite speed of propagation imply that if a wave map $u$ blows up at a finite time $T_+$, then $r(\epsilon_{\ast},t)\to 0+$ as $t\to T_+$. This is a very important piece of information that allows us to zoom in on a small region near the blow up point and study the details of the blow up there. Unfortunately, knowing only that the energy concentrates at small scales does not in itself allow one to ``extract'' a nontrivial blow up profile in the limit, as we zoom in more and more.
This is because a priori the energy can be concentrated in quite an arbitrary way, given that we do not (and probably cannot) control any higher order regularity beyond the energy when the time is close to the blow up time. To obtain a nontrivial blow up profile, the following result due to Sterbenz-Tataru \cite{TataruSterbenz} plays an essential role. \footnote{More precisely, this result is used to rule out the situation that all energy near the blow up point concentrates near the boundary of the light cone. The control inside the light cone turns out to be quite favorable.} \begin{theorem}\label{th:bubble} There exists a function $\epsilon(E)$ of the energy $E$ with $0<\epsilon(E)\ll 1$ such that if $u$ is a classical solution to (\ref{Mainwavemap}) in $I\times R^2=[a,\,b]\times R^2$, with energy $E$ and \begin{equation}\label{eq:smalldispersion} \sup_{t\in I}\,\sup_k\left\|(P_ku,\,2^{-k}P_k\partial_tu)(t)\right\|_{L^{\infty}\times L^{\infty}}<\epsilon(E), \end{equation} then the energy concentration radius $r(\epsilon_{\ast},t)$ has a uniform lower bound on $I$: \begin{equation}\label{eq:energyconcentrationradiuslowerbound} \inf_{t\in I}\,r(\epsilon_{\ast},t)\ge r_0>0. \end{equation} \end{theorem} \end{section} \begin{section}{Channel of energy inequality for wave maps with small energy} In this section, we prove the channel of energy inequality for small energy wave maps. Let us begin with the following linear channel of energy inequality for outgoing waves, which is a slightly more quantitative two dimensional version of the channel of energy inequality that played a decisive role in \cite{DJKM}. \begin{lemma}\label{lm:linearchannel} Fix $\gamma\in(0,1)$. There exists $\mu=\mu(\gamma)>0$ sufficiently small such that the following statement is true.
Let $v$ be a finite energy solution to the linear wave equation $$\partial_{tt}v-\Delta v=0,\,\,{\rm in}\,\,R^2\times[0,\infty),$$ with initial data $(v_0,\,v_1)\in\HL$ satisfying \begin{equation}\label{eq:outgoinglinearwave} \|(v_0,\,v_1)\|_{\HL(B^c_{1+\mu}\cup B_{1-\mu})}+\|\spartial v_0\|_{L^2}+\|\partial_rv_0+v_1\|_{L^2}\leq \mu\|(v_0,\,v_1)\|_{\HL}. \end{equation} We also assume that $v_0\equiv v_{\infty}$ for some constant $v_{\infty}$ for $|x|$ large. Then for all $t\ge 0$, we have \begin{equation}\label{eq:linearchannel} \int_{|x|\ge \gamma+t}|\nabla_{x,t}v|^2(x,t)\,dx\ge \gamma\|(v_0,\,v_1)\|_{\HL}^2. \end{equation} \end{lemma} \smallskip \noindent {\it Proof.} We can normalize the initial data so that $\|(v_0,\,v_1)\|_{\HL}=1$. Let $\alpha=\int_{\{1+\mu\leq |x|\leq 2(1+\mu)\}}v_0(x)\,dx$. By the Poincar\'e inequality \begin{equation} \label{eq:Poincare} \int_{1+\mu\leq |x|\leq 2(1+\mu)} |v_0(x)-\alpha|^2\,dx\lesssim \int_{1+\mu\leq |x|\leq 2(1+\mu)} |\nabla v_0(x)|^2\,dx, \end{equation} where the implicit constant is independent of $\mu\leq 1$. Take a non-negative radial $\eta\in C_c^{\infty}(R^2)$ with $\eta\equiv 1$ on $\overline{B_{1+\mu}}$ and ${\rm supp}\,\eta\Subset B_{1+\mu^{1/2}}$ satisfying $|\nabla \eta|\lesssim \mu^{-1/2}$. Define $$(\wt{v}_0,\wt{v}_1)=\eta(x)\left(v_0(x)-\alpha,v_1(x)\right).$$ Using \eqref{eq:Poincare}, the bound $|\nabla \eta|\lesssim \mu^{-1/2}$ and \eqref{eq:outgoinglinearwave}, we obtain: \begin{equation} \label{eq:veryclosechannel} \left\|\left(\nabla(\wt{v}_0-v_0),\,\wt{v}_1-v_1\right)\right\|_{L^2\times L^2}^2\lesssim \frac{1}{\mu}\int_{|x|\geq 1+\mu} |\nabla v_0|^2+\int_{|x|\geq 1+\mu} |v_1|^2\lesssim \mu.
\end{equation} By the Sobolev and H\"older inequalities, \begin{gather*} \||D|^{\frac 12} \wt{v}_0\|_{L^2}\lesssim \|\nabla \wt{v}_0\|_{L^{\frac 43}} \lesssim \|\nabla \wt{v}_0\|_{L^2(\{|x|\leq 1-\mu\})}+\mu^{\frac 18}\|\nabla \wt{v}_0\|_{L^2(\{1-\mu\leq |x|\leq 1+\mu^{1/2}\})}\lesssim \mu^{\frac 18}\\ \||D|^{-\frac 12} \wt{v}_1\|_{L^2}\lesssim \|v_1\|_{L^{\frac 43}}\lesssim \|v_1\|_{L^2(\{|x|\leq 1-\mu\})}+\mu^{\frac 18}\|v_1\|_{L^2(\{1-\mu\leq |x|\leq 1+\mu^{1/2}\})}\lesssim \mu^{\frac 18}. \end{gather*} Let $\wt{v}$ be the solution to the linear wave equation with initial data $(\wt{v}_0,\,\wt{v}_1)$. By conservation of the $\dot{H}^{1/2}\times \dot{H}^{-1/2}$ norm for the linear wave equation, we obtain that for all $t\in \R$, \begin{equation} \label{eq:controlofl2} \left|\int \wt{v}_t(x,t)\wt{v}(x,t)\,dx\right|\lesssim \big\| |D|^{1/2} \wt{v}\big\|_{L^2}\big\| |D|^{-1/2} \wt{v}_t\big\|_{L^2}\lesssim \mu^{1/4}. \end{equation} By direct computation, we see that \begin{equation} \frac{d}{dt}\int_{R^2}-\wt{v}_t\left(x\cdot\nabla\wt{v}+\frac{1}{2}\wt{v}\right)(x,t)\,dx=\E(\wt{v}):=\E_0. \end{equation} Hence, by (\ref{eq:controlofl2}) and the outgoing condition (\ref{eq:outgoinglinearwave}), we get that \begin{eqnarray*} \int_{R^2}-\wt{v}_t\,x\cdot\nabla \wt{v}(x,t)\,dx&=&\E_0\, t+\int_{R^2}-\wt{v}_1\left(x\cdot\nabla\wt{v}_0+\frac{1}{2}\wt{v}_0\right)(x)\,dx\\ &&\hspace{.3in}+O(\mu^{1/4})\\ &=&\E_0\, (t+1)+O(\mu^{1/4}).
\end{eqnarray*} On the other hand, by the finite speed of propagation, ${\rm supp}\,\wt{v}(\cdot,t)\Subset B_{1+\mu^{1/2}+t}$ for all $t\ge 0$, and thus \begin{eqnarray*} &&\int_{R^2}-\wt{v}_t\,x\cdot\nabla \wt{v}(x,t)\,dx\leq \int_{|x|>\gamma +t}(1+\mu^{1/2}+t)\left(\frac{\left|\wt{v}_t\right|^2}{2}+\frac{\left|\nabla \wt{v}\right|^2}{2}\right)(x,t)\,dx\\ &&\hspace{.3in} +\,(\gamma+t)\int_{|x|<\gamma+t}\left(\frac{\left|\wt{v}_t\right|^2}{2}+\frac{\left|\nabla \wt{v}\right|^2}{2}\right)(x,t)\,dx\\ &&=(\gamma+t)\E_0-(\gamma+t)\int_{|x|>\gamma+t}\left(\frac{\left|\wt{v}_t\right|^2}{2}+\frac{\left|\nabla \wt{v}\right|^2}{2}\right)(x,t)\,dx+\\ &&\hspace{.3in}+\,(1+\mu^{1/2}+t)\int_{|x|>\gamma+t}\left(\frac{\left|\wt{v}_t\right|^2}{2}+\frac{\left|\nabla \wt{v}\right|^2}{2}\right)(x,t)\,dx. \end{eqnarray*} Combining this with the above, we see that \begin{eqnarray*} &&(1+\mu^{1/2}-\gamma)\int_{|x|>\gamma+t}\left(\frac{\left|\wt{v}_t\right|^2}{2}+\frac{\left|\nabla \wt{v}\right|^2}{2}\right)(x,t)\,dx\\ &&\ge (1-\gamma)\E_0+O(\mu^{1/4}). \end{eqnarray*} By choosing $\mu$ sufficiently small, we obtain the channel of energy inequality for $\wt{v}$, and consequently also for $v$, by (\ref{eq:veryclosechannel}). \\ As mentioned in the introduction, one of the main goals of this paper is to extend the channel of energy arguments to the wave map setting. As a first step towards understanding the implications of the channel of energy property of linear wave equations for wave maps, we prove the following result for small energy wave maps. The extension to the large energy case seems to require nontrivial improvements in the perturbative techniques for wave maps. \begin{theorem}\label{th:channelofenergywavemap} Fix $\beta\in (0,1)$.
There exist a small $\delta=\delta(\beta)>0$ and a sufficiently small $\epsilon_0=\epsilon_0(\beta)>0$, such that if $u$ is a classical wave map with energy $\mathcal{E}(\OR{u})<\epsilon_0^2$ satisfying \begin{equation}\label{eq:outgoingcondition} \|(u_0,\,u_1)\|_{\HL\left(B^c_{1+\delta}\cup B_{1-\delta}\right)}+\|\spartial u_0\|_{L^2}+\|\partial_ru_0+u_1\|_{L^2}\leq \delta \|(u_0,\,u_1)\|_{\HL}, \end{equation} then for all $t\ge 0$, we have \begin{equation}\label{eq:smallchannel} \int_{|x|>\beta+t}|\nabla_{x,t}u|^2(x,t)\,dx\ge \beta\,\|(u_0,\,u_1)\|_{\HL}^2. \end{equation} \end{theorem} \smallskip \noindent {\it Proof.} Denote $\epsilon:=\|(u_0,\,u_1)\|_{\HL}\lesssim \epsilon_0$. To apply Theorem \ref{th:globalregularity}, let us define the following frequency envelope \begin{equation} c_k:=\sup_{j\in\mathbb{Z}}2^{-\vartheta|k-j|}\|(P_ju_0,\,P_ju_1)\|_{\HL}. \end{equation} Then one can verify that $c=(c_k)$ is a frequency envelope and that $\left(\|P_k(u_0,u_1)\|_{\HL}\right)$ lies below it. In addition, $$\|(c_k)\|_{\ell^2}\lesssim \epsilon.$$ By Theorem \ref{th:globalregularity}, if $\epsilon_0$ is chosen sufficiently small, then the wave map $u$ is globally defined, and satisfies (\ref{eq:GlobalControl}). \\ Since the proof is a bit lengthy, we divide the arguments into several steps. \\ \smallskip \noindent {\bf Step 1: Reduction to proving channel of energy inequality for frequency pieces.}\\ In this step, our main goal is to show that there exists a set $\mathcal{K}$ of {\it good frequencies}, such that \begin{equation}\label{eq:goodfrequencydominates} \sum_{m\in\mathcal{K}}\|P_m(u_0,\,u_1)\|^2_{\HL}\ge (1-C\delta^{\frac{1}{12}})\initial^2, \end{equation} and that for any $m\in \mathcal{K}$, $2^m$ is {\it ``high frequency"}, and that it suffices to prove the channel of energy inequality for each $m\in \mathcal{K}$.\\ \noindent {\it Substep (1): Control of the low frequency component.}\\ Fix $k_0$ large, whose precise value is to be determined below.
We shall show that the total energy with frequency $\leq 2^{k_0}$ is small in a suitable sense. Assume firstly that $2^{-k_0}>C\delta$. Let us bound the low frequency energy of $(u_0,\,u_1)$, that is $$\left\|P_{\leq k_0}(u_0,\,u_1)\right\|_{\HL}.$$ We can write \begin{eqnarray*} &&\nabla u_0=(\nabla u_0)\chi_{B_{1+\delta}^c\cup B_{1-\delta}}+(\nabla u_0)\chi_{B_{1+\delta}\backslash B_{1-\delta}},\\ &&u_1=u_1\chi_{B_{1+\delta}^c\cup B_{1-\delta}}+ u_1\chi_{B_{1+\delta}\backslash B_{1-\delta}}. \end{eqnarray*} By the assumption on $(u_0,\,u_1)$, \begin{eqnarray*} \|(\nabla u_0)\chi_{B_{1+\delta}^c\cup B_{1-\delta}}\|_{L^2}&+&\|u_1\chi_{B_{1+\delta}^c\cup B_{1-\delta}}\|_{L^2}\\ &\lesssim&\delta \|(u_0,\,u_1)\|_{\HL}. \end{eqnarray*} Thus, \begin{eqnarray} &&\left\|P_{<k_0}\left(\nabla u_0\chi_{B_{1+\delta}^c\cup B_{1-\delta}}\right)\right\|_{L^2}\nonumber\\ &&\quad\quad+\,\left\|P_{<k_0}\left(u_1\chi_{B_{1+\delta}^c\cup B_{1-\delta}}\right)\right\|_{L^2}\lesssim\delta \|(u_0,\,u_1)\|_{\HL}. \label{eq:low1} \end{eqnarray} Denote $f=(\nabla u_0)\chi_{B_{1+\delta}\backslash B_{1-\delta}}$. Then $f$ is compactly supported in $\overline{B_{1+\delta}}\backslash B_{1-\delta}$, and $$\|f\|_{L^2}\leq \|(u_0,\,u_1)\|_{\HL}.$$ By Bernstein's inequality and then the Cauchy-Schwarz inequality, $$\|P_{\leq k_0}f\|_{L^2}\lesssim 2^{k_0}\|f\|_{L^1}\lesssim 2^{k_0}\delta^{\frac 12}\|f\|_{L^2}\lesssim 2^{k_0}\delta^{\frac 12}\initial.$$ Choosing $2^{-k_0}\sim \delta^{\frac{1}{6}}$, we obtain $\|P_{\leq k_0}f\|_{L^2}\lesssim \delta^{\frac{1}{3}}\initial$, that is, \begin{equation}\label{eq:lowfrequencypotential} \left\|P_{\leq k_0}\left[(\nabla u_0)\chi_{B_{1+\delta}\backslash B_{1-\delta}}\right]\right\|_{L^2}\lesssim \delta^{\frac{1}{3}}\initial. \end{equation} We can prove similarly that \begin{equation}\label{eq:lowfrequencytime} \left\|P_{\leq k_0}\left[u_1\chi_{B_{1+\delta}\backslash B_{1-\delta}}\right]\right\|_{L^2}\lesssim \delta^{\frac{1}{3}}\initial.
\end{equation} Combining (\ref{eq:low1}), (\ref{eq:lowfrequencypotential}) and (\ref{eq:lowfrequencytime}), we conclude that \begin{equation}\label{eq:lowfreq} \left\|P_{\leq k_0}(u_0,\,u_1)\right\|_{\HL}\lesssim \delta^{\frac{1}{3}}\initial. \end{equation} Thus the low frequency energy is small.\\ \medskip \noindent {\it Substep (2): persistence of condition (\ref{eq:outgoingcondition}) for most high frequencies.}\\ Let us now consider $P_k(\nabla u_0,\,u_1)$ for high frequency $2^{k}\ge2^{k_0}$. Fix small $\lambda>10\delta^{\frac{1}{6}}\sim 2^{-k_0}$, whose value is to be determined below. Let us firstly bound $$\|P_k(\nabla u_0,\,u_1)\|_{L^2\left(B_{1+\lambda}^c\cup B_{1-\lambda}\right)}.$$ We can decompose as before \begin{eqnarray*} &&\nabla u_0=(\nabla u_0)\chi_{B_{1+\delta}^c\cup B_{1-\delta}}+(\nabla u_0)\chi_{B_{1+\delta}\backslash B_{1-\delta}},\\ &&u_1=u_1\chi_{B_{1+\delta}^c\cup B_{1-\delta}}+ u_1\chi_{B_{1+\delta}\backslash B_{1-\delta}}. \end{eqnarray*} Denote $$\sigma_k:=\left\|P_k\left[(\nabla u_0,\,u_1)\chi_{B_{1+\delta}^c\cup B_{1-\delta}}\right]\right\|_{L^2\times L^2},$$ then it follows from (\ref{eq:outgoingcondition}) that \begin{equation} \sum_k\sigma_k^2\lesssim \delta^2\|(\nabla u_0,\,u_1)\|^2_{L^2\times L^2}. \end{equation} Now let us consider $P_k\left[(\nabla u_0,\,u_1)\chi_{B_{1+\delta}\backslash B_{1-\delta}}\right]$ for $x$ with $||x|-1|>\lambda$. Denote $$f:=(\nabla u_0) \chi_{B_{1+\delta}\backslash B_{1-\delta}},$$ then $$P_kf(x)=4^k\int_{R^2}\,\check{\Psi}(2^k(x-y))f(y)\,dy.$$ Since $f$ is supported in $1-\delta\leq |y|\leq 1+\delta$, and $||x|-1|>\lambda\gg \delta$, we get that $$|P_kf(x)|\lesssim\frac{4^k}{(2^k||x|-1|)^M}\delta^{\frac{1}{2}}\initial.$$ Hence \begin{gather*} \|P_kf\|_{L^2\left(B_{1+\lambda}^c\cup B_{1-\lambda}\right)}^2\lesssim 4^{(2-M)k}\delta \initial^2\int_{\big||x|-1\big|\geq \lambda} \frac{1}{(|x|-1)^{2M}}\,dx\\ \lesssim 4^{(2-M)k}\delta\lambda^{-2M}\initial^2 \end{gather*} Fix $M=3$. 
Then we conclude \begin{equation} \|P_kf\|_{L^2\left(B_{1+\lambda}^c\cup B_{1-\lambda}\right)}\leq 2^{-k}\delta^{\frac{1}{2}}\lambda^{-3}\initial. \end{equation} Take $\lambda=\delta^{\frac{1}{12}}$, then $$\|P_kf\|_{L^2\left(B_{1+\lambda}^c\cup B_{1-\lambda}\right)}\lesssim 2^{-k}\delta^{\frac{1}{4}}\initial,$$ that is, \begin{equation}\label{eq:controlpkf1} \left\|P_k\left[(\nabla u_0)\chi_{B_{1+\delta}\backslash B_{1-\delta}}\right]\right\|_{L^2\left(B_{1+\lambda}^c\cup B_{1-\lambda}\right)}\lesssim 2^{-k}\delta^{\frac{1}{4}}\initial. \end{equation} Similarly, we can prove that \begin{equation}\label{eq:controlpkf2} \left\|P_k\left[u_1\chi_{B_{1+\delta}\backslash B_{1-\delta}}\right]\right\|_{L^2\left(B_{1+\lambda}^c\cup B_{1-\lambda}\right)}\lesssim 2^{-k}\delta^{\frac{1}{4}}\initial. \end{equation} Now let us control $$\|\partial_rP_ku_0+P_ku_1\|_{L^2(B_{1+\lambda}\backslash B_{1-\lambda})}+\|\spartial P_ku_0\|_{L^2(B_{1+\lambda}\backslash B_{1-\lambda})}$$ for $k\ge k_0$ with $2^{-k_0}\sim \delta^{\frac{1}{6}}$. We have \begin{eqnarray*} &&\partial_r\int_{R^2}\,4^k\check{\Psi}(2^ky)u_0(x-y)\,dy=\int_{R^2}\,4^k\check{\Psi}(2^ky)\frac{x}{|x|}\cdot\nabla u_0(x-y)\,dy\\ &&=\int_{R^2}\,4^k\check{\Psi}(2^ky)\frac{x-y}{|x-y|}\cdot\nabla u_0(x-y)\,dy\\ &&\hspace{.4in}+\,\int_{R^2}\,4^k\check{\Psi}(2^ky)\left[\frac{x}{|x|}-\frac{x-y}{|x-y|}\right]\cdot\nabla u_0(x-y)\,dy\\ &&=I_k+II_k. \end{eqnarray*} Note that $$I_k+P_ku_1=\int_{R^2}\,4^k\check{\Psi}(2^ky)(\partial_ru_0+u_1)(x-y)\,dy.$$ Thus \begin{equation}\label{eq:control111} \|I_k+P_ku_1\|_{L^2}\lesssim \|P_k(\partial_ru_0+u_1)\|_{L^2}. 
\end{equation} Note also that, for $x\in B_{1+\lambda}\backslash B_{1-\lambda}$, $$\left|\nabla \frac{x}{|x|}\right|\lesssim 1,$$ thus, \begin{eqnarray*} |II_k|&\leq& \int_{R^2}\,4^k|\check{\Psi}|(2^ky)\left|\frac{x}{|x|}-\frac{x-y}{|x-y|}\right|\cdot|\nabla u_0(x-y)|\,dy\\ &\leq&\int_{|y|<2^{-\frac{k}{2}}}\,+\,\int_{|y|>2^{-\frac{k}{2}}}\\ &\lesssim&\int_{|y|<2^{-\frac{k}{2}}}2^{-\frac{k}{2}}4^k|\check{\Psi}|(2^ky)\cdot|\nabla u_0(x-y)|\,dy\\ &&\hspace{.3in}+\int_{|y|>2^{-\frac{k}{2}}}4^k\left|2^ky\right|^{-M}|\nabla u_0(x-y)|\,dy. \end{eqnarray*} Then a simple computation shows that \begin{eqnarray} &&\|II_k\|_{L^2(B_{1+\lambda}\backslash B_{1-\lambda})}\nonumber\\ &&\lesssim \int_{|y|<2^{-\frac{k}{2}}}2^{-\frac{k}{2}}\,4^k|\check{\Psi}|(2^ky)\cdot\|\nabla u_0\|_{L^2}\,dy+\int_{|y|>2^{-\frac{k}{2}}}4^k\left|2^ky\right|^{-M}\|\nabla u_0\|_{L^2}\,dy\nonumber\\ &&\lesssim 2^{-\frac{k}{2}}\initial.\label{eq:control112} \end{eqnarray} Thus combining (\ref{eq:controlpkf1}), (\ref{eq:controlpkf2}), (\ref{eq:control111}) and (\ref{eq:control112}), we get that \begin{equation}\label{eq:appout} \|\partial_rP_ku_0+P_ku_1\|_{L^2(B_{1+\lambda}\backslash B_{1-\lambda})}\lesssim 2^{-\frac{k}{2}}\initial+\|P_k(\partial_ru_0+u_1)\|_{L^2}. \end{equation} The bound \begin{equation}\label{eq:appradial} \|\spartial P_ku_0\|_{L^2(B_{1+\lambda}\backslash B_{1-\lambda})}\lesssim 2^{-\frac{k}{2}}\initial+\|P_k\,\spartial u_0\|_{L^2} \end{equation} follows similarly from the previous arguments.
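For the reader's convenience, let us record the elementary computation behind (\ref{eq:control112}). Changing variables $z=2^ky$ in the first integral gives \begin{equation*} \int_{|y|<2^{-\frac{k}{2}}}2^{-\frac{k}{2}}\,4^k|\check{\Psi}|(2^ky)\,dy\leq 2^{-\frac{k}{2}}\big\|\check{\Psi}\big\|_{L^1}\lesssim 2^{-\frac{k}{2}}, \end{equation*} while passing to polar coordinates in the second integral gives, for $M\ge 3$, \begin{equation*} \int_{|y|>2^{-\frac{k}{2}}}4^k\left|2^ky\right|^{-M}\,dy\lesssim 4^k\,2^{-Mk}\left(2^{-\frac{k}{2}}\right)^{2-M}=2^{-\frac{(M-2)k}{2}}\leq 2^{-\frac{k}{2}}. \end{equation*}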
\\ \medskip \noindent {\it Substep (3): Summary of estimates from substep (1) and substep (2) and the definition of good frequencies.}\\ From (\ref{eq:lowfreq}),(\ref{eq:controlpkf1}),(\ref{eq:controlpkf2}),(\ref{eq:appout}) and (\ref{eq:appradial}), we have, for $\delta^{\frac 16}\sim 2^{-k_0}$ \begin{eqnarray} &(1)&\,\|P_{<k_0}(\nabla u_0,\,u_1)\|_{L^2\times L^2}\lesssim \delta^{\frac{1}{3}}\initial;\label{eq:(1)}\\ &(2)&\,\sum_{k\ge k_0}\left\|P_k\left[(\nabla u_0,\,u_1)\chi_{B_{1+\delta}^c\cup B_{1-\delta}}\right]\right\|^2_{L^2\times L^2}\lesssim \delta^2\initial^2;\label{eq:(1.1)}\\ &(3)&\,\sum_{k\ge k_0}\,\left\|P_k\left[(\nabla u_0,\,u_1)\chi_{B_{1+\delta}\backslash B_{1-\delta}}\right]\right\|_{L^2\times L^2(B_{1+\lambda}^c\cup B_{1-\lambda})}\nonumber\\ &&\,\,\lesssim \delta^{\frac{1}{4}}\initial;\label{eq:(2)}\\ &&\nonumber\\ &(4)&\|\partial_rP_ku_0+P_ku_1\|_{L^2(B_{1+\lambda}\backslash B_{1-\lambda})}+\|\spartial P_ku_0\|_{L^2(B_{1+\lambda}\backslash B_{1-\lambda})}\nonumber\\ &&\,\,\lesssim 2^{-\frac{k}{2}}\initial+\|P_k(\partial_ru_0+u_1)\|_{L^2}+\|P_k\,\spartial u_0\|_{L^2}.\label{eq:(3)} \end{eqnarray} By (\ref{eq:(1)}), we can focus on the high frequencies $2^{k}\ge 2^{k_0}$. Indeed, we have \begin{equation}\label{eq:highfrequency} \left\|P_{k\ge k_0}(\nabla u_0,\,u_1)\right\|_{L^2\times L^2}\ge \left(1-C\delta^{\frac{1}{4}}\right)\initial. 
\end{equation} By (\ref{eq:(1.1)}) and (\ref{eq:(2)}), we see that \begin{eqnarray*} &&\sum_{k\ge k_0}\|P_k(\nabla u_0,\,u_1)\|^2_{L^2\times L^2\left(B^c_{1+\lambda}\cup B_{1-\lambda}\right)}\\ &&=\sum_{k\ge k_0}\bigg\|P_k\left[(\nabla u_0,\,u_1)\chi_{B_{1+\delta}^c\cup B_{1-\delta}}\right]\\ &&\hspace{.7in}+\,P_k\left[(\nabla u_0,\,u_1)\chi_{B_{1+\delta}\backslash B_{1-\delta}}\right]\bigg\|^2_{L^2\times L^2(B^c_{1+\lambda}\cup B_{1-\lambda})}\\ &&\lesssim \sum_{k\ge k_0}\left\|P_k\left[(\nabla u_0,\,u_1)\chi_{B_{1+\delta}^c\cup B_{1-\delta}}\right]\right\|_{L^2\times L^2}^2\\ &&\hspace{.3in}+\,\sum_{k\ge k_0}\left\|P_k\left[(\nabla u_0,\,u_1)\chi_{B_{1+\delta}\backslash B_{1-\delta}}\right]\right\|^2_{L^2\times L^2\left(B^c_{1+\lambda}\cup B_{1-\lambda}\right)}\\ &&\lesssim\delta^{\frac{1}{2}}\initial^2. \end{eqnarray*} Then by the above calculation and (\ref{eq:(3)}), we get that \begin{eqnarray*} &&\sum_{k\ge k_0}\left[\|\partial_rP_ku_0+P_ku_1\|_{L^2}^2+\|\spartial P_ku_0\|_{L^2}^2\right]\\ &&\lesssim \sum_{k\ge k_0}\|P_k(\nabla u_0,\,u_1)\|^2_{L^2\times L^2\left(B_{1+\lambda}^c\cup B_{1-\lambda}\right)}+\\ &&\hspace{.3in}+\,\sum_{k\ge k_0}\left[\|\partial_rP_ku_0+P_ku_1\|^2_{L^2\left(B_{1+\lambda}\backslash B_{1-\lambda}\right)}+\|\spartial P_ku_0\|^2_{L^2\left(B_{1+\lambda}\backslash B_{1-\lambda}\right)}\right]\\ &&\lesssim\delta^{\frac{1}{2}}\initial^2+\sum_{k\ge k_0}2^{-k}\initial^2+\\ &&\hspace{.3in}+\,\sum_{k\ge k_0}\left(\|P_k(\partial_ru_0+u_1)\|^2_{L^2}+\|P_k\spartial u_0\|^2_{L^2}\right)\\ &&\lesssim \delta^{\frac{1}{6}}\initial^2.
\end{eqnarray*} Hence, if we define the set \begin{eqnarray*} &&\mathcal{K}:=\bigg\{k\ge k_0:\,\|(P_ku_0,\,P_ku_1)\|_{\HL\left(B_{1+\lambda}^c\cup B_{1-\lambda}\right)}+\|\partial_rP_ku_0+P_ku_1\|_{L^2}\\ &&\hspace{.6in}+\,\|\spartial P_ku_0\|_{L^2}\leq \delta^{\frac{1}{100}}\|P_k(u_0,\,u_1)\|_{\HL}\bigg\}, \end{eqnarray*} we can estimate that \begin{eqnarray*} &&\sum_{k\ge k_0,\,k\not\in\mathcal{K}}\|(P_ku_0,\,P_ku_1)\|_{\HL}^2\\ &&\lesssim \delta^{-\frac{1}{50}}\sum_{k\ge k_0,\,k\not\in\mathcal{K}}\bigg[\|(P_ku_0,\,P_ku_1)\|^2_{\HL\left(B_{1+\lambda}^c\cup B_{1-\lambda}\right)}\\ &&\hspace{1.2in}+\,\|\partial_rP_ku_0+P_ku_1\|^2_{L^2}+\|\spartial P_ku_0\|^2_{L^2}\bigg]\\ &&\lesssim \delta^{-\frac{1}{50}}\delta^{\frac{1}{6}}\initial^2\lesssim \delta^{\frac{1}{12}}\initial^2. \end{eqnarray*} Hence the total energy at frequencies $\sim2^k$ with $k\ge k_0,\,k\not\in \mathcal{K}$ is negligible, and we will focus on the high frequency pieces $P_k(u_0,\,u_1)$ with $2^k\ge 2^{k_0}$ and $k\in\mathcal{K}$ below. \\ \medskip \noindent {\it Substep (4): Reduction to channel of energy inequality for frequencies in $\mathcal{K}$.}\\ Fix $m\in\mathcal{K}$, then \begin{multline}\label{eq:outgoingfork} \|(P_mu_0,\,P_mu_1)\|_{\HL\left(B_{1+\lambda}^c\cup B_{1-\lambda}\right)}+\|\partial_rP_mu_0+P_mu_1\|_{L^2}+\|\spartial P_mu_0\|_{L^2}\\ \leq \delta^{\frac{1}{100}}\|P_m(u_0,\,u_1)\|_{\HL}. \end{multline} We claim that if we can show for each $m\in \mathcal{K}$ that \begin{equation}\label{eq:channelform} \int_{|x|\ge \frac{1+\beta}{2}+t}\,|\nabla_{x,t}P_mu|^2(x,t)\,dx\ge \frac{1+\beta}{2}\|P_m(u_0,\,u_1)\|_{\HL}^2-C\epsilon^2c_m^2, \end{equation} for all $t\ge 0$, then we will be done. 
Indeed, write for each $t\ge 0$, $$P_m\nabla_{x,t}u=P_m\left[(\nabla_{x,t}u)\chi_{|x|>\beta+t}\right]+P_m\left[(\nabla_{x,t}u)\chi_{|x|\leq\beta+t}\right].$$ We can estimate, for $|x|>\frac{\beta+1}{2}+t$, that \begin{eqnarray*} &&\left|P_m\left[(\nabla_{x,t}u)\chi_{|x|\leq\beta+t}\right](x)\right|\\ &&\leq 4^m\int_{|x-y|\leq \beta+t}\big|\check{\Psi}(2^my)\big|\big|\nabla_{x,t}u(x-y,t)\big|\,dy\\ &&\leq 4^m\int_{|y|>\frac{1-\beta}{2}}\big|\check{\Psi}(2^my)\big||\nabla_{x,t}u(x-y,t)|\,dy\\ &&\leq 4^m\int_{|y|>\frac{1-\beta}{2}}\big|2^my\big|^{-M}|\nabla_{x,t}u(x-y,t)|\,dy. \end{eqnarray*} Thus, \begin{eqnarray*} &&\left\|P_m\left[(\nabla_{x,t}u)\chi_{|x|\leq \beta+t}\right]\right\|_{L^2\left(|x|>\frac{\beta+1}{2}+t\right)}\\ &&\lesssim 4^m\int_{|y|>\frac{1-\beta}{2}}\left|2^my\right|^{-M}\,dy\,\initial\\ &&\lesssim C(\beta)\,2^{-(M-2)m}\initial. \end{eqnarray*} Consequently, we get that \begin{eqnarray*} &&\sum_{m\ge k_0}\|P_m\nabla_{x,t}u\|^2_{L^2\left(|x|>\frac{1+\beta}{2}+t\right)}\\ &&\leq \left(1+\delta^{\frac{1}{50}}\right)\sum_{m\ge k_0}\left\|P_m\left[(\nabla_{x,t}u)\chi_{|x|> \beta+t}\right]\right\|^2_{L^2}+\\ &&\hspace{.5in}+\,2\delta^{-\frac{1}{50}}\sum_{m\ge k_0}\left\|P_m\left[(\nabla_{x,t}u)\chi_{|x|\leq \beta+t}\right]\right\|_{L^2\left(|x|>\frac{\beta+1}{2}+t\right)}^2\\ &&\leq \left(1+\delta^{\frac{1}{50}}\right)\int_{|x|>\beta+t}|\nabla_{x,t}u|^2(x,t)\,dx+\\ &&\hspace{.6in}+\,C(\beta)\delta^{-\frac{1}{50}}\sum_{m\ge k_0}2^{-2(M-2)m}\initial^2\\ &&\leq \left(1+\delta^{\frac{1}{50}}\right)\int_{|x|>\beta+t}|\nabla_{x,t}u|^2(x,t)\,dx+C(\beta)\delta^{-\frac{1}{50}}4^{-(M-2)k_0}\initial^2\\ &&\leq \left(1+\delta^{\frac{1}{50}}\right)\int_{|x|>\beta+t}|\nabla_{x,t}u|^2(x,t)\,dx+C(\beta)\delta^{\frac{1}{3}}\initial^2, \end{eqnarray*} if we choose $M=4$.
Therefore if (\ref{eq:channelform}) holds, then by the choice of $\mathcal{K}$, (\ref{eq:highfrequency}), $\|(c_k)\|_{l^2}\lesssim \epsilon$, and the above calculation, we see that \begin{eqnarray*} &&(1-C\delta^{\frac{1}{12}}-C\epsilon^2)\frac{1+\beta}{2}\initial^2\\ &&\leq\sum_{m\in\mathcal{K}}\left(\frac{1+\beta}{2}\|P_m(u_0,\,u_1)\|^2_{\HL}-C^2\epsilon^2c_m^2\right)\\ &&\leq \sum_{m\in\mathcal{K}}\|P_m\nabla_{x,t}u\|^2_{L^2\left(|x|>\frac{1+\beta}{2}+t\right)}\\ &&\leq\left(1+\delta^{\frac{1}{50}}\right)\int_{|x|>\beta+t}|\nabla_{x,t}u|^2(x,t)\,dx+C(\beta)\delta^{\frac{1}{3}}\initial^2. \end{eqnarray*} The channel of energy inequality (\ref{eq:smallchannel}) follows if $\delta=\delta(\beta)$ and $\epsilon_0=\epsilon_0(\beta)$ are taken sufficiently small. Our goal is thus reduced to proving (\ref{eq:channelform}). \\ \medskip \noindent {\bf Step 2: Control of the perturbative part of the nonlinearity.}\\ It is proved in \cite{DJKM} that (\ref{eq:channelform}) holds for solutions to the linear wave equation with this type of outgoing initial data in dimensions $\ge 3$, although the results we need here are more quantitative; see Lemma \ref{lm:linearchannel} above. Ideally one would like to say that the nonlinearity is negligible since we have small solutions. However, as is now well known, even in the small energy case, the nonlinearity of the wave map equation cannot be treated entirely perturbatively. Rather, we need to perform a gauge transform to modify the nonlinearity so that it becomes perturbative. Thus it is important to understand how the gauge transform affects the channel of energy inequality. The arguments we use here are mostly from Tao \cite{TaoSmallEnergy} and Tataru \cite{Tataru3}. We shall present the details of the proof below, partly for the convenience of the reader, and partly because those works did not explicitly quantify the nonlinear effects (which are implicit in the proofs).
In this step however, we shall firstly control the part of the nonlinearity that is perturbative.\\ Let $$\psi:=P_mu.$$ Then $\psi$ verifies \begin{equation}\label{eq:forpsi} \left\{\begin{array}{rll} \partial_{tt}\psi-\Delta\psi&=&P_m\left(u\,\partial^{\alpha}u^{\dagger}\partial_{\alpha}u\right)\\ \OR{\psi}(0)&=&(P_mu_0,\,P_mu_1). \end{array}\right. \end{equation} Let us rewrite the nonlinearity $P_m\left(u\,\partial^{\alpha}u^{\dagger}\partial_{\alpha}u\right)$ as \begin{eqnarray*} &&P_m\left(u\,\partial^{\alpha}u^{\dagger}\partial_{\alpha}u\right)\\ &&=P_m\left(u_{\ge m-10}\,\partial^{\alpha}u^{\dagger}\partial_{\alpha}u\right)\\ &&\hspace{.3in}+\,P_m\left(u_{<m-10}\,\partial^{\alpha}u^{\dagger}_{>m+10}\partial_{\alpha}u\right)\\ &&\hspace{.3in}+\,P_m\left(u_{<m-10}\,\partial^{\alpha}u^{\dagger}_{m-10\leq\cdot\leq m+10}\partial_{\alpha}u_{\ge m-10}\right)\\ &&\hspace{.3in}+\,P_m\left(u_{<m-10}\,\partial^{\alpha}u^{\dagger}_{m-10\leq \cdot\leq m+10}\partial_{\alpha}u_{< m-10}\right)\\ &&\hspace{.3in}+\,P_m\left(u_{<m-10}\,\partial^{\alpha}u^{\dagger}_{<m-10}\partial_{\alpha}u_{> m+10}\right)\\ &&\hspace{.3in}+\,P_m\left(u_{<m-10}\,\partial^{\alpha}u^{\dagger}_{<m-10}\partial_{\alpha}u_{m-10\leq\cdot\leq m+10}\right)\\ &&\hspace{.3in}+\,P_m\left(u_{<m-10}\,\partial^{\alpha}u^{\dagger}_{<m-10}\partial_{\alpha}u_{<m-10}\right)\\ &&=I_1+I_2+I_3+I_4+I_5+I_6+I_7. \end{eqnarray*} Denote $$\epsilon:=\initial\leq \epsilon_0.$$ We firstly peel off the perturbative part of the nonlinearity. We shall call $h$ {\it disposable} if $$\sup_{m'=m+O(1)}\|P_{m'}h\|_{N[m']}\lesssim \epsilon\,c_m.$$ Here the $O(1)$ term is a number of size $\sim 10$. The main use of this term is to deal with some technical ``frequency leakage" issues. \footnote{On a technical level, to apply the estimates from Theorem \ref{th:propertySN}, we need the right hand sides to carry the frequency localization operator $P_k$. 
} We shall call $h$ {\it disposable in the generalized sense} if there exists a sequence of disposable $h_k$ with $h_k\to h$ in the sense of distributions. Note that the notion of being disposable and that of being disposable in the generalized sense are not the same, due to the technical issue with the space $N[m]$; see, e.g., page 324 of \cite{TataruRough} for more discussion. Note that $I_4=I_6$. Furthermore, analyzing the frequency support of the trilinear expressions, we obtain that $I_5=I_7=0$. We claim that $I_1,\,I_2,\,I_3$ are disposable, that is, \begin{claim}\label{claim:Ijareperturbative} For $j=1,\,2,\,3$ we have \begin{equation}\label{eq:perturbativenonlinearity} \sup_{m'=m+O(1)}\left\|P_{m'}I_j\right\|_{N[m']}\lesssim \epsilon c_m. \end{equation} \end{claim} We also claim that \begin{claim}\label{claim:commutingperturbative} For $m'=m+O(1)$, \begin{multline}\label{eq:commutatvieerror} \left\|P_{m'}\left[P_m\left(u_{<m-10}\,\partial^{\alpha}u^{\dagger}_{m-10<\cdot<m+10}\partial_{\alpha}u_{<m-10}\right)-u_{<m-10}\,\partial^{\alpha}\psi^{\dagger}\partial_{\alpha}u_{<m-10}\right]\right\|_{N[m']}\\ \lesssim \epsilon c_m, \end{multline} where $\psi$ is defined in (\ref{eq:forpsi}); similarly, \begin{multline}\label{eq:commutatvieerror2} \left\|P_{m'}\left[P_m\left(\partial_{\alpha}u_{<m-10}\,u^{\dagger}_{<m-10}\,\partial^{\alpha}u_{m-10<\cdot<m+10}\right)-\partial_{\alpha}u_{<m-10}\,u^{\dagger}_{<m-10}\,\partial^{\alpha}\psi\right]\right\|_{N[m']}\\ \lesssim \epsilon c_m. \end{multline} \end{claim} We postpone the proof of Claim \ref{claim:Ijareperturbative} and Claim \ref{claim:commutingperturbative} to the end of this section.
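For the reader's convenience, let us illustrate the frequency support analysis behind $I_5=I_7=0$. With the usual Littlewood-Paley conventions, $u_{<m-10}$ has Fourier support in $\{|\xi|\lesssim 2^{m-9}\}$ and $u_{>m+10}$ in $\{|\xi|\gtrsim 2^{m+9}\}$, so the trilinear expression in $I_5$ has Fourier support in \begin{equation*} \left\{\xi:\,|\xi|\ge 2^{m+9}-2\cdot 2^{m-9}\right\}\subset\left\{\xi:\,|\xi|>2^{m+2}\right\}, \end{equation*} which is disjoint from the Fourier support of $P_m$; for $I_7$, all three factors are at frequencies $\lesssim 2^{m-9}$, so the product has Fourier support in $\{|\xi|\leq 3\cdot 2^{m-9}\}$, which is again disjoint from the Fourier support of $P_m$.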
Hence, by (\ref{eq:commutatvieerror}) we can rewrite the equation for $\psi$ as \begin{equation}\label{eq:intermediateform} \partial_{tt}\psi-\Delta\psi=\widetilde{f}+2u_{<m-10}\,\partial_{\alpha}u^{\dagger}_{<m-10}\partial^{\alpha}\psi, \end{equation} where $$\sup_{m'=m+O(1)}\left\|P_{m'}\widetilde{f}\right\|_{N[m']}\lesssim \epsilon c_m.$$ Let us note the relation $$u^{\dagger}\,\partial_{\alpha}u=0,$$ which follows by differentiating the constraint $u^{\dagger}u=1$. It follows that \begin{eqnarray*} 0&=&P_m\left(\partial^{\alpha}u_{<m-10}\ud\,\partial_{\alpha}u\right)\\ &=&P_m\left(\partial^{\alpha}u_{<m-10}\ud_{\ge m-10}\partial_{\alpha}u\right)+P_m\left(\partial^{\alpha}u_{<m-10}\ud_{< m-10}\partial_{\alpha}u_{m-10\leq\cdot\leq m+10}\right)\\ &=&I+II. \end{eqnarray*} Using the trilinear estimate, we can estimate the term $I$, for $m'=m+O(1)$, as follows: \begin{eqnarray*} &&\left\|P_{m'}\left(\partial^{\alpha}u_{<m-10}\ud_{\ge m-10}\partial_{\alpha}u\right)\right\|_{N[m']}\\ &&\lesssim \sum_{k_1<m-10,\,m-10\leq k_2\leq m+10,\,k_3\leq m+O(1)}\left\|P_{m'}\left(\partial^{\alpha}u_{k_1}\ud_{k_2}\,\partial_{\alpha}u_{k_3}\right)\right\|_{N[m']}+\\ &&\hspace{.5in}+\,\sum_{k_1<m-10,\,k_2>m+10,\,k_3=k_2+O(1)}\left\|P_{m'}\left(\partial^{\alpha}u_{k_1}\ud_{k_2}\,\partial_{\alpha}u_{k_3}\right)\right\|_{N[m']}\\ &&\lesssim \sum_{k_1<m-10,\,k_3\leq m+O(1)} 2^{-\kappa (m-\min\{k_1,\,k_3\})}c_{k_1}c_mc_{k_3}+\\ &&\hspace{.5in}+\,\sum_{k_1<m-10,\,k_2>m+10}2^{-\kappa(k_2-m)}2^{-\kappa(k_2-k_1)}c_{k_1}c_{k_2}^2\\ &&\lesssim\epsilon^2 c_m. \end{eqnarray*} Consequently, since $II=-I$ and $P_{m}$ is bounded on $N[m']$, we see that \begin{equation*} \sup_{m'=m+O(1)}\|P_{m'}II\|_{N[m']}\lesssim \epsilon^2 c_m, \end{equation*} and thus $II$ is disposable.
Thus by (\ref{eq:commutatvieerror2}) we can rewrite the equation for $\psi$ as \begin{equation}\label{eq:mainequationforpsi} \partial_{tt}\psi-\Delta\psi=f+2\left(u_{<m-10}\,\partial^{\alpha}u^{\dagger}_{<m-10}-\partial^{\alpha}u_{<m-10}\,u^{\dagger}_{<m-10}\right)\partial_{\alpha}\psi, \end{equation} where $f$ satisfies \begin{equation} \sup_{m'=m+O(1)}\|f\|_{N[m']}\lesssim \epsilon c_m. \end{equation} \\ \medskip \noindent {\bf Step 3: Construction of the micro-local gauge.}\\ To deal with the non-perturbative part of the nonlinearity, we need to use the idea of Tao \cite{TaoSmallEnergy}.\\ We have \begin{equation}\label{eq:modified} \partial_{tt}\psi-\Delta\psi=f+2\left(u_{<m-10}\,\partial^{\alpha}u^{\dagger}_{<m-10}-\partial^{\alpha}u_{<m-10}\,u^{\dagger}_{<m-10}\right)\partial_{\alpha}\psi, \end{equation} where $f$ satisfies \begin{equation}\label{eq:disposablef} \sup_{m'=m+O(1)}\|f\|_{N[m']}\lesssim \epsilon c_m. \end{equation} Let $w=U_{<m-10}\psi$ for some matrix $U_{<m-10}$ to be determined below, then (\ref{eq:modified}) implies that \begin{eqnarray*} &&-\partial^{\alpha}\partial_{\alpha}w=-\partial^{\alpha}\partial_{\alpha}U_{<m-10}\,\psi-2\partial^{\alpha}U_{<m-10}\partial_{\alpha}\psi\\ &&\hspace{.5in}+\,U_{<m-10}\left[f+2\left(u_{<m-10}\,\partial^{\alpha}u^{\dagger}_{<m-10}-\partial^{\alpha}u_{<m-10}\,u^{\dagger}_{<m-10}\right)\partial_{\alpha}\psi\right]\\ &&=\left(\Box U_{<m-10}\right)\,\psi+U_{<m-10}f+\\ &&\hspace{.3in}+\,2\left[U_{<m-10}\left(u_{<m-10}\,\partial^{\alpha}u^{\dagger}_{<m-10}-\partial^{\alpha}u_{<m-10}\,u^{\dagger}_{<m-10}\right)-\partial^{\alpha} U_{<m-10}\right]\partial_{\alpha}\psi. 
\end{eqnarray*} Then \begin{eqnarray} &&\partial_{tt}w-\Delta w=\left(\Box U_{<m-10}\right)\psi+U_{<m-10}f\label{eq:modifiedwithg}\\ &&+\,2\left[U_{<m-10}\left(u_{<m-10}\,\partial^{\alpha}u^{\dagger}_{<m-10}-\partial^{\alpha}u_{<m-10}\,u^{\dagger}_{<m-10}\right)-\partial^{\alpha} U_{<m-10}\right]\partial_{\alpha}\psi.\nonumber \end{eqnarray} Ideally we would like to choose $U_{<m-10}$ so that $$\partial^{\alpha}U_{<m-10}=U_{<m-10}\left(u_{<m-10}\partial^{\alpha}u^{\dagger}_{<m-10}-\partial^{\alpha}u_{<m-10}\,u^{\dagger}_{<m-10}\right),$$ for all $\alpha$, then the term on the right hand side of (\ref{eq:modifiedwithg}) containing $\partial_{\alpha}\psi$ would be eliminated, and we would be in a truly semilinear case. However this is impossible due to compatibility issues, see the discussions in \cite{TaoSmallEnergy}. Instead we will follow Tataru's modification of Tao's idea in \cite{TataruRough} to construct a micro-local approximate solution. \\ Fix large $N>1$. Define inductively \begin{eqnarray*} &&U^N_{-N}=I;\\ &&U^N_k=U^N_{<k-10}\left(u_{<k-10}u^{\dagger}_k-u_ku^{\dagger}_{<k-10}\right), \end{eqnarray*} where $U^N_{<k-10}=\sum\limits_{-N<j<k-10}U^N_j+I$ if $k>-N+11$ and $U^N_{<k-10}=I$ otherwise. In the end we will pass $N\to\infty$, but we need to obtain uniform in $N$ estimates for $U^N_k$ in order to do that. We claim the following properties for $U^N_k$ and $U^N_{<k}$ with $-N<k\leq N$: \begin{claim}\label{cl:uniformU} For $\epsilon$ sufficiently small, \begin{eqnarray} &&U^N_k\,\,{\rm has\,\,frequency\,\,support\,\,} 2^{k-2}\leq |\xi|\leq 2^{k+2};\\ &&\sup_{k'=k+O(1)}\left\|P_{k'}U^N_k\right\|_{S[k']}\lesssim c_k;\label{eq:bound_ck}\\ &&\left\|U^N_{<k}\left(U^N_{<k}\right)^{\dagger}-I\right\|_{S(c)}\lesssim \sqrt{\epsilon}.\label{eq:orthSc} \end{eqnarray} \end{claim} We shall prove the claim inductively. 
For $k=-N+1$, the claim follows from the fact that, by Theorem \ref{th:globalregularity}, $$\|u\|_{S(c)}\lesssim 1.$$ Suppose the claim is true up to $k-1$; let us prove that it also holds for $k$. A crucial point is the following algebraic identity: \begin{eqnarray} &&U^N_k\left(U^N_{<k-10}\right)^{\dagger}+U^N_{<k-10}(U^N_k)^{\dagger}\\ &&=U^N_{<k-10}\left(u_{<k-10}u^{\dagger}_k-u_ku^{\dagger}_{<k-10}\right)\left(U^N_{<k-10}\right)^{\dagger}+\\ &&\hspace{.3in}+\,U^N_{<k-10}\left(u_ku^{\dagger}_{<k-10}-u_{<k-10}u^{\dagger}_k\right)\left(U^N_{<k-10}\right)^{\dagger}\\ &&=0. \end{eqnarray} We also note that $U^N_j$ is anti-symmetric if $-N<j\leq-N+11$, which is an easy consequence of the definition of $U^N_j$.\\ Thus by the anti-symmetry of $U^N_j$ for $-N<j\leq -N+11$, we get that \begin{eqnarray*} &&U^N_{<k}\left(U^N_{<k}\right)^{\dagger}=\left(\sum_{-N\leq j<k}U^N_j\right)\left(\sum_{-N\leq j<k}\left(U^N_j\right)^{\dagger}\right)\\ &&=\sum_{-N\leq j<j'-10<j'<k}U^N_j\left(U^N_{j'}\right)^{\dagger}+\sum_{-N\leq j'<j-10<j<k}U^N_j\left(U^N_{j'}\right)^{\dagger}+\\ &&\hspace{.3in}+\,\sum_{|j-j'|\leq 10, \,-N< j,\,j'<k} U^N_j\left(U^N_{j'}\right)^{\dagger}+\sum_{-N<j\leq -N+10}\left[\left(U^N_{j}\right)^{\dagger}+U^N_j\right]+I\\ &&=\sum_{-N+10< j'<k}U^N_{<j'-10}\left(U^N_{j'}\right)^{\dagger}+\sum_{-N+10< j<k}U^N_{j}\left(U^N_{<j-10}\right)^{\dagger}+I+\\ &&\hspace{.5in}+\,\sum_{|j-j'|\leq 10, \,-N< j,\,j'<k} U^N_j\left(U^N_{j'}\right)^{\dagger}\\ &&=I+\sum_{-N<j,\,j'<k,\,|j-j'|\leq 10}U^N_j\left(U^N_{j'}\right)^{\dagger}. 
\end{eqnarray*} Simplifying the above, we get that $$U^N_{<k}\left(U^N_{<k}\right)^{\dagger}-I=\sum_{-N<j,\,j'<k,\,|j-j'|\leq 10}U^N_j\left(U^N_{j'}\right)^{\dagger}.$$ Hence by (\ref{eq:bound_ck}) from induction, \begin{eqnarray*} &&\left\|U^N_{<k}\left(U^N_{<k}\right)^{\dagger}-I\right\|_{L^{\infty}}\\ &&\lesssim \sum_{-N<j,\,j'<k,\,|j-j'|\leq 10}\left\|U^N_j\right\|_{L^{\infty}}\left\|U^N_{j'}\right\|_{L^{\infty}}\\ &&\lesssim \sum_{-N<j,\,j'<k,\,|j-j'|\leq 10}\,\,\sum_{j_1=j+O(1),\,j_2=j'+O(1)}\left\|P_{j_1}U^N_j\right\|_{S[j_1]}\left\|P_{j_2}U^N_{j'}\right\|_{S[j_2]}\\ &&\lesssim \sum_{-N<j<k}c_j^2\lesssim \epsilon^2. \end{eqnarray*} In the second inequality above, we used the fact that $U^N_j=\sum_{j_1=j+O(1)}P_{j_1}U_j^N$, which follows from the frequency support property of $U^N_j$. We shall use this trick often, as a substitute for a bound on $\|U_j^N\|_{S[j]}$, which we do not have. Below we will omit the routine details when we use the same trick. In particular, combining the above with the induction bound (\ref{eq:bound_ck}), we see that $\|U^N_{<k}\|_{S(1)}\leq C$ for some universal constant $C$ (by choosing $\epsilon$ small).\\ Similarly, for each $k'<k+O(1)$, by the property of $S[k]$ spaces and induction, \begin{eqnarray*} &&\left\|P_{k'}\left[U_{<k}^N\left(U^N_{<k}\right)^{\dagger}\right]\right\|_{S[k']}\\ &&\lesssim \sum_{-N<j,\,j'<k,\,|j-j'|\leq 10}\left\|P_{k'}\left[U^N_j\left(U^N_{j'}\right)^{\dagger}\right]\right\|_{S[k']}\\ &&\lesssim\sum_{O(1)+k'<j<k}2^{-\kappa (j-k')_{+}}c_j^2\\ &&\lesssim \sum_{O(1)+k'<j<k}2^{-(\kappa-\vartheta) (j-k')_{+}}c_jc_{k'}\lesssim \epsilon\,c_{k'}. \end{eqnarray*} Combining the above two estimates, (\ref{eq:orthSc}) follows. \\ The estimate for $\sup_{k'=k+O(1)}\left\|P_{k'}U^N_k\right\|_{S[k']}$ then follows from the definition and the fact that $\|u_{<k-10}\|_{S(c)}$, $\|U^N_{<k-10}\|_{S(1)}$ are universally bounded. The support property is obvious. 
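\noindent{\it Remark.} In the estimates above, and repeatedly below, we use the standard frequency envelope properties (assuming, as is customary, that the envelope $(c_k)$ has been fixed earlier with slack exponent $\vartheta$):
\begin{equation*}
c_{j}\leq 2^{\vartheta|j-k|}\,c_{k}\quad{\rm for\,\,all\,\,}j,\,k,\qquad \sum_{j}c_{j}^{2}\lesssim \epsilon^{2}.
\end{equation*}
In particular, for $j>k'$ one may trade part of the exponential gain for a change of envelope frequency, $2^{-\kappa(j-k')}c_{j}\leq 2^{-(\kappa-\vartheta)(j-k')}c_{k'}$, which is how factors such as $2^{-(\kappa-\vartheta)(j-k')_{+}}c_{j}c_{k'}$ arise.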
\\ Using these uniform estimates, we can let $N\to\infty$ and obtain a limit along a subsequence of $N$, so that $$U_k:=\lim_{N_i\to\infty}U^{N_i}_{k}, \,\,\,\,U_{<k}:=\lim_{N_i\to\infty}U^{N_i}_{<k},$$ exist in the sense of distributions, for each $k$. Since $U^N_k$ are frequency localized and have bounded overlap in frequency support, we can conclude that \begin{equation}\label{eq:iterativeU} U_{<k}=\sum_{k'<k}U_{k'}+I,\,\,{\rm and}\,\,U_k=U_{<k-10}(u_{<k-10}\ud_k-u_k\ud_{<k-10}). \end{equation} In addition, $U_k$, $U_{<k}$ satisfy the same estimates claimed for $U^N_k,\,U^N_{<k}$ above. As a consequence, we have \begin{equation}\label{eq:boundforUk} \sup_{k'=k+O(1)}\left\|P_{k'}U_k\right\|_{S[k']}\lesssim c_k,\,\,\,\,{\rm and}\,\,\left\|U_{<k}\right\|_{S(c)}\lesssim 1. \end{equation} This is a direct consequence of the behavior of $S[k]$ under weak convergence; see the remark below Theorem \ref{th:propertySN}. \\ \medskip \noindent {\bf Step 4: Control of the nonlinearity after applying the gauge transform.}\\ We shall show that the terms on the right hand side of (\ref{eq:modifiedwithg}) are all disposable.\\ \noindent {\it Substep (1): the terms involving $\Box U_k$.}\\ To control the terms $\left(\Box U_{<m-10}\right)\psi$, we need to control $\Box U^N_{<m-10}$ uniformly for all large $N$. By definition, \begin{eqnarray*} &&\Box U^N_k=\left(\Box U^N_{<k-10}\right)\left(u_{<k-10}u^{\dagger}_k-u_ku^{\dagger}_{<k-10}\right)\\ &&\hspace{.4in}-\,2\partial^{\alpha}U^N_{<k-10}\partial_{\alpha}\left(u_{<k-10}u^{\dagger}_k-u_ku^{\dagger}_{<k-10}\right)\\ &&\hspace{.4in}+\,U^N_{<k-10}\left(\Box u_{<k-10}u^{\dagger}_k+u_{<k-10}\Box u^{\dagger}_k-\Box u_k\,u^{\dagger}_{<k-10}-u_k\Box u_{<k-10}^{\dagger}\right)\\ &&\hspace{.4in}+\,2U^N_{<k-10}\left(\partial^{\alpha}u_k\partial_{\alpha} \ud_{<k-10}-\partial^{\alpha}u_{<k-10}\partial_{\alpha}u_k^{\dagger}\right)=I+II+III+IV. 
\end{eqnarray*} We claim that, with $\nu=\frac{\kappa}{32}$ and uniformly for all large $N$, the following holds. \begin{claim}\label{claim:boxUterm} \begin{equation}\label{eq:boxUestimate} \sup_{j'=j+O(1)}\left\|P_{j'}\left(\Box U^N_k\phi\right)\right\|_{N[j']}\lesssim 2^{-\nu(j-k)}c_k\|\phi\|_{S[j]}, \end{equation} for all $\phi$ with frequency support in $2^{j-5/2}\leq \cdot\leq 2^{j+5/2}$ and $k<j-7$. \end{claim} Assuming this claim for the moment, we can estimate for $m'=m+O(1)$ \begin{eqnarray*} &&\left\|P_{m'}\left[\Box U^N_{<m-10}\,\psi\right]\right\|_{N[m']}\\ &&\lesssim \sum_{k<m-10}\left\|P_{m'}\left[\Box U^N_k\,\psi\right]\right\|_{N[m']}\\ &&\lesssim \sum_{k<m-10}2^{-\nu(m-k)}c_kc_m\lesssim \epsilon\, c_m, \end{eqnarray*} and thus the first term on the right hand side of (\ref{eq:modifiedwithg}) is disposable in the generalized sense.\\ We shall prove (\ref{eq:boxUestimate}) inductively. It is clear that (\ref{eq:boxUestimate}) holds for $k=-N$. Suppose (\ref{eq:boxUestimate}) holds for all $k'<k$; let us prove that it holds for $k$. The bound for the term $I$: \begin{eqnarray*} &&\left\|P_{j'}\left[\Box U^N_{<k-10}\left(u_{<k-10}u^{\dagger}_k-u_k\,u^{\dagger}_{<k-10}\right)\phi\right]\right\|_{N[j']}\\ &&\lesssim\sum _{k'<k-10,\,|j-j''|\leq 3}\left\|P_{j'}\left[\Box U^N_{k'}\,P_{j''}\left\{\left(u_{<k-10}u^{\dagger}_k-u_k\,u^{\dagger}_{<k-10}\right)\phi\right\}\right]\right\|_{N[j']}\\ &&\lesssim \sum_{k'<k-10}2^{-\nu (j-k')}c_{k'}c_k\|\phi\|_{S[j]}\\ &&\lesssim 2^{-\nu(j-k)}\epsilon\,c_k\|\phi\|_{S[j]} \end{eqnarray*} follows from the inductive hypothesis and the property of $S[k]$ spaces. The projection $P_{j''}$ was used to deal with the frequency leakage, which is a minor technical issue.\\ Let us consider the term $II$, that is $\partial^{\alpha}U^N_{<k-10}\partial_{\alpha}\left(u_{<k-10}\ud_k-u_k\ud_{<k-10}\right)\phi$. 
By (\ref{eq:boundforUk}) and the trilinear estimate, we have \begin{eqnarray*} &&\left\|P_{j'}\left[\partial^{\alpha}U^N_{<k-10}\partial_{\alpha}\left(u_{<k-10}\ud_k-u_k\ud_{<k-10}\right)\phi\right]\right\|_{N[j']}\\ &&\lesssim \sum_{k'<k-10}\left\|P_{j'}\left[\partial^{\alpha}U^N_{k'}\partial_{\alpha}\left(u_{<k-10}\ud_k-u_k\ud_{<k-10}\right)\phi\right]\right\|_{N[j']}\\ &&\lesssim\sum_{k'<k-10}2^{-\kappa(j-k')}c_{k'}c_k\|\phi\|_{S[j]}\\ &&\lesssim 2^{-\kappa(j-k)}\epsilon \,c_k\|\phi\|_{S[j]}. \end{eqnarray*} Let us now consider the term $$P_{j'}\left[\left(U^N_{<k-10}\partial^{\alpha}u_{<k-10}\partial_{\alpha}u_k^{\dagger}\right)\phi\right],$$ from term $IV$. We have, by (\ref{eq:boundforUk}) and the trilinear estimate, \begin{eqnarray*} &&\left\|P_{j'}\left[\left(U^N_{<k-10}\partial^{\alpha}u_{<k-10}\partial_{\alpha}u_k^{\dagger}\right)\phi\right]\right\|_{N[j']}\\ &&\lesssim \sum_{k'<k-10}2^{-\kappa(j-k')}c_k c_{k'}\|\phi\|_{S[j]}\lesssim 2^{-\kappa (j-k)}\epsilon \,c_k\|\phi\|_{S[j]}, \end{eqnarray*} for $j'=j+O(1)$.\\ The term $$P_{j'}\left[\left(U^N_{<k-10}\partial^{\alpha}u_k\partial_{\alpha}\ud_{<k-10}\right)\phi\right]$$ can be controlled similarly.\\ It remains to control term $III$. For this, we need to use the equation for $u$. Since $u$ satisfies the wave map equation, we see that \begin{equation}\label{eq:equforuk'} \Box u_{k'}=P_{k'}\left(u\,\partial^{\alpha}\ud\partial_{\alpha}u\right),\,\,{\rm for\,\,each\,\,}k'\leq k. \end{equation} It suffices to show that, for any $\varphi$ with Fourier support $2^{j-3}\leq |\xi|\leq 2^{j+3}$ and $k'<j-6$, \begin{equation}\label{eq:estimatethird} \left\|P_{j'}\left[P_{k'}\left(u\,\partial^{\alpha}\ud\partial_{\alpha}u\right)\varphi\right]\right\|_{N[j']}\lesssim 2^{-\nu(j-k')}\epsilon^{\frac{1}{2}} c_{k'}\|\varphi\|_{S[j]}, \end{equation} for $j'=j+O(1)$. 
Indeed, from (\ref{eq:estimatethird}), it follows that \begin{eqnarray*} &&\left\|P_{j'}\left[U^N_{<k-10}\Box u_{<k-10}\ud_k\phi\right]\right\|_{N[j']}\\ &&\lesssim \sum_{k'<k-10}\left\|P_{j'}\left[U^N_{<k-10}\Box u_{k'}\ud_k\phi\right]\right\|_{N[j']}\\ &&\lesssim \sum_{k'<k-10}2^{-\nu (j-k')}c_{k'}c_k\|\phi\|_{S[j]}\\ &&\lesssim \epsilon\,c_k 2^{-\nu (j-k)}\|\phi\|_{S[j]} \end{eqnarray*} and that \begin{eqnarray*} &&\left\|P_{j'}\left[U^N_{<k-10}\Box u_{k}\,\ud_{<k-10}\phi\right]\right\|_{N[j']}\\ &&\lesssim 2^{-\nu (j-k)}c_k\,\epsilon^{\frac{1}{2}}\|\phi\|_{S[j]}. \end{eqnarray*} These estimates are sufficient for the completion of the induction, due to the presence of the extra $\epsilon^{\frac{1}{2}}$ factor, which can be used to absorb various constants in the inequalities.\\ To prove (\ref{eq:estimatethird}), let us decompose $P_{k'}\left(u\,\partial^{\alpha}\ud\partial_{\alpha}u\right)\phi$ as \begin{eqnarray*} &&P_{k'}\left(u\,\partial^{\alpha}\ud\partial_{\alpha}u\right)\varphi=P_{k'}\left(u_{>\frac{j+k'}{2}}\,\partial^{\alpha}\ud\partial_{\alpha}u\right)\varphi\\ &&\hspace{.4in}+\,P_{k'}\left(u_{\leq \frac{j+k'}{2}}\,\partial^{\alpha}\ud\partial_{\alpha}u\right)\varphi=I_1+I_2. 
\end{eqnarray*} For $I_1$, by the trilinear estimates and symmetry, we can estimate as follows \begin{eqnarray*} &&\left\|P_{j'}\left[P_{k'}\left(u_{>\frac{j+k'}{2}}\,\partial^{\alpha}\ud\partial_{\alpha}u\right)\varphi\right]\right\|_{N[j']}=\left\|\sum_{k_2,\,k_3,\,k_1>\frac{j+k'}{2}}P_{j'}\left[P_{k'}\left(u_{k_1}\,\partial^{\alpha}\ud_{k_2}\partial_{\alpha}u_{k_3}\right)\varphi\right]\right\|_{N[j']}\\ &&\lesssim \|\varphi\|_{S[j]}\left(\sum_{k_1>\frac{j+k'}{2},\,k_3\ge k_1+O(1),\,k_3=k_2+O(1)}2^{-\kappa(\max_{1\leq i\leq 3}k_i-k')}2^{-\kappa (k_1-\min\{k_2,\,k_3\})_+}c_{k_1}c_{k_2}c_{k_3}\right.\\ &&\quad\quad\quad+\left.\sum_{k_1>\frac{j+k'}{2},\,k_2<k_3-C,\,k_3= k_1+O(1)}2^{-\kappa(\max_{1\leq i\leq 3}k_i-k')}2^{-\kappa (k_1-\min\{k_2,\,k_3\})_+}c_{k_1}c_{k_2}c_{k_3}\right)\\ &&\lesssim\epsilon^2\, \|\varphi\|_{S[j]}\left(\sum_{k_1>\frac{j+k'}{2}}c_{k_1}2^{-\kappa(k_1-k')}+\sum_{k_1>\frac{j+k'}{2}}c_{k_1}2^{-\kappa (k_1-k')}\right)\\ &&\lesssim \epsilon^2\,2^{-\frac{\kappa}{2}(j-k')} \|\varphi\|_{S[j]}c_{k'}. \end{eqnarray*} Now let us deal with the term $I_2=P_{k'}\left(u_{\leq \frac{j+k'}{2}}\,\partial^{\alpha}\ud\partial_{\alpha}u\right)\varphi$. 
In this case, we can insert $P_{<\frac{j+k'}{2}+C}$ in front of $\partial^{\alpha}\ud\,\partial_{\alpha}u$, use symmetry, and obtain that \begin{eqnarray*} &&\left\|P_{j'}\left[P_{k'}\left(u_{\leq \frac{j+k'}{2}}\,\partial^{\alpha}\ud\partial_{\alpha}u\right)\varphi\right]\right\|_{N[j']}\\ &&=\left\|P_{j'}\left[P_{k'}\left(u_{\leq \frac{j+k'}{2}}\,P_{<\frac{j+k'}{2}+C}\left(\partial^{\alpha}\ud\partial_{\alpha}u\right)\right)\varphi\right]\right\|_{N[j']}\\ &&\lesssim\left\|\sum_{k_1\leq k_2,\,k_2=k_1+O(1)}P_{j'}\left[P_{k'}\left(u_{\leq \frac{j+k'}{2}}\,P_{<\frac{j+k'}{2}+C}\left(\partial^{\alpha}\ud_{k_1}\partial_{\alpha}u_{k_2}\right)\right)\varphi\right]\right\|_{N[j']}+\\ &&\hspace{.4in}+\,\left\|\sum_{k_1\leq k_2-C}P_{j'}\left[P_{k'}\left(u_{\leq \frac{j+k'}{2}}\,P_{<\frac{j+k'}{2}+C}\left(\partial^{\alpha}\ud_{k_1}\partial_{\alpha}u_{k_2}\right)\right)\varphi\right]\right\|_{N[j']}, \end{eqnarray*} which can be estimated as \begin{eqnarray*} &&\lesssim \left\|\sum_{k_1\leq k_2,\,k_2=k_1+O(1),\,k_1>\frac{3j+k'}{4}}P_{j'}\left[P_{k'}\left(u_{\leq \frac{j+k'}{2}}\,P_{<\frac{j+k'}{2}+C}\left(\partial^{\alpha}\ud_{k_1}\partial_{\alpha}u_{k_2}\right)\right)\varphi\right]\right\|_{N[j']}\\ &&\hspace{.3in}+\,\left\|\sum_{k_1\leq k_2,\,k_2=k_1+O(1),\,k_1\leq \frac{3j+k'}{4}}P_{j'}\left[P_{k'}\left(u_{\leq \frac{j+k'}{2}}\,P_{<\frac{j+k'}{2}+C}\left(\partial^{\alpha}\ud_{k_1}\partial_{\alpha}u_{k_2}\right)\right)\varphi\right]\right\|_{N[j']}\\ &&\hspace{.3in}+\,\sum_{k_1\leq k_2-C,\,k_2\leq \frac{j+k'}{2}+C}2^{-\kappa (j-k_1)}\|\varphi\|_{S[j]}c_{k_1}c_{k_2} \end{eqnarray*} which is \begin{eqnarray*} &&\lesssim \sum_{k_1>\frac{3j+k'}{4}}2^{-\kappa\left(k_1-\frac{j+k'}{2}\right)}\|\varphi\|_{S[j]}\cdot c_{k_1}^2+\sum_{k_1\leq\frac{3j+k'}{4}}c_{k_1}^2\cdot 2^{-\kappa(j-k_1)}\|\varphi\|_{S[j]}\\ &&\hspace{.4in}+\,\sum_{k_1\leq k_2-C,\,k_2\leq \frac{j+k'}{2}+C}2^{-\kappa(j-k_1)}\|\varphi\|_{S[j]}c_{k_1}c_{k_2}\\ &&\lesssim 
2^{-\frac{\kappa}{8}(j-k')}\epsilon\,c_{k'}\|\varphi\|_{S[j]}. \end{eqnarray*} Combining the above estimates for $I,\,II,\,III,\,IV$ terms, the claim follows.\\ \medskip \noindent {\it Substep (2): Control of the term containing $\partial \psi$.}\\ Now we address the main term in the nonlinearity that forced us to use the gauge transform $$\wt{h}=\left[U_{<m-10}\left(u_{<m-10}\partial^{\alpha}\ud_{<m-10}-\partial^{\alpha}u_{<m-10}\ud_{<m-10}\right)-\partial^{\alpha}U_{<m-10}\right]\partial_{\alpha}\psi.$$ Note that by (\ref{eq:iterativeU}), we have \begin{eqnarray*} &&-\wt{h}=\left[\partial^{\alpha}U_{<m-10}-U_{<m-10}\left(u_{<m-10}\partial^{\alpha}\ud_{<m-10}-\partial^{\alpha}u_{<m-10}\ud_{<m-10}\right)\right]\partial_{\alpha}\psi\\ &&=\sum_{k<m-10}\left[\partial^{\alpha}U_k-U_{<m-10}\left(u_{<m-10}\partial^{\alpha}\ud_k-\partial^{\alpha}u_k\ud_{<m-10}\right)\right]\partial_{\alpha}\psi\\ &&=\sum_{k<m-10}\left[\partial^{\alpha}U_k-U_{<k-10}\left(u_{<k-10}\partial^{\alpha}\ud_k-\partial^{\alpha}u_k\ud_{<k-10}\right)\right]\partial_{\alpha}\psi\\ &&\hspace{.5in}-\,\sum_{k<m-10}U_{k-10\leq \cdot<m-10}\left(u_{<m-10}\partial^{\alpha}\ud_k-\partial^{\alpha}u_k\,\ud_{<m-10}\right)\partial_{\alpha}\psi\\ &&\hspace{.5in}-\,\sum_{k<m-10}U_{<k-10}\left(u_{k-10\leq \cdot<m-10}\partial^{\alpha}\ud_k-\partial^{\alpha}u_k\,\ud_{k-10\leq\cdot<m-10}\right)\partial_{\alpha}\psi\\ &&=\sum_{k<m-10}\left[\pua U_{<k-10}\left(u_{<k-10}\ud_k-u_k\ud_{<k-10}\right)\right.\\ &&\hspace{.8in}\left.-\,U_{<k-10}\left(\partial^{\alpha}u_{<k-10}\ud_k-u_k\partial^{\alpha}\ud_{<k-10}\right)\right]\partial_{\alpha}\psi+\mathcal{R}. 
\end{eqnarray*} To estimate the $\mathcal{R}$ term, let us firstly bound for $m'=m+O(1)$, \begin{eqnarray*} &&\left\|P_{m'}\left[U_{k-10\leq \cdot<m-10}u_{<m-10}\partial^{\alpha}\ud_k\partial_{\alpha}\psi\right]\right\|_{N[m']}\\ &&\lesssim \sum_{k-10\leq k'<m-10}\left\|P_{m'}\left[U_{k'}u_{<m-10}\partial^{\alpha}\ud_k\partial_{\alpha}\psi\right]\right\|_{N[m']}\\ &&\lesssim \sum_{k-10\leq k'<m-10}\sup_{m''=m+O(1)}\left\|P_{m''}\left[U_{k'}\partial^{\alpha}\ud_k\partial_{\alpha}\psi\right]\right\|_{N[m'']}\\ &&\lesssim \sum_{k-10\leq k'<m-10}2^{-\kappa (k'-k)}c_{k'}c_kc_m\lesssim \sum_{k-10\leq k'<m-10}2^{-(\kappa-\vartheta)(k'-k)}c_k^2c_m\\ &&\lesssim c_k^2c_m. \end{eqnarray*} Other terms in $\mathcal{R}$ can be treated similarly. Thus \begin{equation} \sup_{m'=m+O(1)}\left\|P_{m'}\left[\mathcal{R}\right]\right\|_{N[m']}\lesssim \sum_kc_k^2c_m\lesssim \epsilon^2c_m, \end{equation} and consequently $\mathcal{R}$ is disposable.\\ We can estimate for $m'=m+O(1)$ \begin{eqnarray*} &&\left\|P_{m'}\left[\partial^{\alpha}U_{<k-10}\left(u_{<k-10}\ud_k-u_k\ud_{<k-10}\right)\partial_{\alpha}\psi\right]\right\|_{N[m']}\\ &&\lesssim \sum_{k'<k-10}\left\|P_{m'}\left[\partial^{\alpha}U_{k'}\left(u_{<k-10}\ud_k-u_k\ud_{<k-10}\right)\partial_{\alpha}\psi\right]\right\|_{N[m']}\\ &&\lesssim \sum_{k'<k-10}2^{-\kappa (k-k')}c_kc_{k'}\|\psi\|_{S[m]}\\ &&\lesssim \sum_{k'<k-10}2^{-(\kappa-\vartheta)(k-k')}c_k^2\|\psi\|_{S[m]}\lesssim c_k^2\|\psi\|_{S[m]}, \end{eqnarray*} and \begin{eqnarray*} &&\left\|P_{m'}\left[U_{<k-10}\partial^{\alpha}u_{<k-10}\ud_k\partial_{\alpha}\psi\right]\right\|_{N[m']}\\ &&\lesssim \sum_{k'<k-10}\left\|P_{m'}\left[U_{<k-10}\partial^{\alpha}u_{k'}\ud_k\partial_{\alpha}\psi\right]\right\|_{N[m']}\\ &&\lesssim \sum_{k'<k-10}2^{-\kappa (k-k')}c_{k'}c_k\|\psi\|_{S[m]}\\ &&\lesssim c_k^2\|\psi\|_{S[m]}. 
\end{eqnarray*} Thus in summary, we can estimate \begin{equation*} \sup_{m'=m+O(1)}\|\wt{h}\|_{N[m']}\lesssim \sum_{k<m-10}c_k^2\|\psi\|_{S[m]}\lesssim \epsilon^2c_m, \end{equation*} and consequently $\wt{h}$ is disposable.\\ \noindent {\it Substep (3): the term $U_{<m-10}f$ is disposable.}\\ This follows directly, since $f$ is disposable.\\ \medskip \noindent {\bf Step 5: Proof of the channel of energy inequality for the good frequency piece.}\\ Take $m\in\mathcal{K}$. By the estimates from {\it Step 4}, we can write the equation for $w$ in Step 3 as \begin{equation}\label{eq:finalequforw} \partial_{tt}w-\Delta w=h, \end{equation} with $h$ being disposable in the generalized sense, that is, $h=\lim\limits_{k\to\infty}h_k$ in the sense of distributions and $\sup\limits_{m'=m+O(1)}\|P_{m'}h_k\|_{N[m']}\lesssim \epsilon c_m$ uniformly in $k$. Let us now study how the outgoing condition (\ref{eq:outgoingfork}) on the initial data of $\psi$ has been transformed. Recall that $$w=U_{<m-10}\psi.$$ Hence $$\nabla_{x,t}w=\nabla_{x,t}U_{<m-10}\psi+U_{<m-10}\nabla_{x,t}\psi.$$ Thus at time $t=0$, by (\ref{eq:boundforUk}) and the outgoing condition for $\psi$, \begin{eqnarray*} &&\|\nabla_{x,t}w(0)\|_{L_x^2(B^c_{1+\lambda}\cup B_{1-\lambda})}\\ &&\lesssim \|\nabla_{x,t}U_{<m-10}(0)\|_{L^{2}}\|\psi(0)\|_{L_x^{\infty}}+\|U_{<m-10}(0)\|_{L_x^{\infty}}\|\nabla_{x,t}\psi(0)\|_{L^2(B_{1+\lambda}^c\cup B_{1-\lambda})}\\ &&\lesssim \left(\sum_{k<m-10}\|\nabla_{x,t}U_k(0)\|^2_{L^2_x}\right)^{\frac{1}{2}}\|P_m(u_0,\,u_1)\|_{\HL}+\delta^{\frac{1}{100}}\|P_m(u_0,\,u_1)\|_{\HL}\\ &&\lesssim \left(\sum_{k<m-10}c_k^2\right)^{\frac{1}{2}}\|P_m(u_0,\,u_1)\|_{\HL}+\delta^{\frac{1}{100}}\|P_m(u_0,\,u_1)\|_{\HL}\\ &&\lesssim \left(\epsilon+\delta^{\frac{1}{100}}\right)\|P_m(u_0,\,u_1)\|_{\HL}. 
\end{eqnarray*} Similar calculations show that $$\|\spartial w_0\|_{L^2}+\|\partial_rw_0+w_1\|_{L^2}\lesssim\left(\epsilon+\delta^{\frac{1}{100}}\right)\|P_m(u_0,\,u_1)\|_{\HL},$$ and $$\|(w_0,\,w_1)\|_{\HL}\ge (1-\gamma(\epsilon))\|P_m(u_0,\,u_1)\|_{\HL},$$ with a suitable $\gamma\to 0$ as $\epsilon\to 0$. If $\delta$ and $\epsilon$ are chosen sufficiently small, then by the channel of energy inequality for the linear wave equation and the bound on $h$, we conclude using (\ref{eq:inhomogeneousenergyestimates}) and (\ref{eq:Senergy}) that for all $t\ge 0$, \begin{equation}\label{eq:channelforw} \int_{|x|\ge \frac{\beta+1}{2}+t}|\nabla_{x,t}w|^2(x,t)\,dx\ge \left|\frac{3+\beta}{4}\right|\|P_m(u_0,\,u_1)\|^2_{\HL}-C\epsilon^2c_m^2. \end{equation} Since $\nabla_{x,t}U_{<m-10}\,\psi$ is small in $L^2$ (smaller than $C\epsilon c_m$) and $U_{<m-10}$ is almost orthogonal by (\ref{eq:orthSc}), the channel of energy inequality (\ref{eq:channelform}) for $\psi$ follows (again by choosing $\epsilon,\,\delta$ sufficiently small depending on $\beta$). \\ This finishes the proof of the theorem.\\ It remains to prove Claim \ref{claim:Ijareperturbative} and Claim \ref{claim:commutingperturbative}.\\ \noindent {\it Proof of Claim \ref{claim:Ijareperturbative}:} We need to control $I_1,\,I_2,\,I_3$.\\ 
For $I_1$, by the trilinear estimate and symmetry, we get that for $m'=m+O(1)$ \begin{eqnarray*} &&\left\|P_{m'}\left(u_{\ge m-10}\partial^{\alpha}\ud\partial_{\alpha}u\right)\right\|_{N[m']}\\ &&=\left\|\sum_{k_1\ge m-10,\,k_2,\,k_3}P_{m'}\left(u_{k_1}\partial^{\alpha}\ud_{k_2}\partial_{\alpha}u_{k_3}\right)\right\|_{N[m']}\\ &&\lesssim \sum_{k_1\ge m-10,\,k_2\ge k_3}2^{-\kappa\left(\max\{k_1,\,k_2,\,k_3\}-m\right)_+}2^{-\kappa(k_1-\min\{k_2,\,k_3\})_+}\times\\ &&\hspace{1.2in}\times\,\|u_{k_1}\|_{S[k_1]}\,\|u_{k_2}\|_{S[k_2]}\,\|u_{k_3}\|_{S[k_3]}\\ &&\\ &&\lesssim \sum_{k_1\ge m-10,\,k_2\ge k_3}2^{-\kappa\left(\max\{k_1,\,k_2\}-m\right)_+}2^{-\kappa (k_1-k_3)_+}c_{k_1}\epsilon^2\\ &&\lesssim \epsilon^2\,c_m\sum_{k_1\ge m-10,\,k_2\ge k_3}2^{-(\kappa-\vartheta)\left(\max\{k_1,\,k_2\}-m\right)_+}2^{-\kappa (k_1-k_3)_+}\lesssim \epsilon^2\,c_m. \end{eqnarray*} For $I_2$, by the product property and null form estimate, we get that for $m'=m+O(1)$ \begin{eqnarray*} &&\left\|P_{m'}\left(u_{<m-10}\partial^{\alpha}\ud_{>m+10}\partial_{\alpha}u\right)\right\|_{N[m']}\\ &&\lesssim \sum_{k_1>m+10,\,k_2=k_1+O(1)}\left\|P_{m'}\left(u_{<m-10}\partial^{\alpha}\ud_{k_1}\partial_{\alpha}u_{k_2}\right)\right\|_{N[m']}\\ &&\lesssim \sum_{k_1>m+10,\,k_2=k_1+O(1)}2^{-\kappa (k_1-m)}\|u_{k_1}\|_{S[k_1]}\|u_{k_2}\|_{S[k_2]}\\ &&\lesssim \sum_{k_1>m+10,\,k_2=k_1+O(1)}2^{-\kappa (k_1-m)}c_{k_1}c_{k_2}\\ &&\lesssim \epsilon\,c_m\sum_{k_1>m+10,\,k_2=k_1+O(1)}2^{-(\kappa-\vartheta) (k_1-m)}\lesssim \epsilon\,c_m. \end{eqnarray*} For $I_3$, by the product property and null form estimate, we get that for $m'=m+O(1)$ \begin{eqnarray*} &&\left\|P_{m'}\left(u_{<m-10}\partial^{\alpha}\ud_{m-10\leq\cdot\leq m+10}\partial_{\alpha}u_{\ge m-10}\right)\right\|_{N[m']}\\ &&\lesssim \sum_{k\ge m-10}\left\|P_{m'}\left(u_{<m-10}\partial^{\alpha}\ud_{m-10\leq\cdot\leq m+10}\partial_{\alpha}u_{k}\right)\right\|_{N[m']}\\ &&\lesssim \sum_{k\ge m-10}2^{-\kappa (k-m)}\epsilon\,c_m\lesssim \epsilon\,c_m. 
\end{eqnarray*} Thus the terms $I_1,\,I_2,\,I_3$ are all disposable. The claim is proved.\\ \medskip \noindent {\it Proof of Claim \ref{claim:commutingperturbative}:} Noting that $$P_m\left(u_{m-10\leq\cdot\leq m+10}\right)=P_mu=\psi,$$ by Lemma \ref{lm:commutinglemma}, we get that \begin{eqnarray*} &&P_m\left(u_{<m-10}\partial_{\alpha}\ud_{<m-10}\partial^{\alpha}u_{m-10\leq\cdot\leq m+10}\right)-u_{<m-10}\partial_{\alpha}\ud_{<m-10}\partial^{\alpha}\psi\\ &&=2^{-m}L\left(\nabla \left(u_{<m-10}\partial_{\alpha}\ud_{<m-10}\right),\,\partial^{\alpha}u_{m-10\leq\cdot\leq m+10}\right)\\ &&=2^{-m}L\left(\nabla u_{<m-10}\partial_{\alpha}\ud_{<m-10},\,\partial^{\alpha}u_{m-10<\cdot<m+10}\right)+\\ &&\hspace{.3in}+\,2^{-m}L\left(u_{<m-10}\partial_{\alpha}\nabla\ud_{<m-10},\,\partial^{\alpha}u_{m-10<\cdot<m+10}\right). \end{eqnarray*} Thus, noting that $$\|\nabla u_k\|_{S[k]}\lesssim 2^k\|u_k\|_{S[k]},$$ by the trilinear estimate for the first term in the above and the product estimate and null form estimate for the second, we get that for $m'=m+O(1)$ \begin{eqnarray*} &&\left\|P_{m'}\left[P_m\left(u_{<m-10}\partial_{\alpha}\ud_{<m-10}\partial^{\alpha}u_{m-10<\cdot<m+10}\right)-u_{<m-10}\partial_{\alpha}\ud_{<m-10}\partial^{\alpha}\psi\right]\right\|_{N[m']}\\ &&\lesssim \sum_{k_1<m-10,\,k_2<m-10}2^{-m}\left\|P_{m'}\left[L\left(\nabla u_{k_1}\partial_{\alpha}\ud_{k_2},\,\partial^{\alpha}u_{m-10<\cdot<m+10}\right)\right]\right\|_{N[m']}\\ &&\hspace{.3in}+\,\sum_{k<m-10}2^{-m}\left\|P_{m'}\left[L\left(u_{<m-10}\partial_{\alpha}\nabla\ud_{k},\,\partial^{\alpha}u_{m-10<\cdot<m+10}\right)\right]\right\|_{N[m']}\\ &&\lesssim 2^{-m}\sum_{k_1<m-10,\,k_2<m-10}2^{k_1}2^{-\kappa(k_1-k_2)_+}c_m\epsilon^2+\sum_{k<m-10}2^{-m}2^k\epsilon \,c_m\lesssim \epsilon c_m. \end{eqnarray*} The first part of the claim is proved. The proof of the second part is similar. 
\end{section} \begin{section}{Morawetz estimates and applications} In the previous sections, the main tools we use are all perturbative in nature. In order to understand the dynamics of large wave maps, we need some global control on the solution. Such global control is often achieved with the help of suitable monotonicity formulae. The most important monotonicity formulae here are the energy flux identity and the Morawetz estimate. This section follows similar arguments in Sterbenz-Tataru \cite{Tataru4}. We use less geometric, but perhaps slightly more transparent, notation similar to that used in \cite{DJKM}. \\ For notational convenience, we shall work with a classical wave map $u$ defined on $R^2\times(0,1]$ that equals $u_{\infty}\in S^2$ for large $|x|$. Let us first look at the energy flux identity for $u$. We thus have $$\partial_{tt}u-\Delta u=(|\nabla u|^2-|u_t|^2)u,\,\,{\rm in}\,\,R^2\times(0,1].$$ Noting that $\ud\cdot u_t\equiv 0$, we have the identity \begin{equation}\label{eq:utidentity} \left(\partial_{tt}\ud-\Delta \ud\right)\cdot u_t=0,\,\,{\rm in}\,\,R^2\times(0,1]. \end{equation} Take $0<t_1<t_2<1$. Integrating the identity (\ref{eq:utidentity}) over the truncated lightcone $\{(x,t):\,|x|<t,\,t_1<t<t_2\}$, we obtain that \begin{eqnarray*} &&\int_{|x|<t_2}\nablaxtu(x,t_2)\,dx-\int_{|x|<t_1}\nablaxtu(x,t_1)\,dx\\ &&\hspace{.3in}-\frac{1}{\sqrt{2}}\int_{t_1}^{t_2}\int_{|x|=t}\flux\,d\sigma dt=0. \end{eqnarray*} Denote by \begin{equation*} \Fl(t_1,t_2):=\frac{1}{\sqrt{2}}\int_{t_1}^{t_2}\int_{|x|=t}\flux\,d\sigma dt \end{equation*} the ``energy flux" through the lateral boundary of the lightcone. We see that \begin{eqnarray*} \Fl(t_1,t_2)&=&\int_{|x|<t_2}\nablaxtu(x,t_2)\,dx\\ &&\hspace{.4in}-\int_{|x|<t_1}\nablaxtu(x,t_1)\,dx. \end{eqnarray*} Since $\Fl(t_1,t_2)\ge0$, it follows that \begin{equation*} \int_{|x|<t}\nablaxtu (x,t)\,dx \end{equation*} is nondecreasing in $t$, and has a limit as $t\to 0+$. Thus $\Fl(t_1,t_2)\to 0$ as $t_1,\,t_2\to 0+$. 
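\noindent{\it Remark.} For the reader's convenience, we note that the energy flux identity above is simply the divergence theorem applied to the pointwise identity
\begin{equation*}
\partial_{t}\left(\frac{|u_{t}|^{2}+|\nabla u|^{2}}{2}\right)-\nabla\cdot\left(\nabla \ud\; u_{t}\right)=\left(\partial_{tt}\ud-\Delta \ud\right)\cdot u_{t}=0,
\end{equation*}
integrated over the truncated lightcone; the contribution of the lateral boundary $|x|=t$ produces the flux $\Fl(t_1,t_2)$.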
\\ The control of energy flux plays an essential role in the following Morawetz estimate. \begin{theorem}\label{th:Morawetz} Let $u$ be a classical wave map on $R^2\times (0,1]$ with energy $E$, and let $\epsilon\in(0,1)$. For each $0<\bt<1$, if $\Fl(0,\bt)<\epsilon E$, then \begin{equation}\label{eq:Morawetz} \int_{\epsilon \bt}^{\bt}\int_{|x|<t}\rho_{\epsilon\bt}^3\left(X^{\alpha}\partial_{\alpha}u\right)^2\,dxdt+\int_{|x|<\bt}\bt\rho_{\epsilon\bt}\left(\frac{|\nabla u|^2}{2}+\frac{|u_t|^2}{2}+\frac{x}{\bt}\cdot\nabla \ud\,u_t\right)(x,\bt)\,dx\lesssim E, \end{equation} where we set $\rho_{\epsilon\bt}:=\left((t+\epsilon\bt)^2-|x|^2\right)^{-\frac{1}{2}}$ and $X^{\alpha}=x^{\alpha}$ if $\alpha=1,2$, $X^{0}=t+\epsilon\bt$. \end{theorem} \smallskip \noindent {\it Proof.} By rescaling, we can assume without loss of generality that $\bt=1$ (the rescaled $u$ is then defined on $R^2\times \left(0,\,\frac{1}{\bt}\right]$). Let us integrate the identity $$\partial^{\alpha}\partial_{\alpha}\ud\,\rho_{\epsilon}\,X^{\beta}\partial_{\beta}u=0$$ on $\{(x,t):\,|x|<t,\,\epsilon<t<1\}$. 
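Note that the integrand here indeed vanishes pointwise: by the wave map equation, $\partial^{\alpha}\partial_{\alpha}u$ is parallel to $u$, while $\ud\,\partial_{\beta}u\equiv 0$ since $|u|\equiv 1$; hence
\begin{equation*}
\partial^{\alpha}\partial_{\alpha}\ud\,\rho_{\epsilon}\,X^{\beta}\partial_{\beta}u=\pm\left(|\nabla u|^{2}-|u_{t}|^{2}\right)\ud\,\rho_{\epsilon}\,X^{\beta}\partial_{\beta}u=0,
\end{equation*}
with the sign depending on the convention for raising indices.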
We have \begin{eqnarray*} 0&=&\int_{\epsilon}^1\int_{|x|<t}\partial^{\alpha}\partial_{\alpha}\ud\,\rho_{\epsilon}X^{\beta}\partial_{\beta}u\,dxdt\\ &=&\int_{\epsilon}^1\int_{|x|<t}\rho_{\epsilon}\,X^{\beta}\partial^{\alpha}(\partial_{\alpha}\ud\,\partial_{\beta}u)-\rho_{\epsilon}\,X^{\beta}\partial_{\beta}\frac{\partial^{\alpha}\ud\partial_{\alpha}u}{2}\,dxdt\\ &=&B+I, \end{eqnarray*} where the boundary term $B$ and the interior term $I$ are \begin{eqnarray*} B&=&\int_{\epsilon}^1\int_{|x|=t}\rho_{\epsilon}\,X^{\beta}\,n^{\alpha}\,\partial_{\alpha}\ud\,\partial_{\beta}u-\rho_{\epsilon}\,X^{\beta}\,n_{\beta}\frac{\partial^{\alpha}\ud\partial_{\alpha}u}{2}\,d\sigma dt\\ &&\,-\,\int_{|x|<1}\rho_{\epsilon}\,X^{\beta}\,\partial_t\ud\,\partial_{\beta}u(x,1)\,dx-\int_{|x|<1}(1+\epsilon)\rho_{\epsilon}\,\frac{\partial^{\alpha}\ud\partial_{\alpha}u}{2}(x,1)\,dx\\ &&\,+\,\int_{|x|<\epsilon}\rho_{\epsilon}\,X^{\beta}\,\partial_t\ud\,\partial_{\beta}u(x,\epsilon)\,dx+\int_{|x|<\epsilon}2\epsilon\,\rho_{\epsilon}\,\frac{\partial^{\alpha}\ud\partial_{\alpha}u}{2}(x,\epsilon)\,dx; \end{eqnarray*} and \begin{equation*} I=-\int_{\epsilon}^1\int_{|x|<t}\partial^{\alpha}\left(\rho_{\epsilon}\,X^{\beta}\right)\,\partial_{\alpha}\ud\,\partial_{\beta}u-\partial_{\beta}\left(\rho_{\epsilon}\,X^{\beta}\right)\,\frac{\partial^{\alpha}\ud\partial_{\alpha}u}{2}\,dxdt. \end{equation*} In the above we use the notation $n=\frac{1}{\sqrt{2}}\left(\frac{x}{|x|},\,-1\right)$, $n^j=n_j=\frac{1}{\sqrt{2}}\frac{x_j}{|x|}$ for $j=1,2$ and $n^0=-n_0=\frac{1}{\sqrt{2}}$. Hence $X^{\beta}n_{\beta}=-\frac{\epsilon}{\sqrt{2}}$ on $|x|=t$. 
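The last identity is a direct computation: on the lateral boundary $|x|=t$,
\begin{equation*}
X^{\beta}n_{\beta}=x^{j}n_{j}+X^{0}n_{0}=\frac{1}{\sqrt{2}}\,\frac{x\cdot x}{|x|}-\frac{1}{\sqrt{2}}\left(t+\epsilon\right)=\frac{1}{\sqrt{2}}\left(|x|-t-\epsilon\right)=-\frac{\epsilon}{\sqrt{2}}.
\end{equation*}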
We can compute $$\partial_j\rho_{\epsilon}=x_j\rho_{\epsilon}^3,\,\,\,\partial_t\rho_{\epsilon}=-t\rho_{\epsilon}^3-\epsilon\,\rho_{\epsilon}^3.$$ Hence $$X^{\beta}\partial_{\beta}\rho_{\epsilon}=-\rho_{\epsilon}.$$ We also note that $\epsilon\,\rho_{\epsilon}\leq 1$ when $|x|<t$, and record the following simple bound when $|x|=t$: $$\left|\rho_{\epsilon}\right|= \left(2\epsilon \,t+\epsilon^2\right)^{-\frac{1}{2}}\leq \epsilon^{-\frac{1}{2}}t^{-\frac{1}{2}}.$$ We can simplify the $B,\,I$ terms as \begin{eqnarray*} B&=&\frac{1}{\sqrt{2}}\int_{\epsilon}^1\int_{|x|=t}\rho_{\epsilon}\,\left(X^{\beta}\partial_{\beta}u\right)\cdot\frac{\left(x^{\alpha}\partial_{\alpha}u\right)}{t}+\epsilon\,\rho_{\epsilon}\frac{\partial^{\alpha}u^{\dagger}\,\partial_{\alpha}u}{2}\,d\sigma dt\\ &&\,\,\,\,-\int_{|x|<1}\rho_{\epsilon}\,\fluxOne(x,1)\,dx+O(E), \end{eqnarray*} and \begin{eqnarray*} -I&=&\int_{\epsilon}^1\int_{|x|<t}\bigg[X^{\beta}\partial_{\beta}\ud\,\partial^{\alpha}\rho_{\epsilon}\,\partial_{\alpha}u+\rho_{\epsilon}\,\left(|\nabla u|^2-|\partial_tu|^2\right)\\ &&\hspace{.4in}\left.-\,\frac{3}{2}\rho_{\epsilon}\,\left(\partial^{\alpha}\ud\,\partial_{\alpha}u\right)-X^{\beta}\partial_{\beta}\rho_{\epsilon}\,\frac{\partial^{\alpha}\ud\,\partial_{\alpha}u}{2}\right]\,dxdt\\ &=&\int_{\epsilon}^1\int_{|x|<t}\rho_{\epsilon}^3\left|X^{\beta}\partial_{\beta}u\right|^2\,dxdt. 
\end{eqnarray*} In the last step we used $\partial^{\alpha}\rho_{\epsilon}\,\partial_{\alpha}u=\rho_{\epsilon}^{3}\,X^{\alpha}\partial_{\alpha}u$, while the remaining zeroth order terms cancel, thanks to $X^{\beta}\partial_{\beta}\rho_{\epsilon}=-\rho_{\epsilon}$, $\partial_{\beta}X^{\beta}=3$ and the identity $\partial^{\alpha}\ud\,\partial_{\alpha}u=|\nabla u|^2-|\partial_tu|^2$ in our sign convention. We can estimate \begin{eqnarray*} &&\left|\frac{1}{\sqrt{2}}\int_{\epsilon}^1\int_{|x|=t}\rho_{\epsilon}\,\left(X^{\beta}\partial_{\beta}u\right)\cdot\frac{\left(x^{\alpha}\partial_{\alpha}u\right)}{t}+\epsilon\,\rho_{\epsilon}\frac{\partial^{\alpha}u^{\dagger}\,\partial_{\alpha}u}{2}\,d\sigma dt\right|\\ &&\lesssim \int_{\epsilon}^1\int_{|x|=t}\rho_{\epsilon}\,t\left|\partial_tu+\frac{x}{t}\cdot\nabla u\right|^2\,d\sigma dt+\\ &&\hspace{.3in}+\,\int_{\epsilon}^1\int_{|x|=t}\epsilon\,\rho_{\epsilon}\,\left(\frac{|\nabla u|^2}{2}+\frac{|\partial_tu|^2}{2}+\frac{x}{t}\cdot\nabla \ud\,\partial_tu\right)\,d\sigma dt\\ &&\lesssim \epsilon^{-\frac{1}{2}}\Fl(0,1)\leq \epsilon^{\frac{1}{2}}E. \end{eqnarray*} Hence, combining the $B$ and $I$ terms, we conclude that \begin{eqnarray*} &&\int_{|x|<1}\rho_{\epsilon}\,\fluxOne(x,1)\,dx+\int_{\epsilon}^1\int_{|x|<1}\rho_{\epsilon}^3\left|X^{\beta}\partial_{\beta}u\right|^2\,dxdt\\ &&\lesssim E. \end{eqnarray*} The theorem is proved.\\ Theorem \ref{th:Morawetz} has the following corollary. \begin{corollary}\label{cor:vanishing} Let $u$ be as above. For any $\tau_n\to 0+$, $\gamma_n\to 1-$ as $n\to\infty$, we have that \begin{equation}\label{eq:corvani} \int_{B_{\tau_n}\backslash B_{\gamma_n\tau_n}}\flux(x,\tau_n)\,dx=o_n(1). \end{equation} \end{corollary} \smallskip \noindent {\it Proof:} Set $\epsilon_n:=2\,\Fl(0,\tau_n)/E$; then $\epsilon_n\to 0$ as $n\to\infty$. Theorem \ref{th:Morawetz} implies that \begin{equation*} \int_{B_{\tau_n}}\tau_n\,\rho_{\epsilon_n\tau_n}\flux(x,\tau_n)\,dx\lesssim E. \end{equation*} Since $$\tau_n\,\rho_{\epsilon_n\tau_n}\gtrsim \left((1+\epsilon_n)^2-\gamma_n^2\right)^{-\frac{1}{2}}\to\infty$$ for $t=\tau_n$ and $|x|\in (\gamma_n\tau_n,\,\tau_n)$, we conclude that (\ref{eq:corvani}) holds. \end{section} \begin{section}{Universal blow up profile along a sequence of times} Our goal in this section is to prove Theorem \ref{th:smallsolitonresolutionMain} along a sequence of times. 
Again, for ease of notation, we shall consider a classical wave map $u$ defined on $R^2\times (0,1]$ that blows up at time $t=0$. Recall from (\ref{eq:concentrationradiusast}) the definition of $r(\epsilon_{\ast},\,t)$. By the small data theory and finite speed of propagation, we have $\lim\limits_{t\to0+} r(\epsilon_{\ast},\, t)=0$. That is, energy concentrates in smaller and smaller regions as $t\to 0+$. By the definition of $r(\epsilon_{\ast},\,t)$, we can find $x_{\ast}(t)$ such that \begin{equation} \|\OR{u}(t)\|_{\HL\left(B_{2r(\epsilon_{\ast},\,t)}(x_{\ast}(t))\right)}>\frac{\epsilon_{\ast}}{2}, \end{equation} for $t$ close to $0$. Again, by small data global existence and finite speed of propagation, $x_{\ast}(t)$ remains in a bounded region for $t\in (0,\,1]$. Assume, without loss of generality, that $x_{\ast}(t_n)\to 0$ along a sequence of times $t_n\to 0$. Since $r(\epsilon_{\ast},\,t)\to 0$, we see that for any $r>0$, \begin{equation}\label{eq:singularconcentration} \liminf_{t\to 0}\left\|\OR{u}(t)\right\|_{\HL(B_r)}>\frac{\epsilon_{\ast}}{2}. \end{equation} In general, we call a point $\overline{x}$ {\it singular} if for any $r>0$ $$\limsup_{t\to 0}\|\OR{u}(t)\|_{\HL(B_r(\overline{x}))}>\frac{\epsilon_{\ast}}{2}.$$ By finite speed of propagation and the energy flux identity, this is equivalent to requiring that for any $r>0$, $$\liminf_{t\to 0}\|\OR{u}(t)\|_{\HL(B_r(\overline{x}))}>\frac{\epsilon_{\ast}}{2}.$$ As the energy is conserved and finite, there can only be finitely many singular points, and in particular the singular points are isolated. \\ For the singular point $x_{\ast}=0$, since singular points are isolated, there exists $r_1>0$ such that no $\overline{x}\in B_{r_1}\backslash\{0\}$ is a singular point. Hence we can find $\tr>0$ with \begin{equation}\label{eq:small1} \|\OR{u}(\tau_n)\|_{\HL( B_{\tr}(\overline{x}))}<\epsilon_{\ast} \end{equation} along a sequence of times $\tau_n\to 0$. 
In particular, we have $$\|\spartial u(\cdot,\,\tau_n)\|_{L^2(B_{\tr}(\bx))}<\epsilon_{\ast}.$$ Hence there exists $\br_n\in \left(\frac{\tr}{2},\,\tr\right)$ with $$\int_{\partial B_{\br_n}(\bx)}|\spartial u|^2(x,\tau_n)\,d\sigma \lesssim \frac{\epsilon_{\ast}^2}{\br_n}.$$ Denote by $\overline{u}_n$ the average of $u(\tau_n)$ over $\partial B_{\br_n}(\bx)$, that is, $$\overline{u}_n=\frac{1}{2\pi \br_n}\int_{\partial B_{\br_n}(\bx)}u(\tau_n)\,d\sigma.$$ By the Sobolev inequality, we get that \begin{equation}\label{eq:small0} \left\|u(\tau_n)-\overline{u}_n\right\|_{L^{\infty}(|x-\bx|=\br_n)}\lesssim \epsilon_{\ast}. \end{equation} Take a smooth cutoff function $\eta_n\in C_c^{\infty}\left(B_{2\br_n}(\bx)\right)$ with $\eta_n|_{B_{\br_n}(\bx)}\equiv 1$ and $|\nabla \eta_n|\lesssim (\br_n)^{-1}$. Recall that for any $v\in R^3$ with $v\neq 0$, $$Pv=\frac{v}{|v|}.$$ Define \begin{equation*} (u_{0n},\,u_{1n})=\left\{\begin{array}{lr} (u,\,\partial_tu)(\tau_n) & {\rm in}\,\,B_{\br_n}(\bx);\\ \left(P[\eta_n (u(\br_n \theta,\tau_n)-\overline{u}_n)+\overline{u}_n],\,0\right) & {\rm in}\,\,\left(B_{\br_n}(\bx)\right)^c. \end{array}\right. \end{equation*} By (\ref{eq:small1}) and (\ref{eq:small0}), a direct computation shows that \begin{equation}\label{eq:small3} \|(u_{0n},\,u_{1n})\|_{\HL}\lesssim \epsilon_{\ast}. \end{equation} Hence by the small energy global existence theory and finite speed of propagation, we see that the solution $u_n$ to the wave map equation with $\OR{u}_n(\tau_n)=(u_{0n},\,u_{1n})$ is global and that \begin{equation}\label{eq:localequal} u_n\equiv u\,\,\,{\rm for}\,\,\, |x-\bx|<\frac{\br_n}{4}\,\,{\rm and}\,\,\, t\in(0,\,\tau_n] \end{equation} for sufficiently large $n$. Since $u_n\in C\left([0,\,\tau_n],\,\HL\right)$ and (\ref{eq:localequal}) holds, we conclude that $u$ can be extended to $t=0$ so that $u\in C\left([0,\,\tau_n],\,\HL(B_{\tr/8}(\bx))\right)$.
Since $\bx\in B_{r_{1}}\backslash\{0\}$ is arbitrary, we conclude that $u$ can be extended to $t=0$ in $B_{r_1}$ with $u\in C\left([0,\,1],\,\HL(B_{r_1}\backslash B_r)\right)$ for each $0<r<r_1$.\\ In addition, by the regularity of $u_n$, we also have the additional (qualitative) regularity condition that $u\in C^{\infty}\left(B_{r_1}\times [0,\,1]\backslash\{(0,0)\}\right)$. One can of course apply the same argument to the other singular points. As a result, we see that $u\in C^{\infty}\left(R^2\times[0,1]\backslash \bigcup\limits_j\{(x_j,\,0)\}\right)$, where the $x_j$ are the singular points.\\ On the other hand, since $\OR{u}(t)$ is bounded in $\HL$ and $|u|\equiv 1$, we can extract a weak limit $(v_0,\,v_1)\in\HL$ along a sequence of times $t_n\to 0+$. This limit is in fact a strong limit outside an arbitrarily small neighborhood of the finitely many singular points. From the above analysis, $(v_0,\,v_1)\in C^{\infty}(R^2\backslash\{x_j\})$. Let $v=u$ for $\inf\limits_j|x-x_j|>t$. Then $v\in C^{\infty}\left(R^2\times [0,1]\backslash \bigcup\limits_j \{|x-x_j|\leq t,\,t\in[0,\,1]\}\right)$, and by the same arguments as in the proof of Lemma \ref{lm:exteriornice}, \begin{equation}\label{eq:regularpart} \lim_{t\to 0+}\bigg\|\OR{v}(\cdot,\,t)-(v_0,\,v_1)\bigg\|_{\HL\left(\bigcap\limits_j\{|x-x_j|>t\}\right)}=0. \end{equation} We shall call $v$ the {\it regular part} of the wave map $u$. The main issue is to understand the behavior of the wave map $u$ inside the singularity lightcones $ \bigcup\limits_j \{|x-x_j|\leq t,\,t\in[0,\,1]\}$.\\ We shall prove \begin{theorem}\label{th:solitonsequential} Let $u$ be a classical wave map, defined on $R^2\times (0,\,1]$, that blows up at time $t=0$ with the origin being a singular point, and whose energy satisfies \begin{equation}\label{eq:energyconstraint5} \E(\OR{u})<\E(Q,0)+\epsilon_0^2, \end{equation} where $Q$ is a harmonic map with degree $1$. Assume that $\epsilon_0$ is sufficiently small.
Then there exist a sequence of times $t_n\to 0+$, $\ell\in R^2$ with $|\ell|\ll 1$, $x_n\in R^2,\,\lambda_n>0$ with $$\lim_{n\to\infty}\frac{x_n}{t_n}=\ell,\,\,\,\lambda_n=o(t_n),$$ and $(v_0,\,v_1)\in \HL\cap C^{\infty}(R^2\backslash\{0\})$, such that \begin{equation} \OR{u}(t_n)=(v_0,\,v_1)+\left(Q_{\ell}\left(\frac{x-x_n}{\lambda_n},\,0\right),\,\lambda_n^{-1}\partial_tQ_{\ell}\left(\frac{x-x_n}{\lambda_n},\,0\right)\right)+o_{\HL}(1), \end{equation} as $n\to\infty$. \end{theorem} \smallskip \noindent {\it Remark:} As we discussed in the introduction, the main new point in Theorem \ref{th:solitonsequential} is that we eliminate any possible energy concentration near the boundary of the singularity lightcone $|x|<t$. By the energy constraint (\ref{eq:energyconstraint5}) and the proof below, there is only one singular point. Hence $u$ is regular outside $\{(x,t):\,|x|\leq t\}$. The main task is to understand the behavior of $u$ inside $\{(x,t):\,|x|\leq t\}$. \smallskip \noindent {\it Proof:} Our starting point is the work of Grinis \cite{Grinis}, which completely characterized the concentration of energy in $\{(x,\,\tau_n):\,|x|<a\tau_n\}$ for any $a\in(0,1)$ as traveling waves, for a suitable time sequence $\tau_n\to 0+$. See Theorems 1.1 and 1.2 in \cite{Grinis}. In our case, due to the energy constraint (\ref{eq:energyconstraint5}), there can only be one traveling wave.
Hence, as a particular consequence of a rescaled version of the {\it asymptotic decomposition} in Theorem 1.2 of \cite{Grinis}, we have for $|x|<\tau_n$, \begin{equation}\label{eq:Grinis} \OR{u}(\tau_n)=\left(Q_{\ell}\left(\frac{x-x_n}{r_n},\,0\right),\,r_n^{-1}\partial_tQ_{\ell}\left(\frac{x-x_n}{r_n},\,0\right)\right)+(w_{0n},\,w_{1n})+o_{\HL}(1), \end{equation} as $n\to\infty$, where $|\ell|\ll 1$, $r_n=o(\tau_n)$, $\ell=\lim\limits_{n\to\infty}\frac{x_n}{\tau_n}$ and \begin{equation}\label{eq:residuenull} \int_{|x|<a\tau_n}|\nabla w_{0n}|^2+|w_{1n}|^2\,dx\to 0, \end{equation} as $n\to\infty$ for any $a\in(0,1)$. Our main task is to show that $$\int_{|x|<\tau_n}|\nabla w_{0n}|^2+|w_{1n}|^2\,dx\to 0$$ as $n\to\infty$. By (\ref{eq:residuenull}), it suffices to prove that for any $\gamma_n\to 1-$, \begin{equation}\label{eq:noboundaryenergy3} \limsup_{n\to\infty}\int_{B_{\tau_n}\backslash B_{\gamma_n\tau_n}}\nablaxtu(x,\tau_n)\,dx=0, \end{equation} assuming that \begin{equation} \label{eq:wn0} \limsup_{n\to\infty}\int_{B_{\gamma_n\tau_n}}|\nabla w_{0n}|^2+|w_{1n}|^2\,dx=0. \end{equation} We now apply the channel of energy inequality to prove (\ref{eq:noboundaryenergy3}). Suppose that (\ref{eq:noboundaryenergy3}) is not true. Then there exists $\epsilon_2>0$ such that, by passing to a subsequence if necessary, we have for all sufficiently large $n$, \begin{equation}\label{eq:concentrationepsilon2} \E_n^2:=\int_{B_{\tau_n}\backslash B_{\gamma_n\tau_n}}\nablaxtu(x,\tau_n)\,dx\ge \epsilon^2_2. \end{equation} By the energy constraint, we must also have \begin{equation}\label{eq:energyonboundaryissmall} \int_{B_{\frac{\tau_n}{2}}^c\cap B_{\tau_n}}\nablaxtu(x,\tau_n)\,dx\lesssim\epsilon_0^2. \end{equation} Corollary \ref{cor:vanishing} implies that \begin{equation}\label{eq:morawetzoutgoing} \int_{B_{\tau_n}\backslash B_{\gamma_n\tau_n}}\left(\frac{|\nabla u|^2}{2}+\frac{|\partial_tu|^2}{2}+\partial_t\ud\,\partial_r u\right)(x,\tau_n)\,dx=o_n(1).
\end{equation} Since $u$ is regular for $|x|>t$, we have for any $r>0$, \begin{equation}\label{eq:smallexterior17} \limsup_{t\to 0+}\int_{B_{2r}\backslash B_t}\nablaxtu(x,t)\,dx\leq \delta(r)\to 0,\,\,{\rm as}\,\,r\to 0+. \end{equation} Fix a small $r>0$ whose value is to be determined below. We can find $r_{1n}\in \left(\frac{r}{2},\,r\right)$, $r_{2n}\in \left(\frac{\tau_n}{2},\,\frac{3}{4}\tau_n\right)$, such that $$\int_{\partial B_{r_{1n}}}|\spartial u|^2(\tau_n)\,d\sigma\lesssim \frac{\delta(r)}{r_{1n}},\,\,\,{\rm and}\,\,\,\int_{\partial B_{r_{2n}}}|\spartial u|^2(\tau_n)\,d\sigma=\frac{o_n(1)}{r_{2n}}.$$ Let $$\overline{u}^1_n=\frac{1}{2\pi r_{1n}}\int_{\partial B_{r_{1n}}}u(\tau_n)\,d\sigma,\,\,\,{\rm and}\,\,\,\overline{u}^2_n=\frac{1}{2\pi r_{2n}}\int_{\partial B_{r_{2n}}}u(\tau_n)\,d\sigma.$$ Fix radial $\eta_{1n}\in C_c^{\infty}(B_{2r_{1n}})$ with $\eta_{1n}|_{B_{r_{1n}}}\equiv 1$, and radial $1-\eta_{2n}\in C_c^{\infty}(B_{r_{2n}})$ with $1-\eta_{2n}|_{B_{\frac{r_{2n}}{2}}}\equiv 1$. Define \begin{equation} (u_{0n},\,u_{1n})=\left\{\begin{array}{lr} \left(P\left[\eta_{1n}\,\left(u(r_{1n}\theta,\,\tau_n)-\overline{u}^1_n\right)+\overline{u}^1_n\right],\,0\right)&{\rm in}\,\,B_{r_{1n}}^c;\\ \OR{u}(\tau_n)&{\rm in}\,\,B_{r_{1n}}\backslash B_{r_{2n}};\\ \left(P\left[\eta_{2n}\,\left(u(r_{2n}\theta,\,\tau_n)-\overline{u}^2_n\right)+\overline{u}^2_n\right],\,0\right)&{\rm in}\,\,B_{r_{2n}} \end{array}\right. \end{equation} Then for sufficiently large $n$, in view of (\ref{eq:wn0}) and (\ref{eq:smallexterior17}), $$\|(u_{0n},\,u_{1n})\|_{\HL\left(B_{\tau_n}^c\bigcup B_{\tau_n\gamma_n}\right)}\lesssim \delta(r),$$ and $$\epsilon_0\gtrsim \|(u_{0n},\,u_{1n})\|_{\HL}>\E_n+O(\delta(r)) \geq \epsilon_2+O(\delta(r)).$$ In addition, by (\ref{eq:morawetzoutgoing}), for sufficiently large $n$, $$\|u_{1n}+\partial_ru_{0n}\|_{L^2}+\|\spartial u_{0n}\|_{L^2}\lesssim \delta(r).$$ Let $u_n$ be the solution to the wave map equation with $\OR{u}_n(\tau_n)=(u_{0n},\,u_{1n})$.
Then, if $r$ is taken sufficiently small so that $\delta(r)$ is much smaller than $\epsilon_2$, by (a rescaled and time-translated version of) Theorem \ref{th:channelofenergywavemap} we conclude that for $t\ge \tau_n$, \begin{equation}\label{eq:thinchannel} \int_{|x|>t-\frac{\tau_n}{8}}\left|\nabla_{x,t}u_n\right|^2(x,t)\,dx\gtrsim \E_n^2. \end{equation} Taking $t=\frac{r}{8}$ in (\ref{eq:thinchannel}), we get that for all sufficiently large $n$, \begin{equation}\label{eq:thinchannel'} \int_{|x|>\frac{r}{8}-\frac{\tau_n}{8}}\left|\nabla_{x,t}u_n\right|^2(x,\frac{r}{8})\,dx\gtrsim \E_n^2. \end{equation} By the energy inequality, (\ref{eq:smallexterior17}) and the definition of $u_n$, we see that for $t\leq \frac{r}{8}$, \begin{equation}\label{eq:noleakage2} \int_{|x|>t}\left|\nabla_{x,t}u_n\right|^2(x,t)\,dx\lesssim\delta(r). \end{equation} By finite speed of propagation, we also have $u\equiv u_n$ for $t-\frac{\tau_n}{4}<|x|<\frac{r}{4}$ and $t\leq \frac{r}{8}$. Combining this with (\ref{eq:thinchannel'}), we conclude that \begin{equation}\label{eq:thinchannel''} \int_{\frac{r}{8}>|x|>\frac{r}{8}-\frac{\tau_n}{4}}\left|\nabla_{x,t}u\right|^2(x,\frac{r}{8})\,dx\gtrsim \E^2_n\ge\epsilon_2^2>0, \end{equation} if we choose $r$ sufficiently small, so that $\delta(r)$ is much smaller than $\epsilon_2^2$. However, since $\tau_n\to 0$, (\ref{eq:thinchannel''}) contradicts the fact that $\OR{u}\left(\frac{r}{8}\right)\in\HL$. \\ Therefore, combining the above with the regular part outside the singularity lightcone, we get that along the sequence $\tau_n$, \begin{equation}\label{eq:resolution1} \OR{u}(\tau_n)=(v_0,\,v_1)+(Q_{\ell},\,r_n^{-1}\partial_tQ_{\ell})\left(\frac{x-x_n}{r_n},\,0\right)+o_{\HL}(1),\,\,\,{\rm as}\,\,n\to\infty. \end{equation} The theorem is proved.
\end{section} \begin{section}{Coercivity and universal profile for all times} Our next task is to use a rigidity property of the energy to extend the decomposition we obtained in the last section to all times. One important tool is the following coercivity property of the energy functional near traveling waves. \begin{theorem}\label{th:coercivity} Let $\mathcal{M}_1$ be the space of harmonic maps from $R^2$ to $S^2$ with topological degree $1$. Fix $\ell\in R^2$ with $|\ell|<1$ and, for any $Q\in\mathcal{M}_1$, let $Q_{\ell}$ be the Lorentz transform of $Q$ with velocity $\ell$, that is \begin{equation}\label{eq:Ql} Q_{\ell}(x,t)=Q\left(x-\frac{\ell\cdot x}{|\ell|^2}\ell+\frac{\frac{\ell\cdot x}{|\ell|^2}\ell-\ell t}{\sqrt{1-|\ell|^2}}\right). \end{equation} Denote by $\mathcal{M}_{\ell,1}$ the space of maps $Q_{\ell}$ with $Q\in\mathcal{M}_1$. For $0<\epsilon<\epsilon_0$ and $\epsilon_0$ sufficiently small, suppose that $(v_0,\,v_1)\in\HL$, with $|v_0(x)|\equiv 1$ and $v_0^{\dagger}\cdot v_1\equiv 0$, satisfies \begin{eqnarray} &&\deg(v_0)=1;\label{eq:degree1}\\ &&\left|\int_{R^2}\partial_{x_j}v_0^{\dagger}\,v_1\,dx-\int_{R^2}\partial_{x_j}Q_{\ell}^{\dagger}\,\partial_tQ_{\ell}\,dx\right|<\epsilon;\label{eq:equalmomentum}\\ &&\int_{R^2}\left(\frac{|\nabla v_0|^2}{2}+\frac{|v_1|^2}{2}\right)dx\leq \int_{R^2}\left(\frac{|\partial_tQ_{\ell}|^2}{2}+\frac{|\nabla Q_{\ell}|^2}{2}\right)\,dx+\epsilon;\label{eq:equalenergy}\\ &&\inf_{Q\in\mathcal{M}_1}\left\|(v_0,\,v_1)-(Q_{\ell},\,\partial_tQ_{\ell})\right\|_{\HL}<\epsilon_0.\label{eq:energyconstraint23} \end{eqnarray} Then there exists $\delta(\epsilon)>0$ with $\delta(\epsilon)\to 0$ as $\epsilon\to 0$, such that \begin{equation}\label{eq:closetotravelingwave} \inf_{Q\in\mathcal{M}_1}\left\|(v_0,\,v_1)-(Q_{\ell},\,\partial_tQ_{\ell})\right\|_{\HL}<\delta(\epsilon).
\end{equation} \end{theorem} \smallskip \noindent {\it Remark:} As we will see in the proof, \begin{eqnarray*} &&\int_{R^2}\frac{|\nabla Q_{\ell}|^2}{2}+\frac{|\partial_tQ_{\ell}|^2}{2}\,dx=\frac{4\pi}{\sqrt{1-|\ell|^2}},\\ &&-\int_{R^2}\partial_tQ^{\dagger}_{\ell}\,\partial_{x_j}Q_{\ell}\,dx=\frac{4\pi\ell_j}{\sqrt{1-|\ell|^2}}, \end{eqnarray*} for any $Q\in\mathcal{M}_1$. The conditions (\ref{eq:equalmomentum}) and (\ref{eq:equalenergy}) are thus independent of the choice of $Q\in \mathcal{M}_1$. The definition of the degree $\deg(f)$ for mappings between manifolds is classical. For the definition with $f:\,R^2\to S^2\subset R^3$ and $f\in \dot{H}^1$ used here, we refer to \cite{Brezis}, see in particular (1) on page 205 of \cite{Brezis}. We also remark that the harmonic maps in $\mathcal{M}_1$ have been completely characterized as degree $1$ rational functions (M\"obius transforms), see \cite{Elles} and a more recent discussion in \cite{Oh}. By elementary geometric properties of M\"obius transforms, it is easy to see that degree one harmonic maps from $R^2$ to $S^2\subset R^3$ are unique up to the symmetries of $R^2$ and $S^2$. More precisely, in an appropriate coordinate system, the harmonic maps in $\mathcal{M}_1$ are co-rotational. \smallskip \noindent {\it Proof:} Without loss of generality, let us assume that $\ell=le_1=(l,\,0)$.
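\smallskip \noindent For the reader's convenience, here is a sketch of the computation behind the remark (after a rotation we may take $\ell=l\,e_1$). From (\ref{eq:Ql}), $Q_{\ell}(x,\,0)=Q\left(\frac{x_1}{\sqrt{1-l^2}},\,x_2\right)$ and $\partial_tQ_{\ell}(x,\,0)=-\frac{l}{\sqrt{1-l^2}}\,\partial_1Q\left(\frac{x_1}{\sqrt{1-l^2}},\,x_2\right)$. Changing variables $y=\left(\frac{x_1}{\sqrt{1-l^2}},\,x_2\right)$, so that $dx=\sqrt{1-l^2}\,dy$, and using that a degree one harmonic map satisfies $\int_{R^2}|\partial_1Q|^2\,dy=\int_{R^2}|\partial_2Q|^2\,dy=4\pi$ (half of $\int_{R^2}|\nabla Q|^2\,dy=8\pi$), we get \begin{eqnarray*} \int_{R^2}\frac{|\nabla Q_{\ell}|^2+|\partial_tQ_{\ell}|^2}{2}\,dx&=&\frac{\sqrt{1-l^2}}{2}\int_{R^2}\left(\frac{1+l^2}{1-l^2}\,|\partial_1Q|^2+|\partial_2Q|^2\right)dy=\frac{4\pi}{\sqrt{1-l^2}},\\ -\int_{R^2}\partial_tQ_{\ell}^{\dagger}\,\partial_{1}Q_{\ell}\,dx&=&\frac{l}{\sqrt{1-l^2}}\int_{R^2}|\partial_1Q|^2\,dy=\frac{4\pi l}{\sqrt{1-l^2}}. \end{eqnarray*}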
Suppose that (\ref{eq:closetotravelingwave}) is false. Then for each $n=1,2,\dots$, by symmetry, we can assume that there exist $(v_{0n},\,v_{1n})\in\HL$ with $|v_{0n}|\equiv 1$ and $v_{0n}^{\dagger}\,v_{1n}\equiv 0$, such that \begin{eqnarray} &&\deg(v_{0n})=1;\label{eq:degree1'}\\ &&\left|\int_{R^2}\partial_{x_j}v_{0n}^{\dagger}v_{1n}\,dx-\int_{R^2}\partial_{x_j}Q_{\ell}^{\dagger}\,\partial_tQ_{\ell}\,dx\right|<\frac{1}{n};\label{eq:equalmomentum'}\\ &&\int_{R^2}\left(\frac{|\nabla v_{0n}|^2}{2}+\frac{|v_{1n}|^2}{2}\right)dx\leq \int_{R^2}\left(\frac{|\partial_tQ_{\ell}|^2}{2}+\frac{|\nabla Q_{\ell}|^2}{2}\right)\,dx+\frac{1}{n};\label{eq:equalenergy'}\\ &&\inf_{Q\in\mathcal{M}_1}\left\|(v_{0n},\,v_{1n})-(Q_{\ell},\,\partial_tQ_{\ell})\right\|_{\HL}<\epsilon_0.\label{eq:energyconstraint23'} \end{eqnarray} In addition, for some fixed $\delta_0>0$, \begin{equation}\label{eq:closetotravelingwave'} \inf_{Q\in\mathcal{M}_1}\left\|(v_{0n},\,v_{1n})-(Q_{\ell},\,\partial_tQ_{\ell})\right\|_{\HL}>\delta_0>0. \end{equation} For fixed $(v_0,\,v_1)\in\HL$, with $|v_0(x)|\equiv 1$ and $v_0^{\dagger}\cdot v_1\equiv 0$, assume without loss of generality that $v_0$ is positively oriented, that is, \begin{equation}\label{eq:degreeH1} \deg(v_0)=-\frac{1}{4\pi}\int_{R^2}v_0^{\dagger}\cdot\left(\partial_{1}v_0\times\partial_2v_0\right). \end{equation} Consider the following algebraic identity: \begin{multline}\label{eq:magicidentity} \int_{R^2}\left(\frac{|\nabla v_0|^2}{2}+\frac{|v_1|^2}{2}\right)\,dx\\ =\frac{1}{2}\int_{R^2}\left|v_{1}+l\partial_1v_{0}\right|^2\,dx+\frac{1}{4}\int_{R^2}\left|\sqrt{1-l^2}\partial_1v_{0}-v_{0}\times\partial_2v_0\right|^2\,dx\\ +\frac{1}{4}\int_{R^2}\left|\partial_2v_0+\sqrt{1-l^2}v_0\times\partial_1v_0\right|^2\,dx-\sqrt{1-l^2}\int_{R^2}v_0^{\dagger}\cdot\left(\partial_1v_0\times\partial_2v_0\right)\,dx\\ -l\int_{R^2}\partial_1v_0^{\dagger}\,v_1\,dx.
\end{multline} (\ref{eq:magicidentity}) is a modified form of the remarkable decomposition of energy in \cite{Belavin}, see also the illuminating discussion on page 3 of \cite{RodSter}. The modification here is necessary in order to take into account the momentum part.\\ Direct calculations show that \begin{equation}\label{eq:infoQl} \int_{R^2}\left(\frac{|\partial_tQ_{\ell}|^2}{2}+\frac{|\nabla Q_{\ell}|^2}{2}\right)\,dx=\frac{4\pi}{\sqrt{1-l^2}}\,\,\,{\rm and}\,\,\,-\int_{R^2}\partial_tQ_{\ell}^{\dagger}\,\partial_1Q_{\ell}\,dx=\frac{4\pi l}{\sqrt{1-l^2}}. \end{equation} We can assume, after a rotation, that $(v_{0n},\,v_{1n})$ has the same momentum as $(Q_{\ell_n},\,\partial_tQ_{\ell_n})$ with $\ell_n=l_n\,e_1$. Then $|l_n-l|\lesssim \frac{1}{n}$. Applying (\ref{eq:magicidentity}) to $(v_{0n},\,v_{1n})$ and using the assumptions on $(v_{0n},\,v_{1n})$, we get that \begin{eqnarray*} &&\int_{R^2}\left(\frac{|\nabla v_{0n}|^2}{2}+\frac{|v_{1n}|^2}{2}\right)\,dx\\ &&=\frac{1}{2}\int_{R^2}\left|v_{1n}+l_n\partial_1v_{0n}\right|^2\,dx+\frac{1}{4}\int_{R^2}\left|\sqrt{1-l_n^2}\partial_1v_{0n}-v_{0n}\times\partial_2v_{0n}\right|^2\,dx\\ &&\hspace{.3in}+\,\frac{1}{4}\int_{R^2}\left|\partial_2v_{0n}+\sqrt{1-l_n^2}v_{0n}\times\partial_1v_{0n}\right|^2\,dx-\sqrt{1-l_n^2}\int_{R^2}v_{0n}^{\dagger}\cdot\left(\partial_1v_{0n}\times\partial_2v_{0n}\right)\,dx\\ &&\hspace{.5in}-\,l_n\int_{R^2}\partial_1v_{0n}^{\dagger}\,v_{1n}\,dx\\ &&=\frac{1}{2}\int_{R^2}\left|v_{1n}+l\partial_1v_{0n}\right|^2\,dx+\frac{1}{4}\int_{R^2}\left|\sqrt{1-l^2}\partial_1v_{0n}-v_{0n}\times\partial_2v_{0n}\right|^2\,dx\\ &&\hspace{.3in}+\,\frac{1}{4}\int_{R^2}\left|\partial_2v_{0n}+\sqrt{1-l^2}v_{0n}\times\partial_1v_{0n}\right|^2\,dx+\frac{4\pi}{\sqrt{1-l^2}}+O\left(\frac{1}{n}\right). \end{eqnarray*} In the last step we used the expressions for the degree and the momentum.
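\smallskip \noindent For completeness, we sketch the verification of (\ref{eq:magicidentity}). Since $|v_0|\equiv 1$ and $v_0^{\dagger}\cdot\partial_jv_0=0$, we have $|v_0\times\partial_jv_0|=|\partial_jv_0|$ for $j=1,2$. Expanding the three squares on the right-hand side, the quadratic terms sum to $$\frac{|v_1|^2}{2}+\left(\frac{l^2}{2}+\frac{1-l^2}{4}+\frac{1-l^2}{4}\right)|\partial_1v_0|^2+\left(\frac{1}{4}+\frac{1}{4}\right)|\partial_2v_0|^2=\frac{|v_1|^2}{2}+\frac{|\nabla v_0|^2}{2},$$ while the cross terms are $l\,\partial_1v_0^{\dagger}\,v_1$, which cancels the last term in (\ref{eq:magicidentity}), and $$\frac{\sqrt{1-l^2}}{2}\left(-\partial_1v_0\cdot\left(v_0\times\partial_2v_0\right)+\partial_2v_0\cdot\left(v_0\times\partial_1v_0\right)\right)=\sqrt{1-l^2}\,v_0\cdot\left(\partial_1v_0\times\partial_2v_0\right),$$ by the cyclic symmetry of the triple product, which cancels the degree term.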
From (\ref{eq:equalenergy'}) and (\ref{eq:infoQl}), we conclude that \begin{multline}\label{eq:magicsmall} \frac{1}{2}\int_{R^2}\left|v_{1n}+l\partial_1v_{0n}\right|^2\,dx+\frac{1}{4}\int_{R^2}\left|\sqrt{1-l^2}\partial_1v_{0n}-v_{0n}\times\partial_2v_{0n}\right|^2\,dx\\ \hspace{.3in}+\,\frac{1}{4}\int_{R^2}\left|\partial_2v_{0n}+\sqrt{1-l^2}v_{0n}\times\partial_1v_{0n}\right|^2\,dx\\ =O\left(\frac{1}{n}\right). \end{multline} By (\ref{eq:energyconstraint23'}), applying a suitable symmetry transformation to $(v_{0n},\,v_{1n})$ if necessary, we can assume that for a suitable $\widetilde{Q}_{\ell}\in\mathcal{M}_{\ell,1}$, \begin{equation} (v_{0n},\,v_{1n})=(\widetilde{Q}_{\ell},\,\partial_t\widetilde{Q}_{\ell})(x,0)+(r_{0n},\,r_{1n}), \end{equation} with $$\|(r_{0n},\,r_{1n})\|_{\HL}\leq 2\epsilon_0.$$ Passing to a subsequence, we can assume that $(v_{0n},\,v_{1n})\rightharpoonup (v_0,\,v_1)$ as $n\to\infty$, with $$\left\|(v_0,\,v_1)-(\widetilde{Q}_{\ell},\,\partial_t\widetilde{Q}_{\ell})(x,0)\right\|_{\HL}\leq2\epsilon_0.$$ Hence by the continuity of the topological degree \footnote{This is a direct consequence of the definition (\ref{eq:degreeH1}) of the degree, and can be proved by noting that $\int_{R^2}\partial_xu \times \partial_yu\,dx\,dy=0$ for any $\dot{H}^1$ mapping from $R^2\to S^2$, and the dominated convergence theorem.} in $\dot{H}^1$ and the fact that the degree only takes values in the integers (see \cite{Brezis}), we see that if $\epsilon_0$ is taken small enough, then $$\deg(v_0)=1.$$ By weak lower semicontinuity of the $L^2$ norm, (\ref{eq:magicsmall}) implies that $(v_0,\,v_1)$ satisfies the {\it first order} ``Bogomol'nyi equations'' (see \cite{Bogo}): \begin{eqnarray} &&v_1+l\partial_1v_0=0;\nonumber\\ &&\sqrt{1-l^2}\partial_1v_0-v_0\times\partial_2v_0=0;\nonumber\\ &&\partial_2v_0+\sqrt{1-l^2}v_0\times\partial_1v_0=0.\label{eq:bogo} \end{eqnarray} Equations (\ref{eq:bogo}) can be reduced by an obvious change of variables to the case $l=0$, in which case they can be explicitly solved as harmonic maps.
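\smallskip \noindent Explicitly, the change of variables can be taken as the spatial part of the Lorentz boost: setting $w(y_1,\,y_2):=v_0\left(\sqrt{1-l^2}\,y_1,\,y_2\right)$, so that $\partial_1v_0=\frac{1}{\sqrt{1-l^2}}\,\partial_1w$ and $\partial_2v_0=\partial_2w$ at corresponding points, the last two equations in (\ref{eq:bogo}) become $$\partial_1w=w\times\partial_2w,\,\,\,\,\,\,\partial_2w=-\,w\times\partial_1w,$$ which are the equations (\ref{eq:bogo}) with $l=0$. Since $|w|\equiv 1$, the two equations are equivalent to each other, and together with $\deg(w)=\deg(v_0)=1$ they characterize $w$ as a degree one harmonic map, i.e. $w\in\mathcal{M}_1$. The first equation in (\ref{eq:bogo}) then reads $v_1=-l\,\partial_1v_0=\partial_tw_{\ell}(\cdot,\,0)$, so that $(v_0,\,v_1)=(w_{\ell},\,\partial_tw_{\ell})(\cdot,\,0)$.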
Hence we see that there exists $\widetilde{\widetilde{Q}}\in \mathcal{M}_1$ such that $$(v_0,\,v_1)=\left(\widetilde{\widetilde{Q}}_{\ell},\,\partial_t\widetilde{\widetilde{Q}}_{\ell}\right)(x,0).$$ Thus we can write $$(v_{0n},\,v_{1n})=\left(\widetilde{\widetilde{Q}}_{\ell},\,\partial_t\widetilde{\widetilde{Q}}_{\ell}\right)(x,0)+\left(\widetilde{r}_{0n},\,\widetilde{r}_{1n}\right),$$ with $\left(\widetilde{r}_{0n},\,\widetilde{r}_{1n}\right)\rightharpoonup 0$ as $n\to\infty$. Then the energy expansion for $(v_{0n},\,v_{1n})$ around $\left(\widetilde{\widetilde{Q}}_{\ell},\,\partial_t\widetilde{\widetilde{Q}}_{\ell}\right)(x,0)$ gives \begin{eqnarray*} \E(Q_{\ell},\,\partial_t{Q}_{\ell})+\frac{1}{n} &\ge&\int_{R^2}\left(\frac{|\nabla v_{0n}|^2}{2}+\frac{|v_{1n}|^2}{2}\right)\,dx\\ &=&{\displaystyle \frac{1}{2}\int_{R^2}\left|\nabla \widetilde{\widetilde{Q}}_{\ell}\right|^2+\left|\partial_t\widetilde{\widetilde{Q}}_{\ell}\right|^2\,dx}\\ &&\hspace{.3in}+\,\int_{R^2}\nabla\widetilde{\widetilde{Q}}_{\ell}^{\,\dagger}\,\nabla \widetilde{r}_{0n}+ \partial_t\widetilde{\widetilde{Q}}_{\ell}^{\,\dagger}\,\widetilde{r}_{1n}\,dx\\ &&\hspace{.3in}+\,\int_{R^2}\frac{|\nabla \widetilde{r}_{0n}|^2}{2}+\frac{|\widetilde{r}_{1n}|^2}{2}\,dx\\ &=&\E\left(\OR{\widetilde{\widetilde{Q}}_{\ell}}\right)+\,\int_{R^2}\frac{|\nabla \widetilde{r}_{0n}|^2}{2}+\frac{|\widetilde{r}_{1n}|^2}{2}\,dx+o_n(1). \end{eqnarray*} By (\ref{eq:equalenergy'}) and the fact that $\E\left(\OR{\widetilde{\widetilde{Q}}}_{\ell}\right)=\E(\OR{Q_{\ell}})$, we see that $$\left(\widetilde{r}_{0n},\,\widetilde{r}_{1n}\right)\to 0\,\,\,{\rm in}\,\,\HL.$$ This contradicts (\ref{eq:closetotravelingwave'}). The theorem is proved.\\ \medskip Now we turn to the proof of the second main theorem of the paper.
\\ \begin{theorem}\label{th:smallsolitonresolutionlast} Let $u$ be a classical wave map, defined on $R^2\times (0,\,1]$, that blows up at time $0$ and at the origin, with energy $\E(\OR{u})<\E(Q,0)+\epsilon_0^2$, where $Q$ is a harmonic map of degree $1$. Assume that $\epsilon_0$ is sufficiently small. Then there exist $\ell\in R^2$ with $|\ell|\ll 1$, $x(t)\in R^2,\,\lambda(t)>0$ with $$\lim_{t\to 0}\frac{x(t)}{t}=\ell,\,\,\,\lambda(t)=o\left(t\right),$$ and $(v_0,\,v_1)\in \HL\cap C^{\infty}(R^2\backslash\{0\})$ with $(v_0-u_{\infty},\,v_1)$ being compactly supported, such that \begin{eqnarray*} {\rm (i)}&&\inf\bigg\{\left\|\OR{u}(t)-(v_0,\,v_1)-\left(Q_{\ell},\,\partial_tQ_{\ell}\right)\right\|_{\HL}:\,Q_{\ell}\in \mathcal{M}_{\ell,1}\bigg\}\to 0,\,\,{\rm as}\,\,t\to 0;\\ &&\\ {\rm (ii)}&&\bigg\|\OR{u}(t)-(v_0,\,v_1)\bigg\|_{\HL\left(R^2\backslash B_{\lambda(t)}(x(t))\right)}\to 0\,\,{\rm as}\,\,t\to 0. \end{eqnarray*} \end{theorem} \smallskip \noindent {\it Proof:} We have already proved that along a sequence of times $t_n\to 0+$, \begin{equation}\label{eq:predecomp} \OR{u}(t_n)=\left(Q_{\ell}\left(\frac{x-x_n}{\lambda_n},0\right),\,\frac{1}{\lambda_n}\partial_tQ_{\ell}\left(\frac{x-x_n}{\lambda_n},0\right)\right)+(v_0,\,v_1)+o_{\HL}(1), \end{equation} where $(v_0,\,v_1)\in\HL\cap C^{\infty}(R^2\backslash\{0\})$ and $$\lim_{n\to\infty}\frac{x_n}{t_n}=\ell,\,\,\,{\rm and}\,\,\,\lambda_n=o(t_n),\,\,\,{\rm as}\,\,n\to\infty.$$ Since \begin{equation}\label{eq:sec6small} \epsilon_n:=\int_{B_{2t_n}\backslash B_{t_n}}\nablaxtu(x,t_n)\,dx\to 0,\,\,\,{\rm as}\,\,n\to\infty, \end{equation} we can find $r_n\in(t_n,\,2t_n)$ such that $$\int_{\partial B_{r_n}}\frac{|\spartial u|^2}{2}(x,t_n)\,d\sigma\lesssim\frac{\epsilon_n}{r_n}.$$ Let $$\overline{u}_n=\frac{1}{2\pi r_n}\int_{\partial B_{r_n}} u(t_n)\,d\sigma.$$ Take a smooth cutoff function $\eta_n$ with $\eta_n\equiv 1$ on $B_{r_n}$, ${\rm supp}\,\eta_n\Subset B_{2r_n}$ and $|\nabla\eta_n|\lesssim r_n^{-1}$.
Define \begin{equation*} (u_{0n},\,u_{1n})=\left\{\begin{array}{lr} \OR{u}(x,t_n)&{\rm for}\,\,|x|<r_n;\\ \left(P\left[\eta_n(r)(u(r_n\theta,t_n)-\overline{u}_n)+\overline{u}_n\right],\,0\right)&{\rm for}\,\,|x|>r_n. \end{array}\right. \end{equation*} One can check that $(u_{0n},\,u_{1n})\in \dot{H}^s\times H^{s-1}$ for $s<\frac{3}{2}$, and $u_{0n}\equiv P(\overline{u}_n)$ for large $|x|$. Moreover, \begin{equation}\label{eq:sec6small2} \|(u_{0n},\,u_{1n})\|^2_{\HL(B^c_{t_n})}\lesssim \epsilon_n. \end{equation} Let $u_n$ be the solution to the wave map equation with $\OR{u}_n(t_n)=(u_{0n},\,u_{1n})$.\footnote{The local existence of $u_n$ follows from subcritical well-posedness theory.} Then by finite speed of propagation, $u_n\equiv u$ for $|x|<t$ and $t\in(0,\,t_n]$, assuming that $u_n$ is defined in $[t,\,t_n]$. In addition, by (\ref{eq:sec6small2}) and the energy flux identity, since the energy flux of $u_n$ is equal to that of $u$ on $|x|=t,\,t\in(0,\,t_n]$, which decays to zero as $n\to\infty$, we get that \begin{equation}\label{eq:exteriorsmall} \int_{|x|>t}|\nabla_{x,t}u_n|^2(x,t)\,dx\lesssim \epsilon_n+o_n(1), \end{equation} for $t\leq t_n$, again assuming that $u_n$ is defined in $[t,\,t_n]$. As $u_n$ is identical to $u$ in the singularity light cone $|x|<t,\,0<t\leq t_n$ and $u_n$ has small energy for $|x|\ge t,\,0<t\leq t_n$, we conclude that $u_n$ is defined for $t\in (0,\,t_n]$. From (\ref{eq:predecomp}), it is easy to verify that \begin{eqnarray*} &&{\rm deg}(u_n(t_n))=1;\\ &&\E(\OR{u}_n)\leq \E(\OR{Q}_{\ell})+o_n(1);\\ &&\left|\mathcal{M}(\OR{u}_n)-\mathcal{M}(\OR{Q}_{\ell})\right|=o_n(1), \end{eqnarray*} where $\mathcal{M}(\OR{u})$ denotes the momentum of $u$. Hence by Theorem \ref{th:coercivity}, $\OR{u}_n(t)$ stays in a $\delta(\epsilon_n)$ neighborhood of $\mathcal{M}_{\ell,1}$ for $0<t\leq t_n$, with $\delta(\epsilon_n)\to 0$ as $n\to\infty$.
It follows that \begin{equation}\label{eq:convergencetosoliton} \lim_{t\to 0}\,\inf\bigg\{\left\|\OR{u}(t)-(v_0,\,v_1)-\left(Q_{\ell},\,\partial_tQ_{\ell}\right)\right\|_{\HL}:\,Q_{\ell}\in \mathcal{M}_{\ell,1}\bigg\}=0. \end{equation} Part (i) of the theorem is proved. The fact that all degree $1$ harmonic maps are co-rotational implies that $\mathcal{M}_{\ell,1}$ is a compact set in the energy space, modulo translations and dilations. Hence, by the regularity of $u$ outside the singularity lightcone, we can find $x(t)$ and $\lambda(t)$ (the center and the concentration scale of the approximating traveling wave in (\ref{eq:convergencetosoliton})) with $\lambda(t)=o(t)$ and $$\limsup_{t\to 0+}\frac{|x(t)|}{t}\leq 1.$$ The main remaining task is to show that \begin{equation}\label{eq:thecenter} \lim_{t\to 0+}\frac{x(t)}{t}=\ell. \end{equation} Without loss of generality, let us assume that $\ell=l\,e_1$. By (\ref{eq:convergencetosoliton}), it follows that \begin{equation}\label{eq:momentuminside} \lim_{t\to 0}\int_{|x|<t}\,-\partial_t\ud\,\partial_{x_1}u(x,t)\,dx=\frac{4\pi l}{\sqrt{1-l^2}}. \end{equation} Direct computation shows \begin{eqnarray*} &&\frac{d}{dt}\int_{|x|<t}x_1\nablaxtu(x,t)\,dx\\ &&\hspace{.3in}=\int_{|x|=t}x_1\nablaxtu(x,t)\,d\sigma+\int_{|x|=t}x_1\frac{x}{|x|}\cdot\nabla \ud \,\partial_tu\,d\sigma\\ &&\hspace{2in}-\int_{|x|<t}\partial_{x_1}\ud\,\partial_tu(x,t)\,dx. \end{eqnarray*} Integrating the above identity from $t=0$ to $t$, we get that \begin{equation}\label{eq:centermotion} \int_{|x|<t}x_1\nablaxtu(x,t)\,dx=O\left({\rm Flux}(0,t)\right)t+\frac{4\pi l\,t}{\sqrt{1-l^2}}+o(t). \end{equation} As ${\rm Flux}(0,t)\to 0$ as $t\to 0$, by (\ref{eq:centermotion}) and (\ref{eq:convergencetosoliton}), (\ref{eq:thecenter}) follows straightforwardly. The theorem is proved. \end{section}
\section{Introduction}\vspace{-.4truecm} Dirac reportedly once said about the so-called \emph{standard model of} ``\emph{everyday matter}'' that it covers all of the chemistry and most of the physics of systems from the size of atoms and molecules all the way up on the number-of-particles scale to objects the size of the moon (and beyond, a little bit). The scare quotes around \emph{everyday matter} are meant to remind us that atoms and molecules are of course not part of our everyday experience. Yet it is one and the same model which gives equally accurate results for atoms, molecules, etc., as well as for objects the size of the moon. This standard model has also allowed mathematical physicists to prove quantum field-theoretical results which are out of reach in QED; cf. \cite{JuergA}, \cite{SpohnBOOKb}. Incidentally, the name \emph{standard model} in physics usually simply means that the model sets \emph{the standard of accuracy and efficiency in computing} quantitative answers to questions concerning the subject matter, in essential agreement with empirical data. Any such standard model typically consists of a patchwork of partial theories, strung together, plus heuristic rules of procedure. It does \emph{not} mean that it sets the standard for a conceptually unified fundamental theory, although (ideally) it should. 
In particular, the practically successful rules of the standard model of everyday matter are a curious mix of non-relativistic classical and quantum mechanics, and of relativistic quantum field theory: the spin-$\frac12$ electrons are treated with the non-relativistic Pauli equation; the nuclei are treated in the Born--Oppenheimer approximation, which means their positions are classical parameters in the Pauli equation for the electrons; the photons are treated with the quantized electromagnetic Maxwell fields, minimally coupled to the Pauli spinors; in addition, there are externally generated (applied) classical electromagnetic fields, also minimally coupled to the Pauli spinors. Predictions extracted from the model are based on the usual measurement axioms formulated by von Neumann \cite{vN}. The non-relativistic quantum-mechanical Schr\"odinger--Pauli equation for electrons and nuclei (the latter of which could be either effectively bosons or fermions) is expected to be a reasonably accurate approximation to deal with the matter part of the system. The energy densities of everyday matter are way below the matter-antimatter pair creation threshold of the involved matter particles so that quantum field-theoretical effects should be negligible for the matter part of the model. The Born--Oppenheimer approximation for the nuclei is merely a convenient yet unnecessary further simplification; the nuclear position degrees of freedom can easily be included at the level of nonrelativistic quantum mechanics. However, it is widely believed that the emission / absorption of photons by atoms can only be described quantum field-theoretically, cf. \cite{BohmBOOKa}, \cite{WeinbergBOOKqft}. This unproven yet widely held belief should be greeted with a healthy dose of skepticism. In this paper we inquire into a purely quantum-mechanical alternative to this standard model of atoms, molecules, etc., all the way up to objects as large as the moon (and a little bit further). 
For convenience we keep the Born--Oppenheimer approximation; yet again, this can be relaxed. The quantum-mechanical model proposed in this paper can be controlled more easily with rigorous mathematical techniques, perturbatively as well as non-perturbatively, than the standard model of everyday matter. It produces exactly the same atomic and molecular (etc.) energy spectra as the many-body Schr\"odinger, respectively Pauli, equation with Coulomb interactions and external electro- and magneto-static fields, without putting those interactions into the Schr\"odinger / Pauli equation by hand, yet it also describes the emission of photons with the right frequencies. Furthermore, its physical predictions do not require von Neumann's measurement axioms of orthodox quantum theory. Another point we emphasize is that photons appear naturally in our approach, without resorting to second-quantizing the classical Maxwell field equations. The photon emerges by analyzing Schr\"odinger's 1926 findings from the perspective of Born's 1926 re-interpretation of Schr\"odinger's wave function, and by pursuing this lead to its logical conclusion. Of course, a viable many-particle theory requires the input of principles discovered after 1926, in particular Pauli's 1927 theory that electrons are spin $\frac12$ fermions requiring permutation-antisymmetric spinor wave functions evolved by his generalization of Schr\"odinger's equation, and that photons are spin $1$ bosons, requiring permutation-symmetric wave functions. Thus, initially ignoring electron spin, we begin by recalling Schr\"odinger's 1926 ``$\Psi$ as matter-wave'' theory, first for a non-relativistic hydrogen atom coupled with the actual classical electromagnetic fields, emphasizing its initial successes and also its ultimate failure, then for a non-relativistic spinless multi-electron atom.
Next we recall Born's 1926 ``$\Psi$ as a probability amplitude'' re-interpretation of Schr\"odinger's formulas for the charge and current densities and how he justified it with his ``$\Psi$ as a guiding field'' interpretation, which Born wrote was inspired by Einstein's earlier speculations that photons are particles which are guided by the electromagnetic Maxwell fields. Einstein's ideas will also play a role in our model. Einstein's speculations also inspired de Broglie, who already in 1923 postulated a first-order guiding equation for massive particles involving a guiding phase wave $\Phi$, though without having a wave equation for $\Phi$; in 1926 he then found his $\Phi$ in Schr\"odinger's $\Psi$ through the polar representation $\Psi = |\Psi|e^{i\Phi}$ and presented his theory at the 5th Solvay Conference in 1927 \cite{Solvay}. De Broglie's deterministic guiding equation was later re-discovered by Bohm, who developed the theory further. While Born himself did not propose a guiding equation, he wrote that he was convinced that it had to be a non-deterministic equation. Such a stochastic guiding law was eventually supplied by Nelson and further developed by Guerra et al. We use the deterministic de Broglie--Bohm law; a stochastic (Born--)Nelson--Guerra law may do just as well. Then, by analyzing Schr\"odinger's calculations of the electromagnetic radiation produced by solutions of his $\Psi$ equation from the perspective of Born's interpretation of $\Psi$, we deduce the existence of generalized electromagnetic fields which depend not only on space and time, but also on the generic position variables of the electrons. The field equations for the generalized electromagnetic fields can be put together easily. After Fourier transform, they can be solved by the method of characteristics, and the de Broglie--Bohm-type guiding equation is an integral part of these characteristic equations! 
Next, well-known results from classical electromagnetic field theory suggest how to couple the generalized electromagnetic fields back into Schr\"odinger's equation. The model produces the same Schr\"odinger spectra as used in the (accordingly simplified) standard model of everyday matter; electron spin is easily accommodated by working with the Pauli equation instead of Schr\"odinger's spinless equation, in which case the spectra agree with those of the standard model. Moreover, it also describes emission of an electromagnetic radiation field with the right frequencies. Those radiation fields are spread out and cannot explain the ``clicks'' of some localized photon detector. Yet the mathematical equations suggest a re-interpretation of the generalized electromagnetic fields as actually living on many-particle configuration space, with the photon position being part of the configuration space variables, and we propose a guiding equation for a photon. This change of physical perspective now does have the potential of explaining localized detector ``clicks.'' Furthermore, it also suggests a re-interpretation of the empirical electron charge and current densities in terms of ``photon creation operators'' in this model. This leads at once to a model of an atom coupled with many photons. Its generalization to a system of many nuclei, electrons, and photons is straightforward. Photon annihilation operators are also hinted at in our non-relativistic Schr\"o\-dinger--Pauli equation, but our semi-relativistic model will have to be developed further to involve the relativistic Dirac operator before it can take a putatively final form in which it can possibly compete with the standard model of everyday matter. The many aspects which our tentative model gets exactly right already give us reason to be optimistic that such a purely quantum-mechanical model is feasible. 
We close with a brief discussion of perhaps the most intriguing finding of our work, how the putatively final model could account for Lorentz covariance \emph{in the mean}. By a law of large numbers, which holds for all everyday phenomena, a many body system will essentially behave like the mean, so that at the many-body level the special theory of relativity emerges as an apparent law of nature. Yet at a few-body level significant deviations may appear, as demonstrated in Bell-type experiments. \vspace{-20pt} \newpage \section{Schr\"odinger's matter-wave theory of radiating\\ atoms, molecules, etc.}\label{sec:Erwin} Schr\"odinger's notion of ``$\Psi$ as a matter wave,''\footnote{This folklore is too simplistic and should not be taken literally. In the course of the discussion in this section it will be made precise what Schr\"odinger meant by matter waves. See also \cite{ValiaETal}.} when coupled with the (classical) electromagnetic fields in a self-consistent manner, ultimately yields the Schr\"odinger--Maxwell system, a (neo-)classical field theory that is not a physically acceptable system of equations of a few-electron atom or molecule, coupled with electromagnetic radiation --- hopes expressed to the contrary \cite{KomechLECT} notwithstanding. Fortunately Schr\"odinger worked at first with a truncated system which produced the well-known striking results that became part of textbook QM. Since Schr\"odinger's results are important stepping stones on our way to a coherent quantum mechanics of electrons, nuclei, and photons, we briefly recall them below. While we do pay attention to the developments in 1926, we emphasize that our presentation is not meant as a strictly historical account; rather, we hope to convey a plausible train of thought. For a historian's account, see \cite{Renn}. \subsection{Hydrogen} We first work with the non-relativistic Schr\"odinger equation for a hydrogen atom, deferring spin and the Pauli equation to a later section. 
Like Schr\"odinger, we treat it first in isolation from the electromagnetic radiation fields, switching on the coupling subsequently in a perturbative manner, then address the non-perturbative Schr\"odinger--Maxwell system. Schr\"odinger's equation for the ``matter-wave'' function $\Psi(t,{\boldsymbol{s}})\in{\mathbb{C}}$ of an electron of mass $\mEL$ and charge $-e$ in the Coulomb field of a point proton of charge $e$, fixed at the origin, is found in every quantum-mechanics textbook; it reads \cite{ErwinWMd} \begin{equation} i \hbar \partial_t\Psi(t,{\boldsymbol{s}}) = \tfrac{1}{2\mEL}\big(-i\hbar\nabla_{\boldsymbol{s}}\big)^2\Psi(t,{\boldsymbol{s}}) - \tfrac{e^2}{|{\boldsymbol{s}}|}\Psi(t,{\boldsymbol{s}}). \label{eq:ERWINeqnMatterWaveBOhydrogen} \end{equation} Here, $\partial_t =\frac{\partial}{\partial t}$, and $\nabla_{\boldsymbol{s}}$ is the gradient operator w.r.t. the space vector ${\boldsymbol{s}}\in{\mathbb{R}}^3$, while $\hbar$ is Planck's constant divided by $2\pi$. In \cite{ErwinWMa} Schr\"odinger discussed only the stationary version of (\ref{eq:ERWINeqnMatterWaveBOhydrogen}), whose solutions $\psi_{n,\ell,m}({\boldsymbol{s}})$ map into solutions of (\ref{eq:ERWINeqnMatterWaveBOhydrogen}) via $\Psi_{n,\ell,m}(t,{\boldsymbol{s}}) = e^{-i E_n t/\hbar}\psi_{n,\ell,m}({\boldsymbol{s}})$, with $\psi_{n,\ell,m}({\boldsymbol{s}}) = R_{n,\ell}(r)Y_\ell^{m}(\vartheta,\varphi)$ (see the Appendix), for $n\in{\mathbb{N}}$ and $\ell\in\{0,...,n-1\}$ and $m\in\{-\ell,...,0,...,\ell\}$, and where $E_n = E^{\mbox{\textrm{\tiny{Bohr}}}}_n$ are the familiar Bohr energies of hydrogen (in Born--Oppenheimer approximation), \begin{equation} E^{\mbox{\textrm{\tiny{Bohr}}}}_n = - \tfrac12 \tfrac{e^4\mEL}{\hbar^2n^2}; \label{eq:EperBOHR} \end{equation} see \cite{BohrHatomA}. The Bohr spectrum meant that Schr\"odinger was onto something. 
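Though Schr\"odinger solved (\ref{eq:ERWINeqnMatterWaveBOhydrogen}) in closed form, the Bohr spectrum (\ref{eq:EperBOHR}) is also easy to recover numerically, a useful sanity check. The following is a minimal sketch (our illustration, not part of the historical record): in atomic units $\hbar=\mEL=e=1$, so that $E^{\mbox{\textrm{\tiny{Bohr}}}}_n=-\frac{1}{2n^2}$, it discretizes the $\ell=0$ radial problem for $u(r)=rR(r)$, with an ad hoc grid spacing and box cutoff.

```python
import numpy as np

# Radial l=0 hydrogen problem in atomic units: -(1/2) u'' - u/r = E u,
# with u(0) = u(R) = 0. Central finite differences on a uniform grid;
# N and R are ad hoc discretization choices.
N, R = 1200, 70.0
h = R / (N + 1)
r = h * np.arange(1, N + 1)

H = (np.diag(1.0 / h**2 - 1.0 / r)                    # kinetic diagonal + Coulomb term
     + np.diag(-0.5 / h**2 * np.ones(N - 1), 1)       # off-diagonal kinetic couplings
     + np.diag(-0.5 / h**2 * np.ones(N - 1), -1))

E = np.linalg.eigvalsh(H)[:3]
print(E)  # close to the Bohr values -1/2, -1/8, -1/18
```

The three lowest eigenvalues come out close to $-1/2$, $-1/8$, and $-1/18$ hartree, i.e. to $E^{\mbox{\textrm{\tiny{Bohr}}}}_n$ for $n=1,2,3$.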
But, of course, it had been obtained previously also by de Broglie; more importantly, it had been obtained by Pauli \cite{PauliH}, who solved Heisenberg's matrix formulation of the hydrogen problem. However, thanks to the linearity of Schr\"odinger's equation (\ref{eq:ERWINeqnMatterWaveBOhydrogen}), the general bound state solution of (\ref{eq:ERWINeqnMatterWaveBOhydrogen}) is readily obtained as \begin{equation} \Psi(t,{\boldsymbol{s}}) = \sum_{n\in{\mathbb{N}}} e^{-i E_n t/\hbar}\sum_{\ell=0}^{n-1}\sum_{m=-\ell}^\ell c^{}_{n,\ell,m}\psi^{}_{n,\ell,m}({\boldsymbol{s}}), \label{eq:PSIboundGENERAL} \end{equation} and with this Schr\"odinger was in a position to obtain further significant results. \noindent \textbf{Remark}: \emph{Even though the dynamical equation \eqref{eq:ERWINeqnMatterWaveBOhydrogen} appears only in the fourth communication of Schr\"odinger's series ``Quantisierung als Eigenwertproblem,'' see Eq.($4^{\prime\prime}$) in} \cite{ErwinWMe}, \emph{an equivalent version of formula \eqref{eq:PSIboundGENERAL} does appear in} \cite{ErwinWMc} \emph{as Eq.(35); the ensuing Eq.(36) there makes it plain, though, that Schr\"odinger at that time must have been considering the relativistic (subsequently so-called) Klein--Gordon equation, which also appears as Eq.(36) in} \cite{ErwinWMe}. As is well-known, equation \eqref{eq:ERWINeqnMatterWaveBOhydrogen} implies the conservation of the $L^2$ norm of $\Psi$. With $\Im\ $ meaning \emph{imaginary part} and ${}^*$ meaning \emph{complex conjugate}, Schr\"odinger showed in \cite{ErwinWMd} that $\varrho(t,{\boldsymbol{s}})\! 
:= \Psi^*(t,{\boldsymbol{s}}) \Psi(t,{\boldsymbol{s}})$ and ${\boldsymbol{J}}(t,{\boldsymbol{s}}) :=\frac{\hbar}{\mEL} \Im \left(\Psi^* \nabla_{\boldsymbol{s}} \Psi\right)(t,{\boldsymbol{s}})$ satisfy \begin{alignat}{1} \partial_t{\varrho(t,{\boldsymbol{s}})} + \nabla_{\boldsymbol{s}}\cdot{\boldsymbol{J}}(t,{\boldsymbol{s}}) = \label{eq:probCONSERVATIONs} 0, \end{alignat} and this continuity equation implies that $\int_{{\mathbb{R}}^3}\varrho(t,{\boldsymbol{s}})\mathrm{d}^3{s}$ is conserved if it is finite initially. Inserting the general bound state solution (\ref{eq:PSIboundGENERAL}) into the bilinear formulas for $\varrho$ and ${\boldsymbol{J}}$ revealed that they are sums of terms which oscillate harmonically with the Bohr angular frequencies $\omega_{n,n'} = \frac1\hbar (E_{n'}-E_n)$ for hydrogen. This finding must have suggested to Schr\"odinger that the electron charge density at the space point ${\boldsymbol{s}}$ at time $t$ is $\rho_{\mathrm{el}}(t,{\boldsymbol{s}}) = -e\Psi^*(t,{\boldsymbol{s}}) \Psi(t,{\boldsymbol{s}})$, and that the electron's electric current vector density ${\boldsymbol{j}}_{\mathrm{el}}(t,{\boldsymbol{s}}) = -e \frac{\hbar}{\mEL} \Im \left(\Psi^*(t,{\boldsymbol{s}}) \nabla_{\boldsymbol{s}} \Psi(t,{\boldsymbol{s}})\right)$. By (\ref{eq:probCONSERVATIONs}) this identification satisfies the electron charge conservation, viz. \begin{alignat}{1} \partial_t{\rho_{\mathrm{el}}(t,{\boldsymbol{s}})} + \nabla_{\boldsymbol{s}}\cdot{\boldsymbol{j}}_{\mathrm{el}}(t,{\boldsymbol{s}}) = \label{eq:chargeCONSERVATION} 0, \end{alignat} which we should have. Since the charge density of an electron, $\rho_{\mathrm{el}}$, has to integrate to $-e$, this requires the normalization $\int_{{\mathbb{R}}^3}\varrho(t,{\boldsymbol{s}})\mathrm{d}^3{s}= 1$. 
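The beating of $\varrho$ at the Bohr frequencies can be made concrete in the simplest possible setting. The sketch below is our toy illustration (replacing hydrogen by a particle in a box of width $\pi$, in natural units $\hbar=\mEL=1$, so that $E_n=n^2/2$): it superposes two stationary states and verifies that the density at a fixed point oscillates harmonically with the Bohr angular frequency $\omega_{2,1}=(E_2-E_1)/\hbar$.

```python
import numpy as np

# Two-mode superposition in a box of width pi (hbar = m = 1):
# psi_n(x) = sqrt(2/pi) sin(n x), E_n = n^2 / 2.
E1, E2 = 0.5, 2.0
x0 = 1.0                                  # fixed observation point
p1 = np.sqrt(2 / np.pi) * np.sin(1 * x0)
p2 = np.sqrt(2 / np.pi) * np.sin(2 * x0)
c1 = c2 = 1 / np.sqrt(2)

def rho(t):
    """Probability density at x0 for the two-mode superposition."""
    return abs(c1 * p1 * np.exp(-1j * E1 * t) + c2 * p2 * np.exp(-1j * E2 * t))**2

t = np.linspace(0.0, 20.0, 2001)
beat = rho(t) - (c1**2 * p1**2 + c2**2 * p2**2)
# The cross term equals 2 c1 c2 p1 p2 cos((E2 - E1) t): a pure Bohr-frequency oscillation.
predicted = 2 * c1 * c2 * p1 * p2 * np.cos((E2 - E1) * t)
print(np.max(np.abs(beat - predicted)))   # vanishes to rounding error
```

The density is the stationary term plus a single harmonic at $\omega_{2,1}$; with more modes in the superposition one gets a sum of such harmonics, one per pair $(n,n')$, exactly as in Schr\"odinger's bilinear formulas.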
And since $\rho_{\mathrm{el}}(t,{\boldsymbol{s}})$ and ${\boldsymbol{j}}_{\mathrm{el}}(t,{\boldsymbol{s}})$ are sums of terms which oscillate harmonically with the Bohr angular frequencies $\omega_{n,n'} = \frac1\hbar (E_{n'}-E_n)$ for hydrogen, using these expressions for the charge and current densities as source terms in the inhomogeneous Maxwell--Lorentz equations for the electromagnetic fields of the electron, \begin{alignat}{1} - \partial_t{{\boldsymbol{E}}_{\mathrm{el}}(t,{\boldsymbol{s}})} + c\nabla\times{\boldsymbol{B}}_{\mathrm{el}}(t,{\boldsymbol{s}}) &= \label{eq:MdotE} 4\pi {\boldsymbol{j}}_{\mathrm{el}}(t,{\boldsymbol{s}}),\\ \nabla\cdot{\boldsymbol{E}}_{\mathrm{el}}(t,{\boldsymbol{s}}) &= \label{eq:MdivE} 4\pi \rho_{\mathrm{el}}(t,{\boldsymbol{s}})\, , \end{alignat} coupled with the homogeneous Maxwell equations \begin{alignat}{1} \partial_t{{\boldsymbol{B}}_{\mathrm{el}}(t,{\boldsymbol{s}})} + c \nabla\times{\boldsymbol{E}}_{\mathrm{el}}(t,{\boldsymbol{s}}) &= \label{eq:MdotB} {\boldsymbol{0}}\, , \\ \nabla\cdot {\boldsymbol{B}}_{\mathrm{el}}(t,{\boldsymbol{s}}) &= \label{eq:MdivB} 0\, , \end{alignat} one finds that the electric field ${\boldsymbol{E}}_{\mathrm{el}}(t,{\boldsymbol{s}})$ and the magnetic induction field ${\boldsymbol{B}}_{\mathrm{el}}(t,{\boldsymbol{s}})$ which solve this system of Maxwell--Lorentz equations are likewise sums of fields which oscillate with the same Bohr hydrogen frequencies, plus an arbitrary vacuum field solution. This is a striking result that Bohr --- not in possession of a dynamical theory --- could only obtain from his hydrogen energies $E^{\mbox{\textrm{\tiny{Bohr}}}}_n$ by invoking Planck's postulate $h\nu = \Delta E$ for the electromagnetic radiation emitted / absorbed by matter. Not all is well, though. 
The problem is that this calculation says that the hydrogen atom is oscillating forever with the superposition of its eigenmodes, and likewise the electromagnetic radiation is a superposition of incoming and outgoing waves forever. This is not surprising, though, for the feedback from the Maxwell--Lorentz field equations for ${\boldsymbol{E}}_{\mathrm{el}},{\boldsymbol{B}}_{\mathrm{el}}$ into Schr\"odinger's matter-wave equation for $\Psi$ is absent. Schr\"odinger in \cite{ErwinWMe} used minimal coupling to inject ${\boldsymbol{E}}_{\mathrm{el}},{\boldsymbol{B}}_{\mathrm{el}}$ into the matter-wave equation for $\Psi$. Thus he introduced the potentials $(\phi_{\mathrm{el}}(t,{\boldsymbol{s}}),{\boldsymbol{A}}_{\mathrm{el}}(t,{\boldsymbol{s}}))$ of the electromagnetic fields ${\boldsymbol{B}}_{\mathrm{el}}(t,{\boldsymbol{s}})$ and ${\boldsymbol{E}}_{\mathrm{el}}(t,{\boldsymbol{s}})$, which are solutions to the inhomogeneous, linear partial differential equations \begin{alignat}{1} \textstyle -\frac1c\partial_t{{\boldsymbol{A}}_{\mathrm{el}}(t,{\boldsymbol{s}})} - \nabla_{\boldsymbol{s}}\phi_{\mathrm{el}}(t,{\boldsymbol{s}}) & = {\boldsymbol{E}}_{\mathrm{el}}(t,{\boldsymbol{s}}), \label{Aevolve}\\ \textstyle \nabla_{\boldsymbol{s}}\boldsymbol{\times} {{\boldsymbol{A}}_{\mathrm{el}}(t,{\boldsymbol{s}})} & = {\boldsymbol{B}}_{\mathrm{el}}(t,{\boldsymbol{s}}).\label{Aconstraint} \end{alignat} Note that these two equations comprise an evolution equation for ${\boldsymbol{A}}_{\mathrm{el}}$, given ${\boldsymbol{E}}_{\mathrm{el}}$ and $\phi_{\mathrm{el}}$, plus a constraint equation for ${\boldsymbol{A}}_{\mathrm{el}}$, given ${\boldsymbol{B}}_{\mathrm{el}}$. Another equation is needed, for $\phi_{\mathrm{el}}$. 
A compelling choice from the perspective of relativity is the \emph{Lorenz gauge} \begin{alignat}{1}\label{LorLorGAUGE} \textstyle \frac1c\partial_t{\phi_{\mathrm{el}}(t,{\boldsymbol{s}})} + \nabla\cdot{\boldsymbol{A}}_{\mathrm{el}}(t,{\boldsymbol{s}}) & = 0, \end{alignat} which is an evolution equation for $\phi_{\mathrm{el}}$. Also the Coulomb gauge condition $\nabla_{\boldsymbol{s}}\cdot {\boldsymbol{A}}_{\mathrm{el}} = 0$ is popular, although it is not Lorentz covariant. Aside from demanding that all fields decay to zero when $|{\boldsymbol{s}}|\to\infty$ together with their derivatives, we need initial data for ${\boldsymbol{A}}_{\mathrm{el}}$ and for $\phi_{\mathrm{el}}$, but let's not digress. The minimal-coupling substitution for energy $E\mapsto E +e\phi_{\mathrm{el}}$ and momentum ${\boldsymbol{p}}\mapsto{\boldsymbol{p}} +\frac1c e{\boldsymbol{A}}_{\mathrm{el}}$, known from classical mechanics of the motion of a \emph{test electron}, a point particle with charge $-e$ in \emph{given} electromagnetic fields, changes (\ref{eq:ERWINeqnMatterWaveBOhydrogen}) into \begin{equation} \left(i \hbar \partial_t + e \phi_{\mathrm{el}}(t,{\boldsymbol{s}})\right)\Psi(t,{\boldsymbol{s}}) = \tfrac{1}{2\mEL}\left(-i\hbar \nabla_{\boldsymbol{s}} + \textstyle\frac{e}{c}{\boldsymbol{A}}_{\mathrm{el}}(t,{\boldsymbol{s}})\right)^2\Psi(t,{\boldsymbol{s}}) - \tfrac{e^2}{|{\boldsymbol{s}}|}\Psi(t,{\boldsymbol{s}}). \label{eq:ERWINeqnAphi} \end{equation} By inserting electromagnetic potential fields with simple periodic time dependence $\sin(\omega_{n,n'}t)$ Schr\"odinger was able to estimate that indeed the solution of (\ref{eq:ERWINeqnAphi}) will show a resonance and consist predominantly of a superposition of the eigenmodes for $E_n$ and $E_{n'}$. 
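Schr\"odinger's resonance estimate can be mimicked in a drastically simplified stand-in (our toy model, not his calculation; units with $\hbar=1$): a two-level system with level spacing $\omega_{2,1}=1$ driven by a weak periodic potential. Numerically integrating the time-dependent Schr\"odinger equation confirms that the second eigenmode is strongly excited only when the driving frequency matches $\omega_{2,1}$.

```python
import numpy as np

def max_excitation(omega, V=0.02, T=170.0, dt=0.01):
    """Drive a two-level system (E1 = 0, E2 = 1) with V cos(omega t) sigma_x
    and return the largest population reached in the upper level."""
    H0 = np.diag([0.0, 1.0]).astype(complex)
    sx = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)

    def f(t, psi):                       # i dpsi/dt = H(t) psi
        return -1j * (H0 + V * np.cos(omega * t) * sx) @ psi

    psi = np.array([1.0, 0.0], dtype=complex)
    t, pmax = 0.0, 0.0
    for _ in range(int(T / dt)):         # classical RK4 time stepping
        k1 = f(t, psi)
        k2 = f(t + dt / 2, psi + dt / 2 * k1)
        k3 = f(t + dt / 2, psi + dt / 2 * k2)
        k4 = f(t + dt, psi + dt * k3)
        psi = psi + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
        pmax = max(pmax, abs(psi[1])**2)
    return pmax

print(max_excitation(1.0))   # on resonance: near-complete transfer
print(max_excitation(2.0))   # off resonance: negligible transfer
```

On resonance the state becomes, for most of the Rabi cycle, predominantly a superposition of the two eigenmodes involved, in line with Schr\"odinger's estimate for (\ref{eq:ERWINeqnAphi}).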
In 1927 Dirac \cite{GoldenRule} computed perturbatively that the solution of (\ref{eq:ERWINeqnAphi}) will make a transition from an initial $n$-th eigenstate to the $n'$-th eigenstate of (\ref{eq:ERWINeqnMatterWaveBOhydrogen}), or to the continuum. His formula for the transition probability was later christened the ``Golden Rule'' by Fermi. Note though that in this calculation the external electromagnetic potential fields oscillate forever, \emph{by assumption}. A definitive assessment of Schr\"odinger's matter-wave theory can only be obtained by non-perturbatively studying the self-consistent model, nowadays known as the Schr\"odinger--Maxwell system of equations, consisting of Schr\"odinger's equation (\ref{eq:ERWINeqnAphi}) and the electromagnetic potential equations (\ref{Aevolve}), (\ref{Aconstraint}), (\ref{LorLorGAUGE}), plus Maxwell's equations (\ref{eq:MdotE})--(\ref{eq:MdivB}), now with ${\boldsymbol{j}}_{\mathrm{el}}(t,{\boldsymbol{s}}) = -e \Im \bigl(\Psi^*[ \frac{\hbar}{\mEL} \nabla + i\frac{e}{\mEL c}{\boldsymbol{A}}_{\mathrm{el}}]\Psi\bigr)(t,{\boldsymbol{s}})$ as electron current vector density pertinent to minimal coupling between $\Psi$ and $(\phi_{\mathrm{el}},{\boldsymbol{A}}_{\mathrm{el}})$. The Schr\"odinger--Maxwell system is expected to yield emission of an electromagnetic wave at the expense of the electron-proton system's energy, in the process causing $\Psi$ to settle down in a ground state. This expectation is based on the hyperbolicity of the Maxwell field equations in concert with the conservation of the energy functional \begin{eqnarray} \label{eq:SMenergyFctl} \hspace{-3pt}\Efrak(\Psi,{\boldsymbol{E}}_{\mathrm{el}},{\boldsymbol{B}}_{\mathrm{el}}) \!\!& := &\!\! \frac{1}{2\mEL} \displaystyle\int_{{\mathbb{R}}^3}\!\! \abs{\left(-i\hbar \nabla_{\boldsymbol{s}} + \textstyle\frac{e}{c}{\boldsymbol{A}}_{\mathrm{el}}(t,{\boldsymbol{s}})\right)\Psi(t,{\boldsymbol{s}})}^2 d^3\!s \\ \notag && \displaystyle -e^2\int_{{\mathbb{R}}^3}\!\! 
\frac{|{\Psi}|^2(t,{\boldsymbol{s}}) }{|{\boldsymbol{s}}|} d^3\!s + e^2 \frac12\int_{{\mathbb{R}}^3}\! \int_{{\mathbb{R}}^3} \!\!\!\! \frac{|\Psi|^2(t,{\boldsymbol{s}})|\Psi|^2(t,{\boldsymbol{s}}')}{\abs{{\boldsymbol{s}}-{\boldsymbol{s}}'}}d^3\!s d^3\!s' \\ \notag && \displaystyle + \frac{1}{8\pi}\int_{{\mathbb{R}}^3}\!\left(\abs{{\boldsymbol{E}}_{\mathrm{el}}^{\mbox{\tiny{rad}}}(t,{\boldsymbol{s}})}^2 + \abs{{\boldsymbol{B}}_{\mathrm{el}}^{\mbox{\tiny{rad}}}(t,{\boldsymbol{s}})}^2 \right)d^3\!s, \end{eqnarray} where ${}^{\mbox{\tiny{rad}}}$ refers to the fields without the Coulomb field of the nucleus. This functional can easily be shown to be bounded below. Emission of electromagnetic radiation by a localized oscillating $\rho_{\mathrm{el}}(t,{\boldsymbol{s}})$ and ${\boldsymbol{j}}_{\mathrm{el}}(t,{\boldsymbol{s}})$ will increase the electromagnetic field energy integral at the expense of the $\Psi$-energy, which is bounded below, and so the emission process should cease eventually. However, while plausible, to the best of our knowledge this has not yet been established rigorously. For a mathematical review of this model (though in the Coulomb gauge), see \cite{KomechLECT}. Even if the just described scenario can be established rigorously, which would be a fine result, this model of hydrogen coupled with the classical electromagnetic Maxwell fields does not seem to produce quantitatively acceptable results. For instance, the energy ground state in this model corresponds to minimizing $\Efrak(\Psi,{\boldsymbol{E}}_{\mathrm{el}},{\boldsymbol{B}}_{\mathrm{el}})$ for ${\boldsymbol{A}}_{\mathrm{el}}\equiv{\boldsymbol{0}}$ and vanishing electromagnetic radiation fields, and $\Psi(t,{\boldsymbol{s}}) = e^{-i Et/\hbar}\psi({\boldsymbol{s}})$. We set $\Efrak(e^{-iEt/\hbar}\psi,{\boldsymbol{0}},{\boldsymbol{0}})=:\Ffrak(\psi)$, thus \begin{eqnarray} \label{eq:psiFctl} \hspace{-3pt}\Ffrak(\psi) := \!\! \frac{\hbar^2}{2\mEL} \int_{{\mathbb{R}}^3}\!\! 
\abs{\nabla \psi}^2({\boldsymbol{s}}) d^3\!s -e^2\!\!\!\int_{{\mathbb{R}}^3}\!\! \frac{|\psi|^2({\boldsymbol{s}}) }{\abs{{\boldsymbol{s}}}}d^3\!s + e^2\frac{1}{2}\int_{{\mathbb{R}}^3}\! \int_{{\mathbb{R}}^3} \!\!\!\! \frac{|\psi|^2({\boldsymbol{s}})|\psi|^2({\boldsymbol{s}}')}{\abs{{\boldsymbol{s}}-{\boldsymbol{s}}'}}d^3\!s d^3\!s'\!.\ \end{eqnarray} As shown in \cite{BenguriaBrezisLieb}, \cite{BenguriaLieb}, the functional $\Ffrak$ has a necessarily spherically symmetric minimizer $\psi_1$ on the Sobolev space $H^1({\mathbb{R}}^3)$, satisfying $\int_{{\mathbb{R}}^3} |\psi_1|^2({\boldsymbol{s}}) d^3\!s =1$; spherical symmetry follows from uniqueness by convexity. The same spherical-symmetry-through-uniqueness-by-convexity argument for any putative minimizer was subsequently rediscovered and emphasized in \cite{KK}. All these authors noticed that, while $\psi\mapsto \Ffrak(\psi)$ is neither convex nor concave, the map $|\psi|^2\mapsto \Ffrak(\psi)$ is convex, hence any minimizer must be unique, and uniqueness implies its spherical symmetry. By the virial theorem for systems with Coulomb interactions one has $\Ffrak(\psi_1)=-\tfrac{\hbar^2}{2\mEL} \int_{{\mathbb{R}}^3}\!\! |{\nabla \psi_1}|^2({\boldsymbol{s}}) d^3\!s < 0$, as expected. The minimizer $\psi_1$ satisfies the pertinent Euler--Lagrange equation \cite{BenguriaLieb}, which is a special case of (\ref{eq:ERWINeqnAphi}), with $\Psi(t,{\boldsymbol{s}})=e^{-iE_gt/\hbar}\psi({\boldsymbol{s}})$ and $\phi_{\mathrm{el}}$ the electrostatic Coulomb potential of $\rho_{\mathrm{el}}$, viz. 
\begin{equation} -\frac{\hbar^2}{2\mEL}\Delta_{\boldsymbol{s}}\psi({\boldsymbol{s}}) - e^2\frac{1}{|{\boldsymbol{s}}|}\psi({\boldsymbol{s}}) + e^2 {\displaystyle\int_{{\mathbb{R}}^3}} \frac{1}{|{\boldsymbol{s}}-{\boldsymbol{s}}'|}|\psi|^2({\boldsymbol{s}}')d^3s'\psi({\boldsymbol{s}}) = E_g \psi({\boldsymbol{s}}); \label{eq:ERWINeqnEphiEXPLICIT} \end{equation} the eigenvalue $E_g$ is also the Lagrange multiplier for the constraint $\|\psi\|_{L^2}=1$. However, in this \emph{nonlinear eigenvalue problem} the eigenvalue $E_g$ does not coincide with the minimum of $\Ffrak(\psi)$, which is easily seen as follows. Setting $\psi=\psi_1$ in \eqref{eq:ERWINeqnEphiEXPLICIT}, then multiplying \eqref{eq:ERWINeqnEphiEXPLICIT} by $\psi_1$ and integrating over ${\mathbb{R}}^3$, and recalling the normalization of $\psi_1$, for the ground state energy $E_g$ one obtains \begin{eqnarray} \label{eq:energy} \hspace{-10pt} E_g =\frac{\hbar^2}{2\mEL} \int_{{\mathbb{R}}^3}\!\! \abs{\nabla \psi_1}^2({\boldsymbol{s}}) d^3\!s - e^2\int_{{\mathbb{R}}^3}\!\! \frac{|{\psi_1}|^2({\boldsymbol{s}})}{\abs{{\boldsymbol{s}}}}d^3\!s + e^2 \! \int_{{\mathbb{R}}^3}\!\! \int_{{\mathbb{R}}^3} \!\!\!\! \frac{|\psi_1|^2({\boldsymbol{s}})|\psi_1|^2({\boldsymbol{s}}')}{\abs{{\boldsymbol{s}}-{\boldsymbol{s}}'}}d^3\!s d^3\!s',\ \end{eqnarray} i.e. \begin{eqnarray} \label{eq:energyB} \hspace{-10pt} E_g = \Ffrak(\psi_1) + e^2 \frac12 \int_{{\mathbb{R}}^3}\!\! \int_{{\mathbb{R}}^3} \!\!\!\! \frac{|\psi_1|^2({\boldsymbol{s}})|\psi_1|^2({\boldsymbol{s}}')}{\abs{{\boldsymbol{s}}-{\boldsymbol{s}}'}}d^3\!s d^3\!s';\ \end{eqnarray} cf. Eq.(6) in \cite{BazleySeydel}. And so we have \begin{eqnarray} \label{eq:EgBIGGERthanFCTLmin} E_g > \Ffrak(\psi_1). 
\end{eqnarray} \newpage By inequality \eqref{eq:EgBIGGERthanFCTLmin} the energetic significance of $E_g$ is obscure in this theory; recall that $\Ffrak(\psi_1)$ is the ground state energy of the conserved energy functional $\Efrak$, while $E_g$ is the lowest eigenvalue of the nonlinear eigenvalue problem. Moreover, for all non-vanishing $\psi$, we obviously also have \begin{eqnarray} \label{eq:psiFctlBIGGERthan} \hspace{-3pt}\Ffrak(\psi) > \frac{\hbar^2}{2\mEL} \int_{{\mathbb{R}}^3}\!\! \abs{\nabla \psi}^2({\boldsymbol{s}}) d^3\!s -\int_{{\mathbb{R}}^3}\!\! \frac{e^2}{\abs{{\boldsymbol{s}}}}\abs{ \psi}^2({\boldsymbol{s}}) d^3\!s , \end{eqnarray} which is the usual energy functional for the textbook QM Schr\"odinger equation of the hydrogen eigenvalue problem in Born--Oppenheimer approximation, and so \begin{eqnarray} \label{eq:FCTLminBIGGERthanEminBOHR} \Ffrak(\psi_1) > E^{\mbox{\textrm{\tiny{Bohr}}}}_1. \end{eqnarray} Since the conservation law for the energy functional makes it plain that in this theory the ionization energy has to be identified with $|\Ffrak(\psi_1)|$, and since by \eqref{eq:FCTLminBIGGERthanEminBOHR} this is smaller than $|E^{\mbox{\tiny{Bohr}}}_1|$, while $|E^{\mbox{\tiny{Bohr}}}_1|$ agrees quite well with the empirical ionization energy, it follows that this theory gives an incorrect ionization energy for hydrogen. Of course, the inequality $\Ffrak(\psi_1) > E^{\mbox{\tiny{Bohr}}}_1$ is only qualitative; in principle it leaves the possibility that the quantitative discrepancy could be unnoticeable. To rigorously eliminate the possibility of a quantitatively insignificant shift, one needs an explicit lower bound on $\Ffrak(\psi_1)$ which is strictly bigger than $E^{\mbox{\tiny{Bohr}}}_1$ by a significant amount. 
This should be possible to accomplish with the technique of \cite{BazleySeydel}, which relies on a clever choice of a trial function; our first attempt has not produced the desired bound, and so we defer producing one to some later time. However, if we do not insist on a rigorous proof and are willing to accept numerical results, a verdict exists! Indeed, the self-consistent Schr\"odinger matter-wave ground state problem for hydrogen (\ref{eq:ERWINeqnEphiEXPLICIT}) is mathematically identical with the Hartree \cite{Hartree} and with the Hartree--Fock (HF) approximation to the traditional Schr\"odinger ground state of the hydrogen anion $H^-$, a.k.a. hydride in the chemical literature \cite{CBSM}, with the Schr\"odinger matter-wave ground state energy $E_g$ equal to one-half of the HF ground state energy of hydride ($E_{H^-}^{\mbox{\tiny{HF}}}$, say), so $E_g = \frac12 E_{H^-}^{\mbox{\tiny{HF}}}$. The HF ground state energy of hydride has been computed numerically to astonishing precision (see references 60--66 in \cite{CBSM}; see also \cite{Rau}), which (retaining just a few decimal places of precision) translates into $E_g \approx 0.488 E^{\mbox{\textrm{\tiny{Bohr}}}}_1$ for the ground state energy of (\ref{eq:ERWINeqnEphiEXPLICIT}). This is so far off the empirical data that we do not see how the model could possibly account for the electromagnetic radiation energy released when an electron and a proton recombine into a hydrogen atom. 
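These numbers are straightforward to cross-check arithmetically. The sketch below reproduces the $0.488$ ratio from the rounded value $E_{H^-}^{\mbox{\tiny{HF}}}\approx 0.976\, E^{\mbox{\textrm{\tiny{Bohr}}}}_1$ quoted in the surrounding text, and adds the standard single-exponential variational estimate for the two-electron problem (our textbook illustration, not a result from \cite{CBSM}), which already shows how far above $E^{\mbox{\textrm{\tiny{Bohr}}}}_1$ the Hartree-type hydride energy sits.

```python
# Cross-check of E_g = (1/2) E_HF(H^-), using the rounded literature value
# E_HF(H^-) ~ 0.976 * E_1^Bohr quoted in the surrounding text.
E1_bohr = -13.6057                    # Bohr ground state of hydrogen, in eV
E_hf_hydride = 0.976 * E1_bohr        # rounded Hartree-Fock value for hydride
E_g = 0.5 * E_hf_hydride
print(E_g / E1_bohr, E_g)             # ratio ~ 0.488, i.e. E_g ~ -6.64 eV

# Textbook single-exponential trial orbital ~ exp(-a r) for a two-electron
# system with nuclear charge Z (hartree units):
# E(a) = a^2 - 2 Z a + (5/8) a, minimized at a* = Z - 5/16, E(a*) = -(Z - 5/16)^2.
def variational_two_electron(Z):
    a = Z - 5.0 / 16.0
    return -a * a

E_hydride_var = variational_two_electron(1.0)   # ~ -0.4727 hartree
print(E_hydride_var)
```

The resulting $|E_g|\approx 6.6\,$eV is less than half the empirical $\approx 13.6\,$eV ionization energy of hydrogen, which is the quantitative failure referred to above.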
The upshot of the above discussion, cautiously expressed, is: \begin{quote} \emph{Schr\"odinger's ``matter-wave'' interpretation of his wave function leads to a non-linear theory of hydrogen that does not seem to be physically viable.} \end{quote} \noindent \textbf{Remark}: \emph{We have noted that the ground state problem for hydrogen in Schr\"odinger's matter-wave model is mathematically identical (up to rescaling of the eigenvalue by a factor 2) with the Hartree(--Fock) approximation to the conventional (textbook) QM Schr\"odinger equation (see (\ref{eq:ERWINeqnZ}), (\ref{eq:HAMzATOM}) below) for the ground state problem of hydride. Incidentally, since $E_{H^-}^{\mbox{\tiny{HF}}}\approx 0.976 E^{\mbox{\textrm{\tiny{Bohr}}}}_1$ the Hartree--Fock approximation to the conventional Schr\"odinger ground state of hydride fails to predict the existence of a bound state for this two-electron problem, which has exactly one bound state} \cite{Hill}. \emph{Hydride is the lightest two-electron ion in the isoelectronic family of helium, and the Hartree(--Fock) functional and Hartree(--Fock) equation for the Hartree(--Fock) approximation to the ground state of this family is obtained by replacing $-e^2$ by $-Ze^2$ in the attractive potential of the nucleus in the hydrogen problem \eqref{eq:psiFctl} and \eqref{eq:ERWINeqnEphiEXPLICIT}. A unique positive solution of the Hartree equation for helium satisfying $\|\psi\|_{L^2}=1$ was first shown to exist in} \cite{Reeken}, \emph{and that it coincides with the minimizer of the Hartree functional was shown in} \cite{BazleySeydel}. 
\emph{The Hartree--Fock model is a widely used mathematical approximation to the textbook Schr\"odinger equation for many-electron problems, successfully so when $Z>1$, but that should not be misconstrued as supporting Schr\"odinger's matter-wave ontology, his own hopes to the contrary, expressed in 1927} (see p.472 of \cite{Solvay}), \emph{notwithstanding.} \medskip Schr\"odinger himself eventually realized that a matter-wave ontology was not viable, but before he came to this conclusion he generalized his wave equation \eqref{eq:ERWINeqnMatterWaveBOhydrogen} to the many-body problem and made further important discoveries. In the next subsection we briefly summarize Schr\"odinger's attempt at a matter-wave interpretation of multi-electron atoms, again in Born--Oppenheimer approximation, for it helps to appreciate the subsequent change of perspective offered by Born. \vspace{-.2truecm} \subsection{Helium, Lithium, Beryllium, etc. as per Schr\"odinger}\vspace{-.1truecm} We begin with an English translation of Schr\"odinger's own words: \begin{quote} ``We have repeatedly called attention to the fact that the $\Psi$-function itself cannot and may not be interpreted directly in terms of three-dimensional space --- however much the one-electron problem tends to mislead us on this point --- because it is in general a function in configuration space, not real space.'' (Quoted from \cite{WaveMechanics}, p.120/1.) \end{quote} \noindent In 1926, when Schr\"odinger arrived at his equation for an $N$-particle system of many nuclei and electrons, which may form any ordinary piece of everyday matter, he obtained a $t$-dependent $\Psi$ function on $N$-particle {configuration space}. So, at time $t$, $\Psi$ is a function of a high-dimensional vector variable $\vec{\boldsymbol{q}}=({\boldsymbol{q}}_1,...,{\boldsymbol{q}}_N)\in {\mathbb{R}}^{3N}$ formed by the generic position variables of the $N$ point particles. 
Restricting ourselves to an $N$-electron atom or ion with a nucleus of charge $Ze$ fixed at the origin, with $Z\in{\mathbb{N}}$ (though $Z\leq 92$ in nature), Schr\"odinger's $N$-body generalization of (\ref{eq:ERWINeqnMatterWaveBOhydrogen}) reads \begin{equation} i \hbar \partial_t\Psi(t,\vec{\boldsymbol{q}}) = H \Psi(t,\vec{\boldsymbol{q}}) \label{eq:ERWINeqnZ} \end{equation} \vspace{-.5truecm} \noindent with \vspace{-.3truecm} \begin{equation} H = \sum_{k=1}^N \frac{1}{2\mEL} \big(- i\hbar \nabla_{{\boldsymbol{q}}_k}\big)^2 - \sum_{k=1}^N \frac{Ze^2}{|{\boldsymbol{q}}_k|} + \sum\sum_{\hskip-.7truecm 1 \leq j < k \leq N} \frac{e^2}{|{\boldsymbol{q}}_j-{\boldsymbol{q}}_k|}. \label{eq:HAMzATOM} \end{equation} It is known that $H$ has infinitely many discrete eigenvalues $E_1<E_2<\cdots< 0$ below the essential spectrum $\sigma_{\mbox{\tiny{ess}}}$ (which itself has a strictly negative minimum), each one at most finitely degenerate, whose eigenfunctions represent bound states \cite{ReedSimonBOOKiv}. Let ${\boldsymbol{d}}(n)$, for $n\in{\mathbb{N}}$, denote a set of labels that label the eigenstates with the same eigenvalue $E_n$, i.e. write $H\psi^{}_{n,{\boldsymbol{d}}(n)}(\vec{\boldsymbol{q}}) = E_n\psi^{}_{n,{\boldsymbol{d}}(n)}(\vec{\boldsymbol{q}})$. Then the general bound state solution is given by \begin{equation} \Psi(t,\vec{\boldsymbol{q}}) = \sum_{n\in{\mathbb{N}}} e^{-i E_n t/\hbar}\sum_{{\boldsymbol{d}}(n)} c^{}_{n,{\boldsymbol{d}}(n)}\psi^{}_{n,{\boldsymbol{d}}(n)}(\vec{\boldsymbol{q}}). \label{eq:PSIboundGENERALz} \end{equation} The same year, Sommerfeld's (former) students Uns\"old and Heisenberg made quantitatively promising calculations for the $N=2$ helium problem ($Z=2$), using first-order perturbation techniques with hydrogenic wave functions. 
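A back-of-the-envelope version of such a first-order calculation, in hartree units, runs as follows (this is our sketch of the standard textbook computation, not Uns\"old's or Heisenberg's actual one): placing both electrons in the hydrogenic $1s$ orbital of (\ref{eq:HAMzATOM}) without the repulsion term gives $E^{(0)}=-Z^2$, and the first-order shift from the mutual Coulomb repulsion is $\frac58 Z$.

```python
# First-order perturbation estimate for a two-electron atom of nuclear charge Z,
# in hartree units: both electrons in the hydrogenic 1s level, plus the
# first-order Coulomb repulsion <1s 1s| 1/|q1 - q2| |1s 1s> = (5/8) Z.
def first_order_two_electron(Z):
    E0 = -Z**2           # unperturbed energy: 2 * (-Z^2 / 2)
    E1 = 5.0 / 8.0 * Z   # first-order repulsion shift
    return E0 + E1

print(first_order_two_electron(2))   # -2.75 hartree for helium
```

Compared with the measured helium ground-state energy of about $-2.90$ hartree, this crude estimate is already within roughly $5\%$, which is the kind of quantitatively promising agreement referred to above.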
Hylleraas a few years later obtained more accurate spectral results for helium by a variational method that included the distance between the electrons as a variable for the wave function, and also the continuous part of the hydrogenic spectrum; see the charming article \cite{Hylleraas} for Hylleraas' own reminiscences. Soon after, many other empirically known facts about matter were quite accurately extracted from Schr\"odinger's many-body equation (\ref{eq:ERWINeqnZ}) with Hamiltonian $H$ given by (\ref{eq:HAMzATOM}), most notably the periodic table of the chemical elements by using Slater's determinantal wave functions built with hydrogenic eigenfunctions plus an extra bit (explained with spin, subsequently); see \cite{GeroETal} for a more recent rigorous contribution. There was no doubt that (\ref{eq:ERWINeqnZ}) with Hamiltonian given by (\ref{eq:HAMzATOM}) is an important equation for computing answers to questions about large atoms/ions, the atomic spectra among them. While the hydrogen $\Psi(t,{\boldsymbol{q}})$ could be confused with a matter-wave $\Psi(t,{\boldsymbol{s}})$ on physical space and time, the fact that the $N$-body $\Psi(t,{\boldsymbol{q}}_1,...,{\boldsymbol{q}}_N)$ lives on $N$-particle configuration space ${\mathbb{R}}^{3N}$ (at each time $t\in{\mathbb{R}}$) should have made it plain to Schr\"odinger that it was absurd to think that $\Psi$ was associated with a fundamental matter-wave ontology on physical space (and time). Yet for a while he continued to maintain that there are no particles at the fundamental subatomic level and instead proposed that his $\Psi$ does supply a matter-wave ontology, not directly so but indirectly, as follows.
Schr\"odinger showed in \cite{ErwinWMd} that the function $\varrho(t,\vec{\boldsymbol{q}}) := \Psi^*(t,\vec{\boldsymbol{q}}) \Psi(t,\vec{\boldsymbol{q}})$ on ${\mathbb{R}}^{3N}$ and the $3N$-dimensional vector function $\vec{\boldsymbol{J}}(t,\vec{\boldsymbol{q}}):=\frac{\hbar}{\mEL} \Im \left(\Psi^*(t,\vec{\boldsymbol{q}}) \nabla_{\vec{\boldsymbol{q}}} \Psi(t,\vec{\boldsymbol{q}})\right)$ on ${\mathbb{R}}^{3N}$, jointly satisfy the continuity equation \begin{alignat}{1} \partial_t{\varrho(t,\vec{\boldsymbol{q}})} + \nabla{\,\boldsymbol{\cdot}\,}\vec{\boldsymbol{J}}(t,\vec{\boldsymbol{q}}) = \label{eq:probCONSERVATIONqN} 0; \end{alignat} here, $\nabla{\,\boldsymbol{\cdot}\,}\ $ is a $3N$-dimensional divergence operation; i.e., it acts on vectors in ${\mathbb{R}}^{3N}$. Equation (\ref{eq:probCONSERVATIONqN}) has the important implication that $\int_{{\mathbb{R}}^{3N}}|\Psi|^2(t,{\boldsymbol{q}}_1,...,{\boldsymbol{q}}_N) \mathrm{d}^{3N}q$ is constant in time if it is finite at $t=0$. Schr\"odinger \cite{ErwinWMd} now proposed that $\Psi$ yields a matter-wave ontology in physical space (and time) through a many-electron charge density \begin{alignat}{1} \rho_{\mathrm{el}} (t,{\boldsymbol{s}}) := - e \sum_n \int_{{\mathbb{R}}^{3(N-1)}}\varrho(t,{\boldsymbol{q}}_1,...,{\boldsymbol{s}},...,{\boldsymbol{q}}_N) \mathrm{d}^{3(N-1)}q, \label{eq:chargeRHO} \end{alignat} where ${\boldsymbol{s}}$ at r.h.s.(\ref{eq:chargeRHO}) is in the $n$-th position slot. For (\ref{eq:chargeRHO}) to yield the total charge $\int_{{\mathbb{R}}^3} \rho_{\mathrm{el}} (t,{\boldsymbol{s}})\mathrm{d}^3s = -Ne$, Schr\"odinger stipulated that $\int_{{\mathbb{R}}^{3N}}|\Psi|^2(t,{\boldsymbol{q}}_1,...,{\boldsymbol{q}}_N) \mathrm{d}^{3N}q~=~1$.
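Schr\"odinger's normalization bookkeeping is readily confirmed in a minimal numerical toy (one spatial dimension standing in for ${\mathbb{R}}^3$, $N=2$, a product state, $e=1$; all of these choices are illustrative assumptions, not taken from the text): with $\int|\Psi|^2 = 1$, the marginal density of (\ref{eq:chargeRHO}) integrates to $-Ne$.

```python
import numpy as np

# Toy check (1D stand-in for R^3, N = 2, product state Psi = psi1(q1) psi2(q2);
# e = 1 is an illustrative unit): once |Psi|^2 is normalized to 1, the marginal
# "charge density" integrates to -N e.
e = 1.0
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
psi1 = np.exp(-x**2 / 2.0)
psi2 = x * np.exp(-x**2 / 2.0)
psi1 /= np.sqrt(np.sum(psi1**2) * dx)   # normalize each factor to 1
psi2 /= np.sqrt(np.sum(psi2**2) * dx)
rho_el = -e * (psi1**2 + psi2**2)       # marginal of |Psi|^2 in each slot
total_charge = np.sum(rho_el) * dx      # -> -2 e
```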
Similarly, in \cite{ErwinWMd} he defined the many-electron current vector density as \begin{alignat}{1} {\boldsymbol{j}}_{\mathrm{el}} (t,{\boldsymbol{s}}) : = -e \sum_n \int_{{\mathbb{R}}^{3(N-1)}}{\boldsymbol{j}}_n(t,{\boldsymbol{q}}_1,...,{\boldsymbol{s}},...,{\boldsymbol{q}}_N) \mathrm{d}^{3(N-1)}q, \label{eq:chargeJ} \end{alignat} with \begin{alignat}{1}\label{eq:jOFsDEF} {\boldsymbol{j}}_n(t,{\boldsymbol{q}}_1,...,{\boldsymbol{s}},...,{\boldsymbol{q}}_N) := \tfrac{\hbar}{\mEL} \Im\! \left(\Psi^*(t,{\boldsymbol{q}}_1,...,{\boldsymbol{s}},...{\boldsymbol{q}}_N) \nabla_{{\boldsymbol{s}}} \Psi(t,{\boldsymbol{q}}_1,...,{\boldsymbol{s}},...{\boldsymbol{q}}_N)\right) \end{alignat} and ${\boldsymbol{s}}$ in the $n$-th position slot. He noted that $\rho_{\mathrm{el}}$ and ${\boldsymbol{j}}_{\mathrm{el}}$ jointly satisfy the continuity equation (\ref{eq:chargeCONSERVATION}). When evaluated with the general bound state solution (\ref{eq:PSIboundGENERALz}), again one finds harmonically oscillatory terms with Bohr-type frequencies $\propto (E_n-E_{n'})$. Inserted as source terms for the inhomogeneous Maxwell equations (\ref{eq:MdotE}), (\ref{eq:MdivE}), coupled with the homogeneous equations (\ref{eq:MdotB}), (\ref{eq:MdivB}), one finds fields which oscillate with these Bohr-type frequencies, extending Schr\"odinger's striking hydrogen result to many-electron atoms. Yet again, thinking of ${\boldsymbol{E}}_{\mathrm{el}}$ and ${\boldsymbol{B}}_{\mathrm{el}}$ as actual fields produced by all the electrons, their minimally coupled feedback into the Schr\"odinger equation leads to a contradiction with the just mentioned striking results obtained from (\ref{eq:ERWINeqnZ}) with Hamiltonian $H$ given by (\ref{eq:HAMzATOM}). Schr\"odinger himself must have had conflicting thoughts in his mind already in 1926, for in \cite{ErwinWMe} he writes that ``$\Psi\overline{\Psi}$ is a kind of \emph{weight-function} in the system's configuration space. 
The \emph{wave-mechanical} configuration of the system is a \emph{superposition} of many, strictly speaking of \emph{all}, point-mechanical configurations kinematically possible.'' This sounds more akin to a many-worlds-type theory; for more on this, see \cite{ValiaETal}. As already mentioned, Schr\"odinger would eventually abandon his matter-wave ontology, and also his ``many-worlds type'' thoughts. Yet Schr\"odinger would continue to reject a particle ontology of matter. \section{Born -- de Broglie -- Bohm-inspired approach to \\ a quantum mechanics of radiating atoms (etc.)} We begin with an English translation of de Broglie's own words: \begin{quote} ``We cannot recall here the successes obtained by this method (papers by Messrs Schr\"odinger, ..., etc.), but we must insist on the difficulties of a conceptual type that it raises. Indeed let us consider, for simplicity, a system of $N$ material points each possessing three degrees of freedom. The configuration space is in an essential way formed by means of the coordinates of the points, and yet Mr. Schr\"odinger assumes that in atomic systems material points no longer have a clearly defined position. It seems a little paradoxical to construct a configuration space with points that do not exist.'' (Quoted on p.379 of the arXiv version in \cite{Solvay}.) \end{quote} De Broglie and Born had no problems with a particle ontology, and thought of $\Psi$ not ontologically but nomologically. Max Born seems to have been the first to interpret $\Psi$ as a guiding field for the particles, followed by de Broglie, whose insights were later rediscovered and advanced by Bohm; note though that the existence of a guiding field for electrons was first postulated by de Broglie two or three years earlier, inspired by Einstein's ideas \cite{EinsteinPHOTON} that photons are guided in their motion by ``ghost fields.'' Born also re-interpreted $|\Psi|^2$ not as a weight-function in Schr\"odinger's sense but as a probability density.
This probability interpretation became part of the standard textbook narrative which, unfortunately, usually leaves out the guiding field interpretation, and without it the practical success of using $|\Psi|^2$ as a probability density becomes mysterious, indeed rather incomprehensible. We will retain the guiding field interpretation. \subsection{Particles and guiding fields} \subsubsection{Born's probability rule} If one has $N$ electrons with generic positions ${\boldsymbol{q}}_n\in {\mathbb{R}}^3$, then Schr\"odinger's many-electron ``charge density function'' (\ref{eq:chargeRHO}) can be rewritten as \begin{alignat}{1} \rho_{\mathrm{el}} (t,{\boldsymbol{s}}) =\int_{{\mathbb{R}}^{3N}} \Big({\textstyle\sum\limits_n} - e \delta_{{\boldsymbol{q}}_n}({\boldsymbol{s}}) \Big) \varrho(t,\vec{\boldsymbol{q}}) \mathrm{d}^{3N}q, \label{eq:chargeRHOexpect} \end{alignat} and since $\varrho \geq 0$ integrates to 1 (as Schr\"odinger had to stipulate), this looks like the expected value of the \emph{generic empirical charge density} $\sum_n - e \delta_{{\boldsymbol{q}}_n}({\boldsymbol{s}})$ of the electrons (a singular measure, to be pedantic), computed w.r.t. a probability measure $\varrho(t,\vec{\boldsymbol{q}}) \mathrm{d}^{3N}\!q$. So Born in \cite{BornsPSISQUAREpapersB} proposed that $|\Psi|^2(t,\vec{\boldsymbol{q}})$, normalized to integrate to 1 as Schr\"odinger had stipulated, is a \emph{probability density} for the first particle being at ${\boldsymbol{q}}_1$, the second one at ${\boldsymbol{q}}_2$, and so on.\footnote{This soon was rendered incomprehensible by positivistic interpretations of Heisenberg's uncertainty relations.
Rather than probability density for a particle being at ${\boldsymbol{q}}$, it was insisted that $|\Psi|^2$ is the probability density of finding a particle at ${\boldsymbol{q}}$ in a measurement (which is still comprehensible), but that it would not make sense to say a particle is at ${\boldsymbol{q}}$ without a measurement (which is not).} To see that also Schr\"odinger's \begin{alignat}{1} \hspace{-1truecm} {\boldsymbol{j}}_{\mathrm{el}} (t,{\boldsymbol{s}}) = {\textstyle\sum\limits_n}\! \int_{{\mathbb{R}}^{3(N-1)}}\!\!\!\!\! -e \tfrac{\hbar}{\mEL} \Im\! \left(\Psi^*(t,{\boldsymbol{q}}_1,...,{\boldsymbol{s}},...{\boldsymbol{q}}_N) \nabla_{{\boldsymbol{s}}} \Psi(t,{\boldsymbol{q}}_1,...,{\boldsymbol{s}},...{\boldsymbol{q}}_N)\right) \mathrm{d}^{3(N-1)}q \!\!\!\!\!\! \label{eq:chargeJexpect} \end{alignat} is an expected value w.r.t. $|\Psi|^2$, we recall that the polar representation $\Psi = |\Psi|e^{i\Phi}$ yields $\Im\!\left( \Psi^* \nabla \Psi\right) = |\Psi|^2\nabla\Phi$. Thus for each $n$ we have \begin{alignat}{1}\label{eq:JisRHOgradPHI} \hspace{-.6truecm} \Im\!\left( \Psi^*(t,\vec{\boldsymbol{q}}) \nabla_{{\boldsymbol{q}}_n}\Psi(t,\vec{\boldsymbol{q}})\right) = |\Psi|^2(t,\vec{\boldsymbol{q}})\nabla_{{\boldsymbol{q}}_n}\Phi(t,\vec{\boldsymbol{q}}), \end{alignat} and so, with ${\mathcal{I}}_n: T_{{\boldsymbol{q}}_n}{\mathbb{R}}^3\to T_{\boldsymbol{s}}{\mathbb{R}}^3$ denoting the natural injection map from the $n$-th component of tangent space of configuration space at $\vec{\boldsymbol{q}}=({\boldsymbol{q}}_1,...,{\boldsymbol{q}}_N)$ into the tangent space of physical space at ${\boldsymbol{s}}$, (\ref{eq:chargeJexpect}) becomes \begin{alignat}{1} {\boldsymbol{j}}_{\mathrm{el}} (t,{\boldsymbol{s}})\! =\! \int_{{\mathbb{R}}^{3N}}\!\! 
\Big({\textstyle\sum\limits_n} - e \tfrac{\hbar}{\mEL} {\mathcal{I}}_n\big(\nabla_{{\boldsymbol{q}}_n} \Phi(t,\vec{\boldsymbol{q}})\big)\delta_{{\boldsymbol{q}}_n}({\boldsymbol{s}})\Big) |\Psi|^2(t,\vec{\boldsymbol{q}}) \mathrm{d}^{3N}q, \label{eq:chargeJexpectMANIFEST} \end{alignat} which is the expected value w.r.t. $|\Psi|^2$ of the electrons' \emph{generic electrical current vector density} $\sum_n -e \frac{\hbar }{\mEL}{\mathcal{I}}_n\big(\nabla_{{\boldsymbol{q}}_n} \Phi(t,\vec{\boldsymbol{q}})\big)\delta_{{\boldsymbol{q}}_n}({\boldsymbol{s}})$. We remark that $\Phi$, while part of the polar representation of $\Psi$, \emph{is generally not a function of} $|\Psi|^2$. Note that (\ref{eq:JisRHOgradPHI}) and (\ref{eq:probCONSERVATIONqN}) imply that $\frac{\hbar}{\mEL}\nabla_{{\boldsymbol{q}}_n} \Phi(t,\vec{\boldsymbol{q}})$ must be interpreted as the $n$-th component of a \emph{generic velocity field} on configuration space ${\mathbb{R}}^{3N}\!$. Now, probabilities usually reflect our ignorance of something; otherwise we would speak about those things with certainty. Our ignorance may simply be due to technical limitations for accessing in-principle-available information, or it may be a matter of principle, if nature herself puts rocks in our way of accessing the information. Either way, why should our ignorance satisfy an equation like Schr\"odinger's, which has the amazing feature of producing an energy spectrum whose level differences agree to great precision with the empirical frequencies of spectral lines registered by chemists and atomic physicists? Max Born, in \cite{BornsPSISQUAREpapersB}, offered the following way out of this dilemma. \subsubsection{$\Psi$ as a guiding field (Born and de Broglie on the ``same'' page)} Like de Broglie a couple of years earlier in his doctoral work, Born picked up on Einstein's speculation that photons are particles which are guided by the electromagnetic field, which becomes a guiding field in Einstein's interpretation.
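As an aside, the polar-representation identity (\ref{eq:JisRHOgradPHI}) used in the rewriting above is easy to confirm numerically; here is a one-dimensional sketch (amplitude and phase are chosen purely for illustration and are not from the text):

```python
import numpy as np

# Check of Im(Psi^* dPsi/dx) = |Psi|^2 dPhi/dx for Psi = A e^{i Phi}
# (1D stand-in for one configuration-space component).
x = np.linspace(-5.0, 5.0, 4001)
A = np.exp(-x**2)                 # amplitude |Psi| (illustrative choice)
Phi = 0.3 * x + 0.1 * x**2        # smooth phase (illustrative choice)
Psi = A * np.exp(1j * Phi)
lhs = np.imag(np.conj(Psi) * np.gradient(Psi, x))
rhs = A**2 * np.gradient(Phi, x)
err = np.max(np.abs(lhs - rhs))   # small; limited only by finite differences
```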
Born now proposed that $\Psi$ is a guiding field for the matter particles of atomic physics: the electrons and the nuclei. He emphasized that all that was needed was to assume that $\Psi$ guides the particles more likely to where $|\Psi|^2$ is large and less likely to where it's small. Born's explanation makes it plain that when $\Psi$ somehow guides the particles in this manner, then $|\Psi|^2$ only \emph{appears to be} a probability density \emph{for all practical purposes} --- it is \emph{not fundamentally} a probability density. At the end of \cite{BornsPSISQUAREpapersB} Born wrote that he thought it was unlikely that a detailed description of the actual dynamics of positions and momenta (the ``phases'') of the particles was possible, but that Frenkel had pointed out to him that it might be possible. In any event, he emphasized that he was convinced that it had to be a non-deterministic law. Born's ideas about a non-deterministic guiding role played by $\Psi$ were eventually implemented by Nelson \cite{NelsonBOOKa,NelsonBOOKb} and further developed by Guerra and his collaborators \cite{GuerraBOOK} and became known as ``Stochastic Mechanics.'' The generic velocity field whose $n$-th component is $\frac{\hbar}{\mEL}\nabla_{{\boldsymbol{q}}_n} \Phi(t,\vec{\boldsymbol{q}})$ appears as the so-called ``current velocity field'' in Nelson's stochastic mechanics; in addition there is an ``osmotic velocity field.'' In the meantime, at the 1927 Solvay Conference de Broglie (see \cite{deBroglieSOLVAY}) proposed that Schr\"odinger's equation supplied the very equation for the guiding field whose existence he had postulated in 1923, but could not nail down. Already in his 1924 thesis de Broglie had suggested as guiding equation for the \emph{actual electron positions} (here in non-relativistic approximation) precisely the guiding equation of the Hamilton--Jacobi reformulation of Newton's mechanics, viz.
\begin{alignat}{1}\label{dBguidingEQ} \forall\ n:\ \frac{\mathrm{d} {\boldsymbol{q}}_n(t)}{\mathrm{d} t} = \frac{\hbar}{\mEL}{\mathcal{I}}_n\Big(\nabla_{{\boldsymbol{q}}_n}\Phi(t,{\boldsymbol{q}}_1,...,{\boldsymbol{q}}_N)\Big)\Big|_{\vec{\boldsymbol{q}}=\vec{\boldsymbol{q}}(t)}, \end{alignat} where often one writes $S$ instead of $\hbar \Phi$. De Broglie's point was that the Hamilton--Jacobi partial differential equation for $S$ had to be an approximation to the fundamental equation for $\Phi$, but he did not know how it should be formulated. After Schr\"odinger published his papers on ``wave mechanics,'' de Broglie figured that Schr\"odinger's continuity equation (\ref{eq:probCONSERVATIONqN}) in concert with the identity (\ref{eq:JisRHOgradPHI}) suggests that $\frac{\hbar}{\mEL}\nabla_{{\boldsymbol{q}}_n}\Phi(t,\vec{\boldsymbol{q}})$ is the $n$-th three-vector component of a velocity field on configuration space. Thus de Broglie put two and two together and proposed that if one evaluates this generic velocity field at the \emph{actual configuration} $\vec{\boldsymbol{q}}(t)=({\boldsymbol{q}}_1(t),...,{\boldsymbol{q}}_N(t))$, one gets the actual velocity of the $n$-th particle, given by the system of equations (\ref{dBguidingEQ}). For $\Phi(t,{\boldsymbol{q}}_1,...,{\boldsymbol{q}}_N) = \sum_n {\boldsymbol{k}}_n\cdot{\boldsymbol{q}}_n$ one recovers the de Broglie relation ${\boldsymbol{p}}_j = \hbar {\boldsymbol{k}}_j$ for the $j$-th particle, where ${\boldsymbol{p}}_j$ has been defined per Newton's formula ${\boldsymbol{p}}_j(t) = m_j\dot{\boldsymbol{q}}_j(t)$. For a discussion of de Broglie's theory, and its reception in 1927, see \cite{Solvay}.
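The plane-wave special case just mentioned can be checked directly from the guiding law; a one-dimensional sketch (units and the value of $k$ are illustrative assumptions):

```python
import numpy as np

# For Psi = e^{i k x} the guiding velocity (hbar/m) dPhi/dx is constant and
# equals hbar k / m, i.e. the de Broglie relation p = hbar k (1D stand-in).
hbar, m, k = 1.0, 1.0, 2.5        # illustrative units
x = np.linspace(0.0, 10.0, 2001)
Phi = np.unwrap(np.angle(np.exp(1j * k * x)))   # recover the phase k x
v = (hbar / m) * np.gradient(Phi, x)
# v is (numerically) constant, with value hbar * k / m
```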
Facing criticism from Pauli and many others, and encouragement from only a few, most notably Einstein and Brillouin, and furthermore hitting roadblocks on his way to finding a deeper justification for his theory through his pursuit of ``the double solution,'' de Broglie abandoned his proposal --- until his theory was rediscovered 25 years after the 1927 Solvay Conference, by Bohm \cite{BohmsHIDDENvarPAPERS,BohmsREPLYtoCRITICSa,BohmsREPLYtoCRITICSb}. As relayed by Bricmont \cite{BricmontBOOK}, Bohm's work was received by his peers with an irrational hostility, which discouraged Bohm, too, from working on this guiding-wave theory for many years. For a long time John Stewart Bell \cite{BellBOOK} seems to have been one of only a few physicists who promoted the virtues of the de Broglie--Bohm theory. Nowadays there are several excellent expositions \cite{DuerrEtalA,DuerrTeufelBOOK,BricmontBOOK}; see also \cite{BoHi,Holland,Solvay}. In the following we will for simplicity work with the de Broglie--Bohm guiding law, but a perfectly analogous treatment would seem possible with a Born--Nelson--Guerra-type stochastic guiding principle. It may be helpful to think of the de Broglie--Bohm theory as a deterministic limit of the Born--Nelson--Guerra theory. As far as the empirical output is concerned, to the best of our knowledge the theories are equivalent (so far). We emphasize already that the guiding equation will play an integral part in solving the generalized electromagnetic field equations. \newpage \subsection{Electromagnetic fields with given generic sources}\label{sec:sharpFIELDS} When Born proposed that $|\Psi|^2(t,\vec{\boldsymbol{q}})$ should be considered as a joint probability density for the $N$ particle positions at time $t$, he chose to demonstrate the utility of his proposal by studying the scattering of particles off each other \cite{BornsPSISQUAREpapersA,BornsPSISQUAREpapersB,BornsPSISQUAREpapersC,BornsPSISQUAREpapersD}; cf. \cite{ReedSimonBOOKiii}.
He could also have revisited Schr\"odinger's ``$\Psi$ as matter wave''-inspired calculations for hydrogen and for many-electron atoms coupled to the electromagnetic Maxwell fields (cf. \cite{WaveMechanics}) and deduced from his ``$\Psi$ as probability amplitude and guiding wave field'' perspective many of the insights which we deduce below; apparently, however, he did not. In fact, the present author is not aware of any publication which details the following considerations and deductions. In this vein, we revisit the four Maxwell field equations (\ref{eq:MdotE})--(\ref{eq:MdivB}) with Schr\"odinger's expression (\ref{eq:chargeRHO}) at r.h.s.(\ref{eq:MdivE}) and his (\ref{eq:chargeJ}), (\ref{eq:jOFsDEF}) at r.h.s. (\ref{eq:MdotE}), computed from the general bound state solution (\ref{eq:PSIboundGENERALz}) of (\ref{eq:ERWINeqnZ}), with Hamiltonian $H$ given by (\ref{eq:HAMzATOM}). Thus for now we continue our discussion of many-electron atoms in the Born--Oppenheimer approximation; the generalization to many-nuclei \&\ many-electron systems (i.e. molecules, solids, ...) in the Born--Oppenheimer approximation is straightforward and will be briefly addressed in section \ref{manyatoms}. In Born's widely accepted probability interpretation of $\Psi$ the expressions (\ref{eq:chargeJ}) at r.h.s. (\ref{eq:MdotE}) and (\ref{eq:chargeRHO}) at r.h.s.(\ref{eq:MdivE}) must be seen as \emph{expected} values of the empirical current and charge densities, not as the actual ``matter-wave'' values which Schr\"odinger had proposed.
To emphasize this in our notation, we introduce the abbreviations \begin{alignat}{1} \rho_{\mathrm{el}}^{\mbox{\tiny{emp}}} ({\boldsymbol{s}};\vec{\boldsymbol{q}}) &:= {\textstyle\sum\limits_n} - e \delta_{{\boldsymbol{q}}_n}({\boldsymbol{s}}) , \label{eq:chargeRHOemp}\\ {\boldsymbol{j}}_{\mathrm{el}}^{\mbox{\tiny{emp}}} (t,{\boldsymbol{s}};\vec{\boldsymbol{q}}) & := {\textstyle\sum\limits_n} - e \delta_{{\boldsymbol{q}}_n}({\boldsymbol{s}}) \tfrac{\hbar}{\mEL} {\mathcal{I}}_n\big(\nabla_{{\boldsymbol{q}}_n} \Phi(t,\vec{\boldsymbol{q}})\big) \label{eq:chargeJemp} \end{alignat} for the generic empirical charge and current vector densities of the electrons, and we abbreviate Born's expected values $\int_{{\mathbb{R}}^{3N}} G(t,{\boldsymbol{s}};\vec{\boldsymbol{q}})|\Psi(t,\vec{\boldsymbol{q}})|^2\mathrm{d}^{3N}q=: \langle G\rangle(t,{\boldsymbol{s}})$. Thus we have $\rho_{\mathrm{el}}(t,{\boldsymbol{s}}) = \langle \rho_{\mathrm{el}}^{\mbox{\tiny{emp}}}\rangle (t,{\boldsymbol{s}})$ at r.h.s.(\ref{eq:MdivE}) and ${\boldsymbol{j}}_{\mathrm{el}}(t,{\boldsymbol{s}}) = \langle {\boldsymbol{j}}_{\mathrm{el}}^{\mbox{\tiny{emp}}} \rangle (t,{\boldsymbol{s}})$ at r.h.s.(\ref{eq:MdotE}). But then the solutions to the field equations (\ref{eq:MdotE})--(\ref{eq:MdivB}) with Born's expression (\ref{eq:chargeJemp}) at r.h.s.(\ref{eq:MdotE}) and his (\ref{eq:chargeRHOemp}) at r.h.s.(\ref{eq:MdivE}) are not, as Schr\"odinger originally thought, the actual fields of nature which are generated by some actual electric charge and current densities.
Rather, Maxwell's equations with the expected values $\langle \rho_{\mathrm{el}}^{\mbox{\tiny{emp}}}\rangle (t,{\boldsymbol{s}})$ at r.h.s.(\ref{eq:MdivE}) and $\langle {\boldsymbol{j}}_{\mathrm{el}}^{\mbox{\tiny{emp}}} \rangle (t,{\boldsymbol{s}})$ at r.h.s.(\ref{eq:MdotE}) as source terms should be seen as the \emph{expected} values of some equations for electric and magnetic fields which depend not only on space ${\boldsymbol{s}}$ and time $t$ but also on the generic position variables ${\boldsymbol{q}}_1,...,{\boldsymbol{q}}_N$ of the electrons. Those fields will be written with a superscript ${}^\sharp$, viz.: ${\boldsymbol{E}}^\sharp(t,{\boldsymbol{s}};\vec{\boldsymbol{q}})$ and ${\boldsymbol{B}}^\sharp(t,{\boldsymbol{s}};\vec{\boldsymbol{q}})$. In 2004 the present author already introduced related $\sharp$-field equations in an attempt to formulate a divergence problem-free classical relativistic theory of point charges coupled with the nonlinear Maxwell--Born--Infeld field equations \cite{KieJSPa}. Their nonlinearity has been (and still is) a major roadblock for progress, though. Here we adapt this precursor work to formulate the $\sharp$-field equations pertinent to the linear Maxwell--Lorentz field equations for regularized point sources and will ultimately succeed in formulating a semi-relativistic quantum theory of particle motion. To avoid infinite ``self-field'' energies (etc.), instead of point charges we assume that the ${\boldsymbol{q}}_k$ are the centers of tiny uniformly charged balls of radius $a$. Likewise let the nucleus be such a uniformly charged ball, centered at the origin. We aim at a semi-relativistic theory, so this regularization is acceptable. We next state the $\sharp$-field equations for generic empirical sources with a velocity field ${\boldsymbol{v}}_n$ for the $n$-th source.
We do not assume that the $n$-th velocity field component ${\boldsymbol{v}}_n(t,\vec{\boldsymbol{q}})$ is given by $\frac{\hbar}{m_n}\nabla_{{\boldsymbol{q}}_n}\Phi(t,\vec{\boldsymbol{q}})$, where $\Phi$ is the phase of a Schr\"odinger wave function; it will be defined subsequently. The $\sharp$-field equations with ball instead of point charges as sources comprise two {inhomogeneous equations} \begin{alignat}{1}\hspace{-.3truecm} - \partial_t{{\boldsymbol{E}}^\sharp} - \bigl({\textstyle\sum\limits_k}{\boldsymbol{v}}_k\!\cdot\!\nabla_{{\boldsymbol{q}}_k}\bigr){\boldsymbol{E}}^\sharp + c\nabla_{\boldsymbol{s}}\times{\boldsymbol{B}}^\sharp &= \label{eq:aMdotEsharp} 4\pi e {\textstyle\sum\limits_n} - {\mathcal{I}}_n {\boldsymbol{v}}_n \delta^{(a)}_{{\boldsymbol{q}}_n}({\boldsymbol{s}}),\hspace{-.3truecm} \\ \hspace{-.5truecm} \nabla_{\boldsymbol{s}}\cdot{\boldsymbol{E}}^\sharp &=\label{eq:aMdivEsharp} 4\pi e\bigl(Z \delta^{(a)}_{{\boldsymbol{0}}}({\boldsymbol{s}}) + {\textstyle\sum\limits_n} -\delta^{(a)}_{{\boldsymbol{q}}_n}({\boldsymbol{s}}) \bigr), \end{alignat} and two {homogeneous equations} \begin{alignat}{1} \partial_t{{\boldsymbol{B}}^\sharp} +\bigl({\textstyle\sum\limits_k}{\boldsymbol{v}}_k\!\cdot\!\nabla_{{\boldsymbol{q}}_k}\bigr){\boldsymbol{B}}^\sharp + c \nabla_{\boldsymbol{s}}\times{\boldsymbol{E}}^\sharp &= \label{eq:MdotBsharp} {\boldsymbol{0}} \, , \\ \nabla_{\boldsymbol{s}}\cdot {\boldsymbol{B}}^\sharp &= \label{eq:MdivBsharp} 0\, , \end{alignat} where, for brevity, we have suppressed the arguments from ${\boldsymbol{E}}^\sharp(t,{\boldsymbol{s}};\vec{\boldsymbol{q}})$ and ${\boldsymbol{B}}^\sharp(t,{\boldsymbol{s}};\vec{\boldsymbol{q}})$, and we wrote ${\boldsymbol{v}}_n$ for ${\boldsymbol{v}}_n(t,\vec{\boldsymbol{q}})$; moreover, $\delta_{{\boldsymbol{q}}_n}^{(a)}({\boldsymbol{s}})$ is the indicator function of a ball of radius $a$ centered at ${\boldsymbol{q}}_n$, normalized by $\frac43\pi a^3$. 
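As a sanity check on this regularization, the ball profile $\delta^{(a)}_{{\boldsymbol{q}}_n}$, being the indicator function of a radius-$a$ ball divided by its volume $\frac43\pi a^3$, indeed integrates to $1$ over ${\mathbb{R}}^3$; a radial numerical sketch (the value of $a$ is illustrative):

```python
import numpy as np

# delta_a = (indicator of ball of radius a) / ((4/3) pi a^3); its integral
# over R^3 reduces to the radial integral of delta_a * 4 pi r^2.
a = 0.1
r = np.linspace(0.0, 2.0 * a, 100001)
dr = r[1] - r[0]
delta_a = np.where(r < a, 3.0 / (4.0 * np.pi * a**3), 0.0)
integral = np.sum(delta_a * 4.0 * np.pi * r**2) * dr   # close to 1
```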
We note that \eqref{eq:aMdivEsharp} and \eqref{eq:MdivBsharp} are simply constraint equations on the initial data. For \eqref{eq:MdivBsharp} this is immediately seen by applying $\nabla_{\boldsymbol{s}}\cdot$ to \eqref{eq:MdotBsharp}, which reveals that $\nabla_{\boldsymbol{s}}\cdot {\boldsymbol{B}}^\sharp$ does not change with time. Similarly, for \eqref{eq:aMdivEsharp} this is seen by applying $\nabla_{\boldsymbol{s}}\cdot$ to \eqref{eq:aMdotEsharp} and noting that $\nabla_{\boldsymbol{s}} \delta^{(a)}_{{\boldsymbol{q}}_n}({\boldsymbol{s}}) = - \nabla_{{\boldsymbol{q}}_n} \delta^{(a)}_{{\boldsymbol{q}}_n}({\boldsymbol{s}})$. We now show that by averaging them w.r.t. $|\Psi|^2 = \varrho$ they turn into \begin{alignat}{1} \quad - \partial_t{\langle{\boldsymbol{E}}^\sharp\rangle(t,{\boldsymbol{s}})} + c\nabla_{\boldsymbol{s}}\times\langle{\boldsymbol{B}}^\sharp\rangle(t,{\boldsymbol{s}}) &= \label{eq:MdotEmean} 4\pi\; \langle {\boldsymbol{j}}_{\mathrm{el},a}^{\mbox{\tiny{emp}}} \rangle(t,{\boldsymbol{s}}), \\ \nabla_{\boldsymbol{s}}\cdot \langle{\boldsymbol{E}}^\sharp\rangle(t,{\boldsymbol{s}}) &= \label{eq:MdivEmean} 4\pi \left(\langle \rho_{\mathrm{el},a}^{\mbox{\tiny{emp}}} \rangle(t,{\boldsymbol{s}})+ Ze\delta^{(a)}_{{\boldsymbol{0}}}({\boldsymbol{s}})\right),\\ \quad \partial_t\langle{\boldsymbol{B}}^\sharp\rangle(t,{\boldsymbol{s}}) + c \nabla_{\boldsymbol{s}}\times\langle{\boldsymbol{E}}^\sharp\rangle(t,{\boldsymbol{s}}) &= \label{eq:MdotBmean} {\boldsymbol{0}} \, , \\ \nabla_{\boldsymbol{s}}\cdot \langle{\boldsymbol{B}}^\sharp\rangle(t,{\boldsymbol{s}}) &= \label{eq:MdivBmean} 0\, , \end{alignat} where \begin{alignat}{1} \rho_{\mathrm{el},a}^{\mbox{\tiny{emp}}} ({\boldsymbol{s}};\vec{\boldsymbol{q}}) & := {\textstyle\sum\limits_n} - e \delta^{(a)}_{{\boldsymbol{q}}_n}({\boldsymbol{s}}) , \label{eq:achargeRHOemp}\\ {\boldsymbol{j}}_{\mathrm{el},a}^{\mbox{\tiny{emp}}} (t,{\boldsymbol{s}};\vec{\boldsymbol{q}}) & := {\textstyle\sum\limits_n} - e 
\delta^{(a)}_{{\boldsymbol{q}}_n}({\boldsymbol{s}}) {\mathcal{I}}_n{\boldsymbol{v}}_n(t,\vec{\boldsymbol{q}}), \label{eq:achargeJemp} \end{alignat} are the ball-regularized generic empirical charge and current densities. Thus the $\varrho$-averaged $\sharp$-field equations for the generic empirical sources are precisely Schr\"odinger's version of the Maxwell field equations (\ref{eq:MdotE})--(\ref{eq:MdivB}) for the electromagnetic field, with Schr\"odinger's expression (\ref{eq:chargeJ}) at r.h.s. (\ref{eq:MdotE}) and his (\ref{eq:chargeRHO}) at r.h.s.(\ref{eq:MdivE}), except that here we have also included the charge density of the nucleus at r.h.s.(\ref{eq:MdivEmean}), and we have replaced the Dirac point sources by Abraham-type tiny ball sources. To demonstrate that the averaged $\sharp$-field equations are identical with (\ref{eq:MdotEmean})--(\ref{eq:MdivBmean}), all we will need in the following is that (a) ${\boldsymbol{v}}_n$ is the $n$-th ${\mathbb{R}}^3$ component of a velocity field $\vec{\boldsymbol{v}}$ in the tangent space of ${\mathbb{R}}^{3N}$, and that (b) $\vec{\boldsymbol{v}}$ is defined by $\vec{\boldsymbol{J}} = \varrho \vec{\boldsymbol{v}}$, where $\varrho$ and $\vec{\boldsymbol{J}}$ jointly satisfy the continuity equation (\ref{eq:probCONSERVATIONqN}). We now multiply the $\sharp$-field equations with $|\Psi(t,\vec{\boldsymbol{q}})|^2$ and integrate over $\mathrm{d}^{3N}q$. 
Integrations by parts, and use of the continuity equation (\ref{eq:probCONSERVATIONqN}) together with the definition $\vec{\boldsymbol{J}} = \varrho \vec{\boldsymbol{v}}$ yields \begin{alignat}{1} \hspace{-1truecm} \langle\partial_t{\boldsymbol{B}}^\sharp\rangle(t,{\boldsymbol{s}}) &= \label{eq:aveMdotB} \partial_t \langle{\boldsymbol{B}}^\sharp\rangle(t,{\boldsymbol{s}}) - \int_{{\mathbb{R}}^{3N}}\big(\partial_t\varrho(t,\vec{\boldsymbol{q}})\big){\boldsymbol{B}}^\sharp(t,{\boldsymbol{s}};\vec{\boldsymbol{q}})\mathrm{d}^{3N}q \\ &=\partial_t\langle{\boldsymbol{B}}^\sharp\rangle(t,{\boldsymbol{s}})+ \int_{{\mathbb{R}}^{3N}}\big({\textstyle\sum\limits_n} \nabla_{{\boldsymbol{q}}_n}\cdot[\varrho(t,\vec{\boldsymbol{q}}){\boldsymbol{v}}_n(t,\vec{\boldsymbol{q}})]\big){\boldsymbol{B}}^\sharp(t,{\boldsymbol{s}};\vec{\boldsymbol{q}})\mathrm{d}^{3N}q \\ &=\partial_t\langle{\boldsymbol{B}}^\sharp\rangle(t,{\boldsymbol{s}}) - \int_{{\mathbb{R}}^{3N}}\bigl(\varrho(t,\vec{\boldsymbol{q}}){\textstyle\sum\limits_n} {\boldsymbol{v}}_n(t,\vec{\boldsymbol{q}})\cdot\nabla_{{\boldsymbol{q}}_n}\bigr){\boldsymbol{B}}^\sharp(t,{\boldsymbol{s}};\vec{\boldsymbol{q}})\mathrm{d}^{3N}q \\ &=\partial_t\langle{\boldsymbol{B}}^\sharp\rangle(t,{\boldsymbol{s}}) - \bigl\langle \bigl({\textstyle\sum\limits_n} {\boldsymbol{v}}_n\cdot\nabla_{{\boldsymbol{q}}_n}\bigr){\boldsymbol{B}}^\sharp\bigr\rangle(t,{\boldsymbol{s}}), \end{alignat} and similarly for $\langle\partial_t{\boldsymbol{E}}^\sharp\rangle(t,{\boldsymbol{s}})$. 
And so, \begin{alignat}{1} \langle\partial_t{\boldsymbol{B}}^\sharp\rangle(t,{\boldsymbol{s}}) + \bigl\langle \bigl({\textstyle\sum\limits_n} {\boldsymbol{v}}_n\cdot\nabla_{{\boldsymbol{q}}_n}\bigr){\boldsymbol{B}}^\sharp\bigr\rangle(t,{\boldsymbol{s}}) & = \label{eq:aveMdotBfinal} \partial_t\langle{\boldsymbol{B}}^\sharp\rangle(t,{\boldsymbol{s}}) ,\\ \langle\partial_t{\boldsymbol{E}}^\sharp\rangle(t,{\boldsymbol{s}}) + \bigl\langle \bigl( {\textstyle\sum\limits_n} {\boldsymbol{v}}_n\cdot\nabla_{{\boldsymbol{q}}_n}\bigr){\boldsymbol{E}}^\sharp\bigr\rangle(t,{\boldsymbol{s}}) & = \label{eq:aveMdotEfinal} \partial_t\langle{\boldsymbol{E}}^\sharp\rangle(t,{\boldsymbol{s}}) . \end{alignat} On the other hand, since ${\boldsymbol{s}}$ derivatives and $\vec{\boldsymbol{q}}$ integration commute, we have \begin{alignat}{1} \langle\nabla_{\boldsymbol{s}}\times{\boldsymbol{B}}^\sharp\rangle(t,{\boldsymbol{s}}) &= \label{eq:curBmean} \nabla_{\boldsymbol{s}}\times\langle{\boldsymbol{B}}^\sharp\rangle(t,{\boldsymbol{s}}) ,\\ \langle\nabla_{\boldsymbol{s}}\times{\boldsymbol{E}}^\sharp\rangle(t,{\boldsymbol{s}}) &= \label{eq:curEmean} \nabla_{\boldsymbol{s}}\times\langle{\boldsymbol{E}}^\sharp\rangle(t,{\boldsymbol{s}}). \end{alignat} Lastly, the $\varrho$ average of r.h.s.(\ref{eq:aMdotEsharp}) is by definition equal to $ 4\pi\; \langle {\boldsymbol{j}}_{\mathrm{el},a}^{\mbox{\tiny{emp}}} \rangle(t,{\boldsymbol{s}})$, that of r.h.s.(\ref{eq:aMdivEsharp}) equal to $4\pi \big(Ze \delta^{(a)}_{{\boldsymbol{0}}}({\boldsymbol{s}})+ \langle \rho_{\mathrm{el},a}^{\mbox{\tiny{emp}}} \rangle(t,{\boldsymbol{s}}) \big)$. Our demonstration is complete. 
\newpage \emph{Thus, what Schr\"odinger thought are the electromagnetic fields of the electrons, here appear as the expected values of the $\sharp$-fields for generic empirical sources, to which we added the electrostatic Coulomb field} $-Ze \nabla_{\!{\boldsymbol{s}}}\, \frac{1}{|{\boldsymbol{s}}|}$ ($|{\boldsymbol{s}}|>a$) \emph{of the nucleus.} \smallskip Schr\"odinger's findings obtained with what he thought was a ``matter-wave'' theory allow us instead to conclude that the general bound state solution (\ref{eq:PSIboundGENERALz}) of (\ref{eq:ERWINeqnZ}), with Hamiltonian $H$ given by (\ref{eq:HAMzATOM}), produces expected charge and current densities which are given by a sum of terms that oscillate with the Bohr-type frequencies $\propto (E_n-E_{n'})$, and that this implies the same for the expected values of the $\sharp$-fields. This is not a mere change of name for the same mathematical expressions. Rather, the change from Schr\"odinger's ``$\Psi$ as matter-wave'' perspective to Born's ``$\Psi$ as probability amplitude / guiding wave'' perspective implies: \emph{Not} the $\varrho$-averaged $\sharp$-fields but the $\sharp$-fields should be coupled back into Schr\"odinger's wave equation. \smallskip \emph{This is a decisive change of perspective!} \smallskip Before we come to discuss the back-coupling of the $\sharp$-fields into Schr\"odinger's equation, here we add another observation about the $\sharp$-field equations that is the analog of what has been observed already in \cite{KieJSPa,KieJSPb}. Namely, if instead of taking the $\varrho$ average of the $\sharp$-field equations we evaluate the $\sharp$-fields at the actual positions of the electrons, i.e.
substituting ${\boldsymbol{q}}_n(t)$ for ${\boldsymbol{q}}_n$ in the generic position slots, we obtain electromagnetic fields ${\boldsymbol{E}}^\sharp(t,{\boldsymbol{s}};\vec{\boldsymbol{q}}(t)) =:{\boldsymbol{E}}(t,{\boldsymbol{s}})$ and ${\boldsymbol{B}}^\sharp(t,{\boldsymbol{s}};\vec{\boldsymbol{q}}(t))=: {\boldsymbol{B}}(t,{\boldsymbol{s}})$ which satisfy the classical Maxwell--Lorentz field equations \begin{alignat}{1} - \partial_t{{\boldsymbol{E}}(t,{\boldsymbol{s}})} + c \nabla_{\boldsymbol{s}}\times{\boldsymbol{B}}(t,{\boldsymbol{s}}) &= \label{eq:MdotEactual} 4\pi {\textstyle\sum\limits_n} -e \dot{\boldsymbol{q}}_n(t) \delta^{(a)}_{{\boldsymbol{q}}_n(t)}({\boldsymbol{s}}),\\ \nabla_{\boldsymbol{s}}\cdot {\boldsymbol{E}}(t,{\boldsymbol{s}}) &= \label{eq:MdivEactual} 4\pi \bigl(Ze\delta^{(a)}_{{\boldsymbol{0}}}({\boldsymbol{s}}) + {\textstyle\sum\limits_n} -e \delta^{(a)}_{{\boldsymbol{q}}_n(t)}({\boldsymbol{s}}) \bigr)\, , \\ \partial_t{{\boldsymbol{B}}(t,{\boldsymbol{s}})} + c \nabla_{\boldsymbol{s}}\times{\boldsymbol{E}}(t,{\boldsymbol{s}}) &= \label{eq:MdotBactual} {\boldsymbol{0}} \, , \\ \nabla_{\boldsymbol{s}}\cdot {\boldsymbol{B}}(t,{\boldsymbol{s}}) &= \label{eq:MdivBactual} 0. \end{alignat} These field equations with rigidly transported tiny charged ball sources are Max Abraham's version of the Maxwell--Lorentz equations for the actual electromagnetic fields ${\boldsymbol{B}}(t,{\boldsymbol{s}})$ and ${\boldsymbol{E}}(t,{\boldsymbol{s}})$ in spacetime, although $t\mapsto {\boldsymbol{q}}_n(t)$ will not obey Abraham's classical equations of motion. For an assessment of Abraham's theory, see \cite{KiePLA}, the appendix in \cite{AppKieAOP}, and the monographs \cite{SpohnBOOKb}, \cite{Yaghjian}.
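As a quick numerical sanity check of the Gauss-law constraint (\ref{eq:MdivEactual}) for a single rigid ball source, the following sketch (our own illustration; a uniformly charged ball is taken as one concrete choice of the mollifier $\delta^{(a)}$, with Gaussian units and $q=a=1$) verifies that the flux through any concentric sphere equals $4\pi$ times the enclosed charge, inside and outside the ball alike.

```python
import math

def E_radial(r, q=1.0, a=1.0):
    """Radial electric field of a uniformly charged ball (Gaussian units)."""
    if r < a:
        return q * r / a**3        # field grows linearly inside the ball
    return q / r**2                # pure Coulomb field outside

def enclosed_charge(r, q=1.0, a=1.0):
    """Charge contained inside radius r."""
    return q * (r / a)**3 if r < a else q

# Gauss law: flux through the sphere of radius r = 4*pi*(enclosed charge)
for r in (0.3, 0.7, 1.0, 2.5, 10.0):
    flux = 4.0 * math.pi * r**2 * E_radial(r)
    assert abs(flux - 4.0 * math.pi * enclosed_charge(r)) < 1e-12
print("Gauss-law constraint verified at all sampled radii")
```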
A final remark before we move on to compute the feedback from the $\sharp$-fields into Schr\"odinger's equation: All computations in this subsection are valid also in the limit $a\to 0$ when the ball-type sources become proper point sources, and \eqref{eq:MdotEactual}--\eqref{eq:MdivBactual} become the Maxwell--Lorentz field equations for moving point charge sources. The regularization with $a>0$ is needed for what comes next. \newpage \subsection{Coupling the $\sharp$-fields back into the Schr\"odinger equation}\vspace{-.2truecm} We next show that the Coulomb interaction term in (\ref{eq:HAMzATOM}) can naturally be obtained from the \emph{electrostatic} solutions to these $\sharp$-field equations. This is encouraging, and suggests the necessary modifications in the Schr\"odinger equation (\ref{eq:ERWINeqnZ}) to also couple dynamical $\sharp$-fields back into the dynamics of $\Psi$. The $\sharp$-field equations themselves do not need to be modified, except that the velocity field $\vec{\boldsymbol{v}}$, while still defined by $\vec{\boldsymbol{J}} = \varrho \vec{\boldsymbol{v}}$, is not generally $\propto\nabla\Phi$ because $\vec{\boldsymbol{J}}$ will be given by a modified expression. \subsubsection{The electrostatic $\sharp$-field energy} We begin with the electrostatic special case. Although very special, it will produce the Schr\"odinger eigenvalue spectra of large atoms (in Born--Oppenheimer approximation) with purely Coulombic interactions of the electrons among each other and with the nucleus, whether or not an external electrostatic field is acting. In this case we suppress $t$ as an argument of the $\sharp$-fields, and now recall a well-known result from the classical theory of electrostatics. Assume that pairwise $|{\boldsymbol{q}}_j-{\boldsymbol{q}}_k| > 2a$, and that all $|{\boldsymbol{q}}_k|>2a$, so that no two charge balls overlap.
Then the electrostatic field energy of such a generic $N+1$ charge configuration, with the field being the sum of the Coulomb fields of all charged balls, is given by (cf. \cite{JacksonBOOKb}) \begin{equation} \frac{1}{8\pi} \int_{{\mathbb{R}}^3} \big|{\boldsymbol{E}}^\sharp({\boldsymbol{s}};\vec{\boldsymbol{q}})\big|^2 \mathrm{d}^3s = E_{\mbox{\tiny{self}}} - \sum_{n=1}^N \frac{Z e^2}{|{\boldsymbol{q}}_n|} + \sum\sum_{\hskip-.7truecm 1 \leq j < k \leq N} \frac{e^2}{|{\boldsymbol{q}}_j-{\boldsymbol{q}}_k|}, \label{eq:HAMfromFIELDenergyZ} \end{equation} and except for the configuration-independent ``self-field'' energy $E_{\mbox{\tiny{self}}}=\frac35\frac{e^2 }{a} \big( Z^2 + N \big)$ this is precisely the interaction term in Schr\"odinger's (\ref{eq:ERWINeqnZ}) with Hamiltonian $H$ given by (\ref{eq:HAMzATOM}). The configuration-independent ``self-field'' energy term is a constant which shifts the whole spectrum uniformly, but it will cancel in differences of energy eigenvalues of $H$ and therefore not be seen in the spectral frequencies $\omega_j-\omega_k$. When any $|{\boldsymbol{q}}_n|\leq 2a$ or $|{\boldsymbol{q}}_j-{\boldsymbol{q}}_k|<2a$, the interactions are mollified and the field energy stays bounded so long as $a>0$. Next, since the charged nucleus is treated in Born--Oppenheimer approximation, it plays the role of an ``external'' source --- i.e. not part of the system of $N$ electrons whose Schr\"odinger wave function $\Psi$ we are concerned with. Therefore (\ref{eq:HAMfromFIELDenergyZ}) at once suggests a generalization when in addition to the $N$ electron atom with its nucleus also another ``external'' electrostatic source is introduced whose charge distribution is regular and compactly supported (e.g. the field produced by a charged capacitor in the laboratory, used to study the \emph{Stark effect}) --- a calculation is supplied further below.
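The numerical value $\frac35 e^2/a$ of each single-ball contribution to $E_{\mbox{\tiny{self}}}$ can be checked directly from the field-energy integral. The following sketch (our own illustration, with the uniformly charged ball as the assumed mollifier and $e=a=1$) reduces $\frac{1}{8\pi}\int\big|{\boldsymbol{E}}\big|^2\mathrm{d}^3s$ to a radial quadrature:

```python
import math

e, a = 1.0, 1.0

def E_radial(r):
    # field of a uniformly charged ball, charge e, radius a
    return e * r / a**3 if r < a else e / r**2

# (1/8pi) int |E|^2 d^3s = (1/2) int_0^oo E(r)^2 r^2 dr  (spherical symmetry)
N, R = 200_000, 200.0          # midpoint rule on [0, R], analytic tail beyond R
h = R / N
integral = sum(E_radial((k + 0.5) * h)**2 * ((k + 0.5) * h)**2
               for k in range(N)) * h
energy = 0.5 * (integral + e**2 / R)   # tail: int_R^oo e^2/r^2 dr = e^2/R
assert abs(energy - 0.6 * e**2 / a) < 1e-3
print("field energy:", energy, "  vs  3/5 e^2/a =", 0.6 * e**2 / a)
```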
The electrostatic $\sharp$-field ${\boldsymbol{E}}^\sharp({\boldsymbol{s}};\vec{\boldsymbol{q}})$ is then the sum of such an external electrostatic field ${\boldsymbol{E}}^{\mbox{\tiny{ext}}}({\boldsymbol{s}}) = -\nabla_{\boldsymbol{s}} \phi^{\mbox{\tiny{ext}}}({\boldsymbol{s}})$, which includes the field of the nucleus, and all the Coulomb fields of the $N$ electrons. If no two balls overlap and no ball overlaps with the support of this extra external charge distribution, one now finds (cf. (\ref{eq:FIELDenergySTATICext})--(\ref{eq:FIELDenergySTATICextII})) \begin{alignat}{1} \frac{1}{8\pi} \int_{{\mathbb{R}}^3} \big|{\boldsymbol{E}}^\sharp({\boldsymbol{s}};\vec{\boldsymbol{q}})\big|^2 \mathrm{d}^3s = C \label{eq:HAMfromFIELDenergyZext} - \sum_{n=1}^N e \phi^{\mbox{\tiny{ext}}}({\boldsymbol{q}}_n) + \sum\sum_{\hskip-.7truecm 1 \leq j < k \leq N} \frac{e^2}{|{\boldsymbol{q}}_j-{\boldsymbol{q}}_k|}, \end{alignat} where $C = N \tfrac35\tfrac{e^2 }{a} + \tfrac{1}{8\pi} \int_{{\mathbb{R}}^3} \big|{\boldsymbol{E}}^{\mbox{\tiny{ext}}}({\boldsymbol{s}})\big|^2 \mathrm{d}^3s$ (note that the external field energy integral includes the self energy $\frac35 Z^2e^2/a$ of the nucleus), and where the external potential at the $n$-th position $\phi^{\mbox{\tiny{ext}}}({\boldsymbol{q}}_n) = \phi^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}({\boldsymbol{q}}_n) + \frac{Z e}{|{\boldsymbol{q}}_n|}$ includes the Coulomb potential of the nucleus.
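That the cross term contributes exactly $-e\phi^{\mbox{\tiny{ext}}}({\boldsymbol{q}}_n)$ rests on $\phi^{\mbox{\tiny{ext}}}$ being harmonic on the supports of the balls, so that its uniform average over a ball equals its value at the center. A Monte Carlo sketch of this mean-value property (the sample potential, ball center, and radius are our own illustrative choices):

```python
import math, random

random.seed(42)

def phi_ext(x, y, z):
    # potential of a unit point source at (5, 0, 0); harmonic away from it
    return 1.0 / math.sqrt((x - 5.0)**2 + y**2 + z**2)

def ball_average(center, a, n=100_000):
    """Uniform average of phi_ext over a ball of radius a (rejection sampling)."""
    cx, cy, cz = center
    total, count = 0.0, 0
    while count < n:
        u, v, w = (random.uniform(-a, a) for _ in range(3))
        if u * u + v * v + w * w <= a * a:
            total += phi_ext(cx + u, cy + v, cz + w)
            count += 1
    return total / n

center, a = (1.0, 0.5, -0.3), 0.2
avg = ball_average(center, a)
assert abs(avg - phi_ext(*center)) < 1e-3   # ball average = center value
print("ball average:", avg, "  center value:", phi_ext(*center))
```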
We conclude: \emph{Except for an irrelevant additive constant (the $C$ at r.h.s.(\ref{eq:HAMfromFIELDenergyZext})) which shifts the whole spectrum (and which cancels in the eigenvalue differences that yield the spectral frequencies of the atom), the r.h.s.(\ref{eq:HAMfromFIELDenergyZext}) is recognized as the usual interaction term in Schr\"odinger's equation for the Born--Oppenheimer approximation of a many-electron atom exposed to an additional applied external electrostatic potential field.} \subsubsection{The general electromagnetic feedback $\sharp$-field $\to \Psi$} The formulas obtained in the previous subsubsection for the electrostatic special case suggest that at least part of the back-coupling of the electromagnetic fields into the Schr\"odinger equation is obtained from the energy of the $\sharp$-fields sourced by generic point charge densities and current densities, given by \begin{equation} E^\sharp(t,\vec{\boldsymbol{q}}):=\frac{1}{8\pi}\int_{{\mathbb{R}}^3}\left(\big|{\boldsymbol{E}}^\sharp(t,{\boldsymbol{s}};\vec{\boldsymbol{q}})\big|^2 + \big|{\boldsymbol{B}}^\sharp(t,{\boldsymbol{s}};\vec{\boldsymbol{q}})\big|^2 \right)\mathrm{d}^3s, \label{eq:FIELDenergyZ} \end{equation} which takes the place of a ``minimally coupled external electric potential.'' This now suggests that the field momentum of the electromagnetic $\sharp$-fields, \begin{equation} {\boldsymbol{P}}^\sharp(t,\vec{\boldsymbol{q}}):=\frac{1}{4\pi c}\int_{{\mathbb{R}}^3} {\boldsymbol{E}}^\sharp(t,{\boldsymbol{s}};\vec{\boldsymbol{q}})\times{\boldsymbol{B}}^\sharp(t,{\boldsymbol{s}};\vec{\boldsymbol{q}})\mathrm{d}^3s, \label{eq:FIELDmomentumZ} \end{equation} injected into the $n$-th component of $T_{\vec{\boldsymbol{q}}}{\mathbb{R}}^{3N}$, may take the place of a ``minimally coupled external magnetic vector potential,'' but this would overcount the contributions to the $n$-th canonical momentum. The linearity of the $\sharp$-field equations comes to the rescue.
We decompose ${\boldsymbol{E}}^\sharp(t,{\boldsymbol{s}};\vec{\boldsymbol{q}}) = {\boldsymbol{E}}^{\mbox{\tiny{ext}}}(t,{\boldsymbol{s}}) + \sum_{n=1}^N{\boldsymbol{E}}^\sharp_n(t,{\boldsymbol{s}};\vec{\boldsymbol{q}})$ and ${\boldsymbol{B}}^\sharp(t,{\boldsymbol{s}};\vec{\boldsymbol{q}}) = {\boldsymbol{B}}^{\mbox{\tiny{ext}}}(t,{\boldsymbol{s}}) + \sum_{n=1}^N{\boldsymbol{B}}^\sharp_n(t,{\boldsymbol{s}};\vec{\boldsymbol{q}})$. Here, ${\boldsymbol{E}}^{\mbox{\tiny{ext}}}(t,{\boldsymbol{s}})$ and ${\boldsymbol{B}}^{\mbox{\tiny{ext}}}(t,{\boldsymbol{s}})$ are classical electromagnetic Maxwell fields sourced by the charge density $Ze\delta_{{\boldsymbol{0}}}^{(a)}$ of the nucleus and possibly other compactly supported external sources $\rho^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}(t,{\boldsymbol{s}})$ and ${\boldsymbol{j}}^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}(t,{\boldsymbol{s}})$ located far away from the atom, satisfying the continuity equation for external charge conservation. Hence, these external fields satisfy the Maxwell--Lorentz field equations \begin{alignat}{1} - \partial_t{{\boldsymbol{E}}^{\mbox{\tiny{ext}}}(t,{\boldsymbol{s}})} + c \nabla_{\boldsymbol{s}}\times{\boldsymbol{B}}^{\mbox{\tiny{ext}}}(t,{\boldsymbol{s}}) &= \label{eq:MdotEexternal} 4\pi {\boldsymbol{j}}^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}(t,{\boldsymbol{s}}), \\ \nabla_{\boldsymbol{s}}\cdot {\boldsymbol{E}}^{\mbox{\tiny{ext}}}(t,{\boldsymbol{s}}) &= \label{eq:MdivEexternal} 4\pi \bigl(Ze\delta^{(a)}_{{\boldsymbol{0}}}({\boldsymbol{s}}) + \rho^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}(t,{\boldsymbol{s}}) \bigr)\, , \\ \partial_t{{\boldsymbol{B}}^{\mbox{\tiny{ext}}}(t,{\boldsymbol{s}})} + c \nabla_{\boldsymbol{s}}\times{\boldsymbol{E}}^{\mbox{\tiny{ext}}}(t,{\boldsymbol{s}}) &= \label{eq:MdotBexternal} {\boldsymbol{0}} \, , \\ \nabla_{\boldsymbol{s}}\cdot {\boldsymbol{B}}^{\mbox{\tiny{ext}}}(t,{\boldsymbol{s}}) &= \label{eq:MdivBexternal} 0. 
\end{alignat} Again suppressing, for brevity, the arguments from ${\boldsymbol{E}}^\sharp(t,{\boldsymbol{s}};\vec{\boldsymbol{q}})$ and ${\boldsymbol{B}}^\sharp(t,{\boldsymbol{s}};\vec{\boldsymbol{q}})$, and from the $n$-th velocity field component ${\boldsymbol{v}}_n(t,\vec{\boldsymbol{q}})$, to be defined below, the $n$-th $\sharp$-fields satisfy the two {inhomogeneous equations} \begin{alignat}{1}\hspace{-.3truecm} - \partial_t{{\boldsymbol{E}}^\sharp_n} - \bigl({\textstyle\sum\limits_k}{\boldsymbol{v}}_k\!\cdot\!\nabla_{{\boldsymbol{q}}_k}\bigr){\boldsymbol{E}}^\sharp_n + c\nabla_{\boldsymbol{s}}\times{\boldsymbol{B}}^\sharp_n &= \label{eq:MdotEsharpn} 4\pi {\mathcal{I}}_n \big(-e{\boldsymbol{v}}_n \delta^{(a)}_{{\boldsymbol{q}}_n}({\boldsymbol{s}})\big),\hspace{-.3truecm} \\ \hspace{-.5truecm} \nabla_{\boldsymbol{s}}\cdot {\boldsymbol{E}}^\sharp_n &= \label{eq:MdivEsharpn} 4\pi\big(- e \delta^{(a)}_{{\boldsymbol{q}}_n}({\boldsymbol{s}})\big), \end{alignat} and the two {homogeneous equations}, \begin{alignat}{1} \partial_t{{\boldsymbol{B}}^\sharp_n} +\bigl({\textstyle\sum\limits_k}{\boldsymbol{v}}_k\!\cdot\!\nabla_{{\boldsymbol{q}}_k}\bigr){\boldsymbol{B}}^\sharp_n + c \nabla_{\boldsymbol{s}}\times{\boldsymbol{E}}^\sharp_n &= \label{eq:MdotBsharpn} {\boldsymbol{0}} \, , \\ \nabla_{\boldsymbol{s}}\cdot {\boldsymbol{B}}^\sharp_n &= \label{eq:MdivBsharpn} 0\, . 
\end{alignat} We now define (tentatively) \begin{alignat}{1} \label{eq:FIELDmomentumZn} {\boldsymbol{P}}^\sharp_n(t,\vec{\boldsymbol{q}}):= &{\mathcal{I}}_n^{-1}\frac{1}{4\pi c}\!\int_{{\mathbb{R}}^3} {\boldsymbol{E}}^\sharp_n \times{\boldsymbol{B}}^\sharp (t,{\boldsymbol{s}};\vec{\boldsymbol{q}})\mathrm{d}^3s, \end{alignat} and propose \begin{equation} \boxed{\left(i \hbar \partial_t - E^\sharp (t,\vec{\boldsymbol{q}})\right)\Psi(t,\vec{\boldsymbol{q}}) = {\textstyle\sum\limits_{n=1}^N} \tfrac{1}{2\mEL}\left(-i \hbar \nabla_{{\boldsymbol{q}}_n} - {\boldsymbol{P}}^\sharp_n(t,\vec{\boldsymbol{q}}) \right)^2 \Psi(t,\vec{\boldsymbol{q}})} \label{eq:ERWINeqnNEW} \end{equation} as the Schr\"odinger wave equation for $\Psi(t,\vec{\boldsymbol{q}})$, coupled to the $\sharp$-fields. Note that $E^\sharp$ and the ${\boldsymbol{P}}^\sharp_n$ occupy the slots of the external electromagnetic potentials used in the conventional minimal coupling procedure. We now multiply (\ref{eq:ERWINeqnNEW}) by $\Psi^*$, and the complex conjugate of (\ref{eq:ERWINeqnNEW}) by $\Psi$, then subtract the second from the first equation, and after some standard manipulations, arrive at the continuity equation (\ref{eq:probCONSERVATIONqN}), again with $\varrho(t,\vec{\boldsymbol{q}}) := \Psi^*(t,\vec{\boldsymbol{q}}) \Psi(t,\vec{\boldsymbol{q}})$, but now with $\vec{\boldsymbol{J}}$ having $n$-th component \begin{equation} {\boldsymbol{J}}_n(t,\vec{\boldsymbol{q}}) :=\Im \left(\Psi^*(t,\vec{\boldsymbol{q}})\frac{1}{\mEL} \bigl(\hbar \nabla_{{\boldsymbol{q}}_n} - i{\boldsymbol{P}}^\sharp_n(t,\vec{\boldsymbol{q}})\bigr) \Psi(t,\vec{\boldsymbol{q}})\right). \end{equation} As a consequence, the $L^2({\mathbb{R}}^{3N})$ norm of $\Psi(t,\vec{\boldsymbol{q}})$ is preserved in time.
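Norm preservation can be illustrated in a drastically simplified model. The sketch below (a hypothetical one-dimensional stand-in, not the $N$-body system above) propagates a wave packet with the unitary Crank--Nicolson scheme, a real potential playing the role of $E^\sharp$; the discrete $L^2$ norm stays constant to machine precision:

```python
import numpy as np

nx, L, dt = 200, 20.0, 0.01
x = np.linspace(-L / 2, L / 2, nx)
dx = x[1] - x[0]
V = 0.5 * x**2                            # real potential standing in for E^sharp

# finite-difference Hamiltonian (hbar = m = 1), Dirichlet boundaries
H = np.zeros((nx, nx))
np.fill_diagonal(H, 1.0 / dx**2 + V)
np.fill_diagonal(H[1:], -0.5 / dx**2)     # sub-diagonal
np.fill_diagonal(H[:, 1:], -0.5 / dx**2)  # super-diagonal

# Crank-Nicolson propagator U = (1 + i*H*dt/2)^(-1)(1 - i*H*dt/2) is unitary
I = np.eye(nx)
U = np.linalg.solve(I + 0.5j * dt * H, I - 0.5j * dt * H)

psi = np.exp(-(x - 1.0)**2).astype(complex)   # some initial wave packet
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize

norms = []
for _ in range(500):
    psi = U @ psi
    norms.append(np.sum(np.abs(psi)**2) * dx)
drift = max(abs(n - 1.0) for n in norms)
assert drift < 1e-10
print("norm drift over 500 steps:", drift)
```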
Moreover, using the polar decomposition $\Psi = |\Psi|e^{i\Phi}$, we have the familiar $\Im\left( \Psi^*(t,\vec{\boldsymbol{q}}) \nabla_{\vec{\boldsymbol{q}}}\Psi(t,\vec{\boldsymbol{q}})\right) = |\Psi|^2(t,\vec{\boldsymbol{q}})\nabla_{\vec{\boldsymbol{q}}}\Phi(t,\vec{\boldsymbol{q}})$, and therefore $\vec{\boldsymbol{J}}(t,\vec{\boldsymbol{q}}) = \varrho(t,\vec{\boldsymbol{q}})\vec{\boldsymbol{v}}(t,\vec{\boldsymbol{q}})$ with $\vec{\boldsymbol{v}}$ given by \begin{alignat}{1}\label{dBvelonMINcoup} \forall\; n:\ {\boldsymbol{v}}_n(t,\vec{\boldsymbol{q}}) = \frac{1}{\mEL}\left(\hbar\nabla_{{\boldsymbol{q}}_n} \Phi(t,\vec{\boldsymbol{q}}) - {\boldsymbol{P}}^\sharp_n(t,\vec{\boldsymbol{q}})\right), \end{alignat} which is to be used in (\ref{eq:aMdotEsharp}) and (\ref{eq:MdotEsharpn}). The actual positions of the electrons, ${\boldsymbol{q}}_n(t)$, are now postulated to evolve in time according to the pertinent de Broglie--Bohm-type guiding equation \begin{alignat}{1} \forall\;n:\quad \frac{\mathrm{d} {\boldsymbol{q}}_n(t)}{\mathrm{d} t} = \frac{1}{\mEL}{\mathcal{I}}_n\Bigl.\Bigl(\hbar\nabla_{{\boldsymbol{q}}_n}\Phi(t,\vec{\boldsymbol{q}})-{\boldsymbol{P}}^\sharp_n(t,\vec{\boldsymbol{q}})\Bigr)\Bigr|_{\vec{\boldsymbol{q}}=\vec{\boldsymbol{q}}(t)}. \end{alignat} \subsection{The Schr\"odinger--Maxwell\, ${}^\sharp$ bound states and spectrum} We saw that the $\Psi$-dependent Hamiltonian $H$ of the Schr\"odinger--Maxwell system does not produce the correct hydrogen spectrum. Worse, it does not produce the correct Schr\"odinger spectrum of any many-electron atom with Coulomb interactions, although in the semi-classical large $N$ limit some aspects of it may be recovered. 
By contrast, as we will now show, after subtraction of an additive constant the Schr\"odinger spectra of many-electron atoms with Coulomb interactions are \emph{arbitrarily precisely} reproduced by the Hamiltonian \begin{equation} H = {\textstyle\sum\limits_{n=1}^N} \frac{1}{2\mEL}\left(-i \hbar \nabla_{{\boldsymbol{q}}_n} - {\boldsymbol{P}}^\sharp_n(t,\vec{\boldsymbol{q}}) \right)^2 + E^\sharp (t,\vec{\boldsymbol{q}}) \label{eq:HerwinSHARP} \end{equation} of the Schr\"odinger--Maxwell\,$^\sharp$ system when the $\sharp$-fields are static and of finite energy. \subsubsection{Ground state energy in the absence of non-nuclear external fields} We begin with a definition. \smallskip \noindent \textbf{Definition}: \emph{Let the energy functional $W(\Psi,{\boldsymbol{E}}^\sharp,{\boldsymbol{B}}^\sharp)$ be given by} \begin{eqnarray} W =\int_{{\mathbb{R}}^{3N}}\!\! \Bigl({\textstyle\sum\limits_{n=1}^N}\tfrac{1}{2\mEL}\left|\left(-i \hbar \nabla_{\!{\boldsymbol{q}}_n} - {\boldsymbol{P}}^\sharp_n(t,\vec{\boldsymbol{q}}) \right)\Psi(t,\vec{\boldsymbol{q}})\right|^2 \!+\! E^\sharp(t,\vec{\boldsymbol{q}})|\Psi(t,\vec{\boldsymbol{q}})|^2\! \Bigr)\mathrm{d}^{3N}q, \label{eq:energyFUNC} \end{eqnarray} \emph{where $E^\sharp$ is given in (\ref{eq:FIELDenergyZ}) and ${\boldsymbol{P}}^\sharp_n$ in (\ref{eq:FIELDmomentumZn}), and with $\Psi\in H^1(\mathrm{d}^{3N}q)$, satisfying $\|\Psi\|_{L^2}=1$, and with ${\boldsymbol{E}}^\sharp\in L^2(\mathrm{d}^3s)$ and ${\boldsymbol{B}}^\sharp\in L^2(\mathrm{d}^3s)$ satisfying the constraint equations (\ref{eq:aMdivEsharp}), (\ref{eq:MdivBsharp}), with $E^\sharp(t,\vec{\boldsymbol{q}})\in L^1(|\Psi|^2\mathrm{d}^{3N}q)$ and ${\boldsymbol{v}}_n(t,\vec{\boldsymbol{q}})\in L^2(|\Psi|^2\mathrm{d}^{3N}q)$. 
Then a solution $(\Psi(t,\vec{\boldsymbol{q}}), {\boldsymbol{E}}^\sharp(t,{\boldsymbol{s}};\vec{\boldsymbol{q}}),{\boldsymbol{B}}^\sharp(t,{\boldsymbol{s}};\vec{\boldsymbol{q}}))$ of the Schr\"odinger--Maxwell\,${}^\sharp$ system is called a \underline{\emph{ground state}} if $W$ evaluated with this solution takes the smallest possible value among all solutions of the same system of equations.} \newpage We are now ready to state our first theorem. \noindent \textbf{Theorem}: \emph{In the absence of non-nuclear external fields, the ground state is of the form $\Big( e^{-iE_gt/\hbar}\psi_g(\vec{\boldsymbol{q}}), -\nabla_{\boldsymbol{s}} \phi^\sharp({\boldsymbol{s}};\vec{\boldsymbol{q}}),{\boldsymbol{0}}\Big)$, where \begin{alignat}{1} \phi^\sharp({\boldsymbol{s}};\vec{\boldsymbol{q}}) = \label{eq:phisharpSTATIC} e \int_{{\mathbb{R}}^3} \left(Z\delta^{(a)}_{{\boldsymbol{0}}}({\boldsymbol{s}}') - {\textstyle\sum\limits_{n=1}^N} \delta^{(a)}_{{\boldsymbol{q}}_n}({\boldsymbol{s}}')\right)\frac{1}{|{\boldsymbol{s}}-{\boldsymbol{s}}'|} \mathrm{d}^3s', \end{alignat} and where $E_g \; (= E_1 >0)$ is the lowest eigenvalue of the Hamiltonian \begin{alignat}{1}\label{HAMnoEXTERNALfields} H : = {\textstyle\sum\limits_{n=1}^N}-\tfrac{\hbar^2}{2\mEL}\Delta_{{\boldsymbol{q}}_n} \!+\! \tfrac{1}{8\pi}\int_{{\mathbb{R}}^3} \big|\nabla_{\boldsymbol{s}}\phi^\sharp({\boldsymbol{s}};\vec{\boldsymbol{q}})\big|^2 \mathrm{d}^3s, \end{alignat} and $\psi_g(\vec{\boldsymbol{q}}) $ is the associated real eigenfunction. 
} \noindent \emph{Proof}: We use the polar representation $\Psi(t,\vec{\boldsymbol{q}}) = |\Psi(t,\vec{\boldsymbol{q}})|e^{i\Phi(t,\vec{\boldsymbol{q}})}$ and find, for a solution of the Schr\"odinger--Maxwell\,${}^\sharp$ system, \begin{eqnarray} W = \int_{{\mathbb{R}}^{3N}}\Bigl({\textstyle\sum\limits_{n=1}^N}\Bigl[\tfrac{\hbar^2}{8\mEL}|\nabla_{{\boldsymbol{q}}_n}\ln \varrho(t,\vec{\boldsymbol{q}})|^2 + \tfrac{1}{2}\mEL|{\boldsymbol{v}}_n(t,\vec{\boldsymbol{q}})|^2 \Bigr] + E^\sharp(t,\vec{\boldsymbol{q}})\Bigr)\varrho(t,\vec{\boldsymbol{q}}) \mathrm{d}^{3N}q, \label{eq:energyFUNCrewrite} \end{eqnarray} with $\varrho(t,\vec{\boldsymbol{q}}) = |\Psi(t,\vec{\boldsymbol{q}})|^2$. Equation \eqref{eq:energyFUNCrewrite} makes it plain that $W$ as defined earlier is well-defined. Next, by \eqref{eq:energyFUNCrewrite} it is manifest that the value of $W$ can be lowered for the same $\varrho$ and same $\sharp$-fields by setting ${\boldsymbol{v}}_n(t,\vec{\boldsymbol{q}})\equiv {\boldsymbol{0}}\ \forall\;n$. This means $\hbar\nabla_{{\boldsymbol{q}}_n}\Phi(t,\vec{\boldsymbol{q}}) = {\boldsymbol{P}}^\sharp_n(t,\vec{\boldsymbol{q}})$, by (\ref{dBvelonMINcoup}). Compatible with ${\boldsymbol{v}}_n(t,\vec{\boldsymbol{q}})\equiv{\boldsymbol{0}}\ \forall\;n$ we can now further lower the ${}^\sharp$-field energy $\frac{1}{8\pi}\int_{{\mathbb{R}}^3}\bigl(\big|{\boldsymbol{E}}^\sharp(t,{\boldsymbol{s}};\vec{\boldsymbol{q}})\big|^2 + \big|{\boldsymbol{B}}^\sharp(t,{\boldsymbol{s}};\vec{\boldsymbol{q}})\big|^2 \bigr)\mathrm{d}^3s$ by setting ${\boldsymbol{B}}^\sharp(t,{\boldsymbol{s}};\vec{\boldsymbol{q}})\equiv {\boldsymbol{0}}$. This now means ${\boldsymbol{P}}^\sharp_n(t,\vec{\boldsymbol{q}})\equiv{\boldsymbol{0}}$, by (\ref{eq:FIELDmomentumZn}), and therefore $\nabla_{{\boldsymbol{q}}_n}\Phi(t,\vec{\boldsymbol{q}}) \equiv {\boldsymbol{0}}\ \forall\;n$, meaning $\Phi(t,\vec{\boldsymbol{q}})=\Phi(t)$. 
Moreover, with ${\boldsymbol{v}}_n(t,\vec{\boldsymbol{q}})\equiv{\boldsymbol{0}}\ \forall\; n$ we also have $\partial_t \varrho(t,\vec{\boldsymbol{q}})=0$, and so $\varrho(t,\vec{\boldsymbol{q}})\equiv\varrho(\vec{\boldsymbol{q}})$. Thus, $\Psi(t,\vec{\boldsymbol{q}}) = e^{i\varphi(t)}\psi(\vec{\boldsymbol{q}})$. Next, ${\boldsymbol{E}}^\sharp(t,{\boldsymbol{s}};\vec{\boldsymbol{q}})$ cannot be set identically zero because of its constraint equation. Yet, with ${\boldsymbol{v}}_n(t,\vec{\boldsymbol{q}})\equiv{\boldsymbol{0}}\ \forall\; n$ and ${\boldsymbol{B}}^\sharp(t,{\boldsymbol{s}};\vec{\boldsymbol{q}})\equiv {\boldsymbol{0}}$, we have ${\boldsymbol{E}}^\sharp(t,{\boldsymbol{s}};\vec{\boldsymbol{q}})\equiv {\boldsymbol{E}}^\sharp({\boldsymbol{s}};\vec{\boldsymbol{q}})$, a static field, and thus $E^\sharp(t,\vec{\boldsymbol{q}})=E^\sharp(\vec{\boldsymbol{q}})$. But for a static solution of the $\sharp$-field equations, $\nabla_{\boldsymbol{s}} \boldsymbol{\times} {\boldsymbol{E}}^\sharp({\boldsymbol{s}};\vec{\boldsymbol{q}})={\boldsymbol{0}}$, which means ${\boldsymbol{E}}^\sharp({\boldsymbol{s}};\vec{\boldsymbol{q}})=-\nabla_{\boldsymbol{s}} \phi^\sharp({\boldsymbol{s}};\vec{\boldsymbol{q}})$. Insertion of this representation into the divergence equation (\ref{eq:aMdivEsharp}) yields the Poisson equation \begin{alignat}{1} -\Delta_{\boldsymbol{s}}\phi^\sharp({\boldsymbol{s}};\vec{\boldsymbol{q}}) &= \label{eq:LAPphisharpSTATIC} 4\pi e \Big(Z\delta^{(a)}_{{\boldsymbol{0}}}({\boldsymbol{s}}) - {\textstyle\sum\limits_{n=1}^N} \delta^{(a)}_{{\boldsymbol{q}}_n}({\boldsymbol{s}})\Big). \end{alignat} This Poisson equation for $\phi^\sharp({\boldsymbol{s}};\vec{\boldsymbol{q}})\in\dot{H}^1(\mathrm{d}^3s)$ is solved by (\ref{eq:phisharpSTATIC}).
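That the Newtonian potential of a ball source solves a Poisson equation of the type (\ref{eq:LAPphisharpSTATIC}) can be checked numerically for a single ball, for which the potential is known in closed form (uniformly charged ball assumed; our own illustrative sketch with $q=a=1$):

```python
import math

q, a = 1.0, 1.0
rho_in = 3.0 * q / (4.0 * math.pi * a**3)    # uniform charge density of the ball

def phi(r):
    # Newtonian potential of the uniformly charged ball
    return q * (3 * a**2 - r**2) / (2 * a**3) if r < a else q / r

def laplacian(r, h=1e-4):
    # radial Laplacian (1/r) d^2(r*phi)/dr^2 by central differences
    f = lambda s: s * phi(s)
    return (f(r + h) - 2.0 * f(r) + f(r - h)) / (h * h * r)

for r in (0.2, 0.5, 0.9):                    # inside: -Delta phi = 4*pi*rho
    assert abs(laplacian(r) + 4.0 * math.pi * rho_in) < 1e-5
for r in (1.5, 3.0):                         # outside: charge-free, harmonic
    assert abs(laplacian(r)) < 1e-5
print("Poisson equation verified inside and outside the ball")
```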
Therefore, with $\phi^\sharp({\boldsymbol{s}};\vec{\boldsymbol{q}})$ given in (\ref{eq:phisharpSTATIC}), the ground state wave function $\Psi_g(t,\vec{\boldsymbol{q}}) = e^{i\varphi_g(t)}\psi_g(\vec{\boldsymbol{q}})$ minimizes the reduced energy functional (abusing notation) \begin{eqnarray} {W}(\psi) =\int_{{\mathbb{R}}^{3N}}\! \Bigl({\textstyle\sum\limits_{n=1}^N}\tfrac{\hbar^2}{2\mEL}\left|\nabla_{\!{\boldsymbol{q}}_n}\psi(\vec{\boldsymbol{q}})\right|^2 \!+\! \tfrac{1}{8\pi}\int_{{\mathbb{R}}^3} \big|\nabla_{\boldsymbol{s}}\phi^\sharp({\boldsymbol{s}};\vec{\boldsymbol{q}})\big|^2 \mathrm{d}^3s\; |\psi(\vec{\boldsymbol{q}})|^2\! \Bigr)\mathrm{d}^{3N}q \label{eq:energyFUNCreduced} \end{eqnarray} among all $\psi\in \dot{H}^1(\mathrm{d}^{3N}q)$ for which $\|\psi\|_{L^2}=1$. The Euler--Lagrange equation for this minimization problem is $H\psi_g = E_g\psi_g$, with $H$ given in (\ref{HAMnoEXTERNALfields}), with $E_g$ the lowest eigenvalue and $\psi_g$ the associated real eigenfunction (unique up to sign). Since here we have not insisted on an antisymmetric electron wave function, the ground state wave function $\psi_g$ has no nodes, hence we can assume it to be positive. But then $\Psi_g(t,\vec{\boldsymbol{q}}) = e^{i\varphi_g(t)}\psi_g(\vec{\boldsymbol{q}}) = e^{i\varphi_g(t)}|\psi_g(\vec{\boldsymbol{q}})|$, so $\varphi_g(t) = \Phi_g(t)$. Lastly, for $\Psi_g(t,\vec{\boldsymbol{q}}) = e^{i\Phi_g(t)}\psi_g(\vec{\boldsymbol{q}})$ to solve the corresponding Schr\"odinger equation $i\hbar\partial_t\Psi_g(t,\vec{\boldsymbol{q}}) = H \Psi_g(t,\vec{\boldsymbol{q}})$, it follows that $\Phi_g(t) = - E_g t/\hbar$. \hfill $\square$ If no two balls overlap, the ${}^\sharp$-field energy $ \tfrac{1}{8\pi}\int_{{\mathbb{R}}^3} \big|\nabla_{\boldsymbol{s}}\phi^\sharp({\boldsymbol{s}};\vec{\boldsymbol{q}})\big|^2 \mathrm{d}^3s$ is given by (\ref{eq:HAMfromFIELDenergyZ}). 
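The link between minimizing the reduced functional under $\|\psi\|_{L^2}=1$ and the lowest eigenvalue of $H$ is the standard Rayleigh--Ritz principle. A finite-dimensional toy sketch (a random symmetric matrix as stand-in Hamiltonian; our own illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 30))
H = (A + A.T) / 2.0                       # symmetric stand-in "Hamiltonian"
evals, evecs = np.linalg.eigh(H)          # eigenvalues in ascending order

def rayleigh(v):
    return float(v @ H @ v) / float(v @ v)

# no trial vector dips below the bottom of the spectrum ...
for _ in range(1000):
    v = rng.standard_normal(30)
    assert rayleigh(v) >= evals[0] - 1e-12
# ... and the lowest eigenvector attains the minimum
assert abs(rayleigh(evecs[:, 0]) - evals[0]) < 1e-10
print("minimum of the Rayleigh quotient = lowest eigenvalue =", evals[0])
```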
We have arrived at the usual variational principle for the ground state $\Psi_g(t,\vec{\boldsymbol{q}})$ of a many-electron atom in Born--Oppenheimer approximation, except for the irrelevant additive electrostatic self-energy of the regularized electron and nucleus charge distributions (which only shifts the whole spectrum by an additive constant), and except for the regularization of the point charges by tiny balls. The additive constant self-energy we can subtract from the Hamiltonian (\ref{HAMnoEXTERNALfields}). In this Hamiltonian without self-energy contribution we may now let $a\searrow 0$, for the difference between the correct regularized interaction term and the Coulomb interaction for true point charges makes a practically negligible difference in its eigenvalues (shown rigorously in the textbooks of Thirring), so we may as well work with r.h.s.(\ref{eq:HAMfromFIELDenergyZ}) for all $\vec{\boldsymbol{q}}\in{\mathbb{R}}^{3N}$. \subsubsection{Excited bound state energies without non-nuclear external fields} The eigenvalue equations for the excited states result also from the Ansatz $\Psi(t,\vec{\boldsymbol{q}})= e^{-i Et/\hbar}\psi(\vec{\boldsymbol{q}})$ with static $\sharp$-fields. This does not reveal their variational characterization, however. In fact, the spectrum can be defined by a Courant min-max principle, as follows: it can be built up successively by seeking the minimum of $W$ under the constraint that the minimizing $\Psi$ be $L^2$ orthogonal to all previously found so-constrained minimizers. The same reasoning as used in the previous subsubsection reduces the $W$ functional to the usual Schr\"odinger variational problem with ($a$-regularized) Coulomb interaction between all particles (electrons and nucleus), except for the same additive constant as in the ground state energy problem. Thus the orthogonality condition is the same as used in the usual atomic $N$-body problem.
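A finite-dimensional sketch of this successive constrained minimization (our own toy stand-in for $W$): minimizing the Rayleigh quotient of a symmetric matrix orthogonally to all previously found minimizers reproduces the eigenvalues in increasing order, which is the content of Courant's principle.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((12, 12))
H = (A + A.T) / 2.0
evals = np.linalg.eigvalsh(H)             # reference spectrum, ascending

found = []                                # previously found minimizers
for k in range(12):
    if found:
        F = np.array(found)               # orthonormal rows
        P = np.eye(12) - F.T @ F          # projector onto their complement
        B = np.linalg.svd(P)[0][:, :12 - len(found)]   # basis of the complement
    else:
        B = np.eye(12)
    w, V = np.linalg.eigh(B.T @ H @ B)    # minimize the Rayleigh quotient there
    assert abs(w[0] - evals[k]) < 1e-9    # k-th constrained minimum = k-th eigenvalue
    found.append(B @ V[:, 0])
print("spectrum recovered in increasing order")
```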
\subsubsection{\hspace{-5.pt} Atomic spectra when static external fields $\phi^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}$ and ${\boldsymbol{A}}^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}$ are present} In addition to the fixed nucleus, which is an ``external source'' of a Coulomb field for the many-electron system, other ``external fields'' may be generated in a laboratory, e.g. by a charged capacitor, or by Helmholtz coils with a stationary electrical current flowing through them. Far away from these additional field-generating external sources the external fields may be assumed to decay to zero sufficiently rapidly to make their field energy finite. Away from their (regular macroscopic) external sources these are harmonic fields. We introduce the notation $\phi^{\mbox{\tiny{ext}}}({\boldsymbol{s}}) = \phi^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}({\boldsymbol{s}}) + Ze \int_{{\mathbb{R}}^3} \delta^{(a)}_{{\boldsymbol{0}}}({\boldsymbol{s}}') \tfrac{1}{|{\boldsymbol{s}}-{\boldsymbol{s}}'|} \mathrm{d}^3s'$ to explicitly exhibit the electrostatic Coulomb field of the nucleus; here the suffix ${}_{\mbox{\tiny{lab}}}$ marks the part of the field generated in the laboratory. As long as we do not introduce a magnetic moment of the nucleus, the external magnetic potential is entirely due to the laboratory equipment. In the presence of such an additional external, static electromagnetic field with potentials $\phi^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}({\boldsymbol{s}})$ and ${\boldsymbol{A}}^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}({\boldsymbol{s}})$, satisfying $\nabla_{\boldsymbol{s}}\cdot{\boldsymbol{A}}^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}({\boldsymbol{s}})=0$, the eigenvalue equations are again obtained from the Ansatz of stationary $\Psi(t,\vec{\boldsymbol{q}})= e^{-i Et/\hbar}\psi(\vec{\boldsymbol{q}})$ and static $\sharp$-fields.
Thus we have \begin{alignat}{1} &\phi^\sharp({\boldsymbol{s}};\vec{\boldsymbol{q}}) = \label{eq:phisharpSTATICext} \phi^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}({\boldsymbol{s}}) + e \int_{{\mathbb{R}}^3} \Bigl(Z\delta^{(a)}_{{\boldsymbol{0}}}({\boldsymbol{s}}') - {\textstyle\sum\limits_n}\delta^{(a)}_{{\boldsymbol{q}}_n}({\boldsymbol{s}}')\Bigr)\tfrac{1}{|{\boldsymbol{s}}-{\boldsymbol{s}}'|} \mathrm{d}^3s', \\ &{\boldsymbol{A}}^\sharp({\boldsymbol{s}};\vec{\boldsymbol{q}}) = {\boldsymbol{A}}^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}({\boldsymbol{s}}) . \end{alignat} These potentials produce the electric and magnetic fields \begin{alignat}{1} & {\boldsymbol{E}}^\sharp({\boldsymbol{s}};\vec{\boldsymbol{q}}) = \label{eq:EsharpSTATICext} -\nabla_{\boldsymbol{s}} \phi^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}({\boldsymbol{s}}) + e\int_{{\mathbb{R}}^3} \Bigl(Z\delta^{(a)}_{{\boldsymbol{0}}}({\boldsymbol{s}}')-{\textstyle\sum\limits_n}\delta^{(a)}_{{\boldsymbol{q}}_n}({\boldsymbol{s}}')\Bigr)\tfrac{{\boldsymbol{s}}-{\boldsymbol{s}}'}{|{\boldsymbol{s}}-{\boldsymbol{s}}'|^3} \mathrm{d}^3s',\\ \label{eq:BsharpSTATICext} & {\boldsymbol{B}}^\sharp({\boldsymbol{s}};\vec{\boldsymbol{q}}) = \nabla_{\boldsymbol{s}}\times {\boldsymbol{A}}^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}({\boldsymbol{s}}) .
\end{alignat} The ${}^\sharp$-field energy (for $|{\boldsymbol{q}}_n|>2a$ and $|{\boldsymbol{q}}_j-{\boldsymbol{q}}_k|>2a$) changes to \begin{alignat}{1} &\frac{1}{8\pi}\int_{{\mathbb{R}}^3}\left(\big|{\boldsymbol{E}}^\sharp({\boldsymbol{s}};\vec{\boldsymbol{q}})\big|^2 + \big|{\boldsymbol{B}}^\sharp({\boldsymbol{s}};\vec{\boldsymbol{q}})\big|^2 \right)\mathrm{d}^3s = \label{eq:FIELDenergySTATICext} \\ & \frac{1}{8\pi}\int_{{\mathbb{R}}^3}\left[\big|{\boldsymbol{E}}^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}({\boldsymbol{s}})\big|^2 +\big|{\boldsymbol{B}}^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}({\boldsymbol{s}})\big|^2\right] \mathrm{d}^3s + N \frac35\frac{e^2 }{a} + \sum\sum_{\hskip-.7truecm 1 \leq j < k \leq N} \frac{e^2}{|{\boldsymbol{q}}_j-{\boldsymbol{q}}_k|} \notag \\ & - {\sum\limits_n} \frac{Ze^2}{|{\boldsymbol{q}}_n|} + \frac{e}{4\pi}\int_{{\mathbb{R}}^3}\!\! \nabla_{\boldsymbol{s}}\int_{{\mathbb{R}}^3} \frac{Z\delta^{(a)}_{{\boldsymbol{0}}}({\boldsymbol{s}}') - \sum_n \delta^{(a)}_{{\boldsymbol{q}}_n}({\boldsymbol{s}}')}{|{\boldsymbol{s}}-{\boldsymbol{s}}'|} \mathrm{d}^3s' \cdot \nabla_{\boldsymbol{s}} \phi^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}({\boldsymbol{s}}) \mathrm{d}^3s = \\ & C + \sum\!\sum_{\hskip-.7truecm 1 \leq j < k \leq N} \frac{e^2}{|{\boldsymbol{q}}_j-{\boldsymbol{q}}_k|} - {\sum\limits_n} \Bigl(\frac{Ze^2}{|{\boldsymbol{q}}_n|} - \frac{\eEL}{4\pi}\!\int_{{\mathbb{R}}^3}\!\!\! \Delta_{\boldsymbol{s}}\!\! \int_{{\mathbb{R}}^3}\!\! 
\frac{\delta^{(a)}_{{\boldsymbol{q}}_n}({\boldsymbol{s}}')}{|{\boldsymbol{s}}-{\boldsymbol{s}}'|} \mathrm{d}^3s'\phi^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}({\boldsymbol{s}}) \mathrm{d}^3s\Bigr) = \label{eq:FIELDenergySTATICextII} \\ & C + \sum\sum_{\hskip-.7truecm 1 \leq j < k \leq N} \frac{e^2}{|{\boldsymbol{q}}_j-{\boldsymbol{q}}_k|} - e\sum_{n=1}^N \Big(\frac{Z e}{|{\boldsymbol{q}}_n|} + \phi^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}({\boldsymbol{q}}_n)\Big), \notag \end{alignat} where $C: = N\frac35\frac{\eEL^2 }{a}+ \frac{1}{8\pi}\int_{{\mathbb{R}}^3}\bigl[\big|{\boldsymbol{E}}^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}({\boldsymbol{s}})\big|^2+ \big|{\boldsymbol{B}}^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}({\boldsymbol{s}})\big|^2\bigr] \mathrm{d}^3s$ is a constant that can be absorbed into the energy eigenvalues. We used that $\ -\Delta_{\boldsymbol{s}} \frac{1}{|{\boldsymbol{s}}-{\boldsymbol{s}}'|} = 4\pi \delta_{{\boldsymbol{s}}'}({\boldsymbol{s}})$, and that $\int_{{\mathbb{R}}^3} \delta^{(a)}_{{\boldsymbol{q}}}({\boldsymbol{s}})\phi^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}({\boldsymbol{s}}) \mathrm{d}^3s=\phi^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}({\boldsymbol{q}})$, for $\phi^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}({\boldsymbol{s}})$ is harmonic on the support of the ``charged ball'' particles. R.h.s.(\ref{eq:FIELDenergySTATICextII}) vindicates what we anticipated in (\ref{eq:HAMfromFIELDenergyZext}). Also, now there is a non-vanishing $t$-independent field momentum ${\boldsymbol{P}}^\sharp_n$. Since ${\boldsymbol{B}}^\sharp_n\equiv{\boldsymbol{0}}$ in our setting, the only contribution comes from the external field, viz. 
\begin{alignat}{1} &\hspace{-1truecm} \int_{{\mathbb{R}}^3} {\boldsymbol{E}}^\sharp_n({\boldsymbol{s}};\vec{\boldsymbol{q}})\times{\boldsymbol{B}}^\sharp({\boldsymbol{s}};\vec{\boldsymbol{q}})\mathrm{d}^3s =\\ &\hspace{-.1truecm} - \int_{{\mathbb{R}}^3} \nabla_{\boldsymbol{s}}\phi^\sharp_n({\boldsymbol{s}};\vec{\boldsymbol{q}})\times\nabla_{\boldsymbol{s}}\times {\boldsymbol{A}}^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}({\boldsymbol{s}})\mathrm{d}^3s =\\ &\hspace{-.1truecm} - \int_{{\mathbb{R}}^3}\left[ \nabla_{\boldsymbol{s}}\times\left(\phi^\sharp_n({\boldsymbol{s}};\vec{\boldsymbol{q}}) \nabla_{\boldsymbol{s}}\times {\boldsymbol{A}}^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}({\boldsymbol{s}})\right)- \phi^\sharp_n({\boldsymbol{s}};\vec{\boldsymbol{q}})\nabla_{\boldsymbol{s}}\times\nabla_{\boldsymbol{s}}\times {\boldsymbol{A}}^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}({\boldsymbol{s}})\right]\mathrm{d}^3s = \\ &\hspace{-.1truecm} - \int_{{\mathbb{R}}^3} \phi^\sharp_n({\boldsymbol{s}};\vec{\boldsymbol{q}})\Delta_{\boldsymbol{s}} {\boldsymbol{A}}^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}({\boldsymbol{s}})\mathrm{d}^3s =\\ &\hspace{-.1truecm} - \int_{{\mathbb{R}}^3} {\boldsymbol{A}}^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}({\boldsymbol{s}}) \Delta_{\boldsymbol{s}} \phi^\sharp_n({\boldsymbol{s}};\vec{\boldsymbol{q}})\mathrm{d}^3s = -\eEL 4\pi{\boldsymbol{A}}^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}({\boldsymbol{q}}_n). \label{eq:FIELDmomentumEXT} \end{alignat} When substituting our results (\ref{eq:FIELDmomentumEXT}) and (\ref{eq:FIELDenergySTATICext}) into (\ref{eq:ERWINeqnNEW}), together with $\Psi(t,\vec{\boldsymbol{q}}) = e^{-i Et/\hbar}\psi(\vec{\boldsymbol{q}})$, where $\psi(\vec{\boldsymbol{q}})\in{\mathbb{R}}$, we obtain (for $|{\boldsymbol{q}}_n|>2a$ and $|{\boldsymbol{q}}_j-{\boldsymbol{q}}_k|>2a$) \begin{eqnarray}\!\!\!\! \Bigl[{\textstyle\sum\limits_{n=1}^N}\Big( \!\tfrac{1}{2\mEL}\!\left[\tfrac{\hbar}{i}\nabla_{{\boldsymbol{q}}_n}^{}\!\! 
+ \tfrac{\eEL}{c}{\mathcal{I}}_n^{-1}\!{\boldsymbol{A}}^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}({\boldsymbol{q}}_n)\right]^2\!\! - e\phi^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}({\boldsymbol{q}}_n) - \tfrac{Ze^2}{|{\boldsymbol{q}}_n|}\! \Big) \!+\! {\textstyle\sum\!\!\!\sum\limits_{\hskip-.6truecm 1 \leq j < k \leq N}} \tfrac{e^2}{|{\boldsymbol{q}}_j-{\boldsymbol{q}}_k|} \!\Bigr]\psi(\vec{\boldsymbol{q}}) = \label{eq:ErwinEQstationaryRELext} \widetilde{E} \psi(\vec{\boldsymbol{q}}) \end{eqnarray} where $\widetilde{E} = E - N\frac35\frac{\eEL^2 }{a} - \frac{1}{8\pi}\int_{{\mathbb{R}}^3}\big[\big|{\boldsymbol{E}}^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}({\boldsymbol{s}})\big|^2 +\big|{\boldsymbol{B}}^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}({\boldsymbol{s}})\big|^2\big]\mathrm{d}^3s$, while for $|{\boldsymbol{q}}_n|\leq 2a$ the interaction term between the $n$-th electron and the nucleus converges monotonically down to $- 2\frac35\frac{Z\eEL^2 }{a}$ as $|{\boldsymbol{q}}_n|\downarrow 0$; an analogous result, with the sign $-$ replaced by $+$ and $Z$ replaced by $1$, holds for any electron-electron interaction term when $|{\boldsymbol{q}}_j-{\boldsymbol{q}}_k|\leq 2a$. As mentioned, for tiny $a$ the difference between the regularized interaction term and the true Coulomb interaction of two point charges has a practically negligible effect on the eigenvalues, so we may as well work with (\ref{eq:ErwinEQstationaryRELext}) for all $\vec{\boldsymbol{q}}\in{\mathbb{R}}^{3N}$. Since all eigenfunctions of (\ref{eq:ErwinEQstationaryRELext}) can be chosen real, they produce a trivial generic velocity field, consistent with the electro-magneto-static $\sharp$-fields that gave us (\ref{eq:ErwinEQstationaryRELext}).
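The evaluation $\int_{{\mathbb{R}}^3} \delta^{(a)}_{{\boldsymbol{q}}}({\boldsymbol{s}})\,\phi^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}({\boldsymbol{s}})\,\mathrm{d}^3s = \phi^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}({\boldsymbol{q}})$ used above rests on the solid mean-value property of harmonic functions. A minimal numerical illustration (the harmonic test function, ball center, and radius are arbitrary choices, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# a harmonic test function: phi(s) = 1/|s - s0|, harmonic away from s0  [assumption]
s0 = np.array([5.0, 0.0, 0.0])
def phi(s):
    return 1.0 / np.linalg.norm(s - s0, axis=-1)

q, a = np.array([1.0, -0.5, 0.2]), 0.3   # ball center and radius  [assumption]

# uniform samples in the ball B_a(q) via rejection sampling
pts = rng.uniform(-a, a, size=(400000, 3))
pts = q + pts[np.linalg.norm(pts, axis=1) <= a]

# solid mean-value property: the ball average of a harmonic function is its center value
assert abs(phi(pts).mean() - phi(q)) < 1e-3
```

The same property underlies the statement that the regularized interaction reduces to the point-charge value once the balls no longer overlap.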
We summarize our observation as follows: \noindent \emph{The assumption of a stationary solution $\Psi(t,\vec{\boldsymbol{q}}) = e^{-i Et/\hbar}\psi(\vec{\boldsymbol{q}})$ with $\psi(\vec{\boldsymbol{q}})\in{\mathbb{R}}$ is compatible with purely electrostatic $\sharp$-fields which include static external fields $\phi^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}({\boldsymbol{s}})$ and ${\boldsymbol{A}}^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}({\boldsymbol{s}})$, and (except for an additive constant shift in the energy eigenvalues) it leads to the conventional Schr\"odinger equation for an atom acted on by these externally sourced fields through the ``minimal coupling'' rule.} \bigskip \noindent\textbf{Appendix to 3.4.3: On the \emph{Zeeman} and \emph{Stark} effects } {Assume that the electrons and the nucleus in a bound state are located in a ball of radius comparable to (several) Bohr lengths, while the externally generated electric and magnetic fields ${\boldsymbol{E}}^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}$ and ${\boldsymbol{B}}^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}$ are created by machines in the laboratory; then, up to a very tiny error, these external fields do not vary over the distances separating the electrons from the nucleus.
Thus we may be tempted to suppose that for all practical purposes the correct Schr\"odinger eigenvalues are obtained by computing with an electron-proton system exposed to a constant electric field ${\boldsymbol{E}}^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}= -\nabla_{\boldsymbol{s}} \phi^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}({\boldsymbol{s}})$ and a constant magnetic field ${\boldsymbol{B}}^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}=\nabla_{\boldsymbol{s}}\times {\boldsymbol{A}}^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}({\boldsymbol{s}})$, with $\phi^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}({\boldsymbol{s}})= -{\boldsymbol{E}}^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}\cdot{\boldsymbol{s}}$ and ${\boldsymbol{A}}^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}({\boldsymbol{s}}) = \frac12{\boldsymbol{B}}^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}\times {\boldsymbol{s}}$. This supposition is partly correct, and partly wrong. If one has no electric field, i.e. ${\boldsymbol{E}}^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}\equiv{\boldsymbol{0}}$, yet a constant applied magnetic field ${\boldsymbol{B}}^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}\not\equiv{\boldsymbol{0}}$, then the just described replacement yields the Schr\"odinger equation for an atom in a constant ${\boldsymbol{B}}^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}$ field, which yields the \emph{normal Zeeman effect} of the splitting of the atomic spectral lines. If one has no magnetic field, i.e. ${\boldsymbol{B}}^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}\equiv{\boldsymbol{0}}$, yet a constant applied electric field ${\boldsymbol{E}}^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}\not\equiv{\boldsymbol{0}}$, then the just described replacement yields the Schr\"odinger equation for an atom in a constant ${\boldsymbol{E}}^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}$ field, which has \emph{no} eigenvalues \cite{ReedSimonBOOKiv} --- this, however, is a consequence of oversimplifying the problem.
If, on the other hand, one uses Schr\"odinger's first-order perturbation theory to estimate the energy shift in an applied electric field, starting from an unperturbed eigenstate, this captures the \emph{Stark effect} of the splitting of the spectral lines}, which was studied in great detail in \cite{ErwinWMc}.\vspace{-10pt} \subsection{Emission of a flash of electromagnetic radiation} In the following we argue that, per our dynamical equations, an atom that initially is in an excited $n>1$ eigenstate with angular momentum $\ell>0$, and that is subsequently met by a gentle pulse of external electromagnetic vacuum radiation, will begin to emit an electromagnetic $\sharp$-field wave with frequencies centered on the $\omega_{n,n'}$, with $\hbar\omega_{n,n'}$ precisely the difference of the energy eigenvalues of the usual Schr\"odinger hydrogen problem. We will set up a traditional perturbative iteration scheme for the response of the atom and of the radiation $\sharp$-fields. In first-order approximation the incoming radiation triggers the atom to emit an electromagnetic wave with the right frequency (or frequencies). While so far we have not been able to show that this leads to a transition, ultimately to the ground state, we have been able to check that such a dynamical scenario is compatible with our full set of dynamical equations. Qualitatively this latter statement is also true for the Schr\"odinger--Maxwell system, but as demonstrated earlier, that system does not produce quantitatively correct atomic energy spectra, whereas our model evidently does. To convey the essence of our argument it suffices to consider a hydrogen atom ($N=1=Z$). The many-electron atom can be treated analogously. \medskip \noindent \textbf{Remark:} \emph{We will show that the emission to first order is a ``flash of light'' in this model, essentially a thin spherical shell of radiation propagating outward in all directions.
This does not qualify as ``the photon as a particle,'' which we announced in the introduction would appear in the model. To arrive at the conclusion that the model accounts for the photon, another change of perspective is required, notationally trivial but conceptually radical! This will be discussed in a later section. } \subsubsection{Hydrogen radiation} Since the emission of electromagnetic radiation is a dynamical process, it is advisable not to complicate the problem by also allowing time-dependent external sources. Therefore, in the following the external sources $\rho^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}({\boldsymbol{s}})$ and ${\boldsymbol{j}}^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}({\boldsymbol{s}})$ are assumed to be smooth, compactly supported, and static; recall that the proton's charge density, while also counted as external to the system of electrons, is not included in $\rho^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}$ but is treated as an external source in its own right. To facilitate our discussion we first treat the special case in which $\rho^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}({\boldsymbol{s}})\equiv 0$ and ${\boldsymbol{j}}^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}({\boldsymbol{s}})\equiv {\boldsymbol{0}}$, equivalently in the absence of laboratory-generated static ${\boldsymbol{E}}^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}$ and ${\boldsymbol{B}}^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}$. A straightforward generalization allows external laboratory sources to be taken into account. \medskip \noindent\textbf{3.5.1.i: Explicitly exhibiting the static and the radiation $\sharp$-fields.} \smallskip \smallskip With only a single electron and the fixed proton present, the only external field is the Coulomb field of the proton.
We thus write ${\boldsymbol{E}}^\sharp(t,{\boldsymbol{s}};{\boldsymbol{q}}) = {\boldsymbol{E}}^\sharp_{\mbox{\tiny{{el}}}}(t,{\boldsymbol{s}};{\boldsymbol{q}}) - \nabla_{\boldsymbol{s}} \phi^{(a)}_{\mbox{\tiny{{pr}}}}(|{\boldsymbol{s}}|)$, where $\phi^{(a)}_{\mbox{\tiny{{pr}}}}(|{\boldsymbol{s}}|) = e \int_{{\mathbb{R}}^3}\frac{1}{|{\boldsymbol{s}} -{\boldsymbol{s}}'|}\delta^{(a)}_{{\boldsymbol{0}}}({\boldsymbol{s}}')\mathrm{d}^3s'$; thus, $\phi^{(a)}_{\mbox{\tiny{{pr}}}}(r) = e/r$ if $r\geq a$, and $\phi^{(a)}_{\mbox{\tiny{{pr}}}}(r) = \phi^{(a)}_{\mbox{\tiny{{pr}}}}(0) - \frac12er^2/a^3$ if $0\leq r\leq a$, with $\phi^{(a)}_{\mbox{\tiny{{pr}}}}(0)$ determined by continuity at $r=a$. We also have ${\boldsymbol{B}}^\sharp(t,{\boldsymbol{s}};{\boldsymbol{q}}) = {\boldsymbol{B}}^\sharp_{\mbox{\tiny{{el}}}}(t,{\boldsymbol{s}};{\boldsymbol{q}})$. The electric $\sharp$-field of the electron, ${\boldsymbol{E}}^\sharp_{\mbox{\tiny{{el}}}}(t,{\boldsymbol{s}};{\boldsymbol{q}})$, will be split further into the sum of its electrostatic Coulomb field at generic position ${\boldsymbol{q}}$ and its radiation field generated by the generic current density vector, i.e. ${\boldsymbol{E}}^\sharp_{\mbox{\tiny{{el}}}}(t,{\boldsymbol{s}};{\boldsymbol{q}}) = - \nabla_{\boldsymbol{s}} \phi^{(a)}_{\mbox{\tiny{{el}}}}(|{\boldsymbol{s}}-{\boldsymbol{q}}|) + {\boldsymbol{E}}^\sharp_{\mbox{\tiny{rad}}}(t,{\boldsymbol{s}};{\boldsymbol{q}})$, where $\phi^{(a)}_{\mbox{\tiny{{el}}}}(|{\boldsymbol{s}}-{\boldsymbol{q}}|):=- e \int_{{\mathbb{R}}^3}\frac{1}{|{\boldsymbol{s}} -{\boldsymbol{s}}'|}\delta^{(a)}_{{\boldsymbol{q}}}({\boldsymbol{s}}')\mathrm{d}^3s'$; in analogy with the proton's Coulomb potential, but with the opposite sign of the charge, $\phi^{(a)}_{\mbox{\tiny{{el}}}}(r) = -e/r$ if $r\geq a$, and $\phi^{(a)}_{\mbox{\tiny{{el}}}}(r) = \phi^{(a)}_{\mbox{\tiny{{el}}}}(0) + \frac12er^2/a^3$ if $0\leq r\leq a$, with $\phi^{(a)}_{\mbox{\tiny{{el}}}}(0)$ determined by continuity at $r=a$.
The magnetic $\sharp$-field of the electron is just the magnetic radiation field, i.e. ${\boldsymbol{B}}^\sharp_{\mbox{\tiny{{el}}}}(t,{\boldsymbol{s}};{\boldsymbol{q}})= {\boldsymbol{B}}^\sharp_{\mbox{\tiny{rad}}}(t,{\boldsymbol{s}};{\boldsymbol{q}})$. As indicated by the suffix ${}_{\textrm{rad}}$, these contributions to ${\boldsymbol{B}}^\sharp_{\mbox{\tiny{{el}}}}(t,{\boldsymbol{s}};{\boldsymbol{q}})$ and ${\boldsymbol{E}}^\sharp_{\mbox{\tiny{{el}}}}(t,{\boldsymbol{s}};{\boldsymbol{q}})$ will account for the phenomenon of electromagnetic radiation; note, however, that they also include ``standing electrical waves'' associated with a pulsating spherically symmetric $|\Psi(t,{\boldsymbol{q}})|^2$. Inserting this decomposition into (\ref{eq:MdotEsharpn})--(\ref{eq:MdivBsharpn}) (with $n=1$ replaced by the suffix $_{\textrm{{el}}}$), straightforward vector calculus yields that the radiation fields satisfy the system of equations \begin{alignat}{1}\hspace{-.7truecm} \partial_t{{\boldsymbol{E}}^\sharp_{\mbox{\tiny{rad}}}} + \bigl({\boldsymbol{v}}\!\cdot\!\nabla_{{\boldsymbol{q}}}\bigr){\boldsymbol{E}}^\sharp_{\mbox{\tiny{rad}}} - c\nabla_{\boldsymbol{s}}\times {\boldsymbol{B}}^\sharp_{\mbox{\tiny{rad}}} &= \label{eq:MdotEsharpRAD} - \nabla_{\boldsymbol{s}}\times \Big(\nabla_{\boldsymbol{s}}\times \Big({\mathcal{I}} {\boldsymbol{v}}\phi^{(a)}_{\mbox{\tiny{{el}}}}(|{\boldsymbol{s}}-{\boldsymbol{q}}|)\Big)\Big) ,\\ \nabla_{\boldsymbol{s}}\cdot {\boldsymbol{E}}^\sharp_{\mbox{\tiny{rad}}} &= \label{eq:MdivEsharpRAD} 0,\\ \partial_t{{\boldsymbol{B}}^\sharp_{\mbox{\tiny{rad}}}} +\bigl({\boldsymbol{v}}\!\cdot\!\nabla_{{\boldsymbol{q}}}\bigr){\boldsymbol{B}}^\sharp_{\mbox{\tiny{rad}}} + c \nabla_{\boldsymbol{s}}\times{\boldsymbol{E}}^\sharp_{\mbox{\tiny{rad}}} &= \label{eq:MdotBsharpRAD} {\boldsymbol{0}} \, , \\ \nabla_{\boldsymbol{s}}\cdot {\boldsymbol{B}}^\sharp_{\mbox{\tiny{rad}}} &= \label{eq:MdivBsharpRAD} 0\, , \end{alignat} where, for brevity, we have again suppressed the
arguments from ${\boldsymbol{E}}^\sharp_{\mbox{\tiny{rad}}}(t,{\boldsymbol{s}};{\boldsymbol{q}})$ and ${\boldsymbol{B}}^\sharp_{\mbox{\tiny{rad}}}(t,{\boldsymbol{s}};{\boldsymbol{q}})$, and from the velocity field ${\boldsymbol{v}}(t,{\boldsymbol{q}})$, given by $\varrho(t,{\boldsymbol{q}}){\boldsymbol{v}}(t,{\boldsymbol{q}}):={\boldsymbol{J}}(t,{\boldsymbol{q}})$, i.e. \begin{alignat}{1}\label{eq:dBBv} {\boldsymbol{v}}(t,{\boldsymbol{q}}) = \Im \left(\Psi^*(t,{\boldsymbol{q}})\tfrac{1}{\mEL} \bigl(\hbar \nabla_{{\boldsymbol{q}}} - i{\boldsymbol{P}}^\sharp_{\mbox{\tiny{{el}}}}(t,{\boldsymbol{q}})\bigr) \Psi(t,{\boldsymbol{q}})\right)\Big/|\Psi|^2(t,{\boldsymbol{q}}) . \end{alignat} Here, $\Psi$ satisfies the following Schr\"odinger equation, \begin{alignat}{1}\label{ERWINeqnHYDROcoupleSHARP} i\hbar\partial_t\Psi(t,{\boldsymbol{q}}) = \Bigl(\tfrac{1}{2\mEL}\left(-i \hbar \nabla_{\boldsymbol{q}} - {\boldsymbol{P}}^\sharp_{\mbox{\tiny{{el}}}}(t,{\boldsymbol{q}}) \right)^2 + E^\sharp(t,{\boldsymbol{q}})\Bigr) \Psi(t,{\boldsymbol{q}}), \end{alignat} with the ${}^\sharp$-field energy $E^\sharp(t,{\boldsymbol{q}})$ given by (for $|{\boldsymbol{q}}|>2a$) \begin{alignat}{1} \frac{1}{8\pi}\int_{{\mathbb{R}}^3}\left(\big|{\boldsymbol{E}}^\sharp\big|^2 + \big|{\boldsymbol{B}}^\sharp\big|^2 \right)\!(t,{\boldsymbol{s}};{\boldsymbol{q}})\,\mathrm{d}^3s = E^\sharp_{\mbox{\tiny{rad}}}(t,{\boldsymbol{q}}) - \frac{e^2}{|{\boldsymbol{q}}|} + 2\frac35\frac{e^2 }{a}, \label{eq:FIELDenergyHYDROGENradiatingNOextF} \end{alignat} with \begin{alignat}{1} \label{eq:FIELDenergyHYDROGENradiatingNOextFb} E^\sharp_{\mbox{\tiny{rad}}}(t,{\boldsymbol{q}}) = \frac{1}{8\pi}\int_{{\mathbb{R}}^3}\left(\big|{\boldsymbol{E}}^\sharp_{\mbox{\tiny{rad}}}\big|^2 + \big|{\boldsymbol{B}}^\sharp_{\mbox{\tiny{rad}}}\big|^2\right)\!(t,{\boldsymbol{s}};{\boldsymbol{q}}) \,\mathrm{d}^3s; \end{alignat} when $a$ is small enough, we can use r.h.s.(\ref{eq:FIELDenergyHYDROGENradiatingNOextF}) in the Schr\"odinger 
equation \eqref{ERWINeqnHYDROcoupleSHARP} for all ${\boldsymbol{q}}\in{\mathbb{R}}^3$, not just for $|{\boldsymbol{q}}|>2a$, with negligible errors. Furthermore, ${\boldsymbol{P}}^\sharp_{\mbox{\tiny{{el}}}}(t,{\boldsymbol{q}})$ is given by (recall that ${\boldsymbol{B}}^\sharp_{\mbox{\tiny{{el}}}}(t,{\boldsymbol{s}};{\boldsymbol{q}})= {\boldsymbol{B}}^\sharp_{\mbox{\tiny{rad}}}(t,{\boldsymbol{s}};{\boldsymbol{q}})$) \begin{alignat}{1} \label{eq:FIELDmomentumELECTRONhydrogenPRELIM} \hspace{-1truecm} {\boldsymbol{P}}^\sharp_{\mbox{\tiny{{el}}}}(t,{\boldsymbol{q}}) & = \tfrac{1}{4\pi c}\int_{{\mathbb{R}}^3} \big({\boldsymbol{E}}^\sharp_{\mbox{\tiny{{el}}}} \times{\boldsymbol{B}}^\sharp_{\mbox{\tiny{{rad}}}}\big)(t,{\boldsymbol{s}};{\boldsymbol{q}})\mathrm{d}^3s \\ \notag & = \tfrac{1}{4\pi c}\int_{{\mathbb{R}}^3}\!\!\Big( {\boldsymbol{E}}^\sharp_{\mbox{\tiny{rad}}}\times{\boldsymbol{B}}^\sharp_{\mbox{\tiny{rad}}} - \big(\nabla \phi^{(a)}_{\mbox{\tiny{{el}}}}(|\,\cdot\,|)\big) \times{\boldsymbol{B}}^\sharp_{\mbox{\tiny{{rad}}}}\Big)(t,{\boldsymbol{s}};{\boldsymbol{q}})\mathrm{d}^3s. \end{alignat} Thus we have explicitly exhibited the static and the radiation degrees of freedom in both the $\sharp$-field equations and in Schr\"odinger's equation. For some computational purposes it is convenient to rewrite the expression involving the Coulomb potentials in a more familiar-looking format, invoking the identity (and purging some subscripts) $\nabla\times(\phi{\boldsymbol{B}}^\sharp) = (\nabla\phi)\times{\boldsymbol{B}}^\sharp +\phi \nabla\times{\boldsymbol{B}}^\sharp$, aided by Stokes' theorem, which converts the integral of the curl into a vanishing integral over the ``boundary at infinity,'' leaving an integral over $\phi \nabla\times{\boldsymbol{B}}^\sharp$.
Next, invoking the ${\boldsymbol{s}}$-solenoidal character of ${\boldsymbol{B}}^\sharp$, we write ${\boldsymbol{B}}^\sharp = \nabla_{\boldsymbol{s}}\times {\boldsymbol{A}}^\sharp$, and note that we can demand the Coulomb gauge condition $\nabla_{\boldsymbol{s}}\cdot {\boldsymbol{A}}^\sharp= 0$, which can always be achieved by adding to ${\boldsymbol{A}}^\sharp$ the gradient of the solution of a Poisson equation, a step that does not change ${\boldsymbol{B}}^\sharp$. Finally, recalling the identity $\nabla\times(\nabla\times{\boldsymbol{a}}) = \nabla(\nabla\cdot{\boldsymbol{a}}) - \Delta{\boldsymbol{a}}$, applied to ${\boldsymbol{a}}={\boldsymbol{A}}^\sharp$ with vanishing ${\boldsymbol{s}}$-divergence, we find (with the help of Green's theorem) that $\int (\nabla\phi)\times{\boldsymbol{B}}^\sharp d^3s = - \int\phi \nabla\times(\nabla\times{\boldsymbol{A}}^\sharp) d^3s = \int (\Delta\phi ) {\boldsymbol{A}}^\sharp d^3s$. And so, \begin{alignat}{1} \label{eq:FIELDmomentumELECTRONhydrogen} \hspace{-.7truecm} {\boldsymbol{P}}^\sharp_{\mbox{\tiny{{el}}}}(t,{\boldsymbol{q}}) =\tfrac{1}{4\pi c}\!\int_{{\mathbb{R}}^3}\!\!\Big( {\boldsymbol{E}}^\sharp_{\mbox{\tiny{rad}}}\times{\boldsymbol{B}}^\sharp_{\mbox{\tiny{rad}}}\Big)(t,{\boldsymbol{s}};{\boldsymbol{q}})\mathrm{d}^3s - \tfrac{\eEL}{c} {\left\{\!\left[\right.\right.}{\boldsymbol{A}}^\sharp_{\mbox{\tiny{rad}}}{\left.\left.\right]\!\right\}}_{{\boldsymbol{q}}} \!(t,{\boldsymbol{q}}), \end{alignat} where \begin{equation} {\left\{\!\left[\right.\right.} {\boldsymbol{A}}^\sharp_{\mbox{\tiny{rad}}}{\left.\left.\right]\!\right\}}_{{\boldsymbol{q}}}(t,{\boldsymbol{q}}) := \int_{{\mathbb{R}}^3} {\boldsymbol{A}}^\sharp_{\mbox{\tiny{rad}}}(t,{\boldsymbol{s}};{\boldsymbol{q}})\delta^{(a)}_{{\boldsymbol{q}}}({\boldsymbol{s}})\mathrm{d}^3s.
\end{equation} Note that l.h.s.(\ref{eq:FIELDmomentumELECTRONhydrogen}) is gauge-invariant, so r.h.s.(\ref{eq:FIELDmomentumELECTRONhydrogen}) must be, too --- except that we employed the Coulomb gauge condition to arrive at r.h.s.(\ref{eq:FIELDmomentumELECTRONhydrogen}), so the remaining gauge transformations must leave the Coulomb gauge condition intact. Turning our attention back to the Schr\"odinger equation \eqref{ERWINeqnHYDROcoupleSHARP}, we note that the constant at r.h.s.(\ref{eq:FIELDenergyHYDROGENradiatingNOextF}) can be absorbed into the energy eigenvalues by a gauge transformation $\Psi\mapsto\widetilde\Psi \equiv e^{i (6/5)(e^2/a)t/\hbar}\Psi$, which leaves ${\boldsymbol{A}}^\sharp_{\mbox{\tiny{rad}}}(t,{\boldsymbol{s}};{\boldsymbol{q}})$ and $\phi^\sharp({\boldsymbol{s}};{\boldsymbol{q}})$ unchanged, and which does not change the physical output of the model --- i.e., $\Psi$ and $\widetilde\Psi$ produce the same $\varrho$ and ${\boldsymbol{J}}$, and also ${\boldsymbol{E}}^\sharp_{\mbox{\tiny{rad}}}(t,{\boldsymbol{s}};{\boldsymbol{q}})$ and ${\boldsymbol{B}}^\sharp_{\mbox{\tiny{rad}}}(t,{\boldsymbol{s}};{\boldsymbol{q}})$ are unaltered.
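The rewriting above leaned on the vector identity $\nabla\times(\nabla\times{\boldsymbol{a}}) = \nabla(\nabla\cdot{\boldsymbol{a}}) - \Delta{\boldsymbol{a}}$; a quick symbolic check with SymPy's vector module (the particular test field standing in for ${\boldsymbol{A}}^\sharp$ is an arbitrary choice):

```python
import sympy as sp
from sympy.vector import CoordSys3D, curl, divergence, gradient

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

# an arbitrary smooth vector field standing in for A^sharp  [assumption]
A = (x*y**2*z)*N.i + (sp.sin(x)*z)*N.j + (y*sp.exp(x))*N.k

def lap(f):
    # scalar Laplacian in Cartesian coordinates
    return sp.diff(f, x, 2) + sp.diff(f, y, 2) + sp.diff(f, z, 2)

# component-wise vector Laplacian (valid in Cartesian coordinates)
vec_lap = lap(A.dot(N.i))*N.i + lap(A.dot(N.j))*N.j + lap(A.dot(N.k))*N.k

# curl(curl A) = grad(div A) - Laplacian A
lhs = curl(curl(A))
rhs = gradient(divergence(A)) - vec_lap
assert sp.simplify((lhs - rhs).to_matrix(N)) == sp.zeros(3, 1)
```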
Thus, $\widetilde\Psi$ satisfies \begin{alignat}{1}\label{ERWINeqnHYDROradSHARP} i\hbar \partial_t\widetilde\Psi(t,{\boldsymbol{q}}) = \big(H_{\mbox{\tiny{{hyd}}}}({\boldsymbol{q}}) + H_{\mbox{\tiny{int}}}(t,{\boldsymbol{q}}) + H_{\mbox{\tiny{rad}}}(t,{\boldsymbol{q}})\big)\widetilde\Psi(t,{\boldsymbol{q}}), \end{alignat} where \begin{alignat}{1}\label{Hhyd} H_{\mbox{\tiny{{hyd}}}}({\boldsymbol{q}}) := -\frac{\hbar^2 }{2\mEL}\Delta_{\boldsymbol{q}} - e^2\frac{1}{|{\boldsymbol{q}}|}, \qquad\mbox{for}\qquad |{\boldsymbol{q}}|\geq 2a, \end{alignat} (regularized when $|{\boldsymbol{q}}|< 2a$) is Schr\"odinger's Hamiltonian of hydrogen in the Born--Oppenheimer approximation, where \begin{alignat}{1}\label{HhydRAD} H_{\mbox{\tiny{rad}}}(t,{\boldsymbol{q}}) : = E^\sharp_{\mbox{\tiny{rad}}}(t,{\boldsymbol{q}}), \end{alignat} and where $H_{\mbox{\tiny{int}}}(t,{\boldsymbol{q}})$ is defined by (\ref{ERWINeqnHYDROcoupleSHARP})--(\ref{HhydRAD}) as \begin{alignat}{1}\label{HhydINT} H_{\mbox{\tiny{int}}}(t,{\boldsymbol{q}}) : = \tfrac{1}{2\mEL}\big|{\boldsymbol{P}}^\sharp_{\mbox{\tiny{{el}}}}\big|^2 + i\tfrac{\hbar}{2\mEL}\nabla_{{\boldsymbol{q}}}\cdot{\boldsymbol{P}}^\sharp_{\mbox{\tiny{{el}}}} + i\tfrac{\hbar}{\mEL}{\boldsymbol{P}}^\sharp_{\mbox{\tiny{{el}}}} \cdot\nabla_{{\boldsymbol{q}}} , \end{alignat} with ${\boldsymbol{P}}^\sharp_{\mbox{\tiny{{el}}}}$ depending on $(t,{\boldsymbol{q}})$, as defined above. 
\newpage \noindent\textbf{3.5.1.ii: Solving the radiation $\sharp$-field equations, assuming ${\boldsymbol{v}}(t,{\boldsymbol{q}})$ is given.} \smallskip Fourier transform $\hat{f}({\boldsymbol{k}}) := \frac{1}{(2\pi)^{3/2}}\int e^{-i{\boldsymbol{k}}\cdot{\boldsymbol{s}}}f({\boldsymbol{s}})d^3s$ with conjugate variables ${\boldsymbol{s}}\to{\boldsymbol{k}}$ yields the evolution equations \begin{alignat}{1}\hspace{-.7truecm} \partial_t{\widehat{{\boldsymbol{E}}}^\sharp_{\mbox{\tiny{rad}}}} + \bigl({\boldsymbol{v}}\!\cdot\!\nabla_{{\boldsymbol{q}}}\bigr)\widehat{{\boldsymbol{E}}}^\sharp_{\mbox{\tiny{rad}}} - i c{\boldsymbol{k}}\times \widehat{{\boldsymbol{B}}}^\sharp_{\mbox{\tiny{rad}}} &= \label{eq:MdotEsharpRADF} - e 4\pi \tfrac{1\;}{|{\boldsymbol{k}}|^2}{\boldsymbol{k}}\times \Big({\boldsymbol{k}}\times {{\mathcal{I}} {\boldsymbol{v}}} \widehat{\delta}^{(a)}_{{\boldsymbol{q}}}({\boldsymbol{k}})\Big) ,\\ \partial_t{\widehat{{\boldsymbol{B}}}^\sharp_{\mbox{\tiny{rad}}}} +\bigl({\boldsymbol{v}}\!\cdot\!\nabla_{{\boldsymbol{q}}}\bigr)\widehat{{\boldsymbol{B}}}^\sharp_{\mbox{\tiny{rad}}} + ic {\boldsymbol{k}}\times\widehat{{\boldsymbol{E}}}^\sharp_{\mbox{\tiny{rad}}} &= \label{eq:MdotBsharpRADF} {\boldsymbol{0}} \, , \end{alignat} and the constraint equations \begin{alignat}{1}\hspace{-.7truecm} {\boldsymbol{k}}\cdot \widehat{{\boldsymbol{E}}}^\sharp_{\mbox{\tiny{rad}}} &= \label{eq:MdivEsharpRADF} 0,\\ {\boldsymbol{k}}\cdot \widehat{{\boldsymbol{B}}}^\sharp_{\mbox{\tiny{rad}}} &= \label{eq:MdivBsharpRADF} 0\, . 
\end{alignat} Here we have suppressed the argument $(t,{\boldsymbol{k}};{\boldsymbol{q}})$ of the $\sharp$-fields and the argument $(t,{\boldsymbol{q}})$ of ${\boldsymbol{v}}$; moreover, \begin{alignat}{1} \widehat{\delta}^{(a)}_{{\boldsymbol{q}}}({\boldsymbol{k}}) = \tfrac{3}{4\pi} \tfrac{1}{(a|{\boldsymbol{k}}|)^{3/2}} J^{}_{3/2}(a|{\boldsymbol{k}}|)e^{-i{\boldsymbol{k}}\cdot{\boldsymbol{q}}}, \end{alignat} with $J_\nu$ a Bessel function of the first kind \cite{AS}. It is easily seen that equations \eqref{eq:MdivEsharpRADF} and \eqref{eq:MdivBsharpRADF} are merely constraints on the initial data; they propagate in time when satisfied initially. Thus, let ${\boldsymbol{d}}\in{\mathbb{R}}^3$ be a non-zero vector, and define the projection onto the direction of ${\boldsymbol{d}}$ by ${\boldsymbol{{\mathcal{P}}}_{{\boldsymbol{d}}}^\|}:=\tfrac{{\boldsymbol{d}}\otimes{\boldsymbol{d}}}{{\boldsymbol{d}}\cdot{\boldsymbol{d}}}$ and its orthogonal complement by ${\boldsymbol{{\mathcal{P}}}_{{\boldsymbol{d}}}^\perp}:=\boldsymbol{\mathcal{I}}-\tfrac{{\boldsymbol{d}}\otimes{\boldsymbol{d}}}{{\boldsymbol{d}}\cdot{\boldsymbol{d}}}$, where $\boldsymbol{\mathcal{I}}$ is the identity operator. Then our initial data have to be chosen such that $\widehat{{\boldsymbol{E}}}^\sharp_{\mbox{\tiny{rad}}}(0,{\boldsymbol{k}};{\boldsymbol{q}})= \widehat{{\boldsymbol{E}}}^\sharp_{\mbox{\tiny{rad}}}(0,{\boldsymbol{k}};{\boldsymbol{q}}){}^\perp_{{\boldsymbol{k}}}$ and $\widehat{{\boldsymbol{B}}}^\sharp_{\mbox{\tiny{rad}}}(0,{\boldsymbol{k}};{\boldsymbol{q}})= \widehat{{\boldsymbol{B}}}^\sharp_{\mbox{\tiny{rad}}}(0,{\boldsymbol{k}};{\boldsymbol{q}}){}^\perp_{{\boldsymbol{k}}}$; i.e., perpendicular to ${\boldsymbol{k}}$, as in the usual Maxwell--Lorentz theory.
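As a sanity check on the displayed formula for $\widehat{\delta}^{(a)}_{{\boldsymbol{q}}}$, one may compare it numerically with the elementary closed form of the Fourier transform of the normalized uniform ball density, in the convention stated above (phase factor omitted; the sampled grid is an arbitrary choice):

```python
import numpy as np
from scipy.special import jv

def ball_ft_bessel(x):
    # (3/4pi) x^{-3/2} J_{3/2}(x), as in the displayed formula, with x = a|k|
    return 3.0/(4.0*np.pi) * x**(-1.5) * jv(1.5, x)

def ball_ft_elementary(x):
    # (2pi)^{-3/2} * 3 (sin x - x cos x)/x^3: direct Fourier transform of the
    # normalized uniform ball density, in the stated convention
    return (2.0*np.pi)**(-1.5) * 3.0*(np.sin(x) - x*np.cos(x))/x**3

x = np.linspace(0.05, 50.0, 1000)
assert np.allclose(ball_ft_bessel(x), ball_ft_elementary(x))
```

The agreement reflects the half-integer-order Bessel identity $J_{3/2}(x)=\sqrt{2/(\pi x)}\,(\sin x/x-\cos x)$.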
Incidentally, recalling that for any vector ${\boldsymbol{a}}$ we have ${\boldsymbol{k}}\times\big( {\boldsymbol{k}}\times {\boldsymbol{a}}\big) = - |{\boldsymbol{k}}|^2 {\boldsymbol{a}}^\perp_{{\boldsymbol{k}}}$, we see that r.h.s.\eqref{eq:MdotEsharpRADF} can be written more succinctly, viz. \begin{alignat}{1} \label{MdotEsharpRADFrhsSIMPLER} - \tfrac{1\;}{|{\boldsymbol{k}}|^2}{\boldsymbol{k}}\times \Big({\boldsymbol{k}}\times {{\mathcal{I}} {\boldsymbol{v}}} \widehat{\delta}^{(a)}_{{\boldsymbol{q}}}({\boldsymbol{k}})\Big) = {{\mathcal{I}} {\boldsymbol{v}}}^\perp_{{\boldsymbol{k}}} \widehat{\delta}^{(a)}_{{\boldsymbol{q}}}({\boldsymbol{k}}). \end{alignat} The remaining two equations, \eqref{eq:MdotEsharpRADF} (with \eqref{MdotEsharpRADFrhsSIMPLER}) and \eqref{eq:MdotBsharpRADF}, can each be interpreted as an inhomogeneous linear first-order transport equation for the Fourier transform of one of the $\sharp$-fields, given ${\boldsymbol{v}}$ and the Fourier transform of the other $\sharp$-field. As such, a general solution can be written as the sum of a special solution to the inhomogeneous equation plus the general solution to the associated homogeneous equation. For our purposes here we do not need to be that general, though. We note that a solution to the source-free Maxwell--Lorentz field equations is also a ${\boldsymbol{q}}$-independent solution to the associated homogeneous equation of our radiation $\sharp$-field equations. Thus we will only be concerned with solutions to our radiation $\sharp$-field equations which can be written as the sum of a source-free Maxwell--Lorentz radiation field (indicated by a superscript ${}^{,f}$ after the $\sharp$ symbol, for ``free'') and a special solution to the inhomogeneous radiation $\sharp$-field initial value problem with vanishing initial data (indicated by a superscript ${}^{,s}$ after the $\sharp$ symbol, for ``source'').
The latter problem we can integrate directly, at least in principle, with the method of characteristics; see e.g. \cite{Kamke}. In this vein, let $u(t,{\boldsymbol{q}})$ stand for any Cartesian component of any of the Fourier transformed $\sharp$-fields. Then each of the two equations \eqref{eq:MdotEsharpRADF} and \eqref{eq:MdotBsharpRADF} is of the form \begin{equation}\label{transportU} \partial_t u(t,{\boldsymbol{q}}) + {\boldsymbol{v}}(t,{\boldsymbol{q}})\!\cdot\!\nabla_{{\boldsymbol{q}}} u(t,{\boldsymbol{q}}) = R(t,{\boldsymbol{q}}), \end{equation} where for the purpose of this discussion ${\boldsymbol{v}}(t,{\boldsymbol{q}})$ and $R(t,{\boldsymbol{q}})$ can be assumed given. Suppose $u(t,{\boldsymbol{q}})$ is a smooth solution to \eqref{transportU}. Then we can picture equation \eqref{transportU} geometrically as stating that for each $(t,{\boldsymbol{q}})\in {\mathbb{R}}_+\times{\mathbb{R}}^3$ the vector $(1,{\boldsymbol{v}}(t,{\boldsymbol{q}}),R(t,{\boldsymbol{q}}))\in{\mathbb{R}}^5$ is orthogonal to the vector $(\partial_t u(t,{\boldsymbol{q}}),\nabla_{{\boldsymbol{q}}}u(t,{\boldsymbol{q}}),-1)\in {\mathbb{R}}^5$, in the sense of the usual Euclidean inner product. The latter vector is the normal at $(t,{\boldsymbol{q}})$ to the graph of $u$, which is the codimension-1 hypersurface $\{(t,{\boldsymbol{q}}, u(t,{\boldsymbol{q}}))\}\subset{\mathbb{R}}^5$. Thus the five-dimensional vector field $(t,{\boldsymbol{q}})\mapsto (1,{\boldsymbol{v}}(t,{\boldsymbol{q}}),R(t,{\boldsymbol{q}}))$ is everywhere tangent to the graph of $u$. 
Therefore the graph of $u$ can be constructed with the help of the characteristic curves, which solve the characteristic equations for \eqref{transportU}, \begin{alignat}{1} \frac{d}{d\tau} T_{\boldsymbol{q}}(\tau) &= 1, \label{charT}\\ \frac{d}{d\tau} {\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)& = {\boldsymbol{v}}(\tau,{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)), \label{charQV}\\ \frac{d}{d\tau} Q_{\boldsymbol{q}}(\tau) &= R(\tau,{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)), \label{charQ} \end{alignat} to be solved as \emph{initial-final value problem}, with $T_{{\boldsymbol{q}}}(t)=t$, ${\boldsymbol{Q}}_{{\boldsymbol{q}}}(t)={\boldsymbol{q}}$, and $Q_{{\boldsymbol{q}}}(0)= 0$. The final value problem for equation \eqref{charT} is trivially solved by $T_{\boldsymbol{q}}(\tau)=\tau$, and \emph{assuming a solution to \eqref{charQV} has been found}, also the initial value problem to \eqref{charQ} is readily solved by quadrature, \begin{alignat}{1} Q_{\boldsymbol{q}}(\tau) = \int_0^\tau R(\theta,{\boldsymbol{Q}}_{\boldsymbol{q}}(\theta)) d\theta. \label{charQsol} \end{alignat} The only nontrivial equation is the de Broglie--Bohm-type guiding equation \eqref{charQV}. It has been shown in \cite{BerndlETal}, and more generally in \cite{TeuTum}, that for a large class of solutions to Schr\"odinger's equation the de Broglie--Bohm velocity field \eqref{eq:dBBv} is regular enough so that \eqref{charQV} with initial or final values is typically well-posed with a global solution $\tau\mapsto {\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)$. We expect that a similar result is provable in the context of the present model; this, however, is a rather technical problem that will have to be addressed elsewhere. 
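A minimal numerical sketch of this characteristics scheme, for a caricature with a constant velocity field and a simple source term (both choices are illustrative, not taken from the model), solved with the final condition ${\boldsymbol{Q}}_{\boldsymbol{q}}(t)={\boldsymbol{q}}$ and the quadrature \eqref{charQsol}:

```python
import numpy as np

c_v = 0.7              # a constant velocity field v(t,q) = c_v  [assumption]
R = lambda t, q: q     # a simple source term R(t,q) = q         [assumption]

def u_char(t, q, n=2001):
    # final-value characteristic through (t,q): Q_q(tau) = q + c_v*(tau - t);
    # then u(t,q) = integral_0^t R(tau, Q_q(tau)) dtau (trapezoidal quadrature)
    tau = np.linspace(0.0, t, n)
    f = R(tau, q + c_v*(tau - t))
    dtau = tau[1] - tau[0]
    return dtau*(f.sum() - 0.5*(f[0] + f[-1]))

# exact solution of  u_t + c_v u_q = q  with  u(0,.) = 0  is  u = q t - c_v t^2/2
t0, q0 = 1.3, 0.4
assert abs(u_char(t0, q0) - (q0*t0 - c_v*t0**2/2)) < 1e-9
```

In the model proper the velocity field is of de Broglie--Bohm type and the characteristics must themselves be computed numerically, but the bookkeeping is the same.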
Now note that if $u(t,{\boldsymbol{q}})$ is a smooth solution of \eqref{transportU} that vanishes initially, then for each fixed ${\boldsymbol{q}}$ the function $U_{\boldsymbol{q}}(t):= u(t,{\boldsymbol{Q}}_{\boldsymbol{q}}(t)) -Q_{\boldsymbol{q}}(t)$ satisfies $\frac{d}{dt}U_{\boldsymbol{q}}(t)= 0$. Thus $U_{\boldsymbol{q}}(t) = U_{\boldsymbol{q}}(0)$ for all $t$; but $u(0,{\boldsymbol{q}})=0$ and $Q_{\boldsymbol{q}}(0)=0$, so $U_{\boldsymbol{q}}(t)\equiv 0$, and thus the solution $u(t,{\boldsymbol{q}})$ to \eqref{transportU} for vanishing initial data is given by $u(t,{\boldsymbol{q}}) = Q_{\boldsymbol{q}}(t)$, or explicitly (recycling $\tau$ now) \begin{equation}\label{transportUsol} u(t,{\boldsymbol{q}}) =\int_0^t R(\tau,{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)) d\tau. \end{equation} We pause to re-emphasize this important finding: \medskip \noindent \emph{The de Broglie--Bohm-type final value problem \eqref{charQV}, with ${\boldsymbol{Q}}_{\boldsymbol{q}}(t)={\boldsymbol{q}}$, is an integral part of the method of characteristics to solve the inhomogeneous transport equations \eqref{eq:MdotEsharpRADF} and \eqref{eq:MdotBsharpRADF} for the Fourier-transformed radiation $\sharp$-fields.} \medskip Armed with \eqref{transportUsol} we conclude that \eqref{eq:MdotEsharpRADF} (with \eqref{MdotEsharpRADFrhsSIMPLER}) and \eqref{eq:MdotBsharpRADF}, given vanishing initial data, are formally solved by \begin{alignat}{1} &\hspace{-1truecm} \widehat{{\boldsymbol{E}}}^{\sharp,s}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{k}};{\boldsymbol{q}}) = \label{eq:MdotEsharpRADFsol} i c {\boldsymbol{k}}\times\!\! \int_0^t\!\! \widehat{{\boldsymbol{B}}}^{\sharp,s}_{\mbox{\tiny{rad}}}(\tau,{\boldsymbol{k}},{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)) d\tau + 4\pi e \! \int_0^t \!\!
{\mathcal{I}}{\boldsymbol{v}}^\perp_{{\boldsymbol{k}}}(\tau,{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)) \widehat{\delta}^{(a)}_{{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)}({\boldsymbol{k}})d\tau ,\\ &\hspace{-1truecm} \widehat{{\boldsymbol{B}}}^{\sharp,s}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{k}};{\boldsymbol{q}}) = - i c{\boldsymbol{k}}\times \int_0^t \widehat{{\boldsymbol{E}}}^{\sharp,s}_{\mbox{\tiny{rad}}} (\tau,{\boldsymbol{k}},{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)) d\tau. \label{eq:MdotBsharpRADFsol} \end{alignat} Now substituting r.h.s.\eqref{eq:MdotBsharpRADFsol} for $\widehat{{\boldsymbol{B}}}^{\sharp,s}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{k}};{\boldsymbol{q}})$ at r.h.s.\eqref{eq:MdotEsharpRADFsol}, we find \begin{alignat}{1}\hspace{-.7truecm} \widehat{{\boldsymbol{E}}}^{\sharp,s}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{k}};{\boldsymbol{q}}) &= \label{eq:MdotEsharpRADFsolsol} - c^2 |{\boldsymbol{k}}|^2 \int_0^t \int_0^\tau \widehat{{\boldsymbol{E}}}^{\sharp,s}_{\mbox{\tiny{rad}}} (\theta,{\boldsymbol{k}},{\boldsymbol{Q}}_{{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)}(\theta)) d\theta d\tau \\ \notag & \hspace{5truecm} + 4\pi e \int_0^t\!\! {\mathcal{I}} {\boldsymbol{v}}^\perp_{{\boldsymbol{k}}} (\tau,{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)) \widehat{\delta}^{(a)}_{{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)}({\boldsymbol{k}})d\tau, \end{alignat} and similarly we find \begin{alignat}{1}\hspace{-.7truecm} \widehat{{\boldsymbol{B}}}^{\sharp,s}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{k}};{\boldsymbol{q}}) &= \label{eq:MdotBsharpRADFsolsol} - c^2 |{\boldsymbol{k}}|^2 \! \int_0^t \int_0^\tau \widehat{{\boldsymbol{B}}}^{\sharp,s}_{\mbox{\tiny{rad}}} (\theta,{\boldsymbol{k}},{\boldsymbol{Q}}_{{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)}(\theta)) d\theta d\tau \\ \notag & \quad - i 4\pi e c{\boldsymbol{k}}\times\!\! 
\int_0^t \int_0^\tau {\mathcal{I}} {\boldsymbol{v}}^\perp_{{\boldsymbol{k}}} (\theta,{\boldsymbol{Q}}_{{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)}(\theta)) \widehat{\delta}^{(a)}_{{\boldsymbol{Q}}_{{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)}(\theta)}({\boldsymbol{k}})d\theta d\tau. \end{alignat} In going from \eqref{eq:MdotEsharpRADFsol} and \eqref{eq:MdotBsharpRADFsol} to \eqref{eq:MdotEsharpRADFsolsol} and \eqref{eq:MdotBsharpRADFsolsol} we used that by the constraint equations we have ${\boldsymbol{k}}\times \Big({\boldsymbol{k}}\times \widehat{{\boldsymbol{B}}}^{\sharp,s}_{\mbox{\tiny{rad}}}\Big) = - |{\boldsymbol{k}}|^2\widehat{{\boldsymbol{B}}}^{\sharp,s}_{\mbox{\tiny{rad}}}$ and similarly ${\boldsymbol{k}}\times\Big( {\boldsymbol{k}}\times \widehat{{\boldsymbol{E}}}^{\sharp,s}_{\mbox{\tiny{rad}}}\Big) = - |{\boldsymbol{k}}|^2\widehat{{\boldsymbol{E}}}^{\sharp,s}_{\mbox{\tiny{rad}}}$; furthermore, we used that ${\boldsymbol{k}}\times{\boldsymbol{a}}\perp{\boldsymbol{k}}$ for any vector ${\boldsymbol{a}}$, so that ${\boldsymbol{k}}\times\big({\boldsymbol{k}}\times\big( {\boldsymbol{k}}\times {\boldsymbol{a}}\big)\big) = - |{\boldsymbol{k}}|^2 {\boldsymbol{k}}\times {\boldsymbol{a}}$. Equations \eqref{eq:MdotEsharpRADFsolsol} and \eqref{eq:MdotBsharpRADFsolsol} are fixed point equations for the Fourier-transformed radiation $\sharp$-fields; they are decoupled under the assumption that the de Broglie--Bohm velocity field is given, and so then, by corollary, are the trajectories $t\mapsto {\boldsymbol{Q}}_{\boldsymbol{q}}(t)$. These equations appear forbiddingly complicated, with iterated positions ${{\boldsymbol{Q}}_{{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)}(\theta)}$ inside iterated integrals. Yet we note that the trajectories form a flow with the group property, so for $\theta<\tau$ we have ${{\boldsymbol{Q}}_{{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)}(\theta)} = {\boldsymbol{Q}}_{\boldsymbol{q}}(\theta)$; recall also that ${\boldsymbol{Q}}_{\boldsymbol{q}}(t) = {\boldsymbol{q}}$. 
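The group property of the trajectories invoked here, ${\boldsymbol{Q}}_{{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)}(\theta) = {\boldsymbol{Q}}_{\boldsymbol{q}}(\theta)$, can be checked explicitly for a hypothetical linear velocity field (our toy example), where the final-value flow map is known in closed form:

```python
import math

# Final-value flow map for the toy guiding field v(tau, x) = omega * x:
# Phi(theta; tau, x) solves dQ/ds = omega * Q with Q(tau) = x, evaluated at s = theta.
def flow(theta, tau, x, omega=0.7):
    return x * math.exp(omega * (theta - tau))

t, q = 1.0, 1.3
tau, theta = 0.6, 0.25           # theta < tau < t
Q_tau = flow(tau, t, q)          # Q_q(tau), with final value Q_q(t) = q
lhs = flow(theta, tau, Q_tau)    # Q_{Q_q(tau)}(theta): restart from (tau, Q_q(tau))
rhs = flow(theta, t, q)          # Q_q(theta): directly from the final datum at t
```

Restarting the final value problem from any intermediate point of the trajectory reproduces the same curve, which is exactly what collapses the iterated positions inside the iterated integrals.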
Thus we see that equations \eqref{eq:MdotEsharpRADFsolsol} and \eqref{eq:MdotBsharpRADFsolsol} are identical to the two fixed point problems \begin{alignat}{1}\hspace{-.7truecm} \widehat{{\boldsymbol{E}}}^{\sharp,s}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{k}},{\boldsymbol{Q}}_{\boldsymbol{q}}(t)) &= \label{eq:MdotEsharpRADFsolsolsol} - c^2 |{\boldsymbol{k}}|^2 \int_0^t \int_0^\tau \widehat{{\boldsymbol{E}}}^{\sharp,s}_{\mbox{\tiny{rad}}} (\theta,{\boldsymbol{k}},{\boldsymbol{Q}}_{\boldsymbol{q}}(\theta)) d\theta d\tau \\ \notag & \quad + 4\pi e \int_0^t\! {\mathcal{I}} {\boldsymbol{v}}^\perp_{{\boldsymbol{k}}} (\tau,{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)) \widehat{\delta}^{(a)}_{{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)}({\boldsymbol{k}})d\tau ,\\ \widehat{{\boldsymbol{B}}}^{\sharp,s}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{k}},{\boldsymbol{Q}}_{\boldsymbol{q}}(t)) &= \label{eq:MdotBsharpRADFsolsolsol} - c^2 |{\boldsymbol{k}}|^2 \int_0^t \int_0^\tau \widehat{{\boldsymbol{B}}}^{\sharp,s}_{\mbox{\tiny{rad}}} (\theta,{\boldsymbol{k}},{\boldsymbol{Q}}_{\boldsymbol{q}}(\theta)) d\theta d\tau \\ \notag & \quad - i 4\pi e c{\boldsymbol{k}}\times \int_0^t \int_0^\tau {\mathcal{I}} {\boldsymbol{v}}^\perp_{{\boldsymbol{k}}} (\theta,{\boldsymbol{Q}}_{\boldsymbol{q}}(\theta)) \widehat{\delta}^{(a)}_{{\boldsymbol{Q}}_{\boldsymbol{q}}(\theta)}({\boldsymbol{k}})d\theta d\tau , \end{alignat} and these are twice integrated forced classical harmonic oscillator equations. Indeed, each Cartesian component of \eqref{eq:MdotEsharpRADFsolsolsol} and \eqref{eq:MdotBsharpRADFsolsolsol} is of the form \begin{equation}\label{solsolsolEQ} f(t) + \omega^2 \int_0^t \int_0^\tau f(\theta) d\theta d\tau = g(t) \end{equation} with $\omega = c|{\boldsymbol{k}}|\in {\mathbb{R}}_+$, and with $g(0)=0$; in the equations for the magnetic components, also $g'(0)=0$. 
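Structurally, each component equation of the form \eqref{solsolsolEQ} becomes, upon rewriting $\int_0^t\!\int_0^\tau f(\theta)\,d\theta\,d\tau = \int_0^t (t-\theta)f(\theta)\,d\theta$, a Volterra integral equation of the second kind, and since its kernel vanishes on the diagonal it can be solved numerically by explicit forward marching. A minimal sketch (toy data of our own choosing):

```python
# Discretize f(t) + omega^2 * int_0^t (t - s) f(s) ds = g(t) on a uniform grid
# and march forward: the kernel factor (t_m - t_m) = 0 on the diagonal, so each
# new value f_m is given explicitly by a trapezoidal sum over earlier points.
def solve_volterra(g, omega, T, n):
    h = T / n
    ts = [k * h for k in range(n + 1)]
    f = [g(0.0)]                                  # f(0) = g(0)
    for m in range(1, n + 1):
        acc = 0.5 * (ts[m] - ts[0]) * f[0]        # trapezoid endpoint weight
        for j in range(1, m):
            acc += (ts[m] - ts[j]) * f[j]         # interior weights
        f.append(g(ts[m]) - omega**2 * h * acc)   # diagonal kernel term is zero
    return ts, f

# Toy data: omega = 1 and g(t) = t^2, which satisfies g(0) = 0 as required.
ts, f = solve_volterra(lambda t: t * t, omega=1.0, T=1.0, n=1000)
```

The same marching scheme applies componentwise, with $\omega = c|{\boldsymbol{k}}|$ and the corresponding inhomogeneity $g$.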
It follows that also $f(0)=0$ (which entered the derivation of these equations, of course), and $f'(0)=g'(0)$; so for the magnetic equation it follows that $f'(0)=0$ as well, while for the electric equation in general $f'(0)$ need not vanish. The associated homogeneous problem has only the vanishing solution, so the solution to the inhomogeneous problem \eqref{solsolsolEQ} is \begin{equation}\label{solsolsolEQinh} f(t) = \int_0^t g'(\tau) \cos(|{\boldsymbol{k}}|c(t- \tau))d\tau + \int_0^t g''(\tau) \tfrac{\sin(|{\boldsymbol{k}}|c(t- \tau))}{|{\boldsymbol{k}}|c} d\tau . \end{equation} Moreover, using again that ${\boldsymbol{Q}}_{\boldsymbol{q}}(t) = {\boldsymbol{q}}$, and also recalling that ${\mathcal{I}} {\boldsymbol{v}} (\tau,{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)) =\frac{d}{d\tau}{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)$ (see \eqref{charQV}), we note that only the component of $\tfrac{d}{d\tau}{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)$ perpendicular to ${\boldsymbol{k}}$ enters at r.h.s.\eqref{eq:MdotEsharpRADFsolsolsol} and r.h.s.\eqref{eq:MdotBsharpRADFsolsolsol}. We thus set ${\big(\tfrac{d}{d\tau}{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)\big)^{}}_{{\boldsymbol{k}}}^\|(\tau) := {\boldsymbol{{\mathcal{P}}}_{{\boldsymbol{k}}}^\|} \cdot \frac{d}{d\tau}{{\boldsymbol{Q}}}_{\boldsymbol{q}}(\tau)$ and ${\big(\tfrac{d}{d\tau}{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)\big)^{}}_{{\boldsymbol{k}}}^\perp(\tau) := {\boldsymbol{{\mathcal{P}}}_{{\boldsymbol{k}}}^\perp} \cdot \frac{d}{d\tau}{{\boldsymbol{Q}}}_{\boldsymbol{q}}(\tau)$. With these abbreviations, for the Fourier-transformed electric radiation $\sharp$-field we obtain \begin{alignat}{1}\label{eq:MdotEsharpRADFfinalSOLa} &\hspace{-0.7truecm} \widehat{{\boldsymbol{E}}}^{\sharp,s}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{k}};{\boldsymbol{q}}) = e 4\pi \! \int_0^t\! 
\cos\!\big(|{\boldsymbol{k}}|c(t- \tau)\big){\mathcal{I}} {\big(\tfrac{d}{d\tau}{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)\big)^{}}_{{\boldsymbol{k}}}^\perp(\tau) \widehat{\delta}^{(a)}_{{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)}({\boldsymbol{k}}) d\tau \hspace{-5pt} \\ \notag &\qquad\qquad\ + e 4\pi \int_0^t \tfrac{\sin\!\big(|{\boldsymbol{k}}|c(t- \tau)\big)}{c|{\boldsymbol{k}}|}\tfrac{d}{d\tau} \Big[ {\mathcal{I}}{\big(\tfrac{d}{d\tau}{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)\big)^{}}_{{\boldsymbol{k}}}^\perp(\tau) \widehat{\delta}^{(a)}_{{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)}({\boldsymbol{k}})\Big] d\tau , \end{alignat} and for the Fourier-transformed magnetic radiation $\sharp$-field we obtain \begin{alignat}{1}\label{eq:MdotBsharpRADFfinalSOLa} &\hspace{-0.7truecm} \widehat{{\boldsymbol{B}}}^{\sharp,s}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{k}};{\boldsymbol{q}}) = - i 4\pi e c{\boldsymbol{k}}\times\!\! \int_0^t\! \cos\!\big(|{\boldsymbol{k}}|c(t- \tau)\big) \!\int_0^\tau \!\!{\mathcal{I}} {\big(\tfrac{d}{d\theta}{\boldsymbol{Q}}_{\boldsymbol{q}}(\theta)\big)^{}}_{{\boldsymbol{k}}}^\perp(\theta) \widehat{\delta}^{(a)}_{{\boldsymbol{Q}}_{\boldsymbol{q}}(\theta)}({\boldsymbol{k}})d\theta d\tau \\ \notag &\qquad \qquad\ - i 4\pi e c{\boldsymbol{k}}\times \int_0^t \tfrac{\sin\!\big(|{\boldsymbol{k}}|c(t- \tau)\big)}{|{\boldsymbol{k}}|c} {\mathcal{I}}{\big(\tfrac{d}{d\tau}{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)\big)^{}}_{{\boldsymbol{k}}}^\perp(\tau) \widehat{\delta}^{(a)}_{{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)}({\boldsymbol{k}}) d\tau . \end{alignat} Finally, integrating by parts in the second integral at r.h.s.\eqref{eq:MdotEsharpRADFfinalSOLa} and in the first integral at r.h.s.\eqref{eq:MdotBsharpRADFfinalSOLa}, we obtain \begin{alignat}{1}\label{eq:MdotEsharpRADFfinalSOLb} & \hspace{-0.7truecm} \widehat{{\boldsymbol{E}}}^{\sharp,s}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{k}};{\boldsymbol{q}}) = e 8\pi \int_0^t\! 
\cos\!\big(|{\boldsymbol{k}}|c(t- \tau)\big) {\mathcal{I}}{\big(\tfrac{d}{d\tau}{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)\big)^{}}_{{\boldsymbol{k}}}^\perp(\tau) \widehat{\delta}^{(a)}_{{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)}({\boldsymbol{k}}) d\tau \hspace{-10pt} \\ \notag &\qquad\qquad\ + e 4\pi \tfrac{\sin\!\big(|{\boldsymbol{k}}|ct\big)}{|{\boldsymbol{k}}|c} {\mathcal{I}}{\big(\tfrac{d}{d\tau}{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)\big)^{}}_{{\boldsymbol{k}}}^\perp(\tau)\Big|_{\tau=0} \widehat{\delta}^{(a)}_{{\boldsymbol{Q}}_{\boldsymbol{q}}(0)}({\boldsymbol{k}}) \end{alignat} and \begin{alignat}{1}\label{eq:MdotBsharpRADFfinalSOLb} \hspace{-0.7truecm} \widehat{{\boldsymbol{B}}}^{\sharp,s}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{k}};{\boldsymbol{q}}) = - i e 8\pi c {\boldsymbol{k}}\times \int_0^t \tfrac{\sin\!\big(|{\boldsymbol{k}}|c(t- \tau)\big)}{|{\boldsymbol{k}}|c} {\mathcal{I}}{\big(\tfrac{d}{d\tau}{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)\big)^{}}_{{\boldsymbol{k}}}^\perp(\tau) \widehat{\delta}^{(a)}_{{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)}({\boldsymbol{k}}) d\tau . \end{alignat} We note that for the purpose of computing the radiation Hamiltonian and the Poynting part of the interaction Hamiltonian, the Fourier representation is just fine, due to the Plancherel and Parseval theorems. However, for the interaction Hamiltonian and the velocity field we also need the vector potential. 
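The appeal to the Plancherel and Parseval theorems can be illustrated in a discrete toy setting: quadratic functionals such as field energies can be evaluated equally well in either representation. A minimal sketch with a hand-rolled discrete Fourier transform (not tied to this paper's conventions):

```python
import cmath

def dft(x):
    """Plain DFT: X[k] = sum_n x[n] * exp(-2*pi*i*k*n/N)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

# Parseval for the DFT: sum_n |x[n]|^2 = (1/N) sum_k |X[k]|^2, so quadratic
# quantities (the discrete stand-in for field energies) agree in both pictures.
x = [0.3, -1.2, 0.7, 2.0, -0.5, 0.1, 1.1, -0.4]
X = dft(x)
energy_space = sum(abs(v) ** 2 for v in x)
energy_fourier = sum(abs(V) ** 2 for V in X) / len(x)
```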
We have $\widehat{{\boldsymbol{B}}}^{\sharp,s}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{k}};{\boldsymbol{q}}) = i{\boldsymbol{k}}\times \widehat{{\boldsymbol{A}}}^{\sharp,s}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{k}};{\boldsymbol{q}})$ and $\widehat{{\boldsymbol{E}}}^{\sharp,s}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{k}};{\boldsymbol{q}}) = -\frac1c \frac{\partial}{\partial t} \widehat{{\boldsymbol{A}}}^{\sharp,s}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{k}};{\boldsymbol{q}}) - \frac1c \bigl({\boldsymbol{v}}(t,{\boldsymbol{q}})\!\cdot\!\nabla_{{\boldsymbol{q}}}\bigr){\boldsymbol{A}}^{\sharp,s}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{k}};{\boldsymbol{q}})$, where \begin{alignat}{1}\label{eq:AsharpRADFfinal} \hspace{-0.7truecm} \widehat{{\boldsymbol{A}}}^{\sharp,s}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{k}};{\boldsymbol{q}}) = - e 8\pi \int_0^t \tfrac{\sin\!\big(|{\boldsymbol{k}}|c(t- \tau)\big)}{|{\boldsymbol{k}}|} {\mathcal{I}}{\big(\tfrac{d}{d\tau}{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)\big)^{}}_{{\boldsymbol{k}}}^\perp(\tau) \widehat{\delta}^{(a)}_{{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)}({\boldsymbol{k}}) d\tau . \end{alignat} This concludes our solution of the Fourier-transformed radiation $\sharp$-field equations, the ${\boldsymbol{v}}$ field considered given. The radiation $\sharp$-vector potential itself can now be obtained by inverse Fourier transform, viz. ${f}({\boldsymbol{s}}) := \frac{1}{(2\pi)^{3/2}}\int e^{i{\boldsymbol{k}}\cdot{\boldsymbol{s}}}\hat{f}({\boldsymbol{k}})d^3k$. We can simplify the task by noting that due to the linearity of the $\sharp$-field equations, we can either solve them with regularized point charge sources in place, or we can solve them with point charge sources and regularize those solutions. Thus it suffices to compute the inverse Fourier transform of \eqref{eq:AsharpRADFfinal} for when $a\to 0$, as long as we remember to average the result over the ball of radius $a$ centered at ${\boldsymbol{q}}$. 
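The linearity argument just invoked can be illustrated discretely: if a (hypothetical) solution operator acts by convolution with a Green-type kernel, then mollifying the source first, or mollifying the solution afterwards, gives the same result, by associativity of convolution. A minimal sketch with stand-in kernels of our own choosing:

```python
# Stand-in discrete kernels (hypothetical, for illustration only):
# 'green' plays the role of the linear solution operator's kernel,
# 'mollifier' the role of averaging over the ball of radius a.
def convolve(a, b):
    """Full discrete convolution of two sequences."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

green = [0.5, 1.0, 0.5]
mollifier = [0.25, 0.5, 0.25]
source = [0.0, 0.0, 1.0, 0.0]   # "point" source

solve_then_mollify = convolve(mollifier, convolve(green, source))
mollify_then_solve = convolve(green, convolve(mollifier, source))
```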
Since \begin{equation} \lim_{a\to 0} \widehat{\delta}^{(a)}_{{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)}({\boldsymbol{k}}) = (2\pi)^{-3/2} e^{-i{\boldsymbol{k}}\,\cdot\,{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)}, \end{equation} for $a\to 0$ the Fourier-transformed sourced radiation $\sharp$-field vector potential becomes \begin{alignat}{1}\label{eq:AsharpRADFfinalNULLa} \hspace{-0.7truecm} \widehat{{\boldsymbol{A}}}^{\sharp,s}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{k}};{\boldsymbol{q}}) = - e \tfrac{8\pi}{(2\pi)^{3/2}} \int_0^t \tfrac{\sin\!\big(|{\boldsymbol{k}}|c(t- \tau)\big)}{|{\boldsymbol{k}}|} {\mathcal{I}}{\big(\tfrac{d}{d\tau}{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)\big)^{}}_{{\boldsymbol{k}}}^\perp(\tau) e^{-i{\boldsymbol{k}}\cdot {\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)} d\tau . \end{alignat} First we carry out the angular integrations over ${\mathbb{S}}^2$ in ${\boldsymbol{k}}$ space (i.e. $|{\boldsymbol{k}}|$ is fixed). We introduce spherical coordinates in ${\boldsymbol{k}}$ space, with ${\boldsymbol{n}}_{\tau,{\boldsymbol{s}}}^{} =\frac{{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau) -{\boldsymbol{s}}}{|{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau) -{\boldsymbol{s}}|}$ as (north) polar vector for $|{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau) -{\boldsymbol{s}}|\neq 0$, $\psi$ the polar angle counting from the north pole (i.e. equal to $\pi/2-$latitude), and with $\varphi$ the azimuth about ${{\boldsymbol{n}}_{\tau,{\boldsymbol{s}}}^{}}$ (w.r.t. an arbitrary reference azimuth); the cases of degeneracy can be handled by taking limits. We find \begin{alignat}{1} \displaystyle &\hspace{-30pt} \tfrac{1}{4\pi}\int_{{\mathbb{S}}^2}\!\! 
{\big(\tfrac{d}{d\tau}{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)\big)^{}}_{{\boldsymbol{k}}}^\perp(\tau) e^{- i {\boldsymbol{k}}\cdot({\boldsymbol{Q}}_{{\boldsymbol{q}}}(\tau) -{\boldsymbol{s}})} \sin\psi \mathrm{d}\psi\mathrm{d}\varphi = \label{eq:StwoINTofFourierE} {\big(\tfrac{d}{d\tau}{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)\big)^{}}_{{\boldsymbol{n}}_{\tau,{\boldsymbol{s}}}}^\perp\!(\tau) \,\tfrac{\sin\left(|{\boldsymbol{k}}||{\boldsymbol{Q}}_q(\tau) -{\boldsymbol{s}}|\right)}{|{\boldsymbol{k}}||{\boldsymbol{Q}}_q(\tau) -{\boldsymbol{s}}|} \\ \nonumber & +\displaystyle \left({\big(\tfrac{d}{d\tau}{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)\big)^{}} _{{\boldsymbol{n}}_{\tau,{\boldsymbol{s}}}}^\perp\!(\tau)-2{\big(\tfrac{d}{d\tau}{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)\big)^{}}_{{\boldsymbol{n}}_{\tau,{\boldsymbol{s}}}}^\|\!(\tau)\right)\! \left(\tfrac{{\cos\left(|{\boldsymbol{k}}||{\boldsymbol{Q}}_q(\tau) -{\boldsymbol{s}}|\right)}}{|{\boldsymbol{k}}|^2|{\boldsymbol{Q}}_q(\tau) -{\boldsymbol{s}}|^2} - \tfrac{\sin\left(|{\boldsymbol{k}}||{\boldsymbol{Q}}_q(\tau) -{\boldsymbol{s}}|\right)}{|{\boldsymbol{k}}|^3|{\boldsymbol{Q}}_q(\tau) -{\boldsymbol{s}}|^3}\right)\!. \end{alignat} Note that ${\big(\tfrac{d}{d\tau}{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)\big)^{}}_{{\boldsymbol{n}}_{\tau,{\boldsymbol{s}}}}^\|\!(\tau)$ is invariant under ${\boldsymbol{n}}_{\tau,{\boldsymbol{s}}}\to - {\boldsymbol{n}}_{\tau,{\boldsymbol{s}}}$. Next we observe that in the ensuing $|{\boldsymbol{k}}|$ integrations a factor $1/|{\boldsymbol{k}}|^2$ cancels vs. the factor $|{\boldsymbol{k}}|^2$ coming from $\mathrm{d}^3k = |{\boldsymbol{k}}|^2\sin\psi \mathrm{d}\psi\mathrm{d}\varphi\mathrm{d}{|{\boldsymbol{k}}|}$, leading to the integral \begin{alignat}{1} \hspace{-30pt} \tfrac{2}{\pi} \displaystyle \int_0^\infty\!\!\!\left( \cos\left(|{\boldsymbol{k}}|R\right) - \tfrac{\sin\left(|{\boldsymbol{k}}|R\right)}{|{\boldsymbol{k}}|R} \right)\! 
\tfrac{\sin\bigl(|{\boldsymbol{k}}|c(t-\tau) \bigr)}{|{\boldsymbol{k}}|}\mathrm{d}{|{\boldsymbol{k}}|} = - \tfrac{c(t-\tau)}{R}\boldsymbol{1}^{}_{\{R > c(t-\tau)\}} - \tfrac\pi4 \delta^{}_{R,c(t-\tau)}, \label{eq:Fzwei} \end{alignat} where $\delta^{}_{a,b}$ is the Kronecker $\delta$, not the Dirac $\delta$ (see integrals 3.741\#2 and \#3 on p.414 in \cite{GradRyzh}), and the well-known completeness relation \begin{alignat}{1} \hspace{-20pt} \tfrac{2}{\pi} \displaystyle \int_0^\infty\!\! \sin\left(|{\boldsymbol{k}}|R\right) \sin\bigl(|{\boldsymbol{k}}|c(t-\tau) \bigr) d|{\boldsymbol{k}}| = \delta\!\left(c(t-\tau)-R\right) - \delta\! \left(c(t-\tau)+R\right), \label{eq:Feins} \end{alignat} with the abbreviation $R= |{\boldsymbol{Q}}_q(\tau) -{\boldsymbol{s}}|$. Note that (\ref{eq:Fzwei}) is absolutely bounded by $1$. \newpage In the subsequent $\tau$-integration only the retarded $\delta$ function contributes. And so the sourced radiation $\sharp$-field vector potential (in the limit $a\to 0$) reads \begin{alignat}{1}\label{eq:AsharpRADfinalNULLa} \hspace{-0.8truecm} {{\boldsymbol{A}}}^{\sharp,s}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{s}};{\boldsymbol{q}}) = &- 2e \int_0^{t} \tfrac{{\mathcal{I}}{\big(\tfrac{d}{d\tau}{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)\big)^{}}_{{\boldsymbol{n}}_{\tau,{\boldsymbol{s}}}}^\perp\!} {R} \delta\!\left(c(t-\tau)- R\right) d\tau \\ \notag & + 2e \int_0^{t} {\mathcal{I}}\left({\big(\tfrac{d}{d\tau}{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)\big)^{}} _{{\boldsymbol{n}}_{\tau,{\boldsymbol{s}}}}^\perp\!-2{\big(\tfrac{d}{d\tau}{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)\big)^{}}_{{\boldsymbol{n}}_{\tau,{\boldsymbol{s}}}}^\|\!\right)\! \tfrac{c(t-\tau)}{R}\boldsymbol{1}^{}_{\{{R} > c(t-\tau)\}} d\tau. 
\end{alignat} In terms of the ``retarded (instant of) time'' $t^{\mathrm{ret}}(t,{\boldsymbol{s}};{\boldsymbol{q}})$, defined implicitly as solution of $c(t-t^{\mathrm{ret}}) = |{\boldsymbol{s}}-{\boldsymbol{Q}}_{{\boldsymbol{q}}}(t^{\mathrm{ret}})|$, where we also set ${\boldsymbol{Q}}_q(t^{\mathrm{ret}})= {\boldsymbol{Q}}_q(0)$ if $t^{\mathrm{ret}}<0$, \eqref{eq:AsharpRADfinalNULLa} is \begin{alignat}{1}\label{eq:AsharpRADfinalNULLb} \hspace{-0.8truecm} {{\boldsymbol{A}}}^{\sharp,s}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{s}};{\boldsymbol{q}}) = &- 2\tfrac{e}{c} \tfrac{{\mathcal{I}}{\big(\tfrac{d}{d\tau}{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)\big)^{}}_{{\boldsymbol{n}}_{\tau,{\boldsymbol{s}}}}^\perp\!} {|{\boldsymbol{Q}}_q(\tau) -{\boldsymbol{s}}|} \Big|^{}_{\tau = t^{\mathrm{ret}}(t,{\boldsymbol{s}};{\boldsymbol{q}})} \\ \notag & + 2\tfrac{e}{c} \int_0^{t}{\mathcal{I}} \left({\big(\tfrac{d}{d\tau}{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)\big)^{}}_{{\boldsymbol{n}}_{\tau,{\boldsymbol{s}}}}^\perp\! -2{\big(\tfrac{d}{d\tau}{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)\big)^{}}_{{\boldsymbol{n}}_{\tau,{\boldsymbol{s}}}}^\|\!\right)\! \tfrac{c(t-\tau)}{|{\boldsymbol{Q}}_q(\tau) -{\boldsymbol{s}}|}\boldsymbol{1}^{}_{\{\tau > t^{\mathrm{ret}}(t,{\boldsymbol{s}};{\boldsymbol{q}})\}} d\tau . \end{alignat} To obtain the $\sharp$-field vector potential for the $\sharp$-fields with ball sources, we need to convolute \eqref{eq:AsharpRADfinalNULLa}, \eqref{eq:AsharpRADfinalNULLb} w.r.t. its ${\boldsymbol{q}}$ variable with the normalized characteristic function of the ball of radius $a$ centered at ${\boldsymbol{q}}$. \smallskip \medskip \noindent \textbf{Remark}: \emph{The first line at r.h.s. 
\eqref{eq:AsharpRADfinalNULLb} is a counterpart of a Lien\'ard--Wiechert-type potential, describing the field at time $t$ and space location ${\boldsymbol{s}}$, given the location ${\boldsymbol{q}}$ of the electron at time $t$, by looking for the intersection of the electron trajectory with the backward light cone of vertex ${\boldsymbol{s}}$ at time $t$. The second line at r.h.s. \eqref{eq:AsharpRADfinalNULLb} has no analog in the Maxwell--Lorentz field theory. It describes a contribution from \emph{outside that light cone}.} \medskip \noindent \textbf{Remark}: \emph{Evaluation requires the input of the de Broglie--Bohm motions, and then the quadratures need to be carried out. Unfortunately, due to the short shrift that the de Broglie--Bohm (dBB) theory has received in the orthodox quantum-mechanical literature, studies of the dBB motions have been a backwater of research in quantum mechanics. The first calculations seem to have been done by Phillipidis, Dewdney, and Hiley} \cite{PhillipidisDewdneyHiley}, \emph{for the very idealized two-dimensional caricature of the important double-slit experiment; see also} \cite{BoHi}. \emph{Remarkably, the very recent reports} \cite{weakM} \emph{on photon trajectories using so-called ``weak measurements'' bear a striking resemblance to the idealized dBB trajectories computed in} \cite{PhillipidisDewdneyHiley}; \emph{for a scholarly assessment, see} \cite{ShellySEP}. \emph{Moreover, as a mathematical tool for computing solutions of Schr\"odinger's equation, dBB motions have been put to work in the computational chemistry literature, see} \cite{Wyatt}, \cite{OriolsMompart}. 
\emph{The dBB motions we need as input should be encouragement enough for computationally skilled readers to begin computing them!} \smallskip \newpage \noindent\textbf{3.5.1.iii: The joint initial value problem for $\sharp$-field and wave function} \smallskip We would like to show that an initial state consisting of an excited stationary state with angular momentum eigenvalue $\ell >0$, complemented with an ``incoming radiation ${}^\sharp$-field,'' will lead to an evolution which involves the emission of electromagnetic radiation with the correct Rydberg frequencies, and the transition of the atomic $\widetilde\Psi$ to a less excited level, ultimately to the ground state wave function, terminating the emission. It suffices to focus on a representative case, the Lyman-$\alpha$ line. We stipulate: \begin{quote} {The initial state is a \emph{real} eigenfunction $\widetilde\Psi(0,{\boldsymbol{q}}) = \widetilde\Psi_{2,1,m,+}(0,{\boldsymbol{q}})$ complemented by the electrostatic $\sharp$-field of electron and proton, and by an incoming vacuum radiation field (a solution to the source-free Maxwell--Lorentz field equations) of finite electromagnetic energy and momentum (space integral of its Poynting vector field).} \end{quote} \smallskip \noindent \textbf{Remark}: \emph{One should think of such an incoming radiation field as a so-called Gaussian beam, traveling along the $z$-direction, the $z$ Fourier transform centered on the wave number $\omega_{2,1}/c$, and with a spread of (say) $\triangle(k_z) = \sigma_{k_z} / \sqrt{2} = 10^{-3}\omega_{2,1}/c$ in Fourier space. This corresponds to a spread of $\triangle(z) = 1/(\sigma_{k_z}\sqrt{2}) = 500 c/\omega_{2,1} \approx 80 \lambda_{2,1}$ of the pulse along the $z$ direction. Now, $\lambda_{2,1}\approx 1.2 \times 10^{-7}$m, and the Bohr radius $a_{\mbox{\tiny{Bohr}}}\approx 5.3\times 10^{-11}$m, so the spread of the beam in $z$ direction is $\approx 1.8 \times 10^5$ Bohr radii, or $\approx 10,000$ hydrogen diameters. 
In the lateral directions we can imagine the pulse to be approximately Gaussian with a much bigger spread (also slowly spreading), say $10^{-3}$m, but this will not be used. If we assume that initially the Gaussian $z$ pulse is centered 1m away from the proton, then initially and for a short (laboratory) time thereafter the external wave has a negligible influence on the hydrogen atom. After the pulse arrives, it acts in very good approximation like a plane wave of Lyman-$\alpha$ frequency during the period when it passes through the hydrogen atom. Moreover, since the spread of the pulse is $10^4$ hydrogen diameters, it does not act like a shock wave but gently, and so its effects can to some extent be treated with perturbation theory. Finally, we notice that in the Coulomb gauge for ${\boldsymbol{A}}^\sharp$ the Coulomb potential $\phi^\sharp$ is time-independent, and so the vector potential of the incoming Gaussian radiation beam pulse satisfies the classical wave equation. We remark that generally the total ${\boldsymbol{A}}^\sharp$ itself will satisfy a more complicated wave equation, though.} \smallskip We next support the initial narrative of this subsection using perturbation theory. \newpage \noindent\textbf{3.5.1.iv: Perturbation-theoretical approach to the initial value problem} \smallskip Let $0<\epsilon\ll 1$ be a small parameter. We set up perturbative series in powers $p \in\{0,1,2,...\}$ of $\epsilon$ for $\widetilde\Psi$ and ${\boldsymbol{A}}^\sharp_{\mbox{\tiny{rad}}}$, viz. we write $\widetilde\Psi(t,{\boldsymbol{q}}) = \sum_{p=0}^\infty \epsilon^p\widetilde\Psi_{p}(t,{\boldsymbol{q}})$ and ${\boldsymbol{A}}^\sharp_{\mbox{\tiny{rad}}}(t,{\boldsymbol{s}};{\boldsymbol{q}}) = \sum_{p=0}^\infty\epsilon^p\widetilde{\boldsymbol{A}}^{\sharp,p}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{s}};{\boldsymbol{q}})$; here, the superscripts $p$ at the $\sharp$-fields should not be misread as powers. 
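The order-of-magnitude estimates in the Gaussian-beam Remark above can be reproduced in a few lines; the inputs are standard values consistent with those quoted there ($\lambda_{2,1}\approx 1.216\times 10^{-7}\,$m, $a_{\mbox{\tiny{Bohr}}}\approx 5.29\times 10^{-11}\,$m). A minimal arithmetic check:

```python
import math

# Assumed physical inputs (standard values, consistent with the Remark above):
lam = 1.216e-7        # Lyman-alpha wavelength lambda_{2,1} in meters
a_bohr = 5.29e-11     # Bohr radius in meters

omega_over_c = 2 * math.pi / lam                  # omega_{2,1} / c
sigma_kz = math.sqrt(2) * 1e-3 * omega_over_c     # from Delta(k_z) = sigma_{k_z} / sqrt(2)
dz = 1.0 / (sigma_kz * math.sqrt(2))              # Delta(z) = 1/(sigma_{k_z} sqrt(2)) = 500 c/omega

dz_in_wavelengths = dz / lam      # about 80, as stated
dz_in_bohr_radii = dz / a_bohr    # about 1.8e5, as stated
```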
We assume that at order $\epsilon^0$ there is no radiation $\sharp$-field, thus $\widetilde{\boldsymbol{A}}^{\sharp,0}_{\mbox{\tiny{rad}}}=0$. Inserting this series into the Schr\"odinger equation \eqref{Hhyd} for hydrogen and purging $H_{\mbox{\tiny{int}}}(t,{\boldsymbol{q}}) + H_{\mbox{\tiny{rad}}}(t,{\boldsymbol{q}})$ from \eqref{Hhyd} at order $\epsilon^0$, we can neglect all terms in the perturbative series but $\widetilde\Psi_0(t,{\boldsymbol{q}})$. This yields the Schr\"odinger equation \eqref{eq:ERWINeqnMatterWaveBOhydrogen} for hydrogen in the absence of any radiation $\sharp$-fields. The excited eigenstate $\widetilde\Psi_{2,1,m,+}(t,{\boldsymbol{q}})$ is an exact solution. So at order $\epsilon^0$ we set $\widetilde\Psi_0(t,{\boldsymbol{q}}) := \widetilde\Psi_{2,1,m,+}(t,{\boldsymbol{q}})$ and will consider both $m=0$ and $m=1$; we remark that the eigenstates $\widetilde\Psi_{2,1,1,\pm}(t,{\boldsymbol{q}})$ yield identical results, so that it suffices to discuss $\widetilde\Psi_{2,1,1,+}(t,{\boldsymbol{q}})$ if $m=1$. For the above setup to be consistent in the sense of perturbation theory, to order $\epsilon^0$ the interaction and radiation Hamiltonians have to be negligible. We now assume that the incoming external pulse contributes an $O(\epsilon)$ term to the radiation $\sharp$-field ${\boldsymbol{A}}^{\sharp}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{s}};{\boldsymbol{q}})$ (and analogously for the radiation $\sharp$-fields derived from ${\boldsymbol{A}}^{\sharp}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{s}};{\boldsymbol{q}})$). However, this is only the (source-)free part ${\boldsymbol{A}}^{\sharp,f}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{s}})$ of the $O(\epsilon)$ radiation $\sharp$-field. 
Since it already produces an $O(\epsilon)$ velocity field ${\boldsymbol{v}}(t,{\boldsymbol{q}})=\frac{e}{\mEL c}{\mathcal{I}}^{-1}{\boldsymbol{A}}^{\sharp,f}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{q}})$, a pertinent $O(\epsilon)$ contribution from ${\boldsymbol{A}}^{\sharp,s}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{s}};{\boldsymbol{q}})$ has to be added to ${\boldsymbol{A}}^{\sharp,f}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{s}})$ in order to obtain the full $O(\epsilon)$ radiation $\sharp$-field. In fact, the $\sharp$-field ${\boldsymbol{A}}^{\sharp,s}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{s}};{\boldsymbol{q}})$ contributes its own $O(\epsilon)$ share $\epsilon\frac{e}{\mEL c}{\mathcal{I}}^{-1}\widetilde{\boldsymbol{A}}^{\sharp,s,1}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{q}},{\boldsymbol{q}})$ to the velocity field and solves a self-consistent problem; see below. The beam-produced velocity field is negligible away from the intense pulse region. Since the hydrogen wave functions considered by us decay exponentially away from the proton on a scale of a few Bohr radii, this velocity field is initially negligible for all ${\boldsymbol{q}}$ inside the atom, so that initially no sourced $\sharp$-field is produced in the atomic region; it will, however, be appreciably large for the brief duration of the radiation beam pulse's passage through the atom. With the starting wave function at $O(1)$, and the starting radiation $\sharp$-fields of $O(\epsilon)$, we can insert the perturbative series Ansatz for $\widetilde\Psi(t,{\boldsymbol{q}})$ and for ${\boldsymbol{A}}^{\sharp}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{s}};{\boldsymbol{q}})$ into the dynamical equations and sort them by powers of $\epsilon$, which yields a consistent hierarchy of coupled linear equations. We next discuss the $O(\epsilon)$ responses to the incoming radiation beam pulse, i.e. 
the sourced $\sharp$-field $\widetilde{\boldsymbol{A}}^{\sharp,s,1}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{s}};{\boldsymbol{q}})$, and the perturbation $\widetilde\Psi_{1}(t,{\boldsymbol{q}})$ of the wave function of the atom. We need to begin with the $\sharp$-field. It should be recalled that only the ${\boldsymbol{q}}$ within a few dozen Bohr radii from the nucleus are of interest. \smallskip \newpage \noindent\textbf{3.5.1.v: Response of the $\sharp$-field at first order in perturbation theory.} \smallskip To first order in perturbation theory the $\sharp$-field responds directly to the incoming Maxwell--Lorentz radiation beam pulse; no intermediary action by the wave function enters. This is decisively different from Schr\"odinger's matter-wave approach, viz. the Schr\"odinger--Maxwell equations. More to the point, and since we only need to consider ${\boldsymbol{q}}$ within a few dozen Bohr radii from the nucleus, the incoming radiation can be assumed to be a Gaussian plane-wave pulse\footnote{Often in the literature, by a plane wave one means a monochromatic plane wave; to avoid confusion we speak of a plane-wave pulse.} $\widetilde{\boldsymbol{A}}^{\sharp,\mbox{\tiny{f}}}_{\mbox{\tiny{rad}}} = {\boldsymbol{e}}_x \frac{e\mEL c}{\hbar}\exp\big(-\frac{1}{2\sigma_z^2}(z-z_0-ct)^2\big)\cos\big(\frac1c\omega_{2,1}(z-z_0-ct)\big)$, traveling in the $z$-direction, satisfying the Coulomb gauge, and producing divergence-free electric and magnetic radiation pulse fields. This maps into the velocity field \begin{equation} \widetilde{\boldsymbol{v}}^{\mbox{\tiny{f}}}(t,{\boldsymbol{q}}) = {\boldsymbol{e}}_{x} \tfrac{e^2}{\hbar}\exp\big(-\tfrac{1}{2\sigma_z^2}({\boldsymbol{q}}\cdot{\boldsymbol{e}}_z-z_0-ct)^2\big)\cos\big(\tfrac1c\omega_{2,1}({\boldsymbol{q}}\cdot{\boldsymbol{e}}_z-z_0-ct)\big). 
\end{equation} Now define \begin{equation}\label{Vdefine} \widetilde{\boldsymbol{v}}^{(1)}(t,{\boldsymbol{q}}) := \widetilde{\boldsymbol{v}}^{\mbox{\tiny{f}}}(t,{\boldsymbol{q}}) + \tfrac{e}{\mEL c}{\mathcal{I}}^{-1}{\left\{\!\left[\right.\right.}\widetilde{{\boldsymbol{A}}}^{\sharp,s,1}_{\mbox{\tiny{rad}}}{\left.\left.\right]\!\right\}}_{{\boldsymbol{q}}}(t,{\boldsymbol{q}}) . \end{equation} \smallskip \noindent \textbf{Remark}: \emph{Since we have taken the limit $a\to 0$ in the source term of the radiation $\sharp$-field equations, the sourced $\sharp$-field has a singularity when ${\boldsymbol{s}}={\boldsymbol{q}}$. Thus we cannot also let $a\to 0$ at r.h.s.\eqref{Vdefine}, because this would lead to the ill-defined $\widetilde{{\boldsymbol{A}}}^{\sharp,s,1}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{q}};{\boldsymbol{q}})$.} \smallskip For $\widetilde{{\boldsymbol{A}}}^{\sharp,s,1}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{s}};{\boldsymbol{q}})$ we now obtain the following linear fixed point problem, \begin{alignat}{1}\label{eq:AsharpRADfixpt} \hspace{-0.8truecm} \widetilde{{\boldsymbol{A}}}^{\sharp,s,1}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{s}};{\boldsymbol{q}}) = &- 2\tfrac{e}{c} \tfrac{{\big(\widetilde{\boldsymbol{v}}^{(1)}(\tau,{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau))\big)^{}}_{{\boldsymbol{n}}_{\tau,{\boldsymbol{s}}}}^\perp\!} {|{\boldsymbol{Q}}_q(\tau) -{\boldsymbol{s}}|} \Big|^{}_{\tau = t^{\mathrm{ret}}(t,{\boldsymbol{s}};{\boldsymbol{q}})} \\ \notag &\hspace{-2truecm} + 2\tfrac{e}{c} \int_0^{t}\!\! \left({{\big(\widetilde{\boldsymbol{v}}^{(1)}(\tau,{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau))\big)^{}}_{{\boldsymbol{n}}_{\tau,{\boldsymbol{s}}}}^\perp\!} -2{{\big(\widetilde{\boldsymbol{v}}^{(1)}(\tau,{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau))\big)^{}}_{{\boldsymbol{n}}_{\tau,{\boldsymbol{s}}}}^\|\!} \right)\! \tfrac{c(t-\tau)}{|{\boldsymbol{Q}}_q(\tau) -{\boldsymbol{s}}|}\boldsymbol{1}^{}_{\{\tau > t^{\mathrm{ret}}(t,{\boldsymbol{s}};{\boldsymbol{q}})\}} d\tau . 
\end{alignat} Although linear, this is a formidable integral-delay equation. However, $|\widetilde{\boldsymbol{v}}^{(1)}|/c = O(\alpha_{\mbox{\tiny{\textrm{S}}}})$, and so we can expect that a solution can be found by Picard-type iteration, similar to a Born series in Born's treatment of the quantum-mechanical scattering problem. This leads to an iterative series expansion of $\widetilde{{\boldsymbol{A}}}^{\sharp,s,1}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{s}};{\boldsymbol{q}})$. In this vein we note that to first approximation in this iteration one neglects the contribution from ${\left\{\!\left[\right.\right.}\widetilde{{\boldsymbol{A}}}^{\sharp,s,1}_{\mbox{\tiny{rad}}}{\left.\left.\right]\!\right\}}_{{\boldsymbol{q}}}(t,{\boldsymbol{q}})$ at r.h.s.\eqref{Vdefine} and obtains the first contribution to $\widetilde{{\boldsymbol{A}}}^{\sharp,s,1}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{s}};{\boldsymbol{q}})$ by explicit quadrature of r.h.s.\eqref{eq:AsharpRADfixpt}, after solving the final value problem \begin{equation}\label{charQinBORNapprox} \tfrac{d}{d\tau} {\boldsymbol{Q}}_{\boldsymbol{q}}(\tau) = \epsilon{\mathcal{I}}^{-1} \widetilde{\boldsymbol{A}}^{\sharp,\mbox{\tiny{f}}}_{\mbox{\tiny{rad}}}(\tau, {\boldsymbol{Q}}_{\boldsymbol{q}}(\tau) ) \end{equation} with ${\boldsymbol{Q}}_{\boldsymbol{q}}(t) ={\boldsymbol{q}}$. This problem reduces further to a simple one-dimensional linear ODE problem of first order, due to the fact that the incoming plane-wave pulse points in the ${\boldsymbol{e}}_x$ direction while its space-dependence involves only ${\boldsymbol{q}}\cdot {\boldsymbol{e}}_z$, which is fixed during a characteristic motion. Thus it can be integrated by direct quadrature, \begin{eqnarray}\!\!\!\! \notag {\boldsymbol{Q}}_{\boldsymbol{q}}(\tau) \!\!\!&=& \!\!\! {\boldsymbol{q}} - {\boldsymbol{e}}_{x} \epsilon \tfrac{e^2}{\hbar}\!\!\int_\tau^t \!\! 
\exp\big(-\tfrac{1}{2\sigma_z^2}({\boldsymbol{q}}\cdot{\boldsymbol{e}}_z-z_0-c\theta)^2\big)\cos\big(\tfrac1c\omega_{2,1}({\boldsymbol{q}}\cdot{\boldsymbol{e}}_z-z_0-c\theta)\big)d\theta \\ \label{QinBORNapprox} \!\!\! &=&\!\!\! {\boldsymbol{q}} - {\boldsymbol{e}}_{x} \epsilon \alpha_{\mbox{\tiny{\textrm{S}}}} \!\!\int_{c\tau - {\boldsymbol{q}}\cdot{\boldsymbol{e}}_z+z_0}^{ct -{\boldsymbol{q}}\cdot{\boldsymbol{e}}_z+z_0}\!\! \exp\big(-\tfrac{1}{2\sigma_z^2}\xi^2\big)\cos\big(\tfrac1c\omega_{2,1}\xi\big)d\xi , \end{eqnarray} which can be expressed in terms of known functions, involving the error function, but the representation \eqref{QinBORNapprox} is explicit enough for our purposes. Inserting \eqref{QinBORNapprox} into \eqref{eq:AsharpRADfixpt} with $\epsilon\widetilde{\boldsymbol{v}}^{(1)}\!\big(\tau,{\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)\big)$ given by r.h.s.\eqref{charQinBORNapprox} still yields an involved formula, due to the retardations. However, since the relevant ${\boldsymbol{q}}$ are restricted to the atomic vicinity of the nucleus, since it is clear that the velocity field within a few dozen Bohr radii of the nucleus will be of a brief transient character, and since during this transient period $T$ it oscillates with the Lyman-$\alpha$ frequency, our formula \eqref{eq:AsharpRADfixpt} in this Born-type approximation reveals that $\widetilde{{\boldsymbol{A}}}^{\sharp,s,1}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{s}};{\boldsymbol{q}})$ will point in the ${\boldsymbol{e}}_x$ direction, and far away from the atom one essentially has a spherical shell of radiation of thickness $cT$, centered on the nucleus, moving outward at the speed of light, plus some dispersive part due to the contributions from outside the light cone. Moreover, and most importantly, the sourced radiation $\sharp$-field in this Born-type approximation oscillates itself with the Lyman-$\alpha$ frequency.
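The final value problem \eqref{charQinBORNapprox} with solution \eqref{QinBORNapprox} is easy to evaluate numerically. The following Python sketch, with hypothetical dimensionless parameter values (units with $c=1$; not the physical data), evaluates r.h.s.\eqref{QinBORNapprox} by quadrature and checks the two structural features used above: the final condition ${\boldsymbol{Q}}_{\boldsymbol{q}}(t)={\boldsymbol{q}}$, and that the characteristic is displaced along ${\boldsymbol{e}}_x$ only.

```python
import numpy as np
from scipy.integrate import quad

# hypothetical, dimensionless illustration parameters (units with c = 1)
c, sigma_z, omega21, z0, eps_alpha = 1.0, 2.0, 0.5, -10.0, 1e-3

def envelope(xi):
    # integrand of r.h.s.(QinBORNapprox)
    return np.exp(-xi**2/(2.0*sigma_z**2))*np.cos(omega21*xi/c)

def Q(tau, t, q):
    # Born-approximation characteristic, r.h.s.(QinBORNapprox)
    lo = c*tau - q[2] + z0
    hi = c*t - q[2] + z0
    disp, _ = quad(envelope, lo, hi)
    return q - np.array([1.0, 0.0, 0.0])*eps_alpha*disp

q = np.array([0.5, -0.2, 0.1])
t = 30.0
print(Q(0.0, t, q))  # displaced from q along e_x only
```

Note that only the $x$-component of ${\boldsymbol{Q}}_{\boldsymbol{q}}(\tau)$ moves, since the pulse is ${\boldsymbol{e}}_x$-polarized while its space-dependence enters only through ${\boldsymbol{q}}\cdot{\boldsymbol{e}}_z$.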
\smallskip \noindent\textbf{3.5.1.vi: Response of the atom at first order in perturbation theory.} \smallskip With radiation ${}^\sharp$-fields of $O(\epsilon)$, the interaction Hamiltonian $H_{\mbox{\tiny{int}}}(t,{\boldsymbol{q}})$ is $O(\epsilon)$ relative to $H_{\mbox{\tiny{{hyd}}}}({\boldsymbol{q}})$ because of the linear contribution from the vector potential; it is then negligible at $O(\epsilon^0)$ in the Schr\"odinger equation, vindicating our original assumption. The radiation Hamiltonian and the integrated Poynting vector in the interaction Hamiltonian are only $O(\epsilon^2)$ operators, hence negligible at both $O(\epsilon^0)$ and $O(\epsilon)$ in our Schr\"odinger equation for hydrogen. To $O(\epsilon)$ included we have ${\boldsymbol{P}}^\sharp_{\mbox{\tiny{{el}}}}(t,{\boldsymbol{q}}) = - \epsilon \tfrac{\eEL}{c} {\left\{\!\left[\right.\right.} \widetilde{\boldsymbol{A}}^{\sharp,\mbox{\tiny{1}}}_{\mbox{\tiny{rad}}}{\left.\left.\right]\!\right\}}_{{\boldsymbol{q}}} (t,{\boldsymbol{q}})$.
Furthermore, the magnetic vector potential of the incoming radiation field is independent of ${\boldsymbol{q}}$, and since we stipulated the Coulomb gauge condition in the ${\boldsymbol{s}}$ variable, we find $\nabla_{\boldsymbol{q}}\cdot{\left\{\!\left[\right.\right.} {\boldsymbol{A}}^{\sharp,\mbox{\tiny{f}}}_{\mbox{\tiny{rad}}}{\left.\left.\right]\!\right\}}_{{\boldsymbol{q}}}(t,{\boldsymbol{q}})= 0 $ because \begin{alignat}{1} \label{divCALC} \nabla_{\boldsymbol{q}}\cdot \int_{{\mathbb{R}}^3} {\boldsymbol{A}}^{\sharp,\mbox{\tiny{f}}}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{s}})\delta^{(a)}_{{\boldsymbol{q}}}({\boldsymbol{s}})\mathrm{d}^3s & = \int_{{\mathbb{R}}^3} {\boldsymbol{A}}^{\sharp,\mbox{\tiny{f}}}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{s}})\cdot \nabla_{\boldsymbol{q}} \delta^{(a)}_{{\boldsymbol{q}}}({\boldsymbol{s}})\mathrm{d}^3s \\ & = - \int_{{\mathbb{R}}^3} {\boldsymbol{A}}^{\sharp,\mbox{\tiny{f}}}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{s}})\cdot \nabla_{\boldsymbol{s}} \delta^{(a)}_{{\boldsymbol{q}}}({\boldsymbol{s}})\mathrm{d}^3s \\ & = \int_{{\mathbb{R}}^3} \big(\nabla_{\boldsymbol{s}}\cdot {\boldsymbol{A}}^{\sharp,\mbox{\tiny{f}}}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{s}})\big) \delta^{(a)}_{{\boldsymbol{q}}}({\boldsymbol{s}})\mathrm{d}^3s = 0. \end{alignat} So we have shown that $\nabla_{{\boldsymbol{q}}}\cdot{\boldsymbol{P}}^{\sharp,\mbox{\tiny{f}}}_{\mbox{\tiny{{el}}}}(t,{\boldsymbol{q}}) = 0$. 
Similarly, for the averaged sourced $\sharp$-field vector potential we know that its ${\boldsymbol{s}}$-divergence vanishes, and this entails \begin{alignat}{1} \label{divCALCsourceA} 0 & = \int_{{\mathbb{R}}^3} \big(\nabla_{\boldsymbol{s}}\cdot {\boldsymbol{A}}^{\sharp,\mbox{\tiny{s}}}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{s}};{\boldsymbol{q}})\big) \delta^{(a)}_{{\boldsymbol{q}}}({\boldsymbol{s}})\mathrm{d}^3s \\ \label{divCALCsourceB} & = - \int_{{\mathbb{R}}^3} {\boldsymbol{A}}^{\sharp,\mbox{\tiny{s}}}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{s}};{\boldsymbol{q}})\cdot \nabla_{\boldsymbol{s}} \delta^{(a)}_{{\boldsymbol{q}}}({\boldsymbol{s}})\mathrm{d}^3s \\ \label{divCALCsourceC} & = \int_{{\mathbb{R}}^3} {\boldsymbol{A}}^{\sharp,\mbox{\tiny{s}}}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{s}};{\boldsymbol{q}})\cdot \nabla_{\boldsymbol{q}} \delta^{(a)}_{{\boldsymbol{q}}}({\boldsymbol{s}})\mathrm{d}^3s \\ & = - \int_{{\mathbb{R}}^3} \big(\nabla_{\boldsymbol{q}}\cdot {\boldsymbol{A}}^{\sharp,\mbox{\tiny{s}}}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{s}};{\boldsymbol{q}})\big) \delta^{(a)}_{{\boldsymbol{q}}}({\boldsymbol{s}})\mathrm{d}^3s \\ & = - \nabla_{\boldsymbol{q}}\cdot \int_{{\mathbb{R}}^3} {\boldsymbol{A}}^{\sharp,\mbox{\tiny{s}}}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{s}};{\boldsymbol{q}})\delta^{(a)}_{{\boldsymbol{q}}}({\boldsymbol{s}})\mathrm{d}^3s + \int_{{\mathbb{R}}^3} {\boldsymbol{A}}^{\sharp,\mbox{\tiny{s}}}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{s}};{\boldsymbol{q}})\cdot \nabla_{\boldsymbol{q}} \delta^{(a)}_{{\boldsymbol{q}}}({\boldsymbol{s}})\mathrm{d}^3s \\ & = - \nabla_{\boldsymbol{q}}\cdot \int_{{\mathbb{R}}^3} {\boldsymbol{A}}^{\sharp,\mbox{\tiny{s}}}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{s}};{\boldsymbol{q}})\delta^{(a)}_{{\boldsymbol{q}}}({\boldsymbol{s}})\mathrm{d}^3s, \end{alignat} the last equality because of the sequence of equalities \eqref{divCALCsourceA}--\eqref{divCALCsourceC}. This shows that its ${\boldsymbol{q}}$-divergence vanishes, too. 
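Both of the preceding integration-by-parts arguments rest on the single fact that the mollifier $\delta^{(a)}_{{\boldsymbol{q}}}({\boldsymbol{s}})$ depends on its arguments only through ${\boldsymbol{s}}-{\boldsymbol{q}}$, whence $\nabla_{\boldsymbol{q}} \delta^{(a)}_{{\boldsymbol{q}}}({\boldsymbol{s}}) = -\nabla_{\boldsymbol{s}} \delta^{(a)}_{{\boldsymbol{q}}}({\boldsymbol{s}})$. A minimal symbolic check of this identity, with a one-dimensional Gaussian standing in for the mollifier (an illustrative choice, not the mollifier of the text):

```python
import sympy as sp

q, s = sp.symbols('q s', real=True)
a = sp.symbols('a', positive=True)

# one-dimensional Gaussian stand-in for the mollifier delta_q^{(a)}(s)
delta = sp.exp(-(s - q)**2/(2*a**2))/(sp.sqrt(2*sp.pi)*a)

# grad_q delta + grad_s delta should vanish identically
identity = sp.simplify(sp.diff(delta, q) + sp.diff(delta, s))
print(identity)  # -> 0
```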
The interaction Hamiltonian (\ref{HhydINT}) at $O(\epsilon)$ simplifies to $\epsilon H^{\mbox{\tiny{(1)}}}_{\mbox{\tiny{int}}}(t,{\boldsymbol{q}})$, with \begin{alignat}{1}\label{HhydINTreduced} H^{\mbox{\tiny{(1)}}}_{\mbox{\tiny{int}}}(t,{\boldsymbol{q}}) = - i\tfrac{\hbar \eEL}{\mEL c} {\left\{\!\left[\right.\right.}\widetilde{\boldsymbol{A}}^{\sharp,\mbox{\tiny{f}}}_{\mbox{\tiny{rad}}} + \widetilde{\boldsymbol{A}}^{\sharp,\mbox{\tiny{s,1}}}_{\mbox{\tiny{rad}}}{\left.\left.\right]\!\right\}}_{{\boldsymbol{q}}}(t,{\boldsymbol{q}}) \cdot\nabla_{{\boldsymbol{q}}} . \end{alignat} To establish the consistency of the perturbative setup we need to assume that $H^{\mbox{\tiny{(1)}}}_{\mbox{\tiny{int}}}(t,{\boldsymbol{q}})$ itself is of the same order as $H_{\mbox{\tiny{hyd}}}({\boldsymbol{q}})$, which can always be arranged by choosing the amplitude of the incoming radiation field appropriately. \newpage \noindent \textbf{Remark}: \emph{Since the incoming radiation $\sharp$-fields can be assumed to be smooth, and since the radius, $a$, of the mollification of the point electron is as tiny as we please, we have} ${\left\{\!\left[\right.\right.}\widetilde{\boldsymbol{A}}^{\sharp,\mbox{\tiny{f}}}_{\mbox{\tiny{rad}}}{\left.\left.\right]\!\right\}}_{{\boldsymbol{q}}}(t,{\boldsymbol{q}}) \approx \widetilde{\boldsymbol{A}}^{\sharp,\mbox{\tiny{f}}}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{q}})$ \emph{with arbitrary precision.
Therefore, for all practical purposes we can replace} ${\left\{\!\left[\right.\right.}\widetilde{\boldsymbol{A}}^{\sharp,\mbox{\tiny{f}}}_{\mbox{\tiny{rad}}}{\left.\left.\right]\!\right\}}_{{\boldsymbol{q}}}(t,{\boldsymbol{q}})$ with $\widetilde{\boldsymbol{A}}^{\sharp,\mbox{\tiny{f}}}_{\mbox{\tiny{rad}}}(t,{\boldsymbol{q}})$ \emph{in \eqref{HhydINTreduced}.} \smallskip Letting therefore $a\to 0$ in the incoming-radiation term of the perturbative expansion of the Schr\"odinger equation \eqref{eq:ERWINeqnNEW} for hydrogen exposed to an incoming radiation field, we find (after dropping the argument $(t,{\boldsymbol{q}})$ and $\widetilde{\quad}$ from the $\widetilde\Psi$s and $\widetilde{\boldsymbol{A}}^{\sharp,1}_{\mbox{\tiny{rad}}}$) \begin{equation} \Big(\! i \hbar \partial_t + \tfrac{\hbar^2 }{2\mEL}\Delta_{\boldsymbol{q}} + \tfrac{e^2}{|{\boldsymbol{q}}|}\Big)\Psi_1 = - i\tfrac{\hbar \eEL}{\mEL c} \left( {\boldsymbol{A}}^{\sharp,\mbox{\tiny{f}}}_{\mbox{\tiny{rad}}} + {\left\{\!\left[\right.\right.}{\boldsymbol{A}}^{\sharp,\mbox{\tiny{s,1}}}_{\mbox{\tiny{rad}}}{\left.\left.\right]\!\right\}}_{{\boldsymbol{q}}} \right) \cdot\nabla_{{\boldsymbol{q}}} \Psi_0. \label{eq:ERWINeqnNEWfirstORDER} \end{equation} We have arrived at a traditional first-order formulation of time-dependent perturbation theory of the Schr\"odinger equation of hydrogen in an external electromagnetic radiation field, albeit in the Schr\"odinger picture, while textbooks usually present the unitarily equivalent formulation in the interaction picture. Since this problem has been sorted out in great detail, it suffices to summarize the established facts; and since the two ``pictures'' are unitarily equivalent and we here consider only first-order theory, we may as well stay in the Schr\"odinger picture and describe the results. 
We now restrict the further discussion to the situation in which the time-evolved wave function $\Psi(t,{\boldsymbol{q}})$ remains in the negative energy subspace of the usual hydrogen Hamiltonian $H_{\mbox{\tiny{hyd}}} = -\frac{\hbar^2}{2\mEL}\Delta_{\boldsymbol{q}} - \frac{e^2}{|{\boldsymbol{q}}|}$. \medskip \noindent \textbf{Remark}: \emph{We should be prepared to find that such a scenario is an oversimplification, and that a positive energy admixture may be unavoidable, as suggested by Dirac's 1927 calculations known as ``Fermi's Golden Rule.'' In this sense, we are not assuming that it is always possible to describe the electron wave function dynamics in the negative energy subspace of $H_{\mbox{\tiny{hyd}}}$, but rather we are \emph{restricting} the discussion to those situations in which it is. It should be a valid assumption if the atom is initially in its ground state and the incoming wave has a frequency less than the ionization energy divided by $\hbar$.} \medskip Technically speaking, if $\Pi_{\underline{E},\overline{E}}$ denotes the projection on the subspace of Hilbert space $L^2({\mathbb{R}}^3)$ such that $\langle \Psi, H_{\mbox{\tiny{{hyd}}}}\Psi\rangle \in [\underline{E},\overline{E}]$ whenever $\Psi = \Pi_{\underline{E},\overline{E}}\Psi$, then what we assume is that to first order in perturbation theory included, $\Psi = \Pi_{E_1,0}\Psi$, where $E_1$ is the ground state energy for $H_{\mbox{\tiny{{hyd}}}}$. By the completeness of the hydrogen wave eigenfunctions in this subspace we then can expand the solution, to first order included, thus: $\Psi(t,{\boldsymbol{q}}) = \Psi_0(t,{\boldsymbol{q}}) + \Psi_1(t,{\boldsymbol{q}})$, with \begin{equation} \Psi_1(t,{\boldsymbol{q}}) = \sum_{n\in{\mathbb{N}}} e^{-i E_n t/\hbar}\sum_{\ell=0}^{n-1}\sum_{m=0}^\ell\sum_{\varsigma\in\pm} c^{}_{n,\ell,m,\varsigma}(t)\psi^{}_{n,\ell,m,\varsigma}({\boldsymbol{q}}), \label{eq:PSIboundGENERALagain} \end{equation} where $c^{}_{n,\ell,m,\varsigma}(0)=0$.
\newpage Inserting this expansion into \eqref{eq:ERWINeqnNEWfirstORDER}, then multiplying by the real $\psi^{}_{n',\ell',m',\varsigma'}({\boldsymbol{q}})$ and integrating over ${\mathbb{R}}^3$ in ${\boldsymbol{q}}$, and abbreviating ${\boldsymbol{A}}^{\sharp,\mbox{\tiny{f}}}_{\mbox{\tiny{rad}}} + {\left\{\!\left[\right.\right.}{\boldsymbol{A}}^{\sharp,\mbox{\tiny{s,1}}}_{\mbox{\tiny{rad}}}{\left.\left.\right]\!\right\}}_{{\boldsymbol{q}}} =: {\boldsymbol{A}}^{\sharp,\mbox{\tiny{fs1}}}_{\mbox{\tiny{rad}}}$, we obtain \begin{equation} e^{-i E_{n'} t/\hbar} i\hbar \frac{d}{dt} c^{}_{n',\ell',m',\varsigma'}(t) = - i\hbar \tfrac{\eEL}{\mEL c} e^{-i E_{2} t/\hbar} \big\langle\psi^{}_{n',\ell',m',\varsigma'}\big|{\boldsymbol{A}}^{\sharp,\mbox{\tiny{fs1}}}_{\mbox{\tiny{rad}}}\cdot\nabla_{{\boldsymbol{q}}}\big|\psi^{}_{2,1,m,+}\big\rangle, \label{cCOEFFodes} \end{equation} where the angular brackets denote the usual Dirac bracket. Since $H_{\mbox{\tiny{int}}}(t,{\boldsymbol{q}})$ is given, we can integrate \eqref{cCOEFFodes} with vanishing initial data, obtaining \begin{equation} c^{}_{n',\ell',m',\varsigma'}(t) = - \tfrac{\eEL}{\mEL c} \int_0^t e^{-i( E_{2}-E_{n'})\tau/\hbar}\big\langle\psi^{}_{n',\ell',m',\varsigma'} \big|{\boldsymbol{A}}^{\sharp,\mbox{\tiny{fs1}}}_{\mbox{\tiny{rad}}}\cdot\nabla_{{\boldsymbol{q}}}\big|\psi^{}_{2,1,m,+}\big\rangle {d\tau} . \label{cCOEFFs} \end{equation} We next discuss the contributions from the incoming (free) and the outgoing (sourced) $\sharp$-fields separately. Thus we write $c^{}_{n',\ell',m',\varsigma'}(t) = c^{\mbox{\tiny{f}}}_{n',\ell',m',\varsigma'}(t) + c^{\mbox{\tiny{s}}}_{n',\ell',m',\varsigma'}(t)$, in self-explanatory notation. \emph{The source-free contribution}.
For the incoming Gaussian beam pulse, the $z$-dependent factor of ${\boldsymbol{A}}^{\sharp,\mbox{\tiny{f}}}_{\mbox{\tiny{rad}}}$ is to very good approximation given by $\exp\big(-\frac{1}{2\sigma_z^2}(z-z_0-ct)^2\big)\cos\big(\frac1c\omega_{2,1}(z-z_0-ct)\big)$, where $z_0$ is the center of the pulse at time $t=0$, and $\omega_{2,1} = (E_2 - E_1)/\hbar$. So it is clear that many coefficients $c^{\mbox{\tiny{f}}}_{n',\ell',m',\varsigma'}(t)$ will rapidly (on our everyday time scale) transit from $0$ to some generally nonzero complex final value $c^{\mbox{\tiny{f}}}_{n',\ell',m',\varsigma'}(\infty)$ as $t\to\infty$; some may vanish identically, though, due to symmetries. Now, writing the cosine as the real part of the exponential of $i\big(\frac1c\omega_{2,1}(z-z_0-ct)\big)$, we see that the interaction Hamiltonian is a sum of terms with factors $\exp(\pm i \omega_{2,1}t)$ and $\exp(\pm 2 i \omega_{2,1}t)$, and a non-oscillating (w.r.t. $t$) term. Under the integral the term $\propto \exp( i \omega_{2,1}\tau)$ will cancel the factor $\exp(- i\omega_{2,n'}\tau)$ at r.h.s.\eqref{cCOEFFs} iff $n'=1$, and in this case that integral will have no time-oscillatory terms left. All other coefficients $c^{\mbox{\tiny{f}}}_{n',\ell',m',\varsigma'}(t)$ are given by integrals over oscillatory functions with one of the Rydberg frequencies of hydrogen (in particular, the $n'=2$ terms have the Lyman-$\alpha$ frequency), or frequencies which are higher than the ionization frequency (which should make the smallest contributions). Thus it can be expected that $|c^{\mbox{\tiny{f}}}_{1,0,0,+}(\infty)|$ either vanishes identically by symmetry or else is the largest in magnitude of all the coefficients. In the following we report our results for the $c^{\mbox{\tiny{f}}}_{n',\ell',m',\varsigma'}(t)$ as given by \eqref{cCOEFFs} with $H^{(1)}_{\mbox{\tiny{int}}}(\tau,\,.\,)$ computed from the stipulated Gaussian beam pulse.
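The resonance mechanism just described, whereby only the $n'=1$ coefficient retains a non-oscillatory integrand, can be illustrated numerically. The following Python sketch (toy dimensionless parameters, not the physical ones) integrates a schematic version of \eqref{cCOEFFs}, $c(t)\propto\int_0^t e^{-i\Delta\tau}g(\tau)\,d\tau$ with $g$ a Gaussian pulse oscillating at frequency $\omega$, and confirms that the resonant detuning $\Delta=\omega$ dominates an off-resonant one:

```python
import numpy as np
from scipy.integrate import quad

# toy, hypothetical dimensionless parameters
omega, sigma, t0 = 5.0, 4.0, 20.0

def g(tau):
    # Gaussian pulse envelope oscillating at frequency omega, centered at t0
    return np.exp(-(tau - t0)**2/(2.0*sigma**2))*np.cos(omega*tau)

def c_final(delta):
    # |int_0^{2 t0} exp(-i*delta*tau) g(tau) dtau|, schematic version of (cCOEFFs)
    re, _ = quad(lambda u: np.cos(delta*u)*g(u), 0.0, 2*t0, limit=400)
    im, _ = quad(lambda u: np.sin(delta*u)*g(u), 0.0, 2*t0, limit=400)
    return np.hypot(re, im)

print(c_final(omega), c_final(0.5*omega))  # the resonant term dominates
```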
For convenience we restrict the study to coefficients with small enough $n'$ values so that the pertinent hydrogen eigenfunctions are exponentially decaying in $r$ with scale $n'a_{\mbox{\tiny{B}}}$ that is smaller than the lateral spread of the beam pulse. In $\big\langle\psi^{}_{n',\ell',m',\varsigma'}\big|{\boldsymbol{A}}^{\sharp,\mbox{\tiny{f}}}_{\mbox{\tiny{rad}}}\cdot\nabla_{{\boldsymbol{q}}}\big|\psi^{}_{2,1,m,+}\big\rangle$ we then can approximately set ${\boldsymbol{A}}^{\sharp,\mbox{\tiny{f}}}_{\mbox{\tiny{rad}}} = \frac{e\mEL c}{\hbar}{\boldsymbol{e}}_x\exp\big(-\frac{1}{2\sigma_z^2}(z-z_0-ct)^2\big)\cos\big(\frac1c\omega_{2,1}(z-z_0-ct)\big)$, a plane-wave pulse traveling in the $z$-direction, satisfying the Coulomb gauge, and producing divergence-free electric and magnetic radiation pulse fields; viz., we have ${\boldsymbol{A}}^{\sharp,\mbox{\tiny{f}}}_{\mbox{\tiny{rad}}}\cdot\nabla_{{\boldsymbol{q}}} = \frac{e\mEL c}{\hbar}\exp\big(-\frac{1}{2\sigma_z^2}(z-z_0-ct)^2\big)\cos\big(\frac1c\omega_{2,1}(z-z_0-ct)\big)\partial_{q_x}$. Taking $n'=1,\ell'=0=m',\varsigma'=+$, the bra vector is the ground state, and its wave function is spherically symmetric. Then we obtain the following results for when $m=0$ versus when $m=1$. \smallskip \noindent ($m=0$) For the excited initial state, $\partial_{q_x}\psi^{}_{2,1,0,+}\propto \sin\vartheta \cos\varphi$, so the $\varphi$-integration yields $\big\langle\psi^{}_{1,0,0,+}\big|{\boldsymbol{A}}^{\sharp,\mbox{\tiny{f}}}_{\mbox{\tiny{rad}}}\cdot\nabla_{{\boldsymbol{q}}}\big|\psi^{}_{2,1,0,+}\big\rangle =0$. Thus to first-order perturbation theory this initial state does not excite the ground state amplitude when the atom is traversed by the stipulated Gaussian beam, viz. $c^{\mbox{\tiny{f}}}_{1,0,0,+}(t)=0\ \forall\, t$.
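The vanishing of this matrix element, and the non-vanishing of its $m=1$ counterpart discussed next, can be verified symbolically. A sketch in Bohr units ($a_{\mbox{\tiny{B}}}=1$), using the real hydrogen eigenfunctions $\psi_{2,1,0,+}\propto z\, e^{-r/2}$ and $\psi_{2,1,1,+}\propto x\, e^{-r/2}$:

```python
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)
x = r*sp.sin(th)*sp.cos(ph)
z = r*sp.cos(th)
dV = r**2*sp.sin(th)  # spherical volume element

psi_100 = sp.exp(-r)/sp.sqrt(sp.pi)
# d/dq_x of psi_{2,1,0,+} = z*exp(-r/2)/(4*sqrt(2*pi))
dpsi_210 = -(x*z/(2*r))*sp.exp(-r/2)/(4*sp.sqrt(2*sp.pi))
# d/dq_x of psi_{2,1,1,+} = x*exp(-r/2)/(4*sqrt(2*pi))
dpsi_211 = (1 - x**2/(2*r))*sp.exp(-r/2)/(4*sp.sqrt(2*sp.pi))

def bracket(dpsi):
    return sp.integrate(psi_100*dpsi*dV,
                        (ph, 0, 2*sp.pi), (th, 0, sp.pi), (r, 0, sp.oo))

m0 = bracket(dpsi_210)  # the phi-integral of cos(phi) makes it vanish
m1 = bracket(dpsi_211)  # nonzero; evaluates to 16*sqrt(2)/81 in units 1/a_B
print(m0, m1)
```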
Had the $O(\epsilon^2)$ terms in $H_{\mbox{\tiny{int}}}(t,{\boldsymbol{q}})$ due to the incoming radiation $\sharp$-field been retained, $c^{\mbox{\tiny{f}}}_{1,0,0,+}(t)$ would generally be nonzero; but this is second order in perturbation theory. \smallskip \noindent ($m=1$) When the excited initial state is $\psi^{}_{2,1,1,+}$, its partial $q_x$-derivative is a spherically symmetric function times $\big(1- q_x^2/(2a_{\mbox{\tiny{B}}}r)\big)$, and this implies that the matrix element $\big\langle\psi^{}_{1,0,0,+}\big|{\boldsymbol{A}}^{\sharp,\mbox{\tiny{f}}}_{\mbox{\tiny{rad}}}\cdot\nabla_{{\boldsymbol{q}}}\big|\psi^{}_{2,1,1,+}\big\rangle$ is generally nonzero. The time-dependence can be conveniently evaluated with the help of a computer, but $\lim_{t\to\infty}c_{1,0,0,+}^{}(t)$ can be computed explicitly to excellent approximation if one realizes that the lower limit of the time integral can be extended from $0$ to $-\infty$ with a superexponentially small error. In that case, by an application of the Tonelli--Fubini theorem, one can carry out the time integration over $\tau$ first, which becomes just the Fourier transform of the plane-wave pulse with conjugate variables $\tau\mapsto \omega$.
We use that \begin{equation}\label{FourierGAUSSa} \int_{{\mathbb{R}}} e^{-i\omega_{2,1}\tau}e^{-\frac{1}{2\sigma_z^2}(z-z_0-c\tau)^2}e^{ - i\frac1c\omega_{2,1}(z-z_0-c\tau)}d\tau = \tfrac{\sqrt{2\pi} \sigma_z}{c} e^{ - i \frac1c\omega_{2,1}(z-z_0)} \end{equation} and that a Gaussian maps to a Gaussian, so that \begin{equation}\label{FourierGAUSSb} \int_{{\mathbb{R}}} e^{-i\omega_{2,1}\tau}e^{-\frac{1}{2\sigma_z^2}(z-z_0-c\tau)^2}e^{ i\frac1c\omega_{2,1}(z-z_0-c\tau)}d\tau = \tfrac{\sqrt{2\pi} \sigma_z}{c} e^{- 2\frac{\sigma_z^2 }{c^2}\omega_{2,1}^2} e^{ - i \frac1c\omega_{2,1}(z-z_0)} \end{equation} and the Gaussian at r.h.s.\eqref{FourierGAUSSb}, for our choice of data, is $\approx\exp(- 10^6)$, hence zero for all practical purposes relative to the contribution from r.h.s.\eqref{FourierGAUSSa}. Similarly it follows that all other amplitude coefficients are practically as good as zero, as long as the plane-wave pulse approximation is valid; at larger $n'$ the amplitude coefficients cannot be computed with the plane-wave pulse approximation, yet after taking the lateral cutoff into account the remaining time integration argument applies with minor adjustments and again produces essentially vanishing magnitudes. \emph{The sourced contribution (Born-type approximation)}. The coefficients $c^{\mbox{\tiny{s}}}_{n',\ell',m',\varsigma'}(t)$ are more difficult to evaluate, and our results are rather preliminary at this point. A few insights are available. First of all, even though the velocity field generated by the incoming Gaussian plane-wave pulse always points along ${\boldsymbol{e}}_x$, the sourced $\sharp$-field vector potential, in Born-type approximation, points in (almost) all directions.
This essentially spherical emission consists of a retarded Li\'enard--Wiechert type part that leaves the realm of the atomic region as quickly as the plane-wave pulse passes through the atom, and a contribution from outside the light cone that lingers a bit longer; yet, since in Born-type approximation the velocity field in the atomic region is active only transiently while the incoming Gaussian beam pulse passes through the atom, the contribution from outside the light cone is also very brief in time. Next, the velocity field contribution to the sourced $\sharp$-field vector potential in Born-type approximation oscillates with the Lyman-$\alpha$ frequency, but the contribution from ${\boldsymbol{Q}}_{\boldsymbol{q}}(t^{\mathrm{ret}}(t,{\boldsymbol{s}};{\boldsymbol{q}}))$ in the numerator of r.h.s.\eqref{eq:AsharpRADfixpt} may cause a spectrum of frequencies. The upshot is: We may expect that the emission of the sourced $\sharp$-field radiation in first order of perturbation theory, and in first Born-type approximation, makes a non-zero contribution $c^{\mbox{\tiny{s}}}_{1,0,0,+}(t)$ to the ground state amplitude, in addition to $c^{\mbox{\tiny{f}}}_{1,0,0,+}(t)$. Moreover, one should be prepared to find that this emission process will also contribute to other amplitudes; in particular, it may contribute to the de-activation of the excited initial state, while contributions to other eigenstate amplitudes should be vanishingly small, for numerical reasons similar to those offered above. To sort this out rigorously will require a detailed and lengthy evaluation of the retarded contributions and of those from outside the light cone. This, unfortunately, must be relegated to the to-do list as a high-priority item.
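The decisive smallness of the counter-rotating contribution, \eqref{FourierGAUSSb} relative to \eqref{FourierGAUSSa}, is easy to confirm numerically. A Python sketch in units with $c=1$ and deliberately mild illustrative values of $\sigma_z$ and $\omega_{2,1}$ (for the physical data the suppression factor is $\approx e^{-10^6}$, far below machine precision):

```python
import numpy as np
from scipy.integrate import quad

sigma_z, omega, u = 1.0, 1.0, 0.3  # u plays the role of z - z0 (c = 1)

def F(sgn):
    # int exp(-i*omega*tau) exp(-(u-tau)^2/(2 sigma^2)) exp(sgn*i*omega*(u-tau)) dtau
    f = lambda tau: np.exp(-(u - tau)**2/(2.0*sigma_z**2)) \
                    * np.exp(1j*(-omega*tau + sgn*omega*(u - tau)))
    re, _ = quad(lambda s: f(s).real, -40.0, 40.0, limit=400)
    im, _ = quad(lambda s: f(s).imag, -40.0, 40.0, limit=400)
    return re + 1j*im

co = abs(F(-1))       # (FourierGAUSSa): magnitude sqrt(2*pi)*sigma_z
counter = abs(F(+1))  # (FourierGAUSSb): suppressed by exp(-2*sigma_z^2*omega^2)
print(co, counter)
```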
\smallskip \noindent \textbf{Conclusion}: \emph{In the present model, first order perturbation theory predicts that, after an electromagnetic radiation beam pulse with Lyman-$\alpha$ frequency has passed through, a hydrogen atom that is initially in its first excited state with $n=2$ will have responded as follows:} $\bullet$ \emph{If $\ell=1$ and $m=1$, the atom will essentially be in a superposition of this state and the ground state}; $\bullet\;$ \emph{If $\ell=1$ and $m=0$, or if $\ell=0$, the atom will remain in that state}. \noindent \emph{In any of these situations, there will be negligible admixtures of other eigenstates.} \medskip \noindent \textbf{Remark}: \emph{Since ${\boldsymbol{A}}^{\sharp,\mbox{\tiny{f}}}_{\mbox{\tiny{rad}}}$ is independent of ${{\boldsymbol{q}}}$, we have $$ \big\langle\psi^{}_{1,0,0,+}\big|{\boldsymbol{A}}^{\sharp,\mbox{\tiny{f}}}_{\mbox{\tiny{rad}}}\cdot\nabla_{{\boldsymbol{q}}}\big|\psi^{}_{2,1,1,+}\big\rangle = - \big\langle\psi^{}_{2,1,1,+}\big|{\boldsymbol{A}}^{\sharp,\mbox{\tiny{f}}}_{\mbox{\tiny{rad}}}\cdot\nabla_{{\boldsymbol{q}}}\big|\psi^{}_{1,0,0,+}\big\rangle, $$ and so we can conclude that the incoming Lyman-$\alpha$ frequency wave, after passing through the atom that initially is in its ground state, will leave the atom in a superposition of the ground state and the first excited level state $(2,1,1)$.} \newpage \medskip \noindent \textbf{Remark}: \emph{Since to first order in perturbation theory the incoming Gaussian radiation beam pulse generates a flash of outgoing $\sharp$-field radiation already at the level of the $\sharp$-field equations \emph{without} the mediation of the wave function of the atom, we now have the apparently paradoxical situation that to first order in perturbation theory the atom, when initially in the first excited state with $m=0$, will remain in that state and yet a flash of radiation is emitted.
However, the first excited $m=0$ state is spherically symmetric, and so upon taking the quantum-mechanical expected value this outgoing radiation $\sharp$-field flash will average to zero, compatible with the empirical results. When the atom is initially in a first excited $m=1$ state, which is not spherically symmetric, the quantum-mechanical average to first order in perturbation theory will not vanish.} \medskip We will compute the $t$-dependence of $|c^{\mbox{\tiny{f}}}_{1,0,0,+}(t)|^2$ numerically for some illustrative parameter values, when the initial state of the atom is $\Psi^{}_0 = \psi^{}_{2,1,1,+}$; see Fig. TBA. \bigskip \vfill\vfill This concludes our discussion of the first-order response of the hydrogen atom to the incoming radiation field pulse which, as we have shown, reduces to the same type of problem as in orthodox quantum mechanics, already studied by Schr\"odinger in 1926, except that he used monochromatic plane waves and not a Gaussian plane-wave pulse; for the large literature on Gaussian beam pulses, see, e.g., \cite{EncyclopediaOPTICS} and references therein. Next we address the more subtle issue of what happens as a result of the feedback of the emitted radiation $\sharp$-fields into the Hamiltonian. \newpage \smallskip \noindent\textbf{3.5.1.vii: On the transition to the ground state.} \smallskip In this section we assume the initial state of the atom was the first excited state $(n=2,\ell=1,m=1,\varsigma=+)$, and that the stipulated Gaussian radiation beam pulse with Lyman-$\alpha$ frequency has moved through the atom already.
Since to first order in perturbation theory our model exhibits all the contributions that the usual Schr\"odinger-type perturbation theory features, plus some additional contributions, we may take guidance from Fermi's Golden Rule and conclude that the atom will have ended up in a superposition of this state and the ground state, plus perhaps some admixture of a continuum state --- which we ignored in our perturbative computations, and which we continue to ignore for simplicity also in this subsection. The question now is whether the transition to the ground state indeed happens in our theory. To find out, one has to compute in higher orders of perturbation theory, taking into account the interaction Hamiltonian generated by the emitted radiation, or one has to dispense with perturbative arguments altogether. In the following we give a non-perturbative argument for why we expect that this transition to the ground state indeed happens in this theory. Our Hamiltonian is the generator of the unitary dynamics for ${\Psi}$, but its expected value $\left\langle H_{\mbox{\tiny{{hyd}}}} + H_{\mbox{\tiny{{int}}}} \right\rangle + \big\langle E^\sharp_{\mbox{\tiny{rad}}}(t,{\boldsymbol{q}})\big\rangle$ does not seem to have the significance of furnishing a conserved energy for the system as a whole.
If this were a conserved quantity, or if one had a conserved quantity similar to \eqref{eq:SMenergyFctl}, say $\left\langle H_{\mbox{\tiny{{hyd}}}} + H_{\mbox{\tiny{{int}}}} \right\rangle + \frac{1}{8\pi}\int_{{\mathbb{R}}^3}\!\bigl(\abs{{\boldsymbol{E}}_{\mathrm{el}}^{\mbox{\tiny{rad}}}(t,{\boldsymbol{s}})}^2 + \abs{{\boldsymbol{B}}_{\mathrm{el}}^{\mbox{\tiny{rad}}}(t,{\boldsymbol{s}})}^2 \bigr)d^3\!s$, then one could feel inspired by the reasoning in Schr\"odinger--Maxwell theory (recall its discussion in section \ref{sec:Erwin}) and try to show that the field energy necessarily increases, which must come at the expense of the first two terms. If one could also show that the expected interaction Hamiltonian goes to zero as time increases, then one would have the basis for showing that the atom has made a transition to a lower energy state; and when starting in the first excited $\ell=1$ state, this would have to be the ground state. On the positive side, we can offer three potentially useful observations. First of all, $\frac{1}{8\pi}\int_{{\mathbb{R}}^3}\!\big(\abs{{\boldsymbol{E}}_{\mathrm{el}}^{\mbox{\tiny{rad}}}(t,{\boldsymbol{s}})}^2 + \abs{{\boldsymbol{B}}_{\mathrm{el}}^{\mbox{\tiny{rad}}}(t,{\boldsymbol{s}})}^2 \big)d^3\!s$ is a lower bound to $\big\langle E^\sharp_{\mbox{\tiny{rad}}}(t,{\boldsymbol{q}})\big\rangle$, by Jensen's inequality. Initially these two agree, for the incoming vacuum radiation beam is ${\boldsymbol{q}}$-independent. Thus initially the energy of the expected electromagnetic radiation fields is just the field energy of the incoming radiation, and the emission will increase it. So $\langle H_{\mbox{\tiny{rad}}} \rangle$ also increases from its initial value, all the more so because the emitted $\sharp$-fields do depend on ${\boldsymbol{q}}$.
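The Jensen-inequality statement invoked here --- the field energy of the expected field is bounded above by the expected field energy --- is just convexity of $|\cdot|^2$. A toy numerical illustration, with hypothetical weights $p_i$ playing the role of $|\Psi|^2$ and sample vectors ${\boldsymbol{E}}_i$ playing the role of the ${\boldsymbol{q}}$-dependent field:

```python
import numpy as np

rng = np.random.default_rng(0)

p = rng.random(50)
p /= p.sum()                  # probability weights, playing the role of |Psi|^2
E = rng.normal(size=(50, 3))  # field samples E(q_i)

expected_energy = float(np.sum(p*np.sum(E**2, axis=1)))          # <|E|^2>
energy_of_mean = float(np.sum(np.sum(p[:, None]*E, axis=0)**2))  # |<E>|^2
print(expected_energy >= energy_of_mean)  # -> True
```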
Second, in our model the Hamiltonian $H = H_{\mbox{\tiny{{hyd}}}} + H_{\mbox{\tiny{{int}}}}+ H_{\mbox{\tiny{{rad}}}}$, so \begin{equation}\label{ddtHhyd} \tfrac{\mathrm{d} \ }{\mathrm{d} t} \left\langle H_{\mbox{\tiny{{hyd}}}} \right\rangle = \left\langle \tfrac{1}{i\hbar}[H_{\mbox{\tiny{{hyd}}}}, (H_{\mbox{\tiny{{int}}}} + H_{\mbox{\tiny{rad}}})]\right\rangle, \end{equation} and r.h.s.(\ref{ddtHhyd}) is not manifestly zero. Of course, this observation does not imply that $\langle H_{\mbox{\tiny{hyd}}} \rangle$ will decrease when the atom radiates, but the atom could not transit from an excited state to the ground state if $\langle H_{\mbox{\tiny{hyd}}} \rangle$ were conserved, as it is under Schr\"odinger's equation (\ref{eq:ERWINeqnMatterWaveBOhydrogen}). What we can conclude, however, is that \emph{if} $\langle H_{\mbox{\tiny{hyd}}} \rangle$ \emph{decreases as a consequence of the emission of electromagnetic radiation then}, since $H_{\mbox{\tiny{{hyd}}}}$ is bounded below, \emph{the evolution of $\Psi(t,{\boldsymbol{q}})$, starting in our initial state, will inevitably approach the ground state asymptotically in time}. Indeed, note that by our hypothesis that $\langle H_{\mbox{\tiny{hyd}}} \rangle$ decreases as a consequence of the emission of radiation, $\langle H_{\mbox{\tiny{hyd}}} \rangle$ cannot settle down to a value strictly between the ground state energy and the first excited state energy, for then $\Psi$ would have to be in a superposition of eigenstates, which inevitably would lead to the emission of radiation (as shown above) and to the further decrease of $\langle H_{\mbox{\tiny{hyd}}} \rangle$. Third, for it to be possible that $\langle H_{\mbox{\tiny{hyd}}} \rangle$ approaches its ground state value asymptotically, r.h.s.(\ref{ddtHhyd}) would have to approach zero asymptotically.
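Relation \eqref{ddtHhyd} is the standard Ehrenfest-type identity $\frac{d}{dt}\langle A\rangle = \langle \frac{1}{i\hbar}[A,H]\rangle$ applied to $A=H_{\mbox{\tiny{hyd}}}$, using $[H_{\mbox{\tiny{hyd}}},H_{\mbox{\tiny{hyd}}}]=0$. A two-level toy check in Python ($\hbar=1$; the matrices are hypothetical stand-ins, not derived from the model):

```python
import numpy as np
from scipy.linalg import expm

H_hyd = np.diag([-0.5, -0.125])                # toy "hydrogen" levels
H_p = 0.05*np.array([[0.0, 1.0], [1.0, 0.0]])  # toy H_int + H_rad coupling
H = H_hyd + H_p
psi0 = np.array([0.0, 1.0], dtype=complex)     # start in the excited level

def expval(A, psi):
    return float(np.real(np.conj(psi) @ (A @ psi)))

t, dt = 0.7, 1e-5
# l.h.s.: d<H_hyd>/dt by central finite differences
lhs = (expval(H_hyd, expm(-1j*H*(t + dt)) @ psi0)
       - expval(H_hyd, expm(-1j*H*(t - dt)) @ psi0))/(2*dt)
# r.h.s.: <[H_hyd, H']/i> evaluated at time t
comm = (H_hyd @ H_p - H_p @ H_hyd)/1j
rhs = expval(comm, expm(-1j*H*t) @ psi0)
print(lhs, rhs)
```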
We will now see that this scenario is compatible with the dynamical equations. As shown earlier, if the atom is in its ground state $\Psi$, then the associated velocity field ${\boldsymbol{v}}$ vanishes. The radiation $\sharp$-field still contributes to the velocity field, but despite the still lingering ${\boldsymbol{Q}}_q(\tau)$ motions, the main dynamics of the $\sharp$-field is an outward moving flash of radiation --- at least this is suggested by first-order perturbation theory. Thus the $\sharp$-field equations should become ${\boldsymbol{q}}$-independent and the emission of radiation inevitably would fade away, compatible with $\langle H_{\mbox{\tiny{hyd}}} \rangle$ ending its decrease. Although the already emitted $\sharp$-field radiation is time- and space-dependent, since this electromagnetic radiation leaves the Bohr-radius-sized region of the atom at the speed of light, very soon after the emission process has essentially ended it will become effectively ${\boldsymbol{q}}$-independent. The large amplitude region of the $\sharp$-field radiation will be concentrated around a spherical shell of radius $ct$ away from the origin, and the integrals $E^\sharp_{\mbox{\tiny{rad}}}$ and ${\boldsymbol{P}}^\sharp_{\mbox{\tiny{{el}}}}$ should become essentially independent of ${\boldsymbol{q}}$ for ${\boldsymbol{q}}$ in the atomic vicinity of the proton (origin), and exponentially (in $|{\boldsymbol{q}}|$) suppressed in the expected value functional. Moreover, $E^\sharp_{\mbox{\tiny{rad}}}$ and ${\boldsymbol{P}}^\sharp_{\mbox{\tiny{{el}}}}$ should approach time-independence because of energy-momentum conservation for free Maxwell radiation fields (which is what ${\boldsymbol{q}}$-independent $\sharp$-fields are). As such, the expected commutator $\left\langle \tfrac{1}{i\hbar}[H_{\mbox{\tiny{{hyd}}}}, (H_{\mbox{\tiny{{int}}}} + H_{\mbox{\tiny{rad}}})]\right\rangle\to 0$, and $\langle H_{\mbox{\tiny{hyd}}} \rangle$ approaches a constant. 
In the scenario just described, the radiation Hamiltonian effectively becomes a constant number added to $H_{\mbox{\tiny{hyd}}}$. But this Hamiltonian has the same eigenfunctions as the initial Hamiltonian. Thus, the assumption of the atom settling down to the ground state wave function of the traditional hydrogen Hamiltonian, accompanied by the dynamical emission-of-``a-flash-of''-radiation scenario, is at least well compatible with our dynamical quantum-mechanical equations. \newpage \noindent\textbf{3.5.1.viii: Incorporating electron spin and static ${\boldsymbol{E}}^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}$ and ${\boldsymbol{B}}^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}$} \smallskip Everything we discussed in the previous subsection generalizes to the case of hydrogen when the $\sharp$-fields include laboratory-generated static ${\boldsymbol{E}}^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}$ and ${\boldsymbol{B}}^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}$. Of course, since we have neglected electron spin so far, the \emph{anomalous Zeeman effect} of hydrogen would not show up. We can easily generalize all this to an electron with spin by switching from Schr\"odinger's to Pauli's equation. Thus, $\Psi(t,{{\boldsymbol{q}}})$ becomes a two-component Pauli spinor, and the Schr\"odinger-type equation (\ref{ERWINeqnHYDROcoupleSHARP}) is replaced by a Pauli-type equation \begin{equation} \left(i \hbar \partial_t - E^\sharp (t,{{\boldsymbol{q}}})\right)\Psi(t,{{\boldsymbol{q}}}) = \tfrac{1}{2\mEL}\left(\boldsymbol\sigma\cdot\left(-i \hbar \nabla_{{\boldsymbol{q}}} - {\boldsymbol{P}}^\sharp_{\mathrm{el}}(t,{{\boldsymbol{q}}}) \right)\right)^2 \Psi(t,{{\boldsymbol{q}}}). 
\label{eq:PAULIeqn} \end{equation} For a spinor, the density $\varrho = \Psi^\dagger\Psi = |\Psi_+|^2 + |\Psi_-|^2$, where the suffix ${}_\pm$ indicates the upper and lower components of the Pauli spinor, and the probability current density \begin{equation} {\boldsymbol{J}} (t,{\boldsymbol{q}}) = \Im \left(\Psi^\dagger\tfrac{1}{\mEL} \bigl(\hbar \nabla_{{\boldsymbol{q}}} - i{\boldsymbol{P}}^\sharp_{\mbox{\tiny{{el}}}}\bigr) \Psi\right)(t,{\boldsymbol{q}}) + \tfrac{1}{2\mEL} \hbar \nabla_{{\boldsymbol{q}}} \times \bigl(\Psi^\dagger\boldsymbol\sigma\Psi\bigr)(t,{\boldsymbol{q}}); \label{eq:PAULIspinorJ} \end{equation} the curl term is optional, yet suggested by Dirac's equation (see \cite{BoHi}). The velocity field is again defined by ${\boldsymbol{J}} = \varrho {\boldsymbol{v}}$. One now gets the correct nonrelativistic hydrogen spectra, including the anomalous Zeeman effect and the Stark effect. Relativistic corrections, such as spin-orbit coupling, are of course not included. \section{Systems with multiple electrons and nuclei}\label{manyatoms} Everything we discussed in the previous section on hydrogen radiation generalizes to the case of a many-electron atom coupled with the $\sharp$-fields. 
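In passing, the algebra by which the Pauli kinetic term in (\ref{eq:PAULIeqn}) produces a magnetic moment coupling is the identity $(\boldsymbol\sigma\cdot{\boldsymbol{a}})(\boldsymbol\sigma\cdot{\boldsymbol{b}}) = {\boldsymbol{a}}\cdot{\boldsymbol{b}} + i\boldsymbol\sigma\cdot({\boldsymbol{a}}\times{\boldsymbol{b}})$ (with the identity matrix understood in the first term): for the operator-valued ${\boldsymbol{a}}={\boldsymbol{b}}=-i\hbar\nabla_{{\boldsymbol{q}}}-{\boldsymbol{P}}^\sharp_{\mathrm{el}}$ the cross term does not vanish and contributes $-\tfrac{\hbar}{2\mEL}\boldsymbol\sigma\cdot\bigl(\nabla_{{\boldsymbol{q}}}\times{\boldsymbol{P}}^\sharp_{\mathrm{el}}\bigr)$ to the de facto Hamiltonian. A quick numerical check of the identity (a plain-Python sketch, not part of the model):

```python
import random

# Pauli matrices as 2x2 complex nested lists.
I2 = [[1, 0], [0, 1]]
SX = [[0, 1], [1, 0]]
SY = [[0, -1j], [1j, 0]]
SZ = [[1, 0], [0, -1]]
SIGMA = [SX, SY, SZ]

def mat_add(*ms):
    return [[sum(m[i][j] for m in ms) for j in range(2)] for i in range(2)]

def mat_scale(c, m):
    return [[c * m[i][j] for j in range(2)] for i in range(2)]

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def sdot(v):
    """sigma · v for a 3-vector v."""
    return mat_add(*(mat_scale(v[k], SIGMA[k]) for k in range(3)))

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def dot(a, b):
    return sum(a[k] * b[k] for k in range(3))

def close(a, b, tol=1e-12):
    return all(abs(a[i][j] - b[i][j]) < tol for i in range(2) for j in range(2))

random.seed(1)
a = [random.gauss(0, 1) for _ in range(3)]
b = [random.gauss(0, 1) for _ in range(3)]

# (sigma·a)(sigma·b) = (a·b) 1 + i sigma·(a×b)
lhs = mat_mul(sdot(a), sdot(b))
rhs = mat_add(mat_scale(dot(a, b), I2), mat_scale(1j, sdot(cross(a, b))))
assert close(lhs, rhs)

# For a = b the cross term vanishes: (sigma·a)^2 = |a|^2 1.
assert close(mat_mul(sdot(a), sdot(a)), mat_scale(dot(a, a), I2))
```

For numerical (commuting) vectors with ${\boldsymbol{a}}={\boldsymbol{b}}$ the cross term vanishes, which is why the magnetic coupling only appears once the components of $-i\hbar\nabla_{{\boldsymbol{q}}}-{\boldsymbol{P}}^\sharp_{\mathrm{el}}$ fail to commute.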
Since the spectrum of a many-electron atom won't come out right without electron spin, even in the absence of an applied laboratory-generated magnetic field, we have to replace the $N$-electron Schr\"odinger-type equation (\ref{eq:ERWINeqnNEW}) with the $N$-electron Pauli-type equation \begin{equation} \boxed{\left(i \hbar \partial_t - E^\sharp (t,\vec{\boldsymbol{q}})\right)\Psi(t,\vec{\boldsymbol{q}}) = {\textstyle\sum\limits_{n=1}^N} \tfrac{1}{2\mEL}\left(\boldsymbol{\sigma}_n\cdot\left(-i \hbar \nabla_{{\boldsymbol{q}}_n} - {\boldsymbol{P}}^\sharp_n(t,\vec{\boldsymbol{q}}) \right)\right)^2 \Psi(t,\vec{\boldsymbol{q}})}, \label{eq:PAULIeqnNEW} \end{equation} with $\Psi(t,\vec{\boldsymbol{q}})$ an $N$-body Pauli spinor wave function that is antisymmetric under the permutation group $S_N$. For the bound-state spectrum in the presence of laboratory-generated static external fields, the de facto Hamiltonian extracted from this equation is a sum of the many-electron Hamiltonian at l.h.s.(\ref{eq:ErwinEQstationaryRELext}) plus Pauli terms $- \frac{e \hbar}{2\mEL c}\boldsymbol\sigma_j\cdot {\boldsymbol{B}}^{\mbox{\tiny{ext}}}_{\mbox{\tiny{lab}}}({\boldsymbol{q}}_j)$. By `de facto' we mean that an irrelevant additive constant has been subtracted from $H$. In the general dynamical situation, an interaction Hamiltonian $H_{\mbox{\tiny{int}}}$ that is a sum of similar expressions as before (one for each electron) also emerges from (\ref{eq:PAULIeqnNEW}), as does the by now familiar radiation Hamiltonian. Next we can take this generalization even further, to the many-nuclei problem of molecules and solids, at least as long as we employ the Born--Oppenheimer approximation. In that case the generalization from a many-electron atom to a system of many electrons and many nuclei is entirely straightforward. Indeed, the many-electron Pauli equation (\ref{eq:PAULIeqnNEW}), which governs the evolution of the $N$-electron wave function with spin, is unchanged in its appearance. 
What changes is that the ${\boldsymbol{P}}_n^\sharp$ and $E^\sharp$ are now computed from solutions to the $\sharp$-field equations in which the source term of the constraint equation (\ref{eq:aMdivEsharp}) includes $K$ nuclei rather than only one, i.e. (\ref{eq:aMdivEsharp}) changes to \begin{alignat}{1}\hspace{-.3truecm} \nabla_{\boldsymbol{s}}\cdot{\boldsymbol{E}}^\sharp &=\label{eq:aMdivEsharpMANYnuclei} 4\pi e\Bigl({\textstyle\sum\limits_{k=1}^K}Z_k \delta^{(a)}_{{\boldsymbol{q}}_k^+}({\boldsymbol{s}}) - {\textstyle\sum\limits_{n=1}^N} \delta^{(a)}_{{\boldsymbol{q}}_n}({\boldsymbol{s}}) \Bigr), \end{alignat} where the positions of the nuclei are distinguished from those of the electrons by the superscript ${}^+$. The energy of the pertinent electrostatic $\sharp$-field solution, provided no two charged balls of radius $a$ overlap, is \begin{alignat}{1}\hspace{-.3truecm} \frac{1}{8\pi} \int_{{\mathbb{R}}^3} \big|{\boldsymbol{E}}^\sharp({\boldsymbol{s}};\vec{\boldsymbol{q}} |\vec{\boldsymbol{q}}^+)\big|^2 \mathrm{d}^3s = \label{eq:HAMfromFIELDenergyMANYz} & \\ \notag E_{\mbox{\tiny{self}}} + {\textstyle\sum\sum\limits_{\hskip-.7truecm 1 \leq j < k \leq K} } \frac{Z_jZ_ke^2}{|{\boldsymbol{q}}_j^+-{\boldsymbol{q}}_k^+|} & - {\textstyle{\sum\limits_{k=1}^K\sum\limits_{n=1}^N}} \frac{Z_k e^2}{|{\boldsymbol{q}}_n- {\boldsymbol{q}}_k^+|} + {\textstyle{\sum\sum\limits_{\hskip-.7truecm 1 \leq j < k \leq N}}} \frac{e^2}{|{\boldsymbol{q}}_j-{\boldsymbol{q}}_k|}, \end{alignat} where $E_{\mbox{\tiny{self}}} = \frac35\frac{e^2 }{a} \Big( N + {\textstyle{\sum\limits_{k=1}^K}}Z_k^2 \Big)$ is a constant; for smaller distances the Coulomb interactions are regularized. In the absence of laboratory-generated static external fields this is the correct Schr\"odinger potential of a many-nuclei many-electron system in Born--Oppenheimer approximation \cite{MaxOppi}. 
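The structure of r.h.s.(\ref{eq:HAMfromFIELDenergyMANYz}) is the many-body version of the elementary two-charge computation, sketched here for two non-overlapping balls (assuming, consistent with the self-energy constant $\frac35\frac{e^2}{a}$, that $\delta^{(a)}$ describes a uniformly charged ball):

```latex
% Two non-overlapping uniformly charged balls of radius a, charges e_1 and e_2,
% centered at x_1 and x_2, with electrostatic fields E_1 and E_2:
\begin{align*}
\frac{1}{8\pi}\int_{{\mathbb{R}}^3}\big|{\boldsymbol{E}}_1+{\boldsymbol{E}}_2\big|^2\,\mathrm{d}^3s
 &= \frac{1}{8\pi}\int_{{\mathbb{R}}^3}\big|{\boldsymbol{E}}_1\big|^2\,\mathrm{d}^3s
  + \frac{1}{8\pi}\int_{{\mathbb{R}}^3}\big|{\boldsymbol{E}}_2\big|^2\,\mathrm{d}^3s
  + \frac{1}{4\pi}\int_{{\mathbb{R}}^3}{\boldsymbol{E}}_1\cdot{\boldsymbol{E}}_2\,\mathrm{d}^3s \\
 &= \tfrac{3}{5}\tfrac{e_1^2}{a} + \tfrac{3}{5}\tfrac{e_2^2}{a}
  + \frac{e_1 e_2}{|{\boldsymbol{x}}_1-{\boldsymbol{x}}_2|}.
\end{align*}
% The cross term follows by writing E_1 = -grad(phi_1) and integrating by parts,
%   (1/4pi) int E_1 . E_2  =  (1/4pi) int phi_1 div(E_2)  =  int phi_1 rho_2,
% and by Newton's theorem each ball acts like a point charge outside itself.
```

Summing the cross terms over all pairs of the $N+K$ balls yields the three double sums in (\ref{eq:HAMfromFIELDenergyMANYz}), while the self-energies add up to the constant $E_{\mbox{\tiny{self}}}$.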
Laboratory-generated static external fields can be included also; we skip the calculation of the pertinent effective Hamiltonian. We close this brief section by noting that so far the locations of the ${\boldsymbol{q}}^+$ variables have been treated as given parameters, yet of course most arbitrary choices will not result in a physically relevant mathematical problem. Born and Oppenheimer \cite{MaxOppi} proposed to treat the quantum-mechanical expected value of \eqref{eq:HAMfromFIELDenergyMANYz} as a stand-in for the Hamiltonian of a classical Newtonian $K$-body problem. The ground state problem, for instance, would then require first the minimization of the quantum-mechanical Hamiltonian, given the ${\boldsymbol{q}}^+$ variables, followed by the minimization of its expected value w.r.t. those ${\boldsymbol{q}}^+$ variables. There is a huge literature on this; see also \cite{MaxKerson}. \newpage \section{Photons} Concerning the dynamics of an excited atom (for simplicity, say, hydrogen) coupled to the $\sharp$-fields, even the most favorable outcome in the model so far describes a scenario in which the atom transits into the ground state while emitting a flash of electromagnetic $\sharp$-field radiation whose quantum-mechanical expected value corresponds to the overall empirical emission of a large number of independently radiating hydrogen atoms, registered far away from the region that these hydrogen atoms occupy. This expected value of radiation is an essentially spherical shell of radius $ct$ of Maxwell fields. 
As we have shown, the flash of electromagnetic $\sharp$-field radiation itself will, for each generic position ${\boldsymbol{q}}$ of the electron, consist of a similar spherical shell, centered on ${\boldsymbol{q}}$ (significant only for ${\boldsymbol{q}}$ in the Bohr-radius-sized vicinity of the nucleus), plus a lingering contribution from ``outside the light cone.'' Clearly this is not what seems to happen in experiments: an atom which transits from an excited to its ground state seems to do so under the emission of photons, which get registered in localized photon detectors. The radiation $\sharp$-field, being spread out, cannot in itself represent such a localized event. However, the following, notationally trivial but conceptually radical change of perspective brings the photon into the model. Thus, we note that the $\sharp$-fields, which depend not only on their variables $t$ and ${\boldsymbol{s}}$ but also on the configuration space variable ${\boldsymbol{q}}$ of the hydrogen's electron, or more generally on the generic configuration space vector $\vec{\boldsymbol{q}}$ of $N$ electrons in case a many-electron system is considered, are more reminiscent of quantum-mechanical many-body wave functions than of a classical field. It is therefore very suggestive to contemplate that the variable ${\boldsymbol{s}}$ in the $\sharp$-fields and their $\sharp$-field equations does not represent a generic point in physical space but instead represents the generic position of a photon. In the next subsection we pursue this lead. 
\subsection{Systems with a single photon} To emphasize this radical change of perspective, that the emitted electromagnetic $\sharp$-field wave now is re-interpreted as a kind of quantum-mechanical wave function, we replace ${\boldsymbol{s}}$ in the $\sharp$-fields and their equations by ${\boldsymbol{q}}_{\mathrm{ph}}$, and we set ${\boldsymbol{q}}\mapsto{\boldsymbol{q}}_{\mathrm{el}}$ (respectively, $\vec{\boldsymbol{q}}\mapsto\vec{\boldsymbol{q}}_{\mathrm{el}}$) to distinguish the two types of position variables clearly. Next, comparing the $\sharp$-field wave equations and the Schr\"odinger equation, one is struck by the fact that the feedback from $\sharp$-fields into the Schr\"odinger or Pauli equation is through bilinear (and square of bilinear) functionals of the $\sharp$-fields, but the Schr\"odinger or Pauli $\Psi$ enters the $\sharp$-field equations via ${\boldsymbol{v}}$, computed from $\Psi$ (the ratio of two bilinear expressions in $\Psi$). Yet if we recall that $\vec{\boldsymbol{J}}=\varrho \vec{\boldsymbol{v}}$ with each three-dimensional component ${\boldsymbol{J}}$ given in (\ref{eq:PAULIspinorJ}), and (following Heinrich Weber \cite{Weber}; see endnote 114 in \cite{KTZphoton}) set ${\boldsymbol{E}}^\sharp(t,{\boldsymbol{q}}_{\mathrm{ph}};\vec{\boldsymbol{q}}_{\mathrm{el}})+i{\boldsymbol{B}}^\sharp(t,{\boldsymbol{q}}_{\mathrm{ph}};\vec{\boldsymbol{q}}_{\mathrm{el}})=: \eEL\hbar\boldsymbol{\Psi}(t,{\boldsymbol{q}}_{\mathrm{ph}};\vec{\boldsymbol{q}}_{\mathrm{el}})$, we obtain \begin{alignat}{1}\hspace{-1truecm} \Big[ i\hbar\partial_t+c\hbar\bigl(\nabla_{{\boldsymbol{q}}_{\mathrm{ph}}}\times\bigr)\! 
+i\hbar \vec{\boldsymbol{v}}(t,{\vec{\boldsymbol{q}}_{\mathrm{el}}}){\,\boldsymbol{\cdot}\,}\nabla_{\vec{\boldsymbol{q}}_{\mathrm{el}}}\!\Big]\boldsymbol{\Psi}(t,{\boldsymbol{q}}_{\mathrm{ph}};\vec{\boldsymbol{q}}_{\mathrm{el}}) = \label{eq:MdotPSIsharp} 4\pi \vec{\boldsymbol{v}}(t,{\vec{\boldsymbol{q}}_{\mathrm{el}}}){\,\boldsymbol{\cdot}\,} \delta^{(a)}_{{\vec{\boldsymbol{q}}_{\mathrm{el}}}}({\boldsymbol{q}}_{\mathrm{ph}}),& \\ \hbar \nabla_{{\boldsymbol{q}}_{\mathrm{ph}}}{\,\boldsymbol{\cdot}\,} \boldsymbol{\Psi}(t,{\boldsymbol{q}}_{\mathrm{ph}};\vec{\boldsymbol{q}}_{\mathrm{el}}) = \label{eq:MdivPSIsharp} 4\pi \Bigl(\delta^{(a)}_{{\boldsymbol{0}}}({\boldsymbol{q}}_{\mathrm{ph}}) - {\textstyle\sum\limits_n}\delta^{(a)}_{\vec{\boldsymbol{q}}_{\mathrm{el},n}}({\boldsymbol{q}}_{\mathrm{ph}}) \!\Bigr)\!,& \end{alignat} where $\vec{\boldsymbol{v}}{\,\boldsymbol{\cdot}\,}\delta^{(a)}_{\vec{\boldsymbol{q}}_{\mathrm{el}}}$ is shorthand for $\sum_n {\boldsymbol{v}}_n \delta^{(a)}_{\vec{\boldsymbol{q}}_{\mathrm{el},n}}$. If we now multiply (\ref{eq:MdotPSIsharp}) by $\varrho$, and recall that $\varrho\vec{\boldsymbol{v}}=\vec{\boldsymbol{J}}$, we exhibit a bilinear feedback from the $\Psi$ equation into the $\boldsymbol{\Psi}$ equations; this puts the coupled system of $\Psi$ and $\boldsymbol{\Psi}$ equations more on an equal footing. Yet since $\boldsymbol{\Psi}(t,{\boldsymbol{q}}_{\mathrm{ph}};\vec{\boldsymbol{q}}_{\mathrm{el}})$ lives on the joint configuration space for electron and photon, it is very tempting to let oneself be inspired by the speculations of de Broglie, Born, and Bohm, and to think of $\boldsymbol{\Psi}(t,{\boldsymbol{q}}_{\mathrm{ph}};\vec{\boldsymbol{q}}_{\mathrm{el}})$ as a guiding field for the photon. Thus we also need a guiding equation for the actual position of the photon in physical space. Let ${\boldsymbol{q}}_{\mathrm{ph}}(t)$ be its position at time $t$. 
Then it is suggestive in this semi-relativistic setting to postulate that the photon moves according to the guiding equation \begin{alignat}{1}\label{eq:photonGUIDING} \frac{\mathrm{d} {\boldsymbol{q}}_{\mathrm{ph}}(t)}{\mathrm{d} t}= c\frac{\Im\left(\boldsymbol{\Psi}^*(t,{\boldsymbol{q}}_{\mathrm{ph}};\vec{\boldsymbol{q}}_{\mathrm{el}}(t))\times\boldsymbol{\Psi}(t,{\boldsymbol{q}}_{\mathrm{ph}};\vec{\boldsymbol{q}}_{\mathrm{el}}(t))\right)} {\quad \boldsymbol{\Psi}^*(t,{\boldsymbol{q}}_{\mathrm{ph}};\vec{\boldsymbol{q}}_{\mathrm{el}}(t))\cdot\boldsymbol{\Psi}(t,{\boldsymbol{q}}_{\mathrm{ph}};\vec{\boldsymbol{q}}_{\mathrm{el}}(t))} \Biggr|_{{\boldsymbol{q}}_{\mathrm{ph}}={\boldsymbol{q}}_{\mathrm{ph}}(t)}. \end{alignat} Note that the r.h.s. is homogeneous of degree 0, so even though $\int |\boldsymbol\Psi|^2\mathrm{d}^3q_{\mathrm{ph}}\mathrm{d}^3q_{\mathrm{el}}$ is generally not conserved, its changing value does not affect the law of motion. \smallskip Note furthermore that the magnitude of the guiding velocity field is generally less than $c$, but equals $c$ in plane wave solutions. So if this guiding law contains a grain of truth, then photons would appear to move at the speed of light only in experiments where one can, in good approximation, assume a plane wave. \smallskip \noindent \textbf{Remark}: \emph{Einstein pondered a guiding field for photons (his ``quanta of light'') on \emph{physical spacetime}, obeying a relativistic field equation. In subsection \ref{sec:sharpFIELDS} we noted that the evaluation of the $\sharp$-fields with the actual electron position ${\boldsymbol{q}}_{\mathrm{el}}(t)$ in place of the generic ${\boldsymbol{q}}_{\mathrm{el}}$ turns the $\sharp$-fields into solutions of the classical Maxwell--Lorentz field equations for point charges, viz. (\ref{eq:MdotEactual})--(\ref{eq:MdivBactual}). 
In this sense it would seem that with the guiding equation (\ref{eq:photonGUIDING}) one comes as close as one can get to realizing Einstein's surmise that the classical electromagnetic field guides the photons.} \smallskip Our next step is to upgrade to a quantum-mechanical model of radiating atoms in which a highly excited atom may emit several photons while cascading down to its ground state. \newpage \subsection{Systems with many photons} Photons are thought to not interact with each other directly but only with charged particles. In quantum electrodynamics this includes virtual electron-positron pairs, which effectively allow photon-photon scattering without a real charged particle mediating the interaction; but electron-positron pair creation / annihilation is not part of the semi-relativistic so-called standard model of everyday matter, and not part of our purely quantum-mechanical model. Therefore we will implement many photons in such a way that they do not interact with each other but only with the real (i.e. not virtual) charged particles of the model. There are a number of requirements which a generalization of our model to a system of equations for an atom in the presence of many photons needs to satisfy. First of all, since photons are spin-$1$ bosons, their quantum-mechanical $L$-photon wave function ${\boldsymbol{\Psi}}^L$ has to be permutation-symmetric. Thus, the generalized $L$-photon $\sharp$-field ${\boldsymbol{\Psi}}^L(t,\vec{\boldsymbol{q}}_{\mathrm{ph}};\vec{\boldsymbol{q}}_{\mathrm{el}})$ takes values in the closure of the $L$-fold symmetrized tensor products of single-photon $\boldsymbol{\Psi}(t,{\boldsymbol{q}}_{\mathrm{ph}}^{\ell};\vec{\boldsymbol{q}}_{\mathrm{el}})$ over $\ell =1,...,L$. Second, the stationary states must produce the correct atomic (molecular, etc.) energy spectra. 
It is straightforward to verify that our single-photon $\sharp$-field equations (\ref{eq:MdotPSIsharp}), (\ref{eq:MdivPSIsharp}) (in Weber notation) are the single-photon special case of the following equations for the $L$-photon wave function ${\boldsymbol{\Psi}}^L$, \emph{conditioned} on the generic $N$-electron configuration, \begin{alignat}{1} &\hspace{-0.8truecm} \Bigl(i\hbar\partial_t+ c\hbar{\textstyle{\sum\limits_\ell}}\nabla_{{\boldsymbol{q}}_{\mathrm{ph}}^\ell}\times_\ell\Bigr){\boldsymbol{\Psi}}^L(t,\vec{\boldsymbol{q}}_{\mathrm{ph}};\vec{\boldsymbol{q}}_{\mathrm{el}}) + i\hbar \left(\vec{\boldsymbol{v}}(t,{\vec{\boldsymbol{q}}_{\mathrm{el}}}){\,\boldsymbol{\cdot}\,}\nabla_{\vec{\boldsymbol{q}}_{\mathrm{el}}}\right){\boldsymbol{\Psi}}^L(t,\vec{\boldsymbol{q}}_{\mathrm{ph}};\vec{\boldsymbol{q}}_{\mathrm{el}}) \notag \\ \label{eq:MdotPSIsharpL} & \phantom{nixnixnixnixnix} = 4\pi \tfrac{1}{\sqrt{L}}{\textstyle{\sum\limits_\ell}} {\boldsymbol{\Psi}}^{L-1}(t,\vec{\boldsymbol{q}}_{\mathrm{ph}}^{\hat\ell};\vec{\boldsymbol{q}}_{\mathrm{el}}) {\otimes_\ell} \vec{\boldsymbol{v}}(t,{\vec{\boldsymbol{q}}_{\mathrm{el}}}){\,\boldsymbol{\cdot}\,}\delta^{(a)}_{{\vec{\boldsymbol{q}}_{\mathrm{el}}}}({\boldsymbol{q}}_{\mathrm{ph}}^\ell), \\ &\hspace{-0.7truecm} \label{eq:MdivPSIsharpL} \hbar \nabla_{\vec{\boldsymbol{q}}_{\mathrm{ph}}}{\,\boldsymbol{\cdot}\,} {\boldsymbol{\Psi}}^L(t,\vec{\boldsymbol{q}}_{\mathrm{ph}};\vec{\boldsymbol{q}}_{\mathrm{el}}) = 4\pi \tfrac{1}{\sqrt{L}} {\textstyle{\sum\limits_\ell}} {\boldsymbol{\Psi}}^{L-1}(t,\vec{\boldsymbol{q}}_{\mathrm{ph}}^{\hat\ell};\vec{\boldsymbol{q}}_{\mathrm{el}}) \Bigl(\delta^{(a)}_{{\boldsymbol{0}}}({\boldsymbol{q}}_{\mathrm{ph}}^\ell) - {\textstyle{\sum\limits_n}}\delta^{(a)}_{{\boldsymbol{q}}_{\mathrm{el}},n}({\boldsymbol{q}}_{\mathrm{ph}}^\ell) \Bigr). 
\end{alignat} Here, ${\boldsymbol{\Psi}}^{L-1}(t,\vec{\boldsymbol{q}}_{\mathrm{ph}}^{\hat\ell};\vec{\boldsymbol{q}}_{\mathrm{el}})$ is an $(L-1)$-photon wave function, \emph{conditioned} on the generic $N$-electron configuration $\vec{\boldsymbol{q}}_{\mathrm{el}}$, and $\vec{\boldsymbol{q}}_{\mathrm{ph}}^{\hat\ell}$ is a $3(L-1)$-dimensional generic configuration space position of $L-1$ photons, obtained from $\vec{\boldsymbol{q}}_{\mathrm{ph}}$ by removing ${\boldsymbol{q}}_{\mathrm{ph}}^\ell$. Moreover, $\nabla_{\vec{\boldsymbol{q}}_{\mathrm{ph}}}{\,\boldsymbol{\cdot}\,} {\boldsymbol{\Psi}}^L(t,\vec{\boldsymbol{q}}_{\mathrm{ph}};\vec{\boldsymbol{q}}_{\mathrm{el}})$ is a sum over $\ell\in\{1,...,L\}$ of the $\ell$-th divergence operator acting on the $\ell$-th factor of ${\boldsymbol{\Psi}}^L$. Finally, ${\boldsymbol{\Psi}}^{L-1}(t,\vec{\boldsymbol{q}}_{\mathrm{ph}}^{\hat\ell};\vec{\boldsymbol{q}}_{\mathrm{el}}){\otimes_\ell} \vec{\boldsymbol{v}}(t,{\vec{\boldsymbol{q}}_{\mathrm{el}}}) {\,\boldsymbol{\cdot}\,}\delta^{(a)}_{{\vec{\boldsymbol{q}}_{\mathrm{el}}}}({\boldsymbol{q}}_{\mathrm{ph}}^\ell)$ manifestly resembles an $L$-photon wave function obtained from an $(L-1)$-photon wave function, both conditioned on the $N$-electron configuration, by applying a ``single-photon creation operator'' in which $\vec{\boldsymbol{v}}{\,\boldsymbol{\cdot}\,}\delta^{(a)}_{{\vec{\boldsymbol{q}}_{\mathrm{el}}}}({\boldsymbol{q}}_{\mathrm{ph}}^\ell)$ takes the place of the $\ell$-th factor. Therefore, tentatively we may consider (\ref{eq:MdotPSIsharpL}), (\ref{eq:MdivPSIsharpL}) as a plausible $L$-photon wave equation, conditioned on the generic $N$-electron configuration. \newpage \noindent \textbf{Remark}: \emph{Of course, in writing down (\ref{eq:MdotPSIsharpL}), (\ref{eq:MdivPSIsharpL}) we had the benefit of hindsight offered by the well-developed creation / annihilation operator formalism. 
Yet also this started in the 1920s, with the papers of Jordan \cite{Jordan}, enhanced jointly with Wigner \cite{JW}, and by Dirac \cite{GoldenRule}. Moreover, it is remarkable that the $L=1$ special case is just the $\sharp$-field equations obtained from interpreting the Maxwell--Lorentz field equations that Schr\"odinger had written down as quantum-mechanical expected values w.r.t. the Born probability measure for the $N$ electrons, $\varrho = |\Psi|^2$.} \medskip Having seen the creation operator formalism appear in the single-photon equations for ${\boldsymbol{\Psi}}$ (via re-interpretation of the original charged source terms of the electromagnetic $\sharp$-field equations), and having readily generalized it to the $L$-photon equations for ${\boldsymbol{\Psi}}^L$, we next note that the annihilation operator formalism already occurs in the $\Psi$ equations when they are re-interpreted accordingly. Indeed, it suffices to note that the ``$\sharp$-field energy'' \begin{equation} E^\sharp(t,\vec{\boldsymbol{q}}_{\mathrm{el}})=\frac{1}{8\pi}\int_{{\mathbb{R}}^3} \big({\boldsymbol{\Psi}}^* \cdot {\boldsymbol{\Psi}}\big) (t,{\boldsymbol{q}}_{\mathrm{ph}};\vec{\boldsymbol{q}}_{\mathrm{el}})\mathrm{d}^3q_{\mathrm{ph}}, \label{eq:FIELDenergyPHOTON} \end{equation} i.e. $E^\sharp$ at l.h.s.(\ref{eq:ERWINeqnNEW}) rewritten in Weber notation, is an ``annihilation operator'' acting on the one-photon wave function ${\boldsymbol{\Psi}}$ conditioned on the $N$-electron configuration, with the ``annihilation'' effected by the same ${\boldsymbol{\Psi}}$ itself. To generalize this to conditioned $L$-photon wave functions ${\boldsymbol{\Psi}}^L$, it turns out that one has several options if the only requirement is that for stationary states the same empirically correct Pauli energy spectra should be obtained. Therefore one needs further guidance in one's choice. A suggestive requirement to impose is to retain the bilinear type of feedback. 
In this vein, let $\Pi_\ell$ denote the projector onto the $\ell$-th factor of ${\boldsymbol{\Psi}}^L$. Then it is readily verified that \begin{equation} E_L^\sharp(t,\vec{\boldsymbol{q}}_{\mathrm{el}}):= \frac{1}{8\pi L} \sum_\ell \int_{{\mathbb{R}}^{3}} \Big(\big(\Pi_\ell{{\boldsymbol{\Psi}}^L}\big)^* {\,\boldsymbol{\cdot}\,} \big(\Pi_\ell{\boldsymbol{\Psi}}^L\big)\Big) (t,{\boldsymbol{q}}^\ell_{\mathrm{ph}};\vec{\boldsymbol{q}}_{\mathrm{el}})\mathrm{d}^{3}q_{\mathrm{ph}} \label{eq:FIELDenergyLphotonsELLsum} \end{equation} produces the correct atomic (etc.) Hamiltonian in the absence of radiation (i.e. static $\sharp$-fields). To see that this leads to the same spectrum as previously discussed (for convenience, here only in the absence of laboratory-generated external fields), we observe that the stationary states $\Psi$ with purely electrostatic ${\boldsymbol{\Psi}}^L$ admit a separation of variables, so ${\boldsymbol{\Psi}}^L$ is a static Hartree state, consisting of $L$ copies of the Coulomb field. For now this is our tentative proposal for the $\sharp$-field feedback term when exactly a single ${\boldsymbol{\Psi}}^L$ is coupled to $\Psi$. To back up our claim that one has other options, we note that without the bilinearity requirement also \begin{equation} \widetilde{E}_L^\sharp(t,\vec{\boldsymbol{q}}_{\mathrm{el}}):= \frac{1}{8\pi}\Big(\int_{{\mathbb{R}}^{3L}} \big({{\boldsymbol{\Psi}}^L}^* {\,\boldsymbol{\cdot}\,} {\boldsymbol{\Psi}}^L\big) (t,\vec{\boldsymbol{q}}_{\mathrm{ph}};\vec{\boldsymbol{q}}_{\mathrm{el}})\mathrm{d}^{3L}q_{\mathrm{ph}}\Big)^{1/L} \label{eq:FIELDenergyLphotons} \end{equation} yields the same Pauli spectra. Indeed, for the static Hartree state the integral at r.h.s.(\ref{eq:FIELDenergyLphotons}) is the $L$-th power of the integral at r.h.s.(\ref{eq:FIELDenergyPHOTON}), and the outer power $1/L$ in (\ref{eq:FIELDenergyLphotons}) corrects this. 
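The Hartree factorization invoked here can be spelled out in one line. For the product state ${\boldsymbol{\Psi}}^L(t,\vec{\boldsymbol{q}}_{\mathrm{ph}};\vec{\boldsymbol{q}}_{\mathrm{el}}) = \bigotimes_{\ell=1}^{L}{\boldsymbol{\Psi}}(t,{\boldsymbol{q}}_{\mathrm{ph}}^{\ell};\vec{\boldsymbol{q}}_{\mathrm{el}})$ one has

```latex
\begin{equation*}
\int_{{\mathbb{R}}^{3L}} \big|{\boldsymbol{\Psi}}^L\big|^2\, \mathrm{d}^{3L}q_{\mathrm{ph}}
= \prod_{\ell=1}^{L} \int_{{\mathbb{R}}^{3}} \big|{\boldsymbol{\Psi}}\big|^2\, \mathrm{d}^{3}q_{\mathrm{ph}}^{\ell}
= \Big( \int_{{\mathbb{R}}^{3}} \big|{\boldsymbol{\Psi}}\big|^2\, \mathrm{d}^{3}q_{\mathrm{ph}} \Big)^{L},
\end{equation*}
```

so the outer power $1/L$ returns exactly the single-photon field energy (\ref{eq:FIELDenergyPHOTON}), and the spectral calculation then proceeds as in the single-photon case.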
This observation may just be a curiosity, but it is helpful to realize that in the further development of the model one has to make choices. It is also readily possible now to couple all possible ${\boldsymbol{\Psi}}^L$ to $\Psi$. Thus, let $w^{}_L\geq 0$ with $\sum_{L\in{\mathbb{N}}} w^{}_L = 1$. Then a weighted sum of the $L$-photon terms, viz. \begin{equation} E^\sharp(t,\vec{\boldsymbol{q}}_{\mathrm{el}}):= \sum_{L\in{\mathbb{N}}} w^{}_L E_L^\sharp(t,\vec{\boldsymbol{q}}_{\mathrm{el}}), \label{eq:FIELDenergyALLphotons} \end{equation} is the most general non-negative ``$\sharp$-field energy'' feedback term that produces the correct Hamiltonian in the absence of radiation and which retains bilinearity. One also needs to extract corresponding three-dimensional $\sharp$-field momentum vectors ${\boldsymbol{P}}_n^\sharp$ from the ${\boldsymbol{\Psi}}^L$ that replace the current single-photon coupling terms in Schr\"odinger's, respectively Pauli's equation. We temporarily postpone this to a later revised and enlarged version of this paper. This concludes our demonstration that there seems to be a natural path to generalizing our radiating-atom model from the single-photon to a many-photon version. \subsection{Creation / annihilation of photons vs. their activation} It is remarkable that the source terms in (\ref{eq:MdotPSIsharpL}), (\ref{eq:MdivPSIsharpL}), which in our setup essentially suggest themselves as logical generalizations of the empirical charge and current density source terms in the Maxwell--Lorentz field equations, look very much like regularized boson creation operators in ``non-relativistic QFT.'' So this may well suggest that a mandatory next step is to consider the Fock space of all $L$-photon sectors, $L\in{\mathbb{N}}\cup\{0\}$, with a hierarchy of equations of the type (\ref{eq:MdotPSIsharpL}), (\ref{eq:MdivPSIsharpL}) or similar. 
This then would also seem to mandate the incorporation of the analogues of annihilation operators into the ${\boldsymbol{\Psi}}^L$ equations, in a similarly logically compelling manner. In keeping with the spirit of the whole quantum-mechanical approach, this should be done without invoking the second-quantization formalism. There is a different possibility, though, namely that what has the appearance of creation operators are really merely source terms for the photon wave function, not for the photons themselves. Also, the annihilation operators already made an appearance in the $\Psi$ equations, where they were originally implemented as $\sharp$-field energy (and momentum) operators without thinking of these as annihilation operators. Moreover, in our model so far the single-photon ${\boldsymbol{\Psi}}$ is never identically zero, due to the constraint equations, and in this sense there is no ``no photon'' state. Instead, what suggests itself is not to think of creation / annihilation of the photons themselves but of their ``activation,'' viz. their being set in motion by ${\boldsymbol{\Psi}}$ or rather ${\boldsymbol{\Psi}}^L$ getting excited above the electrostatic level. There may be infinitely many photons all the time, none getting destroyed or created, but perhaps only $L$ of them are in motion, or possibly different $L$ participate but with different weights $w_L^{}$. Clearly this is not yet sorted out, but it definitely seems worth pursuing. \vspace{-5pt} \section{Summary and Outlook}\vspace{-10pt} \subsection{Summary} In this paper we have developed a tentative semi-relativistic quantum-mechanical model of electrons and photons which interact with each other and with fixed atomic nuclei. The model accurately reproduces all the atomic and molecular (etc.) energy spectra of the so-called standard model of everyday matter, and it also describes the emission / absorption of photons by atoms. 
It also seems to capture at least some of the details of a single-photon emission process accurately, to the extent that can be expected from a semi-relativistic theory. Whether it captures all details accurately, and whether the many-photon generalization already captures the physics of a radiating atom qualitatively correctly and quantitatively accurately (to the extent which can reasonably be demanded from a semi-relativistic model of atoms and photons), is a different, and difficult, question which can only be answered after further careful analysis of the equations. Yet the semi-relativistic tentative quantum mechanics of electrons and photons developed in this paper gets so many things right already, qualitatively and quantitatively, that it seems reasonable to pursue this model further. We expect that it will serve as an intermediate stepping stone on the way to a completely satisfactory, \emph{macroscopically relativistic}, QM of electrons, photons, and their anti-particles --- indeed we believe that such a theory is feasible. By ``macroscopically relativistic'' we mean the intriguing possibility that relativity theory may only be valid as a ``quantum-mechanical expected value,'' as suggested by our model. The underlying theory itself would not be Lorentz covariant, yet its equations would be such that their quantum expected values are. Since macroscopic matter consists of a huge number of particles, by a law of large numbers the expected values would be essentially sharp in all macroscopic phenomena. Thus relativity theory would appear to be a law of nature only for all practical purposes, similar to thermodynamics, and would not reflect a fundamental symmetry of nature. This would offer a way out of the apparent conflict between Einstein's relativity theory (no influence outside of the lightcone) and quantum nonlocality as established by Bell \cite{BellBOOK}. 
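The law-of-large-numbers argument just invoked is easy to quantify in a toy calculation (not specific to our model): the relative fluctuation of an average of $n$ independent quantities decays like $1/\sqrt{n}$, so expected values become essentially sharp for macroscopic particle numbers.

```python
import random

random.seed(3)

def rel_fluctuation(n, trials=500):
    """Relative standard deviation of the mean of n i.i.d. uniform(0,1) samples,
    estimated empirically over a number of trials."""
    means = [sum(random.random() for _ in range(n)) / n for _ in range(trials)]
    mu = sum(means) / trials
    var = sum((m - mu) ** 2 for m in means) / trials
    return var ** 0.5 / mu

r_small = rel_fluctuation(100)
r_large = rel_fluctuation(10000)

# Fluctuations about the expected value shrink like 1/sqrt(n): already with
# n = 10^4 constituents the average is sharp to better than 1 percent.
assert r_small > r_large
assert r_large < 0.01
```

For the $\sim 10^{23}$ constituents of macroscopic matter the corresponding relative fluctuation would be of order $10^{-11}$, which is the quantitative sense in which expected values are ``essentially sharp'' in all macroscopic phenomena.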
\newpage \subsection{Outlook} Such a ``macroscopically relativistic'' quantum mechanics should be formulated with a single joint wave function of all these particles, not a coupled system of various partial wave functions. This joint wave function should obey a single linear wave equation. Before one gets there, one first may want to generalize our model by replacing Schr\"odinger's, respectively Pauli's equations by a Dirac equation. Thus, in the example of hydrogen, or a hydrogenic atom / ion, we should replace (\ref{eq:PAULIeqn}) by \begin{equation} \left(i \hbar \partial_0 - P_0 \right)\Psi = \boldsymbol\alpha\cdot\left(-i \hbar \nabla_{{\boldsymbol{q}}_{\mathrm{el}}} - {\boldsymbol{P}}^\sharp \right) \Psi +\mEL c\beta \Psi ,\hspace{-.2truecm} \label{eq:DIRACeqn} \end{equation} where $\boldsymbol\alpha$ and $\beta$ are the conventional Dirac matrices, $\Psi$ now is a Dirac bi-spinor (cf. \cite{Thaller}), and where we have dropped the argument $(t,{{\boldsymbol{q}}_{\mathrm{el}}})$ from $P_0$, ${\boldsymbol{P}}^\sharp$, and $\Psi$. This makes it plain that the square of a bilinear expression in the $\sharp$-fields which entered Schr\"odinger's, respectively Pauli's equations, was just a consequence of the non-relativistic approximation to a Dirac equation. Next, also the $\sharp$-field equations, generalizations of Maxwell's field equations, presumably will have to be replaced by counterparts more deserving of the name \emph{wave equation for a photon}; see \cite{KTZphoton} for such an equation describing a single free photon, which is a Dirac type equation acting on rank-two bi-spinors which can be decomposed into a system of four equations, part of which comprises two copies of formally Maxwell's field equations. 
This photon wave equation yields a Hamiltonian with the correct photon energies of $\hbar \omega = \hbar c|{\boldsymbol{k}}|$, where $\omega$ is the angular frequency of a plane photon wave function of wave vector ${\boldsymbol{k}}$, something we have not yet extracted from the current $\sharp$-field equations, and, like Dirac's equation for the electron, it has positive and negative energies. We have yet to address its generalization to the $L$-photon formalism. (TBA) But even with such improved quantum-mechanical wave equations that for a single free particle \emph{are} properly Lorentz covariant, we do need to couple them, and to avoid the familiar infinite self-energies of point charges, in this paper we simply regularized them with tiny charged balls of radius $a$. Balls of fixed radius $a$ are not a Lorentz-covariant concept, and thus the quantum-expectation value of our formalism cannot be completely Lorentz covariant. To fix this there are at least two options, both of which are worthy of pursuit. First, one can pick up on the ideas of Max Born, soon joined by Infeld, that the culprit is Maxwell's electromagnetic vacuum law. The Born--Infeld (BI) proposal is very intriguing, but the intimidating nonlinearity of their vacuum law makes it difficult to work with; cf. \cite{KieJSPa}, \cite{KieJSPb}. An alternative proposal was made by Fritz Bopp, and soon thereafter by Land\'e--Thomas, and then Podolsky (BLTP), to replace Maxwell's vacuum law with another linear law that, however, now involves second-order partial differential operators (in fact, Klein--Gordon operators). The classical BLTP electrodynamics of $N$ point charges has been proved to be locally well-posed as a joint initial value problem for fields and charges, and is fully Lorentz covariant; see \cite{KiePRD}, \cite{KTZonBLTP}. 
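To give the flavor of this modification (the following display is our own gloss on the Podolsky form of the BLTP vacuum law, with conventions that may differ from those of the cited works), the inhomogeneous Maxwell equations get replaced by \begin{equation} \Bigl(1 - \tfrac{1}{\varkappa^2}\Box\Bigr)\partial_\nu F^{\nu\mu} \;=\; \tfrac{4\pi}{c}\, j^\mu\;, \end{equation} where $\Box$ is the wave operator and $\varkappa>0$ is a new parameter with the dimension of an inverse length. The law is linear and Lorentz covariant, it involves the Klein--Gordon type operator $\varkappa^2-\Box$, and it renders the electrostatic potential of a point charge bounded at the location of the charge.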
It would seem to suggest itself as a way to replace the tiny balls of radius $a$ by true point particles, and thus to pave the ground for a quantum mechanics of charged point electrons, point nuclei, and photons that is macroscopically fully Lorentz covariant. Another intriguing possibility has been pioneered in \cite{TeufelTumulka}. There the UV infinities are removed thanks to what they call an ``interior-boundary'' condition, which does not mean a boundary condition at some interior boundary of a space (like, e.g., the center of a punctured disk), but refers to a relationship between an (indeed) internal boundary of $N>2$ particle configuration space (namely the set of coincidence points of two particles) and the interior region of the $N-2$ particle sector (note the hyphen!). [I suggest, if I may, to reverse the order to ``boundary-interior'' condition, to avoid confusion.] No UV cutoff, like balls or such, is needed, but this approach now does not preserve the $L^2$ norm of the $N$-particle wave functions and works in Fock space, with creation and annihilation operators acting in the usual manner. Finally, it may yet be feasible to implement Lorentz covariance at the microscopic many-body level with the help of a multi-time formalism; see \cite{KLTZ} for a recent Lorentz-covariant model of an interacting electron-photon system in 1+1 dimensions using multi-time formalism, and the references therein. The synchronization would yield equations that are not manifestly Lorentz covariant, and whether these can be matched with the types of equations developed in this paper remains to be seen. \bigskip \textbf{ACKNOWLEDGEMENT}: TBA \newpage
\section{Introduction \label{sec:Intro}} With larger and more complex data sets becoming increasingly common, the annotation of data in order to semantically enrich it is a crucial task within data science \cite{TaylorJoudrey2008, Davies2009}. For example, in molecular biology a gene can be annotated by domain experts with terms, $t$, from a controlled vocabulary, thereby allowing other researchers to comprehend the function and role of that gene. Similarly, user tagging of information sources such as documents, photographs, or online content, provides additional meta-data and leads to emergent but uncontrolled vocabularies (often called folksonomies \cite{PetersStock2007}). Terms within a vocabulary can be further organized in a hierarchical structure such as a taxonomy \cite{TaylorJoudrey2008}, in which terms closer to the root of the hierarchy are less specific than those further from the root. Hierarchical organization of vocabulary terms can also be used to specify richer semantic relationships between terms - richer than just indicating that one term is simply a sub-type of another. These richer hierarchical semantic structures are generally called ontologies \cite{Gruber2008}. The annotations that result from an ontology or taxonomy can exhibit interesting patterns. For example, Kalankesh {\it et al.} \cite{Kalankesh2012} have shown that distributions of term frequencies, $f_{t}$, taken from Gene Ontology (GO) \cite{Ashburner2000} annotations typically follow Zipf's law \cite{Zipf1936, Zipf1949}. Figure \ref{fig:KalankeshExample} shows a Zipf's law plot for annotations taken from the cellular component sub-ontology of GO. The schematic on the right-hand side of Fig.\ref{fig:KalankeshExample} shows, for illustration, part of the ontology that was used to produce the annotation data set plotted on the left-hand side of Fig.\ref{fig:KalankeshExample}. 
Statistical mechanics provides us with a natural tool to understand these annotation patterns, by allowing us to develop a formalism that quantifies both the natural variations in the annotation process, and the ontology structure itself. Although structure-based measures of ontologies already exist \cite{Yao2011}, within this work we are quantifying the ontology structure from the perspective of the annotations that arise, rather than simply quantifying the ontology structure in isolation. The goals, and ultimately the benefits, of developing a statistical mechanics based formalism are both practical and theoretical. \subsection{Ontologies as information stores} Ontologies and taxonomies, whether formally constructed or emergent, represent a store of information. Organizing a hierarchical store of information requires effort to be expended to create an ordered structure. Work by Ferrer i Cancho {\it et al.}, within an information theoretic framework, has shown how heavy tailed and essentially hierarchical patterns of term usage can arise simply from a principle of minimizing the communication effort expended when using those terms \cite{iCancho2003, iCancho2005, iCancho2007}. Within this current paper we also use information theoretic ideas, but it is the process of transferring information from an ontology to an annotated object that we study, {\it i.e.}, {\it after} the hierarchical term structure has been determined or prescribed. We do so using an explicit statistical mechanical model that takes into account the structure of the ontology. Whilst existing work within the literature has used a specific Hamiltonian to study patterns of word usage, that work has not {\it per se} been interested in the impact of any underlying prescribed structure in the vocabulary \cite{Kosmidis2006}. 
Similarly, novel work by Palla {\it et al.} \cite{Palla2008} and Tib\'{e}ly {\it et al.} \cite{Tibely2012} has related tag usage patterns to ontology structure, but focused on an in-depth study of observed tag patterns, rather than taking a Hamiltonian-model-based approach. \begin{figure}[t] \centering \includegraphics[width=5.45cm, keepaspectratio=true, valign=t, trim=0cm 0cm 0cm 2.00cm, bb=0 0 792 612]{Fig1a.eps} \includegraphics[height=3.25cm, valign=t, bb=0 0 641 321]{Fig1b.eps} \caption{Left-hand plot shows a Zipf's law plot for GO gene annotations contained within Human GOA. Annotations have been taken from the cellular component sub-ontology. Terms near the root of the cellular component of GO are shown, for illustration, in the right-hand schematic - the dashed lines indicate the presence of further child terms.} \label{fig:KalankeshExample} \end{figure} The remainder of this paper is organized as follows - in Section \ref{sec:Theory} we express the annotation process as an ideal lattice gas model in an inhomogeneous field. In Section \ref{sec:LeafDesc} we use the lattice gas model to understand the term frequency patterns seen by Kalankesh {\it et al.} \cite{Kalankesh2012}, and we identify $LD_{t}$, the number of leaf descendants of a node $t$, as the key quantity controlling the expected term usage frequencies. In Section \ref{sec:MaxEntField} we derive the most likely natural form for the inhomogeneous field strength, thereby giving rise to a local measure of the ontology. This natural form for the inhomogeneous field also allows us, in Section \ref{sec:Metric}, to construct an ensemble of ontologies of differing complexity. By restricting the ensemble to the class of regular trees we reveal in Section \ref{sec:regTrees} a set of transitions in the optimal tree size, and associated scaling laws, as the number of objects being annotated is increased. 
Finally, in Section \ref{sec:Conc} we discuss a number of possible extensions of the statistical mechanical approach to quantifying ontology structures. \section{Statistical mechanical theory of the annotation process \label{sec:Theory}} We consider an ontology to be represented by a rooted Directed Acyclic Graph (DAG) \cite{BangJensen2009, Cormen2009}. An example DAG, in this case a tree, is shown in Figure \ref{fig:OntologySchematics}. Real-world ontologies are typically not pure trees, and we use a tree structure simply for illustrative purposes. The formalism we develop in this section will be equally applicable to any valid DAG structure. Associated with each node of the DAG is a particular term, and we use node and term interchangeably. \begin{figure}[htb] \begin{center} \scalebox{0.83}{\includegraphics*[bb = 125 600 335 720]{Fig2a.ps} \includegraphics*[bb = 135 595 345 720]{Fig2b.ps}} \end{center} \caption{Left-hand schematic shows an example (tree) DAG representing an ontology. The nodes represent terms, denoted by $t_{1}-t_{12}$, which may be selected to annotate objects that belong to classes represented by the leaves of the DAG. Each leaf corresponds to a unique set of paths from the root, and so we denote the classes by ${\mathcal P}_{1}-{\mathcal P}_{7}$. The directed edges of the DAG indicate semantic relationships between the terms. The right-hand side shows a lattice-gas schematic of an example annotation, {\it i.e.} a selection of terms appropriate to 21 objects, $o_{1}, o_{2},\ldots, o_{21}$, that have been assigned to the 7 classes ${\mathcal P}_{1}-{\mathcal P}_{7}$ (3 objects per class). 
A cell occupied by a particle in the lattice-gas schematic indicates the term has been selected to annotate that object, whilst a filled (grey) cell indicates a cell that cannot be occupied as a consequence of the ontology structure.} \label{fig:OntologySchematics} \end{figure} The directed edges between nodes of the DAG indicate that the child node conveys a possibly more specific meaning than the parent node, or represents a more specific subset of objects. For such an increase in specificity to occur the ontology topology must ultimately be tree-like. Clearly, if a particular term is applicable to an object then so are the terms associated with any of the ancestor nodes. At a leaf, we have a term that is applicable to a small subset of highly defined objects, possibly even a single object. Each leaf of the DAG determines a unique path or set of paths, denoted by ${\mathcal P}$, consisting of all terms that can be traversed in moving from the root to the leaf. Thus we use leaf, class and ${\mathcal P}$ interchangeably and use $t\in{\mathcal P}$ to denote a term on the unique set of paths ${\mathcal P}$. The prior probability of a particular class ${\mathcal P}$ is $\pi_{{\mathcal P}}$, and is just the proportion of all objects within the class ${\mathcal P}$. If, within a data set we have a total of $|{\mathcal O}|$ objects and $|{\mathcal O}_{\mathcal P}|$ objects from class ${\mathcal P}$, then we can construct a simple estimator, $\hat{\pi}_{{\mathcal P}}$, of $\pi_{{\mathcal P}}$ via, \begin{equation} \hat{\pi}_{{\mathcal P}}\;=\; \frac{|{\mathcal O}_{{\mathcal P}}|}{|{\mathcal O}|}\;\; . \label{eq:T.1} \end{equation} \noindent Clearly, we expect $\hat{\pi}_{\mathcal P}\rightarrow \pi_{\mathcal P}$ as $|{\mathcal O}|\rightarrow \infty$. 
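As a concrete illustration (a minimal sketch of our own, with hypothetical class labels), the estimator of Eq.(\ref{eq:T.1}) is simply a normalized class count:

```python
from collections import Counter

def estimate_class_priors(object_classes):
    """Estimator of Eq. (T.1): the fraction of objects falling in each class P."""
    counts = Counter(object_classes)
    total = len(object_classes)
    return {cls: n / total for cls, n in counts.items()}

# 21 objects in 7 classes, 3 objects per class, as in the schematic of Fig. 2
labels = [f"P{k}" for k in range(1, 8) for _ in range(3)]
priors = estimate_class_priors(labels)
# every class receives the estimated prior 3/21 = 1/7
```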
We then use ${\mathcal N}_{t}$ to denote the expected number of classes within the data set to which a term $t$ belongs, {\it i.e.}, ${\mathcal N}_{t}$ is simply the number of leaves which are descendants of term $t$ in the ontology, weighted by the prior $\pi_{{\mathcal P}}$. Formally we define, \begin{equation} {\mathcal N}_{t}\;=\;\sum_{{\mathcal P}\ni t} \pi_{\mathcal P}\;\;, \label{eq:T.2} \end{equation} \noindent and denote by $\widehat{{\mathcal N}}_{t}$ its equivalent defined using the estimators $\hat{\pi}_{{\mathcal P}}$, {\it i.e.}, \begin{equation} \widehat{{\mathcal N}}_{t}\;=\;\sum_{{\mathcal P}\ni t} \hat{\pi}_{\mathcal P}\;\;. \label{eq:T.2b} \end{equation} \noindent The value of ${\mathcal N}_{t}$ is essentially the probability, $P(t)$, that term $t$ is relevant to an object, irrespective of any further details of the object. Thus ${\mathcal N}_{t}$ gives a probabilistic measure of the specificity of term $t$ that reflects the structure of the ontology, and expert knowledge and information encoded within it. We therefore consider ${\mathcal N}_{t}$ to measure the intrinsic information content of term $t$. Specifically, we define the information content of term $t$ as $-\log P(t) = -\log {\mathcal N}_{t}$ \cite{CoverThomas1991}. An object belonging to the class represented by a leaf ${\mathcal P}$ can be annotated with any of the terms $t$ belonging to ${\mathcal P}$. We use $n_{ot} \in \{0,1\}$ to denote whether term $t$ is used to annotate object $o$. The discrete variables $n_{ot}$ are lattice gas occupancy numbers (or equivalently Ising spins). Consequently, we formulate the probability of a particular choice of annotations as a lattice-gas model. An example lattice-gas configuration corresponding to an example annotation is shown by the schematic on the right-hand side of Fig.\ref{fig:OntologySchematics}. The lattice-gas Hamiltonian is relatively simple and is given by, \begin{equation} H \;=\; \sum_{o,t} n_{ot}( v_{ot} - \mu )\;\; . 
\label{eq:T.3} \end{equation} \noindent Here $v_{ot}$ represents the local field acting upon a particle at term $t$ for object $o$, and determines how likely it is that term $t$ will be selected when annotating object $o$. Ideally no mis-annotation occurs, {\it i.e.} only terms appropriate to each object are selected, and the probability of selecting term $t$ would be the same provided the set of paths ${\mathcal P}$ to which the object belongs contains $t$. Thus, we set, \begin{equation} v_{ot} \; = \; \left \{ \begin{array}{cc} v_{t} & t \in {\mathcal P}\;{\rm and}\; o \in {\mathcal P}\;\;, \\ h & {\rm otherwise}\;\; . \end{array} \right . \label{eq:T.4} \end{equation} \noindent The value of $h$ determines the global background level of mis-annotation that may occur. It is possible to consider a more structured form for $v_{ot}$, {\it e.g.}, to allow for greater likelihood of mis-annotation when a term $t$ is close to, but not in, the paths ${\mathcal P}$ to which the object $o$ belongs. However, for the remainder of this paper we consider the simplified form for $v_{ot}$ given in Eq.(\ref{eq:T.4}) above and also only consider the scenario where no mis-annotation occurs. Consequently, we set $h = \infty$. Therefore, for each object, particular terms are forbidden if the term is not on any of the paths from the root node to the leaf node associated with the object. This is also illustrated in Fig.\ref{fig:OntologySchematics}. In practice, when analysing real data sets, it may be required to consider a finite value of $h$, to capture the mis-annotations that will inevitably occur. Within Eq.(\ref{eq:T.3}) we have also included a chemical potential, $\mu$, which acts as a global field on all lattice cells. Equivalently, the fugacity $z=\exp(\beta\mu)$ is essentially the global prior probability that a term will be selected by an annotator, and controls or limits the average number, $\bar{n}$, of terms per object selected by an annotator. 
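Because the Hamiltonian of Eq.(\ref{eq:T.3}) is non-interacting, an annotation can be sampled term by term. The sketch below (our own, with hypothetical field strengths) does exactly this; the occupation probability it uses follows directly from the two Boltzmann weights of $n_{ot}\in\{0,1\}$ and anticipates Eq.(\ref{eq:T.5}) below:

```python
import math
import random

def occupation_prob(v_t, mu=0.0, beta=1.0):
    """P(n_ot = 1) for an allowed cell: the two Boltzmann weights are
    1 (empty) and exp(-beta*(v_t - mu)) (occupied)."""
    w = math.exp(-beta * (v_t - mu))
    return w / (1.0 + w)

def sample_annotation(path_terms, v, mu=0.0, beta=1.0, rng=random):
    """Each term on the object's set of paths P is selected independently;
    all terms off the paths are forbidden (h = infinity) and never selected."""
    return {t for t in path_terms if rng.random() < occupation_prob(v[t], mu, beta)}

v = {"root": 0.0, "mid": 1.0, "leaf": 2.0}   # hypothetical field strengths
# the root term, with v = mu = 0, is selected with probability 1/2
```

Terms with a larger field strength $v_t$ (relative to $\mu$) are selected less often, as Eq.(\ref{eq:T.5}) requires.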
We have used $\beta = 1/T$ to denote the inverse temperature. The temperature $T$ does not at this stage have an explicit physical interpretation, other than to represent a parameter which controls the expectation value of the Hamiltonian in Eq.(\ref{eq:T.3}). However, in Section \ref{sec:MaxEntField} we propose an intrinsic form for the external field $v_{t}$ in terms of the information content of the term $t$. With this intrinsic form for $v_{t}$ the temperature $T$ then controls the average information retrieved from the ontology when an object is annotated. Clearly the temperature $T$ will also determine the level of variation seen in annotations from otherwise identical annotators when annotating the same set of objects. Since we only have non-interacting particles, the probability $p_{t}$, of term $t$ being used to annotate an object for which it is valid, is easily calculated as, \begin{equation} p_{t}\;=\; \langle n_{ot}\rangle \;=\;\frac{e^{-\beta( v_{t} - \mu )}}{1 + e^{-\beta( v_{t} - \mu )}}\;\; . \label{eq:T.5} \end{equation} \noindent Here $\langle n_{ot}\rangle$ denotes the expectation (over annotations) of $n_{ot}$. Likewise, with the simplification in Eq.(\ref{eq:T.4}) for $v_{ot}$ the partition function is straight-forward to evaluate, and for a particular set of objects is given by, \begin{equation} \log Z \;=\; |{\mathcal O}| \sum_{t}\widehat{{\mathcal N}}_{t}\log \left ( 1+e^{-\beta (v_{t} - \mu)}\right )\;\; . \label{eq:T.7} \end{equation} \noindent Consequently, the expected value of $\log Z$, averaged over all possible data sets, ${\mathcal O}$, of size $|{\mathcal O}|$, is, \begin{equation} \langle \log Z \rangle _{\mathcal O} \;=\; |{\mathcal O}| \sum_{t}{\mathcal N}_{t}\log \left ( 1+e^{-\beta (v_{t} - \mu)}\right ) \;\; . 
\label{eq:T.7b} \end{equation} \noindent Likewise, the expected frequency, $C_{t}$, of term $t$ within the annotation data set is given by, \begin{equation} C_{t}\;=\; \sum_{o\ni {\mathcal P}\ni t} \langle n_{ot} \rangle \;=\; -\beta^{-1}\frac{\partial \log Z}{\partial v_{t}} \;=\; |{\mathcal O}| \frac{e^{-\beta (v_{t}-\mu)}}{1+e^{-\beta ( v_{t} - \mu )}} \widehat{{\mathcal N}}_{t}\;=\; |{\mathcal O}| p_{t}\widehat{{\mathcal N}}_{t}\;\; , \label{eq:T.8} \end{equation} \noindent with the expectation over all data sets of size $|{\mathcal O}|$ being, \begin{equation} \langle C_{t}\rangle _{\mathcal O} \;=\; |{\mathcal O}| \frac{e^{-\beta (v_{t}-\mu)}}{1+e^{-\beta ( v_{t} - \mu )}}{\mathcal N}_{t}\;=\; |{\mathcal O}| p_{t}{\mathcal N}_{t}\;\; . \label{eq:T.8b} \end{equation} \noindent In all of the expressions above for the partition function $Z$ we do not consider objects within the same class ${\mathcal P}$ to be interchangeable. It is likely that objects within the class ${\mathcal P}$ are unique and distinguishable, and simply possess a common characteristic rather than being replicates of each other. Finally, we note that for any real data set of term frequencies $f_{t}$, Eq.(\ref{eq:T.8b}) provides a simple means of constructing an estimate of the field strength operating on node $t$, by equating the expectation value for $\langle C_{t}\rangle _{\mathcal O}$ given in Eq.(\ref{eq:T.8b}) with $f_{t}$. This gives us an estimate $\hat{v}_{t}$ for $v_{t}$ given by, \begin{equation} \hat{v}_{t}\;=\;\beta^{-1}\left [ \log z \;+\; \log \left (\frac{|{\mathcal O}|{\mathcal N}_{t}}{f_{t}} - 1 \right ) \right ]\;\; . 
\label{eq:T.8b1} \end{equation} \section{Replicating the patterns of real annotation data sets \label{sec:realworldPatterns}} \subsection{The relation between ${\mathcal N}_{t}$ and leaf descendants \label{sec:LeafDesc}} Clearly, a first test of the lattice-gas model of the annotation process is whether it can replicate or explain the broad patterns seen in real annotation data sets, such as those seen by Kalankesh {\it et al.} \cite{Kalankesh2012}. From Eq.(\ref{eq:T.8b}) we can see that $\langle C_{t}\rangle _{\mathcal O}$ depends upon two factors: ${\mathcal N}_{t}$ and $p_{t}$. Due to the typically hierarchical nature of the ontology topology we would expect ${\mathcal N}_{t}$ to have a broad distribution of values, and potentially to dominate the distribution of $\langle C_{t}\rangle _{\mathcal O}$. It is transparent that, for class probabilities that are uniform across all possible classes, the value of ${\mathcal N}_{t}$ is simply proportional to the number of leaf descendants, $LD_{t}$, that can be reached from term $t$. For small deviations away from uniform class probabilities we would still expect ${\mathcal N}_{t}$ and $LD_{t}$ to be approximately proportional. Larger deviations away from uniform class probabilities effectively represent a pruning of the DAG into a smaller topology, with leaf nodes that correspond to small class probabilities being effectively eliminated. Consequently, for any DAG topology we still consider $LD_{t}$ to give useful insight into the expected term frequency $\langle C_{t}\rangle _{\mathcal O}$, and in the next section we assess the likely distribution for $LD_{t}$ for tree-like ontologies. 
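For a concrete ontology, $LD_{t}$ is easily computed by a memoized recursion over the DAG. The sketch below (our own, on a hypothetical seven-node tree) propagates sets of leaves rather than counts, so that a leaf reachable along several paths of a DAG is not double-counted:

```python
from functools import lru_cache

# hypothetical ontology: node -> tuple of children (a tree here, but the
# recursion is valid for any rooted DAG)
children = {
    "t1": ("t2", "t3"),
    "t2": ("t4", "t5"),
    "t3": ("t6", "t7"),
    "t4": (), "t5": (), "t6": (), "t7": (),
}

@lru_cache(maxsize=None)
def leaves_below(t):
    """The set of distinct leaves reachable from node t."""
    if not children[t]:
        return frozenset({t})
    return frozenset().union(*(leaves_below(c) for c in children[t]))

def LD(t):
    """Leaf descendant count LD_t (a leaf counts itself, so LD_t = 1 at a leaf)."""
    return len(leaves_below(t))
```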
\subsection{The distribution of the leaf descendant count $LD_{t}$.\label{sec:LeafDescDist}} For irregular-trees precise evaluation of $LD_{t}$ given local information about the node $t$ can be framed in terms of Galton-Watson processes \cite{GaltonWatson1875, Kendall1966}, though there are few usefully applicable closed-form results available to us. In contrast, single parent regular trees are easier to study. For a single-parent regular tree with $b$ children per node we can label the layers of the tree from $l=0$ (at the root) to $l=L$ (at the leaves). The total number of nodes in the tree is $(b^{L+1}-1)/(b-1)$. The number of nodes in layer $l$ is simply $b^{l}$, whilst the number of leaf descendants for each node in layer $l$ is $b^{L-l}$, {\it i.e.}, within each layer there is a simple reciprocal relationship between node counts and leaf descendant counts, suggesting a Zipf's behaviour for the distribution of $LD_{t}$. More formally, for node $t$ in layer $l$, with leaf descendant count $LD_{t}=b^{L-l}$, it is then a simple matter to find, \begin{eqnarray} &&{\rm Fraction\;of\;nodes\;with\;leaf\;descendant\;count} \geq LD_{t}\;=\;\frac{b-1}{b^{L+1}-1}\sum_{k=0}^{l}b^{k} \nonumber \\ &=& \frac{b^{l+1}-1}{b^{L+1}-1}\;=\; \frac{b^{L+1}}{b^{L+1}-1}\left [\frac{1}{LD_{t}}\;-\;\frac{1}{b^{L+1}}\right ]\;\simeq\;\frac{1}{LD_{t}}\;\;,\;{\rm as}\;L\rightarrow\infty\;. \label{eq:T.8b1_2} \end{eqnarray} \noindent Consequently, for large regular trees we will have $P(LD_{t} \ge X)\;\simeq X^{-1}$, {\it i.e.}, a Zipf's law form. Overall, Eq.(\ref{eq:T.8b1_2}) suggests the origin of the Zipf's law behaviour observed by Kalankesh {\it et al.} \cite{Kalankesh2012} may be, in part, a consequence of the Zipf's law like behaviour of the distribution of $LD_{t}$. 
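The closed form of Eq.(\ref{eq:T.8b1_2}) is easy to verify numerically; the sketch below (our own) compares it against a direct layer-by-layer count for a bifurcating regular tree, and checks the $1/LD_{t}$ approximation at a layer well away from the root:

```python
b, L = 2, 20   # bifurcating regular tree of depth 20

def frac_at_least(l):
    """Fraction of nodes with leaf descendant count >= b**(L-l): these are
    exactly the nodes in layers 0..l, giving (b^{l+1}-1)/(b^{L+1}-1)."""
    return (b**(l + 1) - 1) / (b**(L + 1) - 1)

# the closed form agrees with a direct count of nodes per layer
for l in range(L + 1):
    direct = sum(b**k for k in range(l + 1)) / sum(b**k for k in range(L + 1))
    assert abs(direct - frac_at_least(l)) < 1e-12

# Zipf-like approximation: frac_at_least(l) ~ 1/LD_t = b**(l - L)
approx_ratio = frac_at_least(15) * b**(L - 15)   # close to 1
```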
However, whilst the structure of a regular tree will lead, on average, to a classical Zipf's law like behaviour, the degeneracy in leaf descendant count for nodes within the same layer leads to a very evenly spaced behaviour in any plot of the cumulative probability. This can be seen in the inset of Figure \ref{fig:GOCC_Intrinsic}a which shows the cumulative probability distribution (on a logarithmic scale) of leaf descendant counts for a regular tree with $b=2$ and depth $L=20$. For a real ontology, where we will have significant variation in the number of children and parents at the scale of individual nodes, we would expect to see more continuous cumulative probability plots for $LD_{t}$. We have confirmed with further simulations of irregular tree-like ontologies (results not shown) that, typically, continuous Zipf's law like behaviour in $LD_{t}$ results when one has local variation in the node characteristics, {\it e.g.}, variation in the average number of children per node or in the probability that a node has children. Similarly, the main part of Fig.\ref{fig:GOCC_Intrinsic}a shows the cumulative probability distribution plot for the leaf descendant count of the cellular component GO ontology. This is the ontology used for annotating the data set shown in Fig.\ref{fig:KalankeshExample}. The intrinsic power-law like behaviour is clearly evident, and so it is perhaps unsurprising that we should observe power-law like behaviour in annotations based upon this ontology. As postulated, and in contrast to the distribution shown for the regular tree, this real-world ontology shows a more continuous spread of leaf descendant counts. However, as with the regular tree, the effective exponent of the approximate power-law form for the cellular component GO ontology is close to -1. 
\begin{figure*} \begin{center} \scalebox{0.23}{\includegraphics*[bb = 0 0 792 612]{Fig3a.eps} \includegraphics*[bb = 0 0 792 612]{Fig3b.eps}} \end{center} \caption{Plots of the distributions of leaf descendant counts for a real-world ontology and a regular tree. Plot (a) shows a plot of the log of the cumulative probability of $LD_{t}$ for all nodes with children within the GO cellular component ontology. The inset within plot (a) shows a plot of the log of the cumulative probability of $LD_{t}$ for all nodes with children within a bifurcating ($b=2$) regular tree of depth $L=20$. Plot (b) shows a log-log plot of observed term frequencies, $f_{t}$, against number of leaf descendants, $LD_{t}$, for the real data set shown in Fig.\ref{fig:KalankeshExample}.} \label{fig:GOCC_Intrinsic} \end{figure*} \subsection{The external field $v_{t}$: intrinsic and specific components} We have argued that term frequencies in real annotation data sets will be broadly determined by $LD_{t}$. Figure \ref{fig:GOCC_Intrinsic}b shows a log-log plot for the observed term frequencies, $f_{t}$, of the data set shown in Fig.\ref{fig:KalankeshExample}, plotted against the number of leaf descendants within the ontology from which the annotation terms were drawn. The broad correspondence between the log of the observed term frequencies and the log of the number of leaf descendants is clear, and is statistically significant (estimate of Pearson correlation $\hat{\rho} = 0.454,\; 95\% CI = [0.408, 0.498]$, $p < 10^{-8}$ for null hypothesis of $\rho = 0$). However, we would not expect an exact correspondence with the real observed term frequencies, $f_{t}$, on the basis of ${\mathcal N}_{t}$ alone. The precise value of $\langle C_{t}\rangle _{\mathcal O}$ is determined by two factors: ${\mathcal N}_{t}$ and $p_{t}$, with $p_{t}$ determined by $v_{t}$. 
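Given observed frequencies $f_{t}$, the inversion of Eq.(\ref{eq:T.8b1}) is direct; the sketch below (our own, with arbitrary parameter values) checks that a frequency generated by the model of Eq.(\ref{eq:T.8b}) recovers the field strength exactly:

```python
import math

def estimate_field(f_t, N_t, n_objects, z=1.0, beta=1.0):
    """Estimator v_hat_t of Eq. (T.8b1), obtained by equating the model
    frequency of Eq. (T.8b) with an observed frequency f_t."""
    return (math.log(z) + math.log(n_objects * N_t / f_t - 1.0)) / beta

# round trip: a frequency produced by the model returns the true field
v_true, mu, beta, n_objects, N_t = 1.3, 0.2, 2.0, 1000, 0.25
p = 1.0 / (1.0 + math.exp(beta * (v_true - mu)))   # Eq. (T.5)
f = n_objects * p * N_t                            # Eq. (T.8b)
v_hat = estimate_field(f, N_t, n_objects, z=math.exp(beta * mu), beta=beta)
```

Note the estimator is only defined for $0 < f_t < |{\mathcal O}|\,{\mathcal N}_t$, the range the model itself can produce.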
We might expect $v_{t}$ itself to have an intrinsic component determined by the ontology topology, but the scatter in Fig.\ref{fig:GOCC_Intrinsic}b suggests that we should decompose $v_{t}$ into an intrinsic contribution, $v^{(0)}_{t}$, and a specific residual contribution, $\Delta v_{t}$. That is we write, \begin{equation} v_{t} \;=\; v^{(0)}_{t}\;+\; \Delta v_{t} \label{eq:T.8b2} \end{equation} \noindent Once a form for $v^{(0)}_{t}$ has been specified and an estimate for $v_{t}$ has been obtained from observed term frequencies $f_{t}$ via Eq.(\ref{eq:T.8b1}), then an estimate for $\Delta v_{t}$ can be obtained. Overall, the values of $\Delta v_{t}$ reveal any biases towards particular terms beyond that expected on the basis of the ontology topology alone. Significant values of $\Delta v_{t}$ can potentially indicate regions of the ontology which are not matching the requirements of the annotators, and where the ontology can potentially be improved or needs modification. Equally, having a form for $v^{(0)}_{t}$ allows us to extend the theoretical statistical mechanical formalism by providing an explicit form for $p_{t}$ in the absence of any bias in $v_{t}$. Therefore, in the remainder of the paper we derive an appropriate form for $v^{(0)}_{t}$ and use it to study the consequences of increasing ontology complexity. \section{Determination of the intrinsic field strength $v^{(0)}_{t}$ \label{sec:MaxEntField} } To proceed we note that extracting or transferring information from the ontology to an object during the annotation process requires effort. Selecting more specific terms, far from the root, requires greater information about the object to be known by, or inferred by the annotator, and so we equate effort expended by an annotator with the intrinsic information content of the terms selected. 
A natural choice for $v^{(0)}_{t}$ would therefore be to set $v^{(0)}_{t}$ proportional to the information content of term $t$, {\it i.e.}, set $v^{(0)}_{t} \propto -\log {\mathcal N}_{t}$. This choice corresponds to the maximum entropy solution for the term probabilities $\{p_{t}\}_{t}$ under the constraint of capturing a given amount of information from the ontology using a fixed expected number of terms. To see this we first calculate the entropy, $S[\{ p_{t}\}]$, for a given set of term probabilities, $\{p_{t}\}$. From Eq.(\ref{eq:T.5}) and Eq.(\ref{eq:T.7}) the entropy $S[\{ p_{t}\}]$ is obtained as, \begin{eqnarray} S[\{ p_{t}\}] & = & \langle \log Z\rangle_{\mathcal O}[\{v^{(0)}_{t}\}_{t}] \;+\;\beta\sum_{o,t}\langle \langle n_{ot}\rangle\rangle_{\mathcal O} ( v^{(0)}_{t} - \mu ) \nonumber \\ & = & -|{\mathcal O}| \sum_{t}{\mathcal N}_{t}\left [ p_{t}\log p_{t}\;+\; (1-p_{t})\log(1- p_{t}) \right ] \;\; . \label{eq:T.9} \end{eqnarray} \noindent In calculating the entropy in Eq.(\ref{eq:T.9}) we have clearly averaged over data sets of size $|{\mathcal O}|$, as we wish to eliminate the effects of data set to data set variation and focus solely on the effect of data set size. If we wish to obtain the set of term probabilities $\{p_{t}\}$ which maximize the entropy subject to capturing a given amount of the stored information in the ontology and a given expected total number of selected terms, then we maximize, \begin{eqnarray} && -\lambda\sum_{o,t}\langle\langle n_{ot}\rangle \log {\mathcal N}_{t}\rangle_{\mathcal O} \;+\; \phi\sum_{o,t}\langle\langle n_{ot}\rangle\rangle_{\mathcal O} \;+\;S\left [ \{p_{t}\}\right ] \nonumber \\ & = & -|{\mathcal O}|\sum_{t}{\mathcal N}_{t}p_{t}(\lambda\log {\mathcal N}_{t} - \phi)\; - \; |{\mathcal O}| \sum_{t}{\mathcal N}_{t}\left ( p_{t}\log p_{t}\;+\; (1-p_{t})\log(1- p_{t}) \right )\;\; . \nonumber \\ && \label{eq:T.10} \end{eqnarray} \noindent Here $\lambda$ and $\phi$ are Lagrange multipliers. 
The maximum entropy term probabilities, $p^{*}_{t}$, are simply the set of term probabilities that maximize the expression in Eq.(\ref{eq:T.10}), and so we find, \begin{equation} p^{*}_{t}\;=\; \frac{e^{-(\lambda\log {\mathcal N}_{t} -\phi)}}{1 + e^{-( \lambda\log {\mathcal N}_{t}-\phi)}}\;\; . \label{eq:T.11} \end{equation} \noindent Comparing Eq.(\ref{eq:T.5}) and Eq.(\ref{eq:T.11}) we immediately see that this maximum entropy approach corresponds to setting, \begin{equation} \beta v^{(0)}_{t}\;=\; \lambda\log {\mathcal N}_{t}\;\;\;,\;\;\;\beta \mu\;=\;\phi\;\; . \label{eq:T.11b-1} \end{equation} \noindent Without loss of generality, with $\beta >0$ we impose the relation $\beta = |\lambda|$, giving $v^{(0)}_{t} = -\log {\mathcal N}_{t}$ when $\lambda < 0$ and $v^{(0)}_{t} = \log {\mathcal N}_{t}$ when $\lambda > 0$. For $\lambda < 0$, terms with lower information content (larger ${\mathcal N}_{t}$), which are closer to the root, will be preferentially selected during annotation, whilst for $\lambda > 0$, terms with smaller values of ${\mathcal N}_{t}$, which are closer to the leaves of the ontology, are preferentially selected. Substituting the expression for $p^{*}_{t}$ from Eq.(\ref{eq:T.11}) into Eq.(\ref{eq:T.8}) we get the corresponding maximum entropy expected term frequencies as, \begin{equation} \langle C^{*}_{t}\rangle_{\mathcal O} \;=\;|{\mathcal O}|\frac{z{\mathcal N}_{t}^{1-\lambda}}{1 + z{\mathcal N}_{t}^{-\lambda}}\;\; . \label{eq:T.12} \end{equation} \noindent Likewise, for the optimal term probabilities $\{p^{*}_{t}\}$, the free energy takes a value $F^{*}$ given by, \begin{equation} \beta F^{*} \; = \; \beta F[\{ p^{*}_{t}\}_{t}, \{\beta v_{t}=v^{(0)}_{t}\}_{t}]\;=\; -|{\mathcal O}|\sum_{t}{\mathcal N}_{t}\log ( 1 + z{\mathcal N}_{t}^{-\lambda} ) \;\; .
\label{eq:T.14} \end{equation} Having obtained a suitable form for $v^{(0)}_{t}$ it is a simple matter to determine whether, for any real data set, there is a non-zero specific field component $\Delta v_{t}$. This can be done by testing if the observed frequency $f_{t}$ is statistically significantly different from $\langle C^{*}_{t}\rangle_{\mathcal O}$ given by Eq.(\ref{eq:T.12}). If $f_{t}$ is significantly different from $\langle C^{*}_{t}\rangle_{\mathcal O}$, then we can combine Eq.(\ref{eq:T.8b1}), Eq.(\ref{eq:T.8b2}), and Eq.(\ref{eq:T.11b-1}) to construct an estimate $\widehat{\Delta v}_{t} = \hat{v}_{t}\;-\;v^{(0)}_{t}$ for $\Delta v_{t}$. Thus, we have, \begin{equation} \widehat{\Delta v}_{t} \; = \; \left \{ \begin{array}{cc} \frac{1}{|\lambda|}\left [ \log \left ( z{\mathcal N}^{-\lambda}_{t} \right ) \;+\; \log \left ( \frac{|{\mathcal O}|{\mathcal N}_{t}}{f_{t}} - 1 \right ) \right ] & {\rm if}\; f_{t}-\langle C^{*}_{t}\rangle_{\mathcal O}\;\; {\rm significant} \\ 0 & {\rm otherwise}\;\; . \end{array} \right . \end{equation} Realistically, for a large ontology we would expect only a small proportion of all the terms to be used when annotating an object, and so from Eq.(\ref{eq:T.12}) we would expect $\langle C^{*}_{t}\rangle_{\mathcal O} / |{\mathcal O}| \ll 1$. This suggests that $z{\mathcal N}_{t}\ll 1$ and that in general $\langle C^{*}_{t}\rangle_{\mathcal O} \sim {\mathcal N}_{t}^{1-\lambda}$. This gives us a simple mechanism for estimating appropriate values of $\lambda$ for real data sets. This also tells us that, with ${\mathcal N}_{t}$ from tree-like ontologies expected to display a Zipf's law behaviour with an exponent close to -1, we expect to see Zipf's-law-like behaviour in the observed term frequencies $f_{t}$ with exponent $-(1-\lambda)$.
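As a concrete illustration, Eq.(\ref{eq:T.12}) and the estimator above are straightforward to evaluate numerically. The Python sketch below uses purely illustrative values for $z$, $\lambda$, ${\mathcal N}_{t}$ and $|{\mathcal O}|$ (none are taken from real data); it checks that a term observed at exactly its expected frequency carries zero residual field, while an over-used term acquires a negative residual, i.e., a bias towards that term.

```python
import math

def expected_count(N_t, z, lam, n_objects):
    """Maximum-entropy expected term frequency <C*_t>, Eq.(T.12)."""
    return n_objects * z * N_t ** (1.0 - lam) / (1.0 + z * N_t ** (-lam))

def delta_v_hat(f_t, N_t, z, lam, n_objects, significant):
    """Residual-field estimate; set to zero unless the deviation is significant."""
    if not significant:
        return 0.0
    return (1.0 / abs(lam)) * (math.log(z * N_t ** (-lam))
                               + math.log(n_objects * N_t / f_t - 1.0))

# Illustrative (hypothetical) values: a term with N_t = 1/8 in a corpus of 10^4 objects.
z, lam, n_obj, N_t = 0.01, -0.2, 10_000, 0.125
C_star = expected_count(N_t, z, lam, n_obj)

# A term observed at exactly its expected frequency carries no residual field ...
assert abs(delta_v_hat(C_star, N_t, z, lam, n_obj, True)) < 1e-9
# ... while an over-used term acquires a negative residual (a bias towards it).
print(delta_v_hat(2.0 * C_star, N_t, z, lam, n_obj, True))
```

The zero-residual check follows algebraically: substituting $f_{t}=\langle C^{*}_{t}\rangle_{\mathcal O}$ makes the two logarithms cancel exactly.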
As the exponents for $f_{t}$ observed by Kalankesh {\it et al.} \cite{Kalankesh2012} are close to -1, this suggests that the effective values of $\lambda$ for real data sets are small in magnitude and potentially lie either side of $\lambda=0$. Finally, we note that as $\lambda = 0$ represents the boundary below which terms closer to the root are preferentially selected, an effective value of $\lambda < 0$ within a real data set may suggest that annotators do not have a strong desire to make use of the additional nodes provided by larger complex ontologies. Within our statistical mechanical framework we are able to explore whether there is an `optimal' or preferred size of ontology that annotators wish to use, and also how this `optimal' size might scale with the number of objects being annotated. This we do in the next section. \section{Statistical mechanics of an ensemble of ontologies \label{sec:Metric}} Having set up a statistical mechanical model of the annotation process, we are in a position to perform more theoretical analyses and experiments that, whilst not realisable in practice, still provide valuable insight into the performance of annotators and the underlying ontology being used. For example, a key practical question we wish to address is whether a real-world ontology is fit for purpose, or whether the ontology needs shrinking or expanding. To answer this question empirically would require providing annotators with a range of ontology graphs and observing which ontology is used most frequently. Such an experiment is not readily performed. However, within the statistical mechanical framework we can address this question by extending the previous ensemble to one consisting of a large collection of annotators who annotate the same set of objects, but who can select the ontology to be used from a range of ontologies of differing complexity.
Consequently, we will see variation in both the choice of topology ${\mathcal H}$ being used, and the annotation configuration $\{ n_{ot}\}$ being selected. Considering this ensemble allows us to easily determine the most preferred, and hence most appropriate, ontology. This optimal ontology will depend upon a number of parameters, such as the number of objects being annotated, $|{\mathcal O}|$, and annotator characteristics such as the average number of terms used per object and the average effort/information the annotator is willing to expend per object. The latter two characteristics are controlled by the parameters $z$ and $\lambda$. Therefore, the remainder of Section \ref{sec:Metric} is focused on elucidating how the preferred, or optimal, ontology varies with the parameters $|{\mathcal O}|, z$ and $\lambda$. To determine the variation with respect to $|{\mathcal O}|, z$ and $\lambda$ requires breaking the analysis down into a number of smaller, but still quite involved, steps. Firstly, in Section \ref{sec:complexityQuant} we construct a suitable statistical mechanical potential, $\beta \Omega^{*}$, from which the optimal ontology topology ${\mathcal H}$ can be determined. In Section \ref{sec:regTrees} we restrict ${\mathcal H}$ to the class of regular trees to make analysis of $\beta \Omega^{*}$ tractable. In Section \ref{sec:asympAnalysis} we further simplify the analysis by developing a closed form approximation to $\beta \Omega^{*}$ for regular trees, and examining the asymptotic behaviour of the approximation as the number of objects $|{\mathcal O}|$ increases. In Section \ref{sec:simulation} we confirm the results of the asymptotic analysis on regular trees using numerical simulations. Finally, in Section \ref{sec:lambdaTrend} we use the insights gained from the analysis of $\beta \Omega^{*}$ to understand the most likely observed values for the parameter $\lambda$.
\subsection{Quantifying ontology complexity and determination of the optimal ontology \label{sec:complexityQuant} } To extend the ensemble to one that consists of ontologies of varying complexity we must first introduce a measure quantifying the intrinsic complexity of each ontology. Continuing the information theoretic approach, we measure the complexity of an ontology by its total intrinsic information content $-\sum_{t}\log {\mathcal N}_{t}$, and so introduce an additional control variable, $\omega > 0$, to set the average intrinsic information content of an ontology node within this ensemble. The partition function for this ensemble becomes, \begin{equation} {\mathcal Z}\;=\; \sum_{{\mathcal H}} \exp \left ( {\omega \sum_{t}\log {\mathcal N}_{t}}\right ) Z({\mathcal H})\;\; , \end{equation} \noindent where $Z({\mathcal H})$ is the partition function on a fixed DAG and $\log Z({\mathcal H})$ is given by $-\beta F^{*}$ in Eq.(\ref{eq:T.14}). Larger values of $\omega$ will favour smaller values for the average intrinsic information per node, and so will favour smaller, lower complexity ontologies. Just as the partition function has been modified, so the appropriate potential for this ensemble is obtained by adding $-\omega \sum_{t}\log {\mathcal N}_{t}$ to the optimal value of the free energy, $\beta F^{*}$. This gives a potential, \begin{equation} \beta \Omega^{*} \;=\; -|{\mathcal O}|\sum_{t}{\mathcal N}_{t}\log ( 1 + z{\mathcal N}_{t}^{-\lambda} )\;-\; \omega \sum_{t}\log {\mathcal N}_{t}\;\; . \label{eq:M.2} \end{equation} \noindent The most frequently selected, or optimal, ontology is that which minimizes $\beta\Omega^{*}$. The term $-\omega \sum_{t}\log {\mathcal N}_{t}$ therefore serves an important function by regulating the ontology complexity, with more complex topologies being penalized more heavily.
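For a concrete sense of how the complexity penalty acts, the potential in Eq.(\ref{eq:M.2}) can be evaluated directly from the table of information contents $\{{\mathcal N}_{t}\}_{t}$ of any candidate topology. The Python sketch below does this for a toy two-level binary tree with a uniform class distribution over its leaves; all parameter values are illustrative.

```python
import math

def beta_omega_star_dag(N, z, lam, omega, n_objects):
    """Eq.(M.2): optimal free energy plus the complexity penalty -omega*sum_t log N_t."""
    free_energy = -n_objects * sum(Nt * math.log(1.0 + z * Nt ** (-lam)) for Nt in N)
    penalty = -omega * sum(math.log(Nt) for Nt in N)
    return free_energy + penalty

# Toy topology: root, two internal nodes, four leaves; a uniform leaf-class
# distribution gives N_t = 1 (root), 1/2 (internal nodes), 1/4 (leaves).
N = [1.0, 0.5, 0.5, 0.25, 0.25, 0.25, 0.25]
print(beta_omega_star_dag(N, z=0.05, lam=-0.1, omega=2.0, n_objects=1000))
```

The optimal topology within the ensemble is the one minimizing this quantity; since $\log {\mathcal N}_{t} \le 0$, the penalty term is non-negative and grows with the number of specific (small ${\mathcal N}_{t}$) nodes.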
We can see that the minimum value $\beta\Omega^{*}(\{ {\mathcal N}_{t}\}_{t}, z, \omega)$ changes on varying either the topology ${\mathcal H}$ or the total number of objects, $|{\mathcal O}|$, to which the ontology applies. Consequently, we have the possibility of a change in optimal DAG topology as the number of objects, $|{\mathcal O}|$, being annotated increases. As previously stated, this possibility of a change in the optimal DAG topology is something we wish to investigate further, though to do so over a range of general DAG topologies is unlikely to be tractable. However, as we have already argued, for an ontology to be effective it will have a tree-like topology. This then highlights the importance of understanding the behaviour of $\beta\Omega^{*}$ for tree-like ontologies. Even so, characterizing the behaviour of $\beta\Omega^{*}$ for an arbitrary tree-like ontology is still likely to be a difficult task. Therefore, to gain further insight into the behaviour of $\beta \Omega^{*}$ for tree-like ontologies we restrict our further analysis of $\beta \Omega^{*}$ to regular trees. \subsection{Optimal ontology size for regular trees \label{sec:regTrees} } For a regular tree with multi-furcating nodes, {\it i.e.}, each parent having $b$ children, we can label nodes according to which layer, $l$, the node is in. The root node is in layer $l=0$, whilst leaf nodes are in the final layer $L$. Nodes in layer $l$ have $b^{L-l}$ leaf descendants and there are $b^{l}$ nodes in layer $l$. Consequently, the total number of leaves is $b^{L}$. For simplicity we will take the class distribution $\{\pi_{\mathcal P}\}_{\mathcal P}$ to be uniform across the leaf nodes, {\it i.e.}, $\pi_{\mathcal P} = b^{-L}$ for each of the classes corresponding to a leaf node. From these relations we can evaluate ${\mathcal N}_{t} = b^{-l}$ for a term corresponding to a node in layer $l$.
It is then a simple matter to re-write $\beta\Omega^{*}$ as, \begin{equation} \beta \Omega^{*}\;=\; -|{\mathcal O}| \sum_{l=0}^{L} \log \left ( 1 + zb^{l\lambda}\right ) \;+\; \omega \log b \sum_{l=0}^{L} lb^{l}\;\; . \label{eq:R.1} \end{equation} \noindent The average number, $\bar{n}$, of terms used per annotated object is determined by the fugacity $z$ via the relation, \begin{equation} \bar{n}\;=\; -\frac{z}{|{\mathcal O}|}\frac{\partial}{\partial z} \beta \Omega^{*}\;=\; \sum_{l=0}^{L} \frac{zb^{l\lambda}}{1+zb^{l\lambda}}\;\; . \label{eq:R.2} \end{equation} \noindent Thus, if we wish to attain a specified value $\bar{n}$ we simply solve Eq.(\ref{eq:R.2}) for the required value of $z$. Similarly, the average information retrieved per object, $I$, is determined via the relation, \begin{equation} I\;=\; -\frac{1}{|{\mathcal O}|}\frac{\partial}{\partial \lambda} \beta \Omega^{*}\;=\;\log b\sum_{l=0}^{L} \frac{zlb^{l\lambda}}{1+zb^{l\lambda}}\;\; . \label{eq:R.2b} \end{equation} The simple form of $\beta\Omega^{*}$ in Eq.(\ref{eq:R.1}) also allows us to analytically determine the tree size $L$ that is optimal for annotating a given number of objects $|{\mathcal O}|$. The optimal tree size is that which minimizes $\beta\Omega^{*}$. Due to the discrete nature of $L$, any growth we observe in the optimal ontology size will occur via a series of transitions. Naively, we would expect the optimal ontology size, $L_{opt}$, to increase as the number of annotated objects is increased. That is, we would {\it a priori} expect $L_{opt}\rightarrow\infty$ as $|{\mathcal O}|\rightarrow\infty$. However, whether growth of $L_{opt}$ is possible or not may be affected by the particular value of $z$ or $\lambda$. Consequently, in the next section our analysis will focus upon the behaviour of $\beta\Omega^{*}$ as $|{\mathcal O}|$ and $L$ increase, in particular in the regime $|{\mathcal O}|, L\rightarrow\infty$, for different choices of $z$ and $\lambda$.
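The relations above are easy to work with numerically. The Python sketch below (illustrative parameter values throughout) evaluates $\beta\Omega^{*}$ via Eq.(\ref{eq:R.1}), solves Eq.(\ref{eq:R.2}) for the fugacity $z$ yielding a target $\bar{n}$ by bisection (the right-hand side of Eq.(\ref{eq:R.2}) is monotone increasing in $z$), and then locates the optimal depth $L_{opt}$ by direct minimization over integer $L$.

```python
import math

def beta_omega_star(L, b, z, lam, omega, n_objects):
    """Eq.(R.1) for a regular tree of depth L with branching factor b."""
    energy = -n_objects * sum(math.log(1.0 + z * b ** (l * lam)) for l in range(L + 1))
    penalty = omega * math.log(b) * sum(l * b ** l for l in range(L + 1))
    return energy + penalty

def mean_terms(L, b, z, lam):
    """Eq.(R.2): average number of terms selected per annotated object."""
    return sum(z * b ** (l * lam) / (1.0 + z * b ** (l * lam)) for l in range(L + 1))

def solve_z(n_bar, L, b, lam):
    """Bisect Eq.(R.2), which is monotone increasing in z, for the fugacity."""
    lo, hi = 1e-300, 1e12
    for _ in range(200):
        mid = math.sqrt(lo * hi)          # bisection on a log scale
        if mean_terms(L, b, mid, lam) < n_bar:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

b, lam, omega = 2, -0.2, 1.0
z = solve_z(25 / 7, 5, b, lam)            # calibrate z so n_bar = 25/7 at L = 5
L_opt = min(range(1, 40),
            key=lambda L: beta_omega_star(L, b, z, lam, omega, 10 ** 6))
print(L_opt)
```

This brute-force minimization over integer $L$ mirrors the procedure used for the numerical results discussed later in Section \ref{sec:simulation}.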
\subsection{Asymptotic behaviour of the optimal ontology size for regular trees \label{sec:asympAnalysis} } The second term on the right-hand-side of Eq.(\ref{eq:R.1}) is easily evaluated as, \begin{eqnarray} \omega \log b \sum_{l=0}^{L} lb^{l} & = & \omega \frac{\log b}{(b-1)^{2}} \left [ Lb^{L+2} \;-\;(L+1)b^{L+1} + b \right ] \nonumber \\ & \simeq & \frac{\omega \log b}{(b-1)}Lb^{L+1}\;\;\;,\;{\rm as}\;L\rightarrow\infty\;\; . \label{eq:R.3} \end{eqnarray} \noindent To evaluate the first term on the right-hand-side of Eq.(\ref{eq:R.1}) we define $a=b^{\lambda}$ and make use of the Euler-Maclaurin sum formula \cite{WhittakerWatson1990} to obtain (see \ref{sec:EulerMaclaurin} for details), \begin{eqnarray} \sum_{l=0}^{L} \log \left ( 1 + za^{l}\right ) & \simeq & \frac{1}{\log a}\left ( Li_{2}(-z)\;-\;Li_{2}(-za^{L})\right ) \;+ \; \frac{1}{2}\log \left( 1+za^{L}\right ) \nonumber \\ & + &\frac{1}{2}\log\left (1+z\right ) \; + \; \frac{\log a}{12}\left ( \frac{za^{L}}{1+za^{L}}\;-\;\frac{z}{1+z}\right )\;\; . \label{eq:R.3e} \end{eqnarray} \noindent From the approximation in Eq.(\ref{eq:R.3e}) we have, \begin{eqnarray} \bar{n} \;=\;\frac{1}{\log a}\log \left ( \frac{1+za^{L}}{1+z}\right ) & + & \frac{1}{2}\frac{za^{L}}{1+za^{L}}\;+\;\frac{1}{2}\frac{z}{1+z}\nonumber \\ & + & \frac{\log a}{12}\left (\frac{za^{L}}{(1+za^{L})^{2}}\;-\;\frac{z}{(1+z)^{2}} \right )\;\; . \label{eq:R.3f} \end{eqnarray} \noindent For convenience of later analysis we can also write Eq.(\ref{eq:R.3e}) as, \begin{equation} \sum_{l=0}^{L} \log \left ( 1 + za^{l}\right ) \simeq f_{1}(za^{L}) \;+\; f_{2}(z)\;\; , \label{eq:R.3f2} \end{equation} \noindent with obvious definitions for the functions $f_{1}$ and $f_{2}$. With this more compact notation we can re-write Eq.(\ref{eq:R.3f}) as, \begin{equation} \bar{n}\;=\; za^{L}f^{'}_{1}(za^{L})\;+\; z f_{2}^{'}(z)\;\; .
\label{eq:R.3f3} \end{equation} The approximation developed in Eq.(\ref{eq:R.3e}) is extremely accurate for the values of $z,a$ and $L$ we are interested in (see \ref{sec:EulerMaclaurin} for details), and thereby allows us to accurately elucidate the growth behaviour of the optimal ontology size. To proceed, we consider the asymptotic behaviour of the approximation to $\beta\Omega^{*}$, as $|{\mathcal O}|$ and $L\rightarrow\infty$. There are two regimes potentially worth studying in the asymptotic limit $|{\mathcal O}|, L\rightarrow\infty$. The first regime is where $\bar{n}$ is fixed in size. In this regime the increasing number of terms available for annotating an object, as $L\rightarrow\infty$, is not made use of, {\it i.e.}, the increased information captured within the larger ontologies is effectively ignored. Consequently, we also find it instructive to consider a second regime where $z$ is fixed, resulting in $\bar{n}$ scaling linearly with $L$. Detailed analysis of the behaviour of $\beta\Omega^{*}$ under these two regimes is given below, for both $\lambda < 0$ and $\lambda > 0$.\newline \noindent \underline{$\bm{\bar{n}}$ \bf{fixed as} $\bm {L\rightarrow\infty,\; \lambda < 0:}$} \noindent For $\lambda < 0$ we have $a = b^{\lambda} < 1$ and so $a^{L}\rightarrow 0$ as $L\rightarrow\infty$. Therefore we can expand the right-hand-side of both Eq.(\ref{eq:R.3f2}) and Eq.(\ref{eq:R.3f3}) in powers of $a^{L}$.
Doing so, we find, \begin{equation} \bar{n} \; = \; z_{0}f^{'}_{2}(z_{0})\;\; , \;\;z \; = \; z_{0} \;+\; O(a^{L})\;\; , \label{eq:R.4.3} \end{equation} \begin{equation} -\frac{\beta F^{*}}{|{\mathcal O}|} \; = \; f_{1}(0)\;+\;f_{2}(z_{0}) \;+\; \frac{z^{2}_{0}f^{'}_{1}(0)f_{2}^{''}(z_{0})}{f_{2}^{'}(z_{0}) + z_{0}f_{2}^{''}(z_{0})}\,a^{L}\;+\; O(a^{2L})\;\; , \label{eq:R.4.2} \end{equation} \noindent and so the leading order contributions to $\beta\Omega^{*}$ take the form, \begin{equation} \beta\Omega^{*}\;\simeq\; {\rm Constant} \;-\;|{\mathcal O}|\frac{z^{2}_{0}f^{'}_{1}(0)f_{2}^{''}(z_{0})}{f_{2}^{'}(z_{0}) + z_{0}f_{2}^{''}(z_{0})}\,a^{L} \;+\;\frac{\omega \log b}{(b-1)}Lb^{L+1}\;\;\;{\rm as}\;L\rightarrow\infty\;\;. \label{eq:R.5} \end{equation} \noindent The optimal tree depth $L_{opt}$ is determined by minimizing $\beta\Omega^{*}$ with respect to $L$. With the constant in Eq.(\ref{eq:R.5}) being independent of $L$ we find that, provided $\frac{z^{2}_{0}f^{'}_{1}(0)f_{2}^{''}(z_{0})}{f_{2}^{'}(z_{0}) + z_{0}f_{2}^{''}(z_{0})} < 0$, we have, \begin{equation} L_{opt} \;\sim\; \frac{\log |{\mathcal O}|}{(1-\lambda)\log b}\;\;\;,\;{\rm as}\;|{\mathcal O}|\rightarrow\infty\;\; . \label{eq:R.5b} \end{equation} \noindent We show in \ref{sec:proof1} that for $ 0 > \log a > -6 -4\sqrt{6}\simeq -15.8$ we indeed have that $\frac{z^{2}_{0}f^{'}_{1}(0)f_{2}^{''}(z_{0})}{f_{2}^{'}(z_{0}) + z_{0}f_{2}^{''}(z_{0})} < 0$. So in this regime for $\log a$ Eq.(\ref{eq:R.5b}) predicts growth of $L_{opt}$, via a series of transitions, as $|{\mathcal O}|$ is increased, with Eq.(\ref{eq:R.5b}) giving the global scaling relation between the optimal value $L_{opt}$ and $|{\mathcal O}|$. It is also worth noting that the scaling relation in Eq.(\ref{eq:R.5b}) predicts that the number of leaf nodes, $b^{L_{opt}}$, or equivalently the number of required classes, grows slower than the number of objects $|{\mathcal O}|$. 
The restriction to $\log a > -15.8$ is not an onerous one, as it is likely to be well outside the range of values for $a$ that we are interested in (see the end of \ref{sec:proof1} for a discussion of this point).\newline \noindent \underline{$\bm{\bar{n}}$ \bf{fixed as} $\bm {L\rightarrow\infty,\; \lambda > 0:}$} \noindent For $\lambda > 0$ we have $a=b^{\lambda} > 1$ and so from Eq.(\ref{eq:R.3f}) we can see that in order to maintain a fixed value of $\bar{n}$ as $L\rightarrow\infty$ we must have the scaling $z\sim a^{-L}$. Thus, in general we write $z=\hat{z}_{1}a^{-L} + \hat{z}_{2}a^{-2L} + O(a^{-3L})$. With $z \ge 0$, we must have $\hat{z}_{1}\ge 0$ for this decomposition of $z$ to hold over an arbitrary range of $L$. Substituting this form for $z$ into Eq.(\ref{eq:R.3f}) we find, \begin{equation} \bar{n}\;=\; \hat{z}_{1}f^{'}_{1}(\hat{z}_{1})\;\;\;\;,\;\;\;\;\hat{z}_{2}\;=\;-\frac{\hat{z}_{1}f^{'}_{2}(0)}{f^{'}_{1}(\hat{z}_{1})+\hat{z}_{1}f^{''}_{1}(\hat{z}_{1})} \;\; . \label{eq:R.14} \end{equation} \noindent From this we find, \begin{equation} \beta \Omega^{*} \;\simeq\; -|{\mathcal O}|\left ( {\rm Constant} \;-\;a^{-L}\hat{z}_{1}\hat{z}_{2}f^{''}_{1}(\hat{z}_{1})\;+\; O(a^{-2L})\right )\;\;+\;\;\frac{\omega \log b}{(b-1)}Lb^{L+1}\;\; . \label{eq:R.15} \end{equation} \noindent The constant in the expansion above does not depend upon $L$. In \ref{sec:proof2} we show that $\hat{z}_{1}\hat{z}_{2}f^{''}_{1}(\hat{z}_{1}) < 0$, leading to the immediate conclusion that, as $L\rightarrow\infty$, the leading order non-constant contribution to $\beta\Omega^{*}$ increases with $L$, and so cannot counter-balance the increasing contribution from the complexity penalty term. That is, large tree depths $L$ will never be optimal (in the sense of producing a stationary value of $\beta\Omega^{*}$) for any value of $|{\mathcal O}|$, irrespective of the values of $\bar{n}$ and $a$.
A natural corollary is that the optimal tree depth, $L_{opt}$, is then simply that of the smallest tree that will admit the required value of $\bar{n}$, {\it i.e.}, $L_{opt} = \bar{n} + 1$, though strictly speaking the leading order asymptotic analysis may no longer be valid at such values of $L$. However, the fact that the asymptotic analysis suggests that using large trees is sub-optimal, irrespective of how many objects we wish to annotate, is surprising.\newline \noindent \underline{$\bm{z}$ \bf{fixed as} $\bm {L\rightarrow\infty,\; \lambda < 0:}$} \noindent With $a<1$ for $\lambda <0$ we have $za^{L}\rightarrow 0$ as $L\rightarrow\infty$, and so expanding $\beta\Omega^{*}$ in powers of $a^{L}$ gives, \begin{equation} \beta\Omega^{*}\;\simeq\; |{\mathcal O}| \left [ {\rm Constant}\;-\; za^{L}\left ( \frac{1}{\log a} + \frac{1}{2} +\frac{\log a}{12}\right )\right ] \;+\; \frac{\omega \log b}{(b-1)}Lb^{L+1} \;\; . \label{eq:R.5c} \end{equation} \noindent Again, the constant in the expansion of $\beta\Omega^{*}$ has no dependence upon $L$. As we have already shown in \ref{sec:proof1}, when $\log a < 0$ we have $\frac{1}{\log a}+\frac{1}{2} + \frac{\log a}{12} <0$. On setting the derivative (with respect to $L$) of Eq.(\ref{eq:R.5c}) to zero we obtain, \begin{equation} L_{opt} \;\sim\; \frac{\log |{\mathcal O}|}{(1-\lambda)\log b}\;\;\;,\;{\rm as}\;|{\mathcal O}|\rightarrow\infty\;\; . \label{eq:R.5d} \end{equation} \noindent Again, we see that it is predicted that the number of classes grows slower than the number of objects $|{\mathcal O}|$.\newline \noindent \underline{$\bm{z}$ \bf{fixed as} $\bm {L\rightarrow\infty,\; \lambda > 0:}$} \noindent For $\lambda > 0$ we have $a=b^{\lambda} > 1$ and so for $z$ fixed we have $za^{L}\rightarrow\infty$ as $L\rightarrow\infty$.
Then to obtain the asymptotic behaviour of $\beta\Omega^{*}$ we observe the following relations and representation for the dilogarithm $Li_{2}(x)$ \cite{Kirillov1995}, \begin{eqnarray} && Li_{2}(x) \;+\; Li_{2}(-x) \;=\; \frac{1}{2}Li_{2}(x^{2})\;\;\forall x\in \field{C}\;\;, \nonumber \\ && Li_{2}(x)\;=\; \frac{\pi^{2}}{3}\;-\;\frac{1}{2}\left (\log x\right )^{2}\;-\;i\pi\log x \;-\;\sum_{k=1}^{\infty}\frac{1}{k^{2}x^{k}}\;\;\;\forall x \in \field{R}, x \geq 1\;\;. \label{eq:R.6} \end{eqnarray} \noindent Utilising the results above for $Li_{2}(x)$ we finally arrive at the leading order behaviour of $\beta F^{*}$, \begin{equation} \frac{\beta F^{*}}{|{\mathcal O}|} \; = \; {\rm Constant}\;-\; \frac{L^{2}}{2}\log a\;-\;L\left ( \log z\;+\;\frac{1}{2}\log a \right )\; + \; O\left (a^{-L}\right )\;\; . \label{eq:R.7} \end{equation} \noindent As before, the constant contains only terms that do not depend upon $L$. Setting $\left . \frac{\partial \beta\Omega^{*}}{\partial L}\right |_{L=L_{opt}} = 0$, we find the optimal tree depth $L_{opt}$ satisfies the leading order scaling relation, \begin{equation} L_{opt}\;\sim\; \frac{\log |{\mathcal O}|}{\log b}\;\;\;\; {\rm as}\; |{\mathcal O}|\rightarrow\infty\;\; . \label{eq:R.11} \end{equation} \noindent In contrast to the previous scenarios, we now have $|{\mathcal O}|b^{-L_{opt}}\sim {\rm constant}$, {\it i.e.} the number of objects per class (leaf node) is approximately constant (or more correctly, only a weak function of $L_{opt}$). It is also interesting to observe that with $z$ fixed, the average number of terms used per object, $\bar{n}$, is given by, \begin{equation} \bar{n}\;=\; -\frac{z}{|{\mathcal O}|}\frac{\partial}{\partial z} \beta \Omega^{*} \; \simeq\; {\rm Constant} \;+\;L\;\;\;\;,\;\;{\rm as}\;L\rightarrow\infty\;\; . \label{eq:R.8} \end{equation} \noindent Thus for fixed $z$ we find $\bar{n}$ scaling linearly with $L$. 
If we have a measurement of $\bar{n}$ at a particular tree depth $L_{0}$, then we can re-express Eq.(\ref{eq:R.8}) as, \begin{equation} \bar{n}(L)\;=\; \bar{n}(L_{0})\;+\;(L\;-\; L_{0})\;\; . \label{eq:R.9} \end{equation} Summarizing the asymptotic behaviour across the four scenarios we have, \begin{equation} \begin{array}{ccccc} L_{opt} \times \log b & \sim & \log |{\mathcal O}| / (1-\lambda)\;\;\; &,&\;\bar{n}\;{\rm fixed,}\;\lambda < 0\;\; , \\ L_{opt} \times \log b & \sim & {\rm Constant}\;\;\;&,&\;\bar{n}\;{\rm fixed,}\;\lambda > 0\;\; , \\ L_{opt} \times \log b & \sim & \log |{\mathcal O}| / (1-\lambda)\;\;\;&,&\;z\;{\rm fixed,}\;\lambda < 0\;\; , \\ L_{opt} \times \log b & \sim & \log |{\mathcal O}|\;\;\;&,&\;z\;{\rm fixed,}\;\lambda > 0 \;\; . \end{array} \label{eq:R.10} \end{equation} \noindent As estimates for $\lambda, \bar{n}$ (and hence $z$) can already be obtained, the scaling laws (along with their associated amplitudes) provide us with a potential mechanism for assessing whether a tree-like ontology is of optimal size for the given number, $|{\mathcal O}|$, of objects which it is being used to annotate. Thus, we can potentially assess whether an existing ontology should be expanded, or is overly complex for its current usage. Application of these scaling laws is clearly dependent upon their accuracy and the values of $\lambda$ we are likely to encounter for larger ontologies. These aspects we assess in the next two sections. \subsection{Simulation validation of optimal ontology growth and scaling laws \label{sec:simulation} } The possibility of growth in the optimal tree depth $L_{opt}$ arises in three of the four distinct regimes considered above; namely at fixed $z$ for both $\lambda <0$ and $\lambda >0$, and also at fixed $\bar{n}$ for $\lambda < 0$. This is borne out by numerical calculations. In Figure \ref{fig:figR3}a we have shown the growth, with $|{\mathcal O}|$, in the tree depth $L_{opt}$. 
For fixed $\bar{n}$ with $\lambda < 0$ the fugacity $z$ tends to a finite non-zero value as $L\rightarrow\infty$, so essentially we can regard the fixed $\bar{n}, \lambda < 0$ regime as equivalent to the fixed $z$, $\lambda < 0$ regime. Therefore, we have performed the calculations at a number of different values for $\lambda$, but in all cases held the fugacity fixed to achieve a value of $\bar{n}=\frac{25}{7}$ at $L=5$. All calculations of $\beta\Omega^{*}$ have used the exact summation form for evaluation of $\beta F^{*}$. The dashed lines shown in Fig.\ref{fig:figR3}a correspond to the growth in $L_{opt}$ predicted by minimizing the leading order asymptotic contributions to the integral approximation of $\beta\Omega^{*}$. The correspondence between the simulation results and the long term growth trend predicted by the asymptotic analysis is good, confirming the leading order scaling relations given in Eq.(\ref{eq:R.10}). The predicted variation with $\lambda$ (when $\lambda < 0$) in the slope of the scaling relation is also clearly apparent from Fig.\ref{fig:figR3}a. \begin{figure*} \begin{center} \scalebox{0.23}{\includegraphics*[bb = 0 0 792 612]{Fig4a.eps} \includegraphics*[bb = 0 0 792 612]{Fig4b.eps}} \end{center} \caption{a) Plot of optimal tree depth $L_{opt}$ versus $\ln |{\mathcal O}|$ for various values of $\lambda$. The solid lines represent the optimal tree depth $L$ determined by finding the minimum of $\beta\Omega^{*}$ over integer values of $L$. The dashed lines represent the broad growth trend as predicted by minimizing the leading order asymptotic contributions to $\beta\Omega^{*}$.
b) Plots of the required values of $\lambda$ as $L$ increases, for fixed values of $I$ and $\bar{n}$.} \label{fig:figR3} \end{figure*} \subsection{The expected value of $\lambda$ for increasing complexity \label{sec:lambdaTrend} } It is natural to ask which of the above scenarios, $\lambda > 0$ or $\lambda < 0$, is appropriate for a real annotation process. A `least effort' argument would suggest that an annotator will attempt to limit the effort expended on annotating an object, and so $I$, the information retrieved per object, will most likely be fixed or only a weakly increasing function of $|{\mathcal O}|$. At fixed $z$ simple inspection of Eq.(\ref{eq:R.2b}) shows that $I$ is an increasing function of the tree depth $L_{opt}$, and hence of $|{\mathcal O}|$, for all values of $\lambda$. However, more detailed analysis of Eq.(\ref{eq:R.2b}) (in particular applying the ratio test for series convergence, when $\lambda < 0$) reveals that as $L\rightarrow\infty$ we have (at fixed $z$), \begin{eqnarray} I & \sim & \frac{\log b}{2}L^{2}\;+\; O(L) \;\;\;,\;\; \lambda > 0 \;\; ,\nonumber \\ I & \rightarrow & {\rm Constant} \;\;\;,\;\;\lambda < 0 \;\; . \end{eqnarray} \noindent Therefore, we associate $\lambda < 0$ with a fixed, or only weakly growing, value of $I$. This idea is corroborated if we consider the more intuitive scenario of a fixed value of $\bar{n}$ (as opposed to fixed fugacity $z$) and a fixed value of $I$, and determine the required value of $\lambda$ as the tree depth $L$ is increased. Figure \ref{fig:figR3}b shows plots, against $L$, of the value of $\lambda$ required. The value of $\bar{n}$ is held fixed at $\bar{n} = 25/7$ and we have fixed $I(\lambda, L, \bar{n}) = c\times I(\lambda=0, L=5, \bar{n})$ for different values of $c$. The value of $\lambda$ is then obtained by simultaneous solution of Eq.(\ref{eq:R.2}) and Eq.(\ref{eq:R.2b}). Irrespective of the value of $c$ we can see that increasing $L$ at fixed $I$ and $\bar{n}$ leads to $\lambda <0$.
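The simultaneous solution just described can be sketched as a pair of nested one-dimensional bisections: an inner solve of Eq.(\ref{eq:R.2}) for $z$ at fixed $\lambda$, and an outer solve of Eq.(\ref{eq:R.2b}) for $\lambda$, assuming (as the numerics bear out) that $I$ is monotone increasing in $\lambda$ at fixed $\bar{n}$. The Python sketch below uses the $\bar{n}=25/7$, $L_{0}=5$ calibration with $c=1$; all other choices are illustrative.

```python
import math

def mean_terms(L, b, z, lam):
    """Eq.(R.2): average number of terms selected per annotated object."""
    return sum(z * b ** (l * lam) / (1.0 + z * b ** (l * lam)) for l in range(L + 1))

def mean_info(L, b, z, lam):
    """Eq.(R.2b): average information retrieved per annotated object."""
    return math.log(b) * sum(l * z * b ** (l * lam) / (1.0 + z * b ** (l * lam))
                             for l in range(L + 1))

def solve_z(n_bar, L, b, lam):
    """Bisect Eq.(R.2), monotone increasing in z, for the required fugacity."""
    lo, hi = 1e-300, 1e12
    for _ in range(200):
        mid = math.sqrt(lo * hi)          # bisection on a log scale
        if mean_terms(L, b, mid, lam) < n_bar:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

def solve_lambda(I_target, n_bar, L, b):
    """Outer bisection for lambda, assuming I increases with lambda at fixed n_bar."""
    lo, hi = -5.0, 5.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        z = solve_z(n_bar, L, b, mid)
        if mean_info(L, b, z, mid) < I_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

b, n_bar = 2, 25 / 7
# Reference information level: c = 1, i.e. I at lambda = 0 and L = 5.
I_ref = mean_info(5, b, solve_z(n_bar, 5, b, 0.0), 0.0)
lams = [solve_lambda(I_ref, n_bar, L, b) for L in (5, 8, 12, 20)]
print(lams)   # lambda drifts negative as L grows at fixed I and n_bar
```

At $L=5$ the solution is $\lambda=0$ by construction; increasing $L$ at fixed $I$ and $\bar{n}$ should then drive the recovered $\lambda$ negative, in line with Fig.\ref{fig:figR3}b.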
Consequently, if the information or effort expended per object by an annotator is limited, we expect small negative values of $\lambda$ to be the norm. Furthermore, we note that the amount of annotation information retrieved per object essentially determines the specificity with which the different classes of objects are discriminated. Thus, as data sets of increasing size will potentially sample an increasing number of the classes ${\mathcal P}$ present in the population, a fixed value of $I$ may not be sufficient to discriminate them, {\it i.e.}, the annotation data set will not be of adequate quality. Therefore, the value of $\lambda$ estimated from a real data set provides a topology-derived metric to assess the quality of annotation and annotators, with a negative estimate for $\lambda$ giving possible cause for concern. \section{ Discussion and Conclusions \label{sec:Conc}} Although the real-world ontologies used in annotating data sets may be fixed in form, the annotation process, often being a manual one, is subject to variation. We have used statistical mechanics to construct a formalism that captures this variability in the annotation process. The formalism developed has allowed us both to understand the patterns seen in real data sets \cite{Kalankesh2012}, and to suggest measures of the ontology structure itself. This has been done by using a simple lattice-gas model of the term selection process combined with information theoretic concepts. Although measures of ontology structures \cite{Yao2011} already exist, these do not tend to incorporate the effects of the expected variability in the annotation process. Likewise, many studies have previously combined information theoretic concepts with ontologies, but these works have largely not focused on assessing the underlying ontology structure.
Instead, studies have used information theoretic concepts in assessing the similarity of individual terms within an ontology \cite{Resnik1995,Resnik1999, Calmet2004}, assessing the similarity between annotated objects \cite{Tao2007}, assessing the similarity between two annotation data sets \cite{Alterovitz2007}, assessing the similarity between users \cite{KohMui2001}, or in empirical studies of the growth in tag frequencies \cite{ChiMytkowicz2008}. Similarly, studies of annotation and tag statistics that relate to the underlying ontology structure exist within the physics literature \cite{Palla2008, Tibely2012}, but these have not been based upon a Hamiltonian model of the annotation process. By constructing a Hamiltonian model of the annotations we have built upon the work of Palla {\it et al.} \cite{Palla2008} and Tib\'{e}ly {\it et al.} \cite{Tibely2012}, and been able to construct, in a principled fashion, statistical mechanical measures of the ontology itself. Having developed a simple lattice-gas model of the annotation process, the statistical mechanical formalism has enabled us to progress further and to perform hypothetical experiments over an ensemble of different ontology structures, and in doing so gain new insight where it is not readily possible to perform real experiments. Our analysis of an ensemble of different ontology structures has focused on regular trees, though despite this restriction we still expect the high-level conclusions drawn in Sections \ref{sec:regTrees}, \ref{sec:simulation}, and \ref{sec:lambdaTrend} to hold more generally. Firstly, our detailed analysis of $\beta\Omega^{*}$ identifies a natural or optimal ontology size, and associated growth scaling law, given the number of objects $|{\mathcal O}|$ to be annotated. The scaling laws derived provide us with `rules-of-thumb' to decide when an ontology should be expanded to match the needs of the data sets to which it is being applied.
Secondly, our analysis in Section \ref{sec:lambdaTrend} reveals a natural tendency towards $\lambda <0$, as more complex ontologies are used to annotate larger collections of objects. This suggests that values of $\lambda$ appropriate to real-world annotation data sets will be typically small in magnitude, possibly even negative. This is borne out by the power-law exponents observed by Kalankesh {\it et al.} \cite{Kalankesh2012}. Small values of $\lambda$ would imply that annotators are typically recovering less information from the ontology than if they selected terms uniformly at random. As we equate effort expended with the information retrieved, a negative value of $\lambda$ suggests the effort being expended during the annotation is below that appropriate to the ontology complexity. This may be due to the structure of the ontology being more complex than is necessary to discriminate between the classes of objects being annotated, the annotation effort expended not being sufficient, or simply that there is currently insufficient evidence available to the annotators to discriminate between certain objects. With the statistical mechanical formalism, we can potentially extend the analysis presented here to construct measures capable of distinguishing these different possibilities. The statistical mechanical analysis we have presented here is based upon a relatively simple Hamiltonian. The richness of the behaviour we observe in the statistical mechanical model is more a consequence of the inhomogeneous field induced by the ontology topology than of the Hamiltonian itself. However, the analysis we have presented here is far from complete. The advantage of having expressed the annotation process in terms of a statistical mechanical formalism is that we can easily extend our analysis to obtain quantitative results for other scenarios or other Hamiltonians.
For example, to reflect more realistic annotation patterns, the analysis could be extended to take into account the non-independence of term usage. Indeed, term co-occurrence frequencies have been studied by Tib\'{e}ly {\it et al.} \cite{Tibely2012}, with terms that are closer (as measured by path length on the ontology DAG) occurring together more frequently than those further apart. Incorporating tag co-occurrence probabilities would turn our lattice-gas model from a non-interacting model into an interacting one. Alternatively, our statistical mechanical analysis can potentially be extended to account for more realistic aspects and nuances of the annotation process, {\it e.g.}, the mis-selection of terms that are not appropriate to the object being annotated.
\section{Introduction} \begin{figure*} \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.75\linewidth]{figures/text2image.png} \caption{Text to Image} \label{fig:text2image} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.75\linewidth]{figures/text2image2text.png} \caption{Text to Image to Text} \label{fig:text2image2text} \end{subfigure} \caption{Text to Image Generation Tasks} \label{fig:test} \end{figure*} The goal of the text-to-image task is to generate realistic images given a text description. This problem has many possible applications ranging from computer-aided design to art generation \cite{DBLP:journals/corr/abs-1711-10485}. Moreover, this multimodal problem is an interesting and important task in natural language understanding because it connects language to an understanding of the visual world. The problem can be naturally decomposed into two parts: embedding the text into a feature representation that captures relevant visual information and using that representation to generate a realistic image that corresponds to the text. This problem is particularly challenging for several reasons. For one, there is the issue of a domain gap between the text and image feature representations. Furthermore, there are a myriad of legitimate images that correspond to a single text description, and part of the goal is to be able to capture the diversity in plausible images \cite{DBLP:journals/corr/ReedAYLSL16}. Recent advances in the field of deep learning have made significant strides in this challenging task. In particular, recurrent architectures, such as LSTMs, can be used to learn feature representations from text, and generative adversarial networks (GANs) can be used to create images conditioned on that information. Nonetheless, the aforementioned challenges leave the text-to-image task an open problem.
Adding to those problems, GANs, despite having widespread success in generative learning, often produce lower-resolution images \cite{DBLP:journals/corr/ZhangXLZHWM16}, lack diversity in image generation, and fail to capture intricate details. The goal of our work is to explore and compare state-of-the-art methods for addressing some of these problems. These ideas include: stacking multiple GANs to sequentially learn higher resolution images; applying attention mechanisms to “focus” the generator on important parts of the text and to help bridge the domain gap; and unifying text-to-image and image-to-text in a single model. In addition to evaluating these ideas for diversity and quality of generated images, we also investigate the effect of using a pre-trained language model for contextualized word embeddings. Pretrained language models, such as BERT \cite{DBLP:journals/corr/abs-1810-04805} and ELMO \cite{DBLP:journals/corr/abs-1802-05365}, have revolutionized NLP as an effective means of transfer learning, similar to the impact of ImageNet on the field of computer vision. As such, we sought to explore the potential benefit of these pretrained embeddings since most current approaches learn the word embeddings from scratch. \section{Related Work} Generative adversarial networks (GANs) are the most widely used model in generative learning. Originally proposed to generate realistic images, GANs consist of two neural networks, a generator and a discriminator, that effectively compete with one another in a zero-sum game. The discriminator attempts to distinguish real and fake images, while the generator tries to create images that fool the discriminator into classifying them as real \cite{NIPS2014_5423}. Numerous works have built on this basic idea.
These ideas include conditional GANs, which pass a class label to both the generator and discriminator, unlike the original GAN where the generator creates an image solely from noise \cite{DBLP:journals/corr/MirzaO14}. GANs have also been used for style transfer between image domains. In this formulation, the generator is passed an image from a source domain and tries to fool the discriminator into thinking it is from the target domain. An extension to this idea is the CycleGAN, which learns to transfer from source to target and target back to source to ensure consistency and to stabilize training. This setup also ensures that important latent features are captured so that the source image can be reconstructed from the generated one \cite{originalcyclegan}. As we will see next, these ideas have a natural extension to the text-to-image problem. Reed et al. describe the first fully differentiable, end-to-end model that learns to construct images from text, building on conditional GANs \cite{DBLP:journals/corr/ReedAYLSL16}. They use a character-level convolutional-recurrent network to encode the input text. A fully-connected (FC) layer with a Leaky ReLU activation embeds the encoding into a lower dimension before concatenation with input noise drawn from a standard Gaussian, which is then fed into the generator. Instead of just training on pairs of either real images/matching text or fake images/matching text, they also train using pairs of real images with mismatched text. This encourages the generator not only to create realistic images regardless of the text, but also to create realistic images that match the text. Zhang et al. build upon this work in “StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks”. Inspired by other works that use multiple GANs for tasks such as scene generation, the authors used two stacked GANs for the text-to-image task \cite{DBLP:journals/corr/ZhangXLZHWM16}.
The motivating intuition is that the Stage-I GAN produces a low-resolution outline of the desired image and the Stage-II GAN fills in the details of the sketch. In addition to the stacked architecture, they also propose a novel augmentation technique to address the aforementioned interpolation issues. Rather than using the text embedding directly (following a single FC layer) as done in Reed et al., they instead use an FC layer to produce a mean and variance before sampling from the normal distribution. This sampling augments the data and increases the robustness. Another architecture developed by Xu et al. drew on the widespread success of attention-based models, particularly in NLP for tasks such as machine translation and image captioning. Their model, termed AttnGAN, introduces several novel ideas to the text-to-image task related to attention \cite{DBLP:journals/corr/abs-1711-10485}. Unlike previous approaches that focus on sentence-level encodings, AttnGAN extracts both sentence-level and word-level features from a bidirectional LSTM. In the stacked generator stages, multiplicative attention is performed over the encoded word vectors so the model can learn which words to attend to at each step. Finally, they add a Deep Attentional Multimodal Similarity Model, which is constructed to learn an attention-based matching score between the image-sentence pairs. Finally, Gorti et al. incorporate the ideas of stacking, attention, and cycle consistency in their state-of-the-art model, MirrorGAN \cite{mirrorgan}. Influenced by the CycleGAN architecture, the model adds an image-to-text component which acts as a sanity check that the image generated is indeed semantically consistent with the input caption text. The results demonstrate MirrorGAN's ability to train networks that can generate both higher-quality images and image details that are semantically consistent with a provided caption (and in comparison with the true image for a given caption).
MirrorGAN is the culmination of the work on the text-to-image problem. Nonetheless, there is room for improvement. The works up until this point use word embeddings trained from scratch. With the advent of pretrained language models such as ELMO or BERT, a possible extension is to initialize the embeddings with deep, contextualized word vectors derived from BERT or ELMO. In this work, we explore the effect of using BERT-derived word vectors. \section{Data} We used the 2011 Caltech-UCSD Birds 200 dataset (CUB-200), which contains 11,788 images of 200 different types of birds and is a widely used benchmark for text-to-image generation \cite{WahCUB_200_2011}. Each image provides a bounding box, and the images vary in size. Additionally, we have 10 text descriptions per image, downloaded from a GitHub repository, which serve as the captions for the generated images \footnote{\href{https://github.com/taoxugit/AttnGAN}{taoxugit AttnGAN} \label{attngancode}}. \section{Methods} \subsection{Data Preprocessing} We preprocess the data following the precedent set by StackGAN++ \cite{stackganpp}. This includes cropping all images to ensure all bounding boxes have at least a 0.75 object-image size ratio, then downsampling them to 64x64, 128x128, and 256x256. Then, the data is split into class-disjoint train and test sets. \subsection{Models} \subsubsection{AttnGAN} \begin{figure*}[ht] \centering \includegraphics[width=\textwidth]{figures/attngan.png} \caption{Attention GAN Architecture} \label{fig:attngan} \end{figure*} This first model combines elements of both the StackGAN \cite{DBLP:journals/corr/ZhangXLZHWM16} and attention \cite{DBLP:journals/corr/abs-1711-10485}. The AttnGAN first embeds the captions and runs them through an LSTM, generating both word and sentence vectors. Using the conditioning augmentation first proposed in StackGAN \cite{DBLP:journals/corr/ZhangXLZHWM16}, we create a mean and variance from the sentence embedding via a fully-connected layer.
We use this mean and variance to parameterize a normal distribution from which a sentence embedding sample is generated to pass into the GAN. This serves as regularization and promotes smoothness of the conditioning manifold. Additionally, we concatenate Gaussian noise to this new sentence embedding sample and pass it into the generator. Following the StackGAN architecture, we stack three generators together, generating 64x64, 128x128, and 256x256 images, respectively. Additionally, for the second and third generators, we pass the image features and the word embeddings through an attention module before feeding them into the next generator \cite{DBLP:journals/corr/abs-1711-10485}. Each of these generators has a corresponding discriminator that takes in both the original sentence embedding and the image. Finally, the 256x256 image is passed through an image encoder to generate local image features (a 17x17 feature map). These image features from the image encoder and the word features from the text encoder form the Deep Attentional Multimodal Similarity Model (DAMSM), which is trained with an attention loss \cite{DBLP:journals/corr/abs-1711-10485}. For stability, we pretrained this DAMSM model. \subsubsection{CycleGAN} \begin{figure*}[ht] \centering \includegraphics[width=\textwidth]{figures/cyclegan.png} \caption{Cycle GAN Architecture} \label{fig:cyclegan} \end{figure*} Our CycleGAN combines the AttnGAN and the original CycleGAN approach \cite{originalcyclegan}. By adding an RNN conditioned on the image features and the embedded captions, we attempt to return to text with the Semantic Text REgeneration and Alignment Module (STREAM) \cite{mirrorgan}. By learning this transition back to the original text domain, we allow our images to better represent our captions, as they must hold the latent information needed to recreate the original caption \cite{mirrorgan} \footnote{\href{https://github.com/komiya-m/MirrorGAN}{komiya-m MirrorGAN}}.
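The conditioning augmentation step described above can be sketched as follows. This is a minimal NumPy illustration of the reparameterization idea (one FC layer producing a mean and log-variance, then a sample from the resulting Gaussian), not the actual implementation; the layer sizes and weight initialization are illustrative assumptions:

```python
import numpy as np

def conditioning_augmentation(sent_emb, W, b, rng):
    """One FC layer maps the sentence embedding to (mu, log_var);
    the augmented embedding is sampled via the reparameterization trick."""
    h = W @ sent_emb + b                  # FC output, length 2*d
    d = h.shape[0] // 2
    mu, log_var = h[:d], h[d:]
    eps = rng.standard_normal(d)          # eps ~ N(0, I)
    return mu + np.exp(0.5 * log_var) * eps

rng = np.random.default_rng(0)
sent_emb = rng.standard_normal(256)                     # LSTM sentence embedding
W = 0.01 * rng.standard_normal((200, 256))              # hypothetical FC weights
b = np.zeros(200)
c_hat = conditioning_augmentation(sent_emb, W, b, rng)  # augmented embedding, length 100
z = rng.standard_normal(100)                            # Gaussian noise
gen_input = np.concatenate([c_hat, z])                  # input to the first generator
```

Because a fresh sample is drawn on every forward pass, nearby points on the conditioning manifold are all exercised during training, which is the source of the smoothing effect.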
Additionally, we added the pretrained BERT encoding transformers, which we use instead of the standard word embeddings. \subsubsection{BERT} Pretrained word vectors are a common component of many NLP models. However, until recently, one primary limitation of these word vectors was that they only allowed for one context-independent embedding per word. One of the biggest game-changers in recent NLP research is the advent of deep, contextualized word vectors. These vectors are derived from the internal states of deep, pretrained language models trained on massive corpora of text, and use the entire sequence to embed each word, not just the word itself. A key premise of this idea was previous research showing that different layers of an LSTM language model capture different information, such as part of speech at the lower levels and context at the higher layers \cite{DBLP:journals/corr/abs-1802-05365}. Peters et al. were the first to introduce this idea with their language model, ELMO. Their model was a deep, bidirectional LSTM with character-level convolutions. This pretrained model could then be used for more specific tasks, where each word vector is computed as the (learned) weighted sum of the hidden states of the LSTM, using the entire input sequence as input. Then, tasks like sentiment classification could be done by simply adding a fully connected layer on top of ELMO \cite{DBLP:journals/corr/abs-1802-05365}. Devlin et al. expanded on this work with the BERT model, which replaces the bidirectional LSTM with a bidirectional Transformer \cite{DBLP:journals/corr/abs-1810-04805}. The Transformer is another recent innovation in NLP that replaces the recurrent nature of LSTMs with positional encoding and blocks of self-attention, layer normalization, and fully connected layers \cite{DBLP:journals/corr/VaswaniSPUJGKP17}. This architecture has become the de facto model for NLP, replacing LSTMs and standard RNNs in many cases.
The BERT model has been widely used to achieve state-of-the-art results in challenging tasks such as Question-Answering (QA). In this work, we used a pretrained BERT model to obtain our embeddings and pass them through a fully connected layer before continuing in the CycleGAN architecture. \subsection{Loss} For a generator $G_i$ and the corresponding discriminator $D_i$, we have the following loss function, which combines both an unconditional and a conditional loss (conditioned on the sentence embedding): \begin{align*} \mathcal{L}_{G_i} = - \frac{1}{2} \mathbb{E}_{\hat x_i \sim p_{G_i}} [\log(D_i(\hat x_i))]\\ - \frac{1}{2} \mathbb{E}_{\hat x_i \sim p_{G_i}} [\log(D_i(\hat x_i, \bar e))] \end{align*} where $\hat x_i$ is the generated image and $\bar e$ is the sentence embedding. Then, we have the following discriminator loss: \begin{align*} \mathcal{L}_{D_i} = - \frac{1}{2} \mathbb{E}_{x_i \sim p_{data_i}} [\log(D_i(x_i))]\\ - \frac{1}{2} \mathbb{E}_{\hat x_i \sim p_{G_i}} [\log(1 - D_i(\hat x_i))]\\ - \frac{1}{2} \mathbb{E}_{x_i \sim p_{data_i}} [\log(D_i(x_i, \bar e))]\\ - \frac{1}{2} \mathbb{E}_{\hat x_i \sim p_{G_i}} [\log(1 - D_i(\hat x_i, \bar e))] \end{align*} \subsubsection{AttnGAN Loss} With the word embedding matrix $e$ and the image region features $v$, we calculate a similarity score between the words of the sentence and the regions of the image: \begin{align} s = e^\intercal v \end{align} This yields $s \in \mathbb{R}^{T \times 289}$, with $T$ the number of words in the sentence and 289 referring to the flattened 17x17 image feature map. We then normalize the similarity matrix: \begin{align} \bar s_{ij} = \frac{\exp(s_{ij})}{\sum_{k=1}^T \exp(s_{kj})} \end{align} We build a context vector $c_i$ that represents the image regions related to the $i$th word in the sentence.
\begin{align} c_i = \sum_{j=1}^{289} \alpha_j v_j \text{, where } \alpha_j = \frac{\exp(\gamma \bar s_{ij})}{\sum_{k=1}^{289} \exp(\gamma \bar s_{ik})} \end{align} where $\gamma$ is a hyperparameter controlling how strongly the attention concentrates on the most relevant regions. We then have an attention-driven image-text matching score between the entire image $Q$ and the whole text description $D$, which uses the cosine similarity cosine$(c_i, e_i) = \frac{c_i^\intercal e_i}{\|c_i\| \|e_i\|}$: \begin{align} R(Q,D) = \log \Big( \sum_{i=1}^T \exp(\gamma \text{cosine}(c_i, e_i))\Big) \end{align} We define the DAMSM probabilities between the image-sentence pairs in the batch as \begin{align} P(D_i|Q_i) = \frac{\exp(\gamma R(Q_i, D_i))}{\sum_{j=1}^M \exp(\gamma R(Q_i, D_j))}\\ P(Q_i|D_i) = \frac{\exp(\gamma R(Q_i, D_i))}{\sum_{j=1}^M \exp(\gamma R(Q_j, D_i))} \end{align} Therefore, the DAMSM combines the following: \begin{align} \mathcal{L}_1^w = - \sum_{i=1}^M \log P(D_i | Q_i)\\ \mathcal{L}_2^w = - \sum_{i=1}^M \log P(Q_i | D_i) \end{align} We also define $\mathcal{L}_1^s$ and $\mathcal{L}_2^s$ in the same way, but substituting $\bar e$ for $e$.
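The word-region attention and matching score above can be sketched in NumPy. This is a toy illustration with random features; the single $\gamma$, the caption length $T$, and the feature dimension are arbitrary choices, and real implementations batch these operations on the GPU:

```python
import numpy as np

def softmax(x, axis):
    """Numerically stable softmax along a given axis."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def matching_score(e, v, gamma=5.0):
    """Attention-driven image-text matching score R(Q, D).
    e: word features, shape (T, d); v: region features, shape (289, d)."""
    s = e @ v.T                              # word-region similarities s_ij, (T, 289)
    s_bar = softmax(s, axis=0)               # normalize over the words of the sentence
    alpha = softmax(gamma * s_bar, axis=1)   # attention weights over the 289 regions
    c = alpha @ v                            # context vectors c_i, shape (T, d)
    cos = (c * e).sum(axis=1) / (np.linalg.norm(c, axis=1) * np.linalg.norm(e, axis=1))
    return np.log(np.exp(gamma * cos).sum()) # log-sum-exp over words gives R(Q, D)

rng = np.random.default_rng(0)
T, d = 12, 256                               # words per caption, feature dimension
words = rng.standard_normal((T, d))
regions = rng.standard_normal((289, d))
R = matching_score(words, regions)
```

In training, $R(Q, D)$ would be computed for every image-sentence pair in the batch and fed through the two softmaxes over $j$ to obtain the DAMSM probabilities.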
Combining everything, we have the final loss of the attention generator: \begin{align} \mathcal{L}_{DAMSM} = \mathcal{L}_1^w + \mathcal{L}_2^w + \mathcal{L}_1^s + \mathcal{L}_2^s \end{align} \begin{align} \mathcal{L} = \mathcal{L}_G + \lambda \mathcal{L}_{DAMSM} \text{, where } \mathcal{L}_G = \sum_{i=1}^3 \mathcal{L}_{G_i} \end{align} \subsubsection{CycleGAN Loss} In addition to the loss of the AttnGAN, we add a cross-entropy loss for correctly predicting the output word in the caption recreation: \begin{align*} \mathcal{L}_{CE} = - \frac{1}{M} \sum_{i=1}^M \sum_{c=1}^{|V|} y_c^{(i)} \log (\hat y_c^{(i)})\\ \mathcal{L} = \mathcal{L}_G + \lambda \mathcal{L}_{DAMSM} + \lambda \mathcal{L}_{CE} \end{align*} where $M$ is the batch size, $|V|$ is the size of the vocabulary, $y_c^{(i)}$ is the binary label of the $c$-th class of the $i$-th example, and $\hat y_c^{(i)}$ is the model probability output of the $c$-th class of the $i$-th example. \subsection{Evaluation Metrics} \subsubsection{Inception Score} To assess our models, we use the Inception score, which is a widely used metric for generative models \cite{DBLP:journals/corr/SalimansGZCRC16}. The Inception score uses a pretrained Inception model that is fine-tuned to the specific dataset being used. The score is computed by exponentiating the KL-divergence between the conditional distribution $p(y|x)$ and the marginal distribution $p(y)$, where $y$ is the class label predicted by the Inception model and $x$ is a generated sample. The intuition is that a good generative model should produce images with a conditional label distribution that has low entropy relative to the marginal distribution. In other words, we want individual images that can be easily classified into a category by the model, while the set of generated images spans many different classes.
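The Inception score computation just described can be sketched as follows, assuming the predicted class distributions $p(y|x)$ from the fine-tuned Inception model are already available as an array (the toy inputs below are illustrative, not model outputs):

```python
import numpy as np

def inception_score(p_yx, eps=1e-12):
    """p_yx: (N, C) array, row i holding p(y | x_i) for generated sample x_i.
    Returns exp of the mean KL divergence between p(y|x_i) and p(y)."""
    p_y = p_yx.mean(axis=0)                                             # marginal p(y)
    kl = (p_yx * (np.log(p_yx + eps) - np.log(p_y + eps))).sum(axis=1)  # KL per sample
    return float(np.exp(kl.mean()))

# Confident and diverse predictions: each image firmly in one of 4 classes.
confident = np.eye(4).repeat(5, axis=0)      # 20 samples, 4 classes
# Uninformative predictions: every image looks equally like every class.
uniform = np.full((20, 4), 0.25)

print(round(inception_score(confident), 3))  # → 4.0 (maximal for 4 classes)
print(round(inception_score(uniform), 3))    # → 1.0 (no information)
```

The two extremes illustrate the intuition: confident per-image predictions spread evenly over many classes maximize the score, while predictions indistinguishable from the marginal give a score of 1.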
\begin{align*} D_{KL} (P \| Q) = - \sum_{x \in \mathcal{X}} P(x) \log \Big(\frac{Q(x)}{P(x)}\Big)\\ IS(G) = \exp \Big(\mathbb{E}_{\mathbf{x} \sim p_G} D_{KL} (p(y|\mathbf{x}) \| p(y)) \Big) \end{align*} The score rewards images that have greater variety and has been shown to be well-correlated with human evaluations of realistic quality. We randomly select 20 captions for each class and use our trained model to generate images, which are then fed into the Inception model to produce the distributions and compute the score. \subsubsection{Mean Opinion Score (MOS)} Nonetheless, the Inception score cannot capture how well the generated images reflect accurate conditioning on the input text. Thus, we have humans examine the perceptual quality of images as well as their correspondence to the input text with the Mean Opinion Score \cite{DBLP:journals/corr/LedigTHCATTWS16}. Specifically, we asked $n=10$ subjects to rate the quality of images on a scale from 1 (poor quality) to 5 (high quality). We showed them 20 images from the ground truth, 20 from the AttnGAN, and 20 from the CycleGAN, along with corresponding captions, in random order, and averaged the ratings to report the MOS. \section{Results} \begin{figure}[ht] \includegraphics[width=.55\textwidth]{figures/ablation.png} \caption{Generated Images from the models} \label{fig:generated} \end{figure} We trained the AttnGAN model over 100 epochs using Adam optimization for the generator and all the discriminators. The CycleGAN was trained over 100 epochs using the same generator and discriminator optimizers with betas of 0.5 and 0.999. For the AttnGAN, we pretrained the DAMSM architecture for 200 epochs. For the CycleGAN, we pretrained the STREAM architecture for 100 epochs. We used pretrained BERT embeddings only for the CycleGAN implementation, while the AttnGAN used randomly initialized embeddings trained from scratch.
\\ We report both the Inception v3 scores of the generated test outputs for each of the models, computed as an average measure of divergence from the true distribution of bird images, and the qualitative MOS scores from peer judges, reported below. We see that the CycleGAN trained with BERT embeddings had the strongest performance overall across the proposed metrics, and we display generated samples from our model along with their representative ground truth image labels. \\ \begin{figure}[ht] \centering \begin{adjustbox}{width=.48\textwidth} \begin{tabular}{cccc} \toprule & \multicolumn{3}{c}{\textbf{Inception Score}}\\ \cmidrule{2-4} \textbf{Model} & Epoch 0 & Epoch 50 & Epoch 100\\ \midrule Ground Truth & 11.63 & - & - \\ AttnGAN & 0.94 & 2.78 & 3.92 \\ CycleGAN w/ BERT & 1.05 & 5.48 & 5.92 \\ \bottomrule \end{tabular} \end{adjustbox} \caption{Inception Scores of Models} \label{fig:inceptionscores} \end{figure} \begin{figure}[ht] \centering \begin{adjustbox}{width=0.48\textwidth} \begin{tabular}{cc} \toprule \textbf{Model} & \textbf{MOS (n=10)}\\ \midrule Ground Truth & 4.7\\ AttnGAN & 3.6\\ CycleGAN w/ BERT & 3.9\\ \bottomrule \end{tabular} \end{adjustbox} \caption{Mean Opinion Score (MOS) of Models with $n=10$ subjects} \label{fig:mos} \end{figure} We save model weights for the CycleGAN model with BERT text features every 25 epochs and compute Inception scores on the validation set during training. In Figure 7, we observe the CycleGAN Inception score leveling off but still increasing as we approach 100 epochs of training. \begin{figure}[ht] \centering \includegraphics[width=.50\textwidth]{figures/inception.png} \caption{CycleGAN Inception Scores} \label{fig:inception_score} \end{figure} \section{Discussion} Examining several images output by the AttnGAN and the CycleGAN (with BERT) in Figure 4, we can see some clear improvements from AttnGAN to CycleGAN. For one, the CycleGAN model generally produces clearer, more realistic looking images relative to AttnGAN.
Furthermore, the CycleGAN model appears to be more precise with respect to details. We can see that for the AttnGAN model, the colors are occasionally incorrect (presence of red in the top image and brown instead of grey in the bottom). Additionally, AttnGAN images lack the level of detail in the beak present in the CycleGAN images. We found the CycleGAN with pretrained BERT embeddings was able to outperform AttnGAN on the test set in both the Inception score (Figure 5) and the Mean Opinion Score (Figure 6). In particular, with respect to Inception scores, we found that CycleGAN with BERT was able to reach a higher score and train significantly faster, as indicated by the scores at 50 epochs (5.48 vs 2.78) and at 100 epochs (5.92 vs 3.92). In general, a higher Inception score reflects greater variety as well as distinctly capturing unique features, but we note that an ideal quantitative metric remains elusive for this task, particularly in capturing the correspondence between the image and caption. The higher performance of the CycleGAN with BERT on qualitative, human evaluation indicates some level of improvement in the image-text correspondence. One limitation of our work is that we were not able to train until convergence of the Inception score for comparison in the limit, which we note as a possible avenue for future work. \section{Conclusion} In this paper, we investigate the text-to-image generation task by experimenting with state-of-the-art architectures and incorporating the latest innovations in NLP, namely, the use of deep contextualized word vectors from pretrained language models, such as BERT. Our baseline model is the AttnGAN, which utilizes several key features, including stacking of GANs to progressively learn more detail at higher resolution and attention over word features, a technique that has found widespread success in a variety of NLP tasks.
Our main model adds two additional features: a cyclic architecture that adds the image-to-text task in addition to the text-to-image task, and the use of contextualized word embeddings from a pretrained BERT. Through both qualitative and quantitative metrics, we found that the addition of these features showed improved generation of images conditioned on the text and had faster learning. For future work, it would be useful to train the models longer until convergence in some metric (such as Inception score) is reached for complete analysis. In addition, we did not do any hyperparameter tuning due to time constraints and thus further improvement may be found through a hyperparameter search. Further, with more time, it would also be interesting to perform ablation studies on our full model to show the additional gain, if any, achieved from adding only BERT or only the cyclic architecture to AttnGAN. \section{Acknowledgements} We would like to thank the instructors and TAs for designing and running the course. In particular we would like to thank Ignacio Cases for guiding us on this project. \section{Authorship Statement} Trevor Tsue: Coded the AttnGAN and CycleGAN, trained models, made architecture diagrams, wrote architecture and loss.\\\\ Jason Li: Performed literature review, proposed and coded/integrated BERT extension with CycleGAN; wrote intro, related work, discussion, conclusion\\\\ Samir Sen: AttnGAN implementation and training. Worked on incorporating BERT encodings within the attention GAN text featurization and built inception network for capturing key metrics across models. Data cleaning and visualization. Results, discussion, abstract. \\
\section{Introduction}\label{sec:intro} In the early Universe, the first objects formed and filled the Universe with light. They ionized the neutral gas in the intergalactic medium (IGM) via a phenomenon called ``cosmic reionization''. One of the candidates for the main source of reionization is star-forming galaxies, whose ionizing radiation, called ``Lyman Continuum'' (LyC, $\lambda<912$ \AA), emitted from massive stars, is expected to leak into the IGM \citep[e.g.][]{Bouwens2015a,Bouwens2015b,Finkelstein2015a, Robertson2015, Livermore2017}. Another candidate is active galactic nuclei \citep[AGNs, e.g.][]{Madau2015}. However, they have recently been reported to contribute less than $\approx10$\% of the ionizing photons needed to keep the IGM ionized \citep[over a UV magnitude range of $-18$ to $-30$ mag; ][see also \citealt{Parsa2018}]{Matsuoka2018}. Previous studies using the Gunn-Peterson absorption trough seen in quasar spectra \citep[e.g.][see however, \citealt{Bosman2018}]{Gunn1965, Fan2006, McGreer2015} and in gamma-ray burst spectra \citep[e.g.][]{Totani2006,Totani2014} suggest that cosmic reionization was completed by $z\approx6$. The Thomson optical depth of the cosmic microwave background measured by Planck suggests that the midpoint redshift of reionization (i.e. when half the IGM had been reionized) is at $z\approx7.7\pm0.7$ \citep[1$\sigma$ confidence interval,][]{PlanckCollaboration6_2018arXiv}. Ly$\alpha$ emission is intrinsically the strongest UV spectral feature of young star forming galaxies, and galaxies with mostly detectable Ly$\alpha$ emission or with Ly$\alpha$ equivalent widths higher than $\approx25$ \AA\ are called ``Ly$\alpha$ emitters (LAEs)''. Ly$\alpha$ emission is scattered by neutral hydrogen gas (\mbox{H\,{\sc i}}) in the IGM, and, therefore, the detectability of LAEs is affected by the \mbox{H\,{\sc i}}\ gas fraction in the IGM.
The redshift evolution of Ly$\alpha$ luminosity functions has thus been used to investigate the history of the neutral hydrogen gas fraction of the IGM \citep[e.g.][]{Malhotra2004,Kashikawa2006,Hu2010,Ouchi2010, Santos2016, Drake2017b, Ota2017, Zheng2017, Konno2018,Itoh2018}. Ly$\alpha$ luminosity functions can be used to compute the evolution of the Ly$\alpha$ luminosity density, and its rapid decline at $z\gtrsim5.7$ compared with that of the cosmic star formation rate density derived from UV luminosity functions is interpreted to be caused by IGM absorption \citep[e.g.][]{Ouchi2010, Konno2018}. Similarly, the fraction of LAEs among UV selected galaxies, $X_{\rm LAE}$, can also be used to probe the evolution of the \mbox{H\,{\sc i}}\ gas fraction of the IGM \citep[e.g.][]{Fontana2010, Pentericci2011, Stark2011, Ota2012, Treu2013, Caruana2014, Faisst2014, Schenker2014}. $X_{\rm LAE}$ has been reported to increase from $z\approx3$ to $6$ and then to drop at $z>6$. This has again been interpreted as a signature of the IGM becoming more neutral at $z>6$ \citep[e.g.][]{Dijkstra2011,Jensen2013, Mason2018}. The LAE fraction is complementary to the test of Ly$\alpha$ luminosity functions (LFs) and has some advantages: efficient spectroscopic observations as a follow-up of continuum-selected galaxies, which is insensitive to the declining number density of star forming galaxies, and rich information obtained from the spectra such as spectroscopic redshifts and kinematics of the interstellar medium \citep[e.g.][]{Stark2010, Hashimoto2015}. It also enables us to break a degeneracy that affects the comparison of Ly$\alpha$ and UV continuum luminosity densities (obtained by integrating the Ly$\alpha$ and UV LFs): that comparison cannot separate the Ly$\alpha$ escape fractions of star forming galaxies with different UV magnitudes.
In addition, \citet{Kakiichi2016} recently suggested that the UV magnitude-dependent evolution of the LAE fraction, combined with the Ly$\alpha$ luminosity function, can be used to constrain the ionization topology of the IGM and the history of reionization. Using $X_{\rm LAE}$ to set quantitative constraints on the evolution of the neutral content of the IGM remains challenging, however. In particular, we need to understand whether observed variations of $X_{\rm LAE}$ are exclusively due to variations in the IGM properties, or whether they can be attributed to galaxy evolution. Following the Ly$\alpha$ spectroscopic observations of Lyman break galaxies (LBGs) at $z\approx3$ by \citet{Steidel2000} and \citet{Shapley2003}, \citet{Stark2010, Stark2011} found that $X_{\rm LAE}$ among LBGs evolves with redshift and depends on the rest-frame Ly$\alpha$ equivalent width ($EW(\rm Ly\alpha)$) cut. They also show that $X_{\rm LAE}$ depends on the absolute rest-frame UV magnitude ($M_{1500}$), such that UV-faint galaxies are more likely to show Ly$\alpha$ emission than UV-bright galaxies \citep[see also][]{Schaerer2011a,Forero-Romero2012, Garel2012}. One conclusion from these studies is that the evolution of $X_{\rm LAE}$ with redshift is more prominent for UV-faint galaxies and low $EW(\rm Ly\alpha)$ cuts. However, several recent studies find lower values of $X_{\rm LAE}$ for UV-faint galaxies ($-20.25< M_{1500}<-18.75$ mag) than those in the pioneering work of \citet{Stark2011}. At $z\approx4$ and $z\approx5$, \citet{ArrabalHaro2018} find $X_{\rm LAE}$ values more than $1\sigma$ lower for the faint $M_{1500}$ range and the low EW cut ($25$ \AA), though their result at $z\approx6$ is consistent with that of \citet{Stark2011}. \citet{DeBarros2017} also investigate $X_{\rm LAE}$ for UV-faint galaxies with a low EW cut at $z\approx6$.
They obtain a low median value of $X_{\rm LAE}$, even slightly lower than the value previously found at $z\approx5$, though consistent with it within $1\sigma$. They conclude that the drop at $z>6$ is less dramatic than previously found \citep[see also][for their recent study at $z\approx7$]{Pentericci2018}. \citet{DeBarros2017} and \citet{Pentericci2018} also suggest the possibility that the effect of an increase of the \mbox{H\,{\sc i}}\ gas fraction in the IGM is already observed at $5 < z < 6$. This would be consistent with a later and more inhomogeneous reionization process than previously thought, as has also been suggested recently by observations and simulations of fluctuations in the Ly$\alpha$ forest \citep[e.g.,][]{Bosman2018, Kulkarni2019,Keating2019}. The parent LBG sample in \citet{DeBarros2017} is selected with an additional UV magnitude cut on top of a standard LBG selection, while the parent sample in \citet{ArrabalHaro2018} is mostly based on photometric redshifts (photo-z), even though it is regarded as an LBG sample in their paper. Therefore, the results on $X_{\rm LAE}$ for faint $M_{1500}$ are not yet conclusive, possibly owing to the different parent sample selections. Moreover, for UV-bright galaxies, the redshift evolution of $X_{\rm LAE}$ for a $25$ \AA\ EW cut has not been confirmed yet \citep[e.g.][]{Stark2011,Curtis-Lake2012,Ono2012,Schenker2014,Stark2017,Cassata2015,Mason2019}. \citet{DeBarros2017} and \citet{Pentericci2018} suggest that some previous results are affected by an LBG selection bias: since strong Ly$\alpha$ emission contributes to the red selection band, strong LAEs are selected more easily than galaxies without Ly$\alpha$ emission at faint UV magnitudes. This results in an artificially high LAE fraction among LBGs \citep[see also][]{Stanway2008,Inami2017}.
\citet{ArrabalHaro2018} assess the UV completeness of their parent sample using UV luminosity functions and find that their 90\% completeness magnitude is $\approx-20$ and $-21$ mag at $z\approx4$ and $z\approx5$, respectively. To summarize, it is important to obtain a firm conclusion about the evolution of $X_{\rm LAE}$ in the post-reionization epoch in order to quantify the drop of $X_{\rm LAE}$ at $z > 6$ and to assess the reliability of $X_{\rm LAE}$ as a probe of reionization. However, although there are a number of observational studies of $X_{\rm LAE}$, uncertainties in the measurement and interpretation of $X_{\rm LAE}$ are still a matter of debate \citep[e.g.][]{Stark2011, Garel2015, DeBarros2017, Caruana2018, Mason2018, Hoag2019aApJ, Hoag2019bMNRAS}. One of the biggest problems is the LBG selection bias due to the different depths of the selection bands used in previous studies. It is worth pointing out that none of the previous studies was based on a complete parent sample of UV-faint galaxies ($-20.25< M_{1500}<-18.75$ mag). Completeness in terms of UV magnitude, as well as a homogeneously selected sample over a wide redshift range, is essential for the determination of $X_{\rm LAE}$. In addition, we also need deep and homogeneous spectroscopic observations of Ly$\alpha$ emission over a wide redshift range. In this study, we use a combination of deep Hubble Space Telescope (HST) and Very Large Telescope (VLT)/Multi-Unit Spectroscopic Explorer (MUSE) data to overcome these limitations and improve our knowledge of the evolution with redshift of $X_{\rm LAE}$ among a homogeneous parent sample of UV-faint galaxies. We use HST bands that are not contaminated by Ly$\alpha$ emission to measure UV magnitudes and thus avoid a selection bias.
Deep and homogeneous spectroscopic Ly$\alpha$ observations over a wide redshift range have been achieved in the {\it Hubble} Ultra Deep Field (HUDF) by VLT/MUSE \citep[][]{Bacon2010} in the guaranteed time observations (GTO) MUSE-HUDF survey \citep[e.g.][]{Bacon2017}. The LAE fraction has already been investigated with MUSE and HST data, using the MUSE-Wide GTO survey in the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS) Deep region, in \citet{Caruana2018}. Their sample is constructed with an apparent magnitude cut of $F775W \leq26.5$ mag from an HST catalog, which roughly corresponds to $M_{\rm 1500}\approx-19$--$-20$ mag at $z\approx3$--$6$. The MUSE HUDF data, however, enable us to measure faint Ly$\alpha$ emission even for faint UV sources (down to $-17.75$ mag) in existing HUDF photometric catalogs. The paper is organized as follows. In Sect. 2, we describe the data, methods, and samples: our UV-selected samples, the MUSE data, the detection and measurement of Ly$\alpha$ emission, the calculation of the LAE fraction, and its uncertainties. Sect. 3 presents the LAE fraction as a function of redshift and UV magnitude. In Sect. 4, we discuss our results: the differences in LAE fraction from previous results, a comparison with predictions from a model of galaxy formation, and implications for reionization. Finally, the summary and conclusions are given in Sect. 5. Throughout this paper, we assume a flat cosmological model with a matter density of $\Omega_{\rm m} = 0.3$, a cosmological constant of $\Omega_{\Lambda} = 0.7$, and a Hubble constant of $H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$ ($h_{100} = 0.7$). Magnitudes are given in the AB system \citep{Oke1983}. \section{Data, methods, and samples}\label{sec:data_sample} In Sect. \ref{subsec:parent_sample}, we first discuss the construction of a volume-limited UV-selected sample of galaxies from the HUDF catalog of \citet[][hereafter R15]{Rafelski2015}\footnote{\url{https://archive.stsci.edu/hlsp/uvudf}}.
In Sect. \ref{subsec:CountingLAEs} we explain how we use MUSE data to detect and measure Ly$\alpha$ emission from galaxies of our UV sample. In Sect. \ref{subsec:uncertainties} we lay out our calculations of $X_{\rm LAE}$ and discuss the error budget. In Sect. \ref{subsec:slope} we present our measurement of the slopes of $X_{\rm LAE}$ as a function of $z$ and $M_{1500}$. We discuss briefly in Sect. \ref{subsec:haloDiscussion} the effects of extended Ly$\alpha$ emission. \subsection{UV-selected samples}\label{subsec:parent_sample} We build our parent sample of high-redshift UV-selected galaxies using the latest HUDF catalog from R15. Sources in this catalog are detected in the average-stacked image of eight HST bands: four optical bands from $ACS/WFC$ ($F435W$, $F606W$, $F775W$, and $F850LP$), and four near-infrared (NIR) bands from $WFC3/IR$ ($F105W$, $F125W$, $F140W$, and $F160W$). In total, out of the $9969$ sources in R15's catalog, $1095$ and $7904$ objects are within the footprints of the \textsf{udf-10}{} and \textsf{mosaic}{} regions of the MUSE HUDF Survey\footnote{The survey consists of two layers of different depths: a shallower area with $9$ MUSE pointings (\textsf{mosaic}) and a deeper area with $1$ MUSE pointing (\textsf{udf-10}) within the \textsf{mosaic}. More details are given in Sect. \ref{subsec:muse}.} \citep[][]{Bacon2017}, respectively (the duplicated region in \textsf{udf-10}{} is removed). We note that the $F140W$ image only covers $6.8$ arcmin$^2$ of the $11.4$ arcmin$^2$ footprint of R15's catalog. The $F140W$ photometry is only used when it is available in R15. As discussed in footnote 2 of \citet{Hashimoto2017b}, the lack of $F140W$ may affect the detection of sources in R15's catalog. Moreover, the footprint of $F140W$ is also covered by deeper NIR images ($F105W$, $F125W$, and $F160W$).
Indeed, the fraction of sources identified by R15 at $z\gtrsim6$ within the footprint of the $F140W$ image is higher than in the area without $F140W$ data. In order to avoid contamination from neighboring objects, we follow \citet{Inami2017} and discard all HST sources which have at least one neighbour within $0\farcs6$. Such associations cannot be resolved in our MUSE observations, where the full width at half maximum (FWHM) of the average seeing of the DR1 data is $\approx0\farcs6$ at $7750$ \AA{}. This procedure excludes $\approx 20\%$ of the sources. We assume that this does not result in a significant bias, as it effectively only decreases the survey area. This assumption holds if interacting systems are not more often LAEs than isolated systems. R15 provide photometric redshifts and associated errors for all objects. These are obtained via spectral energy distribution (SED) fitting of photometric data in $11$ HST bands using either the Bayesian Photometric Redshift (BPZ) algorithm \citep{Benitez2000,Benitez2004,Coe2006} or the EAZY software \citep{Brammer2008}. In the present paper, we choose to use the results from BPZ because they are found to be more accurate \citep[][]{Rafelski2015,Inami2017,Brinchmann2017}. Note that R15 do not include Spitzer/IRAC data in their SED fitting. We show in Appendix \ref{ap:irac} that this addition would not improve the photometric redshifts of the faint galaxies studied here. Below, $z_{\rm p}$ denotes the photometric redshift given in R15\footnote{The photometric redshift value is the one which maximizes the likelihood estimate in BPZ.}, and we use it to define the redshift selections of all our UV samples (see Table \ref{tbl:criteria}). In R15's catalog, the $95$\% lower and upper limits of $z_{\rm p}$ are provided as uncertainties on $z_{\rm p}$. We use these to construct an {\it inclusive parent sample}, which we will use for Ly$\alpha$ searches.
This sample includes all sources with photometric redshift estimates ($95$\% confidence interval) overlapping with the redshift range ($2.91$--$6.12$) where Ly$\alpha$ can be observed with MUSE. Note that we remove sources at $z_{\rm p} > 6.12$ from our sample because the parent photometric-redshift sample may be affected by selection bias (see Sect. \ref{subsec:parent_sample}), and because Ly$\alpha$ detectability in MUSE spectra is strongly reduced by sky lines \citep[][]{Drake2017b}. In the end, this inclusive parent sample consists of $3233$ and $402$ sources in the \textsf{mosaic}{} and \textsf{udf-10}, respectively (without duplication). We derive the absolute rest-frame UV magnitude using two or three HST photometric points to fit a power-law to the UV continuum. The power-law describes the spectral flux density $f_\nu$ as $f_{\nu}= f_{0} (\lambda/\lambda_0)^{\beta+2}$, where $\lambda_0 = 1500$ \AA, $f_0$ is the spectral flux density at $1500$ \AA\ (in $\mathrm{erg\ s^{-1}\ cm^{-2}\ Hz^{-1}}$), and $\beta$ is the continuum slope. We then simply have $M_{\rm 1500}=-2.5\log{(f_0)}-48.6 -5\log{(d_{\rm L}/10)} + 2.5\log{(1 + z_{\rm p})}$, where $d_{\rm L}$ (pc) is the luminosity distance. We choose the HST filters following \citet{Hashimoto2017b} so that Ly$\alpha$ emission and IGM absorption are not included in the photometry: we use HST/$F775W$, $F850LP$, and $F105W$ for objects at $2.91\leq{z_{p}}\leq4.44$; $F105W$, $F125W$, and $F140W$ for objects at $4.44< z_{p} \leq5.58$; and $F125W$, $F140W$, and $F160W$ for objects at $5.58< z_{p} \leq 6.12$. While \citet{Hashimoto2017b} use the MUSE spectroscopic redshifts, we use the photometric redshifts from \citet{Rafelski2015}. Our derived $M_{\rm 1500}$ values are consistent with those of \citet{Hashimoto2017b} for the sources which we have in common (LAEs).
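The power-law fit and magnitude conversion above can be sketched as follows. This is a minimal illustration, not the authors' code: the luminosity distance is integrated numerically for the flat $\Lambda$CDM cosmology adopted in this paper, and the input wavelengths are assumed to be the rest-frame pivot wavelengths of the chosen filters.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import curve_fit

# Flat LambdaCDM adopted in the paper: Omega_m = 0.3, H0 = 70 km/s/Mpc.
C_KM_S, H0, OMEGA_M = 2.998e5, 70.0, 0.3

def lum_dist_pc(z):
    """Luminosity distance in pc, from a numerical comoving-distance integral."""
    inv_E = lambda zp: 1.0 / np.sqrt(OMEGA_M * (1 + zp)**3 + 1 - OMEGA_M)
    d_c = (C_KM_S / H0) * quad(inv_E, 0.0, z)[0]   # comoving distance [Mpc]
    return d_c * (1 + z) * 1e6

def m1500(lam_rest_aa, f_nu, z_p):
    """Fit f_nu = f0 * (lam / 1500 A)^(beta + 2) to two or three photometric
    points (rest-frame wavelengths, f_nu in erg/s/cm^2/Hz) and convert f0
    to the absolute AB magnitude M_1500."""
    power_law = lambda lam, f0, beta: f0 * (lam / 1500.0)**(beta + 2)
    (f0, beta), _ = curve_fit(power_law, lam_rest_aa, f_nu,
                              p0=(np.median(f_nu), -2.0))
    dm = 5 * np.log10(lum_dist_pc(z_p) / 10.0)     # distance modulus
    return -2.5 * np.log10(f0) - 48.6 - dm + 2.5 * np.log10(1 + z_p), beta
```

For a flat-spectrum source ($\beta=-2$, constant $f_\nu$) the fit returns $f_0$ equal to the measured flux density, and the last line reduces to the $M_{1500}$ formula quoted above.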
The standard deviation of the relative difference in $M_{\rm 1500}$ for the sources included in both studies is $\approx3$\%, without a systematic offset. In Figure\,\ref{fig:muv_z}, we show the distribution of $M_{\rm 1500}$ as a function of $z_{\rm p}$ for sources in our parent sample that have $2.91<z_{\rm p}\leq6.12$. In order to construct a complete parent sample in terms of $M_{\rm 1500}$, we define a limiting magnitude $M_{1500}^{\rm lim}$ such that objects brighter than $M_{1500}^{\rm lim}$ are detected with a signal-to-noise ratio ($S/N$) larger than two in at least two of the rest-frame UV HST bands. To compute $M_{1500}^{\rm lim}$, we again use a power-law continuum model, this time with a fixed UV slope $\beta=-2$, a value commonly adopted in the literature \citep[e.g.,][]{Caruana2018}. At each redshift, we derive the normalisation $M_{\rm 1500}^{\rm lim}$ such that the flux can be detected in at least two HST UV bands with $S/N>2$. The resulting $M_{\rm 1500}^{\rm lim}$ is shown with the thick black curve over-plotted on the data in magenta in the upper panel of Figure\,~\ref{fig:muv_z}. From this panel, we see that we can build a complete UV-selected sample at redshifts $z\approx3$--$4$ down to $M_{\rm 1500}^{\rm lim}\approx-16.3$ mag, and even at $z\approx6$ we can achieve completeness down to $M_{\rm 1500}^{\rm lim}\approx-17.7$ mag. In the upper panel of Figure\,\ref{fig:muv_z}, we highlight the two regions of parameter space that we select for two complementary studies: $X_{\rm LAE}$ vs. $M_{\rm 1500}$ with $z$ in the range $[2.91;4.44]$ (the polygon marked with a solid black line) and $X_{\rm LAE}$ vs. $z$ with $z$ in the range $[2.91;6.12]$ (the dashed-line rectangle). To define the criteria for the $X_{\rm LAE}$ vs. $M_{\rm 1500}$ plots, completeness simulations for Ly$\alpha$ emission (see Sect. \ref{subsec:compsimu}) and the $z_{\rm p}$ bins used for the $M_{1500}^{\rm lim}$ calculation are also taken into account.
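The construction of $M_{1500}^{\rm lim}$ can be illustrated with a short sketch. The band depths passed in below are hypothetical placeholders, not the measured HUDF values; the key simplification is that a $\beta=-2$ source has a flat $f_\nu$, so its apparent AB magnitude is the same in every band and the two-band requirement reduces to the second-deepest limit.

```python
import numpy as np

def m1500_lim(depths_2sigma_ab, z_p, dist_mod):
    """Limiting absolute UV magnitude for a beta = -2 (flat f_nu) source.

    depths_2sigma_ab : 2-sigma point-source AB depths of the rest-frame UV
                       bands at this redshift (hypothetical example values).
    dist_mod         : distance modulus 5 * log10(d_L / 10 pc).

    A flat-f_nu source has a single apparent magnitude m, so requiring
    S/N > 2 in at least two bands means m must be brighter than the
    second-deepest 2-sigma depth."""
    m_lim = np.sort(np.asarray(depths_2sigma_ab))[-2]  # second-deepest limit
    return m_lim - dist_mod + 2.5 * np.log10(1 + z_p)
```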
In the lower panel of Figure\,\ref{fig:muv_z}, we show for comparison the locus of previous studies in the $(M_{\rm 1500}-z)$ plane. The faint galaxies in \citet{Stark2011} are shown by the light-grey shaded area. \citet{DeBarros2017} use a UV magnitude cut of $F160W \leq27.5$ mag at $z\approx6$ in their sample, which is shown with a dark-grey arrow. The recent sample of \citet{ArrabalHaro2018} is shown with dark-grey crosses, which indicate the UV magnitudes at which they reach $\approx90$\% completeness. The LAE fraction study based on the MUSE-Wide GTO survey \citep{Caruana2018} adopts an apparent magnitude cut of $F775W \leq26.5$ mag from an HST catalog \citep[][]{Guo2013}, which we roughly convert to $M_{\rm 1500}$ for illustrative purposes and show with the solid grey line in the lower panel of Figure\,\ref{fig:muv_z}. Our MUSE-Deep data combined with the HST catalog from R15 allow us to probe deeper than all previous work, and to extend our study to UV-faint galaxies (i.e., $M_{\rm 1500}\geq-18.75$ mag). In Figure\,\ref{fig:n_muv}, we demonstrate the completeness of our UV-selected sample at different redshifts by comparing our UV number counts to those expected from the UV luminosity function (UVLF) of \citet{Bouwens2015b}. We find that the $M_{\rm 1500}$ distributions of our samples (magenta) follow well those expected from the UVLFs for the same survey area at similar redshifts (black solid lines) within the 1$\sigma$ error bars. For comparison, we also show in the bottom panels of Figure\,\ref{fig:n_muv} the distribution of magnitudes of the sample of \citet{Stark2010}. Clearly, their samples are incomplete even at relatively bright magnitudes ($\approx -20.25$ mag), most likely because of the shallow depth of their data and the LBG selection. Note that the samples in \citet{Stark2010,Stark2011} are a combination of two LBG samples.
One is their own sample \citep{Stark2009}, and the other is a sample from the literature, which is biased towards bright objects, including few B and V dropouts with magnitudes fainter than 26 mag in $F850LP$ ($z$) according to \citet{Stark2010}. Generally, the $M_{\rm 1500}$ ranges used in LAE fraction studies in the literature are close to those in \citet{Stark2010, Stark2011}. Recently, \citet{ArrabalHaro2018} tested the UVLF of the LBG sample (mainly constructed from photo-$z$ samples) used in their LAE fraction study. At $z\approx4$ and $z\approx5$, their LBG samples follow the UVLF of \citet{Bouwens2015b} at $M_{\rm 1500}\lesssim-20.5$ mag (dark-grey dashed lines in Figure\,\ref{fig:n_muv}). However, their sample is still not complete in the UV for the faint range ($-20.25\leq M_{\rm 1500}\leq-18.75$ mag), just as in \citet{Stark2011}. These comparisons illustrate the methodological improvement of our study in terms of the $M_{\rm 1500}$ completeness of the sample of galaxies for which we estimate the LAE fraction. For future use, $N_{\rm 1500}(z_{\rm p},\, M_{1500})$ denotes the number of galaxies with a photometric redshift $z_{\rm p}$ and absolute magnitude $M_{1500}$ within the given ranges (see Table \ref{tbl:criteria}, which summarises the different samples). As discussed above, our UV-selected samples are volume-limited and $N_{\rm 1500}$ is directly measured from the catalog with no need for incompleteness corrections. \begin{figure} \begin{flushright} \includegraphics[width=9.3cm]{fig01.pdf}\\ \caption[$M_{1500}$ versus $z_{p}$ for our sample.]{ $M_{1500}$ versus $z_{p}$ for our sample and the literature. The $M_{1500}$ and $z_{p}$ of our parent sample from \citet{Rafelski2015} are shown by magenta filled circles (identical in the two panels). In the upper panel, the $M_{1500}$ cut ($M_{1500}^{\rm lim}$) for our sample defined from the rest-frame UV HST bands is indicated by a thick black solid line. The parameter space studied here, $X_{\rm LAE}$ vs.
$M_{\rm 1500}$ at $z\approx3$--$4$ and $X_{\rm LAE}$ vs. $z$ at $z\approx3$--$6$, is shown by a black solid polygon and a black dashed rectangle, respectively. In the lower panel, the UV magnitude cut of $F160W \leq27.5$ mag at $z\approx6$ in \citet[][]{DeBarros2017} and the UV magnitudes corresponding to $\approx90$\% completeness at $z\approx4$, $5$, and $6$ in \citet[][]{ArrabalHaro2018} are represented by a dark-grey arrow and dark-grey crosses, respectively. The apparent magnitude cut of $F775W \leq26.5$ mag in \citet{Caruana2018} is shown by a grey solid line. The parameter space for the faint galaxies studied in \citet{Stark2011} is indicated by a light-grey shaded region. \label{fig:muv_z} } \end{flushright} \end{figure} \begin{figure} \sidecaption \includegraphics[width=9.3cm]{fig02.pdf}\\ \caption{Histograms of $M_{1500}$ for our parent samples at $z\approx3.7$ ($z=2.91$--$4.44$, left), $5.0$ ($z=4.44$--$5.58$, middle), and $5.9$ ($z=5.58$--$6.12$, right). In the upper panels, magenta histograms and black dashed lines represent the number distribution of our parent sample and that expected from the UV luminosity functions (UVLF) in \citet{Bouwens2015b} for the same effective survey area, respectively. The uncertainty on the number distribution of our parent sample is given by the Poisson error. Grey hashed areas indicate $M_{1500}$ ranges that are not used in this work. In the lower panels, light-grey histograms show the number distribution in \citet{Stark2010} at $z\approx3.75$, $z\approx5.0$, and $z\approx6.0$. Dark-grey dashed and dotted lines indicate the $M_{1500}$ for $90$\% completeness at $z\approx4$, $5$, and $6$ in \citet{ArrabalHaro2018} and the magnitude cut at $z\approx6$ in \citet{DeBarros2017}, respectively.
\label{fig:n_muv} } \end{figure} \begin{table*} \caption{Subsample criteria.}\label{tbl:criteria} \centering \begin{tabular}{ccccc} \hline \noalign{\smallskip} $z$ range & mean $z$ & $M_{1500}$ range (mag) & $N_{\rm 1500}(z_{\rm p},\, M_{1500})$ &$EW(\rm Ly\alpha)$ cut (\AA) \\ \noalign{\smallskip} \hline \noalign{\smallskip} \multicolumn{5}{l}{Subsamples for Figure\, \ref{fig:xevo}: $X_{\rm LAE}$ vs. $z$ }\\ \noalign{\smallskip} \hline $2.91<z\leq3.68$ & $3.3$ & $-21.75\leq M_{1500}\leq-17.75^{\rm (a)}$ & 228 & 45$^{\rm (b)}$, 65 \\ $3.68<z\leq4.44$ & $4.1$ & $-21.75\leq M_{1500}\leq-17.75^{\rm (a)}$ & 119 & 45$^{\rm (b)}$, 65 \\ $4.44<z\leq5.01$ & $4.7$ & $-21.75\leq M_{1500}\leq-17.75^{\rm (a)}$ & 98 & 45$^{\rm (b)}$, 65 \\ $5.01<z\leq6.12$ & $5.6$ & $-21.75\leq M_{1500}\leq-17.75^{\rm (a)}$ & 89 & 45$^{\rm (b)}$, 65 \\ \hline \noalign{\smallskip} \multicolumn{5}{l}{Subsamples for Figures\,\ref{fig:zevocompare}, \ref{fig:xevo_galics}, and \ref{fig:xevo_galicscv}: $X_{\rm LAE}$ vs. $z$ for comparison with previous work} \\ \noalign{\smallskip} \hline $2.91<z\leq3.68$ & $3.3$ & $-20.25\leq M_{1500}\leq-18.75$ & 87 & 25, 55 \\ $3.68<z\leq4.44$ & $4.1$ & $-20.25\leq M_{1500}\leq-18.75$ & 40 & 25, 55 \\ $4.44<z\leq5.01$ & $4.7$ & $-20.25\leq M_{1500}\leq-18.75$ & 28 & 25, 55 \\ $5.01<z\leq6.12$ & $5.6$ & $-20.25\leq M_{1500}\leq-18.75$ & 35 & 25, 55 \\ \hline \noalign{\smallskip} \multicolumn{5}{l}{Subsamples for Figures\, \ref{fig:uvdep} and \ref{fig:uvdep_galics}: $X_{\rm LAE}$ vs. 
$M_{1500}$ }\\ \noalign{\smallskip} \hline $2.91<z\leq3.68$ & $3.3$ & $-21.5\leq M_{1500}\leq-20.0$ & 31 & 25, 45, 65, 85 \\ $2.91<z\leq3.68$ & $3.3$ & $-20.0<M_{1500}\leq-19.0$ & 58 & 25, 45, 65, 85 \\ $2.91<z\leq3.68$ & $3.3$ & $-19.0<M_{1500}\leq-18.0$ & 106 & 45, 65, 85 \\ $2.91<z\leq3.68$ & $3.3$ & $-18.0<M_{1500}\leq-17.0$ & 197 & 85 \\ \hline \noalign{\smallskip} \multicolumn{5}{l}{Subsamples for Figure\,\ref{fig:uvdepcompare}: $X_{\rm LAE}$ and $M_{1500}$ for comparison with previous work }\\ \noalign{\smallskip} \hline $2.91<z\leq3.68$ & $3.3$ & $-21.5\leq M_{1500}\leq-20.0$ & 31 & 50 \\ $2.91<z\leq3.68$ & $3.3$ & $-20.0<M_{1500}\leq-19.0$ & 58 & 50\\ $2.91<z\leq3.68$ & $3.3$ & $-19.0<M_{1500}\leq-18.0$ & 106 & 50\\ $3.68<z\leq4.44$ & $4.1$ & $-21.5\leq M_{1500}\leq-20.0$ & 8 & 50 \\ $3.68<z\leq4.44$ & $4.1$ & $-20.0<M_{1500}\leq-19.0$ & 23 & 50\\ $3.68<z\leq4.44$ & $4.1$ & $-19.0<M_{1500}\leq-18.0$ & 56 & 50\\ \hline \end{tabular} \tablefoot{Subsample criteria of redshift and $M_{1500}$ for the continuum-selected parent sample. The mean redshift, sample size ($N_{\rm 1500}(z_{\rm p},\, M_{1500})$), and $EW(\rm Ly\alpha)$ cut are also shown. To increase the sample size used in Figures \ref{fig:xevo}, \ref{fig:zevocompare}, \ref{fig:xevo_galics}, and \ref{fig:xevo_galicscv}, we combine the two highest redshift bins used to compute the UV magnitude and Ly$\alpha$ completeness. (a) We also calculate $X_{\rm LAE}$ for the $-21.75\leq M_{1500}\leq-18.75$ mag and $-18.75<M_{1500}\leq-17.75$ mag subsamples. (b) The $45$ \AA\ $EW(\rm Ly\alpha)$ cut is only applied to the $-21.75<M_{1500}\leq-18.75$ mag subsamples.} \end{table*} \subsection{Counting LAEs within the UV-selected sample} \label{subsec:CountingLAEs} \subsubsection{MUSE data}\label{subsec:muse} The data of the MUSE HUDF Survey were obtained as part of the MUSE GTO program (PI: R. Bacon). The MUSE HUDF Survey design is presented in \citet{Bacon2017}. 
It consists of two layers of different depths: the \textsf{mosaic}{} is composed of $9$ MUSE pointings that cover a $3'\times 3'$ area ($9.92$ arcmin$^2$) with an integration time of $\approx10$ hours; the \textsf{udf-10}{} is a deeper integration of a $1'\times 1'$ sub-field within the \textsf{mosaic}{}, with an integration time of $\approx31$ hours. MUSE covers a wide optical wavelength range, from $4750$ \AA{} to $9300$ \AA, which allows the observation of the \mbox{Ly$\alpha$}{} line from $z\approx2.9$ to $z\approx6.6$. The typical spectral resolving power is $R=3000$, with a spectral sampling of $1.25$ \AA{}. The spatial sampling is $0\farcs2 \times 0\farcs2$ per pixel. In the present paper, we use the latest data release (the second data release, hereafter DR2) of the MUSE HUDF (Bacon et al.\, in prep.). The improved data reduction process results in data cubes with fewer systematics and a better sky subtraction. The FWHM of the Moffat point spread function (PSF) is $0\farcs65$ at $7000$ \AA\ in the MUSE HUDF. The estimated $1\sigma$ surface brightness limits are $2.8\times10^{-20}$ erg s$^{-1}$ cm$^{-2}$ \AA$^{-1}$ arcsec$^{-2}$ and $5.5\times10^{-20}$ erg s$^{-1}$ cm$^{-2}$ \AA$^{-1}$ arcsec$^{-2}$ in \textsf{udf-10}\ and \textsf{mosaic}, respectively, in the wavelength range of $7000$--$8500$ \AA\ (excluding regions of OH sky emission; see \citealt{Inami2017} and Bacon et al. in prep. for more details). For instance, the estimated $3\sigma$ flux limits are $1.5\times10^{-19}$ erg s$^{-1}$ cm$^{-2}$ and $3.1\times10^{-19}$ erg s$^{-1}$ cm$^{-2}$ in \textsf{udf-10}\ and \textsf{mosaic}, respectively, for a point-like source extracted over three spectral channels (i.e., $3.75$ \AA) around $7000$ \AA\ \citep[see Figure 20 in][]{Bacon2017}. The PSF and noise characteristics are similar to those of the DR1 data, except in the reddest part of the wavelength range.
In order to measure the fraction of galaxies which have a strong \mbox{Ly$\alpha$}{} line, we first extract a 1D spectrum from the MUSE cube for each HST source in our parent sample. We proceed as follows. First, we convolve the HST segmentation map of the R15 catalog with the MUSE PSF, which is normalized to $1$. To obtain a spatial mask applicable to the MUSE observations for each object, we apply a threshold value of $0.2$ to the normalized convolved segmentation map. The median radius of the resulting mask ranges from $\approx1\farcs0$ at $z\approx3$ to $\approx0\farcs8$ at $z\approx6$, and is not affected by Ly$\alpha$ halo flux \citep[][see Sect. \ref{subsec:haloDiscussion} for a more detailed discussion of our choice]{Leclercq2017}. Second, we integrate the cube spatially over the extent of the mask. We note that PSF-weighted or white-light-weighted integrations are used to extract spectra in the DR1 catalog of \citet{Inami2017}. These provide a higher $S/N$ in the extracted spectrum. However, in the present paper, we do not use a spatial weighting. This results in slightly lower $S/N$ values, but more accurate estimates of the fluxes (i.e., conserved flux), which are needed to assess the completeness of our \mbox{Ly$\alpha$}{} detections. Third, we subtract the local residual background emission from the extracted 1D spectrum, as in \citet{Inami2017}. The local background is defined in $5''\times5''$ subcubes, avoiding the masks of any source. \subsubsection{Search for Ly$\alpha$ emission}\label{subsec:marz} In order to detect Ly$\alpha$ emission lines in the 1D spectra extracted above, we use a customized version of the \textsf{MARZ} software\footnote{The original \textsf{MARZ} in \citet{Hinton2016} is based on a cross-correlation algorithm \citep[\textsf{AUTOZ},][]{Baldry2014} and is publicly available at: \url{https://github.com/Samreay/Marz.}} \citep{Hinton2016} described in \citet{Inami2017}.
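The mask construction and unweighted extraction steps above can be sketched as follows. This is a simplified illustration rather than the pipeline code: a Gaussian stands in for the Moffat PSF, and the convolved footprint is normalised to a peak of one before the $0.2$ threshold is applied, which is our reading of the procedure.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def extraction_mask(segmap, source_id, psf_fwhm_pix):
    """Convolve one source's HST segmentation footprint with a (Gaussian)
    PSF model and keep pixels above 0.2 of the peak of the result."""
    footprint = (segmap == source_id).astype(float)
    conv = gaussian_filter(footprint, sigma=psf_fwhm_pix / 2.355)
    return conv / conv.max() > 0.2

def extract_spectrum(cube, mask):
    """Unweighted spatial sum over the mask in each spectral channel;
    unlike PSF-weighted extraction, this conserves the total flux."""
    return cube[:, mask].sum(axis=1)
```

Here \textsf{extraction\_mask} mirrors the first step and \textsf{extract\_spectrum} the second, before the local background subtraction.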
\textsf{MARZ} compares 1D spectra to a list of templates and returns the best-fitting spectroscopic redshift, the best-fitting 1D template, and a confidence level for the result (called the quality operator, QOP). In our customized \textsf{MARZ} version, the list of templates consists of templates made using MUSE data, and the interface is improved \citep{Inami2017}. We use our version of \textsf{MARZ} in a similar manner to \citet{Inami2017}, except for the two following changes. First, we do not activate cosmic ray replacement in \textsf{MARZ} because (1) it affects the detectability of bright and spectrally peaky Ly$\alpha$ emission, and (2) cosmic rays are efficiently removed in the data reduction. Second, we only use template spectra with Ly$\alpha$ emission: those of IDs$=10$, $18$, and $19$, which are used in \citet{Inami2017}, and those of IDs$=25$, $26$, $27$ and $30$, which are newly built from MUSE data in Bacon et al. in prep. and show single peaked Ly$\alpha$ (see Appendix \ref{ap:temp_marz} for the template spectra). As in \citet{Inami2017}, we use the 1D spectra and source files including the subcubes and cutouts of HST UV to NIR images for the parent sources as input for \textsf{MARZ}. To select robust Ly$\alpha$ detections, we keep only galaxies which \textsf{MARZ} identifies as LAEs with a high confidence level \citep[``Great'' and ``Good'' shown in Figure 1 in][]{Inami2017}\footnote{The confidence level is given as QOP, which is calculated from the peak values of the cross-correlation function \citep[figure of merit, hereafter FOM, see Sect. 5.3 in][for more details]{Hinton2016}. In our version of \textsf{MARZ} \citep{Inami2017}, QOP $=3$ (QOP $=2$) is regarded as ``Great'' (``Good'') and corresponds to $99.55$\% ($95$\%) confidence in the original \textsf{MARZ} \citep{Hinton2016}. 
However, the original relation between QOP and confidence percentage is calibrated with SED templates different from ours, and the confidence percentage may not be directly applicable to our data. Note that the FOM criterion for QOP $=3$ (QOP $=2$) in our version of \textsf{MARZ} is the same as that for QOP $\geq4$ (QOP $=3$) in the original \textsf{MARZ} (see Figure 1 in \citealt{Inami2017} and Figure 12 in \citealt{Hinton2016}). }. Sources with lower confidence levels are not regarded as detected LAEs. According to this selection, among the $3233$ ($402$) sources in the \textsf{mosaic}\ (\textsf{udf-10}), $374$ ($70$) are LAEs. However, some of these LAE candidates are in fact [O\,{\sc ii}]\ emitters or non-LAEs contaminated by extended Ly$\alpha$ emission from LAE neighbors. We visually inspect all the LAE candidates as in \citet{Inami2017}: we check the entire MUSE spectra, the Ly$\alpha$ line profiles, the MUSE white-light images, the MUSE narrow-band images of the Ly$\alpha$ emission, all the existing HST UV to NIR images, and the HST colors by eye using the customized \textsf{MARZ}. The MUSE white-light image is created by collapsing the $5''\times5''$ MUSE subcubes along the wavelength direction \citep[see][]{Inami2017}, while the narrow-band image of the Ly$\alpha$ emission is extracted from the wavelength range around the Ly$\alpha$ line in the subcubes \citep[see][]{Drake2017b, Drake2017a}. In contrast to \citet{Inami2017}, we also use a consistent photometric redshift ($95$\% uncertainty range) as evidence for Ly$\alpha$ emission. As a result, we have $276$ ($58$) LAEs at $z\approx2.9$--$6.1$ among the parent sample in the \textsf{mosaic}\ (\textsf{udf-10}) field. Most of the removed sources ($\approx80$\%) have a 1D spectrum contaminated by (extended) Ly$\alpha$ emission from neighboring objects, which can be identified using the Ly$\alpha$ narrow-band images, MUSE white-light images, and HST images.
We show an example of Ly$\alpha$ contamination in Appendix \ref{ap:contami}. \subsubsection{Measurement of Ly$\alpha$ fluxes}\label{subsec:measurement} For our LAEs, we measure the Ly$\alpha$ fluxes from the 1D spectra used for the Ly$\alpha$ detection and described in Sect. \ref{subsec:muse}. The aperture size is defined by R15's segmentation map for each source convolved with the MUSE PSF (see Sect. \ref{subsec:CountingLAEs} and Sect. \ref{subsec:haloDiscussion} for more details). It has been reported that Ly$\alpha$ emission is often spatially offset from the stellar UV continuum \citep[e.g.][]{Erb2019arXiv,Hoag2019bMNRAS}. The typical offset values for our LAEs are, however, measured to be less than $0\farcs1$ \citep{Leclercq2017}, significantly less than the PSF scale of our observations. We therefore assume that all the flux of the central Ly$\alpha$ component is captured by our apertures, which are centered on the continuum emission peak and have a median radius ranging from $\approx1\farcs0$ at $z\approx3$ to $\approx0\farcs8$ at $z\approx6$. We fit the Ly$\alpha$ emission either with an asymmetric Gaussian or a double Gaussian profile. The choice between these two solutions is made after visual inspection. In practice, we use the \textsf{gauss\_asymfit} and \textsf{gauss\_dfit} methods of the publicly available software \textsf{MPDAF} \citep{Piqueras2017}\footnote{MPDAF, the MUSE Python Data Analysis Framework, is publicly available from the following link: \url{https://mpdaf.readthedocs.io/en/latest/}}. We use the spectroscopic redshift from \textsf{MARZ} as an initial guess for the central wavelength of the fit, with a fitting range of rest-frame $1190$ \AA\ to $1270$ \AA. We inspect all the Ly$\alpha$ spectral profile fits to confirm their validity. Once we have measured the Ly$\alpha$ fluxes, we compute the rest-frame Ly$\alpha$ equivalent width using $M_{\rm 1500}$ and $\beta$ defined in Sect. \ref{subsec:parent_sample} to estimate the continuum.
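As an illustration of the fitting step, here is a minimal asymmetric-Gaussian line fit with \textsf{scipy}, standing in for MPDAF's \textsf{gauss\_asymfit}. The parametrisation (two independent widths on either side of the peak) and the EW helper, which assumes the $f_0$ and $\beta$ of the power-law continuum fit described earlier, are our own simplifications, not the MPDAF internals.

```python
import numpy as np
from scipy.optimize import curve_fit

LYA = 1215.67  # rest-frame Ly-alpha wavelength [A]

def asym_gauss(lam, amp, lam0, sig_blue, sig_red):
    """Asymmetric Gaussian: independent widths blueward/redward of the peak."""
    sig = np.where(lam < lam0, sig_blue, sig_red)
    return amp * np.exp(-0.5 * ((lam - lam0) / sig)**2)

def fit_lya_flux(lam, flux, z_marz):
    """Fit the line near the MARZ redshift and integrate the profile."""
    p0 = (flux.max(), LYA * (1 + z_marz), 2.0, 4.0)
    (amp, lam0, sb, sr), _ = curve_fit(asym_gauss, lam, flux, p0=p0)
    # analytic integral of the two half-Gaussians
    return amp * np.sqrt(np.pi / 2.0) * (sb + sr)

def rest_ew(line_flux, f0_nu, beta, z):
    """Rest-frame EW from the continuum extrapolated to Ly-alpha."""
    fnu_cont = f0_nu * (LYA / 1500.0)**(beta + 2)        # rest-frame slope
    flam_obs = fnu_cont * 2.998e18 / (LYA * (1 + z))**2  # f_nu -> f_lambda
    return line_flux / flam_obs / (1 + z)
```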
The distribution of $EW(\rm Ly\alpha)$ as a function of redshift is shown in Figure \ref{fig:ew_z}. \begin{figure} \sidecaption \includegraphics[width=9.3cm]{fig03.pdf}\\ \caption{$EW(\rm Ly\alpha)$ versus $z_{\rm s}$ for our final LAE sample. Colors indicate $M_{\rm 1500}$. \label{fig:ew_z}} \end{figure} \vspace{0.5cm} \subsubsection{Completeness estimate and correction}\label{subsec:compsimu} \begin{figure} \begin{flushright} \includegraphics[width=9.3cm]{fig04.pdf}\\ \caption[Completeness of Ly$\alpha$ emission vs. Ly$\alpha$ flux.]{ Completeness of Ly$\alpha$ detection as a function of Ly$\alpha$ flux in the \textsf{udf-10}\ (upper panel) and \textsf{mosaic}\ (lower panel) fields. The simulated data points and their best fit completeness functions are indicated by circles and lines, respectively. Black, purple, violet, orange, and yellow colors represent redshifts $z\approx3.3$, $4.1$, $4.7$, $5.3$, and $5.9$, respectively. Error bars are calculated from the Poisson errors of the numbers of detected fake emission lines. \label{fig:lya_comp} } \end{flushright} \end{figure} In order to estimate the detection completeness of Ly$\alpha$ emission for the MUSE HUDF data with \textsf{MARZ}, we insert fake Ly$\alpha$ emission lines into 1D spectra and try to detect them as explained in Sect. \ref{subsec:marz}. We take realistic noise into account by creating 1D sky-background spectra from MUSE sub-cubes extracted for different continuum-selected sources as detailed in Sect. \ref{subsec:muse}. We choose a sample of spectra which show a clear Ly$\alpha$ emission line \citep[detected with a very high confidence level of ``Great'' shown in Figure 1 in][]{Inami2017}, and which do not have continuum or other spectroscopic features. We mask the spectral pixels covered by the Ly$\alpha$ emission in these 1D spectra (including $\pm 20$ pixels $\approx \pm 25$ \AA\ around the line center). Note that we do not insert fake Ly$\alpha$ lines in the masked regions.
With this procedure, we obtain $131$ ($35$) 1D sky-background spectra in the \textsf{mosaic}\ (\textsf{udf-10}) field. We add fake emission lines with fluxes taking 18 values regularly spaced in log between $6\times 10^{-19}$ erg s$^{-1}$ cm$^{-2}$ and $5\times 10^{-17}$ erg s$^{-1}$ cm$^{-2}$, and $3102$ redshift values regularly distributed between $z=2.93$ and $z=6.12$. For each flux-redshift pair, we draw 4 lines (yielding a total of 223,344 lines) and add each of them to one of our 131 (35) 1D spectra chosen randomly. Each fake Ly$\alpha$ line has line-shape parameters (total FWHM and FWHM ratio of red wing to blue wing) randomly drawn from the measured distribution of Ly$\alpha$ emission line shapes of the LAEs used in \citet{Bacon2015} and \citet{Hashimoto2017b}. We use the \textsf{add\_asym\_gaussian} method of \textsf{MPDAF} to generate the fake lines and add them to our test spectra. We then repeat the detection procedure of Sect. \ref{subsec:marz}, applying the same cut at a confidence level of ``Good''. In each field (\textsf{udf-10}\ or \textsf{mosaic}), we compute the completeness of Ly$\alpha$ detection as a function of the Ly$\alpha$ flux $f_{\rm Ly\alpha}$ for five redshift bins: $2.91\leq z <3.68$ ($z\approx3.3$), $3.68\leq z <4.44$ ($z\approx4.1$), $4.44\leq z <5.01$ ($z\approx4.7$), $5.01\leq z <5.58$ ($z\approx5.3$), and $5.58\leq z \leq6.12$ ($z\approx5.9$), which are defined from the redshift bins used to derive $M_{1500}$. We fit each simulated completeness curve with a formula based on the error function \citep[e.g.][]{Rykoff2015arXiv}: \begin{equation} C(f_{\rm Ly\alpha})=\frac{1}{2}\left[1-{\rm erf}\left( \frac{-2.5\log_{10}f_{\rm Ly\alpha} -a}{\sqrt{2b}}\right)\right], \label{eq:comp_erf} \end{equation} where $a$ and $b$ are the two free parameters of the fit (see Figure \ref{fig:lya_comp}). We use the function \textsf{curve\_fit} from \textsf{scipy.optimize} to perform the fit.
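A sketch of this fit, with Eq. (\ref{eq:comp_erf}) transcribed literally and synthetic completeness points standing in for the simulated ones (all parameter values here are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def completeness(f_lya, a, b):
    """Error-function completeness model as a function of Lya flux (cgs)."""
    mag = -2.5 * np.log10(f_lya)  # pseudo-magnitude of the line flux
    return 0.5 * (1.0 - erf((mag - a) / np.sqrt(2.0 * b)))

# Synthetic "simulated" completeness points over the fake-line flux range.
rng = np.random.default_rng(1)
flux = np.logspace(np.log10(6e-19), np.log10(5e-17), 18)
meas = np.clip(completeness(flux, 43.7, 0.3)
               + rng.normal(0.0, 0.02, flux.size), 0.0, 1.0)

# Fit for (a, b); bounds keep b positive so the square root is defined.
popt, _ = curve_fit(completeness, flux, meas, p0=[43.0, 0.5],
                    bounds=([40.0, 0.01], [46.0, 2.0]))
a_fit, b_fit = popt
```

The bounds guard against negative trial values of $b$ during the least-squares iterations, which would make the model undefined.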
The best fit parameters for the completeness curves in the \textsf{mosaic}\ and \textsf{udf-10}\ fields are summarized in Table \ref{tbl:comp}. At completeness above $\approx0.8$, our best fit relations slightly overestimate the measured completeness. The analytic fit is, however, at most $\approx5$\% above the $1\sigma$ upper errors, and this does not have a noticeable effect on the calculation of $X_{\rm LAE}$. We nevertheless take this into account in the error propagation of $X_{\rm LAE}$ in Sect. \ref{subsec:uncertainties}. Theoretically, completeness functions should just scale with $S/N$ and thus be applicable throughout the wavelength (or redshift) range of the instrument. For the MUSE-WIDE survey, \citet{Herenz2019} indeed find that the shape of their completeness function is independent of redshift. As expected, we also find very similar completeness behavior at $z\approx3.3$ and $4.1$ in both fields. At these redshifts, the noise is well behaved and there are only a few artifacts due to sky-line removal in the spectra. At $z\approx 4.7$, the shape of the completeness curve is still well described by Eq. \ref{eq:comp_erf}, but the curve is shifted to fainter flux with a shallower slope. At $z\approx5.3$ and $5.9$, the shapes of the best-fitting completeness curves are different from those at lower redshifts: they have a shallower slope, the data points with completeness above $0.8$ are not well fit, and the completeness at a given flux is much lower than that at lower redshifts. The lower normalisation and distorted shape of the completeness may be caused by the many sky emission lines at high redshifts \citep[at $z\gtrsim5$,][]{Drake2017b}. Because \textsf{MARZ} is not a local line detector, as opposed, e.g.
to the matched-filtering approach implemented in \textsf{LSDCat} and used in \citet{Herenz2019}, where the filter has compact support in the spectral direction, it is sensitive to relatively long-range noise and distant spectroscopic features in the spectra. Thus, \textsf{MARZ} often does not return a high confidence level (``Great'' or ``Good'') for LAEs at $z\gtrsim5$, and our completeness drops at $z\gtrsim5$ even for relatively bright Ly$\alpha$ fluxes. We note that the LAE template spectra that we use all have a single-peaked Ly$\alpha$ profile (see Figure \ref{fig:temp_marz}). However, the exact shape of the line profile has been shown to have little impact on the detectability with cross-correlation techniques in general \citep[for instance, see Sect. 4.3 in][]{Herenz2017b}. The shape of the Ly$\alpha$ line in \textsf{MARZ}'s templates should therefore not significantly affect the detection rate of LAEs (see also Appendix \ref{ap:temp_marz}). Indeed, our sample contains LAEs with double-peaked lines even though none of our templates have such features. We experimented with using all the templates in \textsf{MARZ}, including templates of different galaxy populations such as [O\,{\sc ii}]\ and [O\,{\sc iii}]\ emitters, and found only a small impact on our LAE sample, well within the error bars. Finally, we checked the dependence of completeness on the FWHM of the fake Ly$\alpha$ emission lines at fixed flux, and again found no significant trend.
\begin{table} \caption{The best fit parameters of the completeness functions}\label{tbl:comp} \centering \begin{tabular}{lllll} \hline \noalign{\smallskip} mean $z$ & $a$ in \textsf{udf-10} & $b$ in \textsf{udf-10} & $a$ in \textsf{mosaic} & $b$ in \textsf{mosaic} \\ \noalign{\smallskip} \hline $z\approx3.3$ & 43.2 & 0.163 & 43.7 & 0.202 \\ $z\approx4.1$ & 43.3 & 0.197 & 43.9 & 0.228 \\ $z\approx4.7$ & 43.7 & 0.303 & 44.3 & 0.405 \\ $z\approx5.3$ & 42.9 & 0.614 & 43.4 & 0.629 \\ $z\approx5.9$ & 42.9 & 0.653 & 43.4 & 0.647 \\ \noalign{\smallskip} \hline \end{tabular} \tablefoot{The best fit parameters $a$ and $b$ of Equation (\ref{eq:comp_erf}) in each redshift bin for the \textsf{mosaic}\ and \textsf{udf-10}\ fields.} \end{table} In the following, $N_{\rm LAE}^{\rm det}(z_{\rm s},\,M_{1500},\, EW)$ denotes the number of galaxies with detected Ly$\alpha$ emission and with spectroscopic redshift $z_{\rm s}$, absolute magnitude $M_{1500}$, and rest-frame equivalent width $EW$ in given ranges. We estimate the true number of LAEs with a corrected value $N_{\rm LAE}^{\rm corr}$ defined as follows. For a given field and redshift bin, we use the fits to the completeness function above to define four Ly$\alpha$ flux bins which correspond to regularly spaced bins in the logarithm of the completeness ($C$), ranging from $C=0.1$ to $C=0.9$. We then count the number of detected LAEs (within a given $z_{\rm s}$, $M_{1500}$, and $EW$ bin) in each flux bin and divide it by the mean completeness (in log) of that flux bin. We compute $N_{\rm LAE}^{\rm corr}$ as the sum of these over the four flux bins. When the uncertainties of the LAE fraction are calculated, the completeness correction value in each flux bin is propagated, and the uncertainty of the completeness correction itself is taken into account as described in the next section. The number of flux bins is defined through a test described in Appendix \ref{ap:bin_comp}.
Four to six bins is a sweet spot where the error bars are small and appear converged. For a larger number of bins, we often get flux bins with no object at all, which leads to large error bars. For a smaller number of bins, we introduce a larger error on the completeness correction (averaged over the bin). We adopt four bins. \subsection{$X_{\rm LAE}$ and its error budget} \label{subsec:uncertainties} Knowing the number of UV-selected galaxies in a volume-limited sample ($N_{\rm 1500}(z_{\rm p},\, M_{1500})$, Sect. \ref{subsec:parent_sample}), and the number of LAEs among the {\it inclusive parent sample} ($N_{\rm LAE}^{\rm corr}(z_{\rm s},\,M_{1500},\, EW)$, Sect. \ref{subsec:compsimu}), the fraction of UV-selected galaxies with a Ly$\alpha$ line is simply given by: \begin{equation} X_{\rm LAE} = \frac{N_{\rm LAE}^{\rm corr}(z_{\rm s},\,M_{1500},\, EW)}{N_{\rm 1500}(z_{\rm p},\, M_{1500})}. \label{eq:xlae} \end{equation} The uncertainties on $X_{\rm LAE}$ arise from four components: (1) the uncertainty due to contaminants (type I\hspace{-1pt}I error) and missed objects (type I error) in $N_{\rm 1500}(z_{\rm p},\, M_{1500})$, (2) the uncertainty due to the completeness correction of $N_{\rm LAE}^{\rm corr}(z_{\rm s},\,M_{1500},\, EW)$, (3) the uncertainty for Bernoulli trials (i.e., the fraction of $N_{\rm LAE}^{\rm corr}(z_{\rm s},\,M_{1500},\, EW)$ over $N_{\rm 1500}(z_{\rm p},\, M_{1500})$) measured by a binomial proportion confidence interval, and (4) the uncertainty due to cosmic variance. We note that there is no obvious sample selection bias in our UV galaxies as shown in Figure \ref{fig:n_muv}, and our Ly$\alpha$ measurements are homogeneous over the whole redshift range (see discussion in Sect. \ref{subsec:difference}). We estimate the relative uncertainty of $X_{\rm LAE}$ from error propagation and discuss each of these contributions below.
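To make the correction concrete, a minimal sketch of the binned completeness correction and of Eq. (\ref{eq:xlae}) follows (the completeness function and flux grid are placeholders, not our measured ones):

```python
import numpy as np

def corrected_lae_fraction(f_lya_det, completeness_fn, n_parent, n_bins=4):
    """Completeness-corrected LAE fraction (a sketch, not the full pipeline).

    f_lya_det:       Lya fluxes of the detected LAEs in one bin of
                     (z_s, M_1500, EW).
    completeness_fn: monotonic C(f) for this field and redshift bin.
    n_parent:        size of the UV-selected parent sample, N_1500.
    """
    # Flux-bin edges at completeness levels spaced regularly in log C
    # between C = 0.1 and C = 0.9, inverted numerically on a flux grid.
    c_edges = np.logspace(np.log10(0.1), np.log10(0.9), n_bins + 1)
    grid = np.logspace(-19.0, -16.0, 2000)
    f_edges = np.interp(c_edges, completeness_fn(grid), grid)
    n_corr = 0.0
    for f_lo, f_hi, c_lo, c_hi in zip(f_edges[:-1], f_edges[1:],
                                      c_edges[:-1], c_edges[1:]):
        n_det = np.sum((f_lya_det >= f_lo) & (f_lya_det < f_hi))
        n_corr += n_det / np.sqrt(c_lo * c_hi)  # mean completeness in log
    # Fluxes outside the C = 0.1-0.9 range are ignored in this sketch.
    return n_corr / n_parent
```
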
\begin{description} \item[(1) {\it Contaminants and missed objects in $N_{\rm 1500}(z_{\rm p},\, M_{1500})$}]\mbox{}\\ Sources with $z_{\rm p}$ in a given $z$ range that are truly located outside of the $z$ range are contaminants in the parent continuum-selected sample, while sources with $z_{\rm p}$ outside of the $z$ range that are truly located in the $z$ range are missed sources. These mismatches of $z_{\rm p}$ can happen because of confusion between the Lyman and $4000$ \AA\ breaks in the SED fitting. In addition, IGM absorption modeling has been suggested to affect the $z_{\rm p}$ estimation \citep{Brinchmann2017}. As discussed in \citet{Inami2017} and \citet{Brinchmann2017}, the fraction of contaminants is very low for high confidence level objects \citep[with secure redshift, see Figure 20 in][]{Inami2017}. The fraction of missed objects with a relative redshift difference of more than $15$\%, $|z_{\rm s}-z_{\rm p}|/(1+z_{\rm s})>0.15$, is suggested to be $\approx10$\% \citep[outlier fraction,][]{Brinchmann2017}. Since the missed objects whose $95$\% uncertainty range for $z_{\rm p}$ is outside the $z$ range for MUSE-LAEs are not included in our parent continuum-selected sample, they are also not included in our LAE sample. Under the assumption that the fractions of missed objects are the same for the parent and LAE samples, the uncertainties due to missed objects can be neglected, as can those due to contaminants. Note that we do not find any significant relation between the $z_{\rm p}$-$z_{\rm s}$ difference and either the Ly$\alpha$ EW or the Ly$\alpha$ flux. \item[(2) {\it Completeness correction of $N_{\rm LAE}^{\rm corr}(z_{\rm s},\,M_{1500},\, EW)$}]\mbox{}\\ The simulated completeness data are well fitted by Equation (\ref{eq:comp_erf}) at $z\approx3.3$, $4.1$, and $4.7$.
Even at $z\approx5.3$ and $5.9$, the differences in completeness between the simulated data and the best-fitting functions are at most $\approx5$\%, which is much smaller than the uncertainty due to the flux binning. The flux binning for the completeness correction described in Sect. \ref{subsec:compsimu} causes an uncertainty of at most $\approx\pm32$\%: the completeness bins, spaced regularly in log from $0.1$ to $0.9$, correspond to steps of a factor $1.73$, whose square root is $\approx 1.32$. We use this very conservative estimate of $32$\% for the completeness correction error in our error budget. The completeness correction error is smaller than error component (3), as described later, and does not change the total uncertainty of $X_{\rm LAE}$. We note that the completeness correction value is also taken into account through error propagation when error component (3) is calculated. \item[ (3) {\it Uncertainties for Bernoulli trials}]\mbox{}\\ Measuring the fraction of a sub-sample among a parent sample is effectively a series of Bernoulli trials. The uncertainty for Bernoulli trials is given by a binomial proportion confidence interval (hereafter, BPCI). We use the python function \textsf{binom\_conf\_interval} from \textsf{astropy.stats}, which provides an approximate uncertainty for a given confidence level ($68$\%, i.e. 1$\sigma$, in this work), number of trials, and number of successes. Here, the numbers of trials and successes are $N_{\rm 1500}(z_{\rm p},\, M_{1500})$ and $N_{\rm LAE}^{\rm corr}(z_{\rm s},\,M_{1500},\, EW)$, respectively. However, we cannot obtain $N_{\rm LAE}^{\rm corr}(z_{\rm s},\,M_{1500},\, EW)$ directly from the observations. To include the effect of the completeness correction in each Ly$\alpha$ flux bin (described in Sect.
\ref{subsec:compsimu}) in the error propagation, we calculate the uncertainty of the LAE fraction in each Ly$\alpha$ flux bin without applying a completeness correction, using \textsf{binom\_conf\_interval}, and multiply the uncertainties by the correction value of the flux bin. We choose as an approximation formula the Wilson score interval \citep{Wilson1927}, which is known to return an appropriate output even for a small number of trials and/or successes. Our method is confirmed to be accurate by numerical tests described in Appendix \ref{ap:bpci}. For the flux bins with no LAEs, the average completeness value among the bins and the number of LAEs ($=0$) are used to derive the uncertainties conservatively. When we sum all the uncertainties over the flux bins to derive the total uncertainty for component (3), the python module \textsf{add\_asym}, developed in \citet{Laursen2019}, is used to combine the asymmetric BPCI errors. We note that the Poisson errors of $N_{\rm LAE}^{\rm corr}(z_{\rm s},\,M_{1500},\, EW)$ and $N_{\rm 1500}(z_{\rm p},\, M_{1500})$ are commonly used in the literature. However, the error for the LAE fraction should be derived with a BPCI, as in other fraction studies \citep[e.g. the galaxy merger fraction,][]{Ventou2017}, to obtain statistically correct errors. The BPCI method is reviewed for astronomical uses in \citet{Cameron2011}. \item[(4) {\it Cosmic variance}]\mbox{}\\ The survey volume in each redshift range is limited to $\approx 1.5$--$2.5 \times 10^4$ cMpc$^3$. However, we find that the uncertainty due to cosmic variance is smaller than the BPCI error and thus does not affect our $X_{\rm LAE}$ significantly (see Sect. \ref{subsubsec:galics_cv} for more details). Since the uncertainty due to cosmic variance cannot be derived from our MUSE measurements themselves, we neglect this error component (4). \end{description} In addition to (1)--(4), uncertainties in the photo-$z$ estimation and in the flux measurements are also potential error components.
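For reference, the Wilson score interval used in component (3) has a simple closed form; \textsf{astropy}'s \textsf{binom\_conf\_interval} with \textsf{interval='wilson'} implements it, and a direct transcription reads:

```python
import numpy as np

def wilson_interval(k, n, z=1.0):
    """Wilson (1927) score interval for k successes in n trials.

    z = 1.0 corresponds to the 68% (1 sigma) confidence level used in
    the text; z = 1.96 would give 95%. Unlike the normal approximation,
    the interval is well behaved even for k = 0 or small n.
    """
    p = k / n
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = (z / (1 + z**2 / n)) * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half
```

In the error budget above, the interval obtained without completeness correction is then scaled by the correction value of each flux bin.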
In Appendix \ref{ap:photoz}, we discuss the impact of uncertainties on $z_{\rm p}$ and conclude that they do not affect the error bar of $X_{\rm LAE}$ significantly. Uncertainties on $M_{1500}$, $\beta$, and the Ly$\alpha$ flux combine into uncertainties on $EW(\rm Ly\alpha)$. As shown in Figures \ref{fig:uvdep} and \ref{fig:uvdepcompare}, $X_{\rm LAE}$ shows a slight dependence on $M_{1500}$ and $EW(\rm Ly\alpha)$. We thus expect that a small error on these quantities will translate into an even smaller error on $X_{\rm LAE}$. Note that although some objects have large errors in $M_{1500}$ in Figure \ref{fig:n_muv}, they are not included in our analysis because they are very faint\footnote{Among the subsamples shown in Table \ref{tbl:criteria}, fainter-$M_{1500}$ and higher-$z_{\rm p}$ objects have greater uncertainties in $M_{1500}$. The medians (standard deviations) of the uncertainties for the subsamples with $-18.75\leq M_{1500}\leq-17.75$ mag are $0.05$ mag ($0.02$ mag) at $z\approx3.3$ and $0.15$ mag ($0.12$ mag) at $z\approx5.6$. Those for the subsamples with $-19.0\leq M_{1500}\leq-18.0$ mag are $0.04$ mag ($0.01$ mag) at $z\approx3.3$ and $0.08$ mag ($0.11$ mag) at $z\approx4.1$. These uncertainties are much smaller than the width of the $M_{1500}$ bins. With regard to the $S/N$ cuts of Ly$\alpha$ fluxes corresponding to the $EW(\rm Ly\alpha)$ cuts, those in the \textsf{mosaic}{} field for $M_{1500}=-17.75$ mag and $EW(\rm Ly\alpha)=65$ \AA\ are estimated to be $\approx17.4$ at $z\approx3.3$ and $\approx4.4$ at $z\approx5.6$, if we assume $\beta=-2$. Similarly, the $S/N$ cuts for $M_{1500}=-18.0$ mag and $EW(\rm Ly\alpha)=50$ \AA\ are $\approx16.9$ at $z\approx3.3$ and $\approx11.1$ at $z\approx4.1$. Here, the noise is the median of those shown in Figure \ref{fig:zevocompare} in each redshift bin.}. Thus, we ignore these two uncertainties, as is commonly done in the literature.
With these considerations, the uncertainty of the LAE fraction is the quadratic sum of the uncertainty terms (2) and (3). Below, the error bars on $X_{\rm LAE}$ represent the $68$\% confidence intervals around the values calculated by Equation (\ref{eq:xlae}). Note that the dominant error for $X_{\rm LAE}$ derived in this work is component (3), the uncertainty for Bernoulli trials, which amounts, for instance, to $38$\% and $78$\% of $X_{\rm LAE}$ for $-21.75\leq M_{1500}\leq-18.75$ mag and $EW(\rm Ly\alpha)\geq65$ \AA\ at $z\approx3$ and $5.6$, respectively. \subsection{Measurement of the slope of $X_{\rm LAE}$ as a function of $z$ and $M_{1500}$} \label{subsec:slope} We measure linear slopes of $X_{\rm LAE}$ as a function of $z$ and $M_{1500}$ using the python package for orthogonal distance regression (hereafter, ODR) fitting, \textsf{scipy.odr}, to account for the widths of the bins on the x-axis and the uncertainties of $X_{\rm LAE}$ on the y-axis. The ODR fitting minimizes the sum of squared perpendicular distances from the data to the fitting model. Since the uncertainties of $X_{\rm LAE}$ are not symmetric, a Monte Carlo simulation with $10000$ trials is used. We assume an asymmetric Gaussian probability distribution for $X_{\rm LAE}$, built from the upper and lower uncertainties in each x-axis bin ($z$ or $M_{1500}$). In each trial, we draw $X_{\rm LAE}$ randomly and fit a linear relation ($y=ax+b$) with \textsf{scipy.odr}. The best fit values of $a$ and $b$ and their error bars are derived from the medians and the $68$\% confidence intervals around the medians. The fitting results are shown in Sect. \ref{sec:result}. \subsection{Extended Ly$\alpha$ emission} \label{subsec:haloDiscussion} Our aim is to measure how the fraction of UV-selected galaxies showing a strong \mbox{Ly$\alpha$}{} line varies with redshift.
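The Monte Carlo ODR procedure just described can be sketched as follows (synthetic data; the split-normal draw is our reading of the asymmetric-Gaussian assumption, and \textsf{scipy.odr} provides \textsf{Model}, \textsf{RealData}, and \textsf{ODR}):

```python
import numpy as np
from scipy import odr

def mc_odr_slope(x, x_err, y, y_err_up, y_err_lo, n_trials=1000, seed=0):
    """Monte Carlo ODR fit of y = a*x + b with asymmetric y errors.

    Each trial draws y from a split normal (upper or lower sigma with
    equal probability) and refits with orthogonal distance regression;
    the best fit slope and its 68% interval come from the trials.
    """
    rng = np.random.default_rng(seed)
    model = odr.Model(lambda beta, xx: beta[0] * xx + beta[1])
    slopes = np.empty(n_trials)
    for i in range(n_trials):
        up = rng.random(y.size) < 0.5
        sig = np.where(up, y_err_up, y_err_lo)
        y_draw = y + np.abs(rng.normal(0.0, 1.0, y.size)) * sig * np.where(up, 1.0, -1.0)
        data = odr.RealData(x, y_draw, sx=x_err, sy=sig)
        slopes[i] = odr.ODR(data, model, beta0=[0.05, -0.1]).run().beta[0]
    lo, med, hi = np.percentile(slopes, [16.0, 50.0, 84.0])
    return med, hi - med, med - lo
```
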
We choose to discard possible significant contributions to the \mbox{Ly$\alpha$}{} luminosities by extended \mbox{Ly$\alpha$}{} haloes \citep[{\it LAH}, e.g.][Leclercq et al. in prep.]{Wisotzki2016, Drake2017b, Leclercq2017} in our study, even though the contribution of LAHs to the total Ly$\alpha$ fluxes is typically more than $\approx50$\% \citep[e.g.][]{Momose2016, Leclercq2017}. There are a number of reasons for this choice. First, it is largely motivated by our lack of understanding of the physical processes lighting up these halos. In particular, it is not clear how they relate to the UV luminosities of their associated galaxies \citep[e.g.][]{Leclercq2017, Kusakabe2019}, to what extent they are associated with star formation \citep[e.g.][their Figure 12]{Yajima2012b}, or whether the nature of this association could vary with redshift. In order to assess the evolution of $X_{\rm LAE}$ with redshift, it thus appears more conservative to limit our measurement of the \mbox{Ly$\alpha$}{} emission from galaxies to the part which most likely has the same origin as the continuum UV light. Any evolution is then more likely to be related to the evolution of the ionisation state of the IGM. With the above procedure, our 1D spectra include as little as possible of the extended \mbox{Ly$\alpha$}{} emission that is found around LAEs \citep{Wisotzki2016,Leclercq2017}. Second, our choice follows a methodology similar to that of other studies, which do not include halo fluxes \citep[e.g.][]{Stark2011,DeBarros2017, ArrabalHaro2018}, and thus allows for a fair comparison. Third, it is difficult to measure the faintest halos. If we demanded that LAHs were detected around galaxies in our sample, we would limit our sample to the brightest and/or most compact halos only \citep[see Sect. 2.2 and Figure 8 of][]{Leclercq2017}.
So even though an IFU enables us in principle to separate the central and halo components more clearly than slit and fiber spectrometers, the signal-to-noise ratio required to do so is still prohibitive for statistical studies such as ours. Note that because of our choice, the Ly$\alpha$ fluxes and EWs in the present paper are smaller than the total Ly$\alpha$ fluxes and EWs reported by e.g. \citet{Hashimoto2017b}, \citet{Drake2017b}, and \citet{Leclercq2017}. \section{Results}\label{sec:result} In order to measure the variation of $X_{\rm LAE}$ with redshift or UV absolute magnitude, we design several sub-samples shown in Table \ref{tbl:criteria}. We use $EW(\rm Ly\alpha)$ cuts starting at $25$ \AA, which is a common limit in the literature, and then increasing in steps of $20$ \AA\ to $45$ \AA, $65$ \AA, and $85$ \AA. We also use $50$ \AA\ and $55$ \AA\ cuts, for comparison with \citet{Stark2010} and \citet{Stark2011}. In Sect. \ref{subsec:zevo}, we present our results for the $X_{\rm LAE}$-$z$ relation, going as faint as $M_{1500}=-17.75$ mag for the first time in a homogeneous way over the redshift range $z\approx3$ to $z\approx6$. We discuss how these results compare to existing measurements. In Sect. \ref{subsec:uvdep}, we present the first measurement of the $X_{\rm LAE}$-$M_{1500}$ relation for galaxies as faint as $M_{1500}=-17.00$ mag at $z\approx3$, and compare our findings to other studies. The numerical values of $X_{\rm LAE}$ are summarized in Tables \ref{tbl:x_z} and \ref{tbl:x_muv}. The slopes of the best fit linear relations of $X_{\rm LAE}$ as a function of $z$ and $M_{1500}$ are shown in Figure \ref{fig:xlae_fit} in Appendix \ref{ap:xlae_fit}, and summarized in Tables \ref{tbl:x_z} and \ref{tbl:x_muv}.
\begin{table}[h] \caption{LAE fraction as a function of redshift}\label{tbl:x_z} \begin{tabular}{cccc} \hline mean $z$ & $X_{\rm LAE}$ or slope & $1\sigma$ upper error & $1\sigma$ lower error \\ \hline \multicolumn{4}{c}{$-21.75\leq M_{1500}\leq-17.75$ mag, $EW(\rm Ly\alpha)\geq65$ \AA} \\ \hline 3.3 & 0.04 & 0.02 & 0.01 \\ 4.1 & 0.07 & 0.04 & 0.02 \\ 4.7 & 0.11 & 0.07 & 0.04 \\ 5.6 & 0.20 & 0.16 & 0.06 \\ \multicolumn{4}{l}{modest positive correlation:} \\ slope& 0.07 & 0.06 & 0.03 \\ \hline \multicolumn{4}{c}{$-21.75\leq M_{1500}\leq-18.75$ mag, $EW(\rm Ly\alpha)\geq45$ \AA} \\ \hline 3.3 & 0.05 & 0.03 & 0.02 \\ 4.1 & 0.16 & 0.10 & 0.06 \\ 4.7 & 0.21 & 0.13 & 0.08 \\ 5.6 & 0.13 & 0.13 & 0.05 \\ \multicolumn{4}{l}{modest positive correlation:} \\ slope& 0.05 & 0.05 & 0.03 \\ \hline \multicolumn{4}{c}{$-21.75\leq M_{1500}\leq-18.75$ mag, $EW(\rm Ly\alpha)\geq65$ \AA} \\ \hline 3.3 & 0.02 & 0.02 & 0.01 \\ 4.1 & 0.09 & 0.07 & 0.04 \\ 4.7 & 0.12 & 0.09 & 0.05 \\ 5.6 & 0.13 & 0.13 & 0.05 \\ \multicolumn{4}{l}{modest positive correlation:} \\ slope& 0.05 & 0.05 & 0.03 \\ \hline \multicolumn{4}{c}{$-18.75\leq M_{1500}\leq-17.75$ mag, $EW(\rm Ly\alpha)\geq65$ \AA} \\ \hline 3.3 & 0.05 & 0.04 & 0.02 \\ 4.1 & 0.05 & 0.05 & 0.02 \\ 4.7 & 0.10 & 0.09 & 0.03 \\ 5.6 & 0.25 & 0.24 & 0.09 \\ \multicolumn{4}{l}{modest positive correlation:} \\ slope& 0.09 & 0.09 & 0.04 \\ \hline \multicolumn{4}{c}{$-20.25\leq M_{1500}\leq-18.75$ mag, $EW(\rm Ly\alpha)\geq25$ \AA} \\ \hline 3.3 & 0.13 & 0.07 & 0.05 \\ 4.1 & 0.25 & 0.14 & 0.09 \\ 4.7 & 0.32 & 0.22 & 0.11 \\ 5.6 & 0.13 & 0.13 & 0.05 \\ \multicolumn{4}{l}{no correlation (flat):} \\ slope& 0.01 & 0.05 & 0.05 \\ \hline \multicolumn{4}{c}{$-20.25\leq M_{1500}\leq-18.75$ mag, $EW(\rm Ly\alpha)\geq55$ \AA} \\ \hline 3.3 & 0.06 & 0.04 & 0.03 \\ 4.1 & 0.10 & 0.09 & 0.04 \\ 4.7 & 0.18 & 0.13 & 0.07 \\ 5.6 & 0.13 & 0.12 & 0.05 \\ \multicolumn{4}{l}{no correlation (almost flat):} \\ slope& 0.04 & 0.05 & 0.03 \\ \hline \end{tabular} 
\tablefoot{The values and 1$\sigma$ uncertainties of the LAE fraction as a function of $z$ and the values of the slope are summarized.} \end{table} \begin{table}[h] \caption{LAE fraction as a function of UV magnitude}\label{tbl:x_muv} \begin{tabular}{cccc} \hline mean $M_{1500}$ & $X_{\rm LAE}$ or slope & $1\sigma$ upper error & $1\sigma$ lower error \\ \hline \multicolumn{4}{c}{$z\approx3.3$, $EW(\rm Ly\alpha)\geq25$ \AA} \\ \hline -20.75 & 0.13 & 0.10 & 0.06 \\ -19.50 & 0.10 & 0.07 & 0.04 \\ \multicolumn{4}{l}{no correlation (flat):} \\ slope & -0.02& 0.07 & 0.08 \\ \hline \multicolumn{4}{c}{$z\approx3.3$, $EW(\rm Ly\alpha)\geq45$ \AA} \\ \hline -20.75 & 0.03 & 0.07 & 0.02 \\ -19.50 & 0.03 & 0.03 & 0.02 \\ -18.50 & 0.08 & 0.05 & 0.03 \\ \multicolumn{4}{l}{no correlation (flat):} \\ slope & 0.01 & 0.02 & 0.03 \\ \hline \multicolumn{4}{c}{$z\approx3.3$, $EW(\rm Ly\alpha)\geq65$ \AA} \\ \hline -20.75 & 0.00 & 0.03 & 0.00 \\ -19.50 & 0.02 & 0.04 & 0.01 \\ -18.50 & 0.05 & 0.04 & 0.02 \\ \multicolumn{4}{l}{modest positive correlation:} \\ slope & 0.02 & 0.01 & 0.01\\ \hline \multicolumn{4}{c}{$z\approx3.3$, $EW(\rm Ly\alpha)\geq85$ \AA} \\ \hline -20.75 & 0.00 & 0.03 & 0.00 \\ -19.50 & 0.00 & 0.02 & 0.00 \\ -18.50 & 0.01 & 0.02 & 0.00 \\ -17.50 & 0.03 & 0.03 & 0.01 \\ \multicolumn{4}{l}{no correlation (flat):} \\ slope & 0.01 & 0.01 & 0.01 \\ \hline \multicolumn{4}{c}{$z\approx3.3$, $EW(\rm Ly\alpha)\geq50$ \AA} \\ \hline -20.75 & 0.03 & 0.07 & 0.02 \\ -19.50 & 0.03 & 0.03 & 0.02 \\ -18.50 & 0.08 & 0.04 & 0.03 \\ \multicolumn{4}{l}{no correlation (flat):} \\ slope & 0.01 & 0.02 & 0.03 \\ \hline \multicolumn{4}{c}{$z\approx4.1$, $EW(\rm Ly\alpha)\geq50$ \AA} \\ \hline -20.75 & 0.12 & 0.23 & 0.06 \\ -19.50 & 0.04 & 0.09 & 0.02 \\ -18.50 & 0.17 & 0.12 & 0.05 \\ \multicolumn{4}{l}{no correlation (flat):} \\ slope & 0.01 & 0.06 & 0.10 \\ \hline \end{tabular} \tablefoot{The values and 1$\sigma$ uncertainties of the LAE fraction as a function of $M_{1500}$ and the values of 
the slope are summarized.} \end{table} \subsection{Redshift evolution of $X_{\rm LAE}$}\label{subsec:zevo} We derive the redshift evolution of $X_{\rm LAE}$ for $EW(\rm Ly\alpha)\geq65$ \AA\ from $M_{1500}=-21.75$ mag to the faint UV magnitude of $-17.75$ mag. We show the results in the upper panel of Figure\, \ref{fig:xevo} with the large filled hexagons. We find a weak rise of $X_{\rm LAE}$ from $z\approx3$ to $z\approx6$, even though poor statistics do not allow us to set a firm constraint at $z\approx5.6$. Breaking our sample into a bright end (with $-21.75\leq M_{1500}\leq-18.75$ mag, purple circles), and a faint end (with $-18.75\leq M_{1500}\leq-17.75$ mag, purple squares), we find similar trends that are consistent within the $1\sigma$ error bars, suggesting that the $X_{\rm LAE}$-$z$ relation does not depend strongly on the rest-frame UV absolute magnitude. The best fit linear relations are $X_{\rm LAE} = 0.07^{+0.06}_{-0.03}z -0.22^{+0.12}_{-0.24}$, $X_{\rm LAE} = 0.05^{+0.05}_{-0.03}z -0.13^{+0.12}_{-0.18}$, and $X_{\rm LAE} = 0.09^{+0.09}_{-0.04}z -0.28^{+0.18}_{-0.36}$ for $-21.75\leq M_{1500}\leq-17.75$ mag, $-21.75\leq M_{1500}\leq-18.75$ mag, and $-18.75\leq M_{1500}\leq-17.75$ mag, respectively. Note that the bright sample here is dominated by the more numerous sub-L$^\ast$ galaxies, which are fainter than the bright samples in the literature. If we select the brighter part of our sample ($-21.75\leq M_{1500}\leq-18.75$ mag), we can estimate $X_{\rm LAE}$ down to lower equivalent widths. In the lower panel of Figure\,\ref{fig:xevo}, the orange circles show $X_{\rm LAE}$ for galaxies with $EW(\rm Ly\alpha)\geq45$ \AA. This is also found to increase from $z\approx3$ to $z\approx4$--$6$, and is above the relation obtained for $EW(\rm Ly\alpha)\geq65$ \AA{} also shown in the lower panel, as expected. The best fit linear relation is $X_{\rm LAE} = 0.05^{+0.05}_{-0.03}z -0.07^{+0.14}_{-0.20}$. 
These results are qualitatively consistent with the trend of increasing $X_{\rm LAE}$ with increasing $z$ based on bright samples in the literature (see below). The fact that only $\approx0$--$30$\% of galaxies within the UV magnitude range at $z\leq5$ are observed as LAEs with $EW(\rm Ly\alpha)\geq65$ \AA{} calls for an explanation in terms of physical mechanisms. We discuss this further in Sect. \ref{subsec:galics}. \begin{figure} \sidecaption \includegraphics[width=9.0cm]{fig05.pdf}\\ \caption{ $X_{\rm LAE}$ vs. $z$. In the upper panel, large purple hexagons, small purple circles, and small purple squares indicate LAE fractions for $EW(\rm Ly\alpha)\geq65$ \AA\ derived with MUSE at $-21.75\leq M_{1500}\leq -17.75$ mag, $-21.75\leq M_{1500}\leq -18.75$ mag, and $-18.75\leq M_{1500}\leq -17.75$ mag, respectively. In the lower panel, purple and orange circles represent $X_{\rm LAE}$ for $EW(\rm Ly\alpha)\geq65$ \AA\ and $EW(\rm Ly\alpha)\geq45$ \AA\ at $-21.75\leq M_{1500}\leq -18.75$ mag, respectively. For visualization purposes, we show the width of $z$ only for one symbol in each panel and slightly shift the other points along the abscissa. } \label{fig:xevo} \end{figure} Next, we compare our MUSE $X_{\rm LAE}$ results with previous studies. For this purpose, we derive $X_{\rm LAE}$ for $M_{1500}$ in the range $[-20.25;-18.75]$ \citep[which corresponds to the faint UV magnitude range of][see Figure \ref{fig:muv_z}]{Stark2011}, and for $EW(\rm Ly\alpha)\geq25$~\AA\ and $EW(\rm Ly\alpha)\geq55$ \AA. Figure\, \ref{fig:zevocompare} shows our results and those of other studies as a function of redshift. At $z\lesssim5$, we confirm the low values from \citet{ArrabalHaro2018} (grey crosses at $z\approx4$ and $z\approx5$) for $EW(\rm Ly\alpha)\geq25$ \AA. Our median values of $X_{\rm LAE}$ at $z\approx4.1$ and $z\approx4.7$ are somewhat smaller than those at $z\approx4$ and $z\approx5$ from \citet{Stark2011}, although they are compatible within the large error bars.
At $z\approx5.6$, our value for $EW(\rm Ly\alpha)\geq25$ \AA\ appears to be significantly lower than those reported in the literature. We note that the discrepancy between our work and \citet{DeBarros2017} is, however, only $1.14\sigma$ and might thus be caused by statistical error. However, our result is more than $2\sigma$ away from those of \citet{ArrabalHaro2018} and \citet{Stark2011}, which is less likely to be a statistical fluctuation. We discuss in Sect. \ref{subsubsec:lbgbias} potential biases which may explain why these two latter references find large values of $X_{\rm LAE}$. We discuss the effect of cosmic variance in Sect. \ref{subsubsec:galics_cv}, and find that it cannot explain such large differences. Another possibility is that the low value we find reflects a late and/or patchy reionization process, and we discuss that further in Sect. \ref{subsec:implications}. We also check the best fit linear relation of $X_{\rm LAE}$ as a function of $z$ for $EW(\rm Ly\alpha)\geq25$ \AA, which is $X_{\rm LAE} = 0.01^{+0.05}_{-0.05}z +0.16^{+0.20}_{-0.22}$. The best fit slope is lowered by the point at $z\gtrsim5.6$, and is shallower than $0.11\pm0.04$ in \citet{Stark2011} and $0.18^{+0.06}_{-0.06}$ in \citet{ArrabalHaro2018}. It is consistent with the flat relation reported by \citet{Hoag2019bMNRAS} within the $1\sigma$ error bars ($0.014\pm0.02$) and with that of \citet{Caruana2018}. Note that \citet{Caruana2018} discuss the slope of $X_{\rm LAE}$ against $z$ using a sample of the MUSE-Wide GTO survey with an {\it apparent} magnitude cut of $F775<26.5$ mag, which is shown by a grey solid line in Figure\,\ref{fig:muv_z}. They also include the contribution of extended Ly$\alpha$ halos to their Ly$\alpha$ fluxes, which enhances their values relative to ours. With regard to $EW(\rm Ly\alpha)\geq55$ \AA, the best fit relation is $X_{\rm LAE} = 0.04^{+0.05}_{-0.03}z -0.05^{+0.14}_{-0.19}$, whose slope is consistent with that of \citet{Stark2011}, $0.018\pm0.036$.
\begin{figure*} \sidecaption \includegraphics[width=18.0cm]{fig06.pdf}\\ \caption{$X_{\rm LAE}$ vs. $z$ compared with previous results. Upper left and right panels: sky noise vs. $z$. The purple and orange lines show the $1\sigma$ flux of the sky noise in the \textsf{mosaic}\ field (for a wavelength width of $600$ km/s). The sky noise in the \textsf{udf-10}\ field is $\approx1.7$ times lower than in the \textsf{mosaic}\ field. The dark-grey and light-grey hashed areas show redshift ranges that are not included in our sample and that are highly affected by sky lines, respectively. Lower left (right) panel: $X_{\rm LAE}$ vs. $z$ for $-20.25\leq M_{1500}\leq-18.75$ mag and $EW(\rm Ly\alpha)\geq25$ \AA\ ($EW(\rm Ly\alpha)\geq55$ \AA). Purple (orange) pentagons indicate our MUSE results. A grey square, triangle (right), diamond, inverted triangle, circle, triangle (left), cross, triangle, and thin diamond represent the results by \citet{Stark2011}, \citet{Treu2013}, \citet{Schenker2014}, \citet{Tilvi2014}, \citet{DeBarros2017}, \citet{Pentericci2018}, \citet{ArrabalHaro2018}, \citet{Caruana2018}, and \citet{Mason2019}, respectively. The upper limits in \citet{Tilvi2014} and \citet{Mason2019} show the $86$\% and $68$\% confidence levels of $X_{\rm LAE}$, respectively, while other error bars show 1$\sigma$ uncertainties of $X_{\rm LAE}$. We note that the original $M_{1500}$ ranges in \citet{Treu2013}, \citet{Schenker2014}, \citet{Tilvi2014}, \citet{DeBarros2017}, \citet{ArrabalHaro2018}, and \citet{Mason2019} are $M_{1500}\geq-20.25$ mag. In \citet{Caruana2018}, the original $M_{1500}$ range can be roughly estimated from the apparent $F775W$ cut (see Figure\,\ref{fig:muv_z}), and they include Ly$\alpha$ halos in the flux measurement. For visualization purposes, we slightly shift the data points of \citet{Tilvi2014} and \citet{ArrabalHaro2018} along the abscissae. 
} \label{fig:zevocompare} \end{figure*} \subsection{UV magnitude dependence of $X_{\rm LAE}$} \label{subsec:uvdep} Figure\, \ref{fig:uvdep} shows a diagram of $X_{\rm LAE}$ and $M_{1500}$ at $z=2.91$--$3.68$ ($\approx3.3$) for our MUSE sample. This is the first time that the dependence of the LAE fraction on $M_{1500}$ is studied at $M_{1500}\geq-18.5$ mag. The LAE fractions for $EW(\rm Ly\alpha)\geq25$ \AA\ (45 \AA{}, 65 \AA{}, and 85 \AA{}) are shown with the purple (violet, orange, and yellow) stars. The best fit linear relations are $X_{\rm LAE} = -0.02^{+0.07}_{-0.08}M_{1500} -0.30^{+1.39}_{-1.62}$, $X_{\rm LAE} = 0.01^{+0.02}_{-0.03}M_{1500} +0.33^{+0.45}_{-0.54}$, $X_{\rm LAE} = 0.02^{+0.01}_{-0.01}M_{1500} +0.41^{+0.18}_{-0.30}$, and $X_{\rm LAE} = 0.01^{+0.01}_{-0.01}M_{1500} +0.12^{+0.13}_{-0.14}$ for EW cuts of $25$ \AA, $45$ \AA, $65$ \AA, and $85$ \AA, respectively. We find no clear dependence of $X_{\rm LAE}$ on $M_{1500}$, in tension with the clear rise of $X_{\rm LAE}$ toward faint UV magnitudes for an EW cut of $50$ \AA\ reported by \citet{Stark2010}. Our results also show that the LAE fraction is sensitive to the equivalent width selection, as expected e.g. from \citet{Hashimoto2017b}. Although this means that the LAE fraction is useful in itself to test cosmological galaxy evolution models \citep[see Sect. \ref{subsec:galics} and][]{Forero-Romero2012, Inoue2018}, it also raises concern for the usage of $X_{\rm LAE}$ as a probe of the IGM neutral fraction at the end of reionization \citep[see also][]{Mason2018}, since homogeneous measurements of Ly$\alpha$ emission over a wide redshift range are required for a fair comparison. \begin{figure} \sidecaption \includegraphics[width=9.0cm]{fig07.pdf}\\ \caption{ $X_{\rm LAE}$ vs. $M_{1500}$ at $z\approx3.3$ ($=2.91$--$3.68$) for $M_{1500}\in [-21.5; -17.0]$. 
Purple, violet, orange and yellow stars indicate our MUSE results with $EW(\rm Ly\alpha)\geq25$ \AA, $EW(\rm Ly\alpha)\geq45$ \AA, $EW(\rm Ly\alpha)\geq65$ \AA, and $EW(\rm Ly\alpha)\geq85$ \AA, respectively. For visualization purposes, we show the width of $M_{1500}$ only for the violet stars and slightly shift the other points along the abscissa.} \label{fig:uvdep} \end{figure} \begin{figure} \sidecaption \includegraphics[width=9.cm]{fig08.pdf}\\ \caption{$X_{\rm LAE}$ vs. $M_{1500}$ for $EW(\rm Ly\alpha)\geq50$ \AA\ at $z\approx3$--$4$. Our $X_{\rm LAE}$ at $z\approx3.3$ ($\approx2.9$--$3.7$) and $z\approx4.1$ ($\approx3.7$--$4.4$), are indicated by filled and open magenta stars, respectively. \citet{Stark2010}'s $X_{\rm LAE}$ at $z\approx3.5$--$4.5$ is shown by filled grey squares. The open grey square indicates the corrected $X_{\rm LAE}$ value of \citet{Stark2010} at $M_{1500}=-19$ mag (see Sect. \ref{subsubsec:lbgbias}). For visualization purposes, we slightly shift the magenta open stars and the grey open square along the abscissa and show the width of $M_{1500}$ only for the magenta filled stars.} \label{fig:uvdepcompare} \end{figure} In Figure\,\ref{fig:uvdepcompare}, our results for the relation between $X_{\rm LAE}$ and $M_{1500}$ for $EW(\rm Ly\alpha)\geq50$ \AA\ at $z\approx3$--$4$ (filled and open magenta stars) are compared with those in \citet{Stark2010} (filled grey squares). The best fit linear relations for our results at $z\approx2.9$--$3.7$ and at $\approx3.7$--$4.4$ are $X_{\rm LAE} = 0.01^{+0.02}_{-0.03}M_{1500} +0.34^{+0.44}_{-0.55}$ and $X_{\rm LAE} = 0.01^{+0.06}_{-0.10}M_{1500} +0.33^{+1.12}_{-1.80}$, respectively. We find no dependence of $X_{\rm LAE}$ on $M_{1500}$ as opposed to the claim in \citet{Stark2010}, whose best fit relation is $X_{\rm LAE} = 0.13^{+0.03}_{-0.03}M_{1500} +2.87^{+0.74}_{-0.72}$. 
Our $X_{\rm LAE}$ is lower than that in \citet{Stark2010} at UV magnitudes fainter than $M_{1500}\approx-19$ mag, and possibly at $M_{1500}\approx-20$ mag. We discuss the difference in $X_{\rm LAE}$ between this work and \citet{Stark2011} in Sect. \ref{subsec:difference}. \section{Discussion}\label{sec:discussion} In this section, we assess the cause of the differences between our results and previous results, compare our results with predictions from a cosmological galaxy formation model, and discuss the evolution of the LAE fraction and implications for reionization. \subsection{Possible causes of the differences between our MUSE results and previous results}\label{subsec:difference} \begin{figure} \sidecaption \includegraphics[width=9.0cm]{fig09.pdf}\\ \caption{Tests of a possible LBG selection bias. Upper panel: The shift of $F606W$ magnitude due to contamination by Ly$\alpha$ emission as a function of redshift. Green, red, and violet lines show the shifts for $EW(\rm Ly\alpha)=25$ \AA, $EW(\rm Ly\alpha)=50$ \AA, and $EW(\rm Ly\alpha)=100$ \AA, respectively. Lower panel: Color-color diagram for $B$ ($F435W$)-dropouts: $F435W$-$F606W$ as a function of $F606W$-$F850LP$. The grey, green, red, and violet points indicate UV selected galaxies with $-20.25 \leq M_{1500}\leq-18.75$ mag at $z_{\rm p}=3.5$--$4.5$, those with $EW(\rm Ly\alpha)=20$--$50$ \AA, $EW(\rm Ly\alpha)=50$--$100$ \AA, and $EW(\rm Ly\alpha)\geq100$ \AA, respectively. The black line represents the color-color criteria for $B$-dropouts. Green, red, and violet arrows show the shifts of colors due to contamination of $F606W$ by Ly$\alpha$ emission at $z\approx4$ and $z\approx4.5$ for $EW(\rm Ly\alpha)=25$ \AA, $EW(\rm Ly\alpha)=50$ \AA, and $EW(\rm Ly\alpha)=100$ \AA, respectively. } \label{fig:lbgtest} \end{figure} In Figure\,\ref{fig:zevocompare}, we find that our measurements of $X_{\rm LAE}$ are systematically lower than those of \citet{Stark2011}, although consistent within the error bars. 
This tension supports the results of \citet{ArrabalHaro2018}, who also find low median values of $X_{\rm LAE}$ at $z\approx4$--$5$. It is worth discussing the potential origins of this tension, since the median values of $X_{\rm LAE}$ have been used to assess cosmic reionization in theoretical studies \citep[e.g.,][]{Dijkstra2014}. The difference between our study and that of \citet{Stark2010} is more striking in Figure\,\ref{fig:uvdepcompare} which shows $X_{\rm LAE}$ as a function of $M_{1500}$. Here, our results are inconsistent with theirs at faint UV magnitudes, even when taking into account the large error bars. Below we discuss two possible origins of this discrepancy in the plot of $X_{\rm LAE}$ as a function of $M_{1500}$: the LBG selection bias, and systematics due to different observing methods. \subsubsection{LBG selection bias}\label{subsubsec:lbgbias} There is a possibility that the LBG sample of \citet{Stark2010} is biased towards bright Ly$\alpha$ emission, i.e., higher $X_{\rm LAE}$, as pointed out in previous studies \citep[LBG selection bias, e.g., ][see also \citealt{Cooke2014} for another potential bias of LBGs due to LyC leakers]{Stanway2008, DeBarros2017,Inami2017}. In other words, LBG selections could be biased towards having higher $X_{\rm LAE}$ if they preferentially miss low-EW sources. The LBG selection consists of a set of color-color criteria and signal-to-noise cuts \citep[e.g.,][]{Stark2009}. \citet{DeBarros2017} obtain a relatively low median $X_{\rm LAE}$ at $z\approx6$ and discuss the causes. They use common selection criteria for $i$-dropouts \citep{Bouwens2015b}, but add an additional criterion of the H-band ($F160W$, rest-frame UV at $z\approx6$) magnitude cut. When located in the red band, strong Ly$\alpha$ emission in a UV spectrum can significantly enhance the apparent Lyman break. 
The additional criterion in \citet{DeBarros2017} can suppress the LBG selection bias, which otherwise increases $X_{\rm LAE}$ for faint UV sources, as will be discussed below. Here we estimate the effect on $X_{\rm LAE}$ of two aspects of the LBG selection bias for $B$ ($F435W$)-dropouts: the impact of Ly$\alpha$ contamination on the signal-to-noise ratio cut in the $V$ ($F606W$) band, and on the color-color criteria in a diagram of $F435W$-$F606W$ vs. $F606W$-$F850LP$ (black solid line in Figure\,\ref{fig:lbgtest}). We estimate $F606W$ magnitudes assuming a power-law spectrum with a UV slope of $-2$ ($\lambda\geq912$ \AA\ in rest frame) and IGM transmission following \citet{Madau1995}. As shown in the upper panel in Figure\,\ref{fig:lbgtest}, strong Ly$\alpha$ emission can increase the flux in $F606W$ noticeably, especially at $z\gtrsim4$. The magnitude shift becomes larger at higher redshifts because of the increasingly smaller rest-frame wavelength range of the UV continuum that is covered by $F606W$ as the redshift goes up. Even at $z\approx4$, however, a moderate Ly$\alpha$ emission line with $EW(\rm Ly\alpha)=50$ \AA\ can cause a $\approx0.16$ mag difference in $F606W$. This affects both the signal-to-noise ratio and the colors. We first illustrate the effect on the signal-to-noise ratio cut of $F606W$ by considering an object with $M_{1500}=-19$ mag. At $z=4.5$ ($z=4.0$), a source without Ly$\alpha$ emission has $F606W=28.3$ mag ($F606W=27.5$ mag), while a source with $EW(\rm Ly\alpha)=50$ \AA\ is $0.27$ ($0.16$) mag brighter. Since these $F606W$ magnitudes are close to the 5$\sigma$ limiting magnitude of $28.0$ mag in \citet{Stark2009}, which corresponds to a completeness of $50$\% in the case of a $S/N\geq5$ cut, the completeness for their $B$-dropouts changes drastically around $28.0$ mag. Second, the effect on the two colors for the $B$-dropout selection is shown in the lower panel of Figure\,\ref{fig:lbgtest}. 
The green, red and violet arrows show color-color shifts in the diagram of $F435W$-$F606W$ and $F606W$-$F850LP$ due to the magnitude shift of $F606W$ in the case of $EW(\rm Ly\alpha)=25$ \AA, $50$ \AA, and $100$ \AA, respectively. The shift of $F606W$ enhances the possibility of meeting the dropout criteria. Indeed, at $z=3.5$--$4.5$, all of our LAEs with absolute UV magnitudes (measured from $F775W$) between $-20.25$ and $-18.75$ mag are located in the dropout selection region (upper left region of bottom panel in Figure\, \ref{fig:lbgtest}). However, $\approx10$\% of continuum-selected galaxies do not meet the dropout criteria. Therefore, strong Ly$\alpha$ emission can enhance the probability to meet LBG-selection criteria both in terms of the signal-to-noise cut and of the color-color criteria. We estimate the completeness of the $B$-dropout galaxies in \citet{Stark2009} using a plot of surface number density as a function of UV magnitude for $B$-dropouts in \citet{Bouwens2007}. \citet{Stark2009} use the same color-color criteria as \citet{Bouwens2007} but with $\approx0.6$ mag shallower data sets than those in \citet{Bouwens2007}. Figure 1 in \citet{Bouwens2007} shows the surface number density of the $B$-dropouts as a function of apparent $F775W$ magnitude (i.e., apparent rest-frame UV magnitude). At absolute UV magnitudes of $\approx-18.8$ and $-18.3$ at $z\approx4$, the completeness values are $\approx25$\% and $\approx5$\%, respectively. The completeness values for the $\approx0.6$ mag shallower data in \citet{Stark2009} are estimated to be $\approx25$\% and $\approx5$\% at $M_{1500}\approx-19.4$ mag and $M_{1500}\approx-18.9$ mag, respectively, if the behavior of completeness as a function of $S/N$ is similar. As shown in the lower panels of Figure \ref{fig:n_muv}, the $B$-dropout galaxies in \citet{Stark2009} are not complete at $M_{1500}\approx-19.0$ mag. 
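The magnitude shift driving this argument can be sketched numerically. The toy model below assumes a flat continuum redward of Ly$\alpha$, complete IGM absorption blueward of the line, and a top-hat $F606W$ response with assumed bounds of $4700$--$7000$ \AA; these simplifications are ours, so the sketch reproduces only the qualitative trend (a brightening that grows with redshift), not the exact values quoted above, which use the full filter curve and \citet{Madau1995} transmission.

```python
import math

LYA_REST = 1215.67  # Lya rest-frame wavelength [Angstrom]
F606W_BLUE, F606W_RED = 4700.0, 7000.0  # assumed top-hat filter bounds (illustrative)

def lya_mag_shift(ew_rest, z):
    """Approximate brightening of F606W due to a Lya line of rest-frame EW ew_rest.

    Assumes a flat continuum redward of Lya, complete IGM absorption blueward,
    and a top-hat filter; a rough illustration, not the paper's calculation.
    """
    lam_lya = LYA_REST * (1.0 + z)
    if not (F606W_BLUE < lam_lya < F606W_RED):
        return 0.0  # line falls outside the band
    width_red = F606W_RED - lam_lya  # continuum coverage redward of Lya
    boost = 1.0 + ew_rest * (1.0 + z) / width_red  # in-band flux boost factor
    return -2.5 * math.log10(boost)  # negative = brighter

# EW(Lya) = 50 A: the brightening is a few tenths of a magnitude at z ~ 4
# and grows toward z ~ 4.5 as the red-side continuum coverage shrinks.
shift_z40 = lya_mag_shift(50.0, 4.0)
shift_z45 = lya_mag_shift(50.0, 4.5)
```

The growth of the shift with redshift comes entirely from the shrinking term `width_red`, which mirrors the statement above that $F606W$ covers an increasingly small rest-frame UV range as the redshift goes up.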
Moreover, \citet{Stark2009} adopt stricter signal-to-noise ratio cuts than the criteria of \citet{Bouwens2007} for $B$-dropouts, and the completeness values in \citet{Stark2009} may be lower than estimated here, especially for faint sources. Following the discussion above, at $z\approx3.5$--$4.5$, the observed number (the denominator of $X_{\rm LAE}$) for their $B$-dropouts with $M_{1500}\approx-19$ mag is estimated to be less than $25$\% of the true value, while the observed number (the numerator of $X_{\rm LAE}$) for LAEs with $EW(\rm Ly\alpha)=50$ \AA\ is estimated to be larger than $50$\% of the true value under the assumption that all the LAEs meet the color-color criteria. This means that $X_{\rm LAE}$ for their $B$-dropout sample may be more than $\approx1.5$ times larger than the $X_{\rm LAE}$ for a complete sample, $0.3^{+0.08}_{-0.09}$. If this overestimate were corrected for $X_{\rm LAE}$ at $M_{1500}\approx-19$ mag, their $X_{\rm LAE}$ would be consistent with ours within the $1\sigma$ error bars as shown by the open grey square in Figure\,\ref{fig:uvdepcompare}. Therefore, the $B$-dropout selection bias may be the dominant cause of the difference in $X_{\rm LAE}$ between LBGs and photo-$z$ selected galaxies at $z\approx4$ at faint UV magnitudes. This may also have an effect on the difference in $X_{\rm LAE}$ at $z\approx4$ shown in Figure\,\ref{fig:zevocompare}. Indeed, the difference for the $55$ \AA\ cut is more pronounced than that for the $25$ \AA\ cut. Although we do not discuss quantitatively biases of dropout selections at other redshifts, strong Ly$\alpha$ emission will cause similar effects, as discussed in the references above. We note that the LBG sample in \citet{ArrabalHaro2018} consists of $\approx70$\% photometric-redshift selected objects and of $\approx30$\% dropout selected objects based on dropout selection criteria for their medium-band filters. 
Since we measure $X_{\rm LAE}$ for a photometric-redshift selected sample, this may explain why their $X_{\rm LAE}$ is similar to ours at $z\lesssim5$, though their sample is not complete in UV at $M_{1500}\gtrsim-20$ mag. Note that \citet{Oyarzun2017} mention yet another potential bias for LBG samples which are incomplete in UV, due to a correlation between $EW(\rm Ly\alpha)$ and $M_{1500}$. This bias leads to an underestimate of $X_{\rm LAE}$ because large equivalent width objects are preferentially missed when faint-UV galaxies drop out of the sample. Our results are not affected by this bias, but it could affect the results of other work shown in Figure \ref{fig:zevocompare}, and could have compensated for the LBG selection bias we discussed above. In Figure \ref{fig:uvdepcompare}, the bias from \citet{Oyarzun2017} has no impact because we are looking at $X_{\rm LAE}$ as a function of UV magnitude. \subsubsection{Different observational methods}\label{subsubsec:method} The Ly$\alpha$ emission in our sample is measured with an IFU (without including the Ly$\alpha$ halo) and is thus less affected by uncertainties due to slit-loss and aperture corrections. \citet{Hoag2019bMNRAS} measure the spatial offset between the Ly$\alpha$ emission and the UV continuum. They find a typical standard deviation for the offset which decreases towards higher redshifts (from $2.17^{+0.19}_{-0.14}$ kpc ($\approx0\farcs3$) at $z\approx3.25$ to $1.19^{+1.29}_{-0.33}$ kpc ($\approx0\farcs2$) at $z\approx5.25$). They argue that the evolution of the spatial offset contributes to the increasing trend of $X_{\rm LAE}$ with $z$ measured with slit spectroscopy with $1''$ slits such as in \citet{Stark2011}. According to \citet{Hoag2019bMNRAS} Figure 7, the simulated cumulative distribution function (CDF) of slit loss is similar from $z\approx3.5$ to $5.5$ but is shifted at $z\approx3$--$3.5$ to a larger slit loss for a $1''$ slit. 
At $z\approx3.5$--$4.5$, the CDF reaches $\approx90$\% at a slit loss of $\approx10$\%. Moreover, their measured offsets are much larger than that for a lensed LAE at $z\approx1.8$ \citep[$0.65$ kpc,][]{Erb2019arXiv} and typical values for LAEs at $z\approx3$--$6$ \citep[$\lesssim0\farcs1$,][]{Leclercq2017}. Hence, this is probably not the dominant cause of the high $X_{\rm LAE}$ values of \citet{Stark2010,Stark2011} at $z\approx3.5$--$4.5$. Note that the aperture diameters (convolved mask diameters) for our Ly$\alpha$ measurement with IFU data are typically larger than $1''$ (see Sect. \ref{subsec:muse}). Our measurements are less affected by the spatial offset between the Ly$\alpha$ emission and the UV continuum. Meanwhile, \citet{Hoag2019bMNRAS} estimate slit losses based on the spatial component of slit spectra. \subsubsection{Summary}\label{subsubsec:summary} We find indications that the dominant cause of the difference between the $X_{\rm LAE}$ measured in \citet{Stark2010,Stark2011} and that presented here is the LBG selection bias. Strong Ly$\alpha$ emission can enhance the probability to meet the LBG-selection criteria in terms of both the signal-to-noise ratio cut and the color-color criteria. The LBG selection bias has a strong effect on $X_{\rm LAE}$ especially for faint UV magnitudes, where LBG samples are not complete. Possible discrepancies arising from different observational methods probably affect $X_{\rm LAE}$ to a lesser extent. Thanks to the MUSE observations and to the HST photo-$z$ sample, our $X_{\rm LAE}$ measurements are derived from the most homogeneous and complete sample to date. \subsection{Comparison with the GALICS model}\label{subsec:galics} Using a homogeneous and complete UV sample and MUSE spectroscopic data, we have measured the LAE fraction for the first time at very faint magnitudes ($M_{1500}\leq-17.0$ mag). 
While we confirm a weak increase of $X_{\rm LAE}$ as a function of redshift at $3 \lesssim z \lesssim$ $5$, we find that LAEs with $EW(\rm Ly\alpha)\geq45$ \AA\ make up a relatively low fraction of the underlying rest-frame UV-detected galaxy population, $\approx0$--$20$\%. This implies the existence of a duty cycle either for the star formation activity or for the escape and/or production of Ly$\alpha$ photons. Another possibility is that only a small fraction of all galaxies can evolve into LAEs or can be observed as LAEs in a limited range of inclinations \citep[e.g.][]{Verhamme2012}. Our results suggest no dependence of $X_{\rm LAE}$ on $M_{1500}$ at $z\approx3$. Keeping in mind that we want to assess the merits of the redshift evolution of $X_{\rm LAE}$ to probe the IGM neutral fraction at $z\gtrsim 6$, it is essential to understand these trends after reionization. To do so, we compare our results with predictions from the semi-analytic model of \citet{Garel2015}. \subsubsection{Description of the model}\label{subsubsec:galics_descrip} \citet{Garel2015} present an updated version of the GALICS hybrid model \citep[Galaxies In Cosmological Simulations,][]{Hatton2003} which is designed to study the formation and evolution of galaxies in the high redshift Universe. GALICS relies on an N-body cosmological simulation to follow the hierarchical growth of dark matter structures and on semi-analytic prescriptions to describe the physics of the baryonic component. The box size of the simulation is $100$ $h^{-1}$ cMpc on a side, and the dark-matter particle mass is $\approx8.5\times10^7\, M_\odot$ (with $1024^3$ particles) \citep{Garel2012}. 
In \citet{Garel2015}, stars are formed according to a Kennicutt-Schmidt law when the galaxy's gas surface density $\Sigma_{\rm gas}$ is larger than a threshold value, $\Sigma_{\rm gas}^{\rm thresh}$ \citep[e.g.,][]{Schmidt1959,Kennicutt1998}, and the intrinsic \mbox{Ly$\alpha$}{} emission from galaxies is computed assuming case B recombination as $L_{\rm Ly\alpha}^{\rm intr} = 0.67\, h\nu_{\rm Ly\alpha}\, Q(H)$, where $h\nu_{\rm Ly\alpha}$ is the energy of a Ly$\alpha$ photon. Here, $Q(H)$ is the production rate of hydrogen-ionising photons estimated from the stellar spectral energy distributions. In order to predict the observed \mbox{Ly$\alpha$}{} properties of galaxies, \citet{Garel2015} combine GALICS with the library of radiative transfer (RT) simulations of \citet{Schaerer2011} which predict the escape fraction of \mbox{Ly$\alpha$}{} photons through galactic winds \citep[$f_{\rm esc}$; see ][for more details]{Verhamme2006,Verhamme2008,Garel2012}. $f_{\rm esc}$ depends on the wind parameters (the wind expansion velocity, velocity dispersion, dust opacity, neutral hydrogen column density) which are computed by GALICS. The \mbox{Ly$\alpha$}{} luminosity emerging from each individual galaxy is then given by $L_{\rm Ly\alpha} = L_{\rm Ly\alpha}^{\rm intr} \times f_{\rm esc}$. The GALICS model was tuned to reproduce the UV and Ly$\alpha$ luminosity functions at $z\approx3$--$6$ in \citet{Garel2012}. This model was then shown to also reproduce accurately the observed stellar mass functions and star-formation-rate to stellar mass relations at $z\approx3$--$6$ \citep{Garel2016}. While their fiducial model can match these observational constraints at $3 \lesssim$ $z \lesssim$ $6$, it fails to reproduce the wide distribution of \mbox{Ly$\alpha$}{} EWs, in particular the high EW values (at $EW(\rm Ly\alpha)\gtrsim50$ \AA). 
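For orientation, the case B conversion above (with the $0.67$ factor used by the model) can be evaluated numerically. The value of $Q(H)$ in the example below is a hypothetical illustrative choice, not taken from the paper:

```python
# Intrinsic Lya luminosity under case B recombination: each ionising photon
# yields a Lya photon with probability 0.67, carrying the Lya photon energy.
H_PLANCK = 6.626e-27   # Planck constant [erg s]
C_LIGHT = 2.998e10     # speed of light [cm/s]
LAM_LYA = 1215.67e-8   # Lya wavelength [cm]

def lya_luminosity_intr(q_h):
    """Intrinsic Lya luminosity [erg/s] for an ionising photon rate Q(H) [1/s]."""
    e_lya = H_PLANCK * C_LIGHT / LAM_LYA  # energy of one Lya photon, ~1.63e-11 erg
    return 0.67 * e_lya * q_h

# Hypothetical example: Q(H) = 1e53 photons/s gives L_Lya ~ 1.1e42 erg/s.
L_intr = lya_luminosity_intr(1e53)
```

Multiplying this intrinsic luminosity by the wind-dependent escape fraction $f_{\rm esc}$, as in the text, then gives the emergent $L_{\rm Ly\alpha}$ of a model galaxy.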
\citet{Garel2015} discuss the possibility that this mismatch is linked to the lack of burstiness of star formation in GALICS given that the \mbox{Ly$\alpha$}{} EW is primarily set by the combination of (i) the production rate of \mbox{Ly$\alpha$}{} photons, dominated by short-lived stars, and (ii) the stellar UV emission which traces star formation over longer timescales \citep[see also][]{Charlot1993,Madau1998}. In the fiducial model, most galaxies keep forming stars at a rather constant rate because the surface gas density threshold is almost always met. In an alternative model (labeled bursty SF), they increase this threshold by a factor $10$, such that gas needs to accrete onto galaxies for longer periods before reaching the required surface density. This naturally gives rise to a star formation duty cycle and \citet{Garel2015} show that their bursty SF model predicts EW distributions in much better agreement with observations than the fiducial model does. Following the procedure of \citet{Garel2016}, we create mock surveys for both the fiducial and bursty SF models using the Mock Map Facility (MOMAF) tool \citep{Blaizot2005}. In practice, for each model, we generate 100 lightcones that mimic the geometry and redshift range of the MUSE HUDF survey, i.e. a square field of $\approx10$ arcmin$^2$ and $2.8 \lesssim$ $z \lesssim$ $6.7$, and we compute $X_{\rm LAE}$ from the mocks in the same bins of redshift and UV magnitude as for the observational measurements. \subsubsection{Measured LAE fraction vs. GALICS predictions}\label{subsubsec:galics_results} In Figure\,\ref{fig:uvdep_galics}, we show our MUSE measurement of $X_{\rm LAE}$ as a function of $M_{1500}$ at $z\approx3.3$ for $EW(\rm Ly\alpha)\geq45$ \AA\ and $EW(\rm Ly\alpha)\geq65$ \AA{}. These EW cuts correspond to our most secure measurements. We also show in this Figure the predictions from the GALICS model. 
Both fiducial and bursty SF GALICS models (dashed and solid lines, respectively) show an increase of $X_{\rm LAE}$ towards faint UV magnitudes with a slope which is in good agreement with the data (star symbols). As discussed in \citet{Verhamme2008} and \citet{Garel2015}, this trend may be the result of two factors. First, UV bright sources have intrinsically smaller $EW(\rm Ly\alpha)$ due to less significant or less recent bursts of star formation. Second, these galaxies are often more massive with higher \mbox{H\,{\sc i}}\ and dust contents which can dramatically reduce the escape fraction of Ly$\alpha$ photons and therefore the observed EW. Hence, few bright UV galaxies display a strong \mbox{Ly$\alpha$}{} emission line. For a more detailed comparison, we see that the fiducial GALICS model (dashed lines) does not reproduce the observed $X_{\rm LAE}$. This model overestimates (underestimates) the LAE fraction at almost all UV magnitudes for $EW(\rm Ly\alpha)\geq45$ \AA{} ($EW(\rm Ly\alpha)\geq65$ \AA). This is a consequence of the too narrow EW distribution predicted by this model (see Sect. \ref{subsubsec:galics_descrip}), which \citet{Garel2015} attribute to overly smooth star formation histories. In the bursty SF model however (solid lines), galaxies have more diverse recent star formation histories which result in wider EW distributions, and consequently this model is able to reproduce our measured $X_{\rm LAE}$ much better. In Figure\,\ref{fig:xevo_galics}, we compare the GALICS predictions of $X_{\rm LAE}$ as a function of $z$ with $X_{\rm LAE}$ from the MUSE data at $z\lesssim5$, i.e. where our observational measurements are most robust. For the same reasons as above, we find that the bursty SF model is much more successful at reproducing the observations than the fiducial model. This is particularly true for the lowest EW cut (i.e. $45$ \AA) where the agreement is quite good (solid orange curve). 
For $EW(\rm Ly\alpha)\geq65$ \AA{} however, we note that the bursty model slightly underpredicts the observed LAE fraction (solid magenta curve), especially at $z \gtrsim 4.5$. Additional ingredients could possibly be missing from this model that would help produce more galaxies with large EWs (in particular in the higher redshift bin) such as radiative transfer in asymmetric geometries, or Ly$\alpha$ production from other channels like collisions (gravitational cooling) or fluorescence \citep[see e.g. ][for a more detailed discussion on these aspects]{Verhamme2012,Garel2015,Dijkstra2017}. Also, the assumed IMF or the metallicity evolution of model galaxies may not be realistic and lead to low EWs \citep[e.g.][]{Hashimoto2017a,Hashimoto2017b}. Overall, these comparisons suggest that the measurements of $X_{\rm LAE}$ by MUSE in the post-reionization epoch can be reasonably well interpreted with current models of high-$z$ galaxies such as GALICS. We find that the observed trends between $X_{\rm LAE}$ and redshift/UV magnitude are mainly shaped by the burstiness of star formation in GALICS. They are also shaped by the variation of $f_{\rm esc}$ with respect to the physical properties of the galaxies as discussed in \citet{Garel2015}. In GALICS, these two aspects modulate the observed \mbox{Ly$\alpha$}{} EWs of galaxies and therefore the LAE fraction at $z \lesssim 5$. \begin{figure} \sidecaption \includegraphics[width=9cm]{fig10.pdf}\\ \caption{ $X_{\rm LAE}$ vs. $M_{1500}$ at $z\approx3.3$ from our MUSE results compared to predictions from the GALICS mocks. The MUSE results at $z\approx3.3$ for $EW(\rm Ly\alpha)\geq45$ \AA\ and $EW(\rm Ly\alpha)\geq65$ \AA\ are indicated by violet and orange stars, respectively. The magenta and orange {\it dashed} lines with dots show the average $X_{\rm LAE}$ computed from 100 mocks of the fiducial GALICS model \citep[][]{Garel2015} at the same redshift for $EW(\rm Ly\alpha)\geq45$ \AA\ and $EW(\rm Ly\alpha)\geq65$ \AA, respectively. 
Those from the bursty SF model are shown by {\it solid} lines with dots. For visualization purposes, we slightly shift the points along the abscissa. Note that $X_{\rm LAE}$ for $EW(\rm Ly\alpha)\geq65$ \AA\ from MUSE and from the bursty SF model is $0$ at $M_{1500}\approx-21$ mag.} \label{fig:uvdep_galics} \end{figure} \begin{figure} \sidecaption \includegraphics[width=9cm]{fig11.pdf}\\ \caption{ $X_{\rm LAE}$ vs. $z$ for $M_{1500}\in[-21.75;-18.75]$, at $z<5$, from our MUSE results compared to predictions from the GALICS model. The MUSE results for $EW(\rm Ly\alpha)\geq45$ \AA\ and $EW(\rm Ly\alpha)\geq65$ \AA\ are indicated by violet and orange circles, respectively. The magenta and orange dashed (solid) lines with dots show the average $X_{\rm LAE}$ computed from 100 mocks of the fiducial (bursty SF) GALICS model \citep[][]{Garel2015} for $EW(\rm Ly\alpha)\geq45$ \AA\ and $EW(\rm Ly\alpha)\geq65$ \AA, respectively. For visualization purposes, we slightly shift the points along the $x$-axis.} \label{fig:xevo_galics} \end{figure} \subsubsection{Cosmic variance}\label{subsubsec:galics_cv} \begin{figure} \sidecaption \includegraphics[width=8.8cm]{fig12.pdf}\\ \caption{ Test of cosmic variance and uncertainties of $X_{\rm LAE}$ for our MUSE observations using GALICS mocks of the bursty SF model. {\it top panel}: $X_{\rm LAE}$ vs. $z$ for $M_{1500}\in[-21.75;-18.75]$, at $z<5$. In order to better compare GALICS with our observations and to provide a more accurate estimate of cosmic variance, we use slightly different EW cuts for the model. We replace the 45\AA{} cut with $46$\AA, $48$\AA, and $46$\AA{} cuts at $z\approx3.3$, $4.1$, and $4.7$, and we replace the 65\AA{} cut with 53\AA, 52\AA, and 49\AA{} at the same redshifts. With these cuts, the values of $X_{\rm LAE}$ from MUSE (violet and orange circles for $EW(\rm Ly\alpha)\geq45$ \AA\ and $EW(\rm Ly\alpha)\geq65$ \AA{}) and GALICS (solid lines) match. 
{\it middle panel}: Relative upper $1\sigma$ uncertainties of $X_{\rm LAE}$ vs. $z$ for $M_{1500}\in[-21.75;-18.75]$, at $z<5$. The relative $68$\% percentiles of $X_{\rm LAE}$ (field-to-field variance) measured among $100$ GALICS mocks are indicated by violet and orange circles with solid lines for $EW(\rm Ly\alpha)\geq45$ \AA\ and $EW(\rm Ly\alpha)\geq65$ \AA, respectively. The $68$\% percentile range includes both cosmic variance and the statistical error. The statistical errors estimated from BPCI are shown by violet and orange crosses. The MUSE uncertainties (estimated from BPCI including completeness correction effects) for $EW(\rm Ly\alpha)\geq45$ \AA\ and $EW(\rm Ly\alpha)\geq65$ \AA\ are indicated by violet and orange circles, respectively. {\it bottom panel}: Relative lower $1\sigma$ uncertainties of $X_{\rm LAE}$ vs. $z$ for $M_{1500}\in[-21.75;-18.75]$, at $z<5$. The symbols are the same as those in the middle panel. For visualization purposes, we slightly shift the points along the $x$-axis. } \label{fig:xevo_galicscv} \end{figure} The area of the MUSE HUDF survey is limited to $9.92$ arcmin$^2$ which translates into comoving volumes of $\approx 1.5-2.5 \times 10^4$ cMpc$^3$ for the redshift ranges we are considering here. As explained in Sect. \ref{subsec:uncertainties}, we have accounted for several sources of uncertainty to compute the error on $X_{\rm LAE}$ but we have so far ignored cosmic variance (see Sect. \ref{subsec:uncertainties} for discussion). To assess the significance of this effect, we can estimate the cosmic variance from the GALICS mock lightcones, which are cut out from the $\approx3\times10^6$ cMpc$^3$ simulation box \citep{Garel2016}. We compute the $1\sigma$ standard deviation as the field-to-field variation, which includes the effects of both cosmic variance and binomial statistical fluctuations. 
Note that our estimate of cosmic variance with GALICS only accounts for the clustering of galaxies and not for the possible contribution of large-scale variations in the IGM transmissivity due to a patchy reionization. In the middle and bottom panels of Figure\,\ref{fig:xevo_galicscv}, we compare the relative uncertainty of our MUSE $X_{\rm LAE}$ estimates (circles) with the relative uncertainties due to the field-to-field variation of $X_{\rm LAE}$ (solid lines with dots), which is estimated from the $100$ mocks based on the bursty SF model. For a fair comparison, we match $X_{\rm LAE}$ of GALICS to that of MUSE for $EW(\rm Ly\alpha)\geq45$ \AA\ ($EW(\rm Ly\alpha)\geq65$ \AA), by adopting slightly different $EW(\rm Ly\alpha)$ cuts in the model catalogs, of $46$ \AA, $48$ \AA, and $46$\AA\ ($53$ \AA, $52$ \AA, and $49$ \AA) at $z\approx3.3$, $4.1$, and $4.7$, respectively (see the top panel of Figure\,\ref{fig:xevo_galicscv}). Note that the total uncertainty for the MUSE $X_{\rm LAE}$ is calculated by summing, over flux bins, the statistical error (binomial proportion confidence interval) multiplied by the completeness correction (see Sect. \ref{subsec:uncertainties} for more details). For both EW cuts, the relative upper errors of our MUSE $X_{\rm LAE}$ are much larger than those of the field-to-field variance for the bursty GALICS model. The relative lower errors of our MUSE $X_{\rm LAE}$ are at the same level as those of the field-to-field variance at $z\approx4$--$5$. The contribution of cosmic variance to the field-to-field variance in the GALICS mocks is negligible, since statistical errors (crosses) are dominant. This suggests that cosmic variance is a subdominant source of uncertainty in our measurement of $X_{\rm LAE}$. Therefore, we conclude that our MUSE results are not strongly affected by cosmic variance.
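The field-to-field interval can be sketched as follows (a minimal stand-alone version; the function name and the illustrative mock values are ours, not part of the actual GALICS pipeline, which also folds in the EW-cut matching described above):

```python
def field_to_field_interval(x_mocks):
    """Lower/upper 1-sigma field-to-field scatter of X_LAE across mock
    lightcones, taken as the 16th/84th percentiles about the median."""
    xs = sorted(x_mocks)
    n = len(xs)
    median = xs[n // 2]
    return median - xs[int(0.16 * n)], xs[int(0.84 * n)] - median

# 100 illustrative mock measurements scattered around X_LAE ~ 0.2
mocks = [0.2 + 0.05 * ((i % 10) - 4.5) / 4.5 for i in range(100)]
lower, upper = field_to_field_interval(mocks)
```

Because the interval is taken from percentiles rather than a Gaussian fit, it captures asymmetric scatter between the mocks directly.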
We note that the field-to-field variance may be slightly underestimated in the mock catalogs because fluctuations on scales larger than the simulated box ($150$ cMpc) are not sampled. The size of the simulated volume is however significantly larger than our MUSE survey volume ($\approx 1.5-2.5 \times 10^4$ cMpc$^3$), so this underestimate should be small. \subsection{The redshift evolution of the LAE fraction and implications for reionization}\label{subsec:implications} The main purpose of this paper is to measure the evolution of $X_{\rm LAE}$ in the post-reionization epoch, at $z\lesssim6$ (as shown in Figures \ref{fig:xevo}, \ref{fig:zevocompare}, and \ref{fig:xevo_galics}). Our results confirm the rise of $X_{\rm LAE}$ with redshift found in the literature between $z\approx3$ and $6$ for $-21.75\leq M_{1500}\leq-17.75$ mag, with a best-fit relation $X_{\rm LAE} = 0.07^{+0.06}_{-0.03}\,z -0.22^{+0.12}_{-0.24}$ (Figure \ref{fig:xevo}). Meanwhile, the trend stops at $z\sim5.6$ for $-20.25\leq M_{1500}\leq-18.75$ mag in Figure \ref{fig:zevocompare}. As discussed in Sect. \ref{subsubsec:galics_cv}, this evolution at $z\approx3$-$5$ is not caused by cosmic variance due to the limited survey field of the MUSE HUDF Survey. Instead, it is probably caused by higher intrinsic $EW(\rm Ly\alpha)$ and/or higher Ly$\alpha$ escape fractions at higher redshift in a given $M_{1500}$ range, due to less massive and less dusty galaxies at higher redshift \citep[e.g.,][]{Speagle2014a,Bouwens2016a, Santini2017}. It is also very important to understand the co-evolution of the Ly$\alpha$ and UV luminosity functions \citep[e.g.,][]{Ouchi2008,Dunlop2013, Konno2016,Konno2018,Ono2018}. \citet{DeBarros2017} obtain a relatively low $X_{\rm LAE}$ ($\approx40$\%) at $z\approx6$, which implies a less dramatic turn-over at $z>6$ than previously found \citep[e.g.,][]{Stark2011,Pentericci2014, Schenker2014,Tilvi2014}.
If we interpret our point at $z\approx 5.5$ as a statistical fluctuation $1\sigma$ below a true value $\approx 0.35$ as found by \citet{DeBarros2017}, we confirm this shallower increase of the LAE fraction towards $z\approx5$ and $z\approx6$. This could indicate a halt in the evolution of the Ly$\alpha$ escape fraction, possibly related to the plateau evolution of the star formation main sequence at $z\approx5$--$6$ suggested by \citet{Speagle2014a} and \citet{Salmon2015}, and implying a constant stellar mass at a given $M_{1500}$ \citep[see however,][]{Santini2017}. Another possibility is that our low value of $X_{\rm LAE}$ at $z\approx5.5$ is genuine and indeed indicates a late transition in the ionisation state of the IGM. However, our data at $z\leq5$ combined with the work of \citet{DeBarros2017} and \citet{Pentericci2018}, which are not affected by the LBG selection bias discussed above, suggest an earlier reionization, at $z\approx6$--$7$. Our measurement at $z\approx5.5$ may thus indicate a patchy reionization process. \citet{Bosman2018} measure the mean and scatter of the IGM Ly$\alpha$ opacity with the largest sample of quasars so far. They confirm the existence of tails towards high values in the Ly$\alpha$ opacity distributions, which may persist down to $z\approx5.2$. They find a linear increase in the mean Ly$\alpha$ opacity from $\approx1.8$ at $z\approx5$ to $\approx3.8$ at $z\approx6$. These results also imply a late and/or patchy reionization scenario, in which reionization ends at $z\approx5.2$--$5.3$ \citep[e.g.,][see also \citealt{Kashino2019arXiv}]{Kulkarni2019,Keating2019}. The Gunn-Peterson absorption trough in quasar spectra is only sensitive to a low Ly$\alpha$ opacity (very low \mbox{H\,{\sc i}}\ gas fraction), and the LAE fraction is therefore a complementary tool.
In the near future, the James Webb Space Telescope (JWST)/Near Infrared Spectrograph (NIRSpec) will enable us to observe Ly$\alpha$ emission at $z\approx5$ to $z\gtrsim10$ homogeneously and help make significant progress. We can also use the JWST/Near Infrared Imager and Slitless Spectrograph (NIRISS) for Ly$\alpha$ spectroscopy. Most importantly, one can measure H$\alpha$ emission at $z\approx0$ to $z\approx7$ with JWST/NIRSpec, and hence the line ratio of Ly$\alpha$ to H$\alpha$, to disentangle intrinsic evolution from evolution of the escape fraction. Another important point raised in this paper is that $X_{\rm LAE}$ estimates are sensitive to $EW(\rm Ly\alpha)$ selections (see Figures \ref{fig:xevo} and \ref{fig:zevocompare}). Although a general consensus seems to emerge from previous work, a quantitative interpretation of the evolution of $X_{\rm LAE}$ with redshift requires more accurately constructed samples. In addition, the contribution of extended Ly$\alpha$ emission to the total Ly$\alpha$ budget is typically large and has a large scatter \citep[typically more than $\approx$50\%,][]{Momose2016, Leclercq2017}. The method used to measure the Ly$\alpha$ flux therefore has a large effect on $EW(\rm Ly\alpha)$ and hence on $X_{\rm LAE}$. This means that accurate and homogeneous measurements of Ly$\alpha$ emission are required to use $X_{\rm LAE}$ as a tracer of the \mbox{H\,{\sc i}}\ gas fraction of the IGM. As shown in Figures \ref{fig:uvdep} and \ref{fig:uvdepcompare}, the $X_{\rm LAE}$-$z$ relation does not depend strongly on the rest-frame UV absolute magnitude. This suggests that at $z<6$, combining UV-bright and faint samples can give us better statistics for $X_{\rm LAE}$ measurements. In addition, as discussed in Sect. \ref{subsubsec:lbgbias}, a firm definition of parent samples that avoids selection biases is also required to assess the evolution of $X_{\rm LAE}$. The uncertainties of $X_{\rm LAE}$ have to be calculated with BPCI (Bernoulli trials, see Sect. \ref{subsec:uncertainties}).
Moreover, \citet{Mason2018} warn against interpreting the evolution of $X_{\rm LAE}$ at a fixed UV magnitude, since galaxies with the same UV magnitude have very different stellar and halo masses at different redshifts \citep[e.g.,][]{Speagle2014a, Behroozi2013}. Because of such effects, sophisticated models of galaxy formation are needed to robustly interpret variations of $X_{\rm LAE}$ with cosmic time. We propose a method using a UV-complete sample including faint galaxies, based on a photo-$z$ selection and an absolute magnitude cut, together with Ly$\alpha$ measurements by an IFU with high sensitivity and wide wavelength coverage in a large field of view, such as VLT/MUSE and VLT/BlueMUSE \citep{Richard2019BlueMUSE}, at $z\approx2$--$6.6$. \section{Summary and Conclusions}\label{sec:conclusions} We have investigated the LAE fraction at $z\approx3$--$6$ using the second data release of the MUSE {\it Hubble} Ultra Deep Field Survey and the HST catalog of the UVUDF. Thanks to the unprecedented depth of the MUSE and HST data for Ly$\alpha$ and UV, respectively, we have studied the LAE fraction for galaxies as faint as $M_{1500}=-17.0$ mag at $z\approx3$ for the first time with a UV-complete sample. We have also derived the LAE fraction as a function of redshift homogeneously from $z=3$ to $6$, down to $M_{1500}=-17.75$ mag. Our results are summarized as follows: \begin{enumerate} \item We derived the redshift evolution of $X_{\rm LAE}$ for a number of EW and UV magnitude selections, including the first estimate down to $-17.75$ mag. These results are summarized in Table \ref{tbl:x_z}. For all selections, we find low values of $X_{\rm LAE}$ $\approx0.04$-$0.3$, and a weak rise of $X_{\rm LAE}$ with $z$, qualitatively consistent with the trend reported for brighter samples in the literature. \item We compared our MUSE results with those in the literature for $M_{1500}\in [-20.25;-18.75]$.
At $z\lesssim5$, our values of $X_{\rm LAE}$ are consistent with those in \citet{ArrabalHaro2018} and \citet{Stark2011} within $1\sigma$ error bars for $EW(\rm Ly\alpha)\geq25$ \AA{} (see left panel of Figure\,\ref{fig:zevocompare}). Our $X_{\rm LAE}$ at $z\approx5.6$ is lower than those in the literature, which may be caused by statistical fluctuations or by a late and/or patchy reionization process. \item We measured the dependence of $X_{\rm LAE}$ on $M_{1500}$ at $z=2.9$--$3.7$ for $EW(\rm Ly\alpha)\geq25$ \AA, 45 \AA, 65 \AA, and 85 \AA{} (see Figure\,\ref{fig:uvdep}). This is the first time this has been measured down to $M_{1500}=-17.0$ mag (for the largest EWs of our sample), and for a volume-limited sample. We found no clear dependence of $X_{\rm LAE}$ on $M_{1500}$, in contrast to previous reports. \item We compared the dependence of $X_{\rm LAE}$ on $M_{1500}$ for $EW(\rm Ly\alpha)\geq50$ \AA\ at $z\approx3$-$4$ derived from MUSE with results from the literature (Figure\,\ref{fig:uvdepcompare}). Again we found no dependence of $X_{\rm LAE}$ on $M_{1500}$. Our slopes of $0.01^{+0.02}_{-0.03}$ and $0.01^{+0.06}_{-0.10}$ at $z\approx2.9$--$3.7$ and $z\approx3.7$--$4.4$, respectively, are shallower than that in \citet{Stark2010}, $0.13^{+0.03}_{-0.03}$ at $z\approx3.5$--$4.5$. We also found lower values of $X_{\rm LAE}$ at faint UV magnitudes of $M_{1500}\gtrsim-19$ mag. \item The dominant cause of the difference between $X_{\rm LAE}$ in our work and in previous studies appears to be LBG selection biases in those studies. We showed how these can lead to an overestimate of $X_{\rm LAE}$ by a factor of $\approx1.5$ at $z\approx4$ for galaxies with $M_{1500}=-19$ mag and $EW(\rm Ly\alpha)\geq50$ \AA. \item We compared our MUSE results with predictions from a cosmological semi-analytic galaxy evolution model \citep[GALICS,][]{Garel2015}.
When GALICS uses a bursty star formation model, it can reproduce our measurement of $X_{\rm LAE}$ as a function of $M_{1500}$ at $z\approx3$. The fiducial GALICS model, however, cannot. The bursty model can also reproduce $X_{\rm LAE}$ as a function of $z$ at $z\lesssim4$. We assessed cosmic variance for our MUSE results using the bursty SF model and found that it does not have a significant effect on our results. \item Overall, we found that $X_{\rm LAE}$ is lower than $\approx30$\%. This implies a low duty cycle of LAEs, suggesting bursty star formation or strong time variations in the production of Ly$\alpha$ photons and/or in their escape fraction. \end{enumerate} Despite the difficulties inherent to the method, the dominant source of uncertainty in our work is the Poisson noise due to the small number of objects in our samples. This is encouraging and suggests that future deep surveys with e.g., MUSE and JWST will enable us to produce accurate measurements of $X_{\rm LAE}$ with secure samples and to extend our understanding of the evolution of $X_{\rm LAE}$ at all redshifts, after and during the epoch of reionization. \begin{acknowledgements} We thank the anonymous referee for constructive comments and suggestions. We would like to express our gratitude to Stephane De Barros and Pablo Arrabal Haro for kindly providing their data plotted in Figures \ref{fig:muv_z}, \ref{fig:n_muv}, and \ref{fig:zevocompare}. We are grateful to Pascal Oesch, Kazuhiro Shimasaku, Masami Ouchi, Rieko Momose, Daniel Schaerer, Hidenobu Yajima, Taku Okamura, Makoto Ando, and Hinako Goto for giving insightful comments and suggestions. This work is based on observations taken by the VLT, which is operated by the European Southern Observatory.
This research made use of Astropy\footnote{\url{http://www.astropy.org}}, which is a community-developed core Python package for Astronomy \citep{TheAstropyCollaboration2013,TheAstropyCollaboration2018}, \textsf{MARZ}, \textsf{MPDAF}, and \textsf{matplotlib} \citep{Hunter2007}. H.K. acknowledges support from Japan Society for the Promotion of Science (JSPS) through the JSPS Research Fellowship for Young Scientists and Overseas Challenge Program for Young Researchers. This work was supported by the project FOGHAR (Agence Nationale de la Recherche, ANR-13-BS05-0010-02). JB acknowledges support from the ORAGE project from the Agence Nationale de la Recherche under grant ANR-14-CE33-0016-03. JR acknowledges support from the ERC starting grant 336736-CALENDS. T.H. acknowledges support from the Grant-in-Aid for Scientific Research 19J01620. \end{acknowledgements} \begin{appendix} \section{Uncertainties of $z_{\rm p}$}\label{ap:photoz} \subsection{Impact of the lack of IRAC data on the $z_{\rm p}$ estimation in \citet{Rafelski2015}}\label{ap:irac} It is well known that LBG samples and photo-$z$ samples at $z\approx3$--$7$ can be contaminated by lower-$z$ galaxies with a $3646$ \AA\ Balmer or $4000$ \AA\ break at $z\approx0$--$1$, especially at faint magnitudes. Spitzer/IRAC data can provide rest-frame optical photometry, which is useful to break the degeneracy of $z_{\rm p}$ \citep[e.g.,][]{Bradac2019}. In this work, we use the catalog from R15, where $z_{\rm p}$ are derived using HST data alone. The advantage of this choice is discussed in \citet{Brinchmann2017}, where they show that IRAC data can in fact worsen photo-$z$ performance for faint galaxies in the MUSE HUDF sample (see their Appendix A). This might reflect the difficulty of providing reliable IRAC photometry for sources as faint as most of our sample. As discussed in Sect.
\ref{subsec:uncertainties}, the fraction of low-$z$ contaminants among R15 galaxies is found to be low within MUSE samples \citep[Figure 20 in][]{Inami2017}. To avoid contaminants due to poor photo-$z$ estimations, we apply an $S/N>2$ cut for our sample and then apply a stricter cut on $M_{1500}$ (see Sect. \ref{subsec:parent_sample} and Figure \ref{fig:muv_z} for more details). \subsection{Effects of $z_{\rm p}$ errors on redshift binning}\label{ap:check_photoz} To check the effect of the error on $z_{\rm p}$ on redshift binning, we compare the median of the upper and lower $95\%$ errors of $z_{\rm p}$ to the half-width of the redshift bins for $-20.25\leq M_{1500}\leq-18.75$ mag and $-18.75\leq M_{1500}\leq-17.75$ mag shown in Figures \ref{fig:xevo} and \ref{fig:zevocompare}. The upper (lower) $95\%$ errors are calculated as the difference between the $95$\% upper (lower) limit of the photo-$z$ and the maximum-likelihood photo-$z$ from BPZ. The results are summarized in Table \ref{tbl:zperror}. For $-20.25\leq M_{1500}\leq-18.75$ mag, the median values of the $95$\% $z_{\rm p}$ errors are smaller than the half-widths of the redshift bins at $z\approx3.3$ to $5.6$. For $-18.75\leq M_{1500}\leq-17.75$ mag, the errors are larger than those for the brighter $M_{1500}$, but the medians of the $95$\% errors are still smaller than or comparable to the half-widths of the redshift bins over the whole redshift range. Therefore, the $95$\% errors of $z_{\rm p}$ are typically smaller than the half-widths of the redshift bins in Figures \ref{fig:xevo} and \ref{fig:zevocompare}. Note that we include the widths of the redshift bins in the linear relation fitting (see Sect. \ref{subsec:slope}). Next, we check the fraction of possible low-$z$ contaminants, $f_{\rm large\ error}$. Although the probability distribution functions (PDFs) of $z_{\rm p}$ in the R15 catalog are not published, galaxies with a bimodal PDF of $z_{\rm p}$ show a large lower $95$\% error, which reaches $z\approx0$--$1$ for a sample at $z\approx3$--$7$.
We calculate $f_{\rm large\ error}$ from the wavelengths of the breaks, $z_{\rm p}$, and its $95$\% errors and summarize the results in Table \ref{tbl:zperror}. For $-20.25\leq M_{1500}\leq-18.75$ mag, $f_{\rm large\ error}$ is $0.02$ to $0.09$, implying that our $X_{\rm LAE}$ in Figure \ref{fig:zevocompare} is not affected significantly by contaminants. Meanwhile, for $-18.75\leq M_{1500}\leq-17.75$ mag, $f_{\rm large\ error}$ is $0.05$ at $z\approx3.3$, so our $X_{\rm LAE}$ is not suppressed significantly by contaminants in Figure \ref{fig:uvdepcompare}. At $z\approx4.1$ to $5.6$, $f_{\rm large\ error}$ is relatively high, $0.13$ to $0.35$. However, these are conservative upper limits on the low-$z$ contamination fraction, since not all galaxies with a large photo-$z$ error lie at $z\approx0$--$1$. In fact, our sample galaxies have their maximum-likelihood photo-$z$ at $z\approx3$--$6$. Our $X_{\rm LAE}$ at $z\gtrsim4.1$ and $5.6$ in Figure \ref{fig:xevo} should therefore not be affected significantly by low-$z$ contaminants.
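The bookkeeping behind $f_{\rm large\ error}$ can be sketched as follows (a simplified version: this flags any galaxy whose lower $95$\% bound reaches the low-$z$ interloper range, whereas the actual calculation also uses the wavelengths of the breaks; the function name and the example values are ours):

```python
def f_large_error(zp, lower_err_95, z_interloper=1.0):
    """Fraction of galaxies whose 95% lower photo-z bound reaches the
    low-z interloper range, suggesting a bimodal photo-z PDF."""
    flagged = sum(1 for z, e in zip(zp, lower_err_95) if z - e <= z_interloper)
    return flagged / len(zp)

# four illustrative galaxies at zp ~ 3-6; only the second has a lower
# 95% bound reaching z <= 1
fraction = f_large_error([3.3, 4.1, 4.7, 5.6], [0.2, 3.5, 0.3, 0.4])
# → 0.25
```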
\begin{table*} \caption{ Comparison of the median of $95\%$ upper and lower errors of $z_{\rm p}$ to the half-width of redshift bins and the fraction of galaxies with a large lower error.}\label{tbl:zperror} \centering \begin{tabular}{ccccc} \hline \noalign{\smallskip} mean $z_{\rm p}$ & half-width of $z$ bins & median of $95$\% upper error & median of $95$\% lower error & $f_{\rm large\ error}$ \\ \noalign{\smallskip} \hline \multicolumn{5}{c}{$-20.25\leq M_{1500}\leq-18.75$ mag} \\ \hline $z\approx3.3$ & 0.39 & 0.19 & 0.20 & 0.02\\ $z\approx4.1$ & 0.38 & 0.24 & 0.25 & 0.05\\ $z\approx4.7$ & 0.29 & 0.28 & 0.27 & 0.0\\ $z\approx5.6$ & 0.56 & 0.30 & 0.32 & 0.09\\ \hline \multicolumn{5}{c}{$-18.75\leq M_{1500}\leq-17.75$ mag} \\ \hline $z\approx3.3$ & 0.39 & 0.21 & 0.23 & 0.05\\ $z\approx4.1$ & 0.38 & 0.30 & 0.34 & 0.2\\ $z\approx4.7$ & 0.29 & 0.33 & 0.34 & 0.13\\ $z\approx5.6$ & 0.56 & 0.35 & 0.44 & 0.35\\ \hline \end{tabular} \tablefoot{ The mean redshift, the half-width of the redshift bin, the median of upper $95\%$ errors of $z_{\rm p}$, the median of lower $95\%$ errors, and the fraction of galaxies with a large lower error suggesting a bimodal $z_{\rm p}$ PDF are shown.} \end{table*} \section{The template spectra used in \textsf{MARZ}}\label{ap:temp_marz} Figure \ref{fig:temp_marz} shows the template spectra of LAEs in \textsf{MARZ} used in this work. The continuum is subtracted in the templates, and the bright Ly$\alpha$ lines are clipped to lie between $-30$ and $+30$ times the mean absolute deviation, in a similar manner to those used in the original \textsf{MARZ} \citep{Hinton2016} and \textsf{AUTOZ} \citep[][]{Baldry2014}, as shown in the right panels in Figure \ref{fig:temp_marz}. Cross-correlation functions indicate the locations of lines in the fitting range and are, in general, not affected significantly by the line shapes of the templates \citep[see Sect. 4.3 in][]{Herenz2017b}.
In fact, the completeness of \textsf{MARZ} does not depend on the FWHM of the fake lines in our completeness simulations, as we mention in Sect. \ref{subsec:compsimu}. Although our templates do not cover all varieties of Ly$\alpha$ line profiles, this does not affect the detection of Ly$\alpha$ emission. \begin{figure*}[h] \sidecaption \includegraphics[width=18cm]{figB.pdf}\\ \vspace{-20pt} \caption{The LAE template spectra of \textsf{MARZ} used in this work: those of ID$=10$, $18$, and $19$ are used in \citet{Inami2017}, while those of ID$=25$, $26$, $27$, and $30$ are newly created from MUSE data (Bacon et al. in prep.). Left panels: scaled spectra of the templates in the rest frame. Right panels: zooms of the Ly$\alpha$ emission line in each left panel. } \label{fig:temp_marz} \end{figure*} \section{An example of contamination of Ly$\alpha$ detection}\label{ap:contami} Figure \ref{fig:contami} shows an example of contamination by Ly$\alpha$ emission from a neighboring object. Panels (a) - (d) show sub-panels in \textsf{MARZ} for a UV-selected source \citep[see][for more details of \textsf{MARZ}'s screen]{Inami2017}. The HST cutout around the UV galaxy with \citet{Rafelski2015} ID $=628$ is shown in panel (a). The 1D spectrum shown in panel (d) exhibits a strong Ly$\alpha$ emission line, assigned the highest confidence level by \textsf{MARZ}. The spectrum is extracted from the object mask (panel (b)) and is clearly contaminated by diffuse Ly$\alpha$ emission from a neighboring object (MUSE ID $=1185$ in the DR1 catalog), as shown in the narrow band in panel (c). We remove such contaminated objects from our sample of Ly$\alpha$ emitter candidates during visual inspection. \begin{figure}[h] \sidecaption \includegraphics[width=9cm]{figC.pdf}\\ \caption{ An example of contamination of Ly$\alpha$ emission from a neighboring object.
Panels (a) - (d) show sub-panels in \textsf{MARZ}'s screen \citep{Inami2017} for a UV-selected source with \citet{Rafelski2015} ID $=628$: (a) HST F606W cutout, (b) mask of the object for the extraction of the 1D spectrum, (c) MUSE NB cutout, and (d) 1D spectrum. The red and green circles in the images show the position of the UV-selected galaxy. The green and red lines in panel (d) indicate the observed data and the best-fit template, respectively. } \label{fig:contami} \end{figure} \section{The number of flux bins used to correct incompleteness of Ly$\alpha$ detection}\label{ap:bin_comp} We also examine the effect of binning to correct the incompleteness of the number of LAEs. Because the number of objects we have is small, a somewhat arbitrary binning may affect $X_{\rm LAE}$. For a large number of bins, we often get flux bins with no object at all, and these bins increase the error bars. For a small number of bins, we introduce a large error on the completeness correction as described above. To test the effect of binning, we vary the number of bins ($N_{\rm bin}$) we use, from $2$ to $6$, and see how the median values and error bars change. In Figure \ref{fig:binning}, we show the median values and error bars of $X_{\rm LAE}$ for $N_{\rm bin}=2$--$6$ for the plot of the evolution of $X_{\rm LAE}$ (Figure \ref{fig:xevo}). The uncertainties of the completeness correction are $20$\%, $25$\%, $32$\%, $44$\%, and $73$\% for $N_{\rm bin}=6$, $5$, $4$, $3$, and $2$ (see Sect. \ref{subsec:uncertainties}). We find that $4$ to $6$ bins are a sweet spot where the error bars are small and appear converged, and we adopt $N_{\rm bin}=4$ in Sect. \ref{subsec:uncertainties}. \begin{figure} \sidecaption \includegraphics[width=12cm]{figD.pdf}\\ \caption{A test of the effect of the binning of the Ly$\alpha$ flux used to correct the incompleteness of the number of LAEs. The median values and error bars of $X_{\rm LAE}$ for the plot of the evolution of $X_{\rm LAE}$ (Figure \ref{fig:xevo}) are shown.
The black, purple, violet, orange, and yellow hexagons indicate $N_{\rm bin}=6$, $5$, $4$, $3$, and $2$, respectively. For visualization purposes, we slightly shift the points along the $x$-axis and show the width of $z$ only for $N_{\rm bin}=6$. } \label{fig:binning} \end{figure} \section{Error propagation of completeness correction values in a binomial proportion confidence interval}\label{ap:bpci} We test the applicability of a binomial proportion confidence interval (BPCI) for the case of a completeness correction to calculate error bars of the LAE fraction. First, we examine numerically how to propagate completeness correction values in the error calculation with BPCI. To perform mock observations, we generate $N_{\rm LAE}^{\rm true}(z_{\rm s},\,M_{1500},\, EW)$ randomly with $100000$ trials using the python module \textsf{numpy.random.binomial} for each $N_{\rm 1500}(z_{\rm p},\, M_{1500})$ and the true LAE fraction ($X_{\rm LAE}^{\rm true}$). Then we generate $N_{\rm LAE}^{\rm det}(z_{\rm s},\,M_{1500},\, EW)$ randomly for each $N_{\rm LAE}^{\rm true}(z_{\rm s},\,M_{1500},\, EW)$, again using \textsf{numpy.random.binomial}, with a given completeness correction value as the probability of detection. We thus obtain the probability distribution of $X_{\rm LAE}$ for a given completeness correction value, $N_{\rm 1500}(z_{\rm p},\, M_{1500})$, and $X_{\rm LAE}^{\rm true}$. We compare the $1\sigma$ upper and lower uncertainties from the probability distribution function of the mock $X_{\rm LAE}$ with those derived from BPCI (\textsf{binom\_conf\_interval}) in two ways, using as input $N_{\rm 1500}(z_{\rm p},\, M_{1500})$ together with either the completeness-corrected number of LAEs ($N_{\rm LAE}^{\rm true}(z_{\rm s},\,M_{1500},\, EW)$) or the detected number ($N_{\rm LAE}^{\rm det}(z_{\rm s},\,M_{1500},\, EW)$).
We confirm that it is better to input $N_{\rm LAE}^{\rm det}(z_{\rm s},\,M_{1500},\, EW)$ and to multiply the obtained uncertainties by the correction value (see also Sect. \ref{subsec:uncertainties}). Second, we check the accuracy of the method above in the plane of $N_{\rm 1500}(z_{\rm p},\, M_{1500})$ and $X_{\rm LAE}^{\rm true}$ for a given completeness correction value. We calculate the $1\sigma$ upper and lower limits of $X_{\rm LAE}$ with this method. Again we generate $100000$ mock values of $N_{\rm LAE}^{\rm det}(z_{\rm s},\,M_{1500},\, EW)$ and then $X_{\rm LAE}$ numerically. We calculate the fraction of the mock $X_{\rm LAE}$ values that fall within the range of the $1\sigma$ upper and lower limits among all the experiments. If the method is accurate, this fraction should be $\approx68$\%. We check this fraction for $N_{\rm 1500}(z_{\rm p},\, M_{1500})$ from $0$ to $250$ with a step of $1$ and for $X_{\rm LAE}^{\rm true}$ from $0.1$ to $0.5$ with a step of $0.02$. In Figure \ref{fig:bpci_test}, the example with low completeness correction values, $0.1$ and $0.5$, is shown. In panel (a), for a completeness correction of $0.1$, most of the plane is colored yellow-green, corresponding to a fraction close to $0.68$, while cases with poor statistics (i.e., low $N_{\rm 1500}(z_{\rm p},\, M_{1500})$ and $X_{\rm LAE}^{\rm true}$) are colored blue or red, corresponding to overestimation and underestimation of the errors, respectively. In panel (b), for a completeness correction of $0.5$, the method is found to produce accurate errors except for the cases with very poor statistics (the upper left region of the panel). The method estimates the errors more accurately for higher completeness. Even with a low completeness and the smallest numbers, the uncertainties are overestimated rather than underestimated. We therefore adopt this method as a conservative choice.
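The two-stage binomial mock described above can be sketched as follows (a minimal stand-alone version using only the Python standard library; the actual analysis uses \textsf{numpy.random.binomial} and astropy's \textsf{binom\_conf\_interval}, and the variable names here are ours):

```python
import random

def binomial(n, p, rng):
    """Draw one Binomial(n, p) sample via Bernoulli trials (stdlib only)."""
    return sum(1 for _ in range(n) if rng.random() < p)

def mock_xlae(n_uv, x_true, completeness, n_trials=5000, seed=1):
    """Two-stage mock: intrinsic LAEs among n_uv UV-selected galaxies,
    then detection with the given completeness; returns the sorted
    distribution of completeness-corrected LAE fractions."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_trials):
        n_true = binomial(n_uv, x_true, rng)         # N_LAE^true ~ B(N_1500, X_true)
        n_det = binomial(n_true, completeness, rng)  # N_LAE^det ~ B(N_LAE^true, c)
        samples.append(n_det / completeness / n_uv)  # corrected fraction
    return sorted(samples)

samples = mock_xlae(n_uv=100, x_true=0.3, completeness=0.5)
lower = samples[int(0.16 * len(samples))]
upper = samples[int(0.84 * len(samples))]
```

The $16$th--$84$th percentile range of the corrected fractions brackets $X_{\rm LAE}^{\rm true}$ and widens as the completeness decreases, which is the behavior the scaled-BPCI prescription is designed to reproduce.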
\begin{figure} \sidecaption \includegraphics[width=9cm]{figE1.pdf}\\ \includegraphics[width=9cm]{figE2.pdf}\\ \caption{ Test of the accuracy of our uncertainty estimation of $X_{\rm LAE}$. We generate mock $X_{\rm LAE}$ distributions numerically for each $N_{\rm 1500}(z_{\rm p},\, M_{1500})$ and $X_{\rm LAE}^{\rm true}$ with a given completeness of $0.1$ in panel (a) and $0.5$ in panel (b). We then derive the fraction of experiments within our upper and lower limits calculated with BPCI. The colors encode the fraction of experiments for each $N_{\rm 1500}(z_{\rm p},\, M_{1500})$ and $X_{\rm LAE}^{\rm true}$ on the x- and y-axes, respectively. } \label{fig:bpci_test} \end{figure} \section{The best-fit linear relations of $X_{\rm LAE}$ as a function of $z$ and $M_{1500}$}\label{ap:xlae_fit} We show the best-fit linear relations of $X_{\rm LAE}$ as a function of $z$ and $M_{1500}$ in Figure \ref{fig:xlae_fit}. The equations of the relations are given in Sects. \ref{subsec:zevo} and \ref{subsec:uvdep}. \begin{figure} \sidecaption \includegraphics[width=8cm]{figF.pdf}\\ \caption{ The slopes of the best-fit linear relations of $X_{\rm LAE}$ as a function of $z$ (top and middle panels) and $M_{1500}$ (bottom panel). Symbols are the same as those in Figures \ref{fig:xevo}, \ref{fig:zevocompare}, \ref{fig:uvdep}, and \ref{fig:uvdepcompare}. The best-fit linear relations and the $\pm1\sigma$ slopes are shown by the solid and dashed lines, respectively, in lighter shades of the symbol colors. } \label{fig:xlae_fit} \end{figure} \end{appendix} \bibliographystyle{aa}
\section{Introduction} Magnetic fields pervade the universe. From the scales of planets up to galaxy clusters and beyond, they are not only ubiquitous but have also proven dramatically important in a wide variety of astrophysical and geophysical processes. Despite this, our understanding of the mechanisms that lead to their creation and sustenance is hazy, and improving this remains an outstanding theoretical challenge. Much of the theory of field generation focuses on the \emph{turbulent dynamo}, in which magnetic fields are stretched and twisted by turbulent fluctuations in such a way as to increase their strength, resulting in exponential instability. Through this process, very small seed fields---arising, for example, from the Biermann battery or kinetic instabilities---might be amplified enormously by plasma motions to the levels seen throughout the universe today. Interestingly, magnetic fields are generically observed to be correlated over larger scales than the underlying fluid motions, and such \emph{large-scale dynamos} are of vital importance for explaining astrophysical fields. The classic mechanism to allow such behavior is the \emph{kinematic} $\alpha$ \emph{effect}\footnote{The term kinematic denotes the situation where velocity fluctuations are unaffected by the magnetic field.} \citep{Moffatt:1978tc,Krause:1980vr}. Here, the small-scale fluid turbulence interacts with a large-scale magnetic field in such a way that an electromotive force (EMF, represented by $\bm{\mathcal{E}}$) is created in proportion to the magnetic field itself ($\bm{\mathcal{E}}\sim\alpha \bm{B}$), potentially causing an instability to develop. To allow such behavior, the turbulence must break statistical symmetry in some way, either through a net helicity or through stratification.
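In the standard mean-field language, this can be summarized schematically as follows (a textbook-style sketch; the precise forms of the transport coefficients depend on the closure and turbulence model assumed): the large-scale field $\bm{\bar{B}}$ obeys
\begin{equation}
\partial_{t} \bm{\bar{B}} = \nabla\times\bigl(\bm{\bar{U}}\times\bm{\bar{B}} + \bm{\mathcal{E}}\bigr) + \eta\nabla^{2}\bm{\bar{B}},
\qquad
\bm{\mathcal{E}} \approx \alpha\,\bm{\bar{B}} - \eta_{t}\,\nabla\times\bm{\bar{B}},
\end{equation}
where $\bm{\bar{U}}$ is the mean flow, $\eta$ the microscopic resistivity, and $\eta_{t}$ the turbulent resistivity; a nonzero $\alpha$ can destabilize the longest-wavelength modes of $\bm{\bar{B}}$.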
However, various problems with large-scale $\alpha$ dynamos become apparent when one considers how field growth rates change with the scale of the field---specifically, the smallest scales always grow the most rapidly \citep{Kulsrud:1992ej,Boldyrev:2005ix}. In addition, as a consequence of the conservation of magnetic helicity, these small-scale magnetic fields act to decrease the large-scale growth rate in a way that scales very unfavorably with Reynolds number---the problem of ``catastrophic quenching'' \citep{Gruzinov:1994ea,Bhattacharjee:1995ip,Cattaneo:2009cx}. While a variety of solutions to such problems have been explored, primarily focused on the transport of magnetic helicity \citep{Blackman:2002fe,Vishniac:2001wo,Subramanian:2004gm,Ebrahimi:2014jt,Tobias:2014ek}, the scaling of $\alpha$ dynamos to astrophysically relevant regimes is still far from understood. Such issues are not necessarily confined to the $\alpha$ effect either. Above even moderate Reynolds numbers, the fast-growing small-scale dynamo (field generation on scales at and below that of the fluid turbulence; \citealt{Schekochihin:2007fy}) implies that velocity fluctuations should \emph{always} be accompanied by magnetic fluctuations of a similar magnitude \citep{Schekochihin:2004gj}. This challenges the relevance of the classical kinematic dynamo picture \citep{Cattaneo:2009cx}, which focuses purely on the properties of the small-scale velocity fields. In this paper---as well as in \citet{HighRm} (hereafter \defcitealias{HighRm}{Paper I}Paper I), \citet{LowRm} (hereafter \defcitealias{LowRm}{Paper II}Paper II), and \citet{Analytic} (hereafter \defcitealias{Analytic}{Paper III}Paper III)---we suggest and explore a new dynamo mechanism in which the small-scale \emph{magnetic fluctuations}, in combination with a background shear flow, are the primary driver of the large-scale field growth.
Termed the ``magnetic shear-current effect,'' by analogy to earlier kinematic suggestions \citep{Urpin:1999wl,Rogachevskii:2003gg}, the effect is nonhelical (the dynamo $\alpha$ coefficient is zero), and is driven by an off-diagonal component of the turbulent resistivity tensor. There are two principal reasons for our interest in this effect. The first is that the mechanism is not an $\alpha$ effect, which implies that the dynamo can operate in turbulence with a high degree of symmetry. This makes it a possible mechanism to explain the dynamo seen in the central regions of accretion disk simulations \citep{Hawley:1996gh,Brandenburg:1995dc}, and we have seen good evidence that this is indeed the case (see Sec.~\ref{sec:discussion}, as well as \citealt{Squire:2015fk}). The second reason for our interest in the magnetic shear-current effect stems from the intriguing possibility of a large-scale dynamo being \emph{driven} by the saturated state of the small-scale dynamo. In some sense, this is the inverse of the quenching described in the previous paragraph---the small-scale dynamo, far from quenching large-scale growth, is its primary driver. Such a large-scale dynamo paradigm is far removed from classical kinematic theory, relying on saturation of the small-scale turbulent fields. Accordingly, the magnetic shear-current effect is an inherently nonlinear dynamo mechanism \citep{Tobias:2011da}, although it can be driven by a turbulent velocity field rather than resulting from the nonlinear development of a laminar instability. Proving the existence and importance of a dynamo instability is tricky: numerical simulations of turbulence are necessarily noisy, one is limited in available Reynolds numbers (and thus the ability to prove a dynamo will remain active at high values), and when large-scale field growth is observed it can be difficult to show convincingly that it is not some other (possibly unknown) mechanism that is responsible. 
These problems are exacerbated in the magnetically driven case studied in this work. In particular, due to the finite size of any numerically realizable mean-field average, the large-scale field will quickly come into equipartition with the turbulent bath of fluctuations, robbing the researcher of the ability to study the dynamo during a long period of exponential growth. In other words, the dynamo will very quickly transition into its saturated state (where the large-scale fields have a strong influence on the small-scale turbulence), complicating measurement of the properties of the linear growth phase, or even observation of its qualitative behavior. For these reasons, we have attempted to tackle the problem from a variety of different angles, including analytically with the second-order correlation approximation \citepalias{Analytic}, through quasi-linear theory and statistical simulation \citepalias{LowRm}, and using direct numerical simulations \citepalias{HighRm,LowRm}. We also employ the novel technique of using an \emph{ensemble} of simulations to study the statistics of the mean field without taking time averages. Our hope is that with this variety of methods, which all lead to the same general conclusions, we present convincing evidence for the existence of the magnetic shear-current effect and its potential importance in astrophysical dynamo theory. The present paper serves two purposes. The first is to give a more heuristic and physical description of the magnetic shear-current effect, which is done throughout Sec.~\ref{sec:The physical mechanism}. Following a basic description of the mechanism in the language of mean-field dynamo theory, we describe (with diagrams and simple explanations) how magnetic fluctuations, interacting with a large-scale magnetic field and shear flow, can generate the correlated velocity perturbations that are required for a mean-field dynamo instability. 
Interestingly, we find that the pressure response of the velocity fluctuations is fundamental to the operation of the dynamo, and simple arguments based on the directions of induced perturbations explain qualitatively why one might expect the magnetic effect to be stronger than the kinematic effect. The second purpose of this paper, discussed in Sec.~\ref{sec:numerical evidence}, is to expand upon, and provide further details for, the analysis and simulations presented in \citetalias{HighRm}. In particular, these simulations demonstrate for the first time (so far as we are aware) that the saturated state of the small-scale dynamo can \emph{drive} a large-scale dynamo. Our method for showing this involves measuring the transport coefficients before and after small-scale dynamo saturation. This illustrates that strong magnetic fluctuations can decrease, and change the sign of, a particularly important component of the tensorial turbulent resistivity (termed $\eta_{yx}$ throughout the text), in a way that is consistent with observed mean-field evolution. Since the methods used to show this are somewhat nonstandard, considerable effort is put into explaining these and ensuring that the coefficients are determined accurately. This is done both through direct comparison with standard methods in lower-Reynolds-number kinematic dynamos (appendix~\ref{app:verification}), and by using the measured coefficients to solve for the expected large-scale field evolution. Finally, in Sec.~\ref{sec:discussion}, we conclude and present a more in-depth discussion of why the magnetic shear-current effect is interesting as a mechanism for large-scale dynamo. This includes some analysis of the evidence for the effect's importance in driving the dynamo in the central regions of accretion disks, which is primarily based on the Prandtl number dependence of its nonlinear saturation \citep{Squire:2015fk}. 
\section{The physical mechanism for the magnetic shear-current effect}\label{sec:The physical mechanism} In this section we describe how homogeneous nonhelical magnetic fluctuations, influenced by a large-scale shear flow and magnetic field gradient, can generate an EMF that acts to reinforce the large-scale magnetic field. We shall start by describing the form of the EMF that allows for such behavior, as well as constraints due to the symmetries of the system, then consider a simplified cartoon picture for how the interaction of magnetic fluctuations with velocity shear and a large-scale field gradient might produce this EMF. All studies in this work are carried out in the context of the incompressible MHD equations with a background shear flow $\bm{U}_{0}=-Sx\hat{\bm{y}}$, \begin{subequations} \begin{align} \frac{\partial\bm{U}_{T}}{\partial t} &-Sx\frac{\partial\bm{U}_{T}}{\partial y}+\left(\bm{U}_{T}\cdot\nabla\right)\bm{U}_{T}+2\Omega\bm{\hat{z}}\times\bm{U}_{T}+\nabla p\nonumber \\ &\qquad \qquad \quad \!= SU_{Tx}\bm{\hat{y}}+\bm{B}_{T}\cdot\nabla\bm{B}_{T}+\bar{\nu}\nabla^{2}\bm{U}_{T}+\bm{\sigma}_{\bm{u}}, \\ \frac{\partial\bm{B}_{T}}{\partial t} & -Sx\frac{\partial\bm{B}_{T}}{\partial y}=-SB_{Tx}\bm{\hat{y}}+\nabla\times\left(\bm{U}_{T}\times\bm{B}_{T}\right) +\bar{\eta}\nabla^{2}\bm{B}_{T},\label{eq:induction} \\ & \nabla\cdot\bm{U}_{T}=0,\;\;\;\nabla\cdot\bm{B}_{T}=0. \end{align}\label{eq:MHD}\end{subequations}Here $\Omega$ is a mean rotation of the frame, and $\bar{\nu}$ and $\bar{\eta}$ are the normalized viscosity and resistivity, respectively. Since all quantities are normalized, it is convenient to define $\mbox{Re}=1/\bar{\nu}$ and $\mbox{Rm}=1/\bar{\eta}$ for the Reynolds and magnetic Reynolds numbers; their ratio is the magnetic Prandtl number $\mathrm{Pm}=\mathrm{Rm}/\mathrm{Re}$. $\bm{\sigma}_{\bm{u}}$ denotes a nonhelical driving noise source, white in time, which can be used to generate a homogeneous bath of small-scale velocity fluctuations. 
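For concreteness, a noise source of this kind can be sketched numerically by restricting a random Fourier-space field to a forcing shell and projecting out its compressive part. The sketch below is illustrative only (the grid size, forcing wavenumber, and normalization are arbitrary choices, not those used in our simulations); it produces one solenoidal, statistically nonhelical realization of $\bm{\sigma}_{\bm{u}}$.

```python
import numpy as np

def solenoidal_noise(n=32, kf=8, seed=0):
    """One realization of a divergence-free, statistically nonhelical
    random forcing field on an n^3 periodic grid, supported on the
    Fourier shell |k| ~ kf (illustrative stand-in for sigma_u)."""
    rng = np.random.default_rng(seed)
    k = np.fft.fftfreq(n, d=1.0 / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    # random complex amplitudes, restricted to a shell around |k| = kf
    shell = (np.sqrt(k2) > kf - 1) & (np.sqrt(k2) < kf + 1)
    f = (rng.normal(size=(3, n, n, n))
         + 1j * rng.normal(size=(3, n, n, n))) * shell
    # project out the compressive part: f -> f - k (k.f)/|k|^2
    k2s = np.where(k2 == 0, 1.0, k2)
    kdotf = kx * f[0] + ky * f[1] + kz * f[2]
    f[0] -= kx * kdotf / k2s
    f[1] -= ky * kdotf / k2s
    f[2] -= kz * kdotf / k2s
    return np.real(np.fft.ifftn(f, axes=(1, 2, 3)))
```

A white-in-time forcing is obtained by drawing an independent realization (new seed) at each time step; helicity is zero only on average, since no helical projection is applied.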
$\bm{U}_{T}$ and $\bm{B}_{T}$ in Eq.~\eqref{eq:MHD} are simply the standard turbulent velocity and magnetic fields ($\bm{U}_{T}$ is the velocity not including the background shear). Throughout this work we consider initially homogeneous turbulence with zero average helicity. \subsection{Nonhelical dynamo mechanisms}\label{sub:Nonhelical dynamo mechanisms} To examine field generation mechanisms in this geometry, it is helpful to start by defining mean and fluctuating fields through the relation $\bm{B}_{T}=\overline{\bm{B}}_{T}+\bm{b}=\bm{B}+\bm{b}$. Here $\bar{\cdot}$ is the \emph{mean-field average}, which is taken to be a spatial average over $x$ and $y$. An average of the induction equation (Eq.~\eqref{eq:induction}) leads to the well-known mean-field dynamo equations for the mean magnetic field $\bm{B}$ \citep{Moffatt:1978tc,Krause:1980vr}, \begin{equation} \partial_{t}\bm{B}=\nabla\times\left(\bm{U}_{0}\times\bm{B}\right)+\nabla\times\bm{\mathcal{E}}+\frac{1}{\mathrm{Rm}}\nabla^{2}\bm{B}.\label{eq:genMF} \end{equation} Here $\bm{\mathcal{E}}=\overline{\bm{u}\times\bm{b}}$ is the EMF, which provides the connection between the small-scale turbulence and large-scale fields. If we assume scale separation between the mean and fluctuating fields, a Taylor expansion of $\bm{\mathcal{E}}$ leads to the form \begin{equation} \mathcal{E}_{i}=\alpha_{ij}B_{j}-\eta_{ij}J_{j}+\cdots,\label{eq:E expansion}\end{equation} where $\alpha_{ij}$ and $\eta_{ij}$ are the transport coefficients, and the lack of $(x,y)$ dependence of the mean fields has been used to reduce the number of $\eta$ coefficients from $27$ to $4$ (note that $B_{z}=0$). In the case where the mean fields can be considered a small perturbation to some background turbulent state specified by statistics of $\bm{u}$ and $\bm{b}$ (which are influenced by shear and rotation), $\alpha$ and $\eta$ must be independent of $\bm{B}$. 
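To make the averaging operation concrete, the following minimal sketch (with arbitrary random stand-in fields and an illustrative grid size, not simulation output) computes the planar mean-field average and the EMF $\bm{\mathcal{E}}=\overline{\bm{u}\times\bm{b}}$ as a function of $z$.

```python
import numpy as np

# Fluctuating fields indexed as (component, x, y, z); random stand-ins here.
rng = np.random.default_rng(1)
n = 16
u = rng.normal(size=(3, n, n, n))   # fluctuating velocity
b = rng.normal(size=(3, n, n, n))   # fluctuating magnetic field

uxb = np.cross(u, b, axis=0)        # pointwise u x b
# mean-field average: bar(.) averages over x and y, leaving E_i(z)
emf = uxb.mean(axis=(1, 2))         # shape (3, n)
```

In practice $\bm{u}$ and $\bm{b}$ come from the simulation snapshot, and the resulting $\mathcal{E}_{i}(z)$ profiles are what enter the fits for $\alpha_{ij}$ and $\eta_{ij}$.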
Due to reflectional symmetry, with a nonhelical forcing function $\bm{\sigma}_{\bm{u}}$, the $\alpha_{ij}$ coefficients are constrained to vanish on average in this geometry. Instead, we shall study the possibility of a mean-field dynamo that arises purely from the off-diagonal components of $\eta_{ij}$, which can be nonzero due to the anisotropy of the turbulence. Combining Eqs.~\eqref{eq:genMF} and \eqref{eq:E expansion}, one obtains \begin{subequations} \begin{gather} \partial_{t}B_{x}=-\alpha_{yx}\partial_{z}B_{x}-\alpha_{yy}\partial_{z}B_{y}-\eta_{yx}\partial_{z}^{2}B_{y}+(\eta_{yy}+\bar{\eta})\partial_{z}^{2}B_{x}, \\ \partial_{t}B_{y}=-SB_{x}+\alpha_{xx}\partial_{z}B_{x}+\alpha_{xy}\partial_{z}B_{y}-\eta_{xy}\partial_{z}^{2}B_{x}+(\eta_{xx}+\bar{\eta})\partial_{z}^{2}B_{y} \end{gather}\label{c4:eq:SC sa eqs}\end{subequations}where the time average of the $\alpha_{ij}$ components must vanish. From these equations (with $\alpha_{ij}=0$), it is straightforward to show that an eigenmode with the spatial structure $B_{i}=B_{i0}e^{ikz}$ has the growth rate \begin{equation} \gamma_{\eta}=k\sqrt{\eta_{yx}\left(-S+k^{2}\eta_{xy}\right)}-k^{2}\eta_{t},\label{c4:eq:gamma SC} \end{equation} where we have set $\eta_{yy}=\eta_{xx}=\eta_{t}$ for simplicity. Neglecting $\eta_{xy}$ by assuming $\left|k^{2}\eta_{xy}\right|\ll S$ (for all $k$ for which scale separation holds), one finds that positive dynamo growth is possible if $-S\eta_{yx}>0$ and $k\sqrt{-\eta_{yx}S}>k^{2}\eta_{t}$. The physical mechanism for the instability involves the $B_{x}$ generated by $B_{y}$ (through $-\eta_{yx}\partial_{z}^{2}B_{y}$) feeding back on $B_{y}$ through stretching by the mean shear flow (the $-SB_{x}$ term in Eq.~\eqref{c4:eq:SC sa eqs}). Thus the possibility of such a nonhelical dynamo rests crucially on the phase between $B_{x}$ and $B_{y}$ and therefore on the transport coefficient $\eta_{yx}$, which must be less than zero. 
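As a check on the algebra, the closed-form growth rate in Eq.~\eqref{c4:eq:gamma SC} can be compared against the eigenvalues of the $2\times2$ mean-field system of Eq.~\eqref{c4:eq:SC sa eqs} (with $\alpha_{ij}=0$ and $\bar{\eta}$ absorbed into $\eta_{t}$). The parameter values below are purely illustrative.

```python
import numpy as np

def growth_rate(k, S, eta_yx, eta_xy, eta_t):
    """Largest real eigenvalue of the mean-field system for a mode
    B_i = B_i0 exp(i k z), with alpha_ij = 0 and eta_yy = eta_xx = eta_t:
      dBx/dt =  k^2 eta_yx By - k^2 eta_t Bx
      dBy/dt = (-S + k^2 eta_xy) Bx - k^2 eta_t By"""
    M = np.array([[-k**2 * eta_t,       k**2 * eta_yx],
                  [-S + k**2 * eta_xy, -k**2 * eta_t]])
    return np.max(np.linalg.eigvals(M).real)

# illustrative values: negative eta_yx, positive shear -> unstable
k, S, eta_yx, eta_xy, eta_t = 1.0, 1.0, -0.05, 0.0, 0.1
gamma_numeric = growth_rate(k, S, eta_yx, eta_xy, eta_t)
gamma_formula = k * np.sqrt(eta_yx * (-S + k**2 * eta_xy)) - k**2 * eta_t
```

With these values both expressions give $\gamma_{\eta}>0$, and flipping the sign of $\eta_{yx}$ (or of $-S\eta_{yx}$) removes the instability.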
Whether $\eta_{yx}$ is positive or negative depends on the properties of the turbulence in question, in particular on the sign of $(\overline{\bm{u}\times \bm{b}})_{y}$ that arises in the presence of a ${B}_{y}$ gradient. The standard kinematic approach in dynamo theory has been to consider strong underlying hydrodynamic fluctuations (denoted by $\bm{u}_{0}$), which generate $\bm{b}$ fluctuations through interaction with $\nabla \bm{B}$ (and $\bm{B}$). Although various early analytic works argued for a kinematic shear-current dynamo of this type \citep{Urpin:1999wl,Urpin:2002ct,Rogachevskii:2003gg}, subsequently several authors found that kinematically $\eta_{yx}>0$ (at least at low $\mathrm{Rm}$) and thus concluded that a coherent kinematic dynamo cannot explain the field generation observed in numerical experiments (\citealt{Radler:2006hy,Brandenburg:2008bc,Singh:2015kt}; \citetalias{LowRm}). Here we argue instead that strong homogeneous magnetic fluctuations (denoted by $\bm{b}_{0}$) can generate $\bm{u}$ fluctuations with the required correlations to cause a negative $\eta_{yx}$. Such $\bm{b}_{0}$ fluctuations should be ubiquitous in MHD turbulence at high Reynolds numbers, since the small-scale dynamo will be unstable (with a large growth rate set by the smallest scales in the turbulence), creating a turbulent state with $\bm{b}_{0}\sim\bm{u}_{0}$ \citep{Schekochihin:2004gj}. Before continuing, it is worth mentioning another possibility for large-scale field generation in this geometry---the so-called stochastic-$\alpha$ effect. This arises through \emph{fluctuations} in the $\alpha_{ij}$ coefficients, even though their mean must vanish \citep{Vishniac:1997jo,Heinemann:2011gt,Mitra:2012iv}. 
This dynamo is not mean-field in the usual sense since it relies on the finite size of the system to cause the $\alpha$ fluctuations that lead to mean-field growth; nonetheless, given that the universe is sampling a single realization of turbulence, \emph{not} the ensemble average, such effects could be entirely physical. (That said, one consequence of this incoherent dynamo mechanism is that the growth rate can be arbitrarily increased or decreased by changing the volume of the mean-field average, which hints that coherent effects should dominate when a very large range of scales are present.) While we shall not examine the stochastic-$\alpha$ effect in detail in this work (see \citetalias{LowRm}), it is important to be mindful of the possibility, since it complicates the analysis of simulation results where large-scale field growth is observed. One distinguishing feature from the shear-current effect is that $\bm{B}\left(z,t\right)$ cannot have a constant phase in time as it grows, since the average of $\bm{B}$ over an ensemble of realizations vanishes, implying $\bm{B}$ must be uncorrelated with itself after $t\gtrsim(k^{2}\eta_{t})^{-1}$\footnote{This condition may be altered if one considers the effects of magnetic helicity conservation, which may cause a local magnetic $\alpha$ effect as the large-scale field grows \citep{Brandenburg:2008bc}. However, this also causes coupling between different mean-field modes, and the detailed consequences of such an effect remain unclear.}. More information, including analyses of the relative importance of the coherent and incoherent shear dynamo mechanisms in low-$\mathrm{Rm}$ systems, can be found in \citetalias{LowRm}. \subsection{The mechanism for the magnetic shear-current effect}\label{sec:MSC} In this section we discuss the mean field generation mechanism of the magnetic shear-current effect. 
The stability analysis given in Sec.~\ref{sub:Nonhelical dynamo mechanisms} makes it clear that we require $\eta_{yx}<0$ for a coherent dynamo instability. In the present context, with both the mean magnetic field and flow in the $y$ direction and their prescribed spatial dependencies (see Fig.~\ref{fig:SCResis Diagram}, left panel), this is equivalent to requiring that the $y$ component of the turbulent EMF be negative. The challenge, then, is to explain how this can come about in the present geometry. The cartoon picture that we present has its origins in the analytic ``second-order correlation approximation'' (SOCA) calculations presented in \citetalias{Analytic}. In particular, by selectively removing terms from the calculation and examining the effects on the final $\eta_{yx}$, one can unambiguously determine from where the effect arises (at least within the quasi-linear approximation). Most importantly, this exercise shows that the magnetic shear-current effect arises exclusively from the \emph{pressure response} of the velocity fluctuations. The mechanism is fundamentally related to the lack of turbulent resistivity quenching by the magnetic field (often referred to as a lack of ``$\beta$ quenching''; see \citealt{Gruzinov:1994ea} and \citealt{Bhattacharjee:1995ip}), which results from a cancellation between a turbulent magnetic resistivity (of the same form as kinematic turbulent resistivity), and an equal and opposite contribution from the pressure response \citep{Avinash:1991fu}. We divide our discussion into answers to three questions: (1) How do we generate the fluctuations needed to support the required EMF? (2) What happens in the absence of flow shear? and (3) What happens in the presence of flow shear? \subsubsection{How do we generate the fluctuations needed to support the required EMF?}\label{sec:generating fluctuations} The fluctuations needed to support our physical picture are magnetically driven. 
In contrast to kinematic dynamos, the Maxwell stress $\bm{B}_{T}\cdot \nabla \bm{B}_{T}$ is fundamental for a magnetically driven dynamo, since this is required to generate $\bm{u}$ from $\bm{b}$ (in the same way the Lorentz force $\nabla \times (\bm{U}_{T}\times \bm{B}_{T})$ generates correlated $\bm{b}$ fluctuations in kinematic dynamos). Such dynamos can still be analyzed linearly if one assumes that the interaction of fluctuations with mean fields is more important for the EMF than the interaction with themselves; that is, \begin{equation} \bm{b}\cdot \nabla \bm{B}+\bm{B}\cdot \nabla \bm{b} \quad\text{is more important than} \quad\bm{b}\cdot \nabla \bm{b} - \overline{\bm{b}\cdot \nabla \bm{b}}.\label{eq:QL approx} \end{equation} This approximation---which, along with a similar approximation for the Lorentz force, is the basis for SOCA---is valid only at low Reynolds numbers and nonzero mean fields, but allows one to consider how small-scale eddies and field loops would interact with large-scale field and flow gradients in a relatively straightforward way. Note that ``is more important'' in Eq.~\eqref{eq:QL approx} refers to the terms' relative importance for the generation of an EMF that is correlated with $\bm{B}$ (this correlation is necessary for a large-scale dynamo). Since only the part of $\bm{b}$ that is influenced by $\bm{B}$ can contribute to this correlation, it seems reasonable to surmise that results should be qualitatively applicable outside their true validity range. In other words, since the interaction of $\bm{b}$ with $\bm{B}$ is the cause of the magnetic shear-current effect in the first place, we shall focus on this (rather than the much more complicated nonlinear terms) for the development of our simple cartoon model. 
\begin{figure} \begin{center} \includegraphics[width=0.5\columnwidth]{fig1} \caption{Depiction of the interactions between fluctuations ($\bm{b}_{0}$, $\bm{u}^{(i)}$, and $\bm{b}^{(i)}$) and a mean magnetic field $\bm{B}$ or a shear flow $\bm{U}$, that can lead to a nonzero shear-current effect through $\overline{\bm{u}\times \bm{b}}$, starting from strong homogeneous magnetic fluctuations $\bm{b}_{0}$. Here the straight black arrows, with either $\bm{B}$ or $\bm{U}$, depict an interaction that creates one fluctuating field from another, which will be correlated with the original fluctuation and thus can contribute to the EMF (for instance $\bm{u}^{(0)}\sim \tau_{c}\bm{B}\cdot \nabla \bm{b}_{0}+ \tau_{c} \bm{b}_{0}\cdot \nabla \bm{B}$). The double-headed (blue) arrows indicate the lowest order combinations of $\bm{b}_{0}$, $\bm{u}^{(i)}$, and $\bm{b}^{(i)}$ that can lead to nonzero $\eta_{yx}$, with the interaction studied in Sec.~\ref{sec:MSC} shown by the solid line. \label{fig:Interactions} } \end{center} \end{figure} The shear-current effect requires both a field gradient and a flow gradient (shear flow). Thus, any perturbation $\bm{b}_{0}$ (arising as part of the bath of statistically homogeneous magnetic fluctuations) must interact with both $\bm{U}$ and $\bm{B}$ to generate a $\bm{u}$ fluctuation. The possible ways in which this can happen are illustrated in Fig.~\ref{fig:Interactions}, where the notation is the same as that used in \citetalias{Analytic}, with $\bm{f}^{(0)}$ indicating a field that arises directly from the interaction of $\bm{b}_{0}$ (or $\bm{u}_{0}$) with the mean fields, and $\bm{f}^{(1)}$ indicating one that arises through $\bm{f}^{(0)}$. In addition, we use $(\cdot )_{b}$ to denote the part of a transport coefficient that is due to homogeneous magnetic fluctuations; for example, $(\eta_{yx})_{b}$. 
From the momentum equation, a $\bm{b}$ perturbation can generate a $\bm{u}$ perturbation through $\bm{b}\cdot \nabla \bm{B}+\bm{B}\cdot \nabla \bm{b}$, while a $\bm{u}$ perturbation can generate a $\bm{u}$ perturbation through $-\bm{u}\cdot \nabla \bm{U}-\bm{U}\cdot \nabla \bm{u}$. Similarly, from the induction equation a $\bm{b}$ perturbation is generated through either a $\bm{u}$ perturbation ($\bm{B}\cdot \nabla \bm{u}$), or through a $\bm{b}$ perturbation ($-\bm{U}\cdot \nabla \bm{b}$). We see from Fig.~\ref{fig:Interactions} that there are three possibilities for contributing to $(\eta_{yx})_{b}$: $\bm{u}^{(0)}\times \bm{b}^{(0)}$, $(\bm{u}^{(1)}\times \bm{b}_{0})_{1}$, and $(\bm{u}^{(1)}\times \bm{b}_{0})_{2}$. Here $(\bm{u}^{(1)}\times \bm{b}_{0})_{1}$ refers to the pathway for generating $\bm{u}^{(1)}$ through $\bm{u}^{(0)}$ (shown by the solid arrow in Fig.~\ref{fig:Interactions}), while $(\bm{u}^{(1)}\times \bm{b}_{0})_{2}$ refers to the pathway through $\bm{b}^{(0)}$ (shown by the top dashed arrow). Out of these, we have determined from the calculations in \citetalias{Analytic} that $(\bm{u}^{(1)}\times \bm{b}_{0})_{1}$ is both the simplest and contributes the most to $(\eta_{yx})_{b}$. In particular, the mechanism does not directly rely on dissipation to generate the required correlations, as will be seen below\footnote{To be more specific, in the SOCA calculations, this contribution does not involve $k$ derivatives of the functions $E_{\eta}=1/(i\omega -\bar{\eta}k^{2})$ or $N_{\nu}=1/(i\omega -\bar{\nu}k^{2})$ (these derivatives vanish at $\bar{\eta}=0$ or $\bar{\nu}=0$, respectively), while the other two contributions do.}. 
We have found empirically that the $\bm{u}^{(0)}\times \bm{b}^{(0)}$ contribution is moderate in size (generally a factor of $\sim 2$ smaller than $\bm{u}^{(1)}\times \bm{b}_{0}$) and also always negative, while the $(\bm{u}^{(1)}\times \bm{b}_{0})_{2}$ contribution (dotted line in Fig.~\ref{fig:Interactions}) can change sign but is much smaller in magnitude. \subsubsection{What happens in the absence of flow shear?}\label{sub:diagonal resistivity} As mentioned above, in the absence of flow shear, there is no quenching of the turbulent resistivity. This effect---which could also be stated as $(\eta_{xx})_{b}=(\eta_{yy})_{b}=0$ in the notation of Eq.~\eqref{c4:eq:SC sa eqs}---arises through the pressure response of the fluid. We feel it is helpful to first explain this mechanism in more detail, since the form of the pressure response has not been discussed in detail in previous literature (so far as we are aware)\footnote{ \cite{Yokoi:2013di} gives a slightly different model for the negative contribution to the magnetic resistivity, based on small-scale current perturbations creating a magnetic pressure. This model is similar to that presented here but does not directly include the fluid pressure (it includes the magnetic pressure), which we have seen to be important by studying the relevant terms in SOCA calculations.} and the magnetic shear-current effect is essentially an extension of this. As can be seen using SOCA (or the $\tau$~approximation; see \citealt{Radler:2003gg}), the effect occurs because the pressure response has an equal and opposite effect to the primary velocity perturbation \citep{Avinash:1991fu}. This behavior is illustrated graphically in Fig.~\ref{fig:DiagResis Diagram}, which shows the response of the fluid to a magnetic perturbation in the linearly varying magnetic field $\bm{B}=S_{B}z \hat{\bm{y}}$. 
Due to the mean-field geometry, the velocity perturbation $\delta \bm{u}^{(0)}_{\mathrm{basic}} \sim \tau_{c} \bm{b} \cdot \nabla \bm{B}$ (where $\tau_{c}$ is some turbulent correlation time) is simply $\tau_{c}S_{B} b_{0z} \hat{\bm{y}}$; i.e., only the $z$ component of $\bm{b}_{0}$ contributes. Note that the other contribution $\delta \bm{u}^{(0)}_{\mathrm{basic}} \sim \tau_{c} \bm{B}\cdot \nabla \bm{b}$ will only contribute directly to the EMF if there is a mean correlation between $\bm{b}$ and $\nabla \bm{b}$, which occurs if there is net current helicity (this term is the origin of the magnetic $\alpha$ effect)\footnote{The general situation is a little more complex than this. Fourier transformed, a term of the form $\bm{B }\cdot \nabla \bm{b}$ becomes $i \bm{B}\cdot \bm{k}\, \bm{b}-(\nabla \bm{B})_{jl}k_{j}\partial_{k_{l}}b_{i}+\cdots$ (where the Einstein summation convention is used). Without helicity the first term does not contribute when averaged over a domain, but the second term can in general be nonzero. However, contributions of this form generally seem to be smaller in magnitude, and their dependence on the $\bm{k}$ derivative of the fluctuations makes it troublesome to arrive at a simple cartoon picture. See \citetalias{Analytic} for more detail on the mathematics of the calculation.}. Obviously, the perturbation $\delta \bm{u}^{(0)}_{\mathrm{basic}} \sim S_{B} b_{0z} \hat{\bm{y}}$ is correlated with $\bm{b}_{0}$ and it is straightforward to see (middle panel of Fig.~\ref{fig:DiagResis Diagram}) that a net $\bm{\mathcal{E}}$ is created in the $\hat{\bm{x}}$ direction, opposite to the mean current and thus acting as a turbulent dissipation for the mean field. \begin{figure} \begin{center} \includegraphics[width=1.0\columnwidth]{fig2} \caption{Graphical illustration of the mean-field resistivity---or lack thereof---generated by homogeneous small-scale magnetic fluctuations, with the geometry of the mean field illustrated in the left-hand panel. 
The middle panel shows how $b_{0z}$ perturbations (from a homogeneous turbulent bath) lead to a $\bm{u}$ perturbation (labelled $\delta \bm{u}^{(0)}_{\mathrm{basic}}$) through $\bm{b}\cdot \nabla \bm{B} =S_{B}b_{0z}\hat{\bm{y}} $, resulting in an EMF in the $-\bm{J}$ direction. The right-hand panel shows how the pressure response to this $\delta \bm{u}^{(0)}_{\mathrm{basic}}$ (labelled $\delta \bm{u}^{(0)}_{\mathrm{pres}}$), which arises due to its nonzero divergence (yellow and red shaded regions for $\nabla \cdot \delta \bm{u}^{(0)}_{\mathrm{basic}}>0$ and $\nabla \cdot \delta \bm{u}^{(0)}_{\mathrm{basic}}<0$ respectively), leads to an EMF that opposes that from $\delta \bm{u}^{(0)}_{\mathrm{basic}}$. A more careful calculation shows that the cancellation is exact (in incompressible turbulence at low $\mathrm{Rm}$), so the turbulent resistivity due to magnetic fluctuations vanishes. See text for further discussion. \label{fig:DiagResis Diagram}} \end{center} \end{figure} However, as is clear from Fig.~\ref{fig:DiagResis Diagram}, the $\delta \bm{u}^{(0)}_{\mathrm{basic}}$ perturbation is not divergence free, given any $y$ variation in $b_{0z}$. In the third panel of Fig.~\ref{fig:DiagResis Diagram}, the shaded regions illustrate where the divergence of $\delta \bm{u}^{(0)}_{\mathrm{basic}}$ is positive (yellow) or negative (red). Given the incompressibility of the fluid, a nonzero divergence is not possible, and the $\nabla p$ term responds appropriately, creating a flow perturbation from regions of negative divergence to positive divergence (mathematically, $ \delta \bm{u}^{(0)}_{\mathrm{pres}} = \nabla^{-2}[ -\nabla (\nabla \cdot \delta \bm{u}^{(0)}_{\mathrm{basic}})]$). As shown in the third panel of Fig.~\ref{fig:DiagResis Diagram} this perturbation is anti-correlated with $\delta \bm{u}^{(0)}_{\mathrm{basic}}$ and thus creates an oppositely directed EMF, in the $+\bm{J}$ direction. 
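The pressure response $\delta \bm{u}^{(0)}_{\mathrm{pres}} = \nabla^{-2}[-\nabla (\nabla \cdot \delta \bm{u}^{(0)}_{\mathrm{basic}})]$ is straightforward to evaluate spectrally on a periodic box. The sketch below (grid size and test perturbation are illustrative choices) confirms that the combined perturbation $\delta \bm{u}^{(0)}_{\mathrm{basic}}+\delta \bm{u}^{(0)}_{\mathrm{pres}}$ is exactly divergence-free, as incompressibility requires.

```python
import numpy as np

def pressure_response(u):
    """Evaluate du_pres = inverse_laplacian(-grad(div u)) for a velocity
    field u on an n^3 periodic unit box, working in Fourier space.
    In Fourier variables this is the standard projection correction
    u_pres_hat = -k (k . u_hat) / |k|^2."""
    n = u.shape[1]
    k = np.fft.fftfreq(n, d=1.0 / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2 = np.where(k2 == 0, 1.0, k2)      # avoid division by zero at k = 0
    uh = np.fft.fftn(u, axes=(1, 2, 3))
    kdotu = kx * uh[0] + ky * uh[1] + kz * uh[2]
    up = -np.stack([kx, ky, kz]) * kdotu / k2
    return np.real(np.fft.ifftn(up, axes=(1, 2, 3)))
```

Applying this to a $\hat{\bm{y}}$-directed perturbation of the form $\delta u_{y}\propto b_{0z}(x,y)$, as in the figure, produces exactly the compensating flow from regions of negative to positive divergence described above.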
Further, since $\nabla \cdot (\bm{b}\cdot \nabla \bm{B})=\nabla \cdot (\bm{B}\cdot \nabla \bm{b})$, each of these linear contributions to the Maxwell stress adds in the same way to the pressure perturbation, and a more careful calculation shows that the effect exactly cancels the original EMF on average. Given its reliance on the pressure response, the effect will be reduced in a compressible flow (presumably becoming negligible for high Mach number flows), and one would expect $\bm{b}_{0}$ fluctuations to increase the turbulent diffusivity in this case (the magnetic shear-current effect will also be less effective in a compressible flow). Finally, it is worth mentioning that $z$ variation of the initial $\bm{b}_{0}$ perturbation will not contribute since this creates a $(\delta \bm{u}^{(0)}_{\mathrm{pres}})_{z}$ (which is zero in a cross product with $b_{0z}$), while $x$ variation of $\bm{b}_{0}$ produces a $(\delta \bm{u}^{(0)}_{\mathrm{pres}})_{x}$ that is out of phase with the original $b_{0z}$ perturbation. \begin{figure} \begin{center} \includegraphics[width=1.0\columnwidth]{fig3} \caption{Graphical illustration of the magnetic shear-current effect, which should be interpreted as follows. Left-hand panel, the geometry of the mean field and shear flow. Middle panel, the flow perturbation (both $\delta \bm{u}^{(0)}_{\mathrm{basic}}$ and $\delta \bm{u}^{(0)}_{\mathrm{pres}}$) that arises due to $x$, $y$ dependence of the initial $b_{0z}$, before interaction with the shear flow (note the rotation of the axes compared to the left panel). Right-hand panel, the $\delta \bm{u}^{(1)}$ perturbation that arises from $\delta \bm{u}^{(0)}$ due to stretching by the flow, which illustrates a correlation between $\delta \bm{u}^{(1)}_{\mathrm{pres}}$ and the original $b_{0z}$ structure. The resulting $\bm{\mathcal{E}}$ is pointing in the $-\hat{\bm{y}}$ direction, corresponding to a negative $\eta_{yx}$. 
In the middle panel, the yellow (red) shading indicates where $\delta \bm{u}^{(0)}_{\mathrm{basic}}$ has a positive (negative) divergence, while the shading in the right panel shows the same for $\delta \bm{u}^{(1)}_{\mathrm{basic}}$. More information and discussion is given in the main text. \label{fig:SCResis Diagram}} \end{center} \end{figure} \subsubsection{What happens in the presence of flow shear?} In the presence of flow shear, the cancellation discussed in the previous section leaves a residual $x$-directed $\bm{u}$ perturbation. This perturbation---which arises from the interaction of the pressure perturbation in Fig.~\ref{fig:DiagResis Diagram} with the mean shear, followed by the pressure response to this secondary perturbation---leads to the magnetic shear-current effect. This rather complex process is illustrated graphically in Fig.~\ref{fig:SCResis Diagram}, using similar conventions (and color schemes) to Fig.~\ref{fig:DiagResis Diagram}. A shear flow in the $\hat{\bm{y}}$ direction is included in addition to the mean field $\bm{B}=S_{B}z \hat{\bm{y}}$, which corresponds exactly to the geometry discussed in Sec.~\ref{sub:Nonhelical dynamo mechanisms} and Eq.~\eqref{c4:eq:SC sa eqs}. Recall that $\eta_{yx}<0$ is equivalent to ${\mathcal{E}}_{y}<0$ in this geometry (see Eq.~\eqref{eq:E expansion}). The second panel in Fig.~\ref{fig:SCResis Diagram} illustrates the same effect as shown in Fig.~\ref{fig:DiagResis Diagram}, now including $x$ and $y$ dependence of the $b_{0z}$ perturbation. As is evident, even though $\delta \bm{u}^{(0)}_{\mathrm{basic}}$ points only in the $y$ direction, the pressure response includes equally strong $x$ directed flows, since it arises from the spatial dependence of $\nabla \cdot \delta \bm{u}^{(0)}_{\mathrm{basic}}$. 
The resulting $(\delta \bm{u}^{(0)})_{x}$ is out of phase with $\bm{b}_{0}$, so does not contribute to an EMF itself, but it is sheared by the background flow through $\delta \bm{u}^{(1)}_{\mathrm{basic}}\sim -\tau_{c} \bm{u}^{(0)}\cdot \nabla \bm{U}=\tau_{c}S u_{x}^{(0)}\hat{\bm{y}}$, which is shown in the third panel of Fig.~\ref{fig:SCResis Diagram}. Again, since only the $x$ component contributes, $\delta \bm{u}^{(1)}_{\mathrm{basic}}$ is not divergence free (shaded yellow and red regions for $\nabla \cdot \delta \bm{u}^{(1)}_{\mathrm{basic}}>0$ and $\nabla \cdot \delta \bm{u}^{(1)}_{\mathrm{basic}}<0$ respectively). We see that the $x$ component of the pressure response, directed towards (away from) regions where $\nabla \cdot \delta \bm{u}^{(1)}_{\mathrm{basic}}>0$ ($\nabla \cdot \delta \bm{u}^{(1)}_{\mathrm{basic}}<0$), is now correlated and in phase with the original perturbation. Most importantly, its direction is such that $\bm{\mathcal{E}}=\delta \bm{u}^{(1)} \times \bm{b}_{0}$ is always in the $-\hat{\bm{y}}$ direction, leading to $(\eta_{yx})_{b}<0$. Note that here, unlike in the discussion of Fig.~\ref{fig:DiagResis Diagram}, the effect relies on the $x$ component of the pressure response (perpendicular to $\delta \bm{u}_{\mathrm{basic}}$), which must occur for any perturbation that varies in $x$ because the response is the gradient of a scalar field (i.e., $-\nabla p$). At this point, the reader could be forgiven for viewing the magnetic shear-current mechanism explained above with some skepticism---how do we know there are no opposing mechanisms to cancel out such effects? The simplest answer is that we have derived the physical picture in Fig.~\ref{fig:SCResis Diagram} from the SOCA calculation, by noting that $(\eta_{yx})_{b}$ is unchanged by removal of all contributions to the velocity perturbation other than $\nabla p$, and through the exploration of the different pathways in Fig.~\ref{fig:Interactions}. 
More physically, the reason the pressure is necessary for the shear-current effect arises from the mean-field and flow geometry. In particular, if a small-scale fluctuation interacts with either $\bm{U}$ or $\bm{B}$ through $\bm{u}\cdot \nabla \bm{U}$, $\bm{u}\cdot \nabla \bm{B}$, $\bm{b}\cdot \nabla \bm{U}$, or $\bm{b}\cdot \nabla \bm{B}$, the resulting perturbation is \emph{always} in the $\pm\hat{\bm{y}}$ direction. Obviously, such a perturbation cannot lead to a nonzero ${\mathcal{E}}_{y}$. Thus, $\eta_{yx}$ is both very important for dynamo action and particularly complicated to generate, because the flow and mean field are in the same direction as the required EMF. This explains why $\eta_{yx}$ is seen to be much smaller than $\eta_{xy}$ in numerical simulation and calculations (\citealt{Brandenburg:2008bc,SINGH:2011ho}; \citetalias{LowRm}), as well as the capricious nature of the kinematic shear-current effect (the sign of $(\eta_{yx})_{u}$ may depend on the Reynolds numbers, while analytic results depend on the closure method used), for which these same arguments apply\footnote{There is no equivalent of the above picture for the kinematic effect. This can be seen by the fact that the $\bm{b}$ perturbation that arises from $\bm{b}\cdot \nabla \bm{U}$ is necessarily in the $y$ direction (since there is no pressure). This means the $\bm{U}\cdot \nabla \bm{b}$ term, which involves the derivative of $\bm{b}$ in $\bm{k}$ space, must be responsible for any $\mathcal{E}_{y}$ (another possibility is correlations between components of $\bm{u}_{0}$ arising from $\nabla \cdot \bm{u}_{0}=0$). The same is true of the other pathways in Fig.~\ref{fig:Interactions}.}. Note that the requirements for $\mathcal{E}_{y}\neq 0$ in Fig.~\ref{fig:SCResis Diagram} are very specific---a $y$ variation of the $x$ variation of $b_{0z}$---and it is straightforward to see that this is the only possibility for generation of a $\delta u_{x}$ in this way. 
This implies we can ignore both the other $\bm{b}$ components and any variation in $z$. Thus, although Figs.~\ref{fig:DiagResis Diagram} and \ref{fig:SCResis Diagram} show the fluid response to a rather specific form for $\bm{b}_{0}$, the important features of the resulting perturbations (shown in the right-hand panels) are relatively generic for more general $\bm{b}_{0}$. Further, since the effect is linear, Fourier modes (such as that shown in Fig.~\ref{fig:SCResis Diagram}) can be added to form a more general $\bm{b}$ perturbation, and we are sure to obtain $\mathcal{E}_{y}<0$. We have thus seen how magnetic fluctuations can produce a negative $\eta_{yx}$ through their interaction with large-scale field and flow gradients, which can in some cases lead to large-scale dynamo action. In addition, the reliance of the effect on the fluid pressure response, as well as the general difficulty of creating perturbations that create an EMF parallel to both the mean flow and field, help explain the relative dominance of the magnetic over the kinematic shear-current effect. \section{Numerical evidence}\label{sec:numerical evidence} In this section we illustrate numerically that the magnetic shear-current mechanism discussed above is indeed realizable in MHD turbulence. We show using direct numerical simulation that it is possible for the small-scale dynamo to \emph{drive} the growth of the large-scale dynamo. So far as we are aware, this is the first demonstration of this interesting behavior. The methods used to illustrate this effect in numerical simulation are somewhat nonstandard in the dynamo literature. In particular, at each set of physical parameters we carry out an ensemble of simulations, each with different noise realizations. We then measure transport coefficients before and after small-scale saturation in each simulation, which shows (after an ensemble average) that $\eta_{yx}$ becomes more negative after the saturation of the small-scale dynamo.
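Why a more negative $\eta_{yx}$ matters can be made concrete with the single-$k_{z}$-mode reduction of the mean-field equations. The sketch below (parameter values purely illustrative) uses the coefficient conventions of Eq.~\eqref{c5:eq:EMF general} and the mean-field induction equation, with $\alpha$ terms dropped, $\eta_{xy}$ neglected, and $\eta_{xx}=\eta_{yy}=\eta_{t}$, and shows that with $S>0$ a mode grows only when $\eta_{yx}<0$:

```python
import numpy as np

def growth_rate(eta_yx, eta_t=1.0, S=10.0, k=np.pi):
    """Largest growth rate of a single k_z Fourier mode of the mean field:
        dBx/dt = -k^2 (eta_t Bx - eta_yx By)
        dBy/dt = -S Bx - k^2 eta_t By
    (alpha terms dropped; eta_xy neglected; eta_xx = eta_yy = eta_t)."""
    M = np.array([[-eta_t * k**2, eta_yx * k**2],
                  [-S, -eta_t * k**2]])
    return np.linalg.eigvals(M).real.max()

# Analytically, gamma = k sqrt(-S eta_yx) - eta_t k^2 when S eta_yx < 0
for eta_yx in (-2.0, 0.0, 2.0):          # purely illustrative values
    print(eta_yx, growth_rate(eta_yx))   # positive only for eta_yx = -2.0
```

The eigenvalues are $\gamma=-\eta_{t}k^{2}\pm k\sqrt{-S\eta_{yx}}$, so pushing $\eta_{yx}$ further negative directly raises the coherent growth rate.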
That this can drive a coherent dynamo is illustrated by qualitative observation of the mean-field pattern, as well as solution of the mean-field equations (Eq.~\eqref{c4:eq:SC sa eqs}) using the measured transport coefficients. The ensemble of simulations is required as a result of the relatively short period of large-scale dynamo growth before nonlinear saturation effects become significant. This is because the large-scale magnetic field starts its growth (when the small-scale dynamo saturates) at relatively large amplitudes, being in approximate equipartition with the small-scale fluctuations due to the finite size of the mean-field average. We shall see that in many cases, the growth of the mean field lasts little more than $20\rightarrow 30$ shearing times before saturating, and that its behavior can vary substantially between realizations. Because of this, the ensemble average over simulations is highly advantageous for accurate determination of the transport coefficients. (In other work we have used statistical simulation to circumvent this problem, but this requires a quasi-linear approximation, which eliminates the small-scale dynamo; see \citetalias{LowRm}.) The method for measuring the transport coefficients from simulation data after small-scale dynamo saturation (termed the ``projection method'') is also nonstandard, and will be explained in some detail. Because test-field methods that explicitly include the magnetic fluctuations are rather complex and in the early stages of development \citep{Rheinhardt:2010do}, we instead choose to measure transport coefficients directly from mean-field and EMF data taken from simulations.
The method, which is a modified version of that proposed in \citet{Brandenburg:2002cia} and is also used with some success in \citet{Racine:2011gh} and \citet{Simard:2013jf}, involves approximately solving $\mathcal{E}_{i}=\alpha_{ij}B_{j}-\eta_{ij}J_{j}$ at each time-step and taking spatiotemporal averages to obtain transport coefficients. To ensure that correct results are obtained, the projection method is checked in two independent ways: First, it is used to compute transport coefficients for low-$\mathrm{Rm}$ kinematic shear dynamos and compared directly to the test-field method (this is presented in appendix~\ref{app:verification}). Second, we solve the mean-field equations using the \emph{measured} time-dependent transport coefficients and compare to the mean-field evolution from the simulations. This provides a thorough check that the measured coefficients are correct, without relying on any assumptions about the type of dynamo, or simplifications to the form of $\bm{\mathcal{E}}$. Calculations are carried out using the nonlinear MHD equations (Eq.~\eqref{eq:MHD}), with homogeneous Cartesian geometry, periodic boundary conditions in the azimuthal ($y$) and vertical ($z$) directions, and shearing periodic boundary conditions in the radial ($x$) direction. We use the {\scshape Snoopy} code \citep{Lesur:2007bh}, which applies the Fourier pseudospectral method (in the shearing frame), and system rotation is included in some simulations through a mean Coriolis force. The flow field forcing ($\bm{\sigma}_{\bm{u}}$ in Eq.~\eqref{eq:MHD}) is nonhelical, white noise in time, isotropic, and centered in wavenumber space at $\left| \bm{k} \right|=6\pi$ (with width $6\pi /5$). All simulations presented here use a box of size $(L_{x},L_{y},L_{z})=(1,4,2)$ with a resolution of $(N_{x},N_{y},N_{z})=(64,128,128)$, and we take $\bar{\eta}=1/2000$ ($\mathrm{Rm}=2000$) and $\mathrm{Pm}=8$ ($\mathrm{Re}=250$).
To test convergence, we have run several cases (both with and without rotation) at twice the resolution, and there is no discernible difference with lower resolution runs in either the spectrum, turbulence level, or mean-field evolution\footnote{Since there are relatively large differences between realizations (see Fig.~\ref{c5:fig:By examples}), we compare a variety of the lower resolution cases with the higher resolution runs, and note that disparities between different low resolution realizations are as severe as those between the low and high resolution runs. We thus conclude that the differences between the lower and higher resolutions are negligible at these parameters, and use the lower resolution for computational reasons in the ensemble of realizations. }. Our choice of this numerical setup for the simulation ensembles is motivated both by the calculations of \citet{Yousef:2008ix} with unstable small-scale dynamo (see their figure 9), and from studies of MRI turbulence in the shearing box\footnote{The numerical setup, aside from the noise source, is identical to that of zero-net-flux unstratified accretion disk simulations.}. In particular, the relatively low Reynolds numbers are chosen both for computational reasons (100 simulations are run for each parameter set), and so that there is no transition to self-sustaining turbulence if $\bm{\sigma}_{\bm{u}}=0$. Thus, we choose Reynolds numbers that are intermediate between the small-scale dynamo being stable (on the low side) and the system transitioning to turbulence in the absence of noise (on the high side). While similar mechanisms may be operating in the case of self-sustaining turbulence \citep{Lesur:2008fn,Lesur:2008cv}, it is certainly a complicating influence that is more easily ignored for the purposes of this study. The relatively high $\mathrm{Pm}$ is chosen for the obvious reason of enhancing $\bm{b}$ in comparison to $\bm{u}$, while still allowing for a moderate range of scales in $\bm{u}$.
It seems worth emphasizing that we do not consider these measurements to be firm proof of the magnetic shear-current effect's importance at high $\mathrm{Rm}$; rather, they serve as a demonstration that it is possible for the small-scale dynamo to significantly change $\eta_{yx}$, and as motivation for further studies at higher Reynolds numbers and with different numerical setups. \subsection{Measurement of the transport coefficients}\label{sec:eta measurements} In this section we describe the methods---the test-field method \citep{Schrinner:2005jq}, and the projection method (based on \citealt{Brandenburg:2002cia})---for obtaining the transport coefficients from simulations. Those readers who are primarily interested in results may wish to skip directly to Sec.~\ref{sec:results Rm=2000}. Since the projection method is uncommon in the dynamo literature, its accuracy is verified in appendix~\ref{app:verification} through direct comparison to test-field method calculations for low-Rm nonhelical shear dynamos over a range of $\eta_{yx}$. While the test-field method gives unambiguous answers for kinematic transport coefficients (before the small-scale dynamo saturation), results can become more difficult to interpret in the presence of magnetic fluctuations \citep{Cattaneo:2009cx,Hubbard:2009dn,Rheinhardt:2010do}. In contrast, the projection method does not rely on any assumptions regarding the importance of small-scale magnetic fields, operating purely on the mean-field data from a given simulation. In addition to this method, we have also applied a weighted least-squares method, fitting simulation data for a single mode \citep{Kowal:2005uq}. This has led to almost identical results for the low-$\mathrm{Rm}$ test cases and the main results given here. However, the least-squares method was generally found to be somewhat less reliable and rather delicate, and we do not discuss the details.
Another possibility for measuring transport coefficients, which could be explored for nonhelical shear dynamos in future work, is given in \citet{Tobias:2013bk}. \subsubsection{Test-field method} The test-field method \citep{Schrinner:2005jq}, which is used for calculating transport coefficients before small-scale dynamo saturation, has become a standard tool in dynamo studies \citep{Brandenburg:2008bc}, so we discuss this only briefly. The method involves solving for a set of $Q$ ``test fields'' $\bm{b}^{q}$ (where $q=1\rightarrow Q$), in addition to the standard MHD equations. The test fields satisfy the small-scale induction equation, \begin{equation} \partial_{t}\bm{b}^{q} = \nabla\times (\bm{u}\times \bm{B}^{q}) + \nabla \times (\bm{U}\times \bm{b}^{q}) + \nabla \times \left(\bm{u}\times \bm{b}^{q} -\overline{\bm{u}\times \bm{b}^{q}}\right)+\bar{\eta} \nabla^{2}\bm{b}^{q},\label{eq:TF equations} \end{equation} where $\bm{B}^{q}$ are a set of $Q$ test mean fields (specified at the start of the simulation), and $\bm{u}$ and $\bm{U}$ are taken from the simulation. By calculating the EMF $\bm{\mathcal{E}}^{q}=\overline{\bm{u}\times \bm{b}^{q}}$ that results from a variety of $\bm{B}^{q}$, one can determine the transport coefficients. The test-field method's simplest---and most obviously meaningful---use is to utilize a $\bm{u}$ field that is unaffected by $\bm{b}$ or $\bm{B}$, thus calculating kinematic transport coefficients\footnote{One complication is the small-scale dynamo of the test-fields, which can be unstable and result in exponential growth of $\bm{b}^{q}$. This can be circumvented by resetting $\bm{b}^{q}$ periodically (every $T_{\mathrm{reset}}$), such that it does not become large in comparison to $\bm{u}$, but it is important to ensure results are independent of the choice of $T_{\mathrm{reset}}$.}.
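The post-processing step that turns measured EMFs into coefficients can be made concrete as follows. In this sketch, the EMFs of two trigonometric test fields are manufactured from known, purely hypothetical coefficients (standing in for the simulated $\overline{\bm{u}\times \bm{b}^{q}}$), and the coefficients are then recovered by Fourier projection; this is only the inversion step, not a test-field solver.

```python
import numpy as np

# Synthetic stand-in for a test-field measurement: EMFs produced by the test
# mean fields B^1 = (cos kz, 0) and B^2 = (0, cos kz), generated here from
# known (hypothetical) transport coefficients to show the inversion.
n, k = 256, np.pi
z = np.arange(n) * (2.0 / n)                 # Lz = 2, so k = pi is the box mode
alpha = np.array([[0.0, 0.3], [0.0, 0.0]])   # hypothetical values
eta = np.array([[1.0, 0.2], [-0.05, 1.0]])   # note eta_yx < 0

def emf(Bx, By, dBx, dBy):
    """E_i = alpha_ij B_j - eta_ij J_j for B(z), where J = (-dBy, dBx)."""
    Ex = alpha[0, 0]*Bx + alpha[0, 1]*By - eta[0, 1]*dBx + eta[0, 0]*dBy
    Ey = alpha[1, 0]*Bx + alpha[1, 1]*By - eta[1, 1]*dBx + eta[1, 0]*dBy
    return Ex, Ey

c, s = np.cos(k * z), np.sin(k * z)
proj = lambda f, g: 2 * np.mean(f * g)       # projection onto cos or sin mode

# Test field 1: B = (cos kz, 0)  ->  dBx = -k sin kz
Ex1, Ey1 = emf(c, 0 * z, -k * s, 0 * z)
# Test field 2: B = (0, cos kz)  ->  dBy = -k sin kz
Ex2, Ey2 = emf(0 * z, c, 0 * z, -k * s)

alpha_xx, eta_xy = proj(Ex1, c), proj(Ex1, s) / k
alpha_yx, eta_yy = proj(Ey1, c), proj(Ey1, s) / k
alpha_xy, eta_xx = proj(Ex2, c), -proj(Ex2, s) / k
alpha_yy, eta_yx = proj(Ey2, c), -proj(Ey2, s) / k
print(eta_yx)   # recovers the input -0.05
```

In practice additional test fields (e.g., the corresponding $\sin kz$ pair) and several $k$ values are used for robustness, but the algebra is the same.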
A simple extension is the ``quasi-kinematic'' method \citep{Brandenburg:2008hc,Hubbard:2009dn}, for which one runs an MHD simulation in which $\bm{u}$ is influenced by self-consistent magnetic fields, and extracts $\bm{u}$ to insert into the test-field equations. This can most obviously be used to understand how the modification of $\bm{u}$ by $\bm{b}$ or $\bm{B}$ affects the kinematic coefficients (see, for example, \citealt{Gressel:2013be}), but the direct effect of $\bm{b}$ fluctuations is not included. A variety of subtleties exist, however, and care must be used in interpreting results; see \citet{Hubbard:2009dn}. \subsubsection{Projection method} Inclusion of the direct effect of $\bm{b}$ on transport coefficients in the test-field method introduces significant complications and ambiguities, primarily because it can be difficult to ensure that the test fields $\bm{b}^{q}$ and $\bm{u}^{q}$ are linear in the test mean fields. A method has been proposed and explored in \citet{Rheinhardt:2010do}; however, given its complications and early stage of development, we choose to use the projection method detailed below to calculate mean-field transport coefficients after small-scale dynamo saturation. This method makes no assumptions regarding the importance of small-scale magnetic fluctuations, simply utilizing mean-field and EMF data extracted from standard MHD simulation. The starting point of the method is the standard Taylor expansion of $\bm{\mathcal{E}}$ in terms of $\bm{B}$. In coordinates this is (cf.~Eq.~\eqref{c4:eq:SC sa eqs}), \begin{subequations} \begin{gather} \mathcal{E}_{x}=\alpha_{xx}B_{x}+\alpha_{xy}B_{y}-\eta_{xy}\partial_{z}B_{x}+\eta_{xx}\partial_{z}B_{y}, \\ \mathcal{E}_{y}=\alpha_{yx}B_{x}+\alpha_{yy}B_{y}-\eta_{yy}\partial_{z}B_{x}+\eta_{yx}\partial_{z}B_{y}. \end{gather}\label{c5:eq:EMF general}\end{subequations} Note that we have not necessarily assumed linearity in $\bm{B}$, since $\alpha_{ij}$ and $\eta_{ij}$ are not assumed constant. 
The basic idea of the projection method, proposed in \citet{Brandenburg:2002cia}, is to extract time-series data for $\mathcal{E}_{i}$ and $B_{i}$ from nonlinear simulation, solving for the transport coefficients in Eq.~\eqref{c5:eq:EMF general} at each time point. In principle, all coefficients can be solved for directly, given $\bm{B}$ and $\bm{\mathcal{E}}$ data that consists of at least 2 Fourier modes. One calculates \begin{equation} \bm{E}^{(i)}=\left(\left\langle B_{x}\mathcal{E}_{i}\right\rangle ,\left\langle B_{y}\mathcal{E}_{i}\right\rangle ,\left\langle \partial_{z}B_{x}\mathcal{E}_{i}\right\rangle ,\left\langle \partial_{z}B_{y}\mathcal{E}_{i}\right\rangle \right)^{T} \end{equation} and the matrix \begin{equation} M=\left(\begin{array}{cccc} \left\langle B_{x}B_{x}\right\rangle & \left\langle B_{x}B_{y}\right\rangle & \left\langle B_{x}\partial_{z}B_{x}\right\rangle & \left\langle B_{x}\partial_{z}B_{y}\right\rangle \\ \left\langle B_{y}B_{x}\right\rangle & \left\langle B_{y}B_{y}\right\rangle & \left\langle B_{y}\partial_{z}B_{x}\right\rangle & \left\langle B_{y}\partial_{z}B_{y}\right\rangle \\ \left\langle \partial_{z}B_{x}B_{x}\right\rangle & \left\langle \partial_{z}B_{x}B_{y}\right\rangle & \left\langle \partial_{z}B_{x}\partial_{z}B_{x}\right\rangle & \left\langle \partial_{z}B_{x}\partial_{z}B_{y}\right\rangle \\ \left\langle \partial_{z}B_{y}B_{x}\right\rangle & \left\langle \partial_{z}B_{y}B_{y}\right\rangle & \left\langle \partial_{z}B_{y}\partial_{z}B_{x}\right\rangle & \left\langle \partial_{z}B_{y}\partial_{z}B_{y}\right\rangle \end{array}\right), \end{equation} where $\left\langle \cdot\right\rangle $ here denotes an average over $z$ and possibly time (the system is statistically homogeneous in $z$).
Then, solving \begin{equation} \bm{E}^{(i)}=M\bm{C}^{(i)}, \end{equation} for $C^{(1)}=\left(\alpha_{xx},\alpha_{xy},-\eta_{xy},\eta_{xx}\right)$, $C^{(2)}=\left(\alpha_{yx},\alpha_{yy},-\eta_{yy},\eta_{yx}\right)$, one obtains the full set of transport coefficients. The data for $\bm{\mathcal{E}}$ and $\bm{B}$ are generally quite noisy and some care is required to avoid spurious effects that lead to incorrect results. In particular, while pure white noise in each variable will average to zero over time, there are correlations between components that can significantly pollute the data. These correlations arise from the fact that Eq.~\eqref{c5:eq:EMF general} is not the only expected relationship between components of $\bm{B}$ and $\bm{\mathcal{E}}$; $\bm{B}$ is also directly driven by $\bm{\mathcal{E}}$, and itself, through \begin{equation} \partial_{t}\bm{B} = -SB_{x}\hat{\bm{y}} + \nabla\times\bm{\mathcal{E}} + \bar{\eta}\triangle \bm{B}.\label{eq:B driving} \end{equation} From Eq.~\eqref{eq:B driving} and by examining data, it is found that the most harmful of the correlations are a correlation between $B_{x}$ and $B_{y}$ [as expected due to $-SB_{x}$ in Eq.~\eqref{eq:B driving}] and a correlation between fluctuations in $\mathcal{E}_{y}$ and $B_{x}$ ($B_{x}$ is directly driven by $\partial_{z}\mathcal{E}_{y}$)\footnote{ The correlation between $B_{y}$ and $\mathcal{E}_{x}$ is not so damaging as that between $B_{x}$ and $\mathcal{E}_{y}$ due to the $-SB_{x}$ term in the $B_{y}$ equation and larger range of $B_{y}$ values explored throughout a simulation.}. Note that this correlation of $\mathcal{E}_{y}$ and $B_{x}$ is not the same as a nonzero $\alpha_{yx}$ or $\eta_{yy}$ coefficient. Specifically, a noisy change in the imaginary part of $\mathcal{E}_{y}$ by $\epsilon$ will cause a change in $B_{x}$ of $\sim k\epsilon\Delta t$ after some time $\Delta t$ (related to the correlation time of the $\mathcal{E}_{y}$ noise). 
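As a concrete illustration, the per-time-step solve described above can be sketched on synthetic data. In the sketch below the transport-coefficient values are purely hypothetical, random trigonometric fields stand in for the simulated $\bm{B}$ time series, and the EMF is manufactured from the ansatz plus white noise; only the $\mathcal{E}_{y}$ row is shown:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "true" coefficients used to manufacture synthetic EMF data
alpha_true = np.array([[0.0, 0.1], [0.0, 0.0]])
eta_true = np.array([[1.0, 0.2], [-0.1, 1.0]])   # eta_yx = -0.1 < 0

nz, nt, noise = 64, 2000, 0.05
z = np.arange(nz) * 2.0 / nz                     # Lz = 2
ks = [np.pi, 2 * np.pi]                          # first two Fourier modes

def random_field(amp):
    """Random two-mode field and its z derivative."""
    a = amp * rng.standard_normal((2, 2))        # (mode, cos/sin) amplitudes
    f = sum(a[i, 0] * np.cos(ks[i] * z) + a[i, 1] * np.sin(ks[i] * z)
            for i in range(2))
    df = sum(ks[i] * (a[i, 1] * np.cos(ks[i] * z) - a[i, 0] * np.sin(ks[i] * z))
             for i in range(2))
    return f, df

C2 = []
for _ in range(nt):
    Bx, dBx = random_field(0.02)                 # B_x << B_y, as in the text
    By, dBy = random_field(1.0)
    # Manufacture noisy EMF data obeying E_y of Eq. (c5:eq:EMF general)
    Ey = (alpha_true[1, 0] * Bx + alpha_true[1, 1] * By
          - eta_true[1, 1] * dBx + eta_true[1, 0] * dBy
          + noise * rng.standard_normal(nz))
    basis = np.stack([Bx, By, -dBx, dBy])        # regressors for E_y
    M = basis @ basis.T / nz                     # <.> is the z average
    C2.append(np.linalg.solve(M, basis @ Ey / nz))

eta_yx = np.mean(C2, axis=0)[3]                  # (a_yx, a_yy, eta_yy, eta_yx)
print(eta_yx)                                    # close to the input -0.1
```

With noise entering only through $\bm{\mathcal{E}}$ this per-step solve is unbiased and the time average converges to the input coefficients; the correlations discussed next arise precisely because, in a real simulation, $\bm{B}$ is itself driven by the noisy $\bm{\mathcal{E}}$.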
If the noise fluctuations are of similar or larger magnitude than the range of $B_{x}$ and $\mathcal{E}_{y}$ explored over the course of the calculation, this correlation can cause a \emph{negative} value for the fit parameter $\eta_{yy}$, since the scatter of the data has a preferred slope. In fact, a consistently negative calculated value for $\eta_{yy}$ is the most prominent spurious effect in simulations, which was also noted in \citet{Brandenburg:2002cia} without explanation. That this is purely a consequence of the projection method, and not physical, can be established by comparison to test-field calculations (see appendix~\ref{app:verification}). Importantly, the value of $\eta_{yy}$ is coupled to that of $\alpha_{iy}$ and $\eta_{yx}$. This implies one cannot simply ignore this effect and settle for not knowing $\eta_{yy}$, since the average values of other coefficients will also become polluted. The basic approach to overcoming the issues described above is to minimize the influence of $B_{x}$ on the calculation, to the extent possible. This is motivated by the fact that $B_{x}$ is very noisy in comparison to $B_{y}$, and is involved in both of the aforementioned damaging correlations. The approach works very well for shear dynamos because $B_{x}$ is much smaller than $B_{y}$ (e.g., in the simulations presented in this work, $B_{x}$ is usually between 25 and 150 times smaller than $B_{y}$ depending on the realization). In addition, those transport coefficients that require $B_{x}$ for their calculation (e.g., $\eta_{xy}$) are substantially less interesting, since they do not significantly affect the dynamo growth rate. To enable this reduction in the influence of $B_{x}$, two approximations are made to Eq.~\eqref{c5:eq:EMF general}. The first and most important is to assume that diagonal transport coefficients are equal, $\eta_{yy}=\eta_{xx}$ and $\alpha_{yy}=\alpha_{xx}$.
This is not strictly required by the symmetries of the turbulence with shear (\citealt{Radler:2006hy}; \citetalias{Analytic}), but a variety of test-field calculations, including those after saturation of the small-scale dynamo (i.e., quasi-kinematic calculations; see \citealt{Hubbard:2009dn,Gressel:2010dj,Gressel:2015ev}), have shown this to be the case to a high degree of accuracy. The second approximation is to neglect $\eta_{xy}$ and $\alpha_{yx}$. This is justified by the fact that $B_{x}\ll B_{y}$ and $\eta_{xy}<\eta_{xx}$ on average, thus its effect on the mean value of $\eta_{xx}$ should be very small. This approximation is not strictly necessary and similar results can be obtained with $\eta_{xy}$ and $\alpha_{yx}$ included; however, these coefficients fluctuate wildly in time (far more than $\eta_{xx}$ for example) and cause increased fluctuations in the values of the other transport coefficients. It is useful to briefly consider the proportional error in $\eta_{xx}$ and $\eta_{yx}$ that might arise from these approximations. First, in considering the neglect of $\eta_{xy}$, one starts with the conservative estimate $25B_{x}\approx B_{y}$. Noting that test-field calculations give $\eta_{xy}\sim0.25\eta_{xx}$ for the simulations given in the manuscript (see also \citealt{Brandenburg:2008bc}), we see that this approximation should cause less than a $1\%$ systematic error in $\eta_{xx}$. Second, since we are primarily interested in determining $\eta_{yx}$, let us consider the error in $\eta_{yx}$ that results from an error in $\eta_{yy}$ (caused by either the neglect of $\eta_{xy}$ or the assumption $\eta_{xx}=\eta_{yy}$). Noting that $B_{x}\sim-k\sqrt{\eta_{yx}/S}B_{y}$ for a coherent shear dynamo, we can estimate that $ik\eta_{yx}B_{y}\gtrsim ik\eta_{yy}B_{x}$ when $k\eta_{yy}\lesssim\sqrt{\left|S\eta_{yx}\right|}$. 
This inequality is satisfied if the coherent dynamo has a positive growth rate; thus, very approximately, at marginality one would expect the proportional errors in $\eta_{yx}$ and $\eta_{yy}$ to be similar. Combining these two conclusions, one should expect the two approximations to cause very little systematic error in the determination of $\eta_{yx}$, despite the coefficient's small values. To summarize the previous paragraphs, we shall fit \begin{subequations} \begin{gather} \mathcal{E}_{x}=\alpha_{yy}B_{x}+\alpha_{xy}B_{y}+\eta_{xx}\partial_{z}B_{y}, \\ \mathcal{E}_{y}=\alpha_{yy}B_{y}-\eta_{xx}\partial_{z}B_{x}+\eta_{yx}\partial_{z}B_{y}, \end{gather}\label{c5:eq:EMF spec}\end{subequations} to simulation data at each time point. Since there are now fewer coefficients than rows of $\bm{E}^{(i)}$, the matrix equations are solved in the least-squares sense. One final difference from the method as utilized in \citet{Brandenburg:2002cia} is a filtering of the data to include only the first two Fourier modes. This is done to improve scale separation, since the small scales of the mean field will be dominated by fluctuations due to the finite size of the horizontal average, and cannot be expected to conform to the ansatz in Eq.~\eqref{c5:eq:EMF general}\footnote{ This filtering violates the Reynolds averaging rules (specifically $\langle \langle g \rangle_{f} h \rangle_{f} \neq \langle g\rangle_{f} \langle h \rangle_{f}$, where $\langle \cdot \rangle_{f}$ is the Fourier average). While not technically required for the validity of Eq.~\eqref{c5:eq:EMF general} (which relies only on some level of scale separation), the rules are required for the use of the mean-field induction equation (Eq.~\eqref{eq:B driving}) in the first place, since $\langle \bm{u}\times \bm{B} \rangle_{f}$ may drive the Fourier-averaged field in addition to $\bm{\mathcal{E}}$ and $\bm{U}\times \bm{B}$ (this is an aliasing effect). 
To ensure this does not adversely affect results, we have verified that the coefficients are essentially independent of the number of Fourier modes $n$ retained in the projection, up to $n\approx 5$ or $6$.}. Finally, we note that $\alpha$ coefficients can be excluded from these calculations altogether, and since their average over long times vanishes, this does not affect the results for $\eta_{ij}$. We have chosen to permit nonzero $\alpha$ in all calculations presented below, both as a consistency check and because over shorter time-windows $\alpha$ may not average to exactly zero. Nonetheless, repeating all calculations presented below and in appendix~\ref{app:verification} with $\alpha_{ij}=0$ imposed artificially, one obtains the same results (to within the margin of error). This illustrates that in the neglect of transport coefficients considered above (e.g., $\alpha_{yx}$), it is only necessary to consider the errors arising from neglect of $\eta$ coefficients, since those due to neglect of $\alpha$ coefficients average to zero. \subsubsection{Verification} To ensure the accuracy of results---especially with regard to possible systematic errors---it is crucial to verify the projection method. We do this with two independent approaches. First, in appendix~\ref{app:verification}, the projection method is used to calculate kinematic transport coefficients for low-$\mathrm{Rm}$ nonhelical shear dynamos, allowing a direct comparison to the kinematic test-field method. The study is carried out for dynamos with a range of positive and negative $\eta_{yx}$ by changing the rotation (see \citetalias{LowRm}), and is in a regime where the stochastic-$\alpha$ effect is significant. This ensures that the projection method does not inadvertently capture a property of the dynamo growth rate, rather than the coherent transport coefficients.
Second, we verify the calculated transport coefficients are correct \emph{a posteriori} for the main simulation results (Sec.~\ref{sec:numerical results}). This is done by solving the mean-field equations (Eq.~\eqref{c4:eq:SC sa eqs}) using the time-dependent transport coefficients $\alpha_{ij}(t)$ and $\eta_{ij}(t)$ calculated with the projection method. Comparison with the mean-field evolution taken directly from the simulation provides a thorough check that the transport coefficients are being calculated correctly, without relying on assumptions about the nature of the dynamo (aside from the mean-field ansatz), or the importance of approximations made to the form of the EMF (i.e., Eq.~\eqref{c5:eq:EMF spec}). \begin{figure} \centering{}\includegraphics[width=0.535\linewidth]{fig4a-eps-converted-to}\includegraphics[width=0.465\linewidth]{fig4b-eps-converted-to}\caption{(a) Time development of the average mean-field energy $E_{B}=L_{z}^{-1}\int dz\, \bm{B}^{2}$ for the rotating simulation set. Each faded color curve illustrates a single realization, while the thick black curve shows the mean over all realizations. The dotted vertical lines indicate the saturation of the small-scale dynamo and the nonlinear saturation of the large-scale dynamo (the projection method is applied between these), while the dashed line is simply an approximate fit to the large-scale dynamo growth phase. (b) Shell-averaged turbulent spectra (of $\bm{B}_{T}$ and $\bm{U}_{T}$) for the rotating simulations shown in (a). The colored lines (from blue to yellow) illustrate the growth of the magnetic spectrum in time (averaged over all simulations), with the spectra at $t=50$, $t=100$, and $t=150$ highlighted by thicker lines. The solid black line illustrates the velocity spectrum (peaked at $k\approx 6\pi$), while the dashed line shows the magnetic spectrum from an identical simulation, but without velocity shear ($S=0$), and averaged in time from $t=50$ to $t=150$. 
Evidently, as also seen in \citet{Yousef:2008ix}, the second period of large-scale field growth is absent when $S=0$ (note that the velocity spectrum is essentially identical to the case with velocity shear). The dotted line simply illustrates a $k^{-5/3}$ spectrum for the sake of clarity. The slight bump in the spectrum at high $k$ is caused by spectral reflection from the grid cutoff; however, since this is well into the exponential fall-off and at very low energy we are confident that this does not affect large-scale evolution (note that the spectrum in the few double resolution simulations is essentially identical, aside from at and above the bump itself). \label{fig:EandSpectrum}} \end{figure} \subsection{Numerical results---the magnetically driven dynamo}\label{sec:results Rm=2000}\label{sec:numerical results} In this section, we show that small-scale fields arising self-consistently through the small-scale dynamo can drive a coherent large-scale dynamo. To this end, we apply the methods discussed in the previous section to calculate transport coefficients before and after the saturation of the small-scale dynamo. The technique is applied to ensembles of 100 simulations, both with and without Keplerian rotation. Before continuing, we demonstrate that there is indeed a large-scale dynamo that develops after saturation of the small-scale dynamo. This is shown both in Fig.~\ref{fig:EandSpectrum}, which gives the time development of the mean-field energy and turbulent spectra, and in Fig.~\ref{c5:fig:By examples}, which illustrates the spatiotemporal evolution of $B_{y}(z,t)$ in several example realizations. From Fig.~\ref{fig:EandSpectrum}, we clearly see the fast growth of the small-scale dynamo until its saturation at $t\approx 50$. (This is observable in Fig.~\ref{fig:EandSpectrum}(a), which shows only the mean field, because $\bm{B}$ is in approximate equipartition with the small scales due to the finite domain.)
Following this there is a slower period of growth in only the largest scales of the box (this is $k_{1z}=\pi$, a factor of six less than the forcing scale), with this saturating around $t=100$ on average. Importantly, this second period of slower growth in the mean field is not present without the mean shear (see Fig.~\ref{fig:EandSpectrum}(b), dashed line), despite the velocity spectrum being essentially identical. This illustrates that the shear causes large-scale field generation after saturation, as also noted in \citet{Yousef:2008ix}. As shown in Fig.~\ref{fig:EandSpectrum}(a) and Fig.~\ref{c5:fig:By examples}, at these parameters, the prevalence of a coherent large-scale dynamo after saturation of the small-scale dynamo varies significantly between realizations. Specifically, it appears that the coherent effect cannot always overcome fluctuations in $\bm{\mathcal{E}}$ immediately after small-scale saturation, although the dynamo always develops after a sufficiently long time {[}e.g., Fig.~\ref{c5:fig:By examples}(d) near $t=150${]}. This behavior seems generic when the coherent dynamo is close to its threshold for excitation, and similar structures were observed at lower Rm in \citetalias{LowRm}, where forcing in the induction equation was used to create a homogeneous bath of magnetic fluctuations. Nonetheless, despite this variability in the dynamo's qualitative behavior, measurement of the transport coefficients over the ensemble of simulations illustrates a significant decrease in $\eta_{yx}$ after the magnetic fluctuations reach approximate equipartition with velocity fluctuations at small scales. \begin{figure} \centering{}\includegraphics[width=0.8\linewidth]{fig5}\caption{ Example spatiotemporal $B_{y}$ evolutions for (a-b) non-rotating ($\Omega=0$), and (c-d) Keplerian rotating ($\Omega=2/3$) driven turbulence (parameters described in the text).
The first examples in each case {(}(a) and (c){)} show $B_{y}$ when a coherent dynamo develops, while the second examples {(}(b) and (d){)} illustrate the case when it is more incoherent. The main factors distinguishing these are the coherency in phase of $B_{y}$ over some time period and the amplitude at saturation, which is larger in the coherent cases. In general, the rotating simulations are substantially more coherent. The hatched area illustrates the region of small-scale dynamo growth. The projection method used to compute transport coefficients (see Fig.~\ref{c5:fig:Transport coeffs}) is applied between the dashed lines ($t=50\rightarrow100$).\label{c5:fig:By examples} } \end{figure} At early times, before $t\approx 50$, the kinematic $\alpha$ and $\eta$ are measured using the test-field method, fixing the mean field and calculating $\bm{\mathcal{E}}$, with no Lorentz force \citep{Brandenburg:2005kla,Brandenburg:2008bc}. Calculations are run from $t=0\rightarrow 2000$, with the errors estimated through the standard deviation of the mean over 100 segments. Since the small-scale dynamo grows quickly, the test fields are reset every $5$ time units. After small-scale saturation, we utilize the projection method (Sec.~\ref{sec:eta measurements}) to measure coefficients directly from the observed mean-field and EMF evolution\footnote{ While it would be ideal to measure coefficients before saturation using the projection method for consistency, this is difficult. In particular, the method is hindered by the small-scale dynamo, which causes the mean-field evolution to be completely overwhelmed by small-scale noise. We explored the possibility of seeding initial conditions with large-scale fields to obtain a short period of kinematic evolution; however, results were inconclusive due to very high levels of noise in the measurements. }.
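To make the coefficient measurement concrete, a minimal least-squares variant of such a projection can be sketched as follows: given time series of the largest-$k$ Fourier mode of the mean field and of the EMF, one fits $\mathcal{E}_i = \alpha_{ij}B_j - \eta_{ij}J_j$ with real coefficients. This is a simplified stand-in for the projection method described in the text, with hypothetical array conventions:

```python
import numpy as np

def fit_transport_coefficients(E_hat, B_hat, k):
    """Least-squares fit of alpha_ij and eta_ij from Fourier-mode time series.

    E_hat, B_hat : complex arrays, shape (nt, 2) -- the k-mode amplitudes of
    the EMF (E_x, E_y) and mean field (B_x, B_y) at nt time samples.
    Model (for B = B(z)):  E_i = alpha_ij B_j - eta_ij J_j, with
    J_x = -ik B_y and J_y = ik B_x for a single mode exp(ikz).
    """
    J_hat = np.stack([-1j * k * B_hat[:, 1], 1j * k * B_hat[:, 0]], axis=1)
    A = np.hstack([B_hat, -J_hat])        # columns: B_x, B_y, -J_x, -J_y
    # Stack real and imaginary parts: the coefficients are real, and over the
    # reals the columns B_y and -J_x (= ik B_y) are independent, so the fit
    # is well posed even though they are proportional over the complex field.
    A_ri = np.vstack([A.real, A.imag])
    alpha = np.zeros((2, 2))
    eta = np.zeros((2, 2))
    for i in range(2):
        rhs = np.concatenate([E_hat[:, i].real, E_hat[:, i].imag])
        c, *_ = np.linalg.lstsq(A_ri, rhs, rcond=None)
        alpha[i], eta[i] = c[:2], c[2:]
    return alpha, eta
```

On noise-free synthetic data generated from known coefficients this recovers $\alpha_{ij}$ and $\eta_{ij}$ exactly; with simulation data the residual of the fit is the fluctuating part of $\bm{\mathcal{E}}$.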
The time window of these measurements has been limited to $t=50\rightarrow100$, since growth is seen to stop at $t\approx100$ in many realizations (see Fig.~\ref{fig:EandSpectrum} and Fig.~\ref{c5:fig:By examples}(a) and (c)). Since this saturation presumably occurs due to a nonlinear change in the transport coefficients at large $\bm{B}$ (e.g., a change in sign of $\eta_{yx}$), it is important not to include this saturation phase in the measurement of $\eta_{yx}$. As should be expected from Fig.~\ref{c5:fig:By examples} and due to the short time window, measurements of the transport coefficients after small-scale saturation vary significantly between realizations. Nonetheless, an average over the ensemble illustrates a statistically significant change in $\eta_{yx}$ that is consistent with the observed behavior, in both the rotating and non-rotating simulation ensembles. \begin{figure} \begin{centering} \includegraphics[width=0.9\linewidth]{fig6} \par\end{centering} \centering{}\caption{ Measurements of the turbulent transport coefficients for 100 realizations of the simulations in Fig.~\ref{c5:fig:By examples}; (a) $\eta_{xx}$ coefficients, no rotation, (b) $\eta_{yx}$ coefficients, no rotation, (c) $\eta_{xx}$ coefficients, rotating, (d) $\eta_{yx}$ coefficients, rotating. Unfilled markers in each plot (circles and squares for non-rotating and rotating runs respectively) show coefficients measured from each of the individual realizations, with mean values displayed by solid markers and the shaded regions indicating error in the mean (2 standard deviations). Black markers illustrate the kinematic transport coefficients, with grey shaded regions indicating the error. After saturation of the small-scale dynamo, $\eta_{ij}$ is calculated using the projection method, taking the mean from $t=50$ to $t=100$. This limited time window is chosen to avoid capturing the saturation phase of the large-scale dynamo, since $\eta_{ij}$ is presumably modified in this phase.
In both methods used to compute transport coefficients, the corresponding $\alpha$ coefficients are also calculated. In all cases these are zero to within error as expected, and the scatter between simulations is of a similar magnitude to that of $\eta_{ij}$ if their different units are accounted for (it is necessary to divide $\alpha$ by a characteristic $k$ value). \label{c5:fig:Transport coeffs}} \end{figure} Figure~\ref{c5:fig:Transport coeffs} illustrates the results. In the kinematic phase without rotation, we see $\eta_{yx}=\left(4.1\pm1.6\right)\times10^{-4}$, in qualitative agreement with previous studies \citep{Brandenburg:2008bc}. With rotation, we find $\eta_{yx}=\left(0.6\pm1.2\right)\times10^{-4}$, consistent with a reduction in $\eta_{yx}$ due to the $\bm{\Omega}\times\bm{J}$ effect (\citealt{Krause:1980vr}, but note the deviation from the lower-$\mathrm{Rm}$ case and SOCA result, which predicts negative $\eta_{yx}$). After saturation of the small-scale dynamo, $\eta_{yx}=(-0.1\pm1.0)\times10^{-4}$ for the non-rotating case, while $\eta_{yx}=-\left(2.0\pm0.8\right)\times10^{-4}$ in the rotating case. The reduction of each is the same to within error. Values for the diagonal resistivity are smaller after saturation, which is consistent with the observed decrease in the velocity fluctuation energy (by a factor $\sim1.4$). The numerical values of $\eta_{xx}$ and $\eta_{yx}$ show that the coherent dynamo is, on average, stable in the non-rotating case and close to marginal in the rotating case. However, the coefficients vary significantly between realizations, sometimes yielding larger growth rates, and it is important to check that the observed mean-field evolution has some relation to this variation. This serves two purposes. First, it acts as a check that the projection method is measuring the transport coefficients correctly.
Second, it illustrates that those realizations exhibiting the strongest growth are indeed being driven by the shear-current mechanism; that is, they are driven by $\eta_{yx}$ rather than by residual variation of $\alpha$ about its mean of zero. This corroborates the earlier conclusion that the approximately constant phase of $B_{y}(z,t)$ in the development of the dynamo (Fig.~\ref{c5:fig:By examples}) is inconsistent with an $\alpha$ effect. As stated previously, the method for checking this consistency is to use the \emph{measured} transport coefficients to solve for the expected evolution of the largest Fourier mode of $B_{i}$ {(}using Eq.~\eqref{c4:eq:SC sa eqs}{)}, comparing this to the observed evolution from the full simulation. Note that we use the time-dependent coefficients $\alpha_{ij}(t)$ and $\eta_{ij}(t)$, rather than the time averages shown in Fig.~\ref{c5:fig:Transport coeffs}, since this provides much more information about the details of the evolution. The check is carried out for each realization separately, initializing using the mean-field data and filtering the transport coefficients in time with a Gaussian filter of width $5$ to remove the rapid fluctuations. Results from the first 12 realizations for rotating runs (chosen since the dynamo is stronger than in the nonrotating cases) are shown in Fig.~\ref{c5:fig:Rot evolutions}. The agreement is generally good, with qualitatively similar features between calculated and measured evolution in all realizations, and many cases showing quantitative agreement. In most instances for which there is a substantial divergence between the predicted and observed mean-field evolution, the cause is a slight error building up in $B_{x}$ that is subsequently amplified strongly by the $-SB_{x}$ term in the $B_{y}$ equation.
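The consistency check just described can be sketched as follows, assuming a shear rate normalized to $S=1$ and the single-mode mean-field equations $\mathrm{d}\hat{B}_{x}/\mathrm{d}t=-ik\hat{\mathcal{E}}_{y}$, $\mathrm{d}\hat{B}_{y}/\mathrm{d}t=-S\hat{B}_{x}+ik\hat{\mathcal{E}}_{x}$ (the function and array names are illustrative, not those of the actual analysis code):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def predicted_mode_evolution(t, alpha, eta, B0, k=np.pi, S=1.0, sigma=5.0):
    """Integrate the mean-field equations for the largest Fourier mode using
    *measured* time-dependent transport coefficients.

    t          : times, shape (nt,), uniformly spaced
    alpha, eta : measured coefficients, shape (nt, 2, 2)
    B0         : initial complex mode amplitudes (B_x, B_y)
    For a mode exp(ikz): J_x = -ik B_y, J_y = ik B_x, and
        dB_x/dt = -ik E_y,   dB_y/dt = -S B_x + ik E_x,
    with E_i = alpha_ij B_j - eta_ij J_j.
    """
    dt = t[1] - t[0]
    # Gaussian filter (width sigma in time units) removes rapid fluctuations
    a = gaussian_filter1d(alpha, sigma / dt, axis=0)
    e = gaussian_filter1d(eta, sigma / dt, axis=0)
    B = np.empty((len(t), 2), dtype=complex)
    B[0] = B0
    for n in range(len(t) - 1):
        Bx, By = B[n]
        J = np.array([-1j * k * By, 1j * k * Bx])
        E = a[n] @ B[n] - e[n] @ J
        dB = np.array([-1j * k * E[1], -S * Bx + 1j * k * E[0]])
        B[n + 1] = B[n] + dt * dB        # simple Euler step for the sketch
    return B
```

With constant coefficients and $\eta_{yx}$ below the instability threshold, the integrated mode grows exponentially, as in the coherent realizations of the figure.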
\begin{figure} \includegraphics[width=1\textwidth]{fig7-eps-converted-to} \caption{Evolution of the mean-field magnitude for the first 12 of the ensemble of rotating simulations discussed in the text. Here $B=(\vert\hat{B}_{x}^{1}\vert^{2}+\vert\hat{B}_{y}^{1}\vert^{2})^{1/2}$ is the mean-field magnitude, where $\hat{B}_{i}^{1}$ is the largest-scale Fourier mode of $B_{i}$. In each plot the solid blue curve shows data taken from the simulation. The dashed red curve shows the corresponding expected evolution, using the smoothed calculated values of the transport coefficients (see text). Finally, the dotted black curve illustrates the expected evolution with all $\alpha$ coefficients artificially set to zero. We list the measured mean of $\eta_{yx}$ in each plot to show that larger absolute values (i.e., more negative values) do generally lead to substantially more growth of the mean field, as expected for a coherent dynamo. For reference, at the measured $\eta_{xx}\approx0.006$, the coherent dynamo is unstable below $\eta_{yx}=-0.00036$. \label{c5:fig:Rot evolutions}} \end{figure} In addition to solving for the expected evolution using both the $\eta_{ij}(t)$ and $\alpha_{ij}(t)$ measurements, we present calculations obtained in an identical way, but with the $\alpha_{ij}(t)$ coefficients artificially set to zero. The purpose of this analysis is to examine the degree to which the dynamo is driven by $\eta_{yx}$, rather than by variation in $\alpha$ about its mean of zero. Through a comparison of the curves with and without $\alpha_{ij}$ it is clear that in many realizations of the rotating simulation set, the dynamo is primarily driven by $\eta_{yx}$, as shown by the agreement between dashed and dotted curves. Furthermore, the mean of $\eta_{yx}$ over the time interval (printed on each subfigure; these are taken from Fig.~\ref{c5:fig:Transport coeffs}) agrees nicely with the observed behavior.
That is, large negative values for $\eta_{yx}$ correspond to those realizations with both strong dynamo growth and good agreement between evolution with and without $\alpha$. In contrast, realizations with lower absolute values of $\eta_{yx}$ (i.e., values for which the dynamo is stable) either grow very little or diverge substantially between evolution with and without $\alpha$. This shows that, for realizations in which the magnetic shear-current effect is weaker, a stochastic $\alpha$ effect is sometimes the primary driver. A similar examination of the non-rotating case shows that coherent dynamo growth is much less prevalent. In particular, while the agreement between the true and calculated evolution is satisfactory (similar to Fig.~\ref{c5:fig:Rot evolutions}), there is generally much less mean-field growth and larger differences with calculations for which $\alpha_{ij}$ is artificially set to zero. Since in most realizations $\eta_{yx}$ remains less negative than the threshold at which the coherent dynamo becomes unstable, even after the decrease due to magnetic fluctuations, this is not surprising. We thus conclude that small-scale magnetic fluctuations act to make $\eta_{yx}$ \emph{more negative}, and that in some realizations (or after a sufficiently long time period) a coherent large-scale dynamo develops as a result. This demonstrates that magnetic fluctuations, excited by small-scale dynamo action, can drive large-scale magnetic field generation. The consistency of the numerical simulations with theoretical expectations, as well as the general agreement of measured transport coefficients with observed mean-field evolution, gives us confidence that the observed large-scale dynamo is indeed a coherent effect. The mechanism is the magnetic shear-current effect, arising through the contribution of magnetic fluctuations to the off-diagonal turbulent resistivity $\eta_{yx}$ in the presence of large-scale shear flow.
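For reference, the instability threshold quoted in the figure caption follows from the coherent-dynamo dispersion relation $\gamma = -\eta_{t}k^{2} + k\sqrt{-S\eta_{yx}}$ (for $\alpha=0$, isotropic diagonal resistivity $\eta_{t}$, and $S\eta_{yx}<0$). A short numerical check, assuming the shear rate is normalized to $S=1$:

```python
import numpy as np

def growth_rate(k, eta_t, eta_yx, S=1.0):
    """Real part of the coherent shear-current dynamo growth rate for a mode
    exp(ikz), assuming alpha = 0 and isotropic diagonal resistivity eta_t:
        gamma(k) = -eta_t k^2 + k sqrt(-S eta_yx)   (requires S*eta_yx < 0)."""
    drive = -S * eta_yx
    if drive <= 0:
        return -eta_t * k**2          # no coherent driving: purely diffusive
    return -eta_t * k**2 + k * np.sqrt(drive)

eta_t = 0.006        # measured eta_xx (see text)
k1 = np.pi           # largest-scale mode, k1 = 2*pi/Lz with Lz = 2
S = 1.0              # shear rate, assumed normalized to unity

# Marginal stability: |eta_yx| = eta_t^2 k1^2 / S
eta_yx_crit = -eta_t**2 * k1**2 / S
print(round(eta_yx_crit, 5))   # -0.00036, the value quoted in the figure caption

# Rotating-ensemble mean after saturation (eta_yx ~ -2.0e-4): stable at k1 on
# average, while realizations below the threshold are unstable
print(growth_rate(k1, eta_t, -2.0e-4, S) < 0)
print(growth_rate(k1, eta_t, -5.0e-4, S) > 0)
```

The measured ensemble means thus place the rotating runs close to, but on the stable side of, this threshold, consistent with the realization-to-realization variability described above.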
The most significant limitation of the studies presented in this section is the relatively low Reynolds numbers, which were chosen to be slightly below the transition to self-sustaining turbulence (in the absence of driving noise), as well as for computational reasons (since an ensemble of simulations was required). Specifically, the dynamo is likely far from any asymptotic regime at high $\mathrm{Re}$ and $\mathrm{Rm}$. This is almost certainly true for both the small-scale dynamo and its saturation\footnote{The critical $\mathrm{Rm}$ for onset of the small-scale dynamo with this forcing is $\mathrm{Rm}_{c}\approx 1100$, so we are well away from where one might expect the growth rate or saturated field level to converge \citep{Tobias:2015jp}.}, as well as for the large-scale magnetic shear-current effect itself. Further, the diffusion time scale of the large-scale modes is $t\approx(\bar{\eta}(2\pi/L_{z})^{2})^{-1}\approx 200$, which is only an order of magnitude different from the time scale of growth of the large-scale field ($t\approx30$, as can be seen from Fig.~\ref{c5:fig:Rot evolutions}). The same is true of the separation between the turbulent forcing scale $k_{f}\approx 19$ and that of the large-scale field $k_{1}=2 \pi/L_{z} = \pi$. While such limitations are hardly unique in the dynamo literature, it is obviously pertinent to undertake future studies at much higher resolutions. Further discussion of some of the difficulties involved in truly understanding the astrophysical relevance of the magnetic shear-current effect is given in the next section. \section{Discussion and conclusions}\label{sec:discussion} This paper has revolved around exploration of the ``magnetic shear-current effect'' as a viable mechanism to drive large-scale dynamos in nonhelical shear flows.
The suggestion is that a bath of homogeneous nonhelical \emph{magnetic} fluctuations, influenced by the velocity shear, can cause a dynamo instability through an off-diagonal turbulent resistivity, even if there is no $\alpha$ effect. More specifically, in response to a large-scale azimuthal magnetic field $B_{y}$, a bath of magnetic fluctuations will produce an azimuthal electromotive force $\mathcal{E}_{y}$, proportional to $\partial_{z}B_{y}$. This $\mathcal{E}_{y}$ causes the generation of a radial magnetic field, which in turn amplifies the azimuthal field through stretching by the mean flow (the $\Omega$~effect), resulting in a dynamo instability. The effect rests crucially on the sign of the proportionality between $\mathcal{E}_{y}$ and $\partial_{z}B_{y}$ (termed $\eta_{yx}$)---if the product $\eta_{yx}(\nabla\times \bm{U})_{z}$ is negative, the induced radial field will act to damp, rather than amplify, the azimuthal field. The physical picture for the magnetic shear-current effect---how magnetic fluctuations can interact with velocity shear and a large-scale field gradient to produce an $\bm{\mathcal{E}}$ of the required direction---is somewhat different from dynamo mechanisms described in previous literature (see, for example, \citealt{Brandenburg:2005kla,Yokoi:2013di}). In particular, it relies on the \emph{pressure response} of the fluid to the Maxwell stress $\bm{B}_{T}\cdot \nabla \bm{B}_{T}$. The basic effect arises because $\bm{b}\cdot \nabla \bm{B}$ creates $\bm{B}$-directed velocity perturbations from magnetic perturbations in the direction of magnetic shear (i.e., the gradient direction). This implies that any variation of the $\bm{b}$ perturbation along the $\bm{B}$ direction will create a velocity perturbation with nonzero divergence, leading to a significant pressure response.
Without velocity shear, the response is well known and fundamental for turbulent diffusion; it exactly cancels another term and causes the contribution to the turbulent mean-field resistivity from magnetic fluctuations to vanish (this is also known as the absence of $\beta$-quenching; \citealt{Gruzinov:1994ea}). In the presence of velocity shear, a secondary pressure response, arising due to the stretching of the primary pressure response by the mean shear, causes perpendicular velocity fluctuations that are correlated with the original magnetic fluctuations. The resulting EMF is in the required direction to generate a $\bm{B}$ that is stretched by the shear flow, enhancing the mean field that caused the effect in the first place. Thus, a mean-field dynamo instability can ensue at sufficiently long wavelength. Why is the magnetic shear-current mechanism interesting? We would like to give two answers to this question: the first relates generally to dynamo theory, the second to the specific case of the dynamo seen in simulations of turbulence in accretion disks (the MRI dynamo). \begin{description} \item[General mean-field dynamo theory.~]{Much of mean-field dynamo theory in recent years has focused on the issue of $\alpha$ quenching \citep{Kulsrud:1992ej,Gruzinov:1994ea}. This is specifically related to the adverse influence of small-scale magnetic fields on large-scale dynamo action. Since small-scale dynamos grow faster than large-scale fields above moderate Reynolds numbers, large-scale dynamos may \emph{always} have to grow on a bath of small-scale magnetic fluctuations (\citealt{Cattaneo:2009cx}, but see also \citealt{Tobias:2014ek}).
With this in mind, the magnetic shear-current effect is the first suggestion (of which we are aware) for a large-scale dynamo driven by small-scale magnetic fluctuations (although quenching of the turbulent resistivity can lead to a dynamo with spatial variation of transport coefficients; \citealt{Parker:1993kd, Tobias:1996eu})\footnote{ The magnetic $\alpha$ effect can certainly drive a mean field in isolation. The key point is that its sign is opposite to that of the kinetic effect, and the small-scale dynamo grows such that the two effects cancel. While it may be possible that instabilities would cause a magnetic $\alpha$ effect to overwhelm the kinematic one (for instance, the MRI in the presence of stratification, \citealt{Gressel:2010dj,Park:2012eg}), this remains unclear. In contrast, the magnetic shear-current effect has a fixed sign, arising from the nonhelical part of the fluctuations.}. Thus, in some sense, the effect is the \emph{inverse} of dynamo quenching; rather than magnetic fluctuations overwhelming a desirable kinematic effect, mean-field growth starts after small-scale dynamo saturation, driven by the small-scale field itself. In this work, we have given an example of this interesting behavior through targeted numerical experiments. These illustrate that the magnetic fluctuations resulting from saturation of the small-scale dynamo cause a significant decrease (and in some cases, a sign change) of the crucial $\eta_{yx}$ transport coefficient, which can in turn drive a large-scale dynamo. Study of such magnetic dynamos in direct numerical simulations is confounded by the very short period of exponential growth that can be observed (in contrast with kinematic shear dynamos; \citealt{Yousef:2008ie}; \citetalias{LowRm}), and more work is needed to better assess regimes where the effect might be dominant, or even if it continues to operate at very high Reynolds numbers. 
Nonetheless, it is an interesting possibility that may find application across a wide variety of astrophysical objects. } \item[The~MRI~dynamo.~]{The central regions of accretion disks are unstratified and lack a source of net kinetic or magnetic helicity, implying that an $\alpha$ effect is not possible. In addition, a variety of authors have found from simulation and theory that the crucial $\eta_{yx}$ is of the wrong sign for a kinematic nonhelical shear dynamo (\citealt{Radler:2006hy,Rudiger:2006gx,SINGH:2011ho}; \citetalias{LowRm}). What then is the cause of the apparent large-scale dynamo seen in simulations? While there is the possibility that it is driven by fluctuations in the $\alpha$ coefficients \citep{Vishniac:1997jo,Vishniac:2009il}, we would argue that the magnetic shear-current effect is a more likely candidate: MRI simulations exhibit stronger magnetic than kinetic fluctuations, the Keplerian rotation is favorable for dynamo growth, the velocity shear is obviously important, and the nonlinear behavior of the effect bears strong similarities to mean-field dynamics in unstratified MRI simulations. In addition, the basic importance of $\eta_{yx}$ in the MRI dynamo has been inferred from nonlinear simulations \citep{Lesur:2008cv} and from perturbative calculations of the evolution of MRI modes \citep{Lesur:2008fn}. Our suggestion that small-scale magnetic fields are in fact the primary driver thus ties together formal mean-field dynamo theory with these studies and explains the special importance of strong magnetic fluctuations in MRI turbulence and its dynamo. } \end{description} Some of the most compelling evidence that the magnetic shear-current effect is indeed responsible for the unstratified MRI dynamo comes from statistical simulation of the saturation of MRI turbulence \citep{Squire:2015fk}.
Statistical simulation (\citealt{Farrell:2012jm,Tobias:2011cn}) involves formulating equations for the \emph{statistics} of the small-scale fields $\bm{u}$ and $\bm{b}$ in the presence of the mean fields ($\bm{U}$ and $\bm{B}$), and solving these rather than evolving a single turbulent realization. Importantly for the shear dynamo, this completely eliminates the possibility of a stochastic-$\alpha$ effect, since the $\bm{\mathcal{E}}$ that drives $\bm{B}$ is calculated directly from fluctuation statistics. Coupled with the fact that the kinematic effect is too weak to explain the dynamo \citepalias{LowRm}, it is clear that the magnetic shear-current effect is the \emph{only possible} field generation mechanism in these calculations. Despite this, the agreement with nonlinear simulation is very good (see Fig.~2 of \citealt{Squire:2015fk}). Most important is the observed strong increase in the saturated mean $\bm{B}$ field, and consequently in the turbulent angular momentum transport, as the magnetic Prandtl number is increased at fixed $\mathrm{Re}$. This counterintuitive trend has been the source of much discussion in the MRI turbulence literature (see, for example, \citealt{Lesur:2007bh,Fromang:2007cy,Meheut:2015it}). The considerations above illustrate that it is, at least in part, a consequence of the $\mathrm{Pm}$ dependence of the saturation of the magnetic shear-current effect. Looking past the unstratified MRI dynamo, we might wonder about other applications of the magnetic shear-current effect. Large-scale velocity shear is inescapable in the universe due to the influence of gravity, while the generic instability of the small-scale dynamo at large Reynolds numbers implies that plasma turbulence should always be accompanied by small-scale magnetic fluctuations in near equipartition with velocity fluctuations \citep{Schekochihin:2007fy}.
However, the simulations discussed in Sec.~\ref{sec:numerical evidence} are intended to illustrate that the magnetic shear-current effect is \emph{possible}, not necessarily that it should be important in every situation. Unfortunately, estimating the relevance of the effect in astrophysical scenarios in any detail requires more knowledge about its dependence on physical parameters---particularly the Reynolds numbers (and magnetic Prandtl number). There are numerous complicating factors that will arise in estimating these dependencies. Most obvious is the variation of transport coefficients themselves (especially $\eta_{yx}$) for a given level of magnetic fluctuations. While it is certainly encouraging that a variety of different methods agree on the sign of $\eta_{yx}$, most results are truly valid only at low Reynolds numbers\footnote{The spectral $\tau$~approximation (which predicts $(\eta_{yx})_{b}<0$ and $\left|(\eta_{yx})_{b}\right| \gg \left|(\eta_{yx})_{u}\right|$; \citealt{Rogachevskii:2004cx}) is nominally valid at high Reynolds number, but its accuracy and reliability remain unclear (see, for example, \citealt{Radler:2007hp}).}. More subtly, the relevance of the effect could depend significantly on the saturation level of the small-scale dynamo, which would be especially important if the kinematic shear-current effect has the incorrect sign for dynamo action ($(\eta_{yx})_{u}$ may change sign with Reynolds number; see \citealt{Brandenburg:2008bc}). This saturation level presumably depends on $\mathrm{Pm}$, but may also change under the influence of velocity shear (at least at the larger of the small scales), an effect that may become significant only at very high Reynolds numbers \citep{Tobias:2014ek,Cattaneo:2014jg}. Finally, magnetic helicity and its transport are a cornerstone of modern dynamo theory \citep{Vishniac:2001wo,Field:2002fn}, but have not been explored in our work thus far due to the focus on the linear phases of the dynamo instability. 
Such effects will be important to consider in future studies of the saturation and nonlinear evolution of magnetic shear-current dynamos \citep{Rogachevskii:2006hy}. Overall, given the general difficulty of even measuring growth rates for magnetically driven large-scale dynamos, it seems that the magnetic shear-current effect will provide a variety of rich and interesting avenues for future exploration. \acknowledgements The authors would like to thank J.~Krommes, J.~Goodman, H.~Ji, G.~Hammett, and A. Schekochihin for enlightening discussion and useful suggestions, as well as G.~Lesur for distribution of the {\scshape Snoopy} code. JS acknowledges the generous support of a Burke Fellowship and the Sherman Fairchild Foundation at Caltech, as well as a Procter Fellowship at Princeton University. This work was funded by U.S. Department of Energy Grant No. DE-AC02-09-CH11466 and computations were carried out on the Dawson cluster at PPPL.
\section{Outline of PICO-LON project} PICO-LON (Pure Inorganic Crystal Observatory for LOw-background Neutr(al)ino) aims to search for WIMPs by means of highly radio-pure NaI(Tl) scintillators. NaI(Tl) scintillator is well suited to WIMP searches because all of its nuclei are sensitive to both spin-dependent and spin-independent interactions. NaI(Tl) offers the further advantages of low intrinsic background and straightforward operation at room temperature. The DAMA/LIBRA group is continuously searching for the signal of WIMPs with highly radio-pure, large-volume NaI(Tl) crystals \cite{DAMA/LIBRA}. They developed highly radio-pure NaI(Tl) crystals which contain only a few ppt of U- and Th-chain isotope impurities and less than 20 ppb of natural potassium \cite{DAMA_NIM}.
Many other groups are also developing highly radio-pure NaI(Tl) crystals to search for WIMPs; however, their sensitivity to WIMPs suffers from a large amount of $^{210}$Pb contamination \cite{ELEV, ANaIS, DM-ICE, KIMS}. Recently, the PICO-LON group established a method to reduce $^{210}$Pb in the NaI(Tl) crystal. One of the most serious sources of background was thereby removed, and further purification and low-background tests were performed. The final set-up of the PICO-LON detector is planned to consist of 42 modules of large-volume NaI(Tl) detectors, each 12.70 cm$\phi\times$12.70 cm. The total mass of the detector system is sufficient to test the annual-modulation signal reported by DAMA/LIBRA \cite{DAMA_TAUP2015}. Each NaI(Tl) crystal is viewed by a single photomultiplier tube (PMT) in order to reduce the background events from the PMTs. In the following sections we present recent progress on crystal purification and the results of a low-background test measurement. \section{Development of low background NaI(Tl) scintillator} Purification of the NaI(Tl) ingot is the most important task in developing a high-sensitivity WIMP detector, because radioactive impurities (RIs) in the NaI(Tl) crystal seriously degrade the sensitivity. The RI concentrations in a crystal scintillator should be below a few tens of $\mu$Bq/kg for the crystal to be usable in a dark matter search. Contamination by $^{210}$Pb is a particularly serious background because it emits low-energy beta rays ($E_{max}=17$ keV and 63.5 keV), a low-energy gamma ray with its conversion electrons ($E_{\gamma}=46.5$ keV), and L X-rays below 16 keV. Its progeny $^{210}$Bi emits a high-energy beta ray ($E_{max}=1162$ keV) which produces bremsstrahlung photons. All the radiation associated with $^{210}$Pb severely reduces the sensitivity to a WIMP signal. 
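To make the $\mu$Bq/kg scale concrete, the specific activity corresponding to a trace concentration can be estimated from the decay constant alone. The following sketch (ours, not part of the original analysis) converts a ppt-level mass concentration of the head of the U or Th chain into a specific activity in $\mu$Bq/kg:

```python
import math

N_A = 6.022e23   # Avogadro's number [1/mol]
YEAR = 3.156e7   # seconds per year

def specific_activity(ppt, molar_mass, half_life_yr):
    """Specific activity [uBq/kg] of a trace isotope at `ppt` (1e-12 g/g)."""
    grams_per_kg = ppt * 1e-12 * 1e3              # grams of isotope per kg of crystal
    atoms = grams_per_kg / molar_mass * N_A       # number of isotope atoms per kg
    lam = math.log(2) / (half_life_yr * YEAR)     # decay constant [1/s]
    return atoms * lam * 1e6                      # Bq/kg -> uBq/kg

# Head-of-chain activity for 1 ppt of 238U and of 232Th:
print(specific_activity(1.0, 238.0, 4.468e9))    # ~12.4 uBq/kg
print(specific_activity(1.0, 232.0, 1.405e10))   # ~4.1 uBq/kg
```

A ppt-level concentration thus corresponds to roughly ten $\mu$Bq/kg, which is why the crystal-growth materials must be purified to the ppt scale.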
Although it is quite difficult to reduce the concentration of $^{210}$Pb, we have successfully done so by chemically processing the raw NaI powder. Pb ions were removed from the raw powder with a cation-exchange resin optimized for Pb removal. The raw NaI powder was dissolved in ultra-pure water at a concentration of 300 g/liter, and the NaI solution was poured into a column filled with the cation-exchange resin. The process parameters were optimized for lead-ion reduction through several trials. The processed solution was dried in a rotary vacuum evaporator, whose vacuum was broken with high-purity nitrogen gas to avoid contamination by $^{222}$Rn in the air. As a result, the concentration of $^{210}$Pb became as small as $24\pm2$ $\mu$Bq/kg. The U-chain ($^{238}$U and $^{226}$Ra) and Th-chain ($^{228}$Th) activities were effectively reduced by purifying the raw material of the graphite crucible. The graphite was selected on the basis of U, Th and K measurements; however, its purity turned out to be insufficient, since significant U-chain and Th-chain contamination was observed. The graphite was therefore further purified by baking at 3000 K\@. The concentrations of $^{226}$Ra and $^{228}$Th were successfully reduced to $58\pm4$ $\mu$Bq/kg and $1.5\pm1.9$ $\mu$Bq/kg, respectively. \section{Low background measurement in Kamioka underground observatory} The NaI(Tl) ingot was shaved and polished into a 7.62 cm$\phi\times$7.62 cm cylinder. A quartz light guide 4 mm thick was glued on top of the cylindrical ingot. All other surfaces of the ingot were covered with a 4 mm thick PTFE reflector to guide the scintillation photons to the light guide. The ingot and the light guide were enclosed in 0.08 cm thick oxygen-free high-conductivity copper (OFHC). 
The NaI(Tl) detector was surrounded by a passive shield of 5 cm thick OFHC copper and 20 cm thick aged lead; no active shield was installed in the present measurement. The minimum thickness of the lead shield was 18 cm. Fast neutrons were thermalized and absorbed by 5 cm thick borated polyethylene. Pure nitrogen gas evaporated from liquid nitrogen was flushed into the inner volume of the shield to purge radon. A schematic drawing of the detector system is shown in Figure \ref{fg:Det}. \begin{wrapfigure}{r}{0.5\textwidth} \centering \includegraphics[width=0.9\linewidth, bb=0 0 714 836]{Kamioka_Geom.pdf} \caption{ Geometry of the present measurement in Kamioka underground observatory. } \label{fg:Det} \end{wrapfigure} The low-background measurement started in the summer of 2015 in the Kamioka underground laboratory (36$^{\circ}$25'N, 137$^{\circ}$18'E), located at a depth of 2700 m water equivalent, within the area of the KamLAND experiment. The air in the experimental room was kept at cleanliness class 10 with a HEPA filter. The cosmic-ray flux is reduced by a factor of $10^{-5}$ relative to that at a surface laboratory. A low-background photomultiplier tube (PMT), Hamamatsu Photonics R11065-20, was attached to the light guide with optical grease. The U- and Th-chain concentrations in the PMT were less than 10 mBq/module, and the quantum efficiency was as high as 30\% at a wavelength of 420 nm. The PMT output pulse was fed into the fast data-acquisition system MoGURA (Module for General Use Rapid Application) \cite{MoGURA} to digitize the pulse shape. The trigger for the data-acquisition system was produced by a timing filter amplifier (TFA) with a 200 ns integration time. Fast noise pulses below the single-photoelectron level are effectively removed by the TFA, and the trigger rate was reduced by about two orders of magnitude. 
Energy calibration in the higher energy range was performed with $^{133}$Ba and $^{40}$K (KCl) sources. The energy resolution at 1.46 MeV was 6.9\% full width at half maximum (FWHM). \begin{wrapfigure}{r}{0.5\textwidth} \begin{center} \includegraphics[width=0.89\linewidth, bb=0 0 1463 1040]{BGspectrum.pdf} \end{center} \caption{\label{Spe}The energy spectra obtained with a $^{133}$Ba source (upper, orange) and from the background measurement (lower, green). } \end{wrapfigure} The low-background measurement accumulated a live-time exposure of 7 days $\times$ 1.2 kg. The energy spectra of the calibration and low-background measurements are shown in Figure \ref{Spe}. The background energy spectrum was well reproduced by a Monte Carlo simulation using the RI concentrations of the surrounding materials. The present energy threshold was 10 keV$_{ee}$, and the event rate at threshold was 8 keV$^{-1}$kg$^{-1}$day$^{-1}$. \section{Future prospects} We developed a highly radio-pure NaI(Tl) crystal to search for cosmic dark matter. The U-chain and Th-chain RIs were sufficiently reduced by purifying the raw NaI powder and the graphite crucible. A significant potassium impurity was, however, observed in the low-background measurement: the Monte Carlo simulation is consistent with 2.6 ppm of potassium contained in the NaI(Tl) crystal, a concentration too large for the crystal to be used in the dark matter search. A chemical process to remove the potassium from the raw NaI powder is now in progress. The background from the surrounding materials is the next important issue. All materials to be used for the detector are screened by measuring gamma rays from samples. We have started a collaboration with the XMASS group to lower the background from the PMTs. The extensive search for low-background materials will be finished in the beginning of 2016, and a low-background PMT will be developed for PICO-LON in 2016. 
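As a quick numerical cross-check of the figures quoted above (our sketch; the $1/\sqrt{E}$ resolution scaling is an assumed photostatistics-limited behaviour, not a claim of the measurement):

```python
import math

fwhm_ref, E_ref = 6.9, 1460.0   # % FWHM measured at 1.46 MeV
rate = 8.0                      # events keV^-1 kg^-1 day^-1 at threshold
exposure = 7.0 * 1.2            # live time [days] x detector mass [kg]

def fwhm_percent(E_keV):
    # Photostatistics-limited scaling FWHM/E ~ 1/sqrt(E) (an assumption)
    return fwhm_ref * math.sqrt(E_ref / E_keV)

print(fwhm_percent(10.0))   # ~83 % near the 10 keVee threshold
print(rate * exposure)      # ~67 events per keV collected at threshold
```

Even with this short exposure, the per-keV statistics at threshold are at the tens-of-events level, which is what allows the Monte Carlo background model to be compared with the data.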
A full background simulation of the 250 kg PICO-LON setup is now ongoing. The details of the detector design are being fixed in discussion with Horiba and Hamamatsu Photonics. The design will be optimized to ensure background rejection by anti-coincidence measurement of background events such as the potassium 1461 keV gamma ray and 3 keV X-ray. \section{Acknowledgment} The authors thank Professor S.~Nakayama for fruitful discussions and encouragement. The authors also thank the Kamioka Mining and Smelting Company for supporting activities in the Kamioka mine and Horiba Ltd.\ for making the NaI(Tl) detectors. This work was supported by Grant-in-Aid for Scientific Research (B) number 24340055 and Grant-in-Aid for Scientific Research on Innovative Areas number 26104008. The work was also supported by the Creative Research Project of the Institute of Socio-Arts and Sciences, Tokushima University. The corresponding author thanks the Nogami Fund at RCNP, Osaka University, for travel support to attend TAUP 2015. \section*{References}
1512.04523
\section*{Abstract} This document describes physics in an entirely geometric presentation: the whole theory is built from nothing more than a pseudo-Riemannian manifold $(\mathscr{M}, g)$ of dimension $n > 5$ whose tensor $g$ is, in the domains studied, almost everywhere of signature $(-, -, +, \dots, +)$. No object is added to this space-time, and no general principle is assumed. The properties we impose on certain domains of $(\mathscr{M}, g)$ are simple geometric constraints, essentially based on the concept of ``curvature''. These geometric properties allow us to define, depending on the case considered, objects (often represented by tensors) analogous to those of classical physics, but built here solely from the tensor $g$. The links between these objects, which follow from their natural definitions, yield, by applying standard theorems of pseudo-Riemannian geometry, all the equations governing the physical phenomena usually described by the classical theories, including general relativity and quantum physics. The purely geometric approach to quantum phenomena introduced here is profoundly different from the standard one. No Lagrangian or Hamiltonian is used. This document ends with a quick presentation of our approach to the complex quantum phenomena usually studied by quantum field theory. \newpage \section*{Résumé} This text proposes a description of physics founded entirely on a geometric presentation: the whole theory is described from the mere data of a pseudo-Riemannian manifold $(\mathscr{M},g)$ of dimension $n>5$ whose tensor $g$ is, in the domains studied, almost everywhere of signature $(-,-,+,\dots,+)$. No \begin{it}object\end{it} is added to this \begin{it}space-time\end{it}, and no general principle is assumed. The particular properties imposed on certain domains of $(\mathscr{M},g)$ are simple geometric conditions essentially based on the notion of curvature. 
These \begin{it}geometric properties\end{it} make it possible to define, according to the cases considered, \begin{it}objects\end{it} (often represented by tensors) that resemble those of classical physics but which, here, are built solely from the tensor $g$. The dependence links between these objects, which come from their natural definitions, yield, by the mere application of standard theorems of pseudo-Riemannian geometry, all the equations that govern the description of the physical phenomena usually described by the classical theories, including general relativity and quantum physics. No \begin{it}Lagrangian\end{it} or \begin{it}Hamiltonian\end{it} is used. The domains of the space-time $(\mathscr{M},g)$ under study are locally diffeomorphic to $\Theta\times K$, where $\Theta$ is an open set of $\mathbb{R}^4$ and $K$ is a compact manifold. The first negative sign of the signature of $g$ relates to $\Theta$; the second negative sign (corresponding to another notion of ``time'') relates to $K$, and the latter is an essential ingredient in the description of electromagnetism. Two different types of approximation recover the usual equations of physics: - A first type of approximation consists in neglecting certain phenomena linked to the compact manifold $K$; this recovers the equations of non-quantum physics (general relativity including electromagnetism). - A second type of approximation consists in assuming that the tensor $g$ is ``tied'' to the Minkowski metric on $\Theta$, while taking precise account of the characteristics of the compact manifold $K$. One then recovers, in particular, the qualitative and quantitative results usually obtained by classical quantum physics, without using any of the latter's axiomatics. 
The purely geometric view of quantum phenomena proposed here is profoundly different from that of the standard theories. This text ends with a quick presentation of the way in which, with the theory presented here, one approaches the study of the complex quantum phenomena treated by quantum field theory. \newpage \section*{Foreword} \addtotdm{Foreword} What I am going to present here can be considered as the continuation of what was written in the manuscripts \cite{vaugon-1}, \cite{vaugon-2} and \cite{vaugon-3}, then in the article \cite{steph-1}. However, the presentation will be such that it is not necessary to consult them in order to read this paper. \bigskip \noindent This theory was developed with the friendly participation of: \medskip \begin{itemize} \item[$-$] Stéphane Collion: PhD in Mathematics, agrégé in Mathematics, airline captain at Air France. \smallskip \item[$-$] Marie Dellinger: PhD and agrégée in Mathematics, teacher in classes préparatoires at the ENCPB. \smallskip \item[$-$] Zoé Faget: PhD in Mathematics, PhD in Computer Science, lecturer (Maître de conférences) at the Université de Poitiers, seconded to CPGE. \smallskip \item[$-$] Emmanuel Humbert: PhD and agrégé in Mathematics, professor and thesis supervisor at the Université de Tours. \smallskip \item[$-$] Benoît Vaugon: mathematician, physicist, PhD in Computer Science. \smallskip \item[$-$] Claude Vaugon: agrégée teacher of Mathematics at the Lycée Jean de La Fontaine, Château-Thierry. \end{itemize} \bigskip \newpage \section*{Introduction} \addtotdm{Introduction} This paper begins with a quick account of the considerations that led us to the theory presented here. 
They are tied to my (personal) view of the physics of the last century, which I summarize in three characteristic stages concerning what is commonly called ``classical'' (non-quantum) physics. \paragraph{1\up{st} stage}$ $ \smallskip Space-time is modelled by $\mathbb{R} \times \mathbb{R}^3$ (or better, by an affine space): $\mathbb{R}$ for time, considered absolute, and $\mathbb{R}^3$ for space, assumed equipped with the Euclidean scalar product. In this space exist ``physical objects'' modelled by ``curves'' of the space (for particles, for example) and by tensor fields: functions, vector fields, differential forms, etc. (for ``fluids'', electric and magnetic fields, mass- or charge-density functions, for example). These physical objects are considered to have no influence on the notions of time and distance, which are given absolutely in $\mathbb{R} \times \mathbb{R}^3$. The objects are governed by \textbf{laws} and obey certain \textbf{principles}. These laws are written relative to particular observers of space-time, often called ``Galilean observers''. One may cite Newton's laws for gravitation and Maxwell's laws for electromagnetism. The admitted principles are: invariance of the laws under changes of Galilean observer, and homogeneity and isotropy of space. This model fails essentially because of the experimental observation of the absolute constancy of the speed of light and the fact that Maxwell's equations are not invariant under changes of Galilean observer. This leads to the second stage. 
\paragraph{2\up{nd} stage} (special relativity) \smallskip Space-time is modelled by $\mathbb{R}^4$ (or better, an affine space), now equipped with a Lorentz quadratic form: $q(t, x, y, z) = -c^2 t^2 + x^2 + y^2 + z^2$. The notions of space and time are then intimately linked (time is no longer absolute). This model makes the absolute constancy of the speed of light perfectly coherent. Physical objects are modelled as presented in the first stage, and they still have no influence on the notions of time and distance given by the Lorentz quadratic form. Maxwell's laws now obey the new principle of invariance under Lorentz transformations (which replace the Galilean transformations). However, Newton's laws, which describe gravitational phenomena, become wholly unsuited to this new space-time (equipped with the Lorentz quadratic form). The fundamental idea that solves the problem lies in the third stage. \paragraph{3\up{rd} stage} (general relativity) \smallskip Space-time is now modelled by a manifold $\mathscr{M}$ of dimension 4 equipped with a Lorentzian tensor $g$ whose signature, at each point of $\mathscr{M}$, is $(-, +, +, +)$. In other words, the tangent space at each point of $\mathscr{M}$ carries a Lorentz quadratic form. Physical objects are again represented by tensor fields defined on the manifold $\mathscr{M}$. To each object present in a domain of the manifold is associated its ``energy-momentum'' tensor, which is a field of quadratic forms. The fundamental law of physics is given by the Einstein equation, which states that the energy-momentum tensor characterizing the objects in a domain of the manifold equals the Einstein curvature (for a suitable choice of ``units''). 
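In the notation used later in this text, where the Einstein curvature is written $(Ricc - \frac{1}{2}S g)$, this fundamental law reads schematically (our display, for the reader's convenience):

```latex
Ricc \;-\; \frac{1}{2}\,S\,g \;=\; T ,
```

where $T$ denotes the energy-momentum tensor of the objects present in the domain, the units being chosen so that the coupling constant is $1$ (the ``suitable choice of units'' mentioned above).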
There is thus now a close link between the notion of space-time and the physical objects themselves. Electromagnetism enters this space-time very naturally: the ``object'' characterizing it is a differential $2$-form $F$, to which one associates its energy-momentum tensor. The $2$-form $F$ is assumed to satisfy ``Maxwell's laws'', whose differential operators are now expressed in terms of the Lorentzian tensor $g$. This representation of physics works very well for gravitation and electromagnetism (neglecting quantum effects). In particular, it describes unexpected phenomena, represented by ``singularities'' of space-time, such as the big bang, black holes, etc. Of course, the models presented in the first and second stages then appear as approximations of particular domains of the theory of general relativity. Note that in this theory no ``principle'' survives: homogeneity and isotropy, or more generally invariance under the action of certain isometry groups that one sometimes assumes, are only \textbf{approximations} that allow approximate computations but are obviously not general principles. \bigskip In the three stages just described, the ``laws of physics'' are given by differential equations linking the objects one has chosen to introduce into space-time. Another point of view consists in using the \textbf{Lagrangian principle}: instead of giving the laws of physics axiomatically as differential equations, one gives, for a domain of space-time containing physical objects, an \textbf{action} that characterizes the behaviour of these objects. The action is a real-valued map whose ``variables'' are the objects considered. 
The ``axiom'' then consists in saying that some of these objects form a stationary point of the action (minimize it, for example). Mathematically, this means that these objects satisfy differential equations (those one could have taken as axioms). In short, instead of giving the differential equations directly, one prefers to give an ``action'' from which the equations are derived. One interest of this point of view is that the expression of the ``action'' is often more ``aesthetic'', even more ``intuitive'', than the differential equations themselves (although in many cases, historically, the equations were found before the action was guessed). The notion of ``Lagrangian'' has other advantages, however: it sometimes considerably simplifies the presentation of computations in certain physical problems; but the importance currently granted to the notion of Lagrangian (and to its transformation into a Hamiltonian) is essentially due to the fact that this notion is indispensable in the axiomatics of the physical theories treating phenomena not described by ``classical physics'', which I will call ``quantum phenomena''. Among the theories that attempt to describe quantum phenomena I would list: classical quantum mechanics, quantum field theory, string theory, etc. These theories have developed (say, over the last century) in parallel with the classical theories I presented in stages $2$ and $3$. The procedures used to describe the observed (quantum) phenomena are very different from those used in classical physics. 
Although here too one very often starts from an ``action'' attached to the physical objects under study, the axiomatics used is far removed from that of classical physics: one no longer looks for ``stationary points'' of the action, but describes processes which, starting from the (modified) action, produce probability densities for the characteristic quantities of these objects. In fact, these theories have been the subject of considerable work in recent decades and certainly cannot be summarized in a few lines. A more precise description would be of little interest for us, because it is with a \textbf{completely different point of view} that we will describe quantum phenomena. This point of view may rather be considered as an extension of the theory of general relativity; however, many points in common with quantum field theory will appear naturally, as the reader familiar with that theory will notice.\\ In all the physics presented in this paper, the notion of Lagrangian-Hamiltonian is \textbf{completely abandoned}. It is, in a way, replaced by the notion of ``geometric type'', which is conceptually profoundly different. What I am going to write now can be considered as the stage following those I have begun to present, and one may, for the moment, disregard what I have said about quantum phenomena. \paragraph{4\up{th} stage}$ $ \smallskip Space-time is modelled by a manifold $\mathscr{M}$ of dimension $n > 5$, equipped with a pseudo-Riemannian tensor $g$ defined almost everywhere on $\mathscr{M}$ (a comment on this choice is given in appendix \ref{a3.8}). \textbf{No physical object is ``added'' to this space-time}. The ``physical objects'' studied, which correspond to the usual notions, are nothing but characteristics of the geometry of the pseudo-Riemannian manifold $(\mathscr{M}, g)$. 
\textbf{No law, no principle is postulated}. The equations linking the physical objects (which are defined solely from the geometry of $(\mathscr{M}, g)$) are nothing but results given by standard mathematical theorems on pseudo-Riemannian manifolds (often consequences of the Bianchi identities in the case of ``non-quantum'' physics). The pseudo-Riemannian manifold $(\mathscr{M}, g)$ is thus assumed to be ``totally anarchic''. Doing physics in this ``totally anarchic space-time'' amounts to \textbf{observing} that certain domains have particular geometric characteristics. These particular characteristics allow one to define ``objects'' (which make sense only in this type of domain), and the equations linking these objects are then mathematical consequences of their definitions. Let us give a first simple example to make this precise. For this example the reader may take $(\mathscr{M}, g)$ to be the usual space-time of general relativity, for which $\dim \mathscr{M} = 4$ and $g$ has signature $(-, +, +, +)$, although this will no longer be the case afterwards. Suppose that at each point of a domain $\mathscr{D}$ of $\mathscr{M}$, the Einstein curvature (considered as an endomorphism of the tangent space) admits a negative eigenvalue whose eigenspace is one-dimensional and timelike. Suppose moreover that it vanishes on the subspace $g$-orthogonal to this eigenspace. One can then define, without ambiguity, on this domain: \begin{itemize} \item A function $\mu: \mathscr{D} \rightarrow \mathbb{R}$ which, to each point of $\mathscr{D}$, assigns the absolute value of the eigenvalue of the Einstein curvature. \item A vector field $X$ which, at each point of $\mathscr{D}$, is the unit vector (i.e.\ satisfying $g(X, X) = \text{-}1$), oriented toward the future in time, of the one-dimensional eigenspace. 
\end{itemize} Such a pair $(\mathscr{D}, g)$ will be called a domain of type ``pressureless fluid'' because, returning to the usual language of physics, the function $\mu$ will be, by definition, the \textbf{energy-density function} of the fluid. The flow of the vector field $X$ will be the \textbf{flow of the fluid}. The choice of the vanishing of the Einstein curvature on the subspace $g$-orthogonal to the eigenspace expresses the vanishing of the pressure. The reader may then check (after some computations) that the mere application of the second Bianchi identity to the Einstein curvature recovers the standard equations of general relativity for pressureless fluids, which thus appear here as a simple mathematical consequence of the definitions given. (Of course, this example will be taken up again later in a more general setting.) Here is a second example, which describes an essential ingredient in the description of certain quantum phenomena. Here the dimension of $\mathscr{M}$ is necessarily $> 5$ and the signature of $g$ is particular: a domain $\mathscr{D}$ of $(\mathscr{M}, g)$ will be said to be of type ``oscillating metric in a neutral potential'' if the pseudo-Riemannian metric $g$ is conformal to a metric $g_0$ (that is, of the form $g = f g_0$ where $f: \mathscr{D} \rightarrow \mathbb{R}^+$) and moreover has constant scalar curvature equal to that of $g_0$. The pseudo-Riemannian metric $g_0$ is a ``reference'' metric, the one with which measurements are made; it will be chosen so that, restricted to the ``apparent space'' (of dimension $4$), it is the Minkowski metric (although this can be generalized). Such a domain (slightly modified by ``singularities'') will represent what is called, in the usual language, ``particles in vacuum''. 
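For the first example, the verification left to the reader can be sketched as follows (in our notation). Writing the Einstein curvature of the ``pressureless fluid'' domain as

```latex
G \;=\; \mu\, X^\flat \otimes X^\flat , \qquad g(X, X) = -1 ,
```

the second Bianchi identity $\operatorname{div} G = 0$ splits, after contracting with $X$ and projecting $g$-orthogonally to $X$, into

```latex
X(\mu) + \mu\,\operatorname{div} X = 0
\qquad\text{and}\qquad
\nabla_X X = 0 ,
```

i.e.\ conservation of the energy density along the flow, and geodesic flow lines: precisely the standard dust equations of general relativity.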
Unlike the first example, it is difficult at first sight to see the link between such a domain $(\mathscr{D}, g)$ and the usual notion of particles, and indeed the theory to be presented will depart clearly from the standard quantum theories of particles. All this will of course be detailed in chapter 2 of this paper, where it will be shown in particular that, with this new ``viewpoint'', one recovers the qualitative and quantitative description of the experiments of classical quantum physics, obtaining ``Klein-Gordon''-type equations which give, in approximation, the standard Schrödinger equations describing the behaviour of particles in vacuum or in a potential. Physics as I will present it thus reduces to the search for domains of space-time whose (geometric) type is ``humanly interesting''. As I have already said, a \textbf{type} on a domain is a \textbf{geometric} condition assumed on that domain (I have just given two examples). It will be ``humanly interesting'' if it is ``sufficiently deterministic''. This last notion can be defined mathematically as follows (it would deserve much more precise definitions than the one I am about to give, but that is not the purpose of this paper (see appendix \ref{a3.9})): a typed domain $(\Omega, g)$ is ``sufficiently deterministic'' if the knowledge of certain geometric quantities on a subdomain $\Omega'$ of $\Omega$ entails the knowledge of these same quantities on all of $\Omega$. In pseudo-Riemannian geometry, the results that yield properties of this kind are called \textbf{rigidity theorems}. 
In practice, this amounts to showing that the geometric conditions defining the type yield equations for the geometric quantities concerned that have unique solutions for specified ``initial conditions'' (namely the data of these quantities on $\Omega'$). One must not forget that $(\Omega, g)$ contains the usual notion of ``time'', and such rigidity results mean, in common language, that identical initial conditions on the quantities always give the same time evolution of these quantities, which justifies the ``humanly interesting'' character of these typed domains. It is important to note that the typed domains I will define are, in general, only \textbf{approximations}. Returning to the first example, one may consider that in reality there exist no domains that are exactly of type ``pressureless fluid'', but that in certain circumstances such domains are a good approximation (used, for example, at very large scale to describe the expansion of the universe in a domain of space-time containing a singularity of ``big-bang'' type). The second example, defining a domain of type ``oscillating metric'', can of course also be considered an approximation (the domain $\Omega$ of this example may possibly be viewed as a subdomain of the first example, but in that case the chosen pseudo-Riemannian tensor corresponds to a ``local approximation'' totally different from the ``global approximation'' of the first example). In fact, the typed domains I will define split naturally into two classes, stemming from the fact that experiments fall into two distinct categories: \begin{itemize} \item Those whose \textbf{measurements} of the quantities concerned do not modify their course at all (or have only a negligible influence on it). 
These experiments are commonly described by classical physics. \item Those whose \textbf{measurements} fundamentally modify the course of the experiment. In this case the description of the experiment is more delicate, since it must include the measurement process itself. I will use the terminology ``quantum phenomenon'' when dealing with this type of experiment in chapter \ref{part:deux}. \end{itemize} The typed domains corresponding to the first category, describing ``classical physics'', will often be defined from the Einstein curvature: $(Ricc - \frac{1}{2}S g)$, where $Ricc$ is the Ricci curvature, $S$ the scalar curvature and $g$ the pseudo-Riemannian tensor. This is very natural, since the second Bianchi identity says simply that the divergence of the Einstein curvature vanishes. When this vanishing-divergence property can be ``transmitted'' to a vector field defined canonically from the Einstein curvature (for a choice of typed domain), Stokes' theorem gives a \textbf{conservation law} for the quantity tied to the flux of this vector field; this is an important feature that will allow one to consider such a typed domain as sufficiently deterministic. It is because of this simple vanishing-divergence property of the Einstein curvature that the latter will appear more naturally than the Ricci curvature in the definitions of the typed domains presented in chapter \ref{part:un}. For the typed domains corresponding to the second category, we will see that it will be important to know precisely the pseudo-Riemannian tensor $g$ itself (and not only the Einstein curvature, for example). 
We will restrict ourselves (for the moment) to typed domains for which the determination of the tensor $g$ reduces to solving \textbf{linear} differential equations (for Chapter \ref{part:deux}); this will suffice to make the link with the Klein-Gordon and Schrödinger equations of standard quantum physics. For these same typed domains, which correspond to the second category and describe quantum phenomena, the subset of spacetime $\mathscr{M}$ on which the pseudo-Riemannian tensor $g$ is not defined will come into play. The parts of this subset (which will be submanifolds of $\mathscr{M}$ of dimension $< n$, hence of measure zero) will be called \textbf{singularities} of the tensor $g$. (These singularities of $g$ must not be confused with the ``singularities'' corresponding to the notions of ``big-bang'', ``black holes'', etc., which are not regarded as parts of $\mathscr{M}$.) No law will govern these singularities of $g$ (spacetime is totally ``anarchic'' there); they are what will produce the ``indeterminism'' in quantum phenomena (on the other hand, spacetime in the neighborhood of these singularities will often have a precise description tied to the ``type'' under consideration). They will introduce ``localization'' properties for the domains corresponding to the notion of ``particles''. \bigskip The manifold $\mathscr{M}$ representing spacetime will be of ``large dimension'' ($n$ probably $\geqslant 10$). The ``sufficiently deterministic'' typed domains, which will in particular allow us to recover the usual results of physics, will be locally diffeomorphic to $\Theta \times S^1 \times W$, where $\Theta$ is an open subset of $\mathbb{R}^4$, $S^1$ is the circle and $W$ is a compact manifold.
We will see, moreover, that ``spin'' phenomena can be described precisely if one decomposes $W$ in the form $S^3 \times V$, where $S^3$ is the standard $3$-sphere. The pseudo-Riemannian tensor $g$ will almost everywhere have signature $(-, +, +, +, -, +, \cdots, +)$, which will give a notion of ``double'' time attached to the two $(-)$ signs (placed arbitrarily in the first and fifth positions). In many cases (and when $g$ is transported locally onto $\Theta \times S^1 \times W$) this double time will be parametrized by $(t, u) \in \mathbb{R} \times S^1$: ``$t$'' can be identified with ordinary time, while ``$u$'' is an entirely new notion of time. We will see that all the notions related to electromagnetism arise from this new notion of time parametrized by $u \in S^1$ (regarded here as the fifth dimension), and this both for the typed domains corresponding to classical physics and for those describing quantum phenomena. In ``quantum phenomena'', the notion of \textbf{mass} will be introduced as a \textbf{frequency} attached to the ordinary notion of time $t \in \mathbb{R}$, and the notion of electric charge as a \textbf{frequency} attached to the new notion of time $u \in S^1$ (which will force the electric charge to be an integer multiple of an elementary charge). It is this choice that will allow us to recover, as an approximation, the results of standard quantum physics obtained from the Schrödinger equations describing the behavior of ``particles in a potential''. The fact that the signature of $g$ has exactly two $(-)$ signs will impose itself in this study (Chapter 2, Section \ref{s2.12}). The dimensions beyond 5 (which, locally, concern $W$) will be indispensable for quantum phenomena (but they will also be very important in Chapter 1 (Section \ref{s1.4}), in particular for domains of type ``potential'').
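A hedged one-line illustration of why a frequency in $u \in S^1$ is quantized while a frequency in $t \in \mathbb{R}$ is not (the exponential ansatz below is an assumption made for this sketch, not a formula from the text): a quantity carrying the phase
\[
\psi \;\propto\; e^{i(\omega t + k u)}, \qquad u \in S^1 \ \text{of circumference } \ell ,
\]
must be single-valued on the circle, so
\[
e^{i k (u + \ell)} = e^{i k u} \;\Longrightarrow\; k = \frac{2\pi m}{\ell}, \qquad m \in \mathbb{Z} .
\]
The ``frequency in $u$'' (identified above with the electric charge) thus comes in integer multiples of an elementary unit $2\pi/\ell$, whereas $\omega$ (the ``mass'') ranges over a continuum, since $t \in \mathbb{R}$ imposes no periodicity.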
In fact, the choices of ``dimension'' and ``signature'' imposed themselves gradually as the research advanced, starting from an entirely ``geometric'' presentation of physics: in the ``classical physics'' category presented in Chapter 1, dimension $4$ for spacetime would have sufficed for the typed domains representing fluids without electromagnetism (nothing really new would have appeared). For (non-quantum) electromagnetism it turned out to be necessary, with our viewpoint on physics, that the dimension of spacetime be at least 5. This point was detailed in \cite{steph-1}, but in that article the fifth dimension had been chosen ``spacelike'', whereas it is now ``timelike''. Had we stopped there, this choice of signature would not have mattered much technically (only a few signs related to the notion of electric charge change in the equations obtained). It is in the description of domains of type potential, and above all in the study of quantum phenomena, that the signature of $g$ imposed itself and that the dimension of spacetime had to be increased. With the viewpoint on physics taken here, the study of the notions which, in ordinary language, correspond to ``particles and their interactions'' will be based essentially on the \textbf{spectral theory relative to the compact manifold} $(S^1 \times W, g| _{S^1 \times W})$. This theory is difficult and is determined by the precise form of $(W, g| _{W})$. In this paper we will stop at the decomposition of $W$ in the form $W = S^3 \times V$, where $S^3$ is the standard $3$-sphere; this will make it possible to describe the phenomena related to the notion of ``spin'' and, in particular, to approach the phenomena of ``quantum entanglement''.
Whatever the typed domains studied in Chapters 1 or 2, the dimension of spacetime will always be taken equal to the same $n$ (even if, in some cases, certain dimensions will be ``neglected'', but of course with a precise mathematical definition). We will thus obtain, in particular, a perfectly unified presentation of ``classical physics'' and of ``quantum phenomena''. \newpage \section*{Notations used} \addtotdm{Notations used} For some notations I have tried to follow those of \cite{gourg}, which sometimes differ from the notations used in \cite{vaugon-1}, \cite{vaugon-2} and \cite{vaugon-3}. \noindent Let $(\mathscr{M}, g)$ be a pseudo-Riemannian manifold. We write: \begin{center} \begin{tabular}{lll} $Ricc_g$ &:& the Ricci curvature of $g$ \\ $S_g$ &:& the scalar curvature of $g$ \\ $Ein_g := Ricc_g - \frac{1}{2}S_g g$ &:& the Einstein curvature of $g$\\ $G := 2 Ein_g$ &:& twice the Einstein curvature (this will simplify \\ && the formulas in many situations)\\ $D$ &:& the covariant derivative associated with $g$; when $T$ is\\ && a tensor field on $\mathscr{M}$ with coordinates $T_{(k)}^{(j)}$, \\ && the coordinates of $D T$ are written $\nabla_i T_{(k)}^{(j)}$. \end{tabular} \end{center} \begin{itemize} \item When $X$ is a vector field on $\mathscr{M}$, $D_X$ denotes the covariant derivative along this vector field. \item When $f : \mathscr{M} \rightarrow \mathbb{R}$ is a function, $\nabla_g f$, or sometimes $\overrightarrow{grad f}$, denotes the \textbf{gradient} of $f$ ~~~($(\nabla_g f)^j = g^{i j}\partial_i f$). \item $\nabla_g\cdotp X$ denotes the \textbf{divergence} of the vector field $X$ ~~~($\nabla_g\cdotp X = \nabla_i X^i$). \item More generally, $\nabla_g\cdotp T$ denotes the \textbf{divergence} of the tensor field $T$ (with respect to the first index). Example: $(\nabla_g\cdotp T)^j = \nabla_i T^{i j}$ if $T \in \otimes^{(**)}\mathscr{M}$.
\item When $T$ is a totally covariant tensor (with coordinates $T_{i j k ...}$) and $g$ is a non-degenerate symmetric bilinear form on a vector space $E$, its contravariant version (via $g$) is written $T^\sharp$. \item When $T$ is totally contravariant ($T^{i j k ...}$), its covariant version (via $g$) is written $T^\flat$. \item In the particular case where $B$ is a bilinear form (with coordinates $B_{i j}$), we write $\leftidx{^e}{B}$ for the endomorphism of $E$ associated with $B$ (via $g$) (with coordinates $B^i_{\hspace*{0.7mm}j} = g^{i k}B_{k j}$). \item \textbf{The Einstein summation convention is used throughout this paper.} \end{itemize} \newpage \section*{Mathematical preliminary} In all that follows, the universe is represented by a differentiable manifold $\mathscr{M}$ of dimension $n > 4$ endowed with a pseudo-Riemannian tensor $g$ (defined except on a set of measure zero). In the domains studied here, the tensor $g$ will almost everywhere have signature $(-, +, +, +, -, +,\dots, +)$. \subsection*{Observation atlases} The manifolds we are going to consider will be locally diffeomorphic to a product $\Theta \times K$, where $\Theta$ is an open subset of $\mathbb{R}^p$ and $K$ a compact manifold. It will prove convenient to introduce a language that takes this specificity into account. We therefore begin with a list of definitions fixing the terminology used throughout this paper. Part \textbf{A} of this preliminary deals only with the differentiable structure; part \textbf{B} will concern the pseudo-Riemannian structure of the manifold. \vspace{1mm} \noindent\textbf{A.} We consider a manifold $\mathscr{M}$ of dimension $n$ and of class $C^k$, where $k$ is large enough to raise no issue for the objects defined below. \begin{dfn} Let $K$ be a compact manifold, $\Theta$ an open subset of $\mathbb{R}^p$ and $\mathcal{V}$ an open subset of $\mathscr{M}$.
- A $C^k$-diffeomorphism $\varphi : \mathcal{V} \rightarrow \Theta \times K$ will be called an \textbf{observation diffeomorphism}. - A pair $(\mathcal{V},\varphi)$,~ where ~$\varphi : \mathcal{V} \rightarrow \Theta \times K$ is an observation diffeomorphism, will be called \textbf{a chart on $\mathscr{M}$ with values in $\Theta \times K$.} \end{dfn} \begin{dfn} Let $\mathscr{D}$ be an open subset of $\mathscr{M}$ and $K$ a compact manifold.\\ A \textbf{$K$-observation atlas} on $\mathscr{D}$ is a family of charts $(\mathcal{V}_i,\varphi_i)_{i\in I}$ on $\mathscr{D}$ with values in $\Theta \times K$ satisfying the two following properties: \begin{enumerate} \item $\bigcup_{i\in I} \mathcal{V}_i = \mathscr{D}$ \item \label{cond:2} $\forall (i, j) \in I^2, \forall x \in \mathcal{V}_i \cap \mathcal{V}_j~~ \varphi_i^{\text{-}1}(\{x_i^1\} \times K) = \varphi_j^{\text{-}1}(\{x_j^1\} \times K)$ where $x_i^1$ denotes the component of $\varphi_i(x)$ on $\Theta_i$. \end{enumerate} \end{dfn} Condition \ref{cond:2} is important: it will make it possible to define without ambiguity, at each point $x$ of $\mathscr{D}$, a submanifold diffeomorphic to $K$ (relative to the given observation atlas). \begin{prop} Let $\mathscr{D}$ be a manifold of dimension $n$ and $K$ a compact manifold of dimension $m$. The two following assertions are equivalent: \begin{enumerate} \item \label{hyp:1} There exists a $K$-observation atlas on $\mathscr{D}$. \item There exist a manifold $B$ of dimension $(n-m)$ and a submersion $\pi:\mathscr{D} \rightarrow B$ such that the triple $(\mathscr{D}, B, \pi)$ is a fiber bundle with typical fiber $K$. \end{enumerate} (The proof is left to the reader.) \end{prop} \begin{rmq} The compactness of $K$ is not necessary for this proposition, but it guarantees that the manifold $B$ constructed from assumption \ref{hyp:1} has a Hausdorff topology.
If $K$ is not compact, it suffices to assume in addition, in condition \ref{cond:2}, that the sets $\varphi_i^{\text{-}1}(\{x_i^1\} \times K)$ are closed in $\mathscr{D}$ (they are closed in $\mathcal{V}_i$) for the topology of $B$ to be Hausdorff. \end{rmq} This proposition shows that what follows could be presented in the language of ``fiber bundles'', but I think this choice would not be very natural, insofar as the ``bases'' of the bundles would play no role (they would be of interest only in very particular cases). The presentation chosen here, in terms of ``observation atlases'', allows the preceding definitions to be generalized very simply, as follows. Assume that the manifold $K = K_1 \times K_2$, where $K_1$ and $K_2$ are compact (the reader will adapt what follows to the case $K = K_1 \times K_2 \times ... \times K_l$). \begin{dfn} A \textbf{$K_1$-observation atlas} on $\mathscr{D}$ is a family of charts $(\mathcal{V}_i, \varphi_i)_{i\in I}$ on $\mathscr{D}$ with values in $ \Theta_i \times K_1 \times K_2$ satisfying the two following properties: \begin{enumerate} \item $\bigcup_{i \in I} \mathcal{V}_i = \mathscr{D}$ \item $\forall (i, j) \in I^2, \forall x \in \mathcal{V}_i \cap \mathcal{V}_j~~ \varphi_i^{\text{-}1}(\{x_i^1\} \times K_1 \times \{x_i^3\}) = \varphi_j^{\text{-}1}(\{x_j^1\} \times K_1 \times \{x_j^3\})$ where $x_i^3$ is the component of $\varphi_i(x)$ on $K_2$. \end{enumerate} \end{dfn} \begin{dfn} When $K_1$ is an oriented manifold, we say that the $K_1$-observation atlas \textbf{preserves the orientation of $K_1$} if: $\forall (i, j) \in I^2, \forall x \in \mathcal{V}_i \cap \mathcal{V}_j$, the orientation on the submanifold $\varphi_i^{\text{-}1}(\{x_i^1\} \times K_1 \times \{x_i^3\})$ transported from that of $K_1$ by $\varphi_i$ is the same as the one transported by $\varphi_j$. \end{dfn} Of course, the same definitions are written for $K_2$.
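The proposition relating observation atlases to fiber bundles can be fixed in mind by a minimal example (an illustration added here, not taken from the text): the trivial product
\[
\mathscr{D} = \Theta \times K, \qquad \varphi = \mathrm{id}_{\Theta \times K}, \qquad B = \Theta, \qquad \pi = pr_\Theta : \Theta \times K \rightarrow \Theta ,
\]
is a $K$-observation atlas with a single chart; condition \ref{cond:2} holds vacuously, and the fibers $\pi^{\text{-}1}(x^1) = \{x^1\} \times K$ are the announced submanifolds diffeomorphic to $K$. A genuinely twisted bundle requires several charts whose overlaps preserve the fibers, which is exactly what condition \ref{cond:2} demands.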
\begin{dfn} A \textbf{$K_1$-$K_2$-observation atlas} on $\mathscr{D}$ is both a $K_1$-observation atlas and a $K_2$-observation atlas on $\mathscr{D}$. \end{dfn} \noindent\textbf{B.} We now consider a pseudo-Riemannian manifold $(\mathscr{M}, g)$ of dimension $n$ and class $C^k$. Throughout the rest of this paper, the domains $\mathscr{D}$ of $\mathscr{M}$ will be locally diffeomorphic to $\Theta \times K$, where $\Theta$ is an open subset of $\mathbb{R}^4$ and $K = S^1 \times W$; here $S^1$ denotes the standard oriented circle and $W$ a compact manifold. We therefore keep the notations of part \textbf{A} above with $K_1 = S^1$ and $K_2 = W$. Let $\mathscr{D}$ be a domain of $\mathscr{M}$ and $\mathscr{A}$ an $S^1$-$W$-observation atlas on $\mathscr{D}$ preserving the orientation of $S^1$. At each point $x$ of $\mathscr{D}$, the one-dimensional submanifold written $S_x^1$ (diffeomorphic to $S^1$) is then defined by setting: \[S_x^1 = \varphi_i^{\text{-}1}(\{x_i^1\} \times S^1 \times \{x_i^3\})\] where $(\mathcal{V}_i, \varphi_i)$ is an observation diffeomorphism at $x$ of the atlas $\mathscr{A}$, noting that $S_x^1$ does not depend on the choice of this observation diffeomorphism. Moreover, the submanifold $S_x^1$ is unambiguously oriented. We likewise define the submanifold of dimension $n-5$ written $W_x$ (diffeomorphic to $W$) by setting: \[W_x = \varphi_i^{\text{-}1}(\{x_i^1\} \times \{x_i^2\} \times W)\] Note that $x' \in S_x^1 \Leftrightarrow S_{x'}^1 = S_x^1$ and $x' \in W_x \Leftrightarrow W_{x'} = W_x$. \bigskip The tangent space $T_x(\mathscr{D})$ decomposes uniquely in the form: \[H_x \obot (T_x(S_x^1) \oplus T_x(W_x))\] where $H_x$ denotes the $4$-dimensional subspace $g$-orthogonal to $T_x(S_x^1) \oplus T_x(W_x)$\\ ($T_x(S_x^1)$ is not assumed $g$-orthogonal to $T_x(W_x)$). In what follows, $H_x$ will be called \textbf{the apparent vector space} at $x$ (relative to the observation atlas).
The field of spaces $H_x$ has no reason to be integrable; in other words, there is no reason for a $4$-dimensional submanifold to pass through a point $x_0$ of $\mathscr{D}$ whose tangent spaces at every point are the $H_x$. \begin{dfn} A \textbf{$g$-observation atlas on $\mathscr{D}$} is an $S^1$-$W$-observation atlas on $\mathscr{D}$ that preserves the orientation of $S^1$ and satisfies, in addition, the following properties relative to the pseudo-Riemannian tensor $g$: \begin{enumerate} \item $\forall x \in \mathscr{D}$: \begin{itemize} \item $g|_{H_x}$ has signature $(-, +, +, +)$ \item $g|_{T_x(S_x^1)}$ has signature $(-)$ \item $g|_{T_x(W_x)}$ has signature $(+, \dots, +)$ \end{itemize} \item \label{cond-local:2} $\forall (i, j) \in I^2, \forall x \in \mathcal{V}_i \cap \mathcal{V}_j~~~~ \varphi_i^*(\frac{\partial}{\partial t})_{\varphi_i(x)}$ ~and~ $\varphi_j^*(\frac{\partial}{\partial t})_{\varphi_j(x)}$ are timelike and have the same time orientation (that is, $g(\varphi_i^*(\frac{\partial}{\partial t})_{\varphi_i(x)}, \varphi_j^*(\frac{\partial}{\partial t})_{\varphi_j(x)})<0$). Here, $(\frac{\partial}{\partial t})_{\varphi_i(x)}$ denotes the tangent vector at $\varphi_i(x)$ associated with the standard coordinate system $(t, x, y, z)$ of $\Theta_i \subset \mathbb{R}^4$~~ $(\varphi_i : \mathcal{V}_i \rightarrow \Theta_i \times S^1 \times W)$. \end{enumerate} \end{dfn} Condition \ref{cond-local:2} makes it possible to define a ``classical'' time orientation on each apparent space $H_x$ (varying differentiably with $x$). The following definitions use the classical process that introduces the notion of a ``complete'' (or ``saturated'') atlas. \begin{dfn} Two $g$-observation atlases on $\mathscr{D}$ are \textbf{equivalent} if their union is again a $g$-observation atlas.
\end{dfn} \begin{dfn} Let $\mathscr{A}$ be a $g$-observation atlas on $\mathscr{D}$; the \textbf{completion} (or \textbf{saturation}) of $\mathscr{A}$ is the $g$-observation atlas formed by the union of all the $g$-atlases equivalent to $\mathscr{A}$. \end{dfn} \begin{dfn} A $g$-observation atlas on $\mathscr{D}$ is \textbf{complete} (or \textbf{saturated}) if it equals its completion. \end{dfn} The typed domains we are going to define below will be triples $(\mathscr{D}, g, \mathscr{A})$, where $\mathscr{D}$ is a domain of $\mathscr{M}$, $g$ a pseudo-Riemannian tensor on $\mathscr{D}$ and $\mathscr{A}$ a \textbf{complete} $g$-observation atlas on $\mathscr{D}$. The ``types'' of these domains will be specified by \textbf{geometric conditions} imposed on $(\mathscr{D}, g)$. The choice of the complete $g$-observation atlas will define the set of ``observers'' allowed for measuring the quantities defined by the typed domain (an ``observer'' is mathematically defined by an observation diffeomorphism). Since the atlas is complete, it will leave a wide choice of changes of observer. For example, if $(\mathcal{V}, \varphi)$ is a chart of this complete atlas $\mathscr{A}$ and if $\sigma_1 : \Theta \rightarrow \Theta' \subset \mathbb{R}^4$, $\sigma_2 : S^1 \rightarrow S^1$, $\sigma_3 : W \rightarrow W$ are three diffeomorphisms, then $(\mathcal{V}, \sigma_1 \times \sigma_2 \times \sigma_3 \circ \varphi)$ is a chart of the same atlas (provided, of course, that the ``defined orientations'' are preserved).
\chapter{Non-quantum physics} \label{part:un} \section{Domains of type ``fluid'' or ``potential''\label{s1.1}} As just specified in the ``mathematical preliminary'', a domain of type ``fluid'' or ``potential'' is first of all a triple $(\mathscr{D}, g, \mathscr{A})$, where $\mathscr{D}$ is a domain of $\mathscr{M}$, $g$ a pseudo-Riemannian tensor and $\mathscr{A}$ a complete $g$-observation atlas on $\mathscr{D}$. Recall that at each point $x$ of $\mathscr{D}$ the tangent space $T_x(\mathscr{D})$ decomposes in the form $H_x \obot (T_x(S_x^1) \oplus T_x(W_x))$. The space $H_x$ (of dimension $4$) is the apparent space at the point $x$. The submanifold $S_x^1$ is diffeomorphic to the circle, oriented, and timelike. The manifold $W_x$ (of dimension $n-5$) is compact and spacelike. On $\mathscr{D}$ we define the vector field $Y$ by taking, for each $x$ of $\mathscr{D}$, the unique vector tangent to $S_x^1$, in the given orientation, such that $g_x(Y_x, Y_x) = -1$. \textbf{This vector field will be the fundamental object of electromagnetism in $\mathscr{D}$}. \section[Type ``fluid'' from the Einstein curvature]{Domains of type ``fluid'' defined from the Einstein curvature\label{s1.2}} The natural geometric condition imposed on the domain for it to be of type ``fluid'' will be a condition on the tensor $G := 2 Ein_g$ allowing one to define \textbf{canonically} a vector field $X_0$ on $\mathscr{D}$ which is timelike and such that $\forall x \in \mathscr{D}, ~~{X_0}_x \in H_x$. This vector field will be called the \textbf{apparent vector field} of the fluid, and the associated flow the \textbf{apparent flow} of the fluid. This condition will also make it possible to define canonically two functions $\mu$ and $\rho : \mathscr{D} \rightarrow \mathbb{R}$, representing respectively the \textbf{energy density} and the \textbf{electric charge density} of the fluid.
The second Bianchi identity ($\nabla_g\cdotp G = 0$) will then easily yield the conservation laws as well as the usual evolution equations, and this for a large class of fluids. This geometric condition is the first one stated in Definition \ref{def:4}. The second condition can be interpreted as ``neglecting'' quantum effects on electromagnetism (but this will become clear only after a complete reading of this paper). The following lemma shows that this second condition is merely the ``approximation'' consisting in averaging the pseudo-Riemannian metric $g$ a priori over the circles $S_x^1$ defined above, thereby ``neglecting'' the variations of $g$ along these circles. Of course, this second condition will be abandoned in Chapter \ref{part:deux}, since it is precisely the ``variations'' of $g$ on the ``small compact manifolds'' that make it possible to describe quantum phenomena. \begin{lem}\label{l1} Consider the vector field $Y$ tangent to the circles $S_x^1$ and normalized by $g(Y, Y) = -1$, as defined above. Let $\sigma$ denote the one-parameter group of diffeomorphisms associated with $Y$. We define the ``averaged'' pseudo-Riemannian metric $\overline{g}$ by setting: \[\forall x \in \mathscr{M},~~ \overline{g}_x = \frac{1}{l_x} \int_{t_0}^{t_0 + l_x}\left(\sigma^*(t) g\right)_x dt\] where $l_x$ is the ``length'' of the circle $S_x^1$ relative to $g$ ($\overline{g}_x$ does not depend on the choice of $t_0$ since $\sigma_x(.)$ is periodic with period $l_x$). Then $\overline{g}(Y, Y) = -1$ and, $\forall s \in \mathbb{R},~~ \sigma^*(s) \overline{g} = \overline{g}$. In other words, $Y$ is a Killing field for $\overline{g}$.
\end{lem} \begin{dem} Checking that $\overline{g}(Y, Y) = \text{-}1$ is immediate; the rest is summarized by the following equalities: \begin{displaymath} \begin{tabular}{rcl} $\displaystyle(\sigma^*(s)\overline{g})_x$ &=& $\displaystyle\frac{1}{l_x}\int_{t_0}^{t_0+l_x}\sigma^*(s)(\sigma^*(t)g)_x~ dt$\\ &=& $\displaystyle\frac{1}{l_x}\int_{t_0}^{t_0+l_x}(\sigma^*(t+s)g)_x~ dt$ \\ &=& $\displaystyle\frac{1}{l_x}\int_{t_0+s}^{t_0+s+l_x}(\sigma^*(t) g)_x~ dt$ \\[1.3em] &=& $\displaystyle\overline{g}_x$ \end{tabular} \end{displaymath} \end{dem} \begin{dfn} \label{def:4} \textbf{A domain of type ``fluid''} is a triple $(\mathscr{D}, g, \mathscr{A})$ for which the two following conditions hold: \begin{enumerate} \item\label{ei11} $\forall x \in \mathscr{D}$~~ $\leftidx{^e}{G}|_{H_x}$ admits a one-dimensional, timelike eigenspace $E_{-\mu}$ with eigenvalue $-\mu < 0$~~ (so that $\mu > 0$). ~~(Here, $\leftidx{^e}{G}|_{H_x}$ is the endomorphism of $H_x$ defined by: $\forall X \in H_x$~~ $\leftidx{^e}{G}|_{H_x}(X) = pr_{H_x} \leftidx{^e}{G}(X)$.) \item\label{ei12} The vector field $Y$ is a Killing field; in other words, the local diffeomorphisms generated by the field $Y$ are $g$-isometries (see Lemma \ref{l1}). \end{enumerate} \end{dfn} \begin{rmq} \label{r2} When condition \ref{ei11} holds for a metric $g$, it remains valid for every metric in a ``neighborhood'' of $g$, and its only purpose is, as already stated, to allow the canonical definition of the vector field $X_0$ and of the two functions $\mu$ and $\rho$. This is fundamentally different from the axiomatic Lagrangian principle, since the latter is characterized by the precise axiomatic specification of ``equalities'' defining the Lagrangian. Recall that the sole purpose of condition $\ref{ei12}$ is to ``neglect'' the ``quantum effects'' related to electromagnetism.
\end{rmq} \subsection{The physical objects canonically defined in a domain of type ``fluid''\label{ss1.1}} \begin{enumerate} \item \textbf{The vector field $Y$} (already introduced), defined $\forall x \in \mathscr{D}$ as the unique vector tangent at $x$ to $S^1_x$, in the given orientation, such that $g(Y_x, Y_x) = -1$. \begin{itemize} \item \textbf{The associated 1-form} $Y^\flat$ (where $Y^\flat_i := g_{ij} Y^j$). \item \textbf{The differential 2-form} characterizing electromagnetism, classically written $F$, is defined here by $F= d (Y^\flat)$. \end{itemize} \item \textbf{The vector field $X_0$} (already introduced), defined $\forall x \in \mathscr{D}$ as the unique vector ${X_0}_x$ of the eigenspace $E_{-\mu} \subset H_x$, in the given orientation, such that $g({X_0}_x, {X_0}_x) = -1$ (note that, given the signature of $g|_{H_x}$, this timelike eigenspace is unique (cf. Appendix \ref{a3.1})). The vector field $X_0$ will be called \textbf{the apparent vector field} of the fluid, and the associated flow \textbf{the apparent flow of the fluid}. \item \textbf{The function $\mu : \mathscr{D} \rightarrow \mathbb{R}^+$}, defined $\forall x \in \mathscr{D}$ by $\mu(x) = \mu_x$, where $-\mu_x$ is the eigenvalue associated with the eigenspace $E_{-\mu_x}$. This function is \textbf{the energy density function} of the fluid. \item \textbf{The function $\rho : \mathscr{D} \rightarrow \mathbb{R}$}, defined $\forall x \in \mathscr{D}$ by $\rho(x) = G_x({X_0}_x, Y_x)$. This function is \textbf{the electric charge density function} of the fluid. \item \textbf{The vector field $X := X_0 + \frac{\rho}{\mu} Y$}, which is timelike, is the \textbf{vector field of the fluid} ($X_0$ was only the apparent field of the fluid), and the associated flow, \textbf{the flow of the fluid}. (Obviously, $X = X_0$ if the charge density $\rho$ vanishes.)
\item For every $x \in \mathscr{D}$, \textbf{the temporal space $\mathcal{T}_x$}, of dimension $2$, is the vector subspace of $T_x(\mathscr{D})$ spanned by ${X_0}_x$ and $Y_x$. The field of planes $\mathcal{T}$ is integrable, since condition \ref{ei12} imposed on domains of type ``fluid'' implies that $[X_0, Y] = 0$ (cf. Appendix \ref{a3.1}). We therefore define the \textbf{temporal tube} $\tau_x$ as the integral submanifold through $x$ of this field of planes. The temporal tube $\tau_x$ is a $2$-dimensional submanifold which is \textbf{totally timelike}, in the sense that all its tangent vectors are timelike. These ``tubes'' can be interpreted as a generalization of the flow lines of the fluid. They are oriented by the ``orientation'' given for $X_0$ and $Y$. \item Given the preceding definitions, $\forall x \in \mathscr{D}$, the tensor $G_x$ ``restricted'' to $\mathcal{T}_x$ reads: $G|_{\mathcal{T}_x} = \mu(x) X^\flat \otimes X^\flat + \sigma(x) Y^\flat \otimes Y^\flat$, where $\sigma:\mathscr{D} \rightarrow \mathbb{R}$ is a smooth function. The field of bilinear forms $P = G - G|_{\mathcal{T}_x}$ will be called \textbf{the pressure of the fluid} (it satisfies in particular: $P(X_0, X_0) = P(X_0, Y) = P(Y, Y) = P(X, X) = 0$).
The tensor field $G$ therefore takes the form: \begin{displaymath} \begin{tabular}{rcl} $\displaystyle G$ &=& $\displaystyle \mu X^\flat \otimes X^\flat + \sigma Y^\flat \otimes Y^\flat + P$\\ &=& $\displaystyle \mu X^\flat_0 \otimes X^\flat_0 + \rho(X^\flat_0 \otimes Y^\flat + Y^\flat \otimes X^\flat_0) + (\sigma + \frac{\rho^2}{\mu}) Y^\flat \otimes Y^\flat + P$ \end{tabular} \end{displaymath} \textbf{The apparent pressure} ${P_A}_x$ is the pressure $P$ restricted to the apparent space $H_x$; in other words, by definition: $\forall (Z, Z') \in H^2_x$~~$ {P_A}_x(Z, Z') = P_x(Z, Z')$, $\forall Z \in T_x(\mathscr{D})\ \forall Z' \in T_x(S^1_x) \oplus T_x(W_x)\ {P_A}_x(Z, Z') = 0$ and ${P_A}_x(Z, X_0) = 0$.\\ \textbf{The hidden pressure} is defined by ${P_C}_x := P_x - {P_A}_x$. The tensor field $G$ can therefore also be written in the form: $G = \mu X^\flat \otimes X^\flat + \sigma Y^\flat \otimes Y^\flat + P_A + P_C$. \end{enumerate} \textbf{The objects just defined have been defined solely from the geometry of the chosen type, that is, from the pseudo-Riemannian tensor $g$ on $\mathscr{D}$ alone. They are not independent of one another. The dependence relations are given by the mathematical properties of pseudo-Riemannian manifolds alone (in particular the second Bianchi identity). As we shall verify, these dependence relations are none other than the ``laws of physics'' for fluids, which recover in particular those of standard general relativity. Here, no law and no principle is added.
The equations we are about to write are forced conclusions of the definitions just given.} \subsection{The general equations for fluids} \begin{thme} \label{t1.1} In a domain of type ``fluid'', the following equalities hold: \begin{enumerate} \item (generic form of the conservation of energy) \[\nabla_g\cdotp (\mu X) = \nabla_g\cdotp (\mu X_0) = g(X_0, \nabla_g\cdotp P)\] or, in coordinates: \[\nabla_i(\mu X^i) = \nabla_i(\mu X^i_0) = X_{0 j}\nabla_i P^{i j}\] moreover: \[\mu^2X(\frac{\rho}{\mu}) = \mu g(Y, \nabla_g\cdotp P) - \rho g(X_0, \nabla_g\cdotp P)\] and: \[g(Y, \nabla_g\cdotp P) = \nabla_g\cdotp \leftidx{^e}{P}(Y)\] \item (generic form of the conservation of electric charge) \[\nabla_g\cdotp (\rho X) = \nabla_g\cdotp (\rho X_0) = \nabla_g\cdotp (\leftidx{^e}{P}(Y)) = g(Y, \nabla_g\cdotp P)\] or, in coordinates: \[\nabla_i(\rho X^i) = \nabla_i(\rho X^i_0) = Y_j\nabla_i P^{i j}\] (Remark: $\leftidx{^e}{P}(Y) = \leftidx{^e}{P_C}(Y)$.) \vspace{2cm} \item (equations of motion) \begin{enumerate} \item \[\mu D_X X = - \nabla_g\cdotp P - g(X_0, \nabla_g\cdotp P) X\] or, in coordinates: \[\mu X^i\nabla_iX^j = -\nabla_iP^{i j} - X_{0 k} \nabla_i P^{i k}X^j\] \item \[\mu D_{X_0} X_0 = \rho \leftidx{^e}{F}(X_0) - pr_{\mathcal{T}^\bot}(\nabla_g\cdotp P)\] \end{enumerate} \item (generic form of the second Maxwell equation; the first is obvious since $F := d(Y^\flat)$) \[\nabla_g\cdotp F = \rho X_0 + \frac{1}{2}(F_{i j}F^{i j})Y - \leftidx{^e}{P}(Y)\] or, in coordinates: \[\nabla_iF^{i j} = \rho X^j_0 + \frac{1}{2}(F_{k l}F^{k l})Y^j - P^{i j} Y_i\] \end{enumerate} \end{thme} The proof of this theorem follows quickly from the standard properties of pseudo-Riemannian manifolds (in particular the second Bianchi identity) and is given in Appendix \ref{a3.1}. The equations given by Theorem \ref{t1.1} generalize the equations obtained for fluids in standard general relativity.
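The ``first'' Maxwell equation invoked in Theorem \ref{t1.1} deserves its one-line proof, spelled out here for convenience (a standard computation): since $F := d(Y^\flat)$ is exact,
\[
dF = d\big(d(Y^\flat)\big) = 0 \qquad (\text{because } d \circ d = 0),
\]
which in coordinates is the familiar cyclic identity
\[
\nabla_i F_{j k} + \nabla_j F_{k i} + \nabla_k F_{i j} = 0 ,
\]
so only the ``second'' Maxwell equation carries nontrivial information about the fluid.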
They are not ``sufficiently deterministic'' (too many unknowns relative to the number of equations) and are useful in this form only for the study of global behaviors of fluids. To use them more precisely, it is necessary to impose additional conditions on the geometry of the given fluid type. This must, of course, be regarded as different choices of ``approximation'' depending on the cases considered (see Appendix \ref{a3.9} on ``approximations''). \subsection{Particular fluids} The particular geometric conditions we now impose are meant to recover the exact form of the known equations for fluids in standard general relativity, which are of course written in dimension $4$. Since, for us, the equations are written in dimension $n$, the comparison must be made in ``projection'' onto the apparent spaces $H_x$. In fact, the objective is rather to show that the fluids considered in standard general relativity are particular cases of those just defined, and we will see later that examples of ``fluids'' and ``potentials'' specific to dimension $n > 4$ are particularly interesting. \begin{dfn}\label{def:5} ~\\\vspace*{-1em} \begin{enumerate} \item A domain of type \textbf{``perfect fluid''} is a domain of type ``fluid'' for which $\leftidx{^e}{P}(Y)=0$ ~~(in fact, $\leftidx{^e}{P}(Y)=\leftidx{^e}{P_C}(Y)$ ~where ~$P_C$ ~is the hidden pressure (defs.
\ref{ss1.1.1})) \item A domain of \textbf{<<~isentropic perfect fluid~>>} type is a domain of <<~perfect fluid~>> type for which, at each point $x$ of $\mathscr{D}$, the pressure tensor $P_x$ is proportional to the tensor $(g_x-g_x|_{\mathcal{T}_x})$ ~(no <<~spatial~>> direction is privileged); in other words: $P=\tilde p(g+X_0^\flat\otimes X_0^\flat+Y^\flat\otimes Y^\flat)$ where $\tilde p:\mathscr{D}\rightarrow \mathbb{R}$ will be called \textbf{the pressure function}. \item A domain of \textbf{<<~truly-perfect fluid~>>} type (or of <<~dust~>> type), possibly electrically charged, is a domain of <<~perfect fluid~>> type for which ~$\nabla_g\cdotp P=0$. When considering a domain of \textbf{<<~truly-perfect fluid without electromagnetism~>>} type, one further assumes that $\rho=0$ and $F=0$. \end{enumerate} \end{dfn} \begin{rmq}\label{r3} In the definitions just given, the hypotheses that the tensors $\leftidx{^e}{P}(Y)$ and $\nabla_g\cdotp P$ vanish can be replaced by the assumption that these tensors are <<~negligible~>> compared to those appearing in the equations that follow, after making the notion of <<~negligibility~>> precise, of course. \end{rmq} The following proposition merely transposes the previous theorem to the particular cases of fluids just defined.
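The isentropic pressure tensor $P=\tilde p(g+X_0^\flat\otimes X_0^\flat+Y^\flat\otimes Y^\flat)$ can be illustrated by a minimal numerical sketch. All values below (a flat toy metric in dimension $6$, a sample value of $\tilde p$) are illustrative assumptions, not taken from the text: the endomorphism $\leftidx{^e}{P}$ annihilates $X_0$ and $Y$ and acts as $\tilde p\cdot\mathrm{Id}$ on spatial directions, so no <<~spatial~>> direction is privileged.

```python
import numpy as np

# Toy setup (assumed): flat metric in n = 6 with signature (-,+,+,+,-,+).
# For P = p*(g + X0_flat (x) X0_flat + Y_flat (x) Y_flat), the endomorphism
# eP = g^{-1} P annihilates X0 and Y and is p*Id on spatial directions.
g = np.diag([-1.0, 1, 1, 1, -1, 1])    # toy flat metric
X0 = np.array([1.0, 0, 0, 0, 0, 0])    # unit timelike, g(X0, X0) = -1
Y = np.array([0.0, 0, 0, 0, 1, 0])     # unit timelike, g(Y, Y) = -1
p = 0.7                                 # sample value of the pressure function

X0f = g @ X0                            # X0^flat (index lowered by g)
Yf = g @ Y                              # Y^flat
P = p * (g + np.outer(X0f, X0f) + np.outer(Yf, Yf))
eP = np.linalg.inv(g) @ P               # the endomorphism ^eP

assert np.allclose(eP @ X0, 0)          # no pressure along X0
assert np.allclose(eP @ Y, 0)           # no pressure along Y
Z = np.array([0.0, 1, 0, 0, 0, 0])      # an arbitrary spatial direction
assert np.allclose(eP @ Z, p * Z)       # isotropic pressure p in space
```

In particular $\leftidx{^e}{P}(Y)=0$ holds automatically for this form of $P$, consistent with the <<~perfect fluid~>> condition of Definition \ref{def:5}.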
\begin{prop}\label{p1.1} ~\\\vspace*{-1em} \begin{enumerate} \item In a domain of <<~perfect fluid~>> type the following equalities hold: \begin{enumerate} \item (generic form of energy conservation) \[\nabla_g\cdotp (\mu X) = \nabla_g\cdotp (\mu X_0) = g(X_0, \nabla_g\cdotp P)\] in coordinates: \[\nabla_i(\mu X^i) = \nabla_i(\mu X^i_0) = X_{0 j}\nabla_i P^{i j}\] moreover: \[\mu^2X(\frac{\rho}{\mu}) =\mu^2X_0(\frac{\rho}{\mu}) = - \rho g(X_0,\nabla_g\cdotp P)\] \item (conservation of electric charge) \[\nabla_g\cdotp (\rho X) = \nabla_g\cdotp (\rho X_0) = 0\] in coordinates: \[\nabla_i(\rho X^i) = \nabla_i(\rho X^i_0) =0\] \item (equation of motion) \[\mu D_X X = - \nabla_g\cdotp P - g(X_0, \nabla_g\cdotp P) X\] in coordinates: \[\mu X^i\nabla_iX^j =-\nabla_iP^{i j} - X_{0 k}\nabla_i P^{i k}X^j\] which can also be written in the form: \[\mu D_{X_0} X_0 =\rho\leftidx{^e}{F}(X_0)- \nabla_g\cdotp P - g(X_0, \nabla_g\cdotp P) X_0\] \item (generic form of the second Maxwell equation) \[\nabla_g\cdotp F = \rho X_0 + \frac{1}{2}(F_{i j}F^{i j})Y\] in coordinates: \[\nabla_iF^{i j} = \rho X^j_0 + \frac{1}{2}(F_{k l}F^{k l})Y^j\] (in particular: $\forall x\in\mathscr{D} ~~~pr_{H_x}(\nabla_g\cdotp F)=\rho X_{0_x}$) (These properties are immediate consequences of Theorem \ref{t1.1}).
\end{enumerate} \item In a domain of <<~isentropic perfect fluid~>> type the following equalities hold: \begin{enumerate} \item (generic form of energy conservation) \[\nabla_g\cdotp (\mu X_0)+\tilde p\nabla_g\cdotp X_0=0\] moreover: \[\mu^2X(\frac{\rho}{\mu})=\mu^2X_0(\frac{\rho}{\mu})=\rho\tilde p\nabla_g\cdotp X_0\] \item (conservation of electric charge) \[\nabla_g\cdotp (\rho X)=\nabla_g\cdotp (\rho X_0)=0\] \item (equation of motion) \[(\mu+\tilde p)D_{X_0}X_0=\rho\leftidx{^e}{F}(X_0)-\nabla_g\tilde p-X_0(\tilde p)X_0\] Here we recover exactly the equations obtained in standard general relativity for isentropic charged fluids (see for example \cite{hawking}). (The proof of these properties is obtained quickly after checking that $\nabla_g\cdotp P=\nabla_g\tilde p+X_0(\tilde p)X_0+\tilde p(\nabla_g\cdotp X_0)X_0+\tilde pD_{X_0}X_0$ ~~and then ~~$g(X_0,\nabla_g\cdotp P)=-\tilde p\nabla_g\cdotp X_0$.) \end{enumerate} \item In a domain of <<~truly-perfect fluid~>> type the following equalities hold: \begin{enumerate} \item (energy conservation) \[\nabla_g\cdotp (\mu X)=\nabla_g\cdotp (\mu X_0)=0\] moreover: \[X(\frac{\rho}{\mu})=X_0(\frac{\rho}{\mu})=0\] \item (conservation of electric charge) \[\nabla_g\cdotp (\rho X)=\nabla_g\cdotp (\rho X_0)=0\] \item (equation of motion) \[D_XX=0\] which can also be written: \[\mu D_{X_0}X_0=\rho\leftidx{^e}{F}(X_0)\] In this case \textbf{$X$ is a geodesic field, whether or not the fluid is electrically charged}. (Of course, if the electric charge density $\rho$ vanishes, $X=X_0$ is also a geodesic field.) \end{enumerate} \end{enumerate} \end{prop} \begin{rmq}\label{r4} In the <<~equation of motion~>> just given, $(D_{X_0}X_0)_x$ and $\leftidx{^e}{F}(X_0)_x$ are $g$-orthogonal to $Y_x$ but not necessarily to $W_x$.
If one wants to guarantee that $(D_{X_0}X_0)_x$ and $\leftidx{^e}{F}(X_0)_x$ belong to the apparent space $H_x$, one can impose the following additional condition (*): (*) ~~the submanifolds $W_x$ are parallel along the geodesic circles $S^1_x$. Precisely: $\forall x\in\mathscr{D} ~~\forall z\in T_x(W_x) ~~\forall x'\in S^1_x$, ~~the parallel transport of $z$ to $x'$ along the geodesic circle $S^1_x$ is tangent to $W_{x'}$. One then checks quickly that, under this condition: $\forall x\in\mathscr{D} ~~\leftidx{^e}{F}(X_0)_x\in H_x$. This condition (*) will be satisfied in the examples presented later. \end{rmq} \section[<<~Potential~>> type from the Einstein curvature]{Domains of <<~potential~>> type defined from the Einstein curvature\label{s1.3}} \begin{dfn}\label{def:6} A domain of \textbf{<<~potential type~>>} is a triple $(\mathscr{D},g,\mathscr{A})$ satisfying the following properties: \begin{enumerate} \item $\forall x\in \mathscr{D} ~~~G_x|_{H_x}=0$ ~~and~~$pr_{H_x}\leftidx{^e}{G}(Y)=0$ \item The field $Y$ is a Killing field. (See Lemma \ref{l1} and its presentation.) \end{enumerate} \end{dfn} These domains of <<~potential~>> type thus appear as domains of <<~fluid~>> type for which the energy density, the electric charge density, and the apparent pressure $P_A$ all vanish. Note, moreover, that there are no longer any canonically defined vector fields, as $X_0$ and $X$ were for fluids. The only remaining canonical objects are those defining electromagnetism, namely the field $Y$ and the 2-form $F$. The tensor $G$ then identifies with the hidden pressure $P_c$, which therefore satisfies $\nabla\cdotp P_c=0$. The following Theorem \ref{t1.2} is obtained immediately using the proof of part 4 of Theorem \ref{t1.1} on the Maxwell equations.
\begin{thme}\label{t1.2} In a domain of potential type given by Definition \ref{def:6} the following equality holds: (second Maxwell equation; the first is immediate since $F:=dY^\flat$) \[\nabla_g\cdotp F=\frac{1}{2} (F_{ij}F^{ij})Y-\leftidx{^e}{G}(Y)\] In particular (since $pr_{H_x}\leftidx{^e}{G}(Y)=0$): \[pr_{H_x}\nabla_g\cdotp F=0\] \end{thme} Domains of <<~potential~>> type are very important in practice, because knowing their geodesics gives, as an approximation, the curves of <<~elementary objects~>> (electrically charged or not) <<~placed~>> in these potentials, when the influence of those objects on the geometry of the <<~potential~>> type is considered negligible. Indeed, if an elementary object is introduced into a domain of <<~potential~>> type, it can then be regarded as a domain of <<~truly-perfect fluid~>> type in the region where the energy density is nonzero, this region being moreover highly <<~localized in space~>>. If one assumes that, outside this highly localized region, the geometry of the <<~potential~>>-type domain is unchanged, then the fluid field $X$ (defined where $\mu\neq0$) is a geodesic field by Proposition \ref{p1.1} (2.3.c). The flow curves of the fluid determined by $X$, which give the trajectory of the elementary object, are therefore (approximately) geodesics of the <<~potential~>> type. Moreover, the quotient $\frac{\rho}{\mu}$ identifies with the ratio of the electric charge to the mass of the elementary object considered, since the latter is <<~restricted in space~>>. This principle is classical in standard general relativity for elementary objects \textbf{without electric charge} and is used, for example, to determine the trajectories of planets in a Schwarzschild domain, the deflection of light, etc.
What is remarkable is that this principle now also applies to \textbf{electrically charged} elementary objects, but necessarily in a space of dimension $n\geqslant5$. In this case the apparent trajectory is determined by the <<~apparent~>> field $X_0$ (not geodesic), itself deduced from the geodesic field $X$. Of course, $X=X_0$ when the electric charge vanishes. Precise examples of computations will be presented at the end of the next section. \begin{rmq}\label{r5} When a domain $(\mathscr{D},g)$ is isometric to a domain of the form $(\mathscr{D}'\times V,g'\times g_V)$, where $V$ is a compact manifold of dimension $k$ and $g'\times g_V$ is a product metric, one can define on the pair $(\mathscr{D}',g')$ (for which $dim\mathscr{D}'=n-k$) the notions of <<~fluid type~>> or <<~potential type~>> in exactly the same way as was done on $(\mathscr{D},g)$. While the <<~objects~>> defined from the Ricci curvature give very similar notions on $(\mathscr{D},g)$ and on $(\mathscr{D}',g')$, since $g'\times g_V$ is a product metric, this is no longer the case for those defined from the Einstein tensor (or from $G$), because the scalar curvature of $g_V$ plays an important role and $S_g$ differs from $S_{g'}$ when $S_{g_V}$ is nonzero ($S_g=S_{g'}+S_{g_V}$). Yet it is essentially from the Einstein tensor that we have defined the important notions. It may happen, for example, that $(\mathscr{D}',g')$, of dimension $n-k$, is of fluid type while $(\mathscr{D},g)$ is not, or that $(\mathscr{D}',g')$ is of pressureless fluid type while $(\mathscr{D},g)$ is of fluid type with pressure (or conversely), etc. Moreover, the objects defined on <<~fluids~>> differ according to whether they are considered on $(\mathscr{D},g)$ or on $(\mathscr{D}',g')$.
It may therefore be advantageous, in the very particular case where $(\mathscr{D},g)$ is isometric to $(\mathscr{D}'\times V,g'\times g_V)$, to use the <<~fluids~>> or <<~potentials~>> defined on $(\mathscr{D}',g')$ rather than on $(\mathscr{D},g)$. In that case one must, of course, specify that these are the fluids or potentials defined on $(\mathscr{D}',g')$, whose characteristics may differ from those given by $(\mathscr{D},g)$. Naturally, the theorems obtained previously apply to $(\mathscr{D}',g')$ of dimension $n-k$ just as well as to $(\mathscr{D},g)$ of dimension $n$. \end{rmq} \section[Examples of domains given by the tensor $g$]{Examples of domains of potential type and of <<~fluid~>> type given by the pseudo-Riemannian tensor $g$ itself\label{s1.4}} The equalities given by Theorem \ref{t1.1} on fluids are written in terms of the pseudo-Riemannian tensor $g$. Two equalities with the same written form may describe fluids with very different behaviors if the underlying pseudo-Riemannian tensors differ. In fact, Theorem \ref{t1.1}, written without further specification of the tensor $g$, can only be applied to <<~global~>> studies of fluids. When one wishes to describe precise behaviors, it is important to know how to define domains of <<~fluid~>> or <<~potential~>> type from the pseudo-Riemannian tensor $g$. This process is, of course, used in standard general relativity, and the domains considered are often presented under the name of <<~exact solutions of the Einstein equation~>>. One may cite: the Schwarzschild-Kruskal, Reissner-Nordström, Kerr, Lemaître solutions, etc. In all these cases, $g$ is given explicitly.
All the examples of domains of <<~fluid~>> or <<~potential~>> type given in dimension 4 or 5, introduced in \cite{vaugon-1} or \cite{vaugon-2}, can be translated immediately into the framework presented here in dimension $n$. For this it suffices to define the domains of dimension $n$ that are locally isometric to $(\Omega\times K,g\times g_K)$, where $(\Omega,g)$ is the domain considered in dimension 4 or 5 and $(K,g_K)$ is a compact Riemannian manifold. The compact manifold $K$ is of the form $K=S^1\times W$ and the metric $g_K$ has signature $(-,+,+,\dots,+)$ when the dimension of $\Omega$ is 4. Obviously these constructions have no mathematical interest in themselves; they merely bring everything back to dimension $n$ without modifying the local geometric properties. The examples we now present are entirely new and make sense only when $n>5$ and the signature of the metric $g$ is of the form $(-,+,+,+,-,+,\dots,+)$. In particular, they will give, with great simplicity, a very good approximation of the domains of classical physics containing only electromagnetic and Newtonian potentials. These are the domains that will be reused later in the study of quantum phenomena. \bigskip Before presenting the examples of domains of <<~potential~>> type, we begin by fixing some notation and giving a few definitions. These will be used throughout this paper.
\subsection{Some notation and definitions\label{ss1.1.1}} \begin{enumerate} \item The circle $S^1(\delta)$ of radius $\delta$ is defined by setting $S^1(\delta)=\mathbb{R}/2\pi\delta\mathbb{Z}$, which, using the surjection $\Pi:\mathbb{R}\rightarrow \mathbb{R}/2\pi\delta\mathbb{Z}$, canonically gives: \textbf{an origin $P$} on $S^1(\delta)$ ~~($P:=\Pi(0)$), \textbf{an orientation} (that of $\mathbb{R}$ transported by $\Pi$), \textbf{a coordinate} $u\in\, ]0, 2\pi\delta[$ for $\Pi(u)\in S^1(\delta)-\{P\}$, \textbf{a metric} $g_{S^1(\delta)}$ (that of $\mathbb{R}$ quotiented by $\Pi$). \item Let $\Theta$ be an open subset of $\mathbb{R}^4$ and $\mathscr{C}=\Theta\times S^1(\delta)\times W$, where $W$ is a compact manifold (which will often be decomposed later in the form $S^3(\rho)\times V$, where $(S^3(\rho),g_{S^3(\rho)})$ is the standard Riemannian sphere of dimension 3). $\mathscr{C}$ will be called \textbf{a standard cell}. \item \textbf{A standard coordinate system on $\mathscr{C}$} will be written: $(t,x^1,x^2,x^3,u,w)$ \\where ~~$(t,x^1,x^2,x^3)\in \Theta\subset \mathbb{R}^4$, ~~$u\in S^1(\delta)$ ~and ~$w=(w^1,\dots,w^k)$ are the coordinates determined by the choice of a chart on $W$. The pair $(t,u)$ of <<~double~>> time coordinates will sometimes be written $(x^0,x^4)$. The elements of $S^1(\delta)$ will be written <<~u~>> (with standard coordinate <<~$u$~>>). The elements of $W$ will be written <<~w~>> (with standard coordinates <<~$(w^1,\dots,w^k)$~>>). \item \textbf{A reference metric} on the cell $\mathscr{C}$ (which will later become a neutral potential metric) is a pseudo-Riemannian metric $g_0$ of the form: \[g_0=g_\Theta\times(-g_{S^1(\delta)})\times g_W \] where $g_\Theta$ is the Minkowski metric on $\Theta \subset\mathbb{R}^4$, ~~$g_{S^1(\delta)}$ the standard Riemannian metric on $S^1(\delta)$, ~and~ $g_W$ a Riemannian metric on $W$.
In a standard coordinate system, the metric $g_0$ reads: \[g_0=-dt^2 +\sum_{k=1}^3(dx^k)^2-du^2+\sum g_W{_{ij}}dw^idw^j\] It corresponds to the choice of <<~geometric units~>>, since the coefficients of <<~$dt^2$~>> and <<~$du^2$~>> are ($-1$) and those of $(dx^k)^2$ are $(+1)$. \end{enumerate} \begin{rmq}\label{r6} As a <<~manifold~>>, the standard cell could have been defined more simply by taking $S^1(1)$ instead of $S^1(\delta)$, since, up to diffeomorphism, radii do not matter. But then a <<~standard metric~>> $g_0$ would have had to be defined by setting $g_0=-dt^2+\sum_{k=1}^3(dx^k)^2-\delta^2 du^2+\sum g_W{_{ij}}dw^idw^j$ to obtain an identical result. In that case, the <<~time~>> units on $\mathbb{R}$ and on $S^1(1)$ would have been different. The choice specified in this paragraph seemed preferable to me. \end{rmq} \subsection{The metrics representing the potentials\label{ss1.2}} We consider a chart $(\mathscr V,\zeta)$ of the observation atlas for which the standard cell $\mathscr{C}$ is of the form $\Theta\times S^1\times W$. \textbf{For simplicity, the metrics representing the potentials will be defined on $\mathscr{C}=\Theta\times S^1\times W$; in other words, we write $g$ for the metric transported by $\zeta$ from the one defined on $\mathscr V\subset \mathscr{M}$.} \subsubsection{A- The metrics representing neutral potentials} \begin{dfn}\label{def:7} A pseudo-Riemannian metric $g_0$ defined on a standard cell $\mathscr{C}=\Theta\times S^1\times W$ \textbf{represents a neutral potential} if it can be written as a product metric: \[g_0=g_\Theta\times(-g_{S^1(\delta)})\times g_W\] where: $g_\Theta$ is the usual Minkowski metric on $\Theta\subset\mathbb{R}^4$. $g_{S^1(\delta)}$ is the standard metric of the circle $S^1$ of radius $\delta$.
$g_W$ is a Riemannian metric on the compact manifold $W$ such that the scalar curvature $S_{g_W}$ is constant ($S_{g_0}$ is then equal to this same constant). \end{dfn} The various <<~neutral~>> potentials are thus, for the moment, tied to the various possible choices of compact Riemannian manifolds $(W,g_W)$ with constant scalar curvature, as well as to the radii $\delta$ of the circle $S^1$. The signature of $g_0$ is at every point: $(-,+,+,+,-,\dots,+)$. The choice of the Minkowski metric $g_\Theta$ on $\Theta$ is due to the fact that, for the moment, we only seek to recover the standard results of quantum theories (presented in Chapter 2). As we shall see, the metric $g_0$ will be regarded, when describing precise experiments, as the metric of the observer making the measurements. A different choice of $g_0$ would be possible, for instance to characterize a <<~deformation~>> of the spacetime in which the observer makes the measurements, but this would obviously complicate the computations considerably. \begin{rmq}\label{r7} The choice of $g_\Theta$ as the Minkowski metric yields the following property: If $\Lambda:\mathbb{R}^4\supset\Theta\rightarrow\Theta'\subset\mathbb{R}^4$ is a standard Lorentz transformation, then: $\sigma^* g_0=g_0$~ when $\sigma:=\Lambda\times I_{S^1\times W}$ ~~where ~~$ I_{S^1\times W}$ is the <<~identity~>> map of $S^1\times W$. In other words, the notion of <<~neutral potential~>> is invariant under the changes of observers corresponding to the chart changes associated with Lorentz transformations. \end{rmq} \subsubsection{B-~The metrics representing <<~active~>> potentials\label{ss1.3}} Given a neutral potential $g_0$ on a standard cell $\mathscr{C}$, any other metric $g$ on $\mathscr{C}$ can obviously be written in the form $g=g_0+h$, ~where ~$h$ is a field of symmetric bilinear forms.
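The invariance property of Remark \ref{r7} can be checked numerically in a minimal sketch. The dimensions and the boost velocity below are illustrative assumptions: the pullback of a flat metric by a linear map $A$ is $A^{T}g A$, a Lorentz boost preserves the Minkowski block, and $\sigma=\Lambda\times I_{S^1\times W}$ acts as the identity on the remaining factors, so $\sigma^* g_0=g_0$.

```python
import numpy as np

# Toy numerical check of Remark r7 (illustrative values): a Lorentz boost
# acting only on the Theta factor leaves the product metric g0 invariant.
eta = np.diag([-1.0, 1, 1, 1])           # Minkowski metric on Theta
beta = 0.6                                # sample boost velocity
gamma = 1 / np.sqrt(1 - beta**2)
boost = np.eye(4)
boost[0, 0] = boost[1, 1] = gamma         # standard boost along x^1
boost[0, 1] = boost[1, 0] = -gamma * beta

assert np.allclose(boost.T @ eta @ boost, eta)   # Lambda^* g_Theta = g_Theta

# Extend to g0 = g_Theta x (-g_{S^1}) x g_W (here g_W taken as Id_2):
g0 = np.diag([-1.0, 1, 1, 1, -1, 1, 1])
sigma = np.eye(7)
sigma[:4, :4] = boost                     # sigma = Lambda x I_{S^1 x W}
assert np.allclose(sigma.T @ g0 @ sigma, g0)     # sigma^* g0 = g0
```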
The field $h$ can be regarded as a field of endomorphisms on $\mathscr{C}$, relative to $g_0$, written $\leftidx{^e}{h}$ (in coordinates, $\leftidx{^e}{h}_j^i=g_0^{ik}h_{kj}$). \textbf{The remarkable fact is that the metrics that will represent the potentials are characterized essentially by the property that $\leftidx{^e}{h}$ is a field of NILPOTENT endomorphisms} (cf. def. \ref{def:8}). It is clear that $\leftidx{^e}{h}$ is a field of endomorphisms symmetric relative to $g_0$ (cf. Appendix \ref{a3.2}). If the signature of $g_0$ were of the form $(+,+,\dots,+)$, the endomorphisms $\leftidx{^e}{h}$ would be diagonalizable at every point $x$ of $\mathscr{C}$ and could therefore be nilpotent only by being identically zero. This is no longer the case when the signature of $g_0$ contains both ($-$) signs and ($+$) signs. The study that follows will in fact show that \textbf{the ($-$) sign in first position} in the signature of $g_0$ (which corresponds to that of the standard Minkowski metric) allows the existence of \textbf{nonzero nilpotent} endomorphism fields $\leftidx{^e}{h}$ whose corresponding metrics $g=g_0+h$ will be, in particular, those giving the \textbf{Newtonian potentials}. \textbf{The ($-$) sign in fifth position} (which corresponds to the circle $S^1(\delta)$) allows the existence of other \textbf{nonzero nilpotent} endomorphism fields whose corresponding metrics will be, in particular, those giving the \textbf{electromagnetic potentials}.
\rmq The nilpotency of $\leftidx{^e}{h}$ in the decomposition of $g$ in the form $g_0+h$ is an important property that will make it possible to exhibit interesting domains of types other than <<~potentials~>>, such as, for example, the one presented in Section \ref{s1.8}.\\ We will see that to Newtonian potentials (and even to somewhat more general potentials) are naturally attached, by their very definitions, two objects: -A function $v:\mathscr{C}\rightarrow\mathbb{R}$ called the <<~potential function~>>, and a lightlike vector field $X_1$ (cf. Proposition \ref{p1.2}). Likewise, to an electromagnetic potential will be naturally attached two objects: -A vector field $\Upsilon$ called the <<~electromagnetic potential vector field~>> (and here we will recover the potential already defined from the vector field $Y$), and a lightlike vector field $X_2$ (cf. Proposition \ref{p1.3}). In both cases, these two objects will characterize $h$ completely. The potentials will be called <<~\textbf{active}~>> if $h\neq0$. Note that, for a metric $g_0$ representing a neutral potential, the components on $\Theta$ of the images of the geodesics are straight lines. This will no longer be the case for the active potentials we are about to present, after making all these notions precise and giving some general properties.
\begin{dfn}\label{def:8} A field of endomorphisms $\leftidx{^e}{h}$ is \textbf{nilpotent of index $p\in \mathbb{N}$} if, for every $x\in\mathscr{C}$ ~~and every $q\geqslant p$, ~~the endomorphism $\leftidx{^e}{h}$ of $T_x(\mathscr{C})$ satisfies $(\leftidx{^e}{h}_x)^q=0$,~ and if there exists~ $x\in\mathscr{C}$ such that $(\leftidx{^e}{h}_x)^{p-1}\neq0$. \end{dfn} \begin{dfn}\label{def:9} A pseudo-Riemannian metric $g$ defined on a standard cell $\mathscr{C}=\Theta\times S^1(\delta)\times W$ \textbf{represents an active potential} if it can be written in the form $g=g_0+h$, ~where ~ $g_0$ is a metric representing a neutral potential and the endomorphism field $\leftidx{^e}{h}$ (relative to $g_0$) is nilpotent of index $p\geqslant2$. \end{dfn} The following properties are quick consequences of the symmetry of $h$ and the nilpotency of $\leftidx{^e}{h}$. They are proved in Appendix \ref{a3.2}. Properties: \begin{enumerate} \item $\forall x\in\mathscr{C}$, ~~$\leftidx{^e}{h}$ is a symmetric endomorphism for $g_0$; in other words,\\$\forall X$ and $Y\in T_x(\mathscr{C})$, ~~$g_0(\leftidx{^e}{h}_x(X),Y)=g_0(X,\leftidx{^e}{h}_x(Y))$. \item $\forall q\in\mathbb{N}^*$, ~~$\forall x\in\mathscr{C}$, ~~$trace(\leftidx{^e}{h}_x)^q=0$. \item Whatever the basis of $T_x(\mathscr{C})$, ~~$det(g_{ij}(x))=det(g_0{_{ij}}(x))$. \end{enumerate} Property 3 is very important; in particular, it implies that the volume element $\eta$ relative to $g$ ~~($\eta=\sqrt{det(g_{ij})}dx^0\wedge\dots\wedge dx^{n-1}$) ~is the same as the one relative to $g_0$. This result will be fundamental in the study of quantum phenomena, which involves the <<~active~>> potentials, and is, in my opinion, the essential property justifying the term <<~potential~>>. The following lemma, whose very simple proof is presented in Appendix \ref{a3.2}, is a consequence of the choice of the signature of $g_0$ and of the nilpotency of $\leftidx{^e}{h}$.
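Properties 1-3 above can be checked numerically on a toy example. All concrete values below are illustrative assumptions (a flat $g_0$ in dimension $6$ with signature $(-,+,+,+,-,+)$ and an index-3 nilpotent $\leftidx{^e}{h}$ of the kind that will serve for electromagnetic potentials): the traces of all powers of $\leftidx{^e}{h}$ vanish and $det(g_0+h)=det(g_0)$, so the volume element is unchanged.

```python
import numpy as np

# Toy check of Properties 1-3 (illustrative values, n = 6, flat g0 with
# signature (-,+,+,+,-,+)): build a symmetric h whose endomorphism
# eh = g0^{-1} h is nilpotent of index 3, then verify the three properties.
g0 = np.diag([-1.0, 1, 1, 1, -1, 1])
Y = np.array([0.0, 0, 0, 0, 1, 0])       # g0(Y, Y) = -1
Ups = np.array([0.0, 1, 0, 0, 0, 0])     # spacelike, g0(Ups, Y) = 0
X2 = np.array([0.0, 0, 1, 0, -1, 0])     # g0-null, g0(X2, Y) = 1

Upsf, X2f = g0 @ Ups, g0 @ X2            # lowered indices ("flat")
h = np.outer(Upsf, X2f) + np.outer(X2f, Upsf)   # symmetric bilinear form
eh = np.linalg.inv(g0) @ h               # the endomorphism ^eh

assert np.allclose(np.linalg.matrix_power(eh, 3), 0)  # nilpotent ...
assert not np.allclose(eh @ eh, 0)                    # ... of index 3
# Property 1: eh is g0-symmetric, i.e. g0 . eh is a symmetric matrix.
assert np.allclose(g0 @ eh, (g0 @ eh).T)
# Property 2: all traces of powers of eh vanish.
for q in (1, 2, 3):
    assert np.isclose(np.trace(np.linalg.matrix_power(eh, q)), 0)
# Property 3: the volume element is unchanged.
assert np.isclose(np.linalg.det(g0 + h), np.linalg.det(g0))
```

Note that the nonzero nilpotent $\leftidx{^e}{h}$ indeed uses both ($-$) signs of the signature, as announced above; with a positive-definite $g_0$ the same assertions would force $h=0$.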
Here appears the importance of the fact that the signature of $g_0$ is exactly of the form $(-,+,+,+,-,+,\dots,+)$; this will be even more fundamental in the study of quantum phenomena. \begin{lem}\label{l2} Let $Y$ be the unit timelike vector field characterizing electromagnetism already presented, here <<~transported to the cell $\mathscr{C}$~>>. Let $X_0$ be a unit timelike vector field $g_0$-orthogonal to $Y$ (infinitely many exist, given the signature of $g_0$). Then, for every $x\in\mathscr{C}$, ~the endomorphism $\leftidx{^e}{h}_x$ vanishes on the $g_0$-orthogonal complement of the space spanned at the point $x$ by the $2p$ vector fields: $Y,\leftidx{^e}{h}(Y),\dots,\leftidx{^e}{h}^{p-1}(Y),X_0,\leftidx{^e}{h}(X_0),\dots,\leftidx{^e}{h}^{p-1}(X_0)$. The endomorphism field $\leftidx{^e}{h}$ is therefore entirely determined by its values on these $2p$ vector fields (which, in general, are not independent). \end{lem} The following remark will justify the definition that follows. \begin{rmq}\label{r8} The differential $1$-form characterizing electromagnetism was defined in the general framework by $Y^\flat$, \textbf{where <<~$\flat$~>> is relative to the metric $g$}. In the case of an <<~active~>> potential where $g=g_0+h$, the vector field associated with $Y^\flat$ \textbf{by $g_0$} (by $g$ it would obviously be $Y$ itself) is none other than $Y+\leftidx{^e}{h}(Y)$,~ since ~$g_{ij}Y^j=g_0{_{ij}}Y^j+h_{ij}Y^j$ ~and then ~$g_0^{ki}g_{ij}Y^j=Y^k+\leftidx{^e}{h}^k_jY^j$. Moreover, $F:=d(Y^\flat)=d(g_{ij}Y^j)=d(h_{ij}Y^j)$ ~~since ~ $d(g_0{_{ij}}Y^j)=0$. The electromagnetic potential is therefore completely characterized by the vector field $\leftidx{^e}{h}(Y)$, \textbf{which we will henceforth denote $\Upsilon$}. We then give the following definition: \end{rmq} \begin{dfn}\label{def:10} An active potential is \textbf{without electromagnetism} if the vector field $\Upsilon:=\leftidx{^e}{h}(Y)=0$.
\end{dfn} We will now turn to the particular cases of domains of <<~potential~>> type that make it possible to recover, among other things, all the standard results describing the behavior of <<~elementary objects~>>, electrically charged or not, in what is commonly called a <<~Newtonian potential~>> or an <<~electromagnetic potential~>>. Note that these are exactly the same domains that will be reused in the description of quantum phenomena in Chapter 2. \textbf{The nilpotency indices corresponding to the two cases presented will be $p=2$ and $p=3$}. Of course, since the dimension of $\mathscr{M}$ is $n$, the maximum nilpotency index of $\leftidx{^e}{h}$ is $n-1$. The nilpotency index is therefore limited by the dimension of the compact manifold $W$. The choice of <<~small~>> nilpotency indices can thus be interpreted as neglecting certain effects linked, for example, to a compact manifold $V_2$ in a decomposition of $W$ of the form $W=V_1\times V_2$. \bigskip \textbf{a- ~Active potentials of index 2 without electromagnetism} (in particular the Newtonian potentials). \bigskip The metric $g$ defined on the cell $\mathscr{C}=\Theta\times S^1(\delta)\times W$ is of the form $g=g_0+h$, ~where ~ $g_0$ is the metric of a neutral potential and the endomorphism field $\leftidx{^e}{h}$ (relative to $g_0$) is \textbf{nilpotent of index 2}. Since this potential is assumed to be without electromagnetism (cf. def. \ref{def:10}), we set $\Upsilon:=\leftidx{^e}{h}(Y)=0$. By Lemma \ref{l2},~ $h$ is then entirely determined by its values on $Y$, $X_0$, $\leftidx{^e}{h}(X_0)$, ~where ~ $X_0$ is a timelike vector field, in the time orientation, $g_0$-orthogonal to $Y$, normalized by $g_0(X_0,X_0)=-1$. ~This field can be regarded as an <<~observation~>> field. Since the potential is <<~active~>>, ~$h$ is assumed nonzero.
Moreover, it is easy to check that: $\leftidx{^e}{h}_x=0\Longleftrightarrow \leftidx{^e}{h}_x(X_0)=0$. ~Indeed, if $\leftidx{^e}{h}_x(X_0)=0$, ~since by hypothesis $\leftidx{^e}{h}_x(Y)=0$ ~and ~$\leftidx{^e}{h}_x^2(X_0)=0$, ~Lemma \ref{l2} shows that $\leftidx{^e}{h}_x=0$. ~In particular, it follows that the function $g_0(\leftidx{^e}{h}(X_0),X_0)$ is not identically zero. We then give the following definition: \begin{dfn}\label{def:11} The nonzero function $v=-\frac{1}{2}g_0(\leftidx{^e}{h}(X_0),X_0)$ ~is called \textbf{the potential function seen by $X_0$} of the active potential domain considered. (The coefficient $-\frac{1}{2}$ is introduced solely in order to recover later the standard notion of Newtonian potential.) \end{dfn} \begin{prop}\label{p1.2} If $g=g_0+h$ ~is the metric representing an active potential of index 2 without electromagnetism, ~then there exists, ~on the part of $\mathscr{C}$ where the function $v$ is nonzero, \textbf{a unique vector field $X_1$} satisfying: \[h=-2vX_1^\flat\otimes X_1^\flat ~~~and~~~g_0(X_1,X_0)=1\] Moreover, the following properties hold: $g_0(X_1,X_1)=0$ ~~($X_1$ is lightlike for $g_0$) ~~and ~~$g_0(X_1,Y)=0$. (Recall that $X_1^\flat$ ~is the $1$-form associated with $X_1$ \textbf{by $g_0$}.) We therefore have: \[g=g_0-2vX_1^\flat\otimes X_1^\flat\] \end{prop} \textbf{Proof}: We first check that: \[g_0(\leftidx{^e}{h}(X_0),X_0)\leftidx{^e}{h}=\leftidx{^e}{h}(X_0)\otimes(\leftidx{^e}{h}(X_0))^\flat\] This is easily obtained by noting that, since $\leftidx{^e}{h}^2=0$, ~the right-hand side of this equality is nilpotent of index $\leq2$, ~then by showing that the equality holds when applied to $Y$, $X_0$ and $\leftidx{^e}{h}(X_0)$, ~~and hence holds everywhere by Lemma \ref{l2}.
On the part where $v$ is nonzero, we therefore set: \[X_1:=-\frac{1}{2v}\leftidx{^e}{h}(X_0)\] Then: \[g_0(X_1,Y)=-\frac{1}{2v}g_0(X_0,\leftidx{^e}{h}(Y))=0\] \[g_0(X_1,X_0)=-\frac{1}{2v}g_0(X_0,\leftidx{^e}{h}(X_0))=1\] And: \[-2v\leftidx{^e}{h}=(2v)^2X_1\otimes X_1^\flat\] Hence: \[\leftidx{^e}{h}=-2vX_1\otimes X_1^\flat\] The uniqueness of $X_1$ follows from the fact that the equalities ~$\leftidx{^e}{h}=-2vX_1\otimes X_1^\flat$ ~and ~$g_0(X_1,X_0)=1$ by themselves allow one to write: $\leftidx{^e}{h}(X_0)=-2vX_1$, ~~hence ~~$X_1=-\frac{1}{2v}\leftidx{^e}{h}(X_0)$. The equality $g_0(X_1,X_1)=0$ ~is immediately checked. \begin{rmq}\label{r9} The potential function $v$ and the field $X_1$ depend on the choice of the observation field $X_0$. As we will see when computing the Ricci curvature of this type of potential (cf. Proposition \ref{p1.4}), the hypotheses given in Definition \ref{def:6} of a domain of potential type defined from the Einstein curvature can be satisfied if $\Delta_{g_0}v=0$ (see also Remark \ref{r5} for the influence of the scalar curvature). We therefore give the following definition: \end{rmq} \begin{dfn}\label{def:12} A domain of <<~Newtonian potential~>> type is a domain of active potential type of index 2 without electromagnetism such that the potential function $v$ satisfies $\Delta_{g_0}v=0$. \end{dfn} \textbf{b- ~The electromagnetic potentials.} The metric $g$ defined on the cell $\mathscr{C}=\Theta\times S^1(\delta)\times W$ is of the form $g=g_0+h$, ~where ~$g_0$ is the metric of a neutral potential and the endomorphism field $\leftidx{^e}{h}$ (relative to $g_0$) is \textbf{nilpotent of index 2 or 3}. The fact that this potential is <<~electromagnetic~>> is essentially characterized by the imposed property: $\leftidx{^e}{h}(Y)\neq0$ (cf. def. \ref{def:10}).
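Proposition \ref{p1.2} above can be illustrated by a minimal numerical sketch. All concrete values below are illustrative assumptions (flat $g_0$ in dimension $6$, a sample $v$ and null field $X_1$): starting from the perturbation $h=-2vX_1^\flat\otimes X_1^\flat$, the data $v$ and $X_1$ are recovered through $v=-\frac{1}{2}g_0(\leftidx{^e}{h}(X_0),X_0)$ and $X_1=-\frac{1}{2v}\leftidx{^e}{h}(X_0)$, with all the stated orthogonality properties.

```python
import numpy as np

# Toy check of Proposition p1.2 (illustrative values, n = 6, flat g0 with
# signature (-,+,+,+,-,+)): build h from hidden data (v_true, X1_true),
# then recover them from eh and the observation field X0.
g0 = np.diag([-1.0, 1, 1, 1, -1, 1])
X0 = np.array([1.0, 0, 0, 0, 0, 0])        # observation field, g0(X0,X0) = -1
Y = np.array([0.0, 0, 0, 0, 1, 0])         # electromagnetic direction

v_true = 0.25                              # hidden potential function value
X1_true = np.array([-1.0, 0, 1, 0, 0, 0])  # g0-null, g0(X1,X0) = 1, g0(X1,Y) = 0
X1f = g0 @ X1_true                         # X1^flat
h = -2 * v_true * np.outer(X1f, X1f)
eh = np.linalg.inv(g0) @ h                 # nilpotent of index 2

v = -0.5 * (h @ X0) @ X0                   # v = -(1/2) g0(eh(X0), X0)
X1 = -eh @ X0 / (2 * v)                    # X1 = -(1/(2v)) eh(X0)
assert np.isclose(v, v_true)
assert np.allclose(X1, X1_true)
assert np.isclose(X1 @ g0 @ X1, 0)         # X1 is lightlike for g0
assert np.isclose(X1 @ g0 @ Y, 0)          # g0(X1, Y) = 0
assert np.allclose(eh @ Y, 0)              # potential without electromagnetism
assert np.allclose(eh @ eh, 0)             # index-2 nilpotency
```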
We will assume, however, that $g_0(\leftidx{^e}{h}(Y),Y):=h(Y,Y)=0$, ~so that the vector field associated by $g_0$ with $Y^\flat$ (where here $\flat$ is relative to $g$) is the $g_0$-orthogonal sum of $Y$ and $\leftidx{^e}{h}(Y)$ (cf. Remark \ref{r8}). (We leave it to the reader to check that, here again, this hypothesis can be interpreted as <<~neglecting~>> quantum effects on electromagnetism.) To specify that this potential is only electromagnetic and has no Newtonian component, we will assume that there exists a timelike vector field $X_0$ such that $\leftidx{^e}{h}^2(X_0)=0$ ~and ~$g_0(\leftidx{^e}{h}(X_0),X_0)=0$ ~(hypotheses weaker than $\leftidx{^e}{h}(X_0)=0$; the first equality always holds in the nilpotency-2 case). The reader may compare this with the definition of the active potential without electromagnetism and note that the roles of $Y$ ~and ~$X_0$ are transposed (they are both timelike, but for different <<~$-$~>> signs of the signature of $g_0$); moreover, we allow here the nilpotency to be of index 3. An essential difference nevertheless appears, due to the fact that the field $Y$ is perfectly determined (given the definition of the observation atlases), whereas $X_0$ is not, and its choice is regarded as that of an observation field. In the case of the electromagnetic potential defined here, the objects that will characterize $h$ do not depend on the choice of $X_0$ (thanks to the hypothesis $\leftidx{^e}{h}^2(X_0)=0$).
As already noted, we write $\Upsilon:=\leftidx{^e}{h}(Y)$.\\ The analogue of Proposition \ref{p1.2} reads as follows: \begin{prop}\label{p1.3} If $g=g_0+h$ ~is the metric representing an electromagnetic potential, then there exists, on the part of $\mathscr{C}$ where $\Upsilon$ does not vanish, \textbf{a unique vector field $X_2$} such that: \[h=\Upsilon^\flat\otimes X_2^\flat+X_2^\flat\otimes \Upsilon^\flat\] (recall that ``$\flat$'' in $X_2^\flat$ ~and ~$\Upsilon^\flat$ is relative to $g_0$). This vector field has the following properties:\\ $g_0(X_2,X_2)=0$ ~~($X_2$ is lightlike for $g_0$), ~$g_0(X_2,Y)=1$ and $g_0(X_2,\Upsilon)=0$. Hence: \[g=g_0+\Upsilon^\flat\otimes X_2^\flat+X_2^\flat\otimes \Upsilon^\flat\] \end{prop} \textbf{Proof.} Uniqueness of $X_2^\flat$ in the decomposition $\Upsilon^\flat\otimes X_2^\flat+X_2^\flat\otimes \Upsilon^\flat$, when $\Upsilon$ does not vanish, is a simple tensorial property due to the fact that $h$ is symmetric of rank 2. We prove the existence of such a decomposition by considering two cases: \begin{enumerate} \item Suppose $g_0(\Upsilon_x,\Upsilon_x):=g_0(\leftidx{^e}{h}_x(Y_x),\leftidx{^e}{h}_x(Y_x))\neq0$. (The nilpotency index of $\leftidx{^e}{h}_x$ is then 3.) We first check that, at the point $x$ (omitted in the lines that follow): \begin{eqnarray} \leftidx{^e}{h}=\frac{1}{g_0(\Upsilon,\Upsilon)}(\Upsilon\otimes (\leftidx{^e}{h}^2(Y))^\flat+(\leftidx{^e}{h} ^2(Y))\otimes\Upsilon^\flat) \label{F1.1} \end{eqnarray} By Lemma \ref{l2}, $\leftidx{^e}{h}$ vanishes on the $g_0$-orthogonal complement of $Y$, $\leftidx{^e}{h}(Y)$, $\leftidx{^e}{h}^2(Y)$, $X_0$, $\leftidx{^e}{h}(X_0)$. ~It is then quick to check that the right-hand side of \ref{F1.1} also vanishes on that space.
To prove \ref{F1.1}, it therefore remains to check that the equality holds when each side is applied successively to $Y$, $\leftidx{^e}{h}(Y)$, $\leftidx{^e}{h}^2(Y)$, $X_0$, $\leftidx{^e}{h}(X_0)$,~ which presents no difficulty using the facts that $\leftidx{^e}{h}$ is $g_0$-symmetric, ~$\leftidx{^e}{h}^3=0$, ~$h(Y,Y)=0$, ~$\leftidx{^e}{h}^2(X_0)=0$, ~once one has shown that: \[\leftidx{^e}{h}(X_0)=(h(Y,X_0)/g_0(\Upsilon,\Upsilon))\leftidx{^e}{h}^2(Y)\] This last point is obtained by the same method, considering the linear forms associated with each side and using the fact that $h(X_0,X_0):=g_0(\leftidx{^e}{h}(X_0),X_0)=0$. The desired decomposition is then obtained from \ref{F1.1} by setting: \[X_2:=(g_0(\Upsilon,\Upsilon))^{-1}\leftidx{^e}{h}^2(Y)\] It is then immediate to check that ~$g_0(X_2,X_2)=0$, ~~$g_0(X_2,Y)=1$ ~~and ~~$g_0(X_2,\Upsilon)=0$. \item Suppose $g_0(\Upsilon_x,\Upsilon_x)=0$ ~and ~$\leftidx{^e}{h}_x(Y)\neq0$; ~in other words, $\leftidx{^e}{h}_x(Y)$ is lightlike (this is the case when the nilpotency index is 2). Then, at the points $x$ considered (omitted in the lines that follow): \[h(X_0,Y):=g_0(\leftidx{^e}{h}(X_0),Y)\neq0\] (in particular $\leftidx{^e}{h}(X_0)\neq0$). Indeed, we have: \[0=g_0(\leftidx{^e}{h}(Y),Y)=g_0(\leftidx{^e}{h}(Y),\leftidx{^e}{h}(Y))=g_0(\leftidx{^e}{h}(Y),\leftidx{^e}{h}^2 (Y))=g_0(\leftidx{^e}{h}(Y),\leftidx{^e}{h}(X_0))\] If $g_0(\leftidx{^e}{h}(X_0),Y)=g_0(X_0,\leftidx{^e}{h}(Y))$ were zero, the linear form $g_0$-associated with $\leftidx{^e}{h}(Y)$ would vanish by Lemma \ref{l2}, contradicting the hypothesis. One then checks that: \[\leftidx{^e}{h}=(h(X_0,Y))^{-1}(\leftidx{^e}{h}(Y)\otimes(\leftidx{^e}{h}(X_0))^\flat+\leftidx{^e}{h} (X_0)\otimes(\leftidx{^e}{h}(Y))^\flat)\] by the same procedure as in the first case.
The desired decomposition is obtained by setting: \[X_2=(h(X_0,Y))^{-1}\leftidx{^e}{h}(X_0)\] It is then immediate to check that $g_0(X_2,X_2)=0$, ~~$g_0(X_2,Y)=1$ ~~and ~~$g_0(X_2,\Upsilon)=0$. \end{enumerate} \begin{rmq}\label{r10} An active potential, with metric $g$, was defined (for simplicity) on a standard cell $\mathscr{C}=\Theta\times S^1(\delta)\times W$, ~but when $(\mathscr V,\zeta)$ is the chart of the observation atlas such that $\zeta(\mathscr V)=\mathscr{C}$, ~$g$ is none other than the image $\zeta_*(g_\mathscr{M})$ of the Riemannian metric defined on $\mathscr{M}$. Following Remark \ref{r7}, when $(\mathscr V',\zeta')$ is another chart of the observation atlas such that $\zeta'(\mathscr V')=\mathscr{C}'\times S^1(\delta)\times W$, ~when $\sigma:= \zeta\circ\zeta'^{-1}:\mathscr{C}'\rightarrow\mathscr{C}$ is an isometry of the form $\Lambda\times I_{S^1\times W}$ ~and ~$\Lambda:\Theta'\rightarrow\Theta$ is a standard Lorentz transformation, the metric $g'=(\zeta'^{-1})^*(g)=\sigma^*g$, defined on $\Theta'\times S^1(\delta)\times W$, takes the form $g_0+\sigma^*h$. In the setting of an active potential without electromagnetism, the potential function $v'$ seen by $\sigma_*^{-1}X_0$, ``read'' in $(\mathscr V',\zeta')$, is therefore equal to $v\circ\sigma$, ~and ~$X'_1=\sigma_*^{-1}X_1$. In the setting of an electromagnetic potential, $\Upsilon'=\sigma_*^{-1}\Upsilon$ ~and ~$X'_2=\sigma_*^{-1}X_2$. \end{rmq} \section{Geodesics of domains of ``potential'' type\label{s1.5}} The geodesics will be determined in the usual way, starting with the computation of the Christoffel symbols of the corresponding metrics in a standard coordinate system of the standard cell under consideration. \bigskip Computing the Christoffel symbols requires the inverse of the matrix $(g)$ in the chosen coordinate system.
The hypothesis of $p$-nilpotency of $\leftidx{^e}{h}$ quickly yields the matrix $(g)^{-1}$, whose entries are conventionally denoted $g^{ij}$. Indeed, since $g=g_0+h$, ~~$(g_0)^{-1}(g)=I+(\leftidx{^e}{h})$ ~and, since $\leftidx{^e}{h}^p=0$, the inverse of the matrix $I+(\leftidx{^e}{h})$ is simply: \[I-(\leftidx{^e}{h})+(\leftidx{^e}{h})^2+\dots+(-1)^{p-1}(\leftidx{^e}{h})^{p-1}\] Hence $(g)^{-1}(g_0)=I-(\leftidx{^e}{h})+\dots+(-1)^{p-1}(\leftidx{^e}{h})^{p-1}$, ~~and therefore: \[g^{ij}=g_0^{ij}-h^{ij}+h^i_kh^{kj}+\dots+(-1)^{p-1}h^i_{k_1}h^{k_1}_{k_2}\dots h^{k_{p-1}j}\] \textbf{Warning}: here $(h^{ij})$ \textbf{is not} the inverse of the matrix $(h_{ij})$ ~(which in general is not invertible) but is defined, recall, by $h^{ij}=g_0^{ik}g_0^{jl}h_{kl}$. On the other hand, $(g^{ij})$ is indeed the inverse of the matrix $(g_{ij})$ ~~(and is not $g_0^{ik}g_0^{jl}g_{kl}$).\\ In particular, for $p=2$: \begin{eqnarray} g^{ij}=g_0^{ij}-h^{ij}. \label{F1.2} \end{eqnarray} For $p=3$: \begin{eqnarray} g^{ij}=g_0^{ij}-h^{ij}+h^i_kh^{kj}. \label{F1.3} \end{eqnarray} \subsection{Geodesics of a potential-type domain without electromagnetism\label{ss1.4}} The standard cell is of the form $\mathscr{C}=\Theta\times W$, where the circle $S^1(\delta)$ is here regarded as a factor of the compact manifold $W$, since it plays no role in the absence of electromagnetism. The open set $\Theta\subset\mathbb{R}^4$ will be of the form $\Theta=I\times\mathscr U$, ~where ~$I$ is an interval of $\mathbb{R}$ ~and ~$\mathscr U$ an open subset of $\mathbb{R}^3$. This decomposition is justified by the fact that potential-type domains without electromagnetism, defined in Proposition \ref{p1.2}, are not ``Lorentz-invariant'' once one chooses $X_0=\frac{\partial}{\partial t}$ attached to the standard coordinate system.
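The truncated Neumann series above can be spot-checked numerically. The following is a minimal sketch on a hypothetical $5\times5$ example (the particular $g_0$ and $\leftidx{^e}{h}$ are illustrative assumptions, not taken from the text; note that the matrix identity tested does not require $\leftidx{^e}{h}$ to be $g_0$-symmetric):

```python
import numpy as np

# Toy check of g^{-1} for g = g0 + h with ^e h nilpotent of index 3.
# (Hypothetical example; g0 is a flat metric of signature (-,+,+,+,+).)
g0 = np.diag([-1.0, 1.0, 1.0, 1.0, 1.0])

# A strictly upper-triangular matrix is nilpotent: (^e h)^3 = 0, (^e h)^2 != 0.
eh = np.zeros((5, 5))
eh[0, 1] = eh[1, 2] = 0.3
assert np.allclose(np.linalg.matrix_power(eh, 3), 0)

h = g0 @ eh                         # h_{ij} = (g0)_{ik} (^e h)^k_j
g = g0 + h

# Closed form for the index-3 case: (g)^{-1}(g0) = I - ^e h + (^e h)^2,
# hence g^{-1} = (I - ^e h + (^e h)^2) g0^{-1}.
g_inv_closed = (np.eye(5) - eh + eh @ eh) @ np.linalg.inv(g0)
assert np.allclose(g_inv_closed, np.linalg.inv(g))
print("closed-form inverse matches the numerical inverse")
```

For $p=2$ the quadratic term simply drops out, which is the matrix form of \ref{F1.2}.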
By Proposition \ref{p1.2}, the pseudo-Riemannian metric $g$ reads: \[g=g_0-2vX_1^\flat\otimes X_1^\flat \text{ ~~~where ~~~}g_0=g_\Theta\times g_W\] $X_1$ is a vector field (associated with $X_0$) satisfying: \[g_0(X_1,X_1)=0, ~~~g_0(X_1,Y)=0, ~~~g_0(X_1,X_0)=1\] In order to recover precisely the results on geodesics describing the classical motion of an elementary object in a potential without electromagnetism, we make the following hypothesis ($H_N$). \bigskip \textbf{Hypothesis $H_N$:} \begin{enumerate} \item The potential function $v$ is defined on $\mathscr U$. \item The vector field $X_1$ is defined on $I\times W$ and is a Killing field. \end{enumerate} (The function $v$ and the field $X_1$ can naturally be regarded as defined on $\mathscr{C}$: $v$ depends only on the variables of $\mathscr U$, and $X_1$ is tangent to $I\times W$.) \bigskip It would be interesting to study how the geodesics described below are modified when hypothesis $H_N$ is dropped, in particular when the potential function is of the form $v=-\frac{m}{r}$ (cf. Section \ref{s1.7}), and to estimate the perturbations of the usual conics. \subsubsection{Computation of the Christoffel symbols} In a standard coordinate system of the cell $\mathscr{C}$, let $\Gamma^k_{ij}$ ~(resp. $\tilde \Gamma^k_{ij}$) denote the Christoffel symbols of $g$ ~(resp. $g_0$). Let $T^k_{ij}$ denote the components of the \textbf{tensor} $(\Gamma^k_{ij}-\tilde\Gamma^k_{ij})$. When $g=g_0+h$, one has: \begin{eqnarray} T^k_{ij}=\frac{1}{2}g^{kl}(\nabla_ih_{jl}+\nabla_jh_{il}-\nabla_lh_{ij}) \label{F1.4} \end{eqnarray} \textbf{where the $\nabla_i$ are relative to $g_0$}. (This is easily proved in a normal coordinate system for $g_0$.) Here $h=-2vX_1^\flat\otimes X_1^\flat$. \textbf{In the remainder of this computation, $X_1$ will simply be written $X$ to lighten the notation.}
One has: $g^{kl}=g_0^{kl}+2vX^kX^l$ and: $\nabla_j h_{il}=-2\nabla_j(vX_iX_l)=-2((\nabla_jv)X_iX_l+v(X_i\nabla_jX_l+X_l\nabla_jX_i))$ Then, expanding \ref{F1.4} and using the fact that $X_1$ is a Killing field (i.e. $\nabla_iX_j+\nabla_jX_i=0$), one obtains: \[T^k_{ij}=-(g_0^{kl} +2vX^kX^l)((\nabla_jv)X_iX_l+(\nabla_iv)X_jX_l-(\nabla_lv)X_iX_j+2v(X_i\nabla_jX_l+X_j\nabla_iX_l))\] Since $X(v)=0$ and $X^lX_l=0$: \begin{eqnarray} T^k_{ij}=-X^k((\nabla_jv)X_i+(\nabla_iv)X_j)+(\nabla^kv)X_iX_j-2v(X_i\nabla_jX^k+X_j\nabla_iX^k) \label{F1.5} \end{eqnarray} Recall that $\Gamma^k_{ij}=\tilde\Gamma^k_{ij}+T^k_{ij}$. Since $X^kX_k=0$ (hence $X_k\nabla_iX^k=0$) and $X(v)=0$, we deduce: $X_k\Gamma^k_{ij}=X_k\tilde\Gamma^k_{ij}$. But $\nabla_iX_j=\partial_iX_j-X_k\tilde\Gamma^k_{ij}$, so, since $\nabla_iX_j+\nabla_jX_i=0$: $2X_k\tilde\Gamma^k_{ij}=\partial_iX_j+\partial_jX_i$, whence: \begin{eqnarray} X_k\Gamma^k_{ij}=\frac{1}{2}(\partial_iX_j+\partial_jX_i) \label{F1.6} \end{eqnarray} \subsubsection{Determination of the relevant geodesics} Consider a geodesic $x:\mathbb{R}\supset I\rightarrow\mathscr{C}$, with $\forall s\in I$ ~~~$x(s)=(x^0(s),x^1(s),\dots,x^{n-1}(s))$. For $k$ ~from ~$0$ to $n-1$~~ and~~ $\forall s\in I$: \begin{eqnarray} {x^k}''_{(s)}+\Gamma^k_{{ij}_{(x(s))}}{x^i}'_{(s)}{x^j}'_{(s)}=0 \label{F1.7} \end{eqnarray} \hspace{0.5cm} With \ref{F1.6} we deduce: $X_{k_{(x(s))}}{x^k}''+(\partial_iX_j)_{(x(s))}{x^i}'_{(s)}{x^j}'_{(s)}=0$, in other words: $\frac{d}{ds}(X_{k_{(x(s))}}{x^k}'_{(s)})=0$, whence $X_k{x^k}'=K$, ~~where ~~$K$ is a constant (from here on we no longer write the parameter $s$ explicitly). \bigskip We now turn to the components of the geodesic on the ``apparent space'' $\mathscr U$. \textbf{For $k$ ~from ~$1$ to $3$}, ~~$X^k=0$ ~~and ~~$\tilde\Gamma^k_{ij}=0$; ~~then, by \ref{F1.5}: $\Gamma^k_{ij}=(\nabla^kv)X_iX_j$ ~~~(where here $\nabla^kv=\partial_kv$).
The geodesic equation then gives, \textbf{for $k$ from ~$1$ to $3$}: ~${x^k}''=-(\nabla^kv)X_iX_j{x^i}'{x^j}'=-K^2\nabla^kv$. An affine reparametrization of the geodesic allows us to take $K=1$ (we will not consider the particular geodesics for which $K=0$). We finally obtain: \bigskip $({x^1},{x^2},{x^3})''_s=-(\frac{\partial v}{\partial x^1},\frac{\partial v}{\partial x^2},\frac{\partial v}{\partial x^3})_{x(s)}=-(\nabla_{(x^1,x^2,x^3)}v)_{x(s)}$ \bigskip \textbf{This is nothing other than Newton's classical equation of motion} when $v$ is the Newtonian potential and $(x^1(s),x^2(s),x^3(s))$ represents the trajectory of a point mass in such a potential, \textbf{but with $s$ regarded as the time parameter, which here does not coincide with $x^0=t$}. The parameter $s$ can be interpreted as the proper time associated with the image of the corresponding geodesic. Note that if the modulus of the ``velocity'' associated with a geodesic is assumed very small compared with the speed of light, that is, if $\forall k\neq0$ ~~~${x^k}'(s)=o(1)$ ~($x^0$ being the time variable), then ${x^0}'(s)=1+o(1)$. Indeed, since $X_k{x^k}'(s)=1$, ~~we get ${x^0}'(s)=1-\sum_{k\neq0}X_k{x^k}'(s)$, ~because $X_0=1$; moreover $\sum_{k\neq0}X_k^2=1$. This means that, in this case, the parameter ``$s$'' is very close to the time ``$x^0$'' given by the coordinate system, which corresponds to the usual non-relativistic approximation.
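Formula \ref{F1.5} and the resulting spatial geodesic equation can be verified symbolically on a minimal toy model. The sketch below assumes a hypothetical reduced $3$-dimensional cell with coordinates $(t,x,w)$, flat $g_0=-dt^2+dx^2+dw^2$, a potential $v(x)$, and the null field $X_1=-\partial_t+\partial_w$ (so $X_1^\flat=dt+dw$, $g_0(X_1,X_1)=0$, $g_0(X_1,\partial_t)=1$); none of these specific choices appear in the text:

```python
import sympy as sp

# Toy 3-d model (assumed for illustration): g = g0 - 2 v(x) X1^flat ⊗ X1^flat.
t, x, w = sp.symbols('t x w')
v = sp.Function('v')(x)
coords = [t, x, w]

Xflat = sp.Matrix([1, 0, 1])            # covariant components of X1^flat = dt + dw
g0 = sp.diag(-1, 1, 1)
g = g0 - 2*v*(Xflat*Xflat.T)
ginv = g.inv()

# Christoffel symbols of g
def gamma(k, i, j):
    return sp.simplify(sum(ginv[k, l]*(sp.diff(g[j, l], coords[i])
        + sp.diff(g[i, l], coords[j]) - sp.diff(g[i, j], coords[l]))
        for l in range(3))/2)

# Spatial equation (F1.5 with k = x): Gamma^x_{ij} = (dv/dx) X_i X_j,
# hence x'' = -K^2 v'(x) along a geodesic with X_k x^k' = K.
for i in range(3):
    for j in range(3):
        assert sp.simplify(gamma(1, i, j) - sp.diff(v, x)*Xflat[i]*Xflat[j]) == 0
print("Gamma^x_{ij} = v'(x) X_i X_j, so x'' = -K^2 v'(x)")
```

This is exactly the pp-wave mechanism used in the text: the potential enters the spatial equation only through $\nabla v$ and the conserved quantity $K=X_kx^k{}'$.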
\begin{rmq}\label{r11} One can check that, under the last hypotheses made,\\ $g(\overrightarrow{\text{v}}(s),\overrightarrow{\text{v}}(s))=g_0(\overrightarrow{\text{v}}(s),\overrightarrow{\text{v}} (s))+h(\overrightarrow{\text{v}}(s) , \overrightarrow{\text{v}}(s))$ (where $\overrightarrow{\text{v}}$ denotes the tangent vector of the geodesic), which is necessarily a constant $C_0$, will be very close to $-1$ if one assumes the potential $v=o(1)$. We could have chosen the parametrization of the geodesic so that $C_0=-1$ ~(the classical normalization); the constant $K$ would then possibly have differed from $1$, while remaining very close to it. \end{rmq} \subsection{Geodesics of a domain of electromagnetic-potential type\label{ss1.5}} The standard cell is of the form $\mathscr{C}=\Theta\times S^1(\delta)\times W$ (here the circle $S^1(\delta)$ matters). Consider the vector field $\Upsilon=\leftidx{^e}{h}(Y)$ defining the electromagnetic potential and the vector field $X_2$ given by Proposition \ref{p1.3}. By that proposition, the pseudo-Riemannian metric reads: $g=g_0+\Upsilon^\flat\otimes X_2^\flat+X_2^\flat\otimes\Upsilon^\flat$ ~~where ~~$g_0=g_\Theta\times(-g_{S^1})\times g_W$. $X_2$ is a vector field satisfying: $g_0(X_2,X_2)=0$,~~~$g_0(X_2,Y)=1$, ~~~$g_0(X_2,\Upsilon)=0$. In order to recover precisely the results on geodesics describing the classical motion of an \textbf{electrically charged} elementary object in an electromagnetic field, we make the following hypothesis $H_E$, which may be compared with hypothesis $H_N$ of the previously studied domain. \bigskip \textbf{Hypothesis $H_E$:} \begin{enumerate} \item a-~The vector field $\Upsilon$ is defined on $\Theta$. \item b-~The vector field $X_2$ is defined on $S^1(\delta)\times W$ ~and is a Killing field.
\end{enumerate} (The fields $\Upsilon$ and $X_2$ can naturally be regarded as defined on $\mathscr{C}$: $\Upsilon$ is tangent to $\Theta$ and depends only on the variables of $\Theta$; ~$X_2$ is tangent to $S^1(\delta)\times W$ and depends only on the variables of $S^1(\delta)\times W$.) \subsubsection{Computation of the Christoffel symbols} We start again from expression \ref{F1.4} of the previous study. Here $h=\Upsilon^\flat\otimes X_2^\flat+X_2^\flat\otimes\Upsilon^\flat$. \textbf{In the remainder of this computation, $X_2$ will be written $X$ to lighten the notation.} One has: $g^{kl}=g_0^{kl}-h^{kl}+h^{km}h_m^l$, ~~that is: $g^{kl}=g_0^{kl}-(\Upsilon^kX^l+\Upsilon^lX^k)+(\Upsilon^kX^m+\Upsilon^mX^k)(\Upsilon_mX^l+\Upsilon^lX_m)$ Hence, since $X^mX_m=0$ ~~and ~~$X^m\Upsilon_m=0$: $g^{kl}=g_0^{kl}-(\Upsilon^kX^l+\Upsilon^lX^k)+(\Upsilon^m\Upsilon_m)X^kX^l$ Then: $T^k_{ij}=\frac{1}{2}(g_0^{kl}-(\Upsilon^kX^l+\Upsilon^lX^k)+(\Upsilon^m\Upsilon_m)X^kX^l)(\nabla_jh_{il}+\nabla_ih_{jl} -\nabla_lh_{ij})$ Expanding and using the fact that $\nabla_iX_j+\nabla_jX_i=0$, one obtains: $(*):=(\nabla_jh_{il}+\nabla_ih_{jl}-\nabla_lh_{ij})=X_iF_{jl}+X_jF_{il} +2(\Upsilon_i\nabla_jX_l+\Upsilon_j\nabla_iX_l)+X_l(\nabla_j\Upsilon_i+\nabla_i\Upsilon_j)$ where we have set $F_{ij}=\nabla_i\Upsilon_j-\nabla_j\Upsilon_i=\partial_i\Upsilon_j-\partial_j\Upsilon_i$, ~the components of the differential 2-form $F:=d(Y^{\flat_g})=d\Upsilon^\flat$. Note that, by hypothesis ($H_E$), $X^lF_{jl}=0$.
Then, since $X^lX_l=0$ ~(hence $X^l\nabla_iX_l=0$): $X^l(*)=0$. Hence, since $\Upsilon^lX_l=0$: \begin{eqnarray} T^k_{ij}=\frac{1}{2}(X_iF_j^{~~k}+X_jF_i^{~~k} )+\Upsilon_i\nabla_jX^k+\Upsilon_j\nabla_iX^k+\frac{1}{2}X^k(\nabla_j\Upsilon_i+\nabla_i\Upsilon_j-\Upsilon^l(*)) \label{F1.8} \end{eqnarray} It follows that $X_kT^k_{ij}=0$, hence $X_k\Gamma^k_{ij}=X_k\tilde\Gamma^k_{ij}$. Then, as for the potential without electromagnetism, we deduce here as well: $2X_k\tilde\Gamma^k_{ij}=\partial_iX_j+\partial_jX_i$, so: \begin{eqnarray} X_k\Gamma^k_{ij}=\frac{1}{2}(\partial_iX_j+\partial_jX_i) \label{F1.9} \end{eqnarray} \subsubsection{Determination of the relevant geodesics} Consider a geodesic $x:\mathbb{R}\supset I\rightarrow\mathscr{C}$, with $\forall s\in I$ ~~~$x(s)=(x^0(s),x^1(s),\dots,x^{n-1}(s))$. For $k$ ~from ~$0$ to $n-1$~~ and~~ $\forall s\in I$: \begin{eqnarray} {x^k}''_{(s)}+\Gamma^k_{{ij}_{(x(s))}}{x^i}'_{(s)}{x^j}'_{(s)}=0 \label{F1.10} \end{eqnarray} \hspace{0.5cm} With \ref{F1.9} we deduce: $X_{k_{(x(s))}}{x^k}''+(\partial_iX_j)_{(x(s))}{x^i}'_{(s)}{x^j}'_{(s)}=0$, in other words: $\frac{d}{ds}(X_{k_{(x(s))}}{x^k}'_{(s)})=0$, whence $X_k{x^k}'=K$, ~~where ~~$K$ is a constant (from here on we no longer write the parameter $s$ explicitly). \bigskip We now turn to the first four components of the geodesic, which correspond to the ``classical'' space-time $\Theta$. By \ref{F1.8}, and since $\Gamma^k_{ij}=\tilde\Gamma^k_{ij}+T^k_{ij}$, \textbf{for $k$ from ~0 ~to ~3}: $\Gamma^k_{ij}=\frac{1}{2}(X_iF_j^{~~k}+X_jF_i^{~~k})$ The geodesic equation \ref{F1.10} then reads: ${x^k}''+\frac{1}{2}(X_iF_j^{~~k}+X_jF_i^{~~k}){x^i}'{x^j}'=0$ That is, \textbf{for $k$ from ~0 ~to ~3}: \begin{eqnarray} {x^k}''+KF_i^{~~k}{x^i}'=0 \label{F1.11} \end{eqnarray} We parametrize the geodesic in the classical way so that $g(x'(s),x'(s))=-1$ ~and \\~$x'(s)|_{T_{x(s)}(\Theta)}$ lies in the time orientation given by $x^0$.
Let $\overrightarrow{\text{v}}_{(s)}=({x^0}'(s), {x^1}'(s), {x^2}'(s), {x^3}'(s))$ denote the vector formed by the first four components of the vector $x'(s)$ tangent to the geodesic. Since $F_i^{~~k}=-F^k_{~~i}$, equation \ref{F1.11} then reads: $\overrightarrow{\text{v}}'_{(s)}=K\leftidx{^e}{F}_{x(s)}(\overrightarrow{\text{v}})_{(s)}$ where $\leftidx{^e}{F}$ is the field of endomorphisms associated with $F$ relative to $g_0$. \textbf{We recover the classical expression (from special relativity) for the equation of motion of a particle of mass $m$ and electric charge $q$ in an electromagnetic field $F$, once we set $K=\frac{q}{m}$ and when $s$ indeed represents the proper time of the particle (see, e.g., \cite{gourg} (17-61) p.~554). The result stating that the motion of an elementary object is described by geodesics is well known in general relativity, but only in the setting of gravitation. We have just shown that, in our framework, this result remains valid in the setting of electromagnetism as well.} What we have just written is merely a verification, on this example, of the principle for computing the motion of an electrically charged elementary object in a potential, discussed in Section \ref{s1.3}. Here $K=\frac{q}{m}=X_k{x^k}'$ is a characteristic of the geodesic on the compact manifold $S^1(\delta)\times W$, since $X_k=0$ for $k$ from 0 to 3.
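The identification of \ref{F1.11} with the Lorentz-force law can be illustrated numerically. The sketch below assumes an illustrative constant magnetic field $B$ along $z$ (so $F_{12}=-F_{21}=B$) and arbitrary toy values of $B$ and $K$; it integrates $\overrightarrow{\text{v}}'=K\leftidx{^e}{F}(\overrightarrow{\text{v}})$ and checks the two classical features: the Minkowski norm of $\overrightarrow{\text{v}}$ is preserved ($\leftidx{^e}{F}$ is $g_0$-antisymmetric), and the spatial velocity gyrates at the cyclotron frequency $\omega=KB=\frac{q}{m}B$:

```python
import numpy as np

# Toy numeric check of v' = K ^eF(v) for a constant magnetic field B along z.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # g0 restricted to Theta
B, K = 2.0, 0.5                        # illustrative values (K plays q/m)
F = np.zeros((4, 4))                   # F_{ij}, antisymmetric: F_{xy} = B
F[1, 2], F[2, 1] = B, -B
eF = np.linalg.inv(eta) @ F            # (^eF)^i_j = eta^{ik} F_{kj}

def rhs(v):
    return K * (eF @ v)

# RK4 integration from a unit timelike initial velocity
v = np.array([np.cosh(0.3), np.sinh(0.3), 0.0, 0.0])
ds, n = 1e-3, 5000                     # total parameter length s = 5
for _ in range(n):
    k1 = rhs(v); k2 = rhs(v + ds/2*k1); k3 = rhs(v + ds/2*k2); k4 = rhs(v + ds*k3)
    v = v + ds/6*(k1 + 2*k2 + 2*k3 + k4)

assert np.isclose(v @ eta @ v, -1.0, atol=1e-6)        # proper-time normalization kept
assert np.isclose(v[0], np.cosh(0.3), atol=1e-6)       # energy constant (pure B field)
omega = K * B
assert np.isclose(v[1], np.sinh(0.3)*np.cos(omega*5.0), atol=1e-6)   # gyration at KB
print("geodesic velocity gyrates at the cyclotron frequency omega = K B")
```

With $K=q/m$ this reproduces the textbook cyclotron motion, consistent with the identification made above.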
\begin{rmq}\label{r12} Since the parametrization of the geodesic is chosen so that $g(x'(s),x'(s))=-1$, and since $g_{ij}=g_{0_{ij}}+\Upsilon_iX_j+X_i\Upsilon_j$, one has: $\forall s\in I$ ~~~${{x^0}'}^2_{(s)}+{{x^4}'}^2_{(s)}=1+\sum_{k\neq0,4}{{x^k}'}^2_{(s)}+2K\Upsilon_{i(x(s))}{x^i}'_{(s)}$ If we assume that $K\Upsilon_i=o(1)$ for $i$ from 0 to 3, and that ${x^k}'(s)=o(1)$ for $k\neq0$ (which says in particular that the ``velocity'' determined by the geodesic is very small compared with the speed of light), then ${x^0}'(s)=1+o(1)$ (given the choice of parametrization in the time orientation relative to $x^0$), and the parameter ``$s$'' is very close to the ``time $x^0$'' given by the coordinate system, which corresponds to the usual non-relativistic approximation. \end{rmq} \section[Ricci curvature and scalar curvature]{Ricci curvature, scalar curvature, and various properties of potential-type domains \label{s1.6}} \subsection{Active potentials without electromagnetism of index 2\label{ss1.6}} Recall that, by Proposition \ref{p1.2}, ~~$g=g_0-2vX_1^\flat\otimes X_1^\flat$, ~~where ~~$v$ is the potential function and $X_1$ satisfies: $g_0(X_1,X_1)=0$, ~~~$g_0(X_1,Y)=0$, ~~~$g_0(X_1,X_0)=1$. We further assume here, as in the study of the geodesics (cf. \ref{ss1.4}), that hypothesis $H_N$ holds, and, in order to keep terms involving $D_{g_0}X_1$ out of the results that follow, we assume $D_{g_0}X_1=0$, which is a stronger hypothesis than requiring $X_1$ to be a Killing field. These hypotheses may be interpreted as neglecting certain quantum effects concerning the field $X_1$. We then obtain the following result.
\begin{prop}\label{p1.4} ~\\\vspace*{-1em} \begin{enumerate} \item $R_{{icc}_g}^\sharp=R_{{icc}_{g_0}}^\sharp-(\Delta_{g_0}v)X_1\otimes X_1$ \hspace{1cm} (here ~$\Delta_{g_0}:=-\nabla^k\nabla_k$) where $R_{{icc}_{g_0}}$ is entirely determined by $R_{{icc}_{{g_0}_W}}$, since $R_{{icc}_{{g_0}_\Theta}}=0$. \item $S_g=S_{g_0}+2vR_{{icc}_{g_0}}(X_1,X_1)$ where $S_{g_0}=S_{{g_0}_W}$. \item $X_1$ is also lightlike for the metric $g$, and $D_gX_1=0$. \item $D_gY=0$; in particular, $Y$ is a Killing field and a geodesic field for the metric $g$ (this is obviously true for the metric $g_0$). \end{enumerate} \end{prop} The proof of this proposition is detailed in Appendix \ref{a3.3}. \begin{rmq}\label{r13} If $R_{{icc}_{g_0}}(X_1,X_1)=0$, the scalar curvature $S_g$ remains equal to that of the neutral potential, $S_{g_0}$. This is the case, for example, when $W$ is a Riemannian product $V_1\times V_2$, ~where ~$(V_2,g_{V_2})$ is an Einstein manifold and $X_1$ is tangent to $V_2$ ~(one may have $\dim V_1=0$). Indeed, we then have $R_{{icc}_{g_0}}(X_1,X_1)=\lambda\, g_0(X_1,X_1)=0$ for some constant $\lambda$. In fact, in ``standard'' experiments the potential function $v$ is in general $\ll 1$ (in geometric units). Given the normalization of $X_1$ by $g_0(X_1,X_0)=1$, we can then write $|2vR_{{icc}_{g_0}}(X_1,X_1)| \ll |S_{g_0}|$, ~and $S_g$ remains very close to $S_{g_0}$, which is constant by definition. \end{rmq} \subsection{Electromagnetic potentials \label{ss1.7}} By Proposition \ref{p1.3}: $g=g_0+\Upsilon^\flat\otimes X_2^\flat+X_2^\flat\otimes\Upsilon^\flat$ where the vector field $\Upsilon$ is the electromagnetic potential and $X_2$ satisfies: $g_0(X_2,X_2)=0$, ~~~$g_0(X_2,Y)=1$, ~~~$g_0(X_2,\Upsilon)=0$. We further assume here, as in the study of the geodesics (cf.
\ref{ss1.5}), that hypothesis $H_E$ holds, or else, anticipating Section \ref{s2.13} where ``spin'' effects are taken into account, that the following hypothesis $H'_E$ holds. \newpage \textbf{Hypothesis $H'_E$} \begin{enumerate} \item The vector field $\Upsilon$ is defined on $\Theta\times S^3(\rho)$. \item The vector field $X_2$ is defined on $S^1(\delta)\times V$. \end{enumerate} This applies when the standard cell is of the form $\mathscr{C}=\Theta\times S^1(\delta)\times S^3(\rho)\times V$\\(cf. Section \ref{s2.13}). Whether under hypothesis $H_E$ or $H'_E$, we further assume, as in Proposition \ref{p1.4} for the field $X_1$, that $D_{g_0}X_2=0$. \begin{prop}\label{p1.5} Let $F$ denote the differential $2$-form of electromagnetism: $F=d(Y^{\flat_g})=d(\Upsilon^\flat)$. \begin{enumerate} \item $R_{{icc}_g}^\sharp=R_{{icc}_{g_0}}^\sharp+\frac{1}{2}(f(X_2\otimes X_2)-(\nabla_{g_0}\cdotp F)\otimes X_2-X_2\otimes (\nabla_{g_0}\cdotp F))$ where the function $f:=\frac{1}{2}g_0^{ik}g_0^{jl}F_{kl}F_{ij}$ ~~and ~~$\nabla_{g_0}\cdotp F$ ~is the divergence vector field of $F$ relative to $g_0$ ($\nabla_{g_0}\cdotp F$ has components $\nabla_i(g_0^{ik}F_k^j)$). \item $S_g=S_{g_0}+(\Upsilon^k\Upsilon_k)R_{icc_{g_0}}(X_2,X_2)$. \item $X_2$ is also lightlike for $g$, ~and ~$D_g{X_2}=0$. \item $Y$ is a Killing field and a geodesic field for $g$ (this is obviously true for $g_0$). \end{enumerate} \end{prop} The proof of this proposition is detailed in Appendix \ref{a3.4}. \begin{rmq}\label{r14} If $R_{icc_{g_0}}(X_2,X_2)=0$, the scalar curvature $S_g$ remains equal to that of the neutral potential $g_0$ (cf. Remark \ref{r13}). Here again, in ``standard'' experiments, $|(\Upsilon^k\Upsilon_k)R_{icc_{g_0}}(X_2,X_2) |\ll |S_{g_0}|$, ~so ~$S_g$ remains very close to $S_{g_0}$.
\end{rmq} \section{Remarks on the domain of ``Newtonian potential'' type \label{s1.7}} The standard cell considered here is of the form $\mathscr{C}=\Theta\times W$ with ~~$\Theta=I \times \mathscr U \subset \mathbb{R}^4$. By Proposition \ref{p1.2}, the pseudo-Riemannian metric $g$ satisfies $g=g_0-2vX_1^\flat\otimes X_1^\flat$, ~and we choose here $X_0=\frac{\partial}{\partial t}$ attached to the standard coordinate system of the cell $\mathscr{C}$. We further assume that the hypothesis used for Proposition \ref{p1.4} still holds. This domain would deserve a more thorough study than the one presented in this section, in particular when the function $|v|$ is not $\ll 1$ (i.e. when the ``Newtonian approximation'' is no longer valid). We confine ourselves to a few elementary remarks. Consider, for example, the case where, in the standard coordinate system,\\ $v=-\frac{m}{r}$, ~where ~$m$ ~is a positive constant and~ $r=(\sum_{k=1}^3{x^k}^2)^{1/2}$.\\ The domain considered is then ``spherically symmetric in space'' for the usual coordinates $(x^1,x^2,x^3)$ of $\mathscr U$ ~(but one must not forget that the other ``dimensions'' are fundamental). Here $\Delta v=0$ ~~(Def. \ref{def:12}),~~ and we have seen (cf. Subsection \ref{ss1.4}) that, for the metric $g=g_0+\frac{2m}{r}X_1^\flat\otimes X_1^\flat$, ~~the geodesics ~$x_{(s)}=(x^0_{(s)},\dots,x^{n-1}_{(s)})$ ~satisfy (at least those of interest to us): ~~$(x^1_{(s)},x^2_{(s)},x^3_{(s)})''=-(\nabla v)_{x(s)}=-\frac{m}{r^3}(x^1,x^2,x^3)_{x(s)}$. As in standard Newtonian physics, we deduce that the components on $\mathscr U$ of these geodesics have as images conics with ``$O$'' as a focus, and that Kepler's laws remain valid, \textbf{but with respect to the parameter ``$s$'' rather than the time ``$x^0=t$'' of the coordinate system}; ``$s$'' can be very different from ``$t$'' when the function $|v|$ is not $\ll 1$.
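The Keplerian character of these geodesic components (in the parameter $s$) can be illustrated numerically. The sketch below uses toy values (unit $m$, an arbitrary bound initial condition, both illustrative assumptions) and a symplectic leapfrog integrator; it checks the two classical invariants that force conic orbits, the energy and the angular momentum:

```python
import numpy as np

# Integrate (x^1, x^2)'' = -m (x^1, x^2)/r^3 in the parameter s (planar case)
# and check the classical Kepler invariants along the trajectory.
m = 1.0
def acc(p):
    r = np.linalg.norm(p)
    return -m * p / r**3

p = np.array([1.0, 0.0])            # position (toy initial condition)
q = np.array([0.0, 1.1])            # "velocity" dx/ds (bound orbit: E < 0)
E0 = 0.5*(q @ q) - m/np.linalg.norm(p)
L0 = p[0]*q[1] - p[1]*q[0]

ds = 1e-4
for _ in range(20000):              # leapfrog (kick-drift-kick), symplectic
    q = q + 0.5*ds*acc(p)
    p = p + ds*q
    q = q + 0.5*ds*acc(p)

E = 0.5*(q @ q) - m/np.linalg.norm(p)
L = p[0]*q[1] - p[1]*q[0]
assert abs(E - E0) < 1e-6 and abs(L - L0) < 1e-10
print("energy and angular momentum conserved along the parameter s")
```

Conservation of $E$ and $L$ in the parameter $s$ is exactly what yields the conics and Kepler's laws ``in $s$'' stated above.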
Since $g(\frac{\partial}{\partial t},\frac{\partial}{\partial t})=-1+\frac{2m}{r}$, the vector field $\frac{\partial}{\partial t}$ ($=\frac{\partial}{\partial x^0}$) is timelike for $r>2m$, ~~lightlike for $r=2m$, ~~and spacelike for $0<r<2m$. The ``critical'' distance here is $2m$, which is exactly the Schwarzschild radius; this suggests comparing the domain of ``Newtonian potential'' type just studied with a domain of ``Schwarzschild'' type, whose standard cell is defined as the pair $(\mathscr{C},g_S)$ with $\mathscr{C}=\Theta\times W$ ~~where ~~$\Theta=\mathbb{R}\times{\mathbb{R}^3}^*$ ~~and ~~$g_S$ is the product metric $g_S:=g_{S_\Theta}\times g_W$, ~in which $g_{S_\Theta}$ denotes \textbf{the classical Schwarzschild metric}, written, in the coordinates $(t,r,\varphi,\phi)$ ~on ~$\mathbb{R}\times\,]2m,+\infty[\,\times S^2\sim\mathbb{R}\times {\mathbb{R}^3}^*$: \[g_{S_\Theta}(t,r,\varphi,\phi)=(-1+\frac{2m}{r})dt^2+(1-\frac{2m}{r})^{-1}dr^2+r^2(d\varphi^2+\sin^2\varphi\, d\phi^2)\] Of course, there is no mathematical interest in taking the product of $g_{S_\Theta}$ with $g_W$; the point is simply to bring the classical 4-dimensional Schwarzschild domain into the $n$-dimensional framework studied here. (One may also consider, more generally, the ``extended'' Schwarzschild domain for $0<r<2m$.) Let us then compare some properties of type (1), ``Newtonian potential'', with those of type (2), ``Schwarzschild'', just defined. \begin{enumerate} \item The two Ricci curvatures $R_{icc_g}$~~ and ~~$R_{icc_{g_S}}$ are identical, both equal to $R_{icc_{g_0}}$, ~which is determined by $R_{{icc}_{{g_0}_W}}$ since $R_{{icc}_{{g_0}_\Theta}}=0$ (cf. Proposition \ref{p1.4}).
\item For $r \gg 2m$, by Proposition \ref{p1.2}, the components on $\Theta$ of the type-(1) geodesics described above, and the timelike geodesics of type (2), both reproduce, to very good accuracy, the trajectories of an elementary object around a spherically symmetric ``mass'' $m$ on the ``space $\mathscr U$'', as computed in standard Newtonian physics. \end{enumerate} Note also that the coefficient $(-1+\frac{2m}{r})$ of ``$dt^2$'' in the tensor $g$ of type (1) is the same as in the tensor $g_S$ of type (2); however, for type (1) the ``potential'' ~$\frac{2m}{r}$ ~perturbs the compact manifolds without touching $\mathscr U$, whereas for type (2) the potential perturbs $\mathscr U$ without touching $W$. These two types thus appear as two ``extreme'' special cases, and one can no doubt describe a family of ``Newtonian potential'' types for which the potential ``perturbs'' both $\mathscr U$ and $W$, while properties 1 and 2 are preserved. Such domains may have different properties when $r$ is not $\gg 2m$. \section[A ``truly perfect static fluid'']{An example of a domain of ``truly perfect static fluid'' type \label{s1.8}} This static domain will be without electromagnetism, and the apparent pressure will be zero (but not the hidden pressure). The interest of presenting such an example lies essentially in the fact that such a domain cannot exist within standard general relativity. Its possible existence here is due to the ``hidden'' pressure (which, of course, has no meaning when $\dim \mathscr{M}=4$). To keep the following computations simple, we present only a very particular case. \bigskip The ``fluid''-type domain considered is a triple $(\mathscr{D},\tilde g,\mathscr{A})$ (cf.
Definition \ref{def:4}) where $(\mathscr{D},\tilde g)$ is isometric to $(\mathscr{C},g)$, defined as follows: $\mathscr{C}=Z\times W$, ~~~where ~$Z=\Theta\times S^1(\epsilon)$ ~~and ~~$\Theta=I \times \mathscr U\subset \mathbb{R}^4$; $W=S^1(\delta)\times V$, ~~~where $V$ is a compact manifold. Here this decomposition of $W$ will not be used, since the domain is assumed to be without electromagnetism and the ``usual'' circle $S^1(\delta)$ plays no role. (The circle $S^1(\epsilon)$ in the decomposition of $Z$ was introduced solely to simplify the following computations.) The standard cell $\mathscr{C}$ is equipped with the standard coordinate system $(t,x^1,x^2,x^3,v,w)$ ~~(the coordinate $v$ being here that of $S^1(\epsilon)$). The metric $g$ is of the form $g=g_Z\times g_W$, ~~the signature of $g_W$ being $(-,+,\dots,+)$. The metric $g_Z$ is of the form $g_Z=g'_0+\beta\otimes X_1^\flat+X_1^\flat\otimes\beta$ where ~~$g'_0=g_\Theta\times g_{S^1(\epsilon)}$ ~~($g_\Theta$ being the Minkowski metric and ~$g_{S^1(\epsilon)}$ ~the standard Riemannian metric on $S^1(\epsilon)$). $\beta$ is a differential $1$-form defined on $\mathscr U$: $\beta=a\,dx^1+b\,dx^2+c\,dx^3$, ~where ~$a,b,c:\mathscr U\rightarrow\mathbb{R}$ are three smooth functions. $X_1$ is the vector field defined on $Z$ such that~ $X_1^\flat=dt+dv$ ~~(where $\flat$ is relative to $g'_0$). ($a,b,c$ ~and ~$X_1$ can, of course, be regarded as defined on the cell $\mathscr{C}$.) \bigskip In the standard coordinate system, the components of $g$ do not depend on ``$t$''; it is in this sense that \textbf{the domain is called static}. The tensor $g$ is to be compared with that of a domain of ``electromagnetic potential'' type, with the (fundamental) difference that $X_1^\flat=dt+dv$ replaces $X_2^\flat=du+dv$, which would have given exactly an example of an electromagnetic potential (here $u$ is the coordinate of $S^1(\delta)$).
Since the functions considered are independent of ``$t$'', one may use the results obtained for a domain of type ``electromagnetic potential'', with ~$(\Upsilon_0,\Upsilon_1,\Upsilon_2,\Upsilon_3)$ ~replaced by ~$(0,a,b,c)$. ~The roles of ``$t$'' and ``$u$'' are exchanged by the choice of $X_1$ in place of $X_2$. In fact, a computer algebra system quickly gives the following result: Write~ $(A,B,C):=\operatorname{rot}(a,b,c)$, ~that is, ~$A=(\frac{\partial b}{\partial x^3}-\frac{\partial c}{\partial x^2})$, ~~$B=(\frac{\partial c}{\partial x^1}-\frac{\partial a}{\partial x^3})$, ~~$C=(\frac{\partial a}{\partial x^2}-\frac{\partial b}{\partial x^1})$. Write ~$(\mathcal A,\mathcal B,\mathcal C):=\operatorname{rot}(A,B,C)=\operatorname{rot}\operatorname{rot}(a,b,c)$. For the matrix $(R_{{icc}_{g_Z}})$ of the Ricci curvature of the metric $g_Z$ in the standard coordinate system of $Z$ ~(a matrix of order 5), one obtains: \bigskip \[ (R_{{icc}_{g_Z}}) = \frac{1}{2}\begin{pmatrix} (A^2+B^2+C^2)&\mathcal A&\mathcal B&\mathcal C&(A^2+B^2+C^2) \\ \mathcal A&0&0&0&\mathcal A\\ \mathcal B&0&0&0&\mathcal B\\ \mathcal C&0&0&0&\mathcal C\\ (A^2+B^2+C^2)&\mathcal A&\mathcal B&\mathcal C&(A^2+B^2+C^2) \end{pmatrix}\] \bigskip And for the scalar curvature: $S_{g_Z}=0$. \bigskip Since $g$ is a product metric $g:=g_Z\times g_W$, the Ricci curvature of $g$ is now determined by the Ricci curvature of $g_W$. Moreover ~$S_g=S_{g_Z}+S_{g_W}=S_{g_W}$. We now assume that the scalar curvature $S_{g_W}$ vanishes (this assumption simplifies the type of fluid presented here, but one can dispense with it, cf. Remark \ref{r5}); the tensor $G$ then satisfies $G=2R_{{icc}_g}$. To obtain a ``truly perfect'' fluid we impose the following additional conditions: $(\mathcal A,\mathcal B,\mathcal C)=\operatorname{rot}\operatorname{rot}(a,b,c)=0$~~ and ~~$\mu:=A^2+B^2+C^2\neq0$. Then ~$G_{g_Z}=\mu X_1^\flat\otimes X_1^\flat$ ~~where ~~$\mu=|\operatorname{rot}(a,b,c)|^2=A^2+B^2+C^2>0$.
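The two extra conditions are easy to satisfy with very simple fields. The sketch below (using sympy; the choice $(a,b,c)=(0,0,x^1x^2)$ is a hypothetical example, not one taken from the text) checks that $\operatorname{rot}\operatorname{rot}(a,b,c)=0$ while $\mu\neq0$ almost everywhere, with the sign convention for $\operatorname{rot}$ used above:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')

def rot(a, b, c):
    # curl, with the sign convention used in the text
    return (sp.diff(b, x3) - sp.diff(c, x2),
            sp.diff(c, x1) - sp.diff(a, x3),
            sp.diff(a, x2) - sp.diff(b, x1))

# hypothetical choice of the three functions a, b, c on U
a, b, c = sp.Integer(0), sp.Integer(0), x1 * x2

A, B, C = rot(a, b, c)        # (A, B, C) = rot(a, b, c) = (-x1, x2, 0)
AA, BB, CC = rot(A, B, C)     # rot rot (a, b, c)

mu = A**2 + B**2 + C**2       # energy density mu = |rot(a, b, c)|^2

print(AA, BB, CC)             # rot rot vanishes: 0 0 0
print(mu)                     # x1**2 + x2**2, nonzero almost everywhere
```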
In the standard coordinate system, the matrix of $g_Z$ is: \[ (g_Z) =\begin{pmatrix} -1&a&b&c&0\\ a&1&0&0&a\\ b&0&1&0&b\\ c&0&0&1&c\\ 0&a&b&c&1 \end{pmatrix}\] For every $x\in \mathscr{C}$, ~~the apparent space $H_x$ has dimension 4. It is easy to check that the four vector fields $X_0:=\frac{\partial}{\partial t}$, ~~$X_1:=a\frac{\partial}{\partial t}+\frac{\partial}{\partial x^1}-a\frac{\partial}{\partial v}$, ~~$X_2:=b\frac{\partial}{\partial t}+\frac{\partial}{\partial x^2}-b\frac{\partial}{\partial v}$, ~~$X_3:=c\frac{\partial}{\partial t}+\frac{\partial}{\partial x^3}-c\frac{\partial}{\partial v}$ ~~form at each point $x$ a $g_Z$-orthonormal basis of $H_x$. Since $G(X_0,X_0)=\mu$ ~~and ~~$G(X_i,X_j)=0$ ~whenever $(i,j)\neq(0,0)$, ~the matrix of $G_{H_x}$ \textbf{in this basis} is: \[ (G_{H_x}) =\begin{pmatrix} \mu&0&0&0\\ 0&0&0&0\\ 0&0&0&0\\ 0&0&0&0 \end{pmatrix}\] According to the definitions given (cf. \ref{ss1.1.1}), the field ~$X_0=\frac{\partial}{\partial t}$ ~is the apparent field of the fluid (here $X_0=X$ is also the field of the fluid) and $\mu$ is the energy density function. The apparent pressure $P_A$ is zero; the hidden pressure $P_c$ is the pressure $P$ itself, and $P=G-\mu X_0^\flat\otimes X_0^\flat$. One then checks that $\nabla_g\cdotp P=0$: since $\nabla_g\cdotp G=0$, ~it suffices to check that ~$\nabla_g\cdotp (\mu X_0^\flat\otimes X_0^\flat)=0$, ~i.e. that $\nabla_i(\mu T^{ij})=0$ ~~where ~~$X_0\otimes X_0:=T^{ij}\partial_i\partial_j$ ~with ~$T^{00}=1$ ~~and ~~$T^{ij}=0$ ~if ~$(i,j)\neq(0,0)$. We have: $\nabla_i(\mu T^{ij})=\partial_i(\mu T^{ij}) +\mu(T^{lj}\Gamma^i_{il}+T^{il}\Gamma^j_{il})=\mu(T^{0j}\Gamma^i_{i0}+\Gamma^j_{00})$, ~since ~$\partial_i(\mu T^{ij})=\partial_0(\mu T^{00})=\partial_0\mu=0$ ~($\mu$ does not depend on $t$).
$\Gamma^j_{00}=-\frac{1}{2}g^{jl}(\partial_l g_{00})=0$ ~~since the metric is static and ~$g_{00}=-1$ ~is constant. $\Gamma^i_{i0}=\frac{1}{2}g^{il}(\partial_ig_{l0}+\partial_0g_{li}-\partial_lg_{i0})=\frac{1}{2}g^{il}(\partial_ig_{l0} -\partial_lg_{i0})=0$ ~~since ~~$g^{il}$ is symmetric in $(i,l)$ ~while ~$(\partial_ig_{l0} -\partial_lg_{i0})$ ~~is antisymmetric. The domain $(\mathscr{C},g,\mathscr{A})$ is therefore indeed a domain of type ``truly perfect fluid'' (checking the remaining conditions is very quick); its apparent pressure is zero, and the fluid vector field $X_0=X=\frac{\partial}{\partial t}$ ~is a geodesic field (for $g$ and $g_0$). \vspace{2cm} The domains of ``potential'' type presented above, for which the pseudo-Riemannian tensor $g$ is given explicitly, will be particularly important in the study of the quantum phenomena we now turn to; but it will not be the geodesics that come into play there, the process will be entirely different. \chapter{Quantum phenomena} \label{part:deux} \vspace{-11 mm} \section{Introduction} \vspace{-3mm} Continuing what was done in the first chapter, we shall now define the typed domains of the spacetime $(\mathscr{M},g)$ that will represent what is classically called ``particles in a potential''. Since the view we take of these notions is fundamentally different from that of standard quantum physics, the term ``particle'' will no longer be appropriate, and we shall refer to it only occasionally, to connect what we present with the classical theories.
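Both Christoffel identities used in this verification can be checked mechanically. The following sketch (sympy assumed; coordinate names are illustrative) computes the Christoffel symbols of the matrix $(g_Z)$ given above, with $a,b,c$ generic functions of $x^1,x^2,x^3$:

```python
import sympy as sp

t, x1, x2, x3, v = sp.symbols('t x1 x2 x3 v')
coords = [t, x1, x2, x3, v]
a = sp.Function('a')(x1, x2, x3)
b = sp.Function('b')(x1, x2, x3)
c = sp.Function('c')(x1, x2, x3)

# matrix of g_Z in the standard coordinates (t, x1, x2, x3, v)
g = sp.Matrix([[-1, a, b, c, 0],
               [ a, 1, 0, 0, a],
               [ b, 0, 1, 0, b],
               [ c, 0, 0, 1, c],
               [ 0, a, b, c, 1]])
ginv = g.inv()

def Gamma(k, i, j):
    # Christoffel symbols Gamma^k_{ij} of g_Z
    return sp.Rational(1, 2) * sum(
        ginv[k, l] * (sp.diff(g[l, i], coords[j])
                      + sp.diff(g[l, j], coords[i])
                      - sp.diff(g[i, j], coords[l]))
        for l in range(5))

# Gamma^j_{00} = 0: the metric is static and g_00 = -1 is constant
assert all(sp.simplify(Gamma(j, 0, 0)) == 0 for j in range(5))
# sum_i Gamma^i_{i0} = 0: symmetric contracted with antisymmetric
assert sp.simplify(sum(Gamma(i, i, 0) for i in range(5))) == 0
print("both identities hold")
```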
The essential difference between the domains we are about to define and those presented in the first chapter comes from the fact that the \textbf{variations} of the pseudo-Riemannian tensor $g$ (transported to a cell $\mathscr {C} = \Theta \times S^1 \times W$) along the compact manifold $S^1 \times W$ will become the essential ingredients of the description of ``quantum phenomena''. In particular, we shall no longer assume that the vector field $Y$ tangent to the circles $S^1_x$ is a Killing field. Had these possible variations of $g$ on $S^1 \times W$ been taken into account precisely in the first chapter, many additional terms would have appeared in the equations and made them ``unmanageable''; in the language of standard physics, this would have meant that the equations accounted simultaneously for ``classical'' and ``quantum'' phenomena. The approximations we shall make in defining the geometric types that correctly describe ``quantum phenomena'' will yield ``usable'' equations, but they will be of a different ``nature'' from those used in the first chapter. So that the reader has a clear idea of the structure of this second chapter, we begin with a comprehensive overview of what follows: The typed domains $(\mathscr{D}, g)$ that we are going to define will have pseudo-Riemannian metrics $g$ built from a reference metric $g_0$, namely that of a neutral potential already defined in the previous chapter. This metric $g_0$, transported to a cell ($\mathscr {C} = \Theta \times S^1(\delta) \times W$), may be regarded as associated with an observer performing the measurements. The fact that, by definition, $g_0|_\Theta$ is the Minkowski metric specifies ``the chosen approximation''.
We could use for $g_0|_\Theta$ a metric accounting for a possible deformation of the spacetime $\Theta$ attached to the observer, but this would obviously complicate matters considerably, and for the moment we content ourselves with recovering the standard results of quantum physics by taking $g_0|_\Theta$ to be the Minkowski metric. \textbf{The fundamental geometric type that will describe all quantum phenomena will simply be a domain $(\mathscr{D},g)$ ``of constant scalar curvature, conformal to a potential''.} In other words, the pseudo-Riemannian tensor $g$ will be of the form $fg_{\mathcal P}$, where $g_{\mathcal P}$ is a potential metric as defined in the first chapter (from $g_0$), $f$ is a positive real function, and the scalar curvature of $g$ is constant, equal to that of $g_{\mathcal P}$ (itself equal to that of $g_0$). The function $f$ then becomes the important object of these domains, and it will prove simpler to consider the function $a:=f^{(n-2)/4}$ (where $n=\dim\mathscr{M}$), since the very definition of the chosen geometric type yields a differential equation ``in $a$'' simpler than the equivalent equation written ``in $f$''. It will turn out that the ``nonlinear'' part of the differential equation ``in $a$'' expresses, in the language of classical physics (no longer appropriate here), ``the interaction of the particles with one another'' in the domain considered. Since most of what we are going to study concerns phenomena that ``neglect'' the interactions of the particles with one another, it is a \textbf{linear approximation} of the general equation that we shall use (at least until Section \ref{s2.16}).
The domains for which we consider this linear approximation will be called \textbf{oscillating-metric domains}, because the mere fact that the metric $fg_{\mathcal P}$ keeps constant scalar curvature will in most cases force strong oscillations of the function $f$. It is the characteristics of the function $f$ (and hence of the function $a$), imposed by the chosen geometric type, that will recover the standard notions of mass, electric charge, spin, etc. \textbf{The linear equations obtained will thus be unavoidable consequences of the definition of the chosen fundamental geometric type (and of the linear approximation). No law, no principle will be added. These linear equations will be of ``Klein--Gordon'' type (differing according to the potentials considered). The Schrödinger equations used in standard (non-relativistic) quantum physics will appear as ``natural approximations'' (to be made precise) of the ``Klein--Gordon'' equations obtained. This will later show that one recovers, in particular, the qualitative and quantitative results of the generic experiments of classical quantum physics (diffraction, Young's double slit, deflection by a potential, the Stern--Gerlach experiment, quantum entanglement, etc.).} In fact, domains of ``oscillating metric'' type will not suffice to describe completely the notion of ``particles in a potential'' and hence the experiments just mentioned. At this stage a notion of ``localization'' will be missing (one that makes precise, for example, the notion of an ``impact'' on a screen). It will be necessary to define more precisely typed domains akin to the ``particles in a potential'' domains of standard physics.
For this it will suffice to ``add'', within a domain $(\mathscr{D}, g)$ ``of constant scalar curvature conformal to a potential'', a subset $\mathscr{S}$ of $\mathscr{D}$ of measure zero. The connected components of $\mathscr{S}$ will be called \textbf{the elementary singularities of $\mathscr{D}$}. This subset may in fact be regarded as the set where the tensor $g$ is not defined, but that point of view also requires specifying the behaviour of $g$ in a neighbourhood of $\mathscr{S}$. This last point will not be needed for the study of the quantum phenomena presented in this paper, but will have to be developed for the study of more complex phenomena. The reader may therefore simply regard the elementary singularities as connected subsets present in $\mathscr{D}$. No law will be imposed on the ``behaviour'' of these singularities; we shall only assume that they are ``equiprobably distributed'' with respect to the metric $g$ (but not $g_0$), in every spacelike submanifold of maximal dimension $(n-2)$. One should not think of a single elementary singularity as representing a ``particle'' in the usual sense. The right point of view is rather the following: a singularity \textbf{in a domain of ``oscillating metric'' type} whose function $f ~(=|a|^{4/(n-2)})$ has the characteristics of an electron may be likened to the standard notion of an electron. If the characteristics of the function $f$ are those of a photon, the singularity \textbf{in that domain} will be likened to the standard notion of a photon. When the singularity lies in a domain of ``potential'' type (for which $f=C^{te}$), it will be ``undetectable'', and so on. This point of view differs fundamentally from all those of the standard quantum (or non-quantum) theories: the characteristics (mass, electric charge, spin, etc.)
will be those \textbf{of the oscillating metric} of the neighbourhood in which the singularity sits. A consequence of these considerations is that the ``dynamics of particles in a potential'' will be entirely governed by the oscillating metric. The study of the latter will be the ``deterministic'' part of the description of the physical phenomena considered, since the differential equations derived have a unique solution for well-specified ``boundary conditions''. The ``random'' part will be the one attached to the set $\mathscr{S}$ of elementary singularities, whose probability of presence in a domain will give, by definition, that of the ``position'' of the particles. To clarify what has just been said, we revisit these notions in the setting of a simple experiment expressed in the classical language of physics: particles launched with a known velocity vector enter a box in which a vacuum has been created and where a potential may exist. The particles leave ``impacts'' on a screen placed on the face opposite the particles' entry hole. This experiment thus takes place in a spacetime domain $\mathscr{D}$ where, outside the interior of the box (denoted $\mathscr{D}_B$) during the ``time'' of the experiment, the metric is assumed to be that of a neutral potential $g_0$ (in dimension $n$, with two minus signs in the signature). The domain ($\mathscr{D}-\mathscr{D}_B,g_0$) may be regarded as the domain of an observer making measurements related to the experiment (measuring the position of the impacts on the screen, for example). These measurements are necessarily made with respect to the observer's metric $g_0$.
For instance, the position of the impacts on the screen will be measured ``by transparency'' on the outer face of the screen, where the metric is indeed $g_0$; or else, after the experiment, by dismantling the box, in which case the trace of the impacts is again measured with the metric $g_0$ (in fact it is mostly $g_0|_\Theta$, i.e. the Minkowski metric, that is used). It is impossible for an observer to make any measurement inside the box during the experiment, since introducing a ``physical object'' into it would necessarily disturb the experiment completely. From the standpoint presented here, this simply means that the presence of another object in the box, an ``oscillating metric with singularities'' for example, completely changes the equations, and the study then becomes that of another domain of spacetime. The domain $(\mathscr{D}_B,g)$ representing the interior of the box during the time of the experiment will therefore be of type ``particles in a potential''. The function $a := f^{(n-2)/4}$ of the metric conformal to $g_{\mathcal P}$ corresponding to the oscillating metric of this type of ``particles'' will satisfy a linear differential equation (expressed in terms of the metric $g_0$). Sufficient precision on the solution will come from the ``boundary'' conditions of the domain $\mathscr{D}_B$. One of the conditions is, of course, the one specifying that the particles enter the box with a given velocity vector. For this one must in fact introduce a third domain $\mathscr{D}'\subset{\mathscr{D}-\mathscr{D}_B}$ in which a type ``particles in a neutral potential moving with velocity $v$ in a given direction'' is rigorously defined; this will be a special case of the type ``particles in a potential''. This particular type will be important because it will serve as an ``initial condition'' for many experiments.
The other ``boundary conditions'' will express the fact that there is no reflection of particles off the inner wall of the box, etc. The problems raised by the boundary conditions will be the same as those arising in classical quantum physics with the state function. The oscillating metric will thus be sufficiently determined. On the other hand, as already said, the ``singularities'' will only be ``equiprobably distributed'' with respect to the metric $g$. To visualize the experiment, the reader may imagine that the interior of the box is ``elastic'' (as is the screen, which we take to have some thickness). During the preparation of the experiment, the metric is everywhere $g_0$. During the experiment, at the moment the particles hit the screen, the effective metric is the oscillating metric $g$, which is ``$g_0$ perturbed by the conformal change of metric, that is, $fg_0$''. The screen is ``deformed'' relative to $g_0$. The positions of the singularities (and hence of the impacts on the screen) are equiprobable on the deformed screen (the effective metric is $g$). At measurement time, after the experiment, the metric is $g_0$ (the screen has recovered its initial shape); the positions of the impacts are no longer equiprobable on the screen, and one sees clearly that the probability law for the impact positions (for $g_0$) is completely determined by the ``deformation'' of spacetime inside the box at the moment of the impacts, that is, by the oscillating metric $g$. One then senses the close link between the ``volume element'' for $g$ and the ``probability density of presence'' of the singularities in the volume element for $g_0$.
In fact, everything just described takes place in a domain of dimension $n>5$, which may be taken to be $\Theta\times W$ where $\Theta$ is an open subset of $\mathbb{R}^4$ and $W$ a compact manifold, the metrics considered having signature $(-, +, +, +, -, +, \cdots, +)$.\\ $\Theta$ may be viewed as standard spacetime and $W$ as the extra ``small dimensions'' (one of which is timelike). The very notion of a ``box'' in such a space needs to be defined, but this poses no problem: it suffices to define it classically in $\Theta$ and then take the Cartesian product with $W$. The box is here regarded purely as the boundary of a domain. All the tensors and differential operators involved in the description of this experiment depend, among other things, on the ``variables of the small extra dimensions''. This is fundamental, and, as already said, it is what distinguishes the description of quantum phenomena from the phenomena described in the first chapter. The classification of oscillating metrics (which thus becomes the analogue of the classification of particles in standard physics) will reduce to choices of particular terms in the spectral decomposition of the function $a$ with respect to the d'Alembertian of the compact manifold ($W,g_W$). The notions usually called ``interactions'' (between particles and/or with potentials), handled classically by choices of Lagrangians-Hamiltonians followed by a ``quantization'' procedure, will be handled here solely by the very simple axiomatics already specified (domain of constant scalar curvature conformal to a potential) and by the choice of the terms of the spectral decomposition. No law, no principle will be added. The study of the spectral decomposition of a function defined on ($W,g_W$) is complex and is of course entirely tied to the precise specification of ($W,g_W$).
We shall present here essentially the study of the interactions of particles with a potential (in other words, the results given by the linear approximation), including the notion of ``spin''. The description of more complex quantum phenomena (currently addressed by quantum field theory) will be presented briefly in Section \ref{s2.17}. We now turn to the precise statements of the definitions, of the results obtained, and of their proofs. \section{The fundamental ``geometric type''\label{s2.1}} The geometric type we now define will allow a complete study of what is classically called ``particles in a potential''. Merely applying its definition will suffice to recover the standard results of quantum physics on the subject. We shall not go further in this study in this paper, but this ``geometric domain'' will also make it possible to address the phenomena currently treated by quantum field theory (hereafter ``Q.F.T.''), and this with a very simple axiomatics. For the reader familiar with Q.F.T., let us point out that the definitions to follow are rather akin to the notion of ``fields'' in Q.F.T.; the ``quanta'' of these fields will correspond to the ``singularities'' defined in Section \ref{s2.11}. The potentials we consider are those defined in (\ref{ss1.2}). They will have constant scalar curvature. The pseudo-Riemannian metric $g$ of a domain $\mathscr{D}$ that characterizes the fundamental ``geometric type'' will be defined by two properties: \begin{enumerate} \item the metric $g$ will be conformal to a potential metric $g_\mathcal P$; in other words it will be of the form $fg_{\mathcal P}$, where $f$ is a nonnegative real function. \item The scalar curvature $S_g$ will be kept equal to $S_{g_0}$ (hence constant).
\end{enumerate} Recall that a conformal transformation of a pseudo-Riemannian metric $g$ is a transformation that privileges no direction (whether of space or of time): it consists in multiplying, at each point $x$ of $\mathscr{M}$, the quadratic form $g_x$ on the tangent space $T_x(\mathscr{M})$ by a real number $f(x) \geq 0$. When performing a conformal change of metric $fg_{\mathcal P}$, it proves simpler to write $f$ in the form ${\left|a\right|}^{4/(n-2)}$ (where $n = \dim \mathscr{M}$), since the differential equation ``in $a$'', derived from the preceding properties, has a simpler form than the one written ``in $f$''. The precise definition of the geometric domain that, for us, describes all ``quantum phenomena'' is therefore the following. \begin{dfn} \label{d2.1} \textbf{A domain of constant scalar curvature, conformal to a potential} is a domain $\mathscr{D}$ of $\mathscr{M}$ satisfying the following two properties: \begin{enumerate} \item For every $x\in\mathscr{D}$ there exist a chart ($\mathscr{V},\varphi$) at $x$ of the $g$-observation atlas and a smooth function $a:\varphi(\mathscr{V})=\mathscr {C}=\Theta \times S^1 \times W\rightarrow \mathbb{R}$ such that $\varphi:(\mathscr{V},g_{\mathscr{M}})\rightarrow (\mathscr{C},{\left|a\right|}^{4/(n-2)}g_{\mathcal P})$ is an isometry. Here $g_{\mathcal P}$ is a metric representing a potential. \item The scalar curvature $S_{g_{\mathscr{M}}}$ equals $S_{g_{\mathcal P}}$ (and is therefore a constant equal to $S_{g_0}$ for the potentials considered). \end{enumerate} \end{dfn} Condition $2.$ of this definition is in fact a \textbf{normalization} condition on the function $a$ of the conformal change of metric. Multiplying $a$ by a positive constant $\lambda$ multiplies the scalar curvature $S_{g'}$, ~where $g'=|a|^{4/(n-2)}g_\mathcal P$,~ by $\lambda^{-4/(n-2)}$.
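This scaling behaviour follows in one line from the standard effect of a constant rescaling of a metric on its scalar curvature ($S_{cg}=c^{-1}S_g$ for a constant $c>0$):

```latex
g'' := |\lambda a|^{4/(n-2)}\,g_{\mathcal P}
     = \lambda^{4/(n-2)}\,g'
\quad\Longrightarrow\quad
S_{g''} = \bigl(\lambda^{4/(n-2)}\bigr)^{-1} S_{g'}
        = \lambda^{-4/(n-2)}\,S_{g'} .
```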
One could have chosen another form of normalization that keeps $S_{g'}$ constant but different from $S_{g_{\mathcal P}}$. Condition $2.$ seemed to me the simplest choice. The importance of the choice of normalization will appear in Section \ref{s2.12}. \begin{rmq}\label{r1'} This remark parallels Remark \ref{r5} of the first chapter. One can define a broader class of domains ``of constant scalar curvature conformal to a potential'' when the standard cell $\mathscr{C}$ of Definition \ref{d2.1} has the form $\mathscr {C} = \Theta \times S^1 \times V_1\times V_2$ and the potentials are defined by a product metric $g_{\mathcal P} = g'_{\mathcal P}\times g_{V_2}$, where $g'_{\mathcal P}$ is a potential metric on $\mathscr{C'} = \Theta \times S^1 \times V_1$, by considering conformal transformations of $g'_{\mathcal P}$ rather than of $g_{\mathcal P}$. Precisely, condition $1.$ of Definition \ref{d2.1} is replaced by: for every $x\in\mathscr{D}$ there exist a chart ($\mathscr{V},\varphi$) at $x$ of the $g$-observation atlas and a smooth function $a:\mathscr {C'}=\Theta \times S^1 \times V_1\rightarrow \mathbb{R}$ such that $\varphi:(\mathscr{V},g_{\mathscr{M}})\rightarrow (\mathscr{C},({\left|a\right|}^{4/(n-2)}g'_{\mathcal P})\times g_{V_2})$ is an isometry. Note that with this definition certain ``directions of space'' are privileged. Of course, all the results that we shall obtain from Definition \ref{d2.1} can be applied to this new definition by considering the manifold to be of dimension $(n-\dim V_2)$. The fact that ${\left|a\right|}^{4/(n-2)}g'_{\mathcal P}\times g_{V_2}$ is a product metric is then easy to handle.
However, the scalar curvature $S_{g_{\mathcal P}}$ differs from $S_{g'_{\mathcal P}}$ when $S_{V_2}\not=0$ (since $S_{g_{\mathcal P}} = S_{g'_{\mathcal P}}+S_{V_2}$), and this may matter, for we shall see later that the notion of ``mass'' is influenced by the scalar curvature (Def. \ref{d2.8}). Extending Definition \ref{d2.1} as just presented will therefore allow more possibilities related to the notion of mass. \end{rmq} \section[The fundamental equation...]{The fundamental equation of a domain ``of constant scalar curvature, conformal to a potential''\label{s2.2}} The transformation law of the scalar curvature under a conformal change of metric $g'={\left|a\right|}^{4/(n-2)}g$ is given by the Yamabe equation: \begin{eqnarray} \frac{4(n-1)}{n-2}\Box_ga+S_ga=S_{g'}{\left|a\right|}^{4/(n-2)}a\label{F0} \end{eqnarray} where $S_g$ (resp. $S_{g'}$) denotes the scalar curvature of $g$ (resp. $g'$) and $\Box_g:=-{\nabla_g}^i{\nabla_g}_i$ is the (geometers') d'Alembertian of $g$. Here the function $a$ may change sign. On the set where $a$ vanishes, the curvature of $g'$ is not defined; on that set equation \ref{F0} reduces to $0 = 0$. Given Definition \ref{d2.1}, the fundamental equation satisfied by the function $a$ for a domain ``of constant scalar curvature, conformal to a potential'' is therefore the following: \begin{eqnarray} \dfrac{4(n-1)}{n-2}\Box_{g_{\mathcal P}}a+S_{g_{\mathcal P}}a=S_{g_\mathcal P}{\left|a\right|}^{4/(n-2)}a\label{F0'} \end{eqnarray} where $S_{g_{\mathcal P}}$ is in fact $S_{g_0}$, which is constant for the potentials considered.
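The Yamabe transformation law can be tested directly in a minimal Riemannian situation. The sketch below is an illustrative assumption, not a computation from the text: it takes $n=3$, a flat background $\delta$ on $\mathbb{R}^3$ (so $S_g=0$) and $a=e^x$, computes the scalar curvature of $g'=|a|^{4/(n-2)}\delta$ from its Christoffel symbols, and compares it with the value predicted by the transformation law:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = [x, y, z]
n = 3                      # dimension of the toy example
a = sp.exp(x)              # hypothetical conformal function, a > 0

f = a**sp.Rational(4, n - 2)   # f = a^{4/(n-2)}
g = sp.diag(f, f, f)           # g' = f * (flat metric), with S_flat = 0
ginv = g.inv()

def Gamma(k, i, j):
    # Christoffel symbols of g'
    return sp.Rational(1, 2) * sum(
        ginv[k, l] * (sp.diff(g[l, i], coords[j])
                      + sp.diff(g[l, j], coords[i])
                      - sp.diff(g[i, j], coords[l]))
        for l in range(n))

def Ricci(i, j):
    r = 0
    for k in range(n):
        r += sp.diff(Gamma(k, i, j), coords[k]) - sp.diff(Gamma(k, i, k), coords[j])
        for l in range(n):
            r += Gamma(k, k, l) * Gamma(l, i, j) - Gamma(k, j, l) * Gamma(l, i, k)
    return r

# scalar curvature of g', computed directly from its Ricci tensor
S_direct = sp.simplify(sum(ginv[i, j] * Ricci(i, j)
                           for i in range(n) for j in range(n)))

# Yamabe prediction with S_g = 0 and Box a = -d^i d_i a (geometers' sign)
box_a = -sum(sp.diff(a, c, 2) for c in coords)
S_yamabe = sp.simplify(sp.Rational(4 * (n - 1), n - 2) * box_a
                       * a**(-sp.Rational(n + 2, n - 2)))

assert sp.simplify(S_direct - S_yamabe) == 0
```

Both sides evaluate to the same expression ($-8e^{-4x}$ in this example), as the equation requires.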
When $S_{g_{\mathcal P}}\not=0$, this equation is nonlinear, which makes it very delicate to use; however, we shall see that in many cases corresponding to classical experiments the function $a$ is $\ll 1$ (with respect to the normalization given in Definition \ref{d2.1}-$2.$). Equation \ref{F0'} can then be ``approximated'' by the equation whose right-hand side is zero. This is made precise in the next section. \section[The linear approximation]{The linear approximation. Domains of type ``oscillating metric in a potential''\label{s2.3}} We now turn to the domains ``of constant scalar curvature, conformal to a potential'' for which the solutions of the fundamental equation \ref{F0'} are ``very close'' to the solutions of the associated \textbf{linear} equation: $$\dfrac{4(n-1)}{n-2}\Box_{g_\mathcal P}a+S_{g_0}a=0$$ this for specified boundary conditions of the domain (fundamental for the validity of this approximation). We shall see further on (Section \ref{s2.12}) that this approximation corresponds to the experiments for which, in classical language, one neglects the interaction of the particles with one another (but not with the potential, of course). The fact that the function $a$ remains $\ll 1$ will express a ``low particle density'' in the experiment considered. Because equation \ref{F0'} is nonlinear, when $a \ll 1$ the term $S_{g_\mathcal P}{\left|a\right|}^{4/(n-2)}$ can be ``absorbed'' into the term $S_{g_\mathcal P}$ of the left-hand side while modifying it only slightly. Of course, this simple fact is not mathematically sufficient for the solutions of the linear equation to be ``close'' to those of equation \ref{F0'}; things need to be made precise.
(Appendix \ref{ea3.4} presents a very simple example of approximating the solutions of a nonlinear equation by those of an associated linear equation, in the spirit of what has just been described, which may help in understanding the process.) We give the following definition. \begin{dfn} \label{d2.2} A domain of type \textbf{oscillating metric in a potential} is a domain ``of constant scalar curvature, conformal to a potential'' for which condition $2.$ of Definition \ref{d2.1} is replaced by the requirement that the function $a$ satisfy: \begin{eqnarray} \Box_{g_\mathcal P}a+Sa=0\label{F1} \end{eqnarray} where we have set $S=\frac{n-2}{4(n-1)}S_{g_0}$. \end{dfn} \textbf{Equation \ref{F1} thus becomes the fundamental equation of a domain of type ``oscillating metric in a potential''.} The terminology ``oscillating metric'' is justified by the fact that in most cases, as we shall see further on, the mere application of Definition \ref{d2.2} forces the function $a:\mathscr {C}=\Theta \times S^1 \times W\rightarrow \mathbb{R}$ to have strong ``oscillations in $t\in\mathbb{R}$'', and possibly in $u\in S^1$. The frequency of the oscillations in ``$t\in\mathbb{R}$'' will be tied to the notion of ``mass'', that of the oscillations in $u\in S^1$ to the notion of ``electric charge'' (Defs. \ref{d2.6} and \ref{d2.8}). The main part of this second chapter aims at recovering the results of classical quantum physics that describe the standard experiments (diffraction, Young's double slit, influence of potentials, the Stern--Gerlach experiment, quantum entanglement, etc.). For these results the interactions of the particles with one another are neglected (they may be considered ``one by one'', except for ``quantum entanglement'' phenomena), and it is therefore equation \ref{F1} that we shall use.
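To see how an equation of this form forces oscillations, one can separate variables on a toy cell. The sketch below is a hypothetical reduced model, not the text's full cell: flat coordinates $(t,x)$ on $\Theta$ with signature $(-,+)$, one timelike compact coordinate $u$ (signature $-$), and the geometers' d'Alembertian $\Box=\partial_t^2-\partial_x^2+\partial_u^2$. An ansatz $a=\cos(\omega t-kx)\cos(pu)$ then solves $\Box a+Sa=0$ exactly when $\omega^2=k^2-p^2+S$, a Klein--Gordon-type dispersion relation in which the compact-direction frequency $p$ shifts the effective mass:

```python
import sympy as sp

t, x, u = sp.symbols('t x u')
omega, k, p, S = sp.symbols('omega k p S')

# oscillating ansatz on the toy cell
a = sp.cos(omega * t - k * x) * sp.cos(p * u)

# geometers' d'Alembertian for the toy signature (-, +, -) on (t, x, u)
box_a = sp.diff(a, t, 2) - sp.diff(a, x, 2) + sp.diff(a, u, 2)

residual = box_a + S * a
# the equation Box a + S a = 0 reduces to the dispersion relation
residual_on_shell = residual.subs(omega**2, k**2 - p**2 + S)
print(sp.simplify(residual_on_shell))  # 0
```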
\begin{rmq} The fundamental equation \ref{F1} may be regarded as the linear approximation of an equation more general than \ref{F0'}. It is a priori not necessary, in Definition \ref{d2.1}, to assume that the conformal change of metric preserves the scalar curvature of ${g_\mathcal P}$ (or even keeps it constant). For the validity of \ref{F1} it suffices to bound $S_{g'}$,~ where $g'=|a|^{4/(n-2)}g_\mathcal P$, so as to normalize the function $a$. Preserving the constancy of the scalar curvature will only be of interest for drawing parallels with certain studies presented in Q.F.T., but these will not be addressed here. \end{rmq} \section{Classification of oscillating metrics\label{s2.4}} As already said, a ``domain of constant scalar curvature, conformal to a potential'' is characterized by two ``objects'': \begin{enumerate} \item The potential, given by $g_{\mathcal P} = g_0 + h$ ~~~(cf. \ref{ss1.2}). \item The function $a:\mathscr{C} = \Theta \times S^1(\delta) \times W\rightarrow \mathbb{R}$. \end{enumerate} For every $x\in\Theta$, the function $a_x(.):S^1(\delta) \times W\rightarrow \mathbb{R}$, given by $a_x(u,v):=a(x,u,v)$, is defined on the \textbf{compact} pseudo-Riemannian manifold $S^1(\delta) \times W$. It therefore admits a spectral decomposition with respect to the d'Alembertian $\Box_{g_0{_{S^1(\delta) \times W}}}$. In fact, given the signature $(-,+,+,\dots,+)$ of $g_0|_{S^1(\delta) \times W}$, it will be preferable to consider independently the spectral decompositions of $a_{(x,u)}(.): W\rightarrow \mathbb{R}$ and of $a_{(x,w)}(.):S^1(\delta) \rightarrow \mathbb{R}$ with respect to the \textbf{Riemannian Laplacians} of $(W,g|_W)$ and of $(S^1(\delta),g_0|_{S^1(\delta)})$. This will be made precise in the next subsection.
The decomposition principle we are going to use is the same as the one that consists in decomposing periodic ``sounds'' into ``pure tones'', which translates mathematically into the Fourier series decomposition of periodic functions defined on $\mathbb{R}$. Periodic functions on $\mathbb{R}$ can be identified with functions defined on a circle $S^1$, and the latter is a compact manifold. ``Compactness'' is fundamental for the validity of a ``discrete'' spectral theory. In the theory that follows, the appearance of ``discrete'' quantities, which justifies the term ``quantum'', will come from the compactness of the manifold $S^1(\delta) \times W$ in the typical cell. Of course, the spectral decomposition we shall use is far more complex than the one where the manifold reduces to a circle. The classification of oscillating metrics that we shall deduce from the spectral decomposition is analogous to the classification of ``particles'' in standard physical theories. We begin by recalling a few results of spectral theory concerning compact Riemannian manifolds. For us, these results will concern the manifolds $(W,g_0|_W)$ and $(S^1(\delta),g_0|_{S^1(\delta)})$. \subsection{Reminder of some results of spectral theory on compact Riemannian manifolds}\label{ssn1} \textbf{Spectral theorem}: consider a \textbf{compact Riemannian} manifold $(V,g)$ and the associated (geometers') Laplace operator $\Delta:=-{\nabla}^i{\nabla}_i$. \begin{enumerate} \item The eigenvalues of the Laplacian $\Delta$ form an increasing sequence of nonnegative reals tending to $+\infty$: $0 = \lambda_0 < \lambda_1 < \lambda_2 <\dots < \lambda_n < \dots$ \item For each eigenvalue $\lambda_i$, the corresponding eigenspace $E_{\lambda_i}$ is finite-dimensional and, for all $i\neq j$, $E_{\lambda_i}$ and $E_{\lambda_j}$ are orthogonal with respect to the standard scalar product of $L^2(V,g)$.
($\dim E_0=1$, $E_0$ being the set of constant functions on $V$.) \item \textbf{The algebraic sum of the eigenspaces $E_{\lambda_i}$ is dense in $C^\infty(V)$ equipped with the uniform topology. In particular, the Hilbert space $L^2(V,g)$ admits a Hilbert basis of eigenfunctions}. \end{enumerate} The following proposition, simple to prove, concerns the ``d'Alembertian'' on a pseudo-Riemannian product manifold. \begin{prop}\label{p2.1'} Consider $k$ compact Riemannian manifolds $(V_i,g_i)$. On the product manifold $V=V_1\times\dots\times V_k$ we define the \textbf{pseudo}-Riemannian tensor: \\ $g = (-g_1) \times (-g_2) \times \dots \times (-g_p) \times (g_{p+1}) \times \dots \times (g_k)$. The associated d'Alembertian is defined by $\Box=-{\nabla^i}{\nabla_i}$ ~~~(if $p=0$, $\Box=\Delta$). Let $E_{\lambda_1},\dots,E_{\lambda_k}$ be eigenspaces respectively associated with the Laplacians of\\ $(V_1,g_1), \dots ,(V_k,g_k)$. Then the set $F$ consisting of the ``finite sums of products'' $f_1f_2 \dots f_k$, where $f_i~\in~E_{\lambda_i}$, is a vector subspace of the eigenspace $E_\lambda$ of the d'Alembertian $\Box$ of the pseudo-Riemannian manifold $(V,g)$ associated with the eigenvalue $\lambda = -\lambda_1-\dots-\lambda_p+\lambda_{p+1}+\dots+\lambda_k$. (The functions $f_i$ are here regarded as defined from $V$ to $\mathbb{R}$; the abuse of notation used consists in identifying $f_i:{V_i}\rightarrow \mathbb{R}$ with $f_i\circ p_i:V\rightarrow \mathbb{R}$, where $p_i:V\rightarrow V_i$ is the canonical projection. This abuse will be frequent in what follows.) \end{prop} In other words, \textbf{$F$ is canonically identified with the tensor product $E_{\lambda_1}\otimes\dots \otimes E_{\lambda_k}$}, which we shall regard as a vector subspace of the eigenspace $E_\lambda$.
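Proposition \ref{p2.1'} can be checked symbolically in the simplest case of two unit circles carrying the pseudo-Riemannian product metric $g=(-g_1)\times(g_2)$; the mode numbers below are illustrative choices, not values taken from the text. A minimal sketch:

```python
import sympy as sp

u, w = sp.symbols('u w', real=True)

# Laplacian eigenfunctions on two unit circles (illustrative mode numbers):
j, k = 3, 5
f1 = sp.cos(j * u)   # Delta_1 f1 = j**2 * f1 on (V_1, g_1)
f2 = sp.cos(k * w)   # Delta_2 f2 = k**2 * f2 on (V_2, g_2)
f = f1 * f2

# d'Alembertian Box = -nabla^i nabla_i for g = (-g_1) x (g_2):
# the u-direction carries the minus sign, so Box f = f_uu - f_ww.
box_f = sp.diff(f, u, 2) - sp.diff(f, w, 2)

# Expected eigenvalue lambda = -lambda_1 + lambda_2 (here p = 1):
lam = -j**2 + k**2
assert sp.simplify(box_f - lam * f) == 0
```

The product eigenfunction $f_1f_2$ is indeed annihilated by $\Box-\lambda$, with the $p$ ``time-like'' factors contributing their eigenvalues with a minus sign.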
\begin{rmq} One can show that if $\lambda$ is an eigenvalue of the d'Alembertian of $(V,g)$ and if there exists a \textbf{unique} $k$-tuple $(\lambda_1,\dots,\lambda_k)$ of eigenvalues of the $(V_i,g_i)$ such that $\lambda=-\lambda_1-\dots-\lambda_p+\lambda_{p+1}+\dots+\lambda_k$, then the eigenspace $E_\lambda$ is exactly $E_{\lambda_1}\otimes\dots \otimes E_{\lambda_k}$. One should beware, on the other hand, of the fact that when the pseudo-Riemannian manifold $(V,g)$ is not Riemannian, there may exist eigenspaces of the d'Alembertian that are infinite-dimensional. \end{rmq} The manifold $V$ considered here is $S^1(\delta)\times W$, and the pseudo-Riemannian metric $g_0|_V$ is the product metric $(g_0|_{S^1(\delta)}\times g|_W)$ of signature $(-,+,\dots,+)$. By the spectral theorem, the function $a:\mathscr {C}=\Theta \times {S^1(\delta)} \times W\rightarrow \mathbb{R}$ which specifies the oscillating metric therefore satisfies the following properties: \begin{enumerate} \item For every $(x,u)\in\Theta\times{S^1(\delta)}$, the function $a_{(x,u)}(.):W\rightarrow\mathbb{R}$ admits the spectral decomposition $a_{(x,u)}(.)=\sum_{i=1}^{\infty}\varphi_i{_{(x,u)}}\alpha_i(.)$, where $(\alpha_i)$ is an orthonormal Hilbert basis of eigenfunctions of $(W,g)$. \item For every $i\in \mathbb{N}^*$ and every $x\in\Theta$, the function $\varphi_{i,x}(.):S^1(\delta)\rightarrow\mathbb{R}$ admits the spectral decomposition (Fourier decomposition): $\varphi_{i,x}(u)=\sum_{j=0}^\infty \zeta_{1,i,j}{_{(x)}}\cos(2\pi ju/\delta)+\zeta_{2,i,j}{_{(x)}}\sin(2\pi ju/\delta)$. \end{enumerate} \textbf{Classifying oscillating metrics will consist in keeping only certain terms of the spectral decomposition of the function $a$}. This will make it possible, among other things, to define characteristic constants for each chosen elementary oscillating metric.
These constants will yield the notions of ``mass'', ``electric charge'', ``spin'', etc., associated with the elementary oscillating metric under consideration, which will then correspond, once the notion of ``singularities'' has been made precise (Section \ref{s2.11}), to the standard notions attached to ``particles''. The choice of the definition we are about to present is essentially guided by the wish to recover the ``standard classifications'' of particles; it takes into account the fact that, given the signature of $g$, $S^1(\delta)$ does not play the same role as $W$. \subsection{Elementary oscillating metrics} \textbf{Elementary} oscillating metrics can be viewed as the ``pure tones'' in the decomposition of a ``periodic sound''. We consider an eigenspace of the d'Alembertian $\Box_{g_0{_{S^1(\delta) \times W}}}$ of the form \\$E_{\lambda,\mu}:=E_{S^1(\delta)}(\lambda)\otimes E_W(\mu)$. \begin{dfn}\label{d2.3} A domain of type \textbf{elementary oscillating metric in a potential associated with $E_{\lambda,\mu}$} is a domain of type oscillating metric in a potential (Definition \ref{d2.2}) for which the function $a:\mathscr{C}\rightarrow\mathbb{R}$ satisfies: for every $x\in\Theta$, $$a_{(x)}(.)\in E_{\lambda,\mu}.$$ \end{dfn} If $\lambda>0$, the eigenspace $E_{S^1(\delta)}(\lambda)$ is two-dimensional. It is one-dimensional when $\lambda=0$. One often chooses the natural bases: \begin{enumerate} \item $(\alpha_1,\alpha_2)$~~if $\lambda>0$, ~~~where~ $\alpha_1(u):=\cos(\sqrt{\lambda}~u)$ and $\alpha_2(u):=\sin(\sqrt{\lambda}~u)$; \item $(\alpha_1=1)$ when $\lambda=0$. \end{enumerate} (We shall later identify $E_{S^1(\delta)}(\lambda)$ with $\varmathbb{C}$ if $\lambda \neq 0$~~ and~~ $E_{S^1(\delta)}(0)$ with $\mathbb{R}$.)
\medskip If one chooses a basis $(\beta_1,\dots,\beta_k)$ of $E_W(\mu)$ ~~($L^2_W$-orthonormal, for instance), the function $a$ then satisfies: for every $x\in\Theta$, \begin{eqnarray} a_x(.)=\sum_{i=1}^k(\varphi_{1,i}{_{(x)}}~\alpha_1+\varphi_{2,i}{_{(x)}}~\alpha_2)~\beta_i\label{f4'} \end{eqnarray} Once the bases of the eigenspaces are specified, the function $a$ is entirely determined by the $2k$ functions $\varphi_{1,i}$ and $\varphi_{2,i}$ ($k$ real functions if $\lambda=0$). We shall see in Section \ref{s2.5'} that, by natural approximation principles, we will be able to limit the number of functions $\varphi_{1,i}$ and $\varphi_{2,i}$ that determine the function $a$ in the decomposition \ref{f4'}, and thereby refine the classification of elementary oscillating metrics. Before that, we present the first important characteristics of elementary oscillating metrics. \section[The associated constants]{The constants associated with an elementary oscillating metric\label{s2.5}} We introduce here, in particular, the notions of \textbf{electric charge} and of \textbf{mass} of an elementary oscillating metric. Of course, the terminology is chosen so that these notions will later correspond exactly to those of standard physics. By Definition \ref{d2.3}, the function $a:\mathscr{C}\rightarrow\mathbb{R}$ characterizing the elementary oscillating metric associated with $E_{\lambda,\mu}$ satisfies: $$\forall x\in\Theta ~~~~~a_x(.)\in E_{\lambda,\mu}:=E_{S^1(\delta)}(\lambda)\otimes E_W(\mu)$$ \textbf{The important constants we are going to define are none other than those built from the eigenvalues $\lambda$ and $\mu$ of $E_{\lambda,\mu}$}. They will therefore be invariant under the changes of charts of the observation atlas coming from Lorentz transformations, since the latter, by definition, ``leave fixed'' the compact manifold $S^1(\delta)\times W$, hence the eigenspace $E_{\lambda,\mu}$.
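Since the charge and mass frequencies defined in the next two subsections are built purely from the pair $(\lambda,\mu)$ and the constant $S$, they can be illustrated by a small numeric sketch; the eigenvalues used below are arbitrary illustrative values in geometric units, not values from the text:

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant (J*s)
C = 2.99792458e8         # speed of light (m/s)

def invariants(lam: float, mu: float, S: float):
    """Charge frequency Q+ = sqrt(lam), mass frequency M with
    M^2 = S + mu - lam (defined only when this is >= 0), and their
    SI counterparts q+ = hbar*Q+ and m = hbar*M/c."""
    q_freq = math.sqrt(lam)
    m2 = S + mu - lam
    if m2 < 0:
        raise ValueError("S + mu - lam < 0: mass frequency undefined here")
    m_freq = math.sqrt(m2)
    return q_freq, m_freq, HBAR * q_freq, HBAR * m_freq / C

# Arbitrary illustrative eigenvalues (geometric units):
Qp, M, q_abs, mass = invariants(lam=4.0, mu=13.0, S=3.0)
assert Qp == 2.0 and M == math.sqrt(12.0)
```

The point of the sketch is only that both invariants are read off from $(\lambda,\mu)$ and $S$, so they are untouched by any change of chart that fixes $S^1(\delta)\times W$.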
\subsubsection{The absolute electric charge} A natural basis of the eigenspace $E_{S^1(\delta)}(\lambda)$, the one used in the decomposition \ref{f4'} of $a_x(.)$, is given by $(\alpha_1,\alpha_2)$ if $\lambda>0$ ~~(where $\alpha_1(u)=\cos({\sqrt\lambda}u)$ and $\alpha_2(u)=\sin({\sqrt\lambda}u)$), and by ($\alpha_1=1$) when $\lambda=0$. \begin{dfn}\label{d2.6} \textbf{The electric charge frequency} of the elementary oscillating metric associated with $E_{\lambda,\mu}$ is the nonnegative constant $Q^+:=\sqrt{\lambda}$. \end{dfn} To recover the standard notion of ``electric charge'' expressed in S.I.\ units, we give the following definition: \begin{dfn}\label{d2.7} \textbf{The absolute electric charge} of the elementary oscillating metric associated with $E_{\lambda,\mu}$ is the constant $q^+:=\hbar Q^+$, where $\hbar$ is the Planck constant. \end{dfn} The definition of the relative (positive or negative) electric charge will be presented further on, in a more restrictive setting of oscillating metrics (Def.\ \ref{d2.14}). The Planck constant appears here only as a factor converting a frequency (expressed in geometric units) into an S.I.\ unit of electric charge. In this regard, recall that in ``geometric units'' a mass is a ``length'', an electric charge is a ``length'', the Planck constant is ``the square of a length'', a frequency is ``the inverse of a length'', etc. \subsubsection{The mass} The justification of the terminology ``mass'' (as well as that of ``electric charge'') will appear clearly further on, when the fundamental equation \ref{F1} is translated into Klein-Gordon and then Schrödinger equations (Thms.\ \ref{2.1} and \ref{2.2}).
The definition given here is specific to elementary oscillating metrics; an extension to a more general setting may be envisaged in more complex studies, but will only be very briefly touched upon in this paper (cf.\ Section \ref{s2.17}). \begin{dfn}\label{d2.8} \textbf{The mass frequency} of the elementary oscillating metric associated with $E_{\lambda,\mu}$ is the nonnegative constant $M$ satisfying $$M^2=S+\mu-\lambda$$ where $S:=\frac{n-2}{4(n-1)}S_{g_0}$, ~~$S_{g_0}$ being the constant scalar curvature of the metric $g_0$ (and of the metrics $g_\mathcal P$ associated with potentials). \end{dfn} The mass frequency is obviously defined here only when $S+\mu-\lambda$ is nonnegative (the cases where $S+\mu-\lambda$ is negative will be studied when the notion of ``lifetime'' comes into play, but this topic will only be touched upon briefly in this paper; cf.\ Section \ref{s2.17}). To recover the notion of ``mass'' expressed in S.I.\ units, we give the following definition. \begin{dfn}\label{d2.10} \textbf{The mass} of the elementary oscillating metric associated with $E_{\lambda,\mu}$ is the constant $m:=\hbar c^{-1}M$, where $c$ is the speed of light. \end{dfn} Note that the scalar curvature $S_{g_0}$ enters the definition of the mass by adding itself to the eigenvalue difference $\mu-\lambda$, this in order to recover, as already said, the standard notion of mass of classical physics. But we shall see (Section \ref{s2.15}) that this fact can be linked to the notion of ``Higgs field'' (the scalar curvature, when it is $>0$, ``gives'' mass to oscillating metrics for which $\mu-\lambda$ is $<0$). \begin{rmq}\label{r2.3} By its very definition, the electric charge frequency is an integer multiple of $\delta^{-1}$, where $\delta$ is the radius of the circle $S^1(\delta)$. The absolute electric charge $q^+$ is therefore an integer multiple of an elementary charge.
On the other hand, the mass of such an oscillating metric has a far more complex set of possible values (although this set is discrete for a fixed manifold $(S^1(\delta)\times W,g_0|_{S^1(\delta)\times W})$). \end{rmq} When the compact manifold $W$ is decomposed as a product $V_1\times\dots\times V_k$, the eigenvalues of the eigenspaces $E_{V_k}(\mu_k)$ are characteristics of the oscillating metric that may prove important. This will be the case for the decomposition $W=S^3(\rho)\times V$, for which the eigenspaces $E_{S^3(\rho)}(\gamma)$ will yield the classification of elementary oscillating metrics in terms of ``spin''. This will be detailed in Section \ref{s2.13}. \section[Refining the classification]{Refining the classification of oscillating metrics. Elementary oscillating metrics of order 1 and 2 in a potential\label{s2.5'}} The precision of the results obtained in the study of elementary oscillating metrics is, of course, tied to the precision with which the compact Riemannian manifold $(W,g_W)$ is specified. The choice of the latter is refined step by step, guided by the wish to describe the experimental results as finely as possible. To be able to state sufficiently precise theorems, it is necessary to know how to ``neglect'' the still unknown ``part'' of the manifold $(W,g_W)$. \vspace{5mm} To recover the results of standard quantum physics that do not take the notion of spin into account, we shall ``neglect'' (with a precise definition (Def.\ \ref{d2.4})) the ``quantum effects'' linked to the ``whole'' manifold $(W,g_W)$, and only the ``quantum effects'' linked to $(S^1(\delta),g_{0_{S^1}})$ will be considered.\\ The elementary oscillating metrics corresponding to this situation will be said to be \textbf{``of order 1''} (Def.\ \ref{d2.9}-1.), and we shall see that it will then suffice to use only \textbf{two} real functions (hence one complex function) to completely determine the equation satisfied by the function $a$. \vspace{3mm} To recover the results that do take the notion of spin into account, we shall assume that the manifold $(W,g_W)$ decomposes in the form $(W,g_W)=(S^3(\rho)\times V,g_{0_{S^3}}\times g_V)$, \\where $(S^3(\rho),g_{0_{S^3}})$ is the standard Riemannian 3-sphere of radius $\rho$. A precise description of this situation will be possible provided one ``neglects'' the quantum effects linked to $(V,g_V)$ and considers only those linked to $(S^1(\delta)\times S^3(\rho), g_{0_{S^1\times S^3}})$.\\ The elementary oscillating metrics corresponding to this situation will be said to be \textbf{``of order 2''} (Def.\ \ref{d2.9}-2.). The number of real functions sufficient to determine the equation satisfied by the function $a$ will here be limited by the dimension of the eigenspaces of the Laplacian on the sphere $(S^3(\rho),g_{0_{S^3}})$. \vspace{3mm} An ever ``finer'' description of the quantum effects may be pursued by further specifying the properties of the manifold $(W,g_W)$. This refinement will not necessarily be obtained by decomposing the manifold $V$ as a product, but if that is the case, we shall define elementary oscillating metrics of order $k>2$. \vspace{5mm} Definition \ref{d2.4} below makes precise what has just been presented, that is, the way in which one ``neglects'' certain ``quantum effects'' linked to the manifold $(W,g_W)$. It concerns the metrics representing potentials, and it applies in the general setting where the compact manifold $(W,g_W)$ decomposes in the form $(V_1,g_{V_1})\times (V_2,g_{V_2})$. The oscillating metrics of order 1 just mentioned (made precise in Definition \ref{d2.9}-1.) are those for which $V_1$ reduces to a singleton (of no interest) and $V_2=W$.\\ The oscillating metrics of order 2 (made precise in Definition \ref{d2.9}-2.) are those for which $V_1=S^3(\rho)$ ~~(and $V_2$ is denoted $V$). \vspace{5mm} We consider a typical cell $\mathscr{C}=\Theta\times S^1(\delta)\times W$ with $W=V_1\times V_2$, where $V_1$ and $V_2$ are two compact Riemannian manifolds ($V_1$ possibly of dimension 0).\\ Let: - $g_0$ be a neutral potential metric on $\mathscr{C}$: $g_0=g_\Theta\times (-g_{S^1(\delta)})\times g_{V_1} \times g_{V_2}$. - $g_{\mathcal P}$ a metric representing an active potential: $g_{\mathcal P}=g_0+h$ ~~(Subsection \ref{ss1.3}). - $E_{V_2}(\mu_2)$ an eigenspace of the Laplacian on the manifold $(V_2,g_{V_2})$ associated with the eigenvalue $\mu_2$. \begin{dfn}\label{d2.4} A potential (with metric $g_{\mathcal P}$) \textbf{is neutral on $E_{V_2}(\mu_2)$} if:\\ for all functions $\varphi:\Theta\times S^1(\delta)\times V_1\rightarrow\mathbb{R}$~~ and~~ $\beta\in E_{V_2}(\mu_2)$: \begin{eqnarray} \Box_{g_\mathcal P}(\varphi\beta)=(\Box_{g_\mathcal P}\varphi+\mu_2\varphi)\beta \label{F4'} \end{eqnarray} ($\varphi$ and $\beta$ may be regarded as defined on $\mathscr{C}$.) \end{dfn} Obviously, equality \ref{F4'} always holds for the metric $g_0$, the latter being a ``product'' metric. It also holds for every potential metric $g_\mathcal{P}$ if $\mu_2=0$. This definition expresses the fact that the metric $g_\mathcal{P}$ has no more influence on the eigenspace $E_{V_2}(\mu_2)$ than the metric $g_0$ has (or at least that the difference of influence is negligible). \vspace{5mm} If we consider a domain of type ``elementary oscillating metric associated with $E_{\lambda,\mu}$ in a potential'' (Def.\ \ref{d2.3}), for which ~~$W=V_1\times V_2$ ~~and $E_{\lambda,\mu}=E_{S^1(\delta)}(\lambda)\otimes E_{V_1}(\mu_1)\otimes E_{V_2}(\mu_2)$~~~(with $k_1:=\dim V_1$ and $k_2:=\dim V_2$), the function $a$ can be written in the form \begin{eqnarray} a=\sum_{i=1}^{k_2}\varphi_i\beta_i\label{F5} \end{eqnarray} where the $\beta_i$ form a basis of $E_{V_2}(\mu_2)$ and the functions $\varphi_i:\Theta\times S^1(\delta)\times V_1\rightarrow\mathbb{R}$ satisfy:\\$\forall x\in\Theta$,~~~ $\varphi_{i_x}(.)\in E_{S^1(\delta)}(\lambda)\otimes E_{V_1}(\mu_1)$ ~~~~($E_{V_1}(\mu_1)=\mathbb{R}$ if $\dim V_1=0$). The fundamental equation \ref{F1} satisfied by $a$ reads: \begin{eqnarray} {\Box_{g_\mathcal{P}}(\sum_{i=1}^{k_2}\varphi_i\beta_i)+S\sum_{i=1}^{k_2}\varphi_i\beta_i}=0\label{F6} \end{eqnarray} \textbf{When the metric $g_{\mathcal{P}}$ is neutral on $E_{V_2}(\mu_2)$}, Equation \ref{F6} is equivalent, by \ref{F4'}, to the $k_2$ \textbf{identical} equations: \begin{center} $\Box_{g_\mathcal{P}}(\varphi_i)+(\mu_2+S)\varphi_i=0$ \end{center} For the determination of the function $a$, when $g_\mathcal {P}$ is neutral on $E_{V_2}(\mu_2)$ one is thus reduced to the study of a single equation of the form \begin{eqnarray} \Box_{g_\mathcal{P}}(\varphi)+(\mu_2+S)\varphi=0\label{F7} \end{eqnarray} where the function $\varphi$ is now defined on $\Theta\times S^1(\delta)\times V_1$. This is particularly simplifying when $k_1:=\dim V_1$ is small; it is the case when $g_\mathcal{P}=g_0$, since $g_0$ is neutral on $W$ in the sense of Definition \ref{d2.4}, but it will also be the case for the active potentials of order 1, as will be made precise in the next paragraph. \vspace{5mm} The potentials we shall consider in what follows are essentially the active potentials ``without electromagnetism'' and the ``electromagnetic'' ones (Subsection \ref{ss1.3}).
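The neutrality equality \ref{F4'} for the product metric $g_0$ (the statement that $g_0$ is neutral on every $E_{V_2}(\mu_2)$) can be verified symbolically in a toy model where $V_1$ is a point and $V_2$ is a unit circle; the mode number below is an illustrative choice:

```python
import sympy as sp

t, u, w = sp.symbols('t u w', real=True)
phi = sp.Function('phi')(t, u)   # function on Theta x S^1(delta), homogeneous in x^k

# beta: Laplacian eigenfunction on a unit-circle factor V_2 (illustrative mode):
k = 4
beta = sp.cos(k * w)
mu2 = k**2                       # Delta_{V_2} beta = mu2 * beta

# d'Alembertian of the neutral product metric g_0 restricted to (t, u, w),
# with the sign conventions of the text: Box = d_tt + d_uu + Delta_{V_2},
# and Delta_{V_2} = -d_ww on the circle.
def box(f):
    return sp.diff(f, t, 2) + sp.diff(f, u, 2) - sp.diff(f, w, 2)

lhs = box(phi * beta)
rhs = (box(phi) + mu2 * phi) * beta
assert sp.simplify(sp.expand(lhs - rhs)) == 0
```

Because $\varphi$ carries no $w$-dependence, the $w$-derivatives act on $\beta$ alone and produce exactly the $\mu_2\varphi\beta$ term, which is the mechanism behind the reduction of \ref{F6} to the single equation \ref{F7}.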
As already announced, the experimental results concerning electromagnetic potentials will be correctly described (``spin'' effects included) when the typical cell decomposes in the following form: \begin{eqnarray} \mathscr {C} = \Theta \times S^1(\delta)\times S^3(\rho)\times V\label{F8} \end{eqnarray} where $S^3(\rho)$ is the standard Riemannian sphere of dimension $3$ and radius $\rho$. \textbf{When ``spin'' effects are neglected}, we shall assume that the electromagnetic potential $g_\mathcal{P}$ is neutral on $E_{{S^3(\rho)}\times{V}}(\mu)$ (in which case the decomposition $W=S^3(\rho)\times{V}$ is unnecessary). This is for a domain of type ``elementary oscillating metric associated with $E_{\lambda,\mu}=E_{S^1(\delta)}(\lambda)\otimes E_{S^3\times{V}}(\mu)$''~~~ (Def.\ \ref{d2.3}). \textbf{When ``spin'' effects are taken into account}, we shall assume that the potential $g_\mathcal{P}$ is neutral only on $E_V(\mu_2)$. (In the first case, one may consider that the potential $g_\mathcal{P}$ of the second case has been a priori ``averaged'' over $S^3(\rho)$.) As for the potentials ``without electromagnetism'', they will often be considered neutral on $E_{S^3\times{V}}(\mu)$. \bigskip The active potentials ``without electromagnetism'' and the ``electromagnetic'' ones were described precisely in Propositions \ref{p1.2} and \ref{p1.3}. The following proposition, whose (very simple) proof is given in Appendix \ref{a3.5}, specifies conditions making these potentials ``neutral'' on $E_{V_2}(\mu)$ when the cell has the form $\mathscr {C} = \Theta \times S^1(\delta)\times V_1\times V_2$. \begin{prop}\label{p2.1} ~\\\vspace*{-1em} \begin{enumerate} \item If, for an active potential without electromagnetism (Proposition \ref{p1.2}), the vector field $X_1$ vanishes on $E_{V_2}(\mu_2)$~~~ (i.e.\ if $\forall \beta\in E_{V_2}(\mu_2),~~~X_1(\beta)=0$), then this potential is neutral on $E_{V_2}(\mu_2)$.
\item If, for an electromagnetic potential (Proposition \ref{p1.3}), the two vector fields $\Upsilon$ and $X_2$ vanish on $E_{V_2}(\mu_2)$, then this potential is neutral on $E_{V_2}(\mu_2)$. \end{enumerate} \end{prop} The preceding considerations thus lead us to refine the classification of elementary oscillating metrics. In the next paragraph we give the precise definition of elementary oscillating metrics of order 1 and 2. \subsubsection{Elementary oscillating metrics of order 1 and 2 in a potential} The typical cell considered is $\mathscr {C} = \Theta \times S^1(\delta)\times W$, and $W$ is decomposed in the form $W=S^3(\rho)\times V$ for order 2. A neutral potential metric is of the form\\ $g_0= g_\Theta\times (-g_{S^1(\delta)})\times g_W$, ~with ~$g_W=g_{S^3(\rho)}\times {g_V}$ for order 2. \begin{dfn}\label{d2.9} ~\\\vspace*{-1em} \begin{enumerate} \item A domain of type \textbf{elementary oscillating metric of order 1 in a potential} is a domain of type oscillating metric in a potential (Def.\ \ref{d2.2}) for which the function $a:\mathscr{C}\rightarrow \mathbb{R}$ is of the form $a=\varphi\beta$, ~~where ~~$\beta\in E_W(\mu)$ ~~and $\varphi:\Theta\times S^1(\delta)\rightarrow \mathbb{R}$ satisfies:\\ $\forall x\in\Theta$, ~~~$\varphi_x(.)\in E_{S^1(\delta)}(\lambda)$.\\ Moreover, the potential is neutral on $E_W(\mu)$. \item A domain of type \textbf{elementary oscillating metric of order 2 in a potential} is a domain of type oscillating metric in a potential (Def.\ \ref{d2.2}) for which the function $a:\mathscr{C}\rightarrow \mathbb{R}$ is of the form $a=\phi\beta$, ~~where ~~$\beta\in E_V(\mu)$ ~~and $\phi:\Theta\times S^1(\delta)\times S^3(\rho)\rightarrow \mathbb{R}$ satisfies:\\ $\forall x\in\Theta$, ~~~$\phi_x(.)\in E_{S^1(\delta)}(\lambda)\otimes E_{S^3(\rho)}(\gamma)$.\\ Moreover, the potential is neutral on $E_V(\mu)$.
\end{enumerate} \end{dfn} Whether for order 1 or order 2, this definition corresponds to Definition \ref{d2.3} when one has taken for the function $a$ \textbf{a single} term of the decomposition \ref{F5}. (Recall that this is justified by the fact that the constraint on the potentials yields identical equations of the form \ref{F7} for each term of the sum \ref{F5}.) \begin{rmq}\label{r2.1} One could have defined oscillating metrics ``of order 0'' in the particular case $\lambda=0$, whereas we have classified them among those of order 1. \\Since $E_{S^1(\delta)}(0)$ is naturally identified with $\mathbb{R}$, and $E_{S^1(\delta)}(\lambda)$ with $\mathbb{C}$ when $\lambda>0$ (as will be made precise in Section \ref{s2.8}), the function $\varphi$ of the preceding definition is identified with a \textbf{real} function defined on $\Theta$ when $\lambda=0$, and with a \textbf{complex} function if $\lambda>0$. Likewise, since $E_{S^1(\delta)}(\lambda)\otimes E_{S^3(\rho)}(\gamma)$ is identified with the complexification $E_{S^3(\rho)}^{\varmathbb C}(\gamma)$, the function $\phi$ of an elementary oscillating metric of order 2 is identified with a function defined on $\Theta$ with values in $E_{S^3(\rho)}^{\varmathbb C}(\gamma)$. As already mentioned, the process may be pursued to define oscillating metrics of order $k>2$, corresponding to decompositions of $W$ finer than one of the form $S^3(\rho)\times V$, but this will not be addressed in this paper. \end{rmq} \begin{rmq}\label{r2.2} The reader familiar with Q.F.T.\ may begin to draw the following parallel with the theory presented here: \begin{enumerate} \item Elementary oscillating metrics of order 0 are analogous to the scalar fields of Q.F.T., those of order 1 to complex fields, those of order 2 to spinor fields, etc. \item The quanta of the ``fields'' of Q.F.T.\ will correspond to the singularities presented in Section \ref{s2.11}.
\item The Fock space of a ``system of particles'' is replaced by the space of regular functions \{$a:\mathscr {C} = \Theta \times S^1(\delta) \times W\rightarrow \mathbb{R}$\}. \item No Lagrangian, no Hamiltonian, and no ``quantization'' procedure will be used here. Everything will be ``governed'' by the single fundamental equation \ref{F0'}, which is nothing but a simple consequence of the chosen geometric type. \item The precise description of the experiments that we shall deduce from the theory presented here will differ profoundly from the one given in classical quantum physics and in Q.F.T. It will essentially be tied to the ``deformation'' of the space-time $\mathscr{M}$, whose interpretation can only be made precise after Section \ref{s2.12}. \end{enumerate} \end{rmq} \section[Important examples]{Important examples of elementary oscillating metrics in a neutral potential\label{s2.6}} The (very) particular examples of oscillating metrics that we shall describe in this section are those which, in the language of classical physics, represent ``flows of particles'' moving at a constant velocity $\overrightarrow{v}$ (hence necessarily in a neutral potential and within the linear approximation). They will make it possible, for example, to write down precisely ``boundary conditions'' for the description of experiments whose principle is based on ``sending particles'' launched at a velocity $\overrightarrow{v}$ into a physical system. These examples will also allow us to introduce naturally the notion of \textbf{relative} electric charge, and this will then be extended to a much more general class of oscillating metrics (Section \ref{s2.7}). We begin with the case of elementary oscillating metrics of order 1 (Def.\ \ref{d2.9}); the extension to the more general case of elementary oscillating metrics associated with $E_{\lambda,\mu}$ (Def.\ \ref{d2.3}) poses no difficulty and will be presented at the end of this section. \subsection{Homogeneous elementary oscillating metrics of order 1 in a neutral potential} By Definition \ref{d2.9}, the function $a$ characterizing an elementary oscillating metric of order 1 in a potential is of the form $a=\varphi\beta$, where $\beta\in E_W(\mu)$ and $\varphi:\Theta\times S^1(\delta)\rightarrow \mathbb{R}$ satisfies: $\forall x\in\Theta ~~~\varphi_x(.)\in E_{S^1(\delta)}(\lambda)$. We assume that this oscillating metric is \textbf{``homogeneous-stationary''}, by which we mean that the function $a$ does not depend on the space variables $(x^1,x^2,x^3)$ of $\Theta$. This property is obviously not invariant under Lorentz transformations on $\Theta$; the word ``stationary'' is relative to the choice of chart in the observation atlas. The function $\varphi$ is then of the form $$\varphi(t,u)=\varphi_1(t)\cos(Q^+u)+\varphi_2(t)\sin(Q^+u)$$ where we have set $t=x^0$, $u=x^4$, and $Q^+:=\sqrt \lambda$ is the electric charge frequency (Def.\ \ref{d2.6}).
The fundamental equation \ref{F1} here reads $$\Box_{g_0}a+Sa=0$$ $$\text{where} ~~ S=\frac{n-2}{4(n-1)}S_{g_0}~~ \text{and} ~~~\Box_{g_0}=\partial^2/\partial t^2-\sum_{k=1}^{3}{\partial^2/{(\partial x^k)^2}}+\partial^2 /{\partial u}^2+\Delta_{{g_0}_W}$$ Using the fact that $\Delta_{{g_0}_W}(\beta)=\mu\beta$, this gives $$\frac {\partial^2{\varphi_1}}{\partial t^2}\cos(Q^+u)+\frac {\partial^2{\varphi_2}}{\partial t^2}\sin(Q^+u)+(S+\mu-(Q^+)^2)\varphi=0$$ which is equivalent to $$\frac {\partial^2{\varphi_1}}{\partial t^2}=-M^2\varphi_1 ~~~\text{and}~~~\frac {\partial^2{\varphi_2}}{\partial t^2}=-M^2\varphi_2$$ since $M^2=S+\mu-(Q^+)^2$. In other words, the oscillating metrics satisfying the preceding properties have functions $a$ (seen in the chart of the observation atlas) of the form\\ $a=\beta((A_1\cos(Mt)+A_2\sin(Mt))\cos(Q^+u)+(B_1\cos(Mt)+B_2\sin(Mt))\sin(Q^+u))$\\ where $A_1,A_2,B_1,B_2$ are constants. After transformation this reads $$a=\beta(C_1\cos(Mt+Q^+u)+C_2\sin(Mt+Q^+u))+\beta(C_3\cos(Mt-Q^+u)+C_4\sin(Mt-Q^+u))$$ where $C_1=\frac12(A_1-B_2),~ C_2=\frac12(A_2+B_1),~ C_3=\frac12(A_1+B_2),~ C_4=\frac12(A_2-B_1)$. The functions $a$ can therefore be written in the form ${a^+}+{a^-}$ with $$a^+=\beta(C_1\cos(Mt+Q^+u)+C_2\sin(Mt+Q^+u))$$ $$a^-=\beta(C_3\cos(Mt-Q^+u)+C_4\sin(Mt-Q^+u))$$ Note that there is no ambiguity in the definition of $a^+$ and $a^-$ provided $M$ is nonzero. If we set $Q=Q^+$ for $a^+$ and $Q=-Q^+$ for $a^-$, the functions $a^+$ and $a^-$ take the same form: $$\beta( C\cos(Mt+Qu)+C'\sin(Mt+Qu))$$ On this very particular example, the constant $Q$ will define \textbf{the relative electric charge} of the oscillating metric, positive for $a^+$, negative for $a^-$; this is well defined only if $M>0$.
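That $a^+$ (and likewise $a^-$) solves the fundamental equation can be confirmed symbolically; the computation below treats the factor $\beta$ through its eigenvalue $\mu$ and substitutes the mass relation $M^2=S+\mu-(Q^+)^2$:

```python
import sympy as sp

t, u = sp.symbols('t u', real=True)
M, Q, S, mu, C1, C2 = sp.symbols('M Q S mu C1 C2', real=True)

# a^+ without its beta factor; Delta_W beta = mu*beta lets us replace
# Delta_W by multiplication by mu in the homogeneous-stationary case.
phi = C1 * sp.cos(M*t + Q*u) + C2 * sp.sin(M*t + Q*u)

# Box_{g0} a + S a, with the x^k-derivatives vanishing (homogeneity):
expr = sp.diff(phi, t, 2) + sp.diff(phi, u, 2) + mu*phi + S*phi

# Substituting M^2 = S + mu - Q^2 must annihilate the expression:
residual = sp.expand(expr).subs(M**2, S + mu - Q**2)
assert sp.simplify(residual) == 0
```

The same substitution with $Q\mapsto -Q$ covers $a^-$, since only $Q^2$ enters the equation.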
(One may relate $a^+$ and $a^-$ to the notions of ``particle'' and ``antiparticle''.) We therefore give the following definition: \begin{dfn}\label{d2.11} A domain of type \textbf{homogeneous-stationary elementary oscillating metric for $(\mathcal{V},\zeta)$, of order 1}, is a domain $\mathcal{V}$ of type ``elementary oscillating metric of order 1 in a neutral potential'' for which, in the chart $(\mathcal{V},\zeta)$, the function $a$ is of the form $a=\varphi\beta$, where $\beta\in E_W(\mu)$ and $\varphi:\Theta\times S^1(\delta)\rightarrow \mathbb{R}$ satisfies $$\varphi(t,x^1,x^2,x^3,u)=C\cos(Mt+Qu)+C'\sin(Mt+Qu)$$ (and hence does not depend on $x^1,x^2,x^3$).\\ (Here $Q$ is the relative electric charge and may be positive or negative.) \end{dfn} To define a homogeneous elementary oscillating metric \textbf{moving at a constant velocity $\overrightarrow{v}$ relative to a chart $(\mathcal{V}',\zeta')$ of the observation atlas}, it naturally suffices to consider a Lorentz transformation $\wedge:\Theta\rightarrow\Theta'$ corresponding to an observer attached to $\Theta'$ moving at velocity $-\overrightarrow{v}$ relative to $\Theta$. One then defines the $g_0$-isometry\\ $\sigma:\mathscr{C}=\Theta\times S^1(\delta)\times W\rightarrow\mathscr{C'}=\Theta'\times S^1(\delta)\times W$ by setting $\sigma= \wedge\times I_d$, ~~where $I_d$ denotes the identity map on $S^1(\delta)\times W$.
We leave it to the reader to check that the function $a\circ\sigma$, which corresponds to the function of the oscillating metric ``seen'' in the chart $(\mathcal{V}',\zeta')$ when $a$ is the one ``seen'' in $(\mathcal{V},\zeta)$, is of the form $a\circ\sigma=\varphi'\beta$, where $\beta\in E_W(\mu)$ and\\ $\varphi'(t,(x^k),u)=C\cos(M't-\sum_{k=1}^3\lambda_kx^k+Qu)+C'\sin(M't-\sum_{k=1}^3\lambda_kx^k+Qu)$.\\ The velocity vector $\overrightarrow{v}$ is $(1/M')(\lambda_1,\lambda_2,\lambda_3)\in \mathbb{R}^3$, with $\sum_{k=1}^3\lambda_k^2<M'^2$.\\ (Here the tangent space at any point of $\Omega$ is canonically identified with $\mathbb{R}^3$ when $I\times\Omega= \Theta\subset\mathbb{R}^4$.)\\ The constant $M'>0$ is called \textbf{the relativistic mass seen in the chart $(\mathcal{V}',\zeta')$}. The fact that, by definition, $\Box_{g_0}a+Sa=0$ shows that $M=(1-\lvert\overrightarrow{v}\rvert^2)^{1/2}M'$, where $M$ is the (rest) mass already defined. We therefore give the following definition: \begin{dfn}\label{d2.12} A homogeneous elementary oscillating metric (of order 1) in a neutral potential \textbf{has a constant propagation velocity $\overrightarrow{v}$ relative to a chart $(\mathcal{V},\zeta)$ of the observation atlas} if the corresponding function $a=\varphi\beta$ is such that:\\ $\varphi(t,(x^k),u)=C\cos(M't-\sum_{k=1}^3\lambda_kx^k+Qu)+C'\sin(M't-\sum_{k=1}^3\lambda_kx^k+Qu)$.\\ The velocity vector is then $\overrightarrow{v}=(1/M')(\lambda_1,\lambda_2,\lambda_3)\in \mathbb{R}^3$. \end{dfn} When $\overrightarrow{v}=0$ we obviously recover definition \ref{d2.11}.
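The relation $M=(1-\lvert\overrightarrow{v}\rvert^2)^{1/2}M'$ is simply the dispersion relation $M^2=M'^2-\sum_k\lambda_k^2$ forced by the fundamental equation; an added symbolic check (not part of the text, assuming SymPy):

```python
import sympy as sp

t, u = sp.symbols('t u', real=True)
x = sp.symbols('x1:4', real=True)
lam = sp.symbols('lambda1:4', real=True)
Mp, Q, C, Cp = sp.symbols("Mprime Q C Cprime", real=True)

phase = Mp*t - sum(l*xi for l, xi in zip(lam, x)) + Q*u
phi = C*sp.cos(phase) + Cp*sp.sin(phase)

# wave operator on Theta x S^1(delta) (t and u both "timelike")
box = sp.diff(phi, t, 2) - sum(sp.diff(phi, xi, 2) for xi in x) + sp.diff(phi, u, 2)

# box(phi) = -(M'^2 - sum(lambda_k^2) + Q^2) phi, so with Delta_W(beta) = mu*beta
# the fundamental equation forces M^2 = M'^2 - sum(lambda_k^2) (= S + mu - Q^2)
M2 = Mp**2 - sum(l**2 for l in lam)
assert sp.expand(box + (M2 + Q**2)*phi) == 0

# consistency with M = sqrt(1 - |v|^2) M' where v = (lambda_k)/M'
v2 = sum(l**2 for l in lam)/Mp**2
assert sp.expand((1 - v2)*Mp**2 - M2) == 0
```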
\subsection{The more general case of elementary oscillating metrics associated with $E_{\lambda,\mu}$}\label{ss+2} If one chooses a basis $(\beta_1,\dots,\beta_k)$ of $E_W(\mu)$, the function $a$ representing such an oscillating metric decomposes in the form \ref{F4'}: $$a(x,u,w)=\sum_{i=1}^k(\varphi_{1,i}(x)\cos(Q^+u)+\varphi_{2,i}(x)\sin(Q^+u))~\beta_i(w)$$ Each term of this sum may be regarded as the function $a_i$ of an elementary oscillating metric of order 1. \begin{dfn}\label{d2.13} A domain of type elementary oscillating metric associated with $E_{\lambda,\mu}$ is \textbf{homogeneous-stationary for a chart $(\mathcal{V},\zeta)$} if, for every $i$ from 1 to $k$, the functions $a_i$ correspond to those given in definition \ref{d2.11}, all with the same value of $Q$. (When $Q=\sqrt{\lambda}=Q^+$ the relative electric charge is positive; when $Q=-\sqrt{\lambda}=-Q^+$ it is negative.) \end{dfn} Definition \ref{d2.12} is generalized in the same way. These definitions obviously do not depend on the choice of the basis $(\beta_1,\dots,\beta_k)$. \section{The relative electric charge\label{s2.7}} We now extend the notion of relative electric charge, introduced in the preceding subsection, to the more general setting of elementary oscillating metrics associated with $E_{\lambda,\mu}$ (def.\ \ref{d2.3}), for which the notion of absolute electric charge was made precise in definitions \ref{d2.6} and \ref{d2.7}. Recall that here the standard cell is $\mathscr{C}=\Theta\times S^1(\delta)\times W$ and that the functions $a:\mathscr{C}\rightarrow\mathbb{R}$ satisfy: $\forall x\in \Theta ~~~a_x(.)\in E_{\lambda,\mu}$.
\begin{dfn}\label{d2.14} A domain of type elementary oscillating metric associated with $E_{\lambda,\mu}$ in a potential \textbf{has a well-defined (relative) electric charge} if $Q^+=0$, \textbf{or} if the vector field defined on $\Theta$ by: \begin{eqnarray} \int_{S^1\times W} \frac{\partial a}{\partial u}~~(\overrightarrow{grad}_{g_0{_\Theta}}a)~~dv_{g_0{_{S^1\times W}}}\label{F9} \end{eqnarray} is a \textbf{timelike} vector field which, moreover, is: \begin{enumerate} \item either everywhere in the time orientation of $\Theta$, \item or everywhere opposite to the time orientation of $\Theta$. \end{enumerate} In the first case, \textbf{the relative charge frequency $Q$ is defined to be $Q^+$}; in the second case, \textbf{to be $-Q^+$}. (As before, one defines \textbf{the electric charge $q$} by setting $q=\hbar Q$.) \end{dfn} In the integral \ref{F9}, the function $\frac{\partial a}{\partial u}:\mathscr{C}\rightarrow\mathbb{R}$ is nothing other than the function $Y(a)$, where $Y$ is the vector field that defines electromagnetism, and the field $\overrightarrow{grad}_{g_0{_\Theta}}a$ is the vector field tangent to $\Theta$ defined by:\\ $\overrightarrow{grad}_{g_0{_\Theta}}a:=(\partial_0a)\partial_0-\sum_{k=1}^3(\partial_ka)\partial_k$, ~~where~~ $\partial_0:=\frac{\partial}{\partial x^0}=\frac{\partial}{\partial t}$~~ and~~ $\partial_k:=\frac{\partial}{\partial x^k}$. \vspace{5mm} By its ``intrinsic'' definition, the relative electric charge is invariant under the Lorentz transformations of $\Theta$ that preserve the time orientation. We leave it to the reader to check that the relative electric charge given by definition \ref{d2.14} agrees with the one given in the particular setting of homogeneous oscillating metrics (defs.\ \ref{d2.12} and \ref{d2.13}) and that, when $M=0$, the vector field defined by \ref{F9} is spacelike or lightlike.
(The relative electric charge is therefore not well defined in that case.) \section[The associated canonical functions]{The canonical functions associated with elementary oscillating metrics of order 1 and 2\label{s2.8}} Recall that the standard cell considered for these oscillating metrics is $\mathscr {C} = \Theta \times S^1(\delta)\times W$ and that, for order 2, $W$ is decomposed in the form $W= S^3(\rho)\times V$.\\ A neutral-potential metric is of the form $g_0= g_\Theta\times (-g_{S^1(\delta)})\times g_W$, where $g_W=g_{S^3(\rho)}\times {g_V}$ for order 2. \vspace{5mm} By definition \ref{d2.9}, the function $a$ characterizing the oscillating metric satisfies, for order 1: $$a=\varphi\beta$$ where ~~$\beta\in E_{S^3\times V}(\mu)$ ~~and ~~$\varphi:\Theta\times S^1\rightarrow\mathbb{R}$ satisfies: $\forall x\in\Theta, ~~~~~\varphi_x(.)\in E_{S^1(\delta)}(\lambda)$. Hence, $\forall (x,u)\in \Theta\times S^1(\delta)$, \begin{eqnarray}\label{F10} \varphi_x(u)=\varphi_1(x)\cos(Q^+u)+\varphi_2(x)\sin(Q^+u). \end{eqnarray} And for order 2: $$a=\phi\beta$$ where ~~ $\beta\in E_V(\nu)$ ~and ~$\phi:\Theta\times S^1(\delta)\times S^3(\rho)\rightarrow\mathbb{R}$ satisfies: $\forall x\in\Theta, ~~\phi_x(.)\in E_{S^1(\delta)}(\lambda)\otimes E_{S^3(\rho)}(\gamma)$. Hence, $\forall (x,u,s)\in \Theta\times S^1(\delta)\times S^3(\rho)$, \begin{eqnarray}\label{F11} \phi_x(u,s)=\phi_{1,x}(s)\cos(Q^+u)+\phi_{2,x}(s)\sin(Q^+u). \end{eqnarray} We shall now define \textbf{the canonical functions} from the functions $\varphi$ and $\phi$ above. These functions contain all the essential information of the function $a$ and are easier to handle. They have the further advantage of being defined on the ``apparent space'' $\Theta\subset\mathbb{R}^4$, and can therefore be compared with the functions appearing in standard quantum theories.
\begin{dfn}\label{d2.15} ~\\\vspace*{-1em} \begin{enumerate} \item Consider a domain of type ``elementary oscillating metric \textbf{of order 1} in a potential''.\\ \textbf{The canonical function} associated with this domain is the \textbf{complex} function $a_c:\Theta\rightarrow\mathbb{C}$ ~~ given, when $Q^+\neq 0$,~ by ~$a_c(x):=\varphi_1(x)+i\varphi_2(x)$, \\where ~$\varphi_1$ and $\varphi_2$ are defined by \ref{F10},\\ and given simply by $a_c=\varphi_1$ if $Q^+=0$ ~~(in this last case it is a real function). \item Consider a domain of type ``elementary oscillating metric\\ \textbf{of order 2} in a potential''.\\ \textbf{The canonical function} associated with this domain is the function\\ $a_c:\Theta\rightarrow E_{S^3(\rho)}^{\mathbb{C}}(\gamma)$ \\given, ~when ~$Q^+\neq0$,~ by $a_c(x):=\phi_{1,x}+i\phi_{2,x}\in E_{S^3(\rho)}(\gamma)+iE_{S^3(\rho)}(\gamma)$\\(where~~$E_{S^3(\rho)}^{\mathbb{C}}(\gamma)$ is the ``complexification'' $\mathbb{C}\otimes E_{S^3(\rho)}(\gamma)=E_{S^3(\rho)}(\gamma)+iE_{S^3(\rho)}(\gamma)$ ~and ~ $\phi_{1,x}$ and $\phi_{2,x}$ are defined by \ref{F11}),\\ and given simply by $a_c(x):=\phi_{1,x}$ if $Q^+=0$. \end{enumerate} \end{dfn} This definition can of course be extended to elementary oscillating metrics of order $k>2$ when the compact manifold $V$ is decomposed into a product. In fact, the canonical functions $a_c$ are directly related to the existence of the following canonical isomorphisms, which we shall use in the proofs: Consider the eigenspaces $E_{S^1(\delta)}(\lambda)$. For the eigenvalues $\lambda\neq0$, all these eigenspaces are of dimension 2 and have as canonical basis the two functions ``$\cos(\sqrt \lambda u)$'' and ``$\sin(\sqrt \lambda u)$'' expressed in the standard coordinate system of the oriented circle $S^1(\delta)$ (cf.\ \ref{ss1.1}).
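Definition \ref{d2.15} encodes the pair $(\varphi_1,\varphi_2)$ into the complex function $a_c$; the reconstruction identity stated below as \ref{F14} can be checked directly at a fixed point $x$. A minimal symbolic sketch (an added illustration, assuming SymPy):

```python
import sympy as sp

u, Qp, phi1, phi2 = sp.symbols('u Qplus phi1 phi2', real=True)

# order-1 canonical function at a fixed point x: a_c = phi1 + i*phi2
a_c = phi1 + sp.I*phi2

# Re(e^{-i Q+ u} a_c) must recover the decomposition (F10)
lhs = sp.re(sp.expand((sp.cos(Qp*u) - sp.I*sp.sin(Qp*u))*a_c))
rhs = phi1*sp.cos(Qp*u) + phi2*sp.sin(Qp*u)
assert sp.expand(lhs - rhs) == 0
```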
\\When $\lambda\neq0$, the canonical isomorphism $\mathbb{C}_\lambda:E_{S^1(\delta)}(\lambda)\rightarrow\mathbb{C}$ is defined by: \begin{eqnarray}\label{F12} \mathbb{C}_{\lambda}(A\cos(\sqrt \lambda (.))+B\sin(\sqrt \lambda(.))):=A+iB \end{eqnarray} In the case of oscillating metrics of order 1, the canonical function $a_c$ therefore satisfies:\\$\forall x\in\Theta, ~~a_c(x)=\mathbb{C}_\lambda(\varphi_x(.))$. The isomorphism $\mathbb{C}_\lambda$ then naturally induces an isomorphism: \begin{eqnarray}\label{F13} \mathbb{C}_{\lambda,\gamma}:E_{S^1(\delta)}(\lambda)\otimes E_{S^3(\rho)}(\gamma)\rightarrow E_{S^3(\rho)}^{\mathbb{C}}(\gamma) \end{eqnarray} And in the case of oscillating metrics of order 2, the canonical function $a_c$ satisfies:\\$\forall x\in\Theta, ~~a_c(x)=\mathbb{C}_{\lambda,\gamma}(\phi_x(.))$.\\ One then quickly shows that, for order 1 or 2: \begin{eqnarray}\label{F14} a=Re(\beta e^{-iQ^+u}a_c) \end{eqnarray} \section[The Klein-Gordon equations]{The Klein-Gordon equations (for various potentials) obtained as mere consequences of the definition of the ``geometric type''\label{s2.9}} The important result of this section is stated in theorem \ref{2.1} below. It concerns elementary oscillating metrics of order 1 in a potential. Those of order 2, which take the notion of ``spin'' into account, are in fact treated identically, but for clarity of exposition we shall present that case only in section \ref{s2.13}.\\ The equations obtained are merely translations of the fundamental equation $\Box_{g_{\mathcal P}}a+Sa=0$ satisfied in a domain of type ``elementary oscillating metric in a potential''. We begin by recalling and making precise the domains of type ``potential'' that we shall use and that we have already studied (cf.
\ref{ss1.2} and \ref{s1.5}): We consider a chart $(U,\zeta)$ of the observation atlas such that $\zeta(U)$ is the standard cell $\mathscr{C}=\Theta\times S^1(\delta)\times W$, whose coordinates are written $(x,u,w)$ with $x=(t,x^1,x^2,x^3)\in \Theta\subset\mathbb{R}^4$; the ``timelike'' coordinates $x^0$ and $x^4$ are here denoted $t$ and $u$. \begin{enumerate} \item \textbf{The neutral potential} \vspace{4mm} The metric is $g_0= g_\Theta\times (-g_{S^1(\delta)})\times {g_W}$, ~~where ~~$g_\Theta$ is the Minkowski metric on $\Theta \subset \mathbb{R}^4$, ~~$g_{S^1(\delta)}$ is the standard metric of $S^1(\delta)$, and ~~$g_W$ is a Riemannian metric with constant scalar curvature on the compact manifold $W$. \item \textbf{The active potential without electromagnetism} \vspace{4mm} The metric is, as we have seen (cf.\ \ref{ss1.2}), of the form $g=g_0+h$ where, at each point $P$, the endomorphism $\leftidx{^e}{h_P}$ is nilpotent of index 2. We have (proposition \ref{p1.2}): $h=-2vX_1^\flat\otimes X_1^\flat$, ~~$g_0(X_1,\frac{\partial}{\partial t})=1$,~~$g_0(X_1,Y)=0$,~~$g_0(X_1,X_1)=0$. Here $v$ is the potential function, and $X_0$ has been chosen equal to $\frac{\partial}{\partial t}$, which is attached to the chart of the observation atlas. We assume that the hypothesis $H_N$ made for the study of geodesics (cf.\ \ref{ss1.4}) is satisfied (with $D_{g_0}X_1=0$), that $S_g=S_{g_0}$ (see remark \ref{r13}), and that the metric $g$ is neutral on $E_W(\mu)$ (def.\ \ref{d2.4}) when the oscillating metric considered is associated with $E_{\lambda,\mu}:=E_{S^1}(\lambda)\otimes E_W(\mu)$. These last points are summarized in \textbf{the following hypothesis $H_{1,N}$}: \begin{enumerate} \item $S_{\mathcal P}=S_{g_0}$. \item The potential function $v$ is defined on $\mathcal U$ when $\Theta=I\times\mathcal U\subset \mathbb{R}\times\mathbb{R}^3$. \item The vector field $X_1$ is defined on $I\times W$, ~~$D_{g_0}X_1=0$, and $X_1$ vanishes on $E_W(\mu)$.
\end{enumerate} \item \textbf{The electromagnetic potential}\\ The metric is, as we have seen (cf.\ \ref{ss1.2}), of the form $g=g_0+h$ where, at each point $P$, the endomorphism $\leftidx{^e}{h_P}$ is nilpotent of index 2 or 3. We have (proposition \ref{p1.3}): $h=\Upsilon^\flat\otimes X_2^\flat+X_2^\flat\otimes\Upsilon^\flat$,~~$g_0(X_2,Y)=1$,~~$g_0(X_2,\Upsilon)=0$,~~$g_0(X_2, X_2)=0$. We assume that the hypothesis $H_E$ made for the study of geodesics (cf.\ \ref{ss1.4}) is satisfied (with $D_{g_0}X_2=0$), that $S_g=S_{g_0}$ (see remark \ref{r14}), and that the metric $g$ is neutral on $E_W(\mu)$ (def.\ \ref{d2.4}). These last points are summarized in \textbf{the following hypothesis $H_{1,E}$}: \begin{enumerate} \item $S_{\mathcal P}=S_{g_0}$. \item The vector field $\Upsilon$ is defined on $\Theta$. \item The vector field $X_2$ is defined on $S^1(\delta)\times W$, ~~$D_{g_0}(X_2)=0$, and $X_2$ vanishes on $E_W(\mu)$. \end{enumerate} \end{enumerate} The hypotheses $H_{1,N}$ and $H_{1,E}$ may be viewed as ``approximations'' expressing the fact that certain quantum effects relative to the compact manifold $W$ are neglected. Dropping these hypotheses would introduce additional perturbative terms into the equations given by theorem \ref{2.1}. It may be interesting to study these perturbations; the problem, however, is more delicate than it first appears because, on examining the details of the proof of theorem \ref{2.1}, one realizes that the choice of the linear approximation (section 2.3) imposes constraints on the potentials (satisfied in particular under the hypotheses $H_{1,N}$ and $H_{1,E}$). Dropping the hypotheses $H_{1,N}$ and $H_{1,E}$ entirely would therefore force us to use the fundamental equation \ref{F0'} rather than its linear approximation \ref{F1}, which makes the study much more difficult.
\begin{thme}\label{2.1} Consider a domain of type ``elementary oscillating metric of order 1 in a potential'' (def.\ \ref{d2.9}). Then, in the three cases of potentials considered, the canonical function $a_c$ satisfies the following ``Klein-Gordon'' equations: \begin{enumerate} \item \textbf{In a neutral potential.} \begin{eqnarray}\label{F15} \Box_\Theta a_c+ M^2a_c=0 \end{eqnarray} where ~~$\Box_\Theta=\frac{\partial^2}{(\partial t)^2}-\sum_{k=1}^3\frac{\partial^2}{(\partial x^k)^2}$~~ and ~~$M$ is the mass frequency of the oscillating metric. \item \textbf{In a potential without electromagnetism, under the hypothesis $H_{1,N}$.} \begin{eqnarray}\label{F16} \Box_\Theta a_c+ M^2a_c-2v\frac{\partial^2a_c}{(\partial t)^2}=0 \end{eqnarray} where ~~$v$ is the potential function (def.\ \ref{def:11}). \item \textbf{In an electromagnetic potential, under the hypothesis $H_{1,E}$.} \begin{eqnarray}\label{F17} \sum_{j=0}^3\varepsilon_j(i\frac{\partial}{\partial x^j}+Q^+\Upsilon^j)^2a_c+M^2a_c=0 \end{eqnarray} where~~ $\varepsilon_j=g_{0jj}$, ~~i.e.\ ~~$\varepsilon_0=-1$ ~~and~~ $\varepsilon_1=\varepsilon_2=\varepsilon_3=+1$. (The square of a differential operator is of course understood as its composition with itself.) \end{enumerate} \end{thme} The proof of this theorem is detailed in the appendix (\ref{a3.6}). \begin{rmq}\label{r2.4} Equations \ref{F15} and \ref{F17} are invariant under the changes of charts coming from Lorentz transformations. This is not the case for equation \ref{F16}, because the ``observation field'' $X_0$ was chosen equal to $\frac{\partial}{\partial t}$ attached to a chart of the observation atlas, and this field is not invariant under Lorentz transformations. \end{rmq} \textbf{It is important to note that the equations given by this theorem are forced conclusions of the sole definition of a domain ``with constant scalar curvature, conformal to a potential'', within the linear approximation.
No law, no principle has been added to obtain these equations.} \bigskip \textbf{Remark on the Dirac equation.} The Klein-Gordon-type equations given by theorem \ref{2.1} are of the form $D(a_c)=0$, where $D$ is a differential operator \textbf{of order 2}. In a neutral potential, $D=\Box_\Theta+M^2$. The Dirac equation is obtained by ``factoring'' the differential operator $D$ into two differential operators \textbf{of order 1}. Having first-order differential operators is fundamental in standard quantum physics, essentially to make its probabilistic axiomatics coherent. This is not at all the case for the theory presented here, as will be made precise in section \ref{s2.12}. We therefore have no conceptual interest in introducing first-order equations, and it is the equations of theorem \ref{2.1} that remain fundamental.\\ However, from a purely ``computational'' point of view it can be useful, when studying solutions of the Klein-Gordon equations, to decompose the second-order differential operator into a ``composition'' of first-order operators. This makes it easy to introduce the notion of the ``exponential of an operator'' and provides a powerful tool (much used in Q.F.T.) that can be of interest even for us. That said, at the end of this section we do not yet have enough elements to give the physical interpretation of the results of theorem \ref{2.1} that makes it possible to explain and fully describe the experiments usually studied by standard quantum physics; for that we must wait until the end of section \ref{s2.12}. The equations given by theorem \ref{2.1} are, of course, valid when $M=0$, although in that case the relative electric charge is in general not defined (but it does not appear in these equations).
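As a sanity check of case 1 of theorem \ref{2.1} (an added illustration, not part of the text, assuming SymPy): writing $a=\varphi\beta$ with $\varphi$ as in \ref{F10} and $\Delta_W\beta=\mu\beta$, the fundamental equation separates into the two real components of \ref{F15}:

```python
import sympy as sp

t, u = sp.symbols('t u', real=True)
x = sp.symbols('x1:4', real=True)
S, mu, Qp = sp.symbols('S mu Qplus', real=True)
phi1, phi2 = (sp.Function(n)(t, *x) for n in ('phi1', 'phi2'))

def box_theta(f):
    # wave operator on Theta: d^2/dt^2 - sum_k d^2/d(x^k)^2
    return sp.diff(f, t, 2) - sum(sp.diff(f, xi, 2) for xi in x)

# a = phi*beta with Delta_W(beta) = mu*beta turns Box_{g0} a + S a = 0 into:
phi = phi1*sp.cos(Qp*u) + phi2*sp.sin(Qp*u)
fund = box_theta(phi) + sp.diff(phi, u, 2) + (S + mu)*phi

# component Klein-Gordon operators of (F15), with M^2 = S + mu - (Q+)^2
M2 = S + mu - Qp**2
kg1 = box_theta(phi1) + M2*phi1
kg2 = box_theta(phi2) + M2*phi2

# the fundamental equation is exactly kg1*cos(Q+ u) + kg2*sin(Q+ u)
assert sp.expand(fund - (kg1*sp.cos(Qp*u) + kg2*sp.sin(Qp*u))) == 0
```

Since $\cos(Q^+u)$ and $\sin(Q^+u)$ are independent (for $Q^+\neq0$), the fundamental equation holds exactly when both components vanish, i.e.\ when $a_c=\varphi_1+i\varphi_2$ satisfies \ref{F15}.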
``Zero-mass'' oscillating metrics in fact have physical characteristics very different from those of ``strictly positive mass''. Section \ref{s2.14} will make this precise and will also consider ``zero-mass'' oscillating metrics of order 2. When $M$ is strictly positive, the next section will show that theorem \ref{2.1} yields, as an approximation, the Schrödinger equations of standard quantum physics for the state function, which govern the behavior of ``particles in a potential''. The link between ``the state function'' defined here and that of usual quantum physics will only appear later. \section[The state function]{The state function of a domain of type ``oscillating metric in a potential''.\\ The Schrödinger equations recovered as approximations of the Klein-Gordon equations\label{s2.10}} The state function will be defined from the canonical function $a_c$ (def.\ \ref{d2.15}), but it acquires meaning only in the setting of elementary oscillating metrics for which the electric charge is well defined (def.\ \ref{d2.14}); in particular, the mass will be assumed strictly positive. The equations we are going to obtain will therefore be valid only in a more restricted setting than the one assumed in theorem \ref{2.1}. The terminology ``state function'' was chosen because the equations it satisfies will be very close to the Schrödinger equations (in various potentials) governing the ``state function'' of classical quantum physics. However, the interpretation of the ``probabilistic'' results attached to the state function (as well as to the canonical function $a_c$) will be fundamentally different and will be made precise in section \ref{s2.12}.
\begin{dfn}\label{d2.16} Consider a domain of type ``elementary oscillating metric in a potential'' whose mass $m$ is strictly positive and whose charge $q$ is well defined (def.\ \ref{d2.14}). \textbf{The state function} associated with the function $a$ that characterizes the oscillating metric (seen in a chart of the observation atlas) is the function: $\varPsi:\Theta\rightarrow\mathbb{C}$~~ for order 1, $\varPsi:\Theta\rightarrow E_{S^3(\rho)}^{\mathbb{C}}(\gamma)$~~ for order 2, which satisfies: $\varPsi=e^{iMt}a_c$ ~~if the electric charge is positive, $\varPsi=e^{iMt}\overline{a_c}$ ~~if the electric charge is negative, where the conjugate $\overline{a_c}$ is taken in $\mathbb{C}$ for order 1 and in $E_{S^3(\rho)}^{\mathbb{C}}(\gamma)$ for order 2. (If the electric charge is zero, these two equalities coincide: $\varPsi=e^{iMt}\varphi$, since then $a_c=\overline{a_c}=\varphi$.) \end{dfn} Note that, by \ref{F14}: \begin{eqnarray}\label{F1'} a=\beta(Re(\varPsi e^{-i(Mt+Qu)}))=\beta(\varPsi_1\cos(Mt+Qu)+\varPsi_2\sin(Mt+Qu)) \end{eqnarray} where ~~$\varPsi=\varPsi_1+i\varPsi_2$. \begin{rmq}\label{r2.5} The idea leading to the definition of the state function just given is to ``factor out'' of the function $a$ characterizing the oscillating metric as much of the ``oscillation'' in $t$ as possible. Indeed, as we have seen, the pseudo-Riemannian metric $g$ of an elementary oscillating metric satisfies a form of ``wave equation'', and the frequency of this ``wave'' is precisely the mass frequency $M$. By writing $a_c=e^{-iMt}\varPsi$ one seeks to make the variations of $\varPsi$ in $t$ as small as possible, by factoring $a_c$ through $e^{iMt}$, which involves only the fundamental constant $M$. This really makes sense only when the oscillating metric considered is ``close'' to a homogeneous oscillating metric (def.
\ref{d2.13}) (when the latter is stationary, one easily shows that $\varPsi$ no longer depends on $t$), which is the case in the description of the generic experiments of standard quantum physics. \end{rmq} \begin{rmq}\label{r2.6} The modulus of $\varPsi$ equals that of $a_c$. In standard quantum physics this means that the ``probability density of presence of a particle'' is also given by $|a_c|$. This equality will likewise matter for us in the ``probabilistic interpretation'' given in section \ref{s2.12}. \end{rmq} The following theorem is merely a rewriting of theorem \ref{2.1} with the canonical function $a_c$ replaced by the function $\varPsi$ given in definition \ref{d2.16} (we restrict ourselves here to elementary oscillating metrics of order 1; those of order 2 will be studied in section \ref{s2.13}). The verification of the equations obtained is left to the reader. \begin{thme}\label{2.2} Under the same hypotheses as those of theorem \ref{2.1}, and when the electric charge is well defined, the state function $\varPsi$ satisfies the following equations: \begin{enumerate} \item \textbf{In a neutral potential.} \begin{eqnarray}\label{F2} 2iM\frac{\partial\varPsi}{\partial t}=\Delta\varPsi+\frac{\partial^2\varPsi}{\partial t^2} \end{eqnarray} where $\Delta$ is the geometers' Laplacian, $\Delta=-(\sum_1^3\frac{\partial^2}{(\partial x^k)^2})$, and $M$ the mass frequency. \item \textbf{In an active potential without electromagnetism.} \begin{eqnarray}\label{F3} 2iM\frac{\partial\varPsi}{\partial t}=\Delta\varPsi+2vM^2\varPsi+(1-2v)\frac{\partial^2\varPsi}{\partial t ^2}+4iMv\frac{\partial\varPsi}{\partial t} \end{eqnarray} where ~~$v$~~ is the potential function.
\item \textbf{In an electromagnetic potential.} \begin{eqnarray}\label{F4} 2iM\frac{\partial\varPsi}{\partial t}=\sum_{j=0}^3\varepsilon_j(i{\frac{\partial}{\partial x^j}}+Q\Upsilon^j)^2\varPsi-2MQ\Upsilon^0\varPsi \end{eqnarray} where~~ $\varepsilon_j=g_{0jj}$, ~~i.e.\ ~~$\varepsilon_0=-1$ ~~and~~ $\varepsilon_1=\varepsilon_2=\varepsilon_3=+1$,\\ $\Upsilon$ is the electromagnetic potential, and $Q$ the electric charge frequency. \end{enumerate} \end{thme} \begin{rmq}\label{r2.7} As already noted in remark \ref{r2.4} concerning the Klein-Gordon equations, equations \ref{F2} and \ref{F4} are invariant under the changes of charts coming from Lorentz transformations, and this is not the case for equation \ref{F3}. Of course, none of these equations will remain invariant once the ``negligible'' terms have been removed to recover the classical Schrödinger equations. \end{rmq} It is interesting to rewrite the equations obtained in theorem \ref{2.2} in the more usual units in which the standard Schrödinger equations are written. The ``time'' denoted $t$ in the geometric unit system will be denoted $\tau$ in the S.I.\ system, and the ``time'' $u$ will be denoted $u'$. The mass $m$ (in kg) and the electric charge $q$ (in C) are defined, as we have seen, from the mass frequency $M$ and the electric charge frequency $Q$ by: $m:=\hbar c^{-1}M$ and $q:=\hbar Q$.
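The verification left to the reader reduces, in the neutral case, to substituting $a_c=e^{-iMt}\varPsi$ into \ref{F15}; a symbolic sketch of this computation (positive-charge convention, an added check assuming SymPy):

```python
import sympy as sp

t = sp.symbols('t', real=True)
x = sp.symbols('x1:4', real=True)
M = sp.symbols('M', positive=True)
Psi = sp.Function('Psi')(t, *x)

# positive-charge convention: a_c = e^{-iMt} * Psi
a_c = sp.exp(-sp.I*M*t)*Psi

# geometers' Laplacian, and the Klein-Gordon operator of (F15)
Delta = lambda f: -sum(sp.diff(f, xi, 2) for xi in x)
KG = sp.diff(a_c, t, 2) + Delta(a_c) + M**2*a_c

# Schrodinger-type equation (F2), written as "left side minus right side"
F2 = 2*sp.I*M*sp.diff(Psi, t) - (Delta(Psi) + sp.diff(Psi, t, 2))

# KG(a_c) = 0 is equivalent to F2 = 0: KG * e^{iMt} = -F2 identically
assert sp.simplify(sp.expand(KG*sp.exp(sp.I*M*t) + F2)) == 0
```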
If one introduces the $\tau$-mass frequency $M'$ (in $s^{-1}$) and the $\tau$-electric charge frequency $Q'$ (in $s^{-1}$) by setting $M':=cM$ and $Q':=cQ$, the preceding equalities read: \begin{eqnarray}\label{F5''} mc^2=\hbar M' ~~~\text{and} ~~~ qc=\hbar Q' \end{eqnarray} Likewise, it is natural to introduce the Newtonian potential $v'$ (in $m^2s^{-2}$) by setting $v':=c^2v$ (where $v<1$ is dimensionless) and the electric potential $\phi:=c\Upsilon_0=-c\Upsilon^0$, since it is ``$\Upsilon_0~dt$'' that appears in the definition of the 1-form $(\leftidx{^e}{h}(Y))^\flat$. Equations \ref{F2}, \ref{F3}, \ref{F4} obtained in theorem \ref{2.2} then read, still denoting by $\varPsi$ the function of the variables $(\tau,x^1,x^2,x^3)$: \begin{eqnarray}\label{F6'} i\frac{\partial\varPsi}{\partial \tau}=\frac{\hbar}{2m}\Delta\varPsi+\frac{\hbar}{2{mc^2}}\frac{\partial^2\varPsi}{\partial \tau^2} \end{eqnarray} \begin{eqnarray}\label{F7'} i\hbar\frac{\partial\varPsi}{\partial \tau}=\frac{{\hbar}^2}{2m}\Delta\varPsi+v'm\varPsi+\frac{1}{c^2}\left(\frac{\hbar^2}{2m}(1-2v'/c^2)\frac{\partial^2\varPsi} { \partial \tau^2}+2i\hbar v'\frac{\partial\varPsi}{\partial \tau}\right) \end{eqnarray} \begin{eqnarray}\label{F8'} i\hbar\frac{\partial\varPsi}{\partial \tau}=\frac{{\hbar}^2}{2m}\sum_{k=1}^3(i{\frac{\partial}{\partial x^k}}+\frac{q}{\hbar}\Upsilon_k)^2\varPsi+q\phi\varPsi-\frac{1}{2mc^2}(q\phi-i\hbar{\frac{\partial}{\partial \tau}})^2\varPsi \end{eqnarray} The reader will note that, once the last term of the right-hand side is removed, the three equations obtained are identical to the classical Schrödinger equations for the state function of a particle in a potential. The essential difference is that the differential equations of theorem \ref{2.2} are of order 2 in $t$.
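To illustrate the size of the correction term in \ref{F6'}: its coefficient is $\hbar/2mc^2=1/2M'$. A rough numeric illustration for an electron (the physical constants below are standard CODATA-style values, assumed here and not taken from the text):

```python
# standard CODATA-style values (assumed, not taken from the text)
hbar = 1.054571817e-34   # J*s
m_e  = 9.1093837015e-31  # kg
c    = 2.99792458e8      # m/s

# tau-mass frequency M' = m c^2 / hbar  (equation F5'')
M_prime = m_e*c**2/hbar

# the correction term of (F6') carries the coefficient hbar/(2 m c^2) = 1/(2 M')
coeff = hbar/(2*m_e*c**2)
assert abs(coeff*M_prime - 0.5) < 1e-12

# for an electron M' is about 7.8e20 s^-1, so the second-order time derivative
# enters suppressed by ~1/M' relative to the first-order term
assert 7.7e20 < M_prime < 7.9e20
```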
The last term of equations \ref{F6'}, \ref{F7'}, \ref{F8'} written in the S.I.\ system carries the coefficient $\frac{1}{c^2}$, which suggests that it can easily be ``neglected'' so as to recover the standard Schrödinger equations. This point obviously has to be made precise, and that is the purpose of the next subsection. \subsection{The $\varepsilon$-approximations} The aim here is to give the mathematical conditions justifying the approximations that reduce the equations given by theorem \ref{2.2} to the corresponding classical Schrödinger equations. The conditions fall into two categories: the $\varepsilon$-approximations for the state function and the $\varepsilon$-approximations for the potentials. \bigskip For the clarity of what follows, we begin by writing down what we have called ``the classical Schrödinger equations'', then we specify how they differ from equations \ref{F2}, \ref{F3}, \ref{F4} obtained in theorem \ref{2.2}. \textbf{The classical Schrödinger equations} \begin{enumerate} \item In a neutral potential. \begin{eqnarray}\label{F9'} 2iM\frac{\partial\varPsi}{\partial t}=\Delta\varPsi \end{eqnarray} \item In an active potential without electromagnetism. \begin{eqnarray}\label{F10'} 2iM\frac{\partial\varPsi}{\partial t}=\Delta\varPsi+2vM^2\varPsi \end{eqnarray} \item In an electromagnetic potential.
\begin{eqnarray}\label{F11'} 2iM\frac{\partial\varPsi}{\partial t}=\sum_{k=1}^3(i{\frac{\partial}{\partial x^k}}+Q\Upsilon^k)^2\varPsi-2MQ\Upsilon^0\varPsi \end{eqnarray} \end{enumerate} To recover equations \ref{F2}, \ref{F3}, \ref{F4}, the terms to be added are respectively: \begin{eqnarray}\label{F12'} 1 : \frac{\partial^2\varPsi}{\partial t^2}, ~~~~~2 : (1-2v)\frac{\partial^2\varPsi}{\partial t ^2}+4iMv\frac{\partial\varPsi}{\partial t}, ~~~~~3 : -(i{\frac{\partial}{\partial t}}+Q\Upsilon^0)^2\varPsi \end{eqnarray} \subsubsection{The $\varepsilon$-approximations for the state function} The intuitive idea is that these approximations will be valid when the velocities involved are very small compared with the speed of light. In the setting of elementary oscillating metrics, the notion of velocity has only been made precise for homogeneous oscillating metrics in a neutral potential (def.\ \ref{d2.12}). We therefore begin, on this example, by computing the state function $\varPsi$ so as to determine the relations between the derivatives of $\varPsi$, $\varPsi$ itself, and the propagation velocity relative to a chart $(U,\zeta)$ of the observation atlas. The function $a$ associated with a homogeneous oscillating metric in a neutral potential is, by definition, of the form $a=\varphi\beta$ where:\\ $\varphi(t,(x^k),u)=C\cos(M't-\sum_{k=1}^3\lambda_kx^k+Qu)+C'\sin(M't-\sum_{k=1}^3\lambda_kx^k+Qu)$ and ~$\beta\in E_W(\mu)$, ~with $\sum_{k=1}^3 \lambda_k^2 <M'^2$. The velocity vector $\overrightarrow{v}$ has components $\frac{1}{M'}(\lambda_1,\lambda_2,\lambda_3)$. $M'$ is the relativistic mass frequency and is related to $M$ by $M=\sqrt{1-|\overrightarrow{v}|^2}M'$. A simple computation shows that:\\ $\varphi(t,(x^k),u)=(C\cos\alpha+C'\sin\alpha)\cos(Qu)+(C'\cos\alpha-C\sin\alpha)\sin(Qu)$, where we have set $\alpha=M't-\sum_{k=1}^{3}\lambda_kx^k$. One deduces that the associated canonical function (def.
\ref{d2.15}) is, if $Q>0$: $a_c=(C\cos\alpha+C'\sin\alpha)+i(C'\cos\alpha-C\sin\alpha)=(C+iC')e^{-i(M't-\sum_{k=1}^{3}\lambda_kx^k)}$ and equals its conjugate if $Q<0$. The state function $\varPsi$ (def.\ \ref{d2.16}) therefore satisfies (whatever the sign of $Q$):\\ $\varPsi=(C+iC')e^{-i((M'-M)t-\sum_{k=1}^{3}\lambda_kx^k)}.$ Then:\\ $|\varPsi|^2=C^2+C'^2, ~~|\frac{\partial\varPsi}{\partial t}|=(M'-M)|\varPsi|, ~~|\frac{\partial^2\varPsi}{\partial t^2}|=(M'-M)|\frac{\partial\varPsi}{\partial t}| \\=(M'-M)^2|\varPsi|, ~~|\triangledown_{x_1,x_2,x_3}\varPsi|^2=(\sum_{k=1}^{3}\lambda_k^2)|\varPsi|^2, ~~|\Delta_{x_1,x_2,x_3}\varPsi|=(\sum_{k=1}^{3}\lambda_k^2)|\varPsi|.$ Since $|\overrightarrow{v}|^2=\frac{1}{M'^2}\sum_{k=1}^{3}\lambda_k^2$~~ and ~~$M'-M=((1-|\overrightarrow{v}|^2)^{-1/2}-1)M=(\frac{1}{2}|\overrightarrow{v}|^2+o(|\overrightarrow{v}|^2))M$,~~~ one deduces: \begin{eqnarray}\label{F13'} |\frac{\partial\varPsi}{\partial t}|=(\frac{1}{2}|\overrightarrow{v}|^2+o(|\overrightarrow{v}|^2))M|\varPsi|, ~~~~|\frac{\partial^2\varPsi}{\partial t^2}|=(\frac{1}{4}|\overrightarrow{v}|^4+o(|\overrightarrow{v}|^4))M^2|\varPsi| \end{eqnarray} The idea is then to consider that the elementary oscillating metrics (for which the notion of velocity has not been defined) involved in the experiments for which the approximations we are about to state are valid (diffraction, Young's slits, the Stern-Gerlach experiment, quantum entanglement, etc.) are merely ``weak perturbations'' of homogeneous oscillating metrics. The relations \ref{F13'}, together with the terms given by \ref{F12'}, therefore motivate the following definition. \begin{dfn}\label{d2.17} Consider a domain of type ``elementary oscillating metric of order 1 in a potential'' whose associated state function, relative to a chart of the observation atlas, is denoted $\varPsi$.
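The expansions in \ref{F13'} follow from the Taylor series of $(1-|\overrightarrow{v}|^2)^{-1/2}$; an added symbolic check, assuming SymPy:

```python
import sympy as sp

v, M = sp.symbols('v M', positive=True)  # v = |v| < 1 in geometric units

# M' - M = ((1 - v^2)^(-1/2) - 1) M : leading behaviour (1/2) v^2 M
dM = ((1 - v**2)**sp.Rational(-1, 2) - 1)*M
assert sp.expand(sp.series(dM, v, 0, 4).removeO() - v**2*M/2) == 0

# hence (M'-M)^2, which controls |d^2 Psi/dt^2|, is (1/4) v^4 M^2 at leading order
assert sp.expand(sp.series(dM**2, v, 0, 6).removeO() - v**4*M**2/4) == 0
```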
We shall say that \textbf{$\varPsi$ satisfies the $\varepsilon$-approximations} if there exist a real number $0<\varepsilon<1$ and two functions $\mu,\mu':\Theta\rightarrow\mathbb{C}$ such that: \begin{eqnarray}\label{F14'} \frac{\partial\varPsi}{\partial t}=\varepsilon^2M\mu\varPsi, ~~~~\frac{\partial^2\varPsi}{\partial t^2}=\varepsilon^4M^2\mu'\varPsi \end{eqnarray} \vspace{-1cm} where, on $\Theta$: \begin{enumerate} \item $|\mu|<1$ ~~and~~ $|\mu'|<1$; \item there exist two constants $C_1$ and $C_2$ such that, for every $k$ from $0$ to $3$, $|\frac{\partial\mu}{\partial x^k}|<C_1$ ~~and~~ $|\frac{\partial\mu'}{\partial x^k}|<C_2$. \end{enumerate} In geometric units, $C_1$ and $C_2$ are inverse lengths; their choice is made precise in the lines that follow. \end{dfn} Using the fact that $\frac{\partial^2\varPsi}{\partial t^2}=\varepsilon^4M^2\mu'\varPsi$, equation \ref{F2} reads: $$2iM\frac{\partial\varPsi}{\partial t}=\Delta\varPsi+\varepsilon^4M^2\mu'_1\varPsi+i\varepsilon^4M^2\mu'_2\varPsi$$ where we have set $\mu'=\mu'_1+i\mu'_2$. The term $\varepsilon^4\mu'_1$ thus appears exactly like the Newtonian potential $2v$ of equation \ref{F10'}, and the term $\varepsilon^4M^2\mu'_2$ like one of the terms of the magnetic potential $Q\frac{\partial\Upsilon_k}{\partial x^k}$ of equation \ref{F11'}. Equation \ref{F2} then corresponds to a classical Schrödinger equation with a Newtonian potential and a magnetic potential. If $\varepsilon \ll 1$ and $C_1$, $C_2$ are sufficiently small relative to the characteristic constants of the experiment under study, these potentials can be regarded as negligible, and the solution $\varPsi$ of \ref{F2} is very close to the one given by \ref{F9'} (for specified boundary conditions).
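As a quick numerical illustration of the estimates \ref{F13'} (a sketch with arbitrarily chosen values, not part of the theory): for the plane-wave state function $\varPsi=(C+iC')e^{-i((M'-M)t-\sum_k\lambda_kx^k)}$, the time derivative is smaller than $M\varPsi$ by a factor of order $|\overrightarrow{v}|^2/2$, which is exactly the $\varepsilon^2$ scaling of Definition \ref{d2.17} with $\varepsilon\sim|\overrightarrow{v}|$.

```python
# Check of F13' on the plane-wave state function (hypothetical amplitudes
# and frequencies; only the exact relations of the text are asserted):
#   Psi = (C + iC') * exp(-i((M' - M)t - sum_k lambda_k x^k)),
#   |dPsi/dt| = (M' - M)|Psi|,  with  M' - M = (|v|^2/2 + o(|v|^2)) M.

import cmath, math

C, Cp = 1.0, 0.5            # arbitrary amplitudes C and C'
Mp = 10.0                   # relativistic mass frequency M'
lam = (0.3, 0.2, 0.1)       # lambda_k, with sum(lam^2) < M'^2
v2 = sum(l*l for l in lam) / Mp**2       # |v|^2 = (1/M'^2) sum lambda_k^2
M = math.sqrt(1.0 - v2) * Mp             # M = sqrt(1 - |v|^2) M'

def Psi(t, x):
    phase = (Mp - M) * t - sum(l * xi for l, xi in zip(lam, x))
    return (C + 1j * Cp) * cmath.exp(-1j * phase)

t, x, h = 0.7, (0.1, 0.2, 0.3), 1e-6
dPsi_dt = (Psi(t + h, x) - Psi(t - h, x)) / (2 * h)   # central difference

# |dPsi/dt| = (M' - M)|Psi|  (exact for the plane wave)
assert abs(abs(dPsi_dt) - (Mp - M) * abs(Psi(t, x))) < 1e-6
# leading order: M' - M = (|v|^2/2) M up to o(|v|^2)
assert abs((Mp - M) - 0.5 * v2 * M) / M < v2**2
```

With $|\overrightarrow{v}|^2\approx 1.4\times10^{-3}$ here, the ratio $|\partial_t\varPsi|/(M|\varPsi|)\approx 7\times10^{-4}$, i.e. $\varepsilon^2|\mu|$ with $\varepsilon\sim|\overrightarrow{v}|$.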
This principle of comparison between solutions can be applied without difficulty to equations \ref{F3} and \ref{F4}, using in addition the fact that $\frac{\partial\varPsi}{\partial t}=\varepsilon^2M\mu\varPsi$ and the constraints we shall now impose on the potentials. \begin{rmq}\label{r2.8} The solution of a Schrödinger equation ``with active potentials'' depends on the ``derivatives'' of those potentials; this is why we imposed the second condition, on the derivatives of $\mu$ and $\mu'$. The values of $\varepsilon$, as well as the constants $C_1$ and $C_2$, chosen to make the terms \ref{F12} negligible depend on the experiment under consideration. (A potential may have a negligible effect on the results of an experiment if its duration is short, yet cease to be negligible for the same type of experiment over a ``long'' time.) \end{rmq} \subsubsection{The $\varepsilon$-approximations for the potentials} As will be seen in section \ref{s2.12}, in order to recover the probabilistic interpretation of the state function $\varPsi$, the volume elements of the potential metrics, restricted to the ``space submanifolds'', must be close to those of the metric $g_0$ restricted to the same submanifolds. This amounts to requiring that, in the standard coordinate system, the determinant of the matrix $M_{g_{\mathcal P_r}}$ be close to that of $M_{g_0{_r}}$, where $g_{\mathcal P_r}$ is the metric of the potential ``restricted to the spacelike submanifold''. The metric $g_{\mathcal P}$ of the potential is in general of the form $g_{\mathcal P}=g_0+h$. In the examples of potentials we have given, $\det M_{g_{\mathcal P_r}}$ is close to $\det M_{g_0{_r}}$ if, in the standard coordinate system, $|h_{ij}| \ll 1$, which reads, for the active potential without electromagnetism, $|v| \ll 1$, and for the electromagnetic potential, $|\Upsilon_k| \ll 1$ for $k$ from $0$ to $3$.
On the other hand, looking at the Schrödinger equations \ref{F10} and \ref{F11}, the ``comparable'' terms are: \begin{eqnarray}\label{F15'} M\frac{\partial\varPsi}{\partial t}, ~~vM^2, ~~Q^2\Upsilon^2_{1,2,3}, ~~MQ\Upsilon_0. \end{eqnarray} Likewise, by expression 3 of \ref{F12}, $Q\frac{\partial\Upsilon_0}{\partial t}$ is comparable to $Q^2\Upsilon_0^2$.\\ If the $\varepsilon$-approximations on $\varPsi$ given by \ref{F14'} are taken to be valid, it is natural, in order for all the terms of expression \ref{F15'} to be of the same order in $\varepsilon$, to give the following definition. \begin{dfn}\label{d2.18} Consider a domain of type ``elementary oscillating metric of order 1 in a potential'' whose associated state function, relative to a chart of the observation atlas, is denoted $\varPsi$. We shall say that \textbf{the potentials satisfy the $\varepsilon$-approximations} if there exists a real number $0<\varepsilon<1$ such that the two sets of inequalities 1 and 2 below hold: \begin{enumerate} \item $|v|<\varepsilon^2, ~~|Q\Upsilon_0|<M\varepsilon^2, ~~|Q\Upsilon_{1,2,3}|<M\varepsilon$. \item $|Q\frac{\partial\Upsilon_0}{\partial t}|<M^2\varepsilon^4$; there exists a constant $C$ such that, for $k$ from $0$ to $3$, $|Q\frac{\partial^2\Upsilon_0}{\partial x^k \partial t}|<CM^2\varepsilon^4$, the choice of $C$ being based on the same principle as that used for the constants $C_1$ and $C_2$ of Definition \ref{d2.17}. \end{enumerate} \end{dfn} The three inequalities of part 1 of this definition suffice to estimate the difference between the volume elements of the metrics $g_{\mathcal P}$ and $g_0$ restricted to the ``spacelike submanifolds'' defined rigorously in the next section.
The inequalities given in Definitions \ref{d2.17} and \ref{d2.18} make it possible to show, when $\varepsilon \ll 1$ and the constants $C_1$, $C_2$ are well chosen, that the terms \ref{F12} appear in equations \ref{F2}, \ref{F3}, \ref{F4} exactly like standard potential terms in Schrödinger equations, and that they are ``negligible''. In what follows, we shall simply say that a domain of type ``elementary oscillating metric of order 1 in a potential'' \textbf{satisfies the $\varepsilon$-approximations} if the inequalities given in Definitions \ref{d2.17} and \ref{d2.18} hold with $\varepsilon \ll 1$. \subsubsection{The rigorous use of the $\varepsilon$-approximations} Consider an experiment whose spacetime domain is of type ``elementary oscillating metric of order 1 in a potential''. Suppose that the potentials satisfy the $\varepsilon$-approximations (Def. \ref{d2.18}) with $\varepsilon \ll 1$ (this is a datum of the experiment) and that, for given boundary conditions, equation \ref{F2}, \ref{F3} or \ref{F4} admits a unique solution $\varPsi$ (this equation is of order 2 in $t$). If one observes that $\varPsi$ satisfies the $\varepsilon$-approximations (Def. \ref{d2.17}) with $\varepsilon \ll 1$, then the considerations set out above allow one to say that $\varPsi$ is very close to the solution $\varPsi'$ of the corresponding Schrödinger equation \ref{F9}, \ref{F10} or \ref{F11}, for boundary conditions compatible with the previous ones and ensuring the uniqueness of $\varPsi'$ (the Schrödinger equation is of order 1 in $t$). The procedure for using the $\varepsilon$-approximations just presented is in fact purely theoretical because, in practice, the exact determination of $\varPsi$ is, apart from trivial cases, very difficult to obtain.
The main objective of this section was in fact to show that the description of experiments which, from our ``viewpoint'' on physics, uses the equations of Theorem \ref{2.2}, does indeed yield, in most cases, the same description as that obtained by classical quantum physics. \section{Singularities \label{s2.11}} To describe completely the experiments which classically use the notion of ``particles'', it proves necessary to introduce a notion of ``localization'' for domains of type ``oscillating metric''. Experimentally, this localization corresponds to the phenomena of ``impacts'' on a screen, of ``tracks'' in bubble, drift or wire chambers, etc. In the first part of this section we begin by defining the ``simple'' objects (the singularities) that suffice to describe, qualitatively and quantitatively, the results of the classical experiments of quantum physics (particles in a potential, diffraction, Young's slits, Stern-Gerlach-type experiments, quantum entanglement, etc.). Since this text will go no further into the theory than is needed for this description, the reader may content himself with the first part of this section and proceed directly to the next one. However, with a view to studying later much more delicate phenomena, such as what are classically called ``interactions of particles with one another'', we shall, in the second part of this section, give a more precise viewpoint on the notion of singularity. Currently, the description of these latter phenomena is handled by Q.F.T. The parallel between the theory presented here and Q.F.T. was briefly presented in Remark \ref{r2.2}.
\subsection{Singularities viewed ``simply''} Consider a domain ($\mathscr{D},g$) representing a certain geometric type (a domain of constant scalar curvature conformal to a potential, for example) for which the signature of $g$ is everywhere $(-,+,+,+,-,+,\dots,+)$. \begin{dfn}\label{d2.19} A triple $(\mathscr{D},g,\mathscr{S})$, where $\mathscr{S}$ is a nonempty subset of $\mathscr{D}$ of measure zero, will be called \textbf{a domain with singularities}. The set $\mathscr{S}$ will be called \textbf{the singular part of $\mathscr{D}$}. \end{dfn} \begin{dfn}\label{d2.20} Consider a domain with singularities $(\mathscr{D},g,\mathscr{S})$ and a spacelike submanifold $\mathscr{H}$ of dimension $n-2$ (hence of maximal dimension). The connected components of $\mathscr{H}\cap \mathscr{S}$ will be called \textbf{the elementary singularities of $\mathscr{H}$}. \end{dfn} It is tempting to say that the elementary singularities of $\mathscr{H}$ are none other than the ``particles'' (in the classical sense) located in $\mathscr{H}$. However, no law will be given governing the behavior of these elementary singularities; we shall only assume that their distribution in $\mathscr{H}$ is random relative to the metric $g$, a notion that will of course be made precise in the next section. Note, moreover, that the characteristic quantities usually associated with ``particles'' (mass, electric charge, spin, momentum, etc.) are for us, as already stated, characteristics \textbf{of the metric $g$ of the domain $\mathscr{D}$} (which will in general be of type ``oscillating metric in a potential'') and are in no way associated with the ``singularities''. This is profoundly different from the viewpoints of the other current physical theories.
Note that the elementary singularities are not necessarily points of $\mathscr{H}$ but only connected subsets, the diameter of each of which is perfectly well defined relative to $g_\mathscr{H}$, which is, at every point of $\mathscr{H}$, a scalar product since $\mathscr{H}$ is spacelike. The notion of singularity just presented was introduced by adjoining an ``extra object'' (the singular part $\mathscr{S}$) to the structure of the pseudo-Riemannian manifold $(\mathscr{D},g)$. It is, however, conceivable that this singular part is simply a characteristic of the pseudo-Riemannian tensor living on $\mathscr{D}$, now denoted $g_{\mathscr{S}}$, this tensor being undefined on the subset $\mathscr{S}$ of $\mathscr{D}$. The pseudo-Riemannian tensor $g_{\mathscr{S}}$ would then be ``very close'' to the tensor $g$ given in Definition \ref{d2.19}, except on a neighborhood of $\mathscr{S}$ on which knowledge of the asymptotic behavior of $g_{\mathscr{S}}$ (relative to $g$) would make it possible to study certain ``interaction phenomena'' precisely. This point of view is justified in the next subsection. There we show that domains of type ``oscillating metric'' can be very close to the domains of type ``fluid'' presented in section \ref{s1.2}, and that ``collapses'' of parts of these domains into singularities can be justified by computations analogous to those used in standard general relativity to describe the ``collapse'' of certain domains of type ``fluid'' into singularities (black holes, for example). The tensor $g_{\mathscr{S}}$ can then be regarded as the result of the ``evolution'' of the tensor $g$ representing an ``oscillating metric'' after certain ``local collapses'' (the word ``evolution'' would need to be made precise).
The singular part $\mathscr{S}$ of $\mathscr{D}$ presented in Definition \ref{d2.19} is then the part on which the tensor $g$ (now $g_{\mathscr{S}}$) is no longer defined. \subsection{Singularities viewed as collapses of oscillating metrics} Consider a domain of type ``homogeneous oscillating metric in a neutral potential'' with constant propagation velocity (Def. \ref{d2.12}). The function $a$ is of the form $a=\varphi\beta$, where $\varphi=C\cos(M't-\sum_{k=1}^3\lambda_kx^k+C')$ and $\beta\in E_W(\mu)$ (we assume here, for simplicity, that $Q^+=0$). We focus on the particular case $\beta=C^{te}$, which corresponds to a domain of type ``oscillating metric'' associated with the notion of a ``Higgs field'', as will be seen in section \ref{s2.15}. The pseudo-Riemannian metric $g$ is thus of the form $g=|\varphi|^{\frac{4}{n-2}}g_0$. By the fundamental equation of an oscillating metric, $\Box_{g_0}\varphi+S\varphi=0$.\\ Hence $(\sum_{k=1}^3\lambda_k^2-M'^2)\varphi+S\varphi=0$, so that $S=M'^2-\sum_{k=1}^3\lambda_k^2$. The following proposition links domains of type ``oscillating metric'' with those of type ``fluid''. It is proved simply by starting from the standard expression for the Ricci curvature under a conformal change of metric $g=|a|^\frac{4}{n-2}g_0$: on the open set where $a$ does not vanish, $R_{ij}(g)=R_{ij}(g_0)+\frac{2n}{n-2}a^{-2}\nabla_ia\nabla_ja-2a^{-1}\nabla_i\nabla_ja-\frac{2}{n-2}a^{-2} (\nabla^ka\nabla_ka+a\nabla^k\nabla_ka)g_{0_{ij}}$, where the covariant derivatives $\nabla_i$ are taken with respect to $g_0$.
\begin{prop}\label{p2.2} For the domain of type ``oscillating metric'' just presented, the Ricci curvature satisfies: \begin{enumerate} \item If $S:=\frac{n-2}{4(n-1)}S_{g_0}>0$ : $R_{icc}^{\sharp }(g)=R_{icc}^{\sharp} (g_0)+S\alpha_1X_0\otimes X_0+S\alpha_2g_0$ where ~~$\alpha_1:=2(\frac{n}{n-2}\tan^2 z+1)$ ~~and ~~$\alpha_2:=\frac{2}{n-2}(\tan^2 z-1)$,\\ with~$z:=M't-\sum_{k=1}^3\lambda_kx^k+C'$, $X_0=S^{-1/2}(M'\partial_t-\sum_{k=1}^3\lambda_k\partial_k)$ ~~and~~ $g_0(X_0,X_0)=0$. \item If $S_{g_0}=0$ : $R_{icc}^{\sharp }(g)=R_{icc}^{\sharp}(g_0)+S\alpha_1X\otimes X$ where $X=M'\partial_t-\sum_{k=1}^3\lambda_k\partial_k$ ~~and ~~$g_0(X,X)=g(X,X)=0$. Moreover $D_gX=0$; in particular, $X$ is a geodesic field for $g$. \end{enumerate} \end{prop} Note that, when $S>0$, the Ricci curvature can be put in the form: $R_{icc}^{\sharp }(g)=S\alpha'_1 X'_0\otimes X'_0-S\alpha'_2Y\otimes Y+S\alpha'_2(Y\otimes Y+X'_0\otimes X'_0+g)+R_{icc}^{\sharp} (g_0)$ where we have set $X'_0=(\cos^{\frac{2}{n-2}}z)X_0$. Then $g(X'_0,X'_0)=-1$, $R_{icc}^{\sharp} (g_0)(X'_0,X'_0)=0$ (and also $g(Y,Y)=-1$). Since $S_g=0$, because $\Box_{g_0}\varphi+S\varphi=0$, the tensor $G$ satisfies $G=2R_{icc}(g)$. \textbf{The domain is then of type ``fluid''} according to Definition \ref{def:4}. The energy density function is $S\alpha'_1$; it is positive. The apparent vector field of the fluid is $X'_0$ (here non-geodesic). The apparent pressure is $S\alpha'_2(Y\otimes Y+X'_0\otimes X'_0+g)$. The hidden pressure is $R_{icc}(g_0)$. When $S=0$, the domain is of type ``light fluid'' since $g(X,X)=0$, and in this case $X$ is a geodesic field. This last domain is comparable to the one of type ``active potential without electromagnetism'' presented in \ref{ss1.3}, for which $R_{icc}^{\sharp }(g)=(\Delta_{g_0}v)X_1\otimes X_1 +R_{icc}^{\sharp}(g_0)$, where $v$ is the potential function, $X_1$ is lightlike and $D_g X_1=0$.
Note, however, that the vector field $X_1$ was tangent to $I\times W$, whereas the field $X$ is tangent to the ``apparent'' space $\Theta$. The phenomena of collapse of a massive fluid into a singularity (a black hole, for example) are described in classical general relativity. Mathematically, they can be seen as a consequence of Raychaudhuri's theorem, which states that, under certain hypotheses, the ``expansion $\varTheta$'' of a geodesic vector field $X$ characterizing the fluid tends to $-\infty$ in finite time. We shall see that this same phenomenon can be described simply (even in high dimension) for any lightlike geodesic field $X$, whenever the Ricci curvature of the domain under consideration is of the form $R^\sharp_{icc}(g)=\alpha X\otimes X+P$ with $R_{icc}(X,X)\geq0$, which is the case for the oscillating metrics just described or for an active potential without electromagnetism. We consider the classical characteristics of a geodesic vector field $X$: the expansion $\varTheta$, the vorticity $\omega$ and the shear $\sigma$, whose definitions we do not recall here.
Raychaudhuri's theorem, in dimension $n$ and for a signature of $g$ of the form $(-,+,+,+,-,\dots,+)$, states that if $\beta$ is a lightlike geodesic parametrized by $s$, with $\dot{\beta}(s)=X_{\beta(s)}$, then the following equality holds: \begin{eqnarray}\label{F34} (\varTheta_{\beta(s)})'_s= -R_{icc}(X_{\beta(s)},X_{\beta(s)}) +2(\omega_{ij}\omega^{ij})_{\beta(s)}-2(\sigma_{ij}\sigma^{ij})_{\beta(s)}-\frac{1}{n-2} \varTheta^2_{\beta(s)} \end{eqnarray} From this one easily deduces the following proposition: \begin{prop}\label{p2.3} Suppose that the vorticity $\omega$ of the field $X$ vanishes and that, for some value $s_0$ of the parameter, $\varTheta_ {\beta(s_0)}<0$.\\ Then there exists $s_1$ with $s_0<s_1\leq s_0+(n-2)|\varTheta_{\beta(s_0)}|^{-1}$ such that the expansion $\varTheta_{\beta(s)}$ tends to $-\infty$ as $s$ converges to $s_1$ from below. \end{prop} \textbf{Proof}: Since $R^\sharp_{icc}(g)=\alpha X\otimes X+R^\sharp_{icc}(g_0)$ and $X$ is lightlike, $R_{icc}(g)(X_{\beta(s)},X_{\beta(s)})=0$, because $R_{icc}(g_0)(X,X)=0$. Then, since the vorticity vanishes, Raychaudhuri's theorem shows that $(\varTheta_{\beta(s)})'_s\leq \frac{-1}{n-2}\varTheta^2_{\beta(s)}$. When $\varTheta_{\beta(s)}\neq0$ we have $(\varTheta^{-1}_{\beta(s)})'_s\geqslant\frac{1}{n-2}$, whence, for $s\geqslant s_0$:\\ $\varTheta^{-1}_{\beta(s)}-\varTheta^{-1}_{\beta(s_0)}\geqslant\frac{1}{n-2}(s-s_0)$. Since $\varTheta^{-1}_{\beta(s)}$ is increasing in $s$ and negative at $s=s_0$, $\varTheta^{-1}_{\beta(s)}$ converges to $0$ as $s$ converges to some $s_1\leq s_0+(n-2)|\varTheta_{\beta(s_0)}|^{-1}$.
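The bound of Proposition \ref{p2.3} can be checked numerically on the borderline equation $\varTheta'=-\varTheta^2/(n-2)$ (an illustration with assumed values of $n$ and $\varTheta_{\beta(s_0)}$, not a statement from the text):

```python
# Illustration of Proposition p2.3: for a vorticity-free lightlike
# congruence with R_icc(X, X) = 0, the expansion obeys
# theta' <= -theta^2/(n-2); we integrate the borderline ODE
# theta' = -theta^2/(n-2) and check that theta diverges to -infinity
# no later than (approximately) s0 + (n-2)/|theta(s0)|.

def integrate_expansion(theta0, n, s0=0.0, ds=1e-5):
    """Integrate theta' = -theta^2/(n-2) by explicit Euler from s0."""
    s, theta = s0, theta0
    while theta > -1e8:          # stop once theta has clearly diverged
        theta += ds * (-theta**2 / (n - 2))
        s += ds
    return s                     # approximate blow-up parameter

n, theta0, s0 = 6, -2.0, 0.0     # assumed dimension and initial expansion
s_blowup = integrate_expansion(theta0, n, s0)
s_bound = s0 + (n - 2) / abs(theta0)     # bound from Proposition p2.3

# exact solution: theta(s) = theta0 / (1 + theta0*(s - s0)/(n-2)),
# which diverges exactly at s1 = s0 + (n-2)/|theta0| when theta0 < 0
assert abs(s_blowup - s_bound) < 1e-2
```

The explicit solution of the borderline ODE shows the bound of the proposition is attained exactly when the inequality \ref{F34} is an equality.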
\bigskip The proposition just proved describes, under the condition of vanishing vorticity, a possible ``collapse'' into a singularity of the geodesics generated by the field $X$, for a finite value of the parameter $s$, provided the expansion $\varTheta$ takes negative values. In fact, equality \ref{F34} shows that this phenomenon can occur under much weaker hypotheses: it is not necessary to assume the vanishing of the vorticity or of $R_{icc}(X,X)$, since only the inequality $(\varTheta_{\beta(s)})'_s\leq \frac{-1}{n-2}\varTheta^2_{\beta(s)}$ is used. This suggests that the ``collapse'' of an oscillating metric into a singularity occurs for oscillating metrics far more general than those whose $R^\sharp_{icc}(g)$ is of the form $\alpha X\otimes X+P$. The proof of Proposition \ref{p2.3} shows that it is the negativity of the expansion $\varTheta$ on some domain that ``triggers'' the collapse phenomenon. It is quite conceivable that only part of the oscillating metric collapses into a singularity, yielding a domain of type ``oscillating metric with singularities''. Details on the asymptotic behavior of the metric $g$ near a singularity will matter only for the study of more complex phenomena. For the physical phenomena described in this paper up to section \ref{s2.17}, only the ``localization'' of the singularities (random relative to $g$) will be used. This is the subject of the next section. \section{The probabilistic part\label{s2.12}} In the axiomatics we have chosen to represent spacetime, the singularities (cf. the preceding section) will be regarded as governed by no principle and no law.
The fact that, in the generic experiments of quantum physics (diffraction, Young's slits, etc.), the singularities do not appear equiprobably distributed on the ``screen'' (we take it that it is the singularities that leave traces on the screen) \textbf{is not due to laws governing these singularities, but only to the deformation of the pseudo-Riemannian metric $g$ in the part of spacetime where they are located, relative to the metric $g_0$ used when measuring the results of the experiment}. It is indeed a probabilistic process that we shall use to describe the results of these experiments and to show that one obtains, under certain approximations, results identical to those of standard quantum physics, without resorting to any procedure or axiomatics of the latter. \bigskip Consider a domain with singularities $(\mathscr{D},g,\mathscr{S})$ (Def. \ref{d2.19}). \textbf{The fact that no law governs the singularities of $\mathscr{D}$ is translated mathematically as follows}: every spacelike submanifold $\mathscr{H}$ of $\mathscr{D}$ of dimension $n-2$ (hence of maximal dimension) satisfies the two following properties: \begin{enumerate} \item Let $\mathscr{H}_2 \subset \mathscr{H}_1$ be two subsets of $\mathscr{H}$ of finite volume relative to the Riemannian metric $g_{\mathscr{H}}$. If ``$\varsigma$'' is an elementary singularity in $\mathscr{H}_1$ (Def.
\ref{d2.10}), then the probability that it lies in $\mathscr{H}_2$ is: \begin{eqnarray}\label{F35} p=\text{vol}_{g_{\mathscr{H}}}(\mathscr{H}_2)~/~\text{vol}_{g_{\mathscr{H}}}(\mathscr{H}_1) \end{eqnarray} In other words, the probability density of presence of the singularity ``$\varsigma$'' in $\mathscr{H}_2$ is the uniform density given by the differential $(n-2)$-form on $\mathscr{H}_1$ defined by $\sigma=(\text{vol}_{g_{\mathscr{H}}}(\mathscr{H}_1))^{-1}dv_{g_\mathscr{H}}$, where $dv_{g_\mathscr{H}}$ classically denotes the Riemannian volume $(n-2)$-form on $(\mathscr{H},g_{\mathscr{H}})$. (When $(v^1,\dots,v^{n-2})$ is a coordinate system on $\mathscr{H}$, the volume form reads $dv_{g_\mathscr{H}}=\sqrt{\det g_\mathscr{H}}\,dv^1\dots dv^{n-2}$, where $\det g_\mathscr{H}$ is the determinant of the matrix of $g_\mathscr{H}$ in that coordinate system.) \item If $\varsigma_1,\dots,\varsigma_N$ are $N$ elementary singularities in $\mathscr{H}_1$, then the probability that $k$ of them lie in $\mathscr{H}_2$ is: \begin{eqnarray}\label{F36} p(N,k)=\displaystyle\binom{N}{k}p^kq^{N-k} \end{eqnarray} where $p=\text{vol}_{g_{\mathscr{H}}}(\mathscr{H}_2)~/~\text{vol}_{g_{\mathscr{H}}}(\mathscr{H}_1)$~~ and~~ $q=1-p$. In other words, the associated probability law is a binomial law. This expresses the fact that the presence of one elementary singularity in $\mathscr{H}$ has no influence on the presence of the others. \end{enumerate} Note that properties 1 and 2 are not tied to the choice of a chart of the g-observation atlas. \bigskip Properties 1 and 2
are exactly those corresponding to the experiment consisting in randomly throwing point objects onto a surface $\mathscr{H}_1$ and looking at the arrival probabilities of these objects on a surface $\mathscr{H}_2\subset\mathscr{H}_1$ (although for us, of course, $\mathscr{H}_2$ and $\mathscr{H}_1$ are of dimension $n-2$ and the metric is $g_{\mathscr{H}_1}$). \bigskip We now consider a chart of the observation atlas whose typical cell $(\mathscr{C},g)$ is that of an oscillating metric in a potential (Def. \ref{d2.2}): $\mathscr{C}=\Theta\times S^1(\delta)\times W$ and $g=|a|^\frac{4}{n-2}g_\mathcal P$ ($\Theta$ will be assumed of the form $I\times \mathscr U\subset \mathbb{R}\times\mathbb{R}^3$). We ``fix'' $(t,u)\in \mathbb{R}\times S^1(\delta)$ (``double'' time). Let $\mathscr{H}_{2_{(t,u)}}\subset \mathscr{H}_{1_{(t,u)}}\subset \mathscr{C}$ be of the form: $\mathscr{H}_{2_{(t,u)}}=\{t\}\times \omega\times\{u\}\times W\subset\mathscr{C}$, $\mathscr{H}_{1_{(t,u)}}=\{t\}\times \Omega\times\{u\}\times W\subset\mathscr{C}$, where $\omega\subset\Omega\subset \mathbb{R}^3$ are two domains of $\mathbb{R}^3$. To visualize matters, the reader may suppose that $\Omega\subset\mathbb{R}^3$ is a domain representing a screen (with some thickness) and $\omega$ a subdomain of that screen, in a standard experiment studying the distribution of impacts of ``particles'' on the screen when these are sent through a potential $\mathcal P$. $\mathscr{H}_{2_{(t,u)}}$ and $\mathscr{H}_{1_{(t,u)}}$ are spacelike submanifolds of dimension $n-2$ for both metrics $g$ and $g_0$.
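As a minimal numerical sanity check of the binomial law \ref{F36} (with a hypothetical volume ratio $p$, chosen only for illustration), the probabilities over $k$ sum to $1$ and the expected number of singularities in $\mathscr{H}_2$ is $Np$:

```python
# Check of the binomial law F36 for the number of elementary
# singularities falling in H2 ⊂ H1: total probability 1, mean N*p.

import math

def p_Nk(N, k, p):
    """Probability that k of the N singularities of H1 lie in H2 (eq. F36)."""
    return math.comb(N, k) * p**k * (1 - p)**(N - k)

N = 20
p = 0.3          # hypothetical volume ratio vol(H2)/vol(H1)
probs = [p_Nk(N, k, p) for k in range(N + 1)]

assert abs(sum(probs) - 1.0) < 1e-12                 # total probability is 1
mean = sum(k * pk for k, pk in enumerate(probs))
assert abs(mean - N * p) < 1e-12                     # expected count is N*p
```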
By \ref{F35}, when ``$\varsigma$'' is an elementary singularity in $\mathscr{H}_{1_{(t,u)}}$, the probability that it lies in $\mathscr{H}_{2_{(t,u)}}$ is: \begin{eqnarray}\label{F37} p_{(t,u)}=\int_{\mathscr{H}_{2_{(t,u)}}}dv_{g_{\mathscr{H}_{1_{(t,u)}}}}~/~\int_{\mathscr{H}_{1_{(t,u)}}}dv_{g_{\mathscr{H}_{1_{(t,u)}}}} \end{eqnarray} In a standard coordinate system of $\mathscr{C}$, this reads: \bigskip $p_{(t,u)}=\int_{\mathscr{H}_{2_{(t,u)}}}\sqrt{\det(g_{\mathscr{H}_{1_{(t,u)}}})}dx^idv^j~/~\int_{\mathscr{H}_{1_{(t,u)}}}\sqrt{\det(g_{\mathscr{H}_{1_{(t ,u)}}})}dx^idv^j$ \bigskip (here ~~$dx^idv^j:=dx^1dx^2dx^3dv^1\dots dv^{n-5}$) \bigskip whence: \begin{eqnarray}\label{F38} p_{(t,u)}=\int_{\mathscr{H}_{2_{(t,u)}}}a^2_{(t,u)}\sqrt{\det(g_{{\mathcal P}_{\mathscr{H}_{1_{(t,u)}}}})}dx^idv^j~/ \int_{\mathscr{H}_{1_{(t,u)}}}(idem) \end{eqnarray} \textbf{Note here the importance of the signature of $g$, which has two ``$-$'' signs and $(n-2)$ ``$+$'' signs}, for it is this fact that yields the exponent ``2'' on the function $a$: the dimension of $\mathscr{H}_1$ is then equal to $(n-2)$, and consequently $\sqrt{\det(|a|^{\frac {4}{n-2}}g_{{\mathcal P}_{\mathscr{H}_{1_{(t,u)}}}})}=a^2\sqrt{\det(g_{{\mathcal P}_{\mathscr{H}_{1_{(t,u)}}}})}$. (The exponent ``2'' on the function $a$ is particularly important for what follows.) We now suppose that the metric $g_{\mathcal P}$ is ``sufficiently'' close to $g_0$, in other words that the ``potential functions'' occurring in $g_{\mathcal P}$ are $\ll 1$. One may then write: \begin{eqnarray}\label{F39} p_{(t,u)}\simeq\int_{\mathscr{H}_{2_{(t,u)}}}a^2_{(t,u)}\sqrt{\det(g_{0_{\mathscr{H}_{1_{(t,u)}}}})}dx^idv^j~/~\int_{\mathscr{H}_{1_{(t,u)}}} ( idem) \end{eqnarray} An estimate of the error in the preceding ``equality'' $\simeq$ can be made using the precise definitions of the $\varepsilon$-approximations on the potentials (Def.
\ref{d2.18}), but that is not our concern here. \bigskip Let us now examine the case of an \textbf{elementary} oscillating metric of order 1 (Def. \ref{d2.9}), for which, by definition, $a=\varphi\beta$, where $\varphi:\Theta\times S^1(\delta)\rightarrow\mathbb{R}$ satisfies: $\varphi=\varphi_1\cos(Q^+u)+\varphi_2\sin(Q^+u)$. Here $\varphi_1$ and $\varphi_2$ are two real functions defined on $\Theta$, and $\beta\in E_W(\mu)$. Recall that the canonical function $a_c$ is defined by $a_c=\varphi_1+i\varphi_2$. (The case of elementary oscillating metrics of order 2 is handled similarly but, for clarity of exposition, will only be presented in the section on ``spin''. The process in fact generalizes without difficulty to order $k>2$, following the decomposition of $W$ into a product of compact manifolds.) Since $g_0$ is a ``product'' metric, \ref{F39} reads: \bigskip $p_{(t,u)}=(\int_\omega\varphi^2_{(t,x,u)}dv_{g_{0|_\omega}})(\int_W\beta^2dv_{g_{0|_W}}) ~/ ~(\int_\Omega\varphi^2_{(t,x,u)}dv_{g_{0|_\Omega}})(\int_W\beta^2dv_{g_{0|_W}})$ \bigskip where the integral of $\beta^2$ over $W$ cancels out. We therefore obtain: \begin{multline}\label{F40} p_{(t,u)}\simeq\int_\omega(\varphi^2_1{_{(t,x,u)}}\cos^2(Q^+u) + \varphi^2_{2_{(t,x,u)}}\sin^2(Q^+u)+\\ 2\varphi_1\varphi_2{_{(t,x,u)}}\cos(Q^+u)\sin(Q^+u))dx^1dx^2dx^3~ / ~\int_\Omega (idem) \end{multline} In the particular case $Q^+=0$, the canonical function $a_c$ is none other than the function $\varphi_1$.
The preceding ``equality'' then reads: \bigskip $p_{(t,u)}=p_{(t)}\simeq\int_\omega\varphi^2_1{_{(t,x)}}dx^i ~/ ~\int_\Omega\varphi^2_1{_{(t,x)}}dx^i$ \bigskip whence: \bigskip $p_{(t)}\simeq\int_\omega|a_c|^2_{(t,x)}dx^i~/ ~\int_\Omega|a_c|^2_{(t,x)}dx^i $ \bigskip When $Q^+$ is strictly positive, it is worthwhile to consider ``averages'' over the ``time'' $u\in S^1(\delta)$, which amounts to assuming that the variations in $u$ of the $p_{(t,u)}$ given by \ref{F40} are not perceptible in practice. We therefore assume that, for an observer ``attached'' to the standard coordinate system of the cell $\mathscr{C}$, the probability density of presence of a singularity $\varsigma$ in $\mathscr{H}_2{_{(t)}}:=\{t\}\times\omega\times S^1(\delta)\times W$ is uniform with respect to the time $u\in S^1(\delta)$, which we translate as follows: when, for $t\in\mathbb{R}$, ``$\varsigma$'' is an elementary singularity in $\mathscr{H}_1{_{(t)}}:=\{t\}\times\Omega\times S^1(\delta)\times W$, the probability that it lies in $\mathscr{H}_2{_{(t)}}$ is: \begin{eqnarray}\label{F41} p_{(t)}\simeq\int_{\mathscr{H}_2{_{(t)}}}\sqrt{\det(g_{\mathscr{H}_{1_{(t,u)}}})}dudx^idv^j ~/ ~\int_{\mathscr{H}_1{_{(t)}}}\sqrt{\det(g_{\mathscr{H}_{1_{(t,u)}}})}dudx^idv^j \end{eqnarray} In the case of an elementary oscillating metric of order 1, and with the same approximations as described above, \ref{F41} becomes (starting from \ref{F40}): \vspace{0cm} $p_{(t)}\simeq\int_{\omega\times S^1(\delta)}(*)dx^idu~ / ~\int_{\Omega\times S^1(\delta)} (*)dx^idu$ \vspace{1mm} where $(*)=\varphi^2_1{_{(t,x,u)}}\cos^2(Q^+u) +\varphi^2_{2_{(t,x,u)}}\sin^2(Q^+u)+ 2\varphi_1\varphi_2{_{(t,x,u)}}\cos(Q^+u)\sin(Q^+u)$ \vspace{1mm} whence, since \vspace{2mm} $\int_{S^1(\delta)}\cos^2(Q^+u)du=\int_{S^1(\delta)}\sin^2(Q^+u)du=\pi\delta$ ~~and $\int_{S^1(\delta)}\cos(Q^+u)\sin(Q^+u)du=0$: \vspace{2mm}
$p_{(t)}\simeq\int_{\omega}(\varphi_1^2+\varphi_2^2)_{(t,x)}dx^i ~/ ~\int_{\Omega}(\varphi_1^2+\varphi_2^2)_{(t,x)}dx^i$ \vspace{5mm} And we recover the expression obtained in the particular case where $Q^+$ was zero: \begin{eqnarray}\label{F42} p_{(t)}\simeq\int_{\omega}|a_c|^2_{(t,x)}dx^i ~/ ~\int_{\Omega}|a_c|^2_{(t,x)}dx^i \end{eqnarray} Of course, a singularity $\varsigma$ in ${\mathscr{H}_1{_{(t)}}}$ (resp. ${\mathscr{H}_2{_{(t)}}}$) is seen ``in practice'' in $\Omega$ (resp. $\omega$) at time $t$ by the observer attached to the coordinate system. The probabilities given by the equalities \ref{F36} also extend to the case ``averaged over $S^1(\delta)$''; we leave the precise formulation of the result to the reader. \vspace{2mm} Consider the particular case where the elementary oscillating metric has a well-defined electric charge (Def. \ref{d2.14}). The state function $\varPsi$ is then itself well defined (Def. \ref{d2.16}) and $|\varPsi|=|a_c|$. The probability $p_{(t)}$ given by \ref{F42} can therefore also be written: \begin{eqnarray}\label{F43} p_{(t)}\simeq\int_{\omega}|\varPsi|^2_{(t,x)}dx^i ~/ ~\int_{\Omega}|\varPsi|^2_{(t,x)}dx^i \end{eqnarray} We recover here one of the main axioms of classical quantum physics, with the difference, however, that the denominator $\int_{\Omega}|\varPsi|^2_{(t,x)}dx^i$ may depend on ``$t$''. This fact poses no conceptual problem in the physics presented here, unlike in standard quantum physics, and we recall that a ``singularity'' is by no means a notion equivalent to that of a ``particle'' (cf. the introduction to chapter 2).
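The $S^1(\delta)$ averages used to pass from \ref{F40} to \ref{F42} can be verified numerically (an illustration with assumed values of $\delta$ and $Q^+$ such that $Q^+\delta$ is a nonzero integer, so that the integrand closes up on the circle):

```python
# Verification of the circle averages behind F42: over S^1(delta)
# (circumference 2*pi*delta),
#   ∫ cos^2(Q+ u) du = ∫ sin^2(Q+ u) du = pi*delta,
#   ∫ cos(Q+ u) sin(Q+ u) du = 0,
# so the cross term of F40 drops out and cos^2, sin^2 average equally.

import math

delta, Qp = 1.0, 3.0           # hypothetical values, Qp*delta integer
L = 2 * math.pi * delta        # circumference of S^1(delta)
N = 200000
du = L / N
us = [i * du for i in range(N)]

I_cos2 = sum(math.cos(Qp*u)**2 for u in us) * du
I_sin2 = sum(math.sin(Qp*u)**2 for u in us) * du
I_cross = sum(math.cos(Qp*u) * math.sin(Qp*u) for u in us) * du

assert abs(I_cos2 - math.pi * delta) < 1e-6
assert abs(I_sin2 - math.pi * delta) < 1e-6
assert abs(I_cross) < 1e-9
```

Because the integrands are trigonometric polynomials over a full period, the equispaced Riemann sum is essentially exact here.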
If we assume the validity of the $\varepsilon$-approximations, we have seen that the state function $\varPsi$ satisfies (approximately) the classical Schrödinger equations, which are of the form $i\frac{\partial \varPsi}{\partial t}=H(\varPsi)$ where $H$ is a Hermitian operator; if $\varPsi$ satisfies suitable boundary conditions, one classically deduces that $\partial({\int_\Omega|\varPsi|^2}) ~/~\partial t=0$, so that $\int_\Omega|\varPsi|^2$ does not depend on <<~$t$~>>. One can then, as in standard quantum physics, \textbf{normalize} the state function $\varPsi$ so that $\int_\Omega|\varPsi|^2=1$, and with \ref{F43} one recovers exactly the standard axiom in question. This procedure is valid only when $\varPsi$ satisfies a <<~Schrödinger~>>-type equation (of order 1 in $t$); it does not work for a <<~complete~>> equation of <<~Klein-Gordon~>> type, which is of order 2 in $t$, but, as already noted, this poses no conceptual problem for the physics presented here. The important axiom of quantum physics given by the <<~normalized~>> version of \ref{F43} is, for us, only a consequence of the fact that no law governs the singularities on $\mathscr{M}$; this is expressed mathematically by \ref{F35} and \ref{F36} and, in the case of oscillating metrics in a potential, yields \ref{F38}. The probability given by \ref{F38} depends on the potentials, which in practice seriously complicates matters; however, within the framework of the $\varepsilon$-approximations, the dependence on these potentials is negligible, and one therefore recovers identical results for the experiments studied by standard quantum physics.

\subsection{The notion of <<~density of singularities~>>}

The notion introduced in this subsection is in fact not really indispensable for the description of the experiments of standard quantum physics, but it will become useful in the study of quantum phenomena currently described by quantum field theory.
This notion of <<~density of singularities~>> will make it possible, among other things, to address quantitatively the problem of determining the threshold beyond which the fundamental linear equation \ref{F1} is a good approximation of the fundamental nonlinear equation \ref{F0'}. As before, we consider a spacelike submanifold $\mathscr{H}$ (of a domain $\mathscr{D}$) of dimension $n-2$. When the number of elementary singularities in $\mathscr{H}$ is <<~sufficiently large~>>, the fact that no law governs the singularities of $\mathscr{D}$ (more precisely, the equiprobability of the presence of singularities in $\mathscr{H}$ with respect to $g_\mathscr{H}$) is expressed by the following hypothesis:

There exists a constant $D$ (the density) such that, for every $N\in\mathbb{N}$ and every domain $\mathscr{H}_N\subset\mathscr{H}\subset\mathscr{D}$ containing $N$ elementary singularities, we have: \begin{eqnarray}\label{F44} N(1+\varepsilon(N))=Dv_{g_{\mathscr{H}_N}} \end{eqnarray} where $v_{g_{\mathscr{H}_N}}$ denotes the volume of the domain $\mathscr{H}_N$ relative to the Riemannian metric $g_\mathscr{H}$, ~~and~~ $\lim \varepsilon (N)=0$ as $N$ tends to infinity.

We consider a sequence of domains $(\mathscr{H}_N)_{N>N_0}$ such that $v_{g_{\mathscr{H}_N}}$ tends (fictitiously) to infinity with $N$, and a subdomain $\mathscr{H}_1$ of the $\mathscr{H}_N$. By \ref{F36}, when $\varsigma_1,\dots,\varsigma_N$ are the $N$ elementary singularities in $\mathscr{H}_N$, the probability that $k$ of them lie in $\mathscr{H}_1$ is: \vspace{2mm} $p(N,k)=\displaystyle\binom{N}{k}p^kq^{N-k}$ \vspace{2mm} where $p=v_1/v_N$,~~~$q=1-p$, ~~~having set $v_N:=v_{g_{\mathscr{H}_N}}$. Then, using relation \ref{F44}, one deduces that: \vspace{0mm} $p(N,k)=\frac{N!}{k!(N-k)!}(v_1/v_N)^k(1-v_1/v_N)^{N-k}$ converges to $\frac{(Dv_1)^k}{k!}e^{-Dv_1}$ \vspace{1mm} as $N$ tends to infinity.
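The binomial-to-Poisson limit invoked above is easy to confirm numerically. In the sketch below, $D=2$ and $v_1=1.5$ are illustrative values (not from the text), and $N$ and $v_N$ are linked by \ref{F44} with $\varepsilon(N)=0$:

```python
import math

D, v1 = 2.0, 1.5                       # illustrative density and sub-volume

def p_binom(N, k):
    # p(N, k) = C(N, k) p^k q^(N-k) with p = v1 / v_N and N = D * v_N
    p = v1 * D / N
    return math.comb(N, k) * p ** k * (1 - p) ** (N - k)

def p_poisson(k):
    # limiting Poisson law with parameter D * v1
    lam = D * v1
    return lam ** k / math.factorial(k) * math.exp(-lam)

for k in range(5):
    print(k, p_binom(10 ** 6, k), p_poisson(k))   # the two columns agree
```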
We may therefore consider that, when there is a large number of elementary singularities in $\mathscr{H}$, the probability of finding $k$ of them in a subdomain $\mathscr{H}_1$ of $\mathscr{H}$ is: \begin{eqnarray}\label{F44'} p(k)=\frac{(Dv_1)^k}{k!}e^{-Dv_1} \end{eqnarray} (Note that indeed $\sum_{k=0}^\infty p(k)=1$.) \vspace{2mm}

When $(\mathscr{C},g)$ is a cell of type <<~oscillating metric in a potential~>> for which $g=|a|^{\frac{4}{n-2}} g_{\mathcal P}$, and when one admits the $\varepsilon$-approximations such that $g_{\mathscr{H}(t)}$ is <<~close~>> to $|a|^{\frac{4}{n-2}}g_0{_{\mathscr{H}(t)}}$ as before, relation \ref{F44'} shows that, at the instant <<~$t$~>>, for an observer attached to a standard coordinate system of $\mathscr{C}$, the probability that this observer finds $k$ elementary singularities in $\mathscr{H}_1(t)$ ~is ~$ p(k)=\frac{(Dv_1(t))^k}{k!}e^{-Dv_1(t)}$. (In fact, these singularities are <<~seen~>> in $\omega$ when $\mathscr{H}_1(t)=\{t\}\times \omega\times S^1(\delta)\times W$), where here $v_1(t)=\int_{ \omega\times S^1(\delta)\times W}a^2dx^idudv^j$. \vspace{2mm}

Everything therefore happens as if the density of elementary singularities \textbf{seen by the observer attached to the coordinate system} (who makes measurements with $g_0$) were given by the function $Da^2$. If the function $a^2$ is assumed sufficiently small compared with the constant $1$ for the linear equation \ref{F1} satisfied by the function $a$ to be a good approximation of equation \ref{F0'}, this is no longer necessarily the case for the function $(\lambda a)^2$, where $\lambda$ is a <<~sufficiently large~>> constant, even though the function $\lambda a$ still satisfies the linear equation \ref{F1}.
In view of the preceding results, this simply says that, when the density of singularities becomes large (it grows as $\lambda^2$), the linear equation ceases to be a good approximation and the nonlinear part of equation \ref{F0'} can no longer be neglected. This corresponds to the interpretation of classical physics according to which, when the density of particles becomes large, the interaction of the particles among themselves can no longer be neglected; but it must be clearly understood that, with the view of physics taken here, no form of interaction between the singularities is assumed: it is only the oscillating metric, through the function $a$, that accounts for this phenomenon. Of course, the effective density of singularities appearing in an experiment depends on the initial (or boundary) conditions of that experiment.

\section{Spin\label{s2.13}}

The description of the experimental results linked to the <<~spin~>> phenomena, usually treated by standard quantum physics, turns out to be natural for us if one considers that, in the type cell of the form $\mathscr{C}=\Theta\times S^1(\delta)\times W$, the compact manifold $W$ is a <<~product~>> manifold of the form $S^3(\rho)\times V$, where $S^3(\rho)$ is the standard sphere of dimension $3$ and radius $\rho$, endowed (in order to describe the reference metrics $g_0$) with the standard Riemannian metric $g_{S^3(\rho)}$ induced by the Euclidean metric of $\mathbb{R}^4$. The study of $(S^3(\rho),g_0|_{S^3(\rho)})$, and in particular that of the eigenspaces of the Laplacian, is very important for us and is detailed in Appendix \ref{sa3.7}. We recall its essential points in the two subsections that follow.
\subsection{The eigenvalues and eigenspaces of the Laplacian on the sphere $S^3(\rho)$, and those linked to the Hopf fibration}

The eigenspaces $E_{S^3(\rho)}(\gamma_p)$ of the geometers' Laplacian ($\Delta:=-{\triangledown_i}{\triangledown^i}$) have the eigenvalues: $\gamma_p=\rho^{-2}p(p+2)$ ~~~~$p\in \mathbb{N}$. We order them by increasing values of $\gamma_p$, writing: \begin{eqnarray}\label{F45} E_0,E_1,\dots,E_p,\dots \end{eqnarray} In fact, $p/2$ will correspond to the <<~spin index~>> of standard quantum physics. It is important to note that, when \textbf{$p$ is even}, each eigenspace $E_p$ contains a vector subspace $E'_p$ which is identified with the eigenspace $E_{S^2(\rho/2)}(\gamma_p)$ of the Laplacian of the standard sphere $S^2$ of radius $\rho/2$ (Appendix \ref{sa3.7}). The odd indices of $E_p$ correspond to eigenspaces linked only to $S^3$, whereas the even indices are linked to the eigenspaces of $S^2$. It is also worth noting that, \textbf{if $p$ is even}, the eigenfunctions $\beta\in E_p$ are invariant under the <<~antipodal~>> isometry $\sigma$ of $S^3(\rho)$ ~~($\beta\circ\sigma=\beta$), and change sign ~~($\beta\circ\sigma=-\beta$) \textbf{if $p$ is odd}.

\subsection{Three vector fields that parallelize $S^3$, and the endomorphisms of the eigenspaces canonically associated with them}

The sphere $S^3$ is a parallelizable manifold (the only parallelizable spheres are $S^1$, $S^3$ and $S^7$).
One classically defines $S^3(\rho)$ by setting: $S^3(\rho):=\{ (x_1,x_2,x_3,x_4 )\in \mathbb{R}^4 /~ \sum_{k=1}^4x_k^2=\rho^2 \}$. Let $L_1,L_2,L_3$ be the three vector fields defined on $\mathbb{R}^4$ by: \begin{multline}\label{F46} L_1=x_3\partial_1+x_4\partial_2-x_1\partial_3-x_2\partial_4\\ L_2=-x_4\partial_1+x_3\partial_2-x_2\partial_3+x_1\partial_4\hspace{8.6cm}\\ L_3=x_2\partial_1-x_1\partial_2-x_4\partial_3+x_3\partial_4\hspace{8cm} \end{multline} where $\partial_i$ stands for $\frac{\partial}{\partial x_i}$.

At every point $x=(x_1,x_2,x_3,x_4)$ these three vector fields are orthogonal to the radial vector $x_1\partial_1+x_2\partial_2+x_3\partial_3+x_4\partial_4$ and mutually orthogonal (they are, in a sense, the <<~simplest~>> fields with these properties that can be written in this coordinate system). We denote by $L_k|_{S^3}$ the three vector fields on $S^3(\rho)$ obtained as <<~restrictions~>> of the $L_k$. At every point $x\in S^3(\rho)$ the three vectors $L_k|_{S^3}$ form an (orthogonal) basis of the tangent space $T_x(S^3(\rho))$ ($S^3$ is therefore indeed parallelizable). $L_1$, $L_2$, $L_3$ naturally define (as differential operators) three endomorphisms of $C^\infty(\mathbb{R}^4)$, since $\forall f\in C^\infty(\mathbb{R}^4)$ ~~$L_k(f)\in C^\infty(\mathbb{R}^4)$. Likewise, $L_1|_{S^3}$, $L_2|_{S^3}$, $L_3|_{S^3}$ define, by restriction, three endomorphisms of $C^\infty(S^3(\rho))$. The following proposition is fundamental for the sequel; its proof is given in Appendix \ref{ss3.2}. \begin{prop}\label{p2.4} The eigenspaces $E_{S^3(\rho)}(\gamma)$ denoted $E_p$ ~($p\in \mathbb{N}$) specified in \ref{F45}, as well as the spaces $E'_q$ ~($q\in 2\mathbb{N}$), \textbf{are stable} under each of the three endomorphisms defined by the $L_k|_{S^3}$. \end{prop} This result allows the following definition: \begin{dfn}\label{d2.21} For each $E_p$ (resp.
$E'_q$), the three endomorphisms denoted $S_1$, $S_2$, $S_3$ (without reference to $p$ or $q$), defined by the $L_k|_{S^3}$ restricted to $E_p$ (resp. $E'_q$), will be called \textbf{the canonical endomorphisms of the eigenspaces of $S^3(\rho)$}. \end{dfn}

As already seen (prop. \ref{p2.1'}), the eigenspaces of the d'Alembertian on $S^1(\delta)\times S^3(\rho)$ relative to the pseudo-Riemannian metric $(-g_0|_{S^1})\times (g_0|_{S^3})$ are identified with the $E_{S^1}(\lambda)\otimes E_{S^3}(\gamma_p)$. The corresponding eigenvalues are \\$(\gamma_p-\lambda)$ ~where~ $\gamma_p=\rho^{-2}p(p+2)$~ and~ $\lambda={Q^+}^2$. When $\lambda\neq 0$, the eigenspace $E_{S^1}(\lambda)$ is identified with $\mathbb{C}$ by the isomorphism ~$\mathbb{C}_\lambda$ ~(cf. \ref{F12}),\\ and~ $E_{S^1}(\lambda)\otimes E_{S^3}(\gamma_p)$ with $\mathbb{C}\otimes E_p$ ~(cf. \ref{F13}),~ which is none other than the complexification of $E_p$, denoted $E^{\mathbb{C}}_p$. For each $E_p$ (resp. $E'_q$) the three endomorphisms $S_k$ (def. \ref{d2.21}) extend naturally to the complexifications $E^{\mathbb{C}}_p$ (resp. $E'^{\mathbb{C}}_q$) by setting: $\forall (\varphi_1+i\varphi_2)\in E_p+iE_p=E^{\mathbb{C}}_p$ ~~~~~$S^{\mathbb{C}}_k(\varphi_1+i\varphi_2):=S_k(\varphi_1)+iS_k(\varphi_2)$. However, solely in order to recover exactly the endomorphisms that occur in standard quantum physics for the phenomena linked to spin, we give the following definition, in which the coefficient <<~$i$~>> is introduced. \begin{dfn}\label{d2.22} For each $E_p$ (resp. $E'_q$), the three endomorphisms of $E^{\mathbb{C}}_p$ (resp. $E'^{\mathbb{C}}_q$) defined by: $\hat{S}_k=-iS^{\mathbb{C}}_k$ will be called \textbf{the canonical endomorphisms of the complexified eigenspaces of $S^3(\rho)$}.
\end{dfn} The reader may check that the three endomorphisms $\hat{S}_1$, $\hat{S}_2$, $\hat{S}_3$ have the following properties, which are none other than those satisfied by the angular momentum observables (up to the factor $\hbar$) of classical quantum physics:\\ $\hat{S}_2\hat{S}_3-\hat{S}_3\hat{S}_2=i\hat{S}_1$, ~~$\hat{S}_1\hat{S}_3-\hat{S}_3\hat{S}_1=-i\hat{S}_2$, ~~$\hat{S}_1\hat{S}_2-\hat{S}_2\hat{S}_1=i\hat{S}_3$

\subsection{Domains of type <<~oscillating metric with spin in a potential~>>}

We shall be interested here in domains of type <<~elementary oscillating metric \textbf{of order 2} in a potential~>> (def. \ref{d2.9}). Experimentally, the specific physical effects linked to the notion of spin appear in domains where <<~electromagnetism~>> is present. We shall therefore mainly study the case of oscillating metrics with spin in an \textbf{electromagnetic} potential. To make things precise we give the following definition: \begin{dfn}\label{d2.23} A domain of type \textbf{<<~elementary oscillating metric with spin in an electromagnetic potential~>>} is a domain of type <<~elementary oscillating metric of order 2~>> (def. \ref{d2.9}) for which the pseudo-Riemannian metric $g_\mathcal P$ of the electromagnetic potential has the specific form presented in the next paragraph. \end{dfn}

\subsubsection{Specific form of the electromagnetic potential $g_\mathcal P$}

Recall that the cell $\mathscr{C}$ is of the form $\mathscr{C}=\Theta\times S^1(\delta)\times S^3(\rho)\times V$ and that, by proposition \ref{p1.3}, ~~$g_\mathcal P=g_0+sym(\Upsilon^\flat\otimes X^\flat)$. In the study of elementary oscillating metrics of order 1, $\Upsilon$ had been chosen as a vector field defined on $\Theta$ (that is, defined on $\mathscr{C}$ but tangent to $\Theta$ and depending only on the variables of $\Theta$).
This can be interpreted as saying that the <<~quantum effects~>> linked to $W$ (but not to $S^1(\delta)$) were neglected. We now consider $\Upsilon$ to be a vector field defined on $\Theta\times S^3(\rho)$ (we neglect the quantum effects linked to $V$, but not those linked to $S^1(\delta)\times S^3(\rho)$). The vector field $\Upsilon$ decomposes naturally in the form $\Upsilon=A+C$, where $A$ is the component of $\Upsilon$ tangent to $\Theta$ and $C$ the component tangent to $S^3(\rho)$. The vector field $A$ will be regarded as representing the classical electromagnetic potential, and we assume that it depends only on the variables of $\Theta$ (this last point is of course to be interpreted as an <<~approximation~>>). In the standard coordinate system it therefore reads: $A=\sum_{i=0}^3 A^i\partial_i$~ where ~ $\partial_i:= \frac{\partial}{\partial x^i}$. The vector field $C$ tangent to $S^3(\rho)$ decomposes on the basis of the three vector fields $L_1|_{S^3}$, $L_2|_{S^3}$, $L_3|_{S^3}$ that parallelize $S^3(\rho)$: ~~$C=\sum_{k=1}^3C^kL_k|_{S^3}$. Note that, in <<~geometric units~>>, the components $C^k$ are <<~inverse lengths~>> (since the $L_k$ have components of the form $x_k\partial_k$), whereas the components $A^j$ are <<~dimensionless~>>. The specific form of the electromagnetic potential will essentially come from the particular choice of the components $C^k$, defined from the components $A^j$, which we specify in the lines that follow. \textbf{This choice is justified only by the fact that it will lead back to the standard results on spin of classical quantum physics.} (Note that we retain the condition that the nilpotency index be at most 3; it would no doubt be judicious to weaken this hypothesis, but we content ourselves with recovering the standard results in a very simple way.)
\textbf{We set}: $C:=\varrho\sum_{k=1}^3B^kL_k|_{S^3}$, where $\varrho$ is a constant and the three functions $B^k$ are the three components of the magnetic field: $B^1=\frac{\partial A^3}{\partial x^2}-\frac{\partial A^2}{\partial x^3}$, ~~$B^2=\frac{\partial A^1}{\partial x^3}-\frac{\partial A^3}{\partial x^1}$, ~~$B^3=\frac{\partial A^2}{\partial x^1}-\frac{\partial A^1}{\partial x^2}$. ($\varrho$ is a <<~dimensionless~>> constant, and the $B^k$ indeed have inverse lengths as <<~units~>>.) \begin{dfn}\label{d2.24} The real number $\varrho$ will be called \textbf{the gyromagnetic constant} of the domain of type <<~oscillating metric with spin in an electromagnetic potential~>>. \end{dfn} (See remark \ref{r2.10} below on this subject.) \begin{rmq}\label{r2.9} The image of the field of orthogonal bases ($L_1|_{S^3}$, $L_2|_{S^3}$, $L_3|_{S^3}$) under an isometry $\sigma$ of the Euclidean space $\mathbb{R}^4$ (restricted to $S^3(\rho)$) is again a field of orthogonal bases, which we may denote ($L^\sigma_1|_{S^3}$, $L^\sigma_2|_{S^3}$, $L^\sigma_3|_{S^3}$). \textbf{$\Upsilon$ may in fact be chosen more generally of the form $\Upsilon=A+C_\sigma$}~~ where ~~$C_\sigma:=\varrho\sum_{k=1}^3B^kL^\sigma_k|_{S^3}$. This is of little consequence insofar as the final results on the <<~simple~>> spin measurements that we shall obtain no longer involve $S^3(\rho)$ explicitly; this fact will, however, become important in the study of the phenomena of <<~quantum entanglement~>> presented in section \ref{+2.2}. \end{rmq}

\subsection{The equations}\label{ss+3}

The important result of this subsection is stated in theorem \ref{2.3} below. The important case is essentially the one concerning the electromagnetic field.
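The components $B^1$, $B^2$, $B^3$ introduced above are those of the usual curl of the spatial part of $A$. As a small numerical sketch (the uniform-field potential $A=(-B_0x^2/2,\,B_0x^1/2,\,0)$ chosen here is purely illustrative, not taken from the text), a finite-difference evaluation of the three formulas recovers $B=(0,0,B_0)$:

```python
# Finite-difference check of B^1, B^2, B^3 for an illustrative potential.
B0, h = 2.0, 1e-5

def A1(x1, x2, x3): return -B0 * x2 / 2
def A2(x1, x2, x3): return  B0 * x1 / 2
def A3(x1, x2, x3): return 0.0

def d(f, i, p):
    # central difference of f with respect to the i-th spatial coordinate
    q, r = list(p), list(p)
    q[i] += h; r[i] -= h
    return (f(*q) - f(*r)) / (2 * h)

pt = (0.3, -0.7, 1.1)
B = (d(A3, 1, pt) - d(A2, 2, pt),     # B^1 = dA3/dx2 - dA2/dx3
     d(A1, 2, pt) - d(A3, 0, pt),     # B^2 = dA1/dx3 - dA3/dx1
     d(A2, 0, pt) - d(A1, 1, pt))     # B^3 = dA2/dx1 - dA1/dx2
print(B)  # ~ (0.0, 0.0, B0)
```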
We begin by recalling and specifying the potentials used and the hypotheses concerning them. In the framework of the active potential without electromagnetism, the cell considered is\\ $\mathscr{C}=I\times \mathscr U\times S^1(\delta)\times W$ and the metric of the potential satisfies:\\ $g_\mathcal P=g_0-2vX^\flat_1\otimes X^\flat_1$ ~~~(prop. \ref{p1.2}).\\ The function $a$ reads $a=\phi\beta$, ~where ~$\phi:I\times\mathscr U\times S^1(\delta)\rightarrow\mathbb{R}$~ and~ $\beta\in E_W(\mu)$. We consider the following \textbf{hypothesis $H_{2,N}$} (to be compared with the hypothesis $H_{1,N}$, section \ref{s2.9}): \begin{enumerate} \item $S_{g_\mathcal P}=S_{g_0}$. \item $v$ is a function defined on $\mathscr U$. \item $X_1$ is a vector field defined on $I\times W$, ~~$D_{g_0}X_1=0$, ~and ~$X_1$ vanishes on $E_W(\mu)$ ~~ (i.e.~~ $\forall \beta\in E_W(\mu) ~~X_1(\beta)=0$). \end{enumerate} ($v$ and $X_1$ may be regarded as defined on $\mathscr{C}$.) \bigskip

In the framework of the specific electromagnetic potential, the cell considered is \\$\mathscr{C}=\Theta\times S^1(\delta)\times S^3(\rho)\times V$ and the metric of the potential satisfies:\\ $g_\mathcal P=g_0+sym(\Upsilon^\flat\otimes X^\flat_2)$ ~~~(prop. \ref{p1.3}).\\ The function $a$ reads $a=\phi\beta$, ~where ~$\phi:\Theta\times S^1(\delta)\times S^3(\rho)\rightarrow\mathbb{R}$~ and~ $\beta\in E_V(\nu)$. We consider the following \textbf{hypothesis $H_{2,E}$} (to be compared with the hypothesis $H_{1,E}$, section \ref{s2.9}): \begin{enumerate} \item $S_{g_\mathcal P}=S_{g_0}$. \item $\Upsilon$ is a vector field defined on $\Theta\times S^3(\rho)$. \item $X_2$ is a vector field defined on $S^1(\delta)\times V$, ~~$D_{g_0}X_2=0$, ~and ~$X_2$ vanishes on $E_V(\nu)$. \end{enumerate} The remark following the statements of the hypotheses $H_{1,N}$ and $H_{1,E}$ applies here to the hypotheses $H_{2,N}$ and $H_{2,E}$.
\begin{thme}\label{2.3} Consider a domain of type <<~elementary oscillating metric with spin in a potential~>>. Then, in the three cases of potentials considered, the canonical function $a_c$ satisfies the following equations: \begin{enumerate} \item \textbf{In a neutral potential.} \vspace{-3mm} \begin{eqnarray}\label{F47} \Box_\Theta a_c+ M^2a_c=0 \end{eqnarray} \vspace{-7mm} where ~~$\Box_\Theta=\frac{\partial^2}{(\partial t)^2}-\sum_{k=1}^3\frac{\partial^2}{(\partial x^k)^2}$ and $M$ is the mass frequency. \item \textbf{In a potential without electromagnetism, under the hypothesis $H_{2,N}$.} \vspace{-3mm} \begin{eqnarray}\label{F48} \Box_\Theta a_c+ M^2a_c-2v\frac{\partial^2a_c}{(\partial t)^2}=0 \end{eqnarray} \vspace{-7mm} where ~~$v$ is the potential function (def. \ref{def:11}). \item \textbf{In an electromagnetic potential, under the hypothesis $H_{2,E}$.} \vspace{-3mm} \begin{multline}\label{F49} \sum_{j=0}^3\varepsilon_j(i\frac{\partial}{\partial x^j}+Q^+\Upsilon^j)^2a_c+M^2a_c-2\varrho Q^+\sum_{k=1}^3B^k\hat S_k(a_c)+{Q^+}^2\varrho^2\rho^2|B|^2a_c=0 \end{multline} \vspace{-5mm} where~~ $\varepsilon_j=g_{0jj}$,~~ i.e.:~~ $\varepsilon_0=-1$ ~~and~~$\varepsilon_1=\varepsilon_2=\varepsilon_3=+1$,\\ $|B|^2:=\sum_ {k=1}^3{B^k}^2$,~~ $\rho$ is the radius of the sphere $S^3(\rho)$, ~~$\varrho$ is the gyromagnetic constant, ~~the $\hat S_k$ are the canonical endomorphisms of the complexified eigenspaces of $S^3(\rho)$ ~(def. \ref{d2.22}), ~~and ~~$\hat S_k(a_c):\Theta\rightarrow E^{\mathbb{C}}_p$ is defined $\forall x\in \Theta$ ~by ~$\hat S_k(a_c)(x):=\hat S_k(a_c(x))$. \end{enumerate} \end{thme} The proof of this theorem is detailed in Appendix \ref{a3.7}. \bigskip Equations \ref{F47} and \ref{F48} are identical to the <<~spinless~>> ones of theorem \ref{2.1}, but here \textbf{the canonical function $a_c$ takes its values in $E^{\mathbb{C}}_p$}.
The additional terms of equation \ref{F49}, relative to those given by theorem \ref{2.1}, express the <<~spin effect~>> in the electromagnetic potential. \bigskip When the electric charge is well defined, equations \ref{F47}, \ref{F48} and \ref{F49} translate, via definition \ref{d2.16}, in terms of the state function $\varPsi$. These give back, in approximation, the standard Schrödinger (or Pauli) equations. We write here only the result corresponding to the electromagnetic potential. \begin{coro}\label{c2.1} Under the hypotheses of theorem \ref{2.3}, when the electric charge is well defined (def. \ref{d2.14}) and in the case of the specific electromagnetic potential, the state function $\varPsi$ satisfies the equation: \begin{multline}\label{F50} 2iM\frac{\partial\varPsi}{\partial t}=\sum_{j=0}^3\varepsilon_j(i\frac{\partial}{\partial x^j}+Q\Upsilon^j)^2\varPsi-2MQ\Upsilon^0\varPsi-2\varrho Q\sum_{k=1}^3B^k\hat S_k(\varPsi)+{Q}^2\varrho^2\rho^2|B|^2\varPsi \end{multline} \end{coro} This result is obtained quickly by substituting $a_c$, expressed in terms of $\varPsi$ (def. \ref{d2.16}), into equation \ref{F49}. When the $\varepsilon$-approximations (specified in the next paragraph) are valid, the last term of equation \ref{F50} can be <<~neglected~>> (as well as the $j=0$ term of the sum $\sum$), which gives back exactly the <<~Pauli equation~>> of classical quantum physics. \begin{rmq}\label{r2.10} Given the definitions of the domains of type <<~oscillating metric in a potential~>>, for which the metric $g$ is of the form $g=|a|^{4/n-2}g_\mathcal P$, it seems natural to regard the function $a$ as <<~carrying~>> the characteristics attributed to particles in classical physics (mass, electric charge, spin, etc.), and $g_\mathcal P$ as carrying the characteristics of the <<~potential alone~>> in which the particles sit.
However, this interpretation breaks down in the case of oscillating metrics with spin in an electromagnetic field, since the gyromagnetic constant $\varrho$ has been introduced in the specific potential $g_\mathcal P$, whereas it is classically associated rather with the <<~particles~>>. In fact, I do not think that, with the view of physics presented here, the two <<~objects~>> $a$ and $g_\mathcal P$ should be separated: the domain (with metric $|a|^{4/n-2}g_\mathcal P$) must be considered as a <<~whole~>>. The constant $\varrho$ has here been called the <<~gyromagnetic constant~>> because it corresponds, in the equations obtained, to half the gyromagnetic ratio of the particles of standard quantum physics ($\varrho\simeq1$ in the case of the electron). One <<~interpretation~>> would consist in setting $\varrho=1$ in the definition of $g_\mathcal P$ and considering that the domains of type <<~elementary oscillating metric with spin $1/2$~>> correspond only to the notion of <<~electrons~>> of classical physics. The domains describing the other particles would have a more complex form for the function $a$ than $a=\phi\beta$ (they would correspond to composite particles), and this would translate <<~in approximation~>> into the equation \ref{F49} given by theorem \ref{2.3}, but with $\varrho\neq1$, which would then indeed be a characteristic of the function $a$ and not of the metric $g_\mathcal P$. \end{rmq} \begin{rmq}\label{r2.11} In the hypothesis $H_{2,E}$ on the electromagnetic potential, we assumed that $\Upsilon$ was defined on $\Theta\times S^3(\rho)$ and not on $\Theta\times S^1(\delta)\times S^3(\rho)$. The additional terms that this (natural) generalization would bring are not necessarily compatible with the \textbf{linear} equation~~ $\Box a+Sa=0$, but they could be estimated, through very involved computations, in the framework of the nonlinear equation \ref{F0'}.
It is possible that, in the study of the processes used to determine the value of the gyromagnetic constant $\varrho$ for the <<~electron~>>, it is precisely these corrective terms that would explain the slight measured difference between $\varrho$ and $1$ (but this is, for the moment, mere speculation). \end{rmq}

\subsection{The $\varepsilon$-approximations}

\subsubsection{The $\varepsilon$-approximations for the state function} We take them identical to those of definition \ref{d2.17}, but here $\varPsi$ takes its values in $E^{\mathbb{C}}_{S^3(\rho)}(\gamma)$. \vspace{-5mm}

\subsubsection{The $\varepsilon$-approximations for the potentials} We take up the conditions given in definition \ref{d2.18}, to which we add a condition on the magnetic fields $B^1$, $B^2$, $B^3$, since these enter the definition of $g_\mathcal P$ as well as equation \ref{F49} of theorem \ref{2.3}. The conditions on the ($\rho B^k$) are chosen identical to those imposed on $\Upsilon_0$, since these two terms enter equation \ref{F49} of theorem \ref{2.3} in the same way. We recall that the only purpose of the $\varepsilon$-approximations (on the state function or on the potentials) is to give precise conditions that make certain terms of equations \ref{F47}, \ref{F48}, \ref{F49} <<~negligible~>>. After removal of these negligible terms, the equations become identical to those obtained in classical quantum physics.

\subsection{The probability of presence of a singularity in a domain of type <<~elementary oscillating metric with spin in a potential~>>\label{ss2.13.6}}

We take up here, now taking the <<~spin~>> into account, what was set out in section \ref{s2.12}.
The modifications are minor: they consist only in taking into account the presence of the compact manifold $S^3(\rho)$ and of the associated eigenspaces $E_{S^3(\rho)}(\gamma)$. The canonical function $a_c$ associated with the function $a$, as well as the state function $\varPsi$, now take their values in $E_{S^1(\delta)}(\lambda)\otimes E_{S^3(\rho)}(\gamma)$, identified with $E^{\mathbb{C}}_{S^3(\rho)}(\gamma):=E^{\mathbb{C}}_p$. $E_{S^3(\rho)}(\gamma)$ is naturally endowed with the scalar product:\\ $$\langle\alpha_1,\alpha_2 \rangle_{L^2}:=\int_{S^3(\rho)}\alpha_1\alpha_2 dv_{S^3}$$\\ The complexification $E^{\mathbb{C}}_{S^3(\rho)}(\gamma)$ is then endowed with the Hermitian product: \begin{eqnarray}\label{F50'} \langle\alpha,\alpha'\rangle:=\int_{S^3(\rho)}\alpha\overline{\alpha'} dv_{S^3} \end{eqnarray} where $\alpha=\alpha_1+i\alpha_2$ ~and~ $\alpha'=\alpha'_1+i\alpha'_2\in E^{\mathbb{C}}_{S^3(\rho)}(\gamma)$.\\ In particular: \begin{eqnarray}\label{F51'} |\alpha|^2:=\langle\alpha,\alpha\rangle=\int_{S^3(\rho)}(\alpha_1^2+\alpha_2^2) dv_{S^3}=|\alpha_1|^2_{L^2}+|\alpha_2|^2_{L^2} \end{eqnarray} The fact that no law governs the singularities in a domain $\mathscr{D}\subset\mathscr{M}$ was expressed mathematically in section \ref{s2.12} by \ref{F35} and \ref{F36}.
Recall that, if $\mathscr{H}_{2_{(t,u)}}\subset\mathscr{H}_{1_{(t,u)}}$ are spacelike submanifolds of dimension $(n-2)$, then, when $\varsigma$ is an elementary singularity in $\mathscr{H}_{1_{(t,u)}}$, the probability that it lies in $\mathscr{H}_{2_{(t,u)}}$ is:\\ $p_{(t,u)}=\int_{\mathscr{H}_{2_{(t,u)}}}dv_{g_{\mathscr{H}_{1_{(t,u)}}}}/\int_{\mathscr{H}_{1_{(t,u)}}}dv_{g_{\mathscr{H}_{1_{(t,u)}}}}$ \bigskip

We consider a chart of the observation atlas for which the type cell $(\mathscr{C},g)$ is that of an oscillating metric with spin in a potential, for which:\\ $\mathscr{C}=\Theta\times S^1(\delta)\times S^3(\rho)\times V$ ~~($\Theta$ is assumed to be of the form $I\times \mathscr U\subset \mathbb{R}\times\mathbb{R}^3$) ~and ~$g=|a|^{4/n-2}g_{\mathcal P}$. We <<~fix~>> $(t,u)\in \mathbb{R}\times S^1(\delta)$.\\ Let $\mathscr{H}_{2_{(t,u)}}\subset\mathscr{H}_{1_{(t,u)}}\subset\mathscr{C}$ be of the form: $\mathscr{H}_{2_{(t,u)}}=\{t\}\times \omega\times \{u\}\times S^3(\rho)\times V$, $\mathscr{H}_{1_{(t,u)}}=\{t\}\times \Omega\times \{u\}\times S^3(\rho)\times V$, with $\omega\subset\Omega\subset\mathbb{R}^3$. We assume that the potentials satisfy the $\varepsilon$-approximations presented in the preceding paragraph. We then obtain (cf.
\ref{F39}): \begin{eqnarray}\label{F52'} p_{(t,u)}\simeq\int_{\mathscr{H}_{2_{(t,u)}}}(*)/\int_{\mathscr{H}_{1_{(t,u)}}}(*) \end{eqnarray} where $(*)=a^2_{(t,u)}\sqrt{detg_0|_{\mathscr{H}_{1_{(t,u)}}}}dv_{g_{0|_{\mathscr U\times S^3(\rho)\times V}}}$.

Let us examine the case of an elementary oscillating metric \textbf{of order 2}, for which, by definition: $a=\phi\beta$, where $\phi:\Theta\times S^1(\delta)\times S^3(\rho)\rightarrow\mathbb{R}$ satisfies: $\phi=\phi_1\cos(Q^+u)+\phi_2\sin(Q^+u)$, $\phi_1$ and $\phi_2$ being two real functions defined on $\Theta\times S^3(\rho)$ with $\forall x\in \Theta$ ~~~$\phi_{1,x}(.)$ and $\phi_{2,x}(.)\in E_{S^3(\rho)}(\gamma)$, and $\beta\in E_V(\nu)$. (Recall that the canonical function $a_c$ is here defined by $a_c=\phi_1+i\phi_2$, so that $\forall x\in \Theta$ ~~~$a_{c,x}(.)\in E^{\mathbb{C}}_{S^3(\rho)}(\gamma)$.) Using \ref{F52'} and <<~simplifying~>> by $\int_V\beta^2 dv_{g_{0|_V}}$, we obtain:\\ $p_{(t,u)}\simeq\int_{\omega\times S^3(\rho)}(**)dx^idv_{g_{0|_{S^3}}}/\int_{\Omega\times S^3(\rho)}(**)dx^idv_{g_{0|_{S^3}}}$\\ where $(**)=\cos^2(Q^+u){\phi_1}^2_{(t,x^i,s)}+\sin^2(Q^+u){\phi_2}^2_{(t,x^i,s)}+2\cos(Q^+u)\sin(Q^+u)\phi_1{\phi_2}_{(t,x^i, s)}$\\ and $(x^i):=(x^1,x^2,x^3)\in \mathscr U\subset \mathbb{R}^3$, ~~$dx^i:=dx^1dx^2dx^3$.\\ Considering the <<~average~>> over $S^1(\delta)$ (cf. \ref{F41}), we write:\\ $p_{(t)}\simeq\int_{\omega\times S^3(\rho)}(\phi^2_1+\phi^2_2)_{(t,x^i,s)}dx^idv_{g_{0|_{S^3}}}/\int_{\Omega\times S^3(\rho)}(\phi^2_1+\phi^2_2)_{(t,x^i,s)}dx^idv_{g_{0|_{S^3}}}$.\\ Hence:\\ $p_{(t)}\simeq\int_\omega|a_{c(t,x^i)}|^2dx^i/\int_\Omega|a_{c(t,x^i)}|^2dx^i$.\\ Here, $|.|$ is the norm on $E^{\mathbb{C}}_{S^3(\rho)}(\gamma)$ defined in \ref{F51'}.\\ When the elementary oscillating metric has a well-defined electric charge (def.
\ref{d2.14}), and taking into account the definition of the state function $\psi:\Theta\rightarrow E^{\mathbb{C}}_{S^3(\rho)}(\gamma)$, we obtain:\\ $p_{(t)}\simeq\int_\omega|\psi_{(t,x^i)}|^2dx^i/\int_\Omega|\psi_{(t,x^i)}|^2dx^i$.\\ We thus recover a standard result of classical quantum physics, with the difference, however, that the denominator $\int_\Omega|\psi_{(t,x^i)}|^2dx^i$ may depend on $t$ (cf. the comments on this subject in section \ref{s2.12}). \subsection{Some examples} \subsubsection{Example 1 - Spin $1/2$}\label{ss+1} By definition, a domain of type ``elementary oscillating metric with spin in a potential'' has spin $1/2$ when the eigenspace $E_{S^3(\rho)}(\gamma)$ corresponds to the space $E_1$ of the classification given in \ref{F45}. In this case, $\gamma=3\rho^{-2}$ and $\dim E_1=4$. A natural basis of this eigenspace is obtained by taking on $S^3(\rho)$ the restrictions $(\alpha_1,\alpha_2,\alpha_3,\alpha_4)$ of the coordinate functions of $\mathbb{R}^4$ (harmonic homogeneous polynomials of degree~1), which we shall denote here by $(x_1,x_2,x_3,x_4)$ (cf. appendix \ref{sa3.7}). Since the spin index is a half-integer, we know that the eigenfunctions $(\alpha_1,\alpha_2,\alpha_3, \alpha_4)$ do not come from the sphere $S^2$ via the Hopf fibration. We denote by $M_1$, $M_2$, $M_3$ the matrices of $S_1$, $S_2$, $S_3$ (def. \ref{d2.21}) relative to the basis $(\alpha_1,\alpha_2,\alpha_3, \alpha_4)$. They are the same as the matrices of the differential operators $L_1$, $L_2$, $L_3$ relative to the basis $((x_1), (x_2), (x_3), (x_4))$, since $(L_k(x_l))|_{S^3}=(L_k|_{S^3})(x_l|_{S^3})$.
A very quick computation, starting from the expressions of $L_1$, $L_2$, $L_3$, gives: \begin{center} $ M_1 = \begin{pmatrix} 0&0&-1&0\\ 0&0&0&-1\\ 1&0&0&0\\ 0&1&0&0 \end{pmatrix}$~~~ $ M_2 = \begin{pmatrix} 0&0&0&1\\ 0&0&-1&0\\ 0&1&0&0\\ -1&0&0&0 \end{pmatrix}$~~~ $M_3 = \begin{pmatrix} 0&-1&0&0\\ 1&0&0&0\\ 0&0&0&1\\ 0&0&-1&0 \end{pmatrix}$ \end{center} The three matrices $\hat M_k$ of the endomorphisms $\hat S_k$ of $E^{\mathbb{C}}_1$ are therefore $\hat M_k=-iM_k$, by definition (\ref{d2.22}). (One can check that these three matrices $\hat M_k$ have the commutation properties of the Pauli matrices, which are merely special cases of the commutation properties of the angular-momentum observables (up to the factor $\hbar$).) \bigskip Under the hypotheses of corollary \ref{c2.1}, the state function $\varPsi$ satisfies equation \ref{F50}. Denote by $\varPsi^1$, $\varPsi^2$, $\varPsi^3$, $\varPsi^4$ the four complex functions defined on $\Theta$ that are the components of $\varPsi$ in the basis $(\alpha_1,\alpha_2,\alpha_3,\alpha_4)$: $\varPsi=\sum_{j=1}^4\varPsi^j\alpha_j$. Equation \ref{F50} then splits into four equations: \begin{eqnarray}\label{F51} 2iM\frac{\partial\varPsi^1}{\partial t}=(\alpha)\varPsi^1+2\varrho Qi(B^1\varPsi^3-B^2\varPsi^4+B^3\varPsi^2) \end{eqnarray} \begin{eqnarray}\label{F52} 2iM\frac{\partial\varPsi^2}{\partial t}=(\alpha)\varPsi^2+2\varrho Qi(B^1\varPsi^4+B^2\varPsi^3-B^3\varPsi^1) \end{eqnarray} \begin{eqnarray}\label{F53} 2iM\frac{\partial\varPsi^3}{\partial t}=(\alpha)\varPsi^3+2\varrho Qi(-B^1\varPsi^1-B^2\varPsi^2-B^3\varPsi^4) \end{eqnarray} \begin{eqnarray}\label{F54} 2iM\frac{\partial\varPsi^4}{\partial t}=(\alpha)\varPsi^4+2\varrho Qi(-B^1\varPsi^2+B^2\varPsi^1+B^3\varPsi^3) \end{eqnarray} where $(\alpha)$ is the operator ($\sum_{j=0}^3\varepsilon_j(i\frac{\partial}{\partial x^j}+Q\Upsilon^j)^2-2MQ\Upsilon^0+\varrho^2Q^2\rho^2|B|^2$).
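The algebraic properties claimed above for the matrices $M_1$, $M_2$, $M_3$ can be checked mechanically. The following sketch (our addition, not part of the text's derivation) verifies that they close cyclically under the commutator and that the Hermitian matrices $\hat M_k=-iM_k$ square to the identity, i.e. have eigenvalues $\pm1$, as Pauli-type spin operators should.

```python
import numpy as np

# The three matrices M_1, M_2, M_3 from the spin-1/2 example above.
M1 = np.array([[0, 0, -1, 0], [0, 0, 0, -1], [1, 0, 0, 0], [0, 1, 0, 0]], dtype=float)
M2 = np.array([[0, 0, 0, 1], [0, 0, -1, 0], [0, 1, 0, 0], [-1, 0, 0, 0]], dtype=float)
M3 = np.array([[0, -1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, -1, 0]], dtype=float)

def comm(A, B):
    """Matrix commutator [A, B] = AB - BA."""
    return A @ B - B @ A

# Cyclic closure under commutation: [M1, M2] = 2 M3, and cyclically.
ok_cyclic = (np.array_equal(comm(M1, M2), 2 * M3)
             and np.array_equal(comm(M2, M3), 2 * M1)
             and np.array_equal(comm(M3, M1), 2 * M2))

# hat_M_k = -i M_k: Hermitian, squaring to the identity (eigenvalues +-1).
hat = [-1j * M for M in (M1, M2, M3)]
ok_unit = all(np.allclose(H @ H, np.eye(4)) for H in hat)
ok_herm = all(np.allclose(H, H.conj().T) for H in hat)
```

The factor $2$ in the commutators reflects the normalization of the $L_k$; up to this factor (the $\hbar$ mentioned above), the structure is that of angular-momentum observables.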
It is then worthwhile to define the four functions: $\varphi^1:=\frac{1}{\sqrt{2}}(\varPsi^3-i\varPsi^4), ~~~\varphi^2:=\frac{1}{\sqrt{2}}(\varPsi^2+i\varPsi^1)$ $\varphi^3:=-\frac{1}{\sqrt{2}}(\varPsi^1+i\varPsi^2),~~~\varphi^4:=\frac{1}{\sqrt{2}}(-\varPsi^4+i\varPsi^3)$ The coefficient $\frac{1}{\sqrt{2}}$ is chosen so that: $\sum_{j=1}^{4}|\varphi^j|^2=\sum_{j=1}^{4}|\varPsi^j|^2=|\varPsi|^2_{E^{\mathbb{C}}_1}$. These four complex functions correspond to the components of the function $\varPsi$ in the following basis of $E^{\mathbb{C}}_1$: $$\beta_1:=\frac{1}{\sqrt{2}}(\alpha_3+i\alpha_4),~~ \beta_2:=\frac{1}{\sqrt{2}}(\alpha_2-i\alpha_1)$$ \begin{eqnarray}\label{F+6} \beta_3:=\frac{1}{\sqrt{2}}(i\alpha_2-\alpha_1),~~ \beta_4:=\frac{-1}{\sqrt{2}}(i\alpha_3+\alpha_4) \end{eqnarray} In this basis, the three matrices $\hat M_k$ of the endomorphisms $\hat S_k$ of $E^{\mathbb{C}}_1$ are: \begin{center} $\hat M_1 = \begin{pmatrix} 0&1&0&0\\ 1&0&0&0\\ 0&0&0&1\\ 0&0&1&0 \end{pmatrix}$~~~ $\hat M_2 = \begin{pmatrix} 0&-i&0&0\\ -i&0&0&0\\ 0&0&0&-i\\ 0&0&i&0 \end{pmatrix}$~~~ $\hat M_3 = \begin{pmatrix} -1&0&0&0\\ 0&1&0&0\\ 0&0&-1&0\\ 0&0&0&1 \end{pmatrix}$ \end{center} Equations \ref{F51}, \ref{F52}, \ref{F53}, \ref{F54} show that the functions ($\varphi^j$) satisfy: \begin{eqnarray}\label{F55} 2iM\frac{\partial\varphi^1}{\partial t}=(\alpha)\varphi^1+2\varrho Q(-B^1\varphi^2-iB^2\varphi^2+B^3\varphi^1) \end{eqnarray} \begin{eqnarray}\label{F56} 2iM\frac{\partial\varphi^2}{\partial t}=(\alpha)\varphi^2+2\varrho Q(-B^1\varphi^1+iB^2\varphi^1-B^3\varphi^2) \end{eqnarray} \begin{eqnarray}\label{F57} 2iM\frac{\partial\varphi^3}{\partial t}=(\alpha)\varphi^3+2\varrho Q(-B^1\varphi^4-iB^2\varphi^4+B^3\varphi^3) \end{eqnarray} \begin{eqnarray}\label{F58} 2iM\frac{\partial\varphi^4}{\partial t}=(\alpha)\varphi^4+2\varrho Q(-B^1\varphi^3+iB^2\varphi^3-B^3\varphi^4) \end{eqnarray} The pair of equations (\ref{F55}, ~\ref{F56}) is \textbf{identical} to the pair
(\ref{F57}, ~\ref{F58}) when $\varphi^1$ becomes $\varphi^3$ and $\varphi^2$ becomes $\varphi^4$. \textbf{Each of these pairs of equations corresponds exactly to the Pauli equations of classical quantum physics} when the last term of the operator $(\alpha)$, as well as its $j=0$ term (in the sum $\sum$), are removed. These terms are indeed ``neglected'' when the $\varepsilon$-approximations are valid. We leave it to the reader to interpret, already at this stage, the results of a ``Stern-Gerlach''-type experiment for spin $1/2$ in terms of the ``deformation of space-time'' relative to $g_0$ specified by the pairs of equations (\ref{F55}, ~\ref{F56}) and (\ref{F57}, ~\ref{F58}); this will, however, be developed in section \ref{+2.1}, so as to make it possible to approach the study of ``quantum entanglement'' phenomena in section \ref{+2.2}. \subsubsection{Example 2 - Spin $1$} Spin $1$ is associated with the domain for which the eigenspace $E_{S^3(\rho)}(\gamma)$ corresponds to the space classified as $E_2$ (cf. \ref{F45}). In this case $\gamma=8\rho^{-2}$. We know that the spaces $E_q$ with even index $q$ contain a subspace $E'_q$ which identifies with the eigenspace $E_{S^2(\rho/2)}(\gamma)$ of the sphere $S^2(\rho/2)$ (cf. appendix \ref{sa3.7}). We could give the general description of ``spin $q$'' for even $q$ by considering the spaces $E_q$ (and not $E'_q$), but we shall present, as an example, only the particular case where the state function associated with the oscillating metric satisfies $\varPsi_x(.)\in E'^{\mathbb{C}}_2$. The dimension of the spaces $E'_q$ is $q+1$, which indeed corresponds to the dimension considered in standard quantum physics for ``integer spins''.
As a basis of $E'_2$ we choose the restrictions to $S^3(\rho)$ of the three harmonic homogeneous polynomials $\tilde{P}_k$ of degree $2$ which can be written in the form $\tilde{P}_k=P_k\circ\pi$, where $\pi:\mathbb{R}^4\rightarrow\mathbb{R}^3$ is the map defining the Hopf fibration and the polynomials $P_k$ are the ``coordinate functions'' of $\mathbb{R}^3$: ($y_1$), ($y_2$), ($y_3$). We therefore have (cf. appendix \ref{sa3.7}): \begin{center} \begin{tabular}{rcl} $\tilde{P}_1(x_1,x_2,x_3,x_4)$ &$=$& $x_1x_3+x_2x_4$\\[0.7em] $\tilde{P}_2(x_1,x_2,x_3,x_4)$ &$=$& $x_1x_4-x_2x_3$\\[0.7em] $\tilde{P}_3(x_1,x_2,x_3,x_4)$ &$=$& $\frac{1}{2}(x^2_3+x^2_4-x^2_1-x^2_2)$\\[0.7em] \end{tabular} \end{center} We denote by $(\beta_1,\beta_2,\beta_3):=(\tilde{P}_1|_{S^3(\rho)},\tilde{P}_2|_{S^3(\rho)},\tilde{P}_3|_{S^3(\rho)})$ the chosen basis of $E'_2$. The matrices $M_1$, $M_2$, $M_3$ of $S_1$, $S_2$, $S_3$ (def. \ref{d2.21}) relative to the basis $(\beta_1,\beta_2,\beta_3)$ are the same as the matrices of the differential operators $L_1$, $L_2$, $L_3$ relative to the basis $\tilde{P}_1$, $\tilde{P}_2$, $\tilde{P}_3$. A quick computation gives: \begin{center} $ M_1 =2 \begin{pmatrix} 0&0&1\\ 0&0&0\\ 1&0&0 \end{pmatrix}$ ~~~ $ M_2 = 2\begin{pmatrix} 0&0&0\\ 0&0&1\\ 0&-1&0 \end{pmatrix}$ ~~~ $ M_3 = 2\begin{pmatrix} 0&1&0\\ -1&0&0\\ 0&0&0 \end{pmatrix}$ \end{center} The three matrices $\hat{M}_k$ of the endomorphisms $\hat{S}_k$ of ~$E'^{\mathbb{C}}_2$ are therefore ~$\hat{M}_k=-iM_k$. \bigskip Under the hypotheses of corollary \ref{c2.1}, the state function $\varPsi$ satisfies equation \ref{F50}. Denote by $\varPsi_1$, $\varPsi_2$, $\varPsi_3$ the three complex functions defined on $\Theta$ that are the components of $\varPsi$ in the basis $(\beta_1,\beta_2,\beta_3)$.
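As a side check (our addition), the two properties of the polynomials $\tilde{P}_k$ used above, harmonicity and homogeneity of degree $2$, can be verified symbolically:

```python
import sympy as sp

# The three polynomials P~_k of the spin-1 example above.
x1, x2, x3, x4 = sp.symbols('x1 x2 x3 x4')
t = sp.symbols('t')
P = [x1*x3 + x2*x4,
     x1*x4 - x2*x3,
     sp.Rational(1, 2) * (x3**2 + x4**2 - x1**2 - x2**2)]

# Harmonicity: the Laplacian of each polynomial on R^4 should vanish.
laplacians = [sum(sp.diff(p, v, 2) for v in (x1, x2, x3, x4)) for p in P]

# Homogeneity of degree 2: p(t*x) - t^2 * p(x) should vanish identically.
scaling = {x1: t*x1, x2: t*x2, x3: t*x3, x4: t*x4}
homog = [sp.expand(p.subs(scaling) - t**2 * p) for p in P]
```

Only these two properties, not the explicit Hopf map $\pi$ itself, are needed for the restrictions to span an eigenspace of the spherical Laplacian.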
Equation \ref{F50} then splits into three equations: \begin{center} \begin{tabular}{rcl} $2iM\displaystyle\frac{\partial\varPsi^1}{\partial t}$ &$=$& $(\alpha)\varPsi^1+4\varrho Qi(B^3\varPsi^2+B^1\varPsi^3)$\\[1em] $2iM\displaystyle\frac{\partial\varPsi^2}{\partial t}$ &$=$& $(\alpha)\varPsi^2+4\varrho Qi(-B^3\varPsi^1+B^2\varPsi^3)$\\[1em] $2iM\displaystyle\frac{\partial\varPsi^3}{\partial t}$ &$=$& $(\alpha)\varPsi^3+4\varrho Qi(B^1\varPsi^1-B^2\varPsi^2)$\\[1em] \end{tabular} \end{center} where $(\alpha)$ is the operator: \[\sum_{j=0}^3\varepsilon_j(i\frac{\partial}{\partial x^j}+Q\Upsilon^j)^2-2MQ\Upsilon^0+\varrho^2Q^2\rho^2|B|^2\] We leave it, here again, to the reader to interpret the results of a ``Stern-Gerlach''-type experiment for spin $1$ in terms of the ``deformation of space-time'' relative to $g_0$ specified by the three preceding equations. \section{Oscillating metrics with zero mass\label{s2.14}} The notion of mass was defined for elementary oscillating metrics associated with an eigenspace $E_{\lambda,\mu}$ (def. \ref{d2.3}). The mass frequency $M$ is (def. \ref{d2.8}) the nonnegative constant $\sqrt{S+\mu-\lambda}$, ~where ~$S:=\frac{n-2}{4(n-1)}S_{g_0}$, ~$S_{g_0}$ being the constant scalar curvature of the metric $g_0$. \textbf{In this section we therefore consider the case $M=0$, that is: $S=\lambda-\mu$}. Note that, if $\lambda=0$, this equality can hold only if the scalar curvature is negative or zero (which is not to be excluded (cf. remark \ref{r1'})). The electric charge frequency was defined (def. \ref{d2.7}) by $Q^+:=\sqrt{\lambda}$; however, when one takes $M=0$, the relative electric charge is in general not defined (def. \ref{d2.14}), and the same therefore holds for the state function $\psi$.
The constant $Q^+$ (which may be zero) will remain an important characteristic of ``zero-mass'' oscillating metrics, but the term ``electric charge frequency'' will no longer be appropriate if one wishes to stay close to the language of standard physics. \textbf{The equations satisfied by the canonical function $a_c$ of an elementary oscillating metric of order 1 or 2 with zero mass were given in theorems \ref{2.1} and \ref{2.3}, where it suffices to set $M=0$.} We rewrite here the equations obtained for elementary oscillating metrics of order 1 with zero mass (for those of order 2 (cf. thm. \ref{2.3}), only the case of the electromagnetic potential differs, since the notion of ``spin'' is taken into account; moreover, $a_c$ takes its values in $E^{\mathbb{C}}_{S^3(\rho)}(\gamma)$ and not in $\mathbb{C}$). \begin{enumerate} \item In a neutral potential. \begin{eqnarray}\label{F59} \Box_\Theta a_c=0 \end{eqnarray} where ~~$\Box_\Theta=\frac{\partial^2}{(\partial t)^2}-\sum_{k=1}^3\frac{\partial^2}{(\partial x^k)^2}$ \item In an active potential without electromagnetism. \begin{eqnarray}\label{F60} \Box_\Theta a_c=2v\frac{\partial^2a_c}{(\partial t)^2} \end{eqnarray} \item In an electromagnetic potential. \begin{eqnarray}\label{F61} \sum_{j=0}^3\varepsilon_j(i\frac{\partial}{\partial x^j}+Q^+\Upsilon^j)^2a_c=0 \end{eqnarray} where~~ $\varepsilon_j=g_{0jj}$ \end{enumerate} In the case of the neutral potential, the canonical function $a_c$ is thus a solution of the classical wave equation. Standard electromagnetic waves will be, for us, particular zero-mass oscillating metrics. Equations \ref{F60} and \ref{F61} describe the influence of a potential on a zero-mass oscillating metric. That such an influence exists is not surprising, and it already appears in classical general relativity, where it is known, for example, that an electromagnetic field deforms space-time through its energy-momentum tensor.
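As a small symbolic sketch (our addition), one can check that the plane waves underlying the massless case do solve \ref{F59}: any wave whose frequency equals the modulus of its wave vector satisfies $\Box_\Theta a_c=0$, which is the classical massless dispersion relation.

```python
import sympy as sp

# Coordinates (t, x1, x2, x3) of Theta and a real wave vector (l1, l2, l3).
t, x1, x2, x3 = sp.symbols('t x1 x2 x3', real=True)
l1, l2, l3 = sp.symbols('l1 l2 l3', real=True)

# Massless dispersion relation: omega^2 = |lambda|^2.
omega = sp.sqrt(l1**2 + l2**2 + l3**2)

# Plane-wave canonical function and the d'Alembertian Box_Theta applied to it.
a_c = sp.exp(-sp.I * (omega*t - l1*x1 - l2*x2 - l3*x3))
box = sp.diff(a_c, t, 2) - sum(sp.diff(a_c, v, 2) for v in (x1, x2, x3))
residual = sp.simplify(box)   # should vanish identically
```

For $\omega^2\neq|\lambda|^2$ the residual would instead be $(|\lambda|^2-\omega^2)a_c$, which is the massive case discussed earlier.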
\bigskip When one considers a domain with singularities $(\mathscr{D},g,\mathscr{S})$ (def. \ref{d2.20}) for which $g$ is an elementary oscillating metric \textbf{with zero mass}, the probability of presence of a singularity in $\mathscr{D}$ was studied in section \ref{s2.12}. The singularities will be identified with \begin{it}photons\end{it} in the setting of standard electromagnetic waves, with \begin{it}gravitons\end{it} in the setting of gravitational waves, etc., but only once we have given precise definitions of the various domains of type ``oscillating metric with zero mass'' in which these singularities lie. \bigskip In this section, oscillating metrics with zero mass have been presented from a ``quantum'' point of view (in our perspective). One can, however, also approach these notions in terms of a ``lightlike fluid'', but then the approximations used are no longer of the same type; this corresponds to the first chapter of this paper, in which ``quantum phenomena'' were excluded since, from the outset, the metric $g$ was averaged over the circles $S^1_x(\delta)$ (cf. lemma \ref{l1}). \section[The Higgs field?]{A ``very elementary'' oscillating metric \label{s2.15}} Consider a domain of type ``elementary oscillating metric in a neutral potential associated with $E_{\lambda,\mu}$'' (def. \ref{d2.3}) \textbf{for which $\lambda=0$ and $\mu=0$}. The eigenfunctions appearing in the spectral decomposition of the associated function $a$ are therefore all constant. The electric charge frequency $Q^+$ is zero, the ``spin'' is zero, etc. The function $a$ thus reduces to a real function defined on $\Theta\subset\mathbb{R}^4$ (the analogue of a scalar field in Q.F.T.).
Within the linear approximation, the fundamental equation \ref{F1} reads: $\Box_{g_\mathcal P}a+Sa=0$, ~where ~$S=\frac{n-2}{4(n-1)}S_{g_0}$ ~and ~$\Box_\Theta=\frac{\partial^2}{(\partial t)^2}-\sum_{k=1}^3\frac{\partial^2}{(\partial x^k)^2}$. Assuming that $S_{g_0}$ is strictly positive, the mass frequency is simply: \bigskip $M=(\frac{n-2}{4(n-1)}S_{g_0})^{1/2}$. \bigskip This oscillating metric therefore has \textbf{the simplest possible form} that can be written in a neutral potential, and its mass depends only on the scalar curvature of the neutral potential. We had already noted, when defining mass (def. \ref{d2.10}), that a strictly positive scalar curvature of the neutral potential ``gives'' mass to elementary oscillating metrics associated with $E_{\lambda,\mu}$ for which $\mu-\lambda$ is strictly negative (when $S+\mu-\lambda$ is positive). It seems natural to me to draw a link between this elementary oscillating metric and the Higgs field as presented in Q.F.T. One may consider that the singularities in such a domain correspond to Higgs bosons; however, it is likely that the singularities associated with this particular domain are ``undetectable''. The experiments that actually detect the presence of Higgs bosons certainly do not fall within the framework of the oscillating metric just presented, but describe complex interaction phenomena for which the notion of ``mass'' no longer has the meaning given to it so far, and for which the notion of ``lifetime'' becomes important. These last points are briefly addressed in section \ref{s2.17}.
\section[Quantities and measurements]{Quantities and measurements}\label{s2.16} It is important to note that, in everything we have presented in this paper, the notion of \begin{it}momentum\end{it} (for example) has not appeared, even though this notion is fundamental in all standard physical theories. In the chapter on quantum phenomena, the only notion of \begin{it}measurement\end{it} we considered is that of \begin{it}position\end{it}, and it was defined only for \begin{it}singularities\end{it} (cf. \ref{s2.11}). This is sufficient to describe, qualitatively and quantitatively, all the standard experiments of quantum physics (diffraction, Young's slits, deflection by a potential, the Stern-Gerlach experiment, quantum entanglement, etc.). We must now address more complex phenomena (currently treated by Q.F.T.). With the view of physics taken here, there is no a priori reason to introduce notions of \begin{it}momentum\end{it}, \begin{it}energy\end{it}, etc. However, in order to ``keep contact'' with standard physical theories, so as to be able to compare our theoretical results with theirs, we shall introduce in this section the notion of \begin{it}quantities\end{it} such as \begin{it}momentum\end{it}, \begin{it}energy\end{it}, etc., which will be associated with metrics conformal to a potential (and not with singularities), as well as the notion of an \begin{it}instrument measuring these quantities\end{it}. In particular, in this study we shall recover inequalities analogous to the \begin{it}uncertainty relations\end{it} of standard quantum physics, although for us these will not have great conceptual importance.
Note that the notion of \begin{it}velocity\end{it} (which makes it possible to introduce the notion of \begin{it}momentum\end{it}) was defined only in the very particular case of homogeneous oscillating metrics (cf. \ref{s2.6}); this will nevertheless serve as the basis of what we are about to introduce. Throughout this section, the type cell is of the form $\mathscr{C}=\Theta\times S^1(\delta)\times W$ where $\Theta=I\times \Omega\subset\mathbb{R}\times\mathbb{R}^3$, and the reference metric is $g_0=g_\Theta\times (-g_{S^1(\delta)})\times g_W$. \subsection{An example}\label{ssn2.1} Before giving, in a general setting, the definitions relating to the notion of measurement, we begin by considering the particular case of a domain of type ``oscillating metric of order 1 in a neutral potential'', which will serve as the reference example in this section. The metric $g$ reads: $g=|a|^{4/n-2}g_0$ and the function $a$ has the particular form: $a=\beta\sum_{j=1}^p\varphi_j$ ~~ where, for every $j$ from 1 to $p$, ~~$\varphi_j:\Theta\times S^1(\delta)\rightarrow\mathbb{R}$ satisfies: $\varphi_j=C_j \cos(M_jt+Qu-\sum_{k=1}^3\lambda_{jk}x^k)+C'_j\sin(M_jt+Qu-\sum_{k=1}^3\lambda_{jk}x^k)$. Here $C_j$ and $C'_j$ are constants,~~ $\beta\in E_W(\mu)$, and ~$Q$ is the relative electric charge. By definition \ref{d2.12}, each function $a_j:=\beta\varphi_j$ corresponds to a \textbf{homogeneous} elementary oscillating metric of order 1, and the velocity vector of this oscillating metric is given by: $\overrightarrow{v_j}=(1/M_j)(\lambda_{j1},\lambda_{j2},\lambda_{j3})$. The corresponding momentum is then naturally defined by: $\overrightarrow{\lambda_j}:=M_j\overrightarrow{v_j}=(\lambda_{j1},\lambda_{j2},\lambda_{j3})$. $M_j$ is the ``relativistic'' mass (frequency); it is related to the ``rest'' mass $M_0$ (assumed here not to depend on $j$) by (cf. \ref{s2.6}): $M_0=(1-|\overrightarrow{v_j}|^2)^{1/2}M_j$.
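A tiny numerical illustration (our addition, with hypothetical values) of the relation just stated: with $\overrightarrow{v_j}=\overrightarrow{\lambda_j}/M_j$, the formula $M_0=(1-|\overrightarrow{v_j}|^2)^{1/2}M_j$ is equivalent to the usual relativistic energy-momentum relation $M_j^2=M_0^2+|\overrightarrow{\lambda_j}|^2$ (in units where $c=1$).

```python
import numpy as np

# Hypothetical momentum vector lambda_j and rest-mass frequency M_0.
lam = np.array([0.3, -0.1, 0.2])
M0 = 1.5

# "Relativistic" mass frequency via the energy-momentum relation.
Mj = np.sqrt(M0**2 + lam @ lam)

# Velocity vector of the homogeneous oscillating metric, |v| < 1.
v = lam / Mj

# The stated formula recovers the rest mass.
M0_back = np.sqrt(1 - v @ v) * Mj
```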
It is easy to check that $\Box_{g_0}a+Sa=0$, since $M_0^2=S+\mu-Q^2$. The oscillating metric given by $g=|a|^{4/n-2}g_0$ is therefore indeed \begin{it}elementary\end{it} of order 1 in a neutral potential. Note (cf. \ref{s2.8}) that the associated canonical function satisfies: $a_c=\sum_{j=1}^p(C_j+iC'_j)e^{-i(M_jt-\sum_{k=1}^3\lambda_{jk}x^k)}$ ~~when $Q>0$, and equals its conjugate when $Q<0$. The state function is: $\varPsi=\sum_{j=1}^p(C_j+iC'_j)e^{-i((M_j-M_0)t-\sum_{k=1}^3\lambda_{jk}x^k)}$. In ``classical'' language, the study of the domain just described (when the state function $\varPsi$ is taken to be that of standard quantum physics) is phrased as follows:\\ Consider a particle of mass $m$ and electric charge $q$ in a domain $\Omega\subset\mathbb{R}^3$ whose state function $\varPsi$ is the one just specified. Then, according to the principles of classical quantum physics (delicate in this case, since $\varPsi$ is not normalizable), one can conclude that, upon a measurement of the momentum of the particle at a time $t_0$, the probability of obtaining $\overrightarrow{q_j}=(\lambda_{j1},\lambda_{j2},\lambda_{j3})$ is $({C_j}^2+{C'_j}^2)/\sum_{j=1}^p({C_j}^2+{C'_j}^2)$. For us, the process will be fundamentally different, and we shall resort to a precise definition of the notion of \begin{it}quantities\end{it}, then to that of an \begin{it}instrument measuring these quantities\end{it}, which, applied in particular to the preceding example, will again give results analogous to those of standard quantum physics. Although the definitions start from the same principles, we shall present in two different subsections the notions of \begin{it}quantities\end{it} attached to the apparent space $\Theta$ and those attached to the compact manifolds of the type cell $\mathscr{C}$.
The non-compactness of the apparent space will require a more elaborate presentation than the one concerning the compact manifolds. \subsection{Quantities attached to the apparent space $I\times\Omega\subset\mathbb{R}\times\mathbb{R}^3$ and their measurement} Since the real functions defined on $\mathbb{R}^p$ that we shall consider will not necessarily belong to $L^2(\mathbb{R}^p)$, we introduce the following definition, where $B_L$ denotes the cube in $\mathbb{R}^p$ whose vertices have coordinates $(\pm L/2,...,\pm L/2)$. \newpage \begin{dfn}\label{d2.25} A family $(f_i)_{i\in \mathcal{G}}$ of continuous real functions on $\mathbb{R}^p$ is \textbf{B-orthonormal} if:\begin{enumerate} \item $\forall i\in\mathcal{G}$ ~~ $\frac{2}{L^p}\int_{B_L}f_i^2\rightarrow 1$ as $L$ tends to infinity. \item $\forall i,j\in\mathcal{G}$ such that $i\neq j$ ~~ $\frac{2}{L^p}\int_{B_L}f_if_j\rightarrow 0$ as $L$ tends to infinity. \end{enumerate} \end{dfn} $\frac{2}{L^p}\int_{B_L}f_if_j$ will sometimes be denoted $<f_i,f_j>_{B_L}$. The choice of the factor ``2'' will simplify certain coefficients in the computations to follow, essentially when the functions $f_i$ are trigonometric. The notion of \begin{it}quantity\end{it} that we now introduce will be of interest only in connection with the notion of \begin{it}measuring instrument\end{it} defined afterwards. \begin{dfn}\label{d2.26} Consider the apparent space $I\times\Omega\subset\mathbb{R}\times\mathbb{R}^3$.\\ \textbf{A quantity defined on $\Omega\subset\mathbb{R}^3$ (resp. $I\subset\mathbb{R}$)} is a family $(h_i)_{i\in\mathcal{G}}$ of $C^\infty$ functions defined on $\mathbb{R}^3$ (resp. $\mathbb{R}$) with values in $\mathbb{R}^m$ such that the union of the $m$ component families $(h_{1,i})_{i\in\mathcal{G}},...,(h_{m,i})_{i\in\mathcal{G}}$ forms a B-orthonormal family (def. \ref{d2.25}).
\end{dfn} \begin{rmq} In the examples to follow, $m$ will equal 2, and in this case we could say that the functions $h_i$ take their values in $\mathbb{C}$ rather than in $\mathbb{R}^2$. The introduction of $\mathbb{C}$ does indeed simplify certain computations in the particular cases of domains for which the canonical function (cf. \ref{s2.8}) is defined. However, the definitions that follow apply in a more general setting. \end{rmq} The two fundamental examples are the following. \subsubsection{Example 1} The \textbf{momentum quantity} is defined by the family $(h_{\overrightarrow{q}})_{\overrightarrow{q}\in\mathcal{G}}$ where $\mathcal{G}=\mathbb{R}^{3^*}$ and, $\forall \overrightarrow{q}=(q_1,q_2,q_3)\in\mathcal{G}$, ~~ $h_{\overrightarrow{q}}:\mathbb{R}^3\rightarrow\mathbb{R}^2$ ~ satisfies:\\ $\forall (x_1,x_2,x_3)\in\mathbb{R}^3$ ~~ $h_{\overrightarrow{q}}(x_1,x_2,x_3)=(\cos(\sum_{k=1}^3q_kx^k),\sin(\sum_{k=1}^3q_kx^k))$. \subsubsection{Example 2} The \textbf{energy quantity} is defined by the family $(h_e)_{e\in\mathcal{G}}$ ~~ where $\mathcal{G}=\mathbb{R}^*$ \\and, $\forall e\in\mathcal{G}$, ~~ $h_e:\mathbb{R}\rightarrow\mathbb{R}^2$ satisfies: $\forall t\in\mathbb{R}$ ~~ $h_e(t)=(\cos(et),\sin(et))$. \begin{rmq} Usually, the name of the \begin{it}quantity\end{it} under consideration is rather given to the elements of the index set $\mathcal{G}$: one speaks of the \begin{it}momentum\end{it} $\overrightarrow{q}=(q_1,q_2,q_3)$ or of the \begin{it}energy\end{it} $e$. \end{rmq} Note that the set $\mathbb{R}^3$ corresponding to the quantity \begin{it}momentum\end{it} represents the tangent space to $\Omega$ at each of its points (which canonically identifies with $\mathbb{R}^3$ relative to the choice of the type cell).
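The trigonometric families of Examples 1 and 2 are B-orthonormal in the sense of def. \ref{d2.25}. A quick numerical sketch (our addition), written in dimension $p=1$: for $f_1=\cos(qx)$ and $f_2=\sin(qx)$, the normalized integral $\frac{2}{L}\int_{B_L}f_if_j$ approaches $1$ for $i=j$ (it equals $1+\sin(qL)/(qL)$ exactly) and $0$ for $i\neq j$.

```python
import numpy as np

def bracket_BL(f, g, L, n=200001):
    """Approximate (2/L) * integral of f*g over B_L = [-L/2, L/2]
    by a Riemann sum on a uniform grid of n points."""
    x = np.linspace(-L/2, L/2, n)
    dx = L / (n - 1)
    return (2.0 / L) * np.sum(f(x) * g(x)) * dx

q = 3.0        # hypothetical index ("momentum" in dimension 1)
L = 1000.0     # large measurement cube

cc = bracket_BL(lambda x: np.cos(q*x), lambda x: np.cos(q*x), L)  # ~ 1
cs = bracket_BL(lambda x: np.cos(q*x), lambda x: np.sin(q*x), L)  # ~ 0 (odd integrand)
```

The deviation of `cc` from $1$ decays like $1/(qL)$, which is the quantitative content of the limit in def. \ref{d2.25}.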
\bigskip To make precise, now, the link between the notion of the \begin{it}measurement of a quantity\end{it} associated with a domain of type ``metric conformal to a potential'' and physical reality, we present, in the form of \begin{it}definitions\end{it}, the notion of an \begin{it}instrument measuring a quantity\end{it}. For the moment these definitions concern only the quantities just presented, associated with the apparent space, but they will adapt without difficulty to the quantities associated with the compact manifolds, to be specified afterwards. We consider a domain of type ``metric conformal to a potential'' with type cell $(\mathscr{C},g)$, where $\mathscr{C}=I\times\Omega\times S^1(\delta)\times W$ ~ and ~ $g=|a|^{4/n-2}g_\mathcal{P}$, $g_\mathcal{P}$ being a potential metric (this domain may in particular be a domain ``of constant scalar curvature'' (def. \ref{d2.1}) or of type ``oscillating metric'' (def. \ref{d2.2})). Let $(h_i)_{i\in\mathcal{G}}$ (def. \ref{d2.26}) be a quantity associated with this domain. The three important characteristics of an instrument measuring this quantity, whose definition follows, are these. \begin{enumerate} \item Its \textbf{spectrum}, which by definition is a finite subfamily of the family $(h_i)_{i\in\mathcal{G}}$, that is, a family $(h_i)_{i\in Sp}$ where $Sp$ is a finite subset of $\mathcal{G}$. \item Its \textbf{measurement domain} $B_L$, which is a cube of $\mathbb{R}^3$ whose vertices have coordinates $(\pm L/2,\pm L/2,\pm L/2)$, and which is therefore of Euclidean volume $L^3$. For simplicity, and without loss of generality, we assume that $0\in\Omega$ and that, for $L$ sufficiently small, $B_L\subset\Omega$, which can always be achieved by a ``translation'' of the coordinates $(x^1,x^2,x^3)$ in the type cell. \item Its \textbf{measurement duration $T$}, which will correspond to the time, relative to the observer associated with the type cell, needed to carry out the measurement.
\end{enumerate} \begin{dfn}\label{d2.27} \textbf{An instrument measuring the quantity $(h_i)_{i\in\mathcal{G}}$ defined on $\Omega\subset\mathbb{R}^3$ (def. \ref{d2.26})}, with characteristics $Sp$, $B_L$ and $T$, is a physical system with the following properties: \begin{enumerate} \item From a time $t_0\in I$ on, it transforms the ``subdomain'' $(\mathscr{C}_{B_L}, g)$ of $(\mathscr{C},g)$ into a ``domain'' $(\mathscr{C}',g')$ for which: \begin{enumerate} \item $\mathscr{C}_{B_L}=I\times B_L\times S^1(\delta)\times W$ \item $\mathscr{C}'=]t_0 , t_0+T[\times(\bigcup_{i\in Sp}B_i)\times S^1(\delta)\times W$ where $]t_0 , t_0+T[\subset I$ and the $B_i$ are pairwise disjoint cubes of $\mathbb{R}^3$ (or at least cubes whose pairwise intersections have measure zero), each isometric to $B_L$. \item $\forall i\in Sp$ ~~ $g'_{B_i}=|a_i|^{4/n-2}g_0$ ~ where ~ $a_i:]t_0 , t_0+T[\times B_i\times S^1(\delta)\times W\rightarrow\mathbb{R}$ is defined by: $$a_i=\sum_{l=1}^m <a,h_{il}>_{B_L}(h_{il}\circ\sigma_i)$$ Here $\sigma_i$ denotes the isometry between $B_i$ and $B_L$ ($h_{il}\circ\sigma_i$ may be regarded as defined on $\mathscr{C}_i:=]t_0 , t_0+T[\times B_i\times S^1(\delta)\times W$). Recall that $<f_1,f_2>_{B_L}:=\frac{2}{L^3}\int_{B_L}f_1f_2$. \end{enumerate} \item For each $i\in Sp$, the measuring instrument estimates the average number of elementary singularities located in $B_i$ over the times $(t,u)\in ]t_0 , t_0+T[\times S^1(\delta)$. (When $n_i(t,u)$ denotes the number of elementary singularities located at time $(t,u)$ in $\mathscr{H}_i:=\{t\}\times B_i\times\{u\}\times W$, the average number is defined by $\bar{n}_i:=\frac{1}{2\pi\delta T}\int_{S^1(\delta)}\int_{t_0}^{t_0+T}n_i(t,u)\,dt\,du$.)
\end{enumerate} \end{dfn} One can briefly summarize the properties required of such a measuring instrument by saying that it ``separates in space'' the ``components'' of interest of the metric on which the measurements are carried out, then analyzes the domain thus created (``separated in space'') by counting the elementary singularities found there. An example of such a measuring instrument is a prism, which splits light consisting of $p$ distinct momenta of the same direction by spreading it over $p$ domains (a rainbow). In this case the oscillating metrics considered have zero mass (cf. section \ref{s2.14}). \bigskip When the quantity $(h_i)_{i\in\mathcal{G}}$ is defined on $I\subset\mathbb{R}$, the definition of an instrument measuring this quantity is analogous to the preceding definition \ref{d2.27}; only condition $1.(c)$ differs. \begin{dfn}\label{d2.28} \textbf{An instrument measuring the quantity $(h_i)_{i\in\mathcal{G}}$ defined on $I\subset\mathbb{R}$ (def. \ref{d2.26})}, with characteristics $Sp$, $B_L$ and $T$, is a physical system with the following properties: \begin{enumerate} \item From a time $t_0\in I$ on, it transforms the ``subdomain'' $(\mathscr{C}_{B_L}, g)$ of $(\mathscr{C},g)$ into a ``domain'' $(\mathscr{C}',g')$ for which: \begin{enumerate} \item As in definition \ref{d2.27} \item As in definition \ref{d2.27} \item $\forall i\in Sp$ ~~ $g'_{B_i}=|a_i|^{4/n-2}g_0$ ~ where ~ $a_i:\mathscr{C}_i:=]t_0 , t_0+T[\times B_i\times S^1(\delta)\times W\rightarrow\mathbb{R}$ is defined by: $$a_i=(\sum_{l=1}^m(\frac{2}{T}\int_{t_0}^{t_0+T}a_{\mathscr{C}_L}h_{il}dt)h_{il})\circ\sigma_i$$ Here $\mathscr{C}_L:=]t_0 , t_0+T[\times B_L\times S^1(\delta)\times W$, and recall that, in the setting of this definition, $h_i$ is a function defined on $\mathbb{R}$.
$\sigma_i$ is the isometry between $B_i$ and $B_L$, extended naturally by the ``identity'' between $\mathscr{C}_i$ and $\mathscr{C}_L$. \end{enumerate} \item As in definition \ref{d2.27} \end{enumerate} \end{dfn} It is important to note that the measuring instruments just presented measure quantities relative to the \textbf{metric} conformal to a potential, but have no link with the singularities present in space-time ``before'' the measurement. \subsubsection{Examples of momentum and energy measurements} We return to the example of the elementary oscillating metric of order 1 presented at the beginning of this section: the metric $g$ reads $g=|a|^{4/n-2}g_0$ and the function $a$ has the particular form $a=\beta\sum_{j=1}^p\varphi_j$ ~~ where, for every $j$ from 1 to $p$, ~~$\varphi_j:\Theta\times S^1(\delta)\rightarrow\mathbb{R}$ satisfies: $\varphi_j=C_j \cos(M_jt+Qu-\sum_{k=1}^3\lambda_{jk}x^k)+C'_j\sin(M_jt+Qu-\sum_{k=1}^3\lambda_{jk}x^k)$. Here $C_j$ and $C'_j$ are constants,~~ $\beta\in E_W(\mu)$, and ~$Q$ is the relative electric charge.
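Before stating the separation conditions precisely, here is a 1-D numerical sketch (our addition, with hypothetical values) of the projection step $<a,h_{il}>_{B_L}$ used by the instrument of definition \ref{d2.27}: projecting a superposition of two momentum components onto $h_q=(\cos(qx),\sin(qx))$ recovers, approximately, the coefficient of the component with $\lambda_j$ close to $q$, and gives $\simeq 0$ otherwise.

```python
import numpy as np

# Measurement domain B_L = [-L/2, L/2] sampled on a uniform grid.
L, n = 200.0, 400001
x = np.linspace(-L/2, L/2, n)
dx = L / (n - 1)

# Hypothetical superposition of two momentum components (t and u suppressed).
C1, C2, lam1, lam2 = 0.8, 0.5, 2.0, 2.5
phi = C1 * np.cos(lam1 * x) + C2 * np.cos(lam2 * x)

def bracket(q):
    """<phi, cos(q .)>_{B_L} = (2/L) * integral of phi(x) cos(qx) over B_L."""
    return (2.0 / L) * np.sum(phi * np.cos(q * x)) * dx

w1 = bracket(lam1)   # ~ C1 : component selected
w2 = bracket(lam2)   # ~ C2 : component selected
w0 = bracket(10.0)   # ~ 0  : q far from both lambda_j
```

The accuracy of the selection is governed by $L|\lambda_j-q|$, which is exactly the content of the conditions stated next.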
\begin{enumerate} \item Consider a \textbf{momentum measurement} performed on this oscillating metric with the instrument described in Definition \ref{d2.27}.\\As we shall see, the essential conditions for the instrument to ``correctly separate in space'' each of the $\beta\varphi_j$ are the following: \begin{eqnarray}\label{Fn62} \forall j\neq j'\in\{1,\dots,p\}~~~~ L|\overrightarrow{\lambda_j}-\overrightarrow{\lambda_{j'}}|\gg 1 ~~~~ \text{and} ~~~~ L|\overrightarrow{\lambda_j}|\gg 1 \end{eqnarray} where $|\overrightarrow{\lambda_j}|:=\sup(|\lambda_{j1}|,|\lambda_{j2}|,|\lambda_{j3}|)$ and $L$ is the side length of the cube defining the measurement domain.\\ The fact that $\forall j \in\{1,\dots,p\} ~~~~ L|\overrightarrow{\lambda_j}|\gg 1$ yields the following result: \\ $\forall j\in\{1,\dots,p\},~~ \forall \overrightarrow{q}\in Sp$ \begin{eqnarray}\label{Fn63} <\varphi_j,h_{1\overrightarrow{q}}>_{B_L}h_{1\overrightarrow{q}}+<\varphi_j,h_{2\overrightarrow{q}}>_{B_L}h_{2\overrightarrow{q}}\simeq\varphi_j ~~~~ \text{if} ~~ L|\overrightarrow{\lambda_j}-\overrightarrow{q}|\ll 1 \end{eqnarray} \begin{eqnarray}\label{Fn64} <\varphi_j,h_{1\overrightarrow{q}}>_{B_L}h_{1\overrightarrow{q}}+<\varphi_j,h_{2\overrightarrow{q}}>_{B_L}h_{2\overrightarrow{q}}\simeq 0 ~~~~ \text{if} ~~ L|\overrightarrow{\lambda_j}-\overrightarrow{q}|\gg 1 \end{eqnarray} where $|\overrightarrow{\lambda_j}-\overrightarrow{q}|:=\sup(|\lambda_{j1}-q_1|,|\lambda_{j2}-q_2|,|\lambda_{j3}-q_3|)$.
These relations are obtained from equalities of the following form, written here in ``dimension 1'' for simplicity (whereas they actually hold in ``dimension 3''): $\forall \lambda\in\mathbb{R}$, $\forall q\in\mathbb{R}$,
$\frac{2}{L}\int_{-L/2}^{L/2}\cos(\lambda x)\cos(qx)\,dx=\frac{2}{L(\lambda+q)}\sin\frac{L}{2}(\lambda+q)+\frac{2}{L(\lambda-q)}\sin\frac{L}{2}(\lambda-q)$ ~~ if $\lambda\neq q$
\hspace{3.7cm} $=1+\frac{1}{L\lambda}\sin(L\lambda)$ ~~ if $\lambda=q$
$\frac{2}{L}\int_{-L/2}^{L/2}\cos(\lambda x)\sin(qx)\,dx=0$, ~~ and one has:
$\frac{2}{L(\lambda-q)}\sin\frac{L}{2}(\lambda-q)\simeq 1$ ~~ if $L|\lambda-q|\ll 1$
$\frac{2}{L(\lambda-q)}\sin\frac{L}{2}(\lambda-q)\simeq 0$ ~~ if $L|\lambda-q|\gg 1$
\bigskip From \ref{Fn62}, \ref{Fn63} and \ref{Fn64} we deduce that: $\forall j\in\{1,\dots,p\}$, $\forall \overrightarrow{q}\in Sp$,
$<a,h_{1\overrightarrow{q}}>_{B_L}{h_{1\overrightarrow{q}}}_{B_L}+<a,h_{2\overrightarrow{q}}>_{B_L}{h_{2\overrightarrow{q}}}_{B_L}\simeq{\beta\varphi_j}_{\mathscr{C}_L}$ ~~ if $L|\overrightarrow{\lambda_j}-\overrightarrow{q}|\ll 1$
\hspace{7.3cm} $\simeq 0$ \hspace{0.9cm} if $L|\overrightarrow{\lambda_j}-\overrightarrow{q}|\gg 1$
Then (Def.\ref{d2.27}) the domain $(\mathscr{C}',g')$ created by the measuring instrument is such that $g'=|a'|^{4/n-2}g_0$, where the function $a'$ satisfies the following properties: for each $j\in\{1,\dots,p\}$,
$a'_{\mathscr{C}'_{\overrightarrow{q}}}\simeq\beta\varphi_j\circ\sigma_{\overrightarrow{q}}$ ~~ if $\overrightarrow{q}\in Sp$ satisfies $L|\overrightarrow{\lambda_j}-\overrightarrow{q}|\ll 1$
\hspace{0.85cm} $\simeq 0$ \hspace{1.6cm} if $\overrightarrow{q}\in Sp$ satisfies $L|\overrightarrow{\lambda_j}-\overrightarrow{q}|\gg 1$
where $\mathscr{C}'_{\overrightarrow{q}}:=]t_0 , t_0+T[\times B_{\overrightarrow{q}}\times S^1(\delta)\times W$. This means that the function $a'$ is $\simeq 0$ on the domains $\mathscr{C}'_{\overrightarrow{q}}$ when $\overrightarrow{q}$ is ``far'' from all the $\overrightarrow{\lambda_j}$ (i.e.
when $L|\overrightarrow{\lambda_j}-\overrightarrow{q}|\gg 1$), and that $a'$ is ``very close'' to a function representing a \textbf{homogeneous} elementary oscillating metric on $\mathscr{C}'_{\overrightarrow{q}}$ when $\overrightarrow{q}$ is ``very close'' to one of the $\overrightarrow{\lambda_j}$ (i.e. when $L|\overrightarrow{\lambda_j}-\overrightarrow{q}|\ll 1$). We now assume that the spectrum is ``rich'' enough that one can choose, for each $j\in\{1,\dots,p\}$, a $\overrightarrow{q_j}\in Sp$ such that $L|\overrightarrow{\lambda_j}-\overrightarrow{q_j}|\ll 1$ (of course, this choice means that we describe only a partial result relative to the data of the measuring instrument). We can then write: $\forall j\in\{1,\dots,p\}$ $a'_{\mathscr{C}'_{\overrightarrow{q_j}}}\simeq\beta\varphi_j\circ\sigma_{\overrightarrow{q_j}}$ where $\varphi_j(t,x,u)=C_j\cos(M_jt+Qu-\sum_{k=1}^3\lambda_{jk}x^k)+C'_j\sin(M_jt+Qu-\sum_{k=1}^3\lambda_{jk}x^k)$ and $(\mathscr{C}'_{\overrightarrow{q_j}},|a'|^{4/n-2}g_0)$ is isometric to $(\mathscr{C}_L,|\beta\varphi_j|^{4/n-2}g_0)$. By the results of Section \ref{s2.12}, we deduce that if, at an instant $t\in ]t_0, t_0+T[$, an elementary singularity ``$\varsigma$'' lies in $\bigcup_{j=1}^p\mathscr{H}_j(t)$, where $\mathscr{H}_j(t):=\{t\}\times B_{\overrightarrow{q_j}}\times S^1(\delta)\times W$, then the probability that it lies in a given $\mathscr{H}_j(t)$ is $({C_j}^2+{C'_j}^2)/\sum_{j=1}^p({C_j}^2+{C'_j}^2)$, since the $B_{\overrightarrow{q_j}}$ are pairwise disjoint (or at least have pairwise intersections of measure zero). This probability does not depend on $t$. This result is comparable to the one given by standard quantum physics (made precise in \ref{ssn2.1}). However, the physical interpretation we present is, as we have just seen, profoundly different.
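The selection mechanism above rests on the one-dimensional integral identities just stated: the averaged product $\frac{2}{L}\int_{-L/2}^{L/2}\cos(\lambda x)\cos(qx)\,dx$ is close to 1 when $L|\lambda-q|\ll 1$ and close to 0 when $L|\lambda-q|\gg 1$. A minimal numerical sketch (the values of $L$, $\lambda$, $q$ are illustrative choices, not taken from the text):

```python
import numpy as np

def avg_cos_cos(lam, q, L, n=200000):
    """(2/L) * integral of cos(lam*x)*cos(q*x) over [-L/2, L/2], midpoint rule."""
    dx = L / n
    x = -L / 2 + (np.arange(n) + 0.5) * dx
    return 2.0 * np.mean(np.cos(lam * x) * np.cos(q * x))

L, lam = 100.0, 5.0
near = avg_cos_cos(lam, 5.001, L)  # L|lam - q| = 0.1 << 1 : projection ~ 1
far = avg_cos_cos(lam, 6.0, L)     # L|lam - q| = 100 >> 1 : projection ~ 0
print(near, far)
```

The `near` value stays close to 1 and the `far` value close to 0, which is exactly the ``separation'' property used to isolate each $\beta\varphi_j$.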
In particular, it is important to note that the elementary singularities observed in the $\mathscr{H}_j(t)$, whose probability of presence is known, have nothing to do with the singularities located in the domain $I\times\Omega\times S^1(\delta)\times W$ for $t<t_0$, that is, before the measurement. \subsubsection{Comparison between conditions \ref{Fn62} and the ``position--momentum'' uncertainty relation} As already stated, the momentum $\overrightarrow{\lambda_j}$ is given by $\overrightarrow{\lambda_j}=M_j\overrightarrow{v_j}$, where $M_j$ is the mass frequency and $\overrightarrow{v_j}=(v_1,v_2,v_3)$ is the velocity at which the homogeneous elementary oscillating metric moves. In SI units, $M_j=m_jc/\hbar$, where $m_j$ is the mass (in kg) (Def.\ref{d2.10}), and $\overrightarrow{\text{v}_j}=(\text{v}_1,\text{v}_2,\text{v}_3)$ (in $\mathrm{m\,s^{-1}}$) is given by $\overrightarrow{\text{v}_j}=c\overrightarrow{v_j}$. Writing $\overrightarrow{p_j}=m_j\overrightarrow{\text{v}_j}=\frac{m_0}{(1-|\overrightarrow{\text{v}_j}|^2/c^2)^{1/2}}\overrightarrow{\text{v}_j}$ for the ``standard'' momentum, conditions \ref{Fn62} therefore also read: \begin{eqnarray}\label{Fn65} \forall j\neq j'\in\{1,\dots,p\}~~~~ L|\overrightarrow{p_j}-\overrightarrow{p_{j'}}|\gg\hbar ~~~~ \text{and} ~~~~ L|\overrightarrow{p_j}|\gg\hbar \end{eqnarray} The inequality $L|\overrightarrow{p_j}-\overrightarrow{p_{j'}}|\gg\hbar$ is reminiscent of the classical uncertainty relation of quantum physics linking measurements of \begin{it}position\end{it} and \begin{it}momentum\end{it} of a particle. The interpretation of inequalities \ref{Fn65} is nevertheless rather different.
Recall that the notion of \begin{it}position measurement\end{it} is, for us, defined for \textbf{elementary singularities}, whereas the notion of \begin{it}momentum measurement\end{it} just presented concerns \textbf{metrics conformal to a potential}; the uncertainty relation of standard quantum physics therefore loses, for us, all its meaning. Inequalities \ref{Fn65} simply say that if one measures momenta with the measuring instrument (Def.\ref{d2.27}) and wants it to ``correctly separate in space'' the initial metric into homogeneous elementary oscillating metrics (for which the notion of momentum is naturally defined), then the measurement domain $B_L$ must be sufficiently ``large'' relative to the momenta considered (the inequality $L|\overrightarrow{p_j}|\gg\hbar$ makes precise that, without the criteria just stated, one cannot measure a momentum that is ``too small''). \item Now consider an \textbf{energy measurement} performed on the same oscillating metric with the measuring instrument described in Definition \ref{d2.28}.
The essential condition for the measuring instrument to ``correctly separate in space'' each of the $\beta\varphi_j$ is the following: \begin{eqnarray}\label{Fn66} \forall j\neq j'\in \{1,\dots,p\} ~~~~ T|M_j-M_{j'}|\gg 1 ~~~~\text{and} ~~~~ TM_j\gg 1 \end{eqnarray} Calculations analogous to (and quicker than) those of the previous part show that: $\forall j\in\{1,\dots,p\}$, $\forall e\in Sp$, $(\frac{2}{T}\int_{t_0}^{t_0+T}a_{\mathscr{C}_L}h_{1e}\,dt)(h_{1e})_{]t_0, t_0+T[}+(\frac{2}{T}\int_{t_0}^{t_0+T}a_{\mathscr{C}_L}h_{2e}\,dt)(h_{2e})_{]t_0, t_0+T[}\simeq{\beta\varphi_j}_{\mathscr{C}_L}$ if $T|M_j-e|\ll 1$ (recall that $h_e(t)=(h_{1e}(t),h_{2e}(t))=(\cos (et),\sin (et))$). Then (Def.\ref{d2.28}) the domain $(\mathscr{C}',g')$ created by the measuring instrument is such that $g'=|a'|^{4/n-2}g_0$, where the function $a'$ satisfies the following properties: for each $j\in\{1,\dots,p\}$,
$a'_{\mathscr{C}'_e}\simeq\beta\varphi_j\circ\sigma_e$ ~~ if $e\in Sp$ satisfies $T|M_j-e|\ll 1$
\hspace{0.6cm} $\simeq 0$ \hspace{1.3cm} if $e\in Sp$ satisfies $T|M_j-e|\gg 1$
where $\sigma_e$ denotes the isometry between $B_e$ and $B_L$ and $\mathscr{C}'_e:=]t_0,t_0+T[\times B_e\times S^1(\delta)\times W$. This means that the function $a'$ is $\simeq 0$ on the domains $\mathscr{C}'_e$ when $e$ is ``far'' from all the $M_j$ (i.e. when $T|M_j-e|\gg 1$), and that $a'$ is ``very close'' to a function representing a homogeneous elementary oscillating metric on $\mathscr{C}'_e$ when $e$ is ``very close'' to one of the $M_j$ (i.e. when $T|M_j-e|\ll 1$). We now assume that the spectrum is ``rich'' enough that one can choose, for each $j\in\{1,\dots,p\}$, an $e_j\in Sp$ such that $T|M_j-e_j|\ll 1$ (of course, this choice means that we describe only a partial result relative to the data of the measuring instrument).
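The time-averaging step above can be illustrated numerically: projecting a two-frequency signal onto $\cos(et)$ with $e$ close to one of the mass frequencies recovers the corresponding amplitude, provided $T|M_j-M_{j'}|\gg 1$. A minimal sketch (all numerical values are illustrative choices, not taken from the text):

```python
import numpy as np

# Two-component amplitude a(t) = C1 cos(M1 t) + C2 cos(M2 t),
# with T|M1 - M2| = 100 >> 1 and T*M1 = 600 >> 1 (illustrative values).
T, M1, M2, C1, C2 = 200.0, 3.0, 3.5, 0.7, 1.3
n = 400000
dt = T / n
t = (np.arange(n) + 0.5) * dt  # midpoint grid on [0, T]
a = C1 * np.cos(M1 * t) + C2 * np.cos(M2 * t)

def project(e):
    """(2/T) * integral_0^T a(t) cos(e t) dt -- the time averaging of the definition."""
    return 2.0 * np.mean(a * np.cos(e * t))

print(project(M1), project(M2), project(5.0))  # ~C1, ~C2, ~0
```

Projecting at $e=M_1$ and $e=M_2$ recovers $C_1$ and $C_2$ respectively, while an $e$ ``far'' from both frequencies yields a value close to 0.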
We can then write: $\forall j\in\{1,\dots,p\}$ $a'_{\mathscr{C}'_{e_j}}\simeq\beta\varphi_j\circ\sigma_{e_j}$ where $\varphi_j(t,x,u)=C_j\cos(M_jt+Qu-\sum_{k=1}^3\lambda_{jk}x^k)+C'_j\sin(M_jt+Qu-\sum_{k=1}^3\lambda_{jk}x^k)$ and $(\mathscr{C}'_{e_j},|a'|^{4/n-2}g_0)$ is isometric to $(\mathscr{C}_L,|\beta\varphi_j|^{4/n-2}g_0)$. By the results of Section \ref{s2.12} we deduce that if, at an instant $t\in ]t_0, t_0+T[$, an elementary singularity ``$\varsigma$'' lies in $\bigcup_{j=1}^p\mathscr{H}_j(t)$, where $\mathscr{H}_j(t):=\{t\}\times B_{e_j}\times S^1(\delta)\times W$, then the probability that it lies in a given $\mathscr{H}_j(t)$ is $({C_j}^2+{C'_j}^2)/\sum_{j=1}^p({C_j}^2+{C'_j}^2)$, since the $B_{e_j}$ are pairwise disjoint (or at least have pairwise intersections of measure zero). This probability does not depend on $t$. Here again, this result is comparable to the one given by standard quantum physics. \subsubsection{Comparison between conditions \ref{Fn66} and the ``time--energy'' uncertainty relation} Written in SI units, conditions \ref{Fn66} become: \begin{eqnarray}\label{Fn67} \forall j\neq j'\in\{1,\dots,p\}~~~~\mathcal{T}|m_jc^2-m_{j'}c^2|\gg\hbar ~~~~ \text{and} ~~~~\mathcal{T}m_jc^2\gg\hbar \end{eqnarray} where $\mathcal{T}$ (in s) is defined by $c\mathcal{T}=T$, and since $M_j=m_jc/\hbar$. The inequality $\mathcal{T}|m_jc^2-m_{j'}c^2|\gg\hbar$ is reminiscent of the \begin{it}time--energy\end{it} uncertainty relation of classical quantum physics although, just as for the \begin{it}position--momentum\end{it} one, our interpretation of it is rather different. (Note that conditions \ref{Fn66} do not involve the ``size'' of $B_L$, which here matters little.)
\end{enumerate} \subsection{Quantities attached to the compact manifolds of the standard cell, and their measurement} The standard cell considered is of the form $\mathscr{C}=I\times\Omega\times V_1\times V_2$, where $V_1$ and $V_2$ are two compact manifolds. We are interested in the ``quantities'' defined on $V_1$. The definitions of \begin{it}quantities\end{it} and of \begin{it}instruments measuring these quantities\end{it} that we are about to give rest on the same principles as those used for quantities defined on apparent space, but now the compactness of $V_1$ (which allows the use of the spectral theorem (Section \ref{s2.4})) greatly simplifies matters. Consider an eigenspace $E_{V_1}(\mu)$ of the Laplacian $\Delta_{g_{V_1}}$. On it the following scalar product is naturally defined: \begin{eqnarray}\label{Fn68} <\beta_1,\beta_2>:=(vol\, V_1)^{-1}\int_{V_1}\beta_1\beta_2 \end{eqnarray} \begin{dfn}\label{d2.29} \textbf{A quantity defined on $E_{V_1}(\mu)$} is a family $(h_i)_{i\in\mathcal{G}}$ of functions defined on $V_1$ with values in $\mathbb{R}^m$ ($m\geqslant 1$) such that the union of the $m$ component families $(h_{1,i})_{i\in\mathcal{G}},\dots,(h_{m,i})_{i\in\mathcal{G}}$ forms an orthonormal family (for the scalar product \ref{Fn68}) of eigenfunctions belonging to $E_{V_1}(\mu)$. By the spectral theorem this family is finite. \end{dfn} Here again, the name of the quantity (the spin quantity, for instance) will often be attached to the element $i$ of $\mathcal{G}$.
\begin{dfn}\label{d2.30} \textbf{A measuring instrument for the quantity $(h_i)_{i\in\mathcal{G}}$ defined on $V_1$, whose three characteristics are $Sp$, $B_L$ and $T$}, is a physical system with the following properties: \begin{enumerate} \item From an instant $t_0\in I$ on, it transforms the ``sub-domain'' $(\mathscr{C}_{B_L}, g)$ of $(\mathscr{C},g)$ into a ``domain'' $(\mathscr{C}',g')$ for which: \begin{enumerate} \item $\mathscr{C}_{B_L}=I\times B_L\times V_1\times V_2$ \item $\mathscr{C}'=]t_0 , t_0+T[\times(\bigcup_{i\in Sp}B_i)\times V_1\times V_2$ where $]t_0 , t_0+T[\subset I$ and the $B_i$ are pairwise disjoint cubes of $\mathbb{R}^3$ (or at least with pairwise intersections of measure zero), each isometric to $B_L$. \item $\forall i\in Sp$ ~~ $g'_{B_i}=|a_i|^{4/n-2}g_0$ ~ where ~ $a_i:\mathscr{C}_i:=]t_0 , t_0+T[\times B_i\times V_1\times V_2\rightarrow\mathbb{R}$ is defined by: $$a_i=\sum_{l=1}^m (<a_{\mathscr{C}_{B_L}},h_{il}>h_{il})\circ\sigma_i$$ Here $\sigma_i$ denotes the isometry between $B_i$ and $B_L$, extended naturally by ``the identity'' to an isometry between $\mathscr{C}_i$ and $\mathscr{C}_{B_L}$. \end{enumerate} \item For each $i\in Sp$, the instrument estimates the mean number of elementary singularities located in $B_i$ during the time $(t,u)\in ]t_0 , t_0+T[\times S^1(\delta)$. (When $n_i(t,u)$ denotes the number of elementary singularities located at time $(t,u)$ in $\mathscr{H}_i:=\{t\}\times B_i\times\{u\}\times W$, the mean number is defined by $\bar{n}_i:=\frac{1}{2\pi\delta T}\int_{S^1(\delta)}\int_{t_0}^{t_0+T}n_i(t,u)\,du\,dt$.) \end{enumerate} \end{dfn} Of course, the measuring instruments described by Definitions \ref{d2.27}, \ref{d2.28} and \ref{d2.30} are ``idealized''. In practice, measuring the \begin{it}quantities\end{it} mentioned in this section, which are involved in quantum phenomena, is a delicate matter.
One may regard a bubble chamber (or a wire or drift chamber) as an ``association'' of measuring instruments as we have defined them (though very imperfect ones). The (fictitious) measuring instruments described in this section are based, as we have seen, on a ``separation in space'' of the various ``components'' of the metric conformal to a potential under consideration. Other types of (real) measuring instruments are used, however; some rely on triggering \begin{it}resonance phenomena\end{it} (magnetic resonance, for instance) and often give very precise results. They cannot be assimilated to those presented here.\\ The next section is devoted to an important example of the measurement of a quantity attached to the compact manifolds: the measurement of spin. The study is detailed because it is with this notion that the phenomenon of ``quantum entanglement'' will be presented in Section \ref{+2.2}. \section{Spin measurement}\label{+2.1} \subsection{Beam of oscillating metrics and ``spin state''} Many experiments are based on sending ``beams of particles''. This notion, expressed in the language of classical physics, translates for us into that of ``beams of oscillating metrics''. The oscillating metrics considered are the homogeneous elementary oscillating metrics moving with velocity $\overrightarrow{v}$ in a neutral potential, defined in Section \ref{s2.6}. They are the ones for which the notion of velocity (and of momentum) is well defined. We restrict ourselves here to oscillating metrics of order 2 with spin 1/2 (cf.\ref{ss+1}), but the generalization presents no difficulty.
Recall (Def.\ref{d2.12} and its generalization \ref{ss+2}) that, for a homogeneous elementary metric of order 2 and spin 1/2 defined on the cell $\mathscr{C}=\Theta\times S^1(\delta)\times S^3(\rho)\times V$, the corresponding metric $g=|a|^{4/n-2}g_0$ is such that: \begin{eqnarray}\label{F+1} a=\beta\sum_{l=1}^4\big(C_l\cos(M't-\sum_{k=1}^3\lambda_kx^k+Qu)+C_l'\sin(M't-\sum_{k=1}^3\lambda_kx^k+Qu)\big)\alpha_l \end{eqnarray} where $(\alpha_1,\alpha_2,\alpha_3,\alpha_4)$ is an orthonormal basis of the eigenspace $E_{S^3(\rho)}(\gamma)$ (which may also be regarded as a basis of $E^{\mathbb{C}}_{S^3(\rho)}(\gamma)$), $\beta\in E_V(\nu)$, $\overrightarrow{v}=\frac{1}{M'}(\lambda_1,\lambda_2,\lambda_3)$ and $M'=(1-|\overrightarrow{v}|^2)^{-1/2}M$. The function $a$ satisfies the fundamental equation \ref{F1}: $\Box_{g_0}a+Sa=0$. The associated canonical function $a_c:\Theta\rightarrow E^{\mathbb{C}}_{S^3(\rho)}(\gamma)$ and the state function $\varPsi:\Theta\rightarrow E^{\mathbb{C}}_{S^3(\rho)}(\gamma)$ then satisfy (cf.\ref{ssn2.1} extended to order 2): \begin{eqnarray}\label{F+2} a_c=\sum_{l=1}^4(C_l+iC'_l)e^{-i(M't-\sum_{k=1}^3\lambda_{k}x^k)}\alpha_l \end{eqnarray} when $Q>0$, and equals its conjugate when $Q<0$; \begin{eqnarray}\label{F+3} \varPsi=\sum_{l=1}^4(C_l+iC'_l)e^{-i((M'-M)t-\sum_{k=1}^3\lambda_{k}x^k)}\alpha_l \end{eqnarray} whatever the sign of $Q$. The term ``beam'' refers to the fact that the oscillating metrics considered are defined on a ``tube'' of the cell $\mathscr{C}=\Theta\times S^1(\delta)\times S^3(\rho)\times V$, where $\Theta=I\times \mathscr U\subset \mathbb{R}\times\mathbb{R}^3$. The canonical coordinates of $\mathscr U$ will be denoted more conventionally by $(x,y,z)$.
We consider, for instance, the ``tube'' $T(r_1,y_1,y_2):=D(r_1)\times]y_1,y_2[\times S^1(\delta)\times S^3(\rho)\times V\subset \mathscr U\times S^1(\delta)\times S^3(\rho)\times V$, where $D(r_1)$ is the disk of Euclidean radius $r_1$ centered at $O$ in the plane $(\overrightarrow x, \overrightarrow z)$ and $]y_1,y_2[$ is an interval of the axis $\overrightarrow y$. We have chosen $\overrightarrow y$ as the axis of the tube here, the disk $D(r_1)$ being perpendicular to it. Of course, this definition extends, if needed, to a tube with arbitrary axis. We then give the following definition. \begin{dfn}\label{d+1} \textbf{A beam-type domain of oscillating metric in a neutral potential, of spin 1/2, radius $r_1$ and axis $\overrightarrow y$} is a domain $(\mathscr{C},g)$ where $g=|a|^{4/n-2}g_0$ and the function $a:\mathscr{C}\rightarrow \mathbb{R}$ (assumed regular) satisfies:\begin{enumerate} \item $a/_{I\times T(r_1)}$ is of the form given in (\ref{F+1}), with $\sum_{k=1}^3\lambda_kx^k$ replaced by $\lambda y$ to express that the velocity $\overrightarrow v$ has direction $\overrightarrow y$. \item $a/_{\mathscr{C}-(I\times T(r_2))}=C^{te}$ where $0\leq C^{te}\leq 1$ and $r_2=r_1+\varepsilon$ ($\varepsilon$ is in general chosen $<r_1$ and only serves to allow the regularity of the function $a$ on $\mathscr{C}$). \end{enumerate} \end{dfn} (Condition 2 is of little importance for what follows; it means that, in the domain considered, the metric will be approximately that of a neutral potential.)\\ It will simplify the sequel to introduce the notion of ``spin state'' of a homogeneous oscillating metric in a neutral potential; we therefore give the following definition.
\begin{dfn}\label{d+2} \textbf{The spin state} of a homogeneous oscillating metric of order 2 with spin 1/2 is the element $\zeta\in E^{\mathbb{C}}_{S^3(\rho)}(\gamma)$ obtained from the state function $\varPsi$ given in (\ref{F+3}) by setting: \begin{eqnarray}\label{F+4} \zeta=\frac{\sum_{l=1}^4(C_l+iC'_l)\alpha_l}{(\sum_{l=1}^4|C_l+iC'_l|^2)^{1/2}} \end{eqnarray} \end{dfn} Of course, this definition extends without difficulty to ``spins'' other than 1/2. \subsection{The idealized Stern-Gerlach apparatus. Spin measurement} The principle of the Stern-Gerlach experiment (Figure \ref{f+1}), described in classical language, is the following: particles (or even atoms) are sent with a well-defined velocity into a region containing an inhomogeneous magnetic field directed along a direction orthogonal to the initial velocity of the particles. One measures the possible deflection of the particles by this magnetic-field gradient by looking at the impacts on a screen orthogonal to the direction of the particle jet.
\begin{figure} \begin{center} \label{fig:1} \begin{tikzpicture}[yscale=0.8, xscale=0.9] \draw (0, 0) ellipse (0.5 and 1); \draw (7, 0) ellipse (0.5 and 1); \draw (8, 0) ellipse (0.5 and 1); \draw (0, 1) -- (8, 1); \draw[->] (-2, 0) -- (12, 0); \draw (0, -1) -- (8, -1); \draw[->] (-1, -3) -- (-1, 3); \draw[->] (0, 1.5) -- (-2.4, -2); \draw ( 9, -3) -- ( 9, 2); \draw ( 9, 2) -- (11, 3); \draw (11, 3) -- (11, -2); \draw (11, -2) -- ( 9, -3); \draw (-1, -0.01) node {\tiny$\bullet$}; \draw (0, -0.01) node {\tiny$\bullet$}; \draw (7, -0.02) node {\tiny$\bullet$}; \draw (8, -0.02) node {\tiny$\bullet$}; \draw (10, -0.02) node {\tiny$\bullet$}; \draw[thick, ->] (0, 0) node[below right]{$~~~~P$} -- (1.5, 0); \draw ( 9, -2.7) node[above right]{$E$}; \draw (-1, -0.01) node[below right]{$0$}; \draw (0, -0.01) node[below ]{$y_1$}; \draw (7, -0.01) node[below ]{$y_2$}; \draw (8, -0.01) node[below ]{$y_3$}; \draw (10, -0.01) node[below ]{$y_4$}; \draw (11.5, 0) node[below ]{$\overrightarrow{y}$}; \draw (-2.8,-1.5) node[right ]{$\overrightarrow x$}; \draw (-1,2.5) node[right ]{$\overrightarrow z$}; \draw[->] (7.15, -0.7) -- (7.35, 0.7); \draw[->] (7.35, -0.7) -- (7.55, 0.7); \draw[->] (7.55, -0.7) -- (7.75, 0.7); \draw (7.25, 0.7) node{$\overrightarrow{B}$}; \end{tikzpicture} \end{center} \caption{The Stern-Gerlach apparatus}\label{f+1} \end{figure} The magnetic field $\overrightarrow B$ vanishes on the domain where $y\in]y_1,y_2[\cup]y_3,y_4[$; it points along a direction orthogonal to $\overrightarrow y$ and has a nonzero gradient on the domain where $y\in]y_2,y_3[$.\\ For us, in terms of oscillating metrics, the Stern-Gerlach experiment presents itself as follows:\\ We consider the cell $\mathscr{C}=]t_1,t_2[\times\omega\times S^1\times S^3\times V$, where $\omega$ is the union of a tube $T_1(r_1,y_1,y_2)$ with axis $\overrightarrow y$ and radius $r_1$ (on which $\overrightarrow B=0$), of a tube
$T_2(r_2,y_2,y_3)$ (on which $\overrightarrow B\neq0$) with $r_2>r_1$, and of a tube $T_3(r_2,y_3,y_4)$ (on which $\overrightarrow B=0$).\\ The metric $g=|a|^{4/n-2}g_\mathcal P$ defined on the cell $\mathscr{C}$ is that of an elementary oscillating metric of order 2, which we assume to be of spin 1/2. Its state function is denoted $\varphi$.\\ The cell $\mathscr{C}_{y_1,y_2}=]t_1,t_2[\times T_1(r_1,y_1,y_2)\times S^1\times S^3\times V$, equipped with the metric $|a|^{4/n-2}g_0$, defines a domain of ``beam of oscillating metric'' type (Def.\ref{d+1}). On this cell, $\varphi$ is therefore of the form: \begin{eqnarray}\label{F+5} \varphi_{|_{\mathscr{C}_{y_1,y_2}}}=\sum_{l=1}^4(C_l+iC'_l)e^{-i((M'-M)t-\lambda y)}\alpha_l \end{eqnarray} The associated spin state (Def.\ref{d+2}) is $\zeta=\sum_{l=1}^4(C_l+iC'_l)\alpha_l$. Equality (\ref{F+5}) thus constitutes a form of ``initial condition'' for the function $\varphi$. By Corollary \ref{c2.1} of Section \ref{ss+3}, the state function $\varphi$ is a solution of equation \ref{F50}. We choose the orthonormal basis $(\beta_1,\beta_2,\beta_3,\beta_4)$ of $E^{\mathbb{C}}_{S^3(\rho)}(\gamma)$ for which the four components of $\varphi$ satisfy equations (\ref{F55}) (\ref{F56}) (\ref{F57}) (\ref{F58}). Since we are only interested in the ``spin effect'', we shall neglect, in the differential operator $(\alpha)$, the terms carrying the coefficient $Q$; in the language of classical physics, this amounts to saying that we only consider the deflections due to spin, and not the possible deflections due to the electric charge in the electromagnetic potential.
Given that the component $(B^2)$ of the electromagnetic field vanishes here, the four equations (\ref{F55}) (\ref{F56}) (\ref{F57}) (\ref{F58}) read: $$ -2iM\frac{\partial\varphi^1}{\partial t}=\Box_\Theta\varphi^1+2\varrho Q(B^1\varphi^2-B^3\varphi^1)$$ $$ -2iM\frac{\partial\varphi^2}{\partial t}=\Box_\Theta\varphi^2+2\varrho Q(B^1\varphi^1+B^3\varphi^2)$$ $$ -2iM\frac{\partial\varphi^3}{\partial t}=\Box_\Theta\varphi^3+2\varrho Q(B^1\varphi^4-B^3\varphi^3)$$ $$ -2iM\frac{\partial\varphi^4}{\partial t}=\Box_\Theta\varphi^4+2\varrho Q(B^1\varphi^3+B^3\varphi^4)$$ The first two equations are identical to the last two when $\varphi^1$ becomes $\varphi^3$ and $\varphi^2$ becomes $\varphi^4$. The state function $\varphi$ can be regarded as the sum of two state functions $\varphi'$ and $\varphi''$ corresponding to the superposition of two oscillating metrics, the first state function having components $(\varphi^1,\varphi^2,0,0)$ in the basis $(\beta_1,\beta_2,\beta_3,\beta_4)$ and the second $(0,0,\varphi^3,\varphi^4)$. In classical physics this corresponds to sending two particles of the same mass and electric charge, but possibly with different spin states. The result of the measurement by the Stern-Gerlach apparatus gives, when an impact occurs on the screen, the probability that it lies in one of the two disjoint domains of the screen (for spin 1/2). This result will easily be obtained for the superposition of the two oscillating metrics if we know how to obtain it for the one corresponding to the state function $\varphi'$, since the calculations are identical for $\varphi''$.
Indeed, this is a consequence of the following two facts: \begin{enumerate} \item $E_1^{\mathbb{C}}:=E^{\mathbb{C}}_{S^3(\rho)}(\gamma)=E_1'^{\mathbb{C}}\oplus E_1''^{\mathbb{C}}$, where $E_1'^{\mathbb{C}}$ is the vector subspace spanned by $\beta_1$ and $\beta_2$ and $E_1''^{\mathbb{C}}$ the one spanned by $\beta_3$ and $\beta_4$. \item The endomorphisms $\hat{S}_1,\hat{S}_2,\hat{S}_3$ leave $E_1'^{\mathbb{C}}$ and $E_1''^{\mathbb{C}}$ stable, given the form of their matrices in the basis of the $\beta_k$ given in \ref{ss+1}. \end{enumerate} We then introduce the following terminology: \begin{dfn}\label{d+3} The space $E_1'^{\mathbb{C}}$ is called \textbf{the restricted eigenspace} (it has complex dimension 2 and corresponds to the space of spin-1/2 states of standard quantum physics). The endomorphisms $\hat{S}_1,\hat{S}_2,\hat{S}_3$ restricted to $E_1'^{\mathbb{C}}$, denoted $\hat{S'}_1,\hat{S'}_2,\hat{S'}_3$, are called \textbf{the restricted canonical endomorphisms}. \end{dfn} The matrices, in the basis $(\beta_1,\beta_2)$, of the restricted canonical endomorphisms $\hat{S'}_1$ and $\hat{S'}_3$ (the only ones that will matter to us since $B^2=0$) are therefore: \begin{center} $\hat M'_1 = \begin{pmatrix} 0&1\\ 1&0 \end{pmatrix}$ ~~ $\hat M'_3 = \begin{pmatrix} -1&0\\ 0&1 \end{pmatrix}$ \end{center} We have, of course, the same results if we consider $E_1''^{\mathbb{C}}$ and $\hat{S''}_1$, $\hat{S''}_3$. \textbf{In the rest of this section we shall therefore consider that the state functions take their values in the restricted eigenspace $E_1'^{\mathbb{C}}$. The spin states (Def.\ref{d+2}) of the homogeneous oscillating metrics of order 2 with spin 1/2 that will occur are then elements of $E_1'^{\mathbb{C}}$.}
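The matrices $\hat M'_1$ and $\hat M'_3$ behave like Pauli-type matrices (the identification of $\hat M'_1$ with $\sigma_x$ and $\hat M'_3$ with $-\sigma_z$ is ours, not stated in the text): each squares to the identity, they anticommute, and each has eigenvalues $\pm 1$. A small numerical check:

```python
import numpy as np

# Matrices of the restricted canonical endomorphisms S'_1 and S'_3
# in the basis (beta_1, beta_2), as given above.
M1 = np.array([[0.0, 1.0], [1.0, 0.0]])
M3 = np.array([[-1.0, 0.0], [0.0, 1.0]])

# Each squares to the identity ...
sq_ok = np.allclose(M1 @ M1, np.eye(2)) and np.allclose(M3 @ M3, np.eye(2))
# ... they anticommute ...
anti_ok = np.allclose(M1 @ M3 + M3 @ M1, np.zeros((2, 2)))
# ... and each has eigenvalues -1 and +1.
eig1 = np.sort(np.linalg.eigvalsh(M1))
eig3 = np.sort(np.linalg.eigvalsh(M3))
print(sq_ok, anti_ok, eig1, eig3)
```

These are the algebraic properties that make the two-dimensional restricted eigenspace behave like the standard spin-1/2 state space.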
On the cell $\mathscr{C}_{y_2,y_3}$ of the idealized Stern-Gerlach apparatus, the magnetic field has the form $\overrightarrow B=\mathcal{B}\overrightarrow s$, where $\overrightarrow s$ is a unit vector in the plane $(\overrightarrow x,\overrightarrow z)$ and $\mathcal B$ is a non-constant function of a real variable denoted $s=(\cos \theta)z+(\sin \theta)x$; we assume $\frac{\partial\mathcal B}{\partial s}>0$ so that there is no ambiguity in the orientation of $\overrightarrow s$. The angle $\theta$ is specified in the following definition. \begin{dfn}\label{d+4} The \textbf{measurement angle} of the Stern-Gerlach apparatus is the angle $\theta\in]-\pi,\pi]$ given by $\cos\theta=(\overrightarrow s/\overrightarrow k)$, $\sin\theta=(\overrightarrow s/\overrightarrow i)$, where $\overrightarrow i$ and $\overrightarrow k$ are the unit vectors of the axes $\overrightarrow x$ and $\overrightarrow z$. \end{dfn} The magnetic field $\overrightarrow B$ thus reads $\overrightarrow B=\mathcal B(\cos\theta\,\overrightarrow k+\sin\theta\,\overrightarrow i)$, and the two components $(\varphi^1,\varphi^2)$ of $\varphi'$ (now considered with values in $E_1'^{\mathbb{C}}$) in the basis $(\beta_1,\beta_2)$ satisfy, given the approximations already specified, the two equations: \begin{eqnarray}\label{F+8} -2iM\frac{\partial\varphi^1}{\partial t}=\Box_\Theta\varphi^1+2\varrho Q\mathcal B((\sin\theta)\varphi^2-(\cos\theta)\varphi^1) \end{eqnarray} \begin{eqnarray}\label{F+9} -2iM\frac{\partial\varphi^2}{\partial t}=\Box_\Theta\varphi^2+2\varrho Q\mathcal B((\sin\theta)\varphi^1+(\cos\theta)\varphi^2) \end{eqnarray} We define the endomorphism $\hat S_\theta$ of $E_1'^{\mathbb{C}}$ by setting: \begin{eqnarray}\label{F+10} \hat S_\theta:=(\sin\theta) \hat S'_1+(\cos\theta)\hat S'_3 \end{eqnarray} Its matrix in the basis $(\beta_1,\beta_2)$ is: \begin{center} $\hat M_\theta = \begin{pmatrix} {-\cos\theta}&\sin\theta\\
\sin\theta&\cos\theta \end{pmatrix}$ \end{center} The two preceding equations can then be written in the form: $$-2iM\frac{\partial\varphi'}{\partial t}=\Box_\Theta\varphi'+2\varrho Q\mathcal B\hat S_\theta(\varphi')$$ One easily checks that the pair \begin{eqnarray}\label{F+11} \beta'_1:=(\cos\theta/2)\beta_1-(\sin\theta/2)\beta_2 ~~~,~~~ \beta'_2:=(\sin\theta/2)\beta_1+(\cos\theta/2)\beta_2 \end{eqnarray} forms a basis of eigenvectors of $\hat S_\theta$, orthonormal in $E_1'^{\mathbb{C}}$, for the respective eigenvalues $(-1)$ and $(+1)$. The rotation matrix $R_{\theta/2}$: \begin{center} $R_{\theta/2} = \begin{pmatrix} {\cos\theta/2}&-\sin\theta/2\\ \sin\theta/2&\cos\theta/2 \end{pmatrix}$ \end{center} then satisfies the equality $R_{\theta/2}\hat M_\theta=JR_{\theta/2}$, where $J$ is the matrix: \begin{center} $J = \begin{pmatrix} -1&0\\ 0&1 \end{pmatrix}$ \end{center} We deduce that the two components $\varphi'^1$ and $\varphi'^2$ of $\varphi'$ in this basis satisfy the two equations, now decoupled: \begin{eqnarray}\label{F+12} -2iM\frac{\partial\varphi'^1}{\partial t}=\Box_\Theta\varphi'^1+2\varrho Q\mathcal B\varphi'^1 \end{eqnarray} \begin{eqnarray}\label{F+13} -2iM\frac{\partial\varphi'^2}{\partial t}=\Box_\Theta\varphi'^2-2\varrho Q\mathcal B\varphi'^2 \end{eqnarray} Each of these equations is identical to the one describing, in classical quantum physics, the distribution of impacts on the screen when the ``particles'' are under the influence of a potential, given here by the function $2\varrho Q\mathcal B$ for equation \ref{F+12} and by $-2\varrho Q\mathcal B$ for equation \ref{F+13}. One may regard $2\varrho Q\mathcal B$ (resp.
$-2\varrho Q\mathcal B$) représente un potentiel électrique, le champ électrique correspondant est alors $\overrightarrow E=2\varrho Q\frac{\partial\mathcal B}{\partial s}\overrightarrow s$ (resp. $\overrightarrow E=-2\varrho Q\frac{\partial\mathcal B}{\partial s}\overrightarrow s$), ce qui, en physique classique, <<~dévie~>> les particules dans un sens suivant la direction $\overrightarrow s$ pour l'équation \ref{F+12} et dans l'autre sens pour l'équation \ref{F+13}.\\ Nous ne détaillons pas cette étude ici, mais donnons les résultats qualitatifs dans la proposition suivante (sans démonstration): \begin{prop}\label{p+1} Dans un domaine pour lequel $y_3<y<y_4$ (c'est-à-dire après la sortie du champ magnétique $\overrightarrow B$), la fonction ($a$) caractéristique de la métrique $g=|a|^{\frac{4}{n-2}}g_0$ est approximativement nulle en dehors de deux domaines disjoints $\mathscr D_1=)t_1,t_2(\times\omega_1\times S^1\times S^3\times V$ et $\mathscr D_2=)t_1,t_2(\times\omega_2\times S^1\times S^3\times V$ ~où ~ $\omega_1$ et $\omega_2$ sont deux <<~tubes~>> disjoints inclus dans la partie limitée par $y_3$ et $y_4$ (fig.\ref{f+1}). Les deux <<~taches d'impacts~>> sur l'écran, classiques pour le spin 1/2, sont limitées par l'intersection de ces deux tubes avec l'écran. Les axes des tubes $\omega_1$ et $\omega_2$, notés $\overrightarrow y_1$ et $\overrightarrow y_2$, intersectent l'axe $\overrightarrow y$. La fonction d'état de la métrique oscillante restreinte à $\mathscr D_1$ est ~~ $\varphi'_1\beta'_1|_{)t_1,t_2(\times\omega_1}$, on peut l'assimiler à celle d'un faisceau de métrique oscillante d'axe $\overrightarrow y_1$ (def.\ref{d+1}). Il en est de même pour la fonction d'état de la métrique oscillante restreinte à $\mathscr D_2$.
\end{prop} L'étude de ce phénomène est identique à celle faite par la physique quantique standard puisque, si l'on utilise les $\varepsilon$-approximations, les équations (\ref{F+12}) et (\ref{F+13}) se ramènent à celles vérifiées par la fonction d'état de cette dernière. De plus, l'analyse probabiliste faite pour nous en terme de <<~singularités~>> donne des résultats qui correspondent aussi à ceux de la physique quantique standard, comme on l'a vu dans la section \ref{s2.12}. Les résultats précis sont donnés dans la proposition \ref{p+2} qui va suivre, dans le cadre d'un appareil de Stern-Gerlach idéalisé. Dans ce but, nous continuons l'étude de la manière suivante: Sur le domaine pour lequel $y\in)y_1,y_2($ (fig.\ref{f+1}), la fonction d'état $\varphi'$ est celle du faisceau de métrique oscillante homogène se déplaçant à une vitesse $\overrightarrow v=\lambda\overrightarrow j$ où $\lambda$ est positif et $\overrightarrow j$ est le vecteur unitaire de l'axe $\overrightarrow y$. Restreinte à ce domaine, on la notera $\varphi'_0$. D'après \ref{F+5} et puisque l'on considère la restriction à $E_1'^{\mathbb{C}}$, la fonction $\varphi'_0$ s'écrit sous la forme: \begin{eqnarray}\label{F+14} \varphi'_0=e^{-i((M'-M)t-\lambda y)}(z_1\beta_1+z_2\beta_2) \end{eqnarray} où $z_1$ et $z_2\in\mathbb{C}$. D'après la définition \ref{F+2}, l'état de spin du faisceau entrant dans l'appareil de Stern-Gerlach est $z_1\beta_1+z_2\beta_2$.
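À titre d'illustration (esquisse numérique, hors du texte original, avec des valeurs d'angle arbitraires), la relation d'entrelacement $R_{\theta/2}\hat M_\theta=JR_{\theta/2}$ et le fait que $(\beta'_1,\beta'_2)$ soit une base propre de $\hat S_\theta$ pour les valeurs propres $(-1)$ et $(+1)$ se vérifient ainsi:

```python
import numpy as np

theta = 0.7  # angle de mesure (valeur d'illustration)
c, s = np.cos(theta), np.sin(theta)
M_theta = np.array([[-c, s], [s, c]])    # matrice de S_theta dans (beta_1, beta_2)
c2, s2 = np.cos(theta / 2), np.sin(theta / 2)
R = np.array([[c2, -s2], [s2, c2]])      # rotation d'angle theta/2
J = np.diag([-1.0, 1.0])

# Relation d'entrelacement : R_{theta/2} M_theta = J R_{theta/2}
assert np.allclose(R @ M_theta, J @ R)

# beta'_1 et beta'_2 sont vecteurs propres pour les valeurs propres -1 et +1
b1p = np.array([c2, -s2])
b2p = np.array([s2, c2])
assert np.allclose(M_theta @ b1p, -b1p)
assert np.allclose(M_theta @ b2p, b2p)
```

La vérification ne dépend pas de la valeur de $\theta$ choisie.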
Si l'on considère la fonction $\epsilon:\Theta\rightarrow\mathbb{C}$ définie par: \begin{eqnarray}\label{F+15} \epsilon:=e^{-i((M'-M)t-\lambda y)} \end{eqnarray} alors, sur le domaine où $y\in)y_1,y_2($, les deux composantes de $\varphi'_0$ dans la base $(\beta_1,\beta_2)$, que l'on notera $\varphi_0^1$ et $\varphi_0^2$, s'écrivent: $$\varphi_0^1=z_1\epsilon ~~~~\varphi_0^2=z_2\epsilon$$ Les composantes de $\varphi'_0$ dans la base $(\beta'_1,\beta'_2)$ précisée en \ref{F+11} (et sur le domaine où $y\in)y_1,y_2($) sont donc: \begin{eqnarray}\label{F+16} {\varphi'_0}^1=((\cos\theta/2) z_1-(\sin\theta/2)z_2)\epsilon \end{eqnarray} \begin{eqnarray}\label{F+17} {\varphi'_0}^2=((\sin\theta/2) z_1+(\cos\theta/2)z_2)\epsilon \end{eqnarray} Elles sont, bien sûr, solutions des équations \ref{F+12} et \ref{F+13} dans le domaine où $y\in)y_1,y_2($ puisque le champ magnétique $\overrightarrow B$ y est nul. On peut les considérer comme des <<~conditions initiales~>> qui déterminent $\varphi'^1$ et $\varphi'^2$ sur le domaine <<~entier~>>, en particulier celui où $y\in)y_3,y_4($ qui donne la <<~mesure de spin~>>.\\ Pour obtenir un résultat précis sur cette <<~mesure de spin~>> par l'appareil de Stern-Gerlach, nous précisons maintenant les données sur le champ magnétique $\overrightarrow B$ dans le domaine où $y\in)y_2,y_3($. On a déjà supposé que $\overrightarrow B=\mathcal B\overrightarrow s$ où $\mathcal B$ dépend de la variable $s=(\cos\theta)z+(\sin\theta)x$. Nous supposerons maintenant que $\mathcal B$ est une fonction impaire (on peut, par exemple, considérer que, par une approximation linéaire, $\mathcal B=(\frac{\partial\mathcal B}{\partial s}(0))s$). Comme, sous cette condition, $\mathcal B(0)=0$, le champ $\overrightarrow B$ est nul sur l'axe $\overrightarrow y$ et ceci peut être vu comme un moyen de négliger les <<~déviations~>> (en langage de physique classique) autres que celles liées au <<~spin~>>.
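L'effet de l'isométrie $\sigma(x,y,z)=(-x,y,-z)$ sur la variable $s=(\cos\theta)z+(\sin\theta)x$, et donc la propriété $\mathcal B\circ\sigma=-\mathcal B$ pour $\mathcal B$ impaire, peut s'illustrer sur l'approximation linéaire (esquisse numérique, hors du texte original, avec des valeurs arbitraires):

```python
import numpy as np

theta = 0.6  # angle de mesure (valeur d'illustration)
k0 = 2.5     # pente dB/ds en 0 (valeur d'illustration)

def s_coord(p):
    # variable s = (cos theta) z + (sin theta) x
    x, y, z = p
    return np.cos(theta) * z + np.sin(theta) * x

def B(s):
    # approximation lineaire : fonction impaire de s
    return k0 * s

def sigma(p):
    x, y, z = p
    return (-x, y, -z)

p = (0.3, 1.2, -0.7)
# sigma renverse la variable s, donc B o sigma = -B
assert np.isclose(s_coord(sigma(p)), -s_coord(p))
assert np.isclose(B(s_coord(sigma(p))), -B(s_coord(p)))
assert B(0.0) == 0.0  # le champ s'annule sur l'axe y
```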
Bien entendu, $\frac{\partial \mathcal B}{\partial s}(0)\neq0$.\\ Le choix d'imparité de la fonction $\mathcal B$ va permettre d'utiliser la propriété suivante: $$\mathcal B\circ\sigma=-\mathcal B$$ où $\sigma$ désigne l'isométrie de $\mathbb{R}^3$ définie par $\sigma(x,y,z)=(-x,y,-z)$. \begin{prop}\label{p+2} On suppose que la condition suivante est réalisée:\\ Le domaine où $y\in)y_1,y_2($ (fig.\ref{f+1}) est un domaine de type <<~faisceau de métrique oscillante dans un potentiel neutre de spin 1/2 et d'axe $\overrightarrow y$~>> (def.\ref{d+1}), son état de spin est $\zeta=z_1\beta_1+z_2\beta_2$ (def.\ref{d+2}) où $(\beta_1,\beta_2)$ est la base de $E_1'^{\mathbb{C}}$ dans laquelle les équations \ref{F+8} et \ref{F+9} sont vérifiées.\\ L'angle de mesure de l'appareil de Stern-Gerlach (def.\ref{d+4}) est noté $\theta$. Alors:\\ Si à un instant $t\in)t_1,t_2($, $\varsigma$ est une singularité élémentaire dans $\mathscr{H}(t):=\{t\}\times (\omega_1\cup\omega_2)\times S^1\times S^3\times V$, où $\omega_1$ et $\omega_2$ sont précisés dans la proposition (\ref{p+1}), la probabilité pour que celle-ci soit dans $\mathscr{H}_1(t):=\{t\}\times\omega_1\times S^1\times S^3\times V$ est: \begin{eqnarray}\label{F+18} p_1(t)=\frac{|(\cos\theta/2)z_1-(\sin\theta/2)z_2|^2}{|z_1|^2+|z_2|^2} \end{eqnarray} (Bien entendu, la probabilité pour que celle-ci soit dans $\mathscr{H}_2(t):=\{t\}\times\omega_2\times S^1\times S^3\times V$ est: $p_2(t)=\frac{|(\sin\theta/2)z_1+(\cos\theta/2)z_2|^2}{|z_1|^2+|z_2|^2}=1-p_1(t)$).\\ On a choisi ici pour $\omega_1$ (resp. $\omega_2$) le domaine associé à la composante $\varphi'_1$ (resp.
$\varphi'_2$) de la fonction d'état et donc à la valeur propre ($-1$) (resp.($+1$)) de l'endomorphisme $\hat S_\theta$ défini par l'égalité (\ref{F+10}).\\ La probabilité donnée par (\ref{F+18}) s'écrit donc aussi de la manière suivante: \begin{eqnarray}\label{F+19} p_1(t)=|\langle\zeta,\zeta'_1\rangle|^2 \end{eqnarray} où $\langle,\rangle$ désigne le produit hermitien de l'espace propre restreint $E_1'^{\mathbb{C}}$ (def.\ref{d+3}),\\ $\zeta$ est l'état de spin du faisceau de métrique oscillante défini sur le domaine pour lequel\\ $y\in)y_1,y_2($, ~~ $\zeta'_1:=(\cos\theta/2)\beta_1-(\sin\theta/2)\beta_2$ est le vecteur propre de $\hat S_\theta$ défini par l'égalité (\ref{F+10}). \end{prop} Les résultats donnés par cette proposition concordent bien avec les résultats de la physique quantique standard (et avec les résultats expérimentaux). \bigskip \textbf{Démonstration}: d'après la condition initiale imposée, la restriction de la fonction d'état $\varphi'$ au domaine pour lequel $y\in)y_1,y_2($ s'écrit: $\varphi'_0=\varphi_0'^1\beta'_1+\varphi_0'^2\beta'_2$ ~~~où $\varphi_0'^1$ et $\varphi_0'^2$ vérifient (\ref{F+16}) et (\ref{F+17}) et $\beta'_1$, $\beta'_2$ sont définis par les égalités (\ref{F+11}).\\ Notons $\varPsi:\Theta\supset\omega\rightarrow{\mathbb{C}}$ la solution de l'équation (\ref{F+12}) qui vérifie $\varPsi=\epsilon$ sur le domaine pour lequel $y\in)y_1,y_2($, où $\epsilon$ est définie par (\ref{F+15}). Alors, puisque $\epsilon\circ\sigma=\epsilon$ et $\mathcal B\circ\sigma=-\mathcal B$,\\ $\varPsi\circ\sigma$ est solution de l'équation (\ref{F+13}) et $\varPsi\circ\sigma=\epsilon$ sur le domaine pour lequel $y\in)y_1,y_2($. De plus, $\omega_1=\sigma(\omega_2)$ où $\omega_1$ et $\omega_2$ sont les deux tubes disjoints intervenant dans la proposition \ref{p+1}.\\ De la linéarité des équations on déduit que:\\ $\varphi'_1=((\cos\theta/2)z_1-(\sin\theta/2)z_2)\varPsi$~~~ et~~~ $\varphi'_2=((\sin \theta/2)z_1+(\cos\theta/2)z_2)(\varPsi\circ\sigma)$. 
D'après la section \ref{s2.12} et son extension dans \ref{ss2.13.6}, si, à un instant $t$, une singularité élémentaire $\varsigma$ est dans $\mathscr{H}(t)$, alors la probabilité pour que celle-ci soit dans $\mathscr{H}_1(t)$ est: $$p_1(t)=\frac{\int_{\omega_1}|\varphi'_1|^2(t,x^i)dx^i}{\int_{\omega_1}|\varphi'_1|^2(t,x^i)dx^i+\int_{\omega_2}|\varphi'_2|^2(t,x^i)dx^i}$$ Alors, $$p_1(t)=\frac{|(\cos\theta/2)z_1-(\sin\theta/2)z_2|^2}{ |(\cos\theta/2)z_1-(\sin\theta/2)z_2|^2+|(\sin\theta/2)z_1+(\cos\theta/2)z_2|^2}$$ car: $$\int_{\omega_1}|\varPsi(t,x^i)|^2dx^i=\int_{\omega_2}|\varPsi(t,\sigma(x^i))|^2dx^i$$ Ce qui termine la démonstration de l'égalité (\ref{F+18}) puisque: $$|(\cos\theta/2)z_1-(\sin\theta/2)z_2|^2+|(\sin\theta/2)z_1+(\cos\theta/2)z_2|^2=|z_1|^2+|z_2|^2$$ L'égalité (\ref{F+19}) en découle immédiatement puisque: $$\langle\zeta,\zeta'_1\rangle=\langle z_1\beta_1+z_2\beta_2,(\cos\theta/2)\beta_1-(\sin\theta/2)\beta_2\rangle=(\cos\theta/2)z_1-(\sin\theta/2)z_2$$ \subsubsection{Cas particulier où il existe $\theta'\in)-\pi, \pi)$ tel que l'état de spin soit un vecteur propre de l'endomorphisme $\hat S_{\theta'}$} \begin{dfn}\label{d+5} L'angle $\theta'\in)-\pi,\pi)$ tel que l'état de spin $\zeta$ soit un vecteur propre de $\hat S_{\theta'}$ est appelé \textbf{l'angle de l'état de spin}.\\ L'état de spin est alors noté $\zeta_{\theta'}$. \end{dfn} Notons $E_{\theta'}(-1)$ et $E_{\theta'}(+1)$ les espaces propres de l'endomorphisme $\hat S_{\theta'}$ relatifs aux valeurs propres $(-1)$ et $(+1)$. Les vecteurs propres de norme 1 pour $\langle,\rangle$ dans $E_{\theta'}(-1)$ sont tous de la forme: $\zeta_{\theta',k}=e^{ik}((\cos\theta'/2)\beta_1-(\sin\theta'/2)\beta_2)$ ~~où ~~$k\in\mathbb{R}$.\\ Alors: \begin{eqnarray}\label{F+20} \zeta_{\theta',k}^\bot=\zeta_{\theta'-\pi,k}=e^{ik}((\sin\theta'/2)\beta_1+(\cos\theta'/2)\beta_2) \end{eqnarray} est un vecteur propre de $E_{\theta'}(+1)$.
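Les identités utilisées dans la démonstration précédente (conservation de la norme par la rotation d'angle $\theta/2$, égalité entre les formes (\ref{F+18}) et (\ref{F+19}) de $p_1$, et la valeur $\cos^2(\frac{\theta-\theta'}{2})$ lorsque l'état de spin est propre pour $\hat S_{\theta'}$) se vérifient numériquement (esquisse hors du texte original, valeurs arbitraires):

```python
import numpy as np

theta, theta_p, k = 0.9, -0.4, 0.3  # angles et phase d'illustration
c, s = np.cos(theta / 2), np.sin(theta / 2)

# Conservation de la norme par la rotation d'angle theta/2
z1, z2 = 0.8 + 0.2j, -0.5 + 0.6j
lhs = abs(c * z1 - s * z2) ** 2 + abs(s * z1 + c * z2) ** 2
assert np.isclose(lhs, abs(z1) ** 2 + abs(z2) ** 2)

# p_1 sous la forme (F+18) et sous la forme produit hermitien (F+19)
p1 = abs(c * z1 - s * z2) ** 2 / (abs(z1) ** 2 + abs(z2) ** 2)
zeta = np.array([z1, z2]) / np.sqrt(abs(z1) ** 2 + abs(z2) ** 2)
zeta1 = np.array([c, -s])            # vecteur propre de S_theta pour -1
assert np.isclose(p1, abs(np.vdot(zeta1, zeta)) ** 2)

# Etat de spin propre pour S_{theta'} : p_1 = cos^2((theta - theta')/2)
zeta_tp = np.exp(1j * k) * np.array([np.cos(theta_p / 2), -np.sin(theta_p / 2)])
p1_prop = abs(np.vdot(zeta1, zeta_tp)) ** 2
assert np.isclose(p1_prop, np.cos((theta - theta_p) / 2) ** 2)
```

La phase $e^{ik}$ disparaît dans le module, comme attendu.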
\begin{dfn}\label{d+6} $ \zeta_{\theta',k}^\bot$ donné par l'égalité \ref{F+20} est appelé \textbf{l'orthogonal canonique de $\zeta_{\theta',k}$}. \end{dfn} (Tout autre orthogonal de norme 1 de $\zeta_{\theta',k}$ est de la forme $e^{ik'} \zeta_{\theta',k}^\bot$ ~~où ~~$k'\in\mathbb{R}$).\\ Dans le cas où l'état de spin du faisceau de métrique oscillante est de la forme $\zeta_{\theta',k}$, la proposition \ref{p+2} a une conclusion dont l'écriture est très simple, précisée dans le corollaire suivant. \begin{coro}\label{c+2} Sous les hypothèses de la proposition \ref{p+2} et lorsque l'état de spin $\zeta$ a la forme $\zeta_{\theta',k}\in E_{\theta'}(-1)$ définie précédemment, alors la probabilité $p_1(t)$ donnée par (\ref{F+18}) s'écrit: \begin{eqnarray}\label{F+21} p_1(t)=\cos^2(\frac{\theta-\theta'}{2}) \end{eqnarray} \end{coro} En effet, d'après (\ref{F+19}): $$p_1(t)=|\langle e^{ik}((\cos\theta'/2)\beta_1-(\sin\theta'/2)\beta_2),(\cos\theta/2)\beta_1-(\sin\theta/2)\beta_2\rangle|^2$$ $$=(\cos\theta'/2\cos\theta/2+\sin\theta'/2\sin\theta/2)^2=\cos^2(\frac{\theta-\theta'}{2})$$ \subsection{L'appareil de Stern-Gerlach idéalisé vu comme appareil de mesure correspondant à la définition (\ref{d2.30})}
On précise ici la correspondance entre un instrument de mesure basé sur l'expérience de Stern-Gerlach et la notion d'<<~instrument de mesure~>> donnée par la définition (\ref{d2.30}) de la section \ref{s2.16}.\\ On se limite au <<~spin 1/2~>>, bien que la généralisation ne pose pas de difficultés conceptuelles.\\ L'angle de mesure de l'appareil de Stern-Gerlach idéalisé (def.\ref{d+4}) est noté $\theta$.\\ Les premières notions sont les suivantes:\\ Le cube $B_L$ introduit dans 1(a) de la définition (\ref{d2.30}) est un cube contenu dans le domaine $)y_1,y_2(\times D(r_1)$ de l'appareil de Stern-Gerlach (fig.\ref{f+1}).\\ On note $\beta_{\theta,-1}$ un vecteur propre normalisé de l'endomorphisme $\hat S_\theta$ défini par l'égalité (\ref{F+10}) relatif à la valeur propre (-1) et $\beta_{\theta,+1}$ son orthogonal canonique (def.\ref{d+6}) (alors relatif à la valeur propre (+1)).\\ La <<~grandeur spin~>> correspondant à la définition (\ref{d2.29}) est la famille $(h_i)_{i\in\mathcal G}$ de fonctions définies sur $S^1(\delta)\times S^3(\rho)$ à valeurs dans $\mathbb{R}^2$ ~où ~$\mathcal G$ (ici égal au spectre $S_p$) est l'ensemble à deux éléments que l'on notera $\{-1,+1\}$. Les fonctions $h_{-1}$ et $h_{+1}:S^1(\delta)\times S^3(\rho)\rightarrow\mathbb{R}^2$ sont définies par: $$h_{-1}:=(\eta_1\beta_{\theta,-1},\eta_2\beta_{\theta,-1}), ~~~~h_{+1}:=(\eta_1\beta_{\theta,+1},\eta_2\beta_{\theta,+1})$$ où $(\eta_1,\eta_2)$ est la base orthonormée de $E_{S^1(\delta)}(\lambda)$ donnée par:\\ $\eta_1(u)=\sqrt2\cos(Qu)$ ~~et ~~$\eta_2(u)=\sqrt2\sin(Qu)$.\\ Les deux cubes $B_{-1}$ et $B_{+1}$ introduits dans 1(b) de la définition (\ref{d2.30}) sont deux cubes tels que $B_{-1}\subset\omega_1$ et $B_{+1}\subset\omega_2$ où $\omega_1$ et $\omega_2$ sont définis dans la proposition \ref{p+1}. 
On les considérera isométriques à $B_L$ par $\sigma_{-1}$ et $\sigma_{+1}$.\\ Le faisceau de métrique oscillante <<~entrant~>> dans l'appareil de Stern-Gerlach (def.\ref{d+1}), sur lequel on effectue la mesure de spin, a pour métrique $g=|a|^{\frac{4}{n-2}}g_0$ où la fonction (a) est de la forme $a=\beta\Phi$. Ici, d'après (\ref{F+1}), $\Phi$ vérifie: $$\Phi=\sum_{k=1}^2(C_k\cos(M't-\lambda y+Qu)+C_k'\sin(M't-\lambda y+Qu))\alpha_k$$ Les fonctions $\alpha_1$ et $\alpha_2$ qui forment une base orthonormée de $E_1'^{\mathbb{C}}$ sont choisies telles que\\ $\alpha_1=\beta_{\theta,-1}$ et $\alpha_2=\beta_{\theta,+1}$.\\ Les fonctions $a_{-1}$ et $a_{+1}$ définies dans 1(c) de la définition (\ref{d2.30}) sont alors: $a_{-1}=\frac{\beta}{\sqrt2}((C_1\cos(M't-\lambda y)+C'_1\sin(M't-\lambda y))\eta_1$ \hspace{6cm} $+(C'_1\cos(M't-\lambda y)-C_1\sin(M't-\lambda y))\eta_2)\beta_{\theta,-1}\circ\sigma_{-1}$ \vspace{5mm} $a_{+1}=\frac{\beta}{\sqrt2}((C_2\cos(M't-\lambda y)+C'_2\sin(M't-\lambda y))\eta_1$ \hspace{6cm} $+(C'_2\cos(M't-\lambda y)-C_2\sin(M't-\lambda y))\eta_2)\beta_{\theta,+1}\circ\sigma_{+1}$\\ Il s'ensuit que les fonctions canoniques associées sont: \begin{eqnarray}\label{F+22} a_{-1,c}=z_{\theta,-1}e^{-i(M't-\lambda y)} \end{eqnarray} \begin{eqnarray}\label{F+23} a_{+1,c}=z_{\theta,+1}e^{-i(M't-\lambda y)} \end{eqnarray} où $z_{\theta,-1}:=\frac{1}{\sqrt2}(C_1+iC'_1)$ ~~et ~~$z_{\theta,+1}:=\frac{1}{\sqrt2}(C_2+iC'_2)$. Comme exposé dans la proposition \ref{p+1}, la fonction canonique $a_{-1,c}$ (resp. $a_{+1,c}$) correspondant à la métrique oscillante dans le domaine où $\omega_1\subset B_{-1}$ (resp. $\omega_2\subset B_{+1}$) est celle donnée par (\ref{F+22}) (resp. (\ref{F+23})).
On en déduit donc que: si, à un instant $t$, une singularité $\varsigma$ <<~est vue~>> dans $\omega_1\cup\omega_2$, la probabilité qu'elle soit vue dans $\omega_1$ est (cf.\ref{F42}): $$p(t)=\frac{|z_{\theta,-1}|^2}{|z_{\theta,-1}|^2+|z_{\theta,+1}|^2}$$ \textbf{Ce résultat correspond bien à celui obtenu dans la proposition \ref{p+2}}. En effet, d'après (\ref{F+11}):\\ $\beta_{\theta,-1}=\beta'_1=(\cos\theta/2)\beta_1-(\sin\theta/2)\beta_2$ ~~~et~~~ $\beta_{\theta,+1}=\beta'_2=(\sin\theta/2)\beta_1+(\cos\theta/2)\beta_2$\\ Alors:\\ $z_{\theta,-1}=(\cos\theta/2)z_1-(\sin\theta/2)z_2$ ~~~et ~~~$z_{\theta,+1}=(\sin\theta/2)z_1+(\cos\theta/2)z_2$\\ (On a ici, pour simplifier, considéré une singularité $\varsigma$ à un instant $t$ donné, et non un nombre moyen de singularités pendant le temps $(t,u)\in)t_0,t_0+T(\times S^1(\delta)$ comme présenté en 2 de la définition (\ref{d2.30})). \newpage \section{L'intrication quantique}\label{+2.2} \begin{figure} \begin{center} \label{fig:1} \begin{tikzpicture}[yscale=0.8, xscale=0.9] \draw (0, 0) ellipse (0.5 and 1); \draw (10, 0) ellipse (0.5 and 1); \draw (0.8, 0) ellipse (0.5 and 1); \draw (9.3, 0) ellipse (0.5 and 1); \draw (4.7, 0) ellipse (0.5 and 1); \draw (5.3, 0) ellipse (0.5 and 1); \draw (0, 1) -- (10, 1); \draw[->] (-3, 0) -- (13, 0); \draw (0, -1) -- (10, -1); \draw[->] (5, -3) -- (5, 3); \draw[->] (8, 0.7) -- (2.4, -0.6); \draw (-2, -2) -- (-2, 1.5); \draw (-2, 1.5) -- ( 0, 2); \draw ( 0, 2) -- ( 0, -1.5); \draw ( 0, -1.5) -- (-2, -2); \draw ( 10, -2) -- ( 10, 1.5); \draw ( 10, 1.5) -- (12, 2); \draw (12, 2) -- (12, -1.5); \draw (12, -1.5) -- ( 10, -2); \draw (9.3, -0.02) node[below] {$y_{2D}$} node {\tiny$\bullet$}; \draw (0.8, -0.02) node[below] {$y_{2G}$} node {\tiny$\bullet$}; \draw (11, -0.02) node[below] {$y_{4D}$}node {\tiny$\bullet$}; \draw (-1, -0.02) node[below] {$y_{4G}$}node {\tiny$\bullet$}; \draw (-0.2, -0.02) node[below] {$y_{3G}$}node {\tiny$\bullet$}; \draw[thick, <-] (2.5, 0) node[above 
right]{$~~~~~~P_G$} -- (5, 0); \draw (5, -0.01) node[below] {O} node {\tiny$\bullet$}; \draw[thick, ->] (5, 0) node[above right]{$~~~~~~~~~P_D$} -- (7.5, 0); \draw (10.2, -0.02) node[below] {$y_{3D}$}node {\tiny$\bullet$}; \draw (4.5, -0.01) node[below] {$y_{1G}$} node {\tiny$\bullet$}; \draw (5.5, -0.01) node[below] {$y_{1D}$} node {\tiny$\bullet$}; \draw (-1, -2.7) node[above right]{$E_G$}; \draw ( 9, -2.7) node[above right]{$E_D$}; \draw[->] (0.25, -0.35) -- (0.05, 0.35); \draw[->] (0.45, -0.35) -- (0.25, 0.35); \draw[->] (0.65, -0.35) -- (0.45, 0.35); \draw (0.75, 0.7) node{$\overrightarrow{B_G}$}; \draw (12.5, 0) node[below ]{$\overrightarrow{y}$}; \draw (5, 2.5) node[right]{$\overrightarrow{z}$}; \draw (3, -0.4) node[below ]{$\overrightarrow{x}$}; \draw[->] (9.35, -0.35) -- (9.55, 0.35); \draw[->] (9.55, -0.35) -- (9.75, 0.35); \draw[->] (9.75, -0.35) -- (9.95, 0.35); \draw (9.25, 0.7) node{$\overrightarrow{B_D}$}; \end{tikzpicture} \end{center} \caption{Le double appareil de Stern-Gerlach}\label{f+2} \end{figure} Cette section est consacrée à l'étude d'un phénomène physique qui est décrit correctement avec l'axiomatique de la physique quantique classique mais qui ne peut l'être dans le cadre d'une théorie des particules plus classique, quelles que soient les caractéristiques qu'on leurs donne. L'observation effective de ce phénomène a été un des arguments qui a privilégié la théorie quantique au détriment de théories éventuelles plus classiques. Le lecteur pourra obtenir plus de précisions sur ce sujet en consultant les ouvrages spécialisés de mécanique quantique (par exemple \cite{basd}). 
Nous allons cependant montrer que ce phénomène peut être décrit correctement dans le cadre de la théorie présentée dans ce papier.\\ Pour la clarté de l'exposé, nous nous concentrerons, dans toute cette section, sur l'expérience suivante décrite en langage <<~classique~>>: Dans le domaine du tube présenté dans la figure \ref{f+2} pour lequel $y\in)y_{1G},y_{1D}($, sont <<~créées~>> deux particules $P_G$ et $P_D$, chacune de spin 1/2. $P_G$ est lancée vers un appareil de Stern-Gerlach (cf. section \ref{+2.1}) situé à gauche, $P_D$ vers un appareil de Stern-Gerlach situé à droite. Ces appareils mesurent le spin des particules et sont constitués chacun d'un champ magnétique $\overrightarrow B$ ($\overrightarrow B_G$ pour celui de gauche, $\overrightarrow B_D$ pour celui de droite), inhomogène mais approximativement formé de vecteurs parallèles dont la direction peut être choisie quelconque dans le plan ($\overrightarrow x,\overrightarrow z$), puis d'un écran $E$ ($E_G$ et $E_D$).\\ Chacune des particules, après avoir traversé le champ magnétique $\overrightarrow B$ de l'appareil correspondant, marque son impact sur l'écran $E$. Le spin des particules étant <<~1/2~>>, les impacts n'ont lieu que sur deux zones distinctes de chaque écran concerné, ce qui donne la mesure de spin (binaire) de la particule.\\ On notera $(+1_G)$, $(-1_G)$ ~~(resp. $(+1_D)$, $(-1_D)$) les deux mesures possibles pour la particule $P_G$ ~~(resp. $P_D$).\\ On s'intéresse à la probabilité d'obtenir des couples de mesures simultanées: $(+1_G,+1_D)$, $(+1_G,-1_D)$, $(-1_G,+1_D)$, $(-1_G,-1_D)$, et ceci suivant l'orientation de $\overrightarrow B_G$ relative à $\overrightarrow B_D$.
Expérimentalement, c'est une étude statistique qui est réalisée, on effectue plusieurs fois une expérience identique à celle que l'on vient de décrire.\\ Le fait remarquable, déduit de l'expérience, est que le résultat statistique obtenu sur les $4$ valeurs possibles des <<~couples de mesures~>> que l'on vient d'écrire, ne peut s'expliquer que si \textbf{la mesure du spin d'une particule a une influence sur la mesure du spin de l'autre}. Ceci exclut toute interprétation en terme de <<~deux particules indépendantes~>> au sens classique.\\ On résume dans les lignes qui suivent les arguments qui permettent d'arriver à cette conclusion. Pour cela, on introduit la \textbf{fonction de corrélation $E(\overrightarrow B_G, \overrightarrow B_D)$}. Cette fonction est égale, pour les orientations de $\overrightarrow B_G$ et $\overrightarrow B_D$ précisées, à la valeur moyenne du produit des résultats de mesure ($(+1)$ ou $(-1)$) des appareils de Stern-Gerlach situés à gauche et à droite. On a nécessairement $|E(\overrightarrow B_G,\overrightarrow B_D)|\leq1$.\\ On effectue les couples de mesures pour deux directions différentes du champ magnétique à gauche ($\overrightarrow B_G,\overrightarrow B'_G$) et deux directions du champ magnétique à droite ($\overrightarrow B_D,\overrightarrow B'_D$).\\ On définit la quantité: \begin{eqnarray}\label{F+24} S= E(\overrightarrow B_G,\overrightarrow B_D)+E(\overrightarrow B_G,\overrightarrow B'_D)+E(\overrightarrow B'_G,\overrightarrow B_D)-E(\overrightarrow B'_G,\overrightarrow B'_D) \end{eqnarray} (On somme trois des valeurs et on soustrait la quatrième). 
On montre alors que, pour toute <<~théorie à variables cachées locale~>>, dont la définition est précisée par exemple dans \cite{basd}, l'inégalité suivante, appelée <<~inégalité de Bell~>>, est vérifiée (il existe plusieurs inégalités de Bell de ce type): \begin{eqnarray}\label{F+25} |S|\leq2 \end{eqnarray} L'axiomatique de la physique quantique classique (qui ne correspond pas à une théorie à variables cachées locale) donne, pour l'expérience que l'on vient de décrire et avec un bon choix des quatre directions de mesures $\overrightarrow B_G,\overrightarrow B'_G,\overrightarrow B_D,\overrightarrow B'_D$, le résultat théorique $|S|=2\sqrt2$ qui est en contradiction avec l'inégalité de Bell (\ref{F+25}). L'expérience, effectuée à Orsay \cite{aspe} par A.Aspect, P.Grangier et G.Roger (que l'on notera par la suite A,G,R), obtient un résultat pour lequel $|S|$ est très proche de $2\sqrt2$ (la petite différence avec cette valeur est justifiée par les caractéristiques des appareils utilisés).\\ Cette expérience utilise des paires de photons émises par une cascade atomique d'atomes de calcium provoquée par des rayons lasers, ce ne sont donc pas des <<~spin 1/2~>> que l'on a mesurés mais des <<~polarités~>>, cependant la transposition de la théorie d'une notion à l'autre ne pose pas de difficultés et donne des résultats identiques.\\ On commence par rappeler rapidement les principes de la mécanique quantique standard qui permettent d'obtenir les résultats sus-cités. L'analyse du phénomène avec le point de vue de notre théorie sera présentée dans la sous-section \ref{ss+4}, ce sera là le point essentiel de la section \ref{+2.2}. 
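Le contenu de l'inégalité de Bell (\ref{F+25}) peut s'illustrer dans le cas déterministe: si chaque côté porte des réponses prédéterminées $a,a',b,b'\in\{-1,+1\}$ aux deux orientations de mesure, la quantité $S$ de (\ref{F+24}) reste bornée par 2 (esquisse numérique, hors du texte original):

```python
from itertools import product

# Reponses predeterminees (+-1) aux deux orientations de chaque cote :
# a, a' pour la gauche ; b, b' pour la droite.
best = 0
for a, ap, b, bp in product([-1, 1], repeat=4):
    S = a * b + a * bp + ap * b - ap * bp  # forme (F+24), cas deterministe
    best = max(best, abs(S))
assert best == 2  # inegalite de Bell (forme CHSH) : |S| <= 2
```

Une moyenne sur une distribution quelconque de telles réponses (variables cachées locales) reste donc elle aussi bornée par 2.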
\subsection{L'expérience de type A,G,R \cite{aspe} vue par la physique quantique standard} Les données axiomatiques de la physique quantique standard que l'on va utiliser sont les suivantes: \begin{enumerate} \item L'espace des états de spin 1/2 d'une particule est un espace de Hilbert $(\mathcal{E},\langle,\rangle)$ de dimension complexe 2 que l'on pourra assimiler à $\mathbb{C}^2$ muni du produit hermitien standard. \item Les trois observables de spin $\hat S_1$, $\hat S_2$, $\hat S_3$ liées aux directions $\overrightarrow x$, $\overrightarrow y$, $\overrightarrow z$, sont des endomorphismes hermitiens dont les matrices sont: \begin{center} $\hat M_1= \begin{pmatrix} 0&1&\\ 1&0& \end{pmatrix}$ ~~~ $\hat M_2= \begin{pmatrix} 0&-i&\\ i&0& \end{pmatrix}$ ~~~ $\hat M_3= \begin{pmatrix} 1&0&\\ 0&-1& \end{pmatrix}$ \end{center} (Le coefficient $\hslash/2$ qui apparaît habituellement, où $\hslash$ est la constante de Planck, a été supprimé ici, sans conséquence sur les phénomènes étudiés).\\ Plus généralement, l'observable de spin $\hat S_{\overrightarrow u}$ liée à une direction $\overrightarrow u= \sin\theta\cos\phi \overrightarrow i+ \sin\theta\sin\phi \overrightarrow j+ \cos\theta \overrightarrow k$ a pour matrice: \begin{center} $\hat M_{\overrightarrow u}= \begin{pmatrix} \cos \theta&\sin\theta e^{-i\phi}&\\ \sin \theta e^{i\phi}&-\cos \theta& \end{pmatrix}$ \end{center} On remarquera que les valeurs propres de $\hat S_{\overrightarrow u}$ sont $(-1)$ et $(+1)$. \item La probabilité d'obtenir $(-1)$ ~~(resp.$(+1)$) pour la mesure de spin liée à la direction $\overrightarrow u$ d'une particule dont l'état de spin normalisé est $\xi\in\mathcal{E}$ est donnée par: $$p=|\langle\xi_{\overrightarrow u},\xi\rangle|^2$$ où $\xi_{\overrightarrow u}$ est un vecteur propre normalisé de l'endomorphisme $\hat S_{\overrightarrow u}$ pour la valeur propre $(-1)$ ~~(resp.$(+1)$).
\item Pour une paire de particules $(P_1,P_2)$, l'espace des états de spin est: $\mathcal{E}\otimes\mathcal{E}$. Comme la dimension de $\mathcal{E}$ est $2$ pour le spin 1/2, tout élément de $\mathcal E\otimes\mathcal{E}$ se met sous la forme $\Psi=\xi^1\otimes\xi^2+\xi^3\otimes\xi^4$.~~ $\Psi$ est au maximum de rang 2, c'est dans cette situation qu'apparaît la notion d'<<~états intriqués~>>. \item Pour une paire de particules $(P_1,P_2)$ dont la fonction d'état normalisée $\Psi\in\mathcal E\otimes\mathcal E$, lorsque l'on effectue une mesure de spin sur la particule $P_1$ liée à la direction $\overrightarrow u_1$ et une mesure de spin sur la particule $P_2$ liée à la direction $\overrightarrow u_2$, la probabilité d'obtenir $(-1,-1)$ ~~(resp.$(-1,+1), (+1,-1),(+1,+1)$) est : $$p=|\langle\xi_{\overrightarrow u_1}\otimes\xi_{\overrightarrow u_2}, \Psi\rangle_{\mathcal E\otimes\mathcal{E}}|^2$$ où $\xi_{\overrightarrow u_1}$ est un vecteur propre normalisé de $\hat S_{\overrightarrow u_1}$ pour la valeur propre $(-1)$ (resp. $(+1)$) et $\xi_{\overrightarrow u_2}$ un vecteur propre normalisé de $\hat S_{\overrightarrow u_2}$ pour la valeur propre $(-1)$ (resp. $(+1)$). \item Si, pour une paire $(P_1,P_2)$ de particules de spin 1/2 dont la fonction d'état normalisée est $\Psi\in\mathcal E\otimes\mathcal E$, on n'effectue qu'une mesure de spin sur (par exemple) $P_1$ liée à la direction $\overrightarrow u_1$ (sans se préoccuper de $P_2$), la probabilité d'obtenir la mesure $(-1)$ ~~(resp.$(+1)$) est: $$p=\sum_{k=1}^2|\langle\xi_{\overrightarrow u_1}\otimes\xi_k,\Psi\rangle|^2$$ où $\xi_{\overrightarrow u_1}$ est un vecteur propre normalisé de $\hat S_{\overrightarrow u_1}$ pour la valeur propre $(-1)$ ~~(resp.$(+1)$) et $(\xi_1,\xi_2)$ est une base orthonormée de $\mathcal{E}$ (le résultat ne dépend pas du choix de la base). \end{enumerate} \rmq Le lecteur fera le parallèle entre l'espace des états donné en 1. (et les observables de spin données en 2.) 
avec l'espace propre restreint (et les endomorphismes restreints) définis dans (def.\ref{d+3}). Il est important de remarquer que les premiers sont des données axiomatiques abstraites de la physique quantique classique alors que ceux définis dans le cadre de notre théorie sont des données précises définies à partir de la géométrie de $S^3$ qui est un des éléments de l'espace-temps de dimension $n$. Dans le cadre de l'expérience du type <<~A,G,R \cite{aspe}~>> la fonction d'état normalisée de la paire de particules $(P_1,P_2)$ est supposée être de la forme: $\Psi=\frac{1}{\sqrt2}(\xi\otimes\xi+\xi^\bot\otimes\xi^\bot)$ ~~et est de rang 2.\\ Ici $\xi$ est de norme $1$ dans $\mathcal{E}$ et $\xi^\bot$, bien choisi, vérifie $\langle\xi^\bot,\xi\rangle=0$. Lorsque l'on applique l'axiome 5., la probabilité d'obtenir $(-1,-1)$ \\(resp.$(-1,+1), (+1,-1), (+1,+1)$) comme mesure de spin de $(P_1,P_2)$ liée aux directions\\ $(\overrightarrow u_1,\overrightarrow u_2)$ est: $$p=\frac{1}{2}|\langle\xi_{\overrightarrow u_1},\xi\rangle\langle\xi_{\overrightarrow u_2},\xi\rangle+\langle\xi_{\overrightarrow u_1},\xi^\bot\rangle\langle\xi_{\overrightarrow u_2},\xi^\bot\rangle|^2$$ où $\xi_{\overrightarrow u_1}$ ~~(resp.$\xi_{\overrightarrow u_2}$) est un vecteur propre normalisé de $\hat S_{\overrightarrow u_1}$ ~~(resp.$\hat S_{\overrightarrow u_2}$) pour la valeur propre $(-1)$ ~~(resp.$(+1)$).\\ Ceci permet de calculer la fonction de corrélation $E(.,.)$ pour deux directions ${\overrightarrow u_1}$ et ${\overrightarrow u'_1}$ relatives à $P_1$ et deux directions ${\overrightarrow u_2}$ et ${\overrightarrow u'_2}$ relatives à $P_2$, puis de calculer la grandeur $S$ définie par (\ref{F+24}). On vérifie ensuite, par de bons choix des directions de mesure ${\overrightarrow u_1}$, ${\overrightarrow u'_1}$, ${\overrightarrow u_2}$, ${\overrightarrow u'_2}$, que $|S|>2$, ce qui contredit l'inégalité de Bell (\ref{F+25}).
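La valeur $|S|=2\sqrt2$ annoncée plus haut peut se retrouver numériquement avec un état intriqué de la forme $\Psi=\frac{1}{\sqrt2}(\xi\otimes\xi+\xi^\bot\otimes\xi^\bot)$ et des mesures dans le plan $(\overrightarrow x,\overrightarrow z)$ décrites par la matrice $\hat M_\theta$ (esquisse hors du texte original; les angles $0$, $\pi/2$, $\pi/4$, $-\pi/4$ sont un choix d'illustration):

```python
import numpy as np

def M(theta):
    # Matrice de mesure de spin dans le plan (x, z), cf. M_theta
    return np.array([[-np.cos(theta), np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

# Etat intrique Psi = (xi ⊗ xi + xi_perp ⊗ xi_perp)/sqrt(2), avec xi = e_1
psi = (np.kron([1.0, 0.0], [1.0, 0.0])
       + np.kron([0.0, 1.0], [0.0, 1.0])) / np.sqrt(2)

def E(tG, tD):
    # Fonction de correlation <Psi, (M(tG) ⊗ M(tD)) Psi>
    return float(psi @ np.kron(M(tG), M(tD)) @ psi)

tG, tGp, tD, tDp = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
S = E(tG, tD) + E(tG, tDp) + E(tGp, tD) - E(tGp, tDp)  # forme (F+24)
assert np.isclose(abs(S), 2 * np.sqrt(2))              # viole |S| <= 2
```

Pour cet état, chaque corrélation vaut $E=\cos(\theta_G-\theta_D)$, et le choix d'angles ci-dessus sature la borne $2\sqrt2$.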
\rmq En physique quantique classique, le fait que, dans un système physique, il existe un nombre précis de particules est une notion bien définie. L'espace des états est alors axiomatiquement donné par le produit tensoriel des espaces des états de chaque type de particule. Dans le cadre de notre théorie, les choses sont profondément différentes, seules les métriques oscillantes sont parfaitement définies. Les singularités (cf.section \ref{s2.11}) qui, lors d'une mesure (cf.\ref{s2.16}), donnent l'équivalent de la présence de particules en physique quantique classique, ne sont gérées que par le principe d'équiprobabilité relative à la métrique $g$ (cf.\ref{s2.12}) et seule une <<~estimation probabiliste~>> a un sens pour nous. Cela fait qu'il est difficile d'établir un parallèle mathématique entre l'axiomatique de la physique quantique classique (qui traite de plusieurs particules en utilisant la notion de produit tensoriel) et les procédés de calculs liés à notre théorie. Ceci apparaîtra clairement dans la sous-section suivante. \subsection{L'intrication quantique décrite par notre théorie}\label{ss+4} Pour la clarté de l'exposé qui va suivre, nous commençons par préciser, sous forme de définitions, les notions que nous allons utiliser. \begin{dfn}\label{d+7} \textbf{Une métrique oscillante double} est une métrique oscillante d'ordre 2 dans un potentiel électromagnétique, de métrique $g=|a|^{\frac{4}{n-2}}g_\mathcal P$, elle est définie sur le domaine déterminé par le <<~double appareil de Stern-Gerlach~>> décrit par la figure \ref{f+2}. Sur le domaine pour lequel $y\in)y_{1D}, y_{3D}($ ~~(resp.$y\in)y_{1G}, y_{3G}($) la métrique est notée $g_D=|a_D|^{\frac{4}{n-2}}g_{\mathcal P_D}$ ~~(resp.$g_G=|a_G|^{\frac{4}{n-2}}g_{\mathcal P_G}$). 
Le domaine pour lequel $y\in)y_{1D}, y_{2D}($ ~~(resp.$y\in)y_{1G}, y_{2G}($) est celui d'un faisceau de métrique oscillante dans un potentiel neutre de spin 1/2 d'axe $\overrightarrow y$ de vitesse $\overrightarrow v$ de même sens que $\overrightarrow y$ ~~(resp.$-\overrightarrow y$). \end{dfn} Les notions introduites dans les définitions suivantes sont essentielles dans la description du phénomène d'<<~intrication quantique~>>. Nous n'utiliserons qu'un cas particulier de la définition (\ref{d+8}) qui va suivre, celle-ci sera utile pour une étude plus générale des phénomènes d'intrication. Le cas particulier que nous allons utiliser ici sera présenté dans la définition (\ref{d+9}).\\ On considère une métrique oscillante élémentaire d'ordre 2 dans un potentiel électromagnétique $g=|a|^{\frac{4}{n-2}}g_\mathcal P$, ainsi qu'un isomorphisme $\sigma$ de l'espace $E_{\lambda,\gamma}:=E_{S^1}(\delta)\otimes E_{S^3}(\gamma)$. Cet espace sera considéré comme espace vectoriel réel de dimension 8 pour lequel on prendra comme base $(\nu_{ij}) ~~i\in\{1,2\} ~~j\in\{1,2,3,4\}$ ~~~où ~~~$\nu_{1j}(u,s):=(\cos u)\alpha_j(s) ~~, ~~\nu_{2j}(u,s):=(\sin u)\alpha_j(s)$.\\ (Ici $(\alpha_1,\alpha_2,\alpha_3,\alpha_4)$ est la base de $E_{S^3}(\gamma)$ définie en \ref{ss+1} exemple 1).\\ La fonction $a$ est de la forme $\phi\beta$ (def. \ref{d2.9})~~ où~~ $\phi:\Theta\times S^1(\delta)\times S^3(\gamma)\rightarrow\mathbb{R}$ s'écrit:\\ $\forall x\in\Theta, ~~~\phi_x(.)=\sum_{i,j}\phi^{ij}(x)\nu_{ij}(.)$.\\ Notons $\phi_\sigma:\Theta\times S^1(\delta)\times S^3(\gamma)\rightarrow\mathbb{R}$ la fonction définie par:\\ $\forall x\in\Theta, ~~~{\phi_\sigma}_x(.)=\sum_{i,j}\phi^{ij}(x)\sigma(\nu_{ij})(.)$ ~~et ~~$a_\sigma:=\phi_\sigma\beta$ \begin{dfn}\label{d+8} \textbf{La $\sigma$-transformée} de la métrique $g=|a|^{\frac{4}{n-2}}g_\mathcal P$ est la métrique $g_\sigma=|a_\sigma|^{\frac{4}{n-2}}g_\mathcal P$. 
\end{dfn} Of course, $g_\sigma$ is again an elementary metric of order 2 with spin 1/2.\\ We now restrict ourselves to the particular case of a homogeneous oscillating metric of order 2 with spin 1/2 which, in what follows, will be the restriction to the domain where $y\in)y_{1D},y_{2D}($ \\(resp. $y\in)y_{1G},y_{2G}($) of a double oscillating metric (Def. \ref{d+7}). We only consider the case where the spin state $\zeta_D\in E_1'^{\mathbb{C}}$ ~~(resp. $\zeta_G$) is an eigenvector of an endomorphism $\overrightarrow S_{\theta_D}$ ~~(resp. $\overrightarrow S_{\theta_G}$) defined by equality (\ref{F+10}). Let $\theta_T\in)-\pi, +\pi)$ and let $g=|a|^{\frac{4}{n-2}}g_0$ be a homogeneous oscillating metric of order 2 with spin 1/2 whose spin state is of the form $\zeta_{\theta,k}=e^{ik}((\cos\theta/2)\beta_1+(\sin\theta/2)\beta_2)$. \begin{dfn}\label{d+9} \textbf{The $\theta_T$-transform of the oscillating metric $g$, denoted $g_{\theta_T}$}, is the $\sigma$-transform of $g$ (Def. \ref{d+8}) whose spin state is $\zeta_{\theta-\theta_T,k}=e^{ik}((\cos\frac{\theta-\theta_T}{2})\beta_1-(\sin\frac{\theta-\theta_T}{2})\beta_2)$. \end{dfn} \textbf{The $\theta_T$-transformations are thus those which produce a rotation of angle $\theta_T/2$ of the spin state}~~(the reader may work out the isomorphisms $\sigma$ (in fact, rotations) of $E_{S^1}(\delta)\otimes E_{S^3}(\gamma)$ corresponding to these $\theta_T$-transformations).
The last notion we introduce is that of ``successive oscillating metrics''; it expresses, in terms of oscillating metrics, what in the language of classical physics would be called ``successive particle flows'', the particles possibly being of different natures on different time intervals.\\ We thus consider a time interval $)t_1,t_m)$ decomposed into successive intervals $I_1,I_2 \dots I_{m-1}$\\ where ~$I_k:=)t_k,t_{k+1})$.\\ A domain of the form $\mathscr{C}=)t_1,t_m)\times\Omega\times S^1(\delta)\times S^3(\rho)\times V$ decomposes as the disjoint union of the subdomains $\mathscr{C}_k:=I_k\times\Omega\times S^1(\delta)\times S^3(\rho)\times V$.\\ We give the following definition: \begin{dfn}\label{d+10} \textbf{A domain of ``successive oscillating metrics'' type} is a domain $(\mathscr{C},g)$ ~where ~$\mathscr{C}=\cup_{k=1}^{m-1}\mathscr{C}_k$ is as presented in the preceding lines. For even indices ($k=2l$), say, the metric $g|_{\mathscr{C}_k}$ is that of an oscillating metric, written $|a_k|^{\frac{4}{n-2}}g_\mathcal P$. For odd indices ($k=2l+1$) the metric $g|_{\mathscr{C}_k}$ is of the form $c_kg_\mathcal P$, where $c_k$ is a nonnegative constant. \end{dfn} This notion of ``successive oscillating metrics'' is, of course, idealized. In what follows the constants $c_k$ may be taken $\ll1$ (when $c_k=0$ the metric is degenerate). \subsubsection{Analysis of the A,G,R-type experiment \cite{aspe} in terms of oscillating metrics} The apparatus considered is the double Stern-Gerlach apparatus described in Figure \ref{f+2}.
We refer to the experiment of \cite{aspe}, although here it is a spin measurement that is involved rather than a photon-polarization measurement; this is because we have not developed the latter notion in this paper, a notion which, for us, belongs to massless oscillating metrics (cf. \ref{s2.14}).\\ In standard quantum physics, the polarization state of the photon pair emitted by an ``atomic cascade'' of calcium atoms in the domain where $y\in)y_{1G},y_{1D}($ is assumed to be the following entangled state (cf. Figure \ref{f+2}): \begin{eqnarray}\label{F+26} \frac{1}{2}(\xi\otimes\xi+\xi^\bot\otimes\xi^\bot) \end{eqnarray} We shall study here the more general case in which the polarization (or spin) state is of the form: \begin{eqnarray}\label{F+27} \frac{1}{2}(\xi_1\otimes\xi_2+\xi_1^\bot\otimes\xi_2^\bot) \end{eqnarray} \textbf{In terms of an oscillating metric of order 2 with spin 1/2, this is replaced by the data presented in the following paragraph}. \subsubsection{The successive double oscillating metrics (Def. \ref{d+7} and Def. \ref{d+10}) created in the domain where $y\in)y_{1G},y_{1D}($} These oscillating metrics will be defined by a succession of double oscillating metrics which may be of two different types, the two types being equiprobably distributed over the time intervals involved.
We begin with the following details:\\ The total time interval of the experiment, $I=)t_1,t_m($, is decomposed as $I=\cup_{k=1}^{m-1}I_k$.\\ We write $\mathscr{C}_{y_i,y_j}$ for the subdomain of $\mathscr{C}=I\times\Omega\times S^1\times S^3\times V$ where $y\in)y_i,y_j($.\\ The double oscillating metrics are built from the following four functions $a_1^G,a_2^G,a_1^D,a_2^D$ ($G$ and $D$ refer to the left and right parts of the apparatus described in Figure \ref{f+2}): \begin{enumerate} \item $a_1^G:\mathscr{C}_{y_{4G},y_{1G}}\rightarrow\mathbb{R}$ is such that $g_1^G:=|a_1^G|^{\frac{4}{n-2}}g_\mathcal P$ is an elementary oscillating metric of order 2 with spin 1/2; its restriction to $\mathscr{C}_{y_{2G},y_{1G}}$ is a beam of oscillating metric (Def. \ref{d+1}) whose spin state has angle $\theta_1^G$ and is denoted $\gamma_{\theta_1^G}$ (Def. \ref{d+5}); its velocity $\overrightarrow v_1^G$ points opposite to $\overrightarrow y$. \item $a_1^D:\mathscr{C}_{y_{1D},y_{4D}}\rightarrow\mathbb{R}$ has the same properties as $a_1^G$; the spin state of the restriction of $g_1^D$ to $\mathscr{C}_{y_{1D},y_{2D}}$ is denoted $\gamma_{\theta_1^D}$, and its velocity is $\overrightarrow v_1^D=- \overrightarrow v_1^G$ (we assume $|\overrightarrow v_1^D|= |\overrightarrow v_1^G|$ only to simplify the conditions on the dimensions of the apparatus ensuring simultaneity of the measurements at $G$ and $D$). \item $a_2^G:\mathscr{C}_{y_{4G},y_{1G}}\rightarrow\mathbb{R}$ has the same properties as $a_1^G$, but the spin state $\gamma_{\theta_2^G}$ of the restriction of $g_2^G$ to $\mathscr{C}_{y_{2G},y_{1G}}$ is the canonical orthogonal $\gamma_{\theta_1^G}^\bot$ of $\gamma_{\theta_1^G}$ (Def. \ref{d+6}).
\item $a_2^D:\mathscr{C}_{y_{1D},y_{4D}}\rightarrow\mathbb{R}$ has the same properties as $a_1^D$, but the spin state $\gamma_{\theta_2^D}$ of the restriction of $g_2^D$ to $\mathscr{C}_{y_{1D},y_{2D}}$ is the canonical orthogonal $\gamma_{\theta_1^D}^\bot$ of $\gamma_{\theta_1^D}$. \end{enumerate} Although the notions are fundamentally different, the reader may draw a parallel between the conditions just imposed on the spin states of the metrics associated with the four functions above and the polarization state (\ref{F+27}) given in standard quantum physics.\\ The effective oscillating metrics which will yield results identical to those of standard quantum physics are specified in the following lines.\\ We write $\mathscr{C}_{y_i,y_j}^k$ for the subdomain of $\mathscr{C}_{y_i,y_j}$ where $t\in I_k$.\\ For an index $k=2l+1$, the metric on $\mathscr{C}$ is of the form $g=c_kg_\mathcal P$, where $c_k$ is a nonnegative constant.\\ For an index $k=2l$, the metric is a double oscillating metric (Def. \ref{d+7}); it is \textbf{equiprobably} one of the two following metrics: \begin{enumerate} \item $g_{\theta_{T_1}}$: the $\theta_{T_1}$-transform (Def. \ref{d+9}) of the metric $g=|a_1|^{\frac{4}{n-2}}g_\mathcal P$, where the function $a_1$ restricted to $\mathscr{C}_{y_{4G},y_{1G}}^{2l}$ is $a_1^G$ and restricted to $\mathscr{C}_{y_{1D},y_{4D}}^{2l}$ is $a_1^D$. \item $g_{\theta_{T_2}}$: the $\theta_{T_2}$-transform of the metric $g=|a_2|^{\frac{4}{n-2}}g_\mathcal P$, where the function $a_2$ restricted to $\mathscr{C}_{y_{4G},y_{1G}}^{2l}$ is $a_2^G$ and restricted to $\mathscr{C}_{y_{1D},y_{4D}}^{2l}$ is $a_2^D$. \end{enumerate} Here are some examples of choices of $\theta_{T_1}$ and $\theta_{T_2}$ which yield the results (identical to those of standard quantum mechanics) stated in Proposition \ref{p+3} below.
A comment on these choices is given after the presentation of the examples.\\ We write $\theta^G$ for the measurement angle of the Stern-Gerlach apparatus on the left and $\theta^D$ for that of the apparatus on the right (Figure \ref{f+2}). \vspace{3mm} \textbf{Example 1}: $\theta_{T_1}=\theta^G-\theta_1^G$ ~~~and ~~~$\theta_{T_2}=\theta^G-\theta_2^G$.\\ (Here the left part of the apparatus is privileged, but one may symmetrically privilege the right part by setting $\theta_{T_1}=\theta^D-\theta_1^D$ ~~~and ~~~$\theta_{T_2}=\theta^D-\theta_2^D$.)\\ The left-right asymmetry of Example 1 may seem unjustified, but note that in the experiment (A,G,R \cite{aspe}) an asymmetry is imposed anyway, since the photons going left and those going right are selected by their frequencies.\\ The next example is left-right symmetric. \vspace{3mm} \textbf{Example 2}: The following four possibilities are equiprobable:\\ $\theta_{T_1}=\theta^G-\theta_1^G$, ~~$\theta_{T_1}=\theta^D-\theta_1^D$, ~~$\theta_{T_2}=\theta^G-\theta_2^G$, ~~$\theta_{T_2}=\theta^D-\theta_2^D$.\\ One may also devise other examples, such as the one for which:\\ $|\theta_{T_1}|=\min(|\theta^G-\theta_1^G|,|\theta^D-\theta_1^D|)$ ~~and ~~$|\theta_{T_2}|=\min(|\theta^G-\theta_2^G|,|\theta^D-\theta_2^D|)$.\\ In fact, a choice of $\theta_{T_1}$ and $\theta_{T_2}$ can only be pinned down by a study of the ``creation'' process of the double oscillating metrics (e.g. a study of the atomic cascade of calcium atoms), but this paper does not yet address this complex subject. Moreover, it is possible that the hypotheses we have laid down to characterize the successive double oscillating metrics will later be substantially modified.
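For illustration only (this numerical sketch is ours, not part of the theory), the equiprobable alternation between $g_{\theta_{T_1}}$ and $g_{\theta_{T_2}}$ in Example 1 can be simulated, assuming the $\cos^2$/$\sin^2$ detection law of Proposition \ref{p+2} for each Stern-Gerlach outcome; all function and variable names below are ours.

```python
import math, random

def run(theta_G, theta_D, theta1_G, theta1_D, n=200_000, seed=1):
    """Monte Carlo of Example 1: on each interval I_{2l} the double metric is
    equiprobably g_{theta_T1} (left spin state rotated onto theta^G) or
    g_{theta_T2} (orthogonal spin states); outcomes follow the cos^2/sin^2
    law assumed from Proposition p+2."""
    rng = random.Random(seed)
    counts = {(+1, +1): 0, (+1, -1): 0, (-1, +1): 0, (-1, -1): 0}
    for _ in range(n):
        if rng.random() < 0.5:   # case g_{theta_T1}
            sG = +1              # seen in omega_1G with probability 1
            p = math.cos(((theta_G - theta1_G) - (theta_D - theta1_D)) / 2) ** 2
            sD = +1 if rng.random() < p else -1
        else:                    # case g_{theta_T2}: orthogonal states
            sG = -1              # seen in omega_2G with probability 1
            p = math.sin(((theta_D - theta1_D) - (theta_G - theta1_G)) / 2) ** 2
            sD = +1 if rng.random() < p else -1
        counts[(sG, sD)] += 1
    return {k: v / n for k, v in counts.items()}

# With theta_1^G = theta_1^D the frequencies should approach
# p(+,+) = p(-,-) = cos^2((theta^G - theta^D)/2)/2 and
# p(+,-) = p(-,+) = sin^2((theta^G - theta^D)/2)/2.
p = run(0.0, math.pi / 4, 0.2, 0.2)
```

The mixture over the two cases is what restores the symmetric joint distribution, even though each individual case is fully asymmetric.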
The main purpose of this section is to show that the ``entanglement'' phenomena studied by standard quantum mechanics can be described qualitatively and quantitatively within our theory.\\ (The process may be briefly summarized by saying that, systematically, a ``rotation'' in $S^1(\delta)\times S^3(\rho)$ of the double oscillating metric $g=|a_1|^{\frac{4}{n-2}}g_\mathcal P$ brings either the angle of the spin state attached to $a_1^G$ into coincidence with the measurement angle $\theta^G$, or the angle of the spin state attached to $a_1^D$ into coincidence with the measurement angle $\theta^D$.)\\ It may seem surprising, at first sight, that the choices of $\theta_{T_1}$ and $\theta_{T_2}$ given in the preceding examples, which yield the results of Proposition \ref{p+3}, depend on the measurement angles $\theta^G$ and $\theta^D$ defined from the magnetic fields $\overrightarrow B_G$ and $\overrightarrow B_D$. These fields are indeed located at a $g_0|_\Theta$-distance from the ``creation'' domain of the oscillating metrics which may be large.\\ \textbf{One must not forget, however, that all the notions introduced about spin (or possibly polarization) concern only the compact manifold $S^1(\delta)\times S^3(\rho)$, and that the latter appears in a Cartesian product with the apparent space $\Theta$ (the objects tied to the notion of spin are not ``contained'' in the apparent space). This departs profoundly from the habits acquired in classical physics, to which no similar procedure can be adapted.
On the other hand, note that during the time intervals $I_{2l+1}$ (immediately before the creation of a double oscillating metric) the metric $g$ is of the form $cg_\mathcal P$ and, if the constant $c$ is $\ll1$, the effective distance, which is the $g|_\Theta$-distance, can be as small as one wishes relative to the ${g_0}|_\Theta$-distance, the latter being the one measured by an observer outside the domain of the experiment}.\\ The ``ghostly link'' of which A. Einstein spoke ironically in connection with two entangled particles is supported, within our theory, by the compact manifold $S^1(\delta)\times S^3(\rho)$.\\ The main result of this section is the following: \begin{prop}\label{p+3} Consider the apparatus described in Figure \ref{f+2}.\\ Let $\theta^G$ ~~(resp. $\theta^D$) be the measurement angle (Def. \ref{d+4}) of the Stern-Gerlach apparatus on the left (resp. right).\\ The double oscillating metrics are those described in the preceding paragraphs, and we place ourselves within one of the examples just presented.\\ We write $\omega_{1G}$ and $\omega_{2G}$ ~(resp. $\omega_{1D}$ and $\omega_{2D}$) for the two domains of $\mathbb{R}^3$ introduced in Propositions \ref{p+1} and \ref{p+2} for the Stern-Gerlach apparatus on the left (resp. right).\\ Suppose that at some time $t\in)t_1,t_2($, a pair of elementary singularities $(\varsigma_G,\varsigma_D)$ is such that $\varsigma_G\in\omega_{1G}\cup\omega_{2G}$ ~and ~$\varsigma_D\in\omega_{1D}\cup\omega_{2D}$.\\ We write $(+,+)$ for the situation in which $\varsigma_G\in\omega_{1G}$ and $\varsigma_D\in\omega_{1D}$, $(+,-)$ for the one in which $\varsigma_G\in\omega_{1G}$ and $\varsigma_D\in\omega_{2D}$, and similarly for $(-,+)$ and $(-,-)$.\\ Then: \begin{enumerate} \item The probabilities of obtaining $(+,+)$, $(+,-)$, $(-,+)$, $(-,-)$ are the following: \begin{eqnarray}\label{F+28} p(+,+)=p(-,-)=\frac{1}{2}\cos^2\frac{(\theta^G-\theta^D)-(\theta_1^G-\theta_1^D)}{2} \end{eqnarray} \begin{eqnarray}\label{F+29} p(+,-)=p(-,+)=\frac{1}{2}\sin^2\frac{(\theta^G-\theta^D)-(\theta_1^G-\theta_1^D)}{2} \end{eqnarray} where $\theta_1^G$ (resp. $\theta_1^D$) is the angle of the spin state of the metric $g_1^G$ ~(resp. $g_1^D$) defined in the preceding paragraphs.\\ These probabilities can also be written as follows: \begin{eqnarray}\label{F+30} p(+,+)=p(-,-)=\frac{1}{2}|\langle\gamma_{\theta^G},\gamma_{\theta_1^G}\rangle\langle\gamma_{\theta^D},\gamma_{\theta_1^D}\rangle+\langle\gamma_{\theta^G},\gamma_{\theta_1^G}^\bot\rangle\langle\gamma_{\theta^D},\gamma_{\theta_1^D}^\bot\rangle|^2 \end{eqnarray} \begin{eqnarray}\label{F+31} p(+,-)=p(-,+)=\frac{1}{2}|\langle\gamma_{\theta^G},\gamma_{\theta_1^G}\rangle\langle\gamma_{\theta^D}^\bot,\gamma_{\theta_1^D}\rangle+\langle\gamma_{\theta^G},\gamma_{\theta_1^G}^\bot\rangle\langle\gamma_{\theta^D}^\bot,\gamma_{\theta_1^D}^\bot\rangle|^2 \end{eqnarray} where $\gamma_{\theta^G}$ ~(resp. $\gamma_{\theta^D}$) is the eigenvector of $\hat S_{\theta^G}$~ (resp. $\hat S_{\theta^D}$) defined by (\ref{F+10}), ~~$\gamma_{\theta_1^G}$ ~(resp. $\gamma_{\theta_1^D}$) is the spin state of the metric $g_1^G$ ~(resp. $g_1^D$), and ~~$\gamma_{\theta_1^G}^\bot=\gamma_{\theta_2^G}$ ~(resp. $\gamma_{\theta_1^D}^\bot=\gamma_{\theta_2^D}$) is the canonical orthogonal of $\gamma_{\theta_1^G}$ ~(resp. $\gamma_{\theta_1^D}$)~ (Def. \ref{d+6}). \item Writing $(+)_G$ ~(resp. $(+)_D$) for the situation in which $\varsigma_G\in\omega_{1G}$ ~(resp. $\varsigma_D\in\omega_{1D}$) and $(-)_G$ ~(resp. $(-)_D$) for the one in which $\varsigma_G\in\omega_{2G}$ ~(resp. $\varsigma_D\in\omega_{2D}$), the probabilities of obtaining the situations $(+)_G$ and $(-)_G$ (independently of the situations $(+)_D$ and $(-)_D$) both equal $\frac{1}{2}$; likewise for those of obtaining $(+)_D$ and $(-)_D$. \end{enumerate} \end{prop} We have thus obtained results identical to those standard quantum mechanics derives from the ``entangled'' state (\ref{F+27}).\\ In the particular case corresponding to the A,G,R-type experiment \cite{aspe}, the spin state of the particle pair is given, in standard quantum mechanics, by (\ref{F+26}). For us, this hypothesis translates, in Proposition \ref{p+3}, into the fact that $\theta_1^G=\theta_1^D$.\\ It follows that:\\ $p(+,+)=p(-,-)=\frac{1}{2}\cos^2\frac{(\theta^G-\theta^D)}{2}$~~ and~~ $p(+,-)=p(-,+)=\frac{1}{2}\sin^2\frac{(\theta^G-\theta^D)}{2}$.\\ The correlation function~ $E_{\theta^G,\theta^D}:=p(+,+)+p(-,-)-p(+,-)-p(-,+)$ therefore satisfies:\\ $E_{\theta^G,\theta^D}=\cos^2\frac{(\theta^G-\theta^D)}{2}-\sin^2\frac{(\theta^G-\theta^D)}{2} =\cos(\theta^G-\theta^D)$.\\ Choosing as measurement angles\\ ($\theta^G=0$ ~or ~$\theta'^G=\pi/2$), ~~~~($\theta^D=\pi/4$ ~or ~$\theta'^D=-\pi/4$),~ the four measurements give:\\ $E_{0,\pi/4}=\frac{\sqrt2}{2}$, ~~~$E_{0,-\pi/4}=\frac{\sqrt2}{2}$, ~~~$E_{\pi/2,\pi/4}=\frac{\sqrt2}{2}$, ~~~$E_{\pi/2,-\pi/4}=-\frac{\sqrt2}{2}$.\\ The quantity $S:=E_{0,\pi/4}+E_{0,-\pi/4}+E_{\pi/2,\pi/4}-E_{\pi/2,-\pi/4}$ then equals $2\sqrt2$, which violates the Bell inequality and agrees with the experimental result of \cite{aspe}.
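As a quick numerical cross-check of the arithmetic above (a Python sketch of ours, not part of the formalism), one can recompute the correlation function and the quantity $S$:

```python
import math

def E(theta_G, theta_D):
    """Correlation function E_{theta^G,theta^D} = cos(theta^G - theta^D),
    obtained as p(+,+) + p(-,-) - p(+,-) - p(-,+) with theta_1^G = theta_1^D."""
    d = theta_G - theta_D
    return math.cos(d / 2) ** 2 - math.sin(d / 2) ** 2  # = cos(d)

# The four angle settings used in the text: (0 or pi/2) against (pi/4 or -pi/4).
S = E(0, math.pi / 4) + E(0, -math.pi / 4) \
    + E(math.pi / 2, math.pi / 4) - E(math.pi / 2, -math.pi / 4)
# S = 2*sqrt(2) ~ 2.828, exceeding the bound 2 of the Bell (CHSH) inequality
```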
This shows that the theory presented here is not akin to a local hidden-variable theory.\\ \textbf{Proof of Proposition \ref{p+3}}\\ We treat Example 1; the other examples are handled in the same way. \begin{enumerate} \item Suppose that at the time $t\in I_{2l}$ of the measurement, the double oscillating metric is $g_{\theta_{T_1}}$.\\ Proposition \ref{p+2} shows that the probability that $\varsigma_G$ is seen in $\omega_{1G}$ is\\ $\cos^2\frac{(\theta^G-\theta_1^G)-(\theta^G-\theta_1^G)}{2}=1$, and the probability that it is seen in $\omega_{2G}$ is zero.\\ The probability that $\varsigma_D$ is seen in $\omega_{1D}$ is $\cos^2\frac{(\theta^G-\theta_1^G)-(\theta^D-\theta_1^D)}{2}$, and the probability that it is seen in $\omega_{2D}$ is $\sin^2\frac{(\theta^G-\theta_1^G)-(\theta^D-\theta_1^D)}{2}$. The probability of obtaining the situation $(+,+)$ is therefore $\cos^2\frac{(\theta^D-\theta_1^D)-(\theta^G-\theta_1^G)}{2}$, that of obtaining $(+,-)$ is $\sin^2\frac{(\theta^D-\theta_1^D)-(\theta^G-\theta_1^G)}{2}$, and the probabilities of obtaining $(-,+)$ and $(-,-)$ are zero. \item Suppose that at the time $t\in I_{2l}$ of the measurement, the double oscillating metric is $g_{\theta_{T_2}}$.\\ Proposition \ref{p+2} here shows that the probability that $\varsigma_G$ is seen in $\omega_{1G}$ is\\ $\cos^2\frac{(\theta^G-\theta_1^G)-(\theta^G-\theta_2^G)}{2}=0$, since $\theta_2^G$ is the angle of the canonical orthogonal $\gamma_{\theta_1^G}^\bot$ and therefore satisfies, by (\ref{F+20}), $\theta_2^G=\theta_1^G-\pi$.
The probability that $\varsigma_G$ is seen in $\omega_{2G}$ is then equal to $1$.\\ The probability that $\varsigma_D$ is seen in $\omega_{1D}$ is\\ $\cos^2\frac{(\theta^D-\theta_1^D)-(\theta^G-\theta_2^G)}{2}=\sin^2\frac{(\theta^D-\theta_1^D)-(\theta^G-\theta_1^G)}{2}$, and the probability that it is seen in $\omega_{2D}$ is $\cos^2\frac{(\theta^D-\theta_1^D)-(\theta^G-\theta_1^G)}{2}$.\\ The probabilities of obtaining the situations $(+,+)$ and $(+,-)$ are therefore zero; the probability of obtaining $(-,+)$ is $\sin^2\frac{(\theta^D-\theta_1^D)-(\theta^G-\theta_1^G)}{2}$, and that of obtaining $(-,-)$ is $\cos^2\frac{(\theta^D-\theta_1^D)-(\theta^G-\theta_1^G)}{2}$. \end{enumerate} Since, by hypothesis, the situations presented are the only possible ones and are equiprobable, the equalities (\ref{F+28}) and (\ref{F+29}) follow.\\ The equalities (\ref{F+30}) and (\ref{F+31}) are quickly proved since: $$\gamma_{\theta^G}=((\cos\tfrac{\theta^G}{2})\beta_1-(\sin\tfrac{\theta^G}{2})\beta_2)e^{ik_G} ~~~~\gamma_{\theta^G}^\bot=((\sin\tfrac{\theta^G}{2})\beta_1+(\cos\tfrac{\theta^G}{2})\beta_2)e^{ik_G}$$ $$\gamma_{\theta_1^G}=((\cos\tfrac{\theta_1^G}{2})\beta_1-(\sin\tfrac{\theta_1^G}{2})\beta_2)e^{ik_{G_1}} ~~~~\gamma_{\theta_1^G}^\bot=((\sin\tfrac{\theta_1^G}{2})\beta_1+(\cos\tfrac{\theta_1^G}{2})\beta_2)e^{ik_{G_1}}$$ where $k_G$ and $k_{G_1}$ are real numbers.\\ Similarly for $\gamma_{\theta^D}$, $\gamma_{\theta^D}^\bot$, $\gamma_{\theta_1^D}$, $\gamma_{\theta_1^D}^\bot$.\\ Part 2 of Proposition \ref{p+3} is an immediate consequence of the symmetry of the data.
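The spinor identity invoked at the end of the proof can also be verified numerically; the following sketch (names and angle sampling are ours) checks that the inner-product expression of (\ref{F+30}) coincides with the closed form (\ref{F+28}) for random angles:

```python
import cmath, math, random

def spinor(theta, k=0.0):
    # gamma_theta = (cos(theta/2) beta_1 - sin(theta/2) beta_2) e^{ik},
    # represented by its components on (beta_1, beta_2)
    p = cmath.exp(1j * k)
    return (math.cos(theta / 2) * p, -math.sin(theta / 2) * p)

def perp(theta, k=0.0):
    # canonical orthogonal: (sin(theta/2) beta_1 + cos(theta/2) beta_2) e^{ik}
    p = cmath.exp(1j * k)
    return (math.sin(theta / 2) * p, math.cos(theta / 2) * p)

def herm(u, v):
    # Hermitian product on the (beta_1, beta_2) components
    return u[0].conjugate() * v[0] + u[1].conjugate() * v[1]

random.seed(0)
for _ in range(100):
    tG, tD, t1G, t1D = (random.uniform(-math.pi, math.pi) for _ in range(4))
    gG, gD, g1G, g1D = spinor(tG), spinor(tD), spinor(t1G), spinor(t1D)
    # (F+30): (1/2) |<gG,g1G><gD,g1D> + <gG,g1G^bot><gD,g1D^bot>|^2
    pp = 0.5 * abs(herm(gG, g1G) * herm(gD, g1D)
                   + herm(gG, perp(t1G)) * herm(gD, perp(t1D))) ** 2
    # (F+28): (1/2) cos^2( ((tG - tD) - (t1G - t1D)) / 2 )
    pp_closed = 0.5 * math.cos(((tG - tD) - (t1G - t1D)) / 2) ** 2
    assert abs(pp - pp_closed) < 1e-12
```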
\section[Some remarks for the sequel]{Some remarks on the forthcoming description of more complex phenomena}\label{s2.17} In the second chapter of this paper we restricted ourselves to the study of the physical phenomena involved in the generic experiments of standard quantum mechanics (diffraction, Young's slits, deflection by a potential, the Stern-Gerlach experiment, quantum entanglement, etc.). With the viewpoint on physics adopted here, we considered only domains of ``oscillating metric in a potential'' type (staying within the linear approximation (cf. Section \ref{s2.3})), and the elementary singularities (cf. Section \ref{s2.11}) played only a secondary role, entering only upon ``measurements'' (measurements of position or otherwise (cf. Section \ref{s2.16})). The more complex phenomena we now wish to address are those which, in the language of standard physics, involve the ``interactions'' of particles with one another.\\ The following two simplifications will be called into question: \begin{enumerate} \item the use of the linear approximation, hence of equation \ref{F1} (whereas the equation accounting for a form of interaction between particles is the nonlinear equation \ref{F0'}); \item the assumption that the singularities have only a negligible influence on the oscillating metrics. \end{enumerate} In what follows we very briefly present a few elements of study in which the two preceding simplifications are no longer necessarily made, describing very particular examples with the sole aim of showing the difficulties we shall face.
\subsection{Dropping the linear approximation while still neglecting the possible influence of singularities on the equations governing metrics conformal to a potential}\label{ssn2.2} We describe an experiment which, in the language of standard physics, corresponds to sending two flows of particles launched at opposite velocities of modulus $\lambda$ and meeting at time $t=0$ on the surface of equation $x^1=0$. To this end we consider two domains of $\mathbb{R}^4$, denoted $\Theta_1$ and $\Theta_2$, defined by: $\Theta_1=\{(t,x)\in \mathbb{R}^4 ~~~ /t<t_0<0,~~~ x^1<\lambda t<0,~~~ (x^2,x^3)\in\omega\subset\mathbb{R}^2\}$ $\Theta_2=\{(t,x)\in \mathbb{R}^4 ~~~ /t<t_0<0,~~~ x^1>-\lambda t>0,~~~ (x^2,x^3)\in\omega\subset\mathbb{R}^2\}$ We assume that the domain $\mathscr{C}_1:=\Theta_1\times S^1(\delta)\times S^3(\rho)\times V$ is a domain of constant-scalar-curvature metric type conformal to $g_0$ (Def. \ref{d2.1}), whose metric $g_1=|a_1|^{\frac{4}{n-2}}g_0$ is such that the spectral decomposition of the function $a_1$ has the form (cf. Subsection \ref{ssn1}):\\ $$a_{1}(x,.)=\sum_{i=1}^{\infty}\varphi_i(x)\alpha_i(.)$$ where $(\alpha_i)$ is an orthonormal Hilbert basis of eigenfunctions of $(S^1(\delta)\times S^3(\rho)\times V,g_0)$. We assume that this spectral decomposition has a principal term representing a \textbf{homogeneous} elementary oscillating metric moving with velocity $\vec{v}$ along the $x^1$ axis with modulus $|\vec v|=\lambda$, the other terms of the spectral decomposition (the ``harmonics'') being negligible relative to the principal term (note that a function $a_1$ corresponding to a homogeneous elementary oscillating metric cannot satisfy the nonlinear equation \ref{F0'}, since a solution of such an equation necessarily contains infinitely many nonzero terms in its decomposition).
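As a toy illustration of ours (not the paper's construction), the idea of a spectral decomposition with one dominant principal term and negligible harmonics can be made concrete on the fiber $S^1$ alone, where the eigenfunctions are Fourier modes:

```python
import numpy as np

# Toy analog of a(x, .) = sum_i phi_i(x) alpha_i(.): restrict to the S^1 fiber,
# where the alpha_i are Fourier modes; a "principal term" dominates when one
# coefficient is much larger than the harmonics.
N = 256
u = np.linspace(0, 2 * np.pi, N, endpoint=False)
a = 1.0 * np.cos(3 * u) + 0.01 * np.cos(7 * u)   # principal mode + small harmonic

coeffs = np.fft.rfft(a) / N * 2                   # amplitude of each mode k >= 1
amps = np.abs(coeffs)
# amps[3] ~ 1 (principal term), amps[7] ~ 0.01 (negligible harmonic)
```

The nonlinear equation would couple these modes, which is precisely why a single-mode (homogeneous) solution cannot survive outside the linear approximation.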
We assume that the domain $\mathscr{C}_2:=\Theta_2\times S^1(\delta)\times S^3(\rho)\times V$ satisfies the same properties with $g_2=|a_2|^{\frac{4}{n-2}}g_0$, the velocity corresponding to the principal term being $-\vec v$ (we do not necessarily assume that the principal term represents an oscillating metric of the same type as the one defined on $\mathscr{C}_1$). The velocity $\vec v$ is chosen so that the ``meeting'' of the two oscillating metrics takes place at $t=0$. The metric is assumed close to $g_0$ on the domain $\mathscr{C}_{t_0}-(\mathscr{C}_1 \cup \mathscr{C}_2)$, where $\mathscr{C}_{t_0}:=\{t<t_0\}\times\mathbb{R}^3\times S^1(\delta)\times S^3(\rho)\times V$. The problem is then to determine the function $a$ (or at least to obtain information about it) defined on $\mathscr{C}=\mathbb{R}^4\times S^1(\delta)\times S^3(\rho)\times V$ and corresponding to a constant-scalar-curvature metric $g=|a|^{\frac{4}{n-2}}g_0$ (i.e. one satisfying equation \ref{F0'} with $g_\mathcal{P}=g_0$), this function being equal to the one just specified on $\mathscr{C}_{t_0}$. This last point is the ``boundary condition'' imposed on the solution of equation \ref{F0'}. The technical difficulty of this study is considerable, and one will no doubt restrict oneself to evaluating the principal terms of the spectral decomposition of the function $a$. These terms may represent oscillating metrics of types different from those entering the boundary condition for $t<t_0$. It is conceivable that terms appear representing oscillating metrics ``of limited lifetime'', which we present later in this section. \subsection{Singularities and their influence} Until now we have given no precise expressions for metrics in the neighborhood of singularities (cf. Section \ref{s2.11}), as this was not necessary for the phenomena we studied.
The behavior of metrics near singularities will become an important ingredient in the precise description of the notions which, in the language of standard physics, concern ``composite particles'', ``atomic nuclei'', ``atoms'', etc. Of course, for us these notions will remain within the theory already presented in terms of ``deformations of the $n$-dimensional spacetime'', and more precisely within the study of particular domains of ``constant-scalar-curvature metric conformal to a potential'' type; but precise results can only be obtained with a sufficiently detailed description of the asymptotic behavior of the metric near a singularity. Examples of asymptotic behaviors respecting the imposed geometric criteria were presented in the manuscript \cite{vaugon-2}, for singularities of different ``dimensions'' (relative to the corresponding submanifolds). We restrict ourselves here to a few ``naive'' examples, with the sole aim of initiating a deeper study. The typical cell considered is of the form $\mathscr{C}=I\times\Omega\times W$ where $W=S^1(\delta)\times S^3(\rho)\times V$. We write $(t,x^1,x^2,x^3)$ for the coordinates on $I\times\Omega$, assume $0\in\Omega$, and set $r=(\sum_{k=1}^3(x^k)^2)^{1/2}$. \subsubsection{Example 1} Consider a constant-scalar-curvature metric conformal to a neutral potential, $g=|a|^{\frac{4}{n-2}}g_0$, where the function $a$ has the following form: \begin{eqnarray}\label{Fn69} a=\frac{c}{r}a_1+a_2 \end{eqnarray} where $c$ is a constant. We assume that the metrics $|a_1|^{\frac{4}{n-2}}g_0$ and $|a_2|^{\frac{4}{n-2}}g_0$ are those of elementary oscillating metrics (Def. \ref{d2.3}) and therefore satisfy $\Box_{g_0}a_i+Sa_i=0$.
When the function $a_1$ does not depend on the variables $(x^1,x^2,x^3)$, we deduce, since $\Box_{g_0}(\frac{1}{r})=\Delta(\frac{1}{r})=0$, that $\Box_{g_0}a+Sa=0$ on $\mathscr{C}-\mathcal{S}$ where $\mathcal{S}=I\times\{0\}\times W$. For $(t,u)\in I\times S^1(\delta)$ the singularity $\mathcal{S}(t,u)$ is thus of the form $\mathcal{S}(t,u)=\{t\}\times\{0\}\times\{u\}\times S^3(\rho)\times V$. The function $a$ is therefore that of an oscillating metric with a singularity (stationary at $0\in\Omega$). We assume that the functions $a_1$ and $a_2$ are of the same order of magnitude. As an example, and to make contact with standard physics, suppose that the oscillating metric $g:=|a|^{\frac{4}{n-2}}g_0$ is the one associated with a (stationary) proton. Outside a domain very close to the singularity we have $a\simeq a_2$. The function $a_2$ is thus the one characterizing the ``proton'' oscillating metric. When the singularity is ``neglected'', it is the function $a_2$ which carries, as we have seen, the important characteristics (mass, electric charge, spin, etc.) and which is used in the description of the standard experiments (diffraction, etc.). The metric $g_1:=|\frac{c}{r}a_1|^{\frac{4}{n-2}}g_0$ becomes dominant only when $\frac{c}{r}>1$, so that the function $a_2$ becomes negligible compared with $\frac{c}{r}a_1$. The function $a_1$ may therefore be regarded as the one describing the \textbf{internal constitution of the proton}. The spectral decomposition of the function $a_1$ may possibly involve, among its dominant terms, the notion of ``quark'', and it is likely that important characteristic quantities come from the compact manifold $V$ in the decomposition of $W$ as $S^1(\delta)\times S^3(\rho)\times V$ (it may be that the group $SU(3)$ is naturally associated with $V$, just as $U(1)$ was associated with $S^1(\delta)$ (to which it is diffeomorphic) and $SU(2)$ with $S^3(\rho)$).
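The harmonicity of $1/r$ used in this argument can be checked symbolically; the small sketch below (ours, assuming sympy is available) verifies it on $\mathbb{R}^3$ minus the origin:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
r = sp.sqrt(x1**2 + x2**2 + x3**2)

def laplacian(f):
    # ordinary (analysts') Laplacian on R^3
    return sum(sp.diff(f, v, 2) for v in (x1, x2, x3))

# 1/r is harmonic away from the origin, so the factor c/r does not
# obstruct the linear equation outside the singularity
lap_one_over_r = sp.simplify(laplacian(1 / r))   # simplifies to 0
```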
Note that the metric $g_1$ can be written $g_1:=|ca_1|^{\frac{4}{n-2}}g'_0$ where $g'_0:=\frac{1}{r^{4/(n-2)}}g_0$, and that $g'_0$ is a metric ``inverted'' with respect to $g_0$ (when $n=6$, the metric $g'_0=\frac{1}{r}g_0$ is, on $\mathbb{R}^{3*}$, the one corresponding to the inversion $\varphi:\mathbb{R}^{3*}\rightarrow\mathbb{R}^{3*}$ defined by $\varphi(x)=\frac{x}{r^2}$ where $x=(x^1,x^2,x^3)$, and one has $\varphi^*g_0=g'_0$). In this example we used the linear approximation, since the functions considered are solutions of equation \ref{F1}. When the functions $a_1$ and $a_2$ are $\ll1$ and $\frac{c}{r}$ is $<1$, this may be considered a good approximation of the general case governed by equation \ref{F0'}; however, this is no longer true when $\frac{c}{r}\gg1$, that is, in the description of the internal constitution of the proton. \begin{rmq} One may possibly consider that the term $\frac{c}{r}a_1$ in the expression of the function $a$ describing the singularity arises from a ``collapse'' of an oscillating metric characterized by the function $a_2$, but this is pure speculation. \end{rmq} \subsubsection{Example 2 (a naive description of an atom)} On the cell $\mathscr{C}$ we consider the constant-scalar-curvature metric conformal to a potential, $g:=|a|^{\frac{4}{n-2}}g_0$, where the function $a$ has the following form: \begin{eqnarray}\label{Fn70} a=\frac{c}{r}(\varepsilon \cos(\lambda_1r)a_1+\sin(\lambda_2r)a_2)+a_3 \end{eqnarray} where $c$, $\lambda_1$, $\lambda_2$ are constants.\\ We impose the following conditions: the functions $a_1$, $a_2$ and $a_3$ do not depend on the variables $x^1,x^2,x^3$ of $\Omega$, which makes the computations very simple (and in particular gives the function $a$ spherical symmetry); the functions $a_1$ and $a_2$ satisfy $\Box_{g_0}a_1=k_1a_1$ and $\Box_{g_0}a_2=k_2a_2$ (and are therefore not directly associated with elementary oscillating metrics).
The metric $|a_3|^{\frac{4}{n-2}}g_0$ is that of an elementary oscillating metric (and therefore carries the important characteristics: mass, electric charge, spin, etc., as in the previous example). It satisfies equation \ref{F1}. The constants are chosen so that $c\lambda_2<\varepsilon\ll1$; the functions $a_1$, $a_2$ and $a_3$ are assumed ``of the same order of magnitude'' and $\ll1$. One deduces:\\- when $r\ll c\varepsilon$, the terms in $a_2$ and $a_3$ are negligible compared with the term $\frac{c\varepsilon}{r}\cos(\lambda_1r)a_1$;\\ - when $c\varepsilon<r\ll c$, the terms in $a_1$ and $a_3$ are negligible compared with the term $\frac{c}{r}\sin(\lambda_2r)a_2$;\\ - when $r\gg c$, the terms in $a_1$ and $a_2$ are negligible compared with the term $a_3$.\\ Since $\Delta(\frac{1}{r}\cos(\lambda_1r))=\lambda_1^2(\frac{1}{r}\cos(\lambda_1r))$ and $\Delta(\frac{1}{r}\sin(\lambda_2r))=\lambda_2^2(\frac{1}{r}\sin(\lambda_2r))$ (the geometers' $\Delta$), when $k_1+\lambda_1^2+S=k_2+\lambda_2^2+S=0$ the function $a$ satisfies $\Box_{g_0}a+Sa=0$ and is therefore that of an oscillating metric with a (stationary) singularity, just as in the first example, but here the singularity is more complex. The term $\frac{c\varepsilon}{r}\cos(\lambda_1r)a_1$ characterizes the nucleus of an atom (dominant for $r\ll c\varepsilon$). The term $\frac{c}{r}\sin(\lambda_2r)a_2$ characterizes the electronic domain of the atom (dominant for $c\varepsilon<r\ll c$). The term $a_3$ characterizes the ``oscillating metric'' of the atom when the singularity is neglected. The function $\sin(\lambda_2r)$ specifies the ``shell structure'' of the electronic domain of the atom, the function $\cos(\lambda_1r)$ that of the structure of its nucleus.
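The two radial eigenfunction identities invoked here can be verified symbolically; the sketch below (ours, using the geometers' sign convention for $\Delta$) evaluates the residuals at an arbitrary point:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
lam = sp.symbols('lambda', positive=True)
r = sp.sqrt(x1**2 + x2**2 + x3**2)

def laplacian_geom(f):
    # geometers' sign convention: Delta f = -(f_x1x1 + f_x2x2 + f_x3x3)
    return -sum(sp.diff(f, v, 2) for v in (x1, x2, x3))

# Check Delta(cos(lam r)/r) = lam^2 cos(lam r)/r and the analogous sin identity
point = {x1: sp.Rational(7, 10), x2: sp.Rational(-3, 10),
         x3: sp.Rational(11, 10), lam: sp.Rational(5, 2)}
residuals = []
for f in (sp.cos(lam * r) / r, sp.sin(lam * r) / r):
    residuals.append(abs((laplacian_geom(f) - lam**2 * f).subs(point).evalf()))
# both residuals vanish up to numerical round-off
```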
Of course, the example of the atom just presented is very ``naive''; in particular, the spherical symmetry on $\Omega$ assumed for the function $a$ is very restrictive. Moreover, just as in the previous example, we worked within the linear approximation (but this allowed us to give ``explicit formulas''). Using the non-linear equation \ref{F0'} will certainly modify things profoundly, at least in the neighborhood of the singularities, that is, in the study of the structure of the nucleus of the atom. \subsubsection{Particle collisions} The oscillating metrics with singularities described in the two previous examples, for which the function $a$ satisfied equation \ref{F0'} or \ref{F1}, have a ``stationary'' singularity on $\Omega$ relative to the standard cell $\mathscr{C}=\Theta\times W$, where $W=S^1(\delta)\times S^3(\rho)\times V$. We consider an ``extended'' Lorentz transformation $\sigma:\mathscr{C}'\rightarrow\mathscr{C}$, where $\mathscr{C}'=\Theta'\times W$, coming from a standard Lorentz transformation $\Lambda:\Theta'\rightarrow\Theta$ (cf.\ remark \ref{r7}). Using the function $a\circ\sigma$ on $\mathscr{C}'$, a domain of type ``oscillating metric with a singularity stationary relative to $\mathscr{C}$'' becomes a domain of type ``oscillating metric with a singularity moving at a velocity $\vec v$ relative to $\mathscr{C}'$'', provided we choose the Lorentz transformation $\Lambda$ expressing the fact that an observer associated with $\mathscr{C}'$ moves at a velocity $-\vec v$ relative to an observer associated with $\mathscr{C}$. We now return to the experiment described at the beginning of this section (cf.\ \ref{ssn2.2}).
We consider the two domains $\mathscr{C}_1$ and $\mathscr{C}_2$ on which the metrics are respectively $g_1=|a_1|^{4/(n-2)}g_0$ and $g_2=|a_2|^{4/(n-2)}g_0$, but we now take for $a_1$ and $a_2$ functions associated with oscillating metrics with a singularity moving at a velocity $\vec v$ (respectively $-\vec v$) along the $x^1$-axis, with modulus $|\vec v|=\lambda$. This choice of ``position'' of the singularities in $\mathbb{R}^3$ (not necessarily on the same axis) is an important datum. It constitutes the boundary conditions of the experiment under consideration, which, in the language of classical physics, corresponds to a ``collision'' of particles. One may consider, for instance, that the function $a_1$ characterizes a proton (cf.\ example 1) moving at a velocity $\vec v$ and that the function $a_2$ characterizes an anti-proton (for which, in particular, $Q$ becomes $-Q$) moving at a velocity $-\vec v$. Our task is then to study the function $a$ defined on $\mathscr{C}$ corresponding to a metric with constant scalar curvature (hence satisfying the non-linear equation \ref{F0'} with $g_\mathcal{P}=g_0$) whose boundary conditions are those just specified. The difficulty of this study is even greater than that presented in \ref{ssn2.2}, given the complexity of the metrics with singularities involved. It is, again, conceivable that the spectral decomposition of the function $a$ contains ``principal terms'' representing oscillating metrics that do not exist in the boundary conditions (particle creation), and that some terms have a limited ``lifetime''. These last notions are presented in the next subsection.
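The kinematic statement underlying this setup — a singularity at rest relative to $\mathscr{C}$ is seen moving at velocity $v$ by an observer attached to $\mathscr{C}'$ who moves at $-v$ — can be checked on the coordinates of $\Theta$ alone. The sketch below is only that check (units with the speed of light set to $1$; the compact factors of $W$, on which the extended transformation $\sigma$ acts trivially, are left aside):

```python
import sympy as sp

t, x, v = sp.symbols('t x v', real=True)
gamma = 1 / sp.sqrt(1 - v**2)            # Lorentz factor, c = 1

# Boost sending C-coordinates (t, x) to C'-coordinates (t', x'),
# for an observer attached to C' moving at velocity -v relative to C:
t_prime = gamma * (t + v*x)
x_prime = gamma * (x + v*t)

# Worldline of a singularity stationary in C: x = 0.
velocity_in_Cprime = sp.simplify((x_prime / t_prime).subs(x, 0))
assert velocity_in_Cprime == v           # it moves at +v relative to C'
```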
\subsection{Lifetime, creation, annihilation} We begin with the following remark: On a standard cell $\mathscr{C}=\Theta\times S^1(\delta)\times V$, consider the metric $g=|a|^{4/(n-2)}g_0$ where, for $t>0$ and with $K$ a positive constant: \begin{eqnarray}\label{Fn71} a=e^{-Kt}a_0 \end{eqnarray} We assume that the function $a_0$ defined on $\mathscr{C}$ does not depend on the variable $t$ and satisfies\\ $\Box_{g_0}a_0=\nu a_0$ (one may take, for example, $a_0=C \beta\cos(Qu-\sum_{k=1}^3x^k)$ where $\beta\in E_V(\mu)$, in which case $\nu=-Q^2+3+\mu$). Then $\Box_{g_0}a+Sa=(K^2+\nu+S)a$. When $K^2+\nu+S=0$, the function $a$ defined by \ref{Fn71} satisfies the same equation as that corresponding to an elementary oscillating metric. The important characteristics of this oscillating metric are the constants $Q$ and $\mu$ ($\mu$ possibly decomposing and introducing the notion of spin, for example). The constant $K$ appears as a characteristic analogous to the mass frequency (although here the term ``frequency'' is no longer appropriate) but with the opposite sign. Recall that the mass frequency $M>0$ was defined (def.\ \ref{d2.8}) by $M^2=\mu+S-Q^2$ for elementary oscillating metrics, since it corresponds exactly to the usual notion of mass, as shown by the equations given in theorems \ref{2.1} and \ref{2.2}. But these theorems only make sense for oscillating metrics for which the mass frequency is well defined, and do not concern metrics of the form $g=|a|^{4/(n-2)}g_0$ where the function $a$ satisfies \ref{Fn71}.
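For the sample choice of $a_0$ above, the eigenvalue relation can be verified directly. The sketch below takes $\beta\equiv1$ (so $\mu=0$) and restricts the d'Alembertian $\Box_{g_0}=-\nabla^i\nabla_i$ to the variables $(t,u,x^1,x^2,x^3)$ with $t$ and $u$ timelike, an assumption made explicit in the code:

```python
import sympy as sp

t, u, x1, x2, x3 = sp.symbols('t u x1 x2 x3', real=True)
K, Q = sp.symbols('K Q', positive=True)

def box(f):
    # Box = -nabla^i nabla_i restricted to (t, u, x^1, x^2, x^3),
    # signature (-,+,+,+,-): acts as +d_t^2 + d_u^2 - sum_k d_{x^k}^2
    return (sp.diff(f, t, 2) + sp.diff(f, u, 2)
            - sum(sp.diff(f, xk, 2) for xk in (x1, x2, x3)))

a0 = sp.cos(Q*u - x1 - x2 - x3)               # beta = 1, i.e. mu = 0
assert sp.simplify(box(a0) - (-Q**2 + 3)*a0) == 0       # nu = -Q^2 + 3
a = sp.exp(-K*t) * a0
assert sp.simplify(box(a) - (K**2 - Q**2 + 3)*a) == 0   # Box a = (K^2 + nu) a
```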
These metrics (of ``pseudo-mass'' $K$) may possibly appear through important terms in the spectral decomposition of functions that are solutions of equation \ref{F0'} and describe the results of experiments of the type just presented (encounters of oscillating metrics, particle collisions, etc.).\\ Note that if a measuring instrument creates a domain $\mathscr{C}=]t_0, t_0+T[\times B\times S^1(\delta)\times W$ (cf.\ section \ref{s2.16}) such that the metric is of the form $g=|a|^{4/(n-2)}g_0$ with $a=Ce^{-Kt}\beta\cos(Qu-\sum_{k=1}^3x^k)$, the probability that a singularity is found at an instant $t$ in $\mathscr{H}_t=\{t\}\times B\times W$ (after an ``average'' over $u\in S^1(\delta)$) is of the form $C^2e^{-2Kt}$. This describes a phenomenon of \textbf{annihilation} of the ``pseudo oscillating metric'' (whose characteristic constants are $Q$ and $\mu$), for which the mean lifetime is $1/(2K)$. (Note that there is no reason to assume $K^2+\nu+S=0$, since $e^{-Kt}a_0$ is only a particular term of a spectral decomposition.) By analogy, one may consider metrics of the form $g=|a|^{4/(n-2)}g_0$ where, for $t>0$, $a=(1-e^{-Kt})a_0$, which describes a phenomenon of \textbf{creation} of a ``pseudo oscillating metric''. Of course, the examples of ``pseudo oscillating metrics'' just presented are very particular, and one can define, in a more general setting, ``pseudo oscillating metrics'' with analogous properties: characteristic constants, lifetime (related to the pseudo-mass), etc. \bigskip To be continued ... \chapter{Appendices} \section{Proof of theorem \protect\ref{t1.1}\label{a3.1}} \begin{enumerate} \item \textbf{We show that an eigenspace $E$ of the endomorphism $\leftidx{^e}{G}|_{H_x}$ that is of dimension 1 and timelike is necessarily unique in $H_x$}.
Suppose $X_0$ and $X'_0$ are two timelike eigenvectors of $\leftidx{^e}{G}|_{H_x}$, with respective eigenvalues $\lambda$ and $\lambda'$. We have: $G_x(X_0,X'_0)=\lambda g(X_0,X'_0)$ and $G_x(X'_0,X_0)=\lambda' g(X'_0,X_0)$, hence ~$\lambda=\lambda'$, since, by the signature of $g|_{H_x}$, necessarily $g(X_0,X'_0)\neq0$.\\ $X'_0$ is therefore an eigenvector for the eigenvalue $\lambda$. Since the eigenspace is of dimension 1, $X'_0$ is proportional to $X_0$. \item Let $(\sigma_s)_{s\in\mathbb{R}}$ denote the one-parameter group of diffeomorphisms of the field $Y$. \textbf{We show that the field $X_0$ and the functions $\mu$ and $\rho$ are invariant under $\sigma_s$}. By definition, $\forall s\in\mathbb{R}$, ~~~$\sigma_{s_*}(Y)=Y$. On the other hand, $\forall x\in\mathscr{D}$, ~$\forall Z\in T_x(W_x)$,\\ $\sigma_{s_{*x}}(Z)\in T_{\sigma_s(x)}(S^1_{\sigma_s(x)})\oplus T_{\sigma_s(x)}(W_{\sigma_s(x)})$. This point can be checked, for example, by considering the Jacobian matrix of $\sigma_{s_*}$ relative to a chart of the observation atlas chosen at $x$. It follows that $\forall s\in\mathbb{R}$, \\ $\sigma_{s_*}(T_x(S^1_x)\oplus T_x(W_x))= T_{\sigma_s(x)}(S^1_{\sigma_s(x)})\oplus T_{\sigma_s(x)}(W_{\sigma_s(x)})$. (The fact that the $\sigma_s$ are isometries was not used to obtain this equality.) Since the $\sigma_s$ are isometries, we deduce that: $\forall s\in\mathbb{R}$, ~~~$\sigma_{s_*}(H_x)=H_{\sigma_s(x)}$. Given the uniqueness of the eigenspace shown in 1., we have: $\forall s\in\mathbb{R}$, ~$\forall x\in\mathscr{D}$, ~~~$\sigma_{s_{*x}}(X_{0_x})=\pm X_{0_{\sigma_s(x)}}$. We now consider a chart of the observation atlas $(\mathscr V,\varphi)$ at $x$.
$\forall x'\in\mathscr V$, ~~~$g(X_{0_{x'}}, \varphi^*(\frac{\partial}{\partial t}))<0$ ~by the chosen orientation, hence: $\forall s\in\mathbb{R}$,\\ (*) ~~~$g(X_{0_{\sigma_s(x)}},\varphi^*(\frac{\partial}{\partial t}))=\pm g(\sigma_{s_{*x}}(X_{0_x}),\varphi^*(\frac{\partial}{\partial t}))<0$. But for $s=0$, ~~$\sigma_s=I_d$ ~~and ~~$g(X_{0_x},\varphi^*(\frac{\partial}{\partial t}))<0$. By continuity in ``$s$'' of $g(\sigma_{s_{*x}}(X_{0_x}),\varphi^*(\frac{\partial}{\partial t}))$, which does not vanish, we deduce that the sign in (*) is necessarily ``$+$''. Then, $\forall s\in\mathbb{R}$, $\forall x \in\mathscr{D}$, ~~~$\sigma_{s_{*x}}(X_{0_x})=X_{0_{\sigma_s(x)}}$. Since $G$, $Y$, $X_0$ are invariant under the $g$-isometries $\sigma_s$, it now follows easily that the functions $\mu$ and $\rho$ are as well, in other words: $Y(\mu)=Y(\rho)=0$. \item \textbf{The results just obtained allow us to write the following equalities}: \begin{enumerate} \item $D_YX_0=D_{X_0}Y$, ~~since the field $X_0$ is invariant under the isometries $\sigma_s$ and therefore: $\mathscr L_Y X_0=D_Y X_0-D_{X_0}Y=0$. Since the torsion is zero, we also deduce that $[X_0, Y]=0$. \item (i) ~$D_YY=0$ (ii) ~$\nabla\cdotp Y=0$ (iii) ~$\leftidx{^e}{F}(X_0)=-2D_YX_0$. Indeed: $Y$ is a Killing field, so $\nabla_iY_j=-\nabla_jY_i$, in particular $\nabla_iY^i=\nabla\cdotp Y=0$. Since $Y^iY_i=-1$, ~~$0=\nabla_j(Y^iY_i)=2Y^i\nabla_jY_i=-2Y^i\nabla_iY_j$. Moreover: \begin{eqnarray}\label{F62} F_{ij}=\nabla_iY_j-\nabla_jY_i=2\nabla_iY_j=-2\nabla_jY_i \end{eqnarray} In other words, ~$\leftidx{^e}{F}(X_0)=-2D_{X_0}Y=-2D_YX_0$ by (a). \end{enumerate} \item \textbf{Proof of 1.\ of theorem \ref{t1.1}}. $G^{ij}=\mu X^iX^j+\sigma Y^iY^j+P^{ij}=\mu X_0^iX_0^j+\rho(X_0^iY^j+Y^iX_0^j)+(\sigma+\rho^2/\mu)Y^iY^j+P^{ij}$.
Then, by the second Bianchi identity: \begin{multline}\label{F63} \nabla_iG^{ij}=X_0^j\nabla_i(\mu X_0^i)+\mu X_0^i\nabla_iX_0^j+\nabla_i(\rho(X_0^iY^j+Y^iX_0^j))+\nabla_iP^{ij}\\ =X^j\nabla_i(\mu X^i)+\mu X^i\nabla_iX^j+\nabla_iP^{ij}=0 \end{multline} since $\nabla_i((\sigma+\rho^2/\mu)Y^iY^j)=\nabla_i(\sigma Y^iY^j)=0$; indeed: $\sigma$ and $\sigma+\rho^2/\mu$ are functions invariant under the $\sigma_s$ of the one-parameter group of diffeomorphisms associated with $Y$, so $Y(\sigma+\rho^2/\mu)=0$, and then: $\nabla_i((\sigma+\rho^2/\mu)Y^iY^j)=Y(\sigma+\rho^2/\mu)Y^j+(\sigma+\rho^2/\mu)(Y^i\nabla_iY^j+Y^j\nabla_iY^i)=0$ by (i) and (ii). By \ref{F63}, and since $X_{0j}\nabla_iX_0^j=0$ because $X_{0j}X_0^j=-1$, we can write: ${X_0}_j \nabla_iG^{ij}=-\nabla_i(\mu X_0^i)+{X_0}_j\nabla_i(\rho(X_0^iY^j+Y^iX_0^j))+{X_0}_j\nabla_iP^{ij}=0$. But ${X_0}_j\nabla_i(\rho(X_0^iY^j+Y^iX_0^j))=0$, this last point being obtained easily by expanding and using 2., 3.(a), (i), (ii), and the facts that $X_0^i{X_0}_i=Y^iY_i=-1$ and $X_0^iY_i=0$. Then: \begin{eqnarray}\label{F64} \nabla_i(\mu X_0^i)=X_{0_j}\nabla_iP^{ij} \end{eqnarray} which is the first result sought, since moreover: $\nabla_i(\mu X^i)=\nabla_i(\mu X_0^i)+\nabla_i(\rho Y^i)=\nabla_i(\mu X_0^i)$. On the other hand, using \ref{F63} again: $X_j\nabla_iG^{ij}=X_jX^j\nabla_i(\mu X^i)+\mu X^iX_j\nabla_iX^j+X_j\nabla_iP^{ij}=0$, but ~$X^jX_j=-(1+\rho^2/\mu^2)$, ~so ~$X_j\nabla_iX^j=\frac{1}{2} \nabla_i(X^jX_j)=-(\rho/\mu)\nabla_i(\rho/\mu)$. We can therefore write: $\rho X^i\nabla_i(\rho/\mu)=-(1+\rho^2/\mu^2)\nabla_i(\mu X^i)+X_j\nabla_iP^{ij}$ and, by \ref{F64}: $\rho X^i\nabla_i(\rho/\mu)=-(1+\rho^2/\mu^2)X_{0_j}\nabla_iP^{ij}+X_{0_j}\nabla_iP^{ij}+(\rho/\mu )Y_j\nabla_iP^{ij}$ \hspace{1.8cm} $=-(\rho^2/\mu^2)X_{0_j}\nabla_iP^{ij}+(\rho/\mu) Y_j\nabla_iP^{ij}$, which gives the second result sought. \item \textbf{Proof of 2.\ of theorem \ref{t1.1}}.
By \ref{F63}: $Y_j\nabla_i(\rho(X_0^iY^j+Y^iX_0^j))+Y_j\nabla_iP^{ij}=0$, ~since $Y_jX_0^j=0$. But: $Y_j\nabla_i(\rho(X_0^iY^j+Y^iX_0^j))=-\nabla_i(\rho X_0^i)+Y_jY^i\nabla_i(\rho X_0^j)$ ~by (i) and (ii). Moreover: $Y_jY^i\nabla_i(\rho X_0^j)=\rho Y_jY^i\nabla_iX_0^j=\rho Y_jX_0^i\nabla_iY^j=0$ ~~by 3.(a) and since $Y_j\nabla_iY^j=0$. We therefore obtain: $\nabla_i(\rho X_0^i)=Y_j\nabla_iP^{ij}$, which is the first result sought, knowing that: $\nabla_i(\rho X^i)=\nabla_i(\rho X_0^i+(\rho^2/\mu) Y^i)=\nabla_i(\rho X_0^i)$ ~~by 2.\ and (ii). On the other hand: $\nabla_i(P^{ij}Y_j)=Y_j\nabla_iP^{ij}+P^{ij}\nabla_iY_j$. But ~~$P^{ij}\nabla_iY_j=\frac{1}{2} P^{ij}F_{ij}=0$, ~~since $P$ is symmetric and $F$ antisymmetric. Then ~~$\nabla_i(P^{ij}Y_j)=Y_j\nabla_iP^{ij}$, ~~in other words: $\nabla\cdotp (\leftidx{^e}{P}(Y))=g(Y,\nabla\cdotp P)$. \item \textbf{Proof of 3.\ of theorem \ref{t1.1}}. By \ref{F63}, ~$\mu X^i\nabla_iX^j=-\nabla_iP^{ij}-X^j\nabla_i(\mu X^i)$. That is: $\mu D_XX=-\nabla\cdotp P-g(X_0,\nabla\cdotp P)X$, ~~since by 1., $\nabla_i(\mu X^i)=g(X_0,\nabla\cdotp P)$. This shows 3.(a). On the other hand: $D_XX=D_{X_0+(\rho/\mu)Y}(X_0+(\rho/\mu)Y)=D_{X_0}X_0+D_{X_0}((\rho/\mu)Y)+(\rho/\mu)D_YX_0$ $=D_{X_0}X_0+2(\rho/\mu)D_YX_0+X_0(\rho/\mu)Y$ ~~~since $D_{X_0}Y=D_YX_0$, ~$Y(\rho/\mu)=0$, ~$D_YY=0$. Since $X(\rho/\mu)=X_0(\rho/\mu)=\frac{1}{\mu}g(Y,\nabla\cdotp P)-(\rho/\mu^2)g(X_0,\nabla\cdotp P)$, we deduce: $\mu D_{X_0}X_0=\mu D_XX-2\rho D_YX_0-\mu X_0(\rho/\mu)Y$, ~~and then: $\mu D_{X_0}X_0=-\nabla\cdotp P-g(X_0,\nabla\cdotp P)X-2\rho D_YX_0-g(Y,\nabla\cdotp P)Y+(\rho/\mu)g(X_0,\nabla\cdotp P)Y$ \hspace{1.3cm}$=-\nabla\cdotp P+\rho\leftidx{^e}{F}(X_0)-g(X_0,\nabla\cdotp P)X_0-g(Y,\nabla\cdotp P)Y$ \hspace{1.3cm}$=\rho\leftidx{^e}{F}(X_0)-pr_{\mathcal{T}^\bot}(\nabla\cdotp P)$. This shows 3.(b). \item \textbf{Proof of 4.\ of theorem \ref{t1.1}}. By \ref{F62}, $\nabla_iF^{ij}=-2\nabla_i\nabla^jY^i$.
But: \begin{eqnarray}\label{F65} \nabla_i\nabla^jY^i=\nabla^j\nabla_iY^i+R^j_kY^k=R^j_kY^k \end{eqnarray} since, by (ii), $\nabla_iY^i=0$. We deduce: $\nabla_iF^{ij}=-2R^j_kY^k$. On the other hand, by definition: $2R^j_k=G^j_k+S_g\delta^j_k$ (where $S_g$ is the scalar curvature and $\delta^j_k$ the Kronecker symbols). Using the fact that ~$G^j_k=\mu X_0^jX_{0_k}+\rho(X_0^jY_k+Y^jX_{0_k})+(\sigma+\rho^2/\mu)Y^jY_k+P^j_k$, ~we write, since ~$X_{0_k}Y^k=0$: $G^j_kY^k=-\rho X_0^j-(\sigma+\rho^2/\mu)Y^j+P^j_kY^k$. We therefore obtain: $\nabla_iF^{ij}=\rho X_0^j+(\sigma+\rho^2/\mu-S_g)Y^j-P^j_kY^k$. Then, by \ref{F62}: $2Y_j\nabla_i\nabla^jY^i=-Y_j\nabla_iF^{ij}=\sigma+\rho^2/\mu-S_g$, ~~since ~$P(Y,Y)=P^j_kY^kY_j=0$, ~$Y^jY_j=-1$, ~$X_0^jY_j=0$. Since $Y^j\nabla_jY^i=0$, ~we can write: $0=\nabla_i(Y^j\nabla_jY^i)=(\nabla_iY^j)(\nabla_jY^i)+Y^j\nabla_i\nabla_jY^i=-\frac{1}{4}F_{ij}F^{ij} +Y^j\nabla_i\nabla_jY^i$. Hence: $\sigma+\rho^2/\mu-S_g=\frac{1}{2}F_{ij}F^{ij}$, ~and finally: $\nabla_iF^{ij}=\rho X_0^j+\frac{1}{2}(F_{kl}F^{kl})Y^j-P^j_kY^k$. This shows 4. \end{enumerate} \newpage \section[Proof of properties 1, 2 and 3]{Proof of properties 1, 2 and 3 on metrics representing an active potential (paragraph \ref{ss1.3} B.) and of lemma \ref{l2}\label{a3.2}} \begin{enumerate} \item Proof of properties 1.\ 2.\ 3. \begin{enumerate} \item Symmetry of $\leftidx{^e}{h}_x$. $g_0(\leftidx{^e}{h}_x(X),Y)=g_{0_{ij}}h^i_kX^kY^j=h_{jk}X^kY^j=h_{kj}X^kY^j=g_{0_{ki}}h^i_jY^jX^k=g_0(X,\leftidx{^e}{h}_x(Y))$. \item When an endomorphism is nilpotent, there always exists a basis in which its matrix is strictly upper triangular, hence of zero trace. This is the case, for every $q\in\mathbb{N}^*$, for $(\leftidx{^e}{h}_x)^q$. \item At each $x\in\mathscr{C}$: $(g_0)^{-1}_B(g)_B=(g_0)^{-1}_B(g_0)_B+(\leftidx{^e}{h})_B=I+(\leftidx{^e}{h})_B$, where $(~)_B$ denotes ``the matrix of'' in the basis $B$.
In a basis $B$ in which the matrix of $\leftidx{^e}{h}_x$ is strictly upper triangular, we have: $\det(g_0)^{-1}_B \det(g)_B=\det(I+(\leftidx{^e}{h}_x)_B)=1$, hence ~$\det(g)_B=\det(g_0)_B$. If $P$ denotes the change-of-basis matrix from an arbitrary basis $B'$ to the basis $B$, we obtain: $(\det P)^2\det(g)_{B'}=(\det P)^2\det(g_0)_{B'}$, whence the result. \end{enumerate} \item Proof of lemma \ref{l2}. For $x\in\mathscr{C}$, let ~$A$ ~denote the vector subspace of $T_x(\mathscr{C})$ spanned by $Y$ and $X_0$, and $B$ ~the vector subspace spanned by $(Y,\leftidx{^e}{h}(Y),\dots,\leftidx{^e}{h}^{p-1}(Y),X_0,\dots,\leftidx{^e}{h}^{p-1}(X_0))$. The metric $g_0$ is non-degenerate by hypothesis, since its signature is $(-,+,+,+,-,+,\dots,+)$. Since $Y$ and $X_0$ are timelike and $g_0$-orthogonal, $g_0|_A$ has signature $(-,-)$. ~~$g_0|_ {A^\perp}$ therefore has signature $(+,+,\dots,+)$ and is a scalar product. Since ~$B^\perp\subset A^\perp$, ~$g_0|_{B^\perp}$ ~is again a scalar product. Since ~$(\leftidx{^e}{h})^p=0$, ~~$B$ is stable under $\leftidx{^e}{h}$ and, as $\leftidx{^e}{h}$ is $g_0$-symmetric, $B^\perp$ is also stable under $\leftidx{^e}{h}$. $\leftidx{^e}{h}|_{B^\perp}$ is an endomorphism of $B^\perp$ that is $g_0|_{B^\perp}$-symmetric and, since ~$g_0|_{B^\perp}$ ~is a scalar product, ~$\leftidx{^e}{h}|_{B^\perp}$ is diagonalizable, hence identically zero since nilpotent. We have thus shown that ~$\leftidx{^e}{h}|_{B^\perp}=0$. Since ~$T_x(\mathscr{C})=B\oplus B^\perp$ ~because ~$B^\perp$ and (hence) $B$ are $g_0$-regular, lemma \ref{l2} is proved. \end{enumerate} \newpage \section{Proof of proposition \ref{p1.4} \label{a3.3}} We resume here the notation used in the determination of the geodesics in section \ref{ss1.4}. In a standard coordinate system of the cell $\mathscr{C}$, we denote by $\Gamma^k_{ij}$ ~(resp.\ $\tilde \Gamma^k_{ij}$) the Christoffel symbols of $g$ ~(resp.\ $g_0$).
We denote by $T^k_{ij}$ the coordinates of the \textbf{tensor} $\Gamma^k_{ij}-\tilde\Gamma^k_{ij}$. When $g=g_0+h$, we have: \begin{eqnarray} T^k_{ij}=\frac{1}{2}g^{kl}(\nabla_ih_{jl}+\nabla_jh_{il}-\nabla_lh_{ij}) \label{F65'} \end{eqnarray} \textbf{where the $\nabla_i$ are relative to $g_0$}. Moreover, denoting by $R_{ij}$ (resp.\ $\tilde R_{ij}$) the Ricci curvature of $g$ (resp.\ $g_0$), one easily shows that: \begin{eqnarray} R_{ij}=\tilde R_{ij}+\nabla_kT^k_{ij}-T^k_{li}T^l_{kj} \label{F65*} \end{eqnarray} \textbf{Here $h=-2vX_1\otimes X_1$ and, in the computations of this proof, $X_1$ will be written $X$ to simplify notation}. The expression for $T^k_{ij}$ was given by \ref{F1.5}, but it simplifies here thanks to the hypothesis $DX=0$, which is stronger than $X$ being a Killing field; we obtain: \begin{eqnarray} T^k_{ij}=-(X_iX^k\nabla_jv+X_jX^k\nabla_iv-X_iX_j\nabla^kv) \label{F66} \end{eqnarray} where we also used the properties $X^kX_k=0$ ~and ~$X(v)=0$. \begin{enumerate} \item Computation of $R_{ij}$. With \ref{F66} we obtain: $T^k_{li}T^l_{kj}=0$ ~~and ~~$\nabla_kT^k_{ij}=-(X_iX^k\nabla_k\nabla_jv+X_jX^k\nabla_k\nabla_iv-X_iX_j\nabla_k\nabla^kv)$. Hence, since $X(v)=0$: $\nabla_kT^k_{ij}=-(\Delta_{g_0}v)X_iX_j$ ~~~(where ~$\Delta_{g_0}:=-\nabla^k\nabla_k$). By \ref{F65*}, this finally gives for the Ricci curvature: $R_{ij}=\tilde R_{ij}-(\Delta_{g_0}v)X_iX_j$. \item Computation of $S_g$. $S_g=g^{ij}R_{ij}=(g_0^{ij}+2vX^iX^j)(\tilde R_{ij}-(\Delta_{g_0}v)X_iX_j)$. Expanding, and since $X^kX_k=0$: $S_g=S_{g_0}+2vR_{icc_{g_0}}(X_1,X_1)$. \item $g(X_1,X_1)=g^{ij}X_iX_j=(g_0^{ij}+2vX^iX^j)X_iX_j=0$. On the other hand: $\nabla_{g,i}X^j=\partial_iX^j+\Gamma^j_{il}X^l=\nabla_iX^j+T^j_{il}X^l=T^j_{il}X^l$. Using \ref{F66}: $\nabla_{g,i}X^j=0$. \item $\nabla_{g,i}Y^j=\partial_iY^j+\Gamma^j_{il}Y^l=\nabla_iY^j+T^j_{il}Y^l$.
$\nabla_iY^j=\partial_iY^j+\tilde\Gamma^j_{il}Y^l=\tilde\Gamma^j_{il}Y^l=\frac{1}{2}g_0^{mj}(\partial_ig_{0km} +\partial_kg_{0im}-\partial_mg_{0ik})Y^k$. But $Y^4=1$ and $Y^k=0$ for $k\neq4$, so, given the definition of $g_0$:\\ \vspace{-5mm} $\nabla_iY^j=0$. On the other hand, by \ref{F66}, since $Y(v)=0$ ~and ~$Y^jX_j=0$: $T^j_{il}Y^l=0$. Finally: $\nabla_{g,i}Y^j=0$. \end{enumerate} \section{Proof of proposition \protect\ref{p1.5} \label{a3.4}} We start again from the expressions \ref{F65'} and \ref{F65*} given in the previous section. \textbf{Here $h=\Upsilon^\flat\otimes X^\flat_2+X^\flat_2\otimes\Upsilon^\flat$ and, in the computations of this proof, $X_2$ will simply be written $X$ to simplify notation}. Computation of $T^k_{ij}$. \begin{eqnarray} T^k_{ij}=\frac{1}{2}(g_0^{kl}-h^{kl}+h^{km}h^l_m)(*) \label{F67} \end{eqnarray} where we have set: $(*)=\nabla_j(\Upsilon_iX_l+\Upsilon_lX_i)+\nabla_i(\Upsilon_jX_l+\Upsilon_lX_j)-\nabla_l(\Upsilon_iX_j+\Upsilon_jX_i)$ \bigskip Since $DX=0$, we have: $(*)=X_l(\nabla_j\Upsilon_i+\nabla_i\Upsilon_j)+X_i(\nabla_j\Upsilon_l-\nabla_l\Upsilon_j)+X_j(\nabla_i\Upsilon_l-\nabla _l\Upsilon_i)$. That is, since $F=d\Upsilon^\flat$: $(*)=X_l(\nabla_j\Upsilon_i+\nabla_i\Upsilon_j)+X_iF_{jl}+X_jF_{il}$. Then: $g_0^{kl}(*)=X^k(\nabla_j\Upsilon_i+\nabla_i\Upsilon_j)+X_iF_j^{~~k}+X_jF_i^{~~k}$. $\nabla_j\Upsilon_i=\partial_j\Upsilon_i-\tilde\Gamma^l_{ij}\Upsilon_l=\partial_j\Upsilon_i-\frac{1}{2}g_0^{lm} (\partial_ig_{0jm}+\partial_jg_{0im}-\partial_mg_{0ij})\Upsilon_l$. We deduce that $\nabla_j\Upsilon_i$ does not depend on the variables of $\Theta$ under hypothesis $H_E$, nor on the variables of $\Theta\times S^3(\rho)$ under hypothesis $H'_E$. The same holds in particular for $F_{ij}$, and we deduce $X^lF_{il}=0$. Since moreover $X^lX_l=0$, we have: $h^{kl}(*)=(\Upsilon^kX^l+\Upsilon^lX^k)(*)=\Upsilon^lX^k(X_iF_{jl}+X_jF_{il})$. And: $h^{km}h^l_m(*)=\Upsilon^m\Upsilon_mX^kX^l(*)=0$.
Then, by \ref{F67}: \begin{eqnarray} 2T^k_{ij}=X^k(\nabla_j\Upsilon_i+\nabla_i\Upsilon_j)+X_iF_j^{~~k}+X_jF_i^{~~k}-\Upsilon^lX^k(X_iF_{jl}+X_jF_{il}) \label{F68} \end{eqnarray} \begin{enumerate} \item Computation of $R_{ij}$. From the dependence on the variables deduced in the preceding lines, we have: $X^k\nabla_k(\nabla_j\Upsilon_i+\nabla_i\Upsilon_j)=0$ ~~and ~~$X^k\nabla_k(\Upsilon^l(X_iF_{jl}+X_jF_{il}))=0$. Then: $\nabla_kT^k_{ij}=\frac{1}{2}(X_i\nabla_kF_j^{~~k}+X_j\nabla_kF_i^{~~k})$. On the other hand, by \ref{F68}, expanding and using the properties already mentioned: $4T^k_{li}T^l_{kj}=F_l^{~~k}F_k^{~~l}X_iX_j$. Hence: \begin{eqnarray} R_{ij}=\tilde R_{ij}+\frac{1}{2}(X_i\nabla_kF_j^{~~k}+X_j\nabla_kF_i^{~~k})-\frac{1}{4}F_l^{~~k}F_k^{~~l}X_iX_j \label{F69} \end{eqnarray} This gives result 1.\ of proposition \ref{p1.5}, since $F_j^{~~k}=-F^k_{~~j}$. \item Computation of $S_g$. $S_g=g^{ij}R_{ij}=(g_0^{ij}-h^{ij}+h^i_kh^{kj})R_{ij}$. By \ref{F69} and the properties already used: $g_0^{ij}R_{ij}=S_{g_0}$. On the other hand, since $g_0$ is a product metric and by hypotheses $H_E$ and $H'_E$: $h^{ij}R_{ij}=(\Upsilon^iX^j+\Upsilon^jX^i)R_{ij}=2\tilde R_{ij}\Upsilon^iX^j=0$. $h^i_kh^{kj}R_{ij}=\Upsilon^k\Upsilon_kR_{ij}X^iX^j=\Upsilon^k\Upsilon_k\tilde R_{ij}X^iX^j$. Finally: $S_g=S_{g_0}+(\Upsilon^k\Upsilon_k)R_{icc_{g_0}}(X_2,X_2)$. \item $g(X_2,X_2)=g^{ij}X_iX_j=(g_0^{ij}-\Upsilon^iX^j-\Upsilon^jX^i+\Upsilon^k\Upsilon_kX^iX^j)X_iX_j=0$. On the other hand: $\nabla_{g,i}X^j=\partial_iX^j+\Gamma^j_{il}X^l=\nabla_iX^j+T^j_{il}X^l=T^j_{il}X^l$. Using \ref{F68} and the properties already mentioned: $\nabla_{g,i}X^j=0$. \item This follows quickly from the following lemma, which gives a more general result. \begin{lem} \label{al1} We consider a standard cell $\mathscr{C}=\Theta\times S^1(\delta)\times W$ with a coordinate system $(x^0,x^1,\dots,x^{n-1})$.
Let $g$ be the metric tensor transported onto $\mathscr{C}$ from the tensor $g_\mathscr{M}$, and $Y$ the vector field defined as before ($Y^i=0$ for every $i\neq4$, $Y^4=1$). Then: $Y$ is a Killing field if and only if, in the coordinate system, ~~$\partial_4g_{ij}=0$ for all $i$ and $j$ ~(in other words, the entries of the matrix $(g_{ij})$ do not depend on $x^4$). \end{lem} \textbf{Proof of the lemma}. $Y$ is a Killing field if and only if $\nabla_iY_j+\nabla_jY_i=0$. We have: $\nabla_iY_j=\partial_iY_j-\Gamma^k_{ij}Y_k$. Then: $\nabla_iY_j+\nabla_jY_i=\partial_iY_j+\partial_jY_i-2\Gamma^k_{ij}Y_k$. In the chosen coordinate system, $Y_j=g_{ij}Y^i=g_{4j}$, ~so: $\nabla_iY_j+\nabla_jY_i=\partial_i(g_{4j})+\partial_j(g_{4i})-2\Gamma^k_{ij}g_{4k}$, ~~but: $\Gamma^k_{ij}g_{4k}=\frac{1}{2}\delta^l_4(\partial_ig_{jl}+\partial_jg_{il}-\partial_lg_{ij})=\frac{1}{2} (\partial_ig_{4j}+\partial_jg_{4i}-\partial_4g_{ij})$. Then: $\nabla_iY_j+\nabla_jY_i=\partial_4g_{ij}$, whence the result. \end{enumerate} \section[A very simple example of approximation]{A very simple example of approximation of the solutions of a non-linear equation by those of a linear equation\label{ea3.4} } The only purpose of the example we are about to present is to help understand the process of approximating the solutions of the fundamental equation \ref{F0'} by the solutions of the linear equation \ref{F1} associated with domains of type ``oscillating metric in a potential''. On the interval $[0,1]\subset\mathbb{R}$, we consider the differential equation: $$(*) \text{ ~~~}y'-y=y^2$$ (Equation $(*)$ is here of order 1 so that the computations are very simple; equation \ref{F0'} is of course of order 2.)
The solution of this differential equation satisfying the ``boundary'' condition ~~$y(0)=\varepsilon$ ~~(which we will later assume $\ll 1$) is, as one can quickly check: $$y(t)=\frac{\varepsilon e^t}{1+\varepsilon(1-e^t)}$$ We consider the associated linear equation: $$(**) \text{ ~~~}y'-y=0$$ The solution $y_1$ of this equation satisfying the same ``boundary'' condition $y_1(0)=\varepsilon$ is: $$y_1(t)=\varepsilon e^t$$ As one can see, when $\varepsilon \ll 1$ this solution is ``very close'' to that of $(*)$, since $y_1(t)/y(t)=1+\varepsilon(1-e^t)$ ~~and ~$-2<(1-e^t)<0$ ~because $t$ has been restricted to the interval $[0,1]$. \section{Proof of proposition \protect\ref{p2.1} \label{a3.5}} \begin{enumerate} \item Let $\varphi:\Theta\times S^1(\delta)\times V_1\rightarrow\mathbb{R}$ ~~and ~~$\beta\in E_{V_2}(\mu_2)$. $\Box_{g_\mathcal P}(\varphi\beta)=-\nabla^i\nabla_i(\varphi\beta)=\beta~\Box_{g_\mathcal P}(\varphi)+\varphi~\Box_{g_\mathcal P}(\beta)-2\nabla^i\varphi\nabla_i\beta$, where the $\nabla_i$ are relative to $g_\mathcal P$. $\nabla^i\varphi\nabla_i\beta=g_\mathcal P^{ij}\nabla_j\varphi\nabla_i\beta=g_0^{ij}\nabla_j\varphi\nabla_i\beta+2v X_1^iX_1^j\nabla_j\varphi\nabla_i\beta=0$, since ~~$g_0^{ij}\nabla_j\varphi\nabla_i\beta=0$ ~~and ~~$X_1^i\nabla_i\beta=X_1(\beta)=0$ ~~given the hypotheses. On the other hand: \begin{eqnarray} \Box_{g_\mathcal P}\beta=-|g_0|^{-\frac{1}{2}}\partial_i(|g_0|^{\frac{1}{2}}g_\mathcal P^{ij}\partial_j\beta) \label{F70} \end{eqnarray} since $|g_\mathcal P|=|g_0|$. But ~$g_\mathcal P^{ij}\partial_j\beta=g_0^{ij}\partial_j\beta+2vX_1^iX_1^j\partial_j\beta=g_0^{ij}\partial_j\beta$. Then: $\Box_{g_\mathcal P}\beta=\Box_{g_0}\beta=\mu_2\beta$. Finally: $\Box_{g_\mathcal P}(\varphi\beta)=(\Box_{g_\mathcal P}(\varphi)+\mu_2\varphi)\beta$.
\item Here ~~$\nabla^i\varphi\nabla_i\beta=g_0^{ij} \nabla_j\varphi\nabla_i\beta-(\Upsilon^iX_2^j+\Upsilon^jX_2^i)\nabla_j\varphi\nabla_i\beta+\Upsilon^k\Upsilon_kX_2^iX_2^ j\nabla_j\varphi\nabla_i\beta$, but, given the hypotheses: \begin{eqnarray} \Upsilon^i\nabla_i\beta=\Upsilon(\beta)=0 ~~~\text{and} ~~~X_2^i\nabla_i\beta=X_2(\beta)=0 \label{F71} \end{eqnarray} whence: $(\Upsilon^iX_2^j+\Upsilon^jX_2^i)\nabla_j\varphi\nabla_i\beta=0$. Then, as in 1.: $\nabla^i\varphi\nabla_i\beta=0$. On the other hand: $g_\mathcal P^{ij}\partial_j\beta=g_0^{ij} \partial_j\beta-(\Upsilon^iX_2^j+\Upsilon^jX_2^i)\partial_j\beta+\Upsilon^k\Upsilon_kX_2^iX_2^ j\partial_j\beta=g_0^{ij}\partial_j\beta$. Hence, by \ref{F70} and \ref{F71}: $\Box_{g_\mathcal P}\beta=\Box_{g_0}\beta=\mu_2\beta$. Finally: $\Box_{g_\mathcal P}(\varphi\beta)=(\Box_{g_\mathcal P}(\varphi)+\mu_2\varphi)\beta$. \end{enumerate} \section{Proof of theorem \ref{2.1} \label{a3.6}} \begin{enumerate} \item \textbf{In a neutral potential}. The function $a$ satisfies: $a=\varphi\beta$ ~~where ~~$\varphi:\Theta\times S^1(\delta)\rightarrow\mathbb{R}$ ~~and ~~$\beta\in E_W(\mu)$. We know that: $\Box_{g_0}a+Sa=0$ ~~where ~~$S=\frac{n-2}{4(n-1)}S_{g_0}$. Then: $\beta(\Box_{\Theta\times S^1}\varphi+\mu\varphi+S\varphi)=0$ ~~where ~~$\Box_{\Theta\times S^1}=\frac{\partial^2}{\partial t^2}+\frac{\partial^2}{\partial u^2}-\sum_{j=1}^3\frac{\partial^2}{\partial (x^j)^2}$. Then: $\Box_\Theta\varphi+(\mu+S-\lambda)\varphi=0$ ~~where ~~$\Box_\Theta=\frac{\partial^2}{\partial t^2}-\sum_{j=1}^3\frac{\partial^2}{\partial (x^j)^2}$, that is: $\Box_\Theta\varphi+M^2\varphi=0$. Then, for every $x\in \Theta$, ~~$\mathbb{C}_\lambda((\Box_\Theta\varphi+M^2\varphi)_x(.))=0$, where ~$\mathbb{C}_\lambda$ ~is the isomorphism defined in section \ref{s2.8}. Since $\mathbb{C}_\lambda((\Box_\Theta\varphi)_x(.))=\Box_\Theta(\mathbb{C}_\lambda(\varphi_x(.)))=\Box_\Theta a_c$, we deduce: $\Box_\Theta a_c+M^2a_c=0$.
\item \textbf{In a potential without electromagnetism}. The standard cell is $\mathscr{C}=\Theta\times S^1(\delta)\times W$ ~~where ~~$\Theta=I\times \mathcal U\subset\mathbb{R}\times\mathbb{R}^3$, ~~$a=\varphi\beta$ ~~with ~~$\varphi:\Theta\times S^1(\delta)\rightarrow\mathbb{R}$ ~~and ~~$\beta\in E_W(\mu)$, ~but now: $\Box_{g_\mathcal P}a+Sa=0$ ~~where ~~$g_\mathcal P=g_0+h$ ~~and ~~$h=-2vX_1\otimes X_1$. (In the computations that follow, $g_\mathcal P$ will be written $g$ to simplify notation.) In a standard coordinate system of the cell $\mathscr{C}=\Theta\times S^1(\delta)\times W$, we have: $\Box_ga=-|g|^{-\frac{1}{2}}\partial_i(g^{ij}|g|^{\frac{1}{2}}\partial_ja)$ ~~where ~~$|g|:=\det g=\det g_0$ ~(cf.\ \ref{ss1.3})\\ and ~~$g^{ij}=g_0^{ij}+2vX^iX^j$. Then: $\Box_ga=\Box_{g_0}a-2|g|^{-\frac{1}{2}}\partial_i(v|g|^{\frac{1}{2}}X^iX^j\partial_ja)$. By the hypotheses of the theorem: $X^1=X^2=X^3=0$ ~~and $v$ depends only on the variables $x^1,x^2,x^3$. So: $|g|^{-\frac{1}{2}}\partial_i(v|g|^{\frac{1}{2}}X^iX^j\partial_ja)=v|g|^{-\frac{1}{2}}\partial_i(|g|^{\frac{1}{2}} X^iX(a))$. But: $|g|^{-\frac{1}{2}}\partial_i(|g|^{\frac{1}{2}}X^iX(a))=|g|^{-\frac{1}{2}}(\partial_i(|g|^{\frac{1}{2}} X^i))X(a)+X^i\partial_i(X(a))$ \hspace{3.2cm} $=(\nabla\cdotp X)X(a)+X(X(a))$. Moreover, since $X(\beta)=0$, ~$X$ ~is tangent to $\mathbb{R}\times W$ ~and ~$X^0=-1$, we deduce: $X(a)=\beta X(\varphi)+\varphi X(\beta)=-\beta\frac{\partial\varphi}{\partial t}$ ~~and ~~$X(X(a))=\beta\frac{\partial^2\varphi}{\partial t^2}$. Then, since $DX=0$: $|g|^{-\frac{1}{2}}\partial_i(v|g|^{\frac{1}{2}}X^iX^j\partial_ja)=\beta(\frac{\partial^2\varphi}{\partial t^2}-(\nabla\cdotp X)\frac{\partial\varphi}{\partial t})=\beta\frac{\partial^2\varphi}{\partial t^2}$. Hence, since ~$\Box_{g_0}a+Sa=\beta(\Box_\Theta\varphi+M^2\varphi)$ ~~(as in 1.): $0=\Box_{g}a+Sa=\beta(\Box_\Theta\varphi+M^2\varphi-2v\frac{\partial^2\varphi}{\partial t^2})$.
Taking the image under the isomorphism $\mathbb{C}_\lambda$, one obtains equation \ref{F16}. \item \textbf{In an electromagnetic potential}. The function $a$ again satisfies: $a=\varphi\beta$ ~~where ~~$\varphi:\Theta\times S^1(\delta)\rightarrow\mathbb{R}$ ~~and ~~$\beta\in E_W(\mu)$. We have: \begin{eqnarray} \Box_{g_\mathcal P}a+Sa=0 \text{~~where ~~}g_\mathcal P=g_0+h\text{ ~~and ~~}h=\Upsilon^\flat\otimes X_2^\flat+X_2^\flat\otimes\Upsilon^\flat \label{F72} \end{eqnarray} (In the computations that follow, $g_\mathcal P$ will be written $g$ to lighten the notation, $\Upsilon$ will be written $A$ and corresponds to the ``electromagnetic potential'' vector field defined on $\Theta$, and $X_2$ will simply be written $X$.) The vector field $A$ is tangent to $\Theta$ and depends only on the variables of $\Theta$. In a standard coordinate system of the cell $\mathscr{C}=\Theta\times S^1(\delta)\times W$, we have: $\Box_ga=-|g|^{-\frac{1}{2}}\partial_i(g^{ij}|g|^{\frac{1}{2}}\partial_ja)$ ~~where ~~$|g|:=\det g=\det g_0$ and $g^{ij}=g_0^{ij}-h^{ij}+h^i_kh^{kj}$ ~~with ~~$h^{ij}=A^iX^j+A^jX^i$. Given the hypotheses: (i) ~~$A^i=0$ ~if ~$i>3$, ~$X^j=0$ ~if ~$j<4$, ~~and ~$X^4=-1$ ~since ~$g_0(X_2,Y)=1$. Then: \begin{eqnarray} \Box_ga+Sa=\Box_{g_0}a+Sa+(*)+(**) \label{F73} \end{eqnarray} where ~$(*):=|g_0|^{-\frac{1}{2}}\partial_i(|g_0|^{\frac{1}{2}}h^{ij}\partial_ja)$ ~and ~$(**):=-|g_0|^{-\frac{1}{2}}\partial_i(|g_0|^{\frac{1}{2}}h^i_kh^{kj}\partial_ja)$. Since $g_0$ is a ``product'' metric on $\Theta\times S^1(\delta)\times W$, we have (cf. 1. and 2.): \begin{eqnarray} \Box_{g_0}a+Sa=\beta(\Box_\Theta\varphi+M^2\varphi) \label{F74} \end{eqnarray} \begin{enumerate} \item Study of $(*)$. $(*)=(*_1)+(*_2)$ ~~where ~~$(*_1):=|g_0|^{-\frac{1}{2}}\partial_i(|g_0|^{\frac{1}{2}}h^{ij}\varphi\partial_j\beta)$ and ~~$(*_2):=|g_0|^{-\frac{1}{2}}\partial_i(|g_0|^{\frac{1}{2}}h^{ij}\beta\partial_j\varphi)$. 
$(*_1)=|g_0|^{-\frac{1}{2}}(\partial_i(|g_0|^{\frac{1}{2}}A^iX^j\varphi\partial_j\beta)+ \partial_j(|g_0|^{\frac{1}{2}}A^iX^j\varphi\partial_i\beta))=0$ because $X^j\partial_j\beta=0$ ~~and ~~$A^i\partial_i\beta=0$ ~by (i), since $\partial_i\beta=0$ ~if ~$i\leq4$. On the other hand: $(*_2):=(*_{2_1})+(*_{2_2})$ ~~where ~~$(*_{2_1})=|g_0|^{-\frac{1}{2}}\partial_i(|g_0|^{\frac{1}{2}}A^iX^j\beta\partial_j\varphi)$ and ~~$(*_{2_2})=|g_0|^{-\frac{1}{2}}\partial_j(|g_0|^{\frac{1}{2}}A^iX^j\beta\partial_i\varphi)$. $(*_{2_1})=-|g_0|^{-\frac{1}{2}}\partial_i(|g_0|^{\frac{1}{2}}A^i\beta\partial_4\varphi)$ ~~since $\partial_j\varphi=0$ if $j>4$ ~~and ~~$X^4=-1$ ~~($\partial_4\varphi:=\frac{\partial\varphi}{\partial u}$). Then, since $A^i=0$ ~for ~$i\geqslant4$: $(*_{2_1})=-\beta(|g_0|^{-\frac{1}{2}}(\partial_i(|g_0|^{\frac{1}{2}} A^i))\partial_4\varphi+A^i\partial_i\partial_4\varphi)$ \hspace{8mm} $= -\beta((\nabla_{g_0}\cdotp A)\partial_4\varphi+A(\partial_4\varphi))$. On the other hand, since $X^4=-1$: $(*_{2_2})=-|g_0|^{-\frac{1}{2}}\partial_4(|g_0|^{\frac{1}{2}}\beta A(\varphi))+\sum_{j>4}|g_0|^{-\frac{1}{2}}\partial_j(|g_0|^{\frac{1}{2}}X^j\beta A(\varphi))$. But, since $|g_0|$, $\beta$, and the $A^i$ do not depend on $x^4=u$: $(*_{2_2})=-\beta A(\partial_4\varphi)$. Finally: \begin{eqnarray} (*)=(*_1)+(*_{2_1})+(*_{2_2})=-\beta((\nabla_{g_0}\cdotp A)\partial_4\varphi+2A(\partial_4\varphi)) \label{F75} \end{eqnarray} \item Study of $(**)$. 
$(**)=-|g_0|^{-\frac{1}{2}}\partial_i(|g_0|^{\frac{1}{2}}h^i_kh^{kj}\partial_ja)$ \hspace{7mm} $=-|g_0|^{-\frac{1}{2}}\partial_i(|g_0|^{\frac{1}{2}}(A^iX_k+A_kX^i)(A^kX^j+A^jX^k)\partial_ja)$ Then, since $A^kX_k=0$ ~~and ~~$X^kX_k=0$: $(**)=-|g_0|^{-\frac{1}{2}}\partial_i(|g_0|^{\frac{1}{2}}X^iX^jA_kA^k\partial_j(\varphi\beta))$ But, since $A_kA^k$ does not depend on the variables of $S^1\times W$: $(**)=-A_kA^k|g_0|^{-\frac{1}{2}}(\partial_i(|g_0|^{\frac{1}{2}}X^iX^j\beta\partial_j\varphi)+\partial_i(|g_0|^{\frac{1 }{2}}X^iX^j\varphi\partial_j\beta))$. Since $X^j\partial_j\varphi=-\partial_4\varphi$ (because $\partial_j\varphi=0$ for $j>4$) ~and $X(\beta)=0$, we obtain: $(**)=A_kA^k|g_0|^{-\frac{1}{2}}\partial_i(|g_0|^{\frac{1}{2}}X^i\beta\partial_4\varphi)$. Considering $i=4$ and then $i>4$, we find: $(**)=-A_kA^k(\beta\partial^2_4\varphi-|g_0|^{-\frac{1}{2}}(\sum_{i>4}\partial_i(|g_0|^{\frac{1}{2}} X^i\beta))\partial_4\varphi)$. This can be written: $(**)=-A_kA^k(\beta\partial^2_4\varphi-(\nabla_{g_0}\cdotp (\beta X))\partial_4\varphi)$. And, since $\nabla_{g_0}\cdotp (\beta X)=X(\beta)+\beta\nabla_{g_0}\cdotp X=0$: \begin{eqnarray} (**)=-A_kA^k\beta\partial^2_4\varphi \label{F76} \end{eqnarray} \item End of the proof of 3. By \ref{F72}, \ref{F73}, \ref{F74}, \ref{F75} and \ref{F76}, we have: $0=\Box_ga+Sa=(\Box_\Theta\varphi+M^2\varphi-(\nabla_{g_0}\cdotp A)\partial_4\varphi-2A(\partial_4\varphi)-A_kA^k\partial^2_4\varphi)\beta$. This equation can also be put in the form: \begin{eqnarray} -\sum_{j=0}^3\varepsilon_j(\frac{\partial}{\partial x^j}+A^j\frac{\partial}{\partial u})^2\varphi+M^2\varphi=0 \label{F77} \end{eqnarray} where ~~$\varepsilon_0=-1$, ~~$\varepsilon_1=\varepsilon_2=\varepsilon_3=1$, ~~$\frac{\partial}{\partial u}=\partial_4$. To obtain the equation for $a_c$ given in the theorem, it suffices to compose both sides of equation \ref{F77} with the isomorphism $\mathbb{C}_\lambda$. 
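The equivalence between \ref{F77} and the expanded equation preceding it can be checked symbolically. A sketch with generic coefficient functions $a_j$ of the $\Theta$-variables standing in for the potential components (the $\varepsilon_j$-weighted divergence, directional derivative and squared norm below mirror $\nabla_{g_0}\cdotp A$, $A(\partial_4\varphi)$ and $A_kA^k$, up to the index conventions of the paper):

```python
import sympy as sp

t, x1, x2, x3, u = sp.symbols('t x1 x2 x3 u')
coords = [t, x1, x2, x3]
eps = [-1, 1, 1, 1]
phi = sp.Function('phi')(t, x1, x2, x3, u)
a = [sp.Function(f'a{j}')(t, x1, x2, x3) for j in range(4)]  # hypothetical coefficients on Theta

op = lambda j, f: sp.diff(f, coords[j]) + a[j] * sp.diff(f, u)
lhs = -sum(eps[j] * op(j, op(j, phi)) for j in range(4))     # -sum_j eps_j (d_j + a_j d_u)^2 phi

box  = sp.diff(phi, t, 2) - sum(sp.diff(phi, c, 2) for c in coords[1:])
div  = sum(eps[j] * sp.diff(a[j], coords[j]) for j in range(4))
Adir = sum(eps[j] * a[j] * sp.diff(phi, coords[j], u) for j in range(4))
norm = sum(eps[j] * a[j]**2 for j in range(4))
rhs = box - div * sp.diff(phi, u) - 2 * Adir - norm * sp.diff(phi, u, 2)

assert sp.simplify(sp.expand(lhs - rhs)) == 0
```

The check uses only that the $a_j$ are independent of $u$, exactly the hypothesis used in the proof above.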
One easily checks that: $\mathbb{C}_\lambda((\Box_\Theta\varphi)_x(.))=(\Box_\Theta a_c)(x)$,~~~$\mathbb{C}_\lambda((\frac{\partial \varphi}{\partial u})_x(.))=-iQ^+a_c(x)$, $\mathbb{C}_\lambda((\frac{\partial^2 \varphi}{\partial u^2})_x(.))=-(Q^+)^2a_c(x)$. The resulting equation is then: $\sum_{j=0}^3\varepsilon_j(i\frac{\partial}{\partial x^j}+Q^+A^j)^2a_c+M^2a_c=0$. \end{enumerate} \end{enumerate} \section[The sphere $S^3$]{The sphere $S^3$, the Hopf fibration, the eigenspaces of the Riemannian Laplacian\label{sa3.7}} \subsection{The Hopf fibration} Consider the map~~ $\tilde{\Pi}:\mathbb{R}^4=\mathbb{C}^2\rightarrow\mathbb{R}^3$ ~~defined by: \begin{eqnarray}\label{F3.1} \tilde{\Pi}(x_1,x_2,x_3,x_4):=(x_1x_3+x_2x_4,~~ x_1x_4-x_2x_3, ~~\frac{1}{2}(x_3^2+x_4^2-x_1^2-x_2^2)) \end{eqnarray} which can also be written, setting $z_1=x_1+ix_2$ and $z_2=x_3+ix_4$: \bigskip $\tilde{\Pi}(z_1,z_2)=(Re(\bar{z_1}z_2),~~Im(\bar{z_1}z_2),~~\frac{1}{2}(|z_2|^2-|z_1|^2))$ \bigskip We set: $S^3(1)=\{(x_1,x_2,x_3,x_4)\in\mathbb{R}^4/~~\sum_{i=1}^4x_i^2=1\}$ \bigskip $S^2(\frac{1}{2})=\{(y_1,y_2,y_3)\in\mathbb{R}^3/~~\sum_{i=1}^3y_i^2=\frac{1}{4}\}$ \bigskip It is easy to check that: $\tilde{\Pi}(S^3(1))=S^2(\frac{1}{2})$. We then consider the following commutative diagram: $$\begin{CD} S^3(1)@>i_1>>\mathbb{R}^4\\ @V{\Pi}VV @VV{\tilde{\Pi}}V\\ S^2(\frac{1}{2})@>i_2>>\mathbb{R}^3 \end{CD}$$ where ~$i_1$ ~and ~$i_2$ are the canonical injections and $\Pi$ is the restriction of $\tilde\Pi$ to $(S^3(1),S^2(\frac{1}{2}))$. We endow $S^3(1)$ with its canonical Riemannian metric, induced by the Euclidean metric of $\mathbb{R}^4$, and likewise $S^2(\frac{1}{2})$ with the metric induced by the Euclidean metric of $\mathbb{R}^3$. One then checks that, for every point $P$ of $S^2(\frac{1}{2})$, ~~$\Pi^{-1}\{P\}$ is a great circle of $S^3(1)$. The map $\Pi:S^3(1)\rightarrow S^2(\frac{1}{2})$ is called a ``Hopf fibration''. 
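The claim $\tilde{\Pi}(S^3(1))=S^2(\frac{1}{2})$ rests on the polynomial identity $|\tilde\Pi(x)|^2=(|x|^2/2)^2$, which can be checked directly:

```python
import sympy as sp

x1, x2, x3, x4 = sp.symbols('x1 x2 x3 x4', real=True)
y = (x1*x3 + x2*x4,
     x1*x4 - x2*x3,
     sp.Rational(1, 2) * (x3**2 + x4**2 - x1**2 - x2**2))

r2  = x1**2 + x2**2 + x3**2 + x4**2   # |x|^2
img = sum(c**2 for c in y)            # |Pi~(x)|^2

# |Pi~(x)|^2 = (|x|^2 / 2)^2, so S^3(1) is sent into the sphere of radius 1/2
assert sp.expand(img - (r2 / 2)**2) == 0
```

Surjectivity onto $S^2(\frac{1}{2})$ then follows, e.g., from the explicit fibres described just below.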
\begin{rmq} Here, through the definition of $\tilde\Pi$, we have imposed that the image of the sphere $S^3$ of radius $1$ be the sphere $S^2$ of radius $\frac{1}{2}$. By multiplying $\tilde\Pi$ by a positive real number, one can adjust the radii of the spheres as one wishes. The choice made in \ref{F3.1} is justified by the fact that the map:\\ $\Pi:(S^3(1),g_{S^3(1)})\rightarrow(S^2(\frac{1}{2}),g_{S^2(\frac{1}{2})})$\\ is then a \textbf{Riemannian submersion}, which will allow us to prove proposition \ref{p3.1} below. \end{rmq} The Hopf fibration can also be introduced as follows: the group $SO(2)$, identified with the group of complex numbers of modulus $1$ ~~$(r_\alpha\leftrightarrow e^{i\alpha})$, ~acts naturally on $\mathbb{R}^4=\mathbb{C}^2$ as a \textbf{group of isometries} by setting:\\ $ \forall r_\alpha\in SO(2) ~~~r_\alpha\bullet(z_1,z_2):=e^{i\alpha}(z_1,z_2)$. It is easy to check that, under this action, the $SO(2)$-orbits are exactly the great circles $\Pi^{-1}\{P\}$ of the Hopf fibration defined above. The quotient manifold $S^3(1)/SO(2)$ is then diffeomorphic to $S^2(\frac{1}{2})$ via the following commutative diagram: \vspace{-0.3cm} \[ \xymatrix {S^3(1)\ar[r]^\Pi \ar[d]_{\Pi'} & S^2(\frac{1}{2}) \\ S^3(1)/SO(2) \ar[ur]_f} \] where $\Pi'$ is the canonical Riemannian submersion associated with the quotient $S^3(1)/SO(2)$ when $g'$ is the metric on $S^3(1)/SO(2)$ obtained as the quotient of $g_{S^3(1)}$. The map $f$ is obviously a diffeomorphism, and one can show that it is in fact an isometry from $(S^3(1)/SO(2),g')$ to $(S^2(\frac{1}{2}),g_{S^2(\frac{1}{2})})$.\\ The Hopf fibration thus also corresponds to:\\ $\Pi':S^3(1)\rightarrow S^3(1)/SO(2)\sim S^2(\frac{1}{2})$. 
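That the $SO(2)$-orbits lie in the fibres of $\Pi$ can be seen from the fact that $\tilde\Pi$ depends on $(z_1,z_2)$ only through $\bar z_1 z_2$, $|z_1|^2$ and $|z_2|^2$, each of which is invariant under $(z_1,z_2)\mapsto e^{i\alpha}(z_1,z_2)$. A quick symbolic check:

```python
import sympy as sp

x1, x2, x3, x4, alpha = sp.symbols('x1 x2 x3 x4 alpha', real=True)
z1, z2 = x1 + sp.I*x2, x3 + sp.I*x4
rot = sp.exp(sp.I * alpha)   # the SO(2) action (z1, z2) -> e^{i alpha} (z1, z2)

# Pi~ is built from conj(z1)*z2 and the squared moduli; all are invariant:
assert sp.simplify(sp.conjugate(rot*z1)*(rot*z2) - sp.conjugate(z1)*z2) == 0
assert sp.simplify((rot*z1)*sp.conjugate(rot*z1) - z1*sp.conjugate(z1)) == 0
assert sp.simplify((rot*z2)*sp.conjugate(rot*z2) - z2*sp.conjugate(z2)) == 0
```

Since each orbit is a great circle and each fibre is a great circle, invariance of $\tilde\Pi$ identifies orbits with fibres.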
\begin{prop}\label{p3.1} For every map $\varphi:S^2(\frac{1}{2})\rightarrow\mathbb{R}$ of class $C^2$: \bigskip $\Delta_{S^3(1)}(\varphi\circ\Pi)=(\Delta_{S^2(\frac{1}{2})}\varphi)\circ\Pi$ \end{prop} where $\Delta_{S^3(1)}$ and $\Delta_{S^2(\frac{1}{2})}$ denote the standard Riemannian Laplacians defined on the spheres $S^3(1)$ and $S^2(\frac{1}{2})$. \bigskip \textbf{Proof}. This result is fundamentally tied to the fact that the preimage under $\Pi$ of each point of $S^2(\frac{1}{2})$ is a great circle of $S^3(1)$ --- in other words, the function $v:S^2(\frac{1}{2})\rightarrow\mathbb{R}$ defined by $v(P):=\text{vol}_{g_{S^3(1)}}(\Pi^{-1}\{P\})$ is constant, equal to $2\pi$ --- and to the fact that, moreover, $\Pi$ is a Riemannian submersion. One proceeds as follows: Since $\Pi$ is a Riemannian submersion, we know that, for all $\varphi$ and $\phi\in C^1(S^2(\frac{1}{2}))$: \begin{eqnarray}\label{F3.2} \int_{S^3(1)}\varphi\circ\Pi=\int_{S^2(\frac{1}{2})}\varphi v=2\pi\int_{S^2(\frac{1}{2})}\varphi ~~~\text{and} \end{eqnarray} \begin{eqnarray}\label{F3.3} (\nabla_{S^2}\varphi\nabla_{S^2}\phi)\circ\Pi= \nabla_{S^3}(\varphi\circ\Pi)\nabla_{S^3}(\phi\circ\Pi) \end{eqnarray} For every $\varphi\in C^2(S^2(\frac{1}{2}))$,~~ $\varphi\circ\Pi$ is invariant under the action of $SO(2)$ on $S^3(1)$ defined above, and the same holds for the function $\Delta_{S^3}(\varphi\circ\Pi)$. The latter ``passes to the quotient'', so there exists a function $h:S^2(\frac{1}{2})\rightarrow\mathbb{R}$ such that $\Delta_{S^3}(\varphi\circ\Pi)=h\circ\Pi$. 
It then remains to show that $h=\Delta_{S^2}\varphi$: By \ref{F3.2} and \ref{F3.3}, ~for every $\phi\in C^1(S^2(\frac{1}{2}))$: \begin{displaymath} \begin{tabular}{rcl} ${\displaystyle 2\pi\int_{S^2(\frac{1}{2})}(\Delta_{S^2}\varphi)\phi}$ &=& $\displaystyle 2\pi\int_{S^2(\frac{1}{2})}\nabla_{S^2}\varphi\nabla_{S^2} \phi=\int_{S^3(1)}(\nabla_{S^2}\varphi\nabla_{S^2}\phi)\circ\Pi$\\ &=& $\displaystyle \int_{S^3(1)}\nabla_{S^3}(\varphi\circ\Pi)\nabla_{S^3}(\phi\circ\Pi)$ \end{tabular} \end{displaymath} Hence: $2\pi\int_{S^2(\frac{1}{2})}(\Delta_{S^2}\varphi)\phi=\int_{S^3(1)}(\phi\circ\Pi)\Delta_{S^3} (\varphi\circ\Pi)=\int_{S^3(1)}(h\phi)\circ\Pi=2\pi\int_{S^2(\frac{1}{2})}h\phi$ and we deduce, as claimed: $\Delta_{S^2}\varphi=h$. \bigskip This proposition provides an important link between the eigenspaces of the Laplacian $\Delta_{S^2(\frac{1}{2})}$ and those of the Laplacian $\Delta_{S^3(1)}$. This is presented in the following paragraph. \subsection{The eigenspaces of $(S^3(1),g_{S^3(1)})$ and of $(S^2(\frac{1}{2}),g_{S^2(\frac{1}{2})})$} We begin by recalling the classical result on the eigenspaces of $(S^n(1),g_{S^n(1)})$, where $g_{S^n(1)}$ denotes the standard Riemannian metric on $S^n(1)$. \begin{prop}\label{p3.2} ~\\\vspace*{-1em} \begin{enumerate} \item The eigenvalues of the (geometers') Laplacian of $(S^n(1),g_{S^n(1)})$ are given \textbf{by the sequence $(\gamma_k)_{k\in\mathbb{N}}$ where $\gamma_k=k(k+n-1)$.} \item The corresponding eigenspaces $E_k$ consist \textbf{of the restrictions to $S^n(1)$ of the harmonic homogeneous polynomials of degree $k$ on $\mathbb{R}^{n+1}$.} \end{enumerate} \end{prop} One quickly deduces that the eigenvalues of $\Delta_{S^n(\rho)}$ are given by the sequence $(\gamma_k(\rho))_{k\in\mathbb{N}}$ where $\gamma_k(\rho)=\rho^{-2}k(k+n-1)$. 
The eigenvalues of $\Delta_{S^3(1)}$ are therefore: $\gamma_k=k(k+2)$ for $k\in\mathbb{N}$, and those of $\Delta_{S^2(\frac{1}{2})}$: $\gamma'_l=4l(l+1)$ for $l\in\mathbb{N}$, in other words $\gamma'_l=2l(2l+2)$. On the other hand, proposition \ref{p3.1} shows that if $\varphi$ is an eigenfunction on $S^2(\frac{1}{2})$ for the eigenvalue $\gamma'_l$,~ then ~$\varphi\circ\Pi$ is an eigenfunction on $S^3(1)$ for the same eigenvalue $\gamma'_l=2l(2l+2)$. If $F_{2l}$ denotes the eigenspace of $\Delta_{S^2(\frac{1}{2})}$ corresponding to the eigenvalue $\gamma'_l$, then $E'_{2l}:=\{\varphi\circ\Pi~~/~~\varphi\in F_{2l}\}$ is a vector subspace of the eigenspace $E_{\gamma'_l}$, and $F_{2l}$ is naturally isomorphic to $E'_{2l}$. \textbf{Each eigenspace $E'_{2l}$ of $S^2(\frac{1}{2})$ is thus identified, via the Hopf fibration, with a vector subspace of the even-index eigenspace $E_{2l}$ of $S^3(1)$ corresponding to the eigenvalue $2l(2l+2)$.} \subsection{Proof that, for each $k$ from 1 to 3, $\nabla_{S^3}.L_{k_{S^3}}=0$, where the $L_{k_{S^3}}$ are the three vector fields that parallelize $S^3$ \label{ss3.1}} (We use here the simpler notation $L_{k_{S^3}}$ in place of $L_k|_{S^3}$.) It is immediate to check that, for the Euclidean metric $\xi$ of $\mathbb{R}^4$, ~$\nabla_\xi.L_k=0$. At a point $x$ of $S^3$, consider the four pairwise orthogonal vectors: $L_{1_x}, L_{2_x}, L_{3_x}, N_x$ ~~where $N_x$ is the normal vector to $S^3(\rho)$: $N_x=x^1\partial_1+\dots+x^4\partial_4$. For each $k$ we have (omitting the point $x$ in the subscripts): $0=\nabla_\xi.L_k=\xi(L_1,D_{L_1}L_k)+\xi(L_2,D_{L_2}L_k)+\xi(L_3,D_{L_3}L_k)+\xi(N,D_{N}L_k)$ where ~$D$ denotes the Euclidean connection of $\mathbb{R}^4$. But $\xi(N,D_{N}L_k)=-\xi(D_N N,L_k)=0$ ~because $\xi(N,L_k)=0$ and $D_N N=N$, as is quickly checked. 
Moreover, $\xi(L_i,D_{L_i}L_k)=\xi(L_i,\tilde{D} _{L_i}L_k)$ ~where ~$\tilde D$ is the connection induced on $S^3$ by the Euclidean connection of $\mathbb{R}^4$,~ because $\tilde D_{L_i}L_k$ is the orthogonal projection of $D_{L_i}L_k$ onto $T_x (S^3(\rho))$. We deduce: $0=\xi(L_1,\tilde D_{L_1}L_k)+\xi(L_2,\tilde D_{L_2}L_k)+\xi(L_3,\tilde D_{L_3}L_k)=\nabla_{S^3}.L_{k_{S^3}}$. \subsection{Proof of proposition \ref{p2.4} of section \ref{s2.13}\label{ss3.2}} \subsubsection{Stability of the eigenspaces $E_p$ under the action of the three vector fields $X_1, X_2, X_3$} By proposition \ref{p3.2}, the eigenfunctions of the Laplacian that make up the eigenspace $E_p$ are the restrictions to $S^3(\rho)$ of the harmonic homogeneous polynomials of degree $p$ defined on $\mathbb{R}^4$.\\ It therefore suffices to show that if $P$ is such a polynomial then, for each $k$ from 1 to 3,~~ $X_k(P)$ is again a harmonic homogeneous polynomial of the same degree $p$, since $X_k{|_{S^3(\rho)}}(P|_{S^3(\rho)})=(X_k(P))|_{S^3(\rho)}$. Given the expression of the $X_k$, it is clear that the $X_k(P)$ are homogeneous of degree $p$. The difficulty is to show that $X_k(P)$ is harmonic knowing that $P$ is. To this end, we expand $\Delta(X_k(P)):=\sum_{i=1}^4{\partial_i}^2(X_k(P))$ ~~where ~~$\partial_i:=\frac{\partial}{\partial {x^i}}$. \bigskip For $k=1$, ~~~$X_1(P)= -x^2\partial_1P+x^1\partial_2P+x^4\partial_3P-x^3\partial_4P$. Since $\partial_i^2(x^l\partial_m P)=2\delta_i^l\partial_i\partial_m P+x^l\partial_i^2\partial_m P$, we have: $\sum_{i=1}^4\partial_i^2(x^l\partial_m P)=x^l\sum_{i=1}^4\partial_i^2\partial_m P+2\partial_l\partial_m P=2\partial_l\partial_m P$ since $\sum_{i=1}^4\partial_i^2\partial_m P=\partial_m\Delta P=0$. We deduce: $\Delta (X_1(P))=2(-\partial_2\partial_1 P+\partial_1\partial_2 P+\partial_4\partial_3 P-\partial_3\partial_4 P)=0$. One checks in the same way that $\Delta (X_k(P))=0$ for $k=2$ and $3$. 
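This computation can be confirmed symbolically. A sketch with the sample harmonic polynomial $P=x^1x^2x^3x^4$ (the coordinate expressions taken for $X_2$ and $X_3$ below are the standard quaternionic ones, matching the $X_1$ written above --- an assumption here, since the document fixes them elsewhere):

```python
import sympy as sp

xs = sp.symbols('x1 x2 x3 x4')
x1, x2, x3, x4 = xs
lap = lambda f: sum(sp.diff(f, v, 2) for v in xs)   # Laplacian on R^4

# the three parallelizing fields acting on a polynomial
def X1(P): return -x2*sp.diff(P, x1) + x1*sp.diff(P, x2) + x4*sp.diff(P, x3) - x3*sp.diff(P, x4)
def X2(P): return -x3*sp.diff(P, x1) - x4*sp.diff(P, x2) + x1*sp.diff(P, x3) + x2*sp.diff(P, x4)
def X3(P): return -x4*sp.diff(P, x1) + x3*sp.diff(P, x2) - x2*sp.diff(P, x3) + x1*sp.diff(P, x4)

P = x1 * x2 * x3 * x4          # harmonic, homogeneous of degree 4
assert lap(P) == 0
for X in (X1, X2, X3):
    assert sp.expand(lap(X(P))) == 0   # X_k(P) is again harmonic
```

The cancellation is exactly the antisymmetry argument of the proof: each $X_k$ is a linear field $x\mapsto A_kx$ with $A_k$ antisymmetric, so $\Delta(X_k(P))=2\sum_{l,m}(A_k)^l_m\partial_l\partial_mP=0$.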
This shows the stability of $E_p$ under the action of the three vector fields $X_1, X_2, X_3$. \subsubsection{Stability of the spaces $E'_q$ under the action of the three vector fields $X_1, X_2, X_3$} Consider the map $\Pi:\mathbb{R}^4\rightarrow \mathbb{R}^3$ defining the Hopf fibration, given by: \hspace{-5mm}$\Pi(x^1,x^2,x^3,x^4)=(x^1x^3+x^2x^4,x^1x^4-x^2x^3,1/2((x^3)^2+(x^4)^2-(x^1)^2-(x^2)^2))$ \hspace{2cm}$:=(y_1,y_2,y_3)$. \bigskip The eigenfunctions of $E'_q$ ($q$ even) are the restrictions to $S^3(\rho)$ of the harmonic homogeneous polynomials of degree ${q}$ defined on $\mathbb{R}^4$ of the form $\tilde P=P\circ \Pi$ ~where ~$P$ is a harmonic homogeneous polynomial of degree $q/2$ defined on $\mathbb{R}^3$. It therefore suffices to show that, when $\tilde P$ is of the above form, for $k$ from 1 to 3,~~$X_k(\tilde P)$ is of the form $Q\circ \Pi$~ where~ $Q$ is a harmonic homogeneous polynomial of degree $q/2$ defined on $\mathbb{R}^3$. (We already know, by the preceding paragraph, that $X_k(\tilde P)$ is harmonic homogeneous of degree $q$.) 
To this end, we expand $X_k(\tilde P)$: \bigskip $\partial_1(\tilde P)=(\frac{\partial P}{\partial y^1}\circ\Pi)\frac{\partial y^1}{\partial x^1}+(\frac{\partial P}{\partial y^2}\circ\Pi)\frac{\partial y^2}{\partial x^1}+(\frac{\partial P}{\partial y^3}\circ\Pi)\frac{\partial y^3}{\partial x^1}$ \hspace{1cm}$=(\frac{\partial P}{\partial y^1}\circ\Pi)x^3+(\frac{\partial P}{\partial y^2}\circ\Pi)x^4+(\frac{\partial P}{\partial y^3}\circ\Pi)(-x^1)$ \bigskip Likewise: \bigskip $\partial_2(\tilde P)=(\frac{\partial P}{\partial y^1}\circ\Pi)x^4+(\frac{\partial P}{\partial y^2}\circ\Pi)(-x^3)+(\frac{\partial P}{\partial y^3}\circ\Pi)(-x^2)$ $\partial_3(\tilde P)=(\frac{\partial P}{\partial y^1}\circ\Pi)x^1+(\frac{\partial P}{\partial y^2}\circ\Pi)(-x^2)+(\frac{\partial P}{\partial y^3}\circ\Pi)(x^3)$ $\partial_4(\tilde P)=(\frac{\partial P}{\partial y^1}\circ\Pi)x^2+(\frac{\partial P}{\partial y^2}\circ\Pi)x^1+(\frac{\partial P}{\partial y^3}\circ\Pi)(x^4)$ \bigskip Grouping the terms and then simplifying, we obtain, for $k=1$: \bigskip $X_1(\tilde P)=2(x^1x^4-x^2x^3)(\frac{\partial P}{\partial y^1}\circ\Pi)-2(x^1x^3+x^2x^4) (\frac{\partial P}{\partial y^2}\circ\Pi)$ \hspace{1cm}$=2(y^2\frac{\partial P}{\partial y^1}-y^1\frac{\partial P}{\partial y^2})\circ\Pi$ It only remains to check that $ P_1:=y^2\frac{\partial P}{\partial y^1}-y^1\frac{\partial P}{\partial y^2}$ is harmonic on $\mathbb{R}^3$. 
We have: \bigskip $\frac{\partial^2 P_1}{(\partial y^1)^2}=y^2\frac{\partial^3 P}{(\partial y^1)^3}-y^1\frac{\partial^3 P}{(\partial y^1)^2\partial y^2}-2\frac{\partial^2 P}{\partial y^1\partial y^2}$ $\frac{\partial^2 P_1}{(\partial y^2)^2}=y^2\frac{\partial^3 P}{\partial y^1(\partial y^2)^2}+2\frac{\partial^2 P}{\partial y^1\partial y^2}-y^1\frac{\partial^3 P}{(\partial y^2)^3}$ $\frac{\partial^2 P_1}{(\partial y^3)^2}=y^2\frac{\partial^3 P}{\partial y^1(\partial y^3)^2}-y^1\frac{\partial^3 P}{\partial y^2(\partial y^3)^2}$ hence: \bigskip $\Delta_{\mathbb{R}^3} P_1=y^2\frac{\partial\Delta P}{\partial y^1}-y^1\frac{\partial\Delta P}{\partial y^2}=0$ \bigskip Stability under $X_2$ and $X_3$ is checked in the same way. This completes the proof of proposition \ref{p2.4}. \newpage \section{Proof of theorem \ref{2.3}\label{a3.7}} We present here only the detailed proof of part 3. of the theorem. The proof of parts 1. and 2. is very close to that of theorem \ref{2.1} given in appendix \ref{a3.6}. Part 1. is of course just a corollary of part 3., obtained by setting the electromagnetic potential $\Upsilon$ to zero. In fact, the scheme of the proof of part 3. that we are going to present is the same as that of part 3. of theorem \ref{2.1}; only a few additional terms appear, related to the ``spin effect''. \textbf{Proof of 3. of theorem \ref{2.3}: ``in an electromagnetic potential''}. The function $a$ satisfies: $a=\phi\beta$ ~~where ~~$\phi:\Theta\times S^1(\delta)\times S^3(\rho)\rightarrow\mathbb{R}$ ~~and ~~$\beta\in E_V(\nu)$. We have: \begin{eqnarray} \Box_{g_\mathcal P}a+Sa=0 \text{~~where ~~}g_\mathcal P=g_0+h\text{ ~~and ~~}h=\Upsilon^\flat\otimes X_2^\flat+X_2^\flat\otimes\Upsilon^\flat \label{F78} \end{eqnarray} (In the computations that follow, $g_\mathcal P$ will be written $g$ to lighten the notation, and $X_2$ will simply be written $X$.) 
In a standard coordinate system of the cell $\mathscr{C}=\Theta\times S^1(\delta)\times S^3(\rho)\times V$, we have: $\Box_ga=-|g|^{-\frac{1}{2}}\partial_i(g^{ij}|g|^{\frac{1}{2}}\partial_ja)$ ~~where ~~$|g|:=\det g=\det g_0$ and $g^{ij}=g_0^{ij}-h^{ij}+h^i_kh^{kj}$ ~~with ~~$h^{ij}=\Upsilon^iX^j+\Upsilon^jX^i$. Given the hypotheses: (i) ~~$\Upsilon^i=0$ ~if ~($i=4$ ~or ~$i>7$), ~~$X^j=0$ ~if ~($j\leqslant7$ ~and ~$j\neq4$), ~~and ~$X^4=-1$ ~since ~$g_0(X_2,Y)=1$. Then: \begin{eqnarray} \Box_ga+Sa=\Box_{g_0}a+Sa+(*)+(**) \label{F79} \end{eqnarray} where ~$(*):=|g_0|^{-\frac{1}{2}}\partial_i(|g_0|^{\frac{1}{2}}h^{ij}\partial_ja)$ ~and ~$(**):=-|g_0|^{-\frac{1}{2}}\partial_i(|g_0|^{\frac{1}{2}}h^i_kh^{kj}\partial_ja)$. \vspace{4mm} Since $g_0$ is a ``product'' metric on $\Theta\times S^1(\delta)\times S^3(\rho)\times V$, we have: \begin{multline} \Box_{g_0}a+Sa=(\Box_\Theta+\Box_{S^1\times S^3}+\Delta_V)(\phi\beta)+S\phi\beta\\=\beta(\Box_\Theta\phi+(\gamma-\lambda+\nu+S)\phi)=\beta(\Box_\Theta\phi+M^2\phi) \label{F80} \end{multline} \begin{enumerate} \item \textbf{Study of (*)}. $(*)=(*_1)+(*_2)$ ~~where ~~$(*_1):=|g_0|^{-\frac{1}{2}}\partial_i(|g_0|^{\frac{1}{2}}h^{ij}\phi\partial_j\beta)$ and ~~$(*_2):=|g_0|^{-\frac{1}{2}}\partial_i(|g_0|^{\frac{1}{2}}h^{ij}\beta\partial_j\phi)$. $(*_1)=|g_0|^{-\frac{1}{2}}(\partial_i(|g_0|^{\frac{1}{2}}\Upsilon^iX^j\phi\partial_j\beta)+ \partial_j(|g_0|^{\frac{1}{2}}\Upsilon^iX^j\phi\partial_i\beta))=0$ because $X^j\partial_j\beta=0$ ~~and ~~$\Upsilon^i\partial_i\beta=0$ ~by (i), since $\partial_i\beta=0$ ~if ~$i\leq7$. On the other hand: $(*_2):=(*_{2_1})+(*_{2_2})$ ~~where ~~$(*_{2_1})=|g_0|^{-\frac{1}{2}}\partial_i(|g_0|^{\frac{1}{2}}\Upsilon^iX^j\beta\partial_j\phi)$ and ~~$(*_{2_2})=|g_0|^{-\frac{1}{2}}\partial_j(|g_0|^{\frac{1}{2}}\Upsilon^iX^j\beta\partial_i\phi)$. 
$(*_{2_1})=-|g_0|^{-\frac{1}{2}}\partial_i(|g_0|^{\frac{1}{2}}\Upsilon^i\beta\partial_4\phi)$ ~~since $\partial_j\phi=0$ if $j>7$ ~~and ~~$X^4=-1$ ~~($\partial_4\phi:=\frac{\partial\phi}{\partial u}$). Then, since $\Upsilon^i=0$ ~for ~$i>7$: $(*_{2_1})=-\beta(|g_0|^{-\frac{1}{2}}(\partial_i(|g_0|^{\frac{1}{2}} \Upsilon^i))\partial_4\phi+\Upsilon^i\partial_i\partial_4\phi)$ \hspace{8mm} $= -\beta((\nabla_{g_0}\cdotp \Upsilon)\partial_4\phi+\Upsilon(\partial_4\phi))$. On the other hand, since $X^4=-1$: $(*_{2_2})=-|g_0|^{-\frac{1}{2}}\partial_4(|g_0|^{\frac{1}{2}}\beta \Upsilon(\phi))+\sum_{j>7}|g_0|^{-\frac{1}{2}}\partial_j(|g_0|^{\frac{1}{2}}X^j\beta \Upsilon(\phi))$. But, since $\partial_j(\Upsilon(\phi))=0$ ~if ~$j>7$: $\sum_{j>7}|g_0|^{-\frac{1}{2}}\partial_j(|g_0|^{\frac{1}{2}}X^j\beta \Upsilon(\phi))=\Upsilon(\phi)\nabla_{g_0}\cdotp (\beta X)=0$. Then, since $|g_0|$, $\beta$, and the $\Upsilon^i$ do not depend on $u$: $(*_{2_2})=-\beta\Upsilon(\partial_4\phi)$. Finally: \begin{eqnarray} (*)=(*_1)+(*_{2_1})+(*_{2_2})=-\beta((\nabla_{g_0}\cdotp \Upsilon)\partial_4\phi+2\Upsilon(\partial_4\phi)) \label{F81} \end{eqnarray} \item \textbf{Study of (**)}. $(**)=-|g_0|^{-\frac{1}{2}}\partial_i(|g_0|^{\frac{1}{2}}h^i_kh^{kj}\partial_ja)$ \hspace{7mm} $=-|g_0|^{-\frac{1}{2}}\partial_i(|g_0|^{\frac{1}{2}} (\Upsilon^iX_k+\Upsilon_kX^i)(\Upsilon^kX^j+\Upsilon^jX^k)\partial_ja)$ Then, since $\Upsilon^kX_k=0$ ~~and ~~$X^kX_k=0$: $(**)=-|g_0|^{-\frac{1}{2}}\partial_i(|g_0|^{\frac{1}{2}}X^iX^j\Upsilon_k\Upsilon^k\partial_j(\phi\beta))$ But, since $\Upsilon_k\Upsilon^k$ does not depend on the variables of $S^1\times V$: $(**)=-\Upsilon_k\Upsilon^k|g_0|^{-\frac{1}{2}}(\partial_i(|g_0|^{\frac{1}{2}} X^iX^j\beta\partial_j\phi)+\partial_i(|g_0|^{ \frac{1}{2}}X^iX^j\phi\partial_j\beta))$. 
Since $X^j\partial_j\phi=-\partial_4\phi$ (because $\partial_j\phi=0$ for $j>7$) ~and $X(\beta)=0$, we obtain: $(**)=\Upsilon_k\Upsilon^k|g_0|^{-\frac{1}{2}}\partial_i(|g_0|^{\frac{1}{2}}X^i\beta\partial_4\phi)$. Considering $i=4$ and then $i>7$, we find: $(**)=-\Upsilon_k\Upsilon^k(\beta\partial^2_4\phi-|g_0|^{-\frac{1}{2}}(\sum_{i>7}\partial_i(|g_0|^{\frac{1}{2}} X^i\beta))\partial_4\phi)$. This can be written: $(**)=-\Upsilon_k\Upsilon^k(\beta\partial^2_4\phi-(\nabla_{g_0}\cdotp (\beta X))\partial_4\phi)$. And, since $\nabla_{g_0}\cdotp (\beta X)=X(\beta)+\beta\nabla_{g_0}\cdotp X=0$: \begin{eqnarray} (**)=-\Upsilon_k\Upsilon^k\beta\partial^2_4\phi \label{F82} \end{eqnarray} \item \textbf{End of the proof of 3}. By \ref{F78}, \ref{F79}, \ref{F80}, \ref{F81} and \ref{F82}, we have: $0=\Box_ga+Sa=(\Box_\Theta\phi+M^2\phi-(\nabla_{g_0}\cdotp \Upsilon)\partial_4\phi-2\Upsilon(\partial_4\phi)-\Upsilon_k\Upsilon^k\partial^2_4\phi)\beta$. Hence: \begin{eqnarray} 0=\Box_\Theta\phi+M^2\phi-(\nabla_{g_0}\cdotp \Upsilon)\partial_4\phi-2\Upsilon(\partial_4\phi)-\Upsilon_k\Upsilon^k\partial^2_4\phi \label{F83} \end{eqnarray} \begin{enumerate} \item Study of the term $(\nabla_{g_0}\cdotp \Upsilon)\partial_4\phi+2\Upsilon(\partial_4\phi)$. $\Upsilon$ was chosen in the form: $\Upsilon=A +\varrho C$ ~~where ~~$A=\sum_{i=0}^3A^i\frac{\partial}{\partial x^i}$ ~~and ~~$C=\sum_{k=1}^3B_kL_{k_{S^3}}$; here $\varrho$ is the gyromagnetic constant, $A$ is a vector field defined on $\Theta$, and $C$ is a vector field tangent to $S^3(\rho)$ ($A$ and $C$ are regarded as defined on $\Theta\times S^3(\rho)$). We have: $(\nabla_{g_0}\cdotp \Upsilon)\partial_4\phi+2\Upsilon(\partial_4\phi)=\sum_{i\leq7}(|g_0|^{-\frac{1}{2}} \partial_i(|g_0|^{\frac{1}{2}} \Upsilon^i)\partial_4\phi+2\Upsilon^i\partial_i\partial_4\phi)$. 
Since $\Upsilon^4=0$, this can be written in the form: $\sum_{i=0}^3(|g_0|^{-\frac{1}{2}}\partial_i(|g_0|^{\frac{1}{2}} \Upsilon^i)\partial_4\phi+2\Upsilon^i\partial_i\partial_4\phi)+\sum_{i=5}^7(|g_0|^{-\frac{1}{2}} \partial_i(|g_0|^{\frac{1}{2}} \Upsilon^i)\partial_4\phi+2\Upsilon^i\partial_i\partial_4\phi)$. Hence: $(\nabla_{g_0}\cdotp \Upsilon)\partial_4\phi+2\Upsilon(\partial_4\phi)=(\nabla_\Theta\cdotp A)\partial_4\phi+2A(\partial_4\phi)+\varrho(\nabla_{S^3}\cdotp C)\partial_4\phi+2\varrho C(\partial_4\phi)$. But $\nabla_{S^3}\cdotp C=\sum_{k=1}^3B_k(\nabla_{S^3}\cdotp L_{k_{S^3}})=0$ since ~$\nabla_{S^3}\cdotp L_{k_{S^3}}=0$ ~~(cf. \ref{ss3.1}). Therefore: \begin{eqnarray} (\nabla_{g_0}\cdotp \Upsilon)\partial_4\phi+2\Upsilon(\partial_4\phi)=(\nabla_\Theta\cdotp A)\partial_4\phi+2A(\partial_4\phi)+2\varrho C(\partial_4\phi) \label{F84} \end{eqnarray} \item Study of the term $\Upsilon_k\Upsilon^k\partial_4^2\phi$. Since $A$ and $C$ are $g_0$-orthogonal, we have: $\Upsilon_k\Upsilon^k=A_kA^k+\varrho^2C_kC^k$. But, since the three vector fields $L_{j_{S^3}}$ are pairwise $g_0$-orthogonal and $g_{0_{S^3}}(L_{j_{S^3}},L_{j_{S^3}})=\rho^2$ ~~where ~~$\rho$ is the radius of the sphere $S^3(\rho)$, ~we have: $C_kC^k=g_0(C,C)=\rho^2\sum_{j=1}^3B^2_j$. Hence: \begin{eqnarray} \Upsilon_k\Upsilon^k\partial_4^2\phi=(A_kA^k+\varrho^2\rho^2\sum_{j=1}^3B^2_j)\partial_4^2\phi \label{F85} \end{eqnarray} Finally, equation \ref{F83} becomes: \begin{eqnarray} 0=\Box_\Theta\phi+M^2\phi-(\nabla_{g_0}\cdotp A)\partial_4\phi-2A(\partial_4\phi)-A_kA^k\partial^2_4\phi-(\alpha) \label{F86} \end{eqnarray} where $(\alpha):=2\varrho C(\partial_4\phi)+\varrho^2\rho^2(\sum_{k=1}^3B^2_k)\partial^2_4\phi$, that is: $(\alpha):=2\varrho\sum_{k=1}^3B_kS_k(\partial_4\phi)+\varrho^2\rho^2(\sum_{k=1}^3B^2_k)\partial^2_4\phi$, where the $S_k$ were specified in definition \ref{d2.21}. 
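The orthogonality facts used for $C_kC^k$ --- the $L_{k_{S^3}}$ are tangent to the spheres centred at the origin, pairwise orthogonal, and $|L_k|^2=|x|^2=\rho^2$ on $S^3(\rho)$ --- can be checked coordinate-wise. A sketch, again with the standard quaternionic expressions of the $L_k$ (an assumption here, since the document fixes them elsewhere):

```python
import sympy as sp

x1, x2, x3, x4 = sp.symbols('x1 x2 x3 x4', real=True)

# values of the three parallelizing fields at the point (x1, x2, x3, x4)
L1 = sp.Matrix([-x2,  x1,  x4, -x3])
L2 = sp.Matrix([-x3, -x4,  x1,  x2])
L3 = sp.Matrix([-x4,  x3, -x2,  x1])
N  = sp.Matrix([ x1,  x2,  x3,  x4])   # radial field, normal to S^3(rho)
r2 = x1**2 + x2**2 + x3**2 + x4**2     # |x|^2, equal to rho^2 on S^3(rho)

fields = [L1, L2, L3]
for i in range(3):
    assert sp.expand(fields[i].dot(N)) == 0            # tangent to the spheres |x| = const
    for j in range(3):
        d = sp.expand(fields[i].dot(fields[j]))
        assert d == (sp.expand(r2) if i == j else 0)   # orthogonal, squared norm |x|^2
```

These identities give $g_0(C,C)=\rho^2\sum_jB_j^2$ pointwise on $S^3(\rho)$, as used in \ref{F85}.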
Equation \ref{F86} can also be put in the form: \begin{eqnarray} 0=-\sum_{j=0}^3\varepsilon_j(\frac{\partial}{\partial x^j}+A^j\frac{\partial}{\partial u})^2\phi+M^2\phi-(\alpha) \label{F87} \end{eqnarray} where $\varepsilon_0=-1$, ~~$\varepsilon_1=\varepsilon_2=\varepsilon_3=1$. To obtain the equation for $a_c$ given by the theorem, we consider the isomorphism (cf. \ref{s2.8}): $\mathbb{C}_{\lambda,\nu}:E_{S^1(\delta)}\otimes E_p\rightarrow E_p^{\mathbb{C}}$ ~~where ~~$E_p=E_{S^3(\rho)}(\nu)$. One easily checks that: \begin{center} \begin{tabular}{rcl} $\mathbb{C}_{\lambda,\nu}((\Box_\Theta\phi)_x(.))$ &$=$& $(\Box_\Theta a_c)(x)$\\[0.7em] $\mathbb{C}_{\lambda,\nu}((\frac{\partial\phi}{\partial u})_x(.))$ &$=$& $-iQ^+a_c(x)$\\[0.7em] $\mathbb{C}_{\lambda,\nu}((\frac{\partial^2\phi}{\partial u^2})_x(.))$ &$=$& $-{Q^+}^2a_c(x)$\\[0.7em] \end{tabular} \end{center} Composing, for each $x\in\Theta$, both sides of equation \ref{F87} with $\mathbb{C}_{\lambda,\nu}$, we obtain equation \ref{F49} of theorem \ref{2.3}: \[\sum_{j=0}^3\varepsilon_j(i\frac{\partial}{\partial x^j}+Q^+\Upsilon^j)^2a_c+M^2a_c-2\varrho Q^+\sum_{k=1}^3B^k\hat S_k(a_c)+{Q^+}^2\varrho^2\rho^2|B|^2a_c=0\] \end{enumerate} \end{enumerate} \newpage \section{The choice of a pseudo-Riemannian manifold} \label{a3.8} \begin{enumerate} \item The manifold structure. \begin{enumerate} \item The set of real numbers $\mathbb{R}$.\\ If we want to study a map defined on a finite set with values in another finite set, both having a very large number of elements and being equipped, for instance, with an order relation, we have a priori very few tools at our disposal. One approach consists in ``intelligently filling in the gaps'' of these two sets by adding virtual elements, and then extending the map to the two new, now ``continuous'', sets. 
The first step is treated with full mathematical precision in the construction of $\mathbb{Q}$ and then of $\mathbb{R}$ from $\mathbb{N}$. This non-trivial construction (especially the passage from $\mathbb{Q}$ to $\mathbb{R}$) makes it possible to introduce the notions of ``limit'', ``derivative'', ``differential equation'', etc., attached to functions from $\mathbb{R}$ to $\mathbb{R}$, and this gives great power to the method of studying functions, which can then, if need be, be restricted to the finite sets of the original problem. In fact, the purely axiomatic construction of $\mathbb{R}$ introduces the whole toolbox of what is called ``real analysis''. \item The vector space $\mathbb{R}^n$.\\ The construction of $\mathbb{R}^n$ as the set of $n$-tuples of real numbers is trivial, but it introduces the important notion of \textbf{dimension} through its natural structure of vector space over $\mathbb{R}$. The standard (global) topology of $\mathbb{R}^n$ is ``poor''. The differential analysis developed on $\mathbb{R}$ extends naturally to $\mathbb{R}^n$. \item Topological manifolds.\\ A topological manifold is, by definition, a topological space every point of which admits a neighborhood homeomorphic to $\mathbb{R}^n$. The interest of this structure is that it keeps the local topological properties of $\mathbb{R}^n$ while allowing a great variety of global topologies. Unlike $\mathbb{R}^n$, a topological manifold carries no canonical algebraic structure. This is a good point for the representation of space-time, the algebraic structure of $\mathbb{R}^n$ having proved too rigid to have a physical meaning (which is what led to the construction of general relativity). 
Of course, all the analytic tools tied to the topology can be used (continuity, etc.); however, differential analysis on $\mathbb{R}^n$ (which comes from that on $\mathbb{R}$) does not extend to topological manifolds. (One can, locally, transfer matters to $\mathbb{R}^n$ by a homeomorphism, but differential calculus defined in this way on an open set of the topological manifold depends entirely on the choice of the homeomorphism and, since a priori none is privileged, this is of no interest.) To introduce differential analysis correctly on a topological manifold, it is necessary to give it an additional structure by means of a ``differentiable atlas''. \item Differentiable manifolds.\\ A differentiable manifold is, by definition, a pair consisting of a topological manifold and a ``complete atlas'' defined on this manifold (we refer the reader to the specialized literature for a precise definition). An atlas is a set of ``charts'' (homeomorphisms from an open set of the topological manifold to an open set of $\mathbb{R}^n$), and this notion has a very important physical meaning. The choice of a chart of this atlas (one also says: a coordinate system) can be seen as the choice of an ``observer'' who describes what happens on an open set of the manifold by transferring it to $\mathbb{R}^n$. What is conceptually very important is that the datum of a complete atlas (differentiable, of class $C^k$, etc.) on the topological manifold makes it possible to reconstruct a large part of differential analysis on this manifold, and this \textbf{independently of any particular choice of chart in the atlas}. This reconstruction begins with that of a ``tangent space'' at each point of the manifold, which is a vector space over $\mathbb{R}$ whose dimension equals the topological dimension of the manifold (this tangent space is constructed independently of any particular choice of chart in the atlas). 
From there, all the ``objects'' of differential calculus can be redefined without difficulty: tensor fields, differentials, etc. All these notions are such that they do not depend on a particular chart of the atlas, in other words, on the ``viewpoint'' of an observer on the manifold. In short, a differentiable manifold is a topological space on which a (complete) set of observers is defined, in such a way that most of the analytic tools known on $\mathbb{R}^n$ can be used; moreover, all the notions of this calculus have definitions independent of the choice of observer. There is, a priori, no algebraic structure on (the set underlying) the manifold, hence no physically artificial notion is forced upon us (as was the case, for example, in $\mathbb{R}^n$ with objects such as lines, affine subspaces, the origin, etc.); this leaves great freedom to introduce notions that will, in turn, have a physical meaning. \end{enumerate} \item The structure of a pseudo-Riemannian manifold.\\ A pseudo-Riemannian manifold is a pair consisting of a differentiable manifold $\mathscr{M}$ and a field $g$ of quadratic forms defined on this manifold. The field of quadratic forms is the datum, at every point $x$ of $\mathscr{M}$, of a symmetric bilinear form $g_x$ on the tangent space at that point. We impose a priori no restriction on the signature of $g_x$, nor even on whether it is degenerate or not. \textbf{In the theory presented in this paper, all of physics is described by the sole datum of a pseudo-Riemannian manifold}. All the usual ``objects'' are defined from $g$. The choice of a field of quadratic forms is therefore an important fact; it is essentially justified by the ease with which it allows the notions of ``distance'' and ``time'' to be defined, while leaving ample room for ``manipulations''. 
It is obviously necessary that a physical theory eventually connect to the standard notions of ``distance'' and ``time'' attached to an observer. Of course, one should not hesitate to ``try'' objects other than a field of quadratic forms if need be, but, for the moment, the choice that has been made seems to work perfectly well. \item In summary.\\ Even if it is conceivable that ``space-time'' could be described by a finite set endowed with a structure ($X$, Struct), it is probably far more fruitful, for the sole purpose of describing this structure, to ``embed'' ($X$, Struct) into a pseudo-Riemannian manifold ($\mathscr{M}$, $g$) so as to be able to use the full power of the mathematical analysis developed on the latter. I am not sure, moreover, that there would be much interest in specifying a possible pair ($X$, Struct) and its embedding into ($\mathscr{M}$, $g$). We have therefore chosen in this paper to represent the universe by a pseudo-Riemannian manifold ($\mathscr{M}$, $g$) of ``large'' dimension. As stated in the introduction, this manifold will be considered ``totally anarchic''. The reader may refer to Chapter 14 of the manuscript \cite{vaugon-1}, where the meaning of ``totally anarchic manifold'' is made precise in the case where the manifold is Lorentzian of dimension 4 (which extends naturally to our manifold ($\mathscr{M}$, $g$)). This manifold is ``strewn'' with singularities of various types (big bangs, big crunches, black holes, etc.). The ``big bang'' singularity that we seem to observe (roughly homogeneous and isotropic ``at large scales'') is only a detail of this ``anarchic'' universe. This point of view is quite different from the one commonly adopted in cosmology, although it changes nothing fundamental in that field. The point of view we adopt rules out any attempt at precision in the global representation of the universe. 
\end{enumerate} \section{Determinism and approximations} \label{a3.9} We consider a typed domain $(\mathscr{D}, g, \mathscr{A})$ (see the mathematical preliminaries) and recall that a ``type'' is the datum of a geometric condition imposed on $(\mathscr{D}, g)$. It is natural to say that such a typed domain is \textbf{totally deterministic} if the knowledge of the tensor $g$ on a subdomain $\mathscr{D}'$ of $\mathscr{D}$ completely determines $g$ on $\mathscr{D}$; in other words: if a pair $(\mathscr{D}, g')$ satisfies the same geometric condition as the one defining the type of $(\mathscr{D}, g)$ and if $g' = g$ on $\mathscr{D}'$, then necessarily $g = g'$ on $\mathscr{D}$. Checking that a typed domain is totally deterministic is in general a very complex problem, usually presented under the name of a ``Cauchy problem''. The kind of Cauchy problem just stated can be generalized: the domain $\mathscr{D}'$ may, for instance, be replaced by a ``hypersurface'' of $\mathscr{D}$, the domain $\mathscr{D}$ itself need not be prescribed a priori, etc., but this would require a rigorous presentation that is rather long to set out. While these problems are mathematically interesting, the difficulty of solving them (even for very simply stated cases), due essentially, in our setting, to the fact that the manifold $\mathscr{M}$ has large dimension and is locally diffeomorphic to a product $\Theta \times K$ (where $\Theta$ is an open subset of $\mathbb{R}^4$ and $K$ a compact manifold), is the reason why we do not pursue this point of view in this paper. In this appendix we content ourselves with briefly describing a few approximation methods that make the typed domains we define ``sufficiently deterministic'' to be humanly interesting. 
These ``approximation methods'' are none other than those commonly used in standard physics, here adapted to the spaces under consideration. Take as an example a domain of ``fluid'' type. The equations given by Theorem \ref{t1.1} are certainly not ``sufficiently deterministic'': the differential operators are relative to the metric tensor $g$ itself. An important particular case in which these equations can become usable is the one where, in a specific coordinate domain $\Theta \times K$, the metric tensor $g$ takes the form $g_{ij} = {g_0}_{ij} + h_{ij}$, where $g_0$ is a neutral-potential metric such that $g_0 |_\Theta$ is the Minkowski metric and the functions $h_{ij}$ are $\ll 1$ (the $\partial_kh_{ij}$ being controlled as well). The equations of Theorem \ref{t1.1} can then be rewritten approximately by replacing the differential operators relative to $g$ with those relative to $g_0$. One must check, however, that the terms discarded by this procedure (which come from the Christoffel symbols) are indeed ``negligible'' compared with the remaining ones. This is still far from enough to make the equations usable in a classical sense: the ``pressure'' terms supply too many ``unknowns'' relative to the number of equations. The cases studied are then those where one assumes, for example, that these terms vanish (or are negligible), which corresponds to a truly perfect fluid (Definition \ref{def:5}), or, more generally, those where additional equations are imposed on the pressure terms (equations of state). 
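The replacement of the $g$-operators by $g_0$-operators rests on a standard weak-field computation; the following sketch (stated, for simplicity, under the assumption that $g_0$ has constant coefficients in the chart, as its Minkowski factor does) makes the discarded terms explicit:

```latex
% Weak-field expansion: g_{ij} = g_{0ij} + h_{ij} with |h_{ij}| << 1
% and the derivatives \partial_k h_{ij} controlled.
% The Christoffel symbols of g are then of first order in h:
\Gamma^k_{ij}
  = \tfrac{1}{2}\, g^{kl}\bigl(\partial_i g_{jl} + \partial_j g_{il} - \partial_l g_{ij}\bigr)
  = \tfrac{1}{2}\, g_0^{kl}\bigl(\partial_i h_{jl} + \partial_j h_{il} - \partial_l h_{ij}\bigr)
    + O\bigl(|h|\,|\partial h|\bigr),
% so covariant derivatives reduce to flat derivatives up to controlled terms:
\nabla_i T_j \;=\; \partial_i T_j - \Gamma^k_{ij} T_k
             \;=\; \partial_i T_j + O\bigl(|\partial h|\,|T|\bigr).
```

This is why the terms discarded in the approximation are exactly those coming from the Christoffel symbols, and why their size must be checked against the terms that remain.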
If one now assumes that the remaining unknown functions do not depend, in the coordinate system $\Theta \times K$, on the ``variables'' of $K$, one recovers the equations for fluids (electrically charged or not) of classical physics (this is how Newtonian physics appears as an approximation of the standard theory of general relativity, itself an approximation of the physics presented here). The approximations just described, tied to the metric tensor being ``close'' to a neutral-potential metric $g_0$, have as their main interest to show that the equations of classical physics (which are sufficiently deterministic) do indeed follow from those obtained in Theorem \ref{t1.1}. \textbf{The ``approximations'' that consist in giving \textbf{explicitly} an (``approximate'') metric tensor $g$ on a domain $\mathscr{D}$ are in fact far more interesting}: they allow precise computations in a general setting (which does not necessarily assume that $g$ is ``close'' to some $g_0$). These ``approximate types'' are the ones used in standard general relativity under the name of ``exact solutions of the Einstein equation'', and they adapt without difficulty to the domains $\mathscr{D}$ of the large-dimensional manifold $\mathscr{M}$ considered here; but it is above all the ``potential'' types defined in \ref{s1.4} that allow the precise description of many physical phenomena through their geodesics, as explained in \ref{s1.3}, and that recover simply, in approximation, Newtonian physics and standard electromagnetism. It is, in my opinion, by continuing in this direction, through the explicit specification of ``metrics'' $g$ on domains $\mathscr{D}$, that one will efficiently study more complex domains of type ``fluid with pressure or other'', which concern Chapter 1 of this paper. 
Chapter 2, on quantum phenomena, considers only typed domains defined from the tensor $g$. It is important to note that, technically, it is by imposing \textbf{invariance under group actions} that one constructs typed domains simple enough to be interesting. Here are a few examples: \begin{itemize} \item Condition \ref{ei12} given in Definition \ref{def:4} of a fluid-type domain, which expresses the fact that the quantum effects of electromagnetism are neglected, can be stated by saying that the metric tensor $g$ is invariant under the group of diffeomorphisms generated by the vector field $Y$. \item In various places in this paper, conditions imposed on functions defined on a typical cell $\mathscr{C} = \Theta \times K_1 \times \cdots \times K_l$ (which often serve to define the metric tensor) are stated by saying that these functions do not depend on the variables of $K_i$. This amounts to saying that these functions are invariant under the action of a group acting transitively on $K_i$, which can also be expressed by letting the group act on the manifold $\mathscr{M}$. \item Many important examples of ``typed domains'' in standard general relativity (which, as we have already seen, adapt without difficulty to $\mathscr{M}$) are defined by assuming invariance under classical groups. One can cite: the Schwarzschild, Reissner-Nordström and Friedmann domains, invariant under the action of the group $SO(3)$, as well as the example presented in \ref{s1.7}; the Kerr domains, invariant under $SO(2)$; etc. \end{itemize} It would be worthwhile to detail all these approximation methods far more than the few lines of this appendix do, but this is left as an exercise for the reader (an exercise that is at times difficult).
1512.04275
\section*{Supplemental Material on} \begin{center} {\large\bf A Tweezer for Chimeras in Small Networks}\\[5mm] Iryna Omelchenko$^1$, Oleh E. Omel'chenko$^2$, Anna Zakharova$^1$, Matthias Wolfrum$^2$, and Eckehard Sch{\"o}ll$^1$\\[4mm] $^1${\it Institut f{\"u}r Theoretische Physik, Technische Universit\"at Berlin, Hardenbergstra\ss{}e 36, 10623 Berlin, Germany}\\ $^2${\it Weierstrass Institute, Mohrenstra\ss{}e 39, 10117 Berlin, Germany}\\[8mm] \end{center} \twocolumngrid \subsection{Phase reduction of the main model} \numberwithin{equation}{subsection} \setcounter{equation}{0} Let us denote $$ x_k(t) = \sqrt{\varepsilon} u_k(t),\qquad \dot{x}_k(t) = \sqrt{\varepsilon} v_k(t); $$ then the original system of $N$ coupled Van der Pol oscillators, Eq.~(1), can be rewritten as a $2N$-dimensional dynamical system of the form \begin{eqnarray} \dot{u}_k &=& v_k,\nonumber\\[2mm] \dot{v}_k &=& \varepsilon (1-u_k^2)v_k - u_k \nonumber\\[1mm] & + & \dfrac{a}{R} \sum\limits_{j=1}^R \left[ (u_{k-j} - u_k) + \sigma_{-}(v_{k-j} - v_k) \right] \nonumber\\[1mm] & + & \dfrac{a}{R} \sum\limits_{j=1}^R \left[ (u_{k+j} - u_k) + \sigma_{+}(v_{k+j} - v_k) \vphantom{\sum} \right]. \label{System:UV_app} \end{eqnarray} We perform a phase reduction in order to determine a parameter set appropriate for the observation of chimera states. Assuming that~$\varepsilon$ and~$a$ are both small, we can apply the averaging procedure to Eqs.~(\ref{System:UV_app}). To this end we substitute the ansatz $$ u_k = r_k \sin(t+\theta_k),\qquad v_k = r_k \cos(t+\theta_k) $$ into system~(\ref{System:UV_app}) and average out the fast time~$t$, assuming that the amplitude~$r_k(t)$ and the phase~$\theta_k(t)$ are slowly varying functions. As a result we obtain the system \begin{eqnarray*} \dot{r}_k &=& \dfrac{\varepsilon}{8} r_k (4-r_k^2) \\[1mm] &+& \dfrac{a}{2R} \sum\limits_{j=1}^R \left[ r_{k-j} \sin(\theta_{k-j} - \theta_k) \right. \\ & & \hphantom{\dfrac{a}{2R} \sum\limits_{j=1}^R} \left. 
+ ~\sigma_{-}(r_{k-j}\cos(\theta_{k-j} -\theta_k) - r_k) \right]\\[1mm] &+& \dfrac{a}{2R} \sum\limits_{j=1}^R \left[ r_{k+j} \sin(\theta_{k+j} - \theta_k) \right. \\ & & \hphantom{\dfrac{a}{2R} \sum\limits_{j=1}^R} \left. + ~\sigma_{+}(r_{k+j}\cos(\theta_{k+j} -\theta_k) - r_k) \right], \\ \end{eqnarray*} \begin{eqnarray*} r_k \dot{\theta}_k &=& \dfrac{a}{2R} \sum\limits_{j=1}^R \left[ - (r_{k-j}\cos(\theta_{k-j} - \theta_k) - r_k) \right. \\ & & \hphantom{\dfrac{a}{2R} \sum\limits_{j=1}^R} \left. - ~\sigma_{-} r_{k-j} \sin(\theta_k - \theta_{k-j})\right] \\[1mm] &+& \dfrac{a}{2R} \sum\limits_{j=1}^R \left[ - (r_{k+j}\cos(\theta_{k+j} - \theta_k) - r_k) \right. \\ & & \hphantom{\dfrac{a}{2R} \sum\limits_{j=1}^R} \left. - ~\sigma_{+} r_{k+j} \sin(\theta_k - \theta_{k+j})\right] \end{eqnarray*} which can also be rewritten as follows \begin{eqnarray} \dot{r}_k &=& \dfrac{\varepsilon}{8} r_k \left(\left( 4-\dfrac{4a}{\varepsilon}(\sigma_{-} + \sigma_{+}) \right) - r_k^2 \right) \\[2mm] &+& \dfrac{a}{2R} \sqrt{1+\sigma_{-}^2} \sum\limits_{j=1}^R r_{k-j} \cos (\theta_k - \theta_{k-j} + \alpha_{-}) \nonumber \\[2mm] &+& \dfrac{a}{2R} \sqrt{1+\sigma_{+}^2} \sum\limits_{j=1}^R r_{k+j} \cos (\theta_k - \theta_{k+j} + \alpha_{+}), \nonumber \label{Rdot} \end{eqnarray} \begin{eqnarray} \dot{\theta}_k &=& a - \dfrac{a}{2R}\sqrt{ 1 + \sigma_{-}^2}\sum\limits_{j=1}^R \dfrac{r_{k-j}}{r_k}\sin(\theta_k - \theta_{k-j} + \alpha_{-}) \nonumber\\[2mm] &\phantom{=}& \phantom{a} - \dfrac{a}{2R}\sqrt{ 1 + \sigma_{+}^2}\sum\limits_{j=1}^R \dfrac{r_{k+j}}{r_k}\sin(\theta_k - \theta_{k+j} + \alpha_{+}),\nonumber \label{ThetaDot} \end{eqnarray} If $0<a \ll\varepsilon$, from Eq.~(A.2) we find that~$r_k\approx 2$ is a stable fixed point. 
Substituting this into the second equation, we obtain a Kuramoto-like system \begin{eqnarray} \dot{\theta}_k &=& a - \dfrac{a}{2R}\sqrt{ 1 + \sigma_{-}^2}\sum\limits_{j=1}^R \sin(\theta_k - \theta_{k-j} + \alpha_{-}) \nonumber\\[-1mm] &\phantom{=}& \phantom{a} - \dfrac{a}{2R}\sqrt{ 1 + \sigma_{+}^2}\sum\limits_{j=1}^R \sin(\theta_k - \theta_{k+j} + \alpha_{+}) \label{System:Kuramoto} \end{eqnarray} where \begin{equation} \alpha_\pm = \mathrm{arccot}\:\sigma_\pm = \frac{\pi}{2} - \arctan \sigma_\pm. \label{Formula:alpha_sigma} \end{equation} Note that for~$\sigma_{-} = \sigma_{+}$, equation~(\ref{System:Kuramoto}) is equivalent to the system considered in~\cite{S_OME10a,S_WOL11,S_WOL11a}. This suggests a range of parameters~$\sigma_\pm$ where chimera states should be expected, i.e., $\alpha_\pm \approx \pi/2$. \subsection{Role of nonlinearity and system size} \numberwithin{equation}{subsection} \setcounter{equation}{0} \begin{figure*}[Ht!] \includegraphics[height=0.8\linewidth, angle=270]{FIGURE1_suppl} \caption{(Color online) Standard deviation of the mean phase velocity profiles for $\Delta T=100000$, $a=0.02$. (a) Effect of parameter~$\varepsilon$ for~$K_s=0.5$ (black circles), $K_s=1$ (red squares), $K_s=1.5$ (blue diamonds), and $K_a=2$, $N=24$, $R=8$. Insets show examples of mean phase velocities profiles for $K_s=0.5$ and (A)~$\varepsilon=0.2$, (B)~$\varepsilon=2$, (C)~$\varepsilon=6$; (b)~Role of the system size~$N$: $K_s=0.5$ (black circles), $K_s=1$ (red squares), $K_s=3$ (blue diamonds), and $\varepsilon=0.2$, $K_a=2$, $r=R/N=1/3$. } \label{fig_measures} \end{figure*} To analyse the influence of the system parameters on the controlled chimera states, we use the standard deviation of the mean phase velocity profile $\Delta_{\omega}= \sqrt{\dfrac{1}{N}\sum\limits_{k=1}^N (\omega_k - \overline{\omega})^2}$, where $\overline{\omega}=\dfrac{1}{N}\sum\limits_{k=1}^N \omega_k$. 
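As an illustration, the reduced phase model~(\ref{System:Kuramoto}) and the diagnostic $\Delta_\omega$ defined above are straightforward to evaluate numerically. The sketch below uses simple explicit Euler stepping and our own function names; it is not the integration scheme used for the figures:

```python
import numpy as np

def kuramoto_like_rhs(theta, a, R, sigma_minus, sigma_plus):
    """Right-hand side of the reduced phase model: each oscillator couples
    to its R left and R right neighbors on a ring, with phase lags
    alpha_pm = pi/2 - arctan(sigma_pm)."""
    alpha_m = np.pi / 2 - np.arctan(sigma_minus)
    alpha_p = np.pi / 2 - np.arctan(sigma_plus)
    dtheta = np.full(theta.size, float(a))
    for j in range(1, R + 1):
        left = np.roll(theta, j)     # theta_{k-j}
        right = np.roll(theta, -j)   # theta_{k+j}
        dtheta -= a / (2 * R) * np.sqrt(1 + sigma_minus**2) \
            * np.sin(theta - left + alpha_m)
        dtheta -= a / (2 * R) * np.sqrt(1 + sigma_plus**2) \
            * np.sin(theta - right + alpha_p)
    return dtheta

def mean_phase_velocity_spread(theta0, a, R, sigma_minus, sigma_plus,
                               dt=0.05, steps=5000):
    """Euler integration; returns the mean phase velocities omega_k
    (total winding over the run) and their standard deviation Delta_omega."""
    theta = theta0.copy()
    for _ in range(steps):
        theta = theta + dt * kuramoto_like_rhs(theta, a, R,
                                               sigma_minus, sigma_plus)
    omega = (theta - theta0) / (steps * dt)
    return omega, float(np.std(omega))
```

For a fully synchronized initial condition all $\omega_k$ coincide and $\Delta_\omega$ vanishes; random initial phases for $N=24$, $R=8$ can develop the nonuniform velocity profiles discussed below.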
Larger values of~$\Delta_{\omega}$ correspond to a well-pronounced arc-like mean phase velocity profile, characteristic of chimera states. Fig.~{\ref{fig_measures}} depicts the influence of the nonlinearity parameter~$\varepsilon$ of the individual Van der Pol unit and of the system size~$N$ on the mean phase velocity profiles. Increasing~$\varepsilon$ changes the dynamics of the individual elements from regular sinusoidal oscillations to relaxation oscillations. Fig.~{\ref{fig_measures}}(a) shows that for small values of~$\varepsilon$ the chimera states are well pronounced (inset A, black circles denoting~$K_s=0.5$), for intermediate values the difference between maximum and minimum phase velocity is very small (inset B), and for even larger~$\varepsilon$ it increases again (inset C). When the symmetric control becomes stronger ($K_s=1$ or $1.5$, shown by red squares and blue diamonds, respectively), the chimera states are again most pronounced at intermediate values of~$\varepsilon$, and larger nonlinearity again decreases~$\Delta_{\omega}$, but the maximum is shifted to larger~$\varepsilon$: for larger $\varepsilon$ the amplitude of the limit cycle grows, so a larger $K_s$ matches the optimum control better. Fig.~{\ref{fig_measures}}(b) demonstrates the dependence of~$\Delta_{\omega}$ on the system size for three values of the stabilizing control parameter $K_s$. We keep the coupling radius $r=R/N=1/3$ fixed. For small systems, $\Delta_{\omega}$ increases with the system size, followed by a saturation of its value for larger systems. Optimum control for $\varepsilon=0.2$ occurs at $K_s=0.5$, in accordance with Fig.~5(a) of the main paper. \subsection{Systems of inhomogeneous oscillators} \numberwithin{equation}{subsection} \setcounter{equation}{0} \begin{figure*}[Ht!] 
\includegraphics[height=\linewidth, angle=270]{Figure_EpsDistribution} \caption{(Color online) Exemplary realizations of $\varepsilon_k$ for $\varepsilon_{mean}=0.2$ and standard deviations (a)~$\delta_{\varepsilon}=0.0001$; (b)~$\delta_{\varepsilon}=0.001$; (c)~$\delta_{\varepsilon}=0.01$, (d)~$\delta_{\varepsilon}=0.05$.} \label{fig_eps_distr} \end{figure*} \begin{figure*}[Ht!] \includegraphics[height=\linewidth, angle=270]{Figure_InhomogenVDP} \caption{(Color online) Mean phase velocities for a system of $N=24$ oscillators, and $R=8$, $a=0.02$, $K_s=0.5$, $K_a=2$. Parameters $\varepsilon_k$ are drawn from a Gaussian distribution with $\varepsilon_{mean}=0.2$ as shown in the corresponding panels of Fig.~\ref{fig_eps_distr}(a)-(d).} \label{fig_inhomogenVDP} \end{figure*} To prove the robustness of our control scheme, we consider a system of inhomogeneous Van der Pol oscillators: \begin{eqnarray} \ddot{x}_k &=& (\varepsilon_k - x_k^2)\dot{x}_k - x_k \nonumber\\[-1mm] &+& \dfrac{1}{R} \sum\limits_{j=1}^R \left[ a_{-} (x_{k-j}-x_k) + b_{-}(\dot{x}_{k-j} - \dot{x}_k) \right] \nonumber\\[-2mm] &+& \dfrac{1}{R} \sum\limits_{j=1}^R \left[ a_{+} (x_{k+j}-x_k) + b_{+}(\dot{x}_{k+j} - \dot{x}_k) \vphantom{\sum} \right] \label{Eq:VdP_inhomogen} \end{eqnarray} The individual Van der Pol oscillators have nonidentical nonlinearity parameters $\varepsilon_k$ and hence different frequencies, where $\varepsilon_k$ is chosen randomly from a normal (Gaussian) distribution with mean value $\varepsilon_{mean}$ and standard deviation $\delta_{\varepsilon}$. We fix $\varepsilon_{mean}=0.2$, and vary $\delta_{\varepsilon}$ to increase the inhomogeneity. Fig.~\ref{fig_eps_distr} demonstrates exemplary realizations of $\varepsilon_k$ with increasing width of the Gaussian distribution. Fig.~\ref{fig_inhomogenVDP} shows the mean phase velocities for the system~(\ref{Eq:VdP_inhomogen}) of $N=24$ Van der Pol oscillators, where $\varepsilon_k$ is taken from Fig.~\ref{fig_eps_distr}. 
For small inhomogeneity, the controlled chimera state is robust. With increasing inhomogeneity, chaotic elements start to appear in the coherent domain, leading eventually to its destruction for large inhomogeneity as shown in Fig.~\ref{fig_inhomogenVDP}(d). \onecolumngrid ~~~ \twocolumngrid \subsection{Control of chimeras in small networks of FitzHugh-Nagumo oscillators} \numberwithin{equation}{subsection} \setcounter{equation}{0} As another example illustrating our control technique we consider a system of~$N$ coupled identical FitzHugh-Nagumo oscillators \begin{eqnarray} \dot{X}_k = F_{\varepsilon,a}(X_k) & + & \dfrac{1}{R} \sum\limits_{j=1}^R B_- ( X_{k-j} - X_k ) \nonumber\\[1mm] & + & \dfrac{1}{R} \sum\limits_{j=1}^R B_+ ( X_{k+j} - X_k ), \label{System:FHN} \end{eqnarray} where $X_k = (u_k,v_k)^{\mathrm{T}}\in\mathbb{R}^2$ is the state vector of the $k$-th oscillator and \begin{equation} F_{\varepsilon,a}(X_k) = \left( \begin{array}{c} ( u_k - \frac{1}{3}u_k^3 - v_k ) / \varepsilon \\[2mm] u_k + a \end{array} \right) \end{equation} is given by the nonlinear local dynamics of the FitzHugh-Nagumo model with time-scale parameter $\varepsilon > 0$ and threshold parameter $a\in(-1,1)$ in the oscillatory regime, i.e., each uncoupled oscillator exhibits a stable periodic orbit on a limit cycle. Similar to Eq.~(1) in the main paper, we assume that each oscillator is coupled with $R$~left and $R$~right nearest neighbors such that the matrices~$B_-, B_+\in\mathbb{R}^{2\times 2}$ describe the local topology of coupling to the left and right, respectively. The case of symmetric coupling $$ B_- = B_+ = b S(\psi), $$ where $b\in\mathbb{R}_+$ and \begin{equation} S(\psi) = \left( \begin{array}{ccc} \cos \psi & & \sin \psi \\ -\sin \psi & & \cos \psi \end{array} \right) \label{Matrx:B} \end{equation} is a rotational matrix with coupling phase $\psi$, has been considered in~\cite{S_OME13}. 
There, chimera states have been found for $\psi\lessapprox \pi/2$, and it has been shown that in the limit of small coupling strength $b \ll 1$ the phase dynamics of system~(\ref{System:FHN}) is approximately described by $$ \dot{\theta}_k = - \sum\limits_{j=-R}^R \sin(\theta_k - \theta_{k+j} + \alpha) $$ where $\alpha\approx\psi$. This suggests the following control scheme for system~(\ref{System:FHN}): \begin{equation} B_- = b S(\psi_-)\quad\mbox{and}\quad B_+ = b S(\psi_+), \label{B_minus_plus} \end{equation} where \begin{equation} \psi_\pm = \dfrac{\pi}{2} - K_{\mathrm{s}} \left(1 - \dfrac{|Z_1 + Z_2|}{2} \right) \mp K_{\mathrm{a}}(|Z_1| - |Z_2|), \label{Psi_minus_plus} \end{equation} $$ Z_1(t) = \frac{1}{[N/2]} \sum\limits_{k=1}^{[N/2]} e^{i \phi_k(t)}, $$ $$ Z_2(t) = \frac{1}{[N/2]} \sum\limits_{k=1}^{[N/2]} e^{i \phi_{N-k+1}(t)}, $$ and $\phi_k(t)$ is the geometric phase of the $k$-th oscillator computed from $$ e^{i \phi_k (t)} = \left( u_k^2(t) + v_k^2(t) \right)^{-1/2} \left( u_k(t) + i v_k(t)\right). $$ Now, for an appropriate choice of the control gains $K_{\mathrm{s}}$ and $K_{\mathrm{a}}$ in~(\ref{Psi_minus_plus}), we can stabilize chimera states in the system (\ref{System:FHN})-(\ref{Psi_minus_plus}) with a small number of oscillators. Figure~\ref{fig_FHN} shows an example of a stabilized chimera state in a network of $N=12$ FitzHugh-Nagumo oscillators. \begin{figure}[Ht!] \includegraphics[height=1.05\linewidth, angle=270]{FigureFHN} \caption{(Color online) Controlled chimera state in the system (\ref{System:FHN})-(\ref{Psi_minus_plus}) of $N=12$ coupled FitzHugh-Nagumo oscillators. Parameters: $R=4$, $\varepsilon = 0.15$, $a = 0.5$, $b= 0.15$, $K_{\mathrm{s}} = 2.0$, $K_{\mathrm{a}} = 1.0$.} \label{fig_FHN} \end{figure}
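The control law~(\ref{Psi_minus_plus}) is simple to implement. The sketch below (our own helper names, not the authors' code) computes the order parameters $Z_{1,2}$ from the oscillator states, returns $\psi_\pm$, and builds the rotational matrix $S(\psi)$ of Eq.~(\ref{Matrx:B}):

```python
import numpy as np

def tweezer_psi(u, v, K_s, K_a):
    """Compute (psi_minus, psi_plus) from the control law
    psi_pm = pi/2 - K_s (1 - |Z1 + Z2|/2) -/+ K_a (|Z1| - |Z2|),
    with Z1, Z2 the mean fields of the two halves of the ring and
    phi_k the geometric phase of oscillator k."""
    phi = np.angle(u + 1j * v)                # geometric phases phi_k
    n = len(phi) // 2                         # [N/2]
    Z1 = np.mean(np.exp(1j * phi[:n]))        # k = 1, ..., [N/2]
    Z2 = np.mean(np.exp(1j * phi[::-1][:n]))  # k = N, N-1, ..., N-[N/2]+1
    base = np.pi / 2 - K_s * (1 - abs(Z1 + Z2) / 2)
    asym = K_a * (abs(Z1) - abs(Z2))
    return base + asym, base - asym           # (psi_minus, psi_plus)

def S(psi):
    """Rotational coupling matrix S(psi)."""
    return np.array([[np.cos(psi),  np.sin(psi)],
                     [-np.sin(psi), np.cos(psi)]])
```

In a fully synchronized state $|Z_1| = |Z_2| = |Z_1+Z_2|/2 = 1$, so both control terms vanish and $\psi_\pm = \pi/2$, the value near which uncontrolled chimeras occur in the phase approximation.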
1303.0199
\section{Introduction} As a generalization of the Fenchel-Nielsen twist deformation for a simple closed curve, Thurston introduced earthquake deformations for measured geodesic laminations. Later in his study of minimal stretch maps, Thurston generalized earthquakes to shears (cataclysms), deformations incorporating left and right displacements \cite{Thurstre}. Bonahon subsequently developed the fundamental theory of shear deformations in a sequence of papers \cite{Bonshear,BontHd,BonTran}. At the same time, Penner developed a deformation theory of Riemann surfaces with cusps by considering shear deformations on disjoint ideal geodesics triangulating a surface \cite{Pendec,Pen,Penbk}. More recently shear deformations play a basic role in the Fock-Goncharov work on the quantization of Teichm\"{u}ller space \cite{Fk,FkChk,FkGn} and in the Kahn-Markovic work on the Weil-Petersson Ehrenpreis conjecture \cite{KMwp}. The Weil-Petersson (WP) geometry of Teichm\"{u}ller space is recognized as corresponding to the hyperbolic geometry of Riemann surfaces. For example, twice the dual in the WP K\"{a}hler form of a Fenchel-Nielsen twist deformation is the differential of the associated geodesic-length function. Also for example, the WP Riemannian pairing of twist deformations is given by a sum of lengths of orthogonal connecting geodesics, see Theorem \ref{gradpr} and \cite{Rier}. An infinitesimal shear on a disjoint union of ideal geodesics is specified by weights on the geodesics with vanishing sum of weights for the edges entering each cusp. We define the length of a balanced sum of ideal geodesics and find that twice the dual in the WP K\"{a}hler form of a shear is the differential of the defined length. We then present the basic WP symplectic and Hamiltonian geometry in Section \ref{sympgeom} with Theorem \ref{twlth} and Corollaries \ref{commshear}, \ref{lpr} and \ref{hh}. The results include new formulas for the K\"{a}hler form. 
We show that the Poisson bracket of a pair of weight systems on a common set of triangulating ideal geodesics is given in terms of an elementary $2$-form computed from the weights alone. In Section \ref{alge}, we use the elementary $2$-form to show in Theorem \ref{FkWP} that the Fock shear coordinate algebra introduced in the quantization of Teichm\"{u}ller space is the WP Poisson algebra. The basic WP Riemannian geometry of shears is developed in Section \ref{riemmgeom} with Theorem \ref{shpr}. We generalize Riera's WP inner product formula and show that the Riemannian pairing of two weight systems on ideal geodesics is given by the combination of an invariant of the geometry of ideal geodesics entering a cusp and a sum of lengths of orthogonal connecting geodesics. There are challenges in calculating shear deformations. In contrast to earthquake deformations, shear deformations are in general not limits of Fenchel-Nielsen twists and a shear on a single geodesic deforms a complete hyperbolic structure to an incomplete structure. For the deformation theory larger function spaces are involved; for earthquakes geodesic laminations carry transverse Borel measures and for shears geodesic laminations carry transverse H\"{o}lder distributions. A general approach would require a deformation theory of incomplete hyperbolic structures. Rather, we follow the approach of \cite{Wlext} and double a surface with cusps across cusps, and open cusps to collars to obtain approximating compact surfaces with reflection symmetries. Shears are then described as limits of opposing twists. Given the above expectations, the approximating formulas include individual terms that diverge with the approximation. The object is to show that diverging terms cancel and to calculate the remaining contributions. 
We use the Chabauty topology for representations to show that the hyperbolic structures converge, and an analysis of holomorphic quadratic differentials to show that infinitesimal deformations converge. We begin considerations in Section \ref{glfs} with the variation of cross ratio and geodesic-length. A unified treatment is given for Gardiner's geodesic-length formula \cite{Gardtheta}, Riera's twist Riemannian product formula \cite{Rier} and the original twist-length cosine formula \cite{Wlsymp}. In Section \ref{Thurshear}, we review Bonahon's results on shears on compactly supported geodesic laminations and Penner's results on shears on ideal geodesics triangulating a surface with cusps. The review includes the Thurston-Bonahon Theorem that shears on a maximal geodesic lamination are transitive on Teichm\"{u}ller space and Penner's Theorem on $\lambda$ and $h$ lengths as global coordinates. We include the Bonahon-S\"{o}zen and Papadopoulos-Penner results that in appropriate settings the WP K\"{a}hler form is a multiple of the Thurston symplectic form. In Sections \ref{Thuroppos} and \ref{Chat}, beginning with hyperbolic collars and cusps, we give the geometric description of shear deformations and describe the convergence of opposing twists to shears. In Section \ref{results}, we treat the convergence of infinitesimal opposing twists to infinitesimal shears. The analysis includes the convergence of holomorphic quadratic differentials. In Section \ref{sympgeom}, we define the length of a balanced sum of ideal geodesics and establish the basic symplectic geometry results in Theorem \ref{twlth} and the following corollaries. In Corollary \ref{commshear}, we show that the Poisson bracket of length functions and the shear derivative of a length function are given by evaluation of the elementary $2$-form. We consider the Fock shear coordinate algebra in Section \ref{alge}. 
We use Penner's topological description of the shear coordinate bracket and compute with the elementary $2$-form to show that the algebra is the WP Poisson algebra. In Section \ref{riemmgeom} we begin with expansions for gradient pairings for geodesics crossing short geodesics. Then in Theorem \ref{shpr}, we provide the formula for the WP Riemannian pairing of balanced sums of ideal geodesics. In Example \ref{Dedekind} we calculate the pairing for the Dedekind $PSL(2;\mathbb Z)$ tessellation to find an exact distance relation. Finally in Section \ref{circ} we give the length parameter expansion for the sum of lengths of circuits about a closed geodesic. It is my pleasure to thank Joergen Andersen, Robert Penner, Adam Ross and Dragomir \v{S}ari\'c for many helpful conversations and valuable suggestions. \section{Gradients of geodesic-lengths}\label{glfs} We begin with the basics of deformation theory of Riemann surfaces \cite{Ahqc,Hbbk,ImTan}. A conformal structure is described by its uniformization. An infinitesimal variation of a conformal structure is described by a variation of the identity map for the universal cover. The interesting case for the present considerations is for a Riemann surface of finite type, a compact surface with a finite number of points removed, covered by the upper half plane $\mathbb H$. For a vector field $v$ on the universal cover and parameter $\epsilon$, there is a variation of the identity map $w_{\epsilon}(z)=z+\epsilon v+o(\epsilon)$, for $z$, respectively $w$, conformal coordinates for the domain and range universal covers. Provided the vector field is deck transformation group invariant, the map is equivariant with respect to deck transformation groups. The range conformal structure is described by the angle measure $\arg(dw_{\epsilon})$ for the differential $dw_{\epsilon}=w_{\epsilon,z}dz+w_{\epsilon,\bar z}\overline{dz}$. 
The expansion for the variation provides that $dw_{\epsilon}=w_{\epsilon,z}(dz\,+\,\epsilon v_{\bar z}\overline{dz})\,+\,o(\epsilon)$, and thus $\arg(dw_{\epsilon}) =\arg(w_{\epsilon,z})\,+\,\arg(dz+\epsilon v_{\bar z}\overline{dz})$. The derivative of the vector field $v_{\bar z}$ describes the infinitesimal variation of the conformal structure. The quantity $v_{\bar z}$ is an example of a Beltrami differential, a tensor of type $\frac{\partial}{\partial z}\otimes\overline{dz}$. For a Riemann surface $R$ of finite type and vector field $v$ defined on the surface (equivalently on the universal cover and invariant by deck transformations), then $w_{\epsilon}(z)$ is a variation of the identity map of the surface and in effect describes a relabeling of the points of the surface - the deformation is trivial. Nontrivial deformations are given by vector fields on the universal cover; vector fields with nontrivial group cocycles relative to the deck transformation group. We consider $B(\mathH)$, the space of Beltrami differentials on $\mathH$, bounded in $L^{\infty}$. By potential theory considerations, for $\mu\in B(\mathH)$ there is a vector field $v$ on $\mathH$ with $v_{\bar z}=\mu$, that is actually continuous on $\overline \mathH$ and is bounded as $O(|z|\log|z|)$ at infinity \cite{AB}. In particular elements of $B(\mathH)$ also describe variations of the points of $\mathR$. We are interested in the corresponding variational formula. The cross ratio of points of $\mathbb P^1$ is given as \[ (p,q,r,s)\,=\,\frac{(p-r)(q-s)}{(p-s)(q-r)} \] and for $q=s+\Delta s$ and rearranging variables, we obtain a holomorphic $1$-form \[ \Omega_{pq}(z)\,=\,\frac{(p-q)dz}{(z-p)(z-q)}\,=\,\frac{dz}{(z-p)}\,-\,\frac{dz}{(z-q)}. \] The cross ratio and $1$-form are invariant by the diagonal action of $PSL(2;\mathC)$ on all variables. 
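The invariance statement can be probed with a quick numerical sanity check (an illustration, not part of the argument; the matrix $M$, the sample points and the point $z$ below are arbitrary choices): the cross ratio is unchanged by the diagonal M\"{o}bius action, and $\Omega_{pq}$ has the stated partial fraction form.

```python
# Sanity check: Mobius invariance of the cross ratio and the partial
# fraction form of Omega_{pq}.  All sample data are arbitrary choices.

def cross_ratio(p, q, r, s):
    # (p,q,r,s) = (p-r)(q-s)/((p-s)(q-r))
    return (p - r) * (q - s) / ((p - s) * (q - r))

def mobius(mat, z):
    # action of a 2x2 real matrix as a fractional linear map
    (a, b), (c, d) = mat
    return (a * z + b) / (c * z + d)

def omega(p, q, z):
    # coefficient of dz in Omega_{pq}(z) = (p-q) dz/((z-p)(z-q))
    return (p - q) / ((z - p) * (z - q))

M = ((2.0, 1.0), (1.0, 1.0))        # an element of PSL(2,R)
p, q, r, s = 0.3, 1.7, -2.0, 5.0    # sample points of the real line
z = 0.5 + 1.2j                      # a sample point of the upper half plane

cr_before = cross_ratio(p, q, r, s)
cr_after = cross_ratio(*(mobius(M, x) for x in (p, q, r, s)))

# partial fraction identity Omega_{pq} = dz/(z-p) - dz/(z-q)
partial_fractions = 1 / (z - p) - 1 / (z - q)
```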
There is a natural pairing of Beltrami differentials with $Q(\mathH)$, the space of integrable holomorphic quadratic differentials on $\mathH$, \[ \quad(\mu,\psi)\,\rightarrow\,\int_{\mathH} \mu\psi \quad\mbox{ for }\mu\in B(\mathH)\mbox{ and } \psi\in Q(\mathH). \] Rational functions, holomorphic on $\mathH$, with at least three simple poles on $\mathR$ are example elements of $Q(\mathH)$. The holomorphic quadratic differentials $Q(\mathH)$ describe cotangents of the deformation space of conformal structures. The variational formula for points of $\mathR$ is fundamental. \begin{theorem}\textup{Variation of the cross ratio \cite{Ahsome,Ahqc}.}\label{crvr} For $p,q,r,s\in\mathR$ the variational differential of the cross ratio is \[ d\log(p,q,r,s)\,=\,-\frac{2}{\pi}\Omega_{pq}\Omega_{rs}\,\in Q(\mathH). \] \end{theorem} The quadratic differentials $Q(\mathH)$ form a pre-inner product space with a densely defined Hermitian pairing \[ \langle\phi,\psi\rangle\,=\,\int_{\mathH}\phi\bar\psi\,(ds^2)^{-1}\quad\mbox{ for }\phi,\psi\in Q(\mathH)\cap L^2 \] and $ds^2$ the hyperbolic metric. The pairing is the Weil-Petersson pre-inner product \cite{Ahsome,Wlcbms}. The pairing provides formal dual tangent vectors for the differentials of cross ratios \[ \grad\log(p,q,r,s)\,=\,\overline{(d\log(p,q,r,s))}(ds^2)^{-1}. \] We are interested for distinct quadruples $\mathcal P=(p_1,p_2,r_1,r_2),\ \mathcal F=(f_1,f_2,g_1,g_2)$ in the pairing \[ \langle\grad\log\mathcal P,\grad\log\mathcal F\rangle. \] The pairing is continuous in the quadruples for all points distinct and also is continuous for $(r_1,r_2)$ tending to $(p_1,p_2)$ and $(g_1,g_2)$ tending to $(f_1,f_2)$. We will evaluate particular configurations for the pairing. Let $\caT$ be the Teichm\"{u}ller space of homotopy marked genus $g$, $n$ punctured Riemann surfaces $R$ of negative Euler characteristic. We are interested in pairings corresponding to geometric constructions of deformations. 
A point of $\caT$ is the equivalence class of a pair $(R,f)$ with $f$ a homeomorphism from a reference topological surface $F$ to $R$. By the Uniformization Theorem a conformal structure determines a unique complete compatible hyperbolic metric $ds^2$ for $R$ and a deck transformation group $\Gamma\subset PSL(2;\mathbb R)$ with $R=\mathH/\Gamma$. The Teichm\"{u}ller space is a complex manifold with cotangent space at $R$ represented by $Q(R)$, the space of holomorphic quadratic differentials on $R$ with at most simple poles at punctures. The pairing \[ \quad(\mu,\psi)\,\rightarrow\,\int_R\mu\psi\quad\mbox{for } \mu\in B(R)\mbox{ and } \psi\in Q(R) \] is the ingredient for Serre duality and consequently the tangent space of $\caT$ at $R$ is $B(R)/Q(R)^{\perp}$ \cite{Ahsome,Ahqc,Cmbbk,Hbbk,ImTan}. The $L^2$ Hermitian pairing \[ \langle\phi,\psi\rangle\,=\,\int_R\phi\bar\psi\,(ds^2)^{-1} \] is the Weil-Petersson (WP) cometric for $Q(R)$. The metric dual mapping \[ \phi\,\rightarrow\,\bar\phi(ds^2)^{-1}\quad\mbox{for }\phi\in Q(R) \] is a complex anti-linear isomorphism, since Beltrami differentials of the given form (harmonic differentials) give a direct summand of $Q(R)^{\perp}$ in $B(R)$. The metric dual mapping associates a tangent vector to a cotangent vector and so defines the WP K\"{a}hler metric on the tangent spaces of $\caT$; the mapping is the Hermitian metric gradient. Geodesic-lengths and Fenchel-Nielsen twist deformations are geometric quantities for pairings. Associated to a nontrivial, non-peripheral free homotopy class $\alpha$ on the reference surface $F$ is the length $\lla(R)$ of the unique geodesic in the free homotopy class for $R$. Geodesic-length is given as $2\cosh(\lla/2)\,=\,\operatorname{tr}A$ for $\alpha$ corresponding to the conjugacy class of $A\in\Gamma$ in the deck transformation group. Geodesic-lengths are functions on Teichm\"{u}ller space with a direct relationship to WP geometry. 
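The trace identity admits a direct numerical check (an illustration only; the length value and the conjugating matrix are arbitrary choices): a hyperbolic element of translation length $\lla$ is conjugate to $z\mapsto e^{\lla}z$, represented by $\operatorname{diag}(e^{\lla/2},e^{-\lla/2})$, and the trace, hence the geodesic-length, depends only on the conjugacy class.

```python
# Check 2*cosh(l/2) = tr(A) for a hyperbolic element of length l, and
# that the length is recovered from any conjugate representative.
import math

l = 1.75                                    # sample geodesic-length

A = ((math.exp(l / 2), 0.0), (0.0, math.exp(-l / 2)))

def mat_mul(X, Y):
    # product of 2x2 matrices given as nested tuples
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

C = ((2.0, 3.0), (1.0, 2.0))                # det = 1, an element of SL(2,R)
Cinv = ((2.0, -3.0), (-1.0, 2.0))           # its inverse
B = mat_mul(mat_mul(C, A), Cinv)            # a conjugate representative

trace = B[0][0] + B[1][1]
recovered_length = 2 * math.acosh(trace / 2)
```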
A Fenchel-Nielsen twist deformation is also associated to a closed simple geodesic. The deformation is given by cutting the surface along the geodesic $\alpha$ to form two metric circle boundaries, which then are identified by a relative rotation to form a new hyperbolic surface. A flow on $\caT$ is defined by considering the family of surfaces $\{R_t\}$ for which at time $t$ reference points from sides of the original geodesic are relatively displaced by $t$ units to the right on the deformed surface. The infinitesimal generator, the Fenchel-Nielsen vector field $t_{\alpha}$, the differential of the geodesic-length and the gradient of geodesic-length satisfy duality relations \begin{equation}\label{wpdual} 2\omega_{WP}(\ ,t_{\alpha})\,=\,d\lla\quad\mbox{and equivalently}\quad 2t_{\alpha}\,=\,J\grad\lla, \end{equation} for $\omega_{WP}$ the WP K\"{a}hler form and $J$ the complex structure of $\caT$ (multiplication by $i$ on $B(R)/Q(R)^{\perp}$) \cite{WlFN,Wlcbms}. The factor of $2$ adjustment to our formulas as detailed in \cite[\S 5]{Wlcusps} is included. We are interested in the WP metric and Lie pairings of the infinitesimal deformations $\grad\lla$ and $t_{\alpha}$ with geodesic-length functions $\llb$. The formulas begin with Gardiner's calculation of the differential of geodesic-length. We now use a single simplified approach that provides Gardiner's $d\lla$ formula \cite{Gardtheta}, the cosine formula for $t_{\alpha}\llb$ \cite{Wlsymp,Wlcbms}, the sine-length formula for $t_{\alpha}t_{\beta}\llg$ \cite{Wlsymp,Wlcbms}, as well as Riera's length-length formula for $\langle\grad\lla,\grad\llb\rangle$ \cite{Rier,Wlcbms}. The approach combines Theorem \ref{crvr}, coset decompositions for the uniformization group and calculus calculations. An important step is identifying a telescoping sum corresponding to a cyclic group action. We present the approach. 
\begin{theorem}\textup{Gardiner's variational formula \cite{Gardtheta}.}\label{Gardtheta} For a closed geodesic $\alpha$, \[ d\lla\,=\,\frac{2}{\pi}\sum_{C\in\langle A\rangle\backslash\Gamma}\Omega_{r_A a_A}^2(Cz)\, \in Q(R) \] with $\alpha$ corresponding to the conjugacy class of $A\in\Gamma$ with repelling fixed point $r_A$ and attracting fixed point $a_A$. \end{theorem} \begin{proof} We begin with the geodesic-length. For a hyperbolic transformation $A$, the geodesic-length is $\log(As,s,r_A,a_A)$ for $s$ a point of $\mathR$ distinct from the fixed points. We apply the variational formula for the cross ratio from Theorem \ref{crvr}. The resulting integrand is in $L^1(\mathH)$ and $\mathH$ is the disjoint union \[ \bigcup_{n\in\mathZ}\,\bigcup_{C\in\langle A\rangle\backslash\Gamma}A^nC(\mathcal F) \] for $\mathcal F$ a $\Gamma$ fundamental domain. By a change of variables the union over domains is replaced by a sum of integrands \begin{equation}\label{unfolded} d\lla[\mu]\,=\,-\frac{2}{\pi}\Re\int_{\mathcal F}\mu\sum_n\sum_{C\in\langle A\rangle\backslash\Gamma}\Omega_{As\,s}(A^nCz)\Omega_{r_Aa_A}(A^nCz). \end{equation} The invariance of $\Omega$ by the diagonal $PSL(2;\mathR)$ action gives $\Omega_{pq}(A^nw)\,=\,\Omega_{A^{-n}pA^{-n}q}(w)$ and the given product of forms is \[ \Omega_{A^{-n+1}sA^{-n}s}(Cz)\,\Omega_{r_Aa_A}(Cz). \] Using the $\Omega$ partial fraction expansion, the first factor is \[ \Omega_{A^{-n+1}sA^{-n}s}\,=\,\frac{dw}{(w-A^{-n+1}s)}\,-\,\frac{dw}{(w-A^{-n}s)} \] and the integer sum telescopes \[ \sum_{n=-N}^N\Omega_{A^{-n+1}sA^{-n}s}\,=\,\Omega_{A^{N+1}sA^{-N}s} \] and as $N$ tends to infinity, $A^{N+1}s$ tends to $a_A$ and $A^{-N}s$ tends to $r_A$. (Various forms of the telescoping appear in the calculations for the cosine formula \cite[pgs. 220-221]{Wlsymp}, the sine-length formula \cite[pgs. 223-224]{Wlsymp} and the length-length formula \cite[pgs. 113-114]{Rier}.) 
The sum in (\ref{unfolded}) now becomes the desired sum \[ -\sum_{C\in\langle A\rangle\backslash\Gamma}\Omega_{r_A a_A}^2(Cz). \] \end{proof} We consider the WP Hermitian pairing of gradients $\langle\grad\lla,\grad\llb\rangle$. By (\ref{wpdual}) the imaginary part of the pairing is \[ \Re\langle J\grad\lla,\grad\llb\rangle\,=\,2t_{\alpha}\llb\,=\,2\sum_{p\in\alpha\cap\beta}\cos\theta_p. \] The real part of the pairing $\langle\grad\lla,\grad\llb\rangle$ was first evaluated by Riera \cite{Rier}. We now apply the above approach and with a single simpler treatment derive the real and imaginary part formulas. Riera's formula involves the logarithmic function \[ R(u)\,=\,u\log\Big|\frac{u+1}{u-1}\Big|\,-\,2. \] The function is even with a logarithmic singularity at $\pm 1$ and with the expansion \[ R(u)\,=\,2\,(\frac{1}{3u^2}\,+\,\frac{1}{5u^4}\,+\,\frac{1}{7u^6}\,+\,\cdots)\quad\mbox{for }|u|>1. \] In particular for $u>1$, the function and its even derivatives are positive and the function is $O(u^{-2})$ for $u>1$. The function $R(u)$ is also given as \[ 2u\tanh^{-1} \frac{1}{u}\,-\,2\quad\mbox{for }|u|>1\quad\mbox{and }\quad 2u\tanh^{-1} u\,-\,2\quad\mbox{for }|u|<1. \] We present the pairing formula for the general case of a cofinite group possibly with parabolic and elliptic elements. 
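The closed form and the expansion of $R(u)$ can be compared numerically; the sketch below (an illustration; the evaluation point and truncation order are arbitrary choices) also checks evenness and positivity for $u>1$.

```python
# Compare R(u) = u*log|(u+1)/(u-1)| - 2 with the tail series
# 2*(1/(3u^2) + 1/(5u^4) + ...) valid for |u| > 1.
import math

def R(u):
    return u * math.log(abs((u + 1) / (u - 1))) - 2

def R_series(u, terms=200):
    # 2 * sum_{k>=1} 1/((2k+1) * u^(2k)), truncated
    return 2 * sum(1 / ((2 * k + 1) * u ** (2 * k)) for k in range(1, terms + 1))

u = 3.0
closed_form, series = R(u), R_series(u)
```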
\begin{theorem}\textup{The complex gradient pairing \cite{Wlsymp,Rier}.}\label{gradpr} For closed primitive geodesics $\alpha,\beta$ corresponding to elements $A,B\in\Gamma$, we have for the WP pairing \[ \langle\grad\lla,\grad\llb\rangle\,=\,\frac{2}{\pi} \delta_{\alpha\beta}e(A)\lla\,+\,\sum_{D\in\langle A\rangle\backslash\Gamma\slash\langle B\rangle} \mathcal R_D, \] where $\delta_{\alpha\beta}$ is the Kronecker delta for the geodesic pair, where $e(A)$ is $2$ in the special case of the axis of $A$ having order-two elliptic fixed points and is $1$ otherwise, where for the axes $\operatorname{axis}(A),\operatorname{axis}(DBD^{-1})$ disjoint in $\mathH$, then \[ \mathcal R_D\,=\,\frac{2}{\pi}R(\cosh d(\operatorname{axis}(A),\operatorname{axis}(DBD^{-1}))) \] and for the axes intersecting with angle $\theta_D$, then \[ \mathcal R_D\,=\,\frac{2}{\pi}R(\cos\theta_D)\,-\,2i\cos\theta_D. \] Twist-length duality and $J$ an isometry provide that $4\langle t_{\alpha},t_{\beta}\rangle\,=\,\langle\grad\lla,\grad\llb\rangle$. \end{theorem} \begin{proof} For $A$ a hyperbolic element we write \[ \Theta_A\,=\,\sum_{C\in\langle A\rangle\backslash\Gamma}\Omega_{r_Aa_A}^2 \] and from Gardiner's formula $d\lla=(2/\pi)\Theta_A$ with \[ \langle\Theta_A,\Theta_B\rangle\,=\,-\int_{\mathH}\,\sum_{C\in\langle A\rangle\backslash\Gamma}\Omega_{r_Aa_A}^2(Cz)\,\overline{\Omega_{Bs\,s}(z)}\,\overline{\Omega_{r_Ba_B}(z)}\,(ds^2)^{-1}. \] We first decompose each left coset $\langle A\rangle\backslash\Gamma$ by considering right $\langle B\rangle$ cosets and then move the $\langle B\rangle$ action to the two conjugate forms. The resulting sum over $\langle B\rangle$ is telescoping. In particular, we enumerate the cosets of the sum by writing for $C\in\langle A\rangle\backslash\Gamma$ the decomposition $C=DB^n,\,D\in\langle A\rangle\backslash\Gamma\slash\langle B\rangle$ for $n\in\mathbb Z$. 
For $A,B$ primitive hyperbolic elements, we consider uniqueness of the presentation of an element of $\langle A\rangle D\langle B\rangle$ in the form $A^mDB^n$. A non unique presentation is equivalent to a solution of $A^a=DB^bD^{-1}$ for a non trivial integer pair $(a,b)$. Since $A,B$ each generate maximal cyclic subgroups of $\Gamma$, a non trivial solution of $A^a=DB^bD^{-1}$ provides that $A$ is conjugate to $B^{\pm1}$ by the element $D$. In particular the presentation $A^mDB^n$ is unique except for the case $\alpha=\beta$ with $A=DB^{\pm1}D^{-1}$. In the case $\alpha=\beta$ we select the element $A$ to represent the geodesic and the presentation is unique except for the case of $D$ either the identity or the special case of $\Gamma$ containing an order-two elliptic $E$ with $A=EA^{-1}E$. For the special cases there is no distinction between left and right $\langle A\rangle$ cosets; we only use left cosets. The special left cosets are for the identity element and the element $E$. Now for each resulting integral of the sum, change variable by writing $w=B^nz$; the effect is to move a $B^{-n}$ action to the variable of $\Omega_{Bs\,s}\Omega_{r_Ba_B}$. Using the diagonal $PSL(2;\mathR)$ invariance of $\Omega$, the $B^{-n}$ action is moved to the quadruple of points, resulting in the telescoping sum \[ \sum_{n\in\mathbb Z}\,\Omega_{B^{n+1}sB^ns}\Omega_{r_Ba_B}\,=\,-\Omega_{r_Ba_B}^2. 
\] The result is the general formula \begin{multline}\label{genform} \langle\Theta_A,\Theta_B\rangle\,=\,-\delta_{AB^{\pm 1}}e(A)\int_{\mathH}\Omega_{r_Ba_B}^2\,\overline{\Omega_{Bs\,s}}\,\overline{\Omega_{r_Ba_B}}\,(ds^2)^{-1}\,+\\ \sum_{D\in\langle A\rangle\backslash\Gamma\slash\langle B\rangle}\,\int_{\mathH}\Omega_{r_{D^{-1}AD}a_{D^{-1}AD}}^2\,\overline{\Omega_{r_Ba_B}^2}\,(ds^2)^{-1}, \end{multline} where the Kronecker delta indicates that the first integral is only present for the case that $A=B^{\pm 1}$, $e(A)$ is $2$ in the case of order-two elliptic fixed points on the axis of $A$ and is otherwise $1$, and for the second integral the diagonal invariance was used to move the $D$ action to the pair of points. For each integral, a change of variable by an element of $PSL(2;\mathR)$ results in the inverse element applied to the tuple of points. It follows that the first integral depends only on the $PSL(2;\mathR)$ conjugacy class of $B$ and the second integral depends only on the $PSL(2;\mathR)$ class of the pair $(D^{-1}AD,B)$. It follows that the first integral is a function of the geodesic-length for $B$ and the second integral depends only on the distance between/intersection angle of the axes. We evaluate the integrals. The differential $\Omega_{pq}$ is continuous in $p,q$, including at infinity; for $q$ tending to infinity the form limits to $dz/(z-p)$. For the first integral of (\ref{genform}), we take the pair of points to be $0$ and $\infty$, to obtain for $z=re^{i\theta}$ the integral \[ -\int_{\mathH}\,\frac{1}{z^2\bar z}\,\overline{\frac{(Bs-s)}{(z-Bs)(z-s)}}\,r^2\sin^2\theta\,rdrd\theta, \] which for $P=e^{i\theta}Bs,\,Q=e^{i\theta}s$ becomes \begin{multline*} -\int_0^{\pi}\int_0^{\infty}\frac{(P-Q)}{(r-P)(r-Q)}\sin^2\theta\, drd\theta\,=\\ \log\frac{(r-Q)}{(r-P)}\bigg|_0^{\infty}\int_0^{\pi}\sin^2\theta\, d\theta\,=\,\frac{\pi}{2}\log\frac{Bs}{s}, \end{multline*} as expected, since $\grad\ell_*=2/\pi\,\Theta_*$. 
For the second integral of (\ref{genform}), we take the first pair of points to be $0$ and $\infty$, to obtain the integral \[ \int_{\mathH}\,\frac{1}{z^2}\,\overline{\bigg(\frac{(p-q)}{(z-p)(z-q)}\bigg)^2}\,r^2\sin^2\theta\, rdrd\theta, \] which for $P=e^{i\theta}p,\,Q=e^{i\theta}q,$ becomes \begin{equation}\label{mainint} \int_0^{\pi}\int_0^{\infty}\frac{(P-Q)^2}{(r-P)^2(r-Q)^2}\,\sin^2\theta\,rdrd\theta. \end{equation} The $r$ integral has antiderivative \[ \frac{(P+Q)}{(P-Q)}\log\frac{(r-Q)}{(r-P)}\,-\,\frac{P}{(r-P)}\,-\,\frac{Q}{(r-Q)}. \] We are evaluating an area integral and $\theta$ varies in the interval $(0,\pi)$; for $p,q\in\mathR$, $\theta$ as described, and $r$ real positive, the quotient $(r-Q)/(r-P)$ is valued in the complex open lower half plane. The antiderivative is invariant under interchanging $p,q$; we now normalize $p$ to be positive real. We use the principal branch of the logarithm; for $r$ close to zero the argument is close to $-\pi$. Evaluating $r$ at $0,\infty$ and integrating in $\theta$ gives \[ \frac{\pi}{2}\big(\frac{\kappa+1}{\kappa-1}\log\kappa\,-\,2\big)\quad\mbox{for }\kappa\ \mbox{the ratio }q/p\,=\,(q,p,0,\infty). \] To interpret geometrically, compare to \cite[pg. 114]{Rier}, set $u=(\kappa+1)/(\kappa-1)=2(\infty,q,p,0)-1$, to obtain the complex-valued expression \[ \frac{\pi}{2}\,\big(u\log\frac{u+1}{u-1}\,-\,2\big). \] For the lines $\stackrel{\frown}{0\infty}$ and $\stackrel{\frown}{pq}$ disjoint, the ratio $\kappa=q/p$ is positive and the logarithm is real, with $u=\cosh \delta_*$, for $\delta_*$ the distance between the lines. For the lines intersecting, the ratio $\kappa=q/p$ is negative and the argument of the logarithm is $-\pi$ and evaluation gives \[ \frac{\pi}{2}\,R(\cos\theta_*)\,-\,\frac{\pi}{2}i\pi\cos\theta_*, \] as desired. \end{proof} The double coset enumeration admits a topological/geometric description. We consider that $\alpha$ and $\beta$ are primitive and $\Gamma$ is torsion-free. 
On the surface $R$, consider the homotopy classes rel the closed sets $\alpha, \beta$ of arcs connecting $\alpha$ to $\beta$. For the universal cover, fix a lifting of $\alpha$ to a line $\tilde\alpha_0$ in $\mathH$; then a connecting homotopy class on $R$ lifts to a homotopy class of arcs connecting $\tilde\alpha_0$ to $\tilde\beta$ (a line lifting of $\beta$). The relation rel $\alpha$ corresponds to the relation of the $\langle A\rangle$ action on homotopy lifts. In particular, the non trivial classes on $R$ rel $\alpha,\beta$ {\em biject} to the classes in $\mathH$ rel $\tilde\alpha_0, \tilde\beta$, for $\tilde\beta$ (disjoint from $\tilde\alpha_0$) ranging over the line liftings of $\beta$ modulo the action of $\langle A\rangle$; the non trivial classes on $R$ correspond to lines $\tilde\beta$ disjoint from $\tilde\alpha_0$. To enumerate the pairs $(\tilde\alpha_0,\tilde\beta)$ for $\tilde\beta$ distinct modulo the $\langle A\rangle$ action, for $A$ generating the stabilizer of $\tilde\alpha_0$ and $B$ generating the stabilizer of a line lifting of $\beta$, then line pairs distinct modulo the $\langle A\rangle$ action {\em correspond} bijectively to double cosets by the rule \[ (\tilde\alpha_0,\tilde\beta)\,=\,(\operatorname{axis}(A),\operatorname{axis}(DBD^{-1}))\quad\mbox{corresponds to}\quad D\in\langle A\rangle\backslash\Gamma\slash\langle B\rangle. \] The relation $\operatorname{axis}(DBD^{-1})=D(\operatorname{axis}(B))$ is part of the correspondence. For a finite number of double cosets the corresponding axes intersect. Overall the axes enumeration by double cosets enumerates pairs of line liftings of $\alpha$ and $\beta$ modulo the diagonal action of the group $\Gamma$. The geometric description comes from the description of a pair of lines. A pair of lines either intersects or has a unique perpendicular geodesic, minimizing the connecting distance. The cosine and hyperbolic cosine describe the geometry of the configurations. 
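The hyperbolic-cosine description of a disjoint pair of lines can be probed numerically. The sketch below (an illustration; the feet $q,p$ and the grid resolution are arbitrary choices, and the comparison is with $|u|$, up to the sign convention for $u$) brute-forces the distance between the imaginary axis and the geodesic with feet $0<q<p$ and compares with $\cosh d=(p+q)/(p-q)$.

```python
# Brute-force the distance between the geodesic 0-infinity (the imaginary
# axis) and the semicircle with feet 0 < q < p, and compare with
# cosh(d) = (p+q)/(p-q).  The grid search is coarse, so only a loose
# tolerance is expected.
import math

def hyp_dist(z, w):
    # hyperbolic distance between points of the upper half plane
    return math.acosh(1 + abs(z - w) ** 2 / (2 * z.imag * w.imag))

q, p = 1.0, 4.0
center, radius = (p + q) / 2, (p - q) / 2

axis_points = [1j * math.exp(-3 + 6 * i / 400) for i in range(401)]
circle_points = [center + radius * complex(math.cos(math.pi * (j + 1) / 402),
                                           math.sin(math.pi * (j + 1) / 402))
                 for j in range(401)]

best = min(hyp_dist(z, w) for z in axis_points for w in circle_points)
expected = math.acosh((p + q) / (p - q))
```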
The present approach to evaluating the pairing is a combination and simplification of earlier works. The role of the cyclic group in Gardiner's formula was first noted by Hejhal \cite[Theorem 4]{Hejmono}. The telescoping of the cyclic group sums appears in the proofs of Theorems 3.3 and 3.4 of \cite{Wlsymp} and in Theorem 2 of \cite{Rier}, although in each case the telescoping is presented as a special feature. The basic integral (\ref{mainint}) is simpler than found in the earlier formulations. The present approach can be applied to evaluate the second twist Lie derivatives $t_{\alpha}t_{\beta}\llg$. The first derivative $t_{\alpha}\llb$ is a sum of cosines of intersection angles. A cosine is given by a cross ratio, the starting point for the above considerations. \section{Thurston shears}\label{Thurshear} We are interested in Thurston shears (cataclysms) on ideal geodesics for a Riemann surface with cusps. Thurston studied the shear deformation for compact geodesic laminations \cite{Thurstre}. Bonahon developed the fundamental results in a sequence of papers \cite{Bonshear,BontHd,BonTran}. We present a brief summary of Bonahon's basic results following \cite{Bonshear}. In a series of works \cite{Pendec,Pen,Penbk}, Penner developed a deformation theory of Riemann surfaces with cusps by considering shear deformations on ideal geodesics triangulating a surface. Our interests include Penner's $\lambda$-length formulas and formulas for the WP K\"{a}hler/symplectic form \cite{PapPen}. We present a brief summary of Penner's results following the exposition of the book \cite{Penbk}. A {\em geodesic lamination} $\lambda$ is a closed union of disjoint simple geodesics. A geodesic lamination for a compact surface $R$ is maximal provided $R-\lambda$ is a union of ideal triangles. 
A {\em transverse measure} for a geodesic lamination $\lambda$ is the assignment for each transverse arc $k$ with endpoints in $\lambda^c$ of a positive Borel measure $\mu$ on the transverse arc with $\operatorname{supp}(\mu)=\lambda\cap k$. If transverse arcs $k,k'$ are homotopic through arcs with endpoints in $\lambda^c$ then the assigned measures correspond by the homotopy. The assignment $k\mapsto \mu(k)$ is additive under countable subdivision of transverse arcs. A measured geodesic lamination defines an earthquake deformation by interpreting $\mu(k)$ as the relative left shift of the $\lambda$ complementary regions containing the $k$ endpoints. By allowing left and right shifts on complementary regions, Thurston defined the shear deformation. The relative left shift of $\lambda$ complementary regions again defines a functional on transverse arcs. The functional, called a {\em transverse cocycle}, is only finitely additive under subdivision of transverse arcs. A transverse cocycle is not given by integrating a measure; rather it is given by an element of the dual of the space of H\"{o}lder continuous functions on transverse arcs. The space of transverse cocycles $\mathcal H(\lambda)$ on a geodesic lamination is a finite dimensional vector space. Teichm\"{u}ller space is the space of isotopy classes of hyperbolic metrics. A geodesic lamination is represented on each isotopy class of a hyperbolic metric. Shear deformations on a given maximal geodesic lamination parameterize Teichm\"{u}ller space. A projection between leaves is defined for the lift of a lamination to the universal covering of the surface. The construction begins with the observation that the unit area horoballs in an ideal triangle are foliated by horocycles. The tangent field of the partial foliation of ideal triangles extends to a Lipschitz vector field on the universal covering; the vector field is not defined on the small trilateral regions in each ideal triangle. 
The Lipschitz vector field defines a projection between leaves of the lift of the lamination. The projection defines a relative displacement between lamination complementary regions. The relative displacement is finitely additive. The relative left displacement is called the {\em shearing cocycle} $\sigma_R$ of the surface $R$. The transverse cocycle for the shear deformation from a surface $R_1$ to a surface $R_2$ is the difference $\sigma_{R_1}-\sigma_{R_2}$ of shearing cocycles. For a train track carrying a geodesic lamination, transverse measures are specified in terms of non negative weights on the track and transverse cocycles are specified in terms of real weights. We also refer to the Thurston symplectic intersection form $\tau$ for a train track. The shearing cocycles for a maximal geodesic lamination provide an embedding of Teichm\"{u}ller space. \begin{theorem} \textup{\cite[Theorems A, B]{Bonshear}}\label{thrmAB}. The map $R\mapsto \sigma_R$ defines a real analytic homeomorphism from $\caT$ to an open convex cone $\caC(\lambda)$ bounded by finitely many faces in $\caH(\lambda)$. A transverse cocycle $\mu$ is in the cone $\caC(\lambda)$ if and only if $\tau(\mu,\nu)>0$ for every transverse measure $\nu$ for $\lambda$. \end{theorem} The $R$-length $\ell_{\mu}(R)$ of the transverse cocycle $\mu$ for $\lambda$ is a generalization of the total-length of a transverse measure. The $R$-length is defined as \[ \ell_{\mu}(R)\,=\,\int\int_{\lambda}d\ell\, d\mu, \] computed locally by first integrating hyperbolic length measure along the leaves of $\lambda$ and then integrating the local function on the local space of $\lambda$ leaves with respect to the H\"{o}lder distribution $\mu$. The $R$-length generalizes the weighted length for weighted simple closed geodesics; $R$-length is given by the Thurston intersection form and the shearing cocycle as follows. \begin{theorem} \textup{\cite[Theorem E]{Bonshear}}\label{thrmE}. 
If $\mu$ is a transverse cocycle for the maximal geodesic lamination $\lambda$ and $\sigma_R\in\caH(\lambda)$ is the shearing cocycle of the hyperbolic surface $R$ then $\ell_{\mu}(R)=\tau(\mu,\sigma_R)$. \end{theorem} The Theorem \ref{thrmAB} embedding of $\caT$ into the vector space $\caH(\lambda)$ provides identifications of tangent spaces $\mathbf T\caT$ with $\caH(\lambda)$. The identification enables a comparison of symplectic forms. \begin{theorem} \textup{\cite{BonSoz}}. Let $R$ be a compact hyperbolic surface with a maximal geodesic lamination $\lambda$. Then for the tangent space identifications $\mathbf T\caT \simeq \caH(\lambda)$, the WP K\"{a}hler form is a constant multiple of the Thurston intersection form. \end{theorem} A {\em decoration} for a hyperbolic metric with cusps is the designation of a horocycle at each cusp. Decorated Teichm\"{u}ller space $\caD\caT$ is the space of isotopy classes of hyperbolic metrics with cusps and decorations \cite{Penbk}. The decorated Teichm\"{u}ller space is naturally fibered over Teichm\"{u}ller space with fibers given by varying the horocycle lengths in a decoration. A section of the fibration is given by prescribing horocycle lengths. A decoration enables a notion of relative length for ideal geodesics. The $\lambda$-{\em length} of an ideal geodesic $\alpha$ is $\lambda(\alpha)=e^{\delta(\alpha)/2}$, where $\delta(\alpha)$ is the signed distance along $\alpha$ between the decoration horocycles; the distance is positive in the case that the associated horodiscs are disjoint. We are interested in the $\lambda$-lengths for the isotopy class of a given ideal triangulation $\Delta$ of hyperbolic metrics. An ideal triangulation for a genus $g$ surface with $n$ cusps has $6g-6+3n$ ideal geodesics and $4g-4+2n$ triangles. \begin{figure}[thb] \centering \includegraphics[bb=0 0 627 639,width=3in,height=3.06in,keepaspectratio]{diamond} \caption{Adjacent ideal triangles with a second diagonal. 
} \label{fig:diamond} \end{figure} Additional parameters are associated to an ideal triangulation. The ideal geodesics divide the decoration horocycles into segments. The $h$-{\em lengths} are the lengths of the horocycle segments. For an ideal triangle, the lengths are related by $h_{\hat a}=\lambda_a/(\lambda_b\lambda_c)$, where for the horocycle segment $\hat a$ the triangle opposite side is $a$ and the triangle adjacent sides are $b,c$. We are particularly interested in the shear coordinates. An ideal triangle has a median. For a pair of triangles adjacent along an ideal geodesic $\alpha$, drop perpendiculars from the medians to $\alpha$. The {\em shear coordinate} for $\alpha$ is the signed distance between the median projections; the distance is positive if the projections lie to the right of one another along $\alpha$. The shear coordinate is given simply in terms of $\lambda$-lengths and $h$-lengths. In Figure \ref{fig:diamond}, the shear coordinate for the diagonal $e$ is given as \begin{equation}\label{shear} \sigma_e\,=\,\log \frac{\lambda_b\lambda_d}{\lambda_a\lambda_c}\,=\,\log \frac{h_1}{h_4}\,=\,\log \frac{h_3}{h_2},\ \textup{\cite[Chap. 1, Corollary 4.16]{Penbk}.} \end{equation} \noindent The fibers of the Teichm\"{u}ller fibration $\caD\caT\rightarrow \caT$ are characterized simply by constant shear coordinates. By the classical result of Whitehead, triangulations with common vertices can be related by a sequence of replacing diagonals in quadrilaterals \cite[Chap. 2, Lemma 1.4]{Penbk}. The effect on $\lambda$-lengths of replacing diagonals is given by Penner's basic Ptolemy equation $\lambda_{13}\lambda_{24}=\lambda_{12}\lambda_{34}+\lambda_{14}\lambda_{23}$ for the configuration of Figure \ref{fig:diamond}, \cite[Chap. 1, Corollary 4.6]{Penbk}. We also note the {\em coupling equation} $h_1h_2=h_3h_4$ for the configuration of Figure \ref{fig:diamond}; the equation follows from the definition of $h$-lengths. 
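The Ptolemy equation and the decoration independence of shear coordinates can be checked with explicit $\lambda$-lengths. The sketch below uses the standard upper half-plane formula of decorated Teichm\"{u}ller theory, $\lambda=|x-y|/\sqrt{d_x d_y}$ for horocycles of Euclidean diameters $d_x,d_y$; this formula, the sample vertices, the decorations, and the side labeling for the quadrilateral $p_1p_2p_3p_4$ with diagonal $p_1p_3$ are assumptions of the illustration, not taken from the present section.

```python
# Check the Ptolemy equation for an ideal quadrilateral and that the
# shear coordinate of a diagonal does not depend on the decoration
# (the horocycle diameters cancel).  Sample data are arbitrary choices.
import math

def lam(x, y, dec):
    # lambda-length of the ideal geodesic x-y for horocycle diameters dec
    return abs(x - y) / math.sqrt(dec[x] * dec[y])

p1, p2, p3, p4 = 0.0, 1.0, 3.0, 7.0          # ideal vertices, in order on R
dec1 = {p1: 0.2, p2: 0.5, p3: 1.1, p4: 0.3}  # a decoration
dec2 = {p1: 1.0, p2: 0.01, p3: 2.5, p4: 0.7} # a different decoration

# Ptolemy: lambda_13 * lambda_24 = lambda_12 * lambda_34 + lambda_14 * lambda_23
lhs = lam(p1, p3, dec1) * lam(p2, p4, dec1)
rhs = lam(p1, p2, dec1) * lam(p3, p4, dec1) + lam(p1, p4, dec1) * lam(p2, p3, dec1)

def shear(dec):
    # shear of the diagonal p1p3 with sides a = p1p2, b = p2p3,
    # c = p3p4, d = p4p1: sigma = log(lam_b * lam_d / (lam_a * lam_c))
    return math.log(lam(p2, p3, dec) * lam(p4, p1, dec)
                    / (lam(p1, p2, dec) * lam(p3, p4, dec)))

sigma1, sigma2 = shear(dec1), shear(dec2)
```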
The $\lambda$ and $h$ lengths provide global coordinates for $\caD\caT$. \begin{theorem}\label{Penthrm}\textup{\cite[Chap. 2, Theorems 2.5, 2.10; Chap. 4, Theorems 2.6, 4.2]{Penbk}.} For the ideal triangulation $\Delta$, the $\lambda$-length mapping $\caD\caT\rightarrow \mathbb R_{>0}^{\Delta}$ is a real-analytic homeomorphism. For $V$ the vertex sectors of the ideal triangulation, the $h$-length mapping $\caD\caT\rightarrow \mathbb R_{>0}^V$ is a real-analytic embedding into a real-algebraic quadric variety given by coupling equations. For the ideal triangulation, the shear coordinate mapping $\caT\rightarrow \mathbb R^{\Delta}$ is a real-analytic homeomorphism onto the linear subspace given by vanishing of the sum of shears around each cusp. The action of the mapping class group $\operatorname{MCG}$ is described by permutations followed by finite compositions of Ptolemy transformations. \end{theorem} The WP K\"{a}hler form pulls back to the decorated Teichm\"{u}ller space and has a universal expression in terms of $\lambda$ and $h$ lengths. We present new formulas for the pullback in Section \ref{results}. \begin{theorem}\textup{\cite[Chap. 2, Theorem 3.1]{Penbk}.}\label{Penlambda} For an ideal triangulation $\Delta$, the pullback WP K\"{a}hler form on $\caD\caT$ is \[ \widetilde{\omega_{WP}}\,=\,\sum_{\Delta} \widetilde{\lambda_a}\wedge\widetilde{\lambda_{b\,}}+\widetilde{\lambda_{b\,}}\wedge\widetilde{\lambda_{c\,}}+\widetilde{\lambda_{c\,}}\wedge\widetilde{\lambda_{a}}, \] where the sum is over ideal triangles, $\widetilde{\lambda_*}=d\log\lambda_*$ and the individual triangles have sides $a,b$ and $c$ in clockwise order. \end{theorem} \noindent The formula is given without Penner's initial $2$ factor following the adjustment to our own formulas as detailed in \cite[$\S$5]{Wlcusps}. 
Papadopoulos-Penner establish a formula for the pullback $\widetilde{\omega_{WP}}$ in terms of $h$-lengths and describe identifications of spaces to establish that $2\widetilde{\omega_{WP}}$ coincides with Thurston's intersection form \cite{PapPen}. Specifically the authors show that their change of variable $(\dag\dag)$ transforms their formula $(\dag)$ to the formula $(\dag\dag\dag)$; the calculation applies to the present setting by taking $\mu(\textrm{greek index})=-\log h_{\widehat{\textrm{index}}}$ and $\mu(\textrm{index})=\log \lambda_{\textrm{index}}$ and noting the factor of $2$. \begin{corollary}\textup{\cite{PapPen}.}\label{hform1} For an ideal triangulation $\Delta$, the pullback WP K\"{a}hler form is \[ \widetilde{\omega_{WP}}\,=\,\sum_{\Delta} \widetilde{h_{\alpha}}\wedge\widetilde{h_{\beta}}+\widetilde{h_{\beta}}\wedge\widetilde{h_{\gamma}}+\widetilde{h_{\gamma}}\wedge\widetilde{h_{\alpha}}, \] where the sum is over ideal triangles, $\widetilde{h_*}=d\log h_*$ and the individual triangles have vertex sectors $\alpha,\beta$ and $\gamma$ in clockwise order. \end{corollary} \noindent In particular the $\lambda$ to $h$ change of coordinates is pre symplectic. Papadopoulos and Penner introduce the formal Poincar\'{e} dual of an ideal triangulation. The formal dual is a trivalent graph with an orientation for the edges at a vertex. A modification of the trivalent graph is a punctured null gon train track. A set of logarithms of $\lambda$-lengths corresponds to a measure on the train track. A modification of the construction of a measured foliation from a measured train track parameterizes the space $\caD\caM\caF$ of decorated measured foliations. \begin{theorem} \textup{\cite[Proposition 4.1]{PapPen}}. The train track parameterization provides a homeomorphism of $\caD\caT$ to $\caD\caM\caF$. The homeomorphism identifies twice the pullback WP K\"{a}hler form and the Thurston intersection form $2\,\widetilde{\omega_{WP}}=\tau$. 
\end{theorem} \section{Thurston shears as limits of opposing twists}\label{Thuroppos} We show that weighted Fenchel-Nielsen twists with twist lines orthogonal to short geodesics converge to a Thurston shear deformation on ideal geodesics, as the short lengths tend to zero. We begin with the description of collars and cusp regions \cite{Busbook}. For a closed geodesic $\alpha$ of length $\lla$ on the surface $R$, normalize the universal covering for the corresponding deck transformation to be $z\rightarrow e^{\lla}z$. The collar $\mathcal C(\alpha)=\{\lla/2\le\arg z\le \pi -\lla/2\}/\langle z\rightarrow e^{\lla}z\rangle$ embeds into $R$ with $\alpha$ the core geodesic. For a cusp, normalize the universal covering for the corresponding deck transformation to be $z\rightarrow z+1$. The cusp region $\mathcal C_{\infty}=\{\Im z\ge 1/2\}/\langle z\rightarrow z+1\rangle$ embeds into $R$. The collars about short geodesics and cusp regions are mutually disjoint in $R$. In the universal cover a Fenchel-Nielsen twist deformation for a single geodesic line $\beta$ is the piecewise isometry self map of $\mathbb H$ with jump discontinuity across $\beta$ given by a hyperbolic transformation stabilizing $\beta$. A twist deformation of magnitude $t$ offsets the two $\beta$ half planes by $t$ units relative to each other, to the right as measured when crossing $\beta$. The relative displacement of a combination of twists on disjoint lines is found as follows. For the displacement of $q$ relative to $p$, consider the twist lines separating $p$ and $q$ (for neither point on a twist line). There is a partial ordering of the lines based on containment of half planes containing $p$: by definition the $(n+1)^{st}$ line contains the preceding $n$ lines in a common half plane with $p$. The individual twist deformations are normalized to fix $p$. The combined deformation map of $\mathbb H$ is given by left (post) composition of the individual deformations formed in the order of the lines. 
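To fix conventions, a minimal model (our normalization for illustration; the left/right convention depends on orientation choices): for the twist line $\beta$ given by the imaginary axis, the magnitude $t$ twist of $\mathbb H$ may be written as
\[
E_{\beta}^{t}(z)\,=\,\begin{cases} z,&\Re z<0,\\ e^{t}z,&\Re z>0, \end{cases}
\]
the identity on one half plane and the hyperbolic transformation $z\rightarrow e^{t}z$ stabilizing $\beta$ on the other; the jump across $\beta$ is the relative displacement by $t$ units.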
A basic property is that the Fenchel-Nielsen twists on a set of disjoint lines form a commutative group. A finite collection of disjoint closed geodesics on a surface $R$ lifts to a locally finite collection in $\mathbb H$ and an equivariant twist mapping is determined on relatively compact sets. For our purposes it suffices to analyze finite combinations of twists in $\mathbb H$. We begin with hyperbolic cylinders and cusp regions. \begin{definition} For a hyperbolic cylinder with core geodesic $\gamma$, an opposing twist is a finite combination of weighted Fenchel-Nielsen twists with twist lines orthogonal to $\gamma$ and vanishing magnitude sum. For a hyperbolic cusp region, a Thurston shear is a finite combination of weighted Fenchel-Nielsen twists with twist lines asymptotic at the cusp and with vanishing magnitude sum. \end{definition} A positive shear corresponds to a right earthquake. For a Thurston shear, an initial piecewise horocycle orthogonal to the twist lines, with successive displacements given by the negatives of the weights, is deformed to a closed horocycle. The deformed region is complete hyperbolic with a closed horocycle, and consequently is a cusp region. The vanishing magnitude sum condition is required for completeness of the deformed structure. The condition is noted in \cite[\S 12.3]{Bonshear} and considered in detail in \cite[Chap. 2, \S 4]{Penbk}. \begin{lemma}\label{oppbd} The opposing twist deformation of a hyperbolic cylinder is a hyperbolic cylinder. The core length of the deformed cylinder is bounded uniformly in terms of the initial core length and the twist weights. For a bounded number of bounded weights, the deformed core length is small uniformly as the initial core length is small. \end{lemma} \begin{proof} Opposing twist lines decompose a cylinder into bands, each isometric to a region between ultraparallel lines in $\mathbb H$. The twist deformation is given by translations across lines. 
The vanishing magnitude sum provides that a deformed cylinder is complete hyperbolic containing ultraparallel bands, and consequently is a hyperbolic cylinder. We observe that for disjoint weighted twist lines converging, Fenchel-Nielsen twists (normalized with a common fixed region) converge. For a core length $\ell$, collar twist lines are represented in the band $\{1\le |z|<e^{\ell}\}$ in $\mathbb H$. For $\ell$ small, the individual twists are close to the twist line $|z|=1$. The vanishing of the magnitude sum provides that for $\ell$ small the combined twist transformation is close to the identity. In particular for twist weights bounded on a compact set the opposing twist is close to the identity uniformly in $\ell$. The deformed core length is the translation length of $z\rightarrow ze^{\ell}$ conjugated by the opposing twists. The deformed core length is uniformly small in $\ell$, as desired. \end{proof} Next we make precise the notion of opposing twists converging to a Thurston shear and also note a consequence. \begin{definition} Opposing twists for a sequence of cylinders with core lengths tending to zero geometrically converge to a Thurston shear provided the following two conditions hold. First, the universal coverings are normalized with the hyperbolic deck transformations for the cylinders converging in the compact-open topology for $\mathbb H$ to the parabolic deck transformation for the cusp region. Second, for a relatively compact open set $K$ in $\mathbb H$ whose projection to the cusp region contains a loop encircling the cusp, the intersection with $K$ of the weighted twist lines for the cylinders converges to the intersection with the weighted Thurston shear lines. 
\end{definition} \begin{figure}[htbp] \centering \includegraphics[bb=0 0 538 263,width=3.25in,height=2.8in,keepaspectratio]{collarscusps} \caption{A hyperbolic cylinder with geodesics orthogonal to the core geodesic and a cusp region with geodesics asymptotic at the cusp.} \label{fig:collarscusps} \end{figure} \begin{lemma}\label{oppconv} Consider hyperbolic cylinders converging to a cusp region with opposing twists geometrically converging to a Thurston shear. A normalization by $\mathbb H$ isometries of the twist deformation maps of $\mathbb H$ converges to the Thurston shear in the compact-open topology for $\mathbb H$. \end{lemma} \begin{proof} Convergence of lines intersecting a given relatively compact set in $\mathbb H$ provides convergence on any compact set. As noted, convergence of weighted lines in $\mathbb H$ provides that suitably normalized deformation maps converge in the compact-open topology. \end{proof} \section{Chabauty convergence and opening cusps}\label{Chat} The points of Teichm\"{u}ller space $\mathcal T$ are equivalence classes $\{(R,f)\}$ of Riemann surfaces with reference homeomorphisms $f:F\rightarrow R$ from a reference surface. The {\em complex of curves} $C(F)$ is defined as follows. The vertices of $C(F)$ are the free homotopy classes of homotopically nontrivial, nonperipheral, simple closed curves on $F$. A $k$-simplex consists of $k+1$ homotopy classes of mutually disjoint simple closed curves. For surfaces of genus $g$ and $n$ punctures, a maximal set of mutually disjoint simple closed curves, a {\em partition}, has $3g-3+n$ elements. The mapping class group acts on the complex $C(F)$. The Fenchel-Nielsen coordinates for $\mathcal T$ are given in terms of geodesic-lengths and lengths of auxiliary geodesic segments, \cite{Abbook,Busbook,ImTan}. 
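For concreteness, an elementary count: a partition for a once-punctured torus ($g=1$, $n=1$) consists of a single curve, and a partition for a closed genus $2$ surface consists of $3$ curves; in general a partition yields $3g-3+n$ length-angle coordinate pairs, matching
\[
\dim_{\mathbb R}\mathcal T\,=\,2(3g-3+n)\,=\,6g-6+2n.
\]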
A partition $\mathcal{P}=\{\alpha_1,\dots,\alpha_{3g-3+n}\}$ decomposes the reference surface $F$ into $2g-2+n$ components, each homeomorphic to a sphere with a combination of three discs or points removed. A homotopy marked Riemann surface $(R,f)$ is likewise decomposed into pants by the geodesics representing the elements of $\mathcal P$. Each component pants, relative to its hyperbolic metric, has a combination of three geodesic boundaries and cusps. For each component pants, the shortest geodesic segments connecting boundaries determine designated points on each boundary. For each geodesic $\alpha$ in the pants decomposition, a twist parameter $\tau_{\alpha}$ is defined as the displacement along the geodesic between designated points, one for each side of the geodesic. For marked Riemann surfaces close to an initial reference marked Riemann surface, the displacement $\tau_{\alpha}$ is the distance between the designated points; in general the displacement is the analytic continuation (the lifting) of the distance measurement. For $\alpha$ in $\mathcal P$ define the {\em Fenchel-Nielsen angle} by $\vartheta_{\alpha}=2\pi\tau_{\alpha}/\ell_{\alpha}$. The Fenchel-Nielsen coordinates for Teichm\"{u}ller space for the decomposition $\mathcal P$ are $(\ell_{\alpha_1},\vartheta_{\alpha_1},\dots,\ell_{\alpha_{3g-3+n}},\vartheta_{\alpha_{3g-3+n}})$. The coordinates provide a real analytic equivalence of $\mathcal T$ to $(\mathbb{R}_+\times \mathbb{R})^{3g-3+n}$, \cite{Abbook,Busbook,ImTan}. A partial compactification, the {\em augmented Teichm\"{u}ller space} $\Tbar$, is introduced by extending the range of the Fenchel-Nielsen parameters. The added points correspond to unions of hyperbolic surfaces with formal pairings of cusps. The interpretation of {\em length vanishing} is the key ingredient. 
For an $\ell_{\alpha}$ equal to zero, the angle $\vartheta_{\alpha}$ is not defined and in place of the geodesic for $\alpha$ there appears a pair of cusps; the reference map $f$ is now a homeomorphism of $F-\alpha$ to a union of hyperbolic surfaces (curves parallel to $\alpha$ map to loops encircling the cusps). The parameter space for a pair $(\ell_{\alpha},\vartheta_{\alpha})$ will be the identification space $\mathbb{R}_{\ge 0}\times\mathbb{R}/\{(0,y)\sim(0,y')\}$. More generally for the partition $\mathcal P$, a frontier set $\mathcal{T}(\mathcal P)$ is added to the Teichm\"{u}ller space by extending the Fenchel-Nielsen parameter ranges: for each $\alpha\in\mathcal{P}$, extend the range of $\ell_{\alpha}$ to include the value $0$, with $\vartheta_{\alpha}$ not defined for $\ell_{\alpha}=0$. The points of $\mathcal{T}(\mathcal P)$ in general parameterize unions of Riemann surfaces with each $\ell_{\alpha}=0,\,\alpha\in\mathcal{P},$ specifying a pair of cusps. We present an alternate description of the frontier points in terms of representations of groups and the Chabauty topology. A Riemann surface with punctures and hyperbolic metric is uniformized by a cofinite subgroup $\Gamma\subset PSL(2;\mathbb R)$. A puncture corresponds to the $\Gamma$-conjugacy class of a maximal parabolic subgroup. In general, a Riemann surface with punctures corresponds to the $PSL(2;\mathbb R)$ conjugacy class of a tuple $(\Gamma,\langle\Gamma_{01}\rangle ,\dots,\langle\Gamma_{0n}\rangle )$ where $\langle\Gamma_{0j}\rangle $ are the maximal parabolic classes and a labeling for punctures is a labeling for conjugacy classes. A {\em Riemann surface with nodes} $R'$ is a finite collection of $PSL(2;\mathbb R)$ conjugacy classes of tuples $(\Gamma^\ast,\langle\Gamma_{01}^\ast\rangle ,\dots,\langle\Gamma_{0n^\ast}^\ast\rangle )$ with a formal pairing of certain maximal parabolic classes. The conjugacy class of a tuple is called a {\em part} of $R'$. 
The unpaired maximal parabolic classes are the punctures of $R'$ and the genus of $R'$ is defined by the relation $\mbox{Total area}=2\pi(2g-2+n)$. A cofinite $PSL(2;\mathbb R)$ injective representation of the fundamental group of a surface is topologically allowable provided peripheral elements correspond to peripheral elements. A point of the Teichm\"{u}ller space $\mathcal T$ is given by the $PSL(2;\mathbb R)$ conjugacy class of a topologically allowable injective cofinite representation of the fundamental group $\pi_1(F)\rightarrow\Gamma\subset PSL(2;\mathbb R)$. For a simplex $\sigma$, a point of the corresponding frontier space $\mathcal T(\sigma)\subset\Tbar$ is given by a collection $\{(\Gamma^\ast,\langle\Gamma_{01}^\ast\rangle ,\dots,\langle\Gamma_{0n^\ast}^\ast\rangle )\}$ of tuples with: a bijection between $\sigma$ and the paired maximal parabolic classes; a bijection between the components $\{F_j\}$ of $F-\sigma$ and the conjugacy classes of parts $(\Gamma^j,\langle\Gamma_{01}^j\rangle ,\dots,\langle\Gamma_{0n^j}^j\rangle )$ and the $PSL(2;\mathbb R)$ conjugacy classes of topologically allowable isomorphisms $\pi_1(F_j)\rightarrow\Gamma^j$, \cite{Abdegn, Bersdeg}. We are interested in geodesic-lengths for a sequence of points of $\mathcal T$ converging to a point of $\mathcal T(\sigma)$. The convergence of hyperbolic metrics provides that for closed curves of $F$ disjoint from $\sigma$, geodesic-lengths converge, while closed curves with essential $\sigma$ intersections have geodesic-lengths tending to infinity, \cite{Bersdeg, Wlhyp}. We refer to the Chabauty topology to describe the convergence for the $PSL(2;\mathbb R)$ representations. Chabauty introduced a topology of geometric convergence for the space of discrete subgroups of a locally compact group, \cite{Chb}. A neighborhood of $\Gamma\subset PSL(2;\mathbb R)$ is specified by a neighborhood $U$ of the identity in $PSL(2;\mathbb R)$ and a compact subset $K\subset PSL(2;\mathbb R)$. 
A discrete group $\Gamma'$ is in the neighborhood $\mathcal N(\Gamma,U,K)$ provided $\Gamma'\cap K\subseteq\Gamma U$ and $\Gamma\cap K\subseteq\Gamma'U$. The sets $\mathcal N(\Gamma,U,K)$ provide a neighborhood basis for the topology. The $PSL(2;\mathbb R)$ topology coincides with the induced compact-open topology for transformations of $\mathbb H$. Important for the present considerations is the following convergence characterization. A sequence of points of $\mathcal T$ converges to a point of $\mathcal T(\sigma)$, provided for each component $F_j$ of $F-\sigma$, there exist $PSL(2;\mathbb R)$ conjugations such that restricted to $\pi_1(F_j)$ the corresponding representations converge elementwise to $\pi_1(F_j)\rightarrow\Gamma^j$, \cite[Thrm. 2]{HrCh}. We now consider a Riemann surface $R$ with cusps and data $\sum\mathfrak b_j\widehat\beta_j$ for a Thurston shear. The data is a weighted sum of disjoint simple ideal geodesics, geodesics with endpoints at infinity in the cusps. The weighted sum of segments entering each cusp vanishes. Double the surface across its cusps; consider the union of $R$ and its conjugate surface $\bar R$ with the reflection symmetry $\rho$ for the pair. For the geodesic $\widehat\beta_j$, we write $\beta_j$ for the union $\widehat\beta_j\cup\rho(\widehat\beta_j)$. To open cusps, given $\epsilon$ positive, remove the area $\epsilon$ horoball at each cusp and glue the remaining surfaces by the map $\rho$ to obtain a compact surface $R_{\epsilon}$. The surface $R_{\epsilon}$ has a reflection symmetry (also denoted $\rho$) and smooth simple closed curves obtained from surgering the $\beta_j$ (also denoted $\beta_j$). The construction provides a homeomorphism from a reference surface $F$ to $R_{\epsilon}$ for $\epsilon$ positive and the simplex $\sigma$ of short curves for $F$ is given by the $\epsilon$ horocycles. 
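The normalization of the area $\epsilon$ horoball is elementary (recorded here for the reader's convenience): for a cusp uniformized by $z\rightarrow z+1$, the horoball $\{\Im z\ge h\}$ has quotient area
\[
\int_0^1\!\int_h^{\infty}\frac{dy\,dx}{y^2}\,=\,\frac{1}{h},
\]
so the area $\epsilon$ horoball is $\{\Im z\ge 1/\epsilon\}$ and its boundary horocycle has length $\epsilon$.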
Standard comparison estimates for metrics provide that for the uniformization hyperbolic metric, the simplex is realized by short geodesics with lengths tending to zero with $\epsilon$. The comparison estimates also provide that on the complement of prescribed area collars about the short geodesics, the $R_{\epsilon}$ hyperbolic metrics converge $C^{\infty}$ to the hyperbolic metric of $R\cup\bar R$, \cite{Wlhyp}. The uniformization groups $\Gamma(R_{\epsilon})$ for the $R_{\epsilon}$ Chabauty converge to the uniformization pair $\Gamma(R),\,\Gamma(\bar R)$, relative to $F$ and the horocycle simplex $\sigma$. The uniqueness of geodesics and convergence of hyperbolic metrics provide that the geodesics $\tilde \beta_j$ in the free homotopy classes $\beta_j$ converge uniformly on $\sigma$ collar complements to $\widehat \beta_j\cup\rho(\widehat\beta_j)$ on $R\cup\bar R$. We are ready to compare the effect of the Thurston shear $\sum\mathfrak b_j(\widehat\beta_j\cup-\rho(\widehat\beta_j))$ on $R\cup\bar R$ to the effect of the opposing twist $\sum\mathfrak b_j\tilde\beta_j$ on the hyperbolic metric of $R_{\epsilon}$. The reflection $\rho$ reverses orientation and the notions of left/right; even though $\bar R$ is the mirror image, we require regions to move in the same direction by a twist; the minus sign provides the desired effect. Opposing twist deformations do not preserve the reflection symmetry. As a preliminary matter, we note from Lemma \ref{oppbd} that for bounded weights, the opposing twist of $R_{\epsilon}$ has its small geodesic-lengths bounded in terms of $\epsilon$. Twisting $R_{\epsilon}$ defines a family close to the frontier $\mathcal T(\sigma)$. We observe the following. \begin{lemma}\label{opclose} For $\epsilon$ small and weights bounded, the opposing twist $\sum\mathfrak b_j\tilde\beta_j$ of $R_{\epsilon}$ is Chabauty close to the Thurston shear $\sum\mathfrak b_j(\widehat\beta_j\cup-\rho(\widehat\beta_j))$ of $R\cup\bar R$. 
Furthermore, the infinitesimal opposing twist is close to the infinitesimal Thurston shear in the sense of infinitesimal variations of $PSL(2;\mathbb R)$ representations. \end{lemma} \begin{proof} In brief, the convergence of metrics provides for the compact-open convergence of the twist/shear lines on $\mathbb H$, which in turn provides for the elementwise convergence of representations. By construction of $R_{\epsilon}$, for the components $F_j$ of $F-\sigma$, the representations of $\pi_1(F_j)$ into $PSL(2;\mathbb R)$ converge elementwise and the twist lines converge to shear lines in the compact-open sense. Choose generators for the limiting representations and a relatively compact open set $U\subset\mathbb H$, such that $CU\cap U\ne\varnothing$ for each generator $C$. For $\epsilon$ small, the same elements generate the representations of $\pi_1(F_j)$ and satisfy the nonempty translate intersection condition. The representations are completely determined by their action on $U$. A twist/shear map $\tau$ of $\mathbb H$ induces a variation of a representation by varying a transformation $B$ by the conjugation $\tau B\tau^{-1}$. Only a finite number of twist/shear lines intersect $U$. The $PSL(2;\mathbb R)$ normalized combined twist is given by finite ordered compositions as described above. By metric convergence, as $\epsilon$ tends to zero, on $U$ the twist lines converge uniformly and the twists converge uniformly to shears, and thus the representations of the finite number of generators converge. The representations are elementwise uniformly close in $\epsilon$. To consider the infinitesimal variations, we introduce a parameter $t$ for $t\sum\mathfrak b_j\tilde\beta_j$ and $t\sum\mathfrak b_j(\widehat\beta_j\cup-\rho(\widehat\beta_j))$. The considerations provide that the initial infinitesimal variations of the generators are also close in $\epsilon$. The infinitesimal variations of the representations are determined on generators. 
\end{proof} \section{Infinitesimal Thurston shears and opposing twists}\label{results} We are interested in geodesic-length gradients. A thick-thin decomposition of hyperbolic surfaces is determined by a positive constant. The thin subset consists of those points with injectivity radius at most the positive constant; for a constant at most unity the thin subset is a disjoint union of collars and horoballs \cite{Busbook}. Surface representations into $PSL(2;\mathbb R)$ are Chabauty close precisely when their thick subsets are Gromov-Hausdorff close. For a sequence of hyperbolic surfaces with certain geodesic-lengths tending to zero, we are interested in the magnitude and convergence of geodesic-length gradients $\grad\lla$ for geodesics $\alpha$ crossing the short geodesic-length collars. Applications of convergence of surfaces and gradients include generalizing the Gardiner formula, Theorem \ref{Gardtheta}, to balanced sums of ideal geodesics and generalizing twist-length duality (\ref{wpdual}) to Thurston shears and balanced sums of ideal geodesics. The basic matter is to understand the effect of Chabauty convergence for sums of the basic differential $\Omega^2$ from Section \ref{glfs}. We begin with convergence of hyperbolic transformations of $\mathbb H$. A hyperbolic transformation with translation length $\ell$, fixed points symmetric with respect to the origin and $i$ on its collar boundary is given as \[ A\,=\, \begin{pmatrix} \cosh \ell/2 & 1/\ell\,\sinh \ell/2 \\ \ell\,\sinh \ell/2 & \cosh \ell/2 \end{pmatrix} \] ($i$ is distance $\log 1/\ell$ to the $A$ axis with endpoints $\pm 1/\ell$). As $\ell$ tends to zero, $A$ converges to the parabolic transformation \[ \begin{pmatrix} 1 & 1/2 \\ 0 & 1 \end{pmatrix}. \] We consider a Chabauty converging sequence of surfaces with short length core geodesics and a crossing geodesic intersecting the core geodesics orthogonally. 
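The convergence of $A$ is immediate from the Maclaurin expansions of the entries (an elementary check, recorded for the reader):
\[
\cosh\ell/2\,=\,1+\frac{\ell^2}{8}+O(\ell^4),\qquad
\frac{1}{\ell}\,\sinh\ell/2\,=\,\frac12+\frac{\ell^2}{48}+O(\ell^4),\qquad
\ell\,\sinh\ell/2\,=\,\frac{\ell^2}{2}+O(\ell^4).
\]
In particular $\operatorname{tr}A=2\cosh\ell/2$, consistent with translation length $\ell$, and the fixed points $\pm1/\ell$ of $A$ merge at infinity, the fixed point of the limiting parabolic.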
\begin{figure}[htbp] \centering \includegraphics[bb=0 0 460 152,width=4in,height=1.2in,keepaspectratio]{crossing} \caption{A symmetric compact surface with crossing and core geodesics.} \label{fig:crossing} \end{figure} \noindent A crossing geodesic intersects collars and core geodesics. Given a segment of a crossing geodesic $\alpha$ in a thick region, normalize the universal coverings so that the segment lifts to a segment along the imaginary axis with highest point at $i$. Extend the segment by including the arcs that connect to core geodesics (the added arcs cross half collars). A core geodesic intersecting $\alpha$ lifts to a geodesic orthogonal to the imaginary axis. The figures for the universal covers of the surface, Figure \ref{fig:fundregn}, and the Chabauty limit, Figure \ref{fig:fundcusp}, are as follows. In the figures the collar lift and its limit are shaded. In Figure \ref{fig:fundregn}, the left and right circular arcs orthogonal to the baseline bound a fundamental domain for a core geodesic transformation. \begin{figure}[htb] \centering \includegraphics[bb=0 0 519 360,width=2.9in,height=2.01in,keepaspectratio]{fundregn} \caption{Crossing and core geodesics. The vertical line is the lift of the crossing geodesic. The two semicircles orthogonal to the baseline are consecutive lifts of the core geodesic. The left and right circular arcs bound a fundamental domain for the hyperbolic transformation stabilizing the larger semicircle. The shaded sectors are lifts of half collars for the core geodesic. The region bounded by the shaded sectors and the circular arcs covers a region containing a component of the thick subset of the surface.} \label{fig:fundregn} \end{figure} \begin{figure}[htb] \centering \includegraphics[bb=0 0 508 382,width=3.15in,height=2.37in,keepaspectratio]{fundcusp} \caption{An ideal geodesic and horoballs. The central vertical line is the lift of the ideal geodesic connecting cusps. 
The left and right vertical lines bound a fundamental domain for the parabolic transformation stabilizing infinity. The shaded sectors are horoballs about the cusps. The region bounded by the shaded sectors and the vertical lines covers a region containing the thick subset of the surface. } \label{fig:fundcusp} \end{figure} \noindent Chabauty convergence provides that the original segments on the crossing geodesic $\alpha$ have bounded length and it is standard that collar boundaries converge to horocycles. Figure \ref{fig:fundcusp} is the limit of a sequence of Figures \ref{fig:fundregn} with upper, respectively lower, shaded regions converging to upper, respectively lower, shaded regions. The crossing geodesic limits to an ideal geodesic connecting cusps. \begin{definition} For an ideal geodesic $\alpha$, we write \[ d\lla\,=\,\frac{2}{\pi}\sum_{C\in\Gamma}\Omega_{pq}^2(Cz) \] for the infinite series, where $p,q$ are endpoints of a lift of $\alpha$ to $\mathbb H$. \end{definition} \begin{lemma}\label{idglfb} For a surface $R$ with cusps and an ideal geodesic $\alpha$, the infinite series $d\lla$ converges. As above, consider surfaces $R_{\epsilon}$ with reflection symmetries obtained by doubling $R$ across its cusps and opening cusps to obtain short length core geodesics. Consider that an ideal geodesic $\alpha$ on $R$ is approximated on thick subsets by closed core-orthogonal geodesics $\alpha_{\epsilon}$ on $R_{\epsilon}$. There is a Chabauty neighborhood $\mathcal U$ of $R\cup\bar R$ such that for $R_{\epsilon}\in\mathcal U$, on thick subsets the harmonic Beltrami differentials $d\ell_{\alpha_{\epsilon}}(ds^2)^{-1}$ and $d\lla(ds^2)^{-1}$ are uniformly bounded and are uniformly close. \end{lemma} \begin{proof}The $d\lla$ series are bounded by area integrals as follows. We first consider regions. 
In Figure \ref{fig:fundregn}, the unshaded region in $\mathbb H$ between the shaded crescents, by normalization, lies below the line $\Im z=1$ and outside a circle tangent to $\mathbb R$ at $0$. The integral of $|\Omega_{0\infty}|^2=dr/r\,d\theta$ for $z=re^{i\theta}$ over the unshaded region is bounded by the integral over the region between the shaded sectors in Figure \ref{fig:fundcusp} \[ \int^{\pi}_0\int^{\csc \theta}_{a\sin\theta}\frac{dr}{r}d\theta\,=\,\int_0^{\pi}\log\frac{\csc^2\theta}{a}\,d\theta\,=\,2\pi\log 2\,-\,\pi\log a, \] using the classical evaluation $\int_0^{\pi}\log\sin\theta\,d\theta=-\pi\log 2$. On a thick region of a surface a holomorphic quadratic differential satisfies a mean value estimate in terms of the integral over a hyperbolic metric ball of a radius $r_0$ at most the injectivity radius. The thick regions of $R_{\epsilon}$ and $R$ are contained in the projection of the indicated unshaded regions in Figures \ref{fig:fundregn} and \ref{fig:fundcusp}. By the standard unfolding, the absolute values of $d\ell_{\alpha_{\epsilon}}(ds^2)^{-1}$ and $d\lla(ds^2)^{-1}$ at a thick point are bounded by the integral of $|\Omega_{0\infty}|^2$ over the disjoint union of $r_0$ balls about the orbit of the lifted point in the unshaded region \cite[Chapter 8]{Wlcbms}. By the above considerations, the integrals are uniformly bounded, establishing the first result. For the second conclusion, given $\delta$ positive, choose a relatively compact set $K$ in the Figure \ref{fig:fundregn} region between the shaded crescents, such that the integral of $|\Omega_{0\infty}|^2$ over the complement between the shaded crescents is bounded by $\delta$. The sum of evaluations of $\Omega_{0\infty}^2$ at points not in $K$ is bounded by $\delta$ by a mean value estimate. Chabauty convergence provides convergence for the sum of evaluations of $\Omega_{0\infty}^2$ for the orbit points in $K$. Boundedness and convergence are established. 
\end{proof} \begin{example} \textup{The ideal geodesic series $d\lla$ for a hyperbolic cusp.} For a cusp uniformized at infinity with integer translation group, the sum over the group is \[ \sum_{C\in\Gamma_{\infty}}\Omega_{0\infty}^2(Cz)\,=\,\sum_{n\in\mathbb Z}\frac{dz^2}{(z-n)^2}. \] The classical expansion $\sum_{n\in\mathbb Z}(z-n)^{-2}=\pi^2\csc^2\pi z$ gives $d\lla=\frac{2}{\pi}\,\pi^2\csc^2\pi z\,dz^2=2\pi\csc^2\pi z\,dz^2$. From the above lemma, for a hyperbolic cylinder the series $d\lla$ approximates the cosecant-squared differential in the compact-open topology of $\mathbb H$. \end{example} We now combine considerations to obtain a uniform majorant for an opposing sum of twists and gradients of geodesic-length functions. The majorant is the necessary ingredient for general limiting arguments. We codify the situation as follows. \begin{definition} A crossing configuration is a compact surface with a reflection symmetry whose fixed locus is a finite union of small length core geodesics $\gamma$, with no other geodesics having small length. A crossing geodesic $\alpha$ is symmetric with respect to the reflection and has two intersections with the core geodesics. For a crossing configuration, a sum $\sum\mathfrak a_j\ell_{\alpha_j}$ of crossing geodesics length functions is balanced provided for each core geodesic $\gamma$, the weighted intersection number $\sum\mathfrak a_j\#(\alpha_j\cap\gamma)$ vanishes. For a surface with cusps, a formal sum $\sum\mathfrak a_j\ell_{\alpha_j}$ of ideal geodesics length functions is balanced provided at each cusp the weighted intersection number $\sum\mathfrak a_j\#(\alpha_j\cap h)$ with each small closed horocycle $h$ vanishes. \end{definition} The balanced condition is the precedent of the vanishing weight sum condition at each cusp for a Thurston shear. To prepare for a convergence argument, we first consider the distribution of mass of a harmonic Beltrami differential. \begin{lemma}\label{bglfb} A balanced sum $\sigma=\sum \mathfrak a_j\grad\ell_{\alpha_j}$ of gradients for a crossing configuration is bounded as follows. 
On the thick subset the absolute value $|\sigma|$ is uniformly bounded. On a core geodesic $\gamma$ collar, uniformized as $1\le|z|\le e^{\llg}$, $\ell_{\gamma}\le \theta\le \pi-\ell_{\gamma}$ for $z=re^{i\theta}\in\mathbb H$, the balanced sum $\sigma$ is bounded as \[ O\big((\ell_{\gamma}^3+e^{-2\pi\theta/\ell_{\gamma}}\ +\ e^{2\pi(\theta-\pi)/\ell_{\gamma}})\ell_{\gamma}^{-2}\sin^2\theta\big). \] The bounding constants depend only on the number of crossing geodesics, the norm of the weights and a choice of Chatauby neighborhood for the limiting cusped surface. \end{lemma} \begin{proof}A general bound for a harmonic Beltrami differential on a $\gamma$ collar is \begin{equation}\label{collbd} |\mu|\quad \mbox{is}\quad O\big(\big(|(\mu,\grad\log\llg)|\ +\ (e^{-2\pi\theta/\ell_{\gamma}}\ +\ e^{2\pi(\theta-\pi)/\ell_{\gamma}})\llg^{-2}\big)\sin^2\theta\, M\big) \end{equation} for $M$ the maximum of $\mu$ on the collar boundary \cite[Prop. 6]{Wlcurv}. We use Theorem \ref{gradpr} to bound the pairings $\langle\grad\ell_{\alpha_j},\grad\llg\rangle$. By setup the crossing and core geodesics are orthogonal. Each core geodesic intersection contributes $-2$ to the pairing evaluation. From the balanced hypothesis, the weighted sum of intersection contributions vanishes. Each remaining term of the evaluation involves a connecting geodesic segment that crosses the $\gamma$ half collar; the width of the half collar is $-\log\llg$. For large distance, the formula summand $\mathcal R$ is approximately $e^{-2d(\gamma,\alpha)}$. In \cite[Chap. 8]{Wlcbms} we showed that the sum of distances from $\alpha$ to the $\gamma$ collar boundary is uniformly bounded. It follows that the contribution $\llg^2$ of the half collar width can be factored out of each summand. The sum evaluation is $O(\llg^2)$, the desired bound. Lemma \ref{idglfb} provides the desired bound for $\sigma$ on the thick subset. 
\end{proof} \section{The symplectic geometry of lengths}\label{sympgeom} There is a length interpretation for a balanced sum $\mathcal A=\sum \mathfrak a_j \ell_{\alpha_j}$ of ideal geodesics length functions as follows. Let $\mathcal H$ be a neighborhood of the cusps given as a union of small horoballs, one at each cusp. The length $L(\mathcal A)$ of the balanced sum is the sum with weights $\mathfrak a_j$ of lengths of the segments $\alpha_j\cap (R-\mathcal H)$. The balanced condition provides that the length does not depend on the choice of horoball neighborhood $\mathcal H$. For a crossing configuration the length of a balanced sum $L(\mathcal A)$ is defined in the corresponding manner. In the crossing case, the value $L$ coincides with the weighted sum of geodesic-lengths. The length $L(\mathcal A)$ of a balanced sum is a generalization of the $R$-length of a transverse cocycle. The balanced condition at cusps is discussed in \cite[\S 12.3]{Bonshear}, where it is noted that the condition provides a well-defined notion of length. The definition in terms of horoballs shows that the length $L(\mathcal A)$ is given as $\sum 2\mathfrak a_j\log\lambda_{\alpha_j}$ for the $\lambda$-lengths of the ideal geodesics and a decoration; the dependence on the choice of decoration enters through an additive constant for each cusp, which cancels by the balanced condition. An example of a balanced sum is a shear coordinate $\sigma_*$, see formula (\ref{shear}); the sum is balanced at each vertex of the quadrilateral of Figure \ref{fig:diamond}. A second example comes directly from the shear coordinates of Riemann surfaces. By Theorem \ref{Penthrm}, the sum $\sum\sigma_j\ell_{\alpha_j}$ is balanced since the sum of shear coordinates around each cusp vanishes. The adjustment of a factor of $2$ to our formulas as detailed in \cite[\S 5]{Wlcusps} is included in the following. 
\begin{theorem}\label{twlth} For a surface $R$ with cusps and a balanced sum $\mathcal A=\sum \mathfrak a_j \ell_{\alpha_j}$ of ideal geodesic length functions, the length $L(\mathcal A)$ is a differentiable function on the Teichm\"{u}ller space of $R$ with \[ dL(\mathcal A)\ =\ \sum \mathfrak a_jd\ell_{\alpha_j}\,\in\,Q(R). \] The formal sum $\sum \mathfrak a_j\alpha_j$ is data for an infinitesimal Thurston shear $\sigma_{\mathcal A}$ with \[ \sigma_{\mathcal A}\ =\ \frac{i}{2}\sum \mathfrak a_j\grad \ell_{\alpha_j}. \] The WP twist-length duality \[ 2\omega_{WP}(\ ,\sigma_{\mathcal A})\ =\ dL(\mathcal A) \] is satisfied. In particular, the Thurston infinitesimal shear $\sigma_{\mathcal A}$ is a WP symplectic vector field with Hamiltonian potential function $L(\mathcal A)/2$. \end{theorem} \begin{proof} We first observe that $L$ is a differentiable function on the $PSL(2;\mathbb R)$ representation space. For the reference surface $F$, a simple loop $\delta\in\pi_1(F)$ about the cusp is represented in $PSL(2;\mathbb R)$ by a parabolic element that generates a maximal parabolic subgroup. Prescribing an area value (at most unity) for the quotient of a horoball by the maximal parabolic subgroup determines a horoball and horocycle. (The prescription is equivalent to a choice of decoration in the Penner approach \cite{Pencell,Penbk}.) For a pair of elements of $\pi_1(F)$ defining distinct maximal parabolic subgroups, the distance between the prescribed horocycles is a smooth function of the $PSL(2;\mathbb R)$ representation. The length $L$ is a sum of distances between horocycles and hence a smooth function. The differential $dL$ is an element of $Q(R)$. In particular the integral of the element over small neighborhoods of the cusps is small. The construction of the function and its differential is also valid for the distance between collar boundaries. 
Consider a sequence of compact surfaces $R_{\epsilon}$ with reflection symmetries obtained by doubling and opening the cusps of $R$. From Lemma \ref{idglfb}, on thick subsets, the differentials of geodesic-lengths converge uniformly to differentials for ideal geodesics. From Lemma \ref{bglfb}, for a balanced sum, the sum of differentials is uniformly bounded in each core collar; the integral of the sum is uniformly small over small area collars. As $R_{\epsilon}$ limits to $R$, the distance between collar boundaries limits to the distance between horocycles. And for closed geodesics $\beta$ contained in the thick subsets, the Fenchel-Nielsen twists on $\beta\cup\rho(\beta)$ of $R_{\epsilon}$ converge to the twist of $R\cup\bar R$ and the twist derivatives of distance converge. The considerations of Chabauty convergence and Lemmas \ref{idglfb} and \ref{bglfb} can be applied for the Fenchel-Nielsen twists on $\beta\cup\rho(\beta)$. The conclusion is again that the gradient pairing integrals over small area collars and small area horoballs are uniformly small. It follows that the pairing for a balanced sum length differential and twist converges to the limiting pairing as $\epsilon$ tends to zero. The derivative of length for $R_{\epsilon}$ converges to the derivative of length for the limit. Reflection-even twists span the reflection-even tangent space. The $dL$ formula is established. The considerations for infinitesimal Thurston shears are analogous. The deformation is smooth and by Lemma \ref{opclose} the infinitesimal deformation is a limit of opposing twists. The opposing twists satisfy $\sum \mathfrak a_j t_{\alpha_j}=i/2\sum \mathfrak a_j\grad\ell_{\alpha_j}$ on the side of $R_{\epsilon}$ that limits to $R$. We find the $\epsilon$ tending to zero limit by Lemmas \ref{idglfb} and \ref{bglfb}. The conclusions follow. \end{proof} We remark that symmetry is basic to considering the $R_{\epsilon}$ to $R$ limit of the tangent-cotangent pairing. 
With respect to the reflection $\rho$, the differential of the length $L(\mathcal A)$ is even, while the opposing twist and its limit are odd. Also the K\"{a}hler form is odd since the reflection reverses orientation for surface integration. The above duality relation $2\omega_{WP}(\ ,\sigma_{\mathcal A})=dL(\mathcal A)$ is established for reflection even tangents of $R\cup\bar R$ and cannot be applied to evaluate a shear pairing $\omega_{WP}(\sigma_{\mathcal B},\sigma_{\mathcal A})$. To evaluate the pairing of Thurston shears, we introduce an elementary alternating $2$-form for coefficients summing to zero. For a balanced sequence $\{a_j\}_{j=1}^p$, we consider the partial sums $A_0=0, A_k=\sum_{j=1}^ka_j,\,1\le k\le p,$ where by hypothesis $A_p=0$. We introduce a pairing for balanced sequences \begin{equation}\label{2form} \omega(\{a_j\},\{b_j\})\,=\,\frac12 \sum_{j=1}^p(A_j+A_{j-1})b_j. \end{equation} We explain that the pairing depends only on the joint cyclic ordering of the sequences and that the pairing is alternating. A cyclic shift in the index $j, 1\le j\le p,$ has the effect of adding a constant to the partial sums $A_j,0\le j\le p$. The balanced condition for the sequence $\{b_j\}$ provides that the pairing is unchanged. For the alternating property, we have summation by parts for balanced sequences $\{f_j\}$ and $\{g_j\}$ with partial sums $F_k$ and $G_k$ \[ \sum_{k=m}^{n-1}F_kg_{k+1}\,=\,F_nG_n\,-\,\sum_{k=m}^nG_kf_k. \] In particular we have that \[ \sum_{j=1}^pA_jb_j\,=\,A_pB_p\,-\,\sum_{j=1}^{p-1}B_ja_{j+1}\,=\,-\,\sum_{j=1}^pB_{j-1}a_j \] and \[ \sum_{j=1}^pA_{j-1}b_j\,=\,A_pB_p\,-\,\sum_{j=1}^pB_ja_j\,=\,-\,\sum_{j=1}^pB_ja_j \] using that $A_0, A_p, B_0$ and $B_p$ vanish. The pairing can be written in the alternating form \begin{equation}\label{2forma} \omega(\{a_j\},\{b_j\})\,=\,\frac12 \sum_{j=1}^p \big(A_jb_j-B_ja_j\big). 
\end{equation} We note that balanced sequences have an interpretation as tangents to the regular $(p-1)$-simplex and $\omega$ an interpretation as a closed $2$-form on the regular simplex. The form $\omega$ can be evaluated for a pair of balanced sums for a common set of disjoint ideal geodesics limiting to a cusp. For balanced sums $\mathcal A=\sum a_j\ell_{\alpha_j}, \mathcal B=\sum b_j\ell_{\alpha_j}$ and a given cusp, consider the geodesic segments limiting to the cusp; some geodesics $\alpha_j$ may not limit to the given cusp and some may have both ends limiting to the cusp. Choose and label a limiting geodesic as the first and enumerate limiting geodesics in the counterclockwise order about the cusp. Evaluate the form $\omega$ on the enumerated sequences of weights $\{a_j\}$ and $\{b_j\}$. \begin{corollary}\label{commshear} For the balanced sums $\mathcal A=\sum a_j\ell_{\alpha_j}$ and $\mathcal B=\sum b_j\ell_{\alpha_j}$ for a common set of disjoint ideal geodesics, the shear pairing is \[ \omega_{WP}(\sigma_{\mathcal A},\sigma_{\mathcal B})\,=\,\frac12 \sigma_{\mathcal A}L(\mathcal B)\,=\,\frac12\sum_{\operatorname{cusps}}\omega(\{a_j\},\{b_j\}). \] The Poisson bracket for the length functions $L(\mathcal A)$ and $L(\mathcal B)$ is \[ \{L(\mathcal A),L(\mathcal B)\}\,=\,2\sum_{\operatorname{cusps}}\omega(\{a_j\},\{b_j\}). \] \end{corollary} \begin{proof} The shear-length duality comes from Theorem \ref{twlth}. The first line of equations is established by finding the contribution to the change in the length $L(\mathcal B)$ from the change in the determination of a closed horocycle at a cusp. We refer to the schematic Figure \ref{fig:shearlines} for the basic geometry. \begin{figure}[tbp] \centering \includegraphics[bb=0 0 502 258,width=3.5in,height=1.8in,keepaspectratio]{shearlines} \caption{Shear lines at a cusp. 
The longer horizontal lines represent ideal geodesics ending at a cusp on the far left; the uppermost and lowermost horizontal lines are identified. The dotted vertical represents a closed horocycle in the undeformed hyperbolic structure and the shorter solid verticals form a closed horocycle after applying a shear $\sigma_{\mathcal A}$ for the horizontal lines. The shorter verticals are successively displaced by horizontal increments $-a_1,-a_2,\dots, -a_p$. The shaded horizontals indicate segments along the upper and lower edges of each ideal geodesic, segments connecting the horocycles of the deformed structure.} \label{fig:shearlines} \end{figure} To evaluate the change in length and $\omega$, geodesic segments are labeled as described above. In the $\sigma_{\mathcal A}$ deformed hyperbolic structure, the distance between closed horocycles measured on the upper edge of an ideal geodesic agrees with the distance measured on the lower edge. We can compute the change in distance by averaging the change for the upper and lower edges. In Figure \ref{fig:shearlines}, the change in the first distance is $A_1/2$, while the change in the $j^{th}$ distance is $(A_j+A_{j-1})/2$. For the weighted length $L(\mathcal B)$, the weight for the $j^{th}$ distance is $b_j$. The change in weighted distance for the given cusp is $\sum(A_j+A_{j-1})b_j/2$, as desired. We next consider the Poisson bracket. The non-degenerate K\"{a}hler form $\omega_{WP}$ defines an isomorphism from tangent to cotangent spaces and a dual form $\widehat{\omega_{WP}}$. For the Hamiltonian length functions the Poisson bracket is defined as $\widehat{\omega_{WP}}(dL(\mathcal A),dL(\mathcal B))$. By duality the pairing is $4\omega_{WP}(\sigma_{\mathcal A},\sigma_{\mathcal B})$. The final formula follows. 
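For the factor of $4$, the bookkeeping can be sketched using the duality $2\omega_{WP}(\ ,\sigma_{\mathcal A})=dL(\mathcal A)$ of Theorem \ref{twlth}.

```latex
\begin{align*}
\{L(\mathcal A),L(\mathcal B)\}\,&=\,\widehat{\omega_{WP}}\big(dL(\mathcal A),dL(\mathcal B)\big)
\,=\,4\,\widehat{\omega_{WP}}\big(\omega_{WP}(\ ,\sigma_{\mathcal A}),\omega_{WP}(\ ,\sigma_{\mathcal B})\big)\\
&=\,4\,\omega_{WP}(\sigma_{\mathcal A},\sigma_{\mathcal B})
\,=\,2\sum_{\operatorname{cusps}}\omega(\{a_j\},\{b_j\}).
\end{align*}
```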
\end{proof} There is a counterpart to Theorem \ref{thrmE} for the setting of shear coordinates.\footnote{Theorem \ref{thrmE} is formulated for left twists/shears while the present results are formulated for right twists/shears. The orientation difference explains the interchange of entries when comparing $2$-forms.} First given an ideal triangulation $\Delta$, Theorem \ref{Penthrm} provides a bijection between balanced sum shears $\sum a_j\mathfrak s_j$ and $\caT$ as follows, for $\mathfrak s_j$ denoting the shear deformations on the $\Delta$ edges. A {\em basepoint} $R_{\Delta}\in\caT$ in Teichm\"{u}ller space is determined by all shear coordinates vanishing. The surface $R_{\Delta}$ is constructed by gluing ideal triangles with medians on sides always matching. Each marked Riemann surface $R\in\caT$ is given uniquely as a balanced sum shear $\sigma_R=\sum a_j(R)\mathfrak s_j$ of the surface $R_{\Delta}$. We show the balanced sum length functions are linear in the shear coordinates as follows. \begin{corollary}\label{lpr} For a balanced sum $\mathcal B=\sum b_j\ell_{\alpha_j}$ of lengths of ideal geodesics of the triangulation $\Delta$ and a marked Riemann surface $R\in\caT$, the length is given by \[ L(\mathcal B)(R)\,=\,\sum_{\operatorname{cusps}}\omega(\{a_j(R)\},\{b_j\}). \] \end{corollary} \begin{proof} First we observe that all balanced sum length functions vanish at $R_{\Delta}$. Given a balanced sum $\mathcal B=\sum b_j\ell_{\alpha_j}$, consider the double sum of weights \[ \sum_{\operatorname{cusps}}\ \sum_{\operatorname{edges\ at \ cusp}}b_{(m,n)}, \] where the index $m$ enumerates cusps and the index $n$ enumerates half edges entering a cusp. The balanced sum condition is the vanishing of the inner sums. Each triangulation edge enters two cusps; the enumeration includes each triangulation edge twice. Thus the sum of weights of a balanced sum vanishes. 
Since the shear coordinates of $R_{\Delta}$ vanish, we can introduce a decoration $\caH$ for $R_{\Delta}$ such that all $h$-lengths have a common value. It follows that all $\lambda$-lengths have a common value $\lambda_0$. The length $L(\mathcal B)=\sum 2b_j\log\lambda_0$ of the balanced sum vanishes at $R_{\Delta}$. Given a surface $R$, the path of shears $\sigma_t=t\sum a_j(R)\mathfrak s_j$ connects the surfaces $R_{\Delta}$ and $R$. Corollary \ref{commshear} provides that the $t$-derivative of $L(\caB)$ along the path has the constant value $\sum_{\operatorname{cusps}}\omega(\{a_j(R)\},\{b_j\})$. Integration in $t$ provides the desired formula. \end{proof} By Theorem \ref{Penthrm}, the shear coordinates for the edges of an ideal triangulation provide a continuous immersion into Euclidean space. In particular the shear coordinates for appropriate subsets of edges provide continuous coordinates for Teichm\"{u}ller space. A procedure determining appropriate subsets of edges is given in the proof of Lemma \ref{edgebasis} below. From Theorem \ref{Penthrm}, for a subset of shear coordinates without linear relations, the differentials of the coordinates are generically linearly independent. Furthermore from Corollary \ref{commshear}, for a subset of shear coordinates without linear relations there are sets of balanced sum length functions with constant full rank Poisson bracket pairing. It follows from the pointwise full rank pairing that the differentials of the shear coordinates in the subset are pointwise linearly independent on Teichm\"{u}ller space. It also follows that the shear coordinates from the subset give a basis for the vector space of balanced sums of length functions. In \cite[\S 4]{Wlsymp}, we found for surface fundamental group representations into $PSL(2;\mathbb R)$ that the Poisson bracket of trace functions is a sum of trace functions. The present result describes a simpler structure. 
By construction Thurston shears on a common set of ideal geodesics commute and accordingly the Poisson bracket of Hamiltonian potential length functions is constant. We now express the $2$-form $\omega$ in terms of $h$-lengths and use the formula to give the relation to Corollary \ref{hform1}. \begin{corollary} \label{hh}For an ideal triangulation $\Delta$, the pullback WP K\"{a}hler form is \[ \widetilde{\omega_{WP}}\,=\,\sum_{\operatorname{cusps}}\, \sum_{j=1}^p\, \widetilde{h}_{j}\wedge\widetilde{h}_{j+1}, \] where the first sum is over cusps, the second sum is over $h$-lengths at a cusp enumerated in counterclockwise cyclic order and $\widetilde{h}_{*}=d\log h_*$. For an ideal triangulation $\Delta$, the pullback WP K\"{a}hler form is also given as \[ \widetilde{\omega_{WP}}\,=\, \frac12\sum_{e\in\Delta}d\log \lambda_e\wedge d\sigma_e. \] \end{corollary} \begin{proof} We begin with shear coordinates for $\caT$ and the shear pairing $\omega_{WP}(\sigma_{\mathcal A},\sigma_{\mathcal B})$ of Corollary \ref{commshear} above. The coefficients $\{a_j\},\{b_j\}$ are the evaluations of the differentials $\{d\sigma_e\}$ of the shear coordinates on the Thurston shears $\sigma_{\mathcal A},\sigma_{\mathcal B}$. From (\ref{shear}) and Figure \ref{fig:diamond}, the differential of a shear coordinate is $d\log (h''/h')$ where $h''$ is the $h$-length clockwise from the edge and $h'$ is the $h$-length counterclockwise from the edge. We now write the sum (\ref{2form}) at a cusp in terms of increments of $h$-lengths. We use the notation of formula (\ref{2form}). Introduce a decoration for the surface and write the shear coordinate increments in terms of $h$-length increments as $a_j=\widetilde{h}_{j-1}-\widetilde{h}_{j}$ and $b_j=\widetilde{g}_{j-1}-\widetilde{g}_{j}$, where $\widetilde{h}_{*}, \widetilde{g}_{*}$ are now the evaluations of the differential $d \log h_*$. 
The partial sums are $A_0=0$ and $A_k=\sum_{j=1}^ka_j=\widetilde{h}_{p}-\widetilde{h}_{k}$, where with the cyclic ordering $\widetilde{h}_{0}=\widetilde{h}_{p}$ and by hypothesis $\sum_{j=1}^p\widetilde{h}_{j}=0$. We find the contribution to $\omega$ from an individual increment $\widetilde{g}_{k}$ by considering \begin{multline*} (A_k+A_{k-1})b_k\,+\,(A_{k+1}+A_{k})b_{k+1}\,=\\ (2\widetilde{h}_{p}-\widetilde{h}_{k}-\widetilde{h}_{k-1})(\widetilde{g}_{k-1}-\widetilde{g}_{k})\,+\,(2\widetilde{h}_{p}-\widetilde{h}_{k+1}-\widetilde{h}_{k})(\widetilde{g}_{k}-\widetilde{g}_{k+1}). \end{multline*} The overall contribution is $(\widetilde{h}_{k-1}-\widetilde{h}_{k+1})\widetilde{g}_{k}$. We now have that \begin{multline*} \omega\,=\,\frac12\sum_{k=1}^p(A_k+A_{k-1})b_k\,=\\ \frac12\sum_{k=1}^p\det \begin{pmatrix} \widetilde{h}_{k-1} & \widetilde{h}_{k} \\ \widetilde{g}_{k-1} & \widetilde{g}_{k} \end{pmatrix} \,=\,\sum_{k=1}^pd\log h_{k-1}\wedge d\log h_k\,(\sigma_{\mathcal A},\sigma_{\mathcal B}) \end{multline*} and the first formula is established. The second formula follows from Theorem \ref{Penlambda} and formal considerations. From formula (\ref{shear}) we have that \[ d\log\lambda_e\wedge d\sigma_e\,=\,\widetilde{\lambda}_a\wedge\widetilde{\lambda}_e+\widetilde{\lambda}_e\wedge\widetilde{\lambda}_b + \widetilde{\lambda}_c\wedge\widetilde{\lambda}_e + \widetilde{\lambda}_e\wedge\widetilde{\lambda}_d, \] where the ordered side pairs $(a,e),(e,b),(c,e)$ and $(e,d)$ are in counterclockwise order relative to their containing triangles. The pairs are the side pairs of Figure \ref{fig:diamond} with one side a diagonal. Now given a pair of adjacent sides of the triangulation $\Delta$, the pair occurs in two quadrilaterals with one of the sides being a diagonal. It follows that the sum of $d\log\lambda_e\wedge d\sigma_e$ over edges is twice the sum of Theorem \ref{Penlambda}. The second formula follows. 
\end{proof} An observation of Joergen Andersen provides a direct relation of the above to Corollary \ref{hform1}. The coupling equation $h_1h_2=h_3h_4$ gives the $2$-form equation $\widetilde h_1\wedge \widetilde h_2+\widetilde h_2\wedge \widetilde h_3+\widetilde h_3\wedge \widetilde h_4+\widetilde h_4\wedge \widetilde h_1=0$ for $\widetilde h_*=d\log h_*$. The relation $\widetilde h_1\wedge \widetilde h_2+\widetilde h_3\wedge \widetilde h_4\,=\,\widetilde h_3\wedge \widetilde h_2+\widetilde h_1\wedge \widetilde h_4$ follows. Beginning with Corollary \ref{hform1} and referring to Figure \ref{fig:diamond}, we observe the following. For an edge $e$ of the triangulation, the wedge of $h$-lengths adjacent to $e$ of the triangles adjacent to $e$ can be replaced with the wedge of $h$-lengths for consecutive vertex sectors at the cusps at the ends of $e$. The replacement agrees with the orientations of the formulas. The replacement for each edge of the triangulation transforms the first adjacent by side formula to the second adjacent by vertex formula. \begin{example}\textup{The form $\omega$ for a once punctured torus.}\end{example} \vspace{-.1in} \noindent A choice of three disjoint ideal geodesics decomposes a once punctured torus into two ideal triangles. The torus is described by edge identifying two ideal triangles to form a topological rectangle with diagonal $\gamma$, and then separately identifying the horizontal edges $\alpha$ and vertical edges $\beta$. Each geodesic has both ends at the cusp, so the pattern of geodesics at the cusp is the twofold repetition $\alpha,\gamma,\beta,\alpha,\gamma,\beta$. Consider the triples of balanced weights $\{a,b,-a-b\}$ and $\{c,d,-c-d\}$ for the sequence $\alpha,\beta$ and $\gamma$. For the geodesics enumerated according to the pattern at the cusp, the sequence of partial sums for the second set of weights is $A_0=0,A_1=c,A_2=-d$ and $A_3=0$, repeating over the second period. The sum (\ref{2form}) evaluates to $\frac12\cdot2\,(ca+(c-d)(-a-b)-db)\,=\,ad-bc$, the factor $\frac12$ canceling against the twofold repetition. 
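The elementary form (\ref{2form}) and the once punctured torus evaluation can be checked numerically. The following sketch (the function names and sample weights are illustrative, not from the text) verifies the alternating property, the joint cyclic shift invariance, agreement with the alternating form (\ref{2forma}), and the twofold-pattern evaluation $ad-bc$.

```python
from fractions import Fraction
from itertools import accumulate

def omega(a, b):
    """The pairing (2form): 1/2 sum_j (A_j + A_{j-1}) b_j for balanced sequences."""
    A = [0] + list(accumulate(a))
    assert A[-1] == 0 and sum(b) == 0  # both sequences balanced
    return Fraction(1, 2) * sum((A[j] + A[j - 1]) * b[j - 1]
                                for j in range(1, len(a) + 1))

def shift(s, k):
    """Joint cyclic shift of a weight sequence."""
    return s[k:] + s[:k]

# illustrative balanced weight sequences
u = [3, -1, -5, 2, 1]
v = [2, 2, -7, 4, -1]

# alternating property and joint cyclic invariance
assert omega(u, v) == -omega(v, u)
assert all(omega(shift(u, k), shift(v, k)) == omega(u, v) for k in range(len(u)))

# agreement with the alternating form (2forma)
A = [0] + list(accumulate(u)); B = [0] + list(accumulate(v))
assert omega(u, v) == Fraction(1, 2) * sum(A[j] * v[j - 1] - B[j] * u[j - 1]
                                           for j in range(1, len(u) + 1))

# once punctured torus: twofold cusp pattern alpha, gamma, beta
a, b, c, d = 3, -2, 5, 7
wa = [a, -a - b, b] * 2   # weights {a, b, -a-b} read along the pattern
wb = [c, -c - d, d] * 2   # weights {c, d, -c-d} read along the pattern
assert omega(wb, wa) == a * d - b * c
```

The partial sums are taken over the full twofold pattern; the design mirrors the text's convention that the partial sums come from the second set of weights.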
\vspace{.12 in} We now follow the discussion of Bonahon \cite[Theorem 15]{BonTran} and Harer-Penner \cite[Section 2.1]{HP} for the dimension of the space of balanced sum coefficients. \begin{lemma}\label{edgebasis} For a surface with cusps and a maximal configuration of disjoint ideal geodesics, the space of balanced sum coefficients has the same dimension as the Teichm\"{u}ller space. \end{lemma} \begin{proof} Consider a configuration of ideal geodesics with weights as a graph with weighted edges. The graph is connected since ideal triangles fill in the configuration to form a connected surface. We will sequentially coalesce and remove edges, each time decreasing the number of vertices, to finally obtain a single vertex graph. For a surface with a single cusp no coalescing of edges is necessary. Otherwise by connectedness, there is an ideal triangle with not all vertices at the same cusp. Begin with such a designated triangle. If only two vertices are at distinct cusps, then we begin by coalescing an edge connecting the distinct vertices. If all vertices are at distinct cusps then we begin by sequentially coalescing two edges of the triangle and the third edge will not be subsequently coalesced. We label the ends of edges as {\em incoming} or {\em outgoing} at coalesced vertices as follows. Label the ends of edges adjoining the first vertex as {\em incoming}. Coalesce the first designated edge, remove the weight and label the remaining ends of edges at the second vertex as {\em outgoing} for the coalesced vertex. At the coalesced vertex the weight condition is that the sum of incoming weights equals the sum of outgoing weights. To continue, take a path of edges to an uncoalesced vertex and coalesce the first edge to an uncoalesced vertex along the path. Label the new ends of edges at the coalesced vertex as the opposite type as for the initial segment of the coalesced edge. 
At the coalesced vertex the weight condition continues to be that the sum of incoming weights equals the sum of outgoing weights. Continue coalescing edges until only a single vertex remains. For a surface of genus $g$ with $n$ cusps, there are $6g-6+3n$ edges in a maximal configuration. A total of $n-1$ edges are coalesced and $6g-6+2n+1$ edges remain. At least one edge of the initial designated triangle gives rise to an incoming-incoming edge of the final coalesced vertex. The single weight sum relation is a nontrivial condition for the weight on the incoming-incoming edge. The space of weights on the final graph has the expected dimension. \end{proof} \section{The Fock shear coordinate algebra}\label{alge} Fock and Goncharov in their quantization of Teichm\"{u}ller space introduced and worked with a Poisson algebra for the shear coordinate functions \cite{Fk,FkChk,FkGn}. The quantization considerations begin with the Fock-Thurston Theorem that for any ideal triangulation, the corresponding shear coordinates (without the vanishing sums about cusps condition) provide a real-analytic homeomorphism of the holed Teichm\"{u}ller space to Euclidean space \cite[Chap. 4, Theorem 4.4]{Penbk}. Fock proposed a Poisson structure by introducing a natural bivector, an exterior contravariant $2$-tensor $\eta$ and defining $\{f,g\}=\langle (df, dg),\eta\rangle$ for $f,g$ smooth functions. A relationship to the WP K\"{a}hler form was also proposed. A bivector defines a Poisson structure with Jacobi identity provided its Schouten-Nijenhuis bracket $[\eta,\eta]$ vanishes. 
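The counting in the proof of Lemma \ref{edgebasis} above can be tabulated; a minimal sketch (the helper names are illustrative) checking that after coalescing $n-1$ edges and imposing the single surviving weight relation, the dimension $6g-6+2n$ of the Teichm\"{u}ller space remains.

```python
def edge_count(g, n):
    """Edges of a maximal configuration on a genus g surface with n cusps."""
    return 6 * g - 6 + 3 * n

def teich_dim(g, n):
    """Real dimension of the Teichmueller space."""
    return 6 * g - 6 + 2 * n

for g in range(0, 5):
    for n in range(1, 8):
        if edge_count(g, n) <= 0:
            continue          # no ideal triangulation in the degenerate range
        remaining = edge_count(g, n) - (n - 1)  # coalesce n-1 edges
        # one nontrivial weight relation survives at the final vertex
        assert remaining - 1 == teich_dim(g, n)
```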
\begin{theorem}\textup{\cite{Fk,FkChk}} For an ideal triangulation $\Delta$ and corresponding shear coordinates, the bivector \[ \eta_{\Delta}\,=\,\sum_{\Delta} \frac{\partial\ }{\partial\sigma_{a}}\wedge\frac{\partial\ }{\partial\sigma_{b}}+\frac{\partial\ }{\partial\sigma_{b}}\wedge\frac{\partial\ }{\partial\sigma_{c}}+\frac{\partial\ }{\partial\sigma_{c}}\wedge\frac{\partial\ }{\partial\sigma_{a}} \] is natural for the holed Teichm\"{u}ller space, where the individual triangles have sides $a,b$ and $c$ in counterclockwise order. \end{theorem} Penner gave a topological description of the bracket of shear coordinates \cite[pg. 81]{Penbk}, a proof that the bivector is independent of the triangulation, and a determination of the center of the algebra \cite[Chap. 2]{Penbk}. For the topological description of the bracket, recall the definition of the {\em fat graph} dual to an ideal triangulation. To construct the fat graph $G$ embedded in the surface, choose a vertex interior to each triangle and connect vertices by an edge crossing each side shared by adjacent triangles. The result is a trivalent graph with a cyclic ordering of edges at each vertex. The trivalent graph is a deformation retract of the surface. Penner's topological description of the bracket is the following \cite[pg. 81]{Penbk}. Consider an ideal triangulation $\Delta$ with dual fat graph spine $G$. 
If $a,b\in\Delta$ are distinct edges, then let $\epsilon_{ab}$ be the number of components of the complement of $\Delta\cup G$ whose frontier contains points of $a$ and $b$, counted with a positive sign if $a$ and $b$ are consecutive in the counterclockwise order in the corresponding region, and with a negative sign if $a$ and $b$ are consecutive in the clockwise order.\footnote{We have reversed Penner's original sign convention given that his bivector has sides enumerated in a clockwise order, while Fock's bivector has sides enumerated in a counterclockwise order.} Setting $\epsilon_{aa}=0$ for each $a\in\Delta$, the pairing $\epsilon_{ab}$ takes the values $0,\pm 1,\pm2$ and gives a skew-symmetric matrix indexed by $\Delta$. The quantity $\epsilon_{ab}$ is the count of oriented vertex sectors jointly bounded by $a$ and $b$. \begin{definition} The Fock shear coordinate algebra is defined by the bracket $\{\sigma_a,\sigma_b\}\,=\,\epsilon_{ab}$ for $a,b\in\Delta$. \end{definition} From formula (\ref{shear}) and Figure \ref{fig:diamond}, a shear coordinate is a balanced sum of length functions. For Riemann surfaces with cusps the WP Poisson bracket of sums of length functions is given in Corollary \ref{commshear} in terms of weights and the form $\omega$. We evaluate $\omega$ for quadrilaterals and find that the evaluation agrees with Penner's topological description of the count $\epsilon_{ab}$. \begin{theorem}\label{FkWP} The Fock shear coordinate algebra is the WP Poisson algebra. The Fock shear coordinate bracket is given by the form $\omega$. \end{theorem} \begin{proof} We begin with Corollary \ref{commshear} providing that the Poisson bracket of the shear coordinates for edges $e,f$ is $\{\sigma_e,\sigma_f\}=2\sum_{\operatorname{cusps}}\omega(\{a_j\},\{b_j\})$, for $\{a_j\},\,\{b_j\}$ the weights for the shears as sums of lengths of ideal geodesics. The matter is to evaluate the sum (\ref{2form}) for $\omega$ for the possible configurations. 
We first consider the case of the quadrilateral for the side $e$ embedded in the surface and then describe necessary modifications for sides of the quadrilateral coinciding. The quadrilateral with weights for the edge $e$ is given in Figure \ref{fig:diamondwt}. \begin{figure}[htb] \centering \includegraphics[bb=0 0 551 573,width=3in,height=3.12in,keepaspectratio]{diamondwt} \caption{The quadrilateral for a triangulation edge $e$ following formula (\ref{shear}). The quadrilateral sides are labeled by lower case letters and vertices are labeled by Roman numerals. The edge weights $0,\pm1$ refer to expressing the $e$ shear coordinate as a balanced sum of edge lengths. The numbers in square brackets are the sums $A_j+A_{j-1}$.} \label{fig:diamondwt} \end{figure} Referring to formula (\ref{2form}), the first calculation is for the partial sums $A_j$ of edge weights. At a vertex, edges are enumerated for summation in the counterclockwise order with the {\em first} edge being the clockwise most edge. Normalize the partial sums to be zero for the unlisted edges preceding the first edge. The partial sums by vertex and in counterclockwise order are given in Table \ref{parsum}. The second calculation is for the sums $A_j+A_{j-1}$ of partial sums about vertices. The sums are given in Figure \ref{fig:diamondwt} by the numbers in square brackets; again sums vanish for edges not listed. Now we are ready to consider the configuration of the quadrilateral for the edge $f$ and the sum of weights $\frac12\sum_j(A_j+A_{j-1})b_j$. The weights for $f$ are again $0,\pm1$ as in Figure \ref{fig:diamondwt}. The edges $e$ and $f$ are necessarily distinct. First consider that $f$ coincides with a boundary edge of the $e$ quadrilateral. In this case the diagonal edge weight $0$ for $f$ is multiplied by the $[\pm1]$ boundary edge weights for $e$ and added to the $\pm1$ boundary edge $f$ weight times $1/2$ the sum of the $[2]$ and $[2]$ diagonal weights for $e$. 
The result is $\pm2$ with the positive sign if $f$ is counterclockwise from $e$. Now consider the case that the $e$ and $f$ quadrilaterals are either disjoint or intersect along a boundary edge. In the case of intersection along a boundary, the vanishing sum $[1]+[-1]$ of $e$ boundary weights gives a vanishing overall contribution. This completes the calculation if the quadrilateral of $e$ is embedded. In general a pair of sides of the quadrilateral of $e$ could coincide; we do not consider the special cases $(g,n)=(0,3)$ or $(1,1)$ where two side pairs coincide. A pair of adjacent sides could coincide by a $3/4$ rotation about the common vertex or opposite sides could coincide by a translation. When sides coincide the contribution to $\omega$ is found by adding the contributions from each of the relative configurations for the quadrilateral of $f$. The result will be $0,\pm4$ according to adjacent or opposite sides coinciding and the $e,f$ orientation. As already noted, we are using the adjustment \cite[\S 5]{Wlcusps} to our formulas $2\omega_{WP}(\ ,t_*)=d\ell_*$ in place of $\omega_{WP}(\ ,t_*)=d\ell_*$ systematically used by Penner and Fock. The consequence is that our shear pairing is fourfold the Fock and Penner calculations. With this information, the shear pairing evaluations correspond and the proof is complete. \begin{table} \begin{center} \renewcommand{\arraystretch}{1.3} \begin{tabular}{|c|c|c|c|} \hline Vertex & $A_1$ & $A_2$ & $A_3$ \\ \hline I & -1 & -1 & 0 \\ \hline II & 1 & 0 & \\ \hline III & -1 & -1 & 0 \\ \hline IV & 1 & 0 & \\ \hline \end{tabular} \caption{Partial weight sums in counterclockwise order about vertices.} \label{parsum} \end{center} \end{table} \end{proof} \section{The norm of a length gradient for a collar crossing geodesic}\label{riemmgeom} We continue to consider compact surfaces with crossing geodesics $\alpha$ and a reflection symmetry, see Figure \ref{fig:crossing}. 
We consider surfaces $R_{\epsilon}$ obtained by doubling a surface with cusps with ideal geodesics $\alpha$, and opening cusps to obtain short length core geodesics $\gamma$. We are interested in the products of the gradients $\grad\lla$ and $\grad\llg$. Theorem \ref{gradpr} and Lemma \ref{bglfb} can be combined to provide expansions for the pairings \[ \langle\grad\llg,\grad\llg\rangle\,=\,\frac{2}{\pi}\llg\,+\,O(\llg^4) \] and \[ \langle\grad\lla,\grad\llg\rangle\,=\,\frac{-4}{\pi}(\#\alpha\cap\gamma)\,+\,O(\llg^2). \] Considerations of Chabauty convergence and sums of the differential $\Omega^2$ from Section 2 suggest the heuristic expansion $\grad\ell_{\alpha}=c_{\alpha}(\ell_{\gamma})\grad\ell_{\gamma}+\overline{\psi(\ell_{\gamma})}(ds^2)^{-1}$ with $\psi(\ell_{\gamma})\in Q(R_{\epsilon})$ converging to $\psi(0)\in Q(R\cup\bar R)$. A simple argument provides that $\psi(0)$ is orthogonal to the limit of $\grad\ell_{\gamma}$. Matching the above pairing formulas gives $c_{\alpha}(\llg)\approx-2(\#\alpha\cap\gamma)/\llg$, and the heuristic then suggests an expansion \[ \langle\grad\lla,\grad\lla\rangle\,=\,\frac{8}{\pi\llg}(\#\alpha\cap\gamma)^2\,+\,O(1). \] The divergence of the pairing corresponds to the geometry. The limit of $d\lla$ is formally the differential of length of an ideal geodesic and is a holomorphic quadratic differential with double poles at cusps. The limit is not an element of $Q(R)$. Also the limiting infinitesimal deformation $\grad\ell_{\alpha}$ corresponds to opening cusps and has infinite WP norm. We would now like to use the gradient pairing formula, Theorem \ref{gradpr}, to find the WP pairing for balanced sums of gradients of lengths of ideal geodesics. The above considerations show that a pairing formula involves canceling divergences in $\llg$. The divergences appear directly in evaluating the formula. The crossing geodesic $\alpha$ is orthogonal to the collar core $\gamma$. Arcs along $\gamma$ connect the intersection points with $\alpha$. 
Each connecting arc provides a summand for the Theorem \ref{gradpr} evaluation. The connecting arcs along $\gamma$ occur in families; a family consists of a simple arc and the additional arcs obtained by adjoining complete circuits of $\gamma$. With $\llg$ tending to zero and the summand $\mathcal R(\cosh \mbox{dist})\approx2\log2/\mbox{dist}$ for small distance, there is an immediate divergence. We consider the sequence of lengths as a partition for a Riemann sum and find the $\llg$-asymptotics of the sum. The resulting formulas involve an elementary function, a reduced length for an ideal geodesic and a reduced connecting arcs sum formula. \begin{definition} For $0\le a\le 1$, define the function $\lambda(a)=a(1-a)/(2\sin\pi a)$ with value given by continuity at the interval endpoints. For a crossing geodesic $\alpha$ on a compact surface $R$ with reflection symmetry, the reduced length $\red(\lla)$ is the signed length of the segment connecting length $1$ boundaries of the complement of collars about core geodesics. For an ideal geodesic $\alpha$ on a surface with cusps, the reduced length $\red(\lla)$ is the signed length of the segment of $\alpha$ connecting the length $1$ horocycles about the limiting cusps. \end{definition} The function $\lambda(a)$ is symmetric about $a=1/2$ and satisfies $1/8\le\lambda\le 1/(2\pi)$. For a pair of points $p,q$ on a circle, we write $\lambda(p,q)$ for the evaluation using the fractional part of the segment from $p$ to $q$. For a hyperbolic surface without cone points the length $1$ horocycles are embedded circles bounding disjoint cusp regions and $\red(\lla)$ is non negative. For surfaces with cone points, the reduced length can be negative. 
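The stated properties of $\lambda(a)=a(1-a)/(2\sin\pi a)$ can be confirmed numerically; a sketch, with the endpoint value supplied by the limit $\lim_{a\to0}a/(2\sin\pi a)=1/(2\pi)$.

```python
import math

def lam(a):
    """lambda(a) = a(1-a)/(2 sin(pi a)), extended by continuity at a = 0, 1."""
    if a == 0.0 or a == 1.0:
        return 1.0 / (2.0 * math.pi)
    return a * (1.0 - a) / (2.0 * math.sin(math.pi * a))

samples = [k / 1000.0 for k in range(1001)]

# symmetry about a = 1/2
assert all(abs(lam(a) - lam(1.0 - a)) < 1e-12 for a in samples)

# minimum 1/8 at the midpoint, maximum 1/(2 pi) at the interval endpoints
values = [lam(a) for a in samples]
assert abs(min(values) - 1.0 / 8.0) < 1e-12
assert abs(max(values) - 1.0 / (2.0 * math.pi)) < 1e-12
```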
For crossing geodesics $\alpha,\beta$ on a surface with reflection symmetry or ideal geodesics $\alpha,\beta$ on a surface with cusps, we will write \[ {\sum}^{red}_{\alpha\operatorname{ to }\beta}\mathcal R \] for the reduced sum over homotopy classes rel the closed sets $\alpha,\beta$ of arcs connecting $\alpha$ to $\beta$, that are not homotopic to arcs along a core $\gamma$ or along a horocycle. For the double of a surface with cusps, the symmetric homotopy classes are even with respect to the reflection; for this situation the sum is only over arcs with representatives on a chosen side of the surface. Each geodesic representative for the reduced sum intersects the thick subset of the surface and the reduced sum includes any intersection points of the ideal geodesics $\alpha$ and $\beta$. We assume the main result Theorem \ref{shpr} and illustrate the approach with the example of a single core geodesic. The general formula depends on the pattern of crossing geodesics. \begin{example}\textup{Expansion of the WP gradient pairing for crossing geodesics $\alpha,\beta$ and a single core geodesic $\gamma$.} For the core intersections $\alpha\cap\gamma=\{a_1,a_2\},\, \beta\cap\gamma=\{b_1,b_2\}$ and a given positive constant $c$ then \begin{multline*} \langle\grad\lla,\grad\lla\rangle\,= \\ \,\frac{2}{\pi}\big(\frac{16}{\llg}\,+\,\red(\lla)\,+\,4\,+\,2\sum_{(a_i,a_j)}\log\lambda(a_i,a_j)\big)\,+\, 2\,{\sum}^{red}_{\alpha\operatorname{ to }\alpha}\mathcal R\,+\,O(\llg^{1-c}) \end{multline*} and for $\alpha\ne\beta$ \begin{multline*} \langle\grad\lla,\grad\llb\rangle\,= \\ \,\frac{2}{\pi}\big(\frac{16}{\llg}\,+\,2\sum_{(a_i,b_j)}\log\lambda(a_i,b_j)\big)\,+\, 2\,{\sum}^{red}_{\alpha\operatorname{ to }\beta}\mathcal R\,+\,O(\llg^{1-c}). \end{multline*} \end{example} We are ready to consider that pairings of balanced sums on a surface with cusps are the limits of pairings of balanced sums on approximating symmetric compact surfaces. 
The balanced sum condition will serve to cancel the universal $16/\llg$ leading divergence terms. To compare formulas note that a surface with cusps represents half of a compact surface. It is also important that remainder terms as in the example tend to zero with $\llg$ . We state the main result. For a surface with cusps, the sum over core geodesic intersections is replaced with a double sum. First, a sum over cusps and second, a sum over ordered pairs of ideal geodesic segments limiting to a cusp. Ideal geodesics are orthogonal to horocycles. The fractional part of a horocycle defined by a pair of ideal geodesics is independent of the choice of horocycle. The geometric invariant $\lambda$ is evaluated by considering the intersections with any horocycle for the cusp. We present the formula for the case of a torsion-free cofinite group. \begin{theorem}\label{shpr} \textup{The ideal geodesic complex gradient pairing.} For a surface $R$ with cusps and balanced sums $\mathcal A=\sum \mathfrak a_j\ell_{\alpha_j}, \mathcal B=\sum\mathfrak b_k\ell_{\beta_k}$ of ideal geodesic length functions, the WP pairing of gradients is \begin{multline*} \langle\grad L(\mathcal A),\grad L(\mathcal B)\rangle\,=\,\\ \sum_{j,k}\mathfrak a_j\mathfrak b_k\bigg(\delta_{\alpha_j\beta_k} \frac{2}{\pi}(\red(\ell_{\alpha_j})+2)\,+ \,\frac{2}{\pi}\sum_{\operatorname{cusps}}\, \sum_{\begin{smallmatrix}\operatorname{segments\,}\tilde\alpha_j,\tilde\beta_k \\ \operatorname{ limiting\,to\,the\,cusp}\end{smallmatrix}}\log\lambda(\tilde\alpha_j,\tilde\beta_k) \\ \,+\,{\sum}^{red}_{\alpha_j\operatorname{ to }\beta_k}\mathcal R \bigg). \end{multline*} The first sum is over weights; the double sum is over ordered pairs of geodesic segments limiting to cusps. The final sum is over homotopy classes rel the closed sets $\alpha_j,\beta_k$ of arcs connecting $\alpha_j$ to $\beta_k$, arcs that are not homotopic into a cusp. 
For the homotopy class of an intersection $\alpha_j\cap\beta_k$, the function $\mathcal R$ is evaluated on $\cos \theta$, $\theta$ the intersection angle. Otherwise, the function $\mathcal R$ is evaluated on the hyperbolic cosine of the length of the unique minimal connecting geodesic segment. Twist-length duality and $J$ an isometry provide that $4\langle \sigma_{\mathcal A},\sigma_{\mathcal B}\rangle\,=\,\langle\grad\mathcal A,\grad\mathcal B\rangle$. \end{theorem} \noindent\emph{Proof.} Begin the consideration with compact surfaces with reflection symmetries and balanced sums of geodesic-length functions converging to a surface with cusps formally doubled across the cusps. The approach is to show that the connecting arcs sums of Theorem \ref{gradpr} converge to the sum for the limiting surface. The individual summands are considered in terms of the geometry of the biorthogonal connecting geodesic segments. Begin by normalizing the uniformizations to ensure Chabauty convergence of the deck transformation groups $\Gamma$. For the geodesic $\alpha$, let $\tilde\alpha$ be a chosen geodesic line lift and $\langle A\rangle$ the cyclic group stabilizer. A fundamental interval on $\tilde\alpha$ is chosen; each left $\langle A\rangle$ orbit in $\Gamma\tilde\alpha$ and $\Gamma\tilde\beta$, $\tilde\beta$ a lift of $\beta$, has a unique biorthogonal geodesic connecting segment with one endpoint in the $\tilde\alpha$ fundamental interval. The considerations proceed in terms of the geometry of the second endpoint of the connecting segment. The finite number of terms corresponding to endpoints in a given compact set converge. The sums for families of connecting segments along the core geodesics provide universal divergences; the analysis is described in the next section. The remaining connecting segments have second endpoint outside a given compact set and the segments do not lie along core geodesics. The remaining segments necessarily intersect the lift of the thick subset. 
The remaining segments are treated according to whether the second endpoint lies in the lift of the thick or the thin subset. In the first case, the injectivity radius is bounded away from zero and the sum of such terms is uniformly bounded by applying the distant-sum method of \cite[Chap. 8]{Wlcbms}. In the second case, the endpoint lies in the lift of a standard collar or cusp region. Hyperbolic geometry is used to show that the full sum over the stabilizing cyclic hyperbolic or parabolic group is bounded simply by the distance of the fundamental interval on $\tilde\alpha$ to the boundary of the region. The distant-sum and cyclic group bounds provide that the contributions from the complement of a large compact set are sufficiently small. The estimates for the various cases are combined to establish convergence of formulas. We consider the connecting segments along a given core geodesic. We outline the approach and give a detailed treatment in the next section. The sum for a family of connecting arcs in a given direction along a core geodesic has the form \[ \sum^{\infty}_{n=0}S((a+n)\ell)\quad\mbox{for}\quad S(t)=\cosh t\,\bigg(\log\frac{\cosh t+1}{\cosh t-1}\bigg)-2 \] for $\ell$ the core length and $a\ell, a>0,$ the distance between core intersection points. The function $S(t)$ has the initial expansion $S(t)\approx 2\log\frac{2}{t}$ and for $N$ approximately $\ell^{-1-\epsilon}, \epsilon>0,$ we break up the sum \begin{multline*} \sum^{\infty}_{n=0}S((a+n)\ell)\,=\\ \sum^N_{n=0}2\log\frac{2}{(a+n)\ell}\,+\,\frac{1}{\ell}\sum^N_{n=0}\ell\bigg(S((a+n)\ell)-2\log\frac{2}{(a+n)\ell}\bigg)\,+\,\sum^{\infty}_{n=N+1}S((a+n)\ell). \end{multline*} For the first sum, we use additivity of the logarithm to obtain an expression in terms of $\log 2/\ell$ and $\log\Gamma(a+1)$ for the gamma function. Stirling's formula is then applied. 
For the second sum, half of the first and last sum terms are separated, then the Trapezoid Rule is applied to approximate the sum by an integral and an error term. The Trapezoid Rule provides an improved approximation in $\ell$. The integral is calculated by an antiderivative. Finally the bound that $S(t)$ is $O(e^{-2t})$ for $t\ge t_0 >0$, provides that the third sum is exponentially small; the consequence is that for $a>0$ the original full sum has the expansion \[ \frac{2}{\ell}\ +\ \log\frac{\Gamma(a+1)^2\ell^{2a-1}}{2^{2a}\pi}\ +\ 2a-1\ +\ O(\ell^{1-\epsilon}). \] The overall expansion for connecting arcs in the forward and reverse directions is obtained by combining the expansions for the values $a$ and $1-a$. Identities for the gamma function are used to simplify the resulting formula and to obtain the function $\lambda$. As already noted, the $\ell$-divergence is in the leading term. The balanced sum condition provides for the overall canceling of divergences in evaluating the gradient product. The proof is complete. $\qedd$ \begin{corollary} For a balanced sum $\mathcal A=\sum \mathfrak a_j\ell_{\alpha_j}$ of ideal geodesic length functions and $\beta$ a closed geodesic, the shear and twist derivative pairing is \[ \sigma_{\mathcal A}\llb\,=\,-t_{\beta}L(\mathcal A)\,=\,\sum_j\mathfrak a_j\sum_{p\in\alpha_j\cap\beta}\cos\theta_p \] for the intersection angles measured from $\alpha_j$ to $\beta$. \end{corollary} \begin{example}\label{Dedekind} \textup{A distance relation for the elliptic modular tessellation.} \end{example} \begin{figure}[htbp] \centering \includegraphics[bb=0 0 624 266,width=5.1in,height=2.17in,keepaspectratio]{modtess} \caption{The Dedekind tessellation. Graphic created by and used with permission from Gerard Westendorp.} \label{fig:modtess} \end{figure} \vspace{-.1in} The Dedekind tessellation is the tiling of the upper half plane for the action of $PSL(2;\mathZ)$. 
The light, respectively dark, triangle tiles form a single $PSL(2;\mathZ)$ orbit. The reflection in the imaginary axis normalizes the group and interchanges the light and dark triangles. The tessellation vertices are fixed points of elements of the group action. There are two orbits for vertices. There are also two orbits for ideal lines. The first consists of the lines containing a single order-$2$ fixed point. The second consists of the lines sequentially containing an order-$3$, an order-$2$ and an order-$3$ fixed point. We refer to the types as $2$-lines and $323$-lines. We consider the lines with weights: $w=+1$ for $323$-lines and $w=-1$ for $2$-lines. The system of weighted lines is $PSL(2;\mathZ)$ invariant. The formula of Theorem \ref{shpr} provides a relation for the distances between lines for the Dedekind tessellation. For any choice $\tilde a$ of a $323$-line and $\tilde \alpha$ of a $2$-line, we have \[ \sum_{\operatorname{ultraparallels\ to\,}\tilde a}w(\eta) R(d(\tilde a,\eta))\ - \sum_{\operatorname{ultraparallels\ to\,}\tilde \alpha}w(\eta) R(d(\tilde \alpha,\eta)) \ =\ \log\frac{3^6\pi^4}{2^{26}} \] for $R(d)=u\log ((u+1)/(u-1))-2$ and $u=\cosh d$. Ultraparallels are the tessellation lines at positive distance. Lines at zero distance are asymptotic. We find the relation as an exercise in evaluating the formula of Theorem \ref{shpr}. We begin with the geometry of the tiling quotient. We work with the thrice-punctured sphere uniformized by the projectivized index $6$ subgroup $P\Gamma(2)\subset PSL(2;\mathZ)$ of matrices congruent to the identity modulo $2$. A fundamental domain for the torsion-free group $P\Gamma(2)$ is given by the twelve light and dark triangles adjacent to a given largest height non vertical $323$-line. The $P\Gamma(2)$ quotient is a tri-corner pillow with three $323$-lines, labeled $a,b,c$ and three $2$-lines, labeled $\alpha,\beta,\gamma$. The $2$-lines separate the quotient into two ideal triangles. 
A $323$-line enters a single cusp of the quotient, while a $2$-line connects two distinct cusps. We evaluate the pairing product for the weighted balanced sum $\sigma=a+b+c-\alpha-\beta-\gamma$. The sum is $P\Gamma(2)$ invariant, thus $\grad \sigma\in Q(P\Gamma(2))$ by Theorem \ref{twlth}. The space of $P\Gamma(2)$ quadratic differentials is zero dimensional. The self pairing of $\grad \sigma$ is zero. We determine the contributions for terms on the right hand side of the Theorem \ref{shpr} formula. The evaluation corresponds to the formal expansion of the product $(a+b+c-\alpha-\beta-\gamma)^2$. The pairing is real and the initial factor $\pi/2$ can be moved to the left hand side. We begin with the reduced length contribution. The $P\Gamma(2)$ cusps have width $2$; the length $1$ horocycle at infinity has height $2$. For a vertical $2$-line, half of the reduced length segment connects the height two horocycle to the order-$2$ fixed point at height $1$. A $2$-line has reduced length $2\log 2$. For a vertical $323$-line, half of the reduced length segment connects the height two horocycle to the order-$2$ fixed point at height $1/2$. A $323$-line has reduced length $4\log 2$. The reduced length contributing terms of the product are $a^2+b^2+c^2+\alpha^2+\beta^2+\gamma^2$. The total first term reduced length contribution is $18\log2 +12$. We next consider the $\log \lambda$ contributions, which measure the geometry of the ideal geodesics limiting to cusps. There are two reflections stabilizing each cusp. The reflections stabilize the geodesics and provide that the intersections of the ideal geodesics with a horocycle are equally spaced and alternate by weights. The $\log\lambda$ contributing terms of the product are \begin{multline*} a^2\,+\,b^2\,+\,c^2\,+\,\alpha^2\,+\,\beta^2\,+\,\gamma^2\, -\,2a\beta\,-\,2a\gamma\\ -\,2b\alpha\,-\,2b\gamma\, -\,2c\alpha\,-\,2c\beta\,+\,2\alpha\beta\,+\,2\alpha\gamma\,+\,2\beta\gamma. 
\end{multline*} By $PSL(2;\mathZ)$ symmetry, the evaluation is the same as for $3a^2+3\alpha^2-12a\beta+6\alpha\beta$. The $a^2$ contribution is $2\log(\lambda(0)\lambda(1/2))$ given the two segments at a cusp; the $\alpha^2$ contribution is $2\log\lambda(0)$ given the two limiting cusps; the $a\beta$ contribution is $2\log\lambda(1/4)$ given the symmetry of $\lambda$; and the $\alpha\beta$ contribution is $\log\lambda(1/2)$. The evaluations are $\lambda(0)=1/(2\pi)$, $\lambda(1/4)=3\sqrt2/32$ and $\lambda(1/2)=1/8$. The total $\log \lambda$ contribution is \[ 6\, \log\frac{1}{16\pi}\ +\ 6\, \log\frac{1}{2\pi}\ -\ 24\, \log\frac{3\sqrt2}{32}\,+\,6\,\log\frac18. \] We next consider the contribution from ideal geodesics intersecting. The intersection product contributing terms are $2ab+2ac+2bc-2a\alpha-2b\beta-2c\gamma$. The geodesic intersections $ab$, $ac$ and $bc$ are twofold. From the formula the total intersection contribution is \[ 2\cdot3\cdot R(\cos\frac{\pi}{3})\ +\ 2\cdot3\cdot R(\cos\frac{2\pi}{3})\ -\ 2\cdot 3\cdot R(\cos\frac{\pi}{2})=\ 6\log3\,-\, 12 \] as follows. The leading $2$-factors are from the formal expansion of $\sigma^2$. The $3$-factors are from the symmetry of the triples $a,b,c$ and $\alpha,\beta,\gamma$. The first and second terms correspond to the fact that distinct $323$-lines intersect twice. The $R$-evaluations $R(\cos\pi/3)=(\log 3)/2-2$ and $R(\cos\pi/2)=-2$ are elementary. The final contribution of the right hand side of the overall formula is the sum for the nontrivial connecting geodesics. We start with the formal expansion $\sigma^2=a\sigma+b\sigma+c\sigma-\alpha\sigma-\beta\sigma -\gamma\sigma$. By $PSL(2;\mathZ)$ symmetry the evaluation is the same as for $3a\sigma-3\alpha\sigma$. Connecting geodesics are enumerated by lifting to the universal cover. Given lifts $\tilde a$ and $\tilde\alpha$, the desired sums are obtained. The overall relation now follows. 
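The constant terms collected above can be totalled numerically as a check on the bookkeeping. Under our reading that the factor $3$ from the $PSL(2;\mathZ)$ symmetry carries over to the per-line relation, the total must cancel three times the relation constant $\log(3^6\pi^4/2^{26})$; this is an arithmetic sketch, not part of the proof:

```python
import math

log = math.log

# Reduced-length contribution: three 2-lines (2 log 2 each) and three
# 323-lines (4 log 2 each), each with the +2 of the pairing formula.
reduced = 18.0 * log(2.0) + 12.0

# Total log-lambda contribution, using lambda(0) = 1/(2 pi),
# lambda(1/4) = 3 sqrt(2)/32 and lambda(1/2) = 1/8.
log_lam = (6.0 * log(1.0 / (16.0 * math.pi))
           + 6.0 * log(1.0 / (2.0 * math.pi))
           - 24.0 * log(3.0 * math.sqrt(2.0) / 32.0)
           + 6.0 * log(1.0 / 8.0))

# Total intersection contribution: 6 log 3 - 12.
intersections = 6.0 * log(3.0) - 12.0

total = reduced + log_lam + intersections

# The connecting-geodesic sums must cancel this total; with the symmetry
# factor 3, the per-line constant is log(3^6 pi^4 / 2^26).
target = log(3.0**6 * math.pi**4 / 2.0**26)
assert abs(total + 3.0 * target) < 1e-12
```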
We note that the lines asymptotic to $\tilde a$ and $\tilde\alpha$ correspond to the limits of lines with connecting segments along core geodesics; the $\log \lambda$ terms account for the combined contribution of the asymptotic lines. \section{The geodesic circuit sum}\label{circ} We consider the contribution to the Theorem \ref{gradpr} sum corresponding to connecting geodesics given by circuits about a fixed closed geodesic. Such a circuit sum enters when the geodesics $\alpha$ and $\beta$ are orthogonal to a common closed geodesic. The summands are evaluations of the function \[ S(t)\,=\,\cosh t\bigg(\log\frac{\cosh t +1}{\cosh t -1}\bigg)\,-\,2. \] The consideration is for the length parameter $\ell$ expansion of the infinite sum of circuits. The application to Theorem \ref{shpr} requires an expansion with remainder term tending to zero for small $\ell$. Simple analysis gives that the expansion begins with terms divergent in $\ell$. We provide the expansion. \begin{theorem}\label{lasymp} For $a$ and $\epsilon$ positive, the circuit sum has the expansion \[ \sum_{n=0}^{\infty}S((a+n)\ell)\,=\,\frac{2}{\ell}\,+\,\log\frac{\Gamma(a+1)^2\ell^{2a-1}}{2^{2a}\pi}\,+\,2a-1\,+\,O(\ell^{1-\epsilon}) \] for the gamma function $\Gamma(z)$. \end{theorem} \begin{corollary} For $\epsilon$ positive, the circuit sum for $0<a<1$ has the expansion \begin{align} \sum_{n=-\infty}^{\infty}S((a+n)\ell)\,&=\,\frac{4}{\ell}\,+\,2\log\frac{\Gamma(a+1)\Gamma(2-a)}{2\pi}\,+\,O(\ell^{1-\epsilon})\notag\\ &=\,\frac{4}{\ell}\,+\,2\log\frac{a(1-a)}{2\sin\pi a}\,+\,O(\ell^{1-\epsilon}),\notag \end{align} and for $a=1$ has the expansion \[ \sum_{n=1}^{\infty}S(n\ell)\,=\,\frac{2}{\ell}\,+\,\log\frac{\ell}{4\pi}\,+\,1\,+\,O(\ell^{1-\epsilon}). \] \end{corollary} \noindent\emph{Proof of Corollary.} Since $S(t)$ is an even function the first sum can be rewritten as $\sum_{n=0}^{\infty}S((a+n)\ell)+S((1-a+n)\ell)$ and the theorem is applied. 
The gamma function identities $\Gamma(z+1)=z\Gamma(z)$ and $\Gamma(1-z)\Gamma(z)\sin\pi z=\pi$ are applied to obtain the desired expression. Finally the case $a=1$ is a direct application of the theorem.$\qedd$ \noindent\emph{Proof of Theorem.} We begin with properties of the summand $S(t)$. The summand has the small-$t$ expansion $S(t)=2\log2/t\,-\,2\,+\,O(t^2\log t)$ and the large-$t$ expansion $S(t)=O(e^{-2t})$. We also consider the function \[ F(t)\,=\,S(t)\,-\,2\log\frac{2}{t} \] and write \[ F(t)\,=\,(\cosh t-1)\bigg(\log\frac{\cosh t+1}{\cosh t-1}\bigg)\ +\ \log\frac{t^2(\cosh t+1)}{4(\cosh t-1)}\ -\ 2. \] We note that for small-$t$, since $\cosh t-1$ is $O(t^2)$ and $t^2/(\cosh t-1)$ is analytic it follows that $F(t)$ has second derivative bounded by $-\log t$ for small-$t$. We are ready to begin the overall considerations and write the sum in the form of Riemann sums, adding in and subtracting out a $2\log 2/t$ contribution \begin{align}\label{123} \sum_{n=0}^{\infty}S((a+n)\ell)\,=\,&\sum_{n=0}^N2 \log\frac{2}{(a+n)\ell}\notag\\ &+\,\frac{1}{\ell}\sum_{n=0}^N\ell\big(S((a+n)\ell)-2\log\frac{2}{(a+n)\ell}\big)\notag\\&+\,\frac{1}{\ell}\sum_{n=N+1}^{\infty}\ell S((a+n)\ell)\notag\\ =\,&I\,+\,II\,+\,III. \end{align} We consider the right-hand sums in order. For the first sum we have \[ 2\sum_{n=0}^N\log\frac{2}{(a+n)\ell}\,=\,2(N+1)\log\frac{2}{\ell}\,+\,2\sum_{n=0}^N\log\frac{1}{(a+n)}. \] The right hand sum is $-2\log\prod_{n=0}^N(a+n)\,=\,-2\log \Gamma(a+N+1)/\Gamma(a+1)$. 
We apply Stirling's formula $\log \Gamma(z)=\frac12\log2\pi/z\,+\,z(\log z\,-\,1)\,+\,O(1/z)$ to find that \begin{multline*} I\,=\,2(N+1)\log\frac{2}{\ell}\,+\,2\log\Gamma(a+1)\,-\,2(a+N+\frac12)\log(a+N+1)\\+\,2(a+N+1)\,-\,\log 2\pi\,+\,O(N^{-1}) \end{multline*} and noting that $\log(a+N+1)=\log(a+N)+1/(a+N)+O(N^{-2})$ gives the desired final expansion \begin{multline}\label{1} I\,=\,2(N+1)\log\frac{2}{\ell}\,+\,2\log\Gamma(a+1)\,-\,2(a+N)\log(a+N)\\-\,\log(a+N+1)\,+\,2(a+N)\,-\,\log2\pi\,+\,O(N^{-1}). \end{multline} For the second sum of (\ref{123}) we use the Trapezoid Rule approximation for an integral. The approximation involves weights $1/2$ for the first and last sum terms. The error bound is in terms of the second derivative of $F(t)$ on the interval $[a\ell,(a+N)\ell]$ and the square of the partition size. The approximation gives the expansion \begin{multline*} II\,=\,\frac{1}{\ell}\int_{a\ell}^{(a+N)\ell}F(t)dt\,+\,\frac12\big(F(a\ell)\,+\,F((a+N)\ell)\big)\\+\,O(\ell\,|[a\ell,(a+N)\ell]|\max |F''|). \end{multline*} We set $(a+N)\ell=\ell^{-\epsilon}$ and consider terms in order from right to left. Given the small-$t$ logarithmic bound for $F''$ the remainder is bounded as $O(\ell^{1-2\epsilon})$. Given the large-$t$ exponential decay of $S(t)$ and the small-$t$ expansion of $S(t)$ then \[ F((a+N)\ell)\,=\,-2\log \frac{2}{(a+N)\ell}\,+\,O(e^{-\ell^{-\epsilon}})\quad \mbox{and}\quad F(a\ell)=-2+O(\ell^{2-\epsilon}). \] The next step is to include the contribution of sum $III$. The sum is replaced with the corresponding integral. Since the integrand is exponentially decreasing on the interval, the replacement remainder is exponentially small. The considerations combine to give the expansion \begin{multline*} II\,+\,III\,=\,-\frac{2}{\ell}\int_{a\ell}^{(a+N)\ell}\log\frac{2}{t}\,dt\,+\,\frac{1}{\ell}\int_{a\ell}^{\infty}S(t)\,dt\\-\,1\,-\log\frac{2}{(a+N)\ell}\,+\,O(\ell^{1-2\epsilon}). \end{multline*} The first integrand has antiderivative $t\log 2/t\,+\,t$. 
The second integrand $S(t)$ has antiderivative \[ \sinh t\bigg(\log\frac{\cosh t+1}{\cosh t-1}\bigg), \] which has the large-$t$ expansion $2\,+\,O(e^{-2t})$. We evaluate the integrals to find the contribution \begin{multline*} II\,+\,III\,=\,-2N\log 2\,+\,2(a+N)\log((a+N)\ell)\,-\,2(a+N)\\-2a\log a\ell\,+\,2a+\frac{2}{\ell}-2a\log\frac{2}{a\ell}\,-\,1\,-\log\frac{2}{(a+N)\ell}\,+\,O(\ell^{1-2\epsilon}). \end{multline*} The next step is to combine with expansion (\ref{1}) and note again that $(a+N)\ell=\ell^{-\epsilon}$ to find the desired final expansion \[ I\,+\,II\,+\,III\,=\,\frac{2}{\ell}\,+\,\log\frac{\Gamma(a+1)^2\ell^{2a-1}}{2^{2a}\pi}\,+\,2a\,-\,1\,+\,O(\ell^{1-\epsilon}). \]
\section{\boldmath Observation of $Z_b$ states in the $\Upsilon(nS)\pi^+\pi^-$ and $h_b(mP)\pi^+\pi^-$ channels} Recently Belle observed the $h_b(1P)$ and $h_b(2P)$ states in the transitions $\Upsilon(5S)\to h_b(mP)\pi^+\pi^-$~\cite{hb_belle}. The rates of these transitions appeared to be unsuppressed relative to those of the $\Upsilon(5S)\to\Upsilon(nS)\pi^+\pi^-$ ($n=1,2,3$) transitions. The $h_b(mP)$ production involves a spin flip of the $b$ quark and is suppressed as $(\Lambda_{QCD}/m_b)^2$ in the multipole expansion; this unexpected result motivated further studies of the $h_b(mP)$ and $\Upsilon(nS)$ production mechanisms. Belle studied the resonant structure of the $\Upsilon(5S)\to\Upsilon(nS)\pi^+\pi^-$ and $h_b(mP)\pi^+\pi^-$ decays ($n=1,2,3$; $m=1,2$)~\cite{zb_belle}. The $\Upsilon(nS)$ [$h_b(mP)$] states are reconstructed in the $\mu^+\mu^-$ channel [inclusively, using the missing mass of the $\pi^+\pi^-$ pairs]. Invariant mass spectra of the $\Upsilon(nS)\pi^{\pm}$ and $h_b(mP)\pi^{\pm}$ combinations are shown in Fig.~\ref{fig:zb_signals}. \begin{figure}[tbhp] \includegraphics[width=0.33\linewidth]{./y1spp-f-yp.eps} \includegraphics[width=0.33\linewidth]{./y2spp-f-yp.eps} \includegraphics[width=0.33\linewidth]{./y3spp-f-yp.eps} \center \includegraphics[width=0.33\linewidth]{./hb1_vs_mmp_fit_d.eps} \includegraphics[width=0.33\linewidth]{./hb2_vs_mmp_fit_e.eps} \caption{ Invariant mass spectra of the (a) $\Upsilon(1S)\pi^{\pm}$, (b) $\Upsilon(2S)\pi^{\pm}$, (c) $\Upsilon(3S)\pi^{\pm}$, (d) $h_b(1P)\pi^{\pm}$ and (e) $h_b(2P)\pi^{\pm}$ combinations. } \label{fig:zb_signals} \end{figure} Each distribution shows two peaks. For the $\Upsilon(nS)\pi^+\pi^-$ [$h_b(mP)\pi^+\pi^-$] channels a Dalitz plot analysis [a fit to one-dimensional distributions] is performed. The non-resonant contributions in the $h_b(mP)\pi^+\pi^-$ channels are negligible, justifying the one-dimensional analysis. 
Preliminary results of the angular analysis indicate that both states have the same spin-parity $J^P=1^+$~\cite{zb_belle_angular}; therefore a coherent sum of Breit-Wigner amplitudes is used to describe the signals. The Dalitz plot model for the $\Upsilon(5S)\to\Upsilon(nS)\pi^+\pi^-$ channels also includes the $\pi^+\pi^-$ resonances $f_0(980)$ and $f_2(1270)$ and a non-resonant contribution, parameterized as $a+b\,M_{\pi^+\pi^-}^2$, where $a$ and $b$ are complex numbers floating in the fit. The masses and widths of the two peaks are found to be in good agreement among the different channels (see Fig.~\ref{fig:zb_table}). \begin{figure}[tbhp] \center \includegraphics[width=0.7\linewidth]{./graph.eps} \caption{ The deviations of the mass and width measurements of the $Z_b(10610)$ and $Z_b(10650)$ in different channels from the values averaged over all channels. Green vertical lines indicate the $B\bar{B}^*$ and $B^*\bar{B}^*$ thresholds.} \label{fig:zb_table} \end{figure} The parameters averaged over the five decay channels are \begin{align*} M_1 & =(10607.4\pm2.0)\,\mathrm{MeV}/c^2, & M_2 & =(10652.2\pm1.5)\,\mathrm{MeV}/c^2, \\ \Gamma_1 & =(18.4\pm2.4)\,\mathrm{MeV}, & \Gamma_2 & =(11.5\pm2.2)\,\mathrm{MeV}. \end{align*} The peaks are identified as signals of two new states, named $Z_b(10610)$ and $Z_b(10650)$. Another result of the amplitude analyses is that the relative phase between the $Z_b(10610)$ and $Z_b(10650)$ amplitudes is zero for the $\Upsilon(nS)\pi^+\pi^-$ channels and $180^{\circ}$ for the $h_b(mP)\pi^+\pi^-$ channels. The masses of the $Z_b(10610)$ and $Z_b(10650)$ states are close to the $B\bar{B}^*$ and $B^*\bar{B}^*$ thresholds, respectively. All the properties of the $Z_b(10610)$ and $Z_b(10650)$ find a natural explanation once a molecular structure for these states is assumed, even without the need for a dynamical model. 
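The closeness of the averaged masses to the open-bottom thresholds can be made explicit with approximate meson masses. The PDG input values below are our assumption, not part of the Belle analysis:

```python
# Approximate meson masses in MeV/c^2 (PDG values; our inputs, not from
# the Belle fit).
m_B = 5279.3      # B meson
m_Bstar = 5325.2  # B* meson

thr_BBstar = m_B + m_Bstar    # B Bbar* threshold
thr_BstarBstar = 2 * m_Bstar  # B* Bbar* threshold

# Averaged Z_b(10610) and Z_b(10650) masses from the text, MeV/c^2.
M1, M2 = 10607.4, 10652.2

# Both states sit within a few MeV of the respective thresholds.
assert abs(M1 - thr_BBstar) < 5.0
assert abs(M2 - thr_BstarBstar) < 5.0
```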
Considering the heavy-quark spin structure of the $B^{(*)}\bar{B}^*$ molecule with $I^G(J^P)=1^+(1^+)$, one concludes that the $Z_b$ states contain both ortho- and para-bottomonium components~\cite{zb_voloshin}. The weights of these components are equal; therefore the decays to $h_b(mP)\pi^{\pm}$ are not suppressed relative to $\Upsilon(nS)\pi^{\pm}$. The $Z_b(10610)$ and $Z_b(10650)$ differ by the relative sign between the ortho- and para-bottomonium components, which explains why the $Z_b(10610)$ and $Z_b(10650)$ amplitudes appear with a plus sign in the $\Upsilon(nS)\pi^+\pi^-$ channels and with a minus sign in the $h_b(mP)\pi^+\pi^-$ channels. In the limit of an infinitely heavy $b$ quark the $B$ and $B^*$ mesons have equal mass, and thus the $Z_b(10610)$ and $Z_b(10650)$ are also degenerate. Given the minus sign between the $Z_b$ amplitudes in the $h_b(mP)\pi^+\pi^-$ channels, the contribution of these channels vanishes if the heavy quark symmetry is exact. \section{Observation of the $Z_b(10610)\to B\bar{B}^*$ and $Z_b(10650)\to B^*\bar{B}^*$ decays \label{sec:obs}} Given the proximity to the thresholds and the finite widths, it is natural to expect that the rates of the ``fall-apart'' decays $Z_b(10610)\to B\bar{B}^*$ and $Z_b(10650)\to B^*\bar{B}^*$ are substantial in the molecular picture. To search for these transitions Belle studied the $\Upsilon(5S)\to[B^{(*)}\bar{B}^*]^{\pm}\pi^{\mp}$ decays~\cite{zb_belle_bb}. One $B$ meson is reconstructed fully using the $D^{(*)}\pi^+$ and $J/\psi K^{(*)}$ channels. 
The distribution of the missing mass of the $B\pi^{\pm}$ pairs shows clear signals of the $\Upsilon(5S)\to[B\bar{B}^*]^{\pm}\pi^{\mp}$ and $\Upsilon(5S)\to[B^*\bar{B}^*]^{\pm}\pi^{\mp}$ decays [see Fig.~\ref{fig:bbpi} (a)]; \begin{figure}[tbhp] \includegraphics[width=0.33\linewidth]{./rmbp-ex2a.eps} \includegraphics[width=0.33\linewidth]{./sig-bbp1b.eps} \includegraphics[width=0.33\linewidth]{./sig-bbp2c.eps} \caption{ Missing mass of the pairs formed from the reconstructed $B$ candidate and charged pion (a) and missing mass of the charged pions for the $B\pi$ combinations for (b) $\Upsilon(5S)\to B\bar{B}^*\pi$ and (c) $\Upsilon(5S)\to B^*\bar{B}^*\pi$ candidate events.} \label{fig:bbpi} \end{figure} the corresponding branching fractions of $(2.83\pm0.29\pm0.46)\,\%$ and $(1.41\pm0.19\pm0.24)\,\%$, respectively, are in agreement with the previous Belle measurement~\cite{bbp_belle}. No signal of the $\Upsilon(5S)\to[B\bar{B}]^{\pm}\pi^{\mp}$ decay is found, with an upper limit on its fraction of $0.4\,\%$ at 90\% confidence level. The distributions in the $B\bar{B}^*$ and $B^*\bar{B}^*$ invariant mass for the $\Upsilon(5S)\to[B\bar{B}^*]^{\pm}\pi^{\mp}$ and $\Upsilon(5S)\to[B^*\bar{B}^*]^{\pm}\pi^{\mp}$ signal regions, respectively, show a clear excess of events over background, peaking at the thresholds [see Fig.~\ref{fig:bbpi}~(b) and~(c)]. These threshold peaks are interpreted as signals of the $Z_b(10610)\to B\bar{B}^*$ and $Z_b(10650)\to B^*\bar{B}^*$ decays, with significances of $8\,\sigma$ and $6.8\,\sigma$, respectively. Despite the much larger phase space, no significant signal of the $Z_b(10650)\to B\bar{B}^*$ decay is found. Assuming that the $Z_b$ decays are saturated by the channels observed so far, Belle calculated the relative branching fractions of the $Z_b(10610)$ and $Z_b(10650)$ (see Table~\ref{tab:zb_dec}). 
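The saturation assumption and the quoted rescaling can be checked arithmetically from the central values listed in Table~\ref{tab:zb_dec}; a sketch (the tolerances are ours and ignore the quoted uncertainties):

```python
# Central values of the branching fractions from Table tab:zb_dec, percent.
zb10610 = [0.32, 4.38, 2.15, 2.81, 2.15, 86.0]
zb10650 = [0.24, 2.40, 1.64, 7.43, 14.8, 73.4]

# Under the saturation assumption each column sums to ~100% within errors.
assert abs(sum(zb10610) - 100.0) < 3.0
assert abs(sum(zb10650) - 100.0) < 3.0

# Including a 25.4% Z_b(10650) -> B Bbar* channel rescales the other
# fractions by 1 / (1 - 0.254) ~ 1.34, consistent with the quoted 1.33.
factor = 1.0 / (1.0 - 0.254)
assert abs(factor - 1.33) < 0.02
```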
\begin{table}[tb!h] \caption{Branching fractions ($\mathcal{B}$) of $Z_b(10610)$ and $Z_b(10650)$ assuming that the channels observed so far saturate their decays.} \label{tab:zb_dec} \center \begin{tabular}{l|c|c} \hline Channel & $\mathcal{B}$ of $Z_b(10610)$, \% & $\mathcal{B}$ of $Z_b(10650)$, \% \\ \hline $\Upsilon(1S)\pi^{+}$ & $0.32\pm0.09$ & $0.24\pm0.07$ \\ $\Upsilon(2S)\pi^{+}$ & $4.38\pm1.21$ & $2.40\pm0.63$ \\ $\Upsilon(3S)\pi^{+}$ & $2.15\pm0.56$ & $1.64\pm0.40$ \\ $h_b(1P)\pi^{+}$ & $2.81\pm1.10$ & $7.43\pm2.70$ \\ $h_b(2P)\pi^{+}$ & $2.15\pm0.56$ & $14.8\pm6.22$ \\ $B^+\bar{B}^{*0}+\bar{B}^0B^{*+}$ & $86.0\pm3.6$ & -- \\ $B^{*+}\bar{B}^{*0}$ & -- & $73.4\pm7.0$ \\ \hline \end{tabular} \end{table} The $B^{(*)}\bar{B}^*$ channel is dominant and accounts for about 80\% of the $Z_b$ decays. The $Z_b(10650)\to B\bar{B}^*$ channel is not included in the table because its significance is marginal. If included, the $Z_b(10650)\to B\bar{B}^*$ branching fraction would be $(25.4\pm10.2)\%$, and all other fractions would be reduced by a factor of 1.33. \section{Evidence for the neutral isotriplet member $Z_b(10610)^0$} Both $Z_b(10610)$ and $Z_b(10650)$ are isotriplets; only their charged components were observed originally. Belle searched for the neutral components using the $\Upsilon(5S)\to\Upsilon(nS)\pi^0\pi^0$ ($n=1,2$) decays~\cite{zb_belle_neutral}. These decays are observed for the first time, and the measured branching fractions, $\mathcal{B}[\Upsilon(5S)\to\Upsilon(1S)\pi^0\pi^0]=(2.25\pm0.11\pm0.22)\times10^{-3}$ and $\mathcal{B}[\Upsilon(5S)\to\Upsilon(2S)\pi^0\pi^0]=(3.66\pm0.22\pm0.48)\times10^{-3}$, are in agreement with isospin relations. Belle performed Dalitz plot analyses of the $\Upsilon(5S)\to\Upsilon(1S,2S)\pi^0\pi^0$ transitions using the same model as for the charged pion channels (see Fig.~\ref{fig:zb_neu}). 
\begin{figure}[tbhp] \includegraphics[width=0.48\linewidth]{./y1s-100.eps} \includegraphics[width=0.48\linewidth]{./y1s-200.eps} \includegraphics[width=0.48\linewidth]{./y2s-100.eps} \includegraphics[width=0.48\linewidth]{./y2s-200.eps} \caption{ The projections of the Dalitz plot fit for the $\Upsilon(1S)\pi^0\pi^0$ (top row) and $\Upsilon(2S)\pi^0\pi^0$ (bottom row) channels on the $\Upsilon(nS)\pi^0$ (left column) and $\pi^0\pi^0$ (right column) invariant mass. } \label{fig:zb_neu} \end{figure} The $Z_b(10610)^0$ signal is found in the $\Upsilon(2S)\pi^0$ channel with a significance of $4.9\,\sigma$, including systematics. The $Z_b(10610)^0$ mass of $(10609^{+8}_{-6}\pm6)\,\mathrm{MeV}/c^2$ is consistent with the mass of the charged $Z_b(10610)^{\pm}$. The signals of the $Z_b(10610)^0$ in the $\Upsilon(1S)\pi^0$ channel and of the $Z_b(10650)^0$ are insignificant. The Belle data do not contradict the existence of the $Z_b(10610)^0\to\Upsilon(1S)\pi^0$ decay and of the $Z_b(10650)^0$, but the available statistics are insufficient to establish these signals. \section{Interpretations} As discussed at the end of Section~\ref{sec:obs}, the assumption of a molecular $B^{(*)}\bar{B}^*$ structure naturally explains all properties of the $Z_b$ states observed so far. Their dynamical model, however, remains an open question. Proposed interpretations include a compact tetraquark~\cite{zb_ali}, non-resonant rescattering~\cite{zb_thresh}, multiple rescatterings that result in an amplitude pole known as a coupled-channel resonance~\cite{zb_cc_resonance}, and a deuteron-like molecule bound by meson exchanges~\cite{zb_molec}. All these mechanisms (except for the tetraquark) are intimately related and represent quantitative rather than qualitative differences. Further experimental and theoretical studies are needed to clarify the nature of the $Z_b$ states. 
As discussed in Ref.~\cite{zb_voloshin}, heavy-quark symmetry leads one to expect further states of a similar nature but with different quantum numbers. Such states should be accessible in radiative and hadronic transitions in high-statistics data samples taken at and above the $\Upsilon(5S)$, which will become available at SuperKEKB. \section{Summary} Although observed only recently, the $Z_b$ states already constitute a rich phenomenological subject with a wealth of experimental information available. They could prove very useful for understanding the dynamics of hadronic systems near and above the open-flavor thresholds.
\section{Introduction} Within our Solar System, Earth and smaller bodies are primarily rocky (or, far from the Sun, mixtures of rock and ices), whereas the cosmically abundant low-density constituents H$_2$ and He dominate the volume in Uranus/Neptune and larger bodies. There are no local examples of bodies intermediate in size or mass between Earth (1 R$_\oplus$, 1 M$_\oplus$) and Uranus/Neptune, both of which are larger than 3.8 R$_\oplus$ and more massive than 14 M$_\oplus$. However, observations of extrasolar planets are now filling this gap in our knowledge of the mass-radius relationship of planetary bodies. To date, the only accurate radius measurements for exoplanets have been provided by planets observed to transit across the disk of their star. The fractional depth of the transit provides a direct measure of the ratio of the radius of the planet to that of its star. The star's radius is estimated using spectroscopic classification, in some cases augmented by other techniques. Doppler measurements of the variation of a star's radial velocity have been used to compute mass estimates for almost two hundred transiting giant planets, as well as for the first three sub-Uranus exoplanets for which both radii and masses were determined: GJ 1214 b \citep{cha09}, CoRoT-7 b \citep{que09}, and Kepler-10 b \citep{bat11}. Analysis of transit timing variations (TTVs) resulting from mutual planetary perturbations provided dynamical estimates of the masses of the five innermost known planets orbiting Kepler-11 \citep{liss11a}, more than doubling the number of exoplanets less massive than Uranus with both size and mass measurements.
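For a dark planet crossing a uniform stellar disk, the fractional depth is just the area ratio, so $R_p/R_\star \simeq \sqrt{\delta}$; a minimal sketch of that conversion (illustrative only; limb darkening shifts real fitted values slightly):

```python
import math

def radius_ratio(depth_ppm):
    """Planet-to-star radius ratio from transit depth, uniform stellar disk."""
    return math.sqrt(depth_ppm * 1e-6)

# a 1% deep transit (10,000 ppm) implies Rp/Rstar = 0.1
ratio = radius_ratio(10_000.0)
```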
Precise mass estimates have subsequently been obtained for several more sub-Uranus mass planets, in three cases using radial velocity (RV): 55 Cancri e (\citealt{win11,endl12}), Kepler-20 b \citep{gau12}, and GJ 3470 b \citep{bon12}; three using TTVs: Kepler-36 b,c \citep{car12}, and Kepler-30 b \citep{san12}; and one, Kepler-18 b \citep{coch11}, using a combination of RV and TTV data. Less precise estimates for the masses of dozens of \ik planets and planet candidates, many of which are in this mass range, have been derived from TTVs by \citet{wu12}. \citet{liss11a} estimated the masses of the five planets Kepler-11 b--f using only the first 16 months of \ik data. Similar mass constraints on these planets, as well as an upper limit of 30 M$_\oplus$ on the mass of the outer planet Kepler-11 g, were obtained by \citet{mig12}, who analyzed the same Q1--Q6\footnote{The \ik spacecraft rotates four times per orbit to keep the sunshade and solar panels oriented properly. Targets are imaged on different parts of the focal plane during different orientations. The \ik orbital period is $\sim$372 days, and the data are grouped according to the ``quarter'' year during which observations were made. The data on Kepler-11 taken prior to \ik's first ``roll'' are referred to as Q1. Subsequent quarters are numbered sequentially: Q2, Q3, ...} data using a photodynamical model, which adjusted planetary parameters (size, orbital elements, masses) to minimize the residuals of a fit of a model lightcurve that accounts for mutual planetary interactions to the measured lightcurve. We report herein more precise estimates of the masses of the six Kepler-11 planets derived from TTV measurements that incorporate 40 months of \ik photometric time series data. In Section 2, we present our estimates of transit times; detailed descriptions of the three independent techniques used to compute these times are given in Appendix A.
Our dynamical analysis of the Kepler-11 system based upon these transit times is presented in Section 3, with additional information provided in Appendix B. In Section 4, we combine estimates of stellar density obtained using transit profiles and the dynamical measurement of planetary eccentricities presented in Section 3 together with analyses of high-resolution spectra taken at the Keck I telescope to provide refined parameters for the star Kepler-11. We tabulate the properties of Kepler-11's six known planets that are derived by combining lightcurve analysis with our dynamical results and stellar parameters in Section 5, wherein we also discuss implications of these results for planetary compositions. We conclude the paper with a summary of our principal results. \section{Measurement of Transit Times from \ik Photometric Time Series} Variations in the brightness of Kepler-11 have been monitored with an effective duty cycle exceeding 90\% starting at barycentric Julian date (BJD) 2454964.512, with all data returned to Earth at a cadence of 29.426 minutes (long cadence, LC); data have also been returned at a cadence of 58.85 seconds (short cadence, SC) since BJD 2455093.216. Our analysis uses short cadence data where available, augmented by the long cadence dataset primarily during the epoch prior to BJD 2455093.216, for which no SC data were returned to Earth. We obtained these data from the publicly-accessible MAST archive at http://archive.stsci.edu/kepler/. As measurement of transit times (TTs) requires a complicated analysis of often noisy data, authors Jason Rowe (J.R.), Eric Agol (E.A.) and Donald Short (D.S.) performed independent measurements of TTs using techniques described in Appendix A. Figure~\ref{fig:1} shows the deviations of all three sets of observed transit times, $O$, relative to times from a linear ephemeris fit, $C_\textit{l}$, through Q14 \ik data. Here and throughout we base our timeline for transit data from JD-2,454,900.
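The $O-C_l$ quantities plotted in Figure 1 come from a least-squares linear ephemeris; a minimal sketch of that step (synthetic numbers, not the actual Kepler-11 measurements):

```python
import numpy as np

def oc_from_linear_ephemeris(epochs, times):
    """Fit times = T0 + P*epoch by least squares and return O - C_l."""
    P, T0 = np.polyfit(epochs, times, 1)  # slope = period, intercept = T0
    return np.asarray(times) - (T0 + P * np.asarray(epochs))

# strictly periodic transits plus a small sinusoidal TTV of amplitude 0.01 d
n = np.arange(20)
times = 100.0 + 10.3 * n + 0.01 * np.sin(2 * np.pi * n / 7.0)
oc = oc_from_linear_ephemeris(n, times)  # the residual TTV signal, in days
```

A strictly periodic planet gives $O-C_l$ consistent with zero; mutual perturbations leave the quasi-sinusoidal residual recovered above.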
As evident in Figure~\ref{fig:1}, each set of TT measurements contains several outliers. These outliers are unlikely to be correct, and may be due to overlapping transits, star spots, or uncertain fits to the lightcurve. Trying to fit these outlier TTs would degrade our dynamical studies. Therefore, we remove any point for which a method yields a TT whose uncertainty is more than 2.5 times as large as the median TT uncertainty computed by that method for the planet in question. We then use the three sets of measured TTs to filter out unreliable measurements as follows: If two or three sets of measurements are available for a specific transit and each of the $1\sigma$ uncertainty ranges overlaps with at least one of the other ranges, then all of the points are used. If there is only a single measurement, or if there is no overlap of $1\sigma$ uncertainty ranges, then all measurements of this transit are discarded. If three measurements are available, and two overlap but the third does not overlap with either, then the data are discarded for TTs of planets b--f, but the two overlapping points are retained for planet g, which has far fewer transits observed than any other planet (and no significant TTVs even with these points included). This culling procedure removed fewer than 8\% of detected transit times from each dataset, with the most points discarded from Kepler-11 b, whose transits are the most numerous and have the lowest signal-to-noise ratio (S/N). For planet b, we removed 17 of the 103 TTs measured by E.A., 9 of the 111 TTs measured by J.R., and 13 of the 90 TTs measured by D.S. Our approach is conservative in the sense that the data set used for our dynamical studies presented in Section 3 consists only of transit times that are corroborated by at least one alternative method.
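The culling rules above can be sketched as two filters (hypothetical data structures; the thresholds come from the text):

```python
import statistics

def cull_uncertain(tts, errs, factor=2.5):
    """Drop TTs whose uncertainty exceeds `factor` times that method's
    median TT uncertainty for the planet in question."""
    med = statistics.median(errs)
    return [(t, e) for t, e in zip(tts, errs) if e <= factor * med]

def overlap(a, b):
    """Do two (value, 1-sigma) measurements have overlapping ranges?"""
    (t1, e1), (t2, e2) = a, b
    return abs(t1 - t2) <= e1 + e2

def keep_transit(measurements, relax_two_of_three=False):
    """Cross-validation step: keep measurements of one transit only if each
    overlaps at least one other; with relax_two_of_three (the planet-g
    exception), keep an overlapping pair even if the third point disagrees."""
    if len(measurements) < 2:
        return []                       # single measurements are discarded
    kept = [m for m in measurements
            if any(overlap(m, o) for o in measurements if o is not m)]
    if len(kept) == len(measurements):
        return kept                     # all mutually corroborated
    if relax_two_of_three and len(measurements) == 3 and len(kept) == 2:
        return kept                     # planet-g exception
    return []
```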
\begin{figure} \includegraphics [height = 1.8 in]{f1a.eps} \includegraphics [height = 1.8 in]{f1b.eps} \newline \includegraphics [height = 1.8 in]{f1c.eps} \includegraphics [height = 1.8 in]{f1d.eps} \newline \includegraphics [height = 1.8 in]{f1e.eps} \includegraphics [height = 1.8 in]{f1f.eps} \caption{Transit timing variations for Kepler-11's six known planets, using short cadence data when available, supplemented by long cadence data prior to $t$ (JD-2,454,900) = 193 days, where short cadence data were not sent to Earth. The TTs measured by E.A. are displayed as green open triangles, those from J.R. as blue open circles, and those calculated by D.S. as red open squares, with their respective methods described in Appendix A. The sets of data points are largely consistent. The observed transit times, $O$, are displayed as deviations from times, $C_{\textit{l}}$, that were calculated using a linear fit to each set of transit data, i.e., a fit that assumes strictly periodic orbits. All measured TTs are displayed, apart from one outlier for Kepler-11 d that deviated from both of the other estimates and a linear ephemeris by more than three hours. Note that the vertical scales differ among panels.} \label{fig:1} \end{figure} \section{Dynamical Models of the Kepler-11 Planetary System} Transits of a planet on a Keplerian orbit about its star must be strictly periodic. In contrast, the gravitational interactions among planets in a multiple planet system cause orbits to speed up and slow down by small amounts, leading to deviations from exact periodicity of transits (\citealt{dob96,hm05,assc05}). Such variations are strongest when planetary orbital periods are commensurate or nearly so, which is the case for the large planets Kepler-9 b and c \citep{hol10}, or when planets orbit close to one another, which is the case for the inner five transiting planets of Kepler-11 \citep{liss11a}.
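As an illustration of the near-commensurability at work, close to a first-order $j{:}(j-1)$ resonance the TTV signal is modulated on the standard ``super-period'' $P_{\rm ttv}=1/|j/P_{\rm out}-(j-1)/P_{\rm in}|$. The numbers below use the orbital periods of Kepler-11 b and c from our dynamical fit and are only indicative:

```python
def super_period(p_in, p_out, j):
    """TTV modulation ('super') period near the j:(j-1) commensurability."""
    return 1.0 / abs(j / p_out - (j - 1) / p_in)

# Kepler-11 b and c lie near the 5:4 commensurability
p_ttv = super_period(10.3039, 13.0241, j=5)  # of order a few hundred days
```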
To integrate planetary motions, we adopt the 8th order Runge-Kutta Prince-Dormand method, which has 9th order errors. Our choice of dynamical epoch was $T_{0} = 680$ days, near the midpoint of the fourteen quarters of \textit{Kepler} data being modeled. In all of our simulations, the orbital period and phase of each planet are free parameters. The phase is specified by the midpoint of the first transit subsequent to our chosen epoch. Initially, we keep all planetary masses as free parameters. In some cases, we required planets to be on circular orbits at epoch, whereas in others we allowed the orbits to be eccentric. We have assumed co-planarity, i.e., negligible mutual inclinations between planetary orbits, in all of our dynamical models. We make no attempt to model transit durations or impact parameters in our dynamical simulations. Our integrations produce an ephemeris of simulated transit times, $C_\textit{s}$, and we compare these simulated times to the observed TTs. We employ the Levenberg-Marquardt algorithm to search for a local minimum in $\chi^2$. The algorithm evaluates the local slope and curvature of the $\chi^2$ surface. Once it obtains a minimum, the curvature of the surface is used to evaluate error bars. Other parameters are allowed to float when determining an individual parameter's error bars. Assuming that the $\chi^2$ surface is parabolic in the vicinity of its local minimum, its contours are concentric ellipses centered at the best-fit value. The orientations of these ellipses depend on correlations between parameters. The errors that we quote account for the increase in uncertainty in some dimensions due to such correlations. We adopted a wide variety of initial conditions for comparison, and found that our solutions were insensitive to the mass of the outer planet, Kepler-11 g.
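The curvature-based error estimate can be sketched on a toy problem: fit a linear ephemeris plus a sinusoidal TTV term to synthetic transit times by nonlinear least squares, then take the covariance from $J^{\rm T}J$ at the minimum. All names and numbers below are illustrative stand-ins, not the actual setup:

```python
import numpy as np
from scipy.optimize import least_squares

# synthetic transit times: linear ephemeris + sinusoidal TTV + noise
rng = np.random.default_rng(1)
n = np.arange(40)
sigma = 0.005
period_true, t0_true, amp_true = 10.3, 100.0, 0.02
times = (t0_true + period_true * n
         + amp_true * np.sin(2 * np.pi * n / 9.0)
         + sigma * rng.normal(size=n.size))

def residuals(p):
    period, t0, amp = p
    model = t0 + period * n + amp * np.sin(2 * np.pi * n / 9.0)
    return (times - model) / sigma        # normalized residuals

fit = least_squares(residuals, x0=[10.0, 99.0, 0.0])
# Covariance from the local curvature of the chi^2 surface; off-diagonal
# terms encode the parameter correlations discussed above.
cov = np.linalg.inv(fit.jac.T @ fit.jac)
errs = np.sqrt(np.diag(cov))
```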
Hence for all subsequent simulations used to determine the masses and orbital parameters of the five inner planets, we keep the mass of Kepler-11 g as a fixed parameter set to $2.53 \times 10^{-5}M_\star$ (comparable to the masses of similar size planets in this system and equal to 8 M$_{\oplus}$ for the value of stellar mass estimated by \citealt{liss11a}), with its orbital eccentricity fixed at zero. We find that the masses and orbital parameters of planets Kepler-11 b--f converge to the values listed in Tables~\ref{tbl-EAbestfit} -- \ref{tbl-DSbestfit} (Appendix B), and the resulting modeled TTVs fit the data well, as displayed in Figures~\ref{fig:EA1} -- \ref{fig:DS2}. \begin{table} \begin{center} \begin{tabular}{|cccccc|} \hline Planet & $P$ (days) & $T_{0}$ (date) & $e \cos \omega$ & $e \sin \omega$ & $M_p/M_{\star} \times 10^{-6}$ \\ \hline b & \textbf{10.3039}$^{+0.0006}_{-0.0010}$ & \textbf{689.7378}$^{+0.0026}_{-0.0047}$ & \textbf{0.032}$^{+0.036}_{-0.032}$ & \textbf{0.032}$^{+0.059}_{-0.029}$ & \textbf{5.84}$^{+4.25}_{-3.10}$ \\ c & \textbf{13.0241}$^{+0.0013}_{-0.0008}$ & \textbf{683.3494}$^{+0.0014}_{-0.0019}$ & \textbf{0.016}$^{+0.033}_{-0.025}$ & \textbf{0.020}$^{+0.053}_{-0.029}$ & \textbf{9.19}$^{+9.12}_{-4.90}$ \\ d & \textbf{22.6845}$^{+0.0009}_{-0.0009}$ & \textbf{694.0069}$^{+0.0022}_{-0.0014}$ & \textbf{-0.003}$^{+0.005}_{-0.005}$ & \textbf{0.002}$^{+0.006}_{-0.002}$ & \textbf{22.86}$^{+2.58}_{-4.83}$ \\ e & \textbf{31.9996}$^{+0.0008}_{-0.0012}$ & \textbf{695.0755}$^{+0.0015}_{-0.0009}$ & \textbf{-0.008}$^{+0.004}_{-0.003}$ & \textbf{-0.009}$^{+0.005}_{-0.005}$ & \textbf{24.87}$^{+4.84}_{-6.68}$ \\ f & \textbf{46.6888}$^{+0.0027}_{-0.0032}$ & \textbf{718.2710}$^{+0.0041}_{-0.0038}$ & \textbf{0.011}$^{+0.009}_{-0.008}$ & \textbf{-0.005}$^{+0.006}_{-0.007}$ & \textbf{6.32}$^{+2.63}_{-2.94}$ \\ g & \textbf{118.3807}$^{+0.0010}_{-0.0006}$ & \textbf{693.8021}$^{+0.0030}_{-0.0021}$ & (0) & (0) & \textbf{$\lesssim 70$} \\ \hline \end{tabular} 
\caption{Our combined-fit dynamical model of the observed transit times, with the orbital periods (second column), time of first transit after JD = 2,454,900 (third column), $e\cos\omega$ (fourth column), $e\sin\omega$ (fifth column), and planetary mass in units of the stellar mass (sixth column), all as free variables for planets Kepler-11 b--f. Periods are given as viewed from the barycenter of our Solar System. Because Kepler-11 is moving towards the Solar System at 57 km s$^{-1}$, actual orbital periods in the rest frame of Kepler-11 are a factor of 1.00019 times as long as the values quoted (as noted by \citealt{liss11a}). The simulations used to derive these parameters adopted a circular orbit and a fixed mass of $25.3 \times 10^{-6} M_{\star}$ for Kepler-11 g. The upper limit on the mass of planet g was explored separately, as described in the text\label{tbl-dyn}.} \end{center} \end{table} \begin{figure} \includegraphics [height = 2.1 in]{f2a.eps} \includegraphics [height = 2.1 in]{f2b.eps} \newline \includegraphics [height = 2.1 in]{f2c.eps} \includegraphics [height = 2.1 in]{f2d.eps} \newline \includegraphics [height = 2.1 in]{f2e.eps} \includegraphics [height = 2.1 in]{f2f.eps} \caption{Observed and simulated transit timing variations for planets Kepler-11 b, c and d, using transit measurements from E.A. The panels on the left-hand side compare observed TTVs (the difference between observed TTs and the best fit constant-period ephemeris, $O-C_{\textit{l}}$), which are represented by open symbols with error bars, with model TTVs (the departure of model times from the same constant-period ephemeris, $C_{\textit{s}} - C_\textit{l}$), which are represented by filled black points. The right-hand side plots the residuals of the fit (i.e., the dynamical model subtracted from the observed transit times). Note the differences between the vertical scales of the various panels.
} \label{fig:EA1} \end{figure} \begin{figure} \includegraphics [height = 2.1 in]{f3a.eps} \includegraphics [height = 2.1 in]{f3b.eps} \newline \includegraphics [height = 2.1 in]{f3c.eps} \includegraphics [height = 2.1 in]{f3d.eps} \newline \includegraphics [height = 2.1 in]{f3e.eps} \includegraphics [height = 2.1 in]{f3f.eps} \caption{Observed and simulated transit timing variations for Kepler-11 e, f and g, using transit time measurements from E.A. See the caption to Figure 2 for details.} \label{fig:EA2} \end{figure} \begin{figure} \includegraphics [height = 2.1 in]{f4a.eps} \includegraphics [height = 2.1 in]{f4b.eps} \newline \includegraphics [height = 2.1 in]{f4c.eps} \includegraphics [height = 2.1 in]{f4d.eps} \newline \includegraphics [height = 2.1 in]{f4e.eps} \includegraphics [height = 2.1 in]{f4f.eps} \caption{Observed and simulated transit timing variations for Kepler-11 b, c and d, using transit time measurements from J.R. See the caption to Figure 2 for details. } \label{fig:JR1} \end{figure} \begin{figure} \includegraphics [height = 2.1 in]{f5a.eps} \includegraphics [height = 2.1 in]{f5b.eps} \newline \includegraphics [height = 2.1 in]{f5c.eps} \includegraphics [height = 2.1 in]{f5d.eps} \newline \includegraphics [height = 2.1 in]{f5e.eps} \includegraphics [height = 2.1 in]{f5f.eps} \caption{Observed and simulated transit timing variations for Kepler-11 e, f and g, using transit time measurements from J.R. See the caption to Figure 2 for details.} \label{fig:JR2} \end{figure} \begin{figure} \includegraphics [height = 2.1 in]{f6a.eps} \includegraphics [height = 2.1 in]{f6b.eps} \newline \includegraphics [height = 2.1 in]{f6c.eps} \includegraphics [height = 2.1 in]{f6d.eps} \newline \includegraphics [height = 2.1 in]{f6e.eps} \includegraphics [height = 2.1 in]{f6f.eps} \caption{Observed and simulated transit timing variations for Kepler-11 b, c and d, using transit time measurements from D.S. See the caption to Figure 2 for details. 
} \label{fig:DS1} \end{figure} \begin{figure} \includegraphics [height = 2.1 in]{f7a.eps} \includegraphics [height = 2.1 in]{f7b.eps} \newline \includegraphics [height = 2.1 in]{f7c.eps} \includegraphics [height = 2.1 in]{f7d.eps} \newline \includegraphics [height = 2.1 in]{f7e.eps} \includegraphics [height = 2.1 in]{f7f.eps} \caption{Observed and simulated transit timing variations for Kepler-11 e, f and g, using transit time measurements from D.S. See the caption to Figure 2 for details.} \label{fig:DS2} \end{figure} Our dynamical fitting of the planetary parameters minimizes residuals by adjusting parameters to search for a best fit, which is determined by a local minimum value of $\chi^2$. Uncertainties are based on the assumption that the shape of the $\chi^2$ surface is well-approximated by local gradients near the minimum, i.e., is shaped like a parabola. For multi-variate problems such as this, the dimensionality of phase space is large, and multiple minima typically exist. Furthermore, the low S/N of some lightcurves, particularly that of Kepler-11 b, makes the $\chi^2$ surface fairly rough, with many local minima. Thus, the minimum that the code finds need not be the global minimum, i.e., the best fit to the data. Even if it does converge to the global minimum, parameter sets that yield other minima with $\chi^2$ only slightly larger than that of the global minimum are almost as likely to approximate the true parameters of the system well. To qualitatively account for the increased uncertainty caused by these concerns, we combined the solutions from the three data sets by averaging their nominal values and defining error bars such that they extend over the entire range given by the union of the 1$\sigma$ confidence intervals of all three solutions; error bars are thus asymmetric.
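This combination rule (average the nominal values; let the error bars span the union of the individual 1$\sigma$ intervals) can be sketched as follows, with a hypothetical helper and synthetic numbers:

```python
def combine_solutions(fits):
    """fits: list of (value, err_minus, err_plus) from the three TT sets.
    Returns the mean value with asymmetric errors spanning the union of
    the individual 1-sigma intervals."""
    values = [v for v, _, _ in fits]
    mean = sum(values) / len(values)
    lo = min(v - em for v, em, _ in fits)   # lowest 1-sigma lower edge
    hi = max(v + ep for v, _, ep in fits)   # highest 1-sigma upper edge
    return mean, mean - lo, hi - mean       # value, -err, +err

val, em, ep = combine_solutions([(5.0, 1.0, 1.0),
                                 (6.0, 0.5, 2.0),
                                 (7.0, 1.0, 1.0)])
```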
Note that this gives fairly large ranges, and thus more conservative values than standard $1\sigma$ ranges; this compensates for shortcomings of Levenberg-Marquardt fitting in such a complex multi-parameter space. The principal results of our dynamical analysis are presented in Table~\ref{tbl-dyn}. These dynamical measurements are combined with estimates of the star's mass and radius to yield measurements of the planetary characteristics that we present in Section 5. We also performed fits to each of the three sets of TTs in which both the eccentricity and the mass of Kepler-11 g were allowed to float, as well as fits in which the mass of planet g was a free parameter but it was constrained to be on a circular orbit. In all six cases, the fits converged to values similar to those in our fits with planet g on a circular orbit at the nominal mass, albeit with large uncertainties in g's mass. When the eccentricity of planet g was allowed to float, all six fits were inferior (in a $\chi^2/$d.o.f. sense, where d.o.f. stands for degrees of freedom) to fits with g's parameters fixed. To constrain the mass of Kepler-11 g, we performed a suite of simulations using the same initial conditions as our best fit to each set of transit times (see Tables~\ref{tbl-EAbestfit}, \ref{tbl-JRbestfit}, and \ref{tbl-DSbestfit}). Eccentricities for all planets except g were allowed to float in these fits, but g's eccentricity was always fixed at zero, since eccentricity and mass are inversely correlated and our goal is to determine an upper bound on Kepler-11 g's mass. For each simulation, the mass of planet g was fixed, but since we are comparing simulations with differing masses of planet g, we are effectively allowing this parameter to vary, thereby adding one degree of freedom above those in our best fit models.
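The resulting grid scan over the fixed planet-g mass can be sketched as below; the $\chi^2$ values, degrees of freedom, and the $2\sigma$ threshold are hypothetical stand-ins (the true threshold follows from the F distribution), and the re-minimization over all other parameters is elided:

```python
def f_ratio(chi2, chi2_best, dof_best, delta_dof=1):
    """F-ratio for a chi^2 increase when varying one extra parameter."""
    return ((chi2 - chi2_best) / delta_dof) / (chi2_best / dof_best)

# toy scan: minimized chi^2 as a function of the fixed planet-g mass
masses = [10, 30, 50, 70, 90]            # units of 1e-6 M_star
chi2s = [501.0, 500.0, 501.5, 504.2, 509.0]
best = min(chi2s)
ratios = [f_ratio(c, best, dof_best=400) for c in chi2s]
# masses whose F-ratio stays below the (illustrative) 2-sigma threshold
allowed = [m for m, r in zip(masses, ratios) if r < 4.0]
```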
The F-ratio, defined as \begin{equation} {\rm F-ratio} = \frac{\Delta \chi^2/\Delta({\rm d.o.f.})}{\chi^2/({\rm d.o.f.})}, \label{eqn:f-ratio} \end{equation} describes the likelihood that a change in the minimum of $\chi^2$ could happen by chance given a change in the number of degrees of freedom, in our case, by varying the mass of Kepler-11 g (fixed within any given run, but changed from one run to another) between fits. Figure~\ref{fig:mass-g} shows the change in $\chi^2$ with variations in the mass of planet g. The $2\sigma$ limits constrain the mass of g, with a confidence of $95\%$, such that $M_{p}$(g) $\lesssim 70\times 10^{-6}~M_{\star}$ for two of the three datasets (the error bars in the third dataset, for which the mass constraint is looser, are likely to be significantly over-estimated; see Table 5 and associated text for details). \begin{figure} \includegraphics [height = 3.1 in]{f8.eps} \caption{The goodness of fit of our dynamical model to the observed TTs is shown as a function of the mass of planet Kepler-11 g. For each point, the $\chi^2$ minimum was found keeping the time of the first transit after epoch, orbital periods, eccentricities and masses as free variables for planets Kepler-11 b--f. For Kepler-11 g, the time of its first transit after epoch and its orbital period were free parameters, with its eccentricity fixed at zero, and its mass fixed in each numerical run. The vertical axis marks the F-ratio, described by Equation~(\ref{eqn:f-ratio}). Results are shown for the E.A. data with open green triangles, for the J.R. dataset in solid blue circles, and for the D.S. dataset in filled red squares. The horizontal lines mark the confidence intervals that $\chi^2$ is not elevated by chance. For the 2$\sigma$ limit, two of the datasets constrain the mass of planet g below $\sim 70 \times 10^{-6}M_{\star}$ with 95\% confidence.
(The dataset yielding weaker constraints appears to have overestimated uncertainties in measured TTs; see Table 5 and associated text.)} \label{fig:mass-g} \end{figure} We next consider the dynamical evolution of the Kepler-11 system using the parameters that we have derived and presented in Table 1. Our analysis treats the planets and star as point masses and neglects relativistic effects, so we do not need to know the sizes of the objects or the mass of the star for this analysis. One may ask whether as compact a planetary system as Kepler-11 is dynamically stable on gigayear timescales. We performed a numerical simulation of a system consisting of planets with masses and components of eccentricity equal to the nominal values in our best fit (Table~\ref{tbl-dyn}). The system remained bounded with no gross changes in orbital elements for the entire 250 Myr simulated. In contrast, an integration of a system with planetary masses and eccentricity components $1\sigma$ above the tabulated values became unstable after 1 Myr; note, however, that the tabulated uncertainties do not account for the anticorrelation between fitted masses and eccentricities of planets b and c, so the combination of $1\sigma$ high eccentricities and masses is highly unlikely based upon analysis of the short-term dynamics alone. The intermediate case of a system with planetary masses and eccentricity components $0.5\sigma$ above the tabulated values became unstable after 140 Myr; however, in addition to the caveats mentioned for the $1\sigma$ high integrations, we note that tidal damping (not included in our integrations) could well counter eccentricity growth in such a compact planetary system on $10^8$ year timescales. We also performed precise short-term integrations of the nominal system given in Table~\ref{tbl-dyn} for 10$^7$ days using a Bulirsch-Stoer code.
The eccentricities of each of the three low-mass planets, Kepler-11 b, c and f, varied from minima of $\sim 0.002$--$0.008$ to maxima between 0.04 and 0.05. The eccentricities of Kepler-11 d and e varied from values below 0.0006 to $\sim 0.013$. Kepler-11 g was included in these integrations, but it is weakly coupled to the other planets, and its eccentricity remained below 0.0006. We also ran an analogous integration with all planetary eccentricities initially set to zero. All eccentricities remained small, with peak values for the inner five planets in the range 0.0014 -- 0.0024. \section{Properties of the Star Kepler-11} \citet{liss11a} performed a standard SME spectroscopic analysis (\citealt{vp96,vf05}) of a high-resolution (R = 60,000) spectrum of Kepler-11 with a wavelength coverage of 360 -- 800 nm that was taken with the Keck I telescope at BJD = 2455521.7666 using the observing setup of the California Planet Search group \citep{mar08}. They derived an effective temperature, $T_{\rm eff} = 5680 \pm 100$ K, surface gravity, log $g = 4.3 \pm 0.2$ (cgs), metallicity, [Fe/H] = $0.0 \pm 0.1$ dex, and projected stellar equatorial rotation $v \sin i = 0.4 \pm 0.5$ km s$^{-1}$. Combining these measurements with stellar evolutionary tracks \citep{gir00, yi01} yielded estimates of the star's mass, $M_\star = 0.95 \pm 0.10$ M$_\odot$, and radius, $R_\star = 1.1 \pm 0.1$ R$_\odot$. We have performed new SME analyses of the same Keck spectrum and of another spectrum of comparable quality taken with the same system at BJD = 2455455.8028. The combined results (weighted mean values) are: $T_{\rm eff} = 5666 \pm 60$ K, surface gravity, log $g = 4.279 \pm 0.071$ (cgs), metallicity, [Fe/H] = $0.002 \pm 0.040$ dex, and projected stellar equatorial rotation $v \sin i = 3.86 \pm 0.85$ km s$^{-1}$.
These values, together with Yale-Yonsei stellar evolutionary tracks, yield estimates of the star's mass, $M_\star = 0.975 \pm 0.031$ M$_\odot$, radius, $R_\star = 1.193 \pm 0.115$ R$_\odot$, and age = 9.7 $\pm$ 1.5 Gyr. The TTV dynamical solution presented in Table~\ref{tbl-dyn} provides stringent constraints on the orbits of the inner five transiting planets. We used the computed values of the planets' $e \cos\omega$ and $e \sin\omega$ shown in Table~\ref{tbl-dyn} as constraints in our transit model to provide a geometrical determination of the stellar density, \rhostar. The transit model is similar to that described in Appendix A, but we also fit for $e\cos\omega$ and $e\sin\omega$ for each of the five inner planets. Posterior distributions for each model parameter were estimated using a Markov chain Monte Carlo (MCMC) algorithm similar to the one that is described in \citet{for05}, but augmented with a parameter buffer to allow jumps that account for correlated variables as described in Rowe et al.~(2013, in preparation). We produced four Markov chains, each with a length of 2,500,000. We ignored the first 40\% of each chain as burn-in and combined the remainder into one chain of length 6,000,000. We adopted the median value for each model parameter, which we list in Table~\ref{tbl-transit}. Since the dynamical model provides a good solution for the orbits of the planets from modeling of the TTVs, we reran the transit model and used the constraints on \ecosw\ and \esinw\ to estimate \rhostar. This translates into the tight constraint \rhostar\ = 1.122$^{+0.049}_{-0.060}$ g cm$^{-3}$. We combined this estimate of \rhostar\ with the new (weighted mean) SME spectroscopic parameters to determine the stellar mass and radius by fitting \teff, \logg\ and \feh\ to $M_{\star}$, age and heavy element mass fraction, $Z$, as provided by the Yale-Yonsei evolution models.
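The ``geometrical determination of the stellar density'' rests on the standard transit relation $\rho_\star \simeq (3\pi/G P^2)\,(a/R_\star)^3$, which follows from Kepler's third law for a circular orbit with $M_p \ll M_\star$. A quick consistency check against the tabulated planet-g values (illustrative only; eccentricity corrections are neglected):

```python
import math

G = 6.674e-8  # cgs units

def rho_star(period_days, a_over_rstar):
    """Stellar density (g/cm^3) from a transit-derived a/R* (circular orbit)."""
    p = period_days * 86400.0
    return 3.0 * math.pi / (G * p * p) * a_over_rstar**3

rho = rho_star(118.3807, 94.4)  # Kepler-11 g period and a/R* from the tables
# close to the rho_star = 1.122 g/cm^3 quoted in the text
```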
We used our MCMC algorithm to determine posterior distributions of the stellar parameters and adopted the median value for each parameter as listed in Table \ref{tbl-star}. Note that the star is slightly evolved, more than halfway through its lifetime on the main sequence. \begin{table}[h!] \begin{center} \begin{tabular}{|ccccccc|} \hline Planet & $R_p/R_{\star}$ & duration (h) & depth (ppm) & $b$ & $i$ ($^{\circ}$) & $a/R_{\star}$ \\ \hline b & 0.01563$^{+0.00018}_{-0.00023}$ & 4.116$^{+0.053}_{-0.078}$ & 301.3$^{+7.3}_{-7.9}$ & 0.116$^{+0.053}_{-0.116}$ & 89.64$^{+0.36}_{-0.18}$ & 18.55$^{+0.31}_{-0.23}$ \\ c & 0.02496$^{+0.00015}_{-0.00019}$ & 4.544$^{+0.033}_{-0.046}$ & 750.8$^{+6.8}_{-10}$ & 0.156$^{+0.059}_{-0.156}$ & 89.59$^{+0.41}_{-0.16}$ & 21.69$^{+0.37}_{-0.27}$ \\ d & 0.02714$^{+0.00018}_{-0.00019}$ & 5.586$^{+0.045}_{-0.079}$ & 885.0$^{+11}_{-11}$ & 0.181$^{+0.074}_{-0.084}$ & 89.67$^{+0.13}_{-0.16}$ & 31.39$^{+0.53}_{-0.39}$ \\ e & 0.03643$^{+0.00021}_{-0.00028}$ & 4.165$^{+0.019}_{-0.040}$ & 1333$^{+14}_{-14}$ & 0.763$^{+0.008}_{-0.008}$ & 88.89$^{+0.02}_{-0.02}$ & 39.48$^{+0.67}_{-0.49}$ \\ f & 0.02169$^{+0.00026}_{-0.00026}$ & 6.431$^{+0.082}_{-0.089}$ & 548$^{+12}_{-12}$ & 0.463$^{+0.030}_{-0.032}$ & 89.47$^{+0.04}_{-0.04}$ & 50.79$^{+0.86}_{-0.63}$ \\ g & 0.02899$^{+0.00022}_{-0.00032}$ & 9.469$^{+0.086}_{-0.122}$ & 1006$^{+15}_{-19}$ & 0.217$^{+0.092}_{-0.087}$ & 89.87$^{+0.05}_{-0.06}$ & 94.4$^{+1.6}_{-1.2}$ \\ \hline \hline \end{tabular} \caption{Transit constraints on the planets of Kepler-11, based on the dynamical models; $b$ signifies impact parameter, $i$ inclination of the orbit to the plane of the sky and $a$ the orbital semimajor axis.}\label{tbl-transit} \end{center} \end{table} \begin{table}[h!]
\begin{center} \begin{tabular}{||l|l||} \hline \hline $ M_{\star} (M_{\odot})$ & 0.961$^{+0.025}_{-0.025}$ \\ $ R_{\star} (R_{\odot})$ & 1.065$^{+0.017}_{-0.022}$ \\ $ L_{\star} (L_{\odot})$ & 1.045$^{+0.061}_{-0.078}$ \\ $T_{\rm eff}$ (K) & 5663$^{+55}_{-66}$ \\ \logg\ (cm s$^{-2}$) & 4.366$^{+0.014}_{-0.016}$ \\ $Z$ & 0.0182$^{+0.0015}_{-0.0017}$ \\ $\rho_{\star}$ (g cm$^{-3}$) & 1.122$^{+0.049}_{-0.060}$ \\ Age (Gyr) & 8.5$^{+1.1}_{-1.4}$ \\ \hline \hline \end{tabular} \caption{The characteristics of the star Kepler-11, with 1$\sigma$ uncertainties.}\label{tbl-star} \end{center} \end{table} We also conducted a search for spectral evidence of a companion star. We began by fitting the observed spectrum of Kepler-11 obtained on BJD = 2455521.7666 (UT = 21 Nov 2010) with the closest-matching (in a $\chi^2$ sense) member of our library of 800 stellar spectra. The stars in our library have $T_{\rm eff} = 3500 - 7500$ K and $\log g = 2.0 - 5.0$, which spans FGK and early-M main-sequence and subgiant stars. All library stars have accurate parallax measurements, allowing for good estimates of stellar mass and radius for each. The Kepler-11 spectrum is placed on a common wavelength scale and normalized in intensity. The $\chi^2$ value is then calculated as the sum of the squares of the differences between the Kepler-11 spectrum and each library spectrum. The final stellar properties are determined by the weighted mean of the ten library spectra with the lowest $\chi^2$ values. We adopt errors in each parameter by comparing results for standard stars. The closest-matching spectrum is modified superficially by removing the Doppler shift relative to the observed spectrum, applying any needed artificial rotational broadening, setting the continuum normalization, and diluting the line strengths (due to a possible secondary star), thereby achieving a best-fitting spectrum that can be subtracted from the observed spectrum to yield residuals.
We search for secondary stars by taking the residuals to that first spectral fit and performing the same $\chi^2$ search for a ``second'' spectrum that best fits those residuals; details will be presented in Kolbl et al.~(in preparation). Our approach assumes that spectra are single, until proven double, rather than immediately doing a self-consistent two-spectrum fit. This stems from an Occam's razor perspective; the notion is that if the target's spectrum is adequately fit by a single library spectrum, without need to invoke a second spectrum, then the target's spectrum can only be deemed single. A minimum in $\chi^2$ as a function of Doppler shift for the fit of any library spectrum (actually a representative subset of them) to the residuals serves to indicate the presence of a second spectrum. We adopt a detection threshold that is approximately a $3 \sigma$ detection of the secondary star. We find no stellar companion to Kepler-11 within $0.4\arcsec$ of the primary star, corresponding to half of the slit-width ($0.87\arcsec$) of the Keck-HIRES spectrometer. The detection threshold for any companion star depends on the RV separation between the primary star and the putative secondary star. For all RV separations greater than 20 km s$^{-1}$, we would detect (at $3\sigma$) any companions that are 2\% as bright (in the optical) as the primary star. For RV separations of 10 km s$^{-1}$, the detection threshold rises to 3\% as bright as the primary star, and for RV separations smaller than 10 km s$^{-1}$, the detection threshold rises rapidly to unity for FGK stars, but remains at 3\% for M dwarfs due to their very different spectra. The poor detectability of FGK-type companion stars having little Doppler offset is caused by overlap of the absorption lines. Speckle images for Kepler-11 show no nearby star. 
Neighbors located in an annulus from 0.05 to $0.7\arcsec$ from Kepler-11 would have been detected if their brightness were within 3 magnitudes in either V or I band, and those between 0.7 and $1.9\arcsec$ distant would have been seen down to a magnitude difference of 4 in either band.
\section{Properties of the Planets Orbiting Kepler-11}
\begin{table}[h!]
\begin{center}
\begin{tabular}{|ccccccc|}
\hline \hline
Planet & Mass ($M_{\oplus}$) & Radius ($R_{\oplus}$) & Density (g cm$^{-3}$) & $a$ (AU) & $e$ & Flux ($F_{\odot, 1AU}$) \\
\hline
b & \textbf{1.9}$^{+1.4}_{-1.0}$ & \textbf{1.80}$^{+0.03}_{-0.05} $ & \textbf{1.72}$^{+1.25}_{-0.91}$ & \textbf{0.091}$^{+0.001}_{-0.001} $ & \textbf{0.045}$^{+0.068}_{-0.042}$ & \textbf{125.1} \\
c & \textbf{2.9}$^{+2.9}_{-1.6}$ & \textbf{2.87}$^{+0.05}_{-0.06}$ & \textbf{0.66}$^{+0.66}_{-0.35}$ & \textbf{0.107}$^{+0.001}_{-0.001} $ & \textbf{0.026}$^{+0.063}_{-0.013}$ & \textbf{91.6} \\
d & \textbf{7.3}$^{+0.8}_{-1.5}$ & \textbf{3.12}$^{+0.06}_{-0.07} $ & \textbf{1.28}$^{+0.14}_{-0.27}$ & \textbf{0.155}$^{+0.001}_{-0.001} $ & \textbf{0.004}$^{+0.007}_{-0.002}$ & \textbf{43.7} \\
e & \textbf{8.0}$^{+1.5}_{-2.1}$ & \textbf{4.19}$^{+0.07}_{-0.09} $ & \textbf{0.58}$^{+0.11}_{-0.16}$ & \textbf{0.195}$^{+0.002}_{-0.002} $ & \textbf{0.012}$^{+0.006}_{-0.006}$ & \textbf{27.6} \\
f & \textbf{2.0}$^{+0.8}_{-0.9}$ & \textbf{2.49}$^{+0.04}_{-0.07} $ & \textbf{0.69}$^{+0.29}_{-0.32}$ & \textbf{0.250}$^{+0.002}_{-0.002} $ & \textbf{0.013}$^{+0.011}_{-0.009}$ & \textbf{16.7} \\
g & $<$ 25 & \textbf{3.33}$^{+0.06}_{-0.08} $ & $<$ 4 & \textbf{0.466}$^{+0.004}_{-0.004} $ & $<$ 0.15 & \textbf{4.8} \\
\hline \hline
\end{tabular}
\caption{\label{tbl-planet}The planets of Kepler-11. The mass and eccentricity of Kepler-11 g are 2$\sigma$ upper bounds.
All other uncertainties are 1$\sigma$ confidence intervals.}
\end{center}
\end{table}
\begin{figure*}[h!]
\begin{center}
\includegraphics[width=7.0in,height=5.0in]{f9.eps}
\end{center}
\caption{Updated mass-radius diagram for transiting exoplanets with measured masses, along with curves for different compositions. Planets are color-coded by the incident bolometric flux that they receive. \ik planets are shown by circles, filled for Kepler-11, open for others, with numbers and letters indicating each planet. Other known exoplanets in this mass and radius range are shown by open squares; in order of increasing radius, these are CoRoT-7 b, 55 Cancri e, GJ 1214 b and GJ 3470 b. Solar System planets Venus and Uranus are shown by black letters. The solid black curve is for an Earth-like composition with 2/3 rock and 1/3 iron by mass. All other curves use thermal evolution calculations \citep{lopez12}, assuming a volatile envelope atop a core of rock and iron with composition the same as that of the bulk Earth. The dashed blue curve is for 50\% water by mass, and the solid blue curve is for a pure H$_2$O planet. The dotted orange curves are for H/He envelopes at 8 Gyr; each one is tailored to match a Kepler-11 planet and is computed at the appropriate flux for that planet.\label{mrfig}}
\end{figure*}
\begin{figure*}[h!]
\begin{center}
\includegraphics[width=7.0in,height=5.0in]{f10.eps}
\end{center}
\caption{Updated version of the mass loss threshold diagram from \citet{lopez12}. Bolometric flux at the top of the atmosphere, relative to the flux incident on Earth, is plotted against the product of planet mass and planet density. Again, the Kepler-11 planets are shown by filled circles. Open squares show the other extrasolar planets with masses $< 15~M_{\mathrm{\oplus}}$, while crosses show all other transiting planets with measured masses up to 100 $M_{\mathrm{\oplus}}$.
Planets are color coded by the percentage of their mass in their H/He envelopes, $f_{envelope}$, according to thermal evolution models. Potentially rocky planets are rust colored. The dashed black line shows the critical mass loss timescale found by \citet{lopez12}. \label{thresholdfig}} \end{figure*} Combining our dynamical results (as presented in Table~\ref{tbl-dyn} plus upper bounds on the mass of Kepler-11 g illustrated in Figure~\ref{fig:mass-g}) with transit parameters of all planets given in Table~\ref{tbl-transit}, bounds on planet g's eccentricity from transit models, and the stellar characteristics listed in Table~\ref{tbl-star}, we derive the planetary parameters shown in Table~4. The nominal mass values of planets Kepler-11 d, e and f derived herein are within $1 \sigma$ error bars of the preferred fit presented by \citet{liss11a}, and the newly-estimated masses of Kepler-11 b and c are within $2 \sigma$ of their values; the various fits presented by \citet{mig12} are of comparable accuracy. The major differences from the results presented by \citet{liss11a} are that the planetary radii are $\sim 10\%$ smaller than previously estimated, and planets Kepler-11 b and especially c are less massive than estimates computed with Q1-Q6 data, resulting in the nominal masses monotonically increasing with planetary radii rather than the inner pair of planets being more dense than the outer ones. Despite the reductions in size estimates, all planets are large for their masses in the sense that they lie above both the $M_p/M_\oplus \approx (R_p/R_\oplus)^{2.06} $ relationship that is valid for planets in our Solar System \citep{liss11b} and mass--radius fits to exoplanets \citep{wu12, weis13}. The six planets in Kepler-11 are all substantially less dense than an iron-free rocky planet, a characteristic already noted for the five inner planets by \citet{liss11a} and \citet{lopez12}, and which now can be stated with even greater (statistical) significance. 
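The comparison with the Solar System relation can be checked directly from the tabulated values; the short script below is our own illustration (not the paper's code), using the nominal masses and radii from Table~4:

```python
# Check that each Kepler-11 planet with a measured mass lies above the
# Solar System relation M/M_Earth = (R/R_Earth)^2.06, i.e. its measured
# mass falls below the mass that relation assigns to its radius.
# Nominal values from Table 4: name -> (M in M_Earth, R in R_Earth).
planets = {
    "b": (1.9, 1.80),
    "c": (2.9, 2.87),
    "d": (7.3, 3.12),
    "e": (8.0, 4.19),
    "f": (2.0, 2.49),
}
for name, (mass, radius) in planets.items():
    m_rel = radius ** 2.06  # mass the Solar System relation predicts
    assert mass < m_rel, name
    print(f"Kepler-11 {name}: M = {mass} M_E, R^2.06 = {m_rel:.1f} M_E")
```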
As a result, they must have substantial envelopes of light components, most likely dominated by the cosmically-abundant constituents H$_2$, He, and/or H$_2$O. In order to understand these envelopes, we use the thermal evolution models described in detail in \citet{lopez12}. This allows us to determine the size of the H/He envelope for each planet, assuming an Earth-like rock/iron core. Figure \ref{mrfig} plots an updated version of the mass--radius diagrams shown in \citet{liss11a} and \citet{lopez12}. We include all transiting planets with measured masses $M_p < 15~M_{\mathrm{\oplus}}$. For comparison, we include mass--radius curves for Earth-like, 50\% water, and 100\% water compositions. In addition, for each of the five Kepler-11 planets whose mass has been measured, we include a mass--radius curve at the composition (H/He envelope mass fraction) and incident flux of that planet. The new masses imply that Kepler-11 c is less massive than if it were composed of pure water, meaning that it must have a large H/He envelope. However, Kepler-11 b can still be explained by either an H/He or a steam envelope on top of a rocky core. If we assume that Kepler-11 b's envelope is water rather than H/He, then this planet would be $59^{+39}_{-30}\%$ water by mass. The envelope would be composed of steam, since planets like Kepler-11 b are far too irradiated for their interiors to include liquid or high-pressure ice phases. Most of the H$_2$O would be in the vapor and molecular fluid phases, with the ionic fluid and plasma phases occurring at high pressures deep within these planets (\citealt{nettel08,nettel11}).
For mixtures of rock with H/He (no H$_2$O), and using the sizes and masses presented in Table~4, we find that Kepler-11 b is currently $0.5^{+0.5}_{-0.4}\%$ H/He, Kepler-11 c is $5.0^{+1.1}_{-0.8}\%$ H/He, Kepler-11 d is $6.6^{+1.3}_{-1.2}\%$ H/He, Kepler-11 e is $15.7^{+1.7}_{-1.7}\%$ H/He, and Kepler-11 f is $4.0^{+1.0}_{-0.7}\%$ H/He by mass. The quoted uncertainties include the measured uncertainties on each planet's mass, radius, incident flux, and age as well as theoretical uncertainties on the albedo and the iron fraction of the rock/iron core \citep{marcus10}. Despite the small mass fractions in light gases, the presence of these H/He envelopes is key to the observed radii. One way to emphasize this fact is to compare each planet's radius to the radius of its rocky core, as determined by our thermal evolution models for planets lacking H$_2$O. For every Kepler-11 planet whose mass has been measured except for b, approximately half of the observed radius is due to its H/He envelope. The cores make up 46\%, 54\%, 40\%, and 48\% of the total radii of planets Kepler-11 c, d, e, \& f, respectively, and thus only 6 -- 16\% of the volume. Moreover, even for Kepler-11 b, the rocky core only makes up 66\% of the total radius, corresponding to 29\% of this planet's volume. In addition, we have included an updated version of the mass loss threshold diagram presented in \citet{lopez12}. Figure \ref{thresholdfig} plots incident flux against the product of planet mass and planet density. Diagonal lines (i.e., lines with slope = 1) in this space correspond to constant mass loss timescales for a specified mean molecular weight of escaping gas, making this diagram useful for understanding how the population of highly-irradiated planets has been sculpted by photoevaporation \citep{lecav07}. Here we have color-coded planets by the fraction of their mass in the H/He envelope, assuming an Earth-like core.
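Since volume scales as the cube of radius, the quoted core radius fractions translate directly into the quoted volume fractions; a quick arithmetic check (our own, not from the paper's code):

```python
# Core radius fractions quoted in the text for Kepler-11 b-f.
core_radius_fraction = {"b": 0.66, "c": 0.46, "d": 0.54, "e": 0.40, "f": 0.48}
for name, f_r in core_radius_fraction.items():
    f_v = f_r ** 3  # volume fraction = (radius fraction)^3
    print(f"Kepler-11 {name}: core is {f_r:.0%} of the radius, {f_v:.1%} of the volume")
# b gives ~29% of the volume; c-f all fall in the quoted 6-16% range.
```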
Four known exoplanets are dense enough to be composed of bare rock (this list includes Kepler-20 b, whose large error ellipse in the mass-radius plane is mostly outside of the rocky composition zone); these planets are shown as rust colored. The key feature of Figure \ref{thresholdfig} is that there is a critical mass loss timescale above which there are no planets with significant H/He envelopes. The dashed black line shows the critical mass loss timescale found by \citet{lopez12}. The existence of such a mass loss threshold is a robust prediction of planet evolution models that include photoevaporation (\citealt{owen12,lopez12}). The three planets that lie above this threshold in the upper right are Kepler-10 b \citep{bat11}, CoRoT-7 b (\citealt{leger09,que09,hat11}), and 55 Cancri e (\citealt{win11,dem11}), none of which are expected to have H/He envelopes. With the newly-estimated masses, Kepler-11 b and c are clearly highly vulnerable to photoevaporation; in fact they both lie on the critical mass loss timescale identified by \citet{lopez12}. On the other hand, planets Kepler-11 d and e have predicted mass loss rates a factor of a few below this threshold. However, this does not mean that these planets have not experienced significant mass loss. Using the original discovery masses, \citet{lopez12} showed that planets Kepler-11 d and e could have lost at least half of their initial H/He envelopes. Moreover, the assumption of a single critical mass loss timescale is only a rough approximation. The efficiency of photoevaporative mass loss changes as a function of irradiation and stellar age \citep{owen12}. In particular, more irradiated planets like Kepler-11 b and c lose more energy to radiation and recombination-driven cooling, resulting in lower mass loss efficiencies and thus a higher threshold in Figure~\ref{thresholdfig} \citep{mur09}. This is one possible explanation for why the planets in Kepler-11 do not lie along a single mass-loss timescale.
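In the energy-limited picture, the mass loss timescale scales roughly as $t_{\rm loss} \propto M_p \rho_p / F$, which is why diagonal lines of slope 1 in this diagram are lines of constant timescale. A scaling-only comparison (arbitrary normalization; a sketch, not the \citet{lopez12} calculation) using the Table~4 values:

```python
# Relative (unnormalized) energy-limited mass loss timescale, t ~ M * rho / F.
# Smaller values mean greater vulnerability to photoevaporation.
def relative_t_loss(mass_earth, density_cgs, flux_earth):
    return mass_earth * density_cgs / flux_earth

t_b = relative_t_loss(1.9, 1.72, 125.1)  # Kepler-11 b (Table 4 values)
t_d = relative_t_loss(7.3, 1.28, 43.7)   # Kepler-11 d
assert t_b < t_d  # b sits closer to the mass loss threshold than d
```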
\section{Conclusions} We have performed an updated analysis of the Kepler-11 planetary system, concentrating on the dynamical interactions evident in transit timing variations observed in the first 40 months of \ik photometric data. We have also improved our estimates of the characteristics of the star by combining stellar density constraints from transit profiles and dynamical measurements of planetary eccentricity with spectral information obtained at the Keck observatory. Our updated transit, stellar and planetary parameters are presented in Tables~\ref{tbl-transit}, \ref{tbl-star} and 4, respectively. The six planets observed to transit Kepler-11 all have small orbital eccentricities. None is dense enough to be composed entirely of rocky material, and at least the four middle planets must contain volumetrically-significant envelopes of gases less dense than H$_2$O. Planets Kepler-11 b and f, and nominally c as well, are less massive than any other exoplanets for which both mass and radius have been measured. The planetary parameters are consistent with a monotonic increase in mass as a function of radius, although as Figure~\ref{mrfig} illustrates, the Kepler-11 planets are less massive for a given radius than most other planets with mass and radius measurements. \acknowledgments {\it Kepler} was competitively selected as the tenth Discovery mission. Funding for this mission is provided by NASA's Science Mission Directorate. E.~A.'s work was supported by NSF CAREER grant AST-0645416. W.~F.~W.~gratefully acknowledges support from the \ik Participating Scientist Program via NASA grant NNX12AD23G, and from the NSF via grant AST-1109928. D.~J. gratefully acknowledges a Fellowship from the NASA Postdoctoral Program. We thank Jerome Orosz and Gur Windmiller for assistance in developing D.S.'s method for measuring transit times and Tony Dobrovolskis, Darin Ragozzine and Billy Quarles for helpful comments on the manuscript.
\section{ Appendix A: Techniques used to Measure Transit Times} We measured transit times using three different techniques, each of which is described below. \subsection{TT Measurements by Jason Rowe} This analysis used Q1 -- Q14 long cadence and Q3 -- Q14 short cadence \ik simple aperture photometry (labeled SAP\_FLUX). Only data with a quality flag set to zero as documented in the \ik data release notes were used. This provided 52,539 and 1,464,980 long and short cadence photometric measurements, respectively. The data were initially detrended using a running 2-day box-car median filter that was applied to individual segments of time-series photometry. A segment was defined as a continuous string of time-series data that does not contain an interruption longer than 2.5 hours (5 long cadence measurements). This was done to handle offsets observed after data outages, typically caused by a change in the thermal environment of the CCD detector. A circular quadratic transit model based on \citet{man02} was fit to the data by minimization of $\chi^2$ with a Levenberg-Marquardt algorithm. The transit model was used to measure the transit duration for each transiting planet. The original SAP\_FLUX photometric data were then reprocessed using a second-order polynomial to detrend the time-series to remove instrumental (such as focus changes) and astrophysical effects. All data obtained during transit were excluded, as well as those taken in the 30 minutes before ingress and in the 30 minutes after egress. A clipping algorithm was used to exclude any measurement that differed from the mean by more than $3\sigma$. Measurements obtained during a planet transit were excluded from the clipping exercise. It was found that the data before a data outage near JD = 2455593 could not be sufficiently detrended. As such, data from 2455593 to 2455594.5 were excluded, which meant that a transit of Kepler-11 g was not included in our analysis. 
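The segment-wise running-median detrending described above can be sketched as follows. This is a simplified, deliberately naive $O(N^2)$ illustration with hypothetical function names, not the pipeline implementation:

```python
import numpy as np

def detrend_median(time, flux, window=2.0, max_gap=2.5 / 24.0):
    """Split a light curve into segments at gaps longer than max_gap (days),
    then divide each point by the running median in a box-car of full width
    `window` days, as in the detrending step described above."""
    flat = np.empty_like(flux)
    breaks = np.where(np.diff(time) > max_gap)[0] + 1  # segment boundaries
    for seg in np.split(np.arange(len(time)), breaks):
        for i in seg:
            near = seg[np.abs(time[seg] - time[i]) <= window / 2.0]
            flat[i] = flux[i] / np.median(flux[near])
    return flat
```

Transit windows would additionally be masked before computing the medians, as the text describes.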
The detrended LC and SC photometric time-series were then each fit with a multi-planet, circular orbit, quadratic Mandel \& Agol transit model. The model parameters are the mean stellar density (\rhostar), photometric zero point, and, for each planet, the center of transit time, orbital period, impact parameter, and scaled planetary radius (\rprs). The model assumes that the mass of the star is much greater than the combined mass of the orbiting planets, so that \begin{equation}\label{eq:rhostar} \left(\frac{a}{R_\star}\right)^3 \frac{3\pi}{G P^2} = \frac{(M_\star+M_p)}{\frac{4 \pi}{3}R_\star^3} \approx \rhostar. \end{equation} A photometric time-series for each transiting planet was then produced by removing the transits of the other transiting planets. The remaining transits were then individually fit by using the best-fit model as a template and only allowing the center of transit time to vary. This yielded a time-series of transit timing variations (TTVs) for each planet. The measured TTVs were then used to linearize (or {\it deTTV}) the photometry, such that when folded at the orbital period the transits are aligned in the resulting lightcurve. The multi-planet transit model was then refit to the {\it deTTVed} lightcurve, and the updated template was used to determine the final set of TTVs shown by the green points in Figure 1. Uncertainties in the transit times were determined by examining the residuals from the fits to each individual transit and scaling the photometric errors such that the reduced $\chi^2$ was equal to one. The diagonal elements of the covariance matrix were adopted as the uncertainty in the measurement. \subsection{TT Measurements by Eric Agol} The times of transit were fit using a quadratic limb-darkening model in which the duration and impact parameter for each planet were assumed to be fixed, while the times of each transit were allowed to vary.
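As a numerical sanity check of the density constraint in Eq.~(\ref{eq:rhostar}) above (circular orbit, $M_p \ll M_\star$; cgs constants, function name is ours):

```python
import math

G_CGS = 6.674e-8  # gravitational constant, cm^3 g^-1 s^-2

def stellar_density(a_over_rstar, period_days):
    """Mean stellar density (g cm^-3) implied by the scaled semimajor axis
    and the orbital period: rho ~ (a/R*)^3 * 3*pi / (G * P^2)."""
    p_sec = period_days * 86400.0
    return a_over_rstar ** 3 * 3.0 * math.pi / (G_CGS * p_sec ** 2)

# Kepler-11 b: a/R* ~ 18.55 and P ~ 10.3039 d give rho ~ 1.1 g cm^-3,
# consistent with the 1.122 g cm^-3 stellar density quoted in the text.
rho = stellar_density(18.55, 10.3039)
```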
The model was computed simultaneously for the short cadence (when available) and long cadence data (otherwise). A window equal to one transit duration was included before and after every transit. The lightcurve was divided by the model (computed for all planets simultaneously so that overlapping transits were properly accounted for), and then fit with a third-order polynomial for each contiguous data set (without gaps larger than 12 hours). The model parameters were optimized until a best fit was found; a second iteration was carried out after outliers from the first fit were rejected. After finding the best fit, the times of each and every transit were allowed to vary over a grid of values spanning (typically) about 2 hours on either side of the best fit time. The variation in $\chi^2$ with transit time was then fit with a quadratic function to measure the uncertainty in the transit time. If that fit failed, then the transit time error was measured from the width of the $\chi^2$ function for values less than one above the best fit value. \subsection{TT Measurements by Donald Short} In contrast to the Rowe and Agol methods, a purely mathematical technique was used to determine the transit times, under the assertion that the time of a transit event can be estimated without need of a physical model of the event. Under conditions of poor signal-to-noise ratio or undersampling, the constraints imposed by a physical model are extremely valuable. For high signal-to-noise cases, however, a non-physical model can match, or even outperform, a physical model under certain conditions. The limitations in a physical model, such as imperfect limb darkening parameterization or assumed zero eccentricity, have no consequence in a non-physical model. Since no assumptions about sphericity, obliquity, gravity darkening, strict Keplerian motion, etc., were made, the method is insensitive to errors in these physical parameters or effects.
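The $\Delta\chi^2 = 1$ estimate of the transit-time uncertainty described in the Agol analysis above amounts to reading off the curvature of a parabola fit to $\chi^2(t)$; a minimal sketch (our own function name and synthetic data):

```python
import numpy as np

def tt_uncertainty(times, chi2):
    """Fit chi^2(t) with a quadratic and return the 1-sigma transit-time
    uncertainty, i.e. the shift that raises chi^2 by one above the minimum:
    chi2 ~ chi2_min + c2*(t - t0)^2  =>  sigma = 1 / sqrt(c2)."""
    c2, c1, c0 = np.polyfit(times, chi2, 2)  # requires c2 > 0 (a real minimum)
    return 1.0 / np.sqrt(c2)

# Synthetic check: chi2 = 10 + ((t - 0.3)/0.05)^2 should give sigma = 0.05.
t = np.linspace(0.1, 0.5, 41)
sigma = tt_uncertainty(t, 10.0 + ((t - 0.3) / 0.05) ** 2)
```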
Both LC and SC data were used in computing the planetary transit time estimates, provided the pipeline data quality flag had the nominal value of zero. The TTs were estimated by an iterative method starting with the SC data. Using an estimate of the transit duration and estimates of the transit times based on the linear ephemeris from \citet{liss11a}, each transit was locally detrended. Detrending employed a low-order polynomial centered on the transit and extending symmetrically either 0.3, 0.6, or 0.83 days beyond the ends of the transit; the length and polynomial order that provided the best fit to these out-of-transit data was selected. During this process, each transit was checked for missing data and overlapping transits from other planets that could compromise the determination of that TT. Transits that had such problems were eliminated from further consideration. After detrending, the transits were shifted in time so that the center of each transit was at time zero. All of the transits were then combined (``stacked'' or ``folded'' on top of each other). A piecewise cubic Hermite spline (PCHS) was then least-squares fit to the combined-transit lightcurve, giving a transit template. The transit template was generated by the data themselves; no physical constraints on its shape were imposed. As such, it should be an excellent match to the observed transits. From this template, a refined transit width was estimated and used to revise the detrending of each transit. The template was then correlated with each individual transit, yielding improved TTs. Any outliers with respect to the template were flagged and eliminated from further template building, but no rejections were made when estimating the individual TTs. The detrended transits were shifted (folded) on the revised TTs, combined, and a new PCHS template generated. Again, the individual transits were then detrended, now using both the revised duration and revised transit times. 
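The template correlation step, sliding the empirical template against each detrended transit to find the best time shift, can be sketched as a grid search. This is our own simplified stand-in (least squares over a shift grid, with an assumed search window), not the exact correlation used in the analysis:

```python
import numpy as np

def best_shift(template_t, template_f, transit_t, transit_f, half_window=0.05):
    """Return the time shift (days) of the template that best matches one
    detrended transit, by minimizing summed squared residuals over a grid."""
    shifts = np.linspace(-half_window, half_window, 201)
    cost = [np.sum((np.interp(transit_t, template_t + s, template_f)
                    - transit_f) ** 2) for s in shifts]
    return shifts[int(np.argmin(cost))]
```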
The detrended transits were correlated with the revised template, yielding a refined set of TTs. Three iterations of this process were carried out. The uncertainty in each TT was estimated from the shifts in time needed to degrade the $\chi^2$ fit of the template to the transit by one. For transits with LC data only, the SC PCHS template was convolved to 30 minutes, yielding the LC template. The LC template was then correlated with each transit, providing a correction to the times from the initial linear ephemeris. The revised TTs were used to improve the detrending window, but the template was not updated; it was held fixed at the shape derived from the SC template. This process iteratively produced measurements of the TTs, uncertainties, and model fits for each transit. Finally, those TTs that had large timing error estimates ($>$40 minutes) were eliminated from the final list of TTs. The process above was repeated independently for each planet, noting that overlapping transits from different planets were discarded. In general, the TTs computed by this method agree quite well with those from the physical methods; however, the error estimates are notably larger. \section{ Appendix B: Details of Dynamical Models} Here we present the results of our dynamical models in detail. We carried out three classes of fit using each set of TTs. In the ``all-circular'' class, all planets were assumed to travel on circular orbits at epoch. In the ``all-eccentric'' class, all planets were allowed to have eccentric orbits at epoch. We found that the quality of these fits was not sensitive to the mass or eccentricity of planet Kepler-11 g as long as these were not too large, so we performed ``g-fixed'' fits wherein the eccentricity of planet g is set to zero at epoch and its mass set to $25.3\times10^{-6}~M_{\star}$ (which equals $8~M_{\oplus}$ for an assumed stellar mass of $0.95~M_{\odot}$, as estimated by \citealt{liss11a}).
Table \ref{tbl-fits} compares the quality of fit across the various data sets and fitting assumptions. Note that comparisons of the numerical values between fits using different data sets are not meaningful because of the differing prescriptions employed to compute the uncertainties of individual TTs, but comparison of the reduced $\chi^2$ for the all-circular, all-eccentric, and g-fixed results using a given set of TTs shows that eccentricities are detected for the five inner planets but not for planet g. As the quality of the all-circular fits is distinctly inferior to that of fits that allow at least the five inner planets to travel on eccentric orbits, we do not consider the all-circular fits further.
\begin{table}
\begin{center}
\begin{tabular}{|c|ccc|ccc|ccc|}
\hline
Planet & \multicolumn{3}{c|}{All circular} & \multicolumn{3}{c|}{All eccentric} & \multicolumn{3}{c|}{g-fixed (Tables \ref{tbl-EAbestfit},~\ref{tbl-JRbestfit},~\ref{tbl-DSbestfit})} \\
 & $\chi^2_{EA}$ & $\chi^2_{JR}$ & $\chi^2_{DS}$ & $\chi^2_{EA}$ & $\chi^2_{JR}$ & $\chi^2_{DS}$ & $\chi^2_{EA}$ & $\chi^2_{JR}$ & $\chi^2_{DS}$ \\
\hline
b & 229.62 & 100.36 & 41.61 & 189.49 & 86.59 & 36.16 & 189.43 & 86.61 & 36.14 \\
c & 280.39 & 111.72 & 72.30 & 211.18 & 79.71 & 50.15 & 211.21 & 79.66 & 50.14 \\
d & 123.25 & 51.68 & 15.63 & 100.95 & 45.08 & 14.78 & 101.02 & 45.08 & 14.79 \\
e & 78.47 & 53.97 & 22.06 & 46.47 & 29.08 & 12.03 & 46.73 & 29.24 & 12.06 \\
f & 120.79 & 58.09 & 38.70 & 52.03 & 16.81 & 12.73 & 54.10 & 17.24 & 13.06 \\
g & 9.84 & 6.71 & 3.69 & 10.08 & 6.69 & 3.62 & 9.61 & 6.66 & 3.64 \\
\hline
total & 842.16 & 382.53 & 193.99 & 610.20 & 263.97 & 129.46 & 612.09 & 264.49 & 129.83 \\
$\chi^2/$(d.o.f.) & 3.25 & 1.38 & 0.82 & 2.47 & 1.00 & 0.58 & 2.46 & 0.99 & 0.57 \\
\hline
\end{tabular}
\caption{$\chi^2$ contributions from each planet for a suite of models against each of the three sets of transit times.
The second through fourth columns show best fits to an orbital configuration with all eccentricities fixed at zero, the fifth through seventh columns show all-eccentric fits, and the eighth through tenth columns show the g-fixed models whose results are shown in Tables \ref{tbl-EAbestfit}, \ref{tbl-JRbestfit}, and \ref{tbl-DSbestfit}.\label{tbl-fits}} \end{center} \end{table} As shown in Table \ref{tbl-fits}, the g-fixed fits, which are presented in Tables \ref{tbl-EAbestfit}, \ref{tbl-JRbestfit} and \ref{tbl-DSbestfit}, are of slightly better quality (in a $\chi^2/$d.o.f. sense) than are the corresponding all-eccentric fits. Thus, the parameters from the three g-fixed fits are synthesized to incorporate the full ranges of all $1\sigma$ error bars from fits to each set of data and displayed as our primary results in Table~\ref{tbl-dyn}. Table \ref{tbl-allecc} is the counterpart of Table~\ref{tbl-dyn}, synthesizing the all-eccentric fit results of the three sets of transit time data. The small values (compared to unity) of $\chi^2/$(d.o.f.) shown in Table~\ref{tbl-fits} for fits to D.S.'s TTs imply that the uncertainties quoted for these TTs were overestimated. Similarly, the large values of $\chi^2/$(d.o.f.) for E.A.'s TTs strongly suggest that these uncertainties were underestimated. The values of $\chi^2/$(d.o.f.) near unity for both fits allowing eccentric planetary orbits to J.R.'s TTs suggest that uncertainties in these TTs may have been slightly overestimated.
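One plausible reading of synthesizing parameters "to incorporate the full ranges of all $1\sigma$ error bars" is to take the envelope of the per-dataset intervals. The snippet below illustrates that reading (our interpretation, not the paper's code), using the Kepler-11 b mass ratio ($\times 10^{-6}~M_\star$) from the three g-fixed fits in Tables \ref{tbl-EAbestfit}--\ref{tbl-DSbestfit}:

```python
# (value, symmetric 1-sigma) for Kepler-11 b, M_p/M_star x 10^-6, from the
# E.A., J.R., and D.S. g-fixed fits respectively.
fits = [(3.91, 1.03), (6.80, 2.16), (6.80, 3.28)]
lo = min(v - s for v, s in fits)  # lowest 1-sigma lower bound
hi = max(v + s for v, s in fits)  # highest 1-sigma upper bound
print(f"synthesized 1-sigma envelope: {lo:.2f} to {hi:.2f}")
```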
\begin{table} \begin{center} \begin{tabular}{|cccccc|} \hline \hline Planet & $P$ (days) & $T_{0}$ & $e \cos \omega$ & $e \sin \omega$ & $M_{p}/M_{\star} \times 10^{-6}$ \\ \hline b & \textbf{10.3043}$\pm 0.0002$ & \textbf{689.7378}$\pm 0.0009 $ & $0.038\pm 0.016$ & $0.009 \pm 0.008 $ & \textbf{3.91}$ \pm 1.03 $ \\ c & \textbf{13.0236}$\pm 0.0003$ & \textbf{683.3494}$\pm 0.0005 $ & $0.019\pm 0.014$ & $-0.005 \pm 0.004 $ & \textbf{6.23}$ \pm 1.75 $ \\ d & \textbf{22.6839}$\pm 0.0003$ & \textbf{694.0061}$\pm 0.0005 $ & $-0.006\pm 0.003$ & $0.001 \pm 0.001 $ & \textbf{23.60}$ \pm 1.66 $ \\ e & \textbf{31.9996}$\pm 0.0004$ & \textbf{695.0752}$\pm 0.0005 $ & $-0.009\pm 0.002$ & $-0.009 \pm 0.001 $ & \textbf{27.77}$ \pm 1.92 $ \\ f & \textbf{46.6903}$\pm 0.0011$ & \textbf{718.2737}$\pm 0.0015 $ & $0.007\pm 0.003$ & $-0.007 \pm 0.002 $ & \textbf{7.45}$ \pm 1.09 $ \\ g & \textbf{118.3807}$\pm 0.0004$ & \textbf{693.8022}$\pm 0.0010 $ & (0) & (0) & (25.29) \\ \hline \hline \end{tabular} \caption{Best dynamical fit (fixed mass and circular orbit for planet g) to TTs from E.A. 
\label{tbl-EAbestfit}} \end{center} \end{table} \begin{table} \begin{center} \begin{tabular}{|cccccc|} \hline \hline Planet & $P$ (days) & $T_{0}$ & $e \cos \omega$ & $e \sin \omega$ & $M_{p}/M_{\star} \times 10^{-6}$ \\ \hline b & \textbf{10.3039}$\pm 0.0004$ & \textbf{689.7391}$\pm 0.0012 $ & $0.050\pm 0.019$ & $0.014 \pm 0.010 $ & \textbf{6.80}$ \pm 2.16 $ \\ c & \textbf{13.0240}$\pm 0.0005$ & \textbf{683.3497}$\pm 0.0010 $ & $0.033\pm 0.016$ & $0.005 \pm 0.008 $ & \textbf{9.25}$ \pm 3.34 $ \\ d & \textbf{22.6849}$\pm 0.0005$ & \textbf{694.0072}$\pm 0.0007 $ & $-0.003\pm 0.004$ & $0.004 \pm 0.003 $ & \textbf{23.61}$ \pm 1.84 $ \\ e & \textbf{31.9999}$\pm 0.0005$ & \textbf{695.0756}$\pm 0.0005 $ & $-0.007\pm 0.003$ & $-0.007 \pm 0.003 $ & \textbf{23.85}$ \pm 2.55 $ \\ f & \textbf{46.6877}$\pm 0.0014$ & \textbf{718.2697}$\pm 0.0021 $ & $0.014\pm 0.005$ & $-0.001 \pm 0.002 $ & \textbf{5.26}$ \pm 1.21 $ \\ g & \textbf{118.3806}$\pm 0.0005$ & \textbf{693.8010}$\pm 0.0010 $ & (0) & (0) & (25.29) \\ \hline \hline \end{tabular} \caption{Best (g-fixed) dynamical fit to TTs from J.R. 
\label{tbl-JRbestfit}} \end{center} \end{table} \begin{table} \begin{center} \begin{tabular}{|cccccc|} \hline \hline Planet & $P$ (days) & $T_{0}$ & $e \cos \omega$ & $e \sin \omega$ & $M_{p}/M_{\star} \times 10^{-6}$ \\ \hline b & \textbf{10.3036}$\pm 0.0007$ & \textbf{689.7363}$\pm 0.0032 $ & $0.009\pm 0.008$ & $0.072 \pm 0.018 $ & \textbf{6.80}$ \pm 3.28 $ \\ c & \textbf{13.0247}$\pm 0.0006$ & \textbf{683.3490}$\pm 0.0015 $ & $-0.004\pm 0.005$ & $0.059 \pm 0.014 $ & \textbf{12.06}$ \pm 6.25 $ \\ d & \textbf{22.6846}$\pm 0.0006$ & \textbf{694.0074}$\pm 0.0016 $ & $-0.001\pm 0.002$ & $-0.000 \pm 0.001 $ & \textbf{21.27}$ \pm 3.23 $ \\ e & \textbf{31.9993}$\pm 0.0009$ & \textbf{695.0759}$\pm 0.0011 $ & $-0.008\pm 0.003$ & $-0.010 \pm 0.004 $ & \textbf{22.91}$ \pm 4.73 $ \\ f & \textbf{46.6883}$\pm 0.0027$ & \textbf{718.2695}$\pm 0.0023 $ & $0.014\pm 0.007$ & $-0.007 \pm 0.005 $ & \textbf{5.94}$ \pm 2.55 $ \\ g & \textbf{118.3809}$\pm 0.0003$ & \textbf{693.8030}$\pm 0.0021 $ & (0) & (0) & (25.29) \\ \hline \hline \end{tabular} \caption{Best (g-fixed) dynamical fit to TTs from D.S. 
\label{tbl-DSbestfit}} \end{center} \end{table} \begin{table} \begin{center} \begin{tabular}{|cccccc|} \hline \hline Planet & $P$ (days) & $T_{0}$ & $e \cos \omega$ & $e \sin \omega$ & $M_{p}/M_{\star} \times 10^{-6}$ \\ \hline b & \textbf{10.3039}$^{+0.0006}_{-0.0011}$ & \textbf{689.7377}$^{+0.0031}_{-0.0046}$ & $0.032^{+0.037}_{-0.035}$ & $0.032^{+0.060}_{-0.030}$ & \textbf{5.83}$^{+4.29}_{-3.09}$ \\ c & \textbf{13.0241}$^{+0.0013}_{-0.0008}$ & \textbf{683.3494}$^{+0.0014}_{-0.0020}$ & $0.016^{+0.035}_{-0.029}$ & $0.020^{+0.054}_{-0.030}$ & \textbf{9.13}$^{+9.30}_{-4.77}$ \\ d & \textbf{22.6845}$^{+0.0010}_{-0.0009}$ & \textbf{694.0069}$^{+0.0022}_{-0.0013}$ & $-0.003^{+0.006}_{-0.006}$ & $0.002^{+0.006}_{-0.002}$ & \textbf{22.84}$^{+2.64}_{-4.97}$ \\ e & \textbf{31.9996}$^{+0.0008}_{-0.0013}$ & \textbf{695.0755}$^{+0.0015}_{-0.0008}$ & $-0.008^{+0.005}_{-0.004}$ & $-0.009^{+0.004}_{-0.005}$ & \textbf{24.83}$^{+4.84}_{-7.05}$ \\ f & \textbf{46.6887}$^{+0.0029}_{-0.0038}$ & \textbf{718.2711}$^{+0.0043}_{-0.0052}$ & $0.011^{+0.010}_{-0.007}$ & $-0.005^{+0.006}_{-0.007}$ & \textbf{6.20}$^{+2.52}_{-2.93}$ \\ g & \textbf{118.3809}$^{+0.0012}_{-0.0010}$ & \textbf{693.8021}$^{+0.0030}_{-0.0022}$ & $0.032^{+0.097}_{-0.103}$ & $0.022^{+0.055}_{-0.063}$ & \textbf{23.21}$^{+59.18}_{-58.69}$ \\ \hline \hline \end{tabular} \caption{Dynamical all-eccentric fits to the observed transit times with the orbital periods (second column), time of first transit after epoch (third column), $e\cos\omega$ (fourth column; $e$ represents eccentricity, $\omega$ is the angle, measured from the star, between the place the planet's orbit pierces the sky, coming towards the observer, and the pericenter of the orbit), $e\sin\omega$ (fifth column), and planetary mass in units of the stellar mass (sixth column), all as free variables. For planet g, this model has settled on a mass near the initial estimate of 8 $M_{\oplus}$ ($25.3\times10^{-6}~M_{\star}$). 
\label{tbl-allecc}} \end{center} \end{table}
\section{\label{sec:level1}INTRODUCTION} The transitional flow regime is frequently encountered in turbomachines, and especially in aircraft engines operating at relatively low Reynolds numbers. As a consequence, a significant part of the flow on the blade surfaces undergoes the laminar-turbulent transition process. The boundary-layer development, losses, efficiency, and momentum transfer are all greatly affected by laminar-turbulent transition. Accurate prediction of the transition process is therefore crucial for the design of efficient and reliable aerospace systems \cite{pecnik2007application}. RANS simulations remain the most commonly used computational technique for the analysis of turbulent flows. Considerable effort has been spent over the past two decades to develop RANS-based transition models for engineering applications to predict various kinds of transitional flows \cite{menter2002transition,menter2004correlation,menter2006transition,langtry2009correlation,menter2015one,wei2017modeling,tousi2021active}. Each model has its strengths and weaknesses, and by far the correlation-based transition models of Langtry and Menter \cite{langtry2009correlation,menter2015one} are the most widely used in engineering industries, in particular the aerospace industry. Most RANS models adopt the Boussinesq turbulent viscosity hypothesis, i.e., the anisotropy of the Reynolds stresses is proportional to the mean rate of strain, and are therefore also referred to as linear eddy viscosity models. It is well known that linear eddy viscosity models are limited by the restrictions of the Boussinesq turbulent viscosity hypothesis and cannot yield accurate predictions for complex flow features such as flows with significant streamline curvature, separation, reattachment, and laminar-turbulent transition.
Large eddy simulations (LES) or direct numerical simulations (DNS) provide high-fidelity solutions for such problems, but the calculations are often too expensive in computational time and cost, especially for high-Reynolds-number flows. Accounting for the errors and uncertainties in RANS model predictions therefore provides a means to quantify trust in the predictions, as well as enabling robust and reliability-based design optimization. More expensive LES or DNS would only be considered necessary if the model form uncertainty is too large. The current study considers a physics-based approach recently introduced by Emory \textit{et al.} \cite{emory2013modeling}, namely the eigenspace perturbation method. This framework quantifies the model form uncertainty associated with the linear eddy viscosity model via sequential perturbations in the predicted amplitude (turbulence kinetic energy), shape (eigenvalues), and orientation (eigenvectors) of the anisotropy Reynolds stress tensor. This is an established method for RANS model UQ and has been applied to analyze and estimate the RANS uncertainty in flows through scramjets \cite{emory2011characterizing}, aircraft nozzle jets, turbomachinery, flows over streamlined bodies \cite{gorle2019epistemic}, a supersonic axisymmetric submerged jet \cite{mishra2017rans}, and canonical turbulent flows over a backward-facing step \cite{iaccarino2017eigenspace,cremades2019reynolds}. The method has been used for the robust design of Organic Rankine Cycle (ORC) turbine cascades \cite{razaaly2019optimization}. In aerospace applications, it has been used for design optimization under uncertainty \cite{cook2019optimization,mishra2020design,matha2022extending,matha2022assessment}.
In civil engineering applications, the method has been used to design urban canopies \cite{garcia2014quantifying}, to ensure the ventilation of enclosed spaces, and in wind engineering practice for turbulent bluff-body flows \cite{gorle2015quantifying}. The perturbation method for RANS model UQ has also been used in conjunction with machine learning algorithms to provide more precise estimates of RANS model uncertainty in the presence of data \cite{xiao2016quantifying,wu2016bayesian,parish2016paradigm,xiao2017random,wang2017physics,wang2017comprehensive,heyse2021estimating}, and for the creation of probabilistic aerodynamic databases enabling the certification of virtual aircraft designs \cite{mukhopadhaya2020multi,nigam2021toolset}. All of the aforementioned studies that adopted the eigenspace perturbation framework focused on eigenvalue and eigenvector perturbations but did not consider the turbulence kinetic energy perturbation. According to Mishra and Iaccarino \cite{mishra2019theoretical}, the turbulence kinetic energy perturbation varies the coefficient of turbulent viscosity in the Boussinesq turbulent viscosity hypothesis. Currently, all eddy viscosity models utilize a predetermined constant value of this coefficient. In reality, the coefficient of turbulent viscosity varies between different turbulent flow scenarios and even between different regions of the same turbulent flow \cite{mishra2019theoretical}. Therefore, perturbing the amplitude of the anisotropy Reynolds stress tensor not only captures the full range of uncertainty introduced by the Boussinesq turbulent viscosity hypothesis, but also plays an important role in capturing the true physics of the turbulent flow. However, studies of the turbulence kinetic energy perturbation are lacking; the only such studies are those of \cite{gorle2013framework,cremades2019reynolds}.
To date, the combined effect of the turbulence kinetic energy and eigenvalue perturbations has not been examined for airfoil flows. It should be noted that introducing uniform perturbations over the entire flow field often leads to overly conservative confidence intervals, because decades of experience in RANS modeling show that the models are not always inaccurate. Consequently, it is reasonable to introduce uncertainties only in the regions of the flow where the model is deemed plausibly untrustworthy. Gorl{\'e} \textit{et al.} \cite{gorle2014deviation} first proposed the concept of an \textit{ad hoc} ``marker function'' that identifies regions deviating from parallel shear flow. A recent study by Gorl{\'e} \textit{et al.} \cite{gorle2019epistemic} employed this marker function in the simulation of flow over a periodic wavy wall. Emory \textit{et al.} \cite{emory2013modeling} also provided a variety of marker functions aimed at spatially varying the magnitude of the eigenvalue perturbation in a computational domain. Nevertheless, marker function development remains under-explored, and more rigorous discussion and validation of new markers are needed. There are few methods for implementing the effects of model form uncertainty on a transitional near-wall flow in a RANS formulation. In this work, the local-correlation laminar-turbulent transition model of Langtry and Menter \cite{langtry2009correlation} is used to close the mean transport equations. It has been used extensively to predict a wide variety of transitional flows, including natural laminar-turbulent transition. However, there are few studies concerning the model form uncertainty in transition modeling.
Therefore, the objective of this paper is to advance the understanding of the performance of the eigenspace perturbation approach for quantifying the model form uncertainty in RANS simulations of transitional flow over an SD7003 airfoil using the transition model of Langtry and Menter \cite{langtry2009correlation}. Specifically, the objectives of this study are (1) to develop a new regression-based marker function $M_{k}$ for the perturbation to the amplitude of the anisotropy Reynolds stress tensor, based on the turbulence kinetic energy discrepancy between the RANS and in-house DNS \cite{zhang2021turbulent} datasets; (2) to explore the effect of the turbulence kinetic energy perturbation on various quantities of interest (QoIs) through a set of uniform perturbations; and (3) to develop a thorough understanding of the combined effect of the shape and marker-based amplitude perturbations to the anisotropy Reynolds stress tensor. A novelty of this study lies in the application of the eigenspace perturbation method to transitional flows, as opposed to the fully developed turbulent flows considered in almost all prior investigations. \section{Methodology} \subsection{\label{sec:level2}Governing equations} The flow was assumed to be two-dimensional and incompressible.
The RANS formulation of the continuity and momentum equations is as follows: \begin{equation} \label{p_Continuity} \frac{\partial \left\langle U_{i} \right\rangle}{\partial x_{i}}=0, \end{equation} \begin{equation} \label{p_Momentum} \frac{ D \left\langle U_{j}\right\rangle}{\mathrm{Dt}}=-\frac{1}{\rho} \frac{\partial \left\langle P \right\rangle}{\partial x_{j}}+\nu \frac{\partial^{2} {\left\langle U_{j} \right\rangle}}{\partial x_{i} \partial x_{i}}-\frac{\partial \left\langle u_{i} u_{j}\right\rangle}{\partial x_{i}} \end{equation} \noindent where $\left\langle \ \right\rangle$ represents time-averaging, $\rho$ is the density, $\left\langle P \right\rangle$ is the time-averaged pressure, and $\nu$ is the kinematic viscosity. The $\left\langle U_{i}\right\rangle$ are the time-averaged velocity components. The Reynolds stress term in Eq. \ref{p_Momentum}, i.e., $\left\langle u_{i}u_{j}\right\rangle$, is an unknown that needs to be approximated using a RANS model. In the results presented in this study for flow over an SD7003 airfoil, the modified version of the shear-stress transport (SST) $k-\omega$ model \cite{menter1993zonal,hellsten1998some,menter2001elements,menter2003ten} for transitional flow simulations by Langtry and Menter \cite{langtry2009correlation} is considered. This RANS-based transition model \cite{langtry2009correlation} is a linear eddy viscosity model based on the Boussinesq turbulent viscosity hypothesis: \begin{equation}\label{Eq:noMark_uiuj} \left\langle{u_{i} u_{j}}\right\rangle=\frac{2}{3} k \delta_{i j}-2 \nu_{\mathrm{t}} \left\langle S_{i j} \right\rangle, \end{equation} \noindent where $k$ is the turbulence kinetic energy, $\delta_{i j}$ is the Kronecker delta, $\nu_\mathrm{t}$ is the turbulent viscosity, and $\left\langle S_{i j} \right\rangle$ is the mean rate of strain tensor. Results obtained from the RANS-based transition model bereft of any perturbations are referred to as ``baseline'' solutions. In Eq.
\ref{Eq:noMark_uiuj}, the deviatoric anisotropic part is \begin{equation}\label{Eqn:Bou_Ani_Tensor} \begin{aligned} a_{i j} & \equiv\left\langle u_{i} u_{j}\right\rangle-\frac{2}{3} k \delta_{i j} \\ &=-\nu_{\mathrm{t}}\left(\frac{\partial\left\langle U_{i}\right\rangle}{\partial x_{j}}+\frac{\partial\left\langle U_{j}\right\rangle}{\partial x_{i}}\right) \\ &=-2 \nu_{\mathrm{t}} \left\langle S_{i j} \right\rangle. \end{aligned} \end{equation} The (normalized) anisotropy is defined by \begin{equation}\label{Eq:noMark_AnisotropyTensor} b_{i j}= \frac{a_{ij}}{2k} = \frac{\big \langle {u_{i} u_{j}} \big \rangle }{2 k}-\frac{\delta_{i j}}{3} = -\frac{\nu_{t} }{k}\big \langle {S_{i j}} \big \rangle. \end{equation} \subsection{Eigenspace perturbation method} The Reynolds stress tensor $\left\langle u_{i} u_{j}\right\rangle$ is symmetric positive semi-definite \cite{pope2001turbulent}, and thus it can be eigen-decomposed as follows: \begin{equation} \label{Eq:noMarker_Rij} \left\langle u_{i} u_{j}\right\rangle=2 k\left(\frac{\delta_{i j}}{3}+v_{i n} \hat{b}_{n l} v_{j l}\right), \end{equation} \noindent in which $k \equiv \left\langle u_{i} u_{i} \right\rangle / 2$, $v$ represents the matrix of orthonormal eigenvectors, and $\hat{b}$ represents the diagonal matrix of eigenvalues ($\lambda_{i}$), arranged in non-increasing order such that $\lambda_{1} \geq \lambda_{2} \geq \lambda_{3}$. The amplitude, the shape, and the orientation of $\left\langle u_{i}u_{j} \right\rangle$ are explicitly represented by $k$, $\lambda_{i}$, and $v_{i j}$, respectively. Equations \ref{Eq:noMark_AnisotropyTensor} and \ref{Eq:noMarker_Rij} lead to \begin{equation}\label{Eq:noMarker_bij} b_{i j}=-\frac{\nu_{t} }{k}\big \langle {S_{i j}} \big \rangle = v_{i n} \hat{b}_{n l} v_{j l}.
\end{equation} Equation \ref{Eq:noMarker_bij} indicates that the Boussinesq turbulent viscosity hypothesis requires the shape and orientation of $\left\langle u_{i}u_{j} \right\rangle$ to be determined by $(\nu_{t}/k)\big \langle {S_{i j}} \big \rangle$. This assumption implies that the $a_{i j}$ tensor is aligned with the $\big \langle {S_{i j}} \big \rangle$ tensor, which is not true in most practical circumstances, in particular for complex flows, e.g., strongly swirling flows, flows with significant streamline curvature, and flows with separation and reattachment, and is thus a source of model form uncertainty. The eigenspace perturbation method was first proposed in \cite{emory2011modeling,gorle2012epistemic}. To quantify errors introduced by the model form uncertainty, perturbations are injected into the eigen-decomposed Reynolds stress defined in Eq. \ref{Eq:noMarker_Rij}. The perturbed Reynolds stresses are defined as \begin{equation}\label{Eqn_Rij_perturbed} \left\langle u_{i} u_{j}\right\rangle^{*}=2 k^{*}\left(\frac{1}{3} \delta_{i j}+v_{i n}^{*} \hat{b}_{n l}^{*} v_{j l}^{*}\right), \end{equation} \noindent where $k^{*}$ is the perturbed turbulence kinetic energy, $\hat{b}_{k l}^{*}$ is the diagonal matrix of perturbed eigenvalues, and $v_{i j}^{*}$ is the matrix of perturbed eigenvectors. For eigenvalue perturbations, Pecnik and Iaccarino \cite{emory2011modeling} proposed a perturbation approach that enforces the realizability constraints on $\left\langle u_{i}u_{j} \right\rangle$ via the barycentric map \cite{banerjee2007presentation}, as shown in Fig. \ref{fig:BMap_Sketch.pdf}, because the map contains all realizable states of $\left\langle u_{i}u_{j} \right\rangle$.
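As a minimal numerical sketch of this restriction (with hypothetical values of $\nu_t$, $k$, and the shear rate, not taken from the simulations in this paper), consider a plane shear flow whose only nonzero mean strain component is $\langle S_{12}\rangle = s$. The Boussinesq anisotropy $b_{ij} = -(\nu_t/k)\langle S_{ij}\rangle$ then has eigenvalues $(a, 0, -a)$ with $a = \nu_t s/k$; mapping these to the barycentric weights $(\lambda_1-\lambda_2,\ 2(\lambda_2-\lambda_3),\ 3\lambda_3+1)$ shows that the state is a convex combination of the three limiting states, and shifting the weights a fraction $\Delta_B$ toward the isotropic vertex yields the perturbed eigenvalues:

```python
# Sketch (hypothetical values): plane shear <S_12> = s under the Boussinesq
# hypothesis, b_ij = -(nu_t/k)<S_ij>, has eigenvalues (a, 0, -a), a = nu_t*s/k.
nu_t, k, s = 1e-4, 0.05, 100.0            # assumed viscosity, TKE, shear rate
a = nu_t * s / k                          # a = 0.2 here
l1, l2, l3 = a, 0.0, -a                   # sorted, trace-free eigenvalues

# Barycentric weights C_1c = l1-l2, C_2c = 2(l2-l3), C_3c = 3*l3+1:
C = (l1 - l2, 2.0 * (l2 - l3), 3.0 * l3 + 1.0)
assert abs(sum(C) - 1.0) < 1e-9           # weights of a convex combination
assert all(w >= 0.0 for w in C)           # inside the triangle: realizable

# Shifting the mapped point a fraction dB toward the isotropic (3c) vertex
# shifts the weights by the same fraction; inverting the weight definitions
# recovers the perturbed eigenvalues.
dB = 0.5
Cs = ((1 - dB) * C[0], (1 - dB) * C[1], (1 - dB) * C[2] + dB)
l3s = (Cs[2] - 1.0) / 3.0
l2s = Cs[1] / 2.0 + l3s
l1s = Cs[0] + l2s
print(l1s, l2s, l3s)                      # anisotropy halved, roughly (0.1, 0.0, -0.1)
```

Because the isotropic vertex corresponds to vanishing eigenvalues and the map is affine, a shift of $\Delta_B$ toward $3c$ simply scales each eigenvalue by $1-\Delta_B$, while the trace-free property is preserved.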
Due to the realizability constraint of the positive semi-definiteness of $\left\langle u_{i}u_{j} \right\rangle$, there are three extreme states of componentiality of $\left\langle u_{i}u_{j} \right\rangle$: the one-component limiting state ($1C$), which has one non-zero principal fluctuation, i.e., $\hat{b}_{1c}=\operatorname{diag}[2 / 3,-1 / 3,-1 / 3]$; the two-component limiting state ($2C$), which has two non-zero principal fluctuations of the same intensity, i.e., $\hat{b}_{2c}=\operatorname{diag}[1 / 6,1 / 6, -1 / 3]$; and the three-component (isotropic) limiting state ($3C$), which has three non-zero principal fluctuations of the same intensity, i.e., $\hat{b}_{3c}=\operatorname{diag}[0,0,0]$. The $\hat{b}_{1c}$, $\hat{b}_{2c}$, and $\hat{b}_{3c}$ limiting states correspond to the three vertices of the barycentric map. For an arbitrary point $\mathbf{x}$ within the barycentric map, any realizable $\left\langle u_{i}u_{j} \right\rangle$ can be expressed as a convex combination of the three vertices $\mathbf{x}_{i c}$ (limiting states), with weights determined by the eigenvalues $\lambda_{i}$, as follows: \begin{equation}\label{Eq:noMarker_Coordinates_InsideBary} \mathbf{x} = \mathbf{x}_{1 \mathrm{c}}\left(\lambda_{1}-\lambda_{2}\right)+\mathbf{x}_{2 \mathrm{c}}\left(2 \lambda_{2}-2 \lambda_{3}\right)+\mathbf{x}_{3 \mathrm{c}}\left(3 \lambda_{3}+1\right). \end{equation} To define the perturbed eigenvalues $\hat{b}_{i j}^{*}$, the location on the barycentric map of the Reynolds stresses computed by the linear eddy viscosity model is first determined, and uncertainty is subsequently injected by shifting it to a new location on the barycentric map. In Fig.
\ref{fig:BMap_Sketch.pdf}, perturbations toward the $1c$, $2c$, and $3c$ vertices of the barycentric map shift point $O$ to $B_{1c/2c/3c}$, respectively, which can be written as \begin{equation}\label{Eq:noMarker_xstar} \mathbf{x_{B(1c/2c/3c)}^{*}}=\mathbf{x_{O}}+\Delta_{B}\left(\mathbf{x}_{1c/2c/3c}-\mathbf{x_{B(1c/2c/3c)}}\right), \end{equation} \noindent where $\Delta_{B}$ is the magnitude of the perturbation. Once the new location is determined, a new set of eigenvalues $\lambda_{i}$ can be computed from Eq. \ref{Eq:noMarker_Coordinates_InsideBary} and $b_{i j}$ can be reconstructed, which eventually yields $\left\langle u_{i}u_{j} \right\rangle^{*}$. As noted earlier in Eq. \ref{Eq:noMarker_bij}, the unperturbed anisotropy Reynolds stress tensor is modeled as $b_{i j}=-(\nu_{t}/k) \big \langle {S_{i j}} \big \rangle = v_{i n} \hat{b}_{n l} v_{j l}$ or, equivalently, $a_{i j} = -2\nu_{t} \big \langle {S_{i j}} \big \rangle =2kv_{i n} \hat{b}_{n l} v_{j l}$. Accordingly, the anisotropy Reynolds stress tensor subject to the turbulence kinetic energy perturbation becomes \begin{equation}\label{Eq:perturb_aij} a_{i j}^{*} = -2\nu_{t}^{*} \big \langle {S_{i j}} \big \rangle =2k^{*}v_{i n} \hat{b}_{n l} v_{j l}. \end{equation} Because perturbing $k$ does not affect the eigenvalues and eigenvectors of the anisotropy Reynolds stress tensor, the change in the turbulent viscosity hypothesis has to be accounted for in the turbulent viscosity coefficient \cite{mishra2019theoretical}. Comparing the unperturbed anisotropy Reynolds stress tensor to Eq. \ref{Eq:perturb_aij} yields \cite{mishra2019theoretical} \begin{equation}\label{Eq:k_nut} \frac{k^{*}}{k} = \frac{\nu_{T}^{*}}{\nu_{T}}, \quad \text{or equivalently,} \quad \nu_{T}^{*} = \frac{\nu_{T}k^{*}}{k}, \end{equation} \noindent where $k^{*} = k + \Delta_{k}$. From Eq. \ref{Eq:k_nut}, the turbulence kinetic energy perturbation leads to a spatial variation of the turbulent viscosity coefficient.
Specifically, the relation between the turbulent viscosity and the turbulent viscosity coefficient $C_{\mu}$ is given by \begin{equation} \label{nut_unperturb} \nu_{T} = C_{\mu}\frac{k^{2}}{\varepsilon}, \end{equation} \noindent where $\varepsilon$ is the dissipation rate. Thus, the perturbed turbulent viscosity can be expressed as follows: \begin{equation}\label{nut_perturb} \nu_{T}^{*} = C_{\mu}^{*}\frac{{k^{*}}^{2}}{\varepsilon}, \end{equation} \noindent where $C_{\mu}^{*} = C_{\mu} + \Delta_{C_{\mu}}$. Substituting Eqs. \ref{nut_unperturb} and \ref{nut_perturb} into Eq. \ref{Eq:k_nut}, we get \cite{mishra2019theoretical} \begin{equation} \label{Eq:Mishra_Eq} \frac{k}{k^{*}} = \frac{C_{\mu}^{*}}{C_{\mu}}, \quad \text{or equivalently,} \quad \Delta_{C_{\mu}} = -\frac{\Delta_{k}C_{\mu}}{k+\Delta_{k}}. \end{equation} In this study, the turbulence kinetic energy discrepancies between the RANS-based predictions and the in-house DNS data \cite{zhang2021turbulent} are modeled by high-order regressions. These regressions generate values of $k^{*}$ that vary spatially in the computational domain: \begin{equation}\label{Eq:Marker_Mk_Method} k^{*} = k +\Delta k = kM_{k}, \quad M_{k} \sim f(x,y). \end{equation} In Eq. \ref{Eq:Marker_Mk_Method}, $M_{k}$ is a marker function of the $x$ and $y$ coordinates in the computational domain. Substituting Eq. \ref{Eq:Marker_Mk_Method} into Eq. \ref{Eq:Mishra_Eq} and rearranging, we get: \begin{equation} \label{Eq:1_M_k} \frac{1}{M_{k}} = \frac{C_{\mu}^{*}}{C_{\mu}}. \end{equation} Combining Eq. \ref{Eq:1_M_k} with $C_{\mu}^{*} = C_{\mu} + \Delta_{C_{\mu}}$, the relation between $M_{k}$ and $\Delta_{C_{\mu}}$ can be expressed as follows: \begin{equation}\label{Eq:Minghan_Eq} \Delta_{C_{\mu}} = \frac{C_{\mu}(1-M_{k})}{M_{k}}. \end{equation} Therefore, Eq. \ref{Eq:Minghan_Eq} provides the underlying model structure of the turbulence kinetic energy perturbation with a marker function involved.
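The consistency of Eqs. \ref{Eq:k_nut}, \ref{Eq:Mishra_Eq}, and \ref{Eq:Minghan_Eq} can be checked numerically with a minimal sketch (the values of $C_{\mu}$, $k$, $\varepsilon$, and $M_{k}$ below are hypothetical, chosen only for illustration): amplifying $k$ by a marker $M_{k}$ rescales $\nu_{T}$ by $M_{k}$ while $C_{\mu}$ is rescaled by $1/M_{k}$.

```python
# Sketch (assumed values): consistency of the TKE perturbation relations.
C_mu, k, eps = 0.09, 0.05, 2.0    # hypothetical coefficient, TKE, dissipation
M_k = 1.2                         # marker value: k amplified by 20% locally

nu_T = C_mu * k**2 / eps          # unperturbed turbulent viscosity
k_star = M_k * k                  # k* = k M_k
d_Cmu = C_mu * (1.0 - M_k) / M_k  # Delta_Cmu in terms of M_k
C_mu_star = C_mu + d_Cmu          # equals C_mu / M_k
nu_T_star = C_mu_star * k_star**2 / eps

# nu_T*/nu_T must equal k*/k = M_k, as required by the TKE perturbation:
assert abs(nu_T_star / nu_T - M_k) < 1e-9
assert abs(C_mu_star * M_k - C_mu) < 1e-9
```

The assertions confirm that the spatially varying $C_{\mu}^{*}$ implied by the marker exactly compensates the quadratic dependence of $\nu_{T}$ on $k$, so the perturbed viscosity scales linearly with the perturbed turbulence kinetic energy.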
A detailed description of the modeling of $k^{*}$ is presented in Section \ref{RegressionM}. In addition, eigenvector perturbations rotate the eigenvectors of the anisotropy Reynolds stress tensor with respect to the principal axes of the mean rate of strain. Recall that the eigenvectors of the anisotropy Reynolds stress tensor are forced to align with the principal axes of the mean rate of strain due to the limitations of the Boussinesq turbulent viscosity hypothesis \cite{pope2001turbulent}. This again violates the true physics of turbulent flow. Eigenvector perturbations thus extend the Boussinesq turbulent viscosity hypothesis to an anisotropic turbulent viscosity hypothesis. However, unlike eigenvalue perturbations, which are strictly constrained by realizability, eigenvector perturbations are more difficult to constrain physically in a local sense. In this study, eigenvector perturbations are omitted for brevity; the present study therefore restricts attention to the amplitude and shape perturbations of the anisotropy Reynolds stress tensor. \begin{figure} \centerline{\includegraphics[width=3.4in]{fig_Marker/BMap_Sketch.pdf}} \caption{Barycentric map.} \label{fig:BMap_Sketch.pdf} \end{figure} \subsection{Eigenspace perturbation framework in OpenFOAM} At present the eigenspace perturbation framework is available only in Stanford University's SU2 CFD suite \cite{mishra2019uncertainty} and the TRACE solver of DLR \cite{matha2022assessment}. In spite of its utility to the design and simulation community, there are no tested and validated implementations of this framework available in popular CFD software. OpenFOAM \cite{winkelman1980flowfield} is the most widely used open-source CFD software in research and academia. A contribution of this investigation is the development of a verified and validated implementation of the eigenspace perturbation framework for the OpenFOAM software.
Relatively few studies have implemented the eigenspace perturbation framework in a RANS formulation using OpenFOAM, e.g., see \cite{cremades2019reynolds,hornshoj2021quantifying}. All of these studies employed MATLAB coupled with OpenFOAM to decompose and recompose the Reynolds stress tensor. This increases the complexity of using the eigenspace perturbation framework \cite{emory2013modeling} in OpenFOAM, is prone to error, and violates the spirit of versatility. In addition, C++ is inherently faster than MATLAB, which reduces the computational expense. In this study, the eigenspace perturbation framework, along with the novel marker functions, was implemented entirely in C++ within OpenFOAM, which greatly reduces the number of user-defined inputs and allows users without much background in fluid mechanics to use the eigenspace perturbation framework in OpenFOAM. \begin{figure*} \centerline{\includegraphics[width=6in]{fig_Marker/FlowChart_Method_Marker.pdf}} \caption{Flow chart showing the implementation of the model form uncertainty framework within OpenFOAM with the marker configuration involved.} \label{fig:FlowChart_Method_Marker.pdf} \end{figure*} In the input files (located under the ``constant'' directory in OpenFOAM), the user needs to specify the magnitude of $\Delta_{B}$, whether $M_{k}$ is needed, and which eigenvalue perturbation ($1c$, $2c$, $3c$) is to be performed. The eigenspace framework conducts the perturbations during the execution of the simulations, as illustrated in Fig. \ref{fig:FlowChart_Method_Marker.pdf}. At each control volume (CV), the baseline Reynolds stress tensor is calculated and decomposed into its eigenvalue and eigenvector matrices, which are perturbed using the eigenspace perturbation method as prescribed earlier. If $M_{k}$ is involved, the perturbation to the turbulence kinetic energy is also performed.
The perturbed eigenvalue and eigenvector matrices are then recomposed into a perturbed Reynolds stress tensor for each CV. These perturbed Reynolds stress tensors, together with the perturbed turbulence kinetic energy, are then used to compute the perturbed velocity field and the perturbed turbulent production to advance each node to the next time step. At convergence, the Reynolds stress also converges to its perturbed state. \section{Flow description and numerical method} The flow considered is that around an SD7003 airfoil, as shown in Fig. \ref{fig:no_Mk_SD7003domain.pdf}. At the low chord-based Reynolds number of $\operatorname{Re}_{c} = 60000$, a laminar separation bubble (LSB) forms on the suction side of the airfoil. Note that the bubble moves upstream as the angle of attack (AoA) increases \cite{catalano2011rans}. In this study, an $8^{\circ}$ AoA (nearing stall) was considered. Figure \ref{fig:no_Mk_SD7003domain.pdf} schematically shows that the solution domain is a two-dimensional C-topology grid of $389$ (streamwise) $\times$ $280$ (wall-normal) $\times$ $1$ (spanwise) control volumes, which is comparable to the number of control volumes ($768 \times 176$) used in the numerical study of \cite{catalano2011rans}. The magnified view of the two-dimensional SD7003 airfoil labels the camber, suction side, and pressure side, as shown in Fig. \ref{fig:no_Mk_SD7003domain.pdf}. The first grid node off the wall was placed at $y^{+} \approx 1.0$ in the turbulent boundary layer, in which more than $20$ CVs were placed. A grid convergence study was performed to test the influence of the grid resolution on the results. It indicated that higher grid resolution in the near-wall region results in negligible changes in the predicted results: the effect of increasing the number of CVs in the wall-normal direction on the predicted mean velocity and Reynolds shear stress profiles was at most $1\%$.
Therefore, the simulation results based on the smaller grid ($389 \times 280$) have been used in the present analysis. The governing Eqs. \ref{p_Continuity} - \ref{p_Momentum} were closed by the RANS-based transition model of \cite{langtry2009correlation} using OpenFOAM. The transport equations were discretized using the finite volume method. A second-order upwind scheme was used for the spatial discretization, and the Gauss linear scheme was used to evaluate the gradients. The PIMPLE algorithm was adopted for pressure-velocity coupling, which is a combination of PISO (Pressure Implicit with Splitting of Operator) \cite{ferziger2002computational} and SIMPLEC (Semi-Implicit Method for Pressure Linked Equations-Consistent) \cite{van1984enhancements}. It should be noted that the PIMPLE algorithm can deal with large time steps where the maximum Courant (C) number may consistently be above $1$. In this study, the maximum value of C was set equal to $0.6$, and OpenFOAM automatically adjusted the time step to achieve the set maximum. In addition, both the residuals and the time histories of the lift and drag coefficients were used to track the convergence status. The solution fields were iterated until convergence, which required the residuals of energy and momentum to drop by more than four orders of magnitude and the lift and drag coefficients to become nearly constant in time. This happened at $T \approx 0.3$, which corresponds to a normalized time $T^{*} = T U_{\infty} / c = 6.75$; similar behavior was observed by Catalano and Tognaccini \cite{catalano2011rans} in their numerical study of a low-Reynolds-number flow over an SD7003 airfoil at $\mathrm{AoA} = 10^{\circ}$. Sampling began at $T = 0.6$ (double the time of convergence) and ended at $T = 1.4$, which required approximately $35000$ iterations for all simulations.
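The quoted flow parameters can be checked with a line of arithmetic (using the freestream velocity, chord length, and kinematic viscosity given in this section):

```python
# Consistency check of the quoted parameters: chord Reynolds number
# Re_c = U_inf * c / nu, and normalized convergence time T* = T U_inf / c.
U_inf = 4.5            # m/s, freestream velocity
c = 0.2                # m, chord length
nu = 1.5e-5            # m^2/s, kinematic viscosity of air (as quoted)

Re_c = U_inf * c / nu
T_star = 0.3 * U_inf / c   # convergence observed at T ~ 0.3 s
print(Re_c, T_star)        # Re_c ~ 60000, T* ~ 6.75, matching the text
```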
The fluid was assumed to be air, with a freestream turbulence intensity of $\operatorname{Tu} = 0.03\%$ and a kinematic viscosity of $\nu = 1.5 \times 10^{-5} \mathrm{~m}^{2} / \mathrm{s}$. Ideally, the value of $\operatorname{Tu}$ should be close to zero. As shown in Fig. \ref{fig:no_Mk_SD7003domain.pdf}, at the inlet of the domain the freestream velocity was set to $4.5 \ \mathrm{m/s}$, which corresponds to $\operatorname{Re}_{c} = 60000$. The chord length was set to $c = 0.2 \ \mathrm{m}$. At the outlet, a zero-gradient boundary condition was imposed on $\left\langle U_{i}\right\rangle$ ($\left\langle U \right\rangle$ in the $x$ direction, $\left\langle V \right\rangle$ in the $y$ direction), $k$, $\omega$, and the pressure. At the wall, a no-slip boundary condition was used. \begin{figure*} \centerline{\includegraphics[width=5.0in]{fig_Marker/SD7003domain.pdf}} \caption[SD7003 computational domain and boundary conditions: {\color{red} \rule{0.7cm}{0.4mm}} far field, {\color{blue} \rule{0.7cm}{0.4mm}} outflow, and {\color{black} \rule{0.7cm}{0.4mm}} no-slip walls.]{SD7003 computational domain and boundary conditions: {\color{red} \rule{0.7cm}{0.4mm}} far field, {\color{blue} \rule{0.7cm}{0.4mm}} outflow, and {\color{black} \rule{0.7cm}{0.4mm}} no-slip walls. Depiction of the suction side, camber, and pressure side of the SD7003 airfoil is displayed in the magnified plot. A three-dimensional version of the computational domain is provided with the freestream ($U_{\infty}$) encountering the leading edge at $8^{\circ}$ AoA.} \label{fig:no_Mk_SD7003domain.pdf} \end{figure*} \section{Regression model for amplitude perturbation} An important and novel focus of this study is the development of a marker function that modulates the degree of the perturbations over the entire flow domain. As explained earlier, this should lead to better-calibrated confidence intervals.
In this section, high-order polynomial regressions are constructed in MATLAB in a least-squares sense to fit both the baseline RANS and in-house DNS datasets. Note that these high-order regressions lay the foundation for the development of the new marker functions. \subsection{Example: a linear regression}\label{RegressionM} An $n$th-order polynomial regression model that describes the relationship between a dependent variable $y$ and an independent variable $x$ can be expressed as \begin{equation} \label{Eqn:poly_fit} y(x)=p_{1} x^{n}+p_{2} x^{n-1}+\ldots+p_{n} x+p_{n+1} \end{equation} where $p_{i}$, $i = 1,\ldots,n+1$, are the coefficients in descending order and $x$ is the independent variable. Fig. \ref{fig:Linear_Regression_Eg.pdf} illustrates a first-order (linear) regression model on a random dataset. The errors $y_{j}-\hat{y}_{j}$ between the predicted values $\hat{y}_{j}$ and the actual data values $y_{j}$ are referred to as residuals. The least-squares method finds the coefficients $p_{i}$ that best fit the dataset by minimizing the sum of squared residuals, i.e.: \begin{equation} RSS = \sum_{j=1}^{m}\left(y_{j}-\hat{y}_{j}\right)^{2}, \end{equation} where $RSS$ stands for the residual sum of squares, $y_{j}$ is the $j$th actual value of the dependent variable, $\hat{y}_{j}$ is the $j$th predicted value, and $m$ is the number of data points.
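For the first-order case, the least-squares solution has a closed form via the normal equations. The following sketch (with a small made-up dataset, not the RANS/DNS data of this study) fits a line and evaluates the resulting $RSS$:

```python
# Sketch: ordinary least squares for a linear model y = p1*x + p2 via the
# normal equations, minimizing RSS = sum_j (y_j - yhat_j)^2 (pure Python;
# the paper's higher-order fits were produced with MATLAB's fitting tools).
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.1, 4.9, 7.2, 8.8]    # roughly y = 2x + 1 with small noise

n = len(xs)
Sx, Sy = sum(xs), sum(ys)
Sxx = sum(x * x for x in xs)
Sxy = sum(x * y for x, y in zip(xs, ys))

p1 = (n * Sxy - Sx * Sy) / (n * Sxx - Sx * Sx)   # slope
p2 = (Sy - p1 * Sx) / n                          # intercept

rss = sum((y - (p1 * x + p2)) ** 2 for x, y in zip(xs, ys))
print(p1, p2, rss)   # slope and intercept near 2 and 1, small residual
```

Higher-order fits follow the same principle, with the normal equations written for the $(n+1)$-dimensional coefficient vector of Eq. \ref{Eqn:poly_fit}.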
\begin{figure} \centerline{\includegraphics[width=3.5in]{fig_Marker/Linear_Regression_Eg.pdf}} \caption{Linear regression relation between $x$ and $y$.} \label{fig:Linear_Regression_Eg.pdf} \end{figure} \subsubsection{Defining the untrustworthy regions}\label{Sec:Def_untrust} \begin{figure*} \centerline{\includegraphics[width=5.5in]{fig_Marker/cfcp_Tu003_forMarker.pdf}} \caption[Distribution of (a) pressure coefficient and (b) skin friction coefficient over the SD7003 airfoil at $\operatorname{Re}_{c}=6 \times 10^{4}$ and $\operatorname{AoA} = 8^{\circ}$.]{Distribution of (a) pressure coefficient and (b) skin friction coefficient over the SD7003 airfoil at $\operatorname{Re}_{c}=6 \times 10^{4}$ and $\operatorname{AoA} = 8^{\circ}$. A two-headed arrow indicates the untrustworthy region. (c) Schematic of the transitional and turbulent regions over an SD7003 airfoil with important transitional parameters highlighted.} \label{fig:cfcp_Tu003_forMarker.pdf} \end{figure*} \begin{table} \begin{center} \caption{Comparison of transition parameters.} \label{table:transi_parameters} \begin{ruledtabular} \begin{tabular}{c c c c} Method & $X_{S}/c$ &$X_{T}/c$ &$X_{R}/c$\\ \hline SSTLM (Baseline) \cite{langtry2009correlation} & $0.03$& $0.15$ &$0.29$ \\ In-house DNS \cite{zhang2021turbulent} & $0.02$ &$0.16$&$0.27$\\ LES \cite{garmann2013comparative}& $0.02$ &$0.16$&$0.27$\\ ILES \cite{galbraith2010implicit}& $0.03$ &$0.18$&$0.27$\\ \end{tabular} \end{ruledtabular} \end{center} \end{table} To construct marker functions for $k^{*}$, the first step is to identify the regions where the turbulent viscosity hypothesis becomes invalid. This study identifies the regions where the RANS model gives plausibly untrustworthy results based on the comparison between the baseline prediction and the in-house DNS data of \cite{zhang2021turbulent}.
For flow over an airfoil, the local wall shear stress and the local surface pressure are perhaps the most important parameters; their dimensionless forms are the skin friction coefficient $C_{f}=\tau_{w} / {0.5 \rho U_{\infty}^{2}}$, where $\tau_{w}$ is the wall shear stress, and the pressure coefficient $C_{p}=(p-p_{\infty}) / {0.5 \rho U_{\infty}^{2}}$, where $p$ is the local static pressure and $p_{\infty}$ is the static pressure in the freestream. In Figs. \ref{fig:cfcp_Tu003_forMarker.pdf} (a) and (b), the predicted $C_{p}$ and $C_{f}$ are plotted. According to the technique described by Boutilier and Yarusevych \cite{boutilier2012parametric}, Fig. \ref{fig:cfcp_Tu003_forMarker.pdf} (a) shows three ``kinks'' representing the separation, transition and reattachment points, denoted $X_{S}/c$, $X_{T}/c$ and $X_{R}/c$, respectively. Moreover, the size of the LSB can be determined by locating $X_{S}/c$ and $X_{R}/c$, which correspond to the zeros of the skin friction coefficient \cite{de2021model}. The two methods show good agreement with each other, and a summary of these transition parameters is tabulated in Table \ref{table:transi_parameters}. In this study, for simplicity of analysis, the LSB is treated as composed of a ``fore'' portion (from $X_{S}/c$ to $X_{T}/c$) and an ``aft'' portion (from $X_{T}/c$ to $X_{R}/c$), followed by a fully turbulent region, as shown in Fig. \ref{fig:cfcp_Tu003_forMarker.pdf} (c). The in-house DNS \cite{zhang2021turbulent} and implicit LES (ILES)/LES data of \cite{galbraith2010implicit} and \cite{garmann2013comparative} for $C_{f}$ and $C_{p}$ are included for comparison. In the fore portion of the LSB, the predicted $C_{p}$ profile shows relatively good agreement with the ILES data of \cite{galbraith2010implicit}, while a clear discrepancy is observed in the aft portion, where the prediction gives a smaller value of $C_{p}$, i.e.
the region indicated by the two-headed arrow. This kind of discrepancy was also observed by Tousi \textit{et al.} \cite{tousi2021active} in their numerical study. In addition, the predicted $C_{p}$ shows good agreement with the reference data in the turbulent region on the suction side, as well as over the entire pressure side. On the other hand, a noticeable discrepancy is observed in the $C_{f}$ profile at the negative ``trough'' in the aft portion of the LSB, as well as at the positive ``crest'' in the turbulent boundary layer after the reattachment point $X_{R}$. In Fig. \ref{fig:cfcp_Tu003_forMarker.pdf} (b), a shift of the predicted $C_{f}$ profile in the upstream direction at the trough is observed, and the value of $C_{f}$ is significantly under-predicted at the crest in the turbulent boundary layer region for $0.3 < x/c < 0.6$. This behavior has been observed by other researchers as well, e.g., see \cite{catalano2011rans,bernardos2019rans,tousi2021active}. Therefore, the region $0.14 \leq x/c \leq 0.6$, as indicated by the double-headed arrow in Fig. \ref{fig:cfcp_Tu003_forMarker.pdf} (b), is identified as the untrustworthy region where perturbations should be introduced. \begin{figure} \centerline{\includegraphics[width=3.5in]{fig_Marker/BL_black.pdf}} \caption[Injection of uncertainty into the untrustworthy zones: \textcolor{red}{zone $ab$}, \textcolor{OliveGreen}{zone $cd$}, \textcolor{Cyan}{zone $em$} and \textcolor{gray}{zone $mf$}.]{Injection of uncertainty into the untrustworthy zones: \textcolor{red}{zone $ab$}, \textcolor{OliveGreen}{zone $cd$}, \textcolor{Cyan}{zone $em$} and \textcolor{gray}{zone $mf$}.
The outer edge of the boundary layer ({\color{black} \protect\tikz[baseline]{\protect\draw[dashed] (0,.5ex)--++(.5,0) ;}} with $\circ$) at seven locations $I = x/c = 0.02$, $II = x/c = 0.03$, $III = x/c = 0.04$, $IV = x/c = 0.06$, $V = x/c = 0.08$, $VI = x/c = 0.10$, $VII = x/c = 0.12$ ($\cdots$) selected on the suction side is provided for reference.} \label{fig:BL_black} \end{figure} This study ensures that the amplitude perturbation is introduced across the entire boundary layer within the untrustworthy region $0.14 \leq x/c \leq 0.6$, which is further divided into the $ab$, $cd$, $em$ and $mf$ zones. In Fig. \ref{fig:BL_black}, the mean velocity profiles at seven locations downstream of the leading edge are used to illustrate the flow development in the streamwise direction, i.e. $I = x/c = 0.1$, $II = x/c = 0.15$, $III = x/c = 0.2$, $IV = x/c = 0.3$, $V = x/c = 0.4$, $VI = x/c = 0.5$, and $VII = x/c = 0.6$. Due to the curvature of the airfoil's upper surface, the mean velocity profiles are shifted down to the origin of $y/c$, denoted $y/c|_{o} = (y-y_{w})/c$, for the sake of clearer comparison, where $y_{w}$ is the vertical location of the upper surface of the airfoil. Figure \ref{fig:BL_black} clearly shows that the boundary layer thickness increases as the flow develops in the streamwise direction downstream of the leading edge; the dashed line with open circles indicates the approximate location of the outer edge of the boundary layer (OBL). In this study, the regions within which the amplitude perturbation will be introduced are shaded red, green, blue and gray, corresponding to the $ab$, $cd$, $em$ and $mf$ zones, respectively, as shown in Fig. \ref{fig:BL_black}. All these shaded regions extend well beyond the OBL, i.e. $0 < y/c|_{o} < 0.05$, implying that the effect of the amplitude perturbation propagates deeper into the outer boundary layer as the flow develops further downstream of the leading edge.
\subsubsection{Polynomial regression for DNS/RANS turbulence kinetic energy datasets}\label{Sec:Poly_reg_k} In this study, MATLAB was used to fit seventh-order least-squares polynomial regressions to both the baseline RANS and in-house DNS datasets (gray lines with open circles) for the turbulence kinetic energy normalized by the freestream velocity squared, $k/U_{\infty}^2$, as shown in Figs. \ref{fig:Marker_SSTLM_DNS_fit_k_all.pdf} (a) and (b). There are 5 locations selected for the $ab$ zone and 12 locations for the $cd$ zone. The regression-based $k/U_{\infty}^2$ profiles for the $ab$ and $cd$ zones are colored red and green, respectively, with a uniform spacing of $x/c = 0.01$. As the flow proceeds further downstream, the regression-based $k/U_{\infty}^2$ profiles for the $ef$ zone, which comprises the $em$ and $mf$ subzones, are colored blue; within this zone, 15 locations are selected with a uniform spacing of $x/c = 0.02$. Within each zone, the same number of locations is selected for both the regression-based RANS and in-house DNS $k/U_{\infty}^2$ profiles, as documented in Table \ref{table:marker_ranges}. As a result, a total of 32 locations are selected and placed uniformly on the suction side of the airfoil, ranging from the LSB to the fully turbulent flow further downstream. In addition, the locations are more densely packed, by imposing a smaller spacing, within the $ab$ and $cd$ zones, where the LSB evolves and complex flow features start developing, allowing a closer examination of this region. From Figs. \ref{fig:Marker_SSTLM_DNS_fit_k_all.pdf} (a) and (b), the regression-based RANS $k/U_{\infty}^2$ profiles in general exhibit a behavior similar to that of the in-house DNS, i.e. a gradual increase of the $k/U_{\infty}^2$ profile in the $ab$ and $cd$ zones, followed by a reduction of the profile further downstream in the $ef$ zone.
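A seventh-order fit at a single location can be sketched as follows; the wall-normal coordinate is rescaled to $[0,1]$ for numerical conditioning, and the $k/U_{\infty}^2$ profile is a synthetic stand-in (a Gaussian bump plus noise), not the RANS or DNS data of this study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical rescaled wall-normal coordinate and a synthetic k/U_inf^2
# profile standing in for one suction-side location (not the paper's data).
y_o = np.linspace(0.0, 1.0, 60)          # (y - y_w)/c, rescaled to [0, 1]
k_norm = 0.02 * np.exp(-((y_o - 0.3) / 0.2)**2) \
         + 0.0005 * rng.standard_normal(y_o.size)

# Seventh-order least-squares fit, the NumPy analog of MATLAB's
# polyfit(y, k, 7); coefficients are returned in descending powers.
coeffs = np.polyfit(y_o, k_norm, deg=7)
k_fit = np.polyval(coeffs, y_o)

# Residual sum of squares quantifies the quality of the regression.
rss = np.sum((k_norm - k_fit)**2)
```

Repeating this fit at each of the 32 selected locations yields the per-location regressions used in the remainder of this section.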
\begin{figure} \centerline{\includegraphics[width=3.5in]{fig_Marker/Marker_SSTLM_DNS_fit_k_all.pdf}} \caption{(a) Regressed profile of normalized turbulence kinetic energy for the baseline RANS and (b) in-house DNS datasets (gray profiles) along the suction side of the SD7003 airfoil (geometry depicted by gray line): from left to right are \textcolor{red}{zone $ab$}, \textcolor{Green}{zone $cd$} and \textcolor{Blue}{zone $ef$}.} \label{fig:Marker_SSTLM_DNS_fit_k_all.pdf} \end{figure} \begin{table*} \begin{center} \caption{Zone ranges for the untrustworthy region.} \label{table:marker_ranges} \begin{ruledtabular} \begin{tabular}{c c c c c c} &\textcolor{red}{zone $ab$} & \textcolor{Green}{zone $cd$} &\textcolor{blue}{zone $em$} &\textcolor{gray}{zone $mf$}\\ \hline $x/c$ & \textcolor{red}{$0.14 \leq \frac{x}{c}\leq 0.18$}& \textcolor{Green}{$0.18 < \frac{x}{c}\leq 0.3$} &\textcolor{blue}{$0.3 < \frac{x}{c}\leq 0.4$} & \textcolor{gray}{$0.4 < \frac{x}{c}\leq 0.6$}\\ $y/c$ & \textcolor{red}{$y_{w} \leq \frac{y}{c}\leq 0.1$} &\textcolor{Green}{$y_{w} \leq \frac{y}{c}\leq 0.1$}&\textcolor{blue}{$y_{w} \leq \frac{y}{c}\leq 0.1$}&\textcolor{gray}{$y_{w} \leq \frac{y}{c}\leq 0.1$}\\ Number of locations & \textcolor{red}{5} & \textcolor{Green}{12} &\multicolumn{2}{c}{\textcolor{blue}{15}}\\ Spacing of $x/c$ & \multicolumn{2}{c}{\textcolor{black}{0.01}} & \multicolumn{2}{c}{\textcolor{black}{0.02}} \\ \end{tabular} \end{ruledtabular} \end{center} \end{table*} \subsubsection{Spatial discrepancies in $k/U_{\infty}^2$ regressions from DNS/RANS comparison} In Figs. \ref{fig:SSTLMDNS_origin_fit_all.pdf} (a) and (b), these 32 regression-based $k/U_{\infty}^2$ profiles are shifted to the origin of the $x/c$ and $y/c$ axes, respectively, for the sake of clearer comparison.
The baseline predictions and the in-house DNS data are also included for reference, depicted by the gray lines with open circles and shifted to the origin of the $x/c$ axis to distinguish them from the regression-based profiles. From Fig. \ref{fig:SSTLMDNS_origin_fit_all.pdf} (a), the regression-based RANS $k/U_{\infty}^2$ profiles increase in magnitude as the flow moves further downstream. This is qualitatively similar to the in-house DNS profiles shown in Fig. \ref{fig:SSTLMDNS_origin_fit_all.pdf} (b). From Figs. \ref{fig:SSTLMDNS_origin_fit_all.pdf} (a) and (b), the regression-based RANS $k/U_{\infty}^2$ profiles increase in magnitude somewhat more than those of the in-house DNS in the $ab$ zone; however, they are significantly reduced in magnitude, by around $50 \%$, compared to the in-house DNS profiles in the $cd$ zone (the aft portion of the LSB). For the $ef$ zone, both the regression-based RANS and in-house DNS $k/U_{\infty}^2$ profiles show a gradual decrease in magnitude as the flow moves further downstream. Overall, the regression-based RANS and in-house DNS $k/U_{\infty}^2$ profiles are similar in magnitude in the $ab$ and $ef$ zones, but the discrepancy is significant in the aft portion of the LSB, i.e. the $cd$ zone. \begin{figure} \centerline{\includegraphics[width=3.5in]{fig_Marker/SSTLMDNS_origin_fit_all.pdf}} \caption[(a) Regression based profile of normalized turbulence kinetic energy for baseline RANS and (b) in-house DNS for \textcolor{red}{zone $ab$}, \textcolor{Green}{zone $cd$} and \textcolor{Blue}{zone $ef$}.]{(a) Regressed profile of normalized turbulence kinetic energy for baseline RANS and (b) in-house DNS for \textcolor{red}{zone $ab$}, \textcolor{Green}{zone $cd$} and \textcolor{Blue}{zone $ef$}.
Actual datasets (gray profiles) for baseline RANS and in-house DNS are provided for reference.} \label{fig:SSTLMDNS_origin_fit_all.pdf} \end{figure} \subsubsection{Marker for $k^{*}$} \begin{figure} \centerline{\includegraphics[width=3.5in]{fig_Marker/avek_RANSDNS_xpls_0pt14To0pt60.pdf}} \caption[Mean of regression lines for normalized turbulence kinetic energy of both baseline RANS (line with green squares) and in-house DNS (line with red circles). (a) zone ab; (b) zone cd; and (c) zone ef.]{Mean of regression lines for normalized turbulence kinetic energy of both baseline RANS (line with green squares) and in-house DNS (line with red circles). (a) zone ab; (b) zone cd; and (c) zone ef. Also included are profiles of baseline RANS (gray-dashed) and in-house DNS (gray-solid) for reference.} \label{fig:avek_RANSDNS_xpls_0pt14To0pt60.pdf} \end{figure} \begin{figure*} \centerline{\includegraphics[width=5.5in]{fig_Marker/kcorrection_factor_RANSDNS_three.pdf}} \caption{Defining the marker function for (a) zone $ab$, (b) zone $cd$ and (c) zone $ef$ based on the corresponding discrepancy data.} \label{fig:kcorrection factor_RANSDNS_three.pdf} \end{figure*} \begin{figure} \centerline{\includegraphics[width=3.5in]{fig_Marker/RANSDNS_xpls_0pt3To0pt60_withIndicator.pdf}} \caption{Defining the marker function for subzone $mf$.} \label{fig:RANSDNS_xpls_0pt3To0pt60_withIndicator.pdf} \end{figure} \begin{figure} \centerline{\includegraphics[width=3.5in]{fig_Marker/RANS_contour_Marker_CF_k.pdf}} \caption[Contours of $M_{k}$ (Eq. \ref{Eqn:Markerfunc}) for (a) $0 < M_{k} < 1$ and (b) $1 < M_{k} < 10$ in an $xy$ plane.]{Contours of $M_{k}$ (Eq. \ref{Eqn:Markerfunc}) for (a) $0 < M_{k} < 1$ and (b) $1 < M_{k} < 10$ in an $xy$ plane.
The dashed lines in (a) and (b) denote the actual locations on the suction side of the airfoil, which separate the $ab$, $cd$, $em$ and $mf$ zones.} \label{fig:RANS_contour_Marker_CF_k.pdf} \end{figure} As noted earlier, relatively few methods have thus far been developed to construct marker functions, e.g., see \cite{emory2013modeling} and \cite{gorle2014deviation}. Essentially, they can be classified into two categories: (1) methods that prescribe a spatially varying magnitude of $\Delta_{B}$ and (2) methods that identify regions deviating from parallel shear flow. All of these methods essentially use only one explanatory variable to predict the error in RANS model predictions. In this study, a novel method based on least-squares high-order regressions is developed to construct a switch marker function for $k^{*}$. This method uses a set of explanatory variables dedicated to the identified untrustworthy zones, which aims to introduce the correct level of uncertainty by directly comparing the RANS predictions for turbulence kinetic energy to the in-house DNS data. We performed numerous tests and found that increasing the order of the polynomial regressions beyond seven no longer gave more accurate results. Consequently, seventh-order polynomial regression lines for the $k/U_{\infty}^2$ profiles were constructed, as shown in Fig. \ref{fig:avek_RANSDNS_xpls_0pt14To0pt60.pdf}. For each of the $ab$, $cd$ and $ef$ zones, the averaged regression relations for both RANS and in-house DNS are computed using the following equation: \begin{equation}\label{Eqn:normk_fit} {k}_{RANS/DNS}^{ave}|_{zone \ ab/cd/ef} = \frac{\sum_{i=1}^{n} P_{i}(\frac{y}{c}|_{o})}{n}, \end{equation} where $i$ represents the $i$th location on the suction side of the SD7003 airfoil (there are 32 selected locations), $P_{i}$ represents the polynomial regression at the $i$th location, and $n$ is the number of locations in each zone, as summarized in Table \ref{table:marker_ranges}.
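The zone averaging of Eq. \ref{Eqn:normk_fit} amounts to evaluating each location's fitted polynomial $P_{i}$ on a common $y/c|_{o}$ grid and taking the mean across locations. A sketch with hypothetical coefficient rows (stand-ins for the fitted seventh-order polynomials of one zone):

```python
import numpy as np

# Common shifted wall-normal grid, y/c|_o (hypothetical range).
y_o = np.linspace(0.0, 0.05, 100)

# Hypothetical zone: n = 5 locations (e.g. zone ab), each with 8
# descending-order polynomial coefficients from a seventh-order fit.
n_locations = 5
rng = np.random.default_rng(1)
P = rng.uniform(-1.0, 1.0, size=(n_locations, 8))

# Evaluate each location's polynomial on the grid, then average
# across locations, as in Eq. (normk_fit).
profiles = np.array([np.polyval(p_i, y_o) for p_i in P])
k_ave = profiles.sum(axis=0) / n_locations
```

The same averaging is applied separately to the RANS and DNS regressions of each zone before the two averaged profiles are compared.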
The regression-based $k/U_{\infty}^2$ profiles for the $ab$, $cd$ and $ef$ zones are plotted in Figs. \ref{fig:avek_RANSDNS_xpls_0pt14To0pt60.pdf} (a), (b) and (c), respectively. The two solid lines with filled markers represent the means of the regression-based datasets for RANS and in-house DNS. Figs. \ref{fig:avek_RANSDNS_xpls_0pt14To0pt60.pdf} (a), (b) and (c) clearly show the discrepancy between these two averaged regression relations in each zone. For the $ab$ zone, Fig. \ref{fig:avek_RANSDNS_xpls_0pt14To0pt60.pdf} (a) shows a small discrepancy, close to zero, at the wall, i.e. $y/c|_{o} = 0$, as well as in the far outer region, i.e. $y/c|_{o} > 0.025$. The discrepancy tends to increase with $y/c|_{o}$ and peaks around $y/c|_{o} = 0.015$. Within the $cd$ zone, shown in Fig. \ref{fig:avek_RANSDNS_xpls_0pt14To0pt60.pdf} (b), there is a large discrepancy at the wall; the discrepancy remains at a nearly constant level until it peaks around $y/c|_{o} = 0.015$, then gradually decreases with $y/c|_{o}$, approaching zero. It is interesting that the discrepancy peaks around $y/c|_{o} = 0.015$ for both the $ab$ and $cd$ zones (the latter covering the aft portion of the LSB). On the other hand, a relatively small discrepancy is observed consistently throughout the entire boundary layer for the $ef$ zone, as shown in Fig. \ref{fig:avek_RANSDNS_xpls_0pt14To0pt60.pdf} (c). This indicates that the RANS-based transition model \cite{langtry2009correlation} tends to be more trustworthy in its predictions of the turbulence kinetic energy in the far downstream region than within or close to the LSB, i.e. the $ab$ and $cd$ zones. From Figs.
\ref{fig:avek_RANSDNS_xpls_0pt14To0pt60.pdf} (a), (b) and (c), the discrepancy between the averaged regression relations for RANS and in-house DNS describes the degree of untrustworthiness in the $y/c|_{o}$ direction from the $ab$ zone to the $ef$ zone across the suction side; therefore, this discrepancy can be used as an approximation to a marker function. This study defines a correction factor based on the $k/U_{\infty}^2$ discrepancy between the regression-based RANS and in-house DNS, which can be written as follows: \begin{equation}\label{Eqn:CF_k} CF_{k} = \left| \frac{\text{averaged DNS}}{\text{averaged SSTLM}} \right| . \end{equation} Equation \ref{Eqn:CF_k} indicates that $CF_{k}$ is always non-negative, which satisfies the physical realizability constraint, i.e. $k^{*} \geq 0$. For each zone, the discrepancy data obtained using Eq. \ref{Eqn:CF_k} are depicted by the blue solid circles in Figs. \ref{fig:kcorrection factor_RANSDNS_three.pdf} (a), (b) and (c). Using MATLAB, the marker function for each zone can be constructed by fitting to the corresponding $CF_{k}$ data, i.e., fitting a seventh-order polynomial to the discrepancy data for the $ab$ and $ef$ zones, and fitting a Fourier series to the discrepancy data for the $cd$ zone, as shown in Figs. \ref{fig:kcorrection factor_RANSDNS_three.pdf} (a), (c), and (b), respectively.
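The correction factor of Eq. \ref{Eqn:CF_k} can be sketched as a pointwise absolute ratio of the two zone-averaged profiles; the Gaussian-shaped averaged profiles below are synthetic stand-ins for the paper's data, with a small offset so the denominator never vanishes.

```python
import numpy as np

# Shifted wall-normal grid and synthetic zone-averaged profiles
# (hypothetical stand-ins, not the paper's regressions).
y_o = np.linspace(0.0, 0.05, 100)
k_ave_dns = 0.02 * np.exp(-((y_o - 0.015) / 0.012)**2) + 1e-4
k_ave_rans = 0.01 * np.exp(-((y_o - 0.015) / 0.012)**2) + 1e-4

# Eq. (CF_k): absolute ratio of averaged DNS to averaged SSTLM (RANS).
# The absolute value guarantees CF_k >= 0, so k* = CF_k * k stays
# realizable (k* >= 0).
CF_k = np.abs(k_ave_dns / k_ave_rans)
```

Fitting a polynomial or Fourier series to such $CF_{k}$ data, zone by zone, then gives the analytic marker function used in Eq. \ref{Eqn:Markerfunc}.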
Therefore, a switch marker function that introduces local injection of perturbation into the $ab$, $cd$, $em$ and $mf$ zones can be written as follows: \begin{equation}\label{Eqn:Markerfunc} \text {Switch $M_{k}$ }= \begin{cases}\textcolor{red}{a_{0}^{ab}(\frac{y-y_{w}^{ab}}{c})^{7} + a_{1}^{ab}(\frac{y-y_{w}^{ab}}{c})^{6} +\ldots+}\\ \textcolor{red}{a_{5}^{ab}(\frac{y-y_{w}^{ab}}{c})^{2}+a_{6}^{ab}(\frac{y-y_{w}^{ab}}{c}) +a_{7}^{ab}} \quad \textcolor{red}{\text { if }} \textcolor{red}{zone \ ab,} \\ \\ \textcolor{Green}{a_{0}^{cd}+a_{1}^{cd} \cos \left(w\left(\frac{y-y_{w}^{cd}}{c}\right)\right) +} \\ \textcolor{Green}{b_{1}^{cd} \sin \left(w\left(\frac{y-y_{w}^{cd}}{c}\right)\right)+}\\ \textcolor{Green}{a_{2}^{cd} \cos \left(2w\left(\frac{y-y_{w}^{cd}}{c}\right)\right)+} \quad \textcolor{Green}{\text { if } zone \ cd,} \\ \textcolor{Green}{b_{2}^{cd} \sin \left(2w\left(\frac{y-y_{w}^{cd}}{c}\right)\right)} \\ \\ \textcolor{blue}{a_{0}^{em}(\frac{y-y_{w}^{em}}{c})^{7}+a_{1}^{em}(\frac{y-y_{w}^{em}}{c})^{6}+\ldots+}\\ \textcolor{blue}{a_{5}^{em}(\frac{y-y_{w}^{em}}{c})^{2}+a_{6}^{em}(\frac{y-y_{w}^{em}}{c})+a_{7}^{em}} \quad \textcolor{blue}{\text { if }} \textcolor{blue}{ zone \ em ,}\\ \\ \textcolor{gray}{2.8} \qquad \qquad \textcolor{gray}{\text { if } zone \ mf ,} \end{cases} \end{equation} \noindent where $a_{0}^{ab},\ldots,a_{7}^{ab}$ and $a_{0}^{em},\ldots,a_{7}^{em}$ represent the polynomial coefficients, and $a_{0}^{cd}, a_{1}^{cd}, b_{1}^{cd}, a_{2}^{cd}, b_{2}^{cd}, w$ represent the Fourier coefficients. Therefore, the perturbed turbulence kinetic energy, $k^{*}$, is defined as follows: \begin{equation}\label{Eqn:kstar} k^{*} = kM_{k}. \end{equation} It is worth noting that the development of spatial variations in $M_{k}$ is precisely what turbulence machine-learning efforts focus on: when a neural network model is developed to predict the perturbation in the flow, it will not predict the same perturbation at all points in the flow domain.
Instead, it naturally leads to a non-uniform perturbation. The key differences between the present work and the work based on machine learning are twofold: (1) the choice of the model and (2) the choice of the modeling basis (the explanatory variables utilized to predict the perturbation). We use a seventh-order regression, whereas the work based on machine learning uses a random forest or a neural network. We utilize a small set of explanatory variables in $M_{k}$, developed based on physical arguments and prior experience, whereas the work based on machine learning utilizes a large set of explanatory variables (called features), numbering almost $100$, which includes invariants of the mean velocity field and the scaled distance from the wall. If a uniform value of $M_{k}$ is used, then Eq. \ref{Eqn:kstar} becomes \begin{equation}\label{Eqn:Deltak} k^{*} = k\Delta_{k}, \end{equation} \noindent where $k$ is the turbulence kinetic energy from the previous time step, and $\Delta_{k}$ represents a uniform value of $M_{k}$. The value of $\Delta_{k}$ must be larger than zero to satisfy physical realizability. Due to the curvature of the airfoil surface, $y_{w}$ varies along the suction side. Letting $y_{w} = f(x)$ represent the curved upper surface, its gradient can be calculated as the derivative $df(x)/dx$. In Eq. \ref{Eqn:Markerfunc}, the strategy for choosing a reasonable magnitude of $y_{w}$ as representative of a zone is to find the minimum value of $y_{w}$, which ensures that the realizability constraint $M_{k} \geq 0$ is satisfied, and hence $k^{*} \geq 0$. Note that the value of $df(x)/dx$ approaches zero around $x/c = 0.266$, i.e. within the $cd$ zone. This implies that the minimum value of $y_{w}$ is located closer to the leading edge for $x/c < 0.266$, while the minimum value of $y_{w}$ is located closer to the trailing edge for $x/c > 0.266$. As noted earlier in Fig.
\ref{fig:BL_black}, the $ef$ zone is composed of two subzones, i.e. $em$ and $mf$, as illustrated in Fig. \ref{fig:RANSDNS_xpls_0pt3To0pt60_withIndicator.pdf}. Figure \ref{fig:RANSDNS_xpls_0pt3To0pt60_withIndicator.pdf} enlarges the $ef$ zone, in which the regression-based $k/U_{\infty}^2$ profiles for both RANS and in-house DNS are shifted down to the origin of $y$ to highlight the discrepancy in the region $0.3 < x/c < 0.6$; the profiles for the $em$ subzone are painted blue, while those for the $mf$ subzone are painted gray. A similar level of discrepancy between the regression-based RANS and in-house DNS profiles is observed across the $mf$ subzone, as shown in Fig. \ref{fig:RANSDNS_xpls_0pt3To0pt60_withIndicator.pdf}. The discrepancy is significant in the vicinity of the wall, i.e. at $y/c|_{o} = 0.004$, which corresponds to an approximate value of $2.8$ for $CF_{k}$ in Fig. \ref{fig:kcorrection factor_RANSDNS_three.pdf} (c). For the sake of simplicity, the uniform value of $2.8$ for $M_{k}$ is employed for the $mf$ subzone in Eq. \ref{Eqn:Markerfunc}. We visualize the spatial variation of the magnitude of $M_{k}$ from the contours of $0 < M_{k} < 1$ and $1 < M_{k} < 10$, as shown in Figs. \ref{fig:RANS_contour_Marker_CF_k.pdf} (a) and (b), respectively. From Fig. \ref{fig:RANS_contour_Marker_CF_k.pdf} (a), it is clear that magnitudes of $0 < M_{k} < 1$ are more prevalent in the $ab$ zone and in the upper portion of the $cd$ zone. In Fig. \ref{fig:RANS_contour_Marker_CF_k.pdf} (a), an overall decreasing trend of the magnitude of $M_{k}$ with $y/c$ is observed for the $ab$ zone. On the other hand, the magnitude of $M_{k}$ for both the $cd$ and $em$ zones varies with $y/c$ in a fashion consistent with the behavior observed in Figs. \ref{fig:kcorrection factor_RANSDNS_three.pdf} (b) and (c).
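The zone-wise switch of Eq. \ref{Eqn:Markerfunc} and the perturbation of Eq. \ref{Eqn:kstar} can be sketched as a piecewise evaluation in the shifted coordinate $y/c|_{o}=(y-y_{w})/c$. All coefficient values below are hypothetical placeholders, not the fitted values of this study; only the constant $2.8$ for the $mf$ subzone comes from the text.

```python
import numpy as np

# Hypothetical coefficients (descending powers for the polynomials,
# two-term Fourier series for zone cd); NOT the paper's fitted values.
AB_COEFFS = np.array([0.0, 0.0, 0.0, 0.0, 0.0, -400.0, 10.0, 0.5])
EM_COEFFS = np.array([0.0, 0.0, 0.0, 0.0, 0.0, -200.0, 5.0, 1.0])
CD_FOURIER = dict(a0=1.5, a1=0.8, b1=0.3, a2=0.1, b2=0.05, w=60.0)

def marker_k(zone: str, y_o: float) -> float:
    """Evaluate the switch marker M_k at y_o = (y - y_w)/c for a zone."""
    if zone == "ab":
        return float(np.polyval(AB_COEFFS, y_o))       # 7th-order polynomial
    if zone == "cd":
        c = CD_FOURIER                                  # 2-term Fourier series
        return float(c["a0"]
                     + c["a1"] * np.cos(c["w"] * y_o)
                     + c["b1"] * np.sin(c["w"] * y_o)
                     + c["a2"] * np.cos(2 * c["w"] * y_o)
                     + c["b2"] * np.sin(2 * c["w"] * y_o))
    if zone == "em":
        return float(np.polyval(EM_COEFFS, y_o))        # 7th-order polynomial
    if zone == "mf":
        return 2.8                                      # uniform value (from the text)
    raise ValueError(f"unknown zone: {zone}")

def perturbed_k(k: float, zone: str, y_o: float) -> float:
    """Eq. (kstar): k* = k * M_k."""
    return k * marker_k(zone, y_o)
```

In a solver, each grid point inside the untrustworthy region would be assigned its zone from the $x/c$ ranges of Table \ref{table:marker_ranges} before $M_{k}$ is evaluated; realizability requires coefficients for which $M_{k} \geq 0$ over each zone's $y/c|_{o}$ range.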
Moreover, a uniform magnitude of $M_{k}$ is observed for the $mf$ subzone, which confirms the uniform value of $2.8$. \section{Results and discussion} \subsection{Sensitivity to $\Delta_k$} \subsubsection{Skin friction coefficient} A set of $C_{f}$ distributions under the $\Delta_{k}$ perturbations is shown in Fig. \ref{fig:cf_uniformk_line.pdf}. The baseline prediction for $C_{f}$ is used as a reference. Increasing magnitudes of $\Delta_{k}$ are indicated by lighter to darker hues, as shown in Fig. \ref{fig:cf_uniformk_line.pdf}. In addition, red solid arrows are added to indicate the trend of $C_{f}$ with increasing $\Delta_{k}$, and the regions that contain a peak negative (trough) and a peak positive (crest) value of $C_{f}$ are enlarged to distinguish the clusters of $C_{f}$ profiles. In Fig. \ref{fig:cf_uniformk_line.pdf}, the magnitude of the $C_{f}$ profiles at the trough (around $X_{T}$) increases with the $\Delta_{k}$ perturbations for $\Delta_{k} < 1$ ($\Delta_{k} = \big\{ 0.1, 0.25, 0.5, 0.75 \big\}$) and $\Delta_{k} > 1$ ($\Delta_{k} = \big\{ 2, 4, 6, 8 \big\}$), respectively, indicating a monotonic increase. As the flow moves further downstream within the aft portion of the LSB, the magnitude of $C_{f}$ tends to decrease monotonically as the value of $\Delta_{k}$ is increased; as the flow proceeds further downstream of $X_{R}$, a monotonic increase of the magnitude of $C_{f}$ with $\Delta_{k}$ again occurs for $\Delta_{k} < 1$ and $\Delta_{k} > 1$. It should be noted that the $C_{f}$ profiles tend to converge and collapse onto a single curve when $\Delta_{k}$ is increased. The baseline prediction is well enveloped between the $\Delta_{k} < 1$ and $\Delta_{k} > 1$ perturbations. Compared to the baseline prediction, rather subtle increases in the magnitude of $C_{f}$ are observed for $\Delta_{k} < 1$, as contrasted with more noticeable increases in $C_{f}$ for $\Delta_{k} > 1$.
This indicates that the simulation's response to the injection of $\Delta_{k}$ is stronger for $\Delta_{k} > 1$ than for $\Delta_{k} < 1$. This behavior is highlighted in the enlarged trough and crest regions. In addition, a dashed red arrow is added along the line of zeros of $C_{f}$ to indicate the tendency of $X_{R}$ to shift in the upstream direction as the value of $\Delta_{k}$ is increased, for both $\Delta_{k} < 1$ and $\Delta_{k} > 1$. Since wall shear stress is a consequence of momentum transfer from the mean flow to the wall surface \cite{monteith2013principles}, the magnitude of the mean velocity is closely related to the magnitude of $C_{f}$. In the aft portion of the LSB, the $\Delta_{k} > 1$ perturbations overall yield a smaller magnitude of $C_{f}$, and an increase in the magnitude of the mean velocity is expected, while the opposite is true for the $\Delta_{k} < 1$ perturbations. \begin{figure} \centerline{\includegraphics[width=3.5in]{fig_Marker/cf_uniformk_line.pdf}} \caption[Skin friction coefficient distributions over the suction side of the airfoil with enlarged regions at the trough and the crest. Displayed are $k^{*}$ perturbations with uniform $\Delta_{k}$: $\Delta_{k} < 1$ ($\Delta_{k} = \big\{ 0.1, 0.25, 0.5, 0.75 \big\}$) and $\Delta_{k} > 1$ ($\Delta_{k} = \big\{ 2, 4, 6, 8 \big\}$).]{Skin friction coefficient distributions over the suction side of the airfoil with enlarged regions at the trough and the crest. Displayed are $k^{*}$ perturbations with uniform $\Delta_{k}$: $\Delta_{k} < 1$ ($\Delta_{k} = \big\{ 0.1, 0.25, 0.5, 0.75 \big\}$) and $\Delta_{k} > 1$ ($\Delta_{k} = \big\{ 2, 4, 6, 8 \big\}$); increasing values indicated by lighter to darker hues. Red solid arrows ($\color{red} \longrightarrow$) are provided to indicate increasing magnitude of $C_{f}$ with $\Delta_{k}$; the red dashed arrow ($\color{red} \dashrightarrow$) is provided to indicate the shift in reattachment point with $\Delta_{k}$.
Note that the value of $\Delta_{k}$ must be larger than zero to satisfy realizability. The baseline prediction is provided for reference.} \label{fig:cf_uniformk_line.pdf} \end{figure} \subsubsection{Mean velocity field} \begin{figure*} \centerline{\includegraphics[width=5.5in]{fig_Marker/RANS_contour_streamline_normU_All_uniformk_subplot_foucs.pdf}} \caption[Contours of $\left\langle U \right \rangle/U_{\infty}$ with different values of $\Delta_{k}$: $\Delta_{k} < 1$ ($\Delta_{k} = \big\{ 0.1, 0.25, 0.5 \big\}$) and $\Delta_{k} > 1$ ($\Delta_{k} = \big\{4, 6, 8 \big\}$) in an $xy$ plane.]{Contours of $\left\langle U \right \rangle/U_{\infty}$ with different values of $\Delta_{k}$: $\Delta_{k} < 1$ ($\Delta_{k} = \big\{ 0.1, 0.25, 0.5 \big\}$) and $\Delta_{k} > 1$ ($\Delta_{k} = \big\{4, 6, 8 \big\}$) in an $xy$ plane. Baseline prediction is provided for reference, and in-house DNS data are included for comparison. Streamlines show the size of the LSB on the suction side of the airfoil.} \label{fig:RANS_contour_streamline_normU_All_uniformk_subplot_foucs.pdf} \end{figure*} Contours of the mean velocity normalized with the freestream velocity, $\left\langle U \right \rangle/U_{\infty}$, from the baseline, the $\Delta_{k}$ perturbations, and the in-house DNS of \cite{zhang2021turbulent} in an $xy$ plane are shown in Fig. \ref{fig:RANS_contour_streamline_normU_All_uniformk_subplot_foucs.pdf}. Streamlines depicting the large recirculation vortex within the LSB, characterized by the region of reverse flow ($\left\langle U \right \rangle/U_{\infty} < 0$) \cite{rist2002numerical}, are included as well. This large recirculating region contains large-scale events (coherent structures), which fluctuate at low frequency due to the very large scale of the unsteadiness of the recirculating region itself \cite{kiya1985structure}. As a consequence, the $\left\langle U \right \rangle/U_{\infty}$ contours exhibit an LSB that survives time-averaging, as shown in Fig.
\ref{fig:RANS_contour_streamline_normU_All_uniformk_subplot_foucs.pdf}. This behavior has been observed in the experimental measurements of \cite{zhang2018lagrangian}, the RANS analysis of \cite{catalano2011rans}, and the ILES/LES data of \cite{galbraith2010implicit} and \cite{garmann2013comparative}. Figure \ref{fig:RANS_contour_streamline_normU_All_uniformk_subplot_foucs.pdf} clearly shows that the baseline prediction gives an LSB length comparable to the in-house DNS; however, the LSB height is under-predicted. This inaccurate prediction of the LSB height alters the effective shape of the airfoil and hence introduces inaccuracy into the simulation results \cite{gaster1967structure,spalart2000mechanisms}. This reflects the error in the RANS model predictions in the region of the LSB. Compared to the baseline prediction, rather subtle responses to the $\Delta_{k} < 1$ perturbations ($\Delta_{k} = 0.1, 0.25, 0.5$) are observed, which confirms the behavior shown in Fig. \ref{fig:cf_uniformk_line.pdf}. On the other hand, more noticeable changes are observed with the $\Delta_{k} > 1$ perturbations ($\Delta_{k} = 4, 6, 8$), i.e., a clear suppression of the LSB length; in addition, it is clear that the magnitude of the mean velocity increases downstream of the LSB within the attached turbulent boundary layer, characterized by streamlines more clustered than in the baseline prediction. This confirms the reduction in the magnitude of $C_{f}$ in the aft portion of the LSB shown in Fig. \ref{fig:cf_uniformk_line.pdf}. Two monotonic behaviors are observed: first, the size of the recirculating region decreases monotonically with $\Delta_{k}$ (shallower region of streamlines), showing a tendency to deviate from the in-house DNS contour; second, the magnitude of $\left\langle U \right \rangle/U_{\infty}$ increases monotonically with $\Delta_{k}$ in the attached turbulent boundary layer (more densely clustered streamlines), showing a tendency to approach closer to the in-house DNS contour.
\subsubsection{Reynolds shear stress} Contours of the Reynolds shear stress normalized with the freestream velocity squared, $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$, from the baseline, the $\Delta_{k}$ perturbations, and the in-house DNS of \cite{zhang2021turbulent} in an $xy$ plane are presented in Fig. \ref{fig:RANS_contour_streamline_normuv_All_uniformk_subplot_foucs.pdf}. Also included are the streamlines depicting the recirculation vortex region. From Fig. \ref{fig:RANS_contour_streamline_normuv_All_uniformk_subplot_foucs.pdf}, all of the $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ contour plots show a magnitude of nearly zero in the region near the leading edge and in the outer region of the flow, and a peak is found within the LSB around $X_{T}$, i.e., the bright yellow region, from which the magnitude of $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ reduces as the flow moves further downstream. A similar behavior was also observed by Zhang and Rival \cite{zhang2018lagrangian} in their experimental measurements. Overall, the baseline prediction for the Reynolds shear stress gives a smaller value than the in-house DNS data, especially in the LSB. It should be noted that the contour plots of $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ and $\left\langle U \right \rangle/U_{\infty}$ show a similar trend: a lack of sensitivity to the $\Delta_{k} < 1$ ($\Delta_{k} = 0.1, 0.25, 0.5$) perturbations, but a rather strong sensitivity to the $\Delta_{k} > 1$ ($\Delta_{k} = 4, 6, 8$) perturbations in both the transitional and turbulent regions. In general, the Reynolds shear stress contour exhibits a larger response to $\Delta_{k}$ than the mean velocity contour. From Fig.
\ref{fig:RANS_contour_streamline_normuv_All_uniformk_subplot_foucs.pdf}, the $\Delta_{k} < 1$ perturbations give a somewhat larger value of $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ than the baseline prediction, while the $\Delta_{k} > 1$ perturbations do the opposite. In addition, the $\Delta_{k} < 1$ perturbations slightly reduce the magnitude of $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ as the value of $\Delta_{k}$ is increased, as opposed to the $\Delta_{k} > 1$ perturbations, which greatly reduce the magnitude of $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$. In addition, it is clear that the peak value of $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ gradually becomes smaller as the value of $\Delta_{k}$ is increased, in particular for $\Delta_{k} > 1$. This is accompanied by a suppression of the recirculating region and hence a decrease in the turbulence kinetic energy \cite{lengani2014pod}. In Fig. \ref{fig:RANS_contour_streamline_normuv_All_uniformk_subplot_foucs.pdf}, the $\Delta_{k} > 1$ perturbations tend to approach closer to the in-house DNS data in the turbulent boundary layer, while the $\Delta_{k} < 1$ perturbations tend to result in a closer agreement with the in-house DNS data in the LSB. According to Lengani \textit{et al.} \cite{lengani2014pod}, the overall turbulence kinetic energy can be decomposed into large-scale coherent (Kelvin-Helmholtz induced) and stochastic (turbulence-induced) contributions. With the total energy of the mean flow held constant, the $\Delta_{k}$ perturbations in effect redistribute the Reynolds-shear-stress momentum transfer between the turbulence and the mean flow. 
\begin{figure*} \centerline{\includegraphics[width=5.5in]{fig_Marker/RANS_contour_streamline_normuv_All_uniformk_subplot_foucs.pdf}} \caption[Contours of $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ with different values of $\Delta_{k}$: $\Delta_{k} < 1$ ($\Delta_{k} = \big\{ 0.1, 0.25, 0.5 \big\}$) and $\Delta_{k} > 1$ ($\Delta_{k} = \big\{4, 6, 8 \big\}$) in an $xy$ plane.]{Contours of $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ with different values of $\Delta_{k}$: $\Delta_{k} < 1$ ($\Delta_{k} = \big\{ 0.1, 0.25, 0.5 \big\}$) and $\Delta_{k} > 1$ ($\Delta_{k} = \big\{4, 6, 8 \big\}$) in an $xy$ plane. Baseline prediction is provided for reference, and in-house DNS data are included for comparison. Streamlines show the size of the LSB on the suction side of the airfoil.} \label{fig:RANS_contour_streamline_normuv_All_uniformk_subplot_foucs.pdf} \end{figure*} \subsection{Comparison between uniform $\Delta_{k}$ and $M_{k}$} \subsubsection{Skin friction coefficient and pressure coefficient} \begin{figure} \centerline{\includegraphics[width=3.5in]{fig_Marker/cfcp_uniformkVsOnlyMarker.pdf}} \caption[(a) Profile of skin friction coefficient and (b) pressure coefficient with enlarged regions at the flat spot and the kink.]{(a) Profile of skin friction coefficient and (b) pressure coefficient with enlarged regions at the flat spot and the kink. Displayed are envelopes for uniform $k^{*}$ perturbations: $\Delta_{k} = 0.1$ (red envelope), $\Delta_{k} = 8$ (gray envelope), and $M_{k}$ (blue envelope). The baseline prediction is provided for reference. $\circ$ in-house DNS data \cite{zhang2021turbulent}.} \label{fig:cfcp_uniformkVsOnlyMarker.pdf} \end{figure} Distributions of the skin friction coefficient and the pressure coefficient, $C_{f}$ and $C_{p}$, are shown in Figs. \ref{fig:cfcp_uniformkVsOnlyMarker.pdf} (a) and (b). 
The in-house DNS \cite{zhang2021turbulent} and ILES/LES data of \cite{galbraith2010implicit} and \cite{garmann2013comparative} are included for comparison. In Figs. \ref{fig:cfcp_uniformkVsOnlyMarker.pdf} (a) and (b), an enveloping behavior with respect to the baseline prediction is observed. Figure \ref{fig:cfcp_uniformkVsOnlyMarker.pdf} (a) shows that the effect of $M_{k}$ is more prevalent in the aft portion of the LSB ($0.25 < x/c < 0.28$), as well as in the region downstream of the LSB ($0.28 < x/c < 0.6$). This reflects the effect of spatial variability in $M_{k}$. In addition, Fig. \ref{fig:cfcp_uniformkVsOnlyMarker.pdf} (a) clearly shows that the uncertainty bound generated from the $M_{k}$ perturbation sits within the gray envelope of $\Delta_{k} = 8$ and is thus well encompassed by the uniform $\Delta_{k} = 0.1$ and $\Delta_{k} = 8$ perturbations. It is interesting to note that the $\Delta_{k} = 8$ perturbation overall tends to approach closer to the in-house DNS \cite{zhang2021turbulent} and ILES/LES data of \cite{galbraith2010implicit} and \cite{garmann2013comparative} than the $\Delta_{k} = 0.1$ perturbation does. At the trough shown in Fig. \ref{fig:cfcp_uniformkVsOnlyMarker.pdf} (a), the $\Delta_{k} = 8$ perturbation gives a larger magnitude of $C_{f}$, sitting below the baseline prediction and showing a clear tendency to approach closer to the in-house DNS \cite{zhang2021turbulent} and LES \cite{garmann2013comparative} data. In addition, Fig. \ref{fig:cfcp_uniformkVsOnlyMarker.pdf} (a) clearly shows that the reattachment point is well encompassed by the $\Delta_{k} = 8$ perturbation. 
Further downstream of the reattachment point, the uncertainty bounds generated from both the $\Delta_{k} = 8$ and $M_{k}$ perturbations show a tendency to approach closer to the in-house DNS \cite{zhang2021turbulent} and the ILES/LES data of \cite{galbraith2010implicit} and \cite{garmann2013comparative}, while the $\Delta_{k} = 0.1$ perturbation under-predicts the baseline prediction and deviates from the reference data. At the flat spot and at the kink ($X_{R}$) followed by a steep drop in the $C_{p}$ profile, the uncertainty bound generated from the $M_{k}$ perturbation is encompassed by the $\Delta_{k} = 8$ perturbation, as shown in the enlarged regions in Fig. \ref{fig:cfcp_uniformkVsOnlyMarker.pdf} (b). In addition, a tendency for the $\Delta_{k} = 8$ and $M_{k}$ perturbations to approach closer to the in-house DNS \cite{zhang2021turbulent} and LES data of \cite{garmann2013comparative} is observed at the flat spot and the kink. On the other hand, it is interesting that the $\Delta_{k} = 0.1$ perturbation shows a tendency of approaching toward the ILES data of \cite{galbraith2010implicit} in the enlarged regions shown in Fig. \ref{fig:cfcp_uniformkVsOnlyMarker.pdf} (b). In the region downstream of the kink and along the entire pressure side, the effects of both the $\Delta_{k}=0.1$ and $\Delta_{k}=8$ perturbations are almost negligible in magnitude, i.e., the bounds collapse onto the baseline prediction. It should be noted that the baseline prediction overall shows good agreement with the in-house DNS data \cite{zhang2021turbulent} and the ILES/LES data of \cite{galbraith2010implicit} and \cite{garmann2013comparative}, with especially good agreement with the ILES/LES data on the pressure side. This indicates a low level of model form uncertainty in the predictions for $C_{p}$ in these regions. 
\subsubsection{Mean velocity field} The $\left\langle U \right \rangle/U_{\infty}$ profiles across the entire boundary layer on the suction side of the airfoil are plotted in Fig. \ref{fig:UQ_uniformkvsOnlyMarker_U_RepeatOn_Tu0027.pdf}. Overall, the baseline prediction at each location is encompassed by the $\Delta_{k} = 0.1$ and $\Delta_{k} = 8$ perturbations, exhibiting an enveloping behavior, as shown in Fig. \ref{fig:UQ_uniformkvsOnlyMarker_U_RepeatOn_Tu0027.pdf}. The baseline prediction for the $\left\langle U \right \rangle/U_{\infty}$ profile at $x/c = 0.15$ ($X_{T}$) matches the in-house DNS profile of \cite{zhang2021turbulent}, except in the regions $y/c|_{o} < 0.007$ (next to the wall) and $y/c|_{o} > 0.011$ (upper portion of the boundary layer), where it gives slightly smaller values of $\left\langle U \right \rangle/U_{\infty}$, as shown in Fig. \ref{fig:UQ_uniformkvsOnlyMarker_U_RepeatOn_Tu0027.pdf}. At $x/c = 0.2$ (in the aft portion of the LSB), the baseline prediction for the $\left\langle U \right \rangle/U_{\infty}$ profile shows good agreement with the in-house DNS profile in the region of reverse flow $y/c|_{o} < 0.011$, with a somewhat reduced $\left\langle U \right \rangle/U_{\infty}$ profile in the upper portion of the boundary layer $0.011 < y/c|_{o} < 0.027$. For the attached turbulent boundary layer, the baseline predictions for the $\left\langle U \right \rangle/U_{\infty}$ profiles at $x/c = 0.3$, $x/c = 0.4$ and $x/c = 0.5$ give smaller values of $\left\langle U \right \rangle/U_{\infty}$ than the in-house DNS profiles, and the discrepancies are comparable with each other. 
\begin{figure} \centerline{\includegraphics[width=4.0in]{fig_Marker/UQ_uniformkvsOnlyMarker_U_RepeatOn_Tu0027.pdf}} \caption[Streamwise mean velocity profiles in the aft portion of the LSB ($x/c = 0.15$ and $0.2$) and in the attached turbulent boundary layer ($x/c = 0.3, 0.4$ and $0.5$).]{Streamwise mean velocity profiles in the aft portion of the LSB ($x/c = 0.15$ and $0.2$) and in the attached TBL ($x/c = 0.3, 0.4$ and $0.5$). From left to right are $x/c = 0.15, 0.2, 0.3, 0.4$ and $0.5$, respectively. Displayed are envelopes for two extreme $\Delta_{k}$ perturbations considered in this study: $\Delta_{k} = 0.1$ (red envelope), $\Delta_{k} = 8$ (gray envelope), and $M_{k}$ (blue envelope). The baseline prediction is provided for reference. $\circ$ in-house DNS data \cite{zhang2021turbulent}.} \label{fig:UQ_uniformkvsOnlyMarker_U_RepeatOn_Tu0027.pdf} \end{figure} Figure \ref{fig:UQ_uniformkvsOnlyMarker_U_RepeatOn_Tu0027.pdf} shows that the $\Delta_{k} = 0.1$ perturbation under-predicts the baseline prediction, and the simulation's response to the $\Delta_{k} = 0.1$ perturbation is negligibly small within both the transitional and turbulent boundary layer. This well confirms the behavior of the $\Delta_{k} = 0.1$ perturbation in the prediction for $C_{f}$, as shown in Fig. \ref{fig:cfcp_uniformkVsOnlyMarker.pdf} (a). As the flow proceeds downstream from $x/c = 0.15$ to $x/c = 0.3$, the uncertainty bounds generated from the $\Delta_{k} = 0.1$ perturbations gradually increase in size, although the increase is rather subtle. This confirms the slightly increased $C_{f}$ in magnitude compared to the baseline prediction for the aft portion of the LSB. As the flow moves further downstream from $x/c = 0.4$ to $x/c = 0.5$ (in the attached turbulent boundary layer), the $\Delta_{k} = 0.1$ perturbation gradually reduces the size of the uncertainty bounds at a decreasing rate, reflecting the damping effect of the positive values of $C_{f}$ on the mean flow. 
On the other hand, the $\Delta_{k} = 8$ perturbation over-predicts the baseline prediction, exhibiting rather noticeable uncertainty bounds, as shown in Fig. \ref{fig:UQ_uniformkvsOnlyMarker_U_RepeatOn_Tu0027.pdf}. As the flow proceeds from $x/c = 0.15$ to $x/c = 0.3$, it is interesting to note that the uncertainty bounds generated from the $\Delta_{k} = 8$ perturbations increase markedly in size, showing a tendency of approaching closer to the in-house DNS data. In addition, the effect of the $\Delta_{k} = 8$ perturbation tends to become more prevalent in the near-wall region, which well confirms the significantly reduced $C_{f}$ in magnitude compared to the baseline prediction for $0.15 < x/c < 0.3$, as shown in Fig. \ref{fig:cfcp_uniformkVsOnlyMarker.pdf} (a). As the flow proceeds further downstream from $x/c = 0.4$ to $x/c = 0.5$, the uncertainty bounds become larger in the upper section of the mean velocity profiles, while remaining relatively small in magnitude in the near-wall region due to the large positive values of $C_{f}$ at the crest shown in Fig. \ref{fig:cfcp_uniformkVsOnlyMarker.pdf} (a), reflecting the weakening propagation of the effect of the positive $C_{f}$ values deeper into the outer boundary layer. Unlike the $\Delta_{k} = 0.1$ and $\Delta_{k} = 8$ perturbations, $M_{k}$ identifies the untrustworthy regions in which uncertainty will be injected. In Fig. \ref{fig:UQ_uniformkvsOnlyMarker_U_RepeatOn_Tu0027.pdf}, the uncertainty bounds generated from the $M_{k}$ perturbations in general over-predict the baseline prediction, and sit within the uncertainty bounds generated from the $\Delta_{k} = 8$ perturbations. It should be noted that the sole effect of the $M_{k}$ perturbation on the predicted mean velocity profile is rather small. In section \ref{Sec:compound}, the $M_{k}$ perturbation is compounded with the eigenvalue perturbation ($1c$, $2c$, $3c$) to construct more effective uncertainty bounds. 
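The spatial selectivity that distinguishes $M_{k}$ from the uniform perturbations can be sketched as follows. This is a hedged illustration, not the actual marker construction used here: the flagged chordwise interval and the helper name marker_delta are hypothetical, and only the in-region magnitude $\Delta_{k} = 2.8$ is taken from the text.

```python
import numpy as np

def marker_delta(x_c, delta_in=2.8, region=(0.15, 0.6)):
    """Spatially varying perturbation magnitude in the spirit of M_k.

    Hypothetical sketch: inside the flagged (untrustworthy) chordwise
    interval the TKE perturbation is applied (here the uniform value
    Delta_k = 2.8 quoted in the text), while outside it the baseline is
    retained (delta = 1, i.e. no perturbation). The interval bounds are
    illustrative, not the paper's actual marker field.
    """
    x_c = np.asarray(x_c, dtype=float)
    inside = (x_c >= region[0]) & (x_c <= region[1])
    return np.where(inside, delta_in, 1.0)

x_c = np.array([0.05, 0.2, 0.5, 0.8])
delta = marker_delta(x_c)   # -> [1.0, 2.8, 2.8, 1.0]
```

A field of local factors like this, rather than a single global $\Delta_{k}$, is what confines the injected uncertainty to the LSB and the attached turbulent boundary layer downstream of it.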
\subsubsection{Reynolds shear stress} The predicted profiles for the Reynolds shear stress normalized with the freestream velocity squared, $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$, are shown in Fig. \ref{fig:UQ_uniformk_uv_RepeatOn_Tu0027.pdf}. Under the $\Delta_{k} = 0.1$ and $\Delta_{k} = 8$ perturbations, an enveloping behavior with respect to the baseline prediction can be observed. Figure \ref{fig:UQ_uniformk_uv_RepeatOn_Tu0027.pdf} shows that the baseline prediction for the $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ profile at $x/c = 0.15$ significantly over-predicts the in-house DNS profile, implying a higher level of momentum transfer due to the Reynolds shear stress. In the aft portion of the LSB and downstream of the LSB near the reattachment point ($X_{R}$), the predictions for the $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ profiles at $x/c = 0.2$ and $x/c = 0.3$ exhibit a parabolic-arch shape, revealing the same trend as the in-house DNS data, i.e., a strong increase in the Reynolds shear stress profile around the peak of the parabolic arch. The magnitude of the increase is much greater for the in-house DNS data, probably because the LSB produced is taller than that of the baseline prediction. Further downstream of the LSB, the baseline predictions for the $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ profiles at $x/c = 0.4$ and $x/c = 0.5$ (in the attached turbulent boundary layer) show relatively good agreement with the in-house DNS data, although some discrepancies exist in the regions next to the wall and in the upper section of the Reynolds shear stress profiles. 
\begin{figure} \centerline{\includegraphics[width=3.7in]{fig_Marker/UQ_uniformk_uv_RepeatOn_Tu0027.pdf}} \caption[Reynolds shear stress profiles in the aft portion of the LSB ($x/c = 0.15$ and $0.2$) and in the attached turbulent boundary layer ($x/c = 0.3, 0.4$ and $0.5$).]{Reynolds shear stress profiles in the aft portion of the LSB ($x/c = 0.15$ and $0.2$) and in the attached TBL ($x/c = 0.3, 0.4$ and $0.5$). From left to right are $x/c = 0.15, 0.2, 0.3, 0.4$ and $0.5$, respectively. Displayed are envelopes for the two extreme $\Delta_{k}$ perturbations considered in this study, $\Delta_{k} = 0.1$ (red envelope) and $\Delta_{k} = 8$ (gray envelope), together with $M_{k}$ (blue envelope). The baseline prediction is provided for reference. $\circ$ in-house DNS data \cite{zhang2021turbulent}.} \label{fig:UQ_uniformk_uv_RepeatOn_Tu0027.pdf} \end{figure} In Fig. \ref{fig:UQ_uniformk_uv_RepeatOn_Tu0027.pdf}, the $\Delta_{k} = 0.1$ perturbation increases the magnitude of the $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ profile compared to the baseline prediction. In the aft portion of the LSB ($x/c = 0.15$, $x/c = 0.2$ and $x/c = 0.3$), the $\Delta_{k} = 0.1$ perturbations retain the parabolic-arch shape, with a peak around the maximum height of the arch that gradually decays to zero toward the wall on one side and toward the outer boundary layer (OBL) on the other. In addition, the $\Delta_{k} = 0.1$ perturbations increase the momentum transfer due to the Reynolds shear stress compared to the baseline prediction. Consequently, the $\Delta_{k} = 0.1$ perturbations tend to approach closer to the in-house DNS data except at $x/c = 0.15$, where a deviation from the in-house DNS data is observed. As the flow proceeds further downstream within the attached turbulent boundary layer ($x/c = 0.4$ and $x/c = 0.5$), the effect of the $\Delta_{k} = 0.1$ perturbation gradually weakens with $x/c$, with some of the in-house DNS data being encompassed. 
On the other hand, the $\Delta_{k} = 8$ perturbation decreases the magnitude of the $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ profile compared to the baseline prediction, with the size of the uncertainty bound significantly larger than that for the $\Delta_{k} = 0.1$ perturbation, reflecting the simulation's much stronger response to the $\Delta_{k} = 8$ perturbation. Likewise, a parabolic-arch shape and a behavior similar to that of the $\Delta_{k} = 0.1$ perturbations are observed for the $\Delta_{k} = 8$ perturbations in the aft portion of the LSB ($x/c = 0.15$, $x/c = 0.2$ and $x/c = 0.3$) as well: peaking around the maximum height of the parabolic arch and gradually decreasing in magnitude toward the wall and the OBL. As a result, the $\Delta_{k} = 8$ perturbations reduce the momentum transfer due to the Reynolds shear stress to a great extent around the peak of the parabolic arch. This shows a tendency for the $\Delta_{k} = 8$ perturbations to deviate from the in-house DNS data except at $x/c = 0.15$, where the $\Delta_{k} = 8$ perturbations tend to approach closer to the in-house DNS profile, which lies well below the baseline prediction. Within the attached turbulent boundary layer ($x/c = 0.4$ and $x/c = 0.5$), an important observation for the $\Delta_{k} = 8$ perturbation is that the uncertainty bounds retain a value of zero not only at the wall but also over a finite distance above the wall, which violates the expectation that the Reynolds stresses vanish only at the wall surface due to the no-slip condition \cite{versteeg2007introduction}. This marks the behavior of ``over-perturbation'' with $\Delta_{k} = 8$, and is not physically realizable. Since few studies have been conducted to determine the upper bound of $k^{*}$, this study sheds light on a possible way of determining the upper bound of $k^{*}$ using the resulting Reynolds shear stress. 
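One way to turn this observation into a screen for admissible $\Delta_{k}$ values is sketched below. The criterion (rejecting any perturbed profile whose Reynolds shear stress stays numerically zero over a finite layer above the wall) and the tolerance values are illustrative assumptions, not the procedure used in this study.

```python
import numpy as np

def realizable_near_wall(y, uv, y_tol=2.0e-3, eps=1.0e-8):
    """Flag 'over-perturbed' Reynolds shear stress profiles.

    A profile is accepted only if its first non-negligible value occurs
    immediately above the wall (within y_tol); a finite layer of zero
    stress above the wall marks an unphysical, over-perturbed result.
    """
    y = np.asarray(y, dtype=float)
    uv = np.asarray(uv, dtype=float)
    idx = np.argmax(np.abs(uv) > eps)        # first index exceeding eps
    return bool(np.abs(uv[idx]) > eps and y[idx] <= y_tol)

y = np.linspace(0.0, 0.02, 21)               # wall-normal coordinate
healthy = y * (0.02 - y)                     # vanishes only at the wall
over_pert = np.where(y < 0.005, 0.0, y)      # zero layer above the wall
# healthy passes the screen; over_pert fails it.
```

Screening each candidate $\Delta_{k}$ with such a test, and keeping the largest value whose perturbed profiles still pass, would give one concrete route to an upper bound for $k^{*}$.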
Therefore, the maximum magnitude of $\Delta_{k}$ must ensure that the Reynolds stresses behave in a physically realizable manner in the near-wall region. The $M_{k}$ perturbation in general under-predicts the baseline prediction across the suction side, sitting mostly within the gray envelope, with a subtle shift toward the red envelope discernible in the lower section of the Reynolds shear stress profiles for $x/c = 0.2$ and $x/c = 0.3$. Within the attached turbulent boundary layer ($x/c = 0.4$ and $x/c = 0.5$), the uncertainty bounds generated from the $M_{k}$ perturbation remain consistently below the baseline prediction, which is consistent with the uniform magnitude of $\Delta_{k} = 2.8$. It should be noted that the simulation's response to the perturbed Reynolds shear stress profile is in general stronger than that to the perturbed mean velocity profile. This indicates that the level of sensitivity to the $\Delta_{k}$ perturbation varies with the QoI being observed. In Fig. \ref{fig:UQ_uniformk_uv_RepeatOn_Tu0027.pdf}, the $M_{k}$ function successfully avoids over-perturbation through strict comparison with the available high-fidelity data, ensuring that only physically realistic perturbations are considered. \subsection{Combining $M_{k}$ with $1c$, $2c$, and $3c$}\label{Sec:compound} \subsubsection{Skin friction coefficient} Distributions of the skin friction coefficient and the pressure coefficient, $C_{f}$ and $C_{p}$, are shown in Figs. \ref{fig:cfcp_marker.pdf} (a) and (b), respectively. Also included are the in-house DNS \cite{zhang2021turbulent} and ILES/LES data of \cite{galbraith2010implicit} and \cite{garmann2013comparative} for comparison. Integrating the $M_{k}$ perturbation with the eigenvalue perturbation ($1c$, $2c$ and $3c$) using Eqs. \ref{Eqn_Rij_perturbed}, \ref{Eqn:Markerfunc} and \ref{Eqn:kstar} yields a compound effect, namely, $1c\_M_{k}$, $2c\_M_{k}$ and $3c\_M_{k}$. 
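The compounding step can be sketched as follows, assuming the eigenvalue perturbation moves the anisotropy eigenvalues a fraction $\Delta_{B}$ toward the chosen limiting state while the marker supplies the local TKE factor $\Delta_{k}$; the exact composition defined by Eqs. \ref{Eqn_Rij_perturbed}, \ref{Eqn:Markerfunc} and \ref{Eqn:kstar} may differ in detail, and the function name is hypothetical.

```python
import numpy as np

# Eigenvalues of the limiting anisotropy states (the 1c, 2c and 3c
# corners of the barycentric triangle), ordered lambda_1 >= lambda_2 >= lambda_3.
LIMITS = {"1c": np.array([2/3, -1/3, -1/3]),
          "2c": np.array([1/6,  1/6, -1/3]),
          "3c": np.array([0.0,  0.0,  0.0])}

def compound_perturbation(R, k, corner, delta_B, delta_k):
    """Sketch of an eigenvalue perturbation compounded with a TKE scaling.

    The anisotropy eigenvalues are moved a fraction delta_B toward the
    chosen limiting state (delta_B = 1.0 reaches the corner itself),
    while the TKE is rescaled by the marker-supplied factor delta_k.
    """
    a = R / (2.0 * k) - np.eye(3) / 3.0          # anisotropy tensor a_ij
    lam, V = np.linalg.eigh(a)                   # ascending eigenvalues
    lam, V = lam[::-1], V[:, ::-1]               # reorder to descending
    lam_star = (1.0 - delta_B) * lam + delta_B * LIMITS[corner]
    a_star = V @ np.diag(lam_star) @ V.T         # perturbed anisotropy
    k_star = delta_k * k                         # perturbed TKE
    return 2.0 * k_star * (a_star + np.eye(3) / 3.0)   # perturbed R_ij*
```

By construction the perturbed stress keeps a consistent trace, $\mathrm{tr}(R^{*}_{ij}) = 2k^{*}$, and with corner "3c" and $\Delta_{B} = 1.0$ it collapses to the isotropic state $2k^{*}\delta_{ij}/3$, mirroring the isotropy-preserving (and therefore weakly influential) character of the $3c$ perturbation discussed in the text.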
Also included are the eigenvalue perturbations ($1c$ and $3c$) as a reference for the $1c\_M_{k}$, $2c\_M_{k}$ and $3c\_M_{k}$ perturbations. In the aft portion of the LSB, Fig. \ref{fig:cfcp_marker.pdf} (a) clearly shows that the $1c\_M_{k}$ perturbation decreases the magnitude of $C_{f}$ more than the $2c\_M_{k}$ perturbation does compared to the baseline prediction, while the $3c\_M_{k}$ perturbation results in an uncertainty bound that almost overlaps the one generated from the $3c$ perturbation, indicating the simulation's low sensitivity to the $3c\_M_{k}$ perturbation. In addition, both uncertainty bounds generated from the $3c\_M_{k}$ and $3c$ perturbations in general sit slightly below the baseline prediction except at the trough around $x/c = 0.2$ (in the aft portion of the LSB), where they sit somewhat above the baseline prediction. As a consequence, an enveloping behavior with respect to the baseline prediction is observed. On the other hand, the uncertainty bounds generated from the $1c\_M_{k}$ and $2c\_M_{k}$ perturbations lie significantly above the baseline prediction, encompassing the reference data for $X_{R}$, as well as the steep rise following $X_{R}$. Interestingly, it is clear that this promising increase associated with the $1c\_M_{k}$ and $2c\_M_{k}$ perturbations is not a simple sum of the $M_{k}$ and $1c$/$2c$ uncertainty bounds; rather, a ``synergy'' has developed. Moreover, the synergy behavior associated with the $1c\_M_{k}$ perturbation encompasses the gap between the baseline prediction and the reference data in the aft portion of the LSB as well as at the crest. In addition, it is interesting to note that the uncertainty bounds generated from the $1c\_M_{k}$ and $2c\_M_{k}$ perturbations tend to retain the shape of the $C_{f}$ profile at the crest for $0.3 < x/c < 0.4$, with the $1c\_M_{k}$ perturbation effectively encompassing the in-house DNS data of \cite{zhang2021turbulent}. 
This confirms the effect of spatial variability in $M_{k}$. As the flow proceeds further downstream $0.4 < x/c < 0.6$, a rapid collapse of the uncertainty bounds generated from the $1c\_M_{k}$ and $2c\_M_{k}$ perturbations is observed. This confirms the uniform magnitude of $M_{k}$ used in the region for $0.4 < x/c < 0.6$. On the other hand, the $3c\_M_{k}$ and $3c$ perturbations become almost indistinguishable from each other, lying somewhat below the baseline prediction across the entire suction side, except for a slight decrease found associated with the $3c\_M_{k}$ perturbation in the region for $0.35 < x/c < 0.4$. \begin{figure} \centerline{\includegraphics[width=3.5in]{fig_Marker/cfcp_marker.pdf}} \caption[(a) Profile of skin friction coefficient and (b) pressure coefficient with enlarged regions at the flat spot and the kink followed by a sharp drop of $C_{p}$.]{(a) Profile of skin friction coefficient and (b) pressure coefficient with enlarged regions at the flat spot and the kink followed by a sharp drop of $C_{p}$. Displayed are uncertainty bounds for $1c\_M_{k}$, $2c\_M_{k}$ and $3c\_M_{k}$ perturbations (red envelope). $\Delta_{B1}$ stands for $\Delta_B = 1.0$. Profiles of baseline prediction and eigenvalue perturbations ($1c$ and $3c$) are provided for reference. $\circ$ in-house DNS data \cite{zhang2021turbulent}.} \label{fig:cfcp_marker.pdf} \end{figure} In Fig. \ref{fig:cfcp_marker.pdf} (b), at the flat spot the $1c\_M_{k}$ perturbation increases the magnitude of $C_{p}$ more than the $2c\_M_{k}$ perturbation does compared to the baseline prediction. Both $1c\_M_{k}$ and $2c\_M_{k}$ perturbations show a tendency to approach toward the in-house DNS \cite{zhang2021turbulent} and LES data of \cite{garmann2013comparative}, and sit within the uncertainty bound generated from the $1c$ perturbation. 
Interestingly, no discernible synergy behavior appears at the flat spot; instead, the uncertainty bounds generated from the $1c\_M_{k}$ and $2c\_M_{k}$ perturbations are somewhat smaller than those from the $1c$ and $2c$ perturbations. On the other hand, the uncertainty bounds generated from the $3c\_M_{k}$ and $3c$ perturbations become almost indistinguishable at the flat spot, sitting slightly below the baseline prediction and trending toward the ILES data of \cite{galbraith2010implicit}. At the kink around $X_{R}$, the uncertainty bounds generated from the $1c\_M_{k}$ and $2c\_M_{k}$ perturbations under-predict the baseline prediction and tend to approach closer to the reference data, while the uncertainty bound for the $3c\_M_{k}$ perturbation over-predicts the baseline prediction and retains the behavior of collapsing onto the $3c$ perturbation, showing a trend of deviating from the reference data. As a consequence, a discernible enveloping behavior with respect to the baseline prediction is observed at both the flat spot and the kink around $X_{R}$, where most of the uncertainty is generated, as shown in Fig. \ref{fig:cfcp_marker.pdf} (b). In addition, the collapsing behavior of the $3c\_M_{k}$ and $3c$ perturbations at the flat spot and the kink indicates the simulation's low sensitivity to the $3c\_M_{k}$ perturbation. Compared to $C_{f}$, it is clear that $C_{p}$ overall is less sensitive to all kinds of perturbations. This is because the wall pressure is determined by the freestream, which is only modified minutely by the eigenvalue perturbations \cite{emory2013modeling}. In addition, this reflects that the degree of response to the $\Delta_{k}$ perturbation varies with the QoI being observed. On the pressure side, the simulation's response to all kinds of perturbations is rather small, indicating a low level of model form uncertainty and hence high trustworthiness in the baseline prediction for $C_{p}$. 
It should be noted that because the $3c$ perturbation retains the isotropic nature of the turbulent viscosity model, it yields limited influence on the perturbed results \cite{mishra2019theoretical}. This is well reflected in the smaller size of the uncertainty bound generated from the $3c$ perturbation compared to the $1c$ and $2c$ perturbations. Such inefficacy of the $3c$ perturbation has been observed by Emory \textit{et al.} \cite{emory2013modeling} as well. Importantly, this inefficacy persists when compounded with $M_{k}$, which might partly explain the collapsing behavior of the $3c\_M_{k}$ profile onto the $3c$ profile, as can be observed in Figs. \ref{fig:cfcp_marker.pdf} (a) and (b). Moreover, this collapsing behavior occurs not only for the results of $C_{f}$ and $C_{p}$ but also for the mean velocity profile and the turbulence quantities, as can be observed in the following sections. \subsubsection{Mean velocity field} \begin{figure*} \centerline{\includegraphics[width=5.5in]{fig_Marker/RANS_contourf_normU_All_subplot_outlook.pdf}} \caption[Contours of $\left\langle U \right \rangle/U_{\infty}$ with $1c\_M_{k}$, $2c\_M_{k}$, $3c\_M_{k}$, $1c$, $2c$, $3c$ and $M_{k}$ perturbations in an $xy$ plane.]{Contours of $\left\langle U \right \rangle/U_{\infty}$ with $1c\_M_{k}$, $2c\_M_{k}$, $3c\_M_{k}$, $1c$, $2c$, $3c$ and $M_{k}$ perturbations in an $xy$ plane. Isolines of the mean streamwise velocity are superimposed on the contour plots. 
The contour of baseline prediction is provided for reference, and the contour of in-house DNS data \cite{zhang2021turbulent} is included for comparison.} \label{fig:RANS_contourf_normU_All_subplot_outlook.pdf} \end{figure*} \begin{figure} \centerline{\includegraphics[width=3.7in]{fig_Marker/markerfunc_U_five.pdf}} \caption[Profile of $\left\langle U \right \rangle/U_{\infty}$ in the aft portion of the LSB for (a) $x/c = 0.15$ and (b) $x/c = 0.2$; and profile of $\left\langle U \right \rangle/U_{\infty}$ in the attached TBL for (c) $x/c = 0.3$, (d) $x/c = 0.4$ and (e) $x/c = 0.5$. Displayed are uncertainty bounds for $1c\_M_{k}$, $2c\_M_{k}$ and $3c\_M_{k}$ perturbations (red envelope).]{Profile of $\left\langle U \right \rangle/U_{\infty}$ in the aft portion of the LSB for (a) $x/c = 0.15$ and (b) $x/c = 0.2$; and profile of $\left\langle U \right \rangle/U_{\infty}$ in the attached TBL for (c) $x/c = 0.3$, (d) $x/c = 0.4$ and (e) $x/c = 0.5$. Displayed are uncertainty bounds for $1c\_M_{k}$, $2c\_M_{k}$ and $3c\_M_{k}$ perturbations (red envelope). $\Delta_{B1}$ stands for $\Delta_B = 1.0$. Profiles of baseline prediction and eigenvalue perturbations ($1c$ and $3c$) are provided for reference. $\circ$ in-house DNS data \cite{zhang2021turbulent}.} \label{fig:markerfunc_U_five.pdf} \end{figure} \begin{figure*} \centerline{\includegraphics[width=5.5in]{fig_Marker/arrows_1c2c3c_Base_Marker_DNS_Marker.pdf}} \caption[Contours of normalized mean velocity $\left\langle U \right \rangle/U_{\infty}$ with in-plane velocity vectors superimposed on the contours in an $xy$ plane.]{Contours of normalized mean velocity $\left\langle U \right \rangle/U_{\infty}$ with in-plane velocity vectors superimposed on the contours in an $xy$ plane. The region in the vicinity of the wall is enlarged to highlight the flow behavior in the LSB, as well as in the turbulent region right downstream of the LSB. 
A focus on a section of the airfoil suction side is considered: $0.14 < x/c < 0.44$.} \label{fig:arrows_1c2c3c_Base_Marker_DNS_Marker.pdf} \end{figure*} Contours of the mean velocity normalized by the freestream velocity, $\left\langle U \right \rangle/U_\infty$, from the baseline, the $1c\_M_{k}$, $2c\_M_{k}$ and $3c\_M_{k}$ perturbations, the eigenvalue perturbations ($1c$, $2c$ and $3c$), the $M_{k}$ perturbation and the in-house DNS of \cite{zhang2021turbulent} in an $xy$ plane are shown in Fig. \ref{fig:RANS_contourf_normU_All_subplot_outlook.pdf}. From Fig. \ref{fig:RANS_contourf_normU_All_subplot_outlook.pdf}, all of the $\left\langle U \right \rangle/U_\infty$ contours show a recirculating region, i.e., the eye-like green region, where negative velocities are present. In addition, the mean velocity contour generated from the $M_{k}$ perturbation exhibits a shorter LSB and a slightly increased magnitude of $\left\langle U \right \rangle/U_\infty$ in the region downstream of the reattachment point ($0.3 < x/c < 0.6$), in which the untrustworthy zones are identified. This indicates that the $M_{k}$ perturbation tends to suppress the LSB compared to the baseline prediction. This reduces the turbulence kinetic energy contained in the large-scale coherent structures \cite{lengani2014pod}, implying increased mean-flow energy in the vicinity of the LSB ($0.3 < x/c < 0.6$) and therefore an increased magnitude of $\left\langle U \right \rangle/U_\infty$ in this region, as shown in Fig. \ref{fig:RANS_contourf_normU_All_subplot_outlook.pdf}. For the $1c$ and $2c$ eigenvalue perturbations, the contours show a reduction in the length of the LSB compared to the baseline prediction, which results in an overall increase in the mean-flow magnitude further downstream of the reattachment point, while the $3c$ perturbation does the opposite. 
In addition, it is clear that the $1c\_M_{k}$ and $2c\_M_{k}$ perturbations increase the magnitude of $\left\langle U \right \rangle/U_\infty$ further than the $1c$ and $2c$ perturbations do compared to the baseline prediction, which confirms the greatly reduced $C_{f}$ in magnitude in the aft portion of the LSB, while the $3c\_M_{k}$ perturbation remains at nearly the same magnitude as the $3c$ perturbation, which confirms the collapse of the $C_{f}$ profiles generated from the $3c\_M_{k}$ and $3c$ perturbations, as shown in Fig. \ref{fig:cfcp_marker.pdf} (a). This indicates a weak compound effect of the $3c\_M_{k}$ perturbation. Compared to the baseline prediction, both the $1c\_M_{k}$ and $2c\_M_{k}$ perturbations shorten the region of reverse flow (deviating from the in-house DNS data) while increasing the mean-flow magnitude in the attached turbulent boundary layer (approaching closer to the in-house DNS data); on the other hand, the $3c\_M_{k}$ perturbation bolsters the region of reverse flow, showing a tendency of approaching closer to the in-house DNS data, while showing a reduction in the magnitude of $\left\langle U \right \rangle/U_\infty$ in the attached turbulent boundary layer, causing a deviation from the in-house DNS data. The predictions for the mean velocity profile normalized by $U_{\infty}$, i.e., $\left\langle U \right \rangle/U_{\infty}$, are presented in Figs. \ref{fig:markerfunc_U_five.pdf} (a) - (e). The in-house DNS data of \cite{zhang2021turbulent} are included for comparison. In Figs. \ref{fig:markerfunc_U_five.pdf} (a) - (e), the $1c$ and $3c$ eigenvalue perturbations are used as a reference for the $1c\_M_{k}$, $2c\_M_{k}$ and $3c\_M_{k}$ perturbations. From Figs. \ref{fig:markerfunc_U_five.pdf} (a) - (e), an enveloping behavior with respect to the baseline prediction is observed, with the $1c\_M_{k}$ and $2c\_M_{k}$ perturbations leading the baseline prediction and the $3c\_M_{k}$ perturbation lagging behind. 
A similar behavior of the $1c$ and $3c$ perturbations with respect to the baseline prediction was also observed by Luis \textit{et al.} \cite{cremades2019reynolds} in their numerical study of a turbulent flow over a backward-facing step. In addition, the $3c\_M_{k}$ profile tends to collapse onto the $3c$ profile, reflecting the simulation's low sensitivity to the $3c\_M_{k}$ perturbation, which is consistent with the behavior shown in Fig. \ref{fig:RANS_contourf_normU_All_subplot_outlook.pdf}. At $x/c = 0.15$ ($X_{T}$), the uncertainty bound generated from the $1c\_M_{k}$ perturbation increases the magnitude of the mean velocity profile more than the $2c\_M_{k}$ perturbation does in both the region of reverse flow ($U/U_{\infty} < 0$) for $0 < y/c|_{o} < 0.007$ and the upper portion of the boundary layer for $0.011 < y/c|_{o} < 0.023$, showing a tendency to approach the in-house DNS data. On the other hand, the $3c\_M_{k}$, $3c$ and baseline profiles collapse, indicating a type of similarity. This might be partly explained by the almost identical values of $C_{f}$ found for the $3c\_M_{k}$, $3c$ and baseline profiles around $X_{T}$, as shown in Fig. \ref{fig:cfcp_marker.pdf} (a). It should be noted that the uncertainty bounds generated from the $1c\_M_{k}$, $2c\_M_{k}$ and $3c\_M_{k}$ perturbations and the baseline prediction are negligibly small for $0.007 < y/c|_{o} < 0.011$, i.e., all collapse onto a single curve, which shows good agreement with the in-house DNS data, as shown in Fig. \ref{fig:markerfunc_U_five.pdf} (a). This reveals a low level of model-form uncertainty and hence relatively high trustworthiness in this region. As the flow moves further downstream to $x/c = 0.2$ (the aft portion of the LSB), the effect of the $1c\_M_{k}$ and $2c\_M_{k}$ perturbations has permeated the entire boundary layer, showing a tendency to approach the in-house DNS data in the upper portion of the boundary layer.
Moreover, the uncertainty bounds generated from the $1c\_M_{k}$ and $2c\_M_{k}$ perturbations over-predict the baseline prediction but lie somewhat above the $1c$ perturbation in the region $0.007 < y/c|_{o} < 0.011$, overall showing closer agreement with the in-house DNS data, as shown in Fig. \ref{fig:markerfunc_U_five.pdf} (b). Again, this reflects the effect of spatial variability in $M_{k}$. Note that the baseline prediction overall shows good agreement with the in-house DNS data in the region of reverse flow at $x/c = 0.2$, at which the baseline prediction and the $3c$ and $3c\_M_{k}$ perturbations collapse onto a single curve. The almost identical magnitude of $C_{f}$ retained by the $3c\_M_{k}$, $3c$ and baseline predictions at $x/c = 0.2$, as shown in Fig. \ref{fig:cfcp_marker.pdf} (a), might partly explain this type of similarity. Away from the wall, a collapsing behavior is again observed for the $3c\_M_{k}$ and $3c$ perturbations, with a slight offset from the baseline prediction. Note that the $1c$ perturbation is well encompassed by $1c\_M_{k}$ in the aft portion of the LSB, as shown in Figs. \ref{fig:markerfunc_U_five.pdf} (a) - (b). This confirms the larger values of $C_{f}$ for the $1c\_M_{k}$ perturbation in this region. At $x/c = 0.3$ (downstream of the LSB near $X_{R}$), $x/c = 0.4$ and $x/c = 0.5$ (in the reattached turbulent boundary layer), the uncertainty bounds generated from the $1c\_M_{k}$ and $2c\_M_{k}$ perturbations in general tend to approach the in-house DNS data, with the perturbation effect gradually deteriorating further downstream. This is consistent with the gradual reduction in the positive values of $C_{f}$ as the flow moves further downstream of $X_{R}$, as shown in Fig. \ref{fig:cfcp_marker.pdf} (a). Also, the difference between the $1c$ and $1c\_M_{k}$ perturbations becomes smaller, which confirms the comparable magnitude of $C_{f}$ in the region of the attached turbulent boundary layer, as shown in Fig.
\ref{fig:cfcp_marker.pdf} (a). On the other hand, a collapse is also observed for the $3c\_M_{k}$ and $3c$ perturbations at $x/c = 0.4$ and $x/c = 0.5$, which confirms the almost identical values of $C_{f}$ retained by the $3c\_M_{k}$ and $3c$ perturbations shown in Fig. \ref{fig:cfcp_marker.pdf} (a). It is interesting to note that the $1c$ and $2c$ perturbations respond favorably to $M_{k}$, while the $3c$ perturbation remains almost immune to it. Figure \ref{fig:arrows_1c2c3c_Base_Marker_DNS_Marker.pdf} shows contours of the mean velocity, with the region of reverse flow enlarged. The region of reverse flow is evidenced by the velocity vectors added in the LSB. The baseline prediction is provided for reference. Also included are the in-house DNS data of \cite{zhang2021turbulent} for comparison. Compared to the in-house DNS data, Fig. \ref{fig:arrows_1c2c3c_Base_Marker_DNS_Marker.pdf} clearly shows that the baseline prediction shifts the region of reverse flow in the upstream direction. Within the LSB (green region), the velocity vectors for the $1c$ and $2c$ perturbations clearly indicate a subdued reverse-flow field, resulting in a shorter LSB and hence better agreement with the DNS data; the opposite is true for the $3c$ perturbation. For the attached turbulent boundary layer, the velocity vectors indicate an overall increase in the mean velocity field for the $1c$ and $2c$ perturbations, showing a tendency to approach the DNS mean-flow field, while the $3c$ perturbation shows an overall reduction in the mean velocity field. Integrating $M_{k}$ into the $1c$, $2c$ and $3c$ perturbations tends to suppress the size of the LSB but increases the mean flow field downstream of the LSB. Among these perturbations, $1c\_M_{k}$ increases the mean flow field more than $2c\_M_{k}$ in the attached turbulent boundary layer, contributing the most to a closer approach to the in-house DNS data.
\subsubsection{Reynolds shear stress} \begin{figure*} \centerline{\includegraphics[width=5.5in]{fig_Marker/RANS_contourf_normuv_All_1c2c3cOnlyMarkerfunc_subplot_foucs.pdf}} \caption[Contours of $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ with $1c\_M_{k}$, $2c\_M_{k}$, $3c\_M_{k}$, $1c$, $2c$, $3c$ and $M_{k}$ perturbations in an $xy$ plane.]{Contours of $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ with $1c\_M_{k}$, $2c\_M_{k}$, $3c\_M_{k}$, $1c$, $2c$, $3c$ and $M_{k}$ perturbations in an $xy$ plane. Isolines of the Reynolds shear stress are superimposed on the contour plots. The contour of baseline prediction is provided for reference, and the contour of in-house DNS data \cite{zhang2021turbulent} is included for comparison.} \label{fig:RANS_contourf_normuv_All_1c2c3cOnlyMarkerfunc_subplot_foucs.pdf} \end{figure*} \begin{figure} \centerline{\includegraphics[width=3.7in]{fig_Marker/markerfunc_uv_five.pdf}} \caption[Profile of $-\left\langle u_{1}u_{2} \right\rangle/U_{\infty}^2$ in the aft portion of the LSB for (a) $x/c = 0.15$ and (b) $x/c = 0.2$; and profile of $-\left\langle u_{1}u_{2} \right\rangle/U_{\infty}^2$ in the attached TBL for (c) $x/c = 0.3$, (d) $x/c = 0.4$ and (e) $x/c = 0.5$. Displayed are uncertainty bounds for $1c\_M_{k}$, $2c\_M_{k}$ and $3c\_M_{k}$ perturbations (red envelope).]{Profile of $-\left\langle u_{1}u_{2} \right\rangle/U_{\infty}^2$ in the aft portion of the LSB for (a) $x/c = 0.15$ and (b) $x/c = 0.2$; and profile of $-\left\langle u_{1}u_{2} \right\rangle/U_{\infty}^2$ in the attached TBL for (c) $x/c = 0.3$, (d) $x/c = 0.4$ and (e) $x/c = 0.5$. Displayed are uncertainty bounds for $1c\_M_{k}$, $2c\_M_{k}$ and $3c\_M_{k}$ perturbations (red envelope). $\Delta_{B1}$ stands for $\Delta_B = 1.0$. Profiles of baseline prediction and eigenvalue perturbations ($1c$ and $3c$) are provided for reference. 
$\circ$ in-house DNS data \cite{zhang2021turbulent}.} \label{fig:markerfunc_uv_five.pdf} \end{figure} Contours of the Reynolds shear stress normalized by the freestream velocity squared, $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$, from the baseline prediction, the $1c\_M_{k}$, $2c\_M_{k}$ and $3c\_M_{k}$ perturbations, the eigenvalue perturbations ($1c$, $2c$, $3c$) and the $M_{k}$ perturbation in an $xy$ plane are shown in Fig. \ref{fig:RANS_contourf_normuv_All_1c2c3cOnlyMarkerfunc_subplot_foucs.pdf}. Also included are the in-house DNS data of \cite{zhang2021turbulent} for comparison. In Fig. \ref{fig:RANS_contourf_normuv_All_1c2c3cOnlyMarkerfunc_subplot_foucs.pdf}, all of the $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ contour plots show a peak around $X_{T}$, i.e., the bright yellow region; downstream of the peak, a gradual reduction in the magnitude of $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ is observed. From Fig. \ref{fig:RANS_contourf_normuv_All_1c2c3cOnlyMarkerfunc_subplot_foucs.pdf}, the $M_{k}$ perturbation overall reduces the magnitude of $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ in both the transitional and turbulent regions compared to the baseline prediction. It is also clear that the $1c\_M_{k}$ and $2c\_M_{k}$ perturbations further reduce the magnitude of $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ compared to the $1c$ and $2c$ perturbations, while the $3c\_M_{k}$ perturbation remains at nearly the same magnitude as the $3c$ perturbation.
In addition, the $1c\_M_{k}$ and $2c\_M_{k}$ perturbations under-predict the baseline prediction and in general tend to approach the in-house DNS data in the attached turbulent boundary layer, although an under-prediction of $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ is observed in the LSB; on the other hand, the $3c\_M_{k}$ perturbation over-predicts the baseline prediction, showing better agreement with the in-house DNS data within the LSB. The predicted profiles of the Reynolds shear stress normalized by the freestream velocity squared, $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$, are shown in Figs. \ref{fig:markerfunc_uv_five.pdf} (a) - (e). Also included are the in-house DNS data of \cite{zhang2021turbulent} for comparison, as well as the $1c$ and $3c$ eigenvalue perturbations used as a reference for the $1c\_M_{k}$, $2c\_M_{k}$ and $3c\_M_{k}$ perturbations. Figures \ref{fig:markerfunc_uv_five.pdf} (a) - (e) show that the baseline prediction is well enveloped by the uncertainty bounds generated from the $1c\_M_{k}$, $2c\_M_{k}$ and $3c\_M_{k}$ perturbations. In addition, the $1c\_M_{k}$ and $2c\_M_{k}$ perturbations reduce the magnitude of the predicted $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ profiles compared to the baseline prediction, while the $3c\_M_{k}$ perturbation does the opposite. A similar behavior of the $1c$ and $3c$ perturbations with respect to the baseline prediction was also observed by Luis \textit{et al.} \cite{cremades2019reynolds} in their numerical study of a turbulent flow over a backward-facing step. Figures \ref{fig:markerfunc_uv_five.pdf} (a) - (e) show that the simulation's sensitivity to the $3c\_M_{k}$ perturbation is rather low, with the $3c\_M_{k}$ profile nearly collapsing onto the $3c$ profile. A similar behavior is also observed in Figs. \ref{fig:markerfunc_U_five.pdf} (a) - (e). At $x/c = 0.15$ ($X_{T}$), Fig.
\ref{fig:markerfunc_uv_five.pdf} (a) shows that the $1c\_M_{k}$ perturbation results in a rather strong reduction in the magnitude of $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ across the entire boundary layer compared to the baseline prediction, showing a tendency to approach the in-house DNS data; a ``synergy behavior'' is observed. On the other hand, the $3c\_M_{k}$ perturbation tends to deviate from the in-house DNS data, indicating a weak response to the $M_{k}$ perturbation. In Figs. \ref{fig:markerfunc_uv_five.pdf} (b) - (e), the baseline predictions and in-house DNS data are similar in shape: the convexity of the profile strongly increases in the vicinity of the wall and then relaxes as the distance from the wall increases, with some discrepancies that overall mark the under-prediction of the momentum transfer due to the Reynolds shear stress within both the transitional and turbulent boundary layers. This leads to an important observation: the synergy behavior seems to be active only for the $1c\_M_{k}$ and $2c\_M_{k}$ perturbations. As the flow moves further downstream to $x/c = 0.2$ (the aft portion of the LSB) and $x/c = 0.3$ (downstream of the LSB near $X_{R}$), the $1c\_M_{k}$ perturbation decreases the predicted Reynolds shear stress more than the $1c$ perturbation does in the outer portion of the boundary layer, indicating a deviation from the in-house DNS data, while showing a slight reduction and no discernible change (a collapse onto the $1c$ profile) in the near-wall region at $x/c = 0.2$ and $x/c = 0.3$, respectively, as shown in Figs. \ref{fig:markerfunc_uv_five.pdf} (b) and (c). This reflects the spatial variability in $M_{k}$.
On the other hand, the $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ profiles generated from the $3c\_M_{k}$ perturbation nearly collapse onto those of the $3c$ perturbation at $x/c = 0.2$ and $x/c = 0.3$; consequently, the $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ profiles generated from the $3c\_M_{k}$ perturbation tend to approach the in-house DNS data at $x/c = 0.2$ and $x/c = 0.3$, as shown in Figs. \ref{fig:markerfunc_uv_five.pdf} (b) and (c). At $x/c = 0.4$ and $x/c = 0.5$ (in the attached turbulent boundary layer), a synergy behavior is again observed for the $1c\_M_{k}$ perturbation across the entire boundary layer, which enhances the deviation from the in-house DNS data at $x/c = 0.4$, while encompassing the in-house DNS data at $x/c = 0.5$. On the other hand, a collapse is again observed for the $3c\_M_{k}$ and $3c$ perturbations. In addition, the uncertainty bounds generated from the $2c\_M_{k}$ and $3c\_M_{k}$ perturbations successfully encompass the in-house DNS data in the lower portion of the attached turbulent boundary layer at $x/c = 0.4$ and $x/c = 0.5$, although a small discrepancy is present in the region next to the wall. \section{Conclusions} The goal of the present study was to advance our understanding of a physics-based methodology to quantify transition model-form uncertainty in RANS predictions of unsteady flow over an SD7003 airfoil. The method is based on the framework proposed in the study of \cite{emory2013modeling}, which introduces perturbations to a decomposition of the Reynolds stress tensor, namely, to the amplitude and the eigenvalues of the anisotropy of the Reynolds stress tensor. In this study, the methodology was fully implemented in C++ within OpenFOAM. Based on the baseline predictions for $C_{f}$ and $C_{p}$, we presented analyses to locate the untrustworthy region, which is further divided into four zones to cover both the LSB and the turbulent flow region further downstream.
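For concreteness, the eigenvalue-perturbation step of this framework can be sketched outside of the paper's C++/OpenFOAM implementation. The following minimal NumPy illustration (a sketch, not the authors' code) shifts the anisotropy eigenvalues of a Reynolds stress tensor a fraction $\Delta_{B}$ toward one of the limiting states $1c$, $2c$ or $3c$ of the barycentric map, following the standard form of the eigenvalue perturbation:

```python
import numpy as np

# Limiting-state eigenvalues of the anisotropy tensor (barycentric-map corners).
CORNERS = {
    "1c": np.array([2/3, -1/3, -1/3]),   # one-component turbulence
    "2c": np.array([1/6, 1/6, -1/3]),    # two-component (axisymmetric) turbulence
    "3c": np.array([0.0, 0.0, 0.0]),     # isotropic turbulence
}

def perturb_reynolds_stress(R, corner, delta_b):
    """Shift the anisotropy eigenvalues of R a fraction delta_b toward a corner state."""
    k = 0.5 * np.trace(R)                      # turbulence kinetic energy
    a = R / (2.0 * k) - np.eye(3) / 3.0        # anisotropy tensor
    lam, V = np.linalg.eigh(a)
    lam, V = lam[::-1], V[:, ::-1]             # sort eigenvalues in descending order
    lam_new = lam + delta_b * (CORNERS[corner] - lam)
    return 2.0 * k * (np.eye(3) / 3.0 + V @ np.diag(lam_new) @ V.T)
```

With $\Delta_{B} = 1$ toward $3c$ the perturbed stress is exactly isotropic, $2k\delta_{ij}/3$, while $\Delta_{B} = 0$ returns the baseline tensor; intermediate values interpolate linearly within the barycentric map, and the trace (i.e., $2k$) is preserved because the corner eigenvalue sets are traceless.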
A novel regression-based marker function was developed to inject an accurate level of the amplitude perturbation into the identified untrustworthy region. We presented analyses to understand the effect of the uniform amplitude perturbation on the skin friction coefficient, mean velocity and Reynolds shear stress. Importantly, we observed a monotonic behavior of the magnitude of the predicted bounds with the $\Delta_{k}$ perturbations, in particular more noticeable bounds for $\Delta_{k} > 1$: a clear shift of the reattachment point in the upstream direction, a noticeable suppression of the length of the LSB, and a greatly reduced magnitude of the Reynolds shear stress in the LSB region; for the turbulent flow region further downstream of the LSB, results for both the mean velocity and the Reynolds shear stress showed better agreement with the in-house DNS data of \cite{zhang2021turbulent}. Such monotonic behavior is imperative for the development of a marker function that aims to predict plausible bounds for QoIs. The predicted bounds generated from the marker function $M_{k}$ were contrasted with the uniform amplitude perturbations $\Delta_{k} = 0.1$ and $\Delta_{k} = 8$ for different QoIs. Results for the QoIs clearly showed the spatial variability in $M_{k}$, and the bounds generated from $M_{k}$ in general sat within the bounds generated from $\Delta_{k} = 8$. The $\Delta_{k} = 8$ perturbation showed a clear tendency to approach the reference data \cite{galbraith2010implicit,garmann2013comparative,zhang2021turbulent} for $C_{f}$ and $C_{p}$, and well encompassed the reattachment point in the predicted bounds. Overall, the behavior of the $\Delta_{k} = 0.1$ perturbation was the opposite of that of $\Delta_{k} = 8$: deviating from the reference data and showing rather small bounds. On the pressure side, the $C_{p}$ profiles for $\Delta_{k} = 0.1$, the baseline prediction and $\Delta_{k} = 8$ collapsed, which indicated a low model-form uncertainty.
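Under the reading that $\Delta_{k}$ acts as a multiplicative factor on the turbulence kinetic energy (consistent with $\Delta_{k} = 1$ being the baseline and $\Delta_{k} = 0.1$, $8$ bracketing it), the way a marker field can localize a uniform amplitude perturbation may be sketched as follows. The blending form and the normalization of the marker to $[0,1]$ below are illustrative assumptions, not the paper's regression formula for $M_{k}$:

```python
import numpy as np

def marked_amplitude_perturbation(k_field, delta_k, marker):
    """Blend a uniform amplitude factor delta_k into k only where the marker
    flags untrustworthy cells (marker = 0: baseline; marker = 1: full factor).

    Hypothetical illustration of marker-localized amplitude perturbation;
    the actual M_k of the paper is regression-based."""
    factor = 1.0 + np.asarray(marker) * (delta_k - 1.0)
    return np.asarray(k_field) * factor
```

Cells where the marker is zero keep the baseline turbulence kinetic energy, so the perturbation, and hence the resulting uncertainty bound, is confined to the flagged region, which is the spatial-variability effect repeatedly observed in the results above.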
Importantly, the over-perturbation behavior associated with the predicted Reynolds shear stress profile undergoing the $\Delta_{k} = 8$ perturbation could facilitate approximating the upper bound of the amplitude perturbation. When compounding $M_{k}$ with the eigenvalue perturbations $1c$ and $2c$, the predicted bounds for $C_{f}$ were dramatically widened to encompass the reattachment point and the reference data of \cite{galbraith2010implicit,garmann2013comparative,zhang2021turbulent} at the crest, which showed a synergy behavior, and they consistently sat above the baseline prediction. Overall, the uncertainty bounds retained the shape of the baseline prediction for $C_{f}$, which confirmed the effect of spatial variability in $M_{k}$. The predicted $1c\_M_{k}$ and $2c\_M_{k}$ bounds for $C_{p}$ sat above the baseline prediction at the flat spot; they did not exhibit a synergy behavior there, but instead were reduced in magnitude compared to the $1c$ and $2c$ perturbations. The opposite was true at the kink (or the reattachment point) of the $C_{p}$ distribution, where the $1c\_M_{k}$ and $2c\_M_{k}$ perturbations under-predicted the baseline prediction. The $3c$ and $3c\_M_{k}$ bounds for both $C_{f}$ and $C_{p}$ collapsed, deviating slightly from the baseline prediction. The perturbed mean velocity profile approached much closer to the in-house DNS data near the reattachment point. When the contours of the mean velocity were plotted in an $xy$ plane, the $1c$ and $2c$ perturbations suppressed the LSB compared to the baseline prediction, which increased the magnitude of the mean flow. This behavior was enhanced by compounding with $M_{k}$: $1c\_M_{k}$ and $2c\_M_{k}$ further increased the magnitude of the mean flow in the attached turbulent boundary layer through a stronger suppression of the LSB. This behavior is qualitatively similar to that observed in the in-house DNS contour \cite{zhang2021turbulent}.
Again, the $3c\_M_{k}$ perturbation remained at nearly the same magnitude as the $3c$ perturbation, which bolstered the region of reverse flow to approach the in-house DNS data \cite{zhang2021turbulent}. When the predictions for the mean velocity profile were plotted in vertically shifted coordinates, the predicted bounds generated from the $1c\_M_{k}$ and $2c\_M_{k}$ perturbations in general led the baseline prediction, while the $3c\_M_{k}$ perturbation lagged behind it, which showed an enveloping behavior with respect to the baseline prediction. This behavior is qualitatively similar to that of the $1c$ and $3c$ perturbations observed by Luis \textit{et al.} \cite{cremades2019reynolds}. At the transition point $X_{T}$, all of the perturbations and the baseline prediction collapsed for $0.007 < y/c|_{o} < 0.011$ and showed good agreement with the in-house DNS data \cite{zhang2021turbulent}. As the flow moved further downstream of $X_{R}$, the $1c\_M_{k}$ and $2c\_M_{k}$ perturbations showed a tendency to approach the in-house DNS data, while the effect of the perturbation gradually deteriorated due to the gradual reduction in the positive values of $C_{f}$. Overall, the compound effect of $3c\_M_{k}$ was weak, which indicated the immunity of the $3c$ perturbation to the marker function. With the velocity vectors added to the mean velocity contour, a clear visualization again confirmed the effect of all of the perturbations in the region of reverse flow and the attached turbulent boundary layer. The dimensionless Reynolds shear stress contours in an $xy$ plane were also analyzed. The $1c\_M_{k}$ and $2c\_M_{k}$ perturbations under-predicted the baseline prediction and showed a tendency to approach the in-house DNS data in the region downstream of the LSB. In contrast, the $3c\_M_{k}$ perturbation over-predicted the baseline prediction and showed good agreement with the in-house DNS data \cite{zhang2021turbulent} in the region of the LSB.
When the predictions for the dimensionless Reynolds shear stress $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ profile were plotted, the $1c\_M_{k}$ and $2c\_M_{k}$ perturbations reduced the magnitude of the $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ profiles compared to the baseline prediction, while the $3c\_M_{k}$ perturbation did the opposite, which resulted in an enveloping behavior. This behavior is qualitatively similar to that observed by Luis \textit{et al.} \cite{cremades2019reynolds}. At the transition point, the $1c\_M_{k}$ perturbation greatly reduced the magnitude of the $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ profile, which marked a synergy behavior. An important observation was that the synergy behavior seems to be active only for the $1c\_M_{k}$ and $2c\_M_{k}$ perturbations. Overall, the marker function $M_{k}$ was effective in the eigenspace perturbation framework in constructing uncertainty bounds for both the mean velocity and turbulence properties. Future work will focus on the development of different types of marker functions based on a variety of transitional flow scenarios. Eigenvector perturbations to the Reynolds stress tensor should also be conducted to complete the full range of the model-form uncertainty in the Boussinesq turbulent viscosity models. Also, a wider range of RANS-based transition models will be tested using the eigenspace perturbation framework with the marker function involved. \begin{acknowledgments} The support of the Natural Sciences and Engineering Research Council (NSERC) of Canada for the research program of Professor Xiaohua Wu and Professor David E. Rival is gratefully acknowledged. \end{acknowledgments} \section*{Data Availability Statement} The data that support the findings of this study are available from the corresponding author upon reasonable request. \section*{References}
2207.14385
\section{Introduction} In this paper we consider a family of active scalars in two dimensions that are driven by an incompressible flow which is more singular than the scalar itself. More precisely, we say that a function $w(x,t):\mathds{R}^2\times \mathds{R}_+\rightarrow \mathds{R}$, $w(x,t)\in H^{s}$, $s>2+\gamma$, is a solution to the generalized Surface Quasi-geostrophic equation ($\gamma$-SQG equation) with initial conditions $w(x,0)=w_{0}(x)$ if the equation \begin{eqnarray}\label{gSQG} \frac{\partial w}{\partial t} + v_{1,\gamma}\frac{\partial w}{\partial x_{1}} + v_{2,\gamma}\frac{\partial w}{\partial x_{2}}= 0 \end{eqnarray} is fulfilled for every $x\in\mathds{R}^2$, with $v=(v_{1,\gamma},v_{2,\gamma})$ defined by $$v_{1,\gamma}=-\frac{\partial}{\partial x_{2}}\Lambda^{-1+\gamma} w,\ v_{2,\gamma}=\frac{\partial}{\partial x_{1}}\Lambda^{-1+\gamma} w.$$ Here $\Lambda^{\alpha} f\equiv (-\Delta)^{\frac{\alpha}{2}} f$ is defined via the Fourier transform by $\widehat{\Lambda^{\alpha} f}(\xi) = |\xi|^{\alpha}\widehat{f} (\xi)$. This family of equations becomes the 2D incompressible Euler equations and the SQG equation (see \cite{Majda}, \cite{Chaewu} and \cite{Chaecordoba}) when $\gamma=-1,0$, respectively. For the entire range $\gamma\in(-1,1)$, it has been shown in \cite{Chaecordoba} that this system is locally well-posed in $H^{s}$ for $s>2+\gamma$. In \cite{ChaeWu2} the authors proved local existence in the critical Sobolev space $H^2$ for a logarithmic inviscid regularization of SQG (see also \cite{Jolly} for the $\gamma$-SQG case). Regarding $H^s$ norm growth, see \cite{Nazarov}, where the authors show that there exist initial conditions with arbitrarily small $H^{s}$ norm ($s\geq 11$) that become large after a long period of time. Finite-time formation of singularities for initial data in $H^{s}$ for $s>2+\gamma$ remains an open problem for the range $\gamma\in(-1,1)$.
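On the frequency side, the velocity law above is a pure Fourier multiplier, $\widehat{v}(\xi) = i(-\xi_2,\xi_1)|\xi|^{-1+\gamma}\widehat{w}(\xi)$, which is easy to illustrate numerically. The following NumPy sketch works on a periodic $[0,2\pi)^2$ grid purely for illustration (the paper itself is set on $\mathds{R}^2$):

```python
import numpy as np

def gsqg_velocity(w, gamma):
    """Velocity v = (-d/dx2, d/dx1) Lambda^{-1+gamma} w on a square [0, 2*pi)^2 grid,
    computed spectrally; the zero Fourier mode is discarded."""
    n = w.shape[0]
    k = np.fft.fftfreq(n, d=1.0 / n)           # integer wavenumbers
    k1, k2 = np.meshgrid(k, k, indexing="ij")  # k1 along axis 0 (x1), k2 along axis 1 (x2)
    mag = np.sqrt(k1**2 + k2**2)
    mult = np.zeros_like(mag)
    nz = mag > 0
    mult[nz] = mag[nz] ** (-1.0 + gamma)       # symbol of Lambda^{-1+gamma}
    what = np.fft.fft2(w) * mult
    v1 = np.real(np.fft.ifft2(-1j * k2 * what))  # -d/dx2 Lambda^{-1+gamma} w
    v2 = np.real(np.fft.ifft2(1j * k1 * what))   #  d/dx1 Lambda^{-1+gamma} w
    return v1, v2
```

For example, for $w=\cos x_1$ the only active modes have $|\xi|=1$, so the multiplier is $1$ for any $\gamma$ and the routine returns $v=(0,-\sin x_1)$; the discrete field is divergence-free by construction since $\xi\cdot(-\xi_2,\xi_1)=0$.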
On the other hand, there are a few rigorous constructions of non-trivial global solutions in $H^s$ (for some $s$ satisfying $s> 2+\gamma$) in \cite{CCG2}, \cite{GS}, \cite{CCG} and \cite{We}. For both 2D Euler and SQG, the critical Sobolev space has been studied in \cite{Bourgainsobolev}, \cite{Zoroacordoba}, \cite{Elgindisobolev} and \cite{Injee}, where non-existence of uniformly bounded solutions in $H^{2+\gamma}$ has been established (see also \cite{Kukavica} and \cite{Kwon} for other ill-posedness results for active scalars). Furthermore, for $\gamma=0$, non-existence of solutions in $H^{s}$ in a range of supercritical Sobolev spaces ($s\in(\frac32,2)$) is proved in \cite{Zoroacordoba}. Global existence of solutions in $L^2$ has already been obtained for SQG in \cite{Resnick} (see \cite{Chaecordoba} for an extension to the case $\gamma\in(0,1)$), but uniqueness is not known, and in fact there is non-uniqueness of solutions for $\Lambda^{-1}w\in C_t^{\sigma}C_x^{\beta}$ with $\frac12<\beta<\frac45$ and $\sigma < \frac{\beta}{2-\beta}$ (see \cite{Buckmaster}). Local well-posedness in $C^{k,\beta}\cap L^{q}$ ($k\geq 1$, $\beta\in(0,1)$, $q>1$) was established for SQG in \cite{Wu}, and recently the result was improved in \cite{Ambrose}, where the requirement $w\in L^{q}$ has been dropped. The same result as in \cite{Wu} applies for the range $\gamma\in [-1,0)$ for $\beta\in[0,1]$ (for the a priori estimates see \cite{Chaewu}). Nevertheless, as shown in \cite{Zoroacordoba} for $\gamma=0$, there is no local existence result when $\beta=0,1$ (in the case of the 2D Euler equations, see \cite{Bourgaincm} and \cite{Elgindi} for a proof of strong ill-posedness and non-existence of uniformly bounded solutions for the velocity $v$ in $C^k$). Global-in-time exponential growth of solutions was obtained in \cite{SmallscaleSQG} for the range $\gamma\in(-1,1)$ in $C^{1,\beta}$, with $\beta\in[f(\gamma),1]$.
\subsection{Main results} The aim of this paper is to prove strong ill-posedness in $C^{k,\beta}$ ($k\geq 1$, $\beta\in(0,1]$ and $k+\beta>1+\gamma$) of the $\gamma$-SQG equation for the range $\gamma\in(0,1)$. We also construct solutions in $\mathds{R}^2$ of $\gamma$-SQG that initially are in $C^{k,\beta}\cap L^2$ but are not in $C^{k,\beta}$ for $t>0$. \begin{theorem} (Strong ill-posedness) Given a natural number $k$, $\beta\in(0,1]$, $\gamma\in(0,1)$ and $\delta\in(0,\frac12)$ with $k+\beta-2\delta> 1+\gamma$, then for any $T,t_{crit},\epsilon_{1},\epsilon_{2}>0$, there exists an $H^{k+\beta+1-\delta}$ function $w(x,0)$ such that $||w(x,0)||_{C^{k,\beta}}\leq \epsilon_{1}$ and the only solution to (\ref{gSQG}) in $H^{k+\beta+1-\delta}$ with initial conditions $w(x,0)$ exists for $t\in[0,T]$ and fulfills $$||w(x,t_{crit})||_{C^{k,\beta}}\geq \frac{1}{\epsilon_{2}}.$$ \end{theorem} \begin{theorem} (Non-existence) Given a natural number $k$, $\beta\in(0,1]$, $\gamma\in(0,1)$ and $\delta\in(0,\frac12)$ with $k+\beta-2\delta> 1+\gamma$, then for any $T$ and $\epsilon>0$, there exists an $H^{k+\beta+1-\frac32\delta}$ function $w(x,0)$ such that $||w(x,0)||_{C^{k,\beta}}\leq \epsilon$ and the only solution to (\ref{gSQG}) in $H^{k+\beta+1-\frac32 \delta}$ with initial conditions $w(x,0)$ exists for $t\in[0,T]$ and fulfills, for $t\in(0,T]$, $||w(x,t)||_{C^{k,\beta}}=\infty.$ \end{theorem} \begin{remark} Although technically we do not prove the results for the case $\beta=0$, the results in $C^{k,1}$ actually give us strong ill-posedness and non-existence in the space $C^{k+1}$. \end{remark} \subsection{Strategy of the proof} To obtain the ill-posedness result, we first focus on finding a pseudo-solution $\bar{w}$ for $\gamma$-SQG that exhibits the behaviour we would like to show, namely, that it has a small $C^{k,\beta}$ norm initially and that this norm grows very large in a very short period of time.
We say that $\bar{w}$ is a pseudo-solution if it fulfills an evolution equation of the form \begin{eqnarray} \frac{\partial \bar{w}}{\partial t} + v_{1,\gamma}\frac{\partial \bar{w}}{\partial x_{1}} + v_{2,\gamma}\frac{\partial \bar{w}}{\partial x_{2}}+F(x,t)= 0 \end{eqnarray} with $v=(v_{1,\gamma},v_{2,\gamma})$ defined by $$v_{1,\gamma}=-\frac{\partial}{\partial x_{2}}\Lambda^{-1+\gamma} \bar{w},\ v_{2,\gamma}=\frac{\partial}{\partial x_{1}}\Lambda^{-1+\gamma} \bar{w}.$$ This, of course, is not a very restrictive definition, and in general we will only use it for $\bar{w}$ when $F$ is small in a relevant norm. Once we have a pseudo-solution $\bar{w}$ with the desired behaviour, if $F$ is small and both $F$ and $\bar{w}$ are regular enough, then $\bar{w}\approx w$, with $w$ the solution to (\ref{gSQG}) with the same initial conditions as $\bar{w}$, and therefore $w$ shows the same fast growth as $\bar{w}$. The details of how to find a pseudo-solution with the desired behaviour are somewhat technical, but the rough idea is to consider initial conditions that in polar coordinates have the form $$w_{N}(r,\alpha,0)=f(r)+\frac{g(r,N\alpha)}{N^{k+\beta}},$$ that is, a radial function (which is a stationary solution to $\gamma$-SQG) plus a perturbation of frequency $N$ in $\alpha$. The evolution of $w_{pert,N}(r,\alpha,t):=w_{N}(r,\alpha,t)-f(r)$ satisfies $$\frac{\partial w_{pert,N}}{\partial t}+v_{\gamma}(w_{pert,N})\cdot\nabla w_{pert,N}+v_{r,\gamma}(w_{pert,N})\frac{\partial f(r)}{\partial r}+\frac{\partial w_{pert,N}}{\partial \alpha}\frac{v_{\alpha,\gamma}(f(r))}{r}=0,$$ where $v_{r,\gamma}$ and $v_{\alpha,\gamma}$ are the radial and angular components of the velocity, respectively.
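To see heuristically where the growth comes from, one can perform a purely formal single-mode computation: drop the quadratic term and, anticipating the large-$N$ approximation used below, replace $v_{r,\gamma}(w_{pert,N})$ by $C_{\gamma}(-\Delta_{\alpha})^{\frac{\gamma}{2}}H_{\alpha}(w_{pert,N})$. With the convention $H_{\alpha}e^{iN\alpha}=-i e^{iN\alpha}$ for $N>0$ (conventions differ by a sign), the ansatz $w_{pert,N}=\mathrm{Re}\big(W(r,t)e^{iN\alpha}\big)$ gives, for each fixed $r$, $$\frac{\partial W}{\partial t}=i\Big(C_{\gamma}\frac{\partial f(r)}{\partial r}N^{\gamma}-N\frac{v_{\alpha,\gamma}(f(r))}{r}\Big)W=:i\Phi_{N}(r)W,\qquad W(r,t)=W(r,0)e^{it\Phi_{N}(r)}.$$ Thus $|W|$ is conserved, but the $r$-dependent phase shears the solution: each derivative in $r$ brings down a factor of order $t\,\partial_{r}\Phi_{N}(r)$, whose nonlocal contribution $t\,C_{\gamma}f''(r)N^{\gamma}$ can be made arbitrarily large by the choice of $f$ and $N$ when $\gamma>0$. This is only a heuristic; the quantitative exploitation of this mechanism, including the control of all the neglected errors, is what the pseudo-solution construction carries out.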
For very large $N$, we have that $$v_{\gamma}(w_{pert,N})\cdot\nabla w_{pert,N}\approx 0,\ v_{r,\gamma}(w_{pert,N})\approx C_{\gamma}(-\Delta_{\alpha})^{\frac{\gamma}{2}}H_{\alpha}(w_{pert,N}),$$ where $(-\Delta_{\alpha})^{\frac{\gamma}{2}}$ and $H_{\alpha}$ are the fractional Laplacian and the Hilbert transform, respectively, with respect to the variable $\alpha$ only. This suggests studying \begin{equation}\label{1Dapprox} \frac{\partial \tilde{w}}{\partial t}+\frac{\partial f(r)}{\partial r}C_{\gamma}(-\Delta_{\alpha})^{\frac{\gamma}{2}}H_{\alpha}(\tilde{w})+\frac{\partial \tilde{w}}{\partial \alpha}\frac{v_{\alpha,\gamma}(f(r))}{r}=0 \end{equation} and using $\bar{w}=f(r)+\tilde{w}$. The system (\ref{1Dapprox}) is relatively simple to study, since it is linear and one-dimensional in nature, and one can obtain explicit solutions whose $C^{k,\beta}$ norm grows arbitrarily fast. Then, once the candidate pseudo-solutions are found, a careful study of the errors involved allows us to obtain ill-posedness. Moreover, to obtain non-existence, we consider an infinite number of fast-growing solutions and spread them through the plane so that the interactions between them become very small. \subsection{Outline of the paper} The paper is organized as follows. In Section 2, we set the notation used throughout the paper. In Section 3, we obtain estimates on the velocity in the radial and angular directions. In Section 4, we introduce the pseudo-solutions with the desired properties and establish the necessary estimates on the source term $F(x,t)$. Finally, in Section 5, we prove strong ill-posedness and non-existence for the space $C^{k,\beta}$. \section{Preliminaries and notation} \subsection{Polar coordinates}\label{polarcoordinates} Many of our computations and functions become much simpler if we use polar coordinates, so we need to establish some notation in that regard.
For the rest of this subsection, we will write $$F:\mathds{R}_{+}\times[0,2\pi)\rightarrow \mathds{R}^2$$ $$(r,\alpha)\rightarrow (r\cos{(\alpha)},r\sin{(\alpha)})$$ for the map from polar to Cartesian coordinates. Note that the choice of $[0,2\pi)$ for the variable $\alpha$ is arbitrary and any interval of the form $[c,2\pi+c)$ would also work, and in fact we will sometimes consider intervals different from $[0,2\pi)$. These changes in the domain will not be specifically mentioned since they will be clear by context. Given a function $f(x_{1},x_{2})$ from $\mathds{R}^{2}$ to $\mathds{R}$, we define $$f^{pol}:\mathds{R}_{+}\times[0,2\pi)\rightarrow \mathds{R}$$ as $f^{pol}(r,\alpha):=f(F(r,\alpha))$. For $r>0$, we also have the following equalities \begin{equation}\label{derx1} \frac{\partial f(x_{1},x_{2})}{\partial x_{1}}=\cos{(\alpha(x_{1},x_{2}))} \frac{\partial f^{pol}}{\partial r}(F^{-1}(x_{1},x_{2}))-\frac{1}{r}\sin{(\alpha(x_{1},x_{2}))}\frac{\partial f^{pol}}{\partial \alpha}(F^{-1}(x_{1},x_{2})), \end{equation} \begin{equation}\label{derx2} \frac{\partial f(x_{1},x_{2})}{\partial x_{2}}=\sin{(\alpha(x_{1},x_{2}))} \frac{\partial f^{pol}}{\partial r}(F^{-1}(x_{1},x_{2}))+\frac{1}{r}\cos{(\alpha(x_{1},x_{2}))}\frac{\partial f^{pol}}{\partial \alpha}(F^{-1}(x_{1},x_{2})). \end{equation} Furthermore, for functions such that $supp(f^{pol}(r,\alpha))\subset\{(r,\alpha):r\geq r_{0}\}$ with $r_{0}>0$, we have that for $m=0,1,...$, using (\ref{derx1}) and (\ref{derx2}), $$||f||_{C^{m}}\leq C_{r_{0},m}||f^{pol}||_{C^{m}},$$ where $$||f^{pol}||_{C^{m}}=\sum_{k=0}^{m}\sum_{i=0}^{k}||\frac{\partial^{k}f^{pol}}{\partial r^{i}\partial \alpha^{k-i}}||_{L^{\infty}},$$ and similarly \begin{equation}\label{equivcmb} ||f||_{C^{m,\beta}}\leq C_{r_{0},m,\beta}||f^{pol}||_{C^{m,\beta}}.
\end{equation} with \begin{align*} &||f^{pol}(r,\alpha)||_{C^{m,\beta}}=||f^{pol}||_{C^{m}}\\ &+\sum_{i=0}^{m}sup_{R\in[0,\infty),A\in[0,2\pi),h_{1}\in[-R,\infty),h_{2}\in[-\pi,\pi]}\frac{|\frac{\partial^{m} f^{pol}}{\partial r^{i}\partial \alpha^{m-i}}(R,A)-\frac{\partial^{m} f^{pol}}{\partial r^{i}\partial \alpha^{m-i}}(R+h_{1},A+h_{2})|}{|h_{1}^{2}+h_{2}^{2}|^{\frac{\beta}{2}}}.\\ \end{align*} Furthermore, if we restrict ourselves to functions such that $supp(f^{pol}(r,\alpha))\subset\{(r,\alpha):r_{1}\geq r\geq r_{0}\}$ with $r_{1}>r_{0}>0$ then for $m=0,1,...$ $$||f||_{H^{m}}\leq C_{r_{1},r_{0},m}||f^{pol}||_{H^{m}},$$ with $$||f^{pol}||_{H^{m}}=\sum_{k=0}^{m}\sum_{i=0}^{k}||\frac{\partial^{k}f^{pol}}{\partial r^{i}\partial \alpha^{k-i}}||_{L^{2}}.$$ Since we will need to compute integrals in polar coordinates, for a general set $S$ we will use the notation $$S^{pol}:=\{(r,\alpha):F(r,\alpha)\in S\}$$ and more specifically, we will use $$B_{\lambda}^{pol}(R,A):=\{(r,\alpha):|F(r,\alpha)-F(R,A)|\leq \lambda\}$$ with $|(x_{1},x_{2})|=|x_{1}^{2}+x_{2}^{2}|^{\frac12}$ (this is simply the set $B_{\lambda}(R\cos{(A)},R\sin{(A)})$ in polar coordinates).
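As a quick sanity check (an illustration, not part of the argument), the chain-rule identity (\ref{derx1}) can be verified numerically with finite differences for a concrete test function:

```python
import math

def f(x1, x2):
    # concrete test function, an arbitrary choice for this check
    return x1 ** 2 * x2

def fpol(r, a):
    # the same function in polar coordinates, f^pol = f o F
    return f(r * math.cos(a), r * math.sin(a))

def dd(g, t, h=1e-6):
    # central finite difference at t
    return (g(t + h) - g(t - h)) / (2 * h)

r0, a0 = 1.3, 0.7
x1, x2 = r0 * math.cos(a0), r0 * math.sin(a0)

# left-hand side of (derx1): partial_{x_1} f evaluated at F(r0, a0)
lhs = dd(lambda s: f(s, x2), x1)
# right-hand side: cos(alpha) d_r f^pol - (1/r) sin(alpha) d_alpha f^pol
rhs = math.cos(a0) * dd(lambda s: fpol(s, a0), r0) \
    - math.sin(a0) / r0 * dd(lambda s: fpol(r0, s), a0)

print(abs(lhs - rhs))  # close to 0 (finite-difference error only)
```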
Also, note that, for $R\geq 2\lambda$ (which we will assume from now on) we have $$B_{\lambda}^{pol}(R,A)\subset [R-\lambda,R+\lambda]\times [A-\arccos{(1-\frac{\lambda^2}{R^2})},A+\arccos{(1-\frac{\lambda^2}{R^2})}].$$ We also define, for $h\in[-\lambda,\lambda]$, $$S_{\lambda,R,A}(h):=sup(\tilde{\alpha}:(R+h,A+\tilde{\alpha})\in B^{pol}_{\lambda}(R,A))$$ and defining $$S_{\lambda,R,A,\infty}:=sup_{h\in[-\lambda,\lambda]}(S_{\lambda,R,A}(h))$$ then for $\tilde{\alpha}\in[-S_{\lambda,R,A,\infty},S_{\lambda,R,A,\infty}]$ we can define $$P_{\lambda,R,A,+}(\tilde{\alpha}):=sup(h:(R+h,A+\tilde{\alpha})\in B^{pol}_{\lambda}(R,A))$$ $$P_{\lambda,R,A,-}(\tilde{\alpha}):=inf(h:(R+h,A+\tilde{\alpha})\in B^{pol}_{\lambda}(R,A)).$$ When the values of $\lambda, R$ and $A$ are clear by context, we will just write $S(h),S_{\infty},P_{+}(\tilde{\alpha})$ and $P_{-}(\tilde{\alpha})$. A property of $P_{+}(\tilde{\alpha})$ and $P_{-}(\tilde{\alpha})$ that we will need later on is that, for $R\in[\frac12,\frac32]$ and $\tilde{\alpha}\in[-S_{\lambda,R,A,\infty},S_{\lambda,R,A,\infty}]$, we have $$|P_{\lambda,R,A,+}(\tilde{\alpha})+P_{\lambda,R,A,-}(\tilde{\alpha})|\leq C \lambda^2.$$ This can be easily obtained by using that, since $$|F(R,A)-F(r,\alpha)|=|(R-r)^2+2Rr(1-\cos(A-\alpha))|^{\frac12},$$ we have $$P_{\lambda,R,A,+}(\tilde{\alpha})=\frac{-2R(1-\cos(\tilde{\alpha}))+\sqrt{(2R(1-\cos(\tilde{\alpha})))^2-4(2R^2(1-\cos(\tilde{\alpha}))-\lambda^2)}}{2}$$ $$P_{\lambda,R,A,-}(\tilde{\alpha})=\frac{-2R(1-\cos(\tilde{\alpha}))-\sqrt{(2R(1-\cos(\tilde{\alpha})))^2-4(2R^2(1-\cos(\tilde{\alpha}))-\lambda^2)}}{2},$$ so $$|P_{\lambda,R,A,+}(\tilde{\alpha})+P_{\lambda,R,A,-}(\tilde{\alpha})|=2R(1-\cos(\tilde{\alpha}))\leq C\tilde{\alpha}^2\leq C \lambda^2.$$ \subsection{Other notation} Given two sets $X,Y\subset\mathds{R}^2$, we will use $d(X,Y)$ to refer to the distance between the two, that is $$d(X,Y):=\inf_{x\in X,y\in
Y}|(x_{1}-y_{1})^2+(x_{2}-y_{2})^2|^{\frac12}.$$ Furthermore, given a function $f$ and a set $X$ we define $d(f,X)$ as $d(supp(f),X).$ Also, given a set $X$ and a point $x$ we define the set $$X-x:=\{y\in\mathds{R}^2: y+x\in X\}.$$ Working in polar coordinates, we will use the notation $$X^{pol}-(r,\alpha):=\{(\tilde{r},\tilde{\alpha})\in \mathds{R}^2:(\tilde{r}+r,\tilde{\alpha}+\alpha)\in X^{pol}\},$$ where we need to be careful since $X^{pol}-(r,\alpha)\neq (X-F(r,\alpha))^{pol}$. We will also define, for $A$ a regular enough set, $k\in\mathds{N}$ $$||f(x) 1_{A}||_{C^{k}}:=\sum_{i=0}^{k}\sum_{j=0}^{i}\text{ess-sup}_{x\in A}|\frac{\partial^{i}f(x)}{\partial^{j}x_{1}\partial^{i-j} x_{2}}|,$$ $$||f(x)1_{A}||_{H^{k}}:=\sum_{i=0}^{k}\sum_{j=0}^{i} (\int_{A} (\frac{\partial^{i}f(x)}{\partial^{j}x_{1}\partial^{i-j} x_{2}})^2 dx)^{\frac12}.$$ Finally we will use the notation $$|f|_{C^{k,\beta}}:=\sum_{i=0}^{k}sup_{y_{1},y_{2},h_{1},h_{2}\in \mathds{R}}\frac{|\frac{\partial^{k} f}{\partial^{i}x_{1}\partial^{k-i} x_{2}}(y_{1},y_{2})-\frac{\partial^{k} f}{\partial^{i}x_{1}\partial^{k-i} x_{2}}(y_{1}+h_{1},y_{2}+h_{2})|}{|h_{1}^{2}+h_{2}^{2}|^{\frac{\beta}{2}}}.$$ \subsection{The velocity} We will be considering $\gamma$-SQG, so our scalar $w$ will be transported with a velocity given by $$v_{\gamma}(w(.))(x)=C(\gamma)P.V. \int_{\mathds{R}^2} \frac{(x-y)^{\perp}w(y)}{|x-y|^{3+\gamma}}dy_{1}dy_{2}.$$ Since the results are independent of the specific value of $C(\gamma)$, we will just assume $C(\gamma)=1$. Furthermore we will use the notation $$v_{1,\gamma}(w(.))(x)=v_{\gamma}\cdot(1,0)=P.V. \int_{\mathds{R}^2} \frac{(y_{2}-x_{2})w(y)}{|x-y|^{3+\gamma}}dy_{1}dy_{2},$$ $$v_{2,\gamma}(w(.))(x)=v_{\gamma}\cdot(0,1)=P.V. \int_{\mathds{R}^2} \frac{(x_{1}-y_{1})w(y)}{|x-y|^{3+\gamma}}dy_{1}dy_{2}.$$ The operators $v_{\gamma},v_{1,\gamma}$ and $v_{2,\gamma}$ have several useful properties that we will use later, namely the fact that they commute with the Cartesian derivatives $\frac{\partial}{\partial x_{1}}$ and $\frac{\partial }{\partial x_{2}}$ (as long as $w$ is regular enough) and also that, for $i=1,2$, $$||v_{i,\gamma}(w)||_{H^{k}}\leq C_{k,\gamma}||w||_{H^{k+\gamma}}.$$ These properties do not, in general, carry over to the operators $v_{r,\gamma}$ and $v_{\alpha,\gamma}$, which give the velocity in the radial and angular directions respectively. We can, however, obtain similar properties for these operators. We start by noting that \begin{align}\label{vrv1v2} &v_{r,\gamma}(w)=\cos(\alpha(x)) v_{1,\gamma}(w)+\sin(\alpha(x))v_{2,\gamma}(w),\\\nonumber &\\\nonumber &v_{\alpha,\gamma}(w)=\cos(\alpha(x))v_{2,\gamma}(w)-\sin(\alpha(x))v_{1,\gamma}(w),\\ \nonumber \end{align} and since $\cos(\alpha(x))$ and $\sin(\alpha(x))$ are $C^{\infty}$ if we are not close to $r=0$, we have that, for $m\in\mathds{N}$, $$||v_{r,\gamma}(w) 1_{|x|\geq \frac12}||_{H^{m}}\leq C_{m}( ||v_{1,\gamma}(w)||_{H^{m}}+||v_{2,\gamma}(w)||_{H^{m}})\leq C_{m,\gamma}||w||_{H^{m+\gamma}},$$ $$||v_{\alpha,\gamma}(w) 1_{|x|\geq \frac12}||_{H^{m}}\leq C_{m}( ||v_{1,\gamma}(w)||_{H^{m}}+||v_{2,\gamma}(w)||_{H^{m}})\leq C_{m,\gamma}||w||_{H^{m+\gamma}}.$$ Furthermore, if we differentiate with respect to $\frac{\partial}{\partial x_{i}}$, $i=1,2$, we get $$\frac{\partial v_{r,\gamma}(w)}{\partial x_{i}}= v_{r,\gamma}(\frac{\partial w}{\partial x_{i}})+\frac{\partial \cos(\alpha(x))}{\partial x_{i}} v_{1,\gamma}(w)+\frac{\partial \sin(\alpha(x))}{\partial x_{i}}v_{2,\gamma}(w),$$ $$\frac{\partial v_{\alpha,\gamma}(w)}{\partial x_{i}}=v_{\alpha,\gamma}(\frac{\partial w}{\partial x_{i}})+\frac{\partial \cos(\alpha(x))}{\partial x_{i}}v_{2,\gamma}(w)-\frac{\partial \sin(\alpha(x))}{\partial
x_{i}}v_{1,\gamma}(w).$$ With this, arguing by induction and considering only $|x|\geq \frac12$, we get that, for $m_{1},m_{2}\in \mathds{N}$, \begin{align*} &|\frac{\partial^{m_{1}+m_{2}} v_{r,\gamma}(w)}{\partial x_{1}^{m_{1}}\partial x_{2}^{m_{2}}}(x)- v_{r,\gamma}(\frac{\partial^{m_{1}+m_{2}} w}{\partial x_{1}^{m_{1}}\partial x_{2}^{m_{2}}})(x)|\\ &\leq C \sum_{k=0}^{m_{1}+m_{2}-1} \sum_{j=0}^{k} |\frac{\partial^{k} v_{1,\gamma}(w)}{\partial x_{1}^{j}\partial x_{2}^{k-j}}(x)| +|\frac{\partial^{k} v_{2,\gamma}(w)}{\partial x_{1}^{j}\partial x_{2}^{k-j}}(x)|,\\ \end{align*} \begin{align*} &|\frac{\partial^{m_{1}+m_{2}} v_{\alpha,\gamma}(w)}{\partial x_{1}^{m_{1}}\partial x_{2}^{m_{2}}}(x)- v_{\alpha,\gamma}(\frac{\partial^{m_{1}+m_{2}} w}{\partial x_{1}^{m_{1}}\partial x_{2}^{m_{2}}})(x)|\\ &\leq C \sum_{k=0}^{m_{1}+m_{2}-1} \sum_{j=0}^{k} |\frac{\partial^{k} v_{1,\gamma}(w)}{\partial x_{1}^{j}\partial x_{2}^{k-j}}(x)| +|\frac{\partial^{k} v_{2,\gamma}(w)}{\partial x_{1}^{j}\partial x_{2}^{k-j}}(x)|,\\ \end{align*} and thus \begin{align}\label{dercomr} &||(\frac{\partial^{m_{1}+m_{2}} v_{r,\gamma}(w)}{\partial x_{1}^{m_{1}}\partial x_{2}^{m_{2}}}- v_{r,\gamma}(\frac{\partial^{m_{1}+m_{2}} w}{\partial x_{1}^{m_{1}}\partial x_{2}^{m_{2}}}))1_{|x|\geq \frac12}||_{L^{\infty}}\\\nonumber &\leq C (||v_{1,\gamma}(w) 1_{|x|\geq \frac12}||_{C^{m_{1}+m_{2}-1}}+||v_{2,\gamma}(w) 1_{|x|\geq \frac12}||_{C^{m_{1}+m_{2}-1}}),\\\nonumber &||(\frac{\partial^{m_{1}+m_{2}} v_{r,\gamma}(w)}{\partial x_{1}^{m_{1}}\partial x_{2}^{m_{2}}}- v_{r,\gamma}(\frac{\partial^{m_{1}+m_{2}} w}{\partial x_{1}^{m_{1}}\partial x_{2}^{m_{2}}}))1_{|x|\geq \frac12}||_{L^{2}}\\\nonumber &\leq C (||v_{1,\gamma}(w) 1_{|x|\geq \frac12}||_{H^{m_{1}+m_{2}-1}}+||v_{2,\gamma}(w) 1_{|x|\geq \frac12}||_{H^{m_{1}+m_{2}-1}})\\\nonumber \end{align} \begin{align}\label{dercomalpha} &||(\frac{\partial^{m_{1}+m_{2}} v_{\alpha,\gamma}(w)}{\partial x_{1}^{m_{1}}\partial x_{2}^{m_{2}}}-
v_{\alpha,\gamma}(\frac{\partial^{m_{1}+m_{2}} w}{\partial x_{1}^{m_{1}}\partial x_{2}^{m_{2}}}))1_{|x|\geq \frac12}||_{L^{\infty}}\\\nonumber &\leq C (||v_{1,\gamma}(w) 1_{|x|\geq \frac12}||_{C^{m_{1}+m_{2}-1}}+||v_{2,\gamma}(w) 1_{|x|\geq \frac12}||_{C^{m_{1}+m_{2}-1}});\\\nonumber &||(\frac{\partial^{m_{1}+m_{2}} v_{\alpha,\gamma}(w)}{\partial x_{1}^{m_{1}}\partial x_{2}^{m_{2}}}- v_{\alpha,\gamma}(\frac{\partial^{m_{1}+m_{2}} w}{\partial x_{1}^{m_{1}}\partial x_{2}^{m_{2}}}))1_{|x|\geq \frac12}||_{L^{2}}\\\nonumber &\leq C (||v_{1,\gamma}(w) 1_{|x|\geq \frac12}||_{H^{m_{1}+m_{2}-1}}+||v_{2,\gamma}(w) 1_{|x|\geq \frac12}||_{H^{m_{1}+m_{2}-1}})\\\nonumber \end{align} with $C$ depending on $m_{1}$ and $m_{2}$. \section{Bounds for the velocity} Since we will work in polar coordinates, it will be necessary to obtain expressions for the velocity in the radial and angular direction. These expressions are, assuming $w(x)$ is a $C^1$ function with compact support, $$v^{pol}_{r,\gamma}(w)(r,\alpha)=\int_{[-r,\infty]\times[-\pi,\pi]}\frac{(r+h)^2\sin(\alpha')(w^{pol}(r+h,\alpha'+\alpha)-w^{pol}(r,\alpha))}{|h^2+2r(r+h)(1-\cos(\alpha'))|^{(3+\gamma)/2}}d\alpha'dh$$ $$v^{pol}_{\alpha,\gamma}(w)(r,\alpha)=\int_{[-r,\infty]\times[-\pi,\pi]}\frac{(r+h)(r-(r+h)\cos(\alpha'))(w^{pol}(r+h,\alpha'+\alpha)-w^{pol}(r,\alpha))}{|h^2+2r(r+h)(1-\cos(\alpha'))|^{(3+\gamma)/2}}d\alpha'dh.$$ These expressions, however, hide some cancellation of the kernel when we are far from the support of $w$. 
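Both the denominators above and the formulas for $P_{\lambda,R,A,\pm}$ in Section 2 rest on the same elementary identity: the quantity inside $|\cdot|^{(3+\gamma)/2}$ is the squared Cartesian distance between the points with polar coordinates $(r+h,\alpha+\alpha')$ and $(r,\alpha)$. A quick numerical check (an illustration, not part of the paper):

```python
import math

# 1) |F(r+h, alpha+a') - F(r, alpha)|^2 = h^2 + 2 r (r+h) (1 - cos(a'))
r, alpha, h, ap = 1.1, 0.3, 0.2, 0.4
x, y = r * math.cos(alpha), r * math.sin(alpha)
xp, yp = (r + h) * math.cos(alpha + ap), (r + h) * math.sin(alpha + ap)
cart_sq = (x - xp) ** 2 + (y - yp) ** 2
polar_sq = h ** 2 + 2 * r * (r + h) * (1 - math.cos(ap))
assert abs(cart_sq - polar_sq) < 1e-12

# 2) the roots P_+ and P_- of h^2 + 2R(1-cos(a))h + 2R^2(1-cos(a)) - lam^2 = 0
#    put the point at distance exactly lam, and |P_+ + P_-| = O(lam^2)
lam, R, a = 0.01, 1.0, 0.005
b = 2 * R * (1 - math.cos(a))
disc = math.sqrt(b ** 2 - 4 * (2 * R ** 2 * (1 - math.cos(a)) - lam ** 2))
P_plus, P_minus = (-b + disc) / 2, (-b - disc) / 2
for root in (P_plus, P_minus):
    dist = math.sqrt(root ** 2 + 2 * R * (R + root) * (1 - math.cos(a)))
    assert abs(dist - lam) < 1e-12

print(abs(P_plus + P_minus) <= 2 * lam ** 2)  # True
```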
Therefore, given a $C^1$ function $w$ with support in $B_{\lambda}(R\cos(A),R\sin(A))$, $\frac32>R>\frac12$, $\lambda\leq \frac{1}{100}$, we will use the expressions $$v^{pol}_{r,\gamma}(w)(r,\alpha)=\int_{B^{pol}_{4\lambda}(r,\alpha)-(r,\alpha)}\frac{(r+h)^2\sin(\alpha')(w^{pol}(r+h,\alpha'+\alpha)-w^{pol}(r,\alpha))}{|h^2+2r(r+h)(1-\cos(\alpha'))|^{(3+\gamma)/2}}d\alpha'dh$$ $$v^{pol}_{\alpha,\gamma}(w)(r,\alpha)=\int_{B^{pol}_{4\lambda}(r,\alpha)-(r,\alpha)}\frac{(r+h)(r-(r+h)\cos(\alpha'))(w^{pol}(r+h,\alpha'+\alpha)-w^{pol}(r,\alpha))}{|h^2+2r(r+h)(1-\cos(\alpha'))|^{(3+\gamma)/2}}d\alpha'dh$$ when $(r,\alpha)\in B^{pol}_{2\lambda}(R,A)$ and $$v^{pol}_{r,\gamma}(w)(r,\alpha)=\int_{supp(w^{pol})-(r,\alpha)}\frac{(r+h)^2\sin(\alpha')w^{pol}(r+h,\alpha'+\alpha)}{|h^2+2r(r+h)(1-\cos(\alpha'))|^{(3+\gamma)/2}}d\alpha'dh$$ $$v^{pol}_{\alpha,\gamma}(w)(r,\alpha)=\int_{supp(w^{pol})-(r,\alpha)}\frac{(r+h)(r-(r+h)\cos(\alpha'))w^{pol}(r+h,\alpha'+\alpha)}{|h^2+2r(r+h)(1-\cos(\alpha'))|^{(3+\gamma)/2}}d\alpha'dh$$ when $(r,\alpha)\notin B^{pol}_{2\lambda}(R,A)$. Although the expression for $B^{pol}_{4\lambda}(r,\alpha)$ is not simple, it will be enough for our computations to use the properties we obtained in subsection \ref{polarcoordinates}. We are particularly interested in obtaining the velocity produced by $w$ with support very concentrated around some point far from $r=0$ (say $r=1$ for simplicity), and for this we start with the following technical lemma.
\begin{lemma}\label{aproxkernel} Given $\lambda\leq \frac{1}{100}$ and a $C^1$ function $w(x)$ with $supp(w)\subset B_{\lambda}(\cos(c), \sin(c))$, $c\in\mathds{R}$, we have that if $(r,\alpha)\in B^{pol}_{2\lambda}(1,c)$ then $$|v^{pol}_{r,\gamma}(w)(r,\alpha)-\int_{B^{pol}_{4\lambda}(r,\alpha)-(r,\alpha)}\frac{r^2\alpha'(w^{pol}(r+h,\alpha'+\alpha)-w^{pol}(r,\alpha))}{|h^2+r^2(\alpha')^2|^{(3+\gamma)/2}}d\alpha'dh|\leq C ||w||_{L^{\infty}}\lambda^{1-\gamma},$$ $$|v^{pol}_{\alpha,\gamma}(w)(r,\alpha)+\int_{B^{pol}_{4\lambda}(r,\alpha)-(r,\alpha)}\frac{rh(w^{pol}(r+h,\alpha'+\alpha)-w^{pol}(r,\alpha))}{|h^2+r^2(\alpha')^2|^{(3+\gamma)/2}}d\alpha'dh|\leq C ||w||_{L^{\infty}}\lambda^{1-\gamma},$$ with $C$ depending on $\gamma$. \end{lemma} \begin{remark} The result can be extended to functions with support concentrated around a point $(r,\alpha)$ with $r\neq 0$, although then the constant will depend on the specific value of $r$. \end{remark} \begin{proof} This result is very similar to Lemma 2.1 in \cite{Zoroacordoba}, and the proof is analogous. We just need to take successive approximations of the kernel and bound the error produced by each such approximation. For example, for $(r,\alpha)\in B^{pol}_{2\lambda}(1,c)$ we have that $$|\int_{B^{pol}_{4\lambda}(r,\alpha)-(r,\alpha)}(r+h)^2\frac{\sin(\alpha')-\alpha'}{|h^2+2r(r+h)(1-\cos(\alpha'))|^{(3+\gamma)/2}}(w^{pol}(r+h,\alpha'+\alpha)-w^{pol}(r,\alpha))d\alpha'dh|$$ $$\leq \int_{B^{pol}_{4\lambda}(r,\alpha)-(r,\alpha)}(r+h)^2\frac{|\alpha'|^3|w^{pol}(r+h,\alpha'+\alpha)-w^{pol}(r,\alpha)|}{|h^2+2r(r+h)(1-\cos(\alpha'))|^{(3+\gamma)/2}}d\alpha'dh\leq C \lambda^{2-\gamma}||w||_{L^{\infty}}$$ and thus we can substitute $\sin(\alpha')$ by $\alpha'$ with an error small enough for our bounds. Repeating this process for other parts of the kernel yields the desired result.
\end{proof} \begin{lemma}\label{aproxfuncion} Given a natural number $N$ and $\frac12>\delta>0$ fulfilling $N^{-\delta}\leq \frac{1}{100}$ and $N^{-1+\delta}<\frac{1}{100}$, a function $f_{N,\delta}(x)$ with $supp(f_{N,\delta})\subset B_{N^{-1+\delta}}(\cos(c_{1}),\sin(c_{1}))$ ($c_{1}\in\mathds{R}$) satisfying $||f_{N,\delta}^{pol}||_{C^j}\leq MN^{j(1-\delta)}$ for $j=0,1,2$, and $1>\gamma>0$, if $w_{N,\delta}^{pol}(r,\alpha):=f_{N,\delta}^{pol}(r,\alpha)\cos(N\alpha+c_{2})$ ($c_{2}\in\mathds{R}$) then we have that for $(r,\alpha)\in B^{pol}_{2N^{-1+\delta}}(1,c_{1})$ \begin{align*} &\bigg|\int_{B^{pol}_{4N^{-1+\delta}}(r,\alpha)-(r,\alpha)}\frac{r^2\alpha'(w^{pol}(r+h,\alpha'+\alpha)-w^{pol}(r,\alpha))}{|h^2+r^2(\alpha')^2|^{(3+\gamma)/2}}d\alpha'dh\\ &-f_{N,\delta}^{pol}(r,\alpha)\int_{B^{pol}_{4N^{-1+\delta}}(r,\alpha)-(r,\alpha)}\frac{r^2\alpha'(\cos(N(\alpha'+\alpha)+c_{2})-\cos(N\alpha+c_{2}))}{|h^2+r^2(\alpha')^2|^{(3+\gamma)/2}}d\alpha'dh\bigg|\leq CMN^{\gamma-\delta} \\ \end{align*} \begin{align}\label{1ºvalpha} &\bigg|\int_{B^{pol}_{4N^{-1+\delta}}(r,\alpha)-(r,\alpha)}\frac{rh(w^{pol}(r+h,\alpha'+\alpha)-w^{pol}(r,\alpha))}{|h^2+r^2(\alpha')^2|^{(3+\gamma)/2}}d\alpha'dh\\\nonumber &-f_{N,\delta}^{pol}(r,\alpha)\int_{B^{pol}_{4N^{-1+\delta}}(r,\alpha)-(r,\alpha)}\frac{rh(\cos(N(\alpha'+\alpha)+c_{2})-\cos(N\alpha+c_{2}))}{|h^2+r^2(\alpha')^2|^{(3+\gamma)/2}}d\alpha'dh\bigg|\leq CMN^{\gamma-\delta} \\\nonumber \end{align} with $C$ depending on $\gamma$ and $\delta$. \end{lemma} \begin{proof} We will just consider the case $c_{1}=c_{2}=0$ for simplicity, and we will focus on obtaining (\ref{1ºvalpha}), the other inequality being analogous.
We need to find bounds for $$\bigg|\int_{B^{pol}_{4N^{-1+\delta}}(r,\alpha)-(r,\alpha)}\frac{rh(w_{N,\delta}^{pol}(r+h,\alpha'+\alpha)-f_{N,\delta}^{pol}(r,\alpha)\cos(N(\alpha'+\alpha)))}{|h^2+r^2(\alpha')^2|^{(3+\gamma)/2}}d\alpha'dh\bigg|$$ $$=\bigg|\int_{-4N^{-1+\delta}}^{4N^{-1+\delta}}\int_{-S(h)}^{S(h)}\frac{rh(f_{N,\delta}^{pol}(r+h,\alpha'+\alpha)-f_{N,\delta}^{pol}(r,\alpha))\cos(N(\alpha'+\alpha))}{|h^2+r^2(\alpha')^2|^{(3+\gamma)/2}}d\alpha'dh\bigg|$$ $$=\bigg|\int_{-4N^{-1+\delta}}^{4N^{-1+\delta}}\int_{-rS(s_{2})}^{rS(s_{2})}\frac{s_{2}(f_{N,\delta}^{pol}(r+s_{2},\frac{s_{1}}{r}+\alpha)-f_{N,\delta}^{pol}(r,\alpha))\cos(N(\frac{s_{1}}{r}+\alpha))}{|s|^{3+\gamma}}ds_{1}ds_{2}\bigg|$$ where we used the change of variables $s_{1}=r\alpha'$, $s_{2}=h$, and we define $|s|:=|s_{1}^2+s_{2}^2|^{\frac12}$. Furthermore, $$\int_{-4N^{-1+\delta}}^{4N^{-1+\delta}}\int_{-rS(s_{2})}^{rS(s_{2})}\frac{s_{2}\cos(N(\frac{s_{1}}{r}+\alpha))(f_{N,\delta}^{pol}(r+s_{2},\frac{s_{1}}{r}+\alpha)-f_{N,\delta}^{pol}(r,\alpha))}{|s|^{3+\gamma}}ds_{1}ds_{2}$$ $$=\cos(N\alpha)\int_{-4N^{-1+\delta}}^{4N^{-1+\delta}}\int_{-rS(s_{2})}^{rS(s_{2})}\frac{s_{2}\cos(\frac{N}{r}s_{1})(f_{N,\delta}^{pol}(r+s_{2},\frac{s_{1}}{r}+\alpha)-f_{N,\delta}^{pol}(r,\alpha))}{|s|^{3+\gamma}}ds_{1}ds_{2}$$ $$-\sin(N\alpha)\int_{-4N^{-1+\delta}}^{4N^{-1+\delta}}\int_{-rS(s_{2})}^{rS(s_{2})}\frac{s_{2}\sin(\frac{N}{r}s_{1})(f_{N,\delta}^{pol}(r+s_{2},\frac{s_{1}}{r}+\alpha)-f_{N,\delta}^{pol}(r,\alpha))}{|s|^{3+\gamma}}ds_{1}ds_{2}.$$ We will only check the term that is multiplied by $\cos(N\alpha)$, the other term being analogous.
We start with the contribution when $(s_{1},s_{2})\in \mathcal{A}:=\{(s_{1},s_{2}):|s_{j}|\leq \frac{4\pi r}{N},\ j=1,2\}$, which gives us $$|\int_{\mathcal{A}}\frac{s_{2}\cos(\frac{N}{r}s_{1})(f_{N,\delta}^{pol}(r+s_{2},\frac{s_{1}}{r}+\alpha)-f_{N,\delta}^{pol}(r,\alpha))}{|s|^{3+\gamma}}ds_{1}ds_{2}|$$ $$\leq CMN^{\gamma-\delta}.$$ Next we consider the integral in $$\mathcal{B}:= \{ (s_{1},s_{2}):(s_{1},s_{2})\in B^{pol}_{4N^{-1+\delta}}(r,\alpha)-(r,\alpha),|s_{1}|\leq \lfloor\frac{S(s_{2})N}{2\pi}\rfloor \frac{2\pi r}{N}\}\setminus \mathcal{A},$$ with $\lfloor \cdot \rfloor$ the integer part. We will focus on the contribution when $(s_{1},s_{2})\in \mathcal{B}\cap (s_{1}\geq \frac{4\pi r}{N},s_{2}\geq 0)$, since the other parts of the integral are bounded analogously. We start by computing the integral with respect to $s_{1}$. For this we first note that, for an integer $i$, given a $C^{2}$ function $g(x)$ and a real number $\frac{N}{r}>0$, we have $$|\int_{i \frac{2\pi r}{N}}^{(i+1)\frac{2\pi r}{N}} \cos(\frac{N}{r}x)g(x)dx|\leq (\frac{\pi r}{N})^3 (sup_{x\in(i\frac{2\pi r}{N},(i+1)\frac{2\pi r}{N})} |g''(x)|), $$ where $g''(x)$ is the second derivative of $g(x)$. This bound is obtained simply by considering a second-order Taylor expansion around the midpoint of the interval and noting that the constant and linear terms vanish.
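This oscillatory-integral bound is easy to test numerically for a concrete $C^{2}$ function (an illustration, not part of the proof):

```python
import math

# Numerical illustration of the bound
#   | int_{i*2*pi*r/N}^{(i+1)*2*pi*r/N} cos(N x / r) g(x) dx |
#        <= (pi r / N)^3 * sup |g''|
# over one full period, for a concrete C^2 function g.
N, r, i = 50.0, 1.0, 3
a, b = i * 2 * math.pi * r / N, (i + 1) * 2 * math.pi * r / N

g = lambda x: 1.0 / (1.0 + x * x)
g2 = lambda x: (6 * x * x - 2) / (1 + x * x) ** 3   # g'' computed by hand

# midpoint rule for the oscillatory integral
n = 200000
dx = (b - a) / n
integral = sum(math.cos(N * (a + (j + 0.5) * dx) / r)
               * g(a + (j + 0.5) * dx) for j in range(n)) * dx

sup_g2 = max(abs(g2(a + k * (b - a) / 1000)) for k in range(1001))
bound = (math.pi * r / N) ** 3 * sup_g2

print(abs(integral) <= bound)  # True: the constant and linear Taylor terms cancel
```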
Therefore, if $i\geq 2$, $s_{2}>0$ \begin{align*} &\bigg|\int_{i\frac{2\pi r}{N}}^{(i+1)\frac{2\pi r}{N}}\frac{\cos(\frac{N}{r}s_{1})(f_{N,\delta}^{pol}(r+s_{2},\frac{s_{1}}{r}+\alpha)-f_{N,\delta}^{pol}(r,\alpha))}{|s|^{3+\gamma}}ds_{1}\bigg|\\ &\leq (\frac{2\pi r}{N})^3 (sup_{s_{1}\in(i\frac{2\pi r}{N},(i+1)\frac{2\pi r}{N})}\bigg|\frac{d^2}{d s_{1}^2} \frac{f_{N,\delta}^{pol}(r+s_{2},\frac{s_{1}}{r}+\alpha)-f_{N,\delta}^{pol}(r,\alpha)}{|s|^{3+\gamma}}\bigg|)\\ &\leq CM(\frac{2\pi r}{N})^3\frac{1}{((\frac{i2\pi r}{N})^2+s_{2}^2)^{\frac{3+\gamma}{2}}}\Big(N^{2-2\delta}+\frac{N^{1-\delta}}{((\frac{i2\pi r}{N})^2+s_{2}^2)^{\frac12}}+\frac{N^{1-\delta}[(i+1)\frac{2\pi }{N}+s_{2}]}{((\frac{i2\pi r}{N})^2+s_{2}^2)}\Big)\\ &\leq CM(\frac{2\pi r}{N})^3\frac{1}{(\frac{i2\pi r}{N }+s_{2})^{3+\gamma}}\Big(N^{2-2\delta}+\frac{N^{1-\delta}}{(\frac{i2\pi r}{N}+s_{2})}\Big).\\ \end{align*} Adding over all the relevant values of $i$ we get \begin{align*} &\sum_{i=2}^{\lfloor \frac{S(s_{2})N}{2\pi}\rfloor}CM(\frac{2\pi r}{N})^3\frac{1}{(\frac{i2\pi r}{N}+s_{2})^{3+\gamma}}\Big(N^{2-2\delta}+\frac{N^{1-\delta}}{\frac{i2\pi r}{N}+s_{2}}\Big)\\ &\leq \int_{1}^{\infty}CM(\frac{2\pi r}{N})^3\frac{1}{(\frac{x2\pi r}{N}+s_{2})^{3+\gamma}}\Big(N^{2-2\delta}+\frac{N^{1-\delta}}{\frac{x2\pi r}{N}+s_{2}}\Big)dx\\ &\leq \frac{CM}{N^{2\delta}(\frac{2\pi r}{N}+s_{2})^{2+\gamma}}+\frac{CM}{N^{1+\delta}(\frac{2\pi r}{N}+s_{2})^{3+\gamma}},\\ \end{align*} and multiplying by $s_{2}$ and integrating with respect to $s_{2}$ we obtain $$\int_{0}^{4N^{-1+\delta}}s_{2} (\frac{CM}{N^{2\delta}(\frac{2\pi r}{N}+s_{2})^{2+\gamma}}+\frac{CM}{N^{1+\delta}(\frac{2\pi r}{N}+s_{2})^{3+\gamma}})ds_{2}\leq CMN^{\gamma-\delta}.$$ Finally, we need to bound the integral when $(s_{1},s_{2})\in \mathcal{C}:=B_{4N^{-1+\delta}}(r,\alpha)-(r,\alpha)\setminus(\mathcal{A}\cup \mathcal{B})$. 
For this we only need to use that in this set $|s|\geq 3N^{-1+\delta}$ and that $$|\int_{[-rS(s_{2}),rS(s_{2})]\setminus [-\lfloor\frac{S(s_{2})N}{2\pi}\rfloor \frac{2\pi r}{N},\lfloor\frac{S(s_{2})N}{2\pi}\rfloor \frac{2\pi r}{N}]}ds_{1}|\leq 2\frac{2\pi r}{N},$$ and therefore $$|\int_{\mathcal{C}}\frac{s_{2}\cos(\frac{N}{r}s_{1})(f_{N,\delta}^{pol}(r+s_{2},\frac{s_{1}}{r}+\alpha)-f_{N,\delta}^{pol}(r,\alpha))}{|s|^{3+\gamma}}ds_{1}ds_{2}|$$ $$\leq |\int_{-4N^{-1+\delta}}^{4N^{-1+\delta}}\frac{C |s_{2}|M }{N|N^{-1+\delta}|^{3+\gamma}}ds_{2}|\leq CMN^{\gamma-\delta-\delta\gamma}.$$ \end{proof} \begin{lemma}\label{errorvsmall} Given $\frac12>\delta>0$ and $1>\gamma>0$, for any natural number $N$ fulfilling $N^{-\delta}\leq \frac{1}{100}$ and $N^{-1+\delta}<\frac{1}{100}$ and any function $f_{N,\delta}(x)$ with $supp(f_{N,\delta})\subset B_{N^{-1+\delta}}(\cos(c_{1}),\sin(c_{1}))$ ($c_{1}\in\mathds{R}$) satisfying $||f_{N,\delta}^{pol}||_{C^j}\leq MN^{j(1-\delta)}$ for $j=0,1,2$, we have that if $w_{N,\delta}^{pol}(r,\alpha):=f_{N,\delta}^{pol}(r,\alpha)\cos(N\alpha+c_{2})$ ($c_{2}\in\mathds{R}$) then there exist constants $C,C_{\gamma}$ such that for $(r,\alpha)\in B^{pol}_{2N^{-1+\delta}}(1,c_{1})$ $$|v^{pol}_{r,\gamma}(w_{N,\delta})(r,\alpha)-N^{\gamma} f_{N,\delta}^{pol}(r,\alpha)C_{\gamma} \sin (N\alpha+c_{2})|\leq CMN^{\gamma-\delta},$$ $$|v^{pol}_{\alpha,\gamma}(w_{N,\delta})(r,\alpha)|\leq CMN^{\gamma-\delta},$$ with $C_{\gamma}\neq 0$ depending on $\gamma$ and $C$ depending on $\gamma$ and $\delta$.
\end{lemma} \begin{proof} Using Lemmas \ref{aproxkernel} and \ref{aproxfuncion} yields \begin{align*} &|v^{pol}_{r,\gamma}(w_{N,\delta})(r,\alpha)-f_{N,\delta}^{pol}(r,\alpha)\int_{B^{pol}_{4N^{-1+\delta}}(r,\alpha)-(r,\alpha)}\frac{r^2\alpha'(\cos(N(\alpha'+\alpha)+c_{2})-\cos(N\alpha+c_{2}))}{|h^2+r^2(\alpha')^2|^{(3+\gamma)/2}}d\alpha'dh|\\ &\leq CMN^{\gamma-\delta},\\ \end{align*} \begin{align*} &|v^{pol}_{\alpha,\gamma}(w_{N,\delta})(r,\alpha)-f_{N,\delta}^{pol}(r,\alpha)\int_{B^{pol}_{4N^{-1+\delta}}(r,\alpha)-(r,\alpha)}\frac{rh(\cos(N(\alpha'+\alpha)+c_{2})-\cos(N\alpha+c_{2}))}{|h^2+r^2(\alpha')^2|^{(3+\gamma)/2}}d\alpha'dh|\\ &\leq CMN^{\gamma-\delta},\\ \end{align*} and therefore it is enough to prove \begin{align}\label{vrlemmafinal} &|f_{N,\delta}^{pol}(r,\alpha)\int_{B^{pol}_{4N^{-1+\delta}}(r,\alpha)-(r,\alpha)}\frac{r^2\alpha'(\cos(N(\alpha'+\alpha)+c_{2})-\cos(N\alpha+c_{2}))}{|h^2+r^2(\alpha')^2|^{(3+\gamma)/2}}d\alpha'dh\\\nonumber &-N^{\gamma} C_{\gamma}f_{N,\delta}^{pol}(r,\alpha) \sin (N\alpha+c_{2})|\leq CMN^{\gamma-\delta},\\\nonumber \end{align} \begin{align}\label{valphalemmafinal} &|f_{N,\delta}^{pol}(r,\alpha)\int_{B^{pol}_{4N^{-1+\delta}}(r,\alpha)-(r,\alpha)}\frac{rh(\cos(N(\alpha'+\alpha)+c_{2})-\cos(N\alpha+c_{2}))}{|h^2+r^2(\alpha')^2|^{(3+\gamma)/2}}d\alpha'dh|\\\nonumber &\leq CMN^{\gamma-\delta}.\\\nonumber \end{align} We start with (\ref{valphalemmafinal}), where by using the odd symmetry of the integrand with respect to $h$ \begin{align*} &|f_{N,\delta}^{pol}(r,\alpha)\int_{B^{pol}_{4N^{-1+\delta}}(r,\alpha)-(r,\alpha)}\frac{rh(\cos(N(\alpha'+\alpha)+c_{2})-\cos(N\alpha+c_{2}))}{|h^2+r^2(\alpha')^2|^{(3+\gamma)/2}}d\alpha'dh|\\ &=|f_{N,\delta}^{pol}(r,\alpha)\int_{-S_{\infty}}^{S_{\infty}}\int_{P_{-}(\alpha')}^{P_{+}(\alpha')}\frac{rh(\cos(N(\alpha'+\alpha)+c_{2})-\cos(N\alpha+c_{2}))}{|h^2+r^2(\alpha')^2|^{(3+\gamma)/2}}dhd\alpha'|\\
&=|f_{N,\delta}^{pol}(r,\alpha)\int_{-S_{\infty}}^{S_{\infty}}\int_{P_{-}(\alpha')}^{-P_{+}(\alpha')}\frac{rh(\cos(N(\alpha'+\alpha)+c_{2})-\cos(N\alpha+c_{2}))}{|h^2+r^2(\alpha')^2|^{(3+\gamma)/2}}dhd\alpha'|\\ &\leq |M\int_{-S_{\infty}}^{S_{\infty}}\frac{CN^{-2+2\delta}}{N^{(-1+\delta)(2+\gamma)}}d\alpha'|\leq CM N^{(-1+\delta)(1-\gamma)}\leq CMN^{\gamma-\delta}\\ \end{align*} where we used that $|P_{+}(\alpha')+P_{-}(\alpha')|\leq C N^{-2+2\delta}$, $|S_{\infty}|\leq arccos(1-16\frac{N^{-2+2\delta}}{r^2})\leq C N^{-1+\delta} $ and that, for $h\in [P_{-}(\alpha'),-P_{+}(\alpha')]$ $$\frac{1}{|h^2+r^2(\alpha')^2|^{(2+\gamma)/2}}\leq \frac{C}{N^{(-1+\delta)(2+\gamma)}}.$$ For (\ref{vrlemmafinal}) we use \begin{align*} &\int_{B^{pol}_{4N^{-1+\delta}}(r,\alpha)-(r,\alpha)}\frac{r^2\alpha'(\cos(N(\alpha'+\alpha)+c_{2})-\cos(N\alpha+c_{2}))}{|h^2+r^2(\alpha')^2|^{(3+\gamma)/2}}d\alpha'dh\\ &=-\sin(N\alpha+c_{2})\int_{-4N^{-1+\delta}}^{4N^{-1+\delta}}\int_{-rS(h_{2})}^{rS(h_{2})}\frac{h_{1}\sin(N\frac{h_{1}}{r})}{|h_{1}^2+h_{2}^2|^{(3+\gamma)/2}}dh_{1}dh_{2}\\ &=-\sin(N\alpha+c_{2})\int_{\mathds{R}}\int_{\mathds{R}}\frac{h_{1}\sin(N\frac{h_{1}}{r})}{|h_{1}^2+h_{2}^2|^{(3+\gamma)/2}}dh_{1}dh_{2}\\ &+4\sin(N\alpha+c_{2})\int_{0}^{\infty}\int_{r\tilde{S}(h_{2})}^{\infty}\frac{h_{1}\sin(N\frac{h_{1}}{r})}{|h_{1}^2+h_{2}^2|^{(3+\gamma)/2}}dh_{1}dh_{2}\\ \end{align*} where we just take \begin{equation*} \tilde{S}(h)= \begin{cases} S(h), & \text{if}\ h\in[-4N^{-1+\delta},4N^{-1+\delta}] \\ 0 & \text{otherwise.} \end{cases} \end{equation*} But, we have that, for $i$ a natural number, $$|\int_{i\frac{2\pi r}{N}+r\tilde{S}(h_{2})}^{(i+1)\frac{2\pi r}{N}+r\tilde{S}(h_{2})}\frac{h_{1}\sin(N\frac{h_{1}}{r})}{|h_{1}^2+h_{2}^2|^{(3+\gamma)/2}}dh_{1}|$$ $$\leq \frac{C}{N^2}\frac{1}{|(i\frac{2\pi r}{N}+r\tilde{S}(h_{2}))^2+h_{2}^2|^{(3+\gamma)/2}}$$ and thus $$|\int_{r\tilde{S}(h_{2})}^{\infty}\frac{h_{1}\sin(N\frac{h_{1}}{r})}{|h_{1}^2+h_{2}^2|^{(3+\gamma)/2}}dh_{1}|$$ $$\leq 
\sum_{i=0}^{\infty}\frac{C}{N^2}\frac{1}{|(i\frac{2\pi r}{N}+r\tilde{S}(h_{2}))^2+h_{2}^2|^{(3+\gamma)/2}}$$ $$\leq \int_{-1}^{\infty}\frac{C}{N^2}\frac{1}{|x\frac{2\pi r}{N}+r\tilde{S}(h_{2})+h_{2}|^{(3+\gamma)}}dx$$ $$\leq \frac{C}{N|-\frac{2\pi r}{N}+r\tilde{S}(h_{2})+h_{2}|^{(2+\gamma)}}\leq \frac{C}{N|r\tilde{S}(h_{2})+h_{2}|^{(2+\gamma)}}$$ where we used that, for $h_{2}>0$ and $r\geq\frac12$, we have $r\tilde{S}(h_{2})+h_{2}\geq CN^{-1+\delta}$. But then $$|4\sin(N\alpha+c_{2})\int_{0}^{N^{-1+\delta}}\int_{r\tilde{S}(h_{2})}^{\infty}\frac{h_{1}\sin(N\frac{h_{1}}{r})}{|h_{1}^2+h_{2}^2|^{(3+\gamma)/2}}dh_{1}dh_{2}|$$ $$\leq \int_{0}^{N^{-1+\delta}}\frac{C}{N^{1+(-1+\delta)(2+\gamma)}} dh_{2}=CN^{\gamma-\delta-\delta\gamma}$$ and $$|4\sin(N\alpha+c_{2})\int_{N^{-1+\delta}}^{\infty}\int_{r\tilde{S}(h_{2})}^{\infty}\frac{h_{1}\sin(N\frac{h_{1}}{r})}{|h_{1}^2+h_{2}^2|^{(3+\gamma)/2}}dh_{1}dh_{2}|$$ $$\leq |\int_{N^{-1+\delta}}^{\infty}\frac{C}{N|h_{2}|^{(2+\gamma)}}dh_{2}|\leq CN^{\gamma-\delta-\gamma\delta},$$ and therefore \begin{align*} &|\int_{B^{pol}_{4N^{-1+\delta}}(r,\alpha)-(r,\alpha)}\frac{r^2\alpha'(\cos(N(\alpha'+\alpha)+c_{2})-\cos(N\alpha+c_{2}))}{|h^2+r^2(\alpha')^2|^{(3+\gamma)/2}}d\alpha'dh\\ &+\sin(N\alpha+c_{2})\int_{\mathds{R}}\int_{\mathds{R}}\frac{h_{1}\sin(N\frac{h_{1}}{r})}{|h_{1}^2+h_{2}^2|^{(3+\gamma)/2}}dh_{1}dh_{2}|\leq CN^{\gamma-\delta-\gamma\delta}\\ \end{align*} and combined with the approximation of $v^{pol}_{r,\gamma}$ obtained at the beginning of the proof we get \begin{align*} &|v^{pol}_{r,\gamma}(w_{N,\delta})(r,\alpha)+f_{N,\delta}^{pol}(r,\alpha)\sin(N\alpha+c_{2})\int_{\mathds{R}}\int_{\mathds{R}}\frac{h_{1}\sin(N\frac{h_{1}}{r})}{|h_{1}^2+h_{2}^2|^{(3+\gamma)/2}}dh_{1}dh_{2}|\\ &\leq CMN^{\gamma-\delta}.\\ \end{align*} Furthermore \begin{align*}
&-\sin(N\alpha+c_{2})\int_{\mathds{R}^2}\frac{h_{1}\sin(\frac{N}{r}h_{1})}{|h^{2}_{1}+h^{2}_{2}|^{\frac{3+\gamma}{2}}}dh_{1}dh_{2}=-\sin(N\alpha+c_{2})\bigg(\frac{N}{r}\bigg)^{\gamma}\int_{\mathds{R}^2}\frac{h_{1}\sin(h_{1})}{|h_{1}^{2}+h_{2}^{2}|^{\frac{3+\gamma}{2}}}dh_{1}dh_{2},\\ \end{align*} and \begin{align*} &\int_{\mathds{R}^2}\frac{h_{1}\sin(h_{1})}{|h_{1}^{2}+h_{2}^{2}|^{\frac{3+\gamma}{2}}}dh_{1}dh_{2}=\int_{-\infty}^{\infty}h_{1}\sin(h_{1})\int_{-\infty}^{\infty}\frac{1}{(h_{1}^{2}+h_{2}^{2})^{\frac{(3+\gamma)}{2}}}dh_{2}dh_{1}\\ &=\int_{-\infty}^{\infty}\frac{h_{1}\sin(h_{1})}{|h_{1}|^{2+\gamma}}\int_{-\infty}^{\infty}\frac{1}{(1+\lambda^{2})^{\frac{(3+\gamma)}{2}}}d\lambda dh_{1}=2K_{\gamma}\int_{0}^{\infty}\frac{h_{1}\sin(h_{1})}{|h_{1}|^{2+\gamma}}dh_{1}.\\ \end{align*} By using that $\frac{h_{1}}{|h_{1}|^{2+\gamma}}$ is monotone decreasing for $h_{1}>0$, $\sin(x+\pi)=-\sin(x)$, $\sin(x)>0$ if $x\in(0,\pi)$ and $K_{\gamma}>0$, we obtain $$C_{\gamma}:=-2K_{\gamma}\int_{0}^{\infty}\frac{h_{1}\sin(h_{1})}{|h_{1}|^{2+\gamma}}dh_{1}<0.$$ Thus $$|v^{pol}_{r,\gamma}(w_{N,\delta})(r,\alpha)-f_{N,\delta}^{pol}(r,\alpha)\bigg(\frac{N}{r}\bigg)^{\gamma} C_{\gamma} \sin (N\alpha+c_{2})|\leq CMN^{\gamma-\delta},$$ and since, for the values of $r$ considered, we have $$|\bigg(\frac{N}{r}\bigg)^{\gamma}-N^{\gamma}|\leq CN^{\gamma-1+\delta}\leq CN^{\gamma-\delta},$$ we are done.
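The sign of $C_{\gamma}$ is also easy to confirm numerically (an illustration, not part of the proof). For $\gamma=\frac12$ the integral $\int_{0}^{\infty}\sin(h_{1})h_{1}^{-1-\gamma}dh_{1}$ has the classical value $\sqrt{2\pi}$, which serves as a cross-check:

```python
import math

# Illustration only: I(gamma) = int_0^infty sin(h) / h^(1+gamma) dh > 0,
# so C_gamma = -2 K_gamma I(gamma) < 0. For gamma = 1/2, I = sqrt(2*pi).
def I(gamma, periods=200, n_per=2000):
    # midpoint rule on [0, periods*pi]; the neglected tail is an
    # alternating series with terms O((periods*pi)^(-1-gamma))
    n = periods * n_per
    dx = math.pi * periods / n
    return sum(math.sin((j + 0.5) * dx) / ((j + 0.5) * dx) ** (1 + gamma)
               for j in range(n)) * dx

val = I(0.5)
print(val)  # close to sqrt(2*pi) ~ 2.5066, and in particular positive
```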
\end{proof} \begin{lemma}\label{decaimiento} Given $0<\delta<\frac{1}{2}$, $0<\gamma<1$, a natural number $N$ such that $N^{-1+\delta}\leq \frac{1}{100}$ and a $C^2$ function $f_{N,\delta}$, satisfying $supp(f_{N,\delta})\subset B_{N^{-1+\delta}}(\cos(c_{1}),\sin(c_{1}))$ ($c_{1}\in\mathds{R}$) with $||f_{N,\delta}||_{C^{j}}\leq M N^{j(1-\delta)}$, $j=0,1,2$, then for any $x=(x_{1},x_{2})=(R\cos(A),R\sin(A))\in \mathds{R}^2\setminus B_{2N^{-1+\delta}}(\cos(c_{1}),\sin(c_{1}))$ we have that $$|v^{pol}_{r,\gamma}(f_{N,\delta}(r,\alpha)\sin(N\alpha))(R,A)|\leq C\frac{M}{|d(x,f_{N,\delta})|^{2+\gamma}} N^{-2+\delta},$$ $$|v^{pol}_{\alpha,\gamma}(f_{N,\delta}(r,\alpha)\sin(N\alpha))(R,A)|\leq C\frac{M}{|d(x,f_{N,\delta})|^{2+\gamma}} N^{-2+\delta}$$ with $C$ depending only on $\gamma$. Furthermore, if $f_{N,\delta}\in C^{k+2}$ for $k$ an integer $k\geq 1$ and $||f_{N,\delta}(r,\alpha)||_{C^{j}}\leq M N^{j(1-\delta)}$ for $j=0,1,...,k$ then we have $$|\frac{\partial^{j}v^{pol}_{r,\gamma}(f_{N,\delta}(r,\alpha)\sin(N\alpha))(R,A)}{\partial x_{1}^{l}\partial x_{2}^{j-l}}|\leq C\frac{M}{|d(x,f_{N,\delta})|^{2+\gamma}} N^{-2+\delta+j},$$ $$|\frac{\partial^{j}v^{pol}_{\alpha,\gamma}(f_{N,\delta}(r,\alpha)\sin(N\alpha))(R,A)}{\partial x_{1}^{l}\partial x_{2}^{j-l}}|\leq C\frac{M}{|d(x,f_{N,\delta})|^{2+\gamma}} N^{-2+\delta+j}$$ for $j=0,1,...,k+2$, $l=0,1,...,j$, with $C$ depending on $\gamma$ and $j$. \end{lemma} \begin{proof} We will consider $c_{1}=0$ for simplicity and we will obtain the expression only for $v_{r,\gamma}$, $v_{\alpha,\gamma}$ being equivalent. 
That is to say, we want to compute \begin{align*} &\int_{supp(f^{pol}_{N,\delta})}\frac{(r')^2\sin(\alpha'-A)}{|(R-r')^2+2Rr'(1-\cos(A-\alpha'))|^{(3+\gamma)/2}}f_{N,\delta}(r',\alpha')\sin(N\alpha')d\alpha'dr'\\ &=\cos(NA)\int_{supp(f^{pol}_{N,\delta})}\frac{(r')^2\sin(\alpha'-A)f_{N,\delta}(r',\alpha')\sin(N\alpha'-NA)}{|(R-r')^2+2Rr'(1-\cos(A-\alpha'))|^{(3+\gamma)/2}}d\alpha'dr'\\ &+\sin(NA)\int_{supp(f^{pol}_{N,\delta})}\frac{(r')^2\sin(\alpha'-A)f_{N,\delta}(r',\alpha')\cos(N\alpha'-NA)}{|(R-r')^2+2Rr'(1-\cos(A-\alpha'))|^{(3+\gamma)/2}}d\alpha'dr'\\ &=\cos(NA)\int_{supp(f^{pol}_{N,\delta})-(0,A)}\frac{(r')^2\sin(\bar{\alpha})f_{N,\delta}(r',\bar{\alpha}+A)\sin(N\bar{\alpha})}{|(R-r')^2+2Rr'(1-\cos(\bar{\alpha}))|^{(3+\gamma)/2}}d\bar{\alpha}dr'\\ &+\sin(NA)\int_{supp(f^{pol}_{N,\delta})-(0,A)}\frac{(r')^2\sin(\bar{\alpha})f_{N,\delta}(r',\bar{\alpha}+A)\cos(N\bar{\alpha})}{|(R-r')^2+2Rr'(1-\cos(\bar{\alpha}))|^{(3+\gamma)/2}}d\bar{\alpha}dr'\\ \end{align*} with $f_{N,\delta}$, $R$ and $A$ as in the hypothesis of the lemma. We will focus on the part depending on $\cos(NA)$, the other term being analogous.
First, since $r'\in(\frac12,\frac32)$, a second-order Taylor expansion and some computations give us \begin{align*} &|\int_{i\frac{2\pi}{N}+\frac{\pi}{2N}}^{(i+1)\frac{2\pi}{N}+\frac{\pi}{2N}}\frac{(r')^2\sin(\bar{\alpha})}{|(R-r')^2+2Rr'(1-\cos(\bar{\alpha}))|^{(3+\gamma)/2}}f_{N,\delta}(r',\bar{\alpha}+A)\sin(N\bar{\alpha})d\bar{\alpha}|\\ &\leq \int_{i\frac{2\pi}{N}+\frac{\pi}{2N}}^{(i+1)\frac{2\pi}{N}+\frac{\pi}{2N}}\bigg(\frac{2\pi}{N}\bigg)^2 \sup_{\bar{\alpha}\in[i\frac{2\pi}{N}+\frac{\pi}{2N},(i+1)\frac{2\pi}{N}+\frac{\pi}{2N}]}\Big(\Big|\frac{\partial^2}{\partial \bar{\alpha}^2}\frac{(r')^2\sin(\bar{\alpha})f_{N,\delta}(r',\bar{\alpha}+A)}{|(R-r')^2+2Rr'(1-\cos(\bar{\alpha}))|^{(3+\gamma)/2}}\Big|\Big)\\ &\times|\sin(N\bar{\alpha})|d\bar{\alpha}\\ &\leq C\bigg(\frac{2\pi}{N}\bigg)^3\Big(\frac{||f_{N,\delta}(r,\alpha)||_{C^{2}}}{|(R-r')^2+2Rr'(1-\cos(\bar{\alpha}))|^{(2+\gamma)/2}}+\frac{||f_{N,\delta}(r,\alpha)||_{C^{1}}}{|(R-r')^2+2Rr'(1-\cos(\bar{\alpha}))|^{(3+\gamma)/2}}\\ &+\frac{||f_{N,\delta}(r,\alpha)||_{L^{\infty}}}{|(R-r')^2+2Rr'(1-\cos(\bar{\alpha}))|^{(4+\gamma)/2}}\Big).\\ \end{align*} Using that, for $(r',\bar{\alpha})\in supp(f^{pol}_{N,\delta})-(0,A)$ $$|(R-r')^2+2Rr'(1-\cos(\bar{\alpha}))|^{\frac12}\geq d((R,A),f_{N,\delta})$$ $$d((R,A),f_{N,\delta})\geq N^{-1+\delta}$$ and the properties of $f_{N,\delta}$, we then get that \begin{align*} &|\int_{i\frac{2\pi}{N}+\frac{\pi}{2N}}^{(i+1)\frac{2\pi}{N}+\frac{\pi}{2N}}\frac{(r')^2\sin(\bar{\alpha})}{|(R-r')^2+2Rr'(1-\cos(\bar{\alpha}))|^{(3+\gamma)/2}}f_{N,\delta}(r',\bar{\alpha}+A)\sin(N\bar{\alpha})d\bar{\alpha}|\\ &\leq \frac{CM N^{-1-2\delta}}{d((R,A),f_{N,\delta})^{2+\gamma}}\\ \end{align*} so that \begin{align*} &\Big|\int_{1-N^{-1+\delta}}^{1+N^{-1+\delta}}\int_{-\lfloor\frac{S(r')N}{2\pi}\rfloor \frac{2\pi}{N}+\frac{\pi}{2N}}^{\lfloor\frac{S(r')N}{2\pi}\rfloor \frac{2\pi}{N}-\frac{3\pi}{2N}}
\frac{(r')^2\sin(\bar{\alpha})}{|(R-r')^2+2Rr'(1-\cos(\bar{\alpha}))|^{(3+\gamma)/2}}f_{N,\delta}(r',\bar{\alpha}+A)\sin(N\bar{\alpha})d\bar{\alpha}dr'\Big|\\ &\leq \int_{1-N^{-1+\delta}}^{1+N^{-1+\delta}}\frac{CM N^{-1-\delta}}{d((R,A),f_{N,\delta})^{2+\gamma}}dr'\leq \frac{CM }{N^2 d((R,A),f_{N,\delta})^{2+\gamma}}.\\ \end{align*} As for the rest of the integral we have \begin{align*} &\Big|\int_{1-N^{-1+\delta}}^{1+N^{-1+\delta}}\int_{\lfloor\frac{S(r')N}{2\pi}\rfloor \frac{2\pi}{N}-\frac{3\pi}{2N}}^{S(r')} \frac{(r')^2\sin(\bar{\alpha})}{|(R-r')^2+2Rr'(1-\cos(\bar{\alpha}))|^{(3+\gamma)/2}}f_{N,\delta}(r',\bar{\alpha}+A)\sin(N\bar{\alpha})d\bar{\alpha}dr'\Big|\\ &\leq \int_{1-N^{-1+\delta}}^{1+N^{-1+\delta}}\frac{CM N^{-1}}{d((R,A),f_{N,\delta})^{2+\gamma}}dr'\leq \frac{CM N^{\delta} }{N^2 d((R,A),f_{N,\delta})^{2+\gamma}},\\ \end{align*} and \begin{align*} &\Big|\int_{1-N^{-1+\delta}}^{1+N^{-1+\delta}}\int_{-S(r')}^{-\lfloor\frac{S(r')N}{2\pi}\rfloor \frac{2\pi}{N}+\frac{\pi}{2N}} \frac{(r')^2\sin(\bar{\alpha})}{|(R-r')^2+2Rr'(1-\cos(\bar{\alpha}))|^{(3+\gamma)/2}}f_{N,\delta}(r',\bar{\alpha}+A)\sin(N\bar{\alpha})d\bar{\alpha}dr'\Big|\\ &\leq \int_{1-N^{-1+\delta}}^{1+N^{-1+\delta}}\frac{CM N^{-1}}{d((R,A),f_{N,\delta})^{2+\gamma}}dr'\leq \frac{CM N^{\delta} }{N^2 d((R,A),f_{N,\delta})^{2+\gamma}}\\ \end{align*} and we are done. 
To obtain the result for the derivatives, we first note that since \begin{align}\label{v1v2rad} &v_{1,\gamma}(w)=\cos(\alpha)v_{r,\gamma}(w)-\sin(\alpha)v_{\alpha,\gamma}(w)\\\nonumber &v_{2,\gamma}(w)=\sin(\alpha)v_{r,\gamma}(w)+\cos(\alpha)v_{\alpha,\gamma}(w)\\\nonumber \end{align} then for $x=(x_{1},x_{2})=(R\cos(A),R\sin(A))$ $$|v^{pol}_{1,\gamma}(f_{N,\delta}(r,\alpha)\sin(N\alpha))(R,A)|\leq C\frac{M}{|d(x,f_{N,\delta})|^{2+\gamma}} N^{-2+\delta},$$ $$|v^{pol}_{2,\gamma}(f_{N,\delta}(r,\alpha)\sin(N\alpha))(R,A)|\leq C\frac{M}{|d(x,f_{N,\delta})|^{2+\gamma}} N^{-2+\delta}.$$ Furthermore, differentiation commutes with the operators $v_{1,\gamma}$ and $v_{2,\gamma}$, so we can prove that $$|\frac{\partial^{j}v^{pol}_{1,\gamma}(f_{N,\delta}(r,\alpha)\sin(N\alpha))(R,A)}{\partial x_{1}^{l}\partial x_{2}^{j-l}}|\leq C\frac{M}{|d(x,f_{N,\delta})|^{2+\gamma}} N^{-2+\delta+j},$$ $$|\frac{\partial^{j}v^{pol}_{2,\gamma}(f_{N,\delta}(r,\alpha)\sin(N\alpha))(R,A)}{\partial x_{1}^{l}\partial x_{2}^{j-l}}|\leq C\frac{M}{|d(x,f_{N,\delta})|^{2+\gamma}} N^{-2+\delta+j}$$ by differentiating $f_{N,\delta}(r,\alpha)\sin(N\alpha)$ and applying our lemma to each individual term.
Then, using (\ref{vrv1v2}) and computing $\frac{\partial^{j}}{\partial x_{1}^{l}\partial x_{2}^{j-l}} v_{r,\gamma}$, $\frac{\partial^{j}}{\partial x_{1}^{l}\partial x_{2}^{j-l}} v_{\alpha,\gamma}$ we obtain, for $r\geq\frac12$, that \begin{align*} &\Big|\frac{\partial^{j}}{\partial x_{1}^{l}\partial x_{2}^{j-l}} v_{r,\gamma}(R,A)\Big|\leq C\Big( \sum_{k=0}^{j}\sum_{l=0}^{k} |\frac{\partial^{k} v_{1,\gamma}}{\partial x_{1}^{l}\partial x_{2}^{k-l}} (R,A)|+|\frac{\partial^{k} v_{2,\gamma}}{\partial x_{1}^{l}\partial x_{2}^{k-l}} (R,A)|\Big)\\ &\leq C\frac{M}{|d(x,f_{N,\delta})|^{2+\gamma}} N^{-2+\delta+j},\\ \end{align*} \begin{align*} &\Big|\frac{\partial^{j}}{\partial x_{1}^{l}\partial x_{2}^{j-l}} v_{\alpha,\gamma}(R,A)\Big|\leq C\Big( \sum_{k=0}^{j}\sum_{l=0}^{k} |\frac{\partial^{k} v_{1,\gamma}}{\partial x_{1}^{l}\partial x_{2}^{k-l}} (R,A)|+|\frac{\partial^{k} v_{2,\gamma}}{\partial x_{1}^{l}\partial x_{2}^{k-l}} (R,A)|\Big)\\ &\leq C\frac{M}{|d(x,f_{N,\delta})|^{2+\gamma}} N^{-2+\delta+j},\\ \end{align*} and we are done. \end{proof} \section{Pseudo-solutions considered and their properties} To obtain ill-posedness for the space $C^{k,\beta}$ for $\gamma$-SQG, we will add perturbations to a radial solution $f(r)$ (with $f(r)$ chosen so that it has some specific properties).
These perturbations will be of the form \begin{equation}\label{pertbase} \lambda\sum_{l=0}^{L-1} f(N^{1-\delta}(r-1),N^{1-\delta}\alpha)\frac{\cos(N(M+l)(\alpha-\alpha^{1})+\alpha^{2}+\frac{k\pi}{2})}{L(NM)^{k+\beta}}, \end{equation} with \begin{itemize} \item $f(r-1,\alpha)=g(r-1)g(\alpha)$, $g$ a positive $C^\infty$ function with support in $[-\frac12,\frac12]$ and such that $g(x)=1$ if $ x\in[-\frac14,\frac14]$ and $||f(r-1,\alpha)||_{C^{j}}\leq 100^{j}$, \item $M,N,\lambda>0$, $\delta\in(0,\frac{1}{2})$, $L\in\mathds{N}$ and $\alpha^{1},\alpha^{2}\in \mathds{R}$, \item $N^{\delta}\geq 100$, $N^{1-\delta}\geq 100$, \item $k\in\mathds{N}$, $\beta\in(0,1]$, $\gamma\in(0,1)$, \item $k+\beta>1+2\delta+\gamma$, \item $L<\frac{M}{2}$. \end{itemize} For compactness of notation, whenever we have $f,\delta,N,L,M,\lambda$ satisfying these properties we will say that they satisfy the usual conditions. From now on we will consider $k$, $\beta$, $\gamma$ and $\delta$ fixed satisfying these properties, just so that we can avoid extra sub-indexes for these parameters. Due to this, one needs to keep in mind that in general the constants in the lemmas obtained might depend on the specific values of $k$, $\beta$, $\gamma$ and $\delta$. Before we study how this kind of perturbation will evolve with time, we start by obtaining some basic properties regarding the norms of (\ref{pertbase}).
\begin{lemma}\label{normasckpert} Given a perturbation as in (\ref{pertbase}), which we will refer to as $w_{k,\beta}$, with $f,\delta,N,L,M,\lambda$ satisfying the usual conditions, we have that $$||w_{k,\beta}||_{C^{j}}\leq C K_{j}\lambda (NM)^{j-k-\beta}$$ $$|\frac{\partial^{k}w_{k,\beta}(r,\alpha)}{\partial^{k-i} x_{1}\partial^{i}x_{2}}|\leq \frac{C\lambda}{L |\sin(N\frac{\alpha-\alpha^{1}}{2})|(NM)^{\beta}}+C\lambda (NM)^{-\delta-\beta}+\frac{C\lambda L}{M(NM)^{\beta}}$$ $$|\frac{\partial^{k+1}w_{k,\beta}(r,\alpha)}{\partial^{k+1-i} x_{1}\partial^{i}x_{2}}|\leq \frac{C\lambda(NM)^{1-\beta}}{L |\sin(N\frac{\alpha-\alpha^{1}}{2})|}+C\lambda (NM)^{-\delta-\beta+1}+\frac{C\lambda(NM)^{1-\beta}L}{M}$$ with $C$ a constant depending on $f$ and $K_{j}$ constants depending on $j$. \end{lemma} \begin{proof} The bounds for the $C^{j}$ norms can be obtained directly by using that, for functions with support concentrated around $r=1$, we have that $$||f(x_{1},x_{2})||_{C^{j}}\leq K_{j}||f^{pol}(r,\alpha)||_{C^{j}}$$ and the bounds for the derivatives of $w^{pol}_{k,\beta}$ can be obtained by direct computation.
For the other two inequalities, we have that \begin{align*} &|\frac{\partial^{k}w_{k,\beta}(r,\alpha)}{\partial^{k-i} x_{1}\partial^{i}x_{2}}|\leq K_{k}||w_{k,\beta}^{pol}(r,\alpha)||_{C^{k}}\leq C\lambda (NM)^{-\beta-\delta}\\ &+C\lambda|\sum_{l=0}^{L-1}f(N^{1-\delta}(r-1),N^{1-\delta}\alpha)\frac{\partial^k}{\partial \alpha^{k}}(\frac{\cos(N(M+l)(\alpha-\alpha^{1})+\alpha^{2}+\frac{k\pi}{2})}{L(NM)^{k+\beta}})|\\ &\leq C\lambda (NM)^{-\beta-\delta}+\frac{C\lambda L}{M(NM)^{\beta}}\\ &+C\lambda|\sum_{l=0}^{L-1}f(N^{1-\delta}(r-1),N^{1-\delta}\alpha)\frac{\cos(N(M+l)(\alpha-\alpha^{1})+\alpha^{2})}{L(NM)^{\beta}}|\\ \end{align*} but we can compute $\sum_{l=0}^{L-1}\cos(N(M+l)(\alpha-\alpha^{1})+\alpha^{2})$ as $$\sum_{l=0}^{L-1}\cos(N(M+l)(\alpha-\alpha^{1})+\alpha^{2})$$ $$=\frac{\sin(\frac{NL(\alpha-\alpha^{1})}{2})}{\sin(N\frac{\alpha-\alpha^{1}}{2})}\cos(NM(\alpha-\alpha^{1})+\alpha^{2}+\frac{N(L-1)(\alpha-\alpha^{1})}{2})$$ (this identity follows by summing the geometric series $\sum_{l=0}^{L-1}e^{i(N(M+l)(\alpha-\alpha^{1})+\alpha^{2})}$ and taking real parts), which gives us \begin{align*} &|\frac{\partial^{k}w_{k,\beta}(r,\alpha)}{\partial^{k-i} x_{1}\partial^{i}x_{2}}|\\ &\leq C\lambda (NM)^{-\beta-\delta}+\frac{C\lambda L}{M(NM)^{\beta}}+\frac{C\lambda }{|\sin(N\frac{\alpha-\alpha^{1}}{2})|L(NM)^{\beta}}.\\ \end{align*} The proof with $k+1$ derivatives is done analogously. \end{proof} This lemma tells us that these perturbations behave similarly to wave packets, with their amplitude and derivatives decreasing as one gets further from $\alpha^{1}+j\frac{2\pi}{N}$. We will use this property to obtain upper bounds for the norms of these perturbations when several of them are placed appropriately far away from each other. For this we first need a short technical lemma.
\begin{lemma}\label{interpolacioncalfa} Given a $C^1$ function $f(x):\mathds{R}\rightarrow \mathds{R}$ with $||f(x)||_{L^{\infty}}\leq M_{1}$ and $||f'(x)||_{L^{\infty}}\leq M_{2}$, we have that, for any $x,h\in \mathds{R}$, $\beta\in(0,1)$ $$\frac{|f(x)-f(x+h)|}{|h|^{\beta}}\leq 2^{1-\beta}M_{1}^{1-\beta}M_{2}^{\beta}.$$ \end{lemma} \begin{proof} We have the two trivial bounds \begin{equation*} \frac{|f(x)-f(x+h)|}{|h|^{\beta}}\leq \frac{2M_{1}}{|h|^{\beta}} \end{equation*} \begin{equation*} \frac{|f(x)-f(x+h)|}{|h|^{\beta}}\leq \frac{|h|M_{2}}{|h|^{\beta}}, \end{equation*} and thus it is enough to find a bound for $$\sup_{h\in \mathds{R}}\Big(\min\Big(\frac{2M_{1}}{|h|^{\beta}},\frac{|h|M_{2}}{|h|^{\beta}}\Big)\Big).$$ But it is easy to see that the supremum is attained when $\frac{2M_{1}}{|h|^{\beta}}=\frac{|h|M_{2}}{|h|^{\beta}}$. Since this happens when $|h|=\frac{2M_{1}}{M_{2}}$, substituting $|h|$ in any of the upper bounds gives us $$\frac{|f(x)-f(x+h)|}{|h|^{\beta}}\leq \frac{2M_{1}}{\big(\frac{2M_{1}}{M_{2}}\big)^{\beta}}=(2M_{1})^{1-\beta}M_{2}^{\beta}.$$ \end{proof} Now we are ready to prove decay in space of the functions that we use as perturbations.
\begin{lemma}\label{normackbetapert} Given a function $g(x)$ of the form $$g^{pol}(r,\alpha)=\sum_{j=1}^{J}\lambda_{j}\sum_{l=0}^{L-1} f(N^{1-\delta}(r-1),N^{1-\delta}\alpha)\frac{\cos(N(M_{j}+l)(\alpha-\alpha_{j}^{1})+\alpha_{j}^{2}+\frac{k\pi}{2})}{JL(NM_{j})^{k+\beta}}$$ where $f,\delta,N,L,M_{j},\lambda_{j}$ satisfy the usual conditions and with $\alpha^{1}_{j}\in[c\frac{\pi}{N},\frac{\pi}{N}]$ and $|\alpha^{1}_{j_{1}}-\alpha^{1}_{j_{2}}|\geq c\frac{\pi}{N}$ for some $c>0$ and $\frac{M_{j_{1}}}{M_{j_{2}}}\leq 2$ for $j_{1},j_{2}\in\{1,2,...,J\}$ then we have that $$|g|_{C^{k,\beta}}\leq C\bar{\lambda}(\frac{1}{J}+\frac{1}{(N\bar{M})^{\delta}}+\frac{L}{\bar{M}}+\frac{1}{c L})$$ with $C$ depending on $k$, $\beta$ and $\delta$ and where $\bar{M}:=\sup_{j=1,...,J}(M_{j})$, $\bar{\lambda}:=\sup_{j=1,...,J}(\lambda_{j}).$ \end{lemma} \begin{proof} We will compute bounds for the seminorm $|\cdot|_{C^{0,\beta}}$ of an arbitrary k-th derivative of $g$, and we will refer to it simply as $g^{(k)}(x)$ since the specific derivative we consider is irrelevant for the proof, and we will use $d^{k}$ as notation for the specific k-th derivative for the same reason. We start by obtaining bounds for $||g^{(k)}||_{L^{\infty}}$. Since $|\alpha^{1}_{j_{1}}-\alpha^{1}_{j_{2}}|\geq c\frac{\pi}{N}$ and $\alpha^{1}_{j}\in [c\frac{\pi}{N},\frac{\pi}{N}]$, we have that for any $\alpha$ there is at most one $j$ with \begin{equation}\label{jcercano} \min_{n\in \mathds{Z}}|\alpha-\alpha^{1}_{j}-\frac{\pi n}{N}|< \frac{c\pi}{2N}. \end{equation} For simplicity, assume that $j=1$ fulfils (\ref{jcercano}) (the proof when other values of $j$ or no value of $j$ fulfil (\ref{jcercano}) is equivalent).
Then, using lemma \ref{normasckpert} we obtain \begin{align*} &|g^{(k)}(r,\alpha)|\leq \\ &\leq C|\lambda_{1}\sum_{l=0}^{L-1}d^{k}\big( f(N^{1-\delta}(r-1),N^{1-\delta}\alpha)\frac{\cos(N(M_{1}+l)(\alpha-\alpha_{1}^{1})+\alpha_{1}^{2}+\frac{k\pi}{2})}{JL(NM_{1})^{k+\beta}}\big)|\\ &+C|\sum_{j=2}^{J}\lambda_{j}\sum_{l=0}^{L-1} d^{k}\big(f(N^{1-\delta}(r-1),N^{1-\delta}\alpha)\frac{\cos(N(M_{j}+l)(\alpha-\alpha_{j}^{1})+\alpha_{j}^{2}+\frac{k\pi}{2})}{JL(NM_{j})^{k+\beta}}\big)|\\ &\leq \frac{C\bar{\lambda}}{J(N\bar{M})^{\beta}}+\frac{C\bar{\lambda}}{(N\bar{M})^{\beta+\delta}}+\frac{C\bar{\lambda} L}{\bar{M}(N\bar{M})^{\beta}}+\frac{C\bar{\lambda} }{|\sin(\frac{c\pi}{2})|L(N\bar{M})^{\beta}}\\ &\leq \frac{C\bar{\lambda}}{J(N\bar{M})^{\beta}}+\frac{C\bar{\lambda}}{(N\bar{M})^{\beta+\delta}}+\frac{C\bar{\lambda} L}{\bar{M}(N\bar{M})^{\beta}}+\frac{C\bar{\lambda} }{c L(N\bar{M})^{\beta}}\\ &=\frac{C\bar{\lambda}}{(\bar{M}N)^{\beta}}(\frac{1}{J}+\frac{1}{(N\bar{M})^{\delta}}+\frac{L}{\bar{M}}+\frac{1}{c L}).\\ \end{align*} Arguing the same way for an arbitrary $(k+1)$-th derivative we obtain \begin{align*} &|g^{(k+1)}(r,\alpha)| \\ &\leq N\bar{M} \frac{C\bar{\lambda}}{(\bar{M}N)^{\beta}}(\frac{1}{J}+\frac{1}{(N\bar{M})^{\delta}}+\frac{L}{\bar{M}}+\frac{1}{c L}).\\ \end{align*} Direct application of lemma \ref{interpolacioncalfa} then gives us $$\frac{|g^{(k)}(x)-g^{(k)}(x+h)|}{|h|^{\beta}}\leq C\bar{\lambda}(\frac{1}{J}+\frac{1}{(N\bar{M})^{\delta}}+\frac{L}{\bar{M}}+\frac{1}{c L}).$$ \end{proof} With this out of the way, we are ready to define the pseudo-solutions that we will use to prove ill-posedness.
Namely, we define \begin{align} &\bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}}(r,\alpha,t):= \lambda_{0} f_{1}(r)+ \sum_{j=1}^{J}\sum_{l=0}^{L-1}\bigg(\lambda_{j}f_{2}(N^{1-\delta}(r-1),N^{1-\delta}(\alpha-t\lambda_{0} v_{\alpha,\gamma}(f_{1})(r=1)))\\ \nonumber & \frac{\cos(N(M_{j}+l)(\alpha-\alpha_{j}^{1}-t\lambda_{0} v_{\alpha,\gamma}(f_{1})(r=1))+\alpha_{j}^{2}+\frac{k\pi}{2}+t\lambda_{0} C_{\gamma}N^{\gamma}(M_{j}+l)^{\gamma})}{JL(NM_{j})^{k+\beta}}\bigg)\\ \nonumber \end{align} with $$M_{j}=M(1+\frac{j}{J}),\ \lambda_{0}=\frac{\pi M^{1-\gamma}}{2\tilde{t}N^{\gamma}C_{\gamma}\gamma},\ \lambda_{j}=\lambda (1+\frac jJ)^{\beta}\text{ for } j=1,...,J,$$ $$\ \alpha^{1}_{j}=\frac{\pi}{2N}(1+\frac{j}{J})^{-1+\gamma}-\tilde{t}\lambda_{0} v_{\alpha,\gamma}(f_{1})(r=1),\ \alpha^{2}_{j}=-(\frac{1}{\gamma}-1)\frac{\pi}{2} M(1+\frac{j}{J})^{\gamma}.$$ The functions $f_{1}(r)$ and $f_{2}(r-1,\alpha)$ and the values $k,\beta,\gamma,\delta,\lambda,N,M,J,L$ and $\tilde{t}$ will fulfil the following properties: \begin{itemize} \item $\lambda,N,M,J,L,\tilde{t}>0$, $\delta\in(0,\frac12),\gamma\in(0,1)$ and $L,J,M\in\mathds{N}$, $\frac{M}{J}\in\mathds{N}$, \item $f_{2}(r-1,\alpha)=g(r-1)g(\alpha)$, $g$ a positive $C^\infty$ function with support in $[-\frac12,\frac12]$ and such that $g(x)=1$ if $ x\in[-\frac14,\frac14]$ and $||f_{2}(r-1,\alpha)||_{C^{j}}\leq 100^{j}$, \item $N^{\delta}\geq 100$, $N^{1-\delta}\geq 100$, $\lambda_{0}\leq 1$ (i.e. $N^{\gamma}\geq\frac{\pi M^{1-\gamma}}{2\tilde{t}C_{\gamma}\gamma}$), \item $k\in\mathds{N}$, $\beta\in(0,1]$, $\gamma\in(0,1)$, \item $k+\beta>1+2 \delta+\gamma$, \item $L<\frac{M}{2}$, \item $\frac{\partial^i \frac{v^{pol}_{\alpha,\gamma}(f_{1})}{r}}{\partial r^i}(r=1)=0$ for $i=1,2$, \item $\frac{\partial f_{1}}{\partial r}=1$ if $r\in[\frac{3}{4},\frac{5}{4}]$, \item $supp(f_{1})\subset \{r:r\in(\frac{1}{2},K_{\gamma})\}$ for some $K_{\gamma}$ depending only on $\gamma$.
\end{itemize} As before, to avoid extra sub-indexes we consider $k,\beta,\delta$ and $\gamma$ to be fixed, but all the results will apply as long as they fulfil the restrictions mentioned. The constants appearing in the lemmas might depend on our specific choice but the final results will not. However, it is not immediately obvious whether the conditions we impose on $f_{1}$ are too restrictive, so we need the following lemma to assure us that an $f_{1}$ with the desired properties exists. \begin{lemma}\label{valphaarb} There exists a $C^{\infty}$ compactly supported function $g(.):[0,\infty)\rightarrow\mathds{R}$ with support in $ (2,\infty)$ such that $\frac{\partial^{i}\frac{ v_{\alpha,\gamma}(g(.))(r)}{r}}{\partial r^{i}}(r=1)=a_{i}$ with $i=1,2$ and $a_{i}$ arbitrary. \end{lemma} We will omit the proof of this lemma since it is completely equivalent to that of lemma 2.5 in \cite{Zoroacordoba}. With this, the existence of the desired $f_{1}$ is easy to prove, since we can just choose some $C^{\infty}$ $\tilde{f}$ with support in $(\frac{1}{2},2)$ with the desired derivative for $r\in[\frac34,\frac54]$ and then add some other $C^{\infty}$ function given by lemma \ref{valphaarb} to cancel out the derivatives of $\frac{v_{\alpha,\gamma}(f_{1})}{r}$ at $r=1$. Our next goal will be to prove that this family of pseudo-solutions is a good approximation for our solutions. For this we define $\bar{v}_{r,\gamma}$ as $$\bar{v}^{pol}_{r,\gamma}(f_{2}(N^{1-\delta}(r-1),N^{1-\delta}\alpha+c_{1} )\cos(NK\alpha+c_{2}))(r,\alpha)$$ $$:=(NK)^{\gamma} C_{\gamma}f_{2}(N^{1-\delta}(r-1),N^{1-\delta}\alpha+c_{1}) \sin (NK\alpha+c_{2}),$$ $$\bar{v}_{r,\gamma}(f(r))=0.$$ We will only use this definition for ease of notation and we will only apply this operator to our pseudo-solution so we do not have to worry about defining this for a more general function.
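Heuristically (and only as motivation, since the precise estimates are the ones proved below), $\bar{v}_{r,\gamma}$ is just the leading-order behaviour of the true operator on a single packet: for angular frequencies $NK$ comparable to $N$, the computation behind lemma \ref{errorvsmall} gives, on the support of the perturbation and up to constants depending on the packet, schematically
$$v^{pol}_{r,\gamma}\big(f_{2}\cos(NK\alpha+c_{2})\big)=\bar{v}^{pol}_{r,\gamma}\big(f_{2}\cos(NK\alpha+c_{2})\big)+O\big(N^{\gamma-\delta}\big),$$
so replacing $v_{r,\gamma}$ by $\bar{v}_{r,\gamma}$ (and the angular transport velocity by its value at $r=1$) produces errors which are smaller than the main term by a factor of order $N^{-\delta}$.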
With this, the evolution equation for $\bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}}$ is \begin{align*} &\frac{\partial \bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}}}{\partial t}\\ &= -v^{pol}_{\alpha,\gamma}(\lambda_{0} f_{1})(r=1)\frac{\partial \bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}}}{\partial \alpha}-\lambda_{0}\bar{v}^{pol}_{r,\gamma}(\bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}})\\ &=-v^{pol}_{\alpha,\gamma}(\lambda_{0} f_{1})(r=1)\frac{\partial \bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}}}{\partial \alpha}-\frac{\partial \lambda_{0} f_{1}(r)}{\partial r}\bar{v}^{pol}_{r,\gamma}(\bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}})\\ \end{align*} while on the other hand, if $w_{\lambda,N,M,J,L,\tilde{t}}$ is the solution to $\gamma$-SQG with the same initial conditions as $\bar{w}_{\lambda,N,M,J,L,\tilde{t}}$, then \begin{align*} &\frac{\partial w^{pol}_{\lambda,N,M,J,L,\tilde{t}}}{\partial t}\\ &=-\frac{v^{pol}_{\alpha,\gamma}(w^{pol}_{\lambda,N,M,J,L,\tilde{t}})}{r}\frac{\partial w^{pol}_{\lambda,N,M,J,L,\tilde{t}}}{\partial \alpha}-\frac{\partial w^{pol}_{\lambda,N,M,J,L,\tilde{t}}}{\partial r}v^{pol}_{r,\gamma}(w^{pol}_{\lambda,N,M,J,L,\tilde{t}})\\ \end{align*} and we can rewrite the evolution equation of $\bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}}$ in pseudo-solution form as \begin{align*} &\frac{\partial \bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}}}{\partial t}\\ &=-\frac{v^{pol}_{\alpha,\gamma}(\bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}})}{r}\frac{\partial \bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}}}{\partial \alpha}-\frac{\partial \bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}}}{\partial r}v_{r,\gamma}(\bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}})-F^{pol}_{\lambda,N,M,J,L,\tilde{t}}\\ \end{align*} with $$F^{pol}_{\lambda,N,M,J,L,\tilde{t}}=F^{pol}_{1}+F^{pol}_{2}+F^{pol}_{3}+F^{pol}_{4},$$ $$F^{pol}_{1}:=\frac{v^{pol}_{\alpha,\gamma}( \lambda_{0} f_{1}-\bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}})}{r}\frac{\partial \bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}}}{\partial \alpha}$$
$$F^{pol}_{2}:=(v^{pol}_{\alpha,\gamma}( \lambda_{0} f_{1})(r=1)-\frac{v^{pol}_{\alpha,\gamma}( \lambda_{0} f_{1})}{r})\frac{\partial \bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}}}{\partial \alpha}$$ $$F^{pol}_{3}:=\frac{\partial (\lambda_{0} f_{1}(r)-\bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}})}{\partial r}v_{r,\gamma}(\bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}})$$ $$F^{pol}_{4}:=\frac{\partial \lambda_{0} f_{1}(r)}{\partial r}\big(\bar{v}_{r,\gamma}(\bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}})-v_{r,\gamma}(\bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}})\big).$$ The next step in our proof will be to show that $F_{\lambda,N,M,J,L,\tilde{t}}$ can be made as small as we need by choosing the parameters appropriately; namely, we will show that it becomes small as we make $N$ big. Before we get to prove that, there are some basic properties of $\bar{w}_{\lambda,N,M,J,L,\tilde{t}}$ that we will need later on: \begin{itemize} \item $$||\bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}}(r,\alpha,t)||_{C^{m,\beta'}}\leq C_{1}+C_{2} \lambda(NM)^{m+\beta'-k-\beta}$$ $$||\bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}}(r,\alpha,t)-\lambda_{0} f_{1}(r)||_{C^{m,\beta'}}\leq C_{2} \lambda(NM)^{m+\beta'-k-\beta}$$ for any $m\in\mathds{N}$, $\beta'\in[0,1]$, $t\in\mathds{R}$, with $C_{1}$ and $C_{2}$ depending on $m$ and $\beta'$. \item $$||\bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}}(r,\alpha,t)||_{H^{m}}\leq C_{1}+C_{2} \lambda N^{-1+\delta}(NM)^{m-k-\beta}$$ $$||\bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}}(r,\alpha,t)-\lambda_{0} f_{1}(r)||_{H^{m}}\leq C_{2} \lambda N^{-1+\delta}(NM)^{m-k-\beta}$$ for any $m\in\mathds{N}$, $t\in\mathds{R}$, with $C_{1}$ and $C_{2}$ depending on $m$.
\item $$||\bar{w}_{\lambda,N,M,J,L,\tilde{t}}(x_{1},x_{2},t)||_{C^{m,\beta'}}\leq C_{1}+C_{2} \lambda(NM)^{m+\beta'-k-\beta}$$ $$||\bar{w}_{\lambda,N,M,J,L,\tilde{t}}(x_{1},x_{2},t)-\lambda_{0} f_{1}(\sqrt{x_{1}^2+x_{2}^{2}})||_{C^{m,\beta'}}\leq C_{2} \lambda(NM)^{m+\beta'-k-\beta}$$ for any $m\in\mathds{N}$, $\beta'\in[0,1]$, $t\in\mathds{R}$, with $C_{1}$ and $C_{2}$ depending on $m$ and $\beta'$. \item $$||\bar{w}_{\lambda,N,M,J,L,\tilde{t}}(x_{1},x_{2},t)||_{H^{m}}\leq C_{1}+C_{2} \lambda N^{-1+\delta}(NM)^{m-k-\beta}$$ $$||\bar{w}_{\lambda,N,M,J,L,\tilde{t}}(x_{1},x_{2},t)-\lambda_{0} f_{1}(\sqrt{x_{1}^2+x_{2}^{2}})||_{H^{m}}\leq C_{2} \lambda N^{-1+\delta}(NM)^{m-k-\beta}$$ for any $m\in\mathds{N}$, $t\in\mathds{R}$, with $C_{1}$ and $C_{2}$ depending on $m$. \item By using the interpolation inequality for Sobolev spaces we also have $$||\bar{w}_{\lambda,N,M,J,L,\tilde{t}}(x_{1},x_{2},t)-\lambda_{0} f_{1}(\sqrt{x_{1}^2+x_{2}^{2}})||_{H^{m}}\leq C_{1} \lambda N^{-1+\delta}(NM)^{m-k-\beta}$$ for any $m>0$, $t\in\mathds{R}$, with $C_{1}$ depending on $m$. \end{itemize} The bounds in polar coordinates are obtained by direct calculation, and then we obtain from those the ones in Cartesian coordinates using that the functions are compactly supported and with support far from the origin. Now, for our pseudo-solutions to be a useful approximation of the solution to $\gamma$-SQG, we need the source term to be small. For that we have the following lemmas. \begin{lemma} For any fixed $T$, if $0\leq t\leq T$ we have that $$||F_{\lambda,N,M,J,L,\tilde{t}}||_{L^{2}}\leq (1+\frac{1}{\tilde{t}})\frac{C}{N^{k+\beta+1}}$$ with $C$ depending on $T,\lambda,M,J$ and $L$. Furthermore, for $m\in\mathds{N}$, we have that $$||F_{\lambda,N,M,J,L,\tilde{t}}||_{H^{m}}\leq C(1+\frac{1}{\tilde{t}})\frac{N^{m}}{N^{k+\beta+1}}$$ with $C$ depending on $T,\lambda,M,J,L$ and $m$. In fact, by interpolation, the inequality also holds for any $m>0$.
\end{lemma} \begin{proof} We start by obtaining bounds for $||F_{1}||_{L^{2}}$. We have that \begin{align*} &||F_{1}||_{L^{2}}\leq ||v_{\alpha,\gamma}( \lambda_{0} f_{1}-\bar{w}_{\lambda,N,M,J,L,\tilde{t}})1_{|x|\geq \frac12}||_{L^{2}}||\frac{1}{r}\frac{\partial \bar{w}_{\lambda,N,M,J,L,\tilde{t}}}{\partial \alpha}||_{L^{\infty}}\\ &\leq C ||\lambda_{0} f_{1}-\bar{w}_{\lambda,N,M,J,L,\tilde{t}}||_{H^{\gamma}}||\frac{\partial \bar{w}_{\lambda,N,M,J,L,\tilde{t}}}{\partial \alpha}||_{L^{\infty}}\\ &\leq C \frac{N^{\gamma}}{N^{k+\beta+1-\delta}}\frac{1}{N^{k+\beta-1}}\leq C \frac{1}{N^{k+\beta +1}}.\\ \end{align*} For $F_{2}$, using that the first two derivatives with respect to $r$ of $\frac{v_{\alpha,\gamma}(\lambda_{0}f_{1})}{r}$ vanish at $r=1$, together with the fact that it is a radial function, we have that if $x\in supp(\frac{\partial \bar{w}_{\lambda,N,M,J,L,\tilde{t}}}{\partial \alpha})$ then $$|v^{pol}_{\alpha,\gamma}( \lambda_{0} f_{1})(r=1)-\frac{v^{pol}_{\alpha,\gamma}( \lambda_{0} f_{1})}{r}|\leq C N^{-3+3\delta}$$ so \begin{align*} &||F^{pol}_{2}||_{L^{2}}\leq C\lambda_{0}N^{-3+3\delta}||\frac{\partial \bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}}}{\partial \alpha}||_{L^{2}}\leq C\frac{N^{-3+4\delta}}{N^{k+\beta}}\leq \frac{C}{N^{k+\beta+1}}.\\ \end{align*} Similarly, for $F_{3}$ we have \begin{align*} &||F_{3}||_{L^{2}}\leq ||\frac{\partial (\bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}}-\lambda_{0} f_{1}(r))}{\partial r}||_{L^{\infty}}||v_{r,\gamma}(\bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}})1_{|x|\geq \frac12}||_{L^{2}}\\ &\leq ||\frac{\partial (\bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}}-\lambda_{0} f_{1}(r))}{\partial r}||_{L^{\infty}}||v_{r,\gamma}(\bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}}-\lambda_{0} f_{1}(r))||_{H^{\gamma}}\\ &\leq C \frac{N^{1-\delta}}{N^{k+\beta}} \frac{N^{\gamma}}{N^{k+\beta+1-\delta}}\leq C \frac{1}{N^{k+\beta+1 }}.\\ \end{align*} Finally, for $F_{4}$, we go back to Cartesian coordinates and divide the integral into two parts:
$A_{1}:=B_{2N^{-1+\delta}}(\cos(t\lambda_{0} v_{\alpha,\gamma}(f_{1})(r=1)),\sin(t\lambda_{0} v_{\alpha,\gamma}(f_{1})(r=1)))$ and $A_{2}:=supp(\bar{w}_{\lambda,N,M,J,L,\tilde{t}})\setminus A_{1}$; we have \begin{align*} &||F_{4}||_{L^{2}}\\ &\leq ||\frac{\partial \lambda_{0} f_{1}}{\partial r}\big(\bar{v}_{r,\gamma}(\bar{w}_{\lambda,N,M,J,L,\tilde{t}})-v_{r,\gamma}(\bar{w}_{\lambda,N,M,J,L,\tilde{t}})\big)1_{A_{1}}||_{L^{2}}\\ &+ ||\frac{\partial \lambda_{0} f_{1}}{\partial r}\big(\bar{v}_{r,\gamma}(\bar{w}_{\lambda,N,M,J,L,\tilde{t}})-v_{r,\gamma}(\bar{w}_{\lambda,N,M,J,L,\tilde{t}})\big)1_{A_{2}}||_{L^{2}}.\\ \end{align*} For the bound on $A_{1}$, using lemma \ref{errorvsmall} and $|\frac{\partial f_{1}}{\partial r}|\leq C$ we get \begin{align*} &||\frac{\partial \lambda_{0} f_{1}(r)}{\partial r}\big(\bar{v}_{r,\gamma}(\bar{w}_{\lambda,N,M,J,L,\tilde{t}})-v_{r,\gamma}(\bar{w}_{\lambda,N,M,J,L,\tilde{t}})\big)1_{A_{1}}||_{L^{2}}\\ &\leq || \lambda_{0} f_{1}(r)||_{C^1}||\big(\bar{v}_{r,\gamma}(\bar{w}_{\lambda,N,M,J,L,\tilde{t}})-v_{r,\gamma}(\bar{w}_{\lambda,N,M,J,L,\tilde{t}})\big)1_{A_{1}}||_{L^{\infty}}|A_{1}|^{\frac12}\\ &\leq C\lambda_{0} \frac{N^{\gamma-\delta}}{N^{k+\beta+1-\delta}}= C\frac{1}{\tilde{t}N^{k+\beta+1}}\\ \end{align*} where we used that $\lambda_{0}=\frac{CN^{-\gamma}}{\tilde{t}}$ (the constant $C$ depending on $M$).
For the integral in $A_{2}$, using lemma \ref{decaimiento} and the bounds on $f_{1}$ we have $$\Big(\int_{A_{2}}(\frac{\partial \lambda_{0} f_{1}}{\partial r}\big(\bar{v}_{r,\gamma}(\bar{w}_{\lambda,N,M,J,L,\tilde{t}})-v_{r,\gamma}(\bar{w}_{\lambda,N,M,J,L,\tilde{t}})\big))^{2}dx_{1}dx_{2}\Big)^{\frac12}$$ $$\leq \tilde{t}^{-1}C N^{-\gamma} \Big(\int_{A_{2}}(v_{r,\gamma}(\bar{w}_{\lambda,N,M,J,L,\tilde{t}}))^{2}dx_{1}dx_{2}\Big)^{\frac12}$$ $$\leq \tilde{t}^{-1}\frac{C}{N^{k+\beta+\gamma}} \Big(\int_{2N^{-1+\delta}}^{\infty}\bigg(N^{-2+\delta}\frac{C}{h^{2+\gamma}}\bigg)^{2}hdh\Big)^{\frac12}$$ $$\leq \frac{C}{\tilde{t}N^{k+\beta+1}}. $$ For the proof of the bound in $H^{m}$, we use that, since $supp(w^{pol}_{\lambda,N,M,J,L,\tilde{t}})\subset\{(r,\alpha): r\in [\frac12,K]\}$ for some $K$, then $$||w_{\lambda,N,M,J,L,\tilde{t}}||_{H^{m}}\leq ||w^{pol}_{\lambda,N,M,J,L,\tilde{t}}||_{H^{m}}$$ and therefore we just need to find a bound for $$\sum_{k=0}^{m}\sum_{j=0}^{k}||\frac{\partial^{k}F_{i}}{\partial^{j}r\partial^{k-j}\alpha}||_{L^{2}}$$ with $i=1,2,3,4$. For the bounds in $H^{m}$ we will use that, given two functions $f,g$ and $m\in\mathds{N}$ we have $$||fg||_{H^{m}}\leq C \sum_{i=0}^{m}||f||_{C^{i}}||g||_{H^{m-i}}$$ with $C$ depending on $m$.
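This product estimate is simply the Leibniz rule combined with Hölder's inequality: for any multi-index $\sigma$ with $|\sigma|\leq m$ we have
$$\partial^{\sigma}(fg)=\sum_{\rho\leq\sigma}\binom{\sigma}{\rho}\partial^{\rho}f\,\partial^{\sigma-\rho}g,\qquad ||\partial^{\rho}f\,\partial^{\sigma-\rho}g||_{L^{2}}\leq ||\partial^{\rho}f||_{L^{\infty}}||\partial^{\sigma-\rho}g||_{L^{2}}\leq ||f||_{C^{|\rho|}}||g||_{H^{|\sigma|-|\rho|}},$$
and summing over $\rho$ and $\sigma$ gives the inequality with $C$ depending only on $m$.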
Combining this with (\ref{dercomalpha}) we have \begin{align*} &||F_{1}||_{H^{m}}\leq C\sum_{i=0}^{m}||v_{\alpha,\gamma}( \lambda_{0} f_{1}-\bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}})1_{|x|\geq \frac12}||_{H^{i}}||\frac{1}{r}\frac{\partial \bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}}}{\partial \alpha}||_{C^{m-i}}\\ &\leq C \sum_{i=0}^{m}\frac{N^{i}N^{-1+\delta+\gamma}}{N^{k+\beta}}\frac{N^{m-i+1}}{N^{k+\beta}}\leq C\frac{N^{m}}{N^{k+\beta+1}}.\\ \end{align*} For $F_{2}$, using that, for $r\in B^{pol}:=supp(\frac{\partial \bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}}}{\partial \alpha})$, we have that $$\Big|\frac{\partial^{i}(v^{pol}_{\alpha,\gamma}( \lambda_{0} f_{1})(r=1)-\frac{v^{pol}_{\alpha,\gamma}( \lambda_{0} f_{1})}{r})}{\partial r^{i}}\Big|\leq C N^{(3-i)(-1+\delta)} $$ for $i=0,1,2$, and since $v^{pol}_{\alpha,\gamma}( \lambda_{0} f_{1})(r=1)-\frac{v^{pol}_{\alpha,\gamma}( \lambda_{0} f_{1})}{r}$ only depends on $r$, then, for $i=0,1,2$ $$||\big(v_{\alpha,\gamma}( \lambda_{0} f_{1})(r=1)-\frac{v^{pol}_{\alpha,\gamma}( \lambda_{0} f_{1})}{r}\big)1_{x\in B}||_{C^{i}}\leq CN^{(3-i)(-1+\delta)}$$ and for higher derivatives we just use $$||\big(v_{\alpha,\gamma}( \lambda_{0} f_{1})(r=1)-\frac{v^{pol}_{\alpha,\gamma}( \lambda_{0} f_{1})}{r}\big)1_{x\in B}||_{C^{i}}\leq C,$$ where the constant depends on $i$.
With this we get \begin{align*} &||F_{2}||_{H^{m}}\leq C \sum_{i=0}^{m}||v_{\alpha,\gamma}( \lambda_{0} f_{1})(r=1)-\frac{v_{\alpha,\gamma}( \lambda_{0} f_{1})}{r} 1_{x\in B}||_{C^{i}}||\frac{\partial \bar{w}_{\lambda,N,M,J,L,\tilde{t}}}{\partial \alpha}||_{H^{m-i}}\\ &\leq C \frac{N^{-3+4\delta+m}}{\tilde{t}N^{k+\beta}}\leq \frac{C N^{m}}{\tilde{t}N^{k+\beta+1}}.\\ \end{align*} For $F_{3}$ we have \begin{align*} &||F_{3}||_{H^{m}}\leq C\sum_{i=0}^{m}||\frac{\partial (\bar{w}_{\lambda,N,M,J,L,\tilde{t}}-\lambda_{0} f_{1}(r))}{\partial r}||_{C^{i}}||v_{r,\gamma}(\bar{w}_{\lambda,N,M,J,L,\tilde{t}})||_{H^{m-i}}\\ &\leq C\sum_{i=0}^{m}\frac{N^{i+1}}{N^{k+\beta}}\frac{N^{m-i-1+\delta+\gamma}}{N^{k+\beta}}\leq C \frac{N^{m}}{N^{k+\beta+1}}.\\ \end{align*} As for $F_{4}$, the contribution obtained when integrating over $A_{2}$ follows by again applying lemma \ref{decaimiento} \begin{align*} &||F_{4}1_{A_{2}}||_{H^{m}}\\ &\leq C\sum_{i=0}^{m} || \lambda_{0} f_{1}(r)||_{C^{i+1}}||\big(\bar{v}_{r,\gamma}(\bar{w}_{\lambda,N,M,J,L,\tilde{t}})-v_{r,\gamma}(\bar{w}_{\lambda,N,M,J,L,\tilde{t}})\big)1_{A_{1}}||_{C^{m-i}}|A_{1}|^{\frac12}\\ &\leq C\lambda_{0} \frac{N^{m+\gamma-\delta}}{N^{k+\beta+1-\delta}}= C\frac{N^{m}}{\tilde{t}N^{k+\beta+1}}.\\ \end{align*} For the contribution when we integrate $F_{4}$ over $A_{1}$, using (\ref{dercomr}) we have \begin{align*} &||F_{4} 1_{x\in A_{1}}||_{H^{m}}\leq C\lambda_{0} ||(\bar{v}_{r,\gamma}(\bar{w}_{\lambda,N,M,J,L,\tilde{t}})-v_{r,\gamma}(\bar{w}_{\lambda,N,M,J,L,\tilde{t}}))1_{x\in A_{1}}||_{H^{m}}\\ &\leq C\lambda_{0} \sum_{q=0}^{m}\sum_{j=0}^{q}||\Big(\frac{\partial^{q} \bar{v}_{r,\gamma}(\bar{w}_{\lambda,N,M,J,L,\tilde{t}})}{\partial x_{1}^{j}\partial x_{2}^{q-j}}-v_{r,\gamma}(\frac{\partial^{q} \bar{w}_{\lambda,N,M,J,L,\tilde{t}}}{\partial x_{1}^{j}\partial x_{2}^{q-j}})\Big) 1_{x\in A_{1}}||_{L^{2}}\\ &+C\lambda_{0}||v_{1,\gamma}(\bar{w}_{\lambda,N,M,J,L,\tilde{t}})1_{x\in
A_{1}}||_{H^{m-1}}+C\lambda_{0}||v_{2,\gamma}(\bar{w}_{\lambda,N,M,J,L,\tilde{t}})1_{x\in A_{1}}||_{H^{m-1}}\\ &\leq C\lambda_{0} \sum_{q=0}^{m}\sum_{j=0}^{q}||\Big(\frac{\partial^{q} \bar{v}_{r,\gamma}(\bar{w}_{\lambda,N,M,J,L,\tilde{t}})}{\partial x_{1}^{j}\partial x_{2}^{q-j}}-v_{r,\gamma}(\frac{\partial^{q} \bar{w}_{\lambda,N,M,J,L,\tilde{t}}}{\partial x_{1}^{j}\partial x_{2}^{q-j}})\Big) 1_{x\in A_{1}}||_{L^{2}}\\ &+C\frac{N^{-1+\delta}}{\tilde{t}N^{k+\beta}}N^{m-1}\\ \end{align*} But then since $$\frac{\partial^{q}f(r,\alpha)}{\partial x_{1}^{j}\partial x_{2}^{q-j}}=\sum_{p=0}^{q}\sum_{l=0}^{p}g_{q,j,p,l}(r,\alpha)\frac{\partial^{p}f(r,\alpha)}{\partial r^{l}\partial \alpha^{p-l}}$$ with $g_{m,j,q,l}$ in $C^{\infty}$ and bounded if $r\geq \frac12$, we have that \begin{align*} &||\Big(\frac{\partial^{q} \bar{v}_{r,\gamma}(\bar{w}_{\lambda,N,M,J,L,\tilde{t}})}{\partial x_{1}^{j}\partial x_{2}^{q-j}}-v_{r,\gamma}(\frac{\partial^{q} \bar{w}_{\lambda,N,M,J,L,\tilde{t}}}{\partial x_{1}^{j}\partial x_{2}^{q-j}})\Big) 1_{x\in A_{1}}||_{L^{2}}\\ \leq& \sum_{p=0}^{q}\sum_{l=0}^{p}||\Big(g_{q,j,p,l}(r,\alpha)\frac{\partial^{p}\bar{v}^{pol}_{r,\gamma}(\bar{w}_{\lambda,N,M,J,L,\tilde{t}})}{\partial r^{l}\partial \alpha^{p-l}}-v^{pol}_{r,\gamma}\big(g_{q,j,p,l}(r,\alpha)\frac{\partial^{p} \bar{w}_{\lambda,N,M,J,L,\tilde{t}}}{\partial r^{l}\partial \alpha^{p-l}}\big)\Big)1_{(r,\alpha)\in A^{pol}_{1}}||_{L^{2}}.\\ \end{align*} But applying lemma \ref{errorvsmall} to each of the terms we obtain after differentiating, we get \begin{align*} \leq& \sum_{p=0}^{q}\sum_{l=0}^{p}||\Big(g_{q,j,p,l}(r,\alpha)\frac{\partial^{p}\bar{v}^{pol}_{r,\gamma}(\bar{w}_{\lambda,N,M,J,L,\tilde{t}})}{\partial r^{l}\partial \alpha^{p-l}}-v^{pol}_{r,\gamma}\big(g_{q,j,p,l}(r,\alpha)\frac{\partial^{p} \bar{w}_{\lambda,N,M,J,L,\tilde{t}}}{\partial r^{l}\partial \alpha^{p-l}}\big)\Big)1_{(r,\alpha)\in A^{pol}_{1}}||_{L^{2}}\\ &\leq C\frac{N^{q}N^{\gamma-\delta}N^{-1+\delta}}{N^{k+\beta}}\\ \end{align*} 
so \begin{align*} &||F_{4} 1_{x\in A_{1}}||_{H^{m}}\leq C\frac{N^{-1+\delta}}{\tilde{t}N^{k+\beta}}N^{m-1}+C\lambda_{0} \sum_{q=0}^{m}\sum_{j=0}^{q}\frac{N^{q}N^{\gamma-\delta}N^{-1+\delta}}{N^{k+\beta}}\leq \frac{C N^{m}}{\tilde{t} N^{k+\beta+1}}\\ \end{align*} and we are done. \end{proof} Since we are interested in showing (arbitrarily) fast norm growth for $\gamma$-SQG, our solution should start with a very small norm that becomes very big after a short period of time. Lemma \ref{decaimiento} already gives us tools to show that the initial norm is small, and the next lemma will give us a lower bound for the $C^{k,\beta}$ norm of our pseudo-solutions at time $\tilde{t}$. \begin{lemma}\label{crecimientockbeta} There exists a set $A$ (depending on $\lambda,N,M,J$ and $L$) such that, if $x\in A$ then there exists unitary $u$ depending on $x$ and a constant $C$ with $$|\frac{\partial^{k}(\bar{w}_{\lambda,N,M,J,L,\tilde{t}}(x,\tilde{t})-\lambda_{0}f_{1})}{\partial u^{k}}|\geq \lambda( \frac{1 }{2(MN)^{\beta}}-\frac{CL^{2} }{(NM)^{\beta}M}-C(NM)^{-(\delta+\beta)}-C(NM)^{-\beta}N^{-1+\delta}) $$ and a set $B$ (depending on $\lambda,N,M,J$ and $L$) such that if $x\in B$ then for all unitary $u$ we have that $$|\frac{\partial^{k}(\bar{w}_{\lambda,N,M,J,L,\tilde{t}}(x,\tilde{t})-\lambda_{0}f_{1})}{\partial u^{k}}|\leq \lambda( \frac{1 }{4(MN)^{\beta}}+ C(NM)^{-(\delta+\beta)}+\frac{CL^{2} }{(NM)^{\beta}M})$$ Furthermore, there is a set $S_{M,N,\delta}$ with $|S_{M,N,\delta}|\geq C_{1}MN^{2\delta}$, $$A=\cup_{s\in S_{M,N,\delta}} A_{s}$$ $$B=\cup_{s\in S_{M,N,\delta}}B_{s}$$ $d(x,y)\leq \frac{4\pi}{NM}$ if $x\in A_{s}, y\in B_{s}$, and $|A_{s}|,|B_{s}|\geq \frac{C_{2}}{(NM)^2}$, with $C_{1}$ and $C_{2}$ constants.
Note that, in particular $$||\bar{w}_{\lambda,N,M,J,L,\tilde{t}}(x,\tilde{t})-\lambda_{0}f_{1}||_{C^{k,\beta}}\geq\lambda( \frac{1}{4(4\pi)^{\beta}} -\frac{CL^{2} }{M}-C(NM)^{-\delta}-CN^{-1+\delta}) $$ \end{lemma} \begin{proof} We start by finding the set $A$ as well as the unitary vector $u$ that gives us a big $k$-th derivative. For this, we first want to obtain accurate estimates for $\frac{\partial^{k}( \bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}}(r,\alpha,t)-\lambda_{0}f_{1}) }{\partial \alpha^{k} }$ \begin{align*} &\frac{\partial^{k} \bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}}(r,\alpha,t) }{\partial \alpha^{k} }=\frac{\partial^{k} (\bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}}(r,\alpha,t)-\lambda_{0}f_{1}) }{\partial \alpha^{k} }\\ &=\sum_{i=0}^{k}{k\choose i}\sum_{j=1}^{J}\sum_{l=0}^{L-1}\bigg(\frac{1}{JL(NM_{j})^{k+\beta}}\frac{\partial^{i}\lambda_{j}f_{2}(N^{1-\delta}(r-1),N^{1-\delta}(\alpha-t\lambda_{0} v_{\alpha,\gamma}(f_{1})(r=1)))}{\partial \alpha^{i}}\\ \nonumber & \frac{\partial^{k-i}\cos(N(M_{j}+l)(\alpha-\alpha_{j}^{1}-t\lambda_{0} v_{\alpha,\gamma}(f_{1})(r=1))+\alpha_{j}^{2}+\frac{k\pi}{2}+t\lambda_{0} C_{\gamma}N^{\gamma}(M_{j}+l)^{\gamma})}{\partial \alpha^{k-i}}\bigg),\\ \end{align*} and so \begin{align*} &|\frac{\partial^{k} \bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}}(r,\alpha,t) }{\partial \alpha^{k} }\\ &-\sum_{j=1}^{J}\sum_{l=0}^{L-1}\bigg(\frac{1}{JL(NM_{j})^{k+\beta}}\lambda_{j}f_{2}(N^{1-\delta}(r-1),N^{1-\delta}(\alpha-t\lambda_{0} v_{\alpha,\gamma}(f_{1})(r=1)))\\ \nonumber & \frac{\partial^{k}\cos(N(M_{j}+l)(\alpha-\alpha_{j}^{1}-t\lambda_{0} v_{\alpha,\gamma}(f_{1})(r=1))+\alpha_{j}^{2}+\frac{k\pi}{2}+t\lambda_{0} C_{\gamma}N^{\gamma}(M_{j}+l)^{\gamma})}{\partial \alpha^{k}}\bigg)|\\ &\leq C \lambda (NM)^{-(\delta+\beta)}.\\ \end{align*} Furthermore \begin{align*} &\frac{\partial^{k}\cos(N(M_{j}+l)(\alpha-\alpha_{j}^{1}-t\lambda_{0} v_{\alpha,\gamma}(f_{1})(r=1))+\alpha_{j}^{2}+\frac{k\pi}{2}+t\lambda_{0}
C_{\gamma}N^{\gamma}(M_{j}+l)^{\gamma})}{\partial \alpha^{k}}\\ &=(N(M_{j}+l))^{k} \cos(N(M_{j}+l)(\alpha-\alpha_{j}^{1}-t\lambda_{0} v_{\alpha,\gamma}(f_{1})(r=1))+\alpha_{j}^{2}+t\lambda_{0} C_{\gamma}N^{\gamma}(M_{j}+l)^{\gamma})\\ &=(N(M_{j}+l))^{k}\cos(N(M_{j}+l)(\alpha-\alpha_{j}^{1}(t))+\alpha_{j}^{2}(t)+\alpha^{3}_{j,l}(t))\\ \end{align*} with $$\alpha_{j}^{1}(t):=\alpha_{j}^{1}-t\lambda_{0}C_{\gamma}\gamma(NM_{j})^{\gamma-1}+t\lambda_{0} v_{\alpha,\gamma}(f_{1})(r=1)$$ $$\alpha_{j}^{2}(t):=\alpha_{j}^{2}+(1-\gamma)t\lambda_{0}C_{\gamma}(NM_{j})^{\gamma}$$ $$\alpha_{j,l}^{3}(t)=t\lambda_{0}C_{\gamma}((N(M_{j}+l))^{\gamma}-(NM_{j})^{\gamma}-\gamma lN^{\gamma}M_{j}^{\gamma-1}),$$ and we have \begin{align*} &|\cos(N(M_{j}+l)(\alpha-\alpha_{j}^{1}(t))+\alpha_{j}^{2}(t)+\alpha^{3}_{j,l}(t))-\cos(N(M_{j}+l)(\alpha-\alpha_{j}^{1}(t))+\alpha_{j}^{2}(t))|\\ &\leq C|\alpha^{3}_{j,l}(t)|\leq Ct\gamma(1-\gamma)\lambda_{0}C_{\gamma}(NM_{j})^{\gamma}\frac{L^2}{(M_{j})^2}=\frac{CtJ L^{2} }{\tilde{t}M_{j}}, \\ \end{align*} so \begin{align*} &|\frac{\partial^{k} \bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}}(r,\alpha,t) }{\partial \alpha^{k} }\\ &-\sum_{j=1}^{J}\sum_{l=0}^{L-1}\bigg(\frac{1}{JL(NM)^{\beta}}\lambda f_{2}(N^{1-\delta}(r-1),N^{1-\delta}(\alpha-t\lambda_{0} v_{\alpha,\gamma}(f_{1})(r=1)))\\ & \cos(N(M_{j}+l)(\alpha-\alpha_{j}^{1}(t))+\alpha_{j}^{2}(t))\bigg)|\leq C\lambda (NM)^{-(\delta+\beta)}+\lambda \frac{Ct L^{2} }{\tilde{t}(NM)^{\beta}M}.\\ \end{align*} But we have that $\alpha_{j}^{1}(\tilde{t})=0$, $\alpha_{j}^{2}(\tilde{t})=0$, so that if $\alpha=i\frac{2\pi}{NM}$, $i\in\mathds{Z}$, then \begin{align*} &|\frac{\partial^{k} \bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}}(r,\alpha,\tilde{t}) }{\partial \alpha^{k} }-\frac{1}{(NM)^{\beta}}\lambda f_{2}(N^{1-\delta}(r-1),N^{1-\delta}(\alpha-\lambda_{0}\tilde{t} v_{\alpha,\gamma}(f_{1})(r=1)))|\\ & \leq C\lambda(NM)^{-(\delta+\beta)}+\lambda \frac{CL^{2} }{(NM)^{\beta}M},\\ \end{align*} and in fact, if 
$\alpha\in[i\frac{2\pi }{N}-\frac{\pi}{16NM},i\frac{2\pi }{N}+\frac{\pi}{16NM}]$ with $i\in\mathds{Z}$ then \begin{align*} &\frac{\partial^{k} \bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}}(r,\alpha,\tilde{t}) }{\partial \alpha^{k} }\\ & \geq \frac{1}{2(NM)^{\beta}}\lambda f_{2}(N^{1-\delta}(r-1),N^{1-\delta}(\alpha-\lambda_{0}\tilde{t} v_{\alpha,\gamma}(f_{1})(r=1)))-C\lambda(NM)^{-(\delta+\beta)}-\lambda\frac{C L^{2} }{(NM)^{\beta}M}.\\ \end{align*} But since $f_{2}(N^{1-\delta}(r-1),N^{1-\delta}(\alpha-t\lambda_{0} v_{\alpha,\gamma}(f_{1})(r=1)))=1$ if $(r,\alpha)\in[1-\frac{N^{-1+\delta}}{4},1+\frac{N^{-1+\delta}}{4}]\times[t\lambda_{0} v_{\alpha,\gamma}(f_{1})(r=1)-\frac{N^{-1+\delta}}{4},t\lambda_{0} v_{\alpha,\gamma}(f_{1})(r=1)+\frac{N^{-1+\delta}}{4}]$ then defining \begin{align*} &A^{pol}=\cup_{j=-\lfloor \frac{N^{\delta}M}{4}\rfloor}^{j=\lfloor \frac{N^{\delta}M}{4}\rfloor-1}\cup_{i=\lfloor\frac{-N^{\delta}}{64}+\frac{Nt\lambda_{0} v_{\alpha,\gamma}(f_{1})(r=1)}{2\pi}\rfloor}^{i=\lfloor\frac{N^{\delta}}{64}+\frac{Nt\lambda_{0} v_{\alpha,\gamma}(f_{1})(r=1)}{2\pi}\rfloor} A_{i,j}\\ \end{align*} with $$A_{i,j}:= (1+\frac{j}{NM},1+\frac{j+1}{NM}]\times[i\frac{2\pi }{N}-\frac{\pi}{16NM},i\frac{2\pi }{N}+\frac{\pi}{16NM}]$$ we have that, for $(r,\alpha)\in A^{pol}$, \begin{align*} &\frac{\partial^{k} \bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}}(r,\alpha,\tilde{t}) }{\partial \alpha^{k} }\\ & \geq \frac{\lambda}{2(NM)^{\beta}} -C\lambda(NM)^{-(\delta+\beta)}-\lambda \frac{CL^{2} }{(NM)^{\beta}M}.\\ \end{align*} Furthermore, the sets $A_{i,j}$ fulfil $|A_{i,j}|\geq C (NM)^{-2}$ for some $C>0$.
Therefore, if we prove that there exists a unitary vector $u=(u_{1},u_{2})$ such that, if $x=(r\cos(\alpha),r \sin(\alpha))\in A$ \begin{align*} &\frac{\partial^{k}(\bar{w}_{\lambda,N,M,J,L,\tilde{t}}(x,\tilde{t})-\lambda_{0}f_{1})}{\partial u^{k}} \approx \frac{\partial^{k}\bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}}(r,\alpha,\tilde{t})}{\partial \alpha^{k}}\\ \end{align*} in a suitable way, then we are done proving the existence of the desired set $A$. But \begin{align*} &\frac{\partial f(x)}{\partial u}=u_{1}[\cos(\alpha(x))\frac{\partial f^{pol}(r(x),\alpha(x)) }{\partial r}-\frac{\sin(\alpha(x))}{r}\frac{\partial f^{pol}(r(x),\alpha(x)) }{\partial \alpha}]\\ &+u_{2}[\sin(\alpha(x))\frac{\partial f^{pol}(r(x),\alpha(x)) }{\partial r}+\frac{\cos(\alpha(x))}{r}\frac{\partial f^{pol}(r(x),\alpha(x)) }{\partial \alpha}]\\ \end{align*} so that \begin{align*} &\frac{\partial^{k} f(x)}{\partial u^{k}}=\sum_{i_{1}=0}^{k}\sum_{i_{2}=0}^{i_{1}}g_{i_{1},i_{2}}(\alpha,r,u_{1},u_{2})\frac{\partial^{i_{1}} f^{pol}(r,\alpha)}{\partial r^{i_{2}}\partial \alpha^{i_{1}-i_{2}}}\\ \end{align*} with $g_{i_{1},i_{2}}$ $C^{\infty}$ and bounded as long as we only consider $r\geq \frac12$.
Applying this formula to $\bar{w}_{\lambda,N,M,J,L,\tilde{t}}$ we get \begin{align*} &|\frac{\partial^{k}(\bar{w}_{\lambda,N,M,J,L,\tilde{t}}(x,\tilde{t})-\lambda_{0}f_{1})}{\partial u^{k}}-g_{k,0}(\alpha,r,u_{1},u_{2})\frac{\partial^{k}\bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}}(r,\alpha,\tilde{t})}{\partial \alpha^{k}}|\\ &\leq C\lambda (NM)^{-(\delta+\beta)}\\ \end{align*} and it is easy to prove that $g_{k,0}=\frac{g_{k-1,0}(\cos(\alpha)u_{2}-\sin(\alpha)u_{1})}{r}$, $g_{0,0}=1$ and therefore taking $u=(-\sin(\alpha),\cos(\alpha))$ we get \begin{align*} &|\frac{\partial^{k}(\bar{w}_{\lambda,N,M,J,L,\tilde{t}}(x,\tilde{t})-\lambda_{0}f_{1})}{\partial u^{k}}-\frac{1}{r^{k}}\frac{\partial^{k}\bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}}(r,\alpha,\tilde{t})}{\partial \alpha^{k}}|\\ &\leq C\lambda (NM)^{-(\delta+\beta)}\\ \end{align*} and using $r\in A^{pol}\Rightarrow r\in[1-\frac{N^{-1+\delta}}{4},1+\frac{N^{-1+\delta}}{4}]$ plus the bounds for $\bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}}$ \begin{align*} &|\frac{\partial^{k}(\bar{w}_{\lambda,N,M,J,L,\tilde{t}}(x,\tilde{t})-\lambda_{0}f_{1})}{\partial u^{k}}-\frac{\partial^{k}\bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}}(r,\alpha,\tilde{t})}{\partial \alpha^{k}}|\\ &\leq C\lambda(NM)^{-(\delta+\beta)}+C\lambda (NM)^{-\beta}N^{-1+\delta},\\ \end{align*} so, for $x\in A$ \begin{align*} &\frac{\partial^{k}(\bar{w}_{\lambda,N,M,J,L,\tilde{t}}(x,\tilde{t})-\lambda_{0}f_{1})}{\partial u^{k}}\\ &\geq \frac{\lambda}{2(NM)^{\beta}} -\lambda \frac{CL^{2} }{(NM)^{\beta}M}-C\lambda(NM)^{-(\delta+\beta)}-C\lambda(NM)^{-\beta}N^{-1+\delta},\\ \end{align*} which finishes the proof for the existence of the set $A$.
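For the record, the short computation behind the factor $\frac{1}{r^{k}}$ is the following: writing $u$ for the direction $(-\sin(\alpha),\cos(\alpha))$ chosen above, the recursion for $g_{k,0}$ gives
$$\cos(\alpha)u_{2}-\sin(\alpha)u_{1}=\cos^{2}(\alpha)+\sin^{2}(\alpha)=1,\qquad\text{so}\qquad g_{k,0}=\frac{g_{k-1,0}}{r}=\dots=\frac{g_{0,0}}{r^{k}}=\frac{1}{r^{k}}.$$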
For the set $B$, we recall that for $r\geq \frac12$ we have \begin{align*} &|\frac{\partial^{k}(\bar{w}_{\lambda,N,M,J,L,\tilde{t}}(x,\tilde{t})-\lambda_{0}f_{1})}{\partial u^{k}}-g_{k,0}(\alpha,r,u_{1},u_{2})\frac{\partial^{k}\bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}}(r,\alpha,\tilde{t})}{\partial \alpha^{k}}|\\ &\leq C\lambda (NM)^{-(\delta+\beta)}\\ \end{align*} and since $|g_{k,0}|\leq \frac{1}{r^k}$ we only need to find sets $B_{i,j}$ with the desired size and distance to $A_{i,j}$ such that $|\frac{\partial^{k}\bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}}(r,\alpha,\tilde{t})}{\partial \alpha^{k}}|$ is small. But \begin{align*} &|\frac{\partial^{k}\bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}}(r,\alpha,\tilde{t})}{\partial \alpha^{k}}|\\ &\leq |\sum_{j=1}^{J}\sum_{l=0}^{L-1}\big(\frac{1}{JL(NM)^{\beta}}\lambda f_{2}(N^{1-\delta}(r-1),N^{1-\delta}(\alpha-\lambda_{0}\tilde{t} v_{\alpha,\gamma}(f_{1})(r=1)))\\ & \cos(N(M_{j}+l)\alpha)\big)|+ C\lambda (NM)^{-(\delta+\beta)}+\lambda \frac{CJL^{2} }{(NM)^{\beta}M}.\\ \end{align*} and using \begin{align*} &\sum_{l=0}^{L-1} \cos(N(M_{j}+l)\alpha)= \frac{\sin(L\frac{N\alpha}{2})}{\sin(\frac{N\alpha}{2})}\cos(NM_{j}\alpha+\frac{(L-1)}{2}N\alpha),\\ \end{align*} \begin{align*} &\sum_{j=1}^{J} \frac{\sin(L\frac{N\alpha}{2})}{\sin(\frac{N\alpha}{2})}\cos(NM\frac{j}{J}\alpha+NM\alpha+\frac{(L-1)}{2}N\alpha)\\ &=\frac{\sin(L\frac{N\alpha}{2})}{\sin(\frac{N\alpha}{2})}\frac{\sin(\frac{NM\alpha}{2})}{\sin(\frac{NM\alpha}{2J})}\cos(NM(1+\frac{1}{J})\alpha+\frac{(L-1)}{2}N\alpha+\frac{(J-1)NM\alpha}{2J}).\\ \end{align*} If now we define $$f^{pol}_{L,N,M,J}(r,\alpha)=\frac{\sin(L\frac{N\alpha}{2})}{\sin(\frac{N\alpha}{2})}\frac{\sin(\frac{NM\alpha}{2})}{\sin(\frac{NM\alpha}{2J})}\cos(NM(1+\frac{1}{J})\alpha+\frac{(L-1)}{2}N\alpha+\frac{(J-1)NM\alpha}{2J})$$ then we have that \begin{itemize} \item $f^{pol}_{L,N,M,J}$ is $\frac{2\pi}{N}-$periodic in the $\alpha$ variable.
\item There exists $|\tilde{\alpha}|\leq \frac{2\pi}{NM}$ such that $f^{pol}_{L,N,M,J}(r,\tilde{\alpha})=0$. \item $|\frac{\partial f^{pol}_{L,N,M,J}(r,\alpha)}{\partial \alpha}|\leq \bar{C}LMNJ$, \end{itemize} with $\bar{C}$ a constant, which means that if $\alpha\in \cup_{i\in\mathds{Z}} [\tilde{\alpha}+i \frac{2\pi}{N}-\frac{1}{4\bar{C}MN},\tilde{\alpha}+ i \frac{2\pi}{N}+\frac{1}{4\bar{C}MN}]$ then $|f^{pol}_{L,N,M,J}(r,\alpha)|\leq \frac{JL}{4}$. Using this we have that, if $\alpha\in \cup_{i\in\mathds{Z}} [\tilde{\alpha}+i \frac{2\pi}{N}-\frac{1}{4\bar{C}MN},\tilde{\alpha}+ i \frac{2\pi}{N}+\frac{1}{4\bar{C}MN}]$ then \begin{align*} &|\frac{\partial^{k}\bar{w}^{pol}_{\lambda,N,M,J,L,\tilde{t}}(r,\alpha,\tilde{t})}{\partial \alpha^{k}}|\\ & \leq \frac{\lambda}{4(MN)^{\beta}}+ \lambda C(NM)^{-(\delta+\beta)}+\lambda \frac{CJL^{2} }{(NM)^{\beta}M},\\ \end{align*} so, for any unitary vector $u$ \begin{align*} &|\frac{\partial^{k}(\bar{w}_{\lambda,N,M,J,L,\tilde{t}}(x,\tilde{t})-\lambda_{0}f_{1})}{\partial u^{k}}|\\ & \leq \frac{\lambda}{4(MN)^{\beta}}+ \lambda C(NM)^{-(\delta+\beta)}+\lambda \frac{CJL^{2} }{(NM)^{\beta}M},\\ \end{align*} and defining now $$B_{i,j}:= (1+\frac{j}{NM},1+\frac{j+1}{NM}]\times[\tilde{\alpha}+i\frac{2\pi }{N}-\frac{\pi}{4\bar{C}NM},\tilde{\alpha}+i\frac{2\pi }{N}+\frac{\pi}{4\bar{C}NM}]$$ it is easy to check that $A_{i,j}$, $B_{i,j}$ have the desired properties. \end{proof} The previous lemma shows that our pseudo-solutions do have a big norm at time $\tilde{t}$. Although this will be enough to show ill-posedness, for our non-existence result we will build solutions whose $C^{k,\beta}$ norm is infinite for a period of time, and this requires us to obtain specific bounds on how fast our solutions can change their $C^{k,\beta}$ norm.
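The two summation identities used in the proof above are standard Dirichlet-kernel computations. As a quick numerical sanity check (with small, arbitrary parameter values, not the large $N,M,J,L$ of the construction, and with $M_{j}$ an arbitrary frequency), one can verify them directly:

```python
import math

def cos_sum(N, Mj, L, a):
    # Left-hand side: the sum over l appearing in the proof.
    return sum(math.cos(N * (Mj + l) * a) for l in range(L))

def cos_sum_closed(N, Mj, L, a):
    # Right-hand side: the Dirichlet-kernel closed form.
    return (math.sin(L * N * a / 2) / math.sin(N * a / 2)
            * math.cos(N * Mj * a + (L - 1) / 2 * N * a))

def kernel_sum(N, M, J, L, a):
    # Left-hand side of the second identity: the sum over j.
    D = math.sin(L * N * a / 2) / math.sin(N * a / 2)
    return sum(D * math.cos(N * M * (j / J) * a + N * M * a + (L - 1) / 2 * N * a)
               for j in range(1, J + 1))

def kernel_sum_closed(N, M, J, L, a):
    # Right-hand side: the product of the two Dirichlet-type kernels.
    return (math.sin(L * N * a / 2) / math.sin(N * a / 2)
            * math.sin(N * M * a / 2) / math.sin(N * M * a / (2 * J))
            * math.cos(N * M * (1 + 1 / J) * a + (L - 1) / 2 * N * a
                       + (J - 1) * N * M * a / (2 * J)))

# Sample angles chosen away from the zeros of the sine denominators.
for a in (0.137, 0.71, 1.9):
    assert abs(cos_sum(5, 17, 4, a) - cos_sum_closed(5, 17, 4, a)) < 1e-9
    assert abs(kernel_sum(5, 12, 3, 4, a) - kernel_sum_closed(5, 12, 3, 4, a)) < 1e-9
```

Both identities follow from summing the geometric series $\sum e^{i(b+jc)}$ and taking real parts; the check above merely confirms the phase bookkeeping.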
\begin{lemma}\label{derivadackbeta} We have that $$\frac{d||\bar{w}_{\lambda,N,M,J,L,\tilde{t}}(x_{1},x_{2},t)-\lambda_{0}f_{1}(\sqrt{x_{1}^{2}+x_{2}^{2}})||_{C^{k,\beta}}}{dt}\leq \frac{C\lambda M}{\tilde{t}}$$ with $C$ a constant. \end{lemma} \begin{proof} First, since rotations do not change the $C^{k,\beta}$ norm, it is enough to study the evolution of the norm of \begin{align*} & \sum_{j=1}^{J}\sum_{l=0}^{L-1}\bigg(\lambda_{j}f_{2}(N^{1-\delta}(r(x)-1),N^{1-\delta}\alpha(x))\\ & \frac{\cos(N(M_{j}+l)(\alpha(x)-\alpha_{j}^{1})+\alpha_{j}^{2}+\frac{k\pi}{2}+t\lambda_{0} C_{\gamma}N^{\gamma}(M_{j}+l)^{\gamma})}{JL(NM_{j})^{k+\beta}}\bigg)\\ \end{align*} which has a time derivative \begin{align*} & -\sum_{j=1}^{J}\sum_{l=0}^{L-1}\bigg(\lambda_{0} C_{\gamma}N^{\gamma}(M_{j}+l)^{\gamma}\lambda_{j}f_{2}(N^{1-\delta}(r(x)-1),N^{1-\delta}\alpha(x))\\ & \frac{\sin(N(M_{j}+l)(\alpha(x)-\alpha_{j}^{1})+\alpha_{j}^{2}+\frac{k\pi}{2}+t\lambda_{0} C_{\gamma}N^{\gamma}(M_{j}+l)^{\gamma})}{JL(NM_{j})^{k+\beta}}\bigg)\\ \end{align*} but since this function has support in $r\geq \frac12$, we can use (\ref{equivcmb}) and it is enough to obtain bounds for the $C^{k,\beta}$ norm in polar coordinates. However, using the expression for $\lambda_{0}$ we easily obtain \begin{align*} & ||\sum_{j=1}^{J}\sum_{l=0}^{L-1}\bigg(\lambda_{0} C_{\gamma}N^{\gamma}(M_{j}+l)^{\gamma}\lambda_{j}f_{2}(N^{1-\delta}(r-1),N^{1-\delta}\alpha)\\ & \frac{\sin(N(M_{j}+l)(\alpha-\alpha_{j}^{1})+\alpha_{j}^{2}+\frac{k\pi}{2}+t\lambda_{0} C_{\gamma}N^{\gamma}(M_{j}+l)^{\gamma})}{JL(NM_{j})^{k+\beta}}\bigg)||_{C^{k,\beta}}\\ &\leq \frac{C\lambda M}{\tilde{t}}.\\ \end{align*} \end{proof} We need one last technical result before we can prove our ill-posedness result. Namely, we need to obtain bounds for the error between our pseudo-solution and the real solution to $\gamma$-SQG with our initial conditions.
We will, however, prove a slightly stronger result, where we show that the error remains small even if we compare to a solution to $\gamma$-SQG with a small error in the velocity. This will later on be necessary when we prove the non-existence of solutions in $C^{k,\beta}.$ \begin{lemma}\label{evolucionerror} Given a pseudo-solution $\bar{w}_{\lambda,N,M,J,L,\tilde{t}}(x_{1},x_{2},t)$ and a function $v_{error}=(v_{1,error},v_{2,error})$ fulfilling $$||v_{error}||_{C^{m}}\leq \frac{ N^{m}}{N^{k+\beta+2}}$$ for $m=0,1,...,k+2$ and $$\frac{\partial v_{1,error}}{\partial x_{1}}+\frac{\partial v_{2,error}}{\partial x_{2}}=0$$ we have that, for any fixed $T$,$\lambda,M,J,L$ and $\tilde{t}$, if $N$ is big enough, then the unique $H^{k+\beta+1-\delta}$ solution $\tilde{w}_{\lambda,N,M,J,L,\tilde{t}}(x_{1},x_{2},t)$ to \begin{equation}\label{gSQGvext} \frac{\partial \tilde{w}_{\lambda,N,M,J,L,\tilde{t}} }{\partial t}+(v_{\gamma}(\tilde{w}_{\lambda,N,M,J,L,\tilde{t}})+v_{error})\cdot(\nabla\tilde{w}_{\lambda,N,M,J,L,\tilde{t}})=0 \end{equation} $$\tilde{w}_{\lambda,N,M,J,L,\tilde{t}}(x,0)=\bar{w}_{\lambda,N,M,J,L,\tilde{t}}(x,0)$$ exists for $t\in[0,T]$ and, if we define $$W:=\tilde{w}_{\lambda,N,M,J,L,\tilde{t}}-\bar{w}_{\lambda,N,M,J,L,\tilde{t}}$$ then $$||W(x,t)||_{L^{2}}\leq C(1+\frac{1}{\tilde{t}})t N^{-k-\beta-1},$$ $$||W(x,t)||_{H^{k+\beta+1-\delta}}\leq C(1+\frac{1}{\tilde{t}}) tN^{-\delta}.$$ with $C$ depending on $T,\lambda,M,J$ and $L$. 
Furthermore, by interpolation, for any $s\in[0,k+\beta+1-\delta]$ we have that $$||W(x,t)||_{H^{s}}\leq C(1+\frac{1}{\tilde{t}}) tN^{-(k+\beta+1)+s}.$$ \end{lemma} \begin{proof} First we note that the evolution equation for $W$ is \begin{align*} &\frac{\partial W}{\partial t}+(v_{\gamma}(\bar{w}_{\lambda,N,M,J,L,\tilde{t}})+v_{\gamma}(W)+v_{error})\cdot \nabla W\\ &+(v_{\gamma}(W)+v_{error})\cdot\nabla\bar{w}_{\lambda,N,M,J,L,\tilde{t}}-F_{\lambda,N,M,J,L,\tilde{t}}=0,\\ \end{align*} and (using the properties of $F_{\lambda,N,M,J,L,\tilde{t}}$ for $N$ big) this evolution equation has local existence and uniqueness in $H^{k+\beta+1-\delta}$ under our assumptions for $v_{error}$. Furthermore, it is enough to prove our inequalities under the assumption $||W(x,t)||_{H^{k+\beta+1-\delta}}\leq C N^{-\delta} \log(N)$, since then using the continuity in time of $||W||_{H^{k+\beta+1-\delta}}$ and taking $N$ big would give us the result for the desired time interval. For the $L^{2}$ norm, we can use incompressibility to obtain $$\frac{\partial||W||^2_{L^{2}} }{\partial t}\leq 2\int |W\big(v_{\gamma}(W)\cdot\nabla\bar{w}_{\lambda,N,M,J,L,\tilde{t}}-F_{\lambda,N,M,J,L,\tilde{t}}+v_{error}\cdot\nabla\bar{w}_{\lambda,N,M,J,L,\tilde{t}}\big)|dx$$ $$\leq 2|\int Wv_{\gamma}(W)\cdot\nabla\bar{w}_{\lambda,N,M,J,L,\tilde{t}}dx|+\frac{C}{N^{k+\beta+1}}(1+\frac{1}{\tilde{t}})||W||_{L^2}.$$ To bound the integral term with $v_{\gamma}(W)$ we need to use two important properties that will also be key when working with the $H^{k+\beta+1-\delta}$ bounds.
First, as in \cite{Chaecordoba}, we use that, for an odd operator $A$ (which in our case will be $v_{1,\gamma}$ and $v_{2,\gamma}$), we have $$\int fA(f)g=-\frac{1}{2}\int f(A(gf)-gA(f))$$ and so $$|\int Wv_{\gamma}(W)\cdot\nabla\bar{w}_{\lambda,N,M,J,L,\tilde{t}}dx|=\frac{1}{2}|\int W\big(v_{i,\gamma}(W\frac{\partial \bar{w}_{\lambda,N,M,J,L,\tilde{t}}}{\partial x_{i}})-v_{i,\gamma}(W)\frac{\partial \bar{w}_{\lambda,N,M,J,L,\tilde{t}}}{\partial x_{i}}\big)dx|$$ and using corollary 1.4 in \cite{Dongli} $$|\int Wv_{\gamma}(W)\cdot\nabla\bar{w}_{\lambda,N,M,J,L,\tilde{t}}dx|\leq ||W||^{2}_{L^2}||\nabla v_{\gamma}(\bar{w}_{\lambda,N,M,J,L,\tilde{t}})||_{L^{\infty}}\leq C ||W||^{2}_{L^2}$$ where we used that $$||v_{\gamma}(\bar{w}_{\lambda,N,M,J,L,\tilde{t}})||_{C^{k',\beta'}}\leq C N^{k'+\beta'+\gamma-k-\beta}\log(N)$$ which is obtained by applying lemmas 3.6 and 3.7 from \cite{Zoroacordoba}, the definition of $v_{\gamma}$ and the properties of $\bar{w}_{\lambda,N,M,J,L,\tilde{t}}$. Then, applying Gronwall's inequality we get $$||W||_{L^2}\leq \frac{Ct}{N^{k+\beta+1}}(1+\frac{1}{\tilde{t}})$$ with $C$ depending on $\lambda,M,J,L$ and $T$. The proof of the inequality for $H^{k+\beta+1-\delta}$ is very similar to that of lemmas 2.9 and 3.8 in \cite{Zoroacordoba}, so we will skip most of the details and focus on the few differences for the sake of brevity. The idea is to use that $$\frac{\partial||\Lambda^{s}W||^2_{L^{2}} }{\partial t}\leq 2|\int (\Lambda^{s}W)\Lambda^{s}(\frac{\partial W}{\partial t})dx|,$$ and then bound each of the integrals obtained from the equation for $\frac{\partial W}{\partial t}$.
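For completeness, the identity for odd operators quoted above follows in one line from skew-adjointness (an odd Fourier multiplier $A$ satisfies $\int uA(v)\,dx=-\int A(u)v\,dx$):
$$\int f\big(A(gf)-gA(f)\big)dx=\int f\,A(gf)\,dx-\int (fg)\,A(f)\,dx=-\int A(f)\,gf\,dx-\int fg\,A(f)\,dx=-2\int f\,A(f)\,g\,dx,$$
which is exactly $\int fA(f)g=-\frac{1}{2}\int f(A(gf)-gA(f))$.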
For example, for the term $$|\int (\Lambda^{s}W)\Lambda^{s}(v_{\gamma}(W)\nabla \bar{w}_{\lambda,N,M,J,L,\tilde{t}})dx|$$ we use the Kato-Ponce inequalities obtained in theorem 1.2 of \cite{Dongli} to get for $s=k+\beta+1-\delta$ the inequality \begin{align*} &|\int (\Lambda^{s}W)\Lambda^{s}(v_{\gamma}(W)\cdot\nabla \bar{w}_{\lambda,N,M,J,L,\tilde{t}})dx|\\ &\leq \sum_{|\mathbf{a}|\leq s-\gamma }|\int \frac{1}{\mathbf{a}!}(\Lambda^{s}W)\Lambda^{s,\mathbf{a}}(v_{\gamma}(W))\cdot\nabla \partial^{\mathbf{a}}\bar{w}_{\lambda,N,M,J,L,\tilde{t}})dx|\\ &+\sum_{|\mathbf{b}|< \gamma }|\int \frac{1}{\mathbf{b}!}(\Lambda^{s}W)\partial^{\mathbf{b}}(v_{\gamma}(W))\cdot\nabla \Lambda^{s,\mathbf{b}}(\bar{w}_{\lambda,N,M,J,L,\tilde{t}}))dx|\\ &+C||(\Lambda^{s}W)||_{L^{2}}||v_{\gamma}(W)||_{H^{s-\gamma}}||\Lambda^{\gamma}\nabla \bar{w}_{\lambda,N,M,J,L,\tilde{t}}||_{L^{\infty}},\\ \end{align*} where we used the multi-index notation, $\mathbf{c}=(c_{1},c_{2})$ , $|\mathbf{c}|=(c_{1}^2+c_{2}^2)^{\frac12}$, $\mathbf{c}!=c_{1}!c_{2}!$, $\partial^{\mathbf{c}}=\partial^{\mathbf{c}}_{x}=\partial^{c_{1}}_{x_{1}}\partial^{c_{2}}_{x_{2}}$ and the operator $\Lambda^{s,\mathbf{c}}$ is defined via the Fourier transform as $$\widehat{\Lambda^{s,\mathbf{j}}f}(\xi)=\widehat{\Lambda^{s,\mathbf{j}}}(\xi)\hat{f}(\xi)$$ $$\widehat{\Lambda^{s,\mathbf{j}}}(\xi)=i^{-|\mathbf{j}|}\partial^{\mathbf{j}}_{\xi}(|\xi|^s).$$ Most of these terms can be bounded directly by $C||W||^2_{H^{s}}$ using the properties of $v_{\gamma}$, $\Lambda^{s,\mathbf{c}}$, and $\bar{w}_{\lambda,N,M,J,L,\tilde{t}}$ plus the assumptions for $W$ (including the $L^2$ growth) and the interpolation inequality for Sobolev spaces. 
A few terms, however, require more careful consideration (and they also need to be treated differently compared to the proofs in \cite{Zoroacordoba}), namely, for $i=1,2$ \begin{equation}\label{velocidadsingularcota} |\int (\Lambda^{s}W)\Lambda^{s}(v_{i,\gamma}(W)) \frac{\partial \bar{w}_{\lambda,N,M,J,L,\tilde{t}}}{\partial x_{i}}dx| \end{equation} \begin{equation*} |\int (\Lambda^{s}W)\Lambda^{s}(v_{i,\gamma}(W)) \frac{\partial W}{\partial x_{i}}dx| \end{equation*} since $||\Lambda^{s}(v_{\gamma}(W))||_{L^{2}}$ cannot be bounded by $||W||_{H^{s}}$. We will just focus on (\ref{velocidadsingularcota}) since the other term is done in exactly the same way. Here, we need to again act as in the $L^2$ case, rewriting (\ref{velocidadsingularcota}) as $$\frac{1}{2}|\int (\Lambda^{s}W)\big(v_{i,\gamma}[\Lambda^{s}(W)\frac{\partial \bar{w}_{\lambda,N,M,J,L,\tilde{t}}}{\partial x_{i}}]-v_{i,\gamma}[\Lambda^{s}(W)]\frac{\partial \bar{w}_{\lambda,N,M,J,L,\tilde{t}}}{\partial x_{i}}\big)dx|.$$ We can then use again the results obtained in \cite{Dongli} to get \begin{align*} &\frac{1}{2}|\int (\Lambda^{s}W)\big(v_{i,\gamma}[\Lambda^{s}(W)\frac{\partial \bar{w}_{\lambda,N,M,J,L,\tilde{t}}}{\partial x_{i}}]-v_{i,\gamma}[\Lambda^{s}(W)]\frac{\partial \bar{w}_{\lambda,N,M,J,L,\tilde{t}}}{\partial x_{i}}\big)dx|\\ &\leq C||W||_{H^{s}}||W||_{H^{s}}||v_{i,\gamma}(\frac{\partial \bar{w}_{\lambda,N,M,J,L,\tilde{t}}}{\partial x_{i}})||_{L^{\infty}}\leq C ||W||^2_{H^{s}}.\\ \end{align*} Combining the bounds for all the terms we obtain $$\frac{\partial ||\Lambda^{s}W||^2_{L^{2}}}{\partial t}\leq C ||W||_{H^{s}}(||W||_{H^{s}}+(1+\frac{1}{\tilde{t}})\frac{C}{N^\delta})$$ and therefore, for $t\in[0,T]$ $$||W(x,t)||_{H^{s}}\leq Ce^{Ct}t||F_{\lambda,N,M,J,L,\tilde{t}}||_{H^{s}}\leq C (1+\frac{1}{\tilde{t}})tN^{-\delta}$$ with $C$ depending on $T,\lambda,M,J$ and $L$. \end{proof} Combining all the technical results together, we obtain the following.
\begin{theorem}\label{crecimientoperturbado} Given $T,\epsilon_{1},\epsilon_{2},\epsilon_{3}>0$ and $t_{crit}\in(0,T]$, we can find $\lambda,M,J,L$ and $\tilde{t}$ such that, if $N$ is big enough, then for any $v_{error}$ satisfying $$||v_{error}||_{C^{m}}\leq \frac{ N^{m}}{N^{k+\beta+2}}$$ for $m=0,1,...,k+2$ and $$\frac{\partial v_{1,error}}{\partial x_{1}}+\frac{\partial v_{2,error}}{\partial x_{2}}=0$$ the unique $H^{k+\beta+1-\delta}$ function $\tilde{w}_{\lambda,N,M,J,L,\tilde{t}}(x,t)$ satisfying \begin{equation}\label{sqgvaprox} \frac{\partial \tilde{w}_{\lambda,N,M,J,L,\tilde{t}} }{\partial t}+(v_{\gamma}(\tilde{w}_{\lambda,N,M,J,L,\tilde{t}})+v_{error})\cdot(\nabla\tilde{w}_{\lambda,N,M,J,L,\tilde{t}})=0 \end{equation} $$\tilde{w}_{\lambda,N,M,J,L,\tilde{t}}(x,0)=\bar{w}_{\lambda,N,M,J,L,\tilde{t}}(x,0)$$ exists for $t\in[0,T]$ and has the following properties. \begin{itemize} \item $||\tilde{w}_{\lambda,N,M,J,L,\tilde{t}}(x,0)||_{C^{k,\beta}}\leq \epsilon_{1}$, \item $||\tilde{w}_{\lambda,N,M,J,L,\tilde{t}}(x,t)||_{C^{k,\beta}}\geq \frac{1}{\epsilon_{2}}$ if $t\in(t_{crit}-Ct_{crit},t_{crit})$ with $C$ depending on $\epsilon_{1}$ and $\epsilon_{2}$, \item $||\tilde{w}_{\lambda,N,M,J,L,\tilde{t}}(x,0)||_{H^{k+\beta+1-\frac32\delta}},||\tilde{w}_{\lambda,N,M,J,L,\tilde{t}}(x,0)||_{L^{1}}\leq \epsilon_{3}$. \end{itemize} \end{theorem} \begin{proof} We first fix some parameters so that the pseudo-solutions $\bar w_{\lambda,N,M,J,L,\tilde{t}}$ have some desirable properties.
We fix $\tilde{t}=t_{crit}$ so that, by lemma \ref{crecimientockbeta} we have $$||\bar{w}_{\lambda,N,M,J,L,\tilde{t}}(x,t_{crit})||_{C^{k,\beta}}\geq \lambda(\frac{1}{4(4\pi)^{\beta}}-\frac{CL^{2} }{M}-C(NM)^{-\delta}-CN^{-1+\delta}).$$ Since we want $\tilde{w}$ to also have a very big $C^{k,\beta}$ norm, this suggests taking $\lambda\approx \frac{1}{\epsilon_{2}}$, and we will specifically consider $\lambda=\frac{1}{\epsilon_{2}}32(4\pi)^{\beta}.$ With $\lambda$ fixed, we can now focus on ensuring that our initial conditions have a norm as small as required. Using lemmas \ref{normasckpert}, \ref{normackbetapert} and \ref{crecimientockbeta} plus our choice for $\alpha_{j}^{1}$ we know that $$||\tilde{w}_{\lambda,N,M,J,L,\tilde{t}}(x,0)||_{C^{k,\beta}}\leq C\lambda_{0}+ C\lambda(\frac{1}{J}+\frac{1}{(NM)^{\delta}}+\frac{J}{ L}+ (NM)^{-\beta}+\frac{L}{M})$$ $$=C\frac{M^{1-\gamma}}{\tilde{t}N^{\gamma}}+ C\lambda(\frac{1}{J}+\frac{1}{(NM)^{\delta}}+\frac{J}{ L}+ (NM)^{-\beta}+\frac{L}{M}),$$ and that there are sets $A$ and $B$ (depending on $\lambda,N,M,J$ and $L$) such that if $x\in A$ then there exists unitary $u$ depending on $x$ with $$|\frac{\partial^{k}(\bar{w}_{\lambda,N,M,J,L,\tilde{t}}(x,\tilde{t})-\lambda_{0}f_{1})}{\partial u^{k}}|\geq \lambda( \frac{1}{2(MN)^{\beta}}-\frac{CL^{2} }{(NM)^{\beta}M}-C(NM)^{-(\delta+\beta)}-C(NM)^{-\beta}N^{-1+\delta} ) $$ and such that if $x\in B$ then for all unitary $u$ we have that $$|\frac{\partial^{k}(\bar{w}_{\lambda,N,M,J,L,\tilde{t}}(x,\tilde{t})-\lambda_{0}f_{1})}{\partial u^{k}}|\leq \lambda ( \frac{1 }{4(MN)^{\beta}}+ C(NM)^{-(\delta+\beta)}+\frac{CL^{2} }{(NM)^{\beta}M})$$ furthermore, there is a set $S_{M,N,\delta}$ such that its cardinality fulfils $|S_{M,N,\delta}|\geq C_{1}MN^{2\delta}$ and $$A=\cup_{s\in S_{M,N,\delta}} A_{s}$$ $$B=\cup_{s\in S_{M,N,\delta}}B_{s}$$ $d(x,y)\leq \frac{4\pi}{NM}$ if $x\in A_{s}, y\in B_{s}$, and $|A_{s}|,|B_{s}|\geq \frac{C_{2}}{(NM)^2}$, with $C_{1}$ and $C_{2}$ constants.
By taking $J^{2}=L$, $M=L^{3}=J^{6}$ and fixing $J$ big we can then obtain that $$||\tilde{w}_{\lambda,N,M,J,L,\tilde{t}}(x,0)||_{C^{k,\beta}}\leq C\frac{J^{6(1-\gamma)}}{t_{crit}N^{\gamma}}+ \frac{\epsilon_{1}}{2}$$ and for $x\in A$ there exists $u$ unitary such that \begin{equation}\label{vk} |\frac{\partial^{k}(\bar{w}_{\lambda,N,M,J,L,\tilde{t}}(x,\tilde{t})-\lambda_{0}f_{1})}{\partial u^{k}}|\geq 14 \frac{(4\pi)^{\beta} }{\epsilon_{2}(MN)^{\beta}} \end{equation} and for $x\in B$ and any unitary vector $u$ $$|\frac{\partial^{k}(\bar{w}_{\lambda,N,M,J,L,\tilde{t}}(x,\tilde{t})-\lambda_{0}f_{1})}{\partial u^{k}}|\leq 10 \frac{(4\pi)^{\beta} }{\epsilon_{2}(MN)^{\beta}}.$$ Note that the choice of the parameters $J,L$ and $M$ depends only on $\epsilon_{1}$ and $\epsilon_{2}$. We would like to obtain similar bounds for $\tilde{w}$, so we need to show that $\tilde{w}$ and $\bar{w}$ are close to each other in a useful way. First, using lemma \ref{evolucionerror} we have $$\sum_{i=0}^{k}\int (\frac{\partial^{k}(\tilde{w}-\bar{w})}{\partial x_{1}^{i}\partial x_{2}^{k-i}})^2\leq C (1+\frac{1}{\tilde{t}})N^{-2(\beta+1)}$$ and in particular (including from now on $(1+\frac{1}{\tilde{t}})$ inside of the constant $C$ since it is constant with respect to $N$), there exist $A_{s}, B_{s}$ such that $$\sum_{i=0}^{k}\int_{A_{s}} (\frac{\partial^{k}(\tilde{w}-\bar{w})}{\partial x_{1}^{i}\partial x_{2}^{k-i}})^2+\int_{B_{s}} (\frac{\partial^{k}(\tilde{w}-\bar{w})}{\partial x_{1}^{i}\partial x_{2}^{k-i}})^2\leq C N^{-2(\beta+1+\delta)}$$ so $$\inf_{x\in A_{s}}|\sum_{i=0}^{k}(\frac{\partial^{k}(\tilde{w}-\bar{w})}{\partial x_{1}^{i}\partial x_{2}^{k-i}})^2||A_{s}|\leq\sum_{i=0}^{k} \int_{A_{s}} (\frac{\partial^{k}(\tilde{w}-\bar{w})}{\partial x_{1}^{i}\partial x_{2}^{k-i}})^2\leq C N^{-2(\beta+1+\delta)}$$ $$\inf_{x\in B_{s}}|\sum_{i=0}^{k}(\frac{\partial^{k}(\tilde{w}-\bar{w})}{\partial x_{1}^{i}\partial x_{2}^{k-i}})^2||B_{s}|\leq \sum_{i=0}^{k}\int_{B_{s}}
(\frac{\partial^{k}(\tilde{w}-\bar{w})}{\partial x_{1}^{i}\partial x_{2}^{k-i}})^2\leq C N^{-2(\beta+\delta+1)}$$ and therefore $$\inf_{x\in A_{s}}|\sum_{i=0}^{k}(\frac{\partial^{k}(\tilde{w}-\bar{w})(x,t)}{\partial x_{1}^{i}\partial x_{2}^{k-i}})^2|\leq C N^{-2(\beta+\delta)}$$ $$\inf_{x\in B_{s}}|\sum_{i=0}^{k}(\frac{\partial^{k}(\tilde{w}-\bar{w})(x,t)}{\partial x_{1}^{i}\partial x_{2}^{k-i}})^2|\leq C N^{-2(\beta+\delta)}.$$ Given a time $t\in[0,t_{crit}]$, we consider $x_{A}(t)\in A_{s}$, $x_{B}(t)\in B_{s}$ points fulfilling $$|\sum_{i=0}^{k}(\frac{\partial^{k}(\tilde{w}-\bar{w})(x_{A}(t),t)}{\partial x_{1}^{i}\partial x_{2}^{k-i}})^2|\leq C N^{-2(\beta+\delta)}$$ $$|\sum_{i=0}^{k}(\frac{\partial^{k}(\tilde{w}-\bar{w})(x_{B}(t),t)}{\partial x_{1}^{i}\partial x_{2}^{k-i}})^2|\leq C N^{-2(\beta+\delta)}.$$ Now, if $u$ is the unitary vector given by (\ref{vk}) for $x_{A}(t)$, we have that \begin{align*} &\big|\frac{\partial^{k}\tilde{w}}{\partial u^{k}}(x_{A},t)-\frac{\partial^{k}\tilde{w}}{\partial u^{k}}(x_{B},t)\big|\frac{1}{|x_{A}-x_{B}|^{\beta}}\\ &\geq \big|\frac{\partial^{k}\bar{w}}{\partial u^{k}}(x_{A},t)-\frac{\partial^{k}\bar{w}}{\partial u^{k}}(x_{B},t)\big|\frac{1}{|x_{A}-x_{B}|^{\beta}}-C N^{-\delta}\\ &\geq \big|\frac{\partial^{k}\bar{w}}{\partial u^{k}}(x_{A},t_{crit})-\frac{\partial^{k}\bar{w}}{\partial u^{k}}(x_{B},t_{crit})\big|\frac{1}{|x_{A}-x_{B}|^{\beta}}-||\bar{w}(x,t)-\bar{w}(x,t_{crit})||_{C^{k,\beta}}-C N^{-\delta} \\ &\geq \frac{4}{\epsilon_{2}}-C\frac{\lambda J^{6}|t-\tilde{t}|}{\tilde{t}}-CN^{-\delta}\\ \end{align*} where we used lemma \ref{derivadackbeta} in the last inequality.
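The constant $\frac{4}{\epsilon_{2}}$ in the last display comes from combining the separation between the lower bound on $A_{s}$ and the upper bound on $B_{s}$ (a sketch, up to the contribution of the radial part $\lambda_{0}f_{1}$, which is $O(\lambda_{0})$ and vanishes as $N$ grows): with $\lambda=\frac{32(4\pi)^{\beta}}{\epsilon_{2}}$,
$$\big|\frac{\partial^{k}(\bar{w}-\lambda_{0}f_{1})}{\partial u^{k}}(x_{A},t_{crit})-\frac{\partial^{k}(\bar{w}-\lambda_{0}f_{1})}{\partial u^{k}}(x_{B},t_{crit})\big|\geq (14-10)\frac{(4\pi)^{\beta}}{\epsilon_{2}(MN)^{\beta}},$$
and, since $|x_{A}-x_{B}|\leq \frac{4\pi}{NM}$, dividing by $|x_{A}-x_{B}|^{\beta}$ multiplies this by at least $\big(\frac{NM}{4\pi}\big)^{\beta}$, which yields $\frac{4}{\epsilon_{2}}$.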
Then if $C\frac{\lambda J^{6}|t-\tilde{t}|}{\tilde{t}}\leq \frac{2}{\epsilon_{2}}$ and $CN^{-\delta}\leq \frac{1}{\epsilon_{2}}$, we get \begin{align*} &||\tilde{w}(x,t)||_{C^{k,\beta}}\geq \Big|\frac{\partial^{k}\tilde{w}}{\partial u^{k}}(x_{A},t)-\frac{\partial^{k}\tilde{w}}{\partial u^{k}}(x_{B},t)\Big|\frac{1}{|x_{A}-x_{B}|^{\beta}}\geq \frac{1}{\epsilon_{2}}\\ \end{align*} and this will hold if we take $N$ big enough and $|t-\tilde{t}|\leq \frac{2\tilde{t}}{C\epsilon_{2}\lambda J^{6}}=C(\epsilon_{1},\epsilon_{2})\tilde{t}$. The only thing we need to prove is that we can also obtain $$||\tilde{w}_{\lambda,N,M,J,L,\tilde{t}}(x,0)||_{C^{k,\beta}}\leq \epsilon_{1}$$ $$||\tilde{w}(x,0)||_{H^{k+\beta+1-\frac32\delta}}\leq \epsilon_{3},$$ but $$||\tilde{w}(x,0)||_{H^{k+\beta+1-\frac32\delta}}\leq \frac{C}{N^{\gamma}}+\frac{C}{N^{\frac{\delta}{2}}}$$ with $C$ depending on $J,L$ and $M$, so taking $N$ big enough $$||\tilde{w}(x,0)||_{H^{k+\beta+1-\frac32\delta}}\leq \epsilon_{3}$$ and analogously, $$||\tilde{w}_{\lambda,N,M,J,L,\tilde{t}}(x,0)||_{C^{k,\beta}}\leq C\frac{J^{6(1-\gamma)}}{t_{crit}N^{\gamma}}+ \frac{\epsilon_{1}}{2}$$ so again, taking $N$ big enough finishes the proof. \end{proof} \section{Strong ill-posedness and non-existence of solutions} We are now ready to prove ill-posedness and non-existence of solutions. As mentioned in section 4, these results hold for $k\in\mathds{N}$, $\beta\in(0,1]$, $\gamma\in(0,1)$ with $k+\beta> 1+\gamma$, and $\delta$ is some constant $\delta\in(0,\frac12)$ such that $k+\beta+2 \delta > 1+\gamma$.
\begin{theorem} Given $T,t_{crit},\epsilon_{1},\epsilon_{2}>0$, there exists a function $w(x,0)$ such that $||w(x,0)||_{C^{k,\beta}}\leq \epsilon_{1}$ and the only solution to (\ref{gSQG}) in $H^{k+\beta+1- \delta}$ with initial conditions $w(x,0)$ exists for $t\in[0,T]$ and fulfills $$||w(x,t_{crit})||_{C^{k,\beta}}\geq \frac{1}{\epsilon_{2}}.$$ \end{theorem} \begin{proof} This is just a direct application of theorem \ref{crecimientoperturbado} with $v_{1,error}=v_{2,error}=0$. \end{proof} \begin{theorem} Given $t_{0},\epsilon>0$, there exists a function $w(x,0)$ such that $||w(x,0)||_{C^{k,\beta}}\leq \epsilon$ and the only solution to (\ref{gSQG}) in $H^{k+\beta+1-\frac32 \delta}$ with initial conditions $w(x,0)$ exists for $t\in[0,t_{0}]$ and fulfills, for $t\in(0,t_{0}]$, $||w(x,t)||_{C^{k,\beta}}=\infty.$ \end{theorem} \begin{proof} To obtain initial conditions with the desired properties, we will consider initial conditions of the form $$\sum_{j=1}^{\infty}\sum_{i=1}^{G(j,\epsilon)}T_{R_{i,j}}(w_{i,j}(x))$$ where $T_{R}(f(x_{1},x_{2}))=f(x_{1}+R,x_{2})$. We will first choose the $w_{i,j}(x)$ and afterwards we will pick the values of $R_{i,j}$. First, for fixed $j$, we restrict to choices of $w_{i,j}$ that are initial conditions given by theorem \ref{crecimientoperturbado} with $\frac{1}{\epsilon_{2}}=j$, $\epsilon_{1}=\epsilon$ and $T=t_{0}$. Then if we choose some $t_{crit}=t_{crit,i,j}$ and we call $\tilde{w}_{i,j}$ a solution to (\ref{gSQGvext}) with the initial conditions given by $w_{i,j}(x)$ and an appropriate $v_{ext}$ fulfilling $||v_{ext}||_{C^{k+2}}\leq C_{i,j}$, we would then have that for $t\in[t_{crit,i,j}-Ct_{crit,i,j},t_{crit,i,j}]$ $$||\tilde{w}_{i,j}(x,t)||_{C^{k,\beta}}\geq j$$ for some $C$ depending on $\epsilon$ and $j$.
Therefore, by choosing $t_{crit,i,j}$ appropriately, we can obtain, for any $t\in[\frac{1}{j},t_{0}]$, $$\sup_{i=1,2,...,G(j,\epsilon)}||\tilde{w}_{i,j}(x,t)||_{C^{k,\beta}}\geq j$$ with $G(j,\epsilon)$ a finite number depending on $j$ and $\epsilon$. Furthermore, we can now choose $\epsilon_{3}$ in theorem \ref{crecimientoperturbado} so that $$||w_{i,j}(x)||_{H^{k+\beta+1-\frac32 \delta}}\leq \frac{c_{0}2^{-j}}{G(j,\epsilon)}$$ $$||w_{i,j}(x)||_{L^{1}}\leq \frac{2^{-j}}{G(j,\epsilon)}$$ with $c_{0}$ a constant small enough so that any solution to $\gamma$-SQG with $||w_{0}(x)||_{H^{k+\beta+1-\frac32 \delta}}\leq c_{0}$ exists for $t\in[0,t_{0}]$ and satisfies $||w(x,t)||_{H^{k+\beta+1-\frac32 \delta}}\leq 1$ for $t\in[0,t_{0}]$. Therefore we know that, independently of the choice of $R_{i,j}$, for $t\in[0,t_{0}]$ there exists a unique $H^{k+\beta+1-\frac32\delta}$ solution to (\ref{gSQG}) with initial conditions $\sum_{j=1}^{\infty}\sum_{i=1}^{G(j,\epsilon)}T_{R_{i,j}}(w_{i,j}(x))$ and, furthermore, if we call this solution $w_{\infty}(x,t)$ (which still depends on the choice of $R_{i,j}$, but we omit it for simplicity of notation), then there is a constant $v_{max}$ such that, for $t\in[0,t_{0}]$, $$||v_{1}(w_{\infty})||_{L^{\infty}},||v_{2}(w_{\infty})||_{L^{\infty}}\leq v_{max}.$$ With this, and using that there exists $D\in\mathds{R}$ such that $\operatorname{supp}(w_{i,j}(x))\subset B_{D}(0)$, we have that, if we choose the $R_{i,j}$ so that $|R_{i_{1},j_{1}}-R_{i_{2},j_{2}}|\geq 4t_{0} v_{max}+2D+\max(P_{i_{1},j_{1}},P_{i_{2},j_{2}})$ with $P_{i,j}>0$, then $$w_{i,j,\infty}(x,t):=1_{B_{D+2t_{0}v_{max}}(-R_{i,j},0)}w_{\infty}(x,t)$$ fulfills for $t\in[0,t_{0}]$ the evolution equation $$\frac{\partial w_{i,j,\infty} }{\partial t}+(v_{\gamma}(w_{i,j,\infty})+v(w_{\infty}-w_{i,j,\infty}))\cdot(\nabla w_{i,j,\infty})=0$$ and $$||v(w_{\infty}-w_{i,j,\infty})||_{C^{k+2}}\leq \frac{C}{P_{i,j}^{2+\gamma}}.$$ But by the choice of $w_{i,j}(x)$ and using that the supports of the $w_{i,j,\infty}$
are disjoint, we have that if \begin{equation}\label{cij} ||v(w_{\infty}-w_{i,j,\infty})||_{C^{k+2}}\leq C_{i,j} \end{equation} then for $t\in(0,t_{0}]$ \begin{align*} &||w_{\infty}(x,t)||_{C^{k,\beta}}= \sup_{j\in\mathds{N},i=1,2,...,G(j,\epsilon)}||w_{i,j,\infty}(x,t)||_{C^{k,\beta}}=\infty\\ \end{align*} and taking $P_{i,j}$ big enough so that (\ref{cij}) is fulfilled finishes the proof. \end{proof} \section*{Acknowledgements} This work is supported in part by the Spanish Ministry of Science and Innovation, through the “Severo Ochoa Programme for Centres of Excellence in R$\&$D (CEX2019-000904-S)” and 114703GB-100. DC and LMZ were partially supported by the ERC Advanced Grant 788250. DC gratefully acknowledges the support of the Charles Simonyi Endowment at the Institute for Advanced Study (Princeton). \bibliographystyle{alpha}
\section{Introduction} Lattice field theories require an intensive use of numerical simulations, and often supercomputers are employed in order to render such simulations feasible in terms of time consumption. The outstanding developments which occurred in the realm of machine learning in the past decade, together with the efforts made to improve hardware technology, have attracted a lot of interest from physicists, who have largely relied on neural networks (NNs) in a wide variety of problems, ranging from condensed matter physics to string theory~\cite{Carrasquilla:2017,Bobev:2020}. Even though these applications have proven to be successful, oftentimes no adaptation of architectures adopted in other fields, e.g. image processing, was put in place. A wise strategy when using tools is to tailor them to meet the requirements of the problem in question. In the case of field theories, a cornerstone is represented by Noether's theorem~\cite{Noether:1918}, which states that for every continuous symmetry of the action there exists a respective conserved current. Therefore, a desirable approach is to design NNs in such a way that the underlying symmetries are respected. A very important result in this direction has been achieved with the introduction of group equivariant convolutional neural networks (G-CNNs)~\cite{Cohen:2016}, which take care of the preservation of global symmetries, such as translations, rotations and reflections. Also in the context of gauge symmetries there have been recent developments \cite{Cohen:2019,Kanwar:2020,Boyda:2020,Favoni:2020}. Lattice field theories are often characterized by invariance under spacetime translations. While in \cite{Cohen:2016} the effectiveness of G-CNNs was shown in computer vision applications, in our recent work~\cite{Bulusu:2021rqz} we focus on the relevance of translational symmetry in lattice field theory. 
For this discussion, it is sufficient to employ convolutional neural networks (CNNs), which were conceived precisely to include translational equivariance in the network properties. Our goal is to investigate how crucial the preservation of such a symmetry is, with particular attention to the generalization capabilities of the architectures in terms of different lattice sizes and different physical parameters. \section{Physical system} In this study we focus on a complex scalar field in 1+1 dimensions with quartic interaction and nonzero chemical potential $\mu$, whose action is the following: \begin{equation} S = \int \mathrm{d}x_0 \mathrm{d}x_1 \left( \lvert D_0 \phi \rvert^2 - \lvert \partial_1 \phi \rvert^2 - m^2 \lvert \phi \rvert^2 - \lambda \lvert \phi \rvert^4 \right), \end{equation} where $D_0 = \partial_0 - i \mu$, $m$ is the mass and $\lambda$ is the coupling constant. The discretization procedure leads to the action \begin{equation} S_{lat}=\sum_x \left( \eta \lvert \phi_x \rvert^2 + \lambda \lvert \phi_x \rvert^4 -\sum_{\nu = 1}^2 \left( e^{\mu \mspace{2mu} \delta_{\nu, 2}} \phi_x^* \phi_{x + \hat{\nu}} + e^{- \mu \mspace{2mu} \delta_{\nu, 2}} \phi_x^* \phi_{x - \hat{\nu}} \right) \right), \end{equation} with $\eta=4+m^2$. The two terms involving the complex conjugation lead to a sign problem that can be eliminated via a dual formulation \cite{Gattringer:2013b}. It maps the field $\phi_x$ onto the integer-valued field $k_{x,\nu}$ and the non-negative integer field $l_{x,\nu}$, where $\nu$ indicates either the temporal or the spatial direction. \section{Architecture types} In lattice field theories, the input for the NNs is typically a field configuration, while the output is represented by one or more observables. A translationally invariant architecture produces the same output observables for any shift of the input configuration. Architectures that break the symmetry can learn it only approximately.
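The equivariance and invariance properties at stake can be illustrated in a few lines. The following is a minimal NumPy sketch in one dimension (a toy illustration of the property, not the actual models used in the study): a convolution with circular padding commutes with lattice shifts, and a global average pooling on top of it is shift invariant.

```python
import numpy as np

def circular_conv1d(x, kernel):
    """Cross-correlate x with kernel, using periodic (circular) boundary conditions."""
    n, k = len(x), len(kernel)
    return np.array([sum(kernel[j] * x[(i + j) % n] for j in range(k))
                     for i in range(n)])

rng = np.random.default_rng(0)
x = rng.normal(size=12)   # toy 1D "configuration"
w = rng.normal(size=3)    # toy convolution kernel
shift = 5

# Equivariance: convolving a shifted input equals shifting the convolved output.
lhs = circular_conv1d(np.roll(x, shift), w)
rhs = np.roll(circular_conv1d(x, w), shift)
assert np.allclose(lhs, rhs)

# Invariance after global average pooling: the pooled output ignores the shift.
assert np.isclose(circular_conv1d(x, w).mean(),
                  circular_conv1d(np.roll(x, shift), w).mean())
```

The same identity holds channel by channel in two dimensions, which is why a stack of stride-one convolutions with circular padding followed by a global pooling yields identical observables for every translate of a configuration.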
The general structure we choose for our networks is inspired by typical architectures used in computer vision and can be seen in fig.~\ref{fig:archs}: a first part consisting of several convolutional layers alternated with spatial pooling layers, then a global pooling layer or a flattening step, after which an optional dense network can be appended to make the network more expressive. The type of global pooling depends on the task under examination. For example, the prediction of intensive quantities calls for a global average pooling layer, while global sum pooling is suited for extensive observables. An architecture is guaranteed to be translationally invariant if every layer in the convolutional part of the network is equivariant, meaning that a shift in the input induces a corresponding shift in the output. Another aspect that is worth mentioning is that the theory we study here features periodic boundary conditions, so every layer is equipped with circular padding. \begin{figure}[h] \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{cnn_eq_labeled.pdf} \caption{Equivariant architecture (EQ)} \end{subfigure} \hfill \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{cnn_st_labeled.pdf} \caption{Strided architecture (ST)} \end{subfigure} \centering \begin{subfigure}[h]{0.45\textwidth} \includegraphics[width=\textwidth]{cnn_flat_labeled.pdf} \caption{Flattening architecture (FL)} \end{subfigure} \caption{The architecture types employed to test the relevance of translational symmetry. Checkmarks indicate whether a layer preserves equivariance (\textcolor{Green}{\ding{51}}) or not (\textcolor{Red}{\ding{55}}). A stride of one in the convolutions and in the spatial pooling layers respects translational equivariance (a), while a stride of two or larger breaks it, as in (b) and (c). A flattening layer (c) breaks equivariance and restricts the use of the network to a specific lattice size. 
Figures from~\cite{Bulusu:2021rqz}.} \label{fig:archs} \end{figure} We design three architecture types: the first one (EQ) is translationally equivariant, in the second one (ST) equivariance is broken because of a stride larger than one in the convolutions or in the pooling layers, and in the third one (FL) a flattening step instead of global pooling is used, which also breaks equivariance. ST architectures still retain a residual symmetry based on translations that are a multiple of the stride, but equivariance is lost in general. An additional drawback of the flattening layer is that the architecture cannot be employed on lattice sizes different from the one it has been trained on. \section{Prediction of observables} The first task we perform is a regression on two observables, namely the particle number density \begin{equation} n=\frac{1}{N}\sum_xk_{x,2} \end{equation} and the average of the squared modulus of the field \begin{equation} |\phi|^2=\frac{1}{N}\sum_x\frac{W(f_x+2)}{W(f_x)}, \end{equation} where \begin{equation} f_x=\sum_\nu[|k_{x,\nu}|+|k_{x-\hat{\nu},\nu}|+2(l_{x,\nu}+l_{x-\hat{\nu},\nu})]\,, \quad W(f_x)=\int_0^\infty \mathrm{d}x\, x^{f_x+1}\mathrm{e}^{-\eta x^2-\lambda x^4}. \end{equation} These two observables are intensive quantities and therefore we opt for a global average pooling in EQ and ST. \begin{figure}[h] \minipage{0.45\textwidth} \centering \includegraphics[width=6cm]{reg_loss_over_training_data.pdf} \caption{Results of Optuna hyperparameter search. For each architecture type, the model suggested by Optuna is trained for various training set sizes. The bands contain ten instances of such a model; the dashed lines indicate the average test losses and the continuous line passes through the median test losses. The top plot features the comparison of the three architecture types with no data augmentation; the middle and the bottom ones show the test loss behavior with data augmentation for ST and FL, respectively.
Image from~\cite{Bulusu:2021rqz}.} \label{fig:loss_vs_samples} \endminipage \hfill \minipage{0.45\textwidth} \centering \includegraphics[width=6cm]{reg_loss_over_mu.pdf} \caption{Comparison of the test loss as a function of the chemical potential. The winning architectures in the top plot of fig.~\ref{fig:loss_vs_samples} at the largest training set size are tested on a wide range of chemical potential, while being trained on only one specific value (1.05). Image from~\cite{Bulusu:2021rqz}.} \label{fig:loss_vs_mu} \endminipage \end{figure} The training set is made up of configurations generated with specific values of the physical parameters: $\lambda=1$, $\eta=4.01$, $\mu=1.05$ and lattice sizes $N_t=60$, $N_x=4$. In the attempt to make the fairest comparison possible, we define for each architecture a large search space for many hyperparameters (e.g.~the number of convolutional layers and of linear layers, the kernel size and the number of channels in each convolution, the position of a spatial pooling layer) and run an automatized optimizer called Optuna~\cite{Akiba:2019} to select the most promising values of such hyperparameters. The criterion to identify the winning hyperparameter combination is the minimum validation loss, which is chosen to be the mean squared error (MSE). Ten instances of the resulting best model of each architecture type are then trained all over again. We repeat the same operation for various training set sizes, ranging from 100 to 20000 samples. After that, we compute the loss on the test set, which consists of 4000 samples for every $\mu\in\{0.91,\dots,1.05\}$ with steps of $\Delta\mu=0.005$. This set is therefore generated with 29 different values of $\mu$, while for training and validating only one was used. This is done because we aim at inspecting the generalization capabilities of the models. Figure~\ref{fig:loss_vs_samples} shows the outcome of the whole procedure. 
As is apparent in the top plot, the EQ type performs much better than its non-equivariant counterparts independently of the number of samples used during training. Also, the test loss improves for EQ when increasing the training set size, while for the other two architectures it remains approximately constant. In order to counteract the missing translational equivariance in ST and FL, a new training process is carried out with data augmentation, meaning that configurations are randomly shifted in both directions before being fed into the network. Surprisingly, this does not produce substantial differences in the test loss for ST and FL, as reported in the middle and bottom plot of fig.~\ref{fig:loss_vs_samples}. \begin{figure}[h!] \centering \includegraphics{reg_loss_over_lattice_size.pdf} \caption{Comparison of the test loss as a function of the lattice size. The best models found at the largest training set size are tested on several lattice sizes, while being trained on only one specific value ($60\times4$). Image from~\cite{Bulusu:2021rqz}.} \label{fig:loss_vs_size} \end{figure} In the following, we consider only the 10 instances found using 20000 samples during training for each architecture type. In fig.~\ref{fig:loss_vs_mu}, we report the behaviour of the test loss for every individual value of $\mu$. In this analysis we also include 4000 configurations generated at each chemical potential in the range $\mu\in\{1.1,\dots,1.5\}$ with steps of $\Delta\mu=0.1$ to check extrapolation abilities of the models. While ST and FL perform well only where training took place, EQ is able to generalize very well to smaller chemical potentials. The performance deteriorates for all architectures for larger chemical potentials (see a detailed discussion of the reasons in~\cite{Bulusu:2021rqz}), but EQ proves to be a more reliable choice with respect to its non-equivariant counterparts, yielding a test loss smaller by about one order of magnitude. 
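The random-shift augmentation used for ST and FL amounts to rolling each configuration by a random amount in both lattice directions before it is fed to the network. A small NumPy sketch (the array below is a hypothetical stand-in for a dual-field configuration, not actual data):

```python
import numpy as np

def augment(config, rng):
    """Randomly translate a 2D lattice configuration in both directions,
    with periodic boundaries, as in shift-based data augmentation."""
    st = rng.integers(config.shape[0])  # random temporal shift
    sx = rng.integers(config.shape[1])  # random spatial shift
    return np.roll(config, (st, sx), axis=(0, 1))

rng = np.random.default_rng(1)
config = rng.integers(-2, 3, size=(60, 4))  # toy integer-valued configuration
shifted = augment(config, rng)

# A shift merely permutes lattice sites, so observables built from per-site
# sums are unchanged: the augmented sample keeps the original label.
assert shifted.sum() == config.sum()
assert sorted(shifted.ravel()) == sorted(config.ravel())
```

Since the target observables are sums or averages over sites, the labels of augmented samples need no recomputation; the augmentation only exposes the non-equivariant architectures to more translates of the same physics.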
We now want to examine the generalization to lattice sizes other than the one used for training and validating, so we have to restrict ourselves to a comparison between EQ and ST. Figure~\ref{fig:loss_vs_size} undeniably confirms that for this problem translational equivariance is extremely beneficial, evident from the fact that the loss of EQ is about three orders of magnitude smaller than the loss of ST for all lattice sizes. It is worth mentioning that this task has already been tackled previously in~\cite{Zhou:2019}. There, an architecture of type FL with $\sim10^7$ parameters was trained on the $200\times10$ lattice at two values of the chemical potential and was able to reach a test loss of $10^{-6}$, two orders of magnitude less accurate than our best EQ instance, even though the latter is trained on a different lattice size, on just one value of $\mu$, and with far fewer parameters ($\sim10^4$). Lastly, we point out that the kink in the ST bands is due to another drawback of a stride larger than one: a spatial pooling layer with a stride of two can use the first four rows of the lattice, but discards the fifth one, losing $20\%$ of the information, which leads to a much larger loss. \section{Detection of flux violations} The dual formulation of the complex scalar field given in sec.~2 is also called flux representation~\cite{Gattringer:2013b}, because the field $k$ obeys the conservation law $\sum_{\nu} \left( k_{x, \nu} - k_{x - \hat{\nu}, \nu} \right) = 0$. Physical configurations must respect this flux conservation, but we can artificially create configurations exhibiting flux violations by slightly modifying the so-called worm algorithm~\cite{Prokofev:2001} used for the data generation in the first task. The algorithm proposes a field update at a particular lattice site; then, if accepted, another update is proposed for one of the neighboring sites.
The mechanism repeats itself, drawing a path on the lattice that is referred to as a worm, whose head moves until it meets the tail. Before this happens, the ends of the worms violate flux conservation, so we save configurations that feature an open worm. An example of this is depicted in fig.~\ref{fig:flux}. \begin{figure} \centering \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.8]{openworm_schematic_a.pdf} \caption{Example field configuration} \label{fig:worm} \end{subfigure} \begin{subfigure}[t]{0.6\textwidth} \centering \includegraphics[scale=0.8]{openworm_schematic_b.pdf} \caption{Feature maps of convolutional part in best EQ and ST models} \label{fig:flux_detection} \end{subfigure} \caption{Task visualization and network prediction. The top plot in (a) shows a possible path drawn by the worm algorithm on top of a preexisting physical configuration, while the bottom plot highlights flux violations, which coincide with the worm endpoints. The feature maps in (b) show that successful models detect just one flux violation in only some of the channels in order to discriminate between open and closed worm configurations. Image from~\cite{Bulusu:2021rqz}.} \label{fig:flux} \end{figure} \begin{figure}[h] \centering \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=6.5cm]{class_loss_and_acc_8x8.pdf} \caption{Comparison of test loss and test accuracy vs chemical potential on $8\times8$ lattices.} \label{fig:class_loss_vs_mu} \end{subfigure} \hfill \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=6.5cm]{class_loss_and_acc.pdf} \caption{Comparison of test loss and test accuracy vs lattice size.} \label{fig:class_loss_vs_size} \end{subfigure} \caption{Results for open worm detection. The bands contain 50 instances of the architectures indicated by Optuna. 
In (a), the test loss and the test accuracy are reported as functions of the chemical potential on $8\times8$ configurations, while in (b) they are given as functions of the lattice size. Images from~\cite{Bulusu:2021rqz}.} \label{fig:class_results} \end{figure} Apart from $\lambda$, which is kept fixed at 1, the dataset is generated using multiple combinations of physical parameters: $\eta\in\{4.01,4.04,4.25\}$, $\mu\in\{1,1.25,1.5\}$ and lattice sizes $N_t=N_x\in\{8,16,32,64\}$. Only two combinations are used for training, specifically $(\eta,\mu)\in\{(4.01,1.5),(4.25,1)\}$ on $8\times8$ lattices. A total of 4000 samples make up the training set, half of them without open worms, the other half with open worms. The task of the networks is classifying correctly whether a configuration features an open worm or not. Figure~\ref{fig:flux_detection} shows that it is sufficient for the models to detect only one of the flux violations. Also in this case, an appropriate search space for the hyperparameters is defined and explored using Optuna to suggest the values that minimize the validation loss, which is the binary cross entropy. In a classification task it is not clear what kind of global pooling is the most fitting, which is why this choice is part of the hyperparameter optimization. 50 instances of the best architecture are trained, yielding the results in fig.~\ref{fig:class_results}. The test loss and the test accuracy do not strongly depend on the physical parameters, as we can see in fig.~\ref{fig:class_loss_vs_mu}, where the worse performance of the FL models compared to the other two architecture types is also clearly visible. Figure~\ref{fig:class_loss_vs_size} tells us that the loss deteriorates on larger lattices and that EQ and ST achieve very close results in this classification task. \section{Counting open worms} An extension of the previous section can be achieved by adding more than only one open worm on top of a physical configuration. 
This is then treated as a regression task, which is reminiscent of counting problems such as crowd counting and, with an appropriate adaptation, can be used for the evaluation of $n$-point functions. \begin{figure}[h] \begin{subfigure}{0.45\textwidth} \includegraphics[width=6.5cm]{reg2_loss_and_acc_vs_worms_val_mean_8.pdf} \caption{Comparison of test loss and test accuracy vs number of open worms on $8\times8$ lattices.} \label{fig:counting_loss_vs_worms} \end{subfigure} \hfill \begin{subfigure}{0.45\textwidth} \includegraphics[width=6.5cm]{reg2_loss_and_acc_vs_size_val_mean.pdf} \caption{Comparison of test loss and test accuracy vs lattice size.} \label{fig:counting_loss_vs_size} \end{subfigure} \caption{Results for worm counting. The bands contain 20 instances of the architectures indicated by Optuna. In (a), the test loss and the test accuracy are reported as a function of the open worms on $8\times8$ configurations, whereas in (b) they are given as functions of the lattice size. Images from~\cite{Bulusu:2021rqz}.} \label{fig:counting_results} \end{figure} The combinations of physical parameters are inherited from the previous section with the addition of a number of open worms ranging from 0 to 10. For training we will use the two combinations employed in the classification task with 0 and 5 worms. This amounts to only four combinations out of the 396 possible ones. The number of training samples is 20000, and also in this case we optimize the hyperparameters by means of Optuna. Since we are dealing with an extensive quantity, we adopt a global sum pooling. Figure~\ref{fig:counting_loss_vs_worms} reports the test loss and accuracy on $8\times8$ lattices as a function of open worms, where ST and FL perform worse than EQ. In particular, they have a hard time predicting a number of worms from 1 to 4. 
In fig.~\ref{fig:counting_loss_vs_size} the results on every lattice size are shown, indicating that EQ is to be preferred, even achieving $100\%$ test accuracy with all of its 20 instances. \section{Conclusions} We have tested the performance of three architecture types on three different tasks. The architectures that respect translational symmetry proved to be a highly reliable choice in all tasks, while the architectures that break translational invariance can perform poorly when generalizing to physical parameters that were absent from the training set. Moreover, breaking invariance brings other drawbacks along: if a stride larger than one is used, part of the input information can be lost, while a flattening layer hinders the possibility of applying the same architecture to other lattice sizes. Another important aspect to keep in mind is that the automated optimizer we have used, Optuna, has favored small or medium-sized architectures, with a number of trainable parameters between $10^2$ and $10^5$. In conclusion, our study shows that it is sensible to use translationally equivariant neural networks for problems that are characterized by translational symmetry. \acknowledgments This work has been supported by the Austrian Science Fund FWF No.~P32446-N27, No.~P28352 and Doctoral program No.~W1252-N27. The Titan\,V GPU used for this research was donated by the NVIDIA Corporation. \bibliographystyle{JHEP.bst}
\section{Introduction} A {\em graph} is a pair $G = (V_G, E_G)$, where $V_G$ is the (finite, nonempty) set of vertices of $G$ and $E_G$ is the set of edges, where an edge is a two-element subset of vertices. The {\em complete graph} on $n$ vertices is denoted $K_n$. An {\em induced subgraph} of $G$ is a subgraph obtained from $G$ by deleting a vertex $v$, or a set of vertices $S$, and we write $G-v$ or $G-S$, respectively. If $\{u, v \} \in E_G$ the vertices $u$ and $v$ are said to be {\em adjacent}; they are also said to be {\em neighbors}. The set $N(v)$, consisting of all the neighbors of $v$, is called the {\em open neighborhood of $v$} (it does not include $v$); the set $N[v] = N(v) \cup \{v \}$ is the {\em closed neighborhood of $v$}. The {\em degree of a vertex $v \in V_G$}, denoted by $\deg_G (v)$, is the number of edges incident to $v$. The minimum (respectively, maximum) degree in a graph $G$ is denoted $\delta (G)$ (respectively, $\Delta (G)$). A subset $S \subseteq V_G$ is called {\em independent} if no two vertices in $S$ are adjacent. A graph $G$ is {\em connected} if each pair of vertices in $V_G$ is joined by a path. A vertex $v \in V_G$ is a {\em cut-vertex} if the induced graph $G-v$ is not connected. We say that $G$ is the {\em vertex sum} of two graphs $G_1$ and $G_2$, and write $G_1 \bigoplus_v G_2$, if $v$ is a cut-vertex of $G$, $V_{G_1} \cap V_{G_2} = \{v \}$, and $E_{G_1} \cap E_{G_2} = \emptyset$. A graph with no cut-vertices is said to be {\em nonseparable}. A {\em matching} in a graph $G$ is a set of edges $M = \{ \{ i_1, j_1\}, \{ i_2, j_2 \}, \dots, \{ i_k, j_k \} \} \subseteq E_G$, such that no endpoints are shared. The vertices that determine the edges in $M$ are called {\em $M$-saturated} vertices; all other vertices in $V_G$ are called {\em $M$-unsaturated} vertices. A {\em perfect matching} in a graph $G$ is a matching that saturates all vertices of $G$.
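These matching notions can be checked by brute force on small examples. The following sketch (illustrative only, not part of the paper's machinery) enumerates perfect matchings and shows that the path on four vertices has a unique perfect matching, while $K_4$ has three.

```python
from itertools import combinations

def perfect_matchings(vertices, edges):
    """Enumerate all perfect matchings: sets of |V|/2 edges saturating every vertex."""
    k = len(vertices) // 2
    return [M for M in combinations(edges, k)
            if len({v for e in M for v in e}) == len(vertices)]

# The path 1-2-3-4 has the unique perfect matching {1,2},{3,4}.
path_edges = [(1, 2), (2, 3), (3, 4)]
assert perfect_matchings(range(1, 5), path_edges) == [((1, 2), (3, 4))]

# K_4 has three perfect matchings, so its perfect matching is not unique.
k4_edges = list(combinations(range(1, 5), 2))
assert len(perfect_matchings(range(1, 5), k4_edges)) == 3
```

Uniqueness of a perfect matching is exactly the condition that characterizes the graphs attaining $\operatorname{mr}^-({\mathbb F}, G) = |G|$ below.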
A {\em maximum matching} in a graph $G$ is a matching of maximum order among all matchings in $G$. The {\em matching number} of a graph $G$, denoted by $\operatorname{match}(G)$, is the number of edges in a maximum matching. An even cycle in a graph $G$ is called {\em $M$-alternating} if it alternates between edges in $M$ and edges not in $M$. A matching $M$ in a graph $G$ is {\em uniquely restricted} if $G$ does not contain an $M$-alternating cycle. A graph $G$ is {\em $k$-partite} if $V_G$ can be expressed as the union of $k$ (possibly empty) independent sets; the {\em complete $k$-partite graph} is denoted $K_{n_1,n_2,\dots,n_k}, k \ge 2, n_i \ge 1, i = 1, 2, \dots, k$. A {\em tree} is a connected graph $T$ with $\left| E_T \right| = \left| V_T \right| - 1$; trees are {\em 2-partite}, also known as {\em bipartite}. Although many of the results presented here are valid for some finite fields, we assume throughout this paper that ${\mathbb F}$ is an infinite field. A matrix $A \in {\mathbb F}^{n \times n}$ is {\em skew-symmetric} if $A^T = -A$. For an $n \times n$ skew-symmetric matrix $A$, the {\em graph of $A$}, denoted ${\mathcal G}(A)$, is the graph with vertices $\{v_1, \dots, v_n \}$ and edges $\{ \{v_i, v_j \} : a_{ij} \ne 0, 1 \le i < j \le n \}$. Let $\mathcal{S}^-({\mathbb F}, G) = \{ A \in {\mathbb F}^{n \times n} : A^T = -A, {\mathcal G}(A) = G \}$ be the set of skew-symmetric matrices over the field ${\mathbb F}$ described by a graph $G$. The {\em minimum skew rank of a graph $G$ over the field ${\mathbb F}$ } is defined as $\operatorname{mr}^-({\mathbb F}, G) = \min \{ {\rm rank}(A) : A \in \mathcal{S}^-({\mathbb F}, G) \}$, the {\em maximum skew nullity of $G$ over the field ${\mathbb F}$} is defined as $\operatorname{M}^-({\mathbb F}, G) = \max \{\operatorname{nullity} (A) : A \in \mathcal{S}^-({\mathbb F}, G) \}$, and the {\em maximum skew rank of $G$ over the field ${\mathbb F}$} as $\sMR({\mathbb F}, G) = \max \{ {\rm rank}(A) : A \in \mathcal{S}^-({\mathbb F}, G) \}$.
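As a concrete numerical instance (with entries chosen arbitrarily for illustration), a matrix in $\mathcal{S}^-(\mathbb{R}, P_3)$ for the path $v_1 - v_2 - v_3$ has rank 2; since every nonzero skew-symmetric matrix has rank at least 2, such a witness gives $\operatorname{mr}^-(\mathbb{R}, P_3) = 2$ and hence $\operatorname{M}^-(\mathbb{R}, P_3) = 1$.

```python
import numpy as np

# A matrix described by the path v1 - v2 - v3: nonzero off-diagonal entries
# exactly on the edges {1,2} and {2,3}, with A^T = -A (entries illustrative).
A = np.array([[ 0.0,  1.0, 0.0],
              [-1.0,  0.0, 1.0],
              [ 0.0, -1.0, 0.0]])

assert np.array_equal(A.T, -A)           # A is skew-symmetric
assert np.linalg.matrix_rank(A) == 2     # witness: mr^-(R, P_3) <= 2
# nullity = |G| - rank = 1, consistent with M^-(R, P_3) = 1.
```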
Clearly $\operatorname{mr}^-({\mathbb F}, G) + \operatorname{M}^-({\mathbb F}, G) = |G|$, but note that, since a skew-symmetric matrix has even rank, $\sMR ({\mathbb F}, G) \le |G|$. For a graph $G$, select $Z \subseteq V_G$, color all vertices in $Z$ black, and all others white. Next apply the {\em skew color change rule}: if $u \in V_G$ ($u$ any color) has exactly one white neighbor $v$, then change the color of $v$ to black (we say $u$ forces $v$ black). Continue to apply the skew color change rule until no more changes are possible. A {\em skew zero forcing set} for a graph $G$ is a subset $Z$ of $V_G$ such that, if initially the vertices in $Z$ are colored black and the remaining vertices are colored white, the skew color change rule forces all the vertices in $V_G$ black. A {\em minimum skew zero forcing set} for a graph $G$ is a skew zero forcing set of minimum order among all skew zero forcing sets for $G$. The {\em skew zero forcing number $\operatorname{Z}^-(G)$} is the minimum of $|Z|$ over all skew zero forcing sets $Z \subseteq V_G$. \section{Preliminary results}\label{prelim} The parameter $Z(G)$ was introduced in~\cite{AIM}, while the parameter $\operatorname{Z}^-(G)$ was introduced in~\cite{10IMA}. \begin{prop}\label{tree-known} \begin{enumerate} \item\label{induced}\cite[Observation~1.7]{10IMA} If $H$ is an induced subgraph of $G$, then $\operatorname{mr}^- ({\mathbb F}, H) \le \operatorname{mr}^- ({\mathbb F}, G)$. \item\label{2.810IMA}~\cite[Proposition 3.5]{10IMA} For any graph $G$, $\operatorname{M}^- ({\mathbb F}, G) \le \operatorname{Z}^- (G)$ and $\operatorname{mr}^- ({\mathbb F}, G) \ge |G| - \operatorname{Z}^- (G)$. \item\label{M=Z}~\cite[Proposition 4.2]{AIM} For any tree $T$, $M ({\mathbb F}, T) = Z (T)$, and hence $\operatorname{mr} ({\mathbb F}, T) = |T| - \operatorname{M} ({\mathbb F}, T) = |T| - Z (T)$.
\end{enumerate} \end{prop} \begin{thm}\label{knowngen} \begin{enumerate} \item\label{smr=2}\cite[Theorem~2.1]{10IMA} Let $G$ be a connected graph with $|G| \ge 2$; then $\operatorname{mr}^- ({\mathbb F}, G) = 2$ if and only if $G = K_{n_1,n_2,\dots,n_s}$, $s \ge 2$, $n_i \ge 1$, $i = 1, 2, \dots, s$. \item\label{mr=match}~\cite[Theorem~2.5]{10IMA} For a graph $G$, $\sMR ({\mathbb F}, G) = 2 \operatorname{match} (G)$, and every even rank between $\operatorname{mr}^- ({\mathbb F}, G)$ and $\sMR ({\mathbb F}, G)$ is realized by a matrix in $\mathcal{S}^-({\mathbb F}, G)$. \item\label{upm}~\cite[Theorem~2.6]{10IMA} For a graph $G$, $\operatorname{mr}^-({\mathbb F}, G) = |G| = \sMR({\mathbb F}, G)$ if and only if $G$ has a unique perfect matching. \item\label{smrtree}\cite[Theorem~2.8]{10IMA} If $T$ is a tree, then $\operatorname{mr}^-({\mathbb F}, T) = 2 \operatorname{match}(T) = \sMR({\mathbb F}, T)$. \item\label{cover}\cite[Proposition 3.3]{10IMA} Let ${\mathbb F}$ be a field and $G = \cup_{i=1}^k \, G_i$ be a graph. Suppose that for all $i \ne j$, $G_i$ and $G_j$ have no edges in common; then $\operatorname{mr}^- ({\mathbb F}, G) \le \sum_{i=1}^k \, \operatorname{mr}^- ( {\mathbb F}, G_i)$. \end{enumerate} \end{thm} \section{Graphs with extreme skew zero forcing number}\label{extreme} It is a fact that for any graph $G$, $0 \le \operatorname{Z}^- (G) \le |G|$. If a graph has isolated vertices, those vertices must belong to every skew zero forcing set for the graph. Thus, without loss of generality, we assume that graphs have no isolated vertices. Also, some of the results presented here are valid for disconnected graphs; we specifically note when a graph must be connected. \begin{rem}\label{zsub} If $G$ is a graph, $v \in V_G$, and $Z$ is a minimum skew zero forcing set for $G-v$, then $Z \cup \{v \}$ is a skew zero forcing set for $G$, so $\operatorname{Z}^-(G) \le \operatorname{Z}^-(G - v) +1$. 
\end{rem} From Proposition~\ref{tree-known} and Theorem~\ref{knowngen}, we obtain the inequalities \begin{equation}\label{ineq} |G| - \operatorname{Z}^- (G) \le \operatorname{mr}^- ({\mathbb F}, G) \le \sMR ({\mathbb F}, G) = 2 \operatorname{match} (G) \le |G|. \end{equation} The following are derived using the inequalities in Equation~\ref{ineq} and the definition of a skew zero forcing set. \begin{obs}\label{smallz} \begin{enumerate} \item\label{z=0} If $\operatorname{Z}^- (G) = 0$, then there is a vertex $v \in V_G$ such that $\deg_G (v) = 1$. \item\label{z=0upm} If $\operatorname{Z}^- (G) = 0$, then $|G|$ is even, $\operatorname{mr}^- ({\mathbb F}, G) = |G|$ and $G$ has a unique perfect matching. \item\label{z=1even} If $\operatorname{Z}^- (G) = 1$ and $|G|$ is even, then $\operatorname{mr}^- ({\mathbb F}, G) = |G|$ and $G$ has a unique perfect matching. \item\label{z=1odd} If $\operatorname{Z}^- (G) = 1$ and $|G|$ is odd, then $\operatorname{mr}^- ({\mathbb F}, G) = |G| - 1.$ \item\label{z=2odd} If $\operatorname{Z}^- (G) = 2$ and $|G|$ is odd, then $\operatorname{mr}^- ({\mathbb F}, G) = |G| - 1.$ \item\label{z=2even} If $\operatorname{Z}^- (G) = 2$ and $|G|$ is even, then either $\operatorname{mr}^- ({\mathbb F}, G) = |G|$ and $G$ has a unique perfect matching, or $\operatorname{mr}^- ({\mathbb F}, G) = |G| -2$. \item \cite[Observation 1.6]{10IMA} $\operatorname{Z}^- (G) = |G|$ if and only if $G$ consists only of isolated vertices. \end{enumerate} \end{obs} It is clear that the converses of Items~\ref{z=0}--\ref{z=2even} in Observation~\ref{smallz} are not true; the graphs in Figures~\ref{upmsz0}--\ref{upmsz2}, and Item~\ref{tri} in Observation~\ref{special}, illustrate this. For the graph $G_2$ in Figure~\ref{upmsz1} (which is a cactus graph, and also a block-clique graph), $\operatorname{mr}^- ({\mathbb F}, G_2) = |G_2| > |G_2| - 1 = |G_2| - \operatorname{Z}^- (G_2)$. \eol{.125} \begin{figure}[h!] 
\begin{center} \scalebox{.4}{\includegraphics{upm}} \caption{Graph with $\operatorname{Z}^- (G_1) = 0 = |G_1|-6$.}\label{upmsz0} \end{center} \end{figure} \begin{figure}[h!] \begin{center} \scalebox{.4}{\includegraphics{upmz1}} \caption{Graphs with $\operatorname{Z}^- (G_i) = 1 = |G_2| - 5 = |G_3| - 4$.}\label{upmsz1} \end{center} \end{figure} \begin{figure}[h!] \begin{center} \scalebox{.4}{\includegraphics{upmz2}} \caption{Graphs with $\operatorname{Z}^- (G_i) = 2 = |G_4| -10 = |G_5| -2 = |G_6| - 7$.}\label{upmsz2} \end{center} \end{figure} Note that if $G$ is one of the graphs $K_2$, $K_3$, or $K_{2,1}$, then $\operatorname{Z}^- (G) = |G| - 2$. We now show that this equation characterizes the complete multipartite graphs. The proof given below involves the use of $\operatorname{mr}^- ({\mathbb F}, G)$, but one can easily construct a field-independent proof. \begin{thm}\label{sz=g-2} A connected graph $G$ is a complete multipartite graph $K_{n_1, n_2, \dots, n_s}$, $s \ge 2, n_i \ge 1$ if and only if $\operatorname{Z}^-(G) = |G|-2$. \end{thm} \begin{proof} Let $G = K_{n_1, n_2, \dots, n_s}$, $s \ge 2, n_i \ge 1$, with $|G| \ge 4$. From Item~\ref{smr=2} in Theorem~\ref{knowngen}, $\operatorname{mr}^- ({\mathbb F}, G) = 2$; hence from Equation~\ref{ineq}, $|G| - 2 \le \operatorname{Z}^- (G)$. Pick adjacent $u, v \in V_G$ (in different partite classes); then $Z = V_G - \{u, v \}$ is a skew zero forcing set for $G$, and $\operatorname{Z}^-(G) \le |G| - 2$. It follows that $\operatorname{Z}^-(G) = |G| - 2$. Conversely, if $G$ is connected, but not $K_{n_1, n_2, \dots, n_s}$, $s \ge 2, n_i \ge 1$, then $G$ has an induced $P_4$ or an induced paw (\cite[Remark 2.2]{10IMA}). If $v_1, v_2, v_3, v_4$ induce a $P_4$ or a paw, then $Z = V_G - \{v_1, v_2, v_3, v_4 \}$ is a skew zero forcing set for $G$, so $\operatorname{Z}^-(G) \le |G| - 4 \ne |G| - 2$. 
\end{proof} \begin{cor}\label{g-mr} If $G$ is a connected graph, then either $\operatorname{Z}^- (G) = |G| -2$ or $\operatorname{Z}^- (G) \le |G| -4$. In particular, there are no connected graphs for which $\operatorname{Z}^- (G) = | G|-1$, and no connected graphs for which $\operatorname{Z}^- (G) = |G|-3$. \end{cor} \begin{rem}\label{ord8} One can verify directly that for connected graphs $G$ (pictured in~\cite{98RW}) with $4 \le |G| \le 6$, $\operatorname{Z}^-(G) = |G| - 4$ if and only if $\operatorname{mr}^- ({\mathbb F}, G) = 4$. Henceforth we assume $|G| \ge 7$. \end{rem} \begin{prop}\label{cutv} If $G$ is connected, has a cut-vertex, and $\operatorname{Z}^- (G) = |G| - 4$, then $\operatorname{mr}^- ({\mathbb F}, G) = 4$. \end{prop} \begin{proof} Let $G$ be connected, and let $v \in V_G$ be a cut-vertex. Let $G_1$ be the connected subgraph of $G$ induced by the vertices of one of the components of $G-v$ together with $v$, and $G_2$ be the connected subgraph of $G$ induced by $(V_G - V_{G_1}) \cup \{ v\}$, so that $G= G_1 \bigoplus_v G_2$. If $\operatorname{Z}^- (G_1) = |G_1| - 2$ and $\operatorname{Z}^- (G_2) = |G_2| - 2$, then $G$ is the vertex sum of two complete multipartite graphs, and in this case $\operatorname{mr}^- ({\mathbb F}, G) = 4$. We now show that the two other possibilities that arise from Corollary~\ref{g-mr} do not allow $\operatorname{Z}^- (G) = |G| - 4$. Let $Z_1$ and $Z_2$ be minimum skew zero forcing sets for $G_1$ and $G_2$, respectively. If $\operatorname{Z}^- (G_1) = |G_1| - 2$, $\operatorname{Z}^- (G_2) \le |G_2| - 4$, and $Z_1 \ne \emptyset$, then from the proof of Theorem~\ref{sz=g-2}, we can take $v \in Z_1$, thus $Z_1 \cup Z_2$ is a skew zero forcing set for $G$. If $\operatorname{Z}^- (G_i) \le |G_i| - 4, i = 1, 2$, then $Z_1 \cup Z_2 \cup \{ v \}$ is a skew zero forcing set for $G$. In both cases, $\operatorname{Z}^- (G) < |G| - 4$. 
\end{proof} \begin{prop}\label{induce6} If $G$ is a connected graph, and $H$ is a connected induced subgraph of $G$ of order 6 that has a unique perfect matching, then $\operatorname{Z}^- (G ) < |G| -4$. \end{prop} \begin{proof} Figure~\ref{upm1} shows the twenty connected graphs on six vertices that have a unique perfect matching. One of these is $G_2$, also pictured in Figure~\ref{upmsz1}, which satisfies $\operatorname{Z}^- (G_2 ) = 1$; all others have $\operatorname{Z}^- (H ) = 0$. If $H=G_2$, and $u \in V_{G_2}$, then $(V_G-V_H) \cup \{ u \}$ is a skew zero forcing set for $G$; if $H \ne G_2$, then $V_G - V_H$ is a skew zero forcing set for $G$. Thus, $\operatorname{Z}^- (G ) \le |G| -5 < |G| - 4$. \end{proof} \begin{figure}[h!] \begin{center} \scalebox{.5}{\includegraphics{upm6}} \caption{Graphs on six vertices~(\cite{98RW}) with a unique perfect matching.}\label{upm1} \end{center} \end{figure} \begin{prop}\label{g-4a} Let ${\mathbb F}$ be a field, and let $G$ be a connected graph with $|G| \ge 4$. If $\operatorname{mr}^- ({\mathbb F}, G) =4$, then $\operatorname{Z}^-(G) = |G| - 4$. \end{prop} \begin{proof} If $\operatorname{mr}^- ({\mathbb F}, G) = 4 \ne 2$, then $G$ is not complete multipartite, so by Theorem~\ref{sz=g-2} and Corollary~\ref{g-mr}, $\operatorname{Z}^- (G) \le |G| - 4$; then $4 \le |G| - \operatorname{Z}^- (G) \le \operatorname{mr}^- ({\mathbb F}, G) =4$. \end{proof} The following example, provided by Sudipta Mallik and Bryan Shader and constructed using their methods in~\cite{13MS}, shows that the converse of Proposition~\ref{g-4a} is not true. \begin{defn}\cite[p. 3651]{13MS} A collection $\{ N_i : i \in \mathcal{I} \}$ of vectors is a minimally dependent set of vectors if it is a linearly dependent set and, for each $j \in \mathcal{I}$, $\{ N_i : i \ne j, i \in \mathcal{I} \}$ is a linearly independent set of vectors. \end{defn} \begin{ex} If $G =K_3 \times K_3$ is the graph pictured in Figure~\ref{counterex}, \begin{figure}[h!] 
\begin{center} \scalebox{.5}{\includegraphics{counter}} \caption{The graph $K_3 \times K_3$.}\label{counterex} \end{center} \end{figure} then $\operatorname{Z}^- (G) = |G| -4$, and $\operatorname{mr}^- ({\mathbb F}, G) \ge 6$. \end{ex} \begin{proof} Using the graph in Figure~\ref{counterex}, and the fact that $G$ is tripartite, it is not difficult to show that $\operatorname{Z}^-(G) > 4$, and that $Z= \{ 1,2,3,4,7 \}$ is a minimum skew zero forcing set for $G$. Hence $\operatorname{Z}^- (G) = 5 = |G|-4$, and $4 = |G| - \operatorname{Z}^-(G) \le \operatorname{mr}^- ({\mathbb F}, G)$. Let $B \in \mathcal{S}^- ({\mathbb F}, G)$, and assume columns $3i + 1, 3i + 2, 3i + 3$ are linearly independent for $i = 0, 1, 2$. Then from the zero-nonzero pattern of $B$ we observe that columns $3i + 1, 3i + 2, 3i + 3, 3i + 4, 3i + 5 \ (\mbox{mod } 9)$ are linearly independent, and since $B$ is skew-symmetric, ${\rm rank} (B) \ge 6$. Assume now that columns $3i + 1, 3i + 2, 3i + 3$ of $B$ are linearly dependent for $i = 0, 1, 2$, and hence minimally linearly dependent. By Lemma 4.7 in~\cite{13MS}, the nullspace of $B$ contains vectors of the form \begin{equation}\label{vecnul} \left[ \begin{array}{c} a\\ b\\ c\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0 \end{array} \right], \left[ \begin{array}{c} 0\\ 0\\ 0\\ d\\ e\\ f\\ 0\\ 0\\ 0 \end{array} \right], \mbox{ and } \left[ \begin{array}{c} 0\\ 0\\ 0\\ 0\\ 0\\ 0\\ g\\ h\\ k \end{array} \right],\end{equation} for some $a, b, c, d, e, f, g, h, k$, each of which is nonzero. Let $D = \mbox{diag} (a, b, c, d, e, f, g, h, k)$; then $DBD \in \mathcal{S}^-({\mathbb F}, G)$ and ${\rm rank} (DBD) = {\rm rank} (B)$. Note that the nullspace of $DBD$ contains vectors as in Equation~\ref{vecnul} with $a = b = c = d = e = f = g = h = k = 1$. Direct calculations now show that $DBD$ has the form: $$\left[ \begin{array}{ccc} 0 & x S & y S\\ x S & 0 & z S\\ y S & z S & 0 \end{array} \right] = \left[ \begin{array}{ccc} 0 & x & y\\ x & 0 & z\\ y & z &0 \end{array} \right] \otimes S$$ for some nonzero $x, y$ and $z$, where $S$ is a nonzero $3 \times 3$ skew-symmetric matrix whose nullspace contains the all-ones vector. 
Since ${\rm rank} \left( \left[ \begin{array}{ccc} 0 & x & y\\ x & 0 & z\\ y & z &0 \end{array} \right] \right) = 3$, and ${\rm rank} (S) = 2$, it follows that ${\rm rank} (B) = {\rm rank} (DBD) = 6$. Hence $\operatorname{mr}^-({\mathbb F}, G) \ge 6$. \end{proof} \section{Bipartite graphs}\label{main} In this section we study the relation between certain matchings and skew zero forcing sets. Bipartite graphs provide a natural setting for this discussion. \begin{prop}\label{constrained} If $B$ is a bipartite graph, and $M$ a uniquely restricted matching in $B$, then the set of $M$-unsaturated vertices of $B$ is a skew zero forcing set for $B$. \end{prop} \begin{proof} Let $B$ be a bipartite graph, $M$ a uniquely restricted matching in $B$, and $H$ the subgraph of $B$ induced by the vertices saturated by $M$ (if $H$ is not connected, the following process can be applied separately to each of the components of $H$). Since $M$ is uniquely restricted, the vertices in the bipartition of $H$ can be labelled $u_1, \dots, u_r$ and $v_1, \dots, v_r$ so that $\{u_i, v_i \} \in M$ and $\{u_i, v_j \} \notin E_H$ whenever $1 \le i < j \le r$. Let $Q = V_B - V_H$, and color the vertices in $Q$ black. Without loss of generality we may assume $\deg_H (v_r) = 1$; then we have the following sequence of forces: $v_r \rightarrow u_r, v_{r-1} \rightarrow u_{r-1}, \dots, v_1 \rightarrow u_1, u_1 \rightarrow v_1, u_2 \rightarrow v_2, \dots, u_r \rightarrow v_r$. Thus $Q$ forms a skew zero forcing set for $B$. \end{proof} \begin{prop}\label{qlessz} Let $G$ be a graph, $M$ a matching in $G$, and $\operatorname{mr}^- ({\mathbb F}, G) \le 2|M|$. If the set of $M$-unsaturated vertices of $G$ is a skew zero forcing set for $G$, then it is a minimum skew zero forcing set for $G$. \end{prop} \begin{proof} Let ${\mathbb F}$ be a field, $M$ a matching in $G$, and $Q$ the set of $M$-unsaturated vertices. 
From Equation~\ref{ineq}, $|G| - \operatorname{Z}^- (G) \le \operatorname{mr}^- ({\mathbb F}, G) \le 2 |M|$, so $|Q| = |G| - 2 |M| \le \operatorname{Z}^- (G)$. Thus, if $Q$ is a skew zero forcing set for $G$, it is a minimum skew zero forcing set for $G$. \end{proof} \begin{cor}\label{z=q} Let $G$ be a graph, $M$ a matching in $G$, and $\operatorname{mr}^- ({\mathbb F}, G) \le 2|M|$. If the set of $M$-unsaturated vertices is a minimum skew zero forcing set for $G$, then $|G| - \operatorname{Z}^- (G) = \operatorname{mr}^- ({\mathbb F}, G)$. \end{cor} \begin{proof} If $Q$ denotes the set of $M$-unsaturated vertices, then $2 |M| = |G| - |Q| = |G| - \operatorname{Z}^- (G) \le \operatorname{mr}^- ({\mathbb F}, G) \le 2|M|$. \end{proof} \begin{cor}\label{bitt} If $B$ is a bipartite graph (in particular a tree, or a bipartite cactus) with $\operatorname{mr}^- ({\mathbb F}, B) = \sMR ({\mathbb F}, B)$, then \begin{enumerate} \item\label{maxmr} There is a maximum matching $M$ in $B$ such that the set of $M$-unsaturated vertices is a minimum skew zero forcing set for $B$. \item\label{mrt=t-z} $\operatorname{Z}^- (B) = |B| - \operatorname{mr}^- ({\mathbb F}, B)$, and $\operatorname{M}^- ({\mathbb F}, B) = \operatorname{Z}^- (B)$. \item $\operatorname{Z}^- (B) =0$ if and only if $B$ has a unique perfect matching. \item $\operatorname{Z}^- (B) \le \frac{ \Delta (B) |B| - 2 |E_B|}{\Delta (B)}$. In particular, \begin{enumerate} \item if $T$ is a tree, $\operatorname{Z}^- (T) \le \frac{|T| ( \Delta (T) -2) + 2}{\Delta (T)}$, and this bound is sharp for paths and stars; \item if $U$ is a unicyclic graph, $\operatorname{Z}^- (U) \le \frac{|U| ( \Delta (U) -2)}{\Delta (U)}$. \end{enumerate} \end{enumerate} \end{cor} \begin{proof} \begin{enumerate} \item If $B$ is a bipartite graph with $\operatorname{mr}^- ({\mathbb F}, B) = \sMR ({\mathbb F}, B)$, then there must be a uniquely restricted maximum matching $M$ in $B$. Now apply Propositions~\ref{constrained} and~\ref{qlessz}. 
\item\label{m=z} This follows from Item~\ref{maxmr} above and Corollary~\ref{z=q}. \item This follows from Item~\ref{upm} in Theorem~\ref{knowngen}, as well as Item~\ref{mrt=t-z} above. \item This follows from the fact that a bipartite graph $B$ has a matching of size at least $\frac{|E_B|}{\Delta (B)}$ (\cite[p. 108]{08HHM}). \end{enumerate} \end{proof} The results in Propositions~\ref{constrained} and~\ref{qlessz} suggest a certain duality between maximum matchings and minimum skew zero forcing sets. We establish that a natural duality, via matroids, does indeed exist for some families of graphs. \begin{defn}{\bf The matching matroid and its dual.}~\cite[pp. 92--93]{86LP} If $G$ is a bipartite graph, the set $$\mu =\{ X \subseteq V_G \, : \, X \, \mbox{ is saturated by some matching} \}$$ is a matroid on $V_G$ with bases the sets of vertices saturated by maximum matchings in $G$. Its dual matroid $$\mu^* =\{ Q \subseteq V_G \, : \, Q \, \mbox{ is disjoint from the set of vertices saturated by some maximum matching} \}$$ on $V_G$ has bases $V_G - B_i$, where $B_i$ is a basis of $\mu$. \end{defn} \begin{thm}\label{matroid} {\bf The zero forcing matroid.} In a bipartite graph $B$ in which all maximum matchings are uniquely restricted, the minimum skew zero forcing sets are the bases of a matroid on $V_B$, and this matroid is the dual of the matching matroid on $B$. \end{thm} \begin{thm}\label{treeall} If $B$ is a bipartite graph in which all maximum matchings are uniquely restricted, and $M$ is a matching in $B$, then $M$ is a maximum matching in $B$ if and only if the set of $M$-unsaturated vertices of $B$ is a minimum skew zero forcing set for $B$. Alternatively, let $Z$ be a skew zero forcing set for $B$; then $Z$ is a minimum skew zero forcing set for $B$ if and only if $V_B - Z$ has a unique perfect matching which is a maximum matching in $B$. 
\end{thm} \begin{proof} Let $B$ be a bipartite graph in which all maximum matchings are uniquely restricted, $M$ a maximum matching in $B$, and $Q$ the set of $M$-unsaturated vertices. Since $M$ is a uniquely restricted matching, from Proposition~\ref{constrained}, $Q$ is a minimum skew zero forcing set for $B$. Conversely, suppose that $Q$ is a minimum skew zero forcing set for $B$. From Theorem~\ref{matroid}, $V_B-Q$ has a unique perfect matching which is a maximum matching in $B$. We omit the alternate proof. \end{proof} In a tree all maximum matchings are uniquely restricted; thus we have the following. \begin{cor}\label{tree} If $T$ is a tree, and $M$ a matching in $T$, then $M$ is a maximum matching in $T$ if and only if the set of $M$-unsaturated vertices of $T$ is a minimum skew zero forcing set for $T$. Alternatively, let $Z$ be a skew zero forcing set for $T$; then $Z$ is a minimum skew zero forcing set for $T$ if and only if $V_T - Z$ has a unique perfect matching which is a maximum matching in $T$. \end{cor} \section{Unicyclic Graphs} Results on the minimum skew rank of unicyclic graphs can be found in~\cite{11D}; explicitly: $\operatorname{mr}^- ({\mathbb F}, U) = \sMR({\mathbb F}, U)$ if the unique cycle is odd, or if the unique cycle is even and $U$ has a uniquely restricted maximum matching; $\operatorname{mr}^- ({\mathbb F}, U) = \sMR({\mathbb F}, U) - 2$ if the unique cycle is even and $U$ does not have a uniquely restricted maximum matching. \begin{prop}\label{uniodd} If $U$ is a unicyclic graph, then there exists a matching $M$ in $U$ such that the set of $M$-unsaturated vertices of $U$ is a minimum skew zero forcing set for $U$. \end{prop} \begin{proof} If the unique cycle has odd order, and $M$ is a maximum matching in $U$, an induction argument shows that the set of $M$-unsaturated vertices is a minimum skew zero forcing set for $U$. 
The base cases follow from examples in~\cite{11D}; we omit the details of the proof. If the unique cycle has even order, and $\operatorname{mr}^- ({\mathbb F}, U) = \sMR({\mathbb F}, U)$, then the result follows from Item~\ref{maxmr} in Corollary~\ref{bitt}. If the unique cycle has even order, $\operatorname{mr}^- ({\mathbb F}, U) = \sMR({\mathbb F}, U)-2$, and $\widehat M$ is a maximum matching in $U$, then the cycle is $\widehat M$-alternating. If $U$ is a cycle, and $e$ is an edge in $\widehat M$, then $M = \widehat M-e$ is a uniquely restricted matching in $U$. If $U$ is not a cycle, then it has an induced subgraph $H$ consisting of the vertex sum of the cycle and a path of order 3, that is, $H= C \bigoplus_{v_1} P_3$, where $P_3 = (\{v_1, v_2, v_3\}, \{\{v_1, v_2 \}, \{ v_2 , v_3 \} \})$, and $u$ is a neighbor of $v_1$ on the cycle. Thus, there exists a maximum matching $\widehat M$ in $U$ containing the edges $\{ u, v_1 \}$ and $\{ v_2, v_3 \}$ (if $v_2$ is $\widehat M$-unsaturated, then $(\widehat M - \{ \{ u, v_1 \} \} ) \cup \{ \{ v_1, v_2 \}\}$ is a uniquely restricted maximum matching). The matching $M = ( \widehat M - \{ \{ u, v_1 \}, \{ v_2, v_3 \} \} ) \cup \{ \{ v_1, v_2 \} \}$ is a uniquely restricted matching in $U$. In either case $M$ has order $\frac{\sMR({\mathbb F}, U) - 2}{2}$, and since $\operatorname{mr}^- ({\mathbb F}, U) = \sMR({\mathbb F}, U) - 2$, it follows from Propositions~\ref{constrained} and~\ref{qlessz} that the set of $M$-unsaturated vertices is a minimum skew zero forcing set for $U$. \end{proof} \begin{cor}\label{uni} If $U$ is a unicyclic graph, then $\operatorname{Z}^- (U) = |U| - \operatorname{mr}^- ({\mathbb F}, U)$. \end{cor} \section{Additional Examples} We conclude with several contrasting examples of graphs $G$ for which there is a matching $M$ in $G$ such that the set of $M$-unsaturated vertices is a minimum skew zero forcing set for $G$. Also, in Observation~\ref{special}, we list the skew zero forcing number of some special graphs. 
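Several of the examples below hinge on whether a given matching is uniquely restricted. For small graphs this can be checked by brute force, using the characterization that a matching $M$ is uniquely restricted exactly when it is the unique perfect matching of the subgraph induced on the $M$-saturated vertices (equivalently, there is no $M$-alternating cycle). The following sketch (illustrative names; feasible only for small instances) implements this check.

```python
from itertools import combinations

def count_perfect_matchings(vertices, edges):
    """Brute-force count of perfect matchings of the graph (vertices, edges)."""
    n = len(vertices)
    if n % 2:
        return 0
    count = 0
    for subset in combinations(edges, n // 2):
        covered = set()
        for u, v in subset:
            covered.update((u, v))
        if covered == set(vertices):   # edges are disjoint and saturate everything
            count += 1
    return count

def is_uniquely_restricted(edges_of_G, M):
    """M is uniquely restricted iff it is the unique perfect matching of
    the subgraph induced on the M-saturated vertices."""
    VM = {v for e in M for v in e}
    induced = [e for e in edges_of_G if set(e) <= VM]
    return count_perfect_matchings(sorted(VM), induced) == 1

# In C_4 two opposite edges form an M-alternating 4-cycle: not uniquely restricted.
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(is_uniquely_restricted(c4, [(0, 1), (2, 3)]))  # False
# In P_4 the same pair of edges is uniquely restricted.
p4 = [(0, 1), (1, 2), (2, 3)]
print(is_uniquely_restricted(p4, [(0, 1), (2, 3)]))  # True
```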
\begin{ex} The graph $G_7$, in Figure~\ref{cactus}, is a non-bipartite cactus; it does not have a unique maximum matching, but has a maximum matching $M_7$ of order 5 with no $M_7$-alternating cycle. The set of $M_7$-unsaturated vertices (in black) is a minimum skew zero forcing set for $G_7$, and from Item~\ref{z=2even} in Observation~\ref{smallz}, $|G_7| - \operatorname{Z}^- (G_7) = 10 = \operatorname{mr}^- ({\mathbb F}, G_7) \ne \sMR ({\mathbb F}, G_7) = 12$. \end{ex} \begin{figure}[h!] \begin{center} \scalebox{.4}{\includegraphics{cactus}} \caption{Non-bipartite graph with $\operatorname{Z}^- (G_7) = 2, |G_7| - 2 = \operatorname{mr}^- ({\mathbb F}, G_7)$.}\label{cactus} \end{center} \end{figure} \begin{ex} The graph $G_8$ (see~\cite[pp.~6--7]{86LP}), in Figure~\ref{lovasz}, is bipartite and does not have a perfect matching, but has a uniquely restricted matching $M_8$ of cardinality 18 (it is easy to verify that no matching in $G_8$ of cardinality 20 or 19 is uniquely restricted). The set of $M_8$-unsaturated vertices (in black) is a minimum skew zero forcing set for $G_8$, and with the aid of Mathematica one can verify that $|G_8| - \operatorname{Z}^- (G_8) = 42 - 6 = 36 = \operatorname{mr}^- ({\mathbb F}, G_8) \ne \sMR ({\mathbb F}, G_8)$. \end{ex} \begin{figure}[h!] \begin{center} \scalebox{.4}{\includegraphics{lovasz}} \caption{Bipartite graph with $\operatorname{Z}^- (G_8) = 6, |G_8| - 6 = \operatorname{mr}^- ({\mathbb F}, G_8)$.}\label{lovasz} \end{center} \end{figure} \begin{ex} The graph $G_4$, in Figure~\ref{upmsz2}, is not bipartite, not unicyclic, and not a cactus; it has a unique perfect matching, but also has a matching $M_4$ of order $5$ such that the set of $M_4$-unsaturated vertices is a minimum skew zero forcing set for $G_4$, and $|G_4| - \operatorname{Z}^- (G_4) = 10 < \operatorname{mr}^- ({\mathbb F}, G_4) = \sMR ({\mathbb F}, G_4)$. \end{ex} Below we list some graphs and their skew zero forcing numbers. 
We refer the reader to~\cite{10IMA} for the definitions of $W_n$, the wheel on $n$ vertices; $P_{m,k}$, the $m, k$-pineapple, with $m \ge 3, k \ge 1$; $Q_s$, the $s$th hypercube; $T_n$, the super-triangle; $H_s$, the $s$th half-graph; $N_s$, the necklace with $s$ diamonds; $G \circ H$, the corona of $G$ with $H$; $G \square H$, the Cartesian product of $G$ and $H$. \begin{obs}\label{special} For the graphs $G$ in Items~\ref{path},~\ref{cycle},~\ref{wheel},~\ref{pineapple},~\ref{cube},~\ref{half},~\ref{corona}, and~\ref{pdp}, $\operatorname{Z}^-(G) = |G| - \operatorname{mr}^- ({\mathbb F}, G)$ (note that there might be restrictions on the field ${\mathbb F}$; see~\cite{10IMA}): \begin{enumerate} \item\label{path} $\operatorname{Z}^- (P_n) = \left\{ \begin{array}{lll} 0 & \mbox{if} & n \mbox{ is even},\\ 1 & \mbox{if} & n \mbox{ is odd};\\ \end{array} \right.$ \item\label{cycle} $\operatorname{Z}^- (C_n) = \left\{ \begin{array}{lll} 2 & \mbox{if} & n \mbox{ is even},\\ 1 & \mbox{if} & n \mbox{ is odd};\\ \end{array} \right.$ \item\label{wheel} $\operatorname{Z}^- (W_n) = \left\{ \begin{array}{lll} 2 & \mbox{if} & n \mbox{ is even},\\ 3 & \mbox{if} & n \mbox{ is odd};\\ \end{array} \right.$ \item\label{pineapple} $\operatorname{Z}^- (P_{m,k} ) = |P_{m,k} | - 4= m+k-4, m \ge 3, k \ge 1 $; \item\label{cube} $\operatorname{Z}^- (Q_s) = 2^{s-1}, s \ge 2$; \item\label{tri} $\operatorname{Z}^- (T_n) = n-1$; \item\label{half} $\operatorname{Z}^- (H_s ) = 0$; \item\label{neck} $\operatorname{Z}^- (N_s ) = s$, $\operatorname{Z}^-(N_s) = |N_s| - \operatorname{mr}^- ({\mathbb F}, N_s)$ if and only if $s=2$; \item\label{corona} $\operatorname{Z}^- (G \circ K_1 ) = 0$; \item\label{corona2} $\operatorname{Z}^- (C_t \circ K_s ) = st-3t+2, s \ge 2, \operatorname{Z}^-(C_t \circ K_s ) = |C_t \circ K_s | - \operatorname{mr}^- ({\mathbb F}, C_t \circ K_s )$ if and only if $s$ is even; \item\label{pdp} $\operatorname{Z}^- (P_s \square P_s ) = s$; \item\label{cdp} $\operatorname{Z}^- (K_3 
\square P_2 ) = 2$, $\operatorname{Z}^-(K_3 \square P_2 ) = |K_3 \square P_2 | - \operatorname{mr}^- ({\mathbb F}, K_3 \square P_2 )$, and for $s \ge 3, t \ge 3$, $\operatorname{Z}^- (K_s \square P_t ) = s$, $\operatorname{Z}^-(K_s \square P_t ) = |K_s \square P_t | - \operatorname{mr}^- ({\mathbb F}, K_s \square P_t )$ if and only if $s$ is even, or both $s$ and $t$ are odd. \end{enumerate} \end{obs}
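The closed forms for paths and cycles in Items~\ref{path} and~\ref{cycle} can be spot-checked by a brute-force computation of $\operatorname{Z}^-$. The sketch below (illustrative names; exponential in $|G|$, so only for small graphs) implements the skew color change rule and searches for a smallest skew zero forcing set.

```python
from itertools import combinations

def skew_closure(n, edges, black):
    """Apply the skew color change rule until no more forces are possible."""
    nbrs = {v: set() for v in range(n)}
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    black = set(black)
    changed = True
    while changed:
        changed = False
        for u in range(n):            # u may be of any color
            white = nbrs[u] - black
            if len(white) == 1:       # exactly one white neighbor: u forces it
                black |= white
                changed = True
    return black

def skew_zero_forcing_number(n, edges):
    """Brute-force Z^-(G) over all subsets, smallest first."""
    for k in range(n + 1):
        for Z in combinations(range(n), k):
            if skew_closure(n, edges, Z) == set(range(n)):
                return k

path = lambda n: [(i, i + 1) for i in range(n - 1)]
cycle = lambda n: path(n) + [(n - 1, 0)]
print([skew_zero_forcing_number(n, path(n)) for n in (4, 5, 6, 7)])   # [0, 1, 0, 1]
print([skew_zero_forcing_number(n, cycle(n)) for n in (4, 5, 6, 7)])  # [2, 1, 2, 1]
```

Note that, unlike the standard zero forcing rule, the forcing vertex $u$ need not be black, which is why the empty set can already be a skew zero forcing set for even paths.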
\section{Introduction} \IEEEPARstart{T}{he} instantaneous balance between generated and consumed active power is one of the basic principles of AC power system operation. Any deviation from this condition causes a frequency event, namely, the deviation of the system frequency from its nominal value. The progressive displacement of conventional generation in favour of production from \ac{res} will reduce the frequency control capability of power systems. Therefore, it is necessary to involve new resources in grid ancillary services in order to ensure robustness, resiliency and efficiency of future power systems \cite{Ye2016,Vrettos2016,Baccino2014}. The power equilibrium in real time can be controlled only if the production system is able to change its generation level \cite{ucte2009appendix}. The coupling of \ac{res} with \acp{bess} is therefore investigated as a way to reconcile grid flexibility requirements with the stochastic nature of such generation systems \cite{guo2013electricity,wang2016energy,nair2010battery}. Assessments of the capital costs of batteries have shown that, under the market conditions of recent years, a multifunctional storage deployment is necessary to recover the investment costs of energy storage systems \cite{wasowicz2012evaluating}. Many papers propose methods for allowing batteries to provide services such as energy management, peak shaving, and frequency and voltage regulation \cite{namor2018control, oudalov2007sizing, christakou2014primary, oudalov2007optimizing, BULLICHMASSAGUE201749, conte2017stochastic, Silvestro2018, Yang2018, Li2013, Lawder2014, Zamani2015, Park2017, stai2018,MOHAGHEGHI2018,MOHAGHEGHI2018a}. Several control strategies to perform \ac{pfr} have been proposed in the literature \cite{lu2014state, megel2013maximizing, khalid2010model}. 
Moreover, specific markets around the world are now under development in order to integrate \acp{bess} into grid services, such as the PJM interconnection and ISO New England in the United States \cite{PJMm12_2019,PJMm18_2019}, National Grid (GB) in Europe \cite{EFR_Uk}, and the \ac{igcc}, which involves the German, Belgian, Dutch, French, Swiss and Austrian \ac{pcr} markets \cite{internationalPCR}. In this work an integrated \ac{bess}-\ac{pv} system is considered. A wide literature shows how to properly manage this \ac{is} to perform multiple services such as contingency management, peak shaving, demand response, etc. \cite{Shi2018,Perez2016,eyer2010energy}. However, in many cases droop-based \ac{pfr} is not considered. Papers combining multiple services with \ac{pfr} usually assume a non-traditional provision of \ac{pfr}, such as the one defined by the PJM market \cite{PJMm12_2019}. In this specific case, the signal provided to the regulating units is divided into two contributions, a slow one (RegA) and a fast one (RegD). The one provided to \acp{bess} and \ac{res} is RegD, which is designed to be zero-mean, in order to keep the BESS \ac{soc} approximately at the same level during a given time period \cite{Shi2018,Cheng2018,Perez2016}. Nevertheless, most markets do not adopt this control strategy, but use the raw frequency as the regulating signal, which is not guaranteed to be zero-mean within a given time period. In this case, more sophisticated techniques, such as the ones in \cite{PSCC2018} and \cite{namor2018control}, should be used. In particular, in \cite{PSCC2018} and \cite{namor2018control} \ac{pfr} is coupled with the dispatch of the active power demand of a distribution feeder. Moreover, like the other works cited above, these two works focus on the use of batteries at the transmission and distribution levels. 
In contrast, the present paper focuses on the generation level: the \ac{is} is operated as a power plant that simultaneously participates in the energy market, delivering to the grid the available \ac{pv} generation, and provides droop-based \ac{pfr}. The main contribution of this work is therefore the integration of these two services within a common formulation. Moreover, the problem is defined so as to match current grid codes and market requirements (see Section \ref{ssec:PFR_req} for details). The \ac{is} architecture is depicted in Fig. \ref{fig:Onelinediagram}. The objective is to define an energy dispatch plan using the storage flexibility, so as to maximize the economic gain and provide a continuous and reliable \ac{pfr} service. A two-level strategy \cite{Borghetti2007} is adopted. A suitably developed algorithm, called \ac{dap}, defines an energy dispatch plan and a droop coefficient for the upcoming day, both traded at the day-ahead market. \ac{dap} uses the forecasts of the \ac{pv} generation and of the energy required to perform \ac{pfr}. The latter information is provided by a method proposed in \cite{PSCC2018}. Then, during the daily operation, an \ac{hap} algorithm corrects the \ac{dap} dispatch plan using updated short-term forecasts and the current battery \ac{soe}, in order to ensure the continuity of the \ac{pfr} service. The dispatch plan corrections are traded at the intra-day energy market. Both \ac{dap} and \ac{hap} use chance-constrained optimization \cite{Cinquemani2011}, in order to take into account the uncertainties of the \ac{pv} generation and of the frequency signal dynamics. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figure/figure1.eps} \caption{Integrated system configuration scheme.} \label{fig:Onelinediagram} \end{figure} It is worth remarking that the problem formulation is general: there are no hypotheses on the type of battery, its performance, or the ratings of the resources. 
Moreover, there are no hypotheses on the coupling between the \ac{bess} and the \ac{pv} plant, which could in principle be in AC or DC, or even the result of an aggregation of several \acp{bess} and \acp{pv}. The performance of the designed method is tested by simulations in MATLAB/Simulink; the test environment adopted has been validated by field experiments, as detailed in \cite{PSCC2018}. The rest of the paper is organized as follows. Section \ref{sec:ProblemDefinitions} describes the system configuration and provides the problem formulation. Section~\ref{sec:DAP} and Section~\ref{sec:HAP} introduce the \ac{dap} and \ac{hap} algorithms, respectively. Simulation results are described in Section~\ref{sec:simulations}. Finally, conclusions are reported in Section~\ref{sec:conclusions}. \vspace{10pt} \textit{Notation.} $\mathbf{E}(z)$ is the expectation of the random variable $z$; $\mathbf{P}(A)$ is the probability of event $A$; $z \sim\mathcal{N}(\bar{z},\sigma^2)$ indicates that $z$ is a Normally distributed random variable with mean $\bar{z}$ and variance $\sigma^2$; ${\rm erf}^{-1}(\cdot)$ is the {inverse Gauss error function}; $k=a:b$ denotes the sequence $k=a,a+1,\ldots,b$. \section{Problem Formulation} \label{sec:ProblemDefinitions} The system configuration is presented in Fig.~\ref{fig:Onelinediagram}. The \ac{is} is composed of a \ac{bess} and a \ac{pv} plant. The power $P^t$ [kW] is exported at the \ac{gcp}. As indicated, $P^t>0$ means that the \ac{is} is exporting power. With the same convention, the \ac{bess} exports or imports power $P^b$ [kW] and the \ac{pv} plant generates power $P^{pv}$ [kW]. From the figure, it clearly follows that \begin{equation} P^t = P^b + P^{pv}. \end{equation} The \ac{pv} generation and the \ac{bess} power exchange are limited by the rated powers $P^{pv}_{\rm n}$ and $P^{b}_{\rm n}$, respectively. The \ac{is} rated power is indicated with $P^{t}_{\rm n}=P^{pv}_{\rm n} + P^{b}_{\rm n}$. 
The \ac{bess} energy capacity is denoted by $E_{\rm n}$ [kWh]. The \ac{is} has the objective of exporting the \ac{pv} generation and providing \ac{pfr}. Therefore, $P^t$ takes the form \begin{equation}\label{eq:Pg} P^t = P^m - \alpha \Delta f , \end{equation} where $\alpha$ [kW/Hz] is the \textit{droop} coefficient, $\Delta f$ [Hz] is the frequency deviation from the nominal value $f_{\rm n}$, and $P^m$ [kW] is the \ac{is} \textit{market} power, \textit{i.e.} the power traded at the energy market. The duration of the energy market sessions, also called the dispatch sampling time, is denoted by $\tau$ [\si{\second}]. It is assumed that the \ac{is} always operates as a generator, so that $P^m\geq0$. A minimal droop coefficient $\alpha_{\min}$ is established; it is therefore required that \begin{equation}\label{eq:alphamin} \alpha \geq \alpha_{\min}. \end{equation} The value of $\alpha_{\min}$ can be defined, for example, according to \cite{aisbl2012entso}, where a generator with rated power $P_{\rm n}$ participating in \ac{pfr} has to ensure a maximum statism $b_p^{\max}$ [\si{\percent}], which corresponds to $\alpha_{\min}$ through the relation \begin{equation}\label{eq:statism_max} b_p^{\max} = \frac{100}{\alpha_{\min}}\cdot \frac{P_{\rm n}}{f_n}. \end{equation} \ac{pfr} is effectively performed only by the \ac{bess}. Therefore, to obtain \eqref{eq:Pg}, the battery power exchange must be \begin{equation}\label{eq:Pb} P^b = P^m - P^{pv} - \alpha \Delta f. \end{equation} The \ac{is} is controlled by a \ac{isms} that receives measurements from, and sends control set-points to, the \ac{pv} inverter and the \ac{bms}, which controls the \ac{bess}. In particular, the \ac{isms} receives the measurements of the current \ac{pv} power generation $P^{pv}$ and of the battery State-of-Energy, denoted by $S$ [p.u.].
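As an illustration, the droop law \eqref{eq:Pg}, the statism relation \eqref{eq:statism_max} and the battery share \eqref{eq:Pb} can be sketched in a few lines of code; all numerical values below (nominal frequency, rating, maximal statism) are illustrative assumptions, not data from this paper.

```python
# Illustrative sketch of the droop-based power law; all numbers are assumptions.
F_N = 50.0      # nominal frequency f_n [Hz] (assumed European grid)
P_N = 1500.0    # rated power P_n [kW] (illustrative)
BP_MAX = 8.0    # maximal statism b_p^max [%] (assumed grid-code parameter)

# From b_p^max = (100/alpha_min)*(P_n/f_n):  alpha_min = 100*P_n/(b_p^max*f_n)
ALPHA_MIN = 100.0 * P_N / (BP_MAX * F_N)    # [kW/Hz]

def grid_power(p_market, delta_f, alpha=ALPHA_MIN):
    """Total power at the GCP: P^t = P^m - alpha * delta_f."""
    return p_market - alpha * delta_f

def bess_power(p_market, p_pv, delta_f, alpha=ALPHA_MIN):
    """Battery share of the droop response: P^b = P^m - P^pv - alpha*delta_f."""
    return p_market - p_pv - alpha * delta_f
```

With these assumed ratings, $\alpha_{\min}=\SI{375}{\kilo\watt\per\hertz}$, so a \SI{0.1}{\hertz} under-frequency event increases the exported power by \SI{37.5}{\kilo\watt}, entirely supplied by the battery.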
In this paper, the \ac{soe} dynamics is modelled by the following discrete-time system: \begin{align} \label{eq:SEOdyn} &S_{k+1} = S_k - \frac{\tau}{3600 \cdot E_{\rm n}} P^b_k. \end{align} Notice that \eqref{eq:SEOdyn} describes the dynamics of a \ac{bess} with unitary efficiency. It will be shown that this assumption in the control algorithm design does not affect the overall results; the same approximation has been made and verified in \cite{PSCC2018,powertech2019}. The \ac{isms} has the mission of maximizing the economic gain coming from the energy delivery and from the provision of the \ac{pfr} service. It uses forecasts of the \ac{pv} generation and of the energy required to provide \ac{pfr}. Based on this information, each day the \ac{isms} trades the energy delivery profile and the daily \ac{pfr} droop coefficient $\alpha$ for the day-ahead. During the operation, the battery \ac{soe} must be kept within the \textit{security interval} $[S^{\rm min},S^{\rm max}]$. A violation of the \ac{soe} security interval is called a \textit{failure}; when a failure occurs, the provision of \ac{pfr} is suspended. The percentage of time during which the \ac{soe} security interval is violated is called the \textit{failure rate} and denoted by $\lambda$. Using stochastic modelling, the maximal failure rate is defined \textit{a priori} as, for all $k$, \begin{equation} \lambda_{\max} = 1-\mathbf{P}\left( S^{\min} \leq S_k \leq S^{\max} \right), \end{equation} \textit{i.e.} the probability of violating the security interval. The \ac{dap} is carried out by a dedicated optimization algorithm whose objective is to maximize the economic gain while ensuring that $\lambda$ stays below the predetermined maximal value $\lambda_{\rm max}$. The \ac{dap} program can be applied directly; however, a second possibility is proposed: during the day, using updated short-term forecasts, it is possible to apply corrections that further reduce the failure rate.
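A minimal numerical sketch of the unit-efficiency \ac{soe} recursion \eqref{eq:SEOdyn} and of the empirical failure rate $\lambda$ follows; the capacity, sampling time and security interval are illustrative assumptions.

```python
TAU = 900.0              # dispatch sampling time tau [s] (assumed 15 min)
E_N = 500.0              # BESS capacity E_n [kWh] (illustrative)
S_MIN, S_MAX = 0.1, 0.9  # assumed SoE security interval

def soe_step(s, p_b):
    """One step of S_{k+1} = S_k - tau/(3600*E_n) * P^b_k (unitary efficiency)."""
    return s - TAU / (3600.0 * E_N) * p_b

def failure_rate(soe_trajectory):
    """Fraction of samples violating the security interval (the failure rate)."""
    bad = sum(1 for s in soe_trajectory if not (S_MIN <= s <= S_MAX))
    return bad / len(soe_trajectory)
```

For instance, discharging at \SI{100}{\kilo\watt} for one \SI{15}{\minute} step drains $100\cdot 900/3600 = \SI{25}{\kilo\Wh}$, i.e. 5\% of the assumed capacity.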
These corrections are realized by the \ac{hap} algorithm. Both the \ac{dap} and \ac{hap} algorithms use the technique introduced in \cite{PSCC2018} for providing \ac{pfr} from \acp{bess}. Therefore, before introducing \ac{dap} and \ac{hap}, this technique is briefly recalled in the following. \subsection{Primary frequency regulation from \ac{bess}}\label{sec:FrequencyForecast} Consider a \ac{bess} with capacity $\tilde{E}_{\rm n}$ which performs \ac{pfr} with a droop coefficient $\alpha$, and divide the time into windows of length $T$ [\si{\hour}]. The energy required to provide \ac{pfr} in the generic $i$-th time window $[iT,(i+1)T]$ is: \begin{equation} \label{eq:EiPFR} E_i^{f} =-\alpha \cdot \int_{iT}^{(i+1)T}\Delta f(t) dt = -\alpha W^f_{i}, \end{equation} where $W^f_i$ [\si{\hertz\hour}] is defined as the integral of the frequency deviation over the current time interval. The analysis detailed in \cite{PSCC2018} demonstrates that a time series $\{W^f_i\}$, obtained from a large database of frequency measurements \cite{DatiFreq} for a given value of $T$ (e.g. $T \in\{1, 2, \dots ,24\}$ \si{\hour}), can be modeled with an \ac{ar} process of order $p$ \cite{ref:MadsenTimeSeriesAnaysis}. This implies that: \begin{equation} \label{eq:ARmodel} W^f_{i+1} = \widehat{W}^f_{i+1} + \epsilon_{i+1}, \end{equation} \begin{equation} \widehat{W}^f_{i+1} = W^f_{i} \phi_1 + \cdots + W^f_{i-p+1} \phi_p, \end{equation} where $\{W^f_{i},\dots,W^f_{i-p+1}\}$ are the measured values of the integral of the frequency deviation in the last $p$ periods, $\{\phi_1,\dots,\phi_p\}$ are the \ac{ar} coefficients identified from the analysis of the frequency database, $\widehat{W}_{i+1}^f$ is the prediction of $W^f$ for the upcoming period, and $\epsilon_{i+1}$ is a zero-mean Gaussian random variable with standard deviation $\sigma^w_T$. The dependence of this standard deviation on $T$ is explicitly indicated by the subscript because, in the following, different values of $T$ will be used.
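The one-step \ac{ar} predictor above can be sketched as follows; the history ordering convention and the coefficient values in the example are assumptions for illustration only.

```python
def ar_forecast(w_hist, phi):
    """One-step AR(p) prediction of the frequency-deviation integral:
    W_hat_{i+1} = phi_1*W_i + ... + phi_p*W_{i-p+1}.
    w_hist lists past values from the most recent backwards."""
    p = len(phi)
    if len(w_hist) < p:
        raise ValueError("need at least p past values")
    return sum(c * w for c, w in zip(phi, w_hist[:p]))
```

For instance, with assumed coefficients $\phi=(0.5,0.25)$ and recent measurements $W^f_i=1$, $W^f_{i-1}=2$ \si{\hertz\hour}, the prediction is $\widehat{W}^f_{i+1}=\SI{1.0}{\hertz\hour}$.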
It is worth remarking that $\sigma^w_T$ increases with $T$. Based on this model, the following \textit{energy offset} is defined: \begin{equation}\label{eq:EnergyOffset} \widehat{E}^o_i = \left( S_i -\frac{1}{2} +\frac{\alpha \widehat{W}^f_i}{\tilde{E}_{\rm n}} \right)\tilde{E}_{\rm n}, \end{equation} where $S_i$ is the battery \ac{soe} at the beginning of the $i$-th time window. It is proved in \cite{PSCC2018} that, if $\widehat{E}^o_i$ is exchanged by the \ac{bess} during the $i$-th time window, then the \ac{bess} can provide \ac{pfr} with a maximal failure rate $\lambda^f_{\rm max}$, with respect to the \ac{soe} security interval $[0,1]$, if the droop coefficient $\alpha$ is equal to or lower than the maximal value \begin{equation}\label{eq:alphamax} \alpha_{\max} = \frac{\tilde{E}_{\rm n}}{2\cdot \mu \cdot \sigma^w_T}, \end{equation} where $\mu$ is the $(1-\lambda^f_{\max}/2)$-quantile of a zero-mean standard Gaussian random variable, which can be computed as $\mu~=~\sqrt{2}{\rm erf}^{-1}(1-\lambda^f_{\rm max})$. \subsection{Main requirements for PFR service}\label{ssec:PFR_req} The integration of \ac{res} into the grid regulating scheme requires a revision of the grid codes. In continental Europe, all the \acp{tso} involved in the joint market \ac{igcc} have worked together to define pre-qualification and delivery rules for the \acp{bess} that provide \ac{pcr} \cite{internationalPCR}. In the UK, National Grid (NGET) has developed the enhanced frequency response service and defined specific rules for the integration of the new resources into the markets \cite{FRserviceUSandEU}. In the United States of America, PJM has created a market in which the users are remunerated for the capacity, for the availability, and for the performance in providing the service \cite{EFR_Uk,PJMm18_2019}. The analysis of the mentioned documents shows that the \ac{pfr} markets differ from each other and are still evolving, mainly because they are new.
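Coming back to the maximal-droop relation \eqref{eq:alphamax}: since $\sqrt{2}\,{\rm erf}^{-1}(1-\lambda^f_{\rm max})$ is the $(1-\lambda^f_{\rm max}/2)$-quantile of the standard Gaussian, $\alpha_{\max}$ can be computed with a standard-library quantile function; a sketch, with purely illustrative inputs in the example below:

```python
from statistics import NormalDist

def alpha_max(e_n, sigma_w, lam_f_max):
    """Maximal droop sustaining PFR with failure rate <= lam_f_max:
    alpha_max = E_n / (2 * mu * sigma_w), with
    mu = sqrt(2)*erfinv(1 - lam_f_max), i.e. the standard-normal
    quantile at probability 1 - lam_f_max/2."""
    mu = NormalDist().inv_cdf(1.0 - lam_f_max / 2.0)
    return e_n / (2.0 * mu * sigma_w)
```

For an assumed $\tilde{E}_{\rm n}=\SI{1000}{\kilo\Wh}$, $\sigma^w_{24}=\SI{1}{\hertz\hour}$ and $\lambda^f_{\rm max}=5\%$, one gets $\mu\approx 1.96$ and $\alpha_{\max}\approx\SI{255}{\kilo\watt\per\hertz}$; a larger $\sigma^w_T$ (i.e. a longer window $T$) shrinks the admissible droop.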
Given these market rules, the control strategy designed in this paper has the objective of satisfying the most important requirements they share: \begin{enumerate} \item[a)] droop-based response to the frequency variations; \item[b)] the \ac{soe} must be kept within predefined limits; \item[c)] as requested by the market operators \cite{internationalPCR,FRserviceUSandEU,aisbl2012entso}, a minimum \ac{pcr} offer has to be ensured; \item[d)] according to some grid operators, the failure rate has to be kept lower than a maximal value (\textit{e.g.} 5$\%$ in the UK \cite{DatiFreq,EFR_Uk}) or equal to zero \cite{swissgrid2017,zeh2016fundamentals,internationalPCR,PJMm18_2019} in order not to pay penalties. \end{enumerate} Finally, note that the algorithm proposed in the present paper does not respect the capacity trading timeline, \textit{i.e.} the droop coefficient is computed daily and not weekly as in \cite{internationalPCR}. However, it is the authors' opinion that future market deregulation will require operating on shorter time windows in order to integrate all the new resources. \section{Day-Ahead Planning (DAP)}\label{sec:DAP} The \ac{dap} problem consists of defining the daily power delivery profile $\{P^m_k\}$ of the \ac{is} and the droop coefficient $\alpha$, computed one day in advance. The objective is to maximize the economic gain, given a set of available data and subject to a set of technical constraints, as detailed in the following.
\subsection{Available data} Given the time horizon $N = 24 \cdot 3600/\tau$, the data assumed to be available at day $d-1$, when the planning of day $d$ is computed, are: \begin{itemize} \item[a)] a \ac{pv} forecast profile $\{\widehat{P}^{pv}_k\}_{k=0}^{N-1}$, with an associated confidence interval $\Delta^{pv}_k$, such that $ | P^{pv}_k - \widehat{P}^{pv}_k | \leq \Delta^{pv}_k; $ \item[b)] the prediction of the frequency integral for the day-ahead $\widehat{W}^f_{d}$ and the associated standard deviation $\sigma^w_{24}$, computed as described in Section~\ref{sec:FrequencyForecast} with $T=\SI{24}{\hour}$; \item[c)] the energy price profile $\{c^{e}_k\}_{k=0}^{N-1}$; \item[d)] the \ac{pfr} price $c^{f}$; \item[e)] the day initial \ac{soe}, $S_0$. \end{itemize} \subsection{\ac{soe} constraints} Based on the \ac{pv} forecast data, the \ac{pv} power profile is represented with the following Gaussian model: \begin{equation}\label{eq:Pvgauss} P_k^{pv} \sim \mathcal{N}(\widehat{P}_k^{pv},(\sigma^{pv}_k)^2), \ \ \sigma^{pv}_k=\Delta^{pv}_k/3 \end{equation} so that $\mathbf{P}(\vert P_k^{pv} - \widehat{P}_k^{pv} \vert \leq \Delta^{pv}_k)\approx 0.997.$ From \eqref{eq:Pb}, \eqref{eq:SEOdyn} and definition \eqref{eq:EiPFR} (with $T=\tau$) it follows that, for $k=0:N-1$, \begin{equation}\label{eq:SEOdyn2} S_{k+1} = S_k - \frac{\tau (P^m_k-P^{pv}_k) }{3600 \cdot E_{\rm n}} + \frac{\alpha W^f_k }{E_{\rm n}}. \end{equation} Figure \ref{fig:DAPscheme} shows the basic principle of the \ac{dap} optimization. Firstly, the equivalent \ac{bess} capacity ${E}^s_{\rm n}$ is defined as \begin{equation}\label{eq:Es} {E}^s_{\rm n} = E_{\rm n}(S^{\max}-S^{\min}).
\end{equation} Then, each day, the quantities $S_d^{\rm max}$ and $S_d^{\min}$ are determined by the optimization, dividing ${E}^s_{\rm n}$ into two portions ${E}_{\rm n}^{pv}$ and ${E}_{\rm n}^f$: \begin{equation} \label{eq:Epvn} {E}_{\rm n}^{pv} = E_{\rm n}(S_d^{\rm max}-S_d^{\rm min}), \end{equation} \begin{equation}\label{eq:Efn} E_{\rm n}^{f} = E^s_{\rm n}-E_{\rm n}^{pv}. \end{equation} It is obviously required that \begin{align} S^{\min} \leq S_d^{\min} \leq S_d^{\max}\leq S^{\max}. \label{eq:cSminmax} \end{align} \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{figure/figure2.eps} \caption{\ac{dap} optimization principle scheme.}\label{fig:DAPscheme} \end{figure} The idea is to use the portion $E^{pv}_{\rm n}$ to correct the \ac{pv} prediction errors, and the portion $E^f_{\rm n}$ to provide \ac{pfr}, as if they were two different batteries: the \textit{PV battery} and the \textit{PFR battery}, respectively. Two equivalent \ac{soe} trajectories $\{\widetilde{S}^{pv}_k\}$ and $\{\widetilde{S}^f_k\}$ are assumed to evolve in these two batteries.
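The capacity partition \eqref{eq:Epvn}--\eqref{eq:cSminmax} amounts to a few arithmetic operations; a sketch, with illustrative numbers in the example below:

```python
def split_capacity(e_n, s_min, s_max, s_min_d, s_max_d):
    """Split the usable capacity E^s_n = E_n*(S^max - S^min) into the PV
    portion E^pv_n (absorbing PV forecast errors) and the PFR portion E^f_n."""
    if not (s_min <= s_min_d <= s_max_d <= s_max):
        raise ValueError("need S^min <= S^min_d <= S^max_d <= S^max")
    e_s = e_n * (s_max - s_min)         # equivalent usable capacity
    e_pv = e_n * (s_max_d - s_min_d)    # PV battery
    return e_pv, e_s - e_pv             # (PV battery, PFR battery)
```

E.g., for an assumed $E_{\rm n}=\SI{500}{\kilo\Wh}$, security interval $[0.1,0.9]$ and daily thresholds $[0.3,0.7]$, both portions equal \SI{200}{\kilo\Wh}.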
These trajectories are defined in \textit{p.u.} with respect to the two capacities $E^{pv}_{\rm n}$ and $E^{f}_{\rm n}$ (right plots in Fig.~\ref{fig:DAPscheme}), by the following dynamical equations (with $k=0:N-1$): \begin{align}\label{eq:SOEpv} &\widetilde{S}_{k+1}^{pv} = \widetilde{S}_{k}^{pv} - \frac{\tau (P^m_k-P^{pv}_k) }{3600 \cdot E^{pv}_{\rm n}} \\ &\widetilde{S}_0^{pv} = \frac{E_{\rm n}(S_0-S^{\rm min}_d)}{E^{pv}_{\rm n}}, \label{eq:SOEpv0} \\ &\widetilde{S}_{k+1}^{f} = \widetilde{S}_{k}^{f} + \frac{\alpha W^f_k }{E_{\rm n}^f}, \label{eq:SOEfdyn} \\ &\widetilde{S}_0^{f} = \frac{E_{\rm n}(S^{\rm min}_d-S^{\rm min})}{E^f_{\rm n}}. \label{eq:SOEf0} \end{align} It can be proved by induction that, for $k=0:N$, \begin{equation}\label{eq:SOEdec} S_k =S_k^{pv}+(S_k^{f}-S^{\min}_d), \end{equation} where $S_k^f$ and $S_k^{pv}$ are defined as follows (see the left plots in Fig.~\ref{fig:DAPscheme} for an example): \begin{equation}\label{eq:SOEdec1} S_k^{pv} =\frac{E_{\rm n}^{pv}\widetilde{S}_k^{pv}}{E_{\rm n}}+S^{\min}_d, \quad S_k^f = \frac{E^f_{\rm n}\widetilde{S}_k^f}{E_{\rm n}} + S^{\rm min}. \end{equation} The component $S^{pv}$ is driven by the dispatch power $P^m$ and the \ac{pv} power $P^{pv}$, whereas the component ${S}^{f}$ is driven by the frequency variations. Since the (local) \ac{pv} production and the grid frequency can be assumed to be statistically independent, ${S}^{pv}$ and ${S}^{f}$ are independent as well. This implies the following result, which is proved in the Appendix.
\begin{proposition} \label{pr:1} If, for all $k=0:N$, \begin{align} &\mathbf{P}(0\leq \widetilde{S}^{pv}_k\leq 1) \geq 1-\beta,\label{eq:chanceSOEpv} \\ &\mathbf{P}(0\leq \widetilde{S}^{f}_k\leq 1) = 1-\lambda^f_{\max},\label{eq:chanceSOEf} \end{align} then \begin{equation}\label{eq:failure1} \mathbf{P}(S^{\min}\leq S_k\leq S^{\max}) \geq 1-{\lambda}_{\max} \end{equation} with \begin{equation}\label{eq:failure2} {\lambda}_{\rm max} = \lambda^f_{\rm max} + \beta - \lambda^f_{\rm max}\beta. \end{equation} \end{proposition} This proposition means that if \eqref{eq:chanceSOEpv} and \eqref{eq:chanceSOEf} hold true, then ${\lambda}_{\rm max}$ is the resulting maximal failure rate of the \ac{is}. Relation \eqref{eq:chanceSOEpv} is treated as a chance constraint. Using the Gaussian representation \eqref{eq:Pvgauss}, assuming that the \ac{pv} prediction errors and the battery modelling errors are independent, and that the sampling time $\tau$ is large enough to consider the \ac{pv} prediction errors at different time steps mutually independent, from \eqref{eq:SOEpv}--\eqref{eq:SOEpv0} it follows that, for $k=0:N$, \begin{equation}\label{eq:SOEpvGauss1} \widetilde{S}_{k}^{pv} \sim \mathcal{N}\left( m^{s}_k,(\sigma_k^s)^2 \right), \end{equation} where \begin{equation}\label{eq:SOEpvGauss2} m^s_k = \widetilde{S}^{pv}_0 - \frac{\tau }{3600 \cdot E_{\rm n}^{pv}} \sum_{j=0}^{k-1} (P^m_j-\widehat{P}^{pv}_j), \end{equation} \begin{equation}\label{eq:SOEpvGauss3} (\sigma_k^s)^2 = \left(\frac{\tau }{3600 \cdot E_{\rm n}^{pv}} \right)^2 \cdot \sum_{j=0}^{k-1} (\sigma_j^{pv})^2. \end{equation} To obtain \eqref{eq:chanceSOEpv}, the following separate chance constraints are defined, for all $k=0:N$: \begin{align} &\mathbf{P}\left(\widetilde{S}^{pv}_k \leq 1\right) \geq 1-\frac{\beta}{2}, \ \ \mathbf{P}\left(\widetilde{S}^{pv}_k \geq 0\right) \geq 1 -\frac{\beta}{2} \label{eq:cc2} \end{align} which, using the Gaussian model \eqref{eq:SOEpvGauss1}--\eqref{eq:SOEpvGauss3}, can
be expressed through the equivalent deterministic constraints (see \cite{Cinquemani2011} or \cite{conte2017stochastic} for details): \begin{align} m^s_k + \theta_s \sigma^{s}_k \leq 1, \label{eq:msconstr1}\\ -m^s_k +\theta_s \sigma^{s}_k \leq 0, \label{eq:msconstr2} \end{align} where $\theta_s = \sqrt{2}{\rm erf}^{-1}(1-\beta)$. To obtain \eqref{eq:chanceSOEf}, the method recalled in Section~\ref{sec:FrequencyForecast} is applied to the \ac{pfr} battery considering a period $T=\SI{24}{\hour}$. Recall that $S^{\min}_d$ and $S^{\max}_d$ are defined by the \ac{dap} optimization. Considering \eqref{eq:SOEf0}, this implies that the initial condition $\widetilde{S}^{f}_0$, at the beginning of the day, is defined by the optimization. Therefore, by \eqref{eq:EnergyOffset}, if \begin{equation}\label{eq:csoef} \widetilde{S}^f_0 = \frac{1}{2}-\frac{\alpha\widehat{W}^f_d}{E^f_{\rm n}}, \end{equation} then the required energy offset is $\widehat{E}^o_d=0$, and therefore \eqref{eq:chanceSOEf} is satisfied with $\alpha$ given by \begin{equation}\label{eq:calphad} \alpha = \frac{E^f_{\rm n}}{2\mu\sigma^w_{24}}. \end{equation} Using the definition of $E^f_{\rm n}$ in \eqref{eq:Efn} and the relation \eqref{eq:SOEf0}, it can be shown that \eqref{eq:csoef} and \eqref{eq:calphad} are equivalent to \begin{equation}\label{eq:csoef1} 2\alpha\widehat{W}_d^f = E_n[(S^{\rm max}+S^{\rm min}) -(S^{\rm max}_d+S^{\rm min}_d)] \end{equation} \begin{equation}\label{eq:calphad1} 2\alpha \mu \sigma_{24}^w = E_{\rm n} [(S^{\max}-S^{\min})-(S^{\max}_d-S^{\min}_d)]. \end{equation} \subsection{Power constraints} As defined in Section \ref{sec:ProblemDefinitions}, the \ac{bess} power is limited by the nominal value $P^b_{\rm n}$. From \eqref{eq:Pb}, it follows that the following inequality must always be satisfied: \begin{equation}\label{eq:cpbmax} \vert P^b \vert = \vert P^m-P^{pv}-\alpha \Delta f \vert \leq P^b_{\rm n}.
\end{equation} Since it is assumed that, for $k=0:N-1$, \begin{equation}\label{eq:positivePm} 0\leq P^m_k \leq P^t_{\rm n} \end{equation} and $P^{pv}_k\geq 0$ by definition, for the day-ahead $d$ there are two worst cases, which are covered by the following chance constraints (with $k=0:N-1$): \begin{align} &\mathbf{P}(P^m_k-P^{pv}_k+\alpha\Delta f^{\rm max} \leq P^b_{\rm n})\geq 1-\gamma, \label{eq:p_constraint1}\\ &\mathbf{P}(P^m_k-P^{pv}_k-\alpha\Delta f^{\rm max} \geq -P^b_{\rm n})\geq 1-\gamma, \label{eq:p_constraint2} \end{align} where $\Delta f^{\rm max}$ is the maximal frequency variation \cite{ucte2009appendix}. Based on the Gaussian model of the \ac{pv} forecasts \eqref{eq:Pvgauss}, \eqref{eq:p_constraint1} and \eqref{eq:p_constraint2} can be expressed through the equivalent deterministic constraints (see \cite{Cinquemani2011} or \cite{conte2017stochastic} for details): \begin{align} &P^m_k-\hat{P}^{pv}_k + \alpha \Delta f_{\rm max}+\theta_b \sigma^{pv}_k\leq P^b_{\rm n}, \label{eq:p_constraint1a} \\ &P^m_k-\hat{P}^{pv}_k - \alpha \Delta f_{\rm max}-\theta_b \sigma^{pv}_k\geq -P^b_{\rm n}, \label{eq:p_constraint2a} \end{align} with $k=0:N-1$ and $\theta_b = \sqrt{2}{\rm erf}^{-1}(1-2\gamma)$. \subsection{Smoothness constraints} Two additional constraints are defined to limit the variations of $P^m$ and $m_{k}^s$ between consecutive set-point time steps, for $k=0:N-1$: \begin{align} & |P_{k+1}^m -P_{k}^m| \leq \Delta P^m_{\max}, \label{eq:smooth_conmstr1} \\ & |m_{k+1}^s -m_{k}^s| \leq \Delta m^s_{\max}.
\label{eq:smooth_conmstr2} \end{align} \subsection{The \ac{dap} algorithm} Given a desired maximal failure rate ${\lambda}_{\max}$, the \ac{dap} algorithm consists of solving the following linear optimization problem: \begin{equation} \begin{aligned} J^* = & \max_{\{P^m_k\}, \ \alpha, \ S_d^{\rm min}, \ S_d^{\rm max}} \sum_{k=0}^{N-1} c^{e}_k \tau P^m_k + c^{f} \alpha \\ & \mbox{subject to \eqref{eq:alphamin}, \eqref{eq:Es}--\eqref{eq:cSminmax}, \eqref{eq:SOEpv0}, \eqref{eq:SOEpvGauss2}--\eqref{eq:SOEpvGauss3}, \eqref{eq:msconstr1}--\eqref{eq:msconstr2},} \\ & \qquad \qquad \ \mbox{\eqref{eq:csoef1}--\eqref{eq:calphad1}, \eqref{eq:positivePm}, \eqref{eq:p_constraint1a}--\eqref{eq:p_constraint2a}, \eqref{eq:smooth_conmstr1}--\eqref{eq:smooth_conmstr2} } \nonumber \end{aligned} \end{equation} The results of the optimization are the optimal \ac{is} base power profile $\{P^{md}_k\}=\{P^{m*}_k\}$ and the droop coefficient $\alpha^d=\alpha^*$, both defined the day before delivery. The value of the cost function $J^{*}$ equals the day-ahead economic gain. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figure/figure3.eps} \caption{\ac{hap} time scheduling.}\label{fig:HAPtime} \end{figure} \section{Hours-Ahead Planning (HAP)}\label{sec:HAP} The hours-ahead planning is a lower-level controller which is re-computed every hour within the delivery day. The \ac{hap} routine receives from the \ac{dap} the power delivery plan $\{P^{md}_k\}$ and the droop coefficient $\alpha^d$. The objective of \ac{hap} is to correct the plan $\{P^{md}_k\}$ so as to guarantee the provision of \ac{pfr}, keeping the droop coefficient $\alpha^d$ and reducing the expected \ac{dap} failure rate $\lambda_{\max}$ to a lower value $\lambda'_{\max}$, while still maximizing the economic income. Figure~\ref{fig:HAPtime} shows the \ac{hap} time scheduling.
Let $j=0,1,\ldots,23$ indicate the hours of the day, and let $n=3600/\tau$ be the number of intra-hour power set-points, defined according to the dispatch plan sampling time. Moreover, let $N_j = N-j\cdot n$ be the number of power set-points remaining from the $j$\textit{-th} hour to the end of the day. At the beginning of hour $j$, the \ac{is} power profile $\{P^{m}_k\}$ with $k=j n:N-1$ is re-programmed. Then, only the first $n$ steps, corresponding to the first hour of the dispatch plan, are applied. At hour $j+1$, the \ac{hap} optimization is repeated. This time scheduling can be called \textit{reducing horizon} and, similarly to the \textit{receding horizon} principle adopted by \ac{mpc}, it makes the control algorithm more robust with respect to modelling errors. In particular, at each hour, updated, and thus more accurate, \ac{pv} generation and \ac{pfr} energy requirement forecasts may be available, as well as the current value of the battery \ac{soe}. These updated data are useful to suitably correct the \ac{dap} program. Based on this idea, as shown in Fig. \ref{fig:HAPtime}, the time from hour $j$ to the end of the day is divided into two phases: the \ac{fh} ($k=jn:(j+1)n$), and the remaining time from hour $j+1$ to the end of the day ($k=(j+1)n:N$), hereafter named \ac{rod}.
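The reducing-horizon bookkeeping above can be made concrete with a short sketch (the \SI{15}{\minute} sampling time in the example is an assumption):

```python
def hap_schedule(j, tau=900):
    """Index ranges of the reducing-horizon HAP at hour j:
    FH  = first hour (re-optimized and actually applied),
    RoD = rest of the day (re-optimized, then discarded at hour j+1)."""
    n = 3600 // tau        # intra-hour set-points
    N = 24 * 3600 // tau   # set-points in the whole day
    fh = range(j * n, (j + 1) * n)
    rod = range((j + 1) * n, N)
    return fh, rod
```

With $\tau=\SI{15}{\minute}$, $n=4$ and $N=96$: at hour 0 the FH covers set-points 0--3 and the RoD set-points 4--95; at hour 23 the RoD is empty and only the FH remains.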
At hour $j$, the available data are: \begin{itemize} \item[a)] the \ac{dap} power profile $\{ P^{md}_k\}$, $k=j\cdot n:N-1$, traded at the energy market; \item[b)] the droop coefficient $\alpha^d$, defined for a given failure rate $\lambda^d$, to be guaranteed during the whole day; \item[c)] the updated \ac{pv} forecasts $\{\hat{P}^{pv}_k\}$, with the associated standard deviations $\sigma^{pv}_k$, $k=j\cdot n:N-1$ (using the same Gaussian model \eqref{eq:Pvgauss} adopted for \ac{dap}); \item[d)] the prediction of the frequency integral for the first hour $\widehat{W}^f_{h}$ and the associated standard deviation $\sigma^w_{1}$, computed as described in Section~\ref{sec:FrequencyForecast} with $T=\SI{1}{\hour}$; \item[e)] the prediction of the frequency integral for the rest of the day $\widehat{W}^f_{r}$ and the associated standard deviation $\sigma^w_{23-j}$, computed as described in Section~\ref{sec:FrequencyForecast} with $T=(23-j)$\,\si{\hour}; \item[f)] the penalty cost profile $\{ c^p_k \}$, $k=j\cdot n:N-1$, to be paid for deviations of the energy actually exported by the \ac{is} from the energy traded at the day-ahead market; \item[g)] the intra-day energy price profile $\{c^{i}_k\}$, $k=j\cdot n:N-1$; \item[h)] the current battery \ac{soe}, $S_{jn}$. \end{itemize} For both the \ac{fh} and \ac{rod} time windows, an approach similar to \ac{dap} is adopted. In particular, the basic idea of partitioning the \ac{bess} capacity through the thresholds $S^{\rm max}_d$ and $S^{\rm min}_d$ is re-applied, defining different thresholds: $S^{\rm max}_h$, $S^{\rm min}_h$ for the \ac{fh}, and $S^{\rm max}_r$, $S^{\rm min}_r$ for the \ac{rod}. The partition into two time windows is adopted in order to give more degrees of freedom to the optimization for the FH. Thanks to the use of short-term, and thus more accurate, predictions, the optimization over the FH will be finer.
It is worth remarking that, as mentioned before, at each hour the optimization results are applied only for the FH. The \ac{hap} optimization problem, solved at each hour $j$, is formulated as follows. \begin{align} &\max_{\{P^m_k\}, \mu_{h}, \mu_{r} } L_j \nonumber\\ &L_j = \sum_{k=jn}^{N-1} (c^{i}_k-c^{p}_k) \tau (P^m_k - P^{md}_k)\delta^+_k - c^{p}_k \tau (P^m_k-P^{md}_k) \delta^-_k \nonumber \\ & \quad + w_h \mu_h + w_r \mu_r \label{eq:HAPcost} \end{align} \noindent{subject to:} \begin{align} & S^{\min} \leq S_{h}^{\min} \leq S_{h}^{\max}\leq S^{\max}, \label{eq:constr_1h} \\ & m^s_{k} + \theta_{h} \sigma^{s}_{k} \leq S^{\max}_h \qquad \text{for } k = jn:(j+1)n, \label{eq:constr_4h} \\ - & m^s_{k} +\theta_{h} \sigma^{s}_{k} \leq S^{\min}_h \qquad \text{for } k = jn:(j+1)n, \label{eq:constr_5h} \\ & 2\alpha^d \widehat{W}_h^f = E_n[(S^{\rm max}+S^{\rm min})-(S^{\rm max}_h+S^{\rm min}_h)], \label{eq:constr_6h} \\ & 2\alpha^d \mu_h \sigma_{1}^w = E_{\rm n} [(S^{\max}-S^{\min})-(S^{\max}_h-S^{\min}_h)], \label{eq:constr_7h} \\ & \mu \leq \mu_h \leq \mu_{\max}, \label{eq:constr_8h} \\ & S^{\min} \leq S_{r}^{\min} \leq S_{r}^{\max}\leq S^{\max}, \\ & m^s_{k} + \theta_{r} \sigma^{s}_{k} \leq S^{\max}_r \qquad \text{for } k = (j+1)n:N, \label{eq:constr_4r} \\ -& m^s_{k} +\theta_{r} \sigma^{s}_{k} \leq S^{\min}_r \qquad \text{for } k = (j+1)n:N, \label{eq:constr_5r} \\ & 2\alpha^d \widehat{W}_r^f = E_n[(S^{\rm max}+S^{\rm min}) -(S^{\rm max}_r+S^{\rm min}_r)], \label{eq:constr_6r} \\ & 2\alpha^d \mu_r \sigma_{23-j}^w = E_{\rm n} [(S^{\max}-S^{\min})-(S^{\max}_r-S^{\min}_r)], \label{eq:constr_7r} \\ & \mu\leq \mu_r \leq \mu_{\max}, \label{eq:constr_8r} \\ & m^s_{k} = S_{jn}- \frac{\tau }{3600 \cdot E_{\rm n}} \sum_{i=jn}^{k-1} (P^m_i-\widehat{P}^{pv}_i), \label{eq:SOEpvGauss1_HAP}\\ & (\sigma_{k}^s)^2 = \left(\frac{\tau }{3600 \cdot E_{\rm n}} \right)^2 \cdot \sum_{i=jn}^{k-1} (\sigma_i^{pv})^2, \label{eq:SOEpvGauss2_HAP} \\ & P^m_k-\hat{P}^{pv}_{k} + \alpha^d \Delta f_{\rm max}+\theta_b \sigma^{pv}_{k}\leq P^b_{\rm n}, \label{eq:powerconstr_hap1}\\ & P^m_k-\hat{P}^{pv}_k - \alpha^d \Delta f_{\rm max}-\theta_b \sigma^{pv}_{k}\geq -P^b_{\rm n}, \label{eq:powerconstr_hap2} \\ & 0\leq P^m_k \leq P^t_{\rm n}, \label{eq:positivepmHAP} \\ & |P_{k+1}^m -P_{k}^m| \leq \Delta P^m_{\max}, \label{eq:smoothnessHAP1} \\ & |m_{k+1}^s -m_{k}^s| \leq \Delta m^s_{\max}. \label{eq:smoothnessHAP2} \end{align} The optimization problem is mixed-integer with linear constraints. Indeed, there are two binary variables: $\delta^+_k$, defined (through additional linear constraints, not reported for clarity of presentation) to be equal to 1 when $P^m_k\geq P^{md}_k$ and 0 otherwise, and $\delta^-_k = 1-\delta^+_k$. For each of the two time windows, starting from the definitions of the new thresholds $S^{\rm max}_h$ and $S^{\rm min}_h$ for the \ac{fh}, and $S^{\rm max}_r$ and $S^{\rm min}_r$ for the \ac{rod}, the \ac{soe} constraints defined for \ac{hap} are reformulated as in \eqref{eq:constr_1h}--\eqref{eq:SOEpvGauss2_HAP}. Let us focus on constraints \eqref{eq:constr_6h}--\eqref{eq:constr_7h} and \eqref{eq:constr_6r}--\eqref{eq:constr_7r}. They are the reformulation of the \ac{dap} constraints \eqref{eq:csoef1}--\eqref{eq:calphad1} for the FH and the RoD, respectively. In \ac{dap}, \eqref{eq:csoef1}--\eqref{eq:calphad1} have to be respected in order to ensure the maximal failure rate $\lambda^f_{\max}$ due to \ac{pfr}, which is related to the coefficient $\mu$ by the relation $\mu~=~\sqrt{2}{\rm erf}^{-1}(1-\lambda^f_{\rm max})$ (see Section \ref{sec:FrequencyForecast}). It can be easily shown that $\mu$ increases when $\lambda^f_{\rm max}$ decreases. Therefore, if \eqref{eq:csoef1}--\eqref{eq:calphad1} are satisfied with a $\bar{\mu}\geq \mu$, the maximal failure rate $\lambda_{\max}$ is reduced. Indeed, by \eqref{eq:failure2}, $\lambda_{\max}$ is reduced if $\lambda^f_{\max}$ decreases.
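The link between $\mu$ and the failure rates used above can be checked numerically; a short sketch based only on \eqref{eq:failure2} and the definition of $\mu$:

```python
from statistics import NormalDist

def mu_of(lam_f_max):
    """mu = sqrt(2)*erfinv(1 - lam_f_max), i.e. the standard-normal quantile
    at 1 - lam_f_max/2: increases as the PFR failure rate shrinks."""
    return NormalDist().inv_cdf(1.0 - lam_f_max / 2.0)

def combined_failure_rate(lam_f_max, beta):
    """Overall IS failure-rate bound of Proposition 1:
    lambda_max = lam_f + beta - lam_f*beta = 1 - (1-lam_f)*(1-beta)."""
    return lam_f_max + beta - lam_f_max * beta
```

For example, $\lambda^f_{\rm max}=5\%$ and $\beta=1\%$ give $\lambda_{\rm max}=5.95\%$; lowering $\lambda^f_{\rm max}$ to $0.3\%$ raises $\mu$ from about $1.96$ to about $2.97$, tightening the \ac{soe} constraints and lowering the bound accordingly.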
Constraints \eqref{eq:constr_6h}--\eqref{eq:constr_7h} for the FH and \eqref{eq:constr_6r}--\eqref{eq:constr_7r} for the RoD are therefore re-formulated using the relevant predictions $\widehat{W}^f_{h}$ and $\widehat{W}^f_{r}$, imposing that the droop coefficient $\alpha$ is equal to $\alpha^d$, computed by the \ac{dap}. Two optimization variables, ${\mu}_{h}$ and ${\mu}_{r}$, are introduced for the \ac{fh} and \ac{rod} time windows. The cost function \eqref{eq:HAPcost} is designed so as to increase their values, thus reducing the failure rate. Through constraints \eqref{eq:constr_8h} and \eqref{eq:constr_8r}, ${\mu}_{h}$ and ${\mu}_{r}$ are bounded below by the minimal value $\mu$, which guarantees the \ac{dap} failure rate $\lambda^f_{\rm max}$, and above by the maximal value $\mu_{\max}=\sqrt{2}{\rm erf}^{-1}(1-\bar{\lambda}^{f}_{\rm max})$, corresponding to the maximal reduced failure rate $\bar{\lambda}^{f}_{\max}<\lambda^f_{\max}$. The power and smoothness constraints \eqref{eq:powerconstr_hap1}--\eqref{eq:smoothnessHAP2} are re-written, as in \ac{dap}, for the entire interval $k=jn:N-1$, with $\alpha=\alpha^d$. The cost function \eqref{eq:HAPcost} accounts both for the economic gain, determined by the balance between penalties and intra-day energy prices, and for the reduction of the \ac{dap} failure rate, which, as mentioned, corresponds to the maximization of the coefficients $\mu_h$ and $\mu_r$. The optimization weights $w_h$ and $w_r$ have a different unit from the costs $c^{p}$ and $c^{e}$; therefore, they have to be suitably normalized. It is worth remarking that the minimization of the failure rate may conflict with the maximization of the economic income. Therefore, the sizing of the weights $w_h$ and $w_r$ sets the priority between the quality of the \ac{pfr} service and the economic gain.
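The economic part of \eqref{eq:HAPcost} can be sketched with the binary logic written out explicitly; the sign convention for downward deviations (penalty only, no intra-day revenue) is our reading of the formulation, and the conversion of $\tau$ to hours and the prices in the example are illustrative assumptions.

```python
def hap_income_term(p_m, p_md, c_i, c_p, tau=900.0):
    """Intra-day income/penalty balance over deviations from the DAP plan.
    Upward deviations (delta+ = 1) earn the intra-day price minus the penalty;
    downward deviations (delta- = 1) only pay the penalty on the shortfall."""
    tau_h = tau / 3600.0   # step length in hours: kW * h -> kWh
    total = 0.0
    for pm, pmd, ci, cp in zip(p_m, p_md, c_i, c_p):
        dev = pm - pmd
        if dev >= 0.0:                       # delta+_k = 1
            total += (ci - cp) * tau_h * dev
        else:                                # delta-_k = 1
            total -= cp * tau_h * (-dev)
    return total
```

With a flat intra-day price of 0.10\,\euro{}/kWh and a penalty of 0.05\,\euro{}/kWh, a $+\SI{10}{\kilo\watt}$ deviation over one hour earns 0.50\,\euro{} while a $-\SI{10}{\kilo\watt}$ deviation costs 0.50\,\euro{}, so symmetric deviations cancel out.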
\section{Simulation results} \label{sec:simulations} A set of simulations has been performed considering real market data. The Italian day-ahead (MGP) and intra-day (MI2) market results of February 2019 \cite{MercatoElettricoITA} have been selected as inputs of the \ac{dap} and \ac{hap} problems, respectively. The penalty for variations of the dispatched power is fixed at \SI{0.05}{\EUR\per\kilo\Wh}. Moreover, the frequency regulating capacity price has been selected from the results of the international \ac{pcr} markets between August 2018 and March 2019 \cite{internationalPCR}. The \ac{dap} and \ac{hap} algorithms have been implemented in MATLAB/Simulink; the optimization problems have been written in the General Algebraic Modelling System (GAMS) language and solved with CPLEX. The battery is modelled with a standard equivalent circuit in which the internal resistance is a function of the \ac{soe} and of the electromotive force; thus, a variable, non-unitary battery efficiency has been implemented. Inputs of the simulator are real \ac{pv} measurements and \ac{pv} forecasts recorded in the low-voltage (LV) microgrid realized by the University of Genova \cite{adinolfi2015advanced}. Moreover, frequency measurements from the UK grid \cite{DatiFreq} have been adopted for the construction of the \ac{ar} models and for the simulations. Simulations have been executed over a 21-day period, considering the implementation of \ac{dap} only, and of both \ac{dap} and \ac{hap}. Moreover, five different cases are proposed, characterized by different \ac{pv}--\ac{bess} sizes, as reported in Table~\ref{tab:simresults}. Considering the device ratings, the \ac{isms} is expected to balance the two services differently: a larger \ac{bess} will provide a higher regulating capacity but can rely on smaller offsets for charge management; on the other hand, a larger \ac{pv} will drive the \ac{isms} to privilege the dispatch service.
Table~\ref{tab:simparam} shows the parameters adopted for the \ac{is}. In particular: the minimum droop coefficient $\alpha_{\min}$ is defined according to \eqref{eq:statism_max} with respect to the \ac{pv} nominal power, with an equivalent maximal statism $b_p^{\max}$ fixed to \SI{8}{\percent} \cite{aisbl2012entso}; the maximum failure rate $\lambda_{\rm max}$ is fixed at \SI{5}{\percent}, according, for example, to the requirements of the UK market \cite{DatiFreq,EFR_Uk}; the dispatch sampling time $\tau$ is set to \SI{15}{\minute} according to the Italian energy market \cite{MercatoElettricoITA}. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figure/figure4.eps} \caption{Simulation results for the \ac{dap} only configuration, Case A: \SI{1500}{\kilo\watt} \ac{pv}, \SI{500}{\kilo\Wh} \ac{bess}. Top: planning power profiles; middle: planned and realized \ac{soe} profiles; bottom: realized power profiles.}\label{fig:DAP} \end{figure} \begin{table}[t] \centering \caption{Simulation results.} \label{tab:simresults} \renewcommand{\arraystretch}{1.4} \begin{tabular}{c c c c c c } \hline \hline Case & $\lambda$ [\si{\percent}] & Total \euro{} & \ac{pcr} \euro{} & Dispatch \euro{} & Penalty \euro{} \\ \hline \multirow{2}{*}{A} & 0.424 & 27798 & 15756 & 12042 & 0 \\ & 0 & 24825 & 14483 & 11363 & -1020 \\ \hline \multirow{2}{*}{B} & 0.530 & 35949 & 23940 & 12010 & 0 \\ & 0 & 32150 & 22575 & 11059 & -1484 \\ \hline \multirow{2}{*}{C} & 0.403 & 38765 & 14420 & 24345 & 0 \\ & 0 & 36997 & 14035 & 23797 & -835 \\ \hline \multirow{2}{*}{D} & 0.234 & 41536 & 4821 & 36714 & 0 \\ & 0 & 40897 & 4957 & 36426 & -486 \\ \hline \multirow{2}{*}{E} & 0.941 & 41104 & 4274 & 36830 & 0 \\ & 0 & 40451 & 4287 & 36580 & -416 \\\hline \multicolumn{6}{c}{ \begin{tabular}{p{0.4\textwidth}} Resources sizes: A. \ac{pv} \SI{500}{\kilo\watt}, \ac{bess} \SI{1500}{\kilo\watt}; B. \ac{pv} \SI{500}{\kilo\watt}, \ac{bess} \SI{1000}{\kilo\watt}; C.
\ac{pv} \SI{1000}{\kilo\watt}, \ac{bess} \SI{1000}{\kilo\watt}; D. \ac{pv} \SI{1500}{\kilo\watt}, \ac{bess} \SI{500}{\kilo\watt}; E. \ac{pv} \SI{1500}{\kilo\watt}, \ac{bess} \SI{320}{\kilo\watt}. \end{tabular} } \\ \hline \hline \end{tabular} \end{table} \begin{figure} \centering \includegraphics[width=\columnwidth]{figure/figure5.eps} \caption{Simulation results for the \ac{dap}-\ac{hap} configuration, Case C: \SI{1000}{\kilo\watt} \ac{pv}, \SI{1000}{\kilo\Wh} \ac{bess}. Top: planning power profiles; middle: planned and realized \ac{soe} profiles; bottom: realized power profiles.}\label{fig:DAPeHAP} \end{figure} \begin{table} \centering \caption{Simulation parameters.} \label{tab:simparam} \renewcommand{\arraystretch}{1.4} \begin{tabular}{l l l} \hline \hline Variable & Description & Value \\\hline $\tau$ & Dispatch sampling time & \SI{15}{\minute}\\ $\Delta P^m_{\max}$ & Maximal power deviation & \SI{40}{\percent}$P^t_{\rm n}$\\ $\Delta m^s_{\max}$ & Maximal \ac{soe} deviation & \SI{10}{\percent}\\ $\gamma$ & Battery power chance-constraints coefficient & \SI{1}{\percent}\\ $\beta$ & Battery \ac{soe} chance-constraints coefficient & \SI{1}{\percent}\\ $\alpha_{\min}$ & Minimal droop coefficient as \eqref{eq:statism_max} with $b_p^{\rm max}$ \SI{8}{\percent} & - \\ $\alpha_{\max}$ & Maximal droop coefficient & $\infty$ \\ $S^{\min}$ & Minimal battery \ac{soe} & \SI{0}{\percent}\\ $S^{\max}$ & Maximal battery \ac{soe} & \SI{100}{\percent}\\ $\Delta f^{\max}$ & Maximal frequency deviation & \SI{0.2}{\hertz}\\ $\mu$ & Equivalent to \ac{dap} failure rate $\lambda_{\rm max}=$\SI{5}{\percent} & 1.96 \\ $\bar{\mu}_{\max}$ & Equivalent to \ac{hap} failure rate $\bar{\lambda}_{\rm max}=$\SI{0.3}{\percent} & 3 \\ \hline \hline \end{tabular} \end{table} Figure~\ref{fig:DAP} shows a section of the simulation of the stand-alone \ac{dap} controller.
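The chance-constraint coefficients $\mu$ and $\bar{\mu}_{\max}$ in Table~\ref{tab:simparam} are consistent with two-sided quantiles of a standard Gaussian: a coefficient of 1.96 corresponds to a \SI{5}{\percent} exceedance probability and a coefficient of 3 to roughly \SI{0.3}{\percent}. A quick numerical check:

```python
from math import erf, sqrt

def two_sided_exceedance(mu):
    """P(|X| > mu) for a standard Gaussian X, via the error function."""
    phi = 0.5 * (1.0 + erf(mu / sqrt(2.0)))  # standard normal CDF at mu
    return 2.0 * (1.0 - phi)

print(two_sided_exceedance(1.96))  # ~0.050, i.e. the 5% DAP failure rate
print(two_sided_exceedance(3.0))   # ~0.0027, close to the 0.3% HAP target
```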
The top plot reports the dispatch plan $\{P^{md}_k\}$, the day-ahead \ac{pv} forecast $\{\hat{P}^{pv}_k\}$ and the battery offset program $\{P^{bd}_k\}=\{P^{md}_k-\hat{P}^{pv}_k\}$. The middle plot depicts the programmed \ac{soe} trajectory and the realized one, while the bottom plot shows the resulting profiles of the total power at the \ac{gcp} $P^t$, of the base dispatch power $P^m$ and of the \ac{pv} generation $P^{pv}$. The detailed numerical results of all the simulations in the stand-alone \ac{dap} case are reported in Table~\ref{tab:simresults}. The reported data show that the \ac{dap} is able to determine a reliable power profile, which allows the \ac{is} to perform both services with a failure rate lower than the prescribed maximal value $\lambda_{\rm max}=5\%$. Figure~\ref{fig:DAPeHAP} shows an example of the results obtained with the \ac{dap}-\ac{hap} configuration. In particular, in the top plot the modification operated by the \ac{hap} with respect to the \ac{dap} can be appreciated. For example, during the night operations (from hour 20 to hour 31) the \ac{hap} commands some short power deliveries in order to discharge the battery and avoid reaching the full-charge condition. Figure~\ref{fig:DAPHAPconfr} also highlights the advantages of using the \ac{hap} procedure. Indeed, with the stand-alone \ac{dap}, during the first 50 hours, the battery \ac{soe} reaches the upper limit (failure), whereas this does not happen when the \ac{hap} is used. It can be observed that in all the considered cases the \ac{dap}-\ac{hap} strategy allows obtaining a null failure rate, as shown in Table~\ref{tab:simresults}. It is worth remarking that one of the objectives of the \ac{hap} is to reduce the expected failure rate to a value below \SI{1}{\percent}. The bottom plot of Fig.~\ref{fig:DAPHAPconfr} reports the droop coefficients computed with the two configurations. They are comparable, although the \ac{dap} solution reaches slightly higher values.
As a consequence, the total economic income is higher. It is worth remarking that these results are not affected by any penalty that could be paid for reaching failure conditions in the \ac{hap} case. \begin{figure} \centering \includegraphics[width=\columnwidth]{figure/figure6.eps} \caption{Simulation results for the \ac{dap} only and \ac{dap}-\ac{hap} configurations, Case D: \SI{500}{\kilo\watt} \ac{pv}, \SI{1500}{\kilo\Wh} \ac{bess}. Top: planned and realized \ac{soe} profiles in the stand-alone \ac{dap} configuration; middle: planned and realized \ac{soe} profiles in the \ac{dap}-\ac{hap} configuration; bottom: droop coefficients obtained in the \ac{dap} only and \ac{dap}-\ac{hap} configurations.} \label{fig:DAPHAPconfr} \end{figure} The results reported in Table~\ref{tab:simresults} prove the effectiveness of the control algorithms in all the considered configurations. All cases use the same price vectors; therefore, the power ratings of the \ac{is} have a direct impact on the total income. Increasing the \ac{pv} power rating allows reaching a higher income from the dispatch, while the highest regulating capacities are obtained with larger \acp{bess}. It is finally worth remarking that, as noticed in Section~\ref{sec:ProblemDefinitions}, the control algorithms consider a battery with unitary efficiency. In contrast, the battery model adopted for the tests accounts for the efficiency. Such a model has been derived from the simulation setup presented and validated in \cite{PSCC2018,powertech2019}. The model consists of the series connection of an internal voltage source and a variable resistance; its parameters, obtained from measurements on the original grid-scale lithium-titanate battery rated \SI{560}{\kWh} \cite{sossan2016Achieving}, have been scaled to match the different battery sizes simulated. The obtained results prove that such an approximation in the control design does not influence the overall performance.
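As a minimal sketch of the kind of equivalent-circuit battery model used in the tests (a voltage source in series with a \ac{soe}-dependent resistance), the snippet below shows how a nonunitary efficiency emerges from such a circuit. All parameter values and functional forms here are hypothetical; the actual parameters are those identified from the measurements in \cite{sossan2016Achieving}.

```python
# Minimal equivalent-circuit battery sketch: E(soe) in series with R(soe).
# Parameter values and the functional forms below are illustrative only.

def emf(soe):
    """Open-circuit voltage [V], assumed linear in the state of energy."""
    return 600.0 + 100.0 * soe          # 600 V empty, 700 V full (hypothetical)

def resistance(soe):
    """Internal resistance [ohm], assumed to grow towards the SOE limits."""
    return 0.05 + 0.05 * abs(soe - 0.5)

def terminal_power(soe, current):
    """Power at the battery terminals [W] for a discharge current [A]."""
    return (emf(soe) - resistance(soe) * current) * current

soe, i = 0.5, 100.0
p_internal = emf(soe) * i               # power drawn from the internal source
p_terminal = terminal_power(soe, i)     # power actually delivered at terminals
print(p_terminal / p_internal)          # discharge efficiency, strictly < 1
```

The ratio printed at the end is the efficiency neglected by the control algorithms; the simulations confirm this approximation is acceptable.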
\section{Conclusions} \label{sec:conclusions} This paper presents a strategy for the optimal planning of an integrated \ac{bess}--\ac{pv} system, which provides frequency regulation and generation dispatch. The control architecture is composed of two algorithms. The first one, \ac{dap}, is executed the day before the delivery and defines the power dispatch plan and a droop coefficient for the \ac{pfr}, on the basis of \ac{pv} forecasts and predictions of the energy required for providing \ac{pfr}. On the delivery day, at each hour, the second algorithm, named \ac{hap}, is executed in order to allow the \ac{is} to perform its tasks in a continuous and reliable way by using updated short-term forecasts. The two algorithms are designed to maximize the total income and the performance in providing \ac{pfr}. They use chance-constrained optimization in order to model the forecast errors. The control framework has been validated by simulations. Future works will consider different applications using a similar approach, as well as non-Gaussian representations of uncertainties and stochastic models of the energy prices. \appendices \section{Proof of Proposition \ref{pr:1}} Using \eqref{eq:SOEdec1}, from \eqref{eq:chanceSOEpv} and \eqref{eq:chanceSOEf}, it follows that $$ \mathbf{P}\left(0\leq S^{pv}_k-S^{\min}_d \leq \frac{E_{\rm n}^{pv}}{E_{\rm n}}\right) = \mathbf{P}(A) \geq 1-\beta, $$ $$ \mathbf{P}\left(0\leq S^f_k-S^{\min} \leq \frac{E_{\rm n}^{f}}{E_{\rm n}}\right)= \mathbf{P}(B) = 1-\lambda^f_{\max}, $$ where $A$ and $B$ indicate the two considered constraints. Since ${S}^{pv}_k$ and ${S}^{f}_k$ are independent, it follows that $$ \mathbf{P}(A \cap B) = \mathbf{P}(A)\cdot \mathbf{P}(B) \geq (1-\beta)\cdot (1-\lambda^f_{\max}) = 1-{\lambda}_{\max}, $$ where ${\lambda}_{\max}$ is equal to the one defined in \eqref{eq:failure2}.
Now consider that, because of elementary set inclusion properties, \begin{equation} \begin{aligned} &\mathbf{P}\left(0\leq S^{pv}_k+S^f_k-S^{\min}_d-S^{\min} \leq \frac{E_{\rm n}^{pv}}{E_{\rm n}} + \frac{E_{\rm n}^{f}}{E_{\rm n}} \right) \\ & \qquad \qquad \qquad\qquad \qquad\qquad \geq \mathbf{P}(A \cap B) \geq 1-{\lambda}_{\max} \end{aligned} \end{equation} from which, taking into account \eqref{eq:SOEdec}, it follows that $$ \mathbf{P}\left(0\leq S_k - S^{\rm min} \leq \frac{E_{\rm n}^{pv}+E_{\rm n}^{f}}{E_{\rm n}}\right) \geq 1-{\lambda}_{\max} $$ and, therefore, $$ \mathbf{P}\left(S^{\rm min}\leq S_k \leq \frac{E_{\rm n}^{pv}+E_{\rm n}^{f}}{E_{\rm n}}+ S^{\rm min} \right) \geq 1-{\lambda}_{\max}. $$ To conclude, \eqref{eq:failure1} is proved by noticing that from the definitions \eqref{eq:Epvn} and \eqref{eq:Efn} it follows that $$ \frac{E_{\rm n}^{pv}+E_{\rm n}^{f}}{E_{\rm n}}+ S^{\rm min} = \frac{E_{\rm n}^{s}}{E_{\rm n}}+ S^{\rm min} = S^{\rm max}. $$ \ifCLASSOPTIONcaptionsoff \newpage \fi
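The factorization step of the proof, $\mathbf{P}(A\cap B)=\mathbf{P}(A)\,\mathbf{P}(B)$ for independent constraints, can be illustrated numerically. The sketch below uses two independent standard Gaussian variables with illustrative bounds (not the actual model quantities):

```python
# Monte Carlo check that the joint satisfaction probability of two
# independent chance constraints factorizes. Bounds are illustrative.
import random

random.seed(0)
N = 200_000
count_a = count_b = count_ab = 0
for _ in range(N):
    x = random.gauss(0.0, 1.0)      # stands in for the PV-driven SOE component
    y = random.gauss(0.0, 1.0)      # stands in for the PFR-driven SOE component
    a = abs(x) <= 1.96              # constraint A, ~95% satisfaction
    b = abs(y) <= 3.0               # constraint B, ~99.7% satisfaction
    count_a += a
    count_b += b
    count_ab += a and b

p_a, p_b, p_ab = count_a / N, count_b / N, count_ab / N
print(p_ab, p_a * p_b)              # the two estimates agree closely
```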
\section{\label{sec:1}Introduction} The study of entropy driven transitions in the $2\times2$ hard square lattice gas model, or equivalently the 2-NN model in which a particle excludes the nearest and next-nearest neighbors from being occupied by another particle, has a long history dating back to the 1950s~\cite{domb,bellerman,hoover,kinzel,amarkaski,francis2,nisbet}. The hard square model is known to undergo a continuous transition from a disordered fluid-like phase to an ordered phase with columnar order as the density $\rho$ or activity $z$ is increased. The best numerical estimates for the critical behavior, obtained from large-scale Monte Carlo simulations, are critical activity $z_c\approx 97.5$, critical density $\rho_c \approx 0.932$, and critical exponents belonging to the Ashkin--Teller universality class: $\nu \approx 0.92$, $\beta/\nu=1/8$ and $\gamma/\nu=7/4$~\cite{fernandez,kabir2,feng,zhitomirsky}. Unlike the hard hexagon model~\cite{baxter1}, the hard square model is not exactly solvable. Different analytic and rigorous methods have been used to estimate the critical parameters over the last few decades~\cite{bellerman,bellerman2,kabir1,lafuente,lafuente2,slotte,trisha2,heitor,temperley2}. The estimates for $z_c$ and $\rho_c$ obtained from different methods are summarized in \tref{table:estimates}. Analytical approaches like high density expansion~\cite{bellerman,kabir1}, Flory-type approximations~\cite{heitor}, density functional theory~\cite{lafuente,lafuente2}, etc., result in estimates that underestimate the critical activity by more than a factor of 7. Calculations based on estimating the interfacial tension~\cite{slotte,trisha2} between two ordered phases have been more successful.
By utilizing the mapping of the hard square model to the antiferromagnetic Ising model with next-nearest-neighbor interactions, a fairly good estimate $z_c=135.63$, which overestimates the critical activity, was obtained, but it is not clear how this approach may be extended~\cite{slotte}. In a recent paper~\cite{trisha2}, we introduced a systematic way of determining the interfacial tension as an expansion in the number of defects in the perfectly ordered phase. While including a single defect improves the estimates for the critical parameters ($z_c=52.49$), the calculation of the two-defect contribution appears to be too difficult to carry out. We also estimated the effect of introducing overhangs of height one in the interface for defect-free phases ($z_c=54.87$). However, it is not clear how defects and overhangs may be combined in a single calculation. In this paper, we determine the interfacial tension using a pairwise approximation, similar to that used in liquid state theory. This approximation scheme allows us to take into account multiple defects as well as overhangs. By determining the activity at which this interfacial tension vanishes, we estimate $z_c=105.35$, in reasonable agreement with numerical results ($z_c\approx 97.5$), which is a significant improvement over earlier estimates.
\begin{table} \caption{\label{table:estimates} Estimates of critical activity $z_c$ and critical density $\rho_c$ for the columnar-disordered transition of the hard square model} \begin{indented} \lineup \item[]\begin{tabular}{@{}lll} \br $z_c$ & $\rho_c$ & Method Used \\ \mr \rowcolor{Gray} 97.50 & 0.932 & Numerical~\cite{fernandez,kabir2,feng,zhitomirsky} \\ 6.25 & 0.64 & High density expansion (order one)~\cite{bellerman,kabir1} \\ 11.09 & 0.76 & Flory type mean field~\cite{heitor}\\ 11.09 & 0.76 & Approximate counting~\cite{temperley2}\\ 11.13 & 0.764 & Density Functional theory~\cite{lafuente,lafuente2}\\ 14.86 & 0.754 & High density expansion (order two)~\cite{kabir1} \\ 17.22 & 0.807 & Rushbrooke Scoins approximation~\cite{bellerman} \\ 48.25 & 0.928 & Interfacial tension with no defect~\cite{trisha2}\\ 52.49 & 0.923 & Interfacial tension with one defect~\cite{trisha2}\\ 54.87 & 0.9326 & Interfacial tension with overhang~\cite{trisha2}\\ 135.63& - & Interfacial tension in antiferromagnetic Ising model~\cite{slotte}\\ \rowcolor{Gray} 105.35 & 0.947 & In this paper \\ \br \end{tabular} \end{indented} \end{table} The hard square model on the square lattice has been studied in different contexts. It is the prototypical model to study phases with columnar, smectic or layered order, in which translational invariance is broken in some but not all directions. Examples of systems showing such ordered phases include liquid crystals~\cite{degennes}, adsorbed atoms or molecules on metal surfaces~\cite{adsorbed-on-ni,chlorine,bromine,oxygen,koper}, etc. Columnar phases have also been of recent interest in different hard core lattice gas models. The hard rectangle gas shows a nematic-columnar phase transition, in addition to isotropic-nematic and columnar-sublattice transitions~\cite{joyjit2,joyjit_rectangle_odd}. Of these, in the limit of infinite aspect ratio, only the nematic-columnar transition survives at a finite packing density~\cite{joyjit3,trisha4}.
Generalized models consisting of a mixture of hard squares and dimers~\cite{kabir2} or interacting dimers~\cite{interacting_dimer} also show a columnar phase. The presence of a columnar phase has also been demonstrated in the $k$-NN model, in which the excluded volume of a particle is made up of its first $k$ next-nearest neighbors, and which undergoes multiple entropy-driven phase transitions with increasing density~\cite{trisha1,trisha3}. The study of columnar phases has also been of recent interest in quantum spin systems~\cite{quantum_dimer,quantum_dimer2,quantum_dimer3,s_jin,zhitomirsky}. The hard square system has also found application in modeling adsorption~\cite{chlorine,adsorbed-on-ni}, in combinatorial problems and tilings~\cite{baxter-comb,squares_torus,packing_square}, and has been the subject of recent direct experiments~\cite{brownian_square,vibrating_square}. The remainder of the paper is organized as follows. In \sref{sec:2}, we define the model precisely and outline the steps in the calculation of the interfacial tension between two ordered columnar phases. The calculation involves determining the eigenvalue of a transfer matrix $T$, which is computed in \sref{sec:3}. In \sref{sec:4} the different quantities determining the largest eigenvalue of $T$ are computed by calculating exactly the partition function of hard squares on tracks made up of 2 and 4 rows with appropriate boundary conditions. The results for the interfacial tension are obtained in \sref{sec:6}. We end with a summary and discussion in \sref{sec:7}. \section{\label{sec:2}Model and Outline of Calculation} Consider a square lattice of size $N_x$ $\times$ $N_y$. The sites may be occupied by particles that are hard squares of size $2\times2$. The squares interact only through excluded volume interactions, i.e.\ two squares cannot overlap but may touch each other. We associate an activity $z$ with each square.
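The excluded volume interaction can be stated concretely: two $2\times2$ squares with heads (lower-left corners) at $(x_1,y_1)$ and $(x_2,y_2)$ overlap if and only if $|x_1-x_2|<2$ and $|y_1-y_2|<2$, so configurations in which the heads differ by exactly 2 along an axis (touching squares) are allowed. A minimal sketch of this constraint (our own illustration, not code from the paper):

```python
# Hard-square exclusion: a 2x2 square with head (x, y) occupies the sites
# (x..x+1, y..y+1), so two squares overlap iff their heads differ by less
# than 2 along both axes.

def overlap(head1, head2):
    (x1, y1), (x2, y2) = head1, head2
    return abs(x1 - x2) < 2 and abs(y1 - y2) < 2

def valid_configuration(heads):
    """True if no pair of squares overlaps (they may touch)."""
    return all(not overlap(heads[i], heads[j])
               for i in range(len(heads)) for j in range(i + 1, len(heads)))

print(overlap((0, 0), (1, 1)))          # True: the squares share site (1, 1)
print(overlap((0, 0), (2, 0)))          # False: they touch but do not overlap
print(valid_configuration([(0, 0), (2, 0), (0, 2)]))  # True
```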
At low activities $z$, or equivalently at low densities $\rho$, the system is in a disordered phase. For activities larger than a critical value $z_c$, the system is in a broken-symmetry phase with columnar order, which we define more precisely below. Let the lower left corner of a square be denoted as its head. In the columnar phase, the heads preferentially occupy even or odd rows with all columns being equally occupied, or preferentially occupy even or odd columns with all rows being equally occupied. An example of a row-ordered phase is shown in \fref{fig:12}. A snapshot of an equilibrated configuration is shown in two different representations. When the squares are colored according to whether their heads are in even or odd rows [see \fref{fig:12}(a)], one color is predominantly seen. However, when the same configuration is colored according to whether the heads of the squares are in even or odd columns [see \fref{fig:12}(b)], then both colors appear in roughly equal proportion. There are clearly $4$ ordered phases possible. \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{fig01.pdf} \caption{Snapshot of an equilibrated configuration of a system of hard squares with activity $z=110.0$, corresponding to $\rho\approx 0.937$. These parameters correspond to the system being in an ordered phase. A square is colored blue or green depending on whether its head (bottom left point) is in an even or odd (a) row and (b) column. The dominance of one color in (a) implies that the system is in a row-ordered phase. The snapshot was generated by Monte Carlo simulation using the cluster algorithm introduced in \cite{kundu,joyjit1}.} \label{fig:12} \end{figure} The aim of this paper is to estimate the critical activity $z_c$ and critical density $\rho_c$ separating the disordered phase from the ordered columnar phase.
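The row (column) order illustrated in \fref{fig:12} can be quantified by the normalized difference between the numbers of heads on even and odd rows (columns). The simple diagnostic below is our own illustration of this definition, not the observable used in the cited simulations:

```python
# Columnar order parameters from a list of square heads (x, y):
# q_row compares heads on even vs odd rows, q_col on even vs odd columns.
# In a row-ordered phase |q_row| is close to 1 while |q_col| stays small.

def order_parameters(heads):
    n = len(heads)
    even_row = sum(1 for x, y in heads if y % 2 == 0)
    even_col = sum(1 for x, y in heads if x % 2 == 0)
    q_row = (2 * even_row - n) / n
    q_col = (2 * even_col - n) / n
    return q_row, q_col

# A perfectly row-ordered sample: all heads on even rows, columns mixed.
heads = [(x, y) for y in (0, 2, 4) for x in (0, 3, 6)]
print(order_parameters(heads))   # q_row = 1.0 (full row order), |q_col| small
```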
To do so, we determine, within an approximation scheme, the interfacial tension $\sigma(z)$ between two differently ordered columnar phases and equate it to zero to obtain the transition point. Consider boundary conditions where the left edge of the square lattice is fixed to be occupied by squares with heads in even rows and the right edge is fixed to be occupied by squares with heads in odd rows. For large $z$, this choice of boundary condition ensures that there is an interface running from top to bottom separating a left phase or domain constituted of squares predominantly in even rows from a right phase or domain constituted of squares predominantly in odd rows. A schematic diagram of the interface is shown in \fref{fig:7}. We will refer to the two phases as left and right phases or domains from now on. Let $Z^{(0)}$ be the partition function of the system without an interface and $Z^{(\mathcal{I})}$ be the partition function when an interface $\mathcal{I}$ is present. The interfacial tension $\sigma(z)$ is defined as \begin{equation} \label{surf_tension} e^{-\sigma N_y}=\frac{\sum_{\mathcal{I}}Z^{(\mathcal{I})}}{Z^{(0)}}. \end{equation} As the interactions between the squares are only excluded volume interactions, the partition function in the presence of an interface may be written as a product of the partition functions of the left and right phases, i.e. \begin{equation} Z^{(\mathcal{I})}=Z^{(\mathcal{I})}_LZ^{(\mathcal{I})}_R, \end{equation} where $Z^{(\mathcal{I})}_L$ and $Z^{(\mathcal{I})}_R$ denote the partition functions of the left and right phases in the presence of an interface $\mathcal{I}$. It is not possible to determine $Z^{(\mathcal{I})}_L$, $Z^{(\mathcal{I})}_R$ or $Z^{(0)}$ exactly. In what follows, we calculate these partition functions within certain approximations. \begin{figure} \centering \includegraphics[width=0.6\columnwidth]{fig02.pdf} \caption{Schematic diagram of a configuration in the presence of an interface.
The boundary conditions are such that the left (right) edge of the square lattice is fixed to be occupied by even (odd) squares. The interface, constituted of the right edges of the right-most squares of the left domain, is denoted by the red line and labeled by $\eta_i$. $\xi_i$ denotes the left-most position possible for a square belonging to the right domain.} \label{fig:7} \end{figure} First, we assume that the interface between the left and right phases is a directed walk from top to bottom, i.e.\ the interface does not have any upward steps. We define the position of the interface to be the right boundary of the rightmost squares of the left phase. The interface is denoted by $\eta_i$ as shown in \fref{fig:7}. We also define $\xi_i$ to be the left-most position that a square in the right phase may occupy on row $i$, as shown in \fref{fig:7}. Clearly, \begin{equation} \xi_i=\max(\eta_{i-1},\eta_{i}), \quad i=1,2,\ldots,N_y/2. \end{equation} Given an interface, we compute the partition function within an approximation. The simplest approximation is to write the partition function as a product of partition functions of tracks of width two, corresponding to two consecutive rows. This approximation has the drawback that the ordered left and right phases do not have any defects, where squares of the wrong type, i.e.\ odd squares in the left or even phase and even squares in the right or odd phase, will be called defects (denoted by yellow in \fref{fig:7}). The calculation of the interfacial tension then reduces to the special case of zero defects of \cite{trisha2}. The simplest approximation that allows defects to be present is the pairwise approximation, where the partition function is written as a product of $N_y/2$ partition functions of tracks of width four, made up of four consecutive rows.
We write \begin{eqnarray} Z^{(\mathcal{I})}_L&=&\frac{\omega_2^{(L)}(\eta_1,\eta_2)\omega_2^{(L)}(\eta_2,\eta_3)~...~\omega_2^{(L)}(\eta_{N_y/2},\eta_1)} {\mathcal{L}^{(L)}(\eta_1)\mathcal{L}^{(L)}(\eta_2)~...~\mathcal{L}^{(L)}(\eta_{N_y/2})},\\ Z^{(\mathcal{I})}_R&=&\frac{\omega_2^{(R)}(N_x-\xi_1,N_x-\xi_2)~...~\omega_2^{(R)}(N_x-\xi_{N_y/2}, N_x-\xi_1)} {\mathcal{L}^{(R)}(N_x-\xi_1)~...~\mathcal{L}^{(R)}(N_x-\xi_{N_y/2})},\\ Z^{(0)}&=&\frac{{[\omega_2(N_x,N_x)]}^{N_y/2}}{{[\mathcal{L}(N_x)]}^{N_y/2}}, \label{eq:100} \end{eqnarray} where $\omega_2(\ell_1,\ell_2)$ is the partition function of a track of width $4$ in which the first two rows are of length $\ell_1$ and the third and fourth rows of length $\ell_2$, and $\mathcal{L}(\ell)$ is the partition function of a track of width $2$ in which both rows have length $\ell$. The superscripts $(L)$ and $(R)$ denote left and right phases. The choice of the denominator is motivated by the fact that in the absence of defects, $\omega_2(\ell_1,\ell_2)= \mathcal{L}(\ell_1) \mathcal{L}(\ell_2)$. In this case, the overall partition function should reduce to a product over $ \mathcal{L}$'s, and the choice of the denominator ensures this. The partition functions for the left and right phases are different, and also not the same as the partition function of the system without an interface, because the presence of the interface introduces constraints on the positioning of squares near the interface. The constraints are as follows. For the left partition function $\omega_2^{(L)}(\ell_1,\ell_2)$, there must be even squares (non-defects) present whose right edges are aligned with the position of the interface in each of the two pairs of rows corresponding to $\ell_1$ and $\ell_2$. This is because the position of the interface has been defined as the right edge of the rightmost square of the left phase.
For the right partition function $\omega_2^{(R)}(\ell_1,\ell_2)$, the constraint is that there must be at least one odd square (non-defect) between the interface and the left-most defect square. Otherwise, the interface can be redefined to include the defect square in the left phase. In addition, there is the question of whether defects can be placed between $\ell_1$ and $\ell_{2}$ for the left and right phases. Placing defects here is equivalent to allowing the interface to have overhangs. To prevent overcounting, we will disallow such defects for the left phase, but allow them for the right phase. Equivalently, a defect in the left phase may be placed only in the region to the left of $\min(\ell_1, \ell_2)$, and a defect in the right phase can be placed to the right of $\min(N_x-\ell_1,N_x-\ell_2)$. It is convenient to shift to a notation where (see \fref{fig:8}) \begin{equation} \omega_2(\ell_1,\ell_2)= \Omega_2\big[\min(\ell_1,\ell_2),|\ell_1-\ell_2|\big]. \end{equation} Then, the partition functions $Z^{(L)}$, $Z^{(R)}$ and $Z^{(0)}$ may be rewritten as \begin{eqnarray} \label{zp_left} Z^{(\mathcal{I})}_L &=&\frac{\prod_{i=1}^{N_y/2}\Omega_2^{(L)}\big[\min(\eta_i,\eta_{i+1}),|\eta_i-\eta_{i+1}|\big]} {\prod_{i=1}^{N_y/2}\mathcal{L}^{(L)}(\eta_i)},\\ \label{zp_right} Z^{(\mathcal{I})}_R &=&\frac{\prod_{i=1}^{N_y/2}\Omega_2^{(R)}\big[N_x-\max(\xi_i,\xi_{i+1}),|\xi_i-\xi_{i+1}|\big]} {\prod_{i=1}^{N_y/2}\mathcal{L}^{(R)}(N_x-\xi_i)},\\ \label{zp_0} Z^{(0)}&=&\frac{{[\Omega_2(N_x,0)]}^{N_y/2}}{{[\mathcal{L}(N_x)]}^{N_y/2}}. \end{eqnarray} For large $\ell$, the partition functions $\Omega_2$ and $\mathcal{L}$ diverge exponentially with the system size.
We define \begin{eqnarray} \label{12row} \Omega_2(\ell,\Delta)&=&a_2(\Delta)\lambda_2^{2\ell+\Delta},\\ \label{lrpartitions1} \Omega_2^{(L)}(\ell,\Delta)&=&a_2^{(L)}(\Delta)\lambda_2^{2\ell+\Delta}, \quad \ell \gg 1,\\ \label{lrpartitions2} \Omega_2^{(R)}(\ell,\Delta)&=&a_2^{(R)}(\Delta)\lambda_2^{2\ell+\Delta}, \end{eqnarray} and \begin{eqnarray} \label{one_rw} \mathcal{L}(\ell)&=&a_1 \lambda_1^\ell,\\ \label{lrpartitions3} \mathcal{L}^{(L)}(\ell)&=&a_1^{(L)} \lambda_1^{\ell},\quad \ell \gg 1,\\ \label{lrpartitions4} \mathcal{L}^{(R)}(\ell)&=&a_1^{(R)} \lambda_1^{\ell}. \end{eqnarray} Note that we have used the same exponential factor for all $\Omega_2$ (as well as for all $\mathcal{L}$), since the free energy is independent of constraints arising from the boundary conditions. It is easy to determine $a_1^{(L)}$ and $a_1^{(R)}$ in terms of $a_1$. In the left domain, for a track of width 2, the constraint is that the rightmost square must touch the interface. This means that $\mathcal{L}^{(L)}(\ell)=z\mathcal{L}(\ell-2)\approx za_1\lambda_1^{\ell-2} $. In the right domain, defects cannot be present in a track of width 2, and hence there are no constraints, implying that $\mathcal{L}^{(R)}(\ell)=\mathcal{L}(\ell) \approx a_1\lambda_1^{\ell}$. Therefore, \begin{eqnarray} \label{lrpartitions} a_1^{(L)}&=&\frac{z a_1}{\lambda_1^2},\\ a_1^{(R)}&=&a_1. 
\end{eqnarray} \begin{figure} \centering \includegraphics[width=0.4\textwidth]{fig03.pdf} \caption{Schematic diagram of a (a) track of width $4$ (four rows) with partition function $\Omega_2(\ell,\Delta)$ and (b) track of width $2$ (two rows) with partition function $\mathcal{L}(\ell)$.} \label{fig:8} \end{figure} Using the asymptotic forms for the partition functions, the partition functions of the left [see \eref{zp_left}] and right [see \eref{zp_right}] phases may be rewritten as \begin{eqnarray} Z^{(\mathcal{I})}_L&=&\frac{\prod_{i=1}^{N_y/2}a_2^{(L)}\big(|\eta_i-\eta_{i+1}|\big) \lambda_2^{2\min(\eta_i,\eta_{i+1})+|\eta_i-\eta_{i+1}|}} {\prod_{i=1}^{N_y/2}za_1\lambda_1^{\eta_i-2}},\\ Z^{(\mathcal{I})}_R&=&\frac{\prod_{i=1}^{N_y/2}a_2^{(R)}\big(|\xi_i-\xi_{i+1}|\big) \lambda_2^{2N_x-2\max(\xi_i,\xi_{i+1})+|\xi_i-\xi_{i+1}|}} {\prod_{i=1}^{N_y/2}a_1\lambda_1^{N_x-\xi_i}}. \end{eqnarray} Using the relations $2\min(m,n)=m+n-|m-n|$ and $2\max(m,n)=m+n+|m-n|$, taking the product of $Z^{(L)}$ and $Z^{(R)}$ and simplifying, we obtain \begin{equation} \label{approxzi} Z^{(\mathcal{I})}=\frac{\lambda_2^{N_xN_y}\prod_{i=1}^{N_y/2}a_2^{(L)}\big(|\eta_i-\eta_{i+1}|\big) a_2^{(R)}\big(|\xi_i-\xi_{i+1}|\big)\lambda_2^{-|\eta_i-\eta_{i+1}|}}{{\bigg(\frac{z a_1^2}{\lambda_1^2} \bigg)}^{N_y/2}\lambda_1^{N_xN_y/2}\prod_{i=1}^{N_y/2}\lambda_1^{-\frac{1}{2}|\eta_i-\eta_{i+1}|}}. \end{equation} Likewise, the partition function of the system without an interface [see \eref{zp_0}] may be written for large $N_x$ as \begin{equation} \label{z0} Z^{(0)}={\bigg[\frac{a_2(0)\lambda_2^{2N_x}}{a_1\lambda_1^{N_x}}\bigg]}^{N_y/2}.
\end{equation} Knowing the partition functions \eref{approxzi} and \eref{z0}, the interfacial tension in \eref{surf_tension} may be expressed in terms of the $a$'s, $\lambda_1$ and $\lambda_2$ as \begin{equation} \label{approxsigma} e^{-\sigma N_y}={\bigg[\frac{\lambda_1^2}{z a_1 a_2(0)}\bigg]}^{N_y/2}\sum_{\mathcal{I}}\prod_{i=1}^{N_y/2} \frac{a_2^{(L)}\big(|\eta_i-\eta_{i+1}|\big) a_2^{(R)}\big(|\xi_i-\xi_{i+1}|\big)\lambda_2^{-|\eta_i-\eta_{i+1}|}}{\lambda_1^{-\frac{1}{2}|\eta_i-\eta_{i+1}|}}. \end{equation} We note that all arguments are in terms of differences between consecutive $\eta_i$'s or $\xi_i$'s. It is therefore convenient to introduce the new variables \begin{equation} {\widetilde{\eta}}_i=\eta_i-\eta_{i-1}. \end{equation} In terms of these new variables, it is straightforward to derive \begin{equation} \xi_{i+1}-\xi_i={\widetilde{\eta}}_{i+1}\theta({\widetilde{\eta}}_{i+1})+{\widetilde{\eta}}_{i} (1-\theta({\widetilde{\eta}}_{i})), \end{equation} where $\theta(x)$ is the Heaviside step function defined as $\theta(x)=1$ for $x \geq 0$ and $\theta(x)=0$ for $x<0$. In terms of these new variables ${\widetilde{\eta}}_i$, the interfacial tension \eref{approxsigma} may be rewritten as \begin{eqnarray} \label{approxsigma2} e^{-\sigma N_y}={\bigg[\frac{\lambda_1^2}{z a_1 a_2(0)}\bigg]}^{N_y/2}\sum_{[{\widetilde{\eta}}_{i}]}\prod_{i=1}^{N_y/2} &{\bigg(\frac{\sqrt{\lambda_1}}{\lambda_2}\bigg)}^{|{\widetilde{\eta}}_{i}|} a_2^{(L)}\big(|{\widetilde{\eta}}_{i}|\big)\times\nonumber\\ &a_2^{(R)}\big(|{\widetilde{\eta}}_{i+1}\theta({\widetilde{\eta}}_{i+1})+{\widetilde{\eta}}_{i}(1-\theta({\widetilde{\eta}}_{i}))|\big), \end{eqnarray} where each ${\widetilde{\eta}}_{i}$ is summed from $-\infty$ to $+\infty$. The summation over the ${\widetilde{\eta}}_{i}$ is not straightforward to carry out, as the variables are not independent due to the terms coupling ${\widetilde{\eta}}_{i}$ and ${\widetilde{\eta}}_{i+1}$.
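The increment formula for $\xi_i$ can be checked directly: since $\xi_i=\max(\eta_{i-1},\eta_i)$, it is equivalent to $\xi_{i+1}-\xi_i=\max({\widetilde{\eta}}_{i+1},0)+\min({\widetilde{\eta}}_{i},0)$. A brute-force numerical verification (our own sketch):

```python
# Check the identity  xi_{i+1} - xi_i = eta~_{i+1} theta(eta~_{i+1})
#                                     + eta~_i (1 - theta(eta~_i))
# with xi_i = max(eta_{i-1}, eta_i) and eta~_i = eta_i - eta_{i-1}.
import random

random.seed(1)

def theta(x):                    # Heaviside step, theta(0) = 1 as in the text
    return 1 if x >= 0 else 0

for _ in range(1000):
    e0, e1, e2 = (random.randint(-5, 5) for _ in range(3))
    t1, t2 = e1 - e0, e2 - e1                        # eta~_i, eta~_{i+1}
    lhs = max(e1, e2) - max(e0, e1)                  # xi_{i+1} - xi_i
    rhs = t2 * theta(t2) + t1 * (1 - theta(t1))
    assert lhs == rhs
print("identity verified on 1000 random samples")
```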
To do the sum, we define an infinite dimensional transfer matrix $T$ with coefficients \begin{equation} \label{transfer_matrix} T_{{\widetilde{\eta}}_{i},{\widetilde{\eta}}_{i+1}}={\bigg(\frac{\sqrt{\lambda_1}}{\lambda_2}\bigg)}^{|{\widetilde{\eta}}_{i}|} a_2^{(L)}\big(|{\widetilde{\eta}}_{i}|\big) a_2^{(R)}\big(|{\widetilde{\eta}}_{i+1}\theta({\widetilde{\eta}}_{i+1})+{\widetilde{\eta}}_{i}(1-\theta({\widetilde{\eta}}_{i}))|\big). \end{equation} Let $\Lambda_2$ be the largest eigenvalue of the transfer matrix $T$. For large $N_y$, we may then write \eref{approxsigma2} as \begin{equation} \label{sigma1} e^{-\sigma N_y}={\bigg[\frac{\lambda_1^2}{z a_1 a_2(0)}\bigg]}^{N_y/2}\sum_{[{\widetilde{\eta}}_{i}]}\prod_{i=1}^{N_y/2} T_{{\widetilde{\eta}}_{i},{\widetilde{\eta}}_{i+1}} ={\bigg[\frac{\lambda_1^2\Lambda_2}{z a_1 a_2(0)}\bigg]}^{N_y/2}. \end{equation} At the transition point, $\sigma$ vanishes, and the critical activity $z_c$ therefore satisfies the relation \begin{equation} \label{critical_condition} \frac{\lambda_1^2\Lambda_2}{z_c a_1 a_2(0)}=1, \end{equation} where $\Lambda_2$ depends on $a_2^{(R)}$ and $a_2^{(L)}$. These unknown parameters are calculated exactly in \sref{sec:3} and \sref{sec:4}. \section{\label{sec:3} Calculation of Eigenvalue of $T$} In this section, we determine the largest eigenvalue of the transfer matrix $T$ with components as defined in \eref{transfer_matrix}. Let the largest eigenvalue of $T$ be denoted by $\Lambda_2$ corresponding to an eigenvector $\Psi$ with components $\psi_i$, $i=-\infty, \ldots, \infty$. In component form, the eigenvalue equation is \begin{equation} \label{eigenvalue_eqn} \sum_{j=-\infty}^{\infty}T_{i,j}\psi_j=\Lambda_2\psi_i, ~i=-\infty, \ldots, \infty. 
\end{equation} Substituting for $T$ from \eref{transfer_matrix}, we obtain \begin{equation} \label{jgeq0} \bigg(\frac{\sqrt{\lambda_1}}{\lambda_2}\bigg)^{|i|}a_2^{(L)}(|i|)\bigg[a_2^{(R)}(0)\sum_{j=-\infty}^0\psi_j+\sum_{j=1}^{\infty}a_2^{(R)}(|j|)\psi_j\bigg]\\ =\Lambda_2 \psi_i,~ i \geq 0, \end{equation} \begin{equation} \label{jl0} \bigg(\frac{\sqrt{\lambda_1}}{\lambda_2}\bigg)^{|i|}a_2^{(L)}(|i|)\bigg[a_2^{(R)}(|i|)\sum_{j=-\infty}^{0}\psi_j+\sum_{j=1}^{\infty}a_2^{(R)}(|j+i|)\psi_j\bigg]\\=\Lambda_2 \psi_i,~i < 0. \end{equation} First consider the case $i\geq0$. \Eref{jgeq0} may be re-written as \begin{equation} \label{psigeq0} \bigg(\frac{\sqrt{\lambda_1}}{\lambda_2}\bigg)^{|i|}a_2^{(L)}(|i|)\bigg[a_2^{(R)}(0)\beta+\sum_{j=1}^{\infty}a_2^{(R)}(|j|)\widetilde{\psi}_j\bigg]=\Lambda_2 \widetilde{\psi}_i, ~i \geq 0, \end{equation} where \begin{equation} \widetilde{\psi}_i = \frac{\psi_i}{\psi_0}; ~~\beta = \sum_{i=-\infty}^{0}\widetilde{\psi}_i. \label{beta} \end{equation} Since $\widetilde{\psi}_0=1$, from \eref{psigeq0} with $i=0$, we immediately obtain the eigenvalue $\Lambda_2$ to be \begin{equation} \label{Lambda} \Lambda_2=a_2^{(L)}(0) \left[ a_2^{(R)}(0)\beta+\sum_{j=1}^{\infty} a_2^{(R)}(|j|)\widetilde{\psi}_j\right], \end{equation} with components of the eigenvector being \begin{equation} \label{solution1} \widetilde{\psi}_i={\bigg(\frac{\sqrt{\lambda_1}}{\lambda_2}\bigg)}^{|i|}\frac{a_2^{(L)}(|i|)}{a_2^{(L)}(0)},~i \geq 0. \end{equation} Now, consider the case $i<0$. In terms of $\widetilde{\psi}_i$, \eref{jl0} may be written as \begin{equation} {\bigg(\frac{\sqrt{\lambda_1}}{\lambda_2}\bigg)}^{|i|}a_2^{(L)}(|i|)\bigg[a_2^{(R)}(|i|)\beta+\sum_{j=1}^{\infty}a_2^{(R)}(|j+i|)\widetilde{\psi}_j\bigg]=\Lambda_2 \widetilde{\psi}_i,~i < 0.
\end{equation} Substituting $\widetilde{\psi}_j$ for $j\geq 0$, from \eref{solution1}, we obtain \begin{equation} \label{solution2} {\bigg(\frac{\sqrt{\lambda_1}}{\lambda_2}\bigg)}^{|i|}a_2^{(L)}(|i|) F(i) = \Lambda_2\widetilde{\psi}_i,~i< 0, \end{equation} where the function $F(i)$ is defined as \begin{equation} F(i)=a_2^{(R)}(|i|)\beta+\sum_{j=1}^{\infty}{\bigg(\frac{\sqrt{\lambda_1}}{\lambda_2}\bigg)}^{|j|}\frac{a_2^{(R)}(|j+i|)a_2^{(L)}(|j|)}{a_2^{(L)}(0)}. \end{equation} The solution to \eref{solution2} is clearly \begin{equation} \Lambda_2=a_2^{(L)}(0)F(0), \end{equation} which is consistent with \eref{Lambda}, and \begin{equation} \label{solution3} \widetilde{\psi}_i={\bigg(\frac{\sqrt{\lambda_1}}{\lambda_2}\bigg)}^{|i|}\frac{a_2^{(L)}(|i|)F(i)}{a_2^{(L)}(0)F(0)},~i< 0. \end{equation} \Eref{Lambda}, \eref{solution1}, and \eref{solution3} determine $\Lambda_2$ and the components of the eigenvector. To solve for $\Lambda_2$ in terms of $a_2^{(L)}(\Delta)$ and $a_2^{(R)}(\Delta)$, it is convenient to define three quantities \begin{eqnarray} \label{k1} k_1&=&\sum_{i=1}^{\infty}{\bigg(\frac{\sqrt{\lambda_1}}{\lambda_2}\bigg)}^{|i|}a_2^{(L)}(|i|)a_2^{(R)}(|i|),\\ \label{k2} k_2&=&\sum_{i=-\infty}^{0}{\bigg(\frac{\sqrt{\lambda_1}}{\lambda_2}\bigg)}^{|i|}a_2^{(L)}(|i|)a_2^{(R)}(|i|),\\ \label{k3} k_3&=&\sum_{i=-\infty}^{0}\sum_{j=1}^{\infty}{\bigg(\frac{\sqrt{\lambda_1}}{\lambda_2}\bigg)}^{|i|+|j|}\frac{a_2^{(L)}(|i|)a_2^{(R)}(|i+j|)a_2^{(L)}(|j|)}{a_2^{(L)}(0)}. \end{eqnarray} Solving for $\beta$ in \eref{beta} and \eref{Lambda} by substituting for $\widetilde{\psi}_i$ from \eref{solution3} and \eref{solution1} respectively, we obtain \begin{eqnarray} \label{beta1} \beta&=&\frac{k_3}{\Lambda_2-k_2},\\ \beta&=&\frac{\Lambda_2-k_1}{a_2^{(L)}(0)a_2^{(R)}(0)}. 
\label{beta2} \end{eqnarray} Equating \eref{beta1} and \eref{beta2} to eliminate $\beta$, we find that $\Lambda_2$ satisfies the quadratic equation \begin{equation} \label{eq_of_lambda} \Lambda_2^2-(k_1+k_2)\Lambda_2 +k_1k_2-k_3a_2^{(L)}(0)a_2^{(R)}(0)=0, \end{equation} whose largest root is \begin{equation} \label{largest_lambda} \Lambda_2=\frac{k_1+k_2+\sqrt{{(k_1-k_2)}^2+4a_2^{(L)}(0)a_2^{(R)}(0)k_3}}{2}. \end{equation} The largest eigenvalue may be further simplified using \begin{eqnarray} \label{ktilda1} &k_2-k_1&=a_2^{(L)}(0)a_2^{(R)}(0),\\ \label{ktilda2} &\widetilde{k}&=k_2+k_1=\sum_{i=-\infty}^{\infty}{\bigg(\frac{\sqrt{\lambda_1}}{\lambda_2}\bigg)}^{|i|}a_2^{(L)}(|i|)a_2^{(R)}(|i|). \end{eqnarray} After simplification we get the largest eigenvalue \begin{equation} \label{largest_lambda_s} \Lambda_2=\frac{\widetilde{k}+\sqrt{{\big[a_2^{(L)}(0)a_2^{(R)}(0)\big]}^2+4a_2^{(L)}(0)a_2^{(R)}(0)k_3}}{2}, \end{equation} with $k_3$ as in \eref{k3} and $\widetilde{k}$ as in \eref{ktilda2}. \section{\label{sec:4}Calculation of Partition Functions of Tracks} \subsection{Partition function of track of width $2$} In this section, we determine the asymptotic behavior of the partition function $\mathcal{L}(\ell)$ of a track of width $2$ and length $\ell$ [the shape of the track is shown in \fref{fig:8}(b)]. We define the generating function \begin{equation} \label{g1y} G_1(y)=\sum_{\ell=0}^\infty \mathcal{L}(\ell)y^{\ell}, \end{equation} where the power of $\sqrt{y}$ is the number of sites present in the system. The recursion relation obeyed by $G_1(y)$ is shown diagrammatically in \fref{fig:15} and can be written as \begin{equation} G_1(y)=1+yG_1(y)+zy^2G_1(y), \end{equation} which may be solved to give \begin{equation} \label{G1y} G_1(y)=\frac{1}{1-y-zy^2}. \end{equation} Let $y_1$ be the smallest root of the denominator $1-y-zy^2$ of \eref{G1y}, i.e. \begin{equation} y_1=\frac{\sqrt{1+4z}-1}{2z}. 
\end{equation} By finding the coefficient of $y^\ell$ for large $\ell$, it is straightforward to obtain \begin{equation} \label{mathcalL} \mathcal{L}(\ell)=a_1\lambda_1^{\ell}[1+O(\exp(-c\ell))],~c>0,~\ell \gg 1, \end{equation} where \begin{equation} \label{lmda1a1} \lambda_1=\frac{1}{y_1},~~a_1=\frac{1}{2-y_1}. \end{equation} \begin{figure} \centering \includegraphics[width=0.6\columnwidth]{fig04.pdf} \caption{Diagrammatic representation of the recursion relation obeyed by the generating function $G_1(y)$ defined for a track of width 2 [see \eref{g1y} for definition]. The first column of the track may be occupied by two vacancies (open 1$\times$1 square) or a square (filled 2$\times$2 square).} \label{fig:15} \end{figure} \subsection{Partition functions for tracks of width $4$} In this section we determine the partition functions of tracks of width $4$ without any constraints. The shape of a generic track of width $4$ is characterized by parameters $\ell$ and $\Delta$, and is shown in \fref{fig:8} (a). Calculating these partition functions will allow us to determine $a_2(\Delta)$ as defined in \eref{12row}. \begin{figure} \centering \includegraphics[width=0.6\columnwidth]{fig05.pdf} \caption{Diagrammatic representation of the recursion relation obeyed by the generating functions (a) $G_2(y,0)$ and (b) $G_2(y,1)$ for a track of width 4 [see \eref{g2delta} for definition]. Right hand side enumerates the different ways the first column of the track may be occupied by vacancies (open 1$\times$1 square), square (filled 2$\times$2 green square) and defect (filled 2$\times$2 yellow square).} \label{fig:1} \end{figure} Consider the following generating function. \begin{equation} \label{g2delta} G_2(y,\Delta)=\sum_{\ell=0}^\infty \Omega_2(\ell,\Delta)y^{2\ell+\Delta}, \end{equation} where the power of $\sqrt{y}$ is the number of sites in the system. $G_2(y,0)$ and $G_2(y,1)$ obey simple recursion relations which are shown diagrammatically in \fref{fig:1}. 
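Before writing these recursion relations out in equation form, we note that the width-2 asymptotics obtained above can be spot-checked numerically. The sketch below is only an illustration, with an arbitrary test activity $z=2$ (not a physical value): it builds $\mathcal{L}(\ell)$ from the recursion implied by $G_1(y)=1+yG_1(y)+zy^2G_1(y)$ and compares $\mathcal{L}(\ell)/\lambda_1^\ell$ with the prefactor $a_1$ of \eref{lmda1a1}.

```python
import math

# Minimal numerical spot-check of L(l) ~ a_1 * lambda_1^l for the width-2 track.
# The activity z is an arbitrary test value, not a physical one.
z = 2.0
y1 = (math.sqrt(1 + 4*z) - 1) / (2*z)   # smallest root of 1 - y - z y^2
lam1 = 1.0 / y1
a1 = 1.0 / (2.0 - y1)

# G_1 = 1 + y G_1 + z y^2 G_1 implies L(l) = L(l-1) + z L(l-2):
# each new column is either a pair of vacancies or a 2x2 square.
Lpart = [1.0, 1.0]                      # L(0) = L(1) = 1
for l in range(2, 61):
    Lpart.append(Lpart[l-1] + z*Lpart[l-2])

print(Lpart[60] / lam1**60, a1)         # the two numbers should agree
```

The agreement is exponentially fast in $\ell$, as expected from \eref{mathcalL}.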
In equation form, they are \begin{eqnarray} \label{g20_00} G_2(y,0)&=&1+y^2G_2(y,0)+2zy^3G_2(y,1)+(z^2y^4 +z_Dy^4)G_2(y,0),\\ \label{g20_11} G_2(y,1)&=&yG_2(y,0)+zy^2G_2(y,1), \end{eqnarray} where $z_D$ is the activity associated with each defect square. These relations are easily solved to give \begin{eqnarray} \label{sg20} G_2(y,0)&=&\frac{1-zy^2}{f(y^2)},\\ \label{sg21} G_2(y,1)&=&\frac{y}{f(y^2)}, \end{eqnarray} where \begin{equation} f(y)=z(z^2+z_D)y^3-(z^2+z+z_D)y^2-(1+z)y+1.\nonumber \end{equation} Let $y_2$ be the smallest root of $f(y)=0$. For very large $\ell$, we may write $\Omega_2(\ell,\Delta)$ as \begin{equation} \label{approx_omega} \Omega_2(\ell,\Delta)=a_2(\Delta)\lambda_2^{2\ell+\Delta}[1+O(\exp(-c\ell))],~\ell \gg 1, ~c>0, \end{equation} where \begin{equation} \lambda_2=\frac{1}{\sqrt{y_2}}. \end{equation} Calculating the coefficient of $y^{2\ell+\Delta}$, we obtain the prefactor $a_2(\Delta)$ for $\Delta=0,1$: \begin{eqnarray} \label{aa20} a_2(0)&=&\frac{-(1-zy_2)}{y_2f^\prime(y_2)},\\ \label{aa21} a_2(1)&=&\frac{-1}{\sqrt{y_2}f^\prime(y_2)}. \end{eqnarray} We now consider $\Delta \geq 2$. The recursion relation obeyed by $\Omega_2(\ell,\Delta)$ for $\Delta \geq 2$ is shown diagrammatically in \fref{fig:3}, and may be written mathematically as \begin{equation} \label{o2ld} \Omega_2(\ell,\Delta)=\Omega_2(\ell,\Delta-1)+z\Omega_2(\ell,\Delta-2),~~\Delta=2, 3,... \end{equation} We define the generating function \begin{equation} \label{flx} F(\ell,x)=\sum_{\Delta=0}^\infty \Omega_2(\ell,\Delta)x^{\Delta}. \end{equation} Multiplying \eref{o2ld} by $x^\Delta$ and summing from $2$ to $\infty$, we obtain a linear equation obeyed by $F(\ell,x)$ which is easily solved to give \begin{equation} F(\ell,x)=\frac{\Omega_2(\ell,0)+x\big[\Omega_2(\ell,1)-\Omega_2(\ell,0)\big]}{1-x-zx^2}, \end{equation} where $\Omega_2(\ell,0)$ and $\Omega_2(\ell,1)$ have already been determined [see \eref{sg20}, \eref{sg21}]. 
$F(\ell,x)$ has two simple poles at \begin{equation} x_\pm=\frac{-1\pm\sqrt{1+4z}}{2z}. \end{equation} Expanding the denominator about its two roots $x_\pm$, we determine $\Omega_2(\ell,\Delta)$ by calculating the coefficient of $x^\Delta$. We obtain \begin{equation} \label{a2dlta} a_2(\Delta)=A_+(x_+\lambda_2)^{-\Delta}+A_-(x_-\lambda_2)^{-\Delta},~\Delta= 0,1,2..., \end{equation} where \begin{equation} A_\pm=\frac{\pm\big[\lambda_2 a_2(1)-(z x_\mp +1)a_2(0)\big]}{\sqrt{1+4z}}. \end{equation} \begin{figure} \centering \includegraphics[width=0.5\columnwidth]{fig06.pdf} \caption{Diagrammatic representation of the recursion relation obeyed by the partition function $\Omega_2(\ell,\Delta)$ with $\Delta\geq2$, for a track of width 4. The first column of the track may be occupied by two vacancies (open 1$\times$1 square) or a square (filled 2$\times$2 square).} \label{fig:3} \end{figure} \subsection{Calculation of $a_2^{(L)}(\Delta)$} In this section, we calculate the pre-factor $a_2^{(L)}(\Delta)$ that characterizes the asymptotic behavior of the partition function of a track of width 4 [see \eref{lrpartitions1}] for the left phase. The left phase has the constraint that the right edge of the rightmost square must touch the interface [see discussion in the paragraph following \eref{eq:100}]. Thus \begin{equation} \label{omega2_left} \Omega_2^{(L)}(\ell,\Delta)=z^2\Omega_2(\ell-2,\Delta), \end{equation} where the factor $z^2$ accounts for the two squares adjacent to the interface. Once these two squares are placed, the occupation of the rest of the track has no constraints and is hence enumerated by $\Omega_2(\ell-2,\Delta)$. Using \eref{omega2_left}, \eref{lrpartitions1} and \eref{approx_omega}, for very large $\ell$ we obtain \begin{equation} \label{a2_left} a_2^{(L)}(\Delta)=\frac{z^2}{\lambda_2^4} a_2(\Delta), \end{equation} where $a_2(\Delta)$ is given in \eref{a2dlta}. 
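As an independent check of \eref{aa20} and \eref{aa21}, one can generate $\Omega_2(\ell,\Delta)$ directly from the column-by-column recursions implied by \eref{g20_00} and \eref{g20_11} and compare with the asymptotic prefactors. The following sketch does this for illustrative, non-critical activities $z=z_D=2$ (chosen only for convenience):

```python
import math

# Spot-check of Omega_2(l,Delta) ~ a_2(Delta) lambda_2^{2l+Delta} for
# Delta = 0, 1; z and z_D are arbitrary test values, not critical ones.
z, zD = 2.0, 2.0

def f(y):   # cubic entering the denominators of G_2(y,0) and G_2(y,1)
    return z*(z**2 + zD)*y**3 - (z**2 + z + zD)*y**2 - (1 + z)*y + 1

def fp(y):  # its derivative
    return 3*z*(z**2 + zD)*y**2 - 2*(z**2 + z + zD)*y - (1 + z)

# smallest positive root of f by bisection (here f(0) = 1 > 0 > f(0.5))
lo, hi = 0.0, 0.5
for _ in range(200):
    mid = 0.5*(lo + hi)
    lo, hi = (lo, mid) if f(lo)*f(mid) <= 0 else (mid, hi)
y2 = 0.5*(lo + hi)
lam2 = 1.0/math.sqrt(y2)

a20 = -(1 - z*y2)/(y2*fp(y2))       # prefactor a_2(0)
a21 = -1.0/(math.sqrt(y2)*fp(y2))   # prefactor a_2(1)

# Column-by-column recursions implied by the generating functions:
# O0(l) = O0(l-1) + 2z O1(l-2) + (z^2+z_D) O0(l-2),  O1(l) = O0(l) + z O1(l-1)
Lmax = 60
O0, O1 = [1.0], [1.0]
for l in range(1, Lmax + 1):
    new0 = O0[l-1] + (2*z*O1[l-2] + (z**2 + zD)*O0[l-2] if l >= 2 else 0.0)
    O0.append(new0)
    O1.append(new0 + z*O1[l-1])

print(O0[Lmax]/lam2**(2*Lmax), a20)      # should agree
print(O1[Lmax]/lam2**(2*Lmax + 1), a21)  # should agree
```

The ratios converge geometrically, since the correction terms in \eref{approx_omega} are controlled by the next-nearest singularity of the generating functions.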
\subsection{Calculation of $a_2^{(R)}(\Delta)$ \label{sec:secar}} In this section, we calculate $a_2^{(R)}(\Delta)$ for $\Delta \geq 0$, as defined in \eref{lrpartitions2}. Consider the track labeled by $(\xi_i,\xi_{i+1})$ [see \fref{fig:7}]. The constraint on the right phase is that a defect is allowed to be present only to the right of $\min(\xi_i,\xi_{i+1})$ and there must be at least one non-defect square present to its left [see discussion in the paragraph following \eref{eq:100}]. First consider $\Delta=0,1$. The recursion relations obeyed by the partition functions $\Omega_2^{(R)}(\ell,0)$ and $\Omega_2^{(R)}(\ell,1)$ for the right phase are shown diagrammatically in \fref{fig:4} and may be written as \begin{eqnarray} \Omega_2^{(R)}(\ell,0)&=&\Omega_2^{(R)}(\ell-1,0)+2z\Omega_2(\ell-2,1)+z^2\Omega_2(\ell-2,0),\\ \Omega_2^{(R)}(\ell,1)&=&\Omega_2^{(R)}(\ell,0)+z\Omega_2(\ell-1,1). \end{eqnarray} Using the asymptotic expressions for the partition functions as given in \eref{12row} and \eref{lrpartitions2}, we obtain two linear equations for $a_2^{(R)}(0)$ and $a_2^{(R)}(1)$, which are easily solved to give \begin{eqnarray} \label{a2RL} a_2^{(R)}(0)&=&\frac{z\big[2a_2(1)\lambda_2+za_2(0)\big]}{\lambda_2^2(\lambda_2^2-1)},\\ a_2^{(R)}(1)&=&\frac{\lambda_2a_2^{(R)}(0)+za_2(1)}{\lambda_2^2}. \end{eqnarray} \begin{figure} \centering \includegraphics[width=0.5\columnwidth]{fig07.pdf} \caption{Diagrammatic representation of the recursion relation obeyed by the partition functions $\Omega_2^{(R)}(\ell,0)$ and $\Omega_2^{(R)}(\ell,1)$ for the track of width 4. Right hand side enumerates the different ways the first column of the track may be occupied by vacancies (open 1$\times$1 square) or squares (filled 2$\times$2 square).} \label{fig:4} \end{figure} Now consider $\Delta \geq 2$. 
The recursion relation obeyed by $\Omega_2^{(R)}(\ell,\Delta)$ for $\Delta \geq 2$ may be written as \begin{equation} \label{omega2Relldelta0} \Omega_2^{(R)}(\ell,\Delta)=\Omega_2^{(R)}(\ell,\Delta-1)+z\widetilde{\Omega}_2(\ell,\Delta-2) \end{equation} where $\widetilde{\Omega}_2(\ell,\Delta)$ is the partition function for a generalization of the shape for $\Omega_2^{(R)}(\ell,1)$ in the left hand side of \fref{fig:4}. The lack of the subscript $(R)$ means that there are no constraints. The first term in the right hand side of \eref{omega2Relldelta0} corresponds to placing vacancies in the first column, and the second term to a non-defect square being placed. $\Omega_2^{(R)}(\ell,\Delta-1)$ in the right hand side of \eref{omega2Relldelta0} may be iterated further to yield \begin{equation} \label{omega2Relldelta} \Omega_2^{(R)}(\ell,\Delta) =\Omega_2^{(R)}(\ell,1)+z\sum_{i=0}^{\Delta-2}\widetilde{\Omega}_2(\ell,i). \end{equation} To solve \eref{omega2Relldelta}, consider the generating function $\widetilde{G}_2(y,\Delta)$ defined as \begin{equation} \label{gty2d} \widetilde{G}_2(y,\Delta)=\sum_{\ell=0}^\infty \widetilde{\Omega}_2(\ell,\Delta)y^{2\ell+3\Delta/2}, \end{equation} where the power of $\sqrt{y}$ gives the total number of sites in the system. The diagrammatic representation of the recursion relation obeyed by $\widetilde{G}_2(y,1)$ is shown in \fref{fig:2} and may be written as \begin{equation} \label{gty21} \widetilde{G}_2(y,1)=y^{3/2}{G}_2(y,0)+zy^{5/2} G_2(y,1)+z_Dy^{7/2} G_2(y,0), \end{equation} where $z_D$ is the activity associated with each defect, and $G_2(y,0)$ and $G_2(y,1)$ are as in \eref{sg20} and \eref{sg21}. The generating function $\widetilde{G}_2(y,1)$ is then easily solved to give \begin{equation} \label{sgty21} \widetilde{G}_2(y,1)=\frac{(1+z_Dy^2-zz_Dy^4)y^{3/2}}{f(y^2)}. 
\end{equation} For large $\ell$ the partition function may be written asymptotically as \begin{equation} \label{app_at2d} \widetilde{\Omega}_2(\ell,\Delta)=\widetilde{a}_2(\Delta)\lambda_2^{2\ell+\Delta},~\Delta \geq 0,~ \ell \gg 1. \end{equation} Calculating the coefficient of $y^{2\ell+3/2}$ from \eref{sgty21} and using \eref{app_at2d}, we obtain the prefactor \begin{equation} \label{at1} \widetilde{a}_2(1)=\frac{-(1+z_Dy_2-zz_Dy_2^2)}{\sqrt{y_2}f^\prime(y_2)}. \end{equation} \begin{figure} \centering \includegraphics[width=0.5\columnwidth]{fig08.pdf} \caption{Diagrammatic representation of the recursion relation obeyed by the generating function $\widetilde{G}_2(y,1)$ [see \eref{gty2d} for definition] for a track of width 4. Right hand side enumerates the different ways the first column of the track may be occupied by vacancies (open 1$\times$1 square), square (filled 2$\times$2 square of color green) and defect (filled 2$\times$2 square of color yellow).} \label{fig:2} \end{figure} We now calculate the partition function $\widetilde{\Omega}_2(\ell,\Delta)$ for $\Delta\geq2$. The diagrammatic representation of the recursion relation obeyed by the partition function $\widetilde{\Omega}_2(\ell,\Delta)$ for $\Delta\geq2$ is shown in \fref{fig:5} and may be written mathematically as \begin{equation} \label{otld} \widetilde{\Omega}_2(\ell,\Delta)=\widetilde{\Omega}_2(\ell,\Delta-1)+(z+z_D)\widetilde{\Omega}_2(\ell,\Delta-2), ~\Delta=2, 3, .... \end{equation} We define the generating function \begin{equation} \label{hlt} H(\ell,t)=\sum_{\Delta=0}^\infty \widetilde{\Omega}_2(\ell,\Delta)t^{\Delta}. \end{equation} Multiplying \eref{otld} by $t^\Delta$ and summing over $\Delta$ from $2$ to $\infty$, we obtain a linear equation obeyed by $H(\ell,t)$ which is solved to give \begin{equation} H(\ell,t)=\frac{\widetilde{\Omega}_2(\ell,0)+t\left[\widetilde{\Omega}_2(\ell,1)-\widetilde{\Omega}_2(\ell,0)\right]}{1-t-(z+z_D)t^2}. 
\end{equation} $H(\ell,t)$ has two simple poles determined by the roots of the quadratic equation $1-t-(z+z_D)t^2=0$: \begin{equation} t_\pm=\frac{-1\pm\sqrt{1+4(z+z_D)}}{2(z+z_D)}. \end{equation} Expanding the denominator about $t_\pm$ and calculating the coefficient of $t^{\Delta}$, we obtain the expression for $\widetilde{\Omega}_2(\ell,\Delta)$; using \eref{app_at2d}, the prefactor is \begin{equation} \label{ta2delta} \widetilde{a}_2(\Delta)=B_+(t_+\lambda_2)^{-\Delta}+B_-(t_-\lambda_2)^{-\Delta}, ~\Delta\geq 0, \end{equation} where \begin{equation} B_\pm=\frac{\pm\bigg[\lambda_2 \widetilde{a}_2(1)-[(z+z_D)t_\mp +1]a_2(0)\bigg]}{\sqrt{1+4(z+z_D)}}. \end{equation} \begin{figure} \centering \includegraphics[width=0.5\columnwidth]{fig09.pdf} \caption{Diagrammatic representation of the recursion relation obeyed by the partition function $\widetilde{\Omega}_2(\ell,\Delta)$ with $\Delta\geq2$ for a track of width 4. Right hand side enumerates the different ways the first column of the track may be occupied by vacancies (open 1$\times$1 square), square (filled 2$\times$2 square of color green) and defect (filled 2$\times$2 square of color yellow).} \label{fig:5} \end{figure} We now return to \eref{omega2Relldelta} and replace the partition functions $\Omega_2^{(R)}(\ell,\Delta)$ and $\widetilde{\Omega}_2(\ell,i)$ by their asymptotic forms given in \eref{lrpartitions2} and \eref{app_at2d} respectively, and perform the summation over $\widetilde{\Omega}_2(\ell,i)$ from $i=0$ to $(\Delta-2)$, to obtain the prefactor \begin{equation} a_2^{(R)}(\Delta)=v_1\lambda_2^{-\Delta}+v_2(t_+\lambda_2)^{-\Delta}+v_3(t_-\lambda_2)^{-\Delta},~\Delta\geq2, \label{eq:92} \end{equation} where \begin{eqnarray} v_1&=&a_2^{(R)}(1)\lambda_2+z\bigg(\frac{B_+t_+}{t_+-1}+\frac{B_-t_-}{t_--1}\bigg),\nonumber\\ v_2&=&-\frac{zB_+t_+^2}{t_+-1},\nonumber\\ v_3&=&-\frac{zB_-t_-^2}{t_--1}.\nonumber \end{eqnarray} \section{\label{sec:6}Results} In this section we determine the interfacial 
tension $\sigma(z)$ between two ordered phases as a function of the activity $z$. From \eref{sigma1}, $\sigma(z)$ may be written as \begin{equation} \label{sigma} \sigma(z)=-\frac{1}{2}\log\bigg[\frac{\lambda_1^2\Lambda_2}{z a_1 a_2(0)}\bigg], \end{equation} where $\Lambda_2$, $\lambda_1$, $a_1$ and $a_2(0)$ are as in \eref{largest_lambda_s}, \eref{lmda1a1}, and \eref{aa20}. $\Lambda_2$ depends on $a_2^{(L)}(\Delta)$ and $a_2^{(R)}(\Delta)$, which in turn have been calculated in \eref{a2_left} and \eref{eq:92}. We also set $z_D=z$, where $z_D$ is the activity of a defect. The variation of $\sigma(z)$ with activity $z$ is shown in \fref{fig:10}. It decreases monotonically with decreasing $z$ and becomes zero at a finite value of $z$, which will be our estimate of the critical activity $z_c$. We find that $z_c=105.35$ for the interface with overhangs. As a check on the calculation, we confirm that setting $z_D=0$ reproduces the estimate of $z_c$ obtained in the absence of defects~\cite{trisha2}. The result for $z_c$ compares well with the numerical estimate from Monte Carlo simulations of $z_c\approx 97.5$ [see \tref{table:estimates}]. \begin{figure} \centering \includegraphics[width=0.5\columnwidth]{fig10.pdf} \caption{The variation of the interfacial tension $\sigma(z)$ with activity $z$. Interfacial tension $\sigma(z)$ vanishes at the critical activity $z=z_c$.} \label{fig:10} \end{figure} The occupied area fraction or density $\rho$ may be calculated from the partition function $Z^{(0)}$ as: \begin{equation} \label{density} \rho=\frac{4 z}{N_xN_y}\frac{\partial}{\partial z}\bigg[\log \big(Z^{(0)}\big)\bigg], \end{equation} where the factor $4$ accounts for the area of a square. 
Substituting for $Z^{(0)}$ from \eref{z0}, the density $\rho$ in \eref{density}, in the thermodynamic limit $N_x\rightarrow\infty,~N_y\rightarrow\infty$, reduces to \begin{equation} \label{density_exp} \rho=4z\bigg[\frac{1}{\lambda_2}\frac{\partial\lambda_2}{\partial z}-\frac{1}{2\lambda_1}\frac{\partial\lambda_1}{\partial z}\bigg]. \end{equation} We thus obtain the critical density to be $\rho_c=0.947$. This estimate compares well with the Monte Carlo results of $\rho_c\approx 0.932$ [see \tref{table:estimates}]. \section{\label{sec:7}Conclusion} In this paper, we estimated the transition point of the disordered-columnar transition in the hard square model by calculating the interfacial tension between two ordered phases within a pairwise approximation. This calculation allows for multiple defects to be present, as well as for the interface to have effective overhangs. We obtain the critical activity $z_c=105.35$ and critical density $\rho_c=0.947$, which agree reasonably well with the numerically obtained results of $z_c\approx 97.5$ and $\rho_c\approx 0.932$. Our estimate for the critical activity is a considerable improvement over earlier estimates based on many different approaches [see \tref{table:estimates}]. We calculated the prefactor $a_2^{(R)}(\Delta)$ by allowing defects to be present as overhangs [see \sref{sec:secar}]. The calculation can be repeated when defects are present only in regions which do not correspond to overhangs. This corresponds to a defect in the right phase being present only to the right of $\max(\xi_i,\xi_{i+1})$ [see \fref{fig:7}]. This calculation leads to an estimate of $z_c=43.28$, which is about half the value of the numerical result of $z_c\approx 97.5$. The decrease in the value of $z_c$ on excluding overhangs is consistent with the fact that the entropy of the system with an interface decreases while the entropy of the system without an interface remains unchanged. 
We thus conclude that the presence of overhangs in the interface is important for the calculation of interfacial tension. A similar analysis for determining the phase boundary may be done for other kinds of systems that show a transition from a disordered to a columnar ordered phase with increasing density. The mixture of hard squares and dimers~\cite{kabir2} shows such a transition, and so does the system of $(d\times2)$ hard rectangles~\cite{joyjit3,trisha4,trisha2}. It would be interesting to see whether the approximation scheme used in this paper is useful in obtaining reliable estimates for the phase boundaries in these problems. \section*{Acknowledgments} We thank Deepak Dhar for helpful discussions. \section*{References} \bibliographystyle{iopart-num} \providecommand{\newblock}{}
\section{Introduction} \label{sect:Intro} String theory combined with inflationary cosmology has led to the picture of an inflationary multiverse, populated by a multitude of vacua with diverse properties. (For a review of multiverse cosmology and references to the literature see, e.g., \cite{Linde}.) The vacua are represented by minima in the potential energy landscape, and transitions between different vacua occur by quantum tunneling through bubble nucleation. All positive-energy vacua are sites of eternal inflation. In addition, a realistic landscape should include regions allowing slow-roll inflation with $\gtrsim 50$ e-folds, leading to a low-energy vacuum like ours. The expected number of vacua in the string landscape is enormous, so predictions in this kind of model should necessarily be statistical. One may hope that the large number of vacuum states will make the statistical predictions sharp and that simplicity will eventually emerge from the complex physics of the multiverse. A natural first step is to study the statistics of a simple landscape described by a random Gaussian potential $U (\boldsymbol \phi)$ in an $N$-dimensional field space $\phi_i ~(i=1, ..., N)$. This approach has been adopted in much of the recent literature (see, e.g., \cite{Tegmark,Easther,Frazer,Battefeld,Yang,Bachlechner,Wang,MV,EastherGuthMasoumi,MVY1,MVY2}). String theory suggests that the number of fields $N$ should be rather large, $N\gtrsim 100$, so one can use the properties of random Gaussian fields in the large-$N$ limit. It should be noted that a random Gaussian field does not reflect some qualitative features of the string landscape. For example, the moduli potential in string theory should have decoupling limits, where the potential goes to zero. K\"ahler moduli may also have runaway instabilities~\cite{Dine:1985he, Bachlechner:2016mtp}. 
dS vacua are much more difficult to construct than AdS vacua in string compactifications, a feature which may or may not be captured by a random Gaussian potential with a constant term. We also assume canonical kinetic terms for the moduli, which is generally not so in string theory. A random Gaussian field should not therefore be regarded as a realistic model of the string landscape. We believe, however, that understanding this model is an important first step, before the effects due to deviations from randomness or Gaussianity can be investigated. In this paper we shall focus on so-called small-field landscapes, where the correlation length $\Lambda$ of the potential $U (\boldsymbol \phi)$ is small compared to the reduced Planck scale, $\Lambda\ll M_{\rm Pl}$. The conditions for slow-roll inflation in such a landscape are rather restrictive. Inflation can typically occur in the vicinity of saddle points or inflection points of the potential \cite{LindeWestphal} (we shall specify the precise conditions in Sec. 2). An estimate of the probability of inflation was attempted by Yang in Ref.~\cite{Yang}, with some {\it ad hoc} assumptions about the distribution of the field values after tunneling and about the attractor regions around inflection and saddle points that lead to inflation. A different approach to the problem, using random matrix theory, was initiated by Marsh {\it et al.} in Ref.~\cite{Marsh} and further developed in \cite{Dias:2016slx,Freivogel,Westphal,Wang2,Marsh2}. These authors noted that in order to deduce the inflationary properties of the landscape one only needs to know the potential in the vicinity of the inflationary paths. Furthermore, they conjectured that the evolution of the Hessian matrix $\zeta_{ij}=\partial^2 U/ \partial{\phi_i} \partial{\phi_j}$ along a given path in the landscape is described by a stochastic process that they specify (Dyson Brownian Motion, or DBM \cite{Dyson}). 
This process is known to drive the Hessian distribution towards that of the Gaussian Orthogonal Ensemble (GOE). With this assumption, the authors of \cite{Marsh,Dias:2016slx,Freivogel,Marsh2} have reached two major conclusions. First, they found that inflation is generically multi-field, with a number of scalar fields participating in the slow roll. And second, they found (in Ref.~\cite{Freivogel}) that inflation is far less likely than one might expect. Even if the slow-roll conditions are satisfied in a small patch of the landscape, the slope of the potential tends to rapidly steepen beyond that patch, cutting inflation short. Refs.~\cite{Marsh,Dias:2016slx,Freivogel,Marsh2} attribute these results to the fact that the statistics of Hessian eigenvalues in GOE is related to that of a gas of particles on a line interacting via a repulsive potential, resulting in `eigenvalue repulsion'. This DBM method, however, has some problematic features. The Hessian distribution in a random Gaussian landscape is significantly different from that in GOE; in particular, it gives a vastly larger density of minima \cite{Fyodorov,BrayDean,EastherGuthMasoumi}. Some other problems with DBM have been pointed out in Refs.~\cite{Marsh,Freivogel,Wang2}. The status of this method is therefore rather uncertain, and the conclusions it yields for inflation in the landscape should be taken with caution. In two earlier papers \cite{MVY1,MVY2} we have developed precise analytic and numerical tools for studying inflation in a random landscape. We applied these tools to the simplest case of a $1D$ landscape, where the potential depends on a single scalar field $\phi$. In \cite{MVY1} we calculated the probability distributions for the maximal number of e-folds and for the spectral index of density fluctuations, and in \cite{MVY2} we studied the distribution of scalar field values after tunneling and identified the attractor region around an inflection point that leads to inflation. 
The purpose of the present paper is to extend some of these results to the case of a multi-dimensional landscape. This paper is organized as follows. In the next Section we review some relevant properties of random Gaussian landscape models. In Sec.~\ref{sec:multi} we study analytically the field dynamics during the curvature-dominated period after tunneling and during the subsequent slow-roll inflation. We identify an attractor region of initial conditions after tunneling where slow-roll inflation can be realized. In contrast to Refs.~\cite{Marsh,Dias:2016slx,Freivogel,Marsh2}, we find that the dynamics is effectively one-dimensional, without steepening, once the slow-roll conditions are satisfied. We explain the difference between our results and those of the DBM approach in Sec.~\ref{sec:DBM}. In Sec.~\ref{sec:tunneling} we use an approximate analytic method to study instanton solutions and determine the initial conditions after tunneling. We find that the initial values of the fields tend to concentrate along the flat direction in the landscape. We verify this analytic treatment numerically in a simple model. In most of the paper we focus on inflation near inflection points. Analysis of saddle-point inflation yields very similar results, as we briefly discuss in Sec.~\ref{sec:saddle}. Finally, our conclusions are summarized and discussed in Sec.~\ref{sec:conclusion}. \section{Random Gaussian landscape}\label{sec:landscape} We consider slow-roll inflation in an isotropic $N$-dimensional random Gaussian landscape with a potential $U({\bm \phi})$. The landscape is fully characterized by the average value ${\bar U}\equiv\langle U (\boldsymbol \phi) \rangle$ and the correlation function \bel{Correlation} \langle U (\boldsymbol \phi_1) U(\boldsymbol \phi_2)\rangle - \bar{U}^2 = F (|\boldsymbol \phi_1 - \boldsymbol \phi_2|)=\frac1{(2\pi)^N}\int d^N {\bm k}\,P(k) e^{i{\bf k}\cdot (\boldsymbol\phi_1-\boldsymbol\phi_2)}~. 
\end{equation} Here, $k \equiv |\boldsymbol k|$ and angular brackets indicate ensemble averages. Different moments of the spectral function $P(k)$ can be defined as \bel{sigmaDef} \sigma_{n}^2= \frac1{(2\pi)^N}\int d^N {\bm k} k^{2n} P(k)~. \end{equation} We assume that the potential $U({\boldsymbol \phi})$ has a characteristic scale $U_0$ and a correlation length $\Lambda$ in the field space, with the correlation function $F (|\boldsymbol \phi_1 - \boldsymbol \phi_2|)$ rapidly decaying at $|\boldsymbol \phi_1 - \boldsymbol \phi_2| \gg \Lambda$. We assume also that the ensemble average $\bar{U}$ is positive and is of the same order as $2\sqrt{N} U_0$, since otherwise most of the local minima of $U({\boldsymbol \phi})$ would have a negative energy density. However, we do not explicitly use this assumption in the paper. In this paper we focus on the case of a small-field landscape with $\Lambda\ll M_{\rm pl}$ and $U_0\ll M_{\rm pl}^4$, where $M_{\rm pl}$ is the reduced Planck mass ($M_{\rm pl}\simeq 2.4 \times 10^{18} \ {\rm GeV}$). Hereafter, we use reduced Planck units ($M_{\rm pl} \equiv 1$) and assume $\Lambda \ll 1, ~ U_0\ll 1$. As a reference, we may consider a Gaussian-type correlation function defined as \begin{equation} F(\phi)=U_0^2 e^{-\phi^2/2\Lambda^2}. \label{correlation function} \end{equation} In this case, the spectral function $P(k)$ is \begin{equation} \label{pk} P(k)= U_0^2 (2\pi\Lambda^2)^{N/2} e^{-\Lambda^2 k^2/2} \end{equation} and the moments are given by \beq \label{sigmaGauss} \sigma_{n}^2 = \frac{2^n \Gamma \left( n + \frac{N}{2} \right) }{\Gamma \left( \frac{N}{2} \right)} \frac{U_0^2}{\Lambda^{2n}}. \end{eqnarray} From this example, we expect $\sigma_n^2 \sim U_0^2 (N / \Lambda^2)^n$ in the large-$N$ limit for a generic correlation function. 
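The closed-form moments just derived can be recovered by direct numerical integration, reducing the $N$-dimensional integral to its radial part with the area $2\pi^{N/2}/\Gamma(N/2)$ of the unit $(N-1)$-sphere. The sketch below uses illustrative test values $N=6$, $U_0=1$, $\Lambda=0.5$ (not landscape parameters) and a simple midpoint quadrature:

```python
import math

# Direct numerical check of the Gaussian-correlator moments.
# N, U0 and Lam below are illustrative test values, not landscape parameters.
N, U0, Lam = 6, 1.0, 0.5

def sigma2_exact(n):
    return 2**n * math.gamma(n + N/2) / math.gamma(N/2) * U0**2 / Lam**(2*n)

def sigma2_numeric(n, kmax=40.0, steps=200000):
    # angular part: area of the unit (N-1)-sphere, 2 pi^{N/2} / Gamma(N/2)
    pref = (U0**2 * (2*math.pi*Lam**2)**(N/2) / (2*math.pi)**N
            * 2 * math.pi**(N/2) / math.gamma(N/2))
    h = kmax / steps
    total = 0.0
    for i in range(steps):               # midpoint rule for the radial integral
        k = (i + 0.5)*h
        total += k**(2*n + N - 1) * math.exp(-(Lam*k)**2 / 2)
    return pref * total * h

for n in (0, 1, 2):
    print(n, sigma2_numeric(n), sigma2_exact(n))   # pairs should agree
```

For these values one finds $\sigma_0^2=1$ and $\sigma_1^2=24$, consistent with the $\sigma_n^2 \sim U_0^2 (N/\Lambda^2)^n$ scaling quoted above.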
Although this estimate is valid in general, it should be noted that the Gaussian correlation function (\ref{correlation function}) is a very special case, in which the statistics of the potential minima is rather different from that for a generic correlator \cite{BrayDean}. In this paper, we do not use this correlation function but consider a generic case. We comment on the difference in Sec.~\ref{sec:DBM}. \subsection{Inflation in a $1D$ landscape} Here we review some results regarding one-dimensional random landscapes, which will be useful for our discussion later on. The necessary conditions for slow-roll inflation in a one-dimensional inflaton potential $U(\phi)$ are \beq \epsilon_s, \eta_s\ll 1, \label{1Dslowroll} \end{eqnarray} where \beq &&\epsilon_s = \frac{1}{2} \left( \frac{U'}{U} \right)^2, \label{epsilon} \\ &&\eta_s = \frac{U''}{U}. \label{eta} \end{eqnarray} The typical values of the slow-roll parameters at a randomly chosen point in the landscape are $\epsilon_s \sim \eta_s \sim \Lambda^{-2}$. In the small-field case, $\Lambda\ll 1$, so typically $\epsilon_s , \eta_s \gg 1$ and inflation can occur only in rare regions where $U'$ and $U''$ are unusually small. This is most likely to happen in the vicinity of an inflection point ($U'' = 0$) or of a local maximum of the potential ($U' = 0$). On the other hand, the third derivative of $U$ in such regions need not be particularly small and will typically be of the order $U''' \sim U/\Lambda^3$. The range of the inflaton field where the slow-roll conditions hold can be estimated from $|U'''| \Delta\phi \sim U$, or $\Delta\phi \sim \Lambda^3$. This is much smaller than the correlation length $\Lambda$, and thus $U\approx {\rm const}$ within this range. Furthermore, since $\Delta\phi\ll\Lambda$, the potential is well approximated by the first few terms in the Taylor expansion. 
In the case of inflection-point inflation, we can write% \footnote{ For consistency of notation with Refs.~\cite{MVY1, MVY2}, we use the notation $\eta$ for $U'(0)$. Note that it should not be confused with the slow-roll parameter $\eta_s$ in \eq{eta}. } \beq U( \phi) = U + \eta \phi + \frac{1}{6} \rho \phi^3 \end{eqnarray} with $\eta\rho >0$. Here, $\eta = U'(0)$, $\rho = U'''(0)$ and the inflection point is at $\phi=0$. Without loss of generality we can set $\eta,\rho <0$.\footnote{For $\eta\rho <0$ the potential has a local maximum and a local minimum at $\phi = \pm (-2\eta/\rho)^{1/2}$. Saddle-point inflation is then possible at the local maximum. We discuss this case in Sec.~\ref{sec:saddle}.} The magnitude of density perturbations $\Delta_R^2$ and the spectral index $n_s$ are given by \cite{Baumann} \beq &&\Delta_R^2 = \frac{1}{12 \pi^2} \frac{U^3}{\eta^2} = \frac{N_{\rm max}^4}{48 \pi^6} \frac{\rho^2}{U}, \\ &&n_s \simeq 1 - \frac{4 \pi}{N_{\rm max}} \cot \left( \frac{\pi N_e^{\rm (CMB)}}{N_{\rm max}} \right), \end{eqnarray} where $N_e^{\rm (CMB)}$ ($\simeq 50-60$) is the e-folding number at which the CMB scale leaves the horizon. We also defined the maximal e-folding number as \beq N_{\rm max} \approx - \int_{- \infty}^\infty {\rm d} \phi \frac{U(\phi)}{U'(\phi)} \approx \pi \sqrt{2} \frac{U}{\sqrt{\eta \rho}}. \label{Nmax} \end{eqnarray} The observed value of the spectral index ($n_s \simeq 0.97$) is obtained when $N_{\rm max} \approx 120$. The magnitude of density perturbations can be consistent with the observed value ($\Delta_R^2 \sim 4 \times 10^{-9}$) if we choose $U\sim 10^{-12} \Lambda^6$. Apart from the conditions (\ref{1Dslowroll}), slow-roll inflation requires appropriate initial conditions for the field $\phi$. These conditions are determined by the instanton solution describing the bubble nucleation. Inflation can occur only if the initial value of $\phi$ after the tunneling is sufficiently close to the inflection point.
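As a quick check of the numbers quoted above for the spectral index, one can evaluate the $n_s$ formula directly; a minimal sketch (assuming NumPy; the function name is ours):

```python
import numpy as np

def ns_inflection(N_max, N_cmb):
    # n_s ~= 1 - (4 pi / N_max) cot(pi N_cmb / N_max), with cot = 1/tan
    return 1 - (4 * np.pi / N_max) / np.tan(np.pi * N_cmb / N_max)

# N_max ~ 120 with the CMB scale leaving the horizon ~50 e-folds
# before the end of inflation indeed gives n_s close to 0.97:
ns = ns_inflection(120, 50)
```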
It was shown in \cite{MVY2} that the corresponding attractor range of $\phi$ is \beq -\frac{2\kappa U}{3\abs{\rho}} < \phi \lesssim \frac{U}{3\abs{\rho}}, \label{1Dattractor} \end{eqnarray} where $\kappa \approx 24.8$ and $U/|\rho| \sim \Lambda^3$. Furthermore, it was also shown in \cite{MVY2} that instanton solutions describing tunneling to a vicinity of an inflection point do exist if the potential at that point is sufficiently flat. However, the ensemble distribution of the initial values of $\phi$ is rather broad, with a width $\sim (0.1 - 0.4)\Lambda$. The distribution is more or less flat in this range, so tunnelings to the small attractor region have probability $\sim \kappa\Lambda^2$. We will show in Sec.~\ref{sec:tunneling} that such tunnelings require a thin-wall bubble, with the potential $U$ at the inflection point nearly degenerate with that at the false vacuum. \subsection{Taylor expansion around an inflection point} In a multi-dimensional landscape, slow-roll inflation still requires a sufficiently flat region of the potential. The corresponding conditions can be written as (e.g., \cite{Yang}) \beq &&\epsilon_s = \frac{\partial_i U \partial_i U}{2 U^2} \ll 1, \\ &&\eta_s = \sqrt{ \frac{\partial_i U ( \partial_i \partial_j U ) ( \partial_j \partial_k U ) \partial_k U }{\abs{\partial_i U}^2 U^2} } \ll 1, \end{eqnarray} where we use Einstein's summation convention.% \footnote{ These conditions are sufficient for slow-roll inflation. We disregard the special cases where inflation can occur with weaker conditions. } As in the $1D$ case, one can expect that inflation occurs in a small patch $|\Delta\boldsymbol \phi | \ll \Lambda$; then the potential is well approximated by a cubic expansion \bel{potential} U({\bm \phi}) = U + \eta_i \phi_i + \frac{1}{2} \zeta_{ij} \phi_i \phi_j + \frac{1}{6}\rho_{ijk} \phi_i \phi_j \phi_k~, \end{equation} where $i,j,k = 1,2, ... , N$. These expectations will be justified {\it a posteriori}.
The expansion coefficients in Eq.~(\ref{potential}) are $\eta_i \equiv \partial U / \partial \phi_i$, $\zeta_{ij} \equiv \partial^2 U / \partial \phi_i \partial \phi_j$, and $\rho_{ijk} \equiv \partial^3 U / \partial \phi_i \partial \phi_j \partial \phi_k$, with all derivatives taken at $\phi_i =0$. Note that $\zeta_{ij}$ and $\rho_{ijk}$ are symmetric under any interchange of their indices. For example, the coefficient of $\phi_1 \phi_2^2$ is $(\rho_{122} + \rho_{212} + \rho_{221}) / 6 = \rho_{122}/2$. The typical values of the expansion coefficients in (\ref{potential}) are $\eta_i\sim U/\Lambda$, $\zeta_{ij}\sim U/\Lambda^2$, $\rho_{ijk}\sim U/\Lambda^3$. Their probability distribution has been found in Refs.~\cite{Fyodorov,BrayDean,MVY1}. Multi-field hilltop inflation occurs near a stationary point where $\nabla U=0$. The Hessian at this point must have one or several small negative eigenvalues, $|m_i^2| \ll U$, with the other eigenvalues being positive and typically having their generic values. A multi-field analogue of inflection-point inflation occurs near a point where the gradient of the potential is small and one of the Hessian eigenvalues is zero. The latter condition can be stated as ${\rm det} \zeta = 0$. We can choose the basis in the $\phi$-space so that the matrix $\zeta_{ij}$ is diagonal, \beq \zeta_{ij} = m_i^2 \delta_{ij} , \end{eqnarray} and the zero eigenvalue corresponds to $i = 1$: \beq m_1 =0. \end{eqnarray} The other eigenvalues $m_a^2$ will typically have generic values, taken from a distribution that we shall discuss in the next subsection. A typical eigenvalue is of the order $m_a^2 \sim \sqrt{N} U_0 / \Lambda^2$, while the smallest nonzero eigenvalues are $m_a^2 \sim U_0 / (\sqrt{N} \Lambda^2)$. If one of these eigenvalues were negative, it would trigger a tachyonic instability and a long period of inflation would be impossible.
Hence we assume that the Hessian $\zeta_{ij}$ has all eigenvalues positive, except the single vanishing one. Here and hereafter, we use the notation that the subscript $a$ runs from $2$ to $N$ while the subscripts $i,j,k$ run from $1$ to $N$. The condition ${\rm det} \zeta = 0$ specifies a codimension-1 surface in the field space. When $|\nabla U|$ is small, we can find a nearby point on this surface where the gradient $\nabla U$ is directed along the 1-axis: \beq \eta_i = \eta \delta_{i1}. \end{eqnarray} We shall refer to this point as the inflection point. The potential near this point has the form of a groove running in the $\phi_1$-direction between the hills that surround it in the orthogonal $\phi_a$-directions. In most of this paper we are going to focus on inflection-point inflation. Saddle-point inflation can be analyzed in a very similar way; we shall discuss it briefly in Sec.~\ref{sec:saddle}. \subsection{Hessian eigenvalue distribution} Of particular interest is the distribution for the eigenvalues $\lambda_i =m_i^2$ of the Hessian matrix $\zeta_{ij}$. This is given by the `semicircle law', \beq \rho(\lambda)=\frac{2}{\pi b^2 N}\left( b^2 N -(\lambda-{\bar\lambda})^2\right)^{1/2} . \label{Wigner} \end{eqnarray} Here, $\rho(\lambda) d\lambda$ is the fraction of eigenvalues in the interval $d\lambda$, ${\bar\lambda} =N^{-1}\sum_i \lambda_i$ is the average eigenvalue, \beq b^2=\frac{4\sigma_2^2}{N(N+2)}, \label{b} \end{eqnarray} and $\sigma_2^2$ is the second moment of the correlation function, as defined in (\ref{sigmaDef}). Eq.~(\ref{Wigner}) applies in the range $\abs{{\lambda} -{\bar\lambda}}\leq b \sqrt{N}$, with $\rho(\lambda) =0$ outside this range. The quantity $b\sim U_0/\Lambda^2$ is the characteristic dispersion of the matrix elements $\zeta_{ij}$; it is independent of $N$ in the large-$N$ limit. We note, however, that the width of the distribution (\ref{Wigner}) is greater than $b$ by a large factor $\sqrt{N}$. This is due to the `eigenvalue repulsion' phenomenon.
Eq.~(\ref{Wigner}) with ${\bar\lambda}=0$ was derived by Wigner \cite{Wigner} as the eigenvalue distribution for a large random matrix. In the case of a random Gaussian field, Bray and Dean \cite{BrayDean} showed that the Hessian eigenvalue distribution at stationary points is given by Eq.~(\ref{Wigner}) with the average eigenvalue ${\bar\lambda}$ related to the value of the potential, \beq {\bar\lambda}=-\frac{\sigma_1^2}{N\sigma_0^2} \left(U-{\bar U}\right). \label{barlambda} \end{eqnarray} For $U<{\bar U}$ the distribution is shifted towards positive values, and the entire distribution shifts to the positive domain when $U$ drops below a certain critical value (defined by the condition ${\bar\lambda}=b\sqrt{N}$). In this range of $U$, almost all of the stationary points of the potential are local minima. Similarly, there is a positive critical value of $U$, above which almost all of the stationary points are local maxima. The semicircle law (\ref{Wigner}) can also be used to describe the conditional eigenvalue distribution, under the requirement that all eigenvalues are greater than some $\lambda_*$ \cite{BrayDean}. In this case, ${\bar\lambda}=\lambda_* + b\sqrt{N}$. In particular, at an inflection point, where one eigenvalue is zero and the rest are positive, the distribution is given by (\ref{Wigner}) with ${\bar\lambda}=b\sqrt{N}$. The semicircle law applies in the limit $N\to\infty$, but for a finite $N$ it becomes inaccurate in small regions near the edges of the distribution. Such edge corrections are important for the estimate of the smallest nonzero Hessian eigenvalue $\lambda_{\rm min}$ at an inflection point. It can be shown that \beq \lambda_{\rm min}\sim N^{-1/2} b \label{lambdamin} \end{eqnarray} and that the number of such eigenvalues is $\sim N^{1/4}$. (Details of this analysis will be published elsewhere \cite{Masaki}.) Thus, for $N\sim 100$ we can expect to have a few eigenvalues of magnitude $\sim 0.1\, U_0/\Lambda^2$.
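The semicircle law and the $\sqrt{N}$ stretching of the spectrum are easy to see numerically. The sketch below (assuming NumPy; the normalization is the standard GOE one, for illustration only, rather than the $\sigma_2$-based normalization of the text) draws a large random symmetric matrix and checks that the extreme eigenvalues sit at the semicircle edge:

```python
import numpy as np

rng = np.random.default_rng(0)
N, s = 2000, 1.0
M = rng.normal(0.0, s, (N, N))
Z = (M + M.T) / np.sqrt(2)     # random symmetric matrix; off-diagonal std = s
lam = np.linalg.eigvalsh(Z)
edge = 2 * s * np.sqrt(N)      # semicircle edge for this normalization
```

The empirical spectrum fills $[-2s\sqrt{N}, 2s\sqrt{N}]$: eigenvalue repulsion stretches the support by a factor $\sqrt{N}$ relative to the dispersion $s$ of the individual matrix elements.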
\section{Multi-field inflection-point inflation}\label{sec:multi} After tunneling, the bubble interior has the geometry of an open FRW universe, \beq ds^2 = dt^2 -a^2(t)\left(d\chi^2+\sinh^2\chi d\Omega^2\right), \end{eqnarray} with spatially homogeneous fields $\phi_i(t)$. The evolution of $a$ and $\phi_i$ is described by the equations \beq \frac{{\dot a}^2}{a^2}=\frac{1}{3}\left(U({\bm \phi})+\sum_i \frac{{\dot\phi}_i^2}{2}\right)+\frac{1}{a^2}, \label{Friedmann} \end{eqnarray} \beq {\ddot\phi}_i +3\frac{\dot a}{a}{\dot\phi}_i + \frac{\partial U({\bm \phi})}{\partial \phi_i} =0, \label{phiU} \end{eqnarray} where dots represent derivatives with respect to $t$. The initial conditions at $t=0$ are given by \beq a(0)=0,~~~{\dot a}(0)=1,~~~ \phi_i(0)=\phi_{i,0},~~~ {\dot\phi}_i(0)=0, \end{eqnarray} where $\phi_{i,0}$ is determined from the instanton solution that describes the tunneling. During a small-field inflation the potential (\ref{potential}) is nearly constant, $U({\bm \phi}) \simeq U ={\rm const}$, and the Friedmann equation (\ref{Friedmann}) can be approximated as \beq {\dot a}^2 = 1 + H^2 a^2, \end{eqnarray} where $H^2 = U /3$. The solution is de Sitter space, \beq a(t) = H^{-1} \sinh (H t), \label{at} \end{eqnarray} which gives $\dot{a}/a = H \coth (H t)$. This shows that inflation starts at $t \sim H^{-1}$, after a brief curvature-dominated period. \subsection{Starting with $\phi_a \approx 0$}\label{sec:phia=0} Let us first consider the case where the initial values ${\bm \phi}_0$ are such that $\partial U / \partial \phi_a ({\bm \phi}_0)= 0$, while $\partial U / \partial \phi_1 ({\bm \phi}_0)$ is nonzero. Then the field ${\bm \phi}$ starts rolling in the $\phi_1$-direction with $\phi_a \approx 0$, and we can expect inflation to be essentially one-dimensional, at least initially.
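As a sanity check of the background used below, the solution (\ref{at}) can be verified by integrating the (positive-root) Friedmann equation $\dot a = \sqrt{1+H^2a^2}$ directly; a minimal sketch, assuming SciPy is available:

```python
import numpy as np
from scipy.integrate import solve_ivp

H = 0.7
sol = solve_ivp(lambda t, a: [np.sqrt(1 + H**2 * a[0]**2)], (0, 3), [0.0],
                rtol=1e-10, atol=1e-12, dense_output=True)
t = np.linspace(0.1, 3.0, 20)
a_num = sol.sol(t)[0]
a_exact = np.sinh(H * t) / H   # de Sitter solution, with a ~ t at early times
```

At $t \ll H^{-1}$ the numerical solution tracks $a \approx t$ (curvature domination), and it matches $H^{-1}\sinh(Ht)$ throughout.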
Neglecting $\phi_a$ and using the scale factor (\ref{at}) in Eq.~(\ref{phiU}), we obtain the following equation for $\phi_1(t)$: \beq {\ddot\phi}_1 + 3{H} \coth ({H} t) {\dot\phi}_1 + \rho \phi_1^2 /2 + \eta = 0, \label{phieq1} \end{eqnarray} where we have introduced the notation $\rho_{111}\equiv \rho$. Hereafter we assume $\eta, \rho < 0$ without loss of generality. As we mentioned in Sec.~2.1, the analysis of one-dimensional inflection-point inflation in Ref.~\cite{MVY2} has shown that $\phi_1$ does not overshoot the slow-roll region if its initial value is in the range (\ref{1Dattractor}), \beq -2\kappa H^2 / \abs{\rho} <\phi_{1,0} \lesssim H^2/ \abs{\rho}, \label{phi1range} \end{eqnarray} where $\kappa \approx 24.8$. It was also shown in \cite{MVY2} that in most of this range the last term in Eq.~(\ref{phieq1}) has a negligible effect on the dynamics. For later convenience, we numerically calculated $\abs{\dot{\phi}_1/ \phi_1}$ as a function of time for $\eta=0$. The resulting plot in Fig.~\ref{fig:adiabatic cond} shows that this quantity does not exceed $3 H$. \begin{figure}[t] \centering \includegraphics[width=2.5in]{figure/adiabatic} \caption{ Plot of $\abs{\dot{\phi}_1/ \phi_1}$ as a function of time in units of $H^{-1}$. We assume $\eta = 0$. } \label{fig:adiabatic cond} \end{figure} The fields $\phi_a$ with $a=2, ..., N$ are affected by the dynamics of $\phi_1$ because of the interaction terms. The most important contribution comes from the term $\frac{1}{2}\rho_{11a} \phi_1^2 (t) \phi_a$ in the potential, which introduces a force term in the field equation for $\phi_a$: \beq \ddot{\phi}_a + 3 H \coth (H t) \dot{\phi}_a + m_a^2 \phi_a + \frac{1}{2} \rho_{11a} \phi_1^2 (t) = 0. \end{eqnarray} The effect of this term is to shift the minimum of the potential in the orthogonal directions to \beq {\bar\phi}_a (t) = - \frac{\rho_{11a} \phi_1^2 (t)}{2 m_a^2}.
\end{eqnarray} The typical rate of variation of ${\bar\phi}_a (t)$ is $\sim\abs{\dot{\bar\phi}_a / {\bar\phi}_a } = 2 \abs{\dot{\phi}_1/ \phi_1}$. From Fig.~\ref{fig:adiabatic cond}, we find that this rate is $\lesssim 6H$. On the other hand, the oscillation rate of $\phi_a$ is $m_a \gtrsim N^{-1/4} \sqrt{U_0}/\Lambda$, where we have used the estimate (\ref{lambdamin}) for the smallest nonzero eigenvalue of the Hessian. For $\Lambda \ll \sqrt{U_0 / H^2}/(6N^{1/4})$ (the right-hand side is $\gtrsim 0.1$ for $N \sim 100$), we have $m_a\gg 6H$, where we have used $H^2 \simeq U / 3 \lesssim U_0/3$. This means that the minimum of $\phi_a$ changes adiabatically, and thus oscillations of $\phi_a$ are not excited by the interaction. We can then approximate $\phi_a \simeq {\bar\phi}_a (t)$ and obtain \beq \abs{ \frac{\phi_a}{\phi_1}} \simeq \abs{\frac{\rho_{11a} \phi_1 (t)}{2 m_a^2} } \lesssim {\kappa}N^{1/2} \Lambda^2, \end{eqnarray} where we have used $\abs{\phi_1 (t)} \lesssim {\kappa} H^2 / \abs{\rho}$. For $\Lambda\ll 0.1$ this gives $ \abs{{\phi_a}/{\phi_1}}\ll 1$, so the inflaton trajectory is approximately a straight line in the field space. For moderately large values of $\Lambda \sim 0.1$, some low-mass modes $\phi_a$ may be excited and the field trajectory may be significantly curved. However, we shall see in the next subsection that oscillations of such modes are rapidly damped and the field trajectory becomes straight during the slow roll, unless $\Lambda\gtrsim N^{-1/4} \sim 0.3$. \subsection{Generic initial conditions} \label{sec:generic} Now let us relax the assumption that the gradient of the potential is aligned with the Hessian eigenvector with zero eigenvalue. In this case the gradient has nonzero components in directions orthogonal to $\phi_1$. Let us call these components $\eta_a$. For sufficiently small $\eta_a$, if we move the origin of $\phi_{a}$ to $-\eta_a /m_a^2$, the gradient becomes aligned with the $\phi_1$-direction.
Therefore, having a misalignment between the gradient and the Hessian eigenvector is equivalent to choosing the fields $\phi_a$ with some displacement from the values that minimize the potential. Hence, we will study the field evolution with $\phi_{a,0}\neq 0$. Now $\phi_a$ will oscillate, and this may cause $\phi_1$ to move fast and ruin the slow-roll conditions. Our goal is to estimate the range of initial conditions that lead to slow-roll inflation. We shall first study the dynamics of $\phi_1$ and $\phi_a$ analytically under some plausible assumptions and then verify the results in a numerical example. Suppose $\abs{\phi_{1,0}} \ll \abs{\phi_{a,0}}\ll\Lambda$. The dynamics of $\phi_a$ are then mostly driven by their mass terms, \beq \ddot{\phi}_a + 3 H \coth \left( H t \right) \dot{\phi}_a + m_a^2 \phi_a = 0. \label{phiaeq} \end{eqnarray} Focusing first on the curvature-dominated period, $t\ll H^{-1}$, we can approximate $H \coth \left( H t \right)$ as $1/ t$. Then the solution of Eq.~(\ref{phiaeq}) is \beq \phi_a (t) = 2 \phi_{a,0} \frac{J_1 (m_a t)}{ m_a t}, \label{phia} \end{eqnarray} where $J_1 (z)$ is the Bessel function of the first kind. The field equation for $\phi_1(t)$ can be written as \beq {\ddot\phi}_1 +3\frac{\dot a}{a} {\dot\phi}_1 + \eta + \frac{1}{2} \rho_{1aa}\phi_a^2 =0, \label{phi1eq} \end{eqnarray} where we neglected $\phi_1^2$ compared to $\phi_a^2$. We also neglected the term proportional to $\rho_{11a}$, because $\phi_a (t)$ oscillates with a period much shorter than the typical time scale of $\phi_1$, so this term averages out to zero. With $y(t) \equiv {\dot\phi}_1(t)$, Eq.~(\ref{phi1eq}) takes the form \beq \frac{d}{dt}(ya^3) = -\eta a^3 - \frac{1}{2} \rho_{1aa}\phi_a^2 a^3, \end{eqnarray} and the solution is \beq y(t) = -\eta a^{-3}(t)\int_0^t dt' a^3(t') - \frac{1}{2} \rho_{1aa} a^{-3}(t)\int_0^t dt'a^3(t')\phi_a^2(t').
\label{ygeneral} \end{eqnarray} The first term in (\ref{ygeneral}) is $\approx -\eta t/4$ for $t\ll H^{-1}$ and $-\eta/3H$ for $t\gg H^{-1}$. We first disregard this term and take it into account later. During the curvature-dominated period, we have $a(t)\approx t$ and \beq y(t) = -\frac{2\rho_{1aa} \phi_{a,0}^2}{m_a^2 t^3} \int_0^t dt' t' J_1^2(m_a t') = -\frac{\rho_{1aa} \phi_{a,0}^2}{m_a^2 t} \left[ J_1^2(m_a t) -J_0(m_a t)J_2(m_a t)\right], \label{y2} \end{eqnarray} where in the last step we used Eq.~(5.54(2)) in Ref.~\cite{GR}. $\phi_1(t)$ can now be found from \beq \phi_1(t) = \phi_{1,0} + \int_0^t dt' y(t'). \end{eqnarray} This integral is calculated in Appendix~\ref{Bessel}, with the result \beq \phi_1(t)=\phi_{1,0}-\frac{\rho_{1aa}\phi_{a,0}^2}{2m_a^2} \left[1-J_0^2(m_a t) -2J_1^2(m_a t) +J_0(m_a t)J_2(m_a t) \right]. \label{Bessel result} \end{eqnarray} Using the asymptotic forms of Bessel functions at small and large values of the argument, we find \beq \phi_1(t) \approx \phi_{1,0} -\frac{1}{16} \rho_{1aa}\phi_{a,0}^2 t^2 \end{eqnarray} at $t \ll m_a^{-1}$ and \beq \phi_1(t)\approx \phi_{1,0} -\frac{\rho_{1aa}\phi_{a,0}^2}{2m_a^2} \left(1-\frac{4}{\pi m_a t}\right) \label{phi1} \end{eqnarray} at $t\gg m_a^{-1}$. The effect of the linear term in the potential for $\phi_1$ (i.e., of the first term in \eq{ygeneral}) can be trivially taken into account by replacing \beq \phi_1(t) \to \phi_1(t) - \frac{\eta t^2}{8}. \end{eqnarray} Eq.~(\ref{phi1}) shows that the effect of the oscillating fields $\phi_a$ is to shift $\phi_1$ by the amount \beq {\cal S} = - \sum_a \frac{\rho_{1aa}\phi_{a,0}^2}{2m_a^2}, \label{shift} \end{eqnarray} where we have now written the summation over $a$ ($= 2,3, \dots, N$) explicitly. It also follows from (\ref{phi1}) that interactions with $\phi_a$ become unimportant at $t\gg m_a^{-1}$. During the curvature-dominated period, the oscillation amplitude of $\phi_a$ decreases as $t^{-3/2}$.
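The Bessel-function manipulations above are easy to verify numerically. The sketch below (assuming SciPy is available; the parameter values are arbitrary) checks the integral identity used in Eq.~(\ref{y2}) and the large-argument limit of the bracket in the result for $\phi_1(t)$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, j1, jv

# Identity used in Eq. (y2):
#   int_0^t t' J1(m t')^2 dt' = (t^2/2) [J1(m t)^2 - J0(m t) J2(m t)]
m, t = 3.0, 2.0
lhs, _ = quad(lambda x: x * j1(m * x)**2, 0, t)
rhs = t**2 / 2 * (j1(m * t)**2 - j0(m * t) * jv(2, m * t))

# Large-argument behavior of the bracket multiplying the shift:
# 1 - J0^2 - 2 J1^2 + J0 J2 -> 1 - 4/(pi x), giving Eq. (phi1)
x = 200.0
bracket = 1 - j0(x)**2 - 2 * j1(x)**2 + j0(x) * jv(2, x)
```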
This period may be followed by a period of slow-roll inflation, when the Hubble parameter is $H\approx (U/3)^{1/2} = {\rm const}$. The field equation for $\phi_a$ is then \beq \ddot{\phi}_a + 3 H \dot{\phi}_a + m_a^2 \phi_a = 0, \end{eqnarray} and its solution is \begin{equation} \phi_a(t) = C e^{-3H t/2} {\rm cos} \left( m_a \sqrt{1- \zeta_a^2} t +\psi \right) ~, \end{equation} where $\zeta_a \equiv {3 H}/{2 m_a} \lesssim N^{1/4} \Lambda$. The constant amplitude $C$ and phase $\psi$ can be found by matching to the curvature-dominated regime. For $\zeta_a < 1$ the fields $\phi_a$ rapidly decrease, becoming increasingly unimportant. The problem then reduces to that of a $1D$ landscape, discussed in Ref.~\cite{MVY2} and reviewed in Sec.~2.1. We expect the condition $\zeta_a < 1$ to be satisfied, unless $\Lambda\gtrsim N^{-1/4} \sim 0.3$. Since the inflationary dynamics is essentially one-dimensional, one can expect that the probability distribution for the maximal number of e-folds $N_{\rm max}$ is the same as in a one-dimensional landscape. A detailed calculation in Appendix~\ref{sec:N_e} shows that this is indeed the case, and the result is given by \beq P(N_{\rm max})\propto N_{\rm max}^{-3}. \end{eqnarray} For a randomly selected inflection point in the landscape, the probability for $N_{\rm max}$ to be in a small range $dN_{\rm max}$ is $P(N_{\rm max}) dN_{\rm max}$. This conclusion is in agreement with a more heuristic calculation by Yang in Ref.~\cite{Yang}. \subsection{The attractor region}\label{sec:attractor} Based on the above analysis, we can expect that slow-roll inflation will occur if the shifted field \beq \phi_* \equiv \phi_{1,0} - \sum_a \frac{\rho_{1aa}\phi_{a,0}^2}{2m_a^2} \label{phi_*} \end{eqnarray} is in the attractor range (\ref{phi1range}), $-2\kappa H^2 / \abs{\rho} <\phi_* \lesssim H^2/ \abs{\rho}$. 
Thus, even if we start with relatively large values of $\phi_a$ (though still $\phi_{a,0}\ll\Lambda$), we may still have a range $\Delta\phi_{1,0} \sim 2\kappa H^2/\abs{\rho}$ that gives enough inflation, but now this range is centered at \beq \phi_{1,0} \sim \sum_a \frac{\rho_{1aa}\phi_{a,0}^2}{2m_a^2} . \end{eqnarray} \begin{figure}[t] \centering \includegraphics[width=2.5in]{figure/attractor1} \qquad \includegraphics[width=2.5in]{figure/attractor2} \caption{ The analytic attractor region of slow-roll inflation for the two-field model indicated in the text is shown by grey shading. The orange dots indicate the attractor region found numerically for the same model. The agreement between the analytic and numerical results becomes very close after inclusion of an adjustment factor of 0.7, as indicated by dashed red lines. Left and right panels correspond to $\rho_{1aa}<0$ and $\rho_{1aa}>0$, respectively. We show examples of two field trajectories as yellow lines. } \label{fig:attractor} \end{figure} As a test of this analysis, we numerically solved the field equations for $\phi_1$ and one additional field $\phi_a$: \beq &&\ddot \phi_1+3 H \dot{\phi}_1 + \eta + \frac{1}{2} \rho \phi_1^2 + \rho_{11a} \phi_1 \phi_a + \frac{1}{2} \rho_{1aa} \phi_a^2 = 0. \label{eq1} \\ &&\ddot \phi_a+3 H \dot{\phi}_a + m_a^2 \phi_a + \rho_{1aa} \phi_1 \phi_a + \frac{1}{2} \rho_{11a} \phi_1^2 = 0. \label{eq2} \end{eqnarray} We assumed $\rho_{11a} = 0$ and $\eta = 0$ for simplicity and took $\rho = - \abs{\rho_{1aa}} = - U_0 / \Lambda^3$, $m_a = U_0 / \Lambda^2$, and $\Lambda = 0.01$ in this example. We show two examples of trajectories in the field space as yellow lines in Fig.~\ref{fig:attractor}, where we used $\rho_{1aa} = U_0 / \Lambda^3$ ($-U_0/ \Lambda^3$) and the initial values $\phi_{1,0}/(2 H^2 / \rho) = -45$ $(+15)$ in the left (right) panel. We used $\phi_{a,0}/(2 H^2 / \rho) = 800$ in both panels.
In both examples $|\phi_{a,0}|$ is much greater than $|\phi_{1,0}|$, but after a few oscillations the oscillation amplitude of $\phi_a$ is strongly damped and the field $\phi_1$ reaches the attractor range (\ref{phi1range}), so that slow-roll inflation can begin. We chose the initial values $\phi_{1,0}$ and $\phi_{a,0}$ at random and marked the choices that led to slow-roll inflation by orange dots in Fig.~\ref{fig:attractor}.% \footnote{ A range of initial values in a two-field cubic potential model has been studied by Blanco-Pillado {\it et al.} in Ref.~\cite{Jose} to determine the values that lead to slow-roll inflation. The main difference from our work is that they explored a range of fields $\sim\Delta\phi_{1,0}$ in both $\phi_1$ and $\phi_a$ directions, while we found that the attractor region extends far beyond this range. } The region outlined by the orange dots is in qualitative agreement with the shaded attractor region that we found analytically. Noting that there is an $\mathcal{O}(1)$ uncertainty in the analytic treatment, we fitted the data by including an $\mathcal{O}(1)$ factor in the second term on the right-hand side of \eq{phi_*}. We find that inclusion of a factor of $0.7$ leads to a remarkably good agreement with the data, as indicated by the red dashed curves. This shows that the width of the attractor range in the $\phi_1$ direction is indeed given by $\Delta \phi_{1,0} \sim 2 \kappa H^2 / \abs{\rho}$. The attractor region can be characterized by the fraction $f$ of volume it occupies in a correlation-length-size region of the landscape, centered at the inflection point. In a $1D$ landscape, this fraction is \beq f \sim \frac{\Delta\phi_{1,0}}{\Lambda} \sim \kappa\Lambda^2, \label{f} \end{eqnarray} where $\Delta\phi_{1,0} \sim \kappa\Lambda^3$ is the size of the $1D$ attractor range in Eq.~(\ref{phi1range}).
In our $2D$ model (\ref{eq1})-(\ref{eq2}), the boundaries of the attractor region are two identical parabolas shifted by $\Delta\phi_{1,0}$. The ratio of the area between these parabolas and the area $\sim \Lambda^2$ of a correlation-length region is still given by Eq.~(\ref{f}). It is not difficult to see that this also holds in the general multi-dimensional case: the attractor fraction of the correlation-length volume ($\sim \Lambda^N$) in an $N$-dimensional random landscape is given by Eq.~(\ref{f}). I.-S. Yang assumed in Ref.~\cite{Yang} that the end points of quantum tunneling are more or less uniformly distributed in the field space, in which case the probability of inflation would be proportional to the volume fraction $f$. Yang offered a heuristic argument that this fraction should decrease exponentially with the number of landscape dimensions $N$. However, our analysis indicates that, surprisingly, $f$ appears to be independent of $N$. Furthermore, we shall see in Sec.~\ref{sec:tunneling} that the assumption of a uniform distribution of tunneling points also needs to be reconsidered. \section{Comparison with the DBM model}\label{sec:DBM} We found in the preceding Section that inflation in a small-field random Gaussian landscape is typically one-dimensional. After a brief period of rapid oscillation, the field settles into a narrow slow-roll track. This is in contrast with the picture suggested by the Dyson Brownian Motion (DBM) model \cite{Marsh,Dias:2016slx,Freivogel,Marsh2}, which asserts that inflation in a large landscape is generically multi-field, with a number of fields having small masses and participating in the slow roll. We also found no evidence for the rapid steepening of the potential predicted by the DBM model. The difference between the two approaches can be understood from the following heuristic argument.
As we mentioned in the Introduction, the DBM process rapidly drives the probability distribution for the Hessian matrix $\zeta_{ij}$ to that of the Gaussian Orthogonal Ensemble (GOE), which is given by \cite{Wigner} \beq P(\zeta)\propto \exp(-{\cal Q}) , \label{PQ} \end{eqnarray} where \beq {\cal Q} = \frac{1}{{\tilde b}^2} {\rm Tr} \zeta^2 \label{Q-GOE} \end{eqnarray} with a certain constant ${\tilde b}$. On the other hand, the Hessian distribution for a random Gaussian field (RGF) is given, after integration over $U$, by (\ref{PQ}) with \cite{Fyodorov,BrayDean} \beq {\cal Q} = \frac{1}{b^2} \left[ {\rm Tr} \zeta^2 -\frac{1}{N+2} ({\rm Tr}\zeta)^2\right] \label{Q-RGF} \end{eqnarray} and with $b$ from Eq.~(\ref{b}). We can represent the eigenvalues of the Hessian as $\lambda_i = {\bar\lambda}+\delta\lambda_i$ ($i=1, ... , N$), where ${\bar\lambda}$ is the average eigenvalue and $\sum_i \delta\lambda_i =0$. Then \beq {\rm Tr}\zeta^2 = \sum_i (\delta\lambda_i)^2 + N{\bar\lambda}^2 \label{GOE} \end{eqnarray} and \beq {\rm Tr}\zeta^2-\frac{1}{N+2} ({\rm Tr}\zeta)^2 = \sum_i (\delta\lambda_i)^2 + \frac{2N}{N+2} {\bar\lambda}^2 \approx \sum_i (\delta\lambda_i)^2 + 2 {\bar\lambda}^2 , \label{RGF} \end{eqnarray} where we assumed $N\gg 1$ in the last step. (Note that ${\bar\lambda}=N^{-1}\sum_i \lambda_i$ is the average over a particular realization of the matrix $\zeta$, not the ensemble average.) For a generic point in the landscape, the numbers of positive and negative Hessian eigenvalues are about equal, their distribution is approximately symmetric about $\lambda =0$, and ${\bar\lambda}\approx 0$. In this case, the GOE and RGF eigenvalue distributions are essentially the same and are given by the Wigner semicircle law (\ref{Wigner}) with ${\bar\lambda}=0$. The difference between the GOE and RGF ensembles becomes apparent when we compare the coefficients of the ${\bar\lambda}^2$ terms in Eqs.~(\ref{GOE}) and (\ref{RGF}).
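The algebraic step leading to Eq.~(\ref{RGF}) can be checked mechanically on a random symmetric matrix; a minimal sketch assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50
M = rng.normal(size=(N, N))
Z = (M + M.T) / 2              # stand-in for the Hessian zeta_ij
lam = np.linalg.eigvalsh(Z)
lambar = lam.mean()            # average eigenvalue of this realization
dlam = lam - lambar

# Tr z^2 - (Tr z)^2/(N+2) = sum_i (dlam_i)^2 + 2N/(N+2) * lambar^2
lhs = np.trace(Z @ Z) - np.trace(Z)**2 / (N + 2)
rhs = np.sum(dlam**2) + 2 * N / (N + 2) * lambar**2
```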
In the GOE ensemble, fluctuations of ${\bar\lambda}$ away from zero are very strongly suppressed. The probability of having ${\bar\lambda}$ comparable to the width of the Wigner distribution -- for example, the probability of having all, or almost all Hessian eigenvalues positive -- is \cite{DeanMajumdar} $P\propto \exp(- {\cal O}(1) N^2)$, while for the RGF it is \cite{Fyodorov,BrayDean} $P\propto \exp(- {\cal O}(1) N)$. An inflection-point inflation starts at a rare point in the landscape, where one of the Hessian eigenvalues is very small, while all other eigenvalues are positive.\footnote{The same considerations apply to hilltop inflation, which starts at a point where one or a few eigenvalues are very small and the rest are all positive.} Then the DBM process rapidly drives the field ${\bm \phi}$ towards regions where some Hessian eigenvalues are negative. This ``evolutionary pressure'' is rather strong, because of the strong bias against nonzero ${\bar\lambda}$ in the GOE ensemble. As positive eigenvalues are ``pushed'' to the negative side, they cross zero. Then the corresponding field starts fluctuating, and inflation becomes multi-field. When some eigenvalues become sufficiently negative, the potential steepens and the slow roll ends. As we argued in Section 3, this behavior is not characteristic of inflation in RGF. Thus, DBM does not seem to provide an adequate description of inflation in a random Gaussian landscape. \subsection{Comment on a Gaussian correlation function} Here we comment on a special case where the correlation function has a Gaussian form (\ref{correlation function}). The difference from a generic correlation function is apparent when we consider the Hessian distribution for a fixed value of $U$.
For a generic case, it is given by \beq &&{\cal Q} = \frac{1}{b^2} {\rm Tr} \left( \zeta - \lambda_* (U) {\bm 1} \right)^2 - \frac{1}{Nb^2 } \left( 1 - \frac{c}{N} \right) \left[ {\rm Tr} \left( \zeta - \lambda_* (U) {\bm 1} \right) \right]^2 , \label{QGauss} \\ &&\lambda_*(U) = - \frac{\sigma_1^2 }{N \sigma_0^2} (U - \bar{U}), \end{eqnarray} where $c = {\cal O}(1)$ is determined by the moments. However, for a Gaussian correlation function it is given by \beq {\cal Q} = \frac{1}{b^2} {\rm Tr} \left( \zeta - \lambda_* (U) {\bm 1} \right)^2, \end{eqnarray} because of an accidental cancellation in the coefficient of the second term in (\ref{QGauss}). The former expression is almost identical to \eq{Q-RGF} except for a constant shift, while the latter is equivalent to the GOE form (\ref{Q-GOE}) with a constant shift. Thus, due to this accidental cancellation, the fixed-$U$ Hessian distribution for a Gaussian correlation function is simply a shifted GOE.\footnote{ Note, however, that after integration over $U$ the Hessian distribution is given by Eq.~(\ref{Q-RGF}) for any correlation function, including the Gaussian \cite{Fyodorov,BrayDean}.} Since this is not a generic situation, we consider a generic correlation function instead. \section{Distribution of the initial values} \label{sec:tunneling} \subsection{General formalism}\label{sec:initial} Quantum tunneling that leads to bubble nucleation is described by an $O(4)$-symmetric instanton solution ${\bm \phi} (r)$ of the Euclidean field equations \bel{instantonEoM} \frac{d^2\phi_i}{dr^2}+ \frac3r \frac{d\phi_i}{dr}=\frac{\partial U}{\partial \phi_i}~ \end{equation} with suitable boundary conditions. Here we assume that gravitational effects on the tunneling can be neglected, which is usually the case in a small-field landscape. The initial values of the fields $\phi_i$ after tunneling are set by their values at the center of the instanton, \beq \phi_{i,0} = \phi_i(0).
\end{eqnarray} It can be easily verified that Eq.~(\ref{instantonEoM}) does not change its form under a rescaling \beq &&\phi_i = \Lambda {\bar\phi_i} \label{rescale of phi} \\ &&r = \Lambda U_0^{-1/2} {\bar r}, \label{rescale of r} \\ &&U({\bm \phi})= U_0 {\bar U}({\bar{\bm \phi}}). \label{rescale of U} \end{eqnarray} The rescaled potential ${\bar U}({\bar{\bm \phi}})$ is characterized by the same correlation function as $U({\bm \phi})$, but with $U_0=\Lambda=1$. In the absence of small parameters, one might expect that the ensemble distribution for the initial values ${\bar\phi}_{i,0}$ would spread over a wide range of size $\sim 1$ in the field space. The values of $\phi_i$ would then be spread over a range $\sim \Lambda$. As we already mentioned in Sec.~2.1, this is indeed the case for tunneling to an inflection point in a $1D$ landscape. However, tunneling to a generic minimum of the potential in $1D$ tends to give a value of $\phi_0$ very close to the minimum \cite{Sarid,Jun}. These features of $1D$ tunneling suggest that the field distribution may be much narrower in the directions orthogonal to that of the zero eigenvalue. In the next subsection we will show that this is indeed the case. \subsection{Initial values for multi-field tunneling} The instanton solution ${\bm \phi}(r)$ of Eq.~(\ref{instantonEoM}) describes the motion of the field ${\bm \phi}$ in the upside-down potential $-U({\bm \phi})$, with $r$ playing the role of time. The field starts at zero velocity with ${\bm \phi}(r=0)={\bm \phi}_0$ and approaches the false vacuum value at $r\to\infty$. In a generic configuration, the false vacuum is displaced from the inflection point (${\bm \phi}=0$) by $\sim \Lambda$, both in $\phi_1$ and $\phi_a$ directions. We shall consider an instanton whose center is relatively close to the inflection point, $\abs{{\bm \phi}_0}\ll \Lambda$. 
In the vicinity of the inflection point, Eq.~(\ref{instantonEoM}) can be approximated as \beq \frac{{\rm d}^2 \phi_1}{{\rm d} r^2} + \frac{3}{r} \frac{{\rm d} \phi_1}{{\rm d} r} = \eta + \frac{\rho}{2} \phi_1^2, \label{separateEoM1} \\ \frac{{\rm d}^2 \phi_a}{{\rm d} r^2} + \frac{3}{r} \frac{{\rm d} \phi_a}{{\rm d} r} = m_a^2 \phi_a + \frac{\rho_{11a}}{2} \phi_1^2. \label{separateEoM2} \end{eqnarray} Here we assumed that $\abs{\phi_a} \ll \abs{\phi_1}$, which will be justified {\it a posteriori}. We shall also neglect the term $\eta$ in the equation for $\phi_1$; this term will be taken into account later. With these approximations, $\phi_1(r)$ can be represented as \beq \phi_1 (r) = \phi_{1,0} f \left( \sqrt{\rho \phi_{1,0}} \, r \right), \label{phi_1} \end{eqnarray} where $f (x)$ is a function satisfying \beq f'' + \frac{3}{x} f' = \frac{f^2}{2} \end{eqnarray} with boundary conditions $f(0)=1$, $f'(0)=0$. The solution is shown in Fig.~\ref{fig:fx}, where we can see that $f(x)$ diverges at $x \simeq 6.1$. This solution becomes inaccurate when $\phi_1$ reaches values $\sim \Lambda$. For small values of $x$, \beq f(x) = 1 + x^2 /16 + \mathcal{O}(x^3). \label{smallx} \end{eqnarray} \begin{figure}[t] \centering \includegraphics[width=2.5in]{figure/fx} \caption{ Plot of $f(x)$. } \label{fig:fx} \end{figure} The initial bubble radius $r_0$ can be estimated as the value of $r$ at which $\phi_1$ significantly deviates from its value $\phi_{1,0}$ at the bubble center: \beq r_0 \simeq \frac{c_1}{\sqrt{\rho \phi_{1,0}}}, \label{phi0 third} \end{eqnarray} where $c_1 \sim 5$. Let us compare this with the generic bubble wall thickness $\delta \sim \Lambda/ \sqrt{U}$. With $\rho\sim U/\Lambda^3$, we have $r_0/\delta \sim c_1(\Lambda/\abs{\phi_{1,0}})^{1/2} \gg 1$ for $\abs{\phi_{1,0}}\ll \Lambda$. This means that tunneling close to the inflection point requires a thin-wall bubble of radius much larger than the wall thickness. 
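As an independent numerical check (ours, not part of the original analysis), one can integrate the profile equation $f'' + (3/x) f' = f^2/2$ with $f(0)=1$, $f'(0)=0$ and locate the divergence of $f(x)$. A minimal sketch with a fixed-step RK4 integrator; the step size, starting offset, and blow-up threshold are arbitrary numerical choices:

```python
def rhs(x, f, g):
    # First-order system for f'' + (3/x) f' = f^2/2:  f' = g,  g' = f^2/2 - 3g/x
    return g, 0.5 * f * f - 3.0 * g / x

def blowup_location(h=1e-3, f_max=1e6):
    # Start slightly off x = 0 using the small-x series f = 1 + x^2/16 (so f' = x/8)
    x = 1e-3
    f, g = 1.0 + x * x / 16.0, x / 8.0
    while f < f_max:
        k1 = rhs(x, f, g)
        k2 = rhs(x + h / 2, f + h / 2 * k1[0], g + h / 2 * k1[1])
        k3 = rhs(x + h / 2, f + h / 2 * k2[0], g + h / 2 * k2[1])
        k4 = rhs(x + h, f + h * k3[0], g + h * k3[1])
        f += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        g += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        x += h
    return x  # approximate location of the divergence; the text quotes x ~ 6.1

x_star = blowup_location()
```

The finite-$x$ blow-up is genuine: once $f$ is large the friction term is negligible and $f'' \simeq f^2/2$, whose solutions reach infinity in finite $x$.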
This typically requires that the potential at the inflection point should be nearly degenerate with that of the false vacuum.\footnote{For a thin-wall bubble, the instanton solution stays near $\phi_{1,0}$ for a long Euclidean time, so that the friction term in Eq.~(\ref{instantonEoM}) becomes unimportant. Then energy is approximately conserved, and the potential at the endpoints of the instanton solution is nearly the same.} We note also that for $|\phi_{1,0}|\sim \kappa\Lambda^3$ we have $r_0\sim U^{-1/2} \sim H^{-1}$, in which case gravitational effects on tunneling may be important. We will not attempt to analyze these effects here. Next, we consider Eq.~(\ref{separateEoM2}) for $\phi_a$. Let us first consider a particular solution $\phi_a^{(p)}$ that includes the effect of the interaction term $\rho_{11a} \phi_1^2$, where $\phi_1$ is given by \eq{phi_1}. It is \beq \phi_a^{(p)} (r) \simeq - \frac{\rho_{11a} \phi_{1,0}^2}{2 m_a^2} + \frac{\rho_{11a} \phi_{1,0}^2}{16m_a^2} \rho \phi_{1,0} r^2 + \dots, \label{phi-p} \end{eqnarray} for $\phi_{1,0} \ll \Lambda$, where the dots represent higher-order terms in $\sqrt{\rho \phi_{1,0}} \, r$. We neglect the second and higher-order terms in what follows because we are interested in the case where $\sqrt{\rho \phi_{1,0}} \, r \lesssim 1$. The solution of the homogeneous equation $\phi_a^{(h)}$ is given by \beq \phi_a^{(h)} (r) = c_0 \frac{2 J_1 (i m_a r)}{i m_a r}, \end{eqnarray} where $c_0$ is a constant. The solution of \eq{separateEoM2} can thus be written as \beq \phi_a (r) &=& \phi_a^{(p)} (r) + \phi_a^{(h)} (r) \\ &\simeq& - \frac{\rho_{11a} \phi_{1,0}^2}{2 m_a^2} + \left( \phi_{a,0} - \frac{\rho_{11a} \phi_{1,0}^2}{2 m_a^2} \right) \frac{2 J_1 (i m_a r)}{i m_a r}, \end{eqnarray} where $\phi_{a,0}$ ($\equiv \phi_a (0)$) is the tunneling endpoint of $\phi_a$. 
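Since $J_1(ix) = i I_1(x)$, the homogeneous profile above is $2 I_1(m_a r)/(m_a r)$, which equals $1$ at $r=0$. As a quick self-contained check (ours, with the arbitrary choice $m_a = 1$), one can build $I_1$ from its power series and verify the homogeneous equation $\phi'' + (3/r)\phi' = m_a^2 \phi$ by central differences:

```python
from math import factorial

def I1(x, terms=40):
    # Modified Bessel I_1 via its power series: sum_k (x/2)^(2k+1) / (k! (k+1)!)
    return sum((x / 2) ** (2 * k + 1) / (factorial(k) * factorial(k + 1))
               for k in range(terms))

def g(r, m=1.0):
    # Homogeneous profile 2 J_1(i m r)/(i m r) = 2 I_1(m r)/(m r); g(0) = 1
    return 1.0 if r == 0 else 2.0 * I1(m * r) / (m * r)

def residual(r, m=1.0, h=1e-4):
    # g'' + (3/r) g' - m^2 g, with derivatives from central differences
    g1 = (g(r + h, m) - g(r - h, m)) / (2 * h)
    g2 = (g(r + h, m) - 2 * g(r, m) + g(r - h, m)) / h**2
    return g2 + 3.0 / r * g1 - m**2 * g(r, m)

for r in (0.5, 1.0, 3.0):
    assert abs(residual(r)) < 1e-5
```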
The asymptotic form of the Bessel function is given by \beq \frac{J_1 (im_a r) }{i m_a r} \sim \frac{e^{m_a r}}{\sqrt{2 \pi (m_a r)^3}}, \end{eqnarray} for $m_a r \gg 1$. We see that $\phi_a(r)$ grows exponentially, but consistency requires that it should remain sufficiently small ($\ll\Lambda$) until $r \sim r_0$. It follows that the initial value $\phi_{a,0}$ should satisfy \beq && \abs{ \phi_{a, 0} - \frac{\rho_{11a} \phi_{1,0}^2}{2 m_a^2} } \lesssim \Lambda (m_a r_0)^{3/2} e^{-m_a r_0}, \label{phi0 second} \end{eqnarray} where we assume $m_a r_0 \gg 1$. Using \eq{phi0 third}, this can be rewritten as \beq \abs{ {\phi_{a,0}} - \frac{\rho_{11a} \phi_{1,0}^2}{2 m_a^2} } \lesssim \Lambda \left( \frac{c_1^2 m_a^2}{\rho \phi_{1,0}} \right)^{3/4} \exp \left[ - \frac{c_1 m_a}{\sqrt{\rho \phi_{1,0}}} \right]. \label{phi_0 relation} \end{eqnarray} Thus the tunneling endpoint of $\phi_a$ is exponentially close to $\rho_{11a} \phi_{1,0}^2 / 2 m_a^2$, which is much smaller than $\phi_{1,0}$ for $|\phi_{1,0}|\ll\Lambda$. Finally, we comment on the effect of the $\eta$ term in \eq{separateEoM1}. At small values of $r$, this equation can be approximated as \beq \frac{{\rm d}^2 \phi_1}{{\rm d} r^2} + \frac{3}{r} \frac{{\rm d} \phi_1}{{\rm d} r} = \eta + \frac{\rho}{2} \phi_{1,0}^2. \end{eqnarray} The solution is \beq \phi_1 (r) = \phi_{1,0} + \frac{1}{16} \rho\phi_{1,0}^2 r^2 + \frac{1}{8} \eta r^2 . \end{eqnarray} The last term is negligible compared with the second one if \beq \abs{\phi_{1,0}}\gg (\eta/\rho)^{1/2}\sim \frac{U}{\rho N_{\rm max}} \sim \frac{\Lambda^3}{N_{\rm max}}, \label{upper bound} \end{eqnarray} where we have used Eq.~(\ref{Nmax}) for $N_{\rm max}$. This condition is satisfied unless $\phi_{1,0}$ is extremely close to the inflection point. \subsection{Numerical results for a toy model}\label{sec:toy model} Here we consider a numerical example to check the analysis in Sec.~5.2, in particular the relation \eq{phi_0 relation}. 
We consider a two-dimensional mini-landscape with the potential of the form \beq U(\phi_1, \phi_a) &&= \left( \eta \phi_1 + \frac{m^2}{2} \phi_1^2 + \frac{\rho}{6} \phi_1^3 + \frac{m_a^2}{2} \phi_a^2 + \frac{\rho_{aaa}}{6} \phi_a^3 + \frac{\rho_{1aa}}{2} \phi_1 \phi_a^2 + \frac{\rho_{11a}}{2} \phi_1^2 \phi_a \right) \nonumber\\ &&\quad \times \left[ \frac{m_1^2}{2} \left( \phi_1 - R \cos \theta \right)^2 + \frac{m_2^2}{2} \left( \phi_a - R \sin \theta \right)^2 + \delta \right], \label{toy model} \end{eqnarray} where $m$, $m_a$, $m_1$, $m_2$, $R$, $\theta$, and $\delta$ are constant parameters. For $m=0$, there is an inflection point at $\phi_1 = \phi_a = 0$. The parameters $R$ and $\theta$ determine the location of the false vacuum, and $\delta$ determines its height. We take $m_1 = m_2 = m_a = U_0 / \Lambda^2$, $\rho = \rho_{aaa} = 3 \rho_{1aa} = 3 \rho_{11a} = -U_0/ \Lambda^3$, $m = \eta = 0$, and $R = 5 \Lambda$ as an example, while we choose $\theta$ and $\delta / U_0$ randomly within the ranges of $(\pi /2,\pi)$ and $(0,1)$, respectively. An example of the potential is shown in Fig.~\ref{fig:potential}, where the false vacuum and the inflection point are marked by blue and black dots, respectively. We discard the realizations where there is no false vacuum near $(\phi_1, \phi_a) = (R \cos \theta, R \sin \theta)$, which is sometimes the case for $\delta / U_0 \gtrsim 0.8$. Note that the parameters of the landscape $\Lambda$ and $U_0$ can be eliminated by a rescaling of the variables [see \eq{rescale of phi}], so the results below are independent of $\Lambda$ and $U_0$. \begin{figure}[t] \centering \includegraphics[width=5in]{figure/potential} \caption{ An example of the potential (\ref{toy model}), where we use the parameters indicated in the text with $\theta = 2\pi / 3$ and $\delta /U_0= 0.5$. This choice of parameters was mostly made for the clarity of presentation in the figure. The false vacuum and inflection point are marked by blue and black dots, respectively. 
The gray line shows the instanton trajectory, and the green dot is the tunneling point. } \label{fig:potential} \end{figure} We found the instanton solution using the efficient algorithm of Ref.~\cite{Ali} and determined the tunneling point for each realization. The resulting distribution is shown in Fig.~\ref{fig:initial}, where the blue dots represent the case where $\phi_{a,0} < 0$ while the green ones represent the case where $\phi_{a,0} > 0$. We see that the green dots are rarer for smaller $\phi_{1,0}$, and the blue dots tend to be close to $\rho_{11a} \phi_{1,0}^2 / 2 m_a^2$, which is plotted as the red line for $\phi_{1,0} / \Lambda < 0.1$ ($\ll 1$). The plot shows that the tunneling points typically concentrate along the flat direction ($\phi_1$-axis), with $\phi_{a,0}$ close to $\rho_{11a} \phi_{1,0}^2 / 2 m_a^2$ when $\phi_{1,0}$ is much smaller than $\Lambda$. This result is in good agreement with the analytic formula (\ref{phi_0 relation}). \begin{figure}[t] \centering \includegraphics[width=5in]{figure/initialdist} \caption{ Distribution of tunneling points in the toy model (\ref{toy model}). We choose $\theta$ and $\delta / U_0$ randomly within the ranges of $(\pi/2,\pi)$ and $(0,1)$, respectively. The blue dots represent the case where $\phi_{a,0} < 0$ while the green ones represent the case where $\phi_{a,0} > 0$. The red line represents $\rho_{11a} \phi_{1,0}^2 / 2 m_a^2$ with $\phi_{1,0} / \Lambda < 0.1$, which is the analytic formula (\ref{phi_0 relation}) that is valid for $\phi_{1,0} / \Lambda \ll 1$. } \label{fig:initial} \end{figure} Combining this result with that for the inflationary attractor region, we find that inflation is possible only if the tunneling point is very close to the $\phi_1$-axis, with $\phi_{1,0}$ in the range (\ref{phi1range}), except for some rare realizations. 
In other words, the initial conditions have to be very close to those for $1D$ inflation, with $\phi_{a,0}\approx 0$ and with $\phi_{1,0}$ in the $1D$ attractor range. Although there are some exceptions, where the false vacuum is close to the $\phi_1$ axis and $\delta$ is relatively small, those realizations are rarer for smaller $\phi_{1,0}$. Then the dynamics remains essentially one-dimensional all the way from bubble nucleation till the end of the slow roll. \section{Saddle point inflation}\label{sec:saddle} Inflation in the vicinity of a saddle point of the landscape can be analyzed in much the same way as inflection-point inflation, with similar conclusions. In this case we have $\eta_i =0$ in the cubic expansion (\ref{potential}) of the potential. For slow-roll inflation, one of the eigenvalues of the Hessian has to be small and negative, and we can choose the basis in the $\phi$-space so that this eigenvalue corresponds to the $\phi_1$ direction. All other eigenvalues, as well as the coefficients of the cubic expansion $\rho_{ijk}$, will typically have their generic values. We shall denote the small eigenvalue by $-m^2$ and require $m\lesssim H = (U/3)^{1/2}$.\footnote{The Hessian may have several small eigenvalues, but such saddle points will be rare in the landscape.} The potential near the saddle point has the form of a flat hilltop surrounded by steep rising slopes. As before, inflation is approximately one-dimensional. The shape of the potential in the $\phi_1$-direction is \beq U(\phi_1)=U-\frac{1}{2}m^2 \phi_1^2 +\frac{1}{6}\rho\phi_1^3 , \end{eqnarray} where $\rho\equiv \rho_{111}$ and we assume $\rho<0$. Note that this potential has a shallow local minimum at $\phi_1 =2m^2 /\rho$. A characteristic feature of hilltop inflation is that it is eternal in the range $\abs{\phi_1}\lesssim U^{3/2}/m^2 \equiv\phi_q$, where the field $\phi_1$ undergoes quantum diffusion \cite{AV83}. 
The slow-roll regime corresponds to $\phi_q\lesssim \phi_1\lesssim \phi_{\rm end}$, where $\phi_{\rm end}$ is determined by the condition $\abs{U''}/U\sim 1$, $\phi_{\rm end}\sim U/\abs{\rho} \sim \Lambda^3$. The number of e-folds during the slow roll is bounded by \beq N_{\rm max} = \int_{\phi_q}^{\phi_{\rm end}} d\phi_1 \frac{U(\phi_1)}{U'(\phi_1)} \approx \frac{U}{m^2} \ln \left(1+\frac{2m^2}{\rho\phi_q}\right) . \end{eqnarray} The logarithm here is $\lesssim 100$; hence we need $m\lesssim H$. If the initial conditions after tunneling are such that $\phi_a\approx 0$, the attractor region in this case consists of two segments, separated by a large gap [$\phi_{1,0} \sim (- \kappa \Lambda^3, 0)$] where the field ends up on the ``wrong'' side of the hill and rolls into the shallow minimum. The attractor range of inflation is thus in the intervals $\Delta\phi_{1,0} \sim \Lambda^3$ near the boundaries of this range (i.e., $\phi_{1,0} \sim - \kappa \Lambda^3$ and $\sim 0$). If, on the other hand, $\abs{\phi_a}\gg \kappa \Lambda^3$, then essentially the same analysis as in Sec.~3.2 leads to the conclusion that $\phi_1$ is shifted by the amount (\ref{shift}) shortly after the bubble nucleation. The resulting attractor region consists of two parts having the same horseshoe shape as in the inflection-point case. The difference is that the widths of the horseshoes and their volume are smaller by a factor of $1/\kappa \simeq 0.04$. The distribution of tunneling points is also expected to be similar. Since the mass term in the $\phi_1$ direction is much smaller than its typical value, $\abs{m^2}\ll U_0/\Lambda^2$, we expect that the instanton solution is not sensitive to its magnitude, so the resulting distribution is close to the one we found in Sec.~\ref{sec:toy model} for $m=0$. We verified this numerically for the toy model (\ref{toy model}) with $m^2 = 0.01~ U_0/\Lambda^2$ and all other parameters the same as in Sec.~\ref{sec:toy model}. 
As before, we found that the tunneling points concentrate along the flat direction. Thus we conclude again that the inflationary dynamics is effectively one-dimensional after tunneling. \section{Conclusions and discussion}\label{sec:conclusion} In this paper we studied slow-roll inflation in large random Gaussian landscapes. We assumed the landscape to be small-field, with the correlation length $\Lambda$ much smaller than the Planck scale, $\Lambda\ll 0.1$. In this case inflation typically occurs in small patches of the landscape, localized near saddle or inflection points, so the potential can be accurately approximated by a Taylor expansion about these points up to cubic order. Our main conclusions can be summarized as follows. {\it (i)} Inflation in this kind of landscape is approximately one-dimensional, with the field moving in a nearly straight line during the slow roll. {\it (ii)} We defined the attractor range of inflation as the set of initial values of the scalar fields ${\bm \phi}$ that lead to slow roll. This range can be characterized by the fraction of volume $f_N$ it occupies in the $N$-dimensional correlation-length-sized region centered at the corresponding saddle or inflection point. In a $1D$ landscape, $f_1$ is comparable to the fraction of the correlation length where the slow-roll conditions are satisfied, $f_1\sim \Lambda^2$ \cite{MVY2}. Naively, one might expect that in a large landscape $f_N$ decreases exponentially with $N$ \cite{Yang}. We found, however, that, surprisingly, $f_N$ is nearly independent of $N$, $f_N\sim f_1$. When the field ${\bm \phi}$ starts relatively far from the slow-roll region, it undergoes rapid damped oscillations in the directions orthogonal to the inflationary track, and cubic interaction terms cause a large shift of the field along the track, so that it may end up in the slow-roll region. The resulting attractor range stretches far beyond the slow-roll regime, as illustrated in Fig.~\ref{fig:attractor}. 
{\it (iii)} The probability of inflation would be proportional to the attractor volume fraction if the tunneling endpoints were uniformly distributed through the landscape. However, we found this not to be the case. Our study of the instantons, both analytical and numerical, indicates that the tunneling endpoints tend to concentrate along the flat direction. If the endpoints spread more or less uniformly along this line, the probability of inflation would still be proportional to $f_1$. A quantitative analysis of this issue would require a statistical study of tunneling in the landscape, which we have not attempted here. {\it (iv)} Our picture of inflation in a large landscape is rather different from that suggested by the Dyson Brownian Motion (DBM) model in Refs.~\cite{Marsh,Dias:2016slx,Freivogel,Marsh2}. In particular, we find no evidence for the rapid steepening of the potential and for the resulting suppression of the number of inflationary e-folds predicted in this model. On the contrary, we find that the distribution for the number of e-folds is the same as in the $1D$ case, in agreement with Ref.~\cite{Yang}. The DBM model uses an expansion of the potential up to quadratic terms and assumes that the evolution of the Hessian matrix $\zeta_{ij}=\partial^2 U/\partial \phi_i \partial\phi_j$ along the inflationary path is described by the Dyson stochastic process. We see, however, no reason to expect this description to be accurate in a random Gaussian landscape. Inflation occurs in a small patch of the landscape, so we can use Taylor expansion. The first and some of the second derivatives of the potential in this patch are small, but the third derivatives are not; hence expansion up to cubic terms should be adequate. The resulting cubic potential does not vary stochastically along a smooth path and does not exhibit any steepening (other than a cubic steepening, as in the $1D$ case). 
On the other hand, the DBM process is known to drive the Hessian spectrum towards negative values. This may explain the steepening, as well as the appearance of low-mass modes (with inflation becoming multi-field). An important limitation of our analysis is that we studied the probability of inflation in the sense of ``probability in the landscape''. In other words, for a randomly selected inflection or saddle point in the landscape, we discussed the probability that the potential in the vicinity of that point can support slow-roll inflation. This is rather different from the probability that this kind of inflation has actually happened in our past. Calculation of the latter quantity would require accounting for various anthropic factors, as well as some choice of measure on the multiverse. We expect to return to this issue in a separate publication. A random Gaussian landscape is, of course, just a simple model. It may give some useful insights, but eventually one hopes to investigate more realistic landscape models, as was done, for example, in Refs.~\cite{Baumann, Jose, Linde:2016uec}. \section{Acknowledgement} We are grateful to Jose Blanco-Pillado for useful discussions. This work is supported by the National Science Foundation under grant 1518742. M.Y. is supported by the JSPS Research Fellowships for Young Scientists.
\section{Examples}\label{examples} We apply the generating series formula to compute $Z(\Sym^n X,t)$ for various cases of varieties $X$ over finite fields. These serve to verify the formula and demonstrate the facility of writing the zeta function in the Witt ring. Note that below, sums of Teichm\"{u}ller elements take place in the appropriate Witt ring. \subsection{Affine and projective space} Consider $n$-dimensional affine space $\mathbb{A}^n$ and $n$-dimensional projective space $\PP^n$ over $\F_q$. Then $N_r(\mathbb{A}^n) = q^{nr}$ and we have $$ Z(\mathbb{A}^n,t) = \frac{1}{1-q^n t}. $$ It is also easy to show that $N_r(\PP^n) = q^{nr}+ q^{(n-1)r} + \cdots +q^r + 1$, so that $$ Z(\PP^n,t) = \frac{1}{(1-t)(1-qt)\cdots(1-q^n t)}. $$ In the Witt ring $W(\mathbb{Z})$, these zeta functions are $Z(\mathbb{A}^n,t) = [q^n]$ and $Z(\PP^n,t) = [q^n] + [q^{n-1}] + \cdots + [q] + [1]$. Recall that $\Sym^n \mathbb{A}^1 = \mathbb{A}^n$ and $\Sym^n \PP^1 = \PP^n$, and that $$ Z(\mathbb{A}^1,t) = \frac{1}{1-qt} = [q] \quad \textrm{and} \quad Z(\PP^1,t) = \frac{1}{(1-t)(1-qt)} = [1] + [q]. $$ The formula provided in Theorem \ref{mainthm} predicts that $Z(\Sym^n \mathbb{A}^1,t)$ is the coefficient of $u^n$ in $[[\;[q]\;]] \in W(W(\mathbb{Z}))$ and, similarly, that $Z(\Sym^n \PP^1,t)$ is the coefficient of $u^n$ in $[[\;[1]\;]] + [[\;[q]\;]] \in W(W(\mathbb{Z}))$. We have $$ [[\;[q]\;]] = \frac{1}{[1]-[q]u} = [1] + [q]u + [q^2] u^2 + \cdots $$ and $$ [[\;[1]\;]] + [[\;[q]\;]]= \left(\frac{1}{[1]-[1]u}\right) \left(\frac{1}{[1]-[q]u}\right) = [1] + ([1]+[q])u + \cdots, $$ which agrees with the zeta functions $Z(\mathbb{A}^n,t)$ and $Z(\PP^n,t)$ described above. \subsection{Elliptic curves} Let $E$ be an elliptic curve over $\F_q$. In this case, we have $$ Z(E,t) = \frac{(1-\alpha t)(1 - \beta t)}{(1-t)(1-qt)} $$ where $\alpha + \beta = a \in \mathbb{Z}$ and $\alpha\beta = q$. Written in the Witt ring, this appears as $$ Z(E,t) = [1] - [\alpha] - [\beta] + [q], $$ where $[\alpha] = (1-\alpha t)^{-1}$ is the Teichm\"{u}ller element. The formula provided by Theorem \ref{mainthm} is $$ Z(\Sym^n E,t) = \textrm{coefficient of $u^n$ in} \left[\frac{(1-[\alpha]u)(1-[\beta] u)}{(1-[1] u)(1-[q]u)} \right]. $$ The results in this section in fact work for any curve $C$ over $\F_q$. In this general case, the eigenvalues $\alpha$ and $\beta$ for the action on $H^1(\overline{C},\Q_\ell)$ come in pairs $\alpha_i$ and $\beta_i$ for $i = 1,2,\ldots,g$, where $g$ is the genus. In fact, the case of symmetric powers of smooth projective curves was worked out by MacDonald in \cite{MacDonald62} in 1962. 
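The identification $\Sym^n \PP^1 = \PP^n$ can be checked at the level of coefficients: expanding $Z(\PP^1,t) = 1/((1-t)(1-qt))$ as a power series, the coefficient of $t^n$ counts the $\F_q$-points of $\Sym^n \PP^1$ and should equal $1 + q + \cdots + q^n = \#\PP^n(\F_q)$. A small sketch of this check (our own illustration, with truncated integer power series standing in for elements of $W(\mathbb{Z})$):

```python
def series_mul(a, b, deg):
    # Product of two truncated power series (lists of coefficients)
    return [sum(a[k] * b[n - k] for k in range(n + 1)) for n in range(deg + 1)]

def series_inverse(p, deg):
    # Multiplicative inverse of a power series with constant term 1
    inv = [1] + [0] * deg
    for n in range(1, deg + 1):
        inv[n] = -sum(p[k] * inv[n - k] for k in range(1, min(n, len(p) - 1) + 1))
    return inv

def Z_P1(q, deg):
    # Z(P^1, t) = 1/((1 - t)(1 - q t)), truncated at degree deg
    denom = series_mul([1, -1] + [0] * (deg - 1), [1, -q] + [0] * (deg - 1), deg)
    return series_inverse(denom, deg)

for q in (2, 3, 5):
    Z = Z_P1(q, 6)
    for n in range(7):
        assert Z[n] == sum(q**i for i in range(n + 1))  # = #P^n(F_q)
```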
Here we work in the elliptic curve case for simplicity. \subsection{Symmetric powers of affine and projective space} We use the formula in Theorem \ref{mainthm} to compute the zeta functions $Z(\Sym^n \mathbb{A}^m,t)$ and $Z(\Sym^n \PP^m,t)$. Note that these varieties are no longer smooth once $m>1$. Recall that $$ Z(\mathbb{A}^m,t) = \frac{1}{1-q^m t} = [q^m] \quad \textrm{and} \quad Z(\PP^m,t) = \frac{1}{(1-t)(1-qt)\cdots(1-q^m t)} = [1] + [q] + \cdots + [q^m]. $$ Then by the formula we have for affine space $$ Z(\Sym^n \mathbb{A}^m, t) = \textrm{ coefficient of $u^n$ in } [[\;[q^m]\;]] = [q^{nm}], $$ which agrees with the fact that $[\Sym^n \mathbb{A}^m]= [\mathbb{A}^{nm}]$. For projective space, $$ Z(\Sym^n \PP^m, t) = \textrm{ coefficient of $u^n$ in } [[\;[1]\;]]+[[\;[q]\;]]+\cdots + [[\;[q^m]\;]]. $$ Note that this coefficient is not $[1] + [q]+\cdots + [q^{nm}]$, as $\Sym^n \PP^m$ is not $\PP^{nm}$. For instance, for $n=2$ and $m=2$, $$ Z(\Sym^2 \PP^2, t) = [1] + [q] + 2[q^2] + [q^3] + [q^4], $$ whereas $$ Z(\PP^4,t) = [1] + [q]+ [q^2]+[q^3] + [q^4]. $$ \subsection{Product of elliptic curves} Consider $X = E_1 \times E_2$, a product of elliptic curves over $\F_q$. We use the formula in Theorem \ref{mainthm} to compute $Z(\Sym^n X,t)$ in terms of $Z(E_1,t)$ and $Z(E_2,t)$. Let $$ Z(E_1,t) = \frac{(1-\alpha_1 t)(1 - \beta_1 t)}{(1-t)(1-qt)} \quad \textrm{and} \quad Z(E_2,t) = \frac{(1-\alpha_2 t)(1 - \beta_2 t)}{(1-t)(1-qt)} $$ and recall that in the Witt ring, these can be written as $$ Z(E_1,t) = [1] - [\alpha_1] - [\beta_1] + [q] \quad\textrm{and} \quad Z(E_2,t) = [1] - [\alpha_2] - [\beta_2] + [q]. $$ Then since the zeta function takes products in the Witt ring, we may compute $Z(E_1 \times E_2,t)$ as \begin{eqnarray*} Z(E_1\times E_2,t) &=& ([1] - [\alpha_1] - [\beta_1] + [q] )* ([1] - [\alpha_2] - [\beta_2] + [q]) \\ &=& [1]- [\alpha_1] - [\alpha_2] - [\beta_1] - [\beta_2] + 2[q] + [\alpha_1\beta_2]+[\alpha_2\beta_1] +[\alpha_1\alpha_2] \\ && + [\beta_1\beta_2] - [\alpha_1q] - [\alpha_2q] - [\beta_1q] - [\beta_2q] + [q^2]. 
\end{eqnarray*} Thus the motivic zeta function generating series is \begin{eqnarray*} \zeta_Z(E_1\times E_2,u) &=& \sum_{n=0}^\infty Z(\Sym^n(E_1\times E_2),t) u^n\\ &=& [[\;[1]\;]]- [[\;[\alpha_1]\;]] - [[\;[\alpha_2]\;]] - [[\;[\beta_1]\;]] - [[\;[\beta_2]\;]] \\ &&+ 2[[\;[q]\;]] + [[\;[\alpha_1\beta_2]\;]]+[[\;[\alpha_2\beta_1]\;]] +[[\;[\alpha_1\alpha_2]\;]] + [[\;[\beta_1\beta_2]\;]] \\ &&- [[\;[\alpha_1q]\;]] -[[\; [\alpha_2q]\;]] - [[\;[\beta_1q]\;]] - [[\;[\beta_2q]\;]] + [[\;[q^2]\;]]. \end{eqnarray*} Note that these equalities are being written in $W(W(A))$. These methods also work for $n$-fold products of smooth projective curves; however, the formulas quickly become unwieldy. \section{Varieties over Finite Fields}\label{FiniteFieldCase} Let $X$ be a variety over a finite field $\F_q$ (a reduced scheme of finite type over $\Spec \F_q$). The Hasse-Weil zeta function (or simply the zeta function) of $X$ is \begin{equation} \label{zetafunction} Z(X,t) = \prod_{x \in |X|} (1-t^{\deg x})^{-1}, \end{equation} a power series over $\mathbb{Z}$, where the product ranges over closed points $x \in |X|$. This may also be expressed as \begin{equation} \label{weilzetafunction} Z(X,t) = \exp \left[\sum_{r=1}^\infty N_r(X) \frac{t^r}{r} \right], \end{equation} where $N_r(X) = \# X(\F_{q^r})$ is the number of points over $\F_{q^r}$. Due to the work of Grothendieck and others on the Weil conjectures, the zeta function can be written in terms of the action of the Frobenius on (compactly supported) \'{e}tale cohomology: \begin{equation}\label{zetapoly} Z(X,t) = \frac{\prod_i P_{2i+1}(t)}{\prod_i P_{2i}(t)}, \quad P_i(t) = \prod_j (1-\alpha_{ij} t), \end{equation} where $\alpha_{ij}$ are the inverse eigenvalues of the Frobenius action $\Phi$ on $H_{et,c}^i(\overline{X};\Q_\ell)$. These are so-called Weil $q$-numbers, with $|\alpha_{ij}|= q^{r/2}$ for some $r \leqslant i$ (Deligne), and $Z(X,t)$ is a rational function (Dwork). 
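To see (\ref{weilzetafunction}) and (\ref{zetapoly}) agree on a concrete example, take elliptic-curve data with $a = \alpha + \beta$ and $\alpha\beta = q$ (the values $a=1$, $q=2$ below are made up for illustration). Then $N_r = 1 + q^r - (\alpha^r + \beta^r)$, and the power sums $s_r = \alpha^r + \beta^r$ satisfy $s_r = a\,s_{r-1} - q\,s_{r-2}$, so the comparison can be done in exact arithmetic:

```python
from fractions import Fraction

def zeta_coeffs(a, q, deg):
    # Expand the rational form Z(E,t) = (1 - a t + q t^2)/((1 - t)(1 - q t))
    num = [1, -a, q]
    denom = [1, -(1 + q), q]  # (1 - t)(1 - q t)
    Z = []
    for n in range(deg + 1):
        acc = num[n] if n < 3 else 0
        acc -= sum(denom[k] * Z[n - k] for k in range(1, min(n, 2) + 1))
        Z.append(acc)
    return Z

def exp_coeffs(a, q, deg):
    # Expand exp(sum_r N_r t^r / r) with N_r = 1 + q^r - s_r, s_r = a s_{r-1} - q s_{r-2}
    s = [2, a]
    for r in range(2, deg + 1):
        s.append(a * s[-1] - q * s[-2])
    N = [0] + [1 + q**r - s[r] for r in range(1, deg + 1)]
    E = [Fraction(1)]
    for n in range(1, deg + 1):
        # from E'(t) = (sum_r N_r t^{r-1}) E(t):  n E_n = sum_{k=1}^{n} N_k E_{n-k}
        E.append(sum(N[k] * E[n - k] for k in range(1, n + 1)) / Fraction(n))
    return E

assert exp_coeffs(1, 2, 8) == zeta_coeffs(1, 2, 8)
```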
We wish to consider the zeta function $Z(X,t)$ as an element in the big Witt ring $W(\mathbb{Z})$. The following is shown in \cite[Thm 2.1]{Ramachandran15}: \begin{proposition}\label{weilzetawitt} For $X$ and $Y \in \Var_{\F_q}$, the zeta function of $X \times_{\F_q} Y$ is given by the Witt product $$ Z(X\times_{\F_q} Y, t) = Z(X,t) *_W Z(Y,t). $$ Moreover, the Frobenius operator $\Fr_r$ on $W(\mathbb{Z})$ corresponds to base change $X_{\F_{q^r}}$, so that $\Fr_r(Z(X/\F_q,t)) = Z(X/\F_{q^r},t).$ \end{proposition} \begin{proof}[Sketch] Notice that from (\ref{weilzetafunction}), $\gh_n(Z(X,t)) = N_n(X)$. As point counting is multiplicative, $N_n(X\times_{\F_q} Y) = N_n(X)N_n(Y)$, the zeta function takes products in the Witt ring. Finally, recall that $\gh_n \circ \Fr_r = \gh_{nr}$ and note that $\gh_n(Z(X/\F_{q^r},t)) = N_{nr}(X) = \gh_{nr}(Z(X/\F_q,t))$. \end{proof} The zeta function can be written in the Witt ring $W(\mathbb{Z})$ as $$ Z(X,t) = \sum_{i,j} (-1)^{i}[\alpha_{ij}] $$ where the sum is taking place in the Witt ring and $[\alpha]$ is the Teichm\"{u}ller element $(1-\alpha t)^{-1}$. This follows directly from the presentation in (\ref{zetapoly}) as an alternating product of the polynomials $P_i(t) = \prod_j(1-\alpha_{ij}t)$: Witt addition is multiplication of power series, and $-[\alpha] = (1-\alpha t)$. Although each Teichm\"{u}ller element $[\alpha_{ij}]$ is in $W(\overline{\Q}_\ell)$, the sum nevertheless lies in the subring $W(\mathbb{Z})$.\\ We now restate our main result. \begin{theorem} Given a quasi-projective variety $X \in \Var_{\F_q}$, we have \begin{equation}\label{maineq} \sum_{n \geqslant 0} Z(\Sym^n X, t) u^n = \sum_{i,j} (-1)^{i}[[\; [\alpha_{ij}] \;]] \in W(W(\overline{\Q_\ell})) \end{equation} \end{theorem} This can be viewed as a closed product formula for the generating series for zeta functions of symmetric powers. 
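The theorem can be tested numerically in ghost coordinates. Since the ghost map turns Witt sums and products into pointwise operations, an element of $W(\mathbb{Z})$ may be represented by a finite window of ghost coordinates. The Python sketch below (illustrative only; all names are ours) checks, for $X = \PP^1$ with Frobenius eigenvalues $1$ and $q$, that the coefficient of $u^n$ in the Witt sum of the two double Teichm\"{u}ller elements has ghost coordinates $N_r(\PP^n) = 1 + q^r + \cdots + q^{nr}$, i.e.\ that $Z(\Sym^n \PP^1,t) = Z(\PP^n,t)$:

```python
R = 5   # number of ghost coordinates tracked
q = 2   # P^1 over F_2: Frobenius eigenvalues 1 (on H^0) and q (on H^2)

def teich(a):
    """Ghost coordinates (a, a^2, ..., a^R) of the Teichmuller element [a]."""
    return [a**r for r in range(1, R + 1)]

def wadd(P, Q):  # Witt sum: pointwise addition of ghost coordinates
    return [p + s for p, s in zip(P, Q)]

def wmul(P, Q):  # Witt product: pointwise multiplication of ghost coordinates
    return [p * s for p, s in zip(P, Q)]

def double_teich(a, n_max):
    """u-series of [[ [a] ]] = ([1] - [a]u)^{-1}: the u^n coefficient is [a^n]."""
    return [teich(a**n) for n in range(n_max + 1)]

def wseries_mul(A, B):
    """Witt-sum in W(W(Z)): multiply two u-series whose coefficients are
    Witt vectors, combining coefficients with Witt sum and Witt product."""
    out = []
    for n in range(min(len(A), len(B))):
        c = [0] * R
        for k in range(n + 1):
            c = wadd(c, wmul(A[k], B[n - k]))
        out.append(c)
    return out

n_max = 4
lhs = wseries_mul(double_teich(1, n_max), double_teich(q, n_max))
for n in range(n_max + 1):
    # Z(Sym^n P^1, t) = Z(P^n, t) has ghost coordinates N_r(P^n) = sum_k q^{kr}
    assert lhs[n] == [sum(q**(k * r) for k in range(n + 1)) for r in range(1, R + 1)]
```

The same representation reproduces, for instance, the coefficient of $u^2$ for $\PP^2$ computed in the examples section.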
\subsection{Proof of the Main Theorem} Recall the result on Newton's identities from Lemma \ref{newton}: in $\Lambda(A)$, for $P(t) = \sum_n a_n t^n$ and $b_n = \gh_n(P(t))$, we have $$ na_n = b_n + a_1b_{n-1} + \cdots + a_{n-1}b_{1}. $$ Recursively, this gives a way to recover $a_n$ from the ghost coordinates $b_1,b_2,\ldots,b_n$. That is, $a_n$ can be written purely in terms of $b_i$ for $i = 1, 2, \ldots, n$ by replacing each $a_i$ in the above relation. Let us call this relation $\phi$ so that $$ a_n = \phi(b_1,b_2,\ldots,b_n). $$ \begin{lemma}\label{frobnewton} For $X$ a quasi-projective variety over $\F_q$, \begin{equation}\label{frobnewtoneq} N_r(\Sym^n X) = \phi(N_r(X),N_{2r}(X),\ldots,N_{nr}(X)) \end{equation} \end{lemma} \begin{proof} Over $\overline{\F_q}$, the points of the symmetric power $\Sym^n X$ parametrize the effective zero-cycles of degree $n$ on $X$. Thus the zeta function $Z(X,t)$ may be written as \begin{equation}\label{zetaforlemma} Z(X,t) =\sum_n N_1(\Sym^n X)t^n. \end{equation} As $\gh_r(Z(X,t)) = N_r(X)$, this implies that $$ N_1(\Sym^n X) = \phi(N_1(X),N_2(X),\ldots,N_n(X)). $$ Apply the Frobenius operator $\Fr_r$ for $W(\mathbb{Z})$ to equation (\ref{zetaforlemma}). Equation (\ref{frobnewtoneq}) then follows from the observation that on ghost coordinates, $\gh_n (\Fr_r(P)) = \gh_{nr}(P)$ for $P \in W(\mathbb{Z})$ and from Proposition \ref{weilzetawitt}, $\Fr_r Z(X/\F_q,t) = Z(X/\F_{q^r},t)$. \end{proof} \begin{proof}[Proof of Theorem \ref{mainthm}] Denote the ghost map on $W(W(\mathbb{Z}))$ by $\gh^u$ and let $\beta_n$ be the $n$th ghost coordinate of the sum of double Teichm\"{u}ller elements in (\ref{maineq}): $$ \beta_n =\gh^u_n\left(\sum_{i,j} (-1)^{i}[[\, [\alpha_{ij}] \,]]\right) = \sum_{i,j} (-1)^{i} [\alpha_{ij}]^n \in W(\mathbb{Z}). $$ We wish to show $$ Z(\Sym^nX,t) = \phi( \beta_1,\beta_2,\ldots, \beta_n). $$ This relation takes place in $W(\mathbb{Z})$. 
We now use the ghost map $\gh^t$ on $W(\mathbb{Z})$. It suffices to show for all $r$, $$ \gh_r^t Z(\Sym^n X,t) = \phi( \gh_r^t \beta_1, \gh_r^t \beta_2,\ldots, \gh_r^t \beta_n). $$ We have $$ \gh^t_r\beta_n = \gh^t_r \left( \sum_{i,j} (-1)^{i} [\alpha_{ij}]^n \right) = \sum_{i,j} (-1)^{i} \alpha_{ij}^{nr} = N_{nr}(X) \in \mathbb{Z}. $$ Moreover, $\gh_r^t Z(\Sym^n X,t) = N_r(\Sym^n X)$. Now recall from Lemma \ref{frobnewton} $$ N_r(\Sym^n X) = \phi(N_r(X),N_{2r}(X),\ldots,N_{nr}(X)). $$ As this holds for all $r$, the proof follows. \end{proof} \section{Introduction} A remarkable formula of MacDonald \cite{MacDonald} provides a closed expression for the generating series of the Poincar\'{e} polynomials of the symmetric powers $\Sym^n X$ of a space $X$. Let $X$ be a compact complex manifold of dimension $m$, and recall that the (signed) Poincar\'{e} polynomial is defined as $P(X,z) = \sum_i (-1)^i b_i(X) z^i$, where $b_i=b_i(X) = \dim_\Q H^i(X,\Q)$ are the Betti numbers. The MacDonald formula for $P(X,z)$ is \begin{equation} \label{macd} \sum_{n \geqslant 0} P(\Sym^n X, z) t^n = \frac{(1-z^1t)^{b_1}(1-z^3t)^{b_3}\cdots (1-z^{2m-1}t)^{b_{2m-1}}} {(1-t)^{b_0}(1-z^2t)^{b_2}\cdots(1-z^{2m}t)^{b_{2m}}}. \end{equation} Thus the Poincar\'{e} polynomial $P(\Sym^n X,z)$ may be expressed directly in terms of invariants associated to $X$. A similar formula for the Euler characteristic is recovered by setting $z=1$, as $P(X,1) = \chi(X)$. In this short note we prove an analogous formula for the Hasse-Weil zeta function $Z(X,t)$ of varieties over finite fields. To provide such a formula, we consider the zeta function $Z(X,t)$ as an element in the Witt ring $W(\mathbb{Z})$. 
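Formula (\ref{macd}) is easy to verify in the simplest case $X = \PP^1$ (so $b_0 = b_2 = 1$, $b_1 = 0$): since $\Sym^n \PP^1 = \PP^n$, the $t^n$ coefficient of $1/((1-t)(1-z^2t))$ must equal $P(\PP^n,z) = 1 + z^2 + \cdots + z^{2n}$. A small Python check (illustrative only; all names are ours), with polynomials in $z$ encoded as exponent-to-coefficient dictionaries:

```python
def geom(j, n_max):
    """t-series of 1/(1 - z^j t): the t^n coefficient is the monomial z^{jn},
    encoded as a dict {z-exponent: coefficient}."""
    return [{j * n: 1} for n in range(n_max + 1)]

def poly_mul(p, r):
    """Multiply two polynomials in z (dict encoding)."""
    out = {}
    for i, ci in p.items():
        for j, cj in r.items():
            out[i + j] = out.get(i + j, 0) + ci * cj
    return out

def series_mul(A, B):
    """Multiply two t-series with polynomial coefficients."""
    out = []
    for m in range(min(len(A), len(B))):
        c = {}
        for k in range(m + 1):
            for e, v in poly_mul(A[k], B[m - k]).items():
                c[e] = c.get(e, 0) + v
        out.append(c)
    return out

n_max = 5
rhs = series_mul(geom(0, n_max), geom(2, n_max))   # 1/((1-t)(1-z^2 t))
for n in range(n_max + 1):
    # P(Sym^n P^1, z) = P(P^n, z) = 1 + z^2 + ... + z^{2n}
    assert rhs[n] == {2 * k: 1 for k in range(n + 1)}
```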
As noted by Ramachandran \cite{Ramachandran15}, $Z(X,t)$ takes the form of an Euler-Poincar\'{e} characteristic as an alternating sum of \emph{Teichm\"{u}ller elements} $[\alpha]$: $$ Z(X,t) = \sum_{i,j} (-1)^{i} [\alpha_{ij}] \in W(\overline{\Q_\ell}) $$ where $\alpha_{ij}$ is the $j$th (inverse) eigenvalue of the Frobenius operator acting on the $i$th \'{e}tale cohomology. By definition, the zeta function $Z(X,t)$ takes values in the ring $W(\mathbb{Z}) \subset W(\overline{\Q_\ell})$. \skipline Our main result is a MacDonald formula for $Z(X,t)$: \begin{theorem}\label{mainthm} Given a quasi-projective variety $X \in \Var_{\F_q}$, we have $$ \sum_{n \geqslant 0} Z(\Sym^n X, t) u^n = \sum_{i,j} (-1)^{i}[[\; [\alpha_{ij}] \;]] \in W(W(\overline{\Q_\ell})) $$ where the $[[\; [\alpha_{ij}] \;]]$ are double Teichm\"{u}ller elements. \end{theorem} \skipline We interpret this result in the setting of motivic zeta functions, specifically in the context of $\lambda$-ring-valued motivic measures. Let $K_0(\Var_k)$ be the Grothendieck ring of varieties and consider an $A$-valued motivic measure $\mu:K_0(\Var_k) \rightarrow A$. In \cite{Kapranov}, Kapranov associates to $\mu$ a motivic zeta function $$\zeta_\mu(X,t) = \sum_{n\geqslant 0} \mu(\Sym^n X) t^n,$$ an invertible power series with coefficients in $A$. Following Ramachandran \cite{Ramachandran15}, we say that $\mu$ \emph{exponentiates} or \emph{can be exponentiated} if $\zeta_\mu$ takes values in the big Witt ring $W(A)$; that is, if the zeta function $\zeta_\mu$ reflects the product structure in $K_0(\Var_k)$ via the Witt product in $W(A)$. A particularly interesting case is that of $A$-valued motivic measures where $A$ is a $\lambda$-ring. It is shown by Ramachandran and Tabuada \cite{RamaTabuada} that if such a measure is a pre-$\lambda$-ring map on $K_0(\Var_k)$, then it exponentiates (see Proposition \ref{NRTaba}). We refer to such measures $\mu$ as $\lambda$-measures. 
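A concrete instance of exponentiation: the Euler characteristic measure satisfies $\zeta_\chi(X,t) = (1-t)^{-\chi(X)}$. Taking $X = S^2$, where $\Sym^n S^2 \cong \mathbb{CP}^n$ has $\chi = n+1$, the $t^n$ coefficient of $(1-t)^{-2}$ is $\sigma^n(2) = \binom{n+1}{n} = n+1$. A trivial Python verification (not from the paper, purely for illustration):

```python
from math import comb

# sigma^n on Z is sigma^n(a) = C(a+n-1, n); for a = chi(S^2) = 2 this gives
# n + 1, matching chi(Sym^n S^2) = chi(CP^n) = n + 1.
for n in range(10):
    assert comb(2 + n - 1, n) == n + 1
```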
The zeta function of such a measure is a ring homomorphism into $W(A)$ and thus induces a zeta function measure $\mu_Z([X]) = \zeta_\mu(X,t)$. We prove that if $\mu$ is a $\lambda$-measure, then $\mu_Z$ is also a $\lambda$-measure (Theorem \ref{zetaofzeta}); this involves a study of the $\lambda$-ring structure of the big Witt ring $W(A)$. The classical MacDonald formulae show that the Euler characteristic and the Poincar\'{e} polynomial define $\lambda$-measures. The formula in Theorem \ref{mainthm} shows that the zeta function $Z(X,t)$ for varieties over finite fields defines a $\lambda$-measure. Analogous MacDonald formulae can be proved in other contexts. Let $\cC$ be a $k$-linear tensor category, in the sense of a $k$-linear additive pseudo-abelian symmetric monoidal category where $\otimes$ is $k$-linear. As shown by Heinloth \cite{Heinloth07}, the Grothendieck ring $K_0(\cC)$ is a $\lambda$-ring. Del Ba\~{n}o Rollin \cite{Rollin01} and Maxim-Sch\"{u}rmann \cite{MaximSchurmann} prove closed formulae for various generating series of measures taking values in various $K_0(\cC)$; these follow directly from the opposite $\lambda$-ring structure map $\sigma_t$. The motivic measures described in \cite{Rollin01} and \cite{MaximSchurmann} are also $\lambda$-measures. After reviewing some preliminaries on $\lambda$-rings and Witt rings in Section \ref{lambdarings}, we prove the formula for zeta functions in Section \ref{FiniteFieldCase} and provide some explicit computations of zeta functions of various $\Sym^n X$ in Section \ref{examples}. Finally, in Section \ref{lambdaringmeasures}, we provide some context for these results in the theory of $\lambda$-ring-valued motivic measures and explore related results for motivic zeta function measures. \section{Lambda-ring valued motivic measures} \label{lambdaringmeasures} Let $k$ be a field and $\Var_k$ be the category of varieties over $k$ (reduced schemes of finite type over $\Spec k$). 
The \emph{Grothendieck ring of varieties} $K_0(\Var_k)$ is the abelian group generated by symbols $[X]$ of isomorphism classes of $X \in \Var_k$ subject to the scissor relation $[X] = [Y] + [X \setminus Y]$ for $Y$ any closed subvariety of $X$. $K_0(\Var_k)$ is a commutative ring under the product $[X]\cdot[Y] = [X \times_k Y]$. This is the so-called universal value group of Euler-Poincar\'{e} characteristics on $\Var_k$; ring homomorphisms on $K_0(\Var_k)$ are called \emph{motivic measures}. That is, for $A$ a commutative ring with identity, we consider $\mu:K_0(\Var_k) \rightarrow A$. Kapranov in \cite{Kapranov} constructs a \emph{motivic zeta function} $\zeta_\mu(X,t)$ for $X \in \Var_k$ using these algebraic invariants $\mu(X)$. For $X$ quasi-projective, it is the generating series for (the measure of) the symmetric powers $\Sym^n X$: $$ \zeta_\mu(X,t) = \sum_{n=0}^\infty \mu(\Sym^n X) t^n \in A[[t]]. $$ As $K_0(\Var_k)$ is additively generated by quasi-projective $X$, this defines a group homomorphism on $K_0(\Var_k)$ taking values in the group of invertible power series $\Lambda(A)$. If $\zeta_\mu:K_0(\Var_k) \rightarrow W(A)$ is a ring homomorphism, we say that $\mu$ exponentiates (see \cite{Ramachandran15}). For example, in the finite field case\footnote{In this case, we must pass to the Grothendieck ring $\widetilde{K_0}(\Var_{\F_q})$ modulo \emph{radicial morphisms}, but all the measures on $\Var_{\F_q}$ we consider factor through $\widetilde{K_0}(\Var_{\F_q})$. See Mustata \cite[p. 78]{Mustata}.} $X \in \Var_{\F_q}$, the motivic zeta function $\zeta_{\mu_\#}$ associated to the counting measure $\mu_\#([X]) = \#X(\F_q)$ recovers the usual zeta function $Z(X,t)$ (see Mustata \cite[Prop 7.31]{Mustata}). We have seen that $Z(X\times_k Y,t) = Z(X,t)*_W Z(Y,t)$; hence $\mu_\#$ can be exponentiated. We are particularly interested in $A$-valued motivic measures where $A$ is a $\lambda$-ring. 
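The statement that $\zeta_{\mu_\#}$ recovers $Z(X,t)$ can be made concrete for $X = \PP^1$: since $\Sym^n \PP^1 = \PP^n$, we have $\mu_\#(\Sym^n \PP^1) = \#\PP^n(\F_q) = (q^{n+1}-1)/(q-1)$, which is exactly the $t^n$ coefficient of $Z(\PP^1,t) = 1/((1-t)(1-qt))$. A quick Python check (the specific choice $q = 3$ is ours, for illustration):

```python
q = 3  # an arbitrary prime power chosen for illustration

# mu_#(Sym^n P^1) = #P^n(F_q), since Sym^n P^1 = P^n
sym_counts = [(q**(n + 1) - 1) // (q - 1) for n in range(8)]

# t^n coefficients of Z(P^1, t) = 1/((1-t)(1-qt)) = sum_n (1+q+...+q^n) t^n
z_coeffs = [sum(q**k for k in range(n + 1)) for n in range(8)]

assert sym_counts == z_coeffs
```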
Note that $K_0(\Var_k)$ has a pre-$\lambda$-ring structure provided by the Kapranov motivic zeta function, defined such that $\sigma^n([X]) = [\Sym^n X]$ for $X$ quasi-projective. This, however, is not a $\lambda$-ring structure (see Larsen and Lunts \cite{LarsenLunts}). \begin{definition} Let $A$ be a $\lambda$-ring. A motivic measure $\mu:K_0(\Var_k) \rightarrow A$ is called a $\lambda$-measure if either of the equivalent conditions holds: \begin{itemize} \item $\mu$ is a pre-$\lambda$-ring map, $\mu([\Sym^n X]) = \sigma^n \mu([X])$. \item The associated motivic zeta function factors as $$\label{diagram} \xymatrixcolsep{3pc}\xymatrix{ K_0(\Var_k) \ar[rd]_{\zeta_\mu} \ar[r]^\mu &A \ar[d]^{\sigma_t}\\ &W(A)} $$ \end{itemize} \end{definition} \begin{proposition}[Ramachandran, Tabuada \cite{RamaTabuada}]\label{NRTaba} If $\mu$ is a $\lambda$-measure, then $\mu$ exponentiates. \end{proposition} \begin{proof} The composition above is a ring homomorphism. \end{proof} \begin{example} The generating series in (\ref{macd}) for the Poincar\'{e} polynomial is the Kapranov motivic zeta function $\zeta_{\mu_P}$ associated to the Poincar\'{e} polynomial measure $\mu_P(X) = P(X,z)$. Since the ghost coordinates $\gh_n(\zeta_{\mu_P}(X,t)) = P(X,z^n)$ are multiplicative, $\zeta_{\mu_P}$ takes values in the Witt ring. MacDonald's formula (\ref{macd}) can be written in the Witt ring $W(\mathbb{Z}[z])$ as $$ \zeta_{\mu_P}(X,t) = \sum_{i=0}^{2m} (-1)^i b_i(X) [z^i]. $$ The zeta function for the Poincar\'{e} polynomial appears as a Poincar\'{e} polynomial written in Teichm\"{u}ller elements. Similarly for $\chi$, we have $$ \zeta_\chi(X,t) = \sum_{i=0}^{2m} (-1)^i b_i(X) [1] = \chi(X)[1], $$ an Euler characteristic written in Teichm\"{u}ller elements. 
Moreover, both these motivic zeta functions factor through the $\lambda$-ring structure maps and thus both $\chi$ and $\mu_P$ are $\lambda$-measures: \begin{proposition} For the Euler characteristic, $\zeta_\chi = \sigma_t \circ \chi$, where $\sigma_t$ is the opposite $\lambda$-ring structure map on $\mathbb{Z}$ and for the Poincar\'{e} polynomial $\zeta_{\mu_P} = \sigma_t' \circ \mu_P$, where $\sigma_t'$ is the opposite $\lambda$-ring structure map on $\mathbb{Z}[z]$. \end{proposition} \begin{proof} Recall that for $a \in \mathbb{Z}$, $\sigma_t(a) = a[1]$ and for $f(z) \in \mathbb{Z}[z]$, $\sigma_t'(f(z)) = f([z])$. \end{proof} \end{example} \begin{definition} Given a measure that exponentiates $\mu:K_0(\Var_k) \rightarrow A$, the associated motivic zeta function $\zeta_\mu:K_0(\Var_k) \rightarrow W(A)$ is a ring homomorphism. The induced motivic measure is called the \emph{zeta function measure} $\mu_Z(X) = \zeta_\mu(X,t)$; it is a $W(A)$-valued motivic measure. \end{definition} \begin{theorem}\label{zetaofzeta} If $\mu:K_0(\Var_k) \rightarrow A$ is a $\lambda$-measure, then its induced zeta function measure $\mu_Z:K_0(\Var_k) \rightarrow W(A)$ is also a $\lambda$-measure. This process iterates ad infinitum. \end{theorem} \begin{proof} By Proposition \ref{sigma}, the map $\sigma_t$ is a $\lambda$-ring map. The composition above is a $\lambda$-ring map. \end{proof} \begin{remark} The zeta function $Z(X,t)$ is the Kapranov motivic zeta function associated to the counting measure $\mu_\#$; however, counting measure is not a $\lambda$-measure. Recall the unique $\lambda$-ring structure on $\mathbb{Z}$: for $a \in \mathbb{Z}$, $$ \lambda_t(a) =(1+t)^a, \quad \lambda^n(a) = \left(\! \begin{array}{c} a \\ n \end{array} \! \right), \quad \sigma^n(a) = \left(\! \begin{array}{c} a+n-1 \\ n \end{array} \! \right). $$ However, it is easy to verify that $\mu_\#(\Sym^n X) \neq \sigma^n (\mu_\#(X))$. 
For instance, $X = \mathbb{A}^1$, $\Sym^n \mathbb{A}^1 = \mathbb{A}^n$ and $\mu_\#(\mathbb{A}^1) = q$ whereas $\mu_\#(\mathbb{A}^n) = q^n \neq 
\sigma^n(q)$. Thus, the exponentiability of the induced zeta function measure $\mu_Z$ does not follow from the above Theorem. However, \begin{corollary}[Corollary to Theorem \ref{mainthm}] Let $Z(X,t)$ be the zeta function for $X \in \Var_{\F_q}$ and $\mu_Z$ its induced zeta function measure. Then $$ \zeta_{\mu_Z}(X,u) = \bm{\sigma_u}(Z(X,t)) \in W(W(\mathbb{Z})) $$ and so $\mu_Z$ is a $\lambda$-measure. \end{corollary} \begin{proof} The formula in Theorem \ref{mainthm} shows that for $X \in \Var_{\F_q}$ $$ \zeta_{\mu_Z}(X,u) = \sum_{i,j} (-1)^{i}[[\; [\alpha_{ij}] \;]]. $$ Recall that the map $\bm{\sigma_u}$ on Teichm\"{u}ller elements is given by the double Teichm\"{u}ller element $\bm{\sigma_u}([\alpha]) = [[\;[\alpha]\;]]$. As $\bm{\sigma_u}$ is a ring homomorphism, we have $\zeta_{\mu_Z} = \bm{\sigma_u}(Z(X,t))$. \end{proof} \end{remark} \begin{remark} The formula for $Z(\Sym^n X,t)$ in Theorem \ref{mainthm} is analogous to the MacDonald formulae above, as it is simply the zeta function written in (double) Teichm\"{u}ller elements. And similarly, the formula shows that $\mu_Z$ is a $\lambda$-measure. \end{remark} \section{Lambda rings and Witt rings}\label{lambdarings} We begin by describing the $\lambda$-ring structure on the big ring of Witt vectors. Our presentation here of the big Witt ring follows Bloch \cite{Bloch} and Ramachandran \cite{Ramachandran15}. For more on $\lambda$-rings, the reader may consult Knutson \cite{Knutson}, Yau \cite{Yau}, and Hazewinkel \cite{Hazewinkel}. Let $A$ be a commutative ring with identity. Denote by $\Lambda(A)$ the group of invertible power series $(1+tA[[t]],\times)$ under the usual multiplication of power series. For a ring homomorphism $f:A \rightarrow A'$, let $\Lambda_f:\Lambda(A) \rightarrow \Lambda(A')$ be the obvious induced map. 
Recall the ghost map $\gh:\Lambda(A) \rightarrow A^{\N}$ is defined as, for $P \in \Lambda(A)$, $$ \gh(P) = (b_1,b_2,\ldots) \quad \textrm{where} \quad \frac{t}{P}\frac{dP}{dt} = \sum_{n=1}^\infty b_n t^n. $$ This represents the power series $P(t)$ by a sequence of coordinates $(b_1,b_2,\ldots)$ called ghost coordinates $\gh_n(P) = b_n$. The ghost map is a functorial group homomorphism where $ A^{\N}$ has pointwise addition. Moreover, the relation between the power series coefficients and ghost coordinates can be made explicit: \begin{lemma}\label{newton} For $P(t) = \sum_n a_n t^n \in \Lambda(A)$ and $b_n = \gh_n(P(t))$, we have $$ na_n = b_n + a_1b_{n-1} + \cdots + a_{n-1}b_{1}. $$ This relation uniquely determines $P(t)$ in terms of its ghost coordinates $b_n$ in the case where $A$ has no $\mathbb{Z}$-torsion. \end{lemma} \begin{proof} By definition, $\dfrac{t}{P} \dfrac{dP}{dt} = \sum_{n=1}^\infty b_nt^n$ so that \begin{eqnarray*} P(t)\left( \sum_{n=1}^\infty b_nt^n \right) &=& t \frac{dP}{dt} \\ (1+ a_1t + a_2t^2 + \cdots )(b_1t + b_2t^2 + \cdots) &=& t(a_1 + 2a_2t + 3a_3t^2 + \cdots) \\ &=& a_1t + 2a_2t^2 + 3a_3t^3 + \cdots \end{eqnarray*} and we simply identify coefficients. \end{proof} The big Witt ring $W(A)$ (also called the universal ring of Witt vectors) is the ring whose underlying additive group is $\Lambda(A)$ and whose multiplication $*_W$ is such that $$ (1-at)^{-1} *_W (1-bt)^{-1} = (1-abt)^{-1} \quad \textrm{for } a,b \in A $$ and the association $A \mapsto W(A)$ is functorial. This suffices to define a commutative ring structure on $W(A)$. The elements $[a] = (1-at)^{-1}$ above are called Teichm\"{u}ller elements. The ghost map $\gh:W(A) \rightarrow A^{\N}$ is a ring homomorphism where $A^{\N}$ has the pointwise product. That is, for $P,Q \in W(A)$, $\gh_n(P*_W Q) = \gh_n(P)\gh_n(Q)$, and thus in the case where $A$ has no $\mathbb{Z}$-torsion, multiplication is determined by the pointwise product of ghost coordinates. 
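Lemma \ref{newton} is effectively an algorithm: over a torsion-free ring one can pass back and forth between a series in $\Lambda(A)$ and its ghost coordinates. An illustrative Python sketch (function names are ours), verified on the Teichm\"{u}ller element $[3]$, whose ghost coordinates are $3^n$:

```python
from fractions import Fraction

def from_ghost(b, n_max):
    """Recover P(t) = 1 + a_1 t + ... from ghost coordinates b[1], b[2], ...,
    via Newton's identity n a_n = b_n + a_1 b_{n-1} + ... + a_{n-1} b_1."""
    a = [Fraction(1)]
    for n in range(1, n_max + 1):
        a.append(sum(a[k] * b[n - k] for k in range(n)) / n)
    return a

def ghost(a):
    """Inverse direction: b_n = n a_n - (a_1 b_{n-1} + ... + a_{n-1} b_1)."""
    b = [None]  # b[0] unused; ghost coordinates are indexed from 1
    for n in range(1, len(a)):
        b.append(n * a[n] - sum(a[k] * b[n - k] for k in range(1, n)))
    return b

# Teichmuller element [3] = (1 - 3t)^{-1} = sum_n 3^n t^n has gh_n([3]) = 3^n
coeffs = from_ghost([None] + [3**n for n in range(1, 6)], 5)
assert coeffs == [3**n for n in range(6)]
assert ghost(coeffs)[1:] == [3**n for n in range(1, 6)]
```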
Note that for Teichm\"{u}ller elements, $\gh_n([a]) = a^n$. For each positive integer $n$, the Frobenius map $\Fr_n:W(A) \rightarrow W(A)$ is the ring endomorphism defined as $$ \Fr_n(P(t)) = \prod_{\zeta^n = 1} P(\zeta t^{1/n}). $$ On Teichm\"{u}ller elements $\Fr_n([a]) = [a^n]$ and on ghost coordinates, $\gh_m (Fr_n(P))= \gh_{mn}( P)$. Clearly, we have $\Fr_n \circ \Fr_m = \Fr_{nm}$. Recall that a pre-$\lambda$-ring $(A,\lambda_t)$ is a commutative ring with identity $A$ equipped with a group homomorphism called the structure map $\lambda_t:A \rightarrow \Lambda(A)$, where the coefficients are denoted as $\lambda_t(a) = \sum_{n\geqslant 0} \lambda^n(a) t^n$. Note in particular that $\lambda^0(a) = 1$ and $\lambda^n(a+b) = \sum_{i+j=n} \lambda^i(a)\lambda^j(b)$. Here, $\Lambda(A)$ is a pre-$\lambda$-ring with multiplication and a canonical structure map defined by certain universal polynomials (see Knutson \cite[p. 13]{Knutson}). A map of pre-$\lambda$-rings $f:A \rightarrow A'$ is a ring homomorphism respecting the $\lambda$-ring structure maps. If $\lambda_t:A \rightarrow \Lambda(A)$ is a ring homomorphism, then $A$ is called a $\lambda$-ring; a $\lambda$-ring map is a map of pre-$\lambda$-rings. $\Lambda(A)$ is a $\lambda$-ring. The Witt ring $W(A)$ also has a canonical $\lambda$-ring structure\footnote{Some authors refer to a $\lambda$-ring as a special $\lambda$-ring and a pre-$\lambda$-ring as a $\lambda$-ring.} denoted $\bm{\lambda_u}$ (see Knutson \cite[p. 18]{Knutson}). On Teichm\"{u}ller elements, it is given by $$ \bm{\lambda_u}([a]) = [1] + [a] u \in \Lambda(W(A)) $$ where $\Lambda(W(A)) = 1 + u W(A)[[u]]$. This $\lambda$-ring structure behaves well in the case where $A$ itself is a $\lambda$-ring. 
In particular, we have \begin{proposition}\label{sigma} If $(A, \lambda_t)$ is a $\lambda$-ring, then the opposite $\lambda$-ring map $\sigma_t = \lambda_{-t}^{-1}$, with $\sigma_t(a) := \sum_{n\geqslant 0} \sigma^n(a) t^n$ is a $\lambda$-ring homomorphism $\sigma_t:A \rightarrow W(A)$. In particular, $$ \Lambda_{\sigma_t}(\sigma_t(a)) = \bm{\sigma_u}(\sigma_t(a)) $$ \end{proposition} \begin{proof} Since $A$ is a $\lambda$-ring, $\lambda_t:A \rightarrow \Lambda(A)$ is a $\lambda$-ring map. The Artin-Hasse exponential $\iota:\Lambda(A) \rightarrow W(A), P(t)\mapsto P(-t)^{-1}$ is a $\lambda$-ring map, and $\iota \circ \lambda_t = \sigma_t$. \end{proof} The following examples will be useful: \begin{itemize} \item The ring $A = \mathbb{Z}$ is a $\lambda$-ring with $\lambda_t(a) = (1+t)^a$. Here the map $\sigma_t$ is given by $$ \sigma_t(a) = (1-t)^{-a}, \quad \quad \sigma^n(a) = \left(\begin{array}{c} n+a-1 \\ n \end{array} \right). $$ \item For a $\lambda$-ring $A$, the polynomial ring $A[z]$ can be given a natural $\lambda$-ring structure by setting $\lambda_t(z) = (1+zt)$. Here the map $\sigma_t$ is given by $$ \sigma_t(z) = (1-zt)^{-1} =[z], \quad \quad \sigma^n(z)=z^n. $$ Note that for $A = \mathbb{Z}$ this means that $\sigma_t(az) = (1-zt)^{-a} = a[z]$. \item For arbitrary $A$, consider the canonical $\lambda$-ring structure on $W(A)$. The opposite $\lambda$-ring structure on $W(A)$ is a $\lambda$-ring map $\bm{\sigma_u}:W(A) \rightarrow W(W(A))$. On Teichm\"{u}ller elements $[a] \in W(A)$ we have $\bm{\sigma_u}([a]) = ([1]-[a]u)^{-1}$. We denote this by $[[\;[a]\;]]$, the \emph{double Teichm\"{u}ller element.} \end{itemize} \subsection*{Acknowledgements} The author would like to thank Niranjan Ramachandran for suggesting this project and for eagerly and openly sharing his ideas and insights. 
The author is also indebted to the Department of Mathematics at the University of Maryland, College Park---in particular to helpful discussions with and encouragement from Larry Washington and Jonathan Rosenberg. \input{lambdarings.tex} \input{finitefieldcase.tex} \input{examples.tex} \input{lambdaringmeasures.tex} \bibliographystyle{amsplain}
1202.4101
\section{Introduction} The long time behavior of trap models and related processes with disordered parameters has been the theme of several papers in the recent literature. Since the inaugurating work of Bouchaud~\cite{B}, where the case of the complete graph was shown to exhibit {\em aging}, this case as well as others have been analysed. The model on the complete graph was further studied in~\cite{BF} and~\cite{FM1}, with different points of view, and considering distinct time scales. More recently,~\cite{G} took up the asymmetric case, which is also the model we study here. The trap model in the complete graph is sometimes also called the REM-like trap model, due to its resemblance to a dynamics for the Random Energy Model (REM~\cite{D}). Such a dynamics for the REM, on the hypercube rather than the complete graph, was studied in~\cite{ABG1,ABG2}, where aging results comparable to the ones of Bouchaud were derived. See also~\cite{C1,FL,G1}. Trap and trap-like models associated to correlated energy (mean field) spin glasses have been the object of more recent work: a dynamics for the $p$-spin model was studied in~\cite{ABC,BG}, and results on the GREM-like trap model were obtained in~\cite{FGG}. Trap models on ${\mathbb Z}^d$ have also attracted a lot of interest, in connection with aging as well as with {\em localization}; see~\cite{FIN,AC1,AC2,ACM,FM2} -- results on the asymmetric case were obtained recently in~\cite{BC,C2,M}. Analyses on tori were performed in~\cite{AC,JLT}. In this paper, we revisit the trap model in the complete graph, described briefly below in this introduction, and in full in Section~\ref{sec:tck}. 
Our goal is twofold: \begin{enumerate} \item to propose a representation of the model -- in terms of trap depth, rather than location -- for which scaling limits can be derived in a unified manner in different scaling regimes; \item and to introduce the infinite volume processes which result from these scaling limits, in particular the asymmetric $K$ process. \end{enumerate} Let us now briefly describe the asymmetric trap model in the complete graph with $n$ vertices. This is a continuous time Markov chain on the vertices of that graph, whose mean jump time at site $x$ is given by $\tau_x^{1-a}$, where $a\in[0,1]$ is an asymmetry parameter, and whose transition probability from any site $x$ to any site $y$ is proportional to $\tau^a_y$, where $\{\tau_x\}$ are iid positive random variables in the domain of attraction of an $\alpha$-stable law. The random variable $\tau_x^{1-a}$ may be interpreted as the depth of the trap at site $x$. One readily checks that this dynamics is reversible with respect to the measure whose weights are given by $\{\tau_x\}$. The case $a=0$ is that of the {\em symmetric} model. We call the general case where $a\in[0,1]$ the {\em asymmetric} model. Let $Y_n(t)$ denote the site visited at time $t$. This paper is more immediately related to~\cite{FM1} and~\cite{G}, so let us briefly outline our results here against the background of the ones of those papers. Whereas in the former reference a scaling limit was derived for the symmetric model at times of the order of the deepest trap in the landscape, and then aging results were derived for a class of two-time correlation functions of the limit model at vanishing times, here we present similar limit results for the asymmetric model. Rather than looking at $Y_n(t)$ however, we consider $Z_n(t)=\tau^{1-a}_{Y_n(t)}$, the depth of the currently visited trap. 
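The dynamics just described is straightforward to simulate. The following Python sketch is purely illustrative (it is not part of the paper's formal construction): a Pareto landscape stands in for a generic choice of $\{\tau_x\}$ in the domain of attraction of an $\alpha$-stable law, and the depth process $Z_n(t)$ is recorded along a trajectory:

```python
import random

def simulate_depth_process(n, alpha, a, t_end, seed=0):
    """Illustrative simulation of the asymmetric trap model on the complete
    graph: the chain waits an Exp time of mean tau_x^{1-a} at site x, then
    jumps to y with probability proportional to tau_y^a.  Returns the list
    of (time, depth) pairs, where depth = tau^{1-a} at the visited site."""
    rng = random.Random(seed)
    tau = [rng.paretovariate(alpha) for _ in range(n)]  # heavy-tailed traps
    weights = [t**a for t in tau]                       # jump distribution
    x, t, path = rng.randrange(n), 0.0, []
    while t < t_end:
        depth = tau[x]**(1 - a)
        path.append((t, depth))
        t += rng.expovariate(1.0 / depth)   # mean jump time tau_x^{1-a}
        x = rng.choices(range(n), weights=weights)[0]
    return path

path = simulate_depth_process(n=200, alpha=0.5, a=0.3, t_end=50.0)
```

Note that the stationary weight of site $x$ is proportional to $\tau_x^{1-a}\cdot\tau_x^{a} = \tau_x$, consistent with the reversibility remark above.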
As explained below, this is a convenient representation for taking scaling limits, not only at times of the order of the deepest trap in the landscape, which we do here using this representation (see Theorem~\ref{teo:es}), obtaining a limiting process which we denote by $Z$, but at shorter time scales as well. We call $Z$ the {\em asymmetric $K$ process}, in allusion to the $K$ process introduced in~\cite{FM1}. We further derive a scaling limit result for $Z$ at vanishing times (see Theorem~\ref{teo:asz}), obtaining a limiting process $\hat Z$ which is self-similar of index 1. The latter fact may be interpreted as a fuller aging result for $Z$, involving the dynamics itself, not only a class of correlation functions thereof. Other scaling regimes of $Z_n$ may be analysed with the same approach, with similar results. Scaling limits of asymmetric trap models in the complete graph are also the main theme of~\cite{G}. In that work scaling limits of the {\em clock process} are derived in several scaling regimes (essentially all of them: from ``order 1'', where the volume limit is taken first, and then the time limit, to the scale where the model is virtually at equilibrium, including scales in between, in particular the ones treated here); occurrence of aging and other dynamical phenomena are discussed for each regime. One reason to consider a representation like $Z_n$, as we do here, rather than the clock process, is that, besides the information on the jump times given by the latter process, $Z_n$ provides also location information, absent in that process. For, say, correlation functions which depend only on jump times (like the $\Pi$ functions discussed in Subsection~\ref{ssec:agezz} below; see~(\ref{eq:ag2a}-\ref{eq:ag2b})), the clock process is enough. But other ones require location information, and in those cases the clock process is no longer enough on its own. We discuss two such examples in Subsection~\ref{ssec:agezz} below. 
$Z_n$ and $Z$, as well as their rescaled versions, and $\hat Z$ also, can be described as functions of two related subordinators, the second being obtained as the integral of an independent iid family of mean 1 exponentials with respect to the first one. Once we obtain the limit of the first subordinator in a given scaling regime, a continuity property of the above mentioned function implies a limit result for the original process. Section~\ref{sec:lm} below is devoted to establishing that continuity property (see Lemma~\ref{lemma}) in a somewhat abstract setting, which may turn out to be the setting of similar processes of interest. In Section~\ref{sec:tck} we describe our trap models and $K$ processes in more detail and then, applying the auxiliary result of Section~\ref{sec:lm}, we derive scaling limit results for them, as anticipated above, the one for the trap model in Subsection~\ref{ssec:erg}, and the one for the $K$ process in Subsection~\ref{ssec:agez}. In the closing Subsection~\ref{ssec:agezz} we discuss the derivation of aging results for three particular two-time correlation functions of $Z$ as a corollary to Theorem~\ref{teo:asz}. \section{A continuity lemma about a class of trajectories in $D$} \label{sec:lm} \setcounter{equation}{0} Let $D$ be the space of c\`adl\`ag real trajectories on $\mathbb{R}^+=[0,\infty)$ equipped with the $J_1$ Skorohod metric (see e.g.~\cite{EK} Chapter 3, Section 5). Let $\mathbb{N}^*=\{1,2,\ldots\}$ denote the positive integers. 
Let $S,\,S^\varepsilon,\,\varepsilon>0,$ be nonnegative nondecreasing jump functions in $D$, i.e., suppose that there exist (countable) subsets $A^\varepsilon=\{x_i^\varepsilon,\, i\in {\mathbb N}^* \}$ and $A=\{x_i,\, i\in {\mathbb N}^* \}$ of $\mathbb{R}^+$ and positive number sequences $\{\gamma^\varepsilon_{x_i^\varepsilon},\, i\in {\mathbb N}^* \}$ and $\{\gamma_{x_i},\, i\in {\mathbb N}^* \}$ such that \begin{equation} \label{eq:s} S_r^\varepsilon=\sum_{i\colon x_i^\varepsilon\in[0,r]}\gamma^\varepsilon_{x_i^\varepsilon}<\infty,\quad S_r=\sum_{i\colon x_i\in[0,r]}\gamma_{x_i}<\infty,\quad r\geq0. \end{equation} Consider $\{T_i,\,i\in {\mathbb N}^*\}$, a family of i.i.d.~exponential random variables of mean 1 and let \begin{equation} \label{eq:g} \Gamma_r^\varepsilon=\sum_{i\colon x_i^\varepsilon\in[0,r]}\gamma^\varepsilon_{x_i^\varepsilon}T_i,\quad \Gamma_r=\sum_{i\colon x_i\in[0,r]}\gamma_{x_i}T_i,\quad r\geq0, \end{equation} \begin{equation} \label{eq:ze} Z_t^\varepsilon=\begin{cases} \gamma^\varepsilon_{x^\varepsilon_{i_0}},&\mbox{ if }t\in[\Gamma^\varepsilon_{x^{\varepsilon}_{i_0}-},\Gamma^\varepsilon_{x^\varepsilon_{i_0}})\mbox{ for some }i_0,\\ \mbox{}\,\,\,0,&\mbox{ if }t\notin[\Gamma^\varepsilon_{x^{\varepsilon}_{i}-},\Gamma^\varepsilon_{x^\varepsilon_{i}})\mbox{ for any }i, \end{cases} \end{equation} and \begin{equation} \label{eq:z} Z_t=\begin{cases} \gamma_{x_{i_0}},&\mbox{ if }t\in[\Gamma_{x_{i_0}-},\Gamma_{x_{i_0}})\mbox{ for some }i_0,\\ \mbox{}\,\,\,0,&\mbox{ if }t\notin[\Gamma_{x_{i}-},\Gamma_{x_{i}})\mbox{ for any }i. \end{cases} \end{equation} Below, we will use the symbol $\xrightarrow{J_1}$ to denote (strong) convergence on $(D,J_1)$, while $\xrightarrow{J_1,P}$ will denote weak convergence on $(D,J_1)$ with respect to a given probability measure $P$. 
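For finitely many traps, the construction (\ref{eq:s})--(\ref{eq:z}) can be carried out literally. The Python sketch below (illustrative only; all names are ours) builds $\Gamma$ from weights $\gamma_{x_i}$ and iid mean-one exponentials $T_i$, and evaluates the resulting trajectory $Z$:

```python
import random

def make_Z(xs, gammas, seed=0):
    """Sketch of (2.2)-(2.4) for finitely many traps: given jump locations
    x_i with weights gamma_{x_i}, attach iid Exp(1) variables T_i; Gamma has
    a jump of size gamma_{x_i} T_i at x_i, and Z(t) equals gamma_{x_i} on
    the time interval [Gamma_{x_i -}, Gamma_{x_i})."""
    rng = random.Random(seed)
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    T = [rng.expovariate(1.0) for _ in xs]
    intervals, acc = [], 0.0
    for i in order:
        left = acc
        acc += gammas[i] * T[i]          # Gamma jumps by gamma_{x_i} T_i
        intervals.append((left, acc, gammas[i]))
    def Z(t):
        for left, right, g in intervals:
            if left <= t < right:
                return g
        return 0.0                        # t beyond the last recorded jump
    return Z

Z = make_Z(xs=[0.2, 0.7, 1.5], gammas=[2.0, 0.5, 1.0])
assert Z(0.0) == 2.0    # the first interval starts at time 0
```

With countably many traps one would truncate to the finitely many largest weights, which is essentially the approximation used in the proof below.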
Let $P$ denote the probability measure on $(D,J_1)$ induced by the distribution of $\{T_i,\,i\in {\mathbb N}^*\}$. \begin{lemma} \label{lemma} Let $S^\varepsilon,\,S,\,Z^\varepsilon,\,Z$ be as above. As $\varepsilon\to0$, if $S^\varepsilon\xrightarrow{J_1} S$, then $Z^\varepsilon\displaystyle\xrightarrow{J_1,P} Z$. \end{lemma} \begin{rmk} \label{rmk:int} From~(\ref{eq:s}-\ref{eq:z}), we see that $Z=\Xi(S,\{T_i,\,i\in {\mathbb N}^*\})$ and $Z^\varepsilon=\Xi(S^\varepsilon,\{T_i,\,i\in {\mathbb N}^*\})$, where $\Xi$ is the composition underlying the above definitions. Lemma~\ref{lemma} then establishes a continuity property of the distribution of $\Xi$ in its first argument. \end{rmk} \noindent{\bf Proof of Lemma~\ref{lemma}} We will assume that there exists $R'\in\mathbb{R}^+$ such that $|A\cap[0,R']|=\infty$. The other cases may be argued similarly, with simpler arguments. Let $\Gamma^{-1}$ be the (right continuous) inverse of $\Gamma$. Let us fix $T>0$. Then one readily checks that, given $\delta>0$, there exists $R\notin A$, $R\geq R'$, such that \begin{equation} \label{eq:d1} {\mathbb P}(\Gamma^{-1}(T)\geq R)\leq\delta. \end{equation} Given $\eta>0$, we may choose $\delta'>0$ such that \begin{equation} \label{eq:d2} S_{R+\delta'}-S_R<\eta. \end{equation} Let us now enumerate $A\cap[0,R]=\{x_1,x_2,\ldots\}$ so that $\gamma_{x_1}\geq\gamma_{x_2}\geq\ldots$. From the hypothesis, there exist $m=m(\varepsilon)$, with $m\to\infty$ as $\varepsilon\to0$, and an enumeration of $A^\varepsilon\cap[0,R]=\{x^\varepsilon_1,x^\varepsilon_2,\ldots\}$ such that as $\varepsilon\to0$ \begin{equation} \label{eq:c1} \left(\sup_{1\leq i\leq m}|x^\varepsilon_i-x_i|\right)\vee \left(m\sup_{1\leq i\leq m}|\gamma^\varepsilon_{x^\varepsilon_i}-\gamma_{x_i}|\right)\to0.
\end{equation} It follows from this and the hypothesis that, given $\eta>0$, for all small enough $\varepsilon$ and $1\leq k\leq m$ \begin{equation} \label{eq:c1a} \sum_{i>k}\gamma^\varepsilon_{x^\varepsilon_i}=S^\varepsilon_{R}-\sum_{i=1}^k\gamma^\varepsilon_{x_i^\varepsilon} \leq S_{R+\delta'}-\sum_{i=1}^k\gamma_{x_i}+\eta=S_{R+\delta'}-S_R+\sum_{i>k}\gamma_{x_i}+\eta\leq\sum_{i>k}\gamma_{x_i}+2\eta. \end{equation} We now recall that in the $J_1$ topology, functions are close if they are uniformly close inside arbitrary bounded intervals, after allowing small time distortions (for details see e.g.~\cite{EK} Chapter 3, Section 5). Now, given $k\geq1$ arbitrary but fixed, independent of $\varepsilon$, let $\{\bar x_1,\ldots,\bar x_k\}$ be an enumeration of $\{x_1,\ldots,x_k\}$ such that $\bar x_1<\ldots<\bar x_k$. This leads to an enumeration $\{\bar x_1^\varepsilon,\ldots,\bar x_k^\varepsilon\}$ of $\{x_1^\varepsilon,\ldots,x_k^\varepsilon\}$ such that for $1\leq i\leq k$ \begin{equation} \label{eq:c1b} \bar x^\varepsilon_i\to\bar x_i\mbox{ and }\gamma^\varepsilon_{\bar x^\varepsilon_i}\to\gamma_{\bar x_i} \end{equation} (see paragraph of~(\ref{eq:c1}) above). At this point we relabel $\{T_i\}$ so that $T_1,\ldots,T_k$ are attached to $\bar x_1<\ldots<\bar x_k$ and, correspondingly, to $\bar x_1^\varepsilon,\ldots,\bar x_k^\varepsilon$, which does not change distributions. Let $Z^{(k)}$ and $Z^{(k,\varepsilon)}$ be the respective versions of $Z$ and $Z^\varepsilon$ with the relabeled $\{T_i\}$. Let us now take a family of temporal distortions $(\lambda^\varepsilon)=(\lambda_{k}^{\varepsilon})$ as follows.
For $1\leq i\leq k$, we consider the time intervals $I_i=[t_i^-,t_i]$, where $t_i=\Gamma_{\bar x_i}$ and $t_i^-=\Gamma_{\bar x_i-}$, and $[t_i^{\varepsilon-},t_i^{\varepsilon}]$, where $t_i^{\varepsilon}=\Gamma^\varepsilon_{\bar x^\varepsilon_i}$ and $t_i^{\varepsilon-}=\Gamma^\varepsilon_{\bar x^\varepsilon_i-}$, and then define \begin{equation} \label{eq:lam} \lambda^\varepsilon(s)= \begin{cases} \frac{t_{1}^{\varepsilon -}}{t_{1}^-}\,s,&\mbox{ if }0\leq s\leq t_{1}^-,\\ \frac{t_i^{\varepsilon}-t_i^{\varepsilon -}}{t_i-t_i^-}(s-t_i^-)+t_i^{\varepsilon-},&\mbox{ if }t_i^-\leq s\leq t_i,\\ \frac{t_{i+1}^{\varepsilon -}-t_i^{\varepsilon}}{t_{i+1}^--t_i}(s-t_i)+t_i^{\varepsilon},&\mbox{ if }t_i\leq s\leq t_{i+1}^-,\\ (s-t_{k+1}^-)+t_{k+1}^{\varepsilon-},&\mbox{ if }s\geq t_{k+1}^-, \end{cases} \end{equation} where $t_{k+1}^-:=\Gamma_R$, $t_{k+1}^{\varepsilon-}:=\Gamma^\varepsilon_R$. At this point, we have two tasks: the first one is to control the slopes of the functions $\lambda^\varepsilon$ and the second one is to control the $\sup$ norm of the difference $Z^{(k,\varepsilon)}_{\lambda^\varepsilon(t)} -Z^{(k)}_t$. We start with the second task. Let $\mathcal{M}=\cup_{i=1}^kI_i$. If $t\in\mathcal{M}$, then \begin{equation} \label{eq:c2} |Z^{(k)}_t-Z^{(k,\varepsilon)}_{\lambda^\varepsilon(t)}|\leq\max_{1\leq i\leq k}|\gamma_{\bar x_i}-\gamma^{\varepsilon}_{\bar x_i^{\varepsilon}}|, \end{equation} which goes to zero as $\varepsilon$ goes to zero by~(\ref{eq:c1b}). If $t\in[0,t_{k+1}^-]\setminus\mathcal{M}$, then we have that $Z^{(k)}_t\leq\gamma_{x_{k+1}}$ and $Z^{(k,\varepsilon)}_{\lambda^\varepsilon(t)}\leq\max_{i>k}\gamma^{\varepsilon}_{x^{\varepsilon}_{i}}$.
Hence, \begin{equation} \label{eq:c3} |Z^{(k)}_t-Z^{(k,\varepsilon)}_{\lambda^\varepsilon(t)}|\leq\gamma_{x_{k+1}}\vee\max_{i>k} \gamma^{\varepsilon}_{x^{\varepsilon}_{i}}\leq \gamma_{x_{k+1}}\vee\sum_{i>k}\gamma^\varepsilon_{x^{\varepsilon}_{i}}\leq \sum_{i>k}\gamma_{x_i}+2\eta, \end{equation} for all small enough $\varepsilon$, by~(\ref{eq:c1a}). We now turn to the first task, considering two cases: 1) If $s \in [t_i^-, t_i]$ for some $1\leq i\leq k$, then the slope of $\lambda^\varepsilon$ is given by \begin{equation} \label{eq:c3a} \frac{t_i^{\varepsilon}-t_i^{\varepsilon -}}{t_i-t_i^-} = \frac{\gamma^\varepsilon_{\bar x_i^{\varepsilon}} T_i}{ \gamma_{\bar x_i} T_i} = \frac{\gamma^\varepsilon_{\bar x_i^{\varepsilon}}}{ \gamma_{\bar x_i}} \rightarrow 1 \end{equation} as $\varepsilon\to0$, by~(\ref{eq:c1b}). 2) If $s \in [t_i, t_{i+1}^-]$ for some $0\leq i\leq k$, where $t_0:=0$, then it suffices to prove that \begin{eqnarray} \label{eq:c4} &t_i^{\varepsilon} \rightarrow t_i&\\ \label{eq:c5} &t_i^{\varepsilon -} \rightarrow t_i^-& \end{eqnarray} as $\varepsilon\to0$ in probability. In both cases, the absolute value of the difference between the right and left hand sides is bounded above by \begin{equation} \label{eq:c6} \sum_{i=1}^m|\gamma^\varepsilon_{x_i^{\varepsilon}}-\gamma_{x_i}|T_i+\sum_{i>m}|\gamma^\varepsilon_{x_i^{\varepsilon}}-\gamma_{x_i}|T_i. \end{equation} The first term vanishes almost surely as $\varepsilon\to0$ by~(\ref{eq:c1}) and the Law of Large Numbers, and, given $\eta>0$, the expected value of the second term is bounded above by \begin{equation} \label{eq:c7} \sum_{i>m}\gamma^\varepsilon_{x_i^{\varepsilon}}+\sum_{i>m}\gamma_{x_i}\leq2\sum_{i>m}\gamma_{x_i}+\eta, \end{equation} for all small enough $\varepsilon$, where use is made of~(\ref{eq:c1a}) in the latter inequality, and~(\ref{eq:c4},\ref{eq:c5}) follow since $m\to\infty$ as $\varepsilon\to0$ and $\eta$ is arbitrary.
To conclude, given $0<\zeta<1,\delta>0$, choose $T>-\log(\zeta/2)$, and then $R$ satisfying~(\ref{eq:d1}), and then $\delta'$ satisfying~(\ref{eq:d2}) with $\eta=\zeta/4$, and then $k$ such that $\sum_{i>k}\gamma_{x_i}<\zeta/2$. Choosing now $\lambda^\varepsilon$ as in~(\ref{eq:lam}), we conclude that \begin{equation} \label{eq:c8} \limsup_{\varepsilon\to0}{\mathbb P}(d(Z^{(k,\varepsilon)},Z^{(k)})>\zeta)\leq\delta, \end{equation} where $d$ is the $J_1$ Skorohod distance on $D$ (see~\cite{EK} Chapter 3, Section 5). Since $Z^{(k,\varepsilon)}=Z^\varepsilon$ and $Z^{(k)}=Z$ in distribution for all fixed $k$ and $\varepsilon$ small enough, the result follows. $\square$ Let us now explain how Lemma~\ref{lemma} will be used in the sequel. Our aim is to apply it to a case where $S^\varepsilon$ and $S$ are random objects, in fact subordinators, with parameters that are themselves random, which we call {\em environment}. Both $S^\varepsilon$ and $S$, as well as their respective environments, will be independent of $\{T_i\}$, and the convergence $S^\varepsilon\rightarrow S$ will hold only in distribution: either 1) the joint distribution of the environment and the subordinators, or 2) the distributions of subordinators given the environment, for almost every realization of the environment. In both cases, we may use the Skorohod representation theorem (see e.g.~\cite{WW} Theorem 3.2.2). In case 1) we will first explicitly choose a convenient version of the environment, for which the distribution of the subordinator, given the environment, converges for almost every realization of the environment; with the modified environment, we are effectively in case 2. We can then, by Skorohod representation, in both cases, for each choice of the environment, choose versions of the subordinators that converge almost surely, and then we are in the setting of Lemma~\ref{lemma}. 
It is clear that the conclusion of the lemma holds for the original subordinator, where the distribution referred to in the lemma is the joint distribution of $\{T_i\}$ and the subordinators given the original environment in case 2, and the modified environment in case 1, for almost every realization of that environment in each case. In case 1, the result of the lemma will then hold for the overall joint distribution of $\{T_i\}$, the subordinators given the environment, and the environment. Establishing the convergence in distribution of the subordinators is done by verifying the convergence of the respective Laplace exponents. \section{Application to trap models on the complete graph and $K$ processes} \label{sec:tck} \setcounter{equation}{0} We will apply the lemma above to show scaling limit results for trap models on the complete graph and for $K$ processes. We introduce these two processes next. We first consider the trap model on the complete graph \begin{equation} \label{eq:t0} K_n=\{\{1,\ldots,n\},\,\{(x,y),\,x,y=1,\ldots,n\}\} \end{equation} with $n$ vertices (differently from the usual definition, here we include self-loops, for convenience -- this should not matter in the convergence results below): $Y_n=(Y_n(t))_{t\geq 0}$ is a continuous time Markov chain with jump rate at site $x$ given by \begin{equation} \label{eq:t1} \tau_x^{-(1-a)}, \end{equation} and transition probability from site $x$ to site $y$ given by \begin{equation} \label{eq:t2} \frac{\tau_y^a}{\sum_{z=1}^n\tau_z^a}, \end{equation} where $a\in[0,1]$ is a parameter, and \begin{equation} \label{eq:tau} \tau:=\{\tau_x,\, x=1,2,\ldots\} \end{equation} is an independent family of positive random variables with common distribution in the domain of attraction of a stable law of index $\alpha$, $0<\alpha<1$, that is, \begin{equation} \label{eq:tau1} \mathbb{P}(\tau_1>t)=\frac{L(t)}{t^{\alpha}},\mbox{ }t>0, \end{equation} where $L$ is a slowly varying function at infinity.
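As a concrete instance of~(\ref{eq:tau1}) one may take $L\equiv1$, i.e.\ a Pareto tail; the following Python sketch (the function name and parameters are ours) samples such a family by inverse transform:

```python
import random

def sample_tau(n, alpha, seed=0):
    """Sample n trap parameters tau_x with the Pareto tail
    P(tau > t) = t^(-alpha) for t >= 1 (i.e. L == 1 in the text),
    a distribution in the domain of attraction of an alpha-stable law
    when 0 < alpha < 1.  Inverse transform: if U is uniform on (0, 1],
    then U^(-1/alpha) has the required tail."""
    rng = random.Random(seed)
    # 1.0 - rng.random() lies in (0, 1], avoiding a zero division below
    return [(1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n)]
```

For $\alpha<1$ the $\tau_x$'s have infinite mean, so for large $n$ the few deepest traps carry a macroscopic fraction of $\sum_x\tau_x$, which is the mechanism behind the scaling limits below.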
We call $Y_n$ an {\em asymmetric} or {\em weighted trap model on the complete graph} with asymmetry parameter $a$, mean jump time parameters $\{\tau_x^{1-a},\,x=1,\ldots,n\}$ and weights $\{\tau_x^{a},\,x=1,\ldots,n\}$. The latter set of parameters may indeed be seen as unnormalized weights of the transition probabilities of $Y_n$. Notice that the $a=0$ (symmetric) case corresponds to uniform weights. We will consider the following construction of $Y_n$. Let \begin{equation} \label{eq:caln} \mathcal{N}=\{N^{(x)}:=(N_r^{(x)})_{r\geq 0},\,x\in\mathbb{N}^*\} \end{equation} be a family of independent Poisson counting processes such that the rate of $N^{(x)}$ is $\tau_x^a$. Let $\sigma_j^{(x)}$ be the $j$-th event time of $N^{(x)}$, $j\geq1$. Let also \begin{equation} \label{eq:calt} \mathcal{T}=\{T_i^{(x)},\,i,x\in\mathbb{N}^*\} \end{equation} be independent mean 1 exponential random variables, independent of $\mathcal{N}$ and $\tau$, and define for $r\geq0$ \begin{equation} \label{eq:t2a} S_n(r)=\sum_{x=1}^{n}\tau_x^{1-a}\,N_r^{(x)},\quad\Gamma_n(r)=\sum_{x=1}^{n}\tau_x^{1-a}\sum_{i=1}^{N_r^{(x)}}T_i^{(x)}. \end{equation} Then \begin{equation} \label{eq:t3} Y_n(t)=x,\mbox{ if }\Gamma_n(\sigma_j^{(x)}-)\leq t<\Gamma_n(\sigma_j^{(x)})\,\mbox{ for some }x,j\geq1, \end{equation} is a construction of $Y_n$ as described above, with initial state distributed on $\{1,\ldots,n\}$ in such a way that site $x$ has probability weight proportional to $\tau_x^a$, $x\in\{1,\ldots,n\}$. \begin{rmk} \label{rmk:id} Regarding the latter point, notice that the initial state of $Y_n$ is the one whose Poisson mark is the earliest, so it corresponds to the minimum of $n$ independent exponential random interarrival times with rates $\tau_x^a$, $x\in\{1,\ldots,n\}$, and it is well known that the probability that the minimum of $n$ independent exponential random variables is a given such random variable is proportional to its rate.
\end{rmk} Below we will be interested in \begin{equation} \label{eq:zn} Z_n(t)=\tau^{1-a}_{Y_n(t)}. \end{equation} This is the representation of the process alluded to in the introduction above. It has been considered in~\cite{BF}, where the symmetric ($a=0$) case was studied, and a (single time) scaling limit result was derived for it, first taking the volume, and then the time, to infinity (see Proposition 2.10 in that reference) -- this is an aging regime not considered in this paper, but rather in~\cite{G}. \begin{rmk} \label{rmk:rezn} $Z_n$ and $Y_n$ may be seen as processes in random environment, where $\tau$ is the set of random parameters acting as environment. Indeed, given $\tau$, both are Markovian (this should be clear for $Y_n$, but a moment's thought reveals that it is true for $Z_n$ as well, even when distinct $i$'s share the same value of $\tau_i$). Notice also that $\tau$ is an environment for $S_n$ as well, which for each $n\geq1$ is a subordinator for every fixed such environment (recall the discussion at the end of Section~\ref{sec:lm}). This aspect, which is characteristic of the complete graph, makes our approach particularly suitable, since by an application of (the continuity) Lemma~\ref{lemma}, we are left with establishing convergence of subordinators (in the Skorohod topology), which reduces to showing convergence of Laplace exponents (in the topology of real numbers), which is relatively simple, as we will see below. \end{rmk} \begin{rmk} \label{rmk:gcon1} Given $S_n$, $Z_n$ may be identified in distribution with $\Xi(S_n,\{T_i,\,i\in {\mathbb N}^*\})$, with $\Xi$ introduced in Remark~\ref{rmk:int}. \end{rmk} We now turn to the $K$ process, a Markov process in continuous time on $\bar{\mathbb{N}}^*=\{1,2,\ldots,\infty\}$ constructed in a similar way as $Y_n$ above, as follows.
Let $\gamma=\{\gamma_x,\,x\in[0,\infty)\}$ be the increments of an $\alpha$-stable subordinator in $[0,\infty)$ given by a Poisson process ${\cal P}$ in $(0,\infty)\times(0,\infty)$ with intensity measure \begin{equation} \label{eq:pp} \alpha x^{-1-\alpha}\,dx\,dy. \end{equation} It is well known that the {\em nonzero} set $\{x\in[0,\infty):\,\gamma_x>0\}$ is countable, so in particular the sums over $[0,1]$ below have a countable number of nonzero terms only, and thus make the usual sense, almost surely. Let \begin{equation} \label{eq:hcaln} \hat{\mathcal{N}}=\{\hat N^{(x)}:=(\hat N_r^{(x)})_{r\geq 0},\,x\in[0,1]\} \end{equation} be a family of independent Poisson counting processes such that the rate of $\hat N^{(x)}$ is $\gamma_x^a$, where $\hat N^{(x)}\equiv0$ whenever $\gamma_x=0$. Let $\hat\sigma_j^{(x)}$ be the $j$-th event time of $\hat N^{(x)}$, $j\geq1$. Let also \begin{equation} \label{eq:hcalt} \hat{\mathcal{T}}=\{\hat T_i^{(x)},\,i\in\mathbb{N}^*,\,x\in[0,1]\} \end{equation} be a family of iid mean 1 exponential random variables independent of $\hat{\mathcal{N}}$. Define for $r\geq0$ \begin{equation} \label{eq:t2ab} S(r)=\sum_{x\in[0,1]}\gamma_x^{1-a}\hat N_r^{(x)},\quad\Gamma(r)=\sum_{x\in[0,1]}\gamma_x^{1-a}\sum_{i=1}^{\hat N_r^{(x)}}\hat T_i^{(x)}, \end{equation} and then set \begin{equation} \label{eq:t4} Y_t=\begin{cases} x,&\mbox{ if }\Gamma(\hat\sigma_j^{(x)}-)\leq t<\Gamma(\hat\sigma_j^{(x)})\,\mbox{ for some }x,j\geq1,\\ \infty,&\mbox{ otherwise.} \end{cases} \end{equation} \begin{rmk} \label{rmk:k} It can be verified that when $a>\alpha$, $Y$ is a jump process, and so there is almost surely no $t$ for which $Y(t)=\infty$ (since in this case $\cup_{j,x}[\Gamma(\hat\sigma_j^{(x)}-),\Gamma(\hat\sigma_j^{(x)}))=[0,\infty)$). And in the case where $a\leq\alpha$, there almost surely exist $t$'s for which $Y(t)=\infty$.
(One way to check these claims is by verifying that when $a>\alpha$, $\{\hat\sigma_j^{(x)};\,j\geq1, x\in[0,1]\}$ is a discrete subset of $[0,\infty)$ almost surely, and when $a\leq\alpha$, it is almost surely dense in $[0,\infty)$, and these in turn follow from the fact that $\sum_{x\in[0,1]}\gamma_x^a$ is almost surely finite in the former case, and infinite in the latter one.) \end{rmk} Let \begin{equation} \label{eq:zt} Z_t=\gamma^{1-a}_{Y_t}, \end{equation} where $\gamma_\infty$ should be interpreted as $0$. \begin{rmk} \label{rmk:rez} $Z$ and $Y$ may be seen as processes in random environment, where $\gamma$ (more specifically, $\gamma|_{[0,1]}=\{\gamma_x,\,x\in[0,1]\}$) is the environment. Indeed, given $\gamma$, both are Markovian. $\gamma|_{[0,1]}$ is also an environment for $S$, which is a subordinator for every fixed such environment (recall the discussion at the end of Section~\ref{sec:lm}). \end{rmk} \begin{rmk} \label{rmk:gcon2} Given $S$, $Z$ may be identified in distribution with $\Xi(S,\{T_i,\,i\in {\mathbb N}^*\})$, with $\Xi$ introduced in Remark~\ref{rmk:int}. \end{rmk} \begin{rmk} \label{rmk:rep} In~\cite{FM1} and other references the representations used for the trap model and $K$ process are the ones given here by $Y_n(t)$ and $Y(t)$, $t\geq0$, respectively (see~(\ref{eq:t3}) and~(\ref{eq:t4}) above). The alternative representation $Z_n(t)$ and $Z(t)$, $t\geq0$, that we adopt here (see~(\ref{eq:zn}) and~(\ref{eq:zt}) above) has the advantage of leading to a unifying approach for taking the scaling limits of those processes, as explained in the introduction and as will be done in detail in Subsections~\ref{ssec:erg} and~\ref{ssec:agez} below. \end{rmk} In the next subsection, we will consider a particular scaling regime for $Z_n$ and establish a scaling limit result under which $Z_n$ converges to the $K$ process. Then, in the following subsection we will derive a scaling limit result satisfied by $Z$.
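Before turning to the scaling limits, the dynamics of $Z_n$ can be simulated directly. The Python sketch below (our own helper, with hypothetical inputs) uses the fact that, in the clock construction~(\ref{eq:t2a})-(\ref{eq:t3}), the next visited site is $x$ with probability proportional to $\tau_x^a$ (the earliest Poisson mark wins) and the holding time there is exponential with mean $\tau_x^{1-a}$:

```python
import random

def simulate_zn(taus, a, t_max, seed=0):
    """Simulate t -> Z_n(t) = tau_{Y_n(t)}^(1-a) up to time t_max for the
    trap model on the complete graph: each successive site x is chosen with
    probability proportional to tau_x^a, and the walker then waits an
    exponential time of mean tau_x^(1-a) at that site.
    Returns a list of (start_time, site, z_value) triples."""
    rng = random.Random(seed)
    weights = [t ** a for t in taus]
    path, t = [], 0.0
    while t < t_max:
        x = rng.choices(range(len(taus)), weights=weights)[0]
        path.append((t, x, taus[x] ** (1 - a)))
        t += rng.expovariate(1.0 / taus[x] ** (1 - a))  # mean tau_x^(1-a)
    return path
```

With $a=0$ the transition probabilities are uniform and only the holding times depend on $\tau$, recovering the symmetric trap model.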
All proofs will rely on Lemma~\ref{lemma} above to get the results from the convergence of the appropriate $S^\varepsilon$ in each case (see statement of that lemma and its preliminaries above). In order to obtain the latter convergence, since we have subordinators in all cases, it will suffice to establish convergence of the associated Laplace exponents. The last subsection is devoted to a discussion on aging results (for two-time correlation functions) satisfied by $Z$ as a consequence of Theorem~\ref{teo:asz} and other results. \subsection{Scaling limit for $Z_n$ at large times} \label{ssec:erg} For $r\geq0$, let \begin{equation}\label{eq:u} U(r)=\sum_{x\in[0,r]}\gamma_x. \end{equation} Given a sequence $(c_n)_{n\geq1}$, set \begin{equation} \label{eq:sczn} Z_t^{(n)}=c_n^{1-a} Z_n(t/c_n^{1-a}),\,\,t\geq0. \end{equation} Let $P_1$ denote the probability measure induced on $(D,J_1)$ by the joint distribution of $\tau$, ${\cal N}$ and ${\cal T}$ -- given above in respectively~(\ref{eq:tau}),~(\ref{eq:caln}) and~(\ref{eq:calt}). \begin{theorem} \label{teo:es} There exists a deterministic sequence $(c_n)_{n\geq1}$ such that \begin{equation} \label{eq:znz} (Z_t^{(n)})_{t\geq0}\xrightarrow{J_1,P_1}(Z_t)_{t\geq0} \end{equation} as $n\to\infty$. \end{theorem} The sequence $(c_n)$ will be exhibited explicitly in the proof below (see~(\ref{eq:cn})). \begin{proof}\mbox{} By Lemma~\ref{lemma}, and recalling the discussion at the end of Section~\ref{sec:lm}, it is enough to establish the limit \begin{equation} \label{eq:e1} S^{(n)}\xrightarrow{J_1,P_1} S, \end{equation} where \begin{equation} \label{eq:e1a} S_r^{(n)}:=c_n^{1-a}S_n(c_n^ar)=\sum_{x=1}^n(c_n\tau_x)^{1-a}N_{c_n^ar}^{(x)}, \end{equation} since, given $S^{(n)}$, $Z^{(n)}$ is identically distributed with $\Xi(S^{(n)},\{T_i,\,i\in {\mathbb N}^*\})$ -- see Remark~\ref{rmk:int}.
In order to establish~(\ref{eq:e1}), we will make a precise choice of $c_n$ and switch to another version of $\tau$, which properly rescaled converges strongly, rather than weakly. We follow \cite{FIN}, Section 3. Let \begin{eqnarray} \label{eq:cn} &c_n=\left(\inf\{t\geq0:{\mathbb P}(\tau_1>t)\leq n^{-1}\}\right)^{-1},&\\ \label{eq:taun} &\tau_x^{(n)}:=c_n^{-1}\,g_n\!\left(U(x)-U(x-1/n)\right),\,x\in(0,1]\cap\frac1n{\mathbb Z}&\\ \label{eq:gn} &g_n(y)=c_n\,G^{-1}(n^{1/\alpha}y),\,y\geq0, \end{eqnarray} where $G^{-1}$ is the inverse of the function $G$ defined by the following condition: \begin{equation} \label{eq:G} {\mathbb P}(U(1)>G(x))={\mathbb P}(\tau_1>x),\,x\geq0. \end{equation} We then have that $\tau^{(n)}:=\{\tau_{x/n}^{(n)},\,x=1,\ldots,n\}$ is equally distributed with $\{\tau_x,\,x=1,\ldots,n\}$ for every $n\geq1$. For $x\in(0,1]\cap\frac1n{\mathbb Z}$, let now \begin{equation} \label{eq:gan} \gamma^{(n)}_x=c_n\tau_x^{(n)}, \end{equation} and define \begin{equation} \label{eq:tsn} \tilde S_r^{(n)}:=\sum_{x=1}^n(\gamma^{(n)}_{x/n})^{1-a}\tilde N_{r}^{(n,x)}, \end{equation} where, given $\gamma$, \begin{equation} \label{eq:tcaln} \tilde{\mathcal{N}}^{(n)}=\{\tilde N^{(n,x)}:=(\tilde N_r^{(n,x)})_{r\geq 0},\,x\in{\mathbb N}^\ast\} \end{equation} is a family of independent Poisson counting processes such that the rate of $\tilde N^{(n,x)}$ is $(\gamma^{(n)}_{x/n})^a$. One now readily checks, using the identity in distribution of $\tau^{(n)}$ and $\{\tau_x,\,x=1,\ldots,n\}$ for every $n\geq1$, together with the above definitions, that $\tilde S^{(n)}:=(\tilde S_r^{(n)})_{r\geq0}$ has the same distribution (induced by $(\gamma,\tilde{\mathcal{N}}^{(n)})$) as $S^{(n)}$ under $P_1$ for every $n\geq1$. So it is enough to show that \begin{equation} \label{eq:e1c} \tilde S^{(n)}\xrightarrow{J_1,P_2} S, \end{equation} where $P_2$ is the probability measure induced on $(D,J_1)$ by the joint distribution of $\gamma$ and $\tilde{\mathcal{N}}^{(n)}$.
Now since, given $\gamma$, $\tilde S^{(n)}$ is a subordinator for each $n\geq1$, it is enough to show the convergence of the Laplace exponents of $\tilde S^{(n)}$, $n\geq1$, as $n\to\infty$, for almost every realization of $\gamma$, to the Laplace exponent of $S$ given $\gamma$, which is itself a subordinator. (See Corollary 3.6 page 374 in \cite{JS}.) A straightforward computation yields \begin{equation} \label{eq:e3} \tilde\varphi_n(\lambda):=\sum_{x\in(0,1]\cap\frac1n{\mathbb Z}}(\gamma_x^{(n)})^a(1-e^{-\lambda (\gamma_x^{(n)})^{1-a}}) \end{equation} as the Laplace exponent of $\tilde S^{(n)}$, $n\geq1$. Now let \begin{equation} \label{eq:taudelta} {\mathfrak T}_{\delta}=\{x\in[0,1]:\, \gamma_x>\delta\}=\{x_1<\ldots<x_K\}, \end{equation} and \begin{equation} \label{eq:taudeltan} {\mathfrak T}^{(n)}_{\delta}=\left\{x^{(n)}_1=\frac1n\lceil nx_1\rceil<\ldots<x^{(n)}_K=\frac1n\lceil nx_K\rceil\right\}, \end{equation} where the strict inequalities in~(\ref{eq:taudeltan}) hold provided $n$ is large enough (for each fixed $\delta$). Lemma 3.1 in~\cite{FIN} implies that for every $\delta>0$ \begin{equation} \label{eq:e4} \sum_{x\in{\mathfrak T}^{(n)}_{\delta}}(\gamma_x^{(n)})^a(1-e^{-\lambda (\gamma_x^{(n)})^{1-a}})\to\sum_{x\in{\mathfrak T}_{\delta}}\gamma_x^a(1-e^{-\lambda \gamma_x^{1-a}}) \end{equation} almost surely as $n\to\infty$. One also readily checks that \begin{equation} \label{eq:e5} \sum_{x\in(0,1]\cap\frac1n{\mathbb Z}\setminus{\mathfrak T}^{(n)}_{\delta}}(\gamma_x^{(n)})^a(1-e^{-\lambda (\gamma_x^{(n)})^{1-a}}) \leq \lambda\sum_{x\in(0,1]\cap\frac1n{\mathbb Z}\setminus{\mathfrak T}^{(n)}_{\delta}}\gamma_x^{(n)}.
\end{equation} Since, as argued in the paragraphs of (3.25)-(3.28) in~\cite{FIN}, the $\lim_{\delta\to0}\limsup_{n\to\infty}$ of the sum on the right hand side of~(\ref{eq:e5}) vanishes almost surely, we may conclude that \begin{equation} \label{eq:e6} \tilde\varphi_n(\lambda)\to\varphi(\lambda):=\sum_{x\in[0,1]}\gamma_x^a(1-e^{-\lambda \gamma_x^{1-a}}),\,\,\lambda\geq0, \end{equation} almost surely. This convergence holds in principle for each $\lambda\geq0$, but it may be argued to hold simultaneously for every $\lambda\geq0$ from the monotonicity of $\tilde\varphi_n$ for every $n\geq1$, and the continuity of $\varphi$. The right hand side of~(\ref{eq:e6}) is the Laplace exponent of $S$ given $\gamma$, so the proof is complete. \end{proof} \subsection{Scaling limit of $Z$ at small times} \label{ssec:agez} In this subsection, we assume $0\leq a<\alpha$. Let \begin{equation} \label{eq:scz} Z_t^{(\varepsilon)}=\varepsilon^{-1} Z_{\varepsilon t}. \end{equation} Before stating a convergence result for $Z^{(\varepsilon)}$, let us describe the limit process. Let $(\hat S_t)_{t\geq0}$ be an $\hat\alpha$-stable subordinator, where \begin{equation} \label{eq:ha} \hat\alpha=\frac{\alpha -a}{1-a}, \end{equation} and whose Laplace exponent is given by $\hat\varphi(\lambda)=\hat c\lambda^{\hat\alpha}$, where $\hat c$ is a constant to be determined below. We may then write $\hat S$ as a partial sum of its increments as follows: \begin{equation} \label{eq:hu} \hat S_r=\sum_{x\in[0,r]}\hat\gamma_x, \end{equation} where $\{\hat\gamma_x,\,x\in[0,\infty)\}$ are the increments of $\hat S$. Let now \begin{equation} \label{eq:hv} \hat\Gamma_r=\sum_{x\in[0,r]}\hat\gamma_xT_x, \end{equation} where \begin{equation} \label{eq:ctp} {\cal T}':=\{T_x,\,x\in[0,\infty)\} \end{equation} is an iid family of mean 1 exponential random variables, independent of $\hat S$.
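The jump structure of $\hat S$ and the definition~(\ref{eq:hv}) of $\hat\Gamma$ can be illustrated numerically. The Python sketch below (helper names and the truncation level `delta` are ours, and we assume for simplicity the normalization in which the L\'evy measure is $\hat\alpha x^{-1-\hat\alpha}\,dx$, as in~(\ref{eq:pp}) with $\hat\alpha$ in place of $\alpha$) samples the jumps of size larger than $\delta$ over locations in $[0,1]$: their number is Poisson$(\delta^{-\hat\alpha})$ and each size is $\delta$ times a Pareto($\hat\alpha$) variable.

```python
import random

def stable_jumps(alpha_hat, delta, seed=0):
    """Jumps of size > delta, over locations x in [0,1], of a stable
    subordinator with (assumed) Levy measure alpha_hat * x^(-1-alpha_hat) dx:
    their number is Poisson(delta^(-alpha_hat)) and each size is
    delta * Pareto(alpha_hat).  Returns (location, jump_size) pairs;
    the countably many jumps of size <= delta are truncated away."""
    rng = random.Random(seed)
    mean = delta ** (-alpha_hat)
    count, s = 0, rng.expovariate(1.0)
    while s < mean:  # count rate-1 Poisson arrivals in [0, mean]
        count += 1
        s += rng.expovariate(1.0)
    return [(rng.random(), delta * (1.0 - rng.random()) ** (-1.0 / alpha_hat))
            for _ in range(count)]

def gamma_hat_r(jumps, r, seed=1):
    """Evaluate Gamma_hat_r = sum over jumps at locations x <= r of
    gamma_hat_x * T_x, attaching an independent mean-1 exponential T_x
    to every jump (drawn for all jumps so T_x is consistent across r)."""
    rng = random.Random(seed)
    total = 0.0
    for (x, g) in jumps:
        t_x = rng.expovariate(1.0)
        if x <= r:
            total += g * t_x
    return total
```

Truncation is only a simulation device: the small jumps form an almost surely convergent sum, so taking $\delta$ small approximates $\hat\Gamma$ arbitrarily well.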
\begin{rmk} \label{rmk:v} One may readily check that $\hat\Gamma$ is also an $\hat\alpha$-stable subordinator (under the joint distribution of $\hat S$ and $\{T_x,\,x\in[0,\infty)\}$). \end{rmk} Now define \begin{equation} \label{eq:hz} \hat Z_t=\begin{cases} \hat\gamma_{x},&\mbox{ if }t\in[\hat\Gamma_{x-},\hat\Gamma_{x})\mbox{ for some }x\in[0,\infty)\\ \mbox{}\,\,\,0,&\mbox{ for all other }t\geq0,\mbox{ if any}. \end{cases} \end{equation} \begin{rmk} \label{rmk:rehz} $\hat Z$ may be seen as a process in random environment, where $\hat S$ is the environment. Indeed, given $\hat S$, $\hat Z$ is Markovian. And the distribution of $\hat Z$ (integrated over the environment) makes it a self-similar process of index $1$, that is, $(\hat Z_t)_{t\geq0}=(c^{-1}\hat Z_{ct})_{t\geq0}$ in distribution for every constant $c>0$. This latter property explains the aging behavior of $Z$ in its small time scaling regime, as established below. \end{rmk} \begin{rmk} \label{rmk:gcon3} Given $\hat S$, $\hat Z$ may be identified in distribution with $\Xi(\hat S,\{T_i,\,i\in {\mathbb N}^*\})$, with $\Xi$ introduced in Remark~\ref{rmk:int}. \end{rmk} Before we state this subsection's result, let, for $\gamma$ fixed, $P_3=P_3^\gamma$ denote the probability measure induced on $(D,J_1)$ by the joint distribution of $\hat{\cal N}$ and $\hat{\cal T}$ -- given above in respectively~(\ref{eq:hcaln}) and~(\ref{eq:hcalt}). \begin{theorem} \label{teo:asz} If $0\leq a<\alpha$ then for almost every $\gamma$ \begin{equation} \label{eq:zu} (Z_t^{(\varepsilon)})_{t\geq0}\xrightarrow{J_1,P_3}(\hat Z_t)_{t\geq0} \end{equation} as $\varepsilon\to0$.
\end{theorem} \begin{rmk} \label{rmk7} Perhaps more precisely, Theorem~\ref{teo:asz} states that for almost every $\gamma$, the distribution of $(Z_t^{(\varepsilon)})$ under $P_3$ converges to that of $(\hat Z_t)$ under $P_4$, the probability measure induced on $(D,J_1)$ by the joint distribution of $\gamma$ and ${\cal T}'$. \end{rmk} \begin{cor} \label{cor} If $0\leq a<\alpha$ then \begin{equation} \label{eq:zu1} (Z_t^{(\varepsilon)})_{t\geq0}\xrightarrow{J_1,P_5}(\hat Z_t)_{t\geq0} \end{equation} as $\varepsilon\to0$, where $P_5$ denotes the probability measure induced on $(D,J_1)$ by the joint distribution of $\gamma$,~$\hat{\cal N}$ and $\hat{\cal T}$. \end{cor} \begin{rmk} \label{rmk7+} The above corollary follows immediately from the preceding theorem, since $P_5$ is obtained by integrating $P_3$ over the distribution of $\gamma$. Below we will nevertheless give a direct (sketchy) argument for the corollary, much simpler than the proof of the theorem, which comes next. \end{rmk} {\noindent\bf Proof of Theorem~\ref{teo:asz}} Let \begin{equation} \label{eq:se} \hat S_r^{(\varepsilon)}=\varepsilon^{-1}\sum_{x\in[0,1]}\gamma_x^{1-a}\hat N^{(x)}_{\varepsilon^{\hat\alpha}r},\,r\geq0, \end{equation} where $\hat\alpha$ was introduced in~(\ref{eq:ha}) above. Then, given $\gamma$ and $\varepsilon>0$, $(\hat S_t^{(\varepsilon)},\,t\geq0)$ is a subordinator, and its Laplace exponent equals \begin{equation} \label{eq:lese} \hat\varphi^{(\varepsilon)}(\lambda)=\varepsilon^{\hat\alpha}\sum_{x\in[0,1]}\gamma_x^a(1-e^{-\lambda\varepsilon^{-1}\gamma_x^{1-a}}),\,\,\lambda\geq0. \end{equation} By Lemma~\ref{lemma}, and recalling the discussion at the end of Section~\ref{sec:lm}, to get the result, it is enough to establish the limit \begin{equation} \label{eq:e1b} \hat S^{(\varepsilon)}\xrightarrow{J_1,P_3}\hat S \end{equation} as $\varepsilon\to0$ for a.e.~$\gamma$.
Since we are dealing with subordinators, it suffices to show that for almost every $\gamma$ \begin{equation} \label{eq:a1} \hat\varphi^{(\varepsilon)}(\lambda)\to\hat c\lambda^{\hat\alpha},\,\,\lambda\geq0, \end{equation} as $\varepsilon\to0$, for some positive finite constant $\hat c$. This is obvious for $\lambda=0$, so let us fix $\lambda>0$, and write \begin{equation} \label{eq:a2} \lambda^{-\hat\alpha}\hat\varphi^{(\varepsilon)}(\lambda)=R^{-\alpha}\sum_{x\in[0,1]}(R\gamma_x)^a(1-e^{-(R\gamma_x)^{1-a}}) \end{equation} with $R=(\varepsilon^{-1}\lambda)^{\frac{1}{1-a}}$, and then argue in the sequel that the left hand side converges to a constant as $R\to\infty$ for a.e.~$\gamma$. We start by considering \begin{equation} \label{eq:y} W:=R^{-\alpha}\sum_{x\in [0,1]}\sum_{i=1}^{R\delta^{-1}}(R\gamma_x)^a(1-e^{-(R\gamma_x)^{1-a}})\mathbb{I}_{\{\gamma_x\in[\frac{\delta}{R}(i-1),\frac{\delta}{R}i]\}}. \end{equation} Since the difference between $W$ and the left hand side of~(\ref{eq:a2}) is bounded above by \begin{equation} R^{-(\alpha-a)}\sum_{x\in[0,1]}\gamma_x^a\,\mathbb{I}_{\{\gamma_x>1\}}, \end{equation} which vanishes as $R\to\infty$ for a.e.~$\gamma$, it is enough to establish the convergence result for $W$. We estimate it as follows. \begin{eqnarray} \label{E:prim} W-X_1&\leq& R^{-\alpha}\sum_{i=2}^{R\delta^{-1}}X_i^+:=R^{-\alpha}\sum_{i=2}^{R\delta^{-1}}(\delta i)^a(1-e^{-(\delta i)^{1-a}})M_i\\ \label{E:sec} W&\geq& R^{-\alpha}\sum_{i=2}^{R\delta^{-1}}X_i^-:=R^{-\alpha}\sum_{i=2}^{R\delta^{-1}}(\delta (i-1))^a(1-e^{-(\delta (i-1))^{1-a}})M_i, \end{eqnarray} where $X_1=R^{-\alpha}\sum_{x\in[0,1]}(R\gamma_x)^a(1-e^{-(R\gamma_x)^{1-a}}) \mathbb{I}_{\{\gamma_x\in[0,\frac{\delta}{R}]\}}$ and $M_i$ is the number of points of ${\cal P}$ in the region $[0,1]\times[\frac{\delta}{R}(i-1),\frac{\delta}{R}i]$ (recall paragraph of~(\ref{eq:pp}) above). 
$X_1$ can be bounded above by $R^{-\alpha}\sum_{x\in[0,1]}(R\gamma_x)\, \mathbb{I}_{\{\gamma_x\in[0,\frac{\delta}{R}]\}}$, and this has the same distribution as $R^{-\alpha}\sum_{x\in[0,R^\alpha]}\gamma_x\,\mathbb{I}_{\{\gamma_x\in[0,\delta]\}}$ for every $R>0$, by the scale invariance of $\gamma$. We can use standard large deviation estimates for the latter expression to conclude that $X_1$ can be ignored in the limits as $R\to\infty$ and then $\delta\to0$ (here we may use the existence of a positive exponential moment for $\sum_{x\in[0,1]}\gamma_x\,\mathbb{I}_{\{\gamma_x\in[0,\delta]\}}$ for any $\delta$, a result that follows as an application of Campbell's theorem -- see~\cite{K}). We concentrate on the right hand sides of~(\ref{E:prim}, \ref{E:sec}). We start with~(\ref{E:prim}). By the exponential Markov inequality, we get, for given $\theta,\xi>0$, \begin{equation} \label{eq:a3} \mathbb{P}\left(R^{-\alpha}\sum_{i=2}^{R\delta^{-1}}X_i^+\geq R^{-\alpha}\sum_{i=2}^{R\delta^{-1}}\mathbb{E}X^+_i+\xi\right)\leq \frac{A}{B}, \end{equation} where $A=\mathbb{E}e^{\theta\sum_{i=2}^{R\delta^{-1}}X_i^+}$ and $B=e^{\theta\sum_{i=2}^{R\delta^{-1}}\mathbb{E}X^+_i+ \theta R^{\alpha}\xi}$. Since $M_i$, $i\geq2$, are independent Poisson random variables, we obtain \begin{equation} \label{eq:a4} \frac{A}{B}=e^{-R^{\alpha}\xi\theta+\sum_{i=2}^{R\delta^{-1}}(e^{c_i\theta}-1-c_i\theta)\,\mathbb{E}M_i}, \end{equation} where $c_i=(\delta i)^a(1-e^{-(\delta i)^{1-a}})$. We choose $\theta=R^{-b}$ with $a<b<\alpha<2b$.
Then, using the estimate \begin{equation} \label{eq:a5} \mathbb EM_i=\int_{\frac{\delta}{R}(i-1)}^{\frac{\delta}{R}i}\frac{\alpha}{x^{1+\alpha}}dx\leq\frac{R^{\alpha}}{\delta^{\alpha}(i-1)^{1+\alpha}}, \end{equation} we find that the sum in the exponent in~(\ref{eq:a4}) is bounded above by \begin{equation} \label{eq:a6} \sum_{i=2}^{R\delta^{-1}}\frac{R^{\alpha}}{\delta^{\alpha}(i-1)^{1+\alpha}}(c_iR^{-b})^2 \leq2\frac{R^{\alpha-2b}}{\delta^{\alpha-2a}}\sum_{i=1}^{R\delta^{-1}}i^{-(1+\alpha-2a)}. \end{equation} Since the sum on the right of~(\ref{eq:a6}) is bounded by a constant times $R^{2a-\alpha}\vee\log R$, and using the above estimates, we find that the exponent in~(\ref{eq:a4}) is bounded above by \begin{equation} \label{eq:a7} -R^{\alpha-b}\xi+\mbox{ const } R^{-c'}, \end{equation} for some constant $c'>0$. We can then apply the Borel--Cantelli lemma and conclude that for a.e.~$\gamma$, given $\xi>0$ \begin{equation} \label{eq:a8} R^{-\alpha}\sum_{i=2}^{R\delta^{-1}}X_i^+\leq R^{-\alpha}\sum_{i=2}^{R\delta^{-1}}\mathbb{E}X^+_i +\xi \end{equation} for all large enough $R$. Similarly, we can conclude that given $\xi>0$, for a.e.~$\gamma$ and all $R$ large enough \begin{equation} \label{eq:a9} R^{-\alpha}\sum_{i=2}^{R\delta^{-1}}X_i^-\geq R^{-\alpha}\sum_{i=2}^{R\delta^{-1}}\mathbb{E}X^-_i-\xi. \end{equation} Estimates (\ref{eq:a8}) and (\ref{eq:a9}) then imply that \begin{equation} \label{eq:a10} \liminf_{R\to\infty} R^{-\alpha}\sum_{i=1}^{R\delta^{-1}}\mathbb{E}X^-_i\leq\liminf_{R\to\infty} W \leq\limsup_{R\to\infty} W\leq \limsup_{R\to\infty} R^{-\alpha}\sum_{i=1}^{R\delta^{-1}}\mathbb{E}X^+_i. \end{equation} To conclude, it is enough to verify that \begin{equation} \label{limite} \liminf_{\delta\rightarrow 0}\liminf_{R\to\infty} R^{-\alpha}\sum_{i=1}^{R\delta^{-1}}\mathbb{E}X^-_i =\limsup_{\delta\rightarrow 0}\limsup_{R\to\infty}R^{-\alpha}\sum_{i=1}^{R\delta^{-1}}\mathbb{E}X^+_i \end{equation} is a (positive finite) constant $\hat c$. We begin with the following estimate.
\begin{equation} \mathbb{E}X_i^+=(\delta i)^a(1-e^{-(\delta i)^{1-a}})\int_{\frac{\delta}{R}(i-1)}^{\frac{\delta}{R}i}\frac{\alpha}{x^{1+\alpha}}dx \leq (\delta i)^a(1-e^{-(\delta i)^{1-a}})\frac{\delta}{R}\frac{\alpha}{(\frac{\delta}{R}(i-1))^{1+\alpha}}. \end{equation} Summing up: \begin{eqnarray}\nonumber R^{-\alpha}\sum_{i=2}^{R\delta^{-1}}\mathbb{E}X_i^+ &\leq& R^{-\alpha}\sum_{i=2}^{R\delta^{-1}}(\delta i)^a(1-e^{-(\delta i)^{1-a}})\frac{\delta}{R}\frac{\alpha}{(\frac{\delta}{R}(i-1))^{1+\alpha}}\\ &=&\alpha\sum_{i=2}^{R\delta^{-1}}\delta^{a-\alpha}\frac{1-e^{-(\delta i)^{1-a}}}{i^{1+\alpha-a}}\left(\frac i{i-1}\right)^{1+\alpha}. \end{eqnarray} Now as $R\to\infty$, the latter sum converges to a series, which is readily seen to be a Riemann-sum approximation of an integral. We find that \begin{equation} \label{eq:a11} \limsup_{\delta\rightarrow 0}\limsup_{R\to\infty} R^{-\alpha}\sum_{i=2}^{R\delta^{-1}}\mathbb{E}X_i^+\leq \alpha\int_0^{\infty}\frac{1-e^{-x^{1-a}}}{x^{1+\alpha-a}}dx. \end{equation} We similarly find the latter expression as a lower bound for $$\liminf_{\delta\rightarrow 0}\liminf_{R\to\infty} R^{-\alpha}\sum_{i=2}^{R\delta^{-1}}\mathbb{E}X_i^-$$ and~(\ref{limite}) follows, with the right hand side of~(\ref{eq:a11}) as the constant $\hat c$. $\square$ \medskip {\noindent\bf (Direct) Proof of Corollary~\ref{cor}} (sketch) Under $P_5$ we may use a different, more suitable version of $\gamma$. In view of the right hand side of~(\ref{eq:a2}), we replace $\gamma_x$ by $R^{-1}\gamma_{R^\alpha x}$, $x\in[0,1]$, with $R$ as in the above proof. The Laplace exponent of the corresponding version of $\hat S^{(\varepsilon)}$ is then readily seen to equal \begin{equation} \label{eq:a2p} \lambda^{\hat\alpha}\,R^{-\alpha}\!\!\!\!\sum_{x\in[0,R^{\alpha}]}\gamma_x^a(1-e^{-\gamma_x^{1-a}}).
\end{equation} Since $\sum_{x\in[0,1]}\gamma_x^a(1-e^{-\gamma_x^{1-a}})$ is integrable, with mean $\hat c$, as can be checked by an application of Campbell's Theorem, the Law of Large Numbers yields the almost sure convergence of~(\ref{eq:a2p}) to $\hat c\lambda^{\hat\alpha}$ (simultaneously for all $\lambda\geq0$, once one uses monotonicity and continuity of the functions involved, as previously argued -- see the end of the proof of Theorem~\ref{teo:es} above). Since that is the Laplace exponent of $(\hat Z_t)$, we conclude that the version of $(Z_t^{(\varepsilon)})$ with $\gamma$ replaced by $\gamma^{R}:=\{R^{-1}\gamma_{R^\alpha x},\,x\in[0,1]\}$ converges in $P_3^{\gamma^{R}}$-distribution to $(\hat Z_t)$ for almost every $\gamma$. Upon integrating over the distribution of $\gamma$, we get the convergence in $P_5$-distribution. $\square$ \begin{rmk} \label{rmk4} A few words about the cases where $\alpha\leq a\leq1$. When $a>\alpha$, we have that $Z$ is a jump process in ${\mathbb N}^*$ (see Remark~\ref{rmk:k}) with $Z(0)=\gamma_x$ with probability proportional to $\gamma_x^a$, $x\in[0,1]$. It follows then that $Z^{(\varepsilon)}\to\infty$ identically almost surely as $\varepsilon\to0$. The case $a=\alpha$ demands more delicate analysis. We have that $\hat\varphi^{(\varepsilon)}(\lambda)$ (see~(\ref{eq:lese})), when scaled with a factor of $|\log\varepsilon|^{-1}$ (instead of $\varepsilon^{\hat\alpha}=1$ in this case), converges to a number $r$ independent of $\lambda>0$ as $\varepsilon\to0$ in probability, and this is the Laplace exponent of a subordinator which equals $0$ for an exponentially distributed amount of time of rate $r$, and then jumps to $\infty$, where it stays. One may then argue from this that $Z^{(\varepsilon)}\to\infty$ identically as $\varepsilon\to0$ in probability.
\end{rmk} \subsection{Aging in the $K$ process} \label{ssec:agezz} Theorem~\ref{teo:asz} may be viewed as an aging result for $Z$, since $\hat Z$ is nontrivial and self similar with index 1. Corresponding aging results for two-time correlation functions follow. Below we consider three examples of correlation functions related to aging, and derive scaling limit/aging results for them as a consequence of Theorem~\ref{teo:asz} (as well as of other results derived above). Other correlation functions can be similarly treated. \paragraph{Example 1} We start with the time correlation function introduced in~\cite{B}, which is the one that is usually studied in connection with his model. Let \begin{equation} \label{eq:ag2a} \bar\Pi(t,s;\gamma)={\mathbb P}(\mbox{no jump of } Z\mbox{ on }[t,t+s]|\gamma) \end{equation} (see Remark~\ref{rmk8} below). Let $\Phi\in D$, let ${\cal D}(\Phi)$ denote the set of discontinuities of $\Phi$, that is, ${\cal D}(\Phi)=\{t\geq0:\Phi(t)\ne \Phi(t-)\}$, and consider $F:D\times(0,\infty)\times(0,\infty)\to\{0,1\}$ such that \begin{equation} \label{eq:ag1} F(\Phi;t,s)=1\{[t,t+s]\cap{\cal D}(\Phi)=\emptyset\}. \end{equation} Then we have that \begin{equation} \label{eq:ag3} \bar\Pi(\varepsilon t,\varepsilon s;\gamma)={\mathbb E}[F(Z^{(\varepsilon)};t,s)|\gamma]. \end{equation} Let also \begin{equation} \label{eq:ag2b} \hat\Pi(t,s)={\mathbb P}(\mbox{no jump of } \hat Z\mbox{ on }[t,t+s]). \end{equation} Since deterministic single times are almost surely continuity points of $\hat Z$, we have that $F(\cdot;t,s)$ is almost surely continuous under the distribution of $\hat Z$. We thus conclude from Theorem~\ref{teo:asz} that if $0\leq a<\alpha$, then for almost every $\gamma$ \begin{equation} \label{eq:ag4} \lim_{\varepsilon\to0}\bar\Pi(\varepsilon t,\varepsilon s;\gamma)={\mathbb E}[F(\hat Z;t,s)]=\hat\Pi(t,s). 
\end{equation} The aging phenomenon, namely $\hat\Pi(\cdot,\cdot)$ being a (nontrivial) function of the ratio of its arguments, then follows from the self similarity with index 1 (and nontriviality) of $\hat Z$, but in this case there is an explicit expression for $\hat\Pi$, obtained as follows. One readily checks that the right hand side of~(\ref{eq:ag2b}) equals ${\mathbb P}([t,t+s]\cap{\cal R}(\hat\Gamma)=\emptyset)$, where ${\cal R}(\Phi)$ is the range of $\Phi\in D$. Since $\hat\Gamma$ is an $\hat\alpha$-stable subordinator (see Remark~\ref{rmk:v}), an application of the Dynkin--Lamperti arcsine law theorem to that probability yields \begin{equation} \label{eq:ag5} \hat\Pi(t,s)=\frac{\sin(\pi\hat\alpha)}{\pi}\int_{s/(t+s)}^1\theta^{-\hat\alpha}(1-\theta)^{\hat\alpha-1}\,d\theta. \end{equation} The limit in~(\ref{eq:ag4}) was first obtained in~\cite{B} (as the expression in~(\ref{eq:ag5})) for the case where $a=0$. The general case $0\leq a\leq1$ was first studied in~\cite{G} (see Theorem 3.3 for the case $a < \alpha$, and Theorem 3.4 for the case $a > \alpha$; the particular limit~(\ref{eq:ag4}) is (7.5) in that reference). In case $a\geq\alpha$, the discussion in Remark~\ref{rmk4} indicates that the limit in~(\ref{eq:ag4}) is identically 1, and that aging is thus interrupted. For the next examples, we restrict $a$ to $[0,\alpha)$. \paragraph{Example 2} Let \begin{equation} \label{eq:ag6} \bar R(t,s;\gamma)={\mathbb P}(Z(t)=Z(t+s)|\gamma). \end{equation} Then, the difference between $\bar R(\varepsilon t,\varepsilon s;\gamma)={\mathbb P}(\hat Z^{(\varepsilon)}_t=\hat Z^{(\varepsilon)}_{t+s}|\gamma)$ and $\bar\Pi(\varepsilon t,\varepsilon s;\gamma)$ is given by \begin{equation} \label{eq:ag7} {\mathbb P}(\hat Z^{(\varepsilon)}_t=\hat Z^{(\varepsilon)}_{t+s};\,\hat Z^{(\varepsilon)}_t\ne \hat Z^{(\varepsilon)}_{t+r}\mbox{ for some }r\in[0,s]|\gamma).
\end{equation} Let $\hat{\cal P}^{(\varepsilon)}$ and $\hat{\cal P}$ denote the point processes in $(0,\infty)\times(0,\infty)$ associated to $\hat S^{(\varepsilon)}$ and $\hat S$, respectively, i.e., \begin{equation} \label{eq:pps} \hat{\cal P}^{(\varepsilon)}=\left\{\!\left(t,\hat S^{(\varepsilon)}_t-\hat S^{(\varepsilon)}_{t-}\right)\!\!:\,t>0,\,\hat S^{(\varepsilon)}_t-\hat S^{(\varepsilon)}_{t-}>0\right\},\, \hat{\cal P}=\left\{\!\left(t,\hat S_t-\hat S_{t-}\right)\!\!:\,t>0,\,\hat S_t-\hat S_{t-}>0\right\}. \end{equation} The convergence in distribution $\hat S^{(\varepsilon)}\to\hat S$ argued in the proof of Theorem~\ref{teo:asz} implies that \begin{equation} \label{eq:ag8} \hat{\cal P}^{(\varepsilon)}\to\hat{\cal P} \end{equation} as $\varepsilon\to0$ in distribution (in the point process sense; for almost every $\gamma$). Let also $\hat\Gamma^{(\varepsilon)}(t)=\varepsilon^{-1}\Gamma(\varepsilon^{\hat\alpha} t)$, $t\geq0$ (see paragraph of~(\ref{eq:t2ab}) above). We have that \begin{equation} \label{eq:ag10} \hat \Gamma^{(\varepsilon)}\to\hat\Gamma \end{equation} in distribution for almost every $\gamma$ (see~(\ref{eq:hv}) above). This claim may be argued as follows. Since $(\hat \Gamma^{(\varepsilon)}_t)$ is a subordinator, an entirely similar reasoning to the one employed in the proof of Theorem~\ref{teo:asz} may also be employed to establish this result. It also follows from a continuity property of $(\hat \Gamma^{(\varepsilon)}_t)$ as a function of $(\hat S^{(\varepsilon)}_t)$ and $\mathcal{T}$ similar to the one established in Lemma~\ref{lemma}, and similarly proven. We leave the details for the interested reader. 
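As an aside, the explicit expression~(\ref{eq:ag5}) obtained in Example 1 can be checked numerically. The following sketch (illustrative code of ours, not part of the argument; the function name and discretization are ad hoc) evaluates the integral after the substitution $u=(1-\theta)^{\hat\alpha}$, which removes the integrable singularity at $\theta=1$, and illustrates both the aging property (dependence on $s/(t+s)$ alone) and the arcsine closed form $\hat\Pi(t,s)=1-\frac{2}{\pi}\arcsin\sqrt{s/(t+s)}$ available when $\hat\alpha=\tfrac12$.

```python
import math

def pi_hat(alpha_hat, t, s, n=200_000):
    """Evaluate (eq:ag5): Pi_hat(t, s) = sin(pi a)/pi *
    int_{s/(t+s)}^1 theta^{-a} (1-theta)^{a-1} d theta, a = alpha_hat,
    by the midpoint rule after the substitution u = (1-theta)^a."""
    x = s / (t + s)                  # the integral depends on (t, s) only via x
    upper = (1.0 - x) ** alpha_hat   # u ranges over (0, (1-x)^a)
    du = upper / n
    total = 0.0
    for k in range(n):
        u = (k + 0.5) * du
        theta = 1.0 - u ** (1.0 / alpha_hat)
        total += theta ** (-alpha_hat) * du
    # the substitution contributes the overall factor 1/alpha_hat
    return math.sin(math.pi * alpha_hat) / (math.pi * alpha_hat) * total
```

For instance, `pi_hat(0.5, 1.0, 1.0)` is numerically $\approx 1/2$, matching the closed form at $t=s$, while `pi_hat(0.3, 2.0, 3.0)` and `pi_hat(0.3, 20.0, 30.0)` agree, exhibiting dependence on the ratio alone.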
For arbitrary $\delta,T>0$, consider now the event \begin{equation} \label{eq:ag9} A^{(\varepsilon)}_{\delta,T,t,s}=\{\hat Z^{(\varepsilon)}_t>\delta,\,\hat Z^{(\varepsilon)}_{t+s}>\delta,\,\hat\Gamma^{(\varepsilon)}(T)>t+s\} \end{equation} and let $B^{(\varepsilon)}_{\delta,T}$ be the event that there exist two points in $\hat{\cal P}^{(\varepsilon)}\cap\{(0,2T)\times(\delta/2,\infty)\}$ with the same second coordinate. Now one readily gets from the above convergence results that \begin{equation} \label{eq:ag11} \lim_{\varepsilon\to0}{\mathbb P}(A^{(\varepsilon)}_{\delta,T,t,s}|\gamma)={\mathbb P}(\hat Z_t>\delta,\,\hat Z_{t+s}>\delta,\,\hat\Gamma(T)>t+s), \end{equation} and this can be made arbitrarily close to 1 by choosing $\delta$ and $T$ appropriately. We also have that \begin{equation} \label{eq:ag12} \lim_{\varepsilon\to0}{\mathbb P}(B^{(\varepsilon)}_{\delta,T}|\gamma)={\mathbb P}(B_{\delta,T}), \end{equation} where $B_{\delta,T}$ is the event corresponding to $B^{(\varepsilon)}_{\delta,T}$ upon replacing $\hat{\cal P}^{(\varepsilon)}$ by $\hat{\cal P}$. The latter probability clearly vanishes. Now, since the intersection of the event in the probability in~(\ref{eq:ag7}) and $A^{(\varepsilon)}_{\delta,T,t,s}$ is contained in $B^{(\varepsilon)}_{\delta,T}$, we conclude from the above that \begin{equation} \label{eq:ag13} \lim_{\varepsilon\to0}\bar R(\varepsilon t,\varepsilon s;\gamma)=\lim_{\varepsilon\to0}\bar\Pi(\varepsilon t,\varepsilon s;\gamma)=\hat\Pi(t,s) \end{equation} for almost every $\gamma$. \begin{rmk} \label{rmk8} The {\em aging} correlation functions \begin{eqnarray} \label{eq:ag14} \Pi(t,s;\gamma)&=&{\mathbb P}(\mbox{no jump of } Y\mbox{ on }[t,t+s]|\gamma),\\ \label{eq:ag14a} R(t,s;\gamma)&=&{\mathbb P}(Y(t)=Y(t+s)|\gamma) \end{eqnarray} are more widely considered in the literature than their barred versions~(\ref{eq:ag2a}) and~(\ref{eq:ag6}) above.
In the present case there is almost surely no difference, since a.s.~$\gamma_x\ne\gamma_y$ provided $x\ne y$ and $\gamma_x>0$. \end{rmk} The above examples could be handled either by considering the clock processes $\hat \Gamma^{(\varepsilon)}$ and $\hat\Gamma$ on their own, together with~(\ref{eq:ag10}), as in Example 1, or by using, besides Theorem~\ref{teo:asz}, convergence results for $S$ and $\Gamma$ (in the appropriate scale), as in Example 2; in both examples the limit is a correlation function of the limiting clock process $\hat\Gamma$. Our last example is natural from the aging point of view, requires Theorem~\ref{teo:asz} alone, and the limit is not a function of $\hat\Gamma$ alone. \paragraph{Example 3} Let \begin{equation} \label{eq:ag16} \textstyle Q(t,s;\gamma)={\mathbb P}\left(\sup_{r\in[0,t]}Z(r)<\sup_{r\in[0,t+s]}Z(r)|\gamma\right). \end{equation} This function was suggested in~\cite{FIN} as a ``measure of the prospects for novelty in the system''. $\hat Z$ is almost surely continuous at single deterministic times, so we have that \begin{equation} \label{eq:ag17} \lim_{\varepsilon\to0}Q(\varepsilon t,\varepsilon s;\gamma)={\mathbb P}\left(\textstyle\sup_{r\in[0,t]}\hat Z(r)<\sup_{r\in[0,t+s]}\hat Z(r)\right)=:\hat Q(t,s), \end{equation} since the function $1\{\sup_{r\in[0,t]}\Phi(r)<\sup_{r\in[0,t+s]}\Phi(r)\}$ is continuous at $\Phi\in D$ for almost every $\Phi$ under the distribution of $\hat Z$. We note that $\hat Q(t,s)$ is a function of the ratio $t/s$ only, by the self similarity of $\hat Z$, but, as far as we know, no explicit expression is available for it, as there is for $\hat\Pi(t,s)$. \vspace{.5cm} \noindent{\bf Acknowledgements} LRF would like to thank the CMI, Universit\'e de Provence, Aix-Marseille I for hospitality and support during several visits in the last few years, in which this and related projects were developed. VG thanks NUMEC-USP for hospitality.
1202.4193
\section{Introduction} First-order (FO) rewritability is the key concept of ontology-based data access (OBDA)~\cite{DFKM*08,HMAM*08,PLCD*08}, which is believed to lie at the foundations of the next generation of information systems. An ontology language $\mathcal{L}$ enjoys \emph{FO-rewritability} if any conjunctive query $\boldsymbol q$ over an ontology $\mathcal{T}$, formulated in $\mathcal{L}$, can be transformed into an FO-formula $\boldsymbol q'$ such that, for any data $\mathcal{A}$, all certain answers to $\boldsymbol q$ over the knowledge base $(\mathcal{T}, \mathcal{A})$ can be found by querying $\boldsymbol q'$ over $\mathcal{A}$ using a standard relational database management system (RDBMS). Ontology languages with this property include the \textsl{OWL\,2\,QL}{} profile of the Web Ontology Language \textsl{OWL\,2}, which is based on description logics of the \textsl{DL-Lite}{} family~\cite{CDLLR07,ACKZ09}, and fragments of Datalog$^\pm$ such as linear or sticky sets of TGDs~\cite{CaliGL09,CaliGP10}. Various rewriting techniques have been implemented in the systems QuOnto~\cite{ACDL*05}, REQUIEM~\cite{Perez-UrbinaMH09}, Presto~\cite{RosatiAKR10}, Nyaya~\cite{2011_Gottlob}, IQAROS\footnote{\url{http://code.google.com/p/iqaros/}} and Quest\footnote{\url{http://obda.inf.unibz.it/protege-plugin/quest/quest.html}}. OBDA via FO-rewritability relies on the empirical fact that RDBMSs are usually very efficient in practice. However, this does not mean that they can efficiently evaluate any given query: after all, for expression complexity, database query answering is \textsc{PSpace}-complete for FO-queries and \textsc{NP}-complete for conjunctive queries (CQs). Indeed, the first `na\"ive' rewritings of CQs over \textsl{OWL\,2\,QL}{} ontologies turned out to be too lengthy even for modern RDBMSs~\cite{CDLLR07,Perez-UrbinaMH09}. 
The obvious next step was to develop various optimisation techniques~\cite{RosatiAKR10,2011_Gottlob,DL-2011-obda,ISWC-2011}; however, they still produced exponential-size --- $O((|\mathcal{T}| \cdot |{\boldsymbol q}|)^{|\boldsymbol q|})$ --- rewritings in the worst case. An alternative two-step \emph{combined approach} to OBDA with \textsl{OWL\,2\,EL}{}~\cite{EL} and \textsl{OWL\,2\,QL}{}~\cite{KR10our,IJCAIbest} first expands the data by applying the ontology axioms and introducing new individuals required by the ontology, and only then rewrites the query over the expanded data. Yet, even with these extra resources a simple polynomial rewriting was constructed only for the fragment of \textsl{OWL\,2\,QL}{} without role inclusions; the rewriting for the full language remained exponential. A breakthrough seemed to come in~\cite{GottlobS11}, which showed that one can construct, in polynomial time, a nonrecursive Datalog rewriting for some fragments of Datalog$^\pm$ containing \textsl{OWL\,2\,QL}. However, this rewriting uses the built-in predicate $\ne$ and numerical constants that are not present in the original query and ontology. Without such additional constants, as shown in~\cite{KKZ-DL11}, no FO-rewriting for \textsl{OWL\,2\,QL}{} can be \emph{constructed} in polynomial time (it remained unclear, however, whether such an FO-rewriting of polynomial size \emph{exists}). These developments bring forward a spectrum of theoretical and practical questions that could influence the future of OBDA. What is the worst-case size of FO- and nonrecursive Datalog rewritings for CQs over \textsl{OWL\,2\,QL}{} ontologies? What is the type/shape/size of rewritings we should aim at to make OBDA with \textsl{OWL\,2\,QL}{} efficient? What extra means (e.g., built-in predicates and constants) can be used in the rewritings? 
In this paper, we investigate the worst-case size of FO- and nonrecursive Datalog rewritings for CQs over \textsl{OWL\,2\,QL}{} ontologies depending on the available means. We distinguish between `pure' rewritings, which cannot use constants that do not occur in the original query, and `impure' ones, where such constants are allowed. Our results can be summarised as follows: \begin{itemize}\itemsep=0pt \item[--] An exponential blow-up is unavoidable for pure positive existential rewritings and pure nonrecursive Datalog rewritings. Even pure FO-rewritings with $=$ can blow-up superpolynomially unless $\textsc{NP} \subseteq \textsc{P}/\text{poly}$. \item[--] Pure nonrecursive Datalog rewritings are in general exponentially more succinct than pure positive existential rewritings. \item[--] Pure FO-rewritings can be superpolynomially more succinct than pure positive existential rewritings. \item[--] Impure positive existential rewritings can always be made polynomial, and so they are exponentially more succinct than pure rewritings. \end{itemize} We obtain these results by first establishing connections between pure rewritings for conjunctive queries over \textsl{OWL\,2\,QL}{} ontologies and circuits for monotone Boolean functions, and then using known lower bounds and separation results for the circuit complexity of such functions as $\textsc{Clique}_{n,k}$ `a graph with $n$ nodes contains a $k$-clique' or $\textsc{Matching}_{2n}$ `a bipartite graph with $n$ vertices in each part has a perfect matching.' \section{Queries over \textsl{OWL\,2\,QL}{} Ontologies} By a \emph{signature}, $\Sigma$, we understand in this paper any set of constant symbols and predicate symbols (with their arity). Unless explicitly stated otherwise, $\Sigma$ does \emph{not} contain any predicates with fixed semantics, such as $=$ or $\ne$. 
In the description logic (or \textsl{OWL\,2\,QL}{}) setting, constant symbols are called \emph{individual names}, $a_i$, while unary and binary predicate symbols are called \emph{concept names}, $A_i$, and \emph{role names}, $P_i$, respectively, where $i\ge 1$. The language of \textsl{OWL\,2\,QL}{} is built using those names in the following way. The \emph{roles} $R$, \emph{basic concepts} $B$ and \emph{concepts} $C$ of \textsl{OWL\,2\,QL}{} are defined by the grammar:\footnote{We do not consider data properties, attributes and role (ir)reflexivity constraints.} \begin{align*} R \quad &::=\quad P_i \quad\mid\quad P_i^-, \tag{\mbox{\small $P_i(x,y) \ \mid \ P_i(y,x)$}} \\ B \quad &::=\quad \bot \quad\mid\quad A_i \quad\mid\quad \exists R, \tag{\mbox{\small $\bot \ \mid \ A_i(x) \ \mid \ \exists y\, R(x,y)$}}\\ C \quad &::=\quad B \quad\mid\quad \exists R.B, \tag{\mbox{\small $B(x) \ \mid \ \exists y\, (R(x,y) \land B(y))$}} \end{align*} where the formulas on the right give a first-order translation of the \textsl{OWL\,2\,QL}{} constructs. An \textsl{OWL\,2\,QL}{} \emph{TBox}, $\mathcal{T}$, is a finite set of \emph{inclusions} of the form \begin{align*} &B \sqsubseteq C, \tag{\mbox{\small $\forall x \, (B(x) \to C(x))$}} \\ &R_1 \sqsubseteq R_2, \tag{\mbox{\small $\forall x,y \, (R_1(x,y) \to R_2(x,y))$}}\\ & B_1 \sqcap B_2 \sqsubseteq \bot, \tag{\mbox{\small $\forall x \, (B_1(x) \land B_2(x) \to \bot)$}}\\ & R_1 \sqcap R_2 \sqsubseteq \bot. \tag{\mbox{\small $\forall x,y \, (R_1(x,y) \land R_2(x,y) \to \bot)$}} \end{align*} Note that concepts of the form $\exists R.B$ can only occur in the right-hand side of concept inclusions in \textsl{OWL\,2\,QL}. An \emph{ABox}, $\mathcal{A}$, is a finite set of \emph{assertions} of the form $A_k(a_i)$ and $P_k(a_i,a_j)$. $\mathcal{T}$ and $\mathcal{A}$ together form the \emph{knowledge base} (KB) $\mathcal{K}=(\mathcal{T},\mathcal{A})$. 
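To make the role machinery concrete, here is a small illustrative sketch (our own toy encoding, not a construction from this paper: a role is a string, with `'P-'` standing for $P^-$) of how a set of role inclusions $R_1\sqsubseteq R_2$ closes under inverses and under chaining of inclusions.

```python
def inv(r):
    """Inverse of a role given as a string: 'P' <-> 'P-'."""
    return r[:-1] if r.endswith("-") else r + "-"

def entailed_role_inclusions(axioms):
    """Reflexive-transitive closure of a set of role inclusions
    R1 <= R2 (pairs of strings), closed under inverses:
    R1 <= R2 entails R1^- <= R2^-."""
    roles = {r for ax in axioms for r in ax}
    roles |= {inv(r) for r in roles}
    incl = {(r, r) for r in roles}                      # reflexivity
    incl |= set(axioms)
    incl |= {(inv(r1), inv(r2)) for r1, r2 in axioms}   # inverse closure
    changed = True
    while changed:                                      # transitive closure
        changed = False
        new = {(a, d) for a, b in incl for c, d in incl if b == c} - incl
        if new:
            incl |= new
            changed = True
    return incl
```

For example, from $R\sqsubseteq S$ and $S^-\sqsubseteq T$ one obtains both $R^-\sqsubseteq T$ and $R\sqsubseteq T^-$.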
The semantics for \textsl{OWL\,2\,QL}{} is defined in the usual way \cite{BCMNP03}, based on interpretations $\mathcal{I} = (\Delta^\mathcal{I}, \cdot^\mathcal{I})$ with domain $\Delta^\mathcal{I}$ and interpretation function $\cdot^\mathcal{I}$. The set of individual names in an ABox $\mathcal{A}$ will be denoted by $\mathop{\mathsf{ind}}(\mathcal{A})$. For concepts or roles $E_1$ and $E_2$, we write $E_1 \sqsubseteq_\mathcal{T} E_2$ if $\mathcal{T} \models E_1 \sqsubseteq E_2$; and we set $[E] = \{ E' \mid E \sqsubseteq_\mathcal{T} E' \text{ and } E' \sqsubseteq_\mathcal{T} E \}$. A \emph{conjunctive query} (CQ) $\boldsymbol q(\vec{x})$ is a first-order formula $\exists \vec{y}\, \varphi(\vec{x}, \vec{y})$, where $\varphi$ is constructed, using $\land$, from atoms of the form $A_k(t_1)$ and $P_k(t_1,t_2)$, where each $t_i$ is a \emph{term} (an individual or a variable from $\vec{x}$ or $\vec{y}$). A tuple $\vec{a}\subseteq \mathop{\mathsf{ind}} (\mathcal{A})$ is a \emph{certain answer} to $\boldsymbol q(\vec{x})$ over $\mathcal{K} = (\mathcal{T},\mathcal{A})$ if $\mathcal{I} \models \boldsymbol q(\vec{a})$ for all models $\mathcal{I}$ of $\mathcal{K}$; in this case we write $\mathcal{K} \models \boldsymbol q(\vec{a})$. Query answering over \textsl{OWL\,2\,QL}{} KBs is based on the fact that, for any consistent KB $\mathcal{K} = (\mathcal{T}, \mathcal{A})$, there is an interpretation $\mathcal{C}_\mathcal{K}$ such that, for all CQs $\boldsymbol q(\vec{x})$ and $\vec{a} \subseteq \mathop{\mathsf{ind}}(\mathcal{A})$, we have $\mathcal{K} \models \boldsymbol q(\vec{a})$ iff $\mathcal{C}_\mathcal{K} \models \boldsymbol q(\vec{a})$. The interpretation $\mathcal{C}_\mathcal{K}$, called the \emph{canonical model} of $\mathcal{K}$, can be constructed as follows. 
For each pair $[R],[B]$ with $\exists R.B$ in $\mathcal{T}$ (we assume $\exists R$ is just a shorthand for $\exists R.\top$), we introduce a fresh symbol $w_{[RB]}$ and call it the \emph{witness for} $\exists R.B$. We write $\mathcal{K} \models C(w_{[RB]})$ if $\exists R^- \sqsubseteq_\mathcal{T} C$ or $B \sqsubseteq_\mathcal{T} C$. Define a \emph{generating relation}, $\leadsto$, on the set of these witnesses together with $\mathop{\mathsf{ind}}(\mathcal{A})$ by taking: \begin{description} \item[--] $a \leadsto w_{[RB]}$ if $a \in \mathop{\mathsf{ind}}(\mathcal{A})$, $[R]$ and $[B]$ are $\sqsubseteq_\mathcal{T}$-minimal such that $\mathcal{K} \models \exists R .B(a)$ and there is no $b\in\mathop{\mathsf{ind}}(\mathcal{A})$ with $\mathcal{K}\models R(a,b) \land B(b)$; \item[--] $w_{[R'B']} \leadsto w_{[RB]}$ if, for some $u$, $u \leadsto w_{[R'B']}$, $[R]$ and $[B]$ are $\sqsubseteq_\mathcal{T}$-minimal with $\mathcal{K}\models \exists R.B(w_{[R'B']})$ and it is not the case that $R' \sqsubseteq_\mathcal{T} R^-$ and $\mathcal{K} \models B'(u)$. \end{description} If $a\leadsto w_{[R_1B_1]} \leadsto \dots \leadsto w_{[R_{n}B_{n}]}$, $n\ge 0$, then we say that $a$ \emph{generates the path} $a w_{[R_1B_1]} \cdots w_{[R_nB_n]}$. Denote by $\mathsf{path}_\mathcal{K}(a)$ the set of paths generated by $a$, and by $\mathsf{tail}(\pi)$ the last element in~$\pi \in \mathsf{path}_\mathcal{K}(a)$. 
$\mathcal{C}_\mathcal{K}$ is defined by taking: \begin{align*} \Delta^{\mathcal{C}_\mathcal{K}} = & \bigcup_{a \in \mathop{\mathsf{ind}}(\mathcal{A})}\mathsf{path}_\mathcal{K}(a), \quad a^{\mathcal{C}_\mathcal{K}} ~=~ a, \text{ for } a \in \mathop{\mathsf{ind}}(\mathcal{A}), \\ A^{\mathcal{C}_\mathcal{K}} =~ &\{ \pi \in \Delta^{\mathcal{C}_\mathcal{K}}\mid \mathcal{K} \models A(\mathsf{tail}(\pi)) \}, \\ P^{\mathcal{C}_\mathcal{K}} =~ & \{ (a,b)\in \mathop{\mathsf{ind}}(\mathcal{A})\times \mathop{\mathsf{ind}}(\mathcal{A}) \mid \mathcal{K}\models P(a,b) \} \; \cup \\ & \{ (\pi,\pi \cdot w_{[RB]}) \mid \mathsf{tail}(\pi) \leadsto w_{[RB]},\ R\sqsubseteq_\mathcal{T} P \} \cup{} \\ & \{(\pi\cdot w_{[RB]},\pi) \mid \mathsf{tail}(\pi) \leadsto w_{[RB]},\ R \sqsubseteq_\mathcal{T} P^-\}.% \end{align*} The following result is standard: \begin{theorem}[\cite{CDLLR07,KR10our}] For every \textsl{OWL\,2\,QL}{} KB $\mathcal{K}=(\mathcal{T}, \mathcal{A})$, every CQ $\boldsymbol q(\vec{x})$ and every $\vec{a} \subseteq \mathop{\mathsf{ind}}(\mathcal{A})$, $\mathcal{K} \models \boldsymbol q(\vec{a})$ iff $\mathcal{C}_\mathcal{K} \models \boldsymbol q(\vec{a})$. \end{theorem} \section{Query Rewriting} Let $\Sigma$ be a signature that can be used to formulate queries and ABoxes (remember that $\Sigma$ does not contain any built-in predicates). Given an ABox $\mathcal{A}$ over $\Sigma$, define $\mathcal{I}_\mathcal{A}$ to be the interpretation whose domain consists of all individuals in $\Sigma$ (even if they are not in $\mathop{\mathsf{ind}}(\mathcal{A})$) and such that $\mathcal{I}_\mathcal{A} \models E(\vec{a})$ iff $E(\vec{a}) \in \mathcal{A}$, for all predicates $E(\vec{x})$. 
Given a CQ $\boldsymbol q(\vec{x})$ and an \textsl{OWL\,2\,QL}{} TBox $\mathcal{T}$, a first-order formula $\boldsymbol q'(\vec{x})$ over $\Sigma$ is called an \emph{FO-rewriting for $\boldsymbol q(\vec{x})$ and $\mathcal{T}$ over $\Sigma$} if, for any ABox $\mathcal{A}$ over $\Sigma$ and any $\vec{a} \subseteq \mathop{\mathsf{ind}}(\mathcal{A})$, we have $(\mathcal{T},\mathcal{A}) \models \boldsymbol q(\vec{a})$ iff $\mathcal{I}_\mathcal{A} \models \boldsymbol q'(\vec{a})$. If $\boldsymbol q'$ is an FO-rewriting of the form $\exists \vec{y}\, \varphi(\vec{x}, \vec{y})$, where $\varphi$ is built from atoms using only $\land$ and $\lor$, then we call $\boldsymbol q'(\vec{x})$ a \emph{positive existential rewriting for $\boldsymbol q(\vec{x})$ and $\mathcal{T}$ over $\Sigma$} (or a \emph{PE-rewriting}, for short). The \emph{size} $|\boldsymbol q'|$ of $\boldsymbol q'$ is the number of symbols in it. All known FO-rewritings for CQs and \textsl{OWL\,2\,QL}{} ontologies are of exponential size in the worst case. More precisely, for any CQ $\boldsymbol q$ and any \textsl{OWL\,2\,QL}{} TBox $\mathcal{T}$, one can construct a PE-rewriting of size $O((|\mathcal{T}| \cdot |\boldsymbol q|)^{|\boldsymbol q|})$~\cite{CDLLR07,Perez-UrbinaMH09,Chortaras-etal2011,2011_Gottlob,KR10our}. One of the main results of this paper is that this upper bound cannot be substantially improved in general. On the other hand, we shall see that FO-rewritings can be superpolynomially more succinct than pure PE-rewritings. We shall also consider query rewritings in the form of nonrecursive Datalog queries. We remind the reader (for details see, e.g.,~\cite{CeriGT89}) that a \emph{Datalog program}, $\Pi$, is a finite set of Horn clauses $$ \forall \vec{x}\, (A_1 \land \dots \land A_m \to A_0), $$ where each $A_i$ is an atom of the form $P(t_1,\dots,t_l)$ and each $t_j$ is either a variable from $\vec{x}$ or a constant.
$A_0$ is called the \emph{head} of the clause, and $A_1,\dots,A_m$ its \emph{body}. All variables occurring in the head $A_0$ must also occur in the body, i.e., in one of $A_1,\dots,A_m$. A predicate $P$ \emph{depends} on a predicate $Q$ in $\Pi$ if $\Pi$ contains a clause whose head's predicate is $P$ and whose body contains an atom with predicate $Q$. A Datalog program $\Pi$ is called \emph{nonrecursive} if this dependence relation for $\Pi$ is acyclic. A \emph{nonrecursive Datalog query} consists of a nonrecursive Datalog program $\Pi$ and a \emph{goal} $G$, which is just a predicate. Given an ABox $\mathcal{A}$, a tuple $\vec{a} \subseteq \mathop{\mathsf{ind}} (\mathcal{A})$ is called a \emph{certain answer} to $(\Pi,G)$ over $\mathcal{A}$ if $\Pi,\mathcal{A} \models G(\vec{a})$. The \emph{size} $|\Pi|$ of $\Pi$ is the number of symbols in $\Pi$. We distinguish between \emph{pure} and \emph{impure} nonrecursive Datalog queries~\cite{BenediktG10}. In a \emph{pure query} $(\Pi,G)$, the clauses in $\Pi$ do not contain \emph{constant symbols} in their heads. One reason for considering only pure queries in the OBDA setting is that impure ones can have too much impact on the data. For example, an impure query can explicitly add a ground atom $A_0(\vec{a})$ to the database, which has nothing to do with the intensional knowledge in the background ontologies. In fact, impure nonrecursive Datalog queries are known to be more succinct than pure ones. 
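As a toy illustration (our own minimal encoding; all predicate and individual names are hypothetical), take $\mathcal{T}=\{B\sqsubseteq A,\ \exists P\sqsubseteq A\}$ and $\boldsymbol q(x)=A(x)$. A pure NDL-rewriting is $(\Pi,G)$ with $\Pi=\{A(x)\to G(x),\ B(x)\to G(x),\ P(x,y)\to G(x)\}$. The sketch below checks that $\Pi$ is nonrecursive via the dependence relation just defined, and evaluates the certain answers bottom-up; handling arbitrary programs would of course require a full Datalog engine with joins.

```python
def is_nonrecursive(program):
    """program: list of (head_predicate, [body_predicates]) clauses.
    Builds the 'depends on' relation and rejects it if it has a cycle."""
    deps = {}
    for head, body in program:
        deps.setdefault(head, set()).update(body)
    state = {}                        # 1 = on current DFS stack, 2 = finished
    def dfs(p):
        state[p] = 1
        for q in deps.get(p, ()):     # predicates with no clauses are leaves
            if state.get(q) == 1:     # back edge: cyclic dependence
                return False
            if state.get(q) is None and not dfs(q):
                return False
        state[p] = 2
        return True
    return all(state.get(p) == 2 or dfs(p) for p in deps)

def certain_answers_G(abox):
    """Bottom-up evaluation of the toy rewriting above: G(a) holds iff
    the ABox contains A(a), B(a), or P(a, b) for some b."""
    return {atom[1] for atom in abox if atom[0] in ("A", "B", "P")}
```

A program such as $\{E(x,y)\to T(x,y),\ T(x,z)\land E(z,y)\to T(x,y)\}$ (transitive closure) is rejected as recursive, since $T$ depends on itself.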
Given a CQ $\boldsymbol q(\vec{x})$ and an \textsl{OWL\,2\,QL}{} TBox $\mathcal{T}$, a pure nonrecursive Datalog query $(\Pi,G)$ is called a \emph{nonrecursive Datalog rewriting for $\boldsymbol q(\vec{x})$ and $\mathcal{T}$ over $\Sigma$} (or an \emph{NDL-rewriting}, for short) if, for any ABox $\mathcal{A}$ over $\Sigma$ and any $\vec{a} \subseteq \mathop{\mathsf{ind}}(\mathcal{A})$, we have $(\mathcal{T},\mathcal{A}) \models \boldsymbol q(\vec{a})$ iff $\Pi,\mathcal{A} \models G(\vec{a})$ (note that $\Pi$ may define predicates that are not in $\Sigma$, but may not use non-signature constants). Similarly to FO-rewritings, known NDL-rewritings for \textsl{OWL\,2\,QL}{} are of exponential size~\cite{RosatiAKR10,2011_Gottlob}. Here we show that, in general, one cannot make NDL-rewritings shorter. On the other hand, NDL-rewritings can be exponentially more succinct than PE-rewritings. The rewritings can be much shorter if non-signature predicates and constants become available. As follows from~\cite{GottlobS11}, every CQ over an \textsl{OWL\,2\,QL}{} ontology can be rewritten as a polynomial-size nonrecursive Datalog query if we can use the inequality predicate and at least two distinct constants (cf.\ also \cite{Avigad01} which shows how two constants and $=$ can be used to eliminate definitions from first-order theories without an exponential blow-up). In fact, we observe that, using equality and two distinct constants, any CQ over an \textsl{OWL\,2\,QL}{} ontology can be rewritten into a PE-query of polynomial size. \section{Boolean Functions and Circuits}\label{sec:circuits} In this section we give a brief introduction to Boolean circuits and remind the reader of the results on monotone circuit complexity that we will use. An \emph{$n$-ary Boolean function}, for $n\ge 1$, is a function from $\{0,1\}^n$ to $\{0,1\}$. 
A Boolean function $f$ is \emph{monotone} if $f(\vec{\alpha}) \leq f(\vec{\alpha}')$, for all $\vec{\alpha}\leq \vec{\alpha}'$, where $\leq$ is the component-wise order on vectors over $\{0,1\}$. We remind the reader (for more details see, e.g.,~\cite{Arora&Barak09,Jukna12}) that an $n$-\emph{input Boolean circuit}, $\mathbf{C}$, is a directed acyclic graph with $n$ sources, \emph{inputs}, and one sink, \emph{output}. Every non-source node of $\mathbf{C}$ is called a \emph{gate}; it is labelled with either $\land$ or $\lor$, in which case it has two incoming edges, or with $\neg$, in which case it has one incoming edge. A circuit is \emph{monotone} if it contains only $\land$- and $\lor$-gates. We think of a \emph{Boolean formula} as a circuit in which every gate has at most one outgoing edge. For an input $\vec{\alpha} \in \{0,1\}^n$, the \emph{output} of $\mathbf{C}$ on $\vec{\alpha}$ is denoted by $\mathbf{C}(\vec{\alpha})$, and $\mathbf{C}$ is said to \emph{compute} an $n$-ary Boolean function $f$ if $\mathbf{C}(\vec{\alpha}) = f(\vec{\alpha})$, for every $\vec{\alpha} \in \{0,1\}^n$. The number of nodes in $\mathbf{C}$ is the \emph{size} of $\mathbf{C}$, denoted by $|\mathbf{C}|$. A \emph{family of Boolean functions} is a sequence $f^1,f^2,\dots$, where each $f^n$ is an $n$-ary Boolean function. We say that a family $f^1,f^2,\dots$ is in the complexity class \textsc{NP}{} if there exist polynomials $p$ and $T$ and, for each $n \geq 1$, a Boolean circuit $\mathbf{C}^n$ with $n + p(n)$ inputs such that $|\mathbf{C}^n|\leq T(n)$ and, for each $\vec{\alpha} \in \{0,1\}^n$, we have \begin{equation*} f^n(\vec{\alpha}) =1\qquad\text{ iff }\qquad\mathbf{C}^n(\vec{\alpha},\vec{\beta})=1,\quad\text{for some }\vec{\beta} \in \{0,1\}^{p(n)}. \end{equation*} The additional $p(n)$ inputs for $\vec{\beta}$ in $\mathbf{C}^n$ are called \emph{advice inputs}. 
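A toy instance of this definition can be checked directly: take $n = 2$, $p(n) = 1$ and $f(x_1,x_2) = x_1 \lor x_2$. The following is a hedged Python sketch; the circuit and all names are ours, chosen only for illustration:

```python
from itertools import product

# Hedged toy instance of the definition above (circuit and names are ours):
# n = 2, p(n) = 1, f(x1, x2) = x1 OR x2, computed nondeterministically by
# C(x1, x2, y) = (x1 AND y) OR (x2 AND NOT y) with one advice input y.
def circuit(x1, x2, y):
    return int((x1 and y) or (x2 and not y))

def f_via_advice(x1, x2):
    # f(x) = 1 iff C(x, b) = 1 for some advice vector b
    return int(any(circuit(x1, x2, y) for y in (0, 1)))

for x1, x2 in product((0, 1), repeat=2):
    assert f_via_advice(x1, x2) == (x1 | x2)
```

The advice input plays the role of a guessed witness, exactly as in the three families defined next.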
We shall use three well-known families of monotone Boolean functions in $\textsc{NP}$: \begin{description} \item[\normalfont $\textsc{Clique}_{n,k}$] is the function of $n(n-1)/2$ variables $e_{ij}$, $1 \leq i < j\le n$, which returns 1 iff the graph with vertices $\{1,\dots,n\}$ and edges $\{ \{i,j\} \mid e_{ij}=1\}$ contains a $k$-clique. A series of papers, started by Razborov's breakthrough~\cite{Razborov85}, gave an exponential lower bound for the size of monotone circuits computing $\textsc{Clique}_{n,k}$: $2^{\Omega(\sqrt{k})}$ for all $k \leq \frac{1}{4} (n/ \log n)^{2/3}$~\cite{AlonB87}. For monotone formulas, an even better lower bound is known: $2^{\Omega(k)}$ for $k = 2n/3$~\cite{RazW92}. Since $\textsc{Clique}_{n,k}$ is $\textsc{NP}$-complete, the question whether it can be computed by a polynomial-size Boolean circuit (i.e., belongs to the complexity class \textsc{P}/\text{poly}{}) is equivalent to whether $\textsc{NP} \subseteq \textsc{P}/\text{poly}$, which is an open problem (see e.g.,~\cite{Arora&Barak09}). It is not hard to see that $\textsc{Clique}_{n,k}$ can be computed by a nondeterministic circuit of size $O(n^2)$ with $n$ advice variables: the circuit gets a vector $\vec{y} \in \{0,1\}^n$ indicating the vertices of a clique as its advice inputs and checks whether the vector contains exactly $k$ 1s and whether any two vertices marked by 1s in the advice inputs are indeed connected by an edge in the input graph. \item[\normalfont $\textsc{Matching}_{2n}$] is the function of $n^2$ variables $e_{ij}$, $1 \leq i,j \leq n$, which returns 1 iff there is a \emph{perfect matching} in the bipartite graph $G$ with vertices $\{v_1^1,\dots,v_n^1,v_1^2,\dots,v_n^2\}$ and edges $\{\{v_i^1,v_j^2\}\mid e_{ij} = 1\}$, i.e., a subset $E$ of edges in $G$ such that every node in $G$ occurs exactly once in $E$. An exponential lower bound $2^{\Omega(n)}$ for the size of monotone formulas computing $\textsc{Matching}_{2n}$ is known~\cite{RazW92}. 
On the other hand, this function is computable by non-monotone formulas of size $n^{O(\log n)}$~\cite{BorodinGH82}. $\textsc{Matching}_{2n}$ can also be computed by a Boolean nondeterministic circuit of size $O(n^2)$ with $n^2$ advice variables: the circuit gets the edges of a perfect matching in its advice inputs and checks whether each edge in the perfect matching is an edge of the graph and whether, for each vertex, there is exactly one edge in the perfect matching containing it. \item[\normalfont $\textsc{Gen}_{n^3}$] is the function of $n^3$ variables $x_{ijk}$, $1 \le i,j,k \le n$, defined as follows. We say that $1$ \emph{generates} $k \le n$ if either $k=1$ or $x_{ijk}=1$ and $1$ generates both $i$ and $j$. $\textsc{Gen}_{n^3}(x_{111},\dots, x_{nnn})$ returns $1$ iff $1$ generates $n$. $\textsc{Gen}_{n^3}$ is clearly a monotone Boolean function computable by polynomial-size monotone Boolean circuits. On the other hand, any monotone formula computing $\textsc{Gen}_{n^3}$ has size at least $2^{n^{\varepsilon}}$, for some $\varepsilon>0$~\cite{RazM97}. \end{description} The complexity results above will be used in Section~\ref{s:7} to obtain similar bounds for the size of rewritings for certain CQs and \textsl{OWL\,2\,QL}{} ontologies encoding these three functions. The encoding will require a representation of these functions in terms of CNF. \section{Circuits, CNFs and OBDA} \label{sec:queries} In this section we show how the above families of Boolean functions can be encoded as a CQ answering problem over \textsl{OWL\,2\,QL}{} ontologies. 
More specifically, for each family $f^1,f^2,\dots$ of Boolean functions, we construct a sequence of \textsl{OWL\,2\,QL}{} TBoxes $\mathcal{T}_{f^n}$ and CQs $\boldsymbol q_{f^n}$, as well as ABoxes $\mathcal{A}_{\vec{\alpha}}$, $\vec{\alpha}\in\{0,1\}^n$, with a \emph{single} individual such that \begin{equation*} (\mathcal{T}_{f^n},\mathcal{A}_{\vec{\alpha}})\models \boldsymbol q_{f^n} \qquad\text{iff}\qquad f^n(\vec{\alpha}) =1,\qquad\text{ for all } \vec{\alpha}\in\{0,1\}^n. \end{equation*} Then we show that rewritings for $\boldsymbol q_{f^n}$ and $\mathcal{T}_{f^n}$ correspond to Boolean circuits computing $f^n$. The construction proceeds in two steps: first, we represent the $f^n$ by polynomial-size CNFs (in a way similar to the Tseitin transformation~\cite{Tseitin83}), and then encode those CNFs in terms of \textsl{OWL\,2\,QL}{} query answering. Let $f^1,f^2,\dots$ be a family of Boolean functions in \textsc{NP}{} and $\mathbf{C}^1,\mathbf{C}^2,\dots$ be a family of circuits computing the $f^n$ (according to the definition above). We consider the inputs $\vec{x}$ and the advice inputs $\vec{y}$ of $\mathbf{C}^n$ as Boolean variables; each of the gates $g_{1}, \dots, g_{\ell}$ of $\mathbf{C}^n$ is also thought of as a Boolean variable whose value coincides with the output of the gate on a given input. We assume that $\mathbf{C}^n$ contains only $\neg$- and $\land$-gates, and so can be regarded as a set of equations of the form \begin{equation*} g_i = \neg h_i \qquad\text{or}\qquad g_i = h_i \land h_i', \end{equation*} where $h_i$ and $h_i'$ are the inputs of the gate $g_i$, that is, either input variables $\vec{x}$, advice variables $\vec{y}$ or other gates $\vec{g} = (g_1,\dots,g_{\ell})$. We assume $g_1$ to be the output of $\mathbf{C}^n$. 
Now, with each $f^n$ and each $\vec{\alpha} = (\alpha_1,\dots,\alpha_n)\in\{0,1\}^n$, we associate the following formula in CNF: \begin{multline*} \phifn(\vec{x},\vec{y},\vec{g}) \ \ = \ \ \bigwedge_{\alpha_j = 0} \neg x_j \ \ \land \ \ g_1 \ \ \land \hspace*{-0.5em} \bigwedge_{g_i=\neg h_i \text{ in } \mathbf{C}^n} \hspace*{-0.5em}\bigl[(h_i \lor g_i) \land (\neg h_i \lor \neg g_i)\bigr] \ \land {} \\ \bigwedge_{g_i = h_i \land h_i' \text{ in } \mathbf{C}^n} \hspace*{-2em}\bigl[(h_i \vee \neg g_i)\land (h_i' \vee \neg g_{i})\land (\neg h_i \vee \neg h_i' \vee g_i)\bigr]. \end{multline*} The clauses of the last two conjuncts encode the correct computation of the circuit: they are equivalent to $g_i \leftrightarrow \neg h_i$ and $g_i \leftrightarrow h_i \land h_i'$, respectively. \begin{lemma}\label{l:F} If $f^n$ is a monotone Boolean function then $f^n(\vec{\alpha}) = 1$ iff $\phifn$ is satisfiable, for each $\vec{\alpha}\in\{0,1\}^n$. \end{lemma} \begin{proof} $(\Rightarrow)$ Let $f^n(\vec{\alpha}) = 1$. Then $\mathbf{C}^n(\vec{\alpha}, \vec{\beta})=1$, for some $\vec{\beta}$. It can be easily seen that $\phifn(\vec{\alpha},\vec{\beta},\vec{\gamma}) = 1$, where the values $\gamma_i$ in $\vec{\gamma}$ are given by the outputs of the corresponding gates $g_i$ in $\mathbf{C}^n$ on the input $(\vec{\alpha}, \smash{\vec{\beta}})$. $(\Leftarrow)$ Conversely, let $\phifn(\vec{\alpha}',\vec{\beta},\vec{\gamma}) = 1$. By the first conjunct of $\phifn$, $\vec{\alpha}' \leq \vec{\alpha}$. As $f^n$ is monotone, it is enough to show $f^n(\vec{\alpha}')=1$. This is immediate from the second conjunct of $\phifn$, $g_1$, and the observation that the values $\vec{\gamma}$ are equal to the outputs of the corresponding gates of $\mathbf{C}^n$ on the input $(\vec{\alpha}', \smash{\vec{\beta}})$. \end{proof} The second step of the reduction is to encode satisfiability of $\phifn$ by means of the CQ answering problem in \textsl{OWL\,2\,QL}. 
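On the single-$\land$-gate circuit used later in Fig.~\ref{gen-mod} ($g = x \land y$ with one advice input $y$, hence $f(x) = x$), the construction and Lemma~\ref{l:F} can be checked directly. The following is a hedged Python sketch; the literal representation and the brute-force satisfiability test are ours:

```python
from itertools import product

# phi_alpha for the toy circuit g = x AND y (output g1 = g); a literal is a
# pair (variable, sign), and a clause is a list of literals.
def phi(alpha):                                   # alpha = the single input x
    clauses = [[("g", True)],                     # g1
               [("x", True), ("g", False)],       # (h  v ~g)
               [("y", True), ("g", False)],       # (h' v ~g)
               [("x", False), ("y", False), ("g", True)]]   # (~h v ~h' v g)
    if alpha == 0:
        clauses.append([("x", False)])            # ~x_j for each alpha_j = 0
    return clauses

def satisfiable(clauses):
    for values in product((False, True), repeat=3):
        v = dict(zip(("x", "y", "g"), values))
        if all(any(v[p] == s for p, s in c) for c in clauses):
            return True
    return False

# f(x) = x here: f(1) = 1 and phi_1 is satisfiable; f(0) = 0 and phi_0 is not.
assert satisfiable(phi(1)) and not satisfiable(phi(0))
```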
Denote $\phifn$ for $\vec{\alpha} = (0,\dots,0)$ by $\phifn[]$. It is immediate from the definitions that, for each $\vec{\alpha}\in\{0,1\}^n$, the CNF $\phifn$ can be obtained from $\phifn[]$ by removing the clauses $\neg x_j$ for which $\alpha_j = 1$, $1 \leq j \leq n$. The CNF $\phifn[]$ contains $d \leq 3|\mathbf{C}^n|$ clauses $C_1,\dots,C_d$ with $N = |\mathbf{C}^n|$ Boolean variables, which will be denoted by $p_1,\dots,p_N$. Let $P$ be a role name and let $A_i$, $X_i^0$, $X_i^1$ and $Z_{i,j}$ be concept names. Consider the TBox $\mathcal{T}_{f^n}$ containing the following inclusions, for $1 \leq i \leq N$, $1 \leq j \leq d$: \begin{align*} & A_{i-1} \sqsubseteq \exists P^-.X_i^\ell,\quad\text{ for } \ell = 0,1,\\ &X_i^\ell \sqsubseteq A_i, \qquad\text{ for } \ell = 0,1,\\ &\hspace*{2em} X_i^0 \sqsubseteq Z_{i,j}\ \ \text{ if } \ \ \neg p_i \in C_j,\\ &\hspace*{2em} X_i^1 \sqsubseteq Z_{i,j} \ \ \text { if } \ \ p_i \in C_j,\\ & Z_{i,j} \sqsubseteq \exists P.Z_{i-1,j},\\ & A_0 \sqcap A_i \sqsubseteq \bot,\\ & A_0 \sqcap \exists P \sqsubseteq \bot,\\ & A_0 \sqcap Z_{i,j} \sqsubseteq \bot, \ \ \text{ for } (i,j) \notin \{(0,1),\dots,(0,n)\}.\hspace*{-30mm} \end{align*} It can be seen that $|\mathcal{T}_{f^n}| = O(|\mathbf{C}^n|^2)$. Consider also the CQ \begin{multline*} \boldsymbol q_{f^n} = \exists \vec{y} \, \exists \vec{z} \ \Bigl[ A_0(y_0) \land \bigwedge_{i = 1}^N P(y_i,y_{i-1}) \land{} \\ \bigwedge_{j = 1}^d \Bigl(P(y_N,z_{N-1,j}) \land \bigwedge_{i = 1}^{N-1} P(z_{i,j},z_{i-1,j}) \land Z_{0,j}(z_{0,j}) \Bigr) \Bigr], \end{multline*} where $\vec{y} = (y_0,\dots,y_N)$ and $\vec{z} = (z_{0,1},\dots,z_{N-1,1}, \dots, z_{0,d},\dots,z_{N-1,d})$. Clearly, $|\boldsymbol q_{f^n}| = O(|\mathbf{C}^n|^2)$. Note that $\mathcal{T}_{f^n}$ is acyclic and $\boldsymbol q_{f^n}$ is tree-shaped and has no answer variables. 
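To see the shape of $\mathcal{T}_{f^n}$ concretely, the following hedged Python sketch (our helper names, not from the paper; the three disjointness axioms are omitted) generates the remaining inclusions from a CNF with $N$ variables and clauses given as lists of signed literals, here for the $\phifn[]$ of Fig.~\ref{gen-mod} with $N = 3$ and $d = 5$:

```python
# Hedged generator (our helper names, not from the paper; the three
# disjointness axioms are omitted).  "<=" stands for \sqsubseteq.
def tbox(N, clauses):
    inclusions = []
    for i in range(1, N + 1):
        for b in (0, 1):
            inclusions.append(f"A{i-1} <= exists P-.X{i}^{b}")
            inclusions.append(f"X{i}^{b} <= A{i}")
    for j, clause in enumerate(clauses, start=1):
        for i, sign in clause:                  # sign True means p_i in C_j
            inclusions.append(f"X{i}^{int(sign)} <= Z{i},{j}")
        for i in range(1, N + 1):
            inclusions.append(f"Z{i},{j} <= exists P.Z{i-1},{j}")
    return inclusions

# phi[] of Fig. \ref{gen-mod}: ~x, g, (x v ~g), (y v ~g), (~x v ~y v g)
# over p1 = x, p2 = y, p3 = g; so N = 3 and d = 5.
cnf = [[(1, False)], [(3, True)], [(1, True), (3, False)],
       [(2, True), (3, False)], [(1, False), (2, False), (3, True)]]
t = tbox(3, cnf)
assert "X1^0 <= Z1,1" in t and "X3^1 <= Z3,2" in t
assert len(t) == 36            # O(N*d), in line with |T| = O(|C^n|^2)
```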
For each $\vec{\alpha} = (\alpha_1,\dots,\alpha_n)\in\{0,1\}^n$, we set \begin{equation*} \mathcal{A}_{\vec{\alpha}} \ \ = \ \ \bigl\{A_0(a) \bigr\} \ \ \cup \ \ \bigl\{ Z_{0,j}(a) \mid 1\leq j \leq n \text{ and } \alpha_j = 1 \bigr\}. \end{equation*} \begin{figure}[ht] \begin{center} \begin{tikzpicture}[>=latex, point/.style={circle,draw=black,minimum size=1.5mm,inner sep=0pt}] \node at (-2.2,1) {$\mathcal{C}_{(\mathcal{T}_{f^n}, \mathcal{A}_{\vec{\alpha}})}$}; \node (a) at (-0.3,0) [point,label=below:{$a$},label=left:{\footnotesize $A_0,Z_{0,1}$}] {}; \tikzset{label distance=-1mm}; \node (a1) at (1.5,1.2) [point, label=below:{\footnotesize $\rule{0pt}{8pt}\hspace*{1.4em}X_1^1,\!Z_{1,3}$}] {}; \draw[<-,thick] (a) -- (a1); \node (a0) at (1.5,-1.2) [point, label=below:{\footnotesize $X_1^0,Z_{1,1}$}] {}; \draw[<-,thick] (a) -- (a0); \node (a11) at (3.3,1.8) [point, label=above:{\footnotesize $X_2^1$}] {}; \draw[<-,thick] (a1) -- (a11); \node (a10) at (3.3,0.6) [point, label=above:{\footnotesize $X_2^0$}] {}; \draw[<-,thick] (a1) -- (a10); \node (a01) at (3.3,-0.6) [point, label=above:{\footnotesize $X_2^1$}] {}; \draw[<-,thick] (a0) -- (a01); \node (a00) at (3.3,-1.8) [point, label=above:{\footnotesize $X_2^0$}] {}; \draw[<-,thick] (a0) -- (a00); \node (a111) at (5.1,2.1) [point, label=right:{\footnotesize $X_3^1$}] {}; \draw[<-,thick] (a11) -- (a111); \node (a110) at (5.1,1.5) [point, label=right:{\footnotesize $X_3^0,Z_{3,3}$}] {}; \draw[<-,thick] (a11) -- (a110); \node (a101) at (5.1,0.9) [point, label=right:{\footnotesize $X_3^1$}] {}; \draw[<-,thick] (a10) -- (a101); \node (a100) at (5.1,0.3) [point, label=right:{\footnotesize $X_3^0,Z_{3,3}$}] {}; \draw[<-,thick] (a10) -- (a100); \node (a011) at (5.1,-0.3) [point, label=right:{\footnotesize $X_3^1$}] {}; \draw[<-,thick] (a01) -- (a011); \node (a010) at (5.1,-0.9) [point, label=right:{\footnotesize $X_3^0,Z_{3,3}$}] {}; \draw[<-,thick] (a01) -- (a010); \node (a001) at (5.1,-1.5) [point, 
label=right:{\footnotesize $X_3^1$}] {}; \draw[<-,thick] (a00) -- (a001); \node (a000) at (5.1,-2.1) [point, label=right:{\footnotesize $X_3^0,Z_{3,3}$}] {}; \draw[<-,thick] (a00) -- (a000); \node (c11) at (-0.3,-1.2) [point, label=left:{\footnotesize $Z_{0,1}$}] {}; % \draw[<-] (c11) -- (a0); \node (c31) at (-0.3,1.2) [point, label=left:{\footnotesize $Z_{0,3}$}] {}; % \draw[<-] (c31) -- (a1); \foreach \y/\l/\a in {-2.1/00/below,-0.9/01/above,0.3/10/below,1.5/11/above} { \node (c000) at (3.3,\y) [point, label=below:{\footnotesize $Z_{2,3}$}] {}; % \draw[<-] (c000) -- (a\l0); \node (c000z) at (1.5,\y) [point,label=\a:{\footnotesize $Z_{1,3}$} ] {}; \draw[<-] (c000z) -- (c000); \node (c000zz) at (-0.3,\y) [point, label=left:{\footnotesize $Z_{0,3}$}] {}; \draw[<-] (c000zz) -- (c000z); } \begin{scope}[yshift=-10mm] \node at (-2.2,-3) {$\boldsymbol q_{f^n}$}; \node (y0) at (-0.3, -2.4) [point,label=above:{\footnotesize $y_0$},label=left:{\small $A_0$}] {}; \foreach \x/\n/\p in {1.5/1/0,3.3/2/1,5.1/3/2% } { \node (y\n) at (\x, -2.4) [point,label=above:{\footnotesize $y_\n$}] {}; \draw[<-] (y\p) -- (y\n); } \node (z01) at (3.3,-3) [point,label=above:{\footnotesize $z_{2,j}$}] {}; \draw[->,out=-90,in=0] (y3) to (z01); \foreach \j/\y in {2/-3.3,3/-3.6,4/-3.9,5/-4.2} { \node (z0\j) at (3.3,\y) [point] {}; \draw[->,out=-90,in=0] (y3) to (z0\j); } \foreach \x/\n/\p/\m in {1.5/1/0/1, -0.3/2/1/0} { \node (z\n1) at (\x, -3) [point,label=above:{\footnotesize $z_{\m,j}$}] {}; \draw[->] (z\p1) -- (z\n1); \foreach \j/\y in {2/-3.3,3/-3.6,4/-3.9,5/-4.2} { \node (z\n\j) at (\x, \y) [point] {}; \draw[->] (z\p\j) -- (z\n\j); } } \node (z 41) at (-0.3, -3) [label=left:{\small $Z_{0,1}$}] {}; \foreach \j/\y in {2/-3.3,3/-3.6,4/-3.9,5/-4.2} { \node (z 4\j) at (-0.3, \y) [label=left:{\small $Z_{0,\j}$}] {}; } \end{scope} \end{tikzpicture} \end{center} \caption{Canonical model $\mathcal{C}_{(\mathcal{T}_{f^n}, \mathcal{A}_{\vec{\alpha}})}$ and query $\boldsymbol q_{f^n}$ for a Boolean 
function $f^n$, $n = 1$, computed by the circuit with one input $x$, one advice input $y$ and a single $\land$-gate. Thus, $N = 3$, $d = 5$ and $\phifn[](x,y,g) = \neg x \land g \land (x \lor \neg g)\land (y \lor \neg g) \land (\neg x\lor\neg y\lor g)$. Points in $X_i^\ell$ are also in $A_i$, for all $1 \leq i \leq N$; the arrows denote role $P$ and the $Z_{i,j}$ branches in the canonical model are shown only for $j = 1,3$, i.e., for $\neg x$ and $(x\lor\neg g)$.} \label{gen-mod} \end{figure} We explain the intuition behind the $\mathcal{T}_{f^n}$, $\boldsymbol q_{f^n}$ and $\mathcal{A}_{\vec{\alpha}}$ using the example of Fig.~\ref{gen-mod}, where the query $\boldsymbol q_{f^n}$ and the canonical model of $(\mathcal{T}_{f^n}, \mathcal{A}_{\vec{\alpha}})$, with $\mathcal{A}_{\vec{\alpha}} = \{A_0(a), Z_{0,1}(a) \}$, are illustrated for some Boolean function. To answer $\boldsymbol q_{f^n}$ in the canonical model, we have to check whether $\boldsymbol q_{f^n}$ can be homomorphically mapped into it. The variables $y_i$ are clearly mapped to one of the branches of the canonical model from $a$ to a point in $A_3$, say the lowest one, which corresponds to the valuation for the variables in $\phifn$ making all of them false. Now, there are two possible ways to map variables $z_{2,1},z_{1,1},z_{0,1}$ that correspond to the clause $C_1 = \neg x_1$ in $\phifn[]$. If they are sent to the same branch so that $z_{0,1} \mapsto a$ then $Z_{0,1} (a)\in \mathcal{A}_{\vec{\alpha}}$, whence the clause $C_1$ cannot be in $\phifn$. Otherwise, they are mapped to the points in a side-branch so that $z_{0,1}\not\mapsto a$, in which case $\neg x_1$ must be true under our valuation. Thus, we arrive at the following: \begin{lemma}\label{l1} $(\mathcal{T}_{f^n},\mathcal{A}_{\vec{\alpha}}) \models \boldsymbol q_{f^n}$ iff $\phifn$ is satisfiable, for all $\vec{\alpha}\in\{0,1\}^n$. 
\end{lemma} \begin{proof} $(\Rightarrow)$ Let $\mathfrak{a}$ be an assignment of points in the canonical model of $(\mathcal{T}_{f^n},\mathcal{A}_{\vec{\alpha}})$ to the variables of $\boldsymbol q_{f^n}$ under which it holds true. In particular, for all $0 \leq i \leq N$, $\mathfrak{a}(y_i)$ is in $A_i$, and thus the $\mathfrak{a}(y_i)$ define a vector $\vec{\gamma}$ by taking $\gamma_i = 1$ if $\mathfrak{a}(y_i) \in X_i^1$ and $\gamma_i = 0$ otherwise, for all $1 \leq i \leq N$. We show $\phifn(\vec{\gamma}) = 1$. Take any clause $C_j$ in $\phifn[]$ and consider $\mathfrak{a} (z_{0,j}) \in Z_{0,j}$. If $\mathfrak a (z_{0,j}) = a$ then $j \leq n$, $Z_{0,j}(a) \in \mathcal{A}_{\vec{\alpha}}$ and $\alpha_j=1$; thus, the clause $\neg x_j$ does not occur in $\phifn$. Otherwise, $\mathfrak a (z_{0,j}) \ne a$ and so, some $\mathfrak{a}(y_i)$ is in $Z_{i,j}$, which means that the clause $C_j$ contains $p_i$ if $\mathfrak{a}(y_i) \in X_i^1$ and $\neg p_i$ otherwise. By the definition of $\vec{\gamma}$, $\phifn(\vec{\gamma}) = 1$. $(\Leftarrow)$ Suppose $\phifn(\vec{\gamma}) = 1$. Recall that the canonical model of $(\mathcal{T}_{f^n},\mathcal{A}_{\vec{\alpha}})$ contains a path $u_0,\dots,u_N$ from $a = u_0$ to some $u_N$ that corresponds to that assignment in the following sense: for all $1 \leq i \leq N$, $u_i \in X_i^1$ if $\gamma_i = 1$ and $u_i \in X_i^0$ otherwise. We construct an assignment $\mathfrak{a}$ of points in the canonical model of $(\mathcal{T}_{f^n},\mathcal{A}_{\vec{\alpha}})$ to the variables in $\boldsymbol q_{f^n}$ in accordance with this valuation. For $0 \leq i \le N$, we set $\mathfrak{a}(y_i) = u_i$. For $1 \leq j \leq d$, we define $\mathfrak{a}(z_{N-1,j}),\dots,\mathfrak{a}(z_{0,j})$ recursively, starting from $\mathfrak{a}(z_{N-1,j})$: set $\mathfrak{a}(z_{i,j}) = \mathfrak{a}(z_{i+1,j}) w_{[PZ_{i,j}]}$ if $\mathfrak{a}(z_{i+1,j})$ is in $Z_{i+1,j}$ and $\mathfrak{a}(z_{i,j}) = u_i$, otherwise (assuming that $z_{N,j}=y_N$). 
It is easy to check that $\boldsymbol q_{f^n}$ is true in the canonical model under this assignment. \end{proof} \section{The Size of Rewritings}\label{s:6} Now we show how PE-rewritings for $\boldsymbol q_{f^n}$ and $\mathcal{T}_{f^n}$ can be transformed into monotone Boolean formulas computing $f^n$, how FO-rewritings can be transformed into Boolean formulas and NDL-rewritings into monotone Boolean circuits. \begin{lemma}\label{l:4} Let $f^1,f^2,\dots$ be a family of monotone Boolean functions in \textsc{NP}, and let $f = f^n$, for some $n$. {\rm (i)} If $\boldsymbol q'_f$ is a PE-rewriting for $\boldsymbol q_f$ and $\mathcal{T}_f$ then there is a monotone Boolean formula $\psi_f$ computing $f$ with $|\psi_f| \le |\boldsymbol q'_f|$. {\rm (ii)} If $\boldsymbol q_f'$ is an $FO$-rewriting for $\boldsymbol q_f$ and $\mathcal{T}_f$ and the signature $\Sigma$ contains a single constant then there is a Boolean formula $\psi_f$ computing $f$ with $|\psi_f| \le |\boldsymbol q'_f|$. {\rm (iii)} If $(\Pi_f,G)$ is an NDL-rewriting for $\boldsymbol q_f$ and $\mathcal{T}_f$ then there is a monotone Boolean circuit $\mathbf{C}_f$ computing $f$ with $|\mathbf{C}_f| \le |\Pi_f|$. \end{lemma} \begin{proof} {\rm (i)} By Lemmas~\ref{l:F} and \ref{l1}, for any PE-rewriting $\boldsymbol q'_{f}$ for $\boldsymbol q_{f}$ and $\mathcal{T}_{f}$, we have \begin{equation*} \mathcal{I}_{\mathcal{A}_{\vec{\alpha}}} \models \boldsymbol q'_{f} \qquad \text{iff} \qquad f(\vec{\alpha}) = 1, \quad \text{ for all }\vec{\alpha}\in\{0,1\}^n. \end{equation*} Recall that, of all ground atoms in signature $\Sigma$, only $A_0(a)$ and the $Z_{0,j}(a)$, for $1 \le j \le n$, can be true in $\mathcal{I}_{\mathcal{A}_{\vec{\alpha}}}$. In particular, no predicate can be true in $\mathcal{I}_{\mathcal{A}_{\vec{\alpha}}}$ on an element different from $a$. 
So, we can replace all the individual variables in $\boldsymbol q'_{f}$ with $a$, remove all (existential) quantifiers and replace $A_0(a)$ by $\top$ and all the atoms different from $A_0(a)$ and $Z_{0,j}(a)$, for $1 \leq j\leq n$, by $\bot$ without affecting the truth-value of $\boldsymbol q'_f$ in $\mathcal{I}_{\mathcal{A}_{\vec{\alpha}}}$. Denote the resulting PE-query by $\boldsymbol q_{f}^\dag$. It does not contain any variables and we have $\mathcal{I}_{\mathcal{A}_{\vec{\alpha}}} \models \boldsymbol q_{f}^\dag$ iff $f(\vec{\alpha}) = 1$. The formula $\boldsymbol q_{f}^\dag$ is equivalent to a propositional formula, $\psi_f$, with the connectives $\land$, $\lor$ and the propositional variables $Z_{0,j}(a)$, for $1 \leq j \leq n$, such that $\mathcal{I}_{\mathcal{A}_{\vec{\alpha}}} \models Z_{0,j}(a)$ iff $\alpha_j = 1$. Thus, $\psi_f$ computes $f$ and, clearly, $|\psi_f| \le |\boldsymbol q'_{f}|$. {\rm (ii)} If, in addition, $\Sigma$ contains only one constant, $a$, then in the same way we can convert any FO-rewriting $\boldsymbol q'_{f}$ for $\boldsymbol q_{f}$ and $\mathcal{T}_{f}$ --- even with $\forall$ and $\neg$ --- to a propositional formula with variables $Z_{0,j}(a)$, for $1\le j \leq n$, which computes $f$. {\rm (iii)} Suppose now that $(\Pi_f, G)$ is an NDL-rewriting for $\boldsymbol q_{f}$ and $\mathcal{T}_{f}$ over a given signature $\Sigma$, containing $a$ among its constants. Then, for any ground $\Sigma$-atom $Q(t_1,\dots,t_l)$ with at least one $t_i$ different from $a$, we have $\Pi_f, \mathcal{A}_{\vec{\alpha}} \not\models Q(t_1,\dots,t_l)$ (which can be easily proved by induction on the length of derivations using the fact that $\Pi_f$ is pure and each variable that occurs in the head of a clause must also occur in its body). 
So we can again replace all the individual variables in $\Pi_f$ with $a$, $A_0(a)$ with $\top$ and all the atoms that do not occur in the head of a clause and different from $A_0(a)$ and $Z_{0,j}(a)$, for $1 \leq j \leq n$, with $\bot$. Denote the resulting propositional NDL-program by $\Pi_f^\dag$. Then $\Pi^\dag_f,\mathcal{A}_{\vec{\alpha}} \models G$ iff $f(\vec{\alpha}) = 1$. The program $\Pi^\dag_f$ can now be transformed into a monotone Boolean circuit $\mathbf{C}_{f}$ computing $f$: for every (propositional) variable $p$ occurring in the head of a clause in $\Pi^\dag_f$, we introduce an $\lor$-gate whose output is $p$ and inputs are the bodies of the clauses with head $p$; and for each such body, we introduce an $\land$-gate whose inputs are the propositional variables in the body. The resulting monotone Boolean circuit with inputs $Z_{0,j}(a)$, for $1 \le j \leq n$, and output $G$ is denoted by $\mathbf{C}_{f}$. Clearly, $|\mathbf{C}_{f}| \le |\Pi_f|$. \end{proof} \begin{lemma}\label{l:2} Let $f^1,f^2,\dots$ be a family of monotone Boolean functions in \textsc{NP}, and let $f = f^n$, for some $n$. The following holds for signatures with a single constant\textup{:} {\rm (i)} Suppose $\boldsymbol q'$ is an FO-sentence such that $(\mathcal{T}_f,\mathcal{A}_{\vec{\alpha}}) \models \boldsymbol q_f$ iff $\mathcal{I}_{\mathcal{A}_{\vec{\alpha}}} \models \boldsymbol q'$, for any $\vec{\alpha}$. Then\footnote{Here and below, $B(x)$ denotes $\exists y\,P(x,y)$ in the case of $B = \exists P$.} $$ \boldsymbol q'' ~=~ \exists x\, \Bigl[A_0(x) \land \bigl(\boldsymbol q' \lor \bigvee_{A_0 \sqcap B \sqsubseteq_{\mathcal{T}_f} \bot}\hspace*{-1em} B(x)\bigr)\Bigr] $$ is an FO-rewriting for $\boldsymbol q_f$ and $\mathcal{T}_f$ with $|\boldsymbol q''| = |\boldsymbol q'| + O(|\mathbf{C}^n|^2)$. 
{\rm (ii)} Suppose $(\Pi,G)$ is a pure NDL-query with a propositional goal $G$ such that $(\mathcal{T}_f,\mathcal{A}_{\vec{\alpha}}) \models \boldsymbol q_f$ iff $\Pi, \mathcal{A}_{\vec{\alpha}} \models G$, for any $\vec{\alpha}$. Then $(\Pi',G')$ is an NDL-rewriting for $\boldsymbol q_f$ and $\mathcal{T}_f$ with $|\Pi'| = |\Pi| + O(|\mathbf{C}^n|^2)$, where $G'$ is a fresh propositional variable and $\Pi'$ is obtained by extending $\Pi$ with the following clauses\textup{:} \begin{itemize} \item[--] $\forall x \, (A_0(x) \land G \to G')$, \item[--] $\forall x\, (A_0(x) \land B(x) \to G')$, for all concepts $B$ such that $A_0 \sqcap B \sqsubseteq_{\mathcal{T}_f} \bot$. \end{itemize} \end{lemma} \begin{proof} (i) The queries $\boldsymbol q'$ and $\boldsymbol q''$ give the same answer over any $\mathcal{A}_{\vec{\alpha}}$. Consider a different ABox $\mathcal{A}'$ in the signature of $\mathcal{T}_f$ with $\mathop{\mathsf{ind}}(\mathcal{A}') = \{a\}$. If $A_0(a) \notin \mathcal{A}'$ then we clearly have both $(\mathcal{T}_f,\mathcal{A}') \not\models \boldsymbol q_f$ and $\mathcal{I}_{\mathcal{A}'} \not\models \boldsymbol q''$. If $\mathcal{A}'$ contains $A_0(a)$ and any ground atom in the signature of $\mathcal{T}_f$ different from $A_0(a), Z_{0,1}(a), \dots, Z_{0,n}(a)$ then $(\mathcal{T}_f,\mathcal{A}')$ is inconsistent, and so $(\mathcal{T}_f,\mathcal{A}') \models \boldsymbol q_f$. On the other hand, we clearly have $\mathcal{I}_{\mathcal{A}'} \models \boldsymbol q''$. (ii) is proved in the same way. The programs $(\Pi, G)$ and $(\Pi', G')$ give the same answer over any $\mathcal{A}_{\vec{\alpha}}$. Consider a different ABox $\mathcal{A}'$ in the signature of $\mathcal{T}_f$ with $\mathop{\mathsf{ind}}(\mathcal{A}') = \{a\}$. If $A_0(a) \notin \mathcal{A}'$ then we clearly have both $(\mathcal{T}_f,\mathcal{A}') \not\models \boldsymbol q_f$ and $\Pi', \mathcal{A}' \not\models G'$. 
If $\mathcal{A}'$ contains $A_0(a)$ and any ground atom in the signature of $\mathcal{T}_f$ that is different from $A_0(a), Z_{0,1}(a), \dots, Z_{0,n}(a)$ then $(\mathcal{T}_f,\mathcal{A}')$ is inconsistent, and so $(\mathcal{T}_f,\mathcal{A}') \models \boldsymbol q_f$. On the other hand, we clearly have $\Pi', \mathcal{A}' \models G'$. \end{proof} \begin{remark}\rm It is worth noting that the lemma above can be extended to an \emph{arbitrary} signature (that is, to ABoxes with arbitrarily many individuals) provided that equality is available in rewritings. We refer to FO-rewritings with $=$ as FO$^=$-rewritings. {\it {\rm (i$'$)} Suppose that $\boldsymbol q'$ is an FO-sentence such that $(\mathcal{T}_f,\mathcal{A}_{\vec{\alpha}}) \models \boldsymbol q_f$ iff $\mathcal{I}_{\mathcal{A}_{\vec{\alpha}}} \models \boldsymbol q'$, for any $\vec{\alpha}$. Then there is an FO$^=$-rewriting $\boldsymbol q''$ for $\boldsymbol q_f$ and $\mathcal{T}_f$ such that $|\boldsymbol q''| \leq |\boldsymbol q'| + p(|\mathbf{C}^n|)$, for some polynomial $p$. {\rm (ii$'$)} Suppose that $(\Pi,G)$ is a pure NDL-query with a propositional goal $G$ such that $(\mathcal{T}_f,\mathcal{A}_{\vec{\alpha}}) \models \boldsymbol q_f$ iff $\Pi, \mathcal{A}_{\vec{\alpha}} \models G$, for any $\vec{\alpha}$. Then there is an NDL-rewriting $(\Pi',G')$ for $\boldsymbol q_f$ and $\mathcal{T}_f$ such that $|\Pi'| \leq |\Pi| + p(|\mathbf{C}^n|)$, for some polynomial $p$. } \smallskip \rm The proof uses the polynomial `impure' PE- and NDL-rewritings of Section~\ref{s:equality} and~\cite{GottlobS11}. To show (i$'$), let $\gamma$ be the PE-rewriting for $\boldsymbol q_f$ and $\mathcal{T}_f$ to be given in Section~\ref{s:equality}. We assume that this rewriting uses only two constants, say $0$ and $1$. 
Now, given an FO-sentence $\boldsymbol q'$ that is evaluated over ABoxes with a single individual only, we can clearly construct a quantifier-free FO-formula $\boldsymbol q_0(x)$ in the signature of $\boldsymbol q'$ such that it contains no constants and $\mathcal{I}_\mathcal{A}\models \boldsymbol q_0(a)$ iff $\mathcal{I}_\mathcal{A}\models \boldsymbol q'$, for all ABoxes with a single individual $a$. Consider now the following FO-sentence \begin{equation*} \boldsymbol q'' ~=~ \exists x\, \Bigl[A_0(x) \land \bigl(\boldsymbol q_0(x) \ \ \lor \hspace*{-1em}\bigvee_{A_0 \sqcap B \sqsubseteq_{\mathcal{T}_f} \bot}\hspace*{-1em} B(x) \ \ \lor \ \ \exists y\,\bigl(P(y,x) \land \gamma[0/x, 1/y] \bigr)\bigr)\Bigr], \end{equation*} where $\gamma[0/x, 1/y]$ is the result of replacing each occurrence of $0$ in $\gamma$ with $x$ and each occurrence of $1$ with $y$. Suppose $(\mathcal{T}_f,\mathcal{A})\models \boldsymbol q_f$. Then either $(\mathcal{T}_f,\mathcal{A})$ is inconsistent or $\mathcal{A}$ has an individual $a_0$ such that $(\mathcal{T}_f,\mathcal{A})\models \boldsymbol q_f(a_0)$, where $\boldsymbol q_f(a_0)$ is the query $\boldsymbol q_f$ with $y_0$ replaced by $a_0$. In the former case, by the second disjunct, we have $\mathcal{I}_\mathcal{A}\models \boldsymbol q''$, which is a correct positive answer. In the latter case, if there is a distinct $a_1$ with $P(a_1,a_0)$ in $\mathcal{A}$ then the rewriting $\gamma$ provides the correct positive answer and, by the third disjunct, $\mathcal{I}_\mathcal{A}\models \boldsymbol q''$. Finally, if neither of the above cases is applicable to $a_0$ then $\mathcal{A}_{a_0} = \{ D(a_0) \mid D(a_0)\in\mathcal{A}, D \text{ is a concept name} \}$ coincides with $\mathcal{A}_{\vec{\alpha}}$, for some $\vec{\alpha}$, in which case the correct positive answer is given by $\boldsymbol q_0(a_0)$. Conversely, suppose $(\mathcal{T}_f,\mathcal{A})\not\models \boldsymbol q_f$. 
Then $(\mathcal{T}_f,\mathcal{A})$ is consistent, and so, the second disjunct is false. If there is no $a_0$ with $A_0(a_0)\in \mathcal{A}$ then, clearly, $\mathcal{I}_\mathcal{A}\not\models \boldsymbol q''$. So, take an arbitrary individual $a_0$ such that $A_0(a_0)\in \mathcal{A}$. If $P(a_1,a_0)\in \mathcal{A}$, for some $a_1$ (distinct from $a_0$ due to consistency) then, on the one hand, we have $\mathcal{I}_\mathcal{A}\not\models \gamma[0/a_0,1/a_1]$ and so, the third disjunct is false. On the other hand, if $\mathcal{A}_{a_0} = \{ D(a_0) \mid D(a_0)\in\mathcal{A}, D \text{ is a concept name} \}$ coincides with some $\mathcal{A}_{\vec{\alpha}}$ then $(\mathcal{T}_f,\mathcal{A}_{a_0})\models\boldsymbol q_f$ iff $\mathcal{I}_{\mathcal{A}_{a_0}}\models \boldsymbol q'$ iff $\mathcal{I}_{\mathcal{A}_{a_0}}\models \boldsymbol q_0(a_0)$. It follows that $\mathcal{I}_\mathcal{A}\not\models \boldsymbol q_0(a)$, for all individuals $a$ with $A_0(a)\in \mathcal{A}$, and so, the first disjunct is false as well. \smallskip Claim (ii$'$) is proved in a similar way, using a modification of the polynomial-size NDL-rewriting of Gottlob and Schwentick~\cite{GottlobS11}. (We note that in the short NDL-rewriting of~\cite{GottlobS11} the inequality predicate $\neq$ is applied only to terms that range over the extra constants, and not ABox individuals, and therefore one can write a short program defining $\neq$ by listing all pairs of non-equal constants.) Let the NDL-query $(\Delta,Q(z_0,z_1))$ be the short impure rewriting for $\boldsymbol q_f$ and $\mathcal{T}_f$, which uses $z_0$ and $z_1$ for the constants $0$ and $1$. Next, given an NDL-query $(\Pi,G)$ that is evaluated over ABoxes with a single individual only, we can construct a new NDL-query $(\Pi_0,G_0(x))$ such that all predicates of $\Pi_0$ are unary, all clauses have a single variable and $\Pi_0,\mathcal{A}\models G_0(a)$ iff $\Pi,\mathcal{A}\models G$, for all ABoxes with a single individual $a$. 
Consider now $(\Pi',G')$, where $G'$ is a fresh propositional variable and $\Pi'$ consists of $\Pi_0$, $\Delta$ and the following three clauses: \begin{itemize} \item[--] $\forall x\,(A_0(x) \land B(x) \to G')$, for all concepts $B$ with $A_0 \sqcap B \sqsubseteq_{\mathcal{T}_f} \bot$, \item[--] $\forall x\,(A_0(x) \land G_0(x) \to G')$, \item[--] $\forall x,y\, (A_0(x)\land P(y,x)\land Q(x,y) \to G')$. \end{itemize} Suppose $(\mathcal{T}_f,\mathcal{A})\models \boldsymbol q_f$. Then either $(\mathcal{T}_f,\mathcal{A})$ is inconsistent or $\mathcal{A}$ has an individual $a_0$ such that $(\mathcal{T}_f,\mathcal{A})\models \boldsymbol q_f(a_0)$, where $\boldsymbol q_f(a_0)$ is the query $\boldsymbol q_f$ with $y_0$ replaced by $a_0$. In the former case, by the first clause, we have $\Pi',\mathcal{A}\models G'$, which is a correct positive answer. In the latter case, if there is a distinct $a_1$ with $P(a_1,a_0)$ in $\mathcal{A}$ then the program $\Delta$ provides the correct positive answer and, by the third clause, $\Pi',\mathcal{A}\models G'$. Finally, if neither of the above cases is applicable to $a_0$ then $\mathcal{A}_{a_0} = \{ D(a_0) \mid D(a_0)\in\mathcal{A}, D \text{ is a concept name} \}$ coincides with some $\mathcal{A}_{\vec{\alpha}}$, in which case the correct positive answer is given by $\Pi_0$. Conversely, suppose $(\mathcal{T}_f,\mathcal{A})\not\models \boldsymbol q_f$. Then $(\mathcal{T}_f,\mathcal{A})$ is consistent, and so, the first clause is not applicable. If there is no $a_0$ with $A_0(a_0)\in \mathcal{A}$ then, clearly, $\Pi',\mathcal{A}\not\models G'$. So, take an arbitrary individual $a_0$ such that $A_0(a_0)\in \mathcal{A}$. If $P(a_1,a_0)\in \mathcal{A}$, for some $a_1$ (distinct from $a_0$ due to consistency) then, on the one hand, $\Delta,\mathcal{A}\not\models Q(a_0,a_1)$ and so, the third clause cannot give a positive answer. 
On the other hand, if $\mathcal{A}_{a_0} = \{ D(a_0) \mid D(a_0)\in\mathcal{A}, D \text{ is a concept name} \}$ coincides with some $\mathcal{A}_{\vec{\alpha}}$ then $(\mathcal{T}_f,\mathcal{A}_{a_0})\models\boldsymbol q_f$ iff $\Pi,\mathcal{A}_{a_0}\models G$ iff $\Pi_0,\mathcal{A}_{a_0}\models G_0(a_0)$. It follows that $\Pi_0,\mathcal{A}\not\models G_0(a)$, for all individuals $a$ with $A_0(a)\in \mathcal{A}$, and so, the second clause cannot give a positive answer either. \end{remark} We are now in a position to prove our main theorem, which connects the size of circuits computing monotone Boolean functions with the size of rewritings for the corresponding queries and ontologies. \begin{theorem}\label{thm.main} For any family $f^1,f^2,\dots$ of monotone Boolean functions in \textsc{NP}, there exist polynomial-size CQs $\boldsymbol q_n$ and \textsl{OWL\,2\,QL}{} TBoxes $\mathcal{T}_n$ such that the following holds\textup{:} \begin{description} \item[\normalfont (1)] Let $L(n)$ be a lower bound for the size of monotone Boolean formulas computing $f^n$. Then, $|\boldsymbol q'_n| \ge L(n)$, for any PE-rewriting $\boldsymbol q'_n$ for $\boldsymbol q_n$ and $\mathcal{T}_n$. \item[\normalfont (2)] Let $L(n)$ and $U(n)$ be a lower and an upper bound for the size of monotone Boolean circuits computing $f^n$. Then \begin{itemize} \item[--] $|\Pi_n| \ge L(n)$, for any NDL-rewriting $(\Pi_n,G)$ for $\boldsymbol q_n$ and $\mathcal{T}_n$\textup{;} \item[--] there exist a polynomial $p$ and an NDL-rewriting $(\Pi_n,G)$ for $\boldsymbol q_n$ and $\mathcal{T}_n$ over any suitable signature with a single constant such that $|\Pi_n|\le U(n) + p(n)$. \end{itemize} \item[\normalfont (3)] Let $L(n)$ and $U(n)$ be a lower and an upper bound for the size of Boolean formulas computing $f^n$.
Then \begin{itemize} \item[--] $|\boldsymbol q'_n| \ge L(n)$, for any FO-rewriting $\boldsymbol q'_n$ for $\boldsymbol q_n$ and $\mathcal{T}_n$ over any suitable signature with a single constant\textup{;} \item[--] there exist a polynomial $p$ and an FO-rewriting $\boldsymbol q'_n$ for $\boldsymbol q_n$ and $\mathcal{T}_n$ over any suitable signature with a single constant with $|\boldsymbol q'_n| \le U(n) + p(n)$. \end{itemize} \end{description} \end{theorem} \begin{proof} (1) follows from Lemma~\ref{l:4}~(i). The first claim of (2) follows from Lemma~\ref{l:4}~(ii). To prove the second claim, take any circuit $\mathbf{C}^n$ computing $f^n$ and having size $\le U(n)$. By Lemmas~\ref{l:F} and~\ref{l1}, $(\mathcal{T}_{f^n},\mathcal{A}_{\vec{\alpha}}) \models \boldsymbol q_{f^n}$ iff $\mathbf{C}^n(\vec{\alpha})=1$, for all $\vec{\alpha}\in\{0,1\}^n$. It should be clear that $\mathbf{C}^n$ can be transformed into an NDL-query $(\Pi,G)$ of size $|\mathbf{C}^n|$ such that $\Pi, \mathcal{A}_{\vec{\alpha}} \models G$ iff $(\mathcal{T}_{f^n},\mathcal{A}_{\vec{\alpha}}) \models \boldsymbol q_{f^n}$. Then we apply Lemma~\ref{l:2}. (3)~is proved analogously. \end{proof} \section{Rewritings Long and Short}\label{s:7} Now we apply Theorem~\ref{thm.main} to the Boolean functions mentioned in Section~\ref{sec:circuits} to demonstrate that some queries and ontologies may only have very long rewritings, and that rewritings of one type can be exponentially more succinct than rewritings of another type. First, we show that one cannot avoid an exponential blow-up for PE- and NDL-rewritings. We also show that even FO-rewritings can blow up superpolynomially for signatures with a single constant under the assumption that $\textsc{NP} \not\subseteq \textsc{P}/\text{poly}$.
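To make the circuit-to-NDL step in the proof of Theorem~\ref{thm.main} concrete, here is a small sketch (our own illustration; gate and predicate names are invented): each AND gate becomes one clause, each OR gate one clause per input, so the resulting program has size $|\mathbf{C}^n|$, and the ABox $\mathcal{A}_{\vec\alpha}$ supplies a fact for every input bit set to $1$.

```python
# Sketch: translating a monotone Boolean circuit into an NDL (nonrecursive
# Datalog) program of the same size.  Gates are named g1, g2, ...; the input
# gate x_i is read off the ABox encoding of the assignment alpha.

def circuit_to_ndl(gates):
    """gates: name -> ('and'|'or', [input names]); inputs are x0, x1, ..."""
    clauses = []
    for g, (op, ins) in gates.items():
        if op == "and":
            clauses.append((g, list(ins)))      # g :- in_1, ..., in_k
        else:  # 'or'
            clauses += [(g, [i]) for i in ins]  # g :- in_j, one clause each
    return clauses

def entails(clauses, facts, goal):
    """Forward chaining: does the program derive `goal` from `facts`?"""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for h, body in clauses:
            if h not in facts and all(b in facts for b in body):
                facts.add(h)
                changed = True
    return goal in facts

# C(x0,x1,x2) = (x0 AND x1) OR x2, with goal gate G.
gates = {"g1": ("and", ["x0", "x1"]), "G": ("or", ["g1", "x2"])}
ndl = circuit_to_ndl(gates)
# The ABox A_alpha contributes the fact x_i for each alpha_i = 1.
assert entails(ndl, ["x0", "x1"], "G")   # alpha = 110  ->  C(alpha) = 1
assert not entails(ndl, ["x1"], "G")     # alpha = 010  ->  C(alpha) = 0
```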
\begin{theorem}\label{c1} There is a sequence of CQs $\boldsymbol q_n$ of size $O(n)$ and $\textsl{OWL\,2\,QL}$ TBoxes $\mathcal{T}_n$ of size $O(n)$ such that\textup{:} \begin{itemize} \item[--] any PE-rewriting for $\boldsymbol q_n$ and $\mathcal{T}_n$ \textup{(}over any suitable signature\textup{)} is of size $\geq 2^{\Omega(n^{1/4})}$\textup{;} \item[--] any NDL-rewriting for $\boldsymbol q_n$ and $\mathcal{T}_n$ \textup{(}over any suitable signature\textup{)} is of size $\geq 2^{\Omega(({n/\log n})^{1/12})}$\textup{;} \item[--] there does not exist a polynomial-size $FO$-rewriting for $\boldsymbol q_n$ and $\mathcal{T}_n$ over any suitable signature with a single constant unless $\textsc{NP} \subseteq \textsc{P}/\text{poly}$. \end{itemize} \end{theorem} \begin{proof} Consider $f^n = \textsc{Clique}_{m,k}$ for $m = \lfloor n^{1/4} \rfloor$ and $k = \lfloor 2m/3 \rfloor = \Omega(n^{1/4})$. Then the size of $\boldsymbol q_n = \boldsymbol q_{f^n}$ and $\mathcal{T}_n = \mathcal{T}_{f^n}$ is $O(n)$. The lower bound for PE-rewritings follows from Theorem~\ref{thm.main} and the lower bound for $\textsc{Clique}_{m,k}$~\cite{RazW92}. The lower bound for NDL-rewritings is obtained by using a similar family with $k = \lfloor (m/ \log m)^{2/3} \rfloor = \Omega((n/ \log n)^{1/6})$~\cite{AlonB87}. If we assume $\textsc{NP} \nsubseteq \textsc{P}/\text{poly}$ then there is no polynomial-size circuit for the \textsc{NP}-complete function $\textsc{Clique}_{m,k}$, whence there is no polynomial-size FO-rewriting of $\boldsymbol q_{f^n}$ and $\mathcal{T}_{f^n}$ over any signature containing a single constant. \end{proof} \begin{remark}\em By the Karp-Lipton theorem (see, e.g.,~\cite{Arora&Barak09}) $\textsc{NP} \subseteq \textsc{P}/\text{poly}$ implies $\textsc{PH} = \Sigma_{2}^{p}$. Thus, in Theorem~\ref{c1}, we can replace the assumption $\textsc{NP} \not\subseteq \textsc{P}/\text{poly}$ with $\textsc{PH} \neq \Sigma_{2}^{p}$. 
\end{remark} Next we show that NDL-rewritings can be exponentially more succinct than PE-rewritings. \begin{theorem} There is a sequence of CQs $\boldsymbol q_n$ of size $O(n)$ and $\textsl{OWL\,2\,QL}$ TBoxes $\mathcal{T}_n$ of size $O(n)$ for which there exists a polynomial-size NDL-rewriting over a signature with a single constant, but any PE-rewriting over this signature is of size $\ge 2^{n^{\varepsilon}}$, for some $\varepsilon>0$. \end{theorem} \begin{proof} Consider the family $\textsc{Gen}_{m^3}$. There is a polynomial $p$ and monotone Boolean circuits $\mathbf{C}^{m^3}$ computing $\textsc{Gen}_{m^3}$ with $|\mathbf{C}^{m^3}|\leq p(m)$. It follows that, for each $n$, we can choose $m$ so that, for $f^n = \textsc{Gen}_{m^3}$, the size of both $\boldsymbol q_n = \boldsymbol q_{f^n}$ and $\mathcal{T}_n = \mathcal{T}_{f^n}$ is $O(n)$. In fact, $m = \Theta(n^{\delta})$, for some $\delta >0$. By Theorem~\ref{thm.main} and the bounds on the circuit complexity of $\textsc{Gen}_{m^3}$~\cite{RazM97}, there is a polynomial-size NDL-rewriting of $\boldsymbol q_n$ and $\mathcal{T}_n$, but any PE-rewriting of $\boldsymbol q_n$ and $\mathcal{T}_n$ is of size $\geq 2^{n^{\varepsilon}}$, for some $\varepsilon > 0$. \end{proof} FO-rewritings can also be substantially shorter than PE-rewritings: \begin{theorem} \label{cor.matching1} There is a sequence of CQs $\boldsymbol q_n$ of size $O(n)$ and $\textsl{OWL\,2\,QL}$ TBoxes $\mathcal{T}_n$ of size $O(n)$ which has an FO-rewriting of size $n^{O(\log n)}$ over a signature with a single constant, but any PE-rewriting over this signature is of size $\ge 2^{\Omega(n^{1/4})}$. \end{theorem} \begin{proof} Consider $f^n = \textsc{Matching}_{2m}$ with $m = \lfloor n^{1/4} \rfloor$. Then the size of both $\boldsymbol q_n = \boldsymbol q_{f^n}$ and $\mathcal{T}_n = \mathcal{T}_{f^n}$ is $O(n)$.
By Theorem~\ref{thm.main} and the bounds for circuit complexity of $\textsc{Matching}_{2m}$~\cite{RazW92,BorodinGH82}, we obtain the required lower bound for PE-rewritings and the required upper bound for FO-rewritings; note that $(n^{{1/4}})^{\log n^{1/4}} = n^{O(\log n)}$. \end{proof} In fact, we can use a standard trick from circuit complexity theory to show that FO-rewritings can be superpolynomially more succinct than PE-rewritings. \begin{theorem} There is a sequence of CQs $\boldsymbol q_n$ of size $O(n)$ and $\textsl{OWL\,2\,QL}$ TBoxes $\mathcal{T}_n$ of size $O(n)$ which has a polynomial-size FO-rewriting over a signature with a single constant, but any PE-rewriting over this signature is of size $\ge 2^{\Omega(2^{\log^{1/2} n})}$. \end{theorem} \begin{proof} Consider $f^n = \textsc{Matching}_{2m}$ with $m = \lfloor 2^{\log^{1/2} n}\rfloor$ and add $\lfloor n^{1/4}\rfloor - m$ new dummy variables to each $f^n$. Then the size of both $\boldsymbol q_n = \boldsymbol q_{f^n}$ and $\mathcal{T}_n = \mathcal{T}_{f^n}$ is $O(n)$. But now Theorem~\ref{thm.main} and the bounds for the circuit complexity of $\textsc{Matching}_{2m}$~\cite{RazW92,BorodinGH82} give the $m^{O(\log m)} = n^{O(1)}$ upper bound for the size of FO-rewritings and the $2^{\Omega(m)} = 2^{ \Omega(2^{\log^{1/2}n})}$ lower bound for the size of PE-rewritings. \end{proof} \section{Short Impure Rewritings}\label{s:equality} In the proof of Theorem~\ref{c1}, we used CQs containing no constant symbols. It follows that the theorem will still hold if we allow the built-in predicates $=$ and $\ne$ in the rewritings, but disallow the use of constants that \emph{do not occur in the original query}. The situation changes drastically if $=$, $\ne$ and two additional constants, say 0 and 1, are allowed in the rewritings. As shown by Gottlob and Schwentick~\cite{GottlobS11}, in this case there is a polynomial-size NDL-rewriting for any CQ and \textsl{OWL\,2\,QL}{} TBox.
Roughly, the rewriting uses the extra expressive resources to encode in a succinct way the part of the canonical model that is relevant to answering the given query. We call rewritings of this kind \emph{impure} (indicating thereby that they use predicates and constants that do not occur in the original query and ontology). In fact, using the ideas of~\cite{Avigad01} and~\cite{GottlobS11}, one can construct an impure polynomial-size PE-rewriting for any CQ and \textsl{OWL\,2\,QL}{} TBox: \begin{theorem} For every CQ $\boldsymbol q$ and every \textsl{OWL\,2\,QL}{} TBox $\mathcal{T}$, there is an impure PE-rewriting $\boldsymbol q'$ for $\boldsymbol q$ and $\mathcal{T}$ whose size is polynomial in $|\boldsymbol q|$ and $|\mathcal{T}|$. \end{theorem} \begin{proof} We illustrate the idea of the proof for the larger ontology language of tuple-generating dependencies (TGDs). CQ answering under TGDs is undecidable in general~\cite{BeeriVardi81}. However, certain classes of TGDs (linear, sticky, etc.~\cite{CaliGL09,CaliGP10}) enjoy the so-called polynomial witness property (PWP)~\cite{GottlobS11}, which guarantees that, for each CQ $\boldsymbol q$ and each set $\mathcal{T}$ of TGDs from the class, there is a number $N$ polynomial in $|\boldsymbol q|$ and $|\mathcal{T}|$ such that, for each ABox $\mathcal{A}$ with $(\mathcal{T},\mathcal{A})\models\boldsymbol q$, there is a sequence of $N$ chase steps that entails $\boldsymbol q$. \textsl{OWL\,2\,QL}{} has PWP because its concept and role inclusions are special cases of linear TGDs. So, suppose we have a set $\mathcal{T}$ of TGDs from a class enjoying PWP.
Without loss of generality we may assume that all predicates are of arity $L$, all TGDs have precisely $m$ atoms in the body and there is at most one existentially quantified variable in the head (see, e.g.,~\cite{GottlobS11}), i.e., the TGDs are formulas of the form \begin{equation*} \forall \vec{x} \,\bigl(P_1(\vec{t}_1) \land \dots \land P_m(\vec{t}_m) \to \exists z\, P_0(\vec{t}_0)\bigr), \end{equation*} where each vector $\vec{t}_1,\dots,\vec{t}_m$ consists of $L$ (not necessarily distinct) variables from $\vec{x}$ (they are universally quantified) and each of the $L$ variables of $\vec{t}_0$ either coincides with one of the $\vec{x}$ (in which case it is universally quantified) or equals $z$ (in which case it is existentially quantified). Consider a CQ without free variables \begin{equation*} \boldsymbol q = \exists \vec{y}\,\bigwedge_{k = 1}^{|\boldsymbol q|} R_k(y_{k1},\dots,y_{kL}). \end{equation*} By PWP, there is a number $N$, polynomial in $|\boldsymbol q|$ and $|\mathcal{T}|$, such that, for any ABox $\mathcal{A}$, the query $\boldsymbol q$ is true on the atoms of $N$ steps of the chase for $\mathcal{T}$ and $\mathcal{A}$ (provided that $(\mathcal{T},\mathcal{A})\models \boldsymbol q$). In essence, our PE-rewriting guesses these $N$ ground atoms $\tau_1,\dots,\tau_N$ of the chase for $(\mathcal{T},\mathcal{A})$ and then checks whether the guess is a positive answer to $\boldsymbol q$ and the atoms indeed form steps of the chase for $(\mathcal{T},\mathcal{A})$.
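Before spelling out the variables and conjuncts of the rewriting, the guess-and-check idea can be illustrated by a small sketch (our own, with an invented atom representation and names): a certificate for $(\mathcal{T},\mathcal{A})\models\boldsymbol q$ is a sequence of ground atoms, each of which is either an ABox atom or the instantiated head of a TGD all of whose instantiated body atoms occur strictly earlier in the sequence. The PE-rewriting existentially quantifies exactly such a certificate and expresses this check by equalities.

```python
# Guess-and-check sketch of the PWP-based rewriting idea.  Atoms are pairs
# (predicate, args); template variables start with '?'; an existential head
# variable is frozen to a labelled null.

def match(template, ground, subst):
    """Match a template atom against a ground atom, extending the
    substitution subst; return the extended substitution or None."""
    (pred, args), (gpred, gargs) = template, ground
    if pred != gpred or len(args) != len(gargs):
        return None
    s = dict(subst)
    for t, g in zip(args, gargs):
        if t.startswith("?"):
            if s.setdefault(t, g) != g:
                return None
        elif t != g:
            return None
    return s

def valid_certificate(taus, abox, tgds):
    """Check that taus is a chase-style derivation from abox using tgds.
    (For simplicity, bindings of body-only variables are not threaded
    across body atoms -- enough for one-atom bodies as in linear TGDs.)"""
    seen = []
    for tau in taus:
        def produced():
            for body, head in tgds:
                s = match(head, tau, {})
                if s is not None and all(
                        any(match(b, t, s) is not None for t in seen)
                        for b in body):
                    return True
            return False
        if tau not in abox and not produced():
            return False
        seen.append(tau)
    return True

# TGD  A(?x) -> exists z R(?x, z),  with the witness frozen to a null:
tgds = [([("A", ("?x",))], ("R", ("?x", "null1")))]
abox = [("A", ("a",))]
assert valid_certificate([("A", ("a",)), ("R", ("a", "null1"))], abox, tgds)
assert not valid_certificate([("R", ("a", "null1"))], [], tgds)
```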
For each chase step $1 \leq i \leq N$, we will need the following variables: \begin{itemize} \item[--] $u_{i1},\dots,u_{iL}$ are the arguments of the ground atom $\tau_i$ and range over the ABox domain and the labelled nulls $\textit{null}_i$ (all these labelled nulls can be thought of as natural numbers not exceeding $N$); \item[--] $r_i$ is the number of the predicate of $\tau_i$ (each predicate name $P$ is given a unique number, denoted by $[P]$); so, $r_i$ with $u_{i1},\dots,u_{iL}$ encode $\tau_i$; \item[--] $w_{i1},\dots,w_{i\ell}$, where $\ell$ is the maximum length of the $\vec{x}$ in TGDs, are the arguments of the body of the TGD that generated $\tau_i$; they also range over the ABox domain and the labelled nulls (clearly, $\ell$ does not exceed $m\cdot L$). \end{itemize} The PE-rewriting is then defined by taking \begin{equation*} \boldsymbol q' = \exists \vec{y} \exists \vec{u}\exists \vec{r} \exists \vec{w}\,\Bigl(\bigwedge_{k = 1}^{|\boldsymbol q|}\bigvee_{i = 1}^N \Bigl[(r_i = [R_k])\land \bigwedge_{j = 1}^L (u_{ij} = y_{kj})\Bigr] \land \bigwedge_{i = 1}^N \bigvee \Phi_i \Bigr). \end{equation*} The first conjunct of the rewriting chooses, for each atom in the query, one of the ground atoms $\tau_1,\dots,\tau_N$ in such a way that its predicate coincides with the query atom's predicate and the arguments match. The second conjunct chooses, for each ground atom $\tau_1,\dots,\tau_N$, the number of a TGD that produces it or 0, if the atom is taken from the ABox. 
So, the set of formulas $\Phi_i$ contains \begin{equation*} \bigvee_{P \text{ is a predicate}} \bigl((r_i = [P]) \land P(u_{i1},\dots,u_{iL})\bigr) \end{equation*} for the case when $\tau_i$ is taken from the ABox ($r_i$ is such that $P(u_{i1},\dots,u_{iL})$ is in the ABox for the predicate $P$ with the number $r_i$) and the following disjunct, for each TGD \begin{equation*} \forall \vec{x} \,\bigl(P_1(t_{11},\dots,t_{1L}) \land \dots \land P_m(t_{m1},\dots,t_{mL}) \to \exists z\, P_0(t_{01},\dots,t_{0L})\bigr) \end{equation*} in $\mathcal{T}$, modelling the corresponding chase rule application: \begin{multline*} (r_i = [P_0]) \land \bigwedge_{t_{0j} = x_l} (u_{ij} = w_{il}) \land\hspace*{-0em} \bigwedge_{t_{0j} = z}\hspace*{-0em} (u_{ij} = \textit{null}_i) \land {} \\ \bigwedge_{k = 1}^m \bigvee_{i' = 1}^{i-1} \bigl( (r_{i'} = [P_k]) \land \bigwedge_{t_{kj} = x_l} (w_{il} = u_{i'j})\bigr). \end{multline*} Informally, if $\tau_i$ is generated by an application of the TGD above, then $r_i$ is the number $[P_0]$ of the head predicate $P_0$ and the existential variable $z$ of the head gets the unique null value $\textit{null}_i$ (third conjunct). Then, for each of the $m$ atoms of the body, one can choose a number $i'$ that is \emph{less than $i$} such that the predicate of $\tau_{i'}$ is the same as the predicate of the body atom and their arguments match (the last two conjuncts). The variables $w_{il}$ ensure that the same universally quantified variable gets the same value in different body atoms and in the head (if it occurs there, see the second conjunct). It can be verified that $|\boldsymbol q'|=O(|\boldsymbol q|\cdot |\mathcal{T}|\cdot N^2 \cdot L)$ and that $(\mathcal{T},\mathcal{A})\models\boldsymbol q$ iff $\boldsymbol q'$ is true in the model $\mathcal{I}_\mathcal{A}$ extended with constants $1,\dots,N$ (these constants are distinct and do not belong to the interpretation of any predicate but $=$). 
It should be noted that one can replace the numbers in the rewriting with just two constants 0 and 1 (again, with only $=$ interpreted over them). Each of the variables $u_{ij}$ can be replaced with a tuple $\bar{u}_{ij},u_{ij}^p,\dots,u_{ij}^0$ of variables with $p = \lceil\log N\rceil$ such that $\bar{u}_{ij}$ ranges over the ABox elements and $u_{ij}^p,\dots,u_{ij}^0$ range over $\{0,1\}$ and thus represent a number up to $N$. Similarly, we replace the $w_{il}$ and $r_i$. Each labelled null $\textit{null}_{i}$ is then replaced by the constant tuple representing the number $(i-1)$ in binary; the constants $[P]$ for the numbers of predicates $P$ are dealt with similarly. Finally, the equality atoms in the rewriting are replaced by the component-wise equalities and each $P(u_{i1},\dots,u_{iL})$ is replaced by $P(\bar{u}_{i1},\dots,\bar{u}_{iL})\land \bigwedge_{j = 1}^L\bigwedge_{k = 0}^p (u_{ij}^k = 0)$. \end{proof} Thus, we obtain the following: \begin{theorem} Impure PE- and NDL-rewritings for CQs and \textsl{OWL\,2\,QL}{} ontologies are exponentially more succinct than pure PE- and NDL-rewritings. \end{theorem} \section{Conclusion} The exponential lower bounds for the size of `pure' rewritings above may look discouraging in the OBDA context. It is to be noted, however, that the ontologies and queries used in their proofs are extremely `artificial' and never occur in practice (see the analysis in~\cite{KR12our}). As demonstrated by the existing description logic reasoners (such as FaCT\raisebox{1pt}{\scriptsize++}, HermiT, Pellet, Racer), \emph{real-world} ontologies can be classified efficiently despite the high worst-case complexity of the classification problem. We believe that practical query answering over \textsl{OWL\,2\,QL}{} ontologies can be feasible if supported by suitable optimisation and indexing techniques. It also remains to be seen whether polynomial-size impure rewritings can be used in practice. We conclude the paper by mentioning two open problems.
Our exponential lower bounds were proved for a sequence of pairs $(\boldsymbol q_n, \mathcal{T}_n)$. It is unclear whether these bounds hold uniformly for all $\boldsymbol q_n$ over the same $\mathcal{T}$: \begin{question} Do there exist an \textsl{OWL\,2\,QL}{} TBox $\mathcal{T}$ and CQs $\boldsymbol q_n$ such that any pure PE- or NDL-rewritings for $\boldsymbol q_n$ and $\mathcal{T}$ are of exponential size? \end{question} As we saw, both FO- and NDL-rewritings are more succinct than PE-rewritings. \begin{question} What is the relation between the size of FO- and NDL-rewritings? \end{question} \medskip \noindent {\bf Acknowledgments.} We thank the anonymous referees at ICALP 2012 for their constructive feedback and suggestions. This paper was supported by the U.K.~EPSRC grant EP/H05099X.
1202.4749
\section{Introduction} Exchangeability is a basic distributional symmetry in probability. It means that the distribution of a sequence of random variables is invariant under finite permutations of these random variables. The de Finetti theorem characterizes an exchangeable infinite sequence of random variables to be identically distributed and conditionally independent over its tail $\sigma$-algebra. An alternative formulation of this famous theorem is that the law of such a sequence is a mixture of infinite product measures, where the mixture is specified by a certain random probability measure (see e.g.~\cite{Ka05}). Hewitt and Savage~\cite{HS55} cast this theorem in terms of symmetric measures on infinite Cartesian products of a compact Hausdorff space $S$, and inspired St{\o}rmer's transfer~\cite{St69} of their approach to a C*-algebraic setting: symmetric states on an infinite tensor product of a C*-algebra with itself are identified as a mixture of infinite product states. In analogy with the classical situation, here the mixture is specified by a probability measure on the state space of the C*-algebra. Recently a noncommutative de Finetti theorem was discovered by K\"ostler and Speicher~\cite{KSp09} in the realm of Voiculescu's free probability theory (see~\cite{VDN92}). Replacing random variables by operators and the role of permutations by the natural coaction of Wang's quantum permutations~\cite{W98}, the notion of a quantum exchangeable sequence was introduced in a framework of noncommutative probability spaces. In close analogy to the classical case, a quantum exchangeable infinite sequence is characterized as being identically distributed and `conditionally free' over its tail algebra. Here `conditional freeness' means `freeness with amalgamation'. In this note we have a closer look at tail algebras of quantum exchangeable sequences. 
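For orientation, the classical de Finetti picture can be checked by brute force in a toy case (the following sketch is purely illustrative; all names and parameter values are ours): under a mixture of two i.i.d. Bernoulli product measures, joint moments are invariant under permutations of the indices, while the variables are correlated globally and become i.i.d. only conditionally on the mixing variable.

```python
# De Finetti warm-up sketch: joint moments of X_1,...,X_4 under the mixture
# (1/2) Bernoulli(p)^4 + (1/2) Bernoulli(q)^4, computed by enumerating all
# outcomes.  Exchangeability = invariance under permuting the indices.
from itertools import permutations, product

def joint_moment(indices, p, q, n=4):
    """E[X_{i_1} * ... * X_{i_k}] by brute-force enumeration (indices in 1..n)."""
    total = 0.0
    for r in (p, q):                      # value of the mixing variable
        for omega in product((0, 1), repeat=n):
            w = 0.5                       # weight of this mixture branch
            for x in omega:
                w *= r if x == 1 else (1 - r)
            if all(omega[i - 1] == 1 for i in indices):
                total += w
    return total

p, q = 0.3, 0.8
base = (1, 1, 2, 3)
for perm in permutations(base):           # exchangeability of joint moments
    assert abs(joint_moment(perm, p, q) - joint_moment(base, p, q)) < 1e-12

# The mixture is not a product measure: the variables are correlated ...
assert joint_moment((1, 2), p, q) > joint_moment((1,), p, q) ** 2
# ... yet conditionally on the mixing value r they are i.i.d. Bernoulli(r).
```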
Our main results are: \begin{itemize} \item[$\diamond$] Any countably generated von Neumann algebra may appear as the tail algebra of a quantum exchangeable sequence of selfadjoint operators (see Theorem~\ref{thm:givenN}). \item[$\diamond$] The tail algebra of a quantum exchangeable sequence lies in the center of the von Neumann algebra generated by the sequence if and only if the corresponding state is a limit of convex combinations of free product states (see Proposition~\ref{prop:centraltail}). \item[$\diamond$] There exist quantum exchangeable sequences of projections generating a finite factor and whose tail algebra is nontrivial and abelian (see Example~\ref{example}). Thus, the corresponding state is not a limit of convex combinations of free product states. \end{itemize} Altogether these results show that, as is to be expected, the structure of tail algebras of quantum exchangeable infinite sequences has a much higher complexity than that of tail $\sigma$-algebras of exchangeable sequences in probability theory. \smallskip \noindent {\bf Acknowledgment:} Most of this research was conducted while the authors were at the Erwin Schr\"od\-inger Institute during the program on Bialgebras in Free Probability; they would like to thank the Institute and the organizers of the program. They also thank an anonymous referee for helpful comments. \section{Preliminaries} A {\em noncommutative probability space} is a pair $(A,\phi)$, where $A$ is a unital algebra over the complex numbers and $\phi$ is a linear functional on $A$ sending $1$ to $1$, and the elements of $A$ are called {\em noncommutative random variables}. A {\em W$^*$--noncommutative probability space} is one in which $A$ is a von Neumann algebra and $\phi$ is a normal state. In this paper, we only consider W$^*$--noncommutative probability spaces $(A,\phi)$ where the state $\phi$ is faithful.
The {\em Noncommutative de Finetti Theorem} of K\"ostler and Speicher~\cite{KSp09} states that a sequence $(x_i)_{i=1}^\infty$ of noncommutative random variables in a W$^*$--noncommutative probability space is quantum exchangeable if and only if the sequence is free over the tail algebra of the sequence, with respect to the $\phi$--preserving conditional expectation. Quantum exchangeability of the sequence is defined in terms of a natural coaction of the quantum permutation group of Wang~\cite{W98}. (See~\cite{KSp09} for more on this). The {\em tail algebra} is the von Neumann algebra $\bigcap_{n=1}^\infty W^*(\{x_i\mid i\ge n\})$ and the $\phi$--preserving conditional expectation $E$ from $W^*(\{x_i\mid i\ge1\})$ onto the tail algebra is guaranteed to exist. This was proved in~\cite{K10}; see also Proposition 4.2 of~\cite{KSp09}. The following is the ``hard part'' of the noncommutative de Finetti theorem (Thm.\ 1.1 of~\cite{KSp09}). \begin{thm}[\cite{KSp09}]\label{thm:amalgfp} Let $(x_i)_{i=1}^\infty$ be a quantum exchangeable sequence in the W$^*$--noncommutative probability space $(\Mcal,\phi)$, where $\phi$ is faithful, and suppose $\Mcal$ is generated by $\{x_i\mid i\ge1\}$. Let $\Nc$ be the tail algebra and let $E:\Mcal\to\Nc$ be the $\phi$--preserving conditional expectation onto $\Nc$. Let $\Ac_i$ be the von Neumann subalgebra of $\Mcal$ generated by $\Nc\cup\{x_i\}$. Then the family $(\Ac_i)_{i\ge1}$ is free with amalgamation over $\Nc$, with respect to $E$ and, therefore, \begin{equation}\label{eq:Mfp} (\Mcal,E)\cong(*_\Nc)_{i=1}^\infty(\Ac_i,E_i) \end{equation} is isomorphic to the W$^*$--amalgamated free product of countably infinitely many copies of $(\Ac_1,E_1)$, where $E_i:\Ac_i\to\Nc$ is the restriction to $\Ac_i$ of $E$.
\end{thm} The next result is the ``easy part'' of the noncommutative de Finetti theorem (Prop.\ 3.1 of~\cite{KSp09}): \begin{prop}[\cite{KSp09}]\label{prop:free} Let \[ (\Mcal,E)\cong(*_\Bc)_{i=1}^\infty(\Ac_i,E_i) \] be an amalgamated free product of von Neumann algebras, where every $E_i$ is faithful and normal. Let $\phi$ be any normal, faithful state on $\Bc$ and denote also by $\phi$ the state $\phi\circ E$ on $\Mcal$. If $x_i\in\Ac_i$ is such that the moments $E_i(x_ib_1x_ib_2\cdots b_{n-1}x_i)$ for all $n\in\Nats$ and $b_1,\ldots b_{n-1}\in\Bc$ are independent of $i$, then the sequence $(x_i)_{i=1}^\infty$ in the noncommutative probability space $(\Mcal,\phi)$ is quantum exchangeable. \end{prop} In this note, we will prove (Theorem~\ref{thm:givenN}) that every countably generated von Neumann algebra $\Nc$ with specified normal faithful state $\phi$ can arise as the tail algebra of a quantum exchangeable sequence in a W$^*$--noncommutative probability space $(\Mcal,\phi_\Mcal)$, in such a way that the restriction of $\phi_\Mcal$ to $\Nc$ is $\phi$. We also investigate the question, given a quantum exchangeable sequence $(x_i)_{i=1}^\infty$ in a W$^*$--noncommutative probability space $(\Mcal,\phi)$, of when the distribution of $(x_i)_{i=1}^\infty$ under $\phi$ is a limit of convex combinations of equidistributed free product states. Naturally enough, this occurs if and only if the tail algebra of the sequence commutes with all $x_i$. See Section~\ref{sec:fps} for more details. This may be compared to St\o{}rmer's result~\cite{St69}, that in the commutative context, all symmetric states are limits of convex combinations of tensor powers. \section{Examples of tail algebras} Now we investigate the content of the tail algebra in the construction of Proposition~\ref{prop:free}. 
\begin{prop}\label{prop:tail} In the setting of Proposition~\ref{prop:free}, the tail algebra $\Nc$ of the sequence $(x_i)_{i=1}^\infty$ is a von Neumann subalgebra of $\Bc$ and is, in fact, the smallest unital von Neumann subalgebra $\Bc_0$ of $\Bc$ satisfying \begin{equation}\label{eq:Bc0} E(b_0x^{k_1}b_1x^{k_2}\cdots b_{n-1}x^{k_n}b_n)\in\Bc_0. \end{equation} whenever $n\in\Nats$, $k_1,\ldots,k_n\in\Nats$, $b_0,b_1,\ldots,b_n\in\Bc_0$ and $x=x_i$. (Note that the above expression is independent of the choice of $i\in\Nats$.) \end{prop} \begin{proof} If we consider the structure of $L^2(\Mcal,\phi)$, we easily see that the tail algebra $\Nc$ is a von Neumann subalgebra of $\Bc$. Indeed, from Voiculescu's free product construction (see~\cite{BD01} for more detailed discussion in the case of von Neumann algebras), \begin{equation}\label{eq:fpHil} L^2(\Mcal,\phi)=L^2(\Bc,\phi)\oplus \bigoplus_{\substack{n\ge1 \\ i_1,\ldots,i_n\ge1 \\ i_j\ne i_{j+1}}} F_{i_1}\otimes_\Bc F_{i_2}\otimes_\Bc\cdots\otimes_\Bc F_{i_n}\otimes_{\pi_\phi}L^2(\Bc,\phi), \end{equation} where $F_i$ is the Hilbert $\Bc$-module $L^2(\Ac_i,E_i)\ominus\Bc$ and $\pi_\phi$ is the Gelfand--Naimark--Segal representation of $\Bc$ on $L^2(\Bc,\phi)$. In particular, the image in $L^2(\Mcal,\phi)$ of the von Neumann algebra generated by $\{x_i\mid i\ge N\}$ is contained in \[ L^2(\Bc,\phi)\oplus \bigoplus_{\substack{n\ge1 \\ i_1,\ldots,i_n\ge N \\ i_j\ne i_{j+1}}} F_{i_1}\otimes_\Bc F_{i_2}\otimes_\Bc\cdots\otimes_\Bc F_{i_n}\otimes_{\pi_\phi}L^2(\Bc,\phi) \] and the intersection of these spaces is $L^2(\Bc,\phi)$. We must, therefore, have $\Nc\subseteq\Bc$. Since $\Nc\subseteq\Bc\cap W^*(\{x_i\mid i\ge1\})$, we have \begin{equation* \Nc\subseteq E\big(W^*(\{x_i\mid i\ge1\})\big). \end{equation*} Thus, we have \begin{equation}\label{eq:Exs} \Nc\subseteq W^*(\{E(x_{i_1}x_{i_2}\cdots x_{i_n})\mid n\ge1,\,i_1,\ldots,i_n\ge1\}). 
\end{equation} To see that we have equality in~\eqref{eq:Exs}, take $j_1,\ldots,j_N\ge1$ and let $M_1=\min(j_1,\ldots,j_N)$ and $M_2=\max(j_1,\ldots,j_N)$. Then $x_{j_1}x_{j_2}\cdots x_{j_N}=b+y$, where $b=E(x_{j_1}x_{j_2}\cdots x_{j_N})$ and where the element $\yh$ of $L^2(\Mcal,\phi)$ corresponding to $y$ belongs to \[ \bigoplus_{\substack{1\le n\le N \\ M_1\le i_1,\ldots,i_n\le M_2 \\ i_j\ne i_{j+1}}} F_{i_1}\otimes_\Bc F_{i_2}\otimes_\Bc\cdots\otimes_\Bc F_{i_n}\otimes_{\pi_\phi}L^2(\Bc,\phi). \] Since the sequence $(x_i)_{i\ge1}$ is quantum exchangeable, it is also exchangeable, i.e., the joint moments of this sequence are invariant under arbitrary permutations of $\Nats$. Thus, for each $p\ge0$ we have $E(x_{j_1+p}x_{j_2+p}\cdots x_{j_N+p})=E(x_{j_1}x_{j_2}\cdots x_{j_N})$ and, therefore, $x_{j_1+p}x_{j_2+p}\cdots x_{j_N+p}=b+y_p$ where \begin{equation}\label{eq:+p} \yh_p\in\bigoplus_{\substack{1\le n\le N \\ M_1+p\le i_1,\ldots,i_n\le M_2+p \\ i_j\ne i_{j+1}}} F_{i_1}\otimes_\Bc F_{i_2}\otimes_\Bc\cdots\otimes_\Bc F_{i_n}\otimes_{\pi_\phi}L^2(\Bc,\phi). \end{equation} Now since the subspaces~\eqref{eq:+p} corresponding to values of $p$ that differ by more than $M_2-M_1$ are orthogonal to each other, we see that for each $q\in\Nats$, the ergodic averages \[ \frac1K\sum_{p=q}^{q+K-1}x_{j_1+p}x_{j_2+p}\cdots x_{j_N+p} \] converge in $L^2(\Mcal,\phi)$ to $b$ as $K\to\infty$. So $b\in W^*(\{x_i\mid i\ge q\})$ for all $q\ge1$. Letting $q\to\infty$, we get $b=E(x_{j_1}x_{j_2}\cdots x_{j_N})\in\Nc$. This proves \begin{equation}\label{eq:Exs=} \Nc=W^*(\{E(x_{i_1}x_{i_2}\cdots x_{i_n})\mid n\ge1,\,i_1,\ldots,i_n\ge1\}). \end{equation} It remains to show $\Nc=\Bc_0$. Using~\eqref{eq:Exs=}, we have $E(b_0x^{k_1}b_1x^{k_2}\cdots b_{n-1}x^{k_n}b_n)\in\Nc$ whenever $n\in\Nats$, $k_1,\ldots,k_n\in\Nats$, $b_0,b_1,\ldots,b_n\in\Nc$ and $x=x_i$. By the characterization of $\Bc_0$, this implies $\Bc_0\subseteq\Nc$.
So it suffices to see that we have $E(x_{i_1}x_{i_2}\cdots x_{i_n})\in\Bc_0$ for all $n\ge1$ and $i_1,\ldots,i_n\ge1$. However, this follows by considering how mixed moments of the free variables $x_1,x_2,\ldots$ can be evaluated in terms of the moments of the individual $x_i$. Speicher's operator--valued cumulants~\cite{Sp98} can be used to give a careful proof of this fact. Indeed, using the expression of the cumulants in terms of moments and the M\"obius function and using the defining property~\eqref{eq:Bc0} of $\Bc_0$, we see that for each individual $x=x_i$, the operator--valued cumulant $\kappa[xb_1,xb_2,\ldots,xb_{n-1},x]$ belongs to $\Bc_0$ for every $b_1,\ldots,b_{n-1}\in\Bc_0$. Now using the moment-cumulant formula for $E(x_{i_1}x_{i_2}\cdots x_{i_n})$ and the vanishing of mixed cumulants, we obtain $E(x_{i_1}x_{i_2}\cdots x_{i_n})\in\Bc_0$. \end{proof} \begin{lemma}\label{lem:aA} Let $\Nc$ be a countably generated von Neumann algebra equipped with a normal faithful state $\phi$. Then there is a von Neumann algebra $\Ac$ containing $\Nc$ as a unital von Neumann subalgebra and possessing a $\phi$--preserving conditional expectation $E:\Ac\to\Nc$ onto $\Nc$, and there is a self--adjoint element $a\in\Ac$ with the property that $\{E(a^k)\mid k\in\Nats\}$ generates $\Nc$ as a von Neumann algebra. \end{lemma} \begin{proof} One easily sees (using the spectral theorem) that the von Neumann algebra $\Nc$ is also generated by a finite or countable collection $(p_i)_{i\in I}$ of projections. Let $\Ac=\bigoplus_{i\in I}\Nc$, with $\Nc\subseteq\Ac$ identified with the constant sequences and with the conditional expectation $E$ given by some strictly positive weights $(\alpha_i)_{i\in I}$ that sum to $1$: \[ E((x_i)_{i\in I})=\sum_i\alpha_ix_i. \] We let $a=(\beta_ip_i)_{i\in I}\in\Ac$ for some bounded family $\beta_i\in\Reals$; thus, we have \[ E(a^k)=\sum_i\alpha_i\beta_i^kp_i.
\] If $I$ is finite, then by choosing all the $\beta_i$ to be distinct, the determinant of the matrix $(\beta_i^j)_{i\in I,\,0\le j<|I|}$, being a Vandermonde determinant, is seen to be nonzero. Thus, we recover $\{p_i\mid i\in I\}$ by taking linear combinations of $(E(a^k))_{0\le k<|I|}$. Suppose $I$ is infinite, identify it with $\Nats_0$ and let $\beta_i=2^{-i}$. Let $\Dc=W^*(\{E(a^k)\mid k\in\Nats\})$. Then \[ \lim_{k\to\infty}E(a^k)=\lim_{k\to\infty}\left(\alpha_0p_0+\sum_{i=1}^\infty\alpha_i2^{-ki}p_i\right)=\alpha_0p_0, \] where the convergence is in the norm topology, and $p_0\in\Dc$. Similarly, \begin{gather*} \lim_{k\to\infty}2^k(E(a^k)-\alpha_0p_0)=\alpha_1p_1, \\ \lim_{k\to\infty}2^{2k}(E(a^k)-\alpha_0p_0-2^{-k}\alpha_1p_1)=\alpha_2p_2 \end{gather*} and so on. Thus, we get $p_0,p_1,p_2,\ldots\in\Dc$ and $\Dc$ is all of $\Nc$. \end{proof} \begin{thm}\label{thm:givenN} Let $\Nc$ be a countably generated von Neumann algebra with a normal, faithful state $\phi$. Then there is a quantum exchangeable sequence $x_1,x_2,\ldots$ in some W$^*$--noncommutative probability space $(\Mcal,\phi_\Mcal)$ whose tail algebra is a copy of $\Nc$ in $\Mcal$, with the restriction of $\phi_\Mcal$ to $\Nc$ being $\phi$. \end{thm} \begin{proof} Let $\Ac\supseteq\Nc$ and $a\in\Ac$ be as in Lemma~\ref{lem:aA}. For each $j\in\Nats$, let $\Ac_j$ be a copy of $\Ac$ with the $\phi$--preserving conditional expectation onto $\Nc$ denoted by $E_j$. Let \[ (\Mcal,E)=(*_\Nc)_{j=1}^\infty(\Ac_j,E_j) \] be their amalgamated free product of von Neumann algebras and, of course, let $\phi_\Mcal=\phi\circ E$. Let $x_j$ be the copy of $a$ in the $j$th copy $\Ac_j$ of $\Ac$. By Proposition~\ref{prop:free}, $(x_j)_{j=1}^\infty$ is a quantum exchangeable sequence. By Lemma~\ref{lem:aA}, the set \begin{equation}\label{eq:amomset} \{E(x_j^k)\mid k\in\Nats\} \end{equation} generates all of $\Nc$. By Proposition~\ref{prop:tail}, the tail algebra of the sequence $(x_j)_{j=1}^\infty$ is equal to $\Nc$.
\end{proof} \section{Free product states} \label{sec:fps} Given a C$^*$--algebra $A$ with state $\phi$ and with self--adjoint elements $x_i\in A$ ($i\in I$), we wish to regard the variables $x_i$ abstractly, independently of their realization in $A$. Consider the $*$--algebra $\Cpx\langle X\rangle=\Cpx\langle X_i\mid i\in I\rangle$ of polynomials in noncommuting variables $X_i$, with the involution defined so that $X_i^*=X_i$, and consider the unital $*$--algebra representation $\alpha:\Cpx\langle X\rangle\to A$ that sends $X_i$ to $x_i$. \begin{defi}\label{def:fps} By a {\em free product state} on $\Cpx\langle X\rangle$, we will mean a linear functional $\psi$ on $\Cpx\langle X\rangle$ such that $\psi(1)=1$ and the variables $(X_i)_{i\in I}$ are free with respect to $\psi$. If, in addition, for all $k$ the moments $\psi(X_i^k)$ are independent of $i$, then we say $\psi$ is an {\em equidistributed free product state} on $\Cpx\langle X\rangle$. We will say that a linear functional $\phi$ on $\Cpx\langle X\rangle$ is {\em a limit of convex combinations of uniformly bounded free product states} (respectively, of uniformly bounded equidistributed free product states) if for some choice of constants $C_i>0$, the functional $\phi$ is the limit in the topology of pointwise convergence on $\Cpx\langle X\rangle$ of convex combinations of free product states (respectively, of equidistributed free product states) $\psi$ on $\Cpx\langle X\rangle$ that satisfy $|\psi(X_i^k)|\le C_i^k$ for all $i\in I$ and $k\in\Nats$. \end{defi} We will need a few lemmas about freeness. Here is an easy result about freeness over a subalgebra of the center. \begin{lemma}\label{lem:Dcenter} Let $D=C(\Omega)$ be a commutative, unital C$^*$--algebra and let $A$ be a unital C$^*$--algebra with $D$ embedded as a unital C$^*$--subalgebra of the center of $A$, and suppose $E:A\to D$ is a conditional expectation. 
Let $I$ be a set and suppose for every $i\in I$ there is a C$^*$--subalgebra $A_i\subseteq A$ with $D\subseteq A_i$. For each $\omega\in \Omega$, consider the character $\ev_\omega:f\mapsto f(\omega)$ of $D$ and let $\rho_\omega=\ev_\omega\circ E$. Then the family $(A_i)_{i\in I}$ is free (over $D$) with respect to $E$ if and only if for every $\omega\in\Omega$, the family $(A_i)_{i\in I}$ is free (over $\Cpx$) with respect to $\rho_\omega$. \end{lemma} \begin{proof} First suppose the family is free with respect to $E$, fix $\omega\in\Omega$ and let us show freeness with respect to $\rho_\omega$. Suppose $a_j\in A_{i_j}\cap\ker\rho_\omega$ for $j=1,\ldots,n$ and $i_j\ne i_{j+1}$ for all $1\le j<n$. We must show $\rho_\omega(a_1a_2\cdots a_n)=0$. Let $\eps>0$. Since $E(a_j)\in C(\Omega)$ vanishes at $\omega$, by Tietze's extension theorem there is $f\in C(\Omega)$ such that $f(\omega)=1$, $\|f\|_\infty=1$ and $\|E(a_j)f\|<\eps$ for all $j$. Let $f_j=E(a_j)f$. Then freeness with respect to $E$ implies $E((fa_1-f_1)(fa_2-f_2)\cdots (fa_n-f_n))=0$, so we have \[ \rho_\omega((fa_1-f_1)(fa_2-f_2)\cdots (fa_n-f_n))=0. \] But if $M=1+\eps+\max_j\|a_j\|$, then \begin{multline*} |\rho_\omega((fa_1)(fa_2)\cdots(fa_n))|= \\ =|\rho_\omega((fa_1)(fa_2)\cdots(fa_n))-\rho_\omega((fa_1-f_1)\cdots(fa_n-f_n))|\le nM^{n-1}\eps. \end{multline*} However, we have $\rho_\omega(a_1a_2\cdots a_n)=\rho_\omega((f^n)a_1a_2\cdots a_n)=\rho_\omega((fa_1)(fa_2)\cdots(fa_n))$, so letting $\eps\to0$ finishes the proof of freeness with respect to $\rho_\omega$. Now suppose the family is free with respect to $\rho_\omega$ for all $\omega$ and let us show freeness with respect to $E$. Suppose $a_j\in A_{i_j}\cap\ker E$ for $i_j$ as above and let $d=E(a_1a_2\cdots a_n)$. For every $\omega$ and every $j$, we have $\rho_\omega(a_j)=0$, so by hypothesis, $d(\omega)=\rho_\omega(a_1a_2\cdots a_n)=0$. So $d=0$. 
\end{proof} Here is an easy result about conditional expectations and Gelfand--Naimark--Segal (GNS) constructions, whose proof we include for convenience. \begin{lemma}\label{lem:Ephi} Let $A$ be a unital C$^*$--algebra and $B\subseteq A$ a unital C$^*$--subalgebra. Suppose $E:A\to B$ is a conditional expectation onto $B$. Suppose $\phi$ is a state on $A$ such that $\phi\circ E=\phi$. Let $(\pi_\phi,\HEu_\phi)$ be the GNS representation of $A$ arising from $\phi$ and let $a\mapsto\ah$ denote the linear mapping $A\to\HEu_\phi$ arising in the GNS construction. \begin{enumerate}[(i)] \item Then there is a self--adjoint projection $P_\phi$ on $\HEu_\phi$ such that $E(x)\hat{\;}=P_\phi\xh$ and \begin{equation}\label{eq:EP} \pi_\phi(E(x))P_\phi=P_\phi\pi_\phi(x)P_\phi \end{equation} for all $x\in A$; \item If $B$ lies in the center of $A$, then there is a conditional expectation $E_\phi:\pi_\phi(A)\to\pi_\phi(B)$ satisfying $E_\phi(\pi_\phi(x))=\pi_\phi(E(x))$ for all $x\in A$. \end{enumerate} \end{lemma} \begin{proof} If $b\in B$, $x\in A$ and $E(x)=0$, then \[ \langle\xh,\bh\rangle_\phi:=\phi(b^*x)=\phi(E(b^*x))=\phi(b^*E(x))=0. \] Thus, given $a\in A$ and writing $a=E(a)+(a-E(a))$ we get for the norm in $\HEu_\phi$ \[ \|\ah\|_\phi^2=\|E(a)\hat{\;}\|_\phi^2+\|(a-E(a))\hat{\;}\|_\phi^2 \] and the map $\ah\mapsto E(a)\hat{\;}$ is an idempotent, linear map that is contractive with respect to $\|\cdot\|_\phi$ and, hence, extends to a self--adjoint projection $P_\phi$ from $\HEu_\phi$ onto $\overline{\{\bh\mid b\in B\}}$. For $a,x\in A$ we have \[ \pi_\phi(E(x))P_\phi\ah=(E(x)E(a))\hat{\;}=E(xE(a))\hat{\;}=P_\phi(xE(a))\hat{\;}=P_\phi\pi_\phi(x)P_\phi\ah, \] which proves~\eqref{eq:EP}. For~(ii), assume $B$ is in the center of $A$. 
Note that if $b\in B$, then for every $a\in A$ we have \begin{align*} \|\pi_\phi(b)\ah\|_\phi^2&=\phi(a^*b^*ba)=\phi(E(a^*b^*ba))=\phi(E(a^*a)b^*b)= \\ &=\phi(b^*E(a^*a)b)=\|\pi_\phi(b)(E(a^*a)^{1/2})\hat{\;}\|_\phi^2, \end{align*} so if $\pi_\phi(b)P_\phi=0$, then $\pi_\phi(b)=0$. Hence, the $*$--homomorphism $\pi_\phi(B)\ni\pi_\phi(b)\mapsto\pi_\phi(b)P_\phi$ is isometric. Thus, we have, for $x\in A$, \[ \|\pi_\phi(E(x))\|=\|\pi_\phi(E(x))P_\phi\|=\|P_\phi\pi_\phi(x)P_\phi\|\le\|\pi_\phi(x)\| \] and $\pi_\phi(x)\mapsto\pi_\phi(E(x))$ is a contractive, linear, idempotent map from $\pi_\phi(A)$ onto $\pi_\phi(B)$; this is the desired conditional expectation $E_\phi$. \end{proof} \begin{remark} In the above lemma, it is not true that faithfulness of $E$ implies faithfulness of $E_\phi$. For example, let $A=M_2(C([0,1]))$, let $B$ be the center of $A$, identified with $C([0,1])$, and let $E:A\to B$ be the conditional expectation given by, for $a=(a_{ij})_{1\le i,j\le 2}\in A$ with $a_{ij}\in C([0,1])$, \[ E(a)(t)=ta_{11}(t)+(1-t)a_{22}(t),\qquad(t\in[0,1]). \] Then $E$ is faithful. Letting $\phi$ be the state on $A$ determined by $\phi=\phi\circ E$ and $\phi(b)=b(0)$ for $b\in B$, we find that $E_\phi$ is the state $M_2(\Cpx)\to\Cpx$ sending $\left(\begin{smallmatrix}1&0\\0&0\end{smallmatrix}\right)$ to $0$, so is not faithful. \end{remark} \begin{lemma}\label{lem:quotientfree} Let $A$ be a unital C$^*$--algebra and $B$ a unital C$^*$--subalgebra of the center of $A$ with a conditional expectation $E:A\to B$. Let $I$ be a set and suppose for every $i\in I$, $A_i$ is a C$^*$--subalgebra of $A$ that contains $B$, and the family $(A_i)_{i\in I}$ is free (over $B$) with respect to $E$. Let $\phi$ be a state on $A$ satisfying $\phi\circ E=\phi$ and let $E_\phi:\pi_\phi(A)\to\pi_\phi(B)$ be the conditional expectation from Lemma~\ref{lem:Ephi}. Then the family $(\pi_\phi(A_i))_{i\in I}$ is free (over $\pi_\phi(B)$) with respect to $E_\phi$. 
\end{lemma} \begin{proof} We have $B=C(X)$ for some compact Hausdorff space $X$, and $\pi_\phi(B)=C(Y)$ for a closed subspace $Y$ of $X$, where the $*$--homomorphism $\pi_\phi\restrict_B:C(X)\to C(Y)$ sends a function to its restriction to $Y$. For $y\in Y$, let $\ev_y:\pi_\phi(B)\to\Cpx$ and $\ev_y^{(X)}:B\to\Cpx$ be the homomorphisms of evaluation at $y$. By Lemma~\ref{lem:Dcenter}, it will suffice to show that for every $y\in Y$, the family $(\pi_\phi(A_i))_{i\in I}$ is free (over $\Cpx$) with respect to $\ev_y\circ E_\phi$. However, we have \[ \ev_y\circ E_\phi\circ\pi_\phi=\ev^{(X)}_y\circ E. \] From Lemma~\ref{lem:Dcenter}, we have freeness of $(A_i)_{i\in I}$ with respect to $\ev^{(X)}_y\circ E$, and from this follows the freeness of $(\pi_\phi(A_i))_{i\in I}$ with respect to $\ev_y\circ E_\phi$. \end{proof} The following result shows that a state that is a limit of convex combinations of uniformly bounded free product states always arises from the situation of a free product with amalgamation over a subalgebra of the center. \begin{prop}\label{prop:limfreeprodst} Let $A$ be a C$^*$--algebra with faithful state $\phi$. Suppose $x_i\in A$ ($i\in I$), for $I$ countable, are self--adjoint elements that together generate $A$ as a C$^*$--algebra and so that $\phit:=\phi\circ\alpha$ is a limit of convex combinations of uniformly bounded free product states, where $\alpha:\Cpx\langle X\rangle\to A$ is the $*$--homomorphism given by $X_i\mapsto x_i$. Then there is a unital C$^*$--algebra $B$ with a unital C$^*$--subalgebra $D\subseteq Z(B)$ of the center of $B$ and with a faithful conditional expectation $E:B\to D$, and there is a unital $*$--homomorphism $\pi:A\to B$ and a faithful tracial state $\sigma=\sigma\circ E$ on $B$ so that $\phi=\sigma\circ\pi$ and the family $(\pi(x_i))_{i\in I}$ is free with respect to $E$. 
Furthermore, if $\phit$ is a limit of convex combinations of uniformly bounded equidistributed free product states with respect to $(x_i)_{i\in I}$, then $E:B\to D$ and $\pi:A\to B$ may be chosen so that for all $n$ and all $d_0,\ldots,d_n\in D$, the corresponding moment $E(d_0\pi(x_i)d_1\cdots\pi(x_i)d_n)$ of the variable $\pi(x_i)$ is independent of $i$. \end{prop} \begin{proof} We fix constants $C_i$ as in Definition~\ref{def:fps}. If $\psi$ is a free product state on $\Cpx\langle X\rangle$, then the usual GNS construction yields a $*$--representation $\pi_\psi:\Cpx\langle X\rangle\to B(\HEu_\psi)$ with $\|\pi_\psi(X_i)\|\le C_i$. Let $B_\psi\subseteq B(\HEu_\psi)$ be the unital C$^*$--algebra generated by the image of $\pi_\psi$ and denote also by $\psi$ the state on $B_\psi$ so that $\psi\circ\pi_\psi$ is the original functional $\psi$ on $\Cpx\langle X\rangle$. Of course, the condition that $\psi$ is a free product state on $\Cpx\langle X\rangle$ implies that the variables $(\pi_\psi(X_i))_{i\in I}$ are free in $B_\psi$ with respect to $\psi$. Moreover, $B_\psi$ is isomorphic to the free product over the scalars of the abelian C$^*$--algebras generated by the $\pi_\psi(X_i)$. Since the restrictions of $\psi$ to these abelian C$^*$--algebras are faithful, and since the free product of faithful states is faithful~\cite{D98} and since the free product of traces is a trace (see~\cite{VDN92}), it follows that $\psi$ is a faithful trace on $B_\psi$. By hypothesis, we may write $\phit$ as the limit of a net of convex combinations of free product states, and consider the set $F$ of all the free product states appearing with nonzero coefficients in these convex combinations. Let \[ \Bt=\prod_{\psi\in F}B_\psi=\{(b_\psi)_{\psi\in F}\mid b_\psi\in B_\psi,\,\sup_{\psi}\|b_\psi\|<\infty\} \] be the C$^*$--algebra direct product. Then $\betat:\Cpx\langle X\rangle\to\Bt$ defined by $\betat(p)=(\pi_\psi(p))$ is a $*$--representation. 
Let $\Dt\subseteq\Bt$ be the subalgebra consisting of all sequences of scalars, i.e.\ all $(\lambda_\psi1)_{\psi\in F}$ for $\lambda_\psi\in\Cpx$. Clearly, $\Dt\cong\ell^\infty(F)$ is a subalgebra of the center of $\Bt$, and the map $\Et:\Bt\to\Dt$ given by $\Et((b_\psi)_{\psi\in F})=(\psi(b_\psi))_{\psi\in F}$ is a conditional expectation that is faithful because each tracial state $\psi$ is faithful on $B_\psi$. The variables $(\betat(X_i))_{i\in I}$ are free with respect to $\Et$, because if for some $n\in\Nats$ and $i(1),\ldots,i(n)\in I$ with $i(j)\ne i(j+1)$, $p_j(X_{i(j)})$ is a polynomial in $X_{i(j)}$ with coefficients from $\Dt$ and $\Et(p_j(X_{i(j)}))=0$, then $\psi(p_j(X_{i(j)}))=0$ for every $\psi\in F$; by freeness $\psi(p_1(X_{i(1)})\cdots p_n(X_{i(n)}))=0$ for every $\psi\in F$, so also $\Et(p_1(X_{i(1)})\cdots p_n(X_{i(n)}))=0$. Note that $\Et$ has the property $\Et(xy)=\Et(yx)$ for all $x,y\in\Bt$, owing to the fact that each $\psi$ is a trace. In the above construction, if all free product states $\psi\in F$ are equidistributed, then the moments \begin{equation}\label{eq:Etmoms} \Et(d_0\betat(X_i)d_1\cdots\betat(X_i)d_n) \end{equation} of the variables $\betat(X_i)$ are independent of $i$. To a convex combination $\rho=\sum t_\psi\psi$ (with finite support) of elements of $F$, we associate the state $\rhoh$ on $\Dt$ given by the corresponding weighted average, $\rhoh:(\lambda_\psi1)_{\psi\in F}\mapsto\sum t_\psi\lambda_\psi$. Let $(\rho_j)_{j\in J}$ denote a net of convex combinations of free product states on $\Cpx\langle X\rangle$ that converges (in the topology of pointwise convergence on $\Cpx\langle X\rangle$) to $\phit$. Replacing this net by a subnet, if necessary, we may without loss of generality assume that the corresponding net $\rhoh_j$ converges in the weak$^*$--topology on $\Dt^*$ to a state $\sigmat$ on $\Dt$. Then we have $\phit=\sigmat\circ\Et\circ\betat$. We are almost done, except that $\sigmat\circ\Et$ need not be faithful. 
Let $\pi_{\sigmat\circ\Et}$ denote the GNS representation associated to the tracial state $\sigmat\circ\Et$ on $\Bt$, and let $B=\pi_{\sigmat\circ\Et}(\Bt)$ and $D=\pi_{\sigmat\circ\Et}(\Dt)$. Then the corresponding state $\sigma$ on $B$ is a faithful trace. By Lemma~\ref{lem:Ephi}, there is a conditional expectation $E:B\to D$ so that \[ E(\pi_{\sigmat\circ\Et}(x))=\pi_{\sigmat\circ\Et}(\Et(x)),\qquad(x\in\Bt) \] and by Lemma~\ref{lem:quotientfree} and freeness of $(\betat(X_i))_{i\in I}$ with respect to $\Et$, the family \begin{equation}\label{eq:pbtfam} (\pi_{\sigmat\circ\Et}(\betat(X_i)))_{i\in I} \end{equation} is free (over $D$) with respect to $E$. Let $\beta=\pi_{\sigmat\circ\Et}\circ\betat:\Cpx\langle X\rangle\to B$. Then $\phit=\phi\circ\alpha=\sigma\circ\beta$. We will now define a $*$--homomorphism $\pi:A\to B$ so that all triangles in the diagram \begin{equation* \xymatrix{ \Cpx\langle X\rangle \ar[r]^{\alpha} \ar [d]_{\beta} &A \ar[dl]_{\pi} \ar[d]^{\phi} \\ B\ar[r]_{\sigma} & \Cpx } \end{equation*} commute. Indeed, since $\phi$ and $\sigma$ are faithful, we have \begin{align*} \|a\|&=\limsup_{n\to\infty}|\phi((a^*a)^n)|^{1/2n},\qquad(a\in A) \\ \|b\|&=\limsup_{n\to\infty}|\sigma((b^*b)^n)|^{1/2n},\qquad(b\in B). \end{align*} But this implies $\|\alpha(p)\|=\|\beta(p)\|$ for all $p\in\Cpx\langle X\rangle$. So the $*$--homomorphism defined on the image of $\alpha$ by $\alpha(p)\mapsto\beta(p)$ is isometric and extends to an isometric $*$--homomorphism $\pi:A\to B$, as required. Since $\pi(x_i)=\beta(X_i)$, by freeness of the family~\eqref{eq:pbtfam} we have that $(\pi(x_i))_{i\in I}$ is free with respect to $E$. In the case that all free product states are equidistributed, the observation above regarding the moments~\eqref{eq:Etmoms} of $\betat(X_i)$ with respect to $\Et$ implies that the moments of $\pi(x_i)$ with respect to $E$ are independent of $i$. 
\end{proof} \begin{prop}\label{prop:centraltail} Let $(x_i)_{i\in I}$ be quantum exchangeable random variables in $(\Mcal,\phi)$, suppose $\Mcal$ is generated by $\{x_i\mid i\in I\}$ and let $\alpha:\Cpx\langle X\rangle\to\Mcal$ be the $*$--homomorph\-ism given by $\alpha(X_i)=x_i$. Then $\phi\circ\alpha$ is a limit of convex combinations of uniformly bounded equidistributed free product states with respect to $(x_i)_{i\in I}$ if and only if the tail algebra $\Nc$ lies in the center of $\Mcal$. \end{prop} \begin{proof} Suppose the tail algebra lies in the center of $\Mcal$. Then by Theorem~\ref{thm:amalgfp}, $\Mcal$ is the free product with amalgamation over the tail algebra $\Nc$, and $\phi=\phi\restrict_\Nc\circ E$, where $E:\Mcal\to\Nc$ is the $\phi$--preserving conditional expectation with respect to which the algebras $\Ac_i=W^*(\Nc\cup\{x_i\})$ are free over $\Nc$. Regarding $\Nc$ as a commutative C$^*$--algebra, by the Gelfand theorem, we have $\Nc\cong C(\Omega)$. Then, by a classical result, every state on $C(\Omega)$ lies in the closed convex hull of the set of point evaluation maps $\{\ev_\omega\mid\omega\in\Omega\}$. By Lemma~\ref{lem:Dcenter}, every functional $\ev_\omega\circ E\circ\alpha$ is an equidistributed free product state on $\Cpx\langle X\rangle$ and clearly the boundedness criterion is satisfied with constants $C_i=\|x_i\|$. Taking convex combinations of the $\ev_\omega$ that approximate $\phi\restrict_\Nc$, we easily see that $\phi\circ\alpha$ is a limit of convex combinations of equidistributed free product states. Conversely, suppose that $\phi\circ\alpha$ is a limit of convex combinations of uniformly bounded equidistributed free product states. Consider the C$^*$--subalgebra $C^*(\{x_i\mid i\in I\})$ of $\Mcal$ generated by the $x_i$. 
By Proposition~\ref{prop:limfreeprodst}, there is a unital C$^*$--algebra $B$ and a unital subalgebra $D$ of the center of $B$ with a faithful conditional expectation $E:B\to D$ and a unital, injective $*$--homomorphism $\pi:C^*(\{x_i\mid i\in I\})\to B$ so that the elements $(\pi(x_i))_{i\in I}$ are free with respect to $E$, and so that the moments of $\pi(x_i)$ are independent of $i$; furthermore, there is a faithful state $\sigma$ on $D$ so that $\sigma\circ E\circ\pi=\phi\restrict_{C^*(\{x_i\mid i\in I\})}$. Let $\Afr$ be the C$^*$--subalgebra of $B$ generated by $D\cup\{\pi(x_i)\mid i\in I\}$. Then $\Afr$ is an amalgamated free product of the C$^*$--algebras $C^*(D\cup\{\pi(x_i)\})$, with amalgamation over $D$. We take the Hilbert space representation $\rho$ of $\Afr$ that is the GNS construction for the restriction of the state $\sigma\circ E$ to $\Afr$. The strong--operator--topology closure of $\rho(\Afr)$ is a von Neumann algebra, $\Qc$, that is isomorphic to a free product \begin{equation}\label{eq:overDc} (*_\Dc)_{i\in I}(W^*(\Dc\cup\{\rho(\pi(x_i))\}),\overline{E}) \end{equation} with amalgamation over the strong--operator--topology closure $\Dc$ of $\rho(D)$. Taking strong--operator--topology limits (using Kaplansky's density theorem), we easily see that $\Dc$ lies in the center of $\Qc$. Since $\sigma\circ E$ and $\phi$ are faithful, the composition $\rho\circ\pi$, when compressed to a Hilbert subspace, equals the GNS construction of the restriction of $\phi$ to $C^*(\{x_i\mid i\in I\})$. Therefore, this mapping of $C^*(\{x_i\mid i\in I\})$ into the strong--operator--topology closure of $\rho(\Afr)$ is canonically isomorphic to the inclusion of $C^*(\{x_i\mid i\in I\})$ in $\Mcal$, and we may regard $\Mcal$ as embedded in the amalgamated free product~\eqref{eq:overDc}. By Proposition~\ref{prop:tail}, the tail algebra of $\{x_i\mid i\in I\}$ lies in $\Dc$. Thus, the tail algebra is in the center of $\Mcal$. 
\end{proof} The following easy example shows that the tail algebra can be commutative without lying in the center of the algebra generated by the quantum exchangeable sequence. \begin{example}\label{example} Let $\Bc=\Cpx\oplus\Cpx$ embed into $M_2(\Cpx)$ as the diagonal matrices. For each $i\in\Nats$, let $\Ac_i$ be a copy of $M_2(\Cpx)$ and let $E_i:\Ac_i\to\Bc$ be the conditional expectation taking a matrix to its diagonal. Let \[ (\Mcal,E)=(*_\Bc)_{i=1}^\infty(\Ac_i,E_i) \] be the amalgamated free product of von Neumann algebras. We note that $\Mcal$ is easily seen to be isomorphic to the free group factor $L(\mathbf{F}_\infty)$. Let $\psi$ be any faithful state on $\Bc$. Then $\phi=\psi\circ E$ is a normal faithful state on $\Mcal$. Fix $0<t<1/2$ and let $x_i$ be the copy of the projection $\left(\begin{smallmatrix} t & \sqrt{t(1-t)} \\ \sqrt{t(1-t)} & 1-t \end{smallmatrix}\right)$ in $\Ac_i$. By Proposition~\ref{prop:free}, the sequence $(x_i)_{i=1}^\infty$ is quantum exchangeable. Applying Proposition~\ref{prop:tail}, we see that the tail algebra $\Nc$ of the sequence is $\Bc\cong\Cpx\oplus\Cpx$ and, incidentally, the von Neumann algebra generated by the sequence $\{x_i\mid i\in\Nats\}$ is all of $\Mcal$. \end{example} \begin{bibdiv} \begin{biblist} \bib{BD01}{article}{ author={Blanchard, Etienne F.}, author={Dykema, Ken}, title={Embeddings of reduced free products of operator algebras}, journal={Pacific J. Math.}, volume={199}, year={2001}, pages={1--19} } \bib{D98}{article}{ author={Dykema, Ken}, title={Faithfulness of free product states}, journal={J. Funct. Anal.}, volume={154}, date={1998}, pages={323--329} } \bib{HS55}{article}{ author={Hewitt, Edwin}, author={Savage, Leonard J.}, title={Symmetric measures on Cartesian products}, journal={Trans. Amer. Math. 
Soc.}, volume={80}, date={1955}, pages={470--501}, } \bib{Ka05}{book}{ author = {Kallenberg, Olaf}, title = {Probabilistic Symmetries and Invariance Principles}, publisher ={Springer-Verlag}, year = {2005}, series = {Probability and Its Applications} } \bib{K10}{article}{ author={K\"ostler, Claus}, title={A noncommutative extended de Finetti theorem}, journal={J. Funct. Anal.}, volume={258}, year={2010}, pages={1073--1120} } \bib{KSp09}{article}{ author={K\"ostler, Claus}, author={Speicher, Roland}, title={A noncommutative de Finetti theorem: invariance under quantum permutations is equivalent to freeness with amalgamation}, journal={Comm. Math. Phys.}, volume={291}, year={2009}, pages={473--490} } \bib{Sp98}{book}{ author={Speicher, Roland}, title={Combinatorial Theory of the Free Product with Amalgamation and Operator--Valued Free Probability Theory}, series={Mem. Amer. Math. Soc.}, volume={627}, year={1998} } \bib{St69}{article}{ author={St\o{}rmer, Erling}, title={Symmetric states of infinite tensor products of C$^*$--algebras}, journal={J. Funct. Anal.}, volume={3}, year={1969}, pages={48--68} } \bib{VDN92}{book}{ author={Voiculescu, Dan}, author={Dykema, Ken}, author={Nica, Alexandru}, title={Free random variables}, series={CRM Monograph Series}, volume={1}, publisher={American Mathematical Society}, address={Providence, RI}, year={1992} } \bib{W98}{article}{ author={Wang, S.}, title={Quantum symmetry groups of finite spaces}, journal={Comm. Math. Phys.}, volume={195}, pages={195--211}, year={1998} } \end{biblist} \end{bibdiv} \end{document}
\section{Introduction} Recent experimental and theoretical studies have shown that external electromagnetic fields can be used as a powerful tool to manipulate molecular collisions and chemical reactivity at low temperatures \cite{Book,NJP,Schnell,Softley,Dulieu,JohnCPC,Roman08,BalaJPB04,prl06,jcp07,jcp06,Sergey09,jcp08,MeyerBohn,KRb1,KRb2,KRb_theory}. Examples include resonant control of atom-molecule collisions and chemical reactions in ultracold molecular gases \cite{BalaJPB04,prl06,jcp06,Sergey09}, electric field control of nascent product state distributions \cite{jcp08,MeyerBohn}, and off-resonant laser field control of motional degrees of freedom \cite{KRb_theory,KRb2}. These pioneering studies demonstrate that future progress in the field of cold molecules -- in particular, the ability to create large, dense, and stable ensembles of chemically diverse molecular species -- will depend to a large extent on our understanding of their collisional properties \cite{Book,NJP,Schnell,Softley,Dulieu,JohnCPC,Roman08}. Theoretical modeling of molecular collision experiments performed at temperatures below 1~K requires quantum scattering calculations based on multidimensional potential energy surfaces (PESs) of unprecedented accuracy, which generally remain beyond the capabilities of modern {\it ab initio} methods. A way out of this difficulty is to adjust the interaction PESs based on experimental measurements of collision observables such as trap loss rates \cite{CaH,RbCs,He-OH,KRb1,KRb2,FeshbachChemistry,NH,NHexp,N-NH,OH-ND3,Rb-NH3}. The crucial link between intermolecular PESs and laboratory observations is provided by quantum scattering calculations, which yield collisional properties of molecules exactly for a given PES. Because of the need to incorporate symmetry breaking effects arising from the presence of external fields \cite{Roman08}, such calculations are more challenging than their field-free counterparts. 
In particular, the total angular momentum of the collision pair is no longer conserved in the presence of external fields, invalidating the standard approaches of molecular collision theory based on the total angular momentum representation \cite{AD,Lester}. A theoretical formalism for quantum scattering calculations of molecular collisions in external fields was developed by Volpi and Bohn and by Krems and Dalgarno \cite{VolpiBohn,Roman04}. The formalism is based on the fully uncoupled space-fixed representation, in which the wavefunction of the collision complex is expanded in direct products of rotational basis functions and spherical harmonics describing the orbital motion of the collision partners in a space-fixed (SF) coordinate frame \cite{VolpiBohn,Roman04}. Several groups have used this representation to study the effects of external electric, magnetic, and microwave fields on atom-molecule \cite{prl06,NH,jcp06,Jeremy07,HeCH2,Thierry,Chinese} and molecule-molecule \cite{njp09,JesusO2,Liesbeth} collisions. These studies have shown that the fully uncoupled SF formalism meets with serious difficulties when applied to collision problems characterized by strongly anisotropic interactions \cite{njp09,jcp08,Li-NH}. More specifically, the interaction anisotropy strongly couples different rotational and partial wave basis states, leading to very large systems of coupled-channel equations that are beyond the capability of present-day computational resources. As most atom-molecule and molecule-molecule interactions are strongly anisotropic, this difficulty has precluded converged calculations on many interesting collision systems, including Li + HF $\leftrightarrow$ LiF + H \cite{jcp08}, Rb + ND$_3$ \cite{Rb-NH3}, Li + NH \cite{Li-NH}, and NH + NH \cite{Liesbeth}. We have recently developed an alternative approach to atom-molecule and molecule-molecule scattering in a magnetic field based on the total angular momentum representation \cite{jcp10}. 
The total angular momentum of the collision complex is approximately conserved even in the presence of external fields; thus, using basis functions with well-defined total angular momentum allows for a substantial reduction in the number of scattering channels \cite{jcp10}. This advantage allowed us to obtain numerically converged scattering cross sections for strongly anisotropic atom-molecule \cite{Li-CaH} and molecule-molecule \cite{Yura} collisions in the presence of a magnetic field. Magnetic fields interact with the electron spin of the molecule, which can be weakly coupled to the intermolecular axis and often plays a spectator role during the collision. As a result, while an applied magnetic field shifts the energies of the colliding molecules and may lead to the appearance of scattering resonances, it hardly affects the mechanism of collision-induced energy transfer. In contrast, electric fields break the inversion symmetry of the collision problem and alter the selection rules for parity-changing transitions, leading to more dramatic changes in collision mechanisms. Examples include electric field-induced molecular states \cite{AvdeenkovBohn}, dipolar resonances \cite{Ticknor}, enhancement and suppression of spin relaxation in $^2\Sigma$ and $^2\Pi$ molecules \cite{prl06,FaradayDiscuss}, and stimulated chemical reactions \cite{KRb1,KRb_theory}. The purpose of this article is to extend the approach developed in Ref. \cite{jcp10} to describe atom-molecule collisions in electric fields. In Sec. II, we formulate the collision problem in the total angular momentum representation and outline the procedure of evaluating atom-molecule collision cross sections. We then apply our formulation to calculate the cross sections for Stark relaxation (Sec. IIIA) and vibrational relaxation (Sec. IIIB) in $^3$He-CaD collisions in the presence of an electric field. 
Our results agree well with benchmark calculations based on the fully uncoupled SF representation, demonstrating the validity and efficiency of our approach. These findings lead us to conclude that numerical algorithms based on the total angular momentum representation are a powerful way of carrying out quantum scattering calculations in the presence of electric fields. Sec. IV presents a brief summary of main results and outlines future research directions opened up by this work. \section{Theory} A non-reactive collision of a diatomic molecule (BC) with a structureless atom (A) in the presence of a dc electric field is described by the Hamiltonian (in atomic units) \cite{jcp10} \begin{equation}\label{H} \hat{H} = -\frac{1}{2\mu R}\frac{\partial^2}{\partial R^2} R + \frac{\hat{\ell}^2}{2\mu R^2} + V(\mathbf{R},\mathbf{r}) + \hat{H}_\text{as}, \end{equation} where $\mathbf{R}$ is the atom-molecule separation vector, $\mathbf{r}=r\hat{r}$ defines the length and the orientation of the internuclear axis (BC) in the SF frame, $\hat{\ell}$ is the orbital angular momentum for the collision, $V(\mathbf{R},\mathbf{r})$ is the atom-molecule interaction potential, and $\mu$ is the A-BC reduced mass. The asymptotic Hamiltonian $\hat{H}_\text{as}$ describes the rovibrational structure of the diatomic molecule and its interaction with an electric field of strength $E$ oriented along the SF quantization axis $Z$ \begin{equation}\label{Has} \hat{H}_\text{as} = -\frac{1}{2mr} \frac{d^2}{dr^2}r + \frac{\hat{\jmath}^2}{2m r^2} + V(r) - Ed\cos\theta_r \end{equation} where $\hat{\jmath}$ is the rotational angular momentum, $d$ is the permanent electric dipole moment of the molecule with mass $m$, $V(r)$ is the intramolecular interaction potential \cite{BalaHeCaH_paper1}, and $\theta_r$ is the polar angle of the internuclear axis ($\hat{r}$) in the SF frame \cite{prl06,jcp06}. The orbital angular momentum $\hat{\ell}^2$ in Eq. 
(\ref{H}) can be expressed via the total angular momentum of the collision complex $\hat{J}$ in the body-fixed (BF) coordinate frame as \cite{jcp10,Lester} \begin{equation}\label{l2} \hat{\ell}^2 = (\hat{J} - \hat{\jmath})^2 = \hat{J}^2 + \hat{\jmath}^2 - \hat{J}_+\hat{\jmath}_- - \hat{J}_-\hat{\jmath}_+ - 2\hat{J}_z\hat{\jmath}_z, \end{equation} where $\hat{J}_\pm$ and $\hat{\jmath}_\pm$ are the BF raising and lowering operators (note that $\hat{J}_\pm$ satisfy anomalous commutation relations \cite{Zare}). The BF $z$-axis coincides with the vector $\mathbf{R}$ and the $y$-axis is perpendicular to the collision plane. As in our previous work \cite{jcp10}, we expand the wave function of the collision complex in direct products of BF basis functions \cite{Lester,jcp10} \begin{equation} \label{BFexpansion} \Psi = \frac{1}{R}\sum_{J}\sum_{v,\, j,\,k} F^M_{Jvjk}(R) |vjk\rangle |JMk\rangle, \end{equation} where $k$ is the BF projection of both $J$ and $j$, and $M$ is the SF projection of $J$. In Eq. (\ref{BFexpansion}), $|JMk\rangle=\sqrt{(2J+1)/8\pi^2}D^{J*}_{Mk}(\Omega_E)$ are the symmetric top eigenfunctions, $D(\Omega_E)$ are the Wigner $D$-functions, and $\Omega_E$ are the Euler angles which specify the orientation of BF axes in the SF frame. The functions $|vjk\rangle=r^{-1}\chi_{vj}(r)\sqrt{2\pi}Y_{jk}(\theta,0)$ describe the rovibrational motion of the diatomic molecule in the BF frame. The rovibrational functions $\chi_{vj}(r)$ satisfy the Schr{\"o}dinger equation \begin{equation}\label{chi} \left[-\frac{1}{2m} \frac{d^2}{dr^2} + \frac{j(j+1)}{2 mr^2} + V(r)\right] \chi_{vj}(r) = \epsilon_{vj}\chi_{vj}(r) \end{equation} where $\epsilon_{vj}$ is the rovibrational energy of the molecule in the absence of an electric field \cite{note}. 
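Equation~(\ref{chi}) is a standard one-dimensional eigenvalue problem that can be solved on a grid. As a minimal, self-contained sketch of the grid approach used later in this work (here the Colbert--Miller sinc-DVR in its $(-\infty,\infty)$ form, applied to an illustrative harmonic potential rather than the actual CaD potential; the function name and parameters are ours), one simply builds the DVR kinetic-energy matrix and diagonalizes $T+V$:

```python
# Sketch: Colbert-Miller sinc-DVR on a uniform grid, applied to a model
# harmonic potential; names and test parameters are illustrative only.
import numpy as np

def dvr_levels(x, v, m=1.0, hbar=1.0):
    """Eigenvalues of H = T + V, where T is the sinc-DVR kinetic matrix
    T_ij = (hbar^2 / 2 m dx^2) (-1)^(i-j) * [pi^2/3 if i=j, else 2/(i-j)^2]."""
    n = x.size
    dx = x[1] - x[0]
    i, j = np.indices((n, n))
    off = np.where(i == j, 1, i - j)              # avoid division by zero on the diagonal
    T = np.where(i == j, np.pi**2 / 3.0, 2.0 / off**2)
    sign = np.where((i - j) % 2 == 0, 1.0, -1.0)  # the (-1)^(i-j) phase factor
    T = sign * T * hbar**2 / (2.0 * m * dx**2)
    return np.linalg.eigvalsh(T + np.diag(v))     # ascending eigenvalues

x = np.linspace(-10.0, 10.0, 201)
levels = dvr_levels(x, 0.5 * x**2)                # harmonic oscillator, omega = 1
```

For the harmonic test potential the lowest eigenvalues reproduce $(n+\tfrac12)\hbar\omega$ to high accuracy; in the actual calculation the grid instead spans $r$, with $V(r)$ and the centrifugal term of Eq.~(\ref{chi}) on the diagonal.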
The radial expansion coefficients $F^M_{Jvjk}(R)$ satisfy a system of coupled-channel (CC) equations \begin{align}\label{CC}\notag \left[ \frac{d^2}{dR^2} +2\mu E_\text{tot}\right] F^M_{Jvjk}(R)=2\mu \sum_{J',\,v',j',k'} \langle JMk | \langle vjk | V(R,r,\theta) &+ \frac{1}{2\mu R^2} (\hat{J} - \hat{\jmath})^2 \\&+\hat{H}_\text{as} | J'Mk'\rangle |v'j'k'\rangle F^M_{J'v'j'k'}(R), \end{align} where $E_\text{tot}$ is the total energy. The matrix elements of the interaction potential and of $\hat{\ell}^2$ can be evaluated as described in Refs. \cite{Lester,jcp10}. In the absence of an electric field, the asymptotic Hamiltonian (\ref{Has}) has only diagonal matrix elements \begin{equation} \label{me_rot} \langle JMk | \langle vjk | \hat{H}_\text{as}| J'M'k'\rangle |v'j'k'\rangle = \delta_{JJ'}\delta_{MM'}\delta_{vv'}\delta_{jj'}\delta_{kk'} \epsilon_{vj} \,\,\, (E=0). \end{equation} In order to evaluate the matrix elements of the molecule-field interaction in the BF basis, we transform the $Z$-component of vector $\hat{r}$ to the BF frame \cite{Zare} \begin{equation}\label{transformation} \cos\theta_r = \left(\frac{4\pi}{3}\right)^{1/2} Y_{10} (\theta_r,\phi_r) = \left(\frac{4\pi}{3}\right)^{1/2} \sum_q D^{1*}_{0q}(\Omega_E) Y_{1q} (\theta,\phi). \end{equation} The expression on the right-hand side contains spherical harmonics of BF angles ($\theta,\phi$) and Wigner $D$-functions of Euler angles (note that $\theta$ is the Jacobi angle between $\mathbf{R}$ and $\mathbf{r}$). 
Making use of standard expressions for angular integrals involving three spherical harmonics \cite{Zare}, and neglecting the $r$ dependence of $d$ (which is a good approximation for low vibrational states and weak electric fields \cite{Rosario}) we obtain for the molecule-field interaction matrix element \begin{multline}\label{me_ext} \langle JMk | \langle vjk | -Ed\cos\theta_r | J'M'k' \rangle |v'j'k'\rangle = -Ed \delta_{MM'} \delta_{vv'} [(2J+1)(2J'+1)(2j+1)(2j'+1)]^{1/2} \\ \times (-)^{M+k-k'}\sum_q (-)^q \threejm{J}{M}{1}{0}{J'}{-M} \threejm{j}{0}{1}{0}{j'}{0} \threejm{J}{k}{1}{-q}{J'}{-k'} \threejm{j}{-k}{1}{q}{j'}{k'}. \end{multline} This expression shows that the interaction with electric fields couples basis functions of different $J$. It is because of this coupling that the collision problem can no longer be factorized by symmetry into smaller $J$-subproblems \cite{Lester}. It follows from Eq. (\ref{me_ext}) that (i) the external field couplings vanish unless $J-J' = \pm 1$, and (ii) electric fields couple basis functions of different $k$, leading to a field-induced analog of the Coriolis interaction. Unlike the standard Coriolis interaction, however, the interaction with external electric fields couples different $k$-states in {\it different} $J$-blocks (assuming $M=0$). The standard asymptotic analysis of the radial solutions to CC equations (\ref{CC}) at large $R$ gives the $S$-matrix elements and scattering observables. The analysis proceeds in two steps. First, the BF wavefunction is transformed to the SF representation using the eigenvectors of the operator $\hat{\ell}^2$ \cite{jcp10, ABC,Launay}. Next, the wavefunction is transformed to the basis in which $\hat{H}_\text{as}$ is diagonal using the eigenvectors of the asymptotic Hamiltonian (\ref{Has}) in the SF representation. The eigenvalues of $\hat{H}_\text{as}$ define the scattering channels $|\gamma \ell\rangle$ and threshold energies $\epsilon_\gamma$ in the presence of an electric field. 
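Equation~(\ref{me_ext}) can be transcribed directly into code. The sketch below is a hedged illustration using SymPy's Wigner $3j$ symbols, with \texttt{E\_d} standing for the product $Ed$; the function name and interface are ours, not part of any scattering package.

```python
from sympy import S
from sympy.physics.wigner import wigner_3j

def stark_me(J, M, k, j, Jp, Mp, kp, jp, E_d):
    """Molecule-field matrix element <JMk|<vjk| -Ed cos(theta_r) |J'M'k'>|v'j'k'>
    for v = v' (the r dependence of d is neglected, as in the text)."""
    if M != Mp:
        return S.Zero                         # delta_{MM'}
    pref = -E_d * ((2*J + 1)*(2*Jp + 1)*(2*j + 1)*(2*jp + 1))**S.Half
    pref *= S(-1)**(M + k - kp)
    total = S.Zero
    for q in (-1, 0, 1):                      # sum over BF components q
        total += (S(-1)**q
                  * wigner_3j(J, 1, Jp, M, 0, -M)
                  * wigner_3j(j, 1, jp, 0, 0, 0)
                  * wigner_3j(J, 1, Jp, k, -q, -kp)
                  * wigner_3j(j, 1, jp, -k, q, kp))
    return pref * total
```

For example, \texttt{stark\_me(0, 0, 0, 0, 1, 0, 0, 1, 1)} evaluates to $-1/3$, reproducing the $-\frac{1}{3}Ed$ coupling between $|000\rangle|00\rangle$ and $|100\rangle|10\rangle$ in the weak-field model of Sec.~III, while $\Delta J = 0$ elements vanish for $M=0$.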
Matching the transformed solutions to the asymptotic form \cite{jcp10} \begin{equation}\label{BoundaryConditions} F^M_{\gamma \ell}(R) \to \delta_{\gamma\gamma'}\delta_{\ell\ell'} e^{-i(k_\gamma R - \ell \pi/2)} - \left(\frac{k_{\gamma}}{k_{\gamma'}}\right)^{1/2} S^M_{\gamma \ell; \gamma'\ell'}e^{i(k_{\gamma'} R - \ell' \pi/2)} \end{equation} yields the $S$-matrix elements describing collision-induced transitions between the channels $\gamma$ and $\gamma'$ with wavevectors $k^2_\gamma = 2\mu(E_\text{tot}-\epsilon_\gamma)=2\mu E_C$, where $E_C$ is the collision energy. The integral cross sections can be evaluated from the $S$-matrix elements as \cite{Roman04, jcp10} \begin{equation} \sigma_{\gamma \to\gamma'} = \frac{\pi}{k_\gamma^2} \sum_M \sum_{\ell,\,\ell'} |\delta_{\ell\ell'}\delta_{\gamma\gamma'} - S^M_{\gamma \ell; \gamma\ell'} |^2. \end{equation} For the He-CaD interaction, we used a three-dimensional {\it ab initio} potential energy surface developed by Balakrishnan {\it et al.} \cite{BalaHeCaH_paper1,BalaHeCaH}, which explicitly includes the $r$ dependence of the interaction energy. The rovibrational eigenfunctions $\chi_{vj}(r)$ were evaluated by solving the one-dimensional Schr{\"o}dinger equation (\ref{chi}) using a discrete variable representation (DVR) method \cite{ColbertMiller}. The matrix elements of the He-CaD interaction in Eq. 
(\ref{CC}) were obtained by expanding the PES in Legendre polynomials with $\lambda_\text{max}=12$ and evaluating the integrals over spherical harmonics analytically to yield \cite{Lester,jcp10} \begin{multline} \langle JMk | \langle vjk | V(R,r,\theta) | J'Mk'\rangle |v'j'k'\rangle = \delta_{JJ'}\delta_{kk'} [(2j+1)(2j'+1)]^{1/2} \\ \times \sum_{\lambda=0}^{\lambda_\text{max}} \langle \chi_{vj}(r) | V_\lambda (R,r) |\chi_{v'j'}(r)\rangle \threejm{j}{-k}{\lambda}{0}{j'}{k'} \threejm{j}{0}{\lambda}{0}{j'}{0} \end{multline} The radial coefficients $V_\lambda (R,r)$ were evaluated using a 24-point Gauss-Legendre quadrature in $\theta$. The $r$ integrals were computed with 30 Gauss-Legendre quadrature points in $r \in [2.5, 5.6]$ $a_0$. The CC equations (\ref{CC}) were solved using the log-derivative method \cite{David86} on a grid of $R$ between 2 and 100~$a_0$ with a grid step of 0.1 $a_0$. The BF basis set used in Stark relaxation calculations (Sec. IIIA) included 10 rotational states ($j_\text{max}=9$); the basis set used in vibrational relaxation calculations included 10 rotational states in the $v=0$ and $v=1$ vibrational manifolds of CaD (see Sec. IIIB). The cross sections for Stark relaxation were converged to better than 10\%. For classification purposes, the eigenvalues of the asymptotic Hamiltonian are assigned physical quantum numbers appropriate to a polar diatomic molecule in an electric field: $v$, $j$, and $m$ (the SF projection of $j$). In this work, we are interested in low-to-moderate field strengths, where the interaction with the electric field is small compared to the splitting between the ground and the first excited rotational levels. We can therefore keep using $j$ to denote the rotational manifold and $m$ to distinguish the Stark states within the manifold, even though $j$ is not a good quantum number in an electric field. The assignment procedure works as follows.
All eigenvalues of the asymptotic Hamiltonian which are close in energy to a particular Stark state $|vjm\rangle$ (that is, $|\epsilon_\gamma-\epsilon_{vjm}|<\Delta$) are assigned the quantum numbers $v,j,m$. The eigenvalues that do not meet this condition are excluded from consideration. In this work, we set $\Delta=0.1$ cm$^{-1}$; however, test calculations show that the results are not sensitive to the choice of $\Delta$ as long as $E_C<\Delta$. If this condition is not met, problems may arise with distinguishing between elastic and inelastic channels (see Sec. IIIB). \section{Results and discussion} In this section, we first consider the eigenstates of the asymptotic Hamiltonian that define the scattering channels in the presence of an electric field (Sec. IIIA). In order to test the performance of our approach, we compare the cross sections calculated using the BF total angular momentum representation with benchmark calculations based on the fully uncoupled SF representation (Secs. IIIB and C). \subsection{Asymptotic states} Figure \ref{fig:stark} shows the eigenvalues of the asymptotic Hamiltonian (\ref{Has}) for the ground vibrational state of CaD as functions of the applied electric field. The number of total $J$-states is given by $N_J=J_\text{max}+1$, where $J_\text{max}$ is the largest value of $J$ included in the basis set. The eigenvalues obtained for $J_\text{max}=2$ and 5 are shown in the upper and lower panels, respectively. The results clearly show that $\hat{H}_\text{as}$ expressed in the total angular momentum basis has eigenvalues that do not correspond to the physical Stark states of the diatomic molecule. This situation is similar to that encountered in the case of magnetic fields, and following the terminology introduced in \cite{jcp10}, we will refer to these states as ``unphysical''. From Fig. \ref{fig:stark}, we observe that the number of unphysical Stark states increases with the number of $J$-blocks in the basis set.
In addition, the energies of the unphysical states become closer to the true Stark energies as $J_\text{max}$ increases. As pointed out before \cite{jcp10}, the origin of the unphysical states shown in Fig. \ref{fig:stark} can be attributed to the basis set truncation procedure. The total $J$ basis is truncated by restricting the number of $J$-blocks ($N_J=J_\text{max}+1$). However, as follows from Eq. (\ref{me_ext}), electric fields couple basis states in block $J$ to those in block $J+1$. When the Hamiltonian matrix is truncated, these couplings are left out, resulting in the appearance of unphysical eigenvalues and eigenvectors. In Ref. \cite{jcp10} it was shown that the eigenvectors of unphysical Zeeman states are dominated by the largest value of $J$ included in the basis set. As a result, the presence of unphysical states has no influence on low-temperature collisions in magnetic fields \cite{jcp10}. In order to elucidate the properties of unphysical states, we consider the matrix of the asymptotic Hamiltonian (\ref{Has}) in the BF basis. In the weak-field limit $|Ed|/B_e\ll 1$, we can consider only the coupling between the ground and the first excited rotational states in the $v=0$ manifold (the $v$ index will be omitted for the rest of this section).
Arranging the $|JMk\rangle |jk\rangle$ functions in the following sequence: $|000\rangle |00\rangle$, $|100\rangle |1-1\rangle$, $|100\rangle |10\rangle$, $|100\rangle |11\rangle$, $|000\rangle |10\rangle$, $|100\rangle |00\rangle$, we obtain the matrix of the asymptotic Hamiltonian \begin{equation}\label{Matrix} \left(\begin{array}{cc} H_1 & 0 \\ 0 & H_2 \\ \end{array}\right), \end{equation} with \begin{equation}\label{H1} H_1 = \left(\begin{array}{cccc} 0 & -\frac{1}{3}Ed & -\frac{1}{3}Ed & -\frac{1}{3}Ed \\ -\frac{1}{3}Ed& 2B_e & 0 & 0 \\ -\frac{1}{3}Ed & 0 & 2B_e & 0 \\ -\frac{1}{3}Ed & 0 & 0 & 2B_e \end{array}\right) \end{equation} and \begin{equation}\label{H2} H_2 = \left(\begin{array}{cc} 2B_e & -\frac{1}{3}Ed \\ -\frac{1}{3}Ed & 0 \end{array}\right). \end{equation} Diagonalization of $H_1$ yields \begin{equation} \lambda_{1,2} = B_e \pm \sqrt{B_e^2 +\frac{1}{3}(Ed)^2}, \quad \lambda_{3,4} = 2B_e. \end{equation} These energies are the same as those of a polar $^1\Sigma$ molecule in a dc electric field \cite{BohnChapter,jcp07}. The eigenvalues of $H_2$ \begin{equation}\label{lambda56} \lambda_{\pm} = B_e \pm \sqrt{B_e^2 +\frac{1}{9}(Ed)^2} \end{equation} correspond to unphysical Stark states. The eigenvectors of the unphysical states are given by \begin{equation}\label{lambda_pm} |\lambda_\pm\rangle = \frac{\frac{1}{3}Ed}{D_\pm} |000\rangle |10\rangle + \frac{B_e\mp \sqrt{B_e^2+\frac{1}{9}(Ed)^2}}{D_\pm} |100\rangle |00\rangle \end{equation} where $D_\pm^2 = \frac{1}{9}(Ed)^2 + \left[ B_e \mp \sqrt{B_e^2 + \frac{1}{9}(Ed)^2} \right]^2$. Eq. (\ref{lambda_pm}) illustrates that the field-induced mixing between different $J$-states is proportional to the magnitude of the electric field. Thus, we expect that the coupling between the different $J$-blocks will become stronger with increasing field, making it necessary to include more $J$-blocks in the basis set to obtain converged results even at ultralow collision energies (see Sec. III).
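The model of Eq.~(\ref{Matrix}) is easily checked numerically. In the sketch below (NumPy; $B_e$ is set to the CaD rotational constant quoted in Sec.~IIIB, while the value of $Ed$ is an arbitrary illustration), the spectra of $H_1$ and $H_2$ are compared with the closed-form eigenvalues, and zeroing the field-induced off-diagonal elements of $H_2$ is shown to remove the field dependence of the unphysical pair (\ref{lambda56}):

```python
import numpy as np

# Numerical check of the 6-state model: build H1 and H2 and compare their
# eigenvalues with the closed-form expressions. B is the CaD rotational
# constant (cm^-1); the field strength Ed is an arbitrary illustrative value.
B, Ed = 2.16, 0.5
a = Ed / 3.0
H1 = np.array([[0.0, -a, -a, -a],
               [-a, 2*B, 0.0, 0.0],
               [-a, 0.0, 2*B, 0.0],
               [-a, 0.0, 0.0, 2*B]])
H2 = np.array([[2*B, -a],
               [-a, 0.0]])

lam1 = np.linalg.eigvalsh(H1)          # physical Stark levels + 2B pair
lam2 = np.linalg.eigvalsh(H2)          # the unphysical pair

phys = sorted([B - np.sqrt(B**2 + Ed**2/3), 2*B, 2*B,
               B + np.sqrt(B**2 + Ed**2/3)])
unphys = sorted([B - np.sqrt(B**2 + Ed**2/9),
                 B + np.sqrt(B**2 + Ed**2/9)])
assert np.allclose(lam1, phys) and np.allclose(lam2, unphys)

# Dropping the field-induced off-diagonal elements of H2 removes the
# unphysical Stark shifts: the eigenvalues revert to the field-free 0, 2B.
assert np.allclose(np.linalg.eigvalsh(np.diag(np.diag(H2))), [0.0, 2*B])
```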
By contrast, the eigenvectors of unphysical Zeeman states are, to a first approximation, independent of the field strength \cite{jcp10}, and so are the convergence properties of scattering observables. Finally, we note that neglecting the electric-field-induced coupling within the $H_2$ block leads to the disappearance of the unphysical Stark shifts (\ref{lambda56}). This observation suggests a way to eliminate the unphysical states from scattering calculations. Preliminary results obtained with a restricted basis set ($j_\text{max}=1$, $J_\text{max}=1$) indicate that neglecting the off-diagonal elements of $H_2$ does provide accurate results for both elastic and inelastic He-CaD scattering. It remains to be seen whether or not the procedure can be generalized to larger rotational basis sets. \subsection{Stark relaxation in He-CaD$(v=0,j=1,m_j=0)$ collisions} Figure \ref{fig:rot} shows the cross sections for Stark relaxation in $^3$He-CaD$(v=0,j=1,m_j=0)$ collisions calculated using the BF total angular momentum representation. The inelastic cross sections are summed over all final Stark states of CaD and displayed as functions of collision energy for $M=0$. At very low collision energies (in the Wigner s-wave limit) the cross sections scale as $1/\sqrt{E_C}$ \cite{Book,Roman08}. At higher collision energies, the cross sections display broad oscillations due to the presence of scattering resonances \cite{jcp06,jcp08}. At an electric field of 50 kV/cm, the BF results obtained with $J_\text{max}=5$ are in excellent agreement with the benchmark calculations over the entire range of collision energies from $10^{-4}$ cm$^{-1}$ to 1 cm$^{-1}$. The agreement for $J_\text{max}=4$ is also good at $E_C<0.1$ cm$^{-1}$. The deviations observed above this collision energy occur because the number of total angular momentum states in the basis is not sufficient to adequately describe scattering resonances in the entrance and/or exit collision channels.
This is analogous to the lack of convergence at high collision energies observed in our previous calculations of atom-molecule collisions in magnetic fields \cite{jcp10}. The cross sections obtained with $J_\text{max}=3$ are off by $\sim$50~\% even in the $s$-wave regime, which indicates that the external field coupling between the $J=3$ and $J=4$ blocks can no longer be neglected. In order to test the performance of our algorithm at higher electric fields, we display in the lower panel of Fig. \ref{fig:rot} the cross sections calculated for $E=150$ kV/cm for different values of $J_\text{max}$. While the $J_\text{max}=4$ cross sections display a similar energy dependence to the benchmark results, quantitative agreement requires extension of the basis set to $J_\text{max}=5$. We conclude that it is necessary to include more $J$-states in the basis set to achieve convergence at higher electric fields. As shown in the previous section, the properties of unphysical Stark states depend on the magnitude of the electric field. At higher electric fields the scattering wavefunction contains contributions from higher $J$-blocks, making it necessary to increase $J_\text{max}$ to obtain converged results even at ultralow collision energies, as illustrated by the results plotted in Figs. \ref{fig:rot} and \ref{fig:vib}. By contrast, converged results for ultracold atom-molecule collisions in magnetic fields can typically be obtained with just two $J$-blocks \cite{jcp10}. \subsection{Vibrational relaxation: He-CaD$(v=1,j=0,m_j=0)$ collisions} In Fig. \ref{fig:vib}, we compare the cross sections for vibrational relaxation in He-CaD($v=1,j=0,m_j=0$) collisions calculated using the BF approach with benchmark SF calculations. The cross sections are summed over all final rotational states of CaD and plotted as functions of collision energy for different $J_\text{max}$.
Balakrishnan {\it et al.} considered vibrational relaxation in $^3$He-CaH($v=1,j=0$) collisions in the absence of external fields and found it necessary to include 20 rotational states in the $v=0$ and $v=1$ vibrational manifolds to achieve numerical convergence \cite{BalaHeCaH}. The first excited vibrational state of CaD lies 908.3 cm$^{-1}$ above the ground state, and the rotational constant of CaD is 2.16 cm$^{-1}$. In order to properly describe quasiresonant energy transfer important at low temperatures \cite{Bala98}, it would thus be necessary to include at least 20 rotational states of CaD in each vibrational manifold. A fully uncoupled SF basis with $v_\text{max}=1$, $j_\text{max}=20$, and $l_\text{max}={20}$ contains 12362 channels. In order to avoid solving large numbers of CC equations, we opted to use a restricted SF basis set with $v_\text{max}=1$, $j_\text{max}=9$, and $l_\text{max}=9$ to generate benchmark results, which should be adequate for testing purposes provided the same convergence parameters $v_\text{max}$ and $j_\text{max}$ are used in BF and SF calculations. We emphasize, however, that these benchmark cross sections are not physically meaningful (e.g., they may not exhibit the quasi-resonance behavior characteristic of vibrational relaxation at low temperatures \cite{BalaHeCaH,Bala98}). From Fig. \ref{fig:vib} we observe that the BF cross sections obtained for a relatively weak electric field ($E=50$ kV/cm) are in good agreement with benchmark calculations already at $J_\text{max}=3$. Table I demonstrates that a $J_\text{max}=3$ calculation includes only 280 scattering channels, while the same calculation performed using the fully uncoupled SF representation requires as many as 1380 channels. The use of the BF total angular momentum representation thus allows us to reduce the number of scattering channels by a factor of 4. 
The computational cost of solving CC equations scales as $N^3$ with the number of scattering channels \cite{David86}, so the BF total angular momentum representation is more than 100-fold more computationally efficient than the fully uncoupled SF representation \cite{VolpiBohn,Roman04}. At $E=150$ kV/cm, quantitatively accurate results are obtained with $J_\text{max}\ge 5$, while $J_\text{max}=4$ calculations overestimate the benchmark result by a factor of $\sim$3. Comparison of Figs. \ref{fig:rot} and \ref{fig:vib} suggests that vibrational relaxation cross sections converge more slowly with $J_\text{max}$ than those for Stark relaxation. The gain in computational efficiency ($\sim$10-fold) is therefore not as dramatic as observed for $E=50$ kV/cm. Note that the BF inelastic cross sections show an unphysical jump at a collision energy of $\sim$0.14 cm$^{-1}$. This jump occurs because of the ambiguity of the procedure used to assign quantum numbers to unphysical states. As pointed out in Sec. II, the eigenvalues of the asymptotic Hamiltonian with energies $|\epsilon_\gamma-\epsilon_{vjm}|<\Delta$ are assigned physical quantum numbers $v,j$, and $m$, where we have chosen $\Delta=0.1$ cm$^{-1}$. While this procedure works well as long as the collision energy is small compared to $\Delta$, collision-induced transitions between unphysical states make it difficult to distinguish between elastic and inelastic channels when this condition is not met. This technical difficulty can be eliminated by increasing $\Delta$ or by switching to a representation free of unphysical states (see~Sec.~II). \section{Summary} We have presented an efficient theoretical approach to solving the atom-molecule collision problem in the presence of an electric field.
Unlike previous theoretical work based on the fully uncoupled space-fixed representation \cite{Roman04,VolpiBohn}, our approach makes explicit use of the total angular momentum ($J$) representation in the body-fixed coordinate frame, in which the atom-molecule Hamiltonian has a block-diagonal form in the absence of external fields. The different $J$ blocks are coupled only by the molecule-field interaction, making it possible to accelerate convergence of scattering observables with respect to the maximum number of rotational states and $J$-blocks included in the basis set. Our method is thus particularly suitable for quantum scattering calculations on atom-molecule (and possibly molecule-molecule) collision systems, where different rotational states are strongly coupled by the anisotropy of the interaction potential. As in the case of molecular collisions in magnetic fields \cite{jcp10}, truncation of the asymptotic Hamiltonian matrix leads to the appearance of unphysical Stark shifts. We have analyzed the properties of the unphysical states using a simple 6-state model, which shows that the unphysical states arise due to the electric field-induced coupling between different rotational states in adjacent $J$-blocks. The eigenvectors of the unphysical states are linear combinations of different rotational and $J$-states with field-dependent mixing coefficients. Because of the admixture of higher $J$-states, which do not contribute to low-temperature collision observables due to centrifugal barriers, the unphysical states are expected to play no role in cold atom-molecule collisions. Furthermore, our analytical results suggest that, by neglecting certain coupling matrix elements, it may be possible to completely eliminate the unphysical Stark states from scattering calculations. In order to test the performance of our method, we applied it to calculate the cross sections for vibrational and Stark relaxation in He-CaD collisions in the presence of an electric field. 
The results obtained using the BF approach are in good agreement with benchmark calculations based on the fully uncoupled SF representation. Most notably, the number of BF channels required to obtain converged results is smaller by a factor of 1.5 to 4 (depending on $J_\text{max}$), leading to a 5- to 100-fold gain in computational efficiency (see Table I). These improvements open up the possibility of carrying out highly efficient quantum scattering calculations of strongly anisotropic atom-molecule collisions in electric fields, which are of great current interest as potential candidate systems for sympathetic cooling experiments \cite{Li-CaH,N-NH,Rb-NH3} or reactants for electric field-controlled chemical reactions \cite{KRb1,KRb2}. \acknowledgements The author is grateful to Rosario Gonz{\'a}lez-F{\'e}rez and Roman Krems for their interest in this work and stimulating discussions. This work was supported by NSF grants to the Harvard-MIT Center for Ultracold Atoms and the Institute for Theoretical Atomic, Molecular and Optical Physics at Harvard University and the Smithsonian Astrophysical Observatory. \newpage
\section{Introduction}\label{sec:intro} According to the American Cancer Society, breast cancer is the most frequently diagnosed solid cancer and the second leading cause of cancer death among U.S. women~\cite{acs}. Mammogram screening has been demonstrated to be an effective way for early detection and diagnosis, which can significantly decrease breast cancer mortality~\cite{oeffinger2015breast}. Traditional mammogram classification requires extra annotations such as bounding boxes for detection or mask ground truths for segmentation~\cite{varela2006use,carneiro2015unregistered,jiao2016deep}. Other works have employed different deep networks to detect ROIs and obtain mass boundaries in different stages \cite{dhungel2016automated}. However, these methods require hand-crafted features to complement the system~\cite{kooi2017large}, and training data to be annotated with bounding boxes and segmentation ground truths, which require expert domain knowledge and costly effort to obtain. In addition, multi-stage training cannot fully explore the power of deep networks. Due to the high cost of annotation, we intend to perform classification based on a raw whole mammogram. Each patch of a mammogram can be treated as an instance and a whole mammogram is treated as a bag of instances. The whole mammogram classification problem can then be thought of as a standard MIL problem. Due to the great representation power of deep features~\cite{greenspan2016guest,zhu2016adversarial,zhu2016co,zhu2015hierarchical}, combining MIL with deep neural networks is an emerging topic. Yan et al. used a deep MIL to find discriminative patches for body part recognition~\cite{yan2016multi}. Patch-based CNN added a new layer after the last layer of deep MIL to learn the fusion model for multi-instance predictions~\cite{hou2015patch}. Shen et al. employed two-stage training to learn the deep multi-instance networks for pre-detected lung nodule classification~\cite{shen2016learning}.
The above approaches used max pooling to model the general multi-instance assumption, which only considers the patch of max probability. In this paper, more effective task-related deep multi-instance models with end-to-end training are explored for whole mammogram classification. We investigate three different schemes, i.e., max pooling, label assignment, and sparsity, to perform deep MIL for the whole mammogram classification task. \begin{figure}[t] \setlength{\abovecaptionskip}{0.cm} \setlength{\belowcaptionskip}{-0.cm} \begin{center} \begin{minipage}{0.8\linewidth} \centerline{\includegraphics[width=\textwidth]{framework3new.png}} \end{minipage} \caption{The framework of whole mammogram classification. First, we use Otsu's segmentation to remove the background and resize the mammogram to $227\times227$. Second, the deep MIL accepts the resized mammogram as input to the convolutional layers. Here we use the convolutional layers in AlexNet~\cite{krizhevsky2012imagenet}. Third, logistic regression with weight sharing over different patches is employed to compute the malignant probability of each position from the convolutional neural network (CNN) feature maps of high channel dimension. Then the responses of the instances/patches are ranked.
Lastly, the learning loss is calculated using max pooling loss, label assignment, or sparsity loss for the three different schemes.} \label{fig:framework} \end{center} \end{figure} \begin{figure}[t] \setlength{\abovecaptionskip}{0.cm} \setlength{\belowcaptionskip}{-0.cm} \begin{center} \begin{minipage}{0.245\linewidth} \centerline{\includegraphics[width=0.9\linewidth]{masswidth.png}} \center{(a)} \end{minipage} \begin{minipage}{0.245\linewidth} \centerline{\includegraphics[width=0.9\linewidth]{massheight.png}} \center{(b)} \end{minipage} \begin{minipage}{0.245\linewidth} \centerline{\includegraphics[width=0.9\linewidth]{mammh.png}} \center{(c)} \end{minipage} \begin{minipage}{0.245\linewidth} \centerline{\includegraphics[width=0.9\linewidth]{mammw.png}} \center{(d)} \end{minipage} \caption{Histograms of mass width (a) and height (b), mammogram width (c) and height (d). Compared to the size of whole mammogram ($1,474 \times 3,086$ on average after cropping), the mass of average size ($329 \times 325$) is tiny, and takes about 2\% of a whole mammogram. } \label{fig:mass} \end{center} \end{figure} The framework for our proposed end-to-end trained deep MIL for mammogram classification is shown in Fig.~\ref{fig:framework}. To fully explore the power of deep MIL, we convert the traditional MIL assumption into a label assignment problem. As a mass typically occupies only about 2\% of a whole mammogram (see Fig.~\ref{fig:mass}), we further propose sparse deep MIL. The proposed deep multi-instance networks are shown to provide robust performance for whole mammogram classification on the INbreast dataset~\cite{moreira2012inbreast}. \section{Deep MIL for Whole Mammogram Mass Classification}\label{sec:dmlmamm} Unlike other deep multi-instance networks~\cite{yan2016multi,hou2015patch}, we use a CNN to efficiently obtain features of all patches (instances) at the same time.
Given an image $\bm{I}$, we obtain a much smaller feature map $\bm{F}$ with $N_c$ channels after multiple convolutional layers and max pooling layers. The $(\bm{F})_{i,j,:}$ represents deep CNN features for a patch $\bm{Q}_{i,j}$ in $\bm{I}$, where $i$ and $j$ represent the pixel row and column indices, respectively. The goal of our work is to predict whether a whole mammogram contains a malignant mass (BI-RADS $\in \{4, 5, 6\}$ as positive) or not, which is a standard binary classification problem. We add a logistic regression with weights shared across all the pixel positions following $\bm{F}$, and an element-wise sigmoid activation function is applied to the output. Specifically, the malignant probability of feature space's pixel $(i,j)$ is \begin{equation} \label{equ:cnn} r_{i,j} = \text{sigmoid}(\bm{a} \cdot \bm{F}_{i,j,:} + b), \end{equation} where $\bm{a}$ is the weights in logistic regression, $b$ is the bias, and $\cdot$ is the inner product of the two vectors $\bm{a}$ and $\bm{F}_{i,j,:}$. The $\bm{a}$ and $b$ are shared for different pixel positions $i,j$. We can combine $r_{i,j}$ into a matrix $\bm{r} = (r_{i,j})$ of range $[0, 1]$ denoting the probabilities of patches being malignant masses. The $\bm{r}$ can be flattened into a one-dimensional vector as $\bm{r} = (r_{1}, r_{2}, ..., r_{m})$ corresponding to flattened patches $(\bm{Q}_{1}, \bm{Q}_{2}, ..., \bm{Q}_{m})$, where $m$ is the number of patches. \subsection{Max Pooling-based Multi-instance Learning}\label{sec:maxpool} The general multi-instance assumption is that if there exists an instance that is positive, the bag is positive~\cite{dietterich1997solving}. The bag is negative if and only if all instances are negative. For whole mammogram classification, the equivalent scenario is that if there exists a malignant mass, the mammogram $\bm{I}$ should be classified as positive. Likewise, a negative mammogram $\bm{I}$ should not contain any malignant masses.
If we treat each patch $\bm{Q}_{i}$ of $\bm{I}$ as an instance, the whole mammogram classification is a standard multi-instance task. For negative mammograms, we expect all the $r_i$ to be close to 0. For positive mammograms, at least one $r_i$ should be close to 1. Thus, it is natural to use the maximum component of $\bm{r}$ as the malignant probability of the mammogram $\bm{I}$ \begin{equation} \label{equ:max} p(y=1|\bm{I}, \bm{\theta}) = \max\{ r_1, r_2, ..., r_m\}, \end{equation} where $\bm{\theta}$ denotes the weights in deep networks. If we sort $\bm{r}$ first in descending order as illustrated in Fig.~\ref{fig:framework}, the malignant probability of the whole mammogram $\bm{I}$ is the first element of ranked $\bm{r}$ as \begin{equation} \label{equ:sortmax} \begin{aligned} &\{{r^\prime}_1, {r^\prime}_2, ..., {r^\prime}_m\} = \text{sort} (\{ r_1, r_2, ..., r_m\}), \\ &p(y=1|\bm{I}, \bm{\theta}) = {r^\prime}_1, \quad\text{and}\quad p(y=0|\bm{I}, \bm{\theta}) = 1-{r^\prime}_1, \end{aligned} \end{equation} where $\bm{r}^\prime = ({r^\prime}_1, {r^\prime}_2, ..., {r^\prime}_m)$ is $\bm{r}$ ranked in descending order. The cross entropy-based cost function can be defined as \begin{equation} \label{equ:maxloss} \mathcal{L}_{maxpooling} = -\frac{1}{N}\sum_{n=1}^{N} \log(p(y_n | \bm{I}_n, \bm{\theta})) + \frac{\lambda}{2} \|\bm{\theta}\|^2, \end{equation} where $N$ is the total number of mammograms, $y_n \in \{0,1\}$ is the true label of malignancy for mammogram $\bm{I}_n$, and $\lambda$ is the regularization coefficient that controls model complexity. One disadvantage of max pooling-based MIL is that it only considers the patch ${\bm{Q}^\prime}_1$, and does not exploit information from other patches. A more powerful framework should add task-related priors, such as the sparsity of masses in a whole mammogram, into the general multi-instance assumption and explore more patches for training.
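For concreteness, Eq.~(\ref{equ:maxloss}) can be sketched in a few lines of NumPy. The patch probabilities below are toy values, and the weight-decay term is represented by a precomputed $\|\bm{\theta}\|^2$; none of this is tied to the actual network.

```python
import numpy as np

# Max pooling MIL loss: each bag (mammogram) is scored by its top-ranked
# patch probability; rs[n] holds the patch probabilities r of mammogram n
# and ys[n] its bag label. theta_sq_norm stands for ||theta||^2.
def max_pooling_loss(rs, ys, theta_sq_norm=0.0, lam=1e-5):
    losses = []
    for r, y in zip(rs, ys):
        p1 = np.max(r)                      # p(y=1 | I, theta)
        p = p1 if y == 1 else 1.0 - p1      # probability of the true label
        losses.append(-np.log(p))
    return np.mean(losses) + 0.5 * lam * theta_sq_norm
```

Only the top-ranked patch ${\bm{Q}^\prime}_1$ contributes to each bag's gradient, which is precisely the limitation discussed above.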
\subsection{Label Assignment-based Multi-instance Learning}\label{sec:labelassign} For conventional classification tasks, we assign a label to each data point. In the MIL scheme, if we consider each instance (patch) $\bm{Q}_i$ as a data point for classification, we can convert the multi-instance learning problem into a label assignment problem. After we rank the malignant probabilities $\bm{r} = (r_{1}, r_{2}, ..., r_{m})$ for all the instances (patches) in a whole mammogram $\bm{I}$ using the first equation in Eq.~\ref{equ:sortmax}, the first few ${r^\prime}_i$ should be consistent with the label of the whole mammogram as previously mentioned, while the remaining patches (instances) should be negative. Instead of adopting the general MIL assumption that only considers the ${\bm{Q}^\prime}_1$ (patch of malignant probability ${r^\prime}_1$), we assume that 1) patches of the first $k$ largest malignant probabilities $\{{r^\prime}_1, {r^\prime}_2, ..., {r^\prime}_k\}$ should be assigned the same class label as that of the whole mammogram, and 2) the remaining patches should be labeled as negative in the label assignment-based MIL. After the ranking/sorting layer using the first equation in Eq.~\ref{equ:sortmax}, we can obtain the malignant probability for each patch \begin{equation} \label{equ:ppatch} \begin{aligned} p(y=1 | {\bm{Q}^\prime}_i, \bm{\theta}) = {r^\prime}_i, \quad\text{and}\quad p(y=0 | {\bm{Q}^\prime}_i, \bm{\theta}) = 1-{r^\prime}_i. \end{aligned} \end{equation} The cross entropy loss function of the label assignment-based MIL can be defined as \begin{equation} \label{equ:weightedlabelloss} \begin{aligned} \mathcal{L}_{labelassign.} = &-\frac{1}{mN}\sum_{n=1}^{N} \bigg ( \sum_{j=1}^{k} {\log(p(y_n | {\bm{Q}^\prime}_j, \bm{\theta}))}+ \\&\sum_{j=k+1}^{m} {\log(p(y=0 | {\bm{Q}^\prime}_j, \bm{\theta}))}\bigg )+\frac{\lambda}{2} \|\bm{\theta}\|^2.
\end{aligned} \end{equation} One advantage of the label assignment-based MIL is that it explores all the patches to train the model. Essentially, it acts as a kind of data augmentation, which is an effective technique for training deep networks when the training data is scarce. From the sparsity perspective, the optimization problem of label assignment-based MIL is exactly a $k$-sparse problem for the positive data points, where we expect $\{{r^\prime}_1, {r^\prime}_2, ..., {r^\prime}_k\}$ to be 1 and $\{{r^\prime}_{k+1}, {r^\prime}_{k+2}, ..., {r^\prime}_m\}$ to be 0. The disadvantage of label assignment-based MIL is that it is hard to estimate the hyper-parameter $k$. Thus, a relaxed assumption for the MIL or an adaptive way to estimate the hyper-parameter $k$ is preferred. \subsection{Sparse Multi-instance Learning}\label{sec:sparse} From the mass distribution, the mass typically comprises about 2\% of the whole mammogram on average (Fig.~\ref{fig:mass}), which means the mass region is quite sparse in the whole mammogram. It is straightforward to convert the mass sparsity to the malignant mass sparsity, which implies that $\{{r^\prime}_1, {r^\prime}_2, ..., {r^\prime}_m\}$ is sparse in the whole mammogram classification problem. The sparsity constraint means we expect the malignant probabilities ${r^\prime}_i$ of most patches to be 0 or close to 0, which is equivalent to the second assumption in the label assignment-based MIL. Analogously, we expect ${r^\prime}_1$ to be indicative of the true label of mammogram $\bm{I}$.
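The $k$-sparse behavior just described is easy to see in code. Below is a minimal NumPy sketch of the label-assignment loss of Eq.~(\ref{equ:weightedlabelloss}) for a single mammogram, with toy probability values; $\bm{r}$ is assumed already sorted in descending order and the weight-decay term is omitted.

```python
import numpy as np

# Label-assignment MIL loss for one mammogram: the k top-ranked patches
# inherit the bag label y; the remaining m - k patches are labeled negative.
# The ||theta||^2 regularizer of the full loss is omitted here.
def label_assignment_loss(r_sorted, y, k):
    m = r_sorted.size
    top = r_sorted[:k] if y == 1 else 1.0 - r_sorted[:k]
    rest = 1.0 - r_sorted[k:]               # patches k+1..m treated as negative
    return -(np.sum(np.log(top)) + np.sum(np.log(rest))) / m
```

For a positive bag this pushes the first $k$ ranked probabilities toward 1 and the rest toward 0, i.e. exactly the $k$-sparse behavior noted above.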
After the above discussion, the loss function of the sparse MIL problem can be defined as \begin{equation} \label{equ:weightedsparseloss} \mathcal{L}_{sparse} = \frac{1}{N}\sum_{n=1}^{N} \big ( -\log(p(y_n | \bm{I}_n, \bm{\theta})) + \mu \|\bm{r}^{\prime}_n\|_1 \big ) +\frac{\lambda}{2} \|\bm{\theta}\|^2, \end{equation} where $p(y_n | \bm{I}_n, \bm{\theta})$ can be calculated by Eq.~\ref{equ:sortmax}, $\bm{r}^{\prime}_n = ({r^\prime}_1, {r^\prime}_2, ..., {r^\prime}_m)$ for mammogram $\bm{I}_n$, $\|\cdot\|_1$ denotes the $\mathcal{L}_1$ norm, and $\mu$ is the sparsity factor, which trades off the sparsity assumption against the importance of patch ${\bm{Q}^\prime}_1$. From the discussion of label assignment-based MIL, that learning is an exact $k$-sparse problem, which can here be relaxed to an $\mathcal{L}_1$ constraint. One advantage of sparse MIL over label assignment-based MIL is that it does not require assigning a label to each patch, which is hard to do for patches whose probabilities are neither very large nor very small; the sparse MIL instead considers the overall statistical property of $\bm{r}$. Another advantage of sparse MIL is that it uses different weights for the general MIL assumption (the first loss term) and the label distribution within a mammogram (the second loss term), which can be considered a trade-off between max pooling-based MIL (slack assumption) and label assignment-based MIL (hard assumption). \section{Experiments}\label{sec:exp} We validate the proposed models on the most frequently used mammographic mass classification dataset, the INbreast dataset~\cite{moreira2012inbreast}, as the mammograms in other datasets, such as the DDSM dataset~\cite{bowyer1996digital}, are of low quality. The INbreast dataset contains 410 mammograms, of which 100 contain malignant masses. These 100 mammograms with malignant masses are defined as positive. For fair comparison, we also use 5-fold cross validation to evaluate model performance as in~\cite{dhungel2016automated}.
For each testing fold, we use three folds for training and one fold for validation to tune hyper-parameters. The performance is reported as the average of the five testing results obtained from cross-validation. We employ several techniques to augment our data. For each training epoch, we randomly flip the mammograms horizontally, shift them within 0.1 of the image size horizontally and vertically, rotate them within 45 degrees, and zero out a random $50 \times 50$ square box. In our experiments, this data augmentation is essential for training the deep networks. For the CNN network structure, we use AlexNet with the fully connected layers removed~\cite{krizhevsky2012imagenet}. Through the CNN, a mammogram of size $227 \times 227$ becomes 256 feature maps of size $6 \times 6$. We then follow the steps in Sec.~\ref{sec:dmlmamm} to perform MIL. Here we employ weights pretrained on ImageNet due to the scarcity of data. We use Adam optimization with a learning rate of $5 \times 10^{-5}$ to train the models~\cite{ba2015adam}. The $\lambda$ for max pooling-based and label assignment-based MIL is $1 \times 10^{-5}$. The $\lambda$ and $\mu$ for sparse MIL are $5 \times 10^{-6}$ and $1 \times 10^{-5}$, respectively. We first compare our methods to previous models validated on the DDSM and INbreast datasets in Table~\ref{tab:inbreast}. Previous methods based on hand-crafted features require manually annotated detection bounding boxes or segmentation ground truth even at test time, denoted as Manual~\cite{ball2007digital,varela2006use,domingues2012inbreast}. Feat. denotes that hand-crafted features are required. Pretrained CNN uses two CNNs to detect the mass region and segment the mass, followed by a third CNN that performs mass classification on the detected ROI, which requires hand-crafted features to pretrain the network and needs multi-stage training~\cite{dhungel2016automated}. Pretrained CNN+Random Forest further employs a random forest and obtains a 7\% improvement.
These methods either rely on manual annotations, hand-crafted features, or multi-stage training, while our methods are fully automated, do not require hand-crafted features or extra annotations even on the training set, and can be trained in an end-to-end manner. \begin{table}[t] \fontsize{9pt}{10pt}\selectfont\centering \caption{Accuracy Comparisons of the proposed deep MILs and related methods on test sets.}\label{tab:inbreast} \begin{tabular}{c|c|c|c|c} \hlinew{0.9pt} Methodology&Dataset&Set-up&Accu.&AUC\\ \hlinew{0.7pt} \tabincell{c}{Ball et al. \cite{ball2007digital}}&DDSM&Manual+feat.&0.87&N/A\\ \hline \tabincell{c}{Varela et al. \cite{varela2006use}}&DDSM&Manual+feat.&0.81&N/A\\ \hline \tabincell{c}{Domingues et al. \cite{domingues2012inbreast}}&INbr.&Manual+feat.&0.89&N/A\\ \hline \tabincell{c}{Pretrained CNN \cite{dhungel2016automated}}&INbr.&Auto.+feat.&0.84$\pm{0.04}$&0.69$\pm{0.10}$\\ \hline \tabincell{c}{Pretrained CNN+Random Forest \cite{dhungel2016automated}}&INbr.&Auto.+feat.&$\bf{0.91\pm{0.02}}$&0.76$\pm{0.23}$\\ \hlinew{0.9pt} AlexNet &INbr.&Auto.&0.81$\pm{0.02}$&0.79$\pm{0.03}$\\ \hline \tabincell{c}{AlexNet+Max Pooling MIL} &INbr.&Auto.&0.85$\pm{0.03}$&0.83$\pm{0.05}$\\ \hline \tabincell{c}{AlexNet+Label Assign. MIL} &INbr.&Auto.&0.86$\pm{0.02}$&0.84$\pm{0.04}$\\ \hline \tabincell{c}{AlexNet+Sparse MIL} &INbr.&Auto.&0.90$\pm{0.02}$&$\bf{0.89\pm{0.04}}$\\ \hlinew{0.9pt} \end{tabular} \end{table} The max pooling-based deep MIL obtains better performance than the pretrained CNN, which uses three different CNNs and detection/segmentation annotations in the training set. This shows the superiority of our end-to-end trained deep MIL for whole mammogram classification. According to the accuracy metric, the sparse deep MIL is better than the label assignment-based MIL, which is better than the max pooling-based MIL.
This result is consistent with the previous discussion: the sparsity assumption avoids the hard constraints of the label assignment assumption, which in turn exploits all the patches and is more effective than the max pooling assumption. Our sparse deep MIL achieves accuracy competitive with the random forest-based pretrained CNN, and a much higher AUC than previous work, which shows that our method is more robust. The main reasons for the robust results of our models are as follows. First, data augmentation is an important technique for enlarging scarce training datasets and proves useful here. Second, transfer learning with weights pretrained on ImageNet is effective for the INbreast dataset. Third, our models fully explore all the patches to train the deep networks, thereby eliminating the possibility of overlooking malignant patches by only considering a subset of patches. This is a distinct advantage over previous networks employing several stages. \begin{figure}[!t] \setlength{\abovecaptionskip}{0.cm} \setlength{\belowcaptionskip}{-0.cm} \begin{center} \begin{minipage}{0.15\linewidth} \centerline{\includegraphics[width=2.5cm]{22579730origgt.png}} \center{(a)} \end{minipage} \hspace{1cm} \begin{minipage}{0.15\linewidth} \centerline{\includegraphics[width=2.5cm]{22614127origgt.png}} \center{(b)} \end{minipage} \hspace{1cm} \begin{minipage}{0.15\linewidth} \centerline{\includegraphics[width=2.5cm]{22614379gt.png}} \center{(c)} \end{minipage} \hspace{1cm} \begin{minipage}{0.15\linewidth} \centerline{\includegraphics[width=2.5cm]{20587664gt.png}} \center{(d)} \end{minipage} \vfill \begin{minipage}{0.15\linewidth} \centerline{\includegraphics[width=2.5cm]{22579730max.jpg}} \end{minipage} \hspace{1cm} \begin{minipage}{0.15\linewidth} \centerline{\includegraphics[width=2.5cm]{22614127max.jpg}} \end{minipage} \hspace{1cm} \begin{minipage}{0.15\linewidth} \centerline{\includegraphics[width=2.5cm]{22614379max.png}} \end{minipage} \hspace{1cm}
\begin{minipage}{0.15\linewidth} \centerline{\includegraphics[width=2.5cm]{20587664max.png}} \end{minipage} \vfill \begin{minipage}{0.15\linewidth} \centerline{\includegraphics[width=2.6cm]{22579730labelassign.jpg}} \end{minipage} \hspace{1cm} \begin{minipage}{0.15\linewidth} \centerline{\includegraphics[width=2.6cm]{22614127labelassign.jpg}} \end{minipage} \hspace{1cm} \begin{minipage}{0.15\linewidth} \centerline{\includegraphics[width=2.6cm]{22614379labelassign.png}} \end{minipage} \hspace{1cm} \begin{minipage}{0.15\linewidth} \centerline{\includegraphics[width=2.6cm]{20587664labelassign.png}} \end{minipage} \vfill \begin{minipage}{0.15\linewidth} \centerline{\includegraphics[width=2.6cm]{22579730sparse.jpg}} \end{minipage} \hspace{1cm} \begin{minipage}{0.15\linewidth} \centerline{\includegraphics[width=2.6cm]{22614127sparse.jpg}} \end{minipage} \hspace{1cm} \begin{minipage}{0.15\linewidth} \centerline{\includegraphics[width=2.6cm]{22614379sparse.png}} \end{minipage} \hspace{1cm} \begin{minipage}{0.15\linewidth} \centerline{\includegraphics[width=2.6cm]{20587664sparse.png}} \end{minipage} \caption{Visualization of predicted malignant probabilities for instances/patches in four resized mammograms. The first row shows the resized mammograms; the red rectangles are mass regions from the dataset annotations. The color images from the second row to the last row show the per-patch malignant probabilities predicted by the logistic regression layer for (a) to (d), respectively. Max pooling-based, label assignment-based, and sparse deep MIL results are in the second, third, and fourth rows, respectively.} \label{fig:visresponse} \end{center} \end{figure} To further understand our deep MIL, we visualize in Fig.~\ref{fig:visresponse} the responses of the logistic regression layer, which represent the malignant probability of each patch, for four mammograms in the test set.
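As a side note on how such per-patch probability maps can be rendered over the input image (assuming the $6 \times 6$ grid of patch probabilities produced by our network; the helper below is an illustrative sketch, not our plotting code), nearest-neighbour upsampling to the input resolution suffices:

```python
import numpy as np

def prob_heatmap(patch_probs, out_hw=(227, 227)):
    """Nearest-neighbour upsample of a 6x6 patch probability grid to the
    input image size, suitable for overlaying on the mammogram."""
    h, w = out_hw
    grid = np.asarray(patch_probs).reshape(6, 6)
    rows = np.arange(h) * 6 // h   # map each pixel row to a grid row
    cols = np.arange(w) * 6 // w   # map each pixel column to a grid column
    return grid[np.ix_(rows, cols)]
```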
We can see that the deep MIL learns not only the prediction for the whole mammogram, but also the predictions for malignant patches within it. Our models are able to localize the mass region of the whole mammogram without any explicit bounding box or segmentation ground truth annotations in the training data. The max pooling-based deep multi-instance network misses some malignant patches in (a), (c) and (d). The possible reason is that it only considers the patch with the maximum malignant probability during training, so the model is not well trained for the remaining patches. The label assignment-based deep MIL mis-classifies some patches in (d). The possible reason is that the model uses a constant $k$ for all the mammograms, which causes mis-classifications for small masses. One potential application of our work is that these deep MIL networks could be used to perform weak mass annotation automatically, providing evidence for diagnosis. \section{Conclusion}\label{sec:con} In this paper, we propose end-to-end trained deep MIL for whole mammogram classification. Different from previous work using segmentation or detection annotations, we conduct mass classification based on the whole mammogram directly. We convert the general MIL assumption into a label assignment problem after ranking. Due to the sparsity of masses, sparse MIL is used for whole mammogram classification. Experimental results demonstrate more robust performance even without detection or segmentation annotations during training. In future work, we plan to extend the current work by: 1) incorporating multi-scale modeling, such as spatial pyramids, to further improve whole mammogram classification, 2) employing the deep MIL to perform annotation or provide potential malignant patches to assist diagnoses, and 3) applying our methods to large datasets when they become available. \bibliographystyle{splncs03} \small{
\section{Introduction} \label{sec:intro} Whereas traditional approaches in recommender systems infer user preferences solely from historical user-item interaction data~\cite{he2017neural, wang2019neural, wang2019explainable}, conversational recommender systems (CRS) elicit user preferences through interactive question answering~\cite{salton1971smart, zhang2018towards, radlinski2017theoretical, chu1998evidential}. The traditional approaches are insufficient for adapting to the user's ongoing information need, since the user's preference can deviate over time from historical data. Moreover, users may look for different items under different situations. For example, a user who is looking for restaurant recommendations may seek a specific type of food at a particular time, or may weigh other factors, such as wait time and availability of parking, differently when making her choices. A CRS solution (often referred to as an ``agent'') can profile the user's current preference through the feedback collected from its strategically planned questions~\cite{young2013pomdp, mccarthy2010experience}. Earlier forms of CRS can be traced back to interactive recommender systems~\cite{HE20169,Chen_Dai_Cai_Zhang_Wang_Tang_Zhang_Yu_2019, wu-etal-2019-proactive} and critiquing-based recommender systems~\cite{tversky1993context,tou1982rabbit,smyth2003analysis}. In the CRS setting proposed by Christakopoulou et al.~\cite{christakopoulou2016towards}, the agent searches for the target item using a multi-armed bandit algorithm solely in the item space, i.e., it keeps recommending different items. Zhang et al.~\cite{zhang2018towards} expanded the domain to the attribute space so that the agent needs to predict two things: what questions to ask and then which items to recommend, i.e., a series of questions followed by a recommendation. Li et al.~\cite{li2018towards} further expanded the setting by deciding between asking a question or making a recommendation at each round of interaction.
As a result, the agent can ask a series of questions about item attributes and make several rounds of recommendations in an interleaved manner. This is often referred to as the multi-turn CRS setting and is also the focus of our work. \begin{figure} \centering \includegraphics[width = \linewidth]{figs/paper-interaction-tree.pdf} \caption{Overview of the user-item interaction tree in \textsc{FacT-CRS}{}. We learn shared embeddings of user-item interactions using the interaction tree and use them to ask questions and make recommendations. } \label{fig:example} \end{figure} Current multi-turn CRS solutions are dominated by deep reinforcement learning algorithms~\cite{earlei2020estimation, unicorndeng2021unified,scprlei2020interactive,fpanxu2021adapting}. Although encouraging empirical performance has been reported in the literature, we question whether a simpler alternative exists that can achieve comparable or even better performance, as deep models are often criticized for their lack of interpretability and their high demand for training data. Intuitively, a decision tree is a natural choice for constructing the questions in multi-turn CRS~\cite{tao2019fact,zhou2011functional}, where each question narrows down the candidate set of items. However, naively using a decision tree to partition the item space for CRS is infeasible, because different users can view the same item differently and thus provide distinct answers to the same question even when they are looking for the same item. Figure~\ref{fig:example} illustrates this issue with an example in restaurant recommendation. From the observed user-item interactions, we notice that, for the same burger shop, one user describes it by its ``low price'' aspect, while another user pays no attention to its price but focuses on its ``options'' and ``drive-through'' service.
As the partition of the item space is exclusive in a decision tree, i.e., the items are separated into non-overlapping groups at the same level of tree nodes, if we used the question ``price'' to locate the restaurant in our interactions with the two users, we would surely fail to find the burger shop for at least one of them. The problem described above motivates us to instead \emph{partition the user-item interactions}: the matching between a user-item pair during the course of CRS can be characterized by how the user would describe the item. As a result, a path on the decision tree groups common descriptions of items from different users, and different descriptions of the same item then result in different paths on the tree. As shown in Figure \ref{fig:example}, the burger shop is now placed in different tree nodes to reflect the fact that different users would describe it differently. This sheds light on the possibility of using a decision tree for multi-turn CRS. In this paper, we propose a rule-based solution to multi-turn CRS, namely \textsc{FacT-CRS}{} (Factorization Tree based Conversational Recommender System). We use user-item interactions to guide the construction of a set of decision trees, which will be referred to as user-item interaction trees. Whereas existing CRS methods~\cite{earlei2020estimation, unicorndeng2021unified, scprlei2020interactive, fpanxu2021adapting} rely on user embeddings to make recommendations, we propose to construct user-item interaction embeddings. Three main challenges remain to be addressed to complete the solution. Firstly, \textit{how to rank the items?} Clearly, a decision tree naturally forms the questions. However, as shown in Figure~\ref{fig:example}, each node in the decision tree may contain a varying number of items. When the number of candidate items is more than what we can recommend in a turn, we need to decide the ranking of items before making a recommendation.
We extend the concept of factorization tree~\cite{tao2019fact} to learn embeddings for user-item interactions and all the candidate items while constructing the decision trees. For all observed user-item interaction pairs located in a tree node (i.e., the users provided the same answers to the questions asked so far about their intended items), an embedding vector is assigned to match against the item embeddings. During training, the embedding vectors are learnt such that the matched items are ranked higher for the associated user-item pairs than unmatched ones. At inference time, when we decide to make a recommendation at a tree node, we use the corresponding interaction embedding to rank all candidate items. Secondly, \textit{when to make a recommendation?} It is desirable that the agent makes a recommendation as soon as it is confident, before the user's patience wears out. This motivates us to make a recommendation before exhausting the questions on the path of the interaction tree. An appropriate turn to make a recommendation is when 1) asking further questions does not provide much information gain, and 2) the number of candidate items is small enough. Based on these considerations, we propose two strategies for making early recommendations. When building the interaction tree, we keep track of how much information gain is achieved using the Gini Index, and we stop splitting a node if the information gain is below a threshold. At inference time, we can make recommendations before reaching a leaf node, when the number of unique items associated with the node is below a threshold. Thirdly, \textit{how to handle a user's negative feedback about the recommended items?} Online feedback is an important aspect of multi-turn CRS, where the agent should improve its actions during a conversation, not only when the user answers the planned question but also when he/she rejects a set of recommendations.
When the agent encounters a rejection, it becomes apparent that 1) the rejected items are not what the user is looking for, and 2) the target item is still in the lower part of the ranked list. Based on this insight, we design an adaptive feedback module to refine the inferred interaction embeddings based on the rejected items before moving to the next round of interaction. To evaluate the effectiveness of \textsc{FacT-CRS}{}, we conducted extensive experiments on three benchmark CRS datasets: LastFM, BookRec and MovieLens. The experiment results demonstrate that \textsc{FacT-CRS}{} performs significantly better at finding the target items for users in fewer turns. Extensive analysis shows that \textsc{FacT-CRS}{} improves the conversation by narrowing down the candidate items, taking user-item interactions into account, and adapting to online feedback effectively. \section{Related Works} \label{sec:realted} CRS leverages multiple turns of questions and answers to obtain user preferences in a dynamic manner. There have been mainly four lines of research in CRS. In the item-based approaches, the agent keeps making recommendations based on users' immediate feedback. Christakopoulou et al.~\cite{christakopoulou2016towards} proposed this line of research, which marked the inception of CRS. They employed multi-armed bandit models to acquire the users' feedback on individual items, such that the model updates itself at each turn. Later, in the question-driven setting, the domain was expanded so that the system needed to predict two things: 1) what questions about item attributes to ask, and 2) which items to recommend. Note that in this setting the system only recommends at the end of the conversation. Zhang et al.~\cite{zhang2018towards} proposed a model consisting of three stages. In the initiation stage, the user initiates the conversation. In the conversation stage, the system asks about the user's preferences on attributes of items.
And in the final stage, the system recommends the items. Zou et al.~\cite{zou2020towards} proposed Qrec, which asks a predetermined number of questions and then makes a recommendation once. In Qrec, they chose the most uncertain attribute to ask about (the attribute for which the system has the least confidence between positive and negative feedback). Christakopoulou et al.~\cite{christakopoulou2018q} later studied the setting where the user can provide multiple answers (for example: ``choose all the attributes that you like.''). There is another line of CRS research that synthesizes natural language. This adds personalized responses to the conversation and is applicable to an even broader scope. Usually, this approach involves a natural language understanding module and a natural language generation module; it is beyond the scope of our work. In this paper, we focus on the multi-turn setting of CRS, where an agent needs to choose between asking a question or making a recommendation in each turn of the conversation. Lei et al.~\cite{earlei2020estimation} expanded the single-turn recommendation CRS to the multi-turn setting, where multiple questions and recommendations can be made in one conversation until the user accepts the recommendation or the conversation ends. Reinforcement learning (RL) has been widely adopted in multi-turn CRS, which formulates the CRS problem as a Markov Decision Process (MDP)~\cite{pei2019value, shani2005mdp, zhao2018recommendations}. Recent works on RL-based interactive recommendation~\cite{wang2020kerl, xin2020self, zhang2019text, zhou2020interactive, zou2020pseudo} have been shown to effectively recommend items by modeling users' dynamic preferences. The objective of these methods is to learn an effective RL policy to decide which items to recommend.
Sun et al.~\cite{sun2018conversational} used a belief tracker based on an LSTM model to determine when to recommend, but their model was not able to handle the case where the user rejects a recommendation. Lei et al.~\cite{earlei2020estimation} utilized three different modules: the estimation module predicts user preferences on items and attributes; the action module decides whether to ask about attributes or to recommend items; and the reflection module updates the model when there is negative feedback. A dynamic preference model was introduced by Xu et al.~\cite{fpanxu2021adapting}, who proposed a gating mechanism to include both positive and negative user feedback. Deng et al.~\cite{unicorndeng2021unified} combined the question selection and recommender modules. They proposed two heuristics to reduce the candidate action space by pre-selecting attributes and items in each turn. Zhang et al.~\cite{zhang2020conversational} used a bandit algorithm to select attributes and a heuristic to decide whether to ask questions about attributes or make recommendations. Li et al.~\cite{li2021seamlessly} unified attributes and items in the same space in a multi-armed bandit setting to determine the questions, and used another bandit model to determine when to recommend. Li et al.~\cite{li2018towards} used a deep RL-based model to decide when to make a recommendation or which question to ask. Chu et al.~\cite{chu2022meta} developed a meta reinforcement learning based solution to handle new users in CRS. Although RL-based models dominate modern CRS solutions, in this paper we explore a simpler yet effective alternative based on decision trees. The RL methods provide strong baselines against which to compare the recommendation quality of our work. \section{Methodology} \label{sec:method} In this section, we first describe the problem of multi-turn conversational recommendation.
Our work is motivated by finding a simple decision tree-based model to solve the challenges in multi-turn conversational recommendation. We first explain how a decision tree can effectively model the user-item interactions to capture the potential matching between a user and an item, which translates into a set of questions to filter and rank items for recommendation. Then we describe how to adapt the user-item interaction tree in \textsc{FacT-CRS}{} to effectively address the challenges in CRS. \subsection{Preliminary} \textbf{Multi-turn CRS.} In this paper, we study the multi-turn conversational recommendation problem, which has been widely adopted because of its realistic setting~\cite{earlei2020estimation, unicorndeng2021unified, fpanxu2021adapting}. In this setting, the agent takes multiple turns to ask questions regarding item attributes or to make recommendations (e.g., movies, restaurants, etc.). We denote the set of items as $\mathcal{I} = \{i_1, i_2, \dots, i_n\}$, and the set of users as $\mathcal{U} = \{u_1, u_2, \dots, u_m\}$. The set of attributes $\mathcal{F} = \{f_1, f_2, \dots, f_p\}$ is used to describe the items. Each item is associated with a set of predefined attributes $\mathcal{F}_i$. Suppose that a user $u \in \mathcal{U}$ is in a conversation with the CRS agent and her target item is $i^+ \in \mathcal{I}$. Each conversation consists of multiple turns. At each turn $t$, the agent needs to decide whether to ask a question or to make a recommendation. Depending on the agent's action, each turn can either be 1) a question asked by the agent followed by the user's answer, or 2) top-K recommendations followed by the user's acceptance or rejection. In the question-answer turn, the agent asks about an attribute $f_t \in \mathcal{F}$, i.e., ``do you prefer attribute $f_t$?'' In the recommendation turn, the user accepts the recommendation if the target item $i^+$ is contained in the top-K recommendations.
Otherwise, the user rejects the recommendation. The conversation is considered successful if the user accepts a recommendation; it fails if the agent reaches the maximum turn limit. The goal of CRS is to successfully recommend items to the user in the fewest number of turns. \textbf{User-item interaction.} We use the pair $(u, i)$ to denote that user $u$ interacted with item $i$ in history. We use $\mathcal{R}$ to denote the set of user-item interactions, and let the number of interactions be $|\mathcal{R}| = q$. Each user-item interaction has content $\r_{u, i}\in \{0,1\}^{p}$, where $(u,i) \in \mathcal{R}$. Here, 1 denotes that the attribute is mentioned in the interaction between the user and the item (e.g., the user uses this attribute to describe the item), and 0 denotes that the attribute is not mentioned. An example could be: user $u$ describes item $i$ using the terms $\{f_1, f_2\}$, as shown in Figure \ref{fig:example}. Here, the interaction content $\r_{u,i}$ would be a $p$-dimensional binary vector with 1 for $f_1$ and $f_2$, and 0 for the remaining attributes. \subsection{User-item Interaction Tree} \label{sec:review-tree} The user-item interaction tree (or just ``interaction tree'' for short) is the building block of our \textsc{FacT-CRS}{} model, which we use for asking questions, narrowing down the candidate set of items, and ranking the candidate items. The goal of the interaction tree is to allow us to hierarchically learn the interaction embeddings as a function of the attributes $\mathcal{F}$. We use FacT~\cite{tao2019fact} to associate each interaction and each item with a $d$-dimensional vector. The embeddings of all the interactions and items are respectively denoted as $\S \in \mathbb{R}^{q \times d}$ and $\mathbf{V} \in \mathbb{R}^{n \times d}$. Formally, we associate each interaction $(u, i) \in \mathcal{R}$ with an embedding $\mathbf{s}_{u, i} \in \mathbb{R}^d$ and each item $i$ with an embedding $\mathbf{v}_i \in \mathbb{R}^d$.
Intuitively, we want a) the dot product $\mathbf{s}_{u,i}^T\mathbf{v}_i$ to be maximized and b) $\mathbf{s}_{u,i}^T \mathbf{v}_j$ to be minimized when $j \neq i$. We achieve this using the cross-entropy loss defined in the following, \begin{equation} \label{eq:cross-entropy} \begin{aligned} \mathcal{L}_{C E}(\mathbf{S}, \mathbf{V}, \mathcal{R}) &=-\sum_{(u, i) \in \mathcal{R}}\log \sigma\left(\mathbf{s}_{u, i}^{T} \mathbf{v}_{i}\right) \end{aligned} \end{equation} Here, $\sigma(\cdot)$ is the sigmoid function. To differentiate the relative ordering among candidate items, we also introduce the Bayesian Pairwise Ranking (BPR) objective~\cite{rendle2012bpr}. The intuition is that the items a given user has interacted with should be scored higher than the items the user did not interact with. We sample the negative set $\mathcal{D}_{u, i}^{n e g}$ for interaction $(u, i)$ by choosing items $i_{neg}$ such that $\left(u, i_{neg}\right) \notin \mathcal{R}$. The BPR loss is calculated as in Eq.~\eqref{eq:bpr}, \begin{equation} \label{eq:bpr} \mathcal{B}\left(\mathbf{s}_{u, i}, \mathbf{V}, \mathcal{D}_{u, i}^{n e g}\right)=-\sum_{i_{\text {neg }} \in \mathcal{D}_{u, i}^{\text {neg }}} \log \sigma\left(\mathbf{s}_{u, i}^{T} \mathbf{v}_{i}-\mathbf{s}_{u, i}^{T} \mathbf{v}_{i_{\text {neg }}}\right) \end{equation} Each node $z$ in the interaction tree is associated with a question $f_z \in \mathcal{F}$ (i.e., ``Do you prefer attribute $f_z$?''). Let us consider a subset of all interactions $\mathcal{R}_z \subseteq \mathcal{R}$. Based on the content $\r_{u,i}$ of a user-item interaction $(u, i)$, we can check whether or not a specific attribute $f_l$ is mentioned.
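Written directly in the minimization form (both terms as negative log-likelihoods), the pointwise and pairwise objectives for a single observed interaction can be sketched in numpy as follows (names are illustrative; negative items are assumed to be pre-sampled):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def interaction_loss(s, v_pos, V_neg, lam_bpr=1.0, eps=1e-12):
    """Pointwise + pairwise objective for one interaction, to be minimized.
    s: shared interaction embedding; v_pos: embedding of the interacted item;
    V_neg: rows are embeddings of sampled negative items."""
    ce = -np.log(sigmoid(s @ v_pos) + eps)         # push s^T v_i towards large values
    margins = s @ v_pos - V_neg @ s                # s^T v_i - s^T v_neg per negative
    bpr = -np.sum(np.log(sigmoid(margins) + eps))  # rank the observed item above negatives
    return ce + lam_bpr * bpr
```

The loss is small when the interacted item scores high in absolute terms and above every sampled negative.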
For each attribute $f_l$, we can partition the subset of user-item interactions $\mathcal{R}_z$ into two disjoint sets $\mathcal{R}_{z_{+|f_l}}$ and $\mathcal{R}_{z_{-|f_l}}$ as given in Eq.~\eqref{eq:partition}, \begin{equation} \label{eq:partition} \begin{aligned} &\mathcal{R}_{z_{+|f_l}}=\left\{(u, i)\mid r_{{u, i}_l} = 1 , \r_{u, i} \in \mathcal{R}_{z}\right\} \\ &\mathcal{R}_{z_{-|f_l}}=\left\{(u, i)\mid r_{{u, i}_l} = 0 , \r_{u, i} \in \mathcal{R}_{z}\right\} \\ \end{aligned} \end{equation} All the user-item interactions in the same node share the same description path from the root node and thus will share the same interaction embedding. Let $\mathcal{R}_{z_{+|f_l}}$ and $\mathcal{R}_{z_{-|f_l}}$ be the observed user-item interactions in the resulting partitions, and $\mathbf{s}_{pos}, \mathbf{s}_{neg}$ be the corresponding user-item interaction embeddings in each of the partitions. Note that $\mathbf{s}_{pos}$ is the shared embedding of all the user-item interactions in $ \mathcal{R}_{z_{+|f_l}}$ and $\mathbf{s}_{neg}$ is the shared embedding of all the user-item interactions in $\mathcal{R}_{z_{-|f_l}}$. 
We can find both embeddings by solving the following optimization problem: \begin{equation} \label{eq:split} \begin{aligned} &\mathcal{L}(f_l|\mathcal{R}_z) = \min_{\mathbf{s}_{pos}, \mathbf{s}_{neg}, \mathbf{V}} \mathcal{L}_{CE} (\mathbf{s}_{pos}, \mathbf{V}, \mathcal{R}_{z_{+|f_l}}) + \mathcal{L}_{CE} (\mathbf{s}_{neg}, \mathbf{V}, \mathcal{R}_{z_{-|f_l}} ) \\ &+ \lambda_{BPR} \left(\sum_{(u,i) \in \mathcal{R}_{z_{+|f_l}}}\mathcal{B}\left(\mathbf{s}_{u,i}, \mathbf{V}, D_{u,i}^{neg}\right) + \sum_{(u,i) \in \mathcal{R}_{z_{-|f_l}}}\mathcal{B}\left(\mathbf{s}_{u,i}, \mathbf{V}, D_{u,i}^{neg}\right) \right)\\ &+ \lambda_s \left(\|\mathbf{s}_{pos}\|_2 + \|\mathbf{s}_{neg}\|_2 \right) \end{aligned} \end{equation} where $\lambda_{BPR}$ is a coefficient that controls the trade-off between the cross-entropy loss and the BPR loss, $\|\cdot\|_2$ is the L2 norm used for regularization to reduce model complexity, and $\lambda_s$ is the regularization coefficient. An optimal attribute split partitions the user-item interaction subset $\mathcal{R}_z$ such that the embeddings of the two disjoint groups minimize the loss given by Eq.~\eqref{eq:split}. We can find the optimal attribute $f_z$ by exhaustively searching through the attribute set $\mathcal{F}$ using Eq.~\eqref{eq:optimal}: \begin{equation} \label{eq:optimal} f_z = \arg \min_{f_l \in \mathcal{F}} \mathcal{L}(f_l|\mathcal{R}_z) \end{equation} We build the interaction tree by recursively splitting each partition until the maximum depth $H_{max}$ is reached. Once constructed, the user-item interaction tree allows us to infer the interaction embedding for a new interaction at inference time by following the tree structure. \textbf{Candidate items.} Each node in the user-item interaction tree maintains a subset of observed interactions. Since each interaction is a user-item pair, the items in the interaction set of that node form the candidate items for recommendation.
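The recursive construction described above can be sketched as follows, where `fit_split` is a hypothetical routine (not part of our implementation) that solves Eq.~\eqref{eq:split} for one candidate attribute and returns the resulting loss:

```python
def build_tree(R_z, attrs, depth, H_max, fit_split):
    """Grow one user-item interaction tree. R_z is a list of (user, item, r)
    triples with r a binary attribute vector; attrs indexes the attributes."""
    if depth == H_max or not R_z:
        return {"leaf": True, "R": R_z}
    # exhaustive search for the optimal attribute (Eq. (optimal))
    f_z = min(attrs, key=lambda f: fit_split(R_z, f))
    yes = [t for t in R_z if t[2][f_z] == 1]   # attribute mentioned
    no = [t for t in R_z if t[2][f_z] == 0]    # attribute not mentioned
    return {"leaf": False, "question": f_z,
            "yes": build_tree(yes, attrs, depth + 1, H_max, fit_split),
            "no": build_tree(no, attrs, depth + 1, H_max, fit_split)}
```

Each internal node stores the question to ask; each leaf stores the interactions whose shared embedding ranks the candidate items.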
Formally, given the subset of interactions $\mathcal{R}_z$ at node $z$, the candidate set of items is given by: \begin{equation} \label{eq:candidate} \mathcal{I}_{z} = \{i | (u, i) \in \mathcal{R}_z\} \end{equation} \textbf{Recommendation score.} Suppose, after traversing the user-item interaction tree, the inferred interaction embedding is $\mathbf{s}_t$. The recommendation score of each candidate item at turn $t$ can be readily computed as: \begin{equation} \label{eq:score} w^{\mathcal{I}}_t(i) = \mathbf{s}_t^T \mathbf{v}_i \end{equation} \subsection{\textsc{FacT-CRS}{}: User-item Interaction Tree for Conversational Recommendation} Successfully solving the multi-turn CRS problem requires addressing the following four questions: 1) \textit{which questions to ask}, 2) \textit{how to rank the candidate items}, 3) \textit{when to recommend}, and 4) \textit{how to handle the user's negative feedback on the recommendations}. Our construction of the interaction tree described in Section \ref{sec:review-tree} partially addresses the first two questions. In this section, we focus on the remaining two, based on the structure of a decision tree. \subsubsection{Which questions to ask and how to rank the candidate items?} \label{sec:rq1} The user-item interaction tree is learned to optimize the questions to be asked based on user feedback, i.e., following the paths on the tree to narrow down the search space. However, since the maximum depth of a trained decision tree is fixed, we cannot ask more than $H_{max}$ questions using one tree. For our model to generalize, we need to be able to ask an arbitrary number of questions. Random forest~\cite{breiman2001random} is an ensemble learning method that provides a feasible way to address this challenge by creating multiple trees. We build a random forest of $N$ user-item interaction trees.
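The two mechanisms above, attribute selection via Eq.~\eqref{eq:optimal} and candidate ranking via Eq.~\eqref{eq:score}, can be sketched in plain code. The fitted loss of Eq.~\eqref{eq:split} is treated as a black-box callable, and all names here are ours, not the paper's implementation:

```python
def find_optimal_split(attributes, split_loss):
    """Eq. (optimal): f_z = argmin over f_l in F of L(f_l | R_z).

    split_loss maps a candidate attribute f_l to the loss of Eq. (split)
    after fitting s_pos and s_neg on the induced partition.
    """
    return min(attributes, key=split_loss)


def rank_candidates(s_t, item_vecs, candidates, k):
    """Eq. (score): score each candidate i by w_t(i) = s_t^T v_i,
    then return the top-k candidates by score."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    scores = {i: dot(s_t, item_vecs[i]) for i in candidates}
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

The exhaustive argmin over $\mathcal{F}$ is affordable here because the benchmark attribute sets are small (33--35 attributes).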
We can build the interaction trees in parallel so that each interaction tree in the forest considers a maximum of $f_{max}$ attributes, where $f_{max} \leq q$. These attributes are randomly sampled from the complete attribute set $\mathcal{F}$. We use $\tau_j$ to denote the $j^{th}$ interaction tree and $\mathcal{T}$ to denote the set of all interaction trees, i.e., the interaction forest. We will discuss how to leverage these multiple trees to realize multi-turn CRS in Section \ref{sec:rq3}. As each tree node is associated with an interaction embedding vector, once the agent decides to make a recommendation, it uses Eq.~\eqref{eq:score} to compute the ranking scores of all candidate items and returns the top-K items as the recommendation of this turn. \subsubsection{When to recommend?} \label{sec:rq2} The interaction tree gives us a natural way to decide when to recommend, i.e., when reaching a leaf node of the tree. But this restricts us to asking at most $H_{max}$ questions before we can start a recommendation turn. However, an important goal of conversational recommender systems is to minimize the number of interactions with the user. It is therefore desirable to make a recommendation as soon as the agent is ``confident'' about the item to be recommended. Hence, the question becomes: how to make an early recommendation? We use the following two strategies to recommend early. \begin{itemize}[wide = 1pt] \item During training (pruning): When building each user-item interaction tree, we stop splitting a node when the items in that node are ``homogeneous'', i.e., most of the interactions are about only a few items. We use the Gini Index \cite{gini1921measurement} for this purpose. Suppose, for an internal node $z$ in an interaction tree, the set of items from the user-item interactions $\mathcal{R}_z$ is $\mathcal{I}_z$, as given by Eq.~\eqref{eq:candidate}.
We calculate the Gini Index of node $z$ as: \begin{equation} \label{eq:gini} G_z=1-\left(\frac{|\mathcal{I}_{z}|}{|\mathcal{R}_{z}|}\right)^{2} \end{equation} If the Gini Index of a node is greater than a predetermined threshold $\gamma$, we stop further splitting that node. \item During testing (EarlyRec): \label{subsec:stop-testing} At inference time, a small number of items in a node is a strong indication that it is time to recommend. Let the set of items at the current node be $\mathcal{I}_z$; if $|\mathcal{I}_z| \leq \eta$, we make an early recommendation. If $|\mathcal{I}_z| \leq K$, we include all the items in $\mathcal{I}_z$ in the recommendation, rank the items in $\mathcal{I} \backslash \mathcal{I}_z$ using the scoring function in Eq.~\eqref{eq:score}, and fill the remaining $K - |\mathcal{I}_z|$ slots of the top-K recommendation with the highest-scoring of these items. \end{itemize} \subsubsection{How to handle online negative feedback?} \label{sec:rq3} Handling negative feedback or recommendation rejection is an important challenge in CRS. When the user rejects a recommendation, it provides a strong signal to improve the subsequent questions and recommendations. To effectively handle negative feedback from a user's rejected recommendations, we approach this problem in three steps: retaining information from previously visited user-item interaction trees, effectively choosing the next user-item interaction tree, and updating the predicted user-item interaction embedding. \begin{itemize}[wide=0pt] \item {Retaining information from the previously visited user-item interaction trees.} We need to move to a new user-item interaction tree when the user rejects our recommendation.
Suppose, at turn $t$, we have asked questions from the set of interaction trees $\mathcal{T}^\alpha_t$, and we have obtained the set of user-item interaction embeddings $\mathcal{S}^{\alpha}_{t}$. We keep the conversation history by updating the inferred interaction embedding to be the mean of the visited user-item interaction embeddings $\mathcal{S}^{\alpha}_{t}$. As a result, the updated interaction embedding at turn $t$ becomes: \begin{equation} \label{eq:mean-embedding} \mathbf{s}_t = \frac{1}{|\mathcal{S}^{\alpha}_{t}|}\sum_{\mathbf{s}_j \in \mathcal{S}^{\alpha}_{t}} \mathbf{s}_j \end{equation} \item {Choosing the next user-item interaction tree.} Assume that we have finished traversing an interaction tree and made a recommendation. If the user rejects, how do we select the next tree? One way is to randomly pick an unvisited tree. However, we can do better with the following strategy. Since the predicted interaction embedding is expected to be close to the target interaction embedding, we propose to move to the closest tree first. Suppose, at turn $t$, we have asked questions from the set of user-item interaction trees $\mathcal{T}^\alpha_t$, and have asked about the set of attributes $\mathcal{F}^{\alpha}_{t} \subseteq \mathcal{F}$. Based on Eq.~\eqref{eq:mean-embedding}, let the current inferred user-item interaction embedding be $\mathbf{s}_{t}$. We traverse the remaining user-item interaction trees in parallel by using only the attributes in $\mathcal{F}^\alpha_{t}$, i.e., the attributes we already know the answers to. Assume that we get the interaction embedding $\mathbf{s}'_j$ from the remaining interaction tree $\tau_j \in \mathcal{T} \backslash \mathcal{T}^{\alpha}_t$.
We score each remaining tree $\tau_j$ according to how similar the corresponding interaction embedding $\mathbf{s}'_j$ is to the current interaction embedding $\mathbf{s}_{t}$, as given by Eq.~\eqref{eq:tree-score}, \begin{equation} \label{eq:tree-score} w^{\mathcal{T}}_t(\tau_j) = {\mathbf{s}'_j}^T \mathbf{s}_t \end{equation} Using this closest-tree-first strategy, we choose the user-item interaction tree $\tau_j$ with the highest similarity score as the next tree to continue the conversation. \item{Updating the predicted user-item interaction embedding.} We first identify the set of items that caused the failure and then update the inferred interaction embedding accordingly. Assume at turn $t$ we make a recommendation using the interaction embedding $\mathbf{s}_t$, which the user rejects. We denote this set of recommended items as $\mathcal{I}_r$. We use the following two strategies to handle online feedback: 1) As $\mathcal{I}_r$ was rejected by the user, we update the predicted user-item interaction embedding to penalize the items in $\mathcal{I}_r$. 2) Since the target item is still expected to have a high score, we promote the items in the next top-K set. Based on $\mathbf{s}_t$, denote the next $K$ highest scoring items as the set $\mathcal{I}_p$. Combining both strategies, we update the user-item interaction embedding after a rejected recommendation as follows: \begin{equation} \label{eq:negative-feedback} \mathbf{s}_{t+1} = \mathbf{s}_{t} + \frac{\alpha_p}{|\mathcal{I}_p|} \sum_{i \in \mathcal{I}_p} \mathbf{v}_i- \frac{\alpha_n}{|\mathcal{I}_r|} \sum_{i \in \mathcal{I}_r} \mathbf{v}_i \end{equation} Here, $\alpha_n$ and $\alpha_p$ are hyper-parameters that respectively determine how much we penalize the items in $\mathcal{I}_{r}$ and how much we promote the next top-ranked items $\mathcal{I}_p$. Algorithm~\ref{alg:forest} presents the inference steps of a single conversation session.
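The three feedback-handling steps above can be sketched as follows, with embeddings as plain Python lists; the fitting of the per-tree embeddings is assumed done, and all names are our own illustration:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))


def mean_embedding(visited):
    """Eq. (mean-embedding): average the embeddings collected from visited trees."""
    n = len(visited)
    return [sum(v[d] for v in visited) / n for d in range(len(visited[0]))]


def next_tree(s_t, remaining_embeddings):
    """Eq. (tree-score): pick the unvisited tree whose partial-traversal
    embedding s'_j is most similar (by inner product) to the current s_t."""
    return max(remaining_embeddings, key=lambda j: dot(remaining_embeddings[j], s_t))


def update_after_rejection(s_t, item_vecs, rejected, promoted, alpha_p, alpha_n):
    """Eq. (negative-feedback): demote the rejected items I_r and promote
    the next top-K items I_p by their mean item embeddings."""
    d = len(s_t)
    v_p = [sum(item_vecs[i][k] for i in promoted) / len(promoted) for k in range(d)]
    v_r = [sum(item_vecs[i][k] for i in rejected) / len(rejected) for k in range(d)]
    return [s_t[k] + alpha_p * v_p[k] - alpha_n * v_r[k] for k in range(d)]
```

Together these steps let the agent carry accumulated evidence across trees rather than restarting after every rejection.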
\end{itemize} \input{algo/test} \section{Experiments} \label{sec:experiments} To evaluate the effectiveness of \textsc{FacT-CRS}{} in solving the challenges in CRS, we perform quantitative experiments guided by the following research questions (RQ): \begin{itemize}[leftmargin = 10 pt] \item \textbf{RQ1.} Can a rule-based method, i.e., \textsc{FacT-CRS}{}, achieve better performance than the state-of-the-art RL-based methods in the multi-turn CRS setting? \item \textbf{RQ2.} How much improvement does the early recommendation strategy offer compared to asking questions from the entire interaction tree? \item \textbf{RQ3.} Is the online update of the inferred interaction embedding useful in making better recommendations? \end{itemize} \subsection{Dataset} \label{subsec:dataset} We evaluate the recommendation quality of \textsc{FacT-CRS}{} on three benchmark datasets used in multi-turn CRS. \begin{itemize}[leftmargin = 10pt] \item \textbf{LastFM}~\cite{bertin2011million} is a dataset for music artist recommendation. Lei et al.~\cite{earlei2020estimation} pruned the users with fewer than 10 reviews~\cite{he2017neural,rendle2012bpr} to reduce data sparsity, and processed the original attributes by combining synonyms and removing low-frequency attributes. They categorized the original attributes into 33 coarse-grained attributes. \item \textbf{BookRec}~\cite{nguyen2019building} is a book recommendation dataset filtered by removing low-frequency TF-IDF attributes and keeping the top 35 attributes. \item \textbf{MovieLens}~\cite{harper2015movielens} is a movie recommendation dataset filtered by keeping the top 35 attributes according to their TF-IDF scores. \end{itemize} The statistics of the datasets are shown in Table~\ref{table:dataset}. We randomly split the users into disjoint training, validation, and testing groups in an 8:1:1 ratio. The code and data used in the experiments are available at \href{https://github.com/HCDM/XRec}{https://github.com/HCDM/XRec}.
\begin{table}[H] \caption{Summary of datasets.} \label{table:dataset} \centering \begin{tabular}{lrrr} \hline & LastFM & BookRec & MovieLens \\ \hline \# Users & 1,801 & 1,891 & 3,000 \\ \# Items & 7,432 & 4,343 & 5,974 \\ \# Interactions & 72,040 & 75,640 & 120,000 \\ \# Attributes & 33 & 35 & 35 \\ \hline \end{tabular} \end{table} \subsection{Experiment Settings} \subsubsection{User simulator} Conversational recommendation is a dynamic process for user preference elicitation. Similar to \cite{sun2018conversational, zhang2018towards, unicorndeng2021unified, earlei2020estimation, scprlei2020interactive, fpanxu2021adapting}, we created a user simulator to enable CRS training and testing. In each conversational session, an observed user-item pair $(u,i)$ is first selected. We call the item $i$ the target item or the ground-truth item for that conversation. Previous simulators~\cite{unicorndeng2021unified, earlei2020estimation, scprlei2020interactive, fpanxu2021adapting} assume that all of item $i$'s attributes $\mathcal{F}_i$ form the oracle set of attributes preferred by the user in this session. This means that all users will respond in the same way to the selected item. This setting is unrealistic, because in reality users do not equally value every attribute of an item; this design therefore eliminates the potential for personalized responses. We design a user-based simulator that can handle user-specific feedback in each conversation round in the following way. Each item $i$ is associated with a set of attributes $\mathcal{F}_i$, and each user $u$ has a preferred attribute set $\mathcal{F}_u$. $\mathcal{F}_i$ and $\mathcal{F}_u$ are uniformly sampled for each item $i$ and user $u$ before the simulation starts. For a user-item interaction pair $(u,i)$, our simulator accepts (responds ``Yes'' to) an attribute $f_l$ if and only if it is mentioned in $\mathcal{F}_i \cap \mathcal{F}_u$; otherwise it responds ``No''.
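The simulator's response rule reduces to a set intersection; a minimal sketch with illustrative names:

```python
def simulate_response(f_l, item_attrs, user_prefs):
    """Respond "Yes" to a queried attribute f_l iff f_l lies in the
    intersection of F_i (the target item's attributes) and F_u (the
    attributes this particular user cares about); otherwise "No"."""
    return "Yes" if f_l in (item_attrs & user_prefs) else "No"
```

Note that an attribute the item possesses but the user does not value still draws a ``No'', which is precisely what makes the simulated responses user-specific.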
We set the maximum turn limit in a conversation to 10. The user leaves the conversation after the turn limit is reached. We set $K = 10$, so that we are limited to recommending only 10 items in a recommendation turn. \subsubsection{Evaluation Metrics} \label{sec:metric} Following~\cite{sun2018conversational, zhang2018towards, unicorndeng2021unified, earlei2020estimation, fpanxu2021adapting, scprlei2020interactive}, we use success rate and average turn as evaluation metrics. We use the success rate at turn $T$ (SR@T) to measure the ratio of successful conversations. In an interaction where user $u$ interacted with item $i^+$, we call $i^+$ the ground-truth or the target item. A conversation session is successful if the agent can identify the ground-truth item. We also report the average turns (AT) needed to end the conversation. The number of turns in a failed conversation is set to the maximum turn limit $T$. Larger SR@T indicates better recommendation quality, whereas smaller AT indicates a more efficient and to-the-point conversation. \subsubsection{Implementation Details.} We trained \textsc{FacT-CRS}{} on the training users, and tuned the hyper-parameters of our model on the validation set of users. The best model was chosen based on the validation success rate. The testing users were used to obtain the final reported performance for comparison. We fixed the embedding dimension $d = 40$, $\lambda_{BPR} = 10^{-3}$, $\alpha_p = 10^{-3}$, $\alpha_n = 10^{-2}$, and the Gini threshold $\gamma = 0.996$. The depth of the interaction tree was chosen as 7. \subsubsection{Baselines} \label{sec:baselines} We evaluated the performance of \textsc{FacT-CRS}{} and compared it with the following state-of-the-art multi-turn CRS baselines~\cite{wu2015probabilisticmaxentropy, unicorndeng2021unified, earlei2020estimation, fpanxu2021adapting, scprlei2020interactive}.
\begin{itemize}[leftmargin = 10 pt] \item \textbf{Max Entropy (MaxE)} \cite{wu2015probabilisticmaxentropy}: In this method, the CRS agent chooses either an attribute to ask about or top-ranked items to recommend in a probabilistic way. The agent asks about the attribute with the maximum entropy based on the past conversation history. \item \textbf{EAR} \cite{earlei2020estimation}: This is a three-stage approach for multi-turn CRS: the estimation stage builds a predictive model to estimate user preference based on both items and attributes, the action stage learns a policy to decide whether to ask about attributes or make a recommendation, and the reflection stage updates the recommendation model based on online user feedback. \item \textbf{FPAN}~\cite{fpanxu2021adapting}: This extends EAR by dynamically revising user embeddings based on users' feedback. The relationship between attribute-level and item-level feedback signals is used to identify the specific items and attributes that cause the rejection of an item. \begin{table*}[t] \centering \caption{Performance Comparison of different CRS models on three datasets. * represents the best performance among the baselines.
The improvement over baseline is calculated against the best baseline values.} \label{tab:comparison} \begin{tabular}{lcccccc| c | c} \hline & & MaxE & EAR & FPAN & SCPR & UNI & \textsc{FacT-CRS}{} & Improvement*\\ \hline \multirow{2}{*}{ LastFM } & SR@10 & $0.137$ & $0.428$ & $0.508^{*}$ & $0.432$ & $0.441$ & $0.719$ & 41.53\%\\ & AT & $9.71$ & $8.62$ & $8.08^{*}$ & $8.70$ & $8.52$ & $6.65$ & 17.69\%\\ \hline \multirow{2}{*}{ BookRec } & SR@10 & $0.206$ & $0.320$ & $0.397^{*}$ & $0.329$ & $0.358$ & $0.438$ & 10.33\%\\ & AT & $9.64$ & $9.01$ & $8.31^{*}$ & $9.11$ & $9.00$ & $8.23$ & 0.96\%\\ \hline \multirow{2}{*}{ MovieLens } & SR@10 & $0.262$ & $0.552$ & $0.589$ & $0.545$ & $0.596^{*}$ & $0.692$ & 16.10\%\\ & AT & $9.46$ & $7.98$ & $7.81^{*}$ & $7.89$ & $8.01$ & $6.57$ & 15.87\%\\ \hline \end{tabular} \end{table*} \item \textbf{SCPR}~\cite{scprlei2020interactive}: It models CRS as an interactive path reasoning problem on a knowledge graph. It leverages the target user's preferred attributes by following user feedback to traverse the attribute graph. Using the knowledge graph enables it to reduce the search space of candidate attributes. \item \textbf{UNICORN} \cite{unicorndeng2021unified}: This method integrates the conversation and recommendation components into a unified RL solution. UNICORN develops a dynamic weighted-graph-based RL method to learn a policy for selecting the action at each conversation turn. It pre-selects attributes and items to simplify the RL training. \end{itemize} All these methods rely on reinforcement learning models and pretrained user embeddings. To adapt them to new users at testing time, we use the mean embedding of the training users. \subsection{Overall Performance} To answer RQ1, we first evaluate the recommendation quality of \textsc{FacT-CRS}{} in terms of success rate (SR@T) and average turn (AT). A good CRS agent should be able to identify items that are more relevant to a user's preference and rank them higher in the result list.
We report the results of our experiments in Table~\ref{tab:comparison}. \textsc{FacT-CRS}{} consistently outperformed all baselines by a good margin on all datasets. FPAN performed best among the baselines. We note that although it uses the same general structure as EAR, FPAN is able to generalize better to new users because it dynamically updates the user embeddings with the user-provided positive and negative feedback on attributes and items through two gated modules, which enables adaptive item recommendation. Although EAR is effective in handling the large action space in CRS, its performance is limited by the separation of its conversation and recommendation components. UNICORN performs relatively better in this case through its action selection strategy. However, since all the baselines rely on user-level embeddings, they cannot perform well on new test users. By hierarchically clustering the user-item interactions and exploiting the related user-item interactions, the interaction embeddings estimated by \textsc{FacT-CRS}{} more effectively capture the current preference of the user, even for new users. This enables its good empirical performance in our evaluation. \begin{figure}[ht] \centering \includegraphics[width = 0.49\linewidth]{figs/rec-succ.png} \includegraphics[width = 0.49\linewidth]{figs/rec-succ-fact.png} \caption{(Left) Ratio of recommendation and recommendation success rate at each turn on the LastFM dataset (\textsc{FacT-CRS}{} vs FPAN). (Right) Ratio of recommendation and recommendation success rate at each turn on the LastFM dataset for \textsc{FacT-CRS}{} at depths 3 and 7.} \label{fig:rec-succ} \end{figure} We also investigated how \textsc{FacT-CRS}{} compares to other baselines at each turn of the conversations. To investigate this, we compared the recommendation probability and success rate between \textsc{FacT-CRS}{} and FPAN on the LastFM dataset.
As shown in Figure~\ref{fig:rec-succ}, FPAN can start recommending very early on, but when the user rejects a recommendation, FPAN tries to compensate by recommending more items, instead of identifying the cause of failure by asking more questions. In comparison, \textsc{FacT-CRS}{} handles the negative feedback more effectively: it asks more clarifying questions before making another recommendation. \textsc{FacT-CRS}{} first tries to identify the negative items, updates the predicted user-item interaction embedding using Eq.~\eqref{eq:negative-feedback}, and then moves on to the next tree. This allows \textsc{FacT-CRS}{} to subsequently ask better questions and make better recommendations. \subsubsection{Impact from tree depth.} Figure~\ref{fig:rec-succ} (right) shows the performance of \textsc{FacT-CRS}{} with different depths of its interaction trees. For a smaller depth (depth 3), \textsc{FacT-CRS}{} starts recommending early and more frequently, but at the same time it compromises the success rate. For a larger depth (depth 7), \textsc{FacT-CRS}{} asks more questions before making a recommendation. From the figure we see that the recommendation success rate improves by asking more questions. \subsection{Ablation Study} \label{sec:ablation} In this section, we evaluate the effect of each individual component of \textsc{FacT-CRS}{} by removing that component and evaluating the remaining model. \subsubsection{Impact of Candidate Items in User-item Interaction Tree} Every node in the user-item interaction tree is associated with a set of user-item pairs. If we are ready to recommend at a node in the user-item interaction tree (usually near a leaf node), we check which items appear in those user-item pairs. Then we rank those items based on the predicted user-item interaction embedding. This enables us to significantly narrow down the candidate set of items.
Without this component, we would have to rank all the items each time using Eq.~\eqref{eq:score} to make a recommendation. Table~\ref{table:user-item-review-tree} reports our model's performance without taking into account the candidate items. The candidate item set selection design is a vital part of \textsc{FacT-CRS}{}. As Table~\ref{table:user-item-review-tree} shows, narrowing down the candidate set of items substantially improves our recommendations. Moreover, the early recommendation component of \textsc{FacT-CRS}{} relies on the size of the candidate item set to decide when to recommend. \begin{table}[t] \centering \caption{Effect of Candidate Items in User-item Interaction Tree.} \label{table:user-item-review-tree} \setlength{\tabcolsep}{4pt} \begin{tabular}{c|cc|cc|cc} \hline & \multicolumn{2}{c|}{LastFM} & \multicolumn{2}{c|}{BookRec} & \multicolumn{2}{c}{MovieLens}\\ & SR@10 & AT & SR@10 & AT & SR@10 & AT \\ \hline \textsc{FacT-CRS}{} & 0.719 & 6.65 & 0.438 & 8.23 & 0.692 & 6.57\\ \hline w/o Candidate & 0.578 & 8.36 & 0.195 & 9.50 & 0.493 & 8.51\\ \hline \end{tabular} \end{table} \begin{figure}[ht] \centering \includegraphics[width=0.49\linewidth]{figs/hist-leaf-items-lastfm.png} \includegraphics[width=0.49\linewidth]{figs/hist-leaf-items-book.png} \caption{Histogram of the number of items in leaf nodes in a user-item interaction tree for the (left) LastFM and (right) BookRec datasets.} \label{fig:hist-leaf-items} \end{figure} To understand why using the items in the leaf nodes of the user-item interaction tree is important, we plot the histogram of the number of unique items in the leaf nodes of a user-item interaction tree. Figure~\ref{fig:hist-leaf-items} shows the results for the LastFM and BookRec datasets. As we can see, most of the leaf nodes have very few ($<20$) items. Hence, the leaf nodes work well to cluster and narrow down the correct candidate list of items.
This also empirically verifies our initial assumption that using the shared attributes to cluster the user-item interactions and subsequently learning the embeddings enhances the quality of the embeddings. \begin{figure}[htb] \centering \includegraphics[width=0.49\linewidth]{figs/leaf-count-items-lastfm.png} \includegraphics[width=0.49\linewidth]{figs/leaf-count-items-book.png} \caption{Histogram of the number of different leaf nodes each item appears in within the user-item interaction tree for the (left) LastFM and (right) BookRec datasets.} \label{fig:leaf_count_item} \end{figure} On the other hand, we also check how scattered each item is within the user-item interaction tree by recording how many different leaf nodes contain the same item. Figure~\ref{fig:leaf_count_item} shows the histogram of the number of leaf nodes each item is spread across. This figure demonstrates that the locations of most items are quite concentrated. By combining the findings from Figures~\ref{fig:hist-leaf-items} and \ref{fig:leaf_count_item}, we find that, by using the shared attributes to group items according to user-item interactions, \textsc{FacT-CRS}{} can effectively find a good subset containing a small number of candidate items.
\begin{table}[t] \centering \caption{Ablation study of main components of \textsc{FacT-CRS}{}.} \label{table:ablation} \setlength{\tabcolsep}{4pt} \begin{tabular}{c|cc|cc|cc} \hline & \multicolumn{2}{c|}{LastFM} & \multicolumn{2}{c|}{BookRec} & \multicolumn{2}{c}{MovieLens}\\ & SR@10 & AT & SR@10 & AT & SR@10 & AT \\ \hline \textsc{FacT-CRS}{} &0.719 & 6.65 & 0.438 & 8.23 & 0.692 & 6.57\\ \hline $\neg$ RF & 0.349 & 8.30 & 0.117 & 9.52 & 0.223 & 8.73\\ \hline $\neg$ EarlyRec & 0.410 & 9.68 & 0.201 & 9.81 & 0.436 & 9.68\\ \hline $\neg$ OnlineFeed & 0.704 & 6.58 & 0.350 & 8.46 & 0.595 & 6.53 \\ \hline \end{tabular} \end{table} \subsubsection{Impact of Random Forest (RF)} Random forest provides a key feature of multi-turn CRS by allowing us to ask more questions after encountering a rejection. Without RF, we can ask at most $H_{max}$ questions, the maximum depth of the tree. We evaluate the significance of RF by evaluating a single interaction tree built from the complete attribute set $\mathcal{F}$. Table~\ref{table:ablation} shows that it contributes the most among all components. By allowing \textsc{FacT-CRS}{} to ask an arbitrary number of questions and to recommend multiple times, this component makes the user-item interaction tree suitable for the multi-turn CRS setting. \subsubsection{Impact of Early Recommendation (EarlyRec)} For RQ2, we study the contribution of the EarlyRec strategy. To minimize the number of interactions with the user, we make an early recommendation if the candidate set of items at the current node is small enough. As we can see from Table~\ref{table:ablation}, EarlyRec has a significant impact on minimizing the average turn (AT). By recommending early, this component also helps the agent understand whether it is on the right track, and allows the agent to make necessary corrections that improve future recommendations.
\subsubsection{Impact of Handling Online Negative Feedback (OnlineFeed)} To answer RQ3, we study the effectiveness of our online feedback method. The online feedback component updates the current inferred interaction embedding when the user rejects recommendations made by the agent. As we can see from Table~\ref{table:ablation}, this strategy contributes to better recommendation quality. The OnlineFeed component first tries to identify the items responsible for the rejected recommendations, and corrects the predicted user-item interaction embedding to move towards the potential set of items that contains the target item. This allows \textsc{FacT-CRS}{} to choose the next interaction tree more efficiently. \subsection{Case Study} \label{sec:case-study} We performed the following case studies to analyze the performance of our model and to identify where we can further improve. \subsubsection{Failed Conversations} \label{subsec:failed-conversations} We paid special attention to the failed conversations to get a better understanding of why a conversation fails. On all three datasets, we report the average number of mentioned attributes in failed interactions and compare it to that of successful interactions. Table~\ref{table:failed-conversation} summarizes the mean and standard deviation of these results. As we can see, the average number of mentioned attributes in the failed conversations is smaller than the average number of mentioned attributes in the corresponding complete datasets.
\begin{table}[t] \centering \caption{Mean $\mu$ and standard deviation $\sigma$ of the number of mentioned attributes in the interaction for successful, failed, and all conversations.} \label{table:failed-conversation} \begin{tabular}{c|cc|cc|cc} \hline \multicolumn{1}{c|}{} & \multicolumn{2}{c|}{LastFM} & \multicolumn{2}{c|}{BookRec} & \multicolumn{2}{c}{MovieLens}\\ & $\mu$ & $\sigma$ & $\mu$ & $\sigma$ & $\mu$ & $\sigma$ \\ \hline Successful & 5.65 & 1.13 & 5.60 & 1.09 & 4.37 & 1.01\\ \hline Failed & 5.15 & 1.16 & 5.06 & 0.99 & 4.02 & 0.91\\ \hline All & 5.51 & 1.15 & 5.30 & 1.07 & 4.26 & 1.00 \\ \hline \end{tabular} \end{table} We next look at the conversations \textsc{FacT-CRS}{} failed where the interaction contained at least $p_n$ attributes, in Table~\ref{tab:min-attributes}. Our experiments show that for greater values of $p_n$, it becomes increasingly likely that \textsc{FacT-CRS}{} can successfully recommend the target item. This gives us a basic insight into why a conversation fails: since very few attributes are mentioned in the failed conversations, our model cannot gather sufficient information to infer which particular item the user is looking for. \begin{table} \caption{SR@10 of \textsc{FacT-CRS}{} when at least $p_n$ attributes are confirmed in the interaction.} \label{tab:min-attributes} \centering \begin{tabular}{c|c|c|c} \hline Min \# attributes $p_n$ & LastFM & BookRec & MovieLens \\ \hline 3 & 0.721 & 0.438 & 0.729 \\ \hline 4 & 0.751 & 0.496 & 0.765 \\ \hline 5 & 0.783 & 0.573 & 0.830 \\ \hline 6 & 0.826 & 0.644 & 0.882 \\ \hline \end{tabular} \end{table} \subsubsection{Identified Attributes} Figure~\ref{fig:heatmap} shows the success rate of conversations with different interaction lengths $p_n$ and the number of attributes $p_k$ identified by \textsc{FacT-CRS}{}. Note that $ p_k > p_n$ is not possible, i.e., \textsc{FacT-CRS}{} cannot identify more attributes than the total number of attributes associated with an interaction.
Also, $p_k < T$, since any CRS agent can ask at most $T-1$ questions. The white cells with $p_k \leq p_n$ in Figure~\ref{fig:heatmap} correspond to events that are possible but did not occur. For example, on the BookRec dataset, when the number of attributes in an interaction is 9 (i.e., $p_n = 9$), \textsc{FacT-CRS}{} always identified at least 3 attributes ($p_k \geq 3$). When more attributes are identified (left to right in Figure \ref{fig:heatmap}), it is more likely that the conversation will be successful. Similarly, when the user mentions more attributes in an interaction (top to bottom), \textsc{FacT-CRS}{} is likely to identify more attributes and subsequently the conversations are more likely to be successful. Our interaction-tree-based solution provides good predictive accuracy as well as minimizes the inputs required from the user. Our model outperforms the existing RL-based baselines on the LastFM, BookRec, and MovieLens datasets, which demonstrates its robustness. \begin{figure}[t] \centering \includegraphics[width=0.49\linewidth]{figs/heatmap-lastfm.png} \includegraphics[width=0.49\linewidth]{figs/heatmap-book.png} \caption{SR@10 of the number of attributes identified (correctly asked) by \textsc{FacT-CRS}{} for different interaction lengths on the (left) LastFM and (right) BookRec datasets.} \label{fig:heatmap} \end{figure} \section{Conclusion} \label{sec:conclusion} Multi-turn CRS is a dynamic approach to elicit the current user preference by asking a series of questions and making recommendations accordingly. Existing approaches in conversational recommender systems rely heavily on reinforcement learning based policy learning, whose performance, however, strongly depends on the amount of training data. In this paper, we proposed an alternative to the reinforcement learning methods and demonstrated that multi-turn CRS is addressable by decision trees.
To generalize decision trees to multi-turn CRS, we addressed four key challenges: \textit{which questions to ask, how to rank the items, when to recommend, and how to handle the user's rejection}. We proposed building a user-item interaction tree that is able to identify different descriptions of a given item. The interaction tree naturally provides a way to ask questions. To effectively rank the items, we learned the embeddings of user-item interactions. For this purpose, we used a decision tree based method called the factorization tree~\cite{tao2019fact}, which allows us to narrow down the candidate set of items by asking questions. By leveraging a random forest, we extended the factorization tree to multi-turn CRS. We solved the challenge of when to recommend by recommending as soon as the candidate item set is small enough. Making corrections to the interaction embedding after encountering a rejection enables us to effectively handle the users' online rejections. We extensively experimented on three benchmark CRS datasets and compared \textsc{FacT-CRS}{}'s performance with existing RL-based state-of-the-art solutions. The experimental results demonstrate that \textsc{FacT-CRS}{} outperforms them on all three datasets by successfully asking questions and identifying the target items in fewer turns. Our exploration in \textsc{FacT-CRS}{} sheds light on simple alternatives for multi-turn CRS. Though effective in our extensive evaluations, our solution still contains several empirically set hyper-parameters, such as the tree depth and the number of trees. In future work, we aim to eliminate such hyper-parameters via automated tuning. In addition, we currently handle conversations independently, even if they are from the same user; studying how to leverage observations from the same user or about the same item to further facilitate the conversation and recommendation is another important direction.
\section{Acknowledgement} We thank the anonymous reviewers for their valuable comments. This work is partially supported by NSF under grants IIS-2128019, IIS-2007492, and IIS-1553568.
\section{Motivation} It is of general interest to compute quantum corrections to classical field configurations like soliton solutions that are frequently interpreted as particles. On top of the wish list we find the energies that predict particle masses. The quantum correction to the energy can be quite significant because the classical field acts as a background that strongly polarizes the spectrum of the quantum fluctuations about it. For that reason the quantum correction to the classical energy is called vacuum polarization energy (VPE). Here we will consider the leading, {\it i.e.} one loop, contribution. Field theories that have classical soliton solutions in various topological sectors deserve particular interest. Solitons from different sectors have unequal winding numbers and the fluctuation spectrum changes significantly from one sector to the other. For example, the number of (normalizable) zero modes is linked to the symmetries that are spontaneously broken by the soliton. Of course, the pattern of spontaneous symmetry breaking is subject to the topological structure. On the other hand, the winding number is typically identified with the particle number. The prime example is the Skyrme model\cite{Skyrme:1961vq,Skyrme:1988xj} wherein the winding number determines the baryon number\cite{Witten:1979kh,Adkins:1983ya}. Many properties of baryons have been studied in this soliton model and its generalizations in the past\cite{Weigel:2008zz}. More recently configurations with very large winding numbers have been investigated\cite{Feist:2012ps} and these solutions were identified with nuclei. To obtain a sensible understanding of the predicted nuclear binding energies it is, of course, important to consider the VPE, in particular when it is expected to strongly depend on the particle number. So far this has not been attempted for the simple reason that the model is not renormalizable.
A rough estimate\cite{Scholtz:1993jg}\footnote{See Ref.\cite{Meier:1996ng} for a general discussion of the Skyrmion's quantum corrections and further references on the topic.} in the context of the H--dibaryon\cite{Jaffe:1976yi,Balachandran:1983dj} suggests that the VPE strongly reduces the binding energy of multi--baryon states. As already mentioned, one issue for the calculation of the VPE is renormalization. Another important one is, as will be discussed below, that the VPE is (numerically) extracted from the scattering data for the quantum fluctuations about the classical configuration\cite{Graham:2009zz}. Though this so--called {\it spectral method} allows for a direct implementation of standard renormalization conditions, it has limitations as it requires sufficient symmetry for a partial wave decomposition. This may not be possible for configurations with an intricate topological structure associated with large winding numbers. The $\phi^6$ model in $D=1+1$ dimensions has soliton solutions with different topological structures\cite{Lohe:1979mh,Lohe:1980js} and the fluctuations do not decouple into parity channels. The approach employed here is also based on scattering data but advances the spectral method such that no parity decomposition is required. We will also see that it is significantly more effective than previous computations\cite{AlonsoIzquierdo:2002eb,AlonsoIzquierdo:2011dy,AlonsoIzquierdo:2012tw} for the VPE of solitons in $D=1+1$ dimensions that are based on heat kernel expansions combined with $\zeta$--function regularization techniques\cite{Elizalde:1996zk,Elizalde:1994gf,Kirsten:2000ad}. Although the $\phi^6$ model is not fully renormalizable, at one loop order the ultra--violet divergences can be removed unambiguously. However, another very interesting phenomenon emerges.
The distinct topological structures induce non--equivalent vacua that manifest themselves via different dispersion relations for the quantum fluctuations at positive and negative spatial infinity. At some intermediate position the soliton mediates between these vacua. Since this position cannot be uniquely determined the resulting VPE exhibits a translational variance. This is surprising since, after all, the model is defined through a local and translationally invariant Lagrangian. In this paper we will describe the emergence of this variance and link it to the different level densities that arise from the dispersion relations. To open these results for discussion\footnote{The present paper reflects the author's invited presentation at the $5^{\rm th}$ {\it Winter Workshop on Non-Perturbative Quantum Field Theory} based on the methods derived in Ref.\cite{Weigel:2016zbs} making some overlap unavoidable.} it is necessary to review in detail the methods developed in Ref.\cite{Weigel:2016zbs} to compute the VPE for backgrounds in one space dimension that are not (manifestly) invariant under spatial reflection. Following this introductory motivation we will describe the $\phi^6$ model and its kink solutions. In chapter III we will review the spectral method that ultimately leads to a variant of the Krein--Friedel--Lloyd formula\cite{Faulkner:1977aa} for the VPE. The novel approach to obtain the relevant scattering data will be discussed in chapter IV and combined with the one--loop renormalization in chapter V. A comparison with known (exact) results will be given in chapter VI while chapter VII contains the predicted VPE for the solitons of the $\phi^6$ model. Translational variance of the VPE that emerges from the existence of non--equivalent vacua will be analyzed in chapter VIII. We conclude with a short summary in chapter~IX.
\section{Kinks in $\mathbf{\phi^6}$ Models} In $D=1+1$ dimensions the dynamics for the quantum field $\phi$ are governed solely by a field potential $U(\phi)$ that is added to the kinetic term \begin{equation} \mathcal{L}=\frac{1}{2}\partial_\mu \phi\partial^\mu \phi-U(\phi)\,. \label{eq:lag1} \end{equation} For the $\phi^6$ model we scale all coordinates, fields and coupling constants such that the potential contains only a single dimensionless parameter $a$ \begin{equation} U(\phi)=\frac{1}{2}\left(\phi^2+a^2\right)\left(\phi^2-1\right)^2\,. \label{eq:pot1} \end{equation} \begin{figure}[t] \centerline{\epsfig{file=p6a.eps,width=4.5cm,height=3cm}\hspace{1cm} \epsfig{file=p6b.eps,width=4.5cm,height=3cm}\hspace{1cm} \epsfig{file=p6c.eps,width=4.5cm,height=3cm}} \caption{\label{fig:phi6pot}The field potential, eq.~(\ref{eq:pot1}) in the $\phi^6$ model for various values of the real parameter $a=1,\fract{1}{2},0$ from left to right.} \end{figure} \noindent From figure~\ref{fig:phi6pot} we observe that there are three general cases. For $a^2>\fract{1}{2}$ two degenerate minima at $\phi=\pm1$ exist. For $0<a^2\le\fract{1}{2}$ an additional local minimum emerges at $\phi=0$. Finally, for $a=0$ the three minima at $\phi=0$ and $\phi=\pm1$ are degenerate. Soliton solutions connect different vacua between negative and positive spatial infinity. For $a\ne0$ the vacua are at $\phi=\pm1$ and the corresponding soliton solution is\cite{Lohe:1979mh} \begin{equation} \phi_K(x)=a\frac{X-1} {\sqrt{4X+a^2\left(1+X\right)^2}} \qquad \mbox{with} \qquad X={\rm e}^{2\sqrt{1+a^2}\,x}\,. \label{eq:phik6a} \end{equation} Its classical energy is $E_{\rm cl}(a)=\frac{2-a^2}{4}\sqrt{1+a^2}+\frac{4a^2+a^4}{8}\, {\rm ln}\frac{\sqrt{1+a^2}+1}{\sqrt{1+a^2}-1}$. The case $a=0$ is actually more interesting because two distinct soliton solutions do exist. 
The first one connects $\phi=0$ at $x\to-\infty$ to $\phi=1$ at $x\to\infty$, \begin{equation} \phi_{K_1}(x)=\frac{1}{\sqrt{1+{\rm e}^{-2x}}}\,, \label{eq:phi61} \end{equation} while the second one interpolates between $\phi=-1$ and $\phi=0$, \begin{equation} \phi_{K_2}(x)=-\frac{1}{\sqrt{1+{\rm e}^{2x}}}\,. \label{eq:phi62} \end{equation} These soliton configurations are shown in figure \ref{fig:phi6sol}. \begin{figure}[t] \centerline{ \epsfig{file=p6k1.eps,width=5cm,height=3.0cm}\hspace{2cm} \epsfig{file=p6k2.eps,width=5cm,height=3.0cm}} \caption{\label{fig:phi6sol}The two soliton solutions for $a=0$: left panel, eq.~(\ref{eq:phi61}); right panel, eq.~(\ref{eq:phi62}).} \end{figure} In either case the classical mass is $E_{\rm cl}=\fract{1}{4}=\fract{1}{2}\lim_{a\to0}E_{\rm cl}(a)$. This relation for the classical energies reflects the fact that as $a\to0$ the solution $\phi_K(x)$ disintegrates into two widely separated structures, one corresponding to $\phi_{K_1}(x)$ and the other to $\phi_{K_2}(x)$. The computation of the VPE requires the construction of scattering solutions for fluctuations about the soliton. In the harmonic approximation the fluctuations experience the potential \begin{equation} V(x)=\frac{\partial^2 U(\phi)}{\partial\phi^2} \Big|_{\phi=\phi_{\rm sol}(x)} \label{eq:fltpot1} \end{equation} generated by the soliton ($\phi_{\rm sol}=\phi_K$, $\phi_{K_1}$ or $\phi_{K_2}$). \begin{figure}[t] \centerline{ \epsfig{file=phi6a.eps,width=6cm,height=3cm}\hspace{2cm} \epsfig{file=phi60.eps,width=6cm,height=3cm}} \caption{\label{fig:fltpot}Scattering potentials for the quantum fluctuations in the $\phi^6$ model. Left panel: typical example for $a\ne0$; right panel: the case $a=0$ with the two potentials generated by $\phi_{K_1}$, full line and $\phi_{K_2}$, dashed line.} \end{figure} These three potentials are shown in figure \ref{fig:fltpot}. For $a\ne0$ the potential is invariant under $x\leftrightarrow-x$.
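As a quick numerical cross--check of these classical energies (our own sketch using scipy, not part of the original analysis; all function names are ours), one can verify that the BPS integral $\int dx\,[\phi_{K_1}^\prime(x)]^2$ equals $\fract{1}{4}$ and that $E_{\rm cl}(a)\to\fract{1}{2}$ as $a\to0$:

```python
import numpy as np
from scipy.integrate import quad

def E_cl(a):
    """Classical soliton energy E_cl(a) of the phi^6 model for a != 0."""
    s = np.sqrt(1.0 + a**2)
    return (2.0 - a**2) / 4.0 * s \
        + (4.0 * a**2 + a**4) / 8.0 * np.log((s + 1.0) / (s - 1.0))

# For a = 0 the soliton is BPS saturated, E_cl = int dx [phi_K1'(x)]^2,
# with phi_K1'(x) = exp(-2x) (1 + exp(-2x))^(-3/2) from the profile above.
dphi = lambda x: np.exp(-2.0 * x) * (1.0 + np.exp(-2.0 * x)) ** (-1.5)
E_K1, _ = quad(lambda x: dphi(x) ** 2, -20.0, 20.0)

print(E_K1)        # approximately 0.25
print(E_cl(1e-4))  # approaches 0.5 as a -> 0
```

The factor of two between the two limits reflects the splitting of $\phi_K$ into the widely separated pair $\phi_{K_1}$, $\phi_{K_2}$.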
But the particular case $a\equiv0$ is not reflection symmetric, though $x\leftrightarrow-x$ swaps the potentials generated by $\phi_{K_1}$ and $\phi_{K_2}$. The loss of this invariance disables the separation of the fluctuation modes into symmetric and anti--symmetric channels, which is the one dimensional version of partial wave decomposition. Even more strikingly, the different topological structures in the $a=0$ case cause $\lim_{x\to-\infty}V(x)\ne\lim_{x\to\infty}V(x)$, which implies different masses (dispersion relations) for the fluctuations at positive and negative spatial infinity. \section{Spectral Methods and Vacuum Polarization Energy} The formula for the VPE, Eq.~(\ref{eq:master}) below, can be derived from first principles in quantum field theory by integrating the vacuum matrix element of the energy density operator\cite{Graham:2002xq}. It is, however, also instructive to count the energy levels when summing the changes of the zero point energies. This sum is $\mathcal{O}(\hbar)$ and thus one loop order ($\hbar=1$ for the units used here). We call the single particle energies of fluctuations in the soliton type background $\omega_n$ while the $\omega_n^{(0)}$ are those for the trivial background. Then the VPE formally reads \begin{equation} E_{\rm vac}=\frac{1}{2}\sum_n\left(\omega_n-\omega_n^{(0)}\right) \Bigg|_{\rm ren.} =\frac{1}{2}\sum_j \epsilon_j + \frac{1}{2} \int_0^\infty dk\, \omega_k\,\Delta\,\rho_{\rm ren.}(k)\,, \label{eq:sum0} \end{equation} where the subscript indicates that renormalization is required to obtain a finite and meaningful result. On the right hand side we have separated the explicit bound state (sum of energies $\epsilon_j$) and continuum (integral over momentum $k$) contributions. The latter involves $\Delta\,\rho_{\rm ren.}(k)$ which is the (renormalized) change of the level density induced by the soliton background. Let $L$ be a large distance away from the localized soliton background.
For $x\sim L$ the stationary wave--function of the quantum fluctuation is a phase shifted plane wave $\psi(x)\sim{\rm sin}\left[kx+\delta(k)\right]$, where $\delta(k)$ is the phase shift (of a particular partial wave) that is obtained from scattering off the potential, Eq.~(\ref{eq:fltpot1}). The continuum levels are counted by imposing the boundary condition $\psi(L)=0$ and subsequently taking the limit $L\to\infty$. The number $n(k)$ of levels with momentum less than or equal to $k$ is then extracted from $kL+\delta(k)=n(k)\pi$. The corresponding number in the absence of the soliton is $n^{(0)}(k)=kL/\pi$, trivially. From these the change of the level density is computed via \begin{equation} \Delta\,\rho(k)=\lim_{L\to\infty}\frac{d}{dk}\left[n(k)-n^{(0)}(k)\right] =\frac{1}{\pi}\frac{d\delta(k)}{dk}\,, \label{eq:KFL} \end{equation} which is often referred to as the Krein--Friedel--Lloyd formula\cite{Faulkner:1977aa}. Note that $\Delta\,\rho(k)$ is a finite quantity; but ultra--violet divergences appear in the momentum integral in Eq.~(\ref{eq:sum0}) and originate from the large $k$ behavior of the phase shift. This behavior is governed by the Born series \begin{equation} \delta(k)=\delta^{(1)}(k)+\delta^{(2)}(k)+\ldots \label{eq:born} \end{equation} where the superscript reflects the power at which the potential, Eq.~(\ref{eq:fltpot1}) contributes. Though this series does not converge\footnote{For example, in three space dimensions the series yields $\delta(0)\to0$ which contradicts Levinson's theorem.} for all $k$, it describes the large $k$ behavior well since $\delta^{(N+1)}(k)/\delta^{(N)}(k)\propto 1/k^2$ when $k\to\infty$. Hence replacing \begin{equation} \Delta\,\rho(k)\to\left[\Delta\,\rho(k)\right]_N= \frac{1}{\pi}\frac{d}{dk}\left[\delta(k)-\delta^{(1)}(k)-\delta^{(2)}(k)-\ldots- \delta^{(N)}(k)\right] \label{eq:born1} \end{equation} produces a finite integral in Eq.~(\ref{eq:sum0}) when $N$ is taken sufficiently large.
We have to add back the subtractions that come with this replacement. Here the spectral methods take advantage of the fact that each term in the subtraction is uniquely related to a power of the background potential and that Feynman diagrams represent an alternative expansion scheme for the vacuum polarization energy \begin{equation} \mbox{\parbox[l]{2.2cm}{\vskip-1.9cm $E_{\rm FD}^{N}[V]=$}} \epsfig{file=fdseries.eps,height=1.9cm,width=8cm}\,. \label{eq:FDs} \end{equation} The full lines are the free propagators of the quantum fluctuations and the dashed lines denote insertions of the background potential, Eq.~(\ref{eq:fltpot1}), after Fourier transformation. These Feynman diagrams are regularized with standard techniques, most commonly in dimensional regularization. They can thus be straightforwardly combined with the counterterm contribution $E_{\rm CT}[V]$, whose coefficients are fully determined in the perturbative sector of the theory. This combination remains finite when the regulator is removed. The generalization to multiple channels is straightforward by finding a possibly momentum dependent diagonalization of the scattering matrix $S(k)$ and summing the so--obtained eigenphase shifts. This replaces\footnote{The proper Riemann sheet of the logarithm is identified by constructing a smooth function that vanishes as $k\to\infty$.} $\delta(k)\,\longrightarrow\,(1/2{\rm i}){\rm ln}{\rm det}\,S(k)$ and analogously for the Born expansions, Eqs.~(\ref{eq:born}) and~(\ref{eq:born1}). Since after Born subtraction the integral converges, we integrate by parts to avoid numerical differentiation and to stress that the VPE is measured with respect to the translationally invariant vacuum.
We then find the renormalized VPE to be, with the sum over partial waves re--inserted, \begin{equation} E_{\rm vac}[V]=\sum_\ell D_\ell\left\{\frac{1}{2}\sum_j\left(\epsilon_{\ell j}-m\right) - \int_0^\infty \frac{dk}{4\pi{\rm i}} \frac{k}{\sqrt{k^2+m^2}}\, \left[{\rm ln}\,{\rm det}\,S(k)\right]_{N}\right\}+E_{\rm FD}^{N}[V]+E_{\rm CT}[V]\,. \label{eq:master} \end{equation} Here $D_\ell$ is the degree of degeneracy, {\it e.g.} $D_\ell=2\ell+1$ in three space dimensions. The subscript $N$ refers to the subtraction of $N$ terms of the Born expansion, as {\it e.g.} in Eq.~(\ref{eq:born1}). We stress that, with $N$ taken sufficiently large, both the expression in curly brackets and the sum $E_{\rm FD}^{N}[V]+E_{\rm CT}[V]$ are individually ultra--violet finite and no cut--off parameter is needed\cite{Farhi:1998vx}. \section{Scattering Data in One Space Dimension} In this section we obtain the scattering matrix for general one dimensional problems and develop an efficient method for its numerical evaluation. This will be at the center of the novel approach to compute the VPE. We first review the standard approach that is applicable when $V(-x)=V(x)$, {\it e.g.} left panel of figure \ref{fig:fltpot}. Then the partial wave decomposition separates symmetric, $\psi_S(-x)=\psi_S(x)$, and anti--symmetric, $\psi_A(-x)=-\psi_A(x)$, channels. The respective phase shifts can be straightforwardly obtained in a variant of the variable phase approach\cite{Calegero:1967} by parameterizing $\psi(x)={\rm e}^{{\rm i}[kx+\beta(k,x)]}$ and imposing the obvious boundary conditions $\psi^\prime_S(0)=0$ and $\psi_A(0)=0$. (The prime denotes the derivative with respect to $x$.) The wave--equation turns into a non--linear differential equation for the phase function $\beta(k,x)$.
When solved subject to $\lim_{x\to\infty}\beta(k,x)=0$ and $\lim_{x\to\infty}\beta^\prime(k,x)=0$ the scattering matrix is given by\cite{Graham:2009zz} \begin{equation} \frac{1}{2{\rm i}}\,{\rm ln}\,{\rm det}\, S(k)= -2{\sf Re}[\beta(k,0)] -{\rm arctan}\frac{{\sf Im}[\beta^\prime(k,0)]}{k+{\sf Re}[\beta^\prime(k,0)]}\,. \label{eq:sym1} \end{equation} Linearizing and iterating the differential equation for $\beta(k,x)$ yields the Born series, Eq.~(\ref{eq:born}). At this point it is advantageous to use the fact that scattering data can be continued to the upper half complex momentum plane\cite{Newton:1982qc}. That is, when writing $k={\rm i} t$, the Jost function, whose phase is the scattering phase shift when $k$ is real, is analytic for ${\sf Re}[t]\ge0$. Furthermore the Jost function has simple zeros at imaginary $k={\rm i}\kappa_j$ representing the bound states. Formulating the momentum integral from Eq.~(\ref{eq:master}) as a contour integral automatically collects the bound state contribution and we obtain a formula as simple as\cite{Graham:2002xq,Graham:2009zz} \begin{equation} E^{\rm (S)}_{\rm vac}=\int_{m}^\infty \frac{dt}{2\pi}\, \frac{t}{\sqrt{t^2-m^2}}\,\left[ {\rm ln}\left\{g(t,0)\left(g(t,0)-\frac{1}{t}g^\prime(t,0)\right)\right\} \right]_N +E_{\rm FD}^{N}[V]+E_{\rm CT}[V] \label{eq:EvacJost} \end{equation} for the VPE. Here $g(t,x)$ is the non--trivial factor of the Jost solution whose $x\to0$ properties determine the Jost function. The factor function solves the differential equation \begin{equation} g^{\prime\prime}(t,x)=2tg^\prime(t,x)+V(x)g(t,x)\,, \label{eq:DEQJost} \end{equation} with the boundary conditions $g(t,\infty)=1$ and $g^\prime(t,\infty)=0$; iterating $g(t,x)=1+g^{(1)}(t,x)+g^{(2)}(t,x)+\ldots$ produces the Born series. In general, however, the potential $V(x)$ is not reflection invariant and no partial wave decomposition is applicable.
Even more, there may exist different masses for the quantum fluctuations \begin{equation} m_L^2=\lim_{x\to-\infty}V(x)\qquad {\rm and}\qquad m_R^2=\lim_{x\to\infty}V(x) \label{eq:mass} \end{equation} as it is the case for the $\phi^6$ model with $a=0$, {\it cf.} right panel of figure \ref{fig:fltpot}. We adopt the convention that $m_L\le m_R$, otherwise we simply swap $x\to-x$. Three different cases must be considered. First, above threshold both momenta $k$ and $q=\sqrt{k^2+m_L^2-m_R^2}$ are real. To formulate the variable phase approach we introduce the matching point $x_m$ and parameterize \begin{align} x\le x_m:&\quad \psi(x)=A(x){\rm e}^{{\rm i}kx}\quad & A^{\prime\prime}(x)=-2{\rm i}kA^\prime(x)+V_p(x)A(x)\,\,\,\cr x\ge x_m:&\quad \psi(x)=B(x){\rm e}^{{\rm i}qx}\quad & B^{\prime\prime}(x)=-2{\rm i}qB^\prime(x)+V_p(x)B(x)\,. \label{eq:match1} \end{align} Observe that the {\it pseudo potential} \begin{equation} V_p(x)=V(x)-m_L^2+(m_L^2-m_R^2)\Theta(x-x_m) \label{eq:pseudoV} \end{equation} vanishes at positive and negative spatial infinity. The differential equations~(\ref{eq:match1}) are solved for the boundary conditions $A(-\infty)=B(\infty)=1$ and $A^\prime(-\infty)=B^\prime(\infty)=0$. There are two linearly independent solutions $\psi_1$ and $\psi_2$ that define the scattering matrix $S=(s_{ik})$ via the asymptotic behaviors \begin{equation} \psi_1(x)\sim \begin{cases} {\rm e}^{{\rm i}kx}+s_{12}(k){\rm e}^{-{\rm i}kx}\quad &{\rm as}\quad x\to-\infty\cr s_{11}(k){\rm e}^{{\rm i}qx}\quad &{\rm as}\quad x\to\infty \end{cases} \hspace{0.7cm}{\rm and}\hspace{0.7cm} \psi_2(x)\sim \begin{cases} s_{22}(k){\rm e}^{-{\rm i}kx}\quad &{\rm as}\quad x\to-\infty\cr {\rm e}^{-{\rm i}qx}+s_{21}(k){\rm e}^{{\rm i}qx}\quad &{\rm as}\quad x\to\infty\,.
\end{cases} \hspace{0.7cm} \label{eq:defS} \end{equation} By equating the solutions and their derivatives at $x_m$ the scattering matrix is obtained from the factor functions as \begin{align} S(k)=&\begin{pmatrix} {\rm e}^{-{\rm i} qx_m} & 0 \cr 0 & {\rm e}^{{\rm i} kx_m} \end{pmatrix} \begin{pmatrix} B & -A^\ast \cr iqB+B^\prime & ikA^\ast-A^{\prime\ast} \end{pmatrix}^{-1}\cr&\hspace{1cm}\times \begin{pmatrix} A & -B^\ast \cr ikA+A^\prime & iqB^\ast-B^{\prime\ast} \end{pmatrix} \begin{pmatrix} {\rm e}^{{\rm i} kx_m} & 0 \cr 0 & {\rm e}^{-{\rm i} qx_m} \end{pmatrix} \hspace{2cm} {\rm for}\quad k\ge\sqrt{m_R^2-m_L^2}\,, \label{eq:Smat1} \end{align} where $A=A(x_m)$, etc. The second case refers to $k\le\sqrt{m_R^2-m_L^2}$ still being real but $q={\rm i}\kappa$ becoming imaginary with $\kappa=\sqrt{m_R^2-m_L^2-k^2}$. The parameterization of the wave function for $x>x_m$ changes to $\psi(x)=B(x){\rm e}^{-\kappa x}$ yielding the differential equation $B^{\prime\prime}(x)=\kappa B^\prime(x)+V_p(x)B(x)$. The scattering matrix then is a single unitary number \begin{equation} S(k)=-\,\frac{A\left(B^\prime/B-\kappa-ik\right)-A^\prime} {A^\ast\left(B^\prime/B-\kappa+ik\right)-A^{\prime\ast}}\, {\rm e}^{2{\rm i} kx_m} \hspace{2cm} {\rm for}\quad 0\le k\le\sqrt{m_R^2-m_L^2}\,. \label{eq:Smat2} \end{equation} It is worth noting that $V_p\equiv0$ corresponds to the step function potential. In that case the above formalism obviously yields $A\equiv B\equiv1$ and reproduces the textbook result \begin{equation} \delta_{\rm step}(k)= \begin{cases} (k-q)x_m\,,\quad & {\rm for}\quad k\ge\sqrt{m_R^2-m_L^2}\cr kx_m-{\arctan}\left(\frac{\sqrt{m_R^2-m_L^2-k^2}}{k}\right)\,, \quad & {\rm for}\quad k\le\sqrt{m_R^2-m_L^2}\,. \end{cases} \label{eq:step1} \end{equation} In the third regime, $k$ itself becomes imaginary and we need to identify the bound state energies $\epsilon\le m_L$ that enter Eq.~(\ref{eq:master}).
We define real variables $\lambda=\sqrt{m_L^2-\epsilon^2}$ and $\kappa(\lambda) =\sqrt{m_R^2-m_L^2+\lambda^2}$ and solve the wave equation subject to the initial conditions \begin{equation} \psi_L(x_{\rm min})=1\,,\qquad \psi^\prime_L(x_{\rm min})=\lambda \qquad {\rm and}\qquad \psi_R(x_{\rm max})=1\,,\qquad \psi^\prime_R(x_{\rm max})=-\kappa(\lambda)\,, \label{eq:bound1} \end{equation} where $x_{\rm min}$ and $x_{\rm max}$ represent negative and positive spatial infinity, respectively. Continuity of the wave function requires the Wronskian determinant \begin{equation} \psi_L(x_m)\psi^\prime_R(x_m)-\psi_R(x_m)\psi^\prime_L(x_m)\stackrel{!}{=}0\,, \label{eq:bound2} \end{equation} to vanish. This occurs only for discrete values $\lambda_j$ that in turn determine the bound state energies\footnote{The bosonic dispersion relation does not exclude imaginary energies that would hamper the definition of the quantum theory. This case does not occur here.} $\epsilon_j=\sqrt{m_L^2-\lambda_j^2}$. \section{One Loop Renormalization in One Space Dimension} To complete the computation of the VPE we need to specify the renormalization procedure. We commence by identifying the ultra--violet singularities. This is simple in $D=1+1$ dimensions at one loop order as only the first diagram on the right hand side of Eq.~(\ref{eq:FDs}) is divergent. Furthermore, this diagram is local in the sense that $E_{\rm FD}^{(1)}\propto \frac{1}{\epsilon}\int dx\,\left[V(x)-m_L^2\right]$, where $\epsilon$ is the regulator ({\it e.g.} from dimensional regularization). Hence a counterterm can be constructed that removes not only the singularity but the entire diagram. This is the so--called {\it no tadpole} condition and implies \begin{equation} E_{\rm FD}^{(1)}+E_{\rm CT}^{(1)}=0\,. \label{eq:notad} \end{equation} In the next step we must identify the corresponding Born term in Eq.~(\ref{eq:born}).
To this end it is important to note that the counterterm is a functional of the full field $\phi(x)$ that induces the background potential, Eq.~(\ref{eq:fltpot1}). Hence we must find the Born approximation for $V(x)-m_L^2$ rather than the one for the pseudo--potential $V_p(x)$, Eq.~(\ref{eq:pseudoV}). The standard formulation of the Born approximation as an integral over the potential is, unfortunately, not applicable to $V(x)-m_L^2$ since it does not vanish at positive spatial infinity. However, we note that $V(x)-m_L^2=V_p(x)+(m_R^2-m_L^2)\Theta(x-x_m)=V_p(x)+V_{\rm step}(x)$ and that, by definition, the first order correction is linear in the background, and thus additive. We may therefore write \begin{equation} \delta^{(1)}(k)=\delta^{(1)}_P(k)+\delta^{(1)}_{\rm step}(k) =\frac{-1}{2k}\int_{-\infty}^\infty dx\, V_p(x)\Big|_{x_m}+\frac{x_m}{2k}\left(m_R^2-m_L^2\right) =\frac{-1}{2k}\int_{-\infty}^\infty dx\, V_p(x)\Big|_{0}\,. \label{eq:born2} \end{equation} The Born approximation for the step function potential has been obtained from the large $k$ expansion of $\delta_{\rm step}(k)$ in Eq.~(\ref{eq:step1}). The subscripts in Eq.~(\ref{eq:born2}) indicate that the definition of the pseudo--potential, Eq.~(\ref{eq:pseudoV}) induces an implicit dependence on the (artificial) matching point $x_m$. Notably, this dependence disappears from the final result! This is the first step towards establishing the matching point independence of the VPE. The integrals in $E_{\rm FD}^{(1)}$ and $E_{\rm CT}^{(1)}$ require further regularization when $m_L\ne m_R$. In that case no further {\it finite renormalization} beyond the no tadpole condition is possible. \section{Comparison with Known Results} Before presenting detailed numerical results for VPEs, we note that all simulations were verified to produce $S^\dagger S=\mbox{{\sf 1}\zr{-0.16}\rule{0.04em}{1.55ex}\zr{0.1}}$ after attaching pertinent flux factors to the scattering matrix, Eq.~(\ref{eq:defS}).
These flux factors are not relevant for the VPE as they multiply to unity under the determinant in Eq.~(\ref{eq:master}). In addition the numerically obtained phase shifts, {\it i.e.} $(1/2{\rm i}){\rm ln}{\rm det}\,S$, have been monitored to not vary with $x_m$. Since this is also the case for the bound state energies, the VPE is verified to be independent of the unrestricted choice for the matching point. The VPE calculation based on Eq.~(\ref{eq:master}) has been applied to the $\phi^4$ kink and sine--Gordon soliton models that are defined via the potentials \begin{equation} U_K(\phi)=\fract{1}{2}\left(\phi^2-1\right)^2 \qquad {\rm and}\qquad U_{\rm SG}(\phi)=4\left(1-{\rm cos}(\phi)\right)\,, \label{eq:known1} \end{equation} respectively. The soliton solutions $\phi_K={\rm tanh}(x-x_0)$ and $\phi_{\rm SG}(x)=4{\rm arctan}\left({\rm e}^{-2(x-x_0)}\right)$ induce the scattering potentials \begin{equation} V_K(x)-m^2=6\left[{\rm tanh}^2(x-x_0)-1\right] \qquad {\rm and}\qquad V_{\rm SG}(x)-m^2=8\left[{\rm tanh}^2[2(x-x_0)]-1\right]\,. \label{eq:known2} \end{equation} In both cases we have identical dispersion relations at positive and negative spatial infinity: $m=m_L=m_R=2$ for the dimensionless units introduced above. The simulation based on Eq.~(\ref{eq:master}) reproduces the established results $E_{\rm vac}^{(K)}=\fract{1}{2\sqrt{3}}-\fract{3}{\pi}$ and $E_{\rm vac}^{({\rm SG})}=-\fract{2}{\pi}$\cite{Ra82}. These solitons break translational invariance spontaneously and thus produce zero mode bound states in the fluctuation spectrum. In addition the $\phi^4$ kink possesses a bound state with energy $\sqrt{3}$\cite{Ra82}. All bound states are easily observed using Eq.~(\ref{eq:bound2}). The potentials in Eq.~(\ref{eq:known2}) are reflection symmetric about the soliton center $x_0$ and the method of Eq.~(\ref{eq:EvacJost}) can straightforwardly be applied\cite{Graham:2009zz}.
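The kink bound states just quoted can also be located directly from the factor function of Eq.~(\ref{eq:DEQJost}): the two channel Jost functions $g(t,0)$ and $g(t,0)-g^\prime(t,0)/t$ that enter Eq.~(\ref{eq:EvacJost}) vanish at $t=\kappa_j=\sqrt{m^2-\epsilon_j^2}$. A minimal numerical sketch (ours, using scipy; the integration range and root brackets are ad hoc choices, not part of the original computation) for $V_K(x)-m^2=-6\,{\rm sech}^2(x)$:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

V = lambda x: -6.0 / np.cosh(x) ** 2  # V_K(x) - m^2 for the phi^4 kink (x_0 = 0)

def jost_factor(t, x_max=12.0):
    """Solve g'' = 2 t g' + V(x) g downward from x_max with
    g(x_max) = 1, g'(x_max) = 0; return g(t,0) and g'(t,0)."""
    sol = solve_ivp(lambda x, y: [y[1], 2.0 * t * y[1] + V(x) * y[0]],
                    (x_max, 0.0), [1.0, 0.0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1], sol.y[1, -1]

def F_antisym(t):   # antisymmetric channel Jost function: g(t,0)
    return jost_factor(t)[0]

def F_sym(t):       # symmetric channel Jost function: g(t,0) - g'(t,0)/t
    g, gp = jost_factor(t)
    return g - gp / t

kappa_zero = brentq(F_sym, 1.5, 2.5)       # zero mode: kappa = 2, epsilon = 0
kappa_shape = brentq(F_antisym, 0.5, 1.5)  # shape mode: kappa = 1, epsilon = sqrt(3)
```

The roots $\kappa=2$ and $\kappa=1$ translate into the bound state energies $\epsilon=\sqrt{4-\kappa^2}=0$ and $\sqrt{3}$ quoted above.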
However, this method singles out $x_0$ (typically set to $x_0=0$) to determine the boundary condition in the differential equation and therefore cannot be used to establish translational invariance of the VPE. On the contrary, the boundary conditions for Eq.~(\ref{eq:match1}) are not at all sensitive to $x_0$ and we have applied the present method to compute the VPE for various choices of $x_0$, all yielding the same numerical result. The next step is to compute the VPE for asymmetric background potentials that have $m=m_L=m_R$. Lacking a soliton model that produces such a potential, we simply consider a two--parameter set of functions \begin{equation} V_p(x)\,\longrightarrow\,V_{R,\sigma}(x)=Ax{\rm e}^{-x^2/\sigma^2} \label{eq:asym1} \end{equation} for the pseudo potential in Eq.~(\ref{eq:match1}). Although Eq.~(\ref{eq:EvacJost}) is not directly applicable, it is possible to relate $V_{R,\sigma}(x)$ to the symmetric potential \begin{equation} V_R(x)=A\left[(x+R){\rm e}^{-\frac{(x+R)^2}{\sigma^2}} -(x-R){\rm e}^{-\frac{(x-R)^2}{\sigma^2}}\right]=V_R(-x) \label{eq:asym2} \end{equation} and apply Eq.~(\ref{eq:EvacJost}). In the limit $R\to\infty$ interference effects between the two structures around $x=\pm R$ disappear, resulting in twice the VPE of the potential in Eq.~(\ref{eq:asym1}). The numerical comparison is listed in table \ref{tab:asym}. \begin{table}[t] \centerline{ \begin{tabular}{c|cccccc|c} $R$ & 1.0 & 1.5 & 2.0 & 2.5 & 3.0 & 3.5 & present \cr \hline $A=2.5\,,\,\sigma=1.0$ & -0.0369 & -0.0324 & -0.0298 & -0.0294 & -0.0293 & -0.0292 & -0.0293 \cr \hline\hline $R$ & 4.0 & 5.0 & 6.0 & 7.0 & 8.0 & 9.0 & present \cr \hline $A=0.2\,,\,\sigma=4.0$ & -0.0208 & -0.0188 & -0.0170 & -0.0161 & -0.0158 & -0.0157 & -0.0157 \end{tabular}} \caption{\label{tab:asym}The $R$ dependent data are half the VPE for the symmetrized potential, Eq.~(\ref{eq:asym2}) computed from Eq.~(\ref{eq:EvacJost}).
The data in the column {\it present} list the results obtained from Eq.~(\ref{eq:master}) for the original potential, Eq.~(\ref{eq:asym1}).} \end{table} Indeed the two approaches produce identical results as $R\to\infty$. The symmetrized version converges only slowly for wide potentials (large $\sigma$), causing numerical difficulties that do not occur at all in the present approach. \section{Vacuum Polarization Energies in the $\mathbf{\phi^6}$ Model} We first discuss the VPE for the $a\ne0$ case. A typical background potential is shown in the left panel of figure \ref{fig:phi6pot}. Obviously it is reflection invariant and thus the method based on Eq.~(\ref{eq:EvacJost}) is applicable. In table \ref{tab:phi6a} we also compare our results to those from the heat kernel expansion of Ref.\cite{AlonsoIzquierdo:2011dy} since, to our knowledge, it is the only approach that has also been applied to the asymmetric $a=0$ case in Ref.\cite{AlonsoIzquierdo:2002eb}. \begin{table}[t] \centerline{ \begin{tabular}{c|ccccccc} $a$ & 0.001 & 0.01 & 0.05 & 0.1 & 0.2 & 1.0 & 1.5 \cr \hline heat kernel, Ref.\cite{AlonsoIzquierdo:2011dy}~~ & -1.953 & -1.666 & -1.447 & -1.349 & -1.239 & -1.101 & -1.293\cr parity sep., Eq.~(\ref{eq:EvacJost})~~ & -2.145 & -1.840 & -1.595 & -1.461 & -1.298 & -1.100 & -1.295 \cr present, Eq.~(\ref{eq:master}) & -2.146 & -1.841 & -1.596 & -1.462 & -1.297 & -1.102 & -1.297 \end{tabular}} \caption{\label{tab:phi6a}Different methods to compute the VPE of the $\phi^6$ soliton for $a\ne0$.} \end{table} Not surprisingly, the two methods based on scattering data agree within numerical precision for all values of $a$. The heat kernel results also agree for moderate and large~$a$, but for small values deviations of the order of 10\% are observed. The heat kernel method relies on truncating an expansion of the exact heat kernel about the heat kernel in the absence of a soliton.
Although in Ref.\cite{AlonsoIzquierdo:2011dy} the expansion has been carried out to eleventh(!) order, at the price of a very cumbersome calculation, this does not seem to provide sufficient accuracy for small $a$. We are now in a position to discuss the VPE for $a=0$ associated with the soliton $\phi_{K_1}(x)$ from Eq.~(\ref{eq:phi61}). The potentials for the fluctuations and the resulting scattering data are shown in figure \ref{fig:phi6}. By construction, the pseudo potential jumps at $x_m=0$. However, neither the phase shift nor the bound state energy (the zero mode is the sole bound state) depends on $x_m$. \begin{figure}[t] \centerline{ \epsfig{file=pot.eps,width=6cm,height=2.5cm}\hspace{2cm} \epsfig{file=delta.eps,width=6cm,height=2.5cm}} \caption{\label{fig:phi6}Left panel: potential ($V$) and pseudo potential ($V_p$) for fluctuations about a $\phi^6$ soliton with $a=0$. The pseudo potential is shown for $x_m=0$. Right panel: resulting phase shift, {\it i.e.} $(1/2{\rm i}) {\rm ln}{\rm det}\, S$, full line and its Born approximation, dashed line.} \end{figure} As expected, the phase shift has a threshold cusp at $\sqrt{m_R^2-m_L^2}=\sqrt{3}$ and approaches $\frac{\pi}{2}$ at zero momentum. This is consistent with Levinson's theorem in one space dimension\cite{Barton:1984py} and the fact that there is only a single bound state. In total we find a significant cancellation between the bound state and continuum contributions \begin{equation} E_{\rm vac}=-0.5+0.4531=-0.0469\,. \label{eq:main} \end{equation} The result\footnote{The factor $\sqrt{2}$ is added to adjust the datum from Ref.\cite{AlonsoIzquierdo:2002eb} to the present scale.} $-0.1264\sqrt{2}=-0.1788$ of Ref.\cite{AlonsoIzquierdo:2002eb} was estimated relative to $V_\alpha(x)=\frac{3}{2}\left[1+{\rm tanh}(\alpha x)\right]$ for $\alpha=1$. Our results for various values of $\alpha$ are listed in table \ref{tab:tanh}.
These results are consistent with $V_\alpha(x)$ turning into a step function for large $\alpha$. For the particular value $\alpha=1$ our relative VPE thus is $\Delta E_{\rm vac}=-0.0469-0.1660=-0.2129$. In view of the results shown in table \ref{tab:phi6a}, especially for small $a$, these data match within the validity of the approximations applied in the heat kernel calculation. \begin{table}[t] \centerline{ \begin{tabular}{c|ccccc|c} $\alpha$ & 1.0 & 2.0 & 5.0 & 10.0 & 30.0 & step\cr \hline $E_{\rm vac}$& 0.1660 & 0.1478 & 0.1385 & 0.1363 & 0.1355 & 0.1355 \end{tabular}} \caption{\label{tab:tanh}VPE for the background potential $V_\alpha(x)$ defined in the main text. The entry {\it step} gives the VPE for the step function potential $V(x)=3\Theta(x)$ using Eq.~(\ref{eq:step1}) and its Born approximation from Eq.~(\ref{eq:born2}) for $x_m=0$.} \end{table} \section{Translational Variance} So far we have computed the VPE for the $\phi^6$ model soliton centered at $x_0=0$. We have already mentioned that the VPE of the kink and sine--Gordon solitons is translationally invariant. This has also been verified numerically for the asymmetric background, Eq.~(\ref{eq:asym1}). In those cases the two vacua at $x\to\pm\infty$ are equivalent and $q=k$ in Eq.~(\ref{eq:defS}). When shifting $x\to x+x_0$, the transmission coefficients ($s_{11}$ and $s_{22}$) remain unchanged relative to the amplitude of the in--coming wave while the reflection coefficients ($s_{12}$ and $s_{21}$) acquire opposite phases. Consequently, ${\rm det}\,S$ is invariant. For unequal momenta this invariance is lost and the VPE depends on $x_0$. This is reflected by the results in table \ref{tab:shift}, in which we present the VPE for $V_\alpha(x)=\frac{3}{2}\left[1+{\rm tanh}(\alpha (x+x_0))\right]$ and the $\phi^6$ model soliton $1/\sqrt{1+{\rm e}^{-2(x+x_0)}}$.
\begin{table}[b] \centerline{ \begin{tabular}{c|ccccc} &\multicolumn{5}{c}{$E_{\rm vac}$}\cr \hline $x_0$& -2 & -1 & 0 & 1 & 2\cr \hline $\alpha=5$ & 0.341 & 0.240 & 0.139 & 0.037 & -0.064\cr $\alpha=2$ & 0.351 & 0.250 & 0.148 & 0.046 & -0.057\cr $\alpha=1$ & 0.369 & 0.267 & 0.166 & 0.064 & -0.038\cr $\phi^6$ & 0.154 & 0.053 & -0.047 & -0.148 & -0.249\cr $\Delta E_{\rm vac}$ & -0.215 & -0.214 & -0.213 & -0.212 & -0.211 \end{tabular}} \caption{\label{tab:shift}The VPE as a function of the position of the center of the potential for $V_\alpha$ and the $\phi^6$ model soliton. $\Delta E_{\rm vac}$ is the difference between the VPEs of the latter and $V_1$.} \end{table} Obviously there is a linear dependence of the VPE on $x_0$, with a slope that is insensitive to the specific structure of the potential. This insensitivity is consistent with the above remark on the difference between the two momenta. Increasing $x_0$ shifts the vacuum with the bigger mass towards negative infinity, thereby removing states from the spectrum and hence decreasing the VPE. The effect is immediately linked to varying the width of a symmetric barrier potential with height $m_R^2-m_L^2=3$: \begin{equation} V^{(x_0)}_{\rm SB}(x)=3\Theta\left(\frac{x_0}{2}-|x|\right)\,. \label{eq:symbarr} \end{equation} For this potential the Jost solution of Eq.~(\ref{eq:DEQJost}) can be obtained analytically\cite{Weigel:2016zbs} and the VPE has the limit \begin{equation} \lim_{x_0\to\infty}\frac{E_{\rm vac}[V^{(x_0)}_{\rm SB}]}{x_0}\approx-0.102\,, \label{eq:barrlim} \end{equation} which again reveals the background independent slope observed above. Having quantitatively determined the translational variance of the VPE, it is tempting to subtract $E_{\rm vac}\left[V^{(x_0)}_{\rm SB}\right]$. Unfortunately this is not unique because $x_0$ is not the unambiguous center of the soliton.
For example, employing the classical energy density $\epsilon(x)$ to define the position of the soliton $1/\sqrt{1+{\rm e}^{-2(x-\overline{x})}}$, which is formally centered at $\overline{x}$, as an expectation value leads to \begin{equation} x_s=\frac{\int dx\, x\, \epsilon(x)}{\int dx\, \epsilon(x)}=\overline{x}+\fract{1}{2}\,. \end{equation} This changes the VPE by approximately $0.050$. This ambiguity also hampers the evaluation of the VPE as half that of a widely separated kink--antikink pair \begin{equation} \phi_{K\overline{K}}(x)=\left[1+{\rm e}^{2(x-\overline{x})}\right]^{-1/2} +\left[1+{\rm e}^{-2(x+\overline{x})}\right]^{-1/2}-1 \label{eq:pair} \end{equation} similarly to the approach for Eq.~(\ref{eq:asym2}). The corresponding background potential $V_B$ is shown in figure \ref{fig:plotbg}. \begin{figure}[t] \centerline{\epsfig{file=plotbg.eps,width=8cm,height=3cm}} \caption{\label{fig:plotbg}Background potential for the kink--antikink pair, Eq.~(\ref{eq:pair}), for different separations.} \end{figure} For computing the VPE, the large contribution from the constant but non--zero potential in the regime $|x|\lesssim \overline{x}$ should be eliminated. The above considerations lead to \begin{align} &\fract{1}{2}\lim_{\bar{x}\to\infty}\left\{E_{\rm vac}[V_B] -2E_{\rm vac}[V^{(2\overline{x})}_{\rm SB}]\right\}=-0.170 \quad{\rm and}\quad \fract{1}{2}\lim_{\bar{x}\to\infty}\left\{E_{\rm vac}[V_B] -2E_{\rm vac}[V^{(2x_s)}_{\rm SB}]\right\}=-0.120\,. \end{align} When the VPE from $V^{(2(\overline{x}+1.2))}_{\rm SB}$ is subtracted, the main result, Eq.~(\ref{eq:main}), is matched. Eventually this can be used to define the center of the soliton. Now we also understand why the VPE for $a\ne0$ diverges as $a\to0$, {\it cf.} table \ref{tab:phi6a}. In that limit the kink and antikink structures separate and the ``vacuum'' in between produces an ever increasing contribution (in magnitude).
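The universal slope itself is easy to cross-check numerically: evaluating the momentum integral that appears in Eq.~(\ref{eq:xminf}) of the next section reproduces the magnitude $\approx0.101$, consistent with Eq.~(\ref{eq:barrlim}). A short illustrative Python sketch (not part of the paper's calculation):

```python
import numpy as np
from scipy.integrate import quad

# Slope of the VPE per unit shift for m_L = 1, m_R = 2:
# the two pieces of the momentum integral below and above the
# threshold k = sqrt(3), cf. Eq. (xminf) of the text.
f_below = lambda k: (2*k**2 - 3) / np.sqrt(k**2 + 1) / (4*np.pi)
f_above = lambda k: (2*k**2 - 2*k*np.sqrt(k**2 - 3) - 3) / np.sqrt(k**2 + 1) / (4*np.pi)

I1, _ = quad(f_below, 0.0, np.sqrt(3.0))
I2, _ = quad(f_above, np.sqrt(3.0), np.inf)
slope = -(I1 + I2)
print(slope)  # approximately 0.101
```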
Finally, we discuss the link between the translational variance and the Krein--Friedel--Lloyd formula, Eq.~(\ref{eq:KFL}). We have already reported the VPE for the step function potential when $x_m=0$. We can also consider $x_m\to\infty$: \begin{align} \frac{E_{\rm vac}[V^{(x_m)}_{\rm step}]}{|x_m|}\,\to\,&-{\rm sign}(x_m)\, \left[\int_0^{\sqrt{3}}\frac{dk}{4\pi}\,\frac{2k^2-3}{\sqrt{k^2+1}} +\int_{\sqrt{3}}^\infty\frac{dk}{4\pi}\, \frac{2k^2-2k\sqrt{k^2-3}-3}{\sqrt{k^2+1}}\right] \approx0.101\,{\rm sign}(x_m)\,, \label{eq:xminf} \end{align} reproducing the linear dependence on the position from above. Formally, {\it i.e.} without Born subtraction, the integral in Eq.~(\ref{eq:xminf}) is dominated by \begin{align} \int \frac{dk}{2\pi}\,\frac{k}{\sqrt{k^2+1}}\left[k-\sqrt{k^2-3}\right] \sim \int \frac{dk}{2\pi}\, \sqrt{k^2+1} \frac{d}{dk}\left[\sqrt{k^2-3}-k\right] =\int \frac{dk}{2\pi}\, \sqrt{k^2+1}\, \frac{d}{dk}\left[q-k\right]\,. \end{align} Essentially this is the part of the level density that originates from the different dispersion relations at positive and negative spatial infinity. \section{Conclusion} We have advanced the spectral methods for computing vacuum polarization energies (VPE) so that they also apply to static localized background configurations in one space dimension that do not permit a parity decomposition of the quantum fluctuations. The essential progress is the generalization of the variable phase approach to such configurations. Being developed from spectral methods, the approach inherits their advantages, such as an effective procedure to implement standard renormalization conditions. A glimpse at the bulky formulas of the heat kernel expansion (an alternative method for this problem) in Refs.\cite{AlonsoIzquierdo:2002eb,AlonsoIzquierdo:2011dy,AlonsoIzquierdo:2012tw} immediately reveals the simplicity and effectiveness of the present approach.
In practice the approach merely requires the numerical integration of ordinary differential equations and the extraction of the scattering matrix from the solutions, {\it cf.} Eqs.~(\ref{eq:match1}) and~(\ref{eq:Smat1}). Heat kernel methods are typically combined with $\zeta$--function regularization. Then the connection to standard renormalization conditions is not as transparent as for the spectral methods, though that is problematic only when non--local Feynman diagrams require renormalization, {\it i.e.} in more than $D=1+1$ dimensions or when fermion loops are involved. We have verified the novel method by means of well established results, such as those for the $\phi^4$ kink and sine--Gordon solitons. For these models the approach directly ascertains translational invariance of the VPE. Yet, the main focus was on the VPE for solitons of the $\phi^6$ model because its soliton(s) may connect inequivalent vacua, leading to background potentials that are not invariant under spatial reflection. This model is not strictly renormalizable. Nevertheless, at one loop order a well defined result can be obtained from the no--tadpole renormalization condition, although no further finite renormalization is realizable because the different vacua yield additional infinities when integrating the counterterm. The different vacua also lead to different dispersion relations for the quantum fluctuations and thereby induce translational variance in a theory that is formulated by an invariant action. We argue that this variance is universal, as it is not linked to the particular structure of the background and can be related to the change in the level density that is basic to the Krein--Friedel--Lloyd formula, Eq.~(\ref{eq:KFL}). Besides attempting a deeper understanding of the variance by tracing it from the energy--momentum tensor, future studies will apply the novel method to solitons of the $\phi^8$ model.
Its elaborate structure not only induces potentials that are reflection asymmetric, but also leads to a set of topological indices\cite{Gani:2015cda} that are related to different particle numbers. The novel method will then advance the understanding of quantum corrections to binding energies of compound objects in the soliton picture. Furthermore, the present results can be combined with the interface formalism\cite{Graham:2001dy}, which adds coordinates along which the background is homogeneous, to explore the energies (and energy densities) of domain wall configurations\cite{Parnachev:2000fz}. \section*{Acknowledgments} This work was presented at the $5^{\rm th}$ {\it Winter Workshop on Non--Perturbative Quantum Field Theory}, Sophia-Antipolis (France), March 2017. The author is grateful to the organizers for providing this worthwhile workshop. The author declares that there is no conflict of interest regarding the publication of this paper. This work is supported in part by the NRF under grant~109497.
\section{\label{sec:level1a}Introduction} The first attempts to calculate the inverse of a matrix with the help of an iterative scheme were made, among others, by Schulz in the early 1930s \cite{schu}. This resulted in the well-known Newton-Schulz iteration scheme that is widely used to approximate the inverse of a given matrix \cite{nr}. One of the advantages of this method is that for a particular start matrix as an initial guess, convergence is guaranteed \cite{pan}. The convergence is of order two, which is already quite satisfying; nevertheless, there have been many attempts to speed up this iteration scheme and to extend it not only to the calculation of the inverse, but also to the inverse $p$-th root of a given matrix \cite{high2, bini, iann1, iann2, smit}. This is an important task with many applications in applied mathematics, computational physics and theoretical chemistry, where efficient methods to compute the inverse or inverse square root of a matrix are indispensable. An important example are linear-scaling electronic structure methods, which in many cases require the inversion of rather large sparse matrices in order to approximately solve the Schr\"odinger equation \cite{baroni1992, galli, PhysRevB.50.17611, ManolopoulosPalser, maslen1998, goed, PhysRevB.64.195110, kraj1, khaliullin2006, ceri1, haynes2008, ceri2, lin2009, bowler2012, kuehne, khaliullin2013, richMethod, mohr2014, PhysRevLett.112.046401, mohr2015, skylaris2016, mohr2017}. A related example is L\"owdin's method of symmetric orthogonalization \cite{loew, millam1997, niklasson2004, helgaker2007, kueh1, ochsenfeld2007, niklasson2008, we2009}, which transforms the generalized eigenvalue problem for overlapping orbitals into an equivalent problem with orthogonal orbitals, whereby the inverse square root of the overlap matrix has to be calculated. Common problems in the applications alluded to above are the stability of the iteration function and its convergence.
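For orientation, the Newton--Schulz iteration mentioned above reads $X_{k+1}=X_k(2I-AX_k)$ and converges quadratically to $A^{-1}$ whenever $\|I-AX_0\|<1$. The sketch below (illustrative Python; the test matrix is an arbitrary choice) uses the classical start matrix $X_0=A^T/(\|A\|_1\|A\|_\infty)$, for which convergence is guaranteed \cite{pan}:

```python
import numpy as np

# Newton-Schulz iteration X_{k+1} = X_k (2 I - A X_k)  ->  A^{-1}.
rng = np.random.default_rng(0)
G = rng.standard_normal((5, 5))
A = G @ G.T + 5 * np.eye(5)          # symmetric positive definite test matrix

# Start matrix guaranteeing ||I - A X_0|| < 1
X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
for _ in range(30):
    X = X @ (2 * np.eye(5) - A @ X)

print(np.linalg.norm(X @ A - np.eye(5)))   # residual near machine precision
```

Because the convergence is quadratic, the number of correct digits roughly doubles per step once the residual is small.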
For $p \neq 1$, most of the iteration schemes have quadratic order of convergence, with Halley's method \cite{hall, iann2, guo2}, whose convergence is of order three, being a rare exception. Altman, however, generalized the Newton-Schulz iteration to an iterative method for inverting a linear bounded operator in a Hilbert space \cite{altm}. He constructed the so-called \emph{hyperpower method of any order of convergence} and proved that the method of degree three is the optimum one, as it gives the best accuracy for a given number of multiplications. In this article, we describe a new general algorithm for the construction of iteration functions for the calculation of the inverse principal $p$-th root of a given matrix $A$. This method has two parameters, the natural number $p$ and another natural number $q\geq2$ that represents the \emph{order of expansion}. We show that two special cases of this algorithm are Newton's method for matrices \cite{iann1, iann2, guo2, smit, bini, hosk, guo1} and Altman's hyperpower method \cite{altm}. The remainder of this article is organized as follows. In Section~\revision{\ref{sec:level2a}}, we give a short summary of the work presented in the above mentioned papers and show how it can be combined into a general expression in Section~\revision{\ref{sec:level3a}}. In Section~\revision{\ref{sec:numResults}}, we study the introduced iteration functions for both the scalar case and symmetric positive definite random matrices. We investigate the optimal order of expansion $q$ for matrices with different densities, condition numbers and spectral radii. \section{\label{sec:level2a}Previous Work} The calculation of the inverse $p$-th root, where $p$ is a natural number, has been studied extensively in previous works. The characterization of the problem is quite simple. In general, for a given matrix $A\secrevision{\in\mathbb{C}^{n \times n}}$ one wants to find a matrix $B$ that fulfills $B^{-p} = A$.
If $A$ is \revision{invertible}, one can always find such a matrix $B$, though $B$ \secrevision{may not be} unique. The problem of computing the $p$-th root of $A$ is strongly connected with the spectral analysis of $A$. If, for example, $A$ is a real or complex matrix of order $n$ with no eigenvalues on the closed negative real axis, $B$ \revision{always exists}~\cite{bini}. As we only deal with symmetric positive definite matrices, we can restrict ourselves to \revision{the} unique \revision{positive definite} solution, which is called the \emph{principal $p$-th root} and whose existence is guaranteed by the following theorem. \begin{theorem}[Higham \cite{high1}, 2008] Let $A \in \mathbb{C}^{n \times n}$ have no eigenvalues on $\mathbb{R}^-$. There is a unique $p$-th root $B$ of $A$ all of whose eigenvalues lie in the segment $\{z : -\pi/|p| < \arg(z) < \pi/|p| \}$, and it is a primary matrix function of $A$. We refer to $B$ as the principal $p$-th root of $A$ and write $B = A^{1/p}$. If $A$ is real then $A^{1/p}$ is real. \label{higham} \end{theorem} \begin{rem} Here, $p < 0$ is also included, so that \textup{Theorem \ref{higham}} also holds for the calculation of inverse $p$-th roots. \end{rem} \noindent The calculation of such a root is usually done with the help of an iteration function, since the brute-force computation is computationally very demanding or even infeasible for large matrices. For sparse matrices, iteration functions can even reduce the computational scaling with respect to the size of the matrix because values in intermediately occurring matrices \revision{are often} truncated to retain sparsity. However, one should always keep in mind that the inverse $p$-th roots of sparse matrices are in general not sparse anymore, but usually full matrices. One of the most discussed iteration schemes for computing the $p$-th root of a matrix is based on Newton's method for finding roots of functions.
It is possible to approximate a zero $\hat x$ of $f: \mathbb{R} \to \mathbb{R}$, meaning that we have $f(\hat x) = 0$, by the iteration \begeq{x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)}.} Here, $x_k \to \hat x$ for $k \to \infty$ if $x_0$ is an appropriate initial guess. If, for an arbitrary $a$, $f(x) = x^p-a$, then \begeq{x_{k+1} = \frac{1}{p} \left[ (p-1)x_k + ax_k^{1-p} \right]} and $x_k \to a^{1/p}$ for $k \to \infty$. One can also deal with matrices and study the resulting rational matrix function $F(X) = X^p-A$ \revision{\cite{hosk,smit}.} This has been the subject of a variety of previous papers \cite{bini, pan, high2, iann1, iann2, smit, guo1, guo2, laki, psar, beni, petr, hosk, high1, hall}. It is clear that this iteration converges to the $p$-th root of the matrix $A$ if $X_0$ is chosen close enough to the true root. Smith also showed that Newton's method for matrices has issues concerning numerical stability for ill-conditioned matrices~\cite{smit}, \revision{al}though this is not the topic of the present work. Bini et al. \cite{bini} proved that the matrix iteration \begin{align} B_{k+1} = \frac{1}{p} \left[(p+1) B_k - B_k^{p+1}A\right], \quad B_0 \in \secrevision{\mathbb{C}^{n \times n}} \label{eqn:q2p-1} \end{align} converges quadratically to the inverse $p$-th root $A^{-1/p}$ if \revision{$A$ is positive definite, $B_0A = AB_0$ and \mbox{$\norm{I-B_0^pA} < 1$}. Here, $\norm{\cdot}$ denotes some consistent matrix norm. Bini et al.\ further proved the following} \revision{ \begin{prop} \label{prop} Suppose that all the eigenvalues of $A$ are real and positive. The iteration function (\ref{eqn:q2p-1}) with $B_0 = I$ converges to $A^{-1/p}$ if the spectral radius $\rho(A) < p+1$. If $\rho(A) = p+1$ the iteration does not converge to the inverse of any $p$-th root of $A$. \end{prop} } In his work dating back to 1959, Altman described the hyperpower method \cite{altm}.
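As a concrete illustration of the Bini et al.\ iteration, Eq.~(\ref{eqn:q2p-1}), the following Python sketch (a toy example, not from the paper) computes $A^{-1/2}$ of a small symmetric positive definite matrix whose spectrum lies well inside $(0, p+1)$, starting from $B_0=I$ as in Proposition~\ref{prop}:

```python
import numpy as np

# Bini et al. iteration B_{k+1} = ((p+1) B_k - B_k^{p+1} A) / p  ->  A^{-1/p},
# started from B_0 = I; requires rho(A) < p + 1.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))
A = Q @ np.diag(rng.uniform(0.5, 1.5, 6)) @ Q.T   # SPD, spectrum inside (0, p+1)

p = 2
B = np.eye(6)
for _ in range(25):
    B = ((p + 1) * B - np.linalg.matrix_power(B, p + 1) @ A) / p

print(np.linalg.norm(B @ B @ A - np.eye(6)))      # residual ||B^2 A - I||
```

Since $B_0=I$ commutes with $A$, all iterates are polynomials in $A$ and the convergence can be analyzed eigenvalue by eigenvalue.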
Let $V$ be a Banach space \revision{with norm $\norm{\cdot}$}, $A: V \to V$ a linear, bounded and \revision{invertible} operator, while $B_0 \in V$ is an approximate reciprocal of $A$ satisfying $\norm{I - AB_0} < 1$. For the iteration \begin{align} B_{k+1} = B_k(I + R_k + R_k^2 + \ldots + R_k^{q-1}), \label{eq:alt} \end{align} the sequence $(B_k)_{k \in \mathbb{N}_0}$ converges towards the inverse of $A$. Here, $R_k = I-B_k^pA$ is the $k$-th residual. Altman proved that the natural number $q$ corresponds to the order of convergence of Eq.~\revision{(\ref{eq:alt})}, so that in principle a method of any order can be constructed. He defined the optimum method as the one that gives the best accuracy for a given number of multiplications and demonstrated that the optimum method is obtained for $q=3$. To close this section, we recall some basic definitions, which are crucial for the next section. In the following, the iteration function $\varphi: \mathbb{C}^{n\times n} \to \mathbb{C}^{n\times n}$ is assumed to be sufficiently often continuously differentiable. \revision{ \begin{defi} Let $A\in \mathbb{C}^{m\times n}$. The matrix norms $\norm{\cdot}_1$, $\norm{\cdot}_\infty$ and $\norm{\cdot}_2$ denote the norms induced by the corresponding vector norm, such that \begin{align*} \norm{A}_1 &= \max_{\norm{x}_1 = 1} \norm{Ax}_1 = \max_{1\leq j\leq n} \sum_{i=1}^m \abs{a_{ij}},\\ \norm{A}_\infty &= \max_{\norm{x}_\infty = 1} \norm{Ax}_\infty = \max_{1\leq i\leq m} \sum_{j=1}^n \abs{a_{ij}},\\ \norm{A}_2 &= \max_{\norm{x}_2 = 1} \norm{Ax}_2 = \sqrt{\lambda_\text{max}(A^* A)}, \end{align*} where $\lambda_\text{max}$ denotes the largest eigenvalue and $A^*$ is the conjugate transpose of $A$. \end{defi} } \begin{defi} Let $\varphi: \mathbb{C}^{n\times n} \to \mathbb{C}^{n\times n} $ be an iteration function. 
The process \begin{align} \label{eq:phi} B_{k+1} = \varphi(B_k), \quad k = 0,1,\ldots \end{align} is called \emph{convergent to} $Z \in \mathbb{C}^{n\times n}$, if for all start matrices $B_0 \in \mathbb{C}^{n\times n}$ \revision{fulfilling suitable conditions}, we have $\norm{B_k-Z}_{\revision{2}} \to 0$ for $k \to \infty$. \label{defi1} \end{defi} \begin{defi} A \emph{fixed point} $Z$ of the iteration function in Eq.~\revision{(\ref{eq:phi})} is such that $\varphi(Z) = Z$ and is said to be \emph{attractive} if $\norm{\varphi^{\prime}(Z)}_{\revision{2}} < 1$. \label{defi2} \end{defi} \begin{defi} Let $\varphi: \mathbb{C}^{n\times n} \to \mathbb{C}^{n\times n}$ be an iteration function with fixed point $Z \in \mathbb{C}^{n\times n}$. \secrevision{Let $q \in \mathbb{N}$ be the largest number such that} for all start matrices $B_0 \in \mathbb{C}^{n\times n}$ \revision{fulfilling suitable conditions} we have \begin{align*} \norm{B_{k+1}-Z}_{\revision{2}} \leq c \norm{B_k-Z}_{\revision{2}}^q \textup{ for } k = 0,1,\ldots, \end{align*} where $c >0$ is a constant with $c < 1$ if $q=1$. \secrevision{The iteration function in Eq.~\revision{(\ref{eq:phi})} is called \emph{convergent of order} $q$.} \label{defi3} \end{defi} \noindent We also recall a well-known theorem concerning the order of convergence. \begin{theorem} Let \revision{$\varphi: \mathbb{C}^{n\times n} \to \mathbb{C}^{n\times n} $} be a function with fixed point \revision{$Z \in \mathbb{C}^{n\times n}$}. If \revision{$\varphi$} is $q$-times continuously differentiable in a neighborhood of \revision{$Z$} with $q \geq 2$, then \revision{$\varphi$} has order of convergence $q$, if and only if \revision{ \begin{align*} \varphi^{\prime}(Z) = \ldots = \varphi^{(q-1)}(Z) = 0 \textup{ and } \varphi^{(q)}(Z) \neq 0. \end{align*} } \label{gautschi} \end{theorem} \begin{proof}The proof can be found in the book of Gautschi \cite[pp. 278--279]{gaut}.
\end{proof} \section{\label{sec:level3a}\secrevision{Generalization of the Problem}} In this section, we present a new algorithm to construct iteration schemes for the computation of the inverse principal $p$-th root of a matrix, which contains Newton's method for matrices and the hyperpower method as special cases. Specifically, we propose a general expression to compute the inverse $p$-th root of a matrix that includes an additional variable $q$ as the order of expansion. In Altman's case, $q$ is the order of convergence, but as we will see, this does not hold generally for the iteration functions constructed using our algorithm. Nevertheless, choosing $q$ larger than two often leads to an increase in performance, meaning that fewer iterations and matrix multiplications, and hence less computational effort, are required. In Section~\ref{sec:numResults} we discuss how $q$ can be chosen adaptively. The central statement of this article is the following \begin{theorem} \label{them} Let $A \in \mathbb{C}^{n \times n}$ be an invertible matrix and $p, q \in \mathbb{N} \setminus \{0 \}$ with $q \ge 2$. Setting \begin{align} \varphi : \mathbb{C}^{n \times n} \rightarrow \mathbb{C}^{n \times n}, \quad X \mapsto \frac{1}{p} [ (p-1) X- ((I-X^p A)^q -I) X^{1-p} A^{-1}], \label{eq:phi1} \end{align} we define the iteration \begin{equation} \label{eq:def_pro} B_0 \in \mathbb{C}^{n \times n}, \qquad B_{k+1} = \varphi (B_k) = \frac{1}{p} \left[ (p-1) B_k- ((I-B^p_k A)^q -I) B^{1-p}_k A^{-1}\right]. \end{equation} If $B_0 A=AB_0$ and for $R_0 = I-AB^p_0$ the inequality \begin{equation} \label{eq-1} \left \|R_0^q-\frac{1}{p^p}\sum_{i=2}^p\binom{p}{i}p^{p-i}R_0^{i-1}\left(I-R_0\right)^{1-i}\left(I-R_0^q\right)^i\right\|_2=:c<1 \end{equation} holds then one has \[ B_k A=AB_k \quad\forall k\in\mathbb{N}\qquad \mbox{ and } \qquad \lim\limits_{k \rightarrow \infty} B_k = A^{-1/p}.
\] If $p > 1$, the order of convergence of the iteration defined by Eq.~\revision{\eqref{eq:def_pro}} is quadratic. If $p = 1$, the order of convergence of Eq.~\revision{\eqref{eq:def_pro}} is equal to $q$. \end{theorem} \begin{rem} One can use the same formula to calculate the $p$-th root of $A$, where $p$ is negative or even choose $p \in \mathbb{Q}\setminus\{0\}$. However, this is not a competitive method because for negative $p$, one has to compute inverse matrices in every iteration step and for non-integers the binomial theorem and the calculation of powers of matrices becomes more demanding. \revision{In the following, we therefore always assume $p\in\mathbb{N}\setminus\{0\}$.} \end{rem} \begin{proof} First, we show that $B_k A=AB_k$ holds for all $k \in \mathbb{N}$. For this purpose, we rearrange \begin{align*} ((I-X^pA)^q -I)&(X^pA)^{-1} = \left(\sum^q_{i=0} \begin{pmatrix} q \\ i \end{pmatrix} (-1)^i (X^pA)^i -I \right) (X^pA)^{-1}\\ & = \left( \sum^q_{i=1} \begin{pmatrix} q \\ i \end{pmatrix} (-1)^i (X^pA)^i \right) (X^pA)^{-1} = \sum^q_{i=1} \begin{pmatrix} q \\ i \end{pmatrix} (-1)^i(X^pA)^{i-1}. \end{align*} Now assume that $B_k A= A B_k$ holds for a fixed $k \in \mathbb{N}$. Then it follows for $B_{k+1}$ that \begin{align*} B_{k+1} A &= \frac{1}{p}[(p-1)B_k-((I-B^p_k A)^q -I)B^{1-p}_k A^{-1}]A \\ &= \frac{1}{p} \left[(p-1)I- \sum^q_{i=1} \begin{pmatrix} q \\ i \end{pmatrix}(-1)^iA^{i-1}B^{p(i-1)}_k\right]B_k A \\ &= A \left( \frac{1}{p} \left[ (p-1)I- \sum^q_{i=1} \begin{pmatrix} q \\ i \end{pmatrix} (-1)^i A^{i-1} B^{p(i-1)}_k \right] B_k \right) = A B_{k+1} \end{align*} yielding the first part of the assertion. Next, we prove that with $R_k \equiv I- AB^p_k$ one has \begin{equation} \label{eq:Bk} B_{k+1} = \frac{1}{p} B_k [pI+R_k+ \ldots + R^{q-1}_k]. \end{equation} This will be shown by an induction over $q$. 
For $q=2$, one obtains \begin{align*} B_{k+1} &= \frac{1}{p} \left[(p-1)B_k-((I-B^p_kA)^2-I) B^{1-p}_k A^{-1}\right] \\ &= \frac{1}{p} \left[(p-1)B_k-(-2B^p_kA+B^{2p}_kA^2)B^{1-p}_kA^{-1}\right] = \frac{1}{p} B_k \left[(p-1)I+2I-B^p_kA \right] \\ &= \frac{1}{p} B_k [pI+R_k]. \end{align*} Now assume that Eq.~\eqref{eq:Bk} holds for a fixed $q-1 \in \mathbb{N}$. This yields for $q \in \mathbb{N}$ that \begin{align*} B_{k+1} &= \frac{1}{p} [(p-1)B_k-((I-B^p_kA)^q-I)B^{1-p}_k A^{-1}] \\ &= \frac{1}{p} \left[(p-1)B_k-((I-B^p_kA)^{q-1}(I-B^p_kA)-I)B^{1-p}_k A^{-1}\right] \\ &= \frac{1}{p} \left[(p-1)B_k-((I-B^p_kA)^{q-1}-I)B^{1-p}_kA^{-1} + (I-B^p_kA)^{q-1} B^p_kA B^{1-p}_k A^{-1} \right] \\ &= \frac{1}{p} B_k [pI+R_k + \ldots + R^{q-2}_k] + \frac{1}{p} (I-B^p_kA)^{q-1}B_k= \frac{1}{p} B_k [pI+R_k + \ldots + R^{q-2}_k] + \frac{1}{p} B_k R^{q-1}_k \\ &= \frac{1}{p} B_k [pI+R_k + \ldots + R^{q-1}_k]. \end{align*} Now, everything is prepared to show the second assertion. Using the interchangeability of the matrices $B_0$ and $A$ as well as the binomial theorem, one has \begin{align} R_{1}& = I - \frac{1}{p^p} (I-R_0)(pI+R_0 + \ldots + R^{q-1}_0)^p \label{eq:R11}\\ & = I- \frac{1}{p^p} (I-R_0)\left(pI+ R_0(I-R_0)^{-1}(I-R_0^q)\right)^p\nonumber \\ & = I - \frac{1}{p^p} (I-R_0)\left(\sum_{i=0}^p\binom{p}{i}p^{p-i} R_0^i(I-R_0)^{-i}(I-R_0^q)^i\right)\nonumber\\ & = I \!-\! 
\frac{1}{p^p} (I-R_0)\left(\!p^pI+p\,p^{p-1}R_0(I-R_0)^{-1}(I-R_0^q)+\sum_{i=2}^p\binom{p}{i}p^{p-i} R_0^i(I-R_0)^{-i}(I-R_0^q)^i\!\right)\nonumber\\ & = R_0^{q+1}-\frac{1}{p^p}\sum_{i=2}^p\binom{p}{i}p^{p-i} R_0^i(I-R_0)^{1-i}(I-R_0^q)^i\nonumber\\ & = R_0\left(R_0^q-\frac{1}{p^p}\sum_{i=2}^p\binom{p}{i}p^{p-i} R_0^{i-1}(I-R_0)^{1-i}(I-R_0^q)^i\right) \label{eq:R1} \end{align} Taking norms on both sides, one obtains for $k=0$ \[ \| R_1 \|_2 \le \| R_0 \|_2 \cdot\left\|R_0^q-\frac{1}{p^p}\sum_{i=2}^p\binom{p}{i}p^{p-i} R_0^{i-1}(I-R_0)^{1-i}(I-R_0^q)^i\right\|_2 = c \|R_0\|_2 \] with $c$ as defined in Eq.~\eqref{eq-1}. Using induction once more, one obtains \[ \| R_{k+1} \|_2 < c \| R_k \|_2 \] yielding linear convergence of $R_k$ to the zero matrix. Hence, $B_k$ must converge to $A^{-1/p}$. To determine the convergence order, we calculate for the iteration given by Eq.~\eqref{eq:def_pro} the first and second derivatives, respectively, given by \begin{align*} \varphi^{\prime}(X) =~& q (I-X^{p}A)^{q-1} + \frac{p-1}{p} \left(\left((I-X^{p}A)^q-I \right) (X^pA)^{-1} + I \right)\\ \varphi^{(2)}(X) = &-pq(q-1)X^{p-1}A(I-X^{p}A )^{q-2} + q(1-p)X^{-1} (I-X^{p}A )^{q-1} \\ &+ (p-1)X^{-p-1}A^{-1}(I-X^{p}A )^q- (p-1)X^{-p-1}A^{-1}. \end{align*} This yields $\varphi^{\prime}(A^{-1/p}) = 0$ for every $p$. For the second derivative it holds that \begeq{\varphi^{(2)}(A^{-1/p}) = 0 \quad \Leftrightarrow\quad p=1.} \noindent It is also possible to show that $\varphi^{(j)}(A^{-1/p})=0$, if and only if $p=1$ for $j = 3, \ldots, q-1$, since \begeq{ \varphi^{(j)}(X) = \sum_{i=0}^{j} J_{i,j}(I-X^{p}A)^{q-i} + (p-1)J_{j} X^{1-p-j} A^{-1}, } where $J_{i,j}$ and $J_j$ are both \revision{non-zero rational} numbers. This implies that according to Theorem \ref{gautschi}, the convergence of the iteration defined by Eq.~\revision{(\ref{eq:def_pro})} is exactly quadratic for any $q$ if $p \neq 1$. For $p=1$, we have shown that the order of convergence is identical to $q$.
\end{proof} Compared with previous convergence conditions, the condition required in Eq.~\eqref{eq-1} looks rather complicated. To illustrate that this condition is really necessary, Fig.~\ref{fig:illu_norm} shows for the scalar case the development of the residual after one step, i.e., $r_1$, when the initial residual $r_0$ varies in the interval $[0,1]$. \begin{figure} \begin{center} \includegraphics[width=4.5cm]{norm_r1_p1.pdf} \quad \includegraphics[width=4.5cm]{norm_r1_p2.pdf} \quad \includegraphics[width=4.5cm]{norm_r1_p3.pdf} \quad $p=1$ \hspace*{4cm} $p=2$ \hspace*{4cm} $p=3$ \includegraphics[width=4.5cm]{norm_r1_q2.pdf} \quad \includegraphics[width=4.5cm]{norm_r1_q4.pdf} \quad \includegraphics[width=4.5cm]{norm_r1_q6.pdf} \quad $q=2$ \hspace*{4cm} $q=4$ \hspace*{4cm} $q=6$ \end{center} \caption{Norm of the second residual as a function of the first residual} \label{fig:illu_norm} \end{figure} It can be seen that the absolute value of the second residual can be larger than one, even if the norm of the first residual is smaller than one, if $p$ and $q$ are chosen unfavorably. The rather complicated condition given in Eq.~\eqref{eq-1} ensures that the norm of the next residual decreases for all possible combinations of $p$ and $q\ge2$. However, these illustrations suggest that for $p=1$ and arbitrary $q\ge2$ as well as for $q=2$ and arbitrary $p\ge1$ a simplification of the criterion given in Eq.~\eqref{eq-1} should be possible. For these cases one finds: \begin{proposition} \label{prop:simpler} Let $A \in \mathbb{C}^{n \times n}$ be an invertible matrix. Consider the iteration defined by Eq.~\eqref{eq:def_pro} for $p, q \in \mathbb{N} \setminus \{0 \}$ and $q \ge 2$. 
If $p=1$ and $q\ge2$ or if $q=2$ and $p\ge1$, then the condition \begin{equation*} \left \|R_0^q-\frac{1}{p^p}\sum_{i=2}^p\binom{p}{i}p^{p-i}R_0^{i-1}\left(I-R_0\right)^{1-i}\left(I-R_0^q\right)^i\right\|_2=:c<1 \end{equation*} for $R_0= I-AB^p_0$ can be simplified to \begin{equation} \label{eq_eq-1a} \|R_0\|_2=: \tilde{c} <1. \end{equation} \end{proposition} \begin{proof} For $p=1$ and $q\ge 2$, Eq.~\eqref{eq-1} reduces to \begin{align*} \left\|R_0^q-\frac{1}{p^p}\sum_{i=2}^p\binom{p}{i}p^{p-i}R_0^{i-1}\left(I-R_0\right)^{1-i}\left(I-R_0^q\right)^i\right\|_2=\|R_0^q\|_2<1 \quad\Leftrightarrow \quad\|R_0\|_2=: \tilde{c}<1. \end{align*} For $q=2$ and $p\ge1$, Eq.~\eqref{eq:R11} yields for $R_1$ the equation \begin{align*} R_{1}& = I - \frac{1}{p^p} (I-R_0)(pI+R_0)^p. \end{align*} Now, we use an equivalent reformulation of $R_1$ to show the assertion. Using the same reformulation as in \cite[Prop 6.1]{bini}, one has for $q=2$ and $p\ge1$ that \begin{align*} R_{1}& = \sum_{i=2}^{p+1}a_iR_0^i\qquad\mbox{with}\qquad a_i>0,\quad 2\le i\le p+1,\quad \sum_{i=2}^{p+1}a_i=1. \end{align*} Combining the first equality with the representation of $R_1$ given in Eq.~\eqref{eq:R1}, one obtains \begin{align*} R_{1}& =R_0\left(R_0^q-\frac{1}{p^p}\sum_{i=2}^p\binom{p}{i}p^{p-i} R_0^{i-1}(I-R_0)^{1-i}(I-R_0^q)^i\right) = R_0\left(\sum_{i=2}^{p+1}a_iR_0^{i-1}\right) \end{align*} yielding \begin{align*} \|R_1\|_2 \le \|R_0\|_2\,\left\|\sum_{i=2}^{p+1}a_iR_0^{i-1}\right\|_2 \le \|R_0\|_2\left(\sum_{i=2}^{p+1}a_i\|R_0\|_2^{i-1}\right). \end{align*} If $\|R_0\|_2= \tilde{c} <1$ holds, it follows that \begin{align*} \sum_{i=2}^{p+1}a_i\|R_0\|_2^{i-1} :=\hat{c} < \sum_{i=2}^{p+1}a_i = 1. \end{align*} Therefore, the inequality \begin{align*} \|R_1\|_2 \le \hat{c} \|R_0\|_2 \end{align*} is valid, which yields convergence of $B_k$ as shown in the proof of Theo.~\ref{them}. 
\end{proof} \begin{table} \begin{center} \begin{tabularx}{0.7\textwidth}{X|C|C|C|C|C|C|C|C|C} $p$ & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10\\ \hline $q$ & 15 & 8 & 7 & 6 & 6 & 5 & 5 & 5 & 5 \end{tabularx} \bigskip \noindent \begin{tabularx}{0.7\textwidth}{X|C|C|C|C|C|C|C|C} $q$ & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10\\ \hline $p$ & $>20$ & $>20$ & $>20$ & 6 & 4 & 3 & 2 & 2 \end{tabularx} \end{center} \caption{Highest value of the second parameter (second row) for a given value of the first parameter (first row), for $n=1$.} \label{tab:overview} \end{table} To get an impression of the remaining cases, Tab.~\ref{tab:overview} lists, for the case $n=1$, the maximal possible value of $q$ when fixing $p$, and vice versa. These values were obtained by a simple numerical test varying $r_0$ in the whole interval $[0,1]$. For higher values of $n$, the numbers given in Tab.~\ref{tab:overview} represent an upper bound on the parameter values for which the simpler condition $\|R_0\|_2\le 1$ suffices. With respect to the convergence order, we conclude as a trivial consequence that a larger value of $q$ does not, in general, lead to a higher order of convergence. However, in the following we perform numerical calculations using MATLAB \cite{matlab} to determine the number of iterations, matrix-matrix multiplications, as well as the total computational time necessary to obtain the inverse $p$-th root of a given matrix as a function of $q$ within a predefined target accuracy. We find that a higher order of expansion than two within Eq.~\revision{(\ref{eq:phi1})} leads in almost all cases to faster convergence. Details are presented in Section~\ref{sec:numResults}. We now show that our scheme includes as special cases the method of Bini et al.~\cite{bini} and the hyperpower method of Altman~\cite{altm}. From now on, we deal with Eq.~\revision{(\ref{eq:def_pro})} for the definition of the matrices $B_k \in \mathbb{C}^{n \times n}$, i.e., define $B_{k+1} = \varphi(B_k)$. 
We assume that the start matrix $B_0$ satisfies $B_0A=AB_0$ and Eq.~(\ref{eq-1}). For $p=1$ and $q=2$, we get the already mentioned Newton-Schulz iteration that converges quadratically to the inverse of $A$ \begin{align*} B_{k+1} &= - \left((I-B_kA)^2 -I \right) A^{-1} = (-(B_kA)^2 + 2B_kA)A^{-1} \\ &= 2B_k - B_k^2A. \end{align*} For $q=2$ and any $p$, we get the iteration \begin{align*} B_{k+1} &= \frac{1}{p} \left[ (p-1) B_k - \left((I-B_k^{p}A)^2 -I\right) B_k^{1-p}A^{-1} \right] \nonumber \\ &= \frac{1}{p} \left[(p-1) B_k - \left( (B_k^{p}A)^2 - 2B_k^{p}A\right) B_k^{1-p}A^{-1} \right] \nonumber \\ &= \frac{1}{p} \left[(p-1) B_k - \left( B_k^{p+1}A - 2B_k\right) \right] \nonumber \\ & = \frac{1}{p} \left[(p+1) B_k - B_k^{p+1}A\right]. \end{align*} This is exactly the matrix iteration in Eq.~\revision{(\ref{eqn:q2p-1})} that has been discussed in the work of Bini et al.~\cite{bini}. In both cases, Prop.~\ref{prop:simpler} shows that our condition Eq.~\eqref{eq-1} on $R_0$ is equivalent to the condition $\|R_0\|<1$ used for the convergence analysis of these methods in the corresponding original papers \cite{nr,bini}. We now proceed by dealing with the iteration formula in the case $p=1$ and show that it converges faster for higher orders $(q>2)$. For that purpose, we take Eq.~\revision{(\ref{eq:def_pro})} and calculate for $p=1$ \begin{align*} B_{k+1} &= \frac{1}{p} \left[ (p-1) B_k - \left((I-B_k^{p}A)^q - I\right) B_k^{1-p}A^{-1} \right] \nonumber \\ &\overset{p=1}{=} \left[I - (I-B_kA)^q\right]A^{-1}. \end{align*} We now prove that this is convergent of at least order $q$ in the sense of Definition \ref{defi3}. 
\begin{align*} \norm{B_{k+1} -A^{-1}}_{\revision{2}} &= \norm{(I-(I-B_kA)^q)A^{-1} - A^{-1} }_{\revision{2}} \nonumber \\ &= \norm{(I-B_kA)^qA^{-1}}_{\revision{2}} \nonumber \\ &= \norm{(I-B_kA)^qA^{-q}A^{q-1}}_{\revision{2}} \nonumber \\ &\leq \norm{A}_{\revision{2}}^{q-1}\cdot\norm{(I-B_kA)^q(A^{-1})^q}_{\revision{2}} \nonumber \\ &\leq \norm{A}_{\revision{2}}^{q-1}\cdot\norm{A^{-1}-B_k}_{\revision{2}}^q. \end{align*} Next, we show why the iteration function in Eq.~\revision{(\ref{eq:def_pro})} coincides for $p=1$ with Altman's work on the hyperpower method \cite{altm}. We have already shown in the proof of Theo.~\ref{them} that \begin{align} B_{k+1} &= \frac{1}{p}B_k\left[(p-1) + (R_k^{q-1}+R_k^{q-2}+\ldots+R_k+I)\right]=\frac{1}{p}B_k\left[pI + \left(\sum_{j=1}^{q-1}R_k^j\right) \right]. \label{eqn:Xk1} \end{align} Altman, however, proved convergence of any order for the iteration scheme in Eq.~\revision{(\ref{eq:alt})}, i.e.\revision{,} \[ B_{k+1} = B_k(I + R_k + R_k^2 + \ldots + R_k^{q-1}), \quad B_0 \in V \] when calculating the inverse of a given linear, bounded and \revision{invertible} operator $A \in V$. If we take $\mathbb{R}^{n \times n}$ for the Banach space $V$, this is identical to Eq.~\revision{(\ref{eqn:Xk1})} with $p=1$. Once more, Prop.~\ref{prop:simpler} shows that our condition Eq.~\eqref{eq-1} is equivalent to the one used by Altman in \cite{altm}. \section{\label{sec:numResults}Numerical Results} Even though the mathematical analysis of our iteration function shows that, except for $p=1$, larger $q$ \revision{do} not lead to a higher order of convergence, we conduct numerical tests by varying $p$ and $q$. Concerning the matrix $A$, whose inverse $p$-th root is to be determined, we take real symmetric positive definite random matrices with different densities and condition numbers. 
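To make the tested scheme concrete, the following minimal Python sketch (a NumPy stand-in for the MATLAB implementation used below; function and variable names are our own) implements Eq.~\revision{(\ref{eq:def_pro})} via the equivalent sum form $B_{k+1} = \frac{1}{p} B_k [pI + R_k + \ldots + R_k^{q-1}]$ derived in the proof of Theo.~\ref{them}:

```python
import numpy as np

def inv_pth_root(A, p, q, B0=None, tol=1e-10, max_it=100):
    # Iterate B_{k+1} = (1/p) * B_k * (p*I + R_k + ... + R_k^(q-1)),
    # with residual R_k = I - B_k^p A; B0 must commute with A.
    n = A.shape[0]
    I = np.eye(n)
    B = I.copy() if B0 is None else B0.copy()
    for k in range(max_it):
        R = I - np.linalg.matrix_power(B, p) @ A
        if np.linalg.norm(R, 2) < tol:
            return B, k
        S = p * I          # accumulate p*I + sum_{j=1}^{q-1} R^j,
        Rj = I             # reusing the previous power of R each step
        for _ in range(1, q):
            Rj = Rj @ R
            S = S + Rj
        B = (B @ S) / p
    return B, max_it
```

With $B_0 = I$ and a symmetric positive definite $A$ whose spectrum lies in $(0,2)$, the simplified condition $\|R_0\|_2 < 1$ holds, and the returned $B$ satisfies $B^pA \approx I$.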
We do this using MATLAB and quantify the number of iterations (\#it) \revision{and} matrix-matrix multiplications (\#mult) until the calculated inverse $p$-th root is close enough to the true inverse $p$-th root. \subsection{The Scalar Case} First, we assess our program for the scalar case. As the computation of roots of matrices is strongly connected with their eigenvalues, it is natural to study the formula for scalar quantities $\lambda$ first, which corresponds to $n=1$ in Eq.~\revision{(\ref{eq:phi1})}. We take values $\lambda$ varying from $10^{-9}$ to $1.9$ and choose $b_0 = 1$ as the start value. For that choice of $b_0$, we have guaranteed convergence for $\lambda \in (0, 2)$ \AW{for $q\le 10$ as illustrated in Fig.~\ref{fig:illu_norm}}. We calculate the inverse $p$-th root of $\lambda$, i.e.\revision{,} $\lambda^{-1/p}$, where the convergence threshold $\varepsilon$ is set to $10^{-8}$. In most cases, a larger value of $\varepsilon$ already yields sufficiently accurate results, but to see differences in the computational time, we choose an artificially small threshold. \revision{While running the iteration scheme, we count the number of iterations and the total number of multiplications until convergence.} \revision{In contrast to the matrix case, we do not use the norm of the residual $r_k = 1 - b_k^p \cdot \lambda$ as a criterion for the convergence threshold, but the \revision{error} between the $k$-th iterate and the correct inverse $p$-th root, thus $\tilde r_k = b_k - \lambda^{-1/p}$.} This is due to the fact that in the scalar case, one can easily get $\lambda^{-1/p}$ by a straightforward calculation. Note that $|\tilde r_k| < 1$ does not necessarily hold, but only $|r_k| < 1$. \revision{In order} to better distinguish the scalar from the matrix case, we write in the scalar case $b_k, \lambda$, and $r_k$ instead of $B_k, A$, and $R_k$. 
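This scalar protocol can be sketched in a few lines of Python (a stand-in for our MATLAB code; the concrete values used below are our own examples):

```python
def scalar_inv_root(lam, p, q, b0=1.0, eps=1e-8, max_it=200):
    # b_{k+1} = b_k + (1/p) * (r_k + r_k^2 + ... + r_k^(q-1)) * b_k,
    # with scalar residual r_k = 1 - b_k^p * lam; the stopping criterion
    # is the error |b_k - lam^(-1/p)|, directly available in the scalar case.
    exact = lam ** (-1.0 / p)
    b, it = b0, 0
    while abs(b - exact) >= eps and it < max_it:
        r = 1.0 - b**p * lam
        b += sum(r**i for i in range(1, q)) * b / p
        it += 1
    return b, it
```

For instance, for $\lambda = 1.5$, $p=2$, and $q=4$ the error criterion $|\tilde r_k| < 10^{-8}$ is met after three iterations, whereas $q=2$ needs five.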
We choose a rather wide range for $q$ to see the influence of this value on the number of iterations and \revision{multiplications}. In agreement with our observations in the matrix case, which we describe in the next subsection, there is a correlation between the choice of $q$, the number of multiplications \revision{and the number of iterations}. In the following, we elaborate \revision{on the optimal choice of $q$}. \begin{table}[bt] \centering \begin{minipage}[t]{0.475\textwidth} \centering \caption{Numerical results for $p=2$, $\lambda = 1.5$. Optimal values are shown in bold.\vspace{5mm}} \label{lamda15} \begin{tabular}{rrr} \hline\hline $q$ & \#it & \#mult\Tstrut\Bstrut\\ \hline 2 & 5 & 37\Tstrut\\ 3 & 4 & 34 \\ \textbf{4} & \textbf{3} & \textbf{29} \\ 5 & 4 & 42 \\ 6 & \textbf{3} & 35 \\ 7 & 4 & 50 \\ 8 & 4 & 54\Bstrut\\ \hline\hline \end{tabular} \end{minipage}\begin{minipage}[t]{0.05\textwidth}\hfill\end{minipage}\begin{minipage}[t]{0.475\textwidth} \centering \caption{Numerical results for $p=2$, $\lambda = 10^{-9}$. Optimal values are shown in bold.\vspace{5mm}} \label{lowev} \begin{tabular}{rrr} \hline\hline $q$ & \#it & \#mult\Tstrut\Bstrut\\ \hline 2 & 27 & 191\Tstrut\\ 3 & 17 & 138 \\ 4 & 14 & 128 \\ 5 & 12 & \textbf{122} \\ 6 & 11 & 123 \\ \textbf{7} & \textbf{10} & \textbf{122} \\ 8 & \textbf{10} & 132\Bstrut\\ \hline\hline \end{tabular} \end{minipage} \end{table} Typically, the number \revision{of necessary multiplications} is minimal for the smallest $q$ that entails the lowest number of required iterations. However, there are cases where the number of iterations is not steadily decreasing for higher $q$, as for example when computing the inverse square root of $\lambda = 1.5$ (Table \ref{lamda15}). Nevertheless, we conclude that the best choice is $q=4$, as it gives the lowest number of \revision{both} iterations and multiplications. In other cases, however, the trend is much simpler. 
For example, when computing the inverse square root of $\lambda = 10^{-9}$, while varying $q$ from 2 to 8, we clearly obtain the most efficient result for $q = 7$, where the number of iterations \revision{and the number of multiplications are} minimal (Table \ref{lowev}). In most of the cases, it is \textit{a priori} not apparent which value for $q$ is the best for a certain tuple $(p, \lambda)$. It is possible that the lowest number of iterations is attained for very large $q$, i.e.\revision{,} $q > 20$. To present a general rule of thumb, we pick $q$ from $3$ to $8$ as this usually gives a good and fast approximation of the inverse $p$-th root of a given $\lambda \in (0,2)$. We observe that for values close to $1$, the optimal choice is in most, but not all, cases $q=3$, while for values close to the borders of the interval, $q=6$ is mostly the best choice. This shows that the further the value of $\lambda$ is away from our start value $b_0 = 1$, the more important it is to choose a larger $q$. \begin{figure}[bt] \centering \includegraphics[width=.7\textwidth]{p1a1e5neu_lass3.pdf} \caption[Errors for $p=1$ and $\lambda = 10^{-6}$]{Errors for $p=1$ and $\lambda = 10^{-6}$. The optimal choice is $q=5$, even though $q=7$ and $q=8$ require \secrevision{fewer} iterations, but overall more multiplications.} \label{fig:p1a1} \end{figure} As \revision{shown} in Fig.~\ref{fig:p1a1} for $p=1$, $\lambda = 10^{-6}$, a larger $q$ causes the iteration to \revision{enter convergence} after \secrevision{fewer} iterations. This is due to the fact that in every iteration \begin{align} b_{k+1} =& \, \frac{1}{p} \left[ (p-1) - \left( (1-b_k^p\lambda)^q-1 \right)/(b_k^p\lambda) \right] b_k \nonumber \\ =& \, b_k + \frac{1}{p} \left( \sum_{i=1}^{q-1} r_k^i \right) b_k \label{eq:sumrk} \end{align} is calculated. \revision{Hence,} a larger $q$ leads to more summands in Eq.~\revision{(\ref{eq:sumrk})} and therefore to larger steps and variation of $b_{k+1}$. 
This also holds for negative $r_k$, as we have $r_k \in (-1,1)$ and therefore $|r_k^{i+1}| < |r_k^i|$. \revision{This argument also holds for $p\neq1$, which explains the faster convergence for larger $q$ even though the order of convergence is limited to $2$ in this case.} However, a larger value of $q$ improves the performance only up to a certain limit. This is due to the fact that a larger value of $q$ implies that $b^p_k$ is raised to larger exponents. Therefore, the number of multiplications increases, which is, especially in the matrix case, \revision{computationally the most time consuming part}. This is why we not only take into account the number of iterations but also the number of multiplications. \subsection{The Matrix Case\label{sec:level4b}} To evaluate the performance of our formula for matrices, we selected various set-ups with different parameters $p$ and $q$, densities $d$, and condition numbers \revision{$\kappa$}. The density of a matrix is defined as the number of its non-zero elements divided by its total number of elements. The condition number of a normal matrix, for which $AA^* = A^*A$ holds, is defined as the quotient of its eigenvalue with largest absolute value and its eigenvalue with smallest absolute value. Well-conditioned matrices \revision{have condition numbers close to $1$}. The larger the condition number, the more ill-conditioned $A$ is. For each set-up, we take ten symmetric positive definite matrices $A \in \mathbb{R}^{1000 \times 1000}$ with random entries, generated by the MATLAB function \texttt{sprandsym} \cite{matlab}. This yields matrices with a spectral radius $\rho(A)<1$. \revision{We store the number of required iterations and the number of matrix-matrix multiplications for each random matrix}. Then, we average these values over all considered random matrices. 
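A single run of this protocol can be sketched in Python as follows (a dense random symmetric positive definite matrix built from a QR factorization stands in for MATLAB's \texttt{sprandsym}; the size, seed, and parameter choices are our own):

```python
import numpy as np

def run_experiment(n=100, p=2, q=3, tol=1e-4, seed=0, max_it=100):
    # Random symmetric positive definite A with spectral radius < 1
    # (assumption: a dense stand-in for sprandsym is adequate here).
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    A = (Q * rng.uniform(0.05, 0.95, n)) @ Q.T  # eigenvalues in (0.05, 0.95)
    I = np.eye(n)
    B, it, mults = I.copy(), 0, 0
    while True:
        X = np.linalg.matrix_power(B, p) @ A    # B_k^p A: p multiplications
        mults += p
        R = I - X
        if np.linalg.norm(R, 2) < tol or it >= max_it:
            return it, mults, B, A
        S, Rj = p * I, I
        for _ in range(1, q):                   # powers R, ..., R^(q-1):
            Rj = Rj @ R                         # q-1 multiplications
            S = S + Rj
            mults += 1
        B = (B @ S) / p
        it += 1
```

The multiplication counter follows the bookkeeping used in our tables: $p$ multiplications per evaluation of $B_k^pA$ and $q-1$ for the powers of the residual.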
\revision{We choose $q$ between $2$ and $6$ and stop the algorithm as soon as the norm of the residual is below $\varepsilon = 10^{-4}$.} We use the sum representation of Eq.~\revision{(\ref{eq:def_pro})}, i.e.\revision{,} \begin{align*} B_{k+1} = \frac{1}{p} \left[ (p-1) I - \sum_{i=1}^q \binom{q}{i}(-1)^i(B_k^{p}A)^{i-1} \right]B_k, \end{align*} where the number of multiplications is minimized by saving previous powers of $B_k^pA$. It is evident that the number of matrix-matrix multiplications for a particular \revision{number of iterations} is minimal for the smallest value of $q$. However, a larger $q$ can also \revision{mean \secrevision{fewer} iterations. Consequently, the best value for $q$ usually is} the smallest that yields the lowest possible number of iterations for a certain set-up. In general, the number of matrix-matrix multiplications $m$ can be determined as a function of $p$, $q$, and the number of necessary iterations $j$, and reads \begeq{m(p,q,j) = p+\left((q-1)+p\right)j.} To fulfill the conditions of Theorem \ref{them}, we require that the start matrix $B_0$ commutes with the given matrix $A$. If $B_0$ is chosen as a positive multiple of the identity matrix or the matrix $A$ itself, i.e.\revision{,} $B_0 = \alpha I$ or $B_0 = \alpha A$ for $\alpha > 0$, respectively, then it is obvious that $AB_0 = B_0A$ holds and that $AB_0$ is symmetric positive definite. \secrevision{For the range of $p$ and $q$ used in this evaluation, the simplified condition $\|R_0\|_2\le 1$ suffices to guarantee convergence as shown in Tab.~\ref{tab:overview}}. Also, in the first part of the calculations, we only deal with matrices $A$ that have a spectral radius smaller than $1$. If we take $B_0 = I$, then we have $\norm{I-B_0A}_2<1$ due to the following \begin{lemma} \label{lemme} Let $C \in \mathbb{C}^{n \times n}$ be a Hermitian positive definite matrix with $\norm{C}_2 \leq 1$. 
Then, \revision{it holds that $\norm{I-C}_2<1$.} \end{lemma} \begin{proof} It is apparent that $\norm{I}_2 = 1$. Let $U$ be the unitary matrix such that $U^{-1}CU = \diag(\mu_1, \ldots, \mu_n)$, where $\mu_i \in (0,1] $ are the eigenvalues of $C$. Then we have \begin{align*} \norm{I-C}_2 = \norm{U^{-1}(I-C)U}_2 = \norm{I-\diag(\mu_1, \ldots, \mu_n)}_2 = 1-\mu_{\min} < 1, \end{align*} where $\mu_{\min}$ is the smallest eigenvalue of $C$. \end{proof} \begin{table}[bt] \centering \begin{minipage}[t]{0.475\textwidth} \centering \caption{\revision{Numerical results for $\kappa=500$, $p=1$ and $d\in\{0.003, 0.1\}$. Optimal values are shown in bold.\vspace{5mm}}} \label{c500_1} \begin{tabular}{rrr} \hline\hline $q$ & \#it & \#mult\Tstrut\Bstrut\\ \hline 2 & 13 & 27\Tstrut\\ \textbf{3} & 8 & \textbf{25} \\ 4 & 7 & 29 \\ 5 & 6 & 31 \\ 6 & \textbf{5} & 31\Bstrut\\ \hline\hline \end{tabular} \end{minipage}\begin{minipage}[t]{0.05\textwidth}\hfill\end{minipage}\begin{minipage}[t]{0.475\textwidth} \centering \caption{\revision{Numerical results for $\kappa=500$, $p=4$ and $d\in\{0.003, 0.1\}$. Optimal values are shown in bold.\vspace{5mm}}} \label{c500_4} \begin{tabular}{rrr} \hline\hline $q$ & \#it & \#mult\Tstrut\Bstrut\\ \hline 2 & 10 & 54\Tstrut\\ 3 & 6 & 40 \\ \textbf{4} & \textbf{5} & \textbf{39} \\ 5 & \textbf{5} & 44 \\ 6 & \textbf{5} & 49\Bstrut\\ \hline\hline \end{tabular} \end{minipage} \end{table} In \revision{the following,} we want to investigate \revision{the relation between the number of iterations and $q$ for different $p$, densities $d$ and condition numbers $\kappa$}. Specifically, we begin with sparse matrices $A$, where $\revision{\kappa}=500$. \revision{In this case, the results are identical for densities $d=0.003$ and $d=0.1$.} \begin{table}[bt] \centering \begin{minipage}[t]{0.475\textwidth} \centering \caption{\revision{Numerical results for $\kappa=10$, $p=1$ and $d=0.003$. 
Optimal values are shown in bold.\vspace{5mm}}} \label{c10_1} \begin{tabular}{rrr} \hline\hline $q$ & \#it & \#mult\Tstrut\Bstrut\\ \hline \textbf{2} & 7 & \textbf{15}\Tstrut\\ 3 & 5 & 16 \\ 4 & 4 & 17 \\ 5 & \textbf{3} & 16 \\ 6 & \textbf{3} & 19\Bstrut\\ \hline\hline \end{tabular} \end{minipage}\begin{minipage}[t]{0.05\textwidth}\hfill\end{minipage}\begin{minipage}[t]{0.475\textwidth} \centering \caption{\revision{Numerical results for $\kappa=10$, $p=4$ and $d=0.003$. Optimal values are shown in bold.\vspace{5mm}}} \label{c10_4} \begin{tabular}{rrr} \hline\hline $q$ & \#it & \#mult\Tstrut\Bstrut\\ \hline 2 & 6 & 34\Tstrut\\ \textbf{3} & \textbf{4} & \textbf{28} \\ 4 & \textbf{4} & 32 \\ 5 & \textbf{4} & 36 \\ 6 & \textbf{4} & 40\Bstrut\\ \hline\hline \end{tabular} \end{minipage} \end{table} \begin{figure}[bt] \centering \includegraphics[width=.7\textwidth]{fig2-errors} \caption[Errors for $\revision{\kappa}=500$, $p=3$ and $d=0.003$]{Errors for $\revision{\kappa}=500$, $p=3$ and $d=0.003$.} \label{fig:p3} \end{figure} As \revision{shown} in Table~\ref{c500_1}, for $p=1$, \revision{the number of iterations is lowest for $q=6$, but concerning the number of matrix-matrix multiplications $q=3$ is the best}. \revision{The situation is similar for $p=2$ and $p=3$, where $q=3$ is optimal.} As \revision{shown} in Table~\ref{c500_4}, for $p=4$, the number of iterations, \revision{and the number of matrix-matrix multiplications} are minimal for $q=4$. \revision{This shows that for larger $p$, higher values for $q$ become beneficial.} In general, like in the scalar case, a larger $q$ causes the iteration scheme to reach quadratic convergence after \secrevision{fewer} iterations as \revision{shown} in Fig.~\ref{fig:p3} for $p=3$. 
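As a consistency check, the operation count $m(p,q,j)=p+\left((q-1)+p\right)j$ stated above reproduces the tabulated multiplication numbers exactly; a short Python check (table entries transcribed from Tables~\ref{c500_1}, \ref{c500_4}, \ref{c10_1} and~\ref{c10_4}):

```python
def m(p, q, j):
    # Matrix-matrix multiplications for j iterations: p for the initial
    # power, plus (q-1) residual powers and p for B_k^p A per iteration.
    return p + ((q - 1) + p) * j

# Entries (p, q, #it, #mult) from the kappa = 500 and kappa = 10 tables:
rows = [(1, 2, 13, 27), (1, 3, 8, 25), (1, 6, 5, 31),
        (4, 3, 6, 40), (4, 4, 5, 39), (4, 6, 5, 49),
        (1, 2, 7, 15), (1, 4, 4, 17), (4, 2, 6, 34), (4, 3, 4, 28)]
for p, q, j, mult in rows:
    assert m(p, q, j) == mult
```

(The scalar tables use a different multiplication count and are therefore not covered by this formula.)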
\revision{Note that although the norm of the residual $R_k$ is used as a convergence criterion, in Fig.~\ref{fig:p3} we show the norm of the error compared to the correct solution, $\tilde{R}_k$.} \revision{For matrices with lower condition numbers, overall \secrevision{fewer} iterations are required as shown in Tables~\ref{c10_1} and~\ref{c10_4}. As a consequence, lower values for $q$ appear to be optimal. For $p=1$ the optimal value is $q=2$, for $p=4$ it is $q=3$.} \begin{table}[tb] \centering \caption{\label{tab:cond}\revision{Achievable precision and number of required iterations for matrices with large condition numbers.}} \begin{tabular}{rrrrrr} \hline\hline $\kappa$ & $p$ & $q$ & final residual $\norm{R}_2$ & final error $\norm{\tilde R}_2$ & \#it\Tstrut\Bstrut\\ \hline \multirow{4}{*}{\secrevision{$10^3$}} & \multirow{2}{*}{1} & 2 & \secrevision{$4.05\times10^{-14}$} & \secrevision{$2.86\times10^{-11}$} & 16\Tstrut\Bstrut\\ \cline{3-6} && 6 & \secrevision{$4.08\times10^{-14}$} & \secrevision{$2.00\times10^{-11}$} & 7\Tstrut\Bstrut\\ \cline{2-6} & \multirow{2}{*}{4} & 2 & \secrevision{$3.57\times10^{-05}$} & \secrevision{$7.77\times10^{-06}$} & 11\Tstrut\Bstrut\\ \cline{3-6} && 6 & \secrevision{$1.88\times10^{-06}$} & \secrevision{$9.28\times10^{-09}$} & 6\Tstrut\Bstrut\\ \hline\hline \multirow{4}{*}{\secrevision{$10^6$}} & \multirow{2}{*}{1} & 2 & \secrevision{$3.02\times10^{-11}$} & \secrevision{$1.17\times10^{-05}$} & 26\Tstrut\Bstrut\\ \cline{3-6} && 6 & \secrevision{$1.67\times10^{-11}$} & \secrevision{$8.84\times10^{-06}$} & 11\Tstrut\Bstrut\\ \cline{2-6} & \multirow{2}{*}{4} & 2 & \secrevision{$9.82\times10^{-01}$} & \secrevision{$2.00\times10^{+01}$} & 11\Tstrut\Bstrut\\ \cline{3-6} && 6 & \secrevision{$3.62\times10^{-01}$} & \secrevision{$2.36\times10^{+00}$} & 5\Tstrut\Bstrut\\ \hline\hline \multirow{4}{*}{\secrevision{$10^9$}} & \multirow{2}{*}{1} & 2 & \secrevision{$3.55\times10^{-08}$} & \secrevision{$3.48\times10^{+01}$} & 34\Tstrut\Bstrut\\ 
\cline{3-6} && 6 & \secrevision{$2.42\times10^{-08}$} & \secrevision{$1.41\times10^{+01}$} & 15\Tstrut\Bstrut\\ \cline{2-6} & \multirow{2}{*}{4} & 2 & \secrevision{$1.00\times10^{+00}$} & \secrevision{$1.66\times10^{+02}$} & 11\Tstrut\Bstrut\\ \cline{3-6} && 6 & \secrevision{$1.00\times10^{+00}$} & \secrevision{$1.52\times10^{+02}$} & 4\Tstrut\Bstrut\\ \hline\hline \end{tabular} \end{table} \revision{To evaluate the impact of much larger condition numbers, we used matrices with $\kappa\in\{\secrevision{10^3, 10^6, 10^9}\}$ for different combinations of $p$ and $q$. All calculations were performed using double precision arithmetic. The results are shown in Table~\ref{tab:cond}. As can be seen, the final precision decreases for higher condition numbers and the number of required iterations increases. Using higher values for $q$ not only reduces the number of iterations, but also increases the final precision in all cases.} \revision{Overall, this shows that the density of the matrix has no influence on the choice of an optimal $q$, but for higher $p$ and higher condition numbers $\kappa$, increasing values of $q$ can provide a reduction not only in the number of iterations but also in the number of matrix-matrix multiplications, and can increase the precision of the final result.} \subsection{General Matrices and Applications} For many applications in chemistry and physics, the most interesting matrices are sparse with an arbitrary spectral radius. Often, to obtain an algorithm that scales linearly with the number of rows/columns of the relevant matrices, it is crucial to have sparse matrices. However, these matrices typically have a spectral radius larger than $1$, in contrast to the matrices we have considered so far, where $\rho(A) < 1$. As already discussed, for $q=2$, the iteration function as obtained by Eq.~\revision{(\ref{eq:def_pro})} is identical to Bini's iteration scheme of Eq.~\revision{(\ref{eqn:q2p-1})}. 
Nevertheless, our numerical investigations suggest that the analogue of Proposition~\ref{prop} from Section~\ref{sec:level2a} may also hold for Eq.~\revision{(\ref{eq:def_pro})} for $q \neq 2$. In any case, in order to also deal with matrices whose spectral radius is larger than $1$, one can either scale the matrix such that \revision{it} has a spectral radius $\rho(A) < 1$, or choose the matrix $B_0$ so that $\norm{I - B_0^pA}_{\revision{2}} < 1$ is satisfied. With respect to the latter, for the widely-used Newton-Schulz iteration, employing \begin{align} B_0 = (\norm{A}_1\norm{A}_{\infty})^{-1}A^\mathsf{T} \label{eq:pan} \end{align} as start matrix, convergence is guaranteed \cite{pan}. However, here we solve the problem for an arbitrary \revision{$p$} by the following \begin{prop} \label{pr} Let $A$ be a symmetric positive definite matrix with $\rho(A) \geq 1$ and $B_0$ like in Eq.~\eqref{eq:pan}. Then, $\norm{I - B_0^pA}_2 < 1$ is guaranteed. \end{prop} \begin{proof} As $A$ is symmetric positive definite, we have $\norm{A}_2 = \lambda_{\max}$. Due to the fact that $\norm{A}_2 \leq \sqrt{\norm{A}_1 \norm{A}_{\infty}}$ and making use of Lemma~\ref{lemme}, we thus have to show that \begeq{\norm{B_0^pA}_2 = \norm{\left((\norm{A}_1\norm{A}_{\infty})^{-1}A^\mathsf{T}\right)^pA}_2 \leq 1.} Since $A$ is symmetric, we have $A^\mathsf{T} = A$ and therefore commutativity of $B_0$ and $A$. Furthermore, the relation \begeq{\left(\norm{A}_1 \norm{A}_{\infty} \right)^{-1} \leq \frac{1}{\lambda^2_{\max}}} holds, so that eventually \begin{align*} \norm{B_0^pA}_2 &\leq\frac{1}{(\lambda^2_{\max})^p}\norm{A^{p+1}}_2 \leq \frac{1}{\lambda^{2p}_{\max}} \norm{A}_2^{p+1} \nonumber \\ &= \frac{1}{\lambda^{2p}_{\max}}\lambda^{p+1}_{\max} = \frac{1}{\lambda^{p-1}_{\max}} \leq 1, \end{align*} as we deal with matrices with $\rho(A) \geq 1$. 
\end{proof} \begin{rem} By replacing $A^\mathsf{T}$ with $A^*$ in Eq.~\revision{(\ref{eq:pan})}, \textup{Proposition \ref{pr}} is also true for Hermitian positive definite matrices. \end{rem} \secrevision{Consequently, the initial guess of Eq.~\revision{(\ref{eq:pan})} is not only suited for the Newton-Schulz iteration, but for all cases in which the condition $\norm{I - B_0^pA}_2 < 1$ suffices to guarantee convergence (cf. Tab.~\ref{tab:overview}).} In what follows, we perform calculations using matrices $A$ that have the same densities and condition numbers as in the case $\rho(A) < 1$. We scale the matrices such that $\rho(A) \in \{10, 50\}$ and use the initial guess of Eq.~\revision{(\ref{eq:pan})}. \revision{In contrast to the results from Sec.~\ref{sec:level4b}, we see a slight effect of the density $d$ on the optimal choice of $q$. Hence, we evaluate the influence of different densities as well}. As an example, we study matrices \revision{with} $\revision{\kappa}=500$ \revision{and} spectral radius $\rho(A) = 10$ for the case $p=3$. As \revision{shown} in Table~\ref{rho10}, the number of matrix-matrix multiplications is always the lowest for $q=5$, even though the number of iterations is minimal for $q=6$. \begin{table}[bt] \centering \begin{minipage}[t]{0.475\textwidth} \centering \caption{Numerical results for the case $p=3$, $\rho(A)=10$ and $\revision{\kappa}=500$. 
Optimal values are shown in bold.\vspace{5mm}} \label{rho10} \begin{tabular}{crrr} \hline\hline $d$ & $q$ & \#it & \#mult\Tstrut\Bstrut\\ \hline & 2 & 55 & 223\Tstrut\\ & 3 & 23 & 118 \\ 0.003 & 4 & 18 & 111 \\ & \textbf{5} & 15 & \textbf{108} \\ & 6 & \textbf{14} & 115\Bstrut\\ \hline & 2 & 56 & 227\Tstrut\\ & 3 & 23 & 118 \\ 0.01 & 4 & 18 & 111 \\ & \textbf{5} & 15 & \textbf{108} \\ & 6 & \textbf{14} & 115\Bstrut\\ \hline & 2 & 57 & 231\Tstrut\\ & 3 & 25 & 128 \\ 0.1 & 4 & 19 & 117 \\ & \textbf{5} & 16 & \textbf{115} \\ & 6 & \textbf{15} & 123\Bstrut\\ \hline & 2 & 62 & 251\Tstrut\\ & 3 & 27 & 138 \\ 0.8 & 4 & 21 & \textbf{129} \\ & \textbf{5} & 18 & \textbf{129} \\ & 6 & \textbf{16} & 131\Bstrut\\ \hline\hline \end{tabular} \end{minipage}\begin{minipage}[t]{0.05\textwidth}\hfill\end{minipage}\begin{minipage}[t]{0.475\textwidth} \centering \caption{Numerical results for the case $p=4$, $\rho(A)=50$ and $\revision{\kappa}=10$. Optimal values are shown in bold.\vspace{5mm}} \label{rho50} \begin{tabular}{crrr} \hline\hline $d$ & $q$ & \#it & \#mult\Tstrut\Bstrut\\ \hline & 2 & 32 & 164\Tstrut\\ & 3 & 18 & 112 \\ 0.003 & 4 & 14 & 102 \\ & \textbf{5} & 12 & \textbf{100} \\ & 6 & \textbf{11} & 103\Bstrut\\ \hline & 2 & 34 & 174\Tstrut\\ & 3 & 19 & 118 \\ 0.01 & 4 & 15 & 109 \\ & \textbf{5} & 13 & \textbf{108} \\ & 6 & \textbf{12} & 112\Bstrut\\ \hline & 2 & 37 & 189\Tstrut\\ & 3 & 21 & 130 \\ 0.1 & 4 & 16 & 116 \\ & 5 & 14 & 116 \\ & \textbf{6} & \textbf{12} & \textbf{112}\Bstrut\\ \hline & 2 & 43 & 219\Tstrut\\ & 3 & 24 & 148 \\ 0.8 & 4 & 18 & \textbf{130} \\ & 5 & 16 & 132 \\ & \textbf{6} & \textbf{14} & \textbf{130}\Bstrut\\ \hline\hline \end{tabular} \end{minipage} \end{table} However, the decision of an optimal $q$ is not always as simple as in the case demonstrated above. One such instance is \revision{shown in Table~\ref{rho50}, where the inverse fourth root of matrices with spectral radius $\rho(A) = 50$ and condition number $\revision{\kappa}=10$ is computed}. 
In this case, we observe that for sparse matrices the choice $q=5$ is the best, while for denser matrices $q=6$ is optimal. For matrices with a larger spectral radius, we did not observe any configuration where $q=2$ is the best choice. On the contrary, as can be seen in Tables \ref{rho10} and \ref{rho50}, the evaluation of the inverse $p$-th root with $q=2$ generally requires up to two times the computational effort and number of \revision{matrix-matrix multiplications}, as well as up to three times the number of iterations, compared to the optimal choice of $q$. In general, the larger $p$ is, the more beneficial a larger value of $q$ becomes. \revision{Although our initial value from Eq.~(\ref{eq:pan}) allows us to operate on matrices with spectral radius $\rho(A)\geq1$, we observe that significantly more iterations are required than for the matrices evaluated in Sec.~\ref{sec:level4b}. It should be noted that the alternative approach of scaling the matrices such that $\rho(A)<1$ and then running the iterative method with $B_0 = I$ would require \secrevision{fewer} iterations in these test cases, but instead requires the calculation of $\rho(A)$ and a proper scaling factor.} \section{Conclusion} We presented a new general algorithm to construct iteration functions to calculate the inverse principal $p$-th root of symmetric positive definite matrices. It includes, as special cases, the methods of Altman \cite{altm} and Bini et al. \cite{bini}. The variable $q$, which in Altman's work equals the order of convergence for the iterative inversion of matrices, represents in our scheme the order of expansion. We find that $q>2$ does not lead to a higher order of convergence if $p\neq1$, but that the iteration converges within \secrevision{fewer} iterations and matrix-matrix multiplications, as quadratic convergence is reached faster. 
For an optimally chosen value of $q$, the computational effort and the number of matrix-matrix multiplications are up to two times lower, and the number of iterations up to \revision{four} times lower \revision{compared} to $q=2$.

\section*{Acknowledgments}

This work was supported by the Max Planck Graduate Center with the Johannes Guten\-berg-Universit\"at Mainz (MPGC) and the IDEE project of the Carl Zeiss Foundation. The authors thank Martin Hanke-Bourgeois for valuable suggestions and Stefanie Hollborn for critically reading the manuscript. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 716142).

\bibliographystyle{unsrt}
\section{Introduction}

In the common picture of giant planet formation, a gravitational instability (GI) triggers the collapse of gas clumps into protoplanets \citep{podolak93}. The disk instability scenario relies on the GI of massive and cold circumstellar disks \citep{safronov60,toomre64}. This GI can lead to the fragmentation of the disk into substellar companions \citep{boss98,boss2000}, provided radiative cooling is sufficiently efficient \citep{gammie01,rice03,boothclarke19}. While disks around class 0/I protostars might be massive enough \citep[e.g.,][]{eisner05,ALMA15,liutakami16}, unambiguous signatures of their GI are still being sought \citep{dong16,forgan16,forgan18}. Alternatively, planets may grow embedded in the disk \citep{safronov69, goldreichward73}, accumulating dust grains into Earth-sized planetary cores \citep[e.g.,][]{birnstiel16,nimmo18}. As the solid core grows more massive, it attracts more of the surrounding gas in a dense envelope --- its primordial atmosphere \citep{pollack96}. In this core-accretion scenario, planetary envelopes become prone to a GI when massive enough \citep{cameron73}, providing a way to form gas giants within gravitationally stable disks \citep{lissauerpp5}. Looking for spherically-symmetric envelope equilibria, \citet{perricameron74} found no solution beyond a \emph{critical core mass}. For a core mass smaller than this critical value, they found two possible solutions for the envelope, and the more massive solution was generally unstable. Since no stable equilibrium can be found beyond the critical mass, the envelope is expected to contract and start accreting gas in a runaway fashion, transforming massive cores into gas giants via the core-nucleated instability. Subsequent studies aimed at deriving more realistic estimates for the critical core mass.
\edt{Different assumptions were made regarding the energy transport through the envelope} \citep{mizuno78,hayashi79,mizuno80,sasaki89}, \edt{the grain opacity and the accretion luminosity onto the core} \citep{ikoma01,rafikov06}. \edt{In time-dependent models, runaway gas accretion starts when the accretion luminosity can no longer balance the radiative cooling and contraction of the envelope} \citep{pollack96,ikoma2000}. \edt{However, these studies always considered the envelope as one-dimensional and quasi-static.} \citet{wuchterl2,wuchterl3} presented the first hydrodynamic calculations of one-dimensional gravitating envelopes. At the critical core mass, \citet{wuchterl3} reported a departure from thermal and hydrostatic equilibrium leading to the \emph{ejection} of the envelope. Using three-dimensional hydrodynamic simulations, \citet{ayliffe12} reported the dynamic \emph{collapse} of gravitating envelopes into a more compact equilibrium. What caused the collapse reported by \citet{ayliffe12}, and the discrepancies with the results of \citet{wuchterl3}, could not be ascertained due to the intricate hydrodynamic, chemical and radiative effects involved. The aim of this paper is to examine the properties of self-gravitating planetary envelopes in the regime of the core-nucleated instability. We use hydrodynamic simulations in models of increasing dimensionality, keeping simple assumptions for the thermodynamics of the gas. In particular, \emph{we do not} model the runaway cooling, contraction and accretion of radiative envelopes. After examining the response of one-dimensional envelopes at the critical core mass, we consider departures from spherical symmetry by progressively including the rotation and the shear of the flow around the planet. As a number of studies have already characterized three-dimensional non-gravitating envelopes \citep[e.g.,][]{bate03,machida10,fung15}, we focus on the effects induced by the gas gravity on the flow near the core.
We describe our model and the methods used throughout this paper in Sect. \ref{sec:method}. We consider one-dimensional envelopes in Sect. \ref{sec:1d}, with hydrostatic calculations followed by hydrodynamic simulations. In Sect. \ref{sec:2d} we consider rotating envelopes within a two-dimensional axisymmetric model. The differential rotation of the circumstellar disk is introduced in Sect. \ref{sec:3d}, where we present three-dimensional simulations of embedded planets in the shearing-sheet approximation. We compare these models against previous studies and discuss their implications in Sect. \ref{sec:discussion}.

\section{Model and methods}
\label{sec:method}

We consider a solid planetary core embedded in a circumstellar disk and massive enough to capture its own atmosphere. If $m_c$ is the mass of the core and $c_s$ the isothermal sound speed of the gas, then the Bondi radius $\rB \equiv G m_c / c_s^2$ is larger than the radius $r_c$ of the core. For simplicity, we consider that the core orbits its star on a circular trajectory, unaffected by the gas drag \citep{weidenschilling77} or other causes of radial migration \citep{kleynelson12}. We consider time intervals of a few tens of orbital periods at most. The properties of the core are fixed and we focus on the dynamics of the gas surrounding it.

\subsection{Governing equations}
\label{sec:equations}

The gas evolves according to the following equations of mass and momentum conservation:
\begin{alignat}{3}
&\partial_t \rho &&+ \nabla\cdot\left[\rho \bm{v}\right] &&= 0,\label{eqn:consrho}\\
&\partial_t \left[\rho \bm{v}\right] &&+ \nabla\cdot\left[ \rho \bm{v}\otimes \bm{v} + P \right] &&= - \rho \nabla \Phi - 2 \rho \bm{\Omega}\times \bm{v}, \label{eqn:consrhov}
\end{alignat}
where $\rho$ is the gas density, $\bm{v}$ its velocity and $P$ its pressure.
The last term of \eqref{eqn:consrhov} represents the Coriolis acceleration, which we include only when following the core in a frame rotating at the angular frequency $\Omega$. We consider isothermal envelopes in most of this paper, having a pressure $P=\rho c_s^2$ with a single sound speed (temperature) in the entire flow. For a given core mass and disk temperature, the isothermal envelopes are the most massive ones, helping us identify the influence of the gas gravity on the flow. We also consider polytropic envelopes in Sect. \ref{sec:1d}, for which the equation of state is $P = \kappa \rho^\gamma$ and the isothermal limit corresponds to $\gamma=1$. We decompose the gravitational potential $\Phi$ as a sum of potentials from the star, the core and the gas. Given the core mass $m_c$, the potential of the core depends on the radius $r$ as $\Phi_c(r) = -G m_c / r$. To avoid further complications, we neglect the gravity of the circumstellar disk: in the limit of $\Phi_c \rightarrow 0$ (no core), we require the potential of the gas to be constant. The potential of the gas must therefore satisfy Poisson's equation
\begin{equation}
\label{eqn:poisson}
\Delta \Phi_g = 4\uppi G \rho',
\end{equation}
in which the source term $\rho'$ is the density deviation from its background value $\rho_{\infty}$. With this source term, an envelope of constant density $\rho_{\infty}$ remains gravitationally stable regardless of its size. If the gas density increases near the core, one can define the Jeans length scale $\lJ^2 \equiv \uppi c_s^2 / G \rho'$ beyond which the gas is unstable to gravito-acoustic perturbations \citep{jeans1902}.

\subsection{\textsc{Pluto} simulations}
\label{sec:plutosetup}

We performed self-gravitating hydrodynamic simulations using a modified version of the \textsc{pluto} 4.0 code \citep{mignone07}. Although the exact numerical setup changes from Sect. \ref{sec:1d} to Sect. \ref{sec:3d}, the integration scheme remains the same for consistency.
We use \textsc{pluto} to integrate \eqref{eqn:consrho}-\eqref{eqn:consrhov} in time via a finite-volume method and an explicit second-order Runge-Kutta time-stepping. At the volume interfaces, we use a linear reconstruction with the slope limiter of \citet{vanleer79} to estimate the primitive variables $\left(\rho,v\right)$. We then use the approximate Riemann solver of \citet{roe81} to compute the interface fluxes. Where the gas pressure varies by more than a factor $5$ between adjacent cells, we revert to the more diffusive MINMOD slope limiter \citep{roe86} and HLL Riemann solver \citep{van1997relation}. When including the Coriolis acceleration, the momentum equation is evolved in a rotating frame so as to conserve angular momentum \citep{kley98,mignone12}. We include gravity via its potential $\Phi$. We use the Poisson solver described in Appendix B of \cite{bethune19a} to obtain the potential of the gas satisfying \eqref{eqn:poisson}. We always impose $\partial_r \Phi_g=0$ at the surface of the core, consistent with the absence of gas inside $r\leq r_c$. To avoid a spurious drag arising if the potential lags behind the mass, the Poisson problem is solved at the beginning of every time step. For a spherically symmetric density distribution, the gravitational acceleration $-\partial_r\Phi_g$ could be directly obtained by radial integration of the gas mass. Regardless, we use the same Poisson solver in every dimension for consistency. The numerical error in estimating $\Delta \Phi_g$ is examined in Appendix \ref{app:errors}.

\subsection{Units and conventions}

We call envelope the region where the core gravity induces a substantial density accumulation $\rho' / \rho_{\infty} \gtrsim 1$. Conversely, background refers to the conditions in the midplane of the circumstellar disk, near the orbital radius of the core but away from the direct influence of the core. The gravitational constant is set to $G=1$ and we take the radius of the core $r_c$ as distance unit.
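In these units, the spherical shortcut mentioned above --- obtaining $-\partial_r\Phi_g$ by radial integration of the deviation mass --- can be sketched as follows (a minimal 1D illustration with hypothetical grid and profile; the actual code uses the multidimensional Poisson solver):

```python
import numpy as np

def radial_gravity(r, rho, rho_inf, G=1.0):
    """Acceleration -d(Phi_g)/dr of a spherically symmetric gas
    distribution, from the mass of the density deviation
    rho' = rho - rho_inf enclosed inside each radius.
    Cumulative trapezoidal integration of dm'/dr = 4 pi r^2 rho'."""
    shell = 4.0 * np.pi * r**2 * (rho - rho_inf)
    m = np.concatenate(([0.0],
        np.cumsum(0.5 * (shell[1:] + shell[:-1]) * np.diff(r))))
    return -G * m / r**2
```

With the source term built on $\rho'$ rather than $\rho$, a constant-density envelope exerts no gravity, as required by the condition that $\Phi_g$ be constant in the no-core limit.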
We use 1D, 2D and 3D in place of one, two and three-dimensional respectively.

\section{One-dimensional envelopes}
\label{sec:1d}

\subsection{1D spherical envelope model}

In this section, we examine the 1D radial structure of spherically-symmetric envelopes. We neglect the orbital motion of the planet around its star, as well as the rotation of the envelope around the core. The only forces involved are the gas pressure gradient and gravity. Were an equilibrium to exist, these two forces would balance each other. We focus on the influence of the gas gravity on such hydrostatic equilibria and their stability. Given the background sound speed $c_s$ of the gas, we measure the mass of the core via its Bondi radius $\rB$. We measure the background density $\rho_{\infty}$ relative to the average core density $\rho_c = m_c / \frac{4}{3}\uppi r_c^3$. In the limit $\rho_{\infty}/\rho_c \rightarrow 0$, the gravity of the gas should become negligible compared to the gravity of the core for a finite-sized envelope.

\subsection{Semi-analytical solutions}
\label{sec:semianal}

\subsubsection{Equations}

We design a 1D solver for self-gravitating hydrostatic equilibria. Using a polytropic equation of state $P=\kappa \rho^{\gamma}$, we rewrite the equation of mass \eqref{eqn:consrho} and momentum \eqref{eqn:consrhov} conservation as
\begin{align}
\frac{\dd m}{\dd r} - 4\uppi r^2 \left[\rho(r)-\rho_{\infty} \right] &= 0,\label{eqn:hs1}\\
\frac{\dd \log \rho}{\dd \log r} + \frac{\rho^{1-\gamma}}{\gamma\kappa} \frac{m(r)}{r} &= 0, \label{eqn:hs2}
\end{align}
where $m(r)$ is the mass contained inside the sphere of radius $r$ --- core and density deviations. One natural boundary condition is $m\left(r_c\right) = m_c$. We prescribe the second boundary condition at an arbitrary radius $\rO$ and vary the value of $\rO$ so as to simulate the influence of the background density and pressure on the envelope.
We impose $\rho(\rO) = \rho_{\infty}$, and set the background sound speed such that $P\left(\rO\right) = \rho\left(\rO\right) c_s^2$ whether $\gamma=1$ (isothermal) or not. A solution of interest is readily found in the isothermal non-gravitating case. In this limit, \eqref{eqn:hs2} reduces to $\dd\log\rho/\dd\log r + \rB/r = 0$, yielding the non-gravitating solution \begin{equation} \label{eqn:rhonosg} \rho_0(r) = \rho_{\infty} \exp\left[\frac{\rB}{r} - \frac{\rB}{\rO} \right]. \end{equation} \subsubsection{Numerical solutions} We use a Levenberg-Marquardt root finder to solve \eqref{eqn:hs1}-\eqref{eqn:hs2} with the above boundary conditions as a boundary value problem for $\left(m,\log\rho\right)$. The differentiation operators are constructed via a Chebyshev collocation grid on $\log\!\left(r\right)$. With 16 collocation points, the residual error of \eqref{eqn:hs1}-\eqref{eqn:hs2} is less than $10^{-12}$ for smooth solutions; we use 64 collocation points by default. \begin{figure} \begin{center} \includegraphics[width=1.0\columnwidth]{sskijns.pdf} \caption{Radial density profile normalized by the non-gravitating solution \eqref{eqn:rhonosg} for $\gamma=1$, $\rB/r_c=8$, $\rO/r_c=16$, and varying the background density $\rho_{\infty}$. The dashed curves are obtained when trying to impose $\rho_{\infty}>\rho_{\mathrm{critical}}$; they satisfy \eqref{eqn:hs1}-\eqref{eqn:hs2} but not the required outer boundary condition. \label{fig:hsprofs}} \end{center} \end{figure} Fig. \ref{fig:hsprofs} shows the results of the root finder for $\gamma=1$, $\rO/r_c=16$, $\rB/r_c=8$, and different values of the background density $\rho_{\infty}$. In the non-gravitating limit $\rho_{\infty}\rightarrow 0$ (darker, violet lines), the solution converges to \eqref{eqn:rhonosg}. As $\rho_{\infty}$ increases, the gas gravity becomes significant and adds up to the core gravity. To maintain a hydrostatic equilibrium, the pressure (density) profile becomes progressively steeper. 
Eventually, we reach a critical value $\rho_{\mathrm{critical}}$ beyond which there is no \emph{valid solution} of \eqref{eqn:hs1}-\eqref{eqn:hs2} satisfying $\rho(\rO)=\rho_{\infty}$ (lighter, orange curves). We identify this transition by monitoring the error $\rho(\rO)-\rho_{\infty}$, which suddenly jumps at the threshold $\rho_{\infty} = \rho_{\mathrm{critical}}$. Although the outer boundary condition is not satisfied anymore, the density profiles in the regime $\rho_{\infty}>\rho_{\mathrm{critical}}$ are still solutions of \eqref{eqn:hs1}-\eqref{eqn:hs2} to per cent accuracy over the radial domain. The critical density marks the transition to the second solution branch identified by \cite{perricameron74}, albeit with a different parametrization. We examine the linear stability of these solutions in Appendix \ref{app:linstab}, and now delimit the domain of existence of solutions satisfying $\rho\left(\rO\right)=\rho_{\infty}$ in our parameter space.

\subsection{Critical envelopes}
\label{sec:critmass}

We solve \eqref{eqn:hs1}-\eqref{eqn:hs2} for various input parameters and track the threshold value $\rho_{\mathrm{critical}}$. We prescribe the outer radius $\rO/r_c\in\left[16,64\right]$ to represent the sphere of influence of a massive embedded core, i.e. roughly one pressure scale height of the disk in radius \cite[see Sect. \ref{sec:3dflow} and][]{bethune19b}. We also prescribe the polytropic exponent $\gamma$ while maintaining\footnote{For static equilibria, one can arbitrarily choose $c_s$ and adjust the core mass according to $m_c = c_s^2 \rB / G$.} the background temperature $P(\rO)=\rho_{\infty} c_s^2$, and the mass of the core via its Bondi radius $\rB$. We then increase the background density $\rho_{\infty}$ until the outer boundary condition can no longer be satisfied. The corresponding $\rho_{\mathrm{critical}}$ are marked on Fig. \ref{fig:ssk_stablim} relative to the core density $\rho_c$.
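Hydrostatic profiles of this kind can also be recovered with a generic boundary-value solver. The sketch below uses scipy's \texttt{solve\_bvp} in place of the Chebyshev-collocation root finder described above, for an isothermal ($\gamma=1$) envelope in the code units $G=c_s=r_c=1$; the parameter values are illustrative:

```python
import numpy as np
from scipy.integrate import solve_bvp

# Code units: G = c_s = r_c = 1; rO and rB are illustrative values.
G = cs = rc = 1.0
rO, rB = 16.0, 8.0
mc = cs**2 * rB / G          # core mass from its Bondi radius

def isothermal_envelope(rho_inf):
    """Self-gravitating isothermal hydrostatic envelope:
    dm/dr = 4 pi r^2 (rho - rho_inf),
    dlog(rho)/dlog(r) = -G m / (cs^2 r),
    with boundary conditions m(rc) = mc and rho(rO) = rho_inf."""
    def rhs(r, y):
        m, rho = y
        return np.vstack([4.0*np.pi*r**2*(rho - rho_inf),
                          -G*m*rho/(cs**2*r**2)])
    def bc(ya, yb):
        return np.array([ya[0] - mc, yb[1] - rho_inf])
    r = np.geomspace(rc, rO, 200)
    guess = np.vstack([np.full_like(r, mc),
                       rho_inf*np.exp(rB/r - rB/rO)])  # non-gravitating profile
    return solve_bvp(rhs, bc, r, guess, tol=1e-6, max_nodes=100000)
```

In the limit $\rho_{\infty}\rightarrow 0$ the returned profile reduces to \eqref{eqn:rhonosg}, while raising $\rho_{\infty}$ toward its critical value steepens the inner profile, as on Fig. \ref{fig:hsprofs}.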
\begin{figure}
\begin{center}
\includegraphics[width=1.0\columnwidth]{sasaki_stablim.pdf}
\caption{Critical background density relative to the core density for different values of the Bondi radius $r_{\mathrm{B}}$ (\emph{abscissa}), outer radius $\rO/r_c$ (\emph{markers, see legend}) and polytropic exponent $\gamma$ (\emph{color scale}). Valid solutions are found below the markers, i.e. for smaller $\rho_{\infty}/\rho_c$. The top axis indicates the corresponding core mass in Earth mass units, assuming that the core has the same density as the Earth and is located at $1\mathrm{au}$ around a solar-mass star in a disk with aspect ratio $h/r=0.05$. The dashed horizontal line marks the corresponding MMSN midplane density $\rho\approx 3\times10^{-9} \mathrm{g}\,\mathrm{cm}^{-3}$. \label{fig:ssk_stablim}}
\end{center}
\end{figure}

The ratios $\rho_{\mathrm{critical}}/\rho_c$ marked on Fig. \ref{fig:ssk_stablim} delimit the region of existence of valid solutions from above. With $r_c$ as our distance unit, the core mass is directly $m_c \sim \left(4/3\right)\uppi \rho_c$. For a given core mass, valid equilibria require a background density $\rho_{\infty}$ smaller than $\rho_{\mathrm{critical}}$. Reciprocally, solutions satisfying $\rho_{\infty}=\rho_{\mathrm{critical}}$ can only be found for cores more massive than indicated. We note three trends on Fig. \ref{fig:ssk_stablim}. First, when the mass of the core increases (from left to right), the critical gas-to-core density ratio decreases. Second, increasing the polytropic exponent $\gamma$ from $1$ to $3/2$ (lighter markers) allows equilibria at larger background densities. Third, the threshold $\rho_{\mathrm{critical}}/\rho_c$ only slightly decreases with the location of the outer boundary condition $\rO$ (marker symbols). The thresholds obtained by this method agree with those of \citet{sasaki89} in the appropriate regime. The critical density drops by orders of magnitude as soon as $\rB/r_c\gtrsim 8$.
It falls below the Minimum Mass Solar Nebula \citep[MMSN,][]{hayashi81} midplane density for isothermal envelopes around cores of a few Earth masses at $1\,\mathrm{au}$. However, valid equilibria with $\gamma=7/5$ still exist for much more massive cores $\gtrsim 10^3 m_{\oplus}$. Whether the critical density threshold is realistically accessible for more sophisticated thermodynamic structures is outside the scope of this paper. Having delimited the range of parameters for which valid hydrostatic equilibria exist, we proceed to examine the dynamical reaction of the envelope when the control parameters vary continuously across this limit. \subsection{Direct numerical simulations} \subsubsection{Numerical setup} \label{sec:1dsetup} We use the \textsc{pluto} code as described in Sect. \ref{sec:plutosetup} to evolve the density $\rho$ and radial velocity $v_r$ in time for an isothermal gas in 1D spherical geometry. We mesh the radial interval $r/r_c \in \left[1,16\right]$ with 512 logarithmically spaced grid cells. We prevent mass and momentum fluxes through the surface of the core via $\left(\rho,v_r,\Phi_c\right)(r_c-\epsilon) = \left(+\rho,-v_r,+\Phi_c\right)(r_c+\epsilon)$. At the outer radial boundary, we impose a constant density $\rho_{\infty}$ and allow the gas to flow in by a linear extrapolation of the radial velocity. The outer boundary condition $\Phi_g(\rO)=0$ sets a reference value for the gravitational potential of the gas. We initialize the computational domain with a flat density $\rho=\rho_{\mathrm{\infty}}$ and zero velocity. We take the sound speed as velocity unit ($c_s=1$). The mass of the core is progressively increased from zero to its nominal value, so that the envelope mass builds up in quasi-static equilibrium at every instant. 
\subsubsection{Non-gravitating limit}
\label{sec:dns1dnosg}

When neglecting the gas gravity, the density profile should converge toward \eqref{eqn:rhonosg} given an outer density $\rho_{\infty}$ and a Bondi radius $\rB$. We increase the Bondi radius of the core linearly in time from zero up to $16\,r_c$ over a time interval of $t_{\mathrm{B}} = 12800\,r_c/c_s$, after which it remains equal to $16\,r_c$. The resulting density distribution is represented on Fig. \ref{fig:rhortnosg} as a function of radius and time.

\begin{figure}
\begin{center}
\includegraphics[width=1.0\columnwidth]{sski1j0b16g1_rho.png}
\caption{Space-time distribution of the density $\rho/\rho_{\infty}$ in a non-gravitating envelope. The core mass increases from zero to $\rB=16\,r_c$ over the first $t_{\mathrm{B}}=12800 r_c/c_s$ (\emph{top axis}) after which it is constant. The cyan contour marks the sonic surface $v_r=-c_s$, corresponding to a smooth transsonic point (upper part) and a shock (lower part). \label{fig:rhortnosg}}
\end{center}
\end{figure}

As the Bondi radius of the core increases (from left to right), an inflow of gas through the outer radial boundary allows the density to increase inside the domain. At $c_s t / r_c \approx 12000$ and $r/r_c \approx 8$, the inflow becomes supersonic (cyan contour) and shocks on the inner parts of the envelope. The shock front propagates inward until $t_{\mathrm{B}}$, i.e., while the mass of the core is still increasing. For $t>t_{\mathrm{B}}$, the shock front propagates outward until the two sonic points merge. After this instant, the envelope sustains acoustic oscillations but matches the analytical solution \eqref{eqn:rhonosg} to $5$ per cent accuracy upon time-averaging. The shock appearing in this simulation is a consequence of the inflow velocity exceeding the sound speed. It implies that the envelope is not in quasi-static equilibrium as intended, likely because the Bondi radius of the core initially increases too fast.
However, the envelope is able to reach a stable equilibrium after the core mass stops increasing. \subsubsection{Self-gravitating envelope} We repeat the same simulation as above, but now including the gravity of the gas. For a Bondi radius $\rB=16\,r_c$, the critical density is $\rho_{\mathrm{critical}}/\rho_c \approx 3.346 \times 10^{-8}$. We impose a slightly larger density $\rho_{\infty}\approx 3.356 \times 10^{-8} \rho_c$. When the Bondi radius of the core reaches $16\,r_c$, no hydrostatic equilibrium should be able to satisfy the outer boundary condition anymore. \begin{figure} \begin{center} \includegraphics[width=1.0\columnwidth]{sski1j69b16g1_rho.png} \caption{Same as Fig. \ref{fig:rhortnosg} but including the gas gravity, with a background density larger than its critical value. The inflow remains transsonic after the Bondi radius has reached $\rB=16\,r_c$; the envelope is then delimited by an inward moving shock front (lower part of the cyan contour). \label{fig:rhortwithsg}} \end{center} \end{figure} The evolution of the self-gravitating density distribution is shown on Fig. \ref{fig:rhortwithsg}. As previously, the inflow becomes supersonic before the Bondi radius of the core reaches $16\,r_c$. After $t_{\mathrm{B}}=12800\,r_c/c_s$, the shock moves outward for about $5\times 10^3\,r_c/c_s$ before starting to move toward the core. The smooth sonic point (upper part of the cyan line) keeps moving outward, meaning that the inflow becomes progressively faster at a given radius. As the two sonic points drift away from one another, the core and its envelope always drive a transsonic inflow. The key difference with the non-gravitating case is that the envelope does not converge to a steady state when $\rho_{\infty}/\rho_c$ exceeds the threshold delimited on Fig. \ref{fig:ssk_stablim}. When crossing this threshold, the outer parts of the envelope collapse in near free-fall. 
As soon as the infalling gas becomes supersonic, it has to shock on the inner parts of the envelope, dissipating momentum and allowing the accumulation of mass. We verified via simulations at lower core mass (lower $\dd r_{\mathrm{B}}/\dd t$ while the core mass is initially increased) and larger $\rho_{\infty}$ that the collapse precisely occurs at the critical values predicted in Sect. \ref{sec:critmass}. However, in this regime the inflow shocks closer to the core and the shocked envelope becomes spatially under-resolved. We verified that a collapse also happens in adiabatic envelopes when crossing the threshold marked on Fig. \ref{fig:ssk_stablim}. In this case, adiabatic heating and momentum dissipation at the shock lead to higher pressures in the contracting gas. For $\gamma=3/2$, $\rO/r_c=16$ and $\rB/r_c=16$, the smooth sonic point settles at the outer radius $\rO$ and the shock propagates \emph{outward} until the space between the shock and $\rO$ is under-resolved. Because these cases are sensitive to the finite extent and resolution of our computational domain, we discuss what governs the dynamics of the envelope in the next section. \subsection{Discussion of 1D models} \label{sec:1ddiscussion} \subsubsection{The core-nucleated instability} By progressively increasing the mass of the core at a fixed $\rho_{\infty}$, we have followed the `stable' solution branch for the envelope up to the critical core mass. Beyond this point, the equilibria from the second solution branch (as drawn on Fig. \ref{fig:hsprofs} for $\rho_{\infty}>\rho_{\mathrm{critical}}$) should be linearly unstable \citep{perricameron74,mizuno78,wuchterl1}. Due to the sudden collapse of the outer envelope at the critical mass, these equilibria seem inaccessible unless taking them as initial condition \citep[as done by][]{wuchterl3}. We will therefore focus on the non-linear dynamics of the envelope when it crosses the critical mass from below. It is possible to interpret the trends of Fig. 
\ref{fig:ssk_stablim} in the non-gravitating limit, assuming that the threshold mainly depends on the envelope mass relative to the core mass \citep{sasaki89}. First, the mass of the envelope increases faster than the mass of the core, so gas gravity effects appear at lower background densities $\rho_{\infty}$ when the mass of the core increases. Second, if the gas pressure varies as $\rho^{\gamma}$, then one can satisfy $\partial_r P = -\rho\partial_r\Phi_c$ with a shallower density profile when increasing $\gamma$. To reach the same envelope mass, the background density must then be larger. Third, the envelope mass increases with $\rO$, so it becomes comparable to the core mass at a lower $\rho_{\mathrm{critical}}$ when $\rO$ increases. As we show on Fig. \ref{fig:ssk_stablim} and in Appendix \ref{app:linstab}, the absence of global equilibria beyond a critical mass is independent of the gas thermodynamics, which only affect the value of this critical mass. Unlike the instability of a homogeneous gas ball \citep{ebert55,bonnor56}, the envelope collapse can spontaneously stop even though $\gamma<4/3$ (see Fig. \ref{fig:rhortwithsg}). As apparent from the slow propagation of the shock front on Fig. \ref{fig:rhortwithsg}, the shocked envelope maintains a nearly hydrostatic equilibrium. Using the method described in Sect. \ref{sec:semianal}, we verified that hydrostatic equilibria can indeed be found between the core and the shock, given the post-shock density as an outer boundary condition. In this sense, planetary cores above the critical mass can still support a hydrostatic envelope. We explain below how the extent of this envelope is determined by the ambient conditions. \subsubsection{Long-term evolution} \label{sec:1dlongterm} In 1D, if the infalling gas becomes supersonic, then it has to shock before reaching the surface of the core (where $v_r=0$). 
The conditions just upstream of the shock are controlled by the ambient (outer boundary) conditions on $\left(\rho,v_r\right)$ and by the planet mass. Given the ambient conditions, it is possible to predict the dynamics of 1D envelopes to some extent. Let $\zeta(t)$ denote the radius of the shock front, $u_r$ the gas velocity in the frame of the shock, and the superscripts $(\mathrm{u})$ and $(\mathrm{d})$ identify the upstream and downstream regions respectively. The velocity of the shock is obtained by changing frame: $\dd\zeta/\dd t = v_r\dn - u_r\dn$. If the post-shock envelope were exactly hydrostatic ($v_r\dn=0$), then $\dd\zeta/\dd t>0$ and the shock would propagate outward. The opposite orientation $\dd\zeta/\dd t<0$ on Fig. \ref{fig:rhortwithsg} reveals that the post-shock envelope is contracting ($v_r\dn<0$). This contraction is due in part to the inward momentum flux, most of it being converted into pressure at the shock. Simultaneously, the accumulation of mass causes the gas potential $\Phi_g$ to deepen over time, so the envelope contracts to support its own increasing gravity. The momentum flux is relevant for locating the shock front. The velocity drop at the shock --- $u_r\dn/c_s = c_s/u_r\up$ in the isothermal case --- leads to a large drop in ram pressure $\rho u_r^2$. The shock ultimately settles where the downstream thermal pressure balances the upstream ram pressure: $\rho\dn c_s^2 \simeq \rho\up u_r\up{^2}$. Since the shocked envelope is nearly hydrostatic, $\rho\dn$ decreases radially, so a larger upstream ram pressure would push the shock closer to the core. This is expected if the background density or the accumulated envelope mass increases (as on Fig. \ref{fig:rhortwithsg}). If the ambient density were to decrease, then the shocked shell would expand until the momentum balance condition is satisfied again.
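These isothermal jump relations can be made explicit: in the shock frame, conservation of the mass flux $\rho u_r$ and of the momentum flux $\rho u_r^2 + \rho c_s^2$ gives a compression ratio equal to the squared upstream Mach number, from which $u_r\dn/c_s = c_s/u_r\up$ follows. A minimal numerical check, with illustrative values:

```python
def isothermal_jump(rho_u, u_u, cs):
    """Downstream state of an isothermal shock, in the shock frame.
    The compression ratio is M^2 = (u_u/cs)^2, so that the downstream
    velocity satisfies u_d / cs = cs / u_u."""
    M2 = (u_u / cs)**2
    return rho_u * M2, u_u / M2   # rho_d, u_d
```

Both fluxes are conserved across the front while the ram pressure drops sharply, which is why the front settles where the downstream thermal pressure matches the upstream ram pressure.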
If the shock radius $\zeta$ extends further than the gravitational radius $G m\left(\zeta\right) / c_s^2$ or the Hill radius of the planet, then the outer parts of the envelope could escape the core by evaporation or gravitational tides. According to this 1D model, a planet may therefore experience a phase of rapid gas accretion only to lose its outer envelope later during the dispersal of the protoplanetary disk. \section{Two-dimensional envelopes} \label{sec:2d} The previous 1D model omits the angular momentum of the background flow with respect to the core. By conservation of angular momentum, the gas should spin faster as it approaches the core. The centrifugal acceleration can then provide a substantial support against gravity, allowing for much less massive envelopes. In this section, our main goal is to test whether the core-nucleated instability --- and the ensuing accretion phase --- can also affect rotationally-supported envelopes. \subsection{2D axisymmetric model} \label{sec:2dmodel} We consider a planetary core orbiting its star at the angular frequency $\Omega$ about the $z$ axis of the disk. We adopt a frame centered on the core and rotating along its orbit at the angular frequency $\Omega$; in this frame, the background flow is steady in time. To make things simpler, we neglect the vertical stratification and the differential rotation of the disk. In the absence of the core, the gas density should be constant and the velocity should be zero in this frame, so rotation only manifests itself through the Coriolis acceleration in \eqref{eqn:consrhov}. If radial motions are brought about by the core, then angular momentum conservation will generate a toroidal velocity in this frame. Let $\left(r,\theta,\varphi\right)$ denote spherical coordinates centered on the core, with $\theta=0$ along the rotation axis $z$. 
Assuming that the flow is axisymmetric ($\partial_{\varphi}=0$), we use \textsc{pluto} to integrate \eqref{eqn:consrho}-\eqref{eqn:consrhov} for $\left(\rho,v_r,v_{\theta},v_{\varphi}\right)$ in the $\left(r,\theta\right)$ poloidal plane. The interval in polar angle $\theta\in \left[0,\uppi\right]$ is uniformly meshed with 256 grid cells. The radial interval $r/r_c\in\left[1,32\right]$ is meshed with 256 logarithmically spaced cells. The radial boundary conditions are the same as in 1D (see Sect. \ref{sec:1dsetup}), with the addition of $v_{\theta}=v_{\varphi}=0$. Inside the computational domain, we homogenize the density in the innermost grid shell at every timestep, conserving the total mass in the shell. This operation is intended to prevent unresolved mass concentrations when including the gas gravity. About the polar axis $\theta=0$, we impose \begin{equation*} \left(\rho,v_r,v_{\theta},v_{\varphi}\right)(r,-\vartheta) = \left(+\rho,+v_r,-v_{\theta},-v_{\varphi}\right)(r,+\vartheta), \end{equation*} and similarly about $\theta=\uppi\pm\vartheta$. We initialize the domain with $\rho=\rho_{\infty}$ and $\bm{v}=0$, and we increase the mass of the core up to its nominal value linearly over $\Omega t/2\uppi = 2$ orbital times. The gas entering the computational domain carries a specific angular momentum depending only on its initial latitude and on the outer radius $\rO$. \subsection{Non-gravitating limit} \label{sec:dns2dnosg} We start by neglecting the gravity of the gas. We take $\rho_{\infty}=c_s=1$ and prescribe the angular frequency $\Omega r_c / c_s = 1/32$ of the core around the star. If the vertical stratification of the disk was accounted for, the density would vary over a pressure scale height $h\equiv c_s/\Omega$. The ratio $\Omega r_c / c_s$ would then measure the size of the core relative to the stratification scale of the disk. Typical values of this ratio for super-Earths can be found in section 2.1 of \cite{bethune19a}. 
A ratio of $h/r_c = 32$ is arguably reasonable for massive planets at small orbital separations, but underestimated otherwise. Larger values of $h/r_c$ would place stronger constraints on the explicit integration time steps and therefore be computationally more demanding. We consider two cases with different core masses: $r_{\mathrm{B}}/r_c=8$ and $16$. \subsubsection{Flow structure} \label{sec:dns2dflow} \begin{figure} \begin{center} \includegraphics[width=1.0\columnwidth]{sski2j0b16g10o32_PLm_vyrhovp.png} \caption{Toroidal velocity (\emph{color map}), poloidal mass flux (\emph{blue arrows, orientation only}), mass density (\emph{dashed green contour lines}) and sonic surface for the poloidal velocity (\emph{solid cyan contours}) after time-averaging over $20\Omega^{-1}$ in the simulation with $\rB/r_c=16$. Note the non-rotating shell directly on top of the core. \label{fig:2dvyrhovp}} \end{center} \end{figure} Fig. \ref{fig:2dvyrhovp} shows the time-averaged flow in the non-gravitating simulation with $\rB/r_c=16$. In the midplane, the toroidal velocity increases inward, as expected from angular momentum conservation \citep{miki82,ormel1}. \edt{To quantify the level of rotational support, one can convert to Keplerian velocity units} \begin{equation} \label{eqn:tokepler} \frac{v_{\varphi}}{\vK} = \frac{\Omega r_c}{\sqrt{c_s^2\rB/R}}\frac{v_{\varphi}}{\Omega r_c} = \frac{1}{32} \sqrt{\frac{R}{\rB}} \frac{v_{\varphi}}{\Omega r_c}. \end{equation} The toroidal velocity increases from $0.2\vK$ at $R=6 r_c$ to its maximum $0.4\vK$ at $R=2r_c$ in the midplane. \edt{Since the centrifugal acceleration scales as $v_{\varphi}^2$}, this corresponds to roughly $4$ to $16$ per cent of rotational support against gravity. Close to the polar axis, the gas arrives with essentially no angular momentum, so only the pressure gradient can balance gravity. However, the envelope does not settle in a static 1D equilibrium.
Instead, the gas circulates from high latitudes down to the core and away from the core in the midplane. The inflow becomes supersonic at $\vert z\vert \approx 7r_c$, and subsonic again through a shock at $\vert z \vert \simeq 2.5r_c$. Averaging the turbulent fluctuations out, the radius of the shock increases by less than $0.2 r_c$ over $3000$ sound crossing times. With respect to the core, the shock front is therefore stationary over the time scales considered, and it dissipates the momentum of the infalling gas. Downstream of the shocks, the gas remains sitting on top of the core with essentially no momentum. Whether an accretion shock forms depends on the inflow becoming supersonic. Otherwise, the envelope can be recycled by the poloidal circulation with no net mass accretion onto the core \citep{ormel2,bethune19b}. We now characterize the mass flux through the envelope for different core masses. \subsubsection{Gas accretion and recycling} \begin{figure} \begin{center} \includegraphics[width=1.0\columnwidth]{accvsrec_n32.pdf} \caption{Radial profiles of the net mass flux $\overline{\rho v_r}$ (\emph{thick blue, left axis}) and recycling mass flux $\widetilde{\rho v_r}$ (\emph{thin red, right axis}) averaged over $50\Omega^{-1}$ in non-gravitating 2D simulations with $\rB/r_c=8$ (\emph{solid}) and $16$ (\emph{dashed}). \label{fig:2dmvrdvr}} \end{center} \end{figure} Let $\overline{X}(r)$ denote the average of $X$ over the sphere of radius $r$. From the net mass flux $\overline{\rho v_r}$, we define the recycling mass flux $\widetilde{\rho v_r}$ as the standard deviation $\widetilde{\rho v_r}^2 \equiv \overline{\rho^2 v_r^2} - \overline{\rho v_r}^2$. The recycling flux is zero when the flow is spherically symmetric, but non-zero if the flow features some degree of circulation. We draw the radial profiles of the net and recycling mass fluxes in the $\rB/r_c=8$ and $16$ cases on Fig. \ref{fig:2dmvrdvr}.
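As an illustration of these shell-averaged diagnostics, the following sketch (with a hypothetical analytic flow standing in for simulation output) verifies that a pure circulation yields a vanishing net flux $\overline{\rho v_r}$ but a finite recycling flux $\widetilde{\rho v_r}$:

```python
import numpy as np

# The net mass flux is the solid-angle average of rho*v_r over a sphere,
# and the recycling flux is its standard deviation over the same sphere.
# The Legendre-P2 velocity profile below is a hypothetical stand-in for
# simulation data: inflow at the poles, outflow at the equator.
theta = np.linspace(0.0, np.pi, 4001)
mu = np.cos(theta)
rho = np.ones_like(theta)                  # uniform density on the shell
v0 = 0.1
v_r = -v0 * 0.5 * (3.0 * mu**2 - 1.0)      # polar inflow, equatorial outflow

def shell_avg(x):
    """Solid-angle average over the sphere (weight sin(theta))."""
    w = np.sin(theta)
    return np.sum(x * w) / np.sum(w)

net = shell_avg(rho * v_r)                             # vanishes for pure circulation
recycling = np.sqrt(shell_avg((rho * v_r)**2) - net**2)
# recycling tends to v0/sqrt(5) for this profile, despite the zero net flux
```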
The recycling flux $\widetilde{\rho v_r}$ is non-zero in both cases, so both envelopes support a poloidal circulation. The recycling mass flux is maximal closer to the core, and it increases by a factor 10--20 when $\rB/r_c$ increases from $8$ to $16$. Because of the shell averaging, the recycling flux $\widetilde{\rho v_r}$ in the $\rB/r_c=16$ case is non-zero all the way down to the core, although the shocked gas on top of the core does not seem to be efficiently mixed with its surroundings on Fig. \ref{fig:2dvyrhovp}. Regarding the net accretion flux $\overline{\rho v_r}$, we obtain two qualitatively different behaviors depending on the core mass. For $\rB/r_c=16$, the accretion rate $4\uppi r^2 \overline{\rho v_r} \approx - 10^{-3}\, \Omega m_c$ is constant throughout the computational domain. The positive mass flux measured below $1.25r_c$ is most likely a representation artifact due to the different variables used by \textsc{pluto} and in the present analysis\footnote{\textsc{Pluto} evolves the conservative variables $\left(\rho,\rho \bm{v}\right)$ after reconstruction of the primitive variables $\left(\rho,\bm{v} \right)$ at the cell interfaces and using the fluxes of the Riemann problem. For our analysis, we estimate the mass flux after time-averaging the cell centered primitive variables.}. We verified that the accretion rate $\partial_t m(r)$ matches the integrated flux $-4\uppi r^2 \overline{\rho v_r}(r)$ to better than $10^{-2}$ relative accuracy, so there is no measurable mass flux through the core boundary. For $\rB/r_c=8$, the net mass flux $\overline{\rho v_r}$ drawn on Fig. \ref{fig:2dmvrdvr} is compatible with zero through the envelope. The mass accumulated inside the Bondi sphere oscillates by $5$ per cent about its equilibrium value, with no net increase over $900$ sound crossing times of the Bondi sphere. The absence of mass accretion is related to the absence of dissipative processes, and specifically the absence of accretion shocks.
The poloidal velocity is indeed subsonic everywhere in this run, recycling the envelope without accumulating mass on top of the core \citep{bethune19b}. \subsection{Self-gravitating axisymmetric envelopes} \label{sec:2dsg} We now include the gravity of the gas in addition to rotation. To facilitate comparisons with later 3D results, we adopt $\Omega=1$ while keeping $\Omega r_c / c_s=1/32$. For comparison purposes again, we assume that the planet is embedded in a Keplerian shear flow, and that the disk is stratified over a pressure scale height $h$. The background density can then be prescribed in terms of the Toomre parameter \begin{equation} \label{eqn:toomre} Q \equiv \frac{\Omega c_s}{\uppi G \Sigma} \sim \frac{1}{\uppi \sqrt{2\uppi} \rho_{\infty}} \end{equation} with our choice of units. We consider five different combinations of $r_{\mathrm{B}}/r_c$ and $Q$, as listed in \autoref{tab:2dsg}. \begin{table} \begin{center} \caption{Two-dimensional self-gravitating simulations: label, Bondi radius $r_{\mathrm{B}}/r_c$, Toomre parameter $Q$ as defined by \eqref{eqn:toomre}, background density $\rho_{\infty}$ relative to the core density $\rho_c$, critical background density for the equivalent 1D setup, and existence of a corresponding 1D hydrostatic equilibrium. \label{tab:2dsg}} \begin{tabular}{lccccc} Label & $\rB/r_c$ & $Q$ & $\rho_{\infty}/\rho_c$ & $\rho_{\mathrm{critical}}/\rho_c$ & 1D static\\ \hline \verb!2B8Q0! & $8$ & $10^{0}$ & $6.49\times 10^{-5}$ & $2.95\times 10^{-5}$ & no\\ \verb!2B8Q05! & $8$ & $10^{0.5}$ & $2.05\times 10^{-5}$ & $2.95\times 10^{-5}$ & yes\\ \verb!2B8Q1! & $8$ & $10^{1}$ & $6.49\times 10^{-6}$ & $2.95\times 10^{-5}$ & yes\\ \verb!2B16Q1! & $16$ & $10^{1}$ & $3.24\times 10^{-6}$ & $1.10\times 10^{-8}$ & no\\ \verb!2B16Q2!
& $16$ & $10^{2}$ & $3.24\times 10^{-7}$ & $1.10\times 10^{-8}$ & no \end{tabular} \end{center} \end{table} \subsubsection{Envelope mass} \label{sec:2dsgmass} By comparing the background density $\rho_{\infty}$ to the critical value $\rho_{\mathrm{critical}}$, one can predict whether hydrostatic equilibria exist in 1D (rightmost column of \autoref{tab:2dsg}). To test whether this prediction holds in 2D, we draw on Fig. \ref{fig:sgmdot2d} the evolution of the gas mass $m_{\mathrm{B}}$ contained inside the Bondi sphere of the core in each self-gravitating 2D simulation. \begin{figure} \begin{center} \includegraphics[width=1.0\columnwidth]{sgmdot2d.pdf} \caption{Gas mass $m_{\mathrm{B}}$ contained inside the Bondi sphere of the core relative to the core mass $m_c$ as a function of time for the 2D axisymmetric self-gravitating simulations listed in \autoref{tab:2dsg}. The core mass increases up to its nominal value $m_c$ over the first $\Omega t / 2\uppi \leq 2$ orbital times. \label{fig:sgmdot2d}} \end{center} \end{figure} The parameters of runs \texttt{2B8Q05} and \texttt{2B8Q1} allow 1D hydrostatic equilibria. In both cases, the Bondi mass $m_{\mathrm{B}}$ converges to a constant value. We can compare it to the envelope mass obtained by integrating the semi-analytic density profiles at $\rho_{\infty} = \rho_{\mathrm{critical}}$ from $r_c$ to $\rB$. For $\rB/r_c=8$ and an outer boundary $\rO/r_c=32$, the critical Bondi mass is $3.32\times 10^{-1} m_c$. Both \texttt{2B8Q05} and \texttt{2B8Q1} indeed converge to a Bondi mass smaller than this critical value. With its larger background density, the envelope of run \texttt{2B8Q0} admits no 1D equilibrium. After increasing the mass of the core over the first two orbital times, the envelope mass keeps increasing. After the Bondi mass reaches $m_{\mathrm{B}}/m_c\approx 0.4$ at $\Omega t/2\uppi\approx 6$, the envelope transitions to a phase of enhanced mass accretion.
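At fixed $\Omega$ and $c_s$, \eqref{eqn:toomre} implies $\rho_{\infty}\propto 1/Q$; the tabulated background densities can be cross-checked against this scaling (values transcribed from \autoref{tab:2dsg}):

```python
import math

# Consistency check: within each core-mass family, rho_infty * Q should be
# constant since rho_infty scales as 1/Q at fixed Omega and c_s.
# Values of rho_infty/rho_c and Q copied from the table of 2D runs.
rho = {"2B8Q0": 6.49e-5, "2B8Q05": 2.05e-5, "2B8Q1": 6.49e-6,
       "2B16Q1": 3.24e-6, "2B16Q2": 3.24e-7}
Q = {"2B8Q0": 1.0, "2B8Q05": 10**0.5, "2B8Q1": 10.0,
     "2B16Q1": 10.0, "2B16Q2": 100.0}

for a, b in [("2B8Q0", "2B8Q1"), ("2B8Q05", "2B8Q1"), ("2B16Q1", "2B16Q2")]:
    assert math.isclose(rho[a] * Q[a], rho[b] * Q[b], rel_tol=2e-2)
```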
The Bondi mass keeps increasing beyond $10$ times the final value in run \texttt{2B8Q1}, and beyond $10^{1/2}$ times the final value in run \texttt{2B8Q05}. The absence of near-linear scaling of $m_{\mathrm{B}}$ with $\rho_{\infty}$ indicates that the envelope of run \texttt{2B8Q0} does not converge toward an equilibrium. In this case only, we stopped the simulation when $m_{\mathrm{B}}/m_c=10$. With a larger core mass, the envelopes of runs \texttt{2B16Q1} and \texttt{2B16Q2} are also expected to collapse in 1D. The critical Bondi mass is $2.04\times 10^{-2} m_c$ in this case. Both runs cross this threshold and keep accreting mass until the end of the simulation. In the case of \texttt{2B16Q1}, we note a change of the slope $\partial_t m_{\mathrm{B}}$ at $\Omega t/2\uppi \approx 36$, when the gas mass $m_{\mathrm{B}}$ becomes comparable to the core mass. Whether this transition in accretion rate --- and the one observed in run \texttt{2B8Q0} --- is related to a 1D dynamical collapse is examined in the following section. \subsubsection{From core to gas-dominated envelope} \label{sec:2drunaway} \begin{figure} \begin{center} \includegraphics[width=1.0\columnwidth]{sski2j-1b16g10o1_PLm_vyrhovp.png} \caption{Toroidal velocity (\emph{color map}), poloidal mass flux (\emph{blue arrows, orientation only}), mass density (\emph{dashed green contour lines}) and sonic surface for the poloidal velocity (\emph{solid cyan contours}) after time-averaging over $10\Omega^{-1}$ in the simulation \texttt{2B16Q1}. \label{fig:2dvyrhovpsg}} \end{center} \end{figure} Fig. \ref{fig:2dvyrhovpsg} shows the structure of the flow in run \texttt{2B16Q1}, averaged over $\Omega t / 2\uppi \in \left[30,35\right]$. This interval corresponds to the beginning of the enhanced accretion phase on the top right corner of Fig. \ref{fig:sgmdot2d}. We verified that the figure does not change qualitatively when averaging later in the simulation. 
We also obtained a qualitatively similar picture when averaging the flow over the last two orbital times in run \texttt{2B8Q0}. As on Fig. \ref{fig:2dvyrhovp}, the toroidal velocity is larger near the midplane and close to the core. Let $v_{g} \equiv \sqrt{R\partial_R \Phi}$ denote the toroidal velocity required for full rotational support in the midplane. The ratio $v_{\varphi}/v_{g}$ equals $1$ at $1.6\,r_c$, decreasing to $0.5$ at $5.6\,r_c$ and $0.33$ at $9\,r_c$. With rotation dominating the radial momentum balance, the gas density deviates significantly from a spherical, hydrostatic distribution. The density isocontours form lobes anchored in the midplane at $R\simeq r_c$, delimiting a torus of gas orbiting around the core. Unlike in Fig. \ref{fig:2dvyrhovp}, the poloidal velocity converges toward the core in the entire plane of Fig. \ref{fig:2dvyrhovpsg}. However, the inflow velocity is supersonic only in the polar accretion cone delimited by the sonic surface. The inflow velocities measured in the midplane remain less than $1$ per cent of the sound speed, so this figure does not depict a global collapse on dynamical timescales. As more mass accumulates on the core, the gravitational potential of the gas becomes deeper. The envelope must therefore contract to maintain radial momentum balance in its own gravitational well. \begin{figure} \begin{center} \includegraphics[width=1.0\columnwidth]{sski2q10b16g10o1_bondi.png} \caption{Space-time map of the gas gravitational radius $r_g/r$ as defined by \eqref{eqn:rg} relative to the local radius in run \texttt{2B16Q1}. The vertical dashed line marks the time $\Omega t / 2\uppi=2$ when the core mass is fully set. \label{fig:2sgrg}} \end{center} \end{figure} To show the role of the gas gravity more directly, we define the gravitational radius of the cumulative gas mass \begin{equation} r_g\left(r\right)\equiv \frac{G \left[m(r)-m_c\right]}{c_s^2}. \label{eqn:rg} \end{equation} Fig.
\ref{fig:2sgrg} shows the evolution of the gas gravitational radius $r_g/r$ in the simulation run \texttt{2B16Q1}. At $\Omega t / 2\uppi \gtrsim 5$, the gravitational radius of the gas encloses the inner parts of the envelope. The gravitational radius increases in time due to mass accretion, reaching the core's Bondi radius $r_g/r_c=16$ at $\Omega t / 2\uppi \approx 36$. This time also marks the change of accretion rate on Fig. \ref{fig:sgmdot2d}. After $\Omega t / 2\uppi \gtrsim 40$, the gas is bound and attracted toward the core mainly by its self-gravity. \subsection{Discussion of 2D models} Adding rotation leads to the spontaneous formation of a circulatory flow through the envelope. With the parameter space explored in our isothermal simulations, we find preferentially a polar inflow of gas towards the core and an equatorial outflow. This pattern seems robust to boundary effects since it was reported in three-dimensional simulations including the stratification and differential rotation of the background disk \citep[e.g.,][]{tanigawa12,fung15}. In the non-gravitating limit, the polar inflows become supersonic when the Bondi radius of the core satisfies $\rB/r_c \gtrsim 16$, in agreement with \citet{bethune19b}. The supersonic inflows shock on the inner parts of the envelope, dissipating momentum and allowing mass accretion. Given the limited integration time of our simulations, we can only speculate that these envelopes will converge to a steady state (finite mass) on longer time scales, when the polar shocks expand up to the smooth transonic point as on Fig. \ref{fig:rhortnosg}. When accounting for the gas gravity, we observe a transition to enhanced gas accretion in runs \texttt{2B8Q0} and \texttt{2B16Q1}, as expected from 1D models. In this phase, the envelope mass can increase without bounds, the accretion rate being only restricted by the available gas at the outer radial boundary.
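The diagnostic of \eqref{eqn:rg} is straightforward to evaluate from a shell-averaged density profile; the following sketch uses an illustrative power-law profile in place of simulation output (units $G=c_s=r_c=1$):

```python
import numpy as np

# Gravitational radius r_g(r) = G*[m(r) - m_c]/c_s^2, where m(r) - m_c is
# the gas mass enclosed within radius r.  The rho ~ r^-2 profile below is a
# hypothetical stand-in for shell-averaged simulation data.
G = c_s = 1.0
A = 1e-4
r = np.geomspace(1.0, 32.0, 512)
rho = A / r**2                               # toy envelope profile

shell = 4.0 * np.pi * r**2 * rho             # gas mass per unit radius
m_gas = np.concatenate(([0.0], np.cumsum(
    0.5 * (shell[1:] + shell[:-1]) * np.diff(r))))   # trapezoidal cumulative mass
r_g = G * m_gas / c_s**2
# For this profile m(r) - m_c = 4*pi*A*(r - 1), so r_g grows linearly with r;
# self-gravity matters wherever r_g(r)/r approaches unity.
```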
However, the accretion rate over the interval $\Omega t / 2\uppi \in \left[2,20\right]$ in run \texttt{2B16Q1} is still unaffected by the gas gravity. This is apparent from the curve of run \texttt{2B16Q2} on Fig. \ref{fig:sgmdot2d}, which has the same slope as run \texttt{2B16Q1} although it is ten times less massive. When run \texttt{2B16Q1} starts accreting at an enhanced rate at $\Omega t / 2\uppi \approx 36$, the envelope mass exceeds the 1D critical mass by a factor $\approx 40$. The transition to a phase of enhanced accretion instead starts when the gas mass becomes comparable to the core mass. We do not observe a dynamical collapse of the entire envelope in the 2D simulations of Table \ref{tab:2dsg}. Instead, the accretion flow is limited to the polar regions and a rotationally-supported circumplanetary disk forms near the midplane. We do expect a spherical collapse in the non-rotating limit $\Omega r_c / c_s \rightarrow 0$. Even if a fluid element carries a specific angular momentum $l\simeq R v_{\varphi}$, the acceleration $v_{\varphi}^2/R \simeq l^2/R^3$ alone can balance gravity only up to a limited centrifugal radius. We confirm that a nearly spherical collapse occurs when decreasing $\Omega$ below $1/64$ while maintaining $\Omega r_c / c_s = 1/32$ and $\rB/r_c=16$, i.e., limiting the angular momentum of the gas coming into the computational domain. \section{Three-dimensional envelopes} \label{sec:3d} In both the 1D and 2D models, the runaway (enhanced and unbound) accretion phase is controlled by the outer boundary conditions. In reality, the core has a sphere of influence limited by the tidal potential of the star and by the resulting shear flow of the disk. We now examine this situation via 3D self-gravitating simulations of embedded planetary cores. 
\subsection{3D model: rotation and shear} Let $(x,y,z)$ be Cartesian coordinates centered on the planetary core, with $z$ along its rotation axis, $y$ along its orbital trajectory and $x$ along the star-core radius. We consider a small patch of the disk around the core and expand the gravitational potential of the star to second order about the orbital radius of the core \citep{hill78}. Assuming that the circumstellar disk is Keplerian (i.e., neglecting radial pressure gradients), the total potential takes the form \begin{equation} \label{eqn:totpotgrav} \Phi = -\underbrace{\frac{3}{2}\Omega^2 x^2}_\text{star} -\underbrace{\frac{G m_c}{r}}_\text{core} +\underbrace{\Phi_g}_\text{gas}. \end{equation} In this patch of the disk, the Keplerian shear flow induced by the star is $v_y = -\left(3/2\right)\Omega x$. We do not expand the potential of the star in the vertical direction $z$, i.e., we omit the vertical stratification of the disk. We make this choice to facilitate comparisons with the previous 2D results, and to simply subtract\footnote{Otherwise, one would have to solve the Poisson problem $\Delta \Phi_g = 4\uppi G \rho$ for a stratified disk with no planetary core, and then subtract this potential every time the Poisson problem is solved.} a constant value $\rho_{\infty}$ in the modified Poisson problem \eqref{eqn:poisson}. This choice is reasonable when the pressure scale height is large compared to the core radius ($h/r_c \gg 1$). We take $\Omega=1$ and keep $h/r_c = 32$ as in Sect. \ref{sec:2dsg}. We extend the computational domain to $\left(r/r_c,\theta,\varphi\right) \in \left[1,128\right]\times\left[0,\uppi\right]\times\left[0,2\uppi\right]$. The radial interval is meshed with $128$ logarithmically spaced grid cells; the $\left(\theta,\varphi\right)$ intervals are meshed with $80 \times 160$ uniformly spaced cells.
At the outer radius, we impose the initial conditions of a constant density $\rho_{\infty}$ and a Keplerian shear flow $\left(v_x,v_y,v_z\right) = \left(0,-3\Omega x/2,0\right)$. The inner radial boundary conditions are the same as in the 2D setup, including the homogenized density in the innermost grid shell (see Sect. \ref{sec:2dmodel}). The $\varphi$ boundaries are periodic, and the conditions on the polar axis $\theta=0$ respect the spherical topology of the domain: \begin{equation*} \left[\rho,v_r,v_{\theta},v_{\varphi}\right](-\vartheta,\varphi) = \left[+\rho,+v_r,-v_{\theta},-v_{\varphi}\right](\vartheta,\varphi+\uppi) \end{equation*} at every radius, and similarly in the opposite hemisphere about $\theta=\uppi\pm\vartheta$. As in Sect. \ref{sec:2d}, the mass of the core is increased up to its nominal value over the first $\Omega t / 2\uppi \leq 2$ orbital times. The two main differences with the numerical setup of \cite{bethune19b} are the inclusion of the gas gravity and the omission of the vertical density stratification. The 3D simulations discussed below are listed in \autoref{tab:3dsg}. \begin{table} \begin{center} \caption{Three-dimensional self-gravitating simulations: label, Bondi radius $r_{\mathrm{B}}/r_c$, Toomre parameter $Q$ as defined by \eqref{eqn:toomre}. \label{tab:3dsg}} \begin{tabular}{lcc} Label & $\rB/r_c$ & $Q$ \\ \hline \verb!3B8Q05! & $8$ & $10^{0.5}$\\ \verb!3B8Q1! & $8$ & $10^{1}$\\ \verb!3B16Q1! & $16$ & $10^{1}$\\ \verb!3B16Q2! & $16$ & $10^{2}$ \end{tabular} \end{center} \end{table} \subsection{Flow structure} \label{sec:3dflow} \begin{figure} \begin{center} \includegraphics[width=1.0\columnwidth]{i3dp32b16q10_PLm_vyrhovp_11-13.png} \caption{Same as Fig. \ref{fig:2dvyrhovpsg} in the equivalent 3D case \texttt{3B16Q1} after azimuthal and time-averaging over $\Omega t / 2\uppi\in\left[5.5,6.5\right]$. 
\label{fig:3dvyrhovpsg}} \end{center} \end{figure} To compare run \texttt{3B16Q1} with the equivalent 2D case, we average the flow variables in time over one orbital period $\Omega t / 2\uppi\in\left[5.5,6.5\right]$ and in the azimuthal ($\varphi$) direction, and represent them on Fig. \ref{fig:3dvyrhovpsg}. The poloidal mass flux describes a circulatory pattern, with an equatorial outflow and polar inflows shocking close to the core surface. The toroidal velocity reaches $v_{\varphi} / \Omega r_c \gtrsim 100$ in the midplane inside $R \leq 0.2h = 6.4 r_c$. In these inner regions, the density contours form lobes anchored near the surface of the core, delimiting a rotationally-supported circumplanetary disk. One difference with the 2D case of Fig. \ref{fig:2dvyrhovpsg} is the increased opening angle of the accretion cone, $\approx 55^{\circ}$, limiting the rotationally-supported envelope to a smaller range of latitudes near the midplane. Another difference with Fig. \ref{fig:2dvyrhovpsg} is the radially limited extent of the circumplanetary disk: the density isocontours detached from the core are restricted to $R\lesssim 6\,r_c$. \begin{figure} \begin{center} \includegraphics[width=1.0\columnwidth]{i3dp32b16q10_EQt_tot_11-13.png} \caption{Density $\rho/\rho_{\infty}$ (\emph{color map}), velocity (\emph{blue arrows, orientation only}) and total gravitational potential $\Phi$ (\emph{green isocontours}) after time-averaging over $\Omega t / 2\uppi\in\left[5.5,6.5\right]$ in the equatorial plane of run \texttt{3B16Q1}; only the inner $r\leq 2h$ are represented. \label{fig:3dtot11}} \end{center} \end{figure} We show the equatorial flow structure of run \texttt{3B16Q1} on Fig. \ref{fig:3dtot11}. From the isocontours of the gravitational potential, one can identify the different parts of the flow \citep[see for example][]{fung15}.
When neglecting the gas gravity, the central region dominated by the potential of the core extends up to the Hill radius $\rH \equiv \left(G m_c / 3 \Omega^2\right)^{1/3}$. With an envelope mass equal to the core mass at this time (see Fig. \ref{fig:sgmdot3d}), the effective Hill radius is only $2^{1/3} \approx 1.26$ larger than in the equivalent non-gravitating case. Due to the background shear, the envelope is limited to approximately one pressure scale height in radius. At larger distances $\vert x/h\vert>2/3$, the Keplerian shear flow is supersonic with respect to the core; the density perturbations induced by the core are then transported by spiral density waves into the disk. Despite their large spatial extent, we find no significant contribution of the spiral waves to the gravitational potential of the gas, which remains spherically symmetric in good approximation. \subsection{Mass of 3D gravitating envelopes} \begin{figure} \begin{center} \includegraphics[width=1.0\columnwidth]{sgmdot.pdf} \caption{Gas mass contained inside the Bondi sphere of the core relative to the core mass $m_c$ as a function of time for the 3D self-gravitating simulations listed in \autoref{tab:3dsg}. The core mass increases up to its nominal value $m_c$ over the first $\Omega t / 2\uppi \leq 2$ orbital times. \label{fig:sgmdot3d}} \end{center} \end{figure} For each 3D simulation listed in \autoref{tab:3dsg}, we integrate the gas mass $m_{\mathrm{B}}$ contained inside the Bondi radius of the core, normalize it by the final mass $m_c$ of the core, and draw its evolution on Fig. \ref{fig:sgmdot3d}. Only run \texttt{3B8Q1} has an envelope mass converging to a finite value. The residual accretion rate is approximately $5\times 10^{-6}\,\Omega m_c$, negligible over the duration of the simulation. With a laminar and subsonic poloidal flow (no polar shocks), mass accretion is mainly driven by numerical dissipation in this simulation. 
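The $2^{1/3}$ enlargement of the effective Hill radius quoted above follows directly from the scaling $\rH\propto m^{1/3}$; a minimal check (units $G=\Omega=1$):

```python
# Hill radius r_H = (G*m / (3*Omega^2))**(1/3), evaluated for the core alone
# and for the core plus an equal-mass envelope (units G = Omega = m_c = 1).
G = Omega = m_c = 1.0

def r_hill(m):
    return (G * m / (3.0 * Omega**2))**(1.0 / 3.0)

ratio = r_hill(2.0 * m_c) / r_hill(m_c)   # 2**(1/3), about 1.26
```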
The envelope mass in run \texttt{3B16Q2} increases up to $50$ per cent of the core mass over the integration time of the simulation. Mass accretion is caused by polar shocks as described in Sect. \ref{sec:3dflow}. The mass accretion rate increases from $1.9\times 10^{-2} m_c$ per orbit to $2.6\times 10^{-2} m_c$ per orbit over the interval $\Omega t/2\uppi\in\left[8,24\right]$. In the last two simulations \texttt{3B16Q1} and \texttt{3B8Q05}, the combination of a large core mass and/or a large background density leads to the most massive envelopes. Both envelopes accrete gas and become as massive as the core during the simulation. However, these envelopes saturate at $m_{\mathrm{B}}/m_c \lesssim 3$. At $\Omega t/ 2\uppi \approx 6.6$ in run \texttt{3B16Q1} and $9.5$ in run \texttt{3B8Q05}, the envelope mass drops by an order of magnitude over a fraction of an orbital time. \begin{figure} \begin{center} \includegraphics[width=1.0\columnwidth]{i3dp32b16q10_EQt_tot_14.png} \caption{Same as Fig. \ref{fig:3dtot11} but at the time $\Omega t/2\uppi = 6.7$ in run \texttt{3B16Q1}, corresponding to the envelope mass drop on Fig. \ref{fig:sgmdot3d}. \label{fig:3dtot14}} \end{center} \end{figure} To understand what caused the envelope mass to drop, we show on Fig. \ref{fig:3dtot14} a snapshot of the equatorial flow in run \texttt{3B16Q1} at $\Omega t / 2\uppi = 6.7$. The density distribution features two arc-shaped shocks enclosing the core. Inside the bubble delimited by these shocks, the gas moves away from the core at supersonic velocity. The gravitational potential is distorted compared to Fig. \ref{fig:3dtot11}. The isocontours of potential are pinched toward the core on the left side of Fig. \ref{fig:3dtot14}, indicating that the blob of gas ejected to the left carries a mass comparable to that of the core. The envelope of run \texttt{3B8Q05} `explodes' in a similar fashion at $\Omega t / 2\uppi \approx 9.5$.
The explosions are preceded by the fragmentation of the gas in the innermost parts of the envelope. The resulting density clumps exert a torque on the surrounding material, throwing most of the envelope mass outside the Hill sphere of the core. To assess whether this fragmentation is caused by Jeans instability, we compute the Jeans length $\lJ^2 \equiv \uppi c_s^2 / G \rho'$. In run \texttt{3B16Q1} at $\Omega t / 2\uppi = 6.5$, the shell-averaged Jeans length goes as low as $0.1\,r_c$ near the core surface, resolved by only $3$ to $5$ grid cells. This length is smaller than the extent of the shocked shell $\approx 0.3\,r_c$ on Fig. \ref{fig:3dvyrhovpsg}, so the envelope could be unstable to radial density perturbations. We did not observe gravitational fragmentation in the equivalent 2D simulation \texttt{2B16Q1}. We successfully reproduced the 2D runs \texttt{2B16Q1} and \texttt{2B8Q05} at a reduced resolution of $96\times 96$ cells over $\left( r,\theta\right)$, comparable to the 3D resolution inside $r/r_c\leq 32$. The absence of fragmentation in these low-resolution 2D runs supports the conclusion that fragmentation is not caused by the lower numerical resolution of our 3D simulations. It also points towards non-axisymmetric disturbances as the trigger of the envelope fragmentation. Higher resolution 3D simulations will be required to examine this issue when computational resources allow it. After the explosion, the remaining envelope is less massive than the core again. The envelope remnant settles as a new hydrostatic shell, and the process of polar mass accretion resumes until the next explosion. This leads to the saturated behavior of the envelope mass with time shown on Fig. \ref{fig:sgmdot3d} for runs \texttt{3B16Q1} and \texttt{3B8Q05}. \section{Discussion} \label{sec:discussion} \subsection{Comparison with previous works} \subsubsection{Non-gravitating envelopes} The 3D simulations are most easily compared with those of \citet{bethune19b}, who used a nearly identical setup.
Comparing run \texttt{3B16Q1} with the equivalent non-gravitating run \texttt{H32B16} (see their Figure 10 (e)), the accretion cones are wider, the inflow becomes supersonic higher above the core, and the density contours are more pinched around the midplane. \texttt{3B16Q1} shares more similarities with run \texttt{H32B32} on Figure 10 (f) of \cite{bethune19b}, for which the Bondi radius of the core is twice as large ($\rB/r_c=32$). This is consistent with the fact that the envelope mass equals the core mass at this time in run \texttt{3B16Q1}, see Fig. \ref{fig:sgmdot3d}. In other words, the envelope structure in run \texttt{3B16Q1} is the same as if the gas mass was simply added to the core mass. To allow a more quantitative comparison, we reproduce Fig. \ref{fig:3dvyrhovpsg} for the non-gravitating simulation \texttt{H32B32} of \cite{bethune19b} in Appendix \ref{app:3dnosg}. We estimate the size of the circumplanetary disk formed in run \texttt{3B16Q1} as the radius $R_d$ at which the specific angular momentum $R v_{\varphi}$ is maximal. The disk size $R_d \approx 0.32\times \left(1.26 \,\rH\right)$ is remarkably close to the $\rH/3$ predicted by \citet{quillen98} after correcting the Hill radius $\rH$ by a factor $2^{1/3}$ for the enclosed gas mass. This ratio is larger than the $\rH/10$ found by \citet{wang14} in isothermal simulations around intermediate-mass cores, but it is supported by the radiative simulations of \citet{danhenkley2003a} and \citet{ayliffe09b} around high-mass cores. Because the accretion flow is restricted to the polar cones, the mass accretion rate should be sensitive to the vertical stratification of the disk. Since we omit the disk stratification in the present study, we expect larger accretion rates compared to those of \citet{machida10,bethune19b}. The accretion rate of $2\times10^{-2} m_c$ per orbit in run \texttt{3B16Q2} is indeed $10^3$ times larger than predicted from equation (13) of \citet{bethune19b}.
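The disk-size estimator used above (the radius maximizing the specific angular momentum $R v_{\varphi}$) can be sketched as follows, with a hypothetical rotation profile (Keplerian inside a turnover radius, steeply declining outside) standing in for the azimuthally averaged data:

```python
import numpy as np

# Disk-size estimator: R_d is the midplane radius where l = R * v_phi peaks.
# The piecewise rotation profile below is an illustrative stand-in for
# simulation output (units G*(m_c + m_B) = 1, R in units of r_c).
R = np.linspace(0.5, 20.0, 2000)
R0 = 6.0                                            # turnover radius of the toy profile
v_phi = np.where(R <= R0, R**-0.5, R0**-0.5 * (R0 / R)**2)

l = R * v_phi                 # increases as sqrt(R) inside R0, decays as 1/R outside
R_d = R[np.argmax(l)]         # the estimator recovers the turnover radius
```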
\subsubsection{Self-gravitating envelopes} \citet{wuchterl3} presented radiative simulations of 1D envelopes at the critical core mass. He found that the core loses most of its envelope by launching an \emph{outflow}. In our 1D isothermal simulations, the `instability' always develops as an \emph{inflow} toward the core. As discussed in Sect. \ref{sec:1ddiscussion} above and in section 3.2 of \citet{wuchterl3}, no hydrostatic equilibria connecting the core to the disk can be found beyond the critical mass. Whether the subsequent evolution is a collapse or an expansion of the envelope should therefore be determined by the outer boundary conditions. In the simulations of \citet{wuchterl3}, the critical solution is taken as initial condition, with no initial velocity. In our case, the gas already flows inward when the core reaches the critical mass. This difference could select the inflow as the favored outcome of our setup. Beyond the critical mass, one can still find hydrostatic equilibria for an isolated planetary envelope. \citet{perricameron74}, \citet{mizuno78} and \citet{wuchterl1} found that these equilibria are linearly unstable due to the adiabatic index $\gamma<4/3$ in the H$_2$ dissociation region. We confirm the linear instability of these equilibria in Appendix \ref{app:linstab}, where we also find unstable modes for $\gamma>4/3$. We argue that this linear analysis might not be relevant for embedded planets because of the fast dynamical response (e.g., free fall of the surrounding disk) occurring at the critical mass. \cite{ayliffe12} described the hydrodynamics of gravitating planetary envelopes in 3D radiative simulations. The envelope of their most massive core ($33$M$_{\oplus}$ in model J) undergoes a hydrodynamic collapse, after which it settles into a new equilibrium and resumes gas accretion. We do not observe a collapse of the hydrostatic inner envelope in our multi-dimensional simulations, even when they enter an enhanced accretion phase.
The sudden contraction of the inner envelope in model J of \cite{ayliffe12} might be caused by an opacity drop as the temperature rises near the core, leading to a more isothermal (steeper) density profile. \subsection{From 1D to 3D} In 1D models the radial extent of the envelope must be prescribed a priori. We showed in Sect. \ref{sec:critmass} that the location of this boundary only weakly affects the value of the critical mass when $\rB/r_c\geq 8$. In 3D the envelope is limited by the background shear to roughly one pressure scale height $h\equiv c_s/\Omega$ around the core. The 2D axisymmetric simulations extending to $h$ should therefore be comparable to the 3D simulations regarding the core-nucleated instability. From 1D to 2D axisymmetric, we still find a transition to runaway (enhanced and unbound) gas accretion. However, mass accretion is initially driven by transonic inflows with no role of the gas gravity. The mass accretion rate increases only when the envelope mass becomes comparable to the core mass, which can be orders of magnitude larger than the critical mass predicted in 1D static models. If the mass accretion rate is proportional to the mass of the planet as in non-gravitating simulations \citep{machida10,bethune19b}, then the planet mass should increase exponentially in time --- as long as the disk can provide this material. From 2D axisymmetric to 3D, we can compare the evolution of the envelope mass on Fig. \ref{fig:sgmdot2d} and Fig. \ref{fig:sgmdot3d} respectively. The mass accretion rate in run \texttt{3B16Q2} is twenty times larger than in the equivalent 2D run \texttt{2B16Q2}. This is related to the different properties of the background flow. In 2D, the incoming gas has a prescribed angular momentum as if the outer boundary was in solid body rotation. In 3D, the shear flow has a different vorticity distribution \citep{krumholz2005}, resulting in wider accretion cones and larger accretion rates. 
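The exponential-growth argument above (accretion rate proportional to planet mass) can be sketched numerically; the e-folding time $\tau$ below is an illustrative placeholder, not a value measured in the simulations:

```python
import numpy as np

# If dm/dt = m / tau, the planet mass grows as m(t) = m0 * exp(t / tau),
# for as long as the disk can supply the accreted gas.
tau = 10.0            # hypothetical e-folding time (orbital periods)
m0 = 1.0              # initial planet mass (core + envelope), arbitrary units

t = np.linspace(0.0, 50.0, 501)
dt = t[1] - t[0]
m = np.empty_like(t)
m[0] = m0
for i in range(1, len(t)):
    m[i] = m[i - 1] * (1.0 + dt / tau)   # explicit Euler step

# The numerical solution tracks the analytic exponential to a few per cent
# over five e-folding times with this step size.
analytic = m0 * np.exp(t / tau)
rel_err = abs(m[-1] - analytic[-1]) / analytic[-1]
```

After five e-foldings the mass has grown by a factor of roughly $e^5 \approx 150$, which illustrates why such a phase must eventually be limited by gap opening or disk dispersal.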
Unlike their 2D analogues, the 3D accretion rates of runs \texttt{3B16Q1} and \texttt{3B16Q2} do not significantly increase when the ratio $m_{\mathrm{B}}/m_c \gtrsim 50$ per cent. The fragmentation of the inner envelope happens before a phase of accelerated growth can be clearly identified. \subsection{Gas thermodynamics} The envelopes presented by \cite{ayliffe12} all accrete gas at an accelerating rate when the envelope mass becomes comparable to the core mass (see their Figure 1). The dust opacity --- which might vanish in the vicinity of the core \citep{podolak03,movshovitz10} --- controls the efficiency of radiative cooling, and therefore the envelope contraction and accretion \citep[see][Figure 2]{ayliffe12}. If radiative cooling only affects the timescale of mass accretion, e.g., neglecting envelope recycling \citep{ormel2,kurokawa18}, then our simulations could be appropriate in late stages of gas accretion onto high-mass cores \citep[$\geq 15\,$M$_{\oplus}$ in model A of][]{ayliffe12}. The envelope mainly accretes gas through polar inflows, with a negligible contribution from the circumplanetary disk to the mass accretion rate \citep{machida10}. For an isothermal gas, the inflows lose momentum by shocking on the inner envelope. For an adiabatic gas, the momentum would first be converted into heat, which would then have to be radiated away \citep{szulagyi14}. Calculations of the shock radiation efficiency point toward isothermal conditions \citep{marleau17,marleau19}. The mass accretion rate parametrized by \citet{machida10} and \citet{bethune19b} could therefore be used to determine the time required for gas gravity to become important after a supersonic inflow develops. The fragmentation of the envelope in runs \texttt{3B16Q1} and \texttt{3B8Q05} (see Fig. \ref{fig:3dtot14}) is made possible by the somewhat unrealistic isothermal equation of state. 
Isothermal envelopes have the largest mass and the steepest density profile for a given background temperature, making them the most prone to Jeans instability near the core surface. In case of instability, the gravitational collapse would be limited if the contraction of the envelope were adiabatic, and eventually regulated by the efficiency of radiative cooling. \section{Summary and perspectives} We studied the hydrodynamics of embedded planetary envelopes in the regime where the gravity of the gaseous envelope becomes comparable to the gravity of the solid core. We focused on isothermal envelopes and considered three models of increasing complexity: \begin{enumerate} \item a 1D model for spherically symmetric envelopes, helping us investigate the nature of the core-nucleated instability through hydrostatic and hydrodynamic calculations; \item a 2D model for axisymmetric envelopes, allowing a rotationally-supported circumplanetary disk to form by conservation of angular momentum; \item a 3D model including the tidal potential of the star, where the planetary core is embedded in the differentially rotating circumstellar disk, but omitting the vertical stratification of the disk. \end{enumerate} We summarize our main conclusions as follows: \begin{itemize} \item In spherically-symmetric envelopes, the core-nucleated instability corresponds to the absence of equilibrium connecting the core to the ambient conditions in the circumstellar disk. The ensuing reaction can be a contraction or an expansion of the envelope until momentum balance is satisfied again. \item Including rotation, the formation of a circumplanetary disk does not prevent the transition to runaway \edt{(accelerated and unbound)} gas accretion; it only restricts the accretion flow to the polar cones, where the gas has a negligible angular momentum with respect to the core. 
\item In rotationally-supported envelopes, the accelerated accretion phase starts when the envelope mass becomes comparable to the core mass, irrespective of the critical mass computed in 1D models. \item Because most of an isothermal envelope's mass accumulates at the surface of the core, the flow structure is the same as if the gas mass was simply added to the core mass in a non-gravitating medium. The mass of 3D isothermal envelopes saturates at a few core masses due to fragmentation, \edt{preventing their unlimited growth}. \end{itemize} The main shortcoming of this study is the extremely simplified treatment of the gas thermodynamics. Future work should address this issue by progressively accounting for radiative energy transport and gas chemistry, expanding the results of \citet{ayliffe12}. To gain some predictive power, this study should also be repeated by embedding the core within a global disk model. This step is necessary to understand the outcome of runaway gas accretion after the planet opens a gap in the disk \citep{ginzburgchiang19} and the disk eventually disperses \citep{alexander13}. \section*{Acknowledgements} Financial support of this work by the Isaac Newton Trust and the Department of Applied Mathematics and Theoretical Physics is gratefully acknowledged. I thank Richard Booth for his early suggestions, and the anonymous referee for the constructive comments that improved the quality of this paper. The results presented here were obtained before the internship of Sabina Sagynbayeva, whom I co-supervised with Roman R. Rafikov on 1D simulations of self-gravitating isothermal envelopes. I thank both of them for helping me develop more lucidity on this topic. \bibliographystyle{mnras}
\chapter*{Introduction} \addcontentsline{toc}{chapter}{Introduction} The main motivating question of the present thesis is the following: \medskip\begin{center} \emph{What do smooth maps between manifolds look like?} \end{center}\medskip Of course, this question is not at all precise; however, the description of smooth maps is a central problem in differential topology. There are many variants of what we mean by ``look like'' and precisely which sorts of smooth maps we are considering; these variants give rise to quite a few large theories of topology (for example Morse theory or the immersion theory of Hirsch, Smale, Gromov and others). Similarly to them, we will also narrow down the problem of investigating smooth maps to a specific theory in differential topology. When thinking about mapping a manifold to another, probably the first kind of maps that comes to mind is the embeddings. These are very easy to visualise, since a manifold embedded into another one has its manifold structure induced by the subspace topology of the containing manifold. Another relatively simple kind of smooth maps is the immersions, which are just locally embeddings. \begin{figure}[H] \begin{center} \centering\includegraphics[scale=0.1]{kep00} \begin{changemargin}{2cm}{2cm} \caption{\hangindent=1.4cm\small Here we show the images of an embedding and an immersion of the circle into the plane.} \end{changemargin} \vspace{-1.5cm} \end{center} \end{figure} Embeddings and immersions share the property that their differentials have maximal rank at all points; however, generic smooth maps can be more complicated. There are many ways that the rank of the differential of a map can decrease at a point, which is then called a singular point of that map; maps with singular points are called singular maps. 
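As a concrete illustration (a standard textbook example, not taken from this thesis), the fold map $f(x,y)=(x^2,y)$ is singular exactly along the line $x=0$; a small symbolic sketch verifying this:

```python
import sympy as sp

x, y = sp.symbols('x y')

# The classical fold, the simplest singular map R^2 -> R^2.
f = sp.Matrix([x**2, y])
J = f.jacobian([x, y])          # the differential df at (x, y)

# Singular points: where the rank of df drops, i.e. det(df) = 2x vanishes.
assert sp.solve(J.det(), x) == [0]

# The rank is maximal (2) away from x = 0 and drops to 1 on the fold line.
assert J.subs(x, 1).rank() == 2
assert J.subs(x, 0).rank() == 1
```

In the terminology introduced below, the line $\{x=0\}$ is the singular set of $f$ and its complement consists of regular points.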
One way to think about describing general (singular) smooth maps is that the description has the following three levels: \begin{enumerate} \item Local description: This concerns the local forms of map germs, which describe the map in small neighbourhoods of points. Being a singular point of a map is a local property and this level talks about the singularities of maps individually. For a long time this was the main subject of singularity theory; see for example the results of Whitney \cite{wh}, Morin \cite{mor}, Mather \cite{math}, Arnold \cite{arnold} and others. \item Semiglobal description: This investigates the automorphism groups of local forms, which provide an understanding of the map in neighbourhoods of its singularity strata (a stratum is a set of points with equal local forms). For results on this see Jänich \cite{jan}, Wall \cite{wall} and Rimányi \cite{rrthesis}. \item\label{lazasag} Global description: This means describing how the different singularity strata fit together, how the more complicated strata are attached to the simpler strata to form the map. A very important result on this is a universal singular map constructed by Szűcs and Rimányi \cite{rsz} (and preceded by a number of similar results of Szűcs, e.g. \cite{analog}) that spawns a theory of studying singular maps globally. \end{enumerate} We remark here that global singularity theory does not restrict to the line of investigation described above; for example one can also study the Thom polynomials, which stand for the homology classes represented by singularity strata (see Thom \cite{tp}, Fehér, Rimányi \cite{fr}, Kazarian \cite{kazthom} and others). However, the above point \ref{lazasag} is what we will particularly be interested in now. 
\begin{figure}[H] \begin{center} \centering\includegraphics[scale=0.1]{kep01} \begin{changemargin}{2cm}{2cm} \caption{\hangindent=1.4cm\small The grey area represents the image of a torus mapped to the plane and all the curves bounding it represent where the surface was ``folded'' by the mapping; these are the singular points of the map. Note that the inside curve is not smooth at the four ``corner'' points; we will see that the map at these points has a more complicated local form than it has at the rest of the curves.} \end{changemargin} \vspace{-1.5cm} \end{center} \end{figure} Now that we have some understanding about the phrase ``singular maps'', let us get to the other half of the title of this thesis and explain the word ``cobordism''. As usual in mathematics and in particular in topology, we do not want to differentiate between objects that are very similar in some sense, which in topology means that they can be deformed into each other in some way. The theory we are about to investigate considers maps which have singularities from a fixed list $\tau$ and the deformation of one such map to another is given by deforming the source manifold of the first map to that of the second by a higher dimensional manifold (called a cobordism of manifolds) and connecting the two maps by a map of this higher dimensional manifold that also has singularities from $\tau$. This yields an equivalence relation called the cobordism of maps with singularities from $\tau$. \begin{figure}[H] \begin{center} \centering\includegraphics[scale=0.1]{kep02} \begin{changemargin}{2cm}{2cm} \caption{\hangindent=1.4cm\small This ``pair of pants'' is a cobordism between an embedding of one circle into the plane (on the top) and an embedding of two circles into the plane (on the bottom).} \end{changemargin} \vspace{-1.5cm} \end{center} \end{figure} The equivalence classes given by the above relation on singular maps are called cobordism classes. 
The set of cobordism classes of maps to a fixed manifold and with singularities from the list $\tau$ admits a natural group operation; the cobordism theory of singular maps concerns the computation of these groups for various target manifolds and lists of singularities. These groups were and are investigated among others by Koschorke \cite{kosvf}, Ando \cite{ando}, Saeki \cite{saeki}, Ikegami \cite{ike}, Kalmár \cite{kal} and Sadykov \cite{sad}. However, one of the first people to consider them was Szűcs \cite{analog}; the aim of the present thesis is to collect and organise the results towards and on the computation of cobordism groups by his method published in papers of Szűcs, Terpai and other coauthors \cite{rsz}, \cite{cobmor}, \cite{elimcob}, \cite{eszt}, \cite{hosszu}, \cite{2k+2}, \cite{key}, \cite{szszt}, \cite{nulladik}, \cite{nszt}, \cite{nehez}, \cite{hominv} and \cite{ctrl}. This thesis contains no individual results; however, I introduce many notions in a more general setting than the original papers did, which yields definitions and theorems that already existed implicitly in the literature but were never written down. It is my hope that this will help in expanding the theory and obtaining results for cobordism groups with more general structures. \chapter*{Conventions and notations} \addcontentsline{toc}{chapter}{Conventions and notations} Throughout this thesis, when we talk about manifolds, we are in the smooth category, that is, all manifolds and maps are assumed to be $C^\infty$. If we want to indicate the dimension of a manifold, we put it in a superindex (i.e. $M^n$ means that $M$ is a manifold of dimension $n$), but in most cases we omit this index. There will be many times when we use certain transversality conditions without mentioning them; their validity always follows from standard approximation and compactness arguments. For homologies and cohomologies, if we do not indicate the coefficient group, then it is assumed to be $\Z$. 
By fibration we usually mean Serre fibration. \subsection*{Notations}\vspace{-.2cm} \setlength\LTleft{-.2cm} \renewcommand{\arraystretch}{1.3} \begin{longtable}{p{2.9cm}p{11.05cm}} $\R_+$ & non-negative real numbers\\ $\RP^n,\CP^n,\HP^n$ & $n$-dimensional real, complex and quaternionic projective spaces\\ $S^n$ & the sphere $\{x\in\R^{n+1}\mid\lv x\rv=1\}$\\ $D^n$ & the disk $\{x\in\R^n\mid\lv x\rv\le1\}$\\ $\O,\SO,\Spin,\U,\Sp$ & direct limits of the orthogonal groups $\O(n)$, special orthogonal groups $\SO(n)$, spin groups $\Spin(n)$, unitary groups $\U(n)$ and symplectic groups $\Sp(n)$ as $n\to\infty$\\ $*$ & one-point space\\ $X^+$ & the space $X\sqcup*$\\ $\cpt X$ & one-point compactification of the space $X$ (if $X$ is compact, then this is $X^+$)\\ $X\utimes GY$ & factor space of $X\times Y$ by the diagonal $G$-action, i.e. the space $X\times Y/(x,y)\sim(gx,gy)~\forall\,x\in X,y\in Y, g\in G$ (for some fixed actions of the group $G$ on the spaces $X$ and $Y$)\\ $X\usqcup\rho Y$ & the space $Y$ glued to $X$ by $\rho$, i.e. 
the factor space $X\sqcup Y/y\sim\rho(y)$ (where $\rho$ is a map from a subspace of $Y$ to $X$)\\ $CX,SX$ & cone and suspension of the space $X$\\ $PX,\Omega X$ & path space and loop space of the pointed space $X$\\ $\Gamma X$ & the space $\Omega^\infty S^\infty X:=\liminfty{n}\Omega^n S^n X$\\ $\SP X$ & the infinite symmetric product $\liminfty n\underbrace{X\times\ldots\times X}_{n\text{ times}}\big/S_n$\\ $[X]_p$ & $p$-localisation of the space $X$ for a prime $p$\\ $\partial X$ & boundary of $X$\\ $\overline X$ & closure of $X$\\ $\toverset{_\circ} X$ & interior of $X$\\ $\id_X$ & the identity map $X\to X$\\ $\pr_{X_j}$ & the projection $\underset{i\in I}\prod X_i\to X_j$ (for any $j\in I$)\\ $EG\xra GBG$ & universal principal $G$-bundle\\ $\gamma_n^G$ & universal $n$-dimensional vector bundle with structure group $G$; if $G=\O(n)$, then we omit the superindex\\ $\varepsilon^n$ & $n$-dimensional trivial vector bundle (over any base space)\\ $\nu_f$ & virtual normal bundle of the map $f$\\ $D\zeta,S\zeta,T\zeta$ & the unit disk bundle and sphere bundle of the vector bundle $\zeta$ (with respect to any metric) and the Thom space of $\zeta$\\ $\tilde G_\zeta$ & Local coefficient system from the group $G$ twisted by the determinant bundle of the vector bundle $\zeta$\\ $e(\zeta),u(\zeta)$ & Euler and Thom class of the vector bundle $\zeta$\\ $w_i(\zeta),c_i(\zeta),p_i(\zeta)$ & Stiefel--Whitney, Chern and Pontryagin classes of $\zeta$\\ $[X,Y]$ & homotopy classes of maps $X\to Y$\\ $X\nrightarrow Y$ & stable map, i.e. a map $S^nX\to S^nY$ for some $n$\\ $\{X,Y\}$ & stable homotopy classes of maps $X\nrightarrow Y$, i.e. $\liminfty n[S^nX,S^nY]$\\ $\pi^s(n)$ & $n$-th stable homotopy group of spheres, i.e. $\liminfty k\pi_{n+k}(S^k)$\\ $[G]_p$ & $p$-primary part of the Abelian group $G$, i.e. 
the quotient of $G$ by the subgroup of torsion elements of order coprime to $p$\\ $[R]^n$ & degree-$n$ part of the graded ring $R$\\ $S_n$ & symmetric group on $n$ elements\\ $\Z_n$ & cyclic group of $n$ elements\\ $E_{i,j}^n\implies G_{i+j}$ & a spectral sequence with $n$-th page $(E_{i,j}^n)_{i,j\in\Z}$ converges to $G_*$, i.e. $\underset{i+j=k}\bigoplus E^\infty_{i,j}$ is associated to the group $G_k$ for all indices $k$\\ $\CC_\varphi$ & Serre class of groups of order some combination of the primes that satisfy the condition $\varphi$\\ $\congc{}$ & isomorphism modulo the Serre class $\CC$\\ $\cong,\congq$ & homotopy equivalence or isomorphism and rational homotopy equivalence or rational isomorphism\\ $\approx$ & diffeomorphism\\ $df_p$ & differential of the map $f$ in the point $p$\\ $\partial f$ & the restriction $f|_{\partial X}$ of the map $f\colon(X,\partial X)\to(Y,\partial Y)$\\ $f_\#,f_*,f^*$ & homomorphisms induced by the map $f$ in the homotopy groups, homologies and cohomologies\\ $f_!$ & Gysin pushforward in cohomologies induced by the map $f$\\ $\Tp_\eta$ & Thom polynomial of the singularity $\eta$\\ $p_k(n)$ & number of partitions of $n$ to sums of positive integers not greater than $k$\\ $\alpha_k(n)$ & sum of digits of $n$ in $k$-adic system\\ $\nu_p(n)$ & exponent of the highest power of the prime $p$ that divides $n$ \end{longtable} \newpage \chapter{Preliminaries} \pagenumbering{arabic} \section{Singular maps} We will consider cobordisms of maps with fixed stable singularity classes. As the word ``singularity'' is defined in slightly different ways depending on where it is used, we first give the definition we will use. 
\begin{defi}\label{sing} By singularity class (or simply singularity) we mean the equivalence class of a germ $$\eta\colon(\R^c,0)\to(\R^{c+k},0)$$ where two germs are defined to be equivalent if some combination of the following two conditions holds: \begin{itemize} \item[(i)] The germs $\eta$ and $\eta'$ are $\mathscr{A}$-equivalent (also called left-right equivalent). That is, there is a commutative diagram $$\xymatrix{ (\mathbb{R}^c,0)\ar[r]^\eta\ar[d]_\varphi & (\mathbb{R}^{c+k},0)\ar[d]^\psi \\ (\mathbb{R}^c,0)\ar[r]^{\eta'} & (\mathbb{R}^{c+k},0) }$$ where $\varphi$ is the germ of a diffeomorphism of $(\mathbb{R}^c,0)$ and $\psi$ is the germ of a diffeomorphism of $(\mathbb{R}^{c+k},0)$. \item[(ii)] The germ $\eta'$ is the suspension of $\eta$, that is, $$\textstyle\eta'=\eta\times\id_{\mathbb{R}^1}\colon(\mathbb{R}^c\times\mathbb{R}^1,0)\to(\mathbb{R}^{c+k}\times\mathbb{R}^1,0);~(x,t)\mapsto(\eta(x),t).$$ \end{itemize} The singularity class of $\eta$ will be denoted by $[\eta]$. \end{defi} We observe that in any singularity class the dimension $c$ in the germs $(\mathbb{R}^c,0)\to(\mathbb{R}^{c+k},0)$ is not fixed, but the codimension $k$ is fixed. Throughout this thesis we consider singularities of positive codimension (i.e. $k>0$ in the above formula), therefore, if not specified otherwise, the codimensions will always be assumed to be positive. Moreover, we will always assume that the germs are stable in the sense that a small perturbation in the space of $C^\infty$-maps does not change their type (an exact definition of stable germs can be found for example in the beginning of \cite{rrthesis}, see also \cite{math}). \begin{defi} We say that the germ $\eta\colon(\mathbb{R}^c,0)\to(\mathbb{R}^{c+k},0)$ has an isolated singularity at the origin if there is a neighbourhood $U$ of the origin such that at no point in $U\setminus\{0\}$ the germ of $\eta$ is equivalent to that at the origin. Such an $\eta$ is called the root of the singularity class $[\eta]$. 
\end{defi} \begin{rmk} The root of a singularity is also characterised by having the smallest possible dimension $c$ for any germ $(\mathbb{R}^c,0)\to(\mathbb{R}^{c+k},0)$ in its class. \end{rmk} Now we define the singularities of a map between manifolds by looking at the germs of this map at each point of the source (where the map is locally identified with a map $(\mathbb{R}^n,0)\to(\mathbb{R}^{n+k},0)$). \begin{defi} For a smooth map $f\colon M^n\to P^{n+k}$ a point $p\in M$ is called an $[\eta]$-point (or simply $\eta$-point) if the germ of $f$ at $p$ is equivalent to the germ $\eta\colon(\mathbb{R}^c,0)\to(\mathbb{R}^{c+k},0)$. The set of $\eta$-points in $M$ is denoted by $\eta(f)$. \end{defi} \begin{prop} If $\eta\colon(\mathbb{R}^c,0)\to(\mathbb{R}^{c+k},0)$ is the root of the singularity class $[\eta]$, then for any map $f\colon M^n\to P^{n+k}$ the set $\eta(f)\subset M$ is a submanifold of dimension $n-c$. \end{prop} \begin{prf} For any point $p\in\eta(f)$, the map $f$ restricted to a small neighbourhood of $p$ is such that there is a commutative diagram $$\xymatrix{ (U,p)\ar[r]^{f|_U}\ar[d]_\varphi & (V,f(p))\ar[d]^\psi \\ (\mathbb{R}^n,0)\ar[r]^{\eta'} & (\mathbb{R}^{n+k},0) }$$ where $U\subset M^n$, $V\subset P^{n+k}$, $\varphi$ and $\psi$ are diffeomorphisms and $\eta'$ is the $(n-c)$-times suspension of $\eta$. This means $\eta'=\eta\times\id_{\mathbb{R}^{n-c}}\colon(\mathbb{R}^c\times\mathbb{R}^{n-c},0)\to(\mathbb{R}^{c+k}\times\mathbb{R}^{n-c},0)$, hence all points of the factor $\mathbb{R}^{n-c}$ in the domain are $\eta$-points and there are no other $\eta$-points in a neighbourhood of $\mathbb{R}^{n-c}$. Therefore the neighbourhood $U\subset M$ of $p$ can be chosen so that $\eta(f)\cap U=\varphi^{-1}(\mathbb{R}^{n-c})$. We can of course apply this construction for all points $p\in\eta(f)$, which gives an $(n-c)$-manifold structure on $\eta(f)$. 
\end{prf} \begin{defi}\label{taumap} Given a set $\tau$ of singularities of a fixed codimension $k$, we say that a map $f\colon M^n\to P^{n+k}$ is a $\tau$-map if the germ of $f$ at any point of $M$ belongs to a singularity class in $\tau$. If $M$ is a manifold with boundary, then we assume the following extra property for any $\tau$-map: The target manifold $P$ also has a boundary and $f$ maps $\partial M$ to $\partial P$, moreover, the boundaries have collar neighbourhoods $\partial M\times[0,\varepsilon)$ and $\partial P\times[0,\varepsilon)$ such that $$\textstyle f|_{\partial M\times[0,\varepsilon)}=g\times\id_{[0,\varepsilon)}\colon\partial M\times[0,\varepsilon)\to\partial P\times[0,\varepsilon)$$ for a $\tau$-map $g\colon\partial M\to\partial P$. \end{defi} \subsection{Singularity strata}\label{strata} An important categorisation of singularities is the so-called Thom--Boardman type defined as follows. \begin{defi}\label{thomboar} A singularity class $[\eta]$ is of type $\Sigma^i$ (for $i=0,1,\ldots$) if the rank of the differential of the germ $\eta\colon(\mathbb{R}^c,0)\to(\mathbb{R}^{c+k},0)$ at the origin drops by $i$, that is $\rk d\eta_0=c-i$. Singularities of type $\Sigma^1$ are also called Morin singularities. \end{defi} \begin{rmk} Clearly the Thom--Boardman type does not depend on the choice of representative of $[\eta]$ and the $i$ above can be at most the dimension of the source manifold of the root of $[\eta]$. \end{rmk} Now we introduce the Thom--Boardman stratification; for the proofs see \cite{boar}. \begin{defi} For a map $f\colon M^n\to P^{n+k}$ we define for $i=0,\ldots,n$ $$\Sigma^i(f):=\{p\in M^n\mid\rk df_p=n-i\}$$ the set of points where the singularity of $f$ is of type $\Sigma^i$. We call the points of $\Sigma^0(f)$ regular and the points of $\Sigma(f):=\Sigma^1(f)\cup\ldots\cup\Sigma^n(f)$ singular. 
\end{defi} This way we get a stratification of the source manifold $$M=\Sigma^0(f)\cup\ldots\cup\Sigma^n(f).$$ By restricting $f$ to the strata we get further stratifications with $\Sigma^{i,j}(f):=\Sigma^j(f|_{\Sigma^i(f)})$ and we can iterate this process to obtain $\Sigma^{i,j,k}(f)$, $\Sigma^{i,j,k,l}(f)$, etc. On the set of stable singularities with a fixed codimension $k$ a natural partial order arises in the following way. \begin{defi}\label{pos} For two singularity classes $[\eta]\ne[\vartheta]$ of codimension $k$, we call $[\eta]$ more complicated than $[\vartheta]$ if in any neighbourhood of any $\eta$-point there is a $\vartheta$-point. We denote this relation by $[\eta]>[\vartheta]$ and we put $\partial[\eta]:=\{[\vartheta]<[\eta]\}$. \end{defi} Throughout this thesis we will use the technical assumption that for any singularity $[\eta]$ any decreasing chain in $\partial[\eta]$ is finite (i.e. the partial order is well-founded). \begin{rmk} If $f\colon M^n\to P^{n+k}$ is a $\tau$-map (for some set of singularities $\tau$) and $[\eta]\in\tau$, then we also have $[\vartheta]\in\tau$ for all singularities $[\vartheta]<[\eta]$. Hence from now on we will assume all singularity sets to be decreasing. \end{rmk} \begin{ex} \begin{itemize} \item[] \item[(1)] The lowest ``singularity'' is $\Sigma^0$, the class of germs of regular (non-singular) maps. \item[(2)] Fold singularity (i.e. singularity of type $\Sigma^{1,0}$) is lower than cusp singularity (i.e. that of type $\Sigma^{1,1,0}$). \end{itemize} \end{ex} \begin{figure}[H] \begin{center} \centering\includegraphics[scale=0.1]{kep1} \begin{changemargin}{2cm}{2cm} \caption{\hangindent=1.4cm\small Here we show a cusp germ $(\mathbb{R}^2,0)\to(\mathbb{R}^2,0)$ (with one cusp point). 
The two indicated curves are the fold points and everything else is regular.}\label{kep1} \end{changemargin} \vspace{-1.5cm} \end{center} \end{figure} Now if we have again a map $f\colon M^n\to P^{n+k}$, then $\eta(f)\subset\overline{\vartheta(f)}$ whenever $[\eta]>[\vartheta]$ and $M$ is the union of submanifolds of the form $\eta(f)$ (where the $\eta$ are the singularities of $f$). This way we get another stratification of $M$ with strata $\eta(f)$ that refines the Thom--Boardman stratification. \begin{ex}\label{morsing} If the $f$ above is a Morin map (i.e. it only has singularities of types $\Sigma^0$ and $\Sigma^1$), then the singularity strata are $\Sigma^0(f)$ (regular points), $\Sigma^{1,0}(f)$ (fold points), $\Sigma^{1,1,0}(f)$ (cusp points), etc. The symbol $\Sigma^{1,\ldots,1,0}$ where the number of $1$-s is $r$ will be shortened to $\Sigma^{1_r}$. The above stratification follows from the fact that each type $\Sigma^{1_r}$ only contains one singularity class and if $r\ne s$, then the singularity classes $\Sigma^{1_r}$ and $\Sigma^{1_s}$ are different (see \cite{mor}) and Morin singularities form a single increasing sequence $\Sigma^{1,0}<\Sigma^{1,1,0}<\ldots$ \end{ex} \subsection{Multisingularities} Until this point we only considered local restrictions on maps (the singularities of $f$ only describe how $f$ behaves locally) but later on we will sometimes also need global restrictions. This gives rise to the notion of multisingularities which describe how many points of each singularity type a map can have. \begin{defi} A multisingularity is a formal sum of singularities (of a fixed codimension) $m_1[\eta_1]+\ldots+m_r[\eta_r]$ with coefficients $m_1,\ldots,m_r\in\mathbb{N}$. A multisingularity of the form $1[\eta]$ will be called monosingularity. 
\end{defi} In order to distinguish the notation of multisingularities and the notation of singularities (with no prescribed multiplicities), we will typically use the convention to underline letters that denote multisingularities or sets of multisingularities. \begin{defi} Let $\underline\eta=m_1[\eta_1]+\ldots+m_r[\eta_r]$ be a multisingularity. For a smooth map $f\colon M^n\to P^{n+k}$ a point $p\in M$ is called an $\underline\eta$-point if $f^{-1}(f(p))$ consists of $m_1+\ldots+m_r$ points and out of all germs of $f$ at these points $m_i$ are equivalent to the germ $\eta_i$ (for $i=1,\ldots,r$). The set of $\underline\eta$-points in $M$ is denoted by $\underline\eta(f)$. \end{defi} The following are direct analogues of the notions we introduced earlier. \begin{defi} Given a set $\underline\tau$ of multisingularities of a fixed codimension $k$, we say that a map $f\colon M^n\to P^{n+k}$ is a $\underline\tau$-map if each point of $M$ belongs to $\underline\eta(f)$ for some $\underline\eta\in\underline\tau$. For manifolds with boundaries we have the analogous extra condition as in definition \ref{taumap}. \end{defi} \begin{ex} The $\{1\Sigma^0\}$-maps are the embeddings, $\{1\Sigma^0,2\Sigma^0\}$-maps are the immersions with no triple points, etc. The $\{1\Sigma^0,2\Sigma^0,\ldots\}$-maps are all immersions, which is the same as $\{\Sigma^0\}$-maps (where no multiplicity is given). \end{ex} \begin{rmk} If $\tau$ is a set of singularities and $\underline\tau$ is the set of all possible formal sums of the elements of $\tau$ (with coefficients in $\mathbb{N}$), then $\underline\tau$-maps are the same as $\tau$-maps. \end{rmk} \begin{defi}\label{poms} For two multisingularities $\underline\eta\ne\underline\vartheta$ of codimension $k$, we call $\underline\eta$ more complicated than $\underline\vartheta$ if in any neighbourhood of any $\underline\eta$-point there is a $\underline\vartheta$-point. 
We denote this relation by $\underline\eta>\underline\vartheta$ and we put $\partial\underline\eta:=\{\underline\vartheta<\underline\eta\}$. \end{defi} \begin{rmk} \begin{itemize} \item[] \item[(1)] We have $\underline\eta\ge\underline\vartheta$ whenever $\underline\eta=m_1[\eta_1]+\ldots+m_r[\eta_r],\underline\vartheta=n_1[\vartheta_1]+\ldots+n_r[\vartheta_r]$ and $m_i\ge n_i,[\eta_i]\ge[\vartheta_i]$ (for $i=1,\ldots,r$). \item[(2)] If $f\colon M^n\to P^{n+k}$ is a $\underline\tau$-map (for some set of multisingularities $\underline\tau$) and $\underline\eta\in\underline\tau$, then we also have $\underline\vartheta\in\underline\tau$ for all multisingularities $\underline\vartheta<\underline\eta$. Hence from now on we will assume all multisingularity sets to be decreasing. \item[(3)] For a map $f\colon M^n\to P^{n+k}$ the sets of points of the same multisingularities stratify the manifold $M$ in the same way as the singularities (with no prescribed multiplicities) do. This stratification refines the singularity stratification. \end{itemize} \end{rmk} \begin{figure}[H] \begin{center} \centering\includegraphics[scale=0.1]{kep2}\label{kep2} \begin{changemargin}{2cm}{2cm} \caption{\hangindent=1.4cm\small Here we show the same germ $(\mathbb{R}^2,0)\to(\mathbb{R}^2,0)$ as earlier, but with all multisingularities indicated.} \end{changemargin} \vspace{-1.5cm} \end{center} \end{figure} \section{Semiglobal description of singular maps}\label{semiglob} In this section we will investigate how a singular map behaves in a small neighbourhood of a (multi)singularity stratum. This is not a central question in the present thesis, therefore the proofs will be omitted here. \begin{defi}\label{A} Put $\mathscr{A}:=\Diff_0(\mathbb{R}^c)\times\Diff_0(\mathbb{R}^{c+k})$ where $\Diff_0(\mathbb{R}^n)$ denotes the group of diffeomorphism germs of $(\mathbb{R}^n,0)$. 
We define the (left) action of the group $\mathscr{A}$ on the set of stable germs $\eta\colon(\mathbb{R}^c,0)\to(\mathbb{R}^{c+k},0)$ by the formula $$(\varphi,\psi)\colon\eta\mapsto\psi\circ\eta\circ\varphi^{-1}~~~~(\varphi,\psi)\in\mathscr{A}.$$ \end{defi} \begin{defi}\label{orbstab} The orbits and stabilisers of the above $\mathscr{A}$-action also have the following names: \begin{itemize} \item[(1)] Two germs $(\mathbb{R}^c,0)\to(\mathbb{R}^{c+k},0)$ are called $\mathscr{A}$-equivalent if they are in the same $\mathscr{A}$-orbit (see definition \ref{sing}). \item[(2)] The stabiliser of the germ $\eta\colon\mathbb{R}^c\to\mathbb{R}^{c+k}$ is called the automorphism group of $\eta$, that is $$\Aut_\mathscr{A}\eta:=\{(\varphi,\psi)\in\mathscr{A}\mid\psi\circ\eta=\eta\circ\varphi\}.$$ \end{itemize} \end{defi} Let us fix a singularity $[\eta]$ where $\eta\colon\mathbb{R}^c\to\mathbb{R}^{c+k}$ is the root of its class. In order to investigate how a map behaves around an $[\eta]$-stratum we need some knowledge of $\Aut_\mathscr{A}\eta$. This group does not admit any convenient topology; however, it still shares some properties with Lie groups, which we will now introduce. \begin{defi}\label{cpct} A subgroup of $\Aut_\mathscr{A}\eta$ is called compact if it is conjugate in $\mathscr{A}$ to a compact linear group. \end{defi} The following important theorem is due to Jänich \cite{jan} and Wall \cite{wall}; we will not prove it here. \begin{thm}\label{janwall} Any compact subgroup of $\Aut_\mathscr{A}\eta$ is contained in a maximal compact subgroup and any two maximal compact subgroups are conjugate in $\Aut_\mathscr{A}\eta$. \end{thm} We will denote a maximal compact subgroup of $\Aut_\mathscr{A}\eta$ by $G_\eta$.
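As a simple illustration (a standard computation), this group can be described explicitly in the most basic case.

\begin{ex}
Let $\eta\colon(\mathbb{R}^0,0)\to(\mathbb{R}^k,0)$ be the root of the class $\Sigma^0$ of regular germs. Here $\Diff_0(\mathbb{R}^0)$ is trivial and the condition $\psi\circ\eta=\eta\circ\varphi$ holds for every germ $\psi\in\Diff_0(\mathbb{R}^k)$, hence $\Aut_\mathscr{A}\eta\cong\Diff_0(\mathbb{R}^k)$ and we can choose the maximal compact subgroup $G_\eta=\O(k)$ acting linearly on the target space.
\end{ex}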
Then $G_\eta$ has natural representations $\lambda$ and $\tilde\lambda$ on the source and target spaces respectively, defined by $$\begin{array}{l} \lambda(\varphi,\psi):=\varphi\colon(\mathbb{R}^c,0)\to(\mathbb{R}^c,0)\\ \tilde\lambda(\varphi,\psi):=\psi\colon(\mathbb{R}^{c+k},0)\to(\mathbb{R}^{c+k},0) \end{array} ~~~~(\varphi,\psi)\in G_\eta.$$ By possibly choosing another representative in the $\mathscr{A}$-equivalence class of $\eta$, we can assume that the images of $\lambda$ and $\tilde\lambda$ are subgroups of $\O(c)$ and $\O(c+k)$ respectively. \begin{defi}\label{phieta} Denote the universal principal $G_\eta$-bundle by $EG_\eta\xrightarrow{~G_\eta~}BG_\eta$. Using the above representations we associate to them the vector bundles $$\xi_\eta:=EG_\eta\tightunderset{\lambda}\times\mathbb{R}^c~~~~\text{and}~~~~\tilde\xi_\eta:=EG_\eta\tightunderset{\tilde\lambda}\times\mathbb{R}^{c+k}.$$ We define the fibrewise (but generally not linear) map $\Phi_\eta\colon\xi_\eta\to\tilde\xi_\eta$ using the following diagram (where we obtain the inner square by factoring the outer square by the $G_\eta$-action): $$\xymatrix@R=.333pc{ EG_\eta\times\mathbb{R}^c\ar[rrr]^{\id_{EG_\eta}\times\eta}\ar[dr]^(.65)\lambda\ar[ddddd]_{\pr_{EG_\eta}}&&&EG_\eta\times\mathbb{R}^{c+k}\ar[dl]_(.65){\tilde\lambda}\ar[ddddd]^{\pr_{EG_\eta}}\\ &\xi_\eta\ar@{-->}[r]^{\Phi_\eta}\ar[ddd]&\tilde\xi_\eta\ar[ddd]&\\ \\ \\ &BG_\eta\ar[r]^{\id_{BG_\eta}}&BG_\eta&\\ EG_\eta\ar[ur]^{G_\eta}\ar[rrr]_{\id_{EG_\eta}}&&&EG_\eta\ar[ul]_{G_\eta} }$$ Here $\eta$ is invariant under the $G_\eta$-action, therefore $\Phi_\eta$ is well-defined and its restriction to any fibre is $\mathscr{A}$-equivalent to $\eta$. This is called the global normal form of $[\eta]$. \end{defi} \begin{rmk} Actually $\id_{EG_\eta}\times\eta$ and therefore also $\Phi_\eta$ is only defined on a neighbourhood of the zero-section, but we can assume (for simplicity) that this neighbourhood is the whole total space. 
\end{rmk} The map $\Phi_\eta$ is the key to the semiglobal description of a map around the $1[\eta]$-stratum because of the following property. This is the main theorem of this section, but again we will not give a proof here; for the proof see \cite{rrthesis} or \cite{rsz}. \begin{thm}\label{univ} If the map $f\colon M^n\to P^{n+k}$ restricted to the $\eta$-stratum is an embedding (i.e. $\eta(f)=1[\eta](f)$), then there are tubular neighbourhoods $\eta(f)\subset T\subset M$ and $f(\eta(f))\subset\tilde T\subset P$ of the $\eta$-stratum and its image such that there is a commutative diagram of fibrewise isomorphisms (the vertical arrows are the projections) $$\xymatrix@R=.333pc{ \xi_\eta\ar[rrr]^{\Phi_\eta}\ar[ddddd]&&&\tilde\xi_\eta\ar[ddddd]\\ &T\ar[r]^{f|_T}\ar[ul]\ar[ddd]&\tilde T\ar[ur]\ar[ddd]&\\ \\ \\ &\eta(f)\ar[r]^{f|_{\eta(f)}}\ar[dl]&f(\eta(f))\ar[dr]&\\ BG_\eta\ar[rrr]_{\id_{BG_\eta}}&&&BG_\eta }$$ \end{thm} In other words this theorem states that $\Phi_\eta\colon\xi_\eta\to\tilde\xi_\eta$ is the universal object for the maps of tubular neighbourhoods of the $1[\eta]$-stratum into those of its image. This theorem has an analogue for more complicated multisingularities as we will see in the next subsection. Although the proof is omitted here, we remark that it is based on the contractibility of the quotient $\Aut_\mathscr{A}\eta/G_\eta$ (in a generalised sense) which is the result of Rimányi \cite{rrthesis}. This is again a property of $\Aut_\mathscr{A}\eta$ common with Lie groups. \subsection{Automorphism groups of multisingularities} The previous constructions show how we can give a semiglobal description of a map around a monosingularity stratum. Now we will see that everything works similarly if we take a multisingularity stratum instead. Fix a multisingularity $\underline\eta=m_1[\eta_1]+\ldots+m_r[\eta_r]$ where $m_i\in\N$ and $\eta_i\colon(\mathbb{R}^{c_i},0)\to(\mathbb{R}^{c_i+k},0)$ is the root of its class (for $i=1,\ldots,r$). 
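As a guiding example (a standard local model) we first describe the simplest multiple-point multisingularity.

\begin{ex}
Let $\underline\eta=2[\Sigma^0]$ be the double point in codimension $k$. It is represented by the multigerm at a two-point set $S=\{p_1,p_2\}\subset\mathbb{R}^k$ mapping a neighbourhood of $p_1$ diffeomorphically into $\mathbb{R}^k\times\{0\}\subset\mathbb{R}^{2k}$ and a neighbourhood of $p_2$ diffeomorphically into $\{0\}\times\mathbb{R}^k\subset\mathbb{R}^{2k}$ (with both $p_1$ and $p_2$ sent to the origin), that is, by two transversally intersecting copies of $\mathbb{R}^k$ in $\mathbb{R}^{2k}$.
\end{ex}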
Observe that $\underline\eta$ can be identified with the equivalence class of a germ $$\eta_I\colon(\mathbb{R}^{\sum_{i=1}^rm_ic_i+\left(\sum_{i=1}^rm_i-1\right)k},S)\to(\mathbb{R}^{\sum_{i=1}^rm_ic_i+\sum_{i=1}^rm_ik},0)$$ where $S\subset\mathbb{R}^{\sum_{i=1}^rm_ic_i+\left(\sum_{i=1}^rm_i-1\right)k}$ contains $m_1+\ldots+m_r$ points and at $m_i$ of these points the germ $\eta_I$ is equivalent to the germ $\eta_i$ (for $i=1,\ldots,r$). Now we can define the analogues of the notions we had for monosingularities. To make notation simpler we put $c:=\overset{r}{\underset{i=1}\sum}m_ic_i+\left(\overset{r}{\underset{i=1}\sum}m_i-1\right)k$. If we define $\mathscr{A}:=\Diff_S(\mathbb{R}^c)\times\Diff_0(\mathbb{R}^{c+k})$ where $\Diff_S(\mathbb{R}^c)$ denotes the group of germs $(\mathbb{R}^c,S)\to(\mathbb{R}^c,S)$ that are diffeomorphisms near the points of $S$, then definitions \ref{A}, \ref{orbstab} and \ref{cpct} are the same for $\eta_I$. Then $\underline\eta$ is the equivalence class of $\eta_I$ in the equivalence relation generated by $\mathscr{A}$-equivalence and suspension. Theorem \ref{janwall} is also true in this setting and the analogous maximal compact subgroup $G_{\underline\eta}$ of the automorphism group also has representations on the source and target spaces. This way we can define the vector bundles $\xi_{\underline\eta}$ and $\tilde\xi_{\underline\eta}$ and the fibrewise map $\Phi_{\underline\eta}\colon\xi_{\underline\eta}\to\tilde\xi_{\underline\eta}$ as earlier, and the analogue of theorem \ref{univ} is also true. A more detailed description of this can be found in \cite{rrthesis}. \section{Cobordism groups}\label{cobsec} Now we have arrived at the point where we can begin the global description of singular maps, which is the main subject of this thesis.
We will define everything for singularities with no prescribed multiplicities, but we refer to remark \ref{singmulti}: Almost everything in the following two sections has a direct analogue for multisingularities. \begin{defi}\label{cobtau} Let $\tau$ be a set of singularities of a fixed codimension $k$. We call two $\tau$-maps $f_0\colon M_0^n\to P^{n+k}$ and $f_1\colon M_1^n\to P^{n+k}$ (with closed source manifolds $M_0$ and $M_1$) $\tau$-cobordant if there is \begin{itemize} \item[(i)] a compact manifold with boundary $W^{n+1}$ such that $\partial W=M_0\sqcup M_1$, \item[(ii)] a $\tau$-map $F\colon W^{n+1}\to P\times[0,1]$ such that for $i=0,1$ we have $F^{-1}(P\times\{i\})=M_i$ and $F|_{M_i}=f_i$. \end{itemize} The set of $\tau$-cobordism classes of $\tau$-maps to the manifold $P^{n+k}$ will be denoted by $\Cob_\tau(n,P^{n+k})$, and in the case $P^{n+k}=\mathbb{R}^{n+k}$ we abbreviate this notation to $\Cob_\tau(n,k)$. \end{defi} We will use the following conventions throughout this thesis: \begin{enumerate} \item\label{conv1} Observe that putting the $n$ in the notation of $\Cob_\tau(n,P^{n+k})$ is a bit redundant, as the codimension $k$ is fixed in $\tau$, so only $n$-manifolds can have $\tau$-maps to $(n+k)$-manifolds. Therefore in most cases we will omit it from the notation and put $\Cob_\tau(P^{n+k}):=\Cob_\tau(n,P^{n+k})$. \item\label{conv2} A large part of this thesis consists of the investigation of Morin maps (see definition \ref{thomboar} and example \ref{morsing}), therefore we will use the following abbreviation: $$\Cob_{\{\Sigma^0,\Sigma^{1_1},\ldots,\Sigma^{1_r}\}}(n,P)=:\Cob_r(n,P).$$ Here $r=\infty$ is also allowed, the symbol $\Cob_\infty(n,P)$ denotes the cobordism group of all Morin maps. \item\label{conv3} We will always denote the cobordism relations by $\sim$ and the cobordism class of a map $f$ by $[f]$; the context will make it clear which cobordism relation they stand for. 
\end{enumerate} The aim of this thesis is to collect some of the more recent computations concerning the cobordism groups $\Cob_\tau(P)$ with various restrictions and extra conditions (the group structure will be defined in the next subsection). \subsection{The group operation}\label{grp} Fix a set of singularities $\tau$ of a fixed codimension $k$ and a manifold $P^{n+k}$. The cobordism set $\Cob_\tau(P)$ trivially admits a commutative semigroup operation given by disjoint union: If $f_0\colon M_0\to P$ and $f_1\colon M_1\to P$ are $\tau$-maps, then $$f_0\sqcup f_1\colon M_0\sqcup M_1\to P$$ is also a $\tau$-map and $[f_0]+[f_1]:=[f_0\sqcup f_1]$ is well-defined. This operation also has a null element represented by the empty map. In order to make this an Abelian group structure, we have to prove that all elements of $\Cob_\tau(P)$ have inverses. We fix a $\tau$-map $f\colon M^n\to P^{n+k}$ and look for the inverse of $[f]$. \begin{prop}\label{inv} If $P=N\times\mathbb{R}^1$ for a manifold $N^{n+k-1}$, then $[f]$ has an inverse in $\Cob_\tau(P)$. \end{prop} \begin{prf} Let $\rho\colon N\times\mathbb{R}^1\to N\times\mathbb{R}^1$ denote the reflection in the hypersurface $N\times\{t\}$ for some $t\in\mathbb{R}^1$. Then the composition $\rho\circ f$ is such that $f\sqcup(\rho\circ f)\sim\varnothing$, since the manifold $M\times[0,1]$ can be mapped by a $\tau$-map onto the trace of the rotation of the image of $f$ to the image of $\rho\circ f$ around $N\times\{t\}$ in $P\times\mathbb{R}_+\approx P\times[0,1)$. Hence $[\rho\circ f]$ is the inverse of $[f]$.
\end{prf} \begin{figure}[H] \begin{center} \centering\includegraphics[scale=0.1]{kep3}\label{kep3} \begin{changemargin}{2cm}{2cm} \caption{\hangindent=1.4cm\small We indicated the rotation of the image of $f$ to the image of $\rho\circ f$ around $N\times\{t\}$ in $P\times\mathbb{R}^1_+=N\times\mathbb{R}^1\times\mathbb{R}^1_+$.} \end{changemargin} \vspace{-1.3cm} \end{center} \end{figure} If $P$ is an arbitrary manifold, then we need the following notion. \begin{defi}\label{lframed} An $l$-framed $\tau$-map from a manifold $M^n$ to a manifold $Q^{n+k+l}$ is the germ along $M\approx M\times\{0\}$ of a map $$\tilde f\colon M^n\times\mathbb{R}^l\to Q^{n+k+l}$$ such that for all points $p\in M$ there are coordinate neighbourhoods $(p,0)\in U\times\tightoverset{_\circ}{D^l_\varepsilon}\subset M\times\mathbb{R}^l$ and $\tilde f(p,0)\in V\subset Q$ of the point $(p,0)\in M\times\mathbb{R}^l$ and its image, with the following property: $U\approx\mathbb{R}^n,V\approx\mathbb{R}^{n+k}\times\tightoverset{_\circ}{D^l_\varepsilon}$ and with these identifications we have $$\textstyle\tilde f|_{U\times\toverset{_\circ}{D^l_\varepsilon}}=g\times\id_{\toverset{_\circ}{D^l_\varepsilon}}\colon\mathbb{R}^n\times\tightoverset{_\circ}{D^l_\varepsilon}\to\mathbb{R}^{n+k}\times\tightoverset{_\circ}{D^l_\varepsilon}$$ for a $\tau$-map $g\colon\mathbb{R}^n\to\mathbb{R}^{n+k}$. \end{defi} \begin{ex} An $l$-framed $\{\Sigma^0\}$-map is an immersion with $l$ pointwise independent normal vector fields. \end{ex} The cobordism set of $l$-framed $\tau$-maps to the manifold $Q^{n+k+l}$ can be defined by an obvious modification of the definition of that of $\tau$-maps (\ref{cobtau}). This will be denoted by $\Cob_{\tau\oplus l}(n,Q^{n+k+l})$ or by $\Cob_{\tau\oplus l}(Q^{n+k+l})$.
\begin{thm}\label{susp} For any $l\in\mathbb{N}$, if we assign to a $\tau$-map $f\colon M^n\to P^{n+k}$ the germ of the map $f\times\id_{\mathbb{R}^l}$ along $M$, we get a bijective correspondence $$\Cob_\tau(n,P^{n+k})\to\Cob_{\tau\oplus l}(n,P^{n+k}\times\mathbb{R}^l).$$ \end{thm} The proof of this theorem relies on the notion of $\tau$-embeddings (see section \ref{szpt}), which does not fit the topic of the present section. Therefore we will only prove it later, in remark \ref{tauembrmk}. Now the cobordism class $[f]$ corresponds to its suspension in the cobordism set $\Cob_{\tau\oplus 1}(P\times\mathbb{R}^1)$, where the target manifold is of the form we needed in proposition \ref{inv}. Therefore the inverse element exists in $\Cob_{\tau\oplus 1}(P\times\mathbb{R}^1)$, so it exists in $\Cob_\tau(P)$ too. Moreover, the construction of the inverse of $[f]$ will also be clear from the proof in subsection \ref{tauemb}. Hence we have obtained the following. \begin{crly} $\Cob_\tau(P)$ is an Abelian group with the disjoint union operation. \end{crly} \subsection{Cobordisms with additional structures} \begin{defi} Let $L=\underset{k\to\infty}\lim L(k)$ be a stable linear group (see \cite[8.2]{stabgr}). We say that a map $f\colon M^n\to P^{n+k}$ is equipped with a normal $L$-structure if the virtual normal bundle of $f$ has structure group $L$. \end{defi} \begin{rmk} We can define the global normal form of a multisingularity $\underline\eta$ for maps with normal $L$-structures similarly to section \ref{semiglob}. Recall that the maximal compact subgroup $G_{\underline\eta}$ was a subgroup of $\O(c)\times\O(c+k)$, hence we can take $$G_{\underline\eta}^L:=\big\{(A,B)\in G_{\underline\eta}\mid\big(\begin{smallmatrix} A&0\\ 0&B \end{smallmatrix}\big)\in L(2c+k)\big\}.$$ The corresponding fibrewise map between the universal $G_{\underline\eta}^L$-bundles will be denoted by $\Phi_{\underline\eta}^L\colon\xi_{\underline\eta}^L\to\tilde\xi_{\underline\eta}^L$.
This way the virtual bundle $\tilde\xi_{\underline\eta}^L-\xi_{\underline\eta}^L$ has structure group $L$, so any virtual bundle that we get as a pullback of $\tilde\xi_{\underline\eta}^L-\xi_{\underline\eta}^L$ has it too. Now the map $\Phi_{\underline\eta}^L$ has the same universal property for the maps with normal $L$-structures around an $\underline\eta$-stratum as described in theorem \ref{univ}. \end{rmk} \begin{defi}\label{taug} Let $\tau$ be a set of singularities of a fixed codimension $k$ and $L$ a stable linear group. The $(\tau,L)$-cobordism of two $\tau$-maps $f_0$ and $f_1$ equipped with normal $L$-structures is the same as in definition \ref{cobtau} with the extra condition that the map $F$, which joins $f_0$ and $f_1$, also has a normal $L$-structure which matches that of $f_0$ and $f_1$. The set of $(\tau,L)$-cobordism classes to the manifold $P^{n+k}$ will be denoted by $\Cob_\tau^L(n,P^{n+k})$. \end{defi} \begin{rmk} The normal $L$-structure can be added to every notion in the previous subsection, hence $\Cob_\tau^L(n,P)$ is also a group. \end{rmk} In the remaining part of the present section we will describe those additional structures to cobordism groups which we will use throughout this thesis. \begin{ex} \begin{itemize} \item[] \item[(1)] If $L(k)=\O(k)$, then $\Cob_\tau^L(P)=\Cob_\tau(P)$ is the unoriented $\tau$-cobordism group. \item[(2)] If $L(k)=\SO(k)$, then $\Cob_\tau^L(P)=\Cob_\tau^{\SO}(P)$ is the cooriented $\tau$-cobordism group. \item[(3)] If $L(k)=\Spin(k)$, then $\Cob_\tau^L(P)=\Cob_\tau^{\Spin}(P)$ is the cobordism group of $\tau$-maps equipped with spin normal structures. \end{itemize} \end{ex} \begin{rmk} In the latter two examples above, if the manifold $P$ is oriented (resp. spin), then an orientation (resp. spin structure) of the virtual normal bundle of a map $f\colon M\to P$ is equivalent to an orientation (resp. spin structure) of the tangent bundle $TM$ and the same is true for maps to $P\times[0,1]$. 
Hence for an oriented (resp. spin) $P$ the group $\Cob_\tau^{\SO}(P)$ (resp. $\Cob_\tau^{\Spin}(P)$) is the cobordism group of $\tau$-maps of oriented (resp. spin) manifolds to $P$. \end{rmk} \begin{defi} A map between manifolds $f\colon M\to P$ is called prim (projected immersion) if it is the composition of an immersion $i_f\colon M\looparrowright P\times\mathbb{R}^1$ and the projection $\pr_P\colon P\times\mathbb{R}^1\to P$. \end{defi} \begin{rmk}\label{kerbundle} Observe that a prim map $f$ is always a Morin map (i.e. $\dim\ker df\le1$) and the line bundle $\ker df|_{\Sigma(f)}$ over the set of singular points is trivialised. It is not hard to see that the converse is also true: A Morin map equipped with a trivialisation of its kernel line bundle is a prim map. \end{rmk} \begin{defi}\label{primcob} Given a set $\tau$ of Morin singularities of a fixed codimension $k$ and a stable linear group $L$, the cobordism of prim $\tau$-maps to a manifold $P^{n+k}$ equipped with normal $L$-structures is the analogue of definition \ref{taug} for prim maps. The set (and group) of these cobordism classes will be denoted by $\Prim_\tau^L(n,P^{n+k})$. \end{defi} \begin{rmk}\label{primrmk} We can give analogous definitions for the cobordisms of those prim $\tau$-maps $f\colon M\to P$ for which the normal bundle of the immersion $i_f\colon M\looparrowright P\times\mathbb{R}^1$ is equipped with a complex or quaternionic structure; we will denote these cobordism groups by $\Prim_\tau^{\U}(P)$ and $\Prim_\tau^{\Sp}(P)$ respectively. \end{rmk} We will use the same conventions for the notations of cobordisms with additional structures as described after definition \ref{cobtau}. \begin{comment} \section{Bordism groups} Here we will describe a different notion of cobordism relation between two singular maps, which is called (left-right) bordism. This yields structures different from $\Cob_\tau(P)$ (although they have some connections).
\begin{defi}\label{bordtau} Let $\tau$ be a set of singularities of a fixed codimension $k$. We call two $\tau$-maps $f_0\colon M_0^n\to P_0^{n+k}$ and $f_1\colon M_1^n\to P_1^{n+k}$ (with closed source and target manifolds) left-right $\tau$-bordant (or simply $\tau$-bordant) if there is \begin{itemize} \item[(i)] a compact manifold with boundary $W^{n+1}$ such that $\partial W=M_0\sqcup M_1$, \item[(ii)] a compact manifold with boundary $Z^{n+k+1}$ such that $\partial Z=P_0\sqcup P_1$, \item[(iii)] a $\tau$-map $F\colon W^{n+1}\to Z^{n+k+1}$ such that for $i=0,1$ we have $F^{-1}(P_i)=M_i$ and $F|_{M_i}=f_i$. \end{itemize} The set of (left-right) $\tau$-bordism classes of $\tau$-maps of $n$-manifolds to $(n+k)$-manifolds will be denoted by $\Bord_\tau(n,k)$. \end{defi} The disjoint union operation for these bordism sets is $[f_0]+[f_1]:=[f_0\sqcup f_1]$ where $f_0\colon M_0\to P_0$ and $f_1\colon M_1\to P_1$ and the disjoint union is $$f_0\sqcup f_1\colon M_0\sqcup M_1\to P_0\sqcup P_1.$$ This is now obviously an Abelian group operation. The additional normal structures to the bordism groups can be defined analogously to definition \ref{taug}. However, here we can also define additional structures for the target manifolds; in this thesis we will talk about the following four types of bordisms with additional structures. \begin{ex} \begin{itemize} \item[] \item[(1)] $\Bord_\tau(n,\mathfrak{N}_{n+k})=\Bord_\tau(n,k)$ is the unoriented $\tau$-bordism group. \item[(2)] $\Bord_\tau(n,\Omega_{n+k})=\Bord_\tau^{t\SO}(n,k)$ is the unoriented $\tau$-bordism group to oriented manifolds. This is the same as definition \ref{bordtau} with the additional condition that $P_0$, $P_1$ and $Z$ are oriented ($t\SO$ stands for the orientation of the target). \item[(3)] $\Bord_\tau^{\SO}(n,\mathfrak{N}_{n+k})=\Bord_\tau^{n\SO}(n,k)$ is the cooriented $\tau$-bordism group ($n\SO$ stands for the orientation of the virtual normal bundle). 
\item[(4)] $\Bord_\tau^{\SO}(n,\Omega_{n+k})=\Bord_\tau^{\SO}(n,k)$ is the cooriented $\tau$-bordism group to oriented manifolds. Here we require the target manifolds to be oriented and the maps to be cooriented, which is equivalent to the orientation of both the source and the target manifolds. \end{itemize} \end{ex} Again, the conventions for the notations of bordism groups will be the analogues of the ones described after definition \ref{cobtau}. \end{comment} \begin{rmks}\label{singmulti} Observe that almost everything in the section above works in exactly the same way when we replace the set $\tau$ of singularities by a set $\underline\tau$ of multisingularities. The only exception is theorem \ref{susp}, which is not necessarily true for multisingularities; hence the cobordism set $\Cob_{\underline\tau}^L(P)$ is only a group if $P$ is a manifold of the form $N\times\mathbb{R}^1$. We also remark that for a set $\underline\tau$ of multisingularities there is a natural forgetful map $\Cob_{\underline\tau}^L(P)\to\Cob_\tau^L(P)$ where $\tau$ is the set of all singularities in the elements of $\underline\tau$. \end{rmks} \chapter{Classifying spaces}\label{classp} One of the most important elements of any type of cobordism theory is an analogue of the Pontryagin--Thom construction, which means constructing a bijection between the cobordism set and a set of homotopy classes of maps to a specific space. This space is then called the classifying space for that cobordism theory. In our setting the cobordism theory is $\Cob_{\underline\tau}^L(n,P^{n+k})$ where $\underline\tau$ is a set of (multi)singularities of codimension $k$ and $L$ is a stable linear group.
We will obtain a space $X_{\underline\tau}^L$ which has the property $$\Cob_{\underline\tau}^L(P)=[\toverset{^*}{P},X_{\underline\tau}^L]_*$$ where $\toverset{^*}{P}$ denotes the one-point compactification of $P$ and $[\cdot,\cdot]_*$ denotes the set of based homotopy classes of maps that fix the image of $\infty\in\toverset{^*}{P}$ (if $P$ is compact, then this homotopy set is just $[P,X_{\underline\tau}^L]$). Hence $X_{\underline\tau}^L$ is the classifying space for the cobordisms of singular maps. Although this classifying space can also be obtained using Brown's representability theorem (see \cite{hosszu}), it is useful to see its construction. We only remark that this classifying space is unique up to homotopy equivalence by Brown representability. In section \ref{rszpt} we construct the classifying space $X_{\underline\tau}^L$ if $\underline\tau$ is a set of multisingularities (this is taken from \cite{rrthesis} and \cite{rsz}); in section \ref{szpt} the classifying space $X_\tau^L$ is constructed in a different way for a set $\tau$ of singularities without fixed multiplicities; then in section \ref{kazspace} we construct another type of classifying space, which will be called Kazarian's space (although it was already considered by Thom), and finally we show a connection between $X_\tau^L$ and Kazarian's space (the latter two sections are based on \cite{hosszu}). Analogous classifying spaces can be constructed for cobordisms of prim maps (see definition \ref{primcob}) by the same methods. The classifying space for $\Prim_{\underline\tau}^L(P)$ will be denoted by $\overline X_{\underline\tau}^L$. \section{The construction of Szűcs and Rimányi}\label{rszpt} Throughout this section we fix a set $\underline\tau$ of multisingularities of codimension $k$ and a stable linear group $L$.
We also fix a complete ordering of the elements of $\underline\tau$ which extends the natural partial order (see definition \ref{poms}); it will be denoted by $\underline\eta_0<\underline\eta_1<\ldots$ (where the $\underline\eta_i$ are the elements of $\underline\tau$). We can also assume this to be a well-ordering, as the natural partial order is well-founded (as a consequence of the assumption following definition \ref{pos}). We will show a generalisation of the Pontryagin--Thom construction to $(\underline\tau,L)$-maps due to Szűcs and Rimányi. This means constructing a ``universal $(\underline\tau,L)$-map'' in the same sense as the inclusion $B\O(k)\hookrightarrow M\O(k)$ of the Grassmannian manifold into the Thom space of the universal rank-$k$ bundle is the universal embedding. In other words the following is true. \begin{thm}\label{ptpullback} There are topological spaces $X_{\underline\tau}^L$, $Y_{\underline\tau}^L$ and a continuous map $f_{\underline\tau}^L\colon Y_{\underline\tau}^L\to X_{\underline\tau}^L$ with the following properties: \begin{enumerate} \item\label{pt1} For any manifold with boundary $P^{n+k}$ and any closed manifold $N^{n-1}$, if there are maps $f$, $\kappa_N$ and $\kappa_P$ such that the outer square in the diagram below is a pullback diagram, then there is a manifold $M^n$ with boundary $\partial M=N$, an extension $\kappa_M$ of $\kappa_N$ and a $(\underline\tau,L)$-map $F\colon M\to P$ such that the upper inner square is a pullback diagram as well. 
$$\xymatrix{ Y_{\underline\tau}^L\ar[rr]^{f_{\underline\tau}^L} && X_{\underline\tau}^L\\ & M\ar@{-->}[ul]_(.35){\kappa_M}\ar@{-->}[r]^F & P\ar[u]_{\kappa_P}\\ N\ar[uu]^{\kappa_N}\ar@{^(-->}[ur]\ar[rr]^f && \partial P\ar@{^(->}[u] }$$ \item\label{pt2} For any $(\underline\tau,L)$-map $F\colon M^n\to P^{n+k}$ between manifolds with boundaries, if there are maps $\kappa_{\partial M}$ and $\kappa_{\partial P}$ such that the outer square in the diagram below is a pullback diagram, then there are extensions $\kappa_M$ and $\kappa_P$ of $\kappa_{\partial M}$ and $\kappa_{\partial P}$ respectively such that the upper inner square is a pullback diagram as well. $$\xymatrix{ Y_{\underline\tau}^L\ar[rrr]^{f_{\underline\tau}^L} &&& X_{\underline\tau}^L\\ & M\ar@{-->}[ul]_(.35){\kappa_M}\ar[r]^F & P\ar@{-->}[ur]^(.35){\kappa_P} &\\ \partial M\ar[uu]^{\kappa_{\partial M}}\ar@{^(->}[ur]\ar[rrr]^{F|_{\partial M}} &&& \partial P\ar@{_(->}[ul]\ar[uu]_{\kappa_{\partial P}} }$$ \end{enumerate} \end{thm} \begin{rmk}\label{gentau} We call this $f_{\underline\tau}^L$ the universal $(\underline\tau,L)$-map although it is not really a $\underline\tau$-map, as its source and target spaces are not finite dimensional manifolds. However, it will be constructed from direct limits of $(\underline\tau,L)$-maps, so it is a ``generalised $(\underline\tau,L)$-map''. It will follow from the proof that it does not only classify $(\underline\tau,L)$-maps (in the sense of the above theorem), but also generalised $(\underline\tau,L)$-maps obtained from direct limits of $(\underline\tau,L)$-maps. \end{rmk} Before proving the above theorem, we prove its most important corollary, which states that $X_{\underline\tau}^L$ is indeed the classifying space for the cobordisms of $(\underline\tau,L)$-maps. \begin{thm}\label{ptclass} For any manifold $P^{n+k}$ there is a bijective correspondence between $\Cob_{\underline\tau}^L(P)$ and $[\toverset{^*}{P},X_{\underline\tau}^L]_*$. 
\end{thm} We will only prove this if $P$ is a closed manifold, but the next subsection will make it clear how the same proof works for an open $P$ too; see remark \ref{1cptrmk}. \medskip\begin{prf} If $M^n$ is a closed manifold and $f\colon M\to P$ is a $(\underline\tau,L)$-map, then by \ref{pt2} we can assign to the map $f$ the map $\alpha(f):=\kappa_P$ (and the map $\beta(f):=\kappa_M$). In order to make this $\alpha$ a correspondence between cobordism classes and homotopy classes, we have to prove the following: If $f_0\colon M_0\to P$ and $f_1\colon M_1\to P$ are $(\underline\tau,L)$-cobordant, then $\alpha(f_0)$ and $\alpha(f_1)$ are homotopic. This is indeed so, since a $(\underline\tau,L)$-cobordism $F\colon W\to P\times[0,1]$ is a map that satisfies the conditions of \ref{pt2}, hence there is a map $\alpha(F)=\kappa_{P\times[0,1]}$, which is clearly a homotopy between $\alpha(f_0)$ and $\alpha(f_1)$. Hence we obtain a map $\tilde\alpha\colon\Cob_{\underline\tau}^L(P)\to[P,X_{\underline\tau}^L]$; we just have to prove that this $\tilde\alpha$ is bijective. \medskip\noindent I. \emph{$\tilde\alpha$ is surjective.}\medskip If we are given a map $\kappa\colon P^{n+k}\to X_{\underline\tau}^L$, then by \ref{pt1} we obtain a manifold $M^n$, a map $\kappa_M\colon M\to Y_{\underline\tau}^L$ and a $(\underline\tau,L)$-map $f\colon M\to P$, so the surjectivity is proved if $\alpha(f)$ is homotopic to $\kappa$. To prove this, we apply \ref{pt2} to the map $$\textstyle f\times\id_{[0,1]}\colon M\times[0,1]\to P\times[0,1]$$ where the boundaries are mapped by $\kappa\sqcup\alpha(f)\colon P\sqcup P\to X_{\underline\tau}^L$ and $\kappa_M\sqcup\beta(f)\colon M\sqcup M\to Y_{\underline\tau}^L$, which yields the desired homotopy. \medskip\noindent II. \emph{$\tilde\alpha$ is injective.}\medskip Suppose we have two maps $f_0\colon M_0\to P$ and $f_1\colon M_1\to P$ such that $\alpha(f_0)$ is homotopic to $\alpha(f_1)$.
If $\kappa$ is a homotopy between them, then we can apply \ref{pt1} to the maps $$f_0\sqcup f_1\colon M_0\sqcup M_1\to P\sqcup P=\partial(P\times[0,1])$$ and $\kappa\colon P\times[0,1]\to X_{\underline\tau}^L$ and $\beta(f_0)\sqcup\beta(f_1)\colon M_0\sqcup M_1\to Y_{\underline\tau}^L$. This gives a $(\underline\tau,L)$-cobordism between $f_0$ and $f_1$. \end{prf} \subsection{Proof of theorem \ref{ptpullback}}\label{prf} The proof proceeds by (transfinite) induction on the elements of $\underline\tau$. We will glue blocks together to obtain the spaces $X_{\underline\tau}^L$ and $Y_{\underline\tau}^L$; one block corresponds to one multisingularity in $\underline\tau$. The block corresponding to $\underline\eta\in\underline\tau$ is glued precisely to the blocks corresponding to multisingularities less complicated than $\underline\eta$. This, of course, only works if the index of $\underline\eta$ is a successor ordinal, but in the case when the order-type of $\underline\tau$ is a limit, we can define $f_{\underline\tau}^L\colon Y_{\underline\tau}^L\to X_{\underline\tau}^L$ as a direct limit. Hence from now on we will assume that $\underline\tau$ has a maximal element. The first step in the induction is when we have $\underline\tau=\{1\Sigma^0\}$, that is, $(\underline\tau,L)$-maps are the embeddings with normal $L$-structures. In this case the classical Thom construction yields the inclusion $$f_{\{1\Sigma^0\}}^L\colon Y_{\{1\Sigma^0\}}^L:=BL(k)\hookrightarrow T\gamma_k^L=:X_{\{1\Sigma^0\}}^L$$ of the zero-section into the Thom space of the universal rank $k$ vector bundle with structure group $L$. This way $T\gamma_k^L$ and $BL(k)$ are the first blocks of $X_{\underline\tau}^L$ and $Y_{\underline\tau}^L$ respectively (for any multisingularity set $\underline\tau$). 
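In this base case theorem \ref{ptclass} recovers a classical fact, which may be worth recording here as an example.

\begin{ex}
For $\underline\tau=\{1\Sigma^0\}$ and $L=\O$ we get $$\Cob_{\{1\Sigma^0\}}(n,k)\cong[S^{n+k},M\O(k)]_*=\pi_{n+k}(M\O(k)),$$ which for $k>n+1$ is the unoriented cobordism group $\mathfrak{N}_n$ by the theorem of Thom.
\end{ex}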
Now suppose we know the theorem for $\underline\tau':=\underline\tau\setminus\{\underline\eta\}$ (where $\underline\eta$ is the maximal element of $\underline\tau$) and we want to prove it for $\underline\tau$. Recall definition \ref{phieta} (and its extensions later) where we defined the universal map $\Phi_{\underline\eta}^L\colon\xi_{\underline\eta}^L\to\tilde\xi_{\underline\eta}^L$ for tubular neighbourhoods of the $\underline\eta$-strata. Now if $D\tilde\xi_{\underline\eta}^L$ is the disk bundle of $\tilde\xi_{\underline\eta}^L$ (of a sufficiently small radius) and we set $D\xi_{\underline\eta}^L:=(\Phi_{\underline\eta}^L)^{-1}(D\tilde\xi_{\underline\eta}^L)$, which is homeomorphic to the disk bundle of $\xi_{\underline\eta}^L$, then the restriction $$\Phi_{\underline\eta}^L|_{S\xi_{\underline\eta}^L}\colon S\xi_{\underline\eta}^L=\partial D\xi_{\underline\eta}^L\to\partial D\tilde\xi_{\underline\eta}^L=S\tilde\xi_{\underline\eta}^L$$ is a $(\underline\tau',L)$-map. Hence, by the induction hypothesis on \ref{pt2}, there are inducing maps $\rho,\tilde\rho$ which make the following diagram commutative: $$\xymatrix@C=4pc{ Y_{\underline\tau'}^L\ar[r]^{f_{\underline\tau'}^L} & X_{\underline\tau'}^L\\ S\xi_{\underline\eta}^L\ar[u]^{\rho}\ar[r]^{\Phi_{\underline\eta}^L|_{S\xi_{\underline\eta}^L}} & S\tilde\xi_{\underline\eta}^L\ar[u]_{\tilde\rho} }$$ We define $$X_{\underline\tau}^L:=X_{\underline\tau'}^L\tightunderset{\tilde\rho}{\sqcup}D\tilde\xi_{\underline\eta}^L,~~~~Y_{\underline\tau}^L:=Y_{\underline\tau'}^L\tightunderset{\rho}{\sqcup}D\xi_{\underline\eta}^L~~~~\text{and}~~~~f_{\underline\tau}^L:=f_{\underline\tau'}^L\cup\Phi_{\underline\eta}^L|_{D\xi_{\underline\eta}^L}.$$ Now the only thing left is to prove that \ref{pt1} and \ref{pt2} hold for this newly constructed map. \medskip\noindent\textbf{Proof of \ref{pt1}.\enspace\ignorespaces} Let $P,N,f,\kappa_P,\kappa_N$ be as in \ref{pt1}.
The zero-section $BG_{\underline\eta}^L$ is inside $D\tilde\xi_{\underline\eta}^L$ and $D\xi_{\underline\eta}^L$, hence it is also in $X_{\underline\tau}^L$ and $Y_{\underline\tau}^L$. Moreover, $f_{\underline\tau}^L$ maps $BG_{\underline\eta}^L\subset Y_{\underline\tau}^L$ onto $BG_{\underline\eta}^L\subset X_{\underline\tau}^L$ homeomorphically. The submanifold $\kappa_N^{-1}(BG_{\underline\eta}^L)$ is mapped by $f$ diffeomorphically onto $\kappa_P|_{\partial P}^{-1}(BG_{\underline\eta}^L)$ because of the pullback property. We will denote both preimages by $K_N$. Now the submanifold $K_P:=\kappa_P^{-1}(BG_{\underline\eta}^L)$ has boundary $K_N$. If we set $U:=\kappa_P^{-1}(\toverset{^{~\circ}}D\tilde\xi_{\underline\eta}^L)$, then we may assume that $U$ is a tubular neighbourhood of $K_P$, hence its closure $\overline U$ can be identified with the disk bundle of $\kappa_P|_{K_P}^*\tilde\xi_{\underline\eta}^L$. The boundary $\partial U$ is now the union of a sphere bundle $\partial_SU$ over $K_P$ and a disk bundle $\partial_DU$ over $K_N$. We put $P':=P\setminus U$ and $Q:=\overline{\partial P\setminus\partial_DU}$. Now let $V$ be the disk bundle of $\kappa_P|_{K_P}^*\xi_{\underline\eta}^L$ and let $\kappa_V\colon V\to D\xi_{\underline\eta}^L$ be the map defined by the inducing map $\kappa_P|_{K_P}$. Then the boundary $\partial V$ is the union of a sphere bundle $\partial_SV$ over $K_P$ and a disk bundle $\partial_DV$ over $K_N$. Observe that $\partial_DV$ can be identified with a (closed) tubular neighbourhood of $K_N\subset N$; we will put $N':=\overline{N\setminus\partial_DV}$.
\begin{figure}[H] \begin{center} \centering\includegraphics[scale=0.1]{kep4}\label{kep4} \begin{changemargin}{2cm}{2cm} \caption{\hangindent=1.4cm\small The ellipse on the left stands for $N$ and the (closed) ball on the right for $P$; the two indicated points on $N$ and the other two on $f(N)$ represent $K_N$ and the arc joining them is $K_P$; the disk bundles $V$ and $\overline U$ are also indicated.} \end{changemargin} \vspace{-1.3cm} \end{center} \end{figure} The map $\Phi_{\underline\eta}^L\colon\xi_{\underline\eta}^L\to\tilde\xi_{\underline\eta}^L$ induces a map $F_{\underline\eta}\colon V\to\overline U$ between the pullbacks, which is a $(\underline\tau,L)$-map. By definition, the restriction $F_{\underline\eta}|_{\partial_DV}$ coincides with $f|_{\partial_DV}$. Now $P'$ is a manifold with boundary $Q\cup\partial_SU$ and the (closed) manifold $N'\cup\partial_SV$ is mapped to this boundary by the $(\underline\tau',L)$-map $f|_{N'}\cup F_{\underline\eta}|_{\partial_SV}$. Moreover, if we also add the map $\kappa_N|_{N'}\cup\kappa_V|_{\partial_SV}\colon N'\cup\partial_SV\to Y_{\underline\tau'}^L$, then we can apply the induction hypothesis on \ref{pt1} to the lower square in the following diagram to obtain the manifold $M'$ and the maps $F'$ and $\kappa_{M'}$: $$\xymatrix@C=.7pc{ & V\ar[rrrrrr]^{F_{\underline\eta}}\ar[d]_{\kappa_V} &&&&&& \overline U\ar[d]^{\kappa_P|_{\overline U}} &\\ Y_{\underline\tau}^L\ar@{=}[r] & D\xi_{\underline\eta}^L\ar@{{}{}{}}[r]|\widesqcup_*+\txt{$_\rho$}\ar@/^1.5pc/@<.3pc>[rrrrrr]^(.7){\Phi_{\underline\eta}^L} & Y_{\underline\tau'}^L\ar[rrrr]^{f_{\underline\tau'}^L} &&&& X_{\underline\tau'}^L\ar@{{}{}{}}[r]|\widesqcup_*+\txt{$_{\tilde\rho}$} & D\tilde\xi_{\underline\eta}^L\ar@{=}[r] & X_{\underline\eta}^L\\ &&&& M'\ar@{-->}[ull]_(.35){\kappa_{M'}}\ar@{-->}[rr]^{F'} && P'\ar[u]_{\kappa_P|_{P'}} &&\\ && N'\cup\partial_SV\ar[uu]^{\kappa_N|_{N'}\cup\kappa_V|_{\partial_SV}}\ar@{^(-->}[urr]\ar[rrrr]^{f|_{N'}\cup 
F_{\underline\eta}|_{\partial_SV}} &&&& Q\cup\partial_SU\ar@{^(->}[u] && }$$ We can glue $M'$ to $V$ along the common boundary part $\partial_SV$. The maps $$F'\cup F_{\underline\eta}\colon M'\cup V\to P'\cup\overline U=P~~~~\text{and}~~~~\kappa_{M'}\cup\kappa_V\colon M'\cup V\to Y_{\underline\tau'}^L\tightunderset{\rho}{\sqcup}D\xi_{\underline\eta}^L=Y_{\underline\tau}^L$$ are well-defined, as the maps used in both cases coincide on $\partial_SV$. Hence we obtain a manifold $M:=M'\cup V$ with boundary $\partial M=N$ and maps $F:=F'\cup F_{\underline\eta}$ and $\kappa_M:=\kappa_{M'}\cup\kappa_V$ and it is easy to see that these satisfy \ref{pt1}. ~$\square$\medskip \noindent\textbf{Proof of \ref{pt2}.\enspace\ignorespaces} Let $M,P,F,\kappa_{\partial M},\kappa_{\partial P}$ be as in \ref{pt2}. We put $K:=\underline\eta(F)$ and $\tilde K:=F(\underline\eta(F))$; this way $F|_K\colon K\to\tilde K$ is a diffeomorphism. We now use theorem \ref{univ} (and its extensions later) to obtain closed tubular neighbourhoods $K\subset T\subset M$ and $\tilde K\subset\tilde T\subset P$ such that $F$ maps $T$ to $\tilde T$ and $\partial T$ to $\partial\tilde T$ and there is a commutative diagram $$\xymatrix@R=.333pc{ D\xi_{\underline\eta}^L\ar[rrr]^{\Phi_{\underline\eta}^L}\ar[ddddd]&&&D\tilde\xi_{\underline\eta}^L\ar[ddddd]\\ &T\ar[r]^{F|_T}\ar[ul]_(.35){\kappa_T}\ar[ddd]&\tilde T\ar[ur]^(.35){\kappa_{\tilde T}}\ar[ddd]&\\ \\ \\ &K\ar[r]^{F|_K}\ar[dl]&\tilde K\ar[dr]&\\ BG_{\underline\eta}^L\ar[rrr]_{\id_{BG_{\underline\eta}^L}}&&&BG_{\underline\eta}^L }$$ where $T$ and $\tilde T$ are identified with disk bundles over $K$ and $\tilde K$ respectively.
We have $\partial T=\partial_ST\cup\partial_DT$ and $\partial\tilde T=\partial_S\tilde T\cup\partial_D\tilde T$, where $\partial_ST$ and $\partial_S\tilde T$ are sphere bundles over $K\subset M$ and $\tilde K\subset P$ respectively and $\partial_DT$ and $\partial_D\tilde T$ are disk bundles over $\partial K\subset\partial M$ and $\partial\tilde K\subset\partial P$ respectively. We put $M':=\overline{M\setminus T}$, $N:=\overline{\partial M\setminus\partial_DT}$, $P':=\overline{P\setminus\tilde T}$ and $Q:=\overline{\partial P\setminus\partial_D\tilde T}$. Now $F|_{M'}\colon M'\to P'$ is a $(\underline\tau',L)$-map and the boundaries $\partial M'=N\cup\partial_ST$ and $\partial P'=Q\cup\partial_S\tilde T$ are mapped to $Y_{\underline\tau'}^L$ and $X_{\underline\tau'}^L$ respectively by the maps $\kappa_{\partial M}|_N\cup(\rho\circ\kappa_T|_{\partial_ST})$ and $\kappa_{\partial P}|_Q\cup(\tilde\rho\circ\kappa_{\tilde T}|_{\partial_S\tilde T})$. Hence by the induction hypothesis on \ref{pt2} there are extensions $\kappa_{M'}$ and $\kappa_{P'}$ of these maps to $M'$ and $P'$ respectively, and a pullback diagram $$\xymatrix{ Y_{\underline\tau'}^L\ar[r]^{f_{\underline\tau'}^L} & X_{\underline\tau'}^L\\ M'\ar[u]^{\kappa_{M'}}\ar[r]^{F|_{M'}} & P'\ar[u]_{\kappa_{P'}} }$$ The maps $\kappa_{M'}$ and $\kappa_T$ coincide on $\partial_ST$ and the maps $\kappa_{P'}$ and $\kappa_{\tilde T}$ coincide on $\partial_S\tilde T$. Therefore, the maps $$\kappa_M:=\kappa_{M'}\cup\kappa_T\colon M'\cup T=M\to Y_{\underline\tau}^L~~~~\text{and}~~~~\kappa_P:=\kappa_{P'}\cup\kappa_{\tilde T}\colon P'\cup\tilde T=P\to X_{\underline\tau}^L$$ are well-defined and it is not hard to see that they satisfy \ref{pt2}.
~$\square$ \begin{rmk}\label{1cptrmk} The first block in the construction of $X_{\underline\tau}^L$ was the Thom space $T\gamma_k^L$ and it is clear from the proofs that the points of $P$ which are ``far away'' from the image of $M$ are mapped (according to the Thom construction) to the special point of this Thom space. Hence $\infty\in\toverset{^*}P$ is always mapped to this point. \end{rmk} \section{Szűcs's construction by $\tau$-embeddings}\label{szpt} In the rest of this chapter we fix a stable linear group $L$ and a set $\tau$ of singularities of codimension $k$, together with a complete ordering $[\eta_0]<[\eta_1]<\ldots$ of its elements which extends the natural partial order (see definition \ref{pos}). As before, we can assume this to be a well-ordering and that $\tau$ has a maximal element with respect to this order. We will again show a Pontryagin--Thom type construction (due to Szűcs) to obtain the classifying space for cobordisms of $(\tau,L)$-maps, but now the techniques of section \ref{rszpt} do not entirely work. The reason for this is that in the proof above we used that the image of each multisingularity stratum is embedded into the target space; however, this is not generally true for singularity strata. This will be one of the main problems to solve now. \begin{defi} By a $(\tau,L)$-embedding we mean a triple $(e,V,\mathscr{F})$ with the following properties: \begin{itemize} \item[(i)] $e\colon M^n\hookrightarrow Q^q$ is an embedding of a manifold $M$ into a manifold $Q$. If $M$ is a manifold with boundary, then we also assume (as in definition \ref{taumap}) that $Q$ has a boundary too, $e^{-1}(\partial Q)=\partial M$ and $e$ is transverse to $\partial Q$. \item[(ii)] $V=(v_1,\ldots,v_m)$ where $m=q-n-k$ and the $v_i$-s are pointwise independent vector fields in $Q$ along $e(M)$ (i.e. sections of the bundle $TQ|_{e(M)}$). For manifolds with boundaries we require that the vector fields on $e(\partial M)$ are tangent to $\partial Q$.
We will identify $V$ with the trivialised subbundle generated by the $v_i$-s. \item[(iii)] $\mathscr{F}$ is a foliation of dimension $m$ on a neighbourhood of $e(M)$ and it is tangent to $V$ along $e(M)$. \item[(iv)] Any point $p\in M$ has a neighbourhood on which the composition of $e$ with the projection along the leaves of $\mathscr{F}$ to a small $(n+k)$-dimensional transverse slice is a map that has at $p$ a singularity which belongs to $\tau$. \item[(v)] The normal bundle $\nu_e$ of the embedding $e$ has structure group $L$. \end{itemize} \end{defi} \begin{rmk} If $(e,V,\mathscr{F})$ is a $(\tau,L)$-embedding, a stratification of $M$ arises by the submanifolds $$\eta(e):=\eta(e,V,\mathscr{F}):=\{p\in M\mid p\in\eta(\pi\circ e)\}$$ where $\pi$ denotes the local projection around $e(p)$ along the leaves of $\mathscr{F}$. \end{rmk} \begin{ex} If $f\colon M^n\to P^{n+k}$ is a $(\tau,L)$-map and $i\colon M^n\hookrightarrow\R^m$ is an embedding, then we can define a $(\tau,L)$-embedding $(e,V,\mathscr{F})$ of $M$ into $P\times\R^m$: We put $e:=f\times i$, the vector fields $v_i$ arise from a basis in $\R^m$ and $\mathscr{F}$ is composed of the leaves $\{p\}\times\R^m$ (for all $p\in P$). \end{ex} \begin{defi} \begin{itemize} \item[] \item[(1)] The vector fields $V=(v_1,\ldots,v_m)$ and the foliation $\mathscr{F}$ in the above example are called vertical in $P\times\R^m$. \item[(2)] The subsets $P\times\{x\}\subset P\times\R^m$ (for any $x\in\R^m$) are called horizontal sections. \end{itemize} \end{defi} \begin{rmk} If $(e,V,\mathscr{F})$ is a $(\tau,L)$-embedding of $M^n$ into $P^{n+k}\times\R^m$ such that $V$ and $\mathscr{F}$ are vertical, then $\pr_P\circ e\colon M\to P$ is a $(\tau,L)$-map. 
\end{rmk} \subsection{Cobordism of $\tau$-embeddings}\label{tauemb} \begin{defi} We call two $(\tau,L)$-embeddings $(e_0,V_0,\mathscr{F}_0)$ and $(e_1,V_1,\mathscr{F}_1)$ of closed source manifolds $M_0^n$ and $M_1^n$ into the manifold $Q^q$ cobordant if there is \begin{itemize} \item[(i)] a compact manifold with boundary $W^{n+1}$ such that $\partial W=M_0\sqcup M_1$, \item[(ii)] a $(\tau,L)$-embedding $(E,U,\mathscr{G})$ of $W$ into $Q\times[0,1]$ such that for $i=0,1$ the restriction of $(E,U,\mathscr{G})$ to the boundary part $M_i$ is the $(\tau,L)$-embedding $(e_i,V_i,\mathscr{F}_i)$ of $M_i$ into $Q\times\{i\}$. \end{itemize} The set of cobordism classes of $(\tau,L)$-embeddings of $n$-manifolds into the manifold $Q$ will be denoted by $\Emb_\tau^L(n,Q)$. \end{defi} \begin{rmk} If $Q^q$ is a manifold of the form $N^{q-1}\times\R^1$, then $\Emb_\tau^L(n,Q)$ is also an Abelian group with almost the same operation as the one in section \ref{grp}. The difference is that the group operation is now induced by the ``far away'' disjoint union: when forming the sum of two cobordism classes we take representatives $(e_0,V_0,\mathscr{F}_0)$ and $(e_1,V_1,\mathscr{F}_1)$ such that the images of $e_0$ and $e_1$ can be separated by a hypersurface $N\times\{t\}$ for some $t\in\R^1$. Such representatives always exist, because by translating any $(\tau,L)$-embedding along the lines $\R^1$ in $N\times\R^1$ we get another representative of the same cobordism class. \end{rmk} \begin{lemma}\label{vert} Let $(e,V,\mathscr{F})$ be a $(\tau,L)$-embedding of $M^n$ into $P^{n+k}\times\R^m$ where $M$ is a compact manifold and $P$ is any manifold. Then there is a diffeotopy $\varphi_t~(t\in[0,1])$ of $P\times\R^m$ such that $\varphi_0$ is the identity and (the differential of) $\varphi_1$ takes $V$ to the vertical vector fields $V'$ and $\mathscr{F}$ to the vertical foliation $\mathscr{F}'$ around the image of $M$.
\end{lemma} \begin{prf} The manifold $M$ is finitely stratified by the submanifolds $$S_i:=\bigcup_{\eta\in\tau\atop\dim\eta(e)=i}\eta(e)~~~~i=0,\ldots,n.$$ By the stratified compression theorem \ref{sct} there is a diffeotopy of $P\times\R^m$ which turns the vector fields $V$ into vertical vector fields. Therefore we may assume that $V$ is already vertical, and so we only need to find a diffeotopy that takes the foliation $\FF$ into the vertical foliation $\FF'$ and whose differential keeps the vector fields $V$ vertical. We will recursively deform $\FF$ into $\FF'$ around the images of the strata $S_i~(i=0,\ldots,n)$. First we list some trivial general observations. \begin{enumerate} \item\label{tr1} If $R\subset\R^m$ and $K,K'\subset P\times R$ are such that each of $K$ and $K'$ intersects each horizontal section $P\times\{x\}~(x\in R)$ exactly once, then a bijective correspondence $K\to K'$ arises by associating the points on the same horizontal section to each other. \item\label{tr2} If $A\subset P\times\R^m$ is such that for each $a\in A$ subsets $R_a,K_a,K'_a$ are given as in \ref{tr1}, then a family of bijective maps $\{K_a\to K'_a\mid a\in A\}$ arises. If we have $K_{a_1}\cap K_{a_2}=\varnothing=K'_{a_1}\cap K'_{a_2}$ for any two different points $a_1,a_2\in A$, then the union of these bijections gives a continuous bijective map $$\alpha\colon U:=\bigcup_{a\in A}K_a\to\bigcup_{a\in A}K'_a=:U'$$ \item\label{tr3} If the subsets $A,K_a,K'_a$ in \ref{tr2} are submanifolds of $P\times\R^m$ such that $U$ and $U'$ are also submanifolds, then the map $\alpha$ is smooth. \end{enumerate} Denote by $V^\perp$ the orthogonal complement of the bundle $V|_{e(S_0)}\oplus Te(S_0)$ in $T(P\times\R^m)|_{e(S_0)}$ (with respect to a Riemannian metric).
Choose a small neighbourhood $A$ of $e(S_0)$ in $\exp(V^\perp)$ (where $\exp$ denotes the exponential map of $P\times\R^m$) and for all $a\in A$ let $K_a$ and $K'_a$ be the intersections of a small neighbourhood of $a$ and the leaves of $\FF$ and $\FF'$ respectively. If the neighbourhoods were chosen sufficiently small, then we are in the setting of \ref{tr3}, hence a diffeomorphism $\alpha\colon U\to U'$ arises (with the same notations as above). Note that $U$ and $U'$ are both neighbourhoods of $e(S_0)$, the map $\alpha$ fixes $e(S_0)$ and for all $a\in e(S_0)$ we have $d\alpha_a=\id_{T_a(P\times\R^m)}$. For all points $(p,x)\in U$ we can join $(p,x)$ and $\alpha(p,x)$ by a minimal geodesic in the horizontal section $P\times\{x\}$, and using these we can extend $\alpha$ to an isotopy $\alpha_t~(t\in[0,1])$ of $U$ (for which $\alpha_0=\id_U$ and $\alpha_1=\alpha$). Observe that where the foliations $\FF$ and $\FF'$ initially coincide, this method just gives the identity for all $t\in[0,1]$. Of course this isotopy can be extended to a diffeotopy of $P\times\R^m$ (by the isotopy extension theorem) and it takes the leaves of $\FF$ to the leaves of $\FF'$ around the image of $S_0$. Next we repeat the same procedure around $e(S_1)$, the image of the next stratum, to get a new diffeotopy (that leaves a neighbourhood of $e(S_0)$ unchanged), and so on. In the end we obtain a diffeotopy of $P\times\R^m$ which turns $\FF$ into the vertical foliation $\FF'$ around the image of $M$ and does not change the vertical vector fields $V$. \end{prf} \begin{rmk} It is clear from the proof above that the relative version of lemma \ref{vert} is also true, that is, if the vector fields $V$ and the foliation $\FF$ are already vertical on a neighbourhood of a compact subset $C\subset e(M)$, then the diffeotopy $\varphi_t~(t\in[0,1])$ is fixed on a neighbourhood of $C$.
\end{rmk} Now we can prove the key observation on $\tau$-embeddings, namely, that the computation of cobordisms of $\tau$-maps can be reduced to that of $\tau$-embeddings. \begin{thm}\label{cob=emb} For any manifold $P^{n+k}$, if the number $m$ is sufficiently large (compared to $n$), then $$\Cob_\tau^L(n,P^{n+k})\cong\Emb_\tau^L(n,P^{n+k}\times\R^m).$$ \end{thm} \begin{prf} Take any number $m\ge2n+4$, so any manifold of dimension at most $n+1$ can be embedded into $\R^m$ uniquely up to isotopy. We will define two homomorphisms $\varphi\colon\Cob_\tau^L(n,P)\to\Emb_\tau^L(n,P\times\R^m)$ and $\psi\colon\Emb_\tau^L(n,P\times\R^m)\to\Cob_\tau^L(n,P)$ which will turn out to be each other's inverses. \medskip\noindent I. \emph{Construction of $\varphi$.}\medskip For a $(\tau,L)$-map $f\colon M^n\to P^{n+k}$ we can choose any embedding $i\colon M\hookrightarrow\R^m$ and define a $(\tau,L)$-embedding $(e,V,\FF)$ by putting $e:=f\times i$ and defining $V$ and $\FF$ as the vertical vector fields and foliation. Define the map $\varphi$ to assign to the cobordism class of $f$ the cobordism class of $(e,V,\FF)$. In order to prove that $\varphi$ is well-defined, we have to show that the cobordism class of $(e,V,\FF)$ does not depend on the choice of the embedding $i$ and the representative of the cobordism class $[f]$. \medskip\begin{sclaim} If $i_0\colon M\hookrightarrow\R^m$ and $i_1\colon M\hookrightarrow\R^m$ are two embeddings and the above method assigns to them the $(\tau,L)$-embeddings $(e_0,V_0,\mathscr{F}_0)$ and $(e_1,V_1,\mathscr{F}_1)$ respectively, then $$(e_0,V_0,\mathscr{F}_0)\sim(e_1,V_1,\mathscr{F}_1).$$ \end{sclaim} \begin{sprf} Because of the dimension condition, $i_0$ and $i_1$ can be connected by an isotopy $i_t~(t\in[0,1])$. 
We define a $(\tau,L)$-embedding $$E\colon M\times[0,1]\hookrightarrow P\times\R^m\times[0,1];~(p,t)\mapsto (f(p),i_t(p),t)$$ (again with the vertical vector fields and foliation), which is clearly a cobordism between $(e_0,V_0,\mathscr{F}_0)$ and $(e_1,V_1,\mathscr{F}_1)$. \end{sprf} \begin{sclaim} If $f_0\colon M_0\to P$ and $f_1\colon M_1\to P$ are cobordant $(\tau,L)$-maps and the above method assigns to them the $(\tau,L)$-embeddings $(e_0,V_0,\mathscr{F}_0)$ and $(e_1,V_1,\mathscr{F}_1)$ respectively, then $$(e_0,V_0,\mathscr{F}_0)\sim(e_1,V_1,\mathscr{F}_1).$$ \end{sclaim} \begin{sprf} Let $F\colon W^{n+1}\to P\times[0,1]$ be a cobordism between $f_0$ and $f_1$. Again by the dimension condition, the embedding $i_0\sqcup i_1\colon M_0\sqcup M_1=\partial W\hookrightarrow\R^m$ extends to an embedding $I\colon W\hookrightarrow\R^m$. Hence the map $E:=F\times I$ is a $(\tau,L)$-embedding of $W$ into $P\times\R^m\times[0,1]$ (the vector fields and foliation are again vertical) and it is easy to see that this is a cobordism between $(e_0,V_0,\mathscr{F}_0)$ and $(e_1,V_1,\mathscr{F}_1)$. \end{sprf} \noindent II. \emph{Construction of $\psi$.}\medskip If $(e,V,\FF)$ is a $(\tau,L)$-embedding of a manifold $M^n$ into $P^{n+k}\times\R^m$, then by lemma \ref{vert} we obtain a diffeotopy of $P\times\R^m$ that makes $V$ and $\FF$ vertical. A diffeotopy of $P\times\R^m$ trivially yields a cobordism of $(\tau,L)$-embeddings, hence we can assume that $V$ and $\FF$ were initially vertical. Now we can define $\psi$ to assign to the cobordism class of $(e,V,\FF)$ the cobordism class of the $(\tau,L)$-map $f:=\pr_P\circ e$. In order to prove that $\psi$ is well-defined, we have to show that the cobordism class of $f$ does not depend on the choice of the representative of the cobordism class $[(e,V,\FF)]$. 
\medskip\begin{sclaim} If $(e_0,V_0,\mathscr{F}_0)$ and $(e_1,V_1,\mathscr{F}_1)$ are cobordant $(\tau,L)$-embeddings of the manifolds $M_0$ and $M_1$ respectively into $P\times\R^m$ and the above method assigns to them the $(\tau,L)$-maps $f_0\colon M_0\to P$ and $f_1\colon M_1\to P$ respectively, then $f_0\sim f_1$. \end{sclaim} \begin{sprf} We apply a diffeotopy $\varphi_t^i~(t\in[0,1])$ of $P\times\R^m\times\{i\}$ to turn the vector fields $V_i$ and foliation $\FF_i$ vertical (for $i=0,1$); this way we obtain the $(\tau,L)$-map $f_i\colon M_i\to P\times\{i\}$. If $(e_0,V_0,\mathscr{F}_0)$ and $(e_1,V_1,\mathscr{F}_1)$ are connected by a cobordism $(E,U,\mathscr{G})$, which is a $(\tau,L)$-embedding of a manifold $W^{n+1}$ into $P\times\R^m\times[0,1]$, then we can apply (the relative version of) lemma \ref{vert} to obtain a diffeotopy $\Phi_t~(t\in[0,1])$ of $P\times\R^m\times[0,1]$ that extends the given diffeotopies $\varphi_t^0,\varphi_t^1$ on the boundary and turns the vector fields $U$ and the foliation $\mathscr{G}$ vertical. Now combining $E$ with the final diffeomorphism $\Phi_1$ and the projection to $P\times[0,1]$, we obtain a $(\tau,L)$-cobordism $F:=\pr_{P\times[0,1]}\circ\Phi_1\circ E$ between $f_0$ and $f_1$. \end{sprf} It is trivial from the constructions of $\varphi$ and $\psi$ that they are homomorphisms and clearly $\psi$ is the inverse of $\varphi$, hence they are both isomorphisms between $\Cob_\tau^L(n,P)$ and $\Emb_\tau^L(n,P\times\R^m)$.
\end{prf} \begin{rmk}\label{tauembrmk} The same proof also shows the following: \begin{itemize} \item[(1)] If $m$ is sufficiently large (compared to $n$ and $l$), then $$\Cob_{\tau\oplus l}^L(n,Q^{n+k+l})\cong\Emb_\tau^L(n,Q^{n+k+l}\times\R^m).$$ \item[(2)] The cobordism group of $(\tau,L)$-embeddings stabilises, that is, for large numbers $m$ the inclusion $\R^m\subset\R^{m+1}$ induces an isomorphism $$\Emb_\tau^L(n,Q^q\times\R^m)\cong\Emb_\tau^L(n,Q^q\times\R^{m+1}).$$ \end{itemize} \end{rmk} \begin{defi} For any manifold $Q^q$ we put $\Stab_\tau^L(n,Q):=\underset{m\to\infty}\lim\Emb_\tau^L(n,Q\times\R^m)$ and call this the stable cobordism group of $(\tau,L)$-embeddings. \end{defi} \subsection{Semiglobal description} In order to describe stable cobordisms of $(\tau,L)$-embeddings, we will need an analogue of section \ref{semiglob} in this setting. We fix a natural number $n$ and consider $[\eta]$-strata of $(\tau,L)$-embeddings of $n$-manifolds to $q$-manifolds where $q$ is sufficiently large and $[\eta]\in\tau$ is the maximal element. \begin{defi}\label{approx} Choose a finite (say $r$-)dimensional approximation $BG_\eta^L(n)$ of $BG_\eta^L$ such that the pair $(BG_\eta^L,BG_\eta^L(n))$ is $(n+1)$-connected (the existence of such a space is clear from the Milnor construction of $BG_\eta^L$); we can assume that $BG_\eta^L(n)$ is an $r$-dimensional manifold. The restriction of the global normal form of $[\eta]$ will be denoted by $$\Phi_\eta^L(n):=\Phi_\eta^L|_{\xi_\eta^L(n)}\colon\xi_\eta^L(n):=\xi_\eta^L|_{BG_\eta^L(n)}\to\tilde\xi_\eta^L|_{BG_\eta^L(n)}=:\tilde\xi_\eta^L(n).$$ Fix a number $m\ge2r+2n+2$ and let $i\colon\xi_\eta^L(n)\hookrightarrow\R^m$ be any embedding (the dimensions were chosen such that $i$ is unique up to isotopy) and put $$\tilde\Phi_\eta^L(n):=\Phi_\eta^L(n)\times i\colon\xi_\eta^L(n)\hookrightarrow\tilde\xi_\eta^L(n)\times\R^m,$$ which is a $(\tau,L)$-embedding (with the vertical vector fields and foliation). 
This is called the global normal form of $(\tau,L)$-embeddings of $[\eta]$ in dimensions at most $n$. \end{defi} \begin{thm}\label{univemb} Let $(e\colon M^n\hookrightarrow Q^q,V,\mathscr{F})$ be a $(\tau,L)$-embedding where $m=q-n-k\ge2r+2n+2$. Then there are tubular neighbourhoods $\eta(e)\subset T\subset M$ and $e(\eta(e))\subset\tilde T\subset Q$ of the $\eta$-stratum and its image such that there is a commutative diagram of fibrewise isomorphisms (the vertical arrows are the projections and $BG_\eta^L(n)\subset\tilde\xi_\eta^L(n)\times\R^m$ is identified with the graph of $i|_{BG_\eta^L(n)}$) $$\xymatrix@R=.333pc{ \xi_\eta^L(n)\ar[rrr]^{\tilde\Phi_\eta^L(n)}\ar[ddddd]&&&\tilde\xi_\eta^L(n)\times\R^m\ar[ddddd]\\ &T\ar[r]^{e|_T}\ar[ul]\ar[ddd]&\tilde T\ar[ur]\ar[ddd]&\\ \\ \\ &\eta(e)\ar[r]^{e|_{\eta(e)}}\ar[dl]&e(\eta(e))\ar[dr]&\\ BG_\eta^L(n)\ar[rrr]_{\id_{BG_\eta^L(n)}}&&&BG_\eta^L(n) }$$ \end{thm} \begin{prf} Trivial from theorem \ref{univ}. \end{prf} \subsection{The classifying space for $\tau$-embeddings} Our aim in the present subsection will be to construct a virtual complex (see appendix \ref{virtc}) $V_\tau^L$ with the property $$\Stab_\tau^L(n,Q^{n+k+m})\cong\{\cpt{Q},S^mV_\tau^L\}_*$$ for any manifold $Q$ (where the notation is $\{X,Y\}_*:=\liminfty{r}[S^rX,S^rY]_*$). Recall section \ref{rszpt} where we constructed the space $X_{\underline\tau}^L$ by attaching the disk bundles $D\tilde\xi_{\underline\eta}^L$ to each other by appropriate gluing maps. Here we will follow a similar method; namely, we glue the disk bundles $D\tilde\xi_\eta^L$ together to form $V_\tau^L$, but now the gluing maps are only defined in stable sense, hence the resulting space will only be a virtual complex. 
\begin{thm}\label{embclass} There is a virtual complex $V_\tau^L$ such that for all $n,m\in\N$ where $m$ is sufficiently large (compared to $n$ and $k$), for any manifold $Q^{n+k+m}$ there is an isomorphism $$\Emb_\tau^L(n,Q^{n+k+m})\cong\{\cpt{Q},S^mV_\tau^L\}_*.$$ \end{thm} \begin{add*} For any $l\in\N$ there is an approximation $V_\tau^L(l)$ of $V_\tau^L$ for which the space $S^mV_\tau^L(l)$ exists and its $(m+l)$-homotopy type is that of $S^mV_\tau^L$ whenever $m$ is large enough, and there is a subspace $K_\tau^L(l)\subset S^mV_\tau^L(l)$ with the following properties for all $n\le l$: \begin{enumerate} \item\label{ad1} For any manifold with boundary $Q^{n+k+m}$, if there is a $(\tau,L)$-embedding $(e,V,\FF)$ of a closed manifold $N^{n-1}$ into $\partial Q$, a map $\kappa_N$ and a stable map $\kappa_Q$ (i.e. a map between the $j$-th suspensions of the spaces involved for some $j$) such that the outer square in the diagram below is a pullback diagram, then there is a manifold $M^n$ with boundary $\partial M=N$, an extension $\kappa_M$ of $\kappa_N$ and a $(\tau,L)$-embedding $(E,U,\mathscr{G})$ that extends $(e,V,\FF)$ and the upper inner square is a pullback diagram as well. $$\xymatrix{ K_\tau^L(l)\ar@{^(->}[rr] && S^mV_\tau^L(l)\\ & M\ar@{-->}[ul]_(.35){\kappa_M}\ar@{^(-->}[r]^E & Q\ar[u]|{\object@{/}}_{\kappa_Q}\\ N\ar[uu]^{\kappa_N}\ar@{^(-->}[ur]\ar@{^(->}[rr]^e && \partial Q\ar@{^(->}[u] }$$ \item\label{ad2} For any $(\tau,L)$-embedding $(E\colon M^n\hookrightarrow Q^{n+k+m},U,\mathscr{G})$ of a compact manifold with boundary, if there is a map $\kappa_{\partial M}$ and a stable map $\kappa_{\partial Q}$ such that the outer square in the diagram below is a pullback diagram, then there are extensions $\kappa_M$ and $\kappa_Q$ of $\kappa_{\partial M}$ and $\kappa_{\partial Q}$ respectively such that the upper inner square is a pullback diagram as well. 
$$\xymatrix{ K_\tau^L(l)\ar@{^(->}[rrr] &&& S^mV_\tau^L(l)\\ & M\ar@{-->}[ul]_(.35){\kappa_M}\ar@{^(->}[r]^E & Q\ar@{-->}[ur]|{\object@{/}}^(.35){\kappa_Q} &\\ \partial M\ar[uu]^{\kappa_{\partial M}}\ar@{^(->}[ur]\ar@{^(->}[rrr]^{E|_{\partial M}} &&& \partial Q\ar@{_(->}[ul]\ar[uu]|{\object@{/}}_{\kappa_{\partial Q}} }$$ \end{enumerate} \end{add*} \begin{prf} We prove the theorem and the addendum together by (transfinite) induction on the elements of $\tau$ analogously to the proof of theorem \ref{ptpullback}. The starting step is when we have $\tau=\{\Sigma^0\}$, hence $(\tau,L)$-embeddings $M^n\hookrightarrow Q^{n+k+m}$ are embeddings equipped with normal $L$-structures and $m$ pointwise independent normal vector fields. If we choose $V_{\{\Sigma^0\}}^L$ as the Thom space $T\gamma_k^L$, then the $m$-th suspension is $S^mV_{\{\Sigma^0\}}^L=S^mT\gamma_k^L=T(\gamma_k^L\oplus\varepsilon^m)$. Thus the usual Pontryagin--Thom construction yields the statement of the theorem and the addendum by putting $$V_{\{\Sigma^0\}}^L(l):=V_{\{\Sigma^0\}}^L:=T\gamma_k^L~~~~\text{and}~~~~K_{\{\Sigma^0\}}^L(l):=K_{\{\Sigma^0\}}^L:=BL(k).$$ Now suppose we know the theorem and the addendum for $\tau':=\tau\setminus\{[\eta]\}$ (where $[\eta]$ is the maximal element of $\tau$) and we want to prove it for $\tau$. Fix a natural number $n$ and consider (as in definition \ref{approx}) the $r$-dimensional approximation $BG_\eta^L(n)$ of $BG_\eta^L$ such that $(BG_\eta^L,BG_\eta^L(n))$ is an $(n+1)$-connected pair. Let $m\ge2r+2n+2$ be a number such that the $(m+r+n)$-homotopy type of $S^mV_{\tau'}^L$ is well-defined and $V_{\tau'}^L(r+n)$ is an approximation of $V_{\tau'}^L$ for which the space $S^mV_{\tau'}^L(r+n)$ exists and its $(m+r+n)$-homotopy type is that of $S^mV_{\tau'}^L$. Then $S^mV_{\tau'}^L(r+n)$ classifies the $(\tau',L)$-embeddings with source dimensions at most $r+n$.
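Spelled out (this is only a restatement of the induction hypothesis, suppressing the precise dimension bookkeeping), this means that for every manifold $Q^{n'+k+m}$ with $n'\le r+n$ there is an isomorphism
$$\Emb_{\tau'}^L(n',Q^{n'+k+m})\cong[\cpt{Q},S^mV_{\tau'}^L(r+n)]_*,$$
and the addendum holds with the pair $K_{\tau'}^L(r+n)\subset S^mV_{\tau'}^L(r+n)$, that is, the inducing maps of \ref{ad1} and \ref{ad2} exist for $(\tau',L)$-embeddings of source dimension at most $r+n$.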
Take the global normal form $$\tilde\Phi_\eta^L(n)=\Phi_\eta^L(n)\times i\colon\xi_\eta^L(n)\hookrightarrow\tilde\xi_\eta^L(n)\times\R^m,$$ (see definition \ref{approx}) with the disk bundle $D\tilde\xi_\eta^L(n)$ (of a sufficiently small radius) and its preimage $D\xi_\eta^L(n):=(\Phi_\eta^L(n))^{-1}(D\tilde\xi_\eta^L(n))$, which is diffeomorphic to the disk bundle of $\xi_\eta^L(n)$. Now we use the restriction of $\Phi_\eta^L(n)$ to the boundary sphere bundle to define the $(\tau',L)$-embedding $$\Phi_S:=\Phi_\eta^L(n)|_{S\xi_\eta^L(n)}\times i|_{S\xi_\eta^L(n)}\colon S\xi_\eta^L(n)\hookrightarrow S\tilde\xi_\eta^L(n)\times\R^m.$$ The image of $i|_{D\xi_\eta^L(n)}$ is contained in a ball $D^m\subset\R^m$ of a sufficiently large radius, hence the image of $\Phi_S$ is contained in $S\tilde\xi_\eta^L(n)\times D^m$ which is part of the boundary $S:=\partial(D\tilde\xi_\eta^L(n)\times D^m)$. Therefore $\Phi_S$ is a $(\tau',L)$-embedding of a manifold of dimension not greater than $r+n$ into $S$, hence by the induction hypothesis on \ref{ad2}, there is a map $\rho(n)$ and a stable map $\tilde\rho(n)$ which make the following diagram commutative: $$\xymatrix{ K_{\tau'}^L(r+n)\ar@{^(->}[r] & S^mV_{\tau'}^L(r+n)\\ S\xi_\eta^L(n)\ar[u]^{\rho(n)}\ar@{^(->}[r]^(.55){\Phi_S} & S\ar[u]|{\object@{/}}_{\tilde\rho(n)} }$$ Observe that by construction, the points of $S$ that are ``far away'' from the image of $\Phi_S$ are mapped to the special point of $T(\gamma_k^L\oplus\varepsilon^m)=S^mT\gamma_k^L\subset S^mV_{\tau'}^L(r+n)$. Thus the stable map $\tilde\rho(n)$ factors through the quotient $q\colon S\to S^mS\tilde\xi_\eta^L(n)$ by the subspace $D\tilde\xi_\eta^L(n)\times S^{m-1}$, that is, $\tilde\rho(n)=\hat\rho\circ q$ for a stable map $\hat\rho\colon S^mS\tilde\xi_\eta^L(n)\nrightarrow S^mV_{\tau'}^L(r+n)$. But now $m$ can be chosen large enough so that $\hat\rho$ is an actual map (not only a stable one), and hence so is $\tilde\rho(n)$.
We define \begin{gather*} S^{m}V_\tau^L(n):=S^{m}V_{\tau'}^L(r+n)\usqcup{\tilde\rho(n)}(D\tilde\xi_\eta^L(n)\times D^m)\\ \text{and}~~~~K_\tau^L(n):=K_{\tau'}^L(r+n)\usqcup{\rho(n)}D\xi_\eta^L(n). \end{gather*} Now we can see that these newly constructed spaces satisfy conditions \ref{ad1} and \ref{ad2} of the addendum completely analogously to the proofs of \ref{pt1} and \ref{pt2} of theorem \ref{ptpullback}, therefore we omit the proofs here. We obtain a virtual complex by putting $$V_\tau^L:=\liminfty{n}V_\tau^L(n).$$ Now by \ref{ad1}, \ref{ad2} and a complete analogue of the proof of theorem \ref{ptclass} we see that the claim of the theorem is also true. \end{prf} By this theorem we have obtained the following description of the classifying space for cobordisms of $(\tau,L)$-maps. \begin{crly}\label{gammav} $X_\tau^L\cong\Gamma V_\tau^L$. \end{crly} \begin{prf} The Brown representability theorem implies that $X_\tau^L$ is unique up to homotopy equivalence (see \cite{hosszu} for a detailed description of this), hence we only need to prove that $\Gamma V_\tau^L$ also classifies cobordisms of $(\tau,L)$-maps. This is indeed so, since for any manifold $P^{n+k}$ we have \begin{alignat*}2 \Cob_\tau^L(n,P)&\toverset{^*}{\cong}\Stab_\tau^L(n,P)\toverset{^{**}}{\cong}\{\cpt{P},V_\tau^L\}_*=\liminfty{r}[S^r\cpt{P},S^rV_\tau^L]_*\cong\\ &\cong\liminfty{r}[\cpt{P},\Omega^rS^rV_\tau^L]_*=[\cpt{P},\Gamma V_\tau^L]_* \end{alignat*} where $^*$ follows from theorem \ref{cob=emb} and $^{**}$ is an easy consequence of theorem \ref{embclass}. \end{prf} \begin{rmk}\label{h} Any loop space (i.e. space of the form $\Omega Y$) is an H-space, hence we have also proved that $X_\tau^L$ is an H-space. This also means that the functors $[\cdot,X_\tau^L]$ and $[\cdot,X_\tau^L]_*$ coincide. 
\end{rmk} \begin{rmk}\label{lfr} Similarly to corollary \ref{gammav}, it follows from remark \ref{tauembrmk} and theorem \ref{embclass} that the classifying space for cobordisms of $l$-framed $(\tau,L)$-maps (see definition \ref{lframed}) is $X_{\tau\oplus l}^L\cong\Gamma S^lV_\tau^L$. \end{rmk} \section{Kazarian's space}\label{kazspace} In the proof of theorem \ref{embclass} we defined the spaces $K_\tau^L(n)$, but we did not emphasise any of their important properties. We will do so now. \begin{rmk} The recursive definition $K_\tau^L(n):=K_{\tau'}^L(r+n)\usqcup{\rho(n)}D\xi_\eta^L(n)$ shows that $K_\tau^L(n-1)\subset K_\tau^L(n)$, since we can assume the inclusion $K_{\tau'}^L(r+n-1)\subset K_{\tau'}^L(r+n)$ by induction, the inclusion $D\xi_\eta^L(n-1)\subset D\xi_\eta^L(n)$ is trivially true and the restriction of the gluing map $\rho(n)$ can be assumed to be $\rho(n-1)$. \end{rmk} \begin{defi} The space $K_\tau^L:=\liminfty nK_\tau^L(n)$ will be called (following Szűcs \cite{hosszu}) the Kazarian space of $\tau$-maps with normal $L$-structures. \end{defi} For later use, we define an important tool (a spectral sequence) now. Recall that we fixed a complete order of $\tau$ that extends the natural partial order. \begin{defi}\label{kazspec} Suppose that the singularity set $\tau$ has order-type $\omega$ or less, let its elements be $\eta_0<\eta_1<\ldots$ and use the notations $K_i:=K_{\{\eta_j\mid j\le i\}}^L$, $\xi_i:=\xi_{\eta_i}^L$, $G_i:=G_{\eta_i}^L$ and $c_i:=\dim\xi_i$. We define the Kazarian spectral sequence corresponding to this (ordered) set of singularities as follows. The filtration $K_0\subset K_1\subset\ldots\subset K_\tau^L$ defines a homological spectral sequence with first page $$E^1_{i,j}=H_{i+j}(T\xi_i;G)\cong H_{i+j-c_i}(BG_i;\tilde G_{\xi_i}),$$ where $G$ is any coefficient group, the isomorphism is the homological Thom isomorphism and $\tilde G_{\xi_i}$ means $G$ twisted by the orientation of $\xi_i$. 
This spectral sequence converges to $H_*(K_\tau^L;G)$, that is, $\underset{i+j=n}\bigoplus E^\infty_{i,j}$ is associated to $H_n(K_\tau^L;G)$. \end{defi} \subsection{Generalities on the Borel construction} We will give a different way to obtain Kazarian's space, which was used by Kazarian \cite{kaz} and was in fact already considered by Thom \cite{thomthm}. More precisely, we will construct a space $\hat K_\tau^L$ and then prove that it is homotopy equivalent to $K_\tau^L$. First we recall the Borel construction which will be used in the definition of $\hat K_\tau^L$. \begin{defi} If $G$ is a topological group and $J$ is a space with a given $G$-action on it, then the Borel construction on $J$ is the space $BJ:=EG\tightunderset{G}{\times}J$. \end{defi} \begin{rmk} The Borel construction $BJ$ has the following properties: \begin{enumerate} \item If $\Sigma\subset J$ is a $G$-invariant subspace, then $B\Sigma\subset BJ$. \item If $J=\underset{i}{\bigcup}\Sigma_i$ is the decomposition of $J$ into $G$-orbits, then $BJ=\underset{i}{\bigcup}B\Sigma_i$. \item If $J$ is contractible, $\Sigma\subset J$ is a $G$-orbit and $G_x$ is the stabiliser of $x\in\Sigma\subset J$, then $BG_x=EG_x/G_x=EG\tightunderset{G}{\times}(G/G_x)=EG\tightunderset{G}{\times}\Sigma=B\Sigma$. \end{enumerate} \end{rmk} \begin{lemma}\label{borel} Let $G$ be a compact Lie group, $J$ a contractible manifold with a smooth (left) $G$-action on it, $\Sigma\subset J$ a $G$-orbit and $G_x$ the stabiliser of $x\in\Sigma\subset J$. Fix a Riemannian metric on $J$ such that the $G$-action is isometric (such a metric exists) and choose a small orthogonal slice $S_x$ of $\Sigma$ at $x$. This way $G_x$ acts orthogonally on the tangent space $T_xS_x$; denote by $\rho_x\colon G_x\to\O(T_xS_x)$ the arising representation. Then a neighbourhood of $B\Sigma$ in $BJ$ can be identified with the universal bundle $EG_x\utimes{\rho_x}T_xS_x$ over $BG_x$. 
\end{lemma} \begin{prf} Let $S\subset J$ be a $G$-invariant tubular neighbourhood of $\Sigma$ composed of the fibres $S_{gx}~(g\in G)$. Then every $g\in G$ gives rise to a diffeomorphism $S_x\to S_{gx}$ (of course, if $g_1x=g_2x$, then $S_{g_1x}=S_{g_2x}$, but the two diffeomorphisms onto this common fibre may differ). Lift the bundle $S$ over $\Sigma$ to a bundle $\tilde S$ over $G$ by the following pullback diagram: $$\xymatrix@C=.75pc{ \tilde S\ar@{-->}[rrr]\ar@{-->}[d]_{G_x} &&& G\ar[d]^{G_x} \\ S\ar[rr] && \Sigma\ar@{=}[r] & G/G_x }$$ Here the fibre $\tilde S_g$ of $\tilde S$ over $g\in G$ is identified with the fibre $S_{[g]}=S_{gx}$ of $S$ by the pullback definition (where $[g]\in G/G_x$ is the coset of $g$ and it is identified with the point $gx\in\Sigma$). This way the diffeomorphisms $S_x\to S_{gx}$ give canonical identifications $\tilde S_e\to\tilde S_g$ for all $g\in G$ (where $e$ is the neutral element of $G$), hence $\tilde S$ is a trivial bundle. The fibration $\tilde S\to S$ is $G$-equivariant, hence applying the Borel construction results in a map $$B\tilde S=EG\utimes{G}\tilde S\to EG\utimes{G}S=BS.$$ We have a decomposition $\tilde S=G\times\tilde S_e$ which is $G$-equivariant (using the trivial $G$-action on $\tilde S_e$), hence the domain is $EG\utimes{G}\tilde S=EG\utimes{G}G\times\tilde S_e=EG\times\tilde S_e$. The map is the quotient by the diagonal $G_x$-action, therefore the target (i.e. $BS$) can be identified with $EG_x\utimes{G_x}\tilde S_e$. The fibre $\tilde S_e$ is identified with $S_x$ which can be identified with $T_xS_x$ and the $G_x$-action on it is the representation $\rho_x$, thus $BS$ (which is a neighbourhood of $B\Sigma$ in $BJ$) is identified with $EG_x\utimes{\rho_x}T_xS_x$. \end{prf} \begin{crly}\label{transv} If $G,J,\Sigma$ are as above, $R\subset J$ is transverse to $\Sigma$ and we identify $J$ with the fibre of $BJ\xra{J}BG$, then $R$ is transverse to $B\Sigma\subset BJ$. 
\end{crly} \subsection{Constructing Kazarian's space} In this subsection we will construct the space $\hat K_\tau^L$ and prove its homotopy equivalence with $K_\tau^L$. First we construct a preliminary ``unstable'' version; to do this, we fix natural numbers $n$ and $r$ and suppose that (the roots of) all elements of $\tau$ have source dimensions at most $n$ and are $r$-determined (that is, for all $[\vartheta]\in\tau$, all germs that have the same $r$-jet as $\vartheta$ are $\AA$-equivalent to $\vartheta$). \begin{defi} We put $J(n):=J^r_0(\R^n,\R^{n+k})$, the $r$-jet space of germs $(\R^n,0)\to(\R^{n+k},0)$ at the origin, and $$G^L(n):=\big\{(J^r\varphi,J^r\psi)\in J^r_0(\Diff_0(\R^n))\times J^r_0(\Diff_0(\R^{n+k}))\mid\big(\begin{smallmatrix} d\varphi_0&0\\ 0&d\psi_0 \end{smallmatrix}\big)\in L(2n+k)\big\}$$ the group of $r$-jets of those diffeomorphism germs of $(\R^n,0)$ and $(\R^{n+k},0)$ for which the action of their derivatives on the virtual normal bundle preserves the structure group $L$. \end{defi} \begin{defi}\label{taujet} In $J(n)$ the $r$-jets of the germs that belong to the same singularity class form a $G^L(n)$-orbit. If $\Sigma_\vartheta\subset J(n)$ is the orbit corresponding to the singularity class $[\vartheta]$, then we put $$B\vartheta(n):=B\Sigma_\vartheta=EG^L(n)\tightunderset{G^L(n)}{\times}\Sigma_\vartheta.$$ The set of $r$-jets of all singularity classes in $\tau$ is also $G^L(n)$-invariant. This set will be denoted by $J_\tau^L(n)\subset J(n)$ and we put $$\tilde K_\tau^L(n):=BJ_\tau^L(n)=EG^L(n)\tightunderset{G^L(n)}{\times}J_\tau^L(n).$$ We will denote the fibration $\tilde K_\tau^L(n)\xra{J_\tau^L(n)}BG^L(n)$ by $p(n)$. \end{defi} \begin{rmk}\label{miafasz} $~$ \begin{itemize} \item[(1)] By the decomposition of $J_\tau^L(n)$ into $G^L(n)$-orbits we have $\tilde K_\tau^L(n)=\underset{[\vartheta]\in\tau}\bigcup B\vartheta(n)$. 
\item[(2)] Note that $BG^L(n)$ is homotopy equivalent to $BH^L(n)$ where $$H^L(n):=\big\{(A,B)\in\O(n)\times\O(n+k)\mid\big(\begin{smallmatrix} A&0\\ 0&B \end{smallmatrix}\big)\in L(2n+k)\big\},$$ so we can lift the universal bundles $\gamma_n$ and $\gamma_{n+k}$ to bundles over $BG^L(n)$. \end{itemize} \end{rmk} \begin{defi}\label{nuniv} Pull back the lift of the bundles $\gamma_n$ and $\gamma_{n+k}$ by $p(n)$ to bundles over $\tilde K_\tau^L(n)$ and denote their formal difference (which is a $k$-dimensional virtual vector bundle with structure group $L$) by $\nu_\tau^L(n)$. We call this the $n$-universal virtual normal bundle for $(\tau,L)$-maps. \end{defi} The following property was proved by Thom, see \cite{thomthm} or \cite{kaz}. We will not prove it here, only sketch the construction that proves it. \begin{thm}\label{kazthm} For any $(\tau,L)$-map $f\colon M^n\to P^{n+k}$ there is a map $\tilde\kappa_f\colon M\to\tilde K_\tau^L(n)$ such that $\tilde\kappa_f^{-1}(B\vartheta(n))=\vartheta(f)$ for all singularities $[\vartheta]\in\tau$. Moreover, this map also has the property that the pullback $\tilde\kappa_f^*\nu_\tau^L(n)$ is the virtual normal bundle $\nu_f$. \end{thm} \noindent\textbf{Sketch of the proof.\enspace\ignorespaces} Fix Riemannian metrics on $M$ and $P$, denote by $\exp_M$ and $\exp_P$ their exponential maps respectively and let $\toverset{_\circ}{T}M\subset TM$ be a neighbourhood of the zero-section such that $\exp_M|_{{\overset{_\circ}{T}}_pM}$ is a diffeomorphism onto its image for all $p\in M$. 
Then there is a fibrewise map $F$ that makes the following diagram commutative: $$\xymatrix{ \toverset{_\circ}{T}M\ar@{-->}[r]^{F}\ar[d]_{\exp_M} & TP\ar[d]^{\exp_P} \\ M\ar[r]^f & P }$$ If $J^r_0(TM,f^*TP)$ denotes the $r$-jet bundle of germs of fibrewise maps $TM\to f^*TP$ along the zero-section and $J_\tau^L(M)\subset J^r_0(TM,f^*TP)$ is the subspace corresponding to the singularities in $\tau$, then $J_\tau^L(M)\xra{J_\tau^L(n)}M$ is a fibre bundle with structure group $G^L(n)$. Hence it can be induced from the universal bundle by maps $\kappa,\tilde\kappa$ as shown in the diagram $$\xymatrix{ J_\tau^L(M)\ar[r]^{\tilde\kappa}\ar[d]_{J_\tau^L(n)} & \tilde K_\tau^L(n)\ar[d]^{J_\tau^L(n)} \\ M\ar@{-->}@/^4pc/[u]^{\tilde f}\ar[r]^(.45)\kappa & BG^L(n) }$$ Moreover, the map $F$ defines a section $\tilde f$ of that bundle (also indicated in the diagram) and we define $\tilde\kappa_f:=\tilde\kappa\circ\tilde f$. ~$\square$\medskip Until this point we did not use that when forming the singularity classes we identified each germ with its suspension (see definition \ref{sing}). Now we will use it in the following way: Observe that for any $(\tau,L)$-map $f\colon M^n\to P^{n+k}$ we can add an arbitrary vector bundle $\zeta$ over $M$ to both $TM$ and $f^*TP$ and replace the map $F$ in the above proof by $$F\oplus{\id}_\zeta\colon\toverset{_\circ}TM\oplus\zeta\to TP\oplus\zeta,$$ this way obtaining the same singularity stratification of $M$. Now we put $\zeta:=\nu_M$, the stable normal bundle of $M$ (i.e. $\zeta:=\varepsilon^m-TM$ for a sufficiently large number $m$ such that $M$ can be embedded into $\R^m$ uniquely up to isotopy) and consider the $r$-jet bundle $J^r_0(\varepsilon^m,f^*TP\oplus\nu_M)\to M$ of germs of fibrewise maps $\varepsilon^m\to f^*TP\oplus\nu_M$ along the zero-section. This jet bundle can be induced from the bundle $J^r_0(\varepsilon^m,\gamma_{m+k}^L)\to BL(m+k)$. 
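As a simple illustration of this suspension-invariance, consider (a standard example, not part of the construction above) the fold germ in the case $k=0$: the germ $\eta(x)=x^2$ of $(\R,0)\to(\R,0)$ and its suspension $$(\eta\times\id_\R)(x,t)=(x^2,t)\colon(\R^2,0)\to(\R^2,0)$$ represent the same singularity class by definition \ref{sing}; accordingly, replacing $F$ by $F\oplus\id_\zeta$ changes the germ of $f$ at each point of $M$ only by a suspension and hence leaves the singularity stratification unchanged.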
\begin{defi}\label{kazj} In each fibre $J^r_0(\R^m,\R^{m+k})$ of the bundle $J^r_0(\varepsilon^m,\gamma_{m+k}^L)\to BL(m+k)$ we can take the subspace $J_\tau^L(m)$ (see definition \ref{taujet}). The union of these in all fibres will be denoted by $\hat K_\tau^L(m)$ and the partition of $\hat K_\tau^L(m)$ corresponding to singularities in $\tau$ (by the decomposition of $J_\tau^L(m)$ to the $G^L(m)$-orbits) will be denoted by a slight abuse of notation by $\hat K_\tau^L(m)=\underset{[\vartheta]\in\tau}\bigcup B\vartheta(m)$. \end{defi} \begin{rmk} $~$ \begin{enumerate} \item Another description of the space $\hat K_\tau^L(m)$ is the following: The space $\tilde K_\tau^L(m)$ was the total space of a bundle over $BG^L(m)$, which is homotopy equivalent to $BH^L(m)$ (see remark \ref{miafasz}); the inclusion $L(m)\times L(m+k)\hookrightarrow H^L(m)$ induces a map $BL(m)\times BL(m+k)\to BH^L(m)$, that we compose with the projection $BL(m+k)\cong EL(m)\times BL(m+k)\to BL(m)\times BL(m+k)$. The pullback of $\tilde K_\tau^L(m)$ by this map is $\hat K_\tau^L(m)$, that is, we have the following pullback diagram: $$\xymatrix@C=.75pc{ \hat K_\tau^L(m)\ar[rrrr]\ar[d]_{J_\tau^L(m)} &&&& \tilde K_\tau^L(m)\ar[dd]^{J_\tau^L(m)} \\ BL(m+k)\ar@{=}[r]^(.37){\sims{1.6}} & EL(m)\times BL(m+k)\ar[d] \\ & BL(m)\times BL(m+k)\ar[rr] && BH^L(m)\ar@{=}[r]^{\sims{1.6}} & BG^L(m) }$$ \item Recall the definition of the $m$-universal virtual normal bundle (\ref{nuniv} and change $n$ to $m$) that we get by pulling back the virtual bundle $\gamma_{m+k}-\gamma_m$ over $BH^L(m)$ to $\tilde K_\tau^L(m)$. Using the above diagram we can pull this back to a virtual bundle over $\hat K_\tau^L(m)$ too; from now on this will be called $\nu_\tau^L(m)$. The reason for this redefinition is part \ref{harom} of this remark. 
\item\label{harom} The considerations above the definition of $\hat K_\tau^L(m)$ imply that the analogue of theorem \ref{kazthm} is true for the space $\hat K_\tau^L(m)$ in the following sense: Given a $(\tau,L)$-map $f\colon M^n\to P^{n+k}$ we can construct a map $\hat\kappa_f\colon M\to\hat K_\tau^L(m)$ with the properties $\hat\kappa_f^{-1}(B\vartheta(m))=\vartheta(f)$ (for all $[\vartheta]\in\tau$) and $\hat\kappa_f^*\nu_\tau^L(m)=\nu_f$. \end{enumerate} \end{rmk} Observe that the number $r$ that we fixed at the beginning of this subsection can be replaced by any larger number, hence we can even use the infinite jet space $J^\infty_0(\R^m,\R^{m+k})$ (i.e. the space of all polynomial maps $\R^m\to\R^{m+k}$ with $0$ constant term) instead of $J^r_0(\R^m,\R^{m+k})$ (i.e. the polynomial maps $\R^m\to\R^{m+k}$ with $0$ constant term and degree at most $r$). Of course for any $(\tau,L)$-map $f\colon M^n\to P^{n+k}$ we can choose the map $\hat\kappa_f$ such that its image is in a finite dimensional approximation of this space. This way the condition we assumed above, that all elements of $\tau$ are $r$-determined, can be omitted. We may also get rid of the other condition we assumed, that all elements of $\tau$ have source dimensions at most $n$, by letting $m$ tend to $\infty$. Hence we get the following. 
\begin{defi}\label{kaznu} The (second version of the) Kazarian space, the stratum of the Kazarian space corresponding to $[\vartheta]\in\tau$ and the universal virtual normal bundle for $(\tau,L)$-maps are respectively defined as $$\hat K_\tau^L:=\liminfty{m}\hat K_\tau^L(m),~~~~B\vartheta:=\liminfty{m}B\vartheta(m)~~~~\text{and}~~~~\nu_\tau^L:=\liminfty{m}\nu_\tau^L(m)$$ \end{defi} \begin{rmk}\label{kappaf} To formulate the following theorem, we need to recall that a $(\tau,L)$-map $f\colon M^n\to P^{n+k}$ corresponds to a $(\tau,L)$-embedding $M\into P\times\R^m$ for any sufficiently large number $m$ (by theorem \ref{cob=emb}), so part \ref{ad2} of the addendum of theorem \ref{embclass} assigns to $f$ a map $\kappa_M\colon M\to K_\tau^L$. This map will be denoted by $\kappa_f$ from now on. \end{rmk} \begin{thm} We have $\hat K_\tau^L\cong K_\tau^L$ and for any $(\tau,L)$-map $f\colon M^n\to P^{n+k}$ the maps $\hat\kappa_f$ and $\kappa_f$ coincide up to this homotopy equivalence. \end{thm} \begin{prf} The proof is an induction on the elements of $\tau$. We will use manifold-like properties of the spaces involved, which all work because they hold for finite dimensional approximations and for their direct limits as well. The starting step is $\tau=\{\Sigma^0\}$. Observe that in this case we have $K_{\{\Sigma^0\}}^L=BG_{\Sigma^0}^L=BL(k)$, the bundle $\xi_{\Sigma^0}^L$ is $0$-dimensional and $\tilde\xi_{\Sigma^0}^L$ is the universal bundle $\gamma_k^L$. The inclusion $i\colon BL(k)\into T\gamma_k^L$ defines a map $\hat\kappa_i\colon BL(k)\to\hat K_{\{\Sigma^0\}}^L$ (by defining $\hat\kappa_i$ for finite dimensional approximations and taking the direct limit) for which $\hat\kappa_i^*\nu_{\{\Sigma^0\}}^L=\gamma_k^L$. But $\nu_{\{\Sigma^0\}}^L$ admits $L$ as a structure group, hence it can be induced from the universal bundle $\gamma_k^L$ by a map $g\colon\hat K_{\{\Sigma^0\}}^L\to BL(k)$. 
Now $g\circ\hat\kappa_i$ induces the bundle $\gamma_k^L$ from $\gamma_k^L$, so the uniqueness of the inducing map implies that $g\circ\hat\kappa_i$ is homotopic to the identity. Observe that the homology groups of $K_{\{\Sigma^0\}}^L=BL(k)$ and $\hat K_{\{\Sigma^0\}}^L$ are finitely generated, since their finite dimensional approximations are homotopy equivalent to compact spaces. Hence $g$ and $\hat\kappa_i$ induce isomorphisms in the homologies, which implies (using the homological Whitehead theorem) that $\hat\kappa_i$ is a homotopy equivalence. This proves the theorem for $\tau=\{\Sigma^0\}$. Now suppose we know the theorem for $\tau':=\tau\setminus\{[\eta]\}$ (where $[\eta]$ is the maximal element of $\tau$) and we want to prove it for $\tau$. The induction step will be similar to the first step, only in a slightly different setting. Recall that $K_\tau^L$ was constructed recursively with the last step being the attachment of $D\xi_\eta^L$ to $K_{\tau'}^L$ by the map $\kappa_{\Phi_\eta^L|_{S\xi_\eta^L}}$ (more precisely, the construction used finite dimensional approximations and then took a direct limit). Let $h'\colon K_{\tau'}^L\to\hat K_{\tau'}^L$ be the homotopy equivalence constructed by the earlier steps in the induction and define $$h\colon K_\tau^L=K_{\tau'}^L\usqcup{\kappa_{\Phi_\eta^L|_{S\xi_\eta^L}}}D\xi_\eta^L\to\hat K_\tau^L$$ by $h|_{K_{\tau'}^L}:=h'$ and $h|_{D\xi_\eta^L}:=\hat\kappa_{\Phi_\eta^L|_{D\xi_\eta^L}}$ (again using finite dimensional approximations and direct limits). This is well-defined, as by the induction hypothesis we have $h'\circ\kappa_{\Phi_\eta^L|_{S\xi_\eta^L}}=\hat\kappa_{\Phi_\eta^L|_{S\xi_\eta^L}}$. 
By the homological Whitehead theorem it is enough to show that the map $h$ induces isomorphisms in the homologies and to do this, using the 5-lemma on the long exact sequence, it is enough to show that the map $$\hat\kappa\colon(D\xi_\eta^L,S\xi_\eta^L)\to(\hat K_\tau^L,\hat K_{\tau'}^L)$$ defined by $\hat\kappa_{\Phi_\eta^L|_{D\xi_\eta^L}}$ induces isomorphisms in the homologies. Observe that $\hat\kappa$ maps the $\eta$-stratum $BG_\eta^L\subset D\xi_\eta^L$ to $B\eta\subset\hat K_\tau^L$. \medskip\begin{sclaim} The map $\hat\kappa$ maps each fibre of $D\xi_\eta^L$ transversally to $B\eta$ in $\hat K_\tau^L$. \end{sclaim} \begin{sprf} Let $r$ be a number such that $[\eta]$ is $r$-determined and let $m$ be an arbitrarily large number. We will use corollary \ref{transv} with the substitutions $G:=G_\eta^L$, $J:=J^r_0(\R^m,\R^{m+k})$ (the space of polynomial maps $\R^m\to\R^{m+k}$ with $0$ constant term and degree at most $r$) and $\Sigma:=[\eta]\cap J$ (those polynomial maps in $J$ whose germs at $0$ are equivalent to $\eta$). We may of course assume that $\eta\colon\R^c\to\R^{c+k}$ is itself a polynomial map and that it is the root of its type. Recall that $\Phi_\eta^L$ restricted to each fibre $\R^c$ of $\tightoverset{\,_\circ~~~}{D\xi_\eta^L}$ is the map (germ) $\eta\colon\R^c\to\R^{c+k}$, so $\hat\kappa$ restricted to this $\R^c$ maps each point $a\in\R^c$ to the polynomial map $p_a\in J$ (in some fibre of $\hat K_\tau^L$) for which $$p_a(x,y)=(\eta(x+a)-\eta(a),y)\in\R^{c+k}\times\R^{m-c}~~~~(x,y)\in\R^c\times\R^{m-c}.$$ Now corollary \ref{transv} implies that it is enough to show that the image of $\R^c$ in $J$ is transverse to $[\eta]\cap J$. If we put $\tilde\eta:=\eta\times\id_{\R^{m-c}}$, then for any $a\in\R^c$ the polynomial $p_a$ can be identified with $J^r\tilde\eta(a,b)\in J^r(\R^c\times\R^{m-c},\R^{m+k})$ for any $b\in\R^{m-c}$. 
Hence the map $a\mapsto p_a$ can be identified with any section $s\colon\R^c\to\R^c\times\R^{m-c}$ composed with $J^r\tilde\eta$ and the projection $\pr_J$ to the fibre of the trivial bundle $J^r(\R^m,\R^{m+k})$. By a theorem of Mather \cite{math5}, the section $J^r\tilde\eta$ is transverse to the fibrewise $\AA$-equivalence classes in $J^r(\R^m,\R^{m+k})$ (since $\tilde\eta$ is stable), which implies that $\pr_J\circ J^r\tilde\eta\circ s$ is transverse to $[\eta]\cap J$ and this is what we wanted to prove. \end{sprf} We may assume that $\hat\kappa$ maps each fibre of $\tightoverset{\,_\circ~~~}{D\xi_\eta^L}\approx\xi_\eta^L$ into a fibre of the normal bundle $\nu_\eta$ of the stratum $B\eta$ in $\hat K_\tau^L$. Now putting $f:=\hat\kappa|_{BG_\eta^L}\colon BG_\eta^L\to B\eta$ we have $f^*\nu_\eta=\xi_\eta^L$. The bundle $\nu_\eta$ admits a $G_\eta^L$-structure (a consequence of lemma \ref{borel}), hence it can be induced from the universal such bundle, which is $\xi_\eta^L$. In other words there is a map $g\colon B\eta\to BG_\eta^L$ with the property $g^*\xi_\eta^L=\nu_\eta$. Hence $g\circ f$ induces the bundle $\xi_\eta^L$ from $\xi_\eta^L$, so by the uniqueness of the inducing map, $g\circ f$ is homotopic to the identity. The homology groups of $BG_\eta^L$ and $B\eta$ are finitely generated in each dimension, hence $f$ (as well as $g$) induces isomorphisms in the homologies. This implies that the map $\hat f\colon T\xi_\eta^L\to T\nu_\eta$ defined by $f$ between the Thom spaces also induces isomorphisms in the homologies. Hence we have $$H_*(D\xi_\eta^L,S\xi_\eta^L)\cong H_*(T\xi_\eta^L)\xra{\hat f_*}H_*(T\nu_\eta)\cong H_*(\hat K_\tau^L,\hat K_{\tau'}^L)$$ and the composition of $\hat f_*$ with these isomorphisms is $\hat\kappa_*$, thus $\hat\kappa$ also induces isomorphisms in the homologies. \end{prf} From now on we will identify the spaces $\hat K_\tau^L$ and $K_\tau^L$. 
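For orientation, note that in the simplest case the identification just proved connects two descriptions that are both explicit: by the first step of the induction, $$K_{\{\Sigma^0\}}^L\cong\hat K_{\{\Sigma^0\}}^L\cong BL(k)~~~~\text{and}~~~~\nu_{\{\Sigma^0\}}^L\cong\gamma_k^L,$$ that is, the Kazarian space of $\{\Sigma^0\}$-maps (i.e. immersions) is simply the classifying space of their normal bundles.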
\subsection{Kazarian's conjecture} In this subsection we show a theorem of Szűcs which was formulated as a conjecture by Kazarian and is as follows. \begin{thm}\label{kazc} If $L$ is positive (i.e. all of its elements have positive determinants), then $X_\tau^L\congq\Gamma S^k(K_\tau^L)^+$. \end{thm} In other words, the space $X_\tau^L$ gives the stable rational homotopy type of the $k$-th suspension of $(K_\tau^L)^+:=K_\tau^L\sqcup*$. The proof is the composition of several individual statements and it will actually show more. Namely, if we modify the right-hand side properly, then we get a true homotopy equivalence (not only a rational one) relating $X_\tau^L$ to $K_\tau^L$, which remains true even if $L$ is not positive. Observe that $S^k(K_\tau^L)^+$ is the Thom space of the trivial $k$-dimensional vector bundle over $K_\tau^L$. The proper modification we mentioned above is to replace this trivial bundle with the universal virtual normal bundle $\nu_\tau^L$ (see definition \ref{kaznu}), which makes sense by corollary \ref{gammatnu}. \begin{thm}\label{kula} $X_\tau^L\cong\Gamma T\nu_\tau^L$. \end{thm} \begin{prf} By corollary \ref{gammav} it is enough to prove that the stable homotopy type of $T\nu_\tau^L$ is the same as that of $V_\tau^L$. Note that it is actually enough to prove this homotopy equivalence between sufficiently large suspensions of finite dimensional approximations of the (virtual) spaces involved. Indeed, if we get sequences of finite dimensional approximations and numbers $m_i~(i\in\N)$ converging to $\infty$ such that the $m_i$-th suspensions of the $i$-th approximations exist and are homotopy equivalent, then we also have $\Gamma V_\tau^L\cong\Gamma T\nu_\tau^L$. 
Now fix natural numbers $n,m$ such that the $(m+n)$-homotopy type of $S^mV_\tau^L(n)$ exists and is that of $S^mV_\tau^L$ (see theorem \ref{embclass}), the virtual bundle $\nu:=\nu_\tau^L|_{K_\tau^L(n)}$ can be represented by $\alpha-\varepsilon^m$ for a vector bundle $\alpha$ over $K_\tau^L(n)$, and so $S^mT\nu$ exists (see corollary \ref{gammatnu}). We want to prove that $S^mV_\tau^L(n)$ is homotopy equivalent to $S^mT\nu$; by the cohomological Whitehead theorem it is enough to show a map between them which induces isomorphisms in the cohomologies. Recall that $K_\tau^L$ is the union of the spaces $D\xi_\vartheta^L~([\vartheta]\in\tau)$ and the universal virtual normal bundle is such that $$\nu_\tau^L|_{D\xi_\vartheta^L}=\pi^*\tilde\xi_\vartheta^L-\pi^*\xi_\vartheta^L$$ where $\pi$ is the projection of the disk bundle $D\xi_\vartheta^L\to BG_\vartheta^L$. Recall also that the inclusion $K_\tau^L(n)\subset S^mV_\tau^L(n)$ is by construction such that for all $[\vartheta]\in\tau$ and any $l$ for which $D\xi_\vartheta^L(l)$ took part in the construction, $D\xi_\vartheta^L(l)\subset K_\tau^L(n)$ is embedded into $D\tilde\xi_\vartheta^L(l)\times D^m$ by the restriction of the global normal form $\tilde\Phi_\vartheta^L(l)$ to $D\xi_\vartheta^L(l)$ (see the proof of theorem \ref{embclass}). Hence the normal bundle of this embedding is the restriction $(\nu\oplus\varepsilon^m)|_{D\xi_\vartheta^L(l)}$ (which is a non-virtual bundle). Now if $[\eta]$ is the maximal element of $\tau$ and $\tau':=\tau\setminus\{[\eta]\}$, then the inclusion of $K_\tau^L(n)$ into $S^mV_\tau^L(n)$ can be written as $$(K_\tau^L(n),K_{\tau'}^L(n+r))\hookrightarrow(S^mV_\tau^L(n),S^mV_{\tau'}^L(n+r))$$ where $r\in\N$ and $K_{\tau'}^L(n+r)$ and $S^mV_{\tau'}^L(n+r)$ are finite dimensional approximations of $K_{\tau'}^L$ and $S^mV_{\tau'}^L$ respectively. 
Hence this inclusion factors to an embedding between the quotient spaces $$T\xi_\eta^L(n)\hookrightarrow S^mT\tilde\xi_\eta^L(n),$$ which is the identity on the base space $BG_\eta^L(n)$. Let $U$ be a tubular neighbourhood of $K_\tau^L(n)\subset S^mV_\tau^L(n)$ and factor $S^mV_\tau^L(n)$ by the complement of $U$. If we take the quotient of this factorisation by $S^mV_{\tau'}^L(n+r)$ on the left-hand side and by the image of $U|_{K_{\tau'}^L(n+r)}$ on the right-hand side, then we get the diagram $$\xymatrix@C=.75pc{ S^mV_\tau^L(n)\ar[rrrrr]^{/(S^mV_\tau^L(n)\setminus U)}\ar[d]_{/S^mV_{\tau'}^L(n+r)} &&&&& S^mT\nu\ar[d]^{/S^mT(\nu|_{K_{\tau'}^L(n+r)})}\\ S^mT\tilde\xi_\eta^L(n)\ar[rrrrr]^(.36){/(S^mT\tilde\xi_\eta^L(n)\setminus U)} &&&&& S^mT(\nu|_{D\xi_\eta^L(n)})/S^mT(\nu|_{S\xi_\eta^L(n)}) }$$ The lower horizontal arrow in this diagram maps the space $T(\tilde\xi_\eta^L(n)\oplus\varepsilon^m)$ to the space $S^mT(\nu|_{BG_\eta^L(n)}\oplus\xi_\eta^L(n))=T(\nu|_{BG_\eta^L(n)}\oplus\xi_\eta^L(n)\oplus\varepsilon^m)$ and it is again the identity on the base space $BG_\eta^L(n)$, hence it induces isomorphisms in the cohomologies by the Thom isomorphisms. Now an induction on the elements of $\tau$ shows that the upper horizontal arrow also induces isomorphisms in the cohomologies and this is what we wanted to prove. \end{prf} In the following we will use the infinite symmetric product operation, which will be denoted by $\SP$. \begin{lemma}\label{doldth} If $A$ and $B$ are connected CW-complexes such that $H_*(A)\cong H_*(B)$, then $\SP A$ is homotopy equivalent to $\SP B$. \end{lemma} \begin{prf} We use a few known properties of the infinite symmetric product, which can be read in \cite[4.K]{hatcher}. The space $\SP A$ is homotopy equivalent to a product of Eilenberg--MacLane spaces and by the Dold--Thom theorem we have $\pi_*(\SP A)\cong\tilde H_*(A)$. 
Of course the same is true for $B$, hence the equivalence $$\SP A\cong\prod_{i=1}^\infty K(\tilde H_i(A),i)=\prod_{i=1}^\infty K(\tilde H_i(B),i)\cong\SP B$$ holds. \end{prf} \begin{crly} For any virtual complex $V$ the space $\SP V$ is well-defined. \end{crly} \begin{prf} For each natural number $i$ fix a number $n(i)$ and a subcomplex $V_i$ of $V$ such that for all $n\ge n(i)$ the space $S^nV_i$ exists and its $(n+i)$-homotopy type is that of $S^nV$. Then the same is true for the space $\SP S^nV_i$ and the virtual complex $\SP S^nV$, because the $\SP$ functor turns cofibrations $B\hookrightarrow A\to A/B$ into quasi-fibrations $\SP B\to\SP A\to\SP(A/B)$ for which the homotopy exact sequence holds (see \cite[4.K]{hatcher}), hence if the pair $(A,B)$ was $(n+i)$-connected, then so is the pair $(\SP A,\SP B)$. By the previous lemma we have $\Omega\SP SA\cong\SP A$ for all connected complexes $A$, thus putting $$\SP V:=\liminfty i\Omega^{n(i)}\SP S^{n(i)}V_i$$ works for all virtual complexes and extends the usual definition of $\SP$. \end{prf} \begin{lemma}\label{hur} For any virtual complex $V$ there is a map $h\colon\Gamma V\to\SP V$ that induces a rational homotopy equivalence. \end{lemma} \begin{prf} First suppose that $V$ is an existing (non-virtual) space. Then we have $$\Gamma V=\left(\coprod_{i=1}^\infty WS_i\utimes{S_i}(V\times\ldots\times V)\right)\bigg/\sim$$ where $WS_i$ is a contractible space with a free $S_i$-action on it and ``$/\sim~$'' means gluing by some natural equivalences; for the precise definition see \cite{barec}. Now the projections $$WS_i\utimes{S_i}(V\times\ldots\times V)\to(V\times\ldots\times V)/S_i$$ are consistent with the gluings of these spaces for different $i$-s forming $\Gamma V$ on the left-hand side and $\SP V$ on the right-hand side, hence the union of these projections forms a map $\Gamma V\to\SP V$ that we denote by $h$. 
In the homotopy groups this map induces the stable Hurewicz homomorphism $$h_\#\colon\pi_*(\Gamma V)=\pi_*^s(V)\to H_*(V)=\pi_*(\SP V),$$ hence by Serre's theorem (see for example \cite[theorem 18.3]{charclass}) it is a rational isomorphism. Now the case when $V$ is just a virtual complex can again be handled by approximating $V$ with subcomplexes $V_i$ and taking numbers $n(i)$ such that for all $i$ the space $S^{n(i)}V_i$ exists and has the same $(n(i)+i)$-type as $S^{n(i)}V$. Then we replace $\Gamma V$ with $\Omega^{n(i)}\Gamma S^{n(i)}V_i$ and $\SP V$ with $\Omega^{n(i)}\SP S^{n(i)}V_i$ and repeat the same proof. \end{prf} Now we only have to combine the above observations to prove the initial statement (i.e. Kazarian's conjecture). \medskip\par\noindent\textbf{Proof of theorem \ref{kazc}.\enspace\ignorespaces} We have a sequence of equivalences $$X_\tau^L\toverset{^*}\cong\Gamma T\nu_\tau^L\toverset{^{**}}\congq\SP T\nu_\tau^L\toverset{^{***}}\cong\SP T\varepsilon^k=\SP S^k(K_\tau^L)^+\toverset{^{**}}\congq\Gamma S^k(K_\tau^L)^+$$ where $^*$ is theorem \ref{kula}, $^{**}$ follows from lemma \ref{hur} and $^{***}$ is a consequence of lemma \ref{doldth}, since using again sufficiently large suspensions of finite dimensional approximations of Kazarian's space we obtain $H_*(T\nu_\tau^L)\cong H_{*-k}(K_\tau^L)\cong H_*(T\varepsilon^k)$ by the homological Thom isomorphism (note that this is the only place where we use the positivity of $L$, which is the structure group of $\nu_\tau^L$). ~$\square$\par\medskip \subsection{Corollaries} Here we gather a few important consequences of theorems \ref{kazc} and \ref{kula} that will turn out to be very useful in the study of cobordism groups. We will consider $(\tau,L)$-maps to an arbitrary manifold $P^{n+k}$, which is somewhat involved, so it is useful to see separately the simpler case of maps to a Euclidean space (i.e. when $P=\R^{n+k}$). 
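Before the general statements, it may help to record what they yield in the simplest case $\tau=\{\Sigma^0\}$ (using the computations $K_{\{\Sigma^0\}}^L\cong BL(k)$ and $\nu_{\{\Sigma^0\}}^L\cong\gamma_k^L$ from earlier in this section): for $P=\R^{n+k}$ the proposition below specialises to $$\Cob_{\{\Sigma^0\}}^L(n,k)\otimes\Q\cong H_n(BL(k);\tilde\Q_{\gamma_k^L}),$$ a description of the rational cobordism group of immersions with normal $L$-structure.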
\begin{prop}\label{cobhomo} $\Cob_\tau^L(n,P^{n+k})\otimes\Q\cong\overset\infty{\underset{i=1}{\bigoplus}}H^i(\cpt P;H_{i-k}(K_\tau^L;\tilde\Q_{\nu_\tau^L}))$. \end{prop} \begin{case} $\Cob_\tau^L(n,k)\otimes\Q\cong H_n(K_\tau^L;\tilde\Q_{\nu_\tau^L})$. \end{case} \begin{prf} We have $\Cob_\tau^L(n,P^{n+k})=[\cpt P,X_\tau^L]=[\cpt P,\Gamma V_\tau^L]=[\cpt P,\Gamma T\nu_\tau^L]$, for which we can use the homomorphism induced by the map $h$ of lemma \ref{hur}. This homomorphism maps to $[\cpt P,\SP T\nu_\tau^L]$, which is the same as $$\left[\cpt P,\prod_{i=1}^\infty K(H_i(T\nu_\tau^L),i)\right]=\left[\cpt P,\prod_{i=1}^\infty K(H_{i-k}(K_\tau^L;\tilde\Z_{\nu_\tau^L}),i)\right]=\bigoplus_{i=1}^\infty H^i(\cpt P;H_{i-k}(K_\tau^L;\tilde\Z_{\nu_\tau^L})).$$ If we compose the map induced by $h$ with these identifications and take its tensor product with $\id_\Q$, then we get an isomorphism because $h$ induces a rational isomorphism. \end{prf} The above proposition can be extended to give information about the torsion parts of the cobordism groups $\Cob_\tau^L(P)$ as well: we will see that the map $h$ induces not only a rational isomorphism but also an isomorphism of the $p$-components for sufficiently large primes $p$. To prove this, we need the following theorem of Arlettaz \cite{arl}, which will not be proved here. \begin{thm}\label{arlthm} For any $(l-1)$-connected spectrum $V$, the stable Hurewicz homomorphism $h_m\colon\pi^s_m(V)\to H_m(V)$ has the properties \begin{itemize} \item[\rm{(i)}] $\rho_1\ldots\rho_{m-l}\cdot\ker h_m=0$ for all $m\ge l+1$, \item[\rm{(ii)}] $\rho_1\ldots\rho_{m-l-1}\cdot\coker h_m=0$ for all $m\ge l+2$, \end{itemize} where $\rho_i$ is the exponent of the $i$-th stable homotopy group of spheres $\pi^s(i)$. \end{thm} Now we can get to the torsion parts of the cobordism groups $\Cob_\tau^L(P)$. 
\begin{prop}\label{cobtors} For any prime $p>\frac n2+1$ (where $n\ge2$) there is an isomorphism of $p$-primary parts $$\big[\Cob_\tau^L(n,P^{n+k})\big]_p\cong\displaystyle\bigoplus_{i=1}^\infty\big[H^i(\cpt P;H_{i-k}(K_\tau^L;\tilde\Z_{\nu_\tau^L}))\big]_p.$$ \end{prop} \begin{case} For any prime $p>\frac n2+1$ (where $n\ge2$) there is an isomorphism of $p$-primary parts $$\big[\Cob_\tau^L(n,k)\big]_p\cong\big[H_n(K_\tau^L;\tilde\Z_{\nu_\tau^L})\big]_p.$$ \end{case} \begin{prf} Recall that the map $h\colon\Gamma T\nu_\tau^L\to\SP T\nu_\tau^L$ of lemma \ref{hur} was constructed so that on the homotopy groups it induces the stable Hurewicz homomorphism for the virtual complex $T\nu_\tau^L$. Now any spectrum defined by $T\nu_\tau^L$ (according to remark \ref{spec}) is $(k-1)$-connected, so we can apply the theorem of Arlettaz with the substitutions $l:=k$ and $m:=n+k$. Serre proved in \cite{serre} that the number $\rho_i$ is not divisible by the prime $p$ if $p>\frac i2+1$, hence we obtain that $h$ induces an isomorphism between the $p$-components of the homotopy groups for $p>\frac n2+1$ in dimensions up to $n+k$. Now (as in the proof of proposition \ref{cobhomo}) we use the isomorphisms $$\Cob_\tau^L(n,P^{n+k})\displaystyle\cong[\cpt P,\Gamma T\nu_\tau^L]~~~~\text{and}~~~~[\cpt P,\SP T\nu_\tau^L]\cong\bigoplus_{i=1}^\infty H^i(\cpt P;H_{i-k}(K_\tau^L;\tilde\Z_{\nu_\tau^L}))$$ and we get the statement of the present proposition. \end{prf} In the remaining part of this section we assume that $L$ is positive (i.e. the bundle $\nu_\tau^L$ is orientable). We will give a (rational) analogue for $(\tau,L)$-maps of the Pontryagin--Thom theorem, which states that two manifolds are cobordant iff their characteristic numbers coincide. This will also be analogous to the Conner--Floyd theorem in \cite{conflo} about the characteristic numbers of rational bordism classes. The first task is to define the characteristic numbers of a $(\tau,L)$-map.
\begin{defi} For a $(\tau,L)$-map $f\colon M^n\to P^{n+k}$ consider the map $\kappa_f\colon M\to K_\tau^L$ (see remark \ref{kappaf}). For any cohomology class $x\in H^*(K_\tau^L;\Q)$, \begin{enumerate} \item the class $\kappa_f^*(x)\in H^*(M;\Q)$ is called the $x$-characteristic class of $f$, \item for any $y\in H^{n-*}(P;\Q)$, the number $\la\kappa_f^*(x)\smallsmile f^*(y),[M]\ra\in\Q$ is called the $(x,y)$-characteristic number of $f$. \end{enumerate} \end{defi} \begin{thm}\label{charcob} There is an isomorphism $$\Cob_\tau^L(n,P)\otimes\Q\cong\Hom\left(\displaystyle\bigoplus_{i=0}^nH^i(K_\tau^L;\Q)\otimes H^{n-i}(P;\Q),\Q\right)$$ given by assigning to a $(\tau,L)$-map $f\colon M^n\to P^{n+k}$ the homomorphism $x\otimes y\mapsto\la\kappa_f^*(x)\smallsmile f^*(y),[M]\ra$. \end{thm} \begin{case} There is an isomorphism $$\Cob_\tau^L(n,k)\otimes\Q\cong\Hom(H^n(K_\tau^L;\Q),\Q)$$ given by assigning to a $(\tau,L)$-map $f\colon M^n\to\R^{n+k}$ the homomorphism $x\mapsto\la\kappa_f^*(x),[M]\ra$. \end{case} In other words this theorem states that the characteristic numbers are well-defined invariants of rational cobordism classes of $(\tau,L)$-maps and they completely determine the class they stand for. Moreover, given any possible collection of rational numbers assigned to the corresponding cohomology classes, there is a unique rational $(\tau,L)$-cobordism class with those characteristic numbers. \medskip\begin{prf} Let $f\colon M\to P$ be a $(\tau,L)$-map. 
Recall the Pontryagin--Thom type construction in theorem \ref{ptpullback} or in the addendum of theorem \ref{embclass} (also using theorem \ref{cob=emb}), which assigns to $f$ a map $$\kappa'_f:=\kappa_P\colon\cpt P\to X_\tau^L.$$ Now $\kappa'_f$ maps to a finite dimensional approximation $\Omega^mS^m T\nu_\tau^L$ of $X_\tau^L=\Gamma T\nu_\tau^L$ (here $\nu_\tau^L$ is restricted to a finite dimensional approximation of Kazarian's space, but to simplify notation, we omit this) such that the bundle $\nu_\tau^L\oplus\varepsilon^m$ is non-virtual. Recall that the maps $A\to\Omega^mB$ bijectively correspond to maps $S^mA\to B$ for any (pointed) spaces $A,B$; we call the maps that correspond to each other adjoint. Hence this $\kappa'_f$ can be identified with its adjoint map $S^m\cpt P\to S^mT\nu_\tau^L=T(\nu_\tau^L\oplus\varepsilon^m)$, which we will also denote by $\kappa'_f$. We let $e:=f\times i\colon M\hookrightarrow P\times\R^m\subset S^m\cpt P$ be the $(\tau,L)$-embedding corresponding to $f$ (with the vertical vector fields and foliation; we may assume that $m$ is large enough so that this exists uniquely up to isotopy). Observe that $\kappa'_f$ can be written (by its construction) as a composition $\kappa'_f=t\circ q$, where $q\colon S^m\cpt P\to T\nu_e$ is the quotient by the complement of a tubular neighborhood of $e(M)\subset P\times\R^m$ (hence a degree-$1$ map) and $t\colon T\nu_e\to T(\nu_\tau^L\oplus\varepsilon^m)$ is a fibrewise isomorphism.
Moreover, the restriction of $t$ to the zero-section is the map $\kappa_f\colon M\to K_\tau^L$, hence we have a commutative diagram $$\xymatrix{ S^m\cpt P\ar[r]^(.55){q}\ar@/^1.3pc/[rr]^{\kappa'_f} & T\nu_e\ar[r]^(.35){t} & T(\nu_\tau^L\oplus\varepsilon^m)\\ & M\ar@{_(->}[ul]^{f\times i=e}\ar@{_(->}[u]\ar[r]^{\kappa_f} & K_\tau^L\ar@{^(->}[u] }$$ The image of the cobordism class $[f]\in\Cob_\tau^L(n,P)$ in $$\bigoplus_{i=1}^{n+k} H^i(\cpt P;H_{i-k}(K_\tau^L;\Q))\congq\bigoplus_{i=1}^{n+k}\Hom(H_i(\cpt P;\Q),H_{i-k}(K_\tau^L;\Q))$$ (according to proposition \ref{cobhomo}) is the homomorphism that we get by the composition $$H_i(\cpt P;\Q)\cong H_{i+m}(S^m\cpt P;\Q)\xra{\kappa'_{f*}}H_{i+m}(T(\nu_\tau^L\oplus\varepsilon^m);\Q)\xra{\varphi^{-1}}H_{i-k}(K_\tau^L;\Q)$$ for all $i$, where $\varphi$ denotes the Thom isomorphism. The $\Hom$-dual of this map is a map between the compactly supported cohomology groups given by the formula \pagebreak \begin{alignat*}2 H_c^{i-k}(K_\tau^L;\Q)\to&~H_c^i(P;\Q)\\ x\mapsto&~S^{-m}q^*t^*\varphi(x)=S^{-m}q^*\varphi\kappa_f^*(x)=S^{-m}(f\times i)_!\kappa_f^*(x)=\\ &=S^{-m}(S^mf)_!\kappa_f^*(x)=f_!\kappa_f^*(x). \end{alignat*} Now the Poincaré duality implies that the cohomology classes $f_!\kappa_f^*(x)$ (for any $x\in H_c^{i-k}(K_\tau^L;\Q)$) are completely determined by the numbers $\la f_!\kappa_f^*(x)\smallsmile y,[P]\ra~(y\in H_c^{n-i+k}(P;\Q))$. We can rewrite these numbers in the following way: \begin{alignat*}2 \la f_!\kappa_f^*(x)\smallsmile y,[P]\ra&=\la f_!(\kappa_f^*(x)\smallsmile f^*(y)),[P]\ra=\la S^m(f_!(\kappa_f^*(x)\smallsmile f^*(y))),[S^m\cpt P]\ra=\\ &=\la q^*\varphi(\kappa_f^*(x)\smallsmile f^*(y)),[S^m\cpt P]\ra=\la \varphi(\kappa_f^*(x)\smallsmile f^*(y)),[T\nu_e]\ra=\\ &=\la \kappa_f^*(x)\smallsmile f^*(y),[M]\ra. 
\end{alignat*} We can set this to be an arbitrary rational number depending on $x$ and $y$, and the compact support in the cohomologies does not play a role here, since the image of $M$ is compact in both $P$ and $K_\tau^L$. This proves the statement of the theorem. \end{prf} \begin{ex} If $\tau$ is the set of all possible singularities of $k$-codimensional germs, then we have $K_\tau^L\cong BL$, because the fibre of the bundle $K_\tau^L\to BL$ (see definition \ref{kazj}) is the space of all polynomial maps, which is contractible. Now setting $L(k):=\SO(k)$, we get $\Cob_\tau^{\SO}(n,k)=\Omega_n(\R^{n+k})=\Omega_n$ and the above propositions yield $$\Omega_n\otimes\Q\cong H_n(B\SO;\Q)\cong\Hom(H^n(B\SO;\Q),\Q),$$ which is a well-known theorem of Thom. \end{ex} \chapter{The key fibration} In the previous chapter we saw that the investigation of cobordisms of singular maps is equivalent to the homotopical investigation of classifying spaces. One of the key tools of this investigation is a fibre bundle $$X_\tau^L\xra{X_{\tau'}^L}\Gamma T\tilde\xi_\eta^L,$$ where $[\eta]$ is a top singularity in $\tau$ and $\tau'$ denotes $\tau\setminus\{[\eta]\}$ as before (as usual, $\tau$ denotes a set of singularities and $L$ is a stable linear group). By ``top singularity'' we mean a singularity that is maximal in $\tau$ with respect to the natural partial order. We call this the key fibration (or key bundle) to indicate its importance. The existence of such a fibre bundle was conjectured by Szabó and proved by Szűcs \cite{hosszu} as a corollary of the description of classifying spaces with virtual complexes (corollary \ref{gammav}); later a constructive proof was given by Terpai \cite{key} in a slightly more general setting. In this chapter we recall these proofs, then present some important properties and corollaries of this fibration from \cite{hominv}, \cite{hosszu} and \cite{ctrl}.
\section{Existence of the key fibration} \subsection{Existence derived from virtual complexes} It was proved in \cite{barec} that the $\Gamma$ functor turns cofibrations into fibrations. We need an analogue of this for virtual complexes. \begin{lemma}\label{cofibfib} Given virtual complexes $V,V'$ and a cofibration $V'\into V\to V/V'$, we have a fibration $\Gamma V'\into\Gamma V\to\Gamma(V/V')$. \end{lemma} \begin{prf} For each natural number $i$ fix a number $n(i)$ and subcomplexes $V_i$ and $V'_i$ of $V$ and $V'$ respectively such that for all $n\ge n(i)$ the spaces $S^nV_i$ and $S^nV'_i$ exist and their $(n+i)$-homotopy types are those of $S^nV$ and $S^nV'$ respectively. Now applying the $\Gamma$ functor, we get a fibration $$\Gamma S^{n(i)}V'_i\hookrightarrow\Gamma S^{n(i)}V_i\to\Gamma S^{n(i)}(V_i/V'_i).$$ Recall that whenever we have a fibration $E\xra{F}B$, we can (homotopically) continue it to the left infinitely, obtaining a sequence of fibrations (called the resolvent of $E\xra{F}B$) $$\ldots\to\Omega^2F\to\Omega^2E\to\Omega^2B\to\Omega F\to\Omega E\to\Omega B\to F\to E\to B.$$ Now taking the resolvent of the fibration above, we can consider the fibration $$\Omega^{n(i)}\Gamma S^{n(i)}V'_i\hookrightarrow\Omega^{n(i)}\Gamma S^{n(i)}V_i\to\Omega^{n(i)}\Gamma S^{n(i)}(V_i/V'_i).$$ Of course, $\Omega^{n(i)}\Gamma S^{n(i)}=\Gamma$, hence this is a fibration $\Gamma V_i\xra{\Gamma V'_i}\Gamma(V_i/V'_i)$. Thus, letting $i\to\infty$, we obtain the fibration as claimed.
\end{prf} \begin{thm}\label{keythm} If $L$ is a stable linear group, $\tau$ is a set of singularities (without fixed multiplicities), $[\eta]$ is a top singularity in $\tau$ and we put $\tau':=\tau\setminus\{[\eta]\}$, then there is a fibration $$X_\tau^L\xra{X_{\tau'}^L}\Gamma T\tilde\xi_\eta^L.$$ \end{thm} \begin{prf} Observe that by the construction of the virtual complex $V_\tau^L$ we have a cofibration of virtual complexes $$V_{\tau'}^L\into V_\tau^L\to V_\tau^L/V_{\tau'}^L=T\tilde\xi_\eta^L.$$ Now we get the statement of the theorem by the previous lemma. \end{prf} \begin{rmk}\label{keyrmk} We can define the cobordism group of immersions with normal bundles induced from a given bundle in a standard way (analogously to definition \ref{cobtau}). The classifying space of cobordisms of immersions with normal bundles induced from the bundle $\zeta$ is the space $\Gamma T\zeta$ (see \cite{wells}), that is, for any manifold $P$, the cobordism group $\Imm^\zeta(P)$ of immersions to $P$ with normal bundles induced from $\zeta$ is isomorphic to $[P,\Gamma T\zeta]$. Now if we apply the functor $[P,\cdot]$ to the key fibration $X_\tau^L\to\Gamma T\tilde\xi_\eta^L$, then we get a map $\Cob_\tau^L(P)\to\Imm^{\tilde\xi_\eta^L}(P)$. It is not hard to see from the proofs above (and it also follows from the proof in the next subsection) that this map is the same as the one we get by assigning to the cobordism class of a $(\tau,L)$-map $f\colon M\to P$ the cobordism class of the immersion $f|_{\eta(f)}\colon\eta(f)\imto P$ (clearly $f|_{\eta(f)}$ is an immersion with normal bundle induced from $\tilde\xi_\eta^L$). This observation has a very nice consequence for the problem of eliminating certain singularities of a given map: namely, the only obstruction for a $(\tau,L)$-map $f$ to be $(\tau,L)$-cobordant to a $(\tau',L)$-map is the cobordism class of $f|_{\eta(f)}$ in $\Imm^{\tilde\xi_\eta^L}(P)$.
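In the language of exact sequences: applying $[P,\cdot]$ to the key fibration gives a sequence $$\Cob_{\tau'}^L(P)\to\Cob_\tau^L(P)\to\Imm^{\tilde\xi_\eta^L}(P)$$ (the first map induced by the inclusion of the fibre $X_{\tau'}^L\subset X_\tau^L$), which is exact at the middle term: a class in $\Cob_\tau^L(P)$ comes from $\Cob_{\tau'}^L(P)$ precisely if its image in $\Imm^{\tilde\xi_\eta^L}(P)$ vanishes.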
\end{rmk} \subsection{Terpai's construction} In this subsection we will be in a slightly more general setting. Namely, let $L,\tau,\eta$ and $\tau'$ be as before, let $\underline\tau'$ denote the set of all possible multisingularities composed of the elements of $\tau'$ and put $$\underline\tau_r:=\{\underline\vartheta+i[\eta]\mid\underline\vartheta\in\underline\tau',i=0,\ldots,r\}$$ for all $r\in\N$. We will use the notation $X_r:=X_{\underline\tau_r}^L$ and denote by $\Gamma_r$ the classifying space for cobordisms of immersions with normal bundles induced from $\tilde\xi_\eta^L$ and with at most $r$-tuple points (see \cite{madar}, \cite{limm} or \cite{s2}). \begin{thm}\label{terpthm} For all $r\in\N$ there is a fibration $$X_r\xra{X_{\tau'}^L}\Gamma_r.$$ \end{thm} This implies the statement of theorem \ref{keythm}, since letting $r$ tend to $\infty$ we get the same fibration with $X_\infty$ and $\Gamma_\infty$ in the place of $X_r$ and $\Gamma_r$ respectively, and of course $X_\infty=X_\tau^L$ and $\Gamma_\infty=\Gamma T\tilde\xi_\eta^L$. \medskip\begin{prf} Recall the universal $(\tau,L)$-map $f_\tau^L\colon Y_\tau^L\to X_\tau^L$ of theorem \ref{ptpullback}. This is not a real $(\tau,L)$-map itself, as it is a map between infinite dimensional spaces, but it was constructed using direct limits; in particular its restriction to the $\eta$-stratum $\overset\infty{\underset{r=1}\bigcup}BG_{r[\eta]}^L$ is a direct limit of immersions. Thus the direct limit of the classifying maps of the cobordism classes of these immersions is a well-defined map $\chi_\tau^L\colon X_\tau^L\to\Gamma T\tilde\xi_\eta^L$ and its restriction to $X_r$ is a map $X_r\to\Gamma_r$. We will prove the theorem for a fixed $r$ and we proceed by directly checking the homotopy lifting property for the map $\chi_\tau^L|_{X_r}$. Fix a finite CW-complex $P$ and maps $g,h$ such that the square below commutes; we want to find a map $H$ for which the diagram is still commutative.
$$\xymatrix{ P\ar@{_(->}[d]_{\id_P\times0}\ar[r]^{g} & X_r\ar[d]^{\chi_\tau^L|_{X_r}}\\ P\times[0,1]\ar@{-->}[ur]^H\ar[r]^(.6)h & \Gamma_r }$$ We may assume that $P$ is a manifold with boundary, since we can embed $P$ into a Euclidean space and then replace it with a small closed neighbourhood of which $P$ is a deformation retract. By part \ref{pt1} of theorem \ref{ptpullback}, the map $g\colon P\to X_r$ corresponds to a $(\underline\tau_r,L)$-map $f\colon M^n\to P^{n+k}=P\times\{0\}$. By the analogous pullback property of the classifying space $\Gamma_r$, the map $h\colon P\times[0,1]\to\Gamma_r$ corresponds to an immersion $f_\eta\colon N^{n-c+1}\imto P\times[0,1]$ (if the dimension of $\tilde\xi_\eta^L$ is $c+k$) with at most $r$-tuple points and normal bundle induced from $\tilde\xi_\eta^L$ with the properties $f_\eta^{-1}(P\times\{0\})=\eta(f)$ and $f_\eta|_{\eta(f)}=f|_{\eta(f)}$. The proposed lift $H$ of the map $h$ would correspond to a $(\underline\tau_r,L)$-map $F\colon W^{n+1}\to P\times[0,1]$ that extends $f\colon M\to P\times\{0\}$ and for which $\eta(F)=N$ and $F|_{\eta(F)}=f_\eta$. We remark here that $P\times[0,1]$ (and so $N$ and the proposed $W$ too) is not a manifold with boundary but a manifold with corners (that is, it can be covered by Euclidean neighbourhoods diffeomorphic to $[0,\varepsilon)^m\times\R^{n+k+1-m}~(m\in\N)$). Fortunately the notions of $(\underline\tau_r,L)$-maps and immersions with normal bundles induced from $\tilde\xi_\eta^L$ extend to manifolds with corners and the pullback properties of the classifying spaces are also true in this extended sense, hence there is no problem here. We will assume that the images of $M$ and $N$ in $P\times[0,1]$ are disjoint from $\partial P\times[0,1]$, so they do not have corners.
We can assume this, as otherwise we can apply each of the following steps with $\partial P$ in the place of $P$, obtaining an extension over the boundary, then apply the same steps again to extend this over the whole manifold. The normal structure of the map $f_\eta$ and theorem \ref{univ} (together with its analogues later) allow us to extend the map $f_\eta$ to a disk bundle $U$ over $N$, which is induced from the union of the disk bundles $D\xi_{i[\eta]}^L$ and its mapping to $P\times[0,1]$ is induced from the global normal forms $\Phi_{i[\eta]}^L$ (for $i=1,\ldots,r$), so it maps to a tubular neighbourhood of $f_\eta(N)$. This extended map will be denoted by $F_\eta$. Now the boundary of $U$ is the union of a sphere bundle over $N$ denoted by $\partial_SU$, a disk bundle over $f_\eta^{-1}(P\times\{0\})$ denoted by $\partial_0U$ and a disk bundle over $f^{-1}_\eta(P\times\{1\})$ denoted by $\partial_1U$. Here $\partial_0U$ is a (closed) tubular neighbourhood of $\eta(f)$ and we can assume that $F_\eta|_{\partial_0U}$ is the same as $f|_{\partial_0U}$. Of course, nothing changes if we replace the map $f\colon M\to P\times[0,1]$ with $f\times\id_{[-\varepsilon,0]}\colon M\times[-\varepsilon,0]\to P\times[-\varepsilon,0]$ for a number $\varepsilon>0$ (i.e. if we add a $[-\varepsilon,0]$-collar to the map $f$). Now we are searching for a map $F$ that extends this $f$ with the added $[-\varepsilon,0]$-collar and $F_\eta$ such that no new $\eta$-points of $F$ arise besides the $\eta$-strata in $M\times[-\varepsilon,0]$ and in $U$. 
\begin{figure}[H] \begin{center} \centering\includegraphics[scale=0.1]{kep5}\label{kep5} \begin{changemargin}{2cm}{2cm} \caption{\hangindent=1.4cm\small We represent $M$ and $P\times[0,1]$ by the ellipse on the left and the cylinder on the right with the $[-\varepsilon,0]$-collars added; the two indicated points on $M$ represent $\eta(f)$ and the vertical segments starting from them are $N$; the disk bundle $U$ and its image are also indicated.} \end{changemargin} \vspace{-1.3cm} \end{center} \end{figure} Now $V:=(M\times[-\varepsilon,0])\cup U$ is a manifold with boundary $\partial V=(M\times\{-\varepsilon\})\cup(M\setminus\partial_0U)\cup\partial_SU\cup\partial_1U$. The map $F_V:=(f\times\id_{[-\varepsilon,0]})\cup F_\eta\colon V\to P\times[-\varepsilon,1]$ is almost a $(\underline\tau_r,L)$-map that extends $f$ and $F_\eta$; the only difference is that it does not map the boundary of the source to the boundary of the target. In order to change $F_V$ to a map that maps boundary to boundary (hence proving the theorem), we first prove the following. \medskip\begin{sclaim} There is a diffeotopy $\varphi_t~(t\in[0,1])$ of $P\times[0,1]$ for which $\varphi_0=\id_{P\times[0,1]}$ and for any $p\in\partial_SU$, the subspace $d\varphi_1(dF_V(T_p\partial_SU))\subset T_{\varphi_1(F_V(p))}(P\times[0,1])$ does not contain the tangent line of $[0,1]$. \end{sclaim} \begin{sprf} The restriction $F_\eta|_{\partial_SU}$ maps to the ``boundary'' of a tubular neighbourhood of $f_\eta(N)$; actually this ``boundary'' is just a $1$-codimensional subcomplex of $P\times[0,1]$ which is the image of an immersion. We can choose a vector field on this subcomplex (i.e. a section of $T(P\times[0,1])$ restricted to the subcomplex) that is nowhere tangent to it. If we denote the restriction of such a vector field to $F_\eta(\partial_SU)$ by $u$, then $u$ is pointwise independent of the subspaces $dF_\eta(T_p\partial_SU)$ for all $p\in\partial_SU$.
Now the stratified compression theorem \ref{sct} applied to the multisingularity stratification of $\partial_SU$ yields a diffeotopy of $P\times[0,1]$ that turns $u$ vertical (i.e. everywhere parallel to $T[0,1]$). Using that $u$ was everywhere independent of the subspaces $dF_\eta(T_p\partial_SU)=dF_V(T_p\partial_SU)$, the images of these subspaces cannot contain the vertical tangent lines and this is what we wanted to achieve. \end{sprf} We can trivially extend $\varphi_1$ to a diffeomorphism of $P\times[-\varepsilon,1]$. We will use the notations $\tilde f:=\varphi_1\circ f,\tilde f_\eta:=\varphi_1\circ f_\eta,\tilde F_\eta:=\varphi_1\circ F_\eta$ and $\tilde F_V:=\varphi_1\circ F_V$. Define $$V':=\{(p,t)\in\partial V\times[0,1]\mid\tilde F_V(p)=(q,s)\in P\times[0,1],t\in[s,1]\}$$ and $\tilde F_{V'}(p,t):=(q,t)$ for all $(p,t)\in V'$ (with the above notation $\tilde F_V(p)=(q,s)$). Now we can glue $V'$ to $V$ by the map $(p,s)\mapsto p$ (with the above notation) to form the manifold $W$, which is mapped to $P\times[-\varepsilon,1]$ by the map $\tilde F:=\tilde F_V\cup\tilde F_{V'}$. This map is not necessarily smooth where the gluing happened, but it can easily be smoothed, so we may assume that it is. Now $\tilde F$ is a $(\underline\tau_r,L)$-map; it extends $\tilde f\times\id_{[-\varepsilon,0]}$ and $\tilde f_\eta$, and its $\eta$-stratum is $(\eta(f)\times[-\varepsilon,0])\cup N$. Thus $F:=\varphi_1^{-1}\circ\tilde F$ satisfies every condition we need. \end{prf} In the rest of this thesis (as in the above proof) we will use the symbol $\chi_\tau^L$ to denote the key fibration $X_\tau^L\to\Gamma T\tilde\xi_\eta^L$. \section{Inducing map of the key fibration} In the present section we will give a purely homotopy theoretical description of the key fibration $\chi_\tau^L$ (where $L,\tau,\eta$ and $\tau'$ are fixed as before) based on \cite{hominv}.
This fibration can be induced from the universal $X_{\tau'}^L$-bundle, that is, there is a pullback diagram $$\xymatrix{ X_\tau^L\ar[d]_{X_{\tau'}^L}\ar[r] & EX_{\tau'}^L\ar[d]^{X_{\tau'}^L}\\ \Gamma T\tilde\xi_\eta^L\ar[r]^{b_\eta^L} & BX_{\tau'}^L }$$ where $EX_{\tau'}^L$ is a contractible space, $BX_{\tau'}^L$ is a space with the property $\Omega BX_{\tau'}^L=X_{\tau'}^L$ and the map $b_\eta^L$ is called the inducing map of the key fibration. We will describe $b_\eta^L$ using the attaching map $\tilde\rho_\eta^L$ of $D\tilde\xi_\eta^L$ to $X_{\tau'}^L$ in the construction of the classifying space for cobordisms of $\tau$-maps with no multiple $\eta$-points (see subsection \ref{prf}; the attaching map was only denoted by $\tilde\rho$ there). \subsection{Formulation of the theorem} The description of the map $b_\eta^L$ will be summarised by a theorem, but to state this theorem we need some introduction. In this subsection we give this introduction using a series of lemmas and then we formulate the theorem. These lemmas will not be proved here in order to get to the statement of the theorem faster; the proofs will be given in the next subsection. First we consider the key fibration $\chi_\tau^L\colon X_\tau^L\to\Gamma T\tilde\xi_\eta^L$ over the open disk bundle $\tightoverset{\,_\circ~~~}{D\tilde\xi_\eta^L}\subset T\tilde\xi_\eta^L\subset\Gamma T\tilde\xi_\eta^L$ (i.e. the Thom space without its special point). \begin{lemma}\label{l1} $(\chi_\tau^L)^{-1}\big(\tightoverset{\,_\circ~~~}{D\tilde\xi_\eta^L}\big)=\tightoverset{\,_\circ~~~}{D\tilde\xi_\eta^L}\times X_{\tau'}^L$. \end{lemma} In other words, the key bundle is trivial over this open disk bundle. Now we add the extra point to $\tightoverset{\,_\circ~~~}{D\tilde\xi_\eta^L}$ to form the Thom space $T\tilde\xi_\eta^L$ and consider $X_T:=(\chi_\tau^L)^{-1}(T\tilde\xi_\eta^L)$. 
Recall that $X_{\tau'}^L$ is an H-space (see remark \ref{h}); we will denote the multiplication in this H-space by $x\cdot y~(x,y\in X_{\tau'}^L)$ and we define $$\tilde\rho_T\colon S\tilde\xi_\eta^L\times X_{\tau'}^L\to X_{\tau'}^L;~(s,x)\mapsto\tilde\rho_\eta^L(s)\cdot x.$$ \begin{lemma}\label{l3} $X_T=(D\tilde\xi_\eta^L\times X_{\tau'}^L)\usqcup{\tilde\rho_T}X_{\tau'}^L$. \end{lemma} Next we describe the fibration $$\chi_\tau^L|_{X_T}\colon X_T\xra{X_{\tau'}^L}T\tilde\xi_\eta^L.$$ We denote its inducing map by $b_T\colon T\tilde\xi_\eta^L\to BX_{\tau'}^L$ and note that $b_T$ is just the restriction of $b_\eta^L$ to the Thom space $T\tilde\xi_\eta^L$. Since the key fibration was trivial over the open disk bundle, it is also trivial over the zero-section $BG_\eta^L$, hence we can assume that the inducing map $b_T$ maps $BG_\eta^L$ to a single point. This means that $b_T$ factors through the quotient map $q\colon T\tilde\xi_\eta^L\to T\tilde\xi_\eta^L/BG_\eta^L$ that contracts the zero-section, that is, there is a map $\sigma\colon T\tilde\xi_\eta^L/BG_\eta^L\to BX_{\tau'}^L$ with the property $b_T=\sigma\circ q$. Observe that $T\tilde\xi_\eta^L/BG_\eta^L$ coincides with the suspension $S(S\tilde\xi_\eta^L)$ of the sphere bundle $S\tilde\xi_\eta^L$, hence $\sigma$ is a map $S(S\tilde\xi_\eta^L)\to BX_{\tau'}^L$. Recall again that there is a bijective correspondence, called the adjoint correspondence, between the maps $A\to\Omega B$ and the maps $SA\to B$ for any (pointed) spaces $A,B$. Now the adjoint of the map $\sigma$ is a map $S\tilde\xi_\eta^L\to\Omega BX_{\tau'}^L=X_{\tau'}^L$. \begin{lemma}\label{l2} The adjoint of $\sigma$ is the attaching map $\tilde\rho_\eta^L\colon S\tilde\xi_\eta^L\to X_{\tau'}^L$. 
\end{lemma} Recall that $X_{\tau'}^L=\Gamma V_{\tau'}^L$ (by theorem \ref{gammav}), hence we have $\Omega BX_{\tau'}^L=X_{\tau'}^L=\Omega\Gamma SV_{\tau'}^L$ and we can define $BX_{\tau'}^L$ as $\Gamma SV_{\tau'}^L$, since its path fibration has contractible total space and fibre $X_{\tau'}^L$. Now the inducing map $b_\eta^L$ is such that both its source space $\Gamma T\tilde\xi_\eta^L$ and its target space $BX_{\tau'}^L=\Gamma SV_{\tau'}^L$ are infinite loop spaces. \begin{lemma}\label{l4} The map $b_\eta^L\colon\Gamma T\tilde\xi_\eta^L\to\Gamma SV_{\tau'}^L$ is an infinite loop map. \end{lemma} This implies that $b_\eta^L$ is completely determined by its restriction $b_T$ to $T\tilde\xi_\eta^L$, since any map $g\colon A\to B$ from a (pointed) space to an infinite loop space extends to an infinite loop map $g_\ext\colon\Gamma A\to B$ in a homotopically unique way by \cite[pp. 42--43]{may}. \begin{lemma}\label{l5} If $f\colon A\to A'$ is a map between any (pointed) spaces and $g\colon A\to B$ and $g'\colon A'\to B$ are maps to an infinite loop space $B$ for which $g=g'\circ f$, then $g_\ext=g'_\ext\circ\Gamma f$. \end{lemma} A direct consequence of these lemmas is a description of the inducing map $b_\eta^L$ as follows. \begin{thm}\label{induc} Let $\sigma\colon S(S\tilde\xi_\eta^L)\to BX_{\tau'}^L$ be the adjoint of the gluing map $\tilde\rho_\eta^L$ and let $q\colon T\tilde\xi_\eta^L\to T\tilde\xi_\eta^L/BG_\eta^L=S(S\tilde\xi_\eta^L)$ be the quotient map. 
Then we have $$b_\eta^L=(\sigma\circ q)_\ext.$$ \end{thm} \begin{rmk} This theorem can also be visualised on a commutative diagram: $$\xymatrix@C=.75pc{ X_T\ar@{^(->}[rr]\ar[d]_{X_{\tau'}^L} && X_\tau^L\ar[rrrr]\ar[d]^{X_{\tau'}^L} &&&& EX_{\tau'}^L\ar@{=}[r]^(.45){\sims{1.8}}\ar[dd]^{X_{\tau'}^L} & \ast~~~~~~~~ \\ T\tilde\xi_\eta^L\ar@{^(->}[rr]\ar[drrrrrr]^{b_T}\ar[d]_q && \Gamma T\tilde\xi_\eta^L\ar[drrrr]^{(b_T)_\ext=b_\eta^L} &&&&&\\ T\tilde\xi_\eta^L/BG_\eta^L\ar@{=}[rr] && S(S\tilde\xi_\eta^L)\ar[rrrr]^(.53)\sigma && \ar@{<->}[d]^{\text{adjoint}} && BX_{\tau'}^L\ar@{=}[r] & \Gamma SV_{\tau'}^L~ \\ && S\tilde\xi_\eta^L\ar[rrrr]_(.53){\tilde\rho_\eta^L} &&&& X_{\tau'}^L\ar@{=}[r] & \Omega BX_{\tau'}^L~ }$$ \end{rmk} \subsection{Proofs of the lemmas} \medskip\par\noindent\textbf{Proof of lemma \ref{l1}.\enspace\ignorespaces} Recall the construction of the space $X_{\tau'}^L$ (section \ref{prf}): We glue together the disk bundles $$D_{\underline\vartheta}:=D\tilde\xi_{\underline\vartheta}^L=\prod_{i=1}^r(D\tilde\xi_{\vartheta_i}^L)^{m_i}$$ along their boundaries for all multisingularities $\underline\vartheta=m_1[\vartheta_1]+\ldots+m_r[\vartheta_r]$ composed of the elements of $\tau'$ according to an order that is compatible with the natural partial order. The attaching map $\tilde\rho_{\underline\vartheta}^L$ of $D_{\underline\vartheta}$ maps to the subspace corresponding to multisingularities less complicated than $\underline\vartheta$. To form the space $X_\tau^L$, we follow the same procedure using the disk bundles $D_{\underline\vartheta}\times(D\tilde\xi_\eta^L)^r$ for all natural numbers $r$. Now if we project each such disk bundle onto the factor $(D\tilde\xi_\eta^L)^r$ and attach these correspondingly, then we get the space $\Gamma T\tilde\xi_\eta^L$. Clearly the projections are compatible with the gluings that form the space $X_\tau^L$, hence we get a map $X_\tau^L\to\Gamma T\tilde\xi_\eta^L$, which is $\chi_\tau^L$. 
This way it is easy to see that we obtain $(\chi_\tau^L)^{-1}\big(\tightoverset{\,_\circ~~~}{D\tilde\xi_\eta^L}\big)$ by gluing the spaces $D_{\underline\vartheta}\times\tightoverset{\,_\circ~~~}{D\tilde\xi_\eta^L}$ together along their boundaries by the maps $\tilde\rho_{\underline\vartheta}^L\times\id_{\toverset{_\circ~~}{D\tilde\xi_\eta^L}}$. This procedure results in the space $X_{\tau'}^L\times\tightoverset{\,_\circ~~~}{D\tilde\xi_\eta^L}$ as claimed. ~$\square$\par \medskip\par\noindent\textbf{Proof of lemma \ref{l3}.\enspace\ignorespaces} The space $X_T$ is a direct product over the open disk bundle $\tightoverset{\,_\circ~~~}{D\tilde\xi_\eta^L}$, hence over the Thom space it can be obtained as $$X_T=(D\tilde\xi_\eta^L\times X_{\tau'}^L)\usqcup{\tilde\rho}X_{\tau'}^L,$$ where the attaching map is some map $\tilde\rho\colon S\tilde\xi_\eta^L\times X_{\tau'}^L\to X_{\tau'}^L$ for which the restriction $\{s\}\times X_{\tau'}^L\to X_{\tau'}^L$ is a homeomorphism for all $s\in S\tilde\xi_\eta^L$. We can describe this map using the construction of the spaces $X_{\tau'}^L$ and $X_T$; note that $X_T$ is the classifying space for those $(\tau,L)$-maps where the $\eta$-points are not multiple. Recall the universal $(\tau',L)$-map $f_{\tau'}^L\colon Y_{\tau'}^L\to X_{\tau'}^L$ in theorem \ref{ptpullback}. The pullback property of this map (i.e. that all $(\tau',L)$-maps can be induced from it by a map of the target space to $X_{\tau'}^L$) remains true for generalised $(\tau',L)$-maps obtained as direct limits of $(\tau',L)$-maps as well (see remark \ref{gentau}). Note that the operation in the classifying space $X_{\tau'}^L$ is the same as the operation in the cobordism group classified by it (by a version of the Brown representability theorem), that is, the disjoint union of (generalised) $(\tau',L)$-maps corresponds to the pointwise multiplication of their classifying maps. 
Now the attaching map $\tilde\rho$ is the same as the inducing map of a generalised $(\tau',L)$-map as shown on the following diagram: $$\xymatrix@C=11pc{ Y_{\tau'}^L\ar[r]^{f_{\tau'}^L} & X_{\tau'}^L \\ (S\tilde\xi_\eta^L\times Y_{\tau'}^L)\sqcup(S\xi_\eta^L\times X_{\tau'}^L)\ar[u]^\rho\ar[r]^(.575){\big(\id_{S\tilde\xi_\eta^L}\times f_{\tau'}^L\big)\sqcup\big(\Phi_\eta^L|_{S\xi_\eta^L}\times\id_{X_{\tau'}^L}\big)} & S\tilde\xi_\eta^L\times X_{\tau'}^L\ar[u]_{\tilde\rho} }$$ Here the restriction $\id_{S\tilde\xi_\eta^L}\times f_{\tau'}^L\colon S\tilde\xi_\eta^L\times Y_{\tau'}^L\to S\tilde\xi_\eta^L\times X_{\tau'}^L$ is induced by the projection $\pr_{X_{\tau'}^L}\colon S\tilde\xi_\eta^L\times X_{\tau'}^L\to X_{\tau'}^L$ and the other restriction $\Phi_\eta^L|_{S\xi_\eta^L}\times\id_{X_{\tau'}^L}\colon S\xi_\eta^L\times X_{\tau'}^L\to S\tilde\xi_\eta^L\times X_{\tau'}^L$ is induced by the map $\tilde\rho_\eta^L\circ\pr_{S\tilde\xi_\eta^L}\colon S\tilde\xi_\eta^L\times X_{\tau'}^L\to X_{\tau'}^L$. Hence the disjoint union is induced by the pointwise product that maps $(s,x)\in S\tilde\xi_\eta^L\times X_{\tau'}^L$ to the point $\tilde\rho_\eta^L(s)\cdot x$. ~$\square$\par \medskip\par\noindent\textbf{Proof of lemma \ref{l2}.\enspace\ignorespaces} To prove that $\sigma\colon S(S\tilde\xi_\eta^L)\to BX_{\tau'}^L$ and $\tilde\rho_\eta^L\colon S\tilde\xi_\eta^L\to X_{\tau'}^L$ are adjoint maps, we will use a general statement. Let $\mathscr{H}$ be an H-space and for any (pointed) space $A$ denote by $\mathscr{H}(A)$ the set of principal $\mathscr{H}$-bundles over $A$. 
There are two well-known bijective correspondences $$\varphi\colon\mathscr{H}(SA)\to[SA,B\mathscr{H}]~~~~\text{and}~~~~\psi\colon\mathscr{H}(SA)\to[A,\mathscr{H}],$$ where $\varphi$ sends each bundle to (the homotopy class of) its inducing map and $\psi$ sends each bundle $E\xra{\mathscr{H}}SA$ to (the homotopy class of) the map $\alpha\colon A\to\mathscr{H}$ for which $$E=(CA\times\mathscr{H})\usqcup{\beta}\mathscr{H}$$ and the attaching map is $\beta(a,h)=\alpha(a)\cdot h~(a\in A,h\in\mathscr{H})$. Such an $\alpha$ exists and is unique up to homotopy. \medskip\begin{sclaim} The bijection $\psi\circ\varphi^{-1}$ coincides with the adjoint correspondence. \end{sclaim} \begin{sprf} The symbol $*$ will denote a fixed point in any space throughout the proof. Let $f\colon(SA,*)\to(B\mathscr{H},*)$ be a map that induces the bundle $f^*\pi$ from the universal bundle $\pi\colon E\mathscr{H}\to B\mathscr{H}$. Lift the map $f$ to a map $\tilde f$ as indicated in the diagram $$\xymatrix@C=.75pc{ (CA,A)\ar[rr]^(.4){\tilde f}\ar[d]_{A\times\{1\}} && (PB\mathscr{H},\Omega B\mathscr{H})\ar@{=}[r]^(.58){\sims{1.5}}\ar[d]^{\Omega B\mathscr{H}} & (E\mathscr{H},\mathscr{H})\ar[d]^{\mathscr{H}}\\ (SA,*)\ar[rr]^f && (B\mathscr{H},*)\ar@{=}[r] & (B\mathscr{H},*) }$$ where $CA=(A\times[0,1])/(A\times\{0\})$, its map to $SA$ is the quotient by $A\times\{1\}$ and the lifted map $\tilde f$ to the path space is defined by $\tilde f(a,t):=f|_{\{a\}\times[0,t]}$. It is not hard to see that $\tilde f$ maps $A=A\times\{1\}$ to $\Omega B\mathscr{H}$ by the adjoint of $f$ that we denote by $f'$. Observe that lifting the universal bundle $\pi\colon E\mathscr{H}\xra{\mathscr{H}}B\mathscr{H}$ to its contractible total space results in the trivial bundle $E\mathscr{H}\times\mathscr{H}$; conversely, identifying each point $(e,h)\in E\mathscr{H}\times\mathscr{H}$ with the points $(e\cdot h',h\cdot h')$ for all $h'\in\mathscr{H}$ (i.e. 
factoring by the diagonal $\mathscr{H}$-action) recovers the universal bundle. Hence the pullback bundle $f^*\pi$ can also be obtained by pulling back the trivial bundle $E\mathscr{H}\times\mathscr{H}$ by the map $\tilde f$ to a bundle over $CA$, then identifying the points $(a,1,h)$ and $(a,1,f'(a)\cdot h)$ for all $(a,1)\in A\times\{1\}\subset CA$ and $h\in\mathscr{H}$. So we have $\varphi(f^*\pi)=f$ and $\psi(f^*\pi)=f'$. \end{sprf} Let $\Sigma\xra{X_{\tau'}^L}S(S\tilde\xi_\eta^L)$ denote the bundle induced by $\sigma$. We will use the claim above with the substitutions $\mathscr{H}:=X_{\tau'}^L$ and $A:=S\tilde\xi_\eta^L$ on the element $\Sigma\in\mathscr{H}(SA)$. Clearly we have $\varphi(\Sigma)=\sigma$, so we only have to see the identity $\psi(\Sigma)=\tilde\rho_\eta^L$. This follows from the observation $$\Sigma=(C(S\tilde\xi_\eta^L)\times X_{\tau'}^L)\usqcup{\tilde\rho_T}X_{\tau'}^L,$$ where lemma \ref{l3} implies that the gluing map is $\tilde\rho_T\colon S\tilde\xi_\eta^L\times X_{\tau'}^L\to X_{\tau'}^L$, which maps each point $(s,x)$ to the point $\tilde\rho_\eta^L(s)\cdot x$. ~$\square$\par \medskip\par\noindent\textbf{Proof of lemma \ref{l4}.\enspace\ignorespaces} Now we will use the notion of $l$-framed $(\tau,L)$-maps (see definition \ref{lframed}). We saw in remark \ref{lfr} that the classifying space for cobordisms of $l$-framed $(\tau,L)$-maps is of the form $X_{\tau\oplus l}^L\cong\Gamma S^lV_\tau^L$. Hence we have $BX_\tau^L=X_{\tau\oplus1}^L$ and we can set $B^lX_\tau^L:=X_{\tau\oplus l}^L$ to denote the space with the property $\Omega^lB^lX_\tau^L=X_\tau^L$. We can now consider the key fibration for $l$-framed $(\tau,L)$-maps, as the proof of theorem \ref{keythm} works for the spaces $B^lX_\tau^L$ without change.
This is a fibration $$\chi_l\colon B^lX_\tau^L\xra{B^lX_{\tau'}^L}\Gamma S^lT\tilde\xi_\eta^L$$ and it can be induced from the universal $B^lX_{\tau'}^L$-bundle $*\xra{B^lX_{\tau'}^L}B^{l+1}X_{\tau'}^L$ by a map $b_l\colon\Gamma S^lT\tilde\xi_\eta^L\to B^{l+1}X_{\tau'}^L$. The following diagram shows this inducing map in the rightmost square and the resolvents of both horizontal arrows to the left: $$\xymatrix{ \ldots\ar[r] & \Omega B^lX_\tau^L\ar[r]^{\chi_{l-1}}\ar[d] & \Omega\Gamma S^lT\tilde\xi_\eta^L\ar[r]\ar[d]^{b_{l-1}} & B^lX_{\tau'}^L\ar[r]\ar[d]^{\id_{B^lX_{\tau'}^L}} & B^lX_\tau^L\ar[r]^{\chi_l}\ar[d] & \Gamma S^lT\tilde\xi_\eta^L\ar[d]^{b_l}\\ \ldots\ar[r] & \ast\ar[r] & \Omega B^{l+1}X_{\tau'}^L\ar[r] & B^lX_{\tau'}^L\ar[r] & \ast\ar[r] & B^{l+1}X_{\tau'}^L }$$ The leftmost square in the diagram above is precisely the pullback diagram of the fibration $\chi_{l-1}$ with the map $b_{l-1}$, hence we obtain that $\chi_{l-1}=\Omega\chi_l$ and $b_{l-1}=\Omega b_l$. We can iterate this process as $l\to\infty$ and we obtain that $b_0=b_\eta^L$ is indeed an infinite loop map. ~$\square$\par \medskip\par\noindent\textbf{Proof of lemma \ref{l5}.\enspace\ignorespaces} We want to show that the diagram below remains commutative (up to homotopy) after adding the dashed arrow. $$\xymatrix{ A\ar[rr]^f\ar[dr]^g\ar@{_(->}[dd]_i && A'\ar[dl]_{g'}\ar@{^(->}[dd]^{i'}\\ & B &\\ \Gamma A\ar@{-->}[rr]^{\Gamma f}\ar[ur]^{g_\ext} && \Gamma A'\ar[ul]_{g'_\ext} }$$ The square commutes by the construction of the map $\Gamma f$, hence the composition $g'_\ext\circ\Gamma f\circ i$ coincides with the map $g=g'\circ f=g'_\ext\circ i'\circ f$. Thus $g'_\ext\circ\Gamma f$ is an extension of $g$ to the space $\Gamma A$ and it is also an infinite loop map, since it is the composition of such maps. Now the uniqueness of the extension implies that $g_\ext$ and $g'_\ext\circ\Gamma f$ coincide up to homotopy.
~$\square$\par \section{Corollaries} Here we will see some applications of the key fibration from \cite{hosszu} and \cite{hominv} that give nice descriptions of the classifying spaces $X_\tau^L$ in some cases. Again $\tau$ will be a set of singularities, $L$ a stable linear group, $[\eta]\in\tau$ a top singularity and $\tau':=\tau\setminus\{[\eta]\}$. \subsection{Rational decomposition} In this subsection we show rational decomposition theorems on the key bundle. First we will see that given some restrictions on the allowed singularity classes in $\tau$, the key fibration is rationally trivial, i.e. $X_\tau^L\congq X_{\tau'}^L\times\Gamma T\tilde\xi_\eta^L$. \begin{rmk} The rational triviality of the key bundle means that for any manifold $P$ we have $$\Cob_\tau^L(P)\otimes\Q\cong(\Cob_{\tau'}^L(P)\oplus\Imm^{\tilde\xi_\eta^L}(P))\otimes\Q.$$ \end{rmk} \begin{thm}\label{ratrivi1} If $\nu_\tau^L,\xi_\eta^L$ and $\tilde\xi_\eta^L$ are orientable and either $\xi_\eta^L$ or $\tilde\xi_\eta^L$ has non-trivial rational Euler class $e(\xi_\eta^L)$ or $e(\tilde\xi_\eta^L)$ in $H^*(BG_\eta^L;\Q)$, then we have $$X_\tau^L\congq X_{\tau'}^L\times\Gamma T\tilde\xi_\eta^L.$$ \end{thm} \begin{prf} Consider the Kazarian space $K_\tau^L$ and the virtual complex $V_\tau^L$ and take the cohomological exact sequences obtained from the Puppe sequences $$K_{\tau'}^L\into K_\tau^L\to T\xi_\eta^L\into SK_{\tau'}^L~~~~\text{and}~~~~V_{\tau'}^L\into V_\tau^L\to T\tilde\xi_\eta^L\into SV_{\tau'}^L.$$ These are dual to the homological long exact sequences of the pairs $(K_\tau^L,K_{\tau'}^L)$ and $(V_\tau^L,V_{\tau'}^L)$. These dual sequences are shown on the upper two rows of the diagram below and the isomorphisms between these rows are the Thom isomorphisms corresponding to the virtual bundle $\nu_\tau^L$. 
$$\xymatrix{ H_*(K_{\tau'}^L;\Q)\ar[r]\ar[d]^\cong & H_*(K_{\tau}^L;\Q)\ar[r]\ar[d]^\cong & H_*(T\xi_\eta^L;\Q)\ar[r]^{\partial_K}\ar[d]^\cong & H_*(SK_{\tau'}^L;\Q)\ar[d]^\cong \\ H_{*+k}(V_{\tau'}^L;\Q)\ar[r]\ar[d]^\cong & H_{*+k}(V_{\tau}^L;\Q)\ar[r]\ar[d]^\cong & H_{*+k}(T\tilde\xi_\eta^L;\Q)\ar[r]^{\partial_V}\ar[d]^\cong & H_{*+k}(SV_{\tau'}^L;\Q)\ar[d]^\cong \\ \pi^s_{*+k}(V_{\tau'}^L)\otimes\Q\ar[r]\ar[d]^\cong & \pi^s_{*+k}(V_{\tau}^L)\otimes\Q\ar[r]\ar[d]^\cong & \pi^s_{*+k}(T\tilde\xi_\eta^L)\otimes\Q\ar[r]\ar[d]^\cong & \pi^s_{*+k}(SV_{\tau'}^L)\otimes\Q\ar[d]^\cong \\ \pi_{*+k}(X_{\tau'}^L)\otimes\Q\ar[r] & \pi_{*+k}(X_{\tau}^L)\otimes\Q\ar[r] & \pi_{*+k}(\Gamma T\tilde\xi_\eta^L)\otimes\Q\ar[r]^\partial & \pi_{*+k-1}(X_{\tau'}^L)\otimes\Q }$$ We obtain the third row by applying the stable Hurewicz homomorphism on the second row (recall that these are also isomorphisms according to Serre's theorem \cite[theorem 18.3]{charclass}) and we get the fourth row by corollary \ref{gammav}. Now suppose that the vector bundle $\xi_\eta^L$ is orientable and the Euler class $e(\xi_\eta^L)\in H^*(BG_\eta^L;\Q)$ is non-trivial. The dual of the map $\partial_K$ is the homomorphism $$i^*\colon H^*(SK_{\tau'}^L;\Q)\to H^*(T\xi_\eta^L;\Q)$$ induced by the inclusion $i\colon T\xi_\eta^L\into SK_{\tau'}^L$. We claim that this homomorphism is trivial. This is indeed so, because the cup product in $H^*(SK_{\tau'}^L;\Q)$ is trivial, but in $H^*(T\xi_\eta^L;\Q)$ any element is of the form $u(\xi_\eta^L)\smallsmile x$ for some $x\in H^*(BG_\eta^L;\Q)$ (by the Thom isomorphism), hence the cup product here is $(u(\xi_\eta^L)\smallsmile x_1)\smallsmile(u(\xi_\eta^L)\smallsmile x_2)=u(\xi_\eta^L)\smallsmile e(\xi_\eta^L)\smallsmile x_1\smallsmile x_2$. This cannot vanish unless $x_1$ or $x_2$ vanishes, since $H^*(BG_\eta^L;\Q)$ is a subring of the polynomial ring $H^*(BT_\eta^L;\Q)$ where $T_\eta^L$ is the maximal torus in $G_\eta^L$, so it has no zero divisors.
If we suppose that $\tilde\xi_\eta^L$ is an orientable bundle with non-vanishing rational Euler class, then the same reasoning yields that $\partial_V$ is trivial. Either way, the homomorphism $\partial$ in the bottom row of the diagram above vanishes, hence $$\pi_*(X_\tau^L)\otimes\Q\cong(\pi_*(X_{\tau'}^L)\otimes\Q)\oplus(\pi_*(\Gamma T\tilde\xi_\eta^L)\otimes\Q).$$ It remains to note that $X_\tau^L$, $X_{\tau'}^L$ and $\Gamma T\tilde\xi_\eta^L$ are all H-spaces, so they are rationally homotopy equivalent to products of Eilenberg--MacLane spaces $K(\Q,i)$ (for some $i\in\N$). Thus we obtain the statement of the theorem. \end{prf} \begin{thm}\label{ratrivi2} Suppose that $\tilde\xi_\eta^L=\zeta\oplus\varepsilon^r$ for an orientable vector bundle $\zeta$ of odd dimension and a number $r\in\N$ and suppose also that $\pi_{i}(X_{\tau'}^L)$ is a torsion group whenever $i+r$ is an even number. Then we have $$X_\tau^L\congq X_{\tau'}^L\times\Gamma T\tilde\xi_\eta^L.$$ \end{thm} \begin{prf} It is enough to show that the inducing map $b_\eta^L\colon\Gamma T\tilde\xi_\eta^L\to BX_{\tau'}^L$ of the key bundle is rationally null-homotopic. Consider its $r$-th loop map $$\Omega^rb_\eta^L\colon\Omega^r\Gamma T\tilde\xi_\eta^L=\Gamma T\zeta\to\Omega^{r-1}X_{\tau'}^L=\Omega^rBX_{\tau'}^L$$ using the assumption $T\tilde\xi_\eta^L=T(\zeta\oplus\varepsilon^r)=S^rT\zeta$. This is an infinite loop map by lemma \ref{l4}, hence it is the unique extension of its restriction to $T\zeta$. Recall that $X_{\tau'}^L$ is an infinite loop space, so this also makes sense for $r=0$. The base space $BG_\eta^L$ has non-trivial rational cohomology only in even degrees (since $G_\eta^L$ is a compact Lie group), hence the Thom isomorphism yields that $T\zeta$ has non-trivial rational cohomology only in odd degrees (because the rank of $\zeta$ was assumed to be odd). 
The H-space $\Omega^{r-1}X_{\tau'}^L$ is rationally homotopy equivalent to a product of Eilenberg--MacLane spaces $K(\Q,i)$ and none of these numbers $i$ can be odd, as we assumed $\pi_i(\Omega^{r-1}X_{\tau'}^L)\otimes\Q\cong\pi_{i+r-1}(X_{\tau'}^L)\otimes\Q$ to be trivial for odd numbers $i$. This implies that any map $T\zeta\to\Omega^{r-1}X_{\tau'}^L$ induces the trivial homomorphism in rational cohomologies, hence is rationally null-homotopic. Therefore the infinite loop map extension $\Omega^rb_\eta^L$ is also rationally null-homotopic. The resolvent of the inducing map $b_\eta^L$ has the form \begin{alignat*}2 \ldots\to\Omega^rX_{\tau'}^L\to\Omega^rX_\tau^L\xra{\Omega^r\chi_\tau^L}\Gamma T\zeta\xra{\Omega^rb_\eta^L}\Omega^{r-1}X_{\tau'}^L\to\ldots&\\ \ldots\to X_{\tau'}^L\to X_\tau^L\xra{\chi_\tau^L}&\Gamma T\tilde\xi_\eta^L\xra{b_\eta^L}BX_{\tau'}^L. \end{alignat*} Recall that in a resolvent each map is the inducing map of the bundle whose projection is the previous map, in particular $\Omega^rb_\eta^L$ induces the bundle $\Omega^r\chi_\tau^L\colon\Omega^rX_\tau^L\xra{\Omega^rX_{\tau'}^L}\Omega^r\Gamma T\tilde\xi_\eta^L$ from the universal bundle $*\xra{\Omega^rX_{\tau'}^L}\Omega^{r-1}X_{\tau'}^L$. Now the rational null-homotopy of $\Omega^rb_\eta^L$ implies that this is a rationally trivial bundle, i.e. we have $$\Omega^rX_\tau^L\congq\Omega^rX_{\tau'}^L\times\Omega^r\Gamma T\tilde\xi_\eta^L.$$ Since the spaces $X_\tau^L$, $X_{\tau'}^L$ and $\Gamma T\tilde\xi_\eta^L$ are H-spaces, their rational homotopy types are products of Eilenberg--MacLane spaces and then $\Omega^rK(\Q,i)=K(\Q,i-r)$ implies that the above splitting of $\Omega^rX_\tau^L$ also holds for $X_\tau^L$, that is, $X_\tau^L\congq X_{\tau'}^L\times\Gamma T\tilde\xi_\eta^L$. \end{prf} In general, when the conditions in the above theorems do not apply, there is also a weaker decomposition theorem for the rational homotopy type of $X_\tau^L$. 
We will denote the rational homotopy types of the spaces $X_\tau^L$, $X_{\tau'}^L$ and $\Gamma T\tilde\xi_\eta^L$ respectively by $X$, $X'$ and $\Gamma$. These are all rational H-spaces, hence they uniquely decompose into products of Eilenberg--MacLane spaces $K(\Q,i)$ (for some $i\in\N$). \begin{thm} There is a rational H-space $B$ with the property $$X\congq\frac{X'\times\Omega B}{\Omega\Gamma}\times B,$$ where dividing by $\Omega\Gamma$ means that all Eilenberg--MacLane factors $K(\Q,i)$ of $\Omega\Gamma$ occur as factors of the numerator and we cancel these factors. \end{thm} \begin{prf} We use the $\Q$-classification of H-bundles \cite[lemma 113]{hosszuv1} on the key bundle to obtain that $X\xra{X'}\Gamma$ is the product of three H-bundles of the forms $*\xra{\Omega A}A$, $B\xra{*}B$ and $C\xra{C}*$. Hence $X=B\times C$, $X'=A\times C$ and $\Gamma=A\times B$ and so $\Omega\Gamma=\Omega A\times\Omega B$, and the statement of the theorem follows. \end{prf} \begin{rmk} If the singularity set is finite, say we have $\tau=\{\eta_0,\ldots,\eta_r\}$ with $\eta_0<\ldots<\eta_r=\eta$, then the space $B$ in the above theorem can be obtained from a spectral sequence as follows. By the above proof we have $\pi_*(X)=\pi_*(B)\oplus\pi_*(C)$ and $\pi_*(\Gamma)=\pi_*(A)\oplus\pi_*(B)$, the induced map $\chi_\#\colon\pi_*(X)\to\pi_*(\Gamma)$ of the rational key fibration is an isomorphism on $\pi_*(B)$ and this is the maximal such subgroup. 
Now we have a commutative diagram $$\xymatrix{ \pi_m(X')\ar[d]^\cong\ar[r] & \pi_m(X)\ar[d]^\cong\ar[r]^{\chi_\#} & \pi_m(\Gamma)\ar[d]^\cong \\ H_{m-k}(K_{\tau'}^L;\tilde\Q)\ar[r] & H_{m-k}(K_\tau^L;\tilde\Q)\ar[r]^\alpha & H_{m-k}(T\xi_\eta^L;\tilde\Q) }$$ where the vertical arrows are the compositions of the stable Hurewicz homomorphisms (which are rational isomorphisms) and the Thom isomorphisms corresponding to the virtual bundle $\nu_\tau^L$ and $\tilde\Q$ denotes rational coefficients twisted by the orientation of (the restriction of) $\nu_\tau^L$. Now if $E^*_{*,*}$ denotes the Kazarian spectral sequence (see definition \ref{kazspec}) with twisted rational coefficients, then we have $$H_{m-k}(K_\tau^L;\tilde\Q)=\bigoplus_{i=0}^rE^\infty_{i,m-k-i}~~~~\text{and}~~~~E^\infty_{r,m-k-r}=\im\alpha\subset H_{m-k}(T\xi_\eta^L;\tilde\Q).$$ Hence the rational H-space (i.e. product of Eilenberg--MacLane spaces) $B$ is completely determined by the formula $$\pi_m(B)=\im\chi_\#\cong\im\alpha=E^\infty_{r,m-k-r}.$$ \end{rmk} \subsection{A Postnikov-type tower} In this subsection we assume that $\tau$ is finite and its elements are $\eta_0<\ldots<\eta_r$ with a complete order extending the natural partial order. We will use the simplified notations $G_i:=G_{\eta_i}^L$, $\xi_i:=\xi_{\eta_i}^L$, $c_i:=\dim\xi_i$, $\tilde\xi_i:=\tilde\xi_{\eta_i}^L$, $\Gamma_i:=\Gamma T\tilde\xi_i$, $\tau_i:=\{\eta_j\mid j\le i\}$, $X_i:=X_{\tau_i}^L$, $K_i:=K_{\tau_i}^L$ and $V_i:=V_{\tau_i}^L$. 
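In the abbreviated notation just introduced, the key fibration specialises to each truncated singularity set; the following display only restates the key fibration from the previous sections in the new symbols and assumes nothing new.

```latex
% For each i = 1, ..., r the set tau_i has top singularity eta_i and
% tau_{i-1} = tau_i \setminus {eta_i}, so the key fibration reads
\[
\chi_{\tau_i}^L\colon X_i\xrightarrow{\;X_{i-1}\;}\Gamma_i=\Gamma T\tilde\xi_i
\qquad(i=1,\ldots,r).
\]
```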
\begin{prop}\label{postprop} The classifying space $X_\tau^L=X_r$ can be obtained by a sequence of fibrations $$\Gamma_r=A_1\xla{\Gamma_{r-1}}A_2\xla{\Gamma_{r-2}}\ldots\xla{\Gamma_1}A_r\xla{\Gamma_0}X_\tau^L.$$ \end{prop} \begin{prf} We have $T\tilde\xi_i=V_i/V_{i-1}$ for $i=1,\ldots,r$, so there is a cofibration $$T\tilde\xi_{r-i+1}\into V_r/V_{r-i}\to V_r/V_{r-i+1}.$$ Now putting $A_i:=\Gamma(V_r/V_{r-i})$ for $i=1,\ldots,r$ and applying that $\Gamma$ turns cofibrations into fibrations (see \cite{barec} and lemma \ref{cofibfib}), we get the proposed sequence of fibrations until $A_r$. Similarly the cofibration $V_0=T\tilde\xi_0\into V_r\to V_r/V_0$ is turned by $\Gamma$ to the last fibration in the sequence. \end{prf} \begin{crly} For any manifold $P^{n+k}$ there is a spectral sequence with first page $\Imm^{\tilde\xi_i}(P\times\R^j)$, which converges to $\Cob_\tau^L(P\times\R^*)$, that is, $$E_1^{i,j}:=\Imm^{\tilde\xi_i}(P\times\R^j)\implies\Cob_\tau^L(P\times\R^{i+j}).$$ \end{crly} \begin{prf} Applying the extraordinary homology theory $h_j:=\{S^j\cpt P,\cdot\}$ to the sequence of cofibrations $V_r\to V_r/V_0\to V_r/V_1\to\ldots\to V_r/V_{r-1}$, we get an exact couple $$\xymatrix@C=.75pc{ \displaystyle\bigoplus_{i,j}\{S^j\cpt P,V_r/V_{r-i}\}\ar[rr] && \displaystyle\bigoplus_{i,j}\{S^j\cpt P,V_r/V_{r-i}\}\ar[dl] \\ & \displaystyle\bigoplus_{i,j}\{S^j\cpt P,T\tilde\xi_i\}\ar[ul] & }$$ (with the natural indexing by $i,j$, where $V_r/V_{-1}$ is considered to be just $V_r$). Observe that we have $\{S^j\cpt P,V_r/V_{r-i}\}=[S^j\cpt P,A_i]$ and $\{S^j\cpt P,T\tilde\xi_i\}=[S^j\cpt P,\Gamma_i]$, hence the spectral sequence we obtain from this exact couple has first page $$[S^j\cpt P,\Gamma_i]=[S^j\cpt P,\Gamma T\tilde\xi_i]\cong\Imm^{\tilde\xi_i}(P\times\R^j)$$ and converges to $$[S^{i+j}\cpt P,\Gamma V_r]=[S^{i+j}\cpt P,\Gamma V_\tau^L]=[S^{i+j}\cpt P,X_\tau^L]\cong\Cob_\tau^L(P\times\R^{i+j}),$$ and this is what we wanted to prove.
\end{prf} Denote by $0=s_0<s_1<\ldots<s_l<s_{l+1}=r$ the indices where the parity of the numbers $c_i$ changes (for $i=0,\ldots,r$), that is, $$c_0\equiv\ldots\equiv c_{s_1}\not\equiv c_{s_1+1}\equiv\ldots\equiv c_{s_2}\not\equiv c_{s_2+1}\equiv\ldots~~~~\mod 2.$$ We will say that the indices $i,i'$ are in the same block, if there is a number $0\le t\le l$ for which $s_t<i,i'\le s_{t+1}$. In the following we consider the (homological) Kazarian spectral sequence $E_{i,j}^t$ (see definition \ref{kazspec}) with rational coefficients and assume that the bundles $\xi_i$ are all orientable. \begin{lemma} $~$ \begin{enumerate} \item If some indices $i,i'$ are in the same block, then the differential $d_{i,j}^t\colon E_{i,j}^t\to E_{i',j'}^t$ (if it exists) is trivial. \item The quotient spaces $K_{s_{t+1}}/K_{s_t}$ have the same rational homologies as the wedge products $T\xi_{s_t+1}\vee\ldots\vee T\xi_{s_{t+1}}$ for $t=0,\ldots,l$. \end{enumerate} \end{lemma} \begin{prf} The groups $G_i$ are compact Lie groups, hence the spaces $BG_i$ have non-trivial rational cohomologies only in even degrees. Thus (by the homological Thom isomorphism) the groups $E^1_{i,j}=H_{i+j}(T\xi_i;\Q)\cong H_{i+j-c_i}(BG_i;\Q)$ can be non-trivial only if $i+j\equiv c_i~\mod2$. Now the differential $d^t_{i,j}\colon E^t_{i,j}\to E^t_{i-t,j+t-1}$ can be non-trivial only if both groups are non-trivial, which cannot happen if $i$ and $i-t$ are in the same block. This proves the first claim; the second claim follows as well, since by the first claim all differentials between the columns $s_t<i\le s_{t+1}$ vanish, hence the rational homological spectral sequence of the filtered space $K_{s_{t+1}}/K_{s_t}$ degenerates and its rational homology is $\bigoplus_{i=s_t+1}^{s_{t+1}}H_*(T\xi_i;\Q)$, the rational homology of the wedge product $T\xi_{s_t+1}\vee\ldots\vee T\xi_{s_{t+1}}$. \end{prf} This lemma will be used in the proof of the following theorem that gives a simplification of proposition \ref{postprop} rationally. We will use $X$ and $\hat\Gamma_t$ (for $t=0,\ldots,l$) respectively to denote the rational homotopy types of $X_\tau^L$ and $\underset{s_t<i\le s_{t+1}}\prod\Gamma_i$.
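The parity count underlying the first claim of the lemma can be written out explicitly; the display below is only a sketch restating the computation from the proof above, with no new assumptions.

```latex
\begin{align*}
E^t_{i,j}\ne0 &\implies i+j\equiv c_i \pmod 2,\\
E^t_{i-t,\,j+t-1}\ne0 &\implies (i-t)+(j+t-1)=i+j-1\equiv c_{i-t} \pmod 2.
\end{align*}
% If i and i-t lie in the same block, then c_i \equiv c_{i-t} (mod 2),
% so the two conditions would make i+j and i+j-1 congruent mod 2,
% which is impossible; hence the differential d^t_{i,j} vanishes.
```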
\begin{thm} There is a simplified sequence of fibrations for $X$ of the form $$\hat\Gamma_l=\hat A_1\xla{\hat\Gamma_{l-1}}\hat A_2\xla{\hat\Gamma_{l-2}}\ldots\xla{\hat\Gamma_1}\hat A_l\xla{\hat\Gamma_0}X.$$ \end{thm} \begin{prf} The above lemma implies that the quotient spaces $V_{s_{t+1}}/V_{s_t}$ are stably rationally homotopy equivalent to $T\tilde\xi_{s_t+1}\vee\ldots\vee T\tilde\xi_{s_{t+1}}$ for $t=0,\ldots,l$. Now applying the $\Gamma$ functor to the rational homotopy type of a cofibration of the form $$V_{s_{l-t+1}}/V_{s_{l-t}}\into V_{s_{l+1}}/V_{s_{l-t}}\to V_{s_{l+1}}/V_{s_{l-t+1}},$$ we obtain a fibration with fibre $\hat\Gamma_{l-t}$. Thus denoting by $\hat A_t$ the rational homotopy type of $\Gamma(V_{s_{l+1}}/V_{s_{l-t+1}})$, we obtain the proposed sequence of fibrations similarly to the proof of proposition \ref{postprop}. \end{prf} \subsection{Classifying spaces for prim maps}\label{clasprim} In the beginning of chapter \ref{classp} we noted that the classifying spaces $\overline X_\tau^L$ of the cobordism groups of prim maps $\Prim_\tau^L(n,P^{n+k})$ (see definition \ref{primcob}) can be constructed analogously to the construction of the spaces $X_\tau^L$. However, the key fibration helps in finding a simpler way to obtain these spaces, as we will soon see (based on \cite{ctrl}). Note that the key fibration also exists for prim maps, i.e. there is a fibration $$\ol\chi_\tau^L\colon\ol X_\tau^L\xra{\ol X_{\tau'}^L}\Gamma T\tilde\zeta_\eta^L$$ using the usual notations and denoting by $\tilde\zeta_\eta^L$ the target bundle in the global normal form of prim maps around the $\eta$-stratum. Recall that prim maps can only have Morin singularities (see definition \ref{thomboar}), hence the singularity set $\tau$ can now only be $\tau_r:=\{\Sigma^0,\Sigma^{1_1},\ldots,\Sigma^{1_r}\}$ for some $r\in\N\cup\{\infty\}$, where $r=\infty$ only means the union of all $\tau_r$-s for $r\in\N$ (see example \ref{morsing}).
We will use the notation analogous to \ref{conv2} after definition \ref{cobtau} by putting $\Prim_r^L(n,P):=\Prim_{\tau_r}^L(n,P)$ and $\overline X_r^L:=\overline X_{\tau_r}^L$. \begin{prop}\label{priminf} $\overline X_\infty^L\cong\Omega\Gamma T\gamma_{k+1}^L$. \end{prop} \begin{prf} We use a theorem of Wells \cite{wells} which states that the classifying space for cobordisms of $(k+1)$-codimensional immersions with normal $L$-structures is $X_{\{\Sigma^0\}}^L=\Gamma T\gamma_{k+1}^L$. For any manifold $P^{n+k}$, there is a natural bijective correspondence between the cobordisms of $k$-codimensional prim maps to $P$ and the cobordisms of $(k+1)$-codimensional immersions to $P\times\R^1$ (both with normal $L$-structures). Hence we have $$[\cpt P,\overline X_\infty^L]\cong[S\cpt P,\Gamma T\gamma_{k+1}^L]\cong[\cpt P,\Omega\Gamma T\gamma_{k+1}^L],$$ and now the homotopic uniqueness of the classifying space implies the statement. \end{prf} If we restrict the set of allowed singularities for the cobordisms of prim maps, then we get a somewhat more complicated but similar description of the classifying space. In the rest of this section we will have a fixed number $r\in\N$. \begin{defi}\label{zetasdef} Denote by $\pi$ the projection of the sphere bundle of $(r+1)\gamma_{k+1}^L$ (the sum $\gamma_{k+1}^L\oplus\ldots\oplus\gamma_{k+1}^L$ of $(r+1)$ summands). Let $\zeta_S^r$ be the pullback of the universal bundle $\gamma_{k+1}^L$ by $\pi$, that is, we put $$\xymatrix@C=2pc{ *!<.4cm,0cm>{\zeta_S^r:=\pi^*\gamma_{k+1}^L}\ar[r]\ar[d] & \gamma_{k+1}^L\ar[d] \\ S((r+1)\gamma_{k+1}^L)\ar[r]^(.55){\pi} & BL(k+1) }$$ \end{defi} \begin{thm}\label{zetasthm} $\overline X_r^L\cong\Omega\Gamma T\zeta_S^r$. \end{thm} \begin{prf} We fix a (generic) immersion $g\colon M^n\imto P\times\R^1$ such that the composition $f:=\pr_P\circ g\colon M\to P$ is a $(\tau_r,L)$-map.
First we will produce sections $s_1,s_2,\ldots$ of the normal bundle $\nu_g$ such that for all $i$, the points where precisely the first $i$ sections vanish form the $\Sigma^{1_i}$-stratum, i.e. $\underset{j=1}{\overset i\bigcap}s_j^{-1}(0)\setminus s_{i+1}^{-1}(0)=\Sigma^{1_i}(f)$. The positive unit vector in $\R^1$ defines a constant vector field on $P\times\R^1$; this will be called vertically upwards and denoted by $\ua$. The pullback $g^*\ua$ is a section of $g^*T(P\times\R^1)$ over $M$ and it can be projected to the bundle $\nu_g=g^*T(P\times\R^1)/TM$. We define this projected section to be $s_1$ and note that $s_1^{-1}(0)$ is obviously the singular set $\Sigma(f)$. The set $\Sigma(f)$ is a submanifold of $M$ of codimension $k+1$; denote its normal bundle by $\nu_2$. Observe that the tangent spaces of the section $s_1$ at the points of $\Sigma(f)$ form the graph of a pointwise linear isomorphism $\iota_2$ between $\nu_2$ and $\nu_g|_{\Sigma(f)}$. We define a section $s'_2$ of $\nu_2$ by projecting $g^*\ua$ to $\nu_2\subset TM$. Now $\iota_2\circ s'_2$ is a section of $\nu_g|_{\Sigma(f)}$ and we can arbitrarily extend it to a section of the whole normal bundle $\nu_g$ and denote this extension by $s_2$. Clearly $\Sigma^{1,1}(f)$ (see definition \ref{thomboar}) coincides with ${s'_2}^{-1}(0)$, hence it is also the same as $s_1^{-1}(0)\cap s_2^{-1}(0)$. By further applying the same method, we obtain the sections $s_3,s_4,\ldots$ with the desired properties; in particular we have $\underset{i=1}{\overset{r+1}\bigcap}s_i^{-1}(0)=\varnothing$. These sections are not unique, but the difference of any two possible choices of $s_i$ is an arbitrary section of $\nu_g$ that vanishes on $\underset{j=1}{\overset{i-1}\bigcap}s_j^{-1}(0)$, thus the uniqueness holds up to a contractible choice. Hence the collection of the sections $(s_1,\ldots,s_{r+1})$ defines a homotopically unique section $\alpha$ of the sphere bundle $S((r+1)\nu_g)$ over $M$. 
The normal bundle $\nu_g$ has structure group $L(k+1)$, hence it can be induced from the universal bundle $\gamma_{k+1}^L$ by a map $b\colon M\to BL(k+1)$ together with a fibrewise isomorphism between $\nu_g$ and $\gamma_{k+1}^L$. This also gives a map $\beta\colon S((r+1)\nu_g)\to S((r+1)\gamma_{k+1}^L)$. The following commutative diagram shows the maps $b$ and $\beta$ and the section $\alpha$ of $S((r+1)\nu_g)$ together with the bundles $\nu_g$ and $\gamma_{k+1}^L$ and their pullbacks by the projections to bundles over $S((r+1)\nu_g)$ and $S((r+1)\gamma_{k+1}^L)$ respectively (observe that this pullback over $S((r+1)\nu_g)$ coincides with $\beta^*\zeta_S^r$). $$\xymatrix@R=.333pc{ \beta^*\zeta_S^r\ar[dr]\ar[rrr]\ar[ddddd] &&& \zeta_S^r\ar[ddddd]\ar[dl]\\ &S((r+1)\nu_g)\ar[r]^\beta\ar[ddd] & S((r+1)\gamma_{k+1}^L)\ar[ddd] & \\ \\ \\ & M\ar@/^1pc/[uuu]^\alpha\ar[r]^(.42)b & BL(k+1) &\\ \nu_g\ar[rrr]\ar@/^1pc/[uuuuu]\ar[ur] &&& \gamma_{k+1}^L\ar[ul] }$$ This implies the identity $\nu_g=(\beta\circ\alpha)^*\zeta_S^r$. Because of the homotopic uniqueness of $\alpha$ and $\beta$, we obtain that $\nu_g$ can be induced from $\zeta_S^r$ in a homotopically unique way. Applying the usual Pontryagin--Thom construction to the diagram above, we obtain a map $$[\cpt P,\ol X_r^L]\cong\Prim_r^L(P)\to[S\cpt P,\Gamma T\zeta_S^r]\cong[\cpt P,\Omega\Gamma T\zeta_S^r]$$ in the following way: To any cobordism class $[f]$ of prim $(\tau_r,L)$-maps we can assign the cobordism class $[g]\in\Imm^{\zeta_S^r}(P\times\R^1)$ (using the notation above); now \cite{wells} gives a bijective correspondence between $\Imm^{\zeta_S^r}(P\times\R^1)$ and $[S\cpt P,\Gamma T\zeta_S^r]$, hence $[g]$ corresponds to the homotopy class of a map $S\cpt P\to\Gamma T\zeta_S^r$. This can be identified with an element of $[\cpt P,\Omega\Gamma T\zeta_S^r]$ and this is defined to be the image of $[f]$. 
The map we have just described is actually a natural transformation of functors $\Prim_r^L\to[\cpt\cdot,\Omega\Gamma T\zeta_S^r]$, hence it is induced by a homotopically unique map between the classifying spaces, that is, by a map $$\varphi_r\colon\ol X_r^L\to\Omega\Gamma T\zeta_S^r.$$ Recall remark \ref{keyrmk}, which has a direct analogue for prim maps, namely that the key fibration $\ol\chi_r^L\colon\ol X_r^L\xra{\ol X_{r-1}^L}\Gamma T\tilde\zeta_{\Sigma^{1_r}}^L$ induces, in the cobordism groups that these spaces classify, the forgetful map that assigns to a cobordism class of a prim $(\tau_r,L)$-map the cobordism class of its $\Sigma^{1_r}$-stratum. The global normal forms of prim maps are quite simple (see \cite{cobmor} or \cite{hosszu}), in particular the bundle $\tilde\zeta_{\Sigma^{1_r}}^L$ coincides with $(r+1)\gamma_k^L\oplus\varepsilon^r$. Thus the key fibration has the form \begin{alignat*}2 \ol\chi_r^L\colon\ol X_r^L\xra{\ol X_{r-1}^L}\Gamma T((r+1)\gamma_k^L\oplus\varepsilon^r)&=\Omega\Gamma ST((r+1)\gamma_k^L\oplus\varepsilon^r)=\\ &=\Omega\Gamma T((r+1)(\gamma_k^L\oplus\varepsilon^1)). \end{alignat*} Our aim is to prove that $\ol X_r^L$ coincides with $\Omega\Gamma T\zeta_S^r$, which would imply that the above key fibration holds for the space $\Omega\Gamma T\zeta_S^r$. We will now work from the opposite direction and first prove the following. \medskip\begin{sclaim} There is a fibration $\psi_r\colon\Omega\Gamma T\zeta_S^r\xra{\Omega\Gamma T\zeta_S^{r-1}}\Omega\Gamma T((r+1)(\gamma_k^L\oplus\varepsilon^1))$.
\end{sclaim} \begin{sprf} The following is a trivial general observation: If $A$ is a manifold, $B\subset A$ is a submanifold with a tubular neighbourhood $U$ and normal bundle $\nu$ and $\zeta$ is a vector bundle over $A$, then there is a cofibration of Thom spaces $$T\zeta|_{A\setminus U}\into T\zeta\to T(\zeta|_B\oplus\nu).$$ We apply this with the substitutions $A:=S((r+1)\gamma_{k+1}^L)$, $B:=S\gamma_{k+1}^L$, $U:=S((r+1)\gamma_{k+1}^L)\setminus S(r\gamma_{k+1}^L)$, $\nu:=r(\gamma_k^L\oplus\varepsilon^1)$ and $\zeta:=\zeta_S^r$. This yields a cofibration $$T\zeta_S^{r-1}\into T\zeta_S^r\to T((r+1)(\gamma_k^L\oplus\varepsilon^1))$$ and by applying the functor $\Omega\Gamma$ (which turns cofibrations into fibrations) we get the desired fibration. \end{sprf} Now the maps $\varphi_{r-1}$ and $\varphi_r$ fit into a diagram as shown below. If we prove that this diagram commutes, then an induction on the number $r$ (together with the 5-lemma) implies that $\varphi_r$ is a homotopy equivalence $\ol X_r^L\cong\Omega\Gamma T\zeta_S^r$ (the starting step is $r=0$, where the diagram degenerates to a map between trivial fibrations). $$\xymatrix{ \ol X_{r-1}^L\ar[d]^{\varphi_{r-1}}\ar[r] & \ol X_r^L\ar[d]^{\varphi_r}\ar[r]^(.29){\ol\chi_r^L} & \Omega\Gamma T((r+1)(\gamma_k^L\oplus\varepsilon^1))\ar[d]^\cong\\ \Omega\Gamma T\zeta_S^{r-1}\ar[r] & \Omega\Gamma T\zeta_S^r\ar[r]^(.33){\psi_r} & \Omega\Gamma T((r+1)(\gamma_k^L\oplus\varepsilon^1)) }$$ The left-hand square trivially commutes. We prove the commutativity of the right-hand square by showing that the diagram of natural transformations of functors induced by these maps commutes. The space $\Omega\Gamma T((r+1)(\gamma_k^L\oplus\varepsilon^1))$ classifies cobordisms of immersions to $P\times\R^1$ with normal bundles induced from $(r+1)(\gamma_k^L\oplus\varepsilon^1)$ (for any manifold $P$). 
When one applies (the transformation induced by) the map $\ol\chi_r^L$ to some $[f]\in\Prim_r^L(P)$, the normal structure induced from $(r+1)(\gamma_k^L\oplus\varepsilon^1)$ is obtained by the splitting of the normal bundle of $\Sigma^{1_r}(f)$ to $r+1$ bundles canonically isomorphic to $\nu_g|_{\Sigma^{1_r}(f)}$ (with the same notation as in the beginning of the proof). On the other hand, applying $\psi_r\circ\varphi_r$ gives the normal structure using the sections $s_1,\ldots,s_{r+1}$ as described earlier. Now these two normal structures may be twisted with respect to each other, but they can be obtained uniquely up to homotopy from each other. Hence the above diagram indeed commutes. \end{prf} The following is analogous to the key fibration and also very useful for the investigation of the classifying spaces $\ol X_r^L$. \begin{thm}\label{prclthm} $~$ \begin{enumerate} \item\label{egy} There is a fibration $\overline X_r^L\xra{\Omega^2\Gamma T((r+2)\gamma_{k+1}^L)}\Omega\Gamma T\gamma_{k+1}^L$. \item\label{ketto} The homotopy exact sequence of this fibration can be identified with that of the pair $(\overline X_\infty^L,\overline X_r^L)$, where we use $\overline X_\infty^L=\liminfty r\overline X_r^L$. \end{enumerate} \end{thm} \begin{prf} \emph{Proof of \ref{egy}.}\enspace\ignorespaces We pull back the vector bundle $\gamma_{k+1}^L$ to the disk bundle $D((r+1)\gamma_{k+1}^L)$ by its projection. This pullback is such that \begin{itemize} \item[(i)] its restriction to $S((r+1)\gamma_{k+1}^L)$ is $\zeta_S^r$, \item[(ii)] its Thom space is homotopy equivalent to $T\gamma_{k+1}^L$, \item[(iii)] its disk bundle is $D((r+2)\gamma_{k+1}^L)$. \end{itemize} Now we can apply (as in the proof above) the trivial observation that for a manifold $A$, a submanifold $B\subset A$ with a tubular neighbourhood $U$ and normal bundle $\nu$ and a vector bundle $\zeta$ over $A$, there is a cofibration of Thom spaces $T\zeta|_{A\setminus U}\into T\zeta\to T(\zeta|_B\oplus\nu)$. 
Hence we obtain a cofibration $T\zeta_S^r\into T\gamma_{k+1}^L\to T((r+2)\gamma_{k+1}^L)$ that is turned by the functor $\Omega\Gamma$ to the fibration $$\ol X_r^L=\Omega\Gamma T\zeta_S^r\to\Omega\Gamma T\gamma_{k+1}^L\to\Omega\Gamma T((r+2)\gamma_{k+1}^L).$$ Recall that the resolvent of such a fibration is an infinite continuation of the above sequence of maps to the left with the next entry being the loop space of the first entry, i.e. $\Omega^2\Gamma T((r+2)\gamma_{k+1}^L)$. This means we obtain the proposed fibration $$\Omega^2\Gamma T((r+2)\gamma_{k+1}^L)\to\ol X_r^L\to\Omega\Gamma T\gamma_{k+1}^L.$$ \medskip\noindent\emph{Proof of \ref{ketto}.}\enspace\ignorespaces We use proposition \ref{priminf}: $\Omega\Gamma T\gamma_{k+1}^L\cong\ol X_\infty^L$. We will construct a map $\varphi\colon\pi_{n+k+1}(\ol X_\infty^L,\ol X_r^L)\to\pi_{n+k+2}(\Gamma T((r+2)\gamma_{k+1}^L))=\pi_{n+k}(\Omega^2\Gamma T((r+2)\gamma_{k+1}^L))$ such that the diagram $$\xymatrix{ \ldots\ar[r] & \pi_{n+k+1}(\ol X_\infty^L)\ar[r]\ar@{=}[d] & \pi_{n+k+1}(\ol X_\infty^L,\ol X_r^L)\ar[r]\ar[d]^\varphi & \pi_{n+k}(\ol X_r^L)\ar[r]\ar@{=}[d] & \ldots \\ \ldots\ar[r] & \pi_{n+k+1}(\Omega\Gamma T\gamma_{k+1}^L)\ar[r] & \pi_{n+k}(\Omega^2\Gamma T((r+2)\gamma_{k+1}^L))\ar[r] & \pi_{n+k}(\ol X_r^L)\ar[r] & \ldots }$$ connecting the two long exact sequences commutes. By the 5-lemma, the commutativity of this diagram implies the statement of the theorem, hence it only remains to construct this map $\varphi$. The relative homotopy group $\pi_{n+k+1}(\ol X_\infty^L,\ol X_r^L)$ can clearly be identified with the relative cobordism group of those prim maps $F\colon(M^{n+1},\partial M^{n+1})\to(\R^{n+k}\times\R_+,\R^{n+k})$ (with normal $L$-structures) for which the restriction $F|_{\partial M}$ is a $\tau_r$-map.
We define $\varphi$ to assign to the cobordism class of such a map $F$ the cobordism class in $\Imm^{(r+2)\gamma_{k+1}^L}(\R^{n+k+2})$ of the immersion lift of its $\Sigma^{1_{r+1}}$-stratum (the normal bundle of this immersion splits to the direct sum of $r+1$ isomorphic bundles as before). The classifying space of the latter cobordism group is $\Gamma T((r+2)\gamma_{k+1}^L)$ and this $\varphi$ is clearly such that the diagram above is commutative, thus the proof is complete. \end{prf} \begin{crly} Suppose that the universal bundle $\gamma_{k+1}^L$ is orientable (i.e. the linear group $L(k+1)$ is positive). \begin{enumerate} \item\label{ka} If the rational Euler class $e(\gamma_{k+1}^L)$ is non-trivial, then we have $$\Omega\Gamma T\gamma_{k+1}^L\congq\ol X_r^L\times\Omega\Gamma T((r+2)\gamma_{k+1}^L).$$ \item\label{ki} If the rational Euler class $e(\gamma_{k+1}^L)$ is trivial, then we have $$\ol X_r^L\congq\Omega^2\Gamma T((r+2)\gamma_{k+1}^L)\times\Omega\Gamma T\gamma_{k+1}^L.$$ \end{enumerate} \end{crly} \begin{prf} Note that $H^*(BL(k+1);\Q)$ has no zero divisors, since it is a subring of the polynomial ring $H^*(BT(k+1);\Q)$ where $T(k+1)$ is the maximal torus in $L(k+1)$. Now if the Euler class of $\gamma_{k+1}^L$ is non-trivial (resp. trivial), then so is the Euler class of $(r+1)\gamma_{k+1}^L$, hence the Gysin sequence yields that the projection $\pi\colon S((r+1)\gamma_{k+1}^L)\to BL(k+1)$ induces an epimorphism (resp. monomorphism) in the cohomologies with rational coefficients. \medskip\noindent\emph{Proof of \ref{ka}.}\enspace\ignorespaces The above observation implies that the map of Thom spaces $T\zeta_S^r\to T\gamma_{k+1}^L$ induced by $\pi$ also induces an epimorphism in rational cohomology. 
Thus the cofibration $T\zeta_S^r\into T\gamma_{k+1}^L\to T((r+2)\gamma_{k+1}^L)$ rationally homologically splits, and so the homotopy long exact sequence of the fibration $\Omega\Gamma T\zeta_S^r\to\Omega\Gamma T\gamma_{k+1}^L\to\Omega\Gamma T((r+2)\gamma_{k+1}^L)$ splits rationally as well. Since the spaces involved are H-spaces, hence rationally products of Eilenberg--MacLane spaces, and $\Omega\Gamma T\zeta_S^r\cong\ol X_r^L$, this implies that we have $$\Omega\Gamma T\gamma_{k+1}^L\congq\ol X_r^L\times\Omega\Gamma T((r+2)\gamma_{k+1}^L).$$ \medskip\noindent\emph{Proof of \ref{ki}.}\enspace\ignorespaces Extend the cofibration $T\zeta_S^r\into T\gamma_{k+1}^L\to T((r+2)\gamma_{k+1}^L)$ to the right by the Puppe sequence. We obtain $$T\zeta_S^r\to T\gamma_{k+1}^L\to T((r+2)\gamma_{k+1}^L)\to ST\zeta_S^r\to ST\gamma_{k+1}^L\to\ldots$$ and the map $ST\zeta_S^r\to ST\gamma_{k+1}^L$ induced by $\pi$ induces a monomorphism in rational cohomology, thus the cofibration $T((r+2)\gamma_{k+1}^L)\to ST\zeta_S^r\to ST\gamma_{k+1}^L$ rationally homologically splits, and so the homotopy long exact sequence of the fibration $\Omega^2\Gamma T((r+2)\gamma_{k+1}^L)\to\Omega^2\Gamma ST\zeta_S^r\to\Omega^2\Gamma ST\gamma_{k+1}^L$ splits rationally as well. Now the statement follows again from the spaces being H-spaces and the equalities $\Omega^2\Gamma ST\zeta_S^r=\Omega\Gamma T\zeta_S^r\cong\ol X_r^L$ and $\Omega^2\Gamma ST\gamma_{k+1}^L=\Omega\Gamma T\gamma_{k+1}^L$. \end{prf} \begin{ex} The conditions of \ref{ka} and \ref{ki} above are satisfied in the case of $L=\SO$ respectively for $k$ odd and $k$ even. 
\end{ex} \begin{rmk} The corollary above also has a nice geometric interpretation (similarly to remark \ref{keyrmk} about the geometric meaning of the key fibration) using part \ref{ketto} of theorem \ref{prclthm}: In the case of \ref{ka}, we obtain that the sequence $$0\to\pi_{n+k}(\ol X_r^L)\to\pi_{n+k}(\ol X_\infty^L)\to\pi_{n+k}(\ol X_\infty^L,\ol X_r^L)\to0$$ is exact at the middle term and has finite homology everywhere else. This means that a prim map $f\colon M^n\to\R^{n+k}$ is rationally prim cobordant to a map having at most $\Sigma^{1_r}$ singularities iff $f|_{\ol{\Sigma^{1_{r+1}}(f)}}$ is rationally prim null-cobordant. Similarly, in the case of \ref{ki} the same is true for the sequence $$0\to\pi_{n+k+1}(\ol X_\infty^L,\ol X_r^L)\to\pi_{n+k}(\ol X_r^L)\to\pi_{n+k}(\ol X_\infty^L)\to0.$$ This means that any prim map is rationally prim cobordant to a map having at most $\Sigma^{1_r}$ singularities (for any $r$). Moreover, if a prim map $f\colon M^n\to\R^{n+k}$ has at most $\Sigma^{1_r}$ singularities and represents $0$ in $\Prim_\infty^L(n,k)\otimes\Q$, then its rational prim $(\tau_r,L)$-cobordism class is completely determined by the rational prim cobordism class of $$F|_{\ol{\Sigma^{1_{r+1}}(F)}}\colon\ol{\Sigma^{1_{r+1}}(F)}\to\R^{n+k+1}$$ for any prim map $F\colon(W^{n+1},\partial W^{n+1})\to(\R^{n+k}\times\R_+,\R^{n+k})$ that it bounds (i.e. for which $\partial W=M$ and $F|_M=f$). \end{rmk} \chapter{Computation of cobordism groups}\label{compcobgr} In the present chapter we gather those cobordism groups $\Cob_\tau^L(n,P^{n+k})$ (and also $\Prim_\tau^L(n,P^{n+k})$) for which (complete or partial) computations already exist in the literature, using the tools of the preceding chapters. Although we considered quite general sets of singularities earlier, it should be noted that these computations seem to be rather difficult and most of the cobordism groups computed are such that only Morin singularities are allowed (i.e.
the singularity set $\tau$ only contains singularities of coranks $0$ and $1$; see definition \ref{thomboar}). Recall example \ref{morsing} which implies that if only Morin singularities are allowed, then the singularity set is $\tau_r:=\{\Sigma^0,\Sigma^{1_1},\ldots,\Sigma^{1_r}\}$ for some $r\in\N\cup\{\infty\}$. Using the notation described in \ref{conv2} after definition \ref{cobtau}, throughout this chapter we put (for any stable linear group $L$) $\Cob_r^L(n,P):=\Cob_{\tau_r}^L(n,P)$, $X_r^L:=X_{\tau_r}^L$, $K_r^L:=K_{\tau_r}^L$, $V_r^L:=V_{\tau_r}^L$, $\nu_r^L:=\nu_{\tau_r}^L$, $\Prim_r^L(n,P):=\Prim_{\tau_r}^L(n,P)$, $\overline X_r^L:=\overline X_{\tau_r}^L$, $\ol K_r^L:=\ol K_{\tau_r}^L$, $\ol\nu_r^L:=\ol\nu_{\tau_r}^L$ (overlined letters denote analogous notions to the non-overlined ones for prim maps) and also $G_r^L:=G_{\Sigma^{1_r}}^L$, $\xi_r^L:=\xi_{\Sigma^{1_r}}^L$, $\tilde\xi_r^L:=\tilde\xi_{\Sigma^{1_r}}^L$, $\ol G_r^L:=\ol G_{\Sigma^{1_r}}^L$, $\zeta_r^L:=\zeta_{\Sigma^{1_r}}^L$, $\tilde\zeta_r^L:=\tilde\zeta_{\Sigma^{1_r}}^L$ (the latter three respectively are the analogues of the notions before them for prim maps). If $L=\O$, then we omit it from the superscript. In section \ref{arbco} we present computations of cobordism groups in general; those on unoriented and cooriented cobordisms are taken from \cite{szszt} and \cite{hosszu} respectively. In section \ref{larco}, based on \cite{eszt} and \cite{2k+2}, we restrict to the case of maps where the dimension of the target is considerably larger than that of the source; the point of this is that generic maps with this property do not have too complicated singularities. Similarly, section \ref{1co} also considers maps of restricted codimension, namely $1$-codimensional maps; this was described in \cite{nulladik}. Then in section \ref{prico} we turn to cobordisms of prim maps and present results from \cite{nszt}, \cite{nehez} and \cite{ctrl}.
\section{Arbitrary cobordisms}\label{arbco} The main tool that we will use in this section is the cohomological Kazarian spectral sequence with coefficients in a fixed ring $R$ which contains $\frac12$ (i.e. in which $2$ is invertible). We will compute this spectral sequence for unoriented and cooriented prim maps, from which the sequence for unoriented and cooriented usual (non-prim) Morin maps can be obtained by lemma \ref{cover}. Using this, we compute the ranks of the free parts of the groups $\Prim_r(P)$, $\Cob_r(P)$, $\Prim_r^\SO(P)$ and $\Cob_r^\SO(P)$ for any manifold $P$ and also prove that unoriented cobordism groups are finite $2$-primary in many cases. The following statement is well-known (see for example \cite{charclass}), hence we do not give a proof here. \begin{lemma}\label{cover} Let $\tilde A$ be a CW-complex equipped with a cellular $\Z_2$-action, put $A:=\tilde A/\Z_2$ and let $q\colon\tilde A\to A$ be the quotient map. Then the homomorphism $q^*\colon H^*(A;R)\to H^*(\tilde A;R)$ is injective and gives an isomorphism between $H^*(A;R)$ and the $\Z_2$-invariant part $H^*(\tilde A;R)^{\Z_2}$ of $H^*(\tilde A;R)$. \end{lemma} \begin{rmk}\label{coverr} In addition to the previous lemma, if $\tilde A'$ is another space with a $\Z_2$-action as above and $f\colon\tilde A'\to\tilde A$ is a $\Z_2$-equivariant map, then $f^*\colon H^*(\tilde A;R)\to H^*(\tilde A';R)$ respects the eigenspace decomposition of the $\Z_2$-action. This implies that if $\tilde A_0\subset\tilde A_1\subset\ldots$ is a filtration of $\tilde A$ by $\Z_2$-invariant subspaces and $A_i:=\tilde A_i/\Z_2$, then the cohomological spectral sequence of the filtration $A_0\subset A_1\subset\ldots$ of $A$ can be identified with the $\Z_2$-invariant part of that of $\tilde A$.
\end{rmk} We will use the above lemma and remark with the substitutions $\tilde A:=\ol K_r^L$ and $A:=K_r^L$ and the $\Z_2$-action corresponding to changing the orientation of the kernel line bundle (recall remark \ref{kerbundle}: Prim maps are precisely the Morin maps with trivialised kernel line bundles). \begin{rmk}\label{crutial} The following observations will be crucial in the computations of the various Kazarian spectral sequences in the next two subsections. \begin{enumerate} \item The Kazarian space $\ol K_\infty^L$ for all $k$-codimensional prim maps coincides with the Kazarian space for $(k+1)$-codimensional immersions with normal $L$-structures, which is $BL(k+1)$, since prim maps and their immersion lifts can be identified. Moreover, the virtual normal bundle $\ol\nu_\infty^L$ is stably equivalent to the canonical bundle $\gamma_{k+1}^L$. \item\label{truncate} If we truncate the Kazarian spectral sequence for any type of Morin maps at the $r$-th column, then we obtain the Kazarian spectral sequence for the same type of maps with at most $\Sigma^{1_r}$ singularities. \end{enumerate} \end{rmk} \begin{rmk}\label{rk} We claimed above that the ranks of the cobordism groups of unoriented and cooriented prim and usual Morin maps to any manifold $P$ can be computed from the respective Kazarian spectral sequences. The reason for this is proposition \ref{cobhomo} (and its analogue for prim cobordisms), which can be rewritten as \begin{gather*} \Cob_r^L(P)\otimes\Q\cong\displaystyle\bigoplus_{i=1}^\infty H_i(\cpt P;\Q)\otimes H^{i-k}(K_r^L;\tilde\Q_{\nu_r^L})\\ \text{and}~~~~\Prim_r^L(P)\otimes\Q\cong\displaystyle\bigoplus_{i=1}^\infty H_i(\cpt P;\Q)\otimes H^{i-k}(\ol K_r^L;\tilde\Q_{\ol\nu_r^L}). 
\end{gather*} Hence computing the ranks of the cobordism groups is equivalent to computing the ranks of the cohomologies of the respective Kazarian spaces twisted by the orientations of the respective virtual normal bundles (and of course the homologies of $\cpt P$, but it is natural that this information is also needed). In particular the ranks of the cobordism groups $\Cob_r^L(n,k)$ and $\Prim_r^L(n,k)$ are the same as the ranks of the $n$-th twisted cohomology groups of $K_r^L$ and $\ol K_r^L$ respectively. \end{rmk} \subsection{Unoriented cobordism groups} We will now study the unoriented case $L(k):=\O(k)$ and we use $\tilde R:=\tilde R_{\ol\nu_\infty}$ to denote coefficients in the fixed ring $R$ twisted by the orientation of the universal normal bundle. \begin{lemma}\label{twisto} The twisted cohomology of the space $B\O(k+1)$ is \begin{enumerate} \item $H^*(B\O(k+1);\tilde R)\cong0$, if $k$ is even, \item $H^*(B\O(k+1);\tilde R)=e\smallsmile R\big[p_1,\ldots,p_{\frac{k+1}2}\big]$, if $k$ is odd, where $e$ is the twisted Euler class and the $p_i$ are the Pontryagin classes of $\gamma_{k+1}$. \end{enumerate} \end{lemma} \begin{prf} We will use the double cover $\pi\colon B\SO(k+1)\to B\O(k+1)$. Applying the Leray spectral sequence to this covering, we get $$H^*(B\SO(k+1);R)\cong H^*(B\O(k+1);\pi_*(R)),$$ where $\pi_*(R)$ means the pushforward of the untwisted local coefficient system. This is locally $R\oplus R$ at each point and the non-trivial loop (homotopy class) acts on it by interchanging the summands, hence it can be decomposed as the sum $\pi_*(R)=R\oplus\tilde R_\pi$ of the invariant and the anti-invariant part.
Thus we have $$H^*(B\SO(k+1);R)\cong H^*(B\O(k+1);R)\oplus H^*(B\O(k+1);\tilde R_\pi).$$ Now using that the local systems $\tilde R_\pi$ and $\tilde R$ coincide (since $w_1(\ol\nu_\infty)=w_1(\pi)$), we get the statement of the lemma from the well-known facts $$H^*(B\SO(k+1);R)=\begin{cases} R\big[p_1,\ldots,p_{\frac k2}\big],&\text{if }k\text{ is even}\\ R\big[p_1,\ldots,p_{\frac{k+1}2},e\big]\big/\big(p_{\frac{k+1}2}-e^2\big),&\text{if }k\text{ is odd} \end{cases}$$ and $H^*(B\O(k+1);R)=R\big[p_1,\ldots,p_{\lfloor\frac{k+1}2\rfloor}\big]$. \end{prf} This lemma describes where the twisted Kazarian spectral sequence for unoriented prim maps converges. Now we will describe the spectral sequence itself. \begin{prop}\label{punorsps} If $\ol E_*^{*,*}$ is the twisted cohomological Kazarian spectral sequence for $k$-codimensional unoriented prim maps, then it has the following form: \begin{enumerate} \item If $k=2l$ is even, then for $i=0,1,\ldots$ the columns $\ol E_1^{2i,*}$ and $\ol E_1^{2i+1,*}$ in the first page are the ring $R[p_1,\ldots,p_l]$ shifted by $(2i+1)k$, i.e. $\ol E_1^{2i,j}\cong\ol E_1^{2i+1,j}$ is the degree-$(j-(2i+1)k)$ part of this polynomial ring, the differential $d_1$ is an isomorphism $\ol E_1^{2i,*}\cong\ol E_1^{2i+1,*}$, hence the second page is $\ol E_2^{*,*}=\ol E_\infty^{*,*}\cong0$. \item If $k=2l+1$ is odd, then all differentials vanish and for $i=0,1,\ldots$ the columns $\ol E_1^{2i,*}=\ol E_\infty^{2i,*}$ are $0$ and the columns $\ol E_1^{2i+1,*}=\ol E_\infty^{2i+1,*}$ are the ring $R[p_1,\ldots,p_l]$ shifted by $(2i+1)k$. \end{enumerate} \end{prop} \begin{prf} The first page of this Kazarian spectral sequence (similarly to definition \ref{kazspec}) is $\ol E_1^{i,j}=H^{i+j}(T\zeta_i;\tilde R)$.
Since the universal bundle $\zeta_i$ coincides with $i\gamma_k\oplus\varepsilon^i$ over $B\O(k)$ (see \cite{hosszu}) and the virtual normal bundle $\ol\nu_i$ is stably equivalent to $\gamma_k$, we have $w_1(\zeta_i)=iw_1(\gamma_k)=iw_1(\ol\nu_i)$. Hence the Thom isomorphism yields $$\ol E_1^{i,*}=H^{i+*}(T\zeta_i;\tilde R)\cong H^{*-ik}(B\O(k);\tilde R^{i+1}),$$ where $\tilde R^{i+1}$ means the $(i+1)$-times tensor product $\tilde R\otimes\ldots\otimes\tilde R$. Now for $k=2l$ or $k=2l+1$, the untwisted cohomology $H^*(B\O(k);R)$ is $R[p_1,\ldots,p_l]$ and the twisted cohomology $H^*(B\O(k);\tilde R)$ is $0$ for $k=2l+1$ and isomorphic to $R[p_1,\ldots,p_l]$ shifted by $k$ for $k=2l$. This implies everything we claimed about the first page of the sequence and it is not hard to see from this that for odd $k$, all differentials vanish. It remains to prove that for even $k$, the differential $d_1$ is an isomorphism between the columns $\ol E_1^{2i,*}$ and $\ol E_1^{2i+1,*}$ for all $i$. By lemma \ref{twisto}, we know that in this case all terms $\ol E_\infty^{i,j}$ are $0$, hence the entries of lowest degree in the $0$-th column must be mapped onto those in the next column by $d_1$ isomorphically because this is the only chance for these entries to disappear. Thus $d_1(e)$ (where $e\in e\smallsmile R[p_1,\ldots,p_l]=\ol E_1^{0,*}$) is the twisted Thom class $u(\zeta_1)$. \medskip\begin{sclaim} For any monomial $p_I$ of Pontryagin classes and $i=0,1,\ldots$, we have $d_1^{2i,*}(e\smallsmile p_I)=d_1^{2i,*}(e)\smallsmile p_I$. \end{sclaim} \begin{sprf} The differential $d_1^{2i,*}\colon\ol E_1^{2i,*}\to\ol E_1^{2i+1,*}$ is by definition the boundary map $\delta$ in the exact sequence of the triple $(\ol K_{2i+1},\ol K_{2i},\ol K_{2i-1})$.
The Pontryagin classes $p_1,\ldots,p_l$ used in $\ol E_1^{2i,*}$ can be identified with those of the universal bundle $\gamma_k$ restricted to the base space of $D(2i\gamma_k\oplus\varepsilon^{2i})\subset\ol K_\infty\cong B\O(k+1)$ (recall that the Kazarian space $\ol K_\infty$ is constructed from blocks $D\zeta_r=D(r\gamma_k\oplus\varepsilon^r)$), thus the map $j^*$ induced by the inclusion $j\colon\ol K_{2i}\into\ol K_{2i+1}$ maps them into each other. Now we have $$d_1^{2i,*}(e\smallsmile p_I)=\delta(e\smallsmile p_I)=\delta(e\smallsmile j^*(p_I))=\delta(e)\smallsmile p_I+e\smallsmile\delta(j^*(p_I))=\delta(e)\smallsmile p_I,$$ as claimed. \end{sprf} Hence the differential $d_1$ is an isomorphism between the first two columns and then an induction shows the same for the columns $2i$ and $(2i+1)$ for all $i$. \end{prf} \begin{crly}\label{unorsps} If $E_*^{*,*}$ is the twisted cohomological Kazarian spectral sequence for $k$-codimensional unoriented Morin maps, then it has the following form: \begin{enumerate} \item If $k=2l$ is even, then for $i=0,1,\ldots$ the columns $E_1^{4i,*}$ and $E_1^{4i+1,*}$ in the first page are the ring $R[p_1,\ldots,p_l]$ shifted by $(4i+1)k$, the columns $E_1^{4i+2,*}$ and $E_1^{4i+3,*}$ vanish, the differential $d_1$ is an isomorphism $E_1^{4i,*}\cong E_1^{4i+1,*}$, hence the second page is $E_2^{*,*}=E_\infty^{*,*}\cong0$. \item If $k=2l+1$ is odd, then the whole spectral sequence is constant $0$. \end{enumerate} \end{crly} \begin{prf} By lemma \ref{cover}, we need to understand the $\Z_2$-action on the spectral sequence $\ol E_*^{*,*}$ corresponding to changing the orientation of the kernel line bundle to obtain the spectral sequence $E_*^{*,*}$ with coefficients twisted by the orientation of $\nu_\infty$. Considering the $i$-th column (where the first page contains the cohomologies of $B\O(k)$), we have the coefficient system $\tilde R_{\nu_\infty}\otimes\tilde R_{\xi_i}=\tilde R_{\tilde\xi_i}$. 
It was proved in \cite{rsz} and \cite{rrthesis} that the maximal compact subgroup $G_i$ of $\Aut_\AA\Sigma^{1_i}$ (see section \ref{semiglob}) is isomorphic to $\O(1)\times\O(k)$ with target representation $$\tilde\lambda(\varepsilon,A)=\varepsilon^{i+1}\oplus A\oplus\left\lceil\frac{i-1}2\right\rceil1\oplus\left\lfloor\frac{i-1}2\right\rfloor\varepsilon\oplus\left\lfloor\frac{i}2\right\rfloor A\oplus\left\lceil\frac{i}2\right\rceil\varepsilon A$$ for $\varepsilon\in\O(1)$ and $A\in\O(k)$. The $\Z_2$-action discussed above is given by the correspondence $\varepsilon\mapsto-\varepsilon$ on $\O(1)$. For $k=2l$ this action changes the orientation of $\tilde\lambda$ exactly if $i+1+\lfloor\frac{i-1}2\rfloor$ is odd, i.e. for $i\equiv2,3~\mod4$. This means by remark \ref{coverr} that for $i\equiv0,1~\mod4$ the columns $E_1^{i,*}$ are the same as for prim maps with the differential $d_1$ also being the same; and for $i\equiv2,3~\mod4$ the columns $E_1^{i,*}$ vanish. For $k=2l+1$ the above action changes the orientation of $\tilde\lambda$ exactly if $i+1+\lfloor\frac{i-1}2\rfloor+\lceil\frac{i}2\rceil$ is odd, i.e. if $i$ is odd. Hence by remark \ref{coverr} the columns $E_1^{i,*}$ are the same as for prim maps for $i$ even and vanish for $i$ odd, which means that the whole first page of this spectral sequence is $0$. \end{prf} With the help of the spectral sequences described above, we are able to prove that in many cases the cobordism groups $\Prim_r(n,k)$ and $\Cob_r(n,k)$ are finite $2$-primary, i.e. they are elements of the Serre class $\CC_2$. Moreover, we are also able to compute the ranks of all cobordism groups $\Prim_r(P)$ and $\Cob_r(P)$ by considering the case $R:=\Q$; see remark \ref{rk}. \begin{thm}\label{punorthm} $~$ \begin{enumerate} \item\label{u1} The group $\Prim_r(n,k)$ is finite $2$-primary, if $k$ is even and $r$ is either an odd number or $\infty$. 
\item\label{u2} If both $k$ and $r$ are even, then we have $$\Prim_r(n,k)\otimes\Q\cong H^{n-r(k+1)-k}(B\O(k);\Q),$$ hence its rank is the number of partitions $p_{\frac k2}\big(\frac{n-r(k+1)-k}4\big)$, in particular it is $0$ if $n-r(k+1)-k$ is not a multiple of $4$. \item\label{u3} If $k=2l+1$ is odd and $r=\infty$, then we have $$\Prim_\infty(n,k)\otimes\Q\cong H^{n-k-1}(B\O(k+1);\Q),$$ hence its rank is $p_{\frac{k+1}2}\big(\frac{n-k-1}4\big)$, in particular it is $0$ if $n-k-1$ is not a multiple of $4$. \item\label{u4} If $k=2l+1$ is odd and $r<\infty$, then $\Prim_r(n,k)\otimes\Q$ is also the degree-$(n-k-1)$ part of a polynomial ring, namely $$\Prim_r(n,k)\otimes\Q\cong\left[\Q[p_1,\ldots,p_{l+1}]\big/\big(p_{l+1}^{\lceil\frac{r}2\rceil}\big)\right]^{n-k-1}.$$ \end{enumerate} \end{thm} \begin{prf} \emph{Proof of \ref{u1}.}\enspace\ignorespaces The Kazarian spectral sequence from theorem \ref{punorsps} (together with part \ref{truncate} of remark \ref{crutial}) yields that in this case we have $H^*(\ol K_r;\tilde R)\cong0$ for any ring $R$ where $2$ is invertible. By the Thom isomorphism we have $H^{*}(\ol K_r;\tilde R)\cong H^{*+k}(T\ol\nu_r;R)$, hence $T\ol\nu_r$ has trivial (co)homology groups modulo the class $\CC_2$. Now Serre's mod-$\CC_2$ Hurewicz theorem from \cite{serre} implies that the stable homotopy groups $\pi_*^s(T\ol\nu_r)$ are also trivial modulo $\CC_2$, i.e. they are finite $2$-primary. Thus we have $$\Prim_r(n,k)\cong\pi_{n+k}(\ol X_r)\cong\pi_{n+k}^s(T\ol\nu_r)\in\CC_2.$$ \medskip\noindent\emph{Proof of \ref{u2}.}\enspace\ignorespaces Again we use part \ref{truncate} of remark \ref{crutial}; we obtain that in the twisted rational Kazarian spectral sequence $\ol E_2^{r,*}=\ol E_\infty^{r,*}$ is the only non-zero column in the second page and it is the ring $\Q\big[p_1,\ldots,p_{\frac k2}\big]=H^*(B\O(k);\Q)$ shifted by $(r+1)k$. 
Hence the twisted rational cohomology ring of the Kazarian space is $H^*(\ol K_r;\tilde\Q)\cong H^{*-r(k+1)-k}(B\O(k);\Q)$ and we get the statement of the theorem by remark \ref{rk}. \medskip\noindent\emph{Proof of \ref{u3} and \ref{u4}.}\enspace\ignorespaces In this case the twisted rational Kazarian spectral sequence yields $$H^n(\ol K_\infty;\tilde\Q)\cong\bigoplus_{i=0}^\infty\ol E_\infty^{2i+1,n-2i-1}\cong\bigoplus_{i=0}^\infty\big[\Q[p_1,\ldots,p_l]\big]^{n-2i(k+1)-k-1}.$$ If we introduce a new variable $p_{l+1}$ of degree $2k+2$ and add it to the previous variable set $\{p_1,\ldots,p_l\}$, then we obtain that the direct sum above is the degree-$(n-k-1)$ part of the polynomial ring $\Q[p_1,\ldots,p_{l+1}]$. This ring is isomorphic to $H^*(B\O(k+1);\Q)$, thus $\Prim_\infty(n,k)\otimes\Q\cong H^n(\ol K_\infty;\tilde\Q)$ is of the form as we claimed. If we truncate the spectral sequence at the $r$-th column to obtain $\Prim_r(n,k)$, then the new variable $p_{l+1}$ has to be such that its $i$-th power is $0$ if $2i+1>r$, i.e. if $i\ge\lceil\frac r2\rceil$, hence we obtain the statement of the theorem again. \end{prf} Completely analogously one can prove the following. \begin{thm}\label{unorthm} $~$ \begin{enumerate} \item The group $\Cob_r(n,k)$ is finite $2$-primary, if either $r=\infty$ or $k$ is odd or $k$ is even and $r$ is not divisible by $4$. \item If $k$ is even and $r$ is divisible by $4$, then we have $$\Cob_r(n,k)\otimes\Q\cong H^{n-r(k+1)-k}(B\O(k);\Q),$$ hence its rank is the number of partitions $p_{\frac k2}\big(\frac{n-r(k+1)-k}4\big)$, in particular it is $0$ if $n-k$ is not a multiple of $4$. \end{enumerate} \end{thm} \subsection{Cooriented cobordism groups} The present subsection is analogous to the previous subsection, the only change being that now we consider the cooriented case $L(k):=\SO(k)$ and with $R:=\Q$. 
We will use $e$ and $p_i$ respectively to denote the rational Euler class and Pontryagin classes of $\gamma_k^\SO$ and we put $A:=\Q[p_1,\ldots,p_l]$ (where the number $l$ will be defined later). \begin{prop}\label{pcoorsps} If $\ol E_*^{*,*}$ is the cohomological Kazarian spectral sequence for $k$-codimensional cooriented prim maps, then it has the following form: \begin{enumerate} \item If $k=2l$ is even, then for $i=0,1,\ldots$ the column $\ol E_1^{i,*}$ in the first page is the ring $(e^i\smallsmile A)\oplus(e^{i+1}\smallsmile A)$, the differential $d_1$ maps $e^{i+1}\smallsmile A\subset\ol E_1^{i,*}$ to $e^{i+1}\smallsmile A\subset\ol E_1^{i+1,*}$ isomorphically, hence the second page is $\ol E_2^{0,*}=\ol E_\infty^{0,*}\cong A$ and $\ol E_2^{i,*}=\ol E_\infty^{i,*}\cong0$ for $i>0$. \item If $k=2l+1$ is odd, then all differentials vanish and for $i=0,1,\ldots$ the columns $\ol E_1^{i,*}=\ol E_\infty^{i,*}$ are the ring $A$ shifted by $ik$. \end{enumerate} \end{prop} \begin{prf} Analogously to the proof of proposition \ref{punorsps}, the first page of this spectral sequence is $$\ol E_1^{i,j}=H^{i+j}(T\zeta_i^\SO;\Q)\cong H^j(T(i\gamma_k^\SO);\Q)\cong e^i\smallsmile H^{j-ik}(B\SO(k);\Q)$$ using that the universal bundle $\zeta_i^\SO$ coincides with the bundle $i\gamma_k^\SO\oplus\varepsilon^i$ over $B\SO(k)$ and the inclusion $B\SO(k)\into T\gamma_k^\SO$ induces an injection $H^*(T(i\gamma_k^\SO);\Q)\to H^*(B\SO(k);\Q)$ with image equal to the ideal generated by $e^i$. For $k=2l+1$, the column $H^{*-ik}(B\SO(k);\Q)$ in the first page coincides with $A$ shifted by $ik$ and the sequence converges to $H^*(B\SO(k+1);\Q)=\Q[p_1,\ldots,p_{l+1}]\oplus(e\smallsmile \Q[p_1,\ldots,p_{l+1}])$. Now an easy computation shows that in this case the ranks of $\ol E_1^{*,*}$ and $H^*(B\SO(k+1);\Q)$ are the same, which implies that all differentials vanish. Hence the second statement is proved. 
For $k=2l$, the column $H^{*-ik}(B\SO(k);\Q)$ is isomorphic to $A\oplus(e\smallsmile A)$ shifted by $ik$ and this shift is given by $e^i$, that is, this column is $(e^i\smallsmile A)\oplus(e^{i+1}\smallsmile A)$. This spectral sequence converges to $H^*(B\SO(k+1);\Q)=A$ and it can be identified with $A\subset\ol E_1^{0,*}$. Hence the Euler class $e$ from the lowest degree entry in the $0$-th column must be mapped to a non-zero multiple of $e$ in the next column, since this is the only chance for the multiples of $e$ to disappear. In the same way, the differential $d_1^{i,(i+1)k}\colon\ol E_1^{i,(i+1)k}\to\ol E_1^{i+1,(i+1)k}$ maps $e^{i+1}$ to a non-zero multiple of $e^{i+1}$. \medskip\begin{sclaim} For any monomial $p_I$ of Pontryagin classes and $i=0,1,\ldots$, we have $d_1^{i,*}(e\smallsmile p_I)=d_1^{i,*}(e)\smallsmile p_I$. \end{sclaim} \begin{sprf} The differential $d_1^{i,*}\colon\ol E_1^{i,*}\to\ol E_1^{i+1,*}$ is by definition the boundary map $\delta$ in the exact sequence of the triple $(\ol K_{i+1}^\SO,\ol K_{i}^\SO,\ol K_{i-1}^\SO)$. The Pontryagin classes $p_1,\ldots,p_l$ used in $\ol E_1^{i,*}$ can be identified with those of the universal bundle $\gamma_k^\SO$ restricted to the base space of $D(i\gamma_k^\SO\oplus\varepsilon^{i})\subset\ol K_\infty^\SO\cong B\SO(k+1)$ (recall that the Kazarian space $\ol K_\infty^\SO$ is constructed from blocks $D\zeta_r^\SO=D(r\gamma_k^\SO\oplus\varepsilon^r)$), thus the map $j^*$ induced by the inclusion $j\colon\ol K_{i}^\SO\into\ol K_{i+1}^\SO$ maps them into each other. Now we have $$d_1^{i,*}(e\smallsmile p_I)=\delta(e\smallsmile p_I)=\delta(e\smallsmile j^*(p_I))=\delta(e)\smallsmile p_I+e\smallsmile\delta(j^*(p_I))=\delta(e)\smallsmile p_I,$$ as claimed. \end{sprf} This claim, combined with the observations above it, shows that the $e^{i+1}\smallsmile A$ part of $\ol E_1^{i,*}$ is mapped to that of $\ol E_1^{i+1,*}$ isomorphically for all $i$.
\end{prf} \begin{crly}\label{coorsps} If $E_*^{*,*}$ is the cohomological Kazarian spectral sequence for $k$-codimensional cooriented Morin maps, then it has the following form: \begin{enumerate} \item If $k=2l$ is even, then for $i=0,1,\ldots$ the column $E_1^{4i,*}$ in the first page is the ring $(e^{4i}\smallsmile A)\oplus(e^{4i+1}\smallsmile A)$, the column $E_1^{4i+1,*}$ is the ring $e^{4i+1}\smallsmile A$, the column $E_1^{4i+2,*}$ vanishes and the column $E_1^{4i+3,*}$ is the ring $e^{4i+4}\smallsmile A$, the differential $d_1$ maps $e^{4i+1}\smallsmile A\subset E_1^{4i,*}$ onto $e^{4i+1}\smallsmile A\subset E_1^{4i+1,*}$ and $e^{4i+4}\smallsmile A\subset E_1^{4i+3,*}$ onto $e^{4i+4}\smallsmile A\subset E_1^{4i+4,*}$ isomorphically, hence the second page is $E_2^{0,*}=E_\infty^{0,*}=A$ and $E_2^{i,*}=E_\infty^{i,*}\cong0$ for $i>0$. \item If $k=2l+1$ is odd, then all differentials vanish and for $i=0,1,\ldots$ the columns $E_1^{2i,*}=E_\infty^{2i,*}$ are the ring $A$ shifted by $2ik$ and the columns $E_1^{2i+1,*}=E_\infty^{2i+1,*}$ are $0$. \end{enumerate} \end{crly} \begin{prf} By lemma \ref{cover} (analogously to the proof of corollary \ref{unorsps}) we need to understand the $\Z_2$-action on the spectral sequence $\ol E_*^{*,*}$ corresponding to changing the orientation of the kernel line bundle to obtain the spectral sequence $E_*^{*,*}$. The maximal compact subgroup $G_i^\SO$ is by \cite{rsz} and \cite{rrthesis} isomorphic to $\{(\varepsilon,B)\in\O(1)\times\O(k)\mid\varepsilon^i\cdot\det B>0\}$ with source representation $$\lambda(\varepsilon,B)=\left\lceil\frac{i-1}2\right\rceil1\oplus\left\lfloor\frac{i+1}2\right\rfloor\varepsilon\oplus\left\lfloor\frac{i}2\right\rfloor B\oplus\left\lceil\frac{i}2\right\rceil\varepsilon B$$ and the restriction of this to $\SO(1)\times\SO(k)\cong\SO(k)\cong\ol G_i^\SO$ is the source representation for prim maps.
The $\Z_2$-action is again given by the correspondence $\varepsilon\mapsto-\varepsilon$ on $\O(1)$, which now also changes the orientation of $B\in\O(k)$ iff $i$ is odd in order to keep $\varepsilon^i\cdot\det B$ positive. Recall that for $k=2l$ we had $\ol E_1^{i,*}=u(\zeta_i^\SO)\smallsmile(A\oplus e\smallsmile A)$ shifted by $-i$. The $\Z_2$-action described above maps $e$ to $(-1)^ie$ and changes the orientation of $\lambda$ (and so the sign of $u(\zeta_i^\SO)$) exactly if $\lfloor\frac{i+1}2\rfloor+\lfloor\frac{i}2\rfloor+\lceil\frac{i}2\rceil=\lfloor\frac{i+1}2\rfloor+i$ is odd, i.e. for $i\equiv2,3~\mod4$. This means by remark \ref{coverr} that the columns $E_1^{i,*}$, which are the $\Z_2$-invariant parts of $\ol E_1^{i,*}$, are as we claimed. The differential $d_1$ is the restriction of the differential of $\ol E_1^{*,*}$, hence it is an isomorphism between the direct summands $e^{i+1}\smallsmile A\subset E_1^{i,*}$ and $e^{i+1}\smallsmile A\subset E_1^{i+1,*}$ where these exist. For $k=2l+1$ and $i$ even (resp. odd), the above action changes the orientation of $\lambda$ exactly if $\lfloor\frac{i+1}2\rfloor+\lceil\frac{i}2\rceil$ (resp. $\lfloor\frac{i+1}2\rfloor+\lfloor\frac{i}2\rfloor$) is odd, i.e. the orientation changes iff $i$ is odd. Hence by remark \ref{coverr} the columns $E_1^{i,*}$ are the same as for prim maps for $i$ even and vanish for $i$ odd. \end{prf} Again the spectral sequences computed above can be used to obtain the ranks of the cobordism groups $\Prim_r^\SO(P)$ and $\Cob_r^\SO(P)$; exactly the same considerations as in the proof of theorem \ref{punorthm} yield the following two theorems. 
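The parity bookkeeping behind corollaries \ref{unorsps} and \ref{coorsps} can also be checked mechanically. The following sketch is our own illustration (the function names are ours); the exponent formulas are copied from the respective proofs:

```python
import math

def unor_even_k(i):
    # orientation-sign exponent from the proof of corollary unorsps, k even:
    # i + 1 + floor((i-1)/2)
    return i + 1 + math.floor((i - 1) / 2)

def unor_odd_k(i):
    # same proof, k odd: the extra summand ceil(i/2) appears
    return i + 1 + math.floor((i - 1) / 2) + math.ceil(i / 2)

def coor_even_k(i):
    # orientation-sign exponent from the proof of corollary coorsps, k even:
    # floor((i+1)/2) + i
    return math.floor((i + 1) / 2) + i

for i in range(1000):
    # unoriented, k even: the orientation changes exactly for i = 2, 3 mod 4
    assert (unor_even_k(i) % 2 == 1) == (i % 4 in (2, 3))
    # unoriented, k odd: the orientation changes exactly for i odd
    assert (unor_odd_k(i) % 2 == 1) == (i % 2 == 1)
    # cooriented, k even: the orientation changes exactly for i = 2, 3 mod 4
    assert (coor_even_k(i) % 2 == 1) == (i % 4 in (2, 3))
print("all parity claims verified")
```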
\begin{thm} $~$ \begin{enumerate} \item If $r=\infty$, then we have $$\Prim_\infty^\SO(n,k)\otimes\Q\cong H^n(B\SO(k+1);\Q).$$ \item If $k=2l$ is even and $r<\infty$, then $\Prim_r^\SO(n,k)\otimes\Q$ is also the degree-$n$ part of a polynomial ring, namely $$\Prim_r^\SO(n,k)\otimes\Q\cong\big[\Q[p_1,\ldots,p_l]\oplus(e^{r+1}\smallsmile\Q[p_1,\ldots,p_l])\big]^n.$$ \item If $k=2l+1$ is odd and $r<\infty$, then we have $$\Prim_r^\SO(n,k)\otimes\Q\cong\big[\Q[p_1,\ldots,p_l,e]/(e^{r+1})\big]^n.$$ \end{enumerate} \end{thm} \begin{thm}\label{coorthm} $~$ \begin{enumerate} \item If $k$ is even and either $r\equiv1,2~\mod4$ or $r=\infty$, then we have $$\Cob_r^\SO(n,k)\otimes\Q\cong H^n(B\O(k);\Q),$$ hence its rank is the number of partitions $p_{\frac k2}\big(\frac n4\big)$, in particular it is $0$ if $n$ is not a multiple of $4$. \item If $k=2l$ is even and $r\equiv0,3~\mod4$, then we have $$\Cob_r^\SO(n,k)\otimes\Q\cong\big[\Q[p_1,\ldots,p_l]\oplus(e^{r+1}\smallsmile\Q[p_1,\ldots,p_l])\big]^n.$$ \item If $k$ is odd and $r=\infty$, then we have $$\Cob_\infty^\SO(n,k)\otimes\Q\cong H^n(B\O(k+1);\Q),$$ hence its rank is $p_{\frac{k+1}2}\big(\frac n4\big)$, in particular it is $0$ if $n$ is not a multiple of $4$. \item If $k=2l+1$ is odd and $r<\infty$, then we have $$\Cob_r^\SO(n,k)\otimes\Q\cong\left[\Q[p_1,\ldots,p_{l+1}]\big/\big(p_{l+1}^{\lceil\frac r2\rceil}\big)\right]^n.$$ \end{enumerate} \end{thm} \subsection{General observations} The computations in the previous two subsections allow us to obtain a few additional general properties of cobordism groups. \begin{prop} Let $k$ be an odd number and $r\in\N\cup\{\infty\}$. 
Then for any manifold $P^{n+k}$ we have $$\Cob_r^\SO(n,P^{n+k})\otimes\Q\cong\displaystyle\bigoplus_{2i\le r}\Imm^{\tilde\xi_{2i}^\SO}(P)\otimes\Q.$$ \end{prop} \begin{prf} By corollary \ref{coorsps} we know that the spaces $T\xi_{2i+1}^\SO$ have trivial rational cohomologies, hence so do the spaces $T\tilde\xi_{2i+1}^\SO$ (using the Thom isomorphism for the virtual bundle $\nu_{2i+1}^\SO$), and so $\Gamma T\tilde\xi_{2i+1}^\SO$ is rationally trivial. Similarly, corollary \ref{coorsps} also implies $T\tilde\xi_{2i}^\SO\congq T\tilde\zeta_{2i}^\SO=S^{2i}T((2i+1)\gamma_k^\SO)$. Hence we can use theorem \ref{ratrivi1} recursively to obtain $$X_r^\SO\congq\prod_{2i\le r}\Gamma T\tilde\xi_{2i}^\SO$$ and now the statement follows trivially. \end{prf} \begin{rmk} We also proved here that $\Imm^{\tilde\xi_{2i}^\SO}(P)\otimes\Q$ is the rational cobordism group of immersions to $P$ with normal bundles induced from $(2i+1)\gamma_k^\SO\oplus\varepsilon^{2i}$, i.e. whose normal bundles split as the direct sum of $2i+1$ oriented $k$-bundles and a $2i$-dimensional trivial bundle. \end{rmk} \begin{prop} Put $\tau:=\tau_r\cup\{\rm{III}_{2,2}\}$, where $r\ge2$ and $\rm{III}_{2,2}$ is the simplest $\Sigma^2$ singularity (see \cite{math}). Then for any manifold $P^{n+k}$ we have $$\Cob_\tau^\SO(n,P^{n+k})\otimes\Q\cong\left(\Cob_r^\SO(P)\oplus\Imm^{\tilde\xi_{\rm{III}_{2,2}}^\SO}(P)\right)\otimes\Q.$$ \end{prop} \begin{prf} The global normal form of the singularity $\rm{III}_{2,2}$ can be found in \cite{rsz}. Now the case of odd $k$ follows from theorem \ref{ratrivi1} again, similarly to the proof above. If $k$ is even, then theorem \ref{coorthm} implies that the conditions of theorem \ref{ratrivi2} are satisfied, hence the conclusion is the same again. \end{prf} Similar examples can be constructed to determine the free parts of cobordism groups of other types of maps as well.
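The ranks appearing in theorem \ref{coorthm} are partition numbers, with $p_m(s)$ counting the partitions of $s$ into parts of size at most $m$ (equivalently, into at most $m$ parts): this is exactly the number of monomials of degree $4s$ in the Pontryagin classes $p_1,\ldots,p_m$. The following Python sketch (an illustration of ours, not part of the computations above; the function names are invented) checks this count against a direct enumeration of the monomial basis of $H^n(B\O(k);\Q)\cong\Q[p_1,\ldots,p_{\lfloor k/2\rfloor}]^n$.

```python
def partitions_max_part(s, m):
    """p_m(s): number of partitions of s into parts of size at most m."""
    table = [1] + [0] * s          # table[j] = partitions of j with parts <= bound
    for part in range(1, m + 1):
        for j in range(part, s + 1):
            table[j] += table[j - part]
    return table[s]

def rank_H_BOk(n, k):
    """Rank of H^n(BO(k); Q): count monomials p_1^a1 ... p_m^am with
    m = k // 2, deg p_i = 4i, and total degree n."""
    if n % 4:
        return 0
    m, s = k // 2, n // 4

    def count(i, remaining):       # monomials in p_i, ..., p_m of weight `remaining`
        if i > m:
            return 1 if remaining == 0 else 0
        return sum(count(i + 1, remaining - i * a)
                   for a in range(remaining // i + 1))

    return count(1, s)
```

For instance, part 1 of theorem \ref{coorthm} gives $\Cob_\infty^\SO(8,4)\otimes\Q$ the rank $p_2(2)=2$, corresponding to the monomials $p_1^2$ and $p_2$.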
\section{Large codimensional cobordisms}\label{larco} In the previous section we saw theorems on cobordism groups of Morin maps in general, but these were not complete computations, as we only determined the free parts of these groups (and in some cases we also saw that they are finite $2$-primary). Here we will completely compute cobordism groups of maps that only have the simplest type of singularity, i.e. maps with only regular ($\Sigma^0$) and fold ($\Sigma^{1,0}$) points. Recall from subsection \ref{strata} that the submanifold $\Sigma^r(f)\subset M^n$ for a map $f\colon M^n\to P^{n+k}$ consists of the points where the corank of $df$ is $r$; and the submanifold $\ol{\Sigma^{1_r}(f)}\subset M^n$ is the set of singular points of $f|_{\ol{\Sigma^{1_{r-1}}(f)}}$. Now the jet transversality theorem (together with a bit of linear algebra) has the straightforward consequences $\codim\Sigma^r(f)=r(k+r)$ and $\codim\Sigma^{1_r}(f)=r(k+1)$. Thus for $n<2k+2$, generic maps $M^n\to P^{n+k}$ only have fold singularities. If $n<2k+1$, then even a cobordism between two maps from $n$-manifolds to an $(n+k)$-manifold only has fold singularities, hence in this case we have $$\Cob_1(n,P^{n+k})=\NN_n(P)$$ (and a similar equation holds for maps with given normal structures). Similarly, for $n<2k+4$, generic maps $M^n\to P^{n+k}$ only have fold and cusp singularities. This means that if $n=2k+1$ or $n=2k+2$, then the groups $\Cob_1^L(n,P^{n+k})$ are (a priori) not the same as the respective bordism groups of $P$, but the cobordisms are not too far from being generic (since a generic map $W^{n+1}\to P\times[0,1]$ can only have fold and cusp singularities). These ($n=2k+1$ and $n=2k+2$) are the two cases we will consider in this section, with normal structures $L=\O$ and $L=\SO$.
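The dimension counts above can be packaged into a small Python sketch (ours, for illustration only; the function name is invented) that lists the Morin strata $\Sigma^{1_r}$ which a generic map can exhibit, using $\codim\Sigma^{1_r}(f)=r(k+1)$.

```python
def allowed_morin_strata(n, k):
    """Indices r for which the Morin stratum Sigma^{1_r} can be non-empty
    for a generic map M^n -> P^{n+k}: its codimension r*(k+1) must not
    exceed n = dim M."""
    return [r for r in range(1, n + 1) if r * (k + 1) <= n]
```

For example, with $k=3$ a generic map of a $7$-manifold ($n=2k+1$) has only fold points, while a generic cobordism between two such maps, being a map of an $8$-manifold, can also have cusps.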
\subsection{The $(2k+1)$-dimensional case} By the considerations above, we have natural forgetful homomorphisms $$\varphi\colon\Cob_1(2k+1,P^{3k+1})\to\NN_{2k+1}(P)~~~~\text{and}~~~~\psi\colon\Cob_1^\SO(2k+1,P^{3k+1})\to\Omega_{2k+1}(P),$$ which send the cobordism class of a map $f\colon M\to P$ to the bordism class of $f$. To determine these, we have to understand the cusps of generic bordisms, which are maps from $(2k+2)$-manifolds to $(3k+2)$-manifolds. The set of cusp points of such a map is a $0$-dimensional manifold, i.e. a discrete set of points. \begin{rmk}\label{cusproot} According to Morin \cite{mor}, the root of the cusp singularity is the equivalence class of the map $$\sigma_2\colon(\R^{2k+2},0)\to(\R^{3k+2},0);~(x,y,z)\mapsto(X,Y,Z)$$ where the coordinates are $x=(x_1,\ldots,x_{2k})\in\R^{2k}$, $y\in\R^1$, $z\in\R^1$ and $X=(X_1,\ldots,X_{2k+1})\in\R^{2k+1}$, $Y=(Y_1,\ldots,Y_k)\in\R^k$, $Z\in\R^1$ and the map is given by the formula \begin{alignat*}2 &X_1=x_1,\ldots,X_{2k}=x_{2k},X_{2k+1}=y,\\ &Y_1=zx_1+z^2x_2,Y_2=zx_3+z^2x_4,\ldots,Y_k=zx_{2k-1}+z^2x_{2k},\\ &Z=zy+z^3. \end{alignat*} In other words, $\sigma_2$ has an isolated cusp point at the origin. \end{rmk} \begin{lemma}\label{removecusps} Let $F\colon W^{2k+2}\to Q^{3k+2}$ be a map between manifolds (possibly with boundary), let $p$ and $q$ be two cusp points of $F$ and assume that there is an arc $\alpha\subset W$ connecting $p$ and $q$ such that $\Sigma(F)\cap\alpha=\{p,q\}$. Then there is a map $F'\colon W\to Q$ that coincides with $F$ outside an arbitrarily small neighbourhood of $\alpha$ and has cusp set $\Sigma^{1,1}(F')=\Sigma^{1,1}(F)\setminus\{p,q\}$.
\end{lemma} \begin{prf} We can choose a coordinate neighbourhood of the arc $\alpha$ such that this neighbourhood corresponds to the disk $$D:=\{(x,y,z)\in\R^{2k+2}\mid\lv x\rv\le\varepsilon,-\varepsilon\le y\le 1+\varepsilon,|z|\le1\}$$ (with the same coordinates as in remark \ref{cusproot} and some $\varepsilon>0$) and the map $F$ restricted to this neighbourhood is the composition of a map of the form $$f\colon D\to\R^{3k+2};~(x,y,z)\mapsto(X,Y,Z)$$ given by the expression \begin{alignat*}2 &X_1=x_1,\ldots,X_{2k}=x_{2k},X_{2k+1}=y,\\ &Y_1=zx_1+z^2x_2,Y_2=zx_3+z^2x_4,\ldots,Y_k=zx_{2k-1}+z^2x_{2k},\\ &Z=zy(1-y)+z^3, \end{alignat*} with an immersion $\R^{3k+2}\imto Q$. Clearly the $Z$-coordinate function of $f$, as a function of $z$ for fixed $y$ (i.e. $z\mapsto zy(1-y)+z^3$), has two critical points if $y<0$ or $y>1$ and these cancel at $y=0$ and $y=1$ respectively, hence the cusp points of $f$ are $(0,0,0)$ and $(0,1,0)$. Now we define the map $f'\colon D\to\R^{3k+2}$ in the same way as $f$, except that the $Z$-coordinate function is $$Z=z(g(y)+y(1-y))+z^3,$$ where $g\colon\R\to[-1,0]$ is a smooth function with $g|_{\R\setminus[-\varepsilon,1+\varepsilon]}\equiv0$, $g|_{[0,1]}\equiv-1$ and $g$ is decreasing on the negative numbers and increasing on the positives. Now the $Z$-coordinate function of $f'$ (again as a function of $z$) always has two critical points, hence $f'$ has no cusps. By a bump function we can make $f'$ coincide with $f$ on the boundary of $D$, and so if we compose it with the immersion $\R^{3k+2}\imto Q$ as above, then we get the desired map $F'$ that has the same cusp set as $F$, except that $p$ and $q$ are deleted. \end{prf} We will now first consider the simpler case of unoriented cobordisms, and after that get to the cooriented case. \begin{lemma}\label{valamilyenlemma} For any integers $c\ge0$ and $k>0$ there is a closed manifold $W^{2k+2}$ and a (stable) map $F\colon W^{2k+2}\to\R^{3k+2}$ with $c$ cusps.
\end{lemma} \begin{prf} If $F\colon W^{2k+2}\to\R^{3k+2}$ is a map with at least two cusps, then we can add $1$-handles to make $W$ connected, and since the codimension of the singular set is $k+1>1$, we can use the previous lemma to remove pairs of cusps. Hence it is enough to show that there is a manifold $W^{2k+2}$ and a map $F\colon W\to\R^{3k+2}$ with an odd number of cusps. Put $W:=(\RP^2)^{k+1}$, let $i\colon\RP^2\imto\R^3$ be an immersion and let $j\colon W\imto\R^{3k+3}$ be a (self-transverse) immersion regularly homotopic to $(i\times\ldots\times i)$. It is well-known that $j$ has an odd number of triple points (see \cite{2n3n}) and it was proved in \cite{prim} that the number of cusps of a generic hyperplane projection of such an immersion has the same parity as the number of triple points of the immersion. Thus if $F\colon W\to\R^{3k+2}$ is such a projection of $j$, then it has an odd number of cusps. \end{prf} \begin{thm}\label{cob=n} For any manifold $P^{3k+1}$ we have $$\Cob_1(2k+1,P)\cong\NN_{2k+1}(P).$$ \end{thm} \begin{prf} It is enough to show that the forgetful map $\varphi\colon\Cob_1(2k+1,P)\to\NN_{2k+1}(P)$ is an isomorphism. It is clearly surjective, hence it is sufficient to prove that it is also injective. Let $f\colon M^{2k+1}\to P$ be a map that represents $0$ in $\NN_{2k+1}(P)$ and let $F\colon W^{2k+2}\to P\times\R_+$ be a null-bordism ($\partial W=M$ and $F|_M=f$). If $F$ has $c$ cusps, then by the previous lemma we can find a closed manifold $W'$ and a map $F'\colon W'\to P\times\R_+$ that also has $c$ cusps. Now by adding $1$-handles we connect the cusps of $W$ and $W'$ and then lemma \ref{removecusps} yields a manifold $W''$ with boundary $\partial W''=M$ and a map $F''\colon W''\to P\times\R_+$ with no cusps and for which $F''|_M=f$. Hence the map $f$ represents $0$ in $\Cob_1(2k+1,P)$ as well. \end{prf} In the following we will investigate the same cobordism groups, but with oriented normal structures.
First we consider the case of maps to a Euclidean space (i.e. $P^{3k+1}:=\R^{3k+1}$). \begin{rmk} The global normal form of the cusp singularity (see \cite{rsz}) implies that for odd codimensional cooriented maps, the submanifold of cusp points has oriented normal bundle. Thus a cooriented map $F\colon W^{4l}\to Q^{6l-1}$ is such that $\Sigma^{1,1}(F)$ is a cooriented $0$-manifold, i.e. a discrete set of points equipped with signs; we call the sum of these signs the algebraic number of cusps of $F$ and denote it by $\#\Sigma^{1,1}(F)$. In this case (for $k=2l-1$) lemma \ref{removecusps} can be reformulated (with the same proof) to also include that the points $p$ and $q$ have opposite signs and then the map $F'$ is cooriented as well. \end{rmk} \begin{lemma}\label{nrcusps} $~$ \begin{enumerate} \item\label{la1} If $W^{4l}$ is a closed oriented manifold and $F\colon W\to\R^{6l-1}$ is a (stable) map, then $\#\Sigma^{1,1}(F)$ is the normal Pontryagin number $\ol p_l[W]$. Moreover, for any integer of the form $c=|\ol p_l[W]|+2r$ (for any $r\in\N$) there is a map $F\colon W\to\R^{6l-1}$ with $c$ cusps. \item\label{la2} If $W^6$ is a closed oriented manifold and $F\colon W\to\R^8$ is a (stable) map, then $F$ has an even number of cusps and for any even number $c\ge0$ there is a map $F\colon W\to\R^8$ with $c$ cusps. \item\label{la3} For any integers $c\ge0$ and $l\ge2$ there is a closed oriented manifold $W^{4l+2}$ and a (stable) map $F\colon W\to\R^{6l+2}$ with $c$ cusps. \end{enumerate} \end{lemma} \begin{prf} As in the proof of lemma \ref{valamilyenlemma}, we can use lemma \ref{removecusps} to delete pairs of cusps (with opposite signs in the case of \ref{la1}). Hence it is sufficient to prove for \ref{la1} that $\#\Sigma^{1,1}(F)=\ol p_l[W]$, for \ref{la2} that any map has an even number of cusps and for \ref{la3} that there is a map with an odd number of cusps.
\medskip\noindent\emph{Proof of \ref{la1}.}\enspace\ignorespaces We have two homomorphisms $$\#\Sigma^{1,1}\colon\Omega_{4l}\to\Z~~~~\text{and}~~~~\ol p_l[\cdot]\colon\Omega_{4l}\to\Z,$$ where $\ol p_l[\cdot]$ is the normal Pontryagin number assigned to each cobordism class and $\#\Sigma^{1,1}$ is defined for any cobordism class $[W]\in\Omega_{4l}$ as $\#\Sigma^{1,1}([W]):=\#\Sigma^{1,1}(F)$ for a generic map $F\colon W\to\R^{6l-1}$. This is well-defined, since $\Omega_{4l}$ can be identified with $\Omega_{4l}(\R^{6l-1})$ and if two maps, $F_0\colon W_0^{4l}\to\R^{6l-1}$ and $F_1\colon W_1^{4l}\to\R^{6l-1}$, represent the same bordism class, then the cusp set of a generic bordism between them is an oriented $1$-manifold with boundary the union of the cusp sets of $F_0$ and $F_1$ with opposite signs, hence $F_0$ and $F_1$ have the same algebraic number of cusps. Now it is enough to prove that the maps $\#\Sigma^{1,1}$ and $\ol p_l[\cdot]$ coincide. If we have $F=\pr_{\R^{6l-1}}\circ i$ for an immersion $i\colon W^{4l}\imto\R^{6l}$, then by \cite{prim} $\#\Sigma^{1,1}(F)$ is the number of triple points of $i$. By Herbert's formula \cite{herbert}, this number equals $\ol p_l[W]$, hence we have $\#\Sigma^{1,1}|_G=\ol p_l[\cdot]|_G$, where $G$ is the subgroup of elements $[W^{4l}]\in\Omega_{4l}$ for which there is a prim map $W\to\R^{6l-1}$. Now a theorem of Burlet \cite{burlet} claims that any cobordism class in $\Omega_{4l}$ has a non-zero multiple that contains a manifold which can be immersed into $\R^{6l}$, hence the index of $G$ is finite. Since both homomorphisms take values in the torsion-free group $\Z$, this implies that $\#\Sigma^{1,1}$ and $\ol p_l[\cdot]$ coincide on the whole group $\Omega_{4l}$. \medskip\noindent\emph{Proof of \ref{la2}.}\enspace\ignorespaces For any map $F\colon W^6\to\R^8$, there is a map $G\colon V^7\to\R^8\times\R_+$ from a compact orientable manifold with boundary $\partial V=W$ for which $G|_W=F$, since $\Omega_6$ is trivial.
Now $\Sigma^{1,1}(G)$ is a $1$-manifold with boundary $\Sigma^{1,1}(F)$, hence $\Sigma^{1,1}(F)$ has to contain an even number of points. \medskip\noindent\emph{Proof of \ref{la3}.}\enspace\ignorespaces Consider the Dold manifold $Y=(\CP^2\times S^1)/\Z_2$, where $\Z_2$ acts on $\CP^2$ by complex conjugation and on $S^1$ by multiplication by $-1$. The manifold $(Y\times(\RP^2)^{l-2})^2$ is a square, hence by \cite{wallcob} there is an oriented manifold cobordant to it. Let $W^{4l+2}$ be such a manifold and let $F\colon W\to\R^{6l+2}$ be a generic map. Our aim is to prove that the number of cusps of $F$ is odd. We will use the normal Stiefel--Whitney classes $\ol w_i$ of $W$. We know from \cite{kazthom} (also using \cite[6.2. théorème]{bh}) that the Poincaré dual (with $\Z_2$ coefficients) of $\Sigma^{1,1}(F)$ is $$\Tp_{\Sigma^{1,1}}=\ol w_{2l+1}^2+\ol w_{2l}\smallsmile\ol w_{2l+2},$$ hence the number of cusps has the same parity as $(\ol w_{2l+1}^2+\ol w_{2l}\smallsmile\ol w_{2l+2})[W]$. Now putting $V:=Y\times(\RP^2)^{l-2}$, we obtain \begin{alignat*}2 \ol w_{2l+1}^2[V\times V]&=\sum_{i_1+j_1=2l+1}\sum_{i_2+j_2=2l+1}(\ol w_{i_1}\times\ol w_{j_1})(\ol w_{i_2}\times\ol w_{j_2})[V\times V]=\\ &=\sum_{i_1+j_1=2l+1}\sum_{i_2+j_2=2l+1}(\ol w_{i_1}\smallsmile\ol w_{i_2})[V](\ol w_{j_1}\smallsmile\ol w_{j_2})[V]=\\ &=2\sum_{i=0}^l(\ol w_{i}\smallsmile\ol w_{2l+1-i})[V](\ol w_{2l+1-i}\smallsmile\ol w_{i})[V]=0 \end{alignat*} and \begin{alignat*}2 (\ol w_{2l}\smallsmile\ol w_{2l+2})[V\times V]&=\sum_{i_1+j_1=2l+2}\sum_{i_2+j_2=2l}(\ol w_{i_1}\times\ol w_{j_1})(\ol w_{i_2}\times\ol w_{j_2})[V\times V]=\\ &=\sum_{i_1+j_1=2l+2}\sum_{i_2+j_2=2l}(\ol w_{i_1}\smallsmile\ol w_{i_2})[V](\ol w_{j_1}\smallsmile\ol w_{j_2})[V]=\\ &=\sum_{i=0}^{2l}(\ol w_{i+1}\smallsmile\ol w_{2l-i})[V](\ol w_{2l+1-i}\smallsmile\ol w_{i})[V]=\\ &=((\ol w_{l+1}\smallsmile\ol w_{l})[V])^2=(\ol w_{l+1}\smallsmile\ol w_{l})[V]. 
\end{alignat*} Now it is sufficient to prove that $(\ol w_{l+1}\smallsmile\ol w_{l})[V]=1$. We use that $(\ol w_i\cs\ol w_j)[\RP^2]=1$ holds if and only if $i=j=1$, and so \begin{alignat*}2 (\ol w_{l+1}\smallsmile\ol w_{l})[V]&=\\ =&\sum_{i_0+\ldots+i_{l-2}=l+1\atop j_0+\ldots+j_{l-2}=l}(\ol w_{i_0}\times\ldots\times\ol w_{i_{l-2}})(\ol w_{j_0}\times\ldots\times\ol w_{j_{l-2}})[Y\times(\RP^2)^{l-2}]=\\ =&\sum_{i_0+\ldots+i_{l-2}=l+1\atop j_0+\ldots+j_{l-2}=l}(\ol w_{i_0}\cs\ol w_{j_0})[Y](\ol w_{i_1}\cs\ol w_{j_1})[\RP^2]\ldots(\ol w_{i_{l-2}}\cs\ol w_{j_{l-2}})[\RP^2]=\\ =&(\ol w_3\cs\ol w_2)[Y]. \end{alignat*} Hence we only have to prove that $(\ol w_3\cs\ol w_2)[Y]=1$. Note that $Y$ is odd-dimensional, hence its top Stiefel--Whitney class vanishes, i.e. $w_5(Y)=0$; and $Y$ is orientable, hence $w_1(Y)=0$. Now an easy calculation implies the equalities $w_2(Y)=\ol w_2(Y)$ and $w_3(Y)=\ol w_3(Y)$. The cobordism class of $Y$ generates $\Omega_5\cong\NN_5\cong\Z_2$ (see \cite{charclass}), so it is not null-cobordant, thus at least one of its Stiefel--Whitney numbers is not $0$. The only possibility for such a Stiefel--Whitney number is $(w_2\cs w_3)[Y]$, which means $(\ol w_2\cs\ol w_3)[Y]=(w_2\cs w_3)[Y]=1$ and the proof is finished. \end{prf} \begin{thm}\label{foldthm} $~$ \begin{enumerate} \item\label{a1} If $k=2l-1$ is odd, then we have $$\Cob_1^\SO(4l-1,2l-1)\cong\Omega_{4l-1}\oplus\Z_{3^t},$$ where $t:=\min\{n\in\N\mid\alpha_3(2l+n)\le3n\}$ and $\alpha_3(n)$ denotes the sum of digits of $n$ in the triadic system.
\item\label{a2} If $k=2$, then we have $$\Cob_1^\SO(5,2)\cong\Omega_5\oplus\Z_2\cong\Z_2\oplus\Z_2.$$ \item\label{a3} If $k=2l\ge4$ is even, then we have $$\Cob_1^\SO(4l+1,2l)\cong\Omega_{4l+1}.$$ \end{enumerate} \end{thm} \begin{prf} \noindent\emph{Proof of \ref{a1}.}\enspace\ignorespaces Consider the forgetful homomorphism $$\psi\colon\Cob_1^\SO(4l-1,2l-1)\to\Omega_{4l-1}.$$ We want to determine the kernel of $\psi$ and in order to do this, we let $g$ be the greatest common divisor of the numbers $\#\Sigma^{1,1}(F)$ where $F$ ranges over all (stable) maps $W^{4l}\to\R^{6l-1}$ for all oriented closed manifolds $W$. Part \ref{la1} of the previous lemma together with a result of Stong \cite{stong} implies that $g=3^t$. We define the map $$\gamma\colon\ker\psi\to\Z_g$$ for any $[f]\in\ker\psi$ in the following way: If $f\colon M^{4l-1}\to\R^{6l-2}$ is any representative, then $M$ is oriented null-cobordant, that is, there is a compact oriented manifold $W^{4l}$ with boundary $\partial W=M$. Let $F\colon W\to\R^{6l-2}\times\R_+$ be any map extending $f$ and put $\gamma([f]):=\#\Sigma^{1,1}(F)~\mod g$. \medskip\begin{sclaim} The map $\gamma$ is well-defined. \end{sclaim} \begin{sprf} If $F'\colon W'\to\R^{6l-2}\times\R_+$ is another choice of extension of $f$, then we take the map $\tilde F\cup F'$, where $\tilde F$ is the composition of $F$ with the reflection in $\R^{6l-1}$ to the hyperplane $\R^{6l-2}$. This maps the closed oriented manifold $(-W)\cup W'$ to $\R^{6l-1}$ and its algebraic number of cusps is $\#\Sigma^{1,1}(F')-\#\Sigma^{1,1}(F)\equiv0~\mod g$, thus $\gamma([f])$ is independent of the choice of $F$. Of course $\gamma([f])$ is also independent of the choice of representative of the cobordism class $[f]$, since if $f'$ is another representative, then $f$ and $f'$ can be connected by a cobordism without cusps. \end{sprf} \begin{sclaim} The map $\gamma$ is an isomorphism. \end{sclaim} \begin{sprf} I. 
\emph{$\gamma$ is surjective.}\medskip Using a neighbourhood of an isolated cusp point (see remark \ref{cusproot} and figure \ref{kep1}) it is easy to construct a map $f\colon S^{4l-1}\to\R^{6l-2}$ that has an extension $F\colon W^{4l}\to\R^{6l-2}\times\R_+$ with only one cusp point. Hence $\gamma([f])=1$. \medskip\noindent II. \emph{$\gamma$ is injective.}\medskip If $\gamma([f])=0$ for a map $f\colon M^{4l-1}\to\R^{6l-2}$, then $f$ has an extension $F\colon W^{4l}\to\R^{6l-2}\times\R_+$ whose algebraic number of cusp points is some $c$ divisible by $g$. Now lemma \ref{nrcusps} yields a map $F'\colon W'\to\R^{6l-1}$ from a closed oriented manifold whose algebraic number of cusp points is $-c$. By adding $1$-handles, we can connect $W$ and $W'$ and since the codimension of the singular set is $2l>1$, we can connect the cusp points with opposite signs by arcs and use lemma \ref{removecusps} to remove all cusps. Thus we have a manifold $W''$ and a map $F''\colon W''\to\R^{6l-2}\times\R_+$ that extends $f$ and has no cusp points, so $f$ represents $0$ in $\Cob_1^\SO(4l-1,2l-1)$. \end{sprf} Hence we have a short exact sequence $$0\to\Z_g\to\Cob_1^\SO(4l-1,2l-1)\to\Omega_{4l-1}\to0$$ and the statement of the theorem follows from the facts that $g$ is a power of $3$ and $\Omega_{4l-1}$ is a direct sum of copies of $\Z_2$. \medskip\noindent\emph{Proof of \ref{a2}.}\enspace\ignorespaces Applying the same reasoning as above in the proof of \ref{a1} and with $g=2$, we obtain a short exact sequence $$0\to\Z_2\to\Cob_1^\SO(5,2)\xra\psi\Omega_5\to0.$$ Now $\Omega_5\cong\Z_2$ leaves the two possibilities $\Cob_1^\SO(5,2)\cong\Z_4$ or $\Cob_1^\SO(5,2)\cong\Z_2\oplus\Z_2$. If this group were $\Z_4$, then any cobordism class $[f]\in\Cob_1^\SO(5,2)\setminus\ker\psi$ would be an element of order $4$. Hence it is enough to exhibit a manifold $Y^5$ that is not null-cobordant and a map $f\colon Y\to\R^7$ for which $[f]$ has order $2$.
We put $Y:=(\CP^2\times S^1)/\Z_2$ (the Dold manifold we used in the proof of lemma \ref{nrcusps}) and fix an orientation of it. By Cohen's result \cite{cohen}, there is an immersion $i\colon Y^5\imto\R^8$; we define $f:=\pr_{\R^7}\circ i\colon Y\to\R^7$. In the following we prove that $[f]$ has order $2$ in $\Cob_1^\SO(5,2)$. Recall the definition of the inverse element in subsection \ref{grp}: For any cobordism class $[g]$ (represented by the map $g\colon M^5\to\R^7$), the inverse $-[g]$ is represented by the map $\rho\circ g$, where $\rho$ is the reflection in a hyperplane of $\R^7$. Now if the maps are cooriented, which is equivalent to the source manifolds being oriented, then this inverse also includes taking the opposite orientation of the source. If we want to indicate this in the notation, then we put $[g,M]:=[g]$ and we have $[g,M]+[\rho\circ g,-M]=0$. \medskip\begin{sclaim} For any fold map $g\colon Y\to\R^7$ we have $[g,Y]+[\rho\circ g,Y]=0$. \end{sclaim} \begin{sprf} Observe that the manifold $Y$ admits an orientation reversing diffeomorphism $\alpha\colon Y\to Y$ induced by the complex conjugation on $S^1$ (as the unit circle in $\C$). Let $X$ be the mapping torus of $\alpha$, that is, $$X:=Y\times[0,1]/(y,0)\sim(\alpha(y),1),$$ which is a closed non-orientable $6$-manifold (this is the $6$-dimensional Wall manifold used in \cite{wallcob}). The Stiefel--Whitney classes of $X$ were computed in \cite{wallcob}, in particular we have $(\ol w_3^2+\ol w_2\cs\ol w_4)[X]=0$. Again (as in the proof of lemma \ref{nrcusps}) we use that $\Tp_{\Sigma^{1,1}}=\ol w_3^2+\ol w_2\cs\ol w_4$, which implies that the number of cusps of any map $F\colon X\to\R^8$ is even. We map $Y\times\big[0,\frac12\big]$ to the path of the rotation of the image of $g$ to the image of $\rho\circ g$ in $\R^7\times\R_+$ in the same way as in the proof of proposition \ref{inv}.
Extend this to a map $G\colon X\to\R^8$ by mapping $Y\times\big[\frac12,1\big]$ to the negative half-space (such that its boundary components $Y$ are mapped by $g$ and $\rho\circ g$). Since $G$ has no cusps in $Y\times\big[0,\frac12\big]\subset X$, it must have an even number of cusps in $Y\times\big[\frac12,1\big]\subset X$. Since $\Omega_5\cong\Z_2$, there is a compact orientable manifold $W^6$ with boundary $\partial W=Y\sqcup Y$. Let $F\colon W\to\R^7\times\R_+$ be a map such that the restriction to one boundary component $Y$ is $g$ and to the other is $\rho\circ g$. Joining the manifold $W$ with $Y\times\big[\frac12,1\big]$ and the map $F$ with $G|_{Y\times[\frac12,1]}$ on the common boundary yields a map of an orientable $6$-manifold to $\R^8$. \begin{figure}[H] \begin{center} \centering\includegraphics[scale=0.1]{kep6} \begin{changemargin}{2cm}{2cm} \caption{\hangindent=1.4cm\small We show the mapping torus $X$ mapped to $\R^8$ by $G$ and the manifold $W$ mapped to $\R^7\times\R_+$ by $F$. The images of $g$ and $\rho\circ g$ in $\R^7$ are represented by the two circles, they are rotated into each other by $G|_{Y\times[0,\frac12]}$ (indicated with dashed lines) and they are also connected by $G|_{Y\times[\frac12,1]}$ and $F$ in the two half-spaces.}\label{kep6} \end{changemargin} \vspace{-1.3cm} \end{center} \end{figure} This map has an even number of cusps by lemma \ref{nrcusps}, thus $F$ also has an even number of cusps. Now lemma \ref{removecusps} implies that these cusps can be eliminated, and then we have $[g,Y]+[\rho\circ g,Y]=0$. \end{sprf} \begin{sclaim} For any prim map $f=\pr_{\R^7}\circ i$ we have $[f,Y]+[f,-Y]=0$. \end{sclaim} \begin{sprf} We have $$\Imm^\SO(5,3)\cong\pi^s_8(T\gamma^\SO_3)\cong\pi^s_9(ST\gamma^\SO_3),$$ where $\Imm^\SO(5,3)$ is the cobordism group of oriented immersions of $5$-manifolds to $\R^8$. 
The map on the cobordism group induced by changing the orientation of the source coincides with the map on the stable homotopy group induced by the involution $\iota\colon ST\gamma^\SO_3\to ST\gamma^\SO_3$ that is the reflection in $T\gamma^\SO_3$. The induced map $\iota_\#$ coincides with the multiplication by $-1$ in $\pi^s_9(ST\gamma^\SO_3)$, hence there is an oriented manifold $W^6$ with boundary $\partial W=Y\sqcup(-Y)$ and an immersion $I\colon W\to\R^8\times\R_+$ extending $i$ on both boundary components. Define $\tilde W:=W\cup(Y\times[0,1])$ by attaching along the common boundary; hence $\tilde W$ is an orientable manifold. We can extend the normal bundle $\nu:=\nu_I$ to a bundle $\tilde\nu$ over $\tilde W$ in a canonical way (using the identity transition function where the attaching happened). Note that $T\tilde W\oplus\tilde\nu$ is the trivial bundle, so $\tilde\nu$ is a normal bundle for $\tilde W$. Consider the constant vector field $\ua$ on $\R^9$ (the one we get from the standard basis vector on the last coordinate line) as a vector field on $I(W)$. Projecting $\ua$ to the normal bundle $\nu$ gives a section, which can be canonically continued to a section $s$ of $\tilde\nu$. Put $\Sigma:=s^{-1}(0)$, which is dual to $\ol w_3(\tilde W)$. Now $\ua$ also gives a vector field on $\Sigma$ by noting that the restriction of $\ua$ is always in $T\tilde W|_\Sigma$ and a point where $\ua$ is tangent to $\Sigma\cap W$ corresponds to a cusp point of $F:=\pr_{\R^8}\circ I$. Generically $f=\pr_{\R^7}\circ i$ does not have any cusps, hence the section $\ua|_\Sigma$ is nowhere tangent to $\Sigma\cap(Y\times[0,1])$. Since the mod-$2$ number of tangency points is equal to the mod-$2$ self-intersection number $[\Sigma]\bullet[\Sigma]$ of $\Sigma\subset\tilde W$, this means that the number of cusps of $F$ has the same parity as $[\Sigma]\bullet[\Sigma]$.
Now the Poincaré duality implies that $[\Sigma]\bullet[\Sigma]=\ol w_3^2(\tilde W)[\tilde W]=0$, since $\tilde W$ is an orientable $6$-manifold and $\Omega_6\cong0$. We conclude that $F$ has an even number of cusps, which can be eliminated by lemma \ref{removecusps}, so it is a null-cobordism between $f$ on $Y$ and $f$ on $-Y$. \end{sprf} If we combine the above considerations, then we get $$[f,Y]=-[\rho\circ f,-Y]=-[f,-Y]=-[f,Y],$$ thus $[f,Y]$ is indeed an element of order $2$, which is what we wanted to prove. \medskip\noindent\emph{Proof of \ref{a3}.}\enspace\ignorespaces This is a direct analogue of the proof of theorem \ref{cob=n} with the forgetful map $\psi$ in the place of $\varphi$. \end{prf} The following is the analogue of the previous theorem for more general target manifolds $P^{3k+1}$. The proof does not work for arbitrary targets, hence there will be various restrictions on $P$. \begin{thm} $~$ \begin{enumerate} \item\label{b1} If $k=2l-1$ is odd and $P^{6l-2}$ is an orientable manifold, then there is a short exact sequence $$0\to\Z_{3^u}\to\Cob_1^\SO(4l-1,P)\to\Omega_{4l-1}(P)\to0,$$ where $u$ satisfies $1\le u\le t=\min\{n\in\N\mid\alpha_3(2l+n)\le3n\}$. \item\label{b2} If $k=2$ and $P^7$ is an orientable manifold with $H_1(P;\Z_2)\cong H_2(P;\Z_2)\cong0$, then there is a short exact sequence $$0\to\Z_2\to\Cob_1^\SO(5,P)\to\Omega_5(P)\to0.$$ \item\label{b3} If $k=2l\ge4$ is even and $P^{6l+1}$ is an orientable manifold, then we have $$\Cob_1^\SO(4l+1,P)\cong\Omega_{4l+1}(P).$$ \end{enumerate} \end{thm} \begin{prf} \noindent\emph{Proof of \ref{b1}.}\enspace\ignorespaces Similarly to the proof of \ref{a1} in the previous theorem, we want to describe the kernel of the forgetful map $$\psi\colon\Cob_1^\SO(4l-1,P)\to\Omega_{4l-1}(P).$$ We define $g$ to be the greatest common divisor of the numbers $\la p_l(\nu_F),[W]\ra$, where $F$ (and $W$) ranges over all (stable) maps $F\colon W^{4l}\to P\times[0,1]$ for which $W$ is closed and oriented.
We have $g=3^u$ with some $1\le u\le t$ because of the same considerations as in the proof of \ref{a1} in the previous theorem. Again in exactly the same way as in the proof of \ref{a1} in the previous theorem, we have a well-defined isomorphism $$\gamma\colon\ker\psi\to\Z_g$$ by assigning to any $[f]\in\ker\psi$ the algebraic number modulo $g$ of cusps of any map $F\colon W^{4l}\to P\times[0,1]$ extending $f$ (also using lemma \ref{nrcusps}). This yields the desired exact sequence. \medskip\noindent\emph{Proof of \ref{b2}.}\enspace\ignorespaces We first show the following. \medskip\begin{sclaim} For an orientable manifold $P^7$, the conditions $H_1(P;\Z_2)\cong H_2(P;\Z_2)\cong0$ and $\Hom(\Omega_6(P),\Z_2)\cong0$ are equivalent. \end{sclaim} \begin{sprf} We know from \cite{conflo} that there is an isomorphism modulo odd torsion $$\Omega_6(P)\congc{\rm{odd}}\bigoplus_{i=0}^6H_i(P;\Omega_{6-i})=H_1(P;\Z_2)\oplus H_2(P;\Z)\oplus H_6(P;\Z).$$ Hence we have to prove that the claimed homological condition is equivalent to all homomorphisms from the right-hand side to $\Z_2$ being trivial. Write $H_1(P)=F\oplus T_{\rm{e}}\oplus T_{\rm{o}}$, where $F$ is the free part and $T_{\rm{e}}$ and $T_{\rm{o}}$ are respectively the even and odd torsion parts. The universal coefficient theorem implies that $H_1(P;\Z_2)=(F\oplus T_{\rm{e}})\otimes\Z_2$, so we have $$\Hom(H_1(P;\Z_2),\Z_2)\cong0\iff H_1(P;\Z_2)\cong0\iff F\cong T_{\rm e}\cong0.$$ Again by the universal coefficient theorem and the Poincaré duality we get $H_6(P;\Z)\cong H^1(P;\Z)\cong F$, which means $$\Hom(H_6(P;\Z),\Z_2)\cong0\iff F\cong0.$$ Finally, if $F\cong T_{\rm e}\cong0$, then yet again the universal coefficient theorem gives $H^2(P;\Z_2)\cong\Hom(H_2(P;\Z),\Z_2)$ and we obtain the equivalence $$\Hom(H_2(P;\Z),\Z_2)\cong0\iff H_2(P;\Z_2)\cong H^2(P;\Z_2)\cong0.$$ This implies that the claim is indeed true. \end{sprf} Now we know that $\Hom(\Omega_6(P),\Z_2)=0$. 
The number of cusps modulo $2$ gives a homomorphism $\Omega_6(P)\to\Z_2$, since the cusps of generic bordisms are $1$-manifolds with boundary the cusps of the maps they connect. Thus the number of cusps of generic maps $W^6\to P\times[0,1]$ is always even (this is an analogue of part \ref{la2} in lemma \ref{nrcusps}). Now in the same way as in the proof of \ref{a2} in theorem \ref{foldthm}, we get an isomorphism $$\gamma\colon\ker(\psi\colon\Cob_1^\SO(5,P)\to\Omega_5(P))\to\Z_2$$ by counting the number of cusps modulo $2$ of any null-bordism of a given map and using lemma \ref{removecusps}. This yields the exact sequence as claimed. \medskip\noindent\emph{Proof of \ref{b3}.}\enspace\ignorespaces This is completely analogous to the proof of theorem \ref{cob=n} and to part \ref{a3} of theorem \ref{foldthm}. \end{prf} \subsection{The $(2k+2)$-dimensional case} In this subsection we will use the key fibration corresponding to the singularity set $\tau_2=\{\Sigma^0,\Sigma^{1,0},\Sigma^{1,1,0}\}$ with a fixed codimension $k$, which is $$\chi_2^L\colon X_2^L\xra{X_1^L}\Gamma T\tilde\xi_2^L$$ and using $L=\O$ and $L=\SO$. The long exact sequence in homotopy groups of this fibration is of the form $$\ldots\to\pi_{n+1}^s(T\tilde\xi_2^L)\to\pi_n(X_1^L)\to\pi_n(X_2^L)\xra{(\chi_2^L)_\#}\pi_n^s(T\tilde\xi_2^L)\to\ldots$$ We are interested in the group $\pi_n(X_1^L)\cong\Cob_1^L(n-k,k)$ for $n=3k+2$. We start by proving lemmas on Thom spaces in general to understand the homotopy groups of $T\tilde\xi_2^L$. Recall from \cite{rsz} that $\tilde\xi_2^L$ is of dimension $3k+2$. \begin{lemma} If $\zeta$ is a vector bundle of dimension $n\ge1$ over a connected base space $B$, then $\pi_n(T\zeta)\cong\Z$ if $\zeta$ is orientable and $\pi_n(T\zeta)\cong\Z_2$ if $\zeta$ is non-orientable. 
Moreover, the map $$\alpha\colon\pi_n(T\zeta)\to\begin{cases} \Z;~[h]\mapsto\#|B\cap\im h|,&\text{ if }\zeta\text{ is orientable}\\ \Z_2;~[h]\mapsto|B\cap\im h|~\mod2,&\text{ if }\zeta\text{ is non-orientable} \end{cases}$$ is an isomorphism (here we choose a representative of the homotopy class $[h]$ that is transverse to $B$; by $\#|B\cap\im h|$ we mean the number of intersection points counted with signs corresponding to an orientation of $\zeta$; this number is independent of the choice of the representative in both cases). \end{lemma} \begin{prf} Since $T\zeta$ is $(n-1)$-connected, we have $\pi_n(T\zeta)\cong H_n(T\zeta)\cong H^n(T\zeta)$. This group is generated by the Thom class, which is a free generator if $\zeta$ is orientable and has order $2$ if it is non-orientable. Now the map $\alpha$ on any homotopy class $[h]\in\pi_n(T\zeta)$ is the evaluation of the Thom class on the image of $[h]$ under the Hurewicz homomorphism, hence it is an isomorphism. \end{prf} \begin{lemma}\label{pina} If $\zeta$ is a vector bundle of dimension $n\ge3$ over a connected base space $B$, then the map \begin{alignat*}2 \beta\colon\pi_{n+1}(T\zeta)&\to\begin{cases} \Omega_1(B),&\text{ if }\zeta\text{ is orientable}\\ \{[f]\in\NN_1(B)\mid w_1(f^*\zeta)=0\},&\text{ if }\zeta\text{ is non-orientable} \end{cases}\\ [h]&\mapsto[h|_{h^{-1}(B\cap\im h)}] \end{alignat*} is an epimorphism and its kernel is isomorphic to $\Z_2$ if $w_2(\zeta)$ vanishes and trivial if $w_2(\zeta)$ does not vanish. \end{lemma} \begin{prf} We kill the homotopy group $\pi:=\pi_n(T\zeta)$ in the standard way, by pulling back the path fibration over $K(\pi,n)$ as shown in the square $$\xymatrix{ K\ar[d]_{K(\pi,n-1)}\ar[r] & \ast\ar[d]^{K(\pi,n-1)} \\ T\zeta\ar[r]^(.4)u & K(\pi,n) }$$ where the classifying map is the Thom class $u:=u(\zeta)$, which generates $\pi_n(T\zeta)\cong H^n(T\zeta)$.
We will use the Serre spectral sequence to obtain the group $\pi_{n+1}(T\zeta)\cong\pi_{n+1}(K)\cong H_{n+1}(K)\cong H^{n+1}(K)$. The only differentials in this spectral sequence that influence the group $H^{n+1}(K)$ are the transgressions $H^{n-1+i}(K(\pi,n-1))\to H^{n+i}(T\zeta)$ for $i=0,1,2$ (for dimensional reasons). For $i=0$ we know that the differential $H^{n-1}(K(\pi,n-1))\to H^n(T\zeta)$ is an isomorphism, since $H^{n-1}(K)\cong H^n(K)\cong0$. For $i=1$ and orientable $\zeta$, the group $H^n(K(\Z,n-1))$ is trivial; for non-orientable $\zeta$ it is $H^n(K(\Z_2,n-1))=\la\Sq^1\ra$ and the differential sends $\Sq^1$ to $\Sq^1(u)=u\cs w_1(\zeta)$, since transgressions commute with the Steenrod operations. For $i=2$ we have $H^{n+1}(K(\pi,n-1))=\la\Sq^2\ra$ and the image of $\Sq^2$ is $\Sq^2(u)=u\cs w_2(\zeta)$. Combining these, we get that $H^{n+1}(K)$ is associated to the sum of $E_\infty^{n+1,0}=u\cs(H^1(B)/\la w_1(\zeta)\ra)$ and $E_\infty^{0,n+1}$, which is $0$ if $w_2(\zeta)$ is non-trivial and $\Z_2$ otherwise. We get the statement of the lemma by noting that $\Omega_1(B)\cong H_1(B)$ and $\{[f]\in\NN_1(B)\mid w_1(f^*\zeta)=0\}\cong\ker\la w_1(\zeta),\cdot\ra\subset H_1(B;\Z_2)$. \end{prf} \begin{crly}\label{pi2} Let $G$ be a group with a fixed representation $\mu$ on $\R^n$ for an $n\ge3$ and put $\zeta:=EG\utimes\mu\R^n$. Then the map $\beta$ from the previous lemma is an isomorphism iff $\mu_\#$ maps the fundamental group of the identity component of $G$ onto the whole $\pi_1(\SO(n))\cong\Z_2$, that is, iff the image of the identity component of $G$ contains a loop that is non-contractible in $\SO(n)$. \end{crly} \begin{prf} By the previous lemma it is enough to prove that $w_2(\zeta)$ does not vanish iff the image is $\pi_1(\SO(n))$. Now $w_2(\zeta)\ne0$ is equivalent to the existence of a map $h\colon S^2\to BG$ for which $w_2(h^*\zeta)=h^*(w_2(\zeta))\ne0$, and this latter condition means that the bundle $h^*\zeta$ is non-trivial.
The homotopy long exact sequence of the universal bundle $*\cong EG\xra GBG$ implies that we have an isomorphism $\pi_2(BG)\cong\pi_1(G)$ given by the boundary map $\partial$. For any homotopy class $[h]\in\pi_2(BG)$, the bundle $h^*\zeta$ over $S^2$ is in bijective correspondence with its gluing map, i.e. the difference between the trivialisations of the bundle over the two hemispheres, which is a map $S^1\to\SO(n)$ that maps to the image of $\mu_\#$. It is not hard to see that this map is $\mu_\#(\partial([h]))$. Now the existence of a map $h$ such that the bundle $h^*\zeta$ is non-trivial is equivalent to the existence of an $h$ such that $\mu_\#(\partial([h]))\in\Z_2$ is non-trivial and this is what we wanted to prove. \end{prf} Now we are ready to describe the segment of the homotopy long exact sequence of the key fibration containing $\Cob_1^L(2k+2,k)$, which can be seen as the short exact sequence $$0\to\coker\chi_{3k+3}\to\Cob_1^L(2k+2,k)\to\ker\chi_{3k+2}\to0$$ using the notation $\chi_n:=(\chi_2^L)_\#\colon\pi_n(X_2^L)\to\pi^s_n(T\tilde\xi_2^L)$. \begin{rmk}\label{chirmk} The group $\ker\chi_{3k+2}$ has been computed in lemma \ref{valamilyenlemma} for $L=\O$ and in lemma \ref{nrcusps} for $L=\SO$: We have $$\pi_{3k+2}(X_2^L)\cong\Cob_2^L(2k+2,k)\cong\begin{cases} \Omega_{2k+2},&\text{ if }L=\SO\\ \NN_{2k+2},&\text{ if }L=\O \end{cases}$$ and $\chi_{3k+2}$ maps the cobordism class of a generic map $f\colon M^{2k+2}\to\R^{3k+2}$ to the number $|\Sigma^{1,1}(f)|~\mod2$ if either $L=\O$ or $L=\SO$ and $k$ is even, and to the number $\#\Sigma^{1,1}(f)$ if $L=\SO$ and $k$ is odd. Now if $L=\O$, then lemma \ref{valamilyenlemma} implies that $\ker\chi_{3k+2}$ is an index-$2$ subgroup of $\NN_{2k+2}$; and if $L=\SO$, then lemma \ref{nrcusps} implies that $\ker\chi_{3k+2}$ is $\ker\ol p_l[\cdot]\subset\Omega_{4l}$ if $k=2l-1$ is odd, the whole $\Omega_6\cong0$ if $k=2$ and an index-$2$ subgroup of $\Omega_{2k+2}$ if $k\ge4$ is even. 
Hence it only remains to determine $\coker\chi_{3k+3}$. \end{rmk} \begin{lemma}\label{chilemma} Consider the cases $L=\O$ and $L=\SO$ and the map $\beta$ (defined in lemma \ref{pina}) from $\pi_{3k+3}(T\tilde\xi_2^L)=\pi^s_{3k+3}(T\tilde\xi_2^L)$ to the respective bordism groups. \begin{enumerate} \item\label{beta1} $\coker\chi_{3k+3}\cong\coker(\beta\circ\chi_{3k+3})$. \item\label{beta2} For any cobordism class $[f]\in\Cob_2^L(2k+3,k)\cong\pi_{3k+3}(X_2^L)$ represented by a map $f\colon M^{2k+3}\to\R^{3k+3}$, the image $\beta(\chi_{3k+3}([f]))$ depends only on the cobordism class of $M$ in $\Omega_{2k+3}$ or $\NN_{2k+3}$ respectively if $L=\SO$ or $L=\O$. \end{enumerate} \end{lemma} \begin{prf} \emph{Proof of \ref{beta1}.}\enspace\ignorespaces Clearly it is enough to prove that $\beta$ is an isomorphism. Recall that the bundle $\tilde\xi_2^L$ is the universal vector bundle $$EG_2^L\utimes{\tilde\lambda}\R^{3k+2},$$ where $\tilde\lambda$ is the target representation of $G_2^L$. We will check that the condition of corollary \ref{pi2} holds for $\tilde\xi_2^L$. The group $G_2^L$ and its target representation are described in \cite{rsz} and \cite{rrthesis}; in the unoriented case we have $G_2\cong\O(1)\times\O(k)$ and in the cooriented case $G_2^\SO\cong\{(\varepsilon,A)\in\O(1)\times\O(k)\mid\varepsilon^2\det A>0\}\cong\O(1)\times\SO(k)$; the target representation is $$\tilde\lambda(\varepsilon,A)=\varepsilon\oplus A\oplus 1\oplus A\oplus\varepsilon A$$ in both cases. The identity component is $\{1\}\times\SO(k)$, and if $\gamma$ is a non-contractible loop in $\SO(k)$, then clearly $\tilde\lambda(1,\cdot)\circ\gamma$ is a non-contractible loop in $\SO(3k+2)$. Thus corollary \ref{pi2} implies that $\beta$ is an isomorphism.
\medskip\noindent\emph{Proof of \ref{beta2}.}\enspace\ignorespaces If $f\colon M^{2k+3}\to\R^{3k+3}$ represents an element $[f]\in\Cob_2^L(2k+3,k)$, then the element $\beta(\chi_{3k+3}([f]))$ in $\Omega_1(BG_2^L)$ or in $\NN_1(BG_2^L)$ (depending on $L$ and $k$) is represented by the inducing map $f(\Sigma^{1,1}(f))\to BG_2^L$ of the normal bundle of $f(\Sigma^{1,1}(f))$ (when we pull it back from $\tilde\xi_2^L$; see theorem \ref{univ}). If we have an arbitrary cobordism of the manifold $M$ and generically map it to $\R^{3k+3}\times[0,1]$ (extending the map $f$ on the boundary part $M$ to $\R^{3k+3}\times\{0\}$ and mapping the rest of the boundary to $\R^{3k+3}\times\{1\}$), then this map will have isolated $\rm{III}_{2,2}$-points ($\rm{III}_{2,2}$ is the simplest $\Sigma^2$ singularity; see \cite{math}) apart from regular, fold and cusp points. The root of the singularity $\rm{III}_{2,2}$ (see \cite{rrthesis}) is such that the cusp points on its boundary form a circle with trivial normal bundle, hence a cobordism of the manifold $M$ yields the same $\beta(\chi_{3k+3}([f]))$. This is what we wanted to prove. \end{prf} \begin{thm} There is a short exact sequence $$0\to\Z_2\to\Cob_1(2k+2,k)\to G\to0,$$ where $G$ is an index-$2$ subgroup of $\NN_{2k+2}$. \end{thm} \begin{prf} By remark \ref{chirmk} and lemma \ref{chilemma} we only have to prove that $\coker(\beta\circ\chi_{3k+3})$ is $\Z_2$. Since the elements of $\NN_{2k+3}$ are completely determined by their Stiefel--Whitney numbers, we want to express the map $\beta\circ\chi_{3k+3}$ in terms of these. The image of $\beta$ can be identified with $\pi_1(BG_2)\cong\pi_1(B\O(1))\oplus\pi_1(B\O(k))\cong\Z_2\oplus\Z_2$. As we saw above, for a map $f\colon M^{2k+3}\to\R^{3k+3}$, the element $\beta(\chi_{3k+3}([f]))\in\pi_1(BG_2)$ is represented by the inducing map of the normal bundle of $f(\Sigma^{1,1}(f))$. 
Recall again that $G_2=\O(1)\times\O(k)$ was such that its source and target representations are $$\lambda(\varepsilon,A)=\varepsilon\oplus 1\oplus A\oplus\varepsilon A~~~~\text{and}~~~~\tilde\lambda(\varepsilon,A)=\varepsilon\oplus A\oplus 1\oplus A\oplus\varepsilon A,$$ hence in $\pi_1(BG_2)\cong\Z_2\oplus\Z_2$ the projection to the first $\Z_2$ means the orientability of the kernel line bundle over each loop in $\Sigma^{1,1}(f)$ and the projection to the second $\Z_2$ is the orientability of the virtual normal bundle $\nu_f$ over each loop in $\Sigma^{1,1}(f)$. Now we use \cite{kazthom} (together with \cite[6.2. théorème]{bh}) to obtain that the map $\beta\circ\chi_{3k+3}$ on a cobordism class $[M]\in\NN_{2k+3}$ is the evaluation of $$(\ol w_{k+2}\cs\ol w_{k+1}+\ol w_{k+3}\cs\ol w_k,~\ol w_1\cs\ol w_{k+1}^2+\ol w_1\cs\ol w_k\cs\ol w_{k+2})$$ on $[M]$. The class $(\Sq^1+\ol w_1\cs)(\ol w_{k+1}^2+\ol w_k\cs\ol w_{k+2})$ is the second entry in this pair for $k$ odd and the sum of the two entries for $k$ even and it always evaluates to $0$ according to \cite{dold}. Hence the first entry determines $\im(\beta\circ\chi_{3k+3})$. Similarly to the proof of part \ref{la3} of lemma \ref{nrcusps}, we use the Dold manifold $Y^5=(\CP^2\times S^1)/\Z_2$ multiplied by $(\RP^2)^{k-1}$. A computation analogous to the one for lemma \ref{nrcusps} yields that $\ol w_{k+2}\cs\ol w_{k+1}+\ol w_{k+3}\cs\ol w_k$ evaluates to $1$ on this manifold, hence we have $\im(\beta\circ\chi_{3k+3})\cong\Z_2$. Thus $\coker(\beta\circ\chi_{3k+3})$ is also $\Z_2$ and the statement of the theorem follows. \end{prf} \begin{thm} $~$ \begin{enumerate} \item\label{o1g} If $k=2l-1$ is odd, then $\Cob_1^\SO(4l,2l-1)$ is isomorphic to the kernel of the epimorphism $\ol p_l[\cdot]\colon\Omega_{4l}\twoheadrightarrow\Z$. \item\label{o2g} If $k=2$, then we have $\Cob_1^\SO(6,2)\cong0$. \item\label{o3g} If $k=2l\ge4$ is even, then $\Cob_1^\SO(4l+2,2l)$ is an index-$2$ subgroup of $\Omega_{4l+2}$. 
\end{enumerate} \end{thm} \begin{prf} By remark \ref{chirmk} and lemma \ref{chilemma} we only have to prove that $\coker(\beta\circ\chi_{3k+3})$ is trivial in all cases. \medskip\noindent\emph{Proof of \ref{o1g}.}\enspace\ignorespaces Now $\tilde\xi_2^\SO$ is orientable, hence $\beta$ maps to $\pi_1(BG_2^\SO)\cong\pi_1(B\O(1))\cong\Z_2$ and $\beta\circ\chi_{3k+3}$ can be identified with the evaluation of $\ol w_{k+2}\cs\ol w_{k+1}+\ol w_{k+3}\cs\ol w_k$. As in the proof above, we use that the manifold $Y^5\times(\RP^2)^{k-1}$ (which is cobordant to an orientable manifold by \cite{wallcob}) evaluates to $1$, and so $\beta\circ\chi_{3k+3}$ is surjective and its cokernel is trivial. \medskip\noindent\emph{Proof of \ref{o2g} and \ref{o3g}.}\enspace\ignorespaces Now $\tilde\xi_2^\SO$ changes orientation over all non-contractible loops in $BG_2^\SO$, hence the image of the isomorphism $\beta$ is trivial. \end{prf} \section{1-codimensional cobordisms}\label{1co} Here we will consider cobordisms of cooriented Morin maps of codimension $1$ to Euclidean spaces, i.e. the groups $\Cob_r^\SO(n,1)$; the number $n$ will be fixed throughout this section. It will turn out that these groups are isomorphic to direct sums of stable homotopy groups of spheres modulo some torsion parts, and in some cases a few of these torsion parts can also be described. First we will consider fold, then cusp and after that higher Morin maps. \begin{rmk} The analogue of this for unoriented Morin maps has been computed in theorem \ref{unorthm}, which includes as a special case that $\Cob_r(n,1)$ is finite $2$-primary for all $r\in\N\cup\{\infty\}$. \end{rmk} \begin{rmk}\label{morform} We will need the global normal forms of $1$-codimensional Morin singularities. It follows from the computations for arbitrary codimensional singularities in \cite{rsz} and \cite{rrthesis} that we have $G_r^\SO\cong\Z_2$ for any positive integer $r$.
The target representation $\tilde\lambda\colon\Z_2\to\O(2r+1)$ is such that its image is in $\SO(2r+1)$ iff $r$ is even, hence the universal bundle associated to it is $$\tilde\xi_r^\SO=E\Z_2\utimes{\tilde\lambda}\R^{2r+1}=i\gamma_1\oplus\varepsilon^j$$ over $B\Z_2\cong\RP^\infty$ for some numbers $i$ and $j$ with $i+j=2r+1$ and $i\equiv r~\mod2$. This implies that the Thom space $T\tilde\xi_r^\SO$ is $S^j(\RP^\infty/\RP^{i-1})$. It is not hard to see that for any odd prime $p$, the reduced cohomology $\tilde H^*(T\tilde\xi_r^\SO;\Z_p)$ is isomorphic to $\tilde H^*(S^{2r+1};\Z_p)$ if $r$ is even (the isomorphism is induced by the inclusion $S^{2r+1}\subset T\tilde\xi_r^\SO$ as a ``fibre'') and it vanishes if $r$ is odd. Thus by Serre's mod-$\CC_2$ Whitehead theorem from \cite{serre}, the inclusion $\Gamma S^{2r+1}\subset\Gamma T\tilde\xi_r^\SO$ induces isomorphisms of the odd torsion parts of homotopy groups if $r$ is even and the homotopy groups of $\Gamma T\tilde\xi_r^\SO$ are finite $2$-primary if $r$ is odd. \end{rmk} \subsection{Fold maps} The main tool in this subsection will be Koschorke's description of the Kahn--Priddy map from \cite{kos}, so first we recall this. \begin{defi} For any positive integer $m$, there are maps \begin{itemize} \item[(i)] $\RP^{m-1}\into\O(m)$ defined for a point $p\in\RP^{m-1}$, represented as a hyperplane in $\R^m$, by mapping $p$ to the reflection in this hyperplane, \item[(ii)] $\O(m)\into\Omega^mS^m$ defined by mapping a point $A\in\O(m)$ to the element $s\in\Omega^mS^m$, represented as a map $S^m=\R^m\cup\{\infty\}\to\R^m\cup\{\infty\}=S^m$, for which $s(x)=A(x)$ if $x\in\R^m$ and $s(\infty)=\infty$. \end{itemize} The adjoint of the composition of these maps $\RP^{m-1}\into\O(m)\into\Omega^mS^m$ is a map $\lambda\colon S^m\RP^{m-1}\to S^m$.
For any $n<m$ we have $\pi_{m+n}(S^m\RP^{m-1})\cong\pi^s_n(\RP^{m-1})\cong\pi^s_n(\RP^\infty)$ and $\pi_{m+n}(S^m)\cong\pi^s(n)$, hence $\lambda$ induces a homomorphism $$\lambda_\#\colon\pi^s_n(\RP^\infty)\to\pi^s(n),$$ which is called the Kahn--Priddy map. \end{defi} \begin{rmk} The stable homotopy groups $\pi^s_n(\RP^\infty)$ are $2$-primary (if $n\ge1$) and for small numbers $n$ they are completely computed; see \cite{liu}. \end{rmk} The following is due to Kahn and Priddy \cite{kahnpr} and we will not prove it here. \begin{thm}\label{lambda} The homomorphism $\lambda_\#\colon\pi^s_n(\RP^\infty)\to\pi^s(n)$ is onto the $2$-primary part of $\pi^s(n)$. \end{thm} Next we describe Koschorke's figure-$8$ construction that gives a very geometric understanding of the homomorphism $\lambda_\#$. \begin{defi} We define the figure-$8$ construction on an immersion $i\colon N^{n-1}\imto\R^n$ as follows: The composition of $i$ with the standard embedding $\R^n\into\R^{n+1}$ has normal bundle $\nu_i\oplus\varepsilon^1$; in each fibre of this bundle (considered as a plane in $\R^{n+1}$) we put a figure $8$ symmetrically to the $\nu_i$ factor (i.e. the image of $t\mapsto(\sin(2t),2\sin(t))$ with the first coordinate corresponding to $\nu_i$ and the second to $\varepsilon^1$). If we choose these figures $8$ smoothly, their union gives an immersion $$8(i)\colon S(\nu_i\oplus\varepsilon^1)\imto\R^{n+1},$$ where the circle bundle $S(\nu_i\oplus\varepsilon^1)$ is an oriented $n$-manifold. \end{defi} \begin{rmk} The unoriented cobordism group of immersions of $(n-1)$-manifolds to $\R^n$ is isomorphic to $\pi^s_n(\RP^\infty)$ and the oriented cobordism group of immersions of $n$-manifolds to $\R^{n+1}$ is isomorphic to $\pi^s(n)$. 
Clearly the figure-$8$ construction respects the cobordism relations (if $i_0\colon N_0^{n-1}\imto\R^n$ and $i_1\colon N_1^{n-1}\imto\R^n$ are unoriented cobordant, then $8(i_0)$ and $8(i_1)$ are oriented cobordant, since the figure-$8$ construction can be applied to cobordisms as well), hence we obtain a map $$8_\#\colon\pi^s_n(\RP^\infty)\to\pi^s(n).$$ \end{rmk} The following is a theorem of Koschorke \cite{kos} and again the proof will be omitted. \begin{thm}\label{8} The homomorphisms $\lambda_\#$ and $8_\#$ coincide. \end{thm} In the following we will see why the above theorems are important for the computation of cobordisms of fold maps. To do this, we use the key fibration for the singularity set $\{\Sigma^0,\Sigma^{1,0}\}$ that connects the classifying spaces $X_0^\SO$, $X_1^\SO$ and $\Gamma T\tilde\xi_1^\SO$ (for $1$-codimensional maps). Now $X_0^\SO$ is the classifying space for $1$-codimensional cooriented immersions, which is clearly $\Gamma T\gamma_1^\SO=\Gamma S^1$; and $\tilde\xi_1^\SO$ is the bundle $\gamma_1\oplus\varepsilon^2$ over $\RP^\infty$, hence $\Gamma T\tilde\xi_1^\SO=\Gamma S^2\RP^\infty$. Thus the key fibration is $$X_1^\SO\xra{\Gamma S^1}\Gamma S^2\RP^\infty,$$ and so its homotopy long exact sequence has the form $$\ldots\xra\partial\pi^s(n)\to\Cob_1^\SO(n,1)\to\pi^s_{n-1}(\RP^\infty)\xra\partial\pi^s(n-1)\to\ldots$$ where we are using the isomorphisms $\pi_{n+1}(\Gamma S^1)\cong\pi^s(n)$ and $\pi_{n+1}(X_1^\SO)\cong\Cob_1^\SO(n,1)$ and $\pi_{n+1}(\Gamma S^2\RP^\infty)\cong\pi^s_{n+1}(S^2\RP^\infty)\cong\pi^s_{n-1}(\RP^\infty)$. \begin{lemma}\label{d8} The boundary map $\partial$ coincides with $8_\#$. \end{lemma} Before proving this lemma, we state (and prove) its most important corollary, which describes the cobordism groups $\Cob_1^\SO(n,1)$.
\begin{thm}\label{kpthm} $\Cob_1^\SO(n,1)$ is a finite Abelian group with odd torsion part isomorphic to that of $\pi^s(n)$ and even torsion part isomorphic to the kernel of the Kahn--Priddy map $\lambda_\#\colon\pi^s_{n-1}(\RP^\infty)\twoheadrightarrow[\pi^s(n-1)]_2$. In other words, we have $$\Cob_1^\SO(n,1)\cong\ker\lambda_\#\oplus\displaystyle\bigoplus_{p~\rm{odd}\atop\rm{prime}}[\pi^s(n)]_p.$$ \end{thm} \begin{prf} Trivial from theorems \ref{lambda} and \ref{8} and lemma \ref{d8}. \end{prf} \medskip\par\noindent\textbf{Proof of lemma \ref{d8}.\enspace\ignorespaces} First observe that the homotopy group $\pi_{n+1}(\Gamma S^2\RP^\infty)\cong\pi_{n+1}(X_1^\SO,\Gamma S^1)$ is isomorphic to the cobordism group of cooriented fold maps $$f\colon(M^n,\partial M^n)\to(D^{n+1},S^n)$$ for which $\partial f:=f|_{\partial M}$ is an immersion to $S^n$ (the cobordism relation and group structure can be defined analogously to section \ref{cobsec}). This $f$ will be a fixed representative throughout the proof. The boundary map $$\partial\colon\pi_{n+1}(X_1^\SO,\Gamma S^1)\to\pi_n(\Gamma S^1)$$ maps this cobordism group to the cobordism group of cooriented immersions of $(n-1)$-manifolds to $S^n$ by assigning to the cobordism class $[f]$ (as a cooriented fold map) the cobordism class $[\partial f]$ (as a cooriented immersion). Put $\Sigma:=\Sigma(f)=\Sigma^{1,0}(f)$, which is a $2$-codimensional submanifold of $M$. Note that $f|_\Sigma$ is an immersion and fix closed tubular neighbourhoods $T\subset M$ of $\Sigma$ and $\tilde T\subset D^{n+1}$ of $\tilde\Sigma:=f(\Sigma)$.
Now theorem \ref{univ} (more precisely its analogue for multisingularities and cooriented maps) implies that there is a $D^3$-bundle $\hat T$ over $\Sigma$, an immersion $i\colon\hat T\imto D^{n+1}$ with image $\tilde T$ and a fibrewise map $\hat f\colon T\to\hat T$ such that $\hat f$ restricted to any fibre $D^2$ of $T$ is the Whitney umbrella $$\sigma_1\colon(D^2,0)\to(D^3,0);~(x,y)\mapsto(x,xy,y^2)$$ (i.e. the normal form of the $1$-codimensional fold singularity) and the following diagram is commutative: $$\xymatrix@R=1.5pc{ T\ar[rr]^{f|_T}\ar[dr]^(.6){\hat f} && \tilde T\ar@{<-_)}[dd] \\ & \hat T\ar[dl]\ar[ur]^(.4)i &\\ \Sigma\ar@{_(->}[uu]\ar[rr]^{f|_\Sigma} && \tilde\Sigma }$$ Put $N:=\ol{M\setminus T}$ and decompose its boundary as $\partial N=\partial_0N\sqcup\partial_1N$ with $\partial_0N=\partial M$ and $\partial_1N=\partial T$. Now the image of $f|_{\partial_1N}$ is the union of the images of the fibrewise Whitney umbrellas (composed with the immersion $i$) restricted to the boundary $S^1$ of each fibre. The image of such a restriction $\sigma_1|_{S^1}$ is a figure $8$ in a fibre $S^2$ of $\partial\hat T$. \begin{figure}[H] \begin{center} \centering\includegraphics[scale=0.1]{kep7} \begin{changemargin}{2cm}{2cm} \caption{\hangindent=1.4cm\small Here we show the image of the Whitney umbrella; the indicated point is the origin and the curved figure $8$ is the image of the boundary.}\label{kep7} \end{changemargin} \vspace{-1.5cm} \end{center} \end{figure} Such a figure $8$ can canonically be endowed with a non-vanishing normal vector field in $S^2$ and with the inward normal vector field in $D^3$. Thus the image of $f|_{\partial_1N}$ is a codimension-$2$ immersed submanifold in $D^{n+1}$ equipped with a normal framing. We contract each figure $8$ in the corresponding $S^2$ to a small neighbourhood of its double point.
We can identify the image of the singular set $\tilde\Sigma$ with the set of double points $\Delta$ of these figures $8$, hence $\Delta$ is also the image of an immersion with normal bundle induced from $\tilde\xi_1^\SO=\gamma_1\oplus2\varepsilon^1$. Over each point of $\Delta$, the first trivial line bundle $\varepsilon^1$ can be identified with the direction of the double point line of the Whitney umbrella; and the second $\varepsilon^1$ with the direction of the second coordinate in $t\mapsto(\sin(2t),2\sin(t))$ seen as the small figure $8$ around this point. Of course we can view the immersion of $\Delta$ as a map to $\R^{n+1}$ (instead of $D^{n+1}$) and then the multi-compression theorem \ref{mct} (more precisely its version for immersions; see corollary \ref{ict}) can be applied to turn the two $\varepsilon^1$ directions in the normal bundle of $\Delta$ parallel to the last two coordinate lines. Projecting along these lines, we obtain an immersion $j$ to $\R^{n-1}$. If the figures $8$ were contracted to be small enough, then this $j$ extends to an immersion to $\R^n$ of a disk bundle over its source (induced from $D(\gamma_1\oplus\varepsilon^1)$) for which the image of each fibre contains the above described small figure $8$. Now the union of these figures $8$ is precisely the image of what we get by applying the figure-$8$ construction to the immersion $j$. We obtained $j$ by a regular homotopy of the immersion of $\Delta$, which was regularly homotopic to that of $\tilde\Sigma$, i.e. the map $f|_\Sigma$. This implies that $j$ and $f|_\Sigma$ represent the same element of $\pi^s_{n-1}(\RP^\infty)\cong\pi^s_{n+1}(S^2\RP^\infty)$ noting that this suspension isomorphism is given between the cobordism groups of unoriented immersions to $\R^{n-1}$ and those to $\R^{n+1}$ with normal bundles induced from $\gamma_1\oplus\varepsilon^2$. Hence the immersion of the union of the figures $8$ on the boundary of $\tilde T$ (i.e. 
the map $f|_{\partial T}$) represents $8_\#([f|_\Sigma])$. This is the same as $8_\#([f])$, since $[f|_\Sigma]$ and $[f]$ correspond to each other in the isomorphism $\pi^s_{n+1}(S^2\RP^\infty)\cong\pi_{n+1}(X_1^\SO,\Gamma S^1)$. Now the only thing left to prove is that $f|_{\partial T}\colon\partial T\imto D^{n+1}$, equipped with the normal framing of the figures $8$ mentioned above, represents the same element of $\pi^s(n-1)$ as the immersion $\partial f\colon\partial M\imto S^n$. Here $f|_{\partial T}$ is a $2$-codimensional framed immersion and $\partial f$ is just $1$-codimensional, so the first task is to turn $f|_{\partial T}$ $1$-codimensional as well. Take a small neighbourhood of a point in $\tightoverset{_\circ~~~~}{D^{n+1}}$ disjoint from the image of $f$ and delete it from the disk. We obtain a space diffeomorphic to $S^n\times[0,1]$ and $f|_{\partial T}$ is a framed immersion to this. Now again the immersion version of the multi-compression theorem \ref{mct} yields a regular homotopy of $f|_{\partial T}$ that turns one of the framing vector fields parallel to $T[0,1]$. By combining this with the projection to $S^n\times\{1\}$, we get a $1$-codimensional immersion $\partial T\imto S^n$. This deformation can be identified with the suspension isomorphism for the stable homotopy groups of spheres, hence this immersion represents the same element of $\pi^s(n-1)$ as $f|_{\partial T}$. But the above deformation can be extended to the manifold $N=\ol{M\setminus T}$ connecting $\partial T$ and $\partial M$, hence we get an immersion $N\imto S^n\times[0,1]$ which maps $\partial_0N$ to $S^n\times\{0\}$ by $\partial f$ and $\partial_1N$ to $S^n\times\{1\}$ by $f|_{\partial T}$ (up to regular homotopy). Hence we got a cobordism connecting $\partial f$ and $f|_{\partial T}$, which finishes the proof. 
~$\square$\par\medskip \begin{rmk} Liulevicius \cite{liu} computed the stable homotopy groups of $\RP^\infty$ in dimensions at most $9$; these are shown below together with the corresponding stable homotopy groups of spheres and the groups $\Cob_1^\SO(n,1)$ computed from them. \begin{table}[H]\begin{center}\begin{tabular}{c||c|c|c|c|c|c|c|c|c|c} $n$ & $1$ & $2$ & $3$ & $4$ & $5$ & $6$ & $7$ & $8$ & $9$ & $10$\\ \hline $\pi^s_{n-1}(\RP^\infty)$ & $\Z$ & $\Z_2$ & $\Z_2$ & $\Z_8$ & $\Z_2$ & $0$ & $\Z_2$ & $\Z_{16}\oplus\Z_2$ & $(\Z_2)^3$ & $(\Z_2)^4$\\ \hline $\pi^s(n)$ & $\Z_2$ & $\Z_2$ & $\Z_{24}$ & $0$ & $0$ & $\Z_2$ & $\Z_{240}$ & $(\Z_2)^2$ & $(\Z_2)^3$ & $\Z_6$\\ \hline $\Cob_1^\SO(n,1)$ & $0$ & $0$ & $\Z_3$ & $0$ & $\Z_2$ & $0$ & $\Z_{15}$ & $\Z_2$ & $\Z_2$ & $\Z_6$ \end{tabular}\end{center}\end{table}\vspace{-.5cm} \end{rmk} \begin{rmk} We also proved that the odd torsion parts of the groups $\Cob_1^\SO(n,1)$ can be represented by immersions. In particular for $n=3$ and $n=7$, these can be chosen to be immersions of $S^3$ and $S^7$ respectively, since in these dimensions the $J$ homomorphism is epi. \end{rmk} \subsection{Cusp maps} Now the main tool will be the classifying space for the cobordisms of those cooriented codimension-$1$ cusp maps that are equipped with a trivialisation of the normal bundle of the immersion of the cusp stratum. This can be constructed completely analogously to the classifying spaces in chapter \ref{classp} and will be denoted by $\tilde X_2^\SO$. By remark \ref{morform} we know that the inclusion $\Gamma S^5\subset\Gamma T\tilde\xi_2^\SO$ is a mod-$\CC_2$ homotopy equivalence. Consider the pullback of the key fibration by this inclusion $$\xymatrix{ \tilde X_2^\SO\ar[r]\ar[d]_{X_1^\SO} & X_2^\SO\ar[d]^{X_1^\SO} \\ \Gamma S^5\ar@{^(->}[r] & \Gamma T\tilde\xi_2^\SO }$$ The horizontal arrows here induce isomorphisms in the odd torsion parts of the homotopy groups; we will see that the left-hand fibration ``almost'' has a splitting.
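The last row of the table of fold cobordism groups above is determined by the first two rows: by theorem \ref{kpthm} the group $\Cob_1^\SO(n,1)$ is $\ker\lambda_\#$ plus the odd part of $\pi^s(n)$, and since $\lambda_\#$ is onto the $2$-primary part of $\pi^s(n-1)$, the order of its kernel is $|\pi^s_{n-1}(\RP^\infty)|$ divided by the order of that $2$-primary part. A minimal Python sketch of this bookkeeping for $2\le n\le10$ (the names are ad hoc, and only group orders are checked, not group structures):

```python
# Order check for the table of fold cobordism groups (a sketch; only orders
# are verified). By theorem kpthm, |Cob_1^SO(n,1)| = |ker(lambda_#)| times
# the order of the odd part of pi^s(n), and since lambda_# surjects onto the
# 2-primary part of pi^s(n-1),
# |ker(lambda_#)| = |pi^s_{n-1}(RP^infty)| / |[pi^s(n-1)]_2|.

def odd_part(m):
    """Largest odd divisor of the positive integer m."""
    while m % 2 == 0:
        m //= 2
    return m

# group orders taken from the table, indexed by n
rp_stable = {2: 2, 3: 2, 4: 8, 5: 2, 6: 1, 7: 2, 8: 32, 9: 8, 10: 16}  # |pi^s_{n-1}(RP^infty)|
sphere_stable = {1: 2, 2: 2, 3: 24, 4: 1, 5: 1, 6: 2, 7: 240,
                 8: 4, 9: 8, 10: 6}                                     # |pi^s(n)|

def cob_order(n):
    two_part = sphere_stable[n - 1] // odd_part(sphere_stable[n - 1])
    ker_order = rp_stable[n] // two_part  # order of ker(lambda_#)
    return ker_order * odd_part(sphere_stable[n])

expected = {2: 1, 3: 3, 4: 1, 5: 2, 6: 1, 7: 15, 8: 2, 9: 2, 10: 6}
assert all(cob_order(n) == expected[n] for n in range(2, 11))
```

Running the script succeeds silently; for instance $n=7$ gives kernel order $2/2=1$ and odd part $15$ of $\pi^s(7)\cong\Z_{240}$, matching the entry $\Z_{15}$.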
\begin{defi} Let $\pi\colon E\xra FB$ be a fibration and $l$ an integer. \begin{enumerate} \item We say that $\pi$ has an algebraic $l$-splitting, if for all $i\in\N$ there is a homomorphism $s_i\colon\pi_i(B)\to\pi_i(E)$ such that $\pi_\#\circ s_i$ is the multiplication by $l$ in the group $\pi_i(B)$. \item We say that $\pi$ has a geometric $l$-splitting, if it has an algebraic one for which all homomorphisms $s_i$ are induced by a map $s\colon B\to E$. \end{enumerate} \end{defi} \begin{lemma}\label{lspl} If $l\in\Z$ is such that there is a map $f\colon M^4\to\R^5$ with algebraic number of cusps $\#\Sigma^{1,1}(f)=l$, then the fibration $\tilde X_2^\SO\xra{X_1^\SO}\Gamma S^5$ has an algebraic $l$-splitting. \end{lemma} \begin{prf} Let $i\ge5$ be an integer and fix an element $[j]\in\pi_i(\Gamma S^5)\cong\pi^s(i-5)$ represented by an immersion $j\colon N^{i-5}\imto\R^i$ endowed with a normal framing. We may assume that $f$ maps to the disk $D^5$, and then the map $${\id}_N\times f\colon N\times M\to N\times D^5$$ can be composed with an immersion of $N\times D^5$ to $\R^i$ onto a tubular neighbourhood of $j(N)$ using the normal framing. The composition of $\id_N\times f$ with this immersion is clearly a cusp map to $\R^i$ with framed cusp stratum, hence it represents an element in $\pi_i(\tilde X_2^\SO)$, and its restriction to the cusp stratum is $l\cdot[j]\in\pi^s(i-5)$. This defines the homomorphism $s_i$ and shows that it is an $l$-splitting. \end{prf} \begin{rmk} The proof can be extended to also show that this $l$-splitting is actually geometric, but we will not need this. \end{rmk} Now the groups $\Cob_2^\SO(n,1)$ can be obtained modulo their $2$- and $3$-primary parts. 
Moreover, the $3$-primary parts can also be described up to a group extension; here we will use that the generator $\alpha_1\in\pi^s(3)\cong\Z_3\oplus\Z_8$ of the $\Z_3$ part defines a homomorphism of order $3$ $$(\alpha_1)_i\colon\pi^s(i)\to\pi^s(i+3);~[b]\mapsto[a\circ b]$$ (for all $i$), where $[b]$ is represented by $b\colon S^{m+i+3}\to S^{m+3}$ (for some large number $m$) and the map $a\colon S^{m+3}\to S^m$ represents $\alpha_1$. \begin{thm}\label{cuspthm} $\Cob_2^\SO(n,1)$ is such that there is a $\CC_{2,3}$-isomorphism $$\Cob_2^\SO(n,1)\congc{2,3}\pi^s(n)\oplus\pi^s(n-4)$$ and a $\CC_2$-exact sequence $$0\to\coker(\alpha_1)_{n-3}\to\Cob_2^\SO(n,1)\to\ker(\alpha_1)_{n-4}\to0.$$ \end{thm} \begin{prf} We will not prove the existence of the $\CC_2$-exact sequence here, as it relies on a spectral sequence for prim maps that will only be computed in the next section. For the proof we instead refer to remark \ref{c2ex}. The $\CC_{2,3}$-isomorphism can be obtained from a sequence of isomorphisms \begin{alignat*}2 \Cob_2^\SO(n,1)&\cong\pi_{n+1}(X_2^\SO)\toverset{^*}{\congc2}\pi_{n+1}(\tilde X_2^\SO)\toverset{^{**}}{\congc{2,3}}\\ &\toverset{^{**}}{\congc{2,3}}\pi_{n+1}(\Gamma S^5)\oplus\pi_{n+1}(X_1^\SO)\toverset{^{***}}{\congc2}\pi^s(n-4)\oplus\pi^s(n), \end{alignat*} where $^*$ follows from the observation in the beginning of this subsection, $^{***}$ is by theorem \ref{kpthm} and $^{**}$ will follow from lemma \ref{lspl} as soon as we prove that there is a map $f\colon M^4\to\R^5$ with algebraically $6$ cusps. This follows from a theorem of Eccles and Mitchell \cite{em} claiming that there is an immersion $i\colon M^4\imto\R^6$ with algebraically $2$ triple points, that is, $6$ such points in the source; now by \cite{prim} this equals the algebraic number of cusp points of $\pr_{\R^5}\circ i$. \end{prf} \begin{rmk} Since $(\alpha_1)_i$ is a homomorphism of order $3$, the $\CC_2$-exact sequence above describes the $3$-primary part of the group $\Cob_2^\SO(n,1)$. 
As a consequence of Toda's work \cite{toda}, we know that for $i\le8$, the homomorphism $(\alpha_1)_i$ is non-trivial only for $i=0$. Hence for $3\ne n\le11$, we have a short exact sequence of $3$-primary parts $$0\to[\pi^s(n)]_3\to\big[\Cob_2^\SO(n,1)\big]_3\to[\pi^s(n-4)]_3\to0.$$ \end{rmk} \subsection{Higher Morin maps} The computation in this subsection will be quite similar to the previous one. Namely we will use the $l$-splitting (for some $l$) of the bundle $$\tilde X_{2r}^\SO\xra{X_{2r-1}^\SO}\Gamma S^{4r+1}$$ for all $r\ge1$, where $\tilde X_{2r}^\SO$ is the classifying space for the cobordisms of those cooriented codimension-$1$ $\Sigma^{1_{2r}}$-maps that are equipped with a normal framing of the immersion of the $\Sigma^{1_{2r}}$-stratum. Again we will obtain a $\CC_{\le2r+1}$-isomorphism between the cobordism groups and direct sums of certain stable homotopy groups of spheres. Before precisely stating and proving the main theorem, we need the following weak analogue of the Eccles--Mitchell theorem used above. \begin{lemma}\label{lr} Let $r$ be a natural number and $l(r)$ be the order of the cokernel of the stable Hurewicz homomorphism $$\pi^s_{4r+2}(\CP^\infty)\to H_{4r+2}(\CP^\infty).$$ Then for any immersion of an oriented, closed $4r$-manifold to $\R^{4r+2}$, the algebraic number of $(2r+1)$-tuple points is divisible by $l(r)$, and there is an immersion for which this number is precisely $l(r)$. \end{lemma} \begin{prf} By Herbert's formula \cite{herbert}, the algebraic number of $(2r+1)$-tuple points of an immersion $i\colon M^{4r}\imto\R^{4r+2}$ is the normal Pontryagin number $\ol p_1^r[M]$. Now $i$ represents an element $[i]$ in the cobordism group of oriented codimension-$2$ immersions to $\R^{4r+2}$, which is isomorphic to $\pi^s_{4r+2}(T\gamma_2^\SO)$.
If the Pontryagin--Thom map of $i$ is $\alpha:=\alpha(i)\colon S^{4r+2}\nrightarrow T\gamma_2^\SO$, then the cobordism class $[i]$ corresponds to the homotopy class $[\alpha]$ by the above isomorphism. Consider the composition of the maps $$\pi^s_{4r+2}(T\gamma_2^\SO)\xra hH_{4r+2}(T\gamma_2^\SO)\xra\varphi H_{4r}(B\SO(2))\xra{\la p_1^r(\gamma_2^\SO),\cdot\ra}\Z,$$ where $h$ is the stable Hurewicz homomorphism and $\varphi$ is the homological Thom isomorphism. Since $\varphi$ and the evaluation map $x\mapsto\la p_1^r(\gamma_2^\SO),x\ra$ are isomorphisms, the cokernel of this composition is the same as $\coker h$, hence its image is $l(r)\Z$ (recall that the stable Hurewicz map is a rational isomorphism). Now it is enough to prove that $[\alpha]\in\pi^s_{4r+2}(T\gamma_2\SO)$ gets mapped to $\ol p_1^r[M]\in\Z$. Note that $\alpha$ is a stable map, i.e. a map $S^{4r+2+m}\to S^mT\gamma_2^\SO$ for a sufficiently large number $m$. It is by definition given as a composition $\beta\circ q$, where $q\colon S^{4r+2+m}\to S^mT\nu_i$ is the quotient by the complement of a tubular neighbourhood and $\beta\colon S^mT\nu_i\to S^mT\gamma_2^\SO$ is a fibrewise map that restricts to $M$ as the inducing map $b\colon M\to B\SO(2)$ of the normal bundle $\nu_i$. Now using the notations $p_1:=p_1(\gamma_2^\SO)$ and $u:=u(\gamma_2^\SO)$, we have \begin{alignat*}2 \la p_1^r,\varphi h([\alpha])\ra&=\la p_1^r,\varphi\alpha_*([S^{4r+2+m}])\ra=\la p_1^r,\varphi\beta_*q_*([S^{4r+2+m}])\ra=\la p_1^r,\varphi\beta_*([S^mT\nu_i])\ra=\\ &=\la p_1^r,\beta_*([S^mT\nu_i])\smallfrown S^mu\ra=\la p_1^r,b_*([M])\ra=\la b^*(p_1^r),[M]\ra=\\ &=\la p_1^r(\nu_i),[M]\ra=\ol p_1^r[M] \end{alignat*} and this is what we wanted to prove. 
\end{prf} \begin{thm}\label{hmor} For all $r\ge1$ we have $$\Cob_{2r+1}^\SO(n,1)\congc2\Cob_{2r}^\SO(n,1)\congc{\le2r+1}\displaystyle\bigoplus_{i=0}^r\pi^s(n-4i).$$ \end{thm} \begin{prf} By remark \ref{morform} we know that in the homotopy long exact sequence of the key fibration $$X_{2r+1}^\SO\xra{X_{2r}^\SO}\Gamma T\tilde\xi_{2r+1}^\SO$$ the homotopy groups of $\Gamma T\tilde\xi_{2r+1}^\SO$ vanish modulo $\CC_2$, hence the first $\CC_2$-isomorphism is proved. The rest is completely analogous to the proof of theorem \ref{cuspthm}: We have a sequence of isomorphisms \begin{alignat*}2 \Cob_{2r}^\SO(n,1)&\cong\pi_{n+1}(X_{2r}^\SO)\toverset{^*}{\congc2}\pi_{n+1}(\tilde X_{2r}^\SO)\toverset{^{**}}{\congc{\le2r+1}}\pi_{n+1}(\Gamma S^{4r+1})\oplus\pi_{n+1}(X_{2r-1}^\SO)\toverset{^{***}}{\congc{\le2r-1}}\\ &\toverset{^{***}}{\congc{\le2r-1}}\pi^s(n-4r)\oplus\displaystyle\bigoplus_{i=0}^{r-1}\pi^s(n-4i), \end{alignat*} where $^*$ follows again from remark \ref{morform} and $^{***}$ is obtained by an induction on $r$. To prove $^{**}$, we use lemma \ref{lr} and a direct analogue of lemma \ref{lspl} that yields an algebraic $l(r)$-splitting of the bundle $\tilde X_{2r}^\SO\xra{X_{2r-1}^\SO}\Gamma S^{4r+1}$. It remains to note that we can use Arlettaz's theorem \ref{arlthm} with the substitutions $V:=\CP^\infty$, $l:=2$, $m:=4r+2$ and get that $l(r)$ is a divisor of $\rho_1\ldots\rho_{4r-1}$, where $\rho_i$ is the exponent of $\pi^s(i)$. But Serre proved in \cite{serre} that $\rho_i$ is not divisible by the prime $p$ if $p>\frac i2+1$, which means that $l(r)$ does not have prime divisors greater than $2r+1$, hence we have the $\CC_{\le2r+1}$-isomorphism as claimed. \end{prf} \section{Prim cobordisms}\label{prico} In the previous section we saw that the cobordism groups of $1$-codimensional Morin maps are closely related to the stable homotopy groups of spheres.
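For a sample computation, take $r=1$ and $n=7$ in theorem \ref{hmor} and substitute the standard values $\pi^s(7)\cong\Z_{240}$ and $\pi^s(3)\cong\Z_{24}$ of the stable stems (which we take from the tables in \cite{toda}): $$\Cob_3^\SO(7,1)\congc2\Cob_2^\SO(7,1)\congc{\le3}\pi^s(7)\oplus\pi^s(3)\cong\Z_{240}\oplus\Z_{24}.$$ Since $240=2^4\cdot3\cdot5$ and $24=2^3\cdot3$, this determines the $p$-primary parts of $\Cob_3^\SO(7,1)$ for every prime $p\ge5$: we have $\big[\Cob_3^\SO(7,1)\big]_5\cong\Z_5$ and $\big[\Cob_3^\SO(7,1)\big]_p=0$ for all $p\ge7$.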
In this section we establish even closer connections of this kind; the main tool will be the so-called singularity spectral sequence for prim maps defined as follows. \begin{defi}\label{sss} Let $L$ be a stable linear group, $k$ a positive integer and $r\in\N\cup\{\infty\}$. The filtration $\ol X_0^L\subset\ol X_1^L\subset\ldots\subset\ol X_r^L$ of the classifying space for the cobordisms of $k$-codimensional prim maps defines a spectral sequence in homotopy groups with first page $$\ol E^1_{i,j}=\pi_{i+j+k}(\ol X_i^L,\ol X_{i-1}^L)\cong\pi_{i+j+k}(\Gamma T\tilde\zeta_i^L)\cong\pi^s_{i+j+k}(T\tilde\zeta_i^L).$$ This spectral sequence converges to $\pi_{*+k}(\ol X_r^L)\cong\Prim_r^L(*,k)$, that is, $\underset{i+j=n}\bigoplus\ol E^\infty_{i,j}$ is associated to $\Prim_r^L(n,k)$. \end{defi} Similarly to the previous section, we will first consider the case of cooriented $1$-codimensional maps. We give a geometric description of the singularity spectral sequence in this case and compute it (modulo some torsion) for cooriented codimension-$1$ prim fold and cusp maps. After this we will see that in some cases (namely for $1$-codimensional cooriented prim maps and $3$-codimensional prim maps with spin normal structures) the singularity spectral sequence can be identified with spectral sequences arising from filtrations of projective spaces and starting with stable homotopy groups of spheres. \subsection{Geometric description}\label{geode} This subsection mainly contains remarks about how we can translate the singularity spectral sequence to cobordisms and transformations between them. In the following we consider prim maps of codimension $1$ and with oriented virtual normal bundles.
\begin{rmk} $~$ \begin{enumerate} \item For maps of this sort, the universal target bundle in the global normal form of the $\Sigma^{1_r}$ singularity is easy to describe, namely it is $\tilde\zeta_r^\SO=(r+1)\gamma_1^\SO\oplus\varepsilon^r=\varepsilon^{2r+1}$, hence its Thom space is $T\tilde\zeta_r^\SO=S^{2r+1}$ and the key fibration has the form $$\ol X_r^\SO\xra{\ol X_{r-1}^\SO}\Gamma S^{2r+1}.$$ This also implies that the starting page of the singularity spectral sequence is $$\ol E^1_{i,j}=\pi_{i+j+1}(\ol X_i^\SO,\ol X_{i-1}^\SO)\cong\pi^s_{i+j+1}(S^{2i+1})=\pi^s(j-i).$$ \item It is not hard to see that a relative homotopy group $\pi_n(\ol X_r^\SO,\ol X_{r-1}^\SO)$ can be identified with a relative cobordism group, namely the cobordism group of such prim maps $$f\colon(M^n,\partial M^n)\to(\R^n\times\R_+,\R^n)$$ for which $f$ is a cooriented $\tau_r$-map and $\partial f:=f|_{\partial M}$ has at most $\Sigma^{1_{r-1}}$ singularities. The cobordism of two such maps can be defined directly analogously to definition \ref{cobtau} (by requiring the cobordism to be a prim $(\tau_{r-1},\SO)$-cobordism between the boundaries and a prim $(\tau_r,\SO)$-cobordism altogether) and a complete analogue of Szűcs's Pontryagin--Thom type construction from chapter \ref{classp} gives that this cobordism group is indeed isomorphic to $\pi_n(\ol X_r^\SO,\ol X_{r-1}^\SO)$. \end{enumerate} \end{rmk} Recall that for any fibration $\pi\colon E\xra FB$, the isomorphism $\pi_n(E,F)\cong\pi_n(B)$ is induced by the map $\pi$. Now the two observations in the remark above together yield a nice geometric description of the isomorphism $\pi_{n+1}(\ol X_r^\SO,\ol X_{r-1}^\SO)\cong\pi_{n+1}(\Gamma S^{2r+1})\cong\pi^s(n-2r)$.
This is induced by the key fibration, so it assigns to the cobordism class of a map $$f\colon(M^n,\partial M^n)\to(\R^n\times\R_+,\R^n)$$ the cobordism class of its restriction to the top singularity stratum (which is disjoint from $\partial M$), that is, the map $$f|_{\Sigma^{1_r}(f)}\colon\Sigma^{1_r}(f)\imto\R^{n+1}.$$ More precisely, this latter cobordism class is one in $\Imm^{\tilde\zeta_r^\SO}(\R^{n+1})$ (see remark \ref{keyrmk} or theorem \ref{terpthm}), which in the present case is the cobordism group of immersions of $(n-2r)$-dimensional manifolds with normal framings, i.e. the stable homotopy group of spheres $\pi^s(n-2r)$. \begin{rmk}\label{relcob} The above considerations also mean that the relative cobordism class of such a map $f$ is completely determined by the cobordism class of its restriction to $\Sigma^{1_r}(f)$. \end{rmk} Now that we know the geometric meaning of the groups in the first page of the singularity spectral sequence, we are ready to describe the differentials and what transformations they stand for in the cobordism groups. First consider the differential $d^1$, which has the form $$d^1_{i,j}\colon\pi_{i+j+1}(\ol X_i^\SO,\ol X_{i-1}^\SO)\cong\ol E^1_{i,j}\to\ol E^1_{i-1,j}\cong\pi_{i+j}(\ol X_{i-1}^\SO,\ol X_{i-2}^\SO)\cong\pi^s(j-i+1).$$ \begin{rmk}\label{diffrmk} Since $d^1_{i,*}$ is the boundary homomorphism $\partial$ in the homotopy long exact sequence of the triple $(\ol X_i^\SO,\ol X_{i-1}^\SO,\ol X_{i-2}^\SO)$ composed with the map induced by the key fibration $\ol\chi_{i-1}^\SO$, its meaning for cobordisms is that it assigns to the relative cobordism class of a map $f\colon(M^{i+j},\partial M^{i+j})\to(\R^{i+j}\times\R_+,\R^{i+j})$ the cobordism class of the restriction $f|_{\Sigma^{1_{i-1}}(\partial f)}$ (where we again put $\partial f:=f|_{\partial M}$). However, another description of the boundary map will be more useful, as we will soon see. \end{rmk} The following will be analogous to the proof of lemma \ref{d8}. 
Again represent an element of $\pi_{i+j+1}(\ol X_i^\SO,\ol X_{i-1}^\SO)$ by a map $f\colon(M^{i+j},\partial M^{i+j})\to(\R^{i+j}\times\R_+,\R^{i+j})$ as before and fix closed tubular neighbourhoods $T\subset M$ of $\Sigma^{1_i}(f)$ and $\tilde T\subset\R^{i+j}\times\R_+$ of $f(\Sigma^{1_i}(f))$ such that $f$ maps $(T,\partial T)$ to $(\tilde T,\partial\tilde T)$. Put $N:=\Sigma^{1_{i-1}}(f|_{\ol{M\setminus T}})$ and decompose its boundary as $\partial N=\partial_0N\sqcup\partial_1N$ with $\partial_0N=\Sigma^{1_{i-1}}(\partial f)$ and $\partial_1N=\Sigma^{1_{i-1}}(f|_{\partial T})$. Since $f|_{\ol{M\setminus T}}$ has no $\Sigma^{1_i}$ singularities, we have that $f|_N$ is a framed immersion to $\ol{(\R^{i+j}\times\R_+)\setminus\tilde T}$. Note that the immersion $g:=f|_{\partial_1N}$ can be canonically equipped with a normal framing in $\R^{i+j+1}$ as well, since we can just add the image of the inwards normal vector field of $\partial T\subset T$ to the rest of the framing vectors. Thus we will think of $g$ as a framed immersion to $\R^{i+j+1}$, so it represents an element $[g]$ in $\pi^s(j-i+1)$. \begin{prop}\label{d1f} With the above notations we have $d^1_{i,j}([f])=[g]$. \end{prop} \begin{prf} We may assume that $f$ maps to $\R^{i+j}\times[0,1]$, and so $g$ is a framed immersion of a $(j-i+1)$-manifold to the interior of $\R^{i+j}\times[0,1]$. Then the multi-compression theorem \ref{mct} (more precisely its analogue for immersions; see corollary \ref{ict}) yields a regular homotopy of $g$ that turns one of the framing vector fields parallel to $T[0,1]$ and by combining this with the projection to $\R^{i+j}\times\{1\}$, we get a framed immersion $\partial_1N\imto\R^{i+j}\times\{1\}$. This deformation can be identified with the suspension isomorphism for the stable homotopy groups of spheres, hence this immersion represents the same element of $\pi^s(j-i+1)$ as $g$.
But the above deformation can be extended to the whole manifold $N$, hence we get an immersion $N\imto\R^{i+j}\times[0,1]$ which maps $\partial_0N$ to $\R^{i+j}\times\{0\}$ by $f|_{\Sigma^{1_{i-1}}(\partial f)}$ and $\partial_1N$ to $\R^{i+j}\times\{1\}$ by $g$ (up to regular homotopy). Hence we obtain a cobordism connecting $f|_{\Sigma^{1_{i-1}}(\partial f)}$ and $g$, which is equivalent to our proposition by remark \ref{diffrmk}. \end{prf} All of the above describes what happens to the cobordism groups geometrically in the first page of the singularity spectral sequence. In the rest of this subsection we restrict to the case of cusp maps and give a similar description of the differential $d^2$ as well. Note that now the spectral sequence only has three non-zero columns, so the only non-zero part of $d^2$ is the collection of the maps $$d^2_{2,n-2}\colon\ol E^2_{2,n-2}\to\ol E^2_{0,n-1}.$$ We start with the analogue of remark \ref{diffrmk}. \begin{rmk}\label{diffrmk2} Since no differential maps to the second column, we have $\ol E^2_{2,n-2}=\ker d^1_{2,n-2}\subset\pi_{n+1}(\ol X_2^\SO,\ol X_1^\SO)$. The homomorphism $d^2_{2,n-2}$ is defined for any element $\alpha\in\ol E^2_{2,n-2}$ as follows: Since $d^1(\alpha)=0$, the element $\partial(\alpha)\in\pi_n(\ol X_1^\SO)$ is such that $j_\#(\partial(\alpha))$ vanishes in the homotopy long exact sequence $$\ldots\to\pi_{n+1}(\ol X_1^\SO,\ol X_0^\SO)\xra{\partial'}\pi_n(\ol X_0^\SO)\xra{i_\#}\pi_n(\ol X_1^\SO)\xra{j_\#}\pi_n(\ol X_1^\SO,\ol X_0^\SO)\to\ldots$$ Hence $\partial(\alpha)$ is in the image of $i_\#$, i.e. we have $\partial(\alpha)=i_\#(\beta)$ where the coset $$[\beta]\in\pi_n(\ol X_0^\SO)/\ker i_\#=\pi_n(\ol X_0^\SO)/\im\partial'=\ol E^2_{0,n-1}$$ is uniquely defined for $\alpha$. Now the definition is $d^2(\alpha):=[\beta]$.
Now turning to the geometric meaning of this definition, if we represent $\alpha\in\ol E^2_{2,n-2}\subset\pi_{n+1}(\ol X_2^\SO,\ol X_1^\SO)$ by a map $f\colon(M^n,\partial M^n)\to(\R^n\times\R_+,\R^n)$, then the vanishing of $d^1(\alpha)$ means that $f|_{\Sigma^{1,0}(\partial f)}\colon\Sigma^{1,0}(\partial f)\to\R^n$ is null-cobordant as a framed immersion. This is equivalent to saying that the classifying map $S^n\to\ol X_1^\SO$ composed with the key fibration $\ol\chi_1^\SO\colon\ol X_1^\SO\to\Gamma S^3$ is null-homotopic, which happens precisely if the classifying map itself can be deformed into a fibre $\ol X_0^\SO$ of $\ol\chi_1^\SO$. Since homotopy classes of maps to $\ol X_0^\SO$ correspond to cooriented cobordism classes of immersions, this deformation gives a prim $(\tau_1,\SO)$-cobordism between the prim $(\tau_1,\SO)$-map $\partial f$ and an immersion $g$ of an $(n-1)$-manifold to $\R^n$. Now the element $[g]\in\pi^s(n-1)$ is not uniquely defined for $\alpha$, but its coset in $\pi^s(n-1)/\im d^1$ is; this coset is defined to be $d^2(\alpha)$. \end{rmk} In order to understand the differential $d^2$ better, we will need the ring structure on the direct sum $$\GG:=\bigoplus_{i=0}^\infty\pi^s(i),$$ which we now recall. \begin{defi} The composition product operation $\circ$ on $\GG$ is defined for two elements $\alpha\in\pi^s(i)$ and $\beta\in\pi^s(j)$ in the following way: Represent $\alpha$ by a map $a\colon S^{m+j+i}\to S^{m+j}$ and $\beta$ by a map $b\colon S^{m+j}\to S^m$ and put $\alpha\circ\beta:=[a\circ b]$, that is, $\alpha\circ\beta\in\pi^s(i+j)$ is represented by the map $a\circ b\colon S^{m+i+j}\to S^m$. \end{defi} \begin{rmk} This is a well-defined multiplication that makes $\GG$ a ring. Moreover, this multiplication is also skew-commutative, i.e. for homogeneous elements $\alpha,\beta\in\GG$ we have $\alpha\circ\beta=(-1)^{\dim\alpha\cdot\dim\beta}\beta\circ\alpha$ (see \cite{toda}).
\end{rmk} \begin{rmk}\label{.com} The composition product can also be described in the language of cobordisms of immersed submanifolds with normal framings (which also represent the stable homotopy groups of spheres): If $\alpha\in\pi^s(i)$ and $\beta\in\pi^s(j)$ are represented respectively by $(A^i\imto\R^{m+i+j},U)$ and $(B^j\imto\R^{m+j},V)$, where $U=(u_1,\ldots,u_{m+j})$ and $V=(v_1,\ldots,v_m)$ are the normal framings of the corresponding immersions, then the framing $U$ identifies a tubular neighbourhood of $A$ with an immersion of $A\times\R^{m+j}$. By immersing $B$ to each fibre of this tubular neighbourhood in the natural way, we obtain a framed immersion of $A\times B$ into $\R^{m+i+j}$; this represents $\alpha\circ\beta$. \end{rmk} We will also need the normal forms of $1$-codimensional fold and cusp singularities, which are respectively \begin{alignat*}2 &\sigma_1\colon(D^2,0)\to(D^3,0);~(x,y)\mapsto(x,xy,y^2),\\ &\sigma_2\colon(D^4,0)\to(D^5,0);~(x_1,x_2,x_3,y)\mapsto(x_1,x_2,x_3,x_1y+x_2y^2,x_3y+y^3) \end{alignat*} according to Morin \cite{mor} (where we choose the source disks to be the preimages of the target disks). \begin{rmk} The maps $\sigma_1$ and $\sigma_2$ are cooriented and prim, hence they represent elements of the groups $\pi^s(0)\cong\ol E^1_{i,i}$. \end{rmk} \begin{prop}\label{clajm} For any $\alpha\in\pi^s(n)$ we have \begin{enumerate} \item\label{zsa} $d^1_{1,n+1}(\alpha)=d^1_{1,1}([\sigma_1])\circ\alpha$ and $d^1_{2,n+2}(\alpha)=d^1_{2,2}([\sigma_2])\circ\alpha=0$, \item\label{he} $d^2_{2,n+2}(\alpha)$ is the coset of $d^2_{2,2}([\sigma_2])\circ\alpha$ whenever $\alpha\in\ker d^1_{2,n+2}$. \end{enumerate} \end{prop} Note that in order for \ref{he} to make sense, we first need to show that $d^2_{2,2}([\sigma_2])$ is meaningful, i.e. that $d^1_{2,2}([\sigma_2])$ vanishes; this follows from \ref{zsa}.
Moreover, we will also need the ambiguity of $d^2_{2,2}([\sigma_2])\circ\alpha$ (which is $(\im d^1_{1,3})\circ\pi^s(n)$) to be contained in the ambiguity of $d^2_{2,n+2}(\alpha)$ (which is $\im d^1_{1,n+3}$); this holds because \ref{zsa} implies $$(\im d^1_{1,3})\circ\pi^s(n)=(\im d^1_{1,1})\circ\pi^s(2)\circ\pi^s(n)\subset(\im d^1_{1,1})\circ\pi^s(n+2)=\im d^1_{1,n+3}.$$ \medskip\begin{prf} \emph{Proof of \ref{zsa}.}\enspace\ignorespaces We will first prove the equality $d^1_{2,n+2}(\alpha)=d^1_{2,2}([\sigma_2])\circ\alpha$; the equality $d^1_{1,n+1}(\alpha)=d^1_{1,1}([\sigma_1])\circ\alpha$ can be obtained completely analogously, so we omit its proof. Let $f\colon(M^{n+4},\partial M^{n+4})\to(\R^{n+4}\times\R_+,\R^{n+4})$ be a representative of the element $\alpha\in\pi^s(n)\cong\pi_{n+5}(\ol X_2^\SO,\ol X_1^\SO)$. Recall proposition \ref{d1f} (and the construction that precedes it): $d^1_{2,n+2}(\alpha)$ can be represented by the framed immersion $f|_{\Sigma^{1,0}(f|_{\partial T})}$, where $T\subset M$ is a tubular neighbourhood of $\Sigma^{1,1,0}(f)$. Now the framed immersion $f|_{\Sigma^{1,1,0}(f)}$ represents $\alpha$ and the restriction of $f|_T$ to any fibre of $T$ can be identified with $\sigma_2$ (here we use theorem \ref{univ}), thus $f$ on the $\Sigma^{1,0}$-stratum on the boundary of this fibre represents $d^1_{2,2}([\sigma_2])$. Now taking $f|_{\Sigma^{1,0}(f)}$ on the boundary $\partial T$, we obtain a representative of $d^1_{2,2}([\sigma_2])\circ\alpha$ by remark \ref{.com}, hence the equality is proved. Now the only thing left to prove is that $d^1_{2,2}([\sigma_2])$ vanishes.
The first page of the spectral sequence now looks as follows:\\ \vspace{-.5cm} \begin{center} \begin{tikzpicture} \matrix (m) [matrix of math nodes,nodes in empty cells,nodes={minimum width=5ex,minimum height=.5ex,text depth=0ex,inner sep=0pt,outer sep=0pt,anchor=base},column sep=5ex,row sep=1.7ex]{ & & & & \\ & & & & \\ \strut ~~~~~2 & \pi^s(2)\cong\Z_2 & \pi^s(1)\cong\Z_2 & \pi^s(0)\cong\Z & \strut\\ & & & & \\ \strut ~~~~~1 & \pi^s(1)\cong\Z_2 & \pi^s(0)\cong\Z & 0 & \strut\\ & & & & \\ \strut ~~~~~0 & \pi^s(0)\cong\Z & 0 & 0 & \strut\\ \strut & 0 & 1 & 2 & \strut \\}; \begin{scope}[transparency group] \draw[alt double,alt double distance=.7mm] ({$(m-1-1)!.4!(m-1-2)$} |- m-1-1.east) -- ({$(m-8-1)!.4!(m-8-2)$} |- m-8-1.south east); \draw[alt double,alt double distance=.7mm] (m-8-1.north) -- (m-8-5.north west); \end{scope} \draw ({$(m-1-2)!.5!(m-1-3)$} |- m-1-2.east) -- ({$(m-8-2)!.5!(m-8-3)$} |- m-8-2.south east); \draw ({$(m-1-3)!.5!(m-1-4)$} |- m-1-3.east) -- ({$(m-8-3)!.5!(m-8-4)$} |- m-8-3.south east); \draw ({$(m-1-4)!.65!(m-1-5)$} |- m-1-4.east) -- ({$(m-8-4)!.65!(m-8-5)$} |- m-8-4.south east); \draw (m-6-1.north) -- (m-6-5.north west); \draw (m-4-1.north) -- (m-4-5.north west); \draw (m-2-1.north) -- (m-2-5.north west); \path[->,font=\scriptsize] (m-3-4) edge node [above] {$~d^1_{2,2}$} (m-3-3); \path[->,font=\scriptsize] (m-3-3) edge node [above] {$~~d^1_{1,2}$} (m-3-2); \path[->,font=\scriptsize] (m-5-3) edge node [above] {$~~d^1_{1,1}$} (m-5-2); \end{tikzpicture} \end{center} In what follows, we refer to Toda \cite[chapter XIV]{toda} for information on the stable homotopy groups of spheres, the composition product and the standard names of the generators. For the generator $\iota\in\pi^s(0)$, the element $d^1_{1,1}(\iota)$ is represented by $\partial\sigma_1$, which is the immersion of a circle to the sphere with one double point (see figure \ref{kep7}). This is the non-trivial element $\eta\in\pi^s(1)$, hence $d^1_{1,1}$ is epimorphic.
Now we have $$d^1_{1,2}(\eta)=d^1_{1,1}(\iota)\circ\eta=\eta\circ\eta\ne0\in\pi^s(2),$$ hence $d^1_{1,2}$ is an isomorphism. But since $d^1_{1,2}\circ d^1_{2,2}$ is trivial, this implies the vanishing of the differential $d^1_{2,2}$. \medskip\noindent\emph{Proof of \ref{he}.}\enspace\ignorespaces This will be quite similar to the above. First apply remark \ref{diffrmk2} to the map $\sigma_2\colon D^4\to D^5$: the boundary $\partial\sigma_2$ is cobordant to an immersion, i.e. there is a manifold $W^4$ with boundary $\partial W=S^3\sqcup N^3$ and a prim $(\tau_1,\SO)$-map $G\colon W\to S^4\times[0,1]$ which restricts to the boundary as $G|_{S^3}=\partial\sigma_2$ and with $G|_N$ being an immersion. We identify $S^4\times[0,1]$ with $D^5_2\setminus\tightoverset{_\circ~}{D^5}$ (where $D^5_2$ is the $5$-disk with radius $2$) and put $$\tilde\sigma_2:=\sigma_2\cup G\colon D^4\usqcup{S^3}W\to D^5_2.$$ Then clearly $d^2_{2,2}([\sigma_2])$ is represented by $\partial\tilde\sigma_2=\tilde\sigma_2|_N$. Now if we again represent $\alpha\in\pi^s(n)$ by the map $f$ as above, then again the restriction of $f|_T$ to each fibre of the tubular neighbourhood $T$ can be identified with $\sigma_2$, hence the whole $f|_T\colon T\to\tilde T$ can be identified with $f|_{\Sigma^{1,1,0}(f)}\times\sigma_2\colon\Sigma^{1,1,0}(f)\times D^4\to f(\Sigma^{1,1,0}(f))\times D^5$. Put $$\tilde f:=f|_{\Sigma^{1,1,0}(f)}\times\tilde\sigma_2\colon\Sigma^{1,1,0}(f)\times\left(D^4\usqcup{S^3}W\right)\to f(\Sigma^{1,1,0}(f))\times D^5_2$$ and denote by $T_2$ and $\tilde T_2$ the source and target spaces of $\tilde f$. Here $\tilde T_2\subset\R^{n+4}\times\R_+$ is a tubular neighbourhood of $f(\Sigma^{1,1,0}(f))$ that contains $\tilde T$ and $\partial\tilde f$ is a $1$-codimensional cooriented immersion to $\partial\tilde T_2$, hence it can be endowed with a non-vanishing normal vector field in $\tilde T_2$ and the inwards normal vector field of $\tilde T_2$. 
This means that $\partial\tilde f$ is a framed immersion that clearly represents $d^2_{2,2}([\sigma_2])\circ\alpha$. We now use the immersion version of the multi-compression theorem \ref{mct} (see corollary \ref{ict}) on $\partial\tilde f$ to turn the normal vector field of $\tilde T_2$ restricted to $\partial\tilde f$ parallel to the direction of $T\R_+$ in $\R^{n+4}\times\R_+$. After this, the projection of the image of this immersion to $\R^{n+4}$ yields an immersion to $\R^{n+4}$, which is regular homotopic to $\partial\tilde f$ and in particular it also represents $d^2_{2,2}([\sigma_2])\circ\alpha$. Now proposition \ref{ct+} ensures that this regular homotopy can be chosen such that its ``time'' derivative vectors are nowhere tangent to the current image of the source manifold $\partial T_2$. Hence we can glue $T_2$ to $\partial T_2\times[0,1]$ along one common boundary part and map the obtained manifold to $\R^{n+4}\times\R_+$ by the union of $\tilde f$ and the above regular homotopy of $\partial\tilde f$. This way we get a map with the same cusp set as $f$, which means it also represents $\alpha$ (by remark \ref{relcob}), and such that its boundary immersion, which is by definition a representative of $d^2_{2,n+2}(\alpha)$, represents $d^2_{2,2}([\sigma_2])\circ\alpha$. This is what we wanted to prove. \end{prf} \subsection{Fold and cusp prim maps} The previous subsection gave a complete geometric description of the singularity spectral sequence for codimension-$1$ cooriented prim fold and cusp maps. Here we give (almost complete) computations on these cobordism groups, first for the fold case, then for the cusp case. We will use that for all $i$ the generator $\eta\in\pi^s(1)\cong\Z_2$ and the generator $\alpha_1\in\pi^s(3)\cong\Z_3\oplus\Z_8$ of the $\Z_3$ part define homomorphisms \begin{alignat*}2 \eta_i\colon\pi^s(i)\to\pi^s(i+1);&~\beta\mapsto\eta\circ\beta,\\ (\alpha_1)_i\colon\pi^s(i)\to\pi^s(i+3);&~\beta\mapsto\alpha_1\circ\beta. 
\end{alignat*} \begin{thm}\label{primfold} $\Prim_1^\SO(n,1)$ is such that there is a $\CC_2$-isomorphism $$\Prim_1^\SO(n,1)\congc2\pi^s(n)\oplus\pi^s(n-2)$$ and an exact sequence $$0\to\coker\eta_{n-1}\to\Prim_1^\SO(n,1)\to\ker\eta_{n-2}\to0.$$ \end{thm} \begin{prf} The $\CC_2$-isomorphism can be proved quite similarly to the $\CC_{2,3}$-isomorphism in theorem \ref{cuspthm} and it will also follow from theorem \ref{ploc} later, so we only sketch its proof. The singularity spectral sequence for fold maps degenerates modulo $\CC_2$, since by proposition \ref{clajm} the differential $d^1_{1,n+1}$ is just the multiplication by the order-$2$ element $\eta=d^1_{1,1}([\sigma_1])$. The fact that the cobordism group $\Prim_1^\SO(n,1)$ is a direct sum modulo $\CC_2$ is due to a $2$-splitting of the key fibration $$\ol\chi_1^\SO\colon\ol X_1^\SO\xra{\ol X_0^\SO}\Gamma S^3.$$ The exact sequence is just part of the homotopy long exact sequence of the key fibration, using that the boundary homomorphism in this long exact sequence is $d^1_{1,n+1}$. By proposition \ref{clajm}, this homomorphism is the multiplication by $d^1_{1,1}([\sigma_1])=\eta$, i.e. the map $\eta_n$. \end{prf} To obtain an analogous theorem for cusp maps as well, we need a bit more information on the differential $d^2$ of the singularity spectral sequence. Since $d^1_{1,3}$ maps the generator $\eta\circ\eta\in\pi^s(2)\cong\Z_2$ to $d^1_{1,1}([\sigma_1])\circ\eta\circ\eta=\eta\circ\eta\circ\eta\ne0\in\pi^s(3)$ (see proposition \ref{clajm} and \cite[chapter XIV]{toda}), we have $\ol E^2_{0,3}\cong\Z_{24}/\Z_2=\Z_{12}$. 
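Spelling out this last quotient: by the relation $\eta\circ\eta\circ\eta=12\nu$ (where $\nu$ denotes a generator of $\pi^s(3)\cong\Z_{24}$; see \cite[chapter XIV]{toda}), the image of $d^1_{1,3}=\eta_2$ is the order-$2$ subgroup $\{0,12\nu\}\subset\pi^s(3)$, hence $$\ol E^2_{0,3}=\pi^s(3)/\im d^1_{1,3}\cong\Z_{24}/\{0,12\nu\}\cong\Z_{12}.$$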
The second page now has the form \vspace{-.3cm} \begin{center} \begin{tikzpicture} \matrix (m) [matrix of math nodes,nodes in empty cells,nodes={minimum width=5ex,minimum height=.5ex,text depth=0ex,inner sep=0pt,outer sep=0pt,anchor=base},column sep=5ex,row sep=1.7ex]{ & & & & \\ & & & & \\ \strut ~~~~~3 & \Z_{12} & 0 & \Z_2 & \strut\\ & & & & \\ \strut ~~~~~2 & 0 & 0 & \Z & \strut\\ & & & & \\ \strut ~~~~~1 & 0 & \Z & 0 & \strut\\ & & & & \\ \strut ~~~~~0 & \Z & 0 & 0 & \strut\\ \strut & 0 & 1 & 2 & \strut \\}; \begin{scope}[transparency group] \draw[alt double,alt double distance=.7mm] ({$(m-1-1)!.45!(m-1-2)$} |- m-1-1.east) -- ({$(m-10-1)!.45!(m-10-2)$} |- m-10-1.south east); \draw[alt double,alt double distance=.7mm] (m-10-1.north) -- (m-10-5.north west); \end{scope} \draw ({$(m-1-2)!.5!(m-1-3)$} |- m-1-2.east) -- ({$(m-10-2)!.5!(m-10-3)$} |- m-10-2.south east); \draw ({$(m-1-3)!.5!(m-1-4)$} |- m-1-3.east) -- ({$(m-10-3)!.5!(m-10-4)$} |- m-10-3.south east); \draw ({$(m-1-4)!.5!(m-1-5)$} |- m-1-4.east) -- ({$(m-10-4)!.5!(m-10-5)$} |- m-10-4.south east); \draw (m-8-1.north) -- (m-8-5.north west); \draw (m-6-1.north) -- (m-6-5.north west); \draw (m-4-1.north) -- (m-4-5.north west); \draw (m-2-1.north) -- (m-2-5.north west); \path[->,font=\scriptsize] (m-5-4) edge node [above,pos=.27] {$d^2_{2,2}$} (m-3-2); \end{tikzpicture} \end{center} \begin{lemma}\label{teritettloboleny} The differential $d^2_{2,2}\colon\Z\to\Z_{12}$ maps the generator $\iota\in\Z$ to an element of order $6$. \end{lemma} \begin{prf} Since $d^1_{2,2}\colon\ol E^1_{2,2}=\pi_5(\ol X_2^\SO,\ol X_1^\SO)\to\pi_4(\ol X_1^\SO,\ol X_0^\SO)=\ol E^1_{1,2}$ was trivial, we have $\ol E^1_{2,2}=\ol E^2_{2,2}$. 
Consider the following commutative diagram with exact row and column and with $\partial$ being the boundary map in the homotopy long exact sequence of the key fibration $\ol\chi_1^\SO$: $$\xymatrix{ && \pi_4(\ol X_0^\SO)\big/\im\partial \\ \pi_5(\ol X_2^\SO)\ar[r]^(.45){j_\#} & \pi_5(\ol X_2^\SO,\ol X_1^\SO)\ar[ur]^{d^2}\ar[r]\ar[dr]_{d^1} & \pi_4(\ol X_1^\SO)\ar[d]\ar@{<-<}[u] \\ && \pi_4(\ol X_1^\SO,\ol X_0^\SO) }$$ A diagram chase shows that the generator $\iota\in\pi_5(\ol X_2^\SO,\ol X_1^\SO)\cong\Z$ (represented by the map $\sigma_2$) is such that $d^2_{2,2}(\iota)$ has the same order as the group $\coker j_\#$. The latter order is the minimal positive algebraic number of cusps that a cooriented prim cusp map $f\colon M^4\to\R^5$ can have, since $j_\#$ assigns to such a map $f$ the number $\#\Sigma^{1,1,0}(f)$. This minimal number is known to be $6$ from \cite{prim}. \end{prf} Combining this lemma with proposition \ref{clajm} yields the following. \begin{crly} The differential $d^2_{2,n+2}$ acts on the $3$-primary component as the homomorphism $(\alpha_1)_n$ up to sign. \end{crly} \begin{thm}\label{primcusp} $\Prim_2^\SO(n,1)$ is such that there is a $\CC_{2,3}$-isomorphism $$\Prim_2^\SO(n,1)\congc{2,3}\pi^s(n)\oplus\pi^s(n-2)\oplus\pi^s(n-4)$$ and an exact sequence of $3$-primary parts $$0\to[\coker(\alpha_1)_{n-3}]_3\oplus[\pi^s(n-2)]_3\to\big[\Prim_2^\SO(n,1)\big]_3\to[\ker(\alpha_1)_{n-4}]_3\to0.$$ \end{thm} \begin{prf} Again as in theorem \ref{primfold}, the $\CC_{2,3}$-isomorphism can be proved analogously to the $\CC_{2,3}$-isomorphism in theorem \ref{cuspthm} and theorem \ref{ploc} will also imply it, so we only sketch its proof. The singularity spectral sequence for cusp maps degenerates modulo $\CC_{2,3}$, since by proposition \ref{clajm} and lemma \ref{teritettloboleny} the differential $d^2_{2,n+2}$ is the multiplication by an order-$6$ element.
The fact that the cobordism group $\Prim_2^\SO(n,1)$ is a direct sum modulo $\CC_{2,3}$ is due to a $6$-splitting of the key fibration $$\ol\chi_2^\SO\colon\ol X_2^\SO\xra{\ol X_1^\SO}\Gamma S^5.$$ Turning to the exact sequence of $3$-primary parts, we first note that the singularity spectral sequence stabilises at the third page and the $3$-primary part of the last differential $d^2$ can be identified up to sign with the multiplication by $\alpha_1$. Thus we have $$\big[\ol E^\infty_{0,j}\big]_3=[\coker(\alpha_1)_{j-3}]_3,~~~~\big[\ol E^\infty_{1,j}\big]_3=[\pi^s(j-1)]_3~~~~\text{and}~~~~\big[\ol E^\infty_{2,j}\big]_3=[\ker(\alpha_1)_{j-2}]_3.$$ By general properties of spectral sequences, if we define the groups \begin{alignat*}2 &F_{2,n}:=\Prim_2^\SO(n,1)=\pi_{n+1}(\ol X_2^\SO),\\ &F_{1,n}:=\im(\Prim_1^\SO(n,1)\to\Prim_2^\SO(n,1))=\im(\pi_{n+1}(\ol X_1^\SO)\to\pi_{n+1}(\ol X_2^\SO)),\\ &F_{0,n}:=\im(\pi^s(n)\to\Prim_2^\SO(n,1))=\im(\pi_{n+1}(\ol X_0^\SO)\to\pi_{n+1}(\ol X_2^\SO)) \end{alignat*} (with the unmarked arrows being the forgetful maps), then we have $F_{2,n}/F_{1,n}=\ol E^\infty_{2,n-2}$, $F_{1,n}/F_{0,n}=\ol E^\infty_{1,n-1}$ and $F_{0,n}=\ol E^\infty_{0,n}$. Our plan is now to show that the short exact sequence $$0\to[F_{0,n}]_3\to[F_{1,n}]_3\to[F_{1,n}/F_{0,n}]_3\to0$$ splits. If this is true, then $[F_{1,n}]_3$ has the form $[F_{0,n}]_3\oplus[F_{1,n}/F_{0,n}]_3$ and the analogous exact sequence with the indices increased can be written as $$0\to[F_{0,n}]_3\oplus[F_{1,n}/F_{0,n}]_3\to[F_{2,n}]_3\to[F_{2,n}/F_{1,n}]_3\to0.$$ Hence by substituting the definitions of the groups $F_{i,j}$ and the groups $\ol E^\infty_{i,j}$, we obtain the desired exact sequence for $\big[\Prim_2^\SO(n,1)\big]_3$. So it only remains to show that the sequence above indeed splits.
In the following commutative diagram we show $F_{0,n}$ and $F_{1,n}$ with the two left-hand squares being clear from the definitions; the right-hand column and the top and bottom rows are segments of homotopy long exact sequences and the map $s$ is the $2$-splitting of the key fibration $\ol\chi_1^\SO$ used in theorem \ref{primfold} (defined in \cite{nszt}). $$\xymatrix{ &&& \pi_{n+2}(\ol X_2^\SO,\ol X_1^\SO)\ar[d]^{\partial=d^1_{2,n-1}} \\ \pi_{n+1}(\ol X_0^\SO)\ar[r]\ar@{->>}[d] & \pi_{n+1}(\ol X_1^\SO)\ar[rr]\ar@{->>}[d] && \pi_{n+1}(\ol X_1^\SO,\ol X_0^\SO)\ar[dd]^{i_\#}\ar@/^-1pc/[ll]_s \\ F_{0,n}\ar[r] & F_{1,n}\ar[r] & F_{1,n}/F_{0,n}\ar@{-->}[dr]^a\ar@{-->}[ur]^b\ar@{-->}[ul]_c &\\ \pi_{n+1}(\ol X_2^\SO)\ar@{<-<}[u]\ar@{=}[r] & \pi_{n+1}(\ol X_2^\SO)\ar@{<-<}[u]\ar[rr]^{j_\#} && \pi_{n+1}(\ol X_2^\SO,\ol X_0^\SO) }$$ The kernel of the composition $F_{1,n}\rightarrowtail\pi_{n+1}(\ol X_2^\SO)\xra{j_\#}\pi_{n+1}(\ol X_2^\SO,\ol X_0^\SO)$ is clearly $\ker j_\#\cap F_{1,n}$. But $\ker j_\#$ is the image of $\pi_{n+1}(\ol X_0^\SO)$, which is $F_{0,n}$, thus the map $j_\#$ uniquely defines a map $a\colon F_{1,n}/F_{0,n}\to\pi_{n+1}(\ol X_2^\SO,\ol X_0^\SO)$. The boundary homomorphism $\partial$ in the right-hand column can be identified with $d^1_{2,n-1}$, which is trivial by proposition \ref{clajm}, hence $i_\#$ is injective. But $\im i_\#$ contains $\im a$ because of the commutativity of the right-hand large square and the definition of $a$. It follows that $a$ can be lifted to a map $b\colon F_{1,n}/F_{0,n}\to\pi_{n+1}(\ol X_1^\SO,\ol X_0^\SO)$. Now defining $c:=s\circ b$ and composing it with the map $\pi_{n+1}(\ol X_1^\SO)\twoheadrightarrow F_{1,n}$ yields a $2$-splitting of the short exact sequence $$0\to F_{0,n}\to F_{1,n}\to F_{1,n}/F_{0,n}\to0.$$ When considering the $3$-primary parts, this $2$-splitting becomes just a splitting and this proves the statement.
\end{prf} \begin{rmk}\label{c2ex} The singularity spectral sequence $E^*_{*,*}$ can be defined for non-prim Morin maps by a trivial modification of definition \ref{sss} and in the starting page we again have the stable homotopy groups of the Thom spaces of the universal target bundles. When considering this sequence for cooriented cusp maps, remark \ref{morform} implies that the groups $E^1_{1,j}$ are trivial modulo $\CC_2$ and the groups $E^1_{0,j}$ and $E^1_{2,j}$ again coincide with $\ol E^1_{0,j}$ and $\ol E^1_{2,j}$ respectively modulo $\CC_2$. We have thus obtained that the natural forgetful map $\ol E^1_{i,j}\to E^1_{i,j}$ (see lemma \ref{cover} and remark \ref{coverr}) induces a $\CC_2$-isomorphism for $i=0,2$ and that $E^1_{1,j}\in\CC_2$. Since the differential $d^1$ is trivial modulo $\CC_2$ in both spectral sequences, the forgetful map $\ol E^2_{i,j}\to E^2_{i,j}$ is also a $\CC_2$-isomorphism for $i=0,2$. Hence the differential $d^2$ restricted to the $3$-primary part can be identified in the two spectral sequences, and so we have $\big[E^\infty_{i,j}\big]_3\cong\big[\ol E^\infty_{i,j}\big]_3$ for $i=0,2$ and $\big[E^\infty_{1,j}\big]_3$ vanishes. Now (the proof of) the previous theorem implies the existence of a $\CC_2$-exact sequence $$0\to\coker(\alpha_1)_{n-3}\to\Cob_2^\SO(n,1)\to\ker(\alpha_1)_{n-4}\to0,$$ as claimed in theorem \ref{cuspthm}. \end{rmk} \subsection{Relation to projective spaces}\label{rps} In this subsection we give (among other things) an alternative description of the singularity spectral sequence considered so far. We will consider two types of prim maps with fixed codimensions and normal structures, namely we will investigate the groups $\Prim_r^L(n,k)$ in the following cases: \begin{enumerate} \renewcommand{\labelenumi}{\theenumi} \renewcommand{\theenumi}{\rm{(\roman{enumi})}} \item\label{cplx} $L:=\U$, $k:=1$ and in this case we put $\F:=\C$, \item\label{quat} $L:=\Sp$, $k:=3$ and in this case we put $\F:=\HH$.
\end{enumerate} \renewcommand{\labelenumi}{\theenumi} \renewcommand{\theenumi}{\rm{(\arabic{enumi})}} \begin{rmk} Here $\Prim_r^\U(n,1)$ is the cobordism group of those prim maps $f\colon M^n\to\R^{n+1}$ that have immersion lifts $i_f\colon M^n\imto\R^{n+2}$ with normal bundles induced from $\gamma_2^\U$ (where the number $2$ now stands for the real dimension of the universal bundle); see remark \ref{primrmk}. Since $\gamma_2^\U$ can be identified with $\gamma_2^\SO$, we now have $$\Prim_r^\U(n,1)=\Prim_r^\SO(n,1).$$ Similarly, since $\Sp(1)\cong\Spin(3)$, we also have $$\Prim_r^\Sp(n,3)=\Prim_r^{\Spin}(n,3).$$ \end{rmk} Throughout this subsection we fix the notation $L$, $k$ and $\F$ either as in \ref{cplx} or as in \ref{quat}. We will need the following simplification of the vector bundle $\zeta_S^r$ (see definition \ref{zetasdef}) in the present two cases, which connects it with $\gamma_{k+1}^L$, the complex or the quaternionic tautological line bundle respectively. \begin{lemma} The bundle $\zeta_S^r$ is homotopy equivalent to $\gamma_{k+1}^L|_{\FP^r}$ in the sense that there is a homotopy equivalence of base spaces $h\colon\FP^r\to S((r+1)\gamma_{k+1}^L)$ such that $h^*\zeta_S^r\cong\gamma_{k+1}^L|_{\FP^r}$. \end{lemma} \begin{prf} Consider the space $S^\infty\times S(\F^{r+1})\times\F^1$, where $S(\F^{r+1})$ is the set of unit length elements in $\F^{r+1}$. The group $S^k=S(\F^1)$ naturally acts on all three factors of this space and factoring by the diagonal action yields an $\F^1$-bundle $E$ over $B:=S^\infty\utimes{S^k}S(\F^{r+1})$. The base space $B$ is an $S(\F^{r+1})$-bundle over $S^\infty/S^k=\FP^\infty=BL(k+1)$ and it coincides with the sphere bundle $S((r+1)\gamma_{k+1}^L)$.
But we can also view $B$ as an $S^\infty$-bundle over $S(\F^{r+1})/S^k=\FP^r$, hence we have $$S((r+1)\gamma_{k+1}^L)\cong B\cong\FP^r.$$ Moreover, this homotopy equivalence takes the pullback of the tautological line bundle over $\FP^\infty$ to the tautological line bundle over $\FP^r$, since it extends to the whole orbit space $E$ which is the total space of these bundles. This is what we wanted to prove. \end{prf} \begin{thm} $\ol X_r^L\cong\Omega\Gamma\FP^{r+1}.$ \end{thm} \begin{prf} Trivial from theorem \ref{zetasthm} and the above lemma. \end{prf} This theorem has several applications; we outline a few of them. The first one is that now the filtration $\ol X_0^L\subset\ol X_1^L\subset\ldots\subset\ol X_r^L$ (for any $r$) can be identified with the filtration $\Omega\Gamma\FP^1\subset\Omega\Gamma\FP^2\subset\ldots\subset\Omega\Gamma\FP^{r+1}$. \begin{crly} The singularity spectral sequence corresponding to the homotopy groups of $\ol X_r^L$ coincides with the spectral sequence corresponding to the stable homotopy groups of $\FP^{r+1}$ after an index shift. \end{crly} In particular this spectral sequence has first page \begin{alignat*}2 \ol E^1_{i,j}&=\pi_{i+j+k}(\ol X_i^L,\ol X_{i-1}^L)\cong\pi_{i+j+k}(\Omega\Gamma\FP^{i+1},\Omega\Gamma\FP^i)\cong\pi^s_{i+j+k+1}(\FP^{i+1},\FP^i)\cong\\ &\cong\pi^s_{i+j+k+1}(S^{(k+1)(i+1)})=\pi^s(j-ki).
\end{alignat*} \begin{ex} In the case of \ref{cplx} the first page $\ol E^1_{i,j}$ is such that the groups $\ol E^1_{i,i+n}$ are all $\pi^s(n)$ and the differentials for small indices are shown in the following table (see \cite{stabhom} and \cite{nehez}): \vspace{-.3cm} \begin{center} \begin{tikzpicture} \matrix (m) [matrix of math nodes,nodes in empty cells,nodes={minimum width=5ex,minimum height=.5ex,text depth=0ex,inner sep=0pt,outer sep=0pt,anchor=base},column sep=5ex,row sep=1.7ex]{ & & & & & & & \\ & & & & & & & \\ \strut ~~~~~10 & \Z_2\la\eta\circ\mu\ra & (\Z_2)^3 & (\Z_2)^2 & \Z_{240} & \Z_2 & \strut\\ & & & & & & & \\ \strut ~~~~~9 & \Z_2\la\nu^3\ra\oplus\Z_2\la\mu\ra\oplus\Z_2\la\eta\circ\varepsilon\ra & (\Z_2)^2 & \Z_{240} & \Z_2 & 0 & \strut\\ & & & & & & & \\ \strut ~~~~~8 & \Z_2\la\ol\nu\ra\oplus\Z_2\la\varepsilon\ra & \Z_{240} & \Z_2 & 0 & 0 & \strut\\ & & & & & & & \\ \strut ~~~~~7 & \Z_{240}\la\sigma\ra & \Z_2 & 0 & 0 & \Z_{24} & \strut\\ & & & & & & & \\ \strut ~~~~~6 & \Z_2\la\nu^2\ra & 0 & 0 & \Z_{24} & \Z_2 & \strut\\ & & & & & & & \\ \strut ~~~~~5 & 0 & 0 & \Z_{24} & \Z_2 & \Z_2 & \strut\\ & & & & & & & \\ \strut ~~~~~4 & 0 & \Z_{24} & \Z_2 & \Z_2 & \Z & \strut\\ & & & & & & & \\ \strut ~~~~~3 & \Z_{24}\la\nu\ra & \Z_2 & \Z_2 & \Z & 0 & \strut\\ & & & & & & & \\ \strut ~~~~~2 & \Z_2\la\eta^2\ra & \Z_2 & \Z & 0 & 0 & \strut\\ & & & & & & & \\ \strut ~~~~~1 & \Z_2\la\eta\ra & \Z & 0 & 0 & 0 & \strut\\ & & & & & & & \\ \strut ~~~~~0 & \Z\la\iota\ra & 0 & 0 & 0 & 0 & \strut\\ \strut & 0 & 1 & 2 & 3 & 4 & \strut \\}; \begin{scope}[transparency group] \draw[alt double,alt double distance=.7mm] ({$(m-1-1)!.25!(m-1-2)$} |- m-1-1.east) -- ({$(m-24-1)!.25!(m-24-2)$} |- m-24-1.south east); \draw[alt double,alt double distance=.7mm] (m-24-1.north) -- (m-24-7.north west); \end{scope} \draw ({$(m-1-2)!.75!(m-1-3)$} |- m-1-2.east) -- ({$(m-24-2)!.75!(m-24-3)$} |- m-24-2.south east); \draw ({$(m-1-3)!.5!(m-1-4)$} |- m-1-3.east) -- ({$(m-24-3)!.5!(m-24-4)$} |- 
m-24-3.south east); \draw ({$(m-1-4)!.5!(m-1-5)$} |- m-1-4.east) -- ({$(m-24-4)!.5!(m-24-5)$} |- m-24-4.south east); \draw ({$(m-1-5)!.5!(m-1-6)$} |- m-1-5.east) -- ({$(m-24-5)!.5!(m-24-6)$} |- m-24-5.south east); \draw ({$(m-1-6)!.5!(m-1-7)$} |- m-1-6.east) -- ({$(m-24-6)!.5!(m-24-7)$} |- m-24-6.south east); \draw (m-22-1.north) -- (m-22-7.north west); \draw (m-20-1.north) -- (m-20-7.north west); \draw (m-18-1.north) -- (m-18-7.north west); \draw (m-16-1.north) -- (m-16-7.north west); \draw (m-14-1.north) -- (m-14-7.north west); \draw (m-12-1.north) -- (m-12-7.north west); \draw (m-10-1.north) -- (m-10-7.north west); \draw (m-8-1.north) -- (m-8-7.north west); \draw (m-6-1.north) -- (m-6-7.north west); \draw (m-4-1.north) -- (m-4-7.north west); \draw (m-2-1.north) -- (m-2-7.north west); \path[->,font=\scriptsize] (m-3-3) edge node [above,pos=.4] {$\eta\circ\mu$} (m-3-2); \path[->,font=\scriptsize] (m-3-4) edge node [above,pos=.5] {$0$} (m-3-3); \path[->,font=\scriptsize] (m-3-5) edge node [above,pos=.5] {$\ol\nu+\varepsilon$} (m-3-4); \path[->,font=\scriptsize] (m-3-6) edge node [above,pos=.5] {$0$} (m-3-5); \path[->,font=\scriptsize] (m-5-3) edge node [above,pos=.4] {$?$} (m-5-2); \path[->,font=\scriptsize] (m-5-4) edge node [above,pos=.5] {$0$} (m-5-3); \path[->,font=\scriptsize] (m-5-5) edge node [above,pos=.5] {$0$} (m-5-4); \path[->,font=\scriptsize] (m-7-3) edge node [above,pos=.5] {$\ol\nu+\varepsilon$} (m-7-2); \path[->,font=\scriptsize] (m-7-4) edge node [above,pos=.5] {$0$} (m-7-3); \path[->,font=\scriptsize] (m-9-3) edge node [above,pos=.5] {$0$} (m-9-2); \path[->,font=\scriptsize] (m-11-6) edge node [above,pos=.5] {$0$} (m-11-5); \path[>->,font=\scriptsize] (m-13-5) edge node [above,pos=.5] {$~$} (m-13-4); \path[->,font=\scriptsize] (m-13-6) edge node [above,pos=.5] {$0$} (m-13-5); \path[->,font=\scriptsize] (m-15-4) edge node [above,pos=.5] {$0$} (m-15-3); \path[->,font=\scriptsize] (m-15-5) edge node [above,pos=.5] {$\cong$} (m-15-4); 
\path[->,font=\scriptsize] (m-15-6) edge node [above,pos=.5] {$0$} (m-15-5); \path[>->,font=\scriptsize] (m-17-3) edge node [above,pos=.5] {$~$} (m-17-2); \path[->,font=\scriptsize] (m-17-4) edge node [above,pos=.5] {$0$} (m-17-3); \path[->>,font=\scriptsize] (m-17-5) edge node [above,pos=.5] {$~$} (m-17-4); \path[->,font=\scriptsize] (m-19-3) edge node [above,pos=.5] {$\cong$} (m-19-2); \path[->,font=\scriptsize] (m-19-4) edge node [above,pos=.5] {$0$} (m-19-3); \path[->>,font=\scriptsize] (m-21-3) edge node [above,pos=.5] {$~$} (m-21-2); \end{tikzpicture} \end{center} \end{ex} The following is analogous to theorem \ref{hmor} and is a common generalisation of the $\CC_2$-isomorphism part of theorem \ref{primfold} and the $\CC_{2,3}$-isomorphism part of theorem \ref{primcusp}. \begin{thm}\label{ploc} For all $r\ge1$ we have $$\Prim_r^L(n,k)\congc{\le\frac{k+1}2r+1}\displaystyle\bigoplus_{i=0}^r\pi^s(n+1-(k+1)i-k).$$ \end{thm} In the two cases \ref{cplx} and \ref{quat} this theorem states respectively \begin{gather*} \Prim_r^\SO(n,1)=\Prim_r^\U(n,1)\congc{\le r+1}\displaystyle\bigoplus_{i=0}^r\pi^s(n-2i)\\ \text{and}~~~~\Prim_r^{\Spin}(n,3)=\Prim_r^\Sp(n,3)\congc{\le 2r+1}\displaystyle\bigoplus_{i=0}^r\pi^s(n-4i-2). \end{gather*} \medskip\begin{prf} Fix any prime $p>\frac{k+1}2r+1$. We prove by induction on $r$ that the stable homotopy type of the $p$-localised projective space $\big[\FP^{r+1}\big]_p$ coincides with the $p$-localisation of $S^{k+1}\vee S^{2(k+1)}\vee\ldots\vee S^{(r+1)(k+1)}$. This is clearly true for $r=0$, hence we only have to prove the induction step. Using the induction hypothesis, the stable homotopy class of the attaching map of the top dimensional cell of $\FP^{r+1}$ after $p$-localisation is given by a collection of $p$-localised stable maps from $S^{(k+1)r+k}$ to $S^{i(k+1)}$ for $i=1,\ldots,r$. 
Such a map represents an element in the $p$-primary part of $\pi^s((k+1)(r-i)+k)$, but by a theorem of Serre from \cite{serre} this $p$-primary part is trivial, since $p>\frac{(k+1)(r-i)+k}2+1$. Hence after the $p$-localisation all stable attaching maps are null-homotopic and this proves the stated form of $\big[\FP^{r+1}\big]_p$. Thus using that $\Omega\Gamma S=\Gamma$ and $\Gamma(A\vee B)\cong\Gamma A\times\Gamma B$, we have \begin{alignat*}2 \big[\ol X_r^L\big]_p&\cong\Omega\Gamma\big[S^{k+1}\vee\ldots\vee S^{(r+1)(k+1)}\big]_p\cong\\ &\cong\Gamma\big[S^{k}\vee\ldots\vee S^{r(k+1)+k}\big]_p\cong\prod_{i=0}^r\big[\Gamma S^{(k+1)i+k}\big]_p, \end{alignat*} and now applying the functor $\pi_{n+1}$ yields the statement. \end{prf} \begin{rmk} All of the above also works with the substitutions $L:=\O$, $k:=0$ and $\F:=\R$; we omitted this case only because we considered positive codimensional maps exclusively throughout this thesis. \end{rmk} \chapter{Final remarks and open questions} In the present thesis we described results on computing the cobordism groups of singular maps. What we did not include are their consequences and applications, so we now mention some of them. The first application of these groups (and the initial reason why Szűcs introduced them) is the computation of cobordism groups of immersions and embeddings in dimensions where the classical theory did not succeed; this resulted in \cite{analog}, \cite{immemb} and \cite{immor}. A few orientability properties could also be described as consequences of the cobordism theory of singular maps; see \cite{rsz}. Yet another application is the answer to (a version of) a question posed by Arnold on eliminating the most complicated singularity of a map by a deformation, an answer provided by the key bundle; see \cite{elimcob} and remark \ref{keyrmk}. A very vague, yet important problem is to find further applications of this theory.
Turning again to the cobordism groups themselves, observe (as I also mentioned in the introduction) that many basic notions in this thesis were introduced more generally than they were in the original papers (such as the constructions of classifying spaces, the Kazarian conjecture, the key fibration, etc.). These were all such that the original proofs of the theorems could be applied in the general setting as well; however, there were a few occasions where I had trouble implementing these proofs, so let us recall them now. The analogue of theorem \ref{kazc} is probably not true for unoriented cobordisms; theorem \ref{charcob} is most certainly true in some version in the unoriented case as well; and I expect that the orientability conditions in theorems \ref{ratrivi1} and \ref{ratrivi2} can also be relaxed. Furthermore, there are probably a few concrete computations from chapter \ref{compcobgr} that can be modified to also work for maps whose normal structures (for example spin structures) differ from the ones considered there. So a possible project is to find such computations for maps with various additional structures. In the following we describe questions in a few more related topics. \section{Multiplicative structures} In \cite{multmor} and \cite{szszt} two different ways were introduced to define product operations on direct sums of the free parts of certain cobordism groups of singular maps (tensored by $\Q$), which make these direct sums rings. An interesting question to consider is whether we can generalise (one of) them to obtain multiplicative structures on direct sums of other types of cobordism groups as well. If not, can we find another way to define such product operations? Can we define a ring structure that also includes the torsion parts of these groups? If the answer to (one of) the above questions is affirmative, then it is natural to ask how we can describe such a ring.
For example, how can we express the homology class represented by the singularities of a map in the product of two cobordism classes using the singularities of representatives of the two initial cobordism classes? \section{Double spectral sequences} Suppose we have a stable linear group $L$ and a set of $k$-codimensional singularities $\tau=\{\eta_0,\eta_1,\ldots\}$ with a total order of type $\omega$ or less extending the natural partial order. Put $\xi_i^L:=\xi_{\eta_i}^L$, $\tilde\xi_i^L:=\tilde\xi_{\eta_i}^L$, $c_i:=\dim\xi_i^L$ and $G_i^L:=G_{\eta_i}^L$. It was noted in \cite{hosszu} that there are two double spectral sequences converging to the cobordism groups $\Cob_\tau^L(n,k)$ as shown in the diagram below. The double spectral sequences here mean systems of groups $E^1_{i,j,l}$ which form the starting page of a spectral sequence with variables $i,j$ and fixed $l$ converging to groups that form the starting page of another spectral sequence with variables $i+j,l$ (this is shown in the diagram as the left-hand vertical and the bottom horizontal arrows); similarly the groups $E^1_{i,j,l}$ are also the starting page of a spectral sequence with variables $i,l$ and fixed $j$ converging to the starting page of a spectral sequence with variables $i+l,j$ (this is the top horizontal and the right-hand vertical arrows).
$$\xymatrix{ H_i\big(BG_l^L;(\widetilde{\pi^s(j)})_{\tilde\xi_l^L}\big)\ar[r]^(.47)\cong\ar[d]_\cong & H_{i+c_l}\big(T\xi_l^L;(\widetilde{\pi^s(j)})_{\nu_\tau^L}\big)\ar@{=>}[r]^(.48){\rm{K}} & H_{i+c_l+l}\big(K_\tau^L;(\widetilde{\pi^s(j)})_{\nu_\tau^L}\big)\ar[d]^\cong \\ H_{i+c_l+k}(T\tilde\xi_l^L;\pi^s(j))\ar@{=>}[d]_{\rm{AH}} && H_{i+c_l+l+k}(V_\tau^L;\pi^s(j))\ar@{=>}[d]^{\rm{AH}} \\ \pi^s_{i+c_l+k+j}(T\tilde\xi_l^L)\ar@{=>}[rr]^(.43){\rm{Sz}} && *!<1cm,0cm>{~\pi_{*+k}(X_\tau^L)\cong\pi^s_{*+k}(V_\tau^L)} }$$ The isomorphisms indicated in the diagram are all Thom isomorphisms and the double arrows marked by $\rm{K}$, $\rm{AH}$ and $\rm{Sz}$ denote the Kazarian spectral sequence, the Atiyah--Hirzebruch spectral sequence and Szűcs's singularity spectral sequence (the analogue of definition \ref{sss} for non-prim maps) respectively. Hence we obtain the following. \begin{prop} There are two double spectral sequences $$E^1_{i,j,l}:=H_i\big(BG_l^L;(\widetilde{\pi^s(j)})_{\tilde\xi_l^L}\big)\implies\pi_{*+k}(X_\tau^L)\cong\Cob_\tau^L(*,k).$$ \end{prop} Although this gives a way in principle to compute the groups $\Cob_\tau^L(n,k)$ completely, the practical computation seems to be rather difficult; to my knowledge no concrete result on cobordism groups has been derived from these double spectral sequences so far. So it may be interesting to study these and (for example) try to obtain estimates on the torsion of the cobordism groups of cooriented Morin maps (recall that the ranks of these groups were computed in theorem \ref{coorthm}). \section{Mosher's spectral sequence} In subsection \ref{rps} we saw that in some cases (namely for $1$-codimensional cooriented prim maps and $3$-codimensional prim maps with spin normal structures) the singularity spectral sequence can be identified with spectral sequences in stable homotopy groups arising from filtrations of projective spaces.
In one case this was the filtration of $\CP^\infty$ by the subspaces $\CP^n$, which was studied by Mosher \cite{stabhom} (his paper is rather compressed and omits many proofs and details, but these are clarified and completed by Szűcs and Terpai in \cite{nehez}). In the following we recall (without proofs) a few results on this spectral sequence, which will be denoted by $E^*_{*,*}$. We will also consider its $p$-localised version, denoted by $^p\! E^*_{*,*}$ (where $p$ is a prime), which starts with the $p$-components of $E^1_{*,*}$ and whose first differential $^p\! d^1$ is the $p$-component of $d^1$. Firstly there is a periodicity theorem on $E^*_{*,*}$ and its analogue on $^p\! E^*_{*,*}$ using the Atiyah--Todd numbers $M_n$ (see \cite{at}) that have the form $$M_n=\prod_{p~\rm{prime}}p^{\nu_p(M_n)}~~~~\text{with}~~~~\nu_p(M_n)=\max\big\{m+\nu_p(m)\mid 1\le m\le\textstyle\big\lfloor\frac{n-1}{p-1}\big\rfloor\big\}.$$ We will also put $[M_n]_p:=p^{\nu_p(M_n)}$ for the $p$-component of $M_n$. The first few Atiyah--Todd numbers are listed below. \begin{table}[H]\begin{center}\begin{tabular}{c||c|c|c|c|c|c} $n$ & $1$ & $2$ & $3$ & $4$ & $5$ & $6$\\ \hline $M_n$ & $1$ & $2$ & $2^3\cdot 3$ & $2^3\cdot 3$ & $2^6\cdot3^2\cdot5$ & $2^6\cdot3^2\cdot5$ \end{tabular}\end{center}\end{table}\vspace{-.5cm} \begin{thm}[\cite{stabhom},\cite{nehez}] $~$ \begin{enumerate} \item For any $r<n$ we have $E^r_{i,j}\cong E^r_{i+M_n,j+M_n}$, moreover, this isomorphism commutes with the differential $d^r$. \item For any $r<n$ and any prime $p$ we have $^p\! E^r_{i,j}\cong {^p\! E^r_{i+[M_n]_p,j+[M_n]_p}}$, moreover, this isomorphism commutes with the differential $^p\! d^r$. \end{enumerate} \end{thm} Next we use the $J$ homomorphism and its analogue in the $p$-localised case (for a prime $p$) denoted by $J_p$ and defined in \cite{nehez}.
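As a quick sanity check, the defining formula for the Atiyah--Todd numbers above can be evaluated directly. The following Python sketch (purely illustrative and not part of the thesis; the helper names \texttt{nu\_p} and \texttt{atiyah\_todd} are ad hoc, and the primality test is deliberately naive) reproduces the table of the first few values of $M_n$.

```python
def nu_p(p: int, m: int) -> int:
    """The p-adic valuation of m (for m >= 1)."""
    v = 0
    while m % p == 0:
        m //= p
        v += 1
    return v

def atiyah_todd(n: int) -> int:
    """M_n = prod_p p^{nu_p(M_n)} with
    nu_p(M_n) = max{ m + nu_p(m) : 1 <= m <= floor((n-1)/(p-1)) },
    the exponent taken to be 0 when the index range is empty (so M_1 = 1)."""
    M = 1
    for p in range(2, n + 1):  # only primes p <= n can contribute
        if any(p % d == 0 for d in range(2, p)):
            continue  # skip non-primes (naive test suffices for small n)
        bound = (n - 1) // (p - 1)
        if bound >= 1:
            M *= p ** max(m + nu_p(p, m) for m in range(1, bound + 1))
    return M

# Reproduces the table: M_1, ..., M_6 = 1, 2, 24, 24, 2880, 2880
print([atiyah_todd(n) for n in range(1, 7)])
```

For instance $M_5$ comes out as $2^6\cdot3^2\cdot5=2880$: for $p=2$ the maximum of $m+\nu_2(m)$ over $1\le m\le4$ is attained at $m=4$ with value $6$, for $p=3$ the bound is $2$ giving exponent $2$, and for $p=5$ the bound is $1$ giving exponent $1$.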
We remark that $J_p$ had two different definitions in \cite{nehez}, an algebraic and a geometric one; these coincide in the present case, but generally it is an open question whether the two definitions are equivalent. \begin{thm}[\cite{stabhom},\cite{nehez}] $~$ \begin{enumerate} \item If $\iota\in E^1_{i,i}\cong\pi^s(0)\cong\Z$ is a generator for which $d^r(\iota)$ vanishes for all $r<n$, then $d^n(\iota)\in E^n_{i-n,i+n-1}$ belongs to the image of $\im J\subset\pi^s(2n-1)\cong E^1_{i-n,i+n-1}$ in $E^n_{i-n,i+n-1}$. \item If $\iota\in {^p\! E^1_{i,i}}\cong\pi^s(0)\otimes\Z_{(p)}\cong\Z_{(p)}$ is a generator for which $^p\! d^r(\iota)$ vanishes for all $r<n$, then $^p\! d^n(\iota)\in {^p\! E^n_{i-n,i+n-1}}$ belongs to the image of $\im J_p\subset\pi^s(2n-1)\otimes\Z_{(p)}\cong {^p\! E^1_{i-n,i+n-1}}$ in $^p\! E^n_{i-n,i+n-1}$. \end{enumerate} \end{thm} By the geometric interpretation of the spectral sequence $E^*_{*,*}$ in subsection \ref{geode}, the first claim above translates to the following: The vanishing of the first $n-1$ differentials on the class represented by the normal form $\sigma_{i-1}\colon(D^{2i-2},S^{2i-3})\to(D^{2i-1},S^{2i-2})$ of the $\Sigma^{1_{i-1}}$ singularity implies that the boundary $\partial\sigma_{i-1}$ can be chosen (up to cobordism) to have at most $\Sigma^{1_{i-n-1}}$ singularities and such that its restriction to the $\Sigma^{1_{i-n-1}}$-stratum is a framed immersion of $S^{2n-1}$. Lastly there is also a way to determine the first non-zero differential on a generator $\iota\in E^1_{i,i}$. This also uses the following. \begin{lemma}[\cite{stabhom},\cite{nehez}] For a generator $\iota\in E^1_{i,i}$, the differential $d^n$ is defined on $\iota$ iff $i+1$ is divisible by $M_n$ and if this is the case, then $d^n(\iota)$ vanishes iff $i+1$ is divisible by $M_{n+1}$ as well. \end{lemma} Now the tool will be the so-called $e$-invariants of Adams, more precisely the homomorphism $e_C\colon\pi^s(2n-1)\to\Q/\Z$ (see \cite{adams}). 
One important property of this is that any two representatives of an element in $[\im J]\subset E^n_{*,*}$ (the image of $\im J\subset E^1_{*,*}$) have the same $e_C$-value, hence $e_C$ is well-defined on $[\im J]$, moreover, it is also injective on $[\im J]$ (see \cite{adams} and \cite{stabhom}). We saw in the previous theorem that the vanishing of $d^1,\ldots,d^{n-1}$ on $\iota$ means that we have $d^n(\iota)\in[\im J]\subset E^n_{i-n,i+n-1}$, thus the value $e_C(d^n(\iota))$ uniquely determines $d^n(\iota)$. \begin{thm}[\cite{stabhom},\cite{nehez}] If $i+1\equiv kM_n\pmod{M_{n+1}}$ and $\iota\in E^1_{i,i}$ is a generator, then we have $e_C(d^n(\iota))=ku_n$, where $u_n$ is the coefficient of $z^n$ in the Taylor expansion $$\left(\frac{\log(1+z)}z\right)^{M_n}=\sum_{j=0}^\infty u_jz^j.$$ \end{thm} The theorems above are strong results on the spectral sequence arising from the natural filtration of $\CP^\infty$, which coincides with the singularity spectral sequence for cooriented codimension-$1$ prim maps. Now what can we say about the other case considered in subsection \ref{rps}? Can we obtain similar theorems on the singularity spectral sequence for codimension-$3$ prim maps with spin normal structures, i.e. the spectral sequence arising from the natural filtration of $\HP^\infty$? If so, what geometric meaning do they yield for singular maps? What other applications do these spectral sequences have for singularities besides those described in \cite{nehez}, \cite{ctrl} and subsection \ref{rps}? \section{Bordism groups} There is a different notion of cobordism relation between singular maps, which is also interesting to consider. This is called (left-right) bordism and is defined in the following way. \begin{defi} Let $\ul\tau$ be a set of (multi)singularities of a fixed codimension $k$.
We call two $\ul\tau$-maps $f_0\colon M_0^n\to P_0^{n+k}$ and $f_1\colon M_1^n\to P_1^{n+k}$ (with closed source and target manifolds) left-right $\ul\tau$-bordant (or simply $\ul\tau$-bordant) if there is \begin{itemize} \item[(i)] a compact manifold with boundary $W^{n+1}$ such that $\partial W=M_0\sqcup M_1$, \item[(ii)] a compact manifold with boundary $Z^{n+k+1}$ such that $\partial Z=P_0\sqcup P_1$, \item[(iii)] a $\ul\tau$-map $F\colon W^{n+1}\to Z^{n+k+1}$ such that for $i=0,1$ we have $F^{-1}(P_i)=M_i$ and $F|_{M_i}=f_i$. \end{itemize} The set of (left-right) $\ul\tau$-bordism classes of $\ul\tau$-maps of $n$-manifolds to $(n+k)$-manifolds is denoted by $\Bord_{\ul\tau}(n,k)$. \end{defi} The disjoint union $f_0\sqcup f_1\colon M_0\sqcup M_1\to P_0\sqcup P_1$ now obviously defines an Abelian group operation on $\Bord_{\ul\tau}(n,k)$ and it is not hard to see (by a modified version of the Pontryagin--Thom construction) that the isomorphism $$\Bord_{\ul\tau}(n,k)\cong\NN_{n+k}(X_{\ul\tau})$$ holds. Moreover, additional normal structures can be defined for these bordism groups similarly to those on cobordism groups and analogous isomorphisms hold for bordism groups equipped with extra structures. Some theorems and computations on these bordism groups were obtained in \cite{analog}, \cite{eszt}, \cite{hosszu}, \cite{szszt} and \cite{bordfold}. Apart from the obvious problem of generalising the existing results on them, we can also ask which of the theorems and conjectures on cobordism groups have analogues for bordism groups. \section{Some more questions} The following are a few more interesting problems related to the topic of this thesis: \begin{enumerate} \item What connections can we find between the Thom polynomials describing homology classes represented by singularity strata and the key fibration connecting the classifying spaces?
\item We know as a result of Grant and Szűcs \cite{grr} that given a manifold $P^{n+k}$, not all homology classes in $H_n(P;\Z_2)$ can be represented by immersions $M^n\imto P^{n+k}$, moreover, no finite set of multisingularities $\ul\tau$ is enough to represent all homology classes by $\ul\tau$-maps. Is the analogue of this true or false for infinite multisingularity sets or sets of singularities with no restrictions on multiplicities? What can we say about representing homology classes with integer coefficients? \item If we have an immersion equipped with a non-vanishing normal vector field, then projecting it to hyperplanes yields prim maps. On the other hand, the compression theorem implies that after a deformation such a projection will be an immersion (see corollary \ref{ict}). How and why do all obstructions represented by the singularity strata vanish in this case? \end{enumerate} \chapter*{Appendix} \addcontentsline{toc}{chapter}{Appendix} \newcounter{kaki} \renewcommand\thesection{\Alph{kaki}} \renewcommand\thefigure{\thesection.\arabic{figure}} \refstepcounter{kaki} \section{The compression theorem} We recall the compression theorem of Rourke and Sanderson from \cite{rs}, and then present some addenda and extensions due to Rourke, Sanderson, Szűcs and Terpai. Throughout this section $P$ will always denote a manifold with a fixed Riemannian metric; the distance between two points is always denoted by $d(\cdot,\cdot)$, the inner product by $\la\cdot,\cdot\ra$ and the length of a vector by $\lv\cdot\rv$. The basic compression theorem is the following. \begin{prop}\label{ct} Let $n,k$ be natural numbers such that $k\ge1$ and let $i\colon M^n\hookrightarrow P^{n+k}\times\R^1$ be the embedding of a compact manifold (possibly with boundary) equipped with a nowhere zero normal vector field $u$ (i.e. a section of the bundle $T(P\times\R^1)|_{i(M)}\setminus Ti(M)$).
Then there is an (ambient) isotopy $\varphi_t~(t\in[0,1])$ of $i$ such that $\varphi_0=i$ and $d\varphi_1(u)$ is the vertically upwards unit vector field (i.e. the unit field pointing everywhere in the positive $\R^1$ direction in $T(P\times\R^1)$), which will be denoted by $\ua$. \end{prop} For the sake of simplicity, throughout this section we will call the summands $TP$ and $\R^1$ in $T(P\times\R^1)$ horizontal and vertical respectively; the positive and negative directions on $\R^1$ will be called upwards and downwards respectively; and the parameter $t$ in one-parameter families of diffeomorphisms will be thought of as the time. \begin{rmk} The name ``compression theorem'' refers to the fact that an embedding $M\hookrightarrow P\times\R^1$ equipped with a vertical normal vector field can be ``compressed'' to an immersion $M\looparrowright P$ by the vertical projection. \end{rmk} \begin{rmk} By the isotopy extension theorem, an isotopy of the embedding $i$ is equivalent to a diffeotopy of $P\times\R^1$, hence we will not distinguish between these two notions. \end{rmk} Before getting to the proof of proposition \ref{ct} we make the following preliminary observations. \begin{lemma}\label{ctlemma} We can assume that the vector field $u$ is \begin{enumerate} \item\label{ct1} everywhere of unit length and orthogonal to the tangent space of $i(M)$, \item\label{ct2} nowhere vertically downwards (i.e. nowhere equal to $\da:=-\ua$). \end{enumerate} \end{lemma} \begin{prf} \emph{Proof of \ref{ct1}.}\enspace\ignorespaces We can apply the Gram--Schmidt process pointwise in $T(P\times\R^1)|_{i(M)}$ to turn $u$ into a vector field of the required form. This is a smooth deformation, hence it results in a smooth vector field. \medskip\noindent\emph{Proof of \ref{ct2}.}\enspace\ignorespaces Because of assumption \ref{ct1}, both $u$ and $\da$ are sections of the unit sphere bundle of $T(P\times\R^1)|_{i(M)}$.
This sphere bundle is a $(2n+k)$-dimensional manifold and the images of $u$ and $\da$ are both $n$-dimensional submanifolds, hence if $u$ is transverse to $\da$ (which we can assume), then they are disjoint (because $k\ge1$). \end{prf} \par\noindent\textbf{Proof of proposition \ref{ct}.\enspace\ignorespaces} Denote by $\alpha(p)$ the angle between $u(p)$ and $\ua(p)$ for all $p\in i(M)$; this defines a smooth map $\alpha\colon i(M)\to[0,\pi)$. Take an $\alpha_0$ for which $0<\alpha_0<\pi-\max\alpha$ and let $v$ be the vector field that we get for all $p\in i(M)$ by rotating $u(p)$ in the plane of $u(p)$ and $\ua(p)$ upwards by angle $\frac\pi2-\alpha_0$ if $\alpha(p)\ge\frac\pi2-\alpha_0$, otherwise only rotating until $\ua(p)$. This $v$ is not necessarily smooth but we can assume that it is, since we can approximate it with a smooth vector field. Now $v$ is a normal vector field such that $\la v(p),\ua(p)\ra>0$ for all $p\in i(M)$. Let $V$ be a closed tubular neighbourhood of $i(M)$ of radius $r$ and define an extension $\tilde v$ of the vector field $v$ to $P\times\R^1$ in the following way: Let $\beta\colon i(M)\to[0,\frac\pi2)$ be the pointwise angle between $v$ and $\ua$ and let $f\colon[0,r]\to[0,1]$ be a smooth function with $f(0)=0$, $f(r)=1$ and $\lim_{t\to0^+}f'(t)=\lim_{t\to r^-}f'(t)=0$. Any point $q\in V$ is in a fibre $V_p$ for some $p\in i(M)$; if the distance between $q$ and this $p$ is $t\in[0,r]$, then we define $\tilde v(q)$ as follows: Parallel translate $v(p)$ to $q$ along the minimal geodesic $[p,q]$, then rotate it upwards by angle $f(t)\beta(p)$. In the complement of $V$ we define $\tilde v$ to be everywhere $\ua$. If the flow of $\tilde v$ is $\{\Phi_t\in\Diff(P\times\R^1)\mid t\in\R_+\}$, then every integral curve leaves the compact set $V$ in finite time, so there is an $s>0$ for which $\Phi_t(p)\notin V$ for all $t\ge s$ and $p\in i(M)$.
Therefore if we define $\varphi_t$ as $\Phi_{ts}$ (for $t\in[0,1]$), then the vector field on $\varphi_1(M)$ is $d\varphi_1(v)=\ua$ (now we also used that the preliminary rotation of $u$ to $v$ can be achieved by deforming a neighbourhood of $i(M)$ leaving $i(M)$ fixed). ~$\square$\par\medskip \subsection{Local compression} In this subsection we prove the key extension of proposition \ref{ct} (again by Rourke and Sanderson) which is called the local compression theorem. In order to prove it we will need a few remarks and lemmas first; to make notation simpler we will use the convention $(p,x)+y:=(p,x+y)$ for all $(p,x)\in P\times\R^1$ and $y\in\R^1$ and identify the tangent spaces $T_{(p,x)}(P\times\R^1)$ for all $x\in\R^1$. We are in the same setting as before and we again assume the conditions in lemma \ref{ctlemma}. \begin{rmk}\label{lctrmk} We can change the vector field $\tilde v$ in the proof of proposition \ref{ct} to the time-dependent vector field $$\tilde v_t\colon P\times\R^1\to T(P\times\R^1);~p\mapsto\tilde v(p+t)+\da~~~~(t\in\R_+).$$ Let the flow of this vector field be $\{\tilde\Phi_t\in\Diff(P\times\R^1)\mid t\in\R_+\}$, i.e. for any $p\in P\times\R^1$ the curve $\gamma\colon t\mapsto\tilde\Phi_t(p)$ is the unique solution of the differential equation $\gamma(0)=p,\gamma'(t)=\tilde v_t(\gamma(t))$. Then we have the identity $\tilde\Phi_t(p)=\Phi_t(p)-t$ for all $p\in P\times\R^1$; in other words, $\tilde\Phi_t$ is obtained by flowing along $\tilde v$ for time $t$ and then flowing back down along the unit downwards flow for time $t$. This way, if the vector $u(p)$ was initially vertically upwards at a point $p\in i(M)$, then the flow $\tilde\Phi_t$ fixes $p$ and $u(p)$ for sufficiently small numbers $t$ (for larger numbers $t$ it may happen that $\tilde v(p+t)$ is not vertically upwards, and so $\tilde\Phi_t$ eventually moves the point $p$).
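For completeness, this identity can be verified by a direct computation: for a fixed $p\in P\times\R^1$ put $\sigma(t):=\Phi_t(p)-t$; then $\sigma(0)=p$ and (using the identification of the tangent spaces along vertical translations) $$\sigma'(t)=\tilde v(\Phi_t(p))+\da=\tilde v(\sigma(t)+t)+\da=\tilde v_t(\sigma(t)),$$ hence $\sigma$ solves the same differential equation as the curve $t\mapsto\tilde\Phi_t(p)$, and by the uniqueness of the solution we indeed have $\sigma(t)=\tilde\Phi_t(p)$ for all $t\in\R_+$.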
\end{rmk} \begin{defi} The set of points in $i(M)$ where the orthogonal complement of the tangent space is horizontal is called the horizontal set, i.e. the horizontal set is $$H:=H(i):=\{(p,x)\in i(M)\mid T_{(p,x)}(\{p\}\times\R^1)\subset T_{(p,x)}i(M)\}$$ \end{defi} \begin{lemma} We can assume that $H$ is a submanifold of $i(M)$. \end{lemma} \begin{prf} Consider the $1$-jet bundle $J^1(M,P\times\R^1)\to M\times(P\times\R^1)\to M$. We can choose in each fibre of $J^1(M,P\times\R^1)\to M\times(P\times\R^1)$ (which can be identified with the space of linear maps $\R^n\to\R^{n+k}\times\R^1$) the space of those linear maps $A$ for which $\im A$ contains the last coordinate line $\R^1$. The union of these in all fibres is a ($(k+1)$-codimensional) submanifold of $J^1(M,P\times\R^1)$ and $H(i)$ is the $J^1i$-preimage of it. By the jet transversality theorem, we can perturb $i$ slightly in $C^\infty(M,P\times\R^1)$ to get a new map $j\colon M\to P\times\R^1$ for which $H(j)$ is a submanifold. By the openness of the space of embeddings in $C^\infty(M,P\times\R^1)$, this $j$ is an embedding isotopic to $i$. \end{prf} \begin{defi} For all $p\in i(M)\setminus H$ there is a unique unit vector in $N_pi(M)$ (the orthogonal complement of $T_pi(M)$) with maximal vertical coordinate; it will be denoted by $x(p)$. The set of points in $i(M)\setminus H$ where the vector $u$ is downmost is called the downset, i.e. the downset is $$D:=D(i,u):=\{p\in i(M)\setminus H\mid u(p)=-x(p)\}$$ \end{defi} \begin{lemma} We can assume that $\overline D$ is a submanifold of $i(M)$ with boundary $H$. \end{lemma} \begin{prf} Both $u|_{i(M)\setminus H}$ and $-x$ are sections of the unit sphere bundle of $N(i(M)\setminus H)$. We may assume that $u|_{i(M)\setminus H}$ is transverse to $-x$, and then the preimage of their intersection in $i(M)\setminus H$ (i.e. $D$) is a ($k$-codimensional) submanifold. 
If $T\subset i(M)$ is a tubular neighbourhood of $H$, then we can canonically identify the normal spaces $N_pi(M)$ over each fibre of $T$. We may also homotope the vector field $u$ around $H$ such that it becomes constant on each fibre of $T$. Moreover, we can identify the fibres of $T(P\times\R^1)|_{i(M)}$ with $\R^{n+k+1}$ over each fibre of $T$, and then we may assume that the bundle of the ($(k+1)$-dimensional) subspaces $N_pi(M)$ intersects the submanifold of the horizontal $(k+1)$-planes in the Grassmannian $G_{k+1}(\R^{n+k+1})$ transversally over each fibre of $T$. Now if $T$ is chosen sufficiently small, then for all $p\in H$ any non-horizontal $(k+1)$-plane in a neighbourhood of $N_pi(M)$ occurs exactly once among the spaces $N_qi(M)$ where $q$ is in the fibre of $T$ over $p$. This implies that the vector field $-x|_{S_p}$, where $S_p\approx S^k$ is the fibre over $p$ of the sphere bundle of $T$ with any sufficiently small radius, composed with the projection to the fibre of the unit sphere bundle of $Ni(M)$ (which is also an $S^k$) is a diffeomorphism. Hence $-x$ and $u$ intersect exactly once in $S_p$ for all $p\in H$ and so the closure of $D$ will be a manifold with boundary $H$. \end{prf} \begin{defi} Let $y$ be the gradient field of the projection $\pr_{\R^1}|_{i(M)}$ to the vertical coordinate line, i.e. $y(p)$ is the projected image of $\ua(p)$ in $T_pi(M)$ for all $p\in i(M)$. \end{defi} \begin{lemma}\label{lctlemma} For any tubular neighbourhood $T\subset i(M)$ of $H$ and any number $\delta>0$ we can assume that $\overline{D\setminus T}$ has a tubular neighbourhood $U\subset i(M)$ such that each component of the intersection of an integral curve of $y$ with $U$ has length less than $\delta$.
\end{lemma} \begin{figure}[H] \begin{center} \centering\includegraphics[scale=0.1]{kepa1}\label{kepa1} \begin{changemargin}{2cm}{2cm} \caption{\hangindent=1.4cm\small We represent $H$ as two points, the arc connecting them is $\overline D$ and the curve with the indicated tangent vectors is an integral curve of $y$.} \end{changemargin} \vspace{-1.5cm} \end{center} \end{figure} \begin{prf} Observe that a small homotopy of the vector field $u$ fixed in $H$ has the effect of moving $\overline D$ inside a small tubular neighbourhood while keeping the boundary $H$ fixed. Moreover, if $D'$ is the result of a small isotopy of $\overline D$ fixed in $H$, then it is not hard to construct a vector field $u'$ homotopic to $u$ such that $\overline{D(i,u')}=D'$. Hence it is enough to obtain the statement of the lemma after a small isotopy of $\overline D$ fixed in $H$. Choose a small subset $B\subset\overline{D\setminus T}$ diffeomorphic to $\tightoverset{_\circ~~~~}{D^{n-k}}$ and a transverse slice $S$ of the integral curves of $y$ passing through $B$. Now the projection $B\to S$ along the integral curves is a $0$-codimensional map, hence in a generic position the preimage of each point is discrete. Now we can perturb the embedding of $B$ into $i(M)$ such that this projection is generic, which means the integral curves of $y$ intersect $B$ in a discrete set. We cover $\overline{D\setminus T}$ by finitely many such subsets $B$ and obtain a small isotopy of $\overline D$ fixed in $H$ that makes all integral curves of $y$ intersect $\overline{D\setminus T}$ in a discrete set. Now the compactness of $\overline{D\setminus T}$ implies that we can choose a small neighbourhood $U$ of $\overline{D\setminus T}$ with the desired property. \end{prf} Now we can get to the local compression theorem. We will follow the proof of Rourke and Sanderson \cite{rs}, which is rather hard to understand in their paper, as much of it is only given as a visual intuition. 
We attempt to make it clearer by being more precise. \begin{thm}\label{lct} For any $\varepsilon>0$, the isotopy $\varphi_t~(t\in[0,1])$ in proposition \ref{ct} can be chosen such that for all $p\in P\times\R^1$ and $t\in[0,1]$ we have $d(p,\varphi_t(p))<\varepsilon$. \end{thm} \begin{prf} Let $\delta>0$ be a small number to be specified later, $T\subset i(M)$ a tubular neighbourhood of $H$ and $U\subset i(M)$ a closed tubular neighbourhood of $\overline{D\setminus T}$ according to lemma \ref{lctlemma} (such that $U\cap H=\varnothing$). We will turn the vector field $u$ vertically upwards in three steps (step \ref{st2} will be the only considerably hard part). \newcounter{step} \renewcommand\thestep{\Roman{step}} \medskip\noindent\refstepcounter{step}\thestep\label{st1}. \emph{Moving upwards on $i(M)\setminus(T\cup U)$.}\medskip For all points $p\in i(M)\setminus(T\cup U)$ we have $u(p)\ne-x(p)$, hence we can canonically rotate $u(p)$ to $x(p)$ in the plane of $u(p)$ and $x(p)$. This rotation clearly extends to a small neighbourhood of $i(M)\setminus(T\cup U)$ in $i(M)$, hence it also extends to $i(M)$ such that on $\overline D$ it remains the initial $u|_{\overline D}$. This way we can assume that $u$ was initially such that $u|_{\overline{i(M)\setminus(T\cup U)}}=x|_{\overline{i(M)\setminus(T\cup U)}}$. Now for all $p\in\overline{i(M)\setminus(T\cup U)}$ the angle of $u(p)$ and $\ua(p)$ is $\alpha(p)<\frac\pi2$, so by the compactness of $\overline{i(M)\setminus(T\cup U)}$ there is an $\alpha_1>0$ such that $\alpha(p)<\frac\pi2-\alpha_1$ is also true. Now we can use the upwards rotation of $u$ into $v$ that we defined in the proof of proposition \ref{ct} with $0<\alpha_0<\alpha_1$, hence we get that the new vector field $v$ is such that $v|_{\overline{i(M)\setminus(T\cup U)}}=\ua|_{\overline{i(M)\setminus(T\cup U)}}$. \medskip\noindent\refstepcounter{step}\thestep\label{st2}. 
\emph{Moving upwards on $U$.}\medskip Let $V$ be the closed tubular neighbourhood of $i(M)$ of radius $r$ (where $r>0$ is to be specified later) as in the proof of proposition \ref{ct}. For all $(p,x)\in U$ we put $$A_{(p,x)}:=N_{(p,x)}i(M)\cap T_{(p,x)}(P\times\{x\}),$$ that is, $A_{(p,x)}$ is the horizontal subspace in the normal space $N_{(p,x)}i(M)$ (the orthogonal complement of $T_{(p,x)}i(M)$), which is a hyperplane in $N_{(p,x)}i(M)$. Let $N'_{(p,x)}i(M)$ be the subspace in $T_{(p,x)}(P\times\R^1)$ generated by $A_{(p,x)}$ and $\ua(p,x)$. Now for any point $p\in U$ we can canonically rotate the fibre $V_p\subset N_pi(M)$ of $V$ around the axis $A_p$ to a closed $(k+1)$-dimensional disk in $N'_pi(M)$; let us denote this disk by $V'_p$ (here we identified the disks $V_p$ and $V'_p$ with their exponential images). This rotation clearly extends to a small neighbourhood of $U$ in $i(M)\setminus H$, hence it also extends to $i(M)$ such that at the points $p$ in the complement of a small neighbourhood of $U$ the disk $V'_p$ remains the initial $V_p$. If the radius $r$ is small enough, then the disks $V'_p$ are disjoint and the union $$V':=\bigcup_{p\in i(M)}V'_p$$ is again a tubular neighbourhood of $i(M)$. In other words, we get the neighbourhood $V'$ by rotating $V$ fibrewise around the horizontal hyperplane to be vertical over $U$, then extending this rotation to the whole $i(M)$. We now redefine the extension $\tilde v$ of $v$ as follows: Recall the proof of proposition \ref{ct} where we defined $\tilde v$ on a fibre $V_p$ by rotating (the parallel translate of) the vector $v(p)$ upwards as we go along the minimal geodesic from $p$ towards the boundary of $V_p$. Now we proceed in exactly the same way except that we use the disks $V'_p$ instead of $V_p$. Hence the vector field $\tilde v$ is defined on $V'$ and is vertically upwards on $\partial V'$; in the complement of $V'$ we define it to be everywhere $\ua$. 
As in the proof of proposition \ref{ct}, let $\{\Phi_t\in\Diff(P\times\R^1)\mid t\in\R_+\}$ be the flow of $\tilde v$; and as in remark \ref{lctrmk}, let $\{\tilde\Phi_t\in\Diff(P\times\R^1)\mid t\in\R_+\}$ be the flow of the time-dependent vector field $$\tilde v_t\colon P\times\R^1\to T(P\times\R^1);~p\mapsto\tilde v(p+t)+\da~~~~(t\in\R_+).$$ We need one final modification of the flow $\tilde\Phi_t$ in order to terminate the effect of the flow once the vector field near $U$ is vertically upwards. As we mentioned in remark \ref{lctrmk}, it may happen that the flow $\tilde\Phi_t$ leaves a point fixed for some time but begins to move it later; the purpose of this final modification is to not let this happen to points of $U\cup(i(M)\setminus T)$. Observe that the vertical projection $\pr_P|_{i(M)\setminus H}$ to $P$ is an immersion and $U\cup(i(M)\setminus T)$ is compact, hence we can fix a small number $d$ (to be specified later) such that $$0<d<\frac13\min\{d(p,q)\mid p\in U\cup(i(M)\setminus T),q\in i(M),p\ne q,{\pr}_P(p)={\pr}_P(q)\}.$$ Let $g\colon[0,\infty)\to[0,1]$ be a smooth decreasing function with $g|_{[0,d]}\equiv1$ and $g|_{[2d,\infty)}\equiv0$. We define the final version of the time-dependent vector field as $$\hat v_t\colon P\times\R^1\to T(P\times\R^1);~p\mapsto g(t)\tilde v_t(p)~~~~(t\in\R_+)$$ and denote its flow by $\{\hat\Phi_t\in\Diff(P\times\R^1)\mid t\in\R_+\}$. Our aim in the rest of step \ref{st2} will be to prove that the flow $\hat\Phi_t$ deforms $v$ into $\ua$ on $U$ while moving all points of $i(M)$ by less than $\frac\varepsilon2$ if we define $\delta,r$ and $d$ appropriately. Let $\Gamma$ be a component of the intersection of an integral curve of $y$ (the gradient field of $\pr_{\R^1}|_{i(M)}$) with $U$ (and so by lemma \ref{lctlemma}, shorter than $\delta$). 
Denote by $\pi$ the projection $V'_p\to\{p\}$ of the tubular neighbourhood $V'$ of $i(M)$ and define $$V'_\Gamma:=\bigcup_{x\in\R^1}(\pi^{-1}(\Gamma)+x),$$ that is, the union of all vertical translates of the restriction of $V'$ to $\Gamma$. \medskip\begin{sclaim} The flow $\hat\Phi_t$ can take any point of $\Gamma$ only to points of $V'_\Gamma$. \end{sclaim} \begin{sprf} Fix any $p\in\Gamma$. We rotated $u(p)$ to $v(p)$ in the plane of $u(p)$ and $\ua(p)$, then $v(p)$ to the vectors $\tilde v(q)~(q\in V'_p)$ in the parallel translates of the same plane. The vector $u(p)$ was in the normal space $N_pi(M)$ which is generated by the horizontal subspace $A_p$ and the upmost vector $x(p)$, hence the vectors $\tilde v(q)~(q\in V'_p)$ are in the parallel translate of the subspace generated by $A_p,x(p)$ and $\ua(p)$. \begin{figure}[H] \begin{center} \centering\includegraphics[scale=0.1]{kepa2}\label{kepa2} \begin{changemargin}{2cm}{2cm} \caption{\hangindent=1.4cm\small Here we show the point $p$ on the curve $\Gamma$ with the vectors at $p$ and the normal disks $V_p$ and $V'_p$.} \end{changemargin} \vspace{-1.3cm} \end{center} \end{figure} Now $p\notin H$ implies that $y(p)\ne\ua(p)$. Moreover, $y(p)\ne0$ because $y(p)=0$ would mean that $T_pi(M)$ is horizontal, thus the upmost unit vector orthogonal to it is $x(p)=\ua(p)$, and so $-x(p)=\da(p)$; now if $p$ is a point of the downset $D$, then this cannot happen because of lemma \ref{ctlemma}, hence it also cannot happen in a small neighbourhood of $D$ and we can choose $U$ to be a neighbourhood small enough. Therefore $y(p)$ and $\ua(p)$ generate a plane and it is not hard to see that $x(p)$ is in this plane as well. Indeed, $x(p)$ and $\frac{y(p)}{\lv y(p)\rv}$ are the upmost unit vectors in $N_pi(M)$ and $T_pi(M)$ respectively, hence the plane they generate has to contain the upmost unit vector of $N_pi(M)\oplus T_pi(M)=T_p(P\times\R^1)$ which is $\ua(p)$. 
This way the vectors $\tilde v(q)~(q\in V'_p)$ are in the parallel translate of the subspace generated by $A_p,y(p)$ and $\ua(p)$, where $A_p$ and $\ua(p)$ generate the linear space $N'_pi(M)$ by definition. Now $V'_p$ is a small closed ball in $N'_pi(M)$ and if $q$ is in $\partial V'_p$ or $p$ is an endpoint of $\Gamma$, then $\tilde v(q)=\ua(q)$, thus all vectors $\tilde v(q)~(q\in V'_p)$ are tangent to $V'_\Gamma$. If we add the downwards unit vector field $\da$ to these vectors, then we still get vectors tangent to $V'_\Gamma$ and the same is true if we add $\da$ to the vectors $\tilde v(q+t)$ where $q+s\in V'_p$ and $0\le s\le t\le3d$ and multiply them by small numbers. We get the vectors of the time-dependent vector field $\hat v_t=g(t)\tilde v_t$ this way, hence the integral curves of $\hat v_t$ that start in $\Gamma$ remain in $V'_\Gamma$. \end{sprf} In the following $q$ will denote a point and $t$ will denote a time such that $0\le t\le3d$ and $q+s\in V'_p$ (for some $0\le s\le t$ and $p\in U$). Based on the above proof we can decompose the vector $\tilde v_t(q)$ in the form $$\tilde v_t(q)=a_t(q)+b_t(q)+c_t(q),$$ where $a_t(q)$ is the vertical component, $b_t(q)$ is the horizontal component in the (parallel translate of the) plane of $\ua(p)$ and $y(p)$ and $c_t(q)$ is the component in the (parallel translate of the) horizontal subspace $A_p$ orthogonal to $T_pi(M)$. These three vectors are orthogonal to each other. Observe that we get the vector $\tilde v_t(q)$ by adding the downwards unit vector $\da$ to some other unit vector. One can easily check that this implies that exactly one of the following conditions holds: \begin{enumerate} \renewcommand{\labelenumi}{\theenumi} \renewcommand{\theenumi}{\rm{(\roman{enumi})}} \item\label{un1} The length of the vertical component is $\lv a_t(q)\rv\le1-\frac{\sqrt2}{2}$. \item\label{un2} The length of the horizontal component is $\lv b_t(q)+c_t(q)\rv>\frac{\sqrt2}{2}$.
\end{enumerate} Let $\gamma\colon\R_+\to P\times\R^1$ be an integral curve of $\tilde v_t$ starting at a point $p\in U$ and put $t_0:=\min\{t>0\mid\gamma'(t)=0\}$. Denote by $I_1$ (resp. $I_2$) the union of those subintervals $I\subset[0,t_0]$ for which at each time $t\in I$ the condition \ref{un1} (resp. \ref{un2}) holds with $q=\gamma(t)$. Let $t_i$ denote the length (i.e. Lebesgue measure) of $I_i$ (for $i=1,2$). \medskip\begin{sclaim} For $i=1,2$ we have $t_i<\sqrt2(\delta+r)$. \end{sclaim} \begin{sprf} If $p\in D$, then $u(p)$ is in the plane generated by $\ua(p)$ and $y(p)$, which implies $c_0(p)=0$. Therefore, given any number $\zeta>0$, we can choose the neighbourhood $U$ so small that we have $\lv c_0(p)\rv<\zeta$ for all $p\in U$. Then $\lv c_t(q)\rv<\zeta$ is also true for any $q$ and $t$ (where $q+s\in V'_p,0\le s\le t\le3d$ and $p\in U$), since this component can only decrease when we rotate the vector upwards. Let $\Gamma$ be the component of the intersection of the integral curve of $y$ with $U$ for which $p\in\Gamma$ (and so $\gamma|_{[0,t_0]}$ only maps to points of $V'_\Gamma$). Recall that $\delta$ is an upper bound on the length of $\Gamma$ and $r$ is the radius of the disks $V'_p$. By choosing $\delta$ and $r$ small enough, we can ensure that $\pi^{-1}(\Gamma)$ (where $\pi$ is the projection of $V'$) is in one Euclidean neighbourhood, hence so is $V'_\Gamma$. Now we can identify all tangent spaces in the points of $V'_\Gamma$, hence it makes sense to talk about the change of direction of the vector $c_t~(t\in[0,t_0])$ along the curve $\gamma$. By setting $\delta$ sufficiently small, we can make this change of direction arbitrarily small along $\gamma$; moreover, because of the compactness of $U$, this can be done universally for all integral curves of $\tilde v_t$ starting in $U$ and for $t\le 3d$.
The above considerations imply that the integral curves of $\tilde v_t=a_t+b_t+c_t$ are arbitrarily close to the integral curves of $a_t+b_t$ (for $t\le t_0$), thus it is enough to prove the claim with the component $c_t$ neglected, that is, under the assumption $\tilde v_t=a_t+b_t$. \begin{figure}[H] \begin{center} \centering\includegraphics[scale=0.1]{kepa3}\label{kepa3} \begin{changemargin}{2cm}{2cm} \caption{\hangindent=1.4cm\small Here we show $\pi^{-1}(\Gamma)$; we get the set $V'_\Gamma$ by taking the union of all vertical translates of this.} \end{changemargin} \vspace{-1.5cm} \end{center} \end{figure} We get the flow of $\tilde v_t$ by combining the flow of $\tilde v$ (the non-time-dependent vector field) with the unit downwards flow, in other words, flowing along $\tilde v_t$ is the same as flowing along $\tilde v$, except that in the meantime everything is flowing downwards with unit speed. Hence $\tilde v_t(\gamma(t))=0$ is equivalent to $\tilde v(\gamma(t)+t)=\ua$. The length of the vertical component $a_t(\gamma(t))$ being at most $1-\frac{\sqrt2}2$ is equivalent to the vertical component $a(\gamma(t)+t)$ of $\tilde v(\gamma(t)+t)$ being at least $\frac{\sqrt2}2$; the length of the horizontal component $b_t(\gamma(t))$ being more than $\frac{\sqrt2}2$ is equivalent to the length of the horizontal component $b(\gamma(t)+t)$ of $\tilde v(\gamma(t)+t)$ being more than $\frac{\sqrt2}2$.
Now the time the integral curve of $\tilde v$ starting at $p$ spends at points where the vertical component $a$ of the velocity vector is at least $\frac{\sqrt2}{2}$ before reaching the boundary of $\pi^{-1}(\Gamma)$ is clearly an upper bound for $t_1$. But this time cannot be more than the time it takes to get to the vertical level of the upmost point of $\pi^{-1}(\Gamma)$ on a curve for which the velocity vector always has vertical component at least $\frac{\sqrt2}2$. This is trivially less than $\sqrt2(\delta+r)$, hence $\sqrt2(\delta+r)$ is also an upper bound for $t_1$. \medskip\noindent\emph{Proof for $t_2$.}\enspace\ignorespaces The curve $\gamma$ cannot reach the boundary of $V'_\Gamma$ (where the vector field $\tilde v_t$ is $0$) later than the integral curve of $\tilde v$ starting at $p$ reaches the horizontal level of the boundary of $\pi^{-1}(\Gamma)$. We assumed that the horizontal component of $\tilde v$ is in the plane generated by (the parallel translates of) vectors of $\ua$ and $y$, hence the horizontal movement along integral curves of $\tilde v$ happens in the directions of (parallel translates of) vectors of $y$. The vectors of $y$ are tangent to $\Gamma$ by definition, therefore the length of the horizontal component of such an integral curve is smaller than $\delta+r$. Now the time the integral curve of $\tilde v$ starting at $p$ spends at points where the horizontal component $b$ of the velocity vector is more than $\frac{\sqrt2}{2}$ before reaching the horizontal level of the boundary of $\pi^{-1}(\Gamma)$ is clearly an upper bound for $t_2$. But this time cannot be more than the time it would take on a curve which always has velocity vectors with horizontal components more than $\frac{\sqrt2}2$. This is again less than $\sqrt2(\delta+r)$, hence $\sqrt2(\delta+r)$ is an upper bound for $t_2$ as well.
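Observe that both estimates are instances of the same elementary computation: a displacement whose vertical (resp. horizontal) extent is less than $\delta+r$, traversed while the corresponding component of the velocity vector has length at least $\frac{\sqrt2}2$, takes time less than $$\frac{\delta+r}{\sqrt2/2}=\sqrt2(\delta+r).$$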
\end{sprf} The claim we have just proved implies the inequality $t_0=t_1+t_2<2\sqrt2(\delta+r)$, hence for any integral curve $\gamma$ of $\tilde v_t$ starting in $U$ there is $0<t<2\sqrt2(\delta+r)$ for which $\gamma'(t)=0$. We can choose $\delta$ and $r$ such that $2\sqrt2(\delta+r)\le d$, which implies the same statement where $\gamma$ is an integral curve of $\hat v_t=g(t)\tilde v_t$ instead of $\tilde v_t$. But if $\gamma$ is an integral curve of $\hat v_t$, then $\gamma'(t)=0$ implies that the curve $\gamma$ always remains stationary after $t$, because the bump function $g$ was chosen such that the vectors of $\hat v_t$ that are far above have no effect on $\gamma$. We have thus proved that all integral curves of $\hat v_t$ that start in $U$ are constant after time $2\sqrt2(\delta+r)$. Now setting $d$ to be $2\sqrt2(\delta+r)$ we can even make the whole flow $\hat\Phi_t$ terminate after time $4\sqrt2(\delta+r)$. It is easy to see that for all $p\in P\times\R^1$ and $t\in\R_+$ we have $\lv\hat v_t(p)\rv\le\sqrt2$, thus $d(p,\hat\Phi_t(p))\le8(\delta+r)$ which is smaller than $\frac\varepsilon2$ if $\delta$ and $r$ are sufficiently small. \medskip\noindent\refstepcounter{step}\thestep\label{st3}. \emph{Moving upwards on $T$.}\medskip In the course of step \ref{st1} and step \ref{st2} we deformed the embedding of $M$ to a new one which we will denote by $j(M):=\hat\Phi_{2d}(M)$. The normal vector field $u$ was also deformed to a normal vector field $w$ on $j(M)$ which is vertically upwards in the complement of $T':=\hat\Phi_{2d}(T)$. Observe that the original vector field $u$ was horizontal on $H$, hence when we rotated it into $v$, the angle of $v(p)$ and $\ua(p)$ was $\beta(p)=\alpha_0$ for all $p\in H$. Hence by choosing the tubular neighbourhood $T$ small enough, we can ensure the inequality $\beta(p)\le2\alpha_0$ for all $p\in T$.
The deformations of steps \ref{st1} and \ref{st2} could only decrease this angle, thus the angle of $w(p)$ and $\ua(p)$ is again at most $2\alpha_0$ for all $p\in T'$. Now we extend $w$ to a vector field $\tilde w$ on $P\times\R^1$ using a closed tubular neighbourhood $W$ of $j(M)$ in exactly the same way as we did in the proof of proposition \ref{ct}. We denote the flow of $\tilde w$ by $\{\Psi_t\in\Diff(P\times\R^1)\mid t\in\R_+\}$ and, as in remark \ref{lctrmk}, we denote by $\{\tilde\Psi_t\in\Diff(P\times\R^1)\mid t\in\R_+\}$ the flow of the time-dependent vector field $$\tilde w_t\colon P\times\R^1\to T(P\times\R^1);~p\mapsto\tilde w(p+t)+\da~~~~(t\in\R_+).$$ By setting the angle $\alpha_0$ small enough, we can make the horizontal components of all vectors of $\tilde w$ arbitrarily small. The compactness of $W$ implies that there is an $s>0$ for which $\Psi_t(p)\notin W$ for all $t\ge s$ and $p\in j(M)$; the flow $\Psi_s$ thus turns the vector field vertical on the image of $T'$ as well, while moving all points arbitrarily close to the unit upwards flow. Hence combining it with the unit downwards flow to get $\tilde\Psi_t$, the distance $d(p,\tilde\Psi_t(p))$ can be made arbitrarily small for all $p\in P\times\R^1$ and $t\in\R_+$, in particular we can set it smaller than $\frac\varepsilon2$. \medskip Note that no step affected the achievements of the previous ones, thus we can define the desired isotopy $\varphi_t~(t\in[0,1])$ to apply first $\hat\Phi_t~(t\in[0,2d])$ and then $\tilde\Psi_t~(t\in[0,s])$. \end{prf} \subsection{Addenda} In this subsection we will prove some of the numerous extensions and corollaries of the local compression theorem \ref{lct}. The first one is called the multi-compression theorem and is as follows.
\begin{thm}\label{mct} Let $n,k,m$ be natural numbers such that $k\ge1$ and let $i\colon M^n\hookrightarrow P^{n+k}\times\R^m$ be the embedding of a compact manifold (possibly with boundary) equipped with $m$ pointwise independent normal vector fields $u_1,\ldots,u_m$. Then there is an (ambient) isotopy $\varphi_t~(t\in[0,1])$ of $i$ such that $\varphi_0=i$ and $d\varphi_1(u_1),\ldots,d\varphi_1(u_m)$ are vertical (i.e. $d\varphi_1(u_j)$ is the positive unit vector on the $j$-th coordinate line of $\R^m$ in $T(P\times\R^m)$). Moreover, for any $\varepsilon>0$ this $\varphi_t$ can be chosen such that for all $p\in P\times\R^m$ and $t\in[0,1]$ we have $d(p,\varphi_t(p))<\varepsilon$. \end{thm} \begin{prf} We proceed by induction on the number $m$. The $m=1$ case is given by proposition \ref{ct} and theorem \ref{lct}. Now suppose that we know the theorem for $m-1$ and want to prove it for $m$. We may assume by induction and lemma \ref{ctlemma} that the vector fields $u_1,\ldots,u_{m-1}$ are already vertical and $u_m$ is a unit vector field pointwise orthogonal to $Ti(M)$ and $u_1,\ldots,u_{m-1}$. Now if we put $Q:=P\times\R^1$ (where this $\R^1$ is the last real coordinate line), then the projection $\pr_Q|_{i(M)}\colon i(M)\to Q$ is an immersion because we projected along (the first) $m-1$ normal vector fields. A tubular neighbourhood of the image of this immersion can be lifted to an $(n+k+1)$-dimensional strip $S\subset P\times\R^m$ containing $i(M)$; the fibres of $S$ are everywhere parallel to $Q$ and $u_m$ is tangent to $S$. Let $U_j\subset V_j\subset W_j\subset S$ (where $j$ is in some finite index set) be such that $U_j\approx\tightoverset{_\circ~~~~~~~}{D^{n+k+1}}$, $V_j$ is the $\varepsilon$-neighbourhood of $U_j$ and $W_j$ is the $\varepsilon$-neighbourhood of $V_j$ in $S$ (for some $\varepsilon>0$) and the projection $\pr_Q|_{W_j}$ is an embedding.
Now we can apply the local compression theorem \ref{lct} inside $W_j\approx\R^{n+k+1}$, composed with a bump function that is $0$ outside $V_j$, to obtain an isotopy of $i$ that keeps the image of $M$ inside the strip $S$ and only moves the points inside $W_j$. By applying this process for all indices $j$, we obtain an isotopy of $i$ that moves $i(M)$ inside the strip $S$ and turns the vector field $u_m$ vertical. We can keep the vector fields $u_1,\ldots,u_{m-1}$ (which are orthogonal to the fibres of $S$) vertical throughout this process, hence the theorem is proved. \end{prf} \begin{crly}\label{ict} In theorem \ref{mct} we can replace the word ``embedding'' by ``immersion'' and the word ``isotopy'' by ``regular homotopy''. \end{crly} \begin{prf} If $i\colon M\looparrowright P\times\R^m$ is an immersion, then fix any embedding $j\colon M\hookrightarrow\R^l$ (where $l$ is a large number) and obtain an embedding $i\times j\colon M\hookrightarrow P\times\R^{m+l}$. We can lift the normal vector fields $u_1,\ldots,u_m$ to this embedding and define the normal vector fields $u_{m+1},\ldots,u_{m+l}$ to be everywhere vertical. We can apply the multi-compression theorem \ref{mct} to this embedding to obtain an isotopy of $i\times j$, which composed with the projection to $P\times\R^m$ is a regular homotopy of $i$. \end{prf} \begin{rmk} It follows from the proofs that the relative versions of the statements above (and below) are also true, that is, if the vector fields are already vertical on a neighbourhood of a compact subset $C\subset i(M)$, then the isotopy (or regular homotopy) $\varphi_t~(t\in[0,1])$ is fixed on a neighbourhood of $C$. \end{rmk} The following is the stratified version of the multi-compression theorem, which was proved in \cite{hosszu}. \begin{thm}\label{sct} Let $n,k,m$ be natural numbers such that $k\ge1$ and let $f\colon M^n\to P^{n+k}\times\R^m$ be the map of a compact manifold (possibly with boundary).
Suppose that $M$ is equipped with a finite stratification by submanifolds $S_i~(i=1,\ldots,r)$, where $M=S_1\cup\ldots\cup S_r$ and $S_{i-1}\subset\overline{S_i}$ for all $i$, moreover, the restriction $f|_{S_i}$ is an embedding for all $i$ and $f(S_i)\cap f(S_j)=\varnothing$ for all $i\ne j$. Let $u_1,\ldots,u_m$ be pointwise independent vector fields along $f(M)$ transverse to the stratification, that is, sections of the bundle $T(P\times\R^m)|_{f(M)}$ such that for all $i$ the vector fields $u_1|_{f(S_i)},\ldots,u_m|_{f(S_i)}$ are nowhere tangent to $f(S_i)$. Then there is a diffeotopy $\varphi_t~(t\in[0,1])$ of $P\times\R^m$ such that $\varphi_0=\id_{P\times\R^m}$ and $d\varphi_1(u_1),\ldots,d\varphi_1(u_m)$ are vertical. Moreover, for any $\varepsilon>0$ this $\varphi_t$ can be chosen such that for all $p\in P\times\R^m$ and $t\in[0,1]$ we have $d(p,\varphi_t(p))<\varepsilon$. \end{thm} \begin{prf} We proceed by induction on the number $r$ of strata. The $r=1$ case is theorem \ref{mct}. Now suppose that we know the theorem for $r-1$ and want to prove it for $r$. Now $\overline{S_{r-1}}$ is stratified by $r-1$ submanifolds, hence by the induction hypothesis we can deform the map $f|_{\overline{S_{r-1}}}$ to turn the vector fields $u_1|_{f(\overline{S_{r-1}})},\ldots,u_m|_{f(\overline{S_{r-1}})}$ vertical by a diffeotopy of $P\times\R^m$. This way we obtain a new map $g\colon M\to P\times\R^m$ with the same properties as $f$ and new vector fields $v_1,\ldots,v_m$ along $g(M)$ that are transverse to the stratification and vertical on $g(\overline{S_{r-1}})$. Of course, we can homotope the vector fields $v_1,\ldots,v_m$ to be vertical on a small neighbourhood $U\subset g(M)$ of $g(S_{r-1})$ as well.
Now we can choose an even smaller neighbourhood $V\subset g(M)$ of $g(S_{r-1})$ such that $\overline V\subset U$ and apply the relative version of the multi-compression theorem \ref{mct} to $g(M)\setminus V$ to turn the vector fields $v_1|_{g(M)\setminus V},\ldots,v_m|_{g(M)\setminus V}$ vertical by only moving the manifold in the complement of $V$ and leaving a neighbourhood of $\partial(g(M)\setminus V)$ fixed. This way the vector fields become vertical on the whole manifold. \end{prf} The last statement in this section is a simple addendum to the multi-compression theorem, which was proved in \cite{nszt}. \begin{prop}\label{ct+} If in theorem \ref{mct} we do not require that the points stay in their $\varepsilon$-neighbourhoods, then the isotopy $\varphi_t~(t\in[0,1])$ can be chosen such that for any point $p\in M$ and any time $t_0\in[0,1]$ the tangent vector of the curve $t\mapsto\varphi_t(p)$ at the point $\varphi_{t_0}(p)$ is not tangent to $\varphi_{t_0}(M)$. \end{prop} \begin{prf} We will slightly modify the steps of the induction that proves the multi-compression theorem \ref{mct}. In the starting step we just use the vector field $\tilde v$ constructed in the proof of proposition \ref{ct} (instead of the one in the local compression theorem \ref{lct}); now the derivative at $t_0$ of each curve $t\mapsto\varphi_t(p)$ is just the image of the normal vector $v(p)$ under the diffeomorphism $\varphi_{t_0}$, hence it is independent of $T_{\varphi_{t_0}(p)}\varphi_{t_0}(M)$. Note that since the vector field we used in the first step is not time-dependent, the image of $M$ keeps flowing with unit speed upwards in the direction of the first real coordinate of $P\times\R^m$, which we know is now independent of the tangent spaces of the image of $M$. Thus by the compactness of $M$, the minimal length of a vector which, added to this upwards unit vector, yields a vector tangent to the image of $M$ is bounded from below.
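In the starting step, where $\varphi_t=\Phi_{ts}$ as in the proof of proposition \ref{ct}, this claim is the following one-line computation (using that a flow carries its generating vector field into itself, i.e. $d\Phi_t(\tilde v(p))=\tilde v(\Phi_t(p))$, and that $\tilde v|_{i(M)}=v$): $$\frac{d}{dt}\Big|_{t=t_0}\varphi_t(p)=s\cdot\tilde v(\varphi_{t_0}(p))=s\cdot d\varphi_{t_0}(\tilde v(p))=s\cdot d\varphi_{t_0}(v(p)),$$ and since $v(p)\notin T_pi(M)$ and $\varphi_{t_0}$ is a diffeomorphism, this vector is indeed not in $T_{\varphi_{t_0}(p)}\varphi_{t_0}(M)$.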
In the induction step we are bound to use the time-dependent vector fields defined in the proof of theorem \ref{lct}, but by reparameterising time for these, we can make them arbitrarily small. Hence if we add them to the upwards unit vectors remaining from the first step, we get that the derivatives of the curves $t\mapsto\varphi_t(p)$ are again not tangent to the image of $M$. \end{prf} \begin{rmk} Many of the versions and addenda of the compression theorem (such as corollary \ref{ict} or proposition \ref{ct+}) can be obtained from Hirsch--Smale immersion theory. But for example the local versions (to my knowledge) do not follow from such general theories. \end{rmk} \refstepcounter{kaki} \section{Virtual complexes}\label{virtc} Here we introduce the notion of virtual complexes based on \cite{hosszu}. In short, a virtual complex is a ``CW-complex'' where the gluing maps of the cells are only defined in a stable sense. First we recall the definition of a standard gluing procedure that results in a CW-complex, and then we give a similar definition of a ``stable gluing procedure'' that results in a virtual complex. This way the analogies and differences between usual and virtual CW-complexes will be easy to see. \begin{defi} Let $\{(A_i,B_i)\mid i=0,1,\ldots\}$ be a sequence of CW-pairs and suppose that (each component of) the pair $(A_i,B_i)$ is $c_i$-connected where $c_i\to\infty$ for $i\to\infty$. \begin{itemize} \item[(0)] Put $X_0:=A_0$ and fix a map $\rho_1\colon B_1\to X_0$. \item[(1)] Put $X_1:=X_0\usqcup{\rho_1}A_1$ and fix a map $\rho_2\colon B_2\to X_1$. \item[($i$)] Put $X_i:=X_{i-1}\usqcup{\rho_i}A_i$ and fix a map $\rho_{i+1}\colon B_{i+1}\to X_i$. \end{itemize} Using this recursion we define the space $X:=\liminfty{i}X_i$.
\end{defi} \begin{ex} If we set in the above definition $A_i$ to be a disjoint union $\underset{j}{\bigsqcup}D_j^i$ (where $D_j^i\approx D^i$ for all $j$) and $B_i$ to be its boundary $\underset j\bigsqcup S_j^{i-1}$, then we get the standard definition of CW-complexes. \end{ex} \begin{defi}\label{vc} Let $\{(A_i,B_i)\mid i=0,1,\ldots\}$ be a sequence of CW-pairs and suppose that (each component of) the pair $(A_i,B_i)$ is $c_i$-connected where $c_i\to\infty$ for $i\to\infty$. \begin{itemize} \item[(0)] Put $X_0:=A_0$ and fix a stable map $\rho_1\colon B_1\nrightarrow X_0$, that is, for some $n_1\in\N$ a map $S^{n_1}B_1\to S^{n_1}X_0$ denoted by $S^{n_1}\rho_1$ (although $\rho_1$ may not exist). \item[(1)] Put $S^{n_1}X_1:=S^{n_1}X_0\usqcup{S^{n_1}\rho_1}S^{n_1}A_1$ (although $X_1$ may not exist) and fix a stable map $\rho_2\colon B_2\nrightarrow X_1$, that is, for some $n_2\ge n_1$ a map $S^{n_2}B_2\to S^{n_2}X_1$ denoted by $S^{n_2}\rho_2$ (although $\rho_2$ may not exist). \item[($i$)] Put $S^{n_i}X_i:=S^{n_i}X_{i-1}\usqcup{S^{n_i}\rho_i}S^{n_i}A_i$ (although $X_i$ may not exist) and fix a stable map $\rho_{i+1}\colon B_{i+1}\nrightarrow X_i$, that is, for some $n_{i+1}\ge n_i$ a map $S^{n_{i+1}}B_{i+1}\to S^{n_{i+1}}X_i$ denoted by $S^{n_{i+1}}\rho_{i+1}$ (although $\rho_{i+1}$ may not exist). \end{itemize} Using this recursion we define the symbol $X:=\liminfty{i}X_i$. \end{defi} \begin{prop} Although the limit space $X$ does not necessarily exist, for any $r$ there is an $n(r)$ such that for all $n\ge n(r)$ the $(n+r)$-homotopy type of $S^nX$ is well-defined. \end{prop} \begin{prf} If $i(r)$ is such a number that $c_i>r$ for all $i\ge i(r)$, then we can set $n(r):=n_{\max(i(r),c_{i(r)})+1}$, hence the spaces $S^nX_j$ exist for all $n\ge n(r)$ and $j=0,\ldots,i(r)$; we can define this $S^nX_{i(r)}$ to be the $(n+r)$-type of $S^nX$. 
This is well-defined, as for any $i\ge i(r)$ and $m\ge n$, if the $(m+r)$-type of $S^mX_j$ is well-defined for $j=0,\ldots,i$, then the $(m+r)$-type of $S^mX_i$ is equal to that of $S^{m-n}S^nX_{i(r)}$. \end{prf} \begin{crly} \begin{itemize} \item[] \item[\rm{(1)}] For any virtual complex $X$ the space $\Gamma X=\Omega^\infty S^\infty X$ is well-defined. \item[\rm{(2)}] The suspension functor has an inverse for virtual complexes, i.e. for any virtual complex $X$ there is a virtual complex $S^{-1}X$. \item[\rm{(3)}] For any virtual complex $X$ and topological space $Y$ the stable homotopy classes $\{Y,X\}$ are well-defined. \end{itemize} \end{crly} A direct application of the notion of virtual complexes is the following. \begin{crly}\label{gammatnu} If $\nu$ is a virtual vector bundle over a space $A=\liminfty iA_i$, which is the union of compact subspaces $A_1\subset A_2\subset\ldots$, then $T\nu$ exists as a virtual complex and so the space $\Gamma T\nu$ exists. \end{crly} \begin{prf} If $\nu$ is a virtual bundle of dimension $k$, then it is an equivalence class of formal differences $\alpha-\beta$ where $\alpha,\beta$ are vector bundles over $A$ and $\dim\alpha-\dim\beta=k$. If $A$ is compact, then $\nu$ has a representative of the form $\alpha-\varepsilon^m$ for some $m\in\N$. The Thom space of a virtual bundle of this form can be defined as $S^{-m}T\alpha$ which makes sense as a virtual complex, hence $\Gamma T\nu:=\Gamma S^{-m}T\alpha$ is a well-defined space. Now even if $A$ is not compact, $\nu$ can be represented by a sequence of formal differences $\alpha_i-\varepsilon^{m_i}$ where $\alpha_i$ is a vector bundle over $A_i$ of dimension $m_i+k$ and $\alpha_i|_{A_{i-1}}=\alpha_{i-1}\oplus\varepsilon^{m_i-m_{i-1}}$. This way $\Gamma T\nu|_{A_i}$ exists for all $i$ and contains $\Gamma T\nu|_{A_{i-1}}$, thus we can define $\Gamma T\nu$ as $\liminfty i\Gamma T\nu|_{A_i}$. 
\end{prf} \begin{rmk}\label{spec} We should point out the connection between the notion of virtual complexes and the notion of CW-spectra. In order to obtain a spectrum from a virtual complex, one has to choose finite dimensional approximations of the blocks $A_i$ in definition \ref{vc} attached stably to the previously obtained spaces $X_{i-1}$. Two different sequences of approximations give different spectra, but these are naturally equivalent; in particular they define the same extraordinary cohomology theory. So the advantage of the notion of the virtual complex is that one does not have to choose data (the sequence of approximations) that finally turn out to be unimportant. On the other hand, a virtual complex has a filtration by the $X_i$ in the construction, so one can say that a virtual complex defines an equivalence class of spectra without pointing out a particular spectrum of this equivalence class. \end{rmk} \newpage
\section{Introduction} A network with quantum resources has benefits in both computing enabled by quantum computation~\cite{arute2019quantum,zhong2020quantum,zhong2021phase,wu2021strong,liu2021rigorous,zhou2022experimental} and secure communication enabled by quantum key distribution~\cite{gisin2002quantum, bennett1992communications}. Apart from quantum key distribution, in the realm of quantum communication quantum secret sharing (QSS)~\cite{hillery1999qss,cleve1999how,Gu2021differential,jia2021differential} is also important in constructing a secure quantum network, with applications ranging from secure money transfer to multiparty quantum computation. Secret sharing is a key cryptographic primitive underlying a secure network. It was first conceived independently by Blakely~\cite{blakley1979safeguarding} and Shamir~\cite{shamir1979how}. It takes both the reliability and the secrecy of information into account, with practical applications ranging from the management of cryptographic keys and decentralized voting to serving as a component of secure multiparty computation. In secret sharing, a designated party, called the dealer, divides the secret into shares and distributes them to the players in such a way that only authorized subsets of players can reconstruct the secret while all other subsets gain nothing whatsoever. The dealer can select a threshold size for authorized subsets. For instance, in an $(n,k)$-threshold scheme, any $k$ $(k\le n)$ of the $n$ players can collaborate to recover the secret, while any subset of fewer than $k$ players remains ignorant. Classical secret sharing is no longer secure in the face of eavesdroppers equipped with quantum computers. Fortunately, such threats can be overcome by resorting to quantum technology.
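The $(n,k)$-threshold idea can be illustrated with a minimal sketch of Shamir's scheme, in which shares are points on a random polynomial over a prime field and any $k$ shares recover the secret by Lagrange interpolation; the prime and the secret below are illustrative choices, not values from any protocol discussed here:

```python
import random

P = 2**31 - 1  # a prime larger than any secret we intend to share

def split(secret, n, k):
    """Shamir (n,k)-threshold: shares are points on a random degree-(k-1) polynomial."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from any k shares."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = split(123456789, n=5, k=3)
assert reconstruct(shares[:3]) == 123456789   # any 3 of the 5 shares suffice
assert reconstruct(shares[2:]) == 123456789
```

Fewer than $k$ shares leave the secret information-theoretically undetermined, which is the property the quantum protocols below must preserve against quantum adversaries.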
One can apply quantum key distribution links sharing secure keys between two legitimate users~\cite{bennett1984proceedings,ekert1991quantum,lo2012measurement,lucamarini2018overcoming,liu2021homodyne,xie2022breaking,zeng2022mode} to establish point-to-point secret keys, but this restricts the efficiency in a fully connected quantum network. Alternatively, multipartite entangled states---particularly the Greenberger-Horne-Zeilinger (GHZ) entangled states~\cite{greenberger1989bell,mermin1990extreme}---can be used to realize QSS, achieving an advantage over the repetitive use of quantum key distribution links~\cite{walk2020sharing}. The first QSS protocol was proposed by Hillery \mbox{$et$ $al$. } using the GHZ state for three participants~\cite{hillery1999qss}. This QSS protocol is not secure in the face of participant attacks~\cite{qin2007cryptanalysis}. Since this protocol, progress in QSS with multipartite entanglement has been made in both protocols~\cite{li2004efficient,markham2008graph,kogias2017unconditional} and experiments~\cite{chen2005experimental,gaertner2007experimental,peng2018qss} over the past two decades. The problem is that directly preparing and distributing multipartite states is challenging in practice, which limits key rates and transmission distances. Therefore, a protocol distributing postselected GHZ entanglement was proposed to avoid the requirement of preparing entanglement beforehand~\cite{fu2015long}. This protocol is measurement-device-independent (MDI), which means it is secure against all detection-side attacks. Although the MDI protocol needs no entanglement resource, it is limited for an increasing number of users, since its efficiency decays exponentially. In addition, the security of the QSS protocol in~\cite{fu2015long} is not completely analyzed, because participant attacks are not considered.
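The mechanism behind GHZ-based QSS can be checked in a few lines: the three-qubit GHZ state is a $+1$ eigenstate of $X\otimes X\otimes X$, so the product of the three $X$-basis outcomes is fixed while each local outcome alone is uniformly random. A minimal numpy sketch of this fact (it omits the $Y$-basis rounds and the basis announcement of the full Hillery \mbox{$et$ $al$. } protocol):

```python
import numpy as np

# GHZ state (|000> + |111>)/sqrt(2): the resource of the GHZ-based QSS protocols
ghz = np.zeros(8)
ghz[0] = ghz[7] = 1 / np.sqrt(2)

X = np.array([[0, 1], [1, 0]])
XXX = np.kron(X, np.kron(X, X))

# GHZ is a +1 eigenstate of X (x) X (x) X: the product of the three X-basis
# outcomes is always +1, so any two parties' bits jointly determine the third's.
assert np.allclose(XXX @ ghz, ghz)

# Yet each party alone sees a uniformly random outcome: the probability that
# the first qubit is found in |+> is exactly 1/2.
plus = np.array([1, 1]) / np.sqrt(2)
proj_plus = np.outer(plus, plus)
P_alice_plus = ghz @ np.kron(proj_plus, np.eye(4)) @ ghz
assert np.isclose(P_alice_plus, 0.5)
```

Hence two players can jointly infer the dealer's bit, but neither can alone, which is exactly the correlation structure secret sharing needs.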
In this work, we propose an efficient and practical MDI-QSS protocol based on MDI quantum communication protocols~\cite{lo2012measurement,fu2015long} and the spatial multiplexing and adaptive operations used in the all-photonic quantum repeater~\cite{azuma2015all} and adaptive MDI quantum key distribution~\cite{azuma2015allqkd}. Our MDI-QSS protocol can break key-rate bounds~\cite{pirandola2017fundamental} over a network with as many as ten communication parties when equipped with the GHZ analyzer composed of linear optical elements~\cite{pan1998greenberger}. Compared with other protocols, our work improves the secret key rate by more than two orders of magnitude and has a longer transmission distance within an experimentally feasible parameter regime. On the other hand, we analyze the security of our protocol in the composable framework considering participant attacks. Based on the security analysis, we also evaluate the performance of our protocol in the finite-size regime. Furthermore, we explore applying our QSS protocol as a subroutine to digital signatures, a vital primitive in protecting the integrity of data against forgery. Digital signatures built on our MDI-QSS outperform other quantum digital signature schemes, with a more than $10^7$-fold enhancement in signature rate. We believe our protocol has the potential to be an indispensable building block for quantum networks. \section{Quantum Secret Sharing Protocol}\label{quantum_com} Here we consider an $n$-party QSS protocol where the $i$th user is denoted by $A_i$ $(i=1,...,n)$ and the central relay is denoted by node $C$. We designate $A_1$ as the dealer dividing and distributing the secret among the $n-1$ players ($A_2,...,A_{n}$) and consider an ($n-1,n-1$)-threshold QSS protocol. Before transmitting quantum signals, the dealer $A_1$ establishes a bipartite key with each player to authenticate the classical channel and a joint key as a seed for privacy amplification.
\begin{itemize} \item[$(i)$] Each user generates $M$ single-photon states that are randomly selected from the eigenstates of the $Z$ and $X$ bases. For instance, one selects from $\{\ket{H},\ket{V},(\ket{H}+\ket{V})/\sqrt{2},(\ket{H}-\ket{V})/\sqrt{2}\}$ when using polarization encoding. The user then transmits the $M$ single-photon states to node $C$ simultaneously using spatial multiplexing. \item[$(ii)$] Node $C$ performs QND measurements to confirm the arrival of the single-photon states from $(A_1,...,A_{n})$. \item[$(iii)$] After the QND measurements, the confirmed photons, one from each user, form a group and are routed to the GHZ analyzer via optical switches. Node $C$ then performs a GHZ projection measurement on the group. Each user should successfully transmit at least one single photon through the QND measurements; otherwise the trial is considered failed. \item[$(iv)$] Node $C$ announces the group information and the GHZ projection results. Each $A_i$ keeps the information of states that are successfully projected onto the GHZ state and discards the rest. \item[$(v)$] All $n-1$ players ($A_2,...,A_{n}$) announce their preparing bases for the remaining trials in any order. If the preparing bases of all $n-1$ players, or of any single player together with the complementary subset of the remaining $n-2$ players, are consistent with the dealer's choice, this round is kept. The process is repeated until enough rounds have been kept for key generation and parameter estimation. \item[$(vi)$] The dealer randomly chooses $m$ rounds from the data in the $X$ basis for key generation, while $k$ rounds from the data in the $X$ and $Z$ bases are used for parameter estimation. Then the dealer calculates the correlation between himself and each single player. If the correlations are below a certain level, the protocol aborts.
\item[$(vii)$] If the correlation test passes, the dealer obtains the raw key and proceeds with error correction, leaking at most leak$_{\text{EC}}$ bits of information. To verify correctness, all $n$ parties compute and compare a hash of length $\log_2(1/\epsilon_{c})$ bits by applying a random universal$_2$ hash function to the raw keys. The protocol aborts if the hash of $A_1$ does not coincide with that of the $n-1$ players. If the error correction passes, the dealer conducts privacy amplification using universal$_2$ hashing and obtains the final keys. \end{itemize} \section{Security analysis}\label{secure_ana} The security analysis of QSS is quite complex due to the existence of inner malicious parties exploiting the order of announcing the measurement bases and outcomes~\cite{walk2020sharing}. The original QSS protocol~\cite{hillery1999qss} did not take this problem into consideration and can be completely broken~\cite{karlsson1999quantum,qin2007cryptanalysis}. In Ref.~\cite{karlsson1999quantum}, the dishonest player (say Charlie) intercepts all the GHZ photons from the dealer and establishes Bell entanglement between himself and the other player. Once Charlie obtains knowledge of the other player's measurement bases, he can learn their measurement outcomes as well through the Bell entanglement. Furthermore, Charlie can ensure the round will be kept if the dealer chooses the same basis as him, and can recreate the dealer's information. As a result, the whole protocol is broken while Charlie remains undetected. Qin \mbox{$et$ $al$. } provided a general result on the necessary and sufficient conditions under which Charlie can attain all the information without being detected~\cite{qin2007cryptanalysis}. Fu \mbox{$et$ $al$. 
} proposed the secret key rate of MDI-QSS for the first time~\cite{fu2015long,lo2012measurement,braustein2012side,maneva2002improved,dur1999separability,bennett1996mixed,gottesman2004security,azuma2015allqkd} \begin{equation}\label{qssrate} R_{\text{QSS}}=Q_X\left[ 1-h(E_Z)-h(E_X)\right] , \end{equation} where $Q_X$ is the gain of the $X$ basis, i.e., the probability of a successful GHZ-state projection when the single photons are prepared in the $X$ basis, and $E_Z$ ($E_X$) is the bit (phase) error rate. $h(x)=-x\log_2x-(1-x)\log_2(1-x)$ is the binary Shannon entropy function. To address participant attacks, Kogias \mbox{$et$ $al$. } proposed to treat the measurements announced by the players as inputs or outputs of an uncharacterized measuring device and the dealer as a trusted party with trusted devices~\cite{kogias2017unconditional}. Then the security of QSS can be connected with one-sided device-independent quantum key distribution, which has been proven unconditionally secure. Similarly, Refs.~\cite{williams2019quantum,grice2019quantum} applied the security proof of standard quantum key distribution with trusted devices to both discrete- and continuous-variable QSS. Walk \mbox{$et$ $al$. } stated that the essential part of the security proof in Ref.~\cite{kogias2017unconditional} was excluding the potential malicious parties from parameter estimation~\cite{walk2020sharing}. As a comparison, in Ref.~\cite{williams2019quantum}, the dealer randomly selects a set of potentially malicious parties and includes them in parameter estimation; however, the potentially malicious parties are forced to make their announcements first. In our QSS protocol, we follow Refs.~\cite{kogias2017unconditional,walk2020sharing}, as shown in steps $(v)$ and $(vi)$ of our protocol, to prevent dishonest participants from attacking. We introduce some useful definitions in the following description.
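Eq.~(\ref{qssrate}) is straightforward to evaluate numerically; a small sketch, with parameter values that are illustrative and not taken from any experiment:

```python
import math

def h(x):
    """Binary Shannon entropy h(x) = -x log2(x) - (1-x) log2(1-x)."""
    return 0.0 if x in (0.0, 1.0) else -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def qss_rate(Q_X, E_Z, E_X):
    """Asymptotic MDI-QSS key rate R = Q_X [1 - h(E_Z) - h(E_X)]."""
    return Q_X * (1 - h(E_Z) - h(E_X))

# illustrative example: gain 1e-3, 1% bit error rate, 2% phase error rate
rate = qss_rate(Q_X=1e-3, E_Z=0.01, E_X=0.02)
assert 0 < rate < 1e-3  # error correction and privacy amplification cost key
```

Note that the rate vanishes once $h(E_Z)+h(E_X)$ reaches 1, so the tolerable error rates shrink quickly as the two error terms grow together.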
In general, the dealer's final key \textbf{S} can be quantum mechanically correlated with a quantum state held by the adversary, and such a situation is described by the classical-quantum state \begin{equation} \rho_{\textbf{S},EU_j}=\sum_{\textbf{S}}p(\textbf{S})\ket{\textbf{S}}\bra{\textbf{S}}\otimes\rho^{\textbf{S}}_{E,U_j}, \end{equation} where the sum is over all possible strings and $\rho^{\textbf{S}}_{E,U_j}$ is the joint state of the eavesdropper and the $j$th untrusted subset given \textbf{S}. By an untrusted subset we mean a subset formed by any $n-2$ players; thus we have $n-1$ untrusted subsets in total in our QSS protocol. Ideally, a QSS protocol is secure if it is correct and secret. Correctness means the dealer's bit string \textbf{S} is identical to the bit string $\textbf{S}_{\text{player}}$ recreated by the players, i.e., $\textbf{S}=\textbf{S}_{\text{player}}$. Secrecy requires $\rho_{\textbf{S},EU_j}=\sum_{\textbf{S}}\frac{1}{|\textbf{S}|}\ket{\textbf{S}}\bra{\textbf{S}}\otimes\sigma_{EU_j}$, which means the joint system of the eavesdropper and the $j$th untrusted subset is decoupled from the dealer. However, these two conditions can never be met perfectly. In practice, we call a QSS protocol $\epsilon_c$-correct if \begin{equation} \text{Pr}\left( \textbf{S}\neq\textbf{S}_{\text{player}}\right) \le\epsilon_c. \end{equation} We call a QSS protocol $\epsilon_s$-secret if \begin{equation} \max_j\left\lbrace p_{\text{pass}}D\left( \rho_{\textbf{S},EU_j},\sum_{\textbf{S}}\frac{1}{|\textbf{S}|}\ket{\textbf{S}}\bra{\textbf{S}}\otimes\sigma_{EU_j}\right) \right\rbrace \le\epsilon_s, \end{equation} where $D(\cdot,\cdot)$ is the trace distance and $p_{\text{pass}}$ is the probability that the protocol does not abort. The maximization is over all $n-1$ untrusted subsets, since the dealer must take worst-case estimates for the secrecy.
A QSS protocol is called $\epsilon_{sec}$-secure with $\epsilon_{sec}\ge\epsilon_{s}+\epsilon_{c}$ if it is $\epsilon_c$-correct and $\epsilon_s$-secret. Similar to quantum key distribution~\cite{tomamichel2012tight}, the extractable amount of key $l$ for an $\epsilon_c$-correct and $\epsilon_s$-secret QSS is \begin{equation} l=\min_jH^{\epsilon}_{\text{min}}(\textbf{X}|EU_j)-\text{leak}_\text{EC}-\log_2\frac{1}{\epsilon_c\bar{\epsilon}^2}+2, \end{equation} where $H^{\epsilon}_{\text{min}}(\textbf{X}|EU_j)$ is the conditional smooth min-entropy characterizing the average probability that the eavesdropper and the dishonest parties guess the dealer's raw key \textbf{X} correctly using an optimal strategy, and leak$_{\text{EC}}$ is the amount of information leaked during error correction. $\epsilon$ and $\bar{\epsilon}$ are positive constants proportional to $\epsilon_{s}$. For a realistic scenario, the computable key length of QSS is \begin{equation}\label{QSSlength} \begin{split} l=m&\left[ q-\max_jh( E_Z^{AA_j}+\mu( E_Z^{AA_j},\epsilon'))\right]\\ &-\text{leak}_\text{EC}-\log_2\frac{4}{\epsilon_c\bar{\epsilon}^2}, \end{split} \end{equation} where $\mu(\lambda,\epsilon)=\frac{\frac{(1-2\lambda)AG}{m+k}+\sqrt{\frac{A^2G^2}{(m+k)^2}+4\lambda(1-\lambda)G}}{2+2\frac{A^2G}{(m+k)^2}}$, with $\lambda$ being the error rate observed in parameter estimation, $A=\max\{m,k\}$ and $G=\frac{m+k}{mk}\ln\frac{m+k}{2\pi mk\lambda(1-\lambda)\epsilon^2}$. $E_Z^{AA_j}$ is the marginal error rate of the correlation test. We give a full proof and analysis of the extractable key length in Appendix~\ref{securityproof}. \section{Performance}\label{performance_qcom} In this section, we evaluate the performance of our QSS protocol. We introduce a benchmark used in our investigation and analyze the performance of our protocol in both the asymptotic and finite-size regimes.
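The finite-size key length of Eq.~(\ref{QSSlength}) can be evaluated directly from the definitions of $\mu$, $A$ and $G$ above. In the following sketch the prefactor $q$ is set to 1 and all numerical values (block sizes, error rates, security parameters) are illustrative assumptions, not the paper's:

```python
import math

def binary_entropy(x):
    return 0.0 if x in (0.0, 1.0) else -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def mu(lam, eps, m, k):
    """Finite-size deviation term mu(lambda, eps) with A = max{m,k} and
    G = ((m+k)/(mk)) ln((m+k) / (2 pi m k lambda (1-lambda) eps^2))."""
    A = max(m, k)
    G = (m + k) / (m * k) * math.log(
        (m + k) / (2 * math.pi * m * k * lam * (1 - lam) * eps**2))
    s = m + k
    return ((1 - 2 * lam) * A * G / s
            + math.sqrt(A**2 * G**2 / s**2 + 4 * lam * (1 - lam) * G)) \
        / (2 + 2 * A**2 * G / s**2)

def key_length(m, k, q, E_Z_worst, leak_EC, eps_c, eps_bar, eps_prime):
    """Extractable key length; E_Z_worst stands for the largest marginal
    error max_j E_Z^{AA_j} observed in the correlation test."""
    return m * (q - binary_entropy(E_Z_worst + mu(E_Z_worst, eps_prime, m, k))) \
        - leak_EC - math.log2(4 / (eps_c * eps_bar**2))

# illustrative parameters: 10^6 key rounds, 10^5 test rounds, 1% error
l = key_length(m=10**6, k=10**5, q=1.0, E_Z_worst=0.01,
               leak_EC=1.1 * binary_entropy(0.01) * 10**6,
               eps_c=1e-15, eps_bar=1e-10, eps_prime=1e-10)
assert l > 0
```

The deviation term $\mu$ penalizes small test sets: shrinking $k$ inflates $G$ and hence the entropy argument, so the extractable length drops accordingly.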
At the end, we utilize our QSS as a key generation solution for an essential cryptographic primitive---digital signatures---and investigate the signature rate of signing a document. \subsection{Asymptotic performance of MDI-QSS}\label{performance} In the asymptotic limit, we follow the key rate formula presented in~\cite{fu2015long}. QSS utilizes the data of the $X$ basis to share secret keys, and we can give the key rate expression as Eq.~(\ref{qssrate}). The gain $Q_X$ is defined as the efficiency of successfully generating postselected GHZ entanglement when preparing single photons in the $X$ basis. Specifically, we have $Q_{X}=\frac{\bar{N}}{M}$, where $\bar{N}$ is the average number of successful GHZ projections formed by photons using $M$-fold multiplexing. If we denote the total efficiency of both the GHZ projection and the channel from any $i$th user to the central node as $\eta_{tot}$, and $M$-fold multiplexing is used, then $\bar{N}\sim M\eta_{tot}$. Therefore, we have $Q_X\sim\eta_{tot}$. The approximate relation becomes an equality, $Q_X=\eta_{tot}$, in the asymptotic limit ($M\rightarrow\infty$). We prove this equality for $n=3$ in Appendix~\ref{derivation}. To guarantee that more than one entangled state is generated on average, the multiplexing number should satisfy $M\ge \eta_{tot}^{-1}$, which implies that $\bar{N}\sim M\eta_{tot}\ge1$. In this simulation, we use the efficiency $\eta_{\text{sps}}$ to describe the probability of the single-photon source generating single photons and set $\eta_{\text{sps}}=0.9$~\cite{christensen2013detection}. We consider the GHZ analyzer based on linear optical elements~\cite{pan1998greenberger}, capable of identifying two of the $n$-particle GHZ states. We present the detailed working of the analyzer in Appendix~\ref{GHZanalyzer}.
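The multiplexing argument above ($\bar N\sim M\eta_{tot}$, hence $Q_X\to\eta_{tot}$ for large $M$) can be checked with a quick Monte Carlo sketch. The model below, in which each photon independently survives with probability $\eta_{tot}$ and a GHZ attempt needs one confirmed photon from every user, is a simplifying assumption:

```python
import random

def average_gain(M, eta_tot, n, trials=2000):
    """Monte Carlo sketch: each of n users multiplexes M photons, each
    surviving channel + QND with probability eta_tot; the number of GHZ
    attempts per trial is the minimum number of confirmed photons over users."""
    total = 0
    for _ in range(trials):
        arrivals = [sum(random.random() < eta_tot for _ in range(M))
                    for _ in range(n)]
        total += min(arrivals)  # matched groups routed to the GHZ analyzer
    return total / (trials * M)  # empirical Q_X = N_bar / M

# with multiplexing M well above 1/eta_tot, the gain approaches eta_tot
Q_X = average_gain(M=1000, eta_tot=0.05, n=3)
assert abs(Q_X - 0.05) < 0.01
```

For $M\gg\eta_{tot}^{-1}$ the empirical gain approaches $\eta_{tot}$, reproducing the polynomial (rather than exponential) scaling with the number of users claimed above.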
Photons travel through optical fiber channels whose transmittance is determined by $\sqrt{\eta_{\text{channel}}}=\exp\left(-\frac{l}{l_{\text{att}}} \right)$, where the attenuation distance is $l_{\text{att}}=27.14$ km and $l$ is the distance from any $i$th user to the GHZ analyzer. QND measurements are required to confirm the arrival of photons, and the success probability of a QND measurement is denoted by $p_{\text{QND}}$. To simplify the simulation, we consider a QND measurement for a single photon based on quantum teleportation~\cite{kok2002single} with ideal parameters, for which $p_{\text{QND}}=1/2$. An active feedforward technique is needed to direct the arrived photons to the GHZ analyzer via optical switches. We assume the active feedforward costs a time $\tau_a=67$ ns~\cite{ma2011experimental}, which is equivalent to a lossy channel with transmittance $\eta_a=\exp(-\tau_ac/l_{\text{att}})$, where $c=2.0\times10^8~\text{m}\,\text{s}^{-1}$ is the speed of light in an optical fiber. Single-photon detectors in the GHZ analyzer are characterized by an efficiency of $\eta_d=0.93$ and a dark count rate of $p_d=1\times10^{-9}$~\cite{minder2019experimental}, from which we can estimate the success probability of GHZ projection in the $X$ ($Z$) basis, $Q_{X(Z)}^{\text{GHZ}}$. Based on the aforementioned assumptions on the experimental parameters, we analytically estimate the gain as \begin{equation} Q_{X}=Q^{\text{GHZ}}_{X}\cdot p_{\text{QND}}\cdot \sqrt{\eta_{\text{channel}}}\cdot \eta_{\text{sps}}\cdot \eta_a. \end{equation} See Appendix~\ref{estimation} for the concrete estimation of the marginal bit error rates and the phase error rate. \begin{figure}[tbp!] \includegraphics[width=8.5cm]{qss_vs_bound.pdf} \caption{Key rates of our QSS and direct transmission bounds. We show key rates of our protocol and corresponding bounds for different numbers of communication parties ($n=3, 10$ from top to bottom). 
In the figure, key rates of our protocol and bounds are plotted with solid and dash-dotted lines, respectively. The fiber transmission distance denotes the distance between any $i$th party and the central relay. }\label{qss_vs_bound} \end{figure} Before analyzing the performance of our protocols, we discuss the limitations on quantum communication over a network and provide a benchmark for our protocol. For point-to-point protocols, a fundamental upper limit on the secret key rate over a lossy optical channel not assisted by any quantum repeater is given by $\log_2(\frac{1+\eta}{1-\eta})$, with $\eta$ being the transmissivity between the two users~\cite{takeoka2014fundamental}. A general methodology allowing one to upper-bound the two-way capacities of an arbitrary quantum channel by a computable single-letter quantity was devised in~\cite{pirandola2017fundamental}. In this way, for the lossy channel, it was proven that the two-way quantum capacity and the secret-key capacity are $-\log_2(1-\eta)$, which is the maximum rate achievable by any optical implementation of point-to-point quantum key distribution. For quantum communication over networks, bounds have also been established for different scenarios~\cite{pirandola2019end,pirandola2020general}. Das \mbox{$et$ $al$. } provided a unifying framework to upper-bound the key rates of both bipartite and conference settings in different scenarios including broadcast, multiple access, and interference channels, as well as more general network scenarios~\cite{das2021universal}. Specifically, a multipartite quantum process connecting the communicating users, called a multiplex quantum channel, was proposed; all other network channels can be viewed as special cases of this channel. Using multiplex quantum channels, Das \mbox{$et$ $al$. 
} introduced secret-key-distribution protocols assisted by local quantum operations and classical communication to provide a unifying framework for evaluating the performance of protocols with various communication settings and network scenarios. In our work, to investigate the performance of our protocol, we consider a rate benchmark for the case where the untrusted central node is removed and all $n$ users are linked by a star network similar to that in Ref.~\cite{Grasselli2019conference}. In such a scenario, a selected user performs quantum key distribution with the other users $n-1$ times to establish bipartite secret keys of the same length, owing to the network symmetry. According to the secret-key capacity, the asymptotic rate is $-\log_2(1-\eta)$, with $\sqrt{\eta}$ being the transmittance between any $i$th user and the central node. The selected user can XOR all $n-1$ key strings to conduct secret sharing. The final key length is equal to the length of the bipartite keys obtained using quantum key distribution. Therefore, in this scenario, the key rate is bounded by $\frac{-\log_2(1-\eta)}{n-1}$. We call this bound the direct transmission bound. It should be noted that the above scenario does not necessarily yield the highest key rate in secret sharing. In Fig.~\ref{qss_vs_bound}, we plot the key rates of our QSS as well as the direct transmission bounds for different numbers of communication parties. We present key rates and bounds for $n=3,10$ users from top to bottom using solid and dash-dotted lines, respectively. Our protocol breaks the direct transmission bounds because of the spatial multiplexing and adaptive operations. A polynomial scaling of efficiency with distance can be realized for at least ten users over the network, while the bounds attenuate greatly as $n$ increases. \begin{figure}[tbp!] 
\includegraphics[width=8.5cm]{fig_qss.pdf} \caption{Comparison of key rates of QSS from our work, the original MDI-QSS~\cite{fu2015long}, continuous variable (CV) QSS~\cite{grice2019quantum}, and twin-field (TF) differential phase shifting (DPS) QSS~\cite{Gu2021differential}. We plot the key rates of the protocols for $n=3$. Different colored lines denote different protocols. The fiber transmission distance denotes the distance between any $i$th party and the central relay. }\label{figqss} \end{figure} To further investigate the performance of our work, we evaluate the key rate of our protocol and that of other preceding QSS protocols over a quantum network under the same experimental parameters. In Fig.~\ref{figqss}, we plot the key rates of our QSS protocol, the original MDI-QSS~\cite{fu2015long}, continuous variable (CV) QSS~\cite{grice2019quantum}, and twin-field (TF) differential phase shifting (DPS) QSS~\cite{Gu2021differential} for $n=3$. We can directly conclude from Fig.~\ref{figqss} that our work achieves a longer transmission distance of more than 300 km and increases the secret key rate by at least two orders of magnitude at long distance compared with the other QSS protocols. Though TF DPS QSS achieves a transmission distance and slope similar to ours, the TF DPS QSS protocol only works with three communication users and cannot be easily or directly extended to scenarios with more than three users. The CV QSS protocol can reach no more than 140 km. One can observe that CV QSS outperforms our work at shorter distances because CV protocols adopt the coherent state as the information carrier, which is more robust to channel loss. As a result, the signals can always be detected, which means the gain of the CV protocol is always unity. The CV QSS protocol is asymmetric, in that the dealer measures the Gaussian signals from the users, while our QSS is symmetric in the quantum phase of the protocol.
Therefore, the CV QSS is not as flexible as our QSS to deploy in a quantum network. \subsection{Performance of QSS in finite-size regime} \begin{figure}[tbp!] \includegraphics[width=8.5cm]{qss_rate_dis_finite.pdf} \caption{Secret key rate of our QSS as a function of distance in the finite-size regime. We consider the secret key rate of QSS with $n=4,6,8$ shown in different colors. In this simulation, we fix the total number of signals to be $10^{12}$. The fiber transmission distance denotes the distance between any $i$th party and the central relay. }\label{qss_dis_finite} \end{figure} We investigate the performance of our QSS protocol in the finite-size regime with the same parameters introduced in the asymptotic scenario. We fix $\epsilon_{c}=10^{-15}$, corresponding to a realistic hash tag size in practice~\cite{renner2008security}. In our QSS protocol, for simplicity, we assume the information leakage during error correction to be $\text{leak}_{\text{EC}}=fh(E_X)$, where $f=1.1$, $h(x)$ is the binary Shannon entropy, and $E_X$ is the error rate in the $X$ basis. Then, following Eq.~(\ref{QSSlength}), we can obtain the result in the finite-size regime. In Fig.~\ref{qss_dis_finite}, we plot the secret key rate of our QSS protocol as a function of the distance between any $i$th user and the central relay. We see that our QSS can cover more than 100 km, 60 km, and 30 km for $n=4,6,8$, respectively. These results are meaningful for the practical deployment of an intra- or inter-city quantum network. The slope of the curve is observed to differ for different values of $n$, which stems from the fact that the secret key rate counts the probability of all users choosing the same basis, which scales exponentially with $n$. In the above two subsections, we investigated our protocol under a model consisting of single-photon sources, QND measurements, optical switches, and a GHZ analyzer based on linear optical elements. Our protocol can be improved with other techniques.
For instance, our protocol can be improved by utilizing a complete GHZ analyzer which can identify all $2^n$ GHZ states, such as GHZ-state analysis exploiting nonlinear processes~\cite{qian2005universal,Xia2014complete} or entangled-state analysis for hyperentangled photon pairs~\cite{sheng2010complete,liu2015generation}. On the other hand, in step $(iii)$ of Sec.~\ref{quantum_com}, large-scale optical switches are needed to route the photons into the GHZ analyzer, which may affect the transmittance and cause unwanted loss. Thus, future effort should be made towards realizing the protocol with reduced-scale optical switches; one possible way is utilizing a Hadamard linear optical circuit together with single-mode on/off switches~\cite{azuma2015allqkd}. Techniques in MDI quantum key distribution~\cite{zhou2016making,GU2022experimentalMDI} can also be applied in our QSS to further improve its practicality. \subsection{Key Generation Solution For Quantum Digital Signatures} Digital signatures, as an important cryptographic primitive, promise the authenticity, integrity and non-repudiation of information processing, and have been applied in various areas such as financial transactions, software distribution, and blockchain. The security of classical digital signatures is based on the complexity of mathematical problems, while the quantum counterpart, called quantum digital signatures (QDSs), guarantees security via the laws of quantum physics. Since the first QDS protocol, which was experimentally challenging, progress has been made to improve the practicality of QDS~\cite{yin2016practical,amiri2016secure,Lu2021efficient}. However, the existing protocols suffer from low signature rates and are impractical when signing multi-bit documents. Yin \mbox{$et$ $al$. } proposed a QDS protocol capable of signing long documents with information-theoretic unconditional security~\cite{yin2022experimental}.
The QDS protocol builds perfect bit correlations among three users with an asymmetric key system and realizes an efficient QDS together with a completely random universal$_2$ hash function and one-time pad. Our QSS is capable of generating perfect key correlations between any $n$ users, which naturally fits the framework of such a QDS protocol. Furthermore, our protocol has great potential for large-scale application of such QDS in future quantum networks. Thus, here we investigate the performance of applying our QSS protocol as a subroutine in the key distribution process of~\cite{yin2022experimental}. We start by briefly introducing this QDS protocol. By convention, let Alice be the signer, with Bob and Charlie as the receivers. Before generating and verifying digital signatures, perfect key correlations $X_A=X_B\oplus X_C$ $(Y_A=Y_B\oplus Y_C)$ should be established among Alice, Bob and Charlie, where $X_i$ $(Y_i)$ $(i=A,B,C)$ denotes the secret keys held by each user. QSS can achieve such correlations and thus our QSS protocol provides a natural solution to the key generation process. After obtaining the keys, Alice generates digital signatures of an arbitrary document through the completely random universal$_2$ hash function and one-time pad, and transfers the signed document to Bob. Bob transmits his key bit strings and the signed document to Charlie. Bob and Charlie verify the digital signatures, and if both of them accept the signed document, the signing is successful. For more technical details, we refer the reader to Ref.~\cite{yin2022experimental}. We investigate the performance of the QDS protocol in \cite{yin2022experimental} using our QSS to generate perfect key correlations, and further compare it with the experimental result of the QDS protocol with quantum states exchanged forward in~\cite{richter2021agile}, as shown in Tab.~\ref{tab2}.
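The key correlation required above can be illustrated with a toy sketch (the helper name is ours and the bits here are classical stand-ins; in the actual protocol the correlated keys are produced by the QSS itself): Alice's key bit string equals the bitwise XOR of Bob's and Charlie's shares, so neither receiver alone learns Alice's key.

```python
import secrets

def make_correlated_keys(n_bits):
    """Toy model of the QSS key correlation X_A = X_B XOR X_C.

    Bob's and Charlie's shares are uniformly random; Alice's key is
    defined bitwise so that the correlation holds by construction.
    """
    x_b = [secrets.randbelow(2) for _ in range(n_bits)]
    x_c = [secrets.randbelow(2) for _ in range(n_bits)]
    x_a = [b ^ c for b, c in zip(x_b, x_c)]
    return x_a, x_b, x_c

x_a, x_b, x_c = make_correlated_keys(128)
# The correlation holds at every bit position:
assert all(a == b ^ c for a, b, c in zip(x_a, x_b, x_c))
```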
For the calculation of QDS using our QSS, we assume the order of the irreducible polynomial to be 128, which corresponds to a security bound of about $10^{-34}$~\cite{yin2022experimental}, and set the system clock frequency to 1 MHz. For a direct comparison between the two protocols, in Tab.~\ref{tab2} we calculate and list the signature rate for signing a document of size $10^6$ bits, which indicates the number of such documents signed per second. From the comparison we can conclude that the QDS protocol with keys generated by our QSS outperforms the QDS in~\cite{richter2021agile}, with a better signature rate and a longer distance. Our QSS thus shows great practicality when used in a QDS protocol. \begin{table}[t!] \caption{Performance of the QDS protocol using our QSS and the QDS with quantum states exchanged forward in~\cite{richter2021agile}. The performance of the QDS protocols is evaluated by the signature rate for signing a document of size $10^6$ bits. We assume the system clock frequency to be 1 MHz. NaN means no digital signatures can be generated. The unit of signature rate is times per second (tps).} \begin{tabular}{ccc} \hline \hline &Distance (km)&Signature rate (tps)\\ \hline \multirow{2}{*}{QDS~\cite{yin2022experimental} with our QSS}&20&162\\ &50&93\\ \multirow{2}{*}{QDS in~\cite{richter2021agile}}&20&7.3$\times10^{-6}$\\ &50&NaN\\ \hline \hline \end{tabular} \label{tab2} \end{table} \section{Conclusion and outlook}\label{conclusion} In this work, we propose an MDI-QSS protocol for quantum network applications. Our QSS can break the rate-distance bound with a GHZ analyzer based on linear optical elements, with at least ten network users. By comparing our work with the key rates of recent QSS works, we show the superiority of our work, improving the key rate by more than two orders of magnitude and achieving longer transmission distances. The security of our QSS, taking participant attacks into account, is analyzed in the composably secure framework.
Based on the security analysis, we provide a computable key length in the finite-size regime. Furthermore, we consider applying QSS to another important cryptographic primitive, QDS. The results show that QDS with our MDI-QSS protocol as a subroutine possesses significantly higher efficiency than preceding QDS protocols. Based on the results of this work, we anticipate wide and flexible usage of our work in multiparty applications of secure quantum networks. Here we remark on possible directions for future work. In conventional quantum repeater protocols~\cite{duan2001long,simon2007quantum,fault2006childress,hybrid2006loock}, quantum memories must be entangled with photons and preserve entanglement at least until heralding signals of successful entanglement swapping are received. Here, time multiplexing enabled by the quantum memories' preservation of entanglement enhances the transmission efficiency. On the other hand, an all-photonic quantum repeater protocol~\cite{azuma2015all}, which requires no matter-qubit quantum memories and demonstrates polynomial scaling of efficiency with distance, has been proposed. The all-photonic scheme utilizes cluster states to realize a polynomial scaling with distance, which is in fact a result of spatial multiplexing. Therefore, with such a spatial multiplexing idea, we can develop other protocols beyond quantum communication with enhanced efficiency. On the other hand, secret sharing can be useful in constructing protocols such as Byzantine consensus and federated learning. Our work can be applied to these protocols as a subroutine for improved efficiency and security against eavesdroppers with quantum computers. In addition, our work can be further developed to provide anonymity to users~\cite{grasselli2022anonymous} over a quantum network for more complex application scenarios. \section*{Acknowledgments} We gratefully acknowledge support from the National Natural Science Foundation of China (No.
12274223), the Natural Science Foundation of Jiangsu Province (No. BK20211145), the Fundamental Research Funds for the Central Universities (No. 020414380182), the Key Research and Development Program of Nanjing Jiangbei New Area (No. ZDYD20210101), the Program for Innovative Talents and Entrepreneurs in Jiangsu (No. JSSCRC2021484), and the Program of Song Shan Laboratory (Included in the management of Major Science and Technology Program of Henan Province) (No. 221100210800).
\section{Introduction} Research on deep learning has brought forth a number of remarkable applications in the domains of computer vision, natural language processing, self-learning agents and continuous control, covering the fields of artificial vision, speech and motion. The main research focus in the past was placed on increasing the size, performance and speed of deep neural networks solving specific benchmark tasks, or on refining the training algorithms to tackle increasingly complex problems. In some cases, this has led to unexpected and/or unwanted behavioral artifacts of trained networks \cite{yudkowsky2008artificial, ribeiro2016should, winkler2019association}. Recently, efforts to explain the occurrence of such artifacts were made with the development of methods to investigate the organization and structure of the learned representations in these increasingly complex networks. Such methods form a fundamental basis in the field of neuroscience, where large and complex neural systems have been the objects of investigation for decades, aiming to make such systems transparent and explainable. In this paper, we follow a neuroscience-inspired approach to investigate the learned representations of a neural network based on its activity in response to specific inputs, an approach that has been used for over half a century to understand functions of the mammalian brain \cite{hubel1959receptive}, and on its robustness against partial network ablations. Specifically, we trained three instances of a network on three different image classification datasets and observed how the learned representations evolve along the hierarchically structured layers of the network and how these representations are affected by partial ablations of the network. We further investigate the networks for clusters of functional neuron populations consisting of jointly operating neurons.
We found that the representations evolve to become more distinct, effectively improving the separation of the classes in the activation-space of the network towards the output layer. We further found that ablations impact the separation in the activation-space, leading to an overlap of classes and thus false classifications of inputs. We further found distinct activity patterns in the network's activation-space for the different classes. Specifically, similar to findings in the mammalian brain \cite{nakamura1998somatosensory, kaschube2008self, da2011human}, a distinct set of units shows a high activity in response to specific inputs and a low activity for other inputs. Furthermore, visualizing single unit activity according to their selectivity for specific inputs revealed a strong variation in the number of units that are most selective for the different classes. Additionally, comparing the effects of ablations on the class-specific classification accuracy of the network revealed that the importance of a single unit for the classification task cannot solely be attributed to the magnitude or the selectivity of its activity. \section{Related Work} Recent efforts have addressed the increased demand for transparency of AI-driven applications in domains such as production technology, medical diagnostics or autonomous driving, where mistakes can have potentially devastating consequences from an economic, ethical and legal point of view, and have spurred research in the field of explainable and interpretable AI \cite{lipton2016mythos, goodman2017european, su2019one}. A main focus was placed on investigating dependencies of a network's output on its input.
The most prominent methods for explaining the influences of input variables on the prediction of a network include gradient based class activation mapping (Grad-CAM) \cite{selvaraju2017grad}, layer-wise relevance propagation (LRP) \cite{binder2016layer}, deep Taylor decomposition \cite{montavon2016deep} and class-enhanced attentive response (CLEAR) \cite{kumar2017explaining}, which allow one to determine which specific input features contribute to the decision of a neural network. These dependencies facilitated a number of perturbation studies, in which input images were manipulated systematically, showing that only marginal modifications and even single pixel alterations can drastically change the prediction of the network \cite{papernot2017practical, fong2017interpretable, faust2018visualizing, su2019one}. These studies, however, merely focus on the processed data and the result of a neural network and disregard its inner processes and learned representations when explaining its decision making process. Aiming to look inside neural networks, a number of tools have been developed to visualize network activity in response to specific inputs \cite{harley2015interactive} or to varying hyperparameters \cite{smilkov2017direct}, as well as to compare models with each other \cite{zhang2018manifold}. These tools visualize learned filters and feature maps of CNNs \cite{chung2016revacnn}, allowing one to investigate them for their similarities and representative power for specific classes \cite{liu2016towards} or specific input features \cite{olah2017feature, olah2018building}. Furthermore, single units have been identified that only contribute negligibly to a task and can be pruned \cite{molchanov2016pruning}, that align with semantic concepts in images \cite{bau2017network}, or that represent linguistic properties or sentiment in texts \cite{bau2018identifying, radford2017learning}.
Recently, Activation Atlases of whole large scale networks extended the mere visualization of features by giving spatial meaning to them via two-dimensional embeddings of the networks' activations \cite{carter2019activation}, confirming previous findings on the location of global features and class specific features within the network \cite{zeiler2014visualizing}. Embedding methods such as t-SNE \cite{maaten2008visualizing} or UMAP \cite{mcinnes2018umap} have been widely used to visualize the high dimensional activation-space of neural networks to identify the role of network areas in solving a given task \cite{liu2016towards, rauber2016visualizing, elloumi2018analyzing, dibia2019convnet}. As an alternative to investigating network activations, network ablations have been used to study the effect of single units on a network's performance \cite{dalvi2019neurox}, helping to decide which units can be pruned with minimal effect on a network's discriminative power \cite{li2016pruning, cheney2017robustness}. Targeted ablations in GANs trained to generate photorealistic images were used to delete specific objects such as chairs or windows from the generated images \cite{bau2019visualizing}. Regarding what makes a single unit important for solving a task, it has been shown that ablating units with large weights has a stronger impact on network performance than ablating units with small weights \cite{dalvi2019one}. Complementarily, it has been shown that the importance of a unit is not only determined by the magnitude of its weights, but rather by the extent to which the distribution of its incoming weights changes during training \cite{meyes2019ablation, meyesablation}. Additionally taking a unit's activation into account, it has been shown that units with a high class selectivity, which are easily interpretable, are not necessarily more important for the overall task than units with a low class selectivity and a less accessible interpretability \cite{morcos2018importance}.
Recently, controversial insights on how to evaluate the similarity of learned network representations have been reported, demonstrating the early stage of current knowledge and thus the importance of and need for more research on the topic \cite{morcos2018insights, kornblith2019similarity}. We complement the related work with a combined approach of investigating embeddings of network activations of healthy and ablated networks, revealing functional neuron populations with distinguishable significance for the learned representations. \section{Methods} \subsection{Network Training and Ablations} \label{ssec:ann_setup_and_abl} We trained a custom network that consists of three convolutional layers, \textit{"conv1"}, \textit{"conv2"} and \textit{"conv3"}, two fully connected layers, \textit{"fc1"} and \textit{"fc2"}, and the output layer \textit{"out"}. All convolutional layers feature $64$ 2-D kernels of size $5\times5$ with a stride of $1$ and zero-padding of $2$ and are followed by max-pooling layers with $2\times2$ kernels and stride $2$. The fully connected layers comprise $512$ neurons each, while the output layer comprises $10$ neurons. ReLU activation is chosen for all layers except the output layer, which uses log-softmax activation. Separate instances of the network were trained on the normalized $(\mathcal{N}(\mu=0.5,\,\sigma=0.5))$ MNIST \cite{lecun-mnisthandwrittendigit-2010}, Kuzushiji-MNIST \cite{clanuwat2018deep} and Fashion-MNIST \cite{xiao2017fashion} datasets for $100$ epochs with a learning rate of $0.001$ and momentum of $0.9$, optimizing the cross-entropy loss with stochastic gradient descent for the ten target classes. The $60,000$ training images per dataset were processed with a batch size of $64$. Testing was conducted using $9984$ out of $10,000$ test images due to a test batch-size of $32$. Henceforth, the three networks will be referred to as \textit{"M-Net"}, \textit{"K-Net"} and \textit{"F-Net"}.
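As a sketch of the resulting feature-map sizes (using the standard convolution/pooling output-size formula; the function names are ours, and the $28\times28$ input size is an assumption matching the MNIST-style datasets):

```python
def conv_out(n, kernel, stride, padding):
    """Spatial output size of a convolution on an n x n input."""
    return (n + 2 * padding - kernel) // stride + 1

def pool_out(n, kernel, stride):
    """Spatial output size of a (non-padded) max-pooling layer."""
    return (n - kernel) // stride + 1

n = 28  # MNIST-style input size (assumption)
for _ in range(3):  # conv1..conv3, each followed by max-pooling
    n = conv_out(n, kernel=5, stride=1, padding=2)  # 5x5, pad 2 preserves size
    n = pool_out(n, kernel=2, stride=2)             # 2x2, stride 2 halves size
print(n)  # prints 3: spatial size of the feature maps entering fc1
```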
All networks were implemented and trained with PyTorch v1.3 \cite{paszke2019pytorch} and scored top-1 accuracies of $99.0\%$, $95.3\%$ and $91.2\%$ on the MNIST, KMNIST and Fashion-MNIST datasets, respectively. All computations were performed on a single consumer-grade machine containing an 8-core Ryzen 7 1800x processor and a single NVIDIA GTX 1080Ti GPU. Ablations of single neurons in the fully connected layers were performed by manually setting their incoming weights and biases to zero, effectively preventing any flow of information through those neurons. Correspondingly, ablations in convolutional layers were performed by setting the weights and biases of all neurons of a kernel to zero, consequently ablating $5\times5$ neurons at once. For reasons of simplicity, ablated single units in the fully connected layers as well as ablated kernels are referred to as units throughout the remainder of this paper. \subsection{Embedding of Network Activations} \label{ssec:inv_actspace} \label{sec:embedding_of_net_act} Activations of each unit of the three networks in response to each image in the three test sets were stored in a matrix. Considering the number of test images ($9,984$) and the number of neurons in each network ($17,280$), the resulting activation matrices are $M_{X-Net} \in \mathbb{R}^{9984\times17280}$, where $X \in \{M, K, F\}$. The activation matrices were embedded using UMAP in two ways. Either dimension of the matrix was reduced, so that a point either represents the activation of the whole network or a single network layer in response to a single test image (horizontal reduction, $M \in \mathbb{R}^{9984\times2}$), or it represents the activation of a single unit in response to the whole test set (vertical reduction, $M \in \mathbb{R}^{2\times17280}$).
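The ablation procedure for fully connected units can be sketched as follows (a minimal pure-Python stand-in for a dense layer; the helper names are ours, and the actual implementation would zero the corresponding rows of the PyTorch weight tensors in place):

```python
import random

def dense_relu(x, weights, biases):
    """Forward pass of a fully connected layer with ReLU activation.

    weights[j] holds the incoming weights of unit j; biases[j] its bias.
    """
    return [max(0.0, sum(w * xi for w, xi in zip(weights[j], x)) + biases[j])
            for j in range(len(biases))]

def ablate_unit(weights, biases, j):
    """Ablate unit j by zeroing its incoming weights and its bias."""
    weights[j] = [0.0] * len(weights[j])
    biases[j] = 0.0

random.seed(0)
n_in, n_out = 8, 4
weights = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
biases = [random.uniform(-1, 1) for _ in range(n_out)]

ablate_unit(weights, biases, j=2)
x = [random.uniform(0, 1) for _ in range(n_in)]
y = dense_relu(x, weights, biases)
assert y[2] == 0.0  # the ablated unit is silent for any input
```

With the incoming weights and bias zeroed, the unit's pre-activation is always zero, so no information can flow through it to later layers.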
We used an open source Python implementation of UMAP \cite{mcinnes2018umap} with default parameters, after an initial attempt at finding better values for the number of nearest neighbours or the minimum distance between data points yielded no significant visual improvement of the embeddings. In order to make the activation embeddings after horizontal reduction of different network layers comparable to each other, embeddings were initialized by applying UMAP directly to the test set with loosened constraints ($min\_dist = 0.8$), so that the initial coordinates of the data points for each embedded layer activation were the same. Since linear shifts, scales and rotations are not accounted for by UMAP, we used Scipy's Procrustes transformation \cite{Scipy2019arXiv} to linearly scale, shift, reflect and rotate the embeddings with respect to the projection of the previous layer, which further improved the comparability between activation embeddings. We used the neighborhood hit (NH) in the activation embeddings as a quantitative measure of class separation. The NH-score is the average percentage of a point's $k$ nearest neighbors that belong to the same class as the point itself. We empirically determined $k = 6$ to yield reasonable results consistent with our visual inspections. Aiming to investigate network activity for functional neuron populations, i.e. clusters of neurons with similar activations in response to the test images, we assigned different colors to each unit in the vertically reduced embeddings. Figure \ref{fig:Neuron populations overview} shows the three neuron populations of \textit{M-Net}, \textit{K-Net} and \textit{F-Net}, with each unit being colored according to its layer affiliation. \begin{figure}[tb!] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=0.48\textwidth]{figures/neuron_populations/all_umaps.PNG}} \caption{Neuron populations obtained from the vertical reduction of the activation-space.
a) \textit{F-Net}, b) \textit{K-Net} and c) \textit{M-Net}.} \label{fig:Neuron populations overview} \end{center} \vskip -0.2in \end{figure} We alternatively colored the neurons according to functional metrics characterizing a neuron's magnitude of activation or its class selectivity (not shown in Figure \ref{fig:Neuron populations overview}). As a measure of how selectively a neuron activates for a specific class, the activational selectivity (AS), defined as \begin{equation*} AS = \frac{\mu_{max} - \mu_{av. else}}{\mu_{max} + \mu_{av. else}} \in [0, 1] \end{equation*} was calculated for each neuron, where \(\mu_{max}\) is the highest class-specific mean activity and \(\mu_{av. else}\) is the mean activity across all other classes \cite{morcos2018importance}. Higher values of AS denote a stronger tendency of a unit to only activate for a single class. In cases in which the denominator was $0$, we manually set the AS to $0$. As a measure characterizing a unit's importance for representing a class, we colored the neurons based on the change of network accuracy as a result of the ablation of that neuron. Similarly to the AS, the ablation effect selectivity (AES) is defined as \begin{equation*} AES = \frac {\Delta_{max} - \Delta_{av. else}} {\Delta_{max} + \Delta_{av. else}} \end{equation*} where \(\Delta_{max}\) is the highest class-specific change in accuracy and \(\Delta_{av. else}\) is the average change in accuracy across the other classes. Since the AES can be positive or negative, we separated the scale into two parts and re-scaled both according to their maximum positive or negative values so that they take values between $0$ and $1$. Analogously to the AS, in cases in which the denominator was $0$, the AES was set to $0$.
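A minimal sketch of the AS computation (pure Python; the function name is ours), following the definition above together with the stated zero-denominator convention:

```python
def activational_selectivity(class_means):
    """AS = (mu_max - mu_av_else) / (mu_max + mu_av_else).

    class_means: per-class mean activity of one unit over the test set.
    Returns 0 when the denominator vanishes, per the paper's convention.
    """
    mu_max = max(class_means)
    max_idx = class_means.index(mu_max)
    others = [m for i, m in enumerate(class_means) if i != max_idx]
    mu_av_else = sum(others) / len(others)
    denom = mu_max + mu_av_else
    return 0.0 if denom == 0 else (mu_max - mu_av_else) / denom

# A unit that fires strongly for class 0 only:
print(activational_selectivity([0.9, 0.1, 0.2]))  # (0.9-0.15)/(0.9+0.15) = 0.714...
```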
\section{Results} \subsection{Network Ablations} We performed network ablations in different layers to determine whether the representation of the different classes is equally distributed across the network or whether it shows a preference for some layers over others. Initially performing ablations of $10\%$, $20\%$, $30\%$, $40\%$ and $50\%$ of units within a single layer, we found that all three networks are fairly robust against ablations, showing only marginal changes in accuracy for the smaller ablation fractions. Thus, for the remainder of this paper, we report results on ablations of $50\%$ of units within a layer. For the ablations, we performed a random selection without replacement of the units to be ablated $100$ times and calculated the average change in accuracy for the specific classes. Figure \ref{fig:stacked_barplot_Mnet} shows the effects of ablations in the \textit{M-Net}. \begin{figure}[b!] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=0.48\textwidth]{figures/accuracies/mnet_barplot.png}} \caption{Stacked bar plot of mean accuracy changes as a result of network ablations in \textit{M-Net}.} \label{fig:stacked_barplot_Mnet} \end{center} \vskip -0.2in \end{figure} Some classes are more severely affected by the ablations than others, suggesting that the amount of capacity used to represent the classes differs greatly. For instance, class 7 shows the largest change in accuracy while class 3 shows the smallest, indicating that the network used much more of its capacity to represent class 7 than class 3. This implies that the representation of class 7 is more complex than the representation of class 3. However, this is not directly explainable based on the mere separability of the classes. More precisely, although class 3 and class 7 are equally well separated in the embedded data space (cf. Figure A3 in the Appendix), the amount of network capacity required for their representations differs greatly.
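The repeated random selection described above can be sketched as follows (pure Python; the function name and the seed are ours, and the unit count of $512$ matches the fully connected layers):

```python
import random

def sample_ablation_sets(n_units, fraction=0.5, n_trials=100, seed=0):
    """Draw n_trials random subsets of units to ablate in one layer.

    Within each trial, units are sampled without replacement; the
    per-class accuracy change would then be averaged over the trials.
    """
    rng = random.Random(seed)
    k = int(n_units * fraction)
    return [rng.sample(range(n_units), k) for _ in range(n_trials)]

trials = sample_ablation_sets(n_units=512, fraction=0.5, n_trials=100)
assert len(trials) == 100
assert all(len(set(t)) == 256 for t in trials)  # no unit repeated within a trial
```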
Another explanation may be that class 3 is represented more redundantly than class 7, so that ablations have a less severe effect. Figure \ref{fig:stacked_barplot_Mnet} further shows that the representations of single classes are distributed across the layers. For instance, the distributions of classes 2 and 7 show the strongest localization in \textit{conv3} compared to the other layers, which is consistent with the notion that the last convolutional layer functions as the feature extraction layer, representing the most distinct features of the data. However, classes 1, 8 and 9, for example, show a more equal distribution across the three convolutional layers. We tested whether the difficulty of predicting a class correlates with the amount of capacity that is reserved by the network to represent that specific class. To this end, we calculated the Spearman rank-order correlation between the original prediction error and the change in accuracy for each class. A correlation coefficient of $r=0.6$ with a p-value of $p=0.07$ for \textit{K-Net}, and $r=0.48$, $p=0.16$ for \textit{F-Net}, suggests that classes which are more difficult to predict are also more sensitive to ablations. The corresponding bar plots can be found in Figures A1 and A2 in the Appendix. Note, however, that the number of samples ($10$) is small, limiting the descriptive statistical power of the test. In the case of \textit{M-Net} (cf. Figure \ref{fig:stacked_barplot_Mnet}), no significant correlation was found ($r=0.26$, $p=0.47$). \subsection{Evolvement of Representations} We investigated how the learned representations evolve along the network layers and how they are affected by ablations. Figure \ref{fig:dev_knet_no_abl} shows the representations in the different network layers in the horizontally reduced activation space of the intact \textit{K-Net}. \begin{figure}[htb!]
\vskip 0.2in \begin{center} \centerline{\includegraphics[width=0.48\textwidth]{figures/data_representations/dev_knet_no_abl_plus.PNG}} \caption{Evolvement of the learned representations along the layers of \textit{K-Net}. Data points are colored according to their target class. The top middle panel shows the initialization used for all layer embeddings.} \label{fig:dev_knet_no_abl} \end{center} \vskip -0.2in \end{figure} We found that the separability of classes becomes clearer further down the network, as indicated by the increasing NH-score, which is consistent with previous findings \cite{rauber2016visualizing}. In the representation in \textit{conv2}, the NH-score is higher than in the embedding of the original dataset, despite the higher dimensionality of the activation space compared to the original feature space, suggesting that at least two convolutional layers are necessary to extract meaningful features to separate the classes. In general, class clusters become denser and more distinct from \textit{conv1} to \textit{fc1} and are bundled together from \textit{fc1} to \textit{out}. For example, comparing the representations in \textit{fc1} and \textit{fc2}, class 3 (red) and class 9 (cyan) are mapped closer together. There are exceptions to this trend, however, e.g. class 5 (brown) and class 8 (yellow), which remain split up even after the soft-max activation in \textit{out}. This shows that the representation in \textit{out} is still able to represent more distinct classes than the number of labeled classes in the datasets. Subsequently, we investigated how the learned representations change after network ablations. We hypothesized that ablations would locally distort the activation-space, so that particularly heavily affected classes would be represented differently. Figure \ref{fig:dev_knet_abl_full} shows the layer representations of the ablated \textit{K-Net}. \begin{figure*}[htb!]
\vskip 0.2in \begin{center} \centerline{\includegraphics[width=0.98\textwidth]{figures/data_representations/dev_knet_abl_full_cr.PNG}} \caption{Effects of ablations in \textit{conv1} on the evolvement of the learned representations in subsequent layers in \textit{K-Net}. Black points represent the misclassified images as a result of the ablations.} \label{fig:dev_knet_abl_full} \end{center} \vskip -0.2in \end{figure*} We performed ablations in \textit{conv1} and observed the strongest accuracy change of $30.9\%$p for class 1 and the weakest of $0.4\%$p for class 2. The black dots represent all the images that are misclassified as a result of the network ablations. Ablations had no particular effect on the separability of classes in the layer representations, as indicated by the NH-scores, except in the output layer, where the misclassified images are no longer distributed into separate clusters. Most of the black points correspond to misclassified images of class 1, which showed the highest drop in accuracy of $30.9\%$p. This suggests that the ablations caused a distortion in the representations of class 1, which amplified along the network layers and led to a less distinguishable representation of class 1 in the output layer. Interestingly, the representation of class 2 was not unified into a single cluster, as was the case for the healthy \textit{K-Net}. Considering the high intra-class variance of the KMNIST dataset, this implies that ablations deprived the network of representing some kind of similarity that would aggregate the different characters of class 2. Yet, the network did not lose much of its prediction performance for this class ($0.4\%$p). Figures A3-A6 in the Appendix show similar effects for \textit{M-Net} and \textit{F-Net}. \subsection{Functional Neuron Populations} Inspired by research in the field of neuroscience, we aimed to identify functional neuron populations in our networks, i.e.
groups of neurons that a) show covariant behavior in response to input stimuli or b) affect the network accuracy in a similar way. Figure \ref{fig:neuron_pop_collection} shows the neuron population of \textit{F-Net} with different color-codes. \begin{figure*}[htb!] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=0.98\textwidth]{figures/neuron_populations/fnet_collection.PNG}} \caption{Neuron population of \textit{F-Net} with different color-codes. Neurons are colored according to layer affiliation and magnitude of activation (top row), according to layer affiliation and selectivity of their activation (middle row), and according to their impact on network accuracy upon ablation (bottom row). Left column: neuron activation was measured in response to a single example of class 1 (trouser), while the impact of ablations was calculated for all images of class 1 in the test set. Right column: same as left column, but for class 2.} \label{fig:neuron_pop_collection} \end{center} \vskip -0.2in \end{figure*} The neurons in Figure \ref{fig:neuron_pop_collection}a) and b) are colored according to their layer affiliations and their magnitude of activation in response to a single example image of class 1 and 2, respectively, where a strong/weak saturation corresponds to a strong/weak activation. The activations are normalized between $0$ and $1$ for each layer separately, due to large numerical differences of the absolute values across layers. Comparing both activation patterns with each other reveals that there are different clusters of neurons that jointly activate in response to the different stimuli, indicating that the different classes are represented by different sets of neurons. \begin{figure*}[htb!]
\vskip 0.2in \begin{center} \centerline{\includegraphics[width=0.98\textwidth]{figures/neuron_populations/knet_collection.PNG}} \caption{Neuron population of \textit{K-Net} with coloring analogous to Figure \ref{fig:neuron_pop_collection}.} \label{fig:neuron_pop_collection_2} \end{center} \vskip -0.2in \end{figure*} The neurons in Figure \ref{fig:neuron_pop_collection}c) and d) are colored according to their AS. Colored neurons are most selective for class 1 and class 2, respectively, while grey neurons are most selective for other classes. A strong/weak color saturation corresponds to a high/low value of selectivity. Consistent with Figure \ref{fig:neuron_pop_collection}a) and b), the selectivity patterns reveal different clusters of most selective neurons for the different classes. Interestingly, these clusters differ from the clusters in Figure \ref{fig:neuron_pop_collection}a) and b), suggesting that the neurons that are most selective for a specific class are not necessarily the most active in response to stimuli from that class. For both metrics, distinct clusters appear for all classes, but some regions are shared across classes. This implies that \textit{F-Net} represents both class-specific and cross-class features in early layers. This is somewhat surprising, as early layers in CNNs are typically thought to represent general features shared by multiple classes \cite{Gatys2015ANA}. Furthermore, the number of units that are most selective for a specific class varies greatly across the different classes. Specifically, more than twice as many neurons are most selective for class 1 as for class 2. The neurons in Figure \ref{fig:neuron_pop_collection}e) and f) are colored according to their AES. Neurons that negatively/positively impact the network's accuracy upon ablation are red/blue, while the saturation corresponds to the severity of the impact. A strong/weak saturation corresponds to a high/low impact, which is scaled between $0$ and $1$ (cf. 
section \ref{sec:embedding_of_net_act}). Note that for ablated units in the convolutional layers, the 25 single neurons corresponding to the ablated kernel share the same AES value. Consistent with the previous results, the number of neurons that are most selective in their impact on network accuracy differs greatly across the different classes. Specifically, almost three times as many neurons most selectively impact the accuracy of class 1 as that of class 2. However, while all of the units selective for class 1 have a negative impact, a few units show a positive impact on the network's accuracy for class 2 upon ablation. This finding is consistent with previously reported results \cite{meyesablation} and suggests that ablations may be used to fine-tune network structure and improve network performance beyond its initial training accuracy. Interestingly, similar classes, i.e., classes that are close to each other in the UMAP embedding of the data, did not necessarily evoke similar patterns of activation, selectivity, or ablation impact. We found that classes seem to arbitrarily share strongly activated areas or show exclusive patterns (cf. Figure \ref{fig:neuron_pop_collection_2} a) and b)). Figures S7 and S8 show similar findings for the neuron populations of \textit{M-Net} and \textit{K-Net}, even though for the latter, samples of the same class differ largely from each other. We assessed whether classes with a high number of selective neurons are easier to separate than classes with a low number of selective neurons by calculating the Pearson correlation between the number of selective neurons per class and their NH-score in the UMAP embeddings of the test dataset. A correlation coefficient of $r=0.61$ with a p-value of $p=0.06$ suggests a positive correlation, implying that classes that are well separable due to a higher number of class-specific features are, as expected, represented by a larger number of selective neurons in the network. 
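The correlation test described above can be sketched in a few lines. Note that the neuron counts and NH-scores below are purely illustrative placeholders, not the values measured for \textit{F-Net}:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(np.sum(xc * yc) / np.sqrt(np.sum(xc**2) * np.sum(yc**2)))

# Illustrative placeholders: number of most-selective neurons per class and
# per-class NH-scores (10 classes); NOT the values measured in the paper.
n_selective = [12, 30, 18, 25, 9, 40, 22, 15, 28, 35]
nh_scores = [0.30, 0.70, 0.40, 0.60, 0.20, 0.90, 0.50, 0.35, 0.65, 0.80]

r = pearson_r(n_selective, nh_scores)  # positive r: more selective neurons
                                       # go with better class separability
```

In practice one would use \texttt{scipy.stats.pearsonr}, which also returns the p-value reported above; the hand-rolled version is shown only to make the computation explicit.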
This positive correlation was only found for \textit{F-Net}, but not for \textit{M-Net} ($r=0.28$, $p=0.44$) or \textit{K-Net} ($r=0.30$, $p=0.40$). Additionally, \textit{F-Net} showed a significant positive correlation between the number of units with the most class-specific impact on network accuracy upon ablation and their NH-score ($r=0.68$, $p=0.03$). Again, no such significance was found for \textit{M-Net} ($r=-0.31$, $p=0.38$) or \textit{K-Net} ($r=0.50$, $p=0.14$). Note that in all cases the sample size of $10$ is small, limiting the statistical power of these tests. \section{Conclusion and Outlook} In this paper, we have taken an empirical approach to characterizing the learned representations in neural networks, aiming to identify key structural elements and describe their role for the task of the network. We found that class-specific representations are not evenly distributed across the network but localized in specific layers or groups of neurons, and that these distributions vary greatly across classes. This implies that the extent of the localization of knowledge depends on class-specific properties and raises two questions. 1) How is the localization of a class-specific representation affected by these class-specific properties? 2) How does the robustness of these class-specific representations against network ablations depend on these properties? We further found that the learned representations evolve along the layers of the network to become more distinct, facilitating better class separability in the network's activation space. Network ablations in earlier layers only marginally affect the separability in subsequent layers but show a strong effect in the output layer. This suggests that ablations do not selectively affect parts of the network but rather the whole network in a holistic manner, as the relative positions of single units in the activation space are mostly preserved. 
However, the distortion of the representation in the output layer implies that strongly class-distinguishing features are still represented, but more subtle features are not. This may be due to a redundant representation of such strongly class-distinguishing features, making their representation more robust against ablations than that of other features. Further work will be necessary to determine how such robustness can be characterized and achieved purposefully. The finding of functional neuron populations revealed that the size of such populations differs greatly depending on their role in representing a specific class, raising the question of how the required capacity of a network depends on the properties of that class. Furthermore, the lack of similarity between the functional neuron populations colored according to different metrics suggests that there is no single metric that sufficiently describes the role of single units within the whole network. In this context, we only investigated the effect of single-unit ablations on network performance; however, this does not allow us to determine whether the ablated unit is important by itself or whether it is part of an important path through the network that has been altered by the ablation. We plan to address this issue in future work, aiming to identify such important paths through the network. In conclusion, we argue that answering the question of how knowledge is represented in artificial networks is beneficial in two ways. First, a deeper understanding of how knowledge is represented and where it is localized in neural networks would facilitate the transfer of such knowledge from one system to another. Such insights would encourage new methods of transfer learning beyond the mere reuse of networks as feature extractors, towards a modular recombination of important network paths and structures. 
Second, it addresses the issue of reproducibility in neuroscience, which, despite modern experimental methods, remains one of the field's most critical issues, stemming from the large differences between brains and the commonly small sample sizes in neuroscientific studies. Uncovering parallels between the structure and organization of represented knowledge in artificial and biological systems would provide measures and methods for initial large-scale studies of artificial systems before transferring them to biological systems. \small \bibliographystyle{apacite}